id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
54,093,847 | https://en.wikipedia.org/wiki/Delsarte%E2%80%93Goethals%20code | The Delsarte–Goethals code is a type of error-correcting code.
History
The concept was introduced by the mathematicians Ph. Delsarte and J.-M. Goethals.
A new proof of the properties of the Delsarte–Goethals code was published in 1970.
Function
The Delsarte–Goethals code DG(m,r) for even m ≥ 4 and 0 ≤ r ≤ m/2 − 1 is a binary, non-linear code of length 2^m, size 2^(r(m−1) + 2m) and minimum distance 2^(m−1) − 2^(m/2−1+r).
The code sits between the Kerdock code and the second-order Reed–Muller codes. More precisely, we have

K(m) ⊆ DG(m,r) ⊆ RM(2,m)
When r = 0, we have DG(m,r) = K(m) and when r = m/2 − 1 we have DG(m,r) = RM(2,m).
For r = m/2 − 1 the Delsarte–Goethals code has strength 7 and is therefore an orthogonal array.
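The parameter formulas can be checked numerically. The sketch below assumes the standard parameters of DG(m, r) (length 2^m, size 2^(r(m−1)+2m), minimum distance 2^(m−1) − 2^(m/2−1+r)) and verifies the endpoint identities with the Kerdock and second-order Reed–Muller codes:

```python
# Sketch: parameters of the Delsarte–Goethals code DG(m, r), assuming the
# standard formulas n = 2^m, |DG| = 2^(r(m-1)+2m), d = 2^(m-1) - 2^(m/2-1+r).

def dg_parameters(m: int, r: int):
    """Return (length, size, minimum distance) of DG(m, r)."""
    assert m >= 4 and m % 2 == 0 and 0 <= r <= m // 2 - 1
    n = 2 ** m
    size = 2 ** (r * (m - 1) + 2 * m)
    d = 2 ** (m - 1) - 2 ** (m // 2 - 1 + r)
    return n, size, d

# Endpoint checks: r = 0 recovers the Kerdock code K(m) (for m = 4 this is the
# Nordstrom-Robinson code: length 16, 256 codewords, distance 6), and
# r = m/2 - 1 recovers RM(2, m), whose size is 2^(1 + m + m(m-1)/2).
def rm2_size(m: int) -> int:
    return 2 ** (1 + m + m * (m - 1) // 2)

print(dg_parameters(4, 0))                  # (16, 256, 6)
print(dg_parameters(4, 1)[1] == rm2_size(4))  # True
```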
References
Coding theory
Error detection and correction | Delsarte–Goethals code | [
"Mathematics",
"Engineering"
] | 221 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Error detection and correction"
] |
54,096,927 | https://en.wikipedia.org/wiki/SuperKEKB | SuperKEKB is a particle collider located at KEK (High Energy Accelerator Research Organisation) in Tsukuba, Ibaraki Prefecture, Japan. SuperKEKB collides electrons with positrons at a centre-of-momentum energy close to the mass of the Υ(4S) resonance, making it a second-generation B-factory for the Belle II experiment. The accelerator is an upgrade to the KEKB accelerator, providing approximately 40 times higher luminosity, due mostly to superconducting quadrupole focusing magnets. The accelerator achieved "first turns" (first circulation of electron and positron beams) in February 2016. First collisions occurred on 26 April 2018. At 20:34 on 15 June 2020, SuperKEKB achieved the world’s highest instantaneous luminosity for a colliding-beam accelerator, setting a record of 2.22×10³⁴ cm⁻²s⁻¹.
Description
The SuperKEKB design reuses many components from KEKB. Under normal operation, SuperKEKB collides electrons at 7 GeV with positrons at 4 GeV (compared to KEKB at 8 GeV and 3.5 GeV respectively). The centre-of-momentum energy of the collisions is therefore at the mass of the Υ(4S) resonance (10.58 GeV/c²). The accelerator will also perform short runs at energies of other Υ resonances, in order to obtain samples of other B mesons and baryons. The asymmetry in the beam energy provides a relativistic Lorentz boost to the B meson particles produced in the collision. The direction of the higher-energy beam determines the 'forward' direction, and that affects the design of much of the Belle II detector.
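The quoted centre-of-momentum energy and boost follow from elementary relativistic kinematics. A short sketch, neglecting the electron mass (an excellent approximation at GeV beam energies) and assuming head-on collisions:

```python
import math

# Sketch: centre-of-momentum energy and Lorentz boost for asymmetric e+e-
# collisions, neglecting the electron mass and assuming head-on kinematics.

def cm_energy(e1: float, e2: float) -> float:
    """Invariant mass (GeV) of a head-on collision with beam energies e1, e2."""
    return 2.0 * math.sqrt(e1 * e2)

def boost_beta_gamma(e1: float, e2: float) -> float:
    """beta*gamma of the centre-of-momentum frame as seen in the lab."""
    return (e1 - e2) / cm_energy(e1, e2)

print(round(cm_energy(7.0, 4.0), 2))         # 10.58 GeV, the Upsilon(4S) mass
print(round(boost_beta_gamma(7.0, 4.0), 3))  # 0.283
```

The 7 GeV on 4 GeV choice thus lands exactly on the Υ(4S) mass while still giving the B mesons a measurable boost.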
As with KEKB, SuperKEKB consists of two storage rings: one for the high-energy electron beam (the High Energy Ring, HER) and one for the lower energy positron beam (the Low Energy Ring, LER). The accelerator has a circumference of 3016 m with four straight sections and experimental halls in the centre of each, named "Tsukuba", "Oho", "Fuji", and "Nikko". The Belle II experiment is located at the single interaction point in Tsukuba Hall.
Luminosity
The target luminosity for SuperKEKB is 6.5×10³⁵ cm⁻²s⁻¹, about 30 times the luminosity achieved at KEKB. The improvement is mostly due to a so-called 'nano-beam' scheme, originally proposed for the cancelled SuperB experiment. In the nano-beam scheme at SuperKEKB, the beams are squeezed in the vertical direction and the crossing angle is increased, which reduces the area of the crossing region. The luminosity is further increased by a factor of two, due to a higher beam current than KEKB. The focus and crossing angle are achieved by two new superconducting quadrupole magnets at the interaction point, which were installed in February 2017.
On June 15, 2020, SuperKEKB set a new world record for the highest instantaneous luminosity for a colliding-beam accelerator: 2.22×10³⁴ cm⁻²s⁻¹. (On June 21, 2020, SuperKEKB broke its own record and achieved an instantaneous luminosity of 2.40×10³⁴ cm⁻²s⁻¹.) The previous world record of 2.14×10³⁴ cm⁻²s⁻¹ was achieved by the LHC in 2018.
See also
KEKB
Large Electron-Positron Collider
Large Hadron Collider
References
External links
SuperKEKB webpage
KEK webpage
SuperKEKB on INSPIRE-HEP
Belle II on INSPIRE-HEP
Particle physics facilities
Particle experiments
B physics
Accelerator physics
Particle accelerators
Science and technology in Japan
Tsukuba, Ibaraki | SuperKEKB | [
"Physics"
] | 823 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
38,375,237 | https://en.wikipedia.org/wiki/Server-based%20signatures | In cryptography, server-based signatures are digital signatures in which a publicly available server participates in the signature creation process. This is in contrast to conventional digital signatures, which are based on public-key cryptography and a public-key infrastructure, and which assume that signers use their personal trusted computing bases to generate signatures without any communication with servers.
Four different classes of server-based signatures have been proposed:
1. Lamport One-Time Signatures. Proposed in 1979 by Leslie Lamport. Lamport one-time signatures are based on cryptographic hash functions. For signing a message, the signer just sends a list of hash values (outputs of a hash function) to a publishing server, and therefore the signature process is very fast, though the signature is many times larger than in ordinary public-key signature schemes.
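The Lamport construction can be sketched in a few lines. This is a minimal illustration of the standard scheme (not any particular server protocol); a real deployment needs careful key handling, and each key pair must sign at most one message:

```python
import hashlib
import secrets

# Minimal sketch of a Lamport one-time signature over a 256-bit message hash.

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # Two rows of 256 random secrets; the public key is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(256)] for _ in range(2)]
    pk = [[H(s) for s in row] for row in sk]
    return sk, pk

def bits(msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret per message-hash bit.
    return [sk[b][i] for i, b in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pk[b][i] for (i, b), s in zip(enumerate(bits(msg)), sig))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))    # True
print(verify(pk, b"goodbye", sig))  # False
```

Signing is just 256 table lookups, which is why the scheme is fast, while the signature itself (256 × 32 bytes) is far larger than an RSA or ECDSA signature.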
2. On-line/off-line Digital Signatures. First proposed in 1989 by Even, Goldreich and Micali in order to speed up the signature creation procedure, which is usually much more time-consuming than verification. In the case of RSA, it may be one thousand times slower than verification. On-line/off-line digital signatures are created in two phases. The first phase is performed off-line, possibly even before the message to be signed is known. The second (message-dependent) phase is performed on-line and involves communication with a server. In the first (off-line) phase, the signer uses a conventional public-key digital signature scheme to sign a public key of the Lamport one-time signature scheme. In the second phase, a message is signed by using the Lamport signature scheme. Some later works have improved the efficiency of the original solution by Even et al.
3. Server-Supported Signatures (SSS). Proposed in 1996 by Asokan, Tsudik and Waidner in order to delegate the use of time-consuming operations of asymmetric cryptography from clients (ordinary users) to a server. For ordinary users, the use of asymmetric cryptography is limited to signature verification, i.e. there is no pre-computation phase like in the case of on-line/off-line signatures. The main motivation was the use of low-performance mobile devices for creating digital signatures, considering that such devices could be too slow for creating ordinary public-key digital signatures, such as RSA. Clients use hash-chain-based authentication to send their messages to a signature server in an authenticated way, and the server then creates a digital signature by using an ordinary public-key digital signature scheme. In SSS, signature servers are not assumed to be Trusted Third Parties (TTPs), because the transcript of the hash-chain authentication phase can be used for non-repudiation purposes. In SSS, servers cannot create signatures in the name of their clients.
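The hash-chain authentication idea can be sketched as follows. The function names and chain length are illustrative, not those of the SSS protocol itself; the point is only that revealing successive preimages proves possession of the chain's seed:

```python
import hashlib
import secrets

# Sketch of one-time authentication values from a hash chain, the kind of
# mechanism SSS-style protocols use to authenticate client requests.

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Return [seed, h(seed), h(h(seed)), ..., h^n(seed)]."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

# The client registers the tip h^n(seed) with the server, then authenticates a
# request by revealing the value one step down the chain (its preimage).
def server_check(current_tip: bytes, revealed: bytes) -> bool:
    return h(revealed) == current_tip

chain = make_chain(secrets.token_bytes(32), n=100)
print(server_check(chain[-1], chain[-2]))  # True: authentic request
```

After a successful check the server stores the revealed value as the new tip, so each chain element authenticates exactly one request.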
4. Delegate Servers (DS). Proposed in 2002 by Perrin, Bruns, Moreh and Olkin in order to reduce the problems and costs related to individual private keys. In their solution, clients (ordinary users) delegate their private cryptographic operations to a Delegation Server (DS). Users authenticate to the DS and request that it sign messages on their behalf by using the server's own private key. The main motivation behind DS was that private keys are difficult for ordinary users to use and easy for attackers to abuse. Private keys are not memorable like passwords, nor derivable from a person like biometrics, and cannot be entered from keyboards like passwords. Private keys are mostly stored as files in computers or on smart-cards, which may be stolen by attackers and abused off-line. In 2003, Buldas and Saarepera proposed a two-level architecture of delegation servers that addresses the trust issue by replacing trust with threshold trust via the use of threshold cryptosystems.
References
Cryptography | Server-based signatures | [
"Mathematics",
"Engineering"
] | 787 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
38,381,971 | https://en.wikipedia.org/wiki/Vibronic%20spectroscopy | Vibronic spectroscopy is a branch of molecular spectroscopy concerned with vibronic transitions: the simultaneous changes in electronic and vibrational energy levels of a molecule due to the absorption or emission of a photon of the appropriate energy. In the gas phase, vibronic transitions are also accompanied by changes in rotational energy.
Vibronic spectra of diatomic molecules have been analysed in detail; emission spectra are more complicated than absorption spectra. The intensity of allowed vibronic transitions is governed by the Franck–Condon principle. Vibronic spectroscopy may provide information, such as bond length, on electronic excited states of stable molecules. It has also been applied to the study of unstable molecules such as dicarbon (C2) in discharges, flames and astronomical objects.
Principles
Electronic transitions are typically observed in the visible and ultraviolet regions, in the wavelength range approximately 200–700 nm (50,000–14,000 cm−1), whereas fundamental vibrations are observed below about 4000 cm−1. When the electronic and vibrational energy changes are so different, vibronic coupling (mixing of electronic and vibrational wave functions) can be neglected and the energy of a vibronic level can be taken as the sum of the electronic and vibrational (and rotational) energies; that is, the Born–Oppenheimer approximation applies. The overall molecular energy depends not only on the electronic state but also on vibrational and rotational quantum numbers, denoted v and J respectively for diatomic molecules. It is conventional to add a double prime for levels of the electronic ground state and a single prime for electronically excited states.
Each electronic transition may show vibrational coarse structure, and for molecules in the gas phase, rotational fine structure. This is true even when the molecule has a zero dipole moment and therefore has no vibration-rotation infrared spectrum or pure rotational microwave spectrum.
It is necessary to distinguish between absorption and emission spectra. With absorption the molecule starts in the ground electronic state, and usually also in the vibrational ground state because at ordinary temperatures the energy necessary for vibrational excitation is large compared to the average thermal energy. The molecule is excited to another electronic state and to many possible vibrational states v′. With emission, the molecule can start in various populated vibrational states, and finishes in the electronic ground state in one of many populated vibrational levels. The emission spectrum is more complicated than the absorption spectrum of the same molecule because there are more changes in vibrational energy level.
For absorption spectra, the vibrational coarse structure for a given electronic transition forms a single progression, or series of transitions with a common level, here the lower level v″ = 0. There are no selection rules for vibrational quantum numbers, which are zero in the ground vibrational level of the initial electronic ground state, but can take any integer values in the final electronic excited state. The term values G(v) for a harmonic oscillator are given by

G(v) = ωₑ(v + ½)

where v is a vibrational quantum number and ωₑ is the harmonic wavenumber. In the next approximation the term values are given by

G(v) = ωₑ(v + ½) − ωₑxₑ(v + ½)²

where xₑ is an anharmonicity constant. This is, in effect, a better approximation to the Morse potential near the potential minimum. The spacing between adjacent vibrational lines decreases with increasing quantum number because of anharmonicity in the vibration. Eventually the separation decreases to zero when the molecule photo-dissociates into a continuum of states. The second formula is adequate for small values of the vibrational quantum number. For higher values further anharmonicity terms are needed as the molecule approaches the dissociation limit, at the energy corresponding to the upper (final state) potential curve at infinite internuclear distance.
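The closing of the vibrational progression follows directly from the anharmonic term-value formula. A short numerical sketch with illustrative (not molecule-specific) constants:

```python
# Sketch of vibrational term values in the anharmonic approximation,
# G(v) = we*(v + 1/2) - wexe*(v + 1/2)**2, with illustrative constants
# in cm^-1 (we: harmonic wavenumber, wexe: anharmonicity constant).

we, wexe = 2000.0, 15.0

def G(v: int) -> float:
    return we * (v + 0.5) - wexe * (v + 0.5) ** 2

spacings = [G(v + 1) - G(v) for v in range(5)]
print([round(s, 1) for s in spacings])  # decreasing: anharmonicity closes the gaps

# The spacing we - 2*wexe*(v + 1) falls to zero near v ~ we/(2*wexe) - 1,
# where the progression converges towards the dissociation continuum.
print(round(we / (2 * wexe) - 1, 1))
```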
The intensity of allowed vibronic transitions is governed by the Franck–Condon principle. The intensity distribution within a progression is governed by the difference in the equilibrium bond lengths of the initial electronic ground state and the final electronic excited state of the molecule. In accordance with the Born-Oppenheimer approximation, where electronic motion is near instantaneous compared to nuclear motion, transitions between vibrational levels happen with essentially no change in nuclear coordinates between the ground and excited electronic states. These nuclear coordinates are referred to as classical "turning points", where the equilibrium bond lengths of the initial and final electronic states are equal. These transitions can be represented as vertical lines between the various vibrational levels within electronic states on an energy level diagram.
It is generally true that the greater the changes to the bond length of a molecule upon excitation, the greater the contribution of vibrational states to a progression. The width of this progression itself is dependent on the range of transition energies available for internuclear distances close to the turning points of the initial vibration state. As the "well" of the potential energy curve of the final electronic state grows steeper, there are more final vibrational states available for transitions, and thus more energy levels to yield a wider spectrum.
Emission spectra are complicated due to the variety of processes through which electronically excited molecules can spontaneously return to lower energy states. There is a tendency for molecules to undergo vibrational energy relaxation, where energy is lost non-radiatively from the Franck–Condon state (the vibrational state achieved after a vertical transition) to surroundings or to internal processes. The molecules can settle in the ground vibrational level of the excited electronic state, where they can continue to decay to various vibrational levels in the ground electronic state, before ultimately returning to the lowest vibrational level of the ground state.
If emission occurs before vibrational relaxation can occur, then the resulting fluorescence is referred to as resonance fluorescence. In this case, the emission spectrum is identical to the absorbance spectrum. Resonance fluorescence, however, is not very common and is mainly observed in small molecules (such as diatomics) in the gas phase. This lack of prevalence is due to short radiative lifetimes of the excited state, during which energy can be lost.
Emission from the ground vibrational level of the excited state after vibrational relaxation is much more prevalent, referred to as relaxed fluorescence. Emission peaks for a molecule exhibiting relaxed fluorescence are found at longer wavelengths than the corresponding absorption spectra, with the difference being the Stokes shift of the molecule.
Vibronic spectra of diatomic molecules in the gas phase have been analyzed in detail. Vibrational coarse structure can sometimes be observed in the spectra of molecules in liquid or solid phases and of molecules in solution. Related phenomena including photoelectron spectroscopy, resonance Raman spectroscopy, luminescence, and fluorescence are not discussed in this article, though they also involve vibronic transitions.
Diatomic molecules
The vibronic spectra of diatomic molecules in the gas phase also show rotational fine structure. Each line in a vibrational progression will show P- and R-branches. For some electronic transitions there will also be a Q-branch. The transition energies, expressed in wavenumbers, of the lines for a particular vibronic transition are given, in the rigid rotor approximation, that is, ignoring centrifugal distortion, by

ν = ν₀ + B′J′(J′ + 1) − B″J″(J″ + 1)

Here ν₀ is the wavenumber of the band origin, B′ and B″ are rotational constants and J′ and J″ are rotational quantum numbers. (For B and J also, a double prime indicates the ground state and a single prime an electronically excited state.) The values of the rotational constants may differ appreciably because the bond length in the electronic excited state may be quite different from the bond length in the ground state, because of the operation of the Franck–Condon principle. The rotational constant is inversely proportional to the square of the bond length. Usually B′ < B″, as is true when an electron is promoted from a bonding orbital to an antibonding orbital, causing bond lengthening. But this is not always the case; if an electron is promoted from a non-bonding or antibonding orbital to a bonding orbital, there will be bond-shortening and B′ > B″.
The treatment of rotational fine structure of vibronic transitions is similar to the treatment of rotation-vibration transitions and differs principally in the fact that the ground and excited states correspond to two different electronic states as well as to two different vibrational levels. For the P-branch J′ = J″ − 1, so that

ν_P = ν₀ − (B′ + B″)J″ + (B′ − B″)J″²

Similarly for the R-branch J′ = J″ + 1, and

ν_R = ν₀ + 2B′ + (3B′ − B″)J″ + (B′ − B″)J″²
Thus, the wavenumbers of transitions in both P- and R-branches are given, to a first approximation, by the single formula

ν = ν₀ + (B′ + B″)m + (B′ − B″)m²

Here positive values of m refer to the R-branch (with m = J″ + 1) and negative values refer to the P-branch (with m = −J″). The wavenumbers of the lines in the P-branch lie on the low-wavenumber side of the band origin at ν₀, moving further from it as J″ increases. In the R-branch, for the usual case that B′ < B″, as J″ increases the wavenumbers at first lie increasingly on the high-wavenumber side of the band origin but then start to decrease, eventually lying on the low-wavenumber side. The Fortrat diagram illustrates this effect. In the rigid rotor approximation the line wavenumbers lie on a parabola which has a maximum at

m = (B′ + B″) / (2(B″ − B′))

The line of highest wavenumber in the R-branch is known as the band head. It occurs at the value of m equal to the integer part of (B′ + B″)/(2(B″ − B′)), or of J″ equal to the integer part of (3B′ − B″)/(2(B″ − B′)).
When a Q-branch is allowed for a particular electronic transition, the lines of the Q-branch correspond to the case ΔJ = 0 (J′ = J″), and their wavenumbers are given by

ν = ν₀ + (B′ − B″)J(J + 1)

The Q-branch then consists of a series of lines with increasing separation between adjacent lines as J increases. When B′ < B″ the Q-branch lies to lower wavenumbers relative to the vibrational line.
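The branch structure and band head can be illustrated numerically, using the standard rigid-rotor single formula ν = ν₀ + (B′ + B″)m + (B′ − B″)m², with hypothetical rotational constants chosen so that B′ < B″:

```python
# Sketch of a Fortrat diagram: P- and R-branch line positions from
# nu = nu0 + (B' + B'')m + (B' - B'')m**2, with m = -J'' for the P-branch
# and m = J'' + 1 for the R-branch. The constants are illustrative only.

nu0 = 20000.0        # band origin, cm^-1 (hypothetical)
Bp, Bpp = 1.7, 2.0   # B' (upper state) and B'' (lower state), cm^-1

def line(m: int) -> float:
    return nu0 + (Bp + Bpp) * m + (Bp - Bpp) * m * m

R = [line(m) for m in range(1, 15)]    # m = J'' + 1 > 0
P = [line(-m) for m in range(1, 15)]   # m = -J'' < 0

# Vertex of the parabola: location of the R-branch band head.
m_head = (Bp + Bpp) / (2 * (Bpp - Bp))
print(round(m_head, 2))     # ~6.17, so the head falls at m = 6
print(max(R) == line(6))    # True: highest-wavenumber line in the R-branch
```

With these constants the P-branch lines march steadily to lower wavenumbers, while the R-branch turns back on itself at m = 6, exactly the band-head behaviour described above.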
Predissociation
The phenomenon of predissociation occurs when an electronic transition results in dissociation of the molecule at an excitation energy less than the normal dissociation limit of the upper state. This can occur when the potential energy curve of the upper state crosses the curve for a repulsive state, so that the two states have equal energy at some internuclear distance. This allows the possibility of a radiationless transition to the repulsive state whose energy levels form a continuum, so that there is blurring of the particular vibrational band in the vibrational progression.
Applications
The analysis of vibronic spectra of diatomic molecules provides information concerning both the ground electronic state and the excited electronic state. Data for the ground state can also be obtained by vibrational or pure rotational spectroscopy, but data for the excited state can only be obtained from the analysis of vibronic spectra. For example, the bond length in the excited state may be derived from the value of the rotational constant B′.
In addition to stable diatomic molecules, vibronic spectroscopy has been used to study unstable species, including CH, NH, the hydroxyl radical, OH, and the cyano radical, CN. The Swan bands in hydrocarbon flame spectra are a progression in the C–C stretching vibration of the dicarbon radical, C2, for the d³Πg–a³Πu electronic transition. Vibronic bands for 9 other electronic transitions of C2 have been observed in the infrared and ultraviolet regions.
Polyatomic molecules and ions
For polyatomic molecules, progressions are most often observed when the change in bond lengths upon electronic excitation coincides with the change due to a ″totally symmetric″ vibration. This is the same process that occurs in resonance Raman spectroscopy. For example, in formaldehyde (methanal), H2CO, the n → π* transition involves excitation of an electron from a non-bonding orbital to an antibonding pi orbital which weakens and lengthens the C–O bond. This produces a long progression in the C–O stretching vibration. Another example is furnished by benzene, C6H6. In both gas and liquid phase the band around 250 nm shows a progression in the symmetric ring-breathing vibration.
As an example from inorganic chemistry, the permanganate ion, MnO₄⁻, in aqueous solution has an intense purple colour due to a ligand-to-metal charge transfer (LMCT) band in much of the visible region. This band shows a progression in the symmetric Mn–O stretching vibration. The individual lines overlap each other extensively, giving rise to a broad overall profile with some coarse structure.
Progressions in vibrations which are not totally symmetric may also be observed.
d–d electronic transitions in atoms in a centrosymmetric environment are electric-dipole forbidden by the Laporte rule. This will apply to octahedral coordination compounds of the transition metals. The spectra of many of these complexes have some vibronic character. The same rule also applies to f–f transitions in centrosymmetric complexes of lanthanides and actinides. In the case of the octahedral actinide chloro-complex of uranium(IV), UCl₆²⁻, the observed electronic spectrum is entirely vibronic. At the temperature of liquid helium, 4 K, the vibronic structure was completely resolved, with zero intensity for the purely electronic transition, and three side-lines corresponding to the asymmetric U–Cl stretching vibration and two asymmetric Cl–U–Cl bending modes. Later studies on the same anion were also able to account for vibronic transitions involving low-frequency lattice vibrations.
Notes
References
Bibliography
Spectroscopy | Vibronic spectroscopy | [
"Physics",
"Chemistry"
] | 2,693 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
38,382,619 | https://en.wikipedia.org/wiki/Thrombodynamics%20test | Thrombodynamics test is a method for blood coagulation monitoring and anticoagulant control. This test is based on imitation of coagulation processes occurring in vivo, is sensitive both to pro- and anticoagulant changes in the hemostatic balance. Highly sensitive to thrombosis.
The method was developed in the Physical Biochemistry Laboratory under the direction of Prof. Fazly Ataullakhanov.
Technology description
Thrombodynamics is designed to investigate the in vitro spatio-temporal dynamics of blood coagulation initiated by a localized coagulation activator under conditions similar to those of blood clotting in vivo. The test takes into account the spatial heterogeneity of the coagulation processes in blood. It is performed without mixing, in a thin layer of plasma.
The measurement cuvette with the blood plasma sample is placed inside a water thermostat. Clotting starts when an activator with immobilized tissue factor (TF) is immersed into the cuvette. The clot then propagates from the activating surface into the bulk of the plasma. An image of the growing clot is registered by a CCD camera in time-lapse microscopy mode in scattered light, and the parameters of coagulation are then calculated on a computer. The Thrombodynamics Analyser T-2 device also supports measurement of the spatial dynamics of thrombin propagation during clot growth via the use of a fluorogenic substrate for thrombin. The blood plasma sample is periodically irradiated with excitation light and the emission of the fluorophore is registered by the CCD camera.
Mathematical methods are used to restore the spatio-temporal distribution of thrombin from the fluorophore signal. This experimental model has worked well in research and has demonstrated good sensitivity to various disorders of the coagulation system.
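As a hypothetical illustration of the kind of computation involved, a clot-growth velocity can be estimated by fitting a line to the clot-front position versus time. The data, numbers and function below are invented for illustration and are not the analyser's actual pipeline:

```python
# Hypothetical sketch: estimate a stationary clot-growth velocity from
# time-lapse measurements of the clot-front position (distance from the
# activator surface). All numbers are synthetic.

def growth_velocity(times_min, front_um):
    """Least-squares slope of front position (um) versus time (min)."""
    n = len(times_min)
    mt = sum(times_min) / n
    mx = sum(front_um) / n
    num = sum((t - mt) * (x - mx) for t, x in zip(times_min, front_um))
    den = sum((t - mt) ** 2 for t in times_min)
    return num / den  # um/min

# Synthetic front positions for a clot growing at ~30 um/min.
times = [5, 10, 15, 20, 25, 30]
front = [160, 310, 460, 610, 760, 910]
print(round(growth_velocity(times, front), 1))  # 30.0
```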
Features
Observing spatial growth of fibrin clots in vitro
Measuring thrombin generation as a function of space and time during thrombus formation
Time- and space- resolved imaging of other active coagulation factors
High sensitivity to both pro- and anticoagulant changes in the hemostatic balance
Applications
Prothrombotic disorders
Hypercoagulation in the perioperative period, DIC, cancer patients, patients with thrombotic risk factors
Diagnostics of coagulopathy
Monitoring of anticoagulant therapy
Individual therapy planning
Bleeding disorders
Hemophilia A and B, DIC, blood loss, anticoagulant overdose
Diagnostics of coagulopathy
Monitoring of replacement therapy
Identification of coagulopathy pathogenesis
Development of drugs and biomaterials
See also
Blood coagulation
References
Blood tests
Mathematics in medicine | Thrombodynamics test | [
"Chemistry",
"Mathematics"
] | 568 | [
"Blood tests",
"Chemical pathology",
"Mathematics in medicine",
"Applied mathematics"
] |
38,383,500 | https://en.wikipedia.org/wiki/RRT%20Global | RRT Global is an international company that specializes in the development of technologies for oil refining process. The company's CEO, Douglas Harris, is a former vice president of TNK-BP. The company is a resident of the Energy Efficient Technologies cluster of the Skolkovo Foundation.
RRT Global is the first company to implement the conversion of light gasoline fractions in a combined process.
History
The company was founded in St. Petersburg by chemical engineers Oleg Parputs () and Oleg Giiazov (), who had previously worked in an engineering company that served the oil refining sector. Startup financing was provided by Foresight Ventures, Bright Capital and the Skolkovo Foundation.
The first laboratory was established on the campus of the Saint Petersburg State Institute of Technology. The laboratory was unheated, which meant that researchers had to work there for 6–8 hours in padded jackets.
The company became a resident of the Energy Efficient Technologies cluster of the Skolkovo Foundation in 2011. Douglas Harris, the former Vice President of Refining at BP and TNK-BP, became the company’s CEO in the same year. The company has patented the PRIS technology in Russia, Europe and the United States.
The Prime Minister of the Russian Federation, Dmitry Medvedev, met with the company’s management team in October 2011, and visited the company’s laboratory in September 2012 during his official visit to Saint Petersburg.
The company was rated one of the Top 10 Startups of the Year in 2012 by the Russian Startup Rating.
After conducting a company audit in 2012, PricewaterhouseCoopers awarded the company an AAA rating.
In 2015-2016 RRT Global formed technology alliances with American engineering company KBR and Russian IT company Yandex.
Management and company structure
The company is headquartered in the United States. RRT Global has a subsidiary operating in the Russian Federation.
RRT Global’s R&D center is located in St. Petersburg. The center includes a pilot plant park, laboratory facilities for studying catalytic systems, an analytical laboratory, and an administrative and logistics center.
The company’s senior management includes Dmitry Shalupkin (CTO), Douglas Harris (CEO), Oleg Giiazov (Director in Russia).
Technologies
One of the company’s areas of focus is to improve the technology to obtain MSAT-2 gasoline components based on combining catalytic systems and refining in a single unit. The company is making extensive use of 3D printing for the production of certain equipment components.
PRIS
PRIS is a technology developed by the company for converting light gasoline fractions in a combined process. Papers on this technology have been published in the journal Chemical Engineering and Processing, as well as other specialized scientific journals. The Worldwide Refinery Processing Review included the technology among four state-of-the-art commercial isomerization technologies, together with international providers of advanced technologies: UOP, Axens and GTC. Rossiyskaya Gazeta called the technology “revolutionary”.
In this technology, refining and catalytic systems are combined in a single unit, reducing capital and energy costs and reducing environmental pollution. A similar combination principle had previously been used primarily in the pharmaceutical industry. The PRIS technology allows the use of a low-octane straight-run gasoline fraction together with a benzene-containing fraction as raw materials. The technology allows the production of high-octane gasoline components meeting the Euro 5 standard.
IC7
IC7 is a technology developed by the company for the isomerization of heptanes, a product of oil refining. Previously, heptanes had found no practical application other than use as solvents. The technology helps to increase the production of high-octane gasoline.
See also
Oil refinery
List of oil refineries
Distillation
References
External links
Official website
Russian cleantech Euro-5 gasoline production technology unveiled
RRT Global (Russia): seeking opportunities, meeting partners - News Center
NTV. Report on RRT Global
Kommersant. Pure efficiency
Young St. Petersburg scientists are preparing a revolution in gasoline production
Petroleum technology
Engineering companies of Russia | RRT Global | [
"Chemistry",
"Engineering"
] | 904 | [
"Petroleum engineering",
"Petroleum technology"
] |
38,383,946 | https://en.wikipedia.org/wiki/Hexoskin | Hexoskin is an open data smart shirt for monitoring EKG, heart rate, heart rate variability, breathing rate, breathing volume, actigraphy and other activity measurements like step counting and cadence. Hexoskin allows real-time remote health monitoring on smartphones and tablets using Bluetooth. The smart shirt was created to be used for personal self-experiments, and has also been used by health researchers to study physiology, elite and professional athletes to optimize their physical conditioning, and astronauts to train for space missions.
Hexoskin embeds physiological sensors in smart textile materials, and is a connected object in the sense of the Internet of things concept.
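As an illustration of the kind of metric such a garment can report, the sketch below computes the standard RMSSD heart-rate-variability statistic from a series of R-R intervals. The data and function names are illustrative, not Hexoskin's API:

```python
import math

# Sketch: RMSSD (root mean square of successive differences of R-R
# intervals), a common heart-rate-variability statistic derivable from a
# chest-worn EKG sensor. The interval series below is synthetic.

def rmssd(rr_ms):
    """RMSSD of an R-R interval series given in milliseconds."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]                # ms between successive R peaks
print(round(rmssd(rr), 1))                    # 14.4
print(round(60000 / (sum(rr) / len(rr)), 1))  # 75.0 bpm mean heart rate
```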
See also
Clothing technology
E-textiles
Wearable technology
References
External links
Hexoskin web site
Biomedical engineering | Hexoskin | [
"Engineering",
"Biology"
] | 176 | [
"Biological engineering",
"Medical technology",
"Biomedical engineering"
] |
44,031,786 | https://en.wikipedia.org/wiki/Classical%20control%20theory | Classical control theory is a branch of control theory that deals with the behavior of dynamical systems with inputs, and how their behavior is modified by feedback, using the Laplace transform as a basic tool to model such systems.
The usual objective of control theory is to control a system, often called the plant, so its output follows a desired control signal, called the reference, which may be a fixed or changing value. To do this a controller is designed, which monitors the output and compares it with the reference. The difference between actual and desired output, called the error signal, is applied as feedback to the input of the system, to bring the actual output closer to the reference.
Classical control theory deals with linear time-invariant (LTI) single-input single-output (SISO) systems. The Laplace transform of the input and output signal of such systems can be calculated. The transfer function relates the Laplace transform of the input and the output.
Feedback
To overcome the limitations of the open-loop controller, classical control theory introduces feedback. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
Closed-loop controllers have the following advantages over open-loop controllers:
disturbance rejection (such as hills in a cruise control)
guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
unstable processes can be stabilized
reduced sensitivity to parameter variations
improved reference tracking performance
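The first advantage listed, disturbance rejection, can be seen in a toy simulation. The sketch below is illustrative only; the first-order plant, the proportional gain, and the disturbance values are assumptions, not taken from the article:

```python
# Open-loop vs closed-loop control of a first-order plant
# dv/dt = (-v + u + d) / tau  (a crude cruise-control stand-in).
# All numbers here are illustrative assumptions.

def simulate(closed_loop, k_p=20.0, tau=2.0, dt=0.01, t_end=30.0):
    v, r = 0.0, 1.0                      # output and reference
    for step in range(int(t_end / dt)):
        t = step * dt
        d = -0.5 if t > 15.0 else 0.0    # a "hill" disturbance after t = 15 s
        u = k_p * (r - v) if closed_loop else r   # feedback vs open loop
        v += dt * (-v + u + d) / tau     # forward-Euler plant update
    return v                             # final output

open_final = simulate(closed_loop=False)   # sags to ~0.5 after the hill
closed_final = simulate(closed_loop=True)  # holds near the reference (~0.93)
print(open_final, closed_final)
```

The open-loop controller has no way to notice the hill, so its output settles at about half the reference; the feedback controller largely rejects the disturbance.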
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.
A common closed-loop controller architecture is the PID controller.
Classical vs modern
A physical system can be modeled in the "time domain", where the response of a given system is a function of the various inputs, the previous system values, and time. As time progresses, the state of the system and its response change. However, time-domain models are frequently expressed as high-order differential equations, which can become prohibitively difficult for humans to solve, and some of which are intractable even for modern computer systems to solve efficiently.
To counteract this problem, classical control theory uses the Laplace transform to change an Ordinary Differential Equation (ODE) in the time domain into a regular algebraic polynomial in the frequency domain. Once a given system has been converted into the frequency domain it can be manipulated with greater ease.
Modern control theory, instead of changing domains to avoid the complexities of time-domain ODE mathematics, converts the differential equations into a system of lower-order time domain equations called state equations, which can then be manipulated using techniques from linear algebra.
Laplace transform
Classical control theory uses the Laplace transform to model systems and signals. The Laplace transform is a frequency-domain approach for continuous-time signals, irrespective of whether the system is stable or unstable. The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), which is a unilateral transform defined by

F(s) = ∫₀^∞ e^(−st) f(t) dt

where s is a complex frequency parameter

s = σ + iω, with real numbers σ and ω.
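The defining integral can be checked numerically for a concrete pair. The sketch below (standard library only, illustrative) approximates the transform of f(t) = e^(−3t) at a real value of s and compares it against the known result L{e^(−3t)} = 1/(s + 3):

```python
import math

# Numerical unilateral Laplace transform:
# F(s) = integral from 0 to infinity of e^(-s t) f(t) dt  (real s > 0 here).
def laplace(f, s, t_max=50.0, n=200_000):
    dt = t_max / n
    # Midpoint rule; the integrand decays like e^(-s t), so t_max = 50 suffices.
    return sum(math.exp(-s * (k + 0.5) * dt) * f((k + 0.5) * dt)
               for k in range(n)) * dt

f = lambda t: math.exp(-3.0 * t)   # test function f(t) = e^(-3t)
s = 2.0
F_num = laplace(f, s)
print(F_num, 1.0 / (s + 3.0))      # both ~0.2, since L{e^(-3t)} = 1/(s + 3)
```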
Closed-loop transfer function
A common feedback control architecture is the servo loop, in which the output of the system y(t) is measured using a sensor F and subtracted from the reference value r(t) to form the servo error e. The controller C then uses the servo error e to adjust the input u to the plant (system being controlled) P in order to drive the output of the plant toward the reference. This is shown in the block diagram below. This kind of controller is a closed-loop controller or feedback controller.
This is called a single-input-single-output (SISO) control system; MIMO (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).
If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., the elements of their transfer functions C(s), P(s), and F(s) do not depend on time), the system above can be analysed using the Laplace transform on the variables. This gives the following relations:

Y(s) = P(s) U(s)
U(s) = C(s) E(s)
E(s) = R(s) − F(s) Y(s)

Solving for Y(s) in terms of R(s) gives

Y(s) = (P(s)C(s) / (1 + F(s)P(s)C(s))) R(s) = H(s) R(s)

The expression H(s) = P(s)C(s) / (1 + F(s)P(s)C(s)) is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If |P(s)C(s)| ≫ 1, i.e., it has a large norm for each value of s, and if |F(s)| ≈ 1, then Y(s) is approximately equal to R(s) and the output closely tracks the reference input.
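The loop relations can be checked numerically at a single frequency: repeatedly applying E = R − F·Y, U = C·E, Y = P·U converges to the same answer as the closed-loop formula. The particular P, C, and F below are illustrative assumptions, chosen so that |F·P·C| < 1 at the test frequency and the iteration converges:

```python
# Fixed-point iteration of the loop relations at s = j*w, compared with
# the closed-loop transfer function formula. P, C, F are assumed examples.
def P(s): return 1 / (s + 1)           # first-order plant
def C(s): return 2 + 1 / s             # PI controller
def F(s): return 1 / (1 + 0.1 * s)     # sensor filter

s = 2j                                  # evaluate at w = 2 rad/s
R = 1.0
Y = 0.0
for _ in range(200):                    # iterate around the loop
    Y = P(s) * C(s) * (R - F(s) * Y)    # Y = P*U, U = C*E, E = R - F*Y

H_iter = Y / R
H_formula = P(s) * C(s) / (1 + F(s) * P(s) * C(s))
print(abs(H_iter - H_formula))          # tiny: the loop reproduces the formula
```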
PID controller
The PID controller is probably the most-used feedback control design (alongside the much cruder bang-bang control). PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal. If u(t) is the control signal sent to the system, y(t) is the measured output, r(t) is the desired output, and e(t) = r(t) − y(t) is the tracking error, a PID controller has the general form

u(t) = K_P e(t) + K_I ∫₀^t e(τ) dτ + K_D de(t)/dt

The desired closed-loop dynamics is obtained by adjusting the three parameters K_P, K_I and K_D, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the best-established class of control systems; however, they cannot be used in several more complicated cases, especially if multiple-input multiple-output (MIMO) systems are considered.

Applying the Laplace transform results in the transformed PID controller equation

U(s) = (K_P + K_I/s + K_D s) E(s)

with the PID controller transfer function

C(s) = K_P + K_I/s + K_D s
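The three-term law can be discretized directly for simulation. In the sketch below the gains and the first-order plant are illustrative assumptions; the point is that the integral term drives the steady-state error to zero:

```python
# Discrete-time PID: u = Kp*e + Ki*integral(e) + Kd*de/dt, controlling an
# assumed first-order plant dy/dt = -y + u. Gains are illustrative.

def pid_step(e, state, Kp, Ki, Kd, dt):
    integral, e_prev = state
    integral += e * dt                      # accumulate the integral term
    derivative = (e - e_prev) / dt          # finite-difference derivative
    u = Kp * e + Ki * integral + Kd * derivative
    return u, (integral, e)

def run(Kp=2.0, Ki=1.0, Kd=0.1, dt=0.01, t_end=20.0):
    y, r = 0.0, 1.0                         # output and reference
    state = (0.0, 0.0)
    for _ in range(int(t_end / dt)):
        u, state = pid_step(r - y, state, Kp, Ki, Kd, dt)
        y += dt * (-y + u)                  # forward-Euler plant update
    return y

final = run()
print(final)   # settles at the reference: the integral term removes the offset
```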
There exists a nice example of the closed-loop system discussed above. If we take:

a PID controller transfer function in series form

C(s) = K (1 + 1/(s T_i)) (1 + s T_d)

a first-order filter in the feedback loop

F(s) = 1 / (1 + s T_f)

a linear actuator with filtered input

P(s) = A / (1 + s T_p), A = const

and insert all this into the expression for the closed-loop transfer function H(s), then tuning is very easy: simply put

K = 1/A, T_i = T_f, T_d = T_p

and get H(s) = 1 identically.
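This cancellation can be spot-checked numerically. The sketch below assumes the series-form PID C(s) = K(1 + 1/(sT_i))(1 + sT_d), feedback filter F(s) = 1/(1 + sT_f), and plant P(s) = A/(1 + sT_p); with the tuning K = 1/A, T_i = T_f, T_d = T_p, the closed-loop transfer function evaluates to 1 at every test frequency:

```python
# Verify H(s) = 1 under the proposed tuning at a few complex frequencies.
# A, Tf, Tp are arbitrary illustrative values.
A, Tf, Tp = 4.0, 0.5, 2.0
K, Ti, Td = 1 / A, Tf, Tp          # the tuning from the text

def H(s):
    C = K * (1 + 1 / (s * Ti)) * (1 + s * Td)   # series-form PID
    F = 1 / (1 + s * Tf)                        # feedback filter
    P = A / (1 + s * Tp)                        # plant
    return P * C / (1 + F * P * C)

for s in (0.3 + 0.7j, 2j, 5.0 - 1j):
    print(abs(H(s) - 1))           # ~0 at every s: H is identically 1
```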
For practical PID controllers, a pure differentiator is neither physically realisable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach is used instead, or a differentiator with low-pass roll-off.
Tools
Classical control theory uses an array of tools to analyze systems and design controllers for such systems. Tools include the root locus, the Nyquist stability criterion, the Bode plot, the gain margin and phase margin. More advanced tools include Bode integrals to assess performance limitations and trade-offs, and describing functions to analyze nonlinearities in the frequency domain.
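As a small illustration of two of these tools, the gain and phase margins can be read off the open-loop frequency response numerically. The loop gain L(s) = 10/(s(s + 1)(s + 5)) below is an assumed example, not from the article; only the standard library is used:

```python
import math

# Assumed open-loop gain L(s) = 10 / (s (s + 1) (s + 5)).
def mag_L(w):
    return 10 / (w * math.hypot(1, w) * math.hypot(5, w))

def phase_L(w):
    # Unwrapped phase of L(jw): -(90 deg + atan(w) + atan(w/5)), in radians.
    return -(math.pi / 2 + math.atan(w) + math.atan(w / 5))

def bisect(f, lo, hi, iters=80):
    # Simple bisection; assumes exactly one sign change on [lo, hi].
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

w_gc = bisect(lambda w: mag_L(w) - 1.0, 0.1, 10.0)        # gain crossover |L| = 1
w_pc = bisect(lambda w: phase_L(w) + math.pi, 0.1, 10.0)  # phase crossover -180 deg

phase_margin_deg = 180.0 + math.degrees(phase_L(w_gc))
gain_margin = 1.0 / mag_L(w_pc)
print(round(phase_margin_deg, 1), round(gain_margin, 2))  # ~25.4 deg, ~3.0
```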
See also
Minor loop feedback a classical method for designing feedback control systems.
State space (control)
References
Control engineering
Mathematical modeling | Classical control theory | [
"Mathematics",
"Engineering"
] | 1,545 | [
"Applied mathematics",
"Control engineering",
"Mathematical modeling"
] |
44,036,593 | https://en.wikipedia.org/wiki/International%20Centre%20for%20Radio%20Astronomy%20Research | The International Centre for Radio Astronomy Research (ICRAR) is a multi-institutional astronomy research centre based in Perth, Western Australia. The centre is a joint venture between Curtin University and the University of Western Australia, with 'nodes' located at both universities. As of 2024, ICRAR has approximately 150 staff and students across both nodes.
History
ICRAR launched in August 2009 with funding support from the State Government of Western Australia. Initially funded for five years to support Australia's bid to host the SKA telescopes, its funding was extended for additional five-year periods in 2013 (ICRAR II), 2019 (ICRAR III) and 2024 (ICRAR IV).
In 2013, ICRAR became the first user of the Pawsey Supercomputing Centre, based in Kensington.
Research
Although radio astronomy features in the centre's name, its research has expanded to include optical and multi-wavelength astronomy.
Each of the centre's two university nodes specialises in different areas of astronomical research. The Curtin node specialises in extragalactic radio science, accretion physics and slow transients, the epoch of reionisation, and pulsars & other fast transients. The UWA node specialises in studying galaxies in the local and distant Universe, and cosmological theory, with a particular focus on galactic and cosmological simulations. The UWA node also operates a data intensive astronomy program, which researches techniques for managing and processing the large amounts of data created by current and future radio telescopes.
Both nodes also operate engineering research programs, largely dedicated to the design and operation of radio telescopes and development of related spin-off technologies. In particular, the timing and synchronisation system for the SKA-Mid radio telescope and the power and signal distribution system for the SKA-Low radio telescope were designed and developed at ICRAR's UWA and Curtin nodes, respectively.
ICRAR has also contributed to the design, technical operations and science programs of several Australian SKA precursors and prototypes, including the Murchison Widefield Array (MWA), the Australian Square Kilometre Array Pathfinder (ASKAP), and the Aperture Array Verification Systems (AAVS1,2&3), located at Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory.
Management
ICRAR is governed by a board with representatives from the government, both universities, and other stakeholders including the CSIRO. The inaugural board chair was Bernard Bowen (February 2009 - July 2016). The current chair is David Skellern, appointed in March 2024.
ICRAR's day-to-day operations are managed by an executive team with members across both university nodes. The founding executive director was Peter Quinn (2009-2022). The current executive director is Simon Ellingsen.
Citizen Science
ICRAR has run several successful citizen science projects.
theSkyNet employed Internet-connected computers owned by the general public to do research in astronomy using BOINC technology. It combined the spectral coverage of the GALEX, Pan-STARRS1, and WISE surveys to generate a multi-wavelength (ultraviolet, optical, and near-infrared) galaxy atlas for the nearby Universe. In September 2014 theSkyNet had 13,573 total users and 5,198 recent users. theSkyNet was powered down in 2018.
AstroQuest launched in 2019, and aimed to help Australian scientists understand how galaxies grow and evolve. Users inspected images of galaxies, and used paint tools to help classify light as coming from the galaxy or from other sources. As of 2021, approximately 10,000 users had classified the complete dataset of 60,000 galaxies, and the project is on indefinite hold awaiting more galaxies to classify.
Notable discoveries
In 2022, an unusual slow periodic radio transient was discovered in archival data from GLEAM (the GaLactic and Extragalactic All-sky Murchison Widefield Array survey). Catalogued as GLEAM-X J162759.5-523504, the astrophysical radio source had an 18-minute period with 1-minute-long bursts, not matching any then-known periodic variables.
See also
List of astronomical societies
References
Astronomy organizations
Astronomy in Australia
Radio astronomy
Scientific organizations established in 2009 | International Centre for Radio Astronomy Research | [
"Astronomy"
] | 864 | [
"Radio astronomy",
"Astronomy organizations",
"Astronomical sub-disciplines"
] |
44,038,801 | https://en.wikipedia.org/wiki/Smart%20inorganic%20polymer | Smart inorganic polymers (SIPs) are hybrid or fully inorganic polymers with tunable (smart) properties such as stimuli responsive physical properties (shape, conductivity, rheology, bioactivity, self-repair, sensing etc.). While organic polymers are often petrol-based, the backbones of SIPs are made from elements other than carbon which can lessen the burden on scarce non-renewable resources and provide more sustainable alternatives. Common backbones utilized in SIPs include polysiloxanes, polyphosphates, and polyphosphazenes, to name a few.
SIPs have the potential for broad applicability in diverse fields spanning from drug delivery and tissue regeneration to coatings and electronics. As compared to organic polymers, inorganic polymers in general possess improved performance and environmental compatibility (no need for plasticizers, intrinsically flame-retardant properties). The unique properties of different SIPs can additionally make them useful in a diverse range of technologically novel applications, such as solid polymer electrolytes for consumer electronics, molecular electronics with non-metal elements to replace metal-based conductors, electrochromic materials, self-healing coatings, biosensors, and self-assembling materials.
Role of COST action CM1302
COST action 1302 is a European Community "Cooperation in Science and Technology" research network initiative that supported 62 scientific projects in the area of smart inorganic polymers resulting in 70 publications between 2014 and 2018, with the mission of establishing a framework with which to rationally design new smart inorganic polymers. This represents a large share of the total body of work on SIPs. The results of this work are reviewed in the 2019 book, Smart Inorganic Polymers: Synthesis, Properties, and Emerging Applications in Materials and Life Sciences.
Smart polysiloxanes
Polysiloxane, commonly known as silicone, is the most commonly commercially available inorganic polymer. The large body of existing work on polysiloxane has made it a readily available platform for functionalization to create smart polymers, with a variety of approaches reported which generally center around the addition of metal oxides to a commercially available polysiloxane or the inclusion of functional side-chains on the polysiloxane backbone. The applications of smart polysiloxanes vary greatly, ranging from drug delivery, to smart coatings, to electrochromics.
Drug delivery
Synthesis of smart stimuli responsive polysiloxanes through the addition of a polysiloxane amine to an α,β-unsaturated carbonyl via aza-Michael addition to create a polysiloxane with N-isopropyl amide side-chains has been reported. This polysiloxane was shown to be able to load ibuprofen (a hydrophobic NSAID) and then release it in response to changes in temperature, showing it to be a promising candidate for smart drug delivery of hydrophobic drugs. This action was attributed to the polymer's ability to retain the ibuprofen above the lower critical solution temperature (LCST), and conversely, to dissolve below the LCST, thus releasing the loaded ibuprofen at a given, known temperature.
Coatings
Commercial polysiloxane coatings are readily available and capable of protecting surfaces from damaging pollutants, but the addition of TiO2 gives them the smart ability to degrade pollutants stuck to their surface in the presence of sunlight. This particular phenomenon is promising in the field of monument preservation. Similar hybrid textile coatings made of amino-functionalized polysiloxane with TiO2 and silver nanoparticles have been reported to have smart stain-repellent yet hydrophilic properties, making them unique in comparison to typical hydrophobic stain-repellent coatings. Smart properties have also been reported for polysiloxane coatings without metal oxides, namely a polysiloxane/polyethylenimine coating designed to protect magnesium from corrosion that was found to be capable of self-healing small scratches.
Poly-(ε-caprolactone)/siloxane
Poly-(ε-caprolactone)/siloxane is an inorganic-organic hybrid material which, when used as a solid electrolyte matrix with a lithium perchlorate electrolyte, paired to a W2O3 film, responds to a change in electrical potential by changing transparency. This makes it a potentially useful electrochromic smart glass.
Smart phosphorus polymers
There exist a sizable number of phosphorus polymers with backbones ranging from primarily phosphorus to primarily organic with phosphorus subunits. Some of these have been shown to possess smart properties, and are largely of-interest due to the biocompatibility of phosphorus for biological applications like drug delivery, tissue engineering, and tissue repair.
Polyphosphates
Polyphosphate (PolyP) is an inorganic polymer made from phosphate subunits. It typically exists in its deprotonated form, and can form salts with physiological metal cations like Ca2+, Sr2+, and Mg2+. When salted to these metals, it can selectively induce bone regeneration (Ca-PolyP), bone hardening (Sr-PolyP), or cartilage regeneration (Mg-PolyP) depending on the metal to which it is salted. This smart ability to attenuate the kind of tissue regenerated in response to different metal cations makes it a promising polymer for biomedical applications.
Polyphosphazenes
Polyphosphazene is an inorganic polymer with a backbone consisting of phosphorus and nitrogen, which can also form inorganic-organic hybrid polymers with the addition of organic substituents. Some polyphosphazenes have been designed through the addition of amino acid ester side chains such that their LCST is near body temperature and thus they can form a gel in situ upon injection into a person, making them potentially useful for drug delivery. They biodegrade into a near-neutral pH mixture of phosphates and ammonia that has been shown to be non-toxic, and the rate of their biodegradation can be tuned with the addition of different substituents from full decomposition within days with glyceryl derivatives, to biostable with fluoroalkoxy substituents.
Poly-ProDOT-Me2
Poly-ProDOT-Me2 is a phosphorus-based inorganic-organic hybrid polymer, which, when paired to a V2O5 film, provides a material that changes color upon application of an electrical current. This 'smart glass' is capable of reducing light transmission from 57% to 28% in under 1 second, a much faster transformation than that of commercially available photochromic lenses.
Smart metalloid and metal containing polymers
While metals are not typically associated with polymeric structures, the inclusion of metal atoms either throughout the backbone of, or as pendant structures on a polymer can provide unique smart properties, especially in relation to their redox and electronic properties. These desirable properties can range from self-repair of oxidation, to sensing, to smart material self-assembly, as discussed below.
Polystannanes
Polystannane, a unique polymer class with a tin backbone, is the only known polymer to possess a completely organometallic backbone. It is especially unique in the way that the conductive tin backbone is surrounded by organic substituents, making it act as an atomic-scale insulated wire. Some polystannanes such as (SnBu2)n and (SnOct2)n have shown the smart ability to align themselves with external stimuli, which could see them become useful for pico electronics. However, polystannane is very unstable to light, so any such advancement would require a method for stabilizing it against light degradation.
Icosahedral boron polymers
Icosahedral boron is a geometrically unusual allotrope of boron, which can be either added as side chains to a polymer or co-polymerized into the backbone. Icosahedral boron side chains on polypyrrole have been shown to allow the polypyrrole to self-repair when overoxidized because the icosahedral boron acts as a doping agent, enabling overoxidation to be reversed.
Polyferrocenylsilane
Polyferrocenylsilanes are a group of common organosilicon metallopolymers with backbones consisting of silicon and ferrocene. Variants of polyferrocenylsilanes have been found to exhibit smart self-assembly in response to oxidation and subsequent smart self-disassembly upon reduction, as well as variants which can respond to electrochemical stimulation. One such example is a thin film of a polystyrene-polyferrocenylsilane inorganic-organic hybrid copolymer that was found to be able to adsorb and release ferritin with the application of an electrical potential.
Ferrocene biosensing
A number of ferrocene-organic inorganic-organic hybrid polymers have been reported to have smart properties that make them useful for application in biosensing. Multiple polymers with ferrocene side-chains cross-linked with glucose oxidase have shown oxidation activity which results in electrical potential in the presence of glucose, making them useful as glucose biosensors. This sort of activity is not limited to glucose, as other enzymes can be crosslinked to allow for sensing of their corresponding molecules, like a poly(vinylferrocene)/carboxylated multiwall carbon nanotube/gelatin composite that was bound to uricase, giving it the ability to act as a biosensor for uric acid.
See also
Coatings
Coordination Polymers
Drug Delivery
Electrochromism
Inorganic Polymers
Smart Materials
References
Inorganic polymers | Smart inorganic polymer | [
"Chemistry"
] | 2,008 | [
"Inorganic polymers",
"Inorganic compounds"
] |
44,039,518 | https://en.wikipedia.org/wiki/Tree%20box%20filter | A tree box filter is a best management practice (BMP) or stormwater treatment system widely implemented along sidewalks, street curbs, and car parks. They are used to control the volume and amount of urban runoff pollutants entering into local waters, by providing areas where water can collect and naturally infiltrate or seep into the ground. Such systems usually consist of a tree planted in a soil media, contained in a small, square, concrete box. Tree box filters are popular bioretention and infiltration practices, as they collect, retain, and filter runoff as it passes through vegetation and microorganisms in the soil. The water is then either consumed by the tree or transferred into the storm drain system.
Construction
Design considerations
Before construction of the tree box filter, several factors must be considered to maximize the effectiveness and impact of the system. Such factors include:
area available
area of coverage
types of contaminants
level of rainfall
aesthetic appeal
maintenance
budget.
In order to accommodate such considerations, the location, design, and type of material of the box filter may be altered.
Location
Tree box filters are designed to accommodate a low volume of rainfall. A filter surface area of can only cover up to of impervious or nonporous surface. As a result, strategically positioning multiple tree boxes around the area of coverage is vital, when trying to reduce costs and work.
Design
Tree box filters consist of four main parts.
Tree
Open-Bottom Concrete Box
Porous Soil Mix
Underdrain
The tree is planted in a soil mixture of construction sand, unscreened topsoil, and compost. The soil layer must be deep enough to accommodate the nutrient and space requirements of the tree. It is recommended that there be two cubic feet of soil for every square foot of tree canopy. Therefore, a five-by-six-foot tree box must contain at least two feet of soil media in order to sustain a tree with a canopy of thirty square feet. Underneath the layer of soil lies the underdrain. This consists of a layer of crushed stone, at least two feet (0.6 meters) deep, surrounding a perforated drainage pipe. The drainage pipe connects to the municipality's existing storm drain system, allowing excess water to flow out, preventing overflow. These layers are encapsulated in a concrete box, hence the name tree box filter. Optionally, a metal grate may be placed on top of the concrete box, blocking large pieces of debris from entering the soil layer. When the tree box filter is located next to the street, a storm drain inlet may be implemented, allowing stormwater to enter from the street gutter. Stormwater from urban roof runoff can also be channeled to the tree pits via roof drainage pipes.
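The sizing example in this passage can be checked with a few lines of arithmetic (a sketch using the figures given in the text):

```python
# Soil volume for the example tree box: 5 ft x 6 ft footprint, 2 ft of soil,
# supporting a tree canopy of 30 square feet.
box_length_ft, box_width_ft, soil_depth_ft = 5.0, 6.0, 2.0
canopy_area_sqft = 30.0

soil_volume_cuft = box_length_ft * box_width_ft * soil_depth_ft
ratio = soil_volume_cuft / canopy_area_sqft   # cu ft of soil per sq ft of canopy
print(soil_volume_cuft, ratio)   # 60.0 cubic feet, i.e. 2.0 cu ft per sq ft
```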
Installation procedure
Installing a tree box filter may take only two to three days to accomplish, as all the necessary layers are delivered inside the box, ready to plant. First, preexisting, underground pipes and cables around the work site are marked out. Next, a rubber-tire backhoe will excavate the area where the box will be placed. Next, the concrete box containing all the main parts, except the tree, is set into the hole on a leveled base. Then underdrain pipes are connected, and any gaps around the tree box are refilled. Finally, the tree is planted, and if included, the metal grate is installed. Final tests and inspections of the tree box filter's function conclude the installation procedure. Depending on the location and area of coverage, installation can cost between $12,500 and $65,000.
Maintenance
Maintenance of tree box filters may include, but is not limited to
Tree health and safety inspections
Pruning or trimming
Replacing mulch and fertilizer
Litter removal
Stake removal
Tree straightening
The cost of care can range from $100 to $500 per year for each tree box filter. In order to extend the life and efficiency of the tree box filter, it is recommended that inspections be conducted yearly.
Filtration efficiency
When implemented properly, tree box filters can significantly reduce the amount of pollutant in the stormwater that it infiltrates.
The ratio of pollutants exiting versus entering the tree box filter is known as the load ratio. Tree box filters show load ratios of 0.1 to 0.3 in the reduction of soluble metals, 0.35 to 0.6 in the reduction of organics and nutrients, and 0.09 in the reduction of total suspended solids.
Tree box filters remove about 80-90% of total suspended solids, 38-65% of nitrogen, 50-80% of phosphorus, 54% of zinc, 40% of copper, and 90% of petroleum hydrocarbons. Based on these results, it can be concluded that tree box filters significantly reduce the amount of pollutants in water that flows through the system, greatly lessening the impact on local surface waters.
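Translating the quoted efficiencies into effluent loads is straightforward. In the sketch below the influent concentrations are illustrative assumptions, and the removal fractions use midpoints of the ranges quoted above:

```python
# Effluent = influent * (1 - removal). Influent values are assumed examples;
# removal fractions are midpoints of the ranges quoted in the text.
removal_pct = {
    "total suspended solids": 85.0,   # 80-90%
    "nitrogen": 51.5,                 # 38-65%
    "phosphorus": 65.0,               # 50-80%
    "zinc": 54.0,
}
influent_mg_per_L = {"total suspended solids": 100.0, "nitrogen": 2.0,
                     "phosphorus": 0.5, "zinc": 0.2}

effluent = {p: c * (1 - removal_pct[p] / 100)
            for p, c in influent_mg_per_L.items()}
for p, c in effluent.items():
    print(f"{p}: {c:.3f} mg/L")
```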
References
External links
Overview of stormwater infiltration - Minnesota Stormwater Manual
Stormwater Management Practices at EPA Facilities - US Environmental Protection Agency
Drainage
Environmental engineering
Environmental soil science
Filters
Hydrology and urban planning
Landscape architecture
Stormwater management
Sustainable design
Trees | Tree box filter | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,040 | [
"Hydrology",
"Water treatment",
"Stormwater management",
"Chemical equipment",
"Chemical engineering",
"Landscape architecture",
"Filters",
"Water pollution",
"Civil engineering",
"Hydrology and urban planning",
"Filtration",
"Environmental engineering",
"Environmental soil science",
"Arch... |
44,042,166 | https://en.wikipedia.org/wiki/INTEROP-VLab | The INTEROP V-Lab (International Virtual Laboratory for Enterprise Interoperability) is a network of organizations, which links scientists, research centers, representatives of industry, and small and medium-sized enterprises. The members come from several European countries as well as China and represent 250 scientists and 70 organizations.
INTEROP-VLab was founded in 2007 and is the continuation of the INTEROP Network of Excellence (Interoperability research for networked enterprise applications and software), a research initiative of the European Union founded in the early 2000s, which developed the Model Driven Interoperability (MDI) Framework.
In 2012 Guy Doumeingts was appointed general manager of INTEROP-VLab.
Overview
INTEROP-VLab is an initiative that is working within the context of interoperability, in particular the so-called Enterprise Interoperability (EI). It aims to link together in a network researchers and research institutions and industry representatives, engaged in developing approaches and integrative solutions to connect heterogeneous industrial systems, public administrations or organizations.
The basic objectives of INTEROP-VLab are the defragmentation of the European research and scientific landscape and support for cooperation with other regions of the world:
through support of research, teaching and innovation in the field of Enterprise Interoperability
through the work of a center of excellence in the field of Enterprise Interoperability worldwide
Activities
The activities of INTEROP-VLab consist of research, teaching and training services and standardization consultancy.
The independent research within INTEROP-VLab is based on the following three key components:
Information and communication technology as a technological foundation of interoperable systems
Modeling of processes, organizations and organizational units to develop and implement appropriate structures for interoperable companies and public organizations
The development and specification of ontologies to ensure semantic consistency within organizations, with the following priorities:
Theoretical groundwork
Investigation and development of key technologies
Development of exemplary applications
Solutions developed within INTEROP-VLab include:
IV kmap (INTEROP-VLab Knowledge Map), a competence management system for the EI domain that is based on an ontology-based search engine and allows users to find relevant documents and content relating to a particular knowledge domain.
The IV e-learning platform, which offers 50 web courses and seminars in the fields of EI, modeling, ontologies, and architecture & platforms
Members
The members of INTEROP-VLab are organized in poles covering geographic regions within a state or group of states. Activities of each organization are coordinated at the European level. The members of INTEROP-VLab are:
INTEROP-VLab poles France Grand Sud-Ouest (PGSO)
DFI (German Forum for interoperability eV)
INTEROP-VLab UK Pole
INTEROP-VLab China Pole
INTEROP-VLab INTERVAL Pole
INTEROP-VLab Portuguese poles (INTEROP-ptrp)
INTEROP-VLab.IT
INTEROP North Pole
References
External links
INTEROP-VLab Website
Enterprise modelling
International scientific organizations
Organizations established in 2007
International organisations based in Belgium | INTEROP-VLab | [
"Engineering"
] | 616 | [
"Systems engineering",
"Enterprise modelling"
] |
44,044,088 | https://en.wikipedia.org/wiki/Paradox%20of%20radiation%20of%20charged%20particles%20in%20a%20gravitational%20field | The paradox of a charge in a gravitational field is an apparent physical paradox in the context of general relativity. A charged particle at rest in a gravitational field, such as on the surface of the Earth, must be supported by a force to prevent it from falling. According to the equivalence principle, it should be indistinguishable from a particle in flat spacetime being accelerated by a force. Maxwell's equations say that an accelerated charge should radiate electromagnetic waves, yet such radiation is not observed for stationary particles in gravitational fields.
One of the first to study this problem was Max Born in his 1909 paper about the consequences of a charge in uniformly accelerated frame. Earlier concerns and possible solutions were raised by Wolfgang Pauli (1918), Max von Laue (1919), and others, but the most recognized work on the subject is the resolution of Thomas Fulton and Fritz Rohrlich in 1960.
Background
It is a standard result from Maxwell's equations of classical electrodynamics that an accelerated charge radiates. That is, it produces an electric field that falls off as in addition to its rest-frame Coulomb field. This radiation electric field has an accompanying magnetic field, and the whole oscillating electromagnetic radiation field propagates independently of the accelerated charge, carrying away momentum and energy. The energy in the radiation is provided by the work that accelerates the charge.
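The radiated power mentioned here is given by the Larmor formula, P = q²a²/(6πε₀c³). As a side calculation (not part of the paradox's resolution), evaluating it for an electron accelerated at 1 g shows how minuscule the effect would be; constants are rounded CODATA values:

```python
import math

# Larmor radiated power P = q^2 a^2 / (6 pi eps0 c^3) for an electron at 1 g.
q = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 2.99792458e8         # speed of light, m/s
g = 9.80665              # standard gravity, m/s^2

P = q**2 * g**2 / (6 * math.pi * eps0 * c**3)
print(P)   # ~5.5e-52 W, immeasurably small for any laboratory charge
```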
The theory of general relativity is built on the equivalence principle of gravitation and inertia. This principle states that it is impossible to distinguish through any local measurement whether one is in a gravitational field or being accelerated. An elevator out in deep space, far from any planet, could mimic a gravitational field to its occupants if it could be accelerated continuously "upward". Whether the acceleration is from motion or from gravity makes no difference in the laws of physics. One can also understand it in terms of the equivalence of so-called gravitational mass and inertial mass. The mass in Newton's law of universal gravitation (gravitational mass) is the same as the mass in Newton's second law of motion (inertial mass). They cancel out when equated, with the result discovered by Galileo Galilei in 1638, that all bodies fall at the same rate in a gravitational field, independent of their mass. A famous demonstration of this principle was performed on the Moon during the Apollo 15 mission, when a hammer and a feather were dropped at the same time and struck the surface at the same time.
Closely tied in with this equivalence is the fact that gravity vanishes in free fall. For objects falling in an elevator whose cable is cut, all gravitational forces vanish, and things begin to look like the free-floating absence of forces one sees in videos from the International Space Station. It is a linchpin of general relativity that everything must fall together in free fall. Just as with acceleration versus gravity, no experiment should be able to distinguish the effects of free fall in a gravitational field, and being out in deep space far from any forces.
Statement of the paradox
Putting together these two basic facts of general relativity and electrodynamics, we seem to encounter a paradox. For if we dropped a neutral particle and a charged particle together in a gravitational field, the charged particle should begin to radiate as it is accelerated under gravity, thereby losing energy and slowing relative to the neutral particle. Then a free-falling observer could distinguish free fall from the true absence of forces, because a charged particle in a free-falling laboratory would begin to be pulled upward relative to the neutral parts of the laboratory, even though no obvious electric fields were present.
Equivalently, we can think about a charged particle at rest in a laboratory on the surface of the Earth. In order to be at rest, it must be supported by something which exerts an upward force on it. This system is equivalent to being in outer space accelerated constantly upward at 1 g, and we know that a charged particle accelerated upward at 1 g would radiate. However, we do not see radiation from charged particles at rest in the laboratory. It would seem that we could distinguish between a gravitational field and acceleration, because an electric charge apparently only radiates when it is being accelerated through motion, but not through gravitation.
Resolution by Rohrlich
The resolution of this paradox, like the twin paradox and ladder paradox, comes through appropriate care in distinguishing frames of reference. This section follows the analysis of Fritz Rohrlich (1965), who shows that a charged particle and a neutral particle fall equally fast in a gravitational field. Likewise, a charged particle at rest in a gravitational field does not radiate in its rest frame, but it does so in the frame of a free-falling observer. The equivalence principle is preserved for charged particles.
The key is to realize that the laws of electrodynamics, Maxwell's equations, hold only within an inertial frame, that is, in a frame in which all forces act locally and there is no net acceleration when the net local forces are zero. The frame could be free fall under gravity, or deep space far from any forces. The surface of the Earth is not an inertial frame, as it is being constantly accelerated. We know that the surface of the Earth is not an inertial frame because an object at rest there may not remain at rest: objects at rest fall to the ground when released. Gravity is a non-local fictitious "force" within the Earth's surface frame, just like centrifugal "force". So we cannot naively formulate expectations based on Maxwell's equations in this frame. It is remarkable that we now understand the special-relativistic Maxwell equations do not hold, strictly speaking, on the surface of the Earth, even though they were discovered in electrical and magnetic experiments conducted in laboratories on the surface of the Earth. (This is similar to how mechanics in an inertial frame does not strictly apply on the surface of the Earth, even disregarding gravity, because of the Earth's rotation (cf. the Foucault pendulum), yet mechanics was originally developed from ground-based experiments and intuition.) Therefore, in this case, we cannot apply Maxwell's equations to the description of a falling charge relative to a "supported", non-inertial observer.
Maxwell's equations can be applied relative to an observer in free fall, because free fall is an inertial frame. So the starting point of considerations is to work in the free-fall frame in a gravitational field—a "falling" observer. In the free-fall frame, Maxwell's equations have their usual, flat-spacetime form for the falling observer. In this frame, the electric and magnetic fields of the charge are simple: the falling electric field is just the Coulomb field of a charge at rest, and the magnetic field is zero. As an aside, note that we are building in the equivalence principle from the start, including the assumption that a charged particle falls just as fast as a neutral particle.
The fields measured by an observer supported on the surface of the Earth are different. Given the electric and magnetic fields in the falling frame, we have to transform those fields into the frame of the supported observer. This manipulation is not a Lorentz transformation, because the two frames have a relative acceleration. Instead, the machinery of general relativity must be used.
In this case the gravitational field is fictitious because it can be "transformed away" by appropriate choice of coordinate system in the falling frame. Unlike the total gravitational field of the Earth, here we are assuming that spacetime is locally flat, so that the curvature tensor vanishes. Equivalently, the lines of gravitational acceleration are everywhere parallel, with no convergences measurable in the laboratory. Then the most general static, flat-space, cylindrical metric and line element can be written:
where is the speed of light, is proper time, are the usual coordinates of space and time, is the acceleration of the gravitational field, and is an arbitrary function of the coordinate but must approach the observed Newtonian value of . This formula is the metric for the gravitational field measured by the supported observer.
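A representative member of this class (an illustrative choice; as stated above, Rohrlich's analysis allows an arbitrary function with the correct Newtonian limit) is the Møller metric of a uniformly accelerated frame:

```latex
ds^2 = -\left(1 + \frac{g z}{c^2}\right)^2 c^2\,dt^2 + dx^2 + dy^2 + dz^2
```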
Meanwhile, the metric in the frame of the falling observer is simply the Minkowski metric:
From these two metrics Rohrlich constructs the coordinate transformation between them:
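For the illustrative Møller form of the supported-frame metric, ds² = −(1 + gz/c²)² c² dt² + dx² + dy² + dz², the flat-space transformation to Minkowski coordinates (T, X, Y, Z) is the standard Rindler-type one (shown here as a sketch; Rohrlich's own gauge may differ):

```latex
cT = \left(\frac{c^2}{g} + z\right)\sinh\frac{g t}{c}, \qquad
Z  = \left(\frac{c^2}{g} + z\right)\cosh\frac{g t}{c} - \frac{c^2}{g}, \qquad
X = x, \qquad Y = y
```

One can check that this maps t = 0 to T = 0, Z = z, and that for small gt/c it reduces to Z ≈ z + gt²/2, the Newtonian free-fall relation seen from the supported frame.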
When this coordinate transformation is applied to the electric and magnetic fields of the charge in the rest frame, it is found to be radiating. Rohrlich emphasizes that this charge remains at rest in its free-fall frame, just as a neutral particle would. Furthermore, the radiation rate for this situation is Lorentz-invariant, but it is not invariant under the coordinate transformation above because it is not a Lorentz transformation.
To see whether the supported charge should radiate, we start again in the falling frame.
As observed from the free-falling frame, the supported charge appears to be accelerated uniformly upward. The case of constant acceleration of a charge is treated by Rohrlich. He finds that a charge uniformly accelerated at rate has a radiation rate given by the Lorentz invariant:
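In Gaussian units, this invariant rate is the relativistic generalization of the Larmor formula; for a proper acceleration α it takes the form (a standard result, quoted here for concreteness):

```latex
R = \frac{2}{3}\,\frac{e^2}{c^3}\,\alpha^2
```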
The corresponding electric and magnetic fields of an accelerated charge are also given in Rohrlich. To find the fields of the charge in the supporting frame, the fields of the uniformly accelerated charge are transformed according to the coordinate transformation previously given. When that is done, one finds no radiation in the supporting frame from a supported charge, because the magnetic field is zero in this frame. Rohrlich does note that the gravitational field slightly distorts the Coulomb field of the supported charge, but not enough to be observable. So although the Coulomb law was discovered in a supporting frame, general relativity tells us that the field of such a charge is not precisely .
Fate of the radiation
The radiation from the supported charge viewed in the free-falling frame (or vice versa) is something of a curiosity: one might ask where it goes. David G. Boulware (1980) finds that the radiation goes into a region of spacetime inaccessible to the co-accelerating, supported observer. In effect, a uniformly accelerated observer has an event horizon, and there are regions of spacetime inaccessible to this observer. Camila de Almeida and Alberto Saa (2006) have a more accessible treatment of the event horizon of the accelerated observer.
References
Books
Physical paradoxes
General relativity
Radiation
Relativistic paradoxes | Paradox of radiation of charged particles in a gravitational field | [
"Physics",
"Chemistry"
] | 2,129 | [
"Transport phenomena",
"Physical phenomena",
"General relativity",
"Waves",
"Radiation",
"Theory of relativity"
] |
36,949,586 | https://en.wikipedia.org/wiki/Basset%E2%80%93Boussinesq%E2%80%93Oseen%20equation | In fluid dynamics, the Basset–Boussinesq–Oseen equation (BBO equation) describes the motion of – and forces on – a small particle in unsteady flow at low Reynolds numbers. The equation is named after Joseph Valentin Boussinesq, Alfred Barnard Basset and Carl Wilhelm Oseen.
Formulation
The BBO equation, in the formulation as given by and , pertains to a small spherical particle of diameter having mean density whose center is located at . The particle moves with Lagrangian velocity in a fluid of density , dynamic viscosity and Eulerian velocity field . The fluid velocity field surrounding the particle consists of the undisturbed, local Eulerian velocity field plus a disturbance field – created by the presence of the particle and its motion with respect to the undisturbed field. For very small particle diameter the latter is locally a constant whose value is given by the undisturbed Eulerian field evaluated at the location of the particle center, . The small particle size also implies that the disturbed flow can be found in the limit of very small Reynolds number, leading to a drag force given by Stokes' drag. Unsteadiness of the flow relative to the particle results in force contributions by added mass and the Basset force. The BBO equation states:
This is Newton's second law, in which the left-hand side is the rate of change of the particle's linear momentum, and the right-hand side is the summation of forces acting on the particle. The terms on the right-hand side are, respectively, the:
Stokes' drag,
Froude–Krylov force due to the pressure gradient in the undisturbed flow, with the gradient operator and the undisturbed pressure field,
added mass,
Basset force and
other forces acting on the particle, such as gravity, etc.
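Written out in a common notation (a reconstruction for concreteness: d the particle diameter, ρ_p its density, v its velocity, and ρ_f, μ, u, p the fluid density, viscosity, undisturbed velocity and pressure; the source's exact symbols may differ), the five terms above appear as:

```latex
\frac{\pi}{6}\rho_p d^3 \frac{d\mathbf{v}}{dt}
  = 3\pi\mu d\,(\mathbf{u}-\mathbf{v})
  - \frac{\pi}{6} d^3\,\nabla p
  + \frac{\pi}{12}\rho_f d^3 \frac{d}{dt}(\mathbf{u}-\mathbf{v})
  + \frac{3}{2} d^2 \sqrt{\pi\rho_f\mu} \int_{t_0}^{t}
      \frac{1}{\sqrt{t-\tau}}\,\frac{d(\mathbf{u}-\mathbf{v})}{d\tau}\,d\tau
  + \sum_k \mathbf{F}_k
```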
The particle Reynolds number
has to be less than unity, , for the BBO equation to give an adequate representation of the forces on the particle.
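This validity condition is easy to evaluate for concrete parameters (a sketch; the numbers below, a 50 μm droplet slipping at 1 cm/s through air, are illustrative):

```python
def particle_reynolds(rho_f, slip_speed, d, mu):
    """Particle Reynolds number Re_p = rho_f * |u - v| * d / mu."""
    return rho_f * slip_speed * d / mu

# Illustrative values: air density and viscosity, 50-micron particle, 1 cm/s slip
re_p = particle_reynolds(rho_f=1.2, slip_speed=0.01, d=50e-6, mu=1.8e-5)
print(f"Re_p = {re_p:.3f}, Stokes regime: {re_p < 1}")
```

Here Re_p ≈ 0.03, comfortably inside the Stokes regime where the BBO equation applies.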
It is also suggested to estimate the pressure gradient from the Navier–Stokes equations:
with the material derivative of . Note that in the Navier–Stokes equations is the fluid velocity field, while, as indicated above, in the BBO equation is the velocity of the undisturbed flow as seen by an observer moving with the particle. Thus, even in steady Eulerian flow, depends on time if the Eulerian field is non-uniform.
Notes
References
Equations of fluid dynamics | Basset–Boussinesq–Oseen equation | [
"Physics",
"Chemistry"
] | 503 | [
"Equations of fluid dynamics",
"Equations of physics",
"Fluid dynamics"
] |
36,950,793 | https://en.wikipedia.org/wiki/Construction%20of%20a%20complex%20null%20tetrad | Calculations in the Newman–Penrose (NP) formalism of general relativity normally begin with the construction of a complex null tetrad , where is a pair of real null vectors and is a pair of complex null vectors. These tetrad vectors respect the following normalization and metric conditions assuming the spacetime signature
Only after the tetrad gets constructed can one move forward to compute the directional derivatives, spin coefficients, commutators, Weyl-NP scalars , Ricci-NP scalars and Maxwell-NP scalars and other quantities in the NP formalism. There are three commonly used methods to construct a complex null tetrad:
All four tetrad vectors are nonholonomic combinations of orthonormal tetrads;
(or ) are aligned with the outgoing (or ingoing) tangent vector field of null radial geodesics, while and are constructed via the nonholonomic method;
A tetrad which is adapted to the spacetime structure from a 3+1 perspective, with its general form being assumed and tetrad functions therein to be solved.
In the context below, it will be shown how these three methods work.
Note: In addition to the convention employed in this article, the other one in use is .
Nonholonomic tetrad
The primary method to construct a complex null tetrad is via combinations of orthonormal bases. For a spacetime with an orthonormal tetrad ,
the covectors of the nonholonomic complex null tetrad can be constructed by
and the tetrad vectors can be obtained by raising the indices of via the inverse metric .
Remark: The nonholonomic construction is actually in accordance with the local light cone structure.
Example: A nonholonomic tetrad
Given a spacetime metric of the form (in signature(-,+,+,+))
the nonholonomic orthonormal covectors are therefore
and the nonholonomic null covectors are therefore
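The normalization and null conditions of such a nonholonomic tetrad can be verified numerically. The sketch below (illustrative; written for the flat metric diag(−1, 1, 1, 1) rather than the general metric above) builds l, n, m from an orthonormal basis and checks the NP conditions:

```python
# Diagonal Minkowski metric, signature (-,+,+,+)
g = [-1.0, 1.0, 1.0, 1.0]

def inner(a, b):
    """NP inner product g_ab a^a b^b (no complex conjugation)."""
    return sum(gi * ai * bi for gi, ai, bi in zip(g, a, b))

s = 1 / 2 ** 0.5
# Nonholonomic combinations of the orthonormal basis (e0, e1, e2, e3)
l = [s, s, 0.0, 0.0]         # (e0 + e1)/sqrt(2)
n = [s, -s, 0.0, 0.0]        # (e0 - e1)/sqrt(2)
m = [0.0, 0.0, s, s * 1j]    # (e2 + i e3)/sqrt(2)
mbar = [0.0, 0.0, s, -s * 1j]

# All four vectors are null ...
for v in (l, n, m, mbar):
    assert abs(inner(v, v)) < 1e-12
# ... and the only nonzero cross products are l.n = -1 and m.mbar = +1
assert abs(inner(l, n) + 1) < 1e-12
assert abs(inner(m, mbar) - 1) < 1e-12
assert abs(inner(l, m)) < 1e-12 and abs(inner(n, m)) < 1e-12
print("NP tetrad conditions satisfied")
```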
la (na) aligned with null radial geodesics
In Minkowski spacetime, the nonholonomically constructed null vectors respectively match the outgoing and ingoing null radial rays. As an extension of this idea in generic curved spacetimes, can still be aligned with the tangent vector field of a null radial congruence. However, this type of adaptation only works for , or coordinates where the radial behaviors can be well described, with and denoting the outgoing (retarded) and ingoing (advanced) null coordinates, respectively.
Example: Null tetrad for Schwarzschild metric in Eddington-Finkelstein coordinates reads
so the Lagrangian for null radial geodesics of the Schwarzschild spacetime is
which has an ingoing solution and an outgoing solution . Now, one can construct a complex null tetrad which is adapted to the ingoing null radial geodesics:
and the dual basis covectors are therefore
Here we utilized the cross-normalization condition as well as the requirement that should span the induced metric for cross-sections of {v=constant, r=constant}, where and are not mutually orthogonal. Also, the remaining two tetrad (co)vectors are constructed nonholonomically. With the tetrad defined, one is now able to respectively find out the spin coefficients, Weyl-NP scalars and Ricci-NP scalars that
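The ingoing and outgoing radial solutions quoted above can be checked directly against the null condition. The sketch below assumes the standard ingoing Eddington–Finkelstein line element, ds² = −(1 − 2M/r) dv² + 2 dv dr + r² dΩ², in units G = c = 1 (an assumption for illustration):

```python
def null_lagrangian(r, vdot, rdot, M=1.0):
    """2L = -(1 - 2M/r) vdot^2 + 2 vdot rdot for radial rays (theta, phi fixed)."""
    return -(1 - 2 * M / r) * vdot**2 + 2 * vdot * rdot

M = 1.0
for r in (2.5, 4.0, 10.0, 100.0):
    # ingoing solution: v = const (vdot = 0)
    assert abs(null_lagrangian(r, vdot=0.0, rdot=-1.0, M=M)) < 1e-12
    # outgoing solution: dr/dv = (1 - 2M/r)/2
    vdot = 1.0
    rdot = (1 - 2 * M / r) / 2 * vdot
    assert abs(null_lagrangian(r, vdot, rdot, M=M)) < 1e-12
print("both radial solutions are null")
```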
Example: Null tetrad for extremal Reissner–Nordström metric in Eddington-Finkelstein coordinates reads
so the Lagrangian is
For null radial geodesics with , there are two solutions
(ingoing) and (outgoing),
and therefore the tetrad for an ingoing observer can be set up as
With the tetrad defined, we are now able to work out the spin coefficients, Weyl-NP scalars and Ricci-NP scalars that
Tetrads adapted to the spacetime structure
At some typical boundary regions such as null infinity, timelike infinity, spacelike infinity, black hole horizons and cosmological horizons, null tetrads adapted to spacetime structures are usually employed to achieve the most succinct Newman–Penrose descriptions.
Newman-Unti tetrad for null infinity
For null infinity, the classic Newman-Unti (NU) tetrad is employed to study asymptotic behaviors at null infinity,
where are tetrad functions to be solved. For the NU tetrad, the foliation leaves are parameterized by the outgoing (retarded) null coordinate with , and is the normalized affine coordinate along ; the ingoing null vector acts as the null generator at null infinity with . The coordinates comprise two real affine coordinates and two complex stereographic coordinates , where are the usual spherical coordinates on the cross-section (as shown in ref., complex stereographic rather than real isothermal coordinates are used just for the convenience of completely solving NP equations).
Also, for the NU tetrad, the basic gauge conditions are
Adapted tetrad for exteriors and near-horizon vicinity of isolated horizons
For a more comprehensive view of black holes in quasilocal definitions, adapted tetrads which can be smoothly transited from the exterior to the near-horizon vicinity and to the horizons are required. For example, for isolated horizons describing black holes in equilibrium with their exteriors, such a tetrad and the related coordinates can be constructed this way. Choose the first real null covector as the gradient of foliation leaves
where is the ingoing (advanced) Eddington–Finkelstein-type null coordinate, which labels the foliation cross-sections and acts as an affine parameter with regard to the outgoing null vector field , i.e.
Introduce the second coordinate as an affine parameter along the ingoing null vector field , which obeys the normalization
Now, the first real null tetrad vector is fixed. To determine the remaining tetrad vectors and their covectors, besides the basic cross-normalization conditions, it is also required that: (i) the outgoing null normal field acts as the null generators; (ii) the null frame (covectors) are parallelly propagated along ; (iii) spans the {t=constant, r=constant} cross-sections which are labeled by real isothermal coordinates .
Tetrads satisfying the above restrictions can be expressed in the general form that
The gauge conditions in this tetrad are
Remark: Unlike Schwarzschild-type coordinates, here r=0 represents the horizon, while r>0 (r<0) corresponds to the exterior (interior) of an isolated horizon. One often Taylor-expands a scalar function about the horizon r=0,
where refers to its on-horizon value. The very coordinates used in the adapted tetrad above are actually the Gaussian null coordinates employed in studying near-horizon geometry and mechanics of black holes.
See also
Newman–Penrose formalism
References
General relativity
Mathematical methods in general relativity | Construction of a complex null tetrad | [
"Physics"
] | 1,458 | [
"General relativity",
"Theory of relativity"
] |
36,951,863 | https://en.wikipedia.org/wiki/Near-horizon%20metric | The near-horizon metric (NHM) refers to the near-horizon limit of the global metric of a black hole. NHMs play an important role in studying the geometry and topology of black holes, but are only well defined for extremal black holes. NHMs are expressed in Gaussian null coordinates, and one important property is that the dependence on the coordinate is fixed in the near-horizon limit.
NHM of extremal Reissner–Nordström black holes
The metric of extremal Reissner–Nordström black hole is
Taking the near-horizon limit
and then omitting the tildes, one obtains the near-horizon metric
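The limit can be checked numerically: for the extremal lapse f(r) = (1 − M/r)², the ratio f(M + x)/(x²/M²) tends to 1 as the near-horizon coordinate x = r − M shrinks (a sketch in units G = c = 1):

```python
def extremal_lapse(r, M=1.0):
    """f(r) = (1 - M/r)^2 for the extremal Reissner-Nordstrom metric."""
    return (1 - M / r) ** 2

M = 1.0
for x in (1e-2, 1e-4, 1e-6):  # near-horizon coordinate x = r - M
    ratio = extremal_lapse(M + x, M) / (x**2 / M**2)
    # ratio -> 1 as x -> 0, confirming the leading x^2/M^2 behavior
    assert abs(ratio - 1) < 3 * x / M
print("near-horizon limit reproduces f ~ x^2 / M^2")
```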
NHM of extremal Kerr black holes
The metric of extremal Kerr black hole () in Boyer–Lindquist coordinates can be written in the following two enlightening forms,
where
Taking the near-horizon limit
and omitting the tildes, one obtains the near-horizon metric (this is also called the extremal Kerr throat)
NHM of extremal Kerr–Newman black holes
Extremal Kerr–Newman black holes () are described by the metric
where
Taking the near-horizon transformation
and omitting the tildes, one obtains the NHM
NHMs of generic black holes
In addition to the NHMs of the extremal Kerr–Newman family metrics discussed above, all stationary NHMs can be written in the form
where the metric functions are independent of the coordinate r, denotes the intrinsic metric of the horizon, and are isothermal coordinates on the horizon.
Remark: In Gaussian null coordinates, the black hole horizon corresponds to .
See also
Extremal black hole
Reissner–Nordström metric
Kerr metric
Kerr–Newman metric
References
General relativity
Black holes | Near-horizon metric | [
"Physics",
"Astronomy"
] | 362 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"General relativity",
"Density",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects"
] |
36,952,575 | https://en.wikipedia.org/wiki/Plasma-facing%20material | In nuclear fusion power research, the plasma-facing material (or materials) (PFM) is any material used to construct the plasma-facing components (PFC), those components exposed to the plasma within which nuclear fusion occurs, and particularly the material used for the lining the first wall or divertor region of the reactor vessel.
Plasma-facing materials for fusion reactor designs must support the overall steps of energy generation, which include:
Generating heat through fusion,
Capturing heat in the first wall,
Transferring heat away at a faster rate than it is captured.
Generating electricity.
In addition, PFMs have to operate over the lifetime of a fusion reactor vessel while withstanding harsh environmental conditions, such as:
Ion bombardment causing physical and chemical sputtering and therefore erosion.
Ion implantation causing displacement damage and chemical composition changes.
High heat fluxes (e.g. 10 MW/m²) due to ELMs and other transients.
Limited tritium codeposition and sequestration.
Stable thermomechanical properties under operation.
Limited number of negative nuclear transmutation effects.
Currently, fusion reactor research focuses on improving efficiency and reliability in heat generation and capture and on raising the rate of transfer. Generating electricity from heat is beyond the scope of current research, due to existing efficient heat-transfer cycles, such as heating water to operate steam turbines that drive electrical generators.
Current reactor designs are fueled by deuterium-tritium (D-T) fusion reactions, which produce high-energy neutrons that can damage the first wall; however, high-energy neutrons (14.1 MeV) are needed for blanket and tritium-breeder operation. Tritium is not a naturally abundant isotope due to its short half-life; therefore, for a fusion D-T reactor it will need to be bred by the nuclear reaction of lithium (Li), boron (B), or beryllium (Be) isotopes with high-energy neutrons that collide within the first wall.
Requirements
Most magnetic confinement fusion devices (MCFD) consist of several key components in their technical designs, including:
Magnet system: confines the deuterium-tritium fuel in the form of plasma and in the shape of a torus.
Vacuum vessel: contains the core fusion plasma and maintains fusion conditions.
First wall: positioned between the plasma and magnets in order to protect outer vessel components from radiation damage.
Cooling system: removes heat from the confinement and transfers heat from the first wall.
The core fusion plasma must not actually touch the first wall. ITER and many other current and projected fusion experiments, particularly those of the tokamak and stellarator designs, use intense magnetic fields in an attempt to achieve this, although plasma instability problems remain. Even with stable plasma confinement, however, the first wall material would be exposed to a neutron flux higher than in any current nuclear power reactor, which leads to two key problems in selecting the material:
It must withstand this neutron flux for a sufficient period of time to be economically viable.
It must not become sufficiently radioactive so as to produce unacceptable amounts of nuclear waste when lining replacement or plant decommissioning eventually occurs.
The lining material must also:
Allow the passage of a large heat flux.
Be compatible with intense and fluctuating magnetic fields.
Minimize contamination of the plasma.
Be produced and replaced at a reasonable cost.
Some critical plasma-facing components, such as and in particular the divertor, are typically protected by a different material than that used for the major area of the first wall.
Proposed materials
Materials currently in use or under consideration include:
Tungsten
Molybdenum
Beryllium
Lithium
Tin
Boron carbide
Silicon carbide
Carbon fibre composite (CFC)
Graphite
Multi-layer tiles of several of these materials are also being considered and used, for example:
A thin molybdenum layer on graphite tiles.
A thin tungsten layer on graphite tiles.
A tungsten layer on top of a molybdenum layer on graphite tiles.
A boron carbide layer on top of CFC tiles.
A liquid lithium layer on graphite tiles.
A liquid lithium layer on top of a boron layer on graphite tiles.
A liquid lithium layer on tungsten-based solid PFC surfaces or divertors.
Graphite was used for the first wall material of the Joint European Torus (JET) at its startup (1983), in Tokamak à configuration variable (1992) and in National Spherical Torus Experiment (NSTX, first plasma 1999).
Beryllium was used to reline JET in 2009 in anticipation of its proposed use in ITER.
Tungsten is used for the divertor in JET, and will be used for the divertor in ITER. It is also used for the first wall in ASDEX Upgrade. Graphite tiles plasma sprayed with tungsten were used for the ASDEX Upgrade divertor. Studies of tungsten in the divertor have been conducted at the DIII-D facility. These experiments utilized two rings of tungsten isotopes embedded in the lower divertor to characterize tungsten erosion during operation.
Molybdenum is used for the first wall material in Alcator C-Mod (1991).
Liquid lithium (LL) was used to coat the PFC of the Tokamak Fusion Test Reactor (TFTR, 1996).
Considerations
Development of satisfactory plasma-facing materials is one of the key problems still to be solved by current programs.
Plasma-facing materials can be measured for performance in terms of:
Power production for a given reactor size.
Cost to generate electricity.
Self-sufficiency of tritium production.
Availability of materials.
Design and fabrication of the PFC.
Safety in waste disposal and in maintenance.
The International Fusion Materials Irradiation Facility (IFMIF) will particularly address this. Materials developed using IFMIF will be used in DEMO, the proposed successor to ITER.
French Nobel laureate in physics Pierre-Gilles de Gennes said of nuclear fusion, "We say that we will put the sun into a box. The idea is pretty. The problem is, we don't know how to make the box."
Recent developments
Solid plasma-facing materials are known to be susceptible to damage under large heat loads and high neutron flux. If damaged, these solids can contaminate the plasma and decrease plasma confinement stability. In addition, radiation can leak through defects in the solids and contaminate outer vessel components.
Liquid metal plasma-facing components that enclose the plasma have been proposed to address challenges in the PFC. In particular, liquid lithium (LL) has been confirmed to have various properties that are attractive for fusion reactor performance.
Tungsten
Tungsten is widely recognized as the preferred material for plasma-facing components in next-generation fusion devices, largely due to its unique combination of properties and potential for enhancement. Its low erosion rates make it particularly suitable for the high-stress environment of fusion reactors, where it can withstand the intense conditions without degrading rapidly. Additionally, tungsten's low tritium retention through co-deposition and implantation is crucial in fusion contexts, helping to minimize the accumulation of this radioactive isotope.
Another key advantage of tungsten is its high thermal conductivity, essential for managing the extreme heat generated in fusion processes. This property ensures efficient heat dissipation, reducing the risk of damage to the reactor's internal components. Furthermore, the potential for developing radiation-hardened alloys of tungsten presents an opportunity to enhance its durability and performance under the intense radiation conditions typical in fusion reactors.
Despite these benefits, tungsten is not without its drawbacks. One notable issue is its tendency to contribute to high core radiation, a significant challenge in maintaining the plasma performance in fusion reactors. Nevertheless, tungsten has been selected as the plasma-facing material for the ITER project's first-generation divertor, and it is likely to be used for the reactor's first wall as well.
Understanding the behavior of tungsten in fusion environments, including its sourcing, migration, and transport in the scrape-off-layer (SOL), as well as its potential for core contamination, is a complex task. Significant research is ongoing to develop a mature and validated understanding of these dynamics, particularly for predicting the behavior of high-Z (high atomic number) materials like tungsten in next-step tokamak devices.
To address tungsten's intrinsic brittleness, which limits its operational window, a composite material known as W-fibre enhanced W-composite (Wf/W) has been developed. This material incorporates extrinsic toughening mechanisms to significantly increase toughness, as demonstrated in small Wf/W samples.
In the context of future fusion power plants, tungsten stands out for its resilience against erosion, the highest melting point among metals, and relatively benign behavior under neutron irradiation. However, its ductile to brittle transition temperature (DBTT) is a concern, especially as it increases under neutron exposure. To overcome this brittleness, several strategies are being explored, including the use of nanocrystalline materials, tungsten alloying, and W-composite materials.
Particularly notable are the tungsten laminates and fiber-reinforced composites, which leverage tungsten's exceptional mechanical properties. When combined with copper's high thermal conductivity, these composites offer improved thermomechanical properties, extending beyond the operational range of traditional materials like CuCrZr. For applications requiring even higher temperature resilience, tungsten-fibre reinforced tungsten-composites (Wf/W) have been developed, incorporating mechanisms to enhance toughness, thereby broadening the potential applications of tungsten in fusion technology.
Lithium
Lithium (Li) is an alkali metal with a low Z (atomic number). Li has a low first ionization energy of ~5.4 eV and is highly chemically reactive with ion species found in the plasma of fusion reactor cores. In particular, Li readily forms stable lithium compounds with hydrogen isotopes, oxygen, carbon, and other impurities found in D-T plasma.
The fusion reaction of D-T produces charged and neutral particles in the plasma. The charged particles remain magnetically confined to the plasma. The neutral particles are not magnetically confined and will move toward the boundary between the hotter plasma and the colder PFC. Upon reaching the first wall, both neutral particles and charged particles that escaped the plasma become cold neutral particles in gaseous form. An outer edge of cold neutral gas is then “recycled”, or mixed, with the hotter plasma. A temperature gradient between the cold neutral gas and the hot plasma is believed to be the principal cause of anomalous electron and ion transport from the magnetically confined plasma. As recycling decreases, the temperature gradient decreases and plasma confinement stability increases. With better conditions for fusion in the plasma, the reactor performance increases.
Initial use of lithium in the 1990s was motivated by a need for a low-recycling PFC. In 1996, ~0.02 grams of lithium coating was added to the PFC of TFTR, improving the fusion power output and the fusion plasma confinement by a factor of two. On the first wall, lithium reacted with neutral particles to produce stable lithium compounds, resulting in low recycling of cold neutral gas. In addition, lithium contamination in the plasma tended to be well below 1%.
Since 1996, these results have been confirmed by a large number of magnetic confinement fusion devices (MCFD) that have also used lithium in their PFC, for example:
TFTR (US), CDX-U (2005)/LTX(2010) (US), CPD (Japan), HT-7 (China), EAST (China), FTU (Italy).
NSTX (US), T-10 (Russia), T-11M (Russia), TJ-II (Spain), RFX (Italy).
The primary energy generation in fusion reactor designs is from the absorption of high-energy neutrons. Results from these MCFDs highlight additional benefits of liquid lithium coatings for reliable energy generation, including:
Absorb high-energy, or fast-moving, neutrons. About 80% of the energy produced in a fusion reaction of D-T is in the kinetic energy of the newly produced neutron.
Convert kinetic energies of absorbed neutrons into heat on the first wall. The heat that is produced on the first wall can then be removed by coolants in ancillary systems that generate electricity.
Self-sufficient breeding of tritium by nuclear reaction with absorbed neutrons. Neutrons of varying kinetic energies will drive tritium-breeding reactions.
Liquid lithium
Newer developments in liquid lithium are currently being tested, for example:
Coatings made of increasingly complex liquid lithium compounds.
Multi-layered coatings of LL, B, F, and other low-Z metals.
Higher density coatings of LL for use on PFC designed for greater heat loads and neutron flux.
Silicon carbide
Silicon carbide (SiC), a low-Z refractory ceramic material, has emerged as a promising candidate for structural materials in magnetic fusion energy devices. While the remarkable properties of SiC once attracted attention for fusion experiments, past technological limitations hindered its wider use. However, the evolving capabilities of SiC fiber composites (SiCf/SiC) in Gen-IV fission reactors have renewed interest in SiC as a fusion material.
Modern versions of SiCf/SiC combine many desirable attributes found in carbon fiber composites, such as thermo-mechanical strength and high melting point. These versions also present unique benefits: they exhibit minimal degradation of properties when exposed to high levels of neutron damage. However, tritium retention in silicon carbide plasma-facing components is about 1.5-2 times higher than in graphite, leading to reduced fuel efficiency and increased safety risks in fusion reactors. SiC traps more tritium, limiting its availability for fusion and increasing the potential for hazardous buildup, which complicates tritium management. Additionally, the chemical and physical sputtering of SiC is still significant and contributes to the key issue of increasing tritium inventory through co-deposition over time and with particle fluence. For those reasons, carbon-based materials have been ruled out in ITER, DEMO, and other devices.
SiC has demonstrated a tritium diffusivity lower than that observed in other structural materials, a property that can be further optimized by applying a thin layer of monolithic SiC on a SiC/SiCf substrate.
Siliconization, as a wall conditioning method, has been demonstrated to reduce oxygen impurities and enhance plasma performance. Current research efforts focus on understanding SiC behavior under conditions relevant to reactors, providing valuable insights into its potential role in future fusion technology. Silicon-rich films on divertor PFCs were recently developed using Si pellet injections in high confinement mode scenarios in DIII-D, prompting further research into refining the technique for broader fusion applications.
See also
International Fusion Materials Irradiation Facility#Background
Lithium Tokamak Experiment
References
External links
Max Planck Institute project page on PFM
13th International Workshop on Plasma-Facing Materials and Components for Fusion Applications / 1st International Conference on Fusion Energy Materials Science
Materials science
Fusion power | Plasma-facing material | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,127 | [
"Applied and interdisciplinary physics",
"Plasma physics",
"Materials science",
"Fusion power",
"nan",
"Nuclear fusion"
] |
41,170,519 | https://en.wikipedia.org/wiki/Fuel%20cell%20forklift | A fuel cell forklift (also called a fuel cell lift truck) is a fuel cell powered industrial forklift used to lift and transport materials.
History
1960: Allis-Chalmers builds the first fuel cell forklift.
Market
In 2013 there were over 4,000 fuel cell forklifts used in material handling in the United States. As of 2024, approximately 50,000 hydrogen forklifts are in operation worldwide (the bulk of which are in the U.S.), as compared with 1.2 million battery electric forklifts that were purchased in 2021.
Uses
PEM fuel-cell-powered forklifts provide benefits over petroleum-powered forklifts as they produce no local emissions. While LP Gas (propane) forklifts are more popular and often used indoors, they cannot accommodate certain food sector applications. Fuel cell power efficiency (40–50%) is about half that of lithium-ion batteries (80–90%), but they have a higher energy density which may allow forklifts to run longer. Fuel-cell-powered forklifts are often used in refrigerated warehouses as their performance is not as affected by temperature as some types of lithium batteries. Most fuel cells used for material handling purposes are powered by PEM fuel cells, although some DMFC forklifts are coming onto the market. In design the FC units are often made as drop-in replacements.
Research
2013: Toyota Industries (Toyota Shokki) showcased a new fuel cell powered forklift, co-developed with Toyoda Gosei Co., Ltd.
2015: HySA Systems (UWC) showcased a fuel cell powered forklift using a refueling station based on metal hydrides. The customer was Implats, a mining company in South Africa. This was the first project of this type on the African continent.
Standards
SAE J 2601/3 - Fueling Protocols for Gaseous Hydrogen Powered Industrial Forklifts
References
Engineering vehicles
Electric vehicles
Trucks
Material-handling equipment
Fuel cell vehicles | Fuel cell forklift | [
"Engineering"
] | 418 | [
"Engineering vehicles"
] |
41,175,727 | https://en.wikipedia.org/wiki/Transistor%20fault | Transistor Fault model is a Fault model used to describe faults for CMOS logic gates. At transistor level, a transistor may be stuck-short or stuck-open. In stuck-short, a transistor behaves as it is always conducts (or stuck-on), and stuck-open is when a transistor never conducts current (or stuck-off). Stuck-short will usually produce a short between VDD and VSS.
In the example picture, a faulty PMOS transistor (M3, highlighted) in a CMOS NAND gate is shown. If M3 is stuck-open and we apply A=1 and B=0, the output of the circuit floats at high impedance (Z). If M3 is stuck-short, the output is always connected to logic 1, and applying A=B=1 may also short VDD to VSS.
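The faulty behaviour described above can be illustrated with a switch-level sketch of a two-input CMOS NAND gate (a simplification, not an electrical simulation; labelling M3 as the PMOS pull-up driven by input B is an assumption based on the described example):

```python
def cmos_nand(a, b, m3_fault=None):
    """Switch-level model of a 2-input CMOS NAND gate.

    m3_fault: None, 'stuck-open', or 'stuck-short' for the PMOS
    pull-up transistor driven by input B (hypothetical labelling).
    Returns '1', '0', 'Z' (floating output) or 'X' (supply short).
    """
    pmos_a = (a == 0)                  # a PMOS conducts on a low gate
    pmos_b = (b == 0)
    if m3_fault == 'stuck-open':
        pmos_b = False                 # M3 never conducts
    elif m3_fault == 'stuck-short':
        pmos_b = True                  # M3 always conducts
    pull_up = pmos_a or pmos_b         # parallel PMOS pull-up network
    pull_down = (a == 1) and (b == 1)  # series NMOS pull-down network
    if pull_up and pull_down:
        return 'X'                     # VDD shorted to VSS
    if pull_up:
        return '1'
    if pull_down:
        return '0'
    return 'Z'                         # neither network conducts: floating

for a in (0, 1):
    for b in (0, 1):
        print(a, b, cmos_nand(a, b),
              cmos_nand(a, b, 'stuck-open'),
              cmos_nand(a, b, 'stuck-short'))
```

Running the loop shows the fault-free truth table alongside the faulty outputs: the stuck-open fault yields a floating 'Z' only for A=1, B=0, and the stuck-short fault yields the supply short 'X' for A=B=1.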
Digital electronics
Electronic design
Electronic circuit verification | Transistor fault | [
"Engineering"
] | 203 | [
"Electronic design",
"Electronic engineering",
"Design",
"Digital electronics"
] |
41,175,952 | https://en.wikipedia.org/wiki/Biomimetic%20synthesis | Biomimetic synthesis is an area of organic chemical synthesis that is specifically biologically inspired. The term encompasses both the testing of a "biogenetic hypothesis" (conjectured course of a biosynthesis in nature) through execution of a series of reactions designed to parallel the proposed biosynthesis, as well as programs of study where a synthetic reaction or reactions aimed at a desired synthetic goal are designed to mimic one or more known enzymic transformations of an established biosynthetic pathway. The earliest generally cited example of a biomimetic synthesis is Sir Robert Robinson's organic synthesis of the alkaloid tropinone.
A more recent example is E. J. Corey's carbenium-mediated cyclization of an engineered linear polyene to provide a tetracyclic steroid ring system, which built upon studies of cationic cyclizations of linear polyenes by Albert Eschenmoser and Gilbert Stork, and the extensive studies of W. S. Johnson to define the requirements to initiate and terminate the cyclization, and to stabilize the cationic carbenium group during the cyclization (as nature accomplishes via enzymes during biosynthesis of steroids such as cholesterol). In relation to the second definition, synthetic organic or inorganic catalysts applied to accomplish a chemical transformation accomplished in nature by a biocatalyst (e.g., a purely proteinaceous catalyst, a metal or other cofactor bound to an enzyme, or a ribozyme) can be said to accomplish a biomimetic synthesis, where the design and characterization of such catalytic systems has been termed biomimetic chemistry.
Synthesis of proto-daphniphylline
Proto-daphniphylline is a precursor in the biosynthesis of a family of alkaloids found in Daphniphyllum macropodum. It is of interest because its complex molecular structure, with a fused ring system and a spiro carbon centre, makes it a challenging target for conventional organic synthesis methods. Based on a proposed biosynthesis pathway of proto-daphniphylline from squalene, Clayton Heathcock and co-workers developed a remarkably elegant and short total synthesis of proto-daphniphylline from simple starting materials. This is an example of how biomimetic synthesis can simplify the total synthesis of a complex natural product.
The key step in Heathcock's synthetic route involves a cyclization of acyclic dialdehydes A or B to form proto-daphniphylline. Both dialdehydes (A or B) have carbon skeletons analogous to squalene and can be synthesized from simple starting materials. Treating A or B with a sequence of simple reagents containing potassium hydroxide, ammonia, and acetic acid led to the formation of proto-daphniphylline. Six σ-bonds and five rings were formed in this remarkable step. It was proposed in the original report that hydroxyldihydropyran intermediate C was first formed when the dialdehyde starting material (A) was treated with potassium hydroxide. A 2-aza-1, 3-diene intermediate (D) was generated from the reaction of intermediate C with ammonia. An acid-catalyzed Diels-Alder reaction formed intermediate E which was further converted to the final product under the reaction conditions.
Examples of biomimetic syntheses in Wikipedia
carpanone, via the Chapman approach
spirotryprostatin B, via the Ganesan approach
endiandric acid, see Biomimetic Total synthesis, via Nicolaou approach
Further literature examples of biomimetic syntheses
Merck synthesis of nakiterpiosin-type C-nor-D-homosteroids (structurally cleaved, contracted, and expanded rings: seco-, nor-, and homosteroids), via C-13 atom migration
Heathcock synthesis of squalene-derived daphniphylline-type alkaloids, via tetracyclization or pentacyclization cascades
References
Further reading
Chemical synthesis | Biomimetic synthesis | [
"Chemistry"
] | 870 | [
"nan",
"Chemical synthesis"
] |
41,176,434 | https://en.wikipedia.org/wiki/Radio%20spectrum%20scope | The radio spectrum scope (also radio panoramic receiver, panoramic adapter, pan receiver, pan adapter, panadapter, panoramic radio spectroscope, panoramoscope, panalyzor and band scope) was invented by Marcel Wallace - and measures and shows the magnitude of an input signal versus frequency within one or more radio bands - e.g. shortwave bands. A spectrum scope is normally a lot cheaper than a spectrum analyzer, because the aim is not high quality frequency resolution - nor high quality signal strength measurements.
A spectrum scope can be used to:
quickly find the radio channels of known and unknown signals when receiving.
quickly find radio amateur activity, e.g. with the intent of communicating with other amateurs.
Modern spectrum scopes, like the Elecraft P3, also plot signal frequencies and amplitudes over time, in a rolling format called a waterfall plot.
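As a rough sketch of what a band scope computes, the example below synthesizes a two-signal band and finds its magnitude peaks with a naive DFT (illustrative only; the bin numbers are hypothetical, real scopes use analog sweeps or FFT hardware, and a waterfall display would stack such magnitude rows over time):

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT: magnitude of each positive-frequency bin."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

# two hypothetical carriers at bins 10 and 40 of a 256-sample window
n = 256
signal = [math.sin(2 * math.pi * 10 * t / n) +
          0.5 * math.sin(2 * math.pi * 40 * t / n)
          for t in range(n)]
mags = dft_magnitudes(signal)
# the two strongest bins are what the scope would display as peaks
peaks = sorted(sorted(range(len(mags)), key=mags.__getitem__)[-2:])
print(peaks)   # peaks at bins 10 and 40
```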
References
External links
K9rod.net: Spectrum Scope
ac8gy.com: Panadapter for FT-950
k2za.blogspot.dk: FT-817 Tip - Spectrum Scope
portabletubes.co.uk: Panoramic Radio Products PCA-2 Panadapter
Navy Receiving Panoramic Adaptor (Panadaptor) Info
Receiver (radio)
Radio electronics
Telecommunications equipment
Radio technology
Signal processing
Spectroscopy | Radio spectrum scope | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 278 | [
"Information and communications technology",
"Radio electronics",
"Telecommunications engineering",
"Molecular physics",
"Computer engineering",
"Spectrum (physical sciences)",
"Signal processing",
"Instrumental analysis",
"Receiver (radio)",
"Radio technology",
"Spectroscopy"
] |
48,883,792 | https://en.wikipedia.org/wiki/Krogh%20model | Krogh model is a scientific model of mass transfer explaining the concentration of molecular oxygen through a cylindrical capillary tube as a function of a changing position over the capillary tube's length. It was first conceptualized by August Krogh in 1919 with the help of Agner Krarup Erlang to describe oxygen supply in living tissues from human blood vessels.
Its applicability has been extended to various academic fields, and has been successful explaining drug diffusion, water transport, and ice formation in tissues.
Mathematical modeling
The Krogh model is derived by applying Fick's laws of diffusion and the law of conservation of mass over a radial interval of the tissue cylinder surrounding the capillary.
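A minimal sketch of the governing equation, under the usual assumptions attributed to the Krogh cylinder (steady state, constant zeroth-order consumption rate M, diffusion coefficient D, capillary radius R_c, tissue-cylinder radius R_t; these symbols are illustrative, not taken from the source):

```latex
% Steady-state diffusion with constant consumption M in the tissue annulus,
% no flux at the outer radius, prescribed tension P_c at the capillary wall:
\frac{D}{r}\,\frac{\mathrm{d}}{\mathrm{d}r}\!\left(r\,\frac{\mathrm{d}P}{\mathrm{d}r}\right) = M,
\qquad
\left.\frac{\mathrm{d}P}{\mathrm{d}r}\right|_{r=R_t} = 0,
\qquad
P(R_c) = P_c .

% Integrating twice gives the classical Krogh--Erlang profile:
P(r) = P_c + \frac{M}{4D}\left(r^2 - R_c^2\right)
            - \frac{M R_t^2}{2D}\,\ln\frac{r}{R_c}.
```

One can check by differentiation that the profile satisfies both the equation and the two boundary conditions.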
Limitations
Although the Krogh model is a good approximation, it underestimates oxygen consumption because the cylinder model does not include all the tissue surrounding the capillary.
Notes
References
Diffusion
Scientific models
Mathematics in medicine | Krogh model | [
"Physics",
"Chemistry",
"Mathematics"
] | 177 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Applied mathematics",
"Mathematics in medicine"
] |
48,884,622 | https://en.wikipedia.org/wiki/Esterlin | The esterlin is an obsolete French unit of mass weighing about 1.5138 gram. It was used as a unit of mass for gold in France weighing 28 ½ Grain.
In the Austrian Netherlands its place in the unit chain was
1 livre or pond/pound = 2 mark = 16 once = 320 esterlin = 1280 felins = 10240 aß = 1 pond troy of Holland
1 pond = 320 esterlin = 49,215.18 centigram
In Belgium it held that 1 livre = 1000 esterlin = 1000 gram.
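The unit chain above can be checked with a few lines of arithmetic (a sketch; the gram value of the Austrian Netherlands pond is taken from the conversion quoted above and yields a slightly heavier esterlin than the 1.5138 g French value in the lead):

```python
POND_G = 49215.18 / 100    # 1 pond = 49,215.18 centigram = 492.1518 g
ESTERLIN_PER_POND = 320
FELIN_PER_POND = 1280
ASS_PER_POND = 10240

esterlin_g = POND_G / ESTERLIN_PER_POND           # about 1.538 g
felins_per_esterlin = FELIN_PER_POND // ESTERLIN_PER_POND   # 4 felins
ass_per_felin = ASS_PER_POND // FELIN_PER_POND              # 8 ass
print(round(esterlin_g, 4), felins_per_esterlin, ass_per_felin)
```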
References
External links
Units of mass | Esterlin | [
"Physics",
"Mathematics"
] | 120 | [
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
48,885,825 | https://en.wikipedia.org/wiki/Incompressibility%20method | In mathematics, the incompressibility method is a proof method like the probabilistic method, the counting method or the pigeonhole principle. To prove that an object in a certain class (on average) satisfies a certain property, select an object of that class that is incompressible. If it does not satisfy the property, it can be compressed by computable coding. Since it can be generally proven that almost all objects in a given class are incompressible, the argument demonstrates that almost all objects in the class have the property involved (not just the average). To select an incompressible object is ineffective, and cannot be done by a computer program. However, a simple counting argument usually shows that almost all objects of a given class can be compressed by only a few bits (are incompressible).
History
The incompressibility method depends on an objective, fixed notion of incompressibility. Such a notion was provided by the Kolmogorov complexity theory, named for Andrey Kolmogorov.
One of the first uses of the incompressibility method with Kolmogorov complexity in the theory of computation was to prove that the running time of a one-tape Turing machine is quadratic for accepting a palindromic language and that sorting algorithms require at least order n log n time to sort n items. One of the early influential papers using the incompressibility method was published in 1980. The method was applied to a number of fields, and its name was coined in a textbook.
Applications
Number theory
According to Euclid's elegant proof, there are infinitely many prime numbers. Bernhard Riemann demonstrated that the number of primes less than a given number is connected with the zeros of the Riemann zeta function. Jacques Hadamard and Charles Jean de la Vallée-Poussin proved in 1896 that this
number of primes π(n) less than n is asymptotic to n/ln n; see the prime number theorem (write ln for the natural logarithm and log for the binary logarithm). Using the incompressibility method, G. J. Chaitin argued as follows: each n can be described by its prime factorization n = p_1^{e_1} ··· p_k^{e_k} (which is unique), where p_1, …, p_k are the first k = π(n) primes, which are at most n, and the exponents e_i are possibly 0. Each exponent is at most log n, and can be described by about log log n bits. The description of n can therefore be given in about π(n) log log n bits, provided we know the value of log log n (enabling one to parse the consecutive blocks of exponents); to describe log log n requires only a lower-order number of bits. Using the incompressibility of most positive integers, for each k > 0 there is a positive integer n of binary length k which cannot be described in fewer than k bits. This shows that the number of primes π(n) less than n satisfies
π(n) = Ω(log n / log log n).
A more sophisticated approach, attributed to Piotr Berman (present proof partially by John Tromp), describes every incompressible n by p(n), the largest prime dividing n, together with n/p(n). Since n is incompressible, the length of this description must exceed log n. To parse the first block, the description of the index k of p(n) in the list of primes must be given in prefix form, which costs only about log k + 2 log log k bits. Therefore, log p(n) ≤ log k + 2 log log k, so that the k-th prime satisfies p_k ≤ k log² k for a special sequence of values k, and a simple extension shows that this holds for every k:
π(n) = Ω(n / log² n).
Both proofs are presented in more detail.
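A small sieve makes the gap between the incompressibility bound attributed to Chaitin and the true prime counting function concrete (illustrative only; the bound is stated up to constant factors, so only its order of growth is meaningful):

```python
import math

def prime_count(n):
    """pi(n): the number of primes <= n, via the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b'\x00\x00'
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

n = 100000
pi_n = prime_count(n)
pnt_estimate = n / math.log(n)                       # prime number theorem scale
chaitin_bound = math.log(n) / math.log(math.log(n))  # order of Chaitin's bound
print(pi_n, round(pnt_estimate), round(chaitin_bound))
```

The incompressibility bound is tiny compared with the truth; its value lies in the simplicity of the argument, not its sharpness.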
Graph theory
A labeled graph G with n nodes can be represented by a string of n(n−1)/2 bits, where each bit indicates the presence (or absence) of an edge between the pair of nodes in that position. For such a graph of high Kolmogorov complexity (almost all of them), the degree d(v) of each vertex v satisfies
|d(v) − (n−1)/2| = O(√(n log n)).
To prove this by the incompressibility method: if the deviation were larger, we could compress the description of G below n(n−1)/2 bits; this provides the required contradiction. This theorem is required in a more complicated proof, where the incompressibility argument is used a number of times to show that the number of unlabeled graphs on n nodes is asymptotically
2^{n(n−1)/2} / n!.
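The degree-concentration statement can be sampled empirically: a uniformly random edge-bit string encodes a graph that is incompressible up to lower-order terms with high probability, so its degrees should stay within the O(√(n log n)) band around (n−1)/2 (a seeded illustration, not a proof; the factor 1.5 in the check is an arbitrary choice of constant):

```python
import math
import random

random.seed(1)
n = 200
# draw each of the n(n-1)/2 edge bits with a fair coin
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < 0.5:
            adj[i][j] = adj[j][i] = 1

degrees = [sum(row) for row in adj]
max_dev = max(abs(d - (n - 1) / 2) for d in degrees)
scale = math.sqrt(n * math.log(n))   # the O(sqrt(n log n)) band
print(max_dev, round(scale, 1))
```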
Combinatorics
A tournament is a complete directed graph: for every pair of nodes a and b, exactly one of the edges (a, b) and (b, a) is present. It is transitive if (a, b) and (b, c) being edges implies that (a, c) is an edge. Consider the set of all tournaments on n nodes. Since a tournament is a labeled, directed complete graph, it can be encoded by a string of n(n−1)/2 bits, where each bit indicates the direction of the edge between the pair of nodes in that position. Using this encoding, one shows that every tournament on n nodes contains a transitive subtournament on at least 1 + ⌊log n⌋ vertices.
This was shown as the first problem in this area and is easily solved by the incompressibility method, as are the coin-weighing problem, the number of covering families and expected properties; for example, at least a fraction 1 − 1/n of all tournaments on n vertices have largest transitive subtournaments of not more than about 2 log n vertices, provided n is large enough.
If a number of events are independent (in probability theory) of one another, the probability that none of the events occur can be easily calculated. If the events are dependent, the problem becomes difficult. Lovász local lemma is a principle that if events are mostly independent of one another and have an individually-small probability, there is a positive probability that none of them will occur. It was proven by the incompressibility method. Using the incompressibility method, several versions of expanders and superconcentrator graphs were shown to exist.
Topological combinatorics
In the Heilbronn triangle problem, one throws n points into the unit square and asks for the maximum, over all possible arrangements, of the minimal area of a triangle formed by three of the points. This problem was solved for small arrangements, and much work was done on the asymptotic expression as a function of n. The original conjecture of Heilbronn, made during the early 1950s, was that this area is O(1/n²). Paul Erdős proved that this bound is correct for n a prime number. The general problem remains unsolved, apart from the best-known lower bound Ω(log n / n²) (achievable; hence, Heilbronn's conjecture is not correct for general n) and upper bound O(n^{−8/7+ε}) (proven by Komlós, Pintz and Szemerédi in 1982 and 1981, respectively). Using the incompressibility method, the average case was studied. It was proven that if the area is too small (or large) it can be compressed below the Kolmogorov complexity of a uniformly random arrangement (which has high Kolmogorov complexity). This proves that for the overwhelming majority of the arrangements (and for the expectation), the area of the smallest triangle formed by three of n points thrown uniformly at random in the unit square is Θ(1/n³). In this case, the incompressibility method proves both the lower and the upper bound of the property involved.
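The Θ(1/n³) average-case behaviour is easy to observe by simulation (a Monte Carlo sketch with a brute-force scan over all triangles, so only small n are used; the trial counts and seed are arbitrary):

```python
import random

def min_triangle_area(n, trials=30, seed=0):
    """Average (over trials) of the smallest triangle area among n
    uniformly random points in the unit square."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        best = float('inf')
        for i in range(n):
            ax, ay = pts[i]
            for j in range(i + 1, n):
                bx, by = pts[j]
                for k in range(j + 1, n):
                    cx, cy = pts[k]
                    # twice the signed area via the cross product
                    area = abs((bx - ax) * (cy - ay) -
                               (cx - ax) * (by - ay)) / 2
                    best = min(best, area)
        total += best
    return total / trials

for n in (8, 16, 32):
    a = min_triangle_area(n)
    print(n, a, a * n ** 3)   # a * n^3 should stay roughly of constant order
```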
Probability
The law of the iterated logarithm, the law of large numbers and the recurrence property were shown to hold using the incompressibility method and Kolmogorov's zero–one law, with normal numbers expressed as binary strings (in the sense of E. Borel) and the distribution of 0s and 1s in binary strings of high Kolmogorov complexity.
Turing-machine time complexity
The basic Turing machine, as conceived by Alan Turing in 1936, consists of a memory: a tape of potentially-infinite cells on which a symbol can be written and a finite control, with a read-write head attached, which scans a cell on the tape. At each step, the read-write head can change the symbol in the cell being scanned and move one cell left, right, or not at all according to instruction from the finite control. Turing machines with two tape symbols may be considered for convenience, but this is not essential.
In 1968, F. C. Hennie showed that such a Turing machine requires time of order n² to recognize the language of binary palindromes in the worst case. In 1977, W. J. Paul presented an incompressibility proof showing that order n² time is required in the average case. For every integer n, consider all words of that length. For convenience, consider words with the middle third of the word consisting of 0s. The accepting Turing machine ends in an accept state on the left (the beginning of the tape). A Turing-machine computation on a given word gives, for each location (the boundary between adjacent cells), a sequence of crossings from left to right and right to left, each crossing in a particular state of the finite control. Either the positions in the middle third of a candidate word all have crossing sequences of length Ω(n) (for a total computation time of Ω(n²)), or some position has a crossing sequence of length o(n). In the latter case, the word (if it is a palindrome) can be identified by that crossing sequence.
If another palindrome (also ending in an accepting state on the left) had the same crossing sequence, then the word consisting of the prefix of the original palindrome (up to the position of the involved crossing sequence) concatenated with the suffix, of the remaining length, of the other palindrome would be accepted as well. Taking a palindrome of high Kolmogorov complexity, the short crossing sequence would yield a description of it in o(n) bits: a contradiction.
Since the overwhelming majority of binary palindromes have high Kolmogorov complexity, this gives a lower bound on the average-case running time. A much more difficult result obtained this way shows that Turing machines with k + 1 work tapes are more powerful than those with k work tapes in real time (here, one symbol per step).
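The quadratic behaviour that the crossing-sequence argument proves unavoidable is matched by the obvious one-tape strategy, which compares the outermost symbols and shuttles the head back and forth (a sketch counting head moves, not a full Turing-machine simulation):

```python
def one_tape_palindrome_steps(word):
    """Simulate the naive one-tape strategy: repeatedly compare the two
    outermost remaining symbols, shuttling the head between them.
    Returns (is_palindrome, number_of_head_moves)."""
    tape = list(word)
    left, right = 0, len(tape) - 1
    moves = 0
    while left < right:
        if tape[left] != tape[right]:
            return False, moves
        moves += 2 * (right - left)   # walk right, then back left
        left += 1
        right -= 1
    return True, moves

for n in (64, 128, 256):
    _, steps = one_tape_palindrome_steps('0' * n)
    print(n, steps)   # roughly n^2 / 2: quadratic growth
```

Doubling the input length quadruples the number of head moves, in agreement with the Ω(n²) lower bound.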
In 1984, W. Maass and, independently, M. Li and P. M. B. Vitányi showed that the simulation of two work tapes by one work tape of a Turing machine takes Ω(n²) time deterministically (which is optimal, solving a 30-year open problem) and nearly quadratic time nondeterministically. More results concerning tapes, stacks and queues, deterministic and nondeterministic, were proven with the incompressibility method.
Theory of computation
Heapsort is a sorting method, invented by J. W. J. Williams and refined by R. W. Floyd, which always runs in O(n log n) time. It is questionable whether Floyd's method is better than Williams' on average, although it is better in the worst case. Using the incompressibility method, it was shown that Williams' method makes on average about 2n log n comparisons while Floyd's method makes on average about n log n comparisons. The proof was suggested by Ian Munro.
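The contrast between the two sift-down strategies can be measured directly by counting comparisons (a sketch: siftdown_williams compares both children and then the parent at each level, while siftdown_floyd descends to a leaf along the larger children and bubbles the sifted element back up, in the spirit of Floyd's variant; the input size and seed are arbitrary):

```python
import random

def siftdown_williams(a, i, m, cnt):
    """Classic sift-down: two comparisons per level."""
    while 2 * i + 1 < m:
        c = 2 * i + 1
        if c + 1 < m:
            cnt[0] += 1
            if a[c + 1] > a[c]:
                c += 1
        cnt[0] += 1
        if a[i] >= a[c]:
            break
        a[i], a[c] = a[c], a[i]
        i = c

def siftdown_floyd(a, i, m, cnt):
    """Floyd-style sift-down: move the hole to a leaf along the larger
    children (one comparison per level), then bubble the element up."""
    x = a[i]
    hole = i
    child = 2 * hole + 1
    while child < m:
        if child + 1 < m:
            cnt[0] += 1
            if a[child + 1] > a[child]:
                child += 1
        a[hole] = a[child]
        hole = child
        child = 2 * hole + 1
    while hole > i:
        parent = (hole - 1) // 2
        cnt[0] += 1
        if a[parent] < x:
            a[hole] = a[parent]
            hole = parent
        else:
            break
    a[hole] = x

def heapsort(a, siftdown):
    cnt = [0]
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build the max-heap
        siftdown(a, i, n, cnt)
    for m in range(n - 1, 0, -1):         # repeatedly extract the maximum
        a[0], a[m] = a[m], a[0]
        siftdown(a, 0, m, cnt)
    return cnt[0]

random.seed(7)
data = [random.random() for _ in range(2000)]
a_w = list(data)
comps_w = heapsort(a_w, siftdown_williams)
a_f = list(data)
comps_f = heapsort(a_f, siftdown_floyd)
print(comps_w, comps_f)   # Floyd's variant needs noticeably fewer comparisons
```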
Shellsort, discovered by Donald Shell in 1959, is a comparison sort which divides the list to be sorted into sublists and sorts them separately. The sorted sublists are then merged, reconstituting a partially sorted list. This process repeats a number of times (the number of passes). The difficulty of analyzing the complexity of the sorting process is that it depends on the number n of keys to be sorted, on the number p of passes, and on the increments governing the scattering in each pass; a sublist is the list of keys that are the increment parameter apart. Although this sorting method inspired a large number of papers, only the worst case was tightly established. For the average running time, only the best case for a two-pass Shellsort and an upper bound of O(n^{23/15}) for a particular increment sequence for three-pass Shellsort were established. A general lower bound on the average p-pass Shellsort was given, which was the first advance in this problem in four decades. In every pass, the comparison sort moves a key to another place a certain distance (a path length). All these path lengths are logarithmically coded for length, in the correct order (of passes and keys). This allows the reconstruction of the unsorted list from the sorted list. If the unsorted list is incompressible (or nearly so), then since the sorted list has near-zero Kolmogorov complexity (and the path lengths together give a certain code length), the sum must be at least as large as the Kolmogorov complexity of the original list. The sum of the path lengths corresponds to the running time, and the running time is lower-bounded in this argument by Ω(p n^{1+1/p}).
This was later improved to a stronger lower bound. It implies, for example, the Jiang–Li–Vitányi lower bound Ω(p n^{1+1/p}) for all p-pass increment sequences and improves that lower bound for particular increment sequences;
the Janson–Knuth upper bound is matched by a lower bound for the increment sequence used, showing that three-pass Shellsort for this increment sequence uses Θ(n^{23/15}) inversions on average.
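A p-pass Shellsort with an explicit move counter makes the "path lengths" of the coding argument concrete (a sketch with an arbitrary three-pass increment sequence; the printed p·n^{1+1/p} value is only the scale of the average-case lower bound, not an exact prediction):

```python
import random

def shellsort_moves(a, increments):
    """h-sort the list for each increment; return the total number of
    element moves (a proxy for the path lengths in the coding argument)."""
    moves = 0
    for h in increments:
        for i in range(h, len(a)):
            x = a[i]
            j = i
            while j >= h and a[j - h] > x:
                a[j] = a[j - h]   # shift by one position of distance h
                j -= h
                moves += 1
            a[j] = x
    return moves

random.seed(3)
n = 4096
p = 3
data = [random.random() for _ in range(n)]
arr = list(data)
total_moves = shellsort_moves(arr, [n // 16, n // 256, 1])
print(total_moves, round(p * n ** (1 + 1 / p)))
```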
Another example is the following: using the incompressibility method, it was shown that there exist Boolean matrices all of whose sufficiently large submatrices have high rank.
Logic
According to Gödel's first incompleteness theorem, in every formal system with computably enumerable theorems (or proofs) that is strong enough to contain Peano arithmetic, there are true but unprovable statements (theorems). This is proved by the incompressibility method: every such formal system can be described finitely, say in f bits. In such a formal system we can express statements of the form "x is incompressible", since it contains arithmetic. Given the system and a natural number n, we can search exhaustively for a proof that some string x of length n is incompressible, and output the first such string. For n sufficiently large compared with f, this search yields a description of an incompressible string in far fewer than n bits: a contradiction. Hence, for all large enough n, the true statement that a particular string is incompressible cannot be proved in the system.
Comparison with other methods
Although the probabilistic method generally shows the existence of an object with a certain property in a class, the incompressibility method tends to show that the overwhelming majority of objects in the class (the average, or the expectation) have that property. It is sometimes easy to turn a probabilistic proof into an incompressibility proof or vice versa. In some cases, it is difficult or impossible to turn a proof by incompressibility into a probabilistic (or counting) proof. In virtually all the cases of Turing-machine time complexity cited above, the incompressibility method solved problems which had been open for decades; no other proofs are known. Sometimes a proof by incompressibility can be turned into a proof by counting, as happened in the case of the general lower bound on the running time of Shellsort.
References
Mathematical principles
Computability theory
Information theory | Incompressibility method | [
"Mathematics",
"Technology",
"Engineering"
] | 2,897 | [
"Mathematical principles",
"Telecommunications engineering",
"Applied mathematics",
"Mathematical logic",
"Computer science",
"Information theory",
"Computability theory"
] |
48,886,744 | https://en.wikipedia.org/wiki/Allen%E2%80%93Millar%E2%80%93Trippett%20rearrangement | The Allen–Millar–Trippett rearrangement is a ring expansion reaction in which a cyclic phosphine is transformed into a cyclic phosphine oxide. This name reaction, first reported in the 1960s by David W. Allen, Ian T. Millar, and Stuart Trippett, occurs by alkylation or acylation of the phosphorus, followed by reaction with hydroxide to give a rearranged product. The hydroxide first attacks the phosphonium atom, followed by collapse to the phosphine oxide with one of the groups migrating off of the phosphorus.
References
Name reactions
Phosphorus heterocycles
Ring expansion reactions | Allen–Millar–Trippett rearrangement | [
"Chemistry"
] | 133 | [
"Name reactions",
"Ring expansion reactions",
"Organic chemistry stubs",
"Organic reactions"
] |
48,888,279 | https://en.wikipedia.org/wiki/Human%20milk%20oligosaccharide | Human milk oligosaccharides (HMOs), also known as human milk glycans, are short polymers of simple sugars that can be found in high concentrations in human breast milk. Human milk oligosaccharides promote the development of the immune system, can reduce the risk of pathogen infections and improve brain development and cognition. The HMO profile of human breast milk shapes the gut microbiota of the infant by selectively stimulating bifidobacteria and other bacteria.
Functions
In contrast to the other components of breast milk that are absorbed by the infant through breastfeeding, HMOs are indigestible for the nursing child. However, they have a prebiotic effect and serve as food for intestinal bacteria, especially bifidobacteria. The dominance of these intestinal bacteria in the gut reduces the colonization with pathogenic bacteria (probiosis) and thereby promotes a healthy intestinal microbiota and reduces the risk of dangerous intestinal infections. Recent studies suggest that HMOs significantly lower the risk of viral and bacterial infections and thus diminish the chance of diarrhoea and respiratory diseases.
This protective function of the HMOs is activated when in contact with specific pathogens, such as certain bacteria or viruses. These have the ability to bind themselves to the glycan receptors (receptors for long chains of connected sugar molecules on the surface of human cells) located on the surface of the intestinal cells and can thereby infect the cells of the intestinal mucosa. Researchers have discovered that HMOs mimic these glycan receptors, so the pathogens bind themselves to the HMOs rather than the intestinal cells. This reduces the risk of an infection with a pathogen. It has also been demonstrated that HMOs can bind to several intestinal viruses, such as norovirus and Norwalk virus, and they can also reduce the viral load of influenza and RSV.
In addition to this, HMOs seem to influence the reaction of specific cells of the immune system in a way that reduces inflammatory responses. It is also suspected that HMOs reduce the risk of premature infants becoming infected with the potentially life-threatening disease necrotizing enterocolitis (NEC).
Some of the metabolites directly affect the nervous system or the brain and can sometimes influence the development and behavior of children in the long term. There are studies that indicate certain HMOs supply the child with sialic acid residues. Sialic acid is an essential nutrient for the development of the child’s brain and mental abilities.
In experiments designed to test the suitability of HMOs as a prebiotic source of carbon for intestinal bacteria it was discovered that they are highly selective for a commensal bacteria known as Bifidobacteria longum biovar infantis. The presence of genes unique to B. infantis, including co-regulated glycosidases, and its efficiency at using HMOs as a carbon source may imply a co-evolution of HMOs and the genetic capability of select bacteria to utilize them.
Occurrence
Milk oligosaccharides seem to be more abundant in humans than in other animals and to be more complex and varied. Oligosaccharides in primate milk are generally more complex and diverse than in non-primates.
Human milk oligosaccharides (HMOs) form the third most abundant solid component (dissolved or emulsified or suspended in water) of human milk, after lactose and fat. HMOs are present in a concentration of 11.3 – 17.7 g/L (1.5 oz/gal – 2.36 oz/gal) in human milk, depending on lactation stages. Approximately 200 structurally different human milk oligosaccharides are known, and they can be categorized into fucosylated, sialylated and neutral core HMOs. The composition of human milk oligosaccharides in breast milk is individual to each mother and varies over the period of lactation. The dominant oligosaccharide in 80% of all women is 2′-fucosyllactose, which is present in human breast milk at a concentration of approximately 2.5 g/L; other abundant oligosaccharides include lacto-N-tetraose, lacto-N-neotetraose, and lacto-N-fucopentaose. It has been found by numerous studies that the concentration of each individual human milk oligosaccharide changes throughout the different periods of lactation (colostrum, transitional, mature and late milk) and depends on various factors such as the mother's genetic secretor status and length of gestation.
Applications
Infant formula: Historically, HMOs were not part of infant formula, and bottle-fed babies could not benefit from their positive health effects. Recently, however, more and more HMOs, including 2′-fucosyllactose and lacto-N-neotetraose, have been added as supplements to modern infant formula. An infant formula with a combination of five different HMOs (2′-fucosyllactose, 2′,3-di-fucosyllactose, lacto-N-tetraose, 3′-sialyllactose, and 6′-sialyllactose) was recently tested in a clinical trial with positive effects on gut microflora. Even this type of infant formula, however, is far from the natural abundance of nearly 200 HMOs present in human milk.
Irritable bowel syndrome: Human milk oligosaccharides are also used to treat the symptoms of irritable bowel syndrome (IBS), a gastrointestinal disorder affecting 10–15% of the developed world. A 12-week treatment with an orally administered HMO mixture (2′FL and LNnT) significantly improved the quality of life of IBS patients.
Synthesis
Biosynthesis in humans
All HMOs derive from lactose, which can be decorated with four monosaccharides (N-acetyl-D-glucosamine, D-galactose, sialic acid and/or L-fucose) to form an oligosaccharide. The HMO variability among human mothers depends on two specific enzymes, the α1-2-fucosyltransferase (FUT2) and the α1-3/4-fucosyltransferase (FUT3). The milk of mothers with an inactivated FUT2 enzyme does not contain α1-2-fucosylated HMOs; likewise, with an inactivated FUT3 enzyme almost no α1-4-fucosylated HMOs are found. Typically, 20% of the global population of mothers do not have an active FUT2 enzyme but still have an active FUT3 enzyme, whereas 1% of mothers express neither FUT2 nor FUT3 enzymes.
Industrial large-scale synthesis
Human milk oligosaccharides can be synthesized in large quantities using precision fermentation, e.g. with the commonly used, non-pathogenic bacterium Escherichia coli. During the fermentation process the bacteria are fed a carbon source (e.g. glucose), salts, minerals and trace elements under aseptic conditions in a stainless-steel bioreactor, while lactose is added to the process as a precursor molecule. The bacteria then convert the lactose into human milk oligosaccharides by decorating it with other sugar monomers. After fermentation, the HMOs are completely separated from the bacteria, proteins and DNA using different filtration techniques. Subsequently, the HMOs are purified, crystallized, dried, packaged and delivered to infant formula manufacturers, where they are mixed with other components of infant formula.
Enzymatic synthesis
Enzymatic synthesis of HMOs through transgalactosylation is an efficient production route. Various donors, including p-nitrophenyl-β-galactopyranoside, uridine diphosphate galactose and lactose, can be used in transgalactosylation. In particular, lactose may act as either a donor or an acceptor in a variety of enzymatic reactions and is available in large quantities from the whey produced as a by-product of cheese making. There is, however, a lack of published data describing the large-scale production of such galacto-oligosaccharides.
References
Oligosaccharides
Nutrition
Carbohydrate chemistry
Breastfeeding
Microbiology
Microbial growth and nutrition | Human milk oligosaccharide | [
"Chemistry",
"Biology"
] | 1,788 | [
"Carbohydrates",
"Microbiology",
"Oligosaccharides",
"Microscopy",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
48,888,710 | https://en.wikipedia.org/wiki/Quantized%20enveloping%20algebra | In mathematics, a quantum or quantized enveloping algebra is a q-analog of a universal enveloping algebra. Given a Lie algebra , the quantum enveloping algebra is typically denoted as . The notation was introduced by Drinfeld and independently by Jimbo.
Among the applications, studying the limit q → 0 led to the discovery of crystal bases.
The case of sl(2)
Michio Jimbo considered the algebras with three generators e, f, h related by the three commutators

[h, e] = 2e,   [h, f] = −2f,   [e, f] = sinh(ηh)/sinh(η)

When η → 0, these reduce to the commutators that define the special linear Lie algebra sl(2). In contrast, for nonzero η, the algebra defined by these relations is not a Lie algebra but instead an associative algebra that can be regarded as a deformation of the universal enveloping algebra of sl(2).
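The classical limit can be made explicit. In the standard presentation of the q-deformed enveloping algebra of sl(2), with deformation parameter η and q = e^η (the generator names e, f, h and this parameter convention are the conventional choices, supplied here where the source omits the formulas), the deformed commutator of the raising and lowering generators expands as

```latex
[e, f] \;=\; \frac{\sinh(\eta h)}{\sinh \eta}
       \;=\; \frac{\eta h + \frac{(\eta h)^3}{3!} + \cdots}{\eta + \frac{\eta^3}{3!} + \cdots}
       \;\longrightarrow\; h \quad (\eta \to 0),
```

recovering the classical relation [e, f] = h of the special linear Lie algebra, while for η ≠ 0 the right-hand side is a power series in h rather than a Lie bracket.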
See also
Quantum group
Notes
References
External links
Quantized enveloping algebra at the nLab
Quantized enveloping algebras at MathOverflow
Does there exist any "quantum Lie algebra" imbedded into the quantum enveloping algebra ? at MathOverflow
Quantum groups
Representation theory
Mathematical quantization | Quantized enveloping algebra | [
"Physics",
"Mathematics"
] | 216 | [
"Algebra stubs",
"Quantum mechanics",
"Fields of abstract algebra",
"Mathematical quantization",
"Representation theory",
"Algebra"
] |
51,231,053 | https://en.wikipedia.org/wiki/Homomorphic%20equivalence | In graph theory, a branch of mathematics, two graphs G and H are called homomorphically equivalent if there exists a graph homomorphism and a graph homomorphism . An example usage of this notion is that any two cores of a graph are homomorphically equivalent.
Homomorphic equivalence also comes up in the theory of databases. Given a database schema, two instances I and J on it are called homomorphically equivalent if there exists an instance homomorphism and an instance homomorphism .
Deciding whether two graphs are homomorphically equivalent is NP-complete.
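As a concrete illustration consistent with the NP-completeness just noted, a brute-force check for homomorphic equivalence can be sketched as follows (the adjacency-dict representation and function names are our own, hypothetical choices):

```python
from itertools import product

def homomorphism_exists(g, h):
    """Brute-force search for a graph homomorphism g -> h.

    Graphs are dicts mapping each vertex to the set of its neighbours
    (undirected, both directions stored).  The search is exponential in
    |V(g)|, consistent with the NP-completeness of the general problem.
    """
    gv, hv = list(g), list(h)
    for assignment in product(hv, repeat=len(gv)):
        f = dict(zip(gv, assignment))
        # f is a homomorphism iff every edge of g maps to an edge of h
        if all(f[v] in h[f[u]] for u in g for v in g[u]):
            return True
    return False

def homomorphically_equivalent(g, h):
    return homomorphism_exists(g, h) and homomorphism_exists(h, g)

# A 4-cycle and a single edge (K2):
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
k2 = {"a": {"b"}, "b": {"a"}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
```

Here the 4-cycle and K2 are homomorphically equivalent (both have K2 as their core), whereas the triangle, being an odd cycle, admits no homomorphism to K2.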
In fact for any category C, one can define homomorphic equivalence. It is used in the theory of accessible categories, where "weak universality" is the best one can hope for in terms of injectivity classes; see
References
Graph theory
Equivalence (mathematics) | Homomorphic equivalence | [
"Mathematics"
] | 166 | [
"Graph theory stubs",
"Discrete mathematics",
"Graph theory",
"Combinatorics",
"Mathematical relations"
] |
51,231,089 | https://en.wikipedia.org/wiki/Quantum%20hadrodynamics | Quantum hadrodynamics (QHD) is an effective field theory pertaining to interactions between hadrons, that is, hadron-hadron interactions or the inter-hadron force. It is "a framework for describing the nuclear many-body problem as a relativistic system of baryons and mesons". Quantum hadrodynamics is closely related and partly derived from quantum chromodynamics, which is the theory of interactions between quarks and gluons that bind them together to form hadrons, via the strong force.
An important phenomenon in quantum hadrodynamics is the nuclear force, or residual strong force. It is the force operating between those hadrons which are nucleons – protons and neutrons – as it binds them together to form the atomic nucleus. The bosons which mediate the nuclear force are three types of mesons: pions, rho mesons and omega mesons. Since mesons are themselves hadrons, quantum hadrodynamics also deals with the interaction between the carriers of the nuclear force itself, alongside the nucleons bound by it. The hadrodynamic force keeps nuclei bound, against the electrodynamic force which operates to break them apart (due to the mutual repulsion between protons in the nucleus).
Quantum hadrodynamics, dealing with the nuclear force and its mediating mesons, can be compared to other quantum field theories which describe fundamental forces and their associated bosons: quantum chromodynamics, dealing with the strong interaction and gluons; quantum electrodynamics, dealing with electromagnetism and photons; quantum flavordynamics, dealing with the weak interaction and W and Z bosons.
See also
Atomic nucleus
Hadron
Nuclear force
Quantum chromodynamics and strong interaction
Quantum electrodynamics and electromagnetism
Quantum flavordynamics and weak interaction
References
Quantum chromodynamics
Nuclear physics | Quantum hadrodynamics | [
"Physics"
] | 400 | [
"Particle physics stubs",
"Particle physics",
"Nuclear physics"
] |
51,236,979 | https://en.wikipedia.org/wiki/Unique%20molecular%20identifier | Unique molecular identifiers (UMIs), or molecular barcodes (MBC) are short sequences or molecular "tags" added to DNA fragments in some next generation sequencing library preparation protocols to identify the input DNA molecule. These tags are added before PCR amplification, and can be used to reduce errors and quantitative bias introduced by the amplification.
Applications include analysis of unique cDNAs to avoid PCR biases in iCLIP, variant calling in ctDNA, gene expression in single-cell RNA-seq (scRNA-seq) and haplotyping via linked reads.
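The deduplication idea can be sketched in a few lines: reads that share both a mapping position and a UMI are treated as PCR copies of a single input molecule. This is a deliberately minimal toy scheme (exact-match UMIs only); production tools such as UMI-tools additionally merge UMIs within a small edit distance to absorb sequencing errors.

```python
from collections import Counter

def count_molecules(reads):
    """Collapse PCR duplicates: reads sharing a mapping position and a
    UMI are counted as one original molecule.

    `reads` is an iterable of (position, umi) pairs; returns the number
    of distinct molecules observed at each position.
    """
    unique = set(reads)                       # one entry per molecule
    return dict(Counter(pos for pos, _umi in unique))

reads = [
    (100, "ACGT"), (100, "ACGT"),   # PCR duplicates of one molecule
    (100, "TTAG"),                  # a second molecule at the same locus
    (205, "ACGT"),                  # a different locus
]
```

The four reads above collapse to three molecules: two at position 100 and one at position 205.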
See also
Batch effect
Multiplex (assay)
BRB-seq
References
DNA sequencing | Unique molecular identifier | [
"Chemistry",
"Biology"
] | 142 | [
"Molecular biology techniques",
"DNA sequencing"
] |
42,596,301 | https://en.wikipedia.org/wiki/WRF-SFIRE | WRF-SFIRE is a coupled atmosphere-wildfire model, which combines the Weather Research and Forecasting Model (WRF) with a fire-spread model, implemented by the level-set method. A version from 2010 was released based on the WRF 3.2 as WRF-Fire.
References
Coen, J., M. Cameron, J. Michalakes, E. Patton, P. Riggan, and K. Yedinak, 2013: WRF-Fire: Coupled Weather-Wildland Fire Modeling with the Weather Research and Forecasting Model. J. Appl. Meteorol. Climatol. 52, 16-38, .
Jan Mandel, Jonathan D. Beezley, Janice L. Coen, Minjeong Kim, Data Assimilation for Wildland Fires: Ensemble Kalman filters in coupled atmosphere-surface models, IEEE Control Systems Magazine 29, Issue 3, June 2009, 47-65. Preprint at , December 2007.
Jan Mandel, Jonathan D. Beezley, and Adam K. Kochanski, Coupled atmosphere-wildland fire modeling with WRF 3.3 and SFIRE 2011, Geoscientific Model Development (GMD) 4, 591-610, 2011.
External links
Users guide
Wildfire suppression
Computational physics
Firefighting
Mathematical modeling
National Weather Service numerical models | WRF-SFIRE | [
"Physics",
"Mathematics"
] | 278 | [
"Applied mathematics",
"Mathematical modeling",
"Computational physics stubs",
"Computational physics"
] |
42,598,658 | https://en.wikipedia.org/wiki/Problem%20of%20time | In theoretical physics, the problem of time is a conceptual conflict between quantum mechanics and general relativity. Quantum mechanics regards the flow of time as universal and absolute, whereas general relativity regards the flow of time as malleable and relative. This problem raises the question of what time really is in a physical sense and whether it is truly a real, distinct phenomenon. It also involves the related question of why time seems to flow in a single direction, despite the fact that no known physical laws at the microscopic level seem to require a single direction.
Time in quantum mechanics
In classical mechanics, a special status is assigned to time in the sense that it is treated as a classical background parameter, external to the system itself. This special role is seen in the standard Copenhagen interpretation of quantum mechanics: all measurements of observables are made at certain instants of time and probabilities are only assigned to such measurements. Furthermore, the Hilbert space used in quantum theory relies on a complete set of observables which commute at a specific time.
Time in general relativity
In general relativity time is no longer a unique background parameter, but a general coordinate. The field equations of general relativity are not parameterized by time but formulated in terms of spacetime. Many of the issues related to the problem of time exist within general relativity. At the cosmic scale, general relativity shows a closed universe with no external time. These two very different roles of time are incompatible.
Impact on quantum gravity
Quantum gravity describes theories that attempt to reconcile or unify quantum mechanics and general relativity, the current theory of gravity. The problem of time is central to these theoretical attempts. It remains unclear how time is related to quantum probability, whether time is fundamental or a consequence of processes, and whether time is approximate, among other issues. Different theories try different answers to the questions but no clear solution has emerged.
The Frozen Formalism Problem
The most commonly discussed aspect of the problem of time is the Frozen Formalism Problem. The non-relativistic Schrödinger equation of quantum mechanics includes time evolution:

iħ ∂ψ(x, t)/∂t = Ĥ ψ(x, t)

where Ĥ is an energy operator characterizing the system and the wave function ψ(x, t) over space evolves in the time parameter t.
In general relativity the energy operator becomes a constraint in the Wheeler–DeWitt equation:

Ĥ(x) Ψ = 0

where the operator Ĥ(x) varies throughout space, but the wavefunction Ψ here, called the wavefunction of the universe, is constant in time. Consequently this cosmic universal wavefunction is frozen and does not evolve. Somehow, at a smaller scale, the laws of physics, including a concept of time, apply within the universe while the cosmic level is static.
Proposed solutions to the problem of time
Work started by Don Page and William Wootters suggests that the universe appears to evolve for observers on the inside because of energy entanglement between an evolving system and a clock system, both within the universe. In this way the overall system can remain timeless while parts experience time via entanglement. The issue remains an open question closely related to attempted theories of quantum gravity.
In other words, time is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks – or any objects usable as clocks) into the same history.
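A toy numerical model conveys the flavour of this mechanism: a global state of clock plus two-level system that encodes the whole history at once, with ordinary Schrödinger evolution recovered by conditioning on the clock reading. The Hamiltonian, tick count and initial state below are hypothetical illustrative choices, not the original Page–Wootters construction:

```python
import numpy as np

H = np.array([[0.0, 1.0], [1.0, 0.0]])      # toy system Hamiltonian (Pauli-X)
psi0 = np.array([1.0, 0.0], dtype=complex)  # system state at clock reading 0

def U(t):
    """Time-evolution operator exp(-iHt), via eigendecomposition of H."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# "Timeless" global state |Psi> = sum_t |t>_clock (x) U(t)|psi0>_system:
# row t is the system branch correlated with clock reading t.
N = 8
Psi = np.stack([U(t) @ psi0 for t in range(N)])
Psi /= np.linalg.norm(Psi)

def conditional_state(t):
    """System state seen by an internal observer whose clock reads t."""
    return Psi[t] / np.linalg.norm(Psi[t])
```

Nothing in Psi evolves, yet conditional_state(t) coincides, up to normalization and phase, with the Schrödinger-evolved state U(t) applied to psi0: the history lives in the clock–system entanglement.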
In 2013, at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, Ekaterina Moreva, together with Giorgio Brida, Marco Gramegna, Vittorio Giovannetti, Lorenzo Maccone, and Marco Genovese performed the first experimental test of Page and Wootters' ideas. They confirmed for photons that time is an emergent phenomenon for internal observers but absent for external observers of the universe just as the Wheeler–DeWitt equation predicts.
The consistent discretizations approach developed by Jorge Pullin and Rodolfo Gambini has no constraints. These are lattice approximation techniques for quantum gravity. In the canonical approach, if one discretizes the constraints and equations of motion, the resulting discrete equations are inconsistent: they cannot be solved simultaneously. To address this problem, one uses a technique based on discretizing the action of the theory and working with the discrete equations of motion. These are automatically guaranteed to be consistent. Most of the hard conceptual questions of quantum gravity are related to the presence of constraints in the theory. Consistent discretized theories are free of these conceptual problems and can be straightforwardly quantized, providing a solution to the problem of time. The situation is a bit more subtle than this: although there are no constraints and there is "general evolution", the latter is only in terms of a discrete parameter that is not physically accessible. The way out is addressed in a way similar to the Page–Wootters approach: one picks one of the physical variables to be a clock and asks relational questions. These ideas, where the clock is also quantum mechanical, have led to a new interpretation of quantum mechanics, the Montevideo interpretation of quantum mechanics. This new interpretation solves the problems with using environmental decoherence as a solution to the measurement problem in quantum mechanics by invoking fundamental limitations in the measurement process, due to the quantum mechanical nature of clocks. These limitations are very natural in the context of generally covariant theories such as quantum gravity, where the clock must be taken as one of the degrees of freedom of the system itself. This fundamental decoherence has also been put forward as a way to resolve the black hole information paradox.
In certain circumstances, a matter field is used to de-parametrize the theory and introduce a physical Hamiltonian. This generates physical time evolution, not a constraint.
In reduced phase-space quantization, the constraints are solved first and the reduced phase space is then quantized. This approach was considered for some time to be impossible, as it seems to require first finding the general solution to Einstein's equations. However, with the use of ideas involved in Dittrich's approximation scheme (built on ideas of Carlo Rovelli), a way to explicitly implement, at least in principle, a reduced phase-space quantization has become viable.
Avshalom Elitzur and Shahar Dolev argue that quantum-mechanical experiments such as the "quantum liar" provide evidence of inconsistent histories, and that spacetime itself may therefore be subject to change affecting entire histories. Elitzur and Dolev also believe that an objective passage of time and relativity can be reconciled and that it would resolve many of the issues with the block universe and the conflict between relativity and quantum mechanics.
One solution to the problem of time proposed by Lee Smolin is that there exists a "thick present" of events, in which two events in the present can be causally related to each other, but in contrast to the block-universe view of time in which all time exists eternally. Marina Cortês and Lee Smolin argue that certain classes of discrete dynamical systems demonstrate time asymmetry and irreversibility, which is consistent with an objective passage of time.
Weyl time in scale-invariant quantum gravity
Motivated by the Immirzi ambiguity in loop quantum gravity and the near-conformal invariance of the standard model of elementary particles, Charles Wang and co-workers have argued that the problem of time may be related to an underlying scale invariance of gravity–matter systems. Scale invariance has also been proposed to resolve the hierarchy problem of fundamental couplings. As a global continuous symmetry, scale invariance generates a conserved Weyl current according to Noether’s theorem. In scale-invariant cosmological models, this Weyl current naturally gives rise to a harmonic time. In the context of loop quantum gravity, Charles Wang et al. suggest that scale invariance may lead to the existence of a quantized time.
Thermal time hypothesis
The thermal time hypothesis is a possible solution to the problem of time in classical and quantum theory as has been put forward by Carlo Rovelli and Alain Connes. Physical time flow is modeled as a fundamental property of the theory, a macroscopic feature of thermodynamical origin.
See also
Hořava–Lifshitz gravity
De Donder–Weyl theory
References
Further reading
The Order of Time by Carlo Rovelli
Time Reborn by Lee Smolin
The Singular Universe and the Reality of Time by Lee Smolin and Roberto Mangabeira Unger
Philosophy of physics
Philosophical problems
Quantum gravity
Theoretical physics
Philosophy of time | Problem of time | [
"Physics"
] | 1,687 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Physical quantities",
"Time",
"Theoretical physics",
"Unsolved problems in physics",
"Quantum gravity",
"Philosophy of time",
"Spacetime",
"Physics beyond the Standard Model"
] |
61,628,773 | https://en.wikipedia.org/wiki/Thermal%20oscillator | A thermal oscillator is a system where conduction along thermal gradients overshoots thermal equilibrium, resulting in thermal oscillations where parts of the system oscillate between being colder and hotter than average.
References
Thermodynamics | Thermal oscillator | [
"Physics",
"Chemistry",
"Mathematics"
] | 54 | [
"Thermodynamics stubs",
"Physical chemistry stubs",
"Thermodynamics",
"Dynamical systems"
] |
52,766,867 | https://en.wikipedia.org/wiki/List%20of%20types%20of%20sets | Sets can be classified according to the properties they have.
Relative to set theory
Empty set
Finite set, Infinite set
Countable set, Uncountable set
Power set
Relative to a topology
Closed set
Open set
Clopen set
Fσ set
Gδ set
Compact set
Relatively compact set
Regular open set, regular closed set
Connected set
Perfect set
Meagre set
Nowhere dense set
Relative to a metric
Bounded set
Totally bounded set
Relative to measurability
Borel set
Baire set
Measurable set, Non-measurable set
Universally measurable set
Relative to a measure
Negligible set
Null set
Haar null set
In a linear space
Convex set
Balanced set, Absolutely convex set
Relative to the real/complex numbers
Fractal set
Ways of defining sets/Relation to descriptive set theory
Recursive set
Recursively enumerable set
Arithmetical set
Diophantine set
Hyperarithmetical set
Analytical set
Analytic set, Coanalytic set
Suslin set
Projective set
Inhabited set
More general objects still called sets
Multiset
See also
Basic concepts in set theory
Sets
Set theory | List of types of sets | [
"Mathematics"
] | 215 | [
"Mathematical logic",
"Basic concepts in set theory",
"Set theory"
] |
52,768,371 | https://en.wikipedia.org/wiki/Syrosingopine | Syrosingopine is a drug, derived from reserpine. It is used (since about 1960) to treat hypertension.
Research
A combination of the diabetes drug metformin and syrosingopine killed tumor cells in blood samples from leukemia patients, while it did not damage blood cells in samples from healthy patients. The combination of metformin and syrosingopine also reduced or eliminated tumors in mice with malignant liver cancer. The drugs interfere with the cancer cells' glucose (i.e. energy) supply and utilization. Cancer cells have much higher energy requirements than normal cells, making them vulnerable when there is a reduction in the available energy supply. Syrosingopine inhibits the degradation of sugars within the cells.
References
Antihypertensive agents
Indoloquinolizines
Tryptamine alkaloids
Indole ethers at the benzene ring
Monoamine-depleting agents
VMAT inhibitors
Heterocyclic compounds with 5 rings
Methyl esters
Carbonate esters
Methoxy compounds | Syrosingopine | [
"Chemistry"
] | 212 | [
"Tryptamine alkaloids",
"Alkaloids by chemical classification"
] |
52,768,731 | https://en.wikipedia.org/wiki/C24H36O4 | {{DISPLAYTITLE:C24H36O4}}
The molecular formula C24H36O4 (molar mass: 388.54 g/mol) may refer to:
Bolandiol dipropionate, or norpropandrolate
Methandriol diacetate, or methylandrostenediol diacetate
Testosterone acetate propionate
Molecular formulas | C24H36O4 | [
"Physics",
"Chemistry"
] | 83 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
52,768,889 | https://en.wikipedia.org/wiki/CSIRO%20Oceans%20and%20Atmosphere | CSIRO Oceans and Atmosphere (O&A) (2014–2022) was one of the then 8 Business Units (formerly: Flagships) of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia's largest government-supported science research agency. In December 2022 it was merged with CSIRO Land and Water to form a single, larger Business Unit called simply, "CSIRO Environment".
History
The CSIRO Oceans and Atmosphere (O&A) Business Unit was formed in 2014 as one of the then 10 "Flagship" operational units of the Commonwealth Scientific and Industrial Research Organisation (CSIRO) as part of a major organisational restructure; from 2015 onwards the term "Flagship" was officially dropped. This Business Unit was formed essentially as a synthesis of the pre-existing CSIRO Division of Marine and Atmospheric Research (CMAR), representing the scientific capability, and the previously established Wealth from Oceans (WfO) Flagship, which was the route via which much of the relevant Australian government research funding was directed. In 2016, its Director was Dr. Ken Lee, previously WfO Flagship Director; in 2017 its Director was Dr. Tony Worby, previously with the Antarctic Climate and Ecosystems Cooperative Research Centre (ACE CRC); and for the period 2021–2022 its final Director was Dr. Dan Metcalfe. The O&A Business Unit employed between 350 and 400 staff located at its various laboratories, including Hobart (Tasmania), Aspendale (Victoria), Dutton Park (Queensland), Black Mountain (Canberra) and Floreat Park (Western Australia). For 2016 it was quoted as operating with an annual budget of A$108 million, with its research organised into the following programs: Climate Science Centre; Coastal Development and Management; Earth System Assessment; Engineering and Technology; Marine Resources and Industries; and Ocean and Climate Dynamics. Certain previous CMAR activities, notably those involving the operation of the Marine National Facility (research vessel) RV Investigator and several scientific collections, are now managed within the separate CSIRO National Facilities and Collections Program.
The previous CSIRO Division of Marine and Atmospheric Research was itself formed as a result of a 2005 merger between the former CSIRO Division of Marine Research, with laboratories in Hobart, Brisbane, and Perth, and the CSIRO Division of Atmospheric Research, with laboratories in Aspendale and Canberra. The Division of Marine Research was formed in 1997 as a merger between two previous CSIRO Divisions, the Division of Fisheries Research and the Division of Oceanography, both with their headquarters in Hobart since 1984; prior to that time, the Division of Fisheries and Oceanography (subsequently separate Divisions) had occupied facilities in Cronulla, New South Wales since its inception in 1938 (following the CSIRO's departure this site became the New South Wales State Cronulla Fisheries Research Centre).
In December 2022 it was announced that CSIRO Oceans and Atmosphere was to merge with CSIRO Land and Water to form a new Business Unit, simply entitled Environment.
Seagoing capabilities
Through the 1980s and 1990s the marine Divisions of CSIRO had the use of both the RV Southern Surveyor, equipped for biological as well as oceanographic research, and the purpose-built RV Franklin for physical and chemical oceanographic research, both of which served at various times as the Marine National Facility for the nation (meaning that other agencies could also carry out research using these vessels at what was effectively a subsidised rate by the Australian government). The last of the vessels to be retired, the Southern Surveyor, was replaced in 2014 by a new purpose-built research vessel to serve as the Marine National Facility, the RV Investigator. Coupled with these major vessels, all capable of significant ocean-going research expeditions, staff were able to use a range of smaller boats and sometimes, charter vessels to carry out research in a range of coastal waters.
2016 Climate Science cuts controversy and subsequent partial restoration
In February 2016 the chief executive of CSIRO, Dr Larry Marshall, announced that research into the fundamentals of climate science was no longer a priority for CSIRO and up to 110 jobs were feared to be cut from the climate research section(s) of the Oceans and Atmosphere Unit. After overwhelming negative reaction both within Australia and overseas, along with the forced redundancy of prominent climate scientists including the internationally renowned sea level expert Dr John Church, the Australian Government intervened with a directive and promise of new money to support the restoration of 15 jobs and the creation of a new Climate Science Centre to be based in Hobart with a staff of 40, with funding guaranteed for 10 years from 2016, although the expected number of job losses for O&A was still estimated at 75. While the establishment of the new Centre was described as a "major U-turn in the direction of the CSIRO" and a win for the Turnbull government over the previous CSIRO announcement, the generally positive reaction from other scientists was qualified by the fact that the new Centre would still represent a net loss to CSIRO's previous capability in this area.
Selected notable scientists associated with O&A and its predecessors
Kenneth Radway Allen - fisheries biologist, International Whaling Commission (IWC) panel member, and former head of the CSIRO Division of Fisheries and Oceanography in Cronulla
Greg Ayers - atmospheric scientist, Fellow of the Australian Academy of Technological Sciences and Engineering, and subsequently Director of the Australian Bureau of Meteorology, 2009-2012
John A. Church - renowned climate scientist, winner of a number of medals and Fellow of the Australian Academy of Science, also co-convening lead author for the International Panel for Climate Change (IPCC) Fifth Assessment Report
Shirley Jeffrey - discoverer of chlorophyll C and internationally renowned microalgal researcher, winner of numerous medals and Fellow of the Australian Academy of Science
Peter R. Last - ichthyologist, former curator of the Australian National Fish Collection, and responsible for the description of numerous new shark and ray species; co-author (with John Stevens) of Sharks and Rays of Australia (2009)
Trevor McDougall - oceanographer, Fellow of the Royal Society and 2011 winner of the Prince Albert I Medal for significant work in the physical and chemical sciences of the oceans
Graeme Pearman - international expert on climate change, winner of numerous medals and Fellow of the Australian Academy of Science
Michael Raupach - climate scientist and founding co-chair of the Global Carbon Project (GCP) and Fellow of the Australian Academy of Science
Keith J. Sainsbury - researcher on shelf ecosystems and winner of the 2004 Japan Prize for scientific achievement
Penny Whetton - climate researcher, a lead author of the IPCC's Third Assessment Report, and of the Fourth Assessment Report which was awarded the 2007 Nobel Peace Prize (jointly with Al Gore)
Susan Wijffels - oceanographer with special interest in the international Argo float program; winner of the Australian Meteorological and Oceanographic Society's Priestley Medal and the Australian Academy of Science's Dorothy Hill Award in recognition of her efforts to understand the role of the oceans in climate change.
Books on CSIRO's marine research activities
CSIRO At Sea, a "popular" account of the early research activities of the marine components of the relevant CSIRO Divisions (former Divisions of Fisheries, Fisheries and Oceanography, Oceanography, and Fisheries Research) was published in 1988, a few years after the relocation of the majority of CSIRO's marine research activities to Hobart from Cronulla, New South Wales.
See also
Network of Aquaculture Centres in Asia-Pacific
References
External links
Former CSIRO Oceans and Atmosphere web page (Archived copy, November 2022)
Former CSIRO Wealth from Oceans web page (Archived copy, February 2014)
Former CSIRO Marine and Atmospheric Research Division home page (accessed 4 January 2017)
CSIRO Marine and Atmospheric Research Publications Lists - CMAR compilations (accessed 4 January 2017)
CSIRO Marine and Atmospheric Research Publications as listed by Google Scholar (accessed 4 January 2017)
2014 establishments in Australia
2022 disestablishments in Australia
Scientific organisations based in Australia
Marine biology
Fisheries agencies
Oceanography
Governmental meteorological agencies in Oceania
CSIRO | CSIRO Oceans and Atmosphere | [
"Physics",
"Biology",
"Environmental_science"
] | 1,665 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"Marine biology"
] |
52,772,118 | https://en.wikipedia.org/wiki/Nanopipette | Nanopipettes are pipettes in nanometer scale, generally made from quartz capillaries with the help of laser-based pipette puller system. Glass and carbon nanopipettes are most encountered ones in literature. Carbon nanopipettes are nanopipette shaped, hollow carbon layer. The thickness or diameter of nanopipettes can be altered. Glass nanopipettes have a wide range of usage areas like electrophysiological settings, microinjection needles etc. After glass nanopipettes are fabricated and coated with carbon layer in different ways, wet-etching method is applied to get carbon nanopipettes. Wet-etching method is done to get rid of glass nanopipettes.
Nanopipettes have made it possible to explore protrusions, dendrites and, more generally, the properties of nanostructures, a field known as nanophysiology.
References
Nanotechnology | Nanopipette | [
"Materials_science",
"Engineering"
] | 179 | [
"Nanotechnology",
"Materials science"
] |
52,772,188 | https://en.wikipedia.org/wiki/Drosomycin | Drosomycin is an antifungal peptide from Drosophila melanogaster and was the first antifungal peptide isolated from insects. Drosomycin is induced by infection by the Toll signalling pathway, while expression in surface epithelia like the respiratory tract is instead controlled by the immune deficiency pathway (Imd). This means that drosomycin, alongside other antimicrobial peptides (AMPs) such as cecropins, diptericin, drosocin, metchnikowin and attacin, serves as a first line defence upon septic injury. However drosomycin is also expressed constitutively to a lesser extent in different tissues and throughout development.
Structure
Drosomycin is a 44-residue defensin-like peptide containing four disulfide bridges. These bridges stabilize a structure involving one α-helix and three β-sheets. Owing to these four disulfide bridges, drosomycin is resistant to degradation and the action of proteases. The cysteine-stabilized αβ motif of drosomycin is also found in Drosophila defensin, and some plant defensins. Drosomycin has greater sequence similarity with these plant defensins (up to 40%) than with other insect defensins. The structure was discovered in 1997 by Landon and his colleagues. The αβ motif of drosomycin is also found in a scorpion neurotoxin, and drosomycin potentiates the action of this neurotoxin on nerve excitation.
Drosomycin multigene family
At the nucleotide level, drosomycin is a 387 bp-long gene (Drs) which lies on Muller element 3L, very near six other drosomycin-like (Drsl) genes. These various drosomycins are referred to as the drosomycin multigene family. However, only drosomycin itself is part of the systemic immune response, while the other genes are regulated in different fashions. The antimicrobial activity of these various drosomycin-like peptides also differs. In 2015, Gao and Zhu found that in some Drosophila species some of these genes have been duplicated, such that D. takahashii has 11 genes in the drosomycin multigene family in total.
Function
Drosomycin appears to have three major effects on fungi: the first is partial lysis of hyphae; the second is inhibition of spore germination (at higher concentrations of drosomycin); and the last is a delay of hyphal growth, which leads to hyphal branching (at lower concentrations of drosomycin). The exact mechanism of action against fungi still has to be clarified. In 2019, Hanson and colleagues generated the first drosomycin mutant, finding that flies lacking drosomycin were indeed more susceptible to fungal infection.
References
Peptides
Antifungals
Defensins | Drosomycin | [
"Chemistry"
] | 631 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
52,772,642 | https://en.wikipedia.org/wiki/Brune%20test | The Brune test (named after the South African mathematician Otto Brune) is used to check the permissibility of the combination of two or more two-port networks (or quadripoles) in electrical circuit analysis. The test determines whether the network still meets the port condition after the two-ports have been combined. The test is a sufficient, but not necessary, test.
Series-series connection
To check whether two two-port networks can be connected in a series-series configuration, first only the input ports are connected in series, a voltage is applied to the input, and the open-circuit voltage is measured or calculated between the output terminals to be connected. If there is a voltage drop, the two-port networks cannot be combined in series. The same test is repeated from the output side of the two-port networks (series connection of the output ports, application of a voltage to the output, measurement/calculation of the open-circuit voltage between the input terminals to be connected). Only if there is no voltage drop in either case is the series combination of the two-port networks permissible.
Examples
The first example fails the series-series test because the through path between the lower terminals of 2-port #1 short-circuits part of the circuitry in 2-port #2. The second example passes the series-series test. The 2-ports are the same as in the first example, but 2-port #2 has been flipped, or equivalently the choice of terminals to be placed in series has changed. The result is that the through path between the lower terminals of 2-port #1 simply provides a parallel path to the through path between the upper terminals of 2-port #2. The third example is the same as the first example, except that it passes the Brune test because ideal isolating transformers have been placed at the right side terminals which break the through paths.
Parallel-parallel connection
To check whether two two-port networks can be connected in a parallel-parallel configuration, first only the input ports are connected in parallel, a voltage is applied to the input, and the open-circuit voltage is measured or calculated between the output ports, each of which is short-circuited. If there is a voltage drop, the two-port networks cannot be combined in parallel. The same test is repeated from the output side of the two-port networks (parallel connection of the output ports, application of a voltage to the output, measurement/calculation of the open-circuit voltage between the short-circuited input ports). Only if there is no voltage drop in either case is the parallel combination of the two-port networks permissible.
Hybrid connection
A similar approach as above works for the hybrid connection (series-parallel connection) and the inverse hybrid connection (parallel-series connection).
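When the Brune test is passed, the usual interconnection rules for two-port parameters hold: z-parameters add for a series-series connection and y-parameters add for a parallel-parallel connection. A minimal numeric sketch in Python (the matrices are illustrative assumptions, not taken from this article):

```python
import numpy as np

# Hypothetical z-parameter (impedance) matrices of two 2-ports that are
# assumed to have passed the Brune test for the connection in question.
z1 = np.array([[3.0, 1.0],
               [1.0, 2.0]])
z2 = np.array([[4.0, 2.0],
               [2.0, 5.0]])

# Series-series connection: the z-parameters of the combination are the sum.
z_series = z1 + z2

# Parallel-parallel connection: the y-parameters (admittances) add instead.
y1 = np.linalg.inv(z1)
y2 = np.linalg.inv(z2)
y_parallel = y1 + y2
```

If a pair of networks fails the test, these addition rules do not apply, because the port condition (equal currents into and out of each port) is violated after interconnection.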
References
Two-port networks | Brune test | [
"Engineering"
] | 579 | [
"Two-port networks",
"Electronic engineering"
] |
52,773,150 | https://en.wikipedia.org/wiki/Algorithmic%20transparency | Algorithmic transparency is the principle that the factors that influence the decisions made by algorithms should be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms. Although the phrase was coined in 2016 by Nicholas Diakopoulos and Michael Koliska about the role of algorithms in deciding the content of digital journalism services, the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit.
The phrases "algorithmic transparency" and "algorithmic accountability" are sometimes used interchangeably – especially since they were coined by the same people – but they have subtly different meanings. Specifically, "algorithmic transparency" states that the inputs to the algorithm and the algorithm's use itself must be known, but they need not be fair. "Algorithmic accountability" implies that the organizations that use algorithms must be accountable for the decisions made by those algorithms, even though the decisions are being made by a machine, and not by a human being.
Current research around algorithmic transparency is interested both in the societal effects of accessing remote services running algorithms, and in the mathematical and computer science approaches that can be used to achieve algorithmic transparency. In the United States, the Federal Trade Commission's Bureau of Consumer Protection studies how algorithms are used by consumers by conducting its own research on algorithmic transparency and by funding external research. In the European Union, the data protection laws that came into effect in May 2018 include a "right to explanation" of decisions made by algorithms, though it is unclear what this means. Furthermore, the European Union founded the European Centre for Algorithmic Transparency (ECAT).
See also
Black box
Explainable AI
Regulation of algorithms
Reverse engineering
Right to explanation
Algorithmic accountability
References
Accountability
Algorithms
Theoretical computer science
Transparency (behavior) | Algorithmic transparency | [
"Mathematics"
] | 361 | [
"Theoretical computer science",
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
32,790,202 | https://en.wikipedia.org/wiki/Varespladib | Varespladib is an inhibitor of the IIa, V, and X isoforms of secretory phospholipase A2 (sPLA2). The molecule acts as an anti-inflammatory agent by disrupting the first step of the arachidonic acid pathway of inflammation. From 2006 to 2012, varespladib was under active investigation by Anthera Pharmaceuticals as a potential therapy for several inflammatory diseases, including acute coronary syndrome and acute chest syndrome. The trial was halted in March 2012 due to inadequate efficacy. The selective sPLA2 inhibitor varespladib (IC50 value 0.009 μM in chromogenic assay, mole fraction 7.3×10⁻⁶) was studied in the VISTA-16 randomized clinical trial (clinicaltrials.gov Identifier: NCT01130246) and the results were published in 2014. The sPLA2 inhibition by varespladib in this setting seemed to be potentially harmful, and thus not a useful strategy for reducing adverse cardiovascular outcomes from acute coronary syndrome. Since 2016, scientific research has focused on the use of varespladib as an inhibitor of snake venom toxins using various types of in vitro and in vivo models. Varespladib showed a significant inhibitory effect on snake venom PLA2, which makes it a potential first-line drug candidate in snakebite envenomation therapy. In 2019, the U.S. Food and Drug Administration (FDA) granted varespladib orphan drug status for its potential to treat snakebite.
History
Varespladib methyl was originally developed jointly by Eli Lilly and Company and Shionogi & Co., Ltd., and was acquired by Anthera Pharmaceuticals in 2006.
A Phase II study demonstrated selective sPLA2 inhibition as well as statistically significant anti-inflammatory responses and reductions in LDL cholesterol levels. Two other Phase II trials, conducted in patients with coronary artery disease, found significant decreases in sPLA2 and LDL cholesterol levels, as well as C-reactive protein (CRP) and other inflammatory biomarkers. Varespladib methyl has also been shown to further reduce LDL and inflammatory biomarker levels when administered in conjunction with a cholesterol lowering statin therapy.
In 2010, a Phase III study entitled VISTA-16 was initiated to evaluate the safety and efficacy of short-term treatment with varespladib methyl in subjects with ACS. The trial was halted in March 2012 due to insufficient efficacy. On November 18, 2013, an excess of myocardial infarctions, and of the composite endpoint of cardiovascular mortality, myocardial infarctions and stroke in the VISTA-16 study were reported.
The first report on its efficacy as an antidote for snake venoms dates from 2016. Due to its oral bioavailability, it is considered a potential first-line field treatment for snakebite envenomation, which could be applied before provision of definitive medical care.
Oral varespladib
Varespladib methyl (also known as A-002, formerly LY333013 and S-3013) is a secretory phospholipase A2 (sPLA2) inhibitor formerly under development by Anthera Pharmaceuticals as a treatment for acute coronary syndrome (ACS). Varespladib methyl is an orally bioavailable prodrug of the molecule varespladib. From 2006 to 2012, varespladib methyl was under active investigation by Anthera Pharmaceuticals as a potential therapy for several inflammatory diseases, including acute coronary syndrome. In March 2012, Anthera halted further investigation of varespladib per a recommendation from an independent Data Safety Monitoring Board. Varespladib and varespladib methyl were characterised as effective molecules in the neutralization of snake venoms and are under experimental evaluation.
Intravenous varespladib
Varespladib sodium (also known as A-001, previously LY315920 and S-5920) is a sodium salt of varespladib designed for intravenous delivery. It was under evaluation by Anthera Pharmaceuticals as an anti-inflammatory sPLA2 inhibitor for the prevention of acute chest syndrome (ACS), the leading cause of death for patients with sickle-cell disease.
Elevated serum levels of sPLA2 have been observed in sickle-cell patients preceding and during ACS episodes. This profound elevation in sPLA2 levels is not observed in sickle-cell patients at steady-state or during a vaso-occlusive crisis, or in patients with respiratory diseases such as pneumonia. A reduction in serum sPLA2 levels, for example through blood transfusion, reduces the risk of an ACS, suggesting that sPLA2 plays an important role in the onset of ACS.
Anthera Pharmaceuticals acquired varespladib sodium from Lilly and Shionogi in 2006. In 2007, the U.S. Food and Drug Administration (FDA) granted varespladib sodium orphan drug status for its potential to treat patients with sickle-cell disease, which was later withdrawn. In 2009, Anthera Pharmaceuticals completed a Phase II study of varespladib sodium in subjects with sickle cell disease at risk for ACS.
Inhibitory effect on snake venoms
Snakebite envenomation can cause local tissue damage, with edema, hemorrhage, myonecrosis, and systemic toxic responses, including organ failure. In an early report on inhibition of snake venom toxicities, varespladib and its orally bioavailable prodrug methyl-varespladib (LY333013) showed strong inhibition of 28 types of svPLA2s from six continents. Varespladib treatment exerted a significant inhibitory effect on snake venom PLA2 both in vitro and in vivo. Hemorrhage and myonecrosis initiated by D. acutus, A. halys, N. atra, and B. multicinctus in an animal model were significantly reversed by varespladib. Furthermore, edema in gastrocnemius muscle was also attenuated. The sPLA2 inhibitor, LY315920 (varespladib sodium), and its orally bioavailable prodrug, LY333013 (varespladib methyl), were highly effective in preventing lethality following experimental envenoming by M. fulvius in a porcine animal model.
Considering that some of the toxins of snake venoms are enzymes, the search for low molecular weight enzyme inhibitors that could be safely administered immediately after a snakebite re-focused scientists' attention on Varespladib. Its ability to neutralize the enzymatic and toxic activities of three isolated PLA2 toxins (from medically important snakes found in different region around the world) of structural groups I (pseudexin) and II (crotoxin B and myotoxin I) was evaluated. The results obtained showed that Varespladib was able to neutralize the in vitro cytotoxic and in vivo myotoxic activities of purified PLA2s of both the structural group I (pseudexin) and II (myotoxin-I and crotoxin B), however further detailed analysis are needed. Varespladib also effectively inhibited the non-enzymatic myotoxic activity of the snake venom PLA2-like protein (MjTX-II). Co-crystallization of Varespladib with MjTX-II toxin revealed that the compound binds to a hydrophobic channel of the protein. Such interaction blocks fatty acids binding, thus inhibiting allosteric activation of the toxin. This leads to the toxin losing its ability to disrupt cell membranes.
In 2019, the U.S. Food and Drug Administration (FDA) designated varespladib orphan drug status for its potential to treat snakebite, without being FDA approved for Orphan Indication.
Mechanism
Prodrug activation
Varespladib methyl, in contrast to varespladib, is orally bioavailable and after absorption from the GI tract, it undergoes rapid ester hydrolysis to the active molecule – varespladib.
sPLA2 inhibition
Increased levels of sPLA2 have been observed in patients with cardiovascular disease, and may lead to both acute and chronic disease manifestations by promoting vascular inflammation. Plasma levels of sPLA2 can predict coronary events in patients who recently suffered an ACS as well as in those with stable coronary artery disease.
Furthermore, sPLA2 remodels lipoproteins, notably low-density lipoproteins (LDL) and their receptors, which are responsible for removing cholesterol from the body. This remodeling can lead to increased deposition of LDL and cholesterol in the artery wall. In combination with chronic vascular inflammation, these deposits lead to atherosclerosis.
Varespladib inhibits the IIA, V and X isoforms of sPLA2 to reduce inflammation, lower and modulate lipid levels, and reduce levels of C-reactive protein (CRP) and interleukin-6 (IL-6), both indicators of inflammation.
Snake venom antidote activity
sPLA2 is also present in snake venoms and implicated in their toxicity. It plays a role in the morbidity and mortality from snakebite envenomations, triggering induced cell lysis, disrupted hemostasis, and diminished oxygen transport, as well as myotoxicity and neurotoxicity which can lead to paralysis.
Varespladib methyl, as well as varespladib, were found to be inhibitors of the sPLA2 of snake venoms. Varespladib methyl was less potent than varespladib. Both showed activity against a broad spectrum of different snake venoms originating from six continents. They protected rodents against neurotoxicity and hemostatic toxicity, increasing survival of envenomed animals.
Varespladib also effectively inhibited in vitro and in vivo the non-enzymatic myotoxic activity of snake venom's PLA2-like protein (MjTX-II). Co-crystallization of varespladib with MjTX-II toxin (PDB code: 6PWH) revealed that the drug binds to a hydrophobic channel of the protein. This blocks fatty acids from binding there, thus inhibiting their allosteric activation of the toxin, thereby impairing its ability to disrupt cell membranes.
References
External links
Varespladib methyl at Anthera.com
Varespladib sodium at Anthera.com
Varespladib methyl Phase III study at ClinicalTrials.gov
Varespladib sodium Phase II study at ClinicalTrials.gov
Tryptamines
Carboxamides
Carboxylic acids | Varespladib | [
"Chemistry"
] | 2,313 | [
"Carboxylic acids",
"Functional groups"
] |
32,793,668 | https://en.wikipedia.org/wiki/Rabbit%20hybridoma | A rabbit hybridoma is a hybrid cell line formed by the fusion of an antibody producing rabbit B cell with a cancerous B-cell (myeloma).
History
The rabbit immune system has been documented as a vehicle for developing antibodies with higher affinity and more diverse recognition of many molecules including phospho-peptides, carbohydrates and immunogens that are not otherwise immunogenic in mouse. However, until recently, the type of antibodies available from rabbit had been limited in scope to polyclonal antibodies. Several efforts were made to generate rabbit monoclonal antibodies after the development of mouse hybridoma technology in the 1970s. Research was conducted into mouse-rabbit hetero-hybridomas to make rabbit monoclonal antibodies. However, these hetero-hybridomas were ultimately difficult to clone; the clones were generally unstable and did not secrete antibody over a prolonged period of time.
Initial fusion partner
In 1995, Katherine Knight and her colleagues, at Loyola University of Chicago, succeeded in developing a double transgenic rabbit over-expressing the oncogenes v-abl and c-myc under the control of the immunoglobulin heavy and light chain enhancers. The rabbit formed a myeloma-like tumor, allowing the isolation of a plasmacytoma cell line, named 240E-1. Fusion of 240E-1 cells with rabbit lymphocytes produced hybridomas that secreted rabbit monoclonal antibodies in a consistent manner. However, like the early mouse myeloma lines developed in the 1970s, stability was a concern. A number of laboratories which had received the 240E-1 cell line from Dr. Knight’s laboratory reported stability problems with the fusion cell line 240E-1.
Improved fusion partner
In 1996, Weimin Zhu and Robert Pytela, at the University of California San Francisco (UCSF), obtained 240E-1 from Dr. Knight’s laboratory and attempted to develop an improved rabbit hybridoma. Improvements in the characteristics of 240E-1 were accomplished by repeated subcloning, selection for high fusion efficiency, robust growth, and morphological characteristics such as a bright appearance under a phase-contrast microscope. Selected subclones were further tested for their ability to produce a stable hybridoma and monoclonal antibody secretion. After multiple rounds of subcloning and selection processes, a new cell line named 240E-W, was identified and which expressed better fusion efficiency and stability. Cell line 240E-W has since been further developed and optimized for production of rabbit monoclonal antibodies for research and commercial applications.
Process
The process of hybridoma formation in a rabbit first entails obtaining B-cells from a rabbit that has been immunized. There are numerous immunization protocols for rabbit, notably for the generation of polyclonal antibodies. After immunization, B-cells are fused with a candidate rabbit fusion partner cell line to form hybridomas. Resulting antibodies from hybridomas are screened against the antigen of interest by diagnostic tests such as ELISA, western blot, immunohistochemistry, and FACS. The resulting hybridomas may be subcloned to ensure monoclonal characteristics.
Humanization of rabbit antibodies
Mitchell Ho and Ira Pastan at National Cancer Institute (Bethesda, USA) isolated a group of rabbit monoclonal antibodies (e.g. YP218, YP223) that recognize rare epitopes of mesothelin, including poorly immunogenic sites close to the C terminal end, for cancer therapy. Dr. Ho's laboratory analyzed the complex structures of rabbit antibodies with their antigens from the Protein Data Bank, and identified antigen-contacting residues on the rabbit Fv within the 6 Angstrom distance to its antigen. They named "HV4" and "LV4", non-complementarity-determining region (CDR) loops that are structurally close to the antigen and located in framework 3 of the rabbit heavy chain and light chain, respectively. Based on computational structural modeling, Ho and Zhang designed a humanization strategy by grafting the combined Kabat/IMGT/Paratome CDRs into a human germline framework sequence. The immunotoxins composed of the humanized rabbit Fvs (e.g. hYP218) fused to a clinically used toxin showed stronger cytotoxicity against tumor cells than the immunotoxins derived from their original rabbit Fvs. The CAR T cells based on the hYP218 antibody also show effective inhibition of tumor growth in mice. The method (i.e. grafting combined Kabat/IMGT/Paratome rabbit CDRs to a stable human germline framework) has been suggested as a general approach to humanizing rabbit antibodies.
References
External links
Cellosaurus entry for 240E-1
Cellosaurus entry for 240E-W
Immunology
Biotechnology
Immune system
Monoclonal antibodies
Therapeutic antibodies
Rabbit cell lines | Rabbit hybridoma | [
"Biology"
] | 1,034 | [
"Immune system",
"Biotechnology",
"Organ systems",
"Immunology",
"nan"
] |
32,798,132 | https://en.wikipedia.org/wiki/PSR%20J0357%2B3205 | PSR J0357+3205 is a pulsar located about 1,600 light years from Earth. PSR J0357+3205 was originally discovered by the Fermi Gamma-ray Space Telescope in 2009.
X-ray Bright Tail
A very long, X-ray bright tail (over 4 light years across) may be stretching away from PSR J0357+3205. The tail is puzzling because it shares characteristics with other tails extending from pulsars, but differs in certain properties. If the tail is at the same distance as the pulsar then it stretches for 4.2 light years in length. This would make it one of the longest X-ray tails ever associated with a so-called "rotation-powered" pulsar, a class of pulsar that gets its power from the energy lost as the rotation of the pulsar slows down. (Other types of pulsars include those driven by strong magnetic fields and still others that are powered by material falling onto the neutron star.)
Explanation Of The Tail
The X-ray tail may be produced by emission from energetic particles in a pulsar wind, with the particles produced by the pulsar spiralling around magnetic field lines. Other X-ray tails around pulsars have been interpreted as bow-shocks generated by the supersonic motion of pulsars through space, with the wind trailing behind as its particles are swept back by the pulsar's interaction with the interstellar gas it encounters. However, this bow shock interpretation may or may not be correct for PSR J0357, with several issues that need to be explained. For example, PSR J0357 is losing a very small amount of energy as its spin slows down with time. This energy loss is important, because it is converted into radiation and powers a particle wind from the pulsar. This places limits on the amount of energy that particles in the wind can attain, and so might not account for the quantity of X-rays seen in the tail. Another challenge to this explanation is that other pulsars with bow shocks show bright X-ray emission surrounding the pulsar, and this is not seen for PSR J0357+3205. Also, the brightest portion of the tail is well away from the pulsar and this differs from what has been seen for other pulsars with bow shocks. If the pulsar is seen moving in the opposite direction from that of the tail, this would support the bow-shock idea.
References
Perseus (constellation)
Pulsars
Articles containing video clips | PSR J0357+3205 | [
"Astronomy"
] | 543 | [
"Perseus (constellation)",
"Constellations"
] |
32,799,114 | https://en.wikipedia.org/wiki/Riemann%20invariant | Riemann invariants are mathematical transformations made on a system of conservation equations to make them more easily solvable. Riemann invariants are constant along the characteristic curves of the partial differential equations, which is where they obtain the name invariant. They were first obtained by Bernhard Riemann in his work on plane waves in gas dynamics.
Mathematical theory
Consider the set of conservation equations:
where and are the elements of the matrices and where and are elements of vectors. It will be asked if it is possible to rewrite this equation to
To do this curves will be introduced in the plane defined by the vector field . The term in the brackets will be rewritten in terms of a total derivative where are parametrized as
comparing the last two equations we find
which can be now written in characteristic form
where we must have the conditions
where can be eliminated to give the necessary condition
so for a nontrivial solution is the determinant
For Riemann invariants we are concerned with the case when the matrix is an identity matrix to form
notice this is homogeneous due to the vector being zero. In characteristic form the system is
with
Where is the left eigenvector of the matrix and is the characteristic speeds of the eigenvalues of the matrix which satisfy
To simplify these characteristic equations we can make the transformations such that
which form
An integrating factor can be multiplied in to help integrate this. So the system now has the characteristic form
on
which is equivalent to the diagonal system
The solution of this system can be given by the generalized hodograph method.
Example
Consider the one-dimensional Euler equations, written in terms of density and velocity:
with the speed of sound introduced on account of the isentropic assumption. Write this system in matrix form
where the matrix from the analysis above the eigenvalues and eigenvectors need to be found. The eigenvalues are found to satisfy
to give
and the eigenvectors are found to be
where the Riemann invariants are
( and are the widely used notations in gas dynamics). For perfect gas with constant specific heats, there is the relation , where is the specific heat ratio, to give the Riemann invariants
to give the equations
In other words,
where and are the characteristic curves. This can be solved by the hodograph transformation. In the hodographic plane, if all the characteristics collapse into a single curve, then we obtain simple waves. If the matrix form of the system of PDEs is in the form
Then it may be possible to multiply across by the inverse matrix so long as the matrix determinant of is not zero.
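As a numeric illustration of the perfect-gas case above, a short Python sketch (the flow values are illustrative assumptions, not from the text) computes the Riemann invariants J± = u ± 2c/(γ − 1) and inverts them back to the velocity and sound speed:

```python
# Perfect-gas Riemann invariants (illustrative values, not from the article):
gamma = 1.4          # specific heat ratio for air
u, c = 100.0, 340.0  # flow velocity and sound speed, m/s

# Invariants carried along the C+ and C- characteristics dx/dt = u ± c.
j_plus = u + 2.0 * c / (gamma - 1.0)
j_minus = u - 2.0 * c / (gamma - 1.0)

# Knowing both invariants at a point recovers the primitive variables:
u_back = 0.5 * (j_plus + j_minus)
c_back = 0.25 * (gamma - 1.0) * (j_plus - j_minus)
```

In a numerical characteristics method, j_plus and j_minus would be advected unchanged along their respective characteristics and the state (u, c) reconstructed wherever two characteristics cross.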
See also
Simple wave
References
Invariant theory
Partial differential equations
Conservation equations
Bernhard Riemann | Riemann invariant | [
"Physics",
"Mathematics"
] | 542 | [
"Group actions",
"Conservation laws",
"Mathematical objects",
"Equations",
"Invariant theory",
"Conservation equations",
"Symmetry",
"Physics theorems"
] |
57,418,392 | https://en.wikipedia.org/wiki/Surface%20nuclear%20magnetic%20resonance | Surface nuclear magnetic resonance (SNMR), also known as magnetic resonance sounding (MRS), is a geophysical technique specially designed for hydrogeology. It is based on the principle of nuclear magnetic resonance (NMR) and measurements can be used to indirectly estimate the water content of saturated and unsaturated zones in the earth's subsurface. SNMR is used to estimate aquifer properties, including the quantity of water contained in the aquifer, porosity, and hydraulic conductivity.
History
The MRS technique was originally conceived in the 1960s by Russell H. Varian, one of the inventors of the proton magnetometer. SNMR is a product of a joint effort by many scientists and engineers who started developing this method in the USSR under the guidance of A.G. Semenov and continued this work all over the world. Semenov's team used nuclear magnetic resonance (NMR) for non-invasive detection of proton-containing liquids (hydrocarbons or water) in the subsurface. The Voevodsky Institute of Chemical Kinetics and Combustion of the Siberian Branch of the Russian Academy of Sciences fabricated the first version of the instrument for measurements of magnetic resonance signals from subsurface water ("hydroscope") in 1981.
Principles
The basic principle of operation of magnetic resonance sounding, also known as surface proton magnetic resonance (PMR), is similar to that of the proton magnetometer. Both involve recording the magnetic resonance signal from a proton-containing liquid (for example, water or hydrocarbons). However, in the proton magnetometer, a special sample of liquid is placed into the receiving coil and only the signal frequency is a matter of interest. In MRS, a wire loop 100 m in diameter is used as a transmitting/receiving antenna to probe water in the subsurface. Thus, the main advantage of the MRS method, compared with other geophysical methods, is that the surface measurement of the PMR signal from water molecules ensures that this method only responds to the subsurface water.
A typical MRS survey is conducted in three stages. First, the ambient electromagnetic (EM) noise is measured. Then, a pulse of electrical current is transmitted through a cable on the surface of the ground, applying an external EM field to the subsurface. Finally, the external EM field is terminated, and the magnetic resonance signal is measured.
Three parameters of the measured MRS signal are:
Amplitude (E0), which depends on the number of protons and hence on the quantity of water.
Decay time (T*2), which generally correlates with the mean size of the pores in water-saturated rocks. This is important for aquifer characterization.
Phase (j0), which is measured in the field and is used for a qualitative estimation of the electrical conductivity of rocks.
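As an illustration of how the amplitude and decay time could be estimated from a measured decay envelope, a minimal Python sketch follows; it assumes the simple exponential signal model E(t) = E0·exp(−t/T2*), and all numbers are synthetic, not field data:

```python
import numpy as np

# Synthetic MRS decay envelope (illustrative values: E0 in nV, T2* in ms).
E0, T2 = 500.0, 150.0
t = np.linspace(0.0, 400.0, 200)     # time after pulse cut-off, ms
envelope = E0 * np.exp(-t / T2)

# Since ln E(t) = ln E0 - t/T2*, a log-linear least-squares fit
# recovers both signal parameters.
slope, intercept = np.polyfit(t, np.log(envelope), 1)
E0_est = np.exp(intercept)
T2_est = -1.0 / slope
```

With real field data the envelope is noisy and superimposed on ambient EM noise, so a nonlinear fit of the full oscillating signal (including the phase) would normally be used instead.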
As with many other geophysical methods, MRS is site-dependent. Modeling results show that MRS performance depends on the magnitude of the natural geomagnetic field, the electrical conductivity of rocks, the electromagnetic noise and other factors.
Usage
SNMR can be used in both oil and water exploration, but since oil is generally found at much greater depths, the more common usage is in water exploration. With a depth of investigation of about 200 meters, SNMR is the best way to model aquifers.
See also
Aquifer storage and recovery
Aquifer properties
Groundwater model
Groundwater pollution
Hydraulic tomography
Nuclear magnetic resonance
Earth's field NMR
References
Hydrology
Nuclear magnetic resonance | Surface nuclear magnetic resonance | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 727 | [
"Environmental engineering",
"Hydrology",
"Nuclear magnetic resonance",
"Nuclear physics"
] |
57,421,934 | https://en.wikipedia.org/wiki/Heat%20flux%20measurements%20of%20thermal%20insulation | Heat flux measurements of thermal insulation are applied in laboratory and industrial environments to obtain reference or in-situ measurements of the thermal properties of an insulation material.
Thermal insulation is tested using nondestructive testing techniques relying on heat flux sensors. Procedures and requirements for in-situ measurements are standardized in ASTM C1041 standard: "Standard Practice for In-Situ Measurements of Heat Flux in Industrial Thermal Insulation Using Heat Flux Transducers".
Laboratory methods
On-site methods
On-site heat flux measurements are often focused on testing the thermal transport properties of, for example, pipes, tanks, ovens and boilers, by calculating the heat flux q or the apparent thermal conductivity λ. The real-time energy gain or loss is measured under pseudo-steady-state conditions with minimal disturbance by a heat flux transducer (HFT). This on-site method is for flat surfaces (non-pipes) only.
Measurement procedure
Placement of the HFT:
The sensor should be placed on an area of insulation that represents the overall system. For example, it should not be placed close to an inlet or outlet of a boiler or near a heating element.
Shield the sensor from other sources of heat flux that are not relevant to the measurement, e.g. solar radiation.
Make sure that the HFT is connected to the insulation surface via thermal paste or other conductive material. The emittance of the HFT should match the emittance of the surface as closely as possible. Air or other material between the sensor and the surface of measurement could lead to measurement errors.
Pre-measurements:
Measure the thickness of the insulation material to the nearest millimetre.
Log ambient weather conditions if necessary. Humidity, air movement and precipitation may be of interest for the interpretation of the results.
Measure the temperature of the insulation surface near the sensor and the temperature at the inside of the insulation material, i.e. the process surface.
After successful application of these preparations, connect the sensor to a datalogger or integrating voltmeter and wait until pseudo-steady state is achieved. It is advised to average the readings over a short time period once steady state is achieved. This voltage measurement is the final measurement, but for representative results these steps should be repeated at multiple relevant locations on the insulation.
Calculation and precision
The heat flux can be calculated from the voltage by:
q = V / S
where
V is the voltage measured by the HFT (measured in volt, V)
S is the sensitivity of the HFT (measured in volt per watt per square meter, V/(W/m²))
The apparent thermal conductivity can be calculated from:
λ = q · D / (T_process − T_surface)
where
q is the heat flux calculated from the HFT (measured in watt per square meter, W/m²)
D is the thickness of the insulation material (measured in millimeter, mm)
T_process is the temperature of the process surface, the inside of the material
T_surface is the temperature of the surface near the HFT, the outside of the material
The interpretation and precision of the results depends on the section of measurement, the choice of HFT and external conditions. The correct heat flux sensor and measurement test section are of importance for a good in-situ measurement and should be based on manufacturer recommendations, past experience and careful consideration of the testing area.
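These two calculations can be sketched in a few lines of code; the sensor sensitivity, insulation thickness and temperature readings below are illustrative assumptions, not values from the standard:

```python
def heat_flux(v_sensor, sensitivity):
    """Heat flux q = V / S in W/m^2, from the HFT voltage V (in volts)
    and the transducer sensitivity S (in V per W/m^2)."""
    return v_sensor / sensitivity

def apparent_conductivity(q, thickness_mm, t_process, t_surface):
    """Apparent thermal conductivity in W/(m*K): q * D / (T_process - T_surface),
    with the insulation thickness D converted from millimetres to metres."""
    return q * (thickness_mm / 1000.0) / (t_process - t_surface)

# Assumed, illustrative readings: 2.5 mV from a 10 uV/(W/m^2) sensor on
# 50 mm of insulation between a 150 C process surface and a 30 C outer surface.
q = heat_flux(2.5e-3, 10e-6)
k = apparent_conductivity(q, 50.0, 150.0, 30.0)
```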
Standards
ASTM C1041: Standard Practice for In-Situ Measurements of Heat Flux in Industrial Thermal Insulation Using Heat Flux Transducers
See also
R-value (insulation)
References
Bibliography
Johannesson, G., “Heat Flow Measurements, Thermoelectrical Meters, Function Principles, and Sources of Error”, Division of Building Technology, Lund Institute of Technology, Report TUBH-3003, Lund, Sweden, 1979. (Draft Translation, March 1982, U.S. Army Corps of Engineers)
Poppendiek, H. F., “Why Not Measure Heat Flux Directly?”, Environmental Quarterly 15, No. 1, March 1, 1969.
Gilbo, C. F., “Conductimeters, Their Construction and Use”, ASTM Bulletin No. 212, February, 1956.
Materials testing
ASTM standards
Heat conduction | Heat flux measurements of thermal insulation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 815 | [
"Materials testing",
"Heat conduction",
"Thermodynamics",
"Materials science"
] |
54,100,403 | https://en.wikipedia.org/wiki/Imre%20F%C3%A9nyes | Imre Fényes (; 29July 191713November 1977) was a Hungarian physicist who was the first to propose a stochastic interpretation of quantum mechanics.
Selected publications
References
External links
Imre Fényes biography
1917 births
1977 deaths
People from Békés County
Scientists from Budapest
Franz Joseph University alumni
Academic staff of the University of Debrecen
Academic staff of Eötvös Loránd University
20th-century Hungarian physicists
Theoretical physicists
Quantum physicists | Imre Fényes | [
"Physics"
] | 93 | [
"Theoretical physics",
"Theoretical physicists",
"Quantum mechanics",
"Quantum physicists"
] |
54,101,695 | https://en.wikipedia.org/wiki/G%20equation | In Combustion, G equation is a scalar field equation which describes the instantaneous flame position, introduced by Forman A. Williams in 1985 in the study of premixed turbulent combustion. The equation is derived based on the Level-set method. The equation was first studied by George H. Markstein, in a restrictive form for the burning velocity and not as a level set of a field.
Mathematical description
The G equation reads as
∂G/∂t + v · ∇G = S_L |∇G|
where
v is the flow velocity field
S_L is the local burning velocity with respect to the unburnt gas
The flame location is given by G(x, t) = 0, which can be defined arbitrarily such that G > 0 is the region of burnt gas and G < 0 is the region of unburnt gas. The normal vector to the flame, pointing towards the burnt gas, is n = ∇G/|∇G|.
Local burning velocity
According to Matalon–Matkowsky–Clavin–Joulin theory, the burning velocity of the stretched flame, for small curvature and small strain, is given by
where
is the burning velocity of unstretched flame with respect to the unburnt gas
and are the two Markstein numbers, associated with the curvature term and the term corresponding to flow strain imposed on the flame
are the laminar burning speed and thickness of a planar flame
is the planar flame residence time.
A simple example - Slot burner
The G equation has an exact expression for a simple slot burner. Consider a two-dimensional planar slot burner of slot width b. The premixed reactant mixture is fed through the slot from the bottom with a constant velocity v = (0, U), where the coordinates (x, y) are chosen such that x = 0 lies at the center of the slot and y = 0 lies at the location of the mouth of the slot. When the mixture is ignited, a premixed flame develops from the mouth of the slot to a certain height H in the form of a two-dimensional wedge shape with a wedge angle α. For simplicity, let us assume S_L = constant, which is a good approximation except near the wedge corner where curvature effects will become important. In the steady case, the G equation reduces to
U ∂G/∂y = S_L [(∂G/∂x)² + (∂G/∂y)²]^(1/2)
If a separation of the form G = y − f(x) is introduced, then the equation becomes
U = S_L (1 + f′²)^(1/2), that is, f′ = ±(U² − S_L²)^(1/2)/S_L
which upon integration gives
f(x) = −(U² − S_L²)^(1/2) |x|/S_L + C
Without loss of generality choose the flame location to be at G = 0. Since the flame is attached to the mouth of the slot, at x = ±b/2 and y = 0, the boundary condition is f(±b/2) = 0, which can be used to evaluate the constant C = (U² − S_L²)^(1/2) b/(2S_L). Thus the scalar field is
G(x, y) = y − (U² − S_L²)^(1/2) (b/2 − |x|)/S_L
At the flame tip, we have x = 0, which enables us to determine the flame height
H = b (U² − S_L²)^(1/2)/(2S_L)
and the flame angle α,
tan α = (b/2)/H = S_L/(U² − S_L²)^(1/2)
Using the trigonometric identity tan α = sin α/(1 − sin²α)^(1/2), we have
sin α = S_L/U
In fact, the above formula is often used to determine the planar burning speed S_L by measuring the wedge angle.
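The flame height and wedge angle of the slot burner can be evaluated numerically. A small sketch (the slot width, inflow speed and burning speed below are assumed, illustrative values):

```python
import math

def slot_flame(b, u, s_l):
    """Flame height and wedge half-angle for a 2-D slot burner of slot
    width b, inflow speed u and planar burning speed s_l (requires u > s_l).
    Height: H = b*sqrt(u**2 - s_l**2)/(2*s_l); angle: sin(alpha) = s_l/u."""
    height = b * math.sqrt(u * u - s_l * s_l) / (2.0 * s_l)
    alpha = math.asin(s_l / u)
    return height, alpha

# Assumed values: 2 cm slot, 3 m/s inflow, 0.4 m/s burning speed
# (representative of a stoichiometric methane-air flame).
h, a = slot_flame(0.02, 3.0, 0.4)
```

Inverting the angle relation, measuring the wedge angle α at known inflow speed gives the burning speed as s_l = u·sin α, which is the measurement use mentioned above.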
References
Fluid dynamics
Combustion
Functions of space and time | G equation | [
"Physics",
"Chemistry",
"Engineering"
] | 534 | [
"Functions of space and time",
"Chemical engineering",
"Combustion",
"Piping",
"Spacetime",
"Fluid dynamics"
] |
50,390,340 | https://en.wikipedia.org/wiki/Residence%20time | The residence time of a fluid parcel is the total time that the parcel has spent inside a control volume (e.g.: a chemical reactor, a lake, a human body). The residence time of a set of parcels is quantified in terms of the frequency distribution of the residence time in the set, which is known as residence time distribution (RTD), or in terms of its average, known as mean residence time.
Residence time plays an important role in chemistry and especially in environmental science and pharmacology. Under the name lead time or waiting time it plays a central role respectively in supply chain management and queueing theory, where the material that flows is usually discrete instead of continuous.
History
The concept of residence time originated in models of chemical reactors. The first such model was an axial dispersion model by Irving Langmuir in 1908. This received little attention for 45 years; other models were developed such as the plug flow reactor model and the continuous stirred-tank reactor, and the concept of a washout function (representing the response to a sudden change in the input) was introduced. Then, in 1953, Peter Danckwerts resurrected the axial dispersion model and formulated the modern concept of residence time.
Distributions
The time that a particle of fluid has been in a control volume (e.g. a reservoir) is known as its age. In general, each particle has a different age. The frequency of occurrence of the age τ in the set of all the particles that are located inside the control volume at time t is quantified by means of the (internal) age distribution I(τ, t).
At the moment a particle leaves the control volume, its age is the total time that the particle has spent inside the control volume, which is known as its residence time. The frequency of occurrence of the age τ in the set of all the particles that are leaving the control volume at time t is quantified by means of the residence time distribution, also known as the exit age distribution E(τ, t).
Both distributions are positive and have by definition unitary integrals along the age:
∫₀^∞ I(τ, t) dτ = ∫₀^∞ E(τ, t) dτ = 1
In the case of steady flow, the distributions are assumed to be independent of time, that is ∂I/∂t = ∂E/∂t = 0, which makes it possible to redefine the distributions as simple functions of the age τ only.
If the flow is steady (but a generalization to non-steady flow is possible) and is conservative, then the exit age distribution and the internal age distribution can be related one to the other:
E(τ) = −τ̄ dI(τ)/dτ
where τ̄ is the mean residence time.
Distributions other than I and E can usually be traced back to them. For example, the fraction of particles leaving the control volume at time t with an age greater than or equal to τ is quantified by means of the washout function W, that is the complementary to one of the cumulative exit age distribution:
W(τ) = 1 − ∫₀^τ E(τ′) dτ′
Averages
Mean age and mean residence time
The mean age of all the particles inside the control volume at time t is the first moment of the age distribution:
τ̄_a = ∫₀^∞ τ I(τ, t) dτ
The mean residence time or mean transit time, that is the mean age of all the particles leaving the control volume at time t, is the first moment of the residence time distribution:
τ̄_t = ∫₀^∞ τ E(τ, t) dτ
The mean age and the mean transit time generally have different values, even in stationary conditions:
τ̄_a < τ̄_t : examples include water in a lake with the inlet and outlet on opposite sides and radioactive material introduced high in the stratosphere by a nuclear bomb test and filtering down to the troposphere.
τ̄_a = τ̄_t : E and I are exponential distributions. Examples include radioactive decay and first order chemical reactions (where the reaction rate is proportional to the amount of reactant).
τ̄_a > τ̄_t : most of the particles entering the control volume pass through quickly, but most of the particles contained in the control volume pass through slowly. Examples include water in a lake with the inlet and outlet that are close together and water vapor rising from the ocean surface, which for the most part returns quickly to the ocean, while for the rest is retained in the atmosphere and returns much later in the form of rain.
Turnover time
If the flow is steady and conservative, the mean residence time equals the ratio between the amount of fluid contained in the control volume and the flow rate through it:
τ̄_t = V/Q
where V is the volume of fluid in the control volume and Q is the volumetric flow rate.
This ratio is commonly known as the turnover time or flushing time. When applied to liquids, it is also known as the hydraulic retention time (HRT), hydraulic residence time or hydraulic detention time. In the field of chemical engineering this is also known as space time.
The residence time of a specific compound in a mixture equals the turnover time (that of the compound, as well as that of the mixture) only if the compound does not take part in any chemical reaction (otherwise its flow is not conservative) and its concentration is uniform.
Although the equivalence between the residence time and the ratio does not hold if the flow is not stationary or it is not conservative, it does hold on average if the flow is steady and conservative on average, and not necessarily at any instant. Under such conditions, which are common in queueing theory and supply chain management, the relation is known as Little's Law.
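Under these steady, conservative assumptions the turnover time is a one-line calculation; the basin volume and flow rate below are made-up illustrative numbers:

```python
def turnover_time(volume, flow_rate):
    """Mean residence time of a steady, conservative flow: the fluid
    volume in the control volume divided by the volumetric flow rate
    through it (tau = V / Q)."""
    return volume / flow_rate

# Assumed figures: a 5000 m^3 sedimentation basin fed at 0.7 m^3/s.
tau_seconds = turnover_time(5000.0, 0.7)
tau_hours = tau_seconds / 3600.0
```

With these assumed figures the hydraulic retention time comes out close to the two-hour value quoted later for sedimentation basins.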
Simple flow models
Design equations are equations relating the space time to the fractional conversion and other properties of the reactor. Different design equations have been derived for different types of reactor, and depending on the reactor, the equations more or less resemble the one describing the average residence time. Often design equations are used to minimize the reactor volume or volumetric flow rate required to operate a reactor.
Plug flow reactor
In an ideal plug flow reactor (PFR) the fluid particles leave in the same order they arrived, not mixing with those in front and behind. Therefore, the particles entering at time t will exit at time t + T, all spending a time T inside the reactor. The residence time distribution will then be a Dirac delta function delayed by T:
E(t) = δ(t − T)
The mean is T and the variance is zero.
The RTD of a real reactor deviates from that of an ideal reactor, depending on the hydrodynamics within the vessel. A non-zero variance indicates that there is some dispersion along the path of the fluid, which may be attributed to turbulence, a non-uniform velocity profile, or diffusion. If the mean of the distribution is earlier than the expected time T it indicates that there is stagnant fluid within the vessel. If the RTD curve shows more than one main peak it may indicate channeling, parallel paths to the exit, or strong internal circulation.
In PFRs, reactants enter the reactor at one end and react as they move down the reactor. Consequently, the reaction rate is dependent on the concentrations which vary along the reactor requiring the inverse of the reaction rate to be integrated over the fractional conversion.
Batch reactor
Batch reactors are reactors in which the reactants are put in the reactor at time 0 and react until the reaction is stopped. Consequently, the space time is the same as the average residence time in a batch reactor.
Continuous stirred-tank reactor
In an ideal continuous stirred-tank reactor (CSTR), the flow at the inlet is completely and instantly mixed into the bulk of the reactor. The reactor and the outlet fluid have identical, homogeneous compositions at all times. The residence time distribution is exponential:
E(t) = (1/T) e^(−t/T)
where the mean is T and the dimensionless variance is 1. A notable difference from the plug flow reactor is that material introduced into the system will never completely leave it.
In reality, it is impossible to obtain such rapid mixing, as there is necessarily a delay between any molecule passing through the inlet and making its way to the outlet, and hence the RTD of a real reactor will deviate from the ideal exponential decay, especially in the case of large reactors. For example, there will be some finite delay before E reaches its maximum value and the length of the delay will reflect the rate of mass transfer within the reactor. Just as was noted for a plug-flow reactor, an early mean will indicate some stagnant fluid within the vessel, while the presence of multiple peaks could indicate channeling, parallel paths to the exit, or strong internal circulation. Short-circuiting fluid within the reactor would appear in an RTD curve as a small pulse of concentrated tracer that reaches the outlet shortly after injection.
Reactants continuously enter and leave a tank where they are mixed. Consequently, the reaction proceeds at a rate dependent on the outlet concentration:
τ = C_A0 X / (−r_A)
where C_A0 is the inlet concentration of reactant A, X is the fractional conversion, and −r_A is the reaction rate evaluated at the outlet conditions.
Laminar flow reactor
In a laminar flow reactor, the fluid flows through a long tube or parallel plate reactor and the flow is in layers parallel to the walls of the tube. The velocity of the flow is a parabolic function of radius. In the absence of molecular diffusion, the RTD is
E(t) = 0 for t < T/2,  E(t) = T²/(2t³) for t ≥ T/2
The variance is infinite. In a real reactor, diffusion will eventually mix the layers so that the tail of the RTD becomes exponential and the variance finite; but laminar flow reactors can have variance greater than 1, the maximum for CSTR reactors.
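The ideal RTDs of the CSTR and the laminar flow reactor can be written down and their first moments checked numerically. A sketch, assuming the standard textbook forms E(t) = e^(−t/T)/T for the CSTR and E(t) = T²/(2t³) for t ≥ T/2 for the laminar reactor:

```python
import math

def e_cstr(t, T):
    """Exponential RTD of an ideal CSTR with mean residence time T."""
    return math.exp(-t / T) / T

def e_laminar(t, T):
    """RTD of an ideal laminar flow (tube) reactor: zero before T/2,
    then T**2 / (2 * t**3); its mean is T but its variance diverges."""
    return 0.0 if t < T / 2.0 else T * T / (2.0 * t ** 3)

def rtd_mean(e, T, t_max, n=200_000):
    """First moment of an RTD by the midpoint rule, truncated at t_max."""
    dt = t_max / n
    return sum(((i + 0.5) * dt) * e((i + 0.5) * dt, T) * dt for i in range(n))

# Both numerical means come out close to T = 1 (truncating the slowly
# decaying laminar tail at t_max costs a little accuracy).
mean_cstr = rtd_mean(e_cstr, 1.0, 40.0)
mean_laminar = rtd_mean(e_laminar, 1.0, 2000.0)
```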
Recycle reactors
Recycle reactors are PFRs with a recycle loop. Consequently, they behave like a hybrid between PFRs and CSTRs.
In all of these equations, −r_A is the consumption rate of A, a reactant. This is equal to the rate expression in which A is involved. The rate expression is often related to the fractional conversion both through the consumption of A and through any changes in the rate constant k caused by temperature changes that depend on conversion.
Variable volume reactions
In some reactions the reactants and the products have significantly different densities. Consequently, as the reaction proceeds the volume of the reaction changes. This variable volume adds terms to the design equations. Taking this volume change into consideration the volume of the reaction becomes:
V = V₀ (1 + εX)
where V₀ is the initial volume and ε is the fractional change in volume at complete conversion.
Plugging this into the design equations results in the following equations:
Batch
Plug flow reactors
Continuous stirred-tank reactors
Generally, when reactions take place in the liquid and solid phases the change in volume due to reaction is not significant enough that it needs to be taken into account. Reactions in the gas phase often have significant changes in volume and in these cases one should use these modified equations.
Determining the RTD experimentally
Residence time distributions are measured by introducing a non-reactive tracer into the system at the inlet. Its input concentration is changed according to a known function and the output concentration measured. The tracer should not modify the physical characteristics of the fluid (equal density, equal viscosity) or the hydrodynamic conditions and it should be easily detectable.
In general, the change in tracer concentration will either be a pulse or a step. Other functions are possible, but they require more calculations to deconvolute the RTD curve.
Pulse experiments
This method requires the introduction of a very small volume of concentrated tracer at the inlet of the reactor, such that it approaches the Dirac delta function. Although an infinitely short injection cannot be produced, it can be made much smaller than the mean residence time of the vessel. If a mass of tracer, M, is introduced into a vessel of volume V and an expected residence time of T, the resulting curve of C(t) can be transformed into a dimensionless residence time distribution curve by the following relation:
E(θ) = C(t) V/M, with θ = t/T
Step experiments
The concentration of tracer in a step experiment at the reactor inlet changes abruptly from 0 to C₀. The concentration of tracer at the outlet is measured and normalized to the concentration C₀ to obtain the non-dimensional curve F(t), which goes from 0 to 1:
F(t) = C(t)/C₀
The step- and pulse-responses of a reactor are related by the following:
F(t) = ∫₀^t E(t′) dt′  and  E(t) = dF(t)/dt
A step experiment is often easier to perform than a pulse experiment, but it tends to smooth over some of the details that a pulse response could show. It is easy to numerically integrate an experimental pulse response to obtain a very high-quality estimate of the step response, but the reverse is not the case because any noise in the concentration measurement will be amplified by numeric differentiation.
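The numerical integration mentioned above is simply a cumulative trapezoid sum over the sampled pulse response. A sketch using an assumed exponential pulse response (an ideal CSTR with T = 1):

```python
import math

def step_from_pulse(e_samples, dt):
    """Cumulative trapezoidal integral of a sampled pulse response E(t),
    giving the step response F(t), which rises from 0 towards 1."""
    f, total = [0.0], 0.0
    for left, right in zip(e_samples, e_samples[1:]):
        total += 0.5 * (left + right) * dt
        f.append(total)
    return f

# Assumed pulse response: E(t) = exp(-t) sampled every 1 ms up to t = 10.
dt = 0.001
e_samples = [math.exp(-i * dt) for i in range(10_001)]
f_curve = step_from_pulse(e_samples, dt)
```

Going the other way (differentiating a measured step response to get E) would amplify measurement noise, which is the asymmetry described above.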
Applications
Chemical reactors
In chemical reactors, the goal is to make components react with a high yield. In a homogeneous, first-order reaction, the probability that an atom or molecule will react depends only on its residence time t:
P(t) = e^(−kt)
for a rate constant k. Given a RTD, the average probability is equal to the ratio of the concentration of the component before and after:
c_out/c_in = ∫₀^∞ e^(−kt) E(t) dt
If the reaction is more complicated, then the output is not uniquely determined by the RTD. It also depends on the degree of micromixing, the mixing between molecules that entered at different times. If there is no mixing, the system is said to be completely segregated, and the output can be given in the form
c_out = ∫₀^∞ c_batch(t) E(t) dt
where c_batch(t) is the concentration that would be reached in a batch reactor after a reaction time t.
For given RTD, there is an upper limit on the amount of mixing that can occur, called the maximum mixedness, and this determines the achievable yield. A continuous stirred-tank reactor can be anywhere in the spectrum between completely segregated and perfect mixing.
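For the first-order case the RTD-weighted integral can be evaluated directly. A sketch comparing a PFR and a CSTR with the same mean residence time (k and T are assumed, illustrative values; the CSTR integral has the known closed form 1/(1 + kT)):

```python
import math

def fraction_remaining_pfr(k, T):
    """Fraction of reactant leaving a plug flow reactor: E = delta(t - T),
    so c_out/c_in = exp(-k*T)."""
    return math.exp(-k * T)

def fraction_remaining_cstr(k, T, t_max=200.0, n=200_000):
    """Fraction of reactant leaving a CSTR, computed as the midpoint-rule
    integral of E(t) * exp(-k*t); analytically this equals 1/(1 + k*T)."""
    dt = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += math.exp(-t / T) / T * math.exp(-k * t) * dt
    return total

# With k = 1 and T = 1 the PFR leaves exp(-1) ~ 0.368 of the reactant
# while the CSTR leaves 0.5: the same mean residence time converts less
# in the back-mixed vessel.
remaining_pfr = fraction_remaining_pfr(1.0, 1.0)
remaining_cstr = fraction_remaining_cstr(1.0, 1.0)
```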
The RTD of chemical reactors can be obtained by CFD simulations. The very same procedure that is performed in experiments can be followed. A pulse of inert tracer particles (during a very short time) is injected into the reactor. The linear motion of tracer particles is governed by Newton's second law of motion and a one-way coupling is established between the fluid and the tracers. In one-way coupling, the fluid affects the tracer motion through the drag force while the tracers do not affect the fluid. The size and density of the tracers are chosen so small that their time constant becomes very small. In this way, tracer particles follow exactly the same path as the fluid does.
Groundwater flow
Hydraulic residence time (HRT) is an important factor in the transport of environmental toxins or other chemicals through groundwater. The amount of time that a pollutant spends traveling through a delineated subsurface space is related to the saturation and the hydraulic conductivity of the soil or rock. Porosity is another significant contributing factor to the mobility of water through the ground (e.g. toward the water table). The intersection between pore density and size determines the degree or magnitude of the flow rate through the media. This idea can be illustrated by a comparison of the ways water moves through clay versus gravel. The retention time through a specified vertical distance in clay will be longer than through the same distance in gravel, even though they are both characterized as high porosity materials. This is because the pore sizes are much larger in gravel media than in clay, and so there is less hydrostatic tension working against the subsurface pressure gradient and gravity.
Groundwater flow is important parameter for consideration in the design of waste rock basins for mining operations. Waste rock is heterogeneous material with particles varying from boulders to clay-sized particles, and it contains sulfidic pollutants which must be controlled such that they do not compromise the quality of the water table and also so the runoff does not create environmental problems in the surrounding areas. Aquitards are clay zones that can have such a degree of impermeability that they partially or completely retard water flow. These clay lenses can slow or stop seepage into the water table, although if an aquitard is fractured and contaminated then it can become a long-term source of groundwater contamination due to its low permeability and high HRT.
Water treatment
Primary treatment for wastewater or drinking water includes settling in a sedimentation chamber to remove as much of the solid matter as possible before applying additional treatments. The amount removed is controlled by the hydraulic residence time (HRT). When water flows through a volume at a slower rate, less energy is available to keep solid particles entrained in the stream and there is more time for them to settle to the bottom. Typical HRTs for sedimentation basins are around two hours, although some groups recommend longer times to remove micropollutants such as pharmaceuticals and hormones.
Disinfection is the last step in the tertiary treatment of wastewater or drinking water. The types of pathogens that occur in untreated water include those that are easily killed like bacteria and viruses, and those that are more robust such as protozoa and cysts. The disinfection chamber must have a long enough HRT to kill or deactivate all of them.
Surface science
Atoms and molecules of gas or liquid can be trapped on a solid surface in a process called adsorption. This is an exothermic process involving a release of heat, and heating the surface increases the probability that an atom will escape within a given time. At a given temperature T, the residence time of an adsorbed atom is given by
τ = τ₀ e^(E_a/(R T))
where R is the gas constant, E_a is an activation energy, and τ₀ is a prefactor that is correlated with the vibration times of the surface atoms (generally of the order of 10⁻¹³ seconds).
In vacuum technology, the residence time of gases on the surfaces of a vacuum chamber can determine the pressure due to outgassing. If the chamber can be heated, the above equation shows that the gases can be "baked out"; but if not, then surfaces with a low residence time are needed to achieve ultra-high vacuums.
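The exponential temperature dependence is what makes bake-out effective. A sketch with assumed values for the prefactor and activation energy (the 90 kJ/mol figure is an illustrative assumption, not a tabulated constant):

```python
import math

R_GAS = 8.314  # molar gas constant, J/(mol*K)

def surface_residence_time(tau0, e_a, temperature):
    """Residence time of an adsorbed atom: tau = tau0 * exp(Ea / (R*T))."""
    return tau0 * math.exp(e_a / (R_GAS * temperature))

# Assumed: tau0 = 1e-13 s (vibrational prefactor), Ea = 90 kJ/mol.
tau_room = surface_residence_time(1e-13, 90e3, 295.0)    # around minutes
tau_baked = surface_residence_time(1e-13, 90e3, 500.0)   # sub-millisecond
```

With these assumed numbers, raising the temperature from 295 K to 500 K shortens the residence time by roughly six orders of magnitude, which is why heated chambers outgas so much faster.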
Environmental
In environmental terms, the residence time definition is adapted to fit with ground water, the atmosphere, glaciers, lakes, streams, and oceans. More specifically it is the time during which water remains within an aquifer, lake, river, or other water body before continuing around the hydrological cycle. The time involved may vary from days for shallow gravel aquifers to millions of years for deep aquifers with very low values for hydraulic conductivity. Residence times of water in rivers are a few days, while in large lakes residence time ranges up to several decades. Residence times of continental ice sheets is hundreds of thousands of years, of small glaciers a few decades.
Ground water residence time applications are useful for determining the amount of time it will take for a pollutant to reach and contaminate a ground water drinking water source and at what concentration it will arrive. This can also work to the opposite effect to determine how long until a ground water source becomes uncontaminated via inflow, outflow, and volume. The residence time of lakes and streams is important as well to determine the concentration of pollutants in a lake and how this may affect the local population and marine life.
Hydrology, the study of water, discusses the water budget in terms of residence time. The amount of time that water spends in each different stage of life (glacier, atmosphere, ocean, lake, stream, river), is used to show the relation of all of the water on the earth and how it relates in its different forms.
Pharmacology
A large class of drugs are enzyme inhibitors that bind to enzymes in the body and inhibit their activity. In this case it is the drug-target residence time (the length of time the drug stays bound to the target) that is of interest. The residence time is defined as the reciprocal value of the koff rate constant (residence time = 1/koff). Drugs with long residence times are desirable because they remain effective for longer and therefore can be used in lower doses. This residence time is determined by the kinetics of the interaction, such as how complementary the shape and charges of the target and drug are and whether outside solvent molecules are kept out of the binding site (thereby preventing them from breaking any bonds formed), and is proportional to the half-life of the chemical dissociation. One way to measure the residence time is in a preincubation-dilution experiment where a target enzyme is incubated with the inhibitor, allowed to approach equilibrium, then rapidly diluted. The amount of product is measured and compared to a control in which no inhibitor is added.
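The reciprocal relation can be sketched directly; the k_off value below is an assumed, illustrative number, not data for any particular drug:

```python
import math

def drug_target_residence_time(k_off):
    """Residence time of a drug-target complex: 1 / k_off (seconds)."""
    return 1.0 / k_off

def dissociation_half_life(k_off):
    """Half-life of the complex, ln(2) / k_off, proportional to the
    residence time."""
    return math.log(2.0) / k_off

# Assumed k_off = 1e-4 per second (a slowly dissociating inhibitor):
rt_seconds = drug_target_residence_time(1e-4)   # 10,000 s, about 2.8 hours
t_half = dissociation_half_life(1e-4)
```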
Residence time can also refer to the amount of time that a drug spends in the part of the body where it needs to be absorbed. The longer the residence time, the more of it can be absorbed. If the drug is delivered in an oral form and destined for the upper intestines, it usually moves with food and its residence time is roughly that of the food. This generally allows 3 to 8 hours for absorption. If the drug is delivered through a mucous membrane in the mouth, the residence time is short because saliva washes it away. Strategies to increase this residence time include bioadhesive polymers, gums, lozenges and dry powders.
Biochemical
In size-exclusion chromatography, the residence time of a molecule is related to its volume, which is roughly proportional to its molecular weight. Residence times also affect the performance of continuous fermentors.
Biofuel cells utilize the metabolic processes of anodophiles (electronegative bacteria) to convert chemical energy from organic matter into electricity. A biofuel cell mechanism consists of an anode and a cathode that are separated by an internal proton exchange membrane (PEM) and connected in an external circuit with an external load. Anodophiles grow on the anode and consume biodegradable organic molecules to produce electrons, protons, and carbon dioxide gas, and as the electrons travel through the circuit they feed the external load. The HRT for this application is the rate at which the feed molecules are passed through the anodic chamber. This can be quantified by dividing the volume of the anodic chamber by the rate at which the feed solution is passed into the chamber. The hydraulic residence time (HRT) affects the substrate loading rate of the microorganisms that the anodophiles consume, which affects the electrical output. Longer HRTs reduce substrate loading in the anodic chamber which can lead to reduced anodophile population and performance when there is a deficiency of nutrients. Shorter HRTs support the development of non-exoelectrogenous bacteria, which can reduce the Coulombic efficiency and electrochemical performance of the fuel cell if the anodophiles must compete for resources or if they do not have ample time to effectively degrade nutrients.
See also
Baseflow residence time
Lake retention time
Micromixing
RTD studies of plug flow reactor
References
Further reading
External links
Mean residence time (MRT): Understanding how long drug molecules stay in the body
Calculate the Hydraulic Retention Time (Lenntech)
Aerospace engineering
Biogeochemical cycle
Chemical reaction engineering
Ecology
Environmental engineering
Geochemistry
Hydraulic engineering
Pharmacokinetics
Queueing theory
Waste treatment technology | Residence time | [
"Physics",
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 4,500 | [
"Pharmacology",
"Chemical reaction engineering",
"Hydrology",
"Pharmacokinetics",
"Water treatment",
"Aerospace engineering",
"Chemical engineering",
"Ecology",
"Physical systems",
"Biogeochemical cycle",
"Hydraulics",
"Biogeochemistry",
"Civil engineering",
"nan",
"Environmental enginee... |
38,390,513 | https://en.wikipedia.org/wiki/Acoustoelastic%20effect | The acoustoelastic effect is how the sound velocities (both longitudinal and shear wave velocities) of an elastic material change if subjected to an initial static stress field. This is a non-linear effect of the constitutive relation between mechanical stress and finite strain in a material of continuous mass. In classical linear elasticity theory small deformations of most elastic materials can be described by a linear relation between the applied stress and the resulting strain. This relationship is commonly known as the generalised Hooke's law. The linear elastic theory involves second order elastic constants (e.g. and ) and yields constant longitudinal and shear sound velocities in an elastic material, not affected by an applied stress. The acoustoelastic effect on the other hand include higher order expansion of the constitutive relation (non-linear elasticity theory) between the applied stress and resulting strain, which yields longitudinal and shear sound velocities dependent of the stress state of the material. In the limit of an unstressed material the sound velocities of the linear elastic theory are reproduced.
The acoustoelastic effect was investigated as early as 1925 by Brillouin. He found that the propagation velocity of acoustic waves would decrease proportionally to an applied hydrostatic pressure. However, a consequence of his theory was that sound waves would stop propagating at a sufficiently large pressure. This paradoxical effect was later shown to be caused by the incorrect assumption that the elastic parameters were not affected by the pressure.
In 1937 Francis Dominic Murnaghan presented a mathematical theory extending the linear elastic theory to also include finite deformation in elastic isotropic materials. This theory included three third-order elastic constants l, m, and n. In 1953 Hughes and Kelly used the theory of Murnaghan in their experimental work to establish numerical values for higher order elastic constants for several elastic materials, including polystyrene, Armco iron, and Pyrex, subjected to hydrostatic pressure and uniaxial compression.
Non-linear elastic theory for hyperelastic materials
The acoustoelastic effect is an effect of finite deformation of non-linear elastic materials. A modern comprehensive account can be found in the monograph by Ogden, which treats the application of the non-linear elasticity theory and the analysis of the mechanical properties of solid materials capable of large elastic deformations. The special case of the acoustoelastic theory for a compressible isotropic hyperelastic material, like polycrystalline steel, is reproduced and shown in this text from the non-linear elasticity theory as presented by Ogden.
Note that the setting in this text, as in Ogden's treatment, is isothermal, and no reference is made to thermodynamics.
Constitutive relation – hyperelastic materials (Stress-strain relation)
A hyperelastic material is a special case of a Cauchy elastic material in which the stress at any point is objective and determined only by the current state of deformation with respect to an arbitrary reference configuration (for more details on deformation see also the pages Deformation (mechanics) and Finite strain). However, the work done by the stresses may depend on the path the deformation takes. Therefore, a Cauchy elastic material has a non-conservative structure, and the stress cannot be derived from a scalar elastic potential function. The special case of Cauchy elastic materials where the work done by the stresses is independent of the path of deformation is referred to as a Green elastic or hyperelastic material. Such materials are conservative and the stresses in the material can be derived by a scalar elastic potential, more commonly known as the Strain energy density function.
The constitutive relation between the stress and strain can be expressed in different forms based on the chosen stress and strain forms. Selecting the 1st Piola-Kirchhoff stress tensor (which is the transpose of the nominal stress tensor ), the constitutive equation for a compressible hyper elastic material can be expressed in terms of the Lagrangian Green strain () as:
where F is the deformation gradient tensor, and where the second expression uses the Einstein summation convention for index notation of tensors. W is the strain energy density function for a hyperelastic material and has been defined per unit volume rather than per unit mass, since this avoids the need to multiply the right-hand side by the mass density of the reference configuration.
Assuming that the scalar strain energy density function can be approximated by a Taylor series expansion in the current strain , it can be expressed (in index notation) as:
Imposing the restrictions that the strain energy function should be zero and have a minimum when the material is in the un-deformed state (i.e. ) it is clear that there are no constant or linear term in the strain energy function, and thus:
where the expansion coefficients are a fourth-order tensor of second-order elastic moduli and a sixth-order tensor of third-order elastic moduli.
The symmetry of together with the scalar strain energy density function implies that the second order moduli have the following symmetry:
which reduces the number of independent elastic constants from 81 to 36. In addition the power expansion implies that the second order moduli also have the major symmetry
which further reduces the number of independent elastic constants to 21. The same arguments can be used for the third order elastic moduli. These symmetries also allow the elastic moduli to be expressed in Voigt notation.
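For reference, the standard Voigt contraction (a general convention, stated here because the article's own index symbols were lost in extraction) pairs the symmetric indices as follows:

```latex
% Voigt contraction of symmetric index pairs:
% (11) -> 1, (22) -> 2, (33) -> 3, (23) -> 4, (13) -> 5, (12) -> 6
c_{ijkl} \;\to\; c_{IJ},
\qquad I, J \in \{1, \dots, 6\}
```

The same contraction applied to the sixth-order tensor of third-order moduli yields three-index quantities with indices running from 1 to 6.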
The deformation gradient tensor can be expressed in component form as
where is the displacement of a material point from coordinate in the reference configuration to coordinate in the deformed configuration (see Figure 2 in the finite strain theory page). Including the power expansion of strain energy function in the constitutive relation and replacing the Lagrangian strain tensor with the expansion given on the finite strain tensor page yields (note that lower case have been used in this section compared to the upper case on the finite strain page) the constitutive equation
where
and higher order terms have been neglected
(see for detailed derivations).
For reference, neglecting higher order terms, this expression reduces to
which is a version of the generalised Hooke's law, relating a measure of stress linearly to a measure of strain.
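Written out in common notation (a standard form, not the article's lost symbols), the linear limit and its isotropic special case read:

```latex
% Generalised Hooke's law (index notation), and its isotropic form
% with the Lame parameters \lambda and \mu:
\sigma_{ij} = c_{ijkl}\,\varepsilon_{kl},
\qquad
\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij}
```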
Sound velocity
Assuming that a small dynamic (acoustic) deformation disturbs an already statically stressed material, the acoustoelastic effect can be regarded as the effect of a small deformation superposed on a larger finite deformation (also called the small-on-large theory). Let us define three states of a given material point. In the reference (un-stressed) state the point is defined by the coordinate vector while the same point has the coordinate vector in the static initially stressed state (i.e. under the influence of an applied pre-stress). Finally, assume that the material point under a small dynamic disturbance (acoustic stress field) has the coordinate vector . The total displacement of the material points (under influence of both a static pre-stress and a dynamic acoustic disturbance) can then be described by the displacement vectors
where
describes the static (Lagrangian) initial displacement due to the applied pre-stress, and the (Eulerian) displacement due to the acoustic disturbance, respectively.
Cauchy's first law of motion (or balance of linear momentum) for the additional Eulerian disturbance can then be derived in terms of the intermediate Lagrangian deformation assuming that the small-on-large assumption
holds.
Using the Lagrangian form of Cauchy's first law of motion, where the effect of a constant body force (i.e. gravity) has been neglected, yields
Note that the subscript/superscript "0" is used in this text to denote the un-stressed reference state, and a dotted variable is as usual the time () derivative of the variable, and is the divergence operator with respect to the Lagrangian coordinate system .
The right hand side (the time dependent part) of the law of motion can be expressed as
under the assumption that both the unstressed state and the initial deformation state are static and thus .
For the left hand side (the space dependent part) the spatial Lagrangian partial derivatives with respect to can be expanded in the Eulerian by using the chain rule and changing the variables through the relation between the displacement vectors as
where the short form has been used. Thus
Assuming further that the static initial deformation (the pre-stressed state) is in equilibrium implies that its divergence vanishes, and the law of motion can, in combination with the constitutive equation given above, be reduced to a linear relation (i.e. one where higher order terms in the disturbance are neglected) between the static initial deformation and the additional dynamic disturbance as (see for detailed derivations)
where
This expression is recognised as the linear wave equation. Considering a plane wave of the form
where is a Lagrangian unit vector in the direction of propagation (i.e., parallel to the wave vector and normal to the wave front), is a unit vector referred to as the polarization vector (describing the direction of particle motion), is the phase wave speed, and is a twice continuously differentiable function (e.g. a sinusoidal function). Inserting this plane wave into the linear wave equation derived above yields
where is introduced as the acoustic tensor, and depends on as
This expression is called the propagation condition and determines for a given propagation direction the velocity and polarization of possible waves corresponding to plane waves. The wave velocities can be determined by the characteristic equation
where det is the determinant and I is the identity matrix.
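In common notation (a standard form, since the article's symbols were lost in extraction), the propagation condition is an eigenvalue problem for the acoustic tensor, and the characteristic equation follows from it:

```latex
% Plane waves must satisfy an eigenvalue problem for the acoustic
% tensor Q(N), with eigenvalue \rho_0 c^2:
Q_{ij}(\mathbf{N})\, m_j = \rho_0\, c^2\, m_i,
\qquad
\det\!\bigl[\mathbf{Q}(\mathbf{N}) - \rho_0\, c^2\, \mathbf{I}\bigr] = 0
```

The three eigenvalues give the three wave speeds for the chosen propagation direction, and the corresponding eigenvectors give the polarizations.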
For a hyperelastic material the acoustic tensor is symmetric (but not in general), and its eigenvalues are thus real. For the wave velocities to also be real the eigenvalues need to be positive. If this is the case, three mutually orthogonal real plane waves exist for the given propagation direction. From the two expressions of the acoustic tensor it is clear that
and the inequality (also called the strong ellipticity condition) for all non-zero vectors guarantees that the velocity of homogeneous plane waves is real. A polarization parallel to the propagation direction corresponds to a longitudinal wave, where the particle motion is parallel to the propagation direction (also referred to as a compressional wave). The two polarizations orthogonal to the propagation direction correspond to transverse waves, where the particle motion is orthogonal to the propagation direction (also referred to as shear waves).
Isotropic materials
Elastic moduli for isotropic materials
A second order tensor like the Lagrangian strain tensor has the invariants tr(E), tr(E²), and tr(E³), where tr is the trace operator. The strain energy function of an isotropic material (i.e. a material whose properties are the same in any coordinate system) can thus be expressed by these invariants, or a superposition thereof, which can be rewritten as
where λ, μ, A, B, and C are constants. The constants λ and μ are the second order elastic moduli, better known as the Lamé parameters, while A, B, and C are the third order elastic moduli, which are alternative but equivalent to the constants l, m, and n introduced by Murnaghan.
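One common convention for this expansion (a standard Landau-type form, consistent with constants named A, B, and C but not copied from this article's lost formula) is:

```latex
% Isotropic strain energy to third order in the Lagrangian strain E,
% with Lame parameters \lambda, \mu and third-order constants A, B, C:
W = \frac{\lambda}{2}\,(\operatorname{tr}\mathbf{E})^2
  + \mu\,\operatorname{tr}(\mathbf{E}^2)
  + \frac{C}{3}\,(\operatorname{tr}\mathbf{E})^3
  + B\,(\operatorname{tr}\mathbf{E})\,\operatorname{tr}(\mathbf{E}^2)
  + \frac{A}{3}\,\operatorname{tr}(\mathbf{E}^3)
```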
Combining this with the general expression for the strain energy function it is clear that
Historically, different selections of these third order elastic constants have been used, and some of the variations are shown in Table 1.
Example values for steel
Tables 2 and 3 present the second and third order elastic constants for some steel types presented in the literature
Acoustoelasticity for uniaxial tension of isotropic hyperelastic materials
A cuboidal sample of a compressible solid in an unstressed reference configuration can be expressed by the Cartesian coordinates , where the geometry is aligned with the Lagrangian coordinate system, and is the length of the sides of the cuboid in the reference configuration. Subjecting the cuboid to a uniaxial tension in the -direction so that it deforms with a pure homogeneous strain such that the coordinates of the material points in the deformed configuration can be expressed by , which gives the
elongations
in each coordinate direction. Here, the current (deformed) length of each cuboid side appears, and the ratio between the length of the sides in the current and reference configuration is denoted by
called the principal stretches. For an isotropic material this corresponds to a deformation without any rotation (See polar decomposition of the deformation gradient tensor where and the rotation ). This can be described through spectral representation by the principal stretches as eigenvalues, or equivalently by the elongations .
For a uniaxial tension in one direction we assume that the corresponding elongation increases by some amount. If the lateral faces are free of traction, the lateral elongations are limited to a restricted range. For isotropic symmetry the lateral elongations (or contractions) must also be equal. The range corresponds to the span from total lateral contraction (which is non-physical) to no change in the lateral dimensions. It is noted that theoretically the range could be expanded to values larger than 0, corresponding to an increase in lateral dimensions as a result of an increase in axial dimension. However, very few materials (called auxetic materials) exhibit this property.
Expansion of sound velocities
If the strong ellipticity condition holds, three orthogonal polarization directions will give a non-zero and real sound velocity for a given propagation direction. The following will derive the sound velocities for one selection of applied uniaxial tension, propagation direction, and an orthonormal set of polarization vectors. For a uniaxial tension applied in one direction, and deriving the sound velocities for waves propagating orthogonally to the applied tension, one selection of orthonormal polarizations may be
which gives the three sound velocities
where the first index of the sound velocities indicates the propagation direction, while the second index indicates the selected polarization direction (particle motion in the propagation direction corresponds to a longitudinal wave, and particle motion perpendicular to the propagation direction corresponds to a shear wave).
Expanding the relevant coefficients of the acoustic tensor, and substituting the second- and third-order elastic moduli and with their isotropic equivalents, and respectively, leads to the sound velocities expressed as
where
are the acoustoelastic coefficients related to effects from third order elastic constants.
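As a sanity check of the unstressed limit, the velocities reduce to the textbook isotropic expressions v_L = sqrt((λ + 2μ)/ρ) and v_T = sqrt(μ/ρ). The sketch below uses typical handbook values for steel (assumptions, not values from Tables 2 and 3 of this article):

```python
import math

# Unstressed isotropic solid: longitudinal and shear wave speeds from
# the Lame parameters. The numbers are typical textbook values for
# steel and are assumptions, not values from this article's tables.
lam = 110e9    # first Lame parameter lambda, Pa (assumed)
mu = 82e9      # shear modulus mu, Pa (assumed)
rho = 7800.0   # mass density, kg/m^3 (assumed)

v_long = math.sqrt((lam + 2 * mu) / rho)   # compressional (longitudinal) speed
v_shear = math.sqrt(mu / rho)              # shear (transverse) speed
```

With these values v_long comes out near 5.9 km/s and v_shear near 3.2 km/s, consistent with measured sound speeds in steel; an applied stress then perturbs these speeds through the acoustoelastic coefficients.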
Measurement methods
To be able to measure the sound velocity, and more specifically the change in sound velocity, in a material subjected to some stress state, one can measure the velocity of an acoustic signal propagating through the material in question. There are several methods to do this but all of them use one of two physical relations of the sound velocity. The first relation is related to the time it takes a signal to propagate from one point to another (typically the distance between two acoustic transducers or two times the distance from one transducer to a reflective surface). This is often referred to as "time-of-flight" (TOF) measurement, and uses the relation v = d/t, where d is the distance the signal travels and t is the time it takes to travel this distance. The second relation is related to the inverse of the time, the frequency, of the signal. The relation here is v = fλ, where f is the frequency of the signal and λ is the wavelength. The measurements using the frequency as measurand use the phenomenon of acoustic resonance, where a whole number of wavelengths matches the length over which the signal resonates. Both these methods are dependent on the distance over which they measure, either directly as in the time-of-flight, or indirectly through the matching number of wavelengths over the physical extent of the specimen which resonates.
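A minimal numerical sketch of the two relations, with all values assumed for illustration: a time-of-flight estimate v = d/t, and a resonance estimate v = fλ where a whole number of half wavelengths spans the sample thickness.

```python
# Time-of-flight estimate: v = d / t
d = 0.020        # propagation distance, m (assumed)
t = 3.38e-6      # measured transit time, s (assumed)
v_tof = d / t

# Resonance estimate: v = f * wavelength, with n half wavelengths
# spanning the thickness d (thickness-resonance condition).
f = 2.5e6                # observed resonance frequency, Hz (assumed)
n = 17                   # assumed number of half wavelengths across d
wavelength = 2 * d / n
v_res = f * wavelength
```

Both estimates should agree for the same specimen; in practice the achievable stress resolution hinges on timing and frequency accuracy, since acoustoelastic velocity changes are small.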
Example of ultrasonic testing techniques
In general there are two ways to set up a transducer system to measure the sound velocity in a solid. One is a setup with two or more transducers where one is acting as a transmitter, while the other(s) is acting as a receiver. The sound velocity measurement can then be done by measuring the time between when a signal is generated at the transmitter and when it is recorded at the receiver, assuming the distance the acoustic signal has traveled between the transducers is known (or measured), or conversely by measuring the resonance frequency knowing the thickness over which the wave resonates. The other type of setup is often called a pulse-echo system. Here one transducer is placed in the vicinity of the specimen acting both as transmitter and receiver. This requires a reflective interface where the generated signal can be reflected back toward the transducer, which then acts as a receiver recording the reflected signal. See ultrasonic testing for some measurement systems.
Longitudinal and polarized shear waves
As explained above, a set of three orthonormal polarizations of the particle motion exists for a given propagation direction in a solid. For measurement setups where the transducers can be fixated directly to the sample under investigation it is possible to create these three polarizations (one longitudinal, and two orthogonal transverse waves) by applying different types of transducers exciting the desired polarization (e.g. piezoelectric transducers with the needed oscillation mode). Thus it is possible to measure the sound velocity of waves with all three polarizations through either time dependent or frequency dependent measurement setups depending on the selection of transducer types. However, if the transducer cannot be fixated to the test specimen, a coupling medium is needed to transmit the acoustic energy from the transducer to the specimen. Water or gels are often used as this coupling medium. For measurement of the longitudinal sound velocity this is sufficient; however, fluids do not carry shear waves, and thus to be able to generate and measure the velocity of shear waves in the test specimen the incident longitudinal wave must interact at an oblique angle at the fluid/solid surface to generate shear waves through mode conversion. Such shear waves are then converted back to longitudinal waves at the solid/fluid surface, propagating back through the fluid to the recording transducer, enabling the measurement of shear wave velocities as well through a coupling medium.
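The oblique-incidence mode conversion can be quantified with Snell's law, sin(θ)/v being continuous across the interface. The sketch below (with assumed water and steel velocities) picks an incidence angle beyond the first critical angle, where only the shear wave propagates into the solid:

```python
import math

# Snell's law at a fluid/solid interface: sin(theta)/v is the same for
# the incident and each refracted wave. All velocities are assumed.
v_water = 1480.0   # longitudinal speed in water, m/s (assumed)
v_l = 5900.0       # longitudinal speed in the solid, m/s (assumed)
v_s = 3200.0       # shear speed in the solid, m/s (assumed)

theta_i = math.radians(19.0)        # incidence angle in the fluid (assumed)
s = math.sin(theta_i) / v_water     # horizontal slowness, conserved

sin_l = s * v_l   # > 1: refracted longitudinal wave is evanescent
sin_s = s * v_s   # < 1: refracted shear wave propagates
theta_shear = math.degrees(math.asin(sin_s))
```

Here the first critical angle is asin(1480/5900), about 14.5 degrees, so at 19 degrees of incidence only a shear wave (refracted near 45 degrees) travels into the solid, which is how shear velocities are measured through a water couplant.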
Applications
Engineering material – stress estimation
As the industry strives to reduce maintenance and repair costs, non-destructive testing of structures becomes increasingly valued both in production control and as a means to measure the utilization and condition of key infrastructure. There are several measurement techniques to measure stress in a material. However, techniques using optical measurements, magnetic measurements, X-ray diffraction, and neutron diffraction are all limited to measuring surface or near surface stress or strains. Acoustic waves propagate with ease through materials and provide thus a means to probe the interior of structures, where the stress and strain level is important for the overall structural integrity.
Since the sound velocity of such non-linear elastic materials (including common construction materials like aluminium and steel) have a stress dependency, one application of the acoustoelastic effect may be measurement of the stress state in the interior of a loaded material utilizing different acoustic probes (e.g. ultrasonic testing) to measure the change in sound velocities.
Granular and porous materials – geophysics
Seismology studies the propagation of elastic waves through the Earth and is used in e.g. earthquake studies and in mapping the Earth's interior. The interior of the Earth is subjected to different pressures, and thus the acoustic signals may pass through media in different stress states. The acoustoelastic theory may thus be of practical interest where nonlinear wave behaviour may be used to estimate geophysical properties.
Soft tissue – medical ultrasonics
Other applications may be in medical sonography and elastography, measuring the stress or pressure level in relevant elastic tissue types, enhancing non-invasive diagnostics.
See also
Acoustoelastography
Finite strain
Sound velocity
Ultrasonic testing
References
Materials science
Acoustics
Imaging | Acoustoelastic effect | [
"Physics",
"Materials_science",
"Engineering"
] | 4,138 | [
"Applied and interdisciplinary physics",
"Classical mechanics",
"Materials science",
"Acoustics",
"nan"
] |
38,399,123 | https://en.wikipedia.org/wiki/Iron%28I%29%20hydride | Iron(I) hydride, systematically named iron hydride and poly(hydridoiron) is a solid inorganic compound with the chemical formula (also written or FeH). It is both thermodynamically and kinetically unstable toward decomposition at ambient temperature, and as such, little is known about its bulk properties.
Iron(I) hydride is the simplest polymeric iron hydride. Due to its instability, it has no practical industrial uses. However, in metallurgical chemistry, iron(I) hydride is fundamental to certain forms of iron-hydrogen alloys.
Nomenclature
The systematic name iron hydride, a valid IUPAC name, is constructed according to the compositional nomenclature. However, as the name is compositional in nature, it does not distinguish between compounds of the same stoichiometry, such as molecular species, which exhibit distinct chemical properties. The systematic names poly(hydridoiron) and poly[ferrane(1)], also valid IUPAC names, are constructed according to the additive and electron-deficient substitutive nomenclatures, respectively. They do distinguish the titular compound from the others.
Hydridoiron
Hydridoiron, also systematically named ferrane(1), is a related compound with the chemical formula FeH (also written [FeH]). It is also unstable at ambient temperature with the additional propensity to autopolymerize, and so cannot be concentrated.
Hydridoiron is the simplest molecular iron hydride. In addition, it may be considered to be the iron(I) hydride monomer. It has been detected in isolation only in extreme environments, like trapped in frozen noble gases, in the atmosphere of cool stars, or as a gas at temperatures above the boiling point of iron. It is assumed to have three dangling valence bonds, and is therefore a free radical; its formula may be written FeH3• to emphasize this fact.
At very low temperatures (below 10 K), FeH may form a complex with molecular hydrogen FeH·H2.
Hydridoiron was first detected in the laboratory by B. Kleman and L. Åkerlind in the 1950s.
Properties
Radicality and acidity
A single electron of another atomic or molecular species can join with the iron centre in hydridoiron by substitution:
[FeH] + RR → [FeHR] + ·R
Because of this capture of a single electron, hydridoiron has radical character. Hydridoiron is a strong radical.
An electron pair of a Lewis base can join with the iron centre by adduction:
[FeH] + :L → [FeHL]
Because of this capture of an adducted electron pair, hydridoiron has Lewis-acidic character. It should be expected that iron(I) hydride has significantly diminished radical properties but similar acid properties; however, the reaction rates and equilibrium constants are different.
Structure
In iron(I) hydride, the atoms form a network, individual atoms being interconnected by covalent bonds. Since it is a polymeric solid, a monocrystalline sample is not expected to undergo state transitions, such as melting and dissolution, as this would require the rearrangement of molecular bonds and consequently, change its chemical identity. Colloidal crystalline samples, wherein intermolecular forces are relevant, are expected to undergo state transitions.
Iron(I) hydride adopts a double hexagonal close-packed crystalline structure with the P63/mmc space group, also referred to as epsilon-prime iron hydride in the context of the iron-hydrogen system. It is predicted to exhibit polymorphism, transitioning below some temperature to a face-centred crystalline structure with the Fm3m space group.
Electromagnetic properties
FeH is predicted to have quartet and sextet ground states.
The FeH molecule has at least four low energy electronic states caused by the non-bonding electron taking up positions in different orbitals: X4Δ, a6Δ, b6Π, and c6Σ+. Higher energy states are termed B4Σ−, C4Φ, D4Σ+, E4Π, and F4Δ. Even higher levels are labelled G4Π and H4Δ from the quartet system, and d6Σ−, e6Π, f6Δ, and g6Φ. In the quartet states the inner quantum number J takes on values 1/2, 3/2, 5/2, and 7/2.
FeH has an important absorption band (called the Wing-Ford band) in the near infrared with a band edge at 989.652 nm and a maximum absorption at 991 nm. It also has lines in the blue at 470 to 502.5 nm and in green from 520 to 540 nm.
The small isotope shift of the deuterated FeD compared to FeH at this wavelength shows that the band is due to a (0,0) transition from the ground state, namely F4Δ—X4Δ.
Various other bands exists in each part of the spectrum due to different vibrational transitions. The (1,0) band, also due to F4Δ—X4Δ transitions, is around 869.0 nm and the (2,0) band around 781.8 nm.
Within each band there are a great number of lines. These are due to transition between different rotational states. The lines are grouped into subbands 4Δ7/2—4Δ7/2 (strongest) and 4Δ5/2—4Δ5/2, 4Δ3/2—4Δ3/2 and 4Δ1/2—4Δ1/2. The numbers like 7/2 are values for Ω the spin component. Each of these has two branches P and R, and some have a Q branch. Within each there is what is called Λ splitting that results in a lower energy lines (designated "a") and higher energy lines (called "b"). For each of these there is a series of spectral lines dependent on J, the rotational quantum number, starting from 3.5 and going up in steps of 1. How high J gets depends on the temperature. In addition there are 12 satellite branches 4Δ7/2—4Δ5/2, 4Δ5/2—4Δ3/2, 4Δ3/2—4Δ1/2, 4Δ5/2—4Δ7/2, 4Δ3/2—4Δ5/2 and 4Δ1/2—4Δ3/2 with P and R branches.
Some lines are magnetically sensitive, such as 994.813 and 995.825 nm. They are broadened by the Zeeman effect yet others in the same band are insensitive to magnetic fields like 994.911 and 995.677 nm. There are 222 lines in the (0-0) band spectrum.
Occurrence in outer space
Iron hydride is one of the few molecules found in the Sun. Lines for FeH in the blue-green part of the solar spectrum, including many absorption lines, were reported in 1972. Also, sunspot umbras show the Wing-Ford band prominently.
Bands for FeH (and other hydrides of transition metals and alkaline earths) show up prominently in the emission spectra for M dwarfs and L dwarfs, the hottest kind of brown dwarf. For cooler T dwarfs, the bands for FeH do not appear, probably due to liquid iron clouds blocking the view of the atmosphere, and removing it from the gas phase of the atmosphere. For even cooler brown dwarfs (<1350 K), signals for FeH reappear, which is explained by the clouds having gaps.
The explanation for the kind of stars that the FeH Wing-Ford band appears in is that the temperature is around 3000 K and the pressure is sufficient to have a large number of FeH molecules formed. Once the temperature reaches 4000 K, as in a K dwarf, the line is weaker due to more of the molecules being dissociated. In M-type giants the gas pressure is too low for FeH to form.
Elliptical and lenticular galaxies also have an observable Wing-Ford band, due to a large amount of their light coming from M dwarfs.
In 2021, traces of FeH were confirmed to be present in the atmosphere of the hot Jupiter WASP-79b.
Production
Kleman and Åkerlind first produced FeH in the laboratory by heating iron to 2600 K in a King-type furnace under a thin hydrogen atmosphere.
Molecular FeH can also be obtained (together with FeH2 and other species) by vaporizing iron in an argon-hydrogen atmosphere and freezing the gas on a solid surface at about 10 K (-263 °C). The compound can be detected by infrared spectroscopy, and about half of it disappears when the sample is briefly warmed to 30 K. A variant technique uses pure hydrogen atmosphere condensed at 4 K.
This procedure also generates molecules that were thought to be FeH3 (ferric hydride) but were later assigned to an association of FeH and molecular hydrogen H2.
Molecular FeH has been produced by the decay of 57Co embedded in solid hydrogen. Mössbauer spectroscopy revealed an isomer shift of 0.59 mm/s compared with metallic iron and a quadrupole splitting of 2.4 mm/s. FeH can also be produced by the interaction of iron pentacarbonyl vapour and atomic hydrogen in a microwave discharge.
See also
Chromium hydride
Magnesium monohydride
Calcium monohydride
References
Extra reading
FeH Bibliography from ExoMol
Iron compounds
Metal hydrides | Iron(I) hydride | [
"Chemistry"
] | 2,036 | [
"Metal hydrides",
"Inorganic compounds",
"Reducing agents"
] |
47,181,909 | https://en.wikipedia.org/wiki/Pulsatile%20secretion | Pulsatile secretion is a biochemical phenomenon observed in a wide variety of cell and tissue types, in which chemical products are secreted in a regular temporal pattern. The most common cellular products observed to be released in this manner are intercellular signaling molecules such as hormones or neurotransmitters. Examples of hormones that are secreted pulsatilely include insulin, thyrotropin, TRH, gonadotropin-releasing hormone (GnRH) and growth hormone (GH). In the nervous system, pulsatility is observed in oscillatory activity from central pattern generators. In the heart, pacemakers are able to work and secrete in a pulsatile manner. A pulsatile secretion pattern is critical to the function of many hormones in order to maintain the delicate homeostatic balance necessary for essential life processes, such as development and reproduction. Variations of the concentration at a certain frequency can be critical to hormone function, as evidenced by the case of GnRH agonists, which cause functional inhibition of the receptor for GnRH due to profound downregulation in response to constant (tonic) stimulation. Pulsatility may function to sensitize target tissues to the hormone of interest and upregulate receptors, leading to improved responses. This heightened response may have served to improve the animal's fitness in its environment and promote its evolutionary retention.
Pulsatile secretion in its various forms is observed in:
Hypothalamic–pituitary–gonadal axis (HPG) related hormones
Glucocorticoids
Insulin
Growth hormone
Parathyroid hormone
Neuroendocrine pulsatility
Nervous system control over hormone release is based in the hypothalamus, from which the neurons that populate the paraventricular and arcuate nuclei originate. These neurons project to the median eminence, where they secrete releasing hormones into the hypophysial portal system connecting the hypothalamus with the pituitary gland. There, they dictate endocrine function via the four HPG axes. Recent studies have begun to offer evidence that many pituitary hormones which have been observed to be released episodically are preceded by pulsatile secretion of their associated releasing hormone from the hypothalamus in a similar pulsatile fashion. Novel research into the cellular mechanisms associated with pituitary hormone pulsatility, such as that observed for luteinizing hormone (LH) and follicle-stimulating hormone (FSH), has indicated similar pulses into the hypophyseal vessels of gonadotropin-releasing hormone (GnRH).
Luteinizing hormone and follicle-stimulating hormone (HPG axis)
LH is released from the pituitary gland along with FSH in response to GnRH release into the hypophyseal portal system. Pulsatile GnRH release causes pulsatile LH and FSH release to occur, which modulates and maintains appropriate levels of bioavailable gonadal hormone—testosterone in males and estradiol in females—subject to the requirements of a superior feedback loop. In females the levels of LH are typically 1–20 IU/L during the reproductive period and are estimated to be 1.8–8.6 IU/L in males over 18 years of age.
Adrenocorticotropin and glucocorticoids (HPA axis)
Regular pulses of glucocorticoids, mainly cortisol in the case of humans, are released regularly from the adrenal cortex following a circadian pattern, in addition to their release as a part of the stress response. Cortisol release follows a high frequency of pulses forming an ultradian rhythm, with amplitude being the primary variation in its release, so that the signal is amplitude modulated. Glucocorticoid pulsatility has been observed to follow a circadian rhythm, with highest levels observed before waking and before anticipated mealtimes. This pattern in amplitude of release is observed to be consistent across vertebrates. Studies done in humans, rats, and sheep have also observed a similar circadian pattern of release of adrenocorticotropin (ACTH) shortly preceding the pulse in the resulting corticosteroid. It is currently hypothesized that the observed pulsatility of ACTH and glucocorticoids is driven via pulsatility of corticotropin-releasing hormone (CRH); however, there exist few data to support this due to difficulty in measuring CRH.
Thyrotropin and thyroid hormones (HPT axis)
The secretion pattern of thyrotropin (TSH) is shaped by infradian, circadian and ultradian rhythms. Infradian rhythms are mainly represented by circannual variation mirroring the seasonality of thyroid function. Circadian rhythms lead to peak (acrophase) secretion around midnight and nadir concentrations around noon and in the early afternoon. A similar pattern is observed for triiodothyronine, though with a phase shift. Pulsatile release contributes to the ultradian rhythm of TSH concentration, with about 10 pulses per 24 hours. The amplitude of the circadian and ultradian rhythms is reduced in severe non-thyroidal illness syndrome (TACITUS).
Contemporary theories assume that autocrine and paracrine (ultrashort) feedback mechanisms controlling TSH secretion within the anterior pituitary gland are a major factor contributing to the evolution of its pulsatility.
Insulin
Pulsatile insulin secretion from individual beta cells is driven by oscillation of the calcium concentration in the cells. In beta cells lacking contact (i.e. outside islets of Langerhans), the periodicity of these oscillations is rather variable (2–10 min). However, within an islet of Langerhans, the oscillations become synchronized by electrical coupling between closely located beta cells that are connected by gap junctions, and the periodicity is more uniform (3–6 min). In addition to gap junctions, pulse coordination is managed by ATP signaling. Alpha and beta cells in the pancreas also secrete factors in a similar pulsatile manner.
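The synchronizing effect of coupling between beta cells can be illustrated with a toy Kuramoto phase model, in which uncoupled oscillators with periods spread over 2–10 min drift out of step while coupled ones lock together. The oscillator count, coupling strength and integration settings are arbitrary illustrative choices, not measured beta-cell parameters.

```python
import cmath
import math

def kuramoto_order(K, n=10, steps=4000, dt=0.01):
    """Return the phase-coherence order parameter r (0..1) after integrating
    n coupled phase oscillators (toy model of beta-cell synchrony)."""
    # Natural periods spread over 2-10 min, like isolated beta cells
    periods = [2 + 8 * i / (n - 1) for i in range(n)]
    omega = [2 * math.pi / p for p in periods]
    theta = [0.5 * i for i in range(n)]          # spread-out initial phases
    for _ in range(steps):
        mean = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(mean), cmath.phase(mean)
        # Euler step of the Kuramoto mean-field equations
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

uncoupled = kuramoto_order(K=0.0)   # cells outside an islet: low coherence
coupled = kuramoto_order(K=5.0)     # gap-junction-like coupling: high coherence
```

With K = 0 the phase coherence stays low, while sufficiently strong coupling drives the order parameter close to 1, mirroring the more uniform periodicity seen within an islet.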
References
Biochemistry
Cell biology
Hormones
Physiology
Secretion | Pulsatile secretion | [
"Chemistry",
"Biology"
] | 1,334 | [
"Biochemistry",
"Cell biology",
"nan",
"Physiology"
] |
47,182,539 | https://en.wikipedia.org/wiki/Fundamental%20series | The fundamental series is a set of spectral lines caused by transitions between d and f orbitals in atoms.
Originally the series was discovered in the infrared by Fowler and independently by Arno Bergmann. This resulted in the name Bergmann series being used for such a set of lines in a spectrum. However, the name was changed, as Bergmann also discovered other series of lines and other discoverers established further series of this kind. They became known as the fundamental series. Bergmann observed lithium at 5347 cm−1, sodium at 5416 cm−1 and potassium at 6592 cm−1. Bergmann observed that the lines in the series in the caesium spectrum were double. His discovery was announced in Contributions to the Knowledge of the Infra-Red Emission Spectra of the Alkalies, Jena 1907. Carl Runge called this series the "new series". He predicted that the lines of potassium and rubidium would be in pairs. He expressed the frequencies of the series lines by a formula and predicted a connection of the series limit to the other known series. In 1909 W. M. Hicks produced approximate formulas for the various series and noticed that this series had a simpler formula than the others; he thus called it the "fundamental series" and used the letter F.
The formula more closely resembled that of the hydrogen spectrum because of a smaller quantum defect. There is no physical basis to call this series fundamental, and it has been described as badly named. It is the last spectroscopic series to have a special designation. The next series, involving transitions between F and G subshells, is known as the FG series.
Frequencies of the lines in the series are given by the formula

ν = R/(3 + d)² − R/(m + f)²

where R is the Rydberg constant, R/(3 + d)² is the series limit, represented by 3D, and R/(m + f)² is represented by mF. A shortened formula is then given by ν = 3D − mF, with values of m being integers from 4 upwards. The two quantities separated by the "−" are called terms, and represent the energy levels of the atom.
The limit of the fundamental series is the same as the 3D level.
The terms can have different designations, mF for single line systems, mΦ for doublets and mf for triplets.
Lines in the fundamental series are split into compound doublets, due to the D and F subshells having different spin possibilities. The splitting of the D subshell is very small and that of the F subshell even less so, so the fine structure in the fundamental series is harder to resolve than that in the sharp or diffuse series.
Lithium
The quantum defect for lithium is 0.
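Since the quantum defects vanish for lithium, the series formula reduces to a hydrogen-like expression, and the first line can be checked against Bergmann's observation. A minimal sketch (using the infinite-mass Rydberg constant; lithium's actual Rydberg value is slightly smaller, and the observed 5347 cm−1 reflects small residual defects):

```python
R = 109737.316   # infinite-mass Rydberg constant, cm^-1

def fundamental_line(m, d=0.0, f=0.0):
    """Wavenumber of the 3D - mF line for quantum defects d and f."""
    return R / (3 + d) ** 2 - R / (m + f) ** 2

# With d = f = 0 (lithium), the first line (m = 4) comes out near
# Bergmann's observed 5347 cm^-1
first_line = fundamental_line(4)   # ~5334 cm^-1
```

Successive lines (m = 5, 6, ...) increase in wavenumber toward the series limit R/9.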
Sodium
The fundamental series lines for sodium appear in the near infrared.
Potassium
The fundamental series lines for potassium appear in the near infrared.
Rubidium
The fundamental series lines for rubidium appear in the near infrared. The valence electron moves from the 4d level as the 3d is contained in an inner shell. They were observed by R von Lamb.
Relevant energy levels are 4p64d j=5/2 19,355.282 cm−1 and j=3/2 19,355.623 cm−1, and the first f levels at 4p64f j=5/2 26,792.185 cm−1 and j=7/2 26,792.169 cm−1.
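The line positions follow directly from the term values quoted above: each component of the compound doublet is the difference between an f term and a d term, for transitions allowed by the Δj = 0, ±1 selection rule. A short sketch:

```python
# Term values (cm^-1) for rubidium, as quoted in the text
d_levels = {"5/2": 19355.282, "3/2": 19355.623}   # 4p6 4d
f_levels = {"5/2": 26792.185, "7/2": 26792.169}   # 4p6 4f

# Components allowed by the delta-j = 0, +/-1 selection rule
lines = {
    ("5/2", "7/2"): f_levels["7/2"] - d_levels["5/2"],
    ("5/2", "5/2"): f_levels["5/2"] - d_levels["5/2"],
    ("3/2", "5/2"): f_levels["5/2"] - d_levels["3/2"],
}
```

All three components fall near 7437 cm−1 (about 1.34 μm), consistent with the series appearing in the near infrared, and the tiny splittings reflect the very small fine structure of the d and f levels.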
Caesium
References
Spectroscopy
Atomic physics
Emission spectroscopy | Fundamental series | [
"Physics",
"Chemistry"
] | 675 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Emission spectroscopy",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
47,185,837 | https://en.wikipedia.org/wiki/Joakim%20Edsj%C3%B6 | Joakim Edsjö is a Swedish professor of theoretical physics at Stockholm University. His research is carried out at the interface of particle physics, astrophysics and cosmology, and is particularly concerned with the search for dark matter.
Education and academic career
Edsjö received his PhD in 1997 from Uppsala University, with a thesis entitled "Aspects of neutrino detection of neutralino dark matter". He then went on to a postdoctoral fellowship at the University of California, Berkeley, before returning to Sweden to take up a position as professor at Stockholm University. Edsjö is well known for having developed DarkSUSY, a widely used numerical package for neutralino dark matter calculations, together with Paolo Gondolo, Lars Bergström, Piero Ullio, Mia Schelke and Edward Baltz. Together with Pat Scott and Malcolm Fairbairn, he developed DarkStars, a code that implements capture, annihilation and energy transport by dark matter in a stellar evolution code.
His most cited paper, "DarkSUSY: Computing Supersymmetric Dark Matter Properties Numerically", has been cited 1035 times according to Google Scholar.
References
External links
Joakim Edsjö's website
Oskar Klein Centres web site
Physics Department of Stockholm university
DarkSUSY
Year of birth missing (living people)
Living people
Swedish physicists
Theoretical physicists
Academic staff of Stockholm University
Uppsala University alumni | Joakim Edsjö | [
"Physics"
] | 280 | [
"Theoretical physics",
"Theoretical physicists"
] |
36,962,483 | https://en.wikipedia.org/wiki/Ocean%20dredging | Ocean dredging was an oceanography technique introduced in the nineteenth century and developed by naturalist Edward Forbes. This form of dredging removes substrate and fauna specifically from the marine environment. Ocean dredging techniques were used on the HMS Challenger expeditions as a way to sample marine sediment and organisms.
History
Edward Forbes
Edward Forbes would lay out the dredged material on the deck to examine, preserve and study it. The practice was chronicled in a remembrance of Forbes by William Jerdan in his 1866 book Men I Have Known.
HMS Challenger
Ocean dredging was a common sampling technique used on the Challenger expedition. The expedition, led by oceanographer John Murray and chief scientist Charles Wyville Thomson, set sail in 1872 and returned to England in 1876. The ship was equipped with 34 dredges and 20 dredge nets, completing 133 dredges at 111 stations during the four-year expedition. Thomson and Murray detail the following instructions for surveying dredged organisms: "Examine mud brought up by dredge from different depths for living diatoms; examine also for the same purpose the stomachs of Salpae and other marine animals." The expedition successfully dredged, collected, and preserved marine sediments, plants, algae, and invertebrates. The Challenger expedition is credited with discovering approximately 4,700 new marine species and expanding the then-current knowledge of ocean sediments and geology.
Seafloor effects
Ocean dredging can negatively affect benthic ecosystems. When dredging equipment is moved along the seafloor, habitat-forming epifauna is damaged or removed. As emergent corals, sponges, and seagrasses are damaged there is less habitat complexity for juvenile fishes to find protection in. Dredging also removes the sand waves in which juvenile Atlantic cod settle.
The top 2–6 cm of marine substrate is disturbed during dredging, which can have negative impacts on deposit feeders, nutrient flux, and burrowing species. Dredging is often banned or highly restricted within marine protected areas in order to protect recovering ecosystems.
Equipment used
Dredging in the marine environment can be carried out with a variety of equipment, depending on the purpose of the dredge. If the purpose is to remove sand or redistribute sediment, then a dredge drag head attached to a trailing suction hopper dredger ship is used. A fishing dredge (also known as a scallop dredge) is used for collecting edible species of oysters, mussels, scallops, clams, and crabs from the seafloor.
See also
Marine sediment
Terrigenous sediment
Deep sea mining
Coastal erosion
Trawling
References
Oceanography | Ocean dredging | [
"Physics",
"Environmental_science"
] | 533 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
36,966,162 | https://en.wikipedia.org/wiki/San%20Joaquin%20Marsh%20Wildlife%20Sanctuary | The San Joaquin Marsh Wildlife Sanctuary is a constructed wetland in Irvine, California, in the flood plain of San Diego Creek just above its outlet into the Upper Newport Bay.
History
The site is owned by the Irvine Ranch Water District; it was used for farmland in the 1950s and 1960s, and (prior to its reconstruction) as a duck hunting range. Restoration of the wetlands began in 1988 and was completed in 2000.
Now, the site serves a dual purpose of removing nitrates from the creek water and providing a bird habitat. The water district also operates an adjacent wastewater treatment facility but the treated wastewater does not enter the wildlife sanctuary.
Description
Within the sanctuary, water from the creek percolates through a system of ponds, constructed in 1997 and ringed with bulrushes; the ponds are periodically drained and re-seeded, and the surrounding land is covered with native plants. A small hill at one edge of the site serves as an arboretum for non-native trees, planted for Earth Day in 1990.
Wildlife
The landscaping has been designed to attract birds, and nesting boxes for the birds have been provided. While waterbirds such as herons, egrets, pelicans, sandpipers, ducks, geese, and kingfishers predominate, monthly censuses have found over 120 species of birds, including terrestrial hawks, swallows, roadrunners, and hummingbirds.
Public access
The sanctuary is open to the public daily during the daytime, and has over of wheelchair-accessible hiking trails. The facilities also include free parking, restrooms, benches, and trail maps. The Duck Club, a building that was moved to the site in the 1940s and was until 1988 the base for two hunting clubs, serves as a free meeting facility for non-profit organizations. The Audubon Society maintains a chapter office in another building, the former bunkhouse of the Duck Club.
References
External links
Photos from the San Joaquin Wildlife Sanctuary group pool on Flickr
Constructed wetlands
Wetlands of California
Protected areas of Orange County, California
Geography of Irvine, California
Natural history of Orange County, California
Landforms of Orange County, California | San Joaquin Marsh Wildlife Sanctuary | [
"Chemistry",
"Engineering",
"Biology"
] | 426 | [
"Bioremediation",
"Constructed wetlands",
"Environmental engineering"
] |
36,968,172 | https://en.wikipedia.org/wiki/Inter-universal%20Teichm%C3%BCller%20theory | Inter-universal Teichmüller theory (IUT or IUTT) is the name given by mathematician Shinichi Mochizuki to a theory he developed in the 2000s, following his earlier work in arithmetic geometry. According to Mochizuki, it is "an arithmetic version of Teichmüller theory for number fields equipped with an elliptic curve". The theory was made public in a series of four preprints posted in 2012 to his website. The most striking claimed application of the theory is to provide a proof for various outstanding conjectures in number theory, in particular the abc conjecture. Mochizuki and a few other mathematicians claim that the theory indeed yields such a proof but this has so far not been accepted by the mathematical community.
History
The theory was developed entirely by Mochizuki up to 2012, and the last parts were written up in a series of four preprints.
Mochizuki made his work public in August 2012 with none of the fanfare that typically accompanies major advances, posting the papers only to his institution's preprint server and his website, and making no announcement to colleagues.
Soon after, the papers were picked up by Akio Tamagawa and Ivan Fesenko and the mathematical community at large was made aware of the claims to have proven the abc conjecture.
The reception of the claim was at first enthusiastic, though number theorists were baffled by the original language introduced and used by Mochizuki. Workshops on IUT were held at RIMS in March 2015, in Beijing in July 2015,
in Oxford in December 2015 and at RIMS in July 2016. The last two events attracted more than 100 participants. Presentations from these workshops are available online.
However, these did not lead to broader understanding of Mochizuki's ideas and the status of his claimed proof was not changed by these events.
In 2017, a number of mathematicians who had examined Mochizuki's argument in detail pointed to a specific point which they could not understand, near the end of the proof of Corollary 3.12, in paper three of four.
In March 2018, Peter Scholze and Jakob Stix visited Kyoto University for five days of discussions with Mochizuki and Yuichiro Hoshi;
while this did not resolve the differences, it brought into focus where the difficulties lay.
It also resulted in the publication of reports of the discussion by both sides:
In May 2018, Scholze and Stix wrote a 10-page report, updated in September 2018, detailing the (previously identified) gap in Corollary 3.12 in the proof, describing it as "so severe that in [their] opinion small modifications will not rescue the proof strategy", and that Mochizuki's preprint cannot claim a proof of abc.
In September 2018, Mochizuki wrote a 41-page summary of his view of the discussions and his conclusions about which aspects of his theory he considers misunderstood. In particular he names:
"re-initialization" of (mathematical) objects, making their previous "history" inaccessible;
"labels" for different "versions" of objects;
the emphasis on the types ("species") of objects.
In July and October 2018, Mochizuki wrote 8- and 5-page reactions to the May and September versions of the Scholze–Stix report, maintaining that the gap is the result of their simplifications and that there is no gap in his theory.
Mochizuki published his work in a series of four journal papers in 2021, in the journal Publications of the Research Institute for Mathematical Sciences, Kyoto University, for which he is editor-in-chief. In a review of these papers in zbMATH, Peter Scholze wrote that his concerns from 2017 and 2018 "have not been addressed in the published version". Other authors have pointed to the unresolved dispute between Mochizuki and Scholze over the correctness of this work as an instance in which the peer review process of mathematical journal publication has failed in its usual function of convincing the mathematical community as a whole of the validity of a result.
Mathematical significance
Scope of the theory
Inter-universal Teichmüller theory is a continuation of Mochizuki's previous work in arithmetic geometry. This work, which has been peer-reviewed and well received by the mathematical community, includes major contributions to anabelian geometry, and the development of p-adic Teichmüller theory, Hodge–Arakelov theory and Frobenioid categories. It was developed with explicit references to the aim of getting a deeper understanding of abc and related conjectures. In the geometric setting, analogues to certain ideas of IUT appear in the proof by Bogomolov of the geometric Szpiro inequality.
The key prerequisite for IUT is Mochizuki's mono-anabelian geometry and its reconstruction results, which make it possible to retrieve various scheme-theoretic objects associated to a hyperbolic curve over a number field from the knowledge of its fundamental group, or of certain Galois groups. IUT applies algorithmic results of mono-anabelian geometry to reconstruct relevant schemes after applying arithmetic deformations to them; a key role is played by three rigidities established in Mochizuki's etale theta theory. Roughly speaking, arithmetic deformations change the multiplication of a given ring, and the task is to measure how much the addition is changed. Infrastructure for deformation procedures is provided by certain links between so-called Hodge theaters, such as a theta-link and a log-link.
These Hodge theaters use two main symmetries of IUT: multiplicative arithmetic and additive geometric. On one hand, Hodge theaters generalize such classical objects in number theory as the adeles and ideles in relation to their global elements. On the other hand, they generalize certain structures appearing in the previous Hodge–Arakelov theory of Mochizuki. The links between theaters are not compatible with ring or scheme structures and are performed outside conventional arithmetic geometry. However, they are compatible with certain group structures, and absolute Galois groups as well as certain types of topological groups play a fundamental role in IUT. Considerations of multiradiality, a generalization of functoriality, imply that three mild indeterminacies have to be introduced.
Consequences in number theory
The main claimed application of IUT is to various conjectures in number theory, among them the abc conjecture, but also more geometric conjectures such as
Szpiro's conjecture on elliptic curves and Vojta's conjecture for curves.
The first step is to translate arithmetic information on these objects to the setting of Frobenioid categories. It is claimed that extra structure on this side allows one to deduce statements which translate back into the claimed results.
One issue with Mochizuki's arguments, which he acknowledges, is that it does not seem possible to get intermediate results in his claimed proof of the abc conjecture using IUT. In other words, there is no smaller subset of his arguments more easily amenable to an analysis by outside experts which would yield a new result in Diophantine geometry.
Vesselin Dimitrov extracted from Mochizuki's arguments a proof of a quantitative result on abc, which could in principle give a refutation of the proof.
References
External links
Shinichi Mochizuki (1995–2018), Papers of Shinichi Mochizuki
Shinichi Mochizuki (2014), A panoramic overview of inter-universal Teichmüller theory
Yuichiro Hoshi; Go Yamashita (2015), RIMS Joint Research Workshop: On the verification and further development of inter-universal Teichmuller theory
Ivan Fesenko (2015), Arithmetic deformation theory via arithmetic fundamental groups and nonarchimedean theta functions, notes on the work of Shinichi Mochizuki.
Yuichiro Hoshi (2015) Introduction to inter-universal Teichmüller theory, a survey in Japanese
Algebraic geometry
Number theory | Inter-universal Teichmüller theory | [
"Mathematics"
] | 1,639 | [
"Fields of abstract algebra",
"Discrete mathematics",
"Number theory",
"Algebraic geometry"
] |
44,044,538 | https://en.wikipedia.org/wiki/Siegfried%20Wolff | Siegfried Wolff is a now-retired Degussa chemist noted for first recognizing the potential of using silica in tire treads to reduce rolling resistance.
Education
Siegfried Wolff was born in Germany.
Career
Wolff started his career at Degussa in 1953 as a student apprentice, later moving into research and development of carbon black.
In the 1960s Wolff investigated the mechanisms of rubber reinforcement by fillers. He introduced new parameters for characterizing furnace black and silica, enabling improved quantification of the contribution of filler structure and surface area to rubber properties.
In addition, Wolff studied vulcanization systems using organosilanes and triazine-based chemicals.
Wolff originated the development of all-silica tire tread compounds. He first disclosed this use of silica to achieve low rolling resistance in papers presented in 1984 at the Tire Society meeting in Akron, Ohio.
Eventually, Wolff rose to the head of the department of applied research for fillers and rubber chemicals.
He retired in 1992.
Honors and awards
1996 - Charles Goodyear Medal
References
Living people
Polymer scientists and engineers
20th-century German chemists
Year of birth missing (living people)
Place of birth missing (living people) | Siegfried Wolff | [
"Chemistry",
"Materials_science"
] | 237 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
44,046,734 | https://en.wikipedia.org/wiki/Matched%20molecular%20pair%20analysis | Matched molecular pair analysis (MMPA) is a method in cheminformatics that compares the properties of two molecules that differ only by a single chemical transformation, such as the substitution of a hydrogen atom by a chlorine one. Such pairs of compounds are known as matched molecular pairs (MMP). Because the structural difference between the two molecules is small, any experimentally observed change in a physical or biological property between the matched molecular pair can more easily be interpreted. The term was first coined by Kenny and Sadowski in the book Chemoinformatics in Drug Discovery.
Introduction
An MMP can be defined as a pair of molecules that differ only in a minor single-point change. Matched molecular pairs (MMPs) are widely used in medicinal chemistry to study changes in compound properties, including biological activity, toxicity, environmental hazards and more, that are associated with well-defined structural modifications. A single-point change between a pair of molecules is termed a chemical transformation or molecular transformation; each molecular pair is associated with a particular transformation. An example of a transformation is the replacement of one functional group by another. More specifically, a molecular transformation can be defined as the replacement of a molecular fragment having one, two or three attachment points with another fragment. A transformation that is useful in a specified context is termed a "significant" transformation. For example, a transformation may systematically decrease or increase a desired property of chemical compounds. Transformations that affect a particular property or activity in a statistically significant sense are called significant transformations. A transformation is considered significant if it increases the property value "more often" than it decreases it, or vice versa; that is, the distribution of increasing and decreasing pairs should differ significantly from the binomial ("no effect") distribution at a particular p-value (usually 0.05).
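The significance criterion described above amounts to a two-sided sign test against the binomial "no effect" distribution. A minimal sketch (the 16-increase/4-decrease counts are hypothetical data for illustration):

```python
from math import comb

def sign_test_p(increases, decreases):
    """Two-sided exact binomial (sign) test against the p = 0.5
    'no effect' null hypothesis."""
    n = increases + decreases
    k = max(increases, decreases)
    upper_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * upper_tail)

# Hypothetical transformation seen in 20 matched pairs:
# activity goes up in 16 of them and down in 4
p_value = sign_test_p(16, 4)      # ~0.012
significant = p_value < 0.05      # significant at the usual 0.05 threshold
```

A transformation with a nearly even split of increases and decreases (e.g. 11 vs 9) gives a p-value well above 0.05 and would not be called significant.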
Significance of MMP based analysis
MMP based analysis is an attractive method for computational analysis because they can be algorithmically generated and they make it possible to associate defined structural modifications at the level of compound pairs with chemical property changes, including biological activity.
Interpretable QSAR models
MMPA is quite useful in the field of quantitative structure–activity relationship (QSAR) modelling. One issue with QSAR models is that they are difficult to interpret in a chemically meaningful manner. While simple linear regression models can be fairly easy to interpret, the most powerful algorithms, such as neural networks and support vector machines, are essentially "black boxes" that provide predictions which cannot be easily interpreted. This problem undermines the ability of a QSAR model to help the medicinal chemist make decisions. If a compound is predicted to be active against some microorganism, what are the driving factors of its activity? Or if it is predicted to be inactive, how can its activity be modulated? The black-box nature of a QSAR model prevents it from addressing these crucial issues. The use of predicted MMPs makes it possible to interpret models and identify which MMPs were learned by the model. MMPs that are not reproduced by the model could correspond to experimental errors or deficiencies of the model (inappropriate descriptors, too few data, etc.).
Analysis of matched molecular pairs can be very useful for understanding mechanisms of action. A medicinal chemist might be particularly interested in "activity cliffs": minor structural modifications that change the target activity significantly.
Activity Cliff
Activity cliffs are pairs or groups of compounds that are highly similar in structure but have large differences in potency towards the same target. Activity cliffs have received great attention in computational chemistry and drug discovery because they represent a discontinuity in the structure–activity relationship (SAR). This discontinuity also indicates high SAR information content, because small chemical changes in a set of similar compounds lead to large changes in activity. The assessment of activity cliffs requires careful consideration of similarity and potency-difference criteria.
Types of MMP based analysis
Matched molecular pair analyses (MMPA) can be classified into two types: supervised and unsupervised MMPA.
Supervised MMPA
In supervised MMPA, the chemical transformations are predefined, then the corresponding matched pair compounds are found within the data set and the change in end point computed for each transformation.
Unsupervised MMPA
Also known as automated MMPA. A machine learning algorithm is used to find all possible matched pairs in a data set according to a set of predefined rules. This results in much larger numbers of matched pairs and unique transformations, which are typically filtered during the process to identify those transformations that correspond to statistically significant changes in the targeted property with a reasonable number of matched pairs.
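A toy sketch of the indexing idea behind automated MMPA: each molecule is cut once into a (core, variable fragment) pair, molecules are grouped by shared core, and any two molecules in a group that differ in the variable fragment form an MMP. The SMILES-like strings and single-cut fragmentations below are hypothetical inputs; real implementations fragment molecules systematically with cheminformatics toolkits.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical single-cut fragmentations: molecule -> [(core, variable part)]
# ('[*]' marks the attachment point; these strings are illustrative, not the
# output of a real fragmentation tool)
fragmentations = {
    "mol_A": [("c1ccccc1[*]", "Cl")],
    "mol_B": [("c1ccccc1[*]", "Br")],
    "mol_C": [("c1ccncc1[*]", "Cl")],
}

def find_mmps(fragmentations):
    """Group molecules by shared core; pairs that differ only in the
    variable fragment are matched molecular pairs."""
    by_core = defaultdict(list)
    for mol, cuts in fragmentations.items():
        for core, var in cuts:
            by_core[core].append((mol, var))
    mmps = []
    for entries in by_core.values():
        for (m1, v1), (m2, v2) in combinations(entries, 2):
            if v1 != v2:
                mmps.append((m1, m2, f"{v1}>>{v2}"))
    return mmps

pairs = find_mmps(fragmentations)
```

Here mol_A and mol_B share a core and differ only in the Cl/Br fragment, so they form the single MMP with transformation "Cl>>Br"; mol_C has a different core and pairs with neither.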
Matched molecular series
Here, instead of looking at pairs of molecules that differ at only one point, a series of more than two molecules differing at a single point is considered. The concept of matched molecular series was introduced by Wawer and Bajorath. It is argued that longer matched series are more likely to exhibit preferred molecular transformations, while matched pairs exhibit only a small preference.
Limitations
The application of MMPA across large chemical databases for the optimization of ligand potency is problematic, because the same structural transformation may increase, decrease, or not affect the potency of different compounds in the dataset. Selection of practically significant transformations from a dataset of molecules is a challenging issue in MMPA. Moreover, the effect of a particular molecular transformation can depend significantly on the chemical context of the transformation.
Besides this, MMPA can pose some limitations in terms of computational resources, especially when dealing with databases of compounds with a large number of breakable bonds. Furthermore, more atoms in the variable part of the molecule also lead to combinatorial explosion problems. To deal with this, the number of breakable bonds and the number of atoms in the variable part can be used to pre-filter the database.
References
Cheminformatics
Biostatistics | Matched molecular pair analysis | [
"Chemistry"
] | 1,196 | [
"Computational chemistry",
"Cheminformatics",
"nan"
] |
44,048,160 | https://en.wikipedia.org/wiki/Homology-derived%20Secondary%20Structure%20of%20Proteins | HSSP (Homology-derived Secondary Structure of Proteins) is a database that combines structural and sequence information about proteins. The database contains the alignments of all available homologs of proteins from the PDB database. As a result, HSSP is also a database of homology-based implied protein structures.
See also
Protein Data Bank (PDB)
STING
References
External links
HSSP
Protein databases
Protein structure | Homology-derived Secondary Structure of Proteins | [
"Chemistry"
] | 87 | [
"Protein structure",
"Structural biology"
] |
44,048,805 | https://en.wikipedia.org/wiki/Affimer | Affimer molecules are small proteins that bind to target proteins with affinity in the nanomolar range. These engineered non-antibody binding proteins are designed to mimic the molecular recognition characteristics of monoclonal antibodies in different applications. These affinity reagents have been optimized to increase their stability, make them tolerant to a range of temperatures and pH, reduce their size, and to increase their expression in E.coli and mammalian cells.
Development
Affimer proteins were developed initially at the MRC Cancer Cell Unit in Cambridge, then across two laboratories at the University of Leeds. Derived from the cystatin family, whose members function in nature as cysteine protease inhibitors, these 12–14 kDa proteins share the common tertiary structure of an alpha-helix lying on top of an anti-parallel beta-sheet.
Affimer proteins display two peptide loops that can both be randomized to bind to desired target proteins, in a similar manner to monoclonal antibodies. Stabilization of the two peptides by the protein scaffold constrains the possible conformations that the peptides can take. This increases the binding affinity and specificity compared to libraries of free peptides, though it can limit the target repertoire of Affimers.
Production
Phage display libraries of 10^9 randomized sequences are used to screen for Affimer proteins that exhibit high-specificity binding to the target protein, with binding affinities in the nM range. The ability to direct in vitro screening techniques allows the identification of specific, high-affinity Affimers. In vitro screening and development also mean that the target space for Affimers is not limited by the animal immune system. Affimers are generated using recombinant systems, so their generation is more rapid and reproducible compared to the production of polyclonal antibodies.
Multimeric forms of Affimers have been generated and shown to yield titres in the range of 200–400 mg/L under small-scale culture using bacterial host systems. Multimeric forms of Affimers with the same target specificity provide avidity effects in target binding.
Many different tags and fusion proteins, such as fluorophores, single-stranded DNA, His tags, and c-Myc tags, can be conjugated to Affimers. Specific cysteine residues can be introduced into the protein to allow thiol chemistry to uniformly orient Affimers on a solid support, e.g. ELISA plates. This flexible functionalisation of the Affimer molecule allows functionality across multiple applications and assay formats.
Properties
Affimers are recombinant proteins. As they are manufactured using recombinant bacterial production processes, the batch-to-batch consistency for Affimers is improved compared to polyclonal antibodies, overcoming some of the issues of reproducibility and security of supply.
These synthetic antibodies were engineered to be stable, non-toxic, biologically neutral and to contain no post-translational modifications or disulfide bridges. Two separate loop sequences, incorporating a total of 12 to 36 amino acids, form the target interaction surface, so interaction surfaces can range from 650 to 1000 Å². The large interaction surface allows binding to target proteins.
Applications
Affimer technology has been commercialised and developed by Avacta, which is developing these affinity reagents as tools for diagnostics and as biotherapeutics.
Reagents and Diagnostics
Affimer binders have been used across a number of platforms, including ELISA, surface plasmon resonance, and affinity purification. Affimers that inhibit protein–protein interactions can be produced, with the potential to express these inhibitors in mammalian cells to modify signalling pathways as cell therapies.
Therapeutics
The small size and stability profile of Affimers, combined with their human origin, confer drug-like properties. This may represent advantages over antibodies in terms of tissue penetration, for example in solid tumours, where Avacta are developing PD-L1 inhibitors as alternatives to Opdivo and Yervoy, though it requires half-life modification to prevent rapid excretion through the kidneys.
Affimers can be conjugated to form multimers for the design of therapeutics. Examples include fusing multi-specific Affimer molecules to albumin binders to increase their half-life in vivo, using them as the targeting moiety in chimeric receptors, and modifying them to carry a toxin in Affimer–drug conjugates.
Affimers as therapeutics are in discovery and preclinical development to tackle cancer, both via CAR-T cell therapy and as immune checkpoint inhibitors. Early studies using ex vivo human samples showed low immunogenicity associated with the Affimer scaffold, at levels comparable to a marketed antibody therapeutic. Furthermore, initial preclinical studies showed good efficacy and tolerability of the anti-PDL1 immuno-oncology Affimers in mice. It is anticipated that IND filing for the first Affimer therapeutic will occur in 2023.
References
External links
Avacta
An Introduction to Affimer Technology - video
Antibody mimetics | Affimer | [
"Chemistry"
] | 1,037 | [
"Antibody mimetics",
"Molecular biology"
] |
44,050,315 | https://en.wikipedia.org/wiki/Royal%20Spring%2C%20Warsaw | The Royal Spring (Zdrój Królewski) is a well located in Romuald Traugutt Park on Zakroczymska Street in Warsaw. The spring's building was built in the 18th century, with construction beginning in 1770. It is also called the King Stanislaus Augustus Spring. In the 18th century it was very popular with residents of the Warsaw New Town as well as travellers, although the drinking water had to be paid for.
Architecture and style
In the first half of the 18th century the Royal Spring was a wooden structure, and from 1770 to 1772 proper walls were built for it. The building was financed by King Stanisław August Poniatowski (1732-1798), with 50 ducats.
The monument was covered over during the construction of the Warsaw Citadel in 1832. Later discovered and excavated, it was restored to its original style and shape. From 1834 to 1836, engineering work was carried out under the supervision of the City of Warsaw's engineer Edward Klopmann as part of the city's water works. It was rebuilt to a design by Enrico Marconi in Neo-Gothic style, using the characteristic ogive-style arches, with the interior and exterior walls covered in typical red brick. The roof of the monument was surrounded by decorative vases.
A final renovation was carried out from 1931 to 1933. The designers were Stanisław Płoski and Andrzej Węgrzecki. They added new elements, including a roof crowned with a crenellated parapet. In 1959 a new roof was added.
On the front side of the building is a plaque in Latin: "STANISLAUS AUGUSTUS PROSPICIENDO PUBLICAE SALUBRITATI HUNC FONTEM RESTAURARI JUSSIT. ANNO MDCCLXXI" (Stanislaus Augustus, in the interests of public health, commanded this spring to be restored, in the year 1771).
Bibliography
References
Monuments and memorials in Warsaw
Water wells
New Town, Warsaw | Royal Spring, Warsaw | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 401 | [
"Hydrology",
"Water wells",
"Environmental engineering"
] |
44,051,448 | https://en.wikipedia.org/wiki/List%20of%20Martin%20Gardner%20Mathematical%20Games%20columns | Over a period of 24 years (January 1957 – December 1980), Martin Gardner wrote 288 consecutive monthly "Mathematical Games" columns for Scientific American magazine. Over the following years, until June 1986, Gardner wrote 9 more columns, bringing his total to 297; during this period other authors wrote most of the columns. In 1981, Gardner's column alternated with a new column by Douglas Hofstadter called "Metamagical Themas" (an anagram of "Mathematical Games"). The table below lists Gardner's columns.
Twelve of Gardner's columns provided the cover art for that month's magazine, indicated by "[cover]" in the table with a hyperlink to the cover.
Other articles by Gardner
Gardner wrote 5 other articles for Scientific American. His flexagon article in December 1956 was in all but name the first article in the series of Mathematical Games columns and led directly to the series which began the following month. These five articles are listed below.
References
External links
A Quarter Century of Recreational Mathematics, by Martin Gardner preserved at the Internet Archive
A subject index for the fifteen books of Martin Gardner's Mathematical Games columns
The Top 10 Martin Gardner Scientific American Articles
Columns (periodical)
Recreational mathematics
Mathematics-related lists
Mathematical Games columns | List of Martin Gardner Mathematical Games columns | [
"Mathematics"
] | 255 | [
"Recreational mathematics"
] |
44,052,408 | https://en.wikipedia.org/wiki/Scytovirin | Scytovirin is a 95-amino acid antiviral protein isolated from the cyanobacteria Scytonema varium. It has been cultured in E. coli and its structure investigated in detail. Scytovirin is thought to be produced by the bacteria to protect itself from viruses that might otherwise attack it, but as it has broad-spectrum antiviral activity against a range of enveloped viruses, scytovirin has also been found to be useful against a range of major human pathogens, most notably HIV / AIDS but also including SARS coronavirus and filoviruses such as Ebola virus and Marburg virus. While some lectins such as cyanovirin and Urtica dioica agglutinin are thought likely to be too allergenic to be used internally in humans, studies so far on scytovirin and griffithsin have not shown a similar level of immunogenicity. Scytovirin and griffithsin are currently being investigated as potential microbicides for topical use.
References
Proteins
Entry inhibitors | Scytovirin | [
"Chemistry",
"Biology"
] | 224 | [
"Biomolecules by chemical classification",
"Biotechnology stubs",
"Biochemistry stubs",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
44,054,294 | https://en.wikipedia.org/wiki/FGI-106 | FGI-106 is a broad-spectrum antiviral drug developed as a potential treatment for enveloped RNA viruses, in particular viral hemorrhagic fevers from the bunyavirus, flavivirus and filovirus families. It acts as an inhibitor which blocks viral entry into host cells. In animal tests FGI-106 shows both prophylactic and curative action against a range of deadly viruses for which few existing treatments are available, including the bunyaviruses hantavirus, Rift Valley fever virus and Crimean-Congo hemorrhagic fever virus, the flavivirus dengue virus, and the filoviruses Ebola virus and Marburg virus.
See also
Brincidofovir
BCX4430
Favipiravir
FGI-103
FGI-104
LJ-001
TKM-Ebola
ZMapp
References
Anti–RNA virus drugs
Antiviral drugs
Ebola
Experimental antiviral drugs
Nitrogen heterocycles
Heterocyclic compounds with 4 rings
Dimethylamino compounds | FGI-106 | [
"Biology"
] | 214 | [
"Antiviral drugs",
"Biocides"
] |
44,054,592 | https://en.wikipedia.org/wiki/LJ-001 | LJ-001 is a broad-spectrum antiviral drug developed as a potential treatment for enveloped viruses. It acts as an inhibitor which blocks viral entry into host cells at a step after virus binding but before virus–cell fusion, and also irreversibly inactivates the virions themselves by generating reactive singlet oxygen molecules which damage the viral membrane. In cell culture tests in vitro, LJ-001 was able to block and disable a wide range of different viruses, including influenza A, filoviruses, poxviruses, arenaviruses, bunyaviruses, paramyxoviruses, flaviviruses, and HIV. Unfortunately LJ-001 itself was unsuitable for further development, as it has poor physiological stability and requires light for its antiviral mechanism to operate. However the discovery of this novel mechanism for blocking virus entry and disabling the virion particles has led to LJ-001 being used as a lead compound to develop a novel family of more effective antiviral drugs with improved properties.
See also
Brincidofovir
BCX4430
Favipiravir
FGI-103
FGI-104
FGI-106
References
Antiviral drugs
Abandoned drugs
Phenyl compounds
Furans
Thiazolidines
Allyl compounds | LJ-001 | [
"Chemistry",
"Biology"
] | 268 | [
"Antiviral drugs",
"Biocides",
"Drug safety",
"Abandoned drugs"
] |
44,055,082 | https://en.wikipedia.org/wiki/FGI-103 | FGI-103 is an antiviral drug developed as a potential treatment for the filoviruses Ebola virus and Marburg virus. In tests on mice FGI-103 was effective against both Ebola and Marburg viruses when administered up to 48 hours after infection. The mechanism of action of FGI-103 has however not yet been established, as it was found not to be acting by any of the known mechanisms used by similar antiviral drugs.
See also
FGI-104
FGI-106
LJ-001
References
Antiviral drugs
Ebola
Experimental antiviral drugs
Benzimidazoles
Benzofurans | FGI-103 | [
"Biology"
] | 131 | [
"Antiviral drugs",
"Biocides"
] |
44,055,412 | https://en.wikipedia.org/wiki/EMIAS | Unified Medical Information and Analytical System of Moscow (EMIAS) is an information system that automates appointment booking and the work of medical professionals in the city of Moscow. The system includes online appointment services, Electronic Health Record management, and electronic prescribing based on cloud technology. EMIAS is a digital system designed to increase the quality of, and access to, medical services in public health clinics. The project was designed and is being implemented as part of the «Digital city» program, in execution of the Moscow Government's order of April 7, 2014 (as amended by Moscow government resolution No. 22-PP of 21.05.2013).
General information
The project is developed and implemented as part of the “Digital city” program by the Moscow IT department. As of October 1, 2013, 557 public health facilities, including women's counselling centers and dental clinics, had been connected to EMIAS. Furthermore, EMIAS organizes and unifies information exchange between outpatient and inpatient facilities.
The system manages patient flows, integrates the outpatient card, and provides consolidated managerial accounting and personalized lists of medical care. It also contains information about the availability of medical institutions and individual doctors. EMIAS additionally supports medical registers used to resolve organizational questions concerning different categories of people with specific diseases.
EMIAS is being implemented in three steps. The first step is switching the registration process to be completely electronic at public clinics and hospitals. This will allow people to schedule a visit to a doctor remotely. The second step of implementation is creating medical records and using electronic prescriptions for each patient which will be consolidated and shared throughout the public medical sector. The third step will be in uniting the public medical sector with the private medical sector through the sharing of medical records and the introduction of EMIAS services.
In 2016, in a PricewaterhouseCoopers (PwC) study of healthcare informatization, “Cities Managing Data”, Moscow took a leading position and became the only one of the cities studied where a unified city polyclinic system (EMIAS) was fully implemented. In 2021, the EMIAS.Info mobile application was among the five winners of the international EHEALTHCARE LEADERSHIP Awards, receiving an honorary prize in the “Best Platform-oriented Application” category.
Modules
Executives
Artem Yermolaev - Minister of the Moscow Government, Head of the Moscow City IT Department
Vladimir Makarov - Deputy Head of the Moscow City IT Department, EMIAS chief designer.
References
External links
emias.info
Government of Moscow
Government programs
Healthcare in Russia
Health informatics
Health care software
Electronic health record software
Electronic prescribing | EMIAS | [
"Biology"
] | 539 | [
"Health informatics",
"Medical technology"
] |
62,631,267 | https://en.wikipedia.org/wiki/BiPhePhos | BiPhePhos is an organophosphorus compound that is used as a ligand in homogeneous catalysis. Classified as a diphosphite, BiPhePhos is derived from three 2,2'-biphenol groups, which constrain its shape in such a way as to confer high selectivity to derived catalysts. Originally described by workers at Union Carbide, it has become a standard ligand in hydroformylation.
See also
2,2'-Biphenylene phosphorochloridite (C12H8O2PCl) precursor to BiPhePhos.
References
Chelating agents
Organophosphites | BiPhePhos | [
"Chemistry"
] | 142 | [
"Chelating agents",
"Process chemicals"
] |
62,634,280 | https://en.wikipedia.org/wiki/Hydrocarbon%20poisoning | Hydrocarbon poisoning is either the swallowing or breathing in of hydrocarbons. Swallowing hydrocarbons may result in symptoms including coughing or vomiting. Breathing in hydrocarbons may result in low blood oxygen and shortness of breath. Complications may include confusion or seizures.
Hydrocarbons may include gasoline, mineral oil, or paint thinner. Treatment is supportive care. Efforts to empty the stomach are not recommended.
References
Hydrocarbons
Poisons
Toxicology | Hydrocarbon poisoning | [
"Chemistry",
"Environmental_science"
] | 90 | [
"Hydrocarbons",
"Toxicology",
"Toxicology stubs",
"Organic compounds",
"Poisons"
] |
62,638,447 | https://en.wikipedia.org/wiki/Medicinal%20Chemistry%20Research | Medicinal Chemistry Research is a peer-reviewed scientific journal of medicinal chemistry emphasizing the structure-activity relationships of biologically active compounds. It was founded in 1991 by Alfred Burger (University of Virginia), who also founded the Journal of Medicinal Chemistry. The journal is currently edited by Longqin Hu.
Editors in chief
Alfred Burger served as its first editor-in-chief before passing on the mantle to Richard Glennon (Virginia Commonwealth University). Stephen J. Cutler (University of South Carolina) then took over and served between 2002 and 2019. Longqin Hu (Rutgers University–New Brunswick) became editor in 2020.
Abstracting and indexing
The journal is abstracted and indexed in the following bibliographic databases:
References
External links
Medicinal chemistry journals
Academic journals established in 1991
Monthly journals
English-language journals
Springer Science+Business Media academic journals | Medicinal Chemistry Research | [
"Chemistry"
] | 171 | [
"Biochemistry stubs",
"Medicinal chemistry journals",
"Medicinal chemistry",
"Medicinal chemistry stubs"
] |
62,639,397 | https://en.wikipedia.org/wiki/Volume%20correction%20factor | In thermodynamics, the Volume Correction Factor (VCF), also known as Correction for the effect of Temperature on Liquid (CTL), is a standardized computed factor used to correct for the thermal expansion of fluids, primarily liquid hydrocarbons, at various temperatures and densities. It is typically a number between 0 and 2, rounded to five decimal places, which, when multiplied by the observed volume of a liquid, will return a "corrected" value standardized to a base temperature (usually 60 °F or 15 °C).
Conceptualization
In general, VCF / CTL values have an inverse relationship with observed temperature relative to the base temperature. That is, observed temperatures above 60 °F (or the base temperature used) typically correlate with a correction factor below "1", while temperatures below 60 °F correlate with a factor above "1". This behaviour follows from the kinetic theory of matter and the thermal expansion of matter, which state that as the temperature of a substance rises, so does the average kinetic energy of its molecules. As such, a rise in kinetic energy requires more space between the particles of a given substance, which leads to its physical expansion.
Conceptually, this makes sense when applying the VCF to observed volumes. Observed temperatures below the base temperature generate a factor above "1", indicating the corrected volume must increase to account for the contraction of the substance relative to the base temperature. The opposite is true for observed temperatures above the base temperature, generating factors below "1" to account for the expansion of the substance relative to the base temperature.
Exceptions
While the VCF is primarily used for liquid hydrocarbons, the theory and principles behind it apply to most liquids, with some exceptions. As a general principle, most liquid substances will contract in volume as temperature drops. However, certain substances, water for example, contain unique angular structures at the molecular level. As such, when these substances reach temperatures just above their freezing point, they begin to expand, since the angle of the bonds prevents the molecules from fitting tightly together, resulting in more empty space between the molecules in a solid state. Other substances which exhibit similar properties include silicon, bismuth, antimony and germanium.
While these are the exceptions to general principles of thermal expansion and contraction, they would seldom, if ever, be used in conjunction with VCF / CTL, as the correction factors are dependent upon specific constants, which are further dependent on liquid hydrocarbon classifications and densities.
Formula and usage
The formula for the Volume Correction Factor is commonly defined as:

VCF = exp[−α60 ⋅ ΔT ⋅ (1 + 0.8 ⋅ α60 ⋅ (ΔT + δ60))]

Where:
exp[x] refers to the mathematical constant e raised to the power of x,
ΔT refers to the observed temperature (T) minus the base temperature (Tb), in degrees Fahrenheit. When computing the VCF, Tb is commonly set to 60 °F,
δ60 refers to a small base temperature correction value applied when correcting to 60 °F,
α60 refers to the coefficient of thermal expansion at the base temperature. For a base temperature of 60 °F it is computed from the base density as α60 = K0/ρ60² + K1/ρ60 + K2,
ρ60 refers to the density [kg/m³] at the base temperature (60 °F) and 0 psig pressure,
K0, K1, and K2 refer to a specific set of constants, dependent upon the liquid's classification and density at 60 °F.
E.g., for crude oils K0, K1, and K2 = 341.0957, 0, and 0, respectively. See table below for typical values used.
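As a sketch of how these pieces fit together, the crude-oil case (K0 = 341.0957, K1 = K2 = 0) can be computed directly. The small δ60 correction is omitted for brevity, and the 850 kg/m³ density is an assumed example value, not taken from the text:

```python
import math

def vcf_crude(temp_f, rho60=850.0, k0=341.0957, k1=0.0, k2=0.0, base_f=60.0):
    """Sketch of a VCF/CTL computation for crude oil.

    rho60 is the density at 60 F in kg/m^3 (850.0 is an assumed example);
    the small delta-60 base-temperature correction is omitted for brevity.
    """
    alpha60 = k0 / rho60**2 + k1 / rho60 + k2  # coefficient of thermal expansion
    dt = temp_f - base_f                       # degrees F above/below base
    return math.exp(-alpha60 * dt * (1.0 + 0.8 * alpha60 * dt))

# Observed temperature above 60 F -> factor below 1 (the warm liquid has
# expanded); below 60 F -> factor above 1 (the cool liquid has contracted).
warm, cool = vcf_crude(80.0), vcf_crude(40.0)
assert warm < 1.0 < cool
```

This reproduces the inverse relationship described above: the factor falls below 1 as the observed temperature rises past the base temperature.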
Usage
In standard applications, computing the VCF or CTL requires the observed temperature of the product, and its API gravity at 60 °F. Once calculated, the corrected volume is the product of the VCF and the observed volume.
Since API gravity is an inverse measure of a liquid's density relative to that of water, it can be calculated by first dividing the liquid's density by the density of water at a base temperature (usually 60 °F) to compute Specific Gravity (SG), then converting the Specific Gravity to Degrees API as follows:

Degrees API = (141.5 / SG) − 131.5
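A minimal sketch of this density-to-API conversion in Python; the 999.0 kg/m³ water density used here is only an illustrative placeholder, since (as noted below) standards differ slightly on the exact value:

```python
def api_gravity(rho_kg_m3, rho_water=999.0):
    """Convert a liquid's density at 60 F into degrees API.

    rho_water is the density of pure water at 60 F in kg/m^3; 999.0 is an
    illustrative placeholder value, as standards differ slightly.
    """
    sg = rho_kg_m3 / rho_water      # specific gravity relative to water
    return 141.5 / sg - 131.5       # standard API gravity conversion

# Water itself (SG = 1) comes out at 10 degrees API by construction.
assert abs(api_gravity(999.0) - 10.0) < 1e-9
```

Note the inverse relationship: liquids lighter than water have SG < 1 and therefore an API gravity above 10.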
Traditionally, VCF / CTL are found by matching the observed temperature and API gravity within standardized books and tables published by the American Petroleum Institute. These methods are often more time-consuming than entering the values into a VCF calculator; however, due to the variance in methodology and computation of constants, the tables published by the American Petroleum Institute are preferred when dealing with the purchase and sale of crude oil and residual fuels.
Formulas for Reference
Density of pure water at 60 °F (15.56 °C)
Note: There is no universal agreement on the exact density of pure water at various temperatures, since each industry will often use a different standard. For example, the USGS says it is 0.99907 g/cm3. While the relative variance between values may be low, it is best to use the agreed-upon standard for the industry you are working in.
References
Thermodynamic properties | Volume correction factor | [
"Physics",
"Chemistry",
"Mathematics"
] | 988 | [
"Thermodynamic properties",
"Quantity",
"Thermodynamics",
"Physical quantities"
] |
41,179,389 | https://en.wikipedia.org/wiki/Chrysosporium%20keratinophilum | Chrysosporium keratinophilum is a mold that is closely related to the dermatophytic fungi (Family Arthrodermataceae) and is mainly found in soil and the coats of wild animals to break down keratin. Chrysosporium keratinophilum is one of the more commonly occurring species of the genus Chrysosporium in nature. It is easily detected due to its characteristic "light-bulb" shape and flat base. Chrysosporium keratinophilum is most commonly found in keratin-rich, dead materials such as feathers, skin scales, hair, and hooves. Although not identified as pathogenic, it is a regular contaminant of cutaneous specimens which leads to the common misinterpretation that this fungus is pathogenic.
Description
Chrysosporium keratinophilum colonies grow rapidly at 25 °C, reaching approximately 60–100 mm in 21 days. Colonies can be flat or folded, dry, powdery, or velvety with a white- or cream-coloured center. The colony surface is dotted with droplets of clear or brown exuded liquid. The hyphae are septate and the conidia are hyaline, broad-based and one-celled. The conidia are large, smooth to slightly rough-walled, sometimes slightly curved and occasionally septate. The conidia are broadly "light-bulb" shaped with an abruptly flattened smooth base. Colonies grow to about 30 mm in diameter in one week and are flat with a powdery to suede-like texture. The colony reverse is also cream-coloured. Chrysosporium keratinophilum produces abundant aleurioconidia that resemble the microconidia of dermatophytes; however, the conidia of C. keratinophilum are considerably larger. Chrysosporium keratinophilum has been associated with two closely related sexual states: Aphanoascus keratinophilus and Aphanoascus fulvescens.
Habitat and distribution
Chrysosporium keratinophilum is often referred to as a keratinophilic fungus in reference to its affinity for growth on keratin-rich non-living materials such as skin scales and hairs separated from the host. Chrysosporium keratinophilum produces a keratin-degrading enzyme that functions at 90 °C. Its process of digestion occurs in two stages, requiring keratin to be chemically altered to a structureless form before it is digested. The method of hair digestion is carried out with perforating bodies. Chrysosporium spp. are asexual states of fungi in the genera Aphanoascus, Nannizziopsis, and Uncinocarpus.
The fungus commonly grows on feathers, hooves, hair and other dead matter. It is rarely found on human skin and more commonly found in soil in temperate areas, plant material, dung and on birds. A study on keratinophylic fungi in the water sediments of India, by Katiyar and Kushwaha, found C. keratinophilum in sediments of catch basins and sewage sludge in India and Poland. Chrysosporium keratinophilum is associated to mud sludge structure, high humidity, volatile solids, low carbon nitrogen ratios and tolerance to heavy metals. Together, these give C. keratinophilum a high long-term survival probability in superficial water which may present an exposure risk, especially to people in India who bathe in these waters. Apart from inhabiting water sediments, a study in Egypt identified and isolated the teleomorph of C. keratinophilium, Aphanoascus fulvescens, in half of samples gathered from floor dusts in university student housing, demonstrating its regularity in indoor environments. Similarly, Bahkali and Parvez found C. keratinophilum to be widespread mold in house dust from homes in Saudi Arabia. In a study of 29 sandpits from kindergarten schools and public parks in the West Bank of Jordan, Shtayeh found that over half of the fungal isolates from these materials contained fungi known to cause disease. Amongst the non-pathogenic fungi found, Chrysosporium keratinophilum was the most common dermatophyte relative.
A study looking at species diversity of keratin-degrading fungi in different soil types by Bohacz and Korniłłowicz-Kowalska, determined the most frequently isolated species was C. keratinophilum. Together with its teleomorph, Aphanoascus fulvescens, it constituted nearly half of all isolations. The frequency of this fungus was positively correlated with the content of humus, nitrogen, CaCO3 and phosphorus in the soils, and the fungus demonstrated high tolerance for pH (e.g., from pH 4.5–9.5). Chrysosporium keratinophilum accounted for nearly two thirds of isolations of keratinophilic fungi from phaeozem (the upper-layer, humus-rich soil horizon) and over half of keratinophilic fungi from cambisol. Increased populations of C. keratinophilum were found at higher pH.
Isolation
Generally, the hair-bait technique has been used to selectively isolate keratinophilic fungi from soil. However, because of the poor keratinolytic activity of Chrysosporium spp., some of these fungi are not adequately isolated from soils by using this conventional technique. Therefore, by utilizing the higher temperature tolerance of some Chrysosporium spp, a selective technique to isolate C. keratinophilum, C. indicum, and C. tropicum has been developed. By implementing a pre-incubation treatment of keratin baited soil samples at 38 °C, the fast-growing, competitive and thermosensitive strains are eliminated, thereby reducing the competition and allowing C. keratinophilum and other thermotolerant species to continue to grow.
Antagonistic activity
In a study by Singh and co-workers, eighteen fungi were isolated from soil and tested for their antagonistic interactions. The maximum inhibition of Microsporum equinum, M. fulvum, M. gypseum and M. racemosum was caused by multiple fungi, including C. keratinophilum. On the other hand, staling products of C. lucknowense accelerated the growth of many fungi, including C. keratinophilum. Another study tested C. keratinophilum for its anti-dermatophyte activity against Trichophyton mentagrophytes and Epidermophyton floccosum. In their study, C. keratinophilum inhibited T. rubrum, T. tonsurans and T. mentagrophytes, but not M. gypseum and Microsporum nanum.
Pathogenicity
Members of the genus Chrysosporium have weak pathogenic potential, with human and animal infection reported for only a few taxa. Experimental studies have shown inoculation of this fungus on guinea pig skin to produce erythematous scaling lesions which disappear after 3–5 weeks; however, no apparent invasion of the hair shaft occurs. In white mice, after inoculation, granulomas with necrotic centers can be observed, although conidia of the fungus appear to remain intact.
Chrysosporium keratinophilum is one of several soil organisms that is occasionally isolated from skin and nails. Isolation of this species from clinical specimens is generally from human onychomycoses, the mycotic superficial invasion of keratinized tissue of the nail plate. In practice, C. keratinophilum is interpreted to be an infrequent contaminant of keratinaceous clinical specimens, such as hair, skin and nails, with no clinical significance. A pathogenic role for C. keratinophilum is unlikely; however, its ability to remain viable for weeks on skin may suggest pathogenic potential. A critical factor limiting its pathogenic potential is its inability to grow at 37 °C, the human body temperature, which makes infection of humans unlikely.
The first report of onychomycosis caused by C. keratinophilum in animals was reported by Pin and his colleagues. The seven Bennett's wallabies (Macropus rufogriseus rufogriseus) they observed had swollen, abnormal claws from which Chrysosporium keratinophilum was repeatedly identified in culture, suggesting that the fungus may factor in disease. In another experimental study, C. keratinophilum showed pathogenic potential in the white mouse, remaining viable in the peritoneal cavity for up to two months. It is possible that C. keratinophilum can cause more generalized infections in a weakened mammalian host.
Biotechnological applications
Leather tanning
Chrysosporium keratinophilum produces a thermostable, keratinolytic alkaline protease when grown in medium containing keratin. When grown in a medium that lacked keratin, it had no enzymatic function indicating the inducibility of the enzyme. The keratinolytic protease had maximum activity at pH 9.0 and a temperature maximum of 90 °C, whereas many other fungi, such as T. mentagraphytes, Microsporum gypseum, T. rubrum, had maximum activity below pH 9.0. Alkaline proteolytic keratinases are important for leather tanning as a ready means of removing hair from hides.
Bioremediation
Waste removal from slaughterhouses is sometimes ploughed into nearby fields becoming a potential health risk since controlled keratin decomposition by anaerobic bacteria produces large quantities of hydrogen sulfide and ammonia. Current studies are demonstrating the usefulness of the proteases produced by C. keratinophilum in bioremediation of this keratinic waste.
Caffeine degradation
In another study comparing caffeine degradation by four different fungi, Nayak and her colleagues found that C. keratinophilum produces the highest rate of caffeine degradation both in the presence and absence of a nitrogen source. This finding suggests that C. keratinophilum may have commercial use for the decaffeination of coffee pulp, and in the process, it could provide a nutrient supplement for animal feed or improved substrates for bioethanol production.
References
External links
DoctorFungus
Onygenales
Fungus species | Chrysosporium keratinophilum | [
"Biology"
] | 2,235 | [
"Fungi",
"Fungus species"
] |
41,180,634 | https://en.wikipedia.org/wiki/Induction%20of%20regular%20languages | In computational learning theory, induction of regular languages refers to the task of learning a formal description (e.g. grammar) of a regular language from a given set of example strings. Although E. Mark Gold has shown that not every regular language can be learned this way (see language identification in the limit), approaches have been investigated for a variety of subclasses. They are sketched in this article. For learning of more general grammars, see Grammar induction.
Definitions
A regular language is defined as a (finite or infinite) set of strings that can be described by one of the mathematical formalisms called "finite automaton", "regular grammar", or "regular expression", all of which have the same expressive power. Since the latter formalism leads to shortest notations, it shall be introduced and used here. Given a set Σ of symbols (a.k.a. alphabet), a regular expression can be any of
∅ (denoting the empty set of strings),
ε (denoting the singleton set containing just the empty string),
a (where a is any character in Σ; denoting the singleton set just containing the single-character string a),
r + s (where r and s are, in turn, simpler regular expressions; denoting their set's union)
r ⋅ s (denoting the set of all possible concatenations of strings from r's and s's set),
r + (denoting the set of n-fold repetitions of strings from r's set, for any n ≥ 1), or
r * (similarly denoting the set of n-fold repetitions, but also including the empty string, seen as 0-fold repetition).
For example, using Σ = {0,1}, the regular expression (0+1+ε)⋅(0+1) denotes the set of all binary numbers with one or two digits (leading zero allowed), while 1⋅(0+1)*⋅0 denotes the (infinite) set of all even binary numbers (no leading zeroes).
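These two examples can be checked with Python's re module, translating the mathematical notation into regex syntax (⋅ becomes juxtaposition, the alternation + becomes |):

```python
import re

# (0+1+eps)(0+1): one- or two-digit binary strings, leading zero allowed.
one_or_two_digits = re.compile(r'(0|1|)(0|1)\Z')
# 1(0+1)*0: even binary numbers without leading zeroes.
even_binary = re.compile(r'1(0|1)*0\Z')

assert all(one_or_two_digits.match(s) for s in ['0', '1', '00', '10', '11'])
assert not one_or_two_digits.match('100')      # three digits: rejected
assert all(even_binary.match(s) for s in ['10', '100', '110'])
assert not even_binary.match('101')            # odd number
assert not even_binary.match('010')            # leading zero
```

Here the empty alternative in `(0|1|)` plays the role of ε, and `\Z` anchors the match at the end of the string so the whole string must belong to the denoted set.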
Given a set of strings (also called "positive examples"), the task of regular language induction is to come up with a regular expression that denotes a set containing all of them.
As an example, given {1, 10, 100}, a "natural" description could be the regular expression 1⋅0*, corresponding to the informal characterization "a 1 followed by arbitrarily many (maybe even none) 0's".
However, (0+1)* and 1+(1⋅0)+(1⋅0⋅0) are two other regular expressions, denoting the largest (assuming Σ = {0,1}) and the smallest set containing the given strings; they are called the trivial overgeneralization and undergeneralization, respectively.
Some approaches work in an extended setting where also a set of "negative example" strings is given; then, a regular expression is to be found that generates all of the positive, but none of the negative examples.
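The example above can be checked mechanically. A minimal sketch using Python's re module, translating the article's union operator + into | (the pattern names are ours):

```python
import re

positives = ["1", "10", "100"]

natural = re.compile(r"10*\Z")              # 1 ⋅ 0*
overgeneral = re.compile(r"[01]*\Z")        # (0+1)* : the largest set
undergeneral = re.compile(r"(1|10|100)\Z")  # 1 + 1⋅0 + 1⋅0⋅0 : the smallest set

# All three expressions accept every positive example:
for w in positives:
    assert natural.match(w) and overgeneral.match(w) and undergeneral.match(w)
```

The negative example 0 from the lattice section separates the hypotheses: the trivial overgeneralization accepts it, while 1⋅0* does not.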
Lattice of automata
Dupont et al. have shown that the set of all structurally complete finite automata
generating a given input set of example strings forms a lattice, with the trivial undergeneralized and the trivial overgeneralized automaton as bottom and top element, respectively.
Each member of this lattice can be obtained by factoring the undergeneralized automaton by an appropriate equivalence relation.
For the above example string set {1, 10, 100}, the picture shows at its bottom the undergeneralized automaton Aa,b,c,d, consisting of the states a, b, c, and d. On the state set {a,b,c,d}, a total of 15 equivalence relations exist, forming a lattice. Mapping each equivalence E to the corresponding quotient automaton language L(Aa,b,c,d / E) yields the partially ordered set shown in the picture.
Each node's language is denoted by a regular expression. The language may be recognized by quotient automata w.r.t. different equivalence relations, all of which are shown below the node. An arrow between two nodes indicates that the lower node's language is a proper subset of the higher node's.
If both positive and negative example strings are given, Dupont et al. build the lattice from the positive examples, and then investigate the separation border between automata that generate some negative example and those that do not.
Most interesting are those automata immediately below the border.
In the picture, separation borders are shown for the negative example strings 11, 1001, 101, and 0.
Coste and Nicolas present their own search method within the lattice, which they relate to Mitchell's version space paradigm.
To find the separation border, they use a graph coloring algorithm on the state inequality relation induced by the negative examples.
Later, they investigate several ordering relations on the set of all possible state fusions.
Kudo and Shimbo use the representation by automaton factorizations to give a unified framework for the following approaches (sketched below):
k-reversible languages and the "tail clustering" follow-up approach,
Successor automata and the predecessor-successor method, and
pumping-based approaches (whose integration into this framework has been challenged by Luzeaux, however).
Each of these approaches is shown to correspond to a particular kind of equivalence relations used for factorization.
Approaches
k-reversible languages
Angluin considers so-called "k-reversible" regular automata, that is, deterministic automata in which each state can be reached from at most one state by following a transition chain of length k.
Formally, if Σ, Q, and δ denote the input alphabet, the state set, and the transition function of an automaton A, respectively, then A is called k-reversible if: ∀a0, ..., ak ∈ Σ ∀s1, s2 ∈ Q: δ*(s1, a0...ak) = δ*(s2, a0...ak) ⇒ s1 = s2, where δ* means the homomorphic extension of δ to arbitrary words.
Angluin gives a cubic algorithm for learning the smallest k-reversible language from a given set of input words; for k = 0, the algorithm has even almost linear complexity.
The required state uniqueness after k + 1 given symbols forces unifying automaton states, thus leading to a proper generalization different from the trivial undergeneralized automaton.
This algorithm has been used to learn simple parts of English syntax;
later, an incremental version has been provided.
Another approach based on k-reversible automata is the tail clustering method.
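For small automata, the k-reversibility condition quoted above can be checked by brute force. A sketch (it implements only the transition condition as formalized above; Angluin's full definition also constrains final states):

```python
from itertools import product

def is_k_reversible(delta, states, alphabet, k):
    """Check that no two distinct states reach the same state via any
    transition chain of k+1 symbols (the condition quoted in the text).

    delta: dict mapping (state, symbol) -> state (a partial function).
    """
    def run(s, word):
        # Follow the chain of transitions; None if some step is undefined.
        for a in word:
            if (s, a) not in delta:
                return None
            s = delta[(s, a)]
        return s

    for word in product(alphabet, repeat=k + 1):
        seen = {}
        for s in states:
            t = run(s, word)
            if t is None:
                continue
            if t in seen and seen[t] != s:
                return False   # two distinct states collide on this chain
            seen[t] = s
    return True
```

For instance, the two-state automaton for 1⋅0* is 0-reversible under this condition, while any automaton where two states share a successor on the same symbol is not.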
Successor automata
From a given set of input strings, Vernadat and Richetin build a so-called successor automaton, consisting of one state for each distinct character and a transition between each two adjacent characters' states.
For example, the singleton input set {} leads to an automaton corresponding to the regular expression (a+⋅b+)*.
An extension of this approach is the predecessor-successor method which generalizes each character repetition immediately to a Kleene + and then includes for each character the set of its possible predecessors in its state.
Successor automata can learn exactly the class of local languages.
Since each regular language is the homomorphic image of a local language, grammars from the former class can be learned by lifting, if an appropriate (depending on the intended application) homomorphism is provided.
In particular, there is such a homomorphism for the class of languages learnable by the predecessor-successor method.
The learnability of local languages can be reduced to that of k-reversible languages.
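The successor-automaton construction described above is easy to sketch. Since the example input string is elided in the text, we use the hypothetical input "aabbaabb", which yields an automaton for (a+⋅b+)+ (the text's (a+⋅b+)* minus the empty word, due to the start/final-state constraints):

```python
def successor_automaton(words):
    """One state per distinct character; a transition for every pair of
    adjacent characters seen in some input word. Start states are the
    first characters of the words, accepting states the last ones."""
    trans, starts, finals = set(), set(), set()
    for w in words:
        starts.add(w[0])
        finals.add(w[-1])
        trans.update(zip(w, w[1:]))
    return trans, starts, finals

def accepts(auto, w):
    """Membership test for a nonempty word w."""
    trans, starts, finals = auto
    return (w[0] in starts and w[-1] in finals
            and all(p in trans for p in zip(w, w[1:])))
```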
Early approaches
Chomsky and Miller (1957)
used the pumping lemma: they guess a part v of an input string uvw and try to build a corresponding cycle into the automaton to be learned; using membership queries they ask, for appropriate k, which of the strings uw, uvvw, uvvvw, ..., uv^k w also belong to the language to be learned, thereby refining the structure of their automaton. In 1959, Solomonoff generalized this approach to context-free languages, which also obey a pumping lemma.
Cover automata
Câmpeanu et al. learn a finite automaton as a compact representation of a large finite language.
Given such a language F, they search for a so-called cover automaton A such that its language L(A) covers F in the following sense: L(A) ∩ Σ≤l = F, where l is the length of the longest string in F, and Σ≤l denotes the set of all strings not longer than l.
If such a cover automaton exists, F is uniquely determined by A and l.
For example, F = {ad, read, reread} has l = 6 and a cover automaton corresponding to the regular expression (r⋅e)*⋅a⋅d.
For two strings x and y, Câmpeanu et al. define x ~ y if xz ∈ F ⇔ yz ∈ F for all strings z of a length such that both xz and yz are not longer than l. Based on this relation, whose lack of transitivity causes considerable technical problems, they give an O(n^4) algorithm to construct from F a cover automaton A of minimal state count.
Moreover, for union, intersection, and difference of two finite languages they provide corresponding operations on their cover automata.
Păun et al. improve the time complexity to O(n^2).
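The similarity relation x ~ y can be decided by enumerating all relevant suffixes z, since only finitely many matter. A brute-force sketch (exponential in l, unlike the polynomial algorithms above):

```python
from itertools import product

def similar(x, y, F, sigma):
    """x ~ y iff (x+z in F) <=> (y+z in F) for all z with |xz|, |yz| <= l."""
    l = max(map(len, F))             # length of the longest string in F
    zmax = l - max(len(x), len(y))   # longer suffixes are irrelevant
    for n in range(zmax + 1):
        for z in map("".join, product(sigma, repeat=n)):
            if ((x + z) in F) != ((y + z) in F):
                return False
    return True
```

With F = {ad, read, reread}, the prefixes "ad" and "read" are similar (they lead to the same cover-automaton state), while "a" and "ad" are not.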
Residual automata
For a set S of strings and a string u, the Brzozowski derivative u−1S is defined as the set of all rest-strings obtainable from a string in S by cutting off its prefix u (if possible), formally: u−1S = { v ∈ Σ* : uv ∈ S }, cf. picture.
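For a finite language, the Brzozowski derivative from this definition is one line (reusing the example F from the cover-automaton section):

```python
def derivative(u, S):
    """Brzozowski derivative u^{-1}S = { v : uv in S } of a finite language S."""
    return {s[len(u):] for s in S if s.startswith(u)}

res = derivative("re", {"ad", "read", "reread"})  # {"ad", "read"}
```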
Denis et al. define a residual automaton to be a nondeterministic finite automaton A where each state q corresponds to a Brzozowski derivative of its accepted language L(A), formally: ∀q∈Q ∃u∈Σ*: L(A,q) = u−1L(A), where L(A,q) denotes the language accepted from q as start state.
They show that each regular language is generated by a uniquely determined minimal residual automaton. Its states are ∪-indecomposable Brzozowski derivatives, and it may be exponentially smaller than the minimal deterministic automaton.
Moreover, they show that residual automata for regular languages cannot be learned in polynomial time, even assuming optimal sample inputs.
They give a learning algorithm for residual automata and prove that it learns the automaton from its characteristic sample of positive and negative input strings.
Query learning
Regular languages cannot be learned in polynomial time
using only membership queries or using only equivalence queries.
However, Angluin has shown that regular languages can be learned in polynomial time
using membership queries and equivalence queries, and has provided a learning algorithm
termed L* that does exactly that.
The L* algorithm was later generalised to output an NFA (nondeterministic finite automaton) rather than a DFA (deterministic finite automaton), via an algorithm termed NL*.
This result was further generalised, and an algorithm termed AL*, which outputs an AFA (alternating finite automaton), was devised. NFAs can be exponentially more succinct than DFAs, and AFAs can be exponentially more succinct than NFAs and doubly-exponentially more succinct than DFAs. These generalizations are significant for automata theory and formal language learning because they show that more expressive automaton models, which can represent languages more concisely and capture more complex patterns than traditional DFAs, can also be learned efficiently.
Reduced regular expressions
Brill defines a reduced regular expression to be any of
a (where a is any character in Σ; denoting the singleton set just containing the single-character string a),
¬a (denoting any other single character in Σ except a),
• (denoting any single character in Σ)
a*, (¬a)*, or •* (denoting arbitrarily many, possibly zero, repetitions of characters from the set of a, ¬a, or •, respectively), or
r ⋅ s (where r and s are, in turn, simpler reduced regular expressions; denoting the set of all possible concatenations of strings from r's and s's set).
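Reduced regular expressions translate directly into ordinary regular expressions, since each atom is just a character class. A sketch using a hypothetical tuple encoding (Brill's own notation differs):

```python
import re

def compile_reduced(atoms):
    """Translate a reduced regular expression into a Python regex.

    atoms: list of (spec, starred) pairs, where spec is ('char', c),
    ('not', c), or ('any',); starred marks a Kleene-starred atom.
    (A hypothetical encoding, for illustration only.)
    """
    parts = []
    for spec, starred in atoms:
        if spec[0] == 'char':
            p = re.escape(spec[1])
        elif spec[0] == 'not':
            p = '[^' + re.escape(spec[1]) + ']'
        else:                       # 'any' corresponds to the bullet atom
            p = '.'
        parts.append(p + ('*' if starred else ''))
    return re.compile(''.join(parts) + r'\Z')

# a ⋅ (¬b)* ⋅ • : an 'a', then any run avoiding 'b', then any one character
pat = compile_reduced([(('char', 'a'), False), (('not', 'b'), True), (('any',), False)])
```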
Given an input set of strings, he builds step by step a tree with each branch labelled by a reduced regular expression accepting a prefix of some input strings, and each node labelled with the set of lengths of accepted prefixes.
He aims at learning correction rules for English spelling errors,
rather than at theoretical considerations about learnability of language classes.
Consequently, he uses heuristics to prune the tree-buildup, leading to a considerable improvement in run time.
Applications
Finding common patterns in DNA and RNA structure descriptions (Bioinformatics)
Modelling natural language acquisition by humans
Learning of structural descriptions from structured example documents, in particular Document Type Definitions (DTD) from SGML documents
Learning the structure of music pieces
Obtaining compact representations of finite languages
Classifying and retrieving documents
Generating context-dependent correction rules for English grammatical errors
Notes
References
Formal languages
Computational learning theory | Induction of regular languages | [
"Mathematics"
] | 2,815 | [
"Formal languages",
"Mathematical logic"
] |
41,184,901 | https://en.wikipedia.org/wiki/Garfield%20Thomas%20Water%20Tunnel | The Garfield Thomas Water Tunnel is one of the U.S. Navy's principal experimental hydrodynamic research facilities and is operated by the Penn State Applied Research Laboratory. The facility was completed and entered operation in 1949. The facility is named after Lieutenant W. Garfield Thomas Jr., a Penn State journalism graduate who was killed in World War II. For a long time, the Garfield Thomas Water Tunnel was the largest circulating water tunnel in the world. It has been declared a historic mechanical engineering landmark by the American Society of Mechanical Engineers.
Today, in addition to its many Navy projects, the facility's tunnel-based research has expanded into pumps for the Space Shuttle, advanced propulsors for ships, heating and cooling systems, artificial heart valves, vacuum cleaner fans, and other pump- and propulsor-related products.
History
After the end of World War II, the US military started investing heavily in higher education nationwide. At the same time, Harvard terminated its Underwater Sound Laboratory (USL), which had invented the first acoustic homing torpedo (FIDO); consequently, Penn State hired Eric Walker, USL's assistant director, to head its electrical engineering department, and the Navy transferred USL's torpedo division to Penn State, where it became the Ordnance Research Laboratory (ORL). The ORL eventually became the Applied Research Laboratory.
The Garfield Thomas Water Tunnel was built at Penn State in cooperation with the ORL for further torpedo research. Construction was completed on October 7, 1949, and the tunnel began operating six months later. Since then, the facility's research has expanded into viscosity, sound, wave, and wind studies.
In 1992, the facility underwent a complete overhaul.
Capabilities
The facility consists of a number of closed circuit, closed jet and open jet facilities.
Water Tunnels
The facility operates four water tunnels.
Garfield Thomas Water Tunnel
The Garfield Thomas Water Tunnel is the facility's largest water tunnel. The 100-foot-long, 32-foot-high, 100,000-gallon tunnel is a closed-circuit, closed-jet design. The system is powered by a 1,491 kW (2,000 hp) pump with a 4-blade adjustable-pitch impeller and can produce a maximum water velocity of 18.29 m/s (40.91 mph). The system is capable of producing pressures between 20.7 and 413.7 kPa.
The tunnel is equipped with an array of instruments including propeller dynamometers, a five-hole pressure probe, Pitot probes, lasers, pressure sensors, hydrophones, a planar motion mechanism (PMM), force balances, accelerometers, and acoustic arrays.
Smaller Water Tunnels
The facility operates two additional smaller water tunnels, with diameters of 12 inches and 6 inches. Both are closed-circuit, closed-jet designs. The 12-inch tunnel is a 150 hp (111.8 kW) system capable of producing a maximum water velocity of 24.38 m/s (54.53 mph). The 6-inch tunnel is a 25 hp (18.64 kW) system that can deliver a maximum velocity of 21.34 m/s (47.74 mph).
Both tunnels are equipped with lasers, pressure sensors, pressure transducers, and hydrophones.
Ultra-High Speed Cavitation Tunnel
The facility also has a 1.5 inch closed-circuit, closed-jet cavitation tunnel capable of producing a maximum velocity of 83.8 m/s (187 mph). The stainless steel, 75 hp (55.9 kW) tunnel supports pressures as high as 41.4 kPa and temperatures of 16 °C to 176 °C.
Other facilities
In addition to the water tunnels, the facility operates an array of wind tunnels, glycerin tunnels, and an anechoic chamber for use in many physics problems. The Boundary Layer Research Facility (BLRF) operates a 12-inch turbulent pipe flow of glycerine. Additionally, the facility operates a 20 hp (14.91 kW), open-jet, 1,750 rpm axial-flow fan with a 36.58 m/s (81.83 mph) maximum velocity, used for basic engineering research in turbomachinery blading. Another tunnel, 2.75 meters in diameter with a 100 hp (74.6 kW) closed circuit, is used specifically for research on the viscous sublayer and for large-scale modeling of turbulent flow of fluids next to a wall.
See also
Pennsylvania State University Applied Research Laboratory
Water tunnel (hydrodynamic)
Ship model basin
Pennsylvania State University
List of historic mechanical engineering landmarks
References
External links
ARL Homepage
Ship design
Model boats
Pennsylvania State University campus
United States Navy installations
1949 establishments in Pennsylvania
Landmarks in Pennsylvania | Garfield Thomas Water Tunnel | [
"Physics"
] | 947 | [
"Scale modeling"
] |
41,188,349 | https://en.wikipedia.org/wiki/Electrostatic%20spray%20ionization | Electrostatic spray ionization (ESTASI) is an ambient ionization method for mass spectrometry (MS) analysis of samples located on a flat or porous surface, or inside a microchannel. It was developed in 2011 by Professor Hubert H. Girault’s group at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. In a typical ESTASI process, a droplet of a protic solvent containing analytes is deposited on a sample area of interest which itself is mounted to an insulating substrate. Under this substrate and right below the droplet, an electrode is placed and connected with a pulsed high voltage (HV) to electrostatically charge the droplet during pulsing. When the electrostatic pressure is larger than the surface tension, droplets and ions are sprayed. ESTASI is a contactless process based on capacitive coupling. One advantage of ESTASI is, that the electrode and sample droplet act contact-less avoiding thereby any oxidation or reduction of the sample compounds at the electrode surface, which often happens during standard electrospray ionization (ESI). ESTASI is a powerful new ambient ionization technique that has already found many applications in the detection of different analytes, such as organic molecules, peptides and proteins with molecule weight up to 70 kDa. Furthermore, it was used to couple MS with various separation techniques including capillary electrophoresis and gel isoelectric focusing, and it was successfully applied under atmospheric pressure to the direct analysis of samples with only few preparation steps.
Principle of operation
ESTASI is a contactless electrospray ionization (ESI) method, like dielectric barrier ESI and induced or inductive ESI. In ESTASI, an electrode is placed close to a sample only separated by an insulation layer. The sample is covered with a droplet of solution (nano-liters to micro-liters); and a square wave HV is applied between the electrode and the mass spectrometer inlet capillary. Sample ionization occurs and ions are collected for mass spectrometry analysis. The square wave HV can be generated by amplifying the square wave voltage of a function generator. Alternatively, it can be produced by an electric circuit comprising one direct current HV power source and two switches that connect the electrode either to the HV source or to the ground.
When a positive HV is applied to the electrode with respect to the mass spectrometer, a spray of cations is generated out of the droplet containing the sample analytes because of the strong electric field between the electrode and the mass spectrometer. Excess anions stay inside the droplet on the substrate and are subsequently sprayed when the electrode is grounded. In this way, both cations and anions are measured by mass spectrometry in one experiment.
Applications
The ESTASI method can be applied to a wide range of geometries, e.g. samples in a capillary, in a disposable pipette tip, in a polymer microchannel, and in micro- to nano-liter droplets on a polymer or porous plate. With the last geometry, molecules on a surface can be directly ionized for MS detection by simply adding a droplet of buffer that can dissolve the target molecule. Currently developed applications of ESTASI mainly include:
Interfacing capillary electrophoresis (CE) and MS analysis
Fractions of proteins or peptides from capillary electrophoresis were collected on an insulating plastic slide. Dry sample spots were formed by evaporating all solvents and then analyzed by ESTASI MS, where droplets of acidic solution (1% acetic acid in water) were deposited on the dry sample spots to dissolve analytes from the sample. This was the first application of the method and an example of direct analysis of samples on a flat surface.
Interfacing gel electrophoresis and MS analysis
Samples inside porous matrices can also be analyzed by ESTASI MS. During gel electrophoresis, peptides or proteins can be fractionated into different bands inside a gel. The gel is then placed on an insulating plastic plate for ESTASI MS analysis. The extraction of proteins/peptides from the gel for ESTASI MS analysis is realized by depositing a droplet of highly acidic solution and applying an HV. The protons migrate into the gel to protonate the peptides/proteins inside the gel band, and the cations are then extracted by the HV into the acidic droplet for ESTASI MS analysis.
Ambient ionization MS for direct analysis of samples with minimal preparation
ESTASI of many sample types can be carried out without specific sample pre-treatment. One example is the fast analysis of perfume, where the perfume is directly sprayed on a smelling paper to form micro- to nano-liter droplets, from which the ESTASI is generated.
References
External links
WO 2013102670 A1 - Patent "Electrostatic spray ionization method"
Sniffing out fake perfumes - Sep 25, 2013 by Nik Papageorgiou, phys.org
Ion source | Electrostatic spray ionization | [
"Physics"
] | 1,042 | [
"Ion source",
"Mass spectrometry",
"Spectrum (physical sciences)"
] |
41,189,850 | https://en.wikipedia.org/wiki/C4H6Cl2 | {{DISPLAYTITLE:C4H6Cl2}}
The molecular formula C4H6Cl2 (molar mass: 124.996 g/mol, exact mass: 123.9847 u) may refer to:
1,1-Bis(chloromethyl)ethylene
1,4-Dichlorobut-2-ene
Molecular formulas | C4H6Cl2 | [
"Physics",
"Chemistry"
] | 83 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
48,901,154 | https://en.wikipedia.org/wiki/Double%20Fourier%20sphere%20method | In mathematics, the double Fourier sphere (DFS) method is a technique that transforms a function defined on the surface of the sphere to a function defined on a rectangular domain while preserving periodicity in both the longitude and latitude directions.
Introduction
First, a function f on the sphere is written as f(λ, θ) using spherical coordinates, i.e., x = cos λ sin θ, y = sin λ sin θ, z = cos θ, with (λ, θ) ∈ [−π, π] × [0, π].
The function f(λ, θ) is 2π-periodic in λ, but not periodic in θ. The periodicity in the latitude direction has been lost. To recover it, the function is "doubled up" and a related function g on [−π, π] × [−π, π] is defined as
g(λ, θ) = f(λ, θ) for θ ∈ [0, π] and g(λ, θ) = f(λ ± π, −θ) for θ ∈ [−π, 0], where λ ± π is taken so that the result lies in [−π, π]. The new function g is 2π-periodic in λ and θ, and is constant along the lines θ = 0 and θ = ±π, corresponding to the poles.
The doubled function g(λ, θ) can be expanded into a double Fourier series, g(λ, θ) ≈ Σ_j Σ_k a_jk e^{ijλ} e^{ikθ}.
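The doubling can be sketched pointwise. Assuming the colatitude convention x = cos λ sin θ, y = sin λ sin θ, z = cos θ (index and domain conventions vary by author), the pair (λ + π, −θ) labels the same point of the sphere as (λ, θ), so the doubled function simply evaluates f there:

```python
import math

def dfs_extend(f, lam, th):
    """Evaluate the doubled function on [-pi, pi] x [-pi, pi].

    f(lam, th) is given for colatitude th in [0, pi]; for th < 0, the
    pair (lam, th) labels the same sphere point as (lam + pi, -th):
    the coordinates cos(lam)sin(th), sin(lam)sin(th), cos(th) all agree.
    """
    if th < 0:
        lam, th = lam + math.pi, -th  # f is 2*pi-periodic in lam, so no wrapping needed
    return f(lam, th)
```

Any f built from trigonometric functions of λ is automatically 2π-periodic in λ, so the shifted longitude needs no explicit wrapping.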
History
The DFS method was proposed by Merilees and developed further by Steven Orszag. The DFS method has been the subject of relatively few investigations since (a notable exception is Fornberg's work), perhaps due to the dominance of spherical harmonic expansions. Over the last fifteen years it has begun to be used for the computation of gravitational fields near black holes and for novel space-time spectral analysis.
References
Black holes
Boundary value problems
Coordinate systems
Variants of random walks
Equations of astronomy | Double Fourier sphere method | [
"Physics",
"Astronomy",
"Mathematics"
] | 247 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Coordinate systems",
"Concepts in astronomy",
"Equations of astronomy",
"Unsolved problems in physics",
"Astronomy stubs",
"Astrophysics",
"Stellar astronomy stubs",
"Astrophysics stubs",
"Density",
"Relativity stubs",
"Theory of... |
42,610,100 | https://en.wikipedia.org/wiki/Part%20program | The part program is a sequence of instruction that describe the work that is to be done to a part. Typically these instructions are generated in Computer-aided manufacturing software and are then fed into the computer numerical control (CNC) software on the machines, such as drills, lathes, mills, grinders, routers, that are performing work on the part. The CNC computer then translates the set of instructions into a standardized format of G-code and M-code commands and follow the instructions in the order they are written left to right or top to bottom.
When multiple repetitive operations are needed on a large number of parts, canned cycles can be used to reduce the number of operation blocks in a part program. In some cases a part might need to move between multiple machines and have multiple operations performed on it to generate the geometry that is needed.

An example of the order of operations seen in a typical program:
Program start
Load selected tool
Turn spindle on
Turn coolant on
G00 Rapid to starting position above part
All machining operations to part
Turn coolant off
Turn spindle off
Move to safe position
End program
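The skeleton above can be emitted programmatically. A hedged sketch in Python; the specific G/M codes shown (M06 tool change, M03 spindle on, M08/M09 coolant on/off, M30 program end) are typical of Fanuc-style controllers, but dialects vary by machine, and the function name and coordinate values here are ours:

```python
def program_skeleton(tool=1, spindle_rpm=1200):
    """Return the lines of a minimal milling part-program skeleton."""
    return [
        "%",                      # program start marker
        f"T{tool} M06",           # load selected tool
        f"S{spindle_rpm} M03",    # turn spindle on (clockwise)
        "M08",                    # turn coolant on
        "G00 X0 Y0 Z25.0",        # rapid to starting position above part
        "(... machining operations to part ...)",
        "M09",                    # turn coolant off
        "M05",                    # turn spindle off
        "G00 Z100.0",             # move to safe position
        "M30",                    # end program
    ]

print("\n".join(program_skeleton()))
```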
Types of operations
Each type of CNC machine has its own names for the operations it performs on parts to achieve the desired geometry, such as drilling or cutting threads, but performs these actions differently. The choice between a mill and a lathe comes down to the geometry of the part being made, how the machine can hold onto the part, and which operation is needed to achieve that geometry. If the part is very complicated, it may go through multiple part programs to achieve the desired results.
CNC Mill
Contour milling
Milling pockets
Cutting slots
Chamfering edges
Thread milling
Drilling holes
Tapping holes
CNC Lathe
Rough facing surfaces
Finish facing surfaces
Face drilling
Cross drilling
Face contouring
Cutting threads
Cutting groves
Cutting parts off
See also
G-code
Computer-aided manufacturing
Computer-aided design
Canned cycle
Computer-aided technologies
References
FUNDAMENTALS OF PART PROGRAMMING
Computer-aided engineering | Part program | [
"Engineering"
] | 403 | [
"Construction",
"Industrial engineering",
"Computer-aided engineering"
] |
42,610,193 | https://en.wikipedia.org/wiki/Permeability%20of%20soils | A number of factors affect the permeability of soils, from particle size, impurities in the water, void ratio, the degree of saturation, and adsorbed water, to entrapped air and organic material.
Background
Soil aeration maintains oxygen levels in the plants' root zone, needed for microbial and root respiration, and important to plant growth. Additionally, oxygen levels regulate soil temperatures and play a role in some chemical processes that support the oxidation of elements like Mn2+ and Fe2+ that can be toxic.
Determination of the permeability coefficient
Laboratory experiments:
Constant Head Permeability Test,
Low-level permeability test,
Horizontal permeability test.
Field experiments:
Free aquifer,
Pressured aquifer.
Composition
There is great variability in the composition of soil air as plants consume gases and microbial processes release others. Soil air is relatively moist compared with atmospheric air, and CO2 concentrations tend to be higher, while O2 is usually quite a bit lower. O2 levels are higher in well-aerated soils, which also have higher levels of CH4 and N2O than atmospheric air.
Particle size
Allen Hazen showed that the coefficient of permeability (k) of a soil is directly proportional to the square of the particle size (D). Thus the permeability of coarse-grained soil is very large compared to that of fine-grained soil. The permeability of coarse sand may be more than one million times as much as that of clay.
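Hazen's relation is often quoted as k ≈ C·(D10)², with D10 the effective grain size. A sketch (the constant C ≈ 100 and the cm/s units are the commonly quoted form of Hazen's formula, not values from this article):

```python
def hazen_k(d10_cm, c=100.0):
    """Hazen's empirical relation k ~ C * (D10)^2.

    d10_cm: effective grain size D10 in cm; c: empirical constant, with
    values around 100 commonly quoted when k is returned in cm/s.
    """
    return c * d10_cm ** 2

# Permeability scales with the square of the particle size:
ratio = hazen_k(0.05) / hazen_k(0.0005)   # a 100x size ratio gives 10,000x
```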
Impurities in soil
The presence of fine particulate impurities in a soil can decrease its permeability by progressive clogging of its porosity.
Void ratio (e)
The coefficient of permeability varies with the void ratio as e³/(1+e). For a given soil, the greater the void ratio, the higher the value of the coefficient of permeability. Here 'e' is the void ratio.
Based on other concepts, it has been established that the permeability of a soil varies as e² or e³/(1+e). Whatever the exact relationship, all soils have a straight-line plot of e versus log k.
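The e³/(1+e) form implies large permeability changes for modest changes in void ratio; a quick sketch:

```python
def k_relative(e1, e2):
    """Relative permeability change predicted by the e^3/(1+e) relation."""
    f = lambda e: e ** 3 / (1 + e)
    return f(e1) / f(e2)

# Loosening a soil from e = 0.5 to e = 0.9 raises k by roughly 4.6x:
factor = k_relative(0.9, 0.5)
```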
Degree of saturation
If the soil is not fully saturated, it contains air pockets. The permeability is reduced due to the presence of air which causes a blockage to the passage of water. Consequently, the permeability of a partially saturated soil is considerably smaller than that of fully saturated soil. In fact, Darcy's Law is not strictly applicable to such soils.
Adsorbed water
Fine-grained soils have a layer of adsorbed water strongly attached to their surface. This adsorbed layer is not free to move under gravity. It causes an obstruction to the flow of water in the pores and hence reduces the permeability of soils. According to Casagrande, 0.1 may be taken as the void ratio occupied by adsorbed water, and the permeability may be roughly assumed to be proportional to the square of the net void ratio (e - 0.1).
Entrapped air and organic matter
Air entrapped in the soil and organic matter block the passage of water through soil, hence permeability considerably decreases. In permeability tests, the sample of soil used should be fully saturated to avoid errors.
See also
Darcy's Law
Hydraulic conductivity
Permeability (Earth sciences)
Soil
Soil mechanics
References
Soil science
Pedology
Soil mechanics | Permeability of soils | [
"Physics"
] | 711 | [
"Soil mechanics",
"Applied and interdisciplinary physics"
] |
42,610,902 | https://en.wikipedia.org/wiki/Lieb%E2%80%93Robinson%20bounds | The Lieb–Robinson bound is a theoretical upper limit on the speed at which information can propagate in non-relativistic quantum systems. It demonstrates that information cannot travel instantaneously in quantum theory, even when the relativity limits of the speed of light are ignored. The existence of such a finite speed was discovered mathematically by Elliott H. Lieb and Derek W. Robinson in 1972. It turns the locality properties of physical systems into the existence of, and upper bound for this speed. The bound is now known as the Lieb–Robinson bound and the speed is known as the Lieb–Robinson velocity. This velocity is always finite but not universal, depending on the details of the system under consideration. For finite-range, e.g. nearest-neighbor, interactions, this velocity is a constant independent of the distance travelled. In long-range interacting systems, this velocity remains finite, but it can increase with the distance travelled.
In the study of quantum systems such as quantum optics, quantum information theory, atomic physics, and condensed matter physics, it is important to know that there is a finite speed with which information can propagate. The theory of relativity shows that no information, or anything else for that matter, can travel faster than the speed of light. When non-relativistic mechanics is considered, however, (Newton's equations of motion or Schrödinger's equation of quantum mechanics) it had been thought that there is then no limitation to the speed of propagation of information. This is not so for certain kinds of quantum systems of atoms arranged in a lattice, often called quantum spin systems. This is important conceptually and practically, because it means that, for short periods of time, distant parts of a system act independently.
One of the practical applications of Lieb–Robinson bounds is quantum computing. Current proposals to construct quantum computers built out of atomic-like units mostly rely on the existence of this finite speed of propagation to protect against too rapid dispersal of information.
Set up
To define the bound, it is necessary to first describe basic facts about quantum mechanical systems composed of several units, each with a finite dimensional Hilbert space.
Lieb–Robinson bounds are considered on a ν-dimensional lattice Γ (Z^ν or a subset of it), such as the square lattice Z².
A Hilbert space of states H_x is associated with each point x ∈ Γ. The dimension of this space is finite, but this was generalized in 2008 to include infinite dimensions (see below). This is called a quantum spin system.
For every finite subset of the lattice, X ⊂ Γ, the associated Hilbert space is given by the tensor product
H_X = ⊗_{x ∈ X} H_x.
An observable A supported on (i.e., depending only on) a finite set X is a linear operator on the Hilbert space H_X.
When H_x is finite dimensional, choose a finite basis of operators that spans the set of linear operators on H_x. Then any observable on X can be written as a sum of basis operators on X.
The Hamiltonian of the system is described by an interaction Φ. The interaction is a function from the finite sets X to self-adjoint observables Φ(X) supported in X. The interaction is assumed to be finite range (meaning that Φ(X) = 0 if the size of X exceeds a certain prescribed size) and translation invariant. These requirements were lifted later.
Although translation invariance is usually assumed, it is not necessary to do so. It is enough to assume that the interaction is bounded above and below on its domain. Thus,
the bound is quite robust in the sense that it is tolerant of changes of the Hamiltonian. A finite range is essential, however. An interaction Φ is said to be of finite range if there is a finite number R such that, for any set X with diameter greater than R, the interaction is zero, i.e., Φ(X) = 0. Again, this requirement was lifted later.
The Hamiltonian of the system with interaction Φ is defined formally by:
H_Φ = Σ_X Φ(X).
The laws of quantum mechanics say that corresponding to every physically observable quantity there is a self-adjoint operator .
For every observable with a finite support, the Hamiltonian defines a continuous one-parameter group
of transformations of the observables
given by
Here, has a physical meaning of time.
(Technically speaking, this time evolution is defined by a power-series expansion that is known to be norm-convergent; see Theorem 7.6.2, which adapts an earlier result. More rigorous details can be found in the references.)
The bound in question was proved in the original paper and is the following: For any observables and with finite supports and , respectively, and for any time the following holds for some positive constants and :
where denotes the distance between the sets and . The operator is called the commutator of the operators and , while the symbol denotes the norm, or size, of an operator . The bound has nothing to do with the state of the quantum system, but depends only on the Hamiltonian governing the dynamics. Once this operator bound is established, it necessarily carries over to any state of the system.
A positive constant depends on the norms of the observables and , the sizes of the supports and , the interaction, the lattice structure and the dimension of the Hilbert space . A positive constant depends on the interaction and the lattice structure only. The number
can be chosen at will provided is chosen sufficiently large. In other words, the further out one goes on the light cone, , the sharper the exponential decay rate is.
(In later works authors tended to regard as a fixed constant.) The constant is called the group velocity or Lieb–Robinson velocity.
The bound () is presented slightly differently from the equation in the original paper, which derived velocity-dependent decay rates along spacetime rays with velocity greater than . This more explicit form () can be seen from the proof of the bound.
The Lieb–Robinson bound shows that for times the norm on the right-hand side is exponentially small. This is the exponentially small error mentioned above.
The reason for considering the commutator on the left-hand side of the Lieb–Robinson bounds is the following:
The commutator between observables and is zero if their supports are disjoint.
The converse is also true: if observable is such that its commutator with any observable supported outside some set is zero, then has a support inside set .
This statement is also approximately true in the following sense: suppose that there exists some such that for some observable and any observable that is supported outside the set . Then there exists an observable with support inside set that approximates an observable , i.e. .
Thus, Lieb–Robinson bounds say that the time evolution of an observable with support in a set is supported (up to exponentially small errors) in a -neighborhood of set , where with being the Lieb–Robinson velocity. Outside this set there is no influence of . In other words, these bounds assert that the speed of propagation of perturbations in quantum spin systems is bounded.
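The statement that perturbations spread no faster than a bounded speed can be observed numerically in a small system. The sketch below (an illustration under assumptions of our own: a transverse-field Ising chain as the finite-range Hamiltonian, exact diagonalisation for the time evolution) evolves an observable in the Heisenberg picture and tracks the commutator norm appearing on the left-hand side of the bound as a function of distance:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == site else I2)
    return out

n = 6
# A finite-range Hamiltonian (a transverse-field Ising chain; an arbitrary
# illustrative choice, not a model singled out by the source)
H = sum(op_at(Z, j, n) @ op_at(Z, j + 1, n) for j in range(n - 1))
H = H + sum(op_at(X, j, n) for j in range(n))

# Heisenberg evolution A(t) = e^{iHt} A e^{-iHt} via exact diagonalisation
t = 0.3
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * t * evals)) @ V.conj().T
A_t = U.conj().T @ op_at(X, 0, n) @ U

norms = []
for d in range(1, n):
    B = op_at(X, d, n)
    norms.append(np.linalg.norm(A_t @ B - B @ A_t, 2))  # operator norm
print(norms)  # the commutator norm falls off rapidly with the distance d
```

At this small time the commutator norm drops by orders of magnitude from distance 1 to distance 5, consistent with an exponentially small influence outside the light cone.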
Improvements
Robinson generalized the bound () by considering exponentially decaying interactions (that need not be translation invariant), i.e., interactions for which the strength decays exponentially with the diameter of the set.
This result is discussed in detail in Chapter 6. No great interest was shown in the Lieb–Robinson bounds until 2004, when Hastings applied them to the Lieb–Schultz–Mattis theorem.
Subsequently, Nachtergaele and Sims extended these results to include models on vertices with a metric and to derive exponential decay of correlations. From 2005 to 2006, interest in Lieb–Robinson bounds strengthened with additional applications to exponential decay of correlations (see the sections below). New proofs of the bounds were developed and, in particular, the constant in () was improved, making it independent of the dimension of the Hilbert space.
Several further improvements of the constant in () were made.
In 2008 the Lieb–Robinson bound was extended to the case in which each is infinite dimensional.
It was subsequently shown that on-site unbounded perturbations do not change the Lieb–Robinson bound. That is, Hamiltonians of the following form can be considered on a finite subset :
where is a self-adjoint operator over , which need not be bounded.
Harmonic and anharmonic Hamiltonians
The Lieb–Robinson bounds were extended to certain continuous quantum systems, that is, to a general harmonic Hamiltonian, which, in a finite volume , where are positive integers, takes the form:
where the periodic boundary conditions are imposed and , . Here are canonical basis vectors in .
Anharmonic Hamiltonians with on-site and multiple-site perturbations were considered, and the Lieb–Robinson bounds were derived for them.
Further generalizations of the harmonic lattice were also discussed.
Irreversible dynamics
Another generalization of the Lieb–Robinson bounds was made to the irreversible dynamics,
in which case the dynamics has a Hamiltonian part and also a dissipative part. The dissipative part is described by terms of Lindblad form, so that the dynamics satisfies the Lindblad-Kossakowski master equation.
Lieb–Robinson bounds for the irreversible dynamics were first considered in the classical context, and then for a class of quantum lattice systems with finite-range interactions. Lieb–Robinson bounds for lattice models with a dynamics generated by both Hamiltonian and dissipative interactions with suitably fast decay in space, and that may depend on time, were proved subsequently, together with the existence of the infinite dynamics as a strongly continuous cocycle of unit-preserving completely positive maps.
Power-law interactions
The Lieb–Robinson bounds were also generalized to interactions that decay as a power law, i.e. the strength of the interaction is upper bounded by where is the diameter of the set and is a positive constant. Understanding whether locality persists for power-law interactions has serious implications for systems such as trapped ions, Rydberg atoms, ultracold atoms and molecules.
In contrast to finite-range interacting systems, where information may only travel at a constant speed, power-law interactions allow information to travel at a speed that increases with the distance. Thus, the Lieb–Robinson bounds for power-law interactions typically yield a sub-linear light cone that becomes asymptotically linear in the limit . A recent analysis using a quantum simulation algorithm implied a light cone , where is the dimension of the system. Tightening the light cone for power-law interactions is still an active research area.
Some applications
Lieb–Robinson bounds are used in many areas of mathematical physics. Among the main applications of the bound there is the error bounds on quantum simulation algorithms, the existence of the thermodynamic limit, the exponential decay of correlations and the Lieb–Schultz–Mattis theorem.
Digital quantum simulation algorithms
The aim of digital quantum simulation is to simulate the dynamics of a quantum system using the fewest elementary quantum gates. For a nearest-neighbor interacting system with particles, simulating its dynamics for time using the Lie product formula requires quantum gates. In 2018, Haah et al. proposed a near-optimal quantum algorithm that uses only quantum gates. The idea is to approximate the dynamics of the system by the dynamics of its subsystems, some of them spatially separated. The error of the approximation is bounded by the original Lieb–Robinson bound. Later, the algorithm was generalized to power-law interactions and subsequently used to derive a stronger Lieb–Robinson bound.
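The Lie product formula mentioned above can itself be illustrated numerically. The sketch below (an illustration with an arbitrary two-term Hamiltonian of our own choosing, not the Haah et al. algorithm) shows the first-order Trotter error shrinking as the number of steps grows:

```python
import numpy as np

def u(H, t):
    """e^{-iHt} for a Hermitian matrix H, via exact diagonalisation."""
    evals, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * t * evals)) @ V.conj().T

# Two non-commuting Hamiltonian terms (Pauli X and Z, chosen for illustration)
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[1, 0], [0, -1]], dtype=complex)
t = 1.0

exact = u(A + B, t)
errors = []
for r in (1, 10, 100):
    # Lie product formula: (e^{-iAt/r} e^{-iBt/r})^r -> e^{-i(A+B)t} as r grows
    step = u(A, t / r) @ u(B, t / r)
    errors.append(np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2))
print(errors)  # the error shrinks roughly like 1/r
```

For a lattice Hamiltonian, each factor in the product acts on few sites, which is what makes such product formulas implementable with local quantum gates.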
Thermodynamic limit of the dynamics
One of the important properties of any model meant to describe properties of bulk matter is the existence of the thermodynamic limit. This says that intrinsic properties of the system should be essentially independent of the size of the system which, in any experimental setup, is finite.
The static thermodynamic limit from the equilibrium point of view was settled well before the Lieb–Robinson bound was proved. In certain cases one can use a Lieb–Robinson bound to establish the existence of a thermodynamic limit of the dynamics, , for an
infinite lattice as the limit of finite-lattice dynamics. The limit is usually considered over an increasing sequence of finite subsets , i.e. such that for , there is an inclusion . To establish the existence of the infinite dynamics as a strongly continuous, one-parameter group of automorphisms, it was shown that is a Cauchy sequence and consequently convergent. By elementary considerations, the existence of the thermodynamic limit then follows. A more detailed discussion of the thermodynamic limit can be found in section 6.2.
Robinson was the first to show the existence of the thermodynamic limit for exponentially decaying interactions. Later, Nachtergaele et al. showed the existence of the infinite-volume dynamics for almost every type of interaction described in the section "Improvements" above.
Exponential decay of correlations
Let denote the expectation value of the observable in a state . The correlation function between two observables and is defined as
Lieb–Robinson bounds are used to show that the correlations decay exponentially in distance for a system with an energy gap above a non-degenerate ground state . In other words, the inequality
holds for observables and with support in the sets and respectively. Here and are some constants.
Alternatively the state can be taken as a product state, in which case correlations decay exponentially without assuming the energy gap above the ground state.
Such a decay was long known for relativistic dynamics, but only guessed for Newtonian dynamics. The Lieb–Robinson bounds succeed in replacing the relativistic symmetry by local estimates on the Hamiltonian.
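This decay can be observed numerically in a small gapped system. The sketch below (our own illustrative choice of model: a transverse-field Ising chain deep in its gapped paramagnetic phase) computes the connected correlation function in the exact ground state:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == site else I2)
    return out

n = 8
h = 2.0  # strong transverse field -> gapped phase with a unique ground state
H = -sum(op_at(Z, j, n) @ op_at(Z, j + 1, n) for j in range(n - 1))
H = H - h * sum(op_at(X, j, n) for j in range(n))

evals, V = np.linalg.eigh(H)
psi = V[:, 0]  # ground state (lowest eigenvalue)

def expval(O):
    return (psi.conj() @ O @ psi).real

A = op_at(Z, 0, n)
corrs = []
for d in (1, 2, 3, 4):
    B = op_at(Z, d, n)
    corrs.append(expval(A @ B) - expval(A) * expval(B))
print(corrs)  # the magnitude falls off (roughly exponentially) with d
```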
Lieb–Schultz–Mattis theorem
The Lieb–Schultz–Mattis theorem implies that the ground state of the Heisenberg antiferromagnet on a bipartite lattice with isomorphic sublattices is non-degenerate, i.e., unique, but the gap can be very small.
For one-dimensional and quasi-one-dimensional systems of even length and with half-integral spin Affleck and Lieb, generalizing the original result by Lieb, Schultz, and Mattis, proved that the gap in the spectrum above the ground state is bounded above by
where is the size of the lattice and is a constant. Many attempts were made to extend this result to higher dimensions.
The Lieb–Robinson bound was utilized by Hastings and by Nachtergaele-Sims in a proof of the Lieb–Schultz–Mattis Theorem for higher-dimensional cases.
The following bound on the gap was obtained:
.
Discretisation of the continuum via Gauss quadrature rules
In 2015, it was shown that the Lieb–Robinson bound can also have applications outside the context of local Hamiltonians, as we now explain. The spin-boson model describes the dynamics of a spin coupled to a continuum of oscillators. It has been studied in great detail and explains quantum dissipative effects in a wide range of quantum systems. Let denote the Hamiltonian of the spin-boson model with a continuum bosonic bath, and the spin-boson model whose bath has been discretised to include harmonic oscillators with frequencies chosen according to Gauss quadrature rules. For all observables on the spin Hamiltonian, the error on the expectation value of induced by discretising the spin-boson model according to the above discretisation scheme is bounded by
where are positive constants and is the Lieb–Robinson velocity, which in this case is directly proportional to , the maximum frequency of the bath in the spin-boson model. Here, the number of discrete modes plays the role of the distance mentioned below Eq. (). One can also bound the error induced by a local Fock-space truncation of the harmonic oscillators.
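A minimal sketch of such a quadrature-based bath discretisation (with an Ohmic spectral density chosen purely for illustration; the actual scheme in the cited work may differ in detail) is:

```python
import numpy as np

def discretise_bath(J, omega_max, n_modes):
    """Discretise a continuum bath with spectral density J on [0, omega_max]
    into n_modes oscillators using Gauss-Legendre quadrature.
    Couplings are chosen so that g_k^2 = J(w_k) * (quadrature weight)."""
    x, w = np.polynomial.legendre.leggauss(n_modes)  # nodes/weights on [-1, 1]
    freqs = 0.5 * omega_max * (x + 1.0)              # map nodes to [0, omega_max]
    weights = 0.5 * omega_max * w
    couplings = np.sqrt(J(freqs) * weights)
    return freqs, couplings

# Example: an Ohmic spectral density J(w) = a*w (an illustrative assumption)
J = lambda w: 0.1 * w
freqs, g = discretise_bath(J, omega_max=10.0, n_modes=8)
# Sanity check: sum of g_k^2 approximates the integral of J over [0, omega_max]
print(np.sum(g**2), 0.1 * 10.0**2 / 2)  # both equal 5.0 here
```

Because Gauss–Legendre quadrature with 8 nodes is exact for polynomials of degree up to 15, the discrete couplings reproduce the integral of a linear spectral density exactly.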
Experiments
The first experimental observation of the Lieb–Robinson velocity was done by Cheneau et al.
References
Quantum information theory
Limits of computation | Lieb–Robinson bounds | [
"Physics"
] | 3,312 | [
"Physical phenomena",
"Limits of computation"
] |
42,611,813 | https://en.wikipedia.org/wiki/Run-off%20transcription | A run-off transcription assay is an assay in molecular biology which is conducted in vitro to identify the position of the transcription start site (1 base pair upstream) of a specific promoter along with its accuracy and rate of in vitro transcription.
Run-off transcription can be used to quantitatively measure the effect of changing promoter regions on in vitro transcription levels. Because of its in vitro nature, however, this assay cannot accurately predict cell-specific gene transcription rates, unlike in vivo assays such as nuclear run-on.
To perform a run-off transcription assay, a gene of interest, including the promoter, is cloned into a plasmid. The plasmid is digested at a known restriction enzyme cut site downstream from the transcription start site such that the expected mRNA run-off product would be easily separated by gel electrophoresis.
DNA needs to be highly purified prior to running this assay. To initiate transcription, radiolabeled UTP, the other nucleotides, and RNA polymerase are added to the linearized DNA. Transcription continues until the RNA polymerase reaches the end of the DNA where it simply “runs off” the DNA template, resulting in an mRNA fragment of a defined length. This fragment can then be separated by gel electrophoresis, alongside size standards, and autoradiographed. The corresponding size of the band will represent the size of the mRNA from the restriction enzyme cut site to the transcription start site (+1). The intensity of the band will indicate the amount of mRNA produced.
Additionally, it can be used to detect whether or not transcription is carried out under certain conditions (i.e. in the presence of different chemicals).
References
Molecular biology techniques | Run-off transcription | [
"Chemistry",
"Biology"
] | 357 | [
"Molecular biology techniques",
"Molecular biology"
] |
42,615,525 | https://en.wikipedia.org/wiki/CANDU%20Owners%20Group | CANDU Owners Group is a private, not-for-profit corporation funded voluntarily by CANDU operating utilities worldwide, Canadian Nuclear Laboratories (CNL) and supplier participants. It is dedicated to providing programs for cooperation, mutual assistance and exchange of information for the successful support, development, operation, maintenance and economics of CANDU technology. All CANDU Operators in the world are members of COG. This includes plants in Canada (Pickering Nuclear Generating Station, Darlington Nuclear Generating Station, Bruce Nuclear Generating Station and Point Lepreau Nuclear Generating Station), Argentina (Embalse Nuclear Power Station), China (Qinshan Nuclear Power Plant), India (Rajasthan Atomic Power Station), Pakistan (Karachi Nuclear Power Complex), South Korea (Wolseong Nuclear Power Plant), and Romania (Cernavodă Nuclear Power Plant). Its headquarters is in Toronto, Ontario, Canada.
COG was formed in 1984 by an agreement among the Canadian CANDU-owning utilities Ontario Hydro (now Ontario Power Generation), Hydro-Québec and New Brunswick Power, plus Atomic Energy of Canada Limited. It became a non-profit corporation in 1999.
References
Nuclear industry organizations | CANDU Owners Group | [
"Engineering"
] | 235 | [
"Nuclear industry organizations",
"Nuclear organizations"
] |
42,616,903 | https://en.wikipedia.org/wiki/Falling-film%20column | A falling-film column (or wetted-wall column) is a kind of laboratory equipment used to achieve mass and heat transfer between two fluid phases (in general one gas phase and one liquid phase).
It consists of a vertical tube-shaped vessel: the liquid stream flows downwards along the inner wall of the tube and the gas stream flows up through the centre of the tube.
Description
In the most common case, the column contains one liquid stream and one gas stream. The liquid forms a thin film that covers the inner surface of the vessel; the gas stream is normally injected from the bottom of the column, so the two fluids are subjected to a counter-current exchange of matter and heat, that happens through the gas-liquid interface.
Sometimes, the same equipment is used to achieve the co-current mass and heat transfer between two immiscible liquids.
Applications
Because they are easy to model, falling-film columns are generally used as laboratory equipment, for example to measure experimentally the values of transport coefficients. A significant experiment was carried out in 1934 by Edwin R. Gilliland and Thomas Kilgore Sherwood that used a falling-film column to study the mass transfer phenomenon between a liquid phase and a gas phase, obtaining an experimental correlation between the Sherwood number, Reynolds number and Schmidt number.
Falling-film columns are not used at industrial scales, because they have a low surface area and liquid hold-up compared to other gas-liquid contactors (e.g. a packed column or a plate column).
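The correlation obtained by Gilliland and Sherwood for wetted-wall columns is commonly quoted as Sh = 0.023 Re^0.83 Sc^0.44. The sketch below applies it to illustrative gas-phase values (the diffusivity and column diameter are assumptions for the example, not from the source):

```python
def sherwood_gilliland(Re, Sc):
    """Gilliland-Sherwood correlation for gas-phase mass transfer in a
    wetted-wall column: Sh = 0.023 * Re^0.83 * Sc^0.44 (as commonly quoted,
    valid roughly for 2000 < Re < 35000 and 0.6 < Sc < 2.5)."""
    return 0.023 * Re**0.83 * Sc**0.44

# Example: estimate the mass-transfer coefficient k_c = Sh * D / d
Re, Sc = 10_000, 1.0  # illustrative gas-phase values
D = 2e-5              # gas diffusivity, m^2/s (assumed)
d = 0.025             # column diameter, m (assumed)
Sh = sherwood_gilliland(Re, Sc)
k_c = Sh * D / d
print(Sh, k_c)  # Sh on the order of 50, k_c on the order of 0.04 m/s
```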
References
Chemical equipment | Falling-film column | [
"Chemistry",
"Engineering"
] | 315 | [
"Chemical equipment",
"nan"
] |
32,802,555 | https://en.wikipedia.org/wiki/Weisner%27s%20method | In mathematics, Weisner's method is a method for finding generating functions for special functions using representation theory of Lie groups and Lie algebras, introduced by . It includes Truesdell's method as a special case, and is essentially the same as Rainville's method.
References
Generating functions | Weisner's method | [
"Mathematics"
] | 62 | [
"Sequences and series",
"Generating functions",
"Mathematical structures"
] |
32,804,088 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Systems%20Biology | The International Conference on Systems Biology (ICSB) is the primary international conference for systems biology research. Created by Hiroaki Kitano in 2000, its organization is now coordinated by the International Society for Systems Biology (ISSB).
Previous conferences
ICSB-2023, Hartford, Connecticut, USA
ICSB-2022, in person - Berlin, Germany.
ICSB-2021, canceled due to COVID
ICSB-2020, canceled due to COVID
ICSB-2019, Okinawa, Japan (Okinawa Institute of Science and Technology Graduate University).
ICSB-2018, Lyon, France (Université de Lyon).
ICSB-2017, Blacksburg, Virginia (Virginia Tech).
ICSB-2016, Barcelona, Spain.
ICSB-2015 was originally announced to be in Shanghai, but then was held in Singapore.
ICSB-2014, Melbourne, Australia.
ICSB-2013, Copenhagen, Denmark.
ICSB-2012, Toronto, Canada.
ICSB-2011, Mannheim, Germany.
ICSB-2010 Edinburgh, UK
ICSB-2009 Stanford, California (Stanford University)
ICSB-2008 Gothenburg, Sweden (CMB, Chalmers Biocenter, YSBN, NYRC)
ICSB-2007 Long Beach, California (Caltech, UC Irvine, etc.)
ICSB-2006 Yokohama (The Systems Biology Institute, JST, RIKEN, AIST)
ICSB-2005 Boston (Harvard, MIT, Boston University)
ICSB-2004 Heidelberg (DFKI, EMBL, etc.)
ICSB-2003 St. Louis (Washington University in St. Louis)
ICSB-2002 Stockholm (Karolinska Institute)
ICSB-2001 Pasadena, California (California Institute of Technology)
ICSB-2000 Tokyo (Japan Science and Technology Agency)
Upcoming conferences
ICSB-2024, Bombay, India (IIT)
External links
International Society for Systems Biology
References
Systems biology
Biology conferences | International Conference on Systems Biology | [
"Biology"
] | 393 | [
"Systems biology"
] |
51,237,098 | https://en.wikipedia.org/wiki/Mean%20square | In mathematics and its applications, the mean square is normally defined as the arithmetic mean of the squares of a set of numbers or of a random variable.
It may also be defined as the arithmetic mean of the squares of the deviations between a set of numbers and a reference value (e.g., may be a mean or an assumed mean of the data), in which case it may be known as mean square deviation.
When the reference value is the assumed true value, the result is known as mean squared error.
A typical estimate for the sample variance from a set of sample values uses a divisor of the number of values minus one, n-1, rather than n as in a simple quadratic mean, and this is still called the "mean square" (e.g. in analysis of variance):
The second moment of a random variable is also called the mean square.
The square root of a mean square is known as the root mean square (RMS or rms), and can be used as an estimate of the standard deviation of a random variable when the random variable is zero-mean.
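These definitions can be checked on a small data set (an illustrative sketch; the numbers are arbitrary):

```python
import math

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(xs)

mean_square = sum(x * x for x in xs) / n   # arithmetic mean of the squares
rms = math.sqrt(mean_square)               # root mean square
mean = sum(xs) / n
msd = sum((x - mean) ** 2 for x in xs) / n # mean square deviation about the mean
sample_var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # n-1 divisor

print(mean_square, rms, msd, sample_var)   # 29.0 5.385... 4.0 4.571...
```

Note the identity mean_square = msd + mean², which here reads 29 = 4 + 25.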
References
Means | Mean square | [
"Physics",
"Mathematics"
] | 230 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
51,239,611 | https://en.wikipedia.org/wiki/Techno-authoritarianism | Techno-authoritarianism, also known as IT-backed authoritarianism, digital authoritarianism or digital dictatorship, refers to the state use of information technology in order to control or manipulate both foreign and domestic populations. Tactics of digital authoritarianism may include mass surveillance including through biometrics such as facial recognition, internet firewalls and censorship, internet blackouts, disinformation campaigns, and digital social credit systems. Although some institutions assert that this term should only be used to refer to authoritarian governments, others argue that the tools of digital authoritarianism are being adopted and implemented by governments with "authoritarian tendencies", including democracies.
Most notably, China and Russia have been accused by the Brookings Institution of leveraging the Internet and information technology to repress opposition domestically while undermining democracies abroad.
Definition
IT-backed authoritarianism refers to an authoritarian regime using cutting-edge information technology in order to penetrate, control and shape the behavior of actors within society and the economy.
According to reports and articles on China's practice, the basis of digital authoritarianism is an advanced, all-encompassing and largely real-time surveillance system, which merges government-run systems and databases (e.g. traffic monitoring, financial credit rating, the education system, the health sector, etc.) with company surveillance systems (e.g. of shopping preferences, activities on social media platforms, etc.). IT-backed authoritarianism institutionalizes the data transfer between companies and governmental agencies, providing the government with full and regular access to data collected by companies. The authoritarian government remains the only entity with unlimited access to the collected data. IT-backed authoritarianism thus increases the authority of the regime vis-à-vis national and multinational companies as well as vis-à-vis other decentralized or subnational political forces and interest groups.
The collected data is utilized by the authoritarian regime to analyze and influence the behavior of a country's citizens, companies and other institutions. It does so with the help of algorithms based on the principles and norms of the authoritarian regime, automatically calculating credit scores for every individual and institution. In contrast to financial credit ratings, these "social credit scores" are based on the full range of collected surveillance data, including financial as well as non-financial information. IT-backed authoritarianism only allows full participation in a country's economy and society for those who have a good credit score and thus respect the rules and norms of the respective authoritarian regime. Behavior deviating from these norms incurs automatic punishment through a bad credit score, which leads to economic or social disadvantages (loan conditions, lower job opportunities, no participation in public procurement, etc.).
Severe violation or non-compliance can lead to the exclusion from any economic activities on the respective market or (for individuals) to an exclusion from public services.
Examples
China
China has been viewed as the cutting edge and the enabler of digital authoritarianism. With its Great Firewall of a state-controlled Internet, it has deployed high-tech repression against Uyghurs in Xinjiang and exported surveillance and monitoring systems to 18 countries as of 2019.
According to Freedom House, the China model of digital authoritarianism through Internet control against those who are critical of the CCP features legislations of censorship, surveillance using artificial intelligence (AI) and facial recognition, manipulation or removal of online content, cyberattacks and spear phishing, suspension and revocation of social media accounts, detention and arrests, and forced disappearance and torture, among other means. A report by Carnegie Endowment for International Peace also highlights similar digital repression techniques. In 2013, The Diplomat reported that the Chinese hackers behind the malware attacks on Falun Gong supporters in China, the Philippines, and Vietnam were the same ones responsible for attacks against foreign military powers, targeting email accounts and stealing Microsoft Outlook login information and email contents.
The 2022 analysis by The New York Times of over 100,000 Chinese government bidding documents revealed a range of surveillance and data collection practices, from personal biometrics to behavioral data, which are fed into AI systems. China utilizes these data capabilities not only to enhance governmental and infrastructural efficiency but also to monitor and suppress dissent among its population, particularly in Xinjiang, where the government targets the Uyghur community under the guise of counterterrorism and public security.
Russia
The Russian model of digital authoritarianism relies on strict laws of digital expression and the technology to enforce them. Since 2012, as part of a broader crackdown on civil society, the Russian Parliament has adopted numerous laws curtailing speech and expression. Hallmarks of Russian digital authoritarianism include:
The surveillance of all Internet traffic through the System for Operative Investigative Activities (SORM) and the Semantic Archive;
Restrictive laws on the freedom of speech and expression, including the blacklisting of hundreds of thousands of websites, and punishment including fines and jail time for activities including slander, "insulting religious feelings," and "acts of extremism".
Infrastructure regulations including requirements for Internet service providers (ISPs) to install deep packet inspection equipment under the 2019 Sovereign Internet Law.
Myanmar
Since the coup d'état in February 2021, the military junta blocked all but 1,200 websites and imposed Internet shutdowns, with pro-military dominating the content on the remaining accessible websites. In May 2021, Reuters reported that telecom and Internet service providers had been secretly ordered to install spyware allowing the military to "listen in on calls, view text messages and web traffic including emails, and track the locations of users without the assistance of the telecom and internet firms." In February 2022, Norwegian service provider Telenor was forced to sell its operation to a local company aligned with the military junta. The military junta also sought to criminalize virtual private networks (VPNs), imposed mandatory registration of devices, and increased surveillance on both social media platforms and via telecom companies.
In July 2022, the military executed activist Kyaw Min Yu, after arresting him in November 2021 for prodemocracy social media posts criticizing the coup.
Africa
A study by the African Digital Rights Network (ADRN) revealed that governments in ten African countries—South Africa, Cameroon, Zimbabwe, Uganda, Nigeria, Zambia, Sudan, Kenya, Ethiopia, and Egypt—have employed various forms of digital authoritarianism. The most common tactics include digital surveillance, disinformation, Internet shutdowns, censorship legislation, and arrests for anti-government speech. The researchers highlighted the growing trend of complete Internet or mobile system shutdowns. Additionally, all ten countries utilized Internet surveillance, mobile intercept technologies, or artificial intelligence to monitor targeted individuals using specific keywords.
References
Authoritarianism
Mass surveillance
Government by algorithm | Techno-authoritarianism | [
"Engineering"
] | 1,356 | [
"Government by algorithm",
"Automation"
] |
51,244,168 | https://en.wikipedia.org/wiki/Hedonic%20game | In cooperative game theory, a hedonic game (also known as a hedonic coalition formation game) is a game that models the formation of coalitions (groups) of players when players have preferences over which group they belong to. A hedonic game is specified by giving a finite set of players, and, for each player, a preference ranking over all coalitions (subsets) of players that the player belongs to. The outcome of a hedonic game consists of a partition of the players into disjoint coalitions, that is, each player is assigned a unique group. Such partitions are often referred to as coalition structures.
Hedonic games are a type of non-transferable utility game. Their distinguishing feature (the "hedonic aspect") is that players only care about the identity of the players in their coalition, but do not care about how the remaining players are partitioned, and do not care about anything other than which players are in their coalition. Thus, in contrast to other cooperative games, a coalition does not choose how to allocate profit among its members, and it does not choose a particular action to play. Some well-known subclasses of hedonic games are given by matching problems, such as the stable marriage, stable roommates, and the hospital/residents problems.
The players in hedonic games are typically understood to be self-interested, and thus hedonic games are usually analyzed in terms of the stability of coalition structures, where several notions of stability are used, including the core and Nash stability. Hedonic games are studied both in economics, where the focus lies on identifying sufficient conditions for the existence of stable outcomes, and in multi-agent systems, where the focus lies on identifying concise representations of hedonic games and on the computational complexity of finding stable outcomes.
Definition
Formally, a hedonic game is a pair consisting of a finite set of players (or agents) and, for each player , a complete and transitive preference relation over the set of coalitions that player belongs to. A coalition is a subset of the set of players. The coalition is typically called the grand coalition.
A coalition structure is a partition of . Thus, every player belongs to a unique coalition in .
Solution concepts
Like in other areas of game theory, the outcomes of hedonic games are evaluated using solution concepts. Many of these concepts refer to a notion of game-theoretic stability: an outcome is stable if no player (or possibly no coalition of players) can deviate from the outcome so as to reach a subjectively better outcome. Here we give definitions of several solution concepts from the literature.
A coalition structure is in the core (or is core stable) if there is no coalition whose members all prefer to . Formally, a non-empty coalition is said to block if for all . Then is in the core if there are no blocking coalitions.
A coalition structure is in the strict core (or is strictly core stable) if there is no weakly blocking coalition where all members weakly prefer to and some member strictly prefers to . In other words, is in the strict core if .
A coalition structure is Nash-stable if no player wishes to change coalition within . Formally, is Nash-stable if there is no such that for some . Notice that, according to Nash-stability, a deviation by a player is allowed even if members of the group that are joined by are made worse off by the deviation.
A coalition structure is individually stable if no player wishes to join another coalition whose members all welcome the player. Formally, a coalition structure is individually stable if there is no player who strictly prefers joining another coalition of the structure (or going alone) to staying put, where additionally every member of the coalition being joined weakly prefers having the player join.
A coalition structure is contractually individually stable if there is no player who belongs to a coalition willing to let them leave and who wants to join a coalition willing to have them. In other words, a deviation as in individual stability is only allowed if, in addition, all members of the player's old coalition are made weakly better off by the departure.
One can also define Pareto optimality of a coalition structure. In the case that the preference relations are represented by utility functions, one can also consider coalition structures that maximize social welfare.
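The stability notions above lend themselves to direct computational checks on small games. The following sketch (an illustration, not from the literature; it assumes preferences are given as numerical utility tables rather than ordinal rankings) tests Nash stability by enumerating every single-player deviation:

```python
def is_nash_stable(players, partition, utility):
    """Check Nash stability: no player strictly prefers joining another
    coalition of the partition (or the empty coalition) to staying put.

    partition:  list of frozensets covering `players`.
    utility[i]: dict mapping each frozenset containing i to a number
                (higher = better for player i).
    """
    for i in players:
        home = next(c for c in partition if i in c)
        # candidate targets: every other coalition, plus going alone
        targets = [c for c in partition if c != home] + [frozenset()]
        for c in targets:
            if utility[i][c | {i}] > utility[i][home]:
                return False
    return True

# One possible utility encoding of the "undesired guest" (players 1 and 2
# like each other and dislike 3; player 3 prefers company) -- the numbers
# are illustrative, only their ordering matters:
u = {
    1: {frozenset({1, 2}): 2, frozenset({1}): 1, frozenset({1, 3}): 0, frozenset({1, 2, 3}): 0},
    2: {frozenset({1, 2}): 2, frozenset({2}): 1, frozenset({2, 3}): 0, frozenset({1, 2, 3}): 0},
    3: {frozenset({1, 2, 3}): 3, frozenset({1, 3}): 2, frozenset({2, 3}): 2, frozenset({3}): 1},
}
# {{1,2},{3}} is not Nash-stable: player 3 would rather join {1,2}
assert not is_nash_stable([1, 2, 3], [frozenset({1, 2}), frozenset({3})], u)
```

The same brute-force pattern (enumerate deviations, compare utilities) extends to individual stability by additionally checking the preferences of the coalition being joined.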
Examples
The following three-player game has been named "an undesired guest". From the preferences, players 1 and 2 like each other but dislike the presence of player 3, while player 3 prefers company to being alone.
Consider the partition {{1,2},{3}}. Notice that in this partition, player 3 would prefer to join the coalition {1,2}, and hence the partition is not Nash-stable. However, if player 3 were to join, players 1 and 2 would be made worse off by this deviation, and so player 3's deviation does not contradict individual stability. Indeed, one can check that this partition is individually stable. We can also see that there is no group of players all of whose members prefer the group to their coalition in the partition, and so the partition is also in the core.
Another three-player example is known as "two is a company, three is a crowd": each player most prefers one of the two pairs containing them, and the preferences over pairs form a cycle. In this game, no partition is core-stable: the partition into singletons (where everyone is alone) is blocked by a pair; the grand coalition (where everyone is together) is blocked by a pair; and every partition consisting of one pair and a singleton is blocked by another pair, because the preferences contain a cycle.
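The emptiness of the core in this game can be verified exhaustively, since a three-player game has only five coalition structures. The sketch below assumes one particular utility encoding of the cyclic preferences (the numbers are illustrative; only their ordering matters):

```python
from itertools import combinations

players = [1, 2, 3]

# "Two is company, three is a crowd": each player i ranks
# {i, i+1} > {i, i-1} > {i} > {1,2,3}  (indices cyclic).
u = {
    1: {frozenset({1, 2}): 3, frozenset({1, 3}): 2, frozenset({1}): 1, frozenset({1, 2, 3}): 0},
    2: {frozenset({2, 3}): 3, frozenset({1, 2}): 2, frozenset({2}): 1, frozenset({1, 2, 3}): 0},
    3: {frozenset({1, 3}): 3, frozenset({2, 3}): 2, frozenset({3}): 1, frozenset({1, 2, 3}): 0},
}

def blocks(S, partition):
    """S blocks the partition if every member strictly prefers S."""
    return all(u[i][S] > u[i][next(c for c in partition if i in c)] for i in S)

def core_stable(partition):
    coalitions = (frozenset(c) for r in range(1, 4)
                  for c in combinations(players, r))
    return not any(blocks(S, partition) for S in coalitions)

all_partitions = [
    [frozenset({1}), frozenset({2}), frozenset({3})],
    [frozenset({1, 2}), frozenset({3})],
    [frozenset({1, 3}), frozenset({2})],
    [frozenset({2, 3}), frozenset({1})],
    [frozenset({1, 2, 3})],
]
assert not any(core_stable(p) for p in all_partitions)  # the core is empty
```

Each of the five coalition structures admits a blocking pair, confirming that the core of this game is empty.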
Concise representations and restricted preferences
Since the preference relations in a hedonic game are defined over the collection of all subsets of the player set, storing a hedonic game takes exponential space. This has inspired various representations of hedonic games that are concise, in the sense that they (often) only require polynomial space.
Individually rational coalition lists represent a hedonic game by explicitly listing the preference rankings of all agents, but only listing individually rational coalitions, that is, coalitions that the player weakly prefers to staying alone. For many solution concepts, it is irrelevant how precisely a player ranks unacceptable coalitions, since no stable coalition structure can contain a coalition that is not individually rational for one of its players. Note that if there are only polynomially many individually rational coalitions, then this representation only takes polynomial space.
Hedonic coalition nets represent hedonic games through weighted Boolean formulas. As an example, a weighted formula for player 1 such as "2 and not 3, with weight 5" means that player 1 receives 5 utility points in coalitions that include player 2 but do not include player 3. This representation formalism is universally expressive and often concise (though, by necessity, there are some hedonic games whose hedonic coalition net representation requires exponential space).
Additively separable hedonic games are based on every player assigning numerical values to the other players; a coalition is as good for a player as the sum of the values of its members. Formally, additively separable hedonic games are those for which every player assigns a value to every other player, such that a player weakly prefers one coalition to another if and only if the sum of the values the player assigns to the members of the first coalition is at least the corresponding sum for the second. A similar definition, using the average rather than the sum of values, leads to the class of fractional hedonic games.
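Because the preference relation is induced by per-player valuations, preference comparisons reduce to summing numbers; a minimal sketch (the valuation table is illustrative):

```python
def asg_utility(values, i, coalition):
    """Additively separable utility: player i's value for a coalition is the
    sum of i's valuations of the other members (i's value for itself is 0)."""
    return sum(values[i][j] for j in coalition if j != i)

def prefers(values, i, S, T):
    """True if player i weakly prefers coalition S to coalition T."""
    return asg_utility(values, i, S) >= asg_utility(values, i, T)

# Illustrative valuations: player 1 values player 2 at 3 and player 3 at -1, etc.
v = {1: {2: 3, 3: -1}, 2: {1: 3, 3: 2}, 3: {1: -1, 2: 2}}
assert prefers(v, 1, {1, 2}, {1, 2, 3})   # 3 >= 3 + (-1)
assert not prefers(v, 3, {1, 3}, {2, 3})  # -1 < 2
```

The game is symmetric (in the sense used below for existence results) exactly when the valuation table satisfies `values[i][j] == values[j][i]` for all pairs.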
In anonymous hedonic games, players only care about the size of their coalition, and agents are indifferent between any two coalitions of the same cardinality that contain them. These games are anonymous in the sense that the identities of the individuals do not influence the preference ranking.
In Boolean hedonic games, each player has a Boolean formula whose variables are the other players. Each player prefers coalitions that satisfy its formula to coalitions that do not, but is otherwise indifferent.
In hedonic games with preferences depending on the worst player (or W-preferences), players have a preference ranking over players, and extend this ranking to coalitions by evaluating a coalition according to the (subjectively) worst player in it. Several similar concepts (such as B-preferences) have been defined.
Existence guarantees
Not every hedonic game admits a coalition structure that is stable. For example, we can consider the stalker game, which consists of just two players: player 1 prefers being alone to being together with player 2, while player 2 (the stalker) prefers being together with player 1 to being alone. Notice that no coalition structure for this game is Nash-stable: in the coalition structure where both players are alone, the stalker deviates and joins player 1; in the coalition structure where the players are together, player 1 deviates into the empty coalition so as not to be together with the stalker. There is a well-known instance of the stable roommates problem with 4 players that has an empty core, and there is also an additively separable hedonic game with 5 players that has an empty core and no individually stable coalition structures.
For symmetric additively separable hedonic games (those in which any two players assign each other the same value), there always exists a Nash-stable coalition structure, by a potential function argument. In particular, coalition structures that maximize social welfare are Nash-stable. A similar argument shows that a Nash-stable coalition structure always exists in the more general class of subset-neutral hedonic games. However, there are examples of symmetric additively separable hedonic games that have an empty core.
Several conditions have been identified that guarantee the existence of a core coalition structure. This is the case in particular for hedonic games with the common ranking property, with the top coalition property, with top or bottom responsiveness, with descending separable preferences, and with dichotomous preferences. Moreover, the common ranking property has been shown to guarantee the existence of a coalition structure which is simultaneously core stable, individually stable, and Pareto optimal.
Computational complexity
When considering hedonic games, the field of algorithmic game theory is usually interested in the complexity of the problem of finding a coalition structure satisfying a certain solution concept when given a hedonic game as input (in some concise representation). Since a given hedonic game is usually not guaranteed to admit a stable outcome, such problems can often be phrased as a decision problem asking whether a given hedonic game admits a stable outcome. In many cases, this problem turns out to be computationally intractable. One exception is hedonic games with the common ranking property, where a core coalition structure always exists and can be found in polynomial time. However, it is still NP-hard to find a Pareto optimal or socially optimal outcome.
In particular, for hedonic games given by individually rational coalition lists, it is NP-complete to decide whether the game admits a core-stable, a Nash-stable, or an individually stable outcome. The same is true for anonymous games. For additively separable hedonic games, it is NP-complete to decide the existence of a Nash-stable or an individually stable outcome and complete for the second level of the polynomial hierarchy to decide whether there exists a core-stable outcome, even for symmetric additive preferences. These hardness results extend to games given by hedonic coalition nets. While Nash- and individually stable outcomes are guaranteed to exist for symmetric additively separable hedonic games, finding one can still be hard if the valuations are given in binary; the problem is PLS-complete. For the stable marriage problem, a core-stable outcome can be found in polynomial time using the deferred acceptance algorithm; for the stable roommates problem, the existence of a core-stable outcome can be decided in polynomial time if preferences are strict, but the problem is NP-complete if preference ties are allowed. Hedonic games with preferences based on the worst player behave very similarly to stable roommates problems with respect to the core, but there are hardness results for other solution concepts. Many of the preceding hardness results can be explained through meta-theorems about extending preferences over single players to coalitions.
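For the stable marriage case mentioned above, the deferred acceptance (Gale–Shapley) algorithm finds a core-stable matching in polynomial time; a compact sketch of the proposer-optimal variant (the dictionary-based input format is an assumption of this illustration):

```python
def deferred_acceptance(proposer_prefs, responder_prefs):
    """Gale-Shapley deferred acceptance for complete, strict preference lists.

    proposer_prefs[p]  : list of responders, best first.
    responder_prefs[r] : list of proposers, best first.
    Returns a stable matching as a dict {responder: proposer}.
    """
    # rank[r][p] = position of proposer p in r's list (lower = better)
    rank = {r: {p: k for k, p in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next responder each p will try
    matched = {}                                  # responder -> current proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in matched:
            matched[r] = p                        # r provisionally accepts p
        elif rank[r][p] < rank[r][matched[r]]:    # r prefers the newcomer
            free.append(matched[r])
            matched[r] = p
        else:
            free.append(p)                        # p is rejected, keeps trying

    return matched

# Two proposers both prefer x; x prefers b, so a ends up with y:
match = deferred_acceptance({'a': ['x', 'y'], 'b': ['x', 'y']},
                            {'x': ['b', 'a'], 'y': ['a', 'b']})
assert match == {'x': 'b', 'y': 'a'}
```

No blocking pair can survive the run: a proposer only moves down its list after being rejected by everyone it prefers, and responders only ever trade up.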
Applications
Robotics
For a robotic system consisting of multiple autonomous intelligent robots (e.g., swarm robotics), one of the decision-making issues is how to form a robotic team for each given task requiring collaboration of the robots. Such a problem can be called a multi-robot task allocation or multi-robot coalition formation problem. This problem can be modelled as a hedonic game, and the preferences of the robots in the game may reflect their individual interests (e.g., the battery consumption needed to finish a task) and/or social considerations (e.g., the complementarity of other robots' capabilities, or crowding around a shared resource).
Some of the particular concerns in such robotics applications of hedonic games, relative to other applications, include the communication network topology of the robots (the network is most likely only partially connected) and the need for a decentralised algorithm that finds a Nash-stable partition (because the multi-robot system is a decentralised system).
Using anonymous hedonic games under SPAO (single-peaked-at-one) preferences, a Nash-stable partition of decentralised robots, in which each coalition is dedicated to one task, is guaranteed to be found within a number of iterations bounded in terms of the number of robots and the diameter of their communication network. Here, the implication of SPAO is the robots' social inhibition (i.e., reluctance to be together), which normally arises when their cooperation is subadditive.
References
Game theory game classes | Hedonic game | [
"Mathematics"
] | 2,648 | [
"Game theory game classes",
"Game theory"
] |
55,665,052 | https://en.wikipedia.org/wiki/Liquid%20marbles | Liquid marbles are non-stick droplets (normally aqueous) wrapped by micro- or nano-metrically scaled hydrophobic, colloidal particles (Teflon, polyethylene, lycopodium powder, carbon black, etc.); representing a platform for a diversity of chemical and biological applications. Liquid marbles are also found naturally; aphids convert honeydew droplets into marbles. A variety of non-organic and organic liquids may be converted into liquid marbles. Liquid marbles demonstrate elastic properties and do not coalesce when bounced or pressed lightly. Liquid marbles demonstrate a potential as micro-reactors, micro-containers for growing micro-organisms and cells, micro-fluidics devices, and have even been used in unconventional computing. Liquid marbles remain stable on solid and liquid surfaces. Statics and dynamics of rolling and bouncing of liquid marbles were reported. Liquid marbles coated with poly-disperse and mono-disperse particles have been reported. Liquid marbles are not hermetically coated by solid particles but connected to the gaseous phase. Kinetics of the evaporation of liquid marbles has been investigated.
Interfacial water marbles
Liquid marbles were first reported by P. Aussillous and D. Quere in 2001, who described a new method to construct portable water droplets in the atmospheric environment with a hydrophobic coating on their surface that prevents contact between the water and the solid support (Figure 1). Liquid marbles provide a new approach to transporting liquid across a solid surface, effectively replacing rigid glass containers with a flexible, user-specified coating composed of hydrophobic powders. Since then, the applications of liquid marbles in loss-free mass transport, microfluidics and microreactors have been extensively investigated. However, liquid marbles only reflect water behavior at the solid–air interface, and until recently there was no report of analogous behavior at the liquid–liquid interface, as a result of the so-called coalescence cascade phenomenon.
When a water droplet is in contact with a water reservoir, it will quickly pinch off from the reservoir and form a smaller daughter droplet, and this daughter droplet will continue through a similar contact, pinch-off and splitting process until complete coalescence into the reservoir; the succession of these self-similar coalescence processes is called a coalescence cascade. The underlying mechanism of the coalescence cascade has been studied in detail, but there have been few attempts to control and make use of it. Recently, Liu et al. filled this void by proposing a new method to control the coalescence cascade using a nanostructured coating at the liquid–liquid interface: the interfacial liquid marble.
Similar to liquid marbles at the solid–air interface, interfacial liquid marbles are constructed at the hexane/water interface using water droplets with a surface coating composed of nanoscale materials with special wettability (Figure 2). To realize interfacial water marbles at the hexane/water interface, the individual particle size of the coating layer should be as small as possible, so that the contact line between the particles and the water reservoir is minimized; special wettability with mixed hydrophobicity and hydrophilicity is also preferred for interfacial water marble formation. The interfacial water marble can be fabricated by first coating a water droplet with nanomaterials of special wettability, e.g. hybrid carbon nanowires or graphene oxide. Afterwards, a secondary coating layer of polyvinylidene fluoride (PVDF) is applied onto the coated droplet. The doubly-coated water droplet is then cast into the hexane/water mixture and eventually settles at the hexane/water interface to form the interfacial water marble. During this process, the PVDF coating quickly diffuses into the hexane to balance the hydrophobic interaction between the hexane and the water droplet, while the nanomaterials rapidly self-assemble into a nanostructured protective layer on the droplet surface through the Marangoni effect.
The interfacial water marble can completely resist the coalescence cascade and exist nearly permanently at the hexane/water interface, provided that the hexane phase is not depleted by evaporation. Interfacial water marbles can also perform a series of stimuli-responsive motions when functional materials are integrated into the surface coating layer. Due to their uniqueness in both form and behavior, interfacial water marbles are expected to have notable applications in microfluidics, microreactors and mass transport.
See also
Pickering emulsion
Spherification (culinary process)
References
Liquids
Microfluidics
Surface science | Liquid marbles | [
"Physics",
"Chemistry",
"Materials_science"
] | 983 | [
"Microfluidics",
"Microtechnology",
"Phases of matter",
"Surface science",
"Condensed matter physics",
"Matter",
"Liquids"
] |
55,665,108 | https://en.wikipedia.org/wiki/Ultradivided%20matter | In physics, ultradivided matter is a family of states of matter characterised by a heterogeneous mixture of two or more different materials, where the interaction energy between the suspended phase is larger than kT. The term 'ultradivided matter' encapsulates several types of substance including: soap micelles, emulsions, and suspensions of solids such as colloids.).
An ultradivided state differs from a solution. In a steady-state solution, all interactions between a solution's constituent molecules are on the order of the thermal energy kT. Thus any otherwise aggregative force between similar molecules in a solution is subordinate to thermal fluctuations, and the solution does not allow flocculation of one of the constituent components. Ultradivided matter, however, is characterised by large interfacial energies, where the intermolecular interactions of one or more constituents of the substance are stronger than kT. This leads to different behaviour. An intuitive example of this can be seen in the tendency of a biphasic mixture of water (a polar liquid) and oil (a non-polar liquid) to spontaneously separate into two phases. This may seem to imply a change from a state with higher disorder, or higher entropy (a suspension of oil droplets in water), to a lower-entropy arrangement (a neat separation into two regions of different material). Such a transition would seem to violate the second law of thermodynamics, which is impossible. The resolution to this apparent paradox is that the interface between oil and water necessitates an ordered alignment of oil and water molecules at the interface. Minimisation of the surface area between the two phases thus correlates with an increase in the entropy of the system: the highest-entropy state has the minimum interfacial surface area between the two phases, and so a neat separation into two regions of different material results.
See also
Colloid
References
Solvents
Solutions
Condensed matter physics
Soft matter | Ultradivided matter | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 406 | [
"Soft matter",
"Phases of matter",
"Materials science",
"Homogeneous chemical mixtures",
"Condensed matter physics",
"Solutions",
"Matter"
] |
55,671,999 | https://en.wikipedia.org/wiki/Bypass%20transition | A bypass transition is a laminar–turbulent transition in a fluid flow over a surface. It occurs when a laminar boundary layer transitions to a turbulent one through some secondary instability mode, bypassing some of the pre-transitional events that typically occur in a natural laminar–turbulent transition.
History
The bypass transition scenario was first observed experimentally by P. S. Klebanoff during his experiments in elevated free-stream turbulence flow. Klebanoff identified an important aspect of the bypass transition. In an experiment using hot wires, he studied flow over a flat plate that was subjected to a 0.3% free-stream turbulence level. At this moderate free-stream turbulence level, he observed a velocity perturbation signal with a frequency below 12 Hz, much lower than the usual Tollmien–Schlichting wave frequency. He also observed fluctuations in boundary layer thickness, which do not occur in low-turbulence free-stream flow.
Pre-transitional flow structures
In bypass transition flow, the pre-transitional flow structures are different from those of very low turbulence free-stream flow. Laboratory experiments and computational studies have shown that low-frequency streaky flow structures are present in the laminar boundary layer. These streaky structures are called Klebanoff modes or K-modes.
Notes
References
Boundary layers
Turbulence
Fluid dynamics | Bypass transition | [
"Chemistry",
"Engineering"
] | 280 | [
"Turbulence",
"Chemical engineering",
"Boundary layers",
"Piping",
"Fluid dynamics"
] |
63,989,376 | https://en.wikipedia.org/wiki/Archimedean%20ordered%20vector%20space | In mathematics, specifically in order theory, a binary relation on a vector space over the real or complex numbers is called Archimedean if for all whenever there exists some such that for all positive integers then necessarily
An Archimedean (pre)ordered vector space is a (pre)ordered vector space whose order is Archimedean.
A preordered vector space is called almost Archimedean if for all x, whenever there exists a y such that −(1/n) y ≤ x ≤ (1/n) y for all positive integers n, then x = 0.
Characterizations
A preordered vector space with an order unit u is Archimedean preordered if and only if n x ≤ u for all non-negative integers n implies x ≤ 0.
Properties
Let X be a finite-dimensional ordered vector space over the reals. Then the order of X is Archimedean if and only if the positive cone of X is closed for the unique topology under which X is a Hausdorff TVS.
Order unit norm
Suppose X is an ordered vector space over the reals with an order unit u whose order is Archimedean, and let U = [−u, u].
Then the Minkowski functional p_U of U (defined by p_U(x) = inf { r > 0 : x ∈ r U }) is a norm, called the order unit norm.
It satisfies p_U(u) = 1, and the closed unit ball determined by p_U is equal to [−u, u] (that is, { x ∈ X : p_U(x) ≤ 1 } = [−u, u]).
Examples
The space of bounded real-valued maps on a set X with the pointwise order is Archimedean ordered with an order unit u = 1 (that is, the function that is identically 1 on X).
The order unit norm on this space is identical to the usual sup norm: ||f|| = sup { |f(x)| : x ∈ X }.
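Written out explicitly (with u the order unit and U = [−u, u] the order interval), the construction reads:

```latex
% Minkowski functional of U = [-u, u] (the order unit norm):
p_U(x) \;=\; \inf\{\, r > 0 : x \in rU \,\}
       \;=\; \inf\{\, r > 0 : -ru \leq x \leq ru \,\}.
% For bounded real-valued functions on a set X with order unit
% u = \mathbf{1} (the constant-one function), this reduces to the sup norm:
p_U(f) \;=\; \inf\{\, r > 0 : |f(x)| \leq r \ \text{for all } x \in X \,\}
       \;=\; \sup_{x \in X} |f(x)|.
```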
Examples
Every order complete vector lattice is Archimedean ordered.
A finite-dimensional vector lattice of dimension n is Archimedean ordered if and only if it is isomorphic to R^n with its canonical order.
However, a totally ordered vector space of dimension greater than 1 cannot be Archimedean ordered.
There exist ordered vector spaces that are almost Archimedean but not Archimedean.
The Euclidean space R^2 over the reals with the lexicographic order is not Archimedean ordered, since r (0, 1) ≤ (1, 1) for every r > 0 but (0, 1) ≠ (0, 0).
See also
References
Bibliography
Functional analysis
Order theory | Archimedean ordered vector space | [
"Mathematics"
] | 405 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Order theory"
] |
63,990,466 | https://en.wikipedia.org/wiki/WASP-36 | WASP-36 is a yellow main sequence star in the Hydra constellation.
Star characteristics
WASP-36 is a yellow main sequence star of spectral class G2, similar to the Sun. It has an unconfirmed stellar companion with apparent magnitude 14.03.
Planetary system
In 2010, the SuperWASP survey found the Hot Jupiter class planet WASP-36b using the transit method. Its temperature was measured to be 1705 K. The planetary transmission spectrum taken in 2016 has turned out to be anomalous: the planet appears to be surrounded by a blue-tinted halo that is too wide to be an atmosphere and may represent a measurement error.
Planetary dayside temperature measured in 2020 is 1440 K.
References
Planetary systems with one confirmed planet
Hydra (constellation)
G-type main-sequence stars
Planetary transit variables
36
J08461929-0801370 | WASP-36 | [
"Astronomy"
] | 175 | [
"Constellations",
"Hydra (constellation)",
"Astronomy organizations",
"Wide Angle Search for Planets"
] |
63,990,518 | https://en.wikipedia.org/wiki/Europium%20hydride | Europium hydride is the most common hydride of europium with a chemical formula EuH2. In this compound, europium atom is in the +2 oxidation state and the hydrogen atoms are -1. It is a ferromagnetic semiconductor.
Production
Europium hydride can be produced by directly reacting europium and hydrogen gas:
Eu + H2 → EuH2
Uses
EuH2 can be used as a source of Eu2+ to create metal–organic frameworks containing the Eu2+ ion.
References
Europium(II) compounds
Metal hydrides
Ferromagnetic materials
Semiconductor materials | Europium hydride | [
"Physics",
"Chemistry"
] | 138 | [
"Inorganic compounds",
"Ferromagnetic materials",
"Semiconductor materials",
"Reducing agents",
"Materials",
"Metal hydrides",
"Matter"
] |
63,991,113 | https://en.wikipedia.org/wiki/Base%2050 | A Base 50 engine is a generic term for engines that are reverse-engineered from the Honda air-cooled four-stroke single cylinder engine. Honda first offered these engines in 1958, on their Honda Super Cub 50. Honda has offered variations of this engine continuously, in sizes up to , since its introduction. The Honda Super Cub has been produced in excess of 100,000,000 units, the most successful motorized vehicle in history. With multiple manufactures utilizing clones of the Honda 50 engine for current mopeds, scooters, small motorcycles and power sport machines, it is the most produced engine in history.
The engines are usually identical in form, fit and function to Honda 50cc engines and the parts are usually interchangeable with genuine Honda parts.
The term Base 50 originated from the importation of modern-styled small pit bikes that use the Honda CRF50 as a design base. Base 50s have also been known as Chondas, a slang term arising from the influx of Honda clone engines, primarily from China. The name, a portmanteau of "Chinese" (or "clone") and "Honda", is seldom used in direct sales marketing.
Intellectual property infringement
Honda has a presence in China and shares manufacturing facilities with local industry, as required by Chinese local-content trade law. Despite this, Honda has never pursued infringements regarding patents or intellectual property specifically concerning the Honda 50 or Base 50 engines, as the patents for these engines are more than 30 years old. Honda has pursued legal action against Lifan regarding other business practices, including Lifan's use of the name "Hongda" for marketing motorcycles. Honda also won a patent lawsuit in 2007 regarding a separate infringement involving the sale of the Lifan LF100T motorcycle.
References
Engines
Honda motorcycles
engines | Base 50 | [
"Physics",
"Technology",
"Engineering"
] | 354 | [
"Physical systems",
"Machines",
"Reverse engineering",
"Engines"
] |
63,992,022 | https://en.wikipedia.org/wiki/Phoslactomycin%20B | Phoslactomycin (PLM) is a natural product from the isolation of Streptomyces species. This is an inhibitor of the protein serine/threonine phosphatase which is the protein phosphate 2A (PP2A). The PP2A involves the growth factor of the cell such as to induce the formation of mitogen-activated protein interaction and playing a role in cell division and signal transduction. Therefore, PLM is used for the drug that prevents the tumor, cancer, or bacteria. There are nowsaday has 7 kinds of different PLM from PLM A to PLM G which differ the post-synthesis from the biosynthesis of PLM.
Phoslactomycin B (PLM B) is the product of the post-synthesis steps of phoslactomycin biosynthesis and the intermediate used to produce the other PLMs. Phoslactomycin biosynthesis is carried out by a type I polyketide synthase (PKS). Polyketides are characterized by a macrocyclic lactone and are produced by bacteria and fungi. Starting from PLM B, many articles have described the synthesis of the different PLMs A through G.
Polyketide synthase domains
The domains in a type I polyketide synthase:
ACP: acyl carrier protein (serves as a chaperone)
AT: acyl transferase (transfers the acyl group from CoA to the ACP)
KS: ketosynthase (forms the new carbon–carbon bond)
KR: ketoreductase (NADPH-dependent; reduces the beta-ketone to a beta-hydroxyl)
DH: dehydratase (eliminates the beta-OH to give alpha/beta-unsaturation)
ER: enoyl reductase (NADPH-dependent; reduces the alpha/beta-unsaturation)
TE: thioesterase (hydrolyses/cyclizes the product)
Biosynthesis pathway of phoslactomycin
The PKS of phoslactomycin has one loading domain, 7 modules, and 6 encoding proteins: PnA, PnB, PnC, PnD, PnE, and PnF. The biosynthesis starts with loading of cyclohexyl-CoA. In each module, a ketosynthase (KS) creates a new carbon–carbon linkage to elongate the chain, and an acyl transferase (AT) transfers the acyl group to the ACP domain. The ACP then serves as the acyl carrier for the further reactions, and each module has a ketoreductase at the end that reduces the ketone to a more stable hydroxyl group. Module 1 uses the precursor malonyl-CoA and a dehydratase domain to create a double bond.
Similarly, modules 2, 5 and 7 have the same five domains KS-AT-ACP-DH-KR, with module 7 carrying an additional thioesterase (TE) at the end that releases and cyclizes the chain to form the ring of the phoslactomycin product. Modules 4 and 6 have four domains, KS-AT-ACP-KR, and use the precursor ethylmalonyl-CoA. The final product is phoslactomycin.
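The module layout described above can be summarised as a small data structure (transcribed from the description; module 3 is not described in the text and is omitted, and extender units the text does not specify are marked as unspecified):

```python
# Domain layout of the phoslactomycin PKS as described above.
# "extender" is None where the text does not specify the extender unit;
# module 3 is not described in the text and is therefore omitted.
plm_pks = {
    "module 1": {"extender": "malonyl-CoA",      "domains": ["KS", "AT", "ACP", "DH", "KR"]},
    "module 2": {"extender": None,               "domains": ["KS", "AT", "ACP", "DH", "KR"]},
    "module 4": {"extender": "ethylmalonyl-CoA", "domains": ["KS", "AT", "ACP", "KR"]},
    "module 5": {"extender": None,               "domains": ["KS", "AT", "ACP", "DH", "KR"]},
    "module 6": {"extender": "ethylmalonyl-CoA", "domains": ["KS", "AT", "ACP", "KR"]},
    "module 7": {"extender": None,               "domains": ["KS", "AT", "ACP", "DH", "KR", "TE"]},
}

# Every module carries the minimal KS-AT-ACP elongation machinery,
# and only the final module carries the chain-releasing thioesterase.
assert all({"KS", "AT", "ACP"} <= set(m["domains"]) for m in plm_pks.values())
assert [name for name, m in plm_pks.items() if "TE" in m["domains"]] == ["module 7"]
```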
Phoslactomycin family
PLM is produced by isolation from Streptomyces platensis. The genes PnT1 and PnT2 regulate the post-synthesis of PLM to form PLM B through phosphorylation and addition of an amine group. Based on the biosynthesis analysis reported in the journal Gene (Figure 4), PLM B is used to produce PLM A and four further PLMs, C–F. PLMs A–F are the post-synthesis products of PLM biosynthesis, formed through modification by the enzymes PnT1–T8.
PLMs regulate the actin cytoskeleton by indirectly inducing actin depolymerization. In experiments, PLM F does not affect the polymerization of purified actin in vitro; however, PLM F enhances the phosphorylation of intracellular vimentin.
Footnotes
Phosphatase inhibitors
Natural products | Phoslactomycin B | [
"Chemistry"
] | 907 | [
"Natural products",
"Medicinal chemistry"
] |
63,992,561 | https://en.wikipedia.org/wiki/Sulfate%20nitrates | The sulfate nitrates are a family of double salts that contain both sulfate and nitrate ions (NO3−, SO42−). They are in the class of mixed anion compounds. A few rare minerals are in this class. Two sulfate nitrates are in the class of anthropogenic compounds, accidentally made as a result of human activities in fertilizers that are a mix of ammonium nitrate and ammonium sulfate, and also in the atmosphere as polluting ammonia, nitrogen dioxide, and sulfur dioxide react with the oxygen and water there to form solid particles. The nitro group (NO3−) can act as a ligand, and complexes containing it can form salts with sulfate.
List
References
Sulfates
Nitrates
Mixed anion compounds | Sulfate nitrates | [
"Physics",
"Chemistry"
] | 153 | [
"Matter",
"Mixed anion compounds",
"Sulfates",
"Oxidizing agents",
"Nitrates",
"Salts",
"Ions"
] |
60,491,326 | https://en.wikipedia.org/wiki/Magnetic%20space%20group | In solid state physics, the magnetic space groups, or Shubnikov groups, are the symmetry groups which classify the symmetries of a crystal both in space, and in a two-valued property such as electron spin. To represent such a property, each lattice point is colored black or white, and in addition to the usual three-dimensional symmetry operations, there is a so-called "antisymmetry" operation which turns all black lattice points white and all white lattice points black. Thus, the magnetic space groups serve as an extension to the crystallographic space groups which describe spatial symmetry alone.
The application of magnetic space groups to crystal structures is motivated by Curie's principle. Compatibility with a material's symmetries, as described by the magnetic space group, is a necessary condition for a variety of material properties, including ferromagnetism, ferroelectricity, and topological insulation.
History
A major step was the work of Heinrich Heesch, who first rigorously established the concept of antisymmetry as part of a series of papers in 1929 and 1930. Applying this antisymmetry operation to the 32 crystallographic point groups gives a total of 122 magnetic point groups. However, although Heesch correctly laid out each of the magnetic point groups, his work remained obscure, and the point groups were later re-derived by Tavger and Zaitsev. The concept was more fully explored by Shubnikov in terms of color symmetry. When applied to space groups, the number increases from the usual 230 three dimensional space groups to 1651 magnetic space groups, as found in the 1953 thesis of Alexandr Zamorzaev. While the magnetic space groups were originally found using geometry, it was later shown the same magnetic space groups can be found using generating sets.
Description
Magnetic space groups
The magnetic space groups can be placed into three categories. First, the 230 colorless groups contain only spatial symmetry and correspond to the crystallographic space groups. Then there are the 230 grey groups, which are invariant under antisymmetry. Finally, there are the 1191 black-white groups, which contain the more complex symmetries. There are two common conventions for naming the magnetic space groups: Opechowski–Guccione (named after Wladyslaw Opechowski and Rosalia Guccione) and Belov–Neronova–Smirnova. For colorless and grey groups, the conventions use the same names, but they treat the black-white groups differently. A full list of the magnetic space groups (in both conventions) can be found both in the original papers and in several places online.
The types can be distinguished by their different constructions. Type I magnetic space groups are identical to the ordinary space groups.
Type II magnetic space groups are made up of all the symmetry operations of a crystallographic space group, plus the products of those operations with the time reversal operation. Equivalently, this can be seen as the direct product of an ordinary space group with the two-element group consisting of the identity and time reversal.
Type III magnetic space groups, , are constructed using a group , which is a subgroup of with index 2.
Type IV magnetic space groups, , are constructed with the use of a pure translation, , which is Seitz notation for null rotation and a translation, . Here the is a vector (usually given in fractional coordinates) pointing from a black colored point to a white colored point, or vice versa.
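The construction rules above can be illustrated with a small sketch. The Python fragment below (an illustrative toy, not from the article) represents each operation as a pair (name, primed), where primed marks combination with time reversal 1′, and builds the type I–III groups from the point group 2/m; type IV additionally requires the anti-translation {E|t₀}1′ and so only arises for full space groups, so it is omitted here.

```python
# Toy sketch of magnetic group types I-III.  Operations are (name, primed)
# pairs; primed=True means the operation is combined with time reversal 1'.
G = ["E", "C2", "m", "i"]   # ordinary point group 2/m
H = ["E", "C2"]             # index-2 subgroup (the proper rotations)

type_I   = {(g, False) for g in G}                 # M = G (colorless)
type_II  = type_I | {(g, True) for g in G}         # M = G + G1' (grey)
# M = H + (G - H)1': unprimed operations from H, primed ones from G - H
type_III = {(h, False) for h in H} | {(g, True) for g in G if g not in H}

print(len(type_I), len(type_II), len(type_III))    # 4 8 4
```

Note how in the type III group the mirror m appears only in its primed form, while the rotations remain unprimed, matching the index-2 construction.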
Magnetic point groups
The following table lists all 122 possible three-dimensional magnetic point groups, given in the short version of Hermann–Mauguin notation. Here, the addition of an apostrophe to a symmetry operation indicates that the combination of the symmetry element and the antisymmetry operation is a symmetry of the structure. There are 32 crystallographic point groups, 32 grey groups, and 58 magnetic point groups.
The magnetic point groups which are compatible with ferromagnetism are colored cyan, the magnetic point groups which are compatible with ferroelectricity are colored red, and the magnetic point groups which are compatible with both ferromagnetism and ferroelectricity are colored purple. There are 31 magnetic point groups which are compatible with ferromagnetism. These groups, sometimes called admissible, leave at least one component of the spin invariant under operations of the point group. There are 31 point groups compatible with ferroelectricity; these are generalizations of the crystallographic polar point groups. There are also 31 point groups compatible with the theoretically proposed ferrotoroidicity. Similar symmetry arguments have been extended to other electromagnetic material properties such as magnetoelectricity or piezoelectricity.
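The admissibility criterion can be sketched numerically. Spin is an axial vector that changes sign under time reversal, so an operation (R, primed) acts on it as S → (−1)^primed det(R) R S; group-averaging this action yields a projector onto the invariant spin subspace, which is nonzero exactly for the admissible groups. The helper names below are hypothetical, not from the article.

```python
import numpy as np

# Spin is an axial vector: an operation (R, primed) sends
#   S -> (-1)^primed * det(R) * R @ S.
def spin_action(R, primed):
    return (-1 if primed else 1) * np.linalg.det(R) * R

def admits_ferromagnetism(ops):
    # Group-average the spin action; the average is a projector onto the
    # subspace of invariant spins, nonzero iff the group is admissible.
    P = sum(spin_action(R, p) for R, p in ops) / len(ops)
    return np.linalg.matrix_rank(P, tol=1e-9) > 0

E   = np.eye(3)
C2z = np.diag([-1.0, -1.0, 1.0])

# 2' (two-fold rotation combined with 1') leaves in-plane spin invariant:
print(admits_ferromagnetism([(E, False), (C2z, True)]))   # True
# The grey group 11' reverses every spin, so no spin survives:
print(admits_ferromagnetism([(E, False), (E, True)]))     # False
```

The same averaging trick works for polar vectors (drop the det(R) factor and the time-reversal sign) to test compatibility with ferroelectricity.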
The following diagrams show the stereographic projection of most of the magnetic point groups onto a flat surface. Not shown are the grey point groups, which look identical to the ordinary crystallographic point groups, except they are also invariant under the antisymmetry operation.
Black-white Bravais lattices
The black-white Bravais lattices characterize the translational symmetry of the structure like the typical Bravais lattices, but also contain additional symmetry elements. For black-white Bravais lattices, the number of black and white sites is always equal. There are 14 traditional Bravais lattices, 14 grey lattices, and 22 black-white Bravais lattices, for a total of 50 two-color lattices in three dimensions.
The table shows the 36 black-white Bravais lattices, including the 14 traditional Bravais lattices, but excluding the 14 gray lattices which look identical to the traditional lattices. The lattice symbols are those used for the traditional Bravais lattices. The suffix in the symbol indicates the mode of centering by the black (antisymmetry) points in the lattice, where s denotes edge centering.
Magnetic superspace groups
When the periodicity of the magnetic order coincides with the periodicity of crystallographic order, the magnetic phase is said to be commensurate, and can be well-described by a magnetic space group. However, when this is not the case, the order does not correspond to any magnetic space group. These phases can instead be described by magnetic superspace groups, which describe incommensurate order. This is the same formalism often used to describe the ordering of some quasicrystals.
Phase transitions
The Landau theory of second-order phase transitions has been applied to magnetic phase transitions. The magnetic space group of the disordered structure, G, transitions to the magnetic space group of the ordered phase, H. H is a subgroup of G, and keeps only the symmetries which have not been broken during the phase transition. This can be tracked numerically by the evolution of the order parameter, which belongs to a single irreducible representation of G.
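The subgroup relation between the disordered and ordered phases can be sketched with the same two-valued bookkeeping. In this toy example (not the article's notation), a grey paramagnetic group loses half of its operations on ordering, and the set difference lists exactly the symmetries broken at the transition.

```python
# Toy Landau-style symmetry breaking: operations are (name, primed) pairs.
para    = {("E", False), ("E", True), ("C2", False), ("C2", True)}  # grey group G
ordered = {("E", False), ("C2", True)}                              # magnetic group H

assert ordered <= para        # H must be a subgroup of G
broken = para - ordered       # symmetries lost at the transition
print(sorted(broken))         # [('C2', False), ('E', True)]
```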
Important magnetic phase transitions include the paramagnetic to ferromagnetic transition at the Curie temperature and the paramagnetic to antiferromagnetic transition at the Néel temperature. Differences in the magnetic phase transitions explain why Fe2O3, MnCO3, and CoCO3 are weakly ferromagnetic, whereas the structurally similar Cr2O3 and FeCO3 are purely antiferromagnetic. This theory developed into what is now known as antisymmetric exchange.
A related scheme is the classification of Aizu species, which consist of a prototypical non-ferroic magnetic point group, the letter "F" for ferroic, and a ferromagnetic or ferroelectric point group that is a subgroup of the prototypical group and can be reached by continuous motion of the atoms in the crystal structure.
Applications and extensions
The main application of these space groups is to magnetic structure, where the black/white lattice points correspond to the spin up/spin down configurations of electron spin. More abstractly, the magnetic space groups are often thought of as representing time reversal symmetry. This is in contrast to time crystals, which instead have time translation symmetry. In the most general form, magnetic space groups can represent symmetries of any two-valued lattice point property, such as positive/negative electrical charge or the alignment of electric dipole moments. The magnetic space groups place restrictions on the electronic band structure of materials. Specifically, they place restrictions on the connectivity of the different electron bands, which in turn defines whether a material has symmetry-protected topological order. Thus, the magnetic space groups can be used to identify topological materials, such as topological insulators.
Experimentally, the main source of information about magnetic space groups is neutron diffraction experiments. The resulting experimental profile can be matched to theoretical structures by Rietveld refinement or simulated annealing.
Adding the two-valued symmetry is also a useful concept for frieze groups which are often used to classify artistic patterns. In that case, the 7 frieze groups with the addition of color reversal become 24 color-reversing frieze groups. Beyond the simple two-valued property, the idea has been extended further to three colors in three dimensions, and to even higher dimensions and more colors.
See also
Space group
Magnetic structure
Group theory
References
External links
Crystallography
Magnetic ordering
Group theory | Magnetic space group | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,893 | [
"Electric and magnetic fields in matter",
"Materials science",
"Group theory",
"Magnetic ordering",
"Crystallography",
"Fields of abstract algebra",
"Condensed matter physics"
] |