id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
72,320,451 | https://en.wikipedia.org/wiki/Harmonic%20tensors | In this article spherical functions are replaced by polynomials that have been well known in electrostatics since the time of Maxwell and associated with multipole moments. In physics, dipole and quadrupole moments typically appear because fundamental concepts of physics are associated precisely with them.
Dipole and quadrupole moments are:
$d_i = \int \rho(\mathbf{r})\, x_i \, dV$,
$Q_{ik} = \int \rho(\mathbf{r})\, (3 x_i x_k - \delta_{ik} r^2) \, dV$,
where $\rho(\mathbf{r})$ is the density of charges (or of another quantity).
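As a minimal numerical illustration (the charges and geometry below are hypothetical, not from the article), the discrete analogues of these integrals can be computed directly; note that the quadrupole tensor built this way is automatically traceless:

```python
import numpy as np

# Linear quadrupole: +1 at z = +/-0.5 and -2 at the origin (hypothetical values)
charges = np.array([1.0, -2.0, 1.0])
positions = np.array([[0.0, 0.0, 0.5],
                      [0.0, 0.0, 0.0],
                      [0.0, 0.0, -0.5]])

# Dipole moment: d_i = sum_q q * x_i
dipole = charges @ positions

# Quadrupole moment: Q_ik = sum_q q * (3 x_i x_k - r^2 delta_ik)
quad = sum(q * (3.0 * np.outer(x, x) - (x @ x) * np.eye(3))
           for q, x in zip(charges, positions))

print(dipole)          # [0. 0. 0.] -- a pure quadrupole source
print(np.trace(quad))  # 0.0, the defining trace-free property
```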
The octupole moment is used rather seldom. As a rule, high-rank moments are calculated with the help of spherical functions.
Spherical functions are convenient in scattering problems, while polynomials are preferable in calculations with differential operators. Here, the properties of harmonic tensors, including high-rank moments, are shown to essentially repeat the features of solid spherical functions while having specifics of their own.
The use of invariant polynomial tensors in Cartesian coordinates, as shown in a number of recent studies, is preferable and simplifies the fundamental scheme of calculations.
Spherical coordinates are not involved here. The rules for using harmonic symmetric tensors, which follow directly from their properties, are demonstrated. These rules are
naturally reflected in the theory of special functions, but are not always obvious, even though the group properties are general.
At any rate, let us recall the main property of harmonic tensors: the trace over any pair of indices vanishes.
Here, those properties of tensors are selected that not only make analytic calculations more compact and reduce 'the number of factorials' but also allow some fundamental questions of theoretical physics to be formulated correctly.
General properties
Four properties of a symmetric tensor lead to its use in physics.
A. The tensor is a homogeneous polynomial:
$M_{i_1 i_2 \ldots i_l}(a\,\mathbf{r}) = a^l\, M_{i_1 i_2 \ldots i_l}(\mathbf{r})$,
where $l$ is the number of indices, i.e., the tensor rank;
B. The tensor is symmetric with respect to its indices;
C. The tensor is harmonic, i.e., it is a solution of the Laplace equation:
$\Delta\, M_{i_1 i_2 \ldots i_l}(\mathbf{r}) = 0$;
D. The trace over any two indices vanishes:
$M_{s s i_3 \ldots i_l}(\mathbf{r}) = 0$,
where the symbol $[i_3 \ldots i_l]$ denotes the indices remaining after equating $i_1 = i_2 = s$ and summing over $s$.
The components of the tensor are solid spherical functions. The tensor can be divided by the factor $r^l$ to acquire components in the form of (surface) spherical functions.
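A short symbolic check of properties B–D for the rank-2 tensor written out later in the article, $M_{ik} = 3x_i x_k - \delta_{ik} r^2$ (a sketch of ours, not part of the article):

```python
import sympy as sp

x = sp.symbols('x0 x1 x2')
r2 = sum(c**2 for c in x)
delta = lambda i, k: 1 if i == k else 0

M2 = lambda i, k: 3*x[i]*x[k] - r2*delta(i, k)      # rank-2 harmonic tensor
lap = lambda f: sum(sp.diff(f, c, 2) for c in x)    # Laplace operator

# B: symmetry, C: harmonicity, D: zero trace
assert all(M2(i, k) == M2(k, i) for i in range(3) for k in range(3))
assert all(sp.simplify(lap(M2(i, k))) == 0 for i in range(3) for k in range(3))
assert sp.simplify(sum(M2(s, s) for s in range(3))) == 0
```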
Multipole tensors in electrostatics
The multipole potentials arise when the potential of a point charge is expanded in powers of the coordinates of its radius vector $\mathbf{r}_0$ ('Maxwell poles'). For the potential
$\dfrac{1}{|\mathbf{r} - \mathbf{r}_0|}$,
there is the well-known formula:
$\dfrac{1}{|\mathbf{r} - \mathbf{r}_0|} = \sum_{l=0}^{\infty} \dfrac{\mathbf{r}_0^{\otimes l} \cdot \mathbf{M}_l(\mathbf{r})}{l!\; r^{2l+1}}$,
where the following notation is used. For the $l$th tensor power of the radius vector,
$\mathbf{r}^{\otimes l} = \{\, x_{i_1} x_{i_2} \cdots x_{i_l} \,\}$,
and for a symmetric harmonic tensor of rank $l$,
$M_{i_1 i_2 \ldots i_l}(\mathbf{r}) = (-1)^l\, r^{2l+1}\, \nabla_{i_1} \nabla_{i_2} \cdots \nabla_{i_l} \dfrac{1}{r}$.
The tensor $\mathbf{M}_l(\mathbf{r})$ is a homogeneous harmonic polynomial with the general properties described above. Contraction over any two indices (when the two corresponding gradients become the Laplace operator) is null. If the tensor is divided by $r^{2l+1}$, then a multipole harmonic tensor arises,
$\mathbf{M}^{(-)}_l(\mathbf{r}) = \dfrac{\mathbf{M}_l(\mathbf{r})}{r^{2l+1}} = (-1)^l\, \nabla^{\otimes l}\, \dfrac{1}{r}$,
which is also a homogeneous harmonic function, with homogeneity degree $-(l+1)$.
From the formula for the potential it follows that
$\nabla_{i_{l+1}}\, \dfrac{M_{i_1 \ldots i_l}(\mathbf{r})}{r^{2l+1}} = -\,\dfrac{M_{i_1 \ldots i_{l+1}}(\mathbf{r})}{r^{2l+3}}$,
which allows one to construct a ladder operator.
Theorem on power-law equivalent moments in electrostatics
There is an obvious property of contraction,
$\mathbf{M}_l(\mathbf{r}) \cdot \mathbf{r}_0^{\otimes l} = \mathbf{M}_l(\mathbf{r}_0) \cdot \mathbf{r}^{\otimes l}$,
that gives rise to a theorem simplifying essentially the calculation of moments in theoretical physics.
Theorem
Let $\rho(\mathbf{r})$ be a distribution of charge. When calculating a multipole potential,
power-law moments can be used instead of harmonic tensors (or instead of spherical functions):
$\phi_l(\mathbf{r}) = \dfrac{\mathbf{M}_l(\mathbf{r})}{l!\; r^{2l+1}} \cdot \int \rho(\mathbf{r}_0)\, \mathbf{r}_0^{\otimes l}\, dV_0 = \dfrac{\mathbf{r}^{\otimes l}}{l!\; r^{2l+1}} \cdot \int \rho(\mathbf{r}_0)\, \mathbf{M}_l(\mathbf{r}_0)\, dV_0$.
This is an advantage in comparison with the use of spherical functions.
Example 1.
For the quadrupole moment, instead of the integral
$Q_{ik} = \int \rho(\mathbf{r})\, (3 x_i x_k - \delta_{ik} r^2)\, dV$,
one can use the 'short' integral
$q_{ik} = \int \rho(\mathbf{r})\, x_i x_k\, dV$.
The moments are different, but the potentials are equal to each other.
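A numeric sketch of the theorem (with hypothetical charges and an arbitrary far-field point): because $M_{ik}(\mathbf{r})$ is traceless, $Q_{ik}\,x_i x_k = q_{ik}\,M_{ik}(\mathbf{r})$, so the two quadrupole potentials coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
charges = rng.normal(size=5)            # hypothetical charges
pos = rng.normal(size=(5, 3))           # hypothetical positions

q_short = sum(q * np.outer(x, x) for q, x in zip(charges, pos))
Q_harm = 3*q_short - np.trace(q_short)*np.eye(3)     # traceless moment

r = np.array([10.0, -3.0, 7.0])                      # far-field point
rn = np.linalg.norm(r)
M2 = 3*np.outer(r, r) - rn**2*np.eye(3)              # harmonic tensor of r

phi_a = np.einsum('ik,ik', Q_harm, np.outer(r, r)) / (2*rn**5)
phi_b = np.einsum('ik,ik', q_short, M2) / (2*rn**5)
assert np.isclose(phi_a, phi_b)                      # identical potentials
```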
Formula for a harmonic tensor
A formula for the tensor can be obtained using a ladder operator.
It can also be derived using the Laplace operator; a similar approach is known in the theory of special functions. The first term in the formula, as is easy
to see from the expansion of a point-charge potential, is equal to
$(2l-1)!!\; x_{i_1} x_{i_2} \cdots x_{i_l}$.
The remaining terms can be obtained by repeatedly applying the Laplace operator and
multiplying by an even power of the modulus $r$. The coefficients are easy to determine by substituting the expansion into the Laplace equation $\Delta M_l = 0$. As a result, the formula is
$M_{i_1 \ldots i_l}(\mathbf{r}) = \sum_{k=0}^{[l/2]} (-1)^k\, (2l - 2k - 1)!!\; r^{2k} \sum_{\text{perm}} \delta_{i_1 i_2} \cdots \delta_{i_{2k-1} i_{2k}}\, x_{i_{2k+1}} \cdots x_{i_l}$,
where the inner sum is taken over all independent permutations of the indices.
This form is useful for applying differential operators of quantum mechanics and electrostatics to it. Differentiation generates products of Kronecker symbols.
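The defining derivative formula and the explicit expansion can be cross-checked symbolically; the sketch below (our helper `M`, not from the article) verifies two rank-3 components of $M_{i_1\ldots i_l}(\mathbf{r}) = (-1)^l r^{2l+1}\nabla_{i_1}\cdots\nabla_{i_l}(1/r)$ against the double-factorial formula above:

```python
import sympy as sp

x = sp.symbols('x0 x1 x2')
r = sp.sqrt(sum(c**2 for c in x))

def M(indices):
    """Harmonic tensor component via repeated gradients of 1/r."""
    f = 1/r
    for i in indices:
        f = sp.diff(f, x[i])
    l = len(indices)
    return sp.simplify((-1)**l * r**(2*l + 1) * f)

# M_012 = 15 x0 x1 x2 (the Kronecker terms vanish for three distinct indices)
assert sp.simplify(M((0, 1, 2)) - 15*x[0]*x[1]*x[2]) == 0
# M_001 = 15 x0^2 x1 - 3 r^2 x1
assert sp.simplify(M((0, 0, 1)) - (15*x[0]**2*x[1] - 3*r**2*x[1])) == 0
```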
Example 2.
$\nabla_j\, M_i = \nabla_j\, x_i = \delta_{ij}$,
$\nabla_j\, M_{ik} = 3(\delta_{ij} x_k + \delta_{jk} x_i) - 2\, \delta_{ik} x_j$,
$\nabla_j\, M_{ikl} = 15(\delta_{ij} x_k x_l + \delta_{jk} x_i x_l + \delta_{jl} x_i x_k) - 6\, x_j (\delta_{ik} x_l + \delta_{il} x_k + \delta_{kl} x_i) - 3\, r^2 (\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk} + \delta_{kl} \delta_{ij})$.
The last equality can be verified by calculating the contraction with $\delta_{ik}$, which must vanish. It is convenient
to write the differentiation formula in terms of the symmetrization operation.
A symbol for it was proposed, with the sum taken over all independent
permutations of the indices:
$\delta_{[i_1 i_2}\, x_{i_3]} \equiv \delta_{i_1 i_2} x_{i_3} + \delta_{i_1 i_3} x_{i_2} + \delta_{i_2 i_3} x_{i_1}$.
As a result, the following formula is obtained:
$\nabla_j\, M_{i_1 \ldots i_l}(\mathbf{r}) = (2l-1)\; \delta_{j [i_1}\, M_{i_2 \ldots i_l]}(\mathbf{r}) - 2\; \delta_{[i_1 i_2}\, M_{i_3 \ldots i_l] j}(\mathbf{r})$,
where the symbol $\delta^{\otimes k}$ is used for a tensor power of the Kronecker symbol and the conventional symbol [..] marks the subscripts that are being permuted under symmetrization.
Following this, one can find the relation between the tensor and solid spherical functions. Two unit vectors are needed: the vector $\mathbf{n}_z$ directed along the $z$-axis and the complex vector $\mathbf{m} = (\mathbf{e}_x + i\,\mathbf{e}_y)/\sqrt{2}$.
Contraction with their powers gives the required relation; in particular,
$M_{i_1 \ldots i_l}(\mathbf{r})\, (n_z)_{i_1} \cdots (n_z)_{i_l} = l!\; r^l\, P_l(\cos\theta)$,
where $P_l$ is a Legendre polynomial.
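This relation can be spot-checked for $l = 2$, where taking both indices along the $z$-axis gives $M_{zz} = 3z^2 - r^2 = 2!\, r^2 P_2(\cos\theta)$ (a check of ours, not from the article):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
M_zz = 3*z**2 - r**2                     # both indices along the z-axis
P2 = sp.legendre(2, z/r)                 # Legendre polynomial P_2(cos theta)
assert sp.simplify(M_zz - sp.factorial(2)*r**2*P2) == 0
```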
Special contractions
In perturbation theory, it is necessary to expand the source in terms of spherical functions. If the source is a polynomial, for example, when calculating the Stark effect, then the integrals are standard but cumbersome. When calculating with the help of invariant tensors, the expansion coefficients are simplified, and there is then no need for integrals. It suffices, as has been shown, to calculate contractions that lower the rank of the tensors under consideration.
Instead of integrals, the operation of calculating the trace
of a tensor over two indices is used. The following rank reduction formula is useful:
$\big(\delta_{[i_1 i_2}\, M_{i_3 \ldots i_l]}\big)_{s s [m]} = (2l-1)\, M_{[m]}$,
where the symbol $[m]$ denotes the remaining $(l-2)$ indices.
If the brackets contain several factors with the Kronecker delta, the following relation
holds:
$\big(\delta^{\otimes k}_{[\ldots}\, M_{i_{2k+1} \ldots i_l]}\big)_{s s [m]} = \big(2(l-k)+1\big)\, \big(\delta^{\otimes (k-1)}_{[\ldots}\, M_{i_{2k+1} \ldots i_l]}\big)_{[m]}$.
Calculating the trace reduces the number of Kronecker symbols by one, and the rank of the harmonic tensor on the right-hand side of the equation decreases by two. Repeating the calculation of the trace $k$ times eliminates all the Kronecker symbols:
$\big(\delta^{\otimes k}_{[\ldots}\, M_{i_{2k+1} \ldots i_l]}\big)_{s_1 s_1 \ldots s_k s_k} = \dfrac{(2l-2k+1)!!}{(2l-4k+1)!!}\; M_{i_{2k+1} \ldots i_l}$.
Harmonic 4D tensors
The Laplace equation in four-dimensional (4D) space has its own specifics. The potential of a point charge in 4D space is equal to
$\dfrac{1}{|\mathbf{r} - \mathbf{r}_0|^2}$.
From the expansion of the point-charge potential with respect to powers of $\mathbf{r}_0$, the multipole 4D potential arises:
$\dfrac{1}{|\mathbf{r} - \mathbf{r}_0|^2} = \sum_{l=0}^{\infty} \dfrac{\mathbf{r}_0^{\otimes l} \cdot \mathbf{M}^{(4)}_l(\mathbf{r})}{r^{2(l+1)}}$,
where $\mathbf{M}^{(4)}_l(\mathbf{r}) = \dfrac{(-1)^l}{l!}\, r^{2l+2}\, \nabla^{\otimes l}\, \dfrac{1}{r^2}$. The harmonic tensor in the numerator has a structure similar to that of the 3D harmonic tensor, and its contraction with respect to any two indices must vanish. The dipole and quadrupole 4D tensors, as follows from here, are expressed as
$M^{(4)}_i(\mathbf{r}) = 2\, x_i$,
$M^{(4)}_{ik}(\mathbf{r}) = 4\, x_i x_k - \delta_{ik}\, r^2$.
The leading term of the expansion, as can be seen, is equal to
$2^l\, x_{i_1} x_{i_2} \cdots x_{i_l}$.
The method described for the 3D tensor gives the relations
$M^{(4)}_{ijk}(\mathbf{r}) = 8\, x_i x_j x_k - \tfrac{4}{3}\, r^2 (\delta_{ij} x_k + \delta_{ik} x_j + \delta_{jk} x_i)$,
$M^{(4)}_{i_1 \ldots i_l}(\mathbf{r}) = 2^l \left( x_{i_1} \cdots x_{i_l} - \dfrac{r^2}{2l}\; \delta_{[i_1 i_2}\, x_{i_3} \cdots x_{i_l]} + \ldots \right)$.
Four-dimensional tensors are structurally simpler than 3D tensors.
Decomposition of polynomials in terms of harmonic functions
Applying the contraction rules allows decomposing a tensor power of the radius vector with respect to the harmonic tensors.
In perturbation theory, even the third approximation is often considered good. Here, the decomposition of the tensor power is given for the lowest ranks:
$x_{i_1} x_{i_2} = \dfrac{1}{3}\, M_{i_1 i_2} + \dfrac{1}{3}\, r^2\, \delta_{i_1 i_2}$,
$x_{i_1} x_{i_2} x_{i_3} = \dfrac{1}{15}\, M_{i_1 i_2 i_3} + \dfrac{1}{5}\, r^2\, \delta_{[i_1 i_2}\, M_{i_3]}$,
$x_{i_1} x_{i_2} x_{i_3} x_{i_4} = \dfrac{1}{105}\, M_{i_1 i_2 i_3 i_4} + \dfrac{1}{21}\, r^2\, \delta_{[i_1 i_2}\, M_{i_3 i_4]} + \dfrac{1}{15}\, r^4\, \delta_{[i_1 i_2}\, \delta_{i_3 i_4]}$,
and the decompositions up to rank $l = 6$ are obtained in the same way.
To derive the formulas, it is useful to calculate the contraction with respect to two indices, i.e., the trace; the formula for rank $l$ then implies the formula for rank $l-2$. In applying the trace, it is convenient to use the rules of the previous section. In particular, the last term of the relations for even values of $l$ has the form
$\dfrac{r^l}{(l+1)!!}\; \delta_{[i_1 i_2} \cdots \delta_{i_{l-1} i_l]}$.
Also useful is the frequently occurring contraction over all indices,
which arises when normalizing the states:
$M_{i_1 \ldots i_l}(\mathbf{r})\, M_{i_1 \ldots i_l}(\mathbf{r}) = l!\; (2l-1)!!\; r^{2l}$.
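For $l = 2$ the stated contraction reads $M_{ik} M_{ik} = 2! \cdot 3!!\, r^4 = 6 r^4$, which is easy to confirm numerically at an arbitrary (hypothetical) point:

```python
import numpy as np

x = np.array([0.3, -1.2, 2.0])            # arbitrary test point
r2 = x @ x
M2 = 3*np.outer(x, x) - r2*np.eye(3)      # rank-2 harmonic tensor
assert np.isclose(np.einsum('ik,ik', M2, M2), 6*r2**2)
```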
Decomposition of polynomials in 4D space
The decomposition of tensor powers of a vector is also compact in four dimensions; for the lowest ranks,
$x_{i_1} x_{i_2} = \dfrac{1}{4}\, M^{(4)}_{i_1 i_2} + \dfrac{1}{4}\, r^2\, \delta_{i_1 i_2}$,
$x_{i_1} x_{i_2} x_{i_3} = \dfrac{1}{8}\, M^{(4)}_{i_1 i_2 i_3} + \dfrac{1}{12}\, r^2\, \delta_{[i_1 i_2}\, M^{(4)}_{i_3]}$,
and the higher ranks are obtained analogously. When the tensor notation with indices suppressed is used, the last equality becomes
$\mathbf{r}^{\otimes 3} = \dfrac{1}{8}\, \mathbf{M}^{(4)}_3 + \dfrac{1}{12}\, r^2\, \boldsymbol{\delta}\, \mathbf{M}^{(4)}_1$.
The decomposition of higher powers is no more difficult, using contractions over two indices.
Ladder operator
Ladder operators are useful for representing eigenfunctions in a compact form.
They are a basis for constructing coherent states. The operators considered here are, in many respects, close to the 'creation' and 'annihilation' operators of the harmonic oscillator.
Efimov's operator, which increases the value of the rank by one, can be obtained from the expansion of the point-charge potential:
$\dfrac{1}{|\mathbf{r} - \mathbf{r}_0|} = \sum_{l=0}^{\infty} \dfrac{\mathbf{r}_0^{\otimes l} \cdot \mathbf{M}_l(\mathbf{r})}{l!\; r^{2l+1}}$.
Straightforward differentiation on the left-hand side of the equation yields a vector operator acting on a harmonic tensor:
$\hat{D}_i = x_i\, (2\hat{N} + 1) - r^2\, \nabla_i$,
where the operator
$\hat{N} = (\mathbf{r} \cdot \boldsymbol{\nabla})$
multiplies a homogeneous polynomial by its degree of homogeneity $l$.
In particular,
$\hat{D}_i\, 1 = x_i = M_i(\mathbf{r})$,
$\hat{D}_i\, x_k = 3\, x_i x_k - \delta_{ik}\, r^2 = M_{ik}(\mathbf{r})$.
As a result of an $l$-fold application to unity, the harmonic tensor arises:
$M_{i_1 i_2 \ldots i_l}(\mathbf{r}) = \hat{D}_{i_1} \hat{D}_{i_2} \cdots \hat{D}_{i_l}\, 1 = (-1)^l\, r^{2l+1}\, \nabla_{i_1} \nabla_{i_2} \cdots \nabla_{i_l}\, \dfrac{1}{r}$,
written here in different forms.
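A minimal sympy sketch of this construction (our implementation of the operator as written above, valid on homogeneous polynomials): applying $\hat{D}_i$ twice to unity reproduces the rank-2 harmonic tensor:

```python
import sympy as sp

x = sp.symbols('x0 x1 x2')
r2 = sum(c**2 for c in x)

def D(i, f):
    """Ladder operator D_i f = x_i (2N+1) f - r^2 d_i f on a homogeneous poly."""
    f = sp.expand(f)
    deg = sp.Poly(f, *x).total_degree()
    return sp.expand(x[i]*(2*deg + 1)*f - r2*sp.diff(f, x[i]))

M_01 = D(0, D(1, sp.Integer(1)))
assert sp.simplify(M_01 - 3*x[0]*x[1]) == 0           # delta_01 = 0
M_00 = D(0, D(0, sp.Integer(1)))
assert sp.simplify(M_00 - (3*x[0]**2 - r2)) == 0      # M_00 = 3 x0^2 - r^2
```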
The relation of this tensor to the angular momentum operator $\hat{\mathbf{L}} = -i\,[\mathbf{r} \times \boldsymbol{\nabla}]$ is as follows:
$\hat{L}^2\, M_{i_1 \ldots i_l}(\mathbf{r}) = l\,(l+1)\, M_{i_1 \ldots i_l}(\mathbf{r})$.
Some useful properties of the operator in vector form are given below. The scalar product
$(\hat{\mathbf{D}} \cdot \hat{\mathbf{D}})$
yields a vanishing trace over any two indices. The scalar products of the vectors $\mathbf{r}$
and $\boldsymbol{\nabla}$ with $\hat{\mathbf{D}}$ are
$(\mathbf{r} \cdot \hat{\mathbf{D}})\, f_l = (l+1)\, r^2\, f_l$,
$(\boldsymbol{\nabla} \cdot \hat{\mathbf{D}})\, f_l = (l+1)(2l+3)\, f_l$,
where $f_l$ is a harmonic homogeneous polynomial of degree $l$, and, hence, the contraction of the tensor with the vector $\mathbf{r}$ can be expressed as
$x_s\, M_{s\, i_2 \ldots i_l}(\mathbf{r}) = l\; r^2\, M_{i_2 \ldots i_l}(\mathbf{r})$,
where $l$ is a number.
The commutator in the scalar product on the sphere is equal to unity.
To calculate the divergence of a tensor, a useful formula is
$\nabla_s\, M_{s\, i_2 \ldots i_l}(\mathbf{r}) = (\boldsymbol{\nabla} \cdot \hat{\mathbf{D}})\, M_{i_2 \ldots i_l}(\mathbf{r})$,
whence
$\nabla_s\, M_{s\, i_2 \ldots i_l}(\mathbf{r}) = l\,(2l+1)\, M_{i_2 \ldots i_l}(\mathbf{r})$
($l(2l+1)$ on the right-hand side is a number).
Four-dimensional ladder operator
The raising operator in 4D space
has largely similar properties. The main formula for it is
$\hat{D}_i = 2\, x_i\, (\hat{N} + 1) - r^2\, \nabla_i$,
where $\mathbf{r}$ is a 4D vector, $r^2 = x_1^2 + x_2^2 + x_3^2 + t^2$,
$\hat{N} = (\mathbf{r} \cdot \boldsymbol{\nabla})$,
and the operator $\hat{N}$ multiplies a homogeneous polynomial by its degree. Separating the fourth variable is convenient for physical problems:
$\mathbf{r} = (\mathbf{x}, t)$, $r^2 = \mathbf{x}^2 + t^2$.
In particular,
$\hat{D}_i\, 1 = 2\, x_i$,
$\hat{D}_k\, \hat{D}_i\, 1 = 2\,(4\, x_i x_k - \delta_{ik}\, r^2)$.
The scalar product of the ladder operator and $\mathbf{r}$ is as simple as in 3D space:
$(\mathbf{r} \cdot \hat{\mathbf{D}})\, f_l = (l+2)\, r^2\, f_l$.
The scalar product of $\boldsymbol{\nabla}$ and $\hat{\mathbf{D}}$ is
$(\boldsymbol{\nabla} \cdot \hat{\mathbf{D}})\, f_l = 2\,(l+2)^2\, f_l$.
The ladder operator is now associated with the angular momentum operator $\hat{\mathbf{L}}$ and an additional operator $\hat{\mathbf{A}}$ of rotations in 4D space. Together they form a Lie algebra, like the angular momentum and Laplace–Runge–Lenz operators.
The operator $\hat{\mathbf{A}}$ has the simple form
$\hat{A}_i = t\, \nabla_i - x_i\, \partial_t$.
Separately for the 3D component $\mathbf{x}$ and the fourth coordinate $t$
of the raising operator, the formulas are
$\hat{\mathbf{D}}_{\mathbf{x}} = 2\,\mathbf{x}\,(\hat{N} + 1) - (\mathbf{x}^2 + t^2)\,\boldsymbol{\nabla}_{\mathbf{x}}$,
$\hat{D}_t = 2\,t\,(\hat{N} + 1) - (\mathbf{x}^2 + t^2)\,\partial_t$.
See also
Tensor
Spherical harmonics
Operator (physics)
Laplace-Runge-Lenz vector
Angular momentum operator
Ladder operator
Multipolar exchange interaction
References
External links
Harmonic analysis
Rotational symmetry
Quantum mechanics
Operator theory | Harmonic tensors | [
"Physics"
] | 2,023 | [
"Theoretical physics",
"Quantum mechanics",
"Symmetry",
"Rotational symmetry"
] |
72,323,059 | https://en.wikipedia.org/wiki/Structural%20identifiability | In the area of system identification, a dynamical system is structurally identifiable if it is possible to infer its unknown parameters by measuring its output over time. This problem arises in many branches of applied mathematics, since dynamical systems (such as the ones described by ordinary differential equations) are commonly utilized to model physical processes, and these models contain unknown parameters that are typically estimated using experimental data.
However, in certain cases, the model structure may not permit a unique solution for this estimation problem, even when the data is continuous and free from noise. To avoid potential issues, it is recommended to verify the uniqueness of the solution in advance, prior to conducting any actual experiments. The lack of structural identifiability implies that there are multiple solutions for the problem of system identification, and the impossibility of distinguishing between these solutions suggests that the system has poor forecasting power as a model. On the other hand, control systems have been proposed with the goal of rendering the closed-loop system unidentifiable, decreasing its susceptibility to covert attacks targeting cyber-physical systems.
Examples
Linear time-invariant system
Consider a linear time-invariant system with the following state-space representation:
and with given initial conditions. The explicit solution for the output
implies that some of the parameters are not structurally identifiable: one choice of the parameters generates exactly the same output as a different choice.
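The two-parameter degeneracy can be illustrated with a hypothetical scalar system of this type (our stand-in, not the article's exact example): for $\dot{x} = -\theta_1\theta_2 x$, $y = x$, only the product $\theta_1\theta_2$ is visible in the output, so parameter pairs with equal product are indistinguishable:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 200)
x0 = 1.0                                           # assumed known initial state
y = lambda th1, th2: x0 * np.exp(-th1 * th2 * t)   # closed-form output

# (2, 3) and (6, 1) share the product 6 and generate identical outputs
assert np.allclose(y(2.0, 3.0), y(6.0, 1.0))
```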
Non-linear system
A model of a possible glucose homeostasis mechanism is given by the differential equations
where (c, si, p, α, γ) are parameters of the system, and the states are the plasma glucose concentration G, the plasma insulin concentration I, and the beta-cell functional mass β. It is possible to show that the parameters p and si are not structurally identifiable: any numerical choices of the parameters p and si that have the same product p·si are indistinguishable.
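One way to see this indistinguishability is the scaling $I \to kI$, $p \to kp$, $s_i \to s_i/k$, which leaves the measured glucose dynamics unchanged when insulin is not directly observed. The sketch below uses an assumed Karin-type βIG model; the secretion term f(G) and the β dynamics are illustrative placeholders, not the article's exact equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

def big_model(t, state, p, si, c=0.1, gamma=0.5, u=1.0):
    # Hypothetical beta-I-G model: G = glucose (measured), I = insulin,
    # B = beta-cell mass; f(G) is a placeholder saturating secretion term.
    G, I, B = state
    f = G**2 / (1.0 + G**2)
    return [u - (c + si * I) * G,      # glucose
            p * B * f - gamma * I,     # insulin
            0.01 * B * (G - 1.0)]      # beta-cell mass (depends on G only)

t_eval = np.linspace(0.0, 50.0, 500)
# baseline: p = 1, si = 2, unmeasured I(0) = 0.5
sol1 = solve_ivp(big_model, (0, 50), [1.0, 0.5, 1.0], args=(1.0, 2.0),
                 t_eval=t_eval, rtol=1e-10, atol=1e-10)
# rescaled with k = 2: p -> 2p, si -> si/2, unmeasured I(0) doubled
sol2 = solve_ivp(big_model, (0, 50), [1.0, 1.0, 1.0], args=(2.0, 1.0),
                 t_eval=t_eval, rtol=1e-10, atol=1e-10)

assert np.allclose(sol1.y[0], sol2.y[0], atol=1e-6)  # identical glucose output
```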
Practical identifiability
Structural identifiability is assessed by analyzing the dynamical equations of the system, and does not take into account possible noise in the measurement of the output. In contrast, practical identifiability analysis also takes noise into account.
Other related notions
The notion of structural identifiability is closely related to observability, which refers to the capacity of inferring the state of the system by measuring the trajectories of the system output. It is also closely related to data informativity, which refers to the proper selection of inputs that enables the inference of the unknown parameters.
The (lack of) structural identifiability is also important in the context of dynamical compensation of physiological control systems. These systems should ensure a precise dynamical response despite variations in certain parameters. In other words, while in the field of systems identification, unidentifiability is considered a negative property, in the context of dynamical compensation, unidentifiability becomes a desirable property.
Identifiability also appears in the context of inverse optimal control, where one assumes that the data come from a solution of an optimal control problem with unknown parameters in the objective function. In this setting, identifiability refers to the possibility of inferring the parameters of the objective function from the measured data.
Software
Many software tools exist that can be used for analyzing the identifiability of a system, including non-linear systems:
PottersWheel: MATLAB toolbox that uses profile likelihood for structural and practical identifiability analysis.
STRIKE-GOLDD: MATLAB toolbox for structural identifiability analysis.
StructuralIdentifiability.jl: Julia library for assessing structural parameter identifiability.
LikelihoodProfiler.jl: Julia library for practical identifiability analysis.
See also
System identification
Observability
Model order reduction
Adaptive control
References
Dynamical systems | Structural identifiability | [
"Physics",
"Mathematics"
] | 784 | [
"Mechanics",
"Dynamical systems"
] |
72,326,221 | https://en.wikipedia.org/wiki/Submarine%20detection%20system | Submarine detection systems are an aspect of antisubmarine warfare. They are of particular importance in nuclear deterrence, as they directly undermine one of the three arms of the nuclear triad by making counter-force attacks on submarines possible.
Types of system
They break down into two broad categories: acoustic and non-acoustic.
Acoustic systems in turn break down into active sonar systems and passive sonar systems designed to detect the acoustic signature of submarines such as SOSUS.
Non-acoustic systems can work on a variety of different physical principles, including the use of magnetic anomaly detectors and systems such as SOKS, which are believed to work by detecting phenomena such as trace chemicals, heat changes, and radioactivity left in a submarine's wake. There is evidence that some Royal Navy submarines are fitted with wake-detection systems.
References
Further reading
Soviet Antisubmarine Warfare: Current Capabilities And Priorities, 1972 CIA report, declassified and published in 1995
External links
Submarine Detection and Monitoring: Open-Source Tools and Technologies, at NPI.org
Anti-submarine warfare
Military technology
Nuclear warfare | Submarine detection system | [
"Chemistry"
] | 218 | [
"Radioactivity",
"Nuclear warfare"
] |
65,151,168 | https://en.wikipedia.org/wiki/Loam%20molding | Loam molding was formerly used for making cast iron or bronze cannon and is still used for casting large bells.
Loam (pronounced 'low-m') is a mixture of sand and clay with water, sometimes with horse dung (valuable for its straw content), animal hair or coke. The object of including dung or hair was to make the mould permeable and allow gas (such as steam) to escape during casting.
The mold for a cylindrically symmetrical object, such as a cannon, is built up in stages around a spindle, to which is fixed a strickle board with the shape of the eventual casting. The mold also has provision for the casting of a gunhead, beyond the muzzle of the cannon, into which slag can float during casting. If the object is to be hollow, a straw rope is wound around the spindle and covered in a friable material to the dimensions of the exterior of the cannon, the strickle board being turned on the spindle to ensure it is cylindrical. Decorative elements and models of the trunnions are then attached. This is then covered in a thick layer of loam, and the mold is fired. After this, the straw rope is pulled out together with the rest of the material used to form the shape of the cannon.
The mold is then mounted vertically in a casting pit in front of the furnace. If the cannon is to be cast hollow, a core is mounted in the mold. The furnace is then tapped and metal run into the mold. The mold is then broken off the casting. The gunhead is cut off, and the bore of the cannon is reamed out using a boring mill.
The process for the cylinder for a steam engine would be similar. The process for casting a bell is of the same nature, but the procedure is necessarily different.
References
Metalworking
History of metallurgy
Metallurgy | Loam molding | [
"Chemistry",
"Materials_science",
"Engineering"
] | 389 | [
"Metallurgy",
"History of metallurgy",
"Materials science",
"nan"
] |
66,412,064 | https://en.wikipedia.org/wiki/Depleted%20uranium%20hexafluoride | Depleted uranium hexafluoride (DUHF; also referred to as depleted uranium tails, depleted uranium tailings or DUF6) is a byproduct of the processing of uranium hexafluoride into enriched uranium. It is one of the chemical forms of depleted uranium (up to 73–75%), along with depleted triuranium octoxide (up to 25%) and depleted uranium metal (up to 2%). DUHF is 1.7 times less radioactive than uranium hexafluoride produced from natural uranium.
History
The concept of depleted and enriched uranium emerged nearly 150 years after the discovery of uranium by Martin Klaproth in 1789. In 1938, the German chemists Otto Hahn and Fritz Strassmann discovered the fission of the atomic nucleus of the 235U isotope, which was theoretically substantiated by Lise Meitner and Otto Robert Frisch, and in parallel with them by Gottfried von Droste and Siegfried Flügge. This discovery marked the beginning of the peaceful and military use of the nuclear energy of uranium. A year later, Yulii Khariton and Yakov Zeldovich were the first to prove theoretically that, with enrichment of natural uranium in 235U, a chain reaction could be sustained. This nuclear chain reaction requires on average that at least one neutron released by the fission of a 235U atom be captured by another 235U atom and cause it to fission as well. The probability of a neutron being captured by a fissile nucleus should be high enough to sustain the reaction. To increase this probability, an increase in the proportion of 235U is necessary; in natural uranium it constitutes only 0.72%, along with 99.27% 238U and 0.0055% 234U.
Competition
By the mid-1960s, the United States had a monopoly on the supply of uranium fuel for Western nuclear power plants. In 1968, the USSR declared its readiness to accept orders for uranium enrichment. As a result, a competitive market formed in the world, and commercial enrichment companies began to appear (e.g., URENCO and Eurodif). In 1971, the first Soviet contract was signed with the French Alternative Energies and Atomic Energy Commission, where nuclear power plants were actively built. In 1973, roughly 10 long-term contracts were signed with power companies from Italy, Germany, Great Britain, Spain, Sweden, Finland, Belgium and Switzerland. By 2017, large commercial enrichment plants have been operating in France, Germany, the Netherlands, Great Britain, the United States, Russia and China. The development of the enrichment market has led to the accumulation of over 2 million tons of DUHF in the world during this period.
Other forms of depleted uranium
Depleted uranium may exist in several chemical forms; in the form of DUHF, the most common form, with a density of 5.09 g/cm³, in the form of depleted triuranium octoxide (U3O8) with a density of 8.38 g/cm³, and in the form of depleted uranium metal with a density of 19.01 g/cm³.
Physical properties
Since the various uranium isotopes share the same chemical properties, the chemical and physical properties of depleted, enriched, and unenriched UF6 are identical, except for the degree of radioactivity. Like other forms of UF6, under standard conditions, DUHF forms white crystals, with a density of 5.09 g/cm³. At pressures below 1.5 atm, the solid DUHF sublimes into gas when heated, with no liquid form. At 1 atm, the sublimation point is 56.5 °C. The critical temperature of DUHF is 230.2 °C, and the critical pressure is 4.61 MPa.
Radioactivity
The radioactivity of DUHF is determined by the isotopic composition of uranium, because the fluorine in the compound is stable. The radioactive decay rate of natural UF6 (with 0.72% 235U) is 1.7×10⁴ Bq/g, of which 97.6% is due to 238U and 234U.
When uranium is enriched, the content of the light isotopes 234U and 235U increases. Although 234U, despite its much lower mass fraction, contributes more to the activity, the target isotope for nuclear industry use is 235U. Therefore, the degree of uranium enrichment or depletion is specified by the content of 235U. The reduction of the 234U content, and to a slight degree that of 235U, reduces the radioactivity below that of unenriched UF6.
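The quoted specific activity can be reproduced from the standard relation A = ln 2 · N_A · w / (T½ · M), summed over isotopes and diluted by the fluorine mass fraction of UF6 (the half-lives and abundances below are standard nuclear data, not taken from the article):

```python
import numpy as np

NA, LN2 = 6.022e23, np.log(2)
YEAR = 3.156e7                                  # seconds per year
# isotope: (half-life in years, atomic mass, mass fraction in natural uranium)
ISO = {'U-238': (4.468e9, 238.05, 0.99275),
       'U-235': (7.04e8, 235.04, 0.0072),
       'U-234': (2.455e5, 234.04, 0.000054)}

# specific activity of natural uranium, Bq per gram of U
a_u = sum(LN2 * NA * w / (t * YEAR * m) for t, m, w in ISO.values())
# dilute by the uranium mass fraction of UF6 (238 of 352 g/mol)
a_uf6 = a_u * 238.0 / 352.0

print(f"{a_u:.3g} Bq/g U, {a_uf6:.3g} Bq/g UF6")   # ~2.5e4 and ~1.7e4
```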
Production
Low-enriched uranium with an enrichment of 2 to 5% 235U (with some exceptions using the natural composition of 0.72%, for example in Canadian CANDU reactors) is used for nuclear power, in contrast to weapons-grade highly enriched uranium with a 235U content of over 20% and usually over 90%. Various methods of isotope separation are used to produce enriched uranium, mainly gas centrifugation and, in the past, the gaseous diffusion method. Most of them work with gaseous UF6, which in turn is produced by the fluorination of uranium tetrafluoride (UF4 + F2 → UF6) or of uranyl fluoride (UO2F2 + 2 F2 → UF6 + O2) with elemental fluorine, both reactions being highly exothermic.
Since UF6 is the only uranium compound that is gaseous at a relatively low temperature, it plays a key role in the nuclear fuel cycle as a substance suitable for separating 235U and 238U. After the enriched UF6 is obtained, the remainder (approximately 95% of the total mass) becomes depleted UF6, which consists mainly of 238U, because its 235U content is reduced by perhaps a factor of three, and its 234U content by a factor of six (depending on the degree of depletion). By 2020, nearly two million tons of depleted uranium had accumulated in the world. Most of it is stored in the form of DUHF in special steel tanks.
The methods of handling depleted uranium in different countries depend on their nuclear fuel cycle strategy. The International Atomic Energy Agency (IAEA) recognizes that policy determination is the prerogative of the government (para. VII of the Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management). Given the technological capabilities and concepts of the nuclear fuel cycle in each country, with access to separation facilities, DUHF may be considered as a valuable raw material on one hand or low-level radioactive waste on the other. Therefore, there is no unified legal and regulatory status for DUHF in the world. The IAEA expert report of 2001 and the joint OECD/NEA/IAEA report Management of Depleted Uranium (2001) recognize DUHF as a valuable raw material.
Applications
As a result of the chemical conversion of DUHF, anhydrous hydrogen fluoride and/or its aqueous solution (i.e. hydrofluoric acid) are obtained, which are in demand in non-nuclear markets, such as the aluminum industry and the production of refrigerants, herbicides, pharmaceuticals, high-octane gasoline, plastics, etc. Hydrogen fluoride is also reused in the production of UF6 via the conversion of U3O8 into uranium tetrafluoride (UF4), before further fluorination into UF6.
Processing
There are multiple directions in the world practice of DUHF reprocessing. Some of them have been tested in a semi-industrial setting, while others have been and are being operated on an industrial scale with an effort to reduce the reserves of uranium tailings and provide the chemical industry with hydrofluoric acid and industrial organofluorine products.
Depending on nuclear fuel cycle strategy, technological capabilities, international conventions and programs, such as the Sustainable Development Goals (SDG) and the UN Global Compact, each country approaches the issue of the use of accumulated depleted uranium individually. The United States has adopted a number of long-term programs for the safe storage and reprocessing of DUHF stocks prior to their final disposal.
Sustainable development goals
Under the UN SDG, nuclear power plays a significant role not only in providing access to affordable, reliable, sustainable and modern energy (Goal 7), but also in contributing to other goals, including supporting poverty, hunger and water scarcity elimination, economic growth and industry innovation. Several countries, such as the United States, France, Russia, and China, through their leading nuclear power operators, have committed to achieving the Sustainable Development Goals. To achieve these goals, various technologies are being applied both in the reprocessing of spent fuel and in the reprocessing of accumulated DUHF.
Transportation
International policies for transporting radioactive materials are regulated by the IAEA since 1961. These regulations are implemented in the policies of the International Civil Aviation Organization, International Maritime Organization, and regional transport organizations.
Depleted UF6 is transported and stored under standard conditions in solid form and in sealed metal containers with wall thickness of about 1 cm (0.39 in), designed for extreme mechanical and corrosive impacts. For example, the most common "48Y" containers for transportation and storage contain up to 12.5 tons of DUHF in solid form. DUHF is loaded and unloaded from these containers under factory conditions when heated, in liquid form and via special autoclaves.
Dangers
Due to its low radioactivity, the main health hazards of DUHF are connected to its chemical effects on bodily functions. Chemical exposure is a major hazard at facilities associated with the processing of DUHF. Uranium and fluoride compounds such as hydrogen fluoride (HF) are toxic at low levels of chemical exposure. When DUHF comes in contact with air moisture, it reacts to form HF and gaseous uranyl fluoride. HF is a corrosive acid that can be extremely dangerous if inhaled; it is one of the major work hazards in such industries.
In many countries, current occupational exposure limits for soluble uranium compounds are related to a maximum concentration of 3 μg of uranium per gram of kidney tissue. Any effects caused by exposure to these levels on the kidneys are considered minor and temporary. Current practices based on these limits provide adequate protection for workers in the uranium industry. To ensure that these kidney concentrations are not exceeded, legislation limits long-term (8-hour) concentrations of soluble uranium in workplace air to 0.2 mg per cubic meter and short-term (15-minute) concentrations to 0.6 mg per cubic meter.
Incidents during transportation
In August 1984, the freighter MS Mont Louis sank in the English Channel with 18 containers of slightly depleted (0.67% 235U) uranium hexafluoride on board, along with enriched and natural UF6. The 30 containers (type 48Y) of UF6 were recovered, as well as 16 of the 22 empty containers (type 30B). Examination of the 30 containers revealed, in one case, a small leak in the shutoff valve. There were 217 samples taken, subjected to 752 different analyses and 146 measurements of dose levels on the containers. There was no evidence of leakage of either radioactive (natural or recycled uranium) or hazardous chemical substances (fluorine or hydrofluoric acid). According to The Washington Post, this incident was not hazardous because the uranium cargo was in its natural state, with an isotope 235U content of 0.72% or less, and only some of it was enriched to 0.9%.
See also
Traveling wave reactor - a reactor concept that uses depleted uranium for fuel
Notes
References
Element toxicology
Uranium, Depleted
Uranium | Depleted uranium hexafluoride | [
"Physics",
"Chemistry"
] | 2,441 | [
"Element toxicology",
"Biology and pharmacology of chemical elements",
"Materials",
"Nuclear materials",
"Matter"
] |
75,118,815 | https://en.wikipedia.org/wiki/Common%20symbiosis%20signaling%20pathway | The common symbiosis signaling pathway (CSSP) is a signaling cascade in plants that allows them to interact with symbiotic microbes. It corresponds to an ancestral pathway that plants use to interact with arbuscular mycorrhizal fungi (AMF). It is known as "common" because different evolutionarily younger symbioses also use this pathway, notably the root nodule symbiosis with nitrogen-fixing rhizobia bacteria. The pathway is activated by both Nod-factor perception (for nodule-forming rhizobia) and Myc-factor perception (factors released by AMF). The pathway is distinguished from the pathogen recognition pathways, but may share some receptors involved in both pathogen recognition and the CSSP. Recent work by Kevin Cope and colleagues showed that ectomycorrhizae (a different type of mycorrhizae) also use CSSP components such as Myc-factor recognition.
The AMF colonization requires the following chain of events that can be roughly divided into the following steps:
1: Pre-Contact Signaling
2: The CSSP: A: Perception
2: B: Transmission
2: C: Transcription
3: The Accommodation program
Outline
To accurately recognize the infection thread of a different species of organism, and to establish a mutually beneficial association requires robust signaling. AM fungi are also fatty acid auxotrophs; therefore they depend on a plant for supply of fatty acids.
At the pre-symbiotic signaling stage, plants and AMF release chemical factors into their surroundings so that the partners can recognise and find each other. Plant root exudates play roles in complex microbial interactions by releasing a variety of compounds, among which strigolactone has been identified to facilitate both AMF colonisation and pathogen infection.
Phosphate starvation in plants induces strigolactone production as well as AMF colonisation. Plants release strigolactones, a class of carotenoid-based plant hormones, which attract the fungal symbionts and stimulate fungal oxidative metabolism along with growth and branching of the fungal partner. Strigolactone promotes hyphal branching in germinating AMF spores and facilitates colonisation.
The common symbiosis signalling pathway is called so because it has common components for fungal symbiosis as well as rhizobial symbiosis. The common signalling pathway probably evolved when the existing pathway for arbuscular mycorrhizae was exploited by rhizobia.
The perception happens when the fungal Myc factor is detected by the plant. Myc factors are comparable to rhizobial Nod factors. The chemical nature of some Myc-factors has recently been revealed as lipo-chito-oligosaccharides (Myc-LCOs) and chito-oligosaccharides (Myc-COs), which work as symbiotic signals.
The presence of strigolactone enhances the production of Myc-COs by AMF.
Myc-factor receptor (MFR) is still putative. However, a protein called DMI2 (or SYMRK) has a prominent role in perception process and it is thought to be a co-receptor of MFR. Other plants such as rice may employ different mechanisms using OsCERK1 and OsCEBiP to detect chitin oligomers. However, recent work has demonstrated that rice SYMRK is essential for AM symbiosis.
The transmission happens when the signal is transmitted, after detection, to the plant nucleus. This process is mediated by two nucleoporins, NUP85 and NUP133. Alternatively, another hypothesis says that HMG-CoA reductase is activated upon perception, which then converts HMG-CoA into mevalonate. This mevalonate acts as a second messenger and activates a nuclear K+ cation channel (DMI-1 or Pollux). The transmission stage ends by creating a 'calcium spike' in the nucleus.
The transcription stage starts when a calcium- and calmodulin-dependent kinase (CCaMK) is activated. This kinase stimulates a target protein, CYCLOPS. CCaMK and CYCLOPS probably form a complex that, along with DELLA proteins, regulates the gene expression of RAM1 (Reduced Arbuscular Mycorrhiza 1).
The accommodation process involves the extensive remodelling of host cortical cells. This includes invagination of the host plasmalemma and proliferation of the endoplasmic reticulum, Golgi apparatus, trans-Golgi network and secretory vesicles. Plastids multiply and form "stromules". Vacuoles also undergo extensive reorganization.
The Pre-Contact Signaling
Chemical signalling starts prior to the two symbionts coming into contact. From the host plant's side, it synthesizes and releases a range of carotenoid-based phytohormones called strigolactones. They have a conserved tricyclic lactone structure, also known as the ABC rings. Strigolactone biosynthesis occurs mainly in the plastid, where D27 (rice DWARF 27; Arabidopsis ortholog ATD27), an iron-binding β-carotene isomerase, works upstream of strigolactone biosynthesis. Then the carotenoid cleavage dioxygenase enzymes CCD7 and CCD8 modify the structure; these enzymes have orthologs in a number of plant species.
The alpha/beta fold hydrolase D3 and also D14L (D14-Like) (the later one has an Arabidopsis ortholog, KAI2, or KARRIKIN INSENSITIVE-2) is reported to have important roles in mycorrhizal symbiosis, notably, D3, D14 and D14L are localised in the nucleus.
NOPE1, or 'NO PERCEPTION 1', is a transporter protein in rice (Oryza sativa) and maize (Zea mays) that is also required for the priming stage of colonisation by the fungus. NOPE1 is a member of the Major Facilitator Superfamily of transport proteins, capable of N-acetylglucosamine transport. Since root exudates of nope1 mutants fail to elicit transcriptional responses in the fungus, it seems likely that NOPE1 exports something (not yet characterised) that promotes the fungal response.
Perception
There are two main types of root symbiosis: root nodule symbiosis with rhizobia (RN-type) and arbuscular mycorrhiza (AM-type). There are common genes involved in these two pathways; these key common components form the common symbiosis pathway (CSP or CSSP). It has been proposed that RN symbiosis originated from AM symbiosis. The perception of the presence of the fungal symbiont takes place mainly through fungal chemical secretions, generally termed Myc-factors. Receptors for Myc-factors are yet to be identified; however, DMI2/SYMRK probably acts as a co-receptor of the Myc-factor receptor (MFR). The AM fungal secreted materials relevant to symbiosis are Myc-LCOs, Myc-COs and N-acetylglucosamine.
Fungal Myc-factors and the plant proteins they act on:

| Myc factor | Plant protein it mainly acts on |
|---|---|
| Myc-LCOs | LYS11 in Lotus japonicus |
| Short chain chitin oligomers (COs) | OsCERK1 and OsCEBiP in rice |
| N-acetylglucosamine | NOPE-1 in maize |
Fungal Molecules that triggers CSSP
Myc-LCOs (lipochitooligosaccharides)
Like rhizobial LCOs (Nod factors), Myc-LCOs play an important role in the perception stage. They are compounds secreted by AM fungi, mainly mixtures of lipo-chito-oligosaccharides (Myc-LCOs). In Lotus japonicus, LYS11, a receptor for LCOs, is expressed in root cortex cells associated with intra-radically colonizing arbuscular mycorrhizal fungi.
Short chain chitin oligomers (Myc-COs)
AM host plants show symbiotic-activated calcium waves upon exposure to short chain chitin oligomers. It has been reported that production of these molecules by the AM fungus Rhizophagus irregularis is strongly stimulated upon exposure to strigolactones. This suggests that plants secrete strigolactones and, in response, the fungus increases short chain chitin oligomer production, which in turn elicits the plant response to accommodate the fungus. The lysine motif domain of OsCERK1 and OsCEBiP is thought to be involved in the perception of short chain chitin oligomers.
N-Acetylglucosamine
NOPE-1 is a transporter (described above). NOPE-1 also shows a strong N-acetylglucosamine uptake activity and is thought to be associated with recognition of the presence of the fungal symbiont.
Some plant proteins are suspected to recognise Myc-factors, and the rice OsCERK1 lysin motif (LysM) receptor-like kinase is one of them.
Cell Surface Receptors
There are multiple families of pattern recognition receptors and co-receptors involved in recognition of microbial pathogens and symbionts. Some of the relevant families involved in CSSP, are Membrane bound LysMs (LYM), Soluble LysM Receptor like Protein, LYK (LysM receptors with active Kinase domain), LYR (LysM proteins with inactive kinase domain), etc.
Seemingly, different combinations of LYK and LYR receptors perceive and generate differential signals: some combinations generate a pathogen-recognition signal, whereas others generate symbiotic signals.
Receptor-like kinases (RLKs): DMI2/SYMRK is a receptor-like kinase, an important protein in endosymbiosis signal perception, reported in several plants (Mt-DMI2 or Mt-NORK in Medicago truncatula; Lj-SYMRK in Lotus japonicus; Ps-SYM19 in Pisum sativum; OsSYMRK in rice). OsSYMRK lacks an N-terminal domain and exclusively regulates AM symbiosis (it is not involved in RN symbiosis). Notably, it has been found that a Nod-factor-inducible gene, MtENOD11, is activated in the presence of AMF exudates; little is known about this phenomenon.
LysM receptor-like kinases: Lysin motif (LysM) receptor-like kinases are a subfamily of membrane-bound receptor-like kinases (RLKs) with an extracellular region consisting of three lysine motifs. They have important orthologs in different plants that vary in their function: in some plant species they are involved in AM symbiosis, in others they are not. Tomato (Solanum lycopersicum), a non-legume eudicot, also has a similar LysM receptor, SlLYK10, that promotes AM symbiosis. There are some co-receptors of the Myc-factor receptor; for example, OsCEBiP in rice, a LysM membrane protein, can function as a co-receptor of OsCERK1, but it participates in a different pathway.
Most of these kinases are serine/threonine kinases, some are tyrosine kinases. Also, they are type-1 transmembrane proteins, that indicates their N-terminal domain towards the outside of the cell, and the C-terminal domain is towards inside of the cell.
Transmission
The transmission of signal cascades into the nucleus is not well understood. However, this transmission includes carrying the message up to the nuclear membrane and generation of a calcium wave. Some elements involved in this process are:
Nucleoporins
The Lotus japonicus nucleoporins LjNUP85 and LjNUP133 have a potential role in transmission of the signal. Lj-NENA is another important nucleoporin that plays a role in AM symbiosis.
HMGR and Mevalonate.
It has been proposed that the enzyme 3-hydroxy-3-methylglutaryl-CoA reductase (HMG-CoA reductase or HMGR) has a potential role in the transmission stage. The enzyme is activated by SYMRK/DMI2 and forms mevalonate. This mevalonate acts as a second messenger and activates a nuclear potassium channel, DMI1 or POLLUX.
Nuclear membrane cation channels.
The nuclear calcium channel CNGC15, a cyclic nucleotide-gated ion channel, mediates the symbiotic nuclear Ca2+ influx, and it is countered by K+ efflux through DMI1.
Transcription
Calmodulin is a widespread regulatory protein that functions along with Ca2+ in various biological processes. In AM symbiosis signalling, it modulates CCaMK. CCaMK, or DMI3, is a calcium- and calmodulin-dependent kinase thought to be a key decoder of Ca2+ oscillations and an important regulatory kinase. Nuclear Ca2+ spiking promotes binding of Ca2+-calmodulin to CCaMK. This binding causes a conformational change of CCaMK that stimulates a target protein, CYCLOPS, which has different orthologs. CYCLOPS is a coiled-coil domain-containing protein that possibly forms a complex with CCaMK and works along with DELLA proteins. DELLA proteins are a kind of GRAS-domain protein originally identified as repressors of the gibberellin signalling pathway; however, it is now seen that DELLA participates in many signalling pathways. There are two DELLA proteins in Medicago truncatula and Pisum sativum that play a role in symbiosis, whereas in rice only one DELLA protein fulfils this task. Reduced Arbuscular Mycorrhiza 1, or RAM1, is a GRAS protein whose gene is directly regulated by DELLA and CCaMK/CYCLOPS. Using chromatin immunoprecipitation assays, it has been shown that RAM1 binds to the RAM2 gene promoter. RAM1 also regulates many of the plant genes that participate in AMF accommodation.
Some GRAS proteins play a role in AM symbiosis but these roles are not yet fully understood. These include RAM1, RAD1 (REQUIRED FOR ARBUSCLE DEVELOPMENT 1), MIG1 (MYCORRHIZA INDUCED GRAS1), NSP1 and NSP2. WRKY transcription factor genes are thought to play very important roles in establishment of mycorrhizal symbiosis and they perhaps work through regulating plant defense genes.
The Accommodation program
Root cortex cells experience important changes in order to accommodate the fungal endosymbiont. The pre-penetration apparatus (PPA) in outer cell layers and the peri-arbuscular membrane that surrounds arbuscules in inner cell layers need to be formed; the plant cell cytoplasm rearranges, the vacuole shrinks, the nucleus and nucleolus enlarge, and chromatin decondenses, indicating heightened transcriptional activity. Plastids multiply and stay connected by "stromules". Furthermore, it has been suggested that the apoplastic longitudinal hyphal growth is probably regulated by plant genes such as taci1 and CDPK1.
Genes and proteins playing a role in the accommodation programme
Although various proteins have been identified that may play a role in how this accommodation process occurs, the detailed signalling cascade is not fully understood. Some of the proteins and mechanisms involved in the deposition of the peri-arbuscular membrane are the EXOCYST complex, the EXO70 subunit, a symbiosis-specific splice variant of SYP132, VAPYRIN, and two variants of VAMP721. The plant enzymes FatM and RAM2 and the ABC transporter STR/STR2 are putatively involved in the synthesis and supply of the lipid 16:0 β-monoacylglycerol to the AM fungus. Recently discovered kinases that regulate the AMF accommodation program include ADK1, AMK8, AMK24, ARK1 and ARK2.
The protein composition of the peri-arbuscular membrane is very different from that of the plasma membrane. It includes some special transporters such as phosphate transporters (Mt-PT4, Os-PT11, Os-PT13) and ammonium transporters (Mt-AMT2 and 3). It also includes ABC transporters such as STR/STR2 putatively involved in lipid transport.
Evolutionary significance
AM fungi and plants co-evolved and developed a very complex interaction that allows the plant to accommodate the AM fungal symbiont. It has been proposed that RN symbiosis originated from AM symbiosis.
See also
Receptor tyrosine kinase
Serine threonine kinase
Pattern recognition receptors
Monoglyceride
Strigolactone
Plant intelligence
Cell signalling
Signal transduction
Pathogen Associated Molecular Pattern
Microbe Associated Molecular Pattern
Mutualism
Karrikin signaling
Mevalonate pathway
References
Symbiosis
Mycology
Plant communication
Soil biology | Common symbiosis signaling pathway | [
"Biology"
] | 3,715 | [
"Behavior",
"Symbiosis",
"Plants",
"Biological interactions",
"Plant communication",
"Mycology",
"Soil biology"
] |
75,120,620 | https://en.wikipedia.org/wiki/Kinetotroph | A kinetotroph or kinetic harvester is a hypothetical organism that would use kinetic energy to produce complex molecules like adenosine triphosphate (ATP). Kinetotrophs could obtain their energy from numerous sources like wind, tides, or currents; this would allow them to inhabit locations with minimal light for photosynthesis. Kinetotrophs could descend from chemotrophs, and have been hypothesized to take the form of sedentary ciliates and reed-like organisms.
There are no known kinetotrophs on Earth, likely because the process is less efficient than other sources of energy like light or chemicals. However, similar transducer systems have been observed in some organisms. For example, some fish possess a lateral line organ, which uses cilia to turn the movement of fluid into electric signals.
Mechanisms
The theoretical mechanisms that would allow kinetotrophism vary widely. One pathway proposed by Dirk Schulze-Makuch and Louis N. Irwin involves lever-like proteins that would be moved by the flow of fluid. When inside a protein channel with cilia-like proteins that could act as channel guards, the levers could allow specific molecules into or out of the cell. Harnessing the Gibbs–Donnan effect, sodium ions could be made to enter the cell and fuel a hydrogen transporter similar to those in mitochondria, thus allowing energy-storing molecules like ATP to be synthesized. This mechanism would act like a battery; thus, only enough time and a flow of fluid in the range of millimetres per second would be required for the synthesis of complex molecules.
Another mechanism to derive energy from kinetics would be a spring-like structure. Fluid currents or tides could place pressure on cilia structures, bending them and creating tensile energy. When the pressure subsides, that tension would be released and could create usable energy.
Habitat
It has been proposed that kinetotrophs could exist underneath the ice sheet of the Jovian moon Europa. These organisms might attach to the underside of the ice sheet, or to substrates on the ocean floor. The kinetic energy these organisms would harness could be provided by convection cells, where currents are created by the varying temperatures of fluid throughout the water column.
References
Trophic ecology
Hypothetical life forms
Kinetic energy | Kinetotroph | [
"Physics",
"Biology"
] | 471 | [
"Mechanical quantities",
"Physical quantities",
"Hypothetical life forms",
"Kinetic energy",
"Biological hypotheses"
] |
75,122,285 | https://en.wikipedia.org/wiki/International%20Particle%20Physics%20Outreach%20Group | The International Particle Physics Outreach Group (IPPOG) is a network of scientists, educators, and communicators from several countries that works to improve the public's understanding and appreciation of particle physics. Established in 1997 at CERN, IPPOG works in collaboration with particle physics laboratories and experiments, including CERN, the Pierre Auger Observatory, DESY, and GSI.
History
IPPOG started out as the European Particle Physics Outreach Group (EPPOG) in 1997 with the sponsorship of the European Committee for Future Accelerators (ECFA) and the High Energy Particle Physics Board of the European Physical Society (EPS-HEPP).
In November 2012, EPPOG became IPPOG and soon after added the US as its first country member, followed by Israel, Ireland, Slovenia, Australia and South Africa.
Activities
IPPOG focuses on creating and implementing outreach initiatives related to particle physics. These include exhibitions, educational materials, and events designed for various audiences. Additionally, IPPOG provides resources for science communicators, educators, and physicists to assist in their outreach efforts, aiming to convey the concepts of particle physics in a comprehensible manner.
References
External links
Physics education
CERN
Particle physics
1997 establishments in Switzerland | International Particle Physics Outreach Group | [
"Physics"
] | 256 | [
"Applied and interdisciplinary physics",
"Physics education",
"Particle physics"
] |
75,123,579 | https://en.wikipedia.org/wiki/James%20A.%20Shayman | James Alan Shayman is an American physician scientist, nephrologist, and pharmacologist. He is Professor of Internal Medicine and Pharmacology and the Agnes C. And Frank D. McKay Professor at the Medical School of the University of Michigan. He also serves as a staff nephrologist at the Ann Arbor Veterans Administration Medical Center.
Shayman's research interests span the study of lysosomal biology and related disorders. His group is most known for the development of small-molecule inhibitors of glycosphingolipid synthesis and their use in lysosomal glycosphingolipid storage disorders. His team also discovered and characterized a novel lysosomal phospholipase A2, PLA2G15 and is investigating its role in phospholipidosis. He has published over 160 articles.
Shayman is a Fellow of the American Heart Association and American Society of Nephrology as well as a Life Fellow of Clare Hall at the University of Cambridge. He has served as an Associate Editor for the Journal of Clinical Investigation and Translational Research and is serving in the same role for the Journal of the American Society of Nephrology.
Education and early career
Shayman obtained a Bachelor of Arts degree from Cornell University in 1976, and received an M.D. in 1980 from Washington University in St. Louis. From July 1980 to June 1983, he served as a house officer in Medicine at Barnes Hospital in St. Louis, Missouri. Beginning in 1983, he pursued a Postdoctoral Fellowship with a specialization in Nephrology and Pharmacology under the mentorship of Aubrey Morrison and Oliver H. Lowry at Washington University School of Medicine in St. Louis.
Career
Following his postdoctoral fellowship training, Shayman began his academic career in 1985 as an instructor in the renal division of Washington University School of Medicine. He was recruited to the University of Michigan, where from 1986 to 1992 he served as an assistant professor in the Department of Internal Medicine, Division of Nephrology. He was subsequently promoted to associate professor in 1992 and professor in 1997, with a secondary appointment in Pharmacology. He has been serving as the Agnes C. and Frank D. McKay Professor.
Shayman was the Associate Chair for Research Programs at the Department of Internal Medicine and Associate Vice President for Research in Health Sciences of the University of Michigan. In addition, he has been serving as a staff nephrologist Veterans Administration Medical Center in Michigan.
Research
Shayman's research is focused on lysosomal biology, the pathophysiology of traditional lysosomal storage disorders, and the role of the lysosome in more prevalent diseases including diabetes mellitus and polycystic kidney disease. A particular emphasis has been on the development of drug therapeutics for disorders of glycosphingolipid metabolism. This work has resulted in several patents including "Amino ceramide-like compounds and therapeutic methods of use" and "Pyridine inhibitors of glucosylceramide synthase and therapeutic methods using the same."
Substrate reduction therapy
An early collaboration with Norman Radin focused on substrate reduction as an alternative to enzyme replacement therapy for the treatment of lysosomal disorders such as Gaucher disease. Substrate reduction posits that the accumulation of metabolites in the lysosome due to the loss of activity of a specific hydrolase can be treated with reversible inhibitors of specific anabolic enzymes. Following this early collaboration, the Shayman group went on to develop inhibitors of glucosylceramide synthase, followed by proof-of-concept studies in models of Gaucher and Fabry disease that experimentally established the viability of substrate reduction therapy. Although this concept was initially met with skepticism from the academic and pharmaceutical communities, these compounds were eventually licensed to the Genzyme Corporation for clinical development in 2000. In 2014, eliglustat tartrate was approved by the Food and Drug Administration and the European Medicines Agency. Eliglustat tartrate was the first orally bioavailable agent approved as a stand-alone substrate reduction therapy for Gaucher disease type 1.
Glycosphingolipid synthesis inhibitor
Shayman's work on developing the "first in class" glycosphingolipid synthesis inhibitor led to the consideration of whether more common disorders might be amenable to targeting glucosylceramide synthase. Based on fundamental studies by his group and others demonstrating a role for glucosylceramide metabolism in conditions associated with aerobic glycolysis, including diabetes and polycystic kidney disease, glucosylceramide synthase inhibitors have been the focus of preclinical and clinical studies evaluating the potential for extended use applications of eliglustat and related compounds.
Brain penetrant glycolipid synthesis inhibitors
In collaboration with Scott D. Larsen, Shayman's work has also been directed toward the identification of brain-penetrant glycolipid synthesis inhibitors for the treatment of Gaucher disease types 2 and 3, GM2 gangliosidoses including Tay-Sachs and Sandhoff disease, and GM1 gangliosidosis. Using computational analysis comparing eliglustat to known CNS penetrant compounds, novel glucosylceramide synthase inhibitors were designed around the eliglustat pharmacophore, demonstrating the lower glucosylceramide and ganglioside levels within the brain.
Vasculopathy of fabry disease
The Shayman group has worked on the elucidation of the mechanisms underlying the vasculopathy of Fabry disease. His initial work led to the identification of three inducible models of vascular disease in the alpha-galactosidase A knockout mouse. These models included oxidant-induced arterial thrombosis, accelerated atherogenesis, and impaired arterial relaxation. Both decreased nitric oxide bioavailability and endothelial nitric oxide synthase uncoupling have been demonstrated to underlie these abnormalities. The insights led to identifying 3-nitrotyrosine as a biomarker for endothelial dysfunction in both experimental models and patients affected by classic forms of Fabry disease.
PLA2GXV
Attempts to delineate potential off-target effects of eliglustat led to the discovery of a novel lysosomal hydrolase, phospholipase A2 group XV (PLA2GXV). This enzyme was initially identified as a transacylase and named 1-O-acylceramide synthase. PLA2GXV is 50 percent identical to LCAT. In collaboration with John Tesmer and colleagues, the structure of PLA2GXV and, by extension, of lecithin cholesterol acyltransferase (LCAT) were solved. Mice engineered to be deficient in PLA2GXV developed a pulmonary phenotype associated with the conversion of alveolar macrophages to foam cells, a phenotype that resembles amiodarone toxicity. A 2021 work has also identified PLA2GXV as the site of action for many drugs that cause a form of toxicity termed phospholipidosis.
Awards and honors
1991 – Henry Christian Award, American Federation for Clinical Research
1994 – Elected Member, American Society for Clinical Investigation
2000 – Elected Member, Association of American Physicians
2001 – Fellow, American Heart Association
2003 – Fellow, American Society of Nephrology
2016 – Distinguished University Innovator Award, University of Michigan
2020 – Life Fellow, Clare Hall, University of Cambridge
Bibliography
Selected books
Renal Pathophysiology (1995) ISBN 978-0397513727
Essentials of Internal Medicine (2000) ISBN 978-0781719377
Selected articles
Rani CS, Abe A, Chang Y, Rosenzweig N, Saltiel AR, Radin NS, and Shayman JA. Cell cycle arrest induced by an inhibitor of glucosylceramide synthase. Correlation with cyclin-dependent kinases. J Biol Chem. 1995;270(6):2859-67.
Abe A, Shayman JA, and Radin NS. A novel enzyme that catalyzes the esterification of N-acetylsphingosine. Metabolism of C2-ceramides. J Biol Chem. 1996;271(24):14383-9.
Abe A, and Shayman JA. Purification and characterization of 1-O-acylceramide synthase, a novel phospholipase A2 with transacylase activity. J Biol Chem. 1998;273(14):8467-74.
Lee L, Abe A, and Shayman JA. Improved inhibitors of glucosylceramide synthase. J Biol Chem. 1999;274(21):14662-9.
Abe A, Gregory S, Lee L, Killen PD, Brady RO, Kulkarni A, and Shayman JA. Reduction of globotriaosylceramide in Fabry disease mice by substrate deprivation. J Clin Invest. 2000;105(11):1563-71.
Hiraoka M, Abe A, and Shayman JA. Cloning and characterization of a lysosomal phospholipase A2, 1-O-acylceramide synthase. J Biol Chem. 2002;277(12):10090-9.
Eitzman DT, Bodary PF, Shen Y, Khairallah CG, Wild SR, Abe A, Shaffer-Hartman J, and Shayman JA. Fabry disease in mice is associated with age-dependent susceptibility to vascular thrombosis. J Am Soc Nephrol. 2003;14(2):298-302.
Hiraoka M, Abe A, and Shayman JA. Structure and function of lysosomal phospholipase A2: identification of the catalytic triad and the role of cysteine residues. J Lipid Res. 2005;46(11):2441-7.
Abe A, Hiraoka M, and Shayman JA. A role for lysosomal phospholipase A2 in drug induced phospholipidosis. Drug Metab Lett. 2007;1(1):49-53.
Shayman JA. Eliglustat tartrate: glucosylceramide synthase inhibitor treatment of type 1 Gaucher disease. Drugs Future. 2010;35(8):613-20.
Shayman JA, Kelly R, Kollmeyer J, He Y, and Abe A. Group XV phospholipase A(2), a lysosomal phospholipase A(2). Prog Lipid Res. 2011;50(1):1-13.
Glukhova A, Hinkovska-Galcheva V, Kelly R, Abe A, Shayman JA, and Tesmer JJ. Structure and function of lysosomal phospholipase A2 and lecithin:cholesterol acyltransferase. Nat Commun. 2015;6:6250.
Hinkovska-Galcheva V, Treadwell T, Shillingford JM, Lee A, Abe A, Tesmer JJG, et al. Inhibition of lysosomal phospholipase A2 predicts drug-induced phospholipidosis. J Lipid Res. 2021;62:100089.
Wilson MW, Shu L, Hinkovska-Galcheva V, Jin Y, Rajeswaran W, Abe A, et al. Optimization of Eliglustat-Based Glucosylceramide Synthase Inhibitors as Substrate Reduction Therapy for Gaucher Disease Type 3. ACS Chem Neurosci. 2020;11(20):3464-73.
References
Physician-scientists
Nephrologists
American nephrologists
Pharmacologists
American pharmacologists
Cornell University alumni
Washington University School of Medicine alumni
University of Michigan faculty
Year of birth missing (living people)
Living people
Washington University School of Medicine faculty | James A. Shayman | [
"Chemistry"
] | 2,540 | [
"Pharmacology",
"Biochemists",
"Pharmacologists"
] |
75,126,609 | https://en.wikipedia.org/wiki/Muironolide%20A | Muironolide A is a tetrachloro polyketide discovered in 2009 that has two unusual structural details: a hexahydro-1H-isoindolone-triketide ring and a trichlorocarbinol ester. It is suspected to be the product of a sponge–microorganism (cyanobacteria) association. It was isolated from the marine sponge Phorbas sp.
Biosynthesis and synthesis
Muironolide A's biosynthetic route possibly proceeds via PKS 1, which is responsible for forming the lactone ring; an amino acid residue forms the isoindole ring present in the molecule, and successive enzymatic transformations of reduction, oxidation, cyclization and dehydration, together with additions of halogens (chlorine, Cl), result in the final molecule.
There are proposals for synthetic routes that elucidate the synthesis process. In 2015, Xiao and collaborators carried out the synthesis and structural revision of muironolide A molecules.
Biological activities
Muironolide A has been tested for antineoplastic activity in 56 different models using different cell lines and did not show biological activity in any of them. Phorbas sp. also produces the macrolides phorboxazoles A and B and phorbaside A, which do have antifungal and cytostatic activity.
References
Polyketides
Cyclopropyl compounds
Lactones
Lactams
Trichloromethyl compounds
Heterocyclic compounds with 3 rings | Muironolide A | [
"Chemistry"
] | 319 | [
"Biomolecules by chemical classification",
"Natural products",
"Polyketides"
] |
63,622,786 | https://en.wikipedia.org/wiki/DUT-5 | DUT-5 (DUT ⇒ Dresden University of Technology) is a material in the class of metal-organic frameworks (MOFs). Metal-organic frameworks are crystalline materials, in which metals are linked by ligands (linker molecules) to form repeating three-dimensional structures known as coordination entities. The DUT-5 framework is an expanded version of the MIL-53 structure and consists of Al3+ metal centers and biphenyl-4,4'-dicarboxylate (BPDC) linker molecules. It consists of inorganic [M-OH] chains, which are connected by the biphenyl-4,4'-dicarboxylate linkers to four neighboring inorganic chains. The resulting structure contains diamond-shaped micropores extending in one dimension.
Structural analogs
The DUT-5 structure was initially synthesized with Al3+ as the metal center, but other isostructural materials, whose structures are comparable to DUT-5, have also been prepared with metals having oxidation states of +II or +IV.
Due to the toolbox-like design of metal-organic framework materials, other organic molecules that are structurally similar to biphenyl-4,4'-dicarboxylate have also been used as linker molecules for the synthesis of functionalized DUT-5 materials, which contain uncoordinated functional groups in their framework structure. In the functionalized DUT-5 materials, the additional functional groups on the functionalized biphenyl-4,4'-dicarboxylate linkers have been used for post-synthetic modification reactions, either to further modify the framework structure after the initial synthesis or to alter the adsorption properties.
References
Metal-organic frameworks | DUT-5 | [
"Chemistry",
"Materials_science"
] | 370 | [
"Porous polymers",
"Metal-organic frameworks"
] |
63,622,954 | https://en.wikipedia.org/wiki/Auxiliary%20normed%20space | In functional analysis, a branch of mathematics, two methods of constructing normed spaces from disks were systematically employed by Alexander Grothendieck to define nuclear operators and nuclear spaces.
One method is used if the disk $D$ is bounded: in this case, the auxiliary normed space is $\operatorname{span} D$ with norm $p_D(x) := \inf \{r > 0 : x \in r D\}.$
The other method is used if the disk $D$ is absorbing: in this case, the auxiliary normed space is the quotient space $X / p_D^{-1}(0).$
If the disk is both bounded and absorbing then the two auxiliary normed spaces are canonically isomorphic (as topological vector spaces and as normed spaces).
Induced by a bounded disk – Banach disks
Throughout this article, $X$ will be a real or complex vector space (not necessarily a TVS, yet) and $D$ will be a disk in $X.$
Seminormed space induced by a disk
Let $X$ be a real or complex vector space. For any subset $S$ of $X,$ the Minkowski functional of $S$ is defined by:
If $S = \varnothing$ then define $p_\varnothing : \{0\} \to [0, \infty)$ to be the trivial map $p_\varnothing = 0$ and it will be assumed that $\operatorname{span} \varnothing = \{0\}.$
If $S \neq \varnothing$ and if $S$ is absorbing in $\operatorname{span} S$ then denote the Minkowski functional of $S$ in $\operatorname{span} S$ by $p_S : \operatorname{span} S \to [0, \infty),$ where for all $x \in \operatorname{span} S$ this is defined by $p_S(x) := \inf \{r : r > 0, x \in r S\}.$
Let $X$ be a real or complex vector space. For any subset $S$ of $X$ such that the Minkowski functional $p_S$ is a seminorm on $X_S := \operatorname{span} S,$ let $X_S$ denote
$X_S := \left(\operatorname{span} S, p_S\right),$
which is called the seminormed space induced by $S,$ where if $p_S$ is a norm then it is called the normed space induced by $S.$
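As an illustrative check of this definition (an added example, not from the original article): if $D$ is the closed unit ball of a normed space $(X, \|\cdot\|),$ then the Minkowski functional recovers the norm itself:
```latex
% Worked example (assumes D is the closed unit ball of a normed space):
D = \{ x \in X : \|x\| \leq 1 \}, \qquad \operatorname{span} D = X,
\qquad p_D(x) = \inf\{ r > 0 : x \in r D \}
             = \inf\{ r > 0 : \|x\| \leq r \} = \|x\| ,
```
so the normed space induced by $D$ is $(X, \|\cdot\|)$ itself.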
Assumption (Topology): $X_S = \operatorname{span} S$ is endowed with the seminorm topology induced by $p_S,$ which will be denoted by $\tau_S$ or $\tau_{p_S}.$
Importantly, this topology stems entirely from the set $S,$ the algebraic structure of $X,$ and the usual topology on $\mathbb{R}$ (since $p_S$ is defined using only the set $S$ and scalar multiplication). This justifies the study of Banach disks and is part of the reason why they play an important role in the theory of nuclear operators and nuclear spaces.
The inclusion map $\operatorname{In}_S : X_S \to X$ is called the canonical map.
Suppose that $D$ is a disk.
Then $D \subseteq \operatorname{span} D,$ so that $D$ is absorbing in $\operatorname{span} D,$ the linear span of $D.$
The set $\{r D : r > 0\}$ of all positive scalar multiples of $D$ forms a basis of neighborhoods at the origin for a locally convex topological vector space topology on $\operatorname{span} D.$
The Minkowski functional of the disk $D$ in $\operatorname{span} D$ guarantees that $p_D$ is well-defined and forms a seminorm on $\operatorname{span} D.$
The locally convex topology induced by this seminorm is the topology $\tau_D$ that was defined before.
Banach disk definition
A bounded disk $D$ in a topological vector space $X$ such that $\left(X_D, p_D\right)$ is a Banach space is called a Banach disk, infracomplete, or a bounded completant in $X.$
If it is shown that $\left(\operatorname{span} D, p_D\right)$ is a Banach space then $D$ will be a Banach disk in any TVS that contains $D$ as a bounded subset.
This is because the Minkowski functional $p_D$ is defined in purely algebraic terms.
Consequently, the question of whether or not $\left(X_D, p_D\right)$ forms a Banach space is dependent only on the disk $D$ and the Minkowski functional $p_D,$ and not on any particular TVS topology that $X$ may carry.
Thus the requirement that a Banach disk in a TVS $X$ be a bounded subset of $X$ is the only property that ties a Banach disk's topology to the topology of its containing TVS $X.$
Properties of disk induced seminormed spaces
Bounded disks
The following result explains why Banach disks are required to be bounded.
Hausdorffness
The space $\left(X_D, p_D\right)$ is Hausdorff if and only if $p_D$ is a norm, which happens if and only if $D$ does not contain any non-trivial vector subspace.
In particular, if there exists a Hausdorff TVS topology on $\operatorname{span} D$ such that $D$ is bounded in this TVS, then $p_D$ is a norm.
An example where $X_D$ is not Hausdorff is obtained by letting $X = \mathbb{R}^2$ and letting $D$ be the $x$-axis.
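A short verification of this example (added for clarity): since the $x$-axis is a vector subspace, it is invariant under all positive scalings, so
```latex
% X = \mathbb{R}^2 and D = the x-axis, so span D = D and rD = D for every r > 0:
p_D(x) = \inf\{ r > 0 : x \in r D \} = 0 \quad \text{for every } x \in \operatorname{span} D ,
```
so $p_D$ is the zero seminorm on $X_D$; it is not a norm, and the topology it induces on $X_D$ is indiscrete, hence not Hausdorff.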
Convergence of nets
Suppose that $D$ is a disk in $X$ such that $X_D$ is Hausdorff and let $x_\bullet = \left(x_i\right)_{i \in I}$ be a net in $X_D.$
Then $x_\bullet \to 0$ in $X_D$ if and only if there exists a net $r_\bullet = \left(r_i\right)_{i \in I}$ of real numbers such that $r_\bullet \to 0$ and $x_i \in r_i D$ for all $i$;
moreover, in this case it will be assumed without loss of generality that $r_i \geq 0$ for all $i.$
Relationship between disk-induced spaces
If $C \subseteq D$ then $\operatorname{span} C \subseteq \operatorname{span} D$ and $p_D \leq p_C$ on $\operatorname{span} C,$ so define the following continuous linear map:
If $C$ and $D$ are disks in $X$ with $C \subseteq D$ then call the inclusion map $\operatorname{In}_C^D : X_C \to X_D$ the canonical inclusion of $X_C$ into $X_D.$
In particular, the subspace topology that $X_C$ inherits from $X_D$ is weaker than $X_C$'s seminorm topology.
The disk as the closed unit ball
The disk $D$ is a closed subset of $X_D$ if and only if $D$ is the closed unit ball of the seminorm $p_D$; that is, $D = \left\{x \in \operatorname{span} D : p_D(x) \leq 1\right\}.$
If $D$ is a disk in a vector space $X$ and if there exists a TVS topology $\tau$ on $\operatorname{span} D$ such that $D$ is a closed and bounded subset of $\left(\operatorname{span} D, \tau\right),$ then $D$ is the closed unit ball of $\left(\operatorname{span} D, p_D\right)$ (that is, $D = \left\{x \in \operatorname{span} D : p_D(x) \leq 1\right\}$) (see footnote for proof).
Sufficient conditions for a Banach disk
The following theorem may be used to establish that $\left(X_D, p_D\right)$ is a Banach space.
Once this is established, $D$ will be a Banach disk in any TVS in which $D$ is bounded.
Note that even if $D$ is not a bounded and sequentially complete subset of any Hausdorff TVS, one might still be able to conclude that $\left(X_D, p_D\right)$ is a Banach space by applying this theorem to some disk $K$ satisfying $\left\{x \in \operatorname{span} D : p_D(x) < 1\right\} \subseteq K \subseteq \left\{x \in \operatorname{span} D : p_D(x) \leq 1\right\},$
because then $p_D = p_K.$
The following are consequences of the above theorem:
A sequentially complete bounded disk in a Hausdorff TVS is a Banach disk.
Any disk in a Hausdorff TVS that is complete and bounded (e.g. compact) is a Banach disk.
The closed unit ball in a Fréchet space is sequentially complete and thus a Banach disk.
Suppose that $D$ is a bounded disk in a TVS $X.$
If $L : X \to Y$ is a continuous linear map between TVSs and if $B \subseteq X$ is a Banach disk, then $L(B)$ is a Banach disk and $L\big\vert_{X_B} : X_B \to Y_{L(B)}$ induces an isometric TVS-isomorphism $X_B / \left(X_B \cap L^{-1}(0)\right) \cong Y_{L(B)}.$
Properties of Banach disks
Let $X$ be a TVS and let $D$ be a bounded disk in $X.$
If $D$ is a bounded Banach disk in a Hausdorff locally convex space $X$ and if $T$ is a barrel in $X,$ then $T$ absorbs $D$ (that is, there is a number $r > 0$ such that $D \subseteq r T$).
If $U$ is a convex balanced closed neighborhood of the origin in $X$ then the collection of all neighborhoods $r U,$ where $r > 0$ ranges over the positive real numbers, induces a topological vector space topology on $X.$ When $X$ has this topology, it is denoted by $X_U.$ Since this topology is not necessarily Hausdorff nor complete, the completion of the Hausdorff space $X_U / p_U^{-1}(0)$ is denoted by $\overline{X_U},$ so that $\overline{X_U}$ is a complete Hausdorff space and $p_U(x) := \inf \{r > 0 : x \in r U\}$ is a norm on this space making $\overline{X_U}$ into a Banach space. The polar of $U,$ $U^{\circ},$ is a weakly compact bounded equicontinuous disk in $X^{\prime}$ and so is infracomplete.
If $X$ is a metrizable locally convex TVS then for every bounded subset $B$ of $X,$ there exists a bounded disk $D$ in $X$ such that $B \subseteq X_D,$ and both $X$ and $X_D$ induce the same subspace topology on $B.$
Induced by a radial disk – quotient
Suppose that $X$ is a topological vector space and $V$ is a convex balanced and radial set.
Then $\{r V : r > 0\}$ is a neighborhood basis at the origin for some locally convex topology $\tau_V$ on $X.$
This TVS topology $\tau_V$ is given by the Minkowski functional formed by $V,$ $p_V : X \to [0, \infty),$ which is a seminorm on $X$ defined by $p_V(x) := \inf \{r > 0 : x \in r V\}.$
The topology $\tau_V$ is Hausdorff if and only if $p_V$ is a norm, or equivalently, if and only if $p_V^{-1}(0) = \{0\},$ or equivalently, $\bigcap_{r > 0} r V = \{0\},$ for which it suffices that $V$ be bounded in $X.$
The topology $\tau_V$ need not be Hausdorff but $X / p_V^{-1}(0)$ is Hausdorff.
A norm on $X / p_V^{-1}(0)$ is given by $\left\|x + p_V^{-1}(0)\right\| := p_V(x),$ where this value is in fact independent of the representative of the equivalence class $x + p_V^{-1}(0)$ chosen.
The normed space $\left(X / p_V^{-1}(0), \|\cdot\|\right)$ is denoted by $X_V$ and its completion is denoted by $\overline{X_V}.$
If in addition $V$ is bounded in $X$ then the seminorm $p_V$ is a norm, so in particular $p_V^{-1}(0) = \{0\}.$
In this case, we take $X_V$ to be the vector space $X$ instead of $X / \{0\}$ so that the notation $X_V$ is unambiguous (whether $X_V$ denotes the space induced by a radial disk or the space induced by a bounded disk).
The quotient topology $\tau_Q$ on $X / p_V^{-1}(0)$ (inherited from $X$'s original topology) is finer (in general, strictly finer) than the norm topology.
Canonical maps
The canonical map is the quotient map $q_V : X \to X_V = X / p_V^{-1}(0),$ which is continuous when $X_V$ has either the norm topology or the quotient topology.
If $U$ and $V$ are radial disks such that $U \subseteq V$ then $p_U^{-1}(0) \subseteq p_V^{-1}(0),$ so there is a continuous linear surjective canonical map $q_{V,U} : X / p_U^{-1}(0) \to X / p_V^{-1}(0) = X_V$ defined by sending $x + p_U^{-1}(0) \in X_U = X / p_U^{-1}(0)$
to the equivalence class $x + p_V^{-1}(0),$ where one may verify that the definition does not depend on the representative of the equivalence class $x + p_U^{-1}(0)$ that is chosen.
This canonical map has norm $\leq 1$ and it has a unique continuous linear canonical extension to $\overline{X_U}$ that is denoted by $\overline{q_{V,U}} : \overline{X_U} \to \overline{X_V}.$
Suppose that in addition $B$ and $C$ are bounded disks in $X$ with $B \subseteq C,$ so that $X_B \subseteq X_C$ and the inclusion $\operatorname{In}_B^C : X_B \to X_C$ is a continuous linear map.
Let $\operatorname{In}_B : X_B \to X,$ $\operatorname{In}_C : X_C \to X,$ and $\operatorname{In}_B^C : X_B \to X_C$ be the canonical maps.
Then $\operatorname{In}_B = \operatorname{In}_C \circ \operatorname{In}_B^C$ and $q_V = q_{V,U} \circ q_U.$
Induced by a bounded radial disk
Suppose that $S$ is a bounded radial disk.
Since $S$ is a bounded disk, if $D := S$ then we may create the auxiliary normed space $\operatorname{span} D$ with norm $p_D$; since $S$ is radial, $\operatorname{span} S = X.$
Since $S$ is a radial disk, if $V := S$ then we may create the auxiliary seminormed space $X / p_V^{-1}(0)$ with the seminorm $p_V$; because $S$ is bounded, this seminorm is a norm and $p_V^{-1}(0) = \{0\},$ so $X / p_V^{-1}(0) = X / \{0\} = X.$
Thus, in this case the two auxiliary normed spaces produced by these two different methods result in the same normed space.
Duality
Suppose that $H$ is a weakly closed equicontinuous disk in $X^{\prime}$ (this implies that $H$ is weakly compact) and let
$U := H^{\circ} = \left\{x \in X : |h(x)| \leq 1 \text{ for all } h \in H\right\}$
be the polar of $H.$
Because $U^{\circ} = H^{\circ\circ} = H$ by the bipolar theorem, it follows that a continuous linear functional $f$ belongs to $X^{\prime}_H = \operatorname{span} H$ if and only if $f$ belongs to the continuous dual space of $\left(X, p_U\right),$ where $p_U$ is the Minkowski functional of $U$ defined by $p_U(x) := \inf \{r > 0 : x \in r U\}.$
Related concepts
A disk in a TVS is called infrabornivorous if it absorbs all Banach disks.
A linear map between two TVSs is called infrabounded if it maps Banach disks to bounded disks.
Fast convergence
A sequence $x_\bullet = \left(x_i\right)_{i=1}^{\infty}$ in a TVS $X$ is said to be fast convergent to a point $x$ if there exists a Banach disk $D$ such that both $x$ and the sequence are (eventually) contained in $\operatorname{span} D$ and such that $x_\bullet \to x$ in $\left(X_D, p_D\right).$
Every fast convergent sequence is Mackey convergent.
See also
Notes
References
Bibliography
External links
Nuclear space at ncatlab
Functional analysis | Auxiliary normed space | [
"Mathematics"
] | 1,922 | [
"Functional analysis",
"Mathematical objects",
"Functions and mappings",
"Mathematical relations"
] |
63,623,716 | https://en.wikipedia.org/wiki/NAD%2B%20Five-prime%20cap | In molecular biology, the NAD+ five-prime cap (NAD+ 5' cap) refers to a molecule of nicotinamide adenine dinucleotide (NAD+), a nucleoside-containing metabolite, covalently bonded to the 5' end of cellular mRNA. While the more common methylated guanosine (m7G) cap is added to RNA by a capping complex that associates with RNA polymerase II (RNAP II), the NAD cap is added during transcriptional initiation by the RNA polymerase itself, acting as a non-canonical initiating nucleotide (NCIN). As such, while m7G capping can only occur in organisms possessing specialized capping complexes, because NAD capping is performed by RNAP itself, it is hypothesized to occur in most, if not all, organisms.
The NAD+ 5' cap has been observed in bacteria, contrary to the long-held belief that prokaryotes lacked 5'-capped RNA, as well as on the 5' cap of eukaryotic mRNA, in place of the m7G cap. This modification also potentially allows for selective degradation of RNA within prokaryotes, as different pathways are involved in the degradation of NAD+-capped and uncapped 5′-triphosphate-RNAs.
In eukaryotic cells, while the more commonly observed m7G cap promotes the stability of the mRNA and supports translation, the NAD+ cap targets the RNA transcript for decay, facilitated by the non-canonical decapping enzyme, DXO. Considering the centrality of NAD in redox chemistry and post-translational protein modification, its attachment to RNA represents potentially undiscovered pathways in RNA metabolism and regulation.
Function in prokaryotes
In prokaryotes, the 5' NAD+ modification is established by bacterial RNAP during transcription initiation and has been shown to display functions analogous to those of the eukaryotic 5' cap. In-vitro transcribed NAD-modified RNA was shown to be more resistant to RNase E, the main enzyme in the decay pathway of E. coli. NAD-modification further was shown to decelerate RNA processing by RNA pyrophosphohydrolase (RppH), which is known to trigger RNase-E-mediated decay through the conversion of 5′-triphosphate-RNA to 5′-monophosphate-RNA. NudC, a Nudix phosphohydrolase, can decap NAD-RNA through hydrolyzing NAD(H) into NMN(H) and AMP, causing RNase-E-mediated decay, but is inactive against 5′-triphosphate-RNA. This 5' modification allows for the selective initiation of degradation for a subset of RNAs, as the NAD-capped RNAs are stabilized in the presence of RppH but are decapped by NudC, while the 5′-triphosphate-RNAs are susceptible to RppH but not NudC.
Next generation sequencing (NGS) of the NAD-RNA conjugates in E. coli revealed an abundance of a specific group of small regulatory RNAs (sRNAs) which are known to be involved in stress response systems, as well as enzymes involved in cellular metabolism. The small number of RNA transcripts with a NAD cap might allow the cell to selectively degrade these RNAs separate from other pathways. Considering that the stress responses are known to affect NAD+ concentration, this finding further supports the possibility of undiscovered pathways linking the energetic state of a cell to mRNA turnover.
NAD capping has also been suggested to recruit specific proteins to the 5' end of the RNA as NAD is one of the most common protein ligands. NAD-binding pockets are well characterized in many proteins and could help the localization of the RNA to an enzyme or receptor. Many NAD-utilizing metabolic enzymes can also bind to RNA, presenting the possibility of unknown ribonucleoprotein complexes.
Function in eukaryotes
NAD+ 5' capped RNA have been found in yeast, humans, and Arabidopsis thaliana. In eukaryotes, the NAD+ cap is removed by non-canonical decapping enzymes from the DXO family. DeNADing by DXO results in a 5′-monophosphate RNA, distinct from NudC, which produces NMN plus 5′-monophosphate RNA. Importantly, DXO is ~6 fold more efficient at decapping NAD+ compared to m7G, suggesting that it selectively degrades NAD-capped RNA rather than the more common m7G cap, similar to NudC.
The m7G cap has been shown to promote translation through recruitment of the initiation complex onto the mRNA. However, the NAD+ cap does not provide a similar function, as NAD+-capped and polyadenylated mRNA displayed levels of translation in vitro similar to uncapped mRNA. Additionally, the 5' NAD+ cap further promotes decay of the RNA it is attached to: NAD+-capped and polyadenylated mRNAs were demonstrated in vitro to be less stable than mRNAs lacking a 5' cap, suggesting that the NAD+ modification actively facilitates DXO-mediated RNA decay.
While the relationship between RNA-binding proteins, such as glyceraldehyde-3-phosphate dehydrogenase (GAPDH), and NAD+ concentration is established, the NAD+ cap has been hypothesized to represent a direct link between RNA expression levels and cellular metabolism. It is known that energy stresses such as glucose deprivation and caloric restriction influence NAD+ concentrations and can possibly impact NAD+ capping. Additionally, as low-nutrient conditions can affect mRNA stability, and seeing as NAD+ caps promote mRNA decay, it is possible that the energetic state of a cell could affect NAD+-capping and thus mRNA turnover. Certain findings, such as the higher abundance of NAD+-capped transcripts in stationary-phase bacteria as well as yeast grown on synthetic media, point toward this possibility.
References
Coenzymes
Nucleotides
Cellular respiration
Anti-aging substances | NAD+ Five-prime cap | [
"Chemistry",
"Biology"
] | 1,277 | [
"Cellular respiration",
"Anti-aging substances",
"Coenzymes",
"Organic compounds",
"Senescence",
"Biochemistry",
"Metabolism"
] |
63,628,505 | https://en.wikipedia.org/wiki/Insensible%20perspiration | Insensible perspiration, also known as transepidermal water loss, is the passive vapour diffusion of water through the epidermis. Insensible perspiration takes place at an almost constant rate and reflects evaporative loss from the epithelial cells of the skin. Unlike sweating, the lost fluid is pure without additional solutes. For this reason, it can also be referred to as "insensible water loss".
The amount of water lost in this way is deemed to be approximately 400 ml per day. Some sources broaden the definition of insensible perspiration to include not only the water lost through the skin, but also the water lost through the epithelium of the respiratory tract, which is likewise approximately 400 ml per day.
Insensible perspiration is the main source of heat loss from the body, with the figure being placed around 480 kCal per day, which is approximately 25% of basal heat production. Insensible perspiration is not under regulatory control.
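As a rough consistency check of these figures (illustrative arithmetic added here; it assumes the latent heat of vaporization of water at skin temperature, roughly 0.58 kCal per gram):
```latex
\frac{480~\text{kCal/day}}{0.58~\text{kCal/g}} \approx 830~\text{g of water per day},
```
which matches the order of magnitude of the combined skin and respiratory losses quoted above.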
History
Known in Latin as perspiratio insensibilis, the concept was already known to Galen in ancient Greece and was studied by the Venetian Santorio Santorio, who experimented on himself and observed that a significant part of the weight of what he ate and drank was not excreted in his faeces or urine but was also not added to his body weight. He was able to measure the loss by means of a weighing chair that he designed.
References
Excretion | Insensible perspiration | [
"Biology"
] | 300 | [
"Excretion"
] |
63,629,904 | https://en.wikipedia.org/wiki/Metabolomics%20%28journal%29 | Metabolomics is a peer-reviewed scientific journal covering topics including whole metabolome analysis of organisms, metabolite target analysis, with applications within animals, plants and microbes, pharmacometabolomics for precision medicine, as well as systems biology. It is published by Springer Science+Business Media and the current editor-in-chief is Roy Goodacre.
The 2018 impact factor was 3.167.
References
Academic journals established in 2005
English-language journals
Metabolomics journals | Metabolomics (journal) | [
"Chemistry"
] | 102 | [
"Biochemistry stubs",
"Biochemistry journal stubs"
] |
63,630,157 | https://en.wikipedia.org/wiki/Phonurgia%20Nova | Phonurgia Nova ("New Science of Sound Production") is a 1673 work by the Jesuit scholar Athanasius Kircher. It is notable for being the first book ever dedicated entirely to the science of acoustics, and for containing the earliest description of an aeolian harp. It was dedicated to the Holy Roman Emperor Leopold I and printed in Kempten by Rudolph Dreherr.
Purpose and argument
Kircher was prompted to write the work because Samuel Morland had published a claim to have invented the speaking trumpet. Kircher wished to defend his own priority in this invention, asserting that he had used a "tuba stentorophonica" for many years at the shrine of Saint Eustace at the :it:Santuario della Mentorella to broadcast calls for the faithful to come to mass. As evidence he referred to his own work Musurgia Universalis, published in 1650.
The work is divided into two books. The first, the Phonosophia anacamptica, offered a detailed examination of the phenomenon of the echo. He expounded his theory that sound moved in waves, bouncing off surfaces like light off a mirror; indeed the first chapter opens with the maxim "Sonus lucis simia est" ("sound is like light"). He also described the use of various designs of tube to pick up and amplify sound. As he developed his argument, Kircher described various devices of his own invention including speaking statues, musical instruments with internal mechanisms that generated unexpected harmonies, and the aeolian harp. The second book, Phonosophia nova, discussed the influence of music on the human mind, and the therapeutic use of music. Among other things he looked in detail at tarantism.
Illustrations
The frontispiece of the work depicts, at the top, a choir and orchestra of angels gathered around a pyramid representing the Holy Trinity. Beneath them an allegorical figure of Fame flies across the heavens blowing her trumpet and carrying a banner proclaiming "Canit inclyta Caesaris arma" ("She proclaims the Emperor's illustrious arms"). On the left sits Apollo surrounded by the nine Muses on Mount Parnassus, and below them Pan leads a group celebrating a bacchanalia. On the right a group of tritons escort Poseidon across the sea, kettledrums and trumpets accompany a cavalry charge, and a huntsman blows his horn while chasing deer. In the centre the figure standing appears to be Fame once again, standing on a pedestal blowing a horn. She also holds a trumpet, into which putti are blowing from above, while beneath them a man speaks into a tube while facing the surface of the pedestal. An echo, denoted by a dotted line, carries the sound from the pedestal to the ear of a man reclining at the bottom of the illustration.
The original artwork for the portrait of Emperor Leopold I was by Franz Georg Hermann and the frontispiece was by Felix Cheurier. The engravings for both were undertaken by Georg Andreas Wolfgang the Elder.
External links
digital copy of Phonurgia Nova
References
Acoustics
1673 works
1673 in science
Athanasius Kircher | Phonurgia Nova | [
"Physics"
] | 656 | [
"Classical mechanics",
"Acoustics"
] |
63,631,543 | https://en.wikipedia.org/wiki/Introduction%20to%203-Manifolds | Introduction to 3-Manifolds is a mathematics book on low-dimensional topology. It was written by Jennifer Schultens and published by the American Mathematical Society in 2014 as volume 151 of their book series Graduate Studies in Mathematics.
Topics
A manifold is a space whose topology, near any of its points, is the same as the topology near a point of a Euclidean space; however, its global structure may be non-Euclidean. Familiar examples of two-dimensional manifolds include the sphere, torus, and Klein bottle; this book concentrates on three-dimensional manifolds, and on two-dimensional surfaces within them. A particular focus is a Heegaard splitting, a two-dimensional surface that partitions a 3-manifold into two handlebodies. It aims to present the main ideas of this area, but does not include detailed proofs for many of the results that it states, in many cases because these proofs are long and technical.
The book has seven chapters. The first two are introductory, providing material about manifolds in general, the Hauptvermutung proving the existence and equivalence of triangulations for low-dimensional manifolds, the classification of two-dimensional surfaces, covering spaces, and the mapping class group. The third chapter begins the book's material on 3-manifolds and on the decomposition of manifolds into smaller spaces by cutting them along surfaces. For instance, the three-dimensional Schoenflies theorem states that cutting Euclidean space by a sphere can only produce two topological balls; an analogous theorem of J. W. Alexander states that at least one side of any torus in Euclidean space must be a solid torus. However, for more complicated manifolds, cutting along incompressible surfaces can be used to construct the JSJ decomposition of a manifold. This chapter also includes material on Seifert fiber spaces. Chapter four concerns knot theory, knot invariants, thin position, and the relation of knots and their invariants to manifolds via knot complements, the subspaces of Euclidean space on the other sides of tori.
Reviewer Bruno Zimmermann calls chapters 5 and 6 "the heart of the book", although reviewer Michael Berg disagrees, viewing chapter 4 on knot theory as more central. Chapter 5 discusses normal surfaces, surfaces that intersect the tetrahedra of a triangulation of a manifold in a controlled way. By parameterizing these surfaces by how many pieces of each possible type they can have within each tetrahedron of a triangulation, one can reduce many questions about manifolds such as the recognition of trivial knots and trivial manifolds to questions in number theory, on the existence of solutions to certain Diophantine equations. The book uses this tool to prove the existence and uniqueness of prime decompositions of manifolds. Chapter 6 concerns Heegaard splittings, surfaces which split a given manifold into two handlebodies. It includes the theorem of Reidemeister and Singer on common refinements ("stabilizations") of Heegaard splittings, the reducibility of splittings, the uniqueness of splittings of a given genus for Euclidean space, and the Rubinstein–Scharlemann graphic, a tool for studying Heegaard splittings.
A final chapter surveys more advanced topics including the geometrization conjecture, Dehn surgery, foliations, laminations, and curve complexes.
There are two appendices, on general position and Morse theory.
Audience and reception
Although written in the form of an introductory-level graduate textbook, this book presents many recent developments, making it also of interest to specialists in this area. A small amount of background in general topology is needed, and additional familiarity with algebraic topology and differential geometry could be helpful in reading the book. Many illustrations and exercises are included.
Reviewer Bruno Zimmermann states that the book "is written in a nice and intuitive way which makes it pleasant to read". Reviewer Michael Berg calls it "an excellent book that richly illustrates the scope of her chosen subject ... very well written, clear and explicit in its presentation".
Related reading
Other related books on the mathematics of 3-manifolds include 3-manifolds by John Hempel (1976), Knots, links, braids and 3-manifolds by Victor V. Prasolov and Alexei B. Sosinskiĭ (1997), Algorithmic topology and classification of 3-manifolds by Sergey V. Matveev (2nd ed., 2007), and a collection of unpublished lecture notes on 3-manifolds by Allen Hatcher.
References
Geometric topology
Mathematics books
2014 non-fiction books
Publications of the American Mathematical Society | Introduction to 3-Manifolds | [
"Mathematics"
] | 944 | [
"Topology",
"Geometric topology"
] |
63,632,458 | https://en.wikipedia.org/wiki/Praseodymium%28III%29%20fluoride | Praseodymium(III) fluoride is an inorganic compound with the formula PrF3, being the most stable fluoride of praseodymium.
Preparation
The reaction between praseodymium(III) nitrate and sodium fluoride will obtain praseodymium(III) fluoride as a green crystalline solid:
Pr(NO3)3 + 3 NaF → 3 NaNO3 + PrF3
There are also literature reports on the reaction between chlorine trifluoride and various oxides of praseodymium (Pr2O3, Pr6O11 and PrO2), in which praseodymium(III) fluoride is the only product. The reaction between bromine trifluoride and praseodymium oxide that has been left in air for a period of time also produces praseodymium(III) fluoride, but the reaction is incomplete; the reaction between praseodymium(III) oxalate hydrate and bromine trifluoride yields praseodymium(III) fluoride, with carbon also produced in this reaction. Praseodymium(III) fluoride can also be obtained by reacting praseodymium oxide with sulfur hexafluoride at 584 °C.
Properties
Physical
Praseodymium(III) fluoride forms pale green crystals of the trigonal system (or hexagonal system), space group P3c1 (or P6/mcm), with cell parameters a = 0.7078 nm, c = 0.7239 nm, Z = 6, and a structure like that of cerium(III) fluoride (CeF3).
Chemical
Praseodymium(III) fluoride is a green, odourless, hygroscopic solid that is insoluble in water.
Uses
Praseodymium(III) fluoride is used as a doping material for laser crystals.
See also
Praseodymium(III) chloride
Praseodymium(IV) fluoride
References
Fluorides
Praseodymium(III) compounds
Inorganic compounds
Lanthanide halides | Praseodymium(III) fluoride | [
"Chemistry"
] | 461 | [
"Fluorides",
"Inorganic compounds",
"Salts"
] |
76,715,889 | https://en.wikipedia.org/wiki/Property%20graph |
Property Graphs
The data model of "property graphs", "labeled property graphs", or "attributed graphs" has emerged since the early 2000s as a common denominator of various models of graph-oriented databases. It can be defined informally as follows:
In computer science terms, a property graph is a data structure representing entities associated by directed relationships, where the nodes and relations can both include multiple attributes / properties
In terms of graph theory, a property graph is a directed multigraph, whose vertices/nodes represent the entities of the corresponding data structure
Properties take the form of key-value pairs, as used for example in JSON. Keys are defined by character strings. Values are either numeric or also character strings. These properties fall within the usual definition of attributes as understood in entity-attribute-value or object-oriented modeling. This is why the phrase "attributed graph" is relevant. Unlike what is the case with RDF graphs, properties are not arcs of the graph proper. This is another reason why it would be preferable to call them attributed graphs, or graphs with properties, rather than "property graphs", which is misleading.
Relationships are represented by arcs of the graph. These are often called edges, even though, strictly speaking, edges belong in undirected graphs. Arcs must have an identifier, a source node and a target node, and may have one or more attributes/properties in the previous sense.
Formal definition
Building upon widely adopted definitions, a property graph/attributed graph can be defined by a 7-tuple (N, A, K, V, α, κ, π), where
N is the set of nodes/vertices of the graph
A is the set of arcs (directed edges) of the graph
K is a set of keys, taken from a countable set, defining the nature of attributes/properties
V is a set of values, to be associated with these keys in order to define full-fledged attributes
α : A → N × N is a total function, defining the multigraph proper. For a ∈ A, u ∈ N, v ∈ N, α(a) = (u, v) means that a is an arc of the graph having node u for origin and node v for target
κ is a binary relation over (A∪N) and K (formally defined as a subset of the cartesian product (A∪N)×K), associating zero, one or several keys to each arc and node of the graph
π is a partial function from (A∪N)×K to V, providing values for the properties of the nodes and the arcs which include them. For u ∈ N, a ∈ A and k ∈ K, π(u, k) (respectively π(a, k)) is the value associated with the property key k for the node u (respectively the arc a), if the corresponding attribute/property is defined there.
A complementary construct, used in several implementations of property graphs with commercial graph databases, is that of labels, which can be associated both with nodes and arcs of the graph. Labels have a practical rather than theoretical justification, as they were originally intended for users of Entity-Relationship models and relational databases, to facilitate the import of their legacy data sets into graph databases: labels make it possible to associate the same identifier (that of the relational table, or of the ER entity) to all graph nodes which would correspond to the different rows of this relational table, or to instances of the same generic entity/class. With the proposed definition, these labels could in fact be viewed as attributes defined only by a key, without an associated value (this is why κ is defined separately as a binary relation, and π as a partial function). The basic definition thus becomes much clearer and simpler, and satisfies a principle of parsimony. Alternatively, and more consistently, labels can be defined through type graphs, as special types associated with nodes and arcs.
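A minimal sketch of this data model in Python (an illustrative addition; the class and method names are hypothetical, not part of any standard or library):
```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Element:
    """A node or arc: labels are bare keys, properties are key-value pairs."""
    labels: set[str] = field(default_factory=set)
    properties: dict[str, Any] = field(default_factory=dict)

@dataclass
class Arc(Element):
    source: str = ""   # identifier of the origin node
    target: str = ""   # identifier of the target node

class PropertyGraph:
    """Directed multigraph: several arcs may share the same (source, target)."""
    def __init__(self) -> None:
        self.nodes: dict[str, Element] = {}
        self.arcs: dict[str, Arc] = {}

    def add_node(self, node_id: str, labels=(), **properties) -> None:
        self.nodes[node_id] = Element(set(labels), dict(properties))

    def add_arc(self, arc_id: str, source: str, target: str,
                labels=(), **properties) -> None:
        # alpha(a) = (u, v): every arc has exactly one origin and one target
        assert source in self.nodes and target in self.nodes
        self.arcs[arc_id] = Arc(set(labels), dict(properties), source, target)

# Usage: two entities related by an attributed relationship
g = PropertyGraph()
g.add_node("alice", labels={"Person"}, name="Alice")
g.add_node("acme", labels={"Company"}, name="ACME")
g.add_arc("e1", "alice", "acme", labels={"WORKS_FOR"}, since=2020)
```
Note how the relationship itself carries a property (since=2020), the feature that distinguishes this model from plain labeled graphs.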
Relations with other models
Graph theory and classical graph algorithmics
Attributed graphs, as defined above, are especially useful and relevant in that they provide an "umbrella" hypernymic concept (i.e., a common generalization) for several key graph-theoretic models, which have long been widely used in classical graph algorithms
Labeled graphs associate labels to each vertex and/or edge of a graph. Matched with attributed graphs, these labels would correspond to attributes comprising only a key, taken from a countable set (typically a character string, or an integer)
Colored graphs, as used in classical graph coloring problems, are but special cases of labeled graphs, whose labels are defined on a finite set of keys, matched to colors.
Weighted graphs associate a numerical value to arcs/edges, and, when relevant, to the vertices of a directed or undirected graph. These weights/valuations would correspond to the different values of a set of attributes with the same key. As an example, for a graph modeling a road network, we could have a set of weights corresponding to the capacities (measured in number of vehicles per unit of time), and another representing the distances, these two valuations being associated with each road segment represented by an edge of the graph, and differentiated by two corresponding keys (see the sketch after this list).
Flow networks are weighted graphs whose weights are interpreted as capacities. They are used in all kinds of very classical models of transport networks, used e.g. with maximum flow algorithms.
Shortest path problems, as solved by very classical algorithms (like Dijkstra's algorithm), operate on weighted graphs for which the weights correspond to distances, real or virtual.
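The following self-contained sketch illustrates the last two points on toy data (all names and numbers hypothetical): a single attributed road network carries both a "capacity" and a "distance" key on each arc, and Dijkstra's algorithm runs against whichever key is chosen as the weight:
```python
import heapq

# A toy road network as an attributed graph: each directed arc carries two
# properties under distinct keys, as in the example above (hypothetical data).
arcs = [
    ("a", "b", {"distance": 4.0, "capacity": 1200}),
    ("a", "c", {"distance": 2.0, "capacity": 800}),
    ("c", "b", {"distance": 1.0, "capacity": 600}),
    ("b", "d", {"distance": 5.0, "capacity": 1500}),
    ("c", "d", {"distance": 8.0, "capacity": 400}),
]

def dijkstra(arcs, start, weight_key):
    """Classical shortest-path algorithm, reading weights from one property key."""
    adjacency = {}
    for u, v, props in arcs:
        adjacency.setdefault(u, []).append((v, props[weight_key]))
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adjacency.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return dist

print(dijkstra(arcs, "a", "distance"))
# {'a': 0.0, 'b': 3.0, 'c': 2.0, 'd': 8.0}
```
Running a maximum-flow algorithm on the same data would simply read the "capacity" key instead, which is the point of keeping several valuations on one graph.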
Knowledge graphs and RDF graphs
Knowledge graphs, usually represented as RDF graphs, are in fact hybrid labeled graphs, whose node labels correspond to instance identifiers (IRIs) or literals, and whose edge labels identify types (not instances) of predicates. They have now acquired a visibility which tends to obscure the longer-established use of graphs as direct models for systems of all kinds. Attributed graphs are, by their versatility and expressivity, the best adapted for this type of modeling, where graphs which can rightly be called cyber-physical do not merely capture weakly structured information about a physical system, as would be the case with a knowledge graph, but attempt to directly capture the structure of a physical system, as matched by the connectivity structure of the graph. In contrast, an RDF graph would mix structural relationships with attached properties, and category/class information with instance/individual information, drowning out the structure. The expressivity of attributed graphs, on the level of higher-order logic, is also far above that of RDF graphs, which is limited to first-order logic. Properties of relationships, which are at the heart of the attributed graph model, require a very cumbersome reification process to be expressed in RDF.
Standardization
NGSI-LD
The NGSI-LD data model specified by ETSI has been the first attempt to standardize property graphs under a de jure standards body.
Compared to the basic model defined here, the NGSI-LD meta-model adds a formal definition of basic categories (entity, relation, property) on the basis of semantic web standards (OWL, RDFS, RDF), which makes it possible to convert all data represented in NGSI-LD into RDF datasets, through JSON-LD serialization. NGSI-LD entities, relations and properties are thus defined by reference to types which can themselves be defined by reference to ontologies, thesauri, taxonomies or microdata vocabularies, for the purpose of ensuring the semantic interoperability of the corresponding information.
GQL
The ISO/IEC JTC1/SC32/WG3 group of ISO, which established the SQL standard, is in the process of specifying a new query language suitable for graph-oriented databases, called GQL (Graph Query Language). This standard will include the specification of a property graph data model, which should be along the lines of the basic model described here, possibly adding notions of labels, types, and schemas.
Type graphs and schemas
Graph-oriented databases are, compared to relational databases, touted for not requiring the prior definition of a schema to start populating the base. This is desirable and suitable for environments and applications where one operates under an open world assumption, such as the description of complex systems and systems of systems, characterized by bottom-up organization and evolution rather than the control of a single stakeholder. However, even in such environments, it may be needed to constrain the representation of specific subsets of the information entered into the database, in a way that may resemble a traditional database schema, while keeping the openness of the overall graph for addition of unforeseen data or configurations. For example, the description of a smart city falls under the open world assumption and will be described by the upper level of a graph database, without a schema. However, specific technical sub-systems of this city remain top-down closed-world systems managed by a single operator, who may impose a stronger structuring of information, as customarily represented by a schema.
The notions of "type graphs" and schemas make it possible to meet this need, with types playing a role similar to that of labels in classical graph databases, but with the added possibility of specifying relations between these types and constraining them by keys and properties. The type graph is itself a property graph, linked by a relation of graph homomorphism with the graphs of instances that use the types it defines, playing a role similar to that of a schema in a data definition language.
The ontologies, thesauri or taxonomies used to reference NGSI-LD types are also defined by graphs, but these are RDF graphs rather than property graphs, and they typically have broader scopes than database schemas. The complementary use, possible with NGSI-LD types, of type graphs and referencing of external ontologies, makes it possible to enforce strong data structuration and consistency, while affording semantic grounding and interoperability.
References
Graph databases
Extensions and generalizations of graphs | Property graph | [
"Mathematics"
] | 2,064 | [
"Graph databases",
"Mathematical relations",
"Graph theory",
"Extensions and generalizations of graphs"
] |
76,718,163 | https://en.wikipedia.org/wiki/Katsuhiko%20Hayashi | Katsuhiko Hayashi is a Japanese reproductive geneticist and stem cell researcher. He achieved same-sex reproduction of male mice and was thus ranked among Nature's 10. He has been studying the mechanism by which germ cells that transmit genetic information to the next generation are produced. In particular, he has identified the molecules and environmental factors that control the differentiation process of the oocyte lineage leading to eggs by reconstructing this process using pluripotent stem cells. Recently, based on these studies, he has been conducting research into the quality control mechanism of genetic information in the oocyte lineage and the diseases caused by its breakdown.
References
Stem cell researchers
21st-century Japanese biologists
1972 births
Living people | Katsuhiko Hayashi | [
"Biology"
] | 139 | [
"Stem cell researchers",
"Stem cell research"
] |
76,719,625 | https://en.wikipedia.org/wiki/Covering%20%28construction%29 | In construction, covering is the exterior layer of a building's roof. The covering ensures waterproofing by directing and collecting rainwater. It also provides mechanical protection against various external elements such as dust and intrusions. Additionally, it must withstand static mechanical pressures from snow and dynamic forces from strong winds (pressure and uplift).
Considered as the fifth facade of the building, it also contributes to the aesthetic appeal and character of the structure.
Functions
The roof covering is the exterior part of the roof and does not contribute to the building's stability. It is designed to endure all weather conditions such as rain, snow, hail, and wind, as well as other external factors such as marine environments and the weight of maintenance personnel. From the ridge to the drainage system, the roof covering directs rainwater by gravity and contributes to waterproofing.
As a visible element from the outside, the roof covering contributes to the heritage and architectural value of the building.
Composition
A roof covering is composed of various elements including:
Roof support (beams, boards, rafters, battens, etc.)
Roof underlayment (waterproof membrane, thermal insulation, etc.)
Ventilation elements for the underlayment (moisture and vapor evacuation)
Roof covering, visible exterior coating (tiles, slates, shingles, etc.)
Elements ensuring rainproofing and proper drainage of the roof (ridge caps, flashing, edge waterproofing elements, etc.)
A water collection system (eaves) and rainwater drainage (gutters)
Roof windows or skylights.
Roof underlayment
The roof underlayment is used to prevent accidental penetration of rainwater or powdery snow, to prevent convective exchanges with thermal insulation, and to control the migration of water vapor. It is an element of the building's thermal performance.
The roof underlayment is placed between the frame and the roof support. Two types of underlayments are distinguished: rigid underlayments, usually made of wood such as panels and boards, and flexible underlayments made of bituminous material or synthetic material, reinforced or not. Flexible underlayments can have High Vapor Permeability (HVP), which affects water vapor migration and also impacts the installation of thermal insulation.
Roof support
The roof support, attached to the frame, serves as a fixing support for the roof elements. It is usually a lathing or boarding. Lathing is a network of horizontal wooden slats, square or rectangular in section, called battens. In the presence of a flexible underlayment, counter-battens are placed under the lathing. Boarding consists of a decking of boards, which are wooden planks. Alternative industrial solutions, such as fiber cement, exist. Some roofing elements, such as self-supporting steel trays, do not require roof support.
Ventilation
Roof ventilation ensures the proper preservation of the timber in the attic and regulates the humidity level by preventing condensation. Two ventilation systems are distinguished:
The ridge vent system consists of discrete openings (generally with a 1 dm² section) arranged to facilitate air circulation. These are placed at the bottom of the windward slopes (in areas of positive pressure) and at the top of the leeward slopes (in areas of negative pressure).
The linear system allows air to pass at the base of the roof (at the eaves), along the slopes, and at the ridge. It is mandatory in mountainous areas and has the advantage, especially with underlayment, of homogenizing ventilation.
Roofing elements
Technologically, two types of installations are distinguished: roofing with small elements and roofing with large elements. Roofing with small elements includes slates, tiles, and shingles. The principle of waterproofing guiding their installation is overlapping. Roofing with large elements includes sheet metals, profiled metal or plastic trays, and corrugated fiber-cement sheets. Their waterproofing principles may involve overlapping, stapling, the application of elastomer seals, etc.
Some traditional roofing materials, such as thatch or green roofs, do not fit into these two categories.
Roofing with small elements
Tiles
Tiles are rigid plates manufactured by molding or pressing. They come in various shapes depending on regional specifics or their location on the roof: flat, corrugated, curved, and saddleback. The material is often terracotta, but it can also be concrete, glass, or metal (zinc, steel). They can be installed on lathing or boarding, or even on specific tile supports.
Terracotta tiles represent the primary roofing material in France and many other countries. These elements are made of clay fired at high temperatures. The obtained colors depend on the clay used and the surface treatment applied during finishing.
Several types of tiles, each with their relative installation specifics based on their shape, exist, such as the canal tile, flat tile (with regional variations like the glazed tile from Burgundy or the Alsatian tile), Flemish tile, or interlocking tile.
Slates
Natural slates are elements made of very fine schist stone. They are manufactured from slate schist, cut and sawn to the desired dimension. Slate shapes include rectangular, rounded, pointed, or diamond-shaped. A slate is waterproof, non-porous, frost-resistant, and resistant to the most aggressive atmospheric agents.
Fiber-cement slates are prefabricated elements made of cement to resemble natural slates. They can be pigmented throughout or surface-colored. The early fiber-cement slates contained asbestos.
The geographical distribution of slate roofs is linked to the shale richness of the subsoil: Anjou, Brittany, Ardennes, certain parts of the Pyrenees, and the Massif Central in France. In Europe, natural slates usually come from Spain.
Slates are installed with hooks or nails. Two types of supports can be used: battens or boarding (also known as continuous support).
Bituminous shingles
Bituminous shingles, often simply called "shingles", consist of a fiberglass or cellulose felt reinforcement and a mixture of bitumen and mineral granules. Various shapes are available: rounded, rectangular, and scale-like. These products are easily installed on low-slope roofs and lightweight structures due to their low weight. The most common installation method is nailing the elements to a continuous support, made of particle boards or continuous boarding.
Wooden shingles
Roofs made of wooden shingles, also called wooden scales or shakes, are made of larch, chestnut, or red cedar. They represent an ancient technique still found in Franche-Comté, the Vosges, or Savoie. Small wooden elements are nailed in place, similar to slates. Here again, some artisans perpetuate and revive this technique, mainly found in mountainous areas but also in plains. The virtually decay-resistant wood gradually changes color over time to blend into silver-gray hues.
Lauzes
They are mainly found in the Massif Central, Burgundy, Champagne, and Lorraine. They are also traditional in mountainous regions. Despite being prohibitively expensive, they are often replaced by more modern materials. However, there is still a resurgence, and the expertise of roofers persists. Unfortunately, the extraction of these products has ceased in many regions. A revival is taking place through local productions, imports from Aosta Valley in Italy for Alpine roofs, and the appearance in recent years of industrial products imitating lauzes. All these products, regardless of their size and origin, require reinforced frameworks and are generally installed using the double roofing technique.
Roofing with large elements
Two types of materials are distinguished:
Self-supporting steel trays (or aluminum, but less commonly used) installed directly on the framework.
Sheet metals (zinc, copper), supported by continuous backing.
Sheet metal roofs have excellent durability over time and develop a patina that enhances their appearance. Copper turns black and then patinates or oxidizes into a green hue. Zinc, on the other hand, acquires a highly appreciated platinum ash color. Both zinc and copper are easy to shape, bend, and weld, making them suitable for even the most complex installations.
Installation of sheet metals
Support:
The support consists of continuous boarding (with a spacing of 5 mm between boards) or continuous backing (plywood or chipboard) covered with a film with studs to allow air circulation between the support and the metal elements.
Arrangement:
Custom-shaped elements are arranged parallel to the line of greatest slope and connected by stapling (butt joint system) or reliefs + cover joints (batten system). Junctions not parallel to the line of greatest slope are made differently depending on the pitch of the roof slope. Junctions must ensure waterproofing, free expansion, and fixing of the elements. The width of the sheets is determined by the exposure to wind.
Steel trays
Also known as self-supporting covers (with no continuous support), they were originally reserved for industrial buildings but have found some applications in housing, especially in mountainous areas, due to their economic, frost-resistant, and reliable qualities. These products, made of galvanized, lacquered, and ribbed sheets, are also available in a wide range of colors. These covers are particularly used in countries prone to strong winds and tropical cyclones, such as the Caribbean and the Indian Ocean (Reunion, Mauritius, etc.). The significant ribbing of these elements eliminates the need for purlins, and fixing is done by screw and sealing washer at the upper part of the joint between two plates. Steel trays are commonly sold in lengths of up to 12 meters, adaptable upon request, and in widths ranging from 0.6 to 1.1 meters. The span of these products depends on the depth of the ribs, the thickness of the sheet, as well as the climatic constraints to be taken into account, ranging from 2 meters to 7 meters and more. To solve condensation problems due to differences in indoor and outdoor temperatures, as well as acoustic issues, double-skinned steel trays with internal insulation are offered.
Panels for roofing
The most well-known forms are corrugated sheets made of galvanized steel, fiberglass, or bituminous synthetic material. Very lightweight and inexpensive, these sheets are very easy to apply by simple screwing or nailing onto rafters. Other, more recent sheets replicate one or several rows of tiles, with colors that resemble, depending on the region, either tiles or slate. Quick to install, these sheets have the advantage of being very economical. These sheets are available in electro-galvanized steel, galvanized steel with a painted coating, and also synthetic material, generally in more or less standard dimensions of 1 meter in width by 2 meters in length. These different molded sheets also exist in translucent materials of the same dimensions and can be interposed on an opaque roof without any problem.
Other types of roofing
Thatch
Still very present fifty years ago on rural buildings in several French regions, notably in Normandy and the Camargue, thatch had almost disappeared due to a lack of specialists. There are now a few dozen practitioners across the country who install this type of roofing, which is designed to last 30 to 50 years when properly implemented. Dried reeds are used, tightly bundled to prevent water from seeping through.
Green roofs
Existing for several thousand years and used by a few pioneers in the United States, these roofs, intended for low-slope roofs, have made a comeback in northern Europe since the 1970s and are beginning to be established in Latin countries. Particularly suitable for absorbing thermal shocks, they are favored for their aesthetics and ecological impacts: attenuation of urban heat peaks, buffer zones during rainfall, improved humidity in the home, and CO2 absorption. Their implementation has a low additional cost compared to more traditional roofs, and they offer the advantage of better waterproofing.
Transparent glass roofing
Built to bring in light and warmth from the sun, these coverings, more commonly known as glass roofs, became very popular from the 15th century during the Italian Renaissance and then in Europe, to glaze the arcades of large royal estates' Orangeries and the pleasure greenhouses in the 19th century. It was also during this period that this type of roofing was used to protect railway station halls, large hotels, exhibition halls and museums, department stores, shopping arcade passages, and some grand palaces (Reichstag building, Grand Palais in Paris, etc.); all on superb metal architecture, all classified as Historical Monuments.
The material used initially was single-pane glass, known to the Romans but little used in civil architecture until the 15th century. The evolution of techniques towards "sandwich" glass, composed of two glass sheets glued to a synthetic film, improved mechanical resistance and safety and allowed for larger glazed surfaces, as did glass with an embedded central metal mesh (factory roof sheds), organic glass, and resins such as polycarbonate sheets, widely used for veranda roofing due to their lightness, insulating power, and impact resistance. Modern techniques and the use of synthetic glass allow for the creation of tinted, opaque, curved, custom-sized glass, etc. Ventilation of premises can be ensured by installing translucent opening panels on roofs (vasistas).
Rainwater receivers
Rainwater receivers come in two types: gutters (commercial profiles) and custom-made gutters manufactured according to an existing support.
They are characterized by their evacuation potential (flow rate in liters per second), which will depend on:
Their slope (minimum 0.005 m/m);
Their shape;
Their cross-section in cm² (for low-slope gutters with variable development);
The projected area of the slopes they serve.
The maximum allowable flow rate is 3 l/min/m² (projected surface area). They are connected to the sewage network by cylindrical downpipes for rainwater (E.P.) of various diameters or square/rectangular with different cross-sections (cm²). It is considered that 1 cm² of section evacuates 1 m² of ground surface in the case of a cylindrical connection to the receiver. In the case of a tapered connection (funnel), this value is reduced to 0.7 cm²/m². The capacity of structures collecting rainwater will be calculated based on the ground projection in m² of the slopes considered.
These works, commonly grouped under the terms "zinc work" (zinguerie) or "roofing," fall under the responsibility of the roofer, plumber, or plumber-roofer.
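As a rough illustration of the sizing rules above, the following Python sketch (the function names are hypothetical; it encodes only the rules of thumb quoted in this section) estimates the design flow and the required downpipe cross-section for a given ground-projected roof area:

```python
def downpipe_section_cm2(projected_area_m2: float, tapered: bool = False) -> float:
    """Required downpipe cross-section for a roof slope.

    Rule of thumb from the text: 1 cm^2 of cylindrical downpipe section
    drains 1 m^2 of ground-projected roof area; a tapered (funnel)
    connection improves this to 0.7 cm^2 per m^2.
    """
    ratio = 0.7 if tapered else 1.0
    return ratio * projected_area_m2

def peak_flow_l_per_min(projected_area_m2: float) -> float:
    """Maximum allowable flow rate: 3 l/min per m^2 of projected area."""
    return 3.0 * projected_area_m2

# A slope with a 45 m^2 ground projection:
print(downpipe_section_cm2(45))        # 45.0 cm^2 (cylindrical connection)
print(downpipe_section_cm2(45, True))  # 31.5 cm^2 (funnel connection)
print(peak_flow_l_per_min(45))         # 135.0 l/min design flow
```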
Construction technique
Two construction lines, however, are common to all installation techniques:
The level line;
The line of greatest slope (the path followed by water by gravity on a slope).
They are perpendicular. The elements of a roof will always be arranged according to these lines, which will also serve as the basis for all implementation drawings.
Edges
Edges are the lines that determine the geometric limits of a slope. They can be integrated into the slope (chimney passage, roof window, ventilation), at the junction of two slopes, or at the boundary of a building. They are classified and treated differently depending on their orientation relative to the line of greatest slope.
Edges that water follows or straight edges parallel to the line of greatest slope.
Edges that water escapes (ridge, hips) forming an acute angle with the level line.
Edges that receive water (gutters, valleys, beveled edges) forming an obtuse angle with the level line.
Inherent problems with waterproofing, durability, and resistance of roofing
One of the major problems to be solved in establishing installation rules is capillarity (water rising) between elements. It is decisive in the choice of joint type or the value of overlap. Phenomena due to wind action, overpressure, and depression, static loads (snow, ice) influence supports and fixings. Condensation, electrochemical incompatibility between metals or between metals and materials (specific wood species or concrete) compromise the durability of structures.
Roofer
One of the major challenges in the roofing profession lies in how to carry out these works depending on whether they are located at the junction of slopes or not.
The work of the roofer therefore consists of:
Choosing a material
Choosing an installation technique
Marking the location of each element
Shaping the materials
Installing them while respecting waterproofing, fixing, and possibly expansion rules.
Criteria for choosing
The choice of material is made based on multiple criteria. In most cases, local authorities impose types of roofs based on architectural or environmental constraints.
The choice of material and/or the implementation of an installation technique will depend on:
The pitch of the slope, in % or degrees
The ground projection of the slope considered in meters
Its geographical location (climate zone defined by maps taking into account the combination of rain/wind or mountainous area, etc.)
Its local geographical location (climatic site)
Mechanical constraints due to snow and wind (NV 65 rules and map)
Local environmental constraints (aesthetic, architectural, etc.)
Three climatic zones
Zone I consists of the entire interior of the country as well as the Mediterranean coast, at altitudes below 200 m.
Zone II includes the Atlantic coast over a depth of 20 km, and altitudes between 200 and 500 m.
Zone III includes the Atlantic, Channel, and North Sea coasts over a depth of 20 km, and altitudes above 500 m.
Three situations
The building's situation relative to its surroundings is considered in combination with the climatic zone.
A sheltered site corresponds to a construction in the hollow of a basin surrounded by hills on all sides and thus protected from the wind.
A normal site is a plain or plateau with little variation in elevation.
An exposed site is where the buildings are located on the coast up to about 5 km deep, on the tops of cliffs, in estuaries or enclosed bays, and, inland, in narrow windy valleys, on isolated or high mountains.
Roofing at high altitudes
Mountain buildings (above 900 m) require a "double roof" composed of several layers. The large temperature differences between outside and inside, and between night and day, cause phenomena such as dew-point condensation and freeze/thaw cycles that are harmful to the preservation of the building.
Interior condensation, often invisible, damages coatings and causes mold. It results from the low temperature inside the walls combined with a high humidity level.
Freeze/thaw causes the formation of extremely heavy icicles at the edge of the slopes, dangerous for pedestrians and destructive for materials. The phenomenon is simple: heated from below, the snow melts, the meltwater flows between the slope and the snow cover, then freezes as it passes over the overhangs of the roof, which are exposed to the cold air. Tons of ice can accumulate.
The "double roof" is the most effective way to counteract these drawbacks. The complex consists of:
A vapor barrier is placed at ceiling level, on the room side of the sloping roof. It prevents water vapor from entering the insulation and condensing inside it.
Above, the thermal insulation prevents the temperature of the wall from dropping, reducing the risk of interior condensation.
The insulation is then covered with a waterproofing layer to protect it from condensate coming off the roof.
An air gap is left between the waterproofing and the roofing elements; the air circulating in it must remain at the outside temperature to prevent thaw/refreeze cycles. In some cases, linear ventilation (a "ridge gutter") is implemented.
Finally, the actual roofing is carried out.
In order:
Rafters
Thin insulation
Counter batten
27 mm deck board
Cabrons
Tar insulation
Pressure-treated counter batten
4 × 10 cm pressure-treated battens
Non-felted corrugated steel sheeting fixed at the crest of the corrugations, plus "snow stops"
Rules and techniques of implementation
The design and implementation of roofs are subject to the rules of the trade, standards, and technical opinions of official bodies as well as the installation advice from manufacturers.
In France
The design and implementation of roofs are subject to DTU (Document Technique Unifié) regulations in the 40 series. In the absence of official standards, Technical Assessments (ATec) are taken into account.
DTU 40.11 Slate Roofing
DTU 40.13 Fiber Cement Slate Roofing
DTU 40.21 Interlocking or Sliding Clay Tile Roofs with Relief
DTU 40.211 Clay Tile Roofs with Flat Gauge
DTU 40.22 Canal Clay Tile Roofing
DTU 40.23 Flat Clay Tile Roofs
DTU 40.24 Concrete Tile Roofing with Sliding and Longitudinal Interlocking
DTU 40.241 Concrete Flat Tile Roofing with Sliding and Longitudinal Interlocking
DTU 40.25 Concrete Flat Tile Roofing
DTU 40.35 Ribbed Sheet Roofing from Coated Steel Sheets
DTU 40.36 Pre-painted or Non-pre-painted Aluminum Sheet Roofing
DTU 40.41 Roofing with Metal Elements in Zinc Sheets and Long Sheets
DTU 40.44 Roofing with Metal Elements in Stainless Steel Sheets and Long Sheets
DTU 40.45 Roofing with Metal Elements in Copper Sheets and Long Sheets
DTU 40.46 Lead Roofing on Continuous Support
DTU 40.5 Rainwater Disposal Works
Roofer's vocabulary
In addition to the technical terms used by roofers, there are names for tiles used for finishing, decoration, and waterproofing of roofs. Here are the main terms to better understand the language of architects, builders, or roofers.
Hip: protruding line formed by the intersection of two roof slopes.
Brisé: the lower part of a Mansard roof.
Cabrons: over-rafter wooden profiles that create waves under flexible covering.
Chanlatte: beveled wooden lath, nailed on the rafters at the edge of the roof to compensate for the missing tile thickness in the first row (tilting). It can be replaced by a double batten.
Chatière (vent tile): metal or clay element designed for roof and attic ventilation.
Coffine (or cofine): tile or slate curved in the width direction.
Double tile: double row of tiles or slate, laid on the chanlatte, which makes up the roof edge. Also called a battlement.
Eaves: lower edge of a slope often equipped with a gutter.
Ridge tile: half-round or angular tile that covers the horizontal beam, called "ridge," placed at the junction of two slopes of a roof.
Standing seam: roofing and facade covering technique using waterproof metal.
Gambrel tile: tile curved inward in the width direction.
Left-handed tile: tile curved in length on its left edge, called "left to left," or right, called "left to right."
Giron tile: trapezoidal tile for making turrets, towers, or domes.
Lantern: ventilation cap that finishes an air intake, a vent, etc.
Batten: wooden strip nailed to the rafters that receives the tile hooks, commonly called a "roof batten."
Valley: inward ridge between two roof slopes.
Hanging tile: tile curved in the length direction.
Finial: decorative clay element that crowns the point of intersection of a ridge and hips, hips with each other if there is no ridge, or the top of a conical roof.
Gauge: visible part of the tile or slate that is completely wetted by rainwater. It corresponds to the spacing of the battens.
Bargeboard: the end of the roof on the gable side.
See also
Timber roof truss
Eavesdrip
Roofer
Glossary of architecture
Rain gutter
References
Building materials
Roofing materials
Roofs | Covering (construction) | [
"Physics",
"Technology",
"Engineering"
] | 4,793 | [
"Structural engineering",
"Building engineering",
"Architecture",
"Structural system",
"Construction",
"Materials",
"Roofs",
"Matter",
"Building materials"
] |
76,723,533 | https://en.wikipedia.org/wiki/Matrix%20mortality%20problem | In computer science, the matrix mortality problem (or mortal matrix problem) is a decision problem that asks, given a finite set of n×n matrices with integer coefficients, whether the zero matrix can be expressed as a finite product of matrices from this set.
The matrix mortality problem is known to be undecidable when n ≥ 3. In fact, it is already undecidable for sets of 6 matrices (or more) when n = 3, for 4 matrices when n = 5, for 3 matrices when n = 9, and for 2 matrices when n = 15.
In the case n = 2, it is an open problem whether matrix mortality is decidable, but several special cases have been solved: the problem is decidable for sets of 2 matrices, and for sets of matrices which contain at most one invertible matrix.
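Because the problem is undecidable, no program can decide mortality in general; the best one can do is semi-decide it by enumerating products up to a chosen length. A minimal sketch in Python (the function names, data layout, and length bound are illustrative choices, not a published algorithm):

```python
def mat_mul(A, B):
    """Multiply two square integer matrices stored as tuples of tuples."""
    n = len(A)
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n))
        for i in range(n)
    )

def is_mortal_up_to(matrices, max_len):
    """Return True if some product of at most max_len factors, drawn with
    repetition from `matrices`, equals the zero matrix.  A False result
    only means no zero product of length <= max_len was found -- it does
    not prove the set is immortal."""
    n = len(matrices[0])
    zero = tuple((0,) * n for _ in range(n))
    frontier = set(matrices)   # all new product values of the current length
    seen = set(frontier)       # product values already explored
    for length in range(1, max_len + 1):
        if zero in frontier:
            return True
        if length < max_len:
            frontier = {mat_mul(P, M) for P in frontier for M in matrices} - seen
            seen |= frontier
    return False

# A single nilpotent matrix is trivially mortal: A * A = 0.
A = ((0, 1),
     (0, 0))
print(is_mortal_up_to((A,), max_len=2))  # True
```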
Notes
Undecidable problems
Unsolved problems in computer science
Unsolved problems in mathematics
Matrix theory | Matrix mortality problem | [
"Mathematics",
"Technology"
] | 195 | [
"Unsolved problems in mathematics",
"Unsolved problems in computer science",
"Computational problems",
"Computer science stubs",
"Computer science",
"Undecidable problems",
"Computing stubs",
"Mathematical problems"
] |
67,924,712 | https://en.wikipedia.org/wiki/Heterogeneous%20metal%20catalyzed%20cross-coupling | Heterogeneous metal catalyzed cross-coupling is a subset of metal catalyzed cross-coupling in which a heterogeneous metal catalyst is employed. Generally heterogeneous cross-coupling catalysts consist of a metal dispersed on an inorganic surface or bound to a polymeric support with ligands. Heterogeneous catalysts provide potential benefits over homogeneous catalysts in chemical processes in which cross-coupling is commonly employed—particularly in the fine chemical industry—including recyclability and lower metal contamination of reaction products. However, for cross-coupling reactions, heterogeneous metal catalysts can suffer from pitfalls such as poor turnover and poor substrate scope, which have limited their utility in cross-coupling reactions to date relative to homogeneous catalysts. Heterogeneous metal catalyzed cross-couplings, as with homogeneous metal catalyzed ones, most commonly use Pd as the cross-coupling metal.
Reaction mechanism and implications
Cross-coupling reactions catalyzed by a heterogeneous Pd catalyst are thought to generally proceed not on the surface of the solid catalyst, but in the solution phase. The solution-phase intermediates are not necessarily distinguishable from those obtained during homogeneous cross-couplings – for example, a heterogeneous Pd-catalyzed Suzuki reaction still proceeds via oxidative addition of the electrophile by Pd(0), transmetallation of a boronate, and reductive elimination to give product and regenerate Pd(0) (Figure 1A). The activity of heterogeneous catalysts in cross-coupling seems to be tied to the ability of the electrophile (usually an aryl halide) to undergo oxidative addition with an atom of Pd(0), whether on the solid catalyst surface or already in solution, after which the rest of the catalytic cycle will take place in solution. The role of the solid phase in heterogeneous metal catalyzed cross-coupling, then, is more subtle than one might expect. Rather than enabling the productive catalytic cycle, the solid phase acts as a reservoir of Pd that is accessible to the productive catalytic cycle. For heterogeneous catalytic cross-coupling which involves unligated Pd (for example, when Pd/C is used as the catalyst), there exists a significant equilibrium that partitions Pd(0) between atomic, solution-phase monomers, surface-bound Pd, colloidal Pd, and higher-order Pd aggregates (Figure 1B). Aggregation of Pd atoms into clusters ultimately leads to irreversible precipitation of insoluble metallic Pd, which limits the maximum turnover number that can be achieved. An effective heterogeneous cross-coupling catalyst will recapture monomeric Pd or lower-order oligomers and colloids onto the solid phase in order to maintain low concentrations of these species in solution, disfavouring aggregation and favouring instead the productive elementary steps of cross-coupling. This may explain the (perhaps counterintuitive) observation that lower catalyst loadings can improve turnover number for a heterogeneous cross-coupling catalyst system (Pd on porous glass, in the Heck reactions of 4-bromoacetophenone at 180 °C).
The solid-phase to solution-phase mass transfer requirement for Pd in most heterogeneous cross-couplings has further implications. Because the supported ligand for a polymer-supported catalyst is not optimized for reactivity, and because the productive catalytic cycle usually ignores the supported ligand entirely even if present, “difficult” cross-coupling reactions which require fine tuning of the electronic and steric properties of the Pd catalyst – via expensive, designer ligands – are scarcely reported in a heterogeneous context. A 2021 survey of heterogeneous metal catalyzed cross-couplings in the fine chemical industry reported, out of 22 examples, 19 Suzuki or Heck reactions, which included only 2 examples with N-basic heterocycles, and only 4 examples with a singly-ortho-substituted electrophile (representative example in Scheme 1). In nearly all these cases, reactions were initially developed with a homogeneous Pd catalyst (typically Pd(OAc)2 with either no exogenous ligand or PPh3 as ligand) on smaller scale, and only evaluated with heterogeneous Pd catalysts, (typically Pd/C or Pd black) for scaleup to decagram to multi-hundred-kilo scales, once process considerations such as process mass intensity and separation costs became significant. Notably, no polymer-supported catalysts were used; for these real-world examples of heterogeneous catalytic cross-coupling on scale, inorganic heterogeneous catalysts (such as Pd/C) are far cheaper and more robust than polymer-supported ligated Pd catalysts, and thus more commonly employed.
When designing a polymer-ligand solid support for Pd, the ligands should not simply be immobilized variants of homogeneous ligands which effect catalysis in the presence of Pd. Rather, immobilized ligands should optimize the redeposition of Pd onto the solid phase at the end of each catalytic cycle in a catalytically active form that is ready for a subsequent catalytic cycle. Ligand sets which are rarely seen in homogeneous cross-coupling, then, appear in heterogeneous ligand-containing Pd catalysts. For example, Buchmeiser et al. have reported high-turnover N,N-bidentate ligands (Figure 2) which achieve turnover numbers (TONs) of >10⁵ in the Heck reactions of iodobenzene, and TONs of ca. 10³ in the amination of bromobenzene. These TONs are competitive with even the best solution TONs, giving clear advantages for this system for separation of the product from catalyst post-reaction.
Kinetics
The “shuttling” kinetics of Pd mass transfer (from solid phase to solution phase and back to solid phase) have been verified by three-phase test experiments, while the solution-phase catalytic activity which characterizes most heterogeneous cross-coupling has been verified by TEM, hot filtration, and poisoning experiments. However, truly heterogeneous cross-coupling systems may exist. Poyatos et al. immobilized a Pd pincer carbene complex (Figure 3) on MK-10 clay and observed that while high TON (ca. 10³) and TOF were maintained relative to the soluble catalyst, no activity was found in the solution for the supported catalyst – a strong indicator of a fully heterogeneous catalytic mechanism.
Heterogeneous metal catalyzed cross-coupling in flow vs batch
For batch cross-couplings which use immobilized Pd, the concentration of solution-phase Pd increases dramatically when the reaction commences (as Pd is transferred out of the solid phase), and has decreased dramatically by the time full conversion has been achieved (by readsorption or precipitation onto the solid support). Such a kinetic profile matches the processing requirements of a batch process – although some amount of metal remains in solution post-reaction, the supported Pd catalyst can usually be recycled several times, despite the limitations described above.
In contrast, continuous flow systems do not allow for effective metal redeposition on the solid support; the reaction stream will transport the Pd through the support due to continuous metal leaching/readsorption (Figure 4). Cumulative periods of operation inevitably result in significant metal leaching from the flow system, depleting the supported catalyst's activity and giving low recyclability, with – typically – no particular benefit for reactivity.
In principle, it is possible for the metal leaching inherent to continuous-flow cross-coupling to be avoided. Plucinski and coworkers developed a continuous Mizoroki–Heck and hydrogenation sequence consisting of two separated packed-bed reactors containing Pd/C. Because the Pd/C-catalyzed hydrogenation proceeds via a heterogeneous mechanism, metal leaching due to the second hydrogenation step is minimal, and Pd leached from the first part of the reactor during the Heck coupling can be recaptured by the second packed bed during the hydrogenation. By cycling the direction of flow between forward and reverse, catalytic activity could be maintained over two consecutive experiments, although a greater number of cycles would be desirable in order to vindicate this strategy for increasing turnover in solid-supported flow catalysts for cross-coupling.
Separation
Heterogeneous catalysts are easily removed from a reaction mixture by filtration. Although some amount of metal catalyst typically remains in the product from leaching, these amounts tend to be lower than those remaining after workup of a homogenous metal-catalyzed cross-coupling.
Magnetic removal
A heterogeneous catalyst consisting of Pd supported by silica-coated Fe₂O₃/Fe₃O₄ nanoparticles allows the reaction to be heated by electrical induction, and also allows facile magnetic separation of catalyst and product post-reaction. Copper ferrite has been reported as a heterocycle arylation catalyst and can be similarly separated from the reaction with a magnet.
Recycling
Heterogeneous cross-coupling catalysts typically lose some portion of activity to metal leaching between different runs as a result of the solution-phase catalytic cycle (see above), and hence can only be recycled a finite number of times.
Multiple groups have pointed out that the need for recycling is obviated at extremely high turnover and low catalyst loading, since in these cases the catalyst cost is negligible relative to the cost of other reaction components. As a result, for most cross-coupling reactions, in which heterogeneous catalysts generally require higher loadings than equivalent homogeneous ones, the benefits of heterogeneous catalysts afforded by the greater ease of recycling may be outweighed by the disadvantages – higher catalyst loadings, and the additional process costs. Additionally, when catalyst loadings are lower than 10 ppm – the regulatory limit for several metals including Pd in pharmaceutical APIs – separation of the metal following the reaction does not even need to be performed. This nullifies another of the commonly perceived advantages of heterogeneous catalysts over their homogeneous counterparts.
References
Organometallic chemistry
Carbon-carbon bond forming reactions
Catalysis | Heterogeneous metal catalyzed cross-coupling | [
"Chemistry"
] | 2,118 | [
"Catalysis",
"Carbon-carbon bond forming reactions",
"Coupling reactions",
"Organic reactions",
"Chemical kinetics",
"Organometallic chemistry"
] |
67,925,573 | https://en.wikipedia.org/wiki/Conia-ene%20reaction | In organic chemistry, the Conia-ene reaction is an intramolecular cyclization reaction between an enolizable carbonyl such as an ester or ketone and an alkyne or alkene, giving a cyclic product with a new carbon-carbon bond. As initially reported by J. M. Conia and P. Le Perchec, the Conia-ene reaction is a heteroatom analog of the ene reaction that uses an enol as the ene component. Like other pericyclic reactions, the original Conia-ene reaction required high temperatures to proceed, limiting its wider application. However, subsequent improvements, particularly in metal catalysis, have led to significant expansion of reaction scope. Consequently, various forms of the Conia-ene reaction have been employed in the synthesis of complex molecules and natural products.
History and mechanism
In the late 1960s, the laboratory of chemist Jean-Marie Conia investigated small carbocyclic molecules, specifically as products of ene-type reactions with carbonyls. These efforts culminated in a 1975 review paper titled “The Thermal Cyclisation of Unsaturated Carbonyl Compounds.”
In its original manifestation, the Conia-ene reaction comprised the intramolecular cyclization of ε,ζ-unsaturated ketones or aldehydes to functionalized cyclopentanes upon intense heating. The proposed mechanism invoked a six-membered, ene reaction-like transition state in which the enol tautomer reacts concertedly with the pendant alkene.
The same conditions were found to give six- and nine-membered rings with the appropriate substrates, although with lower yields and diastereoselectivity. In the case of γ,δ- and δ,ε-unsaturated ketones, equilibrium favored the linear product over the cyclopropane or cyclobutane. Alkynyl ketones were also found to cyclize under thermal conditions, giving a mixture of the conjugated and skipped cyclic enones.
Two key drawbacks prevented wider implementation of the initial Conia-ene reaction. First, molecules with additional functional groups were often incompatible with the high temperatures required for conversion. Second, regio- and diastereoselectivity depended entirely on the substrate, offering little to no control over the orientation of the product.
Advancements
In the decades after the discovery of the Conia-ene reaction, several improvements allowed for milder reaction conditions and greater control of product stereo- and regiochemistry. For example, the carbonyl component, formerly a ketone or aldehyde, became a substituted β-ketoester or malonate ester. Such carbonyls enolize much more readily, yielding better access to the desired enol tautomer. Additionally, the alkene component was replaced with an alkyne, which not only gave better cyclization in accordance with Baldwin’s rules, but also furnished a product containing an alkene that served as a useful handle for further transformations. Finally, recent efforts have featured metal-mediated and metal-catalyzed Conia-ene reactions that can be rendered asymmetric using chiral ligands.
Activation modes
These advancements have produced five main types of Conia-ene reactions characterized by the operative activation mode: namely, enolate, alkyne, or ene-yne activation, and one- or two-metal dual activation. Note that though the mechanisms of Conia-ene variants differ from the initial ene-like cyclization, they are still considered Conia-ene or Conia-ene-type reactions. In addition, due to the complexity of some Conia-ene reaction systems, the true mechanism may lie somewhere between several different activation modes.
Enolate activation
Enolate activation is the simplest Conia-ene activation mode. In this mode, the carbonyl starting material is treated with a strong base, such as nBuLi, NaH, or tBuOK, to form a metal-stabilized enolate, which then attacks the tethered alkyne and transfers the metal cation. An early example of enolate activation was reported by Taguchi and coworkers in 1999. The authors found that in the presence of catalytic base, alkynyl-substituted malonate esters undergo facile cyclization to the corresponding cyclopentanes. High yields were also obtained with substituted cyanoacetate, sulfonylacetate, and phosphonoacetate analogs.
Alkyne activation
In Conia-ene reactions proceeding via alkyne activation, a suitable late transition metal (Au, Ag, Pt, Pd) coordinates to the alkyne and increases its electrophilicity; thus, the enol tautomer of the carbonyl can attack more readily. Toste et al. pioneered two of the first examples of alkyne activation in 2004. Using a cationic Au(I) complex, the authors formed a wide variety of cyclized products from linear β-ketoester starting materials. Notably, the reactions are run under mild conditions and give high diastereoselectivity. Moreover, by shortening the alkyne tether from three carbons to two, substituted cyclopentenes can also be accessed.
Ene-yne activation
In ene-yne activation, the least common of the five modes, a single metal species coordinates with the enol alkene and the tethered alkyne, simultaneously activating both moieties for reaction. Nickel, cobalt, and rhenium complexes have all been employed in this manner. A representative example was reported by Malacria et al. in 1994, in which an alkynyl substituted β-ketoester was treated with cyclopentadienyl cobalt complex and irradiation to give disubstituted methylene cyclopentane.
One-metal dual activation
To effect dual activation by a single metal, the same metal species that activates the enolate also interacts with the alkyne. Though the precise mechanisms are poorly understood and likely vary from case to case, metals such as In, Zn, Fe, and Cu are proposed to operate via this mode. One reaction system thought to proceed via one-metal dual activation is that developed by Shaw et al. in 2014. Using a catalytic Fe(III)-(Salen) complex, Shaw and coworkers were able to access chiral cyclopentanes from an array of alkynyl-tethered β-ketoesters and analogs thereof. The reaction tolerated a wide range of ketones (phenyl, homoallyl, cyclopropyl, 2-furyl, etc.), esters (ethyl, tert-butyl, etc.), and ester analogs (nitro, cyano, sulfonyl, etc.).
Two-metal dual activation
Two-metal dual activation represents the combination of the enolate activation mode and the alkyne activation mode into a single reaction system. Generally, a hard, oxophilic metal (K, Na, Ag) activates the enolate oxygen, while a soft, carbophilic metal (Pd, Cu, Mo) coordinates with the alkyne. In some instances, however, the precise role of each metal is unclear. For example, in a 2005 study Toste et al. found that treatment of an alkynyl-tethered β-ketoester with a Pd(II) phosphine complex and Yb(OTf)₃ effected asymmetric cyclization to the corresponding cyclopentane. It is proposed that a Pd-enolate adds into a Yb-activated alkyne, though there is also precedent for Pd activation of alkynes.
Applications in total synthesis
Following their development of Au-catalyzed Conia-ene reactions, Toste and coworkers employed such a transformation toward the alkaloid natural product lycopladine A. Starting from chiral cyclohexenone 1, a series of enone functionalizations gave silyl enol ether 2 as the Conia-ene precursor. To effect cyclization, 2 was treated with catalytic AuCl(PPh₃) and AgBF₄ to furnish vinyl iodide 3 in high yield as a single diastereomer. The remainder of the molecule was completed in three steps to give (+)-lycopladine A in eight steps and 17% overall yield.
In 2012, Carreira et al. synthesized a halogenated terpene isolated from the red alga Laurencia majuscula, employing an Au-catalyzed Conia-ene cyclization as the penultimate step. Having obtained silyl enol ether 7 in 11 steps from bicycle 6, itself the product of a Diels–Alder cycloaddition between 4 and enone 5, the authors subjected 7 to 50 mol% of Echavarren's catalyst to deliver tricycle 8 in 65% yield. This compound was then elaborated to the natural product by chlorination of the exo-methylene.
In 2020, Yang and coworkers employed a diastereoselective Conia-ene reaction during their asymmetric synthesis of (+)-waihoensene, a structurally dense terpenoid from Podocarpus totara var. waihoensis, first synthesized by the Lee group in 2017. Vinylogous ester 9 was first functionalized in six steps to chiral Conia-ene precursor 10. Subsequent treatment of 10 with tBuOK in DMSO gave bicycle 11 in 83% yield as a single diastereomer. This compound then required eight additional transformations to reach (+)-waihoensene in 15 steps and 4% overall yield.
References
Organic reactions
Reaction mechanisms | Conia-ene reaction | [
"Chemistry"
] | 2,079 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry",
"Organic reactions"
] |
78,086,913 | https://en.wikipedia.org/wiki/O-Pivalylbufotenine | O-Pivalylbufotenine, or bufotenine O-pivalate, also known as 5-pivaloxy-N,N-dimethyltryptamine or O-pivalyl-N,N-dimethylserotonin, is a synthetic tryptamine derivative and putative serotonergic psychedelic. It is the O-pivalyl analogue of the naturally occurring but peripherally selective serotonergic tryptamine bufotenine (5-hydroxy-N,N-dimethyltrypamine or N,N-dimethylserotonin) and is thought to act as a centrally penetrant prodrug of bufotenine.
O-Pivalylbufotenine shows psychedelic-like effects in animals but is less active than anticipated, perhaps due to its high lipophilicity and, by extension, high plasma protein binding and ester hydrolysis into bufotenine prior to crossing the blood–brain barrier. In addition to theoretically acting as a prodrug of bufotenine, which is a non-selective serotonin receptor agonist, O-pivalylbufotenine may also interact directly with certain serotonin receptors.
Besides O-pivalylbufotenine, other bufotenine O-acyl esters and putative or confirmed bufotenine prodrugs, such as O-acetylbufotenine among others, have been developed and studied.
O-Pivalylbufotenine was first described in the scientific literature by 1979.
See also
4-AcO-DMT
α-Methyltryptophan
Neurotransmitter prodrug
References
Neurotransmitter precursors
Pivalate esters
Prodrugs
Psychedelic tryptamines
Serotonin receptor agonists
Dimethylamino compounds | O-Pivalylbufotenine | [
"Chemistry"
] | 397 | [
"Chemicals in medicine",
"Prodrugs"
] |
78,088,138 | https://en.wikipedia.org/wiki/Last%20Call%20BBS | Last Call BBS is a puzzle video game developed by Zachtronics. Released on August 4, 2022, it was the studio's final game before its closure.
Gameplay
Set on a Z5 Powerlance, a fictional 1990s PC, Last Call BBS features eight small puzzle games with distinct gameplay. To install the games, the player needs to dial an in-game BBS. All games contain elements of previous works developed by Zachtronics. For instance, 20th Century Food Court, a factory simulation game where the player arranges machines to make food, incorporates the assembly-line elements of SpaceChem. With point-and-click controls, the games begin with easy introductory levels before increasing in difficulty as new gameplay elements are introduced. In addition to the puzzle games, the game contains a circuit-creation game, a model-building simulator, and a retro-style solitaire minigame, similar to those present in previous Zachtronics games.
Development and release
The closure of Zachtronics was announced in June 2022, along with the announcements of Last Call BBS and a compilation of all solitaire minigames from the studio's previous works. The official trailer for Last Call BBS was released online later that month. The game was released on Steam via early access on July 5, and its final version was released on August 4. In an interview with Kotaku, Zach Barth, Zachtronics' founder and lead designer, said that the reason for the studio's closure was that they "felt it was time for a change." The team officially disbanded following the release of The Zachtronics Solitaire Collection, though some of its members, including Barth, are still active in game development. Last Call BBS received favorable reviews from critics upon release.
References
External links
2022 video games
MacOS games
Puzzle video games
Single-player video games
Video games about computing
Video games developed in the United States
Windows games
Zachtronics games | Last Call BBS | [
"Technology"
] | 397 | [
"Works about computing",
"Video games about computing"
] |
78,088,544 | https://en.wikipedia.org/wiki/Fuel%20homogenizer | A fuel homogenizer is a mechanical device used to improve the quality and combustion efficiency of various fuels by reducing particle size and ensuring a uniform mixture. By applying mechanical shear forces, fuel homogenizers break down larger fuel droplets into smaller, more uniform sizes, promoting better atomization during combustion. This process can lead to optimized combustion, reduced sludge formation, lower emissions, and enhanced overall fuel efficiency. Fuel homogenizers are commonly utilized in industries such as maritime shipping, power generation, and petrochemical processing, particularly with heavy fuels like heavy fuel oil (HFO) and marine diesel oil (MDO). They are also increasingly applied to biofuels and fuel blends to prevent degradation and improve stability.
History
Fuel homogenizers were developed in the mid-20th century to address challenges associated with poor fuel quality, particularly in industries that relied on heavy fuel oil (HFO). Early models used mechanical seals to prevent leaks in high-pressure fuel systems and were primarily aimed at breaking down fuel particles and emulsifying water to improve combustion efficiency. Over time, technological advancements led to the introduction of magnetic couplings, which offered reduced wear, lower maintenance, and increased reliability compared to mechanical seals. The adoption of fuel homogenizers became more widespread as environmental regulations, such as MARPOL Annex VI, required industries to reduce emissions. Modern fuel homogenizers are now widely used to improve fuel quality and stability across a range of fuels, including biofuels and fuel blends, in industries such as maritime shipping and power generation.
Overview
Fuel homogenizers operate by applying mechanical shear forces to fuel, typically through a rotor-stator mechanism. This process reduces the size of larger fuel droplets, enhancing fuel atomization during injection in combustion engines. Improved atomization leads to more efficient combustion, less unburnt residue, and reduced emissions of nitrogen oxides (NOₓ), particulate matter (PM), and carbon dioxide (CO₂).
Installation and benefits
The benefits of fuel homogenizers vary depending on their placement within the fuel system:
Sludge reduction: When installed before fuel separators, fuel homogenizers can reduce sludge production by breaking down larger asphaltene clusters. This allows separators to remove impurities more efficiently. Studies have indicated that sludge reduction of up to 50–80% can be achieved with the use of homogenizers.
Combustion improvement: Installing homogenizers within the booster module, just before the automatic filter, enhances fuel atomization and combustion efficiency. This placement can lead to lower particulate emissions, reduced fuel consumption, and decreased CO₂ emissions.
Fuel homogenizers are also utilized to prevent fuel degradation when installed over fuel tanks. This application is particularly beneficial for biofuels, blends, and low-quality fuels, as the homogenizer helps maintain fuel stability by preventing stratification and ensuring consistent fuel flow.
Technology and functionality
The core technology of fuel homogenizers involves the application of mechanical shear forces to reduce fuel particle sizes to between 3 and 5 microns. This is typically achieved using a rotor-stator mechanism that subjects the fuel to intense shear forces, breaking down larger particles and clusters.
This process not only improves fuel atomization during combustion but also reduces the formation of sludge and carbon deposits within the engine. Additionally, homogenization can enhance water-in-fuel emulsions, where small amounts of water are mixed with fuel to lower combustion temperatures, thereby reducing NOₓ emissions.
Water-fuel emulsions
Water-fuel emulsions (WFE) are created by mixing water into the fuel to form a stable emulsion. This method is beneficial for reducing harmful emissions and improving combustion efficiency. The evaporation of water during combustion causes micro-explosions that further break down fuel droplets, resulting in finer atomization and more complete combustion.
Applications
Fuel homogenizers are employed in several industries:
Maritime shipping: In the maritime industry, homogenizers reduce sludge formation, improve engine efficiency, and assist vessels in complying with international emissions regulations such as MARPOL Annex VI. The reduction of CO₂ emissions through fuel savings is increasingly important due to regulations like the EU Emissions Trading System (EU ETS) and the FuelEU Maritime initiative, which aim to reduce greenhouse gas emissions from the maritime sector.
Power generation: Power plants that utilize heavy fuels benefit from improved fuel atomization and sludge reduction when using homogenizers, leading to enhanced combustion efficiency and reduced emissions.
Biofuels and blends: Homogenizers are critical in stabilizing biofuels and fuel blends, ensuring smooth engine operation and consistent fuel quality.
Environmental and economic benefits
The use of fuel homogenizers offers both environmental and economic advantages:
Reduction in emissions: By improving fuel atomization and combustion efficiency, homogenizers contribute to lower emissions of NOₓ, particulate matter, and carbon dioxide (CO₂). Improved combustion efficiency results in reduced fuel consumption, leading to lower CO₂ emissions per unit of energy produced.
Compliance with emission regulations: The reduction in CO₂ emissions is significant in the context of emission trading systems such as the European Union Emissions Trading System (EU ETS). Under the EU ETS and the upcoming FuelEU Maritime regulation, maritime operators are required to monitor, report, and reduce their greenhouse gas emissions. Fuel homogenizers can assist in achieving compliance by lowering CO₂ emissions through improved fuel efficiency.
Cost savings: Reduced sludge formation and improved combustion efficiency can lower fuel consumption and decrease costs associated with sludge disposal and emission allowances.
Extended equipment lifespan: Decreased sludge and carbon deposits allow for longer intervals between engine maintenance, reducing operational costs and extending the lifespan of equipment.
References
Fuel technology
Combustion
Mechanical engineering
Marine engineering
Environmental engineering | Fuel homogenizer | [
"Physics",
"Chemistry",
"Engineering"
] | 1,177 | [
"Applied and interdisciplinary physics",
"Chemical engineering",
"Environmental engineering",
"Combustion",
"Civil engineering",
"Mechanical engineering",
"Marine engineering"
] |
78,095,563 | https://en.wikipedia.org/wiki/RELHIC | A RELHIC, or Reionization-Limited HI Cloud, is a theoretical construct in astrophysics that describes a type of dark matter halo containing neutral hydrogen gas. These clouds are characterized by their potential to reveal the nature of dark matter and the conditions for galaxy formation in the early universe, particularly during the epoch of reionization. RELHICs are thought to be spherical gaseous systems that are in hydrostatic equilibrium and are influenced by both the presence of dark matter and a background ultraviolet radiation field from stars and galaxies.
The concept of RELHICs was proposed by astrophysicists Alejandro Benitez-Llambay and Julio F. Navarro, as part of their work on cosmological simulations and observations. They suggest that these structures could serve as observational targets to better understand the nature of dark matter and the formation of galaxies.
Recently, a RELHIC candidate, referred to as Cloud-9, was detected near the spiral galaxy Messier 94.
References
Dark matter | RELHIC | [
"Physics",
"Astronomy"
] | 203 | [
"Dark matter",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Unsolved problems in physics",
"Exotic matter",
"Physics beyond the Standard Model",
"Matter"
] |
73,752,688 | https://en.wikipedia.org/wiki/Protactinium%20trihydride | Protactinium trihydride is an inorganic compound with the chemical formula PaH. It is isostructural with uranium trihydride and can be prepared by reacting protactinium and hydrogen at 250°C and 600 mmHg. Theoretical calculations show that it can form further compounds PaH (n = 4, 5, 8, 9) under high pressure. Protactinium trihydride is sensitive to moist air and oxygen.
References
Protactinium compounds
Metal hydrides | Protactinium trihydride | [
"Chemistry"
] | 108 | [
"Metal hydrides",
"Inorganic compounds",
"Reducing agents"
] |
73,755,426 | https://en.wikipedia.org/wiki/Null%20infinity | In theoretical physics, null infinity is a region at the boundary of asymptotically flat spacetimes. In general relativity, straight paths in spacetime, called geodesics, may be space-like, time-like, or light-like (also called null). The distinction between these paths stems from whether the spacetime interval of the path is positive (corresponding to space-like), negative (corresponding to time-like), or zero (corresponding to null). Light-like paths physically correspond to physical phenomena which propagate through space at the speed of light, such as electromagnetic radiation and gravitational radiation. The boundary of a flat spacetime is known as conformal infinity, and can be thought of as the end points of all geodesics as they go off to infinity. The region of null infinity corresponds to the terminus of all null geodesics in a flat Minkowski space. The different regions of conformal infinity are most often visualized on a Penrose diagram, where they make up the boundary of the diagram. There are two distinct regions of null infinity, called past and future null infinity, which can be denoted using a script '' as and . These two regions are often referred to as 'scri-plus' and 'scri-minus' respectively. Geometrically, each of these regions actually has the structure of a topologically cylindrical three dimensional region.
The study of null infinity originated from the need to describe the global properties of spacetime. While early methods in general relativity focused on the local structure built around local frames of reference, work beginning in the 1960s began analyzing global descriptions of general relativity, analyzing the structure of spacetime as a whole. The original study of null infinity originated with Roger Penrose's work analyzing black hole spacetimes. Null infinity is a useful mathematical tool for analyzing behavior in asymptotically flat spaces when limits of null paths need to be taken. For instance, black hole spacetimes are asymptotically flat, and null infinity can be used to characterize radiation in the limit that it travels outward away from the black hole. Null infinity can also be considered in the context of spacetimes which are not necessarily asymptotically flat, such as in the FLRW cosmology.
Conformal compactification in Minkowski spacetime
The metric for a flat Minkowski spacetime in spherical coordinates is $ds^2 = -dt^2 + dr^2 + r^2\,d\Omega^2$. Conformal compactification induces a transformation which preserves angles, but changes the local structure of the metric and adds the boundary of the manifold, thus making it compact. For a given metric $g_{\mu\nu}$, a conformal compactification scales the entire metric by some conformal factor $\Omega^2$, giving $\tilde{g}_{\mu\nu} = \Omega^2 g_{\mu\nu}$, such that all of the points at infinity are scaled down to a finite value. Typically, the radial and time coordinates are transformed into null coordinates $u = t - r$ and $v = t + r$. These are then transformed as $\tilde{u} = \arctan(u)$ and $\tilde{v} = \arctan(v)$ in order to use the properties of the inverse tangent function to map infinity to a finite value. The typical time and space coordinates may be introduced as $T = \tilde{v} + \tilde{u}$ and $R = \tilde{v} - \tilde{u}$. After these coordinate transformations, a conformal factor $\Omega^2 = 4\cos^2\tilde{u}\,\cos^2\tilde{v}$ is introduced, leading to a new unphysical metric for Minkowski space:

$d\tilde{s}^2 = -dT^2 + dR^2 + \sin^2 R\,d\Omega^2$.

This is the metric on a Penrose diagram. Unlike the original metric, this metric describes a manifold with a boundary, given by the restrictions $0 \le R \le \pi$ and $|T| + R \le \pi$. There are two null surfaces on this boundary, corresponding to past and future null infinity. Specifically, future null infinity consists of all points where $T = \pi - R$ and $0 < R < \pi$, and past null infinity consists of all points where $T = R - \pi$ and $0 < R < \pi$.

From the coordinate restrictions, null infinity is a three dimensional null surface, with a cylindrical topology $\mathbb{R} \times S^2$.
The construction given here is specific to the flat metric of Minkowski space. However, such a construction generalizes to other asymptotically flat spaces as well. In such scenarios, null infinity still exists as a three-dimensional null surface at the boundary of the spacetime manifold, but the manifold's overall structure might be different. For instance, in Minkowski space, all null geodesics begin at past null infinity and end at future null infinity. However, in the Schwarzschild black hole spacetime, the black hole event horizon leads to two possibilities: geodesics may end at null infinity, but may also end at the black hole's future singularity. The presence of null infinity (along with the other regions of conformal infinity) guarantees geodesic completion on the spacetime manifold, where all geodesics either terminate at a true singularity or intersect the boundary at infinity.
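As a check on the algebra above, the conformal factor can be verified symbolically. The sketch below (using SymPy; the variable names are illustrative) confirms that the two-dimensional $(t, r)$ part of the metric, written as $-du\,dv$ in null coordinates, acquires exactly the factor $\Omega^2 = 4\cos^2\tilde{u}\,\cos^2\tilde{v}$ under the arctangent compactification, using $-dT^2 + dR^2 = -4\,d\tilde{u}\,d\tilde{v}$:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Compactified null coordinates: u~ = arctan(u), v~ = arctan(v)
U = sp.atan(u)
V = sp.atan(v)

# With T = V + U and R = V - U one has -dT^2 + dR^2 = -4 dU dV,
# and dU = du/(1 + u^2), dV = dv/(1 + v^2).  Since the physical
# 2D metric is -du dv, the conformal factor relating the two is:
Omega2 = 4 * sp.diff(U, u) * sp.diff(V, v)

# Verify that Omega^2 = 4 cos^2(u~) cos^2(v~)
assert sp.simplify(Omega2 - 4 * sp.cos(U)**2 * sp.cos(V)**2) == 0
print(sp.simplify(Omega2))  # 4/((u**2 + 1)*(v**2 + 1))
```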
Other physical applications
The symmetries of null infinity are characteristically different from those of the typical regions of spacetime. While the symmetries of a flat Minkowski spacetime are given by the Poincaré group, the symmetries of null infinity are instead given by the Bondi–Metzner–Sachs (BMS) group. The work by Bondi, Metzner, and Sachs characterized gravitational radiation using analyses related to null infinity, whereas previous work such as the ADM framework dealt with characterizations of spacelike infinity. In recent years, interest has grown in studying gravitons at the null infinity boundary. Using the BMS group, quanta on null infinity can be characterized as massless spin-2 particles, consistent with the quanta of general relativity being gravitons.
References
General relativity
Lorentzian manifolds
Theoretical physics
Wikipedia Student Program | Null infinity | [
"Physics"
] | 1,093 | [
"General relativity",
"Theoretical physics",
"Theory of relativity"
] |
73,759,088 | https://en.wikipedia.org/wiki/Tropicamide/phenylephrine | Tropicamide/phenylephrine, sold under the brand name Mydcombi is a fixed dose combination of tropicamide and phenylephrine used to dilate the eyes (mydriasis). It contains, tropicamide, an anticholinergic, and phenylephrine, as the hydrochloride, an alpha-1 adrenergic receptor agonist. It is sprayed into the eyes.
It was approved for medical use in the United States in May 2023.
Medical uses
Tropicamide/phenylephrine is used for the short-term dilation of the pupils.
References
External links
Alpha-1 adrenergic receptor agonists
Combination drugs
Muscarinic antagonists
Ophthalmology drugs | Tropicamide/phenylephrine | [
"Chemistry"
] | 165 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
73,759,506 | https://en.wikipedia.org/wiki/YM-254890 | YM-254890 is a macrolide antibiotic derived from Chromobacterium species. It is used as a pharmacological research compound which acts as a selective inhibitor of Gq mediated signalling. However the claimed selectivity for Gq has been disputed.
References
Macrolides
Nitrogen heterocycles | YM-254890 | [
"Chemistry"
] | 68 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
73,761,875 | https://en.wikipedia.org/wiki/Lam%C3%A9%27s%20theorem | Lamé's Theorem is the result of Gabriel Lamé's analysis of the complexity of the Euclidean algorithm. Using Fibonacci numbers, he proved in 1844 that when looking for the greatest common divisor (GCD) of two integers a and b, the algorithm finishes in at most 5k steps, where k is the number of digits (decimal) of b.
Statement
The number of division steps in the Euclidean algorithm with entries $a$ and $b$ is less than 5 times the number of decimal digits of $\min(a, b)$.
Proof
Let $a \ge b \ge 1$ be two positive integers. Applying to them the Euclidean algorithm provides two sequences $(q_1, \ldots, q_n)$ and $(r_2, \ldots, r_n)$ of positive integers such that, setting $r_0 = a$, $r_1 = b$ and $r_{n+1} = 0$, one has

$r_{i-1} = q_i r_i + r_{i+1}, \qquad 0 \le r_{i+1} < r_i,$

for $i = 1, \ldots, n$, and $r_n = \gcd(a, b)$.

The number $n$ is called the number of steps of the Euclidean algorithm, since it is the number of Euclidean divisions that are performed.

The Fibonacci numbers are defined by $F_0 = 0$, $F_1 = 1$, and

$F_{k+1} = F_k + F_{k-1}$

for $k \ge 1$.

The above relations show that $r_n \ge 1 = F_2$ and $r_{n-1} \ge 2 = F_3$. By induction,

$r_{n-i} = q_{n-i+1} r_{n-i+1} + r_{n-i+2} \ge r_{n-i+1} + r_{n-i+2} \ge F_{i+1} + F_i = F_{i+2}.$

So, if the Euclidean algorithm requires $n$ steps, one has $b = r_1 \ge F_{n+1}$.

One has $F_k \ge \varphi^{k-2}$ for every integer $k \ge 2$, where $\varphi = (1 + \sqrt{5})/2$ is the Golden ratio. This can be proved by induction, starting with $F_2 = 1 = \varphi^0$ and $F_3 = 2 > \varphi$, and continuing by using that $\varphi^2 = \varphi + 1$:

$F_{k+1} = F_k + F_{k-1} \ge \varphi^{k-2} + \varphi^{k-3} = \varphi^{k-3}(\varphi + 1) = \varphi^{k-1}.$

So, if $n$ is the number of steps of the Euclidean algorithm, one has

$b \ge F_{n+1} \ge \varphi^{n-1},$

and thus

$n - 1 \le \log_\varphi b,$

using the fact that the logarithm to base $\varphi > 1$ is increasing.

If $k$ is the number of decimal digits of $b$, one has $b < 10^k$ and thus $\log_\varphi b < k \log_\varphi 10$.

So,

$n - 1 < k \log_\varphi 10 < 5k,$

since $\log_\varphi 10 \approx 4.785 < 5$, and, as both members of the inequality are integers,

$n \le 5k,$

which is exactly what Lamé's theorem asserts.
As a side result of this proof, one gets that the pairs of integers that give the maximum number of steps of the Euclidean algorithm (for a given size of ) are the pairs of consecutive Fibonacci numbers.
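The bound is easy to exercise numerically. A minimal Python sketch (the function names are illustrative): count the division steps of the Euclidean algorithm and compare them with 5 times the number of decimal digits of the smaller argument; as noted above, consecutive Fibonacci numbers realize the worst case.

```python
def euclid_steps(a: int, b: int) -> int:
    """Number of division steps performed by the Euclidean algorithm."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

def lame_bound(b: int) -> int:
    """Lamé's bound: 5 times the number of decimal digits of b."""
    return 5 * len(str(b))

# Consecutive Fibonacci numbers give the worst case for a given size of b.
F = [0, 1]
while len(F) < 30:
    F.append(F[-1] + F[-2])

for k in range(2, 29):
    a, b = F[k + 1], F[k]
    assert euclid_steps(a, b) <= lame_bound(min(a, b))

print(euclid_steps(F[29], F[28]), lame_bound(F[28]))  # 27 and 30
```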
References
Bibliography
Bach, Eric; Shallit, Jeffrey Outlaw (1996). Algorithmic Number Theory. Cambridge, Mass.: MIT Press.
Carvalho, João Bosco Pitombeira de (1993). Olhando mais de cima: Euclides, Fibonacci e Lamé. Revista do Professor de Matemática, São Paulo, n. 24, p. 32-40, 2 sem.
Theorems in number theory
Algorithms
Information technology
Number theoretic algorithms
Euclid | Lamé's theorem | [
"Mathematics",
"Technology"
] | 416 | [
"Information and communications technology",
"Mathematical theorems",
"Algorithms",
"Mathematical logic",
"Applied mathematics",
"Theorems in number theory",
"Information technology",
"Mathematical problems",
"Number theory"
] |
73,762,558 | https://en.wikipedia.org/wiki/Biscogniauxia%20atropunctata | Biscogniauxia atropunctata, the hypoxylon canker, is a species of sac fungus in the family Graphostromataceae. Like many other fungi in the genus, it is a plant pathogen; specifically this species can cause Biscogniauxia (Hypoxylon) canker and dieback disease in host trees.
Taxonomy
Biscogniauxia atropunctata contains the following varieties:
Biscogniauxia atropunctata maritima
Biscogniauxia atropunctata atropunctata
Description
Patches of the fungus can reach a few metres across. It is white, sometimes with black patches, and usually with a black margin.
Similar species
In addition to other species within the genus, Diatrype stigma, Camarops tubulina, Kretzschmaria deusta, and species of Camillea can appear similar, as can Arthonia lichens.
Distribution
This species is found in spring and early summer east of the Rocky Mountains of North America.
Ecology
When not pathogenic, Biscogniauxia atropunctata is saprobic on oak and other hardwood trees, causing a white rot on the host deadwood. The fruiting body grows in patches with a whitish-gray surface covered by black dots; with age, the surface becomes blackened overall.
The fungus can colonize healthy trees and live undetected and harmlessly in the bark and sapwood for some time, its spread kept in check by the host's natural defenses. However, when the trees become stressed, the fungus invades weakened host tissues, causing the dieback disease. Initially the infection kills affected branches, then progresses down the trunk to form a canker, girdling the tree and killing the entire crown.
References
Xylariales
Fungi of North America
Fungal plant pathogens and diseases
Fungal tree pathogens and diseases
Fungus species | Biscogniauxia atropunctata | [
"Biology"
] | 393 | [
"Fungi",
"Fungus species"
] |
70,820,827 | https://en.wikipedia.org/wiki/M9%20gun%20director | The M9 gun director was an electronic director developed by Bell Labs during World War II. This computer continuously calculated trigonometric firing solutions for anti-aircraft weapons against enemy aircraft. When cued by the SCR-584 centimetric gun-laying radar and used in concert with anti-aircraft guns firing shells with proximity fuzes, it helped form the most effective anti-aircraft weapon system utilized by the Allies during the war.
Background
During the late 1930s, the United States Army's Signal Corps attempted to use the newly developed SCR-268 radar to provide fire-control-quality data to the Sperry Corporation's M4 mechanical gun director. The SCR-268's long wavelength did not provide data accurate enough for the pairing to be an effective anti-aircraft weapon. In 1940, Vannevar Bush formed the National Defense Research Committee, whose section D-2, headed by Warren Weaver, was tasked with examining issues related to fire control.
Development
In May 1940, an engineer at Bell named David Parkinson had a dream about being in an anti-aircraft revetment where he also spotted a potentiometer. He spent the next couple of weeks working with his boss to draft specifications for an analog computer that provided firing solutions for anti-aircraft guns. Later that year, Bell Labs, at the time led by Harvey Fletcher and Mervin Kelly, submitted a proposal to the National Defense Research Committee. Their proposed director would calculate course and speed of incoming aircraft, shell velocities and fuse timing, powder temperatures, shell drift, and air density and wind speeds to provide a predicted firing solution for the associated gun battery. The project was approved in December 1940 and the initial work on the project was completed by Drs. David B. Parkinson and Clarence A. Lovell under the direction of Dr. Edward Wente. A prototype, designated T-10, was delivered to the Army only a few days after the attack on Pearl Harbor and a few hundred sets were ordered immediately. As the SCR-584's development continued it was paired with the M9.
On November 9, 1943, a demonstration was held for senior Army leadership at the Bell Lab facility in Mullica Hill, New Jersey. Once operational testing was complete, the M9 was mass-produced at the Hawthorne Works in Cicero, Illinois.
Operational Use
90 mm anti-aircraft guns were normally operated in groups of four, utilizing the SCR-584 radar and being controlled by the M9 director. The SCR-584 was accurate to about 0.06 degrees (1 mil) and also provided automatic tracking. Direction and range information was sent directly to the M3 gun data computer and the M9 director, which directed and laid the guns automatically. All the crews had to do was load the guns.
SCR-584s with the associated M9 gun directors were rushed to the Anzio beachhead in February 1944 to assist with engaging the German-Italian air force that was jamming the SCR-268s and bombing the beachhead and harbor at night. On the evening of February 24, 1944, four American 90 mm guns opened fire on a flight of 12 Junkers Ju 88s, shooting down five of them. The success achieved that evening dramatically reduced German nighttime bombing moving forward.
In June 1944, the M9, working in concert with the SCR-584 and anti-aircraft batteries utilizing proximity fuses, formed the bulwark of defense against German V-1 flying bombs launched against southern England. Training and accuracy improved so that by the end of August, Allied crews were shooting down nearly two-thirds of incoming V-1s.
See also
Operation Diver
Citations
References
Bibliography
Web
Analog computers
Military equipment introduced from 1940 to 1944
Applications of control engineering
Artillery components
Ballistics
Anti-aircraft guns of the United States
Artillery operation | M9 gun director | [
"Physics",
"Technology",
"Engineering"
] | 775 | [
"Applied and interdisciplinary physics",
"Control engineering",
"Artillery components",
"Ballistics",
"Applications of control engineering",
"Components"
] |
70,821,684 | https://en.wikipedia.org/wiki/Alan%20Lidiard | Alan Bernard Lidiard (9 May 1928 – 21 November 2020), or A. B. Lidiard, was a British condensed matter physicist known for his research into defects in materials.
Education and career
Lidiard studied theoretical physics under Charles Coulson at King's College London, obtaining an MSc in 1950 and a PhD in 1952. He spent two years as a Fulbright scholar in the USA, first as a research assistant for Friedrich Seitz at the University of Illinois Urbana-Champaign and then under Charles Kittel at the University of California, Berkeley. He then took up a research fellowship in the Theoretical Division at the Atomic Energy Research Establishment in Harwell. Between 1957 and 1961, he was a Lecturer in Theoretical Physics at the University of Reading. He returned to Harwell and set up the radiation damage theory group in the Theoretical Physics Division (TPD). Lidiard headed the TPD from 1966 until his retirement. Afterwards, he moved to the Department of Physics at the University of Reading and the Department of Theoretical Chemistry at Oxford University.
Honors and awards
Lidiard was awarded the Guthrie Medal in 1988. He was a Fellow of the Institute of Physics and a Fellow of the Royal Society of Chemistry.
Personal life
Lidiard married three times. He had two daughters from his second marriage.
Bibliography
See also
Marshall Stoneham
Richard Catlow
References
1928 births
2020 deaths
Alumni of King's College London
Academics of the University of Reading
British physicists
Condensed matter physicists
Academics of the University of Oxford
People from Waltham St Lawrence | Alan Lidiard | [
"Physics",
"Materials_science"
] | 306 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
70,824,018 | https://en.wikipedia.org/wiki/Trazpiroben | Trazpiroben (developmental code name TAK-906) is a dopamine antagonist drug which was under development for the treatment of gastroparesis. It acts as a peripherally selective dopamine D2 and D3 receptor antagonist. The drug has been found to strongly increase prolactin levels in humans, similarly to other peripherally selective D2 receptor antagonists like domperidone. Clinical development of trazpiroben was discontinued before April 2022. Trazpiroben was originated by Altos Therapeutics and was under development by Takeda Oncology.
References
External links
Trazpiroben - AdisInsight
Abandoned drugs
D2 antagonists
D3 antagonists
Motility stimulants
Peripherally selective drugs
Prolactin releasers
Benzoic acids
Spiro compounds
Cyclohexanes
Imidazolidinones
Piperidines | Trazpiroben | [
"Chemistry"
] | 183 | [
"Organic compounds",
"Spiro compounds",
"Drug safety",
"Abandoned drugs"
] |
70,828,297 | https://en.wikipedia.org/wiki/Harald%20A.%20Enge | Harald Anton Enge (September 28, 1920, Fauske Municipality, Nordland, Norway – April 14, 2008, Middlesex County, Massachusetts) was a Norwegian-American experimental nuclear physicist and inventor of instrumentation used in nuclear physics. He is known for the Enge split-pole spectrograph, which became a standard instrument of nuclear physics research.
Biography
Enge completed his secondary education in 1940 in Bodø. He studied electrical engineering and received in 1947 his engineering degree from the Norwegian Institute of Technology (now part of the Norwegian University of Science and Technology). He married his first wife in 1947. From 1948 to 1955 he was a research associate and lecturer in physics at the University of Bergen. For a year and a half in 1950 and 1951, he worked at Massachusetts Institute of Technology (MIT). For four months he was supported by MIT's Foreign Students Summer Program and then was given a salaried job by William Weber Buechner (1914–1985). At MIT Enge did research in nuclear physics using a magnetic spectrograph while working with the team led by Robert J. Van de Graaff. During this time at MIT, Enge also designed his first broad-range spectrograph, which he built when he returned to the University of Bergen. In 1954 he received his doctorate from the University of Bergen. His dissertation was supervised by Bjørn Trumpy.
In the MIT physics department, Enge was an instructor from 1955 to 1956, an assistant professor from 1956 to 1959, an associate professor from 1959 to 1963, and a full professor from 1963 to 1986, when he retired as professor emeritus. He became a U.S. citizen.
He was, for many years, the director of the MIT research group started by Robert J. Van de Graaff and was an internationally recognized expert on the design of magnetic spectrometers.
In 1967 he was co-founder and chair of Deuteron Inc. He was also associated with the Deltaray Corporation (1969 to 1973) and the Gammaray Corporation (1981).
In 1984 he received the Tom W. Bonner Prize in Nuclear Physics.
In 1985 he received an honorary doctorate from the University of Bergen.
His first wife died in 1988. Upon his death in 2008, he was survived by his second wife, three sons from his first marriage, seven grandchildren, and five great-grandchildren.
Selected publications
Articles
Books
References
External links
1920 births
2008 deaths
Norwegian University of Science and Technology alumni
University of Bergen alumni
Massachusetts Institute of Technology faculty
20th-century American physicists
21st-century American physicists
Norwegian nuclear physicists
Experimental physicists
20th-century American inventors
21st-century American inventors
Norwegian inventors
People from Nordland
People from Bodø
Norwegian emigrants to the United States | Harald A. Enge | [
"Physics"
] | 574 | [
"Experimental physics",
"Experimental physicists"
] |
70,828,730 | https://en.wikipedia.org/wiki/Passive%20dual%20coil%20resonator | A passive dual coil resonator (pDCR) is a purely passive receive coil insert for a preclinical magnetic particle imaging (MPI) system which provides frequency-selective signal enhancement. The pDCR aims to enhance the frequency components associated with high mixing orders, which are critical to achieve a high spatial resolution.
Motivation
One of the biggest challenges in MPI is to achieve a good signal-to-noise ratio, especially for higher harmonics. The aim is to make as many harmonics of the induced particle signal as possible usable for image reconstruction, since the harmonics drop in intensity at higher frequencies and eventually disappear into the noise floor; recovering more of them yields a better spatial resolution. To enhance the harmonics at higher frequencies, one aims to increase the inductive coupling between the particles and the receive coils at higher harmonics.
Design
As the name suggests, the pDCR, which consists of two coaxial coils, is passive because it has no voltage source and no electrical connection to the rest of the MPI scanner system. The pDCR represents a resonant circuit and therefore also includes a capacitor. The pDCR picks up the particles' magnetization response mainly with its interior coil. It then sends out the received signal with its exterior coil, enhanced in the range of its resonant frequency. This output is then picked up by the scanner's receive coils. There will be coupling between all the different coils of the scanner and the pDCR; however, the described process dominates. The pDCR, i.e. the resonant circuit, is tuned to a frequency near the point at which the harmonics of the signal disappear into the noise floor, and thus serves the function mentioned above.
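As a rough illustration of this tuning, the resonant frequency of an LC loop follows f0 = 1/(2π√(LC)). The sketch below uses made-up component values (not taken from any published pDCR design) to show how a coil inductance and tuning capacitor set the frequency band that gets enhanced:

```python
import math

# Assumed, illustrative component values (not from a published pDCR design):
L_coil = 2.2e-6    # inductance of the interior pick-up coil, henry
C_tune = 470e-12   # tuning capacitor, farad
R_coil = 0.5       # series resistance of the coil, ohm

# Resonant frequency of the LC loop: f0 = 1 / (2*pi*sqrt(L*C))
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L_coil * C_tune))

# Quality factor Q = 2*pi*f0*L / R sets how sharply the signal
# enhancement is concentrated around f0.
Q = 2.0 * math.pi * f0 * L_coil / R_coil

print(f"resonant frequency: {f0 / 1e6:.2f} MHz, quality factor: {Q:.0f}")
```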
References
Medical imaging
Medical technology
Medical physics
Resonators | Passive dual coil resonator | [
"Physics",
"Biology"
] | 370 | [
"Applied and interdisciplinary physics",
"Medical physics",
"Medical technology"
] |
70,831,420 | https://en.wikipedia.org/wiki/Electroplasticity | Electroplasticity, describes the enhanced plastic behavior of a solid material under the application of an electric field. This electric field could be internal, resulting in current flow in conducting materials, or external. The effect of electric field on mechanical properties ranges from simply enhancing existing plasticity, such as reducing the flow stress in already ductile metals, to promoting plasticity in otherwise brittle ceramics. The exact mechanisms that control electroplasticity vary based on the material and the exact conditions (e.g., temperature, strain rate, grain size, etc.). Enhancing the plasticity of materials is of great practical interest as plastic deformation provides an efficient way of transforming raw materials into final products. The use of electroplasticity to improve processing of materials is known as electrically assisted manufacturing.
History
Electroplasticity was first discovered by Eugene S. Machlin, who reported in 1959 that applying an electric field made NaCl weaker and more ductile. Since then, the effect of electric fields on plasticity has been studied in many materials systems, including metals, ceramics, and semiconductors. Various mechanisms have been posited to explain electroplastic effects and their dependence on materials properties and external conditions. For most materials the electroplastic effect arises from a combination of multiple mechanisms. This is not surprising, given that electric fields directly affect the electrons which dictate bonding in materials, and therefore all higher-level phenomena such as dislocation motion, flow stress, and vacancy diffusion.
Electroplasticity in Metals
The application of DC electric fields is known to reduce the flow stress of metals and metal alloys while increasing the fracture strain. Several mechanisms have been put forth to explain this effect, including Joule heating, the electron wind force, dissolution of metallic bonds, and unpinning of dislocations due to the induction of magnetic fields. None of these mechanisms on its own can sufficiently explain the full extent of electroplasticity in metals. The application of electric fields has been shown to enhance the effect of superplasticity, which occurs in polycrystalline metals at high homologous temperatures (T > 0.5Tm). This is likely due to the electric field reducing cavitation, which can lead to premature fracture, and grain growth, which can prevent superplastic flow due to grain boundary sliding, in addition to reducing the activation energy for grain boundary sliding. The strength of the electroplastic effect scales with the magnitude of the applied electric field past some threshold value. While the application of an electric field typically augments the plasticity of metals, there are alloy systems that show a reduction in plasticity. Conrad and Li found that the activation energy for grain boundary sliding in Zn-5 wt.% Al increased by nearly 20% under the application of a 2 kV/cm DC electric field, resulting in more difficult deformation.
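To get a feel for what a roughly 20% change in activation energy means, a thermally activated rate scales as exp(-Q/RT). The sketch below plugs in assumed round numbers for Q and T (not values from the Zn-Al study) to estimate the resulting suppression of the sliding rate:

```python
import math

# Illustrative Arrhenius estimate for a thermally activated rate ~ exp(-Q/(R*T)).
# Q0 and T below are assumed round numbers, not values from the cited study.
R = 8.314    # gas constant, J/(mol*K)
Q0 = 100e3   # assumed baseline activation energy, J/mol
T = 600.0    # assumed test temperature, K

# Increasing the activation energy by 20% rescales the rate by exp(-0.2*Q0/(R*T)).
ratio = math.exp(-(1.2 * Q0) / (R * T)) / math.exp(-Q0 / (R * T))
print(f"rate suppression factor: {ratio:.3f}")
```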
Electroplasticity in Ceramics
The application of electric fields to ceramics can give rise to plasticity in materials that traditionally exhibit no plastic deformation. High homologous temperatures are, however, typically necessary to achieve significant plastic deformation in ceramic materials. Plastic deformation of ceramic oxides was found by Conrad et al. to occur under relatively modest electric field strengths (0.02-0.32 kV/cm). Strain-mediating defects such as vacancies and dislocations tend to be charged in ceramic materials due to the ionic or covalent nature of the bonding. Thus, the movement of electrons can have a direct impact on the mobility of these defects in ceramics and the subsequent plastic deformation. The primary effect of the electric field in the deformation of fine-grained ceramic oxides is to shift the diffusion pathway from bulk diffusion to grain boundary diffusion, resulting in greater diffusion and easier grain boundary sliding.
References
Electrochemistry
Materials science | Electroplasticity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 773 | [
"Electrochemistry",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
58,439,642 | https://en.wikipedia.org/wiki/Margham | Margham is an oil and gas field in Dubai, United Arab Emirates (UAE) and the largest onshore gas field in the emirate. The field is managed by Dusup - the Dubai Supply Authority. Condensate production ran at some 25,000 barrels per day in 2010. Margham also has an oil production capability.
Background
Production at Margham commenced in 1984, with three major gas-bearing formations located up to 10,000 feet below sea level. The field is connected by pipeline to Jebel Ali, where the gas condensate is loaded onto tankers for export. Dry gas is now also sent by pipeline to supply the Dubai grid, with consumption increasing since 2015.
Margham was initially developed as a liquids stripping/gas recycling project (dry gas was pumped back into the reservoir), but has operated as a gas storage facility for Dubai since 2008, allowing Dubai to depend on gas produced from Margham for its electricity generation and desalination needs. This usage, together with renewable sources such as DEWA's Mohammed bin Rashid Al Maktoum Solar Park, means that Dubai has eliminated the use of oil as a domestic energy fuel.
Although it is a major producer with ambitions to develop its trading activities to become a major global LNG hub, the UAE is actually a net importer of LNG.
References
Natural gas fields in the United Arab Emirates
Energy | Margham | [
"Physics"
] | 286 | [
"Energy (physics)",
"Energy",
"Physical quantities"
] |
58,446,188 | https://en.wikipedia.org/wiki/Electric%20motor%20brake | An electric motor brake (commonly referred to as an electric brake) is a safety feature incorporated into many modern power tools, such as circular saws, drills, and miter saws. Many manufacturers implement this feature into tools specifically with a spinning blade or cutter.
Usage in corded tools
An electric brake is commonly used in corded tools such as circular saws, miter saws, routers, bandsaws, angle grinders, and more recently, table saws. These mechanisms are designed to prevent injuries resulting from things like kickback or skin-to-blade contact. The way these mechanisms work is almost universally the same: when the trigger or switch is released, the polarity of the electricity running to the motor's brushes is reversed, which decelerates the motor to a stop much more quickly than it would otherwise coast down. In circular saws, this feature can reduce the risk of the saw jolting backwards when the saw is set down, as well as prevent damage to the cord or injury to the user. In other tools (such as miter saws or table saws), the brake can reduce the risk of injury to an operator's fingers or hands when the saw is switched off (such as when grabbing a scrap piece off the table). The disadvantage of this feature is that it wears the brushes prematurely when compared to non-brake tools. The first use of an electric brake on a tool was on the miter saw, invented in 1964 by Ed Niehaus, a tool engineer for Rockwell Tools. Since then, a number of manufacturers have incorporated brakes into their power tools.
Usage in cordless tools
Electric brakes on cordless tools have been prevalent since the invention of the first cordless drill by Makita in 1969. They are found on most cordless tools, with the exception of cordless vacuums and blowers, etc., where such a feature offers no benefit. The way the brake on cordless tools works is slightly different from that in corded models; when the switch is released, the motor terminals are shorted together, causing the motor to stop almost instantly by dissipating the rotational energy in the windings. This method is practical for the small permanent-magnet motors in cordless tools, which have low inertia, and is applicable to both brushed and brushless variants. It would not work on corded tools, as they generally use wound field magnets instead of permanent magnets.
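The shorted-terminal brake behaves like a first-order decay: the back-EMF kω drives a current I = kω/R through the windings, producing a retarding torque kI, so J dω/dt = -k²ω/R and the speed decays with time constant τ = JR/k². A minimal sketch with assumed motor parameters:

```python
import math

# Assumed parameters for a small permanent-magnet DC motor (illustrative only):
J = 1.0e-4    # rotor inertia, kg*m^2
k = 0.02      # motor constant, V*s/rad (equals N*m/A)
R = 0.1       # winding resistance, ohm
w0 = 2000.0   # initial speed, rad/s

# With the terminals shorted, J*dw/dt = -k^2*w/R, so the speed decays
# exponentially with time constant tau = J*R/k^2.
tau = J * R / k**2
for t in [0.0, tau, 3 * tau]:
    w = w0 * math.exp(-t / tau)
    print(f"t = {t * 1000:6.1f} ms  speed = {w:7.1f} rad/s")
```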
References
Power tools
Brakes | Electric motor brake | [
"Physics"
] | 503 | [
"Power (physics)",
"Power tools",
"Physical quantities"
] |
65,163,559 | https://en.wikipedia.org/wiki/Bleckley%20Plaza%20Plan | The Bleckley Plaza Plan was a proposed engineering project in Atlanta, Georgia, United States. Proposed by architect Haralson Bleckley in the early 1900s, the project would have seen numerous railroads in downtown Atlanta covered by a large public plaza that would have run from the Georgia State Capitol to Terminal Station, covering much of The Gulch. The project, while considered at numerous points in the early 1900s, never came to fruition.
History
The plan for a large plaza was created by Atlanta-based architect Haralson Bleckley in the early 1900s. The impetus behind the plan came in 1906 at a meeting of the Atlanta chapter of the American Institute of Architects (AIA) where the members declared the railroad tracks running between Forsyth Street and Central Avenue in downtown Atlanta were an eyesore that required fixing. The following year, Bleckley, an AIA member, reported back to the chapter with an extensive plan to cover the railroad tracks in downtown with a system of parks and high-rise buildings, raising the street level in the process. Railroads that would be covered as part of the plan included the Central of Georgia Railroad, the Georgia Railroad, and the Western and Atlantic Railroad. This plan, emblematic of the City Beautiful movement of the time, would have included the Georgia State Capitol on the eastern end of the plaza, new public buildings along the north and south sides of the plaza, and a newly constructed French Renaissance skyscraper at the western end that would have housed a city hall, railroad depot, and municipal offices. The plan was discussed in the local newspapers at the time, including The Atlanta Journal and The Atlanta Constitution in 1909.
The plan was reviewed and endorsed by the Atlanta chapter in 1910. That same year, the Atlanta Chamber of Commerce partnered with the Atlanta Real Estate Board to form a Planning Commission for the purposes of seeing Bleckley's plan come to fruition. Along with the Chamber of Commerce, the plan was endorsed by many prominent Atlanta citizens and by property owners who owned land near the railroad tracks. However, the project was opposed by the railroad owners, whose grants had placed the railroad tracks on street level, and by the government of Georgia, which owned the Western and Atlantic Railroad and felt that their air rights were valuable. For several years, the project remained stalled.
The project gained traction in 1916, when Atlanta mayor James G. Woodward created a Plaza Planning Commission to review the proposal. On May 3, the commission asked the Atlanta City Council to conduct an engineering study of the plan. On July 8, the architectural firm Barclay, Parsons, and Klapp presented their findings on the cost of the project and the creation of a new Union Station. The study was endorsed by the city council and the chamber of commerce, and was submitted to the Western and Atlantic Railroad Commission for their consideration. However, on June 27, 1917, the railroad commission recommended to the Georgia General Assembly that they not approve the plaza plan. Following this, the plaza plan was considered dead.
Future projects
Following the rejection of the plaza plan, Atlanta mayor Asa Griggs Candler created a commission to study the creation of viaducts in Atlanta. This led to the creation of numerous viaducts in Atlanta throughout the 1920s. The chamber of commerce would revive the idea for a public plaza covering the railroad gulch several times in the following decades, including in 1923, 1927, and 1930, though none of these plans led to the creation of the plaza. Between 1928 and 1936, the city continued to expand its viaduct system, and in 1949, Plaza Park, a small public park, was opened near the proposed site of the plaza. Furthermore, a new Union Station was constructed in 1930 near the location Bleckley had proposed, though much smaller in scale. The idea for a plaza near the capitol building was later revived as Liberty Plaza, which was completed in 2016.
References
Bibliography
1900s architecture
Buildings and structures in Atlanta
History of Atlanta
Landscape architecture
Proposed buildings and structures in Georgia (U.S. state)
Proposed parks in the United States
Unbuilt buildings and structures in the United States
Urban planning in Georgia (U.S. state) | Bleckley Plaza Plan | [
"Engineering"
] | 838 | [
"Landscape architecture",
"Architecture"
] |
65,166,559 | https://en.wikipedia.org/wiki/CrystalExplorer | CrystalExplorer (CE) is a freeware designed to analysis the crystal structure with *.cif file format.
CE is helpful to investigate different areas of solid-state chemistry such as Hirshfeld surface analysis, intermolecular interactions, polymorphism, effect of pressure and temperature on crystal structure, single-crystal to single-crystal reactions, analyzing the voids present in crystal, and structure-property relationships.
The graphical interface of CE towards the 3D crystal structure visualization aids in drawing the crystal structure with or without Hirshfeld surface.
History
CrystalExplorer was launched as a graphical user interface which facilitates the visualization of interactions in molecular crystal structures. In 2006, M. A. Spackman's student Dylan Jayatilaka and coworkers presented a paper about their new crystallographic software on the occasion of the 23rd European Crystallographic Meeting (ECM23), held in Leuven. The software was designed at the School of Biomedical and Chemical Sciences, University of Western Australia, Nedlands 6009, Australia. From 2006 onward, researchers started citing the program in their research papers.
CrystalExplorer 2.1 designed for Mac OS X, Windows and Linux platforms for the analysis of crystal structures and can be used to investigate many areas of solid-state chemistry such as studying intermolecular interactions, polymorphism, the effects of pressure and temperature on crystal structures, single-crystal to single-crystal reactions, analyzing crystal voids, structure-property relationships, isostructural compounds, and calculate intermolecular interaction energies.
As of September 2020, more than 2000 research papers cite the CrystalExplorer software, according to a Google Scholar analysis.
Licence
CrystalExplorer17 is licensed free of charge under certain conditions, such as not using the free version of CrystalExplorer to conduct commercial or confidential research, or research that is not likely to be published in a peer-reviewed journal.
See also
Cambridge Crystallographic Data Centre
Crystallographic Information File
International Union of Crystallography
References
External links
Computational chemistry software
Crystallography software
Chemistry software for Linux | CrystalExplorer | [
"Chemistry",
"Materials_science"
] | 434 | [
"Computational chemistry software",
"Chemistry software",
"Crystallography",
"Computational chemistry",
"Chemistry software for Linux",
"Crystallography software"
] |
65,171,762 | https://en.wikipedia.org/wiki/Lee%E2%80%93Yang%20theory | In statistical mechanics, Lee–Yang theory, sometimes also known as Yang–Lee theory, is a scientific theory which seeks to describe phase transitions in large physical systems in the thermodynamic limit based on the properties of small, finite-size systems. The theory revolves around the complex zeros of partition functions of finite-size systems and how these may reveal the existence of phase transitions in the thermodynamic limit.
Lee–Yang theory constitutes an indispensable part of the theories of phase transitions. Originally developed for the Ising model, the theory has been extended and applied to a wide range of models and phenomena, including protein folding, percolation, complex networks, and molecular zippers.
The theory is named after the Nobel laureates Tsung-Dao Lee and Yang Chen-Ning, who were awarded the 1957 Nobel Prize in Physics for their unrelated work on parity non-conservation in weak interaction.
Introduction
For an equilibrium system in the canonical ensemble, all statistical information about the system is encoded in the partition function,

Z(\beta) = \sum_i e^{-\beta E_i},

where the sum runs over all possible microstates i, \beta = 1/(k_B T) is the inverse temperature, k_B is the Boltzmann constant, and E_i is the energy of a microstate. The moments of the energy statistics are obtained by differentiating the partition function with respect to the inverse temperature multiple times,

\langle E^n \rangle = \frac{(-1)^n}{Z} \frac{\partial^n Z}{\partial \beta^n}.

From the partition function, we may also obtain the free energy

F = -\frac{1}{\beta} \ln Z.

Analogously to how the partition function generates the moments, the free energy generates the cumulants of the energy statistics,

\langle\langle E^n \rangle\rangle = (-1)^n \frac{\partial^n \ln Z}{\partial \beta^n}.

More generally, if the microstate energies E_i(q) depend on a control parameter q with a fluctuating conjugate variable \Phi (whose value may depend on the microstate), the moments of \Phi may be obtained as

\langle \Phi^n \rangle = \frac{1}{\beta^n Z} \frac{\partial^n Z}{\partial q^n},

and the cumulants as

\langle\langle \Phi^n \rangle\rangle = \frac{1}{\beta^n} \frac{\partial^n \ln Z}{\partial q^n}.

For instance, for a spin system, the control parameter may be an external magnetic field, q = h, and the conjugate variable may be the total magnetization, \Phi = M.
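As a minimal numerical check of these definitions, the sketch below evaluates the first two energy cumulants of a two-level system (with assumed energies and k_B = 1) from finite-difference derivatives of ln Z, and compares them with the values computed directly from the Boltzmann probabilities:

```python
import numpy as np

# Two-level system with assumed energies E0 = 0 and E1 = 1 (units with k_B = 1).
E = np.array([0.0, 1.0])
beta, h = 0.7, 1e-4

def lnZ(b):
    return np.log(np.sum(np.exp(-b * E)))

# Cumulants from finite-difference derivatives of ln Z:
mean_E = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)             # <E>  = -d lnZ / d beta
var_E = (lnZ(beta + h) - 2 * lnZ(beta) + lnZ(beta - h)) / h**2  # <<E^2>> = d^2 lnZ / d beta^2

# Direct check from the Boltzmann probabilities p_i = exp(-beta*E_i) / Z:
p = np.exp(-beta * E) / np.sum(np.exp(-beta * E))
print(mean_E, np.sum(p * E))                       # both give <E>
print(var_E, np.sum(p * E**2) - np.sum(p * E)**2)  # both give Var(E)
```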
Phase transitions and Lee–Yang theory
The partition function and the free energy are intimately linked to phase transitions, for which there is a sudden change in the properties of a physical system. Mathematically, a phase transition occurs when the partition function vanishes and the free energy is singular (non-analytic). For instance, if the first derivative of the free energy with respect to the control parameter is discontinuous, a jump may occur in the average value of the fluctuating conjugate variable, such as the magnetization, corresponding to a first-order phase transition.
Importantly, for a finite-size system, Z(q) is a finite sum of exponential functions and is thus always positive for real values of q. Consequently, the free energy is always well-behaved and analytic for finite system sizes. By contrast, in the thermodynamic limit, the free energy may exhibit a non-analytic behavior.
Using that Z(q) is an entire function for finite system sizes, Lee–Yang theory takes advantage of the fact that the partition function can be fully characterized by its zeros in the complex plane of q. These zeros are often known as Lee–Yang zeros or, in the case of the inverse temperature as control parameter, Fisher zeros. The main idea of Lee–Yang theory is to study mathematically how the positions and the behavior of the zeros change as the system size grows. If the zeros move onto the real axis of the control parameter in the thermodynamic limit, this signals the presence of a phase transition at the corresponding real value of q.
In this way, Lee–Yang theory establishes a connection between the properties (the zeros) of a partition function for a finite size system and phase transitions that may occur in the thermodynamic limit (where the system size goes to infinity).
Examples
Molecular zipper
The molecular zipper is a toy model which may be used to illustrate Lee–Yang theory. It has the advantage that all quantities, including the zeros, can be computed analytically. The model is based on a double-stranded macromolecule with N links that can be either open or closed. For a fully closed zipper, the energy is zero, while for each open link the energy is increased by an amount \varepsilon. A link can only be open if the preceding one is also open.
For a number g of different ways that a link can be open, the partition function of a zipper with N links reads

Z_N(\beta) = \sum_{k=0}^{N} \left( g\, e^{-\beta\varepsilon} \right)^{k} = \frac{1 - \left( g\, e^{-\beta\varepsilon} \right)^{N+1}}{1 - g\, e^{-\beta\varepsilon}}.

This partition function has the complex zeros

\beta_k = \beta_c - \frac{2\pi i k}{\varepsilon (N+1)}, \qquad k = 1, \ldots, N,

where we have introduced the critical inverse temperature \beta_c = \ln(g)/\varepsilon, with the corresponding critical temperature T_c = \varepsilon/(k_B \ln g). We see that in the limit N \to \infty, the zeros closest to the real axis approach the critical value \beta_c. For g = 1, the critical temperature is infinite and no phase transition takes place at finite temperature. By contrast, for g > 1, a phase transition takes place at the finite temperature T_c.
To confirm that the system displays a non-analytic behavior in the thermodynamic limit, we consider the free energy

F(\beta) = -\frac{1}{\beta} \ln Z_N(\beta),

or, equivalently, the dimensionless free energy per link, f = \beta F / N. In the thermodynamic limit, one obtains

f \to 0 \ \text{for} \ \beta > \beta_c, \qquad f \to \beta\varepsilon - \ln g \ \text{for} \ \beta < \beta_c.

Indeed, a cusp develops at \beta = \beta_c in the thermodynamic limit. In this case, the first derivative of the free energy is discontinuous, corresponding to a first-order phase transition.
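The sketch below evaluates these zeros for illustrative parameters (g = 2, ε = 1, k_B = 1) and shows the zero closest to the real axis approaching β_c = ln 2 as N grows:

```python
import numpy as np

# Zipper partition function Z_N is a polynomial in x = g*exp(-beta*eps);
# its zeros in the x-plane are the (N+1)-th roots of unity except x = 1.
g, eps = 2.0, 1.0            # assumed illustrative parameters (k_B = 1)
beta_c = np.log(g) / eps     # critical inverse temperature, ln(2) here

for N in [10, 100, 1000]:
    k = np.arange(1, N + 1)
    x = np.exp(2j * np.pi * k / (N + 1))    # zeros in the x-plane
    beta_k = (np.log(g) - np.log(x)) / eps  # mapped to the complex beta-plane
    c = complex(beta_k[np.argmin(np.abs(beta_k.imag))])
    print(f"N = {N:5d}  closest zero = {c.real:.4f} {c.imag:+.4f}j  (beta_c = {beta_c:.4f})")
```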
Ising model
The Ising model is the original model that Lee and Yang studied when they developed their theory on partition function zeros. The Ising model consists of a spin lattice with spins \sigma_i, each pointing either up, \sigma_i = +1, or down, \sigma_i = -1. Each spin may also interact with its closest spin neighbors with a strength J. In addition, an external magnetic field h may be applied (here we assume that it is uniform and thus independent of the spin indices). The Hamiltonian of the system for a certain spin configuration \{\sigma_i\} then reads

H = -J \sum_{\langle i,j \rangle} \sigma_i \sigma_j - h \sum_i \sigma_i.

In this case, the partition function reads

Z = \sum_{\{\sigma_i\}} e^{-\beta H}.
The zeros of this partition function cannot be determined analytically, thus requiring numerical approaches.
Lee–Yang theorem
For the ferromagnetic Ising model, for which J > 0 for all interacting spin pairs, Lee and Yang showed that all zeros of Z lie on the unit circle in the complex plane of the parameter z = e^{-2\beta h}. This statement is known as the Lee–Yang theorem, and it has later been generalized to other models, such as the Heisenberg model.
Dynamical phase transitions
A similar approach can be used to study dynamical phase transitions. These transitions are characterized by the Loschmidt amplitude, which plays a role analogous to that of a partition function.
Connections to fluctuations
The Lee–Yang zeros may be connected to the cumulants of the conjugate variable \Phi of the control variable q. For brevity, we set \beta = 1 in the following. Using that the partition function is an entire function for a finite-size system, one may expand it in terms of its zeros as

Z(q) = Z(0)\, e^{cq} \prod_k \left( 1 - \frac{q}{q_k} \right),

where Z(0) and c are constants, and q_k is the k:th zero in the complex plane of q. The corresponding free energy then reads

F(q) = -\ln Z(q) = -\ln Z(0) - cq - \sum_k \ln\left( 1 - \frac{q}{q_k} \right).

Differentiating this expression n times with respect to q yields the n:th order cumulant

\langle\langle \Phi^n \rangle\rangle = \frac{\partial^n \ln Z(q)}{\partial q^n} = c\,\delta_{n,1} - (n-1)! \sum_k \frac{1}{(q_k - q)^n}.

Furthermore, using that the partition function is a real function, the Lee–Yang zeros have to come in complex conjugate pairs, allowing us to express the cumulants as

\langle\langle \Phi^n \rangle\rangle = c\,\delta_{n,1} - 2(n-1)! \sum_k \frac{\cos[n \arg(q_k - q)]}{|q_k - q|^n},

where the sum now runs only over each pair of zeros. This establishes a direct connection between cumulants and Lee–Yang zeros.
Moreover, if n is large, the contribution from zeros lying far away from q is strongly suppressed, and only the closest pair of zeros, q_0 and q_0^*, plays an important role. One may then write

\langle\langle \Phi^n \rangle\rangle \simeq -2(n-1)!\, \frac{\cos[n \arg(q_0 - q)]}{|q_0 - q|^n}.

This equation may be solved as a linear system of equations, allowing for the Lee–Yang zeros to be determined directly from higher-order cumulants of the conjugate variable: writing a_n \equiv \langle\langle \Phi^n \rangle\rangle / (n-1)!, consecutive cumulants obey a_{n+1} = u\, a_n - v\, a_{n-1} and a_{n+2} = u\, a_{n+1} - v\, a_n, which can be solved for u = 2\,\mathrm{Re}(q_0 - q)/|q_0 - q|^2 and v = 1/|q_0 - q|^2, and hence for q_0.
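A minimal numerical sketch of this extraction, applied to synthetic cumulants generated from a known set of zeros (this implements the recursion written above, under the stated single-pair approximation, and is not the exact published algorithm):

```python
import math
import numpy as np

# Synthetic cumulants from a known set of zeros (assumed values), at q = 0:
# kappa_n = -(n-1)! * sum_k (q_k - q)^(-n).
q = 0.0
q0, q_far = 0.3 + 0.1j, 2.0 + 1.5j                # closest and far-away zeros
zeros = [q0, np.conj(q0), q_far, np.conj(q_far)]  # conjugate pairs

def kappa(n):
    return (-math.factorial(n - 1) * sum((zk - q) ** (-n) for zk in zeros)).real

# Normalized cumulants a_m = kappa_m / (m-1)! obey a_{m+1} = u*a_m - v*a_{m-1}
# when a single pair dominates, with u = 2*Re(q0-q)/|q0-q|^2 and v = 1/|q0-q|^2.
n = 9  # use high-order cumulants so the closest pair dominates
a = {m: kappa(m) / math.factorial(m - 1) for m in range(n - 1, n + 3)}

A = np.array([[a[n], -a[n - 1]], [a[n + 1], -a[n]]])
rhs = np.array([a[n + 1], a[n + 2]])
u, v = np.linalg.solve(A, rhs)

re_z = u / (2 * v)                            # Re(q0 - q)
im_z = math.sqrt(max(1 / v - re_z**2, 0.0))   # |Im(q0 - q)|
print("estimated q0:", q + re_z + 1j * im_z, " true q0:", q0)
```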
Experiments
Being complex values of a physical variable, Lee–Yang zeros have traditionally been seen as a purely theoretical tool to describe phase transitions, with little or no connection to experiments. However, in a series of experiments in the 2010s, various kinds of Lee–Yang zeros were determined from real measurements. In one experiment in 2015, the Lee–Yang zeros were extracted experimentally by measuring the quantum coherence of a spin coupled to an Ising-type spin bath. In another experiment in 2017, dynamical Lee–Yang zeros were extracted from Andreev tunneling processes between a normal-state island and two superconducting leads. Furthermore, in 2018, an experiment determined the dynamical Fisher zeros of the Loschmidt amplitude, which may be used to identify dynamical phase transitions.
See also
Lee–Yang theorem
References
Phase transitions | Lee–Yang theory | [
"Physics",
"Chemistry"
] | 1,697 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Statistical mechanics",
"Matter"
] |
69,340,305 | https://en.wikipedia.org/wiki/Monochromatic%20radiation | In physics, monochromatic radiation is electromagnetic radiation with a single constant frequency or wavelength. When that frequency is part of the visible spectrum (or near it) the term monochromatic light is often used. Monochromatic light is perceived by the human eye as a spectral color.
When monochromatic radiation propagates through vacuum or a homogeneous transparent medium, it retains a single constant frequency and wavelength; otherwise, it undergoes refraction.
Practical monochromaticity
No radiation can be totally monochromatic, since that would require a wave of infinite duration as a consequence of the Fourier transform's localization property (cf. spectral coherence). In practice, "monochromatic" radiation — even from lasers or spectral lines — always consists of components with a range of frequencies of non-zero width.
Generation
Monochromatic radiation can be produced by a number of methods. Isaac Newton observed that a beam of light from the sun could be spread out by refraction into a fan of light with varying colors; and that if a beam of any particular color was isolated from that fan, it behaved as "pure" light that could not be decomposed further.
When atoms of a chemical element in gaseous state are subjected to an electric current, to suitable radiation, or to high enough temperature, they emit a light spectrum with a set of discrete spectral lines (monochromatic components), that are characteristic of the element. This phenomenon is the basis of the science of spectroscopy, and is exploited in fluorescent lamps and the so-called neon signs.
A laser is a device that generates monochromatic and coherent radiation through a process of stimulated emission.
Properties and uses
When monochromatic radiation is made to interfere with itself, the result can be visible and stable interference fringes that can be used to measure very small distances, or large distances with very high accuracy. The current definition of the metre is based on this technique.
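For example, in a Michelson-type arrangement, moving one mirror by a distance d changes the optical path by 2d, so counting m fringes gives d = mλ/2. A one-line calculation with an assumed fringe count:

```python
# Michelson-type length measurement: moving one mirror by d shifts the
# optical path by 2d, so the fringe count is m = 2d / lambda.
wavelength = 632.8e-9  # He-Ne laser line, metres
fringes = 1000         # counted bright-dark cycles (assumed value)
d = fringes * wavelength / 2
print(f"mirror displacement: {d * 1e6:.1f} micrometres")
```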
In the technique of spectroscopic analysis, a material sample is exposed to monochromatic radiation, and the amount that is absorbed is measured. The graph of absorption as a function of the radiation's frequency is often characteristic of the material's composition. This technique can use radiation ranging from the microwaves, as in rotational spectroscopy, to gamma rays, as in Mössbauer spectroscopy.
See also
Wave
Acoustics
Optics
Monochromator
Interferometer
Diffraction grating
Dichroic filter
Monochromatic plane wave
Newton rings
References
Radiation | Monochromatic radiation | [
"Physics",
"Chemistry"
] | 505 | [
"Transport phenomena",
"Waves",
"Physical phenomena",
"Radiation"
] |
78,103,499 | https://en.wikipedia.org/wiki/Tom%20Leinster | Thomas "Tom" Stephen Hampden Leinster (born 1971) is a British mathematician, known for his work on category theory.
Education and career
Leinster graduated in 2000 with a Ph.D. from the University of Cambridge. His Ph.D. thesis, Operads in Higher-Dimensional Category Theory, was supervised by Martin Hyland. After teaching at the University of Glasgow, Leinster became, and remains, a professor at the University of Edinburgh. He has published textbooks on category theory and on higher categories and operads. In the 2010s, he was mainly concerned with a generalization of the Euler characteristic in category theory, the magnitude. He also considered such generalizations for metric spaces, with applications in biology (measurement of biodiversity).
Award and honour
Leinster groups (i.e., finite groups whose order is equal to the sum of the orders of their proper normal subgroups) are named in his honour. He received the 2019 Chauvenet Prize for Rethinking Set Theory (based upon an axiomatization published in 1964 by F. William Lawvere). He is a frequent author and moderator for the academic group blog n-Category Café, where topics from mathematics, science and philosophy are discussed, often from the perspective of category theory. International media attention resulted from a 2014 article by Leinster in the New Scientist, which called, on the basis of ethics, for mathematicians to refuse to work for intelligence agencies. In German-speaking countries, this was reported by, among others, Der Spiegel and Zeit Online.
Selected publications
References
External links
Homepage, University of Edinburgh (with many links to publications, talks, & notes)
Tom Leinster in the database zbMATH
1971 births
Living people
20th-century British mathematicians
21st-century British mathematicians
British bloggers
Category theorists
Alumni of the University of Cambridge
Academics of the University of Edinburgh | Tom Leinster | [
"Mathematics"
] | 375 | [
"Category theorists",
"Mathematical structures",
"Category theory"
] |
78,103,543 | https://en.wikipedia.org/wiki/Denis-Charles%20Cisinski | Denis-Charles Cisinski (born March 10, 1976) is a mathematician focussing on higher category theory, homotopy theory, K-theory and algebraic geometry. In 2001, Cisinski model structures on topoi were introduced and later named after him. Since 2016, Denis-Charles Cisinski works at the Universität Regensburg.
Research
Denis-Charles Cisinski obtained his PhD in 2002 at the Paris Diderot University with a thesis supervised by Georges Maltsiniotis and titled Les préfaisceaux comme modèles des types d'homotopie (Presheaves as models for homotopy types). It was expanded and released as a book in 2006, further developing the theory from Pursuing Stacks by Alexander Grothendieck. In 2015, Denis-Charles Cisinski gave a talk at the Séminaire Nicolas Bourbaki summarizing the current state of research titled Catégories supérieures et théorie des topos (Higher categories and theory of toposes).
Publications
Books
Scripts
References
External links
Homepage
Denis-Charles Cisinski on nLab
Catégories supérieures et théorie des topos on YouTube (French)
Category theorists
Living people
1976 births | Denis-Charles Cisinski | [
"Mathematics"
] | 246 | [
"Category theorists",
"Mathematical structures",
"Category theory"
] |
78,106,359 | https://en.wikipedia.org/wiki/Methenamine/sodium%20salicylate | Methenamine/sodium salicylate, sold under the brand name Cystex among others, is a combination drug comprising methenamine and sodium salicylate. Methenamine serves as a urinary antiseptic and antibacterial agent, while sodium salicylate is a nonsteroidal anti-inflammatory drug (NSAID) and analgesic. The combination is used for the treatment and prevention of urinary tract infection (UTI) symptoms.
Medical uses
Methenamine, whether used alone or in combination with sodium salicylate, is considered an alternative to antibiotics for the treatment and prevention of UTIs and related symptoms. Unlike antibiotics, methenamine does not contribute to the risk of bacterial resistance.
Available forms
The drug is available over-the-counter (OTC), including in the United States, and is typically taken by mouth three times per day.
Methenamine/sodium salicylate is marketed under several brand names, including Cystex Urinary Pain Relief, AZO Urinary Tract Defense, Uro-Pain Dual Action, and CVS Antibacterial Plus Urinary Pain Relief. Some formulations also include phenazopyridine and are marketed as products like the All-In-One UTI Emergency Kit.
Comparison with methenamine
Methenamine is also available as a prescription drug and is used alone to prevent recurrent UTIs. Clinical evidence supports its efficacy for this indication. Prescription methenamine is usually administered as the hippuric acid or mandelic acid salt, while the OTC methenamine/sodium salicylate formulation uses methenamine as the free base.
Compared to prescription methenamine, the OTC combination formulation contains lower doses of methenamine. This OTC version has been studied less extensively, and limited clinical data are available to guide its use.
See also
Boric acid
References
Combination drugs
Analgesics
Antimicrobials
Antiseptics
Bactericides
Nonsteroidal anti-inflammatory drugs
Prodrugs | Methenamine/sodium salicylate | [
"Chemistry",
"Biology"
] | 426 | [
"Antimicrobials",
"Prodrugs",
"Bactericides",
"Chemicals in medicine",
"Biocides"
] |
78,106,437 | https://en.wikipedia.org/wiki/LUZP2 | Leucine zipper protein 2 is a protein that in humans is encoded by the LUZP2 gene. There are no orthologs in invertebrates, but many in vertebrates. It is a transcription factor found in eukaryotes.
Gene
The LUZP2 gene is found on the short arm of chromosome 11 at position 11p14.3. It is located on the plus strand.
The gene contains 23 introns, and can produce 11 alternatively spliced mRNAs.
Protein structure
LUZP2 encodes a leucine zipper protein that is 346 amino acids in length and has a molecular weight of ~39 kDa. The protein is secreted and is expressed mostly within the brain and spinal cord.
The protein contains a signal peptide, 3 glycosylation sites, a leucine zipper region, and a disordered region. It also contains 3 highly conserved "QLKE" amino acid repeats.
Leucine zipper
The leucine zipper motif is located on positions 164-192 of the protein, and contains 4 conserved lysine and 4 conserved leucine residues. Leucine zippers usually facilitate protein-protein interactions and contain many amphipathic helices that form a left-handed dimeric coiled-coil structure. They also often contain leucine residues spaced 7 amino acids apart.
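A toy sketch of this 7-residue spacing: the snippet below scans a sequence for runs of leucines spaced exactly seven positions apart. The example string is the classic GCN4 leucine zipper, used purely for illustration; this is a toy scan, not a real motif predictor, and sub-runs of a longer run are also reported.

```python
# Toy scan for a leucine-zipper-like heptad pattern: report stretches where
# leucine (L) recurs every 7 residues.
seq = "MKQLEDKVEELLSKNYHLENEVARLKKLVGER"  # GCN4 leucine zipper, for illustration

min_repeats = 3
for start in range(len(seq)):
    count = 0
    # Walk forward in steps of 7 while we keep landing on a leucine.
    while start + 7 * count < len(seq) and seq[start + 7 * count] == "L":
        count += 1
    if count >= min_repeats:
        print(f"possible heptad of {count} leucines starting at position {start + 1}")
```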
Abundance
Protein Abundance
LUZP2 protein is present in higher amounts than most proteins, and is most abundant in the cerebral cortex and brain. Immunohistochemical staining using anti-LUZP2 rabbit antibodies shows it to be present at low levels in the pancreas and at high levels in the cerebral cortex. Interestingly, it is present at high levels in neuronal projections, suggesting it could have some role in the development of the vertebrate nervous system.
In situ hybridization
Based on in situ hybridization studies, LUZP2 mRNA is expressed at low levels throughout the brain, but more highly concentrated in the regions of the forebrain, the thalamus, and the hypothalamus. LUZP2 is also least expressed in the cerebellum compared to other structures.
Clinical findings
Based on a data mining study investigating low-grade gliomas, LUZP2 downregulation was found to be associated with higher-grade tumors, suggesting that LUZP2 expression decreases as tumors become more aggressive. Low LUZP2 expression was also associated with worse overall survival in patients with low-grade gliomas across multiple cohorts.
This gene has also been found deleted in some patients with Wilms tumor-aniridia-genitourinary anomalies-mental retardation (WAGR) syndrome.
Evolution
LUZP2 has many orthologs in vertebrates. It is highly conserved in mammals, birds, and reptiles.
LUZP2 is expected to have first appeared in cartilaginous fish around 462 million years ago, and is evolving at an intermediate rate, slower than fibrinogen alpha, but faster than cytochrome c.
Possible Interactions
LUZP2 mostly interacts with proteins found in the nucleus. Proteins that showed the most promising interactions with LUZP2 include the serine/threonine kinase TNIK, GAS2, and CBX5.
References
Genes on human chromosome 11
Proteins | LUZP2 | [
"Chemistry"
] | 673 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
66,422,842 | https://en.wikipedia.org/wiki/Hydrogenated%20MDI | Hydrogenated MDI (H12MDI or 4,4′-diisocyanato dicyclohexylmethane) is an organic compound in the class known as isocyanates. More specifically, it is an aliphatic diisocyanate. It is a water white liquid at room temperature and is manufactured in relatively small quantities. It is also known as 4,4'-methylenedi(cyclohexyl isocyanate) or methylene bis(4-cyclohexylisocyanate) and has the formula CH2[(C6H10)NCO]2.
Manufacture
The product is manufactured by hydrogenation of methylene diphenyl diisocyanate. It may also be manufactured by phosgenation of 4,4′-diaminodicyclohexylmethane.
Uses
Aliphatic diisocyanates are not used in the production of polyurethane foam, as the cost is too high and foam is very much a commodity. H12MDI is used in special applications for polyurethane, such as enamel coatings which are resistant to abrasion and degradation from ultraviolet light. There are also multiple patents where prepolymers based on it are used in golf ball production. It is available commercially under the tradename Desmodur W from Covestro (formerly Bayer MaterialScience). It is used as a reactive building block for the preparation of other chemical products such as isocyanate-terminated prepolymers and other urethane polymers. The isocyanate groups can undergo addition reactions at room temperature with compounds which contain active hydrogens, especially amines and polyols. Polyurethane resins based on this diisocyanate have good flexibility and mechanical strength. The polymers formed tend to have abrasion and hydrolysis resistance, as well as retaining gloss and physical properties upon weathering. The resins based on this material are useful in coatings for flooring, roofing and maintenance, and in adhesives and sealants. They find use in the coatings, adhesives, sealants and elastomers (CASE) applications. A prepolymer made from H12MDI and incorporating dimethylol propionic acid can also be converted to light-stable polyurethane dispersions.
See also
Hexamethylene diisocyanate
Methylene diphenyl diisocyanate
Toluene diisocyanate
Isophorone diisocyanate
References
Isocyanates
Monomers
Cyclohexanes
Organic compounds | Hydrogenated MDI | [
"Chemistry",
"Materials_science"
] | 540 | [
"Isocyanates",
"Functional groups",
"Organic compounds",
"Polymer chemistry",
"Monomers"
] |
66,425,109 | https://en.wikipedia.org/wiki/Ironomycin | Ironomycin is a derivative of salinomycin and potent small molecule against persister cancer stem cells, that is under preclinical evaluation by SideROS for the treatment of cancer. Ironomycin was shown to induce ferroptosis in breast cancer cell lines and its mechanism of action involves the targeting of lysosomal iron.
Pre-clinical research
Ironomycin kills breast cancer stem cells in mice, and is more potent in vitro and in vivo than its parent antibacterial natural product salinomycin. Ironomycin, and to a lesser extent salinomycin, targeted the cancer stem cells responsible for metastasis and relapse.
The mechanism of action by which ironomycin and salinomycin kill cancer stem cells involves lysosomal iron sequestration, leading to the production of reactive oxygen species, lysosome membrane permeabilization and ferroptosis in breast cancer. While mesenchymal breast cancer cells are vulnerable to ferroptosis, ironomycin and salinomycin can trigger cell death independently of ferroptosis in other cancer cell types.
These candidate drugs abolished the capacity of HMLER CD24low cells to form colonies at low concentrations, and ironomycin prevented these cells from developing tumorspheres in suspension, a well-established characteristic of cancer stem cells, at a low dose (i.e., 30 nM). This effect on cancer stem cells has been shown in vivo, where ironomycin decreased the tumour-seeding capacity of tumour cells (breast PDX) more efficiently than salinomycin and docetaxel. CD44-mediated iron endocytosis prevails in the mesenchymal state of cancer cells, and iron operates as a metal catalyst to demethylate the repressive histone mark (H3K9) that governs the expression of mesenchymal genes.
The ability of ironomycin to kill both cancer stem cells and drug-resistant (persister) cancer cells may provide a therapeutic advantage in treating cancer. Ironomycin is in the preclinical development pipeline of the biotech company SideROS for the treatment of drug-resistant cancers such as acute myeloid leukemia, triple-negative breast cancer, pancreatic cancer and non-Hodgkin lymphoma.
Synthesis
A team from ICSN has developed the chemical synthesis of salinomycin analogs, including ironomycin, which are more potent than salinomycin. Ironomycin is synthesized in two steps from salinomycin sodium salt: (1) a chemoselective allylic oxidation and (2) a chemo- and diastereoselective reductive amination at C20 leading to the alkyne derivative ironomycin.
See also
Salinomycin, the starting material of the ironomycin synthesis, which is much less potent in vitro against persister cancer cells
Targeted therapy
References
Ionophores
Polyketides
Spiro compounds
Secondary amino acids
Propargyl compounds | Ironomycin | [
"Chemistry"
] | 616 | [
"Biomolecules by chemical classification",
"Natural products",
"Organic compounds",
"Polyketides",
"Spiro compounds"
] |
67,929,406 | https://en.wikipedia.org/wiki/FAM227B | FAM227B is a protein that in humans is encoded by FAM227B gene. FAM227B stands for family with sequence similarity 227 member B and encodes protein FAM227B of the same name. Its aliases include C15orf33, MGC57432 and FLJ23800.
Gene
FAM227B is located at 15q21.2 and contains 24 exons. The current size determined for FAM227B is 293,961 base pairs (NCBI). Neighbors of FAM227B on chromosome fifteen include: “ribosomal protein L15 pseudogene”, “galactokinase 2”, “RNA, 7SL, cytoplasmic 307, pseudogene”, “signal peptide peptidase like 2A pseudogene”, “fibroblast growth factor 7”, “uncharacterized LOC105370811”, “DTW domain containing 1”, and “ring finger protein, LIM domain interacting pseudogene 3”.
Transcript
There are 30 isoforms of FAM227B and one paralog, FAM227A. The conserved domains in these isoforms (as well as in the paralog) are of various sizes and encode the FWWh domain (pfam14922) of unknown function; all contain the distinctive motif FWW followed by a hydrophobic residue h. The main isoform used for analysis of FAM227B is isoform 1 (NM_152647.3). The next most reliable isoform of FAM227B is isoform 2 (NM_001330293.2). The second isoform is shorter and has a distinct C-terminus.
[The original article includes cartoons depicting the different lengths and splicing patterns of the isoforms; they are a simplified depiction of a larger pattern rather than a precise rendering of every isoform.]
Protein
The primary sequence for FAM227B is that of isoform 1, with accession number NP_689860.2. It is 508 amino acids long. The molecular weight is 59.9 kDa and the isoelectric point is predicted to be high, around 10. Compared to other proteins in humans, FAM227B has a high abundance of phenylalanine and glycine and a low abundance of valine. The protein is predicted to be in the nuclear region of the cell. There is a bipartite nuclear localization signal at RKLERYGEFLKKYHKKK, and three other nuclear localization signals at HKKK, KKKK, and PKKTKIK. There is also a vacuolar targeting motif at TLPI. An FWWh region, where h signifies a hydrophobic residue, runs from amino acids 135-296 in Homo sapiens FAM227B isoform 1. The function of this region is still unknown.
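Bulk properties like the molecular weight and predicted isoelectric point quoted above can be computed from the primary sequence, for example with Biopython's ProtParam module. The sketch below uses a short placeholder peptide rather than the actual 508-residue FAM227B sequence:

```python
# Sketch of how sequence-derived properties are computed with Biopython.
# The sequence here is a made-up placeholder, not the real FAM227B isoform 1.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MKFLVLLFNILCLFPVLA"  # placeholder fragment for illustration only
analysis = ProteinAnalysis(seq)
print(f"molecular weight: {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"isoelectric point: {analysis.isoelectric_point():.2f}")
```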
Secondary structure
The secondary structure is predicted to be made up mainly of alpha helices and coiled coils.
Post translational modifications
Phosphorylation is the main post-translational modification predicted for FAM227B, consistent with its predicted localization to the nucleus. There are many experimentally predicted phosphorylation sites; the most highly rated were included in the conceptual translation. Glycosylation sites and SUMOylation sites were also predicted.
Expression
FAM227B is most highly expressed in the testis at 1.983 +/- 0.404 RPKM, in the kidney at 1.408 +/- 0.152 RPKM, in the adrenal gland at 1.177 +/- 0.088 RPKM, and in the thyroid at 1.133 +/- 0.165 RPKM. It is also expressed to a lesser degree in the appendix, bone marrow, brain, colon, duodenum, endometrium, esophagus, fat, gall bladder, heart, liver, lung, lymph node, ovary, pancreas, placenta, prostate, salivary gland, skin, small intestine, spleen, stomach, and urinary bladder.
Function
Currently, the function of FAM227B has not been characterized.
Protein-protein interactions
RNF123 was found to be an interacting protein of FAM227B through Affinity Capture – MS. RAB3A was found to be an interacting protein of FAM227B through tandem affinity purification.
Subcellular localization
Current studies have determined the location of this gene's protein product to be in the nuclear region of the cell.
Homology and evolution
Paralogs: FAM227A
Orthologs: FAM227B is present in Deuterostomia and Protostomia, dating as far back as Porifera. FAM227B is not present in choanoflagellates, and sequence alignments have shown that FAM227B is a rapidly evolving gene, based on its evolutionary trajectory compared to cytochrome c and fibrinogen alpha.
Clinical significance
The location of FAM227B, 15q21.2, was found to be associated with oral cancer. The 15q21.2 locus is mentioned in other literature as well. FGF7 is a neighbour of FAM227B in the 15q21.2 locus (rs10519227), and encodes the fibroblast growth factor 7, which is involved in processes such as embryonic development, cell growth, tissue repair, tumor growth, invasion, and morphogenesis. FGF works as a signal for thyroid gland development, and an SNP on intron 2 of FGF7 has been associated with thyroid growth/goiter growth. This association was only significant at the genome level in males. It was found that the abnormal goiter growth is likely due to variant signals that cause increased levels of TSH. FAM227B was found to be related to at least some of the 48 significant DMRs (differentially methylated regions) between HF (high fertile) and LF (low fertile) groups in the genome of spermatozoa from a boar animal model. FAM227B was found to be upregulated upon LOXL2 knockdown. Knocking down LOXL2 results in lower levels of H3K4ox, resulting in chromatin decompaction and thus continued activation of the DNA damage response. This results in anticancer agents being more effective against cancerous cell lines. FAM227B was found to be a genetic risk variant in breast cancer. FAM227B was differentially expressed in prostate genes of Esr2 knockout rats compared to wildtype rats. Esr2 is involved in anti-proliferation and differentiation. FAM227B was part of 20 upregulated genes in the chorionic girdle during trophoblast development in horses. Protein FAM227B was differentially expressed in cardiovascular disease. FAM227B was found to be a candidate causal gene for lung cancer. FAM227B has a predicted p53 binding site.
References
Protein biochemistry | FAM227B | [
"Chemistry",
"Biology"
] | 1,511 | [
"Biochemistry",
"Protein biochemistry",
"Molecular biology"
] |
67,934,706 | https://en.wikipedia.org/wiki/History%20of%20Science%20and%20Technology%20%28journal%29 | History of Science and Technology is a biannual peer-reviewed academic journal covering the history of science and technology. It is published by State University of Infrastructure and Technologies (Ukraine) and was established in 2011.
Abstracting and indexing
The journal is abstracted and indexed in Scopus and the Emerging Sources Citation Index.
References
Academic journals established in 2011
History of science and technology
History of science journals | History of Science and Technology (journal) | [
"Technology"
] | 81 | [
"History of science and technology"
] |
63,634,457 | https://en.wikipedia.org/wiki/Hyper%E2%80%93Rayleigh%20scattering | Hyper–Rayleigh scattering optical activity ( ), a form of chiroptical harmonic scattering, is a nonlinear optical physical effect whereby chiral scatterers (such as nanoparticles or molecules) convert light (or other electromagnetic radiation) to higher frequencies via harmonic generation processes, in a way that the intensity of generated light depends on the chirality of the scatterers. "Hyper–Rayleigh scattering" is a nonlinear optical counterpart to Rayleigh scattering. "Optical activity" refers to any changes in light properties (such as intensity or polarization) that are due to chirality.
History
The effect was theoretically predicted in 1979, in a mathematical description of hyper Raman scattering optical activity. Within this theoretical model, upon setting the initial and final frequencies of light to the same value, the mathematics describe the hyper Rayleigh scattering optical activity. The theory was well in advance of its time, and the effect remained elusive for 40 years. Its author David L. Andrews referred to it as the "impossible theory". However, in January 2019, an experimental demonstration was reported by Ventsislav K. Valev and his team. The team investigated the hyper Rayleigh scattering (at the second harmonic generation frequency) from chiral nanohelices made of silver. Valev and his team observed that the intensity of the hyper Rayleigh scattering light depended on the direction of circularly polarized light and that this dependence reversed with the chirality of the nanohelices. Valev's work unambiguously established that the effect is physically possible, opening the way for nonlinear chiroptical investigations of a variety of chiral light-scattering materials; including molecules, plasmonic metal nanoparticles and semiconductor nanoparticles.
Significance
Hyper Rayleigh scattering optical activity (HRS OA) is arguably the most fundamental nonlinear chiral optical (chiroptical) effect; since other nonlinear chiroptical effects have additional requirements, which make them conceptually more involved, i.e. less fundamental. HRS OA is a scattering effect and therefore it does not require the frequency conversion process to be coherent, contrary to other nonlinear chiroptical effects, such as second harmonic generation circular dichroism or second harmonic generation optical rotation. Moreover, HRS OA is a parametric process: the initial and final quantum mechanical states of the excited electron are the same. Because the excitation proceeds via virtual states, there is no restriction on the frequency of incident light. By contrast, other nonlinear scattering effects, such as two-photon circular dichroism and hyper-Raman are non-parametric: they require real energy states that restrict the frequencies at which these effects can be observed.
In molecules
Soon after the first demonstration of hyper Rayleigh scattering optical activity in metal nanoparticles, the effect was replicated in organic molecules, specifically aromatic oligoamide foldamers.
At the third harmonic
Whereas the initial experimental demonstration of hyper-Rayleigh scattering optical activity was observed at the second harmonic of the illumination frequency of light, the effect is general and can be observed at higher harmonics. The first demonstration of hyper-Rayleigh scattering optical activity at the third harmonic was reported by Valev's team in 2021, from silver nanohelices.
See also
Linear dichroism
Magnetic circular dichroism
Optical activity
Optical isomerism
Optical rotation
Optical rotatory dispersion
Protein circular dichroism data bank
Raman optical activity (ROA)
Two-photon circular dichroism
Vibrational circular dichroism
References
External links
New physical effect demonstrated by University of Bath scientists after a 40 year search, official press release by the University of Bath.
Bath University has something to twist and shout about after 40-year search, by the EPSRC.
Hyper-Rayleigh scattering, published in the scientific journal Nature Photonics.
Nonlinear optics
Scattering, absorption and radiative transfer (optics)
Chirality | Hyper–Rayleigh scattering | [
"Physics",
"Chemistry",
"Biology"
] | 804 | [
"Pharmacology",
"Scattering, absorption and radiative transfer (optics)",
"Origin of life",
"Stereochemistry",
"Chirality",
"Asymmetry",
"Biochemistry",
"Symmetry",
"Biological hypotheses"
] |
63,640,741 | https://en.wikipedia.org/wiki/Agouti%20coloration%20genetics | The agouti gene, which encodes the agouti-signaling protein (ASIP), is responsible for variations in color in many species. Agouti works with extension to regulate the color of melanin produced in hairs. The agouti protein causes red to yellow pheomelanin to be produced, while the competing molecule α-MSH signals production of brown to black eumelanin. In wildtype mice, alternating cycles of agouti and α-MSH production cause agouti coloration: each hair has bands of yellow, which grew during agouti production, and black, which grew during α-MSH production. Wildtype mice also have light-colored bellies; the hairs there are a creamy color along their whole length because the agouti protein was produced the whole time those hairs were growing.
In mice and other species, loss of function mutations generally cause a darker color, while gain of function mutations cause a yellower coat.
Mice
As of 1979, there were 17 known alleles of agouti in mice.
Lethal yellow Ay causes yellow coloration and obesity. It is dominant to all other alleles in the series. When homozygous, it is lethal early in development.
Viable yellow Avy looks similar to lethal yellow and also causes obesity, but is not lethal when homozygous. Homozygous viable yellow mice can be variable in color from clear yellow through mottled black and yellow to a darker color similar to the agouti color.
Intermediate yellow aiy causes a mottled yellow coloration, which like viable yellow can sometimes resemble agouti.
Sienna yellow Asy heterozygotes are a dark yellow, while homozygotes are generally a clearer yellow.
White-bellied agouti AW mice have agouti coloration, with hairs that are black at the tips, then yellow, then black again, and white to tan bellies.
Agouti A looks like AW but the belly is dark like the back.
Black and tan at causes a black back with a tan belly. A/at heterozygotes look like AW mice.
Nonagouti a mice are almost completely black, with only a few yellow hairs around the ears and the genitals.
Extreme nonagouti ae mice are fully black, and is recessive to all other alleles in the series.
This is not a complete list of mouse agouti alleles.
The nonagouti allele a is unusually likely to revert to the black-and-tan allele at or to the white-bellied agouti allele AW.
Agouti production is regulated by multiple different promoter regions, capable of promoting transcription just in the ventral (belly) area, as seen in white-bellied agouti and black-and-tan mice, or all across the body but just during a specific part of the hair growth cycle, as seen in agouti and white-bellied agouti.
Lethal yellow and viable yellow cause obesity, features of type II diabetes, and a higher likelihood of tumors. In normal mice Agouti is only expressed in the skin during hair growth, but these dominant yellow mutations cause it to be expressed in other tissues including liver, muscle, and fat. The mahogany locus interacts with Agouti and a mutation there can override the pigmentation and body weight effects of lethal yellow.
Viable yellow agouti mice can inherit epigenetic differences from their dam affecting how yellow or brown they become.
The mouse agouti gene is found on chromosome 2.
Dogs
In dogs, the agouti gene is associated with various coat colors and patterns.
The alleles at the A locus are related to the production of agouti-signaling protein (ASIP) and determine whether an animal expresses an agouti appearance and, by controlling the distribution of pigment in individual hairs, what type of agouti. There are four known alleles that occur at the A locus:
Ay = Fawn or sable (tan with black whiskers and varying amounts of black-tipped and/or all-black hairs dispersed throughout) - fawn typically referring to dogs with clearer tan and sable to those with more black shading
aw = Wild-type agouti (each hair with 3-6 bands alternating black and tan) - also called wolf sable
at = Tan point (black with tan patches on the face and underside) - including saddle tan (tan with a black saddle or blanket)
a = Recessive black (black, inhibition of phaeomelanin)
ayt = Recombinant fawn (expresses a varied phenotype depending on the breed) has been identified in numerous Tibetan Spaniels and individuals in other breeds, including the Dingo. Its hierarchical position is not yet understood.
Most texts suggest that the dominance hierarchy for the A locus alleles appears to be as follows: Ay > aw > at > a; however, research suggests the existence of pairwise dominance/recessiveness relationships in different families and not the existence of a single hierarchy in one family.
Ay is incompletely dominant to at, so that heterozygous individuals have more black sabling, especially as puppies and Ayat can resemble the awaw phenotype. Other genes also affect how much black is in the coat.
aw is the only allele present in many Nordic spitzes, and is not present in most other breeds.
at includes tan point and saddle tan, both of which look tan point at birth. Modifier genes in saddle tan puppies cause a gradual reduction of the black area until the saddle tan pattern is achieved.
a is only present in a handful of breeds. Most black dogs are black due to a K locus allele.
A 2021 study found distinct genetic causes for fawn and sable, which it refers to as "dominant yellow" and "shaded yellow". Both have a more active hair cycle promoter than the wildtype agouti, but dominant yellow also has a more active ventral promoter. The hair cycle promoter involved in these colors is thought to have arisen about 2 million years ago in an extinct species of canid, which later hybridized with wolves.
Cats
The dominant, wild-type A allows hairs to be banded with black and red (revealing the underlying tabby pattern), while the recessive non-agouti or "hypermelanistic" allele, a, causes black pigment production throughout the growth cycle of the hair. Thus, the non-agouti genotype (aa) masks or hides the tabby pattern, although sometimes a suggestion of the underlying pattern can be seen (called "ghost striping"), especially in kittens. The sex-linked orange coloration is epistatic over agouti, and prevents the production of black pigment.
Horses
In normal horses, ASIP restricts the production of eumelanin to the "points": the legs, mane, tail, ear edges, etc. In 2001, researchers discovered a recessive mutation on ASIP that, when homozygous, left the horse without any functional ASIP. As a result, horses capable of producing true black pigment had uniformly black coats. The dominant, wildtype allele producing bay is symbolized as A, while the recessive allele producing black is symbolized as a. Extension is epistatic over agouti and will cause chestnut coloration regardless of what agouti alleles are present.
History
The causes behind the various shades of bay, particularly the genetic factors responsible for wild bay and seal brown, have been contested for over 50 years. In 1951, zoologist Miguel Odriozola published "A los colores del caballo" in which he suggested four possible alleles for the "A" gene, A+, A, At, and a, in order of most dominant to least.
This was accepted until the 1990s, when a different hypothesis became popular. It proposed that shades of bay were caused by many different genes, some of which lightened the coat and some of which darkened it. This theory also suggested that seal brown horses were black horses with a trait called pangaré. Pangaré is an ancestral trait also called "mealy", which outlines the soft or communicative parts of the horse in buff tan.
The combination of black and pangaré was dismissed as the cause of seal brown in 2001, when a French research team published Mutations in the agouti (ASIP), the extension (MC1R), and the brown (TYRP1) loci and their association to coat color phenotypes in horses (Equus caballus). This study used a DNA test to identify the recessive a allele on the Agouti locus, and found that none of the horses fitting the phenotype of seal brown were homozygous for the a allele.
In 2007 one genetics lab began offering a test for what they believed was a marker for seal brown, and later for an agouti allele which they believed caused the brown color. However, the underlying research was never published and the test was suspended by 2015 due to unreliable results.
The genetic alleles that create seal brown and wildtype bay remain unknown. It is still hypothesized that to some extent, the darkening of coat color in some bays may be regulated by unrelated genes for traits like "sooty".
Donkeys
Most donkeys have creamy to gray-white areas on the belly and around the muzzle and eyes, called light points or pangare. However, a recessive variant of agouti causes those areas to be the same color as the body in a pattern called no light points or NLP, which is similar to recessive black in other mammals. This allele can be found in Norman donkeys and American miniature donkeys.
Rabbits
In rabbits, the wildtype is agouti with a light belly, and a recessive non-agouti allele causes a black coat. A third allele, possibly a mutation to a regulator or promoter region, is thought to cause black and tan color. The nonagouti allele is estimated to have first appeared before 1700.
Agouti is linked to the wideband gene, with about a 30% crossover rate.
Like white bellied agouti mice, rabbits with wildtype agouti produce transcripts with different untranslated 5' ends that have different dorsal and ventral expression. The 1A exon is only expressed in the belly region and may be responsible for the lighter color there.
References
Further reading
Genetics | Agouti coloration genetics | [
"Biology"
] | 2,123 | [
"Genetics"
] |
76,733,413 | https://en.wikipedia.org/wiki/Microbial%20hyaluronic%20acid%20production | Microbial hyaluronic acid production refers to the process by which microorganisms, such as bacteria and yeast, are utilized in fermentation to synthesize hyaluronic acid (HA). HA is used in a wide range of medical, cosmetic, and biological products because of its high moisture retention and viscoelasticity qualities. HA had originally been extracted from rooster combs in limited quantities. However, challenges such as low yields, high production costs, and ethical issues associated with animal-derived HA have driven the development of microbial production methods for HA.
Although there are other methods, for instance chemical synthesis and modification, chemoenzymatic synthesis, and enzymatic synthesis, microbial fermentation has been preferred for producing HA because of its economic advantages.
Bacterial production
Some bacteria, like Streptococcus, develop an extracellular capsule that contains HA. This capsule functions as a molecular mimic to elude the host's immune system during the infection process, in addition to providing adherence and protection. Streptococcus zooepidemicus was used for the first commercial HA fermentation and remains the most widely used bacterium because it provides high yields, although it is a pathogenic microorganism.
Encoding of HA production is carried out by hasA, hasB, hasC, hasD and hasE genes in S. zooepidemicus.
Because of the pathogenicity of S. zooepidemicus, genetically modified producers were developed, such as Kluyveromyces lactis, Lactococcus lactis, Bacillus subtilis, Escherichia coli, and Corynebacterium glutamicum.
Biological process
Intracellular factors
Metabolism
During HA production, intermediates are drawn from pathways essential to cell growth, such as those producing organic acids and polysaccharides. HA is not an essential metabolite, and it competes with other metabolites for the carbon flux in the cell. The reduction potential of S. zooepidemicus may play a role in hyaluronic acid production, because 2 NAD+ are consumed during the synthesis of one monomer. Although NAD+ does not control HA synthesis when NADH oxidase is over-expressed, it has a major role in biomass formation.
Some studies showed that balanced intracellular concentrations of precursors, such as UDP-acetylglucosamine, and balanced fluxes between them provide higher molecular weight. Enzymes of S. zooepidemicus such as hyaluronidase and β-glucuronidase decrease the yield of HA; the HA concentration is increased by deleting the genes associated with these enzymes.
On the other hand, some enzymes, like sucrose-6-phosphate hydrolase and hyaluronan synthase, induce HA production. Combining approaches that target both types of enzymes is a good strategy for high-yield HA production.
Membrane
HA is produced around the cell, serving the bacteria as a barrier against the host immune system. Only 8% of the HA remains attached to the cell by the time cells reach the stationary phase. Biosurfactants such as sodium dodecyl sulfate (SDS) are used to recover this product. Hyaluronan synthase, a membrane-bound enzyme, is one of the factors that reduces the production of HA: it limits hyaluronic acid production by affecting cell morphology.
Environmental factors
pH
Organic acids formed during HA production by S. zooepidemicus cause the pH to decrease. Although HA production without pH control is cheaper, pH-controlled fermentation is preferred since it provides higher hyaluronic acid yields.
Temperature
Temperature affects both the yield and the molecular weight of HA. HA production increases when bacterial cells are grown above 37°C. However, with fermentation under 32°C, the HA yield decreases while the molecular weight is higher.
Aeration
Although S. zooepidemicus is an aerotolerant anaerobe, hyaluronic acid production is affected by oxygen because the NADH/NAD+ balance of the cells changes with the amount of oxygen. Controlling oxygen during cultivation via the agitation rate increases both HA yield and molecular weight.
Culture media components
The carbon source is one of the media components that affects microbial HA production. Although glucose is the most commonly used carbon source for HA production, molasses, sucrose, and maltose are also used for microbial production.
HA production also requires many amino acids in the culture media; therefore, the concentration of the nitrogen source plays a key role.
See also
Hyaluronic acid
Streptococcus zooepidemicus
References
Fermentation
Industrial processes
Wikipedia Student Program | Microbial hyaluronic acid production | [
"Chemistry",
"Biology"
] | 950 | [
"Biochemistry",
"Cellular respiration",
"Fermentation"
] |
76,736,170 | https://en.wikipedia.org/wiki/Sensitivity%20theorem | In computational complexity, the sensitivity theorem, proved by Hao Huang in 2019, states that the sensitivity of a Boolean function is at least the square root of its degree, thus settling a conjecture posed by Nisan and Szegedy in 1992. The proof is notably succinct, given that prior progress had been limited.
Background
Several papers in the late 1980s and early 1990s showed that various decision tree complexity measures of Boolean functions are polynomially related, meaning that if $a(f)$ and $b(f)$ are two such measures then $a(f) = O(b(f)^C)$ for some constant $C$. Nisan and Szegedy showed that degree and approximate degree are also polynomially related to all these measures. Their proof went via yet another complexity measure, block sensitivity, which had been introduced by Nisan. Block sensitivity generalizes a more natural measure, (critical) sensitivity, which had appeared before.
Nisan and Szegedy asked whether block sensitivity is polynomially bounded by sensitivity (the other direction is immediate since sensitivity is at most block sensitivity). This is equivalent to asking whether sensitivity is polynomially related to the various decision tree complexity measures, as well as to degree, approximate degree, and other complexity measures which have been shown over the years to be polynomially related to these. This became known as the sensitivity conjecture.
Over the years, several special cases of the sensitivity conjecture were proven.
The sensitivity theorem was finally proven in its entirety by Huang, using a reduction of Gotsman and Linial.
Statement
Every Boolean function $f\colon \{0,1\}^n \to \{0,1\}$ can be expressed in a unique way as a multilinear polynomial. The degree of $f$ is the degree of this unique polynomial, denoted $\deg(f)$.
The sensitivity of the Boolean function $f$ at the point $x$ is the number of indices $i$ such that $f(x) \neq f(x^{\oplus i})$, where $x^{\oplus i}$ is obtained from $x$ by flipping the $i$'th coordinate. The sensitivity of $f$ is the maximum sensitivity of $f$ at any point $x$, denoted $s(f)$.
The sensitivity theorem states that $s(f) \geq \sqrt{\deg(f)}$.
In the other direction, Tal, improving on an earlier bound of Nisan and Szegedy, showed that $s(f) \leq \deg(f)^2$.
The sensitivity theorem is tight for the AND-of-ORs function:
$f(x_{1,1},\ldots,x_{\sqrt{n},\sqrt{n}}) = \bigwedge_{i=1}^{\sqrt{n}} \bigvee_{j=1}^{\sqrt{n}} x_{i,j}.$
This function has degree $n$ and sensitivity $\sqrt{n}$.
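For small $n$ these quantities can be checked by brute force. The following Python sketch (illustrative only; not code from the sources discussed here) computes the sensitivity directly from the definition and the degree via Möbius inversion of the multilinear coefficients, for the AND-of-ORs function on $n = 4$ variables, where the expected output is sensitivity $2 = \sqrt{4}$ and degree $4$.

```python
from itertools import product

n, b = 4, 2  # n variables split into b OR-blocks of size n // b

def f(x):
    # AND of ORs: every block must contain at least one 1
    blocks = [x[i:i + n // b] for i in range(0, n, n // b)]
    return int(all(any(block) for block in blocks))

def sensitivity(f, n):
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        best = max(best, flips)
    return best

def degree(f, n):
    # Möbius inversion yields the coefficients of the unique
    # multilinear polynomial representing f over the reals.
    deg = 0
    for s in product((0, 1), repeat=n):  # monomial = variables with s[i] == 1
        coeff = 0
        for t in product((0, 1), repeat=n):
            if all(ti <= si for ti, si in zip(t, s)):
                coeff += (-1) ** (sum(s) - sum(t)) * f(t)
        if coeff != 0:
            deg = max(deg, sum(s))
    return deg

print(sensitivity(f, n), degree(f, n))  # expected: 2 4
```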
Proof
Let $f$ be a Boolean function of degree $d$. Consider any maxonomial of $f$, that is, a monomial of degree $d$ in the unique multilinear polynomial representing $f$. If we substitute an arbitrary value in the coordinates not mentioned in the monomial then we get a function $g$ on $d$ coordinates which has degree $d$, and moreover, $s(g) \leq s(f)$. If we prove the sensitivity theorem for $g$ then it follows for $f$. So from now on, we assume without loss of generality that $f$ has degree $n$.
Define a new function $g\colon \{0,1\}^n \to \{0,1\}$ by
$g(x) = f(x) \oplus x_1 \oplus \cdots \oplus x_n.$
It can be shown that since $f$ has degree $n$ then $g$ is unbalanced (meaning that $|g^{-1}(0)| \neq |g^{-1}(1)|$), say $|g^{-1}(1)| > 2^{n-1}$. Consider the subgraph $G$ of the hypercube (the graph on $\{0,1\}^n$ in which two vertices are connected if they differ by a single coordinate) induced by $g^{-1}(1)$. In order to prove the sensitivity theorem, it suffices to show that $G$ has a vertex whose degree is at least $\sqrt{n}$. This reduction is due to Gotsman and Linial.
Huang constructs a signing of the hypercube in which the product of the signs along any square is $-1$. This means that there is a way to assign a sign to every edge of the hypercube so that this property is satisfied. The same signing had been found earlier by Ahmadi et al., who were interested in signings of graphs with few distinct eigenvalues.
Let $B$ be the signed adjacency matrix corresponding to the signing. The property that the product of the signs in every square is $-1$ implies that $B^2 = nI$, and so half of the eigenvalues of $B$ are $\sqrt{n}$ and half are $-\sqrt{n}$. In particular, the eigenspace of $\sqrt{n}$ (which has dimension $2^{n-1}$) intersects the space of vectors supported on $g^{-1}(1)$ (which has dimension greater than $2^{n-1}$), implying that there is an eigenvector $v$ of $B$ with eigenvalue $\sqrt{n}$ which is supported on $g^{-1}(1)$. (This is a simplification of Huang's original argument due to Shalev Ben-David.)
Consider a point $x \in g^{-1}(1)$ maximizing $|v_x|$. On the one hand, $|(Bv)_x| = \sqrt{n}\,|v_x|$.
On the other hand, $|(Bv)_x|$ is at most the sum of the absolute values of $v$ over all neighbors of $x$ in $G$, which is at most $\deg_G(x)\,|v_x|$. Hence $\deg_G(x) \geq \sqrt{n}$.
Constructing the signing
Huang constructed the signing recursively. When $n = 1$, we can take an arbitrary signing. Given a signing $\sigma$ of the $(n-1)$-dimensional hypercube, we construct
a signing of the $n$-dimensional hypercube as follows. Partition the hypercube into two copies of the $(n-1)$-dimensional hypercube. Use $\sigma$ for one of them and $-\sigma$ for the other, and assign all edges between the two copies the sign $+1$.
The same signing can also be expressed directly. Let $(x, y)$ be an edge of the hypercube. If $i$ is the first coordinate on which $x$ and $y$ differ, we use the sign $(-1)^{x_1 + \cdots + x_{i-1}}$.
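The defining property of the signing is easy to verify numerically. The sketch below (an illustration, not code from Huang's paper) builds the signed adjacency matrix $B$ from the direct formula above and checks that $B^2 = nI$, which forces the eigenvalues to be $\pm\sqrt{n}$.

```python
import numpy as np
from itertools import product

def signed_adjacency(n):
    verts = list(product((0, 1), repeat=n))
    index = {v: k for k, v in enumerate(verts)}
    B = np.zeros((2 ** n, 2 ** n))
    for x in verts:
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            # sign of the edge (x, y) differing at coordinate i:
            # (-1) ** (x_1 + ... + x_{i-1})
            B[index[x], index[y]] = (-1) ** sum(x[:i])
    return B

n = 4
B = signed_adjacency(n)
assert np.allclose(B @ B, n * np.eye(2 ** n))  # eigenvalues are +/- sqrt(n)
```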
Extensions
The sensitivity theorem can be equivalently restated as
$\deg(f) \leq s(f)^2.$
Laplante et al. refined this to
$\deg(f) \leq s_0(f)\,s_1(f),$
where $s_b(f)$ is the maximum sensitivity of $f$ at a point in $f^{-1}(b)$.
They showed furthermore that this bound is attained at two neighboring points of the hypercube.
Aaronson, Ben-David, Kothari and Tal defined a new measure, the spectral sensitivity of $f$, denoted $\lambda(f)$. This is the largest eigenvalue of the adjacency matrix of the sensitivity graph of $f$, which is the subgraph of the hypercube consisting of all sensitive edges (edges connecting two points $x, x^{\oplus i}$ such that $f(x) \neq f(x^{\oplus i})$). They showed that Huang's proof can be decomposed into two steps:
$\lambda(f) \geq \sqrt{\deg(f)}$.
$s(f) \geq \lambda(f)$.
Using this measure, they proved several tight relations between complexity measures of Boolean functions: $\deg(f) = O(\widetilde{\deg}(f)^2)$ and $D(f) = O(Q(f)^4)$. Here $D(f)$ is the deterministic query complexity, $Q(f)$ is the quantum query complexity, and $\widetilde{\deg}(f)$ is the approximate degree.
Dafni et al. extended the notions of degree and sensitivity to Boolean functions on the symmetric group and on the perfect matching association scheme, and proved analogs of the sensitivity theorem for such functions. Their proofs use a reduction to Huang's sensitivity theorem.
See also
Decision tree model
Notes
References
Theorems in computational complexity theory | Sensitivity theorem | [
"Mathematics"
] | 1,167 | [
"Theorems in computational complexity theory",
"Theorems in discrete mathematics"
] |
72,331,198 | https://en.wikipedia.org/wiki/The%20Erd%C5%91s%20Distance%20Problem | The Erdős Distance Problem is a monograph on the Erdős distinct distances problem in discrete geometry: how can one place $n$ points into $d$-dimensional Euclidean space so that the pairs of points make the smallest possible distance set? It was written by Julia Garibaldi, Alex Iosevich, and Steven Senger, and published in 2011 by the American Mathematical Society as volume 56 of the Student Mathematical Library. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
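To make the problem concrete, the distance set of a finite configuration can be computed directly. The sketch below (illustrative only, not drawn from the book) counts the distinct distances determined by a $\sqrt{n} \times \sqrt{n}$ integer grid, the configuration Erdős conjectured to be essentially optimal in the plane; the count grows roughly like $n/\sqrt{\log n}$.

```python
from itertools import combinations

def distinct_distances(points):
    # squared distances avoid floating-point comparison issues
    return len({(p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for p, q in combinations(points, 2)})

for m in (2, 4, 8, 16):
    grid = [(i, j) for i in range(m) for j in range(m)]
    print(len(grid), distinct_distances(grid))  # n, number of distances
```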
Topics
The Erdős Distance Problem consists of twelve chapters and three appendices.
After an introductory chapter describing the formulation of the problem by Paul Erdős and Erdős's proof that the number of distances is always at least proportional to $\sqrt{n}$, the next six chapters cover the two-dimensional version of the problem. They build on each other to describe successive improvements to the known results on the problem, culminating in Chapter 7 with the strongest lower bound proven in the book. These results connect the problem to other topics including the Cauchy–Schwarz inequality, the crossing number inequality, the Szemerédi–Trotter theorem on incidences between points and lines, and methods from information theory.
Subsequent chapters discuss variations of the problem: higher dimensions, other metric spaces for the plane, the number of distinct inner products between vectors, and analogous problems in spaces whose coordinates come from a finite field instead of the real numbers.
Audience and reception
Although the book is largely self-contained, it assumes a level of mathematical sophistication aimed at advanced university-level mathematics students. Exercises are included, making it possible to use it as a textbook for a specialized course. Reviewer Michael Weiss suggests that the book is less successful than its authors hoped at reaching "readers at different levels of mathematical experience": the density of some of its material, needed to cover that material thoroughly, is incompatible with accessibility to beginning mathematicians. Weiss also complains about some minor mathematical errors in the book, which however do not interfere with its overall content.
Much of the book's content, on the two-dimensional version of the problem, was made obsolete soon after its publication by new results of Larry Guth and Nets Katz, who proved that the number of distances in this case must be near-linear. Nevertheless, reviewer William Gasarch argues that this outcome should make the book more interesting to readers, not less, because it helps explain the barriers that Guth and Katz had to overcome in proving their result. Additionally, the techniques that the book describes have many uses in discrete geometry.
References
Mathematics books
2011 non-fiction books
Discrete geometry
American Mathematical Society | The Erdős Distance Problem | [
"Mathematics"
] | 523 | [
"Discrete geometry",
"Discrete mathematics"
] |
72,339,713 | https://en.wikipedia.org/wiki/Titanium%20trisulfide | Titanium trisulfide (TiS3) is an inorganic chemical compound of titanium and sulfur. Its formula unit contains one Ti⁴⁺ cation, one S²⁻ anion and one S₂²⁻ (disulfide) anion.
TiS3 has a layered crystal structure, where the layers are weakly bonded to each other and can be exfoliated with an adhesive tape. The exfoliated layers have potential applications in ultrathin field-effect transistors.
Synthesis
Millimeter-long crystalline whiskers of TiS3 can be grown by chemical vapor transport at ca. 500 °C, using excess sulfur as the transporting gas.
Properties
TiS3 is an n-type semiconductor with an indirect bandgap of about 1 eV. Its individual layers are made of TiS atomic chains; hence they are anisotropic and their properties depend on the in-plane orientation. For example, in the same sample, electron mobility can be 80 cm2/(V·s) along the b-axis and 40 cm2/(V·s) along the a-axis.
References
Sulfides
Titanium(IV) compounds
Monolayers | Titanium trisulfide | [
"Physics"
] | 226 | [
"Monolayers",
"Atoms",
"Matter"
] |
73,769,818 | https://en.wikipedia.org/wiki/Nuclear%20Regulatory%20Authority | The Nuclear Regulatory Authority is the regulator for nuclear power in Turkey. Regulators are being trained in Russia and will oversee Akkuyu operated by Rosatom.
References
Governmental nuclear organizations
Nuclear power in Turkey
Government agencies of Turkey
2018 establishments in Turkey
Organizations based in Ankara
Regulatory and supervisory agencies of Turkey | Nuclear Regulatory Authority | [
"Engineering"
] | 58 | [
"Governmental nuclear organizations",
"Nuclear organizations"
] |
73,775,366 | https://en.wikipedia.org/wiki/DORIS%20%28particle%20accelerator%29 | The Double-Ring Storage Facility (DORIS) was an electron–positron storage ring at the German national laboratory DESY. It was DESY's second circular accelerator and its first storage ring, with a circumference of nearly 300 m. After construction was completed in 1974, DORIS provided collision experiments with electrons and their antiparticles at energies of 3.5 GeV per beam. In 1978, the energy of the beams was raised to 5 GeV each.
With evidence of "excited charmonium states", DORIS made an important contribution to the process of proving the existence of heavy quarks. In the same year, the first tests of X-ray lithography were performed at DESY.
In 1987, the ARGUS detector at the DORIS storage ring was the first experiment to observe the conversion of a B meson into its antiparticle, the anti-B meson.
The Hamburg Synchrotron Radiation Laboratory HASYLAB was commissioned in 1980 to use synchrotron radiation, which was generated at DORIS as a byproduct, for research. While DORIS was only used as a synchrotron radiation source for roughly a third of its running time in the beginning, the provision of synchrotron radiation became its sole purpose from 1993 onwards under the name DORIS III. In order to achieve more intense and controllable radiation, DORIS was upgraded in 1984 with wigglers and undulators. By means of a special array of permanent magnets, the accelerated positrons could now be brought onto a slalom course, increasing the intensity of the emitted synchrotron radiation by a factor of 100 in comparison to conventional storage ring systems.
Among the many studies carried out with the synchrotron radiation generated by DORIS, from 1986 to 2004, the Israeli biochemist Ada Yonath (Nobel Prize in Chemistry 2009) conducted experiments that led to her deciphering the ribosome.
DORIS III served 36 photon beamlines, where 45 instruments were operated in rotation. The overall beam time per year amounted to 8 to 10 months. It was finally shut down at the end of 2012.
OLYMPUS
The former site of the ARGUS detector at DORIS became the location of the OLYMPUS experiment in 2010. OLYMPUS used the toroidal magnet and pair of drift chambers from the MIT-Bates BLAST experiment along with refurbished time-of-flight detectors and multiple luminosity monitoring systems. OLYMPUS measured the positron–proton to electron–proton cross section ratio to precisely determine the size of two-photon exchange in elastic electron–proton scattering. Two-photon exchange may resolve the proton form factor discrepancy between recent measurements made using polarization techniques and ones using the Rosenbluth separation method. OLYMPUS took data in 2012 and 2013, and first results were published in 2017.
References
External links
DESY website
Particle physics facilities
Synchrotron radiation facilities | DORIS (particle accelerator) | [
"Materials_science"
] | 586 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
73,775,543 | https://en.wikipedia.org/wiki/Bromine%20monoxide%20radical | Bromine monoxide is a binary inorganic compound of bromine and oxygen with the chemical formula BrO. A free radical, this compound is the simplest of many bromine oxides. The compound is capable of influencing atmospheric chemical processes. Naturally, BrO can be found in volcanic plumes. BrO is similar to the oxygen monofluoride, chlorine monoxide and iodine monoxide radicals.
Chemical properties
The compound is very effective as a catalyst of ozone destruction. The chemical reaction of BrO with chlorine dioxide (OClO) results in ozone depletion in the stratosphere.
References
Bromine compounds
Diatomic molecules
Oxides
Free radicals | Bromine monoxide radical | [
"Physics",
"Chemistry",
"Biology"
] | 136 | [
"Molecules",
"Free radicals",
"Oxides",
"Salts",
"Senescence",
"Biomolecules",
"Diatomic molecules",
"Matter"
] |
73,776,279 | https://en.wikipedia.org/wiki/School-choice%20mechanism | A school-choice mechanism is an algorithm that aims to match pupils to schools in a way that respects both the pupils' preferences and the schools' priorities. It is used to automate the process of school choice. The most common school-choice mechanisms are variants of the deferred-acceptance algorithm and random serial dictatorship.
Relation to other matching mechanisms
School choice is a kind of a two-sided matching market, like the stable marriage problem or residency matching. The main difference is that, in school choice, one side of the market (namely, the schools) is not strategic. The schools' priorities do not represent subjective preferences, but are determined by legal requirements, for example: a priority for relatives of previous students, minority quotas, minimum income quotas, etc.
Strategic considerations
A major concern in designing a school-choice mechanism is that it should be strategyproof for the pupils (as they are considered to be strategic), so that they reveal their true preferences for schools. Therefore, the mechanism most commonly used in practice is the deferred-acceptance algorithm with pupils as the proposers. However, this mechanism may yield outcomes that are not Pareto-efficient for the pupils. This loss of efficiency might be substantial: a recent survey showed that around 2% of pupils could receive a school they prefer more, without harming any other student. Moreover, in some cases, DA might assign each pupil to their second-worst or worst school.
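For concreteness, the following is a minimal Python sketch of pupil-proposing deferred acceptance (the data layout and the toy instance are hypothetical, chosen only for illustration): each school has a capacity and a priority ordering over pupils, and rejected pupils continue proposing down their preference lists.

```python
def deferred_acceptance(pupil_prefs, school_priority, capacity):
    """Pupil-proposing DA. pupil_prefs: pupil -> ordered list of schools;
    school_priority: school -> pupils ordered best-first;
    capacity: school -> number of seats. Returns pupil -> school."""
    rank = {s: {p: r for r, p in enumerate(order)}
            for s, order in school_priority.items()}
    next_choice = {p: 0 for p in pupil_prefs}   # next school each pupil tries
    held = {s: [] for s in school_priority}     # tentatively admitted pupils
    free = list(pupil_prefs)
    while free:
        p = free.pop()
        if next_choice[p] >= len(pupil_prefs[p]):
            continue                            # list exhausted: p unassigned
        s = pupil_prefs[p][next_choice[p]]
        next_choice[p] += 1
        held[s].append(p)
        held[s].sort(key=lambda q: rank[s][q])  # best priority first
        if len(held[s]) > capacity[s]:
            free.append(held[s].pop())          # reject the lowest-priority pupil
    return {p: s for s, ps in held.items() for p in ps}

# toy instance: two schools with one seat each, three pupils
prefs = {"a": ["s1", "s2"], "b": ["s1", "s2"], "c": ["s2", "s1"]}
prio = {"s1": ["b", "a", "c"], "s2": ["a", "c", "b"]}
print(deferred_acceptance(prefs, prio, {"s1": 1, "s2": 1}))  # {'s1': ... }
```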
Efficiency-adjusted deferred-acceptance
Onur Kesten suggested to amend DA by removing "interrupters", that is, (student, school) pairs in which the student proposes to the school, causes the school to reject another student, and is later rejected by that school. This "Efficiency Adjusted Deferred Acceptance" algorithm (EADA) is Pareto-efficient. While it is neither stable nor strategyproof for the pupils, it satisfies weaker versions of these two properties. For example, it is regret-free truth-telling.
Interestingly, in lab experiments, more pupils report their true preferences to EADA than to DA (70% vs 35%). EADA is about to be used in Flanders.
References
Mechanism design
Education economics | School-choice mechanism | [
"Mathematics"
] | 449 | [
"Game theory",
"Mechanism design"
] |
73,779,699 | https://en.wikipedia.org/wiki/Echinodontium%20ballouii | Echinodontium ballouii is a basidiomycete native to the northeastern United States. It is a polypore and important decomposer of the tree Chamaecyparis thyoides. It was declared an endangered species in 2015 due to the scarcity of this tree, which is threatened by the logging industry. It is probable that around 250 individuals exist today.
Taxonomy
Echinodontium ballouii was initially thought to be in the genus Steccherinum, as it had spines on its hymenium and no stipe. It was placed into the genus Echinodontium in 1964 by Henry Louis Gross, a placement later confirmed by Manfred Binder through gene sequencing.
Morphology
The fungi's fruiting bodies are irregularly-shaped shelf-like formations. Their diameters can span 5-50 centimeters, and they can grow up to 20 centimeters tall. They are brown in color, and the body is tough and woody. They are commonly seen with “teeth” or “spines” protruding from the cap's underside. As a polypore, the fungus's spores are released from an underside covered in small pores, which is lighter in color. These pores are conchate or bell-shaped and release the fungus's basidiospores.
The fungus's spores measure 5-7 μm long and 2-3 μm tall and are strongly amyloid (turning blue-black under Melzer's reagent). As a polypore, Echinodontium ballouii is perennial, releasing spores once a year and forming a new layered hymenium directly on top of that of the previous year. The cystidia are 25-45 × 5-9 μm and club-shaped, becoming more thick-walled and dark with age. The basidia measure 20-25 × 6-8 μm and have 4 sterigma each.
The context (flesh) is made of skeletal and generative hyphae. The skeletal hyphae have a diameter of 3.5-4.5 μm, and are brown and smooth, with thick walls. The generative hyphae have a similar diameter and texture, but are transparent, thin-walled and nodose-septate.
Ecology
Echinodontium ballouii is a hemi-biotrophic wood decomposer, feeding off of only a single species of tree: the Chamaecyparis thyoides Atlantic white cedar. The fungus forms complex mycelial networks inside the tree's trunk, slowly digesting cellulose and lignin, and emerging as fruiting bodies in order to reproduce. They are often found relatively high up on trees, below branching points.
Habitat
Because this fungus only inhabits Chamaecyparis thyoides, its habitat is limited to this tree's ecological environment: swampy, coniferous forests within 150 miles of the Eastern coast of the United States. The production of fruiting bodies can take up to forty years, which means that the fungus is typically found on old growth trees.
Geographic distribution
The fungus's limited host species results in a very confined geographical distribution. Only about twenty visibly occupied trees have been documented to date, each in the East coast of the United States, with the first sighted in New Jersey.
Unique aspects
The small number of recorded Echinodontium ballouii has resulted in its classification as endangered. This is in part due to the formerly high demand for the fungi's host body, the Atlantic white cedar, for shipbuilding lumber, especially due to its coastal proximity. Today, demand for lumber still puts this important decomposer at risk. The fungus is named for William Hosea Ballou, one of its earliest discoverers. He mistakenly claimed that “there [was] no fungus more beautiful – or more deadly” to the Atlantic white cedar. In reality, the suffering trees that Ballou witnessed were likely due to logging development and changing hydrology. The fungus was generally thought to be extinct after no additional sightings occurred for the majority of the 20th century. However, it was rediscovered in the early 2000s by mycologists Larry Millman and Bill Neill.
References
Russulales
Fungi of the United States
Fungi described in 1909
Fungus species | Echinodontium ballouii | [
"Biology"
] | 873 | [
"Fungi",
"Fungus species"
] |
73,780,027 | https://en.wikipedia.org/wiki/Lorraine%20Maltby | Lorraine Lucy Maltby (born 1960, née Ward) is a British biologist who is a professor of environmental biology at the University of Sheffield. She serves as deputy Vice-President for research and innovation and chair of the board of trustees of the Freshwater Habitats Trust. Her research investigates interactions in the riparian zone and the environmental impacts of agri-plastics.
Early life and education
Maltby became interested in freshwater ecology during her A-Levels, where she completed a project on urban ecology. She moved to Newcastle University for an undergraduate degree in zoology. She then moved to the University of Glasgow for graduate studies, where she studied the life history of freshwater Erpobdella leeches.
Research and career
Maltby was awarded a Natural Environment Research Council (NERC) postdoctoral research grant, and moved to the University of Sheffield in 1984. Maltby joined the Faculty at the University of Sheffield in 1988, and was appointed a professor in 2004 and served as head of department from 2008. In 2017 she was appointed Deputy Vice President of Research. Her research investigates aquatic-riparian interactions and the environmental impacts of plasticulture. She has been part of the UK Research and Innovation (UKRI) activity around sustainable plastics in agriculture. She has studied chemical pollution in Yorkshire rivers. Maltby is chair of the Board of Trustees of the Freshwater Habitats Trust.
Awards and honours
2019 Elected Fellow of the Freshwater Biological Association
2020 Appointed officer of the Most Excellent Order of the British Empire (OBE) in the 2021 New Year Honours for services to Environmental Biology, Animal and Plant Sciences.
Selected publications
References
Living people
Alumni of Newcastle University
Alumni of the University of Glasgow
Fellows of the Royal Society of Biology
Officers of the Order of the British Empire
20th-century British biologists
21st-century British biologists
Academics of the University of Sheffield
20th-century British women scientists
21st-century British women scientists
British women biologists
1960 births
Environmental scientists | Lorraine Maltby | [
"Environmental_science"
] | 388 | [
"Environmental scientists",
"British environmental scientists"
] |
75,128,987 | https://en.wikipedia.org/wiki/Rainer%20Marutzky | Rainer Marutzky (Halle, 1947) is a German wood scientist, who is emeritus professor of wood chemistry at the Technical University of Braunschweig and former director of the Fraunhofer Institute for Wood Research, Wilhelm Klauditz Institute (WKI) in Braunschweig, Germany.
Biography
He was born on 11 August 1947 in Halle, Germany.
Following his military service, he studied chemistry at the Technical University of Braunschweig from 1968 to 1973. Under the mentorship of Professor Karl Wagner, he earned his doctoral degree and subsequently served as a post-doctoral fellow at the Society for Biotechnology in Braunschweig-Stöckheim, specializing in enzyme chemistry. In 1976, he joined the Fraunhofer Institute for Wood Research as a research associate.
He completed his habilitation at the Institute of Natural Sciences at the Technische Universität Braunschweig in 1991 and was appointed as a university professor in 1996. His pioneering research work was predominantly related to deleterious emissions from wood-based products and the industrial environment. He was also actively engaged in European standardization initiatives. Marutzky held the position of director of the Fraunhofer WKI from 1989 until December 2009, when he officially retired.
His years of work are evidenced by many publications in both German and international scientific journals, along with his participation as a keynote speaker and expert at various international scientific symposia.
International recognition
In 1988, Marutzky along with Edmone Roffael and Lutz Mehlhorn were awarded by the International Association iVTH for their research work on the topic "Investigations on the formaldehyde emissions from wood-based materials and other materials, and the development of methods to reduce formaldehyde emission potential." He has also received several other awards in the field of wood science and technology.
Presently, he is a technical advisor to the International Association for Technical Wood Matters (iVTH).
References
People from Braunschweig
German chemists
Wood sciences
Wood scientists
1947 births
Living people | Rainer Marutzky | [
"Materials_science",
"Engineering"
] | 422 | [
"Wood sciences",
"Wood scientists",
"Materials science"
] |
75,134,428 | https://en.wikipedia.org/wiki/Jankov%E2%80%93von%20Neumann%20uniformization%20theorem | In descriptive set theory the Jankov–von Neumann uniformization theorem is a result saying that every measurable relation on a pair of standard Borel spaces (with respect to the sigma algebra of analytic sets) admits a measurable section. It is named after V. A. Jankov and John von Neumann. While the axiom of choice guarantees that every relation has a section, this is a stronger conclusion in that it asserts that the section is measurable, and thus "definable" in some sense without using the axiom of choice.
Statement
Let $X$ and $Y$ be standard Borel spaces and $R \subseteq X \times Y$ a subset that is measurable with respect to the analytic sets. Then there exists a measurable function $f\colon X \to Y$ such that, for all $x \in X$, $(\exists y \in Y)\,R(x,y)$ if and only if $R(x, f(x))$.
An application of the theorem is that, given any measurable function $g\colon X \to Y$, there exists a universally measurable function $h\colon Y \to X$ such that $g(h(y)) = y$ for all $y$ in the range of $g$.
References
Descriptive set theory
Inverse functions
Measure theory | Jankov–von Neumann uniformization theorem | [
"Mathematics"
] | 187 | [
"Theorems in mathematical analysis",
"Theorems in measure theory"
] |
75,138,286 | https://en.wikipedia.org/wiki/Pictet%27s%20experiment | Pictet's experiment is the demonstration of the reflection of heat and the apparent reflection of cold in a series of experiments performed in 1790 (reported in English in 1791 in An Essay on Fire) by Marc-Auguste Pictet—ten years before the discovery of infrared heating of the Earth by the Sun. The apparatus for most of the experiments used two concave mirrors facing one another at a distance. An object placed at the focus of one mirror would have heat and light reflected by the mirror and focused. An object at the focus of the counterpart mirror would do the same. Placing a hot object at one focus and a thermometer at the other would register an increase in temperature on the thermometer. This was sometimes demonstrated with the explosion of a flammable mix of gasses in a blackened balloon, as described and depicted by John Tyndall in 1863.
After "demonstrating that radiant heat, even when it was not accompanied by any light, could be reflected and focused like light", Pictet used the same apparatus to demonstrate the apparent reflection of cold in a similar manner. This demonstration was important to Benjamin Thompson, Count Rumford, who argued for the existence of "frigorific rays" conveying cold. Rumford's continuation of the experiments and promotion of the topic caused the name to be attached to the experiment.
The apparent reflection of cold when a cold object is placed at one focus surprised Pictet, and two scholars writing about the experiment in 1985 noted that "most physicists, on seeing it demonstrated for the first time, find it surprising and even puzzling." The confusion may be resolved by understanding that all objects in the system—including the thermometer—are constantly radiating heat. Pictet described this as "the thermometer acts the same part relatively to the snow as the bullet [heat source] in relation to the thermometer." Adding a very cold object introduces an effective heat sink, whereas a room-temperature object would not, on net, cool or warm a thermometer at the other focus.
Modern replications and demonstrations
There are relatively few published examples of demonstrations or recreation of the experiment. Two physicists in the University of Washington system reported on demonstrations to students and colleagues and produced directions for re-creating the experiment in 1985 as part of an investigation into the role of the experiment in the history of physics. Physicists at Sofia University in Bulgaria reported on reproducing the experiment for high school students in 2017.
References
External links
The Pictet Cabinet: The art of teaching science through experiment, a 2011 pamphlet from the Musée d'histoire des sciences de la Ville de Genève (Museum of the History of Science of the City of Geneva)
"Are There Rays of Cold?", an undated video demonstration in Russian from the Moscow Engineering Physics Institute
1790 in science
1791 in science
Physics experiments
Thermodynamics
History of science | Pictet's experiment | [
"Physics",
"Chemistry",
"Mathematics",
"Technology"
] | 587 | [
"Physics experiments",
"History of science",
"Experimental physics",
"Thermodynamics",
"History of science and technology",
"Dynamical systems"
] |
75,144,170 | https://en.wikipedia.org/wiki/Graphitization | Graphitization is a process of transforming a carbonaceous material, such as coal or the carbon in certain forms of iron alloys, into graphite.
Process
The graphitization process involves a restructuring of the molecular structure of the carbon material. In the initial state, these materials can have an amorphous structure or a crystalline structure different from graphite. Graphitization generally occurs at high temperatures (up to ), and can be accelerated by catalysts such as iron or nickel.
When carbonaceous material is exposed to high temperatures for an extended period of time, the carbon atoms begin to rearrange and form layered crystal planes. In the structure of graphite, carbon atoms are arranged in flat hexagonal sheets that are stacked on top of each other. These crystal planes give graphite its characteristic flake structure and account for its specific properties, such as good electrical and thermal conductivity, low friction and excellent lubrication.
Interest
Graphitization can be observed in various contexts. For example, it occurs naturally during the formation of certain types of coal or graphite in the Earth's crust. It can also be artificially induced during the manufacture of specific carbon materials, such as graphite electrodes used in fuel cells, nuclear reactors or metallurgical applications.
Graphitization is of particular interest in the field of metallurgy. Some iron alloys, such as cast iron, can undergo graphitization heat treatment to improve their mechanical properties and machinability. During this process, the carbon dissolved in the iron alloy matrix separates and restructures as graphite, which gives the cast iron its specific characteristics, such as improved ductility and wear resistance.
Notes and references
Molecular physics
Metallurgy
Materials science | Graphitization | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 349 | [
"Applied and interdisciplinary physics",
"Molecular physics",
"Metallurgy",
"Materials science",
"Atomic, molecular, and optical physics"
] |
67,944,516 | https://en.wikipedia.org/wiki/Physics-informed%20neural%20networks | Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximators that can embed the knowledge of any physical laws that govern a given data-set in the learning process, and can be described by partial differential equations (PDEs). Low data availability for some biological and engineering problems limit the robustness of conventional machine learning models used for these applications. The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the generalizability of the function approximation. This way, embedding this prior information into a neural network results in enhancing the information content of the available data, facilitating the learning algorithm to capture the right solution and to generalize well even with a low amount of training examples.
Function approximation
Most of the physical laws that govern the dynamics of a system can be described by partial differential equations. For example, the Navier–Stokes equations are a set of partial differential equations derived from the conservation laws (i.e., conservation of mass, momentum, and energy) that govern fluid mechanics. The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry. However, these equations cannot be solved exactly and therefore numerical methods must be used (such as finite differences, finite elements and finite volumes). In this setting, these governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization.
Recently, solving the governing partial differential equations of physical phenomena using deep learning has emerged as a new field of scientific machine learning (SciML), leveraging the universal approximation theorem and high expressivity of neural networks. In general, deep neural networks could approximate any high-dimensional function given that sufficient training data are supplied. However, such networks do not consider the physical characteristics underlying the problem, and the level of approximation accuracy provided by them is still heavily dependent on careful specifications of the problem geometry as well as the initial and boundary conditions. Without this preliminary information, the solution is not unique and may lose physical correctness. On the other hand, physics-informed neural networks (PINNs) leverage governing physical equations in neural network training. Namely, PINNs are designed to be trained to satisfy the given training data as well as the imposed governing equations. In this fashion, a neural network can be guided with training data that do not necessarily need to be large and complete. Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions. Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), PINN may be used for finding an optimal solution with high fidelity.
PINNs allow for addressing a wide range of problems in computational science and represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs. PINNs can be thought of as a meshfree alternative to traditional approaches (e.g., CFD for fluid dynamics), and new data-driven approaches for model inversion and system identification. Notably, the trained PINN network can be used for predicting the values on simulation grids of different resolutions without the need to be retrained. In addition, they allow for exploiting automatic differentiation (AD) to compute the required derivatives in the partial differential equations, a new class of differentiation techniques widely used to derive neural networks assessed to be superior to numerical or symbolic differentiation.
Modeling and computation
A general nonlinear partial differential equation can be written as
$u_t + N[u; \lambda] = 0, \quad x \in \Omega, \ t \in [0, T],$
where $u(t, x)$ denotes the solution, $N[\cdot; \lambda]$ is a nonlinear operator parameterized by $\lambda$, and $\Omega$ is a subset of $\mathbb{R}^D$. This general form of governing equations summarizes a wide range of problems in mathematical physics, such as conservative laws, diffusion process, advection-diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems:
data-driven solution
data-driven discovery of partial differential equations.
Data-driven solution of partial differential equations
The data-driven solution of a PDE computes the hidden state $u(t, x)$ of the system given boundary data and/or measurements $z$, and fixed model parameters $\lambda$. We solve
$u_t + N[u] = 0, \quad x \in \Omega, \ t \in [0, T].$
By defining the residual $f(t, x)$ as
$f := u_t + N[u],$
and approximating $u(t, x)$ by a deep neural network, $f(t, x)$ becomes a network with the same parameters. This network can be differentiated using automatic differentiation. The parameters of $u(t, x)$ and $f(t, x)$ can then be learned by minimizing the following loss function $L_{tot}$:
$L_{tot} = L_u + L_f.$
Here $L_u$ is the error between the PINN $u(t, x)$ and the set of boundary conditions and measured data on the set of points where the boundary conditions and data are defined, and $L_f$ is the mean-squared error of the residual function. This second term encourages the PINN to learn the structural information expressed by the partial differential equation during the training process.
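As a concrete illustration of this loss, the following is a minimal PyTorch sketch for a one-dimensional viscous Burgers' equation, $u_t + u u_x - \nu u_{xx} = 0$, taken here only as an assumed example PDE; the network size, the random placeholder data, and the collocation points are hypothetical choices rather than values from the literature.

```python
import torch

# u(t, x) is approximated by a small fully connected network
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
nu = 0.01  # assumed (known) viscosity for the forward problem

def residual(tx):
    """PDE residual f = u_t + u u_x - nu u_xx via automatic differentiation."""
    tx = tx.requires_grad_(True)
    u = net(tx)
    du = torch.autograd.grad(u, tx, torch.ones_like(u), create_graph=True)[0]
    u_t, u_x = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x, tx, torch.ones_like(u_x),
                               create_graph=True)[0][:, 1:]
    return u_t + u * u_x - nu * u_xx

# placeholder boundary/measurement data and interior collocation points
tx_data, u_data = torch.rand(64, 2), torch.zeros(64, 1)
tx_colloc = torch.rand(256, 2)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss_u = ((net(tx_data) - u_data) ** 2).mean()  # data/boundary term L_u
    loss_f = (residual(tx_colloc) ** 2).mean()      # residual term L_f
    (loss_u + loss_f).backward()
    opt.step()
```

Both loss terms are driven by the same network parameters; the residual term is obtained purely through automatic differentiation of the network output, which is what ties the fit to the governing equation.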
This approach has been used to yield computationally efficient physics-informed surrogate models with applications in the forecasting of physical processes, model predictive control, multi-physics and multi-scale modeling, and simulation. It has been shown to converge to the solution of the PDE.
Data-driven discovery of partial differential equations
Given noisy and incomplete measurements $z$ of the state of the system, the data-driven discovery of a PDE results in computing the unknown state $u(t, x)$ and learning the model parameters $\lambda$ that best describe the observed data; it reads as follows:
$u_t + N[u; \lambda] = 0, \quad x \in \Omega, \ t \in [0, T].$
By defining the residual $f(t, x)$ as
$f := u_t + N[u; \lambda],$
and approximating $u(t, x)$ by a deep neural network, the result is a PINN. This network can be differentiated using automatic differentiation. The parameters of $u(t, x)$ and $f(t, x)$, together with the parameter $\lambda$ of the differential operator, can then be learned by minimizing the following loss function $L_{tot}$:
$L_{tot} = L_u + L_f.$
Here $L_u = \|u - z\|$, with $u$ and $z$ the state solutions and measurements at the sparse locations $\{(t_i, x_i)\}$, respectively, and $L_f$ is the mean-squared error of the residual function. This second term requires the structured information represented by the partial differential equations to be satisfied in the training process.
This strategy allows for discovering dynamic models described by nonlinear PDEs while assembling computationally efficient and fully differentiable surrogate models that may find application in predictive forecasting, control, and data assimilation.
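Continuing the hypothetical Burgers' sketch above, promoting the PDE coefficient to a trainable parameter turns the same loss into a discovery problem; the viscosity estimate is then learned jointly with the state (the measurement data below are again placeholders).

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
log_nu = torch.nn.Parameter(torch.tensor(-3.0))  # learnable PDE parameter

def residual(tx):
    tx = tx.requires_grad_(True)
    u = net(tx)
    du = torch.autograd.grad(u, tx, torch.ones_like(u), create_graph=True)[0]
    u_t, u_x = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x, tx, torch.ones_like(u_x),
                               create_graph=True)[0][:, 1:]
    # exp keeps the learned viscosity positive during optimization
    return u_t + u * u_x - torch.exp(log_nu) * u_xx

tx_obs, u_obs = torch.rand(128, 2), torch.zeros(128, 1)  # placeholder measurements
opt = torch.optim.Adam(list(net.parameters()) + [log_nu], lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = ((net(tx_obs) - u_obs) ** 2).mean() + (residual(tx_obs) ** 2).mean()
    loss.backward()
    opt.step()
print(torch.exp(log_nu).item())  # recovered estimate of the viscosity
```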
Physics-informed neural networks for piece-wise function approximation
PINN is unable to approximate PDEs that have strong non-linearity or sharp gradients, which commonly occur in practical fluid flow problems. Piece-wise approximation has been an old practice in the field of numerical approximation. Because piece-wise approximation can capture strong non-linearity, extremely lightweight PINNs can be used to solve PDEs in much larger discrete subdomains, which increases accuracy substantially and decreases computational load as well. DPINN (Distributed physics-informed neural networks) and DPIELM (Distributed physics-informed extreme learning machines) are generalizable space-time domain discretizations for better approximation. DPIELM is an extremely fast and lightweight approximator with competitive accuracy. Domain scaling on top has a special effect. Another school of thought is discretization for parallel computation, to leverage the available computational resources.
XPINN is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrary complex-geometry domains. XPINN further pushes the boundaries of both PINNs and conservative PINNs (cPINNs), the latter being a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has larger representation and parallelization capacity due to the inherent deployment of multiple neural networks in smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDE. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large data sets) as well as for high-dimensional problems where single-network PINNs are not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved. However, DPINN debunks the use of residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization.
Physics-informed neural networks and theory of functional connections
In the PINN framework, initial and boundary conditions are not analytically satisfied, so they need to be included in the loss function of the network and learned simultaneously with the unknown functions of the differential equation (DE). Having competing objectives during the network's training can lead to unbalanced gradients when using gradient-based techniques, which causes PINNs to often struggle to accurately learn the underlying DE solution. This drawback is overcome by using functional interpolation techniques such as the constrained expression of the Theory of functional connections (TFC), in the Deep-TFC framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfy the constraints. A further improvement of the PINN and functional interpolation approach is given by the Extreme Theory of Functional Connections (X-TFC) framework, where a single-layer neural network and the extreme learning machine training algorithm are employed. X-TFC improves the accuracy and performance of regular PINNs, and its robustness and reliability have been proven for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications.
Physics-informed PointNet (PIPN) for multiple sets of irregular geometries
Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry. It means that for any new geometry (computational domain), one must retrain a PINN. This limitation of regular PINNs imposes high computational costs, specifically for a comprehensive investigation of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) is fundamentally the result of a combination of PINN's loss function with PointNet. In fact, instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet has been primarily designed for deep learning of 3D object classification and segmentation by the research group of Leonidas J. Guibas. PointNet extracts geometric features of input computational domains in PIPN. Thus, PIPN is able to solve governing equations on multiple computational domains (rather than only a single domain) with irregular geometries, simultaneously. The effectiveness of PIPN has been shown for incompressible flow, heat transfer and linear elasticity.
Physics-informed neural networks (PINNs) for inverse computations
Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, demonstrating their applicability across science, engineering, and economics. They have proven useful for solving inverse problems in a variety of fields, including nano-optics, topology optimization/characterization, multiphase flow in porous media, and high-speed fluid flow. PINNs have demonstrated flexibility when dealing with noisy and uncertain observation datasets. They have also demonstrated clear advantages in the inverse calculation of parameters for multi-fidelity datasets, meaning datasets with different quality, quantity, and types of observations. Uncertainties in calculations can be evaluated using ensemble-based or Bayesian-based calculations.
Physics-informed neural networks for elasticity problems
Surrogate networks are intended for the unknown functions, namely the components of the strain and stress tensors as well as the unknown displacement field. The residual network provides the residuals of the partial differential equations (PDEs) and of the boundary conditions. The computational approach is based on principles of artificial intelligence.
Physics-informed neural networks (PINNs) with backward stochastic differential equation
Deep backward stochastic differential equation method is a numerical method that combines deep learning with Backward stochastic differential equation (BSDE) to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. Additionally, integrating Physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring solutions adhere to governing stochastic differential equations, resulting in more accurate and reliable solutions.
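A heavily simplified sketch of the deep BSDE recursion follows, assuming PyTorch; the terminal-value problem, zero driver (corresponding to a heat-type equation), network sizes, and dynamics are all illustrative assumptions, not a definitive implementation:

```python
# Minimal deep BSDE sketch: Y_{n+1} = Y_n - f(t_n, X_n, Y_n, Z_n) dt + Z_n . dW_n,
# with Y_0 and the maps X -> Z_n parameterized by neural networks.
import torch

d, N, dt = 10, 20, 1.0 / 20          # dimension, time steps, step size (assumed)
g = lambda x: torch.sum(x ** 2, dim=1, keepdim=True)  # assumed terminal condition
f = lambda t, x, y, z: torch.zeros_like(y)            # assumed driver (heat equation)

y0 = torch.nn.Parameter(torch.zeros(1))               # Y_0 ~ u(0, x0), learned
z_nets = torch.nn.ModuleList(
    [torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                         torch.nn.Linear(32, d)) for _ in range(N)]
)
opt = torch.optim.Adam([y0] + list(z_nets.parameters()), lr=1e-3)

for step in range(1000):
    opt.zero_grad()
    x = torch.zeros(256, d)                           # batch of paths from x0 = 0
    y = y0.expand(256, 1)
    for n in range(N):
        dw = torch.randn(256, d) * dt ** 0.5          # Brownian increments
        z = z_nets[n](x)
        y = y - f(n * dt, x, y, z) * dt + torch.sum(z * dw, dim=1, keepdim=True)
        x = x + dw                                    # assumed dynamics dX = dW
    loss = torch.mean((y - g(x)) ** 2)                # match terminal condition
    loss.backward()
    opt.step()
```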
Physics-informed neural networks for biology
An extension or adaptation of PINNs are biologically-informed neural networks (BINNs). BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function is modified to include an additional constraint term used to incorporate domain-specific knowledge that helps enforce biological applicability. For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms. Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases.
A natural example of BINNs can be found in cell dynamics, where the cell density $u(x,t)$ is governed by a reaction-diffusion equation with diffusion and growth functions $D(u)$ and $G(u)$, respectively:
$\frac{\partial u}{\partial t} = \nabla \cdot \big( D(u)\,\nabla u \big) + G(u)\,u.$
In this case, a component of the constraint term could penalize values of $D(u)$ that fall outside a biologically relevant diffusion range. Furthermore, the BINN architecture, when utilizing multilayer perceptrons (MLPs), would function as follows: an MLP is used to construct $u_{MLP}(x,t)$ from the model inputs $(x,t)$, serving as a surrogate model for the cell density $u(x,t)$. This surrogate is then fed into two additional MLPs, $D_{MLP}$ and $G_{MLP}$, which model the diffusion and growth functions. Automatic differentiation can then be applied to compute the necessary derivatives of $u_{MLP}$, $D_{MLP}$ and $G_{MLP}$ to form the governing reaction-diffusion equation.
Note that since $u_{MLP}$ is a surrogate for the cell density, it may contain errors, particularly in regions where the PDE is not fully satisfied. Therefore, the reaction-diffusion equation may be solved numerically, for instance using a method-of-lines approach.
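A minimal sketch of the composition described above, assuming PyTorch, with assumed network sizes and with the constraint term reduced to a simple range penalty on the learned diffusion function (the range bounds are hypothetical placeholders):

```python
# BINN sketch: u_MLP is a surrogate for the cell density; D_MLP and G_MLP
# model the unknown diffusion and growth functions of u.
import torch

u_mlp = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
d_mlp = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 1), torch.nn.Softplus())
g_mlp = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 1))

def binn_residual(t, x):
    """Residual of u_t = (D(u) u_x)_x + G(u) u via automatic differentiation (1D)."""
    u = u_mlp(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    flux = d_mlp(u) * u_x
    flux_x = torch.autograd.grad(flux, x, torch.ones_like(flux), create_graph=True)[0]
    return u_t - flux_x - g_mlp(u) * u

def constraint_penalty(u, d_lo=1e-3, d_hi=1.0):
    """Assumed biological-range penalty on the learned diffusion D(u)."""
    d = d_mlp(u)
    return torch.mean(torch.relu(d_lo - d) + torch.relu(d - d_hi))
```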
Limitations
Translation and discontinuous behavior are hard to approximate using PINNs. PINNs fail when solving differential equations with even slight advective dominance, where the asymptotic behaviour of the solution causes the method to break down. Such PDEs can be solved by scaling variables.
This difficulty in training of PINNs in advection-dominated PDEs can be explained by the Kolmogorov n–width of the solution.
They also fail to solve systems of coupled dynamical equations, and hence have not succeeded in solving chaotic equations. One of the reasons behind the failure of regular PINNs is the soft-constraining of Dirichlet and Neumann boundary conditions, which poses a multi-objective optimization problem that requires manually weighing the loss terms to be able to optimize.
More generally, posing the solution of a PDE as an optimization problem brings with it all the problems that are faced in the world of optimization, the major one being getting stuck in local optima.
References
External links
Physics Informed Neural Network
PINN – repository to implement physics-informed neural network in Python
XPINN – repository to implement extended physics-informed neural network (XPINN) in Python
PIPN – repository to implement physics-informed PointNet (PIPN) in Python
Differential equations
Deep learning | Physics-informed neural networks | [
"Mathematics"
] | 3,172 | [
"Mathematical objects",
"Differential equations",
"Equations"
] |
67,944,523 | https://en.wikipedia.org/wiki/Quantum%20Markov%20semigroup | In quantum mechanics, a quantum Markov semigroup describes the dynamics in a Markovian open quantum system. The axiomatic definition of the prototype of quantum Markov semigroups was first introduced by A. M. Kossakowski in 1972, and then developed by V. Gorini, A. M. Kossakowski, E. C. G. Sudarshan and Göran Lindblad in 1976.
Motivation
An ideal quantum system is not realistic because it should be completely isolated while, in practice, it is influenced by the coupling to an environment, which typically has a large number of degrees of freedom (for example an atom interacting with the surrounding radiation field). A complete microscopic description of the degrees of freedom of the environment is typically too complicated. Hence, one looks for simpler descriptions of the dynamics of the open system. In principle, one should investigate the unitary dynamics of the total system, i.e. the system and the environment, to obtain information about the reduced system of interest by averaging the appropriate observables over the degrees of freedom of the environment. To model the dissipative effects due to the interaction with the environment, the Schrödinger equation is replaced by a suitable master equation, such as a Lindblad equation or a stochastic Schrödinger equation in which the infinite degrees of freedom of the environment are "synthesized" as a few quantum noises. Mathematically, time evolution in a Markovian open quantum system is no longer described by means of one-parameter groups of unitary maps, but one needs to introduce quantum Markov semigroups.
Definitions
Quantum dynamical semigroup (QDS)
In general, quantum dynamical semigroups can be defined on von Neumann algebras, so the dimensionality of the system can be infinite. Let $\mathcal{A}$ be a von Neumann algebra acting on a Hilbert space $\mathsf{h}$; a quantum dynamical semigroup on $\mathcal{A}$ is a collection of bounded operators on $\mathcal{A}$, denoted by $\mathcal{T} = (\mathcal{T}_t)_{t \ge 0}$, with the following properties:
$\mathcal{T}_0(a) = a$, for all $a \in \mathcal{A}$,
$\mathcal{T}_{t+s}(a) = \mathcal{T}_t(\mathcal{T}_s(a))$, for all $s, t \ge 0$, $a \in \mathcal{A}$,
$\mathcal{T}_t$ is completely positive for all $t \ge 0$,
$\mathcal{T}_t$ is a $\sigma$-weakly continuous operator on $\mathcal{A}$ for all $t \ge 0$,
for all $a \in \mathcal{A}$, the map $t \mapsto \mathcal{T}_t(a)$ is continuous with respect to the $\sigma$-weak topology on $\mathcal{A}$.
Under the condition of complete positivity, the operators $\mathcal{T}_t$ are $\sigma$-weakly continuous if and only if they are normal. Recall that, letting $\mathcal{A}_+$ denote the convex cone of positive elements in $\mathcal{A}$, a positive operator $T$ is said to be normal if, for every increasing net $(x_\alpha)$ in $\mathcal{A}_+$ with least upper bound $x$ in $\mathcal{A}_+$, one has
$\lim_\alpha \langle u, T(x_\alpha)\, u \rangle = \langle u, T(x)\, u \rangle$
for each $u$ in a norm-dense linear sub-manifold of $\mathsf{h}$.
Quantum Markov semigroup (QMS)
A quantum dynamical semigroup $\mathcal{T}$ is said to be identity-preserving (or conservative, or Markovian) if
$\mathcal{T}_t(\mathbb{1}) = \mathbb{1} \quad \text{for all } t \ge 0, \qquad (\star)$
where $\mathbb{1}$ is the identity element of $\mathcal{A}$. For simplicity, $\mathcal{T}$ is then called a quantum Markov semigroup. Notice that the identity-preserving property and the positivity of each $\mathcal{T}_t$ imply $\|\mathcal{T}_t\| = 1$ for all $t \ge 0$, and then $\mathcal{T}$ is a contraction semigroup.
The condition $(\star)$ plays an important role not only in the proof of uniqueness and unitarity of the solution of a Hudson–Parthasarathy quantum stochastic differential equation, but also in deducing regularity conditions for paths of classical Markov processes in view of operator theory.
Infinitesimal generator of QDS
The infinitesimal generator of a quantum dynamical semigroup $\mathcal{T}$ is the operator $\mathcal{L}$ with domain $\operatorname{Dom}(\mathcal{L})$, where
$\operatorname{Dom}(\mathcal{L}) = \left\{ a \in \mathcal{A} : \lim_{t \to 0^+} \frac{\mathcal{T}_t(a) - a}{t} \ \text{exists} \right\}$
and $\mathcal{L}(a) = \lim_{t \to 0^+} \dfrac{\mathcal{T}_t(a) - a}{t}$.
Characterization of generators of uniformly continuous QMSs
If the quantum Markov semigroup $\mathcal{T}$ is uniformly continuous in addition, which means $\lim_{t \to 0^+} \|\mathcal{T}_t - \mathcal{T}_0\| = 0$, then
the infinitesimal generator $\mathcal{L}$ will be a bounded operator on the von Neumann algebra $\mathcal{A}$ with domain $\operatorname{Dom}(\mathcal{L}) = \mathcal{A}$,
the map $t \mapsto \mathcal{T}_t(a)$ will automatically be continuous for every $a \in \mathcal{A}$,
the infinitesimal generator $\mathcal{L}$ will also be $\sigma$-weakly continuous.
Under such assumptions, the infinitesimal generator $\mathcal{L}$ has the characterization
$\mathcal{L}(a) = i[H, a] + \sum_{j} \left( L_j^\dagger a L_j - \tfrac{1}{2}\{ L_j^\dagger L_j,\, a \} \right),$
where $a \in \mathcal{A}$, $L_j \in \mathcal{A}$, $\sum_j L_j^\dagger L_j \in \mathcal{A}$, and $H \in \mathcal{A}$ is self-adjoint. Moreover, $[\cdot,\cdot]$ above denotes the commutator, and $\{\cdot,\cdot\}$ the anti-commutator.
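For uniformly continuous semigroups of this Lindblad (GKSL) form, the predual action on density matrices can be simulated numerically. A minimal sketch, assuming the Python library QuTiP; the damped harmonic oscillator, the truncation dimension, and the damping rate below are illustrative choices, not taken from the article:

```python
# Illustrative quantum Markov semigroup: a damped harmonic oscillator
# evolved under a Lindblad (GKSL) generator with the QuTiP library.
import numpy as np
from qutip import destroy, basis, mesolve

N = 10                          # truncated Fock-space dimension (assumed)
a = destroy(N)
H = a.dag() * a                 # Hamiltonian part of the generator
gamma = 0.1                     # assumed damping rate
c_ops = [np.sqrt(gamma) * a]    # single Lindblad operator L

rho0 = basis(N, 5) * basis(N, 5).dag()   # initial state |5><5|
times = np.linspace(0.0, 50.0, 200)

# mesolve integrates the predual master equation
# d(rho)/dt = -i[H, rho] + L rho L+ - {L+ L, rho}/2,
# i.e. the semigroup acting on states rather than on observables.
result = mesolve(H, rho0, times, c_ops, e_ops=[a.dag() * a])
print(result.expect[0][-1])     # mean photon number decays toward zero
```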
Selected recent publications
See also
References
Quantum mechanics
Semigroup theory | Quantum Markov semigroup | [
"Physics",
"Mathematics"
] | 780 | [
"Mathematical structures",
"Theoretical physics",
"Quantum mechanics",
"Fields of abstract algebra",
"Algebraic structures",
"Semigroup theory"
] |
67,944,537 | https://en.wikipedia.org/wiki/Reinforcement%20in%20concrete%203D%20printing | The reinforcement of 3D printed concrete is a mechanism where the ductility and tensile strength of printed concrete are improved using various reinforcing techniques, including reinforcing bars, meshes, fibers, or cables. The reinforcement of 3D printed concrete is important for the large-scale use of the new technology, like in the case of ordinary concrete. With a multitude of additive manufacturing application in the concrete construction industryspecifically the use of additively constructed concrete in the manufacture of structural concrete elementsthe reinforcement and anchorage technologies vary significantly. Even for non-structural elements, the use of non-structural reinforcement such as fiber reinforcement is not uncommon. The lack of formwork in most 3D printed concrete makes the installation of reinforcement complicated. Early phases of research in concrete 3D printing primarily focused on developing the material technologies of the cementitious/concrete mixes. These causes combined with the non-existence of codal provisions on reinforcement and anchorage for printed elements speak for the limited awareness and the usage of the various reinforcement techniques in additive manufacturing. The material extrusion-based printing of concrete is currently favorable both in terms of availability of technology and of the cost-effectiveness. Therefore, most of the reinforcement techniques developed or currently under development are suitable to the extrusion-based 3D printing technology.
Types of reinforcement
The reinforcement in concrete 3D printing, much like that in conventional concrete, can be classified based either on the method of placement or the method of action. The methods of placement of reinforcement are pre-installation, co-installation, and post-installation; examples of each are pre-installed meshes, fibers mixed into the concrete, and post-tensioning cables, respectively. The classification based on structural action is once again the same as that in conventional concrete. Examples of passive and active reinforcement in 3D printed concrete are reinforcement bars and post-tensioning cables used to prestress segmental elements, respectively. The majority of reinforcement in concrete has conventionally been steel, and this continues to be the case in 3D printed concrete. Alternative composite materials, such as FRPs and fibers of glass, basalt, etc., in the mix have gained considerable prominence.
Some common reinforcements in 3D printing
Reinforcing steel bars
The high availability and popularity of deformed bars or rebars as passive structural reinforcement in conventional concrete systems make them sought after in printed concrete. They are welded together to form trusses laid between layers, forming a very effective co-installed reinforcement strategy without the use of formwork. They are also erected as reinforcement cages around which concrete is printed to form wall and beam elements, making rebars an effective pre-installation strategy.
The rebar-based formative skeletal structure can also act as a core on which printable concrete is shotcreted in a new method developed at TU Braunschweig.
The rebar cages can also be installed inside printed concrete formworks in non-structural members, and the holes are filled with grout. This method of post-installed reinforcement has proven to be cost-effective; however, it requires attention to the interface between steel and the printed concrete. The use of printed concrete as formwork requires higher tensile hoop strength of the concrete, which could be provided by the use of fibers in the mix.
Smart Dynamic Casting
Smart Dynamic Casting (SDC), a new printing technology being developed at ETH Zurich, combines slipforming and printing material technologies to produce varied cross-sections and complex geometries using very little formwork. Reinforcement bars are pre-installed, just as in conventionally cast concrete, and the rheology of the concrete is adapted so that it retains the shape of the slipforming formwork before the concrete hydrates enough to sustain its self-weight. Concrete facade mullions of varying cross-sections were produced for a DFAB house in Switzerland.
Reinforcement meshes
Similar to the use of rebars, reinforcement meshes are also widely used as a passive reinforcement technique. The welded wire meshes are laid in between printed layers of slabs without requiring any formwork. They can also be used to print wall elements that are fabricated laterally and erected in place. In a method unlike that with rebars, spools of mesh are unwound simultaneously ahead of the printer nozzle to provide both horizontal and vertical reinforcement to the printed elements. This method not only acts as reinforcement in the hardened state of concrete but also compensates for the lack of formwork in the fresh state of concrete.
Cables
High-strength galvanised steel cables provide effective reinforcement in printed concrete elements where sufficient cover concrete cannot be provided owing to the complexity of the shape. The cables can either be laid in-between layers or extruded simultaneously like the meshes. The bond between high-strength steel cables and concrete needs special attention.
Continuous yarn or flow-based pultrusion
Continuous yarns of glass, basalt, high-performance polymer, or carbon can also be used effectively as reinforcement for 3D-printed concrete without needing additional motors. The technique takes advantage of the consistency of the extruded concrete to passively pultrude numerous continuous yarns. The obtained material is a unidirectional cementitious composite with increased strength and ductility in the extrusion direction, depending on the proportion of fiber. Thanks to the small diameter of the yarns, their bond with the matrix is usually strong. Furthermore, the process takes advantage of the small bending stiffness of the yarns to preserve the same geometric freedom, with extended buildability thanks to the early tensile strength provided by the yarns during printing. These features come at the cost of a more complex extrusion nozzle and the use of a specific device for handling the numerous yarns.
Post-tensioning cables
The automated fabrication of elements realises its true potential when printed segmental elements are fitted in place using post-tensioning. The concrete segments are printed, leaving holes for the post-tensioning cables, which not only act as an active reinforcement but also help to connect the segmental elements to form a load-bearing structure. The holes left for the cables are filled with grout after the tensioning of the cables. A bicycle bridge has been constructed at TU Eindhoven by printing segments that are post-tensioned using high-strength cables running perpendicular to the printing direction. The post-tensioning technology has a lot of potential as a reinforcement strategy in additively manufactured concrete systems.
Fiber reinforcement
The use of fibers in the mix has several advantages, as in the case of conventional concrete. The higher cement content and faster hydration rate requirements of printed concrete make it susceptible to shrinkage cracking and thermal stresses. The use of fibers (structural or non-structural) can counter these significantly. Fiber reinforcements are also useful in printing shell structures, as the tensile membrane action required to convert bending moment into axial force is possible only with tough, high-stiffness concrete. Fibers, when aligned, can provide this required higher toughness and stiffness. The flexural tensile strength is also improved with the addition of structural steel or PVA fibers. These properties make fiber-reinforced concrete a suitable material for printing formworks. The cohesiveness of concrete in the fresh state, which is crucial for printing, can be improved by using non-structural fibers such as polypropylene or basalt. The use of fiber reinforcement in 3D printing creates a much-needed segue into the field of ultra-high-performance concretes with enhanced strength and durability, crucial in aesthetic slender elements.
External anchor connectors
Anchor connectors are installed in truss elements with the aim of connecting them to similar units using exposed threaded bars. This reinforcing technique has the advantage of faster fabrication of lightweight units that can be arranged in a free-form manner on-site, depending on the requirement. The exposed reinforcement might face corrosion issues when installed in outdoor environments. Topologically optimised truss shapes, following a force-follows-form principle, can be created and used to save material and, in turn, construction costs. The anchors can be connected by both in-plane and out-of-plane threaded rebars to create elements beyond simple beams and arches.
Bamboo Reinforcement
Bamboo reinforcement, including bamboo wrapped in steel wires, has been proposed as reinforcement for traditional concrete elements as early as 2005, with recent studies suggesting possible applications in 3D-printed concrete. This technique has the advantage of producing potentially 50 times less carbon emissions than traditional steel reinforcement techniques. One drawback of this method is potential durability issues, as the organic nature of bamboo makes it vulnerable to pests and decomposition. Proper treatment of the material can circumvent this issue and can preserve the bamboo reinforcement for as long as 15 years.
Other less common reinforcement techniques
Interface ties and staples are sometimes used to improve the bonding between printed layers. Ladder wire is used to reinforce printed elements to improve horizontal bending resistance. Print stabilisers are used to prevent the elastic buckling of printed layers during the printing process. Welded/printed reinforcement is a technology being developed at TU Braunschweig, where the steel reinforcements are simultaneously printed using gas metal arc welding.
Hybrid solutions
Each reinforcement technology is usually more effective when used in conjunction with another reinforcing technology, leaving much scope for research and development. The mesh mould technology can be combined with SDC to produce highly automated elements faster. The printable fiber reinforced concrete (FRC) technology can be combined with most other reinforcement techniques seamlessly to produce a highly durable concrete structure. Fiber-reinforced concrete, when used to print formwork, has a higher resistance to hoop stresses owing to higher filament strengths. Meshes and bar cages are almost always combined in large-scale construction projects.
See also
Construction 3D printing
Applications of 3D printing
Reinforced concrete
Types of concrete
3D printing
Automation
References
3D printing
Building technology
Construction
Building materials | Reinforcement in concrete 3D printing | [
"Physics",
"Engineering"
] | 1,991 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
67,944,539 | https://en.wikipedia.org/wiki/Groundwater%20contamination%20by%20pharmaceuticals | Groundwater contamination by pharmaceuticals, which belong to the category of contaminants of emerging concern (CEC) or emerging organic pollutants (EOP), has been receiving increasing attention in the fields of environmental engineering, hydrology and hydrogeochemistry since the last decades of the twentieth century.
Pharmaceuticals are suspected to provoke long-term effects in aquatic ecosystems even at low concentration ranges (trace concentrations) because of their bioactive and chemically stable nature, which leads to recalcitrant behaviour in the aqueous compartments, a feature typically associated with the difficulty of degrading these compounds into innocuous molecules, similar to the behaviour exhibited by persistent organic pollutants. Furthermore, the continuous release of medical products into the water cycle raises concerns about bioaccumulation and biomagnification phenomena. As the vulnerability of groundwater systems is increasingly recognized even by the regulating authority (the European Medicines Agency, EMA), environmental risk assessment (ERA) procedures are required when pharmaceuticals are submitted for marketing authorization, and preventive actions are urged to preserve these environments.
In the last decades of the twentieth century, scientific research efforts have been fostered towards a deeper understanding of the interactions of groundwater transport and attenuation mechanisms with the chemical nature of polluting agents. Amongst the multiple mechanisms governing solute mobility in groundwater, biotransformation and biodegradation play a crucial role in determining the evolution of the system (as identified by the developing concentration fields) in the presence of organic compounds such as pharmaceuticals. Other processes that might impact the fate of pharmaceuticals in groundwater include classical advective-dispersive mass transfer, as well as geochemical reactions such as adsorption onto soils and dissolution / precipitation.
One major goal in the field of environmental protection and risk mitigation is the development of mathematical formulations yielding reliable predictions of the fate of pharmaceuticals in aquifer systems, eventually followed by an appropriate quantification of predictive uncertainty and estimation of the risks associated with this kind of contamination.
General problem
Pharmaceuticals represent a serious threat to aquifer systems because of their bioactive nature, which makes them capable of interacting directly with the living microorganisms residing therein and of yielding bioaccumulation and biomagnification phenomena. The occurrence of xenobiotics in groundwater has been proven to harm the delicate equilibria of aquatic ecosystems in several ways, such as promoting the growth of antibiotic-resistant bacteria or causing hormone-related sexual disruption in living organisms in surface waters. Considering the role of groundwater systems as major worldwide drinking water resources, the capability of pharmaceuticals to interact with human tissues also poses serious concerns in terms of human health. Indeed, the majority of pharmaceuticals do not degrade in groundwater, where they accumulate due to their continuous release into the environment. These compounds reach subsurface systems through different sources, such as hospital effluents, wastewaters and landfill leachates, which clearly risk contaminating drinking water.
Most detected pharmaceutical classes
The main pharmaceutical classes detected in worldwide groundwater systems are listed below. The following categorisation is based on a medical perspective and it is often referred to as therapeutic classification.
Antibiotics
Estrogens and hormones
Anti-inflammatories and analgesics
Antiepileptics
Lipid regulators
Antihypertensives
Contrast media
Antidepressants
Antiulcer drugs and Antihistamines
Chemical aspects relevant to aquifer systems dynamics
The chemical structure of pharmaceuticals affects the type of hydro-geochemical processes that mainly impact their fate in groundwater, and it is strictly associated with their chemical properties. Therefore, a classification of pharmaceuticals based on chemical classes is a valid alternative for understanding the role of molecular structures in determining the kind of physical and geochemical processes affecting their mobility in porous media.
With regard to the occurrence of medical drugs in subsurface aquatic systems, the following chemical properties are of major interest:
Solubility in the aqueous phase
The solubility of pharmaceuticals in water affects the mobility of these compounds within aquifers. This feature depends on the polarity of the pharmaceutical, as polar substances are typically hydrophilic, thereby showing a marked tendency to dissolve in the aqueous phase, where they become solutes. This aspect impacts the dissolution / precipitation equilibrium, a phenomenon that is mathematically described in terms of the substance's solubility product (addressed in many books with the notation $K_{sp}$).
Lipophilicity, often measured through the so-called octanol-water partition coefficient (typically addressed as $K_{ow}$)
Large $K_{ow}$ values outline the non-polar character of the chemical species, which instead shows particular affinity for organic solvents. Therefore, lipophilic pharmaceuticals are markedly subject to the risk of bioaccumulating and biomagnifying in the environment, consistent with their preferential partition into the organic tissues of living organisms. Pharmaceuticals with sufficiently large $K_{ow}$ are in fact subjected to specific tiers in the environmental risk assessment (ERA) procedure (to be supplied with the marketing authorisation application) and are highlighted as potential sources of bioaccumulation and biomagnification according to the EMA guidelines. Lipophilic compounds are then insoluble in water, where they persist as a phase separated from the aqueous one. This renders their mobility in groundwater basically decoupled from dissolution / precipitation mechanisms and attributed to mean flow transport (advection and dispersion) and soil-mediated mechanisms of reaction (adsorption).
Affinity of sorption onto the soils
This feature is expressed in terms of the so-called organic carbon-water partition coefficient, usually referred to as $K_{oc}$, which is an intrinsic property of the molecule.
Acidic character
The behaviour of molecules in relation to aqueous dissociation reactions is typically related to their acid dissociation constants, which are typically outlined in terms of their $pK_a$ coefficients.
Affinity to redox reactions, even in the context of bacterially-mediated metabolic pathways
The molecular structure of xenobiotics typically outlines the existence of several possible reaction pathways, which are embedded in complex reaction networks and are typically referred to as transformation processes. With reference to organic compounds, such as pharmaceuticals, innumerable kinds of chemical reactions exist, most of them involving common chemical mechanisms, such as functional groups elimination, addition and substitution. These processes often involve further redox reactions accomplished on the substrates, which are here represented by pharmaceutical solutes and, eventually, their transformation products and metabolites. These processes can be then classified as either biotic or abiotic, depending on the presence or absence of bacterial communities acting as reaction mediators. In the former case, these transformation pathways are typically addressed as biodegradation or biotransformation in the hydrogeochemical literature, depending on the extent of cleavage of the parent molecule into highly oxidized, innocuous species.
Transport and attenuation processes
The fate of pharmaceuticals in groundwater is governed by different processes. The reference theoretical framework is that of reactive solute transport in porous media at the continuum scale, that is typically interpreted through the advective-dispersive-reactive equation (ADRE). With reference to the saturated region of the aquifer, the ADRE is written as:
$\frac{\partial (\phi C)}{\partial t} = \nabla \cdot \left( \phi \mathbf{D} \nabla C \right) - \nabla \cdot \left( \phi \mathbf{v}\, C \right) + R$

where $\phi$ represents the effective porosity of the medium, and $\mathbf{x}$ and $t$ represent, respectively, the spatial coordinates vector and the time coordinate. $\nabla \cdot$ represents the divergence operator, except for when it applies to $C$, where the nabla symbol stands for the gradient of $C$. The term $C(\mathbf{x},t)$ denotes the pharmaceutical solute concentration field in the water phase (for unsaturated regions of the aquifer, the ADRE has a similar shape, but it includes additional terms accounting for volumetric contents and contaminant concentrations in phases other than water), while $\mathbf{v}$ represents the velocity field. $\mathbf{D}$ is the hydrodynamic dispersion tensor and is typically a function of the sole variable $\mathbf{v}$. Lastly, the storage term $R$ includes the accumulation or removal contribution due to all possible reactive processes in the system, i.e., adsorption, dissolution / precipitation, acid dissociation and other transformation reactions, such as biodegradation.
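As a purely illustrative complement, a one-dimensional explicit finite-difference discretization of this equation might look as follows; the constant coefficients, the first-order decay chosen as the reactive term, and all grid and parameter values are assumptions, not taken from the article:

```python
# Explicit 1D advection-dispersion-reaction solver:
# dC/dt = D d2C/dx2 - v dC/dx - k C, with upwind advection.
import numpy as np

L, nx, nt = 100.0, 200, 5000          # domain length [m], grid, time steps (assumed)
dx, dt = L / nx, 0.01                 # grid spacing [m], time step [d] (assumed)
v, D, k = 1.0, 0.5, 0.01              # seepage velocity, dispersion, decay (assumed)

C = np.zeros(nx)
C[0] = 1.0                            # constant-concentration inlet boundary

for _ in range(nt):
    adv = -v * (C[1:-1] - C[:-2]) / dx                   # upwind advection
    disp = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2    # hydrodynamic dispersion
    C[1:-1] += dt * (adv + disp - k * C[1:-1])           # reaction: first-order decay
    C[0], C[-1] = 1.0, C[-2]                             # inlet / zero-gradient outlet

print(C[:10])  # developing concentration profile near the inlet
```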
The main hydrological transport processes driving pharmaceuticals and organic contaminants migration in aquifer systems are:
Advection
Hydrodynamic dispersion
The most influential geochemical processes, also referred to as reactive processes and whose effect is embedded in the term of the ADRE, include:
Adsorption onto soil
Dissolution and precipitation
Acid dissociation and aqueous complexation
Biodegradation, biotransformation and other transformation pathways
Advection
Advective transport accounts for the contribution of solute mass transfer across the system that originates from bulk flow motion. At the continuum scale of analysis, the system is interpreted as a continuous medium rather than a collection of solid particles (grains) and empty spaces (pores) through which the fluid can flow. In this context, an average flow velocity can typically be estimated, which arises from upscaling the pore-scale velocities. Here, the fluid flow conditions ensure the validity of Darcy's law, which governs the system evolution in terms of an average fluid velocity, typically referred to as the seepage or advective velocity. Dissolved pharmaceuticals in groundwater are transferred within the domain along with the mean fluid flow, in agreement with the physical principles governing any other solute migration across the system.
Hydrodynamic dispersion
Hydrodynamic dispersion identifies a process that arises as the sum of two separate effects. First, it is associated with molecular diffusion, a phenomenon appreciated at the macroscale as a consequence of microscale Brownian motions. Secondly, it includes a contribution (called mechanical dispersion) arising as an effect of upscaling the fluid-dynamic transport problem from the pore to the continuum scale of investigation, due to the upscaling of locally inhomogeneous velocities. The latter contribution is therefore not related to the occurrence of any physical process at the pore scale, but is only a fictitious consequence of the choice of modelling scale. Hydrodynamic dispersion is then embedded in the advective-dispersive-reactive equation (ADRE) assuming a Fickian closure model. Dispersion is felt at the macroscale as being responsible for a spreading effect of the contaminant plume around its center of mass.
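In one dimension, these two contributions are commonly combined into a single longitudinal coefficient, $D = \alpha_L v + D_m$; a minimal sketch with assumed dispersivity and molecular-diffusion values (both placeholders, not from the article):

```python
# Longitudinal hydrodynamic dispersion: D = alpha_L * v + D_m,
# i.e. mechanical dispersion plus effective molecular diffusion.
def hydrodynamic_dispersion(v, alpha_L=0.5, D_m=1e-9):
    """v: seepage velocity [m/s]; alpha_L: longitudinal dispersivity [m] (assumed);
    D_m: effective molecular diffusion coefficient [m^2/s] (assumed)."""
    return alpha_L * abs(v) + D_m

print(hydrodynamic_dispersion(1e-5))  # here dominated by the mechanical term
```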
Adsorption onto soil
Sorption identifies a heterogeneous reaction that is often driven by instantaneous thermochemical equilibrium. It describes the process by which a certain mass of solute dissolved in the aqueous phase adheres to a solid phase (such as the organic fraction of soil in the case of organic compounds), thereby being removed from the liquid phase. In hydrogeochemistry, this phenomenon has been proven to cause a delayed effect in solute mobility with respect to the case in which solely advection and dispersion occur in the aquifer. For pharmaceuticals, it can typically be interpreted using a linear adsorption model at equilibrium, which is fully applicable at low concentration ranges. The latter model relies upon the assessment of a linear partition coefficient, usually denoted as $K_d$, which depends, for organic compounds, on both the organic carbon-water partition coefficient $K_{oc}$ and the organic carbon fraction in the soil, $f_{oc}$. While the former term is an intrinsic chemical property of the molecule, the latter instead depends on the soil properties of the analyzed aquifer.
Sorption of trace elements like pharmaceuticals in groundwater is interpreted through the following linear isotherm model:
$C_s = K_d \, C$,

where $C_s$ identifies the adsorbed concentration on the solid phase and $K_d = K_{oc} f_{oc}$.
The neutral form of organic molecules dissolved in water is typically the sole contributor to sorptive mechanisms, which become more important as the soils become richer in organic carbon. Anionic forms are instead insensitive to sorptive mechanisms, while cations can undergo adsorption only under very particular conditions.
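The delay induced by linear sorption is usually summarized by a retardation factor, $R = 1 + (\rho_b/\phi) K_d$. A small numerical sketch follows; the bulk density, porosity, and partition values are illustrative assumptions, not from the article:

```python
# Retardation factor for linear equilibrium sorption:
# R = 1 + (rho_b / phi) * K_d, with K_d = K_oc * f_oc for organic solutes.
def retardation_factor(K_oc, f_oc, rho_b=1.6e3, phi=0.3):
    """K_oc: organic carbon-water partition coefficient [m^3/kg] (assumed units);
    f_oc: organic carbon fraction [-]; rho_b: bulk density [kg/m^3] (assumed);
    phi: effective porosity [-] (assumed)."""
    K_d = K_oc * f_oc
    return 1.0 + (rho_b / phi) * K_d

# Example with illustrative values for a weakly sorbing pharmaceutical:
print(retardation_factor(K_oc=1e-4, f_oc=0.01))  # close to 1: nearly unretarded
```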
Dissolution and precipitation
Dissolution represents the heterogeneous reaction during which a solid compound, such as an organic salt in the case of pharmaceuticals, gets dissolved into the aqueous phase. Here, the original salt appears in the form of both aqueous cations and anions, depending on the stoichiometry of the dissolution reaction. Precipitation represents the reverse reaction. This process is typically accomplished at thermochemical equilibrium, but in some applications of hydrogeochemical modelling it might be required to consider its kinetics. As an example for the case of pharmaceuticals, the non-steroidal anti-inflammatory drug diclofenac, which is commercialised as sodium diclofenac, undergoes this process in groundwater environments.
Acid dissociation and aqueous complexation
Acid dissociation is a homogeneous reaction that yields dissociation of a dissolved acid (in the water phase) into cationic and anionic forms, while aqueous complexation denotes its reverse process. The aqueous speciation of a solution is determined on the basis of the $pK_a$ coefficient, which typically ranges between 3 and 50 (approximately) for organic compounds such as pharmaceuticals. Since the latter are weak acids, and considering that this process is always accomplished upon instantaneous achievement of thermochemical equilibrium conditions, it is reasonable to assume that the undissociated form of the original contaminant is predominant in the water speciation for most practical cases in the field of hydrogeochemistry.
Biodegradation, biotransformation and other transformation pathways
Pharmaceuticals can undergo biotransformation or transformation processes in groundwater systems.
Aquifers are indeed rich reserves of minerals and other dissolved chemical species, such as organic matter, dissolved oxygen, nitrates, iron and manganese compounds, sulfates, etc., as well as dissolved cations such as calcium, magnesium and sodium. All of these compounds interact through complex reaction networks embedding reactive processes of different natures, such as carbonate precipitation / dissolution, acid–base reactions, sorption and redox reactions. With reference to the latter kind of processes, several pathways are typically possible in aquifers because the environment is often rich in both reducing agents (like organic matter) and oxidizing agents (like dissolved oxygen, nitrates, iron and manganese oxides, sulfates, etc.). Pharmaceuticals can act as substrates in this scenario, i.e., they can represent either the reducing or the oxidizing agent in the context of redox processes. In fact, most chemical reactions involving organic molecules are typically accomplished upon gain or loss of electrons, so that the oxidation state of the molecule changes along the reactive pathway. In this context, the aquifer acts as a "chemical reactor".
There are innumerable kinds of chemical reactions that pharmaceuticals can undergo in this environment, which depend on the availability of other reactants, pH and other environmental conditions, but all of these processes typically share common mechanisms. The main ones involve addition, elimination or substitution of functional groups. The mechanism of reaction is important in the field of hydrogeochemical modeling of aquifer systems because all of these reactions are typically governed by kinetic laws. Therefore, recognizing the correct molecular mechanisms through which a chemical reaction progresses is fundamental to the purpose of modelling the reaction rates correctly (for example, it is often possible to identify a rate limiting step within multistep reactions and relate the rate of reaction progress to that particular step). Modelling these reactions typically follows the classic kinetic laws, except for the case in which reactions involving the contaminant are accomplished in the context of bacterial metabolism. While in the former case the ensemble of reactions is addressed as transformation pathway, in the latter one the terms biodegradation or biotransformation are used, depending on the extent to which the chemical reactions effectively degrade the original organic molecule to innocuous compounds in their maximum oxidation state (i.e., carbon dioxide, methane and water). In case of biologically mediated pathways of reaction, which are relevant in the study of groundwater contamination by pharmaceuticals, there are appropriate kinetic laws that can be employed to model these processes in hydrogeochemical contexts. For example, the Monod and Michaelis-Menten equations are suitable options in case of biotic transformation processes involving organic compounds (such as pharmaceuticals) as substrates.
Although most of the hydrogeochemical literature addresses these processes through linear biodegradation models, several studies have been carried out since the second decade of the twenty-first century, as linear models are typically too simplified to ensure reliable predictions of the fate of pharmaceuticals in groundwater and might bias risk estimates in environmental risk mitigation applications.
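As an illustration of the kinetic laws mentioned above, a minimal sketch integrating a Monod-type degradation rate in time; the kinetic constants are assumed placeholders, and biomass growth is neglected:

```python
# Monod / Michaelis-Menten degradation of a substrate concentration C:
# dC/dt = -mu_max * C / (K_s + C), reducing to first-order decay for C << K_s.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s = 0.5, 2.0        # maximum rate [1/d], half-saturation [mg/L] (assumed)

def monod(t, C):
    return -mu_max * C / (K_s + C)

sol = solve_ivp(monod, (0.0, 30.0), [10.0], t_eval=np.linspace(0, 30, 7))
print(sol.y[0])               # substrate concentration decaying over time
```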
Hydrologic and geochemical modelling approaches
Groundwater contamination by pharmaceuticals is a topic of great interest in the field of the environmental and hydraulic engineering, where most research efforts have been fostered towards studies on this kind of contaminants since the beginning of the twenty-first century. The general goal of those disciplines is that of developing interpretive models capable to predict the behaviour of aquifer systems in relation to the occurrence of various types of contaminants, among which are included also medical drugs. Such goal is motivated by the necessity to provide mathematical tools to predict, for example, how contaminants concentration fields develop across the aquifer along time. This may provide useful information to support decision-making processes in the context of environmental risk assessment procedures. To this purpose, several interdisciplinary strategies and tools are typically employed, the most fundamental ones being listed below:
Numerical modelling strategies are employed to simulate hydrogeochemical transport models. Examples of commonly used software packages are MODFLOW and PHREEQC, but many others are available.
Statistical inference tools are used to calibrate available hydrogeochemical models against raw data. A widely employed software is, for example, PEST.
Knowledge of organic chemistry is a fundamental prerequisite for developing geochemical models to be fit against data.
Laboratory or field scale experiments are designed to obtain raw data, which are necessary to study the behaviour of aquifer systems under exposure to compounds of concern.
All of these interdisciplinary tools and strategies are contemporarily employed to analyse the fate of pharmaceuticals in groundwater.
See also
Groundwater pollution
Environmental impact of pharmaceuticals and personal care products
Reactive transport modeling in porous media
Computer simulation
References
Natural resources
Aquifers
Environmental science
Water chemistry
Water pollution
Environmental issues with water
Drug manufacturing | Groundwater contamination by pharmaceuticals | [
"Chemistry",
"Environmental_science"
] | 3,780 | [
"Hydrology",
"Aquifers",
"nan",
"Water pollution"
] |
67,944,546 | https://en.wikipedia.org/wiki/Tantalum%20diselenide | Tantalum diselenide is a compound made with tantalum and selenium atoms, with chemical formula TaSe2, which belongs to the family of transition metal dichalcogenides. In contrast to molybdenum disulfide (MoS2) or rhenium disulfide (ReS2), tantalum diselenide does not occur spontaneously in nature, but it can be synthesized. Depending on the growth parameters, different types of crystal structures can be stabilized.
In the 2010s, interest in this compound has risen due to its ability to show a charge density wave (CDW), which depends on the crystal structure, up to , while other transition metal dichalcogenides normally need to be cooled down to hundreds of kelvins or even below to observe the same capability.
Structure
As with other TMDs, TaSe2 is a layered compound, with a central tantalum hexagonal lattice sandwiched between two layers of selenium atoms, also with a hexagonal structure. Differently from other 2D materials such as graphene, which is atomically thin, TMDs are composed of trilayers of atoms strongly bonded to each other, stacked above other trilayers and held together by Van der Waals forces. TMDs can be easily exfoliated.
The most studied crystal structures of TaSe2 are the 1T and 2H phases that feature, respectively, octahedral and trigonal prismatic symmetries. However, it is also possible to synthesize the 3R phase or the 1H phase.
1T phase
In the 1T phase, selenium atoms show an octahedral symmetry and the relative orientation of the selenium atoms in the topmost and bottommost layers is opposed. On a macroscopic scale, the sample shows a gold colour. The lattice parameters are a = b = 3.48 Å, while c = 6.27 Å.
Depending on the temperature, it shows different types of charge density waves (CDW): an incommensurate CDW (ICDW) between and a commensurate CDW (CCDW) below . In the commensurate CDW, the resulting superlattice shows a $\sqrt{13} \times \sqrt{13}$ reconstruction, often referred to as the star of David (SOD), with respect to the lattice parameter (a = b) of non-distorted TaSe2 (above ). Film thickness can also influence the CDW transition temperature: the thinner the film, the lower the transition temperature from ICDW to CCDW.
In the 1T phase, the single trilayers are always stacked in the same geometry, as shown in the corresponding image.
2H phase
The 2H phase is based on a configuration of selenium atoms characterized by a trigonal prismatic symmetry and an equal relative orientation in the topmost and bottommost layers. The lattice parameters are a = b = 3.43 Å, while c = 12.7 Å. Depending on the temperature, it shows different types of charge density wave: an incommensurate CDW (ICDW) between and a commensurate CDW (CCDW) below . The lattice distortion below gives rise to a CCDW that forms a 3 × 3 reconstruction with respect to the non-distorted lattice parameter (a = b) of 2H-TaSe2 (above ).
In the 2H phase, the single trilayers are stacked opposed to one another, as shown in the relative image. Through molecular beam epitaxy it is possible to grow a single trilayer of 2H-TaSe2, also known as the 1H phase. Basically, the 2H phase can be seen as the stacking of 1H phases with opposed relative orientations with respect to each other.
In the 1H phase the ICDW transition temperature is raised to .
Properties
Electric and Magnetic
TaSe2 exhibits different properties according to the polytype (2H or 1T), even though the chemical composition remains unchanged.
1T phase
The resistivity at low temperature is similar to that of a metal, but it starts decreasing at higher temperatures. A peak is exhibited at approximately , which resembles the behavior of semiconductors. The 1T phase has almost two orders of magnitude higher resistivity than the 2H phase.
The magnetic susceptibility of the 1T phase has no peaks at low temperature and remains nearly constant until is reached (the ICDW transition temperature), when it jumps to slightly higher values. The 1T phase is diamagnetic.
2H phase
Resistivity depends linearly on the temperature when the latter exceeds . Conversely, below this threshold it shows a non-linear behaviour. This abrupt variation of R(T) at might be related to the formation of some kind of magnetic ordering in TaSe2: ordered spins scatter electrons less efficiently. This increases electron mobility and yields a faster drop in resistivity than that ideally corresponding to a linear trend.
The magnetic susceptibility of the 2H polytype depends slightly on the temperature and peaks in the range . The trend is linearly ascending or descending below and above , respectively. This maximum in the 2H phase is related to the formation of the CCDW at . The 2H phase is Pauli paramagnetic.
The Hall coefficient RH is almost independent of the temperature above , a threshold below which it instead starts to drop, eventually reaching a value of zero at . In the range between , the coefficient RH is negative, with its minimum at approximately .
Electronic
1T phase
Bulk 1T-TaSe2 is metallic, while a single monolayer (a Se–Ta–Se trilayer in octahedral symmetry) is observed to be insulating, with a band gap of 0.2 eV, in contrast with theoretical calculations, which expected it to be metallic like the bulk.
2H phase
Bulk 2H-TaSe2 is metallic, and so is the single monolayer (a Se–Ta–Se trilayer in trigonal prismatic symmetry), which is also known as the 1H phase.
Optical
Investigating the non-linear refractive index of tantalum diselenide can be pursued by preparing atomically thin flakes of TaSe2 with the liquid phase exfoliation method. Since this technique requires using alcohol, the refractive index of tantalum diselenide can be retrieved through Kerr's law, $n = n_0 + n_2 I$, where n0 = 1.37 represents the linear refractive index of ethanol, n2 is the non-linear refractive index of TaSe2 and $I$ is the incident intensity of the laser beam. Using different light wavelengths, in particular λ = 532 nm and λ = 671 nm, it is possible to measure both n2 and χ(3), the third-order nonlinear susceptibility.
Both these quantities depend on $I$ because the higher the intensity of the laser, the more the samples heat up, which results in a variation of the refractive index.
For λ = 532 nm, n2 = and χ(3) = (e.s.u.).
For λ = 671 nm, n2 = and χ(3) = (e.s.u.).
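A small sketch of the intensity dependence implied by Kerr's law; note that the n2 value used below is a purely hypothetical placeholder, since the measured figures are not reproduced here:

```python
# Intensity-dependent refractive index from Kerr's law: n = n0 + n2 * I.
def kerr_index(I, n0=1.37, n2=1e-7):
    """I: laser intensity [W/cm^2]; n0: linear index of ethanol (from the text);
    n2: nonlinear index [cm^2/W] -- hypothetical placeholder value."""
    return n0 + n2 * I

for I in (1e3, 1e4, 1e5):
    print(I, kerr_index(I))   # the effective index rises linearly with intensity
```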
Superconductivity
Bulk 2H-TaSe2 has been demonstrated to be superconductive below a temperature of . However, the single monolayer (1H phase) can be associated with a critical temperature increased by an increment that can range up to .
Although the 1T phase typically does not show any superconductive behaviour, the formation of the TaSe2−xTex compound is possible through doping with tellurium atoms. The superconductive character of this compound depends on the fraction of tellurium (x can vary in the range ). The superconductive state arises when the fraction of Te ranges within : the optimal configuration is achieved at x = 0.6, in correspondence with a critical temperature Tc = . In the optimal configuration, the CDW is totally suppressed by the presence of tellurium.
Lubricant
In contrast to MoS2, which is widely employed as a lubricant in many different mechanical applications, TaSe2 has not shown the same properties, with an average friction coefficient of 0.15. Under friction tests, like the Barker pendulum, it shows an initial friction coefficient of 0.2 to 0.3, which quickly increases to larger values as the number of oscillations of the pendulum increases (while for MoS2 it remains almost constant during all the oscillations).
Synthesis
There are different methods to synthesize tantalum diselenide: depending on the growth parameters, different polytypes can be stabilized.
Chemical Vapor Transport
In general, TMDs can be synthesized through a chemical vapor transport technique, according to the following chemical equation:
M + MCl5 + 2 X → MX2 + Cl2
where M is the chosen transition metal (Ta, Mo, etc.) and X represents the chosen chalcogen element (Se, Te, S). The parameter n, which governs the crystal growth, can vary between 3 and 50 and can be selected so that the crystal growth is optimized. During the growth, which might last from 2 to 7 days, the temperature is initially increased within a range between Th = . Then, it is cooled down to Tc = . After the growth is complete, the crystals are cooled down to room temperature. Depending on the value of Tc, either the 2H or the 1T phase can be stabilized: in particular, using tantalum and selenium with Tc < , only the 2H phase is stabilized. For the 1T phase, Tc must be larger. This allows the desired phase of the chosen TMD to be grown selectively.
Chemical vapor deposition
Using powder of TaCl5 and selenium as precursors, and a gold substrate, the 2H phase can be stabilized. The gold substrate has to be heated up to , while TaCl5 and Se can be heated to and , respectively. Argon and hydrogen gases are used as carriers. Once the growth is complete, the sample is cooled down to room temperature.
Mechanical exfoliation
Since the single trilayers are held together only by weak Van der Waals forces, atomically thin layers of tantalum diselenide can easily be separated by using scotch/carbon tape on bulk TaSe2 crystals. With this method it is possible to isolate a few layers (or even a single layer) of TaSe2. The isolated layers can then be deposited on other substrates, such as SiO2, for further characterization.
Molecular Beam Epitaxy
Pure tantalum is directly sublimated onto a bilayer of graphene in a selenium atmosphere. Depending on the temperature of the substrate Ts (the graphene bilayer), the 1T or the 2H phase can be stabilized: in particular, if Ts = the 2H phase is favoured, while at Ts = the 1T phase is stabilized. This growth method is suitable only for atomically thin/few layers, but not for bulk crystals.
Liquid Phase Exfoliation
Bulk crystals of TaSe2 (or any other TMDs) are put in a solution of pure ethanol. The mixture is then sonicated in an ultrasonic device with a power of at least 450 W for 15 hours. In this way it is possible to overcome the Van der Waals forces that keep the single monolayers of TaSe2 together, resulting in the formation of atomically thin flakes of tantalum diselenide.
Research
Optoelectronics
Since 2H TaSe2 has been found to feature very large optical absorption and emission of light at approximately 532 nm, it might be used for the development of new devices. In particular, the possibility of transferring energy between TaSe2 and other TMDs, especially MoS2, has been proved. This process can be accomplished in a non-radiative resonant way by exploiting the large coupling between the TaSe2 emission and the excitonic absorption of TMDs.
Moreover, it is a promising material that may be used for the injection of hot carriers into semiconducting materials and other non-metallic TMDs, due to the long lifetime of the generated photoelectrons.
All-optical switch and transferring of information
Exploiting the dependence of the non linear effects of TaSe2 by the intensity of the incident laser beam, it is possible to build an all-optical switch by means of two lasers which operate at different wavelengths and intensities. In particular, a high-intensity laser at λ2 = 671 nm is used to modulate a low-intensity signal at λ2 = 532 nm. Since there is a minimum value of in order to trigger the non-linear effects, the low intensity signal cannot excite alone. On the contrary, when the high-intensity beam (λ1) is coupled with the low intensity signal (λ2), non-linear effects at both λ1 and λ2 arise. So, it is possible to trigger the non-linear effects on the low-intensity signal (λ2) by operating on the high-intensity one (λ1).
Exploiting the coupling between λ1 and λ2 enables transferring information from the high-intensity beam to the low-intensity one. With this method, the delay time for transferring the information from λ1 to λ2 is around 0.6 seconds.
Spin-orbit torque devices
Usually spin-orbit torque and spin-to-charge devices are built by interfacing a ferromagnetic layer with a bulk heavy transition metal, such as platinum. However, these effects take place mainly at the interface rather than in the platinum bulk, which introduces heat dissipation due to ohmic losses. Theoretical and DFT simulations suggest that interfacing a 1T-TaSe2 monolayer with cobalt might lead to higher performance than the usual platinum-based devices.
Recent experiments showed that the spin-orbit scattering length of TaSe2 is around Lso = 17 nm, which is comparable with that of platinum, Lso = 12 nm. This suggests the possible implementation of tantalum diselenide in new 2D spintronic devices based on the spin Hall effect.
Hydrogen evolution reaction (HER)
DFT and AIMD simulations suggest that stacking flakes of both TaSe2 and TaS2 in a disordered way could lead to a new, efficient and cheaper cathode for the extraction of H2 from other chemical compounds.
See also
Molybdenum disulfide
Molybdenum diselenide
Molybdenum ditelluride
Rhenium diselenide
Rhenium disulfide
Tungsten diselenide
References
Transition metal dichalcogenides
Selenides
Monolayers
Tantalum compounds
Nanomaterials | Tantalum diselenide | [
"Physics",
"Materials_science"
] | 3,087 | [
"Monolayers",
"Nanotechnology",
"Nanomaterials",
"Atoms",
"Matter"
] |
67,944,641 | https://en.wikipedia.org/wiki/Continuum%20robot | A continuum robot is a type of robot characterised by an infinite number of degrees of freedom and joints. These characteristics allow continuum manipulators to adjust and modify their shape at any point along their length, enabling them to work in confined spaces and complex environments where standard rigid-link robots cannot operate. In particular, a continuum robot can be defined as an actuatable structure whose constitutive material forms curves with continuous tangent vectors. This is a fundamental definition that distinguishes continuum robots from snake-arm robots or hyper-redundant manipulators: the presence of rigid links and joints means that the latter can only approximate curves with continuous tangent vectors.
The design of continuum robots is bioinspired, as the intent is to resemble biological trunks, snakes and tentacles. Several concepts of continuum robots have been commercialised and can be found in many different domains of application, ranging from the medical field to undersea exploration.
Classification
Continuum robots can be categorised according to two main criteria: structure and actuation.
Structure
The main characteristic of the design of continuum robots is the presence of a continuously curving core structure, named backbone, whose shape can be actuated. The backbone must also be compliant, meaning that the backbone yields smoothly to external loads.
According to the design principles chosen for the continuum manipulator, we can distinguish between:
single-backbone: these continuum manipulators have one central elastic backbone through which actuation/transmission elements can run.
multi-backbone: the structure of these continuum robots has two or more elastic elements (either rods or tubes) parallel to each other and constrained with one another in some way.
concentric-tube: the backbone is made of concentric tubes that are free to rotate and translate between each other, depending on the actuation happening at the base of the robot.
Actuation
The actuation strategy of continuum manipulators can be distinguished between extrinsic or intrinsic actuation, depending on where the actuation happens:
extrinsic actuation: the actuation happens outside the main structure of the robot and the forces are transmitted via mechanical transmission; among these techniques, there are cable/tendon driven actuators and multi-backbone strategies.
intrinsic actuation: the actuation mechanism operates within the structure of the robot; these strategies include pneumatic or hydraulic chambers and the shape memory effect.
Advantages
The particular design of continuum robots offers several advantages with respect to rigid-link robots. First of all, as already noted, continuum robots can more easily operate in environments that require a high level of dexterity, adaptability and flexibility. Moreover, the simplicity of their structure makes continuum robots more amenable to miniaturisation. The rise of continuum robots has also paved the way for the development of soft continuum manipulators. These continuum manipulators are made of highly compliant materials that are flexible and can adapt and deform according to the surrounding environment. The "softness" of their material grants higher safety in human-robot interactions.
Disadvantages
The particular design of continuum robots also introduces many challenges. To use continuum robots properly and safely, it is crucial to have an accurate force and shape sensing system. Traditionally, this is done using cameras, which are not suitable for some applications of continuum robots (e.g. minimally invasive surgery), or using electromagnetic sensors, which are however disturbed by the presence of magnetic objects in the environment. To solve this issue, in recent years fiber-Bragg-grating sensors have been proposed as a possible alternative and have shown promising results. It should also be noted that while the mechanical properties of rigid-link robots are fully understood, the behaviour and properties of continuum robots are still a subject of study and debate. This poses new challenges in developing accurate models and control algorithms for this kind of robot.
Modelling
Creating an accurate model that can predict the shape of a continuum robot makes it possible to control the robot's shape properly. There are three main approaches to modelling continuum robots:
Cosserat rod theory: this approach is an exact solution to the statics of a continuum robot, as it is not subject to any simplifying assumption. It solves a set of equilibrium equations relating position, orientation, internal force and torque along the robot. This method must be solved numerically and is therefore computationally expensive, due to its high complexity.
Constant curvature: this technique assumes the backbone to be made of a series of mutually tangent sections that can be approximated as arcs with constant curvature, and is also known as piecewise constant curvature (see the sketch after this list). The assumption can be applied to the entire backbone or to its subsegments. This model has shown promising results; however, the segments/subsegments of the backbone may not comply with the constant-curvature assumption, and therefore the model's behaviour may not entirely reflect the behaviour of the robot.
Rigid-link model: this approach is based on the assumption that the continuum robot can be divided into small segments with rigid links. This is a strong assumption: if the number of segments is too low, the model hardly behaves like the continuum robot, while increasing the number of segments increases the number of variables, and thus the complexity. Despite this limitation, rigid-link modelling allows the use of the standard control techniques that are well known for rigid-link robots. It has been proven that this model can be coupled with shape and force sensing to mitigate its inaccuracy and can lead to promising results.
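As an illustration of the piecewise constant-curvature idea, the sketch below maps one segment's curvature κ, bending-plane angle φ and arc length ℓ to the homogeneous transform of its tip, a standard geometric result for constant-curvature arcs; the segment parameters in the usage example are arbitrary and not taken from a specific robot:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def constant_curvature_transform(kappa, phi, length):
    """Tip pose of one constant-curvature segment.

    kappa: curvature [1/m]; phi: bending-plane angle [rad];
    length: arc length [m].
    """
    theta = kappa * length  # total bending angle of the arc
    if abs(kappa) < 1e-9:   # straight-segment limit
        p = np.array([0.0, 0.0, length])
        R = np.eye(3)
    else:
        p = np.array([(1 - np.cos(theta)) / kappa, 0.0, np.sin(theta) / kappa])
        R = rot_y(theta)
    T = np.eye(4)
    T[:3, :3] = rot_z(phi) @ R @ rot_z(-phi)
    T[:3, 3] = rot_z(phi) @ p
    return T

# Chain two segments (arbitrary parameters) to get the tip pose of the robot.
T_tip = constant_curvature_transform(5.0, 0.0, 0.1) @ \
        constant_curvature_transform(8.0, np.pi / 2, 0.08)
print(np.round(T_tip, 3))
```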
Sensing
To develop accurate control algorithms, it is necessary to complement the presented modelling techniques with real time shape sensing. The following options are currently available:
Electromagnetic (EM) sensing: the shape is reconstructed thanks to the mutual induction between a magnetic field generator and a magnetic field sensor. The most common external EM tracking system is the commercially available NDI Aurora: small sensors can be placed on the robot and their position is tracked in an externally generated magnetic field. The validity of this method has been extensively assessed; however, its performance is hindered by the limited workspace, whose dimension depends on the magnetic field. Another alternative is to embed the sensors internally in the continuum robot, combining magnets with Hall effect sensors: the magnetic field is measured at the level of the Hall effect sensors in order to estimate the deflection of the robot. However, it has been noticed that the higher the bending of the manipulator, the higher the estimation error, due to crosstalk between sensors and magnets.
Optical sensing: fiber Bragg grating sensors incorporated in an optical fiber can be embedded into the backbone of the continuum robot to estimate its shape. These sensors reflect only a small range of the input light spectrum, depending on their strain; therefore, by measuring the strain at each sensor it is possible to reconstruct the shape of the robot (see the sketch after this list). This type of sensor is however expensive and more prone to breaking in case of excessive strain, which can occur in robots that perform large deflections.
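A minimal sketch of the strain readout behind FBG-based shape sensing: to first order, the relative shift of the reflected Bragg wavelength is proportional to the axial strain through the effective photo-elastic coefficient. The numbers below (p_e ≈ 0.22 for silica fiber, a 1550 nm nominal Bragg wavelength) are typical textbook values, not parameters of a specific sensor:

```python
def strain_from_bragg_shift(lambda_measured_nm, lambda_0_nm=1550.0, p_e=0.22):
    """Axial strain from the Bragg wavelength shift (temperature ignored).

    First-order FBG relation: d(lambda) / lambda_0 = (1 - p_e) * strain.
    """
    d_lambda = lambda_measured_nm - lambda_0_nm
    return d_lambda / (lambda_0_nm * (1.0 - p_e))

# A 1.2 nm shift on a 1550 nm grating corresponds to ~1000 microstrain.
print(f"{strain_from_bragg_shift(1551.2) * 1e6:.0f} microstrain")
```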
Control strategies
Control strategies can be divided into static and dynamic; the former are based on the steady-state assumption, while the latter also consider the dynamic behaviour of the continuum robot. One can also differentiate between model-based controllers, which depend on a model of the robot, and model-free controllers, which learn the robot's behaviour from data.
Model-based static controllers: they rely on one of the modelling approaches presented above; once the model is defined, the kinematics must be inverted to obtain the desired actuator or configuration space variables. There are several ways to do this, such as differential inverse kinematics (see the sketch after this list), direct inversion or optimization.
Model-free static controllers: these approaches learn directly, via machine learning techniques (e.g. regression methods and neural networks), the inverse or direct kinematic representation of the continuum robot from collected data; they are also known as data-driven methods. Even though these controllers have the advantage of not requiring an accurate model of the continuum robot, they perform worse than their model-based counterparts.
Model-based dynamic controllers: they need the formulation of the kinematic model and an associated dynamic formulation. They are currently at an early stage, as they require high computational power and high-dimensional sensory feedback. With improvements in computational power and sensing capabilities they could be crucial in industrial applications of continuum robots, where time and cost are relevant along with accuracy.
Model-free dynamic controllers: they are still a relatively unexplored approach. Some works that propose machine learning techniques to learn the dynamic behaviour of continuum robots have been presented, but their performance is limited by high training time and instability of the machine learning model.
Hybrid approaches, which combine model-free and model-based controllers, can also represent a valid alternative.
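As a sketch of the differential inverse kinematics mentioned above, a damped least-squares update maps a desired tip correction to configuration-variable increments, which is one common way to invert the kinematics near singular configurations; the 2x2 Jacobian below is a made-up example, not derived from a specific robot model:

```python
import numpy as np

def dls_ik_step(jacobian, tip_error, damping=0.01):
    """One damped least-squares step: dq = J^T (J J^T + lambda^2 I)^-1 e.

    The damping term keeps the update bounded near singularities.
    """
    JJt = jacobian @ jacobian.T
    reg = (damping ** 2) * np.eye(JJt.shape[0])
    return jacobian.T @ np.linalg.solve(JJt + reg, tip_error)

# Illustrative Jacobian relating (curvature, arc length) to tip (x, z) motion.
J = np.array([[0.02, 0.5],
              [-0.10, 0.9]])
error = np.array([0.001, -0.002])  # desired tip correction [m]
print(dls_ik_step(J, error))      # configuration-space increment
```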
Applications
Continuum robots have been applied in many different fields.
Medical
Continuum robots have been widely applied in the medical field, in particular for minimally invasive surgery. For example, Ion by Intuitive is a robotic-assisted endoluminal platform for minimally invasive peripheral lung biopsy that can reach nodules located in peripheral areas of the lungs inaccessible to standard instrumentation; this enables early-stage diagnosis of cancer.
Hazardous places
Continuum robots offer the possibility of completing tasks in hazardous and hostile environments. For example, a quadruped robot with continuum limbs has been developed: it can walk, crawl and trot, and can use whole-arm grasping with its limbs to negotiate difficult obstacles.
Space
NASA has developed a continuum manipulator, named Tendril, that can extend into crevasses and under thermal blankets to access areas that would be otherwise inaccessible with conventional means.
Subsea
The AMADEUS project developed a dextrous underwater robot for grasping and manipulation tasks, while the FLAPS project created propulsion systems that replicate the mechanisms of fish swimming.
See also
Soft robotics
Biorobotics
References
External links
Continuum robots - a state of the art
Robotics
Robot kinematics
Robot control | Continuum robot | [
"Engineering"
] | 2,033 | [
"Robotics engineering",
"Automation",
"Robot control",
"Robotics",
"Robot kinematics"
] |
67,944,653 | https://en.wikipedia.org/wiki/Sorption%20enhanced%20water%20gas%20shift | Sorption enhanced water gas shift (SEWGS) is a technology that combines a pre-combustion carbon capture process with the water gas shift reaction (WGS) in order to produce a hydrogen rich stream from the syngas fed to the SEWGS reactor.
The water gas shift reaction converts the carbon monoxide into carbon dioxide, according to the following chemical reaction:
CO + H2O ⇌ CO2 + H2
Meanwhile, carbon dioxide is captured and removed through an adsorption process.
The in-situ CO2 adsorption and removal shifts the water gas shift reaction to the right-hand side, thereby completely converting the CO and maximizing the production of high pressure hydrogen.
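The equilibrium shift can be illustrated numerically. The sketch below uses a widely quoted empirical correlation for the WGS equilibrium constant (Moe's equation, Kp = exp(4577.8/T − 4.33)) and solves for the equilibrium CO conversion when a fraction of the product CO2 is adsorbed out of the gas phase; the CO2- and H2-free feed is a simplifying assumption, not a typical SEWGS inlet:

```python
import math

def k_wgs(t_kelvin):
    """WGS equilibrium constant from Moe's empirical correlation."""
    return math.exp(4577.8 / t_kelvin - 4.33)

def co_conversion(t_kelvin, steam=2.0, co2_removed=0.0):
    """Equilibrium CO conversion for a feed of 1 mol CO + `steam` mol H2O.

    The WGS reaction is equimolar, so mole amounts can replace partial
    pressures; `co2_removed` is the fraction of product CO2 adsorbed out
    of the gas phase by the sorbent.
    """
    K = k_wgs(t_kelvin)

    def residual(x):
        co2_gas = x * (1.0 - co2_removed)  # CO2 left in the gas phase
        return co2_gas * x / ((1.0 - x) * (steam - x)) - K

    lo, hi = 1e-9, min(1.0, steam) - 1e-9  # bisection on conversion x
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

T = 673.15  # 400 degC, inside the typical SEWGS operating window
print(f"no CO2 capture : conversion = {co_conversion(T):.3f}")
print(f"90% CO2 capture: conversion = {co_conversion(T, co2_removed=0.9):.3f}")
```

Removing CO2 pushes the computed conversion from roughly 0.93 toward unity, which is the driving principle of the SEWGS process.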
Since the beginning of the second decade of the 21st century this technology has been gaining attention, as it shows advantages over conventional carbon capture technologies and because hydrogen is considered the energy carrier of the future.
Process
The SEWGS technology is the combination of the water gas shift reaction with the adsorption of carbon dioxide on a solid material. Typical temperature and pressure ranges are 350-550 °C and 20-30 bar. The inlet gas of SEWGS reactors is typically a mixture of hydrogen, CO and CO2, where steam is added to convert CO into CO2.
The conversion of carbon monoxide into carbon dioxide is enhanced by shifting the reaction equilibrium through the adsorption and removal of CO2, the latter being one of the reaction products.
The SEWGS technology is based on a multi-bed pressure swing adsorption (PSA) unit in which the vessels are filled with the water gas shift catalyst and the CO2 adsorbent material. Each vessel is subjected to a series of processes. In the sorption/reaction step, a high pressure hydrogen-rich stream is produced, while during sorbent regeneration a CO2 rich stream is generated.
The process starts by feeding syngas to the SEWGS reactor, where CO2 is adsorbed and a hydrogen-rich stream is produced. The regeneration of the first vessel starts when the sorbent material is saturated with CO2, and the feed stream is directed to another vessel. After regeneration, the vessels are re-pressurized. A multi-bed configuration is necessary to guarantee a continuous production of hydrogen and carbon dioxide. The optimal number of beds usually varies between 6 and 8.
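A minimal sketch of how a multi-bed cycle keeps production continuous: each bed steps through the same sequence with a fixed phase offset, so at any instant some beds are always on feed. The four-step cycle and eight-bed count below are illustrative choices, not a specific industrial recipe (real SEWGS cycles also include pressure-equalization and purge steps):

```python
# Illustrative 8-slot cycle for 8 beds (within the 6-8 bed range quoted
# above); each step lasts two time slots.
CYCLE = (["adsorption/shift"] * 2 + ["depressurization"] * 2
         + ["steam regeneration"] * 2 + ["repressurization"] * 2)
N_BEDS = len(CYCLE)  # one bed per phase offset

def bed_step(bed, t):
    """Step executed by bed `bed` at time slot `t`; beds are phase-shifted."""
    return CYCLE[(t + bed) % len(CYCLE)]

# At every time slot exactly two beds are on feed, so the hydrogen and
# CO2 product streams are generated continuously.
for t in range(3):
    feeding = [b for b in range(N_BEDS) if bed_step(b, t) == "adsorption/shift"]
    print(f"slot {t}: beds on feed -> {feeding}")
```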
Water gas shift reaction
The water gas shift reaction is the reaction between carbon monoxide and steam to form hydrogen and carbon dioxide:
CO + H2O ⇌ CO2 + H2
This reaction was discovered by Felice Fontana and is nowadays adopted in a wide range of industrial applications, such as the production of ammonia, hydrocarbons, methanol, hydrogen and other chemicals. In industrial practice, two water gas shift sections are necessary, one at high temperature and one at low temperature, with intersystem cooling.
Adsorption process
Adsorption is the phenomenon of sorption of gases or solutes on solid or liquid surfaces. Adsorption on a solid surface occurs when some substances collide with the surface and create bonds with the atoms or molecules of the solid. There are two main adsorption processes: physical adsorption and chemical adsorption. The first is the result of the interaction of intermolecular forces; since weak bonds are formed, the adsorbed substance can be easily separated. In chemical adsorption, chemical bonds are formed, meaning that the absorption or release of adsorption heat and the activation energy are larger than in physical adsorption. These two processes often take place simultaneously. The adsorbent material is then regenerated through desorption, the opposite phenomenon of sorption, releasing the captured substance from the adsorbent material.
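A common first-order description of such adsorption equilibria is the Langmuir isotherm, sketched below; the capacity and affinity values are arbitrary placeholders, not fitted SEWGS sorbent data:

```python
def langmuir_loading(p_co2_bar, q_max=1.0, b=0.5):
    """CO2 loading [mol/kg] on the sorbent from the Langmuir isotherm.

    q = q_max * b * p / (1 + b * p); q_max (monolayer capacity) and b
    (affinity constant) are placeholder values.
    """
    return q_max * b * p_co2_bar / (1.0 + b * p_co2_bar)

# The loading saturates toward q_max as the CO2 partial pressure rises.
for p in (0.1, 1.0, 5.0, 25.0):
    print(f"p = {p:5.1f} bar -> q = {langmuir_loading(p):.2f} mol/kg")
```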
In SEWGS technology the pressure swing adsorption (PSA) process is employed to regenerate the adsorbent material and produce a CO2 rich stream. The process is similar to the one conventionally used for air separation, hydrogen purification and other gas separations.
Conventional technology for carbon dioxide removal
The industrially used technology for carbon dioxide removal is called amine washing and is based on chemical absorption of carbon dioxide. In chemical absorption, reactions between the absorbed substance (CO2) and the solvent occur and produce a rich liquid. The rich liquid then enters the desorption column, where the carbon dioxide is separated from the sorbent, which is reused for CO2 absorption. Mono-ethanolamine (C2H7NO), diethanolamine (C4H11NO2), triethanolamine (C6H15NO3) and methyl-diethanolamine (C5H13NO2) are commonly used for the removal of CO2.
Advantages of SEWGS over conventional technologies
SEWGS technology shows some advantages in comparison with traditional technologies for pre-combustion removal of carbon dioxide. Traditional technologies require two water gas shift reactors (a high-temperature and a low-temperature stage) in order to obtain high conversion of carbon monoxide into carbon dioxide, with an intermediate cooling stage between the two reactors. In addition, another cooling stage is necessary at the outlet of the second WGS reactor for the CO2 capture with a solvent. Furthermore, the hydrogen-rich stream at the outlet of the SEWGS section can be fed directly into a gas turbine, while the hydrogen-rich stream produced by the traditional route needs a further heating stage.
Applications
The importance of this technology is directly related to the problem of global warming and the mitigation of carbon dioxide emissions. In the hydrogen economy, hydrogen is considered a clean energy carrier with high energy content and is expected to replace fossil fuels and other energy sources associated with pollution issues. For these reasons, since the beginning of the second decade of the 21st century this technology has attracted public interest.
The SEWGS technology enables the production of high-purity hydrogen without the need for further purification processes. It furthermore finds potential application in a wide range of industrial processes, such as the production of electricity from fossil fuels or the iron and steel industry.
The integration of the SEWGS process in natural gas combined cycle (NGCC) and integrated gasification combined cycle (IGCC) power plants has been investigated as a possible way to produce electricity from natural gas or coal with almost-zero emissions. In NGCC power plant the carbon capture achieved is around 95% with a CO2 purity over 99%, while in IGCC power plants the carbon capture ratio is around 90% with a CO2 purity of 99%.
The investigation of SEWGS integration in steel mills started during the second decade of the 21st century. The goal is to reduce the carbon footprint of this industrial process, which is responsible for 6% of total global CO2 emissions and 16% of the emissions generated by industrial processes.
The captured and removed CO2 can then be stored or used for the production of high-value chemical products.
Sorbents for SEWGS process
The reactor vessels are loaded with sorbent pellets. Sorbent must have the following features:
high CO2 capacity and selectivity over H2
low H2O adsorption
low specific cost
mechanical stability under pressure and temperature variation
chemical stability in the presence of impurities
easy regeneration by steam
Different sorbent materials have been investigated to the purpose of being employed in SEWGS. Some examples include:
K2CO3-promoted hydrotalcite
potassium promoted alumina
Na–Mg double salt
CaO
Potassium promoted hydrotalcite is the most studied sorbent material for SEWGS application. Its principal features are listed below:
low cost
sufficiently high CO2 cyclic working capacity
fast adsorption kinetics
good mechanical stability
See also
Water-gas shift reaction
Adsorption
Carbon capture and storage
Carbon capture and utilization
References
External links
Projects in which SEWGS technology is investigated:
Web page of STEPWISE project
Web page of C4U project
Chemical processes
Hydrogen production
Industrial gases | Sorption enhanced water gas shift | [
"Chemistry"
] | 1,620 | [
"Chemical process engineering",
"Chemical processes",
"Industrial gases",
"nan"
] |
67,944,684 | https://en.wikipedia.org/wiki/Lithium%20aluminium%20germanium%20phosphate | Lithium aluminium germanium phosphate, typically known with the acronyms LAGP or LAGPO, is an inorganic ceramic solid material whose general formula is . LAGP belongs to the NASICON (Sodium Super Ionic Conductors) family of solid conductors and has been applied as a solid electrolyte in all-solid-state lithium-ion batteries. Typical values of ionic conductivity in LAGP at room temperature are in the range of 10 - 10 S/cm, even if the actual value of conductivity is strongly affected by stoichiometry, microstructure, and synthesis conditions. Compared to lithium aluminium titanium phosphate (LATP), which is another phosphate-based lithium solid conductor, the absence of titanium in LAGP improves its stability towards lithium metal. In addition, phosphate-based solid electrolytes have superior stability against moisture and oxygen compared to sulfide-based electrolytes like (LGPS) and can be handled safely in air, thus simplifying the manufacture process.
Since the best performances are encountered when the stoichiometric value of x is 0.5, the acronym LAGP usually indicates the particular composition Li1.5Al0.5Ge1.5(PO4)3, which is also the material typically used in battery applications.
Properties
Crystal structure
Lithium-containing NASICON-type crystals are described by the general formula LiM2(PO4)3, in which M stands for a metal or a metalloid (Ti, Zr, Hf, Sn, Ge), and display a complex three-dimensional network of corner-sharing octahedra and phosphate tetrahedra. Lithium ions are hosted in voids in between, which can be subdivided into three kinds of sites:
Li(1) 6-fold coordinated sites at Wyckoff 6b position;
Li(2) sites at Wyckoff 18e position;
Li(3) sites at Wyckoff 36f position.
In order to promote lithium conductivity at sufficiently high rates, Li(1) sites should be fully occupied and Li(2) sites should be fully empty. Li(3) sites are located between Li(1) and Li(2) sites and are occupied only when large tetravalent cations are present in the structure, such as Zr, Hf, and Sn. If some Ge cations in the LiGe2(PO4)3 (LGP) structure are partially replaced by Al cations, the LAGP material is obtained, with general formula Li1+xAlxGe2-x(PO4)3. The single-phase NASICON structure is stable with x between 0.1 and 0.6; when this limit is exceeded, a solid solution is no longer possible and secondary phases tend to be formed. Although Ge and Al cations have very similar ionic radii (0.53 Å for Ge vs. 0.535 Å for Al), cationic substitution leads to compositional disorder and promotes the incorporation of a larger amount of lithium ions to achieve electrical neutrality. Additional lithium ions can be incorporated in either Li(2) or Li(3) empty sites.
In the available scientific literature, there is no unique description of the sites available for lithium ions and of their atomic coordination, nor of the sites directly involved in the conduction mechanism. For example, only two available sites, namely Li(1) and Li(2), are mentioned in some cases, while the Li(3) site is neither occupied nor involved in the conduction process. This results in the lack of an unambiguous description of the LAGP local crystal structure, especially concerning the arrangement of lithium ions and site occupancy when germanium is partially replaced by aluminium.
LAGP displays a rhombohedral unit cell with a space group R3c.
Vibrational properties
Factor group analysis
LAGP crystals belong to the space group D3d6 - R3c. The factor group analysis of NASICON-type materials with general formula MI(MIV)2(PO4)3 (where MI stands for a monovalent metal ion like Na+, Li+ or K+, and MIV represents a tetravalent cation such as Ti4+, Ge4+, Sn4+, Zr4+ or Hf4+) is usually performed assuming the separation between internal vibrational modes (i.e. modes originating in PO4 units) and external modes (i.e. modes arising from the translations of the MI and MIV cations, from PO4 translations, and from PO4 librations).
Focusing on internal modes only, the factor group analysis for R3c space group identifies 14 Raman-active modes for the PO4 units: 6 of these modes correspond to stretching vibrations and 8 to bending vibrations.
On the contrary, the analysis of external modes leads to many available vibrations: since the number of irreducible representations within the rhombohedral R3c space group is restricted, interactions among different modes could be expected and a clear assignment or discrimination becomes unfeasible.
Raman spectra
The vibrational properties of LAGP can be directly probed using Raman spectroscopy. LAGP shows the Raman features characteristic of all NASICON-type materials, most of which are caused by the vibrational motions of PO4 units. The main spectral regions in a Raman spectrum of NASICON-type materials are summarized in the following table.
The Raman spectra of LAGP are usually characterized by broad peaks, even when the material is in its crystalline form. Indeed, both the presence of aluminium ions in place of germanium ions and the extra lithium ions introduce structural and compositional disorder in the sublattice, resulting in peak broadening.
Transport properties
LAGP is a solid ionic conductor and features the two fundamental properties to be used as a solid-state electrolyte in lithium-ion batteries, namely a sufficiently high ionic conductivity and a negligible electronic conductivity. Indeed, during battery operations, LAGP should guarantee the easy and fast motion of lithium ions between cathode and anode, while preventing the transfer of electrons.
As stated in the description of the crystal structure, three kinds of sites are available for hosting lithium ions in the LAGP NASICON structure, i.e. the Li(1) sites, the Li(2) sites and the Li(3) sites. Ionic conduction occurs because of hopping of lithium ions from Li(1) to Li(2) sites or across two Li(3) sites. The bottleneck to ionic motion is represented by a triangular window delimited by three oxygen atoms between Li(1) and Li(2) sites.
The ionic conductivity in LAGP follows the usual dependency on temperature expressed by an Arrhenius-type equation, which is typical of most solid-state ionic conductors:

σ(T) = σ0 exp(-Ea / (kB T))

where
σ0 is the pre-exponential factor,
T is the absolute temperature,
Ea is the activation energy for ionic transport,
kB is the Boltzmann constant.
Typical values for the activation energies of bulk LAGP materials are in the range of 0.35 - 0.41 eV. Similarly, the room-temperature ionic conductivity is closely related to the synthesis conditions and to the actual material microstructure, therefore the conductivity values reported in scientific literature span from 10 S/cm up to 10 mS/cm, the highest value close to room temperature reported up to now. Compared to LGP, the room-temperature ionic conductivity of LAGP is increased by 3-4 orders of magnitude upon partial substitution of Ge by Al. Aluminium ions have a lower charge compared to Ge ions and additional lithium is incorporated in the NASICON structure to maintain charge balance, resulting in an enlarged number of charge carriers. The beneficial effect of aluminium is maximized for x around 0.4 - 0.5; for larger Al content, the single-phase NASICON structure is not stable and secondary phases appear, mainly AlPO4, , and GeO2. Secondary phases are typically nonconductive; however, small and controlled amounts of AlPO4 exert a densification effect which affects in a positive way the overall ionic conductivity of the material.
The prefactor in the Arrhenius equation can in turn be written as a function of fundamental constants and conduction parameters: the ion valence Z, the elementary charge e, the absolute temperature T, the Boltzmann constant kB, the concentration of charge carriers n, the average velocity of the ions v, and the mean free path λ.
The prefactor is directly proportional to the concentration of mobile lithium-ion carriers, which increases with the aluminium content in the material. As a result, since the dependency of the activation energy on aluminium content is negligible, the ionic conductivity is expected to increase with increasing Ge substitution by Al, until secondary phases are formed. The introduction of aluminium also reduces the grain boundary resistivity of the material, positively impacting on the total (bulk crystal + grain boundary) ionic conductivity of the LAGP material.
As expected for solid ionic conductors, the ionic conductivity of LAGP increases with increasing temperature.
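To make the temperature dependence concrete, the sketch below evaluates the Arrhenius ratio of conductivities between room temperature and 100 °C for an assumed activation energy of 0.38 eV, a value inside the 0.35 - 0.41 eV range quoted above; the resulting factor of about 20 is consistent with the roughly one-order-of-magnitude increase mentioned later in this article:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant [eV/K]

def arrhenius_ratio(e_a_ev, t1_k, t2_k):
    """sigma(T2) / sigma(T1) for a thermally activated conductivity."""
    return math.exp(e_a_ev / K_B_EV * (1.0 / t1_k - 1.0 / t2_k))

# Activation energy chosen inside the 0.35-0.41 eV range quoted above.
print(f"sigma(100 C) / sigma(25 C) = {arrhenius_ratio(0.38, 298.15, 373.15):.0f}")
```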
Regarding the electronic conductivity of LAGP, it should be as low as possible to prevent electrical short circuit between anode and cathode. As for ionic conductivity, the exact stoichiometry and microstructure, strongly connected to the synthesis method, have an influence on the electronic conductivity, even if the reported values are very low and close to (or lower than) 10 S/cm.
Thermal properties
The specific heat capacity of LAGP materials with general formula Li1+xAlxGe2-x(PO4)3 fits the Maier-Kelley polynomial law in the temperature range from room temperature to 700 °C:

cp = a + b·T + c·T^(-2)

where T is the absolute temperature and a, b and c are fitting constants.
Typical values are in the range of 0.75 - 1.5 J⋅g−1⋅K−1 in the temperature interval 25 - 100 °C. Some of the fitting constants increase with the x value, i.e. with both the aluminium and the lithium content, while the remaining constant does not follow a precise trend. As a result, the specific heat capacity of LAGP is expected to increase as the Al content grows and the Ge content decreases, which is consistent with data about the relative specific heats of aluminium and germanium compounds.
In addition, the thermal diffusivity of LAGP follows a decreasing power-law trend with increasing temperature, irrespective of the aluminium content. The aluminium level affects the power-law exponent, which varies from 0.08 (high Al content) to 0.11 (low Al content). Such small values suggest the presence of a large number of point defects in the material, which is highly beneficial for solid ionic conductors. Finally, the thermal conductivity can be expressed in terms of the heat capacity per unit volume, the average phonon group velocity, the phonon mean free path, and the density of the material.
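A standard kinetic-theory sketch consistent with these quantities (the exact prefactor, and whether the density appears explicitly, depend on whether the heat capacity is taken per unit volume or per unit mass) is:

```latex
\kappa = \tfrac{1}{3}\, C\, v\, \lambda
\qquad \text{or equivalently} \qquad
\kappa = \tfrac{1}{3}\, \rho\, c_p\, v\, \lambda \quad (C = \rho c_p)
```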
Taking everything into account, as the aluminium content in LAGP increases, the ionic conductivity increases as well, while the thermal conductivity decreases, since a larger number of lithium ions enhances the phonon scattering, thus reducing the phonon mean free path and the thermal transport in the material. Therefore, thermal and ionic transports in LAGP ceramics are not correlated: the corresponding conductivities follow opposite trends as a function of the aluminium content and are affected in a different way by temperature variations (e.g., the ionic conductivity increases by one order of magnitude upon an increase from room temperature to 100 °C, while the thermal conductivity increases by only 6%).
Thermal stability
Detrimental secondary phases can also form because of thermal treatments or during the material production. Excessively high sintering/annealing temperatures or long dwelling times will result in the loss of volatile species (especially Li2O) and in the decomposition of LAGP main phase into AlPO4 and GeO2. LAGP bulk samples and thin films are typically stable up to 700-750 °C; if this temperature is exceeded, volatile lithium is lost and the impurity phase GeO2 forms. If the temperature is further increased beyond 950 °C, also AlPO4 appears.
Raman spectroscopy and in situ X-ray diffraction (XRD) are useful techniques that can be employed to recognise the phase purity of LAGP samples during and after the heat treatments.
Chemical and electrochemical stability
LAGP belongs to phosphate-based solid electrolytes and, in spite of showing a moderate ionic conductivity compared to other families of solid ionic conductors, it possesses some intrinsic advantages with respect to sulfides and oxides:
Extremely high chemical stability in humid air;
Wide electrochemical stability window;
Low to negligible electronic conductivity.
One of the main advantages of LAGP is its chemical stability in the presence of oxygen, water vapour, and carbon dioxide, which simplifies the manufacturing process by avoiding the use of a glovebox or protected environments. Unlike sulfide-based solid electrolytes, which react with water releasing poisonous gaseous hydrogen sulfide, and garnet-type lithium lanthanum zirconium oxide (LLZO), which reacts with water and CO2 to form passivating layers of LiOH and Li2CO3, LAGP is practically inert in humid air.
Another important advantage of LAGP is its wide electrochemical stability window, up to 6 V, which allows the use of such an electrolyte in contact with high-voltage cathodes, thus enabling high energy densities. However, the stability at very low voltages and against lithium metal is controversial: even if LAGP is more stable than LATP because of the absence of titanium, some literature works report the reduction of Ge by lithium as well, with the formation of reduced Ge species and metallic germanium at the electrode-electrolyte interface and a dramatic increase of interfacial resistance.
A decomposition mechanism of LAGP in contact with metallic lithium has been proposed in the literature.
Synthesis
Several synthesis methods exist to produce LAGP in the form of bulk pellets or thin films, depending on the required performances and final applications. The synthesis path significantly affects the microstructure of the LAGP material, which plays a key role in determining its overall conductive properties. Indeed, a compact layer of crystalline LAGP with large and connected grains, and minimal amount of secondary, non-conductive phases ensures the highest conductivity values. On the contrary, an amorphous structure or the presence of small grains and pores tend to hinder the motion of lithium ions, with values of ionic conductivity in the range of 10 - 10 S/cm for glassy LAGP.
In most cases, a post-process thermal treatment is performed to achieve the desired degree of crystallinity.
Bulk pellets
Solid-state sintering
Solid-state sintering is the most used synthesis process to produce solid-state electrolytes. Powders of LAGP precursors, including oxides like GeO2 and Al2O3, are mixed, calcinated and densified at high temperature (700 - 1200 °C) and for long times (12 hours). Sintered LAGP is characterized by high crystalline quality, large grains, a compact microstructure, and high density, even if negative side effects such as loss of volatile lithium compounds and formation of secondary phases should be avoided while the material is kept at high temperature.
The sintering parameters affect the LAGP microstructure and purity and, ultimately, its ionic conductivity and conduction performance.
Glass crystallization
LAGP glass-ceramics can be obtained starting from an amorphous glass with nominal composition of , which is subsequently annealed to promote crystallization. Compared to solid-state sintering, ceramic melt-quenching followed by crystallization is a simpler and more flexible process which leads to a denser and more homogeneous microstructure.
The starting point for glass crystallization is the synthesis of the glass through a melt-quenching process of precursors in suitable amount to achieve the desired stoichiometry. Different precursors can be used, especially to provide phosphorus to the material. One possible route is the following:
Preheating of Al2O3 and GeO2 at 1000 °C for 1 hour;
Drying of Li2CO3 at 300 °C for 3 hours;
Mixing of the starting precursors in proper amounts to match the nominal stoichiometry;
Removal of volatile species by stepwise heating of the mix to 500 °C;
Melting at 1450 °C for 1 hour;
Quenching of the melt;
Annealing of glass samples in air.
The annealing temperature is selected to promote full crystallization and to avoid the formation of detrimental secondary phases, pores, and cracks. Various temperatures are reported in different literature sources; however, crystallization does not usually start below 550-600 °C, while temperatures larger than 850 °C cause the extensive formation of impurity phases.
Sol-gel techniques
The sol-gel technique enables the production of LAGP particles at lower processing temperatures compared to sintering or glass crystallization. The typical precursor is a germanium organic compound, like germanium ethoxide, which is dissolved in an aqueous solution with stoichiometric amounts of the sources of lithium, phosphorus, and aluminium. The mixture is then heated and stirred. The sol-gel process starts after the addition of a gelation agent, and the final material is obtained after subsequent heating steps aimed at eliminating water and promoting the pyrolysis reaction, followed by calcination.
The sol-gel process requires the use of germanium organic precursors, which are more expensive compared to GeO2.
Thin films
Sputtering
Sputtering (in particular radio-frequency magnetron sputtering) has been applied to the fabrication of LAGP thin-films starting from a LAGP target. Depending on the temperature of the substrate during the deposition, LAGP can be deposited in the cold sputtering or hot sputtering configuration.
The film stoichiometry and microstructure can be tuned by controlling the deposition parameters, especially the power density, the chamber pressure, and the substrate temperature. Both amorphous and crystalline films are obtained, with a typical thickness around 1 μm. The room-temperature ionic conductivity and the activation energy of sputtered and annealed LAGP films are comparable with those of bulk pellets, i.e. 10 S/cm and 0.31 eV.
Aerosol deposition
Pre-synthesized LAGP powders can be sprayed on a substrate to form a LAGP film by means of aerosol deposition. The powders are loaded into the aerosol deposition chamber and purified air is used as the carrier gas to drive the particles towards the substrate, where they impinge and coalesce to generate the film. Since the as-produced film is amorphous, an annealing treatment is usually performed to improve the film crystallinity and its conduction properties.
Other techniques
Some other methods to produce LAGP materials have been reported in literature works, including liquid-based techniques, spark plasma sintering, and co-precipitation.
In the following table, some ionic conductivity values are reported for LAGP materials produced with different synthesis routes, in the case of optimized production and annealing conditions.
Applications
LAGP is one of the most studied solid-state electrolytes for lithium-ion batteries. The use of a solid-state electrolyte improves battery safety by eliminating liquid-based electrolytes, which are flammable and usually unstable above 4.3 V. In addition, it physically separates the anode from the cathode, reducing the risk of short-circuit, and strongly inhibits lithium dendrite growth. Finally, solid-state electrolytes can operate in a wide range of temperatures, with minimal conductivity loss and decomposition issues. Nevertheless, the ionic conductivity of solid-state electrolytes is some orders of magnitude lower than that of conventional liquid-based electrolytes; a thin electrolyte layer is therefore preferred, to reduce the overall internal impedance and to achieve a shorter diffusion path and larger energy densities. LAGP is thus a suitable candidate for all-solid-state thin-film lithium-ion batteries, in which the electrolyte thickness ranges from 1 to some hundreds of micrometres. The good mechanical strength of LAGP effectively suppresses lithium dendrites during lithium stripping and plating, reducing the risk of internal short-circuit and battery failure.
LAGP is applied as a solid-state electrolyte both as a pure material and as a component in organic-inorganic composite electrolytes. For example, LAGP can be composited with polymeric materials, like polypropylene (PP) or polyethylene oxide (PEO), to improve the ionic conductivity and to tune the electrochemical stability. Moreover, since LAGP is not fully stable against metallic lithium because of the electrochemical reactivity of Ge cations, additional interlayers can be introduced between the lithium anode and the solid electrolyte to improve the interfacial stability. The addition of a thin layer of metallic germanium inhibits the electrochemical reduction by lithium metal at very negative potentials and promotes the interfacial contact between the anode and the electrolyte, resulting in improved cycling performance and battery stability. The use of polymer-ceramic composite interlayers or the excess of Li2O are alternative strategies to improve the electrochemical stability of LAGP at negative potentials.
LAGP has also been tested not only as a solid electrolyte but also as an anode material in lithium-ion batteries, showing high electrochemical stability and good cycling performance.
Lithium-sulfur batteries
LAGP-based membranes have been applied as separators in lithium-sulfur batteries. LAGP allows the transfer of lithium ions from anode to cathode but, at the same time, prevents the diffusion of polysulfides from the cathode, suppressing the polysulfide shuttle effect and enhancing the overall performance of the battery. Typically, all-solid-state lithium-sulfur batteries are not fabricated because of high interfacial resistance; therefore, hybrid electrolytes are usually realized, in which LAGP acts as a barrier against polysulfide diffusion but it is combined with liquid or polymer electrolytes to promote fast lithium diffusion and to improve the interfacial contact with electrodes.
See also
Solid-state electrolyte
Solid-state battery
NASICON
Lithium lanthanum zirconium oxide
References
Lithium compounds
Lithium-ion batteries
Electrochemistry
Solid-state batteries
Phosphates
Germanium compounds | Lithium aluminium germanium phosphate | [
"Chemistry"
] | 4,598 | [
"Electrochemistry",
"Phosphates",
"Salts"
] |
67,944,695 | https://en.wikipedia.org/wiki/Biaxial%20tensile%20testing | In materials science and solid mechanics, biaxial tensile testing is a versatile technique to address the mechanical characterization of planar materials. It is a generalized form of tensile testing in which the material sample is simultaneously stressed along two perpendicular axes. Typical materials tested in biaxial configuration include
metal sheets,
silicone elastomers,
composites,
thin films,
textiles
and biological soft tissues.
Purposes of biaxial tensile testing
A biaxial tensile test generally allows the assessment of the mechanical properties, and a complete characterization of uncompressible isotropic materials, which can be obtained with fewer specimens than uniaxial tensile tests.
Biaxial tensile testing is particularly suitable for understanding the mechanical properties of biomaterials, due to their directionally oriented microstructures.
If the testing aims at the material characterization of the post elastic behaviour, the uniaxial results become inadequate, and a biaxial test is required in order to examine the plastic behaviour.
In addition to this, using uniaxial test results to predict rupture under biaxial stress states seems to be inadequate.
Even if a biaxial tensile test is performed in a planar configuration, it may be equivalent to the stress state applied on three-dimensional geometries, such as cylinders with an inner pressure and an axial stretching.
The relationship between the inner pressure and the circumferential stress is given by the Mariotte formula:

σθ = P·D / (2t)

where σθ is the circumferential stress, P the inner pressure, D the inner diameter and t the wall thickness of the tube.
Equipment
Typically, a biaxial tensile machine is equipped with motor stages, two load cells and a gripping system.
Motor stages
Through the movement of the motor stages a certain displacement is applied to the material sample. If there is a single motor stage, the displacement is the same in the two directions and only the equi-biaxial state is allowed. On the other hand, by using four independent motor stages, any load condition is allowed; this feature makes the biaxial tensile test superior to other tests that apply a biaxial tensile state, such as the hydraulic bulge, semispherical bulge, stack compression or flat punch tests.
Using four independent motor stages also makes it possible to keep the sample centred during the whole duration of the test; this feature is particularly useful for coupling an image analysis with the mechanical test. The most common way to obtain the fields of displacements and strains is Digital Image Correlation (DIC), which is a contactless technique and therefore does not affect the mechanical results.
Load cells
Two load cells are placed along the two orthogonal load directions to measure the normal reaction forces explicated by the specimen. The dimensions of the sample have to be in accordance with the resolution and the full scale of the load cells.
A biaxial tensile test can be performed either in a load-controlled condition, or a displacement-controlled condition, in accordance with the settings of the biaxial tensile machine. In the former configuration a constant loading rate is applied and the displacements are measured, whereas in the latter configuration a constant displacement rate is applied and the forces are measured.
When dealing with elastic materials the load history is not relevant, whereas in viscoelastic materials it is not negligible. Furthermore, for this class of materials the loading rate also plays a role.
Gripping system
The gripping system transfers the load from the motor stages to the specimen. Although the use of biaxial tensile testing is steadily growing, there is still a lack of robust standardized protocols concerning the gripping system. Since it plays a fundamental role in the application and distribution of the load, the gripping system has to be carefully designed in order to satisfy the Saint-Venant principle. Some different gripping systems are reported below.
Clamps
Clamps are the most commonly used gripping system for biaxial tensile tests, since they allow a fairly uniformly distributed load at the junction with the sample. To increase the uniformity of stress in the region of the sample close to the clamps, notches with circular tips are cut into the arms of the sample. The main problem related to clamps is the low friction at the interface with the sample; indeed, if the friction between the inner surface of the clamps and the sample is too low, there can be relative motion between the two, altering the results of the test.
Sutures
Small holes are made in the surface of the sample to connect it to the motor stages through wires with a stiffness much higher than that of the sample. Typically, sutures are used with square samples. In contrast to clamps, sutures allow the rotation of the sample around the axis perpendicular to the plane; in this way they do not transmit shear stresses to the sample.
The load transmission is very localized, and therefore the load distribution is not uniform. A template is needed to apply the sutures in the same position in different samples, to ensure repeatability among different tests.
Rakes
This system is similar to the suture gripping system, but stiffer. Rakes transfer a limited amount of shear stress, so they are less useful than sutures in the presence of large shear strains. Although the load is transmitted in a discontinuous way, the load distribution is more uniform compared to sutures.
Specimen shape
The success of a biaxial tensile test is strictly related to the shape of the specimen.
The two most used geometries are the square and cruciform shapes. When dealing with fibrous materials or fibre-reinforced composites, the fibres should be aligned with the load directions for both classes of specimens, in order to minimize the shear stresses and to avoid sample rotation.
Square samples
Square or, more generally, rectangular specimens are easy to obtain, and their dimensions and ratio depend on the material availability. Large specimens are needed to make the effects of the gripping system negligible in the core of the sample. However, this solution is very material-consuming, so small specimens are required. Since the gripping system is then very close to the core of the specimen, the strain distribution is not homogeneous.
Cruciform samples
A proper cruciform sample should fulfil the following requirements:
maximization of the biaxially loaded area in the centre of the sample, where the strain field is uniform;
minimization of the shear strain in the centre of the sample;
minimization of regions of stress concentration, even outside the area of interest;
failure in the biaxially loaded area;
repeatable results.
It is important to note that in this kind of sample the stretch is larger in the outer region than in the centre, where the strain is uniform.
Method
Uniaxial tensile tests are typically used to measure the mechanical properties of materials, but many materials exhibit different behaviour when subjected to different loading states. Biaxial tensile tests have thus become one of the prospective measurement methods. The Small Punch Test (SPT) and bulge testing are two methods applying a biaxial tensile state.
Small Punch Test (SPT)
The Small Punch Test (SPT) was first developed in the 1980s as a minimally invasive in-situ technique to investigate the local degradation and embrittlement of nuclear materials. The SPT is a miniaturized test method requiring only a small-volume specimen. Using small volumes does not severely affect or damage an in-service component, which makes the SPT a good method to determine the mechanical properties of unirradiated and irradiated materials or to analyze small regions of structural components.
In the test, the disc-shaped specimen is clamped between two dies. The punch is then pushed with a constant displacement rate through the specimen. A flat punch or a concave tip pushing a ball is typically used. After the test, characteristic parameters such as force-displacement curves are used to estimate the yield strength and ultimate tensile stress. Considering the curves obtained at various temperatures from SPT tensile/fracture data, the ductile-to-brittle transition temperature (DBTT) can be calculated. Note that the specimen used in the SPT should be very flat, to reduce the stress error caused by an undefined contact situation.
Hydraulic Bulge Test (HBT)
The hydraulic bulge test (HBT) is a method of biaxial tensile testing. It is used to determine mechanical properties such as Young's modulus, yield strength, ultimate tensile strength, and the strain-hardening properties of sheet materials like thin films. The HBT can better describe the plastic properties of a sheet at large strains, since the strains in press forming are normally larger than the uniform strain. However, the geometries of formed parts are not symmetric; therefore, the true stress and strain measured by the HBT will be higher than those measured by a tensile test.
In the HBT, rupture discs and high-pressure hydraulic oil are used to cause specimen deformation, which also avoids influence factors such as friction in the small punch test. There are, however, constraints on the test conditions: the temperature is limited by solidification and vaporization of the hydraulic oil. High temperature would lead to loading failure, while low temperature results in failure of the seal, and the leaking vapour might be dangerous.
In the HBT, a circular sample is normally stripped from the substrate on which it has been prepared and clamped over a hole around its periphery at the end of a cylinder. It experiences pressure from one side from the hydraulic oil and then bulges and expands into a cavity with increasing pressure. The flow stress is calculated from the dome height of the bulging blank and the pressure, both of which can be measured. The strain is measured by Digital Image Correlation (DIC). Taking the specimen thickness and clamp size into account, the true stress and strain can be calculated (see the sketch below).
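A minimal sketch of this reduction, assuming a spherical-cap dome, the membrane stress relation σ = pR/(2t), and Hill's apex-thinning approximation; the input numbers are illustrative, not from a specific experiment:

```python
import math

def bulge_stress_strain(p_mpa, dome_height_mm, die_radius_mm, t0_mm):
    """Equi-biaxial true stress [MPa] and thickness strain at the dome apex.

    Spherical cap: R = (a^2 + h^2) / (2h); membrane equilibrium gives
    sigma = p * R / (2 t); Hill's approximation thins the apex as
    t = t0 / (1 + (h/a)^2)^2.
    """
    a, h = die_radius_mm, dome_height_mm
    radius = (a**2 + h**2) / (2.0 * h)        # radius of curvature
    t = t0_mm / (1.0 + (h / a) ** 2) ** 2     # current apex thickness
    sigma = p_mpa * radius / (2.0 * t)        # biaxial true stress
    eps_thickness = math.log(t / t0_mm)       # true thickness strain
    return sigma, eps_thickness

sigma, eps_t = bulge_stress_strain(p_mpa=2.0, dome_height_mm=10.0,
                                   die_radius_mm=25.0, t0_mm=1.0)
print(f"sigma = {sigma:.0f} MPa, thickness strain = {eps_t:.3f}")
```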
Other liquids may also be used as the hydraulic fluid in HBT. Xiang et al. (2005) developed a HBT for sub-micron thin films by using standard photolithographic microfabrication techniques etch away a small channel behind the film of interest, then pressurized the channel with water to bulge thin films. Validity of this method was confirmed using finite element analysis (FEA).
Gas Bulge Test (GBT)
Gas bulge tests (GBT) operate similarly to the HBT. Instead of hydraulic oil, high-pressure gas is used to back-pressure a thin plate specimen. Since gas has a much lower density than liquid, the maximum safe pressure output from GBT is considerably lower than for hydraulic systems. Therefore, elevated-temperature GBT is often used to increase the ductility of the specimen, enabling plastic deformation at lower pressures.
Unlike HBT, elevated temperatures are possible for GBT. Operating temperatures of biaxial bulge testing are limited by phase transitions of the pressurized fluid—gasses therefore have an extremely wide range of operating temperatures. GBT is suitable for studying fatigue, low and high-temperature mechanical properties (given sufficient ductility at low temperatures), and thermal cycling. Additionally, holding pressure at a high temperature allows for testing time-dependent mechanical properties such as creep.
High-temperature DIC may be used to measure biaxial stress and strain during GBT. Alternatively, a laser interferometer may be used to find the displacement near the apex of the dome, and many models exist for calculating both the radius of curvature and the radial strain of bulged specimens. True stress is best approximated by the Young-Laplace equation. Results are comparable to the biaxial testing standard ISO 16808. Clamping of elevated-temperature gas bulge specimens requires clamping materials with an operating temperature in excess of the test temperature. This is possible using high-temperature mechanical fasteners, or by directly bonding materials via traditional welding, friction stir welding (FSW), or diffusion bonding.
GBT example studies
Frary et al. (2002) use GBT to demonstrate superplastic deformation of commercially pure (CP) titanium and Ti64 by thermally cycling through the material’s α/β transformation temperature.
Huang et al. (2019) measure coefficients of thermal expansion through GBT, and thermally cycle NiTi shape memory alloys to measure stress evolution.
The ability to perform GBT in parallel for an array of specimens enables high-throughput screening of mechanical properties and facilitates rapid materials design. Ding et al. (2014) conducted parallel measurements of viscosity across a huge composition-space of bulk metallic glass. Instead of using a direct pressure hookup, tungstic acid was placed into the cavities behind the specimen plate and decomposed to produce gas upon heating to ~100 °C.
Analytical solution
A biaxial tensile state can be derived starting from the most general constitutive law for isotropic materials in the large-strain regime:

S = 2[(W1 + I1W2)I - W2C + I3W3C^(-1)]

where S is the second Piola-Kirchhoff stress tensor, I the identity matrix, C the right Cauchy-Green tensor, and W1, W2 and W3 the derivatives of the strain energy function per unit of volume in the undeformed configuration with respect to the three invariants of C.
For an uncompressible material, the previous equation becomes:

S = -pC^(-1) + 2[(W1 + I1W2)I - W2C]

where p is of a hydrostatic nature and plays the role of a Lagrange multiplier. It is worth noting that p is not the hydrostatic pressure and must be determined independently of the constitutive model of the material.
A well-posed problem requires specifying the out-of-plane condition; for the biaxial state of a membrane the normal stress component vanishes (S33 = 0), and the p term can thereby be obtained as p = 2C33[(W1 + I1W2) - W2C33], where C33 is the third component of the diagonal of C.
According to the definition, the three non-zero components of the deformation gradient tensor F are $F_{11} = \lambda_1$, $F_{22} = \lambda_2$ and $F_{33} = \lambda_3 = (\lambda_1\lambda_2)^{-1}$, the last following from incompressibility.
Consequently, the components of C can be calculated with the formula $\mathbf{C} = \mathbf{F}^{T}\mathbf{F}$, and they are $C_{11} = \lambda_1^{2}$, $C_{22} = \lambda_2^{2}$ and $C_{33} = (\lambda_1\lambda_2)^{-2}$.
Consistent with this stress state, the two non-zero components of the second Piola-Kirchhoff stress tensor are:

$$S_{11} = 2\left(1 - \lambda_1^{-4}\lambda_2^{-2}\right)\left(W_1 + \lambda_2^{2} W_2\right), \qquad S_{22} = 2\left(1 - \lambda_1^{-2}\lambda_2^{-4}\right)\left(W_1 + \lambda_1^{2} W_2\right)$$

By using the relationship between the second Piola-Kirchhoff and the Cauchy stress tensor, $\boldsymbol{\sigma} = \mathbf{F}\mathbf{S}\mathbf{F}^{T}$ (with $J = \det\mathbf{F} = 1$), $\sigma_{11}$ and $\sigma_{22}$ can be calculated:

$$\sigma_{11} = 2\left(\lambda_1^{2} - \lambda_1^{-2}\lambda_2^{-2}\right)\left(W_1 + \lambda_2^{2} W_2\right), \qquad \sigma_{22} = 2\left(\lambda_2^{2} - \lambda_1^{-2}\lambda_2^{-2}\right)\left(W_1 + \lambda_1^{2} W_2\right)$$
Equi-biaxial configuration
The simplest biaxial configuration is the equi-biaxial configuration, where the two loading directions are subjected to the same stretch at the same rate. In an incompressible isotropic material under an equi-biaxial stress state, the non-zero components of the deformation gradient tensor F are $F_{11} = F_{22} = \lambda$ and $F_{33} = \lambda^{-2}$.
According to the definition of C, its non-zero components are $C_{11} = C_{22} = \lambda^{2}$ and $C_{33} = \lambda^{-4}$.
The Cauchy stress in the two directions is:

$$\sigma_{11} = \sigma_{22} = 2\left(\lambda^{2} - \lambda^{-4}\right)\left(W_1 + \lambda^{2} W_2\right)$$
Strip biaxial configuration
A strip biaxial test is a test configuration in which the stretch in one direction is constrained, namely a zero displacement is applied in that direction ($\lambda_2 = 1$). The components of the C tensor become $C_{11} = \lambda^{2}$, $C_{22} = 1$ and $C_{33} = \lambda^{-2}$. It is worth noting that even if there is no displacement along direction 2, the stress is different from zero and depends on the stretch applied in the orthogonal direction, as stated in the following equations.
The Cauchy stress in the two directions is:

$$\sigma_{11} = 2\left(\lambda^{2} - \lambda^{-2}\right)\left(W_1 + W_2\right), \qquad \sigma_{22} = 2\left(1 - \lambda^{-2}\right)\left(W_1 + \lambda^{2} W_2\right)$$
The strip biaxial test has been used in different applications, such as the prediction of the behaviour of orthotropic materials under a uniaxial tensile stress, delamination problems, and failure analysis.
FEM analysis
Finite Element Methods (FEM) are sometimes used to obtain the material parameters.
The procedure consists of reproducing the experimental test and obtaining the same stress-stretch behaviour; to do so, an iterative procedure is needed to calibrate the constitutive parameters. This approach has been demonstrated to be effective in obtaining the stress-stretch relationship for a wide class of hyperelastic material models (Ogden, Neo-Hookean, Yeoh, and Mooney-Rivlin). FEA can also be used to determine the cracking behaviour of a cruciform specimen under mixed-mode loading; the Franc2d program has been used to calculate the stress intensity factor (SIF) for such specimens using the linear elastic fracture mechanics approach.
Standards
ISO 16842:2014 metallic materials – sheet and strip – biaxial tensile testing method using a cruciform test piece.
ISO 16808:2014 metallic materials – sheet and strip – determination of biaxial stress-strain curve by means of bulge test with optical measuring systems.
ASTM D5617 – 04(2015) – Standard Test Method for Multi-Axial Tension Test for Geosynthetics.
DIN EN 17117 – a German standard describing test methods that use biaxial stress states to determine the tensile stiffness properties of biaxially oriented coated fabrics.
See also
Tensile testing
Mechanical properties
References
Materials testing
Continuum mechanics
Solid mechanics | Biaxial tensile testing | [
"Physics",
"Materials_science",
"Engineering"
] | 3,400 | [
"Solid mechanics",
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"Materials testing",
"Mechanics"
] |
67,944,697 | https://en.wikipedia.org/wiki/Non-linear%20inverse%20Compton%20scattering | Non-linear inverse Compton scattering (NICS), also known as non-linear Compton scattering and multiphoton Compton scattering, is the scattering of multiple low-energy photons, supplied by an intense electromagnetic field, into a single high-energy photon (X-ray or gamma ray) during the interaction with a charged particle, in many cases an electron. This process is an inverted variant of Compton scattering since, contrary to it, the charged particle transfers its energy to the outgoing high-energy photon instead of receiving energy from an incoming high-energy photon. Furthermore, differently from Compton scattering, this process is explicitly non-linear because the conditions for multiphoton absorption by the charged particle are reached in the presence of a very intense electromagnetic field, for example, the one produced by high-intensity lasers.
Non-linear inverse Compton scattering is a scattering process belonging to the category of light-matter interaction phenomena. The absorption of multiple photons of the electromagnetic field by the charged particle causes the consequent emission of an X-ray or a gamma ray with energy comparable to or higher than the charged particle rest energy.
The normalized vector potential $a_0 = eA/(m_e c^2)$ helps to isolate the regime in which non-linear inverse Compton scattering occurs ($e$ is the electron charge, $m_e$ is the electron mass, $c$ the speed of light and $A$ the vector potential). If $a_0 \ll 1$, the emission phenomenon can be reduced to the scattering of a single photon by an electron, which is the case of inverse Compton scattering. If $a_0 \gtrsim 1$, NICS occurs and the probability amplitudes of emission have non-linear dependencies on the field. For this reason, in the description of non-linear inverse Compton scattering, $a_0$ is called the classical non-linearity parameter.
History
The physical process of non-linear inverse Compton scattering was first introduced theoretically in different scientific articles starting from 1964. Before this date, some seminal works had emerged dealing with the description of the classical limit of NICS, called non-linear Thomson scattering or multiphoton Thomson scattering. In 1964, different papers were published on the topic of electron scattering in intense electromagnetic fields by L. S. Brown and T. W. B. Kibble, and by A. I. Nikishov and V. I. Ritus, among others. The development of the high-intensity laser systems required to study the phenomenon has motivated the continuous advancements in the theoretical and experimental studies of NICS. At the time of the first theoretical studies, the terms non-linear (inverse) Compton scattering and multiphoton Compton scattering were not yet in use, and they progressively emerged in later works. The case of an electron scattering off high-energy photons in the field of a monochromatic background plane wave with either circular or linear polarization was one of the most studied topics at the beginning. Then, some groups studied more complicated non-linear inverse Compton scattering scenarios, considering complex electromagnetic fields of finite spatial and temporal extension, typical of laser pulses.
The advent of laser amplification techniques, in particular chirped pulse amplification (CPA), has made it possible to reach sufficiently high laser intensities to study new regimes of light-matter interaction and to significantly observe non-linear inverse Compton scattering and its peculiar effects. Non-linear Thomson scattering was first observed in 1983 with a keV electron beam colliding with a Q-switched Nd:YAG laser delivering an intensity of W/cm2 (), producing photons at twice the laser frequency; then in 1995 with a CPA laser of peak intensity around W/cm2 interacting with neon gas; and in 1998 in the interaction of a mode-locked Nd:YAG laser ( W/cm2, ) with plasma electrons from a helium gas jet, producing multiple harmonics of the laser frequency. NICS was detected for the first time in a pioneering experiment at the SLAC National Accelerator Laboratory at Stanford University, USA. In this experiment, the collision of an ultra-relativistic electron beam, with energy of about GeV, with a terawatt Nd:glass laser, with an intensity of W/cm2 (, ), produced NICS photons which were observed indirectly via a nonlinear energy shift in the output electron spectrum; consequent positron generation was also observed in this experiment.
Multiple experiments have since been performed by crossing a high-energy laser pulse with a relativistic electron beam from a conventional linear electron accelerator, but a further achievement in the study of non-linear inverse Compton scattering has been the realization of all-optical setups. In these cases, a laser pulse is responsible both for the electron acceleration, through the mechanisms of plasma acceleration, and for the non-linear inverse Compton scattering occurring in the interaction of accelerated electrons with a laser pulse (possibly counter-propagating with respect to the electrons). One of the first experiments of this type was performed in 2006, producing photons of energy from to keV with a Ti:Sa laser beam (W/cm2). Research is still ongoing and active in this field, as attested by the numerous theoretical and experimental publications.
Classical limit
The classical limit of non-linear inverse Compton scattering, also called non-linear Thomson scattering and multiphoton Thomson scattering, is a special case of classical synchrotron emission driven by the force exerted on a charged particle by intense electric and magnetic fields. Practically, a moving charge emits electromagnetic radiation while experiencing the Lorentz force induced by the presence of these electromagnetic fields. The calculation of the emitted spectrum in this classical case is based on the solution of the Lorentz equation for the particle and the substitution of the corresponding particle trajectory in the Liénard-Wiechert fields. In the following, the considered charged particles will be electrons, and Gaussian units will be used.
The component of the Lorentz force perpendicular to the particle velocity is the component responsible for the local radial acceleration and thus for the relevant part of the radiation emission by a relativistic electron of charge $e$, mass $m$ and velocity $v$. In a simplified picture, one can suppose a local circular trajectory for a relativistic particle and can assume a relativistic centripetal force equal to the magnitude of the perpendicular Lorentz force acting on the particle:

$$\frac{\gamma m v^{2}}{\rho} = e\left|\left(\mathbf{E} + \frac{\mathbf{v}}{c}\times\mathbf{B}\right)_{\perp}\right|$$

where $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic fields respectively, $v$ is the magnitude of the electron velocity, $\rho$ the local radius of curvature and $\gamma$ the Lorentz factor. This equation defines a simple dependence of the local radius of curvature on the particle velocity and on the electromagnetic fields felt by the particle. Since the motion of the particle is relativistic, the magnitude $v$ can be substituted with the speed of light $c$ to simplify the expression for $\rho$. Given an expression for $\rho$, the model given in Example 1: bending magnet can be used to approximately describe the classical limit of non-linear inverse Compton scattering. Thus, the power distribution in frequency of non-linear Thomson scattering by a relativistic charged particle can be seen as equivalent to the general case of synchrotron emission with the main parameters made explicitly dependent on the particle velocity and on the electromagnetic fields.
Electron quantum parameter
Increasing the intensity of the electromagnetic field and the particle velocity, the emission of photons with energy comparable to the electron one becomes more probable and non-linear inverse Compton scattering starts to progressively differ from the classical limit because of quantum effects such as photon recoil. A dimensionless parameter, called the electron quantum parameter, can be introduced to describe how far the physical conditions are from the classical limit and how much non-linear and quantum effects matter. This parameter is given by the following expression:

$$\chi = \frac{\gamma}{E_s}\sqrt{\left(\mathbf{E} + \boldsymbol{\beta}\times\mathbf{B}\right)^{2} - \left(\boldsymbol{\beta}\cdot\mathbf{E}\right)^{2}}$$

where $E_s = m^{2}c^{3}/(e\hbar) \approx 1.3 \times 10^{18}$ V/m is the Schwinger field. In the scientific literature, $\chi$ appears under different notations. The Schwinger field $E_s$, appearing in this definition, is a critical field capable of performing on electrons a work of $mc^{2}$ over a reduced Compton length $\lambda_C = \hbar/(mc)$, where $\hbar$ is the reduced Planck constant. The presence of such a strong field implies the instability of vacuum and it is necessary to explore non-linear QED effects, such as the production of pairs from vacuum. The Schwinger field corresponds to an intensity of the order of $10^{29}$ W/cm2. Consequently, $\chi$ represents the work, in units of $mc^{2}$, performed by the field over the Compton length, and in this way it also measures the importance of quantum non-linear effects, since it compares the field strength in the rest frame of the electron with that of the critical field. Non-linear quantum effects, like the production of an electron-positron pair in vacuum, occur above the critical field $E_s$; however, they can be observed also well below this limit, since ultra-relativistic particles with Lorentz factor equal to $E_s/E$ see fields of the order of $E_s$ in their rest frame. $\chi$ is also called the non-linear quantum parameter, as it is a measure of the magnitude of non-linear quantum effects. The electron quantum parameter is linked to the magnitude of the Lorentz four-force acting on the particle due to the electromagnetic field, and it is a Lorentz invariant. The four-force acting on the particle is equal to the derivative of the four-momentum with respect to proper time. Using this fact in the classical limit, the radiated power according to the relativistic generalization of the Larmor formula becomes:

$$P = \frac{2}{3}\frac{\alpha m^{2}c^{4}}{\hbar}\chi^{2}$$

As a result, emission is enhanced by higher values of $\chi$ and, therefore, some considerations can be made on the conditions for prolific emission by further evaluating this definition. The electron quantum parameter increases with the energy of the electron (direct proportionality to $\gamma$) and it is larger when the force exerted by the field perpendicularly to the particle velocity increases.
Plane wave case
Considering a plane wave, the electron quantum parameter can be rewritten using the relation between the electric and magnetic fields, $\mathbf{B} = \hat{\mathbf{k}}\times\mathbf{E}$, where $\hat{\mathbf{k}}$ is the unit wavevector of the plane wave. Inserting this expression in the formula for $\chi$ and using the vectorial identity $\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = \mathbf{b}\,(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}\,(\mathbf{a}\cdot\mathbf{b})$, one finds $\mathbf{E} + \boldsymbol{\beta}\times\mathbf{B} = \mathbf{E}\,(1 - \boldsymbol{\beta}\cdot\hat{\mathbf{k}}) + \hat{\mathbf{k}}\,(\boldsymbol{\beta}\cdot\mathbf{E})$. Since for a plane wave $\mathbf{E}\cdot\hat{\mathbf{k}} = 0$, the last two terms under the square root compensate each other, and $\chi$ reduces to:

$$\chi = \frac{\gamma E}{E_s}\left(1 - \boldsymbol{\beta}\cdot\hat{\mathbf{k}}\right)$$
In the simplified configuration of a plane wave impinging on the electron, higher values of the electron quantum parameter are obtained when the plane wave is counter-propagating with respect to the electron velocity.
Quantum effects
A full description of non-linear inverse Compton scattering must include some effects related to the quantization of light and matter. The principal ones are listed below.
Inclusion of the discretization of the emitted radiation, i.e. the introduction of photons with respect to the continuous description of the classical limit. This effect does not quantitatively change the emission features but changes how the emitted radiation is interpreted. A parameter equivalent to $\chi$ can be introduced for the photon of frequency $\omega$; it is called the photon quantum parameter:where is the photon four-wavevector and is the three-dimensional wavevector. In the limit in which the particle approaches the speed of light, the ratio between the photon and the electron quantum parameter is equal to $\hbar\omega/(\gamma m c^{2})$. From the frequency distribution of radiated energy one can get a rate of high-energy photon emission distributed in the photon quantum parameter, as a function of $\chi$ and $\gamma$, but still valid in the classical limit:
where $K$ stands for the McDonald functions (modified Bessel functions of the second kind); a numerical sketch of the corresponding classical spectral shape is given after this list. The mean energy of the emitted photon is given by . Consequently, a large Lorentz factor and intense fields increase the chance of producing high-energy photons. goes as because of this formula.
The effect of radiation reaction, due to photon recoil. The electron energy after the interaction process is reduced because part of it is delivered to the emitted photon, and the maximum energy achievable by the emitted photon cannot be higher than the electron kinetic energy. This effect is not taken into account in non-linear Thomson scattering, in which the electron energy is supposed to remain almost unaltered, as in elastic scattering. Quantum radiation reaction effects become important when the emitted photon energy approaches the electron energy. Since the mean emitted photon energy scales with $\chi$, if $\chi \ll 1$ the classical limit of NICS is a valid description, while for $\chi \gtrsim 1$ the energy of the emitted photon is of the order of the electron energy and photon recoil is very relevant.
The quantization of the motion of the electron and spin effects. An accurate description of non-linear inverse Compton scattering is made considering the electron dynamics described with the Dirac equation in presence of an electromagnetic field.
Emission description when and
When the incoming field is very intense, the interaction of the electron with the electromagnetic field is completely equivalent to the interaction of the electron with multiple photons, with no need to explicitly quantize the electromagnetic field of the incoming low-energy radiation. The interaction with the radiation field, i.e. the emitted photon, is instead treated with perturbation theory: the probability of photon emission is evaluated considering the transition between the states of the electron in the presence of the electromagnetic field. This problem has been solved primarily in the case in which electric and magnetic fields are orthogonal and equal in magnitude (crossed field); in particular, the case of a plane electromagnetic wave has been considered. Crossed fields represent in good approximation many existing fields, so the found solution can be considered quite general. The spectrum of non-linear inverse Compton scattering, obtained with this approach and valid for and , is:
where the parameter is now defined as:The result is similar to the classical one except for the different expression of . For it reduces to the classical spectrum (). Note that if ( or ) the spectrum must be zero because the energy of the emitted photon cannot be higher than the electron energy; in particular, it could not be higher than the electron kinetic energy .
The total power emitted in radiation is given by the integration in of the spectrum ():where the result of the integration of is contained in the last term:
This expression is equal to the classical one if is equal to one and it can be expanded in two limiting cases, near the classical limit and when quantum effects are of major importance:A related quantity is the rate of photon emission:where it is made explicit that the integration is limited by the condition that if no photons can be produced. This rate of photon emission depends explicitly on electron quantum parameter and on the Lorentz factor for the electron.
Applications
Non-linear inverse Compton scattering is an interesting phenomenon for all applications requiring high-energy photons, since NICS is capable of producing photons with energy comparable to the particle rest energy and higher. In the case of electrons, this means that it is possible to produce photons with MeV energy that can consequently trigger other phenomena such as pair production, Breit–Wheeler pair production, Compton scattering, and nuclear reactions.
In the context of laser-plasma acceleration, both relativistic electrons and laser pulses of ultra-high intensity can be present, setting favourable conditions for the observation and the exploitation of non-linear inverse Compton scattering for high-energy photon production, for diagnostics of electron motion, and for probing non-linear quantum effects and non-linear QED. Because of this, several numerical tools have been introduced to study non-linear inverse Compton scattering. For example, particle-in-cell codes for the study of laser-plasma acceleration have been developed with the capability of simulating non-linear inverse Compton scattering with Monte Carlo methods. These tools are used to explore the different regimes of NICS in the context of laser-plasma interaction.
See also
Compton scattering
Synchrotron radiation
Breit–Wheeler process
Quantum electrodynamics
Laser
References
External links
High-energy photon emission & radiation reaction in the PIC code SMILEI - Example of particle-in-cell code with a module for NICS simulation.
CORELS research - Example of research activity on NICS.
Scattering
Quantum electrodynamics | Non-linear inverse Compton scattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,148 | [
"Nuclear physics",
"Scattering",
"Condensed matter physics",
"Particle physics"
] |
67,945,769 | https://en.wikipedia.org/wiki/Giant%20Arc | The Giant Arc is a large-scale structure discovered in June 2021 that spans 3.3 billion light-years. The structure of galaxies exceeds the 1.2-billion-light-year threshold, challenging the cosmological principle that at large enough scales the universe is considered to be the same in every place (homogeneous) and in every direction (isotropic). The Giant Arc consists of galaxies, galactic clusters, as well as gas and dust. It is located 9.2 billion light-years away and stretches across roughly a 15th of the radius of the observable universe. It was discovered using data from the Sloan Digital Sky Survey by a team led by Alexia M. Lopez, a doctoral candidate in cosmology at the University of Central Lancashire.
It and the Big Ring may form part of a connected cosmological system.
If the Giant Arc were visible in the night sky it would form an arc occupying as much space as 20 full moons, or 10 degrees on the sky.
See also
Huge-LQG
Sloan Great Wall
CfA2 Great Wall
South Pole Wall
BOSS Great Wall
Hercules–Corona Borealis Great Wall
References
Galaxy filaments
Physical cosmology
Large-scale structure of the cosmos
Astronomical objects discovered in 2021 | Giant Arc | [
"Physics",
"Astronomy"
] | 246 | [
"Galaxy stubs",
"Astronomical sub-disciplines",
"Theoretical physics",
"Astronomy stubs",
"Astrophysics",
"Physical cosmology"
] |
67,948,345 | https://en.wikipedia.org/wiki/Gelesis100 | Gelesis100, sold under the brand name Plenity, is an oral hydrogel used to treat overweight and obesity. It absorbs water and expands in the stomach and small bowel thereby increasing feelings of fullness. Possible side effects include primarily gastrointestinal symptoms, such as diarrhea, abdominal distention, infrequent bowel movements, constipation, abdominal pain, and flatulence. It is contraindicated in pregnancy, chronic malabsorption syndromes, and cholestasis. The US Food and Drug Administration approved it in 2019 as a medical device. Gelesis100 was developed by the company Gelesis.
History
The US Food and Drug Administration approved the use of Gelesis100 in April 2019 as a medical device. Gelesis100 is the first treatment of its kind for overweight and obesity. In 2022, the American Gastroenterological Association published a guideline for the management of obesity, which recommended that the use of Gelesis100 be limited to clinical trials due to limited evidence.
Uses and effectiveness
Gelesis100 is used to treat obesity and overweight as an anti-obesity medication. Gelesis100 is taken as a pill before meals with water.
Gelesis100 has been criticized for its small impact on weight loss relative to side effects.
Mechanism and physiology
Gelesis100 is an oral superabsorbent hydrogel, which is produced from carboxymethylcellulose and citric acid. The cross-linked product forms a hydrophilic matrix, which absorbs water. Taken in capsule form by mouth, as Gelesis100 absorbs water, it expands in the stomach and small intestine. After absorbing water, a semisolid gel structure forms, which may promote satiety and result in weight loss via reduced caloric intake.
Contraindications
Gelesis100 is contraindicated in pregnancy, chronic malabsorption syndromes, and cholestasis.
Side effects
Side effects consist of minor gastrointestinal symptoms, including diarrhea, abdominal distention, infrequent bowel movements, constipation, abdominal pain, and flatulence. Gelesis100 is not associated with any severe adverse events. However, long-term safety data beyond 24 weeks is not available.
References
External links
Official website
Gastroenterology
Anti-obesity drugs
Bariatrics
Management of obesity
Medical devices | Gelesis100 | [
"Biology"
] | 493 | [
"Medical devices",
"Medical technology"
] |
69,346,844 | https://en.wikipedia.org/wiki/BOLD-100 | BOLD-100, or sodium trans-[tetrachlorobis (1H-indazole)ruthenate(III)], is a ruthenium-based anti-cancer therapeutic in clinical development. As of February 2024, BOLD-100 was being tested in a Phase 1b/2a clinical trial in 117 patients with advanced gastrointestinal cancers in combination with the chemotherapy regimen FOLFOX. BOLD-100 is being developed by Bold Therapeutics Inc.
Structure
BOLD-100 has an octahedral structure with two indazole ligands in trans positions and four chloride ligands in the equatorial plane. The primary cation for BOLD-100 is sodium. BOLD-100's impurity profile contains trace quantities of cesium.
BOLD-100 derivatives
BOLD-100 is sodium trans-[tetrachlorobis (1H-indazole) ruthenate(III)] with cesium as an intermediate salt form. BOLD-100 was developed from the closely related ruthenium molecule KP1339 (also known as IT-139 or NKP-1339) which is also sodium trans-[tetrachlorobis (1H-indazole) ruthenate(III)], but has different manufacturing methods and purity profiles. The names are often used interchangeably.
The precursor molecule to BOLD-100 is KP1019, which is the indazole salt equivalent. KP1019 previously entered Phase 1 clinical trials but development was halted due to low solubility in water, leading to the development of KP1339 and BOLD-100 which are readily soluble in water. KP1019 and KP1339 were invented by Dr. Keppler at the University of Vienna.
Synthesis
Synthesis of BOLD-100 is accomplished by treating RuCl3 with an excess of 1H-indazole in a concentrated aqueous HCl solution. The resulting indazolium salt is treated with CsCl, and a salt exchange is performed that converts the cesium salt to the final sodium salt. The drug product is prepared as a lyophilized powder for parenteral administration.
Mechanism of action
BOLD-100 kills cancer cells through multiple mechanisms, leading to cell death through apoptosis. BOLD-100 inhibits GRP78 and alters the unfolded protein response (UPR), while also inducing reactive oxygen species (ROS), leading to DNA damage.
BOLD-100 can synergize with cytotoxic chemotherapies and targeted agents to improve cancer cell death.
BOLD-100 also causes immunogenic cell death in colon cancer organoids.
Clinical development
The precursor molecule to BOLD-100, KP1339 was tested in a Phase 1 monotherapy clinical trial in heavily pretreated patients with advanced cancers. In this dose escalation study, KP1339 was administered to 46 patients with doses ranging from 20 mg/m2 to 780 mg/m2. KP1339 was well tolerated, with the treatment-emergent adverse events occurring in >20% of patients being nausea, fatigue, vomiting, anaemia and dehydration. These adverse events were mainly grade 2 or lower. In the 38 efficacy-evaluable patients, nine patients achieved stable disease and 1 patient had a durable partial response. 625 mg/m2 was determined to be the recommended Phase 2 dose.
BOLD-100 is being tested in a Phase 1b/2a clinical trial in combination with the chemotherapy regimen FOLFOX (5-fluorouracil, leucovorin, and oxaliplatin) for the treatment of gastrointestinal cancers, including gastric, pancreatic, colon and bile duct cancer. This trial includes a dose escalation phase followed by a cohort expansion, with 117 patients enrolled. Interim data presented at ASCO GI in January 2024 showed that BOLD-100 + FOLFOX was an active and well-tolerated treatment in a heavily pre-treated Stage IV mCRC study population of 36 patients. Progression-free survival, overall survival, and objective response rate demonstrated significant clinical benefit and improvement over the currently available therapies, with minimal treatment-emergent neuropathy or significant toxicities.
References
Experimental cancer drugs
Ruthenium complexes
Indazoles
Chlorides
Ruthenium(IV) compounds | BOLD-100 | [
"Chemistry"
] | 887 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
69,349,499 | https://en.wikipedia.org/wiki/Cerebrospinal%20fluid%20flow%20MRI | Cerebrospinal fluid (CSF) flow MRI is used to assess pulsatile CSF flow both qualitatively and quantitatively. Time-resolved 2D phase-contrast MRI with velocity encoding is the most common method for CSF analysis. CSF flow MRI detects the back-and-forth flow of cerebrospinal fluid that corresponds to vascular pulsations, arising mostly from the cardiac cycle acting on the choroid plexus. Bulk transport of CSF, characterized by CSF circulation through the central nervous system, is not used because it is too slow to assess clinically: CSF would have to pass through the brain's lymphatic system and be absorbed by arachnoid granulations.
Cerebrospinal fluid (CSF)
CSF is a clear fluid that surrounds the brain and spinal cord. The rate of CSF formation in humans is about 0.3–0.4 ml per minute and the total CSF volume is 90–150 ml in adults.
Traditionally, CSF was evaluated mainly using invasive procedures such as lumbar puncture, myelographies, radioisotope studies, and intracranial pressure monitoring. Recently, rapid advances in imaging techniques have provided non-invasive methods for flow assessment. One of the best-known methods is Phase-Contrast MRI and it is the only imaging modality for both qualitative and quantitative evaluation. The constant progress of magnetic resonance sequences gives a new opportunity to develop new applications and enhance unknown mechanisms of CSF flow.
Phase contrast MRI
The study of CSF flow became one of Phase-contrast MRI's major applications. The key to Phase-contrast MRI (PC-MRI) is the use of a bipolar gradient. A bipolar gradient has equal positive and negative magnitudes that are applied for the same time duration. The bipolar gradient in PC-MRI is put in a sequence after RF excitation but before data collection during the echo time of the generic MRI modality. The bipolar lobe must be applied in all three axes to image flow in all three directions.
Bipolar gradient
The basis of the bipolar gradient in PC-MRI is that, when using this gradient to change frequencies, there will be no phase shift for stationary protons because they experience equal positive and negative magnitudes. However, moving protons will undergo various degrees of phase shift because, along the gradient direction, their locations are constantly changing. This notion can be applied to monitor protons that are moving through a plane; from the phase contrast, the moving protons can be detected. In the equation for determining the phase, the local susceptibility influence is not removed by this bipolar gradient. Thus, it is necessary to apply a second sequence with the bipolar gradient inverted, and the signal must be subtracted from the original acquisition. The purpose of this step is to cancel out the signals of static areas and produce the characteristic static appearance in phase-contrast imaging.
$$\Delta\phi = \gamma\, v\, \Delta M_1$$

where $\Delta\phi$ = phase shift, $\gamma$ = gyromagnetic ratio, $v$ is the proton velocity, and $\Delta M_1$ is the change in the first gradient moment.
Equation 1. This is used to calculate the phase shift, which is directly proportional to the gradient strength through the change in gradient moment.
In phase-contrast imaging, there is a direct correlation between the degree of phase shift and the proton velocity in the direction of the gradient. However, because of the limitation of angles above 360°, the angle will wrap back to 0°, and only a specific range of proton velocities can be measured. For example, if a certain velocity leads to a 361° phase shift, we cannot distinguish this one from a velocity that causes a 1° phase shift. This phenomenon is called aliasing. Because both the forward direction velocity and the backward direction velocity are important, phase angles are usually within the range from −180° to 180°.
Using the bipolar gradient, it is possible to create a phase shift of spins that move with a specific velocity in the axis direction. Spins moving towards the bipolar gradient have a positive net phase shift, whereas spins moving away from the gradient have a negative net phase shift. Positive phase shifts are generally shown as white, while negative phase shifts are black. The net phase shift is directly proportional to both the time of bipolar gradient application and the flow velocity. This is why it is important to pick a velocity parameter matched to the expected velocities; this is denoted as velocity encoding.
Velocity encoding
Velocity encoding (VENC), measured in cm/s, is directly related to the properties of the bipolar gradient. The VENC is used as the highest estimated fluid velocity in PC-MRI. Underestimating VENC leads to aliasing artifacts, as any velocity slightly higher than the VENC value has the opposite sign phase shift. However, overestimating the VENC value leads to a lower acquired flow signal and a lower SNR. Typical CSF flow is 5–8 cm/s; however, patients with hyper-dynamic circulation often require higher VENCs of up to 25 cm/s. An accurate VENC value helps generate the highest signal possible.
$$\mathrm{VENC} = \frac{\pi}{\gamma\, \Delta M_1}$$

Equation 2. This is used to calculate VENC, which is inversely proportional to gradient strength. The variables are equivalent to those defined in Equation 1.
Images
PC-MRI is made of a magnitude and phase image for each plane and VENC obtained. In the magnitude image, cerebrospinal fluid (CSF) that is flowing is a brighter signal and stationary tissues are suppressed and visualized as black background. The phase image is phase-shift encoded, where white high signals represent forward flowing CSF and black low signals represent backwards flow. Since the phase image is phase-dependent, the velocity can be quantitatively estimated from the image. The background is mid-grey in color. There is also a re-phased image, which is the magnitude of flow of the compensated signal. It includes bright high signal flow and a background that is visible.
The phase-contrast velocity image has greater sensitivity to CSF flow than the magnitude image, since the velocity image reflects the phase shifts of the protons. There are two sets of phase-contrast images used in evaluating CSF flow. The first is imaging of the axial plane, with through-plane velocity that shows the craniocaudal direction of flow (from cranial to caudal end of the structure). The second image is in the sagittal plane, where the velocity is shown in-plane and images the craniocaudal direction. The first technique allows for flow quantification, while the second allows for qualitative assessment. Through-plane analysis is usually done perpendicular to the aqueduct and is more accurate for quantitative evaluation because this minimizes the partial volume effect, a main limitation of PC-MRI. The partial volume effect occurs when a voxel includes a boundary of static and moving materials; this leads to an overestimate of phase, which results in inaccurate velocities at material boundaries. These quantitative and qualitative CSF flow images can be acquired in about 8-10 minutes in addition to a regular MRI.
Choosing parameters
Factors that impact PC-MRI include VENC, repetition time (TR), and signal-to-noise ratio (SNR). To capture CSF flow of 5–8 cm/s, it is necessary to use a strong bipolar gradient. VENC is inversely proportional to magnitude and time of application. This means that a slower VENC value needs a higher magnitude bipolar gradient applied for a longer time. This results in a larger TR value; however, TR can only be increased to a certain extent, as a short repetition time is needed for higher temporal resolution since the data is plotted relative to a full cardiac cycle. Therefore, it is important to balance these parameters to maximize resolution.
Quantification
To quantify CSF flow, it is important to define the region of interest, which can be done using a cross-sectional area measurement, for example. Then, velocity versus time can be plotted. Velocity is typically pulsatile due to systole and diastole, and the area under the curve can yield the amount of flow. Systole produces forward flow, while diastole produces backwards flow.
Applications
Clinical
CSF flow can be used in diagnosing and treating aqueduct stenosis, normal pressure hydrocephalus, and Chiari malformation.
Aqueduct stenosis is the narrowing of the aqueduct of Sylvius, which blocks the flow of CSF, causing fluid buildup in the brain called hydrocephalus. Decreased aqueduct stroke volume and peak systolic velocity detected with CSF flow MRI can support the diagnosis of aqueduct stenosis.
For normal pressure hydrocephalus (NPH), CSF flow values and velocities are examined, which is important for diagnosis because NPH is idiopathic and has varying symptoms amongst patients, including urinary incontinence, dementia, and gait disturbances. Increased aqueduct CSF stroke volume and velocity are indicators of NPH. It is critically important to recognize and treat NPH because it is one of the few potentially treatable causes of dementia. The treatment of choice in NPH is ventriculoperitoneal shunt surgery (VPS). This treatment requires a VP shunt, a catheter with a valve aimed at providing one-way outflow of excess CSF from the ventricles. Patency control is obligatory because of possible complications such as infection and obstruction. Due to the development and spread of PC-MRI, it superseded spin-echo (SE) imaging, the traditional way to choose patients who might benefit from a VPS, and PC-MRI gradually became the most often used sequence to evaluate the CSF flow pattern in patients with NPH in relation to the cardiac cycle.
Chiari malformation (CMI) is when the cerebellar tonsils push through the foramen magnum of the skull. CSF flow varies based on the level of tonsil descent and the type of Chiari malformation, so the MRI can also be helpful in deciding the type of surgery to be performed and in monitoring progress. CSF flow is altered within different regions of the spinal cord and brain stem because of changes in the morphology of the posterior fossa and craniocervical junction, which makes PC-MRI a fundamental technique in CMI research studies and clinical evaluation.
Limitations
In PC-MRI, the quantitative analysis of stroke volume, mean peak velocity, and peak systolic velocity is possible only in the plane that is perpendicular to the unidirectional flow. Additionally, it is not possible to calculate multidirectional flow in multiaxial planes in 2D or 3D PC-MRI. This means that it is not a useful technique in clinical applications that have turbulent flow.
Future
Emerging 4D PC-MRI is showing promising results in the assessment of multidirectional flow. The 4D imaging modality adds time as a dimension to the 3D image. There are many applications of 4D PC-MRI, including the ability to examine blood flow patterns. This is particularly helpful for cardiac and aortic imaging, but the major limitation remains the image acquisition time.
References
Magnetic resonance imaging | Cerebrospinal fluid flow MRI | [
"Chemistry"
] | 2,311 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
69,349,975 | https://en.wikipedia.org/wiki/Tire%20model | In vehicle dynamics, a tire model is a type of multibody simulation used to simulate the behavior of tires. In current vehicle simulator models, the tire model is the weakest and most difficult part to simulate.
Tire models can be classified on their accuracy and complexity, in a spectrum that goes from more simple empirical models to more complex physical models that are theoretically grounded. Empirical models include Hans B. Pacejka's Magic Formula, while physically based models include brush models (although they are still quite simplified), and more complex and detailed physical models include RMOD-K, FTire and Hankook. Theoretically-based models can be in turn classified from more approximative to more complex ones, going for example from the solid model, to the rigid ring model, to the flexural (elastic) ring model (like the Fiala model), and the most complex ones based on finite element methods.
Brush models were very popular in the 1960s and '70s, after which Pacejka's models became widespread for many applications.
Classification by purpose
Driving dynamics models
Brush model (Dugoff, Fancher and Segel, 1970)
Hohenheim tire model (physical approach)
Pacejka Magic Formula Tire (Bakker, Nyborg and Pacejka, 1987)
TameTire (semi-physical approach)
TMeasy (semi-physical approach)
Stretched string tire model (Fiala 1954)
Comfort models
BRIT (Brush and Ring Tire)
CDTire (Comfort and Durability Tire)
Ctire (Comfort tire)
Dtire (Dynamical Nonlinear Spatial Tire Model)
FTire (Flexible Structure Tire Model)
RMOD-K (Comfort and Durability Tire)
SWIFT (Short Wavelength Intermediate Frequency Tire) (Besselink, Pacejka, Schmeitz, & Jansen, 2005)
Applications
Fully physics-based tire models have typically been too computationally expensive to run in real-time driving simulations. For example, since CDTire/3D, a physics-based tire model, cannot be run in real time, an equivalent semi-empirical "magic formula" type of model, called CDTire/Realtime, is typically derived from it through experiments and a regression algorithm for real-time applications.
In 2016, a slightly less accurate version of FTire, a physics-based tire model, was adapted to run in real time. This real-time version of FTire was shown in 2018 to run on a 2.7 GHz 12-core Intel Xeon E5 (2014, 22 nm process, about $2000), with 900 road/contact patch elements, at a sample frequency of 4.0 kHz including thermal and wear simulation.
The typical tire model sampling rate used in automotive simulators is 1 kHz. However, running at higher frequencies, like 2 kHz, might mitigate lowered numerical stability in some scenarios and might increase the model accuracy in the frequency domain above about 250 Hz.
See also
Contact patch
Self aligning torque
Slip (vehicle dynamics)
Thermal analysis
Heat transfer
References
Further reading
A new way of representing tyre data obtained from measurements in pure cornering and pure braking conditions.
Hans Pacejka (2012) Tire and Vehicle Dynamics, third edition (first edition 2002)
Lugner, P., & Plöchl, M. (2005). Tyre model performance test: first experiences and results. Vehicle System Dynamics, 43(sup1), 48-62.
Xu Wang (2020) Automotive Tire Noise and Vibrations: Analysis, Measurement and Simulation, ch.10
FTire physical tire model to run with rFpro driving simulation software, at rFpro official Vimeo and youtube channels, Jan 29, 2020
Romano, L., Bruzelius, F., & Jacobson, B. (2020) Brush tyre models for large camber angles and steering speeds, in Vehicle System Dynamics, 1-52.
VEHICLE DYNAMICS LIBRARY OFFERS EXPANDED SUPPORT FOR COSIN’S FTIRE MODEL, MODELON, AUGUST 1, 2017
Février, P., Hague, O. B., Schick, B., & Miquet, C. (2010) Advantages of a thermomechanical tire model for vehicle dynamics, ATZ worldwide, 112(7), 33-37.
External links
Tire models at Project Chrono
Tires For Heavy Trucks by PneusQuebec
Tire Modeling; Extracting Results from a Large Data Set at YouTube
Tires
Automotive software
Driving simulators
Racing simulators
Simulation software
Vehicle dynamics
Computational physics
Dynamical systems
de:Reifenmodell | Tire model | [
"Physics",
"Mathematics",
"Technology"
] | 924 | [
"Driving simulators",
"Computational physics",
"Mechanics",
"Real-time simulation",
"Dynamical systems"
] |
69,352,620 | https://en.wikipedia.org/wiki/Rank-width | Rank-width is a graph width parameter used in graph theory and parameterized complexity, and defined using linear algebra.
It is defined from hierarchical clusterings of the vertices of a given graph, which can be visualized as ternary trees having the vertices as their leaves. Removing any edge from such a tree disconnects it into two subtrees and partitions the vertices into two subsets. The graph edges that cross from one side of the partition to the other can be described by a biadjacency matrix; for the purposes of rank-width, this matrix is defined over the finite field GF(2) rather than using real numbers. The rank-width of a graph is the maximum of the ranks of the biadjacency matrices, for a clustering chosen to minimize this maximum.
Rank-width is closely related to clique-width: if $r$ is the rank-width and $k$ the clique-width of a graph, then $r \le k \le 2^{r+1} - 1$. However, clique-width is NP-hard to compute for graphs of large clique-width, and its parameterized complexity is unknown. In contrast, testing whether the rank-width is at most a constant takes polynomial time, and even when the rank-width is not constant it can be approximated, with a constant approximation ratio, in polynomial time. For this reason, rank-width can be used as a more easily computed substitute for clique-width.
An example of a family of graphs with high rank-width is provided by the square grid graphs. For an $n \times n$ grid graph, the rank-width is exactly $n - 1$.
References
Graph minor theory
Linear algebra | Rank-width | [
"Mathematics"
] | 327 | [
"Graph theory stubs",
"Graph theory",
"Mathematical relations",
"Linear algebra",
"Graph minor theory",
"Algebra"
] |
65,175,699 | https://en.wikipedia.org/wiki/Glossary%20of%20microelectronics%20manufacturing%20terms | Glossary of microelectronics manufacturing terms
This is a list of terms used in the manufacture of electronic micro-components. Many of the terms are already defined and explained in Wikipedia; this glossary is for looking up, comparing, and reviewing the terms.
2.5D integration – an advanced integrated circuit packaging technology that bonds dies and/or chiplets onto an interposer for enclosure within a single package
3D integration – an advanced semiconductor technology that incorporates multiple layers of circuitry into a single chip, integrated both vertically and horizontally
3D-IC (also 3DIC or 3D IC) – Three-dimensional integrated circuit; an integrated circuit built with 3D integration
advanced packaging – the aggregation and interconnection of components before traditional packaging
ALD – see atomic layer deposition
atomic layer deposition (ALD) – chemical vapor deposition process by which very thin films of a controlled composition are grown
back end of line (BEoL) – wafer processing steps from the creation of metal interconnect layers through the final etching step that creates pad openings (see also front end of line, far back end of line, post-fab)
BEoL – see back end of line
bonding – any of several technologies that attach one electronic circuit or component to another; see wire bonding, thermocompression bonding, flip chip, hybrid bonding, etc.
breadboard – a construction base for prototyping of electronics
bumping – the formation of microbumps on the surface of an electronic circuit in preparation for flip chip assembly
carrier wafer – a wafer that is attached to dies, chiplets, or another wafer during intermediate steps, but is not a part of the finished device
chip – an integrated circuit; may refer to either a bare die or a packaged device
chip carrier – a package built to contain an integrated circuit
chiplet – a small die designed to be integrated with other components within a single package
chemical-mechanical polishing (CMP) – smoothing a surface with the combination of chemical and mechanical forces, using an abrasive/corrosive chemical slurry and a polishing pad
circuit board – see printed circuit board
class 10, class 100, etc. – a measure of the air quality in a cleanroom; class 10 means fewer than 10 airborne particles of size 0.5 μm or larger are permitted per cubic foot of air
cleanroom (clean room) – a specialized manufacturing environment that maintains extremely low levels of particulates
CMP – see chemical-mechanical polishing
copper pillar – a tall copper microbump, typically capped with solder, used in flip-chip interconnection; a variant with embedded thin-film thermoelectric material is known as a thermal copper pillar bump
deep reactive-ion etching (DRIE) – process that creates deep, steep-sided holes and trenches in a wafer or other substrate, typically with high aspect ratios
dicing – cutting a processed semiconductor wafer into separate dies
die – an unpackaged integrated circuit; a rectangular piece cut (diced) from a processed wafer
die-to-die (also die-on-die) stacking – bonding and integrating individual bare dies atop one another
die-to-wafer (also die-on-wafer) stacking – bonding and integrating dies onto a wafer before dicing the wafer
doping – intentional introduction of impurities into a semiconductor material for the purpose of modulating its properties
DRIE – see deep reactive-ion etching
e-beam – see electron-beam processing
EDA – see electronic design automation
electron-beam processing (e-beam) – irradiation with high energy electrons for lithography, inspection, etc.
electronic design automation (EDA) – software tools for designing electronic systems
etching (etch, etch processing) – chemically removing layers from the surface of a wafer during semiconductor device fabrication
fab – a semiconductor fabrication plant
fan-out wafer-level packaging – an extension of wafer-level packaging in which the wafer is diced, dies are positioned on a carrier wafer and molded, and then a redistribution layer is added
far back end of line (FBEoL) – after normal back end of line, additional in-fab processes to create RDL, copper pillars, microbumps, and other packaging-related structures (see also front end of line, back end of line, post-fab)
FBEoL – see far back end of line
FEoL – see front end of line
flip chip – interconnecting electronic components by means of microbumps that have been deposited onto the contact pads
front end of line (FEoL) – initial wafer processing steps up to (but not including) metal interconnect (see also back end of line, far back end of line, post-fab)
heterogeneous integration – combining different types of integrated circuitry into a single device; differences may be in fabrication process, technology node, substrate, or function
HIC – see hybrid integrated circuit
hybrid bonding – a permanent bond that combines a dielectric bond with embedded metal to form interconnections
hybrid integrated circuit (HIC) – a miniaturized circuit constructed of both semiconductor devices and passive components bonded to a substrate
IC – see integrated circuit
integrated circuit (IC) – a miniature electronic circuit formed by microfabrication on semiconducting material, performing the same function as a larger circuit made from discrete components
interconnect (n.) – wires or signal traces that carry electrical signals between the elements in an electronic device
interposer – a small piece of semiconductor material (glass, silicon, or organic) built to host and interconnect two or more dies and/or chiplets in a single package
lead – a metal structure connecting the circuitry inside a package with components outside the package
lead frame (or leadframe) – a metal structure inside a package that connects the chip to its leads
mask – see photomask
MCM – see multi-chip module
microbump – a very small solder ball that provides contact between two stacked physical layers of electronics
microelectronics – the study and manufacture (or microfabrication) of very small electronic designs and components
microfabrication – the process of fabricating miniature structures of sub-micron scale
Moore’s Law – an observation by Gordon Moore that the transistor count on ICs doubled approximately every two years (originally stated as every year), and the prediction that it will continue to do so
more than Moore – a catch-all phrase for technologies that attempt to bypass Moore’s Law, creating smaller, faster, or more powerful ICs without shrinking the size of the transistor
multi-chip module (MCM) – an electronic assembly integrating multiple ICs, dies, chiplets, etc. onto a unifying substrate so that they can be treated as one IC
nanofabrication – design and manufacture of devices with dimensions measured in nanometers
node – see technology node
optical mask – see photomask
package – a chip carrier; a protective structure that holds an integrated circuit and provides connections to other components
packaging – the final step in device fabrication, when the device is encapsulated in a protective package.
pad (contact pad or bond pad) – designated surface area on a printed circuit board or die where an electrical connection is to be made
pad opening – a hole in the final passivation layer that exposes a pad
parasitics (parasitic structures, parasitic elements) – unwanted intrinsic electrical elements that are created by proximity to actual circuit elements
passivation layer – an oxide layer that isolates the underlying surface from electrical and chemical conditions
PCB – see printed circuit board
photolithography – a manufacturing process that uses light to transfer a geometric pattern from a photomask to a photoresist on the substrate
photomask (optical mask) – an opaque plate with holes or transparencies that allow light to shine through in a defined pattern
photoresist – a light-sensitive material used in processes such as photolithography to form a patterned coating on a surface
pitch – the distance between the centers of repeated elements
planarization – a process that makes a surface planar (flat)
polishing – see chemical-mechanical polishing
post-fab – processes that occur after cleanroom fabrication is complete; performed outside of the cleanroom environment, often by another company
printed circuit board (PCB) – a board that supports electrical or electronic components and connects them with etched traces and pads
quilt packaging – a technology that makes electrically and mechanically robust chip-to-chip interconnections by using horizontal structures at the chip edges
redistribution layer (RDL) – an extra metal layer that makes the pads of an IC available in other locations of the chip
reticle – a photomask that patterns only a portion of the wafer in each exposure; used with steppers in photolithography for integrated circuit fabrication
RDL – see redistribution layer
semiconductor – a material with an electrical conductivity value falling between that of a conductor and an insulator; its resistivity falls as its temperature rises
silicon – the semiconductor material used most frequently as a substrate in electronics
silicon on insulator (SoI) – a layered silicon–insulator–silicon substrate
SiP – see system in package
SoC – see system on chip
SoI – see silicon on insulator
split-fab (split fabrication, split manufacturing) – performing FEoL wafer processing at one fab and BEoL at another
sputtering (sputter deposition) – a thin film deposition method that erodes material from a target (source) onto a substrate
stepper – a step-and-repeat projection exposure system used in photolithography
substrate – the semiconductor material underlying the circuitry of an IC, usually silicon
system in package (SiP) – a number of integrated circuits (chips or chiplets) enclosed in a single package that functions as a complete system
system on chip (SoC) – a single IC that integrates all or most components of a computer or other electronic system
technology node – an industry standard semiconductor manufacturing process generation defined by the minimum size of the transistor gate length
thermocompression bonding – a bonding technique where two metal surfaces are brought into contact with simultaneous application of force and heat
thin-film deposition – a technique for depositing a thin film of material onto a substrate or onto previously deposited layers; in IC manufacturing, the layers are insulators, semiconductors, and conductors
through-silicon via (TSV) – a vertical electrical connection that pierces the (usually silicon) substrate
trace (signal trace) – the microelectronic equivalent of a wire; a tiny strip of conductor (copper, aluminum, etc.) that carries power, ground, or signal horizontally across a circuit
TSV – see through-silicon via
via – a vertical electrical connection between layers in a circuit
wafer – a disk of semiconductor material (usually silicon) on which electronic circuitry can be fabricated
wafer-level packaging (WLP) – packaging ICs before they are diced, while they are still part of the wafer
wafer-to-wafer (also wafer-on-wafer) stacking – bonding and integrating whole processed wafers atop one another before dicing the stack into dies
wire bonding – using tiny wires to interconnect an IC or other semiconductor device with its package (see also thermocompression bonding, flip chip, hybrid bonding, etc.)
WLP – see wafer-level packaging
Microelectronics manufacturing
Semiconductor device fabrication
Electronics manufacturing
Semiconductors
Engineering
Wikipedia glossaries using unordered lists | Glossary of microelectronics manufacturing terms | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,351 | [
"Electrical resistance and conductance",
"Physical quantities",
"Microtechnology",
"Semiconductors",
"Semiconductor device fabrication",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Electronics manufacturing",
"Solid state engineering",
"Matter"
] |
65,183,190 | https://en.wikipedia.org/wiki/Nitrate%20chlorides | Nitrate chlorides are mixed anion compounds that contain both nitrate (NO3−) and chloride (Cl−) ions. Various compounds are known, including amino acid salts, and also complexes from iron group, rare-earth, and actinide metals. Complexes are not usually identified as nitrate chlorides, and would be termed chlorido nitrato complexes.
Formation
Nitrate chloride compounds may be formed by mixing solutions of chloride and nitrate salts, by adding nitric acid to a chloride salt solution, or by adding hydrochloric acid to a nitrate solution. Water is most commonly used as the solvent, but other solvents such as methylene dichloride, methanol, or ethanol can be used.
Minerals
List
References
Nitrates
Chlorides
Mixed anion compounds | Nitrate chlorides | [
"Physics",
"Chemistry"
] | 161 | [
"Matter",
"Chlorides",
"Inorganic compounds",
"Mixed anion compounds",
"Nitrates",
"Salts",
"Oxidizing agents",
"Ions"
] |
58,451,979 | https://en.wikipedia.org/wiki/Sea%20balls | Sea balls (also known as Aegagropila or Pillae marinae) are tightly packed balls of fibrous marine material, recorded from seashores. They vary in size but are generally up to in size. In Edgartown, Massachusetts, a longish sea ball around in diameter has been found. Others have been reported at Dingle Bay in Ireland and at Valencia, Spain. They may occur in the hundreds and are composed of plant material, mostly seagrass rhizome netting torn out by water movement.
In recent years they have been shown to contain more and more plastic marine debris and even microplastics.
Gallery
References
Aquatic ecology
Ecotoxicology
Ocean pollution
Oceanographical terminology
Waste | Sea balls | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 145 | [
"Ocean pollution",
"Water pollution",
"Materials",
"Ecosystems",
"Aquatic ecology",
"Waste",
"Matter"
] |
58,461,116 | https://en.wikipedia.org/wiki/Sodium%20hydrogenoxalate | Sodium hydrogenoxalate or sodium hydrogen oxalate is a chemical compound with the chemical formula NaHC2O4. It is an ionic compound: a sodium salt of oxalic acid (H2C2O4). It is an acidic salt, because it consists of sodium cations Na+ and hydrogen oxalate anions HC2O4−, in which only one acidic hydrogen atom of oxalic acid is replaced by sodium. The hydrogen oxalate anion can be described as the result of removing one hydrogen ion from oxalic acid, or of adding one to the oxalate anion C2O42−.
Properties
Hydrates
The compound is commonly encountered as the anhydrous form NaHC2O4 or as the monohydrate NaHC2O4·H2O. Both are colorless crystalline solids at ambient temperature.
The monohydrate can be obtained by evaporating a solution of the compound at room temperature.
The crystal structure of the monohydrate is triclinic (pinacoidal, space group P1̄). The lattice parameters are a = 650.3 pm, b = 667.3 pm, c = 569.8 pm, α = 85.04°, β = 110.00°, γ = 105.02°, and Z = 2. The hydrogen oxalate ions are linked end to end in infinite chains by hydrogen bonds (257.1 pm). The chains are cross-linked into layers both by O–H···O bonds from the water molecules (280.8 pm, 282.6 pm) and by ionic Na+···O bonds. These layers are in turn held together by further O–H···O bonds. The oxalate group is non-planar, with an angle of twist about the C–C bond of 12.9°.
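As a consistency check, the unit-cell volume follows from the standard triclinic formula (a worked example computed here from the parameters above, not a figure from the original structure report):

$$V = abc\,\sqrt{1-\cos^{2}\alpha-\cos^{2}\beta-\cos^{2}\gamma+2\cos\alpha\cos\beta\cos\gamma} \approx 6.503 \times 6.673 \times 5.698 \times \sqrt{0.824}\ \text{Å}^{3} \approx 224\ \text{Å}^{3},$$

i.e. roughly 112 Å³ per formula unit for Z = 2.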
Reactions
Upon being heated, sodium hydrogenoxalate converts to oxalic acid and sodium oxalate, the latter of which decomposes into sodium carbonate and carbon monoxide.
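Written as balanced equations, the decomposition sequence described above is (a sketch consistent with the prose; the exact temperatures and conditions are not specified here):

$$2\,\mathrm{NaHC_2O_4} \longrightarrow \mathrm{Na_2C_2O_4} + \mathrm{H_2C_2O_4}$$
$$\mathrm{Na_2C_2O_4} \longrightarrow \mathrm{Na_2CO_3} + \mathrm{CO}$$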
Toxicity
The health hazards posed by this compound are largely due to its acidity and to the toxic effects of oxalic acid and other oxalate or hydrogenoxalate salts, which can follow ingestion or absorption through the skin. The toxic effects include necrosis of tissues due to sequestration of calcium ions (Ca2+), and the formation of poorly soluble calcium oxalate stones in the kidneys that can obstruct the kidney tubules.
References
Organic sodium salts
Oxalates | Sodium hydrogenoxalate | [
"Chemistry"
] | 473 | [
"Organic sodium salts",
"Salts"
] |
78,117,494 | https://en.wikipedia.org/wiki/Disaster%20restoration | Disaster restoration refers to the process of repairing and restoring property damaged by natural disasters such as floods, hurricanes, wildfires, or earthquakes. It typically involves various services such as structural repairs and water damage restoration, fire damage restoration, mold remediation, and content restoration.
The industry
The disaster restoration industry, encompassing services such as fire damage repair and mold remediation, has experienced significant growth in recent decades due to a confluence of factors. Severe natural disasters, coupled with increasing development in disaster-prone areas, have created a steady demand for restoration services. While historically dominated by local family-owned businesses, the industry has witnessed a notable consolidation trend driven by private equity firms seeking to capitalize on its recession-proof nature.
Market size
The global post-storm remediation market is projected to expand from $70 billion in 2024 to $92 billion by 2029, reflecting the enduring demand for restoration services in the face of climate change and other environmental challenges.
References
Companies
Business services companies
Cleaning industry | Disaster restoration | [
"Chemistry"
] | 202 | [
"Cleaning",
"Surface science"
] |
78,118,609 | https://en.wikipedia.org/wiki/Cybersecurity%20engineering | Cybersecurity engineering is a technical discipline focused on the protection of systems, networks, and data from unauthorized access, cyberattacks, and other malicious activities. It applies engineering principles to the design, implementation, maintenance, and evaluation of secure systems, ensuring the integrity, confidentiality, and availability of information.
Given the rising costs of cybercrimes, which now amount to trillions of dollars in global economic losses each year, organizations are seeking cybersecurity engineers to safeguard their data, reduce potential damages, and strengthen their defensive security systems.
History
Cybersecurity engineering began to take shape as a distinct field in the 1970s, coinciding with the growth of computer networks and the Internet. Initially, security efforts focused on physical protection, such as safeguarding mainframes and limiting access to sensitive areas. However, as systems became more interconnected, digital security gained prominence.
In the 1970s, the introduction of the first public-key cryptosystems, such as the RSA algorithm, was a significant milestone, enabling secure communications between parties that did not share a previously established secret. During the 1980s, the expansion of local area networks (LANs) and the emergence of multi-user operating systems, such as UNIX, highlighted the need for more sophisticated access controls and system audits.
The Internet and the consolidation of security practices
In the 1990s, the rise of the Internet alongside the advent of the World Wide Web (WWW) brought new challenges to cybersecurity. The emergence of viruses, worms, and distributed denial-of-service (DDoS) attacks required the development of new defensive techniques, such as firewalls and antivirus software. This period marked the solidification of the information security concept, which began to include not only technical protections but also organizational policies and practices for risk mitigation.
Modern era and technological advances
In the 21st century, the field of cybersecurity engineering expanded to tackle sophisticated threats, including state-sponsored attacks, ransomware, and phishing. Concepts like layered security architecture and the use of artificial intelligence for threat detection became critical. The integration of frameworks such as the NIST Cybersecurity Framework emphasized the need for a comprehensive approach that includes technical defense, prevention, response, and incident recovery. Cybersecurity engineering has since expanded to encompass technical, legal, and ethical aspects, reflecting the increasing complexity of the threat landscape.
Core principles
Cybersecurity engineering is underpinned by several essential principles that are integral to creating resilient systems capable of withstanding and responding to cyber threats.
Risk management: involves identifying, assessing, and prioritizing potential risks to inform security decisions. By understanding the likelihood and impact of various threats, organizations can allocate resources effectively, focusing on the most critical vulnerabilities.
Defense in depth: advocates for a layered security approach, where multiple security measures are implemented at different levels of an organization. By using overlapping controls—such as firewalls, intrusion detection systems, and access controls—an organization can better protect itself against diverse threats.
Secure coding practices: emphasizes the importance of developing software with security in mind. Techniques such as input validation, proper error handling, and the use of secure libraries help minimize vulnerabilities, reducing the risk of exploitation in production environments (as illustrated in the sketch after this list).
Incident response and recovery: effective incident response planning is crucial for managing potential security breaches. Organizations should establish predefined response protocols and recovery strategies to minimize damage, restore systems quickly, and learn from incidents to improve future security measures.
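To make the secure-coding principle concrete, the following minimal Python sketch illustrates allow-list input validation and explicit error handling; the function, field names, and validation rules are hypothetical, not drawn from any particular standard:

```python
import re

# Allow-list pattern: letters, digits, underscore, 3-32 characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def register_user(username: str, age_text: str) -> dict:
    """Validate untrusted input before it reaches business logic."""
    if not USERNAME_RE.fullmatch(username):
        # Reject anything outside the allow-list rather than trying to sanitize it.
        raise ValueError("username must be 3-32 letters, digits, or underscores")
    try:
        age = int(age_text)
    except ValueError:
        # Fail with a clear, generic message; never echo raw input into errors.
        raise ValueError("age must be an integer") from None
    if not 0 < age < 150:
        raise ValueError("age out of range")
    return {"username": username, "age": age}
```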
Key areas of focus
Cybersecurity engineering works on several key areas. They start with secure architecture, designing systems and networks that integrate robust security features from the ground up. This proactive approach helps mitigate risks associated with cyber threats. During the design phase, engineers engage in threat modeling to identify potential vulnerabilities and threats, allowing them to develop effective countermeasures tailored to the specific environment. This forward-thinking strategy ensures that security is embedded within the infrastructure rather than bolted on as an afterthought.
Penetration testing is another essential component of their work. By simulating cyber attacks, engineers can rigorously evaluate the effectiveness of existing security measures and uncover weaknesses before malicious actors exploit them. This hands-on testing approach not only identifies vulnerabilities but also helps organizations understand their risk landscape more comprehensively.
Moreover, cybersecurity engineers ensure that systems comply with regulatory and industry standards, such as ISO 27001 and NIST guidelines. Compliance is vital not only for legal adherence but also for establishing a framework of best practices that enhance the overall security posture.
Technologies and tools
Firewalls and IDS/IPS
Firewalls, whether hardware or software-based, are vital components of a cybersecurity infrastructure, acting as barriers that control incoming and outgoing network traffic according to established security rules. By preventing unauthorized access, firewalls protect networks from potential threats. Complementing this, Intrusion Detection Systems (IDS) continuously monitor network traffic to detect suspicious activities, alerting administrators to potential breaches. Intrusion Prevention Systems (IPS) enhance these measures by not only detecting threats but also actively blocking them in real-time, creating a more proactive security posture.
Encryption
Encryption is a cornerstone of data protection, employing sophisticated cryptographic techniques to secure sensitive information. This process ensures that data is rendered unreadable to unauthorized users, safeguarding both data at rest—such as files stored on servers—and data in transit—like information sent over the internet. By implementing encryption protocols, organizations can maintain confidentiality and integrity, protecting critical assets from cyber threats and data breaches.
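As an illustration of encrypting data at rest, here is a minimal Python sketch using the third-party cryptography package's Fernet recipe (symmetric, authenticated encryption). This is one possible approach, not a prescribed implementation; a real deployment would obtain the key from a key-management system rather than generating it in place:

```python
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, obtain this from a key-management system
cipher = Fernet(key)

token = cipher.encrypt(b"patient record #1234")  # ciphertext is unreadable without the key
plaintext = cipher.decrypt(token)                # decryption also verifies integrity
assert plaintext == b"patient record #1234"
```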
Security Information and Event Management (SIEM)
SIEM systems play a crucial role in modern cybersecurity engineering by aggregating and analyzing data from various sources across an organization's IT environment. They provide a comprehensive overview of security alerts and events, enabling cybersecurity engineers to detect anomalies and respond to incidents swiftly. By correlating information from different devices and applications, SIEM tools enhance situational awareness and support compliance with regulatory requirements.
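The kind of correlation a SIEM performs can be sketched in a few lines of Python; the event format, addresses, and threshold below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical pre-parsed events aggregated from several log sources.
events = [
    {"src_ip": "203.0.113.7",  "action": "login", "result": "fail"},
    {"src_ip": "203.0.113.7",  "action": "login", "result": "fail"},
    {"src_ip": "198.51.100.2", "action": "login", "result": "ok"},
    {"src_ip": "203.0.113.7",  "action": "login", "result": "fail"},
]

THRESHOLD = 3  # alert once a single source accumulates this many failures

failures = Counter(e["src_ip"] for e in events
                   if e["action"] == "login" and e["result"] == "fail")
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip}")
```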
Vulnerability assessment tools
Vulnerability assessment tools are essential for identifying and evaluating security weaknesses within systems and applications. These tools conduct thorough scans to detect vulnerabilities, categorizing them based on severity. This prioritization allows cybersecurity engineers to focus on addressing the most critical vulnerabilities first, thus reducing the organization's risk exposure and enhancing overall security effectiveness.
Threat Detection and Response (TDR)
TDR solutions utilize advanced analytics to sift through vast amounts of data, identifying patterns that may indicate potential threats. Tools like Security Information and Event Management (SIEM) and User and Entity Behavior Analytics (UEBA) provide real-time insights into security incidents, enabling organizations to respond effectively to threats before they escalate.
Traffic control and Quality of Service (QoS)
Traffic control measures in cybersecurity engineering are designed to optimize the flow of data within networks, mitigating risks such as Distributed Denial of Service (DDoS) attacks. By utilizing technologies like Web Application Firewalls (WAF) and load balancers, organizations can ensure secure and efficient traffic distribution. Additionally, implementing Quality of Service (QoS) protocols prioritizes critical applications and services, ensuring they maintain operational integrity even in the face of potential security incidents or resource contention.
Endpoint detection and response (EDR) and extended detection and response (XDR)
EDR tools focus on monitoring and analyzing endpoint activities, such as those on laptops and mobile devices, to detect threats in real time. XDR expands on EDR by integrating multiple security products, such as network analysis tools, providing a more holistic view of an organization's security posture. This comprehensive insight aids in the early detection and mitigation of threats across various points in the network.
Standards and regulations
Various countries establish legislative frameworks that define requirements for the protection of personal data and information security across different sectors. In the United States, specific regulations play a critical role in safeguarding sensitive information. The Health Insurance Portability and Accountability Act (HIPAA) outlines stringent standards for protecting health information, ensuring that healthcare organizations maintain the confidentiality and integrity of patient data.
The Sarbanes-Oxley Act (SOX) sets forth compliance requirements aimed at enhancing the accuracy and reliability of financial reporting and corporate governance, thereby securing corporate data. Additionally, the Federal Information Security Management Act (FISMA) mandates comprehensive security standards for federal agencies and their contractors, ensuring a unified approach to information security across the government sector.
Globally, numerous other regulations also address data protection, such as the General Data Protection Regulation (GDPR) in the European Union, which sets a high standard for data privacy and empowers individuals with greater control over their personal information. These frameworks collectively contribute to establishing robust cybersecurity measures and promote best practices across various industries.
Education
A career in cybersecurity engineering typically requires a strong educational foundation in information technology or a related field. Many professionals pursue a bachelor's degree in cybersecurity or computer engineering which covers essential topics such as network security, cryptography, and risk management.
For those seeking advanced knowledge, a master's degree in cybersecurity engineering can provide deeper insights into specialized areas like ethical hacking, secure software development, and incident response strategies. Additionally, hands-on training through internships or lab experiences is highly valuable, as it equips students with practical skills essential for addressing real-world security challenges.
Continuous education is crucial in this field, with many engineers opting for certifications to stay current with industry trends and technologies. Security certifications are important credentials for professionals looking to demonstrate their expertise in cybersecurity practices. Key certifications include:
Certified Information Systems Security Professional (CISSP): Globally recognized for security professionals.
Certified Information Security Manager (CISM): Focuses on security management.
Certified Ethical Hacker (CEH): Validates skills in penetration testing and ethical hacking.
References
Computer engineering
Computer networks engineering
Cybersecurity engineering
Computer security
Engineering disciplines | Cybersecurity engineering | [
"Technology",
"Engineering"
] | 2,058 | [
"Cybersecurity engineering",
"Computer engineering",
"Computer networks engineering",
"nan",
"Electrical engineering"
] |
78,119,260 | https://en.wikipedia.org/wiki/Hyperpositive%20nonlinear%20effect | A hyperpositive nonlinear effect is a very specific case of a nonlinear effect. A nonlinear effect in asymmetric catalysis is a phenomenon in which the enantiopurity of the catalyst (or chiral auxiliary) is not proportional to the enantiopurity of the product obtained. These phenomena were rationalized in the mid-1980s by Henri B. Kagan, who proposed simple mechanistic models, supported by mathematical models, to model experimental curves.
In 1994, H. B. Kagan and collaborators proposed more elaborate models that more closely resembled the experimental results observed at the time. Using these models, the authors were able to make theoretical predictions about situations that had not been encountered experimentally. An example is a case “where the enantiomeric excess could take on much larger values for a partially resolved ligand than for an enantiomerically pure ligand”. The authors proposed the term “hyperpositive nonlinear effect” to characterize this situation.
This statement may seem somewhat implausible at first glance, but the possibility was observed experimentally 26 years later: the first experimental example of a hyperpositive nonlinear effect was described in 2020 by S. Bellemin-Laponnaz and colleagues, but the mechanism of this phenomenon turned out to be different from that originally proposed. This mechanism, which explains a hyperpositive nonlinear effect, has also been validated to explain cases of enantiodivergence.
References
Catalysis | Hyperpositive nonlinear effect | [
"Chemistry"
] | 304 | [
"Catalysis",
"Chemical kinetics"
] |
78,126,667 | https://en.wikipedia.org/wiki/Cm28 | Cm28, a scorpion toxin from Centruroides margaritatus, selectively blocks voltage-gated potassium channels KV1.2 and KV1.3 with high affinity. It also suppresses the activation of human CD4+ effector memory T cells, suggesting its potential as a therapeutic agent for autoimmune diseases. Phylogenetic analysis reveals that Cm28 belongs to a new α-KTx subfamily, highlighting its unique structural and functional properties for potential drug development.
Etymology
The peptide name "Cm28" is derived from the scorpion species Centruroides margaritatus and its molecular mass, which is estimated to be 2820 Daltons.
Sources
Cm28 was isolated from the venom of the scorpion Centruroides margaritatus. The venom was obtained by milking the animal via electrical stimulation.
Chemistry
Structure
Cm28 is a short peptide of the α-KTx subfamily, composed of 27 amino acid residues with six cysteines forming three disulfide bridges, a hallmark of this family. Defensins and venom toxins such as Cm28 share a structural similarity: both are cysteine-rich proteins with multiple disulfide bonds that help maintain their shape. This structural feature, often referred to as the CSα/β fold, is characterized by alternating alpha-helices and beta-sheets stabilized by disulfide bridges. The fold is essential for their ability to interact with and block ion channels, a function crucial both to immune defense (defensins) and to venom toxicity (neurotoxins).
Amino acid sequence
KCRECGNTSPSCYFSGNCVNGKCVCPA
Family
Phylogenetic analysis comparing the amino acid sequence of Cm28 with 75 other reported scorpion toxins places Cm28 in the α-KTx family; it has been given the systematic number α-KTx 32.1. Notably, Cm28 lacks the lysine–tyrosine functional dyad typically required for blocking KV channels.
The 3D model can be found on Swissmodel.
Target and mode of action
Cm28 is a potent inhibitor of voltage-gated potassium channels KV1.2 and KV1.3, with dissociation constants (Kd) of 0.96 nM and 1.3 nM, respectively. KV1.3 channels are essential for the activation and proliferation of TH17 cells, a T helper cell subset critical for immune responses, especially in autoimmune diseases. These channels regulate T cell proliferation and other signaling pathways necessary for T cell functioning. The binding of Cm28 to both KV1.2 and KV1.3 is reversible, allowing dynamic regulation of channel activity during immune responses.
It operates by physically blocking the pores of these channels, preventing potassium ions from passing through. Rather than altering the voltage-sensing domain, Cm28 interacts with the selectivity-filter region, disrupting ion flow without shifting the activation thresholds. This interaction highlights Cm28's precise targeting of the pore region, making it a highly selective blocker of KV1.2 and KV1.3 channels. The exact residues involved in blocking the selectivity filter are unknown.
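For simple one-to-one pore block, the reported dissociation constants translate into channel occupancy through the standard binding isotherm (a textbook relation, not a result specific to the Cm28 study):

$$f_{\mathrm{blocked}} = \frac{[\mathrm{T}]}{[\mathrm{T}] + K_d},$$

so at a toxin concentration of 1 nM, roughly $1/(1+0.96) \approx 51\%$ of KV1.2 channels and $1/(1+1.3) \approx 43\%$ of KV1.3 channels would be blocked.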
Toxicity
In toxicity assays, Cm28 did not compromise the viability of human CD4+ T cells, even at concentrations much higher than its binding affinity to KV1.3 channels. Specifically, after a 24-hour incubation period with 1.5 μM Cm28, the cytotoxicity of the peptide on quiescent and TCR-activated CD4+ T cells was less than 1%. This finding was confirmed by both lactate dehydrogenase (LDH) assays and flow cytometry using Zombie NIR dye to evaluate cell viability. Therefore, Cm28 demonstrates minimal cytotoxicity in vitro under the experimental conditions.
References
External links
https://swissmodel.expasy.org/repository/uniprot/C0HM22?csm=619126AE0255A0CA
Ion channel toxins
Neurotoxins
Scorpion toxins
Peptides | Cm28 | [
"Chemistry"
] | 848 | [
"Biomolecules by chemical classification",
"Molecular biology",
"Neurochemistry",
"Neurotoxins",
"Peptides"
] |
63,644,325 | https://en.wikipedia.org/wiki/Ceramic%20engine | A ceramic engine is an internal combustion engine made from specially engineered ceramic materials. Ceramic engines allow for the compression and expansion of gases at extremely high temperatures without loss of heat or engine damage. Proof-of-concept ceramic engines were popularized by successful studies in the early 1980s and 1990s. Under controlled laboratory conditions, ceramic engines outperformed traditional metal engines in terms of weight, efficiency, and performance. All-ceramic engines were seen as the next advancement in future engine technology, but have not yet entered the automobile market because of manufacturing and economic problems.
History
Research into more efficient diesel engines began after the 1970s energy crisis, which created a new market for fuel-efficient vehicles. A newly developed gas turbine engine design promised high thermal efficiency but needed a material that could withstand extremely high temperatures. The heat ruled out readily available materials such as metals, superalloys, and carbon composites. As a result, government-funded research facilities in the United States, Japan, Germany, and the United Kingdom experimented with replacing metal with ceramics. Ceramics' high heat resistance helped pave the way toward the first commercial use of gas turbine engines, the success of which led to the idea of an all-ceramic engine.
Between 1985 and 1989, Nissan, in collaboration with NGK, produced the world's first ceramic turbocharger, debuting it on the 1985 Fairlady Z 200ZR. Isuzu developed a diesel ceramic engine that used ceramic for the pistons, piston rings, and turbocharger wheels, as well as an engine with cylinder liners made of ceramic materials such as silicon nitride. Isuzu also used ceramics for the intake and exhaust valves, exhaust manifold, turbocharger housing, camshafts, heat insulation, and rocker arms.
Predictions for an adiabatic turbo-compound engine (a theoretical heat-efficient engine) were seen as plausible with the use of technical ceramic material. A 1987 technical paper by Roy Kamo predicted the mass production of such engines to occur in the year 2000. However, these predictions were made with the belief that ceramics would overcome "the design methodology, manufacturing process, machining cost, and mass production quality control needed for high volume production."
Currently, ceramic engines are not viable for mass production. Large parts, like the engine block, can be challenging to manufacture out of ceramics due to their brittleness and stiffness.
Applications
In 1982, Isuzu tested a car with an all-ceramic engine near Kinko Bay.
In 1988, Toyota introduced a ceramic engine into its Crown, as well as its GTV (Gas Turbine Vehicle) concept car.
Notes
Engines
Ceramics | Ceramic engine | [
"Physics",
"Technology"
] | 540 | [
"Physical systems",
"Machines",
"Engines"
] |
63,645,173 | https://en.wikipedia.org/wiki/Medical%20gown | Medical gowns are hospital gowns worn by medical professionals as personal protective equipment (PPE) in order to provide a barrier between patient and professional. Whereas patient gowns are flimsy, often with exposed backs and arms, PPE gowns, as seen below in the cardiac surgeon photograph, cover most of the wearer's exposed skin.
In several countries, PPE gowns for use in the COVID-19 pandemic came to resemble cleanroom suits as knowledge of best practices filtered up through the national bureaucracies. For example, the European norm-setting bodies CEN and CENELEC, in collaboration with the European Commissioner for the Internal Market, on 30 March 2020 made the relevant standards documents freely available in order "to tackle the severe shortage of protective masks, gloves and other products currently faced by many European countries. Providing free access to the standards will facilitate the work of the many companies wishing to reconvert their production lines in order to manufacture the equipment that is so urgently needed."
History
The concept of PPE for medical professionals was seen as early as the 17th-century plague doctor's outfit.
During the Ebola crisis of 2014, the WHO published a rapid advice guideline on PPE coveralls.
Types
Gown types are categorized into several levels of barrier protection; ANSI/AAMI PB70, for example, defines Levels 1 through 4.
Local variants
United States
In the United States, medical gowns are medical devices regulated by the Food and Drug Administration. FDA divides medical gowns into three categories. A surgical gown is intended to be worn by health care personnel during surgical procedures. Surgical isolation gowns are used when there is a medium to high risk of contamination and a need for larger critical zones of protection. Non-surgical gowns are worn in low or minimal risk situations.
Surgical and surgical isolation gowns are regulated by the FDA as a Class II medical device that require a 510(k) premarket notification, but non-surgical gowns are Class I devices exempt from premarket review. Surgical gowns only require protection of the front of the body due to the controlled nature of surgical procedures, while surgical isolation gowns and non-surgical gowns require protection over nearly the entire gown.
In 2004, the FDA recognized the ANSI/AAMI PB70:2003 standard on protective apparel and drapes for use in health care facilities. Surgical gowns must also conform to the ASTM F2407 standard for tear resistance, seam strength, lint generation, evaporative resistance, and water vapor transmission. Because surgical gowns are considered surface-contacting devices in contact with intact skin, the FDA recommends that cytotoxicity, sensitization, and irritation or intracutaneous reactivity be evaluated.
China
The First Affiliated Hospital of the Zhejiang University School of Medicine in Hangzhou, Zhejiang Province, People's Republic of China developed its own protocol and equipment during the early months of the COVID-19 pandemic. A screenshot of the cover of the Handbook of COVID-19 Prevention and Treatment shows two rows of medical personnel, each wearing PPE gowns, masks, hoods, and goggles.
During the COVID-19 pandemic in Wuhan, doctors were provided with full PPE gown suits as early as January 2020.
European Union
During the COVID-19 pandemic, the European Commissioner for the Internal Market on 30 March 2020 listed the applicable norms to help manufacturers re-convert their production lines:
Protective masks
EN 149:2009-08: Respiratory protective devices – Filtering half masks to protect against particles – Requirements, testing, marking
EN 14683:2019-10: Medical face masks – Requirements and test methods
Eye protection
EN 166:2002-04: Personal eye-protection – Specifications
Protective clothing
EN 14126:2004-01: Protective clothing – Performance requirements and test methods for protective clothing against infective agents
EN 14605:2009-08: Protective clothing against liquid chemicals – performance requirements for clothing with liquid-tight (Type 3) or spray-tight (Type 4) connections, including items providing protection to parts of the body only (Types PB [3] and PB [4])
EN ISO 13688:2013-12: Protective clothing – General requirements (ISO 13688:2013)
EN 13795-1:2019-06: Surgical clothing and drapes – Requirements and test methods – Part 1: Surgical drapes and gowns
EN 13795-2:2019-06: Surgical clothing and drapes – Requirements and test methods – Part 2: Clean air suits
Gloves
EN 455-1:2001-01: Medical gloves for single use – Part 1: Requirements and testing for freedom from holes
EN 455-2:2015-07: Medical gloves for single use – Part 2: Requirements and testing for physical properties
EN 455-3:2015-07: Medical gloves for single use – Part 3: Requirements and testing for biological evaluation
EN 455-4:2009-10: Medical gloves for single use – Part 4: Requirements and testing for shelf life determination
EN 420:2010-03: Protective gloves – General requirements and test methods
EN ISO 374-1:2018-10: Protective gloves against dangerous chemicals and micro-organisms – Part 1: Terminology and performance requirements for chemical risks
EN ISO 374-5:2017-03: Protective gloves against dangerous chemicals and micro-organisms – Part 5: Terminology and performance requirements for micro-organisms risks (ISO 374-5:2016)
Israel
As seen in the accompanying gallery figure, at least one Israeli hospital had access to full Tyvek PPE gowns as early as 17 March 2020 during the COVID-19 pandemic.
Italy
In an early April 2020 article, 20 doctors from across Italy described their experience with coronavirus patient care. Their findings are set out in a table entitled "Necessary personal protection equipment":
FFP2 facial mask, or an FFP3 facial mask for maneuvers at high risk of generating aerosolized particles
Disposable long sleeve waterproof coats, gowns, or Tyvek suits
Disposable double pair of nitrile gloves
Protective goggles or visors
Disposable head caps
Disposable long shoe covers
Alcoholic hand hygiene solution
Criticisms
In a May 2017 research article, several French scientists complained that there was little harmonization across Europe for the names of pathogens, and went on to describe the PPE norms and regulations in France for infectious diseases under BSL-3.
See also
Plague doctor costume, historical equivalent
Hazmat suit
Workplace hazard controls for COVID-19
References
Gowns
Medical equipment
Headgear
Occupational safety and health
Risk management in business
Industrial hygiene
Environmental social science
Working conditions
Personal protective equipment | Medical gown | [
"Engineering",
"Biology",
"Environmental_science"
] | 1,376 | [
"Personal protective equipment",
"Safety engineering",
"Medical equipment",
"Environmental social science",
"Medical technology"
] |
63,649,139 | https://en.wikipedia.org/wiki/Sum%20of%20residues%20formula | In mathematics, the residue formula says that the sum of the residues of a meromorphic differential form on a smooth proper algebraic curve vanishes.
Statement
In this article, X denotes a proper smooth algebraic curve over a field k. A meromorphic (algebraic) differential form ω has, at each closed point x in X, a residue, denoted res_x(ω). Since ω has poles at only finitely many points, the residue vanishes for all but finitely many points. The residue formula states:

$$\sum_{x \in X} \operatorname{res}_x(\omega) = 0.$$
Proofs
A geometric way of proving the theorem is to reduce it to the case where X is the projective line and to prove it there by explicit computation.
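For example, on the projective line with affine coordinate z, the form ω = dz/z has simple poles at 0 and ∞ only; substituting w = 1/z gives a minimal worked check of the formula:

$$\operatorname{res}_0\!\left(\frac{dz}{z}\right) = 1, \qquad \frac{dz}{z} = -\frac{dw}{w} \ \Rightarrow\ \operatorname{res}_\infty\!\left(\frac{dz}{z}\right) = -1,$$

so the residues sum to zero, as the theorem requires.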
Tate (1968) proves the theorem using a notion of traces for certain endomorphisms of infinite-dimensional vector spaces. The residue of a differential form can be expressed in terms of traces of endomorphisms on the fraction field of the completed local rings, which leads to a conceptual proof of the formula. A more recent exposition along similar lines, using the notion of Tate vector spaces more explicitly, has also been given.
References
Algebraic geometry
Algebraic curves
Differential forms | Sum of residues formula | [
"Mathematics",
"Engineering"
] | 228 | [
"Fields of abstract algebra",
"Tensors",
"Differential forms",
"Algebraic geometry"
] |
63,653,068 | https://en.wikipedia.org/wiki/HKUST-1 | HKUST-1 (HKUST ⇒ Hong Kong University of Science and Technology), also called MOF-199, is a material in the class of metal-organic frameworks (MOFs). Metal-organic frameworks are crystalline materials in which metals are linked by ligands (so-called linker molecules) to form repeating coordination motifs extending in three dimensions. The HKUST-1 framework is built up of dimeric metal units connected by benzene-1,3,5-tricarboxylate linker molecules. The paddlewheel unit is the structural motif commonly used to describe the coordination environment of the metal centers and is also called the secondary building unit (SBU) of the HKUST-1 structure. The paddlewheel is built up of four benzene-1,3,5-tricarboxylate linker molecules bridging two metal centers. In the hydrated state, which is usually found if the material is handled in air, one water molecule is coordinated to each of the two metal centers at the axial positions of the paddlewheel unit. After an activation process (heating, vacuum), these water molecules can be removed (dehydrated state), leaving the coordination site at the metal atoms unoccupied. This unoccupied coordination site is called a coordinatively unsaturated site (CUS) and can be accessed by other molecules.
Structural analogs
Monometallic HKUST-1 analogs
Cu2+ was used as the metal center in the first synthesized HKUST-1 material, but the HKUST-1 structure has also been obtained with other metals. Most of the metals used are in oxidation state +2, which results in a neutral overall framework. In the case of trivalent metals (oxidation state +3), the overall framework is positively charged and requires anions to compensate the charge and guarantee charge neutrality.
Mixed-metal HKUST-1 analogs
In addition to monometallic HKUST-1 analogs, several mixed-metal HKUST-1 materials were synthesized, in which two metals are incorporated into the framework structure at crystallographically equivalent positions. The incorporation of two metals can be achieved by using both metals for the synthesis (direct synthesis) or by using post-synthetic metal-exchange. For the post-synthetic metal exchange, a monometallic HKUST-1 material is synthesized in the first step. Subsequently, this monometallic HKUST-1 is suspended in a solution containing the second metal, which results in an exchange of metal centers in the framework leading to a mixed-metal HKUST-1.
Theoretically calculated HKUST-1 analogs
Beyond the analogs synthesized to date, several research groups have investigated the properties of the HKUST-1 structure by means of theoretical calculations. For this purpose, additional metal centers that have not been used in synthesis (e.g. Sc, V, Ti, W, Cd) were incorporated into the framework at the theoretical level. Theoretical studies on mixed-metal HKUST-1 containing Cu in combination with various other metals (e.g. W, Re, Os, Ir, Pt, Au) have also been reported, several combinations of which have not been synthesized.
References
Metal-organic frameworks
Copper(II) compounds | HKUST-1 | [
"Chemistry",
"Materials_science"
] | 683 | [
"Porous polymers",
"Metal-organic frameworks"
] |
63,655,804 | https://en.wikipedia.org/wiki/Driver-controlled%20operation | Driver-controlled operation is the operation of a train in which the driver carries out all the essential roles needed to operate the train itself. It differs from driver-only operation (DOO, also called one-person operation) in that other members of staff also work on board—for example, revenue collectors.
Currently, only around 30% of Britain's rail journeys are operated as either DCO or DOO; the remainder require a guard, so if no guard is available the service must be cancelled. Under DCO, only the unavailability of a driver would lead to the cancellation of a train.
Railways using DCO
A deal agreed between Greater Anglia (a train operating company) and the RMT union meant that all of its intercity and regional services would change to DCO. However, unlike other DCO arrangements in the UK, a guard could still operate the doors in exceptional circumstances and must still be present for the service to run. The only exceptions are intercity services between Liverpool Street and Ipswich, and regional services between Ely and Stansted Airport, where trains were already cleared to run without guards. Arriva Rail North had hoped to agree a similar deal, but this was not achieved; on 1 March 2020, the Department for Transport took over operations as Northern Trains, which is also looking to implement DCO, so it could be introduced in the future.
Other examples of DCO within the UK include Abellio ScotRail, and longer-distance Southern and Southeastern routes and services.
Merseyrail has implemented DCO on board its British Rail Class 777 fleet, requiring a signal from the train manager before the driver begins the door-closing process, following a deal reached with the RMT after disputes over Merseyrail's earlier plan to run the trains with DOO.
South Western Railway is planning to implement DCO on London suburban services once its new fleet of British Rail Class 701 trains arrives. The RMT has opposed these changes and has held strikes on many occasions, including 27 days of action in December 2019.
In April 2021, a deal was agreed between South Western Railway and the RMT. While South Western Railway claimed to have implemented DCO on its inner suburban routes, unlike at Southern, the guard would remain an essential crew member and would still be required on board.
References
Transport operations
Rail transport operations | Driver-controlled operation | [
"Physics"
] | 474 | [
"Physical systems",
"Transport",
"Transport operations"
] |
66,437,177 | https://en.wikipedia.org/wiki/QSO%20J0313%E2%88%921806 | QSO J0313−1806 was the most distant, and hence also the oldest, known quasar at z = 7.64 at the time of its discovery. In January 2021, it was identified as the most redshifted (highest z) known quasar, hosting the oldest known supermassive black hole (SMBH), of roughly 1.6 billion solar masses. The 2021 announcement paper described it as "the most massive SMBH at z > 7". This quasar beat the prior record-setting quasar, ULAS J1342+0928. In 2023, UHZ1 was discovered, setting a new record for the most distant quasar and eclipsing that of QSO J0313−1806.
One of the 2021 paper's authors, Feige Wang, said that the existence of a supermassive black hole so early in the history of the Universe posed problems for current theories of black hole formation, since "black holes created by the very first massive stars could not have grown this large in only a few hundred million years". The redshift z = 7.642 corresponds to a cosmic age of about 670 million years.
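The age quoted above can be reproduced from the redshift with a standard cosmology library; the sketch below assumes the third-party astropy package and Planck 2018 parameters, and the exact figure shifts slightly with the cosmology chosen:

```python
# Requires astropy: pip install astropy
from astropy.cosmology import Planck18

z = 7.642
age = Planck18.age(z)     # age of the universe at this redshift
print(age.to("Myr"))      # roughly 670 Myr under these parameters
```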
See also
Direct collapse black hole, a process by which black holes may form less than a few hundred million years after the Big Bang
List of the most distant astronomical objects
List of quasars
PSO J172.3556+18.7734
ULAS J1342+0928
References
Sources
Further reading
Astronomical objects discovered in 2021
Supermassive black holes
Quasars
Eridanus (constellation) | QSO J0313−1806 | [
"Physics",
"Astronomy"
] | 322 | [
"Black holes",
"Galaxy stubs",
"Unsolved problems in physics",
"Supermassive black holes",
"Constellations",
"Astronomy stubs",
"Eridanus (constellation)"
] |