id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
41,332,848 | https://en.wikipedia.org/wiki/N-Vinylacetamide | N-Vinylacetamide (NVA) is a non-ionic monomer. Copolymers of NVA and other monomers can exhibit practical characteristics beyond those common to existing hydrophilic polymers.
History
NVA is an amphipathic monomer. It was introduced and compounded in the U.S. in 1967. It was long regarded as a monomer that is difficult to polymerize; however, Showa Denko K.K. succeeded in its industrialization in 1997.
Properties
NVA is soluble in water, various organic solvents and liquid vinyl monomers. It is polymerizable by various radical polymerization processes, depending on the objective. Since NVA itself is a solvent, it can act as a dissolution agent for poorly soluble substances.
References
Acetamides
Amide solvents
Plasticizers
Monomers
Vinyl compounds | N-Vinylacetamide | [
"Chemistry",
"Materials_science"
] | 177 | [
"Monomers",
"Polymer chemistry"
] |
41,336,594 | https://en.wikipedia.org/wiki/Monochromatic%20wavelength%20dispersive%20x-ray%20fluorescence | Monochromatic wavelength dispersive x-ray fluorescence (MWD XRF) is an enhanced version of conventional wavelength-dispersive X-ray spectroscopy (WDXRF) elemental analysis. The key difference is that MWD XRF uses a doubly curved crystal X-ray optic between the X-ray source and the sample, resulting in monochromatic excitation. This additional optic creates a high-intensity X-ray beam with a small spot size without increasing the power of the X-ray source. An MWD XRF instrument is constructed from a low-power X-ray tube, a point-to-point focusing optic for excitation, a sample cell, a focusing optic that collects the fluorescence from the sample, and an X-ray detector. Because of the optic between the X-ray source and the sample, a monochromatic beam free of bremsstrahlung excites the sample, eliciting the secondary fluorescence X-rays needed for elemental analysis. By restricting the band of wavelengths used for excitation, a much higher signal-to-background ratio is achieved. This type of excitation allows much lower detection limits and faster measurement times.
References
X-ray spectroscopy | Monochromatic wavelength dispersive x-ray fluorescence | [
"Physics",
"Chemistry",
"Astronomy"
] | 257 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Astronomy stubs",
"X-ray spectroscopy",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
41,336,612 | https://en.wikipedia.org/wiki/Provisional%20Low%20Temperature%20Scale%20of%202000 | The Provisional Low Temperature Scale of 2000 (PLTS-2000) is an equipment calibration standard for making measurements of very low temperatures, in the range of 0.9 mK (millikelvin) to 1 K, adopted by the International Committee for Weights and Measures in October 2000. It is based on the melting pressure of solidified helium-3.
At these low temperatures, the melting pressure of helium-3 varies from about 2.9 MPa to nearly 4.0 MPa. At approximately 315 mK, the pressure reaches a minimum of 2.9 MPa. Although this non-monotonicity is a disadvantage, since two different temperatures can give the same pressure, the scale is otherwise robust because the melting pressure of helium-3 is insensitive to many experimental factors.
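The non-monotonicity has a practical consequence: a measured melting pressure alone does not determine the temperature, so the branch relative to the 315 mK minimum must be known. The sketch below illustrates this with a hypothetical quadratic pressure curve; only the minimum (about 2.93 MPa near 315 mK) mirrors the text, and it is not the official PLTS-2000 polynomial, whose tabulated coefficients are not reproduced here.

```python
# Toy illustration of the PLTS-2000 non-monotonicity problem. The curve is a
# hypothetical quadratic stand-in, NOT the official PLTS-2000 relation.

P_MIN, T_MIN = 2.93e6, 0.315  # Pa, K (illustrative values near the real minimum)

def melting_pressure(temperature_k):
    """Hypothetical helium-3 melting-pressure curve with a minimum at T_MIN."""
    return P_MIN + 1.0e7 * (temperature_k - T_MIN) ** 2

def temperature(p, branch, tol=1e-9):
    """Invert the curve by bisection on one monotonic branch.
    branch='low'  : 0.9 mK .. 315 mK, where p decreases with T
    branch='high' : 315 mK .. 1 K,    where p increases with T"""
    a, b = (0.0009, T_MIN) if branch == "low" else (T_MIN, 1.0)
    while b - a > tol:
        mid = (a + b) / 2
        # On the high branch, pressure above target means temperature too high;
        # on the low branch, the relation is reversed.
        if (melting_pressure(mid) > p) == (branch == "high"):
            b = mid
        else:
            a = mid
    return (a + b) / 2

p = melting_pressure(0.5)        # one pressure ...
t_lo = temperature(p, "low")     # ... maps to two temperatures,
t_hi = temperature(p, "high")    # so the branch must be known.
```

The same pressure is recovered on both branches, which is exactly why the scale must specify which side of the minimum a measurement lies on.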
See also
International Temperature Scale of 1990 (ITS-90) — the calibration standard used for all temperatures above 0.6 K
Leiden scale
References
Temperature
Scales of temperature | Provisional Low Temperature Scale of 2000 | [
"Physics",
"Chemistry",
"Mathematics"
] | 203 | [
"Scales of temperature",
"Temperature",
"Scalar physical quantities",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Quantity",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
57,553,626 | https://en.wikipedia.org/wiki/Glycerol%202-phosphate | Glycerol 2-phosphate is the conjugate base of the phosphoric acid ester of glycerol at the 2-position. It is commonly known as β-glycerophosphate or BGP. Unlike glycerol 1-phosphate and glycerol 3-phosphate, this isomer is not chiral. It is also less common.
Applications
β-Glycerophosphate is an inhibitor of the enzyme serine-threonine phosphatase. It is often used in combination with other phosphatase/protease inhibitors for broad spectrum inhibition.
β-Glycerophosphate is also used to drive osteogenic differentiation of bone marrow stem cells in vitro.
β-Glycerophosphate is used to buffer M17 media for Lactococcus culture in recombinant protein expression.
Notes
Organophosphates | Glycerol 2-phosphate | [
"Chemistry",
"Biology"
] | 185 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
57,554,421 | https://en.wikipedia.org/wiki/Pomeranchuk%20instability | The Pomeranchuk instability is an instability in the shape of the Fermi surface of a material with interacting fermions, causing Landau’s Fermi liquid theory to break down. It occurs when a Landau parameter in Fermi liquid theory has a sufficiently negative value, causing deformations of the Fermi surface to be energetically favourable. It is named after the Soviet physicist Isaak Pomeranchuk.
Introduction: Landau parameter for a Fermi liquid
In a Fermi liquid, renormalized single electron propagators (ignoring spin) are
where capital momentum letters denote four-vectors and the Fermi surface has zero energy; poles of this function determine the quasiparticle energy-momentum dispersion relation. The four-point vertex function describes the diagram with two incoming electrons of momentum and two outgoing electrons of momentum and and amputated external lines: Call the momentum transfer When is very small (the regime of interest here), the T-channel dominates the S- and U-channels. The Dyson equation then offers a simpler description of the four-point vertex function in terms of the 2-particle irreducible which corresponds to all diagrams connected after cutting two electron propagators: Solving for shows that, in the similar-momentum, similar-wavelength limit the former tends towards an operator satisfying where The normalized Landau parameter is defined in terms of as where is the density of Fermi surface states. In the Legendre eigenbasis the parameter admits the expansion Pomeranchuk's analysis revealed that each cannot be very negative.
Stability criterion
In a 3D isotropic Fermi liquid, consider small density fluctuations around the Fermi momentum where the shift in Fermi surface expands in spherical harmonics as The energy associated with a perturbation is approximated by the functional where Assuming , these terms are, and so
When the Pomeranchuk stability criterion is satisfied, this value is positive, and the Fermi surface distortion requires energy to form. Otherwise, the distortion releases energy and will grow without bound until the model breaks down. That process is known as the Pomeranchuk instability.
In 2D, a similar analysis, with circular wave fluctuations instead of spherical harmonics and Chebyshev polynomials instead of Legendre polynomials, shows the Pomeranchuk constraint to be In anisotropic materials, the same qualitative result is true—for sufficiently negative Landau parameters, unstable fluctuations spontaneously destroy the Fermi surface.
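In the standard isotropic 3D treatment, the stability criterion above is commonly quoted as 1 + F_l/(2l + 1) > 0, i.e. F_l > −(2l + 1), for every Legendre component. A minimal numeric check of this textbook form (the parameter values below are illustrative only, not data from any material):

```python
# Check the standard 3D Pomeranchuk criterion 1 + F_l / (2l + 1) > 0 for a
# list of Landau parameters [F_0, F_1, ...]. Values are illustrative.

def is_stable(landau_params):
    """True iff every Legendre component satisfies F_l > -(2l + 1)."""
    return all(1 + f_l / (2 * l + 1) > 0 for l, f_l in enumerate(landau_params))

stable = is_stable([0.5, -1.2, 2.0])         # F_1 = -1.2 > -3  -> stable
unstable = not is_stable([0.5, -1.2, -5.5])  # F_2 = -5.5 < -5  -> unstable
```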
The point at which the stability criterion is exactly saturated is of much theoretical interest, as it indicates a quantum phase transition from a Fermi liquid to a different state of matter. Above zero temperature, a quantum critical state exists.
Physical quantities with manifest Pomeranchuk criterion
Many physical quantities in Fermi liquid theory are simple expressions of components of Landau parameters. A few standard ones are listed here; they diverge or become unphysical beyond the quantum critical point.
Isothermal compressibility:
Effective mass:
Speed of first sound:
Unstable zero sound modes
The Pomeranchuk instability manifests in the dispersion relation for zero sound, which describes how localized fluctuations of the momentum density function propagate through space and time.
Just as the quasiparticle dispersion is given by the pole of the one-particle propagator, the zero sound dispersion relation is given by the pole of the T-channel of the vertex function near small Physically, this describes the propagation of an electron hole pair, which is responsible for the fluctuations in
From the relation and ignoring the contributions of for the zero sound spectrum is given by the four-vectors satisfying Equivalently, where and
When the equation () can be implicitly solved for a real solution , corresponding to a real dispersion relation of oscillatory waves.
When the Landau parameter lies between −1 and 0, the solution is purely imaginary, corresponding to an exponential change in amplitude over time: the imaginary part is negative, giving damped waves of zero sound. But for a Landau parameter below −1 and sufficiently small momenta, the imaginary part is positive, implying exponential growth of any low-momentum zero sound perturbation.
Nematic phase transition
Pomeranchuk instabilities in non-relativistic systems at l = 1 cannot exist. However, instabilities at l = 2 have interesting solid state applications. From the form of the l = 2 spherical harmonics (or circular harmonics in 2D), the Fermi surface is distorted into an ellipsoid (or ellipse). Specifically, in 2D, the quadrupole moment order parameter has a nonzero vacuum expectation value in the Pomeranchuk instability. The Fermi surface acquires an eccentricity and a spontaneous major-axis orientation. Gradual spatial variation in this orientation forms gapless Goldstone modes, producing a nematic liquid statistically analogous to a liquid crystal. Oganesyan et al.'s analysis of a model interaction between quadrupole moments predicts damped zero sound fluctuations of the quadrupole moment condensate for waves oblique to the ellipse axes.
The 2D square-lattice tight-binding Hubbard Hamiltonian with next-to-nearest-neighbour interaction has been found by Halboth and Metzner to display an instability in the susceptibility to d-wave fluctuations under renormalization-group flow. Thus, the Pomeranchuk instability is suspected to explain the experimentally measured anisotropy in cuprate superconductors such as LSCO and YBCO.
See also
Kohn anomaly
Pomeranchuk's theorem
Lindhard theory
References
Fermions | Pomeranchuk instability | [
"Physics",
"Materials_science"
] | 1,089 | [
"Fermions",
"Subatomic particles",
"Condensed matter physics",
"Matter"
] |
57,560,504 | https://en.wikipedia.org/wiki/Hitchhiker%201 | Hitchhiker 1 (or Hitchhiker P-11 4201) was a satellite launched by the U.S. Air Force on June 27, 1963, with the aim of studying and measuring cosmic radiation. It was the first successful satellite of the P-11 program, following the failure of the first Hitchhiker satellite in March 1963.
Instruments
1 Geiger tube (40-4 MeV)
1 Faraday cup plasma
1 Electron detector (0.3-5.0 MeV)
1 Proton detector (0.7-5.3 MeV)
2 electrostatic analysers (4-100 keV)
See also
Corona program
References
1963 in spaceflight
Derelict satellites orbiting Earth | Hitchhiker 1 | [
"Astronomy"
] | 146 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
51,380,802 | https://en.wikipedia.org/wiki/Eulerian%20coherent%20structure | In applied mathematics, objective Eulerian coherent structures (OECSs) are the surfaces or curves that exert the strongest instantaneous influence on nearby trajectories in a dynamical system over short time-scales; they are the short-time limit of Lagrangian coherent structures (LCSs). Such influence can be of different types, but OECSs invariably create a short-term coherent trajectory pattern for which they serve as a theoretical centerpiece. While LCSs are intrinsically tied to a specific finite time interval, OECSs can be computed at any time instant, regardless of the multiple and generally unknown time scales of the system.
In observations of tracer patterns in nature, one readily identifies short-term variability in material structures, such as emerging and dissolving coherent features. However, it is often the underlying structure creating these features that is of interest. While individual tracer trajectories forming coherent patterns are generally sensitive to changes in their initial conditions and the system parameters, OECSs are robust and reveal the instantaneous, time-varying skeleton of complex dynamical systems. Although OECSs are defined for general dynamical systems, their role in creating coherent patterns is perhaps most readily observable in fluid flows. OECSs are therefore suitable for a number of applications, ranging from flow control to environmental assessment, such as now-casting or short-term forecasting of pattern evolution, where quick operational decisions need to be made. Examples include floating debris, oil spills, surface drifters, and control of unsteady flow separation.
References
Dynamical systems
Fluid dynamics | Eulerian coherent structure | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 329 | [
"Dynamical systems",
"Chemical engineering",
"Mechanics",
"Piping",
"Fluid dynamics"
] |
39,939,294 | https://en.wikipedia.org/wiki/Magnesium%20chromate | Magnesium chromate is a chemical compound with the formula MgCrO4. It is a yellow, odorless, water-soluble salt with several important industrial uses. This chromate can be manufactured as a powder.
History
Before 1940, the literature about magnesium chromate and its hydrates was sparse, but studies starting in that year looked at its properties and solubility.
Uses
It is available commercially as a variety of powders, from nanoscale to micron-sized, in either anhydrous or hydrated form.
As a hydrate, it is useful as a corrosion inhibitor and pigment, or as an ingredient in cosmetics.
In 2011, an undecahydrate (containing 11 molecules of water) of this compound was discovered by scientists at University College London.
Hazards
Magnesium chromate hydrate should be stored at room temperature; it has no current therapeutic use. It is a confirmed carcinogen and can cause acute dermatitis, as well as possible kidney and liver damage if inhaled, so it should be treated as hazardous waste.
References
Magnesium compounds
Chromates | Magnesium chromate | [
"Chemistry"
] | 225 | [
"Chromates",
"Oxidizing agents",
"Salts"
] |
39,943,982 | https://en.wikipedia.org/wiki/Ethernet%20train%20backbone | An Ethernet train backbone (ETB) is a train communication network based on Ethernet technology, standardised in IEC 61375-2-5. Like the Wire Train Bus (WTB), it is a train-wide communication backbone.
Notes and references
See also
Ethernet consist network (ECN)
External links
Industrial Ethernet
Network topology
Networking standards
Ethernet standards | Ethernet train backbone | [
"Mathematics",
"Technology",
"Engineering"
] | 73 | [
"Networking standards",
"Network topology",
"Computer standards",
"Computer networks engineering",
"Topology",
"Industrial Ethernet"
] |
39,944,913 | https://en.wikipedia.org/wiki/Integral%20closure%20of%20an%20ideal | In algebra, the integral closure of an ideal I of a commutative ring R is the set of all elements r in R that are integral over I: there exist a_i in I^i (for i = 1, …, n) such that r^n + a_1 r^(n−1) + ⋯ + a_(n−1) r + a_n = 0.
It is similar to the integral closure of a subring. For example, if R is a domain, an element r in R belongs to the integral closure of I if and only if there is a finitely generated R-module M, annihilated only by zero, such that rM ⊆ IM. It follows that the integral closure of I is an ideal of R (in fact, the integral closure of an ideal is always an ideal; see below). I is said to be integrally closed if it equals its integral closure.
The integral closure of an ideal appears in a theorem of Rees that characterizes an analytically unramified ring.
Examples
In C[x, y], the element xy is integral over the ideal I = (x^2, y^2): it satisfies the equation t^2 − x^2 y^2 = 0, where the coefficient x^2 y^2 is in I^2.
Radical ideals (e.g., prime ideals) are integrally closed. The intersection of integrally closed ideals is integrally closed.
In a normal ring, for any non-zerodivisor x and any ideal I, the integral closure of xI is x times the integral closure of I. In particular, in a normal ring, a principal ideal generated by a non-zerodivisor is integrally closed.
Let R = k[x_1, …, x_n] be a polynomial ring over a field k. An ideal I in R is called monomial if it is generated by monomials, i.e., by products of the form x_1^(a_1) ⋯ x_n^(a_n). The integral closure of a monomial ideal is monomial.
Structure results
Let R be a ring. The Rees algebra R[It] can be used to compute the integral closure of an ideal. The structure result is the following: the integral closure of R[It] in R[t], which is graded, has as its degree-n piece the integral closure of I^n. In particular, the integral closure of I is an ideal, and taking the integral closure is idempotent; i.e., the integral closure of an ideal is integrally closed. It also follows that the integral closure of a homogeneous ideal is homogeneous.
The following type of result is called the Briançon–Skoda theorem: let R be a regular ring and I an ideal generated by l elements. Then the integral closure of I^(n+l) is contained in I^(n+1) for any n ≥ 0.
A theorem of Rees states: let (R, m) be a noetherian local ring. Assume it is formally equidimensional (i.e., the completion is equidimensional). Then two m-primary ideals have the same integral closure if and only if they have the same multiplicity.
See also
Dedekind–Kummer theorem
Notes
References
Eisenbud, David, Commutative Algebra with a View Toward Algebraic Geometry, Graduate Texts in Mathematics, 150, Springer-Verlag, 1995.
Further reading
Irena Swanson, Rees valuations.
Commutative algebra
Ring theory
Algebraic structures | Integral closure of an ideal | [
"Mathematics"
] | 540 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures",
"Commutative algebra"
] |
39,945,265 | https://en.wikipedia.org/wiki/Production%20flow%20analysis | In operations management and industrial engineering, production flow analysis refers to methods which share the following characteristics:
Classification of machines
Technological cycles information control
Generating a binary product-machines matrix (1 if a given product requires processing in a given machine, 0 otherwise)
Methods differ in how they group machines together with products. These groupings play an important role in designing manufacturing cells.
Rank order clustering
Given a binary product-machines n-by-m matrix B = (b_ip), rank order clustering is an algorithm characterized by the following steps:
For each row i compute the number w_i = Σ_p b_ip 2^(m−p), i.e., read row i as a binary number with the leftmost column as the most significant bit
Order rows according to descending w_i
For each column p compute the number w_p = Σ_i b_ip 2^(n−i), i.e., read column p as a binary number with the top row as the most significant bit
Order columns according to descending w_p
If on steps 2 and 4 no reordering happened go to step 6, otherwise go to step 1
Stop
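The steps above amount to repeatedly reading rows and columns as binary numbers and re-sorting until nothing moves. A compact sketch in Python, using a small hypothetical incidence matrix for illustration:

```python
# Rank order clustering on a binary product-machine matrix: reorder rows and
# columns by descending binary weight until a pass produces no change.

def rank_order_clustering(matrix):
    n, m = len(matrix), len(matrix[0])
    rows = list(range(n))  # current row order (indices into matrix)
    cols = list(range(m))  # current column order

    while True:
        # Steps 1-2: weight each row by reading its entries as a binary number
        # (leftmost column = most significant bit), then sort descending.
        new_rows = sorted(rows, key=lambda i: sum(
            matrix[i][cols[p]] << (m - 1 - p) for p in range(m)), reverse=True)
        # Steps 3-4: same for columns, reading top-to-bottom.
        new_cols = sorted(cols, key=lambda p: sum(
            matrix[new_rows[i]][p] << (n - 1 - i) for i in range(n)), reverse=True)
        # Steps 5-6: stop when neither ordering changed.
        if new_rows == rows and new_cols == cols:
            return rows, cols
        rows, cols = new_rows, new_cols

# Hypothetical 4-product x 4-machine incidence matrix.
incidence = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
row_order, col_order = rank_order_clustering(incidence)
clustered = [[incidence[i][p] for p in col_order] for i in row_order]
```

On this toy input the reordered matrix becomes block-diagonal, exposing two candidate manufacturing cells.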
Similarity coefficients
Given a binary product-machines n-by-m matrix, the algorithm proceeds by the following steps:
Compute the similarity coefficient s_ij = n_ij / (n_ij + u_ij) for all machine pairs (i, j), where n_ij is the number of products that need to be processed on both machine i and machine j, and u_ij is the number of products which visit one of the two machines but not the other
Group together in cell k the pair (i*, j*) with the highest similarity coefficient, with k being the algorithm iteration index
Remove row i* and column j* from the original binary matrix and substitute the row and column of cell k
Go to step 2, raising the iteration index k by one
Unless this procedure is stopped, the algorithm will eventually put all machines into a single group.
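The similarity-coefficient step can be sketched as follows; the Jaccard-style coefficient s_ij = n_ij/(n_ij + u_ij) and the 5×3 incidence matrix are illustrative assumptions, and only the first merge decision is shown rather than the full iterative procedure:

```python
# Pairwise machine similarity s_ij = n_ij / (n_ij + u_ij), where n_ij counts
# products visiting both machines and u_ij those visiting exactly one.

def similarity(matrix, i, j):
    both = sum(1 for row in matrix if row[i] and row[j])
    one = sum(1 for row in matrix if row[i] != row[j])
    return both / (both + one) if both + one else 0.0

# Hypothetical 5-product x 3-machine incidence matrix.
incidence = [
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 0],
]
pairs = {(i, j): similarity(incidence, i, j)
         for i in range(3) for j in range(i + 1, 3)}
best = max(pairs, key=pairs.get)  # the pair grouped first into a cell
```

Here machines 0 and 1 share three of their four products, so they would be merged into the first cell before the matrix is collapsed and the coefficients recomputed.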
References
Industrial engineering | Production flow analysis | [
"Engineering"
] | 302 | [
"Industrial engineering"
] |
39,945,942 | https://en.wikipedia.org/wiki/FLACS | FLACS (FLame ACceleration Simulator) is a commercial Computational Fluid Dynamics (CFD) software used extensively for explosion modeling and atmospheric dispersion modeling within the field of industrial safety and risk assessment. Main application areas of FLACS are in petrochemical, process manufacturing, food processing, wood processing, metallurgical, and nuclear safety industries.
FLACS has dedicated modules to simulate gas explosion, dust explosion and explosions involving chemical explosives like TNT. FLACS is also extensively used to simulate flammable and toxic gas dispersion. It was applied in the investigation of many high profile accidents such as Buncefield fire, Piper Alpha, TWA Flight 800, and the Petrobras 36 platform.
History
FLACS software development started in-house in the early 1980s under the sponsorship program Gas Explosion Safety (GSP), funded by the oil companies BP, Elf Aquitaine, Esso, Mobil, Norsk Hydro and Statoil. FLACS-86 was released to GSP sponsors in 1986. Continuous research and development from then onwards resulted in many commercial releases. In 2006, FLACS v8.1 was released to customers. Until then, FLACS had been developed for Unix and Linux platforms; in 2008, FLACS v9.0 was released for the Microsoft Windows platform. FLACS v9.1 and FLACS-Wind were developed in 2010. A fully parallelized FLACS v10.0 (using OpenMP) with a new solver for incompressible flows was released in 2012. FLACS v10.0 also includes a Homogeneous Equilibrium Model (HEM) for two-phase flow calculations.
Related software
CFX (proprietary software)
Fire Dynamics Simulator (GPL)
OpenFOAM (GPL)
KFX DNV GL
See also
Computational fluid dynamics
Computer simulation
Gas explosion
Dust explosion
Atmospheric dispersion modeling
References
External links
FLACS official website
GexCon AS (FLACS developers)
Computational fluid dynamics | FLACS | [
"Physics",
"Chemistry"
] | 413 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
42,735,534 | https://en.wikipedia.org/wiki/Nonribosomal%20code | The nonribosomal code refers to key amino acid residues and their positions within the primary sequence of an adenylation domain of a nonribosomal peptide synthetase used to predict substrate specificity and thus (partially) the final product. Analogous to the nonribosomal code is prediction of peptide composition by DNA/RNA codon reading, which is well supported by the central dogma of molecular biology and accomplished using the genetic code simply by following the DNA codon table or RNA codon table. However, prediction of natural product/secondary metabolites by the nonribosomal code is not as concrete as DNA/RNA codon-to-amino acid and much research is still needed to have a broad-use code. The increasing number of sequenced genomes and high-throughput prediction software has allowed for better elucidation of predicted substrate specificity and thus natural products/secondary metabolites. Enzyme characterization by, for example, ATP-pyrophosphate exchange assays for substrate specificity, in silico substrate-binding pocket modelling and structure-function mutagenesis (in vitro tests or in silico modelling) helps support predictive algorithms. Much research has been done on bacteria and fungi, with prokaryotic bacteria having easier-to-predict products.
The nonribosomal peptide synthetase (NRPS), a multi-modular enzyme complex, minimally contains repeating tri-domain modules (adenylation (A), peptidyl carrier protein (PCP) and lastly condensation (C)). The adenylation (A) domain is the focus for substrate specificity since it is the initiating and substrate-recognition domain. In one example, alignments of adenylation substrate-binding pockets (defined by 10 residues within the pocket) led to clusters giving rise to defined specificity (i.e., the residues of the enzyme pocket can predict the nonribosomal peptide sequence). In silico mutations of substrate-determining residues also led to varying or relaxed specificity. Additionally, the NRPS collinearity principle/rule dictates that, given the order of adenylation domains (and their substrate-specificity codes) throughout the NRPS, one can predict the amino acid sequence of the produced small peptide. NRPS, NRPS-like or NRPS-PKS complexes also exist and have domain variations, additions and/or exclusions.
Supporting examples
The A-domains have 8-amino-acid-long nonribosomal signatures.
LTKVGHIG → Asp (Aspartic acid)
VGEIGSID → Orn (Ornithine)
AWMFAAVL → Val (Valine)
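The three signatures above can be read as a simple lookup table. Real predictors use much larger trained models, so the dictionary and function names here are purely illustrative and only encode the examples quoted in the text:

```python
# Minimal lookup of the 8-residue adenylation-domain signatures quoted above.
SIGNATURES = {
    "LTKVGHIG": "Asp",  # aspartic acid
    "VGEIGSID": "Orn",  # ornithine
    "AWMFAAVL": "Val",  # valine
}

def predict_residue(signature):
    """Return the predicted substrate, or None for an unknown signature."""
    return SIGNATURES.get(signature.upper())

# Reading the A-domain signatures in order along the NRPS (collinearity rule)
# yields the predicted peptide sequence.
peptide = [predict_residue(s) for s in ("LTKVGHIG", "AWMFAAVL")]
```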
See also
Nonribosomal peptide
Natural product
Secondary metabolite
References
Molecular biology | Nonribosomal code | [
"Chemistry",
"Biology"
] | 577 | [
"Biochemistry",
"Molecular biology"
] |
42,736,145 | https://en.wikipedia.org/wiki/Analytical%20light%20scattering | Analytical light scattering (ALS), also loosely referred to as SEC-MALS, is the implementation of static light scattering (SLS) and dynamic light scattering (DLS) techniques in an online or flow mode. A typical ALS instrument consists of an HPLC/FPLC chromatography system coupled in-line with appropriate light scattering and refractive index detectors. The advantage of ALS over conventional steady-state light scattering methods is that it allows separation of molecules/macromolecules on a chromatography column prior to analysis with light scattering detectors. Accordingly, ALS enables one to determine hydrodynamic properties of a single monodisperse species as opposed to bulk or average measurements on a sample afforded by conventional light scattering.
References
Scattering, absorption and radiative transfer (optics)
Physical chemistry
Scientific techniques | Analytical light scattering | [
"Physics",
"Chemistry"
] | 170 | [
"Scattering, absorption and radiative transfer (optics)",
"Applied and interdisciplinary physics",
"Scattering",
"Physical chemistry"
] |
42,736,510 | https://en.wikipedia.org/wiki/Retro%20screening | Retro (or reverse) screening (RS) is a relatively new approach to determining the specificity and selectivity of a therapeutic drug molecule against a target protein or another macromolecule. It proceeds in the opposite direction to so-called virtual screening (VS). In VS, the goal is to use a protein target to identify a high-affinity ligand from a search library typically containing hundreds of thousands of small molecules. In contrast, RS employs a known drug molecule to screen a protein library containing hundreds of thousands of individual structures (obtained from both experimental and modeling techniques). Accordingly, the extent to which the drug cross-reacts with the human proteome provides a measure of its efficacy and potential long-term side-effects. RS is expected to play a key role in providing an additional layer of quality control in drug discovery.
Bioinformatics
Drug discovery
Cheminformatics
Alternatives to animal testing | Retro screening | [
"Chemistry",
"Engineering",
"Biology"
] | 186 | [
"Animal testing",
"Biological engineering",
"Life sciences industry",
"Drug discovery",
"Bioinformatics",
"Alternatives to animal testing",
"Computational chemistry",
"Medicinal chemistry",
"Cheminformatics"
] |
42,737,919 | https://en.wikipedia.org/wiki/Vitali%E2%80%93Carath%C3%A9odory%20theorem | In mathematics, the Vitali–Carathéodory theorem is a result in real analysis that shows that, under the conditions stated below, integrable functions can be approximated in L1 from above and below by lower- and upper-semicontinuous functions, respectively. It is named after Giuseppe Vitali and Constantin Carathéodory.
Statement of the theorem
Let X be a locally compact Hausdorff space equipped with a Borel measure, μ, that is finite on every compact set, outer regular, and tight when restricted to any Borel set that is open or of finite mass. If f is an element of L1(μ) then, for every ε > 0, there are functions u and v on X such that u ≤ f ≤ v, u is upper-semicontinuous and bounded above, v is lower-semicontinuous and bounded below, and ∫X (v − u) dμ < ε.
References
Theorems in real analysis | Vitali–Carathéodory theorem | [
"Mathematics"
] | 190 | [
"Theorems in mathematical analysis",
"Theorems in real analysis"
] |
42,744,699 | https://en.wikipedia.org/wiki/Salvinia%20effect | The Salvinia effect describes the permanent stabilization of an air layer upon a hierarchically structured surface submerged in water. Based on biological models (e.g. the floating ferns Salvinia and the backswimmer Notonecta), biomimetic Salvinia-surfaces are used as drag-reducing coatings (reductions of up to 30% were measured on the first prototypes). When applied to a ship hull, the coating would allow the boat to float on an air layer, reducing energy consumption and emissions. Such surfaces require an extremely water-repellent super-hydrophobic surface and an elastic hairy structure in the millimeter range to entrap air while submerged. The Salvinia effect was discovered by the biologist and botanist Wilhelm Barthlott (University of Bonn) and his colleagues and has been investigated on several plants and animals since 2002. Publications and patents were published between 2006 and 2016. The best biological models are the floating ferns (Salvinia) with highly sophisticated hierarchically structured hairy surfaces, and the backswimmers (e.g. Notonecta) with a complex double structure of hairs (setae) and microvilli (microtrichia). Three of the ten known Salvinia species show a paradoxical chemical heterogeneity: hydrophilic hair tips on an otherwise super-hydrophobic plant surface, further stabilizing the air layer.
Salvinia, Notonecta and other organisms with air retaining surfaces
Immersed in water, extremely water repellent (super-hydrophobic), structured surfaces trap air between the structures and this air-layer is maintained for a period of time. A silvery shine, due to the reflection of light at the interface of air and water, is visible on the submerged surfaces.
Long-lasting air layers also occur in aquatic arthropods which breathe via a physical gill (plastron), e.g. the water spider (Argyroneta) and the saucer bug (Aphelocheirus). Air layers are presumably also conducive to the reduction of friction in fast-moving animals under water, as is the case for the backswimmer Notonecta.
The best known examples for long-term air retention under water are the floating ferns of genus Salvinia. About ten species of very diverse sizes are found in lentic water in all warmer regions of the earth; one widespread species (S. natans), adapted to temperate climates, can even be found in Central Europe. The ability to retain air is presumably a survival technique for these plants. The upper side of the floating leaves is highly water repellent and possesses highly complex, very distinctive, species-specific hairs. Some species present multicellular free-standing hairs of 0.3–3 mm length (e.g. S. cucullata), while on others, two hairs are connected at the tips (e.g. S. oblongifolia). S. minima and S. natans have four free-standing hairs connected at a single base. The Giant Salvinia (S. molesta), as well as S. auriculata and other closely related species, displays the most complex hairs: four hairs grow on a shared shaft and are connected at their tips. These structures resemble microscopic eggbeaters and are therefore referred to as "eggbeater trichomes". The entire leaf surface, including the hairs, is covered with nanoscale wax crystals, which are the reason for the water-repellent properties of the surfaces. These leaf surfaces are therefore a classical example of "hierarchical structuring".
The egg-beater hairs of Salvinia molesta and closely related species (e.g. S. auriculata) show an additional remarkable property. The four cells at the tip of each hair (the anchor cells), as opposed to the rest of the hair, are free of wax and therefore hydrophilic; in effect, wettable islands surrounded by a super-hydrophobic surface. This chemical heterogeneity, the Salvinia paradox, enables a pinning of the air water interface to the plant and increases the pressure and longtime stability of the air layer.
The air retaining surface of the floating fern does not lead to a reduction in friction. The ecologically extremely adaptable Giant Salvinia (S. molesta) is one of the most important invasive plants in all tropical and subtropical regions of the earth and is the cause of economic as well as ecological problems. Its growth rate might be the highest of all vascular plants. In the tropics and under optimal conditions, S. molesta can double its biomass within four days. The Salvinia effect, described here, most likely plays an essential role in its ecological success; the multilayered floating plant mats presumably maintain their function of gas exchange within the air layer.
The working principle
The Salvinia effect describes surfaces that are able to permanently retain relatively thick air layers under water, as a result of their hydrophobic chemistry in combination with a complex architecture at nano- and microscopic scales.
This phenomenon was discovered during systematic research on aquatic plants and animals by Wilhelm Barthlott and his colleagues at the University of Bonn between 2002 and 2007. Five criteria have been defined that enable the existence of stable air layers under water and, as of 2009, define the Salvinia effect: (1) hydrophobic surface chemistry, which in combination with (2) nanoscale structures generates superhydrophobicity; (3) microscopic hierarchical structures ranging from a few micrometers to several millimeters, with (4) undercuts and (5) elastic properties. Elasticity appears to be important for the compression of the air layer under dynamic hydrostatic conditions. An additional optimizing criterion is the chemical heterogeneity of the hydrophilic tips (the Salvinia paradox). This is a prime example of hierarchical structuring on several levels.
In plants and animals, air-retaining Salvinia-effect surfaces are always fragmented into small compartments 0.5 to 8 cm long, whose borders are sealed against loss of air by particular microstructures. Compartments with sealed edges are also important for technical applications.
The working principle is illustrated by the Giant Salvinia. The leaves of S. molesta are capable of keeping an air layer on their surfaces for a long time when submerged in water: if a leaf is pulled under water, its surface shows a silvery shine. The distinctive feature of S. molesta lies in the long-term stability of this layer. While the air layer on most hydrophobic surfaces vanishes shortly after submersion, S. molesta is able to stabilize the air for several days to several weeks, the time span being limited only by the lifetime of the leaf.
The high stability is a consequence of a seemingly paradoxical combination of a superhydrophobic (extremely water repellent) surface with hydrophilic (water attractive) patches on the tips of the structures.
When the leaf is submerged, no water can penetrate the space between the hairs, owing to the hydrophobic character of the surfaces. However, the water is pinned to the tip of each hair by the four wax-free (hydrophilic) end cells. This fixation results in a stabilization of the air layer under water. The principle is shown in the figure.
Two submerged, air-retaining surfaces are shown schematically: on the left, a purely hydrophobic surface; on the right, a hydrophobic surface with hydrophilic tips.
If negative pressure is applied, a bubble quickly forms on the purely hydrophobic surface (left), stretching over several structures. With increasing negative pressure the bubble grows and can detach from the surface; it rises to the water surface, and the air layer shrinks until it vanishes completely.
In the case of the surface with hydrophilic anchor cells (right), the water is pinned to the tip of every structure by the hydrophilic patch on top. Because of these linkages, a bubble stretching over several structures can only be released after several pinning links have been broken. This results in a higher energy input for bubble formation, so an increased negative pressure is needed to form a bubble able to detach from the surface and rise.
Biomimetic technical application
Underwater air-retaining surfaces are of great interest for technical applications. If the effect can be transferred to a technical surface, ship hulls could be coated with it to reduce friction between ship and water, resulting in lower fuel consumption, lower fuel costs, and a reduced environmental impact (including an antifouling effect from the air layer). In 2007, the first test boats achieved a ten percent friction reduction, and the principle was subsequently patented. Scientists now expect a friction reduction of over 30% to be possible.
The underlying principle is shown schematically in a figure comparing two laminar flow profiles: water flowing over a solid surface and water flowing over an air-retaining surface.
If water flows over a smooth solid surface, the velocity at the surface is zero because of the friction between water and surface molecules. If an air layer lies between the solid surface and the water, the velocity at the water–air interface is greater than zero. The much lower viscosity of air (about 55 times lower than that of water) reduces the transmission of friction forces accordingly.
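The magnitude of this effect can be sketched with a minimal two-layer plane Couette model. This is an idealization, not the actual hull hydrodynamics: it assumes laminar flow, a flat continuous air film, and bulk viscosities of air and water; the flow speed, gap, and film thickness below are invented illustration values.

```python
MU_WATER = 1.0e-3   # Pa*s, water at roughly 20 degC
MU_AIR = 1.81e-5    # Pa*s, air at roughly 20 degC (about 55 times less viscous)

def wall_shear_stress(U, h, t_air=0.0):
    """Wall shear stress for plane Couette flow of total gap h driven at speed U.

    With an air film of thickness t_air on the wall, shear stress is continuous
    across both Newtonian layers, so tau = U / (t_air/mu_air + (h - t_air)/mu_water).
    """
    return U / (t_air / MU_AIR + (h - t_air) / MU_WATER)

tau_solid = wall_shear_stress(U=0.5, h=0.01)             # plain solid wall
tau_air = wall_shear_stress(U=0.5, h=0.01, t_air=3e-4)   # 0.3 mm air film
reduction = 1.0 - tau_air / tau_solid                    # fractional drag reduction
```

In this laminar sketch even a 0.3 mm film cuts the wall shear by more than half; real hull flows are turbulent, which is one reason reported reductions for test boats are smaller.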
Researchers are currently working on the development of a biomimetic, permanently air-retaining surface modeled on S. molesta to reduce friction on ships. Salvinia-effect surfaces have also been shown to adsorb oil quickly and efficiently and can be used for oil–water separation applications.
See also
Lotus effect
Petal effect
References
Further reading
P. Ditsche-Kuru, M. J. Mayser, E. S. Schneider, H. F. Bohn, K. Koch, J.-E. Melskotte, M. Brede, A. Leder, M. Barczewski, A. Weis, A. Kaltenmaier, S. Walheim, Th. Schimmel, W. Barthlott: Eine Lufthülle für Schiffe – Können Schwimmfarn und Rückenschwimmer helfen Sprit zu sparen? In: A. B. Kesel, D. Zehren (ed.): Bionik: Patente aus der Natur – 5. Bremer Bionik Kongress. A. B. Kesel & D. Zehren, Bremen 2011, pp. 159–165.
S. Klein: Effizienzsteigerung in der Frachtschifffahrt unter ökonomischen und ökologischen Aspekten am Beispiel der Reederei Hapag Lloyd, Projektarbeit Gepr. Betriebswirt (IHK), Akademie für Welthandel, 2012.
W. Baumgarten, B. Böhnlein, A. Wolter, M. Brede, W. Barthlott, A. Leder: Einfluss der Strömungsgeschwindigkeit auf die Stabilität von Luft-Wasser Grenzflächen an biomimetischen, Luft haltenden Beschichtungen. In: B. Ruck, C. Gromke, K. Klausmann, A. Leder, D. Dopheide (Hrsg.): Lasermethoden in der Strömungsmesstechnik. 22. Fachtagung, 9.–11. September 2014, Karlsruhe; (Tagungsband). Karlsruhe, Dt. Ges. für Laser-Anemometrie GALA e.V., , S. 36.1–36.5 (Online).
M. Rauhe: Salvinia-Effekt Gute Luft unter Wasser. In: LOOKIT. Nr. 4, 2010, S. 26–28.
External links
www.lotus-salvinia.de
Video: Das Geheimnis des Südamerikanischen Schwimmfarns
Video: Lufthaltende Schiffsbeschichtungen nach biologischem Vorbild zur Reibungsreduktion
Nanotechnology
Surface science | Salvinia effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,532 | [
"Nanotechnology",
"Condensed matter physics",
"Surface science",
"Materials science"
] |
42,747,055 | https://en.wikipedia.org/wiki/Octopart | Octopart.com is a search engine for electronic and industrial parts headquartered in La Jolla, CA. It aggregates parts from distributors and manufacturers online, making them easy to search for and purchase.
History
Octopart was created by three physics grad-school dropouts, Andres Morey, Sam Wurzel, and Harish Agarwal, in 2007. After coming up with the idea for the site and leaving graduate school, Morey and Wurzel worked with Paul Graham and Jessica Livingston's Y Combinator. Octopart works with large distributors.
In 2017, Octopart was acquired by Altium Limited.
References
Y Combinator companies
2007 establishments in California | Octopart | [
"Engineering"
] | 141 | [
"Electronics companies",
"Engineering companies"
] |
65,617,852 | https://en.wikipedia.org/wiki/Zlatko%20Tesanovic | Zlatko Boško Tešanović (August 1, 1956 – July 26, 2012) was a Yugoslav-American theoretical condensed-matter physicist, whose work focused mainly on the high-temperature superconductors (HTS) and related materials.
His particular research interests were in the areas of theoretical condensed matter physics, revolving primarily around iron- and copper-based high-temperature superconductors, quantum Hall effects (QHE), superconductivity and strongly correlated electron materials. His broad knowledge of condensed matter physics, his deep understanding of the effects of strong magnetic fields, and his talent for exposition were influential.
Biography
He was born in Sarajevo, former Yugoslavia (present Bosnia and Herzegovina). In 1979, he received a B.Sci. in physics from the University of Sarajevo. He then received a Fulbright Fellowship and attended the University of Minnesota, where he earned a Ph.D. in physics in 1985. He became a naturalized American citizen.
He worked as a professor of physics at Johns Hopkins University (JHU) in the Henry A. Rowland Department of Physics and Astronomy in Baltimore from July 1987 until his death on July 26, 2012. Previously, he served as director of the TIPAC Theory Center at JHU.
He was a foreign member of the Royal Norwegian Society of Sciences and Letters and a fellow of the APS Division of Condensed Matter Physics (DCMP). He served as a member of the committee to Assess the Current Status and Future Direction of High Magnetic Field Science in the United States, and contributed strongly to it, until his death.
Students
Among his graduate students are:
Lei Xing (Jacob Haimson Professor, Stanford University)
Igor F. Herbut (Professor, Simon Fraser University)
Anton Andreev (Associate Professor, University of Washington)
Sasha Dukan (Professor and Chair of Physics, Goucher College)
Oskar Vafek (Associate Professor, Florida State University and NHMFL)
Ashot Melikyan (Editor, Physical Review B)
Andrés Concha (Postdoctoral Fellow, Harvard SEAS)
Valentin Stanev (Postdoctoral Fellow, Argonne National Laboratory)
Jian Kang (Grad student, Johns Hopkins University)
Works
He gave more than 100 invited talks at scientific meetings, including major international conferences. He has authored and published more than 125 scientific papers, and a book entitled:
Honors and awards
Fulbright Fellowship, U.S. Institute of International Education (1980)
Shevlin Fellowship, University of Minnesota (1983)
Stanwood Johnston Memorial Fellowship, University of Minnesota (1984)
J. R. Oppenheimer Fellowship, Los Alamos National Laboratory, 1985 (declined)
David and Lucile Packard Foundation Fellowship (1988-1994)
Inaugural Speaker, J. R. Schrieffer Tutorial Lecture Series, National High Magnetic Field Laboratory (1997)
Foreign Member, The Royal Norwegian Society of Sciences and Letters
Fellow, The American Physical Society, Division of Condensed Matter Physics
He received grants from the Department of Energy, and the National Science Foundation awarded him a post-doctoral fellowship that enabled him to spend two years studying at Harvard University.
Death
He died on July 26, 2012, at the age of 55, of an apparent heart attack at the George Washington University Hospital in Washington, D.C., after collapsing at Reagan National Airport.
On March 23, 2013, the Johns Hopkins University Department of Physics and Astronomy organized a memorial symposium as a tribute to him, with a number of distinguished speakers invited to highlight his scientific accomplishments.
See also
List of American Physical Society Fellows (2011–)
List of theoretical physicists
Piers Coleman
Alexei Alexeyevich Abrikosov
Edward Witten
Joseph Polchinski
Notes
References
External links
Are iron pnictides new cuprates? by Zlatko Tesanovic — American Physical Society
Profile on Blogger — Blogger.com
Zlatko Tesanovic: What is the theory of the Fe-pnictides?
Curriculum vitae of Dr. Zlatko B. Tešanović
1956 births
2012 deaths
Scientists from Sarajevo
American string theorists
American condensed matter physicists
Yugoslav emigrants to the United States
Bosniaks of Bosnia and Herzegovina
Serbs of Bosnia and Herzegovina
Johns Hopkins University faculty
Fellows of the American Physical Society
Superconductivity
Death in Washington, D.C. | Zlatko Tesanovic | [
"Physics",
"Materials_science",
"Engineering"
] | 864 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
65,621,570 | https://en.wikipedia.org/wiki/Marl%20Chemical%20Park | Marl Chemical Park () is an industrial park in Marl, North Rhine-Westphalia, Germany. It is the third largest industrial cluster in Germany and among the largest chemical production facilities in Europe. The site occupies over 6 square kilometers, hosts 100 chemical plants, employs 10,000 people, and produces 4 million metric tons of chemicals annually. 18 companies are based in the Park, including primary tenant Evonik Industries AG, which also owns and operates the infrastructure through its subsidiary Infracor GmbH.
Originally named Chemische Werke Hüls, the complex was built in 1938 by a consortium led by IG Farben to produce synthetic rubber and other war materials for the Third Reich. By 1942 over 5000 workers' families had relocated into new housing which transformed Marl into a company town. At the height of World War II, the Germans also used slave laborers and prisoners of war at the plant. Allied bombing heavily damaged the site in mid-1943 although full production had resumed by 1944. Near the end of the war, employees saved the plant from complete destruction under Hitler's Nero Decree and the US Army occupied it in March, 1945.
After the war, the plant operated under restrictions imposed by the Allied Control Council and by 1953 was turned over to new German owners. New products such as plastics and intermediate chemicals began to be produced. Coal-mining conglomerate RAG AG became majority owner in 2007 and created a new entity, Evonik Industries, with a focus on specialty and fine chemicals. In 2009, Marl Chemical Park received its current name. In 2012, a fire halted production of cyclododecatriene (CDT) for several months. The plant manufactures a substantial proportion of the world supply of CDT, a precursor to Nylon 12, and the outage led to a shortage that impacted global production of finished goods, particularly in the automotive industry.
Marl Chemical Park is an anchor point on the Ruhr Industrial Heritage Trail and can be visited.
Location
Marl Chemical Park is located on the northern edge of the Ruhr area in the southern foothills of the Münster region. Both the Lippe River and Wesel-Datteln Canal run through the northern part of the site. To the south is Bundesautobahn 52 with a connection to Bundesautobahn 43. In addition to freight rail links with Deutsche Bahn, an alternative connection leads to the Gelsenkirchen-Buer Nord–Marl Lippe railway. The national Ethene Pipeline System running from Gelsenkirchen to Wilhelmshaven travels through the site, and the Rhine-Ruhr Hydrogen Pipeline is owned and operated from the site.
Facilities
Including Evonik, Marl Chemical Park hosts 18 companies and 100 production plants in over 900 buildings, operating through a shared infrastructure. It is the third-largest integrated industrial park, known as a Verbund site, in Germany. It is also the largest filling center for hydrogen in Europe. Shared services include:
Energy cogeneration: two gas-fired power plants and one coal-fired power station provide 300 MW of electrical power at different voltages (110 kV, 10 kV, 6 kV, 500 V and 400/230 V) and 1,000 tons of steam per hour at various pressures (4, 20, 70 and 120 bar). In 2019, construction began on replacing the coal station with a new 180 MW natural gas facility, to be opened in 2022.
Street grid: 55 km long and numbered east-west (100, 200, 1200) and south-north (20, 40, 60), giving buildings unique numbers that indicate their position in the facility (e.g. building 145 is near the intersection of streets 100 and 40).
Raw materials: via pipeline, rail, truck and ship, such as ethylene, propene, C4 hydrocarbons, benzene, methanol, brine and natural gas. This includes storage areas, high rack and tank storage facilities.
Air separation plant: generates liquefied argon and other gases based on the Linde process.
Internal pipeline network: 1200 km long on 30 km of pipeline bridges transporting reaction intermediates, end products, and various gases including hydrogen, nitrogen, and oxygen.
Industrial railway: 100 km long, with a freight station with two connections to the Deutsche Bahn; it is one of the largest private, electronically monitored train stations in Europe.
Wastewater system: 70 km long sewer network separated into rain/cooling-water and dark-water channels, processed by two sewage treatment plants before reaching the Lippe River. At the north end of the site is a sludge incineration plant.
Fire department: handling hazardous materials, industrial fires and other emergencies.
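One plausible reading of the street-grid numbering described above can be sketched in code. The decoding rule below is inferred solely from the single example given in the text (building 145 near streets 100 and 40); the park's actual numbering rules may differ.

```python
def decode_building(number):
    """Split a building number into an inferred (east-west, south-north) street pair.

    Sketch only: hundreds digit maps to the east-west street, tens digit to the
    south-north street, and the units digit distinguishes buildings at the same
    intersection. Inferred from one example; not an official scheme.
    """
    ew_street = (number // 100) * 100        # e.g. 145 -> street 100
    sn_street = ((number % 100) // 10) * 10  # e.g. 145 -> street 40
    return ew_street, sn_street
```

Under this reading, `decode_building(145)` places building 145 at the intersection of streets 100 and 40, consistent with the example in the text.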
History
Construction
In 1936, the Nazi government launched a Four Year Plan which identified strategic materials critical to German rearmament, with a goal to make Germany self-sufficient in preparation for war. Replacing natural rubber with synthetic rubber in the manufacture of tires and continuous track for the Wehrmacht became a priority. The solution was Buna-S, a polymer derived from coal, initially developed by Bayer in 1928 and first manufactured commercially by parent company IG Farben in 1937. Prior to World War II Germany had become the world leader in the development of synthetic rubber technology.
To build a plant needed for mass production of Buna-S, a new company Chemische Werke Hüls GmbH was created as a joint venture between majority owner IG Farben and coal-mining company Hibernia AG, a subsidiary of Prussian state-owned holding company VEBA AG. The plant would use a new electric arc manufacturing method developed in a research alliance with American company Standard Oil of New Jersey in 1935. IG Farben provided patents to the joint venture free of charge, and in return the joint venture was to provide IG Farben all new developments in the technology and proceeds of future sales.
The factory site, adjacent to the August-Victoria coal mine at Hüls near the village of Marl, was strategically located on the northern edge of the Ruhr industrial basin along the Wesel-Datteln Canal. The Hibernia coking and hydrogenation plants in Scholven, recently completed in 1936, were to the southwest. This created a highly efficient production cycle wherein exhaust gases from Hibernia were piped to Hüls, which were converted into acetylene and ethylene using the electric arc process. Acetylene was then used to make butadiene into buna, while ethylene was processed via ethylene oxide into antifreeze and other products. The excess hydrogen produced was returned to Hibernia to make synthetic gasoline from coal liquefaction. The Hüls factory complex was inaugurated on May 9, 1938.
Managers and foremen were relocated to Marl exclusively from other IG Farben plants across Germany, such as Ludwigshafen am Rhein, Schkopau and Leverkusen, while skilled workers came from the surrounding Münster area. Housing became critical and workers lived in temporary camps as new homes were built south of the plant. The neighborhood, known as the Bereitschaftssiedlung (literally "standby settlement"), was built by IG Farben architect Clemens Anders in the traditionalist Stuttgart school style favored in the Third Reich. From 1938 to 1942, more than 5,000 employees and their families moved in, transforming Marl into a company town. A Feierabendhaus (social center) was built in 1940 with a company restaurant, cinema, theater, and training school for National Socialist concepts. Robert Ley, director of the German Labor Front, laid the foundation stone.
World War II
At the outbreak of war, the plant was still being fitted for full production and the first commercial buna bales were delivered on August 29, 1940. By 1942 the plant was producing 50,000 tons of Buna-S annually along with chlorine, solvents, softening agents, resins and other chemicals needed for the war effort. In addition to the 5,000 German employees, between 10,000 and 15,000 prisoners of war and forced laborers were held in 30 camps around Marl to provide workers for the plant and the mines which supplied it. Records from 1944 show a special prison camp on the company site controlled by the Gestapo, and Polish workers transferred between Hüls and the Buna plant at Auschwitz.
The effects of war reached Hüls in mid-1943. Raw materials had become increasingly difficult to obtain and the plants were targeted by allied bombing. On June 11, a heavy daylight raid dropped 1,560 bombs which killed 186 people and wounded 752. Another raid by the USAAF 100th bomber group was carried out on June 22 from 25,000 ft. The site was attacked again in daylight by 235 bombers from the USAAF on June 25, with 16 bombers lost. These raids halted all production for three months. More heavy bombing targeted the Hibernia hydrogenation plants to stop the flow of raw materials, however the Hüls works managed to reach maximum output again by 1944.
On March 29, 1945 a German Army special unit appeared with orders under Hitler's Nero Decree to destroy everything in Hüls. Plant employees and particularly plant director Paul Baumann persuaded the unit to disobey the orders and protect the plant until the arrival of the Americans. The United States 8th Armored Division occupied the factories on March 31, 1945. At the end of the war, the worker population had dropped from over 10,000 to about 500.
Postwar
Immediately after the war, the site was placed under British administration. On the breakup of owner IG Farben, the Allies initially placed tight limits on what could be produced and had plans to dismantle the plant, although rubber shortages in Europe soon meant that great efforts were made to restart buna production. By 1949, the company recognized their existing synthetic rubber production methods were not competitive in world markets and American development aid became critical in re-establishing the plant's former importance. In 1953, the works were released from Allied control and ownership converted into a stock corporation. The complex was named Chemische Werke Hüls AG and began manufacturing plastics, raw materials for detergents and a new synthetic rubber process developed by the Americans.
During the Wirtschaftswunder, the chemical works were continuously redeveloped with new product lines under the management of VEBA AG. In 1985, the company began trading under the name Hüls AG and had moved away from basic industries towards more complex chemicals. Hüls AG and Degussa AG merged in 1999 to form Degussa-Hüls, and in 2001 Degussa-Hüls and SKW Trostberg AG merged to form the new Degussa AG, the third largest chemical group in Germany.
Recent
In 2006, Essen-based coal-mining conglomerate RAG AG took a controlling interest in the plant. The chemicals, energy and real estate businesses of RAG were then combined to form a new industrial group, Evonik Industries. In 2009, Evonik repositioned itself entirely toward specialty chemicals and became owner and operator of the newly named Marl Chemical Park through its subsidiary Infracor GmbH. Today, in addition to Evonik and its affiliates, 17 other companies are based in the Park.
Resident companies
Evonik Industries and subsidiaries:
Nutrition & Care
Performance Materials
Ressource Efficiency
Materials
Creavis
Technology and Infrastructure
Logistics Service
Catering Services
Operations
Real Estate
CPM Netz
TÜV Nord InfraChem
Umschlag Terminal Marl
Westgas
Companies independent of Evonik
Air Liquide GmbH
Air Products GmbH
Alba Group plc & Co. KG
C+S Chlorgas GmbH
Dow Deutschland Anlagengesellschaft mbH
Eastman Chemical Company HTF GmbH
Goodman Germany GmbH
Ineos Solvents Marl GmbH
Ineos Styrenics GmbH (before 2005 part of BP)
Karl Schmidt Spedition GmbH & Co. KG
Linde plc
Metro Logistics Germany
Natural Energy West GmbH
OQ Chemicals GmbH & Co. KG
Sasol Germany GmbH
Synthomer Deutschland GmbH
Vestolit GmbH
Products
Marl Chemical Park produces 4 million metric tons of chemicals annually. More than 4,000 chemical products are manufactured, the largest quantities being:
Acetylene, acrylic acid, alkanolamines, alkylphenols
Benzene, butadiene, butane, butanediol, butanol, butene, butyl acetates, butyl acrylate, butyl chloride, butyraldehyde
Chlorine, copolyamides, copolyesters, cumene
Dichlorobutane, dichloroethane
Ethoxylate, ethylbenzene, ethyl chloride, ethylene, ethylene glycol, ethylene oxide, 2-ethylhexanol
Formaldehyde
Glycols
Resins
Isobutene
Latex
MAC/MAS, methanol, methyl chloride, MTBE
Sodium hydroxide
Polyamides, polyesters, polyethylene glycols, polyoctenamer, polystyrene, propylene, PVC
Hydrochloric acid, sulfuric acid, styrene
Surfactants, tetrahydrofuran
Plasticizers
Emergency management
The chemical industry in Germany and Austria jointly maintain the Transport-Unfall-Informations- und Hilfeleistungssystem, acronym TUIS (English: Transport Accident Information and Assistance System). Experts can be reached by phone around the clock to provide information on how to handle chemicals in the event of a transport accident. The Marl Chemical Park fire brigade is one of the ten nationwide TUIS emergency call centers and also provides vehicles and equipment.
Accidents
January 30, 1995: After a previous safety shutdown, a connecting elbow in a reactor at the ethanolamine factory tore off when starting up and about two tons of ammonia and 400 kg of ethanolamine leaked. Since this accident happened after the day shift, only property damage occurred. The release of the substances is registered as ZEMA event 9501.
July 19, 1998: Operator error in the vinyl chloride plant triggered an unexpected exothermic reaction. This led to the bursting of pipes, escape of hydrogen chloride and an open fire. The fire brigade was able to protect neighboring systems with cooling, suppress the hydrogen chloride with spray mist and let escaping gases burn off in a controlled manner. There was considerable property damage. The release of the substance is recorded by ZEMA as event 9815.
May 28, 1999: A pipe bend in a vinyl chloride plant tore open and a mixture of 1,2-dichloroethane, vinyl chloride and hydrogen chloride leaked out. Six employees were injured, some emergency services also suffered minor injuries. No people were affected outside the Chemical Park. Because of the release of the substances, this was a reportable accident registered as ZEMA event 9918.
October 10, 2006: At around 10:40 am there was a deflagration in a production building of the intermediate-product factory. As a result, the Marlotherm heat-transfer medium, used to heat products to approximately 300 °C, ignited. The resulting oil fire sent a huge black column of smoke into the sky, clearly visible even in the neighboring towns. After a few hours, the plant fire brigade was able to put out the fire. This incident is recorded by ZEMA as event 0621.
2012 cyclododecatriene plant fire: On March 31, 2012, at around 1:35 p.m., the cyclododecatriene (CDT) plant of the Evonik company was damaged, accompanied by a 100-meter-high jet flame and heavy smoke. Residents reported a severe explosion, and a cloud of smoke moved south over the A 2. One worker died at the scene of the accident; another later died of serious injuries in hospital. Measurements by the fire brigade showed no health risk for the population. According to initial investigations, material fatigue was assumed to be the cause. The damage stopped production of cyclododecatriene (CDT) for several months. The plant produced a substantial proportion of the world's CDT, particularly that needed to produce laurolactam, a precursor to the polyamide Nylon 12. The resulting shortage in turn led to concerns for global production of finished goods, particularly in the automotive industry. Other biobased polyamides, not dependent on laurolactam or CDT, have been put forward as possible alternative materials.
References
External links
Evonik Industries website in English
Marl Chemical Park in English
1938 establishments in Germany
Buildings and structures in Germany destroyed during World War II
Buildings and structures in Krefeld
Chemical industry in Germany
Chemical plants
Companies based in North Rhine-Westphalia
Industrial buildings completed in 1938
Manufacturing plants in Germany
Rubber industry
World War II strategic bombing conducted by the United States
German Industrial Heritage Trail sites
Companies of Nazi Germany | Marl Chemical Park | [
"Chemistry"
] | 3,444 | [
"Chemical process engineering",
"Chemical plants"
] |
65,627,070 | https://en.wikipedia.org/wiki/Residential%20Design%20Codes%20%28Western%20Australia%29 | The Residential Design Codes (R-Codes) provide uniform residential development standards across all Western Australian local government areas. The R-Codes where first gazetted in 1985 with four subsequent editions published in 1991, 2002, 2008 and 2019. The codes are prepared by the Department of Planning, Lands and Heritage for the Western Australian Planning Commission and implemented via reference in local planning schemes. The R-Codes primarily control residential development by limiting the number of dwellings per site area.
Background
During the nineteenth to mid-twentieth centuries, residential development in Western Australia was regulated via local government by-laws and development standards under town planning schemes. This led to considerable variation between local governments. In 1964 the Town Planning Department commissioned George Clarke and Donald Gazzard to prepare a uniform residential code known as the "General Residential Codes" (also the GR Codes), which was gazetted in 1966. These codes improved matters, but were still implemented via incorporation into local planning schemes, which allowed local governments to vary provisions. Ultimately the codes did not lead to the level of uniformity desired.
Following a series of reviews, a new Residential Planning Code was gazetted in 1985 as a State Planning Policy. This code was incorporated into all local planning schemes via reference and applied uniformly, allowing the state government to update the codes periodically. A performance-based assessment pathway was introduced in 2002.
See also
ResCode (Victoria)
Green Street Joint Venture
References
Local government areas of Western Australia
Building codes
Western Australia | Residential Design Codes (Western Australia) | [
"Engineering"
] | 291 | [
"Building engineering",
"Building codes"
] |
44,201,078 | https://en.wikipedia.org/wiki/Cassette%20mutagenesis | Cassette mutagenesis is a type of site-directed mutagenesis that uses a short, double-stranded oligonucleotide sequence (gene cassette) to replace a fragment of target DNA. It uses complementary restriction enzyme digest ends on the target DNA and gene cassette to achieve specificity. It is different from methods that use single oligonucleotide in that a single gene cassette can contain multiple mutations. Unlike many site directed mutagenesis methods, cassette mutagenesis also does not involve primer extension by DNA polymerase.
Mechanism
First, restriction enzymes are used to cleave near the target sequence on DNA contained in a suitable vector. This step removes the target sequence and everything between the restriction sites. Then, a synthetic double-stranded DNA containing the desired mutation, with ends complementary to the restriction-digest ends, is ligated in place of the removed sequence. Finally, the resulting construct is sequenced to check that the target sequence contains the intended mutation.
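The cut-and-ligate steps can be mimicked with a toy string model. This is a sketch only: real digests act on circular plasmids and leave sticky or blunt ends, both of which are ignored here, and the sequences and site choices below are invented for illustration.

```python
def replace_cassette(vector, site5, site3, cassette):
    """Toy model of cassette mutagenesis on a linear sequence string.

    Cuts immediately after the first occurrence of the upstream recognition
    site and immediately before the downstream site, then 'ligates' the
    synthetic cassette in between.
    """
    i = vector.index(site5) + len(site5)   # end of upstream recognition site
    j = vector.index(site3, i)             # start of downstream recognition site
    return vector[:i] + cassette + vector[j:]

# Invented example: EcoRI (GAATTC) and BamHI (GGATCC) sites flank the target.
plasmid = "ATGGAATTCTTTCCCAAAGGATCCTGA"
mutant = replace_cassette(plasmid, "GAATTC", "GGATCC", "GGGTTT")
```

Here the fragment between the two sites ("TTTCCCAAA") is swapped for the synthetic cassette, leaving the flanking sequence untouched, which mirrors why the flanking sites must be unique in the vector.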
Usage
The use of synthetic gene cassette allows total control over the type of mutation that can be generated. When studying protein functions, cassette mutagenesis can allow a scientist to change individual amino acids by introducing different codons or omitting codons.
By including the Shine–Dalgarno (SD) sequence and the first few codons of a gene in the cassette, a scientist can easily and dramatically affect the expression level of a protein by altering these regulatory sequences.
Limitations
To use this method, the sequence of the target region and of the nearby restriction sites must be known. Since restriction enzymes are used, the restriction sites flanking the target DNA have to be unique in the gene/vector system so that the gene cassette can be inserted with specificity. The length of the sequence flanked by the restriction sites is also a limiting factor, due to the use of synthetic gene cassettes.
Advantages
Since one gene cassette can contain multiple mutations, less total oligonucleotide synthesis and purification is needed. Compared to mutagenesis methods that require the synthesis of double-stranded DNA from a single-stranded template (1-30% efficiency in vitro in M13), the efficiency of ligating an oligodeoxynucleotide cassette is close to 100%. The high efficiency of the mutagenesis means mutants can be screened directly by sequencing. Once the vector is set up with flanking restriction sites, all manipulations (i.e., mutagenesis, sequencing, expression) can be performed in the same plasmid.
References
Genetics techniques
Molecular genetics
Mutagenesis
Protein engineering | Cassette mutagenesis | [
"Chemistry",
"Engineering",
"Biology"
] | 518 | [
"Genetics techniques",
"Molecular genetics",
"Genetic engineering",
"Molecular biology"
] |
49,045,892 | https://en.wikipedia.org/wiki/Zener%20ratio | The Zener ratio is a dimensionless number that is used to quantify the anisotropy of cubic crystals. It is sometimes referred to as the anisotropy ratio and is named after Clarence Zener. Conceptually, it quantifies how far a material is from being isotropic (where the value of 1 means an isotropic material).
Its mathematical definition is

a_r = 2C44 / (C11 - C12)

where C11, C12, and C44 are elastic constants in Voigt notation.
Cubic materials
Cubic materials are special orthotropic materials that are invariant under 90° rotations about the principal axes, i.e., the material is the same along its principal axes. Due to these additional symmetries the stiffness tensor can be written with just three different material properties as
The inverse of this matrix is commonly written as
where is the Young's modulus, is the shear modulus, and is the Poisson's ratio. Therefore, we can think of the ratio as the relation between the shear modulus for the cubic material and its (isotropic) equivalent:
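Numerically, the ratio follows directly from the three constants. This sketch assumes the standard definition a_r = 2C44/(C11 - C12); the elastic constants are approximate literature values used only for illustration:

```python
# Zener anisotropy ratio a_r = 2*C44 / (C11 - C12) for a cubic crystal,
# with elastic constants given in Voigt notation.

def zener_ratio(c11, c12, c44):
    return 2.0 * c44 / (c11 - c12)

# Room-temperature elastic constants in GPa; approximate literature
# figures, used here only for illustration.
copper   = zener_ratio(168.4, 121.4, 75.4)   # strongly anisotropic
tungsten = zener_ratio(501.0, 198.0, 151.4)  # nearly isotropic

print(f"Cu: {copper:.2f}, W: {tungsten:.2f}")  # Cu ≈ 3.21, W ≈ 1.00
```

A value near 1 (tungsten) indicates a nearly isotropic crystal, while copper's value above 3 indicates strong elastic anisotropy.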
Universal Elastic Anisotropy Index
The Zener ratio is only applicable to cubic crystals. To overcome this limitation, a 'Universal Elastic Anisotropy Index (AU)' was formulated from variational principles of elasticity and tensor algebra. The AU is now used to quantify the anisotropy of elastic crystals of all classes.
Tensorial Anisotropy Index
The Tensorial Anisotropy Index AT extends the Zener ratio to fully anisotropic materials and overcomes the limitation of the AU, which is designed for materials exhibiting the internal symmetries of elastic crystals, not always observed in multi-component composites. It takes into consideration all 21 coefficients of the fully anisotropic stiffness tensor and covers the directional differences among the stiffness tensor groups.
It is composed of two major parts and , the former referring to components existing in the cubic tensor and the latter in the anisotropic tensor, so that . This first component includes the modified Zener ratio and additionally accounts for directional differences in the material, which exist in orthotropic materials, for instance. The second component of this index covers the influence of stiffness coefficients that are nonzero only for non-cubic materials and remains zero otherwise.
where is the coefficient of variation for each stiffness group, accounting for directional differences of material stiffness, i.e., . In cubic materials each stiffness component in groups 1-3 has equal value, and thus this expression reduces directly to the Zener ratio for cubic materials.
The second component of this index is non-zero for complex materials or composites with only a few or no symmetries in their internal structure. In such cases the remaining stiffness coefficients, joined in three groups, are not null
See also
Anisotropy
Orthotropic material
Linear elasticity
References
Crystallography
Orientation (geometry)
Elasticity (physics) | Zener ratio | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 594 | [
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Materials science",
"Crystallography",
"Topology",
"Space",
"Condensed matter physics",
"Geometry",
"Spacetime",
"Orientation (geometry)",
"Physical properties"
] |
49,052,274 | https://en.wikipedia.org/wiki/Cure53 | Cure53 is a German cybersecurity firm. The company was founded by Mario Heiderich, a security researcher.
History
After a report from Cure53 on the South Korean security app Smart Sheriff, that described the app's security holes as "catastrophic", the South Korean government ordered the Smart Sheriff to be shut down.
Software audited by Cure53 includes Mastodon, OnionShare, Bitwarden, Mailvelope, GlobaLeaks, SecureDrop, Obsidian (client software), OpenPGP, Onion Browser, F-Droid, Nitrokey, Peerio, OpenKeychain, cURL, Briar, Mozilla Thunderbird, Threema, MetaMask, Proton Pass, Enpass and Passbolt, as well as many VPN and password manager providers.
References
External links
Computer security
Information technology companies of Germany
Companies based in Berlin | Cure53 | [
"Technology"
] | 183 | [
"Computer security stubs",
"Computing stubs"
] |
50,564,088 | https://en.wikipedia.org/wiki/Procore | Procore Technologies is an American construction management software as a service company founded in 2002, with headquarters in Carpinteria, California.
History
Founder and CEO Craig "Tooey" Courtemanche created the software that became Procore as a response to his struggles to manage the construction of his new home in Santa Barbara, from his then-home in Silicon Valley. The app he built tracked the activity of the workers onsite. Founded in 2002, the company was originally headquartered in Montecito, California. Steve Zahm, founder of the e-learning company DigitalThink, joined Procore as president in 2004.
Procore's revenue in 2012 was $4.8 million. In 2020, it was $400 million.
The company initially filed to go public in 2019, with plans to launch the IPO in 2020, but delayed the offering due to the coronavirus pandemic. Procore stock began trading under stock ticker PCOR on May 20, 2021 at $67 per share. The initial public offering raised $634.5 million. Following the IPO, the company was valued at nearly $11 billion. As of May 2021, the company has over 10,000 customers, and over 1.6 million users of its products in more than 125 countries.
Procore's campus is on a 9-acre oceanfront property in Carpinteria, California.
Investors and acquisitions
In 2014, Bessemer Venture Partners led a $15 million investment round. In 2015, the company raised an additional $30 million in a round led by Bessemer and Iconiq Capital. In 2015, the Wall Street Journal reported the company to be worth "$500 million post-money." In 2016, the company raised $50 million in a round led by Iconiq, reaching a $1 billion valuation. In 2018, the company raised an additional $75 million, and in 2020, it raised over $150 million. In total, the company raised nearly $500 million from 2007 through its IPO in 2021.
In July 2019, Procore acquired US project management software group Honest Buildings. In October 2020, it acquired US estimating software provider Esticom. Procore acquired construction artificial intelligence companies Avata Intelligence in 2020, and INDUS.AI in 2021.
Software
Procore's cloud-based construction management software allows teams of construction companies, property owners, project managers, contractors, and partners to collaborate on construction projects and share access to documents, planning systems and data, using an Internet-connected device. Data and video can also be streamed into the system via drones. The software includes features such as meeting minutes, drawing markups and document storage for all project-related materials.
Procore's offerings also include an app marketplace, with 300+ partners, including Box, an enterprise file storage and content management company; Botlink, a joint venture by Packet Digital that allows users to stream in both video and data from drones surveying their construction projects; and Dexter + Chaney, an ERP provider.
References
Software companies based in California
Business software companies
Architectural communication
Software companies of the United States
2002 establishments in California
Software companies established in 2002
American companies established in 2002
2021 initial public offerings
Companies listed on the New York Stock Exchange
Cloud computing providers
Companies based in Santa Barbara County, California | Procore | [
"Engineering"
] | 669 | [
"Construction",
"Architecture",
"Architectural communication",
"Construction software"
] |
47,306,429 | https://en.wikipedia.org/wiki/Cyanidin-3%2C5-O-diglucoside | Cyanidin-3,5-O-diglucoside, also known as cyanin, is an anthocyanin. It is the 3,5-O-diglucoside of cyanidin.
Natural occurrences
Cyanin can be found in species of the genus Rhaponticum (Asteraceae).
In food
Cyanin can be found in red wine as well as pomegranate juice according to a study done by Graça Miguel, Susana Dandlen, Dulce Antunes, Alcinda Neves, and Denise Martins in the winter of 2004. Pomegranate juice extracted through centrifugal seed separation has higher amounts of cyanidin-3,5-O-diglucoside than juice extracted by squeezing fruit halves with an electric lemon squeezer.
See also
Phenolic content in wine
References
External links
Anthocyanins
Flavonoid glucosides | Cyanidin-3,5-O-diglucoside | [
"Chemistry"
] | 198 | [
"PH indicators",
"Anthocyanins"
] |
47,308,192 | https://en.wikipedia.org/wiki/CTAIDI | The Customer Total Average Interruption Duration Index (CTAIDI) is a reliability index associated with electric power distribution. CTAIDI is the average total duration of interruption for customers who had at least one interruption during the period of analysis, and is calculated as:

CTAIDI = Σ N_i U_i / Σ CN_i

where N_i is the number of customers and U_i is the annual outage time for location i, and CN_i is the number of customers at location i that were interrupted. In other words, CTAIDI is the total customer-hours of interruption divided by the total number of customers interrupted.
CTAIDI is measured in units of time, such as minutes or hours. It is similar to CAIDI, but CAIDI divides the total duration of interruptions by the number of interruptions whereas CTAIDI divides by the number of interrupted customers. When CTAIDI is much greater than CAIDI, the service outages are more concentrated among certain customers.
CTAIDI also has the same numerator as SAIDI, but SAIDI divides the total duration of interruptions by the total number of customers served. The fraction of distinct customers interrupted thus illustrates the relationship between these reliability indicators: CTAIDI equals SAIDI divided by the fraction of customers interrupted.
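The verbal definitions above can be checked on a toy data set (the outage records below are invented): CTAIDI divides total customer-hours of interruption by the distinct customers interrupted, CAIDI by the number of customer interruptions, and SAIDI by all customers served:

```python
# Compare CTAIDI, CAIDI and SAIDI on a toy outage record.
# Each event: (customers_interrupted, duration_hours). Data are invented.
events = [(100, 2.0), (100, 1.0), (50, 4.0)]   # first two hit the same 100 customers

total_customers      = 1000                    # customers served by the system
distinct_interrupted = 150                     # 100 + 50 distinct customers

customer_hours         = sum(n * d for n, d in events)   # 200 + 100 + 200 = 500
customer_interruptions = sum(n for n, _ in events)       # 250

saidi  = customer_hours / total_customers           # 0.5 h per customer served
caidi  = customer_hours / customer_interruptions    # 2.0 h per interruption
ctaidi = customer_hours / distinct_interrupted      # ~3.33 h per interrupted customer

print(saidi, caidi, ctaidi)
```

Here CTAIDI exceeds CAIDI because the interruptions are concentrated on repeat customers, matching the observation above that a large CTAIDI/CAIDI gap indicates outages concentrated among certain customers.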
References
Electric power
Reliability indices | CTAIDI | [
"Physics",
"Engineering"
] | 209 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
55,815,126 | https://en.wikipedia.org/wiki/Locomotor%20mimicry | Locomotor mimicry is a subtype of Batesian mimicry in which animals avoid predation by mimicking the movements of another, phylogenetically separate species. This can take the form of mimicking a less desirable species or of mimicking the predator itself. Animals can show similarity to the swimming, walking, or flying of their model animals.
The complex interaction between mimics, models, and predators (sometimes called observers) can help explain similarities amongst species beyond ideas that emerge from evolutionary comparative approaches. In terms of overall movement, the continuous locomotor mimicry of a species that differs anatomically from the mimic may increase metabolic cost. However, the benefit of avoiding predation appears to outweigh the increased energy cost, because mimicking animals tend to have higher survival rates than their non-mimicking counterparts.
Terrestrial locomotor mimicry
The most common form of terrestrial locomotor mimicry is found in ant-mimicking spiders. These mimics are capable of antennal illusions and of gait patterns similar to an ant's, as shown in the jumping spider family (Araneae, Salticidae). Ants appear to be beneficial models because they possess effective protective traits such as chemical defences and aggressiveness. Spiders, however, lack some of these specialized traits and therefore, by acting as ants, may avoid predation because predators have less desire for ants.
Mimetic jumping spiders imitate the zig-zag trajectories of ants, which appears to be beneficial for avoiding predators that are from an elevated vantage point. However, this may be an example of imperfect mimicry because the spiders display this behaviour in settings where ants do not.
It was once thought that these ant-mimicking spiders walk on 6 legs instead of 8 so that they could use a set of legs to mimic ant antennae. However, further analysis revealed that the spiders only do this whilst stationary, which leads to the assumption that there may be a limit to the neural circuitry underlying limb movement that does not allow them to move on 6 legs. This antennal mimicry appears to be most beneficial whilst in a close proximity to a predator.
Another example of terrestrial locomotor mimicry is seen in salticid-mimicking moths. The moths fan out their hind wings and their forewings are raised above their bodies. In this position, the moth's wings look like salticid legs. Moths that resemble the appearance and locomotion of predatory spiders are preyed upon less by the spiders. The spiders will even display courtship or territorial behaviour towards the mimics, indicating that the spiders misidentify the moths as conspecifics. Even if the spiders eventually eat the moths, the time it takes for the first attack to occur is longer than the time taken to attack non-mimetic moths.
Aerial locomotor mimicry
In butterflies, it is thought that palatability to predators is related to flight components. Typically, fast-flying prey are more palatable, whereas unpalatable species tend to fly more slowly. These flight characteristics could help predators recognize prey as being palatable or unpalatable. Researchers compared the flight patterns of palatable non-mimetic, palatable mimetic, and unpalatable butterflies by looking at directional flight changes of each species. It was determined that the palatable mimetic butterfly species had a significantly different flight pattern compared to the palatable non-mimetic. The palatable mimetic species had a flight pattern that resembled that of their unpalatable models.
Another example of aerial locomotor mimicry is found in the common drone fly (Eristalis tenax) and its presumed model, the western honey bee (Apis mellifera). In analyses of flight sequences, flight velocities, flight trajectories, and time spent hovering, it was found that the flight patterns of common drone flies were more similar to honey bees than to that of other flies. The drone flies and their models both exhibit loops in their flight paths, which is surprising for the drone flies because they are very adept fliers. A likely explanation for this flight behaviour is that, while foraging, the drone flies are at increased risk of predation by birds and therefore they alter their flying to resemble the noxious honeybee and avoid predation.
Inanimate object locomotor mimicry
The ghost pipefish is able to blend into its surroundings due to its similarity in colour and motion to sea plants. In order to avoid predators, the organism will sway in the water to resemble underwater vegetation as much as possible.
See also
Anti-predator adaptation
Defensive mimicry
References
Mimicry
Animal locomotion | Locomotor mimicry | [
"Physics",
"Biology"
] | 951 | [
"Animal locomotion",
"Physical phenomena",
"Behavior",
"Animals",
"Biological defense mechanisms",
"Mimicry",
"Motion (physics)",
"Ethology"
] |
55,818,277 | https://en.wikipedia.org/wiki/Grafomap | Grafomap is a Latvia-based design company that combines OpenStreetMap data with design filters, allowing people to create map posters of places in the world.
History
The company and its team are located in Latvia, while the posters and maps are printed and shipped from Los Angeles and Riga. Grafomap was founded in 2016 by Rihards Piks and Karlis Bikis.
According to co-founder Rihards Piks, the start-up was inspired by Snazzy Maps, a WordPress plugin that colors maps for website contact pages. The idea transformed into creating custom maps in real time for people who would like to use maps as wall posters.
The start-up has been featured in numerous fashion, art and business outlets like The Guardian, Chicago Tribune, Launching Next, The Coolector, Simply Grove, PSFK and Product Hunt.
References
External links
Companies of Latvia
Design companies
Design companies established in 2016
Latvian companies established in 2016 | Grafomap | [
"Engineering"
] | 198 | [
"Design",
"Engineering companies",
"Design companies"
] |
55,823,152 | https://en.wikipedia.org/wiki/Diffeomorphometry | Diffeomorphometry is the metric study of imagery, shape and form in the discipline of computational anatomy (CA) in medical imaging. The study of images in computational anatomy relies on high-dimensional diffeomorphism groups, which generate orbits of the form , in which images can be dense scalar magnetic resonance or computed axial tomography images. For deformable shapes these are the collection of manifolds , points, curves and surfaces. The diffeomorphisms move the images and shapes through the orbit according to which are defined as the group actions of computational anatomy.
The orbit of shapes and forms is made into a metric space by inducing a metric on the group of diffeomorphisms. The study of metrics on groups of diffeomorphisms and the study of metrics between manifolds and surfaces has been an area of significant investigation. In Computational anatomy, the diffeomorphometry metric measures how close and far two shapes or images are from each other. Informally, the metric is constructed by defining a flow of diffeomorphisms which connect the group elements from one to another, so for then . The metric between two coordinate systems or diffeomorphisms is then the shortest length or geodesic flow connecting them. The metric on the space associated to the geodesics is given by. The metrics on the orbits are inherited from the metric induced on the diffeomorphism group.
The group is thus made into a smooth Riemannian manifold with a Riemannian metric associated to the tangent spaces at all . The Riemannian metric satisfies the condition that at every point of the manifold there is an inner product inducing the norm on the tangent space, varying smoothly across .
Oftentimes, the familiar Euclidean metric is not directly applicable because the patterns of shapes and images don't form a vector space. In the Riemannian orbit model of Computational anatomy, diffeomorphisms acting on the forms don't act linearly. There are many ways to define metrics, and for the sets associated to shapes the Hausdorff metric is another. The method used to induce the Riemannian metric is to induce the metric on the orbit of shapes by defining it in terms of the metric length between diffeomorphic coordinate system transformations of the flows. Measuring the lengths of the geodesic flow between coordinates systems in the orbit of shapes is called diffeomorphometry.
The diffeomorphisms group generated as Lagrangian and Eulerian flows
The diffeomorphisms in computational anatomy are generated to satisfy the Lagrangian and Eulerian specification of the flow fields, , generated via the ordinary differential equation
with the Eulerian vector fields in for . The inverse for the flow is given by
and the Jacobian matrix for flows in given as
To ensure smooth flows of diffeomorphisms with inverse, the vector fields must be at least 1-time continuously differentiable in space. They are modelled as elements of the Hilbert space using the Sobolev embedding theorems, so that each element has 3 square-integrable derivatives; this implies that embeds smoothly in the 1-time continuously differentiable functions. The diffeomorphism group consists of flows with vector fields absolutely integrable in Sobolev norm:
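As a minimal sketch of the Lagrangian/Eulerian specification, the flow map can be obtained by integrating the ODE for a given Eulerian velocity field. The 1-D field used here is a hypothetical smooth example, not one produced by an actual registration:

```python
import math

# Integrate the flow ODE d/dt phi_t(x) = v(t, phi_t(x)), phi_0 = identity,
# for a hypothetical smooth 1-D velocity field. Small time steps keep the
# computed map order-preserving (a discrete proxy for invertibility).

def v(t, x):
    return 0.5 * math.sin(x)          # example Eulerian velocity field

def flow(x0, steps=1000, T=1.0):
    x, dt = x0, T / steps
    for k in range(steps):
        x += dt * v(k * dt, x)        # forward-Euler update of phi_t(x0)
    return x

xs = [i * 0.5 for i in range(7)]      # sample points in [0, 3]
phi = [flow(x) for x in xs]
# monotonicity of phi on the samples indicates the map stays invertible
assert all(a < b for a, b in zip(phi, phi[1:]))
print([round(p, 3) for p in phi])
```

In practice the velocity field lives on a 2-D or 3-D grid and is spatially smooth by construction (the Sobolev-norm requirement above); the integration scheme, however, is the same.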
The Riemannian orbit model
Shapes in Computational Anatomy (CA) are studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinate systems. In this setting, 3-dimensional medical images are modelled as diffeomorphic transformations of some exemplar, termed the template , so that the observed images are elements of the random orbit model of CA. For images these are defined as , with for charts representing sub-manifolds denoted as .
The Riemannian metric
The orbit of shapes and forms in Computational Anatomy is generated by the group action , . These are made into Riemannian orbits by introducing a metric associated to each point and its associated tangent space. For this a metric is defined on the group which induces the metric on the orbit. Take as the metric for Computational anatomy at each element of the tangent space in the group of diffeomorphisms
with the vector fields modelled to be in a Hilbert space with the norm in the Hilbert space . We model as a reproducing kernel Hilbert space (RKHS) defined by a 1-1, differential operator , where is the dual-space. In general, is a generalized function or distribution, the linear form associated to the inner-product and norm for generalized functions are interpreted by integration by parts according to for ,
When , a vector density,
The differential operator is selected so that the Green's kernel associated to the inverse is sufficiently smooth so that the vector fields support 1-continuous derivative. The Sobolev embedding theorem arguments were made in demonstrating that 1-continuous derivative is required for smooth flows. The Green's operator generated from the Green's function (scalar case) associated to the differential operator smooths.
For proper choice of , is an RKHS with the operator . The Green's kernels associated to the differential operator smooth, since for controlling enough derivatives in the square-integral sense the kernel is continuously differentiable in both variables, implying
The diffeomorphometry of the space of shapes and forms
The right-invariant metric on diffeomorphisms
The metric on the group of diffeomorphisms is defined by the distance as defined on pairs of elements in the group of diffeomorphisms according to
This distance provides a right-invariant metric of diffeomorphometry, invariant to reparameterization of space since for all ,
The metric on shapes and forms
The distance on images, ,
The distance on shapes and forms, ,
The metric on geodesic flows of landmarks, surfaces, and volumes within the orbit
For calculating the metric, the geodesics are a dynamical system: the flow of coordinates and the control the vector field related via . The Hamiltonian view reparameterizes the momentum distribution in terms of the Hamiltonian momentum, a Lagrange multiplier constraining the Lagrangian velocity , accordingly:
The Pontryagin maximum principle gives the Hamiltonian
The optimizing vector field , with dynamics . Along the geodesic the Hamiltonian is constant:
The metric distance between coordinate systems connected via the geodesic is determined by the induced distance between the identity and the group element:
Landmark or pointset geodesics
For landmarks, , the Hamiltonian momentum
with Hamiltonian dynamics taking the form
with
The metric between landmarks
The dynamics associated to these geodesics is shown in the accompanying figure.
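A hedged sketch of the landmark case: assuming a Gaussian reproducing kernel K(x,y) = exp(-|x-y|²/(2σ²)) (one common choice, not mandated by the theory), the standard landmark geodesic equations dq_i/dt = Σ_j K(q_i,q_j) p_j and dp_i/dt = (1/σ²) Σ_j (p_i·p_j)(q_i-q_j) K(q_i,q_j) can be integrated directly:

```python
import math

# Landmark geodesic shooting under a Gaussian kernel (assumed choice):
#   dq_i/dt = sum_j K(q_i, q_j) p_j
#   dp_i/dt = (1/s^2) * sum_j (p_i . p_j) (q_i - q_j) K(q_i, q_j)
# Forward-Euler integration; for real use prefer a symplectic scheme.

def K(x, y, s):
    return math.exp(-((x[0]-y[0])**2 + (x[1]-y[1])**2) / (2*s*s))

def shoot(q, p, s=1.0, steps=100, dt=0.01):
    for _ in range(steps):
        dq = [[sum(K(qi, qj, s)*pj[d] for qj, pj in zip(q, p)) for d in range(2)]
              for qi in q]
        dp = [[sum((pi[0]*pj[0] + pi[1]*pj[1])*(qi[d]-qj[d])*K(qi, qj, s)/(s*s)
                   for qj, pj in zip(q, p)) for d in range(2)]
              for qi, pi in zip(q, p)]
        q = [[qi[d] + dt*dqi[d] for d in range(2)] for qi, dqi in zip(q, dq)]
        p = [[pi[d] + dt*dpi[d] for d in range(2)] for pi, dpi in zip(p, dp)]
    return q, p

# One landmark: K(q,q) = 1 and the self-term of dp vanishes, so the point
# moves in a straight line with constant momentum.
q, p = shoot([[0.0, 0.0]], [[1.0, 0.0]], steps=100, dt=0.01)
print(q, p)  # q ≈ [[1.0, 0.0]], p = [[1.0, 0.0]]
```

With several landmarks the kernel couples the equations, so nearby points drag each other along, which is the discrete analogue of the smooth flows described above.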
Surface geodesics
For surfaces, the Hamiltonian momentum is defined across the surface has Hamiltonian
and dynamics
The metric between surface coordinates
Volume geodesics
For volumes the Hamiltonian
with dynamics
The metric between volumes
Software for diffeomorphic mapping
Software suites containing a variety of diffeomorphic mapping algorithms include the following:
Deformetrica
ANTS
DARTEL voxel-based morphometry (VBM)
DEMONS
LDDMM
StationaryLDDMM
Cloud software
MRICloud
References
Computational anatomy
Medical imaging
Geometry
Mathematical analysis
Fluid mechanics
Bayesian estimation
Neuroscience
Neural engineering
Biomedical engineering | Diffeomorphometry | [
"Mathematics",
"Engineering",
"Biology"
] | 1,485 | [
"Mathematical analysis",
"Biological engineering",
"Neuroscience",
"Biomedical engineering",
"Civil engineering",
"Geometry",
"Fluid mechanics",
"Medical technology"
] |
55,823,783 | https://en.wikipedia.org/wiki/Mulberry%20%28uranium%20alloy%29 | Mulberry is a uranium alloy.
It is used as a non-corroding or 'stainless' uranium alloy. It has been put forward as a structural material for the casings of the physics package in nuclear weapons, including those of North Korea.
The composition is a ternary alloy, of 7.5% niobium, 2.5% zirconium, 90% uranium.
Mulberry was developed in the 1960s at UCRL. Binary alloy compositions were first studied to avoid the mechanical problems of pure uranium: corrosion, dimensional instability, and the inability to improve its mechanical properties by heat treatment. Uranium-molybdenum alloys were found susceptible to stress-corrosion cracking, uranium-niobium alloys to be weak, and uranium-zirconium alloys to be brittle. Ternary alloys were next studied to try to avoid these drawbacks. Uranium-niobium-zirconium was found to be corrosion resistant and to permit age hardening, which could increase its hardness from .
Multiple crystal phases were observed, with a critical temperature of 650°C. Above this the body-centered cubic γ phase was stable. Water quenching to room temperature produces a γs transition phase and with aging this transforms to a tetragonal γo phase. Further aging produces a monoclinic ɑ phase that is observed metallographically as a Widmanstätten pattern. The crystal structure of the alloy has been studied, particularly the γ phase. Uranium inclusions have been observed within the alloy although, unlike the binary alloys, niobium-rich inclusions were not. Early studies were uncertain as to whether these were inherent behaviours, or artifacts of their processing.
References
Uranium
Alloys | Mulberry (uranium alloy) | [
"Chemistry"
] | 351 | [
"Chemical mixtures",
"Alloys"
] |
55,825,837 | https://en.wikipedia.org/wiki/Beatriz%20%C3%81lvarez%20Sanna | Beatriz Álvarez Sanna (born 17 September 1968) is a Uruguayan chemist and biochemistry professor at the Faculty of Sciences of the University of the Republic. She researches in the areas of redox biochemistry and enzymology. In 2013 she was the winner of the L'Oréal-UNESCO Award for Women in Science.
Career
Álvarez Sanna earned a bachelor's degree in chemistry in 1991. In 1993 she obtained a master's degree in chemistry with a thesis focused on bacterial metabolism. In 1999 she received her doctorate in chemistry from the University of the Republic. Her thesis focused on the biological chemistry of peroxynitrite.
She works as associate professor at the Enzymology Laboratory of the Faculty of Sciences, University of the Republic.
Her main interests are redox biochemistry, kinetics, and enzymology. She develops lines of research in biological thiols and hydrogen sulfide. She is a member of the Editorial Committee of the Journal of Biological Chemistry.
In 2013 Álvarez Sanna received the L'Oréal-UNESCO National Award. Her project on the biochemistry of hydrogen sulfide deals with this compound and its possible modulation for pharmacological production and administration, which could constitute new alternatives for the treatment of a wide spectrum of conditions, including hypertension, atherosclerosis, diabetes, and inflammation.
She is a researcher at the Programa de Desarrollo de las Ciencias Básicas (PEDECIBA), and a Level II member of the Sistema Nacional de Investigadores (SNI).
She is co-author of more than 40 publications in refereed international journals.
References
1968 births
20th-century Uruguayan educators
20th-century women scientists
21st-century Uruguayan educators
21st-century women scientists
Uruguayan biochemists
Uruguayan chemists
Living people
L'Oréal-UNESCO Awards for Women in Science laureates
University of the Republic (Uruguay) alumni
Academic staff of the University of the Republic (Uruguay)
Uruguayan educators
Uruguayan women educators
Women biochemists | Beatriz Álvarez Sanna | [
"Chemistry"
] | 391 | [
"Biochemists",
"Women biochemists"
] |
59,158,616 | https://en.wikipedia.org/wiki/Li%C3%B1%C3%A1n%27s%20equation | In the study of diffusion flames, Liñán's equation is a second-order nonlinear ordinary differential equation which describes the inner structure of a diffusion flame, first derived by Amable Liñán in 1974. The equation reads as
subjected to the boundary conditions
where is the reduced or rescaled Damköhler number and is the ratio of excess heat conducted to one side of the reaction sheet to the total heat generated in the reaction zone. If , more heat is transported to the oxidizer side, thereby reducing the reaction rate there (since the reaction rate depends on the temperature), and consequently a greater amount of fuel leaks into the oxidizer side. If , more heat is transported to the fuel side of the diffusion flame, thereby reducing the reaction rate on the fuel side and increasing the oxidizer leakage into the fuel side. When , all the heat is transported to the oxidizer (fuel) side and therefore the flame sustains an extremely large amount of fuel (oxidizer) leakage.
The equation is, in some respects, universal (also called the canonical equation of the diffusion flame): although Liñán derived it for stagnation-point flow, assuming unity Lewis numbers for the reactants, the same equation is found to represent the inner structure of general laminar flamelets having arbitrary Lewis numbers.
Existence of solutions
Near the extinction of the diffusion flame, is of order unity. The equation has no solution for , where is the extinction Damköhler number. For with , the equation possesses two solutions, of which one is unstable. A unique solution exists if and . The solution is also unique for , where is the ignition Damköhler number.
Liñán also gave a correlation formula for the extinction Damköhler number, which is increasingly accurate for ,
Generalized Liñán's equation
The generalized Liñán's equation is given by
where and are constant reaction orders of fuel and oxidizer, respectively.
Large Damköhler number limit
In the Burke–Schumann limit, . Then the equation reduces to
An approximate solution to this equation was developed by Liñán himself using an integral method in 1963 for his thesis,
where is the error function and
Here is the location where reaches its minimum value . When , , and .
See also
Liñán's diffusion flame theory
References
Fluid dynamics
Combustion
Ordinary differential equations | Liñán's equation | [
"Chemistry",
"Engineering"
] | 480 | [
"Piping",
"Chemical engineering",
"Combustion",
"Fluid dynamics"
] |
64,164,203 | https://en.wikipedia.org/wiki/Jan-Erik%20Roos | Jan-Erik Ingvar Roos (16 October 1935 – 15 December 2017) was a Swedish mathematician whose research interests were in abelian category theory, homological algebra, and related areas.
He was born in Halmstad, in the province of Halland on the Swedish west coast. Roos enrolled at Lund University in 1954, and started studying mathematics with Lars Gårding in 1957. Under Gårding's direction he wrote a thesis on ordinary differential equations, and graduated in 1958 with a licentiate degree. Later that year he went to Paris on a doctoral scholarship; there, he gravitated towards the mathematical environment at the Institut Henri Poincaré, and the various seminars held there. After a while, he started attending Alexander Grothendieck's seminar at the Institut des hautes études scientifiques in Bures-sur-Yvette, where he became interested in abstract algebra and algebraic geometry. In 1967 he was invited by Saunders Mac Lane to visit the University of Chicago for three months; Mac Lane was impressed by Roos and later wrote a very positive letter of recommendation for him.
Upon his return to Sweden, Roos was appointed Professor of Mathematics at Stockholm University in 1970, and started building a strong algebra school. He was elected to the Royal Swedish Academy of Sciences in 1980 and was its President from 1980 to 1982. While serving on the Academy, he was on the committees deciding the Rolf Schock Prizes in Mathematics and the Crafoord Prize in Astronomy and Mathematics.
Roos made important contributions to homological algebra, and did extensive computer-assisted studies of Hilbert–Poincaré series and their rationality. A special issue of the journal Homology, Homotopy and Applications ("The Roos Festschrift volume") was published in 2002, on the occasion of his 65th birthday.
He died on 15 December 2017 at his home in Uppsala and is buried at the Uppsala old cemetery.
Publications
References
1935 births
2017 deaths
People from Halmstad
20th-century Swedish mathematicians
21st-century Swedish mathematicians
Algebraists
Lund University alumni
Swedish expatriates in France
Members of the Royal Swedish Academy of Sciences
Academic staff of Stockholm University
Burials at Uppsala old cemetery | Jan-Erik Roos | [
"Mathematics"
] | 450 | [
"Algebra",
"Algebraists"
] |
64,170,193 | https://en.wikipedia.org/wiki/Hack%20Club | Hack Club is a global nonprofit network of high school computer hackers, makers and coders founded in 2014 by Zach Latta. It now includes more than 500 high school clubs and 40,000 students. It has been featured on the TODAY Show, and profiled in the Wall Street Journal and many other publications.
Programs
Hack Club's primary focus is its clubs program, in which it supports high school coding clubs through learning resources and mentorship. It also runs a series of other programs and events.
Some of their notable programs and events include:
HCB - A fiscal sponsorship program originally targeted at high school hacker events
AMAs - Video calls with industry experts such as Elon Musk, Vitalik Buterin, and Sal Khan
Summer of Making - A collaboration with GitHub, Adafruit & Arduino to create an online summer program for teenagers during the COVID-19 pandemic that included $50k in hardware donations to teen hackers around the world
The Hacker Zephyr - A cross-country hackathon on a train across America
Assemble - The first high school hackathon in San Francisco since the COVID-19 pandemic, with the stated goal of "kick[ing] off a hackathon renaissance"
Epoch - A global high schooler-led hackathon in Delhi NCR organized in public to inspire the community of student hackers and bring hundreds of teenagers together
Winter Hardware Wonderland - An online winter program where teenagers submit ideas for hardware projects and, if accepted, get grants of up to $250
Outernet - An experimental four-day hackathon and camping trip in the Northeast Kingdom
2024 Leader's Summit - A 72-hour hackathon in San Francisco where teenage club leaders built projects for their club members to use
Wonderland - A 48-hour hackathon in Boston where teenagers built projects using random items found in their "chest"
Apocalypse - A 42-hour high-school hackathon at Shopify's Toronto office, with the theme of a "zombie apocalypse"
The Boreal Express - A cross-country hackathon on a train in partnership with Via Rail originally planned from Vancouver to Montreal, but was turned around due to wildfires in Jasper, Alberta
Arcade - An online summer program in collaboration with GitHub, allowing teenagers to log work on creative projects to earn “tickets”, which could be exchanged for prizes
Onboard - A $100 grant for high schoolers to produce PCBs
Funding
Hack Club is funded by grants from philanthropic organizations and donations from individual supporters. In 2019, GitHub Education provided cash grants of up to $500 to every Hack Club "hackathon" event. In May 2020, GitHub committed to a $50K hardware fund, globally alongside Arduino and Adafruit, to deliver hardware tools directly to students’ homes with a program named Hack Club Summer of Making. Elon Musk and the Musk Foundation donated $500,000 to help expand Hack Club in 2020, donated another $1,000,000 in 2021, and an additional $4,000,000 in 2023. In 2022, Tom and Theresa Preston-Werner donated $500,000 to Hack Club.
See also
Ethical hacking
References
Hacker culture
Clubs and societies
Computer programming
2014 establishments in Vermont | Hack Club | [
"Technology",
"Engineering"
] | 669 | [
"Software engineering",
"Computer programming",
"Computers"
] |
64,170,778 | https://en.wikipedia.org/wiki/Interface%20force%20field | In the context of chemistry and molecular modelling, the Interface force field (IFF) is a force field for classical molecular simulations of atoms, molecules, and assemblies up to the large nanometer scale, covering compounds from across the periodic table. It employs a consistent classical Hamiltonian energy function for metals, oxides, and organic compounds, linking biomolecular and materials simulation platforms into a single platform. The reliability is often higher than that of density functional theory calculations at more than a million times lower computational cost. IFF includes a physical-chemical interpretation for all parameters as well as a surface model database that covers different cleavage planes and surface chemistry of included compounds. The Interface Force Field is compatible with force fields for the simulation of primarily organic compounds and can be used with common molecular dynamics and Monte Carlo codes. Structures and energies of included chemical elements and compounds are rigorously validated and property predictions are up to a factor of 100 more accurate relative to earlier models.
Origin
IFF was developed by Hendrik Heinz and his research group in 2013, based on preliminary work dating back to 2003 that includes a new rationale for atomic charges, use of energy expressions, interpretation of parameters, and a series of outperforming force field parameters for minerals, metals, and polymers. The force fields covered new chemical space and were one to two orders of magnitude more accurate than prior models where available, with apparently no restrictions to extend them further across the periodic table.
As early as the late 1960s, interatomic potentials were developed, for example, for amino acids and later served the CHARMM program. The fraction of covered chemical space was small, however, considering the size of the periodic table, and compatible interatomic potentials for inorganic compounds remained largely unavailable. Different energy functions and a lack of interpretation and validation of parameters restricted modeling to isolated compounds with unpredictable errors. Assumptions of formal charges, a lack of rationale for Lennard-Jones parameters and even for bonded terms, fixed atoms, as well as other approximations often led to collapsed structures and random energy differences when allowing atom mobility. A concept for consistent simulations of inorganic-organic interfaces, which formed the basis of IFF, was first introduced in 2003.
A major obstacle was the poor definition of atomic charges in molecular models, especially for inorganic compounds, due to reliance on quantum chemistry calculations and partitioning methods that may be suitable for field-based but not for point-based charge distributions necessary in force fields. As a result, uncertainties in quantum-mechanically derived point charges were often 100% or higher, clearly unsuited to quantify chemical bonding or chemical processes in force fields and in molecular simulations. IFF utilizes a method to assign atomic charges that translates chemical bonding accurately into molecular models, including metals, oxides, minerals, and organic molecules. The models reproduce multipole moments internal to a chemical compound on the basis of experimental data for electron deformation densities, dipole moments (often known to <1% error), as well as consideration of atomization energies, ionization energies, coordination numbers, and trends relative to other chemically similar compounds in the periodic table (the Extended Born Model). The method ensures a combination of experimental data and theory to represent chemical bonding and yields up to ten times more reliable and reproducible atomic charges in comparison to the use of quantum methods, with typical uncertainties of 5%. This approach is essential to carry out consistent all-atom simulations of compounds across the periodic table that vary widely in the type of chemical bonding and in internal polarity. IFF also allows the inclusion of specific features of the electronic structure such as π electrons in graphitic materials and aromatic compounds as well as image charges in metals.
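The charge-assignment rationale above can be illustrated with a minimal sketch: a point-charge model reproduces a molecular dipole moment via μ = Σᵢ qᵢrᵢ, which can then be checked against experimental dipole data. The water-like geometry and SPC-type charges below are generic textbook values chosen for illustration, not IFF parameters.

```python
import numpy as np

# Illustrative SPC-like point-charge model of water (NOT actual IFF parameters):
# charges q_i in units of the elementary charge e, positions r_i in Angstrom.
charges = np.array([-0.82, 0.41, 0.41])             # O, H, H
positions = np.array([[ 0.000, 0.000, 0.000],       # O
                      [ 0.757, 0.586, 0.000],       # H
                      [-0.757, 0.586, 0.000]])      # H

# Dipole moment of the point-charge model: mu = sum_i q_i * r_i  (in e*Angstrom)
mu = charges @ positions

# Convert to Debye (1 e*Angstrom ~ 4.8032 D). Gas-phase water measures ~1.85 D;
# fixed-charge models of the liquid deliberately come out higher (~2.2-2.4 D).
mu_debye = np.linalg.norm(mu) * 4.8032
print(f"model dipole moment: {mu_debye:.2f} D")
```

By symmetry the dipole points along the bisector of the H-O-H angle; tuning the charges against a measured dipole moment (rather than a quantum-mechanical partitioning) is the kind of experimental anchoring described above.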
Another distinctive characteristic of IFF is the systematic reproduction of structures and energies to validate the classical Hamiltonian. First, the quality of structural predictions is assessed by validation of lattice parameters and densities from X-ray data, which has been common in molecular simulations. Second, IFF uses surface and cleavage energies for solids from experimental measurements to ensure a reliable potential energy surface. Third, force field parameters and reference data are considered at standard temperature and pressure. This protocol is far more practical than using lattice parameters at 0 K and cohesive (vaporization) energies at up to 3000 K, as is common when assessing ab-initio calculations; such conditions are far from practical utility and experimental data for validation may be limited or unavailable. As a result of the advances in IFF, hydration energies, adsorption energies, thermal, and mechanical properties can often be computed in quantitative agreement with measurements without further parameter modifications. The IFF parameters also have a physical-chemical interpretation and allow chemical analogy as an effective method to derive parameters for chemically similar, not yet parameterized compounds with good accuracy.
Alternative approaches based on gray-box or black-box fitting of force field parameters, e.g., using lattice parameters and mechanical properties (the 2nd derivative of the energy) as target quantities, lack interpretability and frequently incur 50% to 500% error in surface and interfacial energies, which is usually not sufficient to accelerate materials design.
Current coverage
IFF covers metals, oxides, 2D materials, cement minerals, and organic compounds. The typical accuracy is ~0.5% for lattice parameters, ~5% for surface energies, and ~10% for elastic moduli, including documented variations for individual compounds. All-atom models and simulation inputs for bulk materials and interfaces can be built using Materials Studio, VMD, LAMMPS, CHARMM-GUI, as well as other editing programs. Simulations and analysis can be carried out using many molecular dynamics programs such as Discover, Forcite, LAMMPS, NAMD, GROMACS, and CHARMM. IFF employs the same potential energy function as other common force fields (CHARMM, AMBER, OPLS-AA, CVFF, DREIDING, GROMOS, PCFF, COMPASS), including options for 12-6 and 9-6 Lennard-Jones potentials, and can be used standalone or as a plugin to these force fields to utilize existing parameters.
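The 12-6 and 9-6 Lennard-Jones options mentioned above differ only in the repulsive exponent. A minimal sketch of the two standard functional forms, written in the (ε, r_min) convention (generic textbook expressions with illustrative parameter values, not IFF's published parameters):

```python
def lj_12_6(r, eps, r_min):
    """12-6 Lennard-Jones in (epsilon, r_min) form: well of depth -eps at r = r_min."""
    u = r_min / r
    return eps * (u**12 - 2.0 * u**6)

def lj_9_6(r, eps, r_min):
    """9-6 Lennard-Jones in the form used by PCFF/COMPASS-class force fields."""
    u = r_min / r
    return eps * (2.0 * u**9 - 3.0 * u**6)

eps, r_min = 0.1, 3.5   # illustrative values (kcal/mol, Angstrom), not IFF parameters

# Both forms share the same minimum: E(r_min) = -eps ...
assert abs(lj_12_6(r_min, eps, r_min) + eps) < 1e-12
assert abs(lj_9_6(r_min, eps, r_min) + eps) < 1e-12

# ... but the 12-6 repulsive wall rises more steeply inside r_min.
print(lj_12_6(3.0, eps, r_min), lj_9_6(3.0, eps, r_min))
```

At equal ε and r_min both curves have the same well, while the 9-6 form gives a softer repulsive wall, so the choice of exponents mainly controls short-range behavior.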
Applications
Accurate interatomic potentials are essential to analyze assemblies of atoms, molecules, and nanostructures up to the small microscale. IFF is used in molecular dynamics simulations of nanomaterials and biological interfaces. Structures of up to tens of thousands of atoms can be analyzed on a workstation, and up to a billion atoms using supercomputing. Examples include properties of metals and alloys, mineral-organic interfaces, protein- and DNA-nanomaterial interactions, earth and building materials, carbon nanostructures, batteries, and polymer composites. The simulations visualize atomically resolved processes and quantify relationships to macroscale properties that are elusive from experiments due to limitations in imaging and tracking of atoms. Modeling thereby complements experimental studies by X-ray diffraction, electron microscopy and tomography, such as transmission electron microscopy and atomic force microscopy, as well as several types of spectroscopy, calorimetry, and electrochemical measurements. Knowledge of the 3D atomic structures and dynamic changes over time is key to understanding the function of sensors, molecular signatures of diseases, and material properties. Computations with IFF can also be used to screen large numbers of hypothetical materials for guidance in synthesis and processing.
Surface model database
A database in IFF provides simulation-ready models of crystal structures and crystallographic surfaces of metals and minerals. Often, variable surface chemistry is important, such as in pH-responsive surfaces of silica, hydroxyapatite, and cement minerals. The model options in the database incorporate extensive experimental data, which can be selected and customized by users. For example, models for silica cover the flexible area density of silanol groups and siloxide groups according to data from differential thermal gravimetry, spectroscopy, zeta potentials, surface titration, and pK values. Similarly, hydroxyapatite minerals in bone and teeth display surfaces that differ in dihydrogenphosphate versus monohydrogenphosphate content as a function of pH value. The surface chemistry is often as critical as good interatomic potentials to predict the dynamics of electrolyte interfaces, molecular recognition, and surface reactions.
Application to chemical reactions
IFF is primarily a classical potential with limited applicability to chemical reactions. Quantitative simulation of reactions is, however, a natural extension due to an interpretable representation of chemical bonding and electronic structure. Simulations of the relative activity of Pd nanoparticle catalysts in C-C Stille coupling, hydration reactions, and cis-trans isomerization reactions of azobenzene have been reported. A general pathway to simulate reactions is QM/MM simulation. Other pathways to implement reactions are user-defined changes in bond connectivity during the simulations, and use of a Morse potential instead of a harmonic bond potential to enable bond breaking in stress-strain simulations.
References
Intermolecular forces
Molecular physics
Interface force field
Molecular modelling | Interface force field | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,853 | [
"Molecular physics",
"Force fields (chemistry)",
"Materials science",
"Intermolecular forces",
"Theoretical chemistry",
"Molecular modelling",
"Molecular dynamics",
"Computational chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
61,803,136 | https://en.wikipedia.org/wiki/Blowback%20%28steam%20engine%29 | A blowback (also blow back or blow-back) is a failure of a steam locomotive, which can be catastrophic.
One type of blowback is caused when atmospheric air blows down the locomotive's chimney, causing the flow of hot gases through the boiler tubes to be reversed, with the fire itself being blown through the firehole onto the footplate, with potentially serious consequences for the crew. The risk of backdraught is higher when the locomotive enters a tunnel because of the pressure shock. Such blowbacks can be prevented by opening the blower before closing the regulator. Similar blowback can be caused by debris or other obstructions in the smokebox.
In the days when steam-hauled trains were common in the United Kingdom, blowbacks occurred fairly frequently. In a 1955 report on an accident near Dunstable, the Inspector wrote:
He also recommended that the British Transport Commission carry out an investigation into the causes of blowbacks.
Blowbacks can also occur when a steam tube (or pipe) bursts in the boiler, allowing high-pressure steam to enter the firebox and thus egress onto the footplate. Other potential causes are unused mining explosives in the coal used to fuel the engine, and unburnt gases collecting in the firebox and then igniting.
Examples
The 1965 Winsford railway accident was caused by a blowback. Driver Wallace Oakes died as a result, and his fireman Gwilym Roberts was severely injured.
References
Locomotive boilers
Steam locomotive fireboxes
Steam locomotive exhaust systems
Explosions | Blowback (steam engine) | [
"Chemistry",
"Engineering"
] | 308 | [
"Combustion engineering",
"Explosions",
"Steam locomotive fireboxes"
] |
68,498,475 | https://en.wikipedia.org/wiki/Ariane%20Next | Ariane Next—also known as SALTO (reusable strategic space launcher technologies and operations)—is a future European Space Agency rocket being developed in the 2020s by ArianeGroup. This partially reusable launcher is planned to succeed Ariane 6, with an entry into service in the 2030s. The objective of the new launcher is to halve the launch costs compared with Ariane 6. The preferred architecture is that of the Falcon 9 rocket (a reusable first stage landing vertically with a common engine model for the two stages) while using an engine burning a mixture of liquid methane and liquid oxygen. The first technological demonstrators are under development.
History
The European Space Agency's Ariane 6 launcher is to gradually succeed the Ariane 5 rocket after 2023. Studies on the next generation of European government-funded launcher to follow Ariane 6 started before 2019. The stated priority objective for the new rocket is to halve the cost of launching compared to Ariane 6 with simplified and more flexible launch methods.
ArianeGroup was selected by the ESA in 2021 to head two projects: one to develop a new reusable launch vehicle and the other to develop a new liquid propellant rocket engine for the vehicle. More specifically, the two programmes were named "SALTO (reuSable strAtegic space Launcher Technologies & Operations) and ENLIGHTEN (European iNitiative for Low cost, Innovative & Green High Thrust Engine) projects," respectively.
ArianeGroup secured funding to begin development of the new reusable launch vehicle in May 2022.
Funding for the project will be provided "by the European Commission as a part of the Horizon Europe programme designed to encourage and accelerate innovation" in Europe.
In May 2022, the "French Economy Minister Bruno Le Maire said SALTO and ENLIGHTEN would be operational by 2026", and ArianeGroup stated that the target date was achievable.
The SALTO project intended to carry out an initial flight test of a single rocket stage by mid-2024, using a Themis prototype first stage to validate the landing phase of the design. First hot fire engine testing occurred in 2023.
Description
The architecture proposed for Ariane Next uses a design based on SpaceX's Falcon 9: a reusable first stage which, after having separated from the second stage, returns to land vertically on Earth. The first stage will use several liquid-propellant rocket engines: the predecessor for these is the Prometheus rocket engine under development by the EU, which burns a mixture of methane and liquid oxygen. Methane is somewhat less efficient than the hydrogen used by the Vulcain engine of Ariane 6, but it can be stored at higher temperatures than hydrogen, which makes it possible to lighten and simplify the tanks and the supply circuits. The density of liquid methane is also higher than that of hydrogen, which allows a mass reduction in the tank structure. The launcher is planned to use seven or nine such engines for the first stage and a single engine for the second stage. The goal is to halve the launch costs compared to Ariane 6.
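The tank-mass argument can be made concrete with a small sketch. The densities are approximate handbook values for the two liquids near their boiling points (assumed reference figures, not taken from the article):

```python
# Approximate liquid densities near the normal boiling point, from standard
# reference tables (assumed values, not from the article), in kg per m^3.
densities = {"liquid methane": 422.0, "liquid hydrogen": 71.0}

# Tank volume required per tonne of propellant: denser methane needs roughly
# six times less volume, hence lighter and simpler tank structures.
for fuel, rho in densities.items():
    print(f"{fuel}: {1000.0 / rho:.1f} m^3 per tonne")
```

The roughly sixfold difference in required tank volume is what drives the structural-mass saving, partly offsetting methane's lower specific impulse.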
Preliminary steps
To be able to produce the new launcher, various technology demonstrators are being developed, each also funded by European Union technology development funds:
FROG is a small demonstrator for testing the vertical landing of a rocket stage. It made several flights in 2019.
Callisto, under development, aims to improve the techniques required to produce a reusable launcher (return to Earth and reconditioning) and to estimate the operational cost of such a launcher. A first flight is scheduled for 2025 or early 2026.
Themis will then be developed. It will have a reusable first stage with one to three Prometheus rocket motors and is expected to fly around 2022–2025.
Configurations
Different configurations of the launcher are being evaluated. Three versions are under consideration for different missions:
A two-stage version
A version with two small liquid propellant boosters
A version with three first stages linked together, similar to Falcon Heavy
Return to Earth
Different systems are being studied for controlling the first stage's atmospheric re-entry:
Grid fins, as on the first stage of Falcon 9
Stabilization fins
Air braking
Landing system
Different systems are being considered, ranging from everything on ground (all ground systems) to everything on the launcher (all on-board systems). Currently, development is focused on an on-board legs system similar to that of Falcon 9.
See also
Reusable Vehicle Testing
References
External links
Ariane Next on CNES website
Reusable launch systems
Partially reusable space launch vehicles
Proposed space launch vehicles
Space launch vehicles of Europe
Ariane (rocket family)
Space programs
European space programmes
Spaceflight technology | Ariane Next | [
"Engineering"
] | 959 | [
"Space programs",
"European space programmes"
] |
68,500,212 | https://en.wikipedia.org/wiki/Immortality%20or%20Bust | Immortality or Bust is a 2019 feature documentary focusing on the 2016 U.S. presidential campaign of Transhumanist Party nominee Zoltan Istvan. Directed by Daniel Sollinger, it won two awards at film festivals - the Breakout Award at the 2019 Raw Science Film Festival and Best Biohacking Awareness Documentary at the GeekFest Toronto 2021. It is distributed by Gravitas Ventures.
Synopsis
Immortality or Bust explores the transhumanism movement and its major personalities as Zoltan Istvan drives his "Immortality Bus" across America.
The film begins with Istvan and his mother, Ilona Gyurko, mourning over the body of his father, Steven Gyurko. Months before his death, Istvan had been driving a 38-foot campaign bus shaped like a giant coffin in hopes of generating publicity for life extension science, which aims to overcome death with technologies such as genetic editing, robotic organs, and mind uploading.
Aboard the bus and featured in the documentary are embedded journalists from media such as The New York Times, The Verge, Vox, The Telegraph, and Der Spiegel.
The documentary explores biohacker gathering GrindFest, cryonics facility Alcor, Jacque Fresco's The Venus Project, and The Church of Perpetual Life, and virtual reality's Second Life via Terasem, among other places.
The documentary also features Istvan's visits with then Cyborg Party presidential candidate John McAfee, 2016 Libertarian presidential candidate Gary Johnson, and comedian Jimmy Dore. In the documentary Alex Jones and Fox News criticize Istvan's presidential campaign while Good Mythical Morning, underground group Anonymous offer support, and John Horgan at Scientific American offer support.
Immortality or Bust also focuses on Istvan's presidential campaign events, from California street demonstrations supporting transhumanism, to talks at Harvard University, to advocating for universal basic income, to delivering a Transhumanist Bill of Rights to the US Capitol. It also features Istvan's complex marriage to his wife and how his political ambitions affect his young children. The film concludes with Istvan's father voting for his son before he dies.
Cast
Zoltan Istvan
John McAfee
Gary Johnson
Alex Jones
Jimmy Dore
Max More
Jacque Fresco
William Falloon
John Horgan
Erica Orange
Rich Lee (biohacker)
Alexis Madrigal
Maitreya One (HipHop artist)
Criticism
Film Threat reviewer Chris Salce says Istvan mentions Jurassic Park themes too much in his transhumanism ideas, and that works against the overall message of the film.
References
External links
Official website
2019 films
2019 documentary films
American documentary films
Documentary films about technology
Documentary films about death
American independent films
Transhumanism
Futurology documentaries
2010s English-language films
2010s American films
English-language documentary films | Immortality or Bust | [
"Technology",
"Engineering",
"Biology"
] | 570 | [
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
68,502,608 | https://en.wikipedia.org/wiki/Nitrate%20nitrite | A nitrate nitrite, or nitrite nitrate, is a coordination complex or other chemical compound that contains both nitrite () and nitrate () anions. They are mixed-anion compounds, and they are mixed-valence compounds. Some have third anions. Many nitrite nitrate compounds are coordination complexes of cobalt. Such a substance was discovered by Wolcott Gibbs and Frederick Genth in 1857.
Production
Mercury(II) nitrate and potassium nitrate in water solution produce the salt tripotassium tetranitratomercurate(II) nitrate .
Properties
On heating, nitrate nitrites lose NO2 and NO and yield metal oxides.
Related
Other compounds having an element in two different anion states include the sulfate sulfites, the phosphate phosphites, arsenate arsenites and the selenate selenites.
List
References
Nitrates
Nitrites | Nitrate nitrite | [
"Chemistry"
] | 187 | [
"Oxidizing agents",
"Nitrates",
"Salts"
] |
68,504,105 | https://en.wikipedia.org/wiki/Marine%20holobiont | The holobiont concept is a renewed paradigm in biology that can help to describe and understand complex systems, like the host-microbe interactions that play crucial roles in marine ecosystems. However, there is still little understanding of the mechanisms that govern these relationships, the evolutionary processes that shape them and their ecological consequences. The holobiont concept posits that a host and its associated microbiota with which it interacts, form a holobiont, and have to be studied together as a coherent biological and functional unit to understand its biology, ecology, and evolution.
History
The idea of holism started to regain popularity in biology when the endosymbiosis theory was first proposed by Konstantin Mereschkowski in 1905 and further developed by Ivan Wallin in 1925. Still accepted today, this theory posits a single origin for eukaryotic cells through the symbiotic assimilation of prokaryotes to form first mitochondria and later plastids (the latter through several independent symbiotic events) via phagocytosis (reviewed in Archibald, 2015). These ancestral and founding symbiotic events, which prompted the metabolic and cellular complexity of eukaryotic life, most likely occurred in the ocean.
Despite the general acceptance of the endosymbiosis theory, the term holobiosis or holobiont did not immediately enter the scientific vernacular. It was coined independently by the German Adolf Meyer-Abich in 1943, and by Lynn Margulis in 1990, who proposed that evolution has worked mainly through symbiosis-driven leaps that merged organisms into new forms, referred to as “holobionts”, and only secondarily through gradual mutational changes. However, the concept was not widely used until it was co-opted by coral biologists over a decade later. Corals and the dinoflagellate algae called Zooxanthellae are one of the most iconic examples of symbioses found in nature; most corals are incapable of long-term survival without the products of photosynthesis provided by their endosymbiotic algae. Rohwer et al. (2002) were the first to use the word holobiont to describe a unit of selection sensu Margulis for corals, where the holobiont comprised the cnidarian polyp (host), Zooxanthellae algae, various ectosymbionts (endolithic algae, prokaryotes, fungi, other unicellular eukaryotes), and viruses.
Although initially driven by studies of marine organisms, much of the research on the emerging properties and significance of holobionts has since been carried out in other fields of research: the microbiota of the rhizosphere of plants or the animal gut became predominant models and have led to an ongoing paradigm shift in agronomy and medical sciences. Holobionts occur in terrestrial and aquatic habitats alike, and several analogies between these ecosystems can be made. For example, in all of these habitats, interactions within and across holobionts such as induction of chemical defenses, nutrient acquisition, or biofilm formation are mediated by chemical cues and signals in the environment, dubbed infochemicals. Nevertheless, we can identify two major differences between terrestrial and aquatic systems. First, the physicochemical properties of water result in higher chemical connectivity and signaling between macro- and micro-organisms in aquatic or moist environments. In marine ecosystems, carbon fluxes also appear to be swifter and trophic modes more flexible, leading to higher plasticity of functional interactions across holobionts. Moreover, dispersal barriers are usually lower, allowing for faster microbial community shifts in marine holobionts. Secondly, phylogenetic diversity at broad taxonomic scales (i.e., supra-kingdom, kingdom and phylum levels), is higher in aquatic realms compared to land, with much of the aquatic diversity yet to be uncovered, especially marine viruses.
Russian Doll complexity
The boundaries of holobionts are usually delimited by a physical gradient, which corresponds to the area of local influence of the host, e.g., in unicellular algae the so-called phycosphere. However, they may also be defined in a context-dependent way as a Russian matryoshka doll, setting the boundaries of the holobiont depending on the interactions and biological functions that are being considered. Thus holobionts may encompass all levels of host-symbiont associations from intimate endosymbiosis with a high degree of co-evolution up to the community and ecosystem level; a concept referred to as "nested ecosystems" (see diagram).
In the diagram on the right, the host (blue circles), and associated microbes (all other shapes) including bacteria and eukaryotes that may be inside (i.e., endosymbiotic) or outside the host (i.e., ectosymbiotic) are connected by either beneficial (solid orange lines), neutral (solid blue lines) or pathogenic (dashed black lines) interactions, respectively. Changes from beneficial or neutral to pathogenic interactions are typical cases of dysbiosis. The different clusters are illustrated by the following examples: 1, a model holobiont in a stable physiological condition (e.g., in controlled laboratory condition); 2 and 3, holobionts changing during their life cycle or subjected to stress conditions—examples of vertically transmitted microbes are indicated by light blue arrows; 4 and 5, marine holobionts in the context of global sampling campaigns or long-term time series—examples of horizontal transmission of microbes and holobionts are illustrated by pink arrows.
Such a conceptual perspective raises fundamental questions not only regarding the interaction between the different components of holobionts and processes governing their dynamics, but also of the relevant units of selection and the role of coevolution. For instance, plant and animal evolution involves new functions co-constructed by members of the holobiont or elimination of functions redundant among them, and it is likely that these processes are also relevant in marine holobionts. Eugene Rosenberg et al. have argued that all animals and plants can be considered holobionts, and thus advocate the hologenome theory of evolution, suggesting that natural selection acts at the level of the holobiont and its hologenome. This interpretation of Margulis' definition of a holobiont considerably broadened fundamental concepts in evolution and speciation and has not been free of criticism, especially when applied at the community or ecosystem level. More recently, it has been shown that species that interact indirectly with the host can also be important in shaping coevolution within mutualistic multi-partner assemblages. Thus, the holobiont concept and the underlying complexity of holobiont systems should be better defined and further considered when addressing evolutionary and ecological questions.
Marine holobiont models
Environmental models: Within the animal kingdom, and in addition to corals and sponges, the discovery of deep-sea hydrothermal vents revealed symbioses of animals with chemosynthetic bacteria that have later been found in many other marine ecosystems and frequently exhibit high levels of metabolic and taxonomic diversity. In the SAR supergroup, in addition to well-known models such as diatoms, radiolarians and foraminiferans, both heterotrophic protists harboring endosymbiotic microalgae, are emerging as ecological models for unicellular photosymbiosis due to their ubiquitous presence in the world's oceans. Among the haptophytes, the cosmopolitan Emiliania huxleyi, promoted by associated bacteria, produces key intermediates in the carbon and sulfur biogeochemical cycles, making it an important model phytoplankton species. Finally, within the Archaeplastida, the siphonous green alga Bryopsis is an example of a model that harbors heterotrophic endosymbiotic bacteria, some of which exhibit patterns of co-evolution with their hosts.
Controlled bi- or trilateral associations: Only a few models, covering a small part of the overall marine biodiversity, are currently being cultivated ex-situ and can be used in fully controlled experiments, where they can be cultured aposymbiotically. The flatworm Symsagittifera roscoffensis, the sea anemone Exaiptasia, the upside-down jellyfish Cassiopea, and their respective intracellular green and dinoflagellate algae have, in addition to corals, become models for fundamental research on evolution of animal-algal photosymbiosis. In particular, the sea anemone Exaiptasia has been used to explore photobiology disruption and restoration of cnidarian symbioses. The Vibrio-squid model provides insights into the effect of microbiota on animal development, circadian rhythms, and immune systems. The unicellular green alga Ostreococcus, an important marine primary producer, has been shown to exchange vitamins with specific associated bacteria. The green macroalga Ulva mutabilis has enabled the exploration of bacteria-mediated growth and morphogenesis including the identification of original chemical interactions in the holobiont. Although the culture conditions in these highly controlled model systems differ from the natural environment, these systems are essential to gain elementary mechanistic understanding of the functioning, the roles, and the evolution of marine holobionts.
Example holobionts
Influence on ecological processes
Work on model systems has demonstrated that motile and macroscopic marine holobionts can act as dissemination vectors for geographically restricted microbial taxa. Pelagic mollusks or vertebrates are textbook examples of high dispersal capacity organisms (e.g., against currents and through stratified water layers). It has been estimated that fish and marine mammals may enhance the original dispersion rate of their microbiota by a factor of 200 to 200,000 and marine birds may even act as bio-vectors across ecosystem boundaries. This host-driven dispersal of microbes can include non-native or invasive species as well as pathogens.
A related ecological function of holobionts is their potential to sustain rare species. Hosts provide an environment that favors the growth of specific microbial communities distinct from the surrounding environment (including rare microbes). They may, for instance, provide a nutrient-rich niche in the otherwise nutrient-poor surroundings.
Lastly, biological processes regulated by microbes are important drivers of global biogeochemical cycles. In the open ocean, it is estimated that symbioses with the cyanobacterium UCYN-A contribute about 20% to total N2 fixation. In benthic systems, sponges and corals may support entire ecosystems via their involvement in nutrient cycling thanks to their microbial partners, functioning as sinks and sources of nutrients. In particular the “sponge loop” recycles dissolved organic matter and makes it available to higher trophic levels in the form of detritus. In coastal sediments, bivalves hosting methanogenic archaea have been shown to increase the benthic methane efflux by a factor of up to eight, potentially accounting for 9.5% of total methane emissions from the Baltic Sea. This metabolic versatility is accomplished because of the simultaneous occurrence of disparate biochemical machineries (e.g., aerobic and anaerobic pathways) in individual symbionts, providing new metabolic abilities to the holobiont, such as the synthesis of specific essential amino acids, photosynthesis, or chemosynthesis. Furthermore, the interaction between host and microbiota can potentially extend the metabolic capabilities of a holobiont in a way that augments its resilience to environmental changes, or allow it to cross biotope boundaries (e.g., Woyke et al., 2006) and colonize extreme environments (Bang et al., 2018). Holobionts thus contribute to marine microbial diversity and possibly resilience in the context of global environmental changes and it is paramount to include the holobiont concept in predictive models that investigate the consequences of human impacts on the marine realm and its biogeochemical cycles.
Holobiont assembly and regulation
Two critical challenges partially addressed by using model systems are (1) to decipher the factors determining holobiont composition and (2) to elucidate the impacts and roles of the different partners in these complex systems over time. Some marine organisms such as bivalves transmit part of the microbiota maternally. In other marine holobionts, vertical transmission may be weak and inconsistent, whereas mixed modes of transmission (vertical and horizontal) or intermediate modes (pseudo-vertical, where horizontal acquisition frequently involves symbionts of parental origin) are more common. Identifying the factors shaping holobiont composition and understanding their evolution is highly relevant for marine organisms given that most marine hosts display a high specificity for their microbiota and even patterns of phylosymbiosis, despite a highly connected and microbe-rich environment.
During microbiota transmission (whether vertical or horizontal), selection by the host and/or by other components of the microbiome, is a key process in establishing or maintaining a holobiont microbial community that is distinct from the environment. The immune system of the host, e.g., via the secretion of specific antimicrobial peptides, is one way of performing this selection in both marine and terrestrial holobionts.
Another way of selecting a holobiont microbial community is by chemically mediated microbial gardening. This concept has been demonstrated for land plants, where root exudates manipulate microbiome composition. In marine environments, the phylogenetic diversity of hosts and symbionts suggests both conserved and marine-specific chemical interactions, but studies are still in their infancy. For instance, seaweeds can chemically garden beneficial microbes, facilitating normal morphogenesis and increasing disease resistance, and seaweeds and corals structure their surface-associated microbiome by producing chemoattractants and antibacterial compounds. There are fewer examples of chemical gardening in unicellular hosts, but it seems highly likely that similar processes are in place.
In addition to selection, ecological drift, dispersal and evolutionary diversification have been proposed as key processes in community assembly, but are difficult to estimate in microbial communities. The only data currently at our disposal to quantify these processes are the diversity and distribution of microbes. Considering the high connectivity of aquatic environments, differences in marine microbial communities are frequently attributed to a combination of selection and drift, rather than limited dispersal, a conclusion which in the future could be refined by conceptual models developed, for instance, for soil microbial communities. Diversification is mainly considered in the sense of coevolution or adaptation to host selection, which may also be driven by the horizontal acquisition of genes. However, cospeciation is challenging to prove and only a few studies have examined this process in marine holobionts to date, each focused on a restricted number of actors.
Perturbations in the transmission or the recruitment of the microbiota can lead to dysbiosis, and eventually microbial infections. Dysbiotic microbial communities are frequently determined by stochastic processes and thus display higher variability in their composition than those of healthy individuals. This observation is in line with the "Anna Karenina principle", although there are exceptions to this rule. A specific case of dysbiosis is the so-called "Rasputin effect", where benign endosymbionts opportunistically become detrimental to the host due to processes such as reduction in immune response under food deprivation, coinfections, or environmental pressure. Many diseases are now interpreted as the result of a microbial imbalance and the rise of opportunistic or polymicrobial infections upon host stress. For instance, in reef-building corals, warming destabilizes cnidarian-dinoflagellate associations, and some beneficial Symbiodiniaceae strains switch their physiology and sequester more resources for their own growth at the expense of the coral host, leading to coral bleaching and even death.
Increasing our knowledge on the contribution of these processes to holobiont community assembly in marine systems is a key challenge, which is of particular urgency today in the context of ongoing global change. Moreover, understanding how the community and functional structure of resident microbes are resilient to perturbations remains critical to predict and promote the health of their host and the ecosystem. Yet, the contribution of the microbiome is still missing in most quantitative models predicting the distribution of marine macro-organisms, and additional information on biological interactions would be required to make these models more accurate.
References
Further references
Holobionts | Marine holobiont | [
"Biology"
] | 3,513 | [
"Symbiosis",
"Holobionts"
] |
68,505,120 | https://en.wikipedia.org/wiki/Fractional%20dose%20vaccination | Fractional dose vaccination is a strategy to reduce the dose of a vaccine to achieve a vaccination policy goal that is more difficult to achieve with conventional vaccination approaches, including deploying a vaccine faster in a pandemic, reaching more individuals in the setting of limited healthcare budgets, or minimizing side effects due to the vaccine.
Fractional dose vaccination exploits the nonlinear dose-response characteristics of a vaccine: If two persons can be vaccinated instead of one, but each one gets 2/3 of the protective efficacy, there is a net benefit at society scale for reducing the number of infections. If the healthcare budget is limited or only a limited amount of vaccine is available during the early phase of a pandemic, this can make a difference for the total number of infections.
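As a back-of-envelope illustration of this dose–response argument (all numbers below are assumptions for the sketch, not trial data):

```python
# A fixed stock of doses protects more people in expectation whenever the
# efficacy lost by splitting doses is less than proportional to the dose.
doses = 1_000_000
full_dose_efficacy = 0.90        # assumed efficacy of a standard dose
half_dose_efficacy = 0.60        # assumed efficacy of a half dose (2/3 of full)

protected_full = doses * full_dose_efficacy            # expected protections, full dosing
protected_half = 2 * doses * half_dose_efficacy        # expected protections, half dosing
print(round(protected_full), round(protected_half))    # 900000 1200000
```

In this toy calculation the halved dose retains two thirds of the efficacy, so covering twice as many people protects more individuals in expectation; the conclusion reverses if efficacy drops roughly in proportion to the dose.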
Fractional dose vaccination uses a fraction of the standard dose of a regular vaccine that is administered by the same, or an alternative route (often subcutaneously or intradermally).
Fractional dose vaccination has been used or proposed in a number of relevant infectious poverty diseases including yellow fever, poliomyelitis, COVID-19.
Use
In the context of limited healthcare budgets
During the 2016 yellow fever outbreak in Angola and the Democratic Republic of the Congo, the WHO approved the use of fractional dose vaccination to deal with a potential shortage of vaccine. In August 2016, a large vaccination campaign in Kinshasa used 1/5 of the standard vaccine dose. In 2018 it was reported that fractional dose vaccination with 1/5 of the standard vaccine dose, administered intradermally, conferred protection for 10 years, as documented by a randomized clinical trial.
In Poliomyelitis, fractional dose vaccination has been shown to be effective while reducing overall cost, rendering polio vaccination available to more individuals.
In the Covid-19 pandemic
In a pandemic wave, fractional dose vaccination is considered to accelerate widespread access to vaccination when vaccine supply is limited:
In the COVID-19 pandemic, epidemiologic models predict a major benefit of personalized fractional dose vaccination strategies with certain vaccines in terms of case load, deaths, and shortening of the pandemic.
To reduce side effects
In some segments of the population, disease risk is lower but specific vaccine side effect risks may be increased. In such subpopulations, fractional dose vaccination might optimize the benefit-risk ratio of vaccination for an individual and optimize the cost-benefit relation for society.
References
Vaccination | Fractional dose vaccination | [
"Biology"
] | 550 | [
"Vaccination"
] |
68,507,875 | https://en.wikipedia.org/wiki/Jellyfish%20Barge | The Jellyfish Barge is a floating greenhouse module that uses hydroponic agriculture and 70% less water compared to traditional agriculture. The barge is made of recyclable materials and uses solar distillation to collect 150 liters of saltwater daily and turn it into freshwater. 15% seawater is added back into the water to improve the mineral content and nutritional value of the crops. One module is approximately 70 square meters and can be used to grow between 1400 and 1600 plants per month. 120 units can be constructed on a hectare. The project was included as part of the Expo 2015 in Milan, Italy. The project was conceptualized by Stefano Mancuso and was financed by the Cassa di Risparmio di Firenze. The architects of the module are Antonio Girardi and Cristina Favretto.
References
Environmental mitigation
Hydroponics
Architectural design
Agricultural technology
External links
Jellyfish Barge | Jellyfish Barge | [
"Chemistry",
"Engineering"
] | 185 | [
"Environmental mitigation",
"Architectural design",
"Environmental engineering",
"Design",
"Architecture"
] |
68,508,065 | https://en.wikipedia.org/wiki/Corner%20house | Corner Houses () are a type of building located at the junction of two or three roads.
Hong Kong
Corner houses are buildings located at junctions. In Hong Kong, buildings must meet certain specifications, which is why corner houses are so common on Hong Kong Island and Kowloon.
Corner houses originate from the Composite Buildings of Hong Kong. They were popularized in the 1950s and the 1960s. Most corner houses are fourth-generation tong lau, featuring rounded corners and lines.
Antonio Hermenegildo Basto currently holds the record for the most corner buildings designed in Hong Kong.
Locations
Hong Kong Island: Wan Chai, Causeway Bay, Sai Ying Pun, Shau Kei Wan
Kowloon: Sham Shui Po, Mong Kok, Tai Kok Tsui, To Kwa Wan, Cheung Sha Wan
Styles
Hanging signs in large facades.
Units in round corners are known as large units.
Round buildings are built in a Bauhaus style.
Types
Notable buildings
Hong Kong
14 Nam Cheong Street (Boundary Street and Nam Cheong Street)
May Wah Building (Wan Chai Road and Johnston Road)
Mido Cafe (Temple Street and Public Square Street)
New Lucky House (Nathan Road and Jordan Road)
Chung Wui Mansion (Wan Chai Road, Fleming Road, and Johnston Road)
Hing Wah Mansion (Babington Path, Park Road, St Stephen's Lane, and Oaklands Path)
Taiwan
Hayashi Department Store
United States
Flatiron Building (NYC)
UK
The Cornerhouse, Nottingham
Cornerhouse (Demolished)
See also
Tong lau
Composite Building
References
Further reading
External links
《Piece of Hong Kong's History: Composite Buildings》
Buildings and structures by type | Corner house | [
"Engineering"
] | 336 | [
"Buildings and structures by type",
"Architecture"
] |
51,386,092 | https://en.wikipedia.org/wiki/Certifying%20algorithm | In theoretical computer science, a certifying algorithm is an algorithm that outputs, together with a solution to the problem it solves, a proof that the solution is correct. A certifying algorithm is said to be efficient if the combined runtime of the algorithm and a proof checker is slower by at most a constant factor than the best known non-certifying algorithm for the same problem.
The proof produced by a certifying algorithm should be in some sense simpler than the algorithm itself, for otherwise any algorithm could be considered certifying (with its output verified by running the same algorithm again). Sometimes this is formalized by requiring that a verification of the proof take less time than the original algorithm, while for other problems (in particular those for which the solution can be found in linear time) simplicity of the output proof is considered in a less formal sense. For instance, the validity of the output proof may be more apparent to human users than the correctness of the algorithm, or a checker for the proof may be more amenable to formal verification.
Implementations of certifying algorithms that also include a checker for the proof generated by the algorithm may be considered to be more reliable than non-certifying algorithms. For, whenever the algorithm is run, one of three things happens: it produces a correct output (the desired case), it detects a bug in the algorithm or its implementation (undesired, but generally preferable to continuing without detecting the bug), or both the algorithm and the checker are faulty in a way that masks the bug and prevents it from being detected (undesired, but unlikely as it depends on the existence of two independent bugs).
Examples
Many examples of problems with checkable algorithms come from graph theory.
For instance, a classical algorithm for testing whether a graph is bipartite would simply output a Boolean value: true if the graph is bipartite, false otherwise. In contrast, a certifying algorithm might output a 2-coloring of the graph in the case that it is bipartite, or a cycle of odd length if it is not. Any graph is bipartite if and only if it can be 2-colored, and non-bipartite if and only if it contains an odd cycle. Both checking whether a 2-coloring is valid and checking whether a given odd-length sequence of vertices is a cycle may be performed more simply than testing bipartiteness.
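The two-coloring/odd-cycle dichotomy described above can be sketched as a certifying test together with its (simpler) checker; this is an illustrative implementation, not tied to any particular library:

```python
from collections import deque

def certify_bipartite(n, edges):
    """Return ('2-coloring', colors) if bipartite, else ('odd-cycle', vertex_list)."""
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    color, parent = [None] * n, [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if color[w] is None:
                    color[w], parent[w] = 1 - color[u], u
                    queue.append(w)
                elif color[w] == color[u]:
                    # same color across edge (u, w): the two BFS-tree paths up to
                    # the lowest common ancestor plus this edge form an odd cycle
                    pu, pw = [u], [w]
                    while parent[pu[-1]] is not None:
                        pu.append(parent[pu[-1]])
                    while parent[pw[-1]] is not None:
                        pw.append(parent[pw[-1]])
                    while len(pu) > 1 and len(pw) > 1 and pu[-2] == pw[-2]:
                        pu.pop()
                        pw.pop()
                    return 'odd-cycle', pu + pw[-2::-1]
    return '2-coloring', color

def check_certificate(edges, kind, witness):
    """Verifier: checks the proof, which is simpler than redoing the search."""
    if kind == '2-coloring':
        return all(witness[a] != witness[b] for a, b in edges)
    # an odd closed walk along existing edges already certifies non-bipartiteness
    edge_set = {frozenset(e) for e in edges}
    return (len(witness) % 2 == 1 and
            all(frozenset((witness[i - 1], witness[i])) in edge_set
                for i in range(len(witness))))
```

For example, a 4-cycle yields a valid 2-coloring, while a triangle yields an odd cycle that the checker confirms edge by edge.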
Analogously, it is possible to test whether a given directed graph is acyclic by a certifying algorithm that outputs either a topological order or a directed cycle. It is possible to test whether an undirected graph is a chordal graph by a certifying algorithm that outputs either an elimination ordering (an ordering of all vertices such that, for every vertex, the neighbors that are later in the ordering form a clique) or a chordless cycle. And it is possible to test whether a graph is planar by a certifying algorithm that outputs either a planar embedding or a Kuratowski subgraph.
The extended Euclidean algorithm for the greatest common divisor of two integers a and b is certifying: it outputs three integers g (the divisor), x, and y, such that ax + by = g. This equation can only be true of multiples of the greatest common divisor, so testing that g is the greatest common divisor may be performed by checking that g divides both a and b and that the equation ax + by = g is correct.
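A minimal sketch of this certifying computation and its checker (illustrative, for positive integers):

```python
def certified_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g and g == gcd(a, b)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

def check_gcd_certificate(a, b, g, x, y):
    # g divides both inputs, and g is an integer combination of a and b,
    # so every common divisor of a and b divides g: hence g is the gcd
    return g > 0 and a % g == 0 and b % g == 0 and a * x + b * y == g

g, x, y = certified_gcd(240, 46)
print(g, check_gcd_certificate(240, 46, g, x, y))   # 2 True
```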
See also
Sanity check, a simple test of the correctness of an output or intermediate result that is not required to be a complete proof of correctness
References
Algorithms
Error detection and correction
Software testing | Certifying algorithm | [
"Mathematics",
"Engineering"
] | 752 | [
"Software testing",
"Reliability engineering",
"Algorithms",
"Mathematical logic",
"Applied mathematics",
"Error detection and correction",
"Software engineering"
] |
41,341,065 | https://en.wikipedia.org/wiki/Methylphosphonic%20acid | Methylphosphonic acid is an organophosphorus compound with the chemical formula CH3P(O)(OH)2. The phosphorus center is tetrahedral and is bonded to a methyl group, two OH groups and an oxygen. Methylphosphonic acid is a white, non-volatile solid that is poorly soluble in organic solvent but soluble in water and common alcohols.
Preparation
Methylphosphonic acid can be prepared from triethylphosphite by first using a Michaelis-Arbuzov reaction to generate the phosphorus(V) centre:
CH3Cl + P(OC2H5)3 → CH3PO(OC2H5)2 + C2H5Cl
The resulting dialkylphosphonate is then treated with chlorotrimethylsilane before hydrolysis of the siloxyphosphonate to generate the desired product.
CH3PO(OC2H5)2 + 2 Me3SiCl → CH3PO(OSiMe3)2 + 2 C2H5Cl
CH3PO(OSiMe3)2 + 2 H2O → CH3PO(OH)2 + 2 HOSiMe3
The reaction pathway proceeds via the siloxyphosphonate intermediate due to the difficulty in directly hydrolysing dialkylphosphonates. Katritzky and co-workers published a one-pot synthesis of methylphosphonic acid in 1989.
References
Phosphonic acids
Organic compounds with 1 carbon atom | Methylphosphonic acid | [
"Chemistry"
] | 312 | [
"Organic compounds",
"Organic compounds with 1 carbon atom"
] |
41,341,760 | https://en.wikipedia.org/wiki/OpenPHACTS | Open PHACTS (Open Pharmacological Concept Triple Store) was a European initiative public–private partnership between academia, publishers, enterprises, pharmaceutical companies and other organisations working to enable better, cheaper and faster drug discovery. It has been funded by the Innovative Medicines Initiative, selected as part of three projects to "design methods for common standards and sharing of data for more efficient drug development and patient treatment in the future".
Partnerships
A total of 27 partners were involved including:
Academia: Maastricht University, University of Santiago de Compostela, University of Vienna, University of Manchester, University of Bonn, Swiss Institute of Bioinformatics, European Bioinformatics Institute, Vrije Universiteit of Amsterdam, Technical University of Denmark, University of Hamburg
Pharmaceutical companies: Pfizer, Merck KGaA, Eli Lilly and Company, Novartis, GlaxoSmithKline, AstraZeneca
Other companies: ChemSpider, Biovia, Eagle Genomics, Entagen
Publishers: Royal Society of Chemistry, Thomson Reuters
Drug discovery
The Open Pharmacological Space created by the consortium intended to support open innovation and in-house non-public drug discovery research by removing bottlenecks in drug development. Resources from the project are publicly available on GitHub.
To reduce the barriers to drug discovery in industry, academia and for small businesses, the Open PHACTS consortium built the Open PHACTS Discovery Platform. This platform was freely available, integrating pharmacological data from a variety of information resources and providing tools and services to question this integrated data to support pharmacological research.
References
Information technology organizations based in Europe
Drug discovery
Medical databases | OpenPHACTS | [
"Chemistry",
"Biology"
] | 343 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
52,920,749 | https://en.wikipedia.org/wiki/Acceleration%20%28special%20relativity%29 | Accelerations in special relativity (SR) follow, as in Newtonian Mechanics, by differentiation of velocity with respect to time. Because of the Lorentz transformation and time dilation, the concepts of time and distance become more complex, which also leads to more complex definitions of "acceleration". SR as the theory of flat Minkowski spacetime remains valid in the presence of accelerations, because general relativity (GR) is only required when there is curvature of spacetime caused by the energy–momentum tensor (which is mainly determined by mass). However, since the amount of spacetime curvature is not particularly high on Earth or its vicinity, SR remains valid for most practical purposes, such as experiments in particle accelerators.
One can derive transformation formulas for ordinary accelerations in three spatial dimensions (three-acceleration or coordinate acceleration) as measured in an external inertial frame of reference, as well as for the special case of proper acceleration measured by a comoving accelerometer. Another useful formalism is four-acceleration, as its components can be connected in different inertial frames by a Lorentz transformation. Also equations of motion can be formulated which connect acceleration and force. Equations for several forms of acceleration of bodies and their curved world lines follow from these formulas by integration. Well known special cases are hyperbolic motion for constant longitudinal proper acceleration or uniform circular motion. Eventually, it is also possible to describe these phenomena in accelerated frames in the context of special relativity, see Proper reference frame (flat spacetime). In such frames, effects arise which are analogous to homogeneous gravitational fields, which have some formal similarities to the real, inhomogeneous gravitational fields of curved spacetime in general relativity. In the case of hyperbolic motion one can use Rindler coordinates, in the case of uniform circular motion one can use Born coordinates.
Concerning the historical development, relativistic equations containing accelerations can already be found in the early years of relativity, as summarized in early textbooks by Max von Laue (1911, 1921) or Wolfgang Pauli (1921). For instance, equations of motion and acceleration transformations were developed in the papers of Hendrik Antoon Lorentz (1899, 1904), Henri Poincaré (1905), Albert Einstein (1905), Max Planck (1906), and four-acceleration, proper acceleration, hyperbolic motion, accelerating reference frames, Born rigidity, have been analyzed by Einstein (1907), Hermann Minkowski (1907, 1908), Max Born (1909), Gustav Herglotz (1909), Arnold Sommerfeld (1910), von Laue (1911), Friedrich Kottler (1912, 1914), see section on history.
Three-acceleration
In accordance with both Newtonian mechanics and SR, three-acceleration or coordinate acceleration $\mathbf{a}=(a_{x},\,a_{y},\,a_{z})$ is the first derivative of velocity $\mathbf{u}=(u_{x},\,u_{y},\,u_{z})$ with respect to coordinate time or the second derivative of the location $\mathbf{r}=(x,\,y,\,z)$ with respect to coordinate time:

$$\mathbf{a}=\frac{d\mathbf{u}}{dt}=\frac{d^{2}\mathbf{r}}{dt^{2}}.$$

However, the theories sharply differ in their predictions in terms of the relation between three-accelerations measured in different inertial frames. In Newtonian mechanics, time is absolute by $t'=t$ in accordance with the Galilean transformation, therefore the three-acceleration derived from it is equal in all inertial frames:

$$\mathbf{a}'=\mathbf{a}.$$
On the contrary in SR, both $\mathbf{r}$ and $t$ depend on the Lorentz transformation, therefore also three-acceleration $\mathbf{a}$ and its components vary in different inertial frames. When the relative velocity between the frames is directed in the x-direction by $v=v_{x}$ with $\gamma_{v}=1/\sqrt{1-v^{2}/c^{2}}$ as Lorentz factor, the Lorentz transformation has the form

$$x'=\gamma_{v}(x-vt),\quad y'=y,\quad z'=z,\quad t'=\gamma_{v}\left(t-\frac{vx}{c^{2}}\right)$$

or for arbitrary velocities $\mathbf{v}=(v_{x},\,v_{y},\,v_{z})$ of magnitude $|\mathbf{v}|=v$:

$$\mathbf{r}'=\mathbf{r}+\mathbf{v}\left[\frac{(\mathbf{r}\cdot\mathbf{v})}{v^{2}}(\gamma_{v}-1)-t\gamma_{v}\right],\qquad t'=\gamma_{v}\left(t-\frac{\mathbf{r}\cdot\mathbf{v}}{c^{2}}\right)$$

In order to find out the transformation of three-acceleration, one has to differentiate the spatial coordinates $\mathbf{r}$ and $\mathbf{r}'$ of the Lorentz transformation with respect to $t$ and $t'$, from which the transformation of three-velocity (also called velocity-addition formula) between $\mathbf{u}$ and $\mathbf{u}'$ follows, and eventually by another differentiation with respect to $t$ and $t'$ the transformation of three-acceleration between $\mathbf{a}$ and $\mathbf{a}'$ follows. Starting from the first form of the Lorentz transformation, this procedure gives the transformation where the accelerations are parallel (x-direction) or perpendicular (y-, z-direction) to the velocity:

$$a_{x}'=\frac{a_{x}}{\gamma_{v}^{3}\left(1-\frac{u_{x}v}{c^{2}}\right)^{3}},\qquad a_{y}'=\frac{a_{y}}{\gamma_{v}^{2}\left(1-\frac{u_{x}v}{c^{2}}\right)^{2}}+\frac{a_{x}u_{y}\frac{v}{c^{2}}}{\gamma_{v}^{2}\left(1-\frac{u_{x}v}{c^{2}}\right)^{3}},\qquad a_{z}'=\frac{a_{z}}{\gamma_{v}^{2}\left(1-\frac{u_{x}v}{c^{2}}\right)^{2}}+\frac{a_{x}u_{z}\frac{v}{c^{2}}}{\gamma_{v}^{2}\left(1-\frac{u_{x}v}{c^{2}}\right)^{3}}$$

or, starting from the general form, for arbitrary directions of velocities and accelerations:

$$\mathbf{a}'=\frac{1}{\gamma_{v}^{2}\left(1-\frac{\mathbf{u}\cdot\mathbf{v}}{c^{2}}\right)^{3}}\left\{\left(1-\frac{\mathbf{u}\cdot\mathbf{v}}{c^{2}}\right)\left[\mathbf{a}+\frac{\gamma_{v}^{2}}{\gamma_{v}+1}\frac{(\mathbf{a}\cdot\mathbf{v})\mathbf{v}}{c^{2}}\right]+\frac{(\mathbf{a}\cdot\mathbf{v})}{c^{2}}\left[\mathbf{u}-\gamma_{v}\mathbf{v}+\frac{\gamma_{v}^{2}}{\gamma_{v}+1}\frac{(\mathbf{u}\cdot\mathbf{v})\mathbf{v}}{c^{2}}\right]\right\}$$

This means, if there are two inertial frames $S$ and $S'$ with relative velocity $\mathbf{v}$, then in $S$ the acceleration $\mathbf{a}$ of an object with momentary velocity $\mathbf{u}$ is measured, while in $S'$ the same object has an acceleration $\mathbf{a}'$ and has the momentary velocity $\mathbf{u}'$. As with the velocity addition formulas, also these acceleration transformations guarantee that the resultant speed of the accelerated object can never reach or surpass the speed of light.
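As a numerical sanity check of the parallel acceleration transformation a′ₓ = aₓ/[γᵥ³(1 − uₓv/c²)³], one can Lorentz-transform nearby events of a worldline and differentiate twice in the primed frame; the worldline, boost speed and units (c = 1) below are assumed example choices:

```python
import math

c = 1.0
v = 0.6                                  # frame velocity (assumed example value)
gv = 1.0 / math.sqrt(1.0 - v**2 / c**2)  # Lorentz factor gamma_v

# example worldline: hyperbolic motion with constant proper acceleration alpha = 1
alpha = 1.0
def x_of_t(t):
    return (c**2 / alpha) * (math.sqrt(1.0 + (alpha * t / c)**2) - 1.0)

def lorentz(t, x):
    # boost along x with velocity v
    return gv * (t - v * x / c**2), gv * (x - v * t)

# transform three nearby events and differentiate numerically in the primed frame
t0, h = 0.5, 1e-4
(t1, x1), (t2, x2), (t3, x3) = (lorentz(t, x_of_t(t)) for t in (t0 - h, t0, t0 + h))
slope_l = (x2 - x1) / (t2 - t1)              # u' just before t0
slope_r = (x3 - x2) / (t3 - t2)              # u' just after t0
a_prime_num = (slope_r - slope_l) / (0.5 * (t3 - t1))

# analytic three-velocity and three-acceleration in the unprimed frame at t0
u = alpha * t0 / math.sqrt(1.0 + (alpha * t0 / c)**2)
a = alpha / (1.0 + (alpha * t0 / c)**2)**1.5
a_prime_formula = a / (gv**3 * (1.0 - u * v / c**2)**3)

print(abs(a_prime_num - a_prime_formula) < 1e-4)   # True
```

The finite-difference value and the closed-form transformation agree to numerical precision, illustrating that the formula encodes nothing more than the chain rule applied to the Lorentz-transformed coordinates.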
Four-acceleration
If four-vectors are used instead of three-vectors, namely $\mathbf{R}=(ct,\,\mathbf{r})$ as four-position and $\mathbf{U}=\gamma_{u}(c,\,\mathbf{u})$ as four-velocity, then the four-acceleration $\mathbf{A}$ of an object is obtained by differentiation with respect to proper time $\tau$ instead of coordinate time:

$$\mathbf{A}=\frac{d\mathbf{U}}{d\tau}=\left(\gamma_{u}^{4}\frac{\mathbf{a}\cdot\mathbf{u}}{c},\ \gamma_{u}^{2}\mathbf{a}+\gamma_{u}^{4}\frac{(\mathbf{a}\cdot\mathbf{u})}{c^{2}}\mathbf{u}\right)$$

where $\mathbf{a}$ is the object's three-acceleration and $\mathbf{u}$ its momentary three-velocity of magnitude $|\mathbf{u}|=u$ with the corresponding Lorentz factor $\gamma_{u}=1/\sqrt{1-u^{2}/c^{2}}$. If only the spatial part is considered, and when the velocity is directed in the x-direction by $u=u_{x}$ and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered, the expression is reduced to:

$$A_{x}=\gamma_{u}^{4}a_{x},\qquad A_{y}=\gamma_{u}^{2}a_{y},\qquad A_{z}=\gamma_{u}^{2}a_{z}$$

Unlike the three-acceleration previously discussed, it is not necessary to derive a new transformation for four-acceleration, because as with all four-vectors, the components of $\mathbf{A}$ in two inertial frames with relative speed $v$ are connected by a Lorentz transformation analogous to the ones given above. Another property of four-vectors is the invariance of the inner product $\mathbf{A}^{2}$ or its magnitude $|\mathbf{A}|$, which gives in this case:

$$|\mathbf{A}|^{2}=\gamma_{u}^{6}\left(a^{2}-\frac{(\mathbf{u}\times\mathbf{a})^{2}}{c^{2}}\right)$$
Proper acceleration
In infinitesimally small durations there is always one inertial frame, which momentarily has the same velocity as the accelerated body, and in which the Lorentz transformation holds. The corresponding three-acceleration $\boldsymbol{\alpha}=(\alpha_{x},\,\alpha_{y},\,\alpha_{z})$ in these frames can be directly measured by an accelerometer, and is called proper acceleration or rest acceleration. The relation of $\boldsymbol{\alpha}$ in a momentary inertial frame $S'$ and $\mathbf{a}$ measured in an external inertial frame $S$ follows from the acceleration transformations above with $\mathbf{a}'=\boldsymbol{\alpha}$, $\mathbf{u}'=0$, $\mathbf{v}=\mathbf{u}$, and $\gamma_{v}=\gamma_{u}$. So when the velocity is directed in the x-direction by $u=u_{x}$ and when only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered, it follows:

$$\alpha_{x}=\gamma_{u}^{3}a_{x},\qquad\alpha_{y}=\gamma_{u}^{2}a_{y},\qquad\alpha_{z}=\gamma_{u}^{2}a_{z}$$

Generalized for arbitrary directions of $\mathbf{u}$ of magnitude $|\mathbf{u}|=u$:

$$\boldsymbol{\alpha}=\gamma_{u}^{2}\left[\mathbf{a}+\frac{\gamma_{u}^{2}}{\gamma_{u}+1}\frac{(\mathbf{u}\cdot\mathbf{a})\mathbf{u}}{c^{2}}\right]$$

There is also a close relationship to the magnitude of four-acceleration: As it is invariant, it can be determined in the momentary inertial frame $S'$, in which $\mathbf{u}'=0$ and $\mathbf{a}'=\boldsymbol{\alpha}$; by $\gamma_{u'}=1$ it follows:

$$|\mathbf{A}|^{2}=\alpha^{2},\qquad|\mathbf{A}|=\alpha$$

Thus the magnitude of four-acceleration corresponds to the magnitude of proper acceleration. By combining this with the invariant expression for $|\mathbf{A}|^{2}$ given above, an alternative method for the determination of the connection between $\boldsymbol{\alpha}$ in $S'$ and $\mathbf{a}$ in $S$ is given, namely

$$\alpha^{2}=\gamma_{u}^{6}\left(a^{2}-\frac{(\mathbf{u}\times\mathbf{a})^{2}}{c^{2}}\right)$$

from which the component relations $\alpha_{x}=\gamma_{u}^{3}a_{x}$, $\alpha_{y}=\gamma_{u}^{2}a_{y}$, $\alpha_{z}=\gamma_{u}^{2}a_{z}$ follow again when the velocity is directed in the x-direction by $u=u_{x}$ and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered.
Acceleration and force
Assuming constant mass $m$, the four-force $\mathbf{F}$ as a function of three-force $\mathbf{f}$ is related to four-acceleration by $\mathbf{F}=m\mathbf{A}$, thus:

$$\mathbf{F}=\left(\gamma_{u}\frac{\mathbf{f}\cdot\mathbf{u}}{c},\ \gamma_{u}\mathbf{f}\right)=m\left(\gamma_{u}^{4}\frac{\mathbf{a}\cdot\mathbf{u}}{c},\ \gamma_{u}^{2}\mathbf{a}+\gamma_{u}^{4}\frac{(\mathbf{a}\cdot\mathbf{u})}{c^{2}}\mathbf{u}\right)$$

The relation between three-force and three-acceleration for arbitrary directions of the velocity is thus

$$\mathbf{f}=m\gamma_{u}\left(\mathbf{a}+\frac{\gamma_{u}^{2}}{c^{2}}(\mathbf{u}\cdot\mathbf{a})\mathbf{u}\right),\qquad\mathbf{a}=\frac{1}{m\gamma_{u}}\left(\mathbf{f}-\frac{(\mathbf{u}\cdot\mathbf{f})\mathbf{u}}{c^{2}}\right)$$

When the velocity is directed in the x-direction by $u=u_{x}$ and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered:

$$f_{x}=m\gamma_{u}^{3}a_{x},\qquad f_{y}=m\gamma_{u}a_{y},\qquad f_{z}=m\gamma_{u}a_{z}$$

Therefore, the Newtonian definition of mass as the ratio of three-force and three-acceleration is disadvantageous in SR, because such a mass would depend both on velocity and direction. Consequently, the following mass definitions used in older textbooks are not used anymore:

$m\gamma_{u}^{3}$ as "longitudinal mass",
$m\gamma_{u}$ as "transverse mass".

The relation between three-acceleration and three-force can also be obtained from the equation of motion

$$\mathbf{f}=\frac{d\mathbf{p}}{dt}=\frac{d(m\gamma_{u}\mathbf{u})}{dt}$$

where $\mathbf{p}$ is the three-momentum. The corresponding transformation of three-force between $\mathbf{f}$ in $S$ and $\mathbf{f}'$ in $S'$ (when the relative velocity between the frames is directed in the x-direction by $v=v_{x}$ and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered) follows by substitution of the relevant transformation formulas for $\mathbf{u}$, $\mathbf{a}$, $m\gamma_{u}$, $\mathbf{p}$, or from the Lorentz transformed components of four-force, with the result:

$$f_{x}'=\frac{f_{x}-\frac{v}{c^{2}}(\mathbf{f}\cdot\mathbf{u})}{1-\frac{u_{x}v}{c^{2}}},\qquad f_{y}'=\frac{f_{y}}{\gamma_{v}\left(1-\frac{u_{x}v}{c^{2}}\right)},\qquad f_{z}'=\frac{f_{z}}{\gamma_{v}\left(1-\frac{u_{x}v}{c^{2}}\right)}$$

Or generalized for arbitrary directions of $\mathbf{u}$ and $\mathbf{v}$ with magnitude $|\mathbf{v}|=v$:

$$\mathbf{f}'=\frac{1}{1-\frac{\mathbf{v}\cdot\mathbf{u}}{c^{2}}}\left[\frac{\mathbf{f}}{\gamma_{v}}-\frac{(\mathbf{f}\cdot\mathbf{u})\mathbf{v}}{c^{2}}+\left(1-\frac{1}{\gamma_{v}}\right)\frac{(\mathbf{f}\cdot\mathbf{v})\mathbf{v}}{v^{2}}\right]$$
Proper acceleration and proper force
The force in a momentary inertial frame measured by a comoving spring balance can be called proper force $\mathbf{F}$. It follows from the force transformations above by setting $\mathbf{f}'=\mathbf{F}$ and $\mathbf{u}'=0$ as well as $\mathbf{v}=\mathbf{u}$ and $\gamma_{v}=\gamma_{u}$. Thus where only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered:

$$F_{x}=f_{x},\qquad F_{y}=\gamma_{u}f_{y},\qquad F_{z}=\gamma_{u}f_{z}$$

Generalized for arbitrary directions of $\mathbf{u}$ of magnitude $|\mathbf{u}|=u$:

$$\mathbf{F}=\gamma_{u}\left[\mathbf{f}-\frac{\gamma_{u}}{\gamma_{u}+1}\frac{(\mathbf{u}\cdot\mathbf{f})\mathbf{u}}{c^{2}}\right]$$

Since in a momentary inertial frame the spatial components of the four-force reduce to the proper force $\mathbf{F}$ and those of the four-acceleration to $\boldsymbol{\alpha}$, the relation between four-force and four-acceleration produces the Newtonian relation $\mathbf{F}=m\boldsymbol{\alpha}$, therefore the relations above can be summarized

$$F_{x}=f_{x}=m\alpha_{x}=m\gamma_{u}^{3}a_{x},\qquad F_{y}=\gamma_{u}f_{y}=m\alpha_{y}=m\gamma_{u}^{2}a_{y},\qquad F_{z}=\gamma_{u}f_{z}=m\alpha_{z}=m\gamma_{u}^{2}a_{z}$$

By that, the apparent contradiction in the historical definitions of transverse mass can be explained. Einstein (1905) described the relation between three-acceleration and proper force

$$m\gamma_{u}^{2}=\frac{F_{y}}{a_{y}}=\frac{F_{z}}{a_{z}},$$

while Lorentz (1899, 1904) and Planck (1906) described the relation between three-acceleration and three-force

$$m\gamma_{u}=\frac{f_{y}}{a_{y}}=\frac{f_{z}}{a_{z}}.$$
Curved world lines
By integration of the equations of motion one obtains the curved world lines of accelerated bodies, corresponding to a sequence of momentary inertial frames (here, the expression "curved" relates to the form of the worldlines in Minkowski diagrams, which should not be confused with the "curved" spacetime of general relativity). In this connection, the so-called clock hypothesis or clock postulate has to be considered: the proper time of comoving clocks is independent of acceleration; that is, the time dilation of these clocks as seen in an external inertial frame depends only on their relative velocity with respect to that frame. Two simple cases of curved world lines are now provided by integration of equation () for proper acceleration:
a) Hyperbolic motion: The constant, longitudinal proper acceleration by () leads to the world line
The worldline corresponds to the hyperbolic equation , from which the name hyperbolic motion is derived. These equations are often used for the calculation of various scenarios of the twin paradox or Bell's spaceship paradox, or in relation to space travel using constant acceleration.
b) The constant, transverse proper acceleration by () can be seen as a centripetal acceleration, leading to the worldline of a body in uniform rotation
where is the tangential speed, is the orbital radius, is the angular velocity as a function of coordinate time, and as the proper angular velocity.
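Both worldlines can be checked numerically. In the sketch below (function names are my own; units with c = 1), the hyperbolic worldline is written in one common convention, parametrized by proper time, and the rotation case returns the proper angular velocity and the centripetal proper acceleration:

```python
import math

def hyperbolic_worldline(alpha, tau):
    """Constant longitudinal proper acceleration alpha, parametrized by
    proper time tau:  t = sinh(alpha*tau)/alpha,  x = cosh(alpha*tau)/alpha.
    Every point satisfies the hyperbola x**2 - t**2 = 1/alpha**2."""
    return math.sinh(alpha * tau) / alpha, math.cosh(alpha * tau) / alpha

def rotation_kinematics(v, r):
    """Uniform circular motion at tangential speed v and orbital radius r:
    returns (omega, omega0, alpha_proper), where omega = v/r is the angular
    velocity in coordinate time, omega0 = gamma*omega is the proper angular
    velocity, and alpha_proper = gamma**2 * v**2 / r is the centripetal
    proper acceleration."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return v / r, g * v / r, g * g * v * v / r
```

For example, the hyperbola invariant holds at every proper time, and at v = 0.6, r = 2 the proper angular velocity exceeds the coordinate one by the factor gamma = 1.25.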
A classification of curved worldlines can be obtained by using the differential geometry of triple curves, which can be expressed by spacetime Frenet-Serret formulas. In particular, it can be shown that hyperbolic motion and uniform circular motion are special cases of motions having constant curvatures and torsions, satisfying the condition of Born rigidity. A body is called Born rigid if the spacetime distance between its infinitesimally separated worldlines or points remains constant during acceleration.
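The clock hypothesis can also be made concrete numerically: the proper time elapsed along a worldline is obtained by integrating dτ = √(1 − v²) dt (with c = 1), so it depends only on the speed history v(t), however the body accelerates. A minimal sketch (my own function name):

```python
import math

def proper_time(v_of_t, t_total, n=100_000):
    """Midpoint-rule integration of d(tau) = sqrt(1 - v**2) dt along a
    worldline with speed history v_of_t (speeds in units of c), over a
    span t_total of coordinate time.  Per the clock hypothesis, only
    the speeds matter, not the accelerations."""
    dt = t_total / n
    return sum(math.sqrt(1.0 - v_of_t((i + 0.5) * dt) ** 2) * dt
               for i in range(n))
```

A clock moving at a constant speed of 0.8 for 10 units of coordinate time accumulates 6 units of proper time, the familiar time-dilation factor of 0.6.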
Accelerated reference frames
Instead of inertial frames, these accelerated motions and curved worldlines can also be described using accelerated or curvilinear coordinates. The proper reference frame established that way is closely related to Fermi coordinates. For instance, the coordinates for a hyperbolically accelerated reference frame are sometimes called Rindler coordinates, and those of a uniformly rotating reference frame are called rotating cylindrical coordinates (or sometimes Born coordinates). In terms of the equivalence principle, the effects arising in these accelerated frames are analogous to effects in a homogeneous, fictitious gravitational field. In this way it can be seen that the employment of accelerated frames in SR produces important mathematical relations, which (when further developed) play a fundamental role in the description of real, inhomogeneous gravitational fields in terms of curved spacetime in general relativity.
History
For further information see von Laue, Pauli, Miller, Zahar, Gourgoulhon, and the historical sources in history of special relativity.
1899 Hendrik Lorentz derived the correct (up to a certain factor ) relations for accelerations, forces and masses between a resting electrostatic system of particles (in a stationary aether) and a system emerging from it by adding a translation, with as the Lorentz factor:
, , for by ();
, , for by ();
, , for , thus longitudinal and transverse mass by ();
Lorentz explained that he had no means of determining the value of . If he had set , his expressions would have assumed the exact relativistic form.
1904 Lorentz derived the previous relations in a more detailed way, namely with respect to the properties of particles resting in the system and the moving system , with the new auxiliary variable equal to compared to the one in 1899, thus:
for as a function of by ();
for as a function of by ();
for as a function of by ();
for longitudinal and transverse mass as a function of the rest mass by (, ).
This time, Lorentz could show that , by which his formulas assume the exact relativistic form. He also formulated the equation of motion
with
which corresponds to () with , with , , , , , and as electromagnetic rest mass. Furthermore, he argued, that these formulas should not only hold for forces and masses of electrically charged particles, but for other processes as well so that the earth's motion through the aether remains undetectable.
1905 Henri Poincaré introduced the transformation of three-force ():
with , and as the Lorentz factor, the charge density. Or in modern notation: , , , and . As Lorentz, he set .
1905 Albert Einstein derived the equations of motion on the basis of his special theory of relativity, which represent the relation between equally valid inertial frames without the action of a mechanical aether. Einstein concluded that in a momentary inertial frame the equations of motion retain their Newtonian form:
.
This corresponds to , because and and . By transformation into a relatively moving system he obtained the equations for the electrical and magnetic components observed in that frame:
.
This corresponds to () with , because and and and . Consequently, Einstein determined the longitudinal and transverse mass, even though he related it to the force in the momentary rest frame measured by a comoving spring balance, and to the three-acceleration in system :
This corresponds to () with .
1905 Poincaré introduced the transformation of three-acceleration ():
where as well as and and .
Furthermore, he introduced the four-force in the form:
where and and .
1906 Max Planck derived the equation of motion
with
and
and
The equations correspond to () with
, with and and , in agreement with those given by Lorentz (1904).
1907 Einstein analyzed a uniformly accelerated reference frame and obtained formulas for coordinate dependent time dilation and speed of light, analogous to those given by Kottler-Møller-Rindler coordinates.
1907 Hermann Minkowski defined the relation between the four-force (which he called the moving force) and the four acceleration
corresponding to .
1908 Minkowski denoted the second derivative with respect to proper time as the "acceleration vector" (four-acceleration). He showed that its magnitude at an arbitrary point of the worldline is , where is the magnitude of a vector directed from the center of the corresponding "curvature hyperbola" () to .
1909 Max Born denoted the motion with constant magnitude of Minkowski's acceleration vector as "hyperbolic motion" (), in the course of his study of rigidly accelerated motion. He set (now called proper velocity) and as Lorentz factor and as proper time, with the transformation equations
.
which corresponds to () with and . Eliminating Born derived the hyperbolic equation , and defined the magnitude of acceleration as . He also noticed that his transformation can be used to transform into a "hyperbolically accelerated reference system" ().
1909 Gustav Herglotz extends Born's investigation to all possible cases of rigidly accelerated motion, including uniform rotation.
1910 Arnold Sommerfeld brought Born's formulas for hyperbolic motion into a more concise form with as the imaginary time variable and as an imaginary angle:
He noted that when are variable and is constant, they describe the worldline of a charged body in hyperbolic motion. But if are constant and is variable, they denote the transformation into its rest frame.
1911 Sommerfeld explicitly used the expression "proper acceleration" () for the quantity in , which corresponds to (), as the acceleration in the momentary inertial frame.
1911 Herglotz explicitly used the expression "rest acceleration" () instead of proper acceleration. He wrote it in the form and which corresponds to (), where is the Lorentz factor and or are the longitudinal and transverse components of rest acceleration.
1911 Max von Laue derived in the first edition of his monograph "Das Relativitätsprinzip" the transformation for three-acceleration by differentiation of the velocity addition
equivalent to () as well as to Poincaré (1905/6). From that he derived the transformation of rest acceleration (equivalent to ), and eventually the formulas for hyperbolic motion which corresponds to ():
thus
,
and the transformation into a hyperbolic reference system with imaginary angle :
.
He also wrote the transformation of three-force as
equivalent to () as well as to Poincaré (1905).
1912–1914 Friedrich Kottler obtained general covariance of Maxwell's equations, and used four-dimensional Frenet-Serret formulas to analyze the Born rigid motions given by Herglotz (1909). He also obtained the proper reference frames for hyperbolic motion and uniform circular motion.
1913 von Laue replaced in the second edition of his book the transformation of three-acceleration by Minkowski's acceleration vector, for which he coined the name "four-acceleration" (), defined by with as four-velocity. He showed that the magnitude of four-acceleration corresponds to the rest acceleration by
,
which corresponds to (). Subsequently, he derived the same formulas as in 1911 for the transformation of rest acceleration and hyperbolic motion, and the hyperbolic reference frame.
References
Bibliography
; First edition 1911, second expanded edition 1913, third expanded edition 1919.
In English:
Historical papers
External links
Mathpages: Transverse Mass in Einstein's Electrodynamics, Accelerated Travels, Born Rigidity, Acceleration, and Inertia, Does A Uniformly Accelerating Charge Radiate?
Physics FAQ: Acceleration in Special Relativity, The Relativistic Rocket
Special relativity | Acceleration (special relativity) | [
"Physics",
"Mathematics"
] | 3,785 | [
"Physical quantities",
"Acceleration",
"Quantity",
"Special relativity",
"Theory of relativity",
"Wikipedia categories named after physical quantities"
] |
52,921,086 | https://en.wikipedia.org/wiki/Uragan-2M | Uragan-2M (U-2M, ) is a stellarator (magnetic plasma confinement, controlled thermonuclear fusion experiment) installed at the Institute of Plasma Physics National Science Center, which is part of the Kharkiv Institute of Physics and Technology (IFS KIPT) in Kharkiv, Ukraine. It was the largest stellarator (torsatron) in Europe until the construction of Wendelstein 7-X.
Specifications
Uragan-2M is a medium-sized stellarator with reduced helical corrugations. The unit has a torus radius of , a plasma radius of up to , and a toroidal magnetic field of up to .
See also
Controlled thermonuclear fusion
References
Further reading
Stellarators
Nuclear research institutes
Plasma physics facilities | Uragan-2M | [
"Physics",
"Engineering"
] | 168 | [
"Nuclear research institutes",
"Nuclear organizations",
"Plasma physics",
"Plasma physics stubs",
"Plasma physics facilities"
] |
52,926,101 | https://en.wikipedia.org/wiki/G-10%20%28material%29 | G-10 or garolite is a high-pressure fiberglass laminate, a type of composite material. It is created by stacking multiple layers of glass cloth, soaked in epoxy resin, then compressing the resulting material under heat until the epoxy cures. It is manufactured in flat sheets, most often a few millimeters thick.
G-10 is very similar to Micarta and carbon fiber laminates, except that glass cloth is used as filler material. (Note that the professional nomenclature of "filler" and "matrix" in composite materials may be somewhat counterintuitive when applied to soaking textiles with resin.)
G-10 is the toughest of the glass fiber resin laminates and therefore the most commonly used.
Properties
G-10 is favored for its high strength, low moisture absorption, and high level of electrical insulation and chemical resistance. These properties are maintained not only at room temperature but also under humid or moist conditions. It was first used as a substrate for printed circuit boards, and its designation, G-10, comes from a National Electrical Manufacturers Association standard for this purpose.
Decorative uses
Decorative variations of G-10 are produced in many colors and patterns and are especially used to make handles for knives, grips for firearms and other tools. These can be textured (for grip), bead blasted, sanded or polished. Its strength and low density make it useful for other kinds of handcrafting as well.
Structural uses
G-10 is used to reinforce the edges of fiberglass-coated wood. It is used to protect the point of contact on many such items. During ordinary use it is the G-10 that takes the brunt of the blow. In such applications it is meant to be replaced as it wears. G-10 is also used as a 3D printer build surface.
G-10 is also commonly used as a material for durable knife and gun handles and grips.
Hazards
G-10 is safe to handle absent extreme conditions.
Hazards can result from cutting or grinding the material, as glass and epoxy dust are well known to contribute to respiratory disorders and may increase the risk of developing lung cancer. For any work of this kind, the work space should be appropriately ventilated and masks or respirators worn.
Epoxy resin is flammable and, once ignited, will burn vigorously, giving off poisonous gases. For this reason, materials such as FR-4 containing flame retardant additives have replaced G-10 in certain applications.
See also
Bakelite
References
Printed circuit board manufacturing
Fibre-reinforced polymers
Fiberglass | G-10 (material) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 533 | [
"Fiberglass",
"Electronic engineering",
"Polymer chemistry",
"Electrical engineering",
"Printed circuit board manufacturing"
] |
52,926,589 | https://en.wikipedia.org/wiki/Polycentric%20networks | In public policy a polycentric network is a group of distinct local, regional, or national entities that work co-operatively towards a common goal. Proponents claim that such networks can better adapt to changing issues collectively than individually, thus providing network participants better results from relevant efforts.
Urban contexts
Robert Kloosterman and Bart Lambregts define polycentric urban regions as collections of historically distinct jurisdictions that are administratively and politically independent. These jurisdictions are in close proximity and well connected through infrastructure. The literature on polycentric urban regions is limited and unconsolidated, so diverse concepts exist. Evert Meijers claimed that polycentric networks are especially prominent in Europe.
Rural polycentric networks are nearly non-existent. Urban polycentric networks draw heavily on economic network theories. According to Meijers, “individual cities in these collections of distinct but proximally-located cities relate to each other in a synergetic way, making the whole network of cities more than the sum of its parts”.
Implementation
Polycentric networks have different spatial characteristics, reflecting a micro, meso, or macro-level of connections in a given region. These different scales allow flexible and convertible networks for spatial planning in complex regions and systems.
Micro-level: intra-urban or intra-regional aspects within a certain city region. The emphasis at this level is “urban functional and economic complementarities” which make “cooperation and improved links” major engines of regional economic performance and “promote integrated spatial development strategies for city clusters”.
Meso-level: inter-metropolitan issues within a delimited area. The emphasis at this level is very similar to the micro-level, with added specialization.
Macro-level: inter-metropolitan issues on a continental or global scale. At the macro level polycentricism is considered to be “a useful alternative model to enhance regional development more evenly across the European territory”.
Metropolitan areas
In metropolitan areas, the scale and intensity of collaboration is a key determinant of whether or not polycentric networks function properly. Metropolitan planning organizations (MPOs) have given researchers a unique opportunity to study the scale and intensity of collaboration. A 2015 study of 381 MPOs in the United States, found a direct link between the MPO's scale and performance. The study concluded that more intense MPO collaboration across both vertical and horizontal stakeholders improved performance. The study found that MPOs that focused more on vertical collaboration (between the state and higher-up agencies) saw a decrease in their perceived performance. The study looked at 15 indicators including condition of transportation network, mobility for disadvantaged populations, air quality, highway congestion, public participation, extent of coordination and stakeholder involvement, satisfaction among general public, satisfaction among local stakeholders, compliance with federal and state rules, transportation systems security, accessibility, reliability, and safety, travel demand model accuracy and project implementation.
Future polycentric networks
Researcher Perry Yang claimed that the future of polycentric networks lies in sustainability, and conducted research in Singapore on cities' push for greater sustainability. With globalization and the division of land, ecosystems are split and habitats changed as infrastructure is built for human development. A central issue for sustainability is how humans can minimize their environmental impact while thriving as a species.
Yang's research explored Singapore's growth over time. He noted that this growth was largely industrial and revolved around mass rapid transit (MRT), which connects areas of the island. The general shift seen in Singapore can be read as the impact of polycentric networks throughout the island following the implementation of rezoning policies. From 1986 to 1994, much rezoning occurred due to changes in land use policy. Yang found that transit and raw materials are central to growth in rapidly developing areas and argued that Singapore is a good example of a polycentric urban form, though it may not be adequate to establish an urban sustainability model.
See also
Economics of networks
Supply chain collaboration
References
Networks
Urban planning in Singapore
Natural resource management
Transport and the environment
Zoning
Metropolitan areas
Community development
Environmental impact assessment
Economic integration
Cityscapes
Travel
Spatial planning
Urban planning | Polycentric networks | [
"Physics",
"Engineering"
] | 817 | [
"Travel",
"Transport and the environment",
"Zoning",
"Physical systems",
"Transport",
"Construction",
"Urban planning",
"Architecture"
] |
54,246,896 | https://en.wikipedia.org/wiki/Stem%20cell%20secretome | The stem cell secretome (also referred to as the stromal cell secretome) is a collective term for the paracrine soluble factors produced by stem cells and utilized for their inter-cell communication. In addition to inter-cell communication, the paracrine factors are also responsible for tissue development, homeostasis and (re-)generation. The stem cell secretome consists of extracellular vesicles, specifically exosomes, microvesicles, membrane particles, peptides and small proteins (cytokines). The paracrine activity of stem cells, i.e. the stem cell secretome, has been found to be the predominant mechanism by which stem cell-based therapies mediate their effects in degenerative, auto-immune and/or inflammatory diseases. Though not only stem cells possess a secretome which influences their cellular environment, their secretome currently appears to be the most relevant for therapeutic use.
Extracellular Vesicles
Extracellular vesicles are small particles, bounded by a lipid bilayer, that cells normally discharge. Unlike cells, extracellular vesicles are not able to replicate. The cargo packed into extracellular vesicles, which makes up part of the stem cell secretome, includes organelles, mRNA, miRNA, and proteins. Exosomes, a class of extracellular vesicles, are found in biological fluids such as the cerebrospinal fluid, which can be used for treatment. Most importantly, exosomes can be found between the cells of eukaryotic organisms, in what is known as the tissue matrix.
Research
Stem cell therapies, here referring to therapies employing non-hematopoietic mesenchymal stem cells, have a wide range of potential therapeutic benefits for different diseases, most of which are currently being investigated in clinical trials. Stem cell therapies hold promise as regenerative medicine for patients diagnosed with diseases that affect the midbrain, with strokes and heart disease, with joint disease, and with injuries to the spinal cord. Therapeutic properties of stem cells are mainly attributed to their secretome, which has been shown to modulate several biological processes in vitro and in vivo, such as cell proliferation, survival, differentiation, immunomodulation, anti-apoptosis, angiogenesis and stimulation of tissue-adjacent cells. This is contrary to the historic hypothesis that stem cell migration and transdifferentiation are the primary mechanisms of effect of stem cell injection therapies.
The most commonly used type of stem cell for therapeutic use is the human (autologous) mesenchymal stem cell (hMSC). The hMSC secretome is one of the most widely researched secretome profiles. The secretomes of other cell types, for example dendritic cells, are also being investigated for therapeutic use.
Studies of hMSCs aimed for examining their regenerative capacities for putative treatment of neurodegenerative diseases have demonstrated that hMSCs are able to secrete important neuroregulatory molecules, such as: brain-derived neurotrophic factor (BDNF), nerve growth factor (NGF), insulin growth factor 1 (IGF-1), hepatocyte growth factor (HGF), vascular endothelial growth factor (VEGF), transforming growth factor beta (TGF-β), glial-derived neurotrophic factor (GDNF), fibroblast growth factor 2 (FGF-2), stem cell factor (SCF), granulocyte colony-stimulating factor (G-CSF) and stromal cell-derived factor (SDF-1) both in vitro and in vivo. All of these molecules have been shown to have beneficial effects towards the treatment of neurodegenerative diseases.
With regard to orthopaedic conditions such as arthritis, the paracrine factors of stem cell-based therapies appear to be responsible for the majority of regenerative effects. Extracellular vesicles have a prominent role in the development of joints and in the regulation of the intra-articular homeostasis. In the case of arthritis, this homeostasis is disrupted for different reasons. Hypothetically, one reason may be related to the accumulation of senescent cells and their associated secretory phenotype. The secretome of (mesenchymal) stem cells has positive effects on reestablishing the intra-articular homeostasis and stimulating regeneration through different growth factors, cytokines and miRNAs that are contained within its extracellular vesicles.
As a consequence, efforts have been made to synthesize specific stem cell secretomes efficiently in vitro. In general, stem cells become activated and produce higher amounts of secretome in response to external stress (for example, from damaged tissues in vivo). As such, the main preconditioning mechanisms to induce secretome (extracellular vesicle) production are stress-inducing methods, most prominently anoxia and hypoxia, but also pharmacological, physical or cytokine-related methods that force the cells to produce secretome in vitro. This approach is also known as cell-free stem cell therapy.
It has been hypothesized that future therapies aiming at generating a (specific) secretome with a defined profile, and optimized concentrations of paracrine factors will yield a better, more reliable and controlled outcome as compared to previous approaches that rely solely on injecting (mesenchymal) stem cells into the body and hope that their paracrine (or trans differentiation) capacity will have beneficial effects in the body. However, the controlled therapeutic use of the stem cell secretome demands high-quality standardization of isolation and analysis techniques to yield reproducible secretome preparations.
Various pharmaceutical companies and clinical institutions have started to develop protocols for the in vitro extraction of specific secretome profiles from autologous mesenchymal stem cells, as well as for the clinical use of secretome as a novel therapeutic for numerous diseases, either as a private-pay procedure or within clinical trials. Even though these treatments are in compliance with the regulatory framework in Europe under certain conditions as of May 2017, there is as yet no evidence of their efficacy in human clinical trials, apart from singular case reports. Therefore, at the moment, the clinical use of stem cell secretome is experimental and mainly based on in vitro and animal data. One potential application of autologous stem cell secretome has been in veterinary medicine, as commercialized by a Russian company, T-Helper Cell Technologies, in 2017 under the name Reparin-Helper.
References
Omics
Stem cell research | Stem cell secretome | [
"Chemistry",
"Biology"
] | 1,405 | [
"Stem cell research",
"Bioinformatics",
"Omics",
"Translational medicine",
"Tissue engineering"
] |
54,248,271 | https://en.wikipedia.org/wiki/Glow-discharge%20optical%20emission%20spectroscopy | Glow-discharge optical emission spectroscopy (GDOES) is a spectroscopic method for the quantitative analysis of metals and other non-metallic solids. The idea was published and patented in 1968 by Werner Grimm from Hanau, Germany.
Ordinary atomic spectroscopy can be used to determine the surface of a material, but not its layered structure. In contrast, GDOES gradually ablates the layers of the sample, revealing the deeper structure.
GDOES spectroscopy can be used for the quantitative and qualitative determination of elements and is therefore a method of analytical chemistry.
Process
The metallic sample serves as the cathode in a direct-current plasma. Material is removed from the sample surface layer by layer by sputtering with argon ions. The removed atoms diffuse into the plasma, where they are excited and emit photons at characteristic wavelengths, which are recorded by means of a downstream spectrometer and subsequently quantified.
When using a high-frequency alternating voltage for plasma generation and the corresponding construction of the glow discharge source, non-metallic samples can also be examined.
Various instruments are used as detectors. Photomultipliers can detect both the slightest traces and high concentrations of the sensor-specific element. By means of a charge-coupled device, a complete elemental spectrum can be measured for the corresponding layer thickness.
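As an illustration of how a depth profile is obtained, intensity readings recorded against sputtering time can be mapped onto depth once the sputter rate is known. The sketch below is my own simplification under the assumption of a constant, pre-calibrated sputter rate; real quantification additionally requires calibration against reference materials:

```python
def depth_profile(times_s, intensities, sputter_rate_nm_per_s):
    """Map a GDOES intensity-vs-time record onto depth, assuming a
    constant sputter rate in nm/s (an idealization; real sputter rates
    vary with the material of each layer).
    Returns a list of (depth_nm, intensity) pairs."""
    return [(t * sputter_rate_nm_per_s, i)
            for t, i in zip(times_s, intensities)]
```

For a 10 nm/s rate, readings at 0, 1 and 2 seconds correspond to depths of 0, 10 and 20 nm.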
Applications
Glow discharge spectroscopy is an established method for the characterization of steels and varnishes. Recent developments relate to the analysis of porous electrodes from lithium-ion batteries.
Further reading
R.Kenneth Marcus, José Broekaert: Glow Discharge Plasmas in Analytical Spectroscopy Wiley,
Thomas Nelis, Richard Payling,: Glow Discharge Optical Emission Spectroscopy - A Practical Guide
References
External links
Glow Discharges, Glow Discharge Laboratory
GDOES Theory with excellent illustrations
Emission spectroscopy | Glow-discharge optical emission spectroscopy | [
"Physics",
"Chemistry"
] | 363 | [
"Emission spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
54,248,486 | https://en.wikipedia.org/wiki/Multivalued%20treatment | In statistics, in particular in the design of experiments, a multi-valued treatment is a treatment that can take on more than two values. It is related to the dose-response model in the medical literature.
Description
Generally speaking, treatment levels may be finite or infinite as well as ordinal or cardinal, which leads to a large collection of possible treatment effects to be studied in applications. One example is the effect of different levels of program participation (e.g. full-time and part-time) in a job training program.
Assume there exists a finite collection of multi-valued treatment statuses, with J some fixed integer. As in the potential outcomes framework, denote the collection of potential outcomes under the treatment J, and denotes the observed outcome and is an indicator that equals 1 when the treatment equals j and 0 when it does not equal j, leading to the fundamental problem of causal inference. A general framework that analyzes ordered choice models in terms of marginal treatment effects and average treatment effects has been extensively discussed by Heckman and Vytlacil.
Recent work in the econometrics and statistics literature has focused on estimation and inference for multivalued treatments and ignorability conditions for identifying the treatment effects. In the context of program evaluation, the propensity score has been generalized to allow for multi-valued treatments, while other work has also focused on the role of the conditional mean independence assumption. Other recent work has focused more on the large sample properties of an estimator of the marginal mean treatment effect conditional on a treatment level in the context of a difference-in-differences model, and on the efficient estimation of multi-valued treatment effects in a semiparametric framework.
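As an illustration of how a generalized propensity score can be used with a multi-valued treatment, the sketch below (my own minimal implementation; the propensity scores are taken as given rather than estimated) computes inverse-probability-weighted estimates of the mean potential outcome at each treatment level:

```python
from collections import defaultdict

def ipw_means(treatments, outcomes, propensities):
    """Inverse-probability-weighted estimates of E[Y(j)] for each treatment
    level j.  propensities[i] maps each level j to the generalized
    propensity score P(T_i = j | X_i); under an ignorability condition,
    E[ 1{T=j} * Y / P(T=j|X) ] = E[Y(j)]."""
    sums = defaultdict(float)
    n = len(treatments)
    for t, y, p in zip(treatments, outcomes, propensities):
        sums[t] += y / p[t]          # only the realized level contributes
    return {j: s / n for j, s in sums.items()}
```

With uniform propensity scores the estimator reduces to level-wise sample means rescaled by the assignment probability.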
References
Applied mathematics
Design of experiments
Statistical theory
Industrial engineering
Systems engineering
Statistical process control
Quantitative research
Experiments
Pharmacodynamics
Toxicology | Multivalued treatment | [
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 370 | [
"Pharmacology",
"Systems engineering",
"Toxicology",
"Statistical process control",
"Pharmacodynamics",
"Applied mathematics",
"Industrial engineering",
"Engineering statistics"
] |
54,248,584 | https://en.wikipedia.org/wiki/Optimal%20instruments | In statistics and econometrics, optimal instruments are a technique for improving the efficiency of estimators in conditional moment models, a class of semiparametric models that generate conditional expectation functions. To estimate parameters of a conditional moment model, the statistician can derive an expectation function (defining "moment conditions") and use the generalized method of moments (GMM). However, there are infinitely many moment conditions that can be generated from a single model; optimal instruments provide the most efficient moment conditions.
As an example, consider the nonlinear regression model
where is a scalar (one-dimensional) random variable, is a random vector with dimension , and is a -dimensional parameter. The conditional moment restriction is consistent with infinitely many moment conditions. For example:
More generally, for any vector-valued function of , it will be the case that
.
That is, defines a finite set of orthogonality conditions.
A natural question to ask, then, is whether an asymptotically efficient set of conditions is available, in the sense that no other set of conditions achieves lower asymptotic variance. Both econometricians and statisticians have extensively studied this subject.
The answer to this question is that, in general, such a finite set exists; this has been proven for a wide range of estimators. Takeshi Amemiya was one of the first to work on this problem and showed the optimal instruments for nonlinear simultaneous equation models with homoskedastic and serially uncorrelated errors. The form of the optimal instruments was characterized by Lars Peter Hansen, and results for nonparametric estimation of optimal instruments are provided by Newey. A result for nearest neighbor estimators was provided by Robinson.
In linear regression
The technique of optimal instruments can be used to show that, in a conditional moment linear regression model with iid data, the optimal GMM estimator is generalized least squares. Consider the model

y = x′β + ε,   E[ε | x] = 0,

where y is a scalar random variable, x is a k-dimensional random vector, and β is a k-dimensional parameter vector. As above, the moment conditions are

E[g(x) (y − x′β)] = 0,

where g(x) is a k-dimensional instrument set (a function of x). The task is to choose g to minimize the asymptotic variance of the resulting GMM estimator. If the data are iid, the asymptotic variance of the GMM estimator is

V(g) = (E[g(x) x′])⁻¹ E[σ²(x) g(x) g(x)′] (E[x g(x)′])⁻¹,

where σ²(x) = E[ε² | x].

The optimal instruments are given by

g*(x) = σ⁻²(x) x,

which produces the asymptotic variance matrix

V* = (E[σ⁻²(x) x x′])⁻¹.

These are the optimal instruments because for any other g, the matrix

V(g) − V*

is positive semidefinite.

Given iid data (yᵢ, xᵢ), the GMM estimator corresponding to g* is

β̂ = (Σᵢ σ⁻²(xᵢ) xᵢ xᵢ′)⁻¹ Σᵢ σ⁻²(xᵢ) xᵢ yᵢ,

which is the generalized least squares estimator. (It is unfeasible because σ²(x) is unknown.)
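A small simulation can illustrate this result (a hypothetical sketch with simulated data, not from the source): under heteroskedasticity, the infeasible GLS estimator built from the optimal instruments g*(x) = x/σ²(x) has a smaller asymptotic variance than OLS, which corresponds to the instruments g(x) = x. All variable names and the variance design are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta0 = np.array([1.0, 2.0])

# Regressors: an intercept and one random covariate
x = np.column_stack([np.ones(n), rng.normal(size=n)])

# Heteroskedastic errors: sigma^2(x) depends on the covariate
sigma2 = 0.5 + x[:, 1] ** 2
y = x @ beta0 + rng.normal(size=n) * np.sqrt(sigma2)

# OLS corresponds to the instruments g(x) = x
beta_ols = np.linalg.solve(x.T @ x, x.T @ y)

# Optimal instruments g*(x) = x / sigma^2(x) give the (infeasible) GLS estimator
xw = x / sigma2[:, None]
beta_gls = np.linalg.solve(xw.T @ x, xw.T @ y)

# Sample analogues of the asymptotic variances:
# V(g) = (E[g x'])^-1 E[sigma^2 g g'] (E[x g'])^-1  and  V* = (E[x x'/sigma^2])^-1
a = np.linalg.inv(x.T @ x / n)
v_ols = a @ ((x * sigma2[:, None]).T @ x / n) @ a
v_gls = np.linalg.inv(xw.T @ x / n)

# V(g) - V* should be positive semidefinite: all eigenvalues non-negative
diff_eigs = np.linalg.eigvalsh(v_ols - v_gls)
```

Both estimators are consistent, but the GLS slope variance comes out strictly smaller than the OLS slope variance, matching the positive-semidefiniteness claim.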
References
Further reading
Econometric modeling
Moment (mathematics) | Optimal instruments | [
"Physics",
"Mathematics"
] | 557 | [
"Mathematical analysis",
"Moments (mathematics)",
"Physical quantities",
"Moment (physics)"
] |
54,248,882 | https://en.wikipedia.org/wiki/NanoCLAMP | In the medical field of immunology, nanoCLAMP (CLostridal Antibody Mimetic Proteins) affinity reagents are recombinant 15 kD antibody mimetic proteins selected for tight, selective and gently reversible binding to target molecules. The nanoCLAMP scaffold is based on an IgG-like, thermostable carbohydrate binding module family 32 (CBM32) from a Clostridium perfringens hyaluronidase (Mu toxin). The shape of nanoCLAMPs approximates a cylinder of approximately 4 nm in length and 2.5 nm in diameter, roughly the same size as a nanobody. nanoCLAMPs to specific targets are generated by varying the amino acid sequences and sometimes the length of three solvent exposed, adjacent loops that connect the beta strands making up the beta-sandwich fold, conferring binding affinity and specificity for the target.
Properties
nanoCLAMPs are the first antibody mimetics described to be polyol-responsive, meaning they release their targets upon exposure to a non-chaotropic salt and a polyol, such as propylene glycol. This property has been shown to be useful for purifying functional proteins and protein complexes by affinity purification. nanoCLAMPs are easily produced in the cytoplasm of E. coli, with typical yields in the range of 50 to 300 mg/L culture. Because nanoCLAMPs are devoid of cysteines, an engineered C-terminal cysteine can be used for site-directed conjugation of entities like fluorophores or resins using thiol-chemistry.
Development and applications
nanoCLAMPs were developed in the laboratories of Nectagen. nanoCLAMP phage display libraries were constructed that contained variations on 16 surface amino acids in three loops, with functional diversities of approximately 10⁹ variants. These libraries have been screened for binders to target proteins and peptides, typically yielding between 1 and 30 unique binders to the target.
Purified nanoCLAMPs containing a single C-terminal cysteine can be easily conjugated to halo-acetyl activated agarose resins under native or denaturing conditions, and the resulting thioether bond renders the resins leach-proof. Targets can be purified to apparent homogeneity in a single-step. The polyol-responsive nature of the resins allows the targets to be eluted with 0.75 M ammonium sulfate and 40% propylene glycol at pH 7.9, conditions which have been shown to preserve native structure and protein complexes.
nanoCLAMPs have been produced that target green fluorescent protein (GFP), mCherry, SUMO (SMT3), NusA, avidin, NeutrAvidin, maltose-binding protein (MBP), thioredoxin 1, beta-galactosidase, SlyD, and others. Typical binding capacities of resins range from 1 to 4 mg/ml resin. Because nanoCLAMPs readily refold, nanoCLAMP resins can be regenerated multiple times using guanidinium chloride to clean the resin.
References
External links
Nectagen, Inc., the developer
Antibody mimetics | NanoCLAMP | [
"Chemistry"
] | 674 | [
"Antibody mimetics",
"Molecular biology"
] |
54,250,075 | https://en.wikipedia.org/wiki/Carboalkoxylation | In industrial chemistry, carboalkoxylation is a process for converting alkenes to esters. This reaction is a form of carbonylation. A closely related reaction is hydrocarboxylation, which employs water in place of alcohols.
A commercial application is the carbomethoxylation of ethylene to give methyl propionate:

CH2=CH2 + CO + CH3OH → CH3CH2CO2CH3
The process is catalyzed by a palladium–diphosphine complex. Under similar conditions, other Pd-diphosphines catalyze formation of polyethyleneketone.
Methyl propionate ester is a precursor to methyl methacrylate, which is used in plastics and adhesives.
Carboalkoxylation has been incorporated into various telomerization schemes. For example, carboalkoxylation has been coupled with the dimerization of 1,3-butadiene. This step produces a doubly unsaturated C9-ester.
Hydroesterification
Related to carboalkoxylation is hydroesterification, the insertion of alkenes and alkynes into the H–O bond of carboxylic acids. Vinyl acetate is produced industrially by the addition of acetic acid to acetylene in the presence of zinc acetate catalysts:

CH3CO2H + HC≡CH → CH3CO2CH=CH2
Further reading
References
Chemical reactions
Carbon monoxide | Carboalkoxylation | [
"Chemistry"
] | 277 | [
"nan"
] |
54,252,098 | https://en.wikipedia.org/wiki/High%20flux%20reactor | A High Flux Reactor is a type of nuclear research reactor.
High Flux Isotope Reactor (HFIR), in Oak Ridge, Tennessee, United States of America
High Flux Australian Reactor (HIFAR), Australia's first nuclear reactor
High-Flux Advanced Neutron Application Reactor (HANARO), in South Korea
The High Flux Reactor at Institut Laue–Langevin in France
High Flux Reactor (HFR) at Petten in the Netherlands
Nuclear research reactors | High flux reactor | [
"Physics"
] | 97 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
54,253,065 | https://en.wikipedia.org/wiki/NOC%20%28software%29 | NOC is an open-source operations support system for telecommunications service providers. It can maintain network inventory, manage virtual circuits, maintain distributed DNS configuration and manage IP address blocks.
NOC Project is mentioned in the Configuration management and backup tools section of the 2019 GEANT SIG-NOC Tools Survey among other tools used by the community.
See also
Comparison of open-source configuration management software
Infrastructure as code (IaC)
Infrastructure as Code Tools
References
Python (programming language) software
Network management
Software using the BSD license | NOC (software) | [
"Engineering"
] | 107 | [
"Computer networks engineering",
"Network management"
] |
54,256,008 | https://en.wikipedia.org/wiki/Choreocolax%20polysiphoniae | Choreocolax polysiphoniae is a minute marine parasitic alga in the division Rhodophyta.
Description
This small parasitic alga grows on the red alga Polysiphonia lanosa. It grows as an irregular sphere on the fronds of the alga, reaching no more than 1 mm in extent.
Habitat
Parasitic on Polysiphonia lanosa, the filaments grow into the host.
Distribution
The species has been reported from North Russia and the Pacific. In Ireland it has been confidently recorded from counties Down, Antrim and Waterford and at scattered sites around the British Isles including the Shetland Islands.
Reproduction
Cruciate tetrasporangia are produced all year round in the cortex. The gametangia are dioecious and are produced in spring and summer.
References
Ceramiales
Parasitic eukaryotes
Species described in 1875 | Choreocolax polysiphoniae | [
"Biology"
] | 178 | [
"Parasitic eukaryotes",
"Eukaryotes"
] |
54,256,334 | https://en.wikipedia.org/wiki/Hager%20Group | Hager Group is a manufacturer of electrical installations in residential, commercial and industrial buildings based in Blieskastel, Germany. The company has been family-run and owned ever since its foundation in 1955.
Hager Group provides products and services ranging from energy distribution and cable management to intelligent building automation and security systems, under the brand Hager. Hager Group also owns the brands Berker, Bocchiotti, Daitem, Diagral, Elcom and E3/DC. In 2018, Hager Group was the world market leader in electrical installation systems. In August 2019, the group was ranked number 128 in the top 500 family-owned businesses in Germany according to the magazine Die Deutsche Wirtschaft.
History
In 1955, Hager oHG, elektrotechnische Fabrik was founded by brothers Oswald and Hermann Hager, together with their father Peter Hager in Ensheim in the Saarland region of Germany. Since 1945, Saarland had been under the economic control of France and had no access to the German market. However, Hager wanted to gain a foothold in both markets. In 1959, the Hager brothers founded their first foreign subsidiary, Hager Electro S. A., in Obernai, Alsace, in north-eastern France.
In 1966, Hager began systematic training of its electricians, whose expertise has created a culture of customer loyalty that continues to this day. Hager’s modular rotary fuse carrier was patented in Germany in 1968 and in France in 1970. At the same time, the first mass-produced distribution board, the Hager-Rapid-System, was launched on the French market. In 1973, Hager achieved sales of 43 million Deutsche Marks in Germany, and in 1974 the company reached a turnover of 22 million francs in France.
In 1976, Hager launched the mini Gamma enclosure, in 1982 the company started producing the first Residual-current circuit breakers (RCCB) in Germany. A new production facility with a high-bay warehouse was opened in Blieskastel.
Hager Group began to market itself as a complete service provider for electrical installations in buildings in the 1980s, setting up sales companies in Europe (Switzerland and Great Britain). In the mid-1990s, Hager set up distribution channels in the United Arab Emirates (Dubai), Singapore, Malaysia, Hong Kong, China, Australia and New Zealand.
In 2007, Hager Group became a European Company: Hager SE.
Locations
Hager Group has 22 manufacturing sites in 10 countries across the world. Components for the respective markets are manufactured at the local production facilities in order to accommodate local installation requirements. The biggest production site is in Obernai, France. Hager Forum was established there in 2015 as a training and meeting place for partners, customers and employees of the company.
Acquisitions
In 1992, the group acquired Lumetal, a manufacturer of distribution boards from Porcia, Italy. Hager Group acquired the German company Tehalit in 1996, a manufacturer of cable management systems and cable ducts.
In 1998, the group acquired the French electronic timer manufacturer Flash, whose registered office was in Saverne. Prior to this, Hager Group manufactured only mechanical timers. The same year, the company also acquired British manufacturer Ashley & Rock from Ulverston, whose products were manufactured according to British Standards.
In 2002, the Polish company Polo, whose registered office was in Tychy, was integrated into the company. In 2004, Hager Group acquired Swiss company Weber AG and French manufacturer Atral. In addition to Hager brand security systems, Atral also manufactures products for the brands Diagral, Daitem and Logisty.
In 2006, Hager entered the Brazilian market when it acquired 100 % of the shares in Eletromar. Hager Group opened a plant in Pune, India in 2008 and on 30 September of the same year, the foundation stone for a new Eletromar production site in Brazil was laid. On 1 January 2009, Hager acquired Electraplan Solutions GmbH, and in 2010, Hager acquired Berker, a German manufacturer of switches, whose registered offices were in Schalksmühle and Ottfingen.
In 2012, Hager acquired the German family firm Elcom, a producer of intercoms.
In 2018, it acquired E3/DC GmbH, a German developer of inverters and energy storage systems.
Brands and products
Hager brand offers services for electrical installations in residential, commercial and industrial buildings. In 2009, the previous brands Tehalit, Weber, Lume, Klik, Flash, Polo, Ashley & Rock and Logisty were combined under Hager brand. Alarms and security systems are sold under Daitem and Diagral. Berker manufactures switches and switch systems as part of Hager Group. Bocchiotti/Iboco, the Italian market leader in cable management and room distribution systems, is also part of Hager Group whilst Elcom produces intercom systems for residential and office buildings.
There are four different areas of application for Hager Group’s products and services:
Energy distribution and metering systems, including energy management and VDI concepts for electrical installation
Cable management systems for power and data distribution
Switch ranges and building control
Security systems
Since 2018, Hager Group has been working on electromobility with Audi AG. The aim of the collaboration is to connect the Audi e-Tron model with Hager Group’s Home Energy Management System (HEMS).
Brands
Bocchiotti/Iboco, Italian producer of room distribution systems
Berker, German brand for electrical installation applications, switches and switch systems
Daitem
Diagral
Corporate culture
6% of sales are invested in research and development. In 2019, the company filed around 3,000 patents. The group employs around 800 people in research and development, which mainly focuses on electromobility, intelligent building technology (for smart homes) and energy efficiency. Between October 2010 and June 2014, Hager Group sponsored football club 1. FC Saarbrücken, with a focus on promoting young talent. Since 2017, the group has been supporting the French football club Racing Club Strasbourg Alsace. This sponsorship lasts for three years.
References
Electronics companies of Germany
Security engineering
Security technology
Security equipment manufacturers
Energy technology
Energy engineering
German companies established in 1955
Electronics companies established in 1955
Companies based in Saarland
German brands | Hager Group | [
"Engineering"
] | 1,298 | [
"Systems engineering",
"Security engineering",
"Energy engineering"
] |
67,094,287 | https://en.wikipedia.org/wiki/Dye-ligand%20affinity%20chromatography | Dye-ligand affinity chromatography is one of the affinity chromatography techniques used for purifying proteins from a complex mixture. It works like general chromatography, but uses dyes applied to the support matrix of a column as the stationary phase; because the immobilized dyes allow a range of proteins with similar active sites to bind, the interaction is referred to as pseudo-affinity. Synthetic dyes are used to mimic substrates or cofactors binding to the active sites of proteins, and can be further modified to target more specific proteins. Binding is followed by washing, the process of removing other non-target molecules, and then by elution of the target proteins through changing the pH or manipulating the salt concentration. The column can be reused many times owing to the stability of the immobilized dyes. The technique can be carried out in a conventional packed column or in a high-performance liquid chromatography (HPLC) column.
Discovery
The dye-ligand binding ability was discovered through a blue dye called blue dextran. The dye was used as a void volume (V0) marker for gel filtration columns. It was shown that the dye has the property of binding to certain proteins, such as pyruvate kinase, and eluting with the void volume. Later, it was found that cibacron blue FG3-A, the reactive dye linked to the dextran, is responsible for the interaction with the proteins.
Dye immobilization
The dyes are immobilized on the column matrix effectively, since the dyes are usually linked to a monochlorotriazine or dichlorotriazine ring (triazine dyes). This type of dye works especially well on a support matrix bearing hydroxyl groups. Commonly used support matrices are cross-linked agarose (sepharose), sephadex, polyacrylamide, and silica.
An example of triazine-linkage immobilization is Blue Sepharose, resulting from the monochlorotriazine ring of Cibacron Blue FG3-A covalently coupling with an OH group of sepharose. This reaction forms an ether linkage and releases hydrogen chloride.
C29H20ClN7O11S3 + C24H38O19 → C53H57N7O30S3 + HCl
Cibacron Blue FG3-A + Sepharose → Blue Sepharose + HCl
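The stoichiometry of this immobilization reaction can be checked by counting atoms on each side of the equation. The short script below (an illustrative sketch, not part of the original text) parses the molecular formulas given above and verifies that the equation is balanced:

```python
import re
from collections import Counter

def parse_formula(formula: str) -> Counter:
    """Count atoms in a simple molecular formula like 'C29H20ClN7O11S3'."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side_counts(formulas):
    """Total atom counts over one side of a reaction."""
    total = Counter()
    for f in formulas:
        total += parse_formula(f)
    return total

# Cibacron Blue FG3-A + sepharose (repeat unit) -> Blue Sepharose + HCl
reactants = side_counts(["C29H20ClN7O11S3", "C24H38O19"])
products = side_counts(["C53H57N7O30S3", "HCl"])
```

Comparing the two `Counter` objects confirms that carbon, hydrogen, chlorine, nitrogen, oxygen and sulfur all balance.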
Reactive dyes
The dyes used in this type of chromatography, called reactive dyes, are inexpensive and generally available as they come from the textile industry. They contain chromophores that are often attached to a triazine ring. In the textile industry, reactive dyes are used to dye materials such as cotton, which is cellulose.
Commonly used reactive dyes for chromatography can be classified according to their colour index name or functional group. Note that each company has different trade names and slightly different formulas for the reactive dyes. They are usually available commercially with sepharose as the support matrix, in the form of packed columns.
Blue reactive dyes
Cibacron Blue F3GA
Cibacron Blue F3GA, Procion Blue HB, or Reactive blue 2 is a purinergic receptor antagonist, such as P2Y purinoceptor, and also an ATP receptor channels antagonist. It has a formula of C29H20ClN7O11S3 and a molecular weight of 774.2 g/mol. Cibacron blue is soluble in water and DMSO, however insoluble in ethanol. In water, saturated concentration is reached at 12.92 mM with the help of sonication. Cibacron Blue F3GA has a wide specificity for nucleotide-binding proteins or just a stereoselectivity electrostatic binding. It can be used to purify interferons, dehydrogenases, kinases, and serum albumin. For example, interferon purification from human gingival fibroblast extract using Cibacron Blue F3G-A on poly(2-hydroxyethyl methacrylate), the supporting matrix, in the form of cryogels. It has shown 97.6% purity of interferon.
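The quoted molecular weight can be reproduced from the molecular formula using standard atomic weights. The sketch below (illustrative; the atomic masses are hard-coded as an assumption) computes the molar mass of C29H20ClN7O11S3:

```python
import re

# Standard atomic weights (g/mol), rounded to published values
ATOMIC_MASS = {
    "C": 12.011, "H": 1.008, "Cl": 35.45,
    "N": 14.007, "O": 15.999, "S": 32.06,
}

def molar_mass(formula: str) -> float:
    """Molar mass of a simple molecular formula such as 'C29H20ClN7O11S3'."""
    mass = 0.0
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        mass += ATOMIC_MASS[element] * (int(number) if number else 1)
    return mass

mw = molar_mass("C29H20ClN7O11S3")  # Cibacron Blue F3GA
```

The result is approximately 774.1 g/mol, consistent with the 774.2 g/mol quoted above.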
Blue MX-R
Blue MX-R or Reactive Blue 4 has a formula of C23H14Cl2N6O8S2 and a molecular weight of 637.4 g/mol. Unlike Cibacron Blue F3GA, it contains a dichlorotriazine ring attached to the chromophore. For large-scale protein purification, Blue MX-R can be used to purify proteins such as lactate dehydrogenase (LDH). In fast protein liquid chromatography (FPLC) using Blue MX-R immobilized on poly(glycidyl methacrylate-co-ethylene dimethacrylate) beads, it was seen to separate lysozyme and bovine serum albumin (BSA) and to purify lysozyme from chicken albumin.
Red reactive dyes
Red HE-3B
Red HE-3B or Reactive Red 120 has a formula of C44H30Cl2N14O20S6 and a molecular weight of 1338.1 g/mol, containing two monochlorotriazine rings. It is highly soluble in water. Red HE-3B binds NADP+-dependent dehydrogenases more strongly than NAD+-dependent dehydrogenases; the reverse holds for Cibacron Blue F3G-A. It can be used to purify enterotoxins A, B, and C2 from Staphylococcus aureus using Procion Red HE-3B on sepharose, eluting with 60 mM and 150 mM phosphate.
Yellow reactive dyes
Yellow H-A
Yellow H-A or Reactive Yellow 3 has a formula of C21H17ClN8O7S2 and a molecular weight of 593 g/mol, containing a monochlorotriazine ring. On agarose as supporting matrix, it was seen to purify cholesteryl ester transfer protein.
Brown reactive dyes
Brown MX-5BR
Brown MX-5BR or Reactive Brown 10 has a formula of C40H19Cl4CrN12Na2O12S2 and a molecular weight of 1163.6 g/mol, containing two dichlorotriazine rings. Brown MX-5BR can be used, for example, to purify lysozyme and phosphinothricin acetyltransferase. It has also been shown to elute tryptophanyl-tRNA synthetase using Trp as eluant; however, tryptophanyl-tRNA synthetase and tyrosyl-tRNA synthetase are the only tRNA synthetases that can be eluted using Brown MX-5BR.
References
Chromatography | Dye-ligand affinity chromatography | [
"Chemistry"
] | 1,479 | [
"Chromatography",
"Separation processes"
] |
67,101,100 | https://en.wikipedia.org/wiki/Indaziflam | Indaziflam is a preemergent herbicide especially for grass control in tree and bush crops.
History
In 1991, the Japanese company Idemitsu Kosan filed a patent to 2-amino 6-fluoroalkyl triazine derivatives as herbicides. One of these compounds was subsequently given the ISO common name triaziflam but had limited success as a commercial herbicide. Bayer scientists subsequently investigated this area of chemistry and identified indaziflam as having superior properties, which they patented and developed under the code number BCS-AA10717. The compound was first registered for use in the USA in 2010.
Mechanism of action
Indaziflam is an inhibitor of cellulose biosynthesis. This mechanism of action was theorized to be responsible for indaziflam's effect in 2009 and proven in 2014. The cellulose biosynthesis inhibitors (CBIs) are identified as Class 29 by the Weed Science Society of America/Herbicide Resistance Action Committee.
Resistance
There are no resistant populations known, and none for the broader CBI class (discounting quinclorac).
Brand names
Indaziflam makes up all or part of the active ingredient of several herbicides from Bayer Environmental Science (now owned by Cinven as Envu, per Bayer's and Envu's websites), including Rejuvra, the Esplanade line (sometimes mixed with diquat dibromide and glyphosate isopropylamine), Marengo and Specticle, and from Bayer CropScience (the inventor of the ingredient), such as Alion.
Uses
Indaziflam is approved in the United States for hops, Rubus spp., Coffea spp., bushberries, tropical crops, drupes/stone fruit, and tree nuts. It is used as a preemergent.
References
Herbicides
Triazines
Organofluorides
Amines | Indaziflam | [
"Chemistry",
"Biology"
] | 406 | [
"Herbicides",
"Functional groups",
"Amines",
"Biocides",
"Bases (chemistry)"
] |
67,102,966 | https://en.wikipedia.org/wiki/Proprietary%20drug | Proprietary drugs are chemicals used for medicinal purposes which are formulated or manufactured under a name protected from competition through trademark or patent. The invented drug is usually still considered proprietary even after the patent has expired. When a patent expires, generic drugs may be developed and released legally. Some international and national governmental organizations have set up laws to enforce intellectual property to protect proprietary drugs, but some also highlight the importance of public health over legal regulations. Proprietary drugs affect the world in various aspects including medicine, public health and the economy.
Not all proprietary drugs have generic replacements available. Biologics are often produced by in vivo preparation and direct extraction of substances from living organisms. Pharma is not extensively involved in developing ready-to-sell generic biologics due to the complexity of manufacture and hurdles in the extraction processes. Besides vaccines, these chemicals of endogenous origin are prescribed to patients with severe conditions such as asthma, rheumatoid arthritis, or cancer. Patients taking a particular brand of biologic cannot interchange between one and another, to prevent exposure to more side effects and/or suboptimal treatment. It is believed that generic biopharmaceutical products will not be released in the near future until the technical difficulties are overcome.
The table below shows some examples of pharma and their past/current proprietary medications:
Terminology
Brand name drugs
Broadly defined, these are drugs that are marketed under trade names and have patents; in everyday use the term can be a synonym of proprietary drugs. Strictly speaking, every drug with a trade name is a brand name drug, such as Panadol, a GSK-branded paracetamol.
Generic drugs
Generic drugs are drugs that have the same active ingredient as a patent-expired drug and are virtually bioequivalent to it. The official names are often used to market these drugs, which are then called unbranded generic drugs, such as Panamax, a generic form of paracetamol.
Off-patent drugs
A term specifically used to describe past proprietary drugs by referring to their off-patent status.
Regulations
To support scientific investigation and protect intellectual property, patents are granted to the companies and individuals who invented a drug. Most jurisdictions in the world have established corresponding legal frameworks. Global and regional governmental organizations differ in the advancement of, and approach to, their intellectual property rights protection laws. Below are some examples for comparison:
World Trade Organization (WTO)
TRIPS Agreement
The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement), set up in 1994, suggested a standard on intellectual property rights; proprietary drugs, as pharmaceutical and scientific inventions, are covered by this agreement. Basic principles such as the minimum duration of patents and part of the exclusive rights of patent owners are included by WTO member states in their respective national regulations. Developed countries and developing countries were required to comply with the TRIPS Agreement within 5 and 10 years, respectively, of its taking effect.
Doha Declaration
Trying to alleviate the worldwide divide in accessibility of medical resources, members of the WTO endorsed the Doha Declaration on the TRIPS Agreement and Public Health in 2001. The basis of this Declaration is that "the TRIPS Agreement does not and should not prevent Members from taking measures to protect public health". It allows participating members to ignore the restrictions of a proprietary medicine's patent when they are controlling a significant public health crisis, namely human immunodeficiency virus (HIV), malaria and tuberculosis. Thus, affordable generic medicine can be provided for the populations of developing countries in emergency situations.
United States
In the United States, proprietary drugs are associated with two statuses: patent and exclusivity. Patents are managed by the United States Patent and Trademark Office, granting inventors of new drugs rights for 20 years. A patent is open to all drugs, regardless of research or commercialization status. To enjoy the benefits brought by patenting, pharmaceutical companies are obliged to disclose all research data on the drug to the public for further progress. Exclusivity, given by the U.S. Food and Drug Administration (FDA), means a period of time in which no competitor drugs can be approved. Commercialized and clinically used drugs are the targets of exclusivity. The length of exclusivity depends on the nature of the application and ranges from one to seven years. In practice, exclusivity is granted for proprietary drugs that have been granted patents, but it is not mandatory. Legally, generic counterparts have to wait at least two decades for patent expiration before selling a copy. This system is said to aim for a balance between gaining public access to generic drugs and encouraging drug research and development.
Litigation
Despite the US having a legal system for drug patenting, litigation has taken place. In the past, it was common for drug manufacturers to challenge the validity of patents. In 2018, Mylan attempted to revoke the patent of Symbicort, owned by AstraZeneca, through the court. Now, pharma suing over deliberately infringing generic drugs has become more prominent. AstraZeneca took follow-up action against Mylan for premature submission of an Abbreviated New Drug Application (ANDA) for generic Symbicort and won the lawsuit. The introduction of biopharmaceuticals and the subsequent establishment of new drug laws may also bring more litigation.
India
According to the Patent Law in India, drugs have been patentable since 2005. Registering a drug for patents in India is more restrictive than in developed countries. India was removed from the least developed member states list of the WTO and is therefore no longer eligible for the waiver; it has modified its patent law to bring its intellectual property rights legal system in line with the TRIPS Agreement. An Indian patent lasts for 20 years. To ensure the interest of the public, a compulsory license can be issued by the government if a pharma is suspected of violating public health principles.
Litigation
Introducing new drug patenting regulations after 35 years may well have led to today's disputes, such as the Novartis v. Union of India incident. Generic Gleevec, a formulation that can substitute for Glivec, had been distributed in the local market since 1993. Novartis filed a patent for Glivec in 1997; an exclusive marketing right was also granted to Novartis in 2003, and the application was taken up for examination in 2005. In 2013, the Supreme Court of India upheld the rejection of Novartis's patent application for Glivec, ending the 10-year battle between the proprietary drug tycoon and the local patent law. Since 1993, Novartis had registered patents for Glivec and its active ingredients worldwide without defeat. However, the Supreme Court of India rejected the patent registration of Glivec in 2006 according to the Court's interpretation of the patent law and the TRIPS Agreement, thereby protecting local generic drugs in India. The incident is referred to as a challenge to intellectual property laws.
Patent cliff
Patent cliff refers to a dramatic dip in the revenue of a product upon patent expiry. It is a prominent phenomenon in the proprietary drug industry due to the vast gap in prices between proprietary and generic drugs. Since 2010, numerous pharmaceutical blockbusters have become off-patent. As seen in the figure below, the top five proprietary drugs that went off-patent before 2017 had combined lifetime sales of around US$588.4 billion, enough to surpass the combined GDP of the bottom 5% of countries in 2020.
Figure 1: Five best-selling proprietary drug which lost their patents before 2017
A proprietary drug is a substantial business protected by its respective patent. Such drugs are usually sold at a higher price to compensate for clinical trial costs and sometimes for the manufacture of new technology. For example, a widely used proprietary drug is on average 18 times more expensive than a common generic drug. Lyrica, a recently off-patent painkiller for the nervous system, had sales of US$5 billion in 2019, out of the US$51.8 billion annual sales of the corresponding company.
However, once the proprietary drug becomes off-patent, there will soon be immense competition from the generic drugs produced by business rivals. As cheaper pharmaceutical alternatives are launched, the surge in supply disrupts the market's supply-and-demand balance. The declining dependence on the original proprietary drug causes its sales to decrease. On top of that, the original company usually lowers its prices to remain competitive, resulting in a significant drop in revenue of the proprietary drug.
The graph below shows the yearly revenue of Lipitor, a proprietary drug which lost its patent in 2011. A significant patent cliff occurred from 2011 to 2012 (a 58.8% drop in yearly revenue), most likely due to its newly off-patent status.
Figure 2: The yearly revenue of Lipitor by Pfizer (million US dollars) from 2004 to 2019
Benefits
Promoting influx of income to pharmaceutical research industry
The proprietary drug market is protected by patents. As a result, market exclusivity allows proprietary drugs to be highly profitable and commercially successful. Pharmaceutical research is usually a lengthy, highly demanding, rarely successful, costly and risky investment, and is often regarded as an unattractive venture in commercial terms. However, once a successful experimental drug candidate is registered as a proprietary drug product, the patent legally ensures long-term dominance in an exclusive market free of imitative generic drugs, generating a stable and considerable net income to cover the cost.
The huge earnings of a proprietary drug can circulate back to fund future medical research, providing more resources and manpower for the research and development of other drug candidates, as well as attracting new investment to the pharmaceutical research industry thanks to its exclusive market potential. Together these encourage biopharmaceutical innovation and new medical breakthroughs.
Guiding future medical progress through disclosed product knowledge
Because detailed pharmaceutical formulations must be disclosed in the patent application for a proprietary drug, patent registration promotes the spillover of research efforts across the medical world. To demonstrate the safety and efficacy of a drug candidate, pharmaceutical companies must present clinical trial data and formulation methods in as much detail as possible to prove the candidate's validity to the patent registration committee. Once the patent is granted, these data are later published in the medical literature and the public domain as common knowledge.
Researchers can capitalise on previous successes and build their own projects on top of the existing data. This cooperation helps uncover new drug candidates without repeating earlier work, speeding up future medical advancement.
Criticisms
Hindering of equitable access to medicine
According to the World Health Organization (WHO), equitable access to medicines means that patients can obtain the drugs they need to achieve health at an affordable and reasonable cost. WHO member states are expected to fulfil their moral responsibility to improve the delivery of, and access to, needed drugs.
However, the monopolisation of the market by some expensive proprietary drugs hinders poor patients' access to the best available medications, leading to suboptimal treatment of disease and a lower standard of health for these patients. The phenomenon is especially prominent in less developed countries, which usually have a large proportion of underprivileged citizens.
Some proprietary drugs (mainly speciality proprietary drugs) are criticised for price-gouging commercial tactics. To illustrate, the world's most expensive drug, Zolgensma, costs over US$2.1 million per treatment, which is generally considered unaffordable. Since Zolgensma is the only approved drug for curing spinal muscular atrophy in childhood, patients who cannot afford it will be physically disabled for the rest of their lives, creating inequity among patients of varying financial capacity.
Abuse of the patent extension system
According to the TRIPS Agreement, the patent term of a proprietary drug usually lasts 20 years from the filing date. After that, approved generic drugs can legally enter the market and compete fairly.
However, to extend their dominance of the market, pharmaceutical manufacturers (especially big pharma) may apply for patent extensions or even new patent registrations on various grounds, including modified formulations, new dosage forms, or manoeuvring of the legal system. To illustrate, in 2018 alone AbbVie, a pharmaceutical giant, filed 247 proprietary drug patent extension applications in the USA seeking to extend its exclusivity by up to 39 years; 137 of these applications succeeded in extending a patent.
Abuse of the patent extension system leads to patent terms much longer than those stated in both local regulations and the TRIPS Agreement, providing a long competition-free period for the proprietary drug. It creates an unfair competitive environment in the pharmaceutical market: because generic drug companies are excluded from that particular market, they cannot release new pharmaceutical products for public use in the same field. The result is enduring monopolisation of the proprietary drug market by the big pharma companies already stockpiling proprietary drugs.
See also
Doha Declaration
Medicines Patent Pool
Generic drug
Generic brand
Intellectual property
Novartis v. Union of India & Others
Patent
Patent Cliff
Pharmaceutical Industry
Trademark
TRIPS agreement
References
Drugs | Proprietary drug | [
"Chemistry"
] | 2,692 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
67,103,631 | https://en.wikipedia.org/wiki/Kamal%20Benslama | Kamal Benslama is a Moroccan-Swiss experimental particle physicist. He is a professor of physics at Drew University, a visiting experimental scientist at Fermilab, and a guest scientist at Brookhaven National Laboratory. He worked on the ATLAS experiment, at the Large Hadron Collider (LHC) at CERN in Switzerland. Currently, he is a member of the Mu2e experiment at Fermilab.
Biography
Originally from Morocco, Benslama studied physics at the University of Geneva, where he obtained bachelor's and master's degrees in high-energy physics. In 1998, he completed a PhD at the department of High Energy Physics at the University of Lausanne.
After a short post-doc at the University of Lausanne, Benslama moved to North America in 1999. He first worked as a post-doc on the CLEO experiment at Cornell University in the US, and while at Cornell he collaborated with Syracuse University and the University of Illinois Urbana-Champaign. He then became a research associate at the University of Montreal before becoming a post-doctoral research scientist at Columbia University in New York and an associate scientist on the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. From 2006 to 2012, he was a professor of physics at the University of Regina in Canada. During this time, Benslama founded and led an international research group in experimental high-energy physics. He worked on the ATLAS experiment at CERN, where he was a principal investigator and a team leader. He was also a member of the international ATLAS collaboration board and of the Liquid Argon representative board.
Benslama started his research activities at CERN in 1992. He first worked on ATLAS, then on NOMAD (Neutrino Oscillation search with a MAgnetic Detector), which was designed to search for neutrino oscillation. His thesis covered the construction, installation and simulation of a preshower particle detector, as well as data analysis using data from the NOMAD experiment.
Benslama contributed to many aspects of the ATLAS experiment. He worked on a readout system for a silicon detector for the ATLAS experiment, then on the Liquid Argon Calorimeter, the High Level Trigger, and Data Quality and Monitoring. He also led several searches for physics beyond the Standard Model at the LHC, in particular searches for doubly charged Higgs bosons, extra dimensions and leptoquarks. He was heavily involved in the exotics physics program at the LHC.
Before joining the faculty of Drew University, Benslama was a visiting professor at Loyola University Maryland and later a Senior Lecturer and Research Professor at Towson University.
Private life
Kamal Benslama has three children and lives in New Jersey.
Selected work
Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC
Prospects for the search for a doubly charged Higgs in the left–right symmetric model with ATLAS - G. Azuelos, K. Benslama, J. Ferland, 10 March 2005, J.Phys.G32:73-92,2006
Exploring Little Higgs Models with ATLAS at the LHC - Azuelos, G; Benslama, K. Benslama et al. - Eur. Phys. J., C 39 (2005) 13-24
Design and implementation of the Front End Board for the readout of the ATLAS liquid argon calorimeters - N.~J.~Buchanan et al. - JINST 3, P03004 (2008)
Search for pair production of first or second generation leptoquarks in proton-proton collisions at √s=7 TeV using the ATLAS detector at the LHC
Measurement of the top quark-pair production cross section with ATLAS in pp collisions at sqrt(s)=7 TeV
Measurement of the W → ℓν and Z/γ* → ℓℓ production cross sections in proton-proton collisions at sqrt(s)=7TeV with the ATLAS detector
Electron reconstruction and identification efficiency measurements with the ATLAS detector using the 2011 LHC proton–proton collision data
Measurements of charmless hadronic two-body B meson decays and the ratio B(B to DK)/B(B to DPi)
Liste de publications et citations
References
External links
FermiLab
The ATLAS Experiment
Large Hadron Collider at CERN
20th-century births
20th-century Swiss physicists
21st-century Swiss physicists
Moroccan physicists
Particle physicists
Living people
Experimental physicists
University of Lausanne alumni
Year of birth missing (living people)
University of Geneva alumni
People associated with CERN
Swiss people of Moroccan descent
Swiss expatriates in the United States
Cornell University staff
Columbia University staff
Academic staff of the University of Regina
Drew University faculty | Kamal Benslama | [
"Physics"
] | 989 | [
"Experimental physicists",
"Particle physicists",
"Particle physics",
"Experimental physics"
] |
57,569,740 | https://en.wikipedia.org/wiki/Open%20microfluidics | Microfluidics refers to the flow of fluid in channels or networks with at least one dimension on the micron scale. In open microfluidics, also referred to as open surface microfluidics or open-space microfluidics, at least one boundary confining the fluid flow of a system is removed, exposing the fluid to air or another interface such as a second fluid.
Types of open microfluidics
Open microfluidics can be categorized into various subsets. Some examples of these subsets include open-channel microfluidics, paper-based, and thread-based microfluidics.
Open-channel microfluidics
In open-channel microfluidics, a surface-tension-driven capillary flow occurs, referred to as spontaneous capillary flow (SCF). SCF occurs when the pressure at the advancing meniscus is negative. In terms of the channel geometry and the fluid's contact angle, SCF has been shown to occur when the following condition holds:

pf / pw < cos θ

where pf is the free perimeter of the channel cross-section (i.e., the interface not in contact with the channel wall), pw is the wetted perimeter (i.e., the walls in contact with the fluid), and θ is the contact angle of the fluid on the material of the device.
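As a worked illustration of the SCF condition pf/pw < cos θ (a sketch only – the channel dimensions and contact angles below are arbitrary assumptions, not values from any particular device): for a rectangular channel with an open top, the free perimeter is the top width and the wetted perimeter is the floor plus the two side walls.

```python
import math

def scf_occurs(p_free, p_wetted, contact_angle_deg):
    """Spontaneous capillary flow condition: pf / pw < cos(theta)."""
    return p_free / p_wetted < math.cos(math.radians(contact_angle_deg))

# Rectangular open channel (open top): free perimeter = top width,
# wetted perimeter = floor + two side walls (dimensions are hypothetical).
w, h = 100e-6, 50e-6           # width and depth in metres
p_free = w                     # 100 µm
p_wetted = w + 2 * h           # 200 µm, so pf/pw = 0.5

print(scf_occurs(p_free, p_wetted, 30))   # hydrophilic wall -> True
print(scf_occurs(p_free, p_wetted, 80))   # nearly non-wetting -> False
```

With this geometry the flow is spontaneous only while cos θ exceeds 0.5, i.e. for contact angles below 60°.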
Paper-based microfluidics
Paper-based microfluidics utilizes the wicking ability of paper for functional readouts. Paper-based microfluidics is an attractive method because paper is cheap, easily accessible, and has a low environmental impact. Paper is also versatile because it is available in various thicknesses and pore sizes. Coatings such as wax have been used to guide flow in paper microfluidics. In some cases, dissolvable barriers have been used to create boundaries on the paper and control the fluid flow. The application of paper as a diagnostic tool has shown to be powerful because it has successfully been used to detect glucose levels, bacteria, viruses, and other components in whole blood. Cell culture methods within paper have also been developed. Lateral flow immunoassays, such as those used in pregnancy tests, are one example of the application of paper for point of care or home-based diagnostics. Disadvantages include difficulty of fluid retention and high limits of detection.
Thread-based microfluidics
Thread-based microfluidics, an offshoot from paper-based microfluidics, utilizes the same capillary based wicking capabilities. Common thread materials include nitrocellulose, rayon, nylon, hemp, wool, polyester, and silk. Threads are versatile because they can be woven to form specific patterns. Additionally, two or more threads can converge together in a knot bringing two separate ‘streams’ of fluid together as a reagent mixing method. Threads are also relatively strong and difficult to break from handling which makes them stable over time and easy to transport. Thread-based microfluidics has been applied to 3D tissue engineering and analyte analysis.
Capillary filaments in open microfluidics
Open capillary microfluidics are channels that expose fluids to open air by excluding the ceiling and/or floor of the channel. Rather than relying on pumps or syringes to maintain flow, open capillary microfluidics uses surface tension to drive the flow. Eliminating the infusion source reduces the size of the device and its associated apparatus, along with other aspects that could obstruct its use. The dynamics of capillary-driven flow in open microfluidics depend strongly on two channel geometries, commonly known as rectangular U-grooves and triangular V-grooves. The channel geometry dictates the flow along the interior walls, which are fabricated with various ever-evolving processes.
Capillary filaments in U-groove
Rectangular open-surface U-grooves are the easiest type of open microfluidic channel to fabricate. This design can maintain velocities of the same order of magnitude as V-grooves. Channels are made of glass or high-clarity glass substitutes such as polymethyl methacrylate (PMMA), polycarbonate (PC), or cyclic olefin copolymer (COC). To eliminate the remaining resistance after etching, channels are given a hydrophilic treatment using oxygen plasma or deep reactive-ion etching (DRIE).
Capillary filaments in V-groove
The V-groove, unlike the U-groove, allows a range of velocities depending on the groove angle. V-grooves with a sharp groove angle produce interface curvature at the corners, explained by the reduced Concus–Finn conditions. In a perfect inner corner of a V-groove, the filament can advance indefinitely along the groove, allowing a capillary filament to form depending on the wetting conditions. The width of the groove plays an important role in controlling the fluid flow: the narrower the V-groove, the better the capillary flow, even for highly viscous liquids such as blood; this effect has been used to produce an autonomous assay. Fabricating a V-groove is more difficult than a U-groove and poses a higher risk of faulty construction, since the corner has to be tightly sealed.
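The Concus–Finn corner condition referred to above is commonly stated (the form used here is an assumption, and the angles are illustrative) as: a filament advances along an inner corner of half-angle α when the contact angle θ satisfies θ < 90° − α. A minimal sketch:

```python
def filament_advances(contact_angle_deg, half_angle_deg):
    """Concus-Finn corner condition (one common form): a capillary
    filament advances along the corner when theta < 90 - alpha."""
    return contact_angle_deg < 90 - half_angle_deg

# A sharp V-groove (half-angle 20 deg) wicks a fluid with a 30 deg
# contact angle; a blunt groove (half-angle 70 deg) does not.
print(filament_advances(30, 20))  # True
print(filament_advances(30, 70))  # False
```

This matches the text's observation that sharper (narrower) grooves wick more readily: decreasing α widens the range of contact angles for which the filament advances.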
Advantages
One of the main advantages of open microfluidics is ease of accessibility which enables intervention (i.e., for adding or removing reagents) to the flowing liquid in the system. Open microfluidics also allows simplicity of fabrication thus eliminating the need to bond surfaces. When one of the boundaries of a system is removed, a larger liquid-gas interface results, which enables liquid-gas reactions. Open microfluidic devices enable better optical transparency because at least one side of the system is not covered by the material which can reduce autofluorescence during imaging. Further, open systems minimize and sometimes eliminate bubble formation, a common problem in closed systems.
In closed system microfluidics, the flow in the channels is driven by pressure via pumps (syringe pumps), valves (trigger valves), or electrical field. An example of one of these methods for achieving low flow rates using temperature-controlled evaporation has been described for an open microfluidics system, allowing for long incubation hours for biological applications and requiring small sample volumes. Open system microfluidics enable surface-tension driven flow in channels thereby eliminating the need for external pumping methods. For example, some open microfluidic devices consist of a reservoir port and pumping port that can be filled with fluid using a pipette. Eliminating external pumping requirements lowers cost and enables device use in all laboratories with pipettes.
Materials Solutions
Fortunately, while many problems exist with PDMS, many solutions have also been developed. To address the problematic hydrophobicity and porosity that PDMS exhibits, researchers have started to use coatings such as BSA (bovine serum albumin) or charged molecules to create a layer between the native PDMS and the cells. Other researchers have successfully employed several of the Pluronic surfactants – tri-block copolymers with two hydrophilic blocks surrounding a hydrophobic core, often used to increase the hydrophilic nature of numerous substrates – and even borosilicate glass coatings to address the hydrophobicity problem. Notably, treatment with either of the former two compounds can prevent non-specific protein adsorption, as they (and other coatings) form stable adsorption interactions with the PDMS, which aids in reducing PDMS interference with cell culture media. These compounds and materials can affect surface properties and should be carefully tested to note their impact on cultured cells. Researchers developed 3D scaffolding systems that mimic in vivo environments so that more cells and cell types can grow, addressing the problem that not all cell types can grow on PDMS. As with coating the PDMS, 3D scaffolding systems employ alternative materials such as ECM (extracellular matrix) proteins, so that cells, rather than failing to bind the native PDMS, are more likely to bind to the proteins. Lastly, researchers have addressed the permeability of PDMS to water vapor with some elegant solutions: for example, a portion of the microfluidic system can be designated for humidification and cast in PDMS or another material such as glass.
Disadvantages
Some drawbacks of open microfluidics include evaporation, contamination, and limited flow rate. Open systems are susceptible to evaporation which can greatly affect readouts when fluid volumes are on the microscale. Additionally, due to the nature of open systems, they are more susceptible to contamination than closed systems. Cell culture and other methods where contamination or small particulates are a concern must be carefully performed to prevent contamination. Lastly, open systems have a limited flow rate because induced pressures cannot be used to drive flow.
Materials
Polydimethylsiloxane (PDMS) is an ideal material for fabricating microfluidic devices for cell culture applications due to several advantageous properties such as low processing costs, ease of manufacture, rapid prototyping, ease of surface modification, and cellular non-toxicity. While there are several benefits that arise from using native PDMS, there are also some drawbacks that researchers must account for in their experiments. First, PDMS is both hydrophobic and porous, meaning that small molecules or other hydrophobic molecules can be adsorbed onto it. Such molecules include anything from methyl- or alkyl-containing molecules to certain dyes like Nile Red. Researchers identified in 2008 that plasma could be used to reduce the hydrophobicity of PDMS, though the hydrophobicity returned about two weeks after treatment. Some researchers postulate that integrating removable polycaprolactone (PCL) fiber-based electrospun scaffolds under NaOH treatment enhances hydrophilicity and mitigates hydrophobicity, while promoting more efficient cell communication. Another problem that arises with PDMS is that it can interfere with the media that circulates in the channels. Incomplete curing of PDMS channels can lead to PDMS leaching into the media and, even when complete curing takes place, components of the media can still unintentionally attach to free hydrophobic sites on the PDMS walls. Yet another problem arises with the gas permeability of PDMS. Most researchers take advantage of this to oxygenate both the PDMS and the circulating media, but this trait also makes the microfluidic system especially vulnerable to water vapor loss. Lastly, not all cell types can grow, or will grow at the same levels, on native PDMS. For instance, high levels of rapid cell death in two fibroblast types grown on native PDMS were observed as early as 1994, which posed problems for the widespread use of PDMS in microfluidic cell culture.
Applications
Like many microfluidic technologies, open system microfluidics has been applied to nanotechnology, biotechnology, fuel cells, and point of care (POC) testing. For cell-based studies, open-channel microfluidic devices enable access to cells for single cell probing within the channel. Other applications include capillary gel electrophoresis, water-in-oil emulsions, and biosensors for POC systems. Suspended microfluidic devices, open microfluidic devices where the floor of the device is removed, have been used to study cellular diffusion and migration of cancer cells. Suspended and rail-based microfluidics have been used for micropatterning and studying cell communication.
Materials Solutions Applications
Applications of these solutions are still in use today, as seen by the following examples. In 2014, Lei et al. was testing the impedance of human oral cancer cells in the presence of cisplatin, a known anti-cancer drug, by molding the cells into a 3D scaffolding. The authors had noted from previous studies that cellular impedance could be correlated to cellular viability and proliferation in 2D cell culture and hoped to translate that correlation into 3D cell culture. Using agarose to create the 3D scaffolding, the researchers measured the growth and proliferation of human oral cancer cells in the presence and absence of cisplatin using fluorescent DNA assays and observed that there was indeed a correlation like that observed in 2D model. Not only did this prove that principles from 2D cell culture could be translated to 3D open microfluidic cell culture, but it also potentially lays the foundation for a more personalized treatment plan for cancer patients. They postulated that future developments could transform this method into an assay that could test patient cancer cell response to known anti-cancer drugs.
Another group used a similar method, but instead of creating a 3D scaffolding, they employed several different PDMS coatings to determine the best option for studying cancer stem cells. The group looked at BSA and ECM proteins and found that, while their experimental evidence supported BSA as the best coating for circulating cancer cells (CSC's), phenotypic changes did occur to the cells (namely, elongation), but did not impact the cells’ ability to perform normal cell functions. A key caveat to note here is that BSA is not a blanket solution that works for every cell type- different coatings work better or worse for certain cell types and these differences should be considered when developing an experiment.
References
Cell culture techniques
Microfluidics | Open microfluidics | [
"Chemistry",
"Materials_science",
"Biology"
] | 2,795 | [
"Biochemistry methods",
"Microfluidics",
"Cell culture techniques",
"Microtechnology"
] |
57,575,956 | https://en.wikipedia.org/wiki/Mycorrhizal%20bioremediation | Mycorrhizal amelioration of heavy metals or pollutants is a process by which mycorrhizal fungi in a mutualistic relationship with plants can sequester toxic compounds from the environment, as a form of bioremediation.
Mycorrhizae-plant partners
These symbiotic relationships are generally between plants and arbuscular mycorrhizae in the Glomeromycota clade of fungi. Other types of fungi have been documented. For example, there is a case where zinc phytoextraction from willows was increased after the Basidiomycete fungus Paxillus involutus was inoculated in the soil.
Mechanisms of the symbiosis
The mycorrhizae allow the plants to increase their biomass, which increases their tolerance to heavy metals. The fungi also stimulate the uptake of heavy metals (such as manganese and cadmium) with the enzymes and organic acids (such as acetic acid and malic acid) that they excrete into their surroundings in order to digest them.
Mycorrhizae on plant toleration
The fungi can prevent heavy metals from traveling past the roots of the plant. They can also store heavy metals in their vacuoles. However, in some cases, the fungi do not decrease the uptake of heavy metals by plants but increase their tolerance. In some cases, this is done by increasing the overall biomass of the plant so that there is a lower concentration of metals. They can also modify the response of the plant to heavy metals at the level of plant transcription and translation.
Colonization of barren soil
Mycorrhizae remain functional underground following extreme conditions, such as a forest fire. Researchers believe that this allows them to obtain minerals and nutrients that are released during a fire before they are leached out of the soil. This likely increases the ability to recover quickly after forest fires.
Serpentine soils are in part characterized by a low calcium-to-magnesium ratio. Studies indicate that arbuscular mycorrhiza helps plants increase their magnesium uptake in soils with low amounts of magnesium. However, plants in serpentine soils inoculated with fungus either showed no effect on magnesium concentration or decreased magnesium uptake.
Resistance to toxicity
Studies show that mycorrhizal symbionts of poplar seedlings are capable of preventing heavy metals reaching vulnerable parts of the plant by keeping the toxins in the rhizosphere. Another study demonstrates that Arctostaphylos uva-ursi plants in symbiotic relationships were more resistant to toxins because the fungi helped the plants grow below toxic layers of soil.
Application in bioremediation
In China's provinces of Guizhou, Yunnan and Guangxi, rocky desertification is expanding and is not well controlled. This area is characterized by soil depletion, soil erosion and droughts. It is very difficult for plants to grow in this region, and it is mostly filled with drought-resistant plants, lithophytes and calciphilopteris plants. Morus alba, commonly known as a mulberry, is a drought-resistant tree that can tolerate barren soils. It has been found that mulberry inoculated with arbuscular mycorrhiza has increased survivability in karst desert areas and, therefore, an increased rate of soil improvement and reduced erosion.
In 1993, artist Mel Chin collaborated with USDA agronomist Dr. Rufus Chaney in an effort to detoxify Pigs Eye Landfill, a superfund site in Saint Paul, Minnesota. The team planted Thlaspi, which had been selected for increased uptake and sequestration of heavy metals. Analysis showed elevated cadmium concentrations in Thlaspi biomass. It has been found that Thlaspi has a significant arbuscular mycorrhiza association.
Slovakia has many heavy metal mines, which have caused significant regional soil contamination. Samples of Thlaspi harvested in Slovakia from contaminated soils near a lead mine showed increased levels of cadmium, lead, and zinc. Furthermore, Thlaspi growing in contaminated regions had higher rates of certain arbuscular mycorrhizal fungi when compared to non-contaminated Thlaspi. Since manual clean-up is usually inefficient and expensive, mycorrhiza colonized Thlaspi may be useful in bioremediation efforts.
See also
Bioremediation
Mycoremediation
Phytoremediation
References
Bioremediation | Mycorrhizal bioremediation | [
"Chemistry",
"Biology",
"Environmental_science"
] | 920 | [
"Biodegradation",
"Ecological techniques",
"Environmental soil science",
"Bioremediation"
] |
65,631,074 | https://en.wikipedia.org/wiki/Jamulus | Jamulus is open source (GPL) networked music performance software that enables live rehearsing, jamming and performing with musicians located anywhere on the internet. Jamulus is written by Volker Fischer and contributors using C++. The Software is based on the Qt framework and uses the OPUS audio codec. It was known as "llcon" until 2013.
One of the problems with playing music over the internet in real time is latency – the time lag that occurs while (compressed) audio streams travel to and from each musician. Although the precedence effect means that small delays (up to around 40 ms) can be perceived as synchronous, longer delays make it practically impossible to play live together. A further problem is jitter, a form of packet delay caused by changes in latency over time, which results in choppy or distorted sound. Long delays can even lead to packet loss (perceived as a 'blackout'). These can be alleviated by delay buffers or jitter buffers (both of which are present in Jamulus), but these buffers then add to the overall round-trip delay, so they need to be balanced.
Popular video conferencing software such as Zoom or Teams is unsuited to this task as the latency can be much higher (Zoom recommends "a latency of 150ms or less" and jitter of "40ms or less", and in some 2020 tests was shown to have an average latency of 135 ms; the "Audio poor quality metrics" for Teams include having "Round-trip time >500 ms" and "Jitter >30 ms"). In addition, most such software is optimized for speech rather than music, so sustained musical notes can be misidentified as background noise and filtered out (although this can be alleviated to an extent via settings such as "Enable Original Sound"). Conferencing software is also often designed for one person to be heard at a time (the speaker gets 'focus'), to stop people talking over each other, but this makes playing music together impossible. In addition, conferencing software does not normally allow detailed setting of individual audio streams' volume or panning on the user side, both of which are integral features of Jamulus.
To reduce latency as much as possible, Jamulus makes use of compressed audio and the UDP protocol to transmit audio data. Total latency is composed of:
network latency due to delays within the network – every 300 km adds at least 1 ms of extra latency, since the speed of light limits data transport on the internet;
conversion latency - if analog-to-digital conversion or digital-to-analog conversion is not handled by special hardware, these conversions will add additional latency;
audio latency from sound traveling through air, if the microphone and/or loudspeakers are not in immediate proximity. Every meter of distance adds around 3 ms delay due to the limitation of the speed of sound.
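The contributions above can be combined into a rough latency budget. This sketch uses only the rules of thumb stated in the text (at least 1 ms per 300 km of network path, about 3 ms per metre of air); the jitter-buffer and conversion values are illustrative assumptions, not measurements:

```python
def one_way_latency_ms(distance_km, air_path_m=0.0,
                       buffer_ms=0.0, conversion_ms=0.0):
    """Rough one-way latency estimate: >= 1 ms per 300 km of network
    path, plus ~3 ms per metre of sound travelling through air, plus
    any buffering and A/D-D/A conversion delays."""
    network = distance_km / 300.0   # ms, lower bound from the text
    air = 3.0 * air_path_m          # ms
    return network + air + buffer_ms + conversion_ms

# A musician 600 km from the server, 2 m from the loudspeakers, with a
# hypothetical 5 ms jitter buffer and 2 ms conversion delay each way:
round_trip = 2 * one_way_latency_ms(600, air_path_m=2,
                                    buffer_ms=5, conversion_ms=2)
print(round_trip)  # 30.0 ms, under the ~40 ms precedence-effect limit
```

The budget makes clear why every component matters: at this distance the network itself contributes only 4 ms of the 30 ms round trip, with the rest coming from air travel, buffering and conversion.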
Jamulus is client-server based; each client transmits its own compressed audio to a server on the internet. The server mixes the (decompressed) audio stream for each user separately and re-transmits the individual compressed mix to each client. Each client has its own mixing console which controls its mix on the server.
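The per-client mixing model can be sketched as follows (an illustrative model only, not Jamulus's actual implementation): the server holds one decoded sample stream per client plus each client's personal fader gains, and returns a different weighted sum to each client.

```python
def mix_for_client(streams, gains):
    """Return one client's personal mix: the sum of every client's
    decoded sample block, weighted by that client's fader gains."""
    n_samples = len(streams[0])
    return [sum(g * s[k] for g, s in zip(gains, streams))
            for k in range(n_samples)]

# Three clients, two samples each (values are illustrative).
streams = [[1.0, 0.5],    # client A
           [0.25, 0.25],  # client B
           [0.0, 1.0]]    # client C

# Client A's mixer: mute itself, B at full volume, C at half volume.
gains_for_a = [0.0, 1.0, 0.5]
print(mix_for_client(streams, gains_for_a))  # [0.25, 0.75]
```

Because the server computes this sum separately for each client, every musician hears their own balance – the feature the text describes as each client having "its own mixing console" controlling its mix on the server.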
Servers can be either public or private (termed "Registered" and "Unregistered", since Jamulus has no built-in user authentication mechanism), the former being listed by "directories" from which users can choose a server with the lowest latency for them.
Usage
As early as 2018, Jamulus was attracting attention as a way for classical ensembles such as string quartets to rehearse at a distance, but its usage increased dramatically in 2020 due to the COVID-19 pandemic. In April 2020 it was being downloaded two thousand times per day, with the trend increasing. It was elected SourceForge 'Project of the Month' in June 2020. Jamulus Storband, Sweden's first "virtual big band", with over 20 members, also started that month. Many changes were later made to support larger groups, such as choirs with as many as 98 members, as well as WorldJam, an initiative allowing musicians from all over the world to play together on a regular basis.
Having a synchronized metronome for the participants of a session can be key to helping musicians keep the pace of the song and stay in sync with each other. Numerous online metronomes are available, or other open-source tools may be used: as one example, Sychronome uses NTP (Network Time Protocol) with a network time server to sync metronomes for each Jamulus client via smartphones.
See also
LoLa
JamKazam
Ninjam / Ninbot
SonoBus
HPSJam
Koord
Comparison of Remote Music Performance Software
References
Audio software
2006 software
Music software
Audio software with JACK support | Jamulus | [
"Engineering"
] | 1,012 | [
"Audio engineering",
"Audio software"
] |
65,639,409 | https://en.wikipedia.org/wiki/Bendix%20Electrojector | The Bendix Electrojector is an electronically controlled manifold injection (EFI) system developed and made by Bendix Corporation. In 1957, American Motors (AMC) offered the Electrojector as an option in some of their cars; Chrysler followed in 1958. However, it proved to be an unreliable system that was soon replaced by conventional carburetors. The Electrojector patents were then sold to German car component supplier Bosch, who developed the Electrojector into a functioning system, the Bosch D-Jetronic, introduced in 1967.
Description
The Electrojector is an electronically controlled multi-point injection system with an analogue engine control unit, the so-called "modulator", that uses the intake manifold vacuum and the engine speed for metering the right amount of fuel. The fuel is injected intermittently at a constant pressure. The injectors are spring-loaded active injectors, actuated by a modulator-controlled electromagnet. Pulse-width modulation is used to change the amount of injected fuel: since the injection pressure is constant, the fuel amount can only be changed by increasing or decreasing the injection pulse duration. The modulator receives the injection pulse from an injection pulse generator that rotates in sync with the ignition distributor. The modulator converts the injection pulse into a correct injection signal for each fuel injector, primarily by using the intake manifold and crankshaft speed sensor signals. It uses analogue transistor technology (i.e. no microprocessor) to do so. The system also supports setting the correct idle speed, mixture enrichment, and adjustment for coolant temperature using additional resistors in the modulator.
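The metering principle — constant pressure, with fuel quantity set purely by pulse duration — can be illustrated with a small sketch; the flow rate and the load-scaling rule below are hypothetical illustrations, not Bendix calibration data:

```python
def injected_fuel_mg(pulse_ms: float, flow_mg_per_ms: float) -> float:
    """At constant rail pressure the injector flow rate is fixed, so
    fuel quantity depends only on how long the valve is held open."""
    return pulse_ms * flow_mg_per_ms

def pulse_width_ms(base_ms: float, manifold_load: float, enrich: float = 1.0) -> float:
    """Modulator-style pulse width: a base duration scaled by intake
    manifold load (0..1) and an enrichment factor (cold start, etc.)."""
    return base_ms * manifold_load * enrich

# Doubling the pulse width doubles the fuel, since pressure is constant.
pw = pulse_width_ms(base_ms=8.0, manifold_load=0.5)
assert injected_fuel_mg(2 * pw, 2.5) == 2 * injected_fuel_mg(pw, 2.5)
```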
History
The Electrojector was first offered by American Motors Corporation (AMC) in 1957. The Rambler Rebel was used to promote AMC's new engine. The Electrojector-injected engine was offered as an option; it produced peak torque 500 rpm lower than the equivalent carburetor engine. The cost of the EFI option was US$395 and it was available on 15 June 1957. According to AMC, the price would be significantly less than Chevrolet's mechanical fuel injection option. Initial problems with the Electrojector meant only pre-production cars had it installed, so very few cars were sold and none were made available to the public. The EFI system in the Rambler worked well in warm weather, but was difficult to start in cooler temperatures.
Chrysler offered Electrojector on the 1958 Chrysler 300D, DeSoto Adventurer, Dodge D-500, and Plymouth Fury. The early electronic components were not reliable in an underhood environment and were not easily modified as engine control requirements advanced. Most of the 35 vehicles originally equipped with Electrojector were retrofitted with 4-barrel carburetors. The Electrojector patents were subsequently sold to Bosch.
Bosch developed their D-Jetronic (D for Druckfühlergesteuert, German for "pressure-sensor-controlled"), from the Electrojector, which was first used on the VW 1600TL/E in 1967. This was a speed/density system, using engine speed and intake manifold air density to calculate "air mass" flow rate and thus fuel requirements. This system was adopted by VW, Mercedes-Benz, Porsche, Citroën, Saab, and Volvo. Lucas licensed the system for production in Jaguar cars, initially in D-Jetronic form, before switching to L-Jetronic in 1978 on the XK6 engine.
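A speed/density computation of this kind can be sketched with the ideal-gas law; the displacement, volumetric efficiency, and temperature figures below are illustrative assumptions, not Bosch values:

```python
R_AIR = 287.05      # J/(kg*K), specific gas constant of dry air

def air_mass_per_intake_stroke(map_pa, temp_k, cyl_disp_m3, vol_eff):
    """Speed/density estimate: air mass drawn per cylinder filling,
    from manifold absolute pressure (MAP) and charge temperature."""
    density = map_pa / (R_AIR * temp_k)        # kg/m^3, ideal gas law
    return density * cyl_disp_m3 * vol_eff     # kg per intake stroke

def fuel_mass(air_mass_kg, afr=14.7):
    """Fuel for a target air-fuel ratio (stoichiometric petrol)."""
    return air_mass_kg / afr

# Example: 400 cc cylinder, 80 kPa MAP, 40 C charge, 85% volumetric eff.
m_air = air_mass_per_intake_stroke(80_000, 313.15, 400e-6, 0.85)
m_fuel = fuel_mass(m_air)   # roughly 0.02 g of fuel per intake stroke
```

Multiplying the per-stroke air mass by the intake-stroke rate (rpm/2 for a four-stroke engine) gives the "air mass flow rate" mentioned above.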
References
Fuel injection systems
Embedded systems
Power control
Engine technology
Automotive technology tradenames
Bendix Corporation | Bendix Electrojector | [
"Physics",
"Technology",
"Engineering"
] | 735 | [
"Physical quantities",
"Computer engineering",
"Engines",
"Embedded systems",
"Computer systems",
"Engine technology",
"Power (physics)",
"Computer science",
"Power control"
] |
47,316,194 | https://en.wikipedia.org/wiki/Jaw-Shen%20Tsai | Jaw-Shen Tsai ( Tsai Jaw-Shen, born February 8, 1952, in Taipei, Taiwan) is a Taiwanese physicist. He is a professor at the Tokyo University of Science and a team leader of the Superconducting Quantum Simulation Research Team at the Center for Emergent Matter Science (CEMS) within RIKEN. He has contributed to the area of condensed matter physics in both its fundamental physical aspects and its technological applications. He has recently been working on experiments connected to quantum coherence in Josephson systems. In February 2014, he retired from NEC Corporation, after 31 years of employment. He is a fellow of the American Physical Society as well as the Japan Society of Applied Physics.
Education and Work
Jaw-Shen Tsai obtained a Bachelor of Arts degree in Physics (1975) at University of California at Berkeley and a Ph.D. (1983) at the State University of New York at Stony Brook.
He has held the following positions:
1983 Research Scientist, Microelectronics Research Laboratories, NEC
2001 Fellow, Nano Electronics Research Laboratories, NEC
2001 Team Leader, Macroscopic Quantum Coherence Team, RIKEN
2012 Group Director, Single Quantum Dynamics Research Group, RIKEN
2013 Team Leader, Macroscopic Quantum Coherence Research Team, Quantum Information Electronics Division, RIKEN Center for Emergent Matter Science
2014 Team Leader, Superconducting Quantum Simulation Research Team, Quantum Information Electronics Division, RIKEN Center for Emergent Matter Science (-present)
2015 Professor, Tokyo University of Science (-present)
Honors and awards
2000 Fellow, American Physical Society
2004 Nishina Memorial Prize
2007 Honorary Professor, National Chiao Tung University
2008 Simon Memorial Prize (with Yasunobu Nakamura)
2010 Fellow, Japan Society of Applied Physics
2013 Quantum Innovator Award
2014 The 11th (with Yasunobu Nakamura)
2018 Medal with Purple Ribbon
2021 Asahi Prize (with Yasunobu Nakamura)
References
External links
Center for Emergent Matter Science at RIKEN http://www.riken.jp/en/research/labs/cems/
Quantum Cybernetics at RIKEN http://www.riken.jp/Qcybernetics/en/1_overview/index.html
Superconducting Quantum Computing at FIRST, National Institute of Informatics http://www.nii.ac.jp/qis/first-quantum/e/subgroups/superconductingQcom/researcher.html
Department of Physics at Tokyo University of Science http://www.rs.tus.ac.jp/tsai/
1952 births
20th-century Taiwanese physicists
Quantum physicists
Living people
NEC people
Riken personnel
Stony Brook University alumni
Academic staff of Tokyo University of Science
UC Berkeley College of Letters and Science alumni
Taiwanese expatriates in Japan
Scientists from Taipei
Fellows of the American Physical Society
21st-century Taiwanese physicists
Taiwanese expatriates in the United States
Recipients of the Medal with Purple Ribbon
Foreign educators in Japan | Jaw-Shen Tsai | [
"Physics"
] | 608 | [
"Quantum physicists",
"Quantum mechanics"
] |
47,321,247 | https://en.wikipedia.org/wiki/Glaeser%27s%20continuity%20theorem | In mathematical analysis, Glaeser's continuity theorem is a characterization of the continuity of the derivative of the square roots of functions of class . It was introduced in 1963 by Georges Glaeser, and was later simplified by Jean Dieudonné.
The theorem states: Let f be a non-negative function of class C² in an open set U contained in ℝⁿ; then √f is of class C¹ in U if and only if its partial derivatives of first and second order vanish at the zeros of f.
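As an illustration of the criterion (our example, not from the original papers): f(x) = x² has f′(0) = 0 but f″(0) = 2 ≠ 0 at its zero, and √f = |x| indeed fails to be C¹ there; for f(x) = x⁴ all first- and second-order derivatives vanish at the zero, and √f = x² is C¹:

```latex
% f(x) = x^2: the second derivative does not vanish at the zero x = 0,
% and the square root |x| is not differentiable there.
% f(x) = x^4: first and second derivatives vanish at x = 0,
% and the square root x^2 is C^1 (in fact C^infinity).
\[
  f(x)=x^{2}\ \Rightarrow\ \sqrt{f}=\lvert x\rvert \notin C^{1},
  \qquad
  f(x)=x^{4}\ \Rightarrow\ \sqrt{f}=x^{2}\in C^{1}.
\]
```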
References
Theorems in analysis | Glaeser's continuity theorem | [
"Mathematics"
] | 103 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical problems",
"Mathematical theorems"
] |
47,321,473 | https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold%20representation%20theorem | In real analysis and approximation theory, the Kolmogorov–Arnold representation theorem (or superposition theorem) states that every multivariate continuous function can be represented as a superposition of continuous single-variable functions.
The works of Vladimir Arnold and Andrey Kolmogorov established that if f is a multivariate continuous function, then f can be written as a finite composition of continuous functions of a single variable and the binary operation of addition. More specifically,
f(x_1, ..., x_n) = Σ_{q=0}^{2n} Φ_q( Σ_{p=1}^{n} φ_{q,p}(x_p) ),
where φ_{q,p} : [0,1] → ℝ and Φ_q : ℝ → ℝ.
There are proofs with specific constructions.
It solved a more constrained form of Hilbert's thirteenth problem, so the original Hilbert's thirteenth problem is a corollary. In a sense, they showed that the only true continuous multivariate function is the sum, since every other continuous function can be written using univariate continuous functions and summing.
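This "addition is the only truly multivariate function" viewpoint can be made concrete with a familiar special case: multiplication of positive reals is itself a superposition of univariate functions (log and exp) and addition, since xy = exp(ln x + ln y). A minimal sketch:

```python
import math

# Multiplication written in Kolmogorov-Arnold style: only univariate
# functions (log, exp) composed with the single binary operation "+".
def product_via_superposition(x: float, y: float) -> float:
    return math.exp(math.log(x) + math.log(y))   # exp(ln x + ln y) = x*y

assert abs(product_via_superposition(3.0, 7.0) - 21.0) < 1e-9
```

The theorem asserts that *every* continuous multivariate function admits such a rewriting, with fixed inner functions φ_{q,p} that do not depend on f.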
History
The Kolmogorov–Arnold representation theorem is closely related to Hilbert's 13th problem. In his Paris lecture at the International Congress of Mathematicians in 1900, David Hilbert formulated 23 problems which in his opinion were important for the further development of mathematics. The 13th of these problems dealt with the solution of general equations of higher degrees. It is known that for algebraic equations of degree 4 the solution can be computed by formulae that only contain radicals and arithmetic operations. For higher orders, Galois theory shows us that the solutions of algebraic equations cannot be expressed in terms of basic algebraic operations. It follows from the so called Tschirnhaus transformation that the general algebraic equation
can be translated to the form y^n + b_{n−4}y^{n−4} + ... + b_1y + 1 = 0. The Tschirnhaus transformation is given by a formula containing only radicals and arithmetic operations. Therefore, the solution of an algebraic equation of degree n can be represented as a superposition of functions of two variables if n < 7 and as a superposition of functions of n − 4 variables if n ≥ 7. For n = 7 the solution is a superposition of arithmetic operations, radicals, and the solution of the equation y^7 + b_3y^3 + b_2y^2 + b_1y + 1 = 0.
A further simplification with algebraic transformations seems to be impossible which led to Hilbert's conjecture that "A solution of the general equation of degree 7 cannot be represented as a superposition of continuous functions of two variables". This explains the relation of Hilbert's thirteenth problem to the representation of a higher-dimensional function as superposition of lower-dimensional functions. In this context, it has stimulated many studies in the theory of functions and other related problems by different authors.
Variants
A variant of Kolmogorov's theorem that reduces the number of
outer functions is due to George Lorentz. He showed in 1962 that the outer functions can be replaced by a single function . More precisely, Lorentz proved the existence of functions , , such that
David Sprecher replaced the inner functions by one single inner function with an appropriate shift in its argument. He proved that there exist real values , a continuous function , and a real increasing continuous function with , for , such that
Phillip A. Ostrand generalized the Kolmogorov superposition theorem to compact metric spaces. For let be compact metric spaces of finite dimension and let . Then there exists continuous functions and continuous functions such that any continuous function is representable in the form
Kolmogorov-Arnold representation theorem and its aforementioned variants also hold for discontinuous multivariate functions.
Limitations
The theorem does not hold in general for complex multi-variate functions, as discussed here. Furthermore, the non-smoothness of the inner functions and their "wild behavior" has limited the practical use of the representation, although there is some debate on this.
Applications
In the field of machine learning, there have been various attempts to use neural networks modeled on the Kolmogorov–Arnold representation. In these works, the Kolmogorov–Arnold theorem plays a role analogous to that of the universal approximation theorem in the study of multilayer perceptrons.
Proof
Here one example is proved. This proof closely follows. A proof for the case of functions depending on two variables is given, as the generalization is immediate.
Setup
Let be the unit interval .
Let be the set of continuous functions of type . It is a function space with supremum norm (it is a Banach space).
Let be a continuous function of type , and let be the supremum of it on .
Let be a positive irrational number. Its exact value is irrelevant.
We say that a 5-tuple is a Kolmogorov-Arnold tuple if and only if any there exists a continuous function , such that In the notation, we have the following:
Proof
Fix a . We show that a certain subset is open and dense: There exists continuous such that , and We can assume that with no loss of generality.
By continuity, the set of such 5-tuples is open in . It remains to prove that they are dense.
The key idea is to divide into an overlapping system of small squares, each with a unique address, and define to have the appropriate value at each address.
Grid system
Let . For any , for all large , we can discretize into a continuous function satisfying the following properties:
is constant on each of the intervals .
These values are different rational numbers.
.
This function creates a grid address system on , divided into streets and blocks. The blocks are of form .
Since is continuous on , it is uniformly continuous. Thus, we can take large enough, so that varies by less than on any block.
On each block, has a constant value. The key property is that, because is irrational, and is rational on the blocks, each block has a different value of .
So, given any 5-tuple , we construct such a 5-tuple . These create 5 overlapping grid systems.
Enumerate the blocks as , where is the -th block of the grid system created by . The address of this block is , for any . By adding a small and linearly independent irrational number (the construction is similar to that of the Hamel basis) to each of , we can ensure that every block has a unique address.
By plotting out the entire grid system, one can see that every point in is contained in 3 to 5 blocks, and 2 to 0 streets.
Construction of g
For each block , if on all of then define ; if on all of then define . Now, linearly interpolate between these defined values. It remains to show this construction has the desired properties.
For any , we consider three cases.
If , then by uniform continuity, on every block that contains the point . This means that on 3 to 5 of the blocks, and have an unknown value on 2 to 0 of the streets. Thus, we have givingSimilarly for .
If , then since , we still have
Baire category theorem
Iterating the above construction, then applying the Baire category theorem, we find that the following kind of 5-tuples are open and dense in : There exists a sequence of such that , , etc. This allows their sum to be defined: , which is still continuous and bounded, and it satisfies Since has a countable dense subset, we can apply the Baire category theorem again to obtain the full theorem.
Extensions
The above proof generalizes for -dimensions: Divide the cube into interlocking grid systems, such that each point in the cube is on to blocks, and to streets. Now, since , the above construction works.
Indeed, this is the best possible value.
A relatively short proof is given in via dimension theory.
In another direction of generality, more conditions can be imposed on the Kolmogorov–Arnold tuples.
The proof is given in.
(Vituškin, 1954) showed that the theorem is false if we require all functions to be continuously differentiable. The theorem remains true if we require all to be 1-Lipschitz continuous.
References
Sources
Andrey Kolmogorov, "On the representation of continuous functions of several variables by superpositions of continuous functions of a smaller number of variables", Proceedings of the USSR Academy of Sciences, 108 (1956), pp. 179–182; English translation: Amer. Math. Soc. Transl., "17: Twelve Papers on Algebra and Real Functions" (1961), pp. 369–373.
Vladimir Arnold, "On functions of three variables", Proceedings of the USSR Academy of Sciences, 114 (1957), pp. 679–681; English translation: Amer. Math. Soc. Transl., "28: Sixteen Papers on Analysis" (1963), pp. 51–54. SpringerLink
Vladimir Arnold, "On the representation of continuous functions of three variables as superpositions of continuous functions of two variables", Dokl. Akad. Nauk. SSSR 114:4 (1957), pp. 679–681 (in Russian) SpringerLink
Andrey Kolmogorov, "On the representation of continuous functions of several variables as superpositions of continuous functions of one variable and addition", (1957); English translation: Amer. Math. Soc. Transl., "28: Sixteen Papers on Analysis" (1963), PDF
Further reading
S. Ya. Khavinson, Best Approximation by Linear Superpositions (Approximate Nomography), AMS Translations of Mathematical Monographs (1997)
Theorems in real analysis
Functions and mappings
Theorems in approximation theory | Kolmogorov–Arnold representation theorem | [
"Mathematics"
] | 1,903 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Functions and mappings",
"Theorems in real analysis",
"Mathematical objects",
"Theorems in approximation theory",
"Mathematical relations"
] |
39,950,774 | https://en.wikipedia.org/wiki/Plasmonic%20lens | In nano-optics, a plasmonic lens generally refers to a lens for surface plasmon polaritons (SPPs), i.e. a device that redirects SPPs to converge towards a single focal point. Because SPPs can have very small wavelength, they can converge into a very small and very intense spot, much smaller than the free space wavelength and the diffraction limit.
A simple example of a plasmonic lens is a series of concentric rings on a metal film. Any light that hits the film from free space at a 90-degree angle, known as the normal, will get coupled into a SPP (this part works like a diffraction grating coupler), and that SPP will be heading towards the center of the circles, which is the focal point. Another example is a tapered "dimple".
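For such a ring coupler, the ring period that phase-matches normally incident light to SPPs equals the SPP wavelength, which follows from the standard SPP dispersion relation. A sketch, where the gold permittivity at 633 nm is a typical literature figure used purely for illustration:

```python
import cmath

def spp_wavelength(lambda0_nm: float, eps_metal: complex, eps_diel: float) -> float:
    """SPP wavelength from the surface-plasmon dispersion relation
    k_spp = k0 * sqrt(eps_m * eps_d / (eps_m + eps_d))."""
    n_eff = cmath.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))
    return lambda0_nm / n_eff.real   # effective index > 1 shortens the wave

# Gold/air interface at 633 nm (illustrative permittivity value)
lam_spp = spp_wavelength(633.0, complex(-11.6, 1.2), 1.0)
assert lam_spp < 633.0   # SPP wavelength is shorter than in free space
```

The shortened wavelength is what lets the converging SPPs focus below the free-space diffraction limit.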
In 2007, a novel plasmonic lens and waveguide was demonstrated that modulates light with a mesoscale dielectric structure on a metallic film with arrayed nano-slits, which have constant depth but varying widths. The slits transport electromagnetic energy in the form of SPPs in nanometer-sized waveguides and provide the desired phase adjustments for manipulating the beam of light. The scientists claim that it is an improvement over other subwavelength imaging techniques, such as "superlenses", where the object and image are confined to the near field.
These devices have been suggested for various applications that take advantage of the small size and high intensity of the SPPs at the focal point. These include photolithography, heat-assisted magnetic recording, microscopy, biophotonics, biological molecule sensors, and solar cells, as well as other applications.
The term "plasmonic lens" is also sometimes used to describe something different: Any free-space lens (i.e., a lens that focuses free-space light, rather than SPPs), that has something to do with plasmonics.
References
Further reading
Plasmonics
Biotechnology
Metamaterials
Lenses | Plasmonic lens | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 429 | [
"Plasmonics",
"Metamaterials",
"Materials science",
"Surface science",
"Biotechnology",
"Condensed matter physics",
"nan",
"Nanotechnology",
"Solid state engineering"
] |
39,953,452 | https://en.wikipedia.org/wiki/Thiolate-protected%20gold%20cluster | Thiolate-protected gold clusters are a type of ligand-protected metal cluster, synthesized from gold ions and thin layer compounds that play a special role in cluster physics because of their unique stability and electronic properties. They are considered to be stable compounds.
These clusters can range in size up to hundreds of gold atoms, above which they are classified as passivated gold nanoparticles.
Synthesis
Wet chemical synthesis
The wet chemical synthesis of thiolate-protected gold clusters is achieved by the reduction of gold(III) salt solutions, using a mild reducing agent in the presence of thiol compounds. This method starts with gold ions and synthesizes larger particles from them, therefore this type of synthesis can be regarded as a "bottom-up approach" in nanotechnology to the synthesis of nanoparticles.
The reduction process depends on the equilibrium between different oxidation states of the gold and the oxidized or reduced forms of the reducing agent, or thiols. Gold(I)-thiolate polymers have been identified as important in the initial steps of the reaction. Several synthesis recipes exist that are similar to the Brust synthesis of colloidal gold, however the mechanism is not yet fully understood. The synthesis produces a mixture of dissolved, thiolate-protected gold clusters of different sizes. These particles can then be separated by gel electrophoresis (PAGE). If the synthesis is performed in a kinetically controlled manner, particularly stable representatives can be obtained with particles of uniform size (monodispersely), avoiding further separation steps.
Template-mediated synthesis
Rather than starting from "naked" gold ions in solution, template reactions can be used for directed synthesis of clusters. The high affinity of the gold ions to electronegative and (partially) charged atoms of functional groups yields potential seeds for cluster formation. The interface between the metal and the template can act as a stabilizer and steer the final size of the cluster. Some potential templates are dendrimers, oligonucleotides, proteins, polyelectrolytes and polymers.
Etching synthesis
Top-down synthesis of the clusters can be achieved by the "etching" of larger metallic nanoparticles with redox-active, thiol-containing biomolecules. In this process, gold atoms on the nanoparticles' surface react with the thiol, dissolving as gold-thiolate complexes until the dissolution reaction stops; this leaves behind a residual species of thiolate-protected gold clusters that is particularly stable. This type of synthesis is also possible using other non thiol-based ligands.
Properties
Electronic and optical properties
The electronic structure of the thiolate-protected gold clusters is characterized by strongly pronounced quantum effects. These result in discrete electronic states, and a nonzero HOMO/LUMO gap. This existence of discrete electronic states was first indicated by the discrepancy between their optical absorption and the predictions of classical Mie scattering. The discrete optical transitions and occurrence of photoluminescence in these species are areas where they behave like molecular, rather than metallic, substances. This molecular optical behavior sharply distinguishes thiolate-protected clusters from gold nanoparticles, whose optical characteristics are driven by Plasmon resonance. Some of thiolate-protected clusters' properties can be described using a model in which the clusters are treated like "superatoms". According to this model they exhibit atomic-like electronic states, that are labeled S, P, D, F according to their respective angular momentum on the atomic level. Those clusters that have a "closed superatomic shell" configuration have indeed been identified as the most stable ones. This electronic shell closure and the resulting gain in stability is responsible for the discrete distribution of a few stable cluster sizes (magic numbers) observed in their synthesis, rather than a quasi-continuous distribution of sizes.
Magic numbers
Magic numbers are connected with the number of metal atoms in those thiolate-protected clusters which display an outstanding stability. Such clusters can be synthesized monodispersely and are the end products of the etching procedure, at which point addition of excess thiol does not lead to further metal dissolution. Some important clusters with magic numbers are (SG: glutathione): Au10(SG)10, Au15(SG)13, Au18(SG)14, Au22(SG)16, Au22(SG)17, Au25(SG)18, Au29(SG)20, Au33(SG)22, and Au39(SG)24.
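In the superatom picture, stability can be checked with the commonly used electron count n* = N_Au − N_SR − q (each gold 6s electron contributes one, each thiolate withdraws one, q is the overall cluster charge), compared against the shell-closing numbers 2, 8, 18, 20, 34, 58, 92, … from the superatom literature. A sketch with illustrative helper names:

```python
MAGIC = {2, 8, 18, 20, 34, 58, 92, 138}       # superatomic shell closings

def superatom_electrons(n_gold: int, n_thiolates: int, charge: int = 0) -> int:
    """n* = N_Au - N_SR - q for a thiolate-protected gold cluster."""
    return n_gold - n_thiolates - charge

# [Au25(SR)18]^- closes the 8-electron shell; Au102(p-MBA)44 closes 58.
assert superatom_electrons(25, 18, charge=-1) == 8
assert superatom_electrons(102, 44) in MAGIC   # 58-electron closure
```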
Au20(SCH2Ph)16 is also well known, as are larger representatives such as Au102(p-MBA)44, protected with the ligand para-mercaptobenzoic acid (p-MBA).
Structure prediction
Of note, a 2013 structural prediction of the Au130(SCH3)50 cluster based on density functional theory (DFT) was confirmed experimentally in 2015. This result reflects the maturity of a field in which calculations are able to guide experimental work.
The following table features some sizes.
Composition database
Applications
In bionanotechnology, intrinsic properties of the clusters (for example, fluorescence) can be made available for bionanotechnological applications by linking them with biomolecules through bioconjugation. The protected gold particles' stability and fluorescence make them efficient emitters of electromagnetic radiation that can be tuned by varying the cluster size and the type of protecting ligand. The protective shell can be functionalized (have functional groups added) so that selective binding (for example, by a complementary protein receptor or DNA–DNA interaction) qualifies the clusters for use as biosensors.
References
Cluster chemistry | Thiolate-protected gold cluster | [
"Chemistry"
] | 1,189 | [
"Cluster chemistry",
"Organometallic chemistry"
] |
39,954,585 | https://en.wikipedia.org/wiki/Ribosomal%20pause | Ribosomal pause refers to the queueing or stacking of ribosomes during translation of the nucleotide sequence of mRNA transcripts. These transcripts are decoded and converted into an amino acid sequence during protein synthesis by ribosomes. Due to the pause sites of some mRNA's, there is a disturbance caused in translation. Ribosomal pausing occurs in both eukaryotes and prokaryotes. A more severe pause is known as a ribosomal stall.
It has been known since the 1980s that different mRNAs are translated at different rates. The main reason for these differences was thought to be that the limited concentrations of various rare tRNAs restrict the rate at which some transcripts can be decoded. However, with research techniques such as ribosome profiling, it was found that certain sites carry higher-than-average concentrations of ribosomes, and these pause sites were tested against specific codons. No link was found between the occupancy of specific codons and the abundance of their tRNAs. Thus, the early idea that rare tRNAs cause pause sites no longer seems plausible.
Two techniques can localize ribosomal pause sites in vivo: a micrococcal nuclease protection assay and isolation of polysomal transcripts. Polysomal transcripts are isolated by centrifuging tissue extracts through a sucrose cushion containing translation elongation inhibitors, for example cycloheximide.
Ribosome pausing can be detected during preprolactin synthesis on free polysomes; when one ribosome pauses, the ribosomes behind it become tightly stacked. When the ribosome pauses during translation, fragments whose translation started before the pause are overrepresented, and specific bands along the mRNA at the trailing edge of the paused ribosome are enhanced.
Some elongation inhibitors, such as cycloheximide (in eukaryotes) or chloramphenicol, cause ribosomes to pause and accumulate at start codons. Elongation factor P (EFP) regulates ribosomal pausing at polyproline stretches in bacteria; when EFP is absent, ribosome density decreases downstream of the polyproline motifs. EFP does not resolve multiple simultaneous ribosome pauses.
Resolution and effects on gene expression
Some forms of ribosomal pause are reversible without needing to discard the translated peptide and mRNA. This sort, usually described as a slowdown, is typically caused by polyproline stretches (resolved by EFP or eIF5A) or by uncharged tRNAs. Slowdowns are important for the cell to control how much protein is produced; they also aid co-translational folding of the nascent polypeptide on the ribosome and delay protein translation, which can trigger ribosomal frameshifting.
More severe "stalls" can be caused by an actual lack of tRNA or by the mRNA terminating without a stop codon. In this case, ribosomal quality control (RQC) performs crisis rescue by translational abandonment. This releases the ribosome from the mRNA. The incomplete polypeptide is targeted for destruction; in eukaryotes, mRNA no-go decay is also triggered.
It is difficult for RQC machinery to differentiate between a slowdown and a stall. It is possible for a mRNA sequence that normally produces a protein slowly to produce nothing instead due to intervention by RQC under different conditions.
Rescue mechanisms
In bacteria, three rescue mechanisms are known.
The main, universal system involves transfer-messenger RNA (tmRNA) and SmpB. The tmRNA first binds to the ribosome like a tRNA, then with SmpB's help shifts into the mRNA position to translate a short peptide ending on a normal stop codon.
Alternative ribosome-rescue factor A (ArfA) is an alternative system in E. coli. It recruits RF2.
Alternative ribosome-rescue factor B (ArfB) is another alternative from E. coli. It works like a GGQ-release factor itself, releasing the peptide from tRNA. At the same time, it fits into the mRNA tunnel to remove the mRNA.
In eukaryotes, the main mechanism involves PELO:HBS1L.
Advantage of the ribosomal pause
When ribosome movement along the mRNA is not uniform, the ribosome pauses at different regions, not always for an obvious reason. The positions of ribosome pauses help to identify the mRNA sequence features, structures, and trans-acting factors that modulate this process. Ribosomal pause sites located at protein domain boundaries are advantageous because they aid the folding of the protein. Pausing is not always advantageous, however, and sometimes needs to be restricted. In translation, eIF5A inhibits ribosomal pausing so that translation functions better. Without eIF5A, ribosomal pausing can cause more initiation at non-canonical start codons in eukaryotic cells, and a lack of eIF5A in eukaryotic cells can increase ribosomal pausing. Amino acid availability can also control translation through ribosomal pausing.
The location of the ribosome pause event in vitro
It is known that ribosomes pause at distinct sites, but the reasons for these pauses are mostly unknown. The ribosome also pauses if a pseudoknot is disrupted: about 10% of ribosomes pause at the pseudoknot and 4% of ribosomes terminate there, while the rest pass the pseudoknot before being obstructed. An assay put together by a group from the University of California monitored translation of a model mRNA in two in vitro systems. It was found that translating ribosomes are not uniformly distributed along an mRNA. Protein folding in vivo is also important and is related to protein synthesis. To find the locations of ribosomal pauses in vivo, the methods used to find them in vitro can be adapted.
Ribosome profiling
Ribosome profiling is a method that can reveal pausing sites by sequencing the ribosome-protected fragments (RPFs, or footprints) to map ribosome occupancy on the mRNA. Ribosome profiling can reveal ribosome pause sites across the whole transcriptome. When a kinetic layer is added, it discloses when pauses occur and how long translation takes. Ribosome profiling is, however, still in its early stages and has biases that need to be explored further. It allows translation to be measured more accurately and precisely, but translation must be halted for profiling to be performed, which can be a problem because the methods used to stop translation can affect the outcome and yield incorrect results. Ribosome profiling is useful for obtaining specific information on translation and the process of protein synthesis.
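One simple way pause sites are extracted from profiling data is a per-position "pause score": footprint density at a codon divided by the transcript's mean density. The threshold below is an illustrative choice, not a field standard:

```python
def pause_sites(footprints, threshold=3.0):
    """Flag codon positions whose ribosome footprint count exceeds
    `threshold` times the transcript's mean coverage (a pause score)."""
    mean_cov = sum(footprints) / len(footprints)
    if mean_cov == 0:
        return []
    return [i for i, c in enumerate(footprints)
            if c / mean_cov >= threshold]

# Toy coverage vector: position 3 carries a strong pile-up of footprints.
coverage = [4, 5, 3, 40, 4, 6, 2, 5]
assert pause_sites(coverage) == [3]
```

Real analyses must additionally correct for the library and sequencing biases mentioned above before such scores are meaningful.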
See also
Translational frameshift
HIV Ribosomal frameshift signal
Coronavirus frameshifting stimulation element
Ribosomal frameshift
References
External links
Pseudobase
Recode
RNA
Gene expression
Cis-regulatory RNA elements
Molecular genetics | Ribosomal pause | [
"Chemistry",
"Biology"
] | 1,531 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
55,828,680 | https://en.wikipedia.org/wiki/Black%20Hole%20Initiative | The Black Hole Initiative (BHI) is an interdisciplinary center at Harvard University that includes the fields of astronomy, physics, and philosophy, and is claimed to be the first center in the world to focus on the study of black holes. Principal participants include Sheperd S. Doeleman, Peter Galison, Avi Loeb, Andrew Strominger and Shing-Tung Yau. The BHI inauguration was held on 18 April 2016 and attended by Stephen Hawking; related workshop events were held on 19 April 2016. Robbert Dijkgraaf created the mural for the BHI Inauguration.
The BHI is funded by the John Templeton Foundation and the Gordon and Betty Moore Foundation. Harvard University allocated office space for the BHI on the second floor of 20 Garden Street in Cambridge, Massachusetts. The BHI is an independent Center within the Faculty of Arts & Sciences at Harvard University.
See also
Cosmology
Galactic Center
Galaxy
General relativity
List of black holes
Outline of black holes
Timeline of black hole physics
References
External links
Official website
Official Youtube Channel
Inauguration workshop events (19 April 2016):
Astrophysics research institutes
Cosmological simulation
Physical cosmology
Black holes
Research institutes established in 2016
Harvard University research institutes
2016 establishments in Massachusetts | Black Hole Initiative | [
"Physics",
"Astronomy"
] | 250 | [
"Physical phenomena",
"Black holes",
"Applied and interdisciplinary physics",
"Physical quantities",
"Philosophy of physics",
"Astronomical sub-disciplines",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Computational physics",
"Quantum gravity",
"Density",
"Astroph... |
55,837,994 | https://en.wikipedia.org/wiki/Overbore | Overbore cartridges are those with a relatively large case volume or case capacity, coupled with a relatively small diameter bullet.
The case volume or case capacity and barrel bore area can be mathematically related to obtain a case volume to bore area ratio in metric or imperial units.
$O_{\mathrm{ratio}} = \dfrac{V_c}{A_b}$

where:
$V_c$ = the cartridge case internal volume or case capacity (in ml or, for non-metric users, grains of water)
$A_b$ = barrel bore cross-section area (in cm2 or in2)
The higher the Oratio result, the more overbore a cartridge will be. As the ratio is expressed in units of length, relatively high Oratio is a good predictor of suitability for relatively long barreled guns.
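Since a volume divided by a cross-section area gives a length, the ratio can be computed directly. A minimal Python sketch follows; the numeric inputs are invented for illustration and are not taken from the C.I.P. data sheets:

```python
def overbore_ratio(case_capacity_ml: float, bore_area_cm2: float) -> float:
    """Case capacity (ml = cm^3) divided by bore cross-section area (cm^2).

    The result has units of length (cm); higher values indicate a more
    overbore cartridge.
    """
    return case_capacity_ml / bore_area_cm2

# Illustrative numbers only (hypothetical, not from C.I.P. data sheets):
# a case of ~1.87 ml capacity over a ~0.25 cm^2 bore area.
print(overbore_ratio(1.87, 0.25))  # 7.48
```

A sub-8 result like this would fall in the "intermediate cartridge" band mentioned below.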
Oratio is also used to predict barrel life in cartridges of the same caliber, but not of different calibres, since the ratio is an extensive quantity that does not correlate to temperature or pressure (e.g. a .50 cal straight cartridge may have the same overbore as a highly necked down .17 cal cartridge).
Comparative index for various rifle cartridges
The bore cross section areas "Q" used in the calculations were taken from the appropriate C.I.P. data sheets.
The intermediate cartridges .30 Carbine, 7.92×33mm Kurz, 7.62×39mm, 7.62×45mm, 5.45×39mm, .223 Remington/5.56×45mm NATO and 5.8×42mm stand out as having relatively low, sub-8 Oratio values.
References
“Overbore” Cartridges Defined by Formula Can a Formula Provide a Useful Index Ranking of Overbore Cartridges? http://www.accurateshooter.com
Firearms
Firearm terminology
Ammunition
Ballistics | Overbore | [
"Physics"
] | 360 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
44,216,842 | https://en.wikipedia.org/wiki/Magnetic%20resonance%20%28quantum%20mechanics%29 | In quantum mechanics, magnetic resonance is a resonant effect that can appear when a magnetic dipole is exposed to a static magnetic field and perturbed with another, oscillating electromagnetic field. Due to the static field, the dipole can assume a number of discrete energy eigenstates, depending on the value of its angular momentum (azimuthal) quantum number. The oscillating field can then make the dipole transit between its energy states with a certain probability and at a certain rate. The overall transition probability will depend on the field's frequency and the rate will depend on its amplitude. When the frequency of that field leads to the maximum possible transition probability between two states, a magnetic resonance has been achieved. In that case, the energy of the photons composing the oscillating field matches the energy difference between said states. If the dipole is tickled with a field oscillating far from resonance, it is unlikely to transition. That is analogous to other resonant effects, such as with the forced harmonic oscillator. The periodic transition between the different states is called Rabi cycle and the rate at which that happens is called Rabi frequency. The Rabi frequency should not be confused with the field's own frequency. Since many atomic nuclei species can behave as a magnetic dipole, this resonance technique is the basis of nuclear magnetic resonance, including nuclear magnetic resonance imaging and nuclear magnetic resonance spectroscopy.
Quantum mechanical explanation
Consider a magnetic dipole, realized by a spin-1/2 system such as a proton. The quantum mechanical state of the system, denoted by $|\psi(t)\rangle$, is evolved by the action of a unitary operator $U(t)$; the result obeys the Schrödinger equation:

$i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle = H|\psi(t)\rangle$
States with definite energy evolve in time with phase factor $e^{-iEt/\hbar}$, where E is the energy of the state, since the probability of finding the system in the state $|\psi(t)\rangle = e^{-iEt/\hbar}|\psi(0)\rangle$ is independent of time. Such states are termed stationary states, so if a system is prepared in a stationary state (i.e. one of the eigenstates of the Hamiltonian operator), then P(t) = 1, i.e. it remains in that state indefinitely. This is the case only for isolated systems. When a system in a stationary state is perturbed, its state changes, so it is no longer an eigenstate of the system's complete Hamiltonian. This same phenomenon happens in magnetic resonance for a spin system in a magnetic field.
The Hamiltonian for a magnetic dipole $\boldsymbol{\mu}$ (associated with a spin-1/2 particle) in a static magnetic field $\mathbf{B} = B_0\hat{z}$ is:

$H_0 = -\boldsymbol{\mu}\cdot\mathbf{B} = -\tfrac{1}{2}\hbar\omega_0\,\sigma_z$

Here $\omega_0 = \gamma B_0$ is the Larmor precession frequency of the dipole for magnetic field $B_0$ and $\sigma_z$ is the z Pauli matrix. So the eigenvalues of $H_0$ are $-\tfrac{1}{2}\hbar\omega_0$ and $+\tfrac{1}{2}\hbar\omega_0$. If the system is perturbed by a weak magnetic field $B_1$, rotating counterclockwise in the x–y plane (normal to $\mathbf{B}$) with angular frequency $\omega$, so that $\mathbf{B}_1 = B_1(\hat{x}\cos\omega t - \hat{y}\sin\omega t)$, then $|{+}\rangle$ and $|{-}\rangle$ are not eigenstates of the Hamiltonian, which is modified into

$H = -\tfrac{1}{2}\hbar\omega_0\,\sigma_z - \tfrac{1}{2}\hbar\omega_1\left(\sigma_x\cos\omega t - \sigma_y\sin\omega t\right), \qquad \omega_1 \equiv \gamma B_1.$
It is inconvenient to deal with a time-dependent Hamiltonian. To make $H$ time-independent requires a new reference frame rotating with $\mathbf{B}_1$, i.e. applying the rotation operator $e^{i\omega t\,\sigma_z/2}$ to $|\psi(t)\rangle$, which amounts to a basis change in Hilbert space. Using this on Schrödinger's equation, the Hamiltonian becomes:

$H' = -\tfrac{1}{2}\hbar(\omega_0-\omega)\,\sigma_z - \tfrac{1}{2}\hbar\omega_1\,\sigma_x$
Writing in the basis of as-
Using this form of the Hamiltonian a new basis is found:
where and
This Hamiltonian is exactly similar to that of a two-state system with unperturbed energies $\mp\tfrac{1}{2}\hbar(\omega_0-\omega)$ and with a perturbation expressed by $\tfrac{1}{2}\hbar\omega_1$. According to Rabi oscillation, starting with the state $|{+}\rangle$, a dipole parallel to $\mathbf{B}$ with energy $-\tfrac{1}{2}\hbar\omega_0$, the probability that it will transit to the state $|{-}\rangle$ (i.e. that it will flip) is

$P_{+\to-}(t) = \frac{\omega_1^2}{\omega_1^2 + (\omega-\omega_0)^2}\,\sin^2\!\left(\frac{t}{2}\sqrt{\omega_1^2 + (\omega-\omega_0)^2}\right)$
Now consider $\omega = \omega_0$, i.e. the field oscillates at the same rate at which the dipole exposed to the field precesses. That is a case of resonance. Then at specific points in time, namely $t = (2n+1)\pi/\omega_1$ with $n = 0, 1, 2, \ldots$, the dipole will flip, going to the other energy eigenstate with 100% probability. When $|\omega - \omega_0| \gg \omega_1$, the probability of a change of energy state is small. Therefore, the resonance condition can be used, for instance, to measure the magnetic moment of a dipole or the magnetic field at a point in space.
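The resonance behaviour can be made concrete numerically. The following sketch assumes the standard Rabi flip-probability formula for a spin-1/2 dipole; the 1 kHz Rabi frequency is an arbitrary illustrative value:

```python
import numpy as np

def rabi_probability(t, omega1, detuning):
    """Spin-flip probability P(t) for Rabi oscillation.

    omega1  : Rabi (angular) frequency, proportional to B1
    detuning: omega - omega0, drive frequency minus Larmor frequency
    """
    Omega = np.sqrt(omega1**2 + detuning**2)   # generalized Rabi frequency
    return (omega1**2 / Omega**2) * np.sin(Omega * t / 2.0)**2

omega1 = 2 * np.pi * 1.0e3   # assumed 1 kHz Rabi frequency (illustrative)

# On resonance (detuning = 0) the spin flips completely at t = pi/omega1.
p_res = rabi_probability(np.pi / omega1, omega1, 0.0)

# Far off resonance the flip probability never exceeds omega1^2/Omega^2.
p_off = rabi_probability(np.pi / omega1, omega1, 10 * omega1)

print(p_res, p_off)
```

The off-resonant probability is bounded by $1/101 \approx 0.01$ here, illustrating why only drives near the Larmor frequency flip the spin appreciably.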
A special case to show applications
A special case occurs where a system oscillates between two unstable levels that have the same life time . If atoms are excited at a constant, say n/time, to the first state, some decay and the rest have a probability to transition to the second state, so in the time interval between t and (t + dt) the number of atoms that jump to the second state from the first is , so at time t the number of atoms in the second state is
The rate of decay from state two depends on the number of atoms that were collected in that state from all previous intervals, so the number of atoms in state 2 is ; The rate of decay of atoms from state two is proportional to the number of atoms present in that state, while the constant of proportionality is decay constant . Performing the integration rate of decay of atoms from state two is obtained as:
From this expression many interesting points can be exploited, such
Varying uniform magnetic field so that in produces a Lorentz curve (see Cauchy–Lorentz distribution), detecting the peak of that curve, the abscissa of it gives , so now (angular frequency of rotation of = , so from the known value of and , the gyromagnetic ratio of the dipole can be measured; by this method we can measure Nuclear spin where all electronic spins are balanced. Correct measurement of nuclear magnetic moment helps to understand the character of nuclear force.
If is known, by varying , the value of can be obtained. This measurement technique is precise enough for use in sensitive magnetometers. Using this technique, the value of magnetic field acting at a particular lattice site by its environment inside a crystal can be obtained.
By measuring half-width of the curve, d = , for several values of (i.e. of ), we can plot d vs , and by extrapolating this line for , the lifetime of unstable states can be obtained from the intercept.
Rabi's method
The existence of spin angular momentum of electrons was discovered experimentally by the Stern–Gerlach experiment. In that study a beam of neutral atoms with one electron in the valence shell, carrying no orbital momentum (from the viewpoint of quantum mechanics), was passed through an inhomogeneous magnetic field. The measurement was not precise, because the deflection angle was small, resulting in considerable uncertainty in the measured value of the split beam.
Rabi's method was an improvement over Stern-Gerlach. As shown in the figure, the source emits a beam of neutral atoms, having spin angular momentum . The beam passes through a series of three aligned magnets. Magnet 1 produces an inhomogeneous magnetic field with a high gradient (as in Stern–Gerlach), so the atoms having 'upward' spin (with ) will deviate downward (path 1), i.e. to the region of less magnetic field B, to minimize energy. Atoms with 'downward' spin with ) will deviate upward similarly (path 2). Beams are passed through slit 1, to reduce any effects of source beyond. Magnet 2 produces only a uniform magnetic field in the vertical direction applying no force on the atomic beam, and magnet 3 is actually inverted magnet 1. In the region between the poles of magnet 3, atoms having 'upward' spin get upward push and atoms having 'downward' spin feel downward push, so their path remains 1 and 2 respectively. These beams pass through a second slit S2, and arrive at detector and get detected.
If a horizontal rotating field , angular frequency of rotation is applied in the region between poles of magnet 2, produced by oscillating current in circular coils then there is a probability for the atoms passing through there from one spin state to another ( and vice versa), when = , Larmor frequency of precession of magnetic moment in B. The atoms that transition from 'upward' to 'downward' spin will experience a downward force while passing through magnet 3, and will follow path 1'. Similarly, atoms that change from 'downward' to 'upward' spin will follow path 2', and these atoms will not reach the detector, causing a minimum in detector count. If angular frequency of is varied continuously, then a minimum in detector current will be obtained (when = ). From this known value of (, where g is 'Landé g factor'), 'Landé g-factor' is obtained which will enable one to have correct value of magnetic moment . This experiment, performed by Isidor Isaac Rabi is more sensitive and accurate compared than Stern-Gerlach.
Correspondence between classical and quantum mechanical explanations
Spin angular momentum allows magnetic resonance phenomena to be explained via classical physics. When viewed from the reference frame attached to the rotating field, it seems that the magnetic dipole precesses around a net magnetic field , where is the unit vector along uniform magnetic field and is the same in the direction of rotating field and .
{| class="toccolours collapsible collapsed" style="text-align:left" width="60%"
!Proof of classical expression for precession
|-
|
Classical electrodynamics tells us that the torque on a magnetic dipole of moment $\boldsymbol{\mu}$ is $\boldsymbol{\tau} = \boldsymbol{\mu}\times\mathbf{B}$, so its equation of motion is $\frac{d\mathbf{L}}{dt} = \boldsymbol{\mu}\times\mathbf{B}$ (where $\mathbf{L}$ is the angular momentum associated with the dipole), so –

$\frac{d\boldsymbol{\mu}}{dt} = \gamma\,\boldsymbol{\mu}\times\mathbf{B}$
For the case under consideration the dipole is under the action of magnetic field and , hence
It is easier to solve it by transforming co-ordinate system to OXYZ in which becomes OX axis, in that frame –
here Using and , one can see that –
so, here effective field becomes :
|}
So when $\omega = \omega_0$, a high precession amplitude allows the magnetic moment to be completely flipped. Classical and quantum mechanical predictions correspond well, which can be viewed as an example of the Bohr correspondence principle, which states that quantum mechanical phenomena, when predicted in the classical regime, should match the classical result. The origin of this correspondence is that the evolution of the expected value of the magnetic moment is identical to that obtained by classical reasoning. The expectation value of the magnetic moment is $\langle\boldsymbol{\mu}\rangle = \langle\psi(t)|\boldsymbol{\mu}|\psi(t)\rangle$. The time evolution of $\langle\boldsymbol{\mu}\rangle$ is given by

$\frac{d}{dt}\langle\boldsymbol{\mu}\rangle = \frac{i}{\hbar}\langle[H,\boldsymbol{\mu}]\rangle$
so,
So, and
which looks exactly similar to the equation of motion of magnetic moment in classical mechanics –
This analogy in the mathematical equation for the evolution of magnetic moment and its expectation value facilitates to understand the phenomena without a background of quantum mechanics.
Magnetic resonance imaging
In magnetic resonance imaging (MRI) the spin angular momentum of the proton is used. The most abundant source of protons in the human body is the hydrogen atoms in water. A strong magnetic field $\mathbf{B}$ applied to water causes the appearance of two different energy levels, $\mp\mu B$, for the spin angular momentum states $\pm\tfrac{\hbar}{2}$, via $E = -\boldsymbol{\mu}\cdot\mathbf{B}$.
According to the Boltzmann distribution, the number of systems with energy $E$ at temperature $T$ is proportional to $e^{-E/kT}$ (where $k$ is the Boltzmann constant), so the lower energy level is more populated than the other. In the presence of a rotating magnetic field more protons flip from the lower level to the upper one than the other way, causing absorption of microwave or radio-wave radiation (from the rotating field). When the field is withdrawn, protons tend to re-equilibrate along the Boltzmann distribution, so some of them transition from higher energy levels to lower ones, emitting microwave or radio-wave radiation at specific frequencies.
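The size of this population imbalance is easy to estimate from the Boltzmann factor. A minimal sketch, using standard values of the physical constants and the proton gyromagnetic ratio (the 1.5 T field and 310 K body temperature are illustrative assumptions):

```python
import math

hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J/K
gamma_p = 2.675e8        # proton gyromagnetic ratio, rad/(s T)

def spin_population_ratio(B, T):
    """Boltzmann ratio N_upper/N_lower for proton spins in field B at temperature T.

    The level splitting is dE = hbar * gamma_p * B.
    """
    dE = hbar * gamma_p * B
    return math.exp(-dE / (k_B * T))

# At 1.5 T and body temperature the fractional excess population in the
# lower level is only about ten parts per million -- the tiny net
# polarization that MRI signals are built from.
ratio = spin_population_ratio(1.5, 310.0)
print(1.0 - ratio)  # ~1e-5
```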
Instead of nuclear spin, spin angular momentum of unpaired electrons is used in EPR (electron paramagnetic resonance) in order to detect free radicals, etc.
Magnetic resonance as a quantum phenomenon
The phenomenon of magnetic resonance is rooted in the existence of spin angular momentum of a quantum system and its specific orientation with respect to an applied magnetic field. Both cases have no explanation in the classical approach and can be understood only by using quantum mechanics. Some people claim that purely quantum phenomena are those that cannot be explained by the classical approach. For example, phenomena in the microscopic domain that can to some extent be described by classical analogy are not really quantum phenomena. Since the basic elements of magnetic resonance have no classical origin, although analogy can be made with classical Larmor precession, MR should be treated as a quantum phenomenon.
See also
Nuclear magnetic resonance
Magnetic resonance imaging
Bloch equations
Physics of magnetic resonance imaging
References
Quantum mechanics
Magnetism | Magnetic resonance (quantum mechanics) | [
"Physics"
] | 2,551 | [
"Theoretical physics",
"Quantum mechanics"
] |
44,217,963 | https://en.wikipedia.org/wiki/Clebsch%E2%80%93Gordan%20coefficients%20for%20SU%283%29 | In mathematical physics, Clebsch–Gordan coefficients are the expansion coefficients of total angular momentum eigenstates in an uncoupled tensor product basis. Mathematically, they specify the decomposition of the tensor product of two irreducible representations into a direct sum of irreducible representations, where the type and the multiplicities of these irreducible representations are known abstractly. The name derives from the German mathematicians Alfred Clebsch (1833–1872) and Paul Gordan (1837–1912), who encountered an equivalent problem in invariant theory.
Generalization to SU(3) of Clebsch–Gordan coefficients is useful because of their utility in characterizing hadronic decays, where a flavor-SU(3) symmetry exists (the eightfold way) that connects the three light quarks: up, down, and strange.
SU(3) group
The special unitary group SU is the group of unitary matrices whose determinant is equal to 1. This set is closed under matrix multiplication. All transformations characterized by the special unitary group leave norms unchanged. The symmetry appears in the light quark flavour symmetry (among up, down, and strange quarks) dubbed the Eightfold Way (physics). The same group acts in quantum chromodynamics on the colour quantum numbers of the quarks that form the fundamental (triplet) representation of the group.
The group is a subgroup of group , the group of all 3×3 unitary matrices. The unitarity condition imposes nine constraint relations on the total 18 degrees of freedom of a 3×3 complex matrix. Thus, the dimension of the group is 9. Furthermore, multiplying a U by a phase, leaves the norm invariant. Thus can be decomposed into a direct product . Because of this additional constraint, has dimension 8.
Generators of the Lie algebra
Every unitary matrix $U$ can be written in the form

$U = e^{iH}$

where H is hermitian. The elements of $SU(3)$ can be expressed as

$U = \exp\!\left(i\sum_{a=1}^{8}\theta_a\lambda_a\right)$

where $\lambda_a$ are the 8 linearly independent matrices forming the basis of the Lie algebra of $SU(3)$, in the triplet representation. The unit determinant condition requires the $\lambda_a$ matrices to be traceless, since

$\det(e^{iH}) = e^{i\,\operatorname{tr}H} = 1$.
An explicit basis in the fundamental, 3, representation can be constructed in analogy to the Pauli matrix algebra of the spin operators. It consists of the Gell-Mann matrices,
These are the generators of the group in the triplet representation, and they are normalized as

$\operatorname{tr}(\lambda_a\lambda_b) = 2\delta_{ab}$

The Lie algebra structure constants of the group are given by the commutators of $\lambda_a$,

$[\lambda_a,\lambda_b] = 2if_{abc}\lambda_c$

where $f_{abc}$ are the structure constants, completely antisymmetric, and are analogous to the Levi-Civita symbol of $SU(2)$.
In general, they vanish, unless they contain an odd number of indices from the set {2,5,7}, corresponding to the antisymmetric $\lambda$s. Note $f_{123} = 1$.
Moreover,

$\{\lambda_a,\lambda_b\} = \tfrac{4}{3}\delta_{ab}I + 2d_{abc}\lambda_c$

where $d_{abc}$ are the completely symmetric coefficient constants. They vanish if the number of indices from the set {2,5,7} is odd. In terms of the matrices,

$f_{abc} = -\tfrac{i}{4}\operatorname{tr}(\lambda_a[\lambda_b,\lambda_c]), \qquad d_{abc} = \tfrac{1}{4}\operatorname{tr}(\lambda_a\{\lambda_b,\lambda_c\})$
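The normalization and structure-constant relations described above are straightforward to verify numerically. A small NumPy sketch constructing the standard Gell-Mann matrices and checking $\operatorname{tr}(\lambda_a\lambda_b)=2\delta_{ab}$ together with two well-known structure constants:

```python
import numpy as np

# Standard Gell-Mann matrices lambda_1 ... lambda_8
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], complex)
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], complex)
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], complex)
l8 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], complex) / np.sqrt(3)
lam = [l1, l2, l3, l4, l5, l6, l7, l8]

# Normalization: tr(la lb) = 2 delta_ab
for a in range(8):
    for b in range(8):
        val = np.trace(lam[a] @ lam[b]).real
        assert abs(val - (2.0 if a == b else 0.0)) < 1e-12

# Structure constants via f_abc = -(i/4) tr(la [lb, lc]); 0-indexed arguments.
def f(a, b, c):
    comm = lam[b] @ lam[c] - lam[c] @ lam[b]
    return (-0.25j * np.trace(lam[a] @ comm)).real

print(f(0, 1, 2))   # f_123 = 1 (up to rounding)
print(f(0, 3, 6))   # f_147 = 1/2 (up to rounding)
```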
Standard basis
A slightly differently normalized standard basis consists of the F-spin operators, which are defined as $\hat{F}_a = \tfrac{1}{2}\lambda_a$ for the 3, and are utilized to apply to any representation of this algebra.
The Cartan–Weyl basis of the Lie algebra of $SU(3)$ is obtained by another change of basis, where one defines,

$\hat{T}_\pm = \hat{F}_1 \pm i\hat{F}_2,\quad \hat{T}_3 = \hat{F}_3,\quad \hat{V}_\pm = \hat{F}_4 \pm i\hat{F}_5,\quad \hat{U}_\pm = \hat{F}_6 \pm i\hat{F}_7,\quad \hat{Y} = \tfrac{2}{\sqrt{3}}\hat{F}_8.$
Because of the factors of i in these formulas, this is technically a basis for the complexification of the su(3) Lie algebra, namely sl(3,C). The preceding basis is then essentially the same one used in Hall's book.
Commutation algebra of the generators
The standard form of generators of the group satisfies the commutation relations given below,
All other commutation relations follow from hermitian conjugation of these operators.
These commutation relations can be used to construct the irreducible representations of the group.
The representations of the group lie in the 2-dimensional plane. Here, stands for the z-component of Isospin and is the Hypercharge, and they comprise the (abelian) Cartan subalgebra of the full Lie algebra. The maximum number of mutually commuting generators of a Lie algebra is called its rank: has rank 2. The remaining 6 generators, the ± ladder operators, correspond to the 6 roots arranged on the 2-dimensional hexagonal lattice of the figure.
Casimir operators
The Casimir operator is an operator that commutes with all the generators of the Lie group. In the case of , the quadratic operator is the only independent such operator.
In the case of group, by contrast, two independent Casimir operators can be constructed, a quadratic and a cubic: they are,
These Casimir operators serve to label the irreducible representations of the Lie group algebra , because all states in a given representation assume the same value for each Casimir operator, which serves as the identity in a space with the dimension of that representation. This is because states in a given representation are connected by the action of the generators of the Lie algebra, and all generators commute with the Casimir operators.
For example, for the triplet representation, , the eigenvalue of is 4/3, and of , 10/9.
More generally, from Freudenthal's formula, for generic $D(p,q)$, the eigenvalue of $\hat{C}_2$ is

$\tfrac{1}{3}\left(p^2 + q^2 + pq\right) + p + q$.
The eigenvalue ("anomaly coefficient") of $\hat{C}_3$ is

$\tfrac{1}{18}(p-q)(p+2q+3)(2p+q+3)$

It is an odd function under the interchange $p \leftrightarrow q$. Consequently, it vanishes for real representations with $p = q$, such as the adjoint, $D(1,1)$, i.e. both $\hat{C}_3$ and anomalies vanish for it.
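These eigenvalues can be tabulated directly from (p, q). A minimal sketch using exact rational arithmetic; the closed forms used below follow one standard convention for the overall normalization, chosen so that the triplet values 4/3 and 10/9 quoted above are reproduced:

```python
from fractions import Fraction

def casimir2(p, q):
    """Eigenvalue of the quadratic Casimir on the irrep D(p, q)."""
    return Fraction(p * p + q * q + p * q, 3) + p + q

def casimir3(p, q):
    """Eigenvalue ("anomaly coefficient") of the cubic Casimir on D(p, q)."""
    return Fraction((p - q) * (p + 2 * q + 3) * (2 * p + q + 3), 18)

print(casimir2(1, 0), casimir3(1, 0))  # 4/3 10/9  (triplet)
print(casimir3(1, 1))                  # 0  (adjoint is real: anomaly vanishes)
```

Note that the cubic eigenvalue is indeed odd under p ↔ q, so it vanishes for all self-conjugate representations.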
Representations of the SU(3) group
The irreducible representations of SU(3) are analyzed in various places, including Hall's book. Since the SU(3) group is simply connected, the representations are in one-to-one correspondence with the representations of its Lie algebra su(3), or the complexification of its Lie algebra, sl(3,C).
We label the representations as D(p,q), with p and q being non-negative integers, where in physical terms, p is the number of quarks and q is the number of antiquarks. Mathematically, the representation D(p,q) may be constructed by tensoring together p copies of the standard 3-dimensional representation and q copies of the dual of the standard representation, and then extracting an irreducible invariant subspace. (See also the section of Young tableaux below: is the number of single-box columns, "quarks", and the number of double-box columns, "antiquarks").
Still another way to think about the parameters p and q is as the maximum eigenvalues of the diagonal matrices

$h_1 = \begin{pmatrix} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0 \end{pmatrix}, \qquad h_2 = \begin{pmatrix} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & -1 \end{pmatrix}$.

(The elements $h_1$ and $h_2$ are linear combinations of the elements $\hat{T}_3$ and $\hat{Y}$, but normalized so that the eigenvalues of $h_1$ and $h_2$ are integers.)
This is to be compared to the representation theory of SU(2), where the irreducible representations are labeled by the maximum eigenvalue of a single element, h.
The representations have dimension

$d(p,q) = \tfrac{1}{2}(p+1)(q+1)(p+q+2),$
their irreducible characters are given by
and the corresponding Haar measure is
such that and ,
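The dimension of D(p, q) follows the closed form ½(p+1)(q+1)(p+q+2), so a one-line helper reproduces the familiar multiplet sizes (sketch):

```python
def dim(p, q):
    """Dimension of the SU(3) irrep D(p, q): (p+1)(q+1)(p+q+2)/2."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

# singlet, triplet, octet (adjoint), decuplet, 27-plet
print(dim(0, 0), dim(1, 0), dim(1, 1), dim(3, 0), dim(2, 2))  # 1 3 8 10 27
```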
An multiplet may be completely specified by five labels, two of which, the eigenvalues of the two Casimirs, are common to all members of the multiplet. This generalizes the mere two labels for multiplets, namely the eigenvalues of its quadratic Casimir and of 3.
Since $[\hat{T}_3, \hat{Y}] = 0$, we can
label different states by the eigenvalues of the $\hat{T}_3$ and $\hat{Y}$ operators, $|t_3, y\rangle$, for a given eigenvalue of the isospin Casimir. The actions of the ladder operators on these states are,
Here,
and
All the other states of the representation can be constructed by the successive application of the ladder operators and and by identifying the base states which are annihilated by the action of the lowering operators. These operators can be pictured as arrows whose endpoints form the vertices of a hexagon (picture for generators above).
Clebsch–Gordan coefficient for SU(3)
The product representation of two irreducible representations and is generally reducible. Symbolically,
where is an integer.
For example, two octets (adjoints) compose to
that is, their product reduces to an icosaseptet (27), decuplet, two octets, an antidecuplet, and a singlet, 64 states in all.
The right-hand series is called the Clebsch–Gordan series. It implies that the representation appears times in the reduction of this direct product of with .
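A quick dimension-count consistency check of the octet product above (sketch; `dim` uses the standard ½(p+1)(q+1)(p+q+2) formula, with the octet as D(1,1), decuplet D(3,0), anti-decuplet D(0,3) and 27-plet D(2,2)):

```python
def dim(p, q):
    """Dimension of the SU(3) irrep D(p, q)."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

# 8 (x) 8 = 27 (+) 10 (+) 10bar (+) 8 (+) 8 (+) 1
lhs = dim(1, 1) * dim(1, 1)
rhs = dim(2, 2) + dim(3, 0) + dim(0, 3) + 2 * dim(1, 1) + dim(0, 0)
print(lhs, rhs)  # 64 64
```

Both sides give the 64 states noted in the text.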
Now a complete set of operators is needed to specify uniquely the states of each irreducible representation inside the one just reduced.
The complete set of commuting operators in the case of the irreducible representation is
where
.
The states of the above direct product representation are thus completely represented by the set of operators
where the number in the parentheses designates the representation on which the operator acts.
An alternate set of commuting operators can be found for the direct product representation, if one considers the following set of operators,
Thus, the set of commuting operators includes
This is a set of nine operators only. But the set must contain ten operators to define all the states of the direct product representation uniquely. To find the last operator , one must look outside the group. It is necessary to distinguish different for similar values of and .
Thus, any state in the direct product representation can be represented by the ket,
also using the second complete set of commuting operator, we can define the states in the direct product representation as
We can drop the from the state and label the states as
using the operators from the first set, and,
using the operators from the second set.
Both these states span the direct product representation and any states in the representation can be labeled by suitable choice of the eigenvalues.
Using the completeness relation,
Here, the coefficients
are the Clebsch–Gordan coefficients.
A different notation
To avoid confusion, the eigenvalues can be simultaneously denoted by and the eigenvalues are simultaneously denoted by . Then the eigenstate of the direct product representation can be denoted by
where is the eigenvalues of and is the eigenvalues of denoted simultaneously. Here, the quantity expressed by the parenthesis is the Wigner 3-j symbol.
Furthermore, are considered to be the basis states of and are the basis states of . Also are the basis states of the product representation. Here represents the combined eigenvalues and respectively.
Thus the unitary transformations that connects the two bases are
This is a comparatively compact notation. Here,
are the Clebsch–Gordan coefficients.
Orthogonality relations
The Clebsch–Gordan coefficients form a real orthogonal matrix. Therefore,
Also, they follow the following orthogonality relations,
Symmetry properties
If an irreducible representation appears in the Clebsch–Gordan series of , then it must appear in the Clebsch–Gordan series of . Which implies,
Where
Since the Clebsch–Gordan coefficients are all real, the following symmetry property can be deduced,
Where .
Symmetry group of the 3D oscillator Hamiltonian operator
A three-dimensional harmonic oscillator is described by the Hamiltonian

$H = \tfrac{1}{2}\mathbf{p}^2 + \tfrac{1}{2}\mathbf{r}^2,$

where the spring constant, the mass and the Planck constant have been absorbed into the definition of the variables, $\hbar = m = k = 1$.
It is seen that this Hamiltonian is symmetric under coordinate rotations, which preserve the value of $\mathbf{r}^2$ (and of $\mathbf{p}^2$). Thus, any operators in the rotation group keep this Hamiltonian invariant.
More significantly, since the Hamiltonian is Hermitian, it further remains invariant under operation by elements of a much larger group.
More systematically, operators such as the ladder operators

$\hat{a}_i = \tfrac{1}{\sqrt{2}}\left(\hat{x}_i + i\hat{p}_i\right)$ and $\hat{a}_i^\dagger = \tfrac{1}{\sqrt{2}}\left(\hat{x}_i - i\hat{p}_i\right)$

can be constructed which lower and raise, respectively, the eigenvalue of the Hamiltonian operator by 1.
The operators $\hat{a}_i$ and $\hat{a}_i^\dagger$ are not hermitian; but hermitian operators can be constructed from different combinations of them, namely,

$\hat{a}_i^\dagger\hat{a}_j + \hat{a}_j^\dagger\hat{a}_i$ and $i\left(\hat{a}_i^\dagger\hat{a}_j - \hat{a}_j^\dagger\hat{a}_i\right)$.

There are nine such operators for i, j = 1, 2, 3.
The nine hermitian operators formed by the bilinear forms $\hat{a}_i^\dagger\hat{a}_j$ are controlled by the fundamental commutators

$[\hat{a}_i, \hat{a}_j^\dagger] = \delta_{ij}, \qquad [\hat{a}_i, \hat{a}_j] = [\hat{a}_i^\dagger, \hat{a}_j^\dagger] = 0,$
and seen to not commute among themselves. As a result, this complete set of operators don't share their eigenvectors in common, and they cannot be diagonalized simultaneously. The group is thus non-Abelian and degeneracies may be present in the Hamiltonian, as indicated.
The Hamiltonian of the 3D isotropic harmonic oscillator, when written in terms of the number operator $\hat{N} = \sum_i \hat{a}_i^\dagger\hat{a}_i$, amounts to

$H = \hat{N} + \tfrac{3}{2}$.
The Hamiltonian has 8-fold degeneracy. A successive application of $\hat{a}_i$ and $\hat{a}_j^\dagger$ on the left preserves the Hamiltonian invariant, since it increases $n_j$ by 1 and decreases $n_i$ by 1, thereby keeping the total
$\sum_i n_i$ constant. (cf. quantum harmonic oscillator)
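Because the bilinear operators conserve the total excitation number n = n1 + n2 + n3, all states with the same n are degenerate. A brute-force count (sketch) reproduces the closed form (n+1)(n+2)/2, which is also the dimension of the totally symmetric SU(3) irrep D(n, 0):

```python
from itertools import product

def degeneracy(n):
    """Count 3D-oscillator states (n1, n2, n3) with n1 + n2 + n3 = n."""
    return sum(1 for ns in product(range(n + 1), repeat=3) if sum(ns) == n)

degs = [degeneracy(n) for n in range(6)]
print(degs)  # [1, 3, 6, 10, 15, 21], i.e. (n+1)(n+2)/2
```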
Maximally commuting set of operators
Since the operators belonging to the symmetry group of Hamiltonian do not always form an Abelian group, a common eigenbasis cannot be found that diagonalizes all of them simultaneously. Instead, we take the maximally commuting set of operators from the symmetry group of the Hamiltonian, and try to reduce the matrix representations of the group into irreducible representations.
Hilbert space of two systems
The Hilbert space of two particles is the tensor product of the two Hilbert spaces of the two individual particles,
where and are the Hilbert space of the first and second particles, respectively.
The operators in each of the Hilbert spaces have their own commutation relations, and an operator of one Hilbert space commutes with an operator from the other Hilbert space. Thus the symmetry group of the two particle Hamiltonian operator is the superset of the symmetry groups of the Hamiltonian operators of individual particles. If the individual Hilbert spaces are -dimensional, the combined Hilbert space is -dimensional.
Clebsch–Gordan coefficient in this case
The symmetry group of the Hamiltonian is . As a result, the Clebsch–Gordan coefficients can be found by expanding the uncoupled basis vectors of the symmetry group of the Hamiltonian into its coupled basis. The Clebsch–Gordan series is obtained by block-diagonalizing the Hamiltonian through the unitary transformation constructed from the eigenstates which diagonalizes the maximal set of commuting operators.
Young tableaux
A Young tableau (plural tableaux) is a method for decomposing products of an SU(N) group representation into a sum of irreducible representations. It provides the dimension and symmetry types of the irreducible representations, which is known as the Clebsch–Gordan series. Each irreducible representation corresponds to a single-particle state and a product of more than one irreducible representation indicates a multiparticle state.
Since the particles are mostly indistinguishable in quantum mechanics, this approximately relates to several permutable particles. The permutations of identical particles constitute the symmetric group S. Every -particle state of S that is made up of single-particle states of the fundamental -dimensional SU(N) multiplet belongs to an irreducible SU(N) representation. Thus, it can be used to determine the Clebsch–Gordan series for any unitary group.
Constructing the states
Any two particle wavefunction , where the indices 1,2 represents the state of particle 1 and 2, can be used to generate states of explicit symmetry using the symmetrizing and the anti-symmetrizing operators.
where the $\hat{P}_{ij}$ are the operators that interchange the particles (exchange operators).
The following relation follows:-
thus,
Starting from a multiparticle state, we can apply and repeatedly to construct states that are:
Symmetric with respect to all particles.
Antisymmetric with respect to all particles.
Mixed symmetries, i.e. symmetric or antisymmetric with respect to some particles.
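The symmetrization construction can be sketched concretely for two particles, each carrying a 3-state (SU(3) triplet) label. The counts obtained below match the six symmetric and three antisymmetric two-particle states listed in the N = 3 case below. (Minimal NumPy sketch; the projector normalization ½(1 ± P12) is an assumption of this illustration.)

```python
import numpy as np

d = 3  # single-particle Hilbert space dimension (SU(3) triplet)

def exchange_operator(d):
    """P12 on the two-particle space C^d (x) C^d: swaps the tensor factors."""
    P = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            P[i * d + j, j * d + i] = 1.0
    return P

P12 = exchange_operator(d)
I = np.eye(d * d)
S = (I + P12) / 2.0   # projector onto symmetric two-particle states
A = (I - P12) / 2.0   # projector onto antisymmetric two-particle states

# The projector traces count the states of each symmetry type.
sym_dim = int(round(np.trace(S)))    # d(d+1)/2 = 6 symmetric states
anti_dim = int(round(np.trace(A)))   # d(d-1)/2 = 3 antisymmetric states
print(sym_dim, anti_dim)  # 6 3
```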
Constructing the tableaux
Instead of using ψ, in Young tableaux, we use square boxes (□) to denote particles and i to denote the state of the particles.
The complete set of particles are denoted by arrangements of □s, each with its own quantum number label (i).
The tableaux is formed by stacking boxes side by side and up-down such that the states symmetrised with respect to all particles are given in a row and the states anti-symmetrised with respect to all particles lies in a single column. Following rules are followed while constructing the tableaux:
A row must not be longer than the one before it.
The quantum labels (numbers in the □) should not decrease while going left to right in a row.
The quantum labels must strictly increase while going down in a column.
Case for N = 3
For N = 3 that is in the case of SU(3), the following situation arises. In SU(3) there are three labels, they are generally designated by (u,d,s) corresponding to up, down and strange quarks which follows the SU(3) algebra. They can also be designated generically as (1,2,3). For a two-particle system, we have the following six symmetry states:
{|
|- style="vertical-align:top"
| || || || || || || || || || ||
|}
and the following three antisymmetric states:
The 1-column, 3-row tableau is the singlet, and so all tableaux of nontrivial irreps of SU(3) cannot have more than two rows. The representation $D(p,q)$ has $p+q$ boxes on the top row and $q$ boxes on the second row.
Clebsch–Gordan series from the tableaux
The Clebsch–Gordan series is the expansion of the tensor product of two irreducible representations into a direct sum of irreducible representations. This decomposition can easily be found from the Young tableaux.
Procedure to obtain the Clebsch–Gordan series from Young tableaux:
The following steps are followed to construct the Clebsch–Gordan series from the Young tableaux:
Write down the two Young diagrams for the two irreps under consideration, such as in the following example. In the second figure insert a series of the letter a in the first row, the letter b in the second row, the letter c in the third row, etc. in order to keep track of them once they are included in the various resultant diagrams:
Take the first box containing an a and append it to the first Young diagram in all possible ways that follow the rules for creation of a Young diagram:
Then take the next box containing an a and do the same thing with it, except that we are not allowed to put two a's together in the same column.
The last diagram in the curly bracket contains two a's in the same column, so it must be deleted, giving:
Append the last box to the diagrams in the curly bracket in all possible ways, resulting in:
In each row, counting from right to left, if at any point the number of occurrences of a particular letter exceeds the number of occurrences of the preceding letter, the diagram must be deleted. Here the first and the third diagrams should be deleted, resulting in:
Example of Clebsch–Gordan series for SU(3)
The tensor product of a triplet with an octet reduces to a deciquintuplet (15), an anti-sextet, and a triplet; this appears diagrammatically as
a total of 24 states.
Using the same procedure, any direct product representation is easily reduced.
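As a consistency check on the example above, the standard SU(3) dimension formula dim(p, q) = (p + 1)(q + 1)(p + q + 2)/2 (a textbook result, not derived in this article) confirms that the dimensions on both sides of the series 3 x 8 = 15 + 6-bar + 3 agree:

```python
def dim_su3(p, q):
    """Dimension of the SU(3) irrep labelled (p, q)."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

triplet, octet = (1, 0), (1, 1)
# Right-hand side of the series: deciquintuplet, anti-sextet, triplet
rhs = [(2, 1), (0, 2), (1, 0)]

assert dim_su3(*triplet) == 3 and dim_su3(*octet) == 8
assert [dim_su3(p, q) for p, q in rhs] == [15, 6, 3]
# Dimensions must match on both sides of the Clebsch-Gordan series:
assert dim_su3(*triplet) * dim_su3(*octet) == sum(dim_su3(p, q) for p, q in rhs)
print("total states:", dim_su3(*triplet) * dim_su3(*octet))  # total states: 24
```

The 24 states quoted above are exactly this dimension count, 3 × 8, on the left-hand side.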
See also
Wigner D-matrix
Tensor operator
Wigner–Eckart theorem
Representation theory
Racah W-coefficient
Gell-Mann–Okubo mass formula
References
Quantum mechanics
Mathematical physics
Lie algebras
Representation theory of Lie algebras | Clebsch–Gordan coefficients for SU(3) | [
"Physics",
"Mathematics"
] | 4,075 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics",
"Quantum mechanics"
] |
44,218,028 | https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein%20theorem | In set theory, the Schröder–Bernstein theorem states that, if there exist injective functions f : A → B and g : B → A between the sets A and B, then there exists a bijective function h : A → B.
In terms of the cardinality of the two sets, this classically implies that if |A| ≤ |B| and |B| ≤ |A|, then |A| = |B|; that is, A and B are equipotent.
This is a useful feature in the ordering of cardinal numbers.
The theorem is named after Felix Bernstein and Ernst Schröder.
It is also known as the Cantor–Bernstein theorem or Cantor–Schröder–Bernstein theorem, after Georg Cantor, who first published it (albeit without proof).
Proof
The following proof is attributed to Julius König.
Assume without loss of generality that A and B are disjoint. For any a in A or b in B we can form a unique two-sided sequence of elements that are alternately in A and B, by repeatedly applying f and g⁻¹ to go from A to B and g and f⁻¹ to go from B to A (where defined; the inverses f⁻¹ and g⁻¹ are understood as partial functions).
For any particular a, this sequence may terminate to the left or not, at a point where f⁻¹ or g⁻¹ is not defined.
By the fact that and are injective functions, each a in A and b in B is in exactly one such sequence to within identity: if an element occurs in two sequences, all elements to the left and to the right must be the same in both, by the definition of the sequences. Therefore, the sequences form a partition of the (disjoint) union of A and B. Hence it suffices to produce a bijection between the elements of A and B in each of the sequences separately, as follows:
Call a sequence an A-stopper if it stops at an element of A, or a B-stopper if it stops at an element of B. Otherwise, call it doubly infinite if all the elements are distinct or cyclic if it repeats. See the picture for examples.
For an A-stopper, the function f is a bijection between its elements in A and its elements in B.
For a B-stopper, the function g is a bijection between its elements in B and its elements in A.
For a doubly infinite sequence or a cyclic sequence, either f or g will do (f is used in the picture).
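König's construction can be run directly whenever every leftward chain is finite. The sketch below (all names are invented) takes f together with the partial inverses f⁻¹ and g⁻¹, classifies the chain through a given element, and applies f or g⁻¹ accordingly; it is illustrated on A = B = {0, 1, 2, ...} with f(n) = g(n) = n + 1:

```python
def sb_map(a, f, g_inv, f_inv):
    """Map a in A to B following König's proof: trace the chain leftwards;
    if it stops in B (a B-stopper) use g^{-1}(a), otherwise (A-stopper)
    use f(a).  The partial inverses return None where undefined; the loop
    terminates only when leftward chains are finite (no cyclic or doubly
    infinite sequences, as in the example below)."""
    x, in_A = a, True
    while True:
        if in_A:
            prev = g_inv(x)
            if prev is None:           # chain stops in A: A-stopper
                return f(a)
            x, in_A = prev, False
        else:
            prev = f_inv(x)
            if prev is None:           # chain stops in B: B-stopper
                return g_inv(a)
            x, in_A = prev, True

# Example: f(n) = g(n) = n + 1, both injective but not surjective
f     = lambda n: n + 1
g_inv = lambda n: n - 1 if n >= 1 else None    # partial inverse of g
f_inv = lambda n: n - 1 if n >= 1 else None    # partial inverse of f

print([sb_map(n, f, g_inv, f_inv) for n in range(6)])   # [1, 0, 3, 2, 5, 4]
```

Here chains starting at even numbers stop in A (so f applies) and chains starting at odd numbers stop in B (so g⁻¹ applies), producing the bijection that swaps neighbouring pairs.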
Corollary for surjective pair
If we assume the axiom of choice, then a pair of surjective functions f : A → B and g : B → A also implies the existence of a bijection. We construct an injective function h : B → A by picking a single element from the inverse image of each point in B. The surjectivity of f guarantees the existence of at least one element in each such inverse image. We do the same to obtain an injective function k : A → B. The Schröder–Bernstein theorem then can be applied to the injections h and k.
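For finite sets given as dicts, the choice step amounts to picking one preimage per point; a minimal sketch with an invented helper name:

```python
def injection_from_surjection(f):
    """Given a surjection f: A -> B as a dict, return an injection B -> A
    by choosing one element from each inverse image (a finite 'choice')."""
    h = {}
    for a, b in f.items():
        h.setdefault(b, a)     # keep the first preimage encountered
    return h

surj = {0: 'x', 1: 'x', 2: 'y'}        # surjective onto {'x', 'y'}
inj = injection_from_surjection(surj)
print(inj)                             # {'x': 0, 'y': 2}
assert all(surj[inj[b]] == b for b in inj)   # inj(b) is a preimage of b
```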
Examples
Bijective function from
Note: [0, 1) is the half-open interval from 0 to 1, including the boundary 0 and excluding the boundary 1.
Let with and with the two injective functions.
In line with that procedure
Then is a bijective function from .
Bijective function from
Let with
Then for one can use the expansions and with
and now one can set which defines an injective function . (Example: )
And therefore a bijective function can be constructed with the use of and .
In this case is still easy but already gets quite complicated.
Note: Of course there is a simpler way, using the (already bijective) function definition . Then would be the empty set and for all x.
History
The traditional name "Schröder–Bernstein" is based on two proofs published independently in 1898.
Cantor is often added because he first stated the theorem in 1887, while Schröder's name is often omitted because his proof turned out to be flawed; the name of Richard Dedekind, who first proved it, is not connected with the theorem.
According to Bernstein, Cantor had suggested the name equivalence theorem (Äquivalenzsatz).
1887 Cantor publishes the theorem, however without proof.
1887 On July 11, Dedekind proves the theorem (not relying on the axiom of choice) but neither publishes his proof nor tells Cantor about it. Ernst Zermelo discovered Dedekind's proof and in 1908 he publishes his own proof based on the chain theory from Dedekind's paper Was sind und was sollen die Zahlen?
1895 Cantor states the theorem in his first paper on set theory and transfinite numbers. He obtains it as an easy consequence of the linear order of cardinal numbers. However, he could not prove the latter theorem, which is shown in 1915 to be equivalent to the axiom of choice by Friedrich Moritz Hartogs.
1896 Schröder announces a proof (as a corollary of a theorem by Jevons).
1897 Bernstein, a 19-year-old student in Cantor's Seminar, presents his proof.
1897 Almost simultaneously, but independently, Schröder finds a proof.
1897 After a visit by Bernstein, Dedekind independently proves the theorem a second time.
1898 Bernstein's proof (not relying on the axiom of choice) is published by Émile Borel in his book on functions. (Communicated by Cantor at the 1897 International Congress of Mathematicians in Zürich.) In the same year, the proof also appears in Bernstein's dissertation.
1898 Schröder publishes his proof which, however, is shown to be faulty by Alwin Reinhold Korselt in 1902 (just before Schröder's death), (confirmed by Schröder), but Korselt's paper is published only in 1911.
Both proofs of Dedekind are based on his famous 1888 memoir Was sind und was sollen die Zahlen? and derive it as a corollary of a proposition equivalent to statement C in Cantor's paper, which reads and implies . Cantor observed this property as early as 1882/83 during his studies in set theory and transfinite numbers and was therefore (implicitly) relying on the axiom of choice.
Prerequisites
The 1895 proof by Cantor relied, in effect, on the axiom of choice by inferring the result as a corollary of the well-ordering theorem. However, König's proof given above shows that the result can also be proved without using the axiom of choice.
On the other hand, König's proof uses the principle of excluded middle to draw a conclusion through case analysis. As such, the above proof is not a constructive one. In fact, in a constructive set theory such as intuitionistic set theory , which adopts the full axiom of separation but dispenses with the principle of excluded middle, assuming the Schröder–Bernstein theorem implies the latter. In turn, there is no proof of König's conclusion in this or weaker constructive theories. Therefore, intuitionists do not accept the statement of the Schröder–Bernstein theorem.
There is also a proof which uses Tarski's fixed point theorem.
See also
Myhill isomorphism theorem
Netto's theorem, according to which the bijections constructed by the Schröder–Bernstein theorem between spaces of different dimensions cannot be continuous
Schröder–Bernstein theorem for measurable spaces
Schröder–Bernstein theorems for operator algebras
Schröder–Bernstein property
Notes
References
Martin Aigner & Gunter M. Ziegler (1998) Proofs from THE BOOK, § 3 Analysis: Sets and functions, Springer books , fifth edition 2014 , sixth edition 2018
External links
Cantor-Bernstein’s Theorem in a Semiring by Marcel Crabbé.
Theorems in the foundations of mathematics
Cardinal numbers
Articles containing proofs | Schröder–Bernstein theorem | [
"Mathematics"
] | 1,594 | [
"Mathematical theorems",
"Cardinal numbers",
"Foundations of mathematics",
"Mathematical logic",
"Mathematical objects",
"Infinity",
"Numbers",
"Articles containing proofs",
"Mathematical problems",
"Theorems in the foundations of mathematics"
] |
59,167,911 | https://en.wikipedia.org/wiki/Virginia%20Cornish | Virginia Wood Cornish is the Helena Rubinstein Professor of Chemistry at Columbia University.
Background and education
Cornish received her BA in chemistry in 1991, working with professor Ronald Breslow. Her PhD research, on site-specific protein labeling and mutagenesis, was carried out with Peter Schultz. Cornish was an NSF postdoctoral fellow at MIT with Robert T. Sauer. She is the first female graduate from Columbia College to be hired to a full-time faculty position since the College became coeducational in 1983.
Research
Cornish and her lab group use the tools of systems biology, synthetic biology, and DNA encoding to produce desired chemical products from specific organismic hosts. In 2016, she was part of a notable group of genomic scientists calling for increased ethical study and self-regulation as the costs and effort of creating de novo genomes plummeted. As the "read" phase of the Human Genome Project was completed in 2004, this new effort was dubbed Genome Project-Write.
Awards
2009 – Pfizer Award in Enzyme Chemistry
2009 – Irving Sigal Young Investigator Award
2003 – Sloan Foundation Fellow
References
Year of birth missing (living people)
Living people
American women chemists
Columbia College (New York) alumni
Massachusetts Institute of Technology alumni
Synthetic biologists
Human Genome Project scientists
21st-century American women | Virginia Cornish | [
"Engineering",
"Biology"
] | 262 | [
"Human Genome Project scientists",
"Synthetic biologists",
"Synthetic biology"
] |
59,168,504 | https://en.wikipedia.org/wiki/Developmental%20bias | In evolutionary biology, developmental bias refers to a bias against or towards certain ontogenetic trajectories, which ultimately influences the direction and outcome of evolutionary change by affecting the rates, magnitudes, directions and limits of trait evolution. Historically, the term was synonymous with developmental constraint; however, the latter has more recently been interpreted as referring solely to the negative role of development in evolution.
The role of the embryo
In modern evolutionary biology, the idea of developmental bias is embedded into a current of thought called Structuralism, which emphasizes the role of the organism as a causal force of evolutionary change. In the Structuralist view, phenotypic evolution is the result of the action of natural selection on previously ‘filtered’ variation during the course of ontogeny. It contrasts with the Functionalist (also “adaptationist”, “pan-selectionist” or “externalist”) view in which phenotypic evolution results only from the interaction between the deterministic action of natural selection and variation caused by mutation.
The rationale behind the role of the organism, or more specifically the embryo, as a causal force in evolution and for the existence of bias is as follows: The traditional, neo-Darwinian, approach to explain the process behind evolutionary change is natural selection acting upon heritable variation caused by genetic mutations. However, natural selection acts on phenotypes and mutation does not in itself produce phenotypic variation, thus, there is a conceptual gap regarding the connection between a mutation and the potential change in phenotype. For a mutation to readily alter a phenotype, and hence be visible to natural selection, it has to modify the ontogenetic trajectory, a process referred to as developmental reprogramming. Some kinds of reprogramming are more likely to occur than others given the nature of the genotype–phenotype map, which determines the propensity of a system to vary in a particular direction, thus, creating a bias. In other words, the underlying architecture of the developmental systems influences the kinds of possible phenotypic outcomes.
However, developmental bias can evolve through natural selection, and both processes simultaneously influence phenotypic evolution. For example, developmental bias can affect the rate or path to an adaptive peak (high-fitness phenotype), and conversely, strong directional selection can modify the developmental bias to increase the phenotypic variation in the direction of selection.
Types of bias
Developmental constraints
Developmental constraints are limitations on phenotypic variability (or absence of variation) caused by the inherent structure and dynamics of the developmental system. Constraints are a bias against a certain ontogenetic trajectory, and consequently are thought to limit adaptive evolution.
Developmental drive
Developmental drive is the inherent natural tendency of organisms and their ontogenetic trajectories to change in a particular direction (i.e. a bias towards a certain ontogenetic trajectory). This type of bias is thought to facilitate adaptive evolution by aligning phenotypic variability with the direction of selection.
Distribution of phenotypic variation
Morphospace
The morphospace is a quantitative representation of phenotypes in a multidimensional space, where each dimension corresponds to a trait. The phenotype of each organism or species is then represented as a point in that space that summarizes the combination of values or states at each particular trait. This approach is used to study the evolution of realized phenotypes compared to those that are theoretically possible but nonexistent.
Nonrandom (anisotropic) distribution of phenotypic variation
Describing and understanding the drivers of the distribution of phenotypic variation in nature is one of the main goals in evolutionary biology. One way to study the distribution of phenotypic variation is by depicting the volume of the morphospace occupied by a set of organisms or species. Theoretically, a natural process could generate an almost evenly (quasi-stochastically) distributed pattern of phenotypes in the morphospace, given that new species tend to occupy points in the morphospace close to those of their phylogenetic relatives. However, it is now widely acknowledged that organisms are not evenly distributed along the morphospace (isotropic variation) but instead are nonrandomly distributed (anisotropic variation). In other words, there exists a discordance between the apparent (or theoretical) possible phenotypes and their actual accessibility.
Thus, some phenotypes are inaccessible (or impossible) due to the underlying architecture of the developmental trajectory, while others are accessible (or possible). However, of the possible phenotypes, some are ‘easier’ or more probable to occur than others. For example, a phenotype such as the classical figure of a dragon (i.e. a giant reptile-like creature with two pairs of limbs and an anterior pair of wings) may be impossible because in vertebrates the fore-limbs and the anterior pair of wings are homologous characters (e.g. birds and bats), and, thus, are mutually exclusive. On the other hand, if two phenotypes are possible (and equally fit), but one form of reprogramming requires only one mutation while the other requires two or more, the former will be more likely to occur (assuming that genetic mutations occur randomly).
An important distinction between structuralism and functionalism regards primarily with the interpretation of the causes of the empty regions in the morphospace (that is, the inexistent phenotypes): Under the functionalist view, empty spaces correspond to phenotypes that are both ontogenetically possible and equally probable but are eliminated by natural selection due to their low fitness. In contrast, under the structuralist view, empty spaces correspond to ontogenetically impossible or improbable phenotypes, thus, implying a bias in the types of phenotypes that can be produced assuming equal amounts of variation (genetic mutations) in both models.
Classical examples of anisotropic variation
In a classical natural example of bias it was shown that only a small proportion of all possible snail shell shapes was realized in nature and actual species were confined to discrete regions of the shell-morphospace rather than being continuously distributed. In another natural example, it was shown that soil-dwelling centipedes have an enormous variation in the number of pairs of legs, the lowest being 27 and the highest 191 pairs; however, there are no species with an even number of leg pairs, which suggests that either these phenotypes are somehow restricted during development or that there is a developmental drive into odd numbers.
A study of the polydactyl toe counts of 375 Hemingway mutants of the Maine Coon cat showed that the number of additional toes was variable (plastic) and contained a bias. The Maine Coon cat (as the basic model of the Hemingway mutants) has 18 toes in the wild. Polydactyly occurred in some cases with an unchanged number of toes (18 toes), whereby the deviation consisted of a three-jointed thumb due to the extension of the first toe. However, 20 toes were found much more frequently and then 22, 24 or 26 toes with decreasing frequency. Odd total numbers of toes on the feet were less common. There is another bias between the number of toes on the front and rear feet, and a left-right asymmetry in the number of toes. Random bistability during the development process could explain the observed bias.
Conversely, developmental abnormalities (or teratologies) have been used to understand the logic behind the mechanisms that produce variation. For example, in a wide range of animals, from fish to humans, two-headed organisms are much more common than three-headed organisms; similarly, Siamese twins theoretically could ‘fuse’ through any region in the body but the fusion occurs more frequently in the abdominal region. This trend was referred to as transpecific parallelism, suggesting the existence of profound historical rules governing the expression of abnormal forms in distantly related species.
Biased phenotypes I: Continuous variation
Developmental integration and the P-matrix
Integration or covariation among traits during development has been suggested to constrain phenotypic evolution to certain regions of the morphospace and limit adaptive evolution. These allometric changes are widespread in nature and can account for a wide variety of realized morphologies and subsequent ecological and physiological changes. Under this approach, phenotype is seen as an integrated system where each trait develops and evolves in concert with the other traits, and thus, a change in one trait affects the interacting parts in a correlated manner. The correlation between traits is a consequence of the architecture of the genotype–phenotype map, particularly the pleiotropic effects of underlying genes. This correlated change between traits can be measured and analyzed through a phenotypic variance-covariance matrix (P-matrix) which summarizes the dimensions of phenotypic variability and the main axis of variation.
Quantitative genetics and the G-matrix
Quantitative genetics is a statistical framework mainly concerned with modeling the evolution of continuous characters. Under this framework, correlation between traits could be the result of two processes: 1) natural selection acting simultaneously on several traits ensuring that they are inherited together (i.e. linkage disequilibrium), or 2) natural selection acting on one trait causing correlated change in other traits due to pleiotropic effects of genes. For a set of traits, the equation that describes the response to selection is the multivariate breeder's equation Δz = Gβ, where Δz is the vector of differences in trait means, β is a vector of selection coefficients, and G is a matrix of the additive genetic variance and covariance between traits. Thus, a population's immediate ability to respond to selection is determined by the G-matrix, in which the variance is a function of standing genetic variation, and the covariance arises from pleiotropy and linkage disequilibrium. Although the G-matrix is one of the most relevant parameters to study evolvability, the mutational matrix (M-matrix), also known as the distribution of mutational effects, has been shown to be of equivalent importance. The M-matrix describes the potential effects of new mutations on the existing genetic variances and covariances, and these effects will depend on the epistatic and pleiotropic interactions of the underlying genes. In other words, the M-matrix determines the G-matrix, and thus, the response to selection of a population. Similarly to the P-matrix, the G-matrix describes the main axis of variation.
Paths of least resistance
A general consequence of the P-matrices and G-matrices is that evolution will tend to follow the ‘path of least resistance’. In other words, if the main axis of variation is aligned with the direction of selection, covariation (genetic or phenotypic) will facilitate the rate of adaptive evolution; however, if the main axis of variation is orthogonal to the direction of selection, covariation will constraint the rate of adaptive evolution. In general, for a population under the influence of a single fitness optimum, the rate of morphological divergence (from an ancestral to a new phenotype or between pairs of species) is inversely proportional to the angle formed by the main axis of variation and the direction of selection, causing a curved trajectory through the morphospace.
From the P-matrix for a set of characters, two broadly important measures of the propensity of variation can be extracted: 1) Respondability: ability of a developmental system to change in any direction, and 2) Evolvability: ability of a developmental system to change in the direction of natural selection. In the latter, the main axis of phenotypic variation is aligned with the direction of selection. Similarly, from the G-matrix, the most important parameter that describes the propensity of variation is the lead eigenvector of G (gmax), which describes the direction of greatest additive genetic variance for a set of continuous traits within populations. For a population undergoing directional selection, gmax will bias the main direction of the trajectory.
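A small numerical sketch (the G-matrix values are illustrative, not taken from any study) of the "path of least resistance": with the multivariate breeder's equation Δz = Gβ, the same selection strength yields a much larger response when β is aligned with gmax than when it is orthogonal to it:

```python
import numpy as np

# Multivariate breeder's equation: delta_z = G @ beta
G = np.array([[1.0, 0.9],
              [0.9, 1.0]])          # strong positive covariance between two traits

# gmax: leading eigenvector of G, the main axis of additive genetic variation
eigvals, eigvecs = np.linalg.eigh(G)
gmax = eigvecs[:, np.argmax(eigvals)]

beta_aligned = np.array([1.0, 1.0]) / np.sqrt(2)    # selection along gmax
beta_orth    = np.array([1.0, -1.0]) / np.sqrt(2)   # selection across gmax

r_aligned = G @ beta_aligned
r_orth    = G @ beta_orth

# Same selection strength, very different magnitudes of response:
print(np.linalg.norm(r_aligned), np.linalg.norm(r_orth))   # ~1.9 vs ~0.1
```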
Biased phenotypes II: Properties of gene regulatory networks
Hierarchy and optimal pleiotropy
GRNs are modular, multilayered, and semi-hierarchically systems of genes and their products: each transcription factor provides multiple inputs to other genes, creating a complex array of interactions, and information regarding the timing, place and amount of gene expression generally flows from few high-level control genes through multiple intermediate genes to peripheral gene batteries that ultimately determine the fate of each cell. This type of architecture implies that high-level control genes tend to be more pleiotropic affecting multiple downstream genes, whereas intermediate and peripheral genes tend to have moderate to low pleiotropic effects, respectively.
In general, it is expected that newly arisen mutations with higher dominance and fewer pleiotropic and epistatic effects are more likely to be targets of evolution, thus, the hierarchical architecture of developmental pathways may bias the genetic basis of evolutionary change. For instance, genes within GRNs with "optimally pleiotropic" effects, that is, genes that have the most widespread effect on the trait under selection but few effects on other traits, are expected to accumulate a higher proportion of mutations that cause evolutionary change. These strategically-positioned genes have the potential to filter random genetic variation and translate it to nonrandom functionally integrated phenotypes, making adaptive variants effectively accessible to selection, and, thus, many of the mutations contributing to phenotypic evolution may be concentrated in these genes.
Neutral networks
The genotype–phenotype map perspective establishes that the way in which genotypic variation can be mapped to phenotypic variation is critical for the ability of a system to evolve. The prevalence of neutral mutations in nature implies that biological systems have more genotypes than phenotypes, and a consequence of this "many-to-few" relationship between genotype and phenotype is the existence of neutral networks. In development, neutral networks are clusters of GRNs that differ in only one interaction between two nodes (e.g. replacing transcription with suppression) and yet produce the same phenotypic outcome. In this sense, an individual phenotype within a population could be mapped to several equivalent GRNs, that together constitute a neutral network. Conversely, a GRN that differs in one interaction and causes a different phenotype is considered non-neutral. Given this architecture, the probability of mutating from one phenotype to another will depend on the number of neutral-neighbors relative to non-neutral neighbors for a particular GRN, and thus, phenotypic change will be influenced by the position of a GRN within the network and will be biased towards changes that require few mutations to reach a neighboring non-neutral GRN.
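The dependence of phenotypic change on a network's neutral neighborhood can be illustrated with a toy genotype–phenotype map (the map and all names below are invented for this sketch, not an empirical GRN model):

```python
from itertools import product

# Toy "GRN": a tuple of interaction signs (-1 repression, 0 none, +1 activation).
# The "phenotype" is a deliberately many-to-few summary of the genotype.
def phenotype(grn):
    return sum(1 for x in grn if x > 0)   # e.g. number of activating links

def neighbors(grn, states=(-1, 0, 1)):
    """All GRNs differing from grn in exactly one interaction."""
    for i, s in product(range(len(grn)), states):
        if s != grn[i]:
            yield grn[:i] + (s,) + grn[i + 1:]

grn = (1, -1, 0, 1)
p = phenotype(grn)
neutral = [n for n in neighbors(grn) if phenotype(n) == p]
non_neutral = [n for n in neighbors(grn) if phenotype(n) != p]

# The chance that a random one-step change alters the phenotype depends on
# this GRN's position in its neutral network:
frac = len(non_neutral) / (len(neutral) + len(non_neutral))
print(len(neutral), len(non_neutral), frac)   # 2 6 0.75
```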
See also
Evolvability
Speciation
References
Further reading
Ontogeny and Phylogeny (Gould, 1977)
Biased Embryos and Evolution (Arthur, 2004)
Evolution: A developmental approach (Arthur, 2010)
Homology, Genes, and Evolutionary Innovation (Wagner, 2014)
Evolution, development, and the predictable genome (Stern, 2011)
Developmental biology
Extended evolutionary synthesis | Developmental bias | [
"Biology"
] | 3,161 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
59,179,433 | https://en.wikipedia.org/wiki/He%20Jiankui | He Jiankui (; ; born 1984) is a Chinese biophysicist. He was named as the inaugural director of the Institute of Genetic Medicine at Wuchang Technical College, a private undergraduate college in Wuhan, in September 2023. Before January 2019, He served as associate professor at the Department of Biology of the Southern University of Science and Technology (SUSTech) in Shenzhen, Guangdong, China. Earning a PhD from Rice University in Texas on protein evolution, including that of CRISPR, He learned gene-editing techniques (CRISPR/Cas9) as a postdoctoral researcher at Stanford University in California.
In November 2018, He announced that he had created the first human genetically edited babies, twin girls who were born in mid-October 2018 and known by their pseudonyms, Lulu and Nana. The announcement was initially praised in the press as a major scientific advancement. But following scrutiny on how the experiment was executed, He received widespread condemnation. His research activities were suspended by the Chinese authorities on 29 November 2018, and he was fired by SUSTech on 21 January 2019. On 30 December 2019, a Chinese district court found He Jiankui guilty of illegal practice of medicine, sentencing him to three years in prison with a fine of 3 million yuan. He was released from prison in April 2022.
He was listed as one of Time 100 most influential people of 2019, in the section "Pioneers". At the same time he was variously referred to as a "rogue scientist", "China's Dr. Frankenstein", and a "mad genius".
Early life and education
He was born in Xinhua County, Loudi City, Hunan, in 1984.
He Jiankui attended the University of Science and Technology of China for undergraduate studies from 2002 to 2006, and graduated with a major in modern physics in 2006. He entered Rice University in 2007 and received a Doctor of Philosophy degree in biophysics under the supervision of Michael W. Deem in 2010.
After receiving his doctorate, Michael Deem arranged for He to work on CRISPR/Cas9 gene-editing technique as a postdoc fellow with Stephen Quake at Stanford University.
Career
In 2011, He received the Chinese Government Award for Outstanding Self-financed Students Abroad while still in the United States. Responding to an ad, He returned to China in 2012 under the city of Shenzhen's Peacock Plan and opened a lab at the Southern University of Science and Technology (SUSTech). As part of the program, he was given 1 million yuan (about US$ in 2012) in angel funding, which he used to start biotech and investment companies. He founded Direct Genomics in 2012 in Shenzhen, to develop single-molecule sequencing devices based on patents invented by Quake that had formerly been licensed by Helicos Biosciences. Direct Genomics received 40 million yuan (about US$ in 2012) in subsidies from Shenzhen, and raised hundreds of millions yuan more in private investment, but He sold his stake in 2019. He also founded Vienomics Biotech, which offers genome sequencing services for people with cancer. In 2017, He was included in the Chinese government's Thousand Talents Plan. He Jiankui's achievements were widely revered in Chinese media, including China Central Television and the People's Daily which covered his research and described him as "the founding father of third-generation genome editing" during a program celebrating the 19th National Congress of the Chinese Communist Party.
In August 2018, He met with Chinese-American doctor John Zhang to discuss plans to launch a company focused on "genetic medical tourism." The business was to target elite customers, operating out of China or Thailand. The business plans were shelved with He's detainment in November 2018.
He took an unpaid leave from SUSTech starting in February 2018, and began conducting the genome-editing clinical experiment. On 26 November 2018, he announced the birth of gene-edited human babies, Lulu and Nana. Three days later, on 29 November 2018, Chinese authorities suspended all of his research activities, saying that his work was "extremely abominable in nature" and a violation of Chinese law. In December 2018, following public outcry regarding his work, He appeared to have gone missing. SUSTech denied the widespread rumors that he had been detained. On 30 December 2019, the Shenzhen Nanshan District People's Court sentenced He Jiankui to three years in prison and a fine of three million yuan (about US$ in 2019). He Jiankui was released in April 2022 after serving the term.
Research
In 2010, at Rice University, He Jiankui and Michael W. Deem published a paper describing some details of the CRISPR protein; this paper was part of the early work on the CRISPR/Cas9 system, before it had been adopted as a gene editing tool.
In 2017, He gave a presentation at Cold Spring Harbor Laboratory describing work he did at Southern University of Science and Technology (SUSTech), in which he used CRISPR/Cas9 on mice, monkeys, and around 300 human embryos.
In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that was used with Zhong Zhong and Hua Hua – the first ever cloned monkeys – and Dolly the sheep, and the same gene-editing CRISPR/Cas9 technique allegedly used by He in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made in order to study several medical diseases.
Human gene-editing experiment
On 25 November 2018, He Jiankui first announced on YouTube that his team had successfully created the world's first genome-edited babies, Lulu and Nana. Formally presenting the work at the Second International Summit on Human Genome Editing at the University of Hong Kong (HKU) three days later, he said that the twins were born from genetically modified embryos that had been made resistant to M-tropic strains of HIV. His team recruited eight couples, each consisting of an HIV-positive father and an HIV-negative mother, through a Beijing-based HIV volunteer group called the Baihualin China League. During in vitro fertilization, the sperm was washed to remove HIV. Using CRISPR/Cas9 gene editing, they introduced the natural mutation CCR5-Δ32 into the gene CCR5, which would confer resistance to M-tropic HIV infection. The People's Daily announced the result as "a historical breakthrough in the application of gene editing technology for disease prevention".
The experiment had recruited couples who wanted to have children; in order to participate, the man had to be HIV-positive and the woman uninfected. At the time, it was not disclosed whether the clinical experiment had received appropriate ethical review from an institutional review board before it started, and it was unclear if the participants had given truly informed consent.
He Jiankui said that he edited the genomes of the embryos using CRISPR/Cas9, specifically targeting the gene CCR5, which codes for a protein that HIV-1 uses to enter cells. He was trying to create a specific mutation in the gene (CCR5 Δ32) that few people naturally have and that possibly confers innate resistance to HIV-1, as seen in the case of the Berlin Patient. He said that the girls still carried functional copies of CCR5 alongside disabled copies, given the mosaicism inherent in the present state of the art in germline editing. There are forms of HIV which use a different receptor instead of CCR5, and the work that He did could not protect the resulting children from those forms of HIV.
He Jiankui said he used a preimplantation genetic diagnosis process on the embryos that were edited, where 3 to 5 single cells were removed and the editing was checked. He said that parents were offered the choice of using edited or unedited embryos.
The twin girls were born by mid-October 2018, according to emails from He to an adviser. According to He, they appeared to be healthy in all respects. When they were born, it was unclear if there might be long-term effects from the gene-editing; He was asked about his plans to monitor the children, and pay for their care should any problems arise, and how their confidentiality and that of their parents could remain protected. The names of the children used in reports, "Lulu" and "Nana", along with the names of their parents, "Mark" and "Grace", are pseudonyms. In February 2019, his claims were reported to have been confirmed by Chinese investigators, according to NPR News.
He Jiankui also said at the Hong Kong meeting that a second mother in his clinical experiment was in the early stages of pregnancy. Although there were no official reports, the baby was expected around August 2019, and the birth was confirmed by the court verdict of 30 December, which mentioned that there were three genetically edited babies. The baby was later revealed in 2022 as Amy.
In February 2022, Chinese scientists called for building a special facility to care for and study the three children born with genetically edited genomes or 'CRISPR Babies'. They assert that errors could have occurred in the gene editing process. The scientists believe the children's genomes should be regularly sequenced and tested for 'abnormalities'. The proposal has received pushback from the international medical community citing invasion of the children's privacy and future abuses of power.
Gene therapy for rare diseases
On 10 November 2022, He announced that he was setting up a new laboratory in Beijing for research on gene therapy for rare genetic diseases, saying on Twitter: "Today, I moved in my new office in Beijing. This is the first day for Jiankui He Lab." On 24 November, he wrote: "Gene therapy in Western countries often costs millions of dollars, which makes many families fall into poverty due to illness. With the support of social philanthropists, we will overcome three to five genetic diseases within two to three years to benefit families with rare diseases." His first plan is to make a gene therapy for Duchenne muscular dystrophy that causes gradual muscle degeneration particularly in boys. He also said on a microblogging site, Weibo, that he had applied for government funding for a DNA synthesiser project, commenting: "[I will] continue the scientific research and serve the country... The biggest use of the DNA synthesiser I plan to make is for information storage. A fingernail-sized piece of synthetic DNA can store the contents of books from the entire national library."
Human gene-editing controversy
Revelation
He Jiankui's human gene-editing clinical experiment was conducted without public discussion in the scientific community. It was first made public on 25 November 2018, when Antonio Regalado published a story about the work in MIT Technology Review, based on documents that had been posted earlier that month on the Chinese clinical trials registry. He Jiankui refused to comment on whether the pregnancies had been aborted or carried to term. It was only after the story was posted that the experiment was revealed in a promotional video on YouTube by He Jiankui and, the next day, in an Associated Press report. He Jiankui had also engaged a public relations firm.
Reaction
He Jiankui's conduct was widely condemned. On 26 November, 122 Chinese scientists issued a joint statement that He's works were unethical, crazy, insane, and "a huge blow to the global reputation and development of Chinese science". Other Chinese scientists and institutions harshly criticized He; an article in Nature stated that concerns about He's conduct were "particularly acute in China, where scientists are sensitive to the country's reputation as the Wild West of biomedical research". An eminent bioethicist, Ren-zong Qiu, speaking at the Second International Summit on Human Genome Editing, commented on He's research as "a practice with the least degree of ethical justifiability and acceptability". Geneticist Eric Topol stated, "This is far too premature ... We're dealing with the operating instructions of a human being. It's a big deal." Nobel Prize-winning biologist David Baltimore considered the work "irresponsible". Developmental biologist Kathy Niakan of the Francis Crick Institute said, "If true...this would be a highly irresponsible, unethical and dangerous use of genome editing technology." Medical ethicist Julian Savulescu of the University of Oxford noted, "If true, this experiment is monstrous." Bioethicist Henry T. Greely of Stanford Law School declared, "I unequivocally condemn the experiment," and later, "He Jiankui’s experiment was, amazingly, even worse than I first thought." Nobel prize-winning biochemist Jennifer Doudna, of the University of California, Berkeley, a pioneer of the CRISPR/Cas9 technology, condemned the research. The National Institutes of Health (NIH) of United States announced a statement on 28 November 2018 signed by its Director Francis S. 
Collins, condemning He and his team for intentionally flouting international ethical norms by doing such irresponsible work, and criticizing that He's "project was largely carried out in secret, the medical necessity for inactivation of CCR5 in these infants is utterly unconvincing, the informed consent process appears highly questionable, and the possibility of damaging off-target effects has not been satisfactorily explored". NIH claims no support for the use of gene-editing technologies in human embryos. The Chinese Academy of Medical Sciences published an announcement in the journal Lancet, stating that they "are opposed to any clinical operation of human embryo genome editing for reproductive purposes in violation of laws, regulations, and ethical norms in the absence of full scientific evaluation", and condemning He for violating relevant ethical regulations and guidelines that have been clearly documented by the Chinese government. They emphasized that the "genome editing of germ cells or early embryos is still in the stage of basic research, ... scientific research institutions and researchers should not undertake clinical operations of genome editing of human germ cells for reproductive purposes, nor should they fund such research", and they will "develop and issue further operational technical and ethical guidelines as soon as possible to guide and standardise relevant research and applications according to the highest scientific and ethical standards." In April 2019, genetics experts from the Chinese Academy of Science (CAS) noted, “[We] believe there is no sound scientific reason to perform this type of gene editing on the human germline, and that the behavior of He [Jiankui] and his team represents a gross violation of both the Chinese regulations and the consensus reached by the international science community. We strongly condemn their actions as extremely irresponsible, both scientifically and ethically.”
Others were less critical of He's experiment. George Church, a geneticist at Harvard University, defended some aspects of the experiment and said gene editing for HIV resistance was "justifiable" since HIV is "a major and growing public health threat", but questioned the decision of this project to allow one of the embryos to be used in a pregnancy attempt, since the use of that embryo suggests that the researchers’ "main emphasis was on testing editing rather than avoiding this disease". Arthur Caplan, bioethicist at the New York University School of Medicine, said that engineering human genes is inevitable and, although there are concerns of creating "designer babies", medical researchers are more interested in using the technology to prevent and treat diseases, much like the type of experiments performed by He. Carl Zimmer compared the reaction to He's human gene editing experiment to the initial reactions and subsequent debate over mitochondrial replacement therapy (MRT), and the eventual regulatory approval of MRT in the United Kingdom.
Investigation
The Southern University of Science and Technology stated that He Jiankui had been on unpaid leave since February 2018, and his research was conducted outside of their campus; the university and his department said they were unaware of the research project and said it was inviting international experts to form an independent committee to investigate the incident, and would release the results to the public. Local authorities and the Chinese government also opened investigations.
As of news reported on 28 December 2018, He was sequestered in a university apartment and under guard. According to news reported on 7 January 2019, he could face severe consequences. William Hurlbut, a Stanford University neuroscientist and bioethicist, reported that he was in contact with He, who was staying in a university apartment in Shenzhen “by mutual agreement” and was free to leave, often visiting the gym and taking walks with his wife. Nonetheless, He may have been under some form of surveillance.
On 25 February 2019, some reports suggested that the Chinese government may have helped fund the CRISPR babies experiment, at least in part. Later reports showed that the funds for He's project had been raised by He himself to evade regulation, and that no Chinese government funds were involved.
Preliminary authoritative report
An investigating task force set up by the Guangdong Provincial Health Commission released a preliminary report on 21 January 2019, stating that He Jiankui had defied government bans and conducted the research in pursuit of personal fame and gain. The report confirmed that He had recruited eight couples to participate in his experiment, resulting in two pregnancies, one of which gave birth to the gene-edited twin girls in November 2018. The babies were placed under medical supervision. The report further said He had forged ethical review papers in order to enlist volunteers for the procedure, had raised his own funds while deliberately evading oversight, and had organized a team that included some overseas members to carry out the illegal project. Officials from the investigation said that He, as well as other relevant personnel and organizations, would receive punishment under relevant laws and regulations, and that those suspected of committing crimes would be charged.
Aftermath
SUSTech announced in a statement on its website on 21 January 2019 that He Jiankui had been fired.
On 30 December 2019, the Shenzhen City Nanshan District People's Court sentenced He Jiankui to three years in prison and fined him 3 million RMB (about US$). His collaborators received lighter penalties – Zhang Renli of the Guangdong Academy of Medical Sciences and Guangdong General Hospital received a two-year prison sentence and a 1-million RMB (about US$) fine, and Qin Jinzhou of the Southern University of Science and Technology an 18-month prison sentence and a 500,000 RMB (about US$) fine. The three were found guilty of having "forged ethical review documents and misled doctors into unknowingly implanting gene-edited embryos into two women."
In May 2019, lawyers in China reported, in light of the purported creation by He Jiankui of the first gene-edited humans, the drafting of regulations that anyone manipulating the human genome by gene-editing techniques would be held responsible for any related adverse consequences. In December 2019, MIT Technology Review reported an overview of the controversy to date, including excerpts of the unpublished research manuscript.
In February 2019, scientists reported that the gene modification made in Lulu and Nana likely also confers cognitive benefits. While health journalist Julia Belluz speculated in Vox that this may have been a motivation for He Jiankui to work on modifying this gene, Antonio Regalado of MIT Technology Review found no evidence that He Jiankui had interest in this area.
In 2019, the World Health Organization (WHO) launched a global registry to track research on human genome editing, following a call to halt all work on genome editing.
After the incident
On 21 February 2023, Hong Kong newspaper Ming Pao reported that He Jiankui said his application for a Hong Kong entry permit through the Top Talent Pass Scheme had been approved. Late that night, the Government of Hong Kong made a public announcement, suggesting that after inspecting the relevant applications, the Immigration Department suspected that He Jiankui had obtained a Hong Kong entry permit by making false statements. The Director of Immigration had declared He Jiankui's entry permit invalid, and a criminal investigation would be conducted.
On 8 September 2023, Wuchang Technical College (武昌理工学院), a private undergraduate college in Wuhan, Hubei, established the Institute of Genetic Medicine, with He Jiankui serving as the inaugural director.
In popular culture
He Jiankui's life and his CRISPR experiment were presented in the documentary Make People Better, released in 2022. The film described, "A Chinese scientist disappears after developing the first designer babies, shocking the world and the entire scientific community, but an investigation shows he may not have been alone in his experiment to create "better" human beings." Directed by Cody Sheehy, the expert panel included Antonio Regalado and Benjamin Hurlbut of the Arizona State University. The documentary originated from a Rhumbline Media project on genetic engineering titled Code of the Wild: The Nature of Us started in 2018 by Sheehy and Samira Kiani, a biotechnologist at Arizona State University.
His account is depicted in The CRISPR Generation: The Story of the World’s First Gene-Edited Babies, a 2019 book by Kiran Musunuru, a cardiologist at the University of Pennsylvania.
His story is narrated in the 2020 book The Mutant Project: Inside the Global Race to Genetically Modify Humans, written by Eben Kirksey, an anthropologist at the University of Oxford.
A documentary book CRISPR People: The Science and Ethics of Editing Humans, written by Henry Greely, was published in 2021.
See also
Assisted reproduction technology
Human Nature (2019 CRISPR film documentary)
Unnatural Selection (2019 TV documentary)
References
External links
(Archived) at SUSTech
Faculty profile (Archived) at SUSTech
1984 births
Living people
Biomedical engineers
Chinese geneticists
People from Loudi
Biologists from Hunan
University of Science and Technology of China alumni
Rice University alumni
Stanford University staff
Academic staff of the Southern University of Science and Technology
Genome editing
Chinese bioengineers
Educators from Hunan
Chinese prisoners and detainees
People involved in scientific misconduct incidents
Prisoners and detainees of China
Chinese eugenicists | He Jiankui | [
"Engineering",
"Biology"
] | 4,559 | [
"Genetics techniques",
"Genetic engineering",
"Genome editing"
] |
59,179,595 | https://en.wikipedia.org/wiki/T.%20S.%20R.%20Prasada%20Rao | Turaga Sundara Rama Prasada Rao (20 January 1939 – 7 April 2022) was an Indian engineer, known for his contributions in the fields of petroleum refining and heterogeneous catalysis. He was a former director of the Indian Institute of Petroleum and a former deputy general manager of the Indian Petrochemicals Corporation Limited. He was known for his studies in petrochemical engineering; his studies have been documented by way of a number of articles and Google Scholar, an online repository of scientific articles has listed 123 of them. He also co-edited a book, Recent Advances in Basic and Applied Aspects of Industrial Catalysis, published by Elsevier.
Born on 20 January 1939, Rao was an elected fellow of the Indian National Academy of Engineering and the Indian Academy of Sciences. He was also a member of the Andhra Pradesh Akademi of Sciences, the Indian Institute of Chemical Engineers, and the New York Academy of Sciences. He shared the 1996 Om Prakash Bhasin Award for Engineering with M. R. Srinivasan and C. G. Krishnadas Nair. He received the Petrotech Lifetime Achievement Award in 2004. He was also a recipient of several other honors, including the Chemtech Outstanding Scientist Award, the K. G. Naik Gold Medal, the FICCI Award, and the Technology Award of the Council of Scientific and Industrial Research.
Rao died in Hyderabad on 7 April 2022, at the age of 83.
Selected bibliography
Books
Articles
Notes
References
External links
1939 births
2022 deaths
Indian engineers
Petroleum engineers
Fellows of the Indian Academy of Sciences
Fellows of the Indian National Academy of Engineering
People from Machilipatnam | T. S. R. Prasada Rao | [
"Engineering"
] | 327 | [
"Petroleum engineers",
"Petroleum engineering"
] |
52,934,470 | https://en.wikipedia.org/wiki/Condylostoma%20nuclear%20code | The Condylostoma nuclear code (translation table 28) is a genetic code used by the nuclear genome of the heterotrich ciliate Condylostoma magnum. This code, along with translation tables 27 and 31, is remarkable in that every one of the 64 possible codons can be a sense codon. Experimental evidence suggests that translation termination relies on context, specifically proximity to the poly(A) tail. Near such a tail, PABP could help terminate the protein by recruiting eRF1 and eRF3 to prevent the cognate tRNA from binding.
The code (28)
AAs = FFLLSSSSYYQQCCWWLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG
Starts = ----------**--*--------------------M----------------------------
Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), and Valine (Val, V).
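The five aligned lines above follow the NCBI genetic-code format: reading the Base1–Base3 lines column by column yields each codon, and the corresponding character of the AAs line gives its amino acid. As a minimal sketch (the one-letter strings are transcribed by hand, so worth double-checking against the NCBI source), they can be expanded into a codon lookup in Python:

```python
# Expand the NCBI-style one-letter strings for translation table 28
# (Condylostoma nuclear) into a codon -> amino-acid lookup.
AAS   = "FFLLSSSSYYQQCCWWLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASE1 = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
BASE2 = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
BASE3 = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

TABLE_28 = {b1 + b2 + b3: aa for b1, b2, b3, aa in zip(BASE1, BASE2, BASE3, AAS)}

def translate(seq, table):
    """Translate codon by codon; table 28 needs no stop handling,
    since all 64 codons are sense codons."""
    seq = seq.upper().replace("U", "T")
    return "".join(table[seq[i:i + 3]] for i in range(0, len(seq) - 2, 3))

# The standard stop codons are all reassigned: TAA/TAG -> Gln, TGA -> Trp.
print(TABLE_28["TAA"], TABLE_28["TAG"], TABLE_28["TGA"])  # Q Q W
print(translate("ATGTAAGGATGA", TABLE_28))                # MQGW
```

In vivo, termination still occurs at UAA/UAG/UGA near the poly(A) tail, so a faithful simulator would need the context-dependent rule described above rather than a fixed table.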
Differences from the standard code
See also
List of all genetic codes: translation tables 1 to 16, and 21 to 31.
The genetic codes database.
References
Molecular genetics
Gene expression
Protein biosynthesis | Condylostoma nuclear code | [
"Chemistry",
"Biology"
] | 615 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
52,934,712 | https://en.wikipedia.org/wiki/Mesodinium%20nuclear%20code | The Mesodinium nuclear code (translation table 29) is a genetic code used by the nuclear genome of the ciliates Mesodinium and Myrionecta.
The code (29)
AAs = FFLLSSSSYYYYCC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG
Starts = --------------*--------------------M----------------------------
Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), and Valine (Val, V).
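Relative to the standard code (NCBI translation table 1), table 29 reassigns UAA and UAG from stop to tyrosine, while UGA remains a stop. A short Python sketch (both one-letter strings follow the NCBI format; the standard-code string is from table 1 and is transcribed by hand here) computes the difference mechanically:

```python
# Diff translation table 29 (Mesodinium nuclear) against the standard
# code (NCBI table 1), both given as NCBI-style one-letter strings.
STANDARD = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
TABLE_29 = "FFLLSSSSYYYYCC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASE1 = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
BASE2 = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
BASE3 = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

codons = [b1 + b2 + b3 for b1, b2, b3 in zip(BASE1, BASE2, BASE3)]
diffs = {c: (std, mes) for c, std, mes in zip(codons, STANDARD, TABLE_29)
         if std != mes}

print(diffs)  # {'TAA': ('*', 'Y'), 'TAG': ('*', 'Y')}
```

The '*' entries mark stop codons, so the diff confirms that only the two UAR codons are reassigned and UGA is untouched.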
Differences from the standard code
See also
List of all genetic codes: translation tables 1 to 16, and 21 to 31.
The genetic codes database.
References
Molecular genetics
Gene expression
Protein biosynthesis | Mesodinium nuclear code | [
"Chemistry",
"Biology"
] | 531 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
52,937,983 | https://en.wikipedia.org/wiki/Chemosynthesis%20%28nanotechnology%29 | In molecular nanotechnology, chemosynthesis is any chemical synthesis where reactions occur due to random thermal motion, a class which encompasses almost all of modern synthetic chemistry. The human-authored processes of chemical engineering are accordingly represented as biomimicry of the natural phenomena above, and the entire class of non-photosynthetic chains by which complex molecules are constructed is described as chemo-.
Chemosynthesis can be applied in many different areas of research, including the positional assembly of molecules, in which molecules are assembled at specific positions in order to perform particular types of chemosynthesis using molecular building blocks. In this case, synthesis is most efficiently performed using molecular building blocks with a small number of linkages. Unstrained molecules – those subject to minimal external stress and therefore of low internal energy – are also preferred. There are two main types of synthesis: additive and subtractive. In additive synthesis the structure starts from nothing, and molecular building blocks are gradually added until the required structure is created. In subtractive synthesis one starts with a large molecule and removes building blocks one by one until the structure is achieved.
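The additive/subtractive contrast can be made concrete with a toy, purely illustrative (and non-chemical) model in which a structure is a set of occupied lattice sites; every name and number here is invented for the illustration:

```python
# Toy model: additive synthesis builds the target up from nothing,
# subtractive synthesis carves it out of a larger starting block.
target = {(0, 0), (0, 1), (1, 0)}                      # desired structure
stock  = {(x, y) for x in range(2) for y in range(2)}  # 2x2 starting block

additive_steps    = len(target)          # blocks added one by one from nothing
subtractive_steps = len(stock - target)  # unwanted blocks removed from stock

print(additive_steps, subtractive_steps)  # 3 1
```

Which route needs fewer steps depends on how close the starting material already is to the target, mirroring the choice between the two approaches described above.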
This form of engineering is then contrasted with mechanosynthesis, a hypothetical process where individual molecules are mechanically manipulated to control reactions to human specification. Since photosynthesis and other natural processes create extremely complex molecules to the specifications contained in RNA and stored long-term in DNA form, advocates of molecular engineering claim that an artificial process can likewise exploit a chain of long-term storage, short-term storage, enzyme-like copying mechanisms similar to those in the cell, and ultimately produce complex molecules which need not be proteins. For instance, sheet diamond or carbon nanotubes could be produced by a chain of non-biological reactions that have been designed using the basic model of biology.
Use of the term chemosynthesis reinforces the view that this is feasible by pointing out that several alternate means of creating complex proteins, mineral shells of mollusks and crustaceans, etc., evolved naturally, not all of them dependent on photosynthesis and a food chain from the sun via chlorophyll. Since more than one such pathway exists to creating complex molecules, even extremely specific ones such as proteins edible to fish, the likelihood of humans being able to design an entirely new one is considered (by these advocates) to be near certainty in the long run, and possible within a generation.
Modern applications
Several methods of nanoscale chemosynthesis have been developed, a common variant of which is chemical bath deposition (CBD). This process enables large-scale synthesis of thin film layers of a variety of materials, and has been especially useful in providing such films for opto-electronics through the efficient creation of lead sulfide (PbS) films. CBD synthesis of these films allows for both cost-effective and accurate assemblies, with grain type and size as well as optical properties of the nanomaterial dictated by the properties of the surrounding bath. As such, this method of nanoscale chemosynthesis is often implemented when these properties are desired, and can be used for a wide range of nanomaterials, not just lead sulfide, due to the adjustable properties.
As explained previously, the use of chemical bath deposition allows for the synthesis of large deposits of nanofilm layers at a low cost, which is important in the mass production of cadmium sulfide (CdS). The low cost associated with the synthesis of CdS through chemical deposition has seen CdS nanoparticles applied to semiconductor-sensitized solar cells, which show improved performance in their semiconductor materials through a reduction of the band gap energy. The use of chemical deposition in particular allows the crystallite orientation of CdS to be more favourable, though the process is quite time-consuming. Research by S. A. Vanalakar in 2010 resulted in the successful production of a cadmium sulfide nanoparticle film with a thickness of 139 nm, though only after the applied films were allowed to undergo deposition for 300 minutes. As the deposition time was increased, not only did the film thickness increase, but the band gap of the resultant film decreased.
References
Nanotechnology
Chemical synthesis | Chemosynthesis (nanotechnology) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 878 | [
"Nanotechnology",
"Materials science",
"nan",
"Chemical synthesis"
] |
52,942,486 | https://en.wikipedia.org/wiki/B.%20D.%20Kulkarni | Bhaskar Dattatraya Kulkarni (5 May 1949 – 14 January 2019), popularly known as B. D. among his friends and colleagues, was an Indian chemical reaction engineer and a Distinguished Scientist of Chemical Engineering and Process Development at the National Chemical Laboratory, Pune. An INSA Senior Scientist and a J. C. Bose fellow, he was known for his work on fluidized bed reactors and chemical reactors. He is an elected fellow of the Indian Academy of Sciences, Indian National Science Academy, The World Academy of Sciences and the Indian National Academy of Engineering. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards for his contributions to Engineering Sciences in 1988.
Biography
B. D. Kulkarni was born on 5 May 1949 into a Deshastha Brahmin family in Nagpur in the western Indian state of Maharashtra. He did his schooling at New English High School and, after passing the matriculation with distinction in 1964, completed his pre-university course at Hislop College before joining the Laxminarayan Institute of Technology of Nagpur University, from where he graduated in chemical engineering in 1970. He continued there to complete his master's degree in chemical engineering in 1972 and enrolled at the National Chemical Laboratory, Pune (NCL) in 1973 for his doctoral degree under the guidance of L. K. Doraiswamy, a noted chemical engineer and Padma Bhushan recipient. He worked under Doraiswamy, who is credited with developing Organic Synthesis Engineering as a definitive scientific stream, and secured a PhD in 1978; during this time he was invited by Man Mohan Sharma, a Padma Vibhushan laureate, to join the Institute of Chemical Technology, Mumbai, but, on advice from Doraiswamy, he remained at NCL, where he would spend the rest of his career. He served the institution in various capacities as Scientist C (1979–84), Scientist EI (1984–88), Scientist EII (1988), Scientist F (1988–93) and superannuated as Scientist G in 2010. On the administration front, he served as the Deputy Director and Head of the Chemical Engineering Division. Post-retirement, he served NCL as a Distinguished Scientist and continued his research.
Career
Kulkarni's researches were mainly in the fields of Chemical Reaction Engineering, Applied Mathematics and Transport phenomena and he is known for his work on fluidized bed reactors and chemical reactors. He is credited with introducing an integer-solution approach and novel ideas on noise-induced transitions and his work on Artificial Intelligence-based evolutionary formalisms is reported to have assisted in a better understanding of reacting and reactor systems. His work spanned from conventional chemical reaction engineering in gas-liquid and gas-solid catalytic reactions to reactor stability to stochastic analysis of chemically reacting systems as well as inter-disciplinary fields. A model reaction system termed Encillator, an analytical approach for the solving model equations based on arithmetics, use of initial value formalism for modelling fluidized-bed reactors, introduction of normal form theory, evolutionary algorithms and stochastic approximation in analysing reactor behavior and performance are some of the contributions made by him. He holds US and Indian patents for several processes he has developed which include Method and an Apparatus for the Identification and/or Separation of Complex Composite Signals into its Deterministic and Noisy Components, Process for preparation of pure alkyl esters from alkali metal salt of carboxylic acid, and Enantioselective resolution process for arylpropionic acid drugs from the racemic mixture.
Kulkarni's researches have been documented in several peer-reviewed articles; the online article repository of the Indian Academy of Sciences has listed 250 of them. Besides, he contributed chapters to books edited by others and published seven edited or authored texts, including Recent Trends in Chemical Reaction Engineering, Advances in Transport Processes, The Analysis of Chemically Reacting Systems: A Stochastic Approach and Transport Processes in Fluidized Bed Reactors. He guided several master's and doctoral scholars in their studies and conducted training for students on mathematical modelling. He also served as one of the directors of Hitech Bio Sciences India Limited, a probiotics and nutraceuticals manufacturer based in Pune, and was a member of the advisory committee of the International Conference on Sustainable Development for Energy and Environment (ICSDEE 2017).
Awards and honors
The Indian National Science Academy awarded Kulkarni the Young Scientist Medal in 1981, making him the first chemical engineer to receive the honor. He received another award in 1981, the Amar Dye Chem Award of the Indian Institute of Chemical Engineers; IIChE would honor him again in 1988 with the Herdillia Award for Excellence in Basic Research in Chemical Engineering. The same year, the Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards. The National Chemical Laboratory selected him as the Best Scientist of the Year in 1992, and the year 2000 brought him two awards: the ChemTech-CEW Award of the ChemTech Foundation and the FICCI Award of the Federation of Indian Chambers of Commerce & Industry.
Kulkarni, a CSIR Distinguished Scientist at NCL, was elected as a fellow by the Maharashtra Academy of Sciences in 1988, the same year he became a fellow of the Indian Academy of Sciences. He received the elected fellowship of the Indian National Academy of Engineering and the Golden Jubilee Fellowship of the Institute of Chemical Technology (then known as the University Department of Chemical Technology, UDCT) in 1989. The Indian National Science Academy elected him as a fellow in 1990, and The World Academy of Sciences chose him as an elected fellow in 2002. When the Science and Engineering Research Board of the Department of Science and Technology selected scientists for the J. C. Bose National Fellowship in 2009, he was also included in the list of recipients. Industrial & Engineering Chemistry Research, the official journal of the American Chemical Society, issued a festschrift on him in 2009 titled the Kulkarni Issue, with the guest editorial written by his mentor, L. K. Doraiswamy; the issue featured his biosketch jointly written by Ganapati D. Yadav, V. K. Jayaraman and V. Ravikumar, all known chemical engineers.
Selective work
Books
Chapters
Patents
See also
List of chemical engineers
History of chemical engineering
Notes
References
External links
Recipients of the Shanti Swarup Bhatnagar Award in Engineering Science
1949 births
Fellows of the Indian Academy of Sciences
Fellows of the Indian National Science Academy
People from Nagpur district
TWAS fellows
Indian chemical engineers
Chemical reaction engineering
20th-century Indian inventors
Living people
Fellows of the Indian National Academy of Engineering
Indian technology writers
20th-century Indian engineers
Engineers from Maharashtra | B. D. Kulkarni | [
"Chemistry",
"Engineering"
] | 1,403 | [
"Chemical engineering",
"Chemical reaction engineering"
] |
69,976,893 | https://en.wikipedia.org/wiki/Urban%20vitality | Urban vitality is the quality of spaces in cities that attract diverse groups of people for a range of activities at different times of the day. Such spaces are often be perceived as being alive, lively or vibrant, in contrast with low-vitality areas, which may repel people and be perceived as unsafe.
The urban vitality index is a measure of this quality and has become a fundamental tool in urban planning, especially in interventions for spaces with low vitality. The index is also used to assist the management of spaces that already have high vitality. However, the success of high-vitality spaces can sometimes lead to gentrification and overtourism that may reduce their vitality and initial popularity.
The concept of urban vitality is based on the works of Jane Jacobs, especially her most influential work, The Death and Life of Great American Cities. In the 1960s, Jacobs criticized the modern and rationalist architecture of Robert Moses and Le Corbusier, whose work centered private cars. She argued that these forms of urban planning overlooked and oversimplified the complexity of human life in diverse communities. She opposed large-scale urban renewal programs that affected neighborhoods and that built freeways through inner cities. She instead advocated compact and mixed-use development with walkable streets and “eyes on the street” to deter crime.
The concept of urban vitality is important in Mediterranean urbanism and its history, in which public space, walkability and squares are valued as centers of social interaction and cohesion, in contrast to the Anglo-Saxon urbanism of large, car-centric infrastructures with greater distances between conveniences.
Conditions for high urban vitality
Urban vitality can be quantified thanks to the analysis of the elements that determine it. Among them are:
Diversity of uses of the space that can attract different types of people for diverse activities and at various times, making the space constantly occupied, improving its security.
Opportunities for personal contact, favored by blocks, buildings and open spaces that are not too large, since overly large ones reduce the number of possible intersections and social interactions.
Diversity of buildings with varied characteristics and ages, allowing people with different purchasing power to live in all areas of the city, avoiding the formation of ghettos.
High population density; residential areas are essential to attract other types of activity.
Accessibility for all people without depending on private transport, with pedestrian access being the most important, as it is the most democratic, sustainable and cheap, followed by access by bicycle and public transport.
Distance to border elements, such as large buildings, ring roads, surface train tracks or large urban parks that discourage the use of the street.
See also
References
Human ecology
Human geography
Sustainable transport
Urban planning
Urban sociology | Urban vitality | [
"Physics",
"Engineering",
"Environmental_science"
] | 544 | [
"Physical systems",
"Transport",
"Sustainable transport",
"Urban planning",
"Human ecology",
"Environmental social science",
"Human geography",
"Architecture"
] |
69,978,600 | https://en.wikipedia.org/wiki/Cyclability | Cyclability is the degree of ease of bicycle circulation. A greater degree of cyclability in cities is related, among others, to benefits for people's health, lower levels of air and noise pollution, improved fluidity of traffic or increased productivity.
Cyclability factors
Among the factors that affect cyclability are:
Safety
The safety of cycle paths is a requirement for high cyclability:
The safest roads are those that are segregated from motorized traffic (bike lanes), followed by shared paths and, finally, lanes shared with other vehicles.
The width of cycle paths should be wide enough for two bikes to cross or pass each other safely.
The visibility of the road must make it possible to anticipate possible braking and intersections, avoiding curves at right angles.
Intersections must, in turn, be well marked for both cyclists and motorized traffic.
The routes must avoid obstacles, such as lampposts or benches, as well as stretches where the bike must be carried, such as stairs; where these are unavoidable, bicycle ramps can be incorporated.
The pavement must be smooth, with lowered obstacles such as curbs, with materials that do not offer too much resistance, that drain and are not slippery when it rains.
Coherence
A coherent cycling network implies:
The cycle paths must cover the entire extension of the city, so that the bicycle can be used to go to as many destinations as possible. Ideally, there should be a cycle path within 250 meters of any point in the city.
They have to be connected to each other continuously.
There must be secure bicycle parkings both at the origin and at the destination of the routes.
The design of cycle paths must be uniform, so that all citizens can quickly perceive the use of that path, avoiding conflicts.
The routes must be correctly signposted, including the destinations offered by each of the routes.
Directness
Bicycles are driven by people's physical exercise, therefore, a highly cyclable cycling network must allow direct movement without great effort:
The routes between origins and destinations can be made in the most linear way possible, without the need to make large deviations.
The cycle paths should go through the main streets, as they are usually the ones that host the majority of shops and services.
They should avoid or minimize slopes.
Reduce the number of stops, such as traffic lights or intersections, which require greater physical effort. This may include Idaho stop, dead red, or red-light-as-yield traffic laws.
Cyclability indicators
One of the best indicators of the degree of cyclability is the balanced proportion of genders and ages that make daily use of the bicycle. Women, children and the elderly are the ones who have a greater perception of insecurity, so if a city has low cyclability, they will not consider the bicycle as a usual means of transport. On the contrary, a composition of bicycle users similar to the demographic structure will indicate a highly cyclable space.
See also
References
Cycling infrastructure
Cycling
Sustainable transport
Transport infrastructure
Transportation engineering
Transportation planning
Urban planning
Utility cycling | Cyclability | [
"Physics",
"Engineering"
] | 610 | [
"Industrial engineering",
"Physical systems",
"Transport",
"Sustainable transport",
"Transport infrastructure",
"Transportation engineering",
"Urban planning",
"Civil engineering",
"Architecture"
] |
69,986,552 | https://en.wikipedia.org/wiki/George%20W.%20Hammond | George Warren Hammond (April 4, 1833 – January 6, 1908) was an American businessman. Camp Hammond, in Yarmouth, Maine, is named for him. He was also one of its architects. Built in , it was placed on the National Register of Historic Places in 1979.
Hammond was also co-owner of Forest Paper Company, which was the largest paper mill in the world at the time of his death. The mill was also known as a pioneer in the processing of soda pulp.
Early life
Hammond was born on April 4, 1833, in Grafton, Massachusetts, to Josiah and Anna Warren. One of his siblings, William Henry (1841–1908), followed him to Maine. He worked in Portland until his death, a few months after George, at the age of 67. His body was returned to the family's hometown of Grafton for interment.
He received an honorary Master of Arts degree from Bowdoin College in 1900.
Career
After finishing school, Hammond began working at Howe & Leeds Wholesale West India Goods Store on Boston's Long Wharf. The same year, he became a clerk with J. W. Blodgett & Co.
Hammond attended the Massachusetts Institute of Technology as a special student on the chemistry of paper manufacturing.
After moving to Maine part-time, in 1853 he accepted a position at his uncle Samuel Dennis Warren's S. D. Warren Paper Mill in Cumberland Mills. By 1857, he was superintendent, a role in which remained for five years. His next position was as the mill's agent.
In 1874, Hammond and Warren bought the rights to Yarmouth Paper Company, in Yarmouth, Maine, at the town's Third Falls. They renamed it Forest Paper Company. Beginning with a single wooden building, the facility expanded to ten buildings covering as many acres, including a span over the Royal River to Factory Island. Two bridges to it were also constructed. In 1909, the year following Hammond's death, it was the largest such mill in the world, employing 275 people. Hammond also worked at the S. D. Warren mill until 1876, before transferring full-time to Yarmouth as the manager of the new business. The mill became known as a pioneer in the processing of soda pulp.
Hammond retired from active business on January 1, 1906.
Personal life
Hammond married Ellen Sarah Sophia Clarke (1833–1905), the sister-in-law of Samuel Warren, in 1874. She died in 1905; Hammond survived her by three years.
Along with New York architect Alexander Twombly, who was the engineer and draftsman of Forest Paper Company, Hammond designed what is today known as Camp Hammond, set back from Yarmouth's Main Street and from which Hammond could see his mill. Twombly also designed several buildings in Boston. Frederick Law Olmsted, who designed Central Park in New York City, designed the gardens of the property. With the Hammonds splitting their time between Boston and Yarmouth, the property became known as the Camp.
The Hammonds also formed the Antiquarian Society in order to facilitate the 1890 purchase of the North Yarmouth and Freeport Baptist Meetinghouse on Yarmouth's Hillside Street. It became a library and museum, known as the Hillside Library.
Among the many roles Hammond took on without payment was as president of the Yarmouth Water Committee, established in 1895, which sourced its water supply from Hammond Spring on the property of Forest Paper Company. Hammond donated Forest Paper Company land for the 1903 construction of Merrill Memorial Library, on Main Street, which was designed by Alexander Longfellow, a nephew of the poet Henry Wadsworth Longfellow.
Hammond served in the Maine Legislature between 1868 and 1870, was on the Maine Board of Agriculture and the board of trustees of North Yarmouth Academy, was a member of the American Association for the Advancement of Science, the Society of Chemical Industry, the American Institute of Mining, Metallurgical, and Petroleum Engineers, The Society of Arts and Crafts of Boston, the Massachusetts Historical Society, the New England Historic Genealogical Society (from January 1876), The Bostonian Society and the Franklin Institute. He was also a Freemason.
A member of the American Horticultural Society, he was a keen arborist, and his knowledge of trees and plants earned him a place on the Overseers' Committee at Harvard University's Gray Herbarium between 1888 and the time of his death.
The Hammonds were members of Yarmouth's First Parish Congregational Church and Boston's Trinity Church.
Death
Hammond died on January 6, 1908, aged 74. He is interred in Mount Auburn Cemetery, Cambridge, Massachusetts.
References
1833 births
1908 deaths
People from Grafton, Massachusetts
People from Yarmouth, Maine
19th-century American businesspeople
20th-century American businesspeople
Bowdoin College alumni
Massachusetts Institute of Technology alumni
American Freemasons
Arborists
Fellows of the American Association for the Advancement of Science
American Institute of Mining, Metallurgical, and Petroleum Engineers
Franklin Institute
New England Historic Genealogical Society | George W. Hammond | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,005 | [
"Mining engineering",
"Metallurgy",
"Petroleum engineering",
"American Institute of Mining, Metallurgical, and Petroleum Engineers"
] |
42,759,736 | https://en.wikipedia.org/wiki/Lifecycle%20Modeling%20Language | The Lifecycle Modeling Language (LML) is an open-standard modeling language designed for systems engineering. It supports the full lifecycle: conceptual, utilization, support and retirement stages. Along with the integration of all lifecycle disciplines including, program management, systems and design engineering, verification and validation, deployment and maintenance into one framework.
LML was originally designed by the LML steering committee. The specification was published October 17, 2013.
This is a modeling language like UML and SysML that supports additional project management uses such as risk analysis and scheduling. LML uses common language to define its modeling elements such as entity, attribute, schedule, cost, and relationship.
Overview
LML communicates cost, schedule and performance to all stakeholders in the system lifecycle.
LML combines the logical constructs with an ontology to capture information. SysML is mainly constructs and has a limited ontology, while the DoDAF MetaModel 2.0 (DM2) only has an ontology. LML instead simplifies both the constructs and the ontology to make them more complete, yet easier to use. There are only 12 primary entity classes. Almost all of the classes relate to each other and to themselves with consistent verbs, e.g., an Asset performs an Action, and an Action is performed by an Asset.
SysML uses object oriented design, because it was designed to relate systems thinking to software development. No other discipline in the lifecycle uses object oriented design and analysis extensively. LML captures the entire lifecycle from cradle to grave.
Systems Engineers have identified complexity as a major issue. LML is a new approach to analyzing, planning, specifying, designing, building and maintaining modern systems.
LML focuses on these 6 goals:
1. To be easy to understand
2. To be easy to extend
3. To support both functional and object oriented approaches within the same design
4. To be a language that can be understood by most system stakeholders, not just Systems Engineers
5. To support systems from cradle to grave
6. To support both evolutionary and revolutionary changes to system plans and designs over the lifetime of a system
History
The LML Steering Committee was formed in February 2013 to review a proposed draft ontology and set of diagrams that forms the LML specification. Contributors from many academic and commercial organizations provided direct input into the specification, resulting in its publication in October 2013. Presentations and tutorials were given at the National Defense Industrial Association (NDIA) Systems Engineering Conference (October 2013) and the Systems Engineering in DC (SEDC) in April 2014.
A predecessor to LML was developed by Dr. Steven H. Dam, SPEC Innovations, as part of a methodology called Knowledge-Based Analysis and Design (KBAD). The ontology portion was prototyped in a systems engineering database tool. Ideas on how to better implement it and the development of key LML diagrams (Action and Asset) were part of their Innoslate product development from 2009 to the present.
Ontology
Ontologies provide a set of defined terms and relationships between the terms to capture the information that describes the physical, functional, performance, and programmatic aspects of the system.
Common ways for describing such ontologies are "Entity", "Relationship", and "Attribute" (ERA). ERA is often used to define database schemas. LML extends the ERA schema with "Attributes on Relationship", a feature that can reduce the number of required "Relationships", in the same way that "Attribute" reduce the number of required "Entities" in ERA.
In alignment with the first goal of LML, "Entity", "Relationship", "Attribute", and "Attribute on Relationship" have equivalent English language elements: noun, verb, adjective and adverb.
Entity (noun)
An entity is defined as something that is uniquely identifiable and can exist by itself. There are only 12 parent entities in LML: Action, Artifact, Asset, Characteristic, Connection, Cost, Decision, Input/Output, Location, Risk, Statement and Time.
Several child entities have been defined to capture information that stakeholders need. The child entities have the attributes and relationships of the parents plus additional attributes and relationships that make them unique. Child entities include: Conduit (child of Connection), Logical (child of Connection), Measure (child of Characteristic), Orbital (child of Location), Physical (child of Location), Requirement (child of Statement), Resource (child of Asset), and Virtual (child of Location).
Every entity has a name or number or description attribute or combination of the three to identify it uniquely. The name is a word or small collection of words providing an overview of information about the entity.
The number provides a numerical way to identify the entity. The description provides more detail about that entity.
Attribute (adjective)
Attributes work in the same way as adjectives. Entities (the nouns) can have name, number, and description attributes. An inherent characteristic or quality of an entity is an attribute. Every attribute has a name that identifies it uniquely within an entity. Attribute names are unique within an entity, but may be used in other entities. The name provides an overview of information about the attribute. The attribute data type specifies the data associated with the attribute.
Relationship (verb)
A relationship works the same way a verb connects nouns, or in this case, entities. Relationships enable a simple method to see how entities connect. For example, when connecting an action to a statement, LML uses "traced from" as the relationship: an Action is traced from a Statement. The inverse relation of traced from is "traced to." Relationships are defined in both directions and have unique names with the same verb. The standard parent-child relationship is decomposed by, and its inverse is decomposes.
Relationship names are unique across the whole schema.
Attributes on Relationships (adverb)
Classic ERA modeling does not include "attributes on relationships", but is included in LML. In terms of the English language, an "attribute on a relationship" is like an adverb, helping to describe the relationship. Analogous to the way in which attributes relate to entities the "attribute on a relationship" has a name that is unique to its relationship, but need not be unique across other relationships.
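As a hypothetical illustration (these Python classes and the example names such as "Radar" are not part of the LML specification), the four language elements — entity/noun, attribute/adjective, relationship/verb, and attribute-on-relationship/adverb — map naturally onto plain data structures:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Noun: uniquely identifiable, described by attributes (adjectives)."""
    name: str
    number: str = ""
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    """Verb connecting two entities; may itself carry attributes (adverbs)."""
    source: Entity
    verb: str
    inverse_verb: str
    target: Entity
    attributes: dict = field(default_factory=dict)

# "Asset performs Action" / "Action performed by Asset"
radar = Entity("Radar", "A.1", {"description": "Surveillance radar"})
scan = Entity("Scan airspace", "ACT.1")
performs = Relationship(radar, "performs", "performed by", scan,
                        attributes={"rate": "every 10 s"})
```

The `rate` entry shows how an attribute on a relationship qualifies the link itself rather than either entity, which is the feature that distinguishes LML's schema from classic ERA modeling.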
List of LML Tools
Innoslate is a model-based systems engineering tool with LML support that is available on the market. Innoslate implements LML and enables translation to UML, SysML, DoDAF 2.0, and other languages.
The 3DExperience platform is an enterprise software platform that fully supports LML modeling concepts. The tool for schema modeling is "Business Modeler", and the basic tool for instance modeling based on that schema is "Matrix Navigator". The software is an evolution of MatrixOne and the Dassault Systèmes V6 platform. CAD, CAM, CAE, PDM and other PLM technology tools are provided on top of the platform.
See also
Formal specification
Functional specification
Process specification
Product design specification
Requirements analysis
Specification (technical standard)
Specification tree
References
Software requirements
Systems architecture
Systems engineering | Lifecycle Modeling Language | [
"Engineering"
] | 1,441 | [
"Systems engineering",
"Software requirements",
"Software engineering",
"Systems architecture",
"Design"
] |
42,763,563 | https://en.wikipedia.org/wiki/Schmid%27s%20law | In materials science, Schmid's law (also Schmid factor) describes the slip plane and the slip direction of a stressed material, which can resolve the most shear stress.
Schmid's law states that the critically resolved shear stress (τ) is equal to the stress applied to the material (σ) multiplied by the cosine of the angle between the stress axis and the vector normal to the glide plane (φ) and the cosine of the angle between the stress axis and the glide direction (λ). This can be expressed as:

τ = m σ = σ cos(φ) cos(λ)

where m = cos(φ) cos(λ) is known as the Schmid factor.

Both τ and σ are measured in stress units, which is calculated the same way as pressure (force divided by area), while φ and λ are angles.
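As a numerical illustration (not from the original article; the 100 MPa applied stress is an assumed example value), the resolved shear stress can be computed directly from the two angles, and the Schmid factor peaks at 0.5 when both angles are 45 degrees:

```python
import math

def schmid_factor(phi_deg: float, lambda_deg: float) -> float:
    """Schmid factor m = cos(phi) * cos(lambda), with both angles in degrees."""
    return math.cos(math.radians(phi_deg)) * math.cos(math.radians(lambda_deg))

def resolved_shear_stress(sigma: float, phi_deg: float, lambda_deg: float) -> float:
    """Resolved shear stress tau = sigma * m, in the same units as sigma."""
    return sigma * schmid_factor(phi_deg, lambda_deg)

# The Schmid factor is maximal (m = 0.5) when phi = lambda = 45 degrees.
m_max = schmid_factor(45.0, 45.0)
tau = resolved_shear_stress(100.0, 45.0, 45.0)  # 100 MPa applied -> 50 MPa resolved
```

When either angle approaches 90 degrees the factor vanishes, reflecting that no shear is resolved onto a slip system oriented perpendicular to the loading axis.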
The factor is named after Erich Schmid who coauthored a book with Walter Boas introducing the concept in 1935.
See also
Critical resolved shear stress
Notes
References
Further reading
Translation into English:
Materials science | Schmid's law | [
"Physics",
"Materials_science",
"Engineering"
] | 184 | [
"Applied and interdisciplinary physics",
"Materials science"
] |
68,510,547 | https://en.wikipedia.org/wiki/Tellurite%20tellurate | A tellurite tellurate is a chemical compound or salt that contains tellurite and tellurate anions [TeO3]2- [TeO4 ]2-. These are mixed anion compounds, meaning the compounds are cations that contain one or more anions. Some have third anions. Environmentally, tellurite [TeO3]2- is the more abundant anion due to tellurate's [TeO4 ]2- low solubility limiting its concentration in biospheric waters. Another way to refer to the anions is tellurium's oxyanions, which happen to be relatively stable.
Naming
A tellurite tellurate compound may also be called a tellurate tellurite. Compounds that contain the anions follow basic nomenclature rules: the cation is named first, followed by the anion. For the individual ions, current IUPAC naming conventions dictate that compounds containing what was conventionally known as the tellurite ion, [TeO3]2-, be named as tellurate(IV) compounds, while other tellurates are labeled tellurate(VI) compounds. Furthering the confusion, a number of other tellurate oxyanions exist, including pentoxotellurate, [TeO5]4-, and ditellurate, [Te2O10]8-. Additionally, a number of compounds that do not even include tellurium oxyanions still have "tellurate" in their names, as in the case of octafluoridotellurate, [TeF8]2-.
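The tellurate(IV) and tellurate(VI) labels follow from a simple oxidation-state count with each oxygen taken as −2; a minimal sketch (the helper function below is purely illustrative):

```python
def te_oxidation_state(n_oxygen: int, ion_charge: int) -> int:
    """Oxidation state of Te in a [TeO_n]^charge oxyanion, counting each O as -2."""
    return ion_charge + 2 * n_oxygen

tellurite_te = te_oxidation_state(3, -2)         # [TeO3]2- -> Te(+4), tellurate(IV)
tellurate_te = te_oxidation_state(4, -2)         # [TeO4]2- -> Te(+6), tellurate(VI)
pentoxotellurate_te = te_oxidation_state(5, -4)  # [TeO5]4- -> also Te(+6)
```

The last line shows why the extra oxyanions add confusion: pentoxotellurate has a different formula but the same Te(VI) oxidation state as tellurate.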
Production
One way to produce a tellurite tellurate compound is by heating oxides together. Tellurite tellurate compounds can also occur naturally as minerals such as Carlfriesite Ca[Te4+2Te6+O8].
Properties
Tellurite tellurate compounds can crystallize under certain conditions. Monoclinic and orthorhombic systems dominate the crystal structures of the tellurite tellurates. Most compounds are transparent from the near ultraviolet to the near infrared. Te-O bonds cause absorption lines in the infrared. Sodium tellurite exhibit
Related
Varying the chalcogen gives the related selenate selenites and sulfate sulfites.
List
References
Tellurates
Tellurites
Mixed anion compounds
Mixed valence compounds | Tellurite tellurate | [
"Physics",
"Chemistry"
] | 496 | [
"Matter",
"Mixed valence compounds",
"Inorganic compounds",
"Mixed anion compounds",
"Ions"
] |
60,654,187 | https://en.wikipedia.org/wiki/Twistronics | Twistronics (from twist and electronics) is the study of how the angle (the twist) between layers of two-dimensional materials can change their electrical properties. Materials such as bilayer graphene have been shown to have vastly different electronic behavior, ranging from non-conductive to superconductive, that depends sensitively on the angle between the layers. The term was first introduced by the research group of Efthimios Kaxiras at Harvard University in their theoretical treatment of graphene superlattices.
Pablo Jarillo-Herrero, Allan H. MacDonald and Rafi Bistritzer were awarded the 2020 Wolf Prize in Physics for their theoretical and experimental work on twisted bilayer graphene.
History
In 2007, National University of Singapore physicist Antonio H. Castro Neto hypothesized that pressing two misaligned graphene sheets together might yield new electrical properties, and separately proposed that graphene might offer a route to superconductivity, but he did not combine the two ideas. In 2010, researchers in Eva Andrei's laboratory at Rutgers University in Piscataway, New Jersey discovered twisted bilayer graphene through its defining moiré pattern and demonstrated that the twist angle has a strong effect on the band structure by measuring greatly renormalized van Hove singularities. Also in 2010, researchers from Federico Santa María Technical University in Chile found that for a certain angle close to 1 degree the band of the electronic structure of twisted bilayer graphene becomes completely flat, and because of that theoretical property, they suggested that collective behavior might be possible. In 2011, Allan H. MacDonald (of the University of Texas at Austin) and Rafi Bistritzer, using a simple theoretical model, found that for the previously found "magic angle" the amount of energy a free electron would require to tunnel between two graphene sheets radically changes. In 2017, the research group of Efthimios Kaxiras at Harvard University used detailed quantum mechanics calculations to reduce uncertainty in the twist angle between two graphene layers that can induce extraordinary behavior of electrons in this two-dimensional system. In 2018, Pablo Jarillo-Herrero, an experimentalist at the Massachusetts Institute of Technology, found that the magic angle resulted in the unusual electrical properties that MacDonald and Bistritzer had predicted. At 1.1 degrees rotation and at sufficiently low temperatures, electrons move from one layer to the other, creating a lattice and the phenomenon of superconductivity.
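For scale, the period of the moiré pattern at small twist angles follows the standard relation L ≈ a / (2 sin(θ/2)). The sketch below (graphene's lattice constant a ≈ 0.246 nm is an assumed input, not stated in this article) shows that a twist near the ~1.1° magic angle corresponds to a superlattice period of roughly 13 nm:

```python
import math

def moire_period_nm(a_nm: float, theta_deg: float) -> float:
    """Moire superlattice period L = a / (2 sin(theta/2)) for twist angle theta."""
    return a_nm / (2.0 * math.sin(math.radians(theta_deg) / 2.0))

L_magic = moire_period_nm(0.246, 1.1)  # roughly 13 nm at the magic angle
L_large = moire_period_nm(0.246, 5.0)  # larger twists give much shorter periods
```

The strong growth of the moiré period as the twist angle shrinks is why tiny misalignments produce superlattices tens of times larger than the atomic lattice.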
Publication of these discoveries has generated a host of theoretical papers seeking to understand and explain the phenomena as well as numerous experiments using varying numbers of layers, twist angles and other materials. Subsequent works showed that electronic properties of the stack can also be strongly dependent on heterostrain especially near the magic angle allowing potential applications in straintronics.
Characteristics
Superconduction and insulation
The theoretical predictions of superconductivity were confirmed by Pablo Jarillo-Herrero and his student Yuan Cao of MIT and colleagues from Harvard University and the National Institute for Materials Science in Tsukuba, Japan. In 2018 they verified that superconductivity existed in bilayer graphene where one layer was rotated by an angle of 1.1° relative to the other, forming a moiré pattern, at a temperature of . They created two bilayer devices that acted as an insulator instead of a conductor without a magnetic field. Increasing the field strength turned the second device into a superconductor.
A further advance in twistronics is the discovery of a method of turning the superconductive paths on and off by application of a small voltage differential.
Heterostructures
Experiments have also been done using combinations of graphene layers with other materials that form heterostructures in the form of atomically thin sheets that are held together by the weak Van der Waals force. For example, a study published in Science in July 2019 found that with the addition of a boron nitride lattice between two graphene sheets, unique orbital ferromagnetic effects were produced at a 1.17° angle, which could be used to implement memory in quantum computers. Further spectroscopic studies of twisted bilayer graphene revealed strong electron-electron correlations at the magic angle.
Electron puddling
Between 2-D layers of bismuth selenide and a dichalcogenide, researchers at Northeastern University in Boston discovered that, at specific degrees of twist, a new lattice layer consisting only of electrons would develop between the two 2-D elemental layers. The quantum and physical effects of the alignment between the two layers appear to create "puddle" regions which trap electrons into a stable lattice. Because this stable lattice consists only of electrons, it is the first non-atomic lattice observed and suggests new opportunities to confine, control, measure, and transport electrons.
Ferromagnetism
A three layer construction, consisting of two layers of graphene with a 2-D layer of boron nitride, has been shown to exhibit superconductivity, insulation and ferromagnetism. In 2021, this was achieved on a single graphene flake.
See also
Straintronics – a method for altering the properties of two-dimensional materials by introducing controlled stress
Spintronics – the study of the intrinsic spin of the electron and its associated magnetic moment in solid-state devices
Valleytronics – the study of local extrema, valleys, in the electronic band structure of semiconductors
References
Graphene
Superconductivity | Twistronics | [
"Physics",
"Materials_science",
"Engineering"
] | 1,105 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
60,655,411 | https://en.wikipedia.org/wiki/C21H24N2 | {{DISPLAYTITLE:C21H24N2}}
The molecular formula C21H24N2 (molar mass: 304.43 g/mol, exact mass: 304.1939 u) may refer to:
AVN-101
IMes
Quinupramine
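The quoted molar mass can be checked against conventional atomic weights (the reference values used below are standard figures, not taken from this article):

```python
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "N": 14.0067}  # g/mol

def molar_mass(composition: dict) -> float:
    """Molar mass of a formula given as an {element: count} mapping."""
    return sum(ATOMIC_WEIGHT[element] * count
               for element, count in composition.items())

mass = molar_mass({"C": 21, "H": 24, "N": 2})  # C21H24N2 -> about 304.43 g/mol
```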
Molecular formulas | C21H24N2 | [
"Physics",
"Chemistry"
] | 62 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
60,657,382 | https://en.wikipedia.org/wiki/Random%20cluster%20model | In statistical mechanics, probability theory, graph theory, etc. the random cluster model is a random graph that generalizes and unifies the Ising model, Potts model, and percolation model. It is used to study random combinatorial structures, electrical networks, etc. It is also referred to as the RC model or sometimes the FK representation after its founders Cees Fortuin and Piet Kasteleyn. The random cluster model has a critical limit, described by a conformal field theory.
Definition
Let G = (V, E) be a graph, and let ω : E → {0, 1} be a bond configuration on the graph that maps each edge to a value of either 0 or 1. We say that a bond is closed on edge e if ω(e) = 0, and open if ω(e) = 1. If we let η(ω) = {e ∈ E : ω(e) = 1} be the set of open bonds, then an open cluster or FK cluster is any connected component of the graph (V, η(ω)). Note that an open cluster can be a single vertex (if that vertex is not incident to any open bonds).
Suppose each edge is open independently with probability p and closed otherwise; then this is just the standard Bernoulli percolation process. The probability measure of a configuration ω is given as

P(ω) = ∏_{e ∈ E} p^{ω(e)} (1 − p)^{1 − ω(e)}.
The RC model is a generalization of percolation, where each cluster is weighted by a factor of q. Given a configuration ω, we let C(ω) be the number of open clusters, or alternatively the number of connected components formed by the open bonds. Then for any q > 0, the probability measure of a configuration ω is given as

μ(ω) = (1/Z) q^{C(ω)} ∏_{e ∈ E} p^{ω(e)} (1 − p)^{1 − ω(e)}.
Z is the partition function, or the sum over the unnormalized weights of all configurations,

Z = Σ_ω q^{C(ω)} ∏_{e ∈ E} p^{ω(e)} (1 − p)^{1 − ω(e)}.
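The definition can be checked by brute force on a tiny graph: enumerate all bond configurations and accumulate the weights q^{C(ω)} ∏ p^{ω(e)} (1 − p)^{1 − ω(e)}. The sketch below does this with union-find; the function names are our own, not a standard API:

```python
from itertools import product

def open_clusters(n, edges, omega):
    """Number of connected components ("open clusters") formed by the open
    bonds, counting isolated vertices as their own clusters (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for (u, v), bond in zip(edges, omega):
        if bond:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    return len({find(x) for x in range(n)})

def rc_partition_function(n, edges, p, q):
    """Z = sum over bond configurations of q^C(omega) * prod_e p^omega(e) (1-p)^(1-omega(e))."""
    Z = 0.0
    for omega in product((0, 1), repeat=len(edges)):
        weight = q ** open_clusters(n, edges, omega)
        for bond in omega:
            weight *= p if bond else 1.0 - p
        Z += weight
    return Z

# On a triangle, q = 1 collapses to Bernoulli percolation, where Z = 1.
triangle = [(0, 1), (1, 2), (0, 2)]
z1 = rc_partition_function(3, triangle, 0.3, 1.0)
```

At p = 0 only the all-closed configuration survives (Z = q^n), and at p = 1 only the all-open one (Z = q for a connected graph), which gives quick consistency checks.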
The partition function of the RC model is a specialization of the Tutte polynomial, which itself is a specialization of the multivariate Tutte polynomial.
Special values of q
The parameter q of the random cluster model can take arbitrary complex values. This includes the following special cases:
q → 0: linear resistance networks.
q < 1: negatively-correlated percolation.
q = 1: Bernoulli percolation, with Z = 1.
q = 2: the Ising model.
q = 2, 3, 4, ...: the q-state Potts model.
Edwards-Sokal representation
The Edwards-Sokal (ES) representation of the Potts model is named after Robert G. Edwards and Alan D. Sokal. It provides a unified representation of the Potts and random cluster models in terms of a joint distribution of spin and bond configurations.
Let G = (V, E) be a graph, with the number of vertices being n = |V| and the number of edges being m = |E|. We denote a spin configuration as σ ∈ {1, ..., q}^V and a bond configuration as ω ∈ {0, 1}^E. The joint measure of (σ, ω) is given as

μ(σ, ω) = Z⁻¹ ψ(σ) φ_p(ω) 1_A(σ, ω),

where ψ is the uniform measure on spin configurations, φ_p is the product measure with density p on bond configurations, and Z is an appropriate normalizing constant. Importantly, the indicator function 1_A of the set

A = {(σ, ω) : σ_x = σ_y for every open edge e = ⟨x, y⟩}

enforces the constraint that a bond can only be open on an edge if the adjacent spins are of the same state, also known as the SW rule.
The statistics of the Potts spins can be recovered from the cluster statistics (and vice versa), thanks to the following features of the ES representation:
The marginal measure of the spins is the Boltzmann measure of the q-state Potts model at inverse temperature β, with p = 1 − e^(−β).
The marginal measure of the bonds is the random-cluster measure with parameters q and p.
The conditional measure of the spins represents a uniformly random assignment of spin states that are constant on each connected component of the bond arrangement ω.
The conditional measure of the bonds represents a percolation process (of ratio p) on the subgraph of G formed by the edges where adjacent spins are aligned.
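The two conditional measures just listed are exactly the two halves of a Swendsen–Wang update: draw bonds given spins, then redraw spins given bonds. A minimal sketch, with our own naming and assuming integer spin states 0..q−1:

```python
import random

def sw_step(n, edges, sigma, p, q, rng):
    """One Swendsen-Wang update of a q-state spin configuration sigma."""
    # Conditional bond measure: open each edge between aligned spins
    # independently with probability p (the SW rule forbids the rest).
    omega = [1 if sigma[u] == sigma[v] and rng.random() < p else 0
             for (u, v) in edges]
    # Conditional spin measure: union-find the open bonds, then assign a
    # uniform random spin state to each connected component.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (u, v), bond in zip(edges, omega):
        if bond:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    new_spin = {}
    sigma_new = []
    for x in range(n):
        r = find(x)
        if r not in new_spin:
            new_spin[r] = rng.randrange(q)
        sigma_new.append(new_spin[r])
    return sigma_new, omega
```

By construction, every open bond joins vertices that end up with the same spin, so each sweep alternates between valid points of the joint ES measure.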
In the case of the Ising model, the probability that two vertices x and y are in the same connected component of the bond arrangement ω equals the two-point correlation function of the spins σ_x and σ_y, written ⟨σ_x σ_y⟩.
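This identity can be verified exhaustively on a tiny graph. The sketch below compares the ±1-spin Ising two-point function at inverse temperature β with the FK connectivity at q = 2 and p = 1 − e^(−2β) (the standard coupling for that spin convention); all names are illustrative:

```python
import math
from itertools import product

def roots(n, edges, omega):
    """Union-find roots of every vertex under the open bonds of omega."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (u, v), bond in zip(edges, omega):
        if bond:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    return [find(x) for x in range(n)]

def ising_two_point(n, edges, beta, x, y):
    """<sigma_x sigma_y> for +-1 spins with Boltzmann weight exp(beta * sum sigma_u sigma_v)."""
    num = Z = 0.0
    for sigma in product((-1, 1), repeat=n):
        w = math.exp(beta * sum(sigma[u] * sigma[v] for u, v in edges))
        Z += w
        num += w * sigma[x] * sigma[y]
    return num / Z

def fk_connectivity(n, edges, p, x, y):
    """Probability that x and y lie in the same FK cluster, under the q = 2 RC measure."""
    num = Z = 0.0
    for omega in product((0, 1), repeat=len(edges)):
        r = roots(n, edges, omega)
        w = 2.0 ** len(set(r))
        for bond in omega:
            w *= p if bond else 1.0 - p
        Z += w
        if r[x] == r[y]:
            num += w
    return num / Z
```

On a single edge both sides reduce to tanh(β), which makes a convenient closed-form check.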
Frustration
There are several complications of the ES representation once frustration is present in the spin model (e.g. the Ising model with both ferromagnetic and anti-ferromagnetic couplings in the same lattice). In particular, there is no longer a correspondence between the spin statistics and the cluster statistics, and the correlation length of the RC model will be greater than the correlation length of the spin model. This is the reason behind the inefficiency of the SW algorithm for simulating frustrated systems.
Two-dimensional case
If the underlying graph G is a planar graph, there is a duality between the random cluster models on G and on the dual graph G*. At the level of the partition function, the duality relates Z_G(p, q) to Z_{G*}(p*, q), up to an explicit prefactor, where the dual edge parameter p* satisfies p*/(1 − p*) = q(1 − p)/p.
On a self-dual graph such as the square lattice, a phase transition can only occur at the self-dual coupling p = √q/(1 + √q).
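Taking the duality map on edge parameters in the form p*/(1 − p*) = q(1 − p)/p (stated here as an assumption for illustration), the self-dual coupling √q/(1 + √q) is its fixed point, and the map is an involution:

```python
import math

def dual_p(p, q):
    """Solve p*/(1 - p*) = q(1 - p)/p for the dual edge parameter p*."""
    return q * (1.0 - p) / (p + q * (1.0 - p))

def self_dual_point(q):
    """The coupling where p equals its own dual: sqrt(q)/(1 + sqrt(q))."""
    return math.sqrt(q) / (1.0 + math.sqrt(q))
```

At q = 1 this recovers the percolation self-dual point p = 1/2 on the square lattice.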
The random cluster model on a planar graph can be reformulated as a loop model on the corresponding medial graph. For a configuration of the random cluster model, the corresponding loop configuration is the set of self-avoiding loops that separate the clusters from the dual clusters. In the transfer matrix approach, the loop model is written in terms of a Temperley-Lieb algebra with parameter δ = √q. In two dimensions, the random cluster model is therefore closely related to the O(n) model, which is also a loop model.
In two dimensions, the critical random cluster model is described by a conformal field theory with the central charge c = 1 − 6/(m(m + 1)), where m is determined by √q = 2 cos(π/(m + 1)).
Known exact results include the conformal dimensions of the fields that detect whether a point belongs to an FK cluster or a spin cluster. In terms of Kac indices, these conformal dimensions are respectively and , corresponding to the fractal dimensions and of the clusters.
History and applications
RC models were introduced in 1969 by Fortuin and Kasteleyn, mainly to solve combinatorial problems. After their founders, they are sometimes referred to as FK models. In 1971 they used the model to obtain the FKG inequality. After 1987, interest in the model and its applications in statistical physics reignited. It became the inspiration for the Swendsen–Wang algorithm describing the time-evolution of Potts models. Michael Aizenman and coauthors used it to study the phase boundaries in 1D Ising and Potts models.
See also
Tutte polynomial
Ising model
Random graph
Swendsen–Wang algorithm
FKG inequality
References
External links
Random-Cluster Model – Wolfram MathWorld
Graph theory
Random graphs
Percolation theory
Statistical mechanics | Random cluster model | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,233 | [
"Physical phenomena",
"Phase transitions",
"Discrete mathematics",
"Percolation theory",
"Graph theory",
"Combinatorics",
"Mathematical relations",
"Random graphs",
"Statistical mechanics"
] |
64,173,194 | https://en.wikipedia.org/wiki/The%20First%20TV | The First, also called The First TV and stylized as The F1rst, is a conservative opinion and commentary network in the United States started in October 2019. It has five hosts, including Bill O'Reilly.
History
The First was launched in October 2019 on Pluto TV, a streaming platform owned by Paramount Global. It was started in partnership with Red Seat Ventures. It offers about 45 hours of original programming a week. In January 2023, The First was added to DirecTV, after DirecTV concurrently dropped Newsmax TV over its demands for carriage fees.
Hosts
The First launched with two hosts in October 2019, combat veteran Jesse Kelly and former CIA analyst Buck Sexton. In January 2020, the network added California-based talk radio host Mike Slater and Dana Loesch. On June 1, 2020, the network announced that Bill O'Reilly was joining the network with his show No Spin News. He began the online show in 2017 after being fired from Fox News Channel, in the wake of The New York Times publishing details of six sexual misconduct lawsuits O'Reilly had settled. Former OANN host, CPAC speaker, and conservative podcaster Liz Wheeler was added to the network in January 2023.
Josh Hammer hosts America on Trial with Josh Hammer, a legal podcast primarily focused on the 2024 United States presidential election.
Mike Baker hosts a podcast called The President's Daily Brief.
Reception
Tyler Hersko of IndieWire criticized ViacomCBS for their involvement in O'Reilly's show, commenting that its Pluto TV debut coincided with the date that its entertainment and youth channels were made unavailable for eight minutes and 46 seconds in solidarity with Black Lives Matter. Hersko found this hypocritical in light of comments made by O'Reilly about African-Americans. A petition by ViacomCBS employees urged the company to remove The First for similar reasons.
References
External links
Official website
Conservative television in the United States
2019 establishments in the United States
Bill O'Reilly (political commentator)
Streaming media systems | The First TV | [
"Technology"
] | 403 | [
"Streaming media systems",
"Telecommunications systems",
"Computer systems"
] |
64,178,230 | https://en.wikipedia.org/wiki/Patrick%20H.%20Diamond | Patrick Henry Diamond is an American theoretical plasma physicist. He is currently a professor at the University of California, San Diego, and a director of the Fusion Theory Institute at the National Fusion Research Institute in Daejeon, South Korea, where the KSTAR Tokamak is operated.
In 2011, Diamond was jointly awarded the Hannes Alfvén Prize with Akira Hasegawa and Kunioki Mima for important contributions to the theory of turbulent transport in plasmas. In addition to applications in controlled nuclear fusion, he also specializes in astrophysical plasmas.
Early life and career
Diamond was raised in the Bay Ridge section of Brooklyn, NY. He graduated from St. Anselm's Elementary School and Xavierian High School, both located in Bay Ridge, Brooklyn, NY.
Diamond received his Ph.D. in 1979 from the Massachusetts Institute of Technology.
Honors and awards
In 1986, he was elected a fellow of the American Physical Society. In 1988, he became a Sloan Research Fellow.
In 2011, Diamond was awarded the Hannes Alfvén Prize by the European Physical Society for "laying the foundations of modern numerical transport simulations and key contributions on self-generated zonal flows and flow shear decorrelation mechanisms which form the basis of modern turbulence in plasmas".
Publications
References
American plasma physicists
Fellows of the American Physical Society
Living people
Plasma physicists
Massachusetts Institute of Technology alumni
Sloan Research Fellows
Year of birth missing (living people) | Patrick H. Diamond | [
"Physics"
] | 287 | [
"Plasma physicists",
"Plasma physics"
] |
64,179,842 | https://en.wikipedia.org/wiki/Metallization%20pressure | Metallization pressure is the pressure required for a non-metallic chemical element to become a metal. Every material is predicted to turn into a metal if the pressure is high enough, and temperature low enough. Some of these pressures are beyond the reach of diamond anvil cells, and are thus theoretical predictions. Neon has the highest metallization pressure for any element.
The value for phosphorus refers to pressurizing black phosphorus. The value for arsenic refers to pressurizing metastable black arsenic; grey arsenic, the standard state, is already a metallic conductor at standard conditions. No value is known or theoretically predicted for astatine and radon.
See also
Metal–insulator transition
Metallic hydrogen
Nonmetallic material
References
Physical chemistry
Allotropes | Metallization pressure | [
"Physics",
"Chemistry"
] | 152 | [
"Periodic table",
"Applied and interdisciplinary physics",
"Properties of chemical elements",
"Allotropes",
"Materials",
"nan",
"Physical chemistry",
"Matter"
] |