id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
36,972,445 | https://en.wikipedia.org/wiki/Non-dimensionalization%20and%20scaling%20of%20the%20Navier%E2%80%93Stokes%20equations | In fluid mechanics, non-dimensionalization of the Navier–Stokes equations is the conversion of the Navier–Stokes equation to a nondimensional form. This technique can ease the analysis of the problem at hand, and reduce the number of free parameters. Small or large sizes of certain dimensionless parameters indicate the importance of certain terms in the equations for the studied flow. This may provide possibilities to neglect terms in (certain areas of) the considered flow. Further, non-dimensionalized Navier–Stokes equations can be beneficial if one is posed with similar physical situations – that is problems where the only changes are those of the basic dimensions of the system.
Scaling of the Navier–Stokes equation refers to the process of selecting the proper spatial scales – for a certain type of flow – to be used in the non-dimensionalization of the equation. Since the resulting equations need to be dimensionless, a suitable combination of parameters and constants of the equations and flow (domain) characteristics has to be found. As a result of this combination, the number of parameters to be analyzed is reduced and the results may be obtained in terms of the scaled variables.
Need for non-dimensionalization and scaling
In addition to reducing the number of parameters, the non-dimensionalized equation helps to gain greater insight into the relative size of the various terms present in the equation.
Appropriate selection of scales for the non-dimensionalization process leads to the identification of small terms in the equation. Neglecting the smaller terms against the bigger ones allows for the simplification of the situation. For the case of flow without heat transfer, the non-dimensionalized Navier–Stokes equation depends only on the Reynolds number, and hence all physical realizations of the related experiment will have the same value of the non-dimensionalized variables for the same Reynolds number.
Scaling helps provide a better understanding of the physical situation as the dimensions of the parameters involved in the equation are varied. This allows experiments to be conducted on smaller-scale prototypes, provided that any physical effects which are not included in the non-dimensionalized equation are unimportant.
Non-dimensionalized Navier–Stokes equation
The incompressible Navier–Stokes momentum equation is written as:

∂u/∂t + (u · ∇)u = −(1/ρ)∇p + ν∇²u + g

where ρ is the density, p is the pressure, ν is the kinematic viscosity, u is the flow velocity, and g is the body acceleration field.
The above equation can be non-dimensionalized through selection of appropriate scales as follows:
{| class="wikitable"
|-
! scope="col" style="width:200px;"| Scale
! scope="col" style="width:200px;"| Dimensionless variable
|-
| Length L
| x* = x/L and ∇* = L∇
|-
| Flow velocity U
| u* = u/U
|-
| Time L/U
| t* = tU/L
|-
| Pressure: there is no natural selection for the pressure scale.
| p* = p/(ρU²) where dynamic effects are dominant, i.e. high velocity flows
p* = pL/(μU) where viscous effects are dominant, i.e. creeping flows
|}
Substituting these scales, the non-dimensionalized equation obtained is:

∂u*/∂t* + (u* · ∇*)u* = −∇*p* + (1/Fr²)ĝ + (1/Re)∇*²u*

where Fr = U/√(gL) is the Froude number and Re = UL/ν is the Reynolds number.
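As a quick illustration of these scales, the following sketch (all numerical values are illustrative assumptions for a water flow, not taken from the article) computes the Reynolds and Froude numbers and a few of the dimensionless variables from the table:

```python
# Illustrative non-dimensionalization of a water-channel flow.
L = 0.5        # length scale [m]
U = 2.0        # velocity scale [m/s]
nu = 1.0e-6    # kinematic viscosity of water [m^2/s]
g = 9.81       # gravitational acceleration [m/s^2]
rho = 1000.0   # density [kg/m^3]

Re = U * L / nu               # Reynolds number: inertia vs. viscous forces
Fr = U / (g * L) ** 0.5       # Froude number: inertia vs. gravity

# Dimensionless variables for a sample point in the flow
x, t, u, p = 0.25, 0.1, 1.3, 4.0e3
x_star = x / L                # length scaled by L
t_star = t * U / L            # time scaled by L/U
u_star = u / U                # velocity scaled by U
p_star = p / (rho * U**2)     # dynamic pressure scale (high-velocity flows)

print(f"Re = {Re:.3g}, Fr = {Fr:.3g}")
```

For these values Re is of order 10⁶, so the viscous term carries a factor of about 10⁻⁶ in the non-dimensionalized equation and can often be neglected outside boundary layers.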
Flows with large viscosity
For flows where viscous forces are dominant, i.e. slow flows with large viscosity, a viscous pressure scale μU/L is used. In the absence of a free surface, the equation obtained is

Re [∂u*/∂t* + (u* · ∇*)u*] = −∇*p* + ∇*²u*
Stokes regime
Scaling of the above equation can be done in a flow where the inertia term is smaller than the viscous term, i.e. when Re → 0 the inertia terms can be neglected, leaving the equation of creeping motion:

0 = −∇*p* + ∇*²u*

Such flows tend to have influence of viscous interaction over large distances from an object. At low Reynolds number the same equation reduces to a diffusion equation, named the unsteady Stokes equation:

∂u*/∂t* = −∇*p* + ∇*²u*
Euler regime
Similarly, if Re → ∞, i.e. when the inertia forces dominate, the viscous contribution can be neglected. The non-dimensionalized Euler equation for an inviscid flow is

∂u*/∂t* + (u* · ∇*)u* = −∇*p* + (1/Fr²)ĝ
When density varies due to both concentration and temperature
Density variation due to both concentration and temperature is an important field of study in double diffusive convection. If density changes due to both temperature and salinity are taken into account, then some more terms are added to the z-component of momentum as follows:
where S is the salinity of the fluid, βT is the thermal expansion coefficient at constant pressure, and βS is the coefficient of saline expansion at constant pressure and temperature.
Non-dimensionalizing using the scales:
and
we get
where ST and TT denote the salinity and temperature at the top layer, SB and TB denote the salinity and temperature at the bottom layer, Ra is the Rayleigh number, and Pr is the Prandtl number. The signs of RaS and RaT will change depending on whether they stabilize or destabilize the system.
References
Footnotes
Other
T. Cebeci, J. Shao, F. Kafyeke and E. Laurendeau, Computational Fluid Dynamics for Engineers, Springer, 2005
C. Pozrikidis, Fluid Dynamics: Theory, Computation, and Numerical Simulation, Kluwer Academic Publishers, 2001
Y. Cengel and J. Cimbala, Fluid Mechanics: Fundamentals and Applications, 4th Edition, McGraw-Hill Education, 2018 (see p. 521, section 10.2, Nondimensionalized Equations of Motion).
Further reading
This book contains several examples of different non-dimensionalizations and scalings of the Navier–Stokes equations, see p. 430.
Fluid mechanics
Dimensional analysis
Equations of fluid dynamics | Non-dimensionalization and scaling of the Navier–Stokes equations | [
"Physics",
"Chemistry",
"Engineering"
] | 1,142 | [
"Equations of fluid dynamics",
"Dimensional analysis",
"Equations of physics",
"Civil engineering",
"Mechanical engineering",
"Fluid mechanics",
"Fluid dynamics"
] |
36,976,910 | https://en.wikipedia.org/wiki/Green%27s%20function%20number | In mathematical heat conduction, the Green's function number is used to uniquely categorize certain fundamental solutions of the heat equation to make existing solutions easier to identify, store, and retrieve.
Background
Numbers have long been used to identify types of boundary conditions. The Green's function number system was proposed by Beck and Litkouhi in 1988 and has seen increasing use since then. The number system has been used to catalog a large collection of Green's functions and related solutions.
Although the examples given below are for the heat equation, this number system applies to any phenomena described by differential equations such as diffusion, acoustics, electromagnetics, fluid dynamics, etc.
Notation
The Green's function number specifies the coordinate system and the type of boundary conditions that a Green's function satisfies. The Green's function number has two parts, a letter designation followed by a number designation. The letter(s) designate the coordinate system, while the numbers designate the type of boundary conditions that are satisfied.
Some of the designations for the Green's function number system are given next. Coordinate system designations include: X, Y, and Z for Cartesian coordinates; R, Z, φ for cylindrical coordinates; and RS, φ, θ for spherical coordinates.
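The letter/number structure described above is regular enough to be parsed mechanically. The following sketch (a hypothetical helper written for illustration, not part of any published library) splits a Green's function number into its coordinate and boundary-condition designations:

```python
import re

# Map boundary-condition digits to their conventional names.
BOUNDARY_TYPES = {
    "0": "zeroth (no physical boundary / boundedness)",
    "1": "Dirichlet (prescribed temperature)",
    "2": "Neumann (prescribed heat flux)",
    "3": "Robin (convective)",
}

def parse_gf_number(gf: str):
    """Return a list of (coordinate, [boundary descriptions]) tuples.

    'RS' is matched before single letters so spherical-radial numbers
    such as RS02 are not misread as R followed by S.
    """
    parts = re.findall(r"(RS|[XYZRφθ])(\d+)", gf)
    return [(coord, [BOUNDARY_TYPES[d] for d in digits])
            for coord, digits in parts]

for coord, bcs in parse_gf_number("X10Y20"):
    print(coord, bcs)
```

For example, `parse_gf_number("X10Y20")` yields one entry for the X coordinate (Dirichlet, then zeroth) and one for the Y coordinate (Neumann, then zeroth), matching the two-dimensional example discussed below.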
Designations for several boundary conditions are given in Table 1. The zeroth boundary condition is important for identifying the presence of a coordinate boundary where no physical boundary exists, for example, far away in a semi-infinite body or at the center of a cylindrical or spherical body.
Examples in Cartesian coordinates
X11
As an example, number X11 denotes the Green's function that satisfies the heat equation in the domain (0 < x < L) for boundary conditions of type 1 (Dirichlet) at both boundaries x = 0 and x = L. Here X denotes the Cartesian coordinate and 11 denotes the type 1 boundary condition at both sides of the body. The boundary value problem for the X11 Green's function is given by

∂²G/∂x² + (1/α) δ(x − x′) δ(t − τ) = (1/α) ∂G/∂t

with G = 0 at x = 0 and at x = L, and G = 0 for t < τ.
Here α is the thermal diffusivity (m²/s) and δ is the Dirac delta function.
This GF is developed elsewhere.
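The X11 GF has a standard eigenfunction-series form, G = (2/L) Σₙ sin(nπx/L) sin(nπx′/L) exp(−n²π²αt/L²) (taking τ = 0). A minimal sketch evaluating a truncated series:

```python
import math

def gf_x11(x, xp, t, alpha, L, n_terms=200):
    """Truncated eigenfunction series for the X11 Green's function
    (Dirichlet boundaries at x = 0 and x = L, source released at t = 0)."""
    s = 0.0
    for n in range(1, n_terms + 1):
        k = n * math.pi / L                 # eigenvalue of the n-th sine mode
        s += (math.sin(k * x) * math.sin(k * xp)
              * math.exp(-alpha * k * k * t))
    return 2.0 / L * s
```

The series converges quickly for αt/L² not too small; for very early times many more terms (or the small-time image-source form) would be needed.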
X20
As another Cartesian example, number X20 denotes the Green's function in the semi-infinite body (0 < x < ∞) with a Neumann (type 2) boundary at x = 0. Here X denotes the Cartesian coordinate, 2 denotes the type 2 boundary condition at x = 0, and 0 denotes the zeroth type boundary condition (boundedness) as x → ∞. The boundary value problem for the X20 Green's function is given by
This GF is published elsewhere.
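The X20 GF can be written down by the method of images: the free-space (X00) source at x′ plus an equal-sign image at −x′, which enforces zero flux across the Neumann boundary at x = 0. A short sketch:

```python
import math

def gf_x20(x, xp, t, alpha):
    """X20 Green's function by the method of images: a source at x' plus an
    equal-sign image at -x', giving zero heat flux across x = 0."""
    c = 1.0 / math.sqrt(4.0 * math.pi * alpha * t)
    return c * (math.exp(-(x - xp) ** 2 / (4 * alpha * t))
                + math.exp(-(x + xp) ** 2 / (4 * alpha * t)))
```

Because the image has the same sign as the source, the function is even in x, so ∂G/∂x vanishes at x = 0 as the type 2 boundary requires (a type 1 boundary would instead use an opposite-sign image).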
X10Y20
As a two-dimensional example, number X10Y20 denotes the Green's function in the quarter-infinite body (0 < x < ∞, 0 < y < ∞) with a Dirichlet (type 1) boundary at x = 0 and a Neumann (type 2) boundary at y = 0. The boundary value problem for the X10Y20 Green's function is given by
Applications of related half-space and quarter-space GF are available.
Examples in cylindrical coordinates
R03
As an example in the cylindrical coordinate system, number R03 denotes the Green's function that satisfies the heat equation in the solid cylinder (0 < r < a) with a boundary condition of type 3 (Robin) at r = a. Here letter R denotes the cylindrical coordinate system, number 0 denotes the zeroth boundary condition (boundedness) at the center of the cylinder (r = 0), and number 3 denotes the type 3 (Robin) boundary condition at r = a. The boundary value problem for the R03 Green's function is given by
Here k is the thermal conductivity (W/(m·K)) and h is the heat transfer coefficient (W/(m²·K)).
This GF is available elsewhere.
R10
As another example, number R10 denotes the Green's function in a large body containing a cylindrical void (a < r < ∞) with a type 1 (Dirichlet) boundary condition at r = a. Again letter R denotes the cylindrical coordinate system, number 1 denotes the type 1 boundary at r = a, and number 0 denotes the type zero boundary (boundedness) at large values of r. The boundary value problem for the R10 Green's function is given by
This GF is available elsewhere.
R01φ00
As a two-dimensional example, number R01φ00 denotes the Green's function in a solid cylinder with angular dependence, with a type 1 (Dirichlet) boundary condition at r = a. Here letter φ denotes the angular (azimuthal) coordinate, and numbers 00 denote the type zero boundaries for the angle; since no physical boundary exists in the angular direction, this takes the form of the periodic boundary condition. The boundary value problem for the R01φ00 Green's function is given by
Both a transient and steady form of this GF are available.
Example in spherical coordinates
RS02
As an example in the spherical coordinate system, number RS02 denotes the Green's function for a solid sphere (0 < r < a) with a type 2 (Neumann) boundary condition at r = a. Here letters RS denote the radial-spherical coordinate system, number 0 denotes the zeroth boundary condition (boundedness) at r = 0, and number 2 denotes the type 2 boundary at r = a. The boundary value problem for the RS02 Green's function is given by
This GF is available elsewhere.
See also
Fundamental solution
Dirichlet boundary condition
Neumann boundary condition
Robin boundary condition
Heat equation
References
Differential equations
Heat transfer
Generalized functions
Physical quantities | Green's function number | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,076 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Physical quantities",
"Quantity",
"Mathematical objects",
"Differential equations",
"Equations",
"Thermodynamics",
"Physical properties"
] |
36,976,974 | https://en.wikipedia.org/wiki/Plasma%20railgun | A plasma railgun is a linear accelerator which, like a projectile railgun, uses two long parallel electrodes to accelerate a "sliding short" armature. However, in a plasma railgun, the armature and ejected projectile consist of plasma – hot, ionized, gas-like particles – instead of a solid slug of material. Scientific plasma railguns are typically operated in vacuum and not at air pressure. They are of value because they produce muzzle velocities of up to several hundreds of kilometers per second. Because of this, these devices have applications in magnetic confinement fusion (MCF), magneto-inertial fusion (MIF), high energy density physics research (HEDP), laboratory astrophysics, and as a plasma propulsion engine for spacecraft.
Theory
Plasma railguns appear in two principal topologies, linear and coaxial. Linear railguns consist of two flat plate electrodes separated by insulating spacers and accelerate sheet armatures. Coaxial railguns accelerate toroidal plasma armatures using a hollow outer conductor and a central, concentric, inner conductor.
Linear plasma railguns place extreme demands on their insulators, as they must be an electrically insulating, plasma-facing vacuum component which can withstand both thermal and acoustic shocks. Additionally, a complex triple joint seal may exist at the breech of the bore, which can often pose an extreme engineering challenge. Coaxial accelerators require insulators only at the breech, but the plasma armature in that case is subject to the "blow-by" instability. This is an instability in which the magnetic pressure front can out-run or "blow-by" the plasma armature due to the radial dependence of acceleration current density, drastically reducing device efficiency. Coaxial accelerators use various techniques to mitigate this instability. In either design, a plasma armature is formed at the breech. As plasma railguns are an open area of research, the method of armature formation varies. However, techniques including exploding foils, gas cell burst disk injection, neutral gas injection via fast gas valve, and plasma capillary injection have been employed.
After armature formation, the plasmoid is then accelerated down the length of the railgun by a current pulse driven through one electrode, through the armature, and out the other electrode, creating a large magnetic field behind the armature. Since the driver current through the armature is also moving through and normal to a self-generated magnetic field, the armature particles experience a Lorentz force, accelerating them down the length of the gun. Accelerator electrode geometry and materials are also open areas of research.
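The magnitude of the accelerating Lorentz force can be sketched with the usual railgun lumped-circuit formula F = ½L′I², where L′ is the inductance gradient of the rails. All numbers below are illustrative assumptions, not values from any particular device:

```python
# Ideal, loss-free estimate of plasma-armature acceleration.
L_prime = 0.5e-6   # inductance gradient [H/m], a typical order for rail geometries
I = 200e3          # drive current pulse [A]
m = 1.0e-7         # plasma armature mass [kg] (~0.1 mg)
length = 1.0       # accelerator length [m]

F = 0.5 * L_prime * I**2             # Lorentz force on the armature [N]
a = F / m                            # acceleration [m/s^2]
v_muzzle = (2 * a * length) ** 0.5   # loss-free muzzle velocity [m/s]

print(f"F = {F:.3g} N, v = {v_muzzle/1e3:.0f} km/s")
```

With these assumptions the armature reaches a muzzle velocity of a few hundred km/s, the order quoted above; real devices fall short of this ideal figure because of resistive losses, ablation, and the blow-by instability.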
Applications
Controlled jets from plasma railguns can have peak densities in the 10¹³ to 10¹⁶ particles/m³ range, and velocities from 5 to , depending on device design configuration and operating parameters, and the upper limits may be higher. Plasma railguns are being evaluated for applications in magnetic confinement fusion for disruption mitigation and tokamak refueling.
Magneto-inertial fusion seeks to implode a magnetized D-T fusion target using a spherically symmetric, collapsing, conducting liner. Plasma railguns are being evaluated as a possible method of implosion liner formation for fusion.
Arrays of plasma railguns could be used to create pulsed implosions of ~1 megabar peak pressure, allowing more access to chart this open area of plasma physics.
High velocity jets of controllable density and temperature allow astrophysical phenomena such as solar wind, galactic jets, solar events and astrophysical plasma to be partially simulated in the laboratory and measured directly, in addition to astronomic and satellite observations.
Examples
See also
Helical railgun
Coilgun
Mass driver
Ram accelerator
Light-gas gun
Pulsed plasma thruster
MARAUDER
Combustion light-gas gun
References
Railguns
Plasma technology and applications | Plasma railgun | [
"Physics"
] | 806 | [
"Plasma technology and applications",
"Plasma physics"
] |
52,786,735 | https://en.wikipedia.org/wiki/Craik%E2%80%93Leibovich%20vortex%20force | In fluid dynamics, the Craik–Leibovich (CL) vortex force describes a forcing of the mean flow through wave–current interaction, specifically between the Stokes drift velocity and the mean-flow vorticity. The CL vortex force is used to explain the generation of Langmuir circulations by an instability mechanism. The CL vortex-force mechanism was derived and studied by Sidney Leibovich and Alex D. D. Craik in the 1970s and 80s, in their studies of Langmuir circulations (discovered by Irving Langmuir in the 1930s).
Description
The CL vortex force is

F = ρ u_s × ω

with u_s the (Lagrangian) Stokes drift velocity and ω the vorticity (i.e. the curl ∇ × ū of the Eulerian mean-flow velocity ū). Further, ρ is the fluid density and ∇× is the curl operator.
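The CL force, F = ρ u_s × ω (force per unit volume), can be evaluated pointwise as a simple cross product. A minimal sketch with arbitrary illustrative numbers, not values from any measured flow:

```python
import numpy as np

rho = 1025.0                             # seawater density [kg/m^3]
u_stokes = np.array([0.05, 0.0, 0.0])    # Stokes drift, along wave propagation [m/s]
omega = np.array([0.0, 0.0, 2e-3])       # vertical mean-flow vorticity [1/s]

# CL vortex force per unit volume [N/m^3]
F_cl = rho * np.cross(u_stokes, omega)
print(F_cl)  # force is horizontal and perpendicular to the wave direction
```

The resulting force is directed cross-wind, which is the sense in which the CL mechanism feeds the counter-rotating, wind-aligned vortex pairs of Langmuir circulation.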
The CL vortex force finds its origins in the appearance of the Stokes drift in the convective acceleration terms in the mean momentum equation of the Euler equations or Navier–Stokes equations. For constant density ρ, the momentum equation (divided by ρ) is:
with
(a): temporal acceleration
(b): convective acceleration
(c): Coriolis force due to the angular velocity of the Earth's rotation
(d): Coriolis–Stokes force
(e): gradient of the augmented pressure
(f): Craik–Leibovich vortex force
(g): viscous force due to the kinematic viscosity
The CL vortex force can be obtained by several means. Originally, Craik and Leibovich used perturbation theory. An easy way to derive it is through the generalized Lagrangian mean theory. It can also be derived through a Hamiltonian mechanics description.
Notes
References
Fluid dynamics
Water waves | Craik–Leibovich vortex force | [
"Physics",
"Chemistry",
"Engineering"
] | 365 | [
"Physical phenomena",
"Water waves",
"Chemical engineering",
"Waves",
"Piping",
"Fluid dynamics"
] |
52,787,040 | https://en.wikipedia.org/wiki/CFSMC | CFSMC, or Carbon Fiber Sheet Molding Compound (also known as CSMC or CF-SMC), is a ready to mold carbon fiber reinforced polymer composite material used in compression molding. While traditional SMC utilizes chopped glass fibers in a polymer resin, CFSMC utilizes chopped carbon fibers. The length and distribution of the carbon fibers is more regular, homogeneous, and constant than the standard glass SMC. CFSMC offers much higher stiffness and usually higher strength than standard SMC, but at a higher cost.
Manufacturing
CF-SMC is made up of chopped carbon tows, spread between two layers of uncured thermosetting resin. The carbon fibre tows are cut from prepreg UD tape. The originating tape can be made up of a certain number of fibres (filaments), thus affecting the properties of the final composite: values can vary from 3 to 50 thousand filaments, while typical tow lengths are within 10 to 50 mm. As for the resin, thermosetting resins are used: possible choices are polyester, vinyl ester or epoxy, with the first being the cheapest and the last offering the highest performance. Despite not being as strong or stiff as epoxy, vinyl ester is often used for properties such as corrosion resistance and higher temperature resistance. The constituents are combined in sheets of prepreg material. The tows usually fall from the cutter onto one of the two layers of resin, and are then covered by the second layer. The prepreg sheets of SMC are made after the viscous assembly is compacted via rollers. In this phase, any control over the orientation of the fibres is generally impossible, and the fibres can be considered to have an equiprobable orientation in all directions.
Once the prepreg sheets are made, the material can be compression moulded into the final desired shape. Compression moulding is a manufacturing technique that requires a two-part mould: the first part hosts the moulding material (charge), while the second is mounted on a press and closes the cavity while applying high pressure. Due to complex geometry, it may be necessary to cut the sheets to place them more easily in the lower mould. Then, while the upper mould cavity is closing, the material is pushed throughout the mould until it is filled. Pressure is maintained, together with elevated temperature, to allow the curing of the resin and low porosity. This stage has a heavy influence on the mechanical performance of the final product, as the viscous flow into the mould cavity tends to orient the fibres along the direction of the flow. By controlling the amount and direction of the flow, it is thus possible to influence the fibre orientation, obtaining a quasi-isotropic material (low-flow moulding) or higher performance in a desired direction (high-flow moulding).
During the manufacturing phase, it is also important to avoid, when possible, defects like weld-lines. Weld-lines occur when two flow fronts of material meet during the filling of a mould cavity. This can sometimes result in air entrapment, inhibited crosslinking in the polymer matrix, or the clumping or absence of fibres. For these reasons weld-lines can be as weak as, or weaker than, the neat polymer resin.
Material properties
Due to their heterogeneous and anisotropic microstructure, the mechanical properties of CF-SMC can vary significantly within broad ranges. Parameters having a profound impact on the performance of these materials are mainly related to the mechanical and geometrical properties of the neat fibres and matrix (especially those of the fibres) and the orientation and content of the reinforcement. The modulus can vary from less than 20 GPa to 60 GPa, while strength values are within 60–500 MPa.
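As a rough check that these ranges are consistent with simple micromechanics, the following sketch estimates the in-plane modulus of a quasi-isotropic CF-SMC using the rule of mixtures and the textbook 3/8–5/8 averaging rule for randomly oriented planar fibres (all input values are typical assumptions, not data from the article):

```python
# Micromechanics estimate for a quasi-isotropic carbon/epoxy SMC.
E_f = 230e9    # carbon fibre modulus [Pa]
E_m = 3.5e9    # epoxy matrix modulus [Pa]
V_f = 0.5      # fibre volume fraction

E_L = V_f * E_f + (1 - V_f) * E_m           # longitudinal (rule of mixtures)
E_T = 1.0 / (V_f / E_f + (1 - V_f) / E_m)   # transverse (inverse rule of mixtures)
E_random = 0.375 * E_L + 0.625 * E_T        # random planar orientation average

print(f"E_random ~ {E_random/1e9:.0f} GPa")
```

The estimate lands in the upper part of the 20–60 GPa range quoted above; finite tow length, porosity and resin pockets push real values lower.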
CF-SMC can also be engineered, to some extent, to have better performance in a specific direction, in a similar fashion to continuous-fibre composites. This can be achieved by carefully controlling the compression moulding stage to influence fibre orientation. When the fibres are mainly aligned with the loading direction, the material behaviour is dominated by that of the fibres, resulting in a stronger and stiffer, but also more brittle, response. In the opposite case, if fibres tend to lie perpendicular to the loading direction, the resin contributes more to the load bearing, and the overall composite will be less stiff, less strong and more ductile. Being based on hydrodynamic transport phenomena, however, the control over fibre orientation in CF-SMC is much more limited than in the continuous-fibre case, where orientation is often directly and accurately determined by the manufacturer. In addition, while continuous-fibre composites have a specific orientation, short-fibre reinforced plastics can have a preferential orientation, meaning that, considering a generic system of axes, the majority of fibres can have a higher component along one direction and lower components along the other two axes.
The discontinuous tow-based microstructure of these materials makes them even more heterogeneous than standard composites: fibre ends themselves act as stress concentration areas for both the resin and the neighbouring tows; moreover, especially for complex-shaped parts, it is impossible to prevent some local spots with badly aligned tows (e.g. perpendicular to the direction of axial stress) or with low fibre volume content, like resin pockets. Although this makes the material weaker and the structural design more complex, this feature makes these materials quite notch-insensitive.
When moulded, CFSMC has a very different appearance than traditional carbon fibre fabric composites, which traditionally appear with a woven checkerboard pattern. CFSMC has the appearance of black and grey marble or burl.
Industrial use
CF-SMC combines the lightweight properties of carbon composites with a manufacturing process, compression moulding, that allows fast manufacturing and thus is suitable for high-volume industrial applications. For these reasons, the automotive industry is one of the best candidates for this technology.
Car manufacturers have used standard glass SMC for over 30 years as a material for body panels in select sport cars such as the Chevrolet Corvette. Substituting glass fibres with carbon is a recent development, having been used for significant structural components of the 2003 Dodge Viper, the multifunctional spare wheel pan of Mercedes-AMG E-Class, the Mercedes-Benz SLR McLaren, the 2009 Lexus LFA, 2015 Lamborghini Huracán, the 2017 BMW 7 series and 2017 McLaren chassis. Lamborghini (together with Callaway Golf Company) patented an advanced version of CF-SMC called Forged Composite. They first introduced it in the Sesto Elemento concept car, and since then, Forged Composite has been a distinctive mark for Lamborghini cars, used both in structural and aesthetical purposes. CF-SMC use is recently spreading also to the much broader non-high performance automotive sector as for the 2017 Toyota Prius PHV.
CF-SMC has also been used in the aeronautic industry by Boeing, for the 787 Dreamliner window frames, while producers suggest that the use of these materials will grow in this sector as well.
References
Composite materials
Fibre-reinforced polymers
Carbon compounds | CFSMC | [
"Physics"
] | 1,485 | [
"Materials",
"Composite materials",
"Matter"
] |
52,788,324 | https://en.wikipedia.org/wiki/Cresting%20%28architecture%29 | Cresting, in architecture, is ornamentation attached to the ridge of a roof, cornice, coping or parapet, usually made of a metal such as iron or copper. Cresting is associated with Second Empire architecture, where such decoration stands out against the sharp lines of the mansard roof. It became popular in the late 19th century, with mass-produced sheet metal cresting patterns available by the 1890s.
Cresting is typically attached to the roof by bolts, and is often installed during construction of the roof, with sealants applied to the roof directly covering the bolts to prevent water penetration and corrosion in these areas.
See also
Brattishing
References
Architectural elements
Roofs | Cresting (architecture) | [
"Technology",
"Engineering"
] | 135 | [
"Structural engineering",
"Components",
"Building engineering",
"Structural system",
"Architectural elements",
"Roofs",
"Architecture"
] |
57,426,815 | https://en.wikipedia.org/wiki/Gabriel%E2%80%93Rosenberg%20reconstruction%20theorem | In algebraic geometry, the Gabriel–Rosenberg reconstruction theorem, introduced in , states that a quasi-separated scheme can be recovered from the category of quasi-coherent sheaves on it. The theorem is taken as a starting point for noncommutative algebraic geometry as the theorem says (in a sense) working with stuff on a space is equivalent to working with the space itself. It is named after Pierre Gabriel and Alexander L. Rosenberg.
See also
Tannakian duality
References
External links
https://ncatlab.org/nlab/show/Gabriel-Rosenberg+theorem
How to unify various reconstruction theorems (Gabriel-Rosenberg, Tannaka, Balmers)
Theorems in algebraic geometry
Scheme theory
Sheaf theory | Gabriel–Rosenberg reconstruction theorem | [
"Mathematics"
] | 149 | [
"Theorems in algebraic geometry",
"Mathematical structures",
"Sheaf theory",
"Topology",
"Category theory",
"Theorems in geometry"
] |
57,426,950 | https://en.wikipedia.org/wiki/Early%20long-term%20potentiation | Early long-term potentiation (E-LTP) is the first phase of long-term potentiation (LTP), a well-studied form of synaptic plasticity, and consists of an increase in synaptic strength. LTP could be produced by repetitive stimulation of the presynaptic terminals, and it is believed to play a role in memory function in the hippocampus, amygdala and other cortical brain structures in mammals.
Long-term potentiation occurs when synaptic transmission becomes more effective as a result of recent activity. The neuronal changes can be temporary and wear off after some hours (early LTP) or much more stable and long-lasting (late LTP).
Early and late phase
It has been proposed that long-term potentiation is composed of at least two different phases: protein synthesis-independent E-LTP (early LTP) and protein synthesis-dependent L-LTP (late LTP). A single train of high-frequency stimuli is needed to trigger E-LTP, which begins right after the stimulation, lasts a few hours or less, and depends primarily on short-term kinase activity. Conversely, stronger stimulation protocols are needed to recruit L-LTP, which begins after a few hours, lasts for at least eight hours, and depends on the activation of de novo gene transcription. These different characteristics suggest a relationship between E-LTP and the short-term memory phase, as well as between L-LTP and the long-term memory phase.
LTP and memory phases
A comparison between LTP induced by two spaced trains of stimuli and LTP induced by four trains in wild-type mice showed that LTP induced by two trains decays more slowly than that induced by one train and faster than that induced by four trains. Moreover, the LTP induced by two trains is only partially impaired by protein kinase A (PKA) inhibition and not by protein synthesis inhibition. These findings suggested that there is a PKA-dependent phase of LTP intermediate to E-LTP and L-LTP, which was called intermediate LTP (I-LTP).
In transgenic mice overexpressing calcineurin, on the other hand, LTP induced by two trains decayed faster than in wild-type mice, implying that excessive calcineurin activity suppresses both I-LTP and L-LTP. This calcineurin overexpression could be associated with memory-related behavioral deficits. The transgenic mice performed poorly in spatial memory tasks compared to wild-type mice, indicating a deficit. However, when trained more intensively, their performance deficit with respect to wild-type mice disappears. Moreover, the transgenic mice performed normally on memory tasks 30 minutes after training, but were considerably impaired 24 hours after training. This led to the conclusion that calcineurin-overexpressing mice have a deficit in long-term memory consolidation, which reflects their deficit in late phase LTP.
Biological processes
Training of simple reflexes in Aplysia has shown a strengthening between the sensory and motor neurons responsible for those reflexes; on a cellular level, short-term memory (and thus early LTP) involves an increase in presynaptic neurotransmitter release by means of modifications of proteins through cAMP-dependent PKA and PKC. The long-term process requires new protein synthesis and cAMP-mediated gene expression, and results in the growth of new synaptic connections.
These findings have led to the question whether there is a similar process in mammals. Input to the hippocampus comes from the neurons of the entorhinal cortex by means of the perforant pathway, which synapses on the granule cells of the dentate gyrus. The granule cells in turn send their axons, the mossy fibre pathway (CA3), to synapse on the pyramidal cells of the CA3 region. Finally, the axons of the pyramidal cells in the CA3 regions, the Schaffer collateral pathway (CA1), terminate on the pyramidal cells of the CA1 region. Damage to any of these hippocampal pathways is sufficient to cause some memory disturbance in humans.
In the perforant and Schaffer pathways, LTP is induced by activating a postsynaptic NMDA receptor, causing an influx of calcium. In the mossy fibres pathway on the other hand, LTP is induced presynaptically through an influx of glutamate.
E-LTP and classical conditioning
Early LTP is best studied in the context of classical conditioning. As the signal of a conditioned stimulus enters the pontine nuclei in the brainstem, the signal travels through the mossy fibres to the interpositus nucleus and the parallel fibres in the cerebellum. The parallel fibres synapse on so-called Purkinje cells, which simultaneously receive input of the unconditioned stimulus via the inferior olives and climbing fibres.
The parallel fibres release glutamate, which activates inhibitory metabotropic and excitatory ionotropic AMPA receptors. The metabotropic receptors activate an enzyme cascade via G protein, which leads to the activation of protein kinase C (PKC). This PKC phosphorylates the active ionotropic receptors.
Elsewhere in the cell, the climbing fibres carry the neurotransmitter aspartate to the Purkinje cell, which leads to the opening of calcium channels and in turn causes an increased influx of calcium into the cell. The calcium activates PKC once again, and the phosphorylated ionotropic receptors are internalised. Thus, the surplus of metabotropic receptors hyperpolarises the cell, and the interpositus nucleus depolarises the inferior olives, which causes a decrease in expectation of the unconditioned stimulus, thereby causing an inhibition in early LTP or a period of long-term depression.
Clinical perspectives
LTP in Alzheimer's disease
Alzheimer's disease is characterized by extracellular deposits of neurotoxic amyloid peptides (Aβ), intracellular aggregation of hyper-phosphorylated tau protein, and neuronal death. Chronic stress, for its part, negatively impacts learning and memory and can exacerbate a number of disorders, including Alzheimer's disease (AD).
Previous studies have shown that the combination of chronic psychosocial stress and chronic infusion of a pathogenic dose of Aβ peptides impairs learning and memory and severely diminishes early phase long-term potentiation (E-LTP) in the hippocampal area CA1 of anesthetized rats.
Chronic psychosocial stress was produced using a rat intruder model, and the at-risk rat model of Alzheimer's disease was created by osmotic pump infusion of a sub-pathological dose of Aβ (subAβ). Electrophysiological methods were used to evoke and record early and late phase LTP in the dentate gyrus (DG) of anesthetized rats, and immunoblotting was used to measure levels of memory-related signaling molecules in the same region. These electrophysiological and molecular tests in the dentate gyrus showed that subAβ rats and stressed rats were not different from control rats. However, when stress and subAβ were combined, a significant suppression of E-LTP magnitude resulted.
In summary, although the CA1 and DG regions are closely related physically and functionally, they react differently to insults. While the area CA1 is vulnerable to stress and the combination stress/subAβ, the DG is remarkably resistant to the offending combination of subAβ and chronic stress.
LTP in drug use
LTP is also relevant to drug abuse. Conditioning plays a vital role in the build-up of tolerance in drug users. By reconditioning recovering addicts to the places in which they used to take drugs, pairing those places with a different stimulus, the craving they feel can be counteracted. A rather successful experimental study has shown that this paradigm lowers the danger of relapse and works as extinction.
Alternative models
The hypothesis that the stabilisation of synaptic plasticity depends on de novo protein synthesis is popular in the literature. The temporal differentiation between early and late LTP is also based on it: early LTP is associated with short-term memory and late LTP with long-term memory. Behavioural studies, however, have raised evidence against this differentiation.
Studies with protein synthesis inhibitors showed that blocking protein synthesis did not block memory retention. Stable LTP was found in slice preparations of the hippocampus under a state of global protein synthesis inhibition. These studies show that LTP stabilization can happen independently of protein synthesis, and that the association between protein synthesis and stabilization is therefore insufficient to define the difference between early and late LTP.
Instead of the differentiation into early and late LTP, with protein synthesis as the driving force of LTP and memory stabilization, an alternative model was proposed: in addition to protein synthesis, protein degradation also determines stabilization, so the turnover rate of proteins is said to underlie LTP stabilization. According to this model, the division of LTP into temporal phases is inappropriate and even a hindrance to future research, as mechanisms can be overlooked due to the strict temporal assignment of functions and processes.
References
Behavioral neuroscience
Memory
Neurophysiology
Neuroplasticity
CYREN (protein)
Cell cycle regulator of non-homologous end joining is a protein that in humans is encoded by the CYREN gene.
It prevents classical non-homologous end joining, a method of repair of double-stranded DNA breaks. This protein is therefore important in regulating DNA repair.
When alternatively spliced, the CYREN gene is predicted to produce three different micropeptides.
MRI-1 was previously found to be a modulator of retrovirus infection.
MRI-2 may be important in non-homologous end joining (NHEJ) of DNA double strand breaks. In Co-Immunoprecipitation experiments, MRI-2 bound to Ku70 and Ku80, two subunits of Ku, which play a major role in the NHEJ pathway.
MRI-3
References
DNA repair
Specific pump power
Specific Pump Power (SPP) is a metric in fluid dynamics that quantifies the energy efficiency of pump systems. It is a measure of the electric power needed to operate a pump (or collection of pumps), relative to the volume flow rate. It is not constant for a given pump, but changes with both flow rate and pump pressure. The term 'SPP' is adapted from the established metric specific fan power (SFP) for fans (blowers). It is commonly used when measuring the energy efficiency of buildings.
Definition
The SPP for a specific operating point (combination of flow rate and pressure rise) for a pump system is defined as:
SPP = P_el / q_v   [kW/(m3/s)]
where:
P_el is the electrical power used by the pump (or sum of all pumps in a system or subsystem) [kW]
q_v is the volumetric flow rate of fluid passing through the pump (or system) [m3/s]; some countries use [l/s]
Just as for SFP (i.e. fan power), SPP is also related to pump pressure (pump head) and the pump system efficiency, as follows:
SPP = Δp_t / η_tot
where:
Δp_t is the rise in total pressure across the pump system, a.k.a. pump head [kPa]. This is a property of the fluid circuit in which the pump is placed.
η_tot is the overall efficiency of the pump system [-]. This is the combined product of multiple losses, including bearing friction, impeller fluid dynamic losses, leakage losses (backflow), all losses in the motor (friction, magnetic losses, copper losses, stray load), and losses in the speed control electronics (for variable-speed pumps). The pump system efficiency is therefore not fixed, but depends on the operating point (flow and pressure).
This equation is simply an application of Bernoulli's principle in the case where the inlet and outlet have the same diameter and same height. Observe that SPP is not a property of the pump alone, but is also dependent on the pressure drop of the circuit that the pump circulates fluid through. Thus, in order to minimize energy use for pump system, one must reduce the system pressure drop (e.g. use large diameter pipes and low flow rates) in addition to selecting pumps with good intrinsic efficiency (hydrodynamically efficient with an efficient motor).
Applying the above equations enables us to estimate electrical power consumption in a number of ways:
P_el = SPP · q_v = (Δp_t · q_v) / η_tot = P_hydr / η_tot
where:
P_hydr is the hydraulic power (P_hydr = Δp_t · q_v) [kW]
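A short numerical sketch (illustrative values, not data for any particular pump) ties these relations together:

```python
def spp(p_el_kw, q_m3s):
    """Specific pump power [kW/(m3/s)]: electrical power per unit flow."""
    return p_el_kw / q_m3s

# Illustrative operating point: 50 kPa total pressure rise,
# 0.01 m3/s flow, 50 % overall pump-system efficiency.
dp_kpa, q_m3s, eta = 50.0, 0.01, 0.5

p_hydr_kw = dp_kpa * q_m3s   # hydraulic power: kPa * m3/s = kW
p_el_kw = p_hydr_kw / eta    # electrical power drawn by the pump

print(p_hydr_kw, p_el_kw, spp(p_el_kw, q_m3s))  # → 0.5 1.0 100.0
```

Note that the resulting 100 kW/(m3/s) is numerically equal to the pressure rise divided by the efficiency (50 kPa / 0.5), as the second relation requires.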
See also
Pump
Thermodynamic pump testing
Specific fan power
Efficient energy use
References
Pumps
Fusion of anyons
Anyon fusion is the process by which multiple anyons behave as one larger composite anyon. Anyon fusion is essential to understanding the physics of non-abelian anyons and how they can be used in quantum information.
Abelian anyons
If n identical abelian anyons, each with individual statistics α (that is, the system picks up a phase e^{iα} when two individual anyons undergo adiabatic counterclockwise exchange), all fuse together, they together have statistics n²α. This can be seen by noting that upon counterclockwise rotation of two composite anyons about each other, there are n² pairs of individual anyons (one in the first composite anyon, one in the second composite anyon) that each contribute a phase e^{iα}. An analogous analysis applies to the fusion of non-identical abelian anyons. The statistics of the composite anyon is uniquely determined by the statistics of its components.
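The pair-counting argument can be spelled out in a few lines of code. This illustrative sketch (function name and values are my own) multiplies the n² pairwise exchange phases explicitly and checks that they equal the composite phase e^{i n²α}:

```python
import cmath

def composite_exchange_phase(n, alpha):
    """Phase acquired when two composite anyons, each built from n
    identical abelian anyons with statistics angle alpha, are exchanged
    counterclockwise: each of the n*n inter-composite pairs contributes
    a factor exp(i*alpha)."""
    phase = 1 + 0j
    for _ in range(n):        # anyons in the first composite
        for _ in range(n):    # anyons in the second composite
            phase *= cmath.exp(1j * alpha)
    return phase

# Two composites of n = 3 anyons, each with statistics pi/9, behave as
# single anyons with statistics n**2 * alpha = pi (i.e. fermion-like).
n, alpha = 3, cmath.pi / 9
assert abs(composite_exchange_phase(n, alpha) - cmath.exp(1j * n ** 2 * alpha)) < 1e-12
```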
Non-abelian anyon fusion rules
Non-abelian anyons have more complicated fusion relations. As a rule, in a system with non-abelian anyons, there is a composite particle whose statistics label is not uniquely determined by the statistics labels of its components, but rather exists as a quantum superposition (this is completely analogous to how two fermions known to have spin 1/2 and 3/2 respectively are together in a quantum superposition of total spin 1 and 2). If the overall statistics of the fusion of all of several anyons is known, there is still ambiguity in the fusion of some subsets of those anyons, and each possibility is a unique quantum state. These multiple states provide a Hilbert space on which quantum computation can be done.
Specifically, two non-abelian anyons labeled a and b have a fusion rule given by a × b = Σ_c N^c_{ab} c, where the formal sum over c goes over all labels of possible anyon types in the system (as well as the trivial label 1 denoting no particles), and each N^c_{ab} is a nonnegative integer which denotes how many distinct quantum states there are in which a and b fuse into c (this is true in the abelian case as well, except in that case, for each a and b, there is one type of anyon c for which N^c_{ab} = 1 and for all other c′, N^{c′}_{ab} = 0). Each anyon type a should also have a conjugate antiparticle ā among the list of possible anyon types, such that N^1_{aā} = 1, i.e. it can annihilate with its antiparticle. The anyon type label does not specify all of the information about the anyon, but the information that it does indicate is topologically invariant under local perturbations.
For example, the Fibonacci anyon system, one of the simplest, consists of labels 1 and τ (τ denotes a Fibonacci anyon), which satisfy the fusion rule τ × τ = 1 + τ (corresponding to N^1_{ττ} = N^τ_{ττ} = 1) as well as the trivial rules 1 × 1 = 1 and 1 × τ = τ (corresponding to N^1_{11} = N^τ_{1τ} = 1).
The Ising anyon system consists of labels 1, σ and ψ, which satisfy fusion rules σ × σ = 1 + ψ, σ × ψ = σ, ψ × ψ = 1, and the trivial rules.
The operation × is commutative and associative, as it must be to physically make sense with fused anyons. Furthermore, it is possible to view the coefficients N^c_{ab} as matrix entries of a matrix N_a with row and column indices b and c; then the largest eigenvalue of this matrix N_a is known as the quantum dimension d_a of anyon type a.
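As a concrete check of this definition, here is an illustrative sketch in plain Python (the power-iteration helper is my own, not part of any anyon formalism) computing the quantum dimension of the Fibonacci anyon τ, which comes out as the golden ratio:

```python
# Fibonacci fusion matrix N_tau in the basis (1, tau):
# (N_tau)[b][c] = N^c_{tau b}, from tau x 1 = tau and tau x tau = 1 + tau.
N_tau = [[0, 1],
         [1, 1]]

def largest_eigenvalue(m, iters=100):
    """Power iteration for the dominant eigenvalue of a small
    nonnegative matrix (sufficient for fusion matrices)."""
    v = [1.0] * len(m)
    lam = 0.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

d_tau = largest_eigenvalue(N_tau)   # quantum dimension of tau
print(round(d_tau, 6))              # → 1.618034 (the golden ratio)
```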
Fusion rules can also be generalized to consider in how many ways N^c_{a1⋯an} a collection a1, a2, …, an can be fused to a final anyon type c.
Hilbert spaces of fusion processes
The fusion process where a and b fuse into c corresponds to an N^c_{ab}-dimensional complex vector space V^c_{ab}, consisting of all the distinct orthonormal quantum states in which a and b fuse into c. This forms a Hilbert space. When N^c_{ab} ≤ 1, such as in the Ising and Fibonacci examples, V^c_{ab} is at most just a one-dimensional space with one state. The direct sum ⊕_c V^c_{ab} is a decomposition of the tensor product of the Hilbert space of individual anyon a and the Hilbert space of individual anyon b. In topological quantum field theory, V^c_{ab} is the vector space associated with the pair of pants with waist labeled c and legs a and b.
More complicated Hilbert spaces V^c_{a1⋯an} can be constructed corresponding to the fusion of three or more particles, i.e. for the quantum systems where it is known that a1, a2, …, an fuse into final anyon type c. This Hilbert space would describe, for example, the quantum system formed by starting with a c quasiparticle and, via some local physical procedure, splitting up that quasiparticle into quasiparticles a1, …, an (because in such a system all the anyons must necessarily fuse back into c by topological invariance). There is an isomorphism between V^c_{a1⋯an} and V^1_{a1⋯an c̄} for any c. As mentioned in the previous section, permutations of the labels give isomorphic spaces as well.
One can understand the structure of V^c_{a1⋯an} by considering fusion processes one pair of anyons at a time. There are many arbitrary ways one can do this, each of which can be used to derive a different decomposition of V^c_{a1⋯an} into pairs of pants. One possible choice is to first fuse a1 and a2 into b1, then fuse b1 and a3 into b2, and so on. This approach shows us that V^c_{a1⋯an} ≅ ⊕_{b1,…,b_{n−2}} V^{b1}_{a1 a2} ⊗ V^{b2}_{b1 a3} ⊗ ⋯ ⊗ V^c_{b_{n−2} an}, and correspondingly dim V^c_{a1⋯an} = (N_{a2} N_{a3} ⋯ N_{an})_{a1 c}, where N_a is the matrix defined in the previous section.
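For the Fibonacci system this counting can be carried out explicitly. The sketch below (plain Python, basis ordered (1, τ); helper names are illustrative) multiplies fusion matrices and recovers the Fibonacci numbers that give the anyon its name:

```python
# Fibonacci fusion matrix N_tau in the basis (1, tau).
N_tau = [[0, 1],
         [1, 1]]

def mat_mul(a, b):
    """2x2 integer matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def num_fusion_states(n):
    """Dimension of the fusion space of n Fibonacci anyons with total
    charge 1: the (tau, 1) entry of the product of n-1 copies of N_tau."""
    m = [[1, 0], [0, 1]]          # 2x2 identity
    for _ in range(n - 1):
        m = mat_mul(m, N_tau)
    return m[1][0]                # row tau, column 1

print([num_fusion_states(n) for n in range(2, 8)])  # → [1, 1, 2, 3, 5, 8]
```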
This decomposition manifestly indicates a choice of basis for the Hilbert space. Different arbitrary choices of the order in which to fuse anyons will correspond to different choices of basis.
References
Quantum mechanics
Joseph F. Keithley Award
The Joseph F. Keithley Award For Advances in Measurement Science is an award of the American Physical Society (APS) that was first awarded in 1998. It is named in honor of Joseph F. Keithley, the founder of Keithley Instruments. The award is presented annually for outstanding contributions in measurement techniques and equipment, and is sponsored by Keithley Instruments and the Topical Group on Instrument and Measurement Science (GIMS).
The award is not to be confused with the similarly-named IEEE Joseph F. Keithley Award in Instrumentation and Measurement of the Institute of Electrical and Electronics Engineers (IEEE), which is also endowed by Keithley Instruments.
Recipients
The award has been given to the following people.
1998: John Clarke
1999: Simon Foner
2000: Calvin Forrest Quate, H. Kumar Wickramasinghe
2001: James E. Faller
2002: Robert J. Soulen, Jr.
2003: Arthur Ashkin
2004: Virgil Bruce Elings
2005: E. Dwight Adams
2006: Frances Hellman
2007: Kent D. Irwin
2008: Bjorn Wannberg
2009: Robert J. Schoelkopf
2010: Eugene Ivanov
2011: Ian Walmsley
2012: Andreas Mandelis
2013: David McClelland, Nergis Mavalvala, Roman Schnabel
2014: Franz Josef Giessibl
2015: Daniel T. Pierce, John Unguris, Robert J. Celotta
2016: Albert Migliori
2017: Peter Denes
2018: Andreas J. Heinrich, Joseph A. Stroscio, Wilson Ho
2019: Zahid Hussain
2020: No award given.
2021: Irfan Siddiqi
2022: Daniel Rugar and John Mamin
2023: Joel N. Ullom
2024: David A Muller
2025: Frances M. Ross
See also
List of physics awards
References
Awards of the American Physical Society
Measurement
Plant genome assembly
A plant genome assembly represents the complete genomic sequence of a plant species, which is assembled into chromosomes and other organelles by using DNA (deoxyribonucleic acid) fragments that are obtained from different types of sequencing technology.
Structure
The genomes of plants vary in structure and complexity, from small genomes such as those of green algae (15 Mbp) to very large and complex genomes with typically much higher ploidy, higher rates of heterozygosity and more repetitive elements than species from other kingdoms. One of the most complex plant genome assemblies available is that of loblolly pine (22 Gbp). Due to their complexity, plant genome sequences cannot be assembled into chromosomes using only the short reads provided by next-generation sequencing (NGS) technologies, and therefore most plant genome assemblies produced with NGS alone are highly fragmented, contain large numbers of contigs, and leave genome regions unfinished. Highly repetitive sequences, often longer than 10 kbp, are the main challenge in plants. Most chromosomal sequence is produced by the activity of mobile genetic elements (MGEs) in plant genomes. MGEs are divided into two classes: class I, or retrotransposons, and class II, or DNA transposons. In plants, long terminal repeat (LTR) retrotransposons are predominant and constitute from 15% to 90% of the genome. Polyploidy is another challenge in assembling a plant genome; it is estimated that ≈80% of plants are polyploids.
Assemblies
The first complete plant genome assembly, that of Arabidopsis thaliana, was finished in 2000, being the third multicellular eukaryotic genome published after C. elegans and D. melanogaster. Arabidopsis, unlike other plants' genomes (e.g. Malus) has convenient traits, such as a small nuclear genome (135Mbp) and a short generation time (8 weeks from seed to seed). The genome has five chromosomes reflecting approximately 4% of the human genome size. The genome was sequenced and annotated by the Arabidopsis Genome Initiative (AGI).
The initiative for sequencing the genome of rice (Oryza sativa) began in September 1997, when scientists from many nations agreed to an international collaboration to sequence the rice genome, forming "The International Rice Genome Sequencing Project" (IRGSP). At an estimated size between 400 and 430 Mb, approximately four times larger than that of A. thaliana, rice has the smallest of the major cereal crop genomes.
Between 2000 and 2008 in total 10 plant genomes were published while in 2012 alone, 13 plant genomes were published. Since then the number was constantly increasing, and now more than 400 plant genomes are available in the NCBI genome database, of which 72 were re-annotated [NCBI].
Databases
EnsemblPlants is part of EnsemblGenome database and contains resources for a reduced number of sequenced plant species (45, Oct. 2017). It mainly provides genome sequences, gene models, functional annotations and polymorphic loci. For some of the plant species, additional information is provided including population structure, individual genotypes, linkage, and phenotype data.
Gramene is an online web database resource for plant comparative genomics and pathway analysis based on Ensembl technology.
Plant Genome DataBase Japan (PGDBj) is a website that contains information related to genomes of model and crop plants from databases. It has three main components: ortholog db, DNA marker and linkage map db, and plant resource db, where multiple plant resources accumulated by different institutes are integrated. The aim is "to provide a platform, enabling comparative searches of different resources" (pgdbj.jp).
PlantsDB is a resource for analysing and storing genetic and genomic information from various plants, and offers tools to query these data and to perform comparative analysis with the help of in-house tools.
PLAZA is another online resource for comparative genomics that integrates plant sequence data and comparative genomic methods, and performs evolutionary analysis within the green plant lineage (Viridiplantae).
The Arabidopsis Information Resource (TAIR) maintains a web database of the "model higher plant Arabidopsis thaliana".
Assembly strategies
In general, for sequencing and assembling large and complex genomes like plants, different strategies are used, based on the technologies available at that time when the project started.
Sanger clone-by-clone
Clone-by-clone sequencing strategies are based on the construction of a map for each chromosome before the sequencing, and rely on libraries made from large-insert clones. The most common type of large-insert clone is the bacterial artificial chromosome (BAC).
With BAC, the genome is first split into smaller pieces with the location recorded. The pieces of DNA are then inserted into BAC clones that are further multiplied by inserting them into bacterial cells that grow very fast. These pieces are further fragmented into overlapping smaller pieces that are placed into a vector and then sequenced. The small pieces are then assembled into contigs by overlapping them. Next, using the map from the first step the contigs are assembled back into the chromosomes.
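The contig-building step, merging fragments by their overlaps, can be illustrated with a toy function (real assemblers must additionally handle sequencing errors, repeats and reverse complements, none of which this sketch attempts):

```python
def merge_if_overlap(a, b, min_olap=3):
    """Merge read b onto the end of read a if a suffix of a matches a
    prefix of b of at least min_olap bases; return None otherwise.
    A toy model of overlap-based contig building."""
    for k in range(min(len(a), len(b)), min_olap - 1, -1):
        if a[-k:] == b[:k]:
            return a + b[k:]
    return None

# Two overlapping fragments reassemble the original sequence:
print(merge_if_overlap("ACGTACGA", "ACGATTGC"))  # → ACGTACGATTGC
```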
The first complete plant genome assembly (also the first plant genome published) that used this type of technique was Arabidopsis thaliana, in 2000. Different large-insert libraries like BACs, P1 artificial chromosomes (PAC), yeast artificial chromosome (YAC) and transformation-competent artificial chromosomes (TACs) were combined to assemble the genome. From clones with restriction fragment fingerprint, by comparison of the patterns and hybridization or polymerase chain reaction (PCR) the physical maps were constructed. The physical maps were integrated together with genetic maps to identify contig positions and orientations. End sequences from 47,788 BAC clones were used to extend contigs from anchored BACs and to select a minimum tiling path. A total of 1,569 clones found in minimum tiling path were selected and sequenced. Direct PCR products were used to clone remaining gaps, and YACs allowed the characterization of telomere sequences. The resulting sequenced regions were 115.4 Mb of the 125 Mb predicted size of the genome and a total of 25,498 of protein-coding genes.
To sequence and assemble the genome of Oryza sativa (japonica), the same strategy was used. For Oryza sativa a total of 3,401 mapped clones in a minimum tiling path were selected from the physical map and assembled.
One of the most important crops in the world, maize (Zea mays), is the last plant genome project primarily based on the Sanger BAC-by-BAC strategy. The genome of maize, at 2.3 Gb and 10 chromosomes, is significantly larger than that of rice and Arabidopsis. To assemble the genome of maize, a set of 16,848 minimally overlapping BAC clones derived from a combination of physical and genetic maps was selected and sequenced. The maize assembly was performed with additional external data, obtained from cDNA sequences and from libraries of methyl-filtered DNA (libraries exploiting the fact that bases in genic sequences tend to be less heavily methylated than those in non-genic regions) and high-C0t techniques.
Sanger clone-by-clone strategy has the advantage of working in small units, which reduces the complexity and computational requirements, as well as minimized problems associated with the misassembly of highly repetitive DNA and therefore is an attractive solution in assembling plant genomes and other complex eukaryotic genomes. The main disadvantages of this method are the costs and the resources required. The cost of the first plant genome assemblies was estimated between 70 million dollars and 200 million dollars per assembly.
Sanger whole-genome shotgun (WGS)
In the WGS sequencing technology there is no order for the fragments that are sequenced. The DNA is randomly sheared and cloned fragments are sequenced and assembled using computational methods. This technology reduced the cost and the time associated with construction of the maps and relies on computational resources.
A considerable number of important plant genomes, like grapevine (Vitis vinifera), papaya (Carica papaya), and cottonwood (Populus trichocarpa), were sequenced and assembled with the Sanger WGS strategy.
The draft genome of grapevine is the fourth genome published for a flowering plant and the first from a fruit crop. The sequences of the genome were obtained from different types of libraries, like plasmids, fosmids and BACs. All the data were generated by paired-end sequencing of cloned inserts using Sanger technology on ABI 3730xl sequencers. To assemble the reads, Arachne (2002), software designed to analyze reads obtained from both ends of plasmid clones, was used. In total, 6.2 million paired-end tag reads were produced. The software produced 20,784 contigs that were combined into 3,830 supercontigs, with an N50 value of 64 kb. The supercontigs had a total size of 498 Mb.
The anchorage of the supercontigs along the genome was performed first by joining supercontigs together using paired BAC end sequences. The resulting ultracontigs and the remaining supercontigs were then aligned along the genetic map of the genome. Later improvements of this strategy enabled the sequencing of Brachypodium distachyon, Sorghum bicolor and soybean.
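The N50 statistic quoted for assemblies such as the grapevine supercontigs is computed from the sorted contig lengths; a minimal sketch (toy lengths, not the grapevine data):

```python
def n50(lengths):
    """N50: the largest length L such that contigs of length >= L
    together contain at least half of the total assembled bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

# Toy assembly of 100 kb total: 40 + 25 = 65 kb >= 50 kb, so N50 = 25.
print(n50([10, 40, 25, 15, 10]))  # → 25
```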
Next-generation sequencing
Due to its relatively cheap cost in comparison to previous methods, most recent plant genomes were sequenced and assembled using data from next-generation sequencing (NGS) technology. In general, NGS data are used in combination with Sanger sequencing or with long reads obtained from third-generation sequencing. The genome of the cucumber (Cucumis sativus) was one of the plant genomes that used NGS Illumina reads in combination with Sanger sequences. High-quality base pairs amounting to 72.2-fold genome coverage were generated, of which 3.9-fold coverage was provided by Sanger sequencing and 68.3-fold by the Illumina GA reads. From this, two assemblies were produced, one per sequencing technology. The resulting contigs were compared between them, giving a total assembled genome length of 243.5 Mb. The result is about 30% smaller than the genome size estimated by flow cytometry of isolated nuclei stained with propidium iodide (367 Mb). A genetic map was constructed to anchor the assembled genome, and 72.8% of the assembled sequences were successfully anchored onto the seven chromosomes.
Another plant genome that combined NGS with Sanger sequencing was the genome of Theobroma cacao, 2010, an economically important tropical fruit tree crop and the primary source of cocoa. The genome was sequenced in a consortium, "The International Cocoa Genome Sequencing consortium (ICGS) " and produced a total of 17.6 million 454 single end reads, 8.8 million 454 paired-end reads, 398.0 million Illumina paired-end reads and about 88,000 Sanger BAC reads. First by using genome assembly software, Newbler, an assembly was produced with 25,912 contigs and 4,792 scaffolds from the reads obtained from Roche/454 and Sanger raw data. This had a total length of 326.9 Mb, which represents 76% of the estimated genome size.
The Illumina reads were used to complement the 454 assembly, by aligning the short reads on the cocoa genome assembly using the SOAP software.
A similar strategy combining NGS reads and Sanger sequencing was used for other important plant species, like the first published apple genome (Malus domestica), cotton (Gossypium raimondii), the draft genome of sweet orange (Citrus sinensis), and the domesticated tomato (Solanum lycopersicum) genome.
Third-generation
With the emergence of third-generation sequencing (TGS) some of the limitations from previous methods of sequencing and assembling plant genomes have started to be addressed. This technology is characterized by the parallel sequencing of single molecules of DNA, that results in sequences up to 54 kbp length (PacBio RS 2). In general, long reads from TGS have relatively high error rates (≈10% on average) and therefore repeated sequencing of the same DNA fragments is required. The price of such technology is still quite high and therefore is generally used in combination with short reads from NGS.
One of the first plant genomes that used long reads from TGS (Pacific Biosciences) in combination with short reads from NGS was that of spinach, with a genome size estimated at 989 Mb. For this, 60× coverage of the genome was generated, with 20% of the reads larger than 20 kb. Data were assembled using PacBio's hierarchical genome assembly process (HGAP), and the long-read assemblies showed a 63-fold improvement in contig size over an Illumina-only assembly.
Another recently published plant genome that used long reads in combination with short reads is the improved assembly of the apple genome. In this project a hybrid approach was used, combining data types from different sequencing technologies: PacBio RS II, Illumina paired-end (PE) reads and Illumina mate-pair (MP) reads. As a first step, an assembly from the Illumina paired-end reads was performed using the well-known de novo assembly software SOAPdenovo. Then, using the hybrid assembly pipeline DBG2OLC, the contigs obtained in the first step and the long reads from PacBio were combined. The assembly was then polished with the help of the Illumina paired-end reads by mapping them to the contigs using BWA-MEM. By mapping the mate-pair reads onto the corrected contigs, the assembly was scaffolded. Further, BioNano optical mapping data with a total length of 649.7 Mb were used in the hybrid assembly pipeline together with the scaffolds obtained from the previous step. The resulting scaffolds were anchored to a genetic map constructed from 15,417 single-nucleotide polymorphism (SNP) markers. For a better understanding of the number and diversity of the genes identified, RNA-seq data were used.
The resulting assembly has a size of 643.2 Mb, closer to the estimated genome size than the previously published assembly, and a smaller number of protein-coding genes.
The use of long reads in plant genome assemblies has become more popular, as it reduces the number of scaffolds and increases the quality of the genome by improving the assembly and coverage in regions that are not clearly resolved by an NGS-only assembly.
References
Botany
Bioinformatics
Rclone
Rclone is an open-source, multi-threaded, command-line computer program to manage or migrate content on cloud and other high-latency storage. Its capabilities include sync, transfer, crypt, cache, union, compress and mount. The rclone website lists supported backends including S3 and Google Drive.
Descriptions of rclone often carry the strapline "Rclone syncs your files to cloud storage". Those prior to 2020 include the alternative "Rsync for Cloud Storage".
Rclone is well known for its rclone sync and rclone mount commands. It provides further management functions analogous to those ordinarily used for files on local disks, but which tolerate some intermittent and unreliable service. Rclone is commonly used with media servers such as Plex, Emby or Jellyfin to stream content direct from consumer file storage services.
Official Ubuntu, Debian, Fedora, Gentoo, Arch, Brew, Chocolatey, and other package managers include rclone.
History
Nick Craig-Wood was inspired by rsync. Concerns about the noise and power costs arising from home computer servers prompted him to embrace cloud storage and he began developing rclone as open source software in 2012 under the name swiftsync.
Rclone was promoted to stable version 1.00 in July 2014.
In May 2017, Amazon Drive barred new users of rclone and other upload utilities, citing security concerns. Amazon Drive had been advertised as offering unlimited storage for £55 per year. Amazon's AWS S3 service continues to support new rclone users.
The original rclone logo was updated in September 2018.
In March 2020, Nick Craig-Wood resigned from Memset Ltd, a cloud hosting company he founded, to focus on open source software.
Amazon's AWS April 2020 public sector blog explained how the Fred Hutch Cancer Research Center were using rclone in their Motuz tool to migrate very large biomedical research datasets in and out of AWS S3 object stores.
In November 2020, rclone was updated to correct a weakness in the way it generated passwords. Passwords for encrypted remotes can be generated randomly by rclone or supplied by the user. In all versions of rclone from 1.49.0 to 1.53.2 the seed value for generated passwords was based on the number of seconds elapsed in the day, and therefore not truly random. CVE-2020-28924 recommended users upgrade to the latest version of rclone and check the passwords protecting their encrypted remotes.
Release 1.55 of rclone in March 2021 included features sponsored by CERN and their CS3MESH4EOSC project. The work was EU funded to promote vendor-neutral application programming interfaces and protocols for synchronisation and sharing of academic data on cloud storage.
Backends and commands
Rclone supports the following services as backends. There are others, built on standard protocols such as WebDAV or S3, that work. WebDAV backends do not support rclone functionality dependent on server side checksum or modtime.
Remotes are usually defined interactively from these backends, local disk, or memory (as S3), with rclone config. Rclone can further wrap those remotes with one or more of the alias, chunk, compress, crypt or union remotes.
Once defined, the remotes are referenced by other rclone commands interchangeably with the local drive. Remote names are followed by a colon to distinguish them from local drives. For example, a remote example_remote containing a folder, or pseudofolder, myfolder is referred to within a command as a path example_remote:/myfolder.
Rclone commands directly apply to remotes, or mount them for file access or streaming. With appropriate cache options the mount can be addressed as if it were a conventional, block-level disk. Commands are provided to serve remotes over SFTP, HTTP, WebDAV, FTP and DLNA. Commands can have sub-commands and flags. Filters determine which files on a remote rclone commands are applied to.
rclone rc passes commands or new parameters to existing rclone sessions and has an experimental web browser interface.
Crypt remotes
Rclone's crypt implements encryption of files at rest in cloud storage. It layers an encrypted remote over a pre-existing, cloud or other remote. Crypt is commonly used to encrypt / decrypt media, for streaming, on consumer storage services such as Google Drive.
Rclone's configuration file contains the crypt password. The password can be lightly obfuscated, or the whole rclone.conf file can be encrypted.
Crypt can encrypt file content and names, or additionally full paths. In the latter case there is a potential clash with cloud backends, such as Microsoft OneDrive, that have limited path lengths. Crypt remotes do not encrypt object modification time or size. The encryption mechanism for content, name and path is available for scrutiny on the rclone website. Key derivation is with scrypt.
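To illustrate how a crypt remote layers over a base remote, a minimal rclone.conf sketch might look like the following. The remote names and the obscured password value are hypothetical; in practice the entries are produced by rclone config and passwords are obscured with rclone obscure:

```ini
# Base remote (here a Google Drive backend, purely as an example)
[xmpl]
type = drive
scope = drive

# Crypt remote layered over a folder of the base remote
[xmpl-crypt]
type = crypt
remote = xmpl:/encrypted
filename_encryption = standard
directory_name_encryption = true
password = hypothetical-obscured-value
```

Commands then address xmpl-crypt: like any other remote, with encryption and decryption applied transparently.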
Example syntax (Linux)
These examples describe paths and file names but object keys behave similarly.
To recursively copy files from directory remote_stuff, at the remote xmpl, to directory stuff in the home folder:-
$ rclone copy -v -P xmpl:/remote_stuff ~/stuff
-v enables logging and -P, progress information. By default rclone checks file integrity (hash) after copying; retries each file up to three times if the operation is interrupted; uses up to four parallel transfer threads; and applies no bandwidth throttling.
Running the above command again copies any new or changed files at the remote to the local folder but, like default rsync behaviour, will not delete files from the local directory that have been removed from the remote.
To additionally delete files from the local folder which have been removed from the remote - more like the behaviour of rsync with a --delete flag:-
$ rclone sync xmpl:/remote_stuff ~/stuff
And to delete files from the source after they have been transferred to the local directory - more like the behaviour of rsync with a --remove-source-files flag:-
$ rclone move xmpl:/remote_stuff ~/stuff
To mount the remote directory at a mountpoint in the pre-existing, empty stuff directory in the home directory (the ampersand at the end makes the mount command run as a background process):-
$ rclone mount xmpl:/remote_stuff ~/stuff &
Default rclone behaviour can be modified. Alternative transfer, filter, conflict and backend-specific flags are available. Performance choices include the number of concurrent transfer threads; chunk size; bandwidth limit profiling, and cache aggression.
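The tuning flags mentioned above can be combined on one command line. The sketch below assembles such a command as a string and prints it rather than executing it, so it can be inspected without rclone installed; the remote name xmpl and the chosen values are illustrative, while the flags (--transfers, --bwlimit with a timetable, and the Google Drive-specific --drive-chunk-size) are from rclone's documented option set:

```shell
# Build a tuned copy command: 8 parallel transfers, a bandwidth timetable
# (512 KiB/s during office hours, unlimited from 19:00), and a larger
# backend-specific chunk size for Google Drive.
cmd='rclone copy xmpl:/remote_stuff ~/stuff --transfers 8'
cmd="$cmd --bwlimit \"08:00,512k 19:00,off\" --drive-chunk-size 64M"
echo "$cmd"
```

Run verbatim (with rclone installed and an xmpl remote configured), the printed line is the command to execute.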
Academic evaluation
In 2018, University of Kentucky researchers published a conference paper comparing use of rclone and other command line, cloud data transfer agents for big data. The paper was published as a result of funding by the National Science Foundation.
Later that year, University of Utah's Center for High Performance Computing examined the impact of rclone options on data transfer rates.
Rclone use at HPC research sites
Examples are University of Maryland, Iowa State University, Trinity College Dublin, NYU, BYU, Indiana University, CSC Finland, Utrecht University, University of Nebraska, University of Utah, North Carolina State University, Stony Brook, Tulane University, Washington State University, Georgia Tech, National Institutes of Health, Wharton, Yale, Harvard, Minnesota, Michigan State, Case Western Reserve University, University of South Dakota, Northern Arizona University, University of Pennsylvania, Stanford, University of Southern California, UC Santa Barbara, UC Irvine, UC Berkeley, and SURFnet.
Rclone and cybercrime
May 2020 reports stated rclone had been used by hackers to exploit Diebold Nixdorf ATMs with ProLock ransomware. The FBI issued a Flash Alert MI-000125-MW on May 4, 2020, in relation to the compromise. They issued a further, related alert 20200901–001 in September 2020. Attackers had exfiltrated / encrypted data from organisations involved in healthcare, construction, finance, and legal services. Multiple US government agencies, and industrial entities were affected. Researchers established the hackers spent about a month exploring the breached networks, using rclone to archive stolen data to cloud storage, before encrypting the target system. Reported targets included LaSalle County, and the city of Novi Sad.
The FBI warned in January 2021, in Private Industry Notification 20210106–001, of extortion activity using Egregor ransomware and rclone. Organisations worldwide had been threatened with public release of exfiltrated data. In some cases rclone had been disguised under the name svchost. Bookseller Barnes & Noble, US retailer Kmart, games developer Ubisoft and the Vancouver metro system have been reported as victims.
An April 2021 cybersecurity investigation into SonicWall VPN zero-day vulnerability SNWLID-2021-0001 by FireEye's Mandiant team established that attackers UNC2447 used rclone for reconnaissance and exfiltration of victims' files. Cybersecurity and Infrastructure Security Agency Analysis Report AR21-126A confirmed this use of rclone in FiveHands ransomware attacks.
A June 2021 Microsoft Security Intelligence Twitter post identified use of rclone in BazaCall cyber attacks. The attackers sent emails encouraging recipients to contact a fake call centre to cancel a paid-for service. The call centre team then instructed victims to download a hostile file that installed malware on the target network, ultimately allowing use of rclone for covert extraction of potentially sensitive data.
Rclone Wars
In a 2021 Star Wars Day blog article, Managed Security Service Provider Red Canary announced Rclone Wars, an allusion to Clone Wars. The post notes illicit use of other legitimate file transfer utilities in exfiltrate-and-extort schemes but focuses on MEGAsync, MEGAcmd and rclone. To identify use of renamed rclone executables on compromised devices, the authors suggest monitoring for distinctive rclone top-level commands and command-line flag strings such as remote: and \\.
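That monitoring idea can be sketched as a shell fragment. The process command line below is fabricated for illustration (a renamed rclone binary masquerading as svchost); the pattern simply looks for the distinctive remote: reference and flag strings of the kind the authors suggest watching for:

```shell
# Fabricated example of a command line captured from a process listing.
proc_cmdline='svchost copy C:\finance secret-remote:loot --transfers 10'

# Flag it if it contains rclone-style remote references or rclone flags.
verdict=clean
if printf '%s' "$proc_cmdline" | grep -Eq 'remote:|--transfers'; then
  verdict=suspicious
fi
echo "$verdict"
```

A real detection would of course match against live process telemetry rather than a hard-coded string, and would tune the pattern to reduce false positives.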
Rclone or rsync
Rsync transfers files to and from other computers that have rsync installed. It operates at the block, rather than file, level and has a delta algorithm so that it only needs to transfer changes in files. Rsync preserves file attributes and permissions. Rclone has a wider range of content management capabilities, and types of backend it can address, but only works at a whole-file / object level. It does not currently preserve permissions and attributes. Rclone is designed to have some tolerance of intermittent and unreliable connections or remote services. Its transfers are optimised for high-latency networks. Rclone decides which whole files / objects to transfer after obtaining checksums from the remote server for comparison. Where checksums are not available, rclone can use object size and timestamp.
Rsync is single-threaded. Rclone is multi-threaded with a user-definable number of simultaneous transfers.
Rclone can pipe data between two completely remote locations, sometimes without local download. During an rsync transfer, one side must be a local drive.
Rclone ignores trailing slashes. Rsync requires their correct use. Rclone filters require the use of ** to refer to the contents of a directory. Rsync does not.
Eponymous cloud storage service rsync.net provides remote unix filesystems so that customers can run rsync and other standard Unix tools. They also offer rclone only accounts.
In 2016, a poster on Hacker News summarised rclone's relationship to rsync as:- (rclone) exists to give you rsync to things that aren't rsync. If you want to rsync to things that are rsync, use rsync.
See also
Rsync
Comparison of file synchronization software
References
External links
Steam and water analysis system

Steam and water analysis system (SWAS) is a system dedicated to the analysis of steam or water. In power stations, it is usually used to analyze boiler steam and water to ensure that the water used to generate electricity is free of impurities which can cause corrosion of metallic surfaces, such as those in the boiler and turbine.
Steam and water analysis system (SWAS)
Corrosion and erosion are major concerns in thermal power plants operating on steam. The steam reaching the turbines needs to be ultra-pure and hence needs to be monitored for its quality. A well designed steam and water analysis system (SWAS) can help in monitoring the critical parameters in the steam. These parameters include pH, conductivity, silica, sodium, dissolved oxygen, phosphate and chlorides. A well designed SWAS must ensure that the sample remains representative up to the point of analysis. To achieve this, it is important to take care of the following aspects of the sample:
Sample Extraction
Sample Transport
Conditioning
Analysis
Controls
These aspects are well explained in international standards like ASME PTC 19.11-2008 and VGB S006 -00 2012_09_EN. The International Association for the Properties of Water and Steam (IAPWS) also gives good information on important measurement points and its significance.
Sample handling system components are the most important pressure parts of the system and need certification to ASME Section VIII Div 1 & Div 2 or PED. Country-specific certifications are also often required, such as:
American: ASME Section VIII Div 1 and Div 2/ ASME U and S Stamp
Europe: Pressure Equipment Directive (PED)
India: Indian Boiler Regulation (IBR) form IIIC
Malaysia: DOSH
Russia: CU TR Certification
Sample extraction
To ensure that the sample that is going to be extracted for analysis represents the process conditions exactly, it is important to choose the correct sample extraction probe. The validity of the analysis is largely dependent on the sample being truly representative. As the probe is going to be directly attached to the process pipe work, it may have to withstand severe conditions. For most applications, the sample probe is manufactured to the stringent codes applicable to high-pressure, high-temperature pipework.
The selection of the right type of probe is a challenge. The choice depends on the process stream parameter to be measured, the required sample flow rate and the location of the sampling point (also called the 'tapping point'). An important aspect of sample extraction probe design is that the steam must enter the probe at the same velocity as the steam flowing in the pipeline from which the sample (steam or water) is extracted. These probes are designed per the ASTM D1066 standard for steam extraction and must be designed and tested for structural integrity at high sample pressures, temperatures and velocities.
Sample extraction probes are extremely important and necessary for proper analysis of suspended impurities such as corrosion products, total iron, copper and carryover effects.
Sample Transport
Section 4 of the ASME PTC 19.11-2008 standard describes the design of sample transport lines. The following aspects need care:
(1) Line size selection:
(a) Transport time (i.e. sample velocity) from the isokinetic sample extraction probe to the sampling system should be kept to a minimum. The SWAS room should be located close to the low-pressure water (condensate) sample points, such as the CEP discharge and condensate polishing plant, where sample velocities are lower.
(b) Pressure drop in the lines is also important: the sample should meet the least possible resistance, so joints and bends in the pipeline should be minimal. Sample lines must also slope continuously to avoid accumulation of sample in the lines.
(2) Line material:
Sample transport lines should be of at least stainless steel grade SS316, to avoid corrosion of the lines, which leads to wrong measurement and analysis. For high-pressure, high-temperature samples (superheated steam, reheated steam, saturated steam, separator drains, feed water at economiser inlet), SS316H, which withstands the higher sample temperatures, must be used.
Sample conditioning system
The sample conditioning system in some countries is also called the sampling system, wet panel or wet rack. It houses the various components for sample conditioning and may be an open rack or a closed enclosure with a corridor in between. The system contains sample conditioning equipment and a grab sampling sink. At this stage the sample is first cooled in sample coolers, depressurised in a pressure regulator and then fed to the various analyzers, while a back pressure regulator keeps the flow characteristics constant.
The need to condition the sample exists because the sensors used for online analysis are not able to handle the water/steam sample at high temperatures or pressures. To maintain a common reference of analysis, the sample analysis should be done at 25 °C. However, since temperature compensation logic is available in most analyzers today, it is common practice to cool the sample to 25–40 °C with the help of a well engineered sample conditioning system and then feed the conditioned sample to the analyzers.
However, if an uncompensated sample is to be analyzed, it becomes essential to cool the sample to 25 °C +/- 1 °C. This can be achieved by two-stage cooling. In the first stage (also known as 'primary cooling'), the sample is cooled using the available cooling water. In most countries, cooling water is available in the range of 30–32 °C. This cooling water can cool the sample to about 35 °C (considering an approach temperature of 3 to 5 °C). A sample cooler, a heat exchanger specially designed for SWAS applications, is used to achieve this. The preferred sample cooler for primary cooling is a double-helix, coil-in-shell design providing contra-flow heat exchange.
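The primary-cooling limit described above is simple arithmetic: the sample cannot be cooled below the cooling water temperature plus the cooler's approach temperature. A sketch using the figures quoted in the text:

```shell
cooling_water_c=32   # typical cooling water temperature, deg C (from the text)
approach_c=3         # approach temperature of the sample cooler, deg C
# Lowest sample temperature the primary cooler can reach:
primary_limit_c=$((cooling_water_c + approach_c))
echo "$primary_limit_c"
```

Cooling the remaining 35 °C to 25 °C is what the secondary, chilled-water circuit is for.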
The remaining part of cooling (i.e. from 35 to 25 °C) is achieved by using chilled water in the secondary cooling circuit. A chilled water supply is required from the plant or else an independent chiller package can be considered for this purpose along with SWAS.
The sampling system can be an 'open-frame free standing' type design or a fully or partially closed design, depending on the choice of the user, the environment it is supposed to operate in & the criticality of operation.
Sample coolers
In the sampling system, sample coolers play a major role in bringing down the temperature of hot steam (or water) to a temperature acceptable to the sensors of the on-line analyser. Some of the important design aspects of sample coolers are:
Preferably a sample cooler design should be double helix, coil in shell type, so designed as to provide contra-flow heat exchange. This makes the sample cooler more compact, yet highly effective in terms of heat exchange.
Sample coils made of stainless steel SS-316 are suitable for normal cooling water conditions. However, if the chloride content in the cooling water is high (more than 35 ppm), then other suitable coil materials such as Monel or Inconel need to be used depending upon the quality of cooling water.
A “built-in” safety relief valve on the shell side of the cooler is a must, to prevent explosion of the shell in the event of sample coil failure.
The sample cooler design must meet the requirements of the ASME PTC 19.11 standard.
Sample coolers handle very high pressure and temperature steam and water samples, so it is very important to design these helical-tube heat exchangers in line with pressure vessel standards.
They are unfired pressure vessels, designed in line with ASME Section VIII Div 1 & 2 or the Pressure Equipment Directive (PED). Many countries also ask for local certification, such as:
American: ASME Section VIII Div 1 and Div 2/ ASME U and S Stamp
Europe: Pressure Equipment Directive (PED)
India: Indian Boiler Regulation (IBR) form IIIC
Malaysia: DOSH
Russia: CU TR Certification
Pressure reducers
After the sample is cooled, its pressure must be reduced to meet the requirements of the sensors that receive it. Sensors such as those for pH, conductivity, silica, sodium and hydrazine require a low-pressure sample for healthy operation.
A rod-in-tube type of pressure reducer is the most effective method of pressure reduction recommended in the ASME PTC 19.11-2008 standard.
A rod-in-tube pressure reducer with a thermal and safety relief valve device is considered the most reliable and safe design; the single rod-in-tube unit is a system in itself that takes care of several important aspects of sample conditioning. The pressure reducer in the sampling system is rated for very high pressure (450 bar). No filters are needed upstream of rod-in-tube pressure reducers, as cleaning is done on-line, without tools and without a shutdown.
Safety of analyzers against high temperature
Analyzers must be protected from high-temperature samples, for example in case of failure of cooling water to the primary sample coolers. There are various methods of stopping the sample to the analyzer in such a situation. The most popular and simple method is the use of mechanical thermal shut-off valves, which close and block the sample to the analyzer in case of cooling water failure.
These valves must:
(1) Have a high pressure rating and be designed in line with ASME standards, to assure the safety of the operator and of the instruments downstream.
(2) Be of a manual-reset design, as recommended in the ASME PTC 19.11-2008 standard.
(3) Be equipped with a potential-free alarm contact for operator indication in the control system.
Online Analysis of Steam and Water Cycle Chemistry Parameters
A sample analysis system in some countries is also called an analyser panel, dry panel or dry rack. It is usually a free-standing enclosed panel containing the transmitter electronics, usually mounted on panels. At this stage the sample is analyzed for conductivity, pH, silica, phosphate, chloride, dissolved oxygen, hydrazine, sodium etc.
Online Conductivity measurements in SWAS
In the steam and water cycle, conductivity is the most basic but also the most important measurement. Specific conductivity (total conductivity), acidic conductivity (conductivity after cation exchanger, CACE) and degassed cation conductivity are measured continuously at different locations in the steam and water cycle. Conductivity measurements give an indication of contamination of water / steam with any kind of salts. These salts can get added to the water / steam from the atmosphere or through leakages in heat exchangers etc. The conductivity of ultra-pure water is almost zero (as low as 0.05 microsiemens/cm), while the addition of even 1 ppm of a salt raises the conductivity many times over. Conductivity is thus a very good general-purpose watchdog which can give a quick indication of plant malfunction or possible leakages.
Typical points in the steam circuit where conductivity should be monitored are: drum steam, drum water, high pressure heaters, low pressure heaters, condenser, plant effluent, D.M. plant, and make-up water to the D.M. plant.
Three types of conductivity measurement are usually done:
Specific conductivity,
Cation conductivity and
De-gassed cation conductivity.
There is a difference between these three types of measurements.
Specific conductivity gives overall conductivity value of the sample and is the most generic measurement
Cation conductivity is the conductivity measured after the cation column. In the cation column, H+ resin replaces the positive ions of all dissolved matter in the solution. When this happens, the treatment chemicals, which are the desired ones (and are basic or alkaline in nature), get converted to H2O, i.e. water (e.g. NH4OH exchanged on the H+ resin leaves NH4+ on the resin and H2O in the sample). The impurities, which are salts of various kinds, get converted to the corresponding acids (e.g. the Na+ in NaCl is exchanged for H+, giving HCl). The masking effect of treatment chemicals on the conductivity value is thus eliminated, while the conversion of salts to the corresponding acids increases their conductivity to around three times its original value. In effect, cation conductivity amplifies the conductivity due to impurities and eliminates the conductivity due to treatment chemicals.
De-gassed conductivity is the finest level of conductivity measurement. Here the masking effect of dissolved gases, mainly CO2, on the conductivity measurement is removed. In the de-gassed conductivity system, a reboiler chamber heats the sample so that the dissolved gases are liberated, and a cooling mechanism then cools the hot liquid again. The conductivity measured after this process is indeed the 'real' value of conductivity due to dissolved impurities, the dissolved gases having been eliminated. Degas columns are designed in line with the ASTM D4519 standard. These measurements are also recommended in standards such as ASME PTC 19.11-2008 and VGB S006-00 2012_09_EN; the IAPWS guidelines provide further information.
These three conductivity measurements are very important and are also used to calculate pH and dissolved CO2 values in the steam and water cycle.
Online pH Measurement
pH measurement is also a basic yet critical measurement for the steam and water cycle. Monitoring the pH value of the feed water gives a direct indication of the alkalinity or acidity of this water. Ultra-pure water has a pH value of 7. In the steam circuit it is normal practice to keep the pH value of the feed water at slightly alkaline levels using chemical dosing. This helps in preventing corrosion of pipework and other equipment.
Typical points in the steam circuit where pH should be monitored are: drum water, high pressure heaters, make-up condensate, plant effluent, condenser, and cooling water.
Online Dissolved Oxygen Measurement
In the steam and water circuit, the temperature of the water is raised from room temperature to superheated steam temperatures. In the temperature range of 200 to 250 °C (feed water), dissolved oxygen causes corrosion of components and piping. Iron reacts with dissolved oxygen in the feed water circuit, and the resulting pitting may eventually cause puncturing and failure of parts in the steam-water circuit. Parts such as condensers, low pressure heaters (LPH), feed water tanks, high pressure heaters and economisers need to be protected from dissolved oxygen attack. Dissolved oxygen also promotes electrolytic action between dissimilar metals, causing corrosion and leakage at joints and gaskets.
In power plants, various feed water treatments are adopted to minimise corrosion:
(1) All Volatile Treatment (AVT-R or AVT-O)
(2) Oxygenated Treatment (OT)
(3) Combined Water Treatment (CWT)
It is therefore important and critical to monitor and control dissolved oxygen and pH values in feed water systems. The typical points in the steam circuit where dissolved oxygen monitoring is required are: condenser outlet, L.P. heaters, and economiser inlet.
Online Hydrazine (Oxygen Scavenger) Measurement
In All Volatile Treatment-Reducing (AVT-R), chemicals like hydrazine, carbohydrazide or DEHA are dosed into the boiler feed water. Such treatments are used for steam-water circuits with mixed metallurgy. These chemicals act as an oxygen scavenger and a source of feed water alkalinity, which has well known advantages, e.g.:
a) It prevents foaming and carryover from the boiler.
b) It minimizes deposits on metal surfaces.
c) It reduces dissolved oxygen corrosion.
In addition to its oxygen-scavenging function, hydrazine helps to maintain a protective magnetite layer over steel surfaces, and maintains feed water alkalinity to prevent acidic corrosion. The nominal dosage rate for hydrazine in feed water is about three times its oxygen level. Underdosing of hydrazine leads to increased corrosion; overdosing represents a costly waste. Monitoring dissolved oxygen levels alone is not sufficient to control the optimum concentration, because it provides no measure of any excess hydrazine.
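The nominal dosing rule above (hydrazine at about three times the dissolved oxygen level) can be sketched numerically; the 7 ppb dissolved oxygen figure is a hypothetical reading, not from the text:

```shell
dissolved_o2_ppb=7
# Nominal hydrazine dose is about three times the dissolved oxygen level.
hydrazine_dose_ppb=$((3 * dissolved_o2_ppb))
echo "$hydrazine_dose_ppb"
```

In practice the dose is trimmed against the measured hydrazine residual, which is why hydrazine is monitored directly rather than inferred from dissolved oxygen alone.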
The typical points in the steam circuit where hydrazine monitoring is required are: re-heaters, economiser inlet, and L.P. heaters.
Online Silica (SiO2--) Measurement
When it comes to the safety and efficiency of the steam turbine and boiler in a power plant, silica becomes one of the most critical factors to be monitored. Deposition of various impurities on turbine blades has been identified as one of the most common problems. Of the various compounds that deposit on turbine blades, silica (SiO2) can deposit at lower operating pressures too, so silica deposition is more common in turbines than other types of deposit. Silica usually deposits in the intermediate-pressure and low-pressure sections of the turbine. These deposits are hard to remove, disturb the geometry of the turbine blades and ultimately result in vibration, causing imbalance and loss of output from the turbine.
Another important area of concern as far as silica deposition is concerned is the boiler tube. Silica scale is one of the hardest scales to remove. Because of its low thermal conductivity, even a very thin silica deposit can reduce heat transfer considerably, reducing efficiency and leading to hot spots and ultimately ruptures.
Because of all these issues, it is extremely important to closely monitor silica levels by using on-line silica analyzers that can measure silica levels to a ppb (parts per billion) level.
Online Sodium (Na+) Ion measurement
Sodium measurement is one of the most critical measurements in the steam and water cycle for leak detection in the circuit. The measurement of sodium is recognized - among other chemical parameters - as an effective telltale revealing the condition of a high-purity water/steam circuit. The presence of sodium signals contamination with potentially corrosive anions, e.g. chlorides, sulfates etc. Under conditions of high pressure and temperature, neutral sodium salts exhibit considerable steam solubility. NaCl and NaOH, in particular, are known to be associated with stress corrosion cracking of boiler and superheater tubes. The measurement of sodium, acting as a carrier of potentially corrosive anions, is now recognized as an effective means to monitor steam purity.
DM Water after Cation and Mixed bed: Sampling after cation exchange is one of the most important parameters in trace sodium monitoring because it rapidly alerts the operator about resin bed exhaustion. Sodium measurement is particularly valuable in plants cooled by saline waters, especially if there is a high risk of condenser leakage and no provision for condensate polishing. Consequently, while small leaks may be extremely difficult to locate and eliminate, their detection and escalation is most readily monitored by sodium measurement. SWAN's sodium analyzers can detect up to 0.001 ppb or 1 ppt of trace sodium in water treatment facilities. This sensitivity allows operators to follow trend changes before any leakage requires immediate action. Additionally, this advantage can be converted over time to analyze the origin of the leakage and to plan either a production reduction, or even to stop production far enough in advance to avoid costly and unexpected emergency shut downs.
Boiler: Solid conditioning agents, such as tri-sodium phosphate (TSP) and sodium hydroxide (caustic), are used for boiler drum water treatment. If these chemicals are carried over with the steam, they may cause deposits in the turbine and therefore need to be considered as potentially corrosive impurities.
Steam: Sodium is also measured in power plant water and steam samples because it is a common corrosive contaminant and can be detected at very low concentrations in the presence of higher amounts of ammonia and/or amine treatment which have a relatively high background conductivity. Steam purity can be more accurately assessed by measuring sodium concentration in both steam and condensate, thus determining the “sodium balance”. The two concentrations should be equal. A higher level of sodium in the condensate indicates a condenser leakage. A lower level of sodium in the condensate indicates deposition of sodium in the steam circuit.
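The sodium-balance logic described above can be sketched as a simple comparison. The readings in ppb below are hypothetical; the decision rule follows the text: equal concentrations are normal, higher sodium in the condensate suggests condenser leakage, lower suggests deposition in the steam circuit:

```shell
# Hypothetical sodium readings, in ppb.
na_steam_ppb=2
na_condensate_ppb=5

if [ "$na_condensate_ppb" -gt "$na_steam_ppb" ]; then
  verdict='possible condenser leakage'
elif [ "$na_condensate_ppb" -lt "$na_steam_ppb" ]; then
  verdict='possible deposition in steam circuit'
else
  verdict='sodium balance normal'
fi
echo "$verdict"
```

A plant implementation would apply a tolerance band around equality rather than an exact comparison, since both analyzers carry measurement uncertainty.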
Condensate: Sodium measurement should be the preferred option for early warning of leakage of impurities into the condensate. It also plays a key role in condensate polishing plant controls.
Online Phosphate Measurement in Boiler Drum Water
Phosphate measurement is important only for drum-type boilers. Solid conditioning agents such as tri-sodium phosphate (TSP) are widely used as dosing chemicals in boiler drums. Excess dosing of these chemicals can lead to issues such as foaming and carryover of salts into the steam. Controlling phosphate dosing under variable steam loads is a challenging task, mainly because of phosphate hideout. Users therefore generally prefer phosphate measurement in drum water samples.
See also
Boiler (power generation)
Supercritical steam generator
References
Cone (algebraic geometry)

In algebraic geometry, a cone is a generalization of a vector bundle. Specifically, given a scheme X, the relative Spec

$C_R = \operatorname{Spec}_X R$

of a quasi-coherent graded $\mathcal{O}_X$-algebra R is called the cone or affine cone of R. Similarly, the relative Proj

$P(C_R) = \operatorname{Proj}_X R$

is called the projective cone of $C_R$ or R.
Note: The cone comes with the $\mathbb{G}_m$-action due to the grading of R; this action is part of the data of a cone (whence the terminology).
Examples
If X = Spec k is a point and R is a homogeneous coordinate ring, then the affine cone of R is the (usual) affine cone over the projective variety corresponding to R.
If $R = \bigoplus_{n=0}^{\infty} I^n/I^{n+1}$ for some ideal sheaf I, then $\operatorname{Spec}_X R$ is the normal cone to the closed scheme determined by I.
If $R = \bigoplus_{n=0}^{\infty} L^{\otimes n}$ for some line bundle L, then $\operatorname{Spec}_X R$ is the total space of the dual of L.
More generally, given a vector bundle (finite-rank locally free sheaf) E on X, if $R = \operatorname{Sym}(E^{*})$ is the symmetric algebra generated by the dual of E, then the cone $\operatorname{Spec}_X R$ is the total space of E, often written just as E, and the projective cone $\operatorname{Proj}_X R$ is the projective bundle of E, which is written as $\mathbb{P}(E)$.
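As a sanity check (a standard special case, not spelled out in the source), for the trivial bundle $E = \mathcal{O}_X^{\oplus r}$ the construction recovers affine space over X:

```latex
\operatorname{Spec}_X \operatorname{Sym}\bigl((\mathcal{O}_X^{\oplus r})^{*}\bigr)
  \;=\; \operatorname{Spec}_X \mathcal{O}_X[x_1, \dots, x_r]
  \;=\; X \times \mathbb{A}^r .
```

Here the $x_i$ are the coordinates dual to a basis of $E$, and the $\mathbb{G}_m$-action scales the fibres.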
Let F be a coherent sheaf on a Deligne–Mumford stack X. Then let C(F) := Spec_X(Sym(F)). For any morphism S → X, since global Spec is a right adjoint to the direct image functor, we have: C(F)(S) = Hom_{O_S}(F|_S, O_S); in particular, C(F) is a commutative group scheme over X.
Let R be a graded O_X-algebra such that R_0 = O_X and R_1 is coherent and locally generates R as an O_X-algebra. Then there is a closed immersion
C_R ↪ C(R_1)
given by the surjection Sym(R_1) → R. Because of this, C(R_1) is called the abelian hull of the cone C_R. For example, if R = ⊕_{n≥0} I^n/I^{n+1} for some ideal sheaf I, then this embedding is the embedding of the normal cone into the normal bundle.
Computations
Consider the complete intersection ideal and let be the projective scheme defined by the ideal sheaf . Then the isomorphism of -algebras is given by
Properties
If is a graded homomorphism of graded OX-algebras, then one gets an induced morphism between the cones:
.
If the homomorphism is surjective, then one gets closed immersions
In particular, assuming R0 = OX, the construction applies to the projection (which is an augmentation map) and gives
.
It is a section; i.e., is the identity and is called the zero-section embedding.
Consider the graded algebra R[t] with variable t having degree one: explicitly, the n-th degree piece is
.
Then the affine cone of it is denoted by . The projective cone is called the projective completion of CR. Indeed, the zero-locus t = 0 is exactly and the complement is the open subscheme CR. The locus t = 0 is called the hyperplane at infinity.
O(1)
Let R be a quasi-coherent graded OX-algebra such that R0 = OX and R is locally generated as OX-algebra by R1. Then, by definition, the projective cone of R is:
where the colimit runs over open affine subsets U of X. By assumption R(U) has finitely many degree-one generators xi's. Thus,
Then has the line bundle O(1) given by the hyperplane bundle of ; gluing such local O(1)'s, which agree locally, gives the line bundle O(1) on .
For any integer n, one also writes O(n) for the n-th tensor power of O(1). If the cone C = Spec_X R is the total space of a vector bundle E, then O(-1) is the tautological line bundle on the projective bundle P(E).
Remark: When the (local) generators of R have degree other than one, the construction of O(1) still goes through but with a weighted projective space in place of a projective space; so the resulting O(1) is not necessarily a line bundle. In the language of divisors, this O(1) corresponds to a Q-Cartier divisor.
Notes
References
Lecture Notes
References
Algebraic geometry | Cone (algebraic geometry) | [
"Mathematics"
] | 860 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
50,398,478 | https://en.wikipedia.org/wiki/Selection%20principle | In mathematics, a selection principle is a rule asserting
the possibility of obtaining mathematically significant objects by
selecting elements from given sequences of sets. The theory of selection principles
studies these principles and their relations to other mathematical properties.
Selection principles mainly describe covering properties,
measure- and category-theoretic properties, and local properties in
topological spaces, especially function spaces. Often, the
characterization of a mathematical property using a selection
principle is a nontrivial task leading to new insights on the
characterized property.
The main selection principles
In 1924, Karl Menger
introduced the following basis property for metric spaces:
Every basis of the topology contains a sequence of sets with vanishing
diameters that covers the space. Soon thereafter,
Witold Hurewicz
observed that Menger's basis property is equivalent to the
following selective property: for every sequence of open covers of the space,
one can select finitely many open sets from each cover in the sequence, such that the family of all selected sets covers the space.
Topological spaces having this covering property are called Menger spaces.
Hurewicz's reformulation of Menger's property was the first important
topological property described by a selection principle.
Let A and B be classes of mathematical objects.
In 1996, Marion Scheepers
introduced the following selection hypotheses,
capturing a large number of classic mathematical properties:
S1(A, B): For every sequence A1, A2, … of elements from the class A, there are elements a1 ∈ A1, a2 ∈ A2, … such that {an : n ∈ ℕ} ∈ B.
Sfin(A, B): For every sequence A1, A2, … of elements from the class A, there are finite subsets F1 ⊆ A1, F2 ⊆ A2, … such that ⋃_{n∈ℕ} Fn ∈ B.
In the case where the classes A and B consist of covers of some ambient space, Scheepers also introduced the following selection principle.
Ufin(A, B): For every sequence A1, A2, … of elements from the class A, none containing a finite subcover, there are finite subsets F1 ⊆ A1, F2 ⊆ A2, … such that {⋃F1, ⋃F2, …} ∈ B.
Later, Boaz Tsaban identified the prevalence of the following related principle:
\binom{A}{B}: Every member of the class A includes a member of the class B.
The notions thus defined are selection principles. An instantiation of a selection principle, by considering specific classes and , gives a selection (or: selective) property. However, these terminologies are used interchangeably in the literature.
Variations
For a set A and a family 𝓕 of subsets of a space X, the star of A in 𝓕 is the set St(A, 𝓕) = ⋃{F ∈ 𝓕 : A ∩ F ≠ ∅}.
In 1999, Ljubisa D.R. Kocinac introduced the following star selection principles:
S1*(A, B): For every sequence 𝒰1, 𝒰2, … of elements from the class A, there are elements Un ∈ 𝒰n such that {St(Un, 𝒰n) : n ∈ ℕ} ∈ B.
Sfin*(A, B): For every sequence 𝒰1, 𝒰2, … of elements from the class A, there are finite subsets 𝒱n ⊆ 𝒰n such that {St(V, 𝒰n) : V ∈ 𝒱n, n ∈ ℕ} ∈ B.
The star selection principles are special cases of the general selection principles. This can be seen by modifying the definition of the family accordingly.
Covering properties
Covering properties form the kernel of the theory of selection principles. Selection properties that are not covering properties are often studied by using implications to and from selective covering properties of related spaces.
Let X be a topological space. An open cover of X is a family of open sets whose union is the entire space X. For technical reasons, we also request that the entire space X is not a member of the cover. The class of open covers of the space X is denoted by O. (Formally, O = O(X), but usually the space is fixed in the background.) The above-mentioned property of Menger is, thus, Sfin(O, O). In 1942, Fritz Rothberger considered Borel's strong measure zero sets, and introduced a topological variation later called Rothberger space (also known as C′′ space). In the notation of selections, Rothberger's property is the property S1(O, O).
An open cover 𝒰 of X is point-cofinite if it has infinitely many elements, and every point x ∈ X belongs to all but finitely many sets U ∈ 𝒰. (This type of cover was considered by Gerlits and Nagy, in the third item of a certain list in their paper. The list was enumerated by Greek letters, and thus these covers are often called γ-covers.) The class of point-cofinite open covers of X is denoted by Γ. A topological space is a Hurewicz space if it satisfies Ufin(O, Γ).
An open cover 𝒰 of X is an ω-cover if every finite subset of X is contained in some member of 𝒰. The class of ω-covers of X is denoted by Ω. A topological space is a γ-space if it satisfies \binom{Ω}{Γ}.
By using star selection hypotheses one obtains properties such as star-Menger (Sfin*(O, O)), star-Rothberger (S1*(O, O)) and star-Hurewicz (Ufin*(O, Γ)).
The Scheepers Diagram
There are 36 selection properties of the form , for and . Some of them are trivial (hold for all spaces, or fail for all spaces). Restricting attention to Lindelöf spaces, the diagram below, known as the Scheepers Diagram, presents nontrivial selection properties of the above form, and every nontrivial selection property is equivalent to one in the diagram. Arrows denote implications.
Local properties
Selection principles also capture important local properties.
Let X be a topological space, and x ∈ X. The class of sets A ⊆ X in the space that have the point x in their closure is denoted by Ωx. The subclass of Ωx consisting of its countable elements is denoted by Ωx^ctbl. The class of sequences in X that converge to x is denoted by Γx.
A space X is Fréchet–Urysohn if and only if it satisfies \binom{Ωx}{Γx} for all points x ∈ X.
A space X is strongly Fréchet–Urysohn if and only if it satisfies S1(Ωx, Γx) for all points x ∈ X.
A space X has countable tightness if and only if it satisfies \binom{Ωx}{Ωx^ctbl} for all points x ∈ X.
A space X has countable fan tightness if and only if it satisfies Sfin(Ωx, Ωx) for all points x ∈ X.
A space X has countable strong fan tightness if and only if it satisfies S1(Ωx, Ωx) for all points x ∈ X.
Topological games
There are close connections between selection principles and topological games.
The Menger game
Let X be a topological space. The Menger game played on X is a game for two players, Alice and Bob. It has an inning per each natural number n. At the n-th inning, Alice chooses an open cover 𝒰n of X,
and Bob chooses a finite subset 𝓕n of 𝒰n.
If the family ⋃_{n∈ℕ} 𝓕n is a cover of the space X, then Bob wins the game. Otherwise, Alice wins.
A strategy for a player is a function determining the move of the player, given the earlier moves of both players. A strategy for a player is a winning strategy if each play where this player sticks to this strategy is won by this player.
A topological space is Menger if and only if Alice has no winning strategy in the Menger game played on this space.
Let X be a metric space. Bob has a winning strategy in the Menger game played on the space X if and only if the space is σ-compact.
Note that among Lindelöf spaces, metrizable is equivalent to regular and second-countable, and so the previous result may alternatively be obtained by considering limited information strategies. A Markov strategy is one that only uses the most recent move of the opponent and the current round number.
Let X be a regular space. Bob has a winning Markov strategy in the Menger game played on the space X if and only if the space is σ-compact.
Let X be a second-countable space. Bob has a winning Markov strategy in the Menger game played on the space X if and only if he has a winning perfect-information strategy.
In a similar way, we define games for other selection principles from the given Scheepers Diagram. In all these cases a topological space has a property from the Scheepers Diagram if and only if Alice has no winning strategy in the corresponding game. But this does not hold in general:
Let K be the family of k-covers of a space; that is, covers such that every compact set in the space is covered by some member of the cover.
Francis Jordan demonstrated a space where the selection principle holds, but
Alice has a winning strategy for the corresponding game.
Examples and properties
Every Menger space is a Lindelöf space.
Every σ-compact space (a countable union of compact spaces) is Hurewicz.
Every Hurewicz space is Menger.
Every Rothberger space is Menger.
Assuming the Continuum Hypothesis, there are sets of real numbers witnessing that the above implications cannot be reversed.
Every Luzin set is Menger but not Hurewicz.
Every Sierpiński set is Hurewicz.
Subsets of the real line (with the induced subspace topology) holding selection principle properties, most notably Menger and Hurewicz spaces, can be characterized by their continuous images in the Baire space ℕ^ℕ. For functions f, g : ℕ → ℕ, write f ≤* g if f(n) ≤ g(n) for all but finitely many natural numbers n. Let Y be a subset of ℕ^ℕ. The set Y is bounded if there is a function g such that f ≤* g for all functions f ∈ Y. The set Y is dominating if for each function f there is a function g ∈ Y such that f ≤* g.
A subset of the real line is Menger if and only if every continuous image of that space into the Baire space is not dominating.
A subset of the real line is Hurewicz if and only if every continuous image of that space into the Baire space is bounded.
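The eventual-domination relation ≤* (g eventually dominates f when f(n) ≤ g(n) for all but finitely many n) is easy to experiment with numerically. The sketch below is illustrative only: ≤* quantifies over cofinitely many n, so a computer check over a finite window is just a proxy, and the function names are ours.

```python
# Finite-window illustration of the eventual-domination order <=*.
# Caveat: <=* is a statement about "all but finitely many n", so checking
# a finite window can only approximate it.
def le_star(f, g, window=1000):
    """Proxy for f <=* g: all violations f(n) > g(n) die out before the window ends."""
    violations = [n for n in range(window) if f(n) > g(n)]
    return not violations or violations[-1] < window - 1

f = lambda n: n + 100          # exceeds g only on an initial segment (n < 100)
g = lambda n: 2 * n
print(le_star(f, g))           # True: f(n) <= g(n) for all n >= 100

h = lambda n: n * n            # eventually exceeds any linear function
print(le_star(h, g))           # False: violations persist to the window's end
```

A bounded family is one whose members are all ≤*-below a single g; a dominating family reaches ≤*-above every f.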
Connections with other fields
General topology
Every Menger space is a D-space.
Let P be a property of spaces. A space is productively P if, for each space with property P, the product space has property P.
Every separable productively paracompact space is .
Assuming the Continuum Hypothesis, every productively Lindelöf space is productively
Let be a subset of the real line, and be a meager subset of the real line. Then the set is meager.
Measure theory
Every Rothberger subset of the real line is a strong measure zero set.
Function spaces
Let X be a Tychonoff space, and Cp(X) be the space of continuous real-valued functions on X with the pointwise convergence topology.
X satisfies S1(Ω, Γ) if and only if Cp(X) is Fréchet–Urysohn if and only if Cp(X) is strongly Fréchet–Urysohn.
X satisfies S1(Ω, Ω) if and only if Cp(X) has countable strong fan tightness.
X satisfies Sfin(Ω, Ω) if and only if Cp(X) has countable fan tightness.
See also
Compact space
Sigma-compact
Menger space
Hurewicz space
Rothberger space
References
Properties of topological spaces
Topology | Selection principle | [
"Physics",
"Mathematics"
] | 1,987 | [
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
50,402,709 | https://en.wikipedia.org/wiki/La%20Compagnie%20des%20Lampes | La Compagnie des Lampes ("The Lamp Company") was a name used by several French companies all in the area of electrical products particularly lighting.
La Compagnie des Lampes (1888)
The original Compagnie des Lampes was set up at Ivry-sur-Seine in 1888. The plant was subsequently attached to the CGE (Compagnie Générale d'Electricité) on its acquisition in 1898. The plant is classified as a historical monument.
In 1915, the plant was the second to start manufacturing TM triodes ("Télégraphie Militaire") in France, under their Métal brand (the first was E.C.&A. Grammont (Lyon) under their Radio Fotos brand). Later they made tubes for domestic AC transformer heating such as the BW604 and BW1010 under their Métal-Secteur brand.
La Compagnie des Lampes (1911)
Founded in 1911 by Paul Blavier, La Compagnie des Lampes was a light bulb factory workshop, located in Saint-Pierre-Montlimart, near Cholet. The company changed its name in 1918 to become Manufacture de lampes à incandescence, la Française. It was associated with the Thomson group in the 1950s.
La Compagnie des Lampes (1921)
In 1921 CFTH (Compagnie Française Thomson-Houston) and CGE (Compagnie Générale d'Electricité) jointly created a new Compagnie des Lampes. It later became a major player in the field of lighting in France, notably through its brand MAZDA.
Between 1924 and 1939, it was part of the Phoebus cartel, an oligopoly that dominated the market for light bulbs while putting in place an agreement on the principle of planned obsolescence for their products.
Besides light bulbs (and like British Ediswan), CdL (1921) also made vacuum tubes under the Mazda brand, for example 6H8G (1947), 3T100A1 (1949), E1 (1950); since 1953 as LAMPE MAZDA: 2G21 (1953), 927 (1954), EL183 (1959), EF816 (1962).
Many of their tubes were also available from Compagnie Industrielle Française des Tubes Electroniques (CIFTE) under their Mazda-Belvu brand (originating from Societé Radio Belvu, which sold Grammont's Fotos tubes).
References
Electrical engineering companies of France
Lighting brands
Vacuum tubes | La Compagnie des Lampes | [
"Physics"
] | 535 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
42,621,886 | https://en.wikipedia.org/wiki/PBAD%20promoter | PBAD (systematically araBp) is a promoter found in bacteria and especially as part of plasmids used in laboratory studies. The promoter is a part of the arabinose operon whose name derives from the genes it regulates transcription of: araB, araA, and araD. In E. coli, the PBAD promoter is adjacent to the PC promoter (systematically araCp), which transcribes the araC gene in the opposite direction. araC encodes the AraC protein, which regulates activity of both the PBAD and PC promoters. The cyclic AMP receptor protein CAP binds between the PBAD and PC promoters, stimulating transcription of both when bound by cAMP.
Regulation of PBAD
Transcription initiation at the PBAD promoter occurs in the presence of high arabinose and low glucose concentrations. Upon arabinose binding to AraC, the N-terminal arm of AraC is released from its DNA binding domain via a “light switch” mechanism. This allows AraC to dimerize and bind the I1 and I2 operators. The AraC-arabinose dimer at this site contributes to activation of the PBAD promoter. Additionally, CAP binds to two CAP binding sites upstream of the I1 and I2 operators and helps activate the PBAD promoter. In the presence of both high arabinose and high glucose concentrations however, low cAMP levels prevent CAP from activating the PBAD promoter. It is hypothesized that PBAD promoter activation by CAP and AraC is mediated through contacts between the C-terminal domain of the α-subunit of RNA polymerase and the CAP and AraC proteins.
Without arabinose, and regardless of glucose concentration, the PBAD and PC promoters are repressed by AraC. The N-terminal arm of AraC interacts with its DNA binding domain, allowing two AraC proteins to bind to the O2 and I1 operator sites. The O2 operator is situated within the araC gene. An AraC dimer also binds to the O1 operator and represses the PC promoter via a negative autoregulatory feedback loop. The two bound AraC proteins dimerize and cause looping of the DNA. The looping prevents binding of CAP and RNA Polymerase, which normally activate the transcription of both PBAD and PC.
The spacing between the O2 and I1 operator sites is critical. Adding or removing 5 base pairs between the O2 and I1 operator sites abrogates AraC-mediated repression of the PBAD promoter. The spacing requirement arises from the double-helical nature of DNA, in which a complete turn of the helix spans about 10.5 nucleotides. Therefore, adding or removing 5 base pairs between the O2 and I1 operator sites rotates the helix roughly 180 degrees. This reverses the direction that the O2 operator faces when the DNA is looped and prevents dimerization of the O2-bound AraC with the I1-bound AraC.
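The helical-phasing argument above can be checked with back-of-the-envelope arithmetic (10.5 bp per turn is the standard figure for B-DNA):

```python
# B-DNA completes one helical turn every ~10.5 base pairs, so each base
# pair advances the helix by 360/10.5 degrees.
bp_per_turn = 10.5
deg_per_bp = 360.0 / bp_per_turn
print(round(deg_per_bp, 1))        # ~34.3 degrees per base pair
print(round(5 * deg_per_bp, 1))    # 5 bp ~ 171.4 degrees, roughly half a turn
```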
The PBAD promoter on expression plasmids
The PBAD promoter allows for tight regulation and control of a target gene in vivo. As explained above, PBAD is regulated by the presence or absence of arabinose. As tested, the promoter can be further repressed by reducing cAMP levels through the addition of glucose. Plasmid vectors have been constructed and tested with a selectable marker (CmR in this case), an origin of replication, araC and its operators, a multiple cloning site, and the PBAD promoter. Studies show that such vectors are highly expressed and can be used, in combination with chromosomal null alleles, to study loss of function of essential genes.
References
Gene expression | PBAD promoter | [
"Chemistry",
"Biology"
] | 749 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
42,625,971 | https://en.wikipedia.org/wiki/Plate%20column | A plate column (or tray column) is equipment used in chemistry to carry out unit operations where it is necessary to transfer mass between a liquid phase and a gas phase. In other words, it is a particular gas-liquid contactor. The peculiarity of this gas-liquid contactor is that the gas comes in contact with liquid through different stages; each stage is delimited by two plates (except the stage at the top of the column and the stage at the bottom of the column).
Some common applications of plate columns are distillation, gas-liquid absorption and liquid-liquid extraction. In general, plate columns are suitable for both continuous and batch operations.
Fluid dynamics
The feed to the column can be liquid, gas, or gas and liquid at equilibrium. Inside the column there are always two phases: one gas phase and one liquid phase. The liquid phase flows downward through the column via gravity, while the gas phase flows upward. These two phases come in contact at the holes, valves, or bubble caps that fill the area of the plates. The gas moves to the plate above through these devices, while the liquid moves to the plate below through a downcomer.
The liquid is collected at the bottom of the column and undergoes evaporation in a reboiler, while the gas is collected at the top and undergoes condensation in a condenser. The liquid and gas produced at the top and at the bottom are in general recirculated.
In the simplest case, there are just one feed stream and two product streams. In the case of the fractionating column, there are instead many product streams.
Notes
Bibliography
Robert Perry, Don W. Green, Perry's Chemical Engineers' Handbook, 8th ed., McGraw-Hill, 2007.
See also
Distillation
Packed bed
Fractionating column
Chemical equipment
Distillation | Plate column | [
"Chemistry",
"Engineering"
] | 377 | [
"Chemical equipment",
"Distillation",
"nan",
"Separation processes"
] |
42,628,154 | https://en.wikipedia.org/wiki/Master%20stability%20function | In mathematics, the master stability function is a tool used to analyze the stability of the synchronous state in a dynamical system consisting of many identical systems which are coupled together, such as the Kuramoto model.
The setting is as follows. Consider a system with N identical oscillators. Without the coupling, they evolve according to the same differential equation, say dx_i/dt = f(x_i), where x_i denotes the state of oscillator i. A synchronous state of the system of oscillators is where all the oscillators are in the same state.
The coupling is defined by a coupling strength σ, a matrix A_ij which describes how the oscillators are coupled together, and a function g of the state of a single oscillator. Including the coupling leads to the following equation:
dx_i/dt = f(x_i) + σ Σ_j A_ij g(x_j).
It is assumed that the row sums Σ_j A_ij vanish so that the manifold of synchronous states is neutrally stable.
The master stability function is now defined as the function which maps the complex number γ to the greatest Lyapunov exponent of the equation
dξ/dt = (Df + γ Dg) ξ.
The synchronous state of the system of coupled oscillators is stable if the master stability function is negative at γ = σλ, where λ ranges over the eigenvalues of the coupling matrix A.
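For a concrete (if simplified) illustration, suppose the synchronous trajectory is a fixed point, so the Jacobians Df and Dg are constant matrices; the greatest Lyapunov exponent then reduces to the largest real part of an eigenvalue. The sketch below evaluates the master stability function on that basis (the matrices are illustrative assumptions, not from any particular model):

```python
import numpy as np

def master_stability(alpha, DF, DG):
    """Greatest Lyapunov exponent of xi' = (DF + alpha*DG) xi.

    For constant Jacobians this is simply the spectral abscissa
    (largest real part of an eigenvalue)."""
    return np.linalg.eigvals(DF + alpha * DG).real.max()

# Illustrative Jacobians: a damped oscillator, coupled through its first component.
DF = np.array([[0.0, 1.0],
               [-1.0, -0.2]])
DG = np.array([[-1.0, 0.0],
               [0.0, 0.0]])

# Synchronization is stable when the MSF is negative at sigma*lambda
# for every eigenvalue lambda of the coupling matrix.
for alpha in [0.0, 0.5, 2.0]:
    print(alpha, master_stability(alpha, DF, DG))
```

For chaotic units the exponent must instead be computed by integrating the variational equation along the synchronous trajectory; the eigenvalue shortcut above applies only to the constant-Jacobian case.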
References
.
.
Dynamical systems | Master stability function | [
"Physics",
"Mathematics"
] | 255 | [
"Mechanics",
"Dynamical systems"
] |
42,628,804 | https://en.wikipedia.org/wiki/Quasi-crystals%20%28supramolecular%29 | Quasi-crystals are supramolecular aggregates exhibiting both crystalline (solid) properties as well as amorphous, liquid-like properties.
Self-organized structures termed "quasi-crystals" were originally described in 1978 by the Israeli scientist Valeri A. Krongauz of the Weizmann Institute of Science, in the Nature paper, Quasi-crystals from irradiated photochromic dyes in an applied electric field. In his 1978 paper Krongauz coined the term “Quasi-Crystals” for the new self-organized colloidal particles . The Quasi-crystals are supramolecular aggregates manifesting both crystalline properties e.g. Bragg scattering, as well as amorphous, liquid-like properties i.e. drop-like shapes, fluidity, extensibility and elasticity in electric field. The supramolecular Quasi-crystals are produced in photochemical reaction by exposing solutions of photochromic spiropyran molecules to UV radiation. The ultraviolet light induces the conversion of the spiropyrans to merocyanine molecules that manifest electric dipole moments. (see Scheme 1). The quasi-crystals have external shape of submicron globules and their internal structure consists of crystals enveloped by an amorphous matter (see Fig. 1). The crystals are formed by self-assembled stacks of the merocyanine molecular dipoles aligning themselves in a parallel manner, while amorphous envelopes consist of the same merocyanine dipoles aligned in an anti-parallel manner (Fig. 1, Scheme 2). In an applied electrostatic field, quasi-crystals form macroscopic threads that show linear optical dichroism.
Later Krongauz described unusual phase transitions of molecules composed of mesogenic and spiropyran moieties, which he named "quasi-liquid crystals." A micrograph of their mesophase appeared on the cover of Nature ''in a 1984 paper, “Quasi-Liquid Crystals.” The investigation of spiropyran-merocyanine self-organized systems, including macromolecules (see, for example, Fig. 2), has continued over the years.
These studies have resulted in discoveries of unusual and practically significant phenomena. Thus, in the electrostatic field, quasi-crystals and quasi-liquid crystals have exhibited 2nd order non-linear optical properties.
Potential applications of these fascinating materials have been described and patented.
Work on spiropyran-merocyanine self-assemblies currently continues in several laboratories.
References
Condensed matter physics
Liquid crystals
Supramolecular chemistry
Self-organization | Quasi-crystals (supramolecular) | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 548 | [
"Self-organization",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Dynamical systems",
"Nanotechnology",
"Matter",
"Supramolecular chemistry"
] |
38,401,360 | https://en.wikipedia.org/wiki/Reynolds%20equation | In fluid mechanics (specifically lubrication theory), the Reynolds equation is a partial differential equation governing the pressure distribution of thin viscous fluid films. It was first derived by Osborne Reynolds in 1886. The classical Reynolds Equation can be used to describe the pressure distribution in nearly any type of fluid film bearing; a bearing type in which the bounding bodies are fully separated by a thin layer of liquid or gas.
General usage
The general Reynolds equation is:
Where:
is fluid film pressure.
and are the bearing width and length coordinates.
is fluid film thickness coordinate.
is fluid film thickness.
is fluid viscosity.
is fluid density.
are the bounding body velocities in respectively.
are subscripts denoting the top and bottom bounding bodies respectively.
The equation can either be used with consistent units or nondimensionalized.
The Reynolds Equation assumes:
The fluid is Newtonian.
Fluid viscous forces dominate over fluid inertia forces. This is the principle of the Reynolds number.
Fluid body forces are negligible.
The variation of pressure across the fluid film is negligibly small (i.e. )
The fluid film thickness is much less than the width and length and thus curvature effects are negligible. (i.e. and ).
For some simple bearing geometries and boundary conditions, the Reynolds equation can be solved analytically. Often, however, the equation must be solved numerically. Frequently this involves discretizing the geometric domain and then applying a finite-difference, finite-volume, or finite-element technique (FDM, FVM, or FEM).
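As an illustration of the finite-difference route, the sketch below solves the steady, incompressible, 1-D form of the equation, d/dx(h³ dp/dx) = 6μU dh/dx, for a linearly converging slider with ambient (zero gauge) pressure at both ends. All numerical values are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Steady incompressible 1-D Reynolds equation for a slider bearing:
#     d/dx( h^3 dp/dx ) = 6 * mu * U * dh/dx,   p(0) = p(L) = 0 (gauge)
# discretized with second-order central differences on a uniform grid.
mu, U, L = 0.1, 1.0, 1.0            # viscosity [Pa s], sliding speed [m/s], length [m]
h_in, h_out = 2e-4, 1e-4            # inlet / outlet film thickness [m]
n = 201
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
h = h_in + (h_out - h_in) * x / L   # linearly converging film

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0           # Dirichlet conditions p = 0 at both ends
for i in range(1, n - 1):
    he = (0.5 * (h[i] + h[i + 1])) ** 3   # face-centred h^3, east face
    hw = (0.5 * (h[i] + h[i - 1])) ** 3   # face-centred h^3, west face
    A[i, i - 1] = hw / dx**2
    A[i, i] = -(he + hw) / dx**2
    A[i, i + 1] = he / dx**2
    b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) / (2.0 * dx)

p = np.linalg.solve(A, b)
print(f"peak film pressure: {p.max():.3e} Pa")
```

For a purely converging film the computed pressure is positive throughout the interior; a diverging region would additionally require a cavitation condition (e.g. half-Sommerfeld) on top of this sketch.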
Derivation from Navier-Stokes
A full derivation of the Reynolds Equation from the Navier-Stokes equation can be found in numerous lubrication text books.
Solution of Reynolds Equation
In general, Reynolds equation has to be solved using numerical methods such as finite difference, or finite element. In certain simplified cases, however, analytical or approximate solutions can be obtained.
For the case of rigid-sphere-on-flat geometry, the steady-state case, and the half-Sommerfeld cavitation boundary condition, the 2-D Reynolds equation can be solved analytically. This solution was proposed by the Nobel Prize winner Pyotr Kapitsa. The half-Sommerfeld boundary condition was later shown to be inaccurate, and this solution has to be used with care.
In case of 1-D Reynolds equation several analytical or semi-analytical solutions are available. In 1916 Martin obtained a closed form solution for a minimum film thickness and pressure for a rigid cylinder and plane geometry. This solution is not accurate for the cases when the elastic deformation of the surfaces contributes considerably to the film thickness. In 1949, Grubin obtained an approximate solution for so called elasto-hydrodynamic lubrication (EHL) line contact problem, where he combined both elastic deformation and lubricant hydrodynamic flow. In this solution it was assumed that the pressure profile follows Hertz solution. The model is therefore accurate at high loads, when the hydrodynamic pressure tends to be close to the Hertz contact pressure.
Applications
The Reynolds equation is used to model the pressure in many applications. For example:
Ball bearings
Air bearings
Journal bearings
Squeeze film dampers in aircraft gas turbines
Human hip and knee joints
Lubricated gear contacts
Reynolds Equation adaptations - Average Flow Model
In 1978 Patir and Cheng introduced an average flow model, which modifies the Reynolds equation to consider the effects of surface roughness on lubricated contacts. The average flow model spans the regimes of lubrication where the surfaces are close together and/or touching. The average flow model applied "flow factors" to adjust how easy it is for the lubricant to flow in the direction of sliding or perpendicular to it. They also presented terms for adjusting the contact shear calculation. In these regimes, the surface topography acts to direct the lubricant flow, which has been demonstrated to affect the lubricant pressure and thus the surface separation and contact friction.
Several notable attempts have been made to take additional details of the contact into account in the simulation of fluid films in contacts. Leighton et al. presented a method for determining the flow factors needed for the average flow model from any measured surface. Harp and Salant extended the average flow model by considering inter-asperity cavitation. Chengwei and Linqing used an analysis of the surface height probability distribution to remove one of the more complex terms from the average Reynolds equation and replace it with a flow factor referred to as the contact flow factor. Knoll et al. calculated flow factors taking into account the elastic deformation of the surfaces. Meng et al. also considered the elastic deformation of the contacting surfaces.
The work of Patir and Cheng was a precursor to investigations of surface texturing in lubricated contacts, demonstrating how large-scale surface features generate micro-hydrodynamic lift that separates the surfaces and reduces friction, but only when the contact conditions support this.
The average flow model of Patir and Cheng, is often coupled with the rough surface interaction model of Greenwood and Tripp for modelling of the interaction of rough surfaces in loaded contacts.
References
Mechanical engineering | Reynolds equation | [
"Physics",
"Engineering"
] | 1,043 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
38,404,821 | https://en.wikipedia.org/wiki/Zicronapine | Zicronapine ( , previously known as Lu 31-130) is an atypical antipsychotic medication formerly under development by H. Lundbeck A/S. In phase II studies zicronapine showed statistically significant separation from placebo and convincing efficacy and safety data when compared to olanzapine.
Zicronapine exhibits monoaminergic activity and has a multi-receptorial profile. In vitro and in vivo it has shown potent antagonistic effects at dopamine D1, D2 and serotonin 5HT2A receptors.
In 2014 Lundbeck removed zicronapine from its development portfolio in favor of pursuing the more promising antipsychotic Lu AF35700 (a prodrug of Lu AF356152).
References
External links
Abandoned drugs
Atypical antipsychotics
1-Aminoindanes
Chloroarenes
Piperazines | Zicronapine | [
"Chemistry"
] | 185 | [
"Pharmacology",
"Drug safety",
"Medicinal chemistry stubs",
"Pharmacology stubs",
"Abandoned drugs"
] |
38,407,148 | https://en.wikipedia.org/wiki/Quantum%20contextuality | Quantum contextuality is a feature of the phenomenology of quantum mechanics whereby measurements of quantum observables cannot simply be thought of as revealing pre-existing values. Any attempt to do so in a realistic hidden-variable theory leads to values that are dependent upon the choice of the other (compatible) observables which are simultaneously measured (the measurement context). More formally, the measurement result (assumed pre-existing) of a quantum observable is dependent upon which other commuting observables are within the same measurement set.
Contextuality was first demonstrated to be a feature of quantum phenomenology by the Bell–Kochen–Specker theorem. The study of contextuality has developed into a major topic of interest in quantum foundations as the phenomenon crystallises certain non-classical and counter-intuitive aspects of quantum theory. A number of powerful mathematical frameworks have been developed to study and better understand contextuality, from the perspective of sheaf theory, graph theory, hypergraphs, algebraic topology, and probabilistic couplings.
Nonlocality, in the sense of Bell's theorem, may be viewed as a special case of the more general phenomenon of contextuality, in which measurement contexts contain measurements that are distributed over spacelike separated regions. This follows from Fine's theorem.
Quantum contextuality has been identified as a source of quantum computational speedups and quantum advantage in quantum computing. Contemporary research has increasingly focused on exploring its utility as a computational resource.
Kochen and Specker
The need for contextuality was discussed informally in 1935 by Grete Hermann, but it was more than 30 years later when Simon B. Kochen and Ernst Specker, and separately John Bell, constructed proofs that any realistic hidden-variable theory able to explain the phenomenology of quantum mechanics is contextual for systems of Hilbert space dimension three and greater. The Kochen–Specker theorem proves that realistic noncontextual hidden-variable theories cannot reproduce the empirical predictions of quantum mechanics. Such a theory would suppose the following.
All quantum-mechanical observables may be simultaneously assigned definite values (this is the realism postulate, which is false in standard quantum mechanics, since there are observables that are indefinite in every given quantum state). These global value assignments may deterministically depend on some "hidden" classical variable, which in turn may vary stochastically for some classical reason (as in statistical mechanics). The measured values of observables may therefore also vary stochastically. This stochasticity is, however, epistemic and not ontic, as in the standard formulation of quantum mechanics.
Value assignments pre-exist and are independent of the choice of any other observables, which, in standard quantum mechanics, are described as commuting with the measured observable, and they are also measured.
Some functional constraints on the assignments of values for compatible observables are assumed (e.g., they are additive and multiplicative, there are, however, several versions of this functional requirement).
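These functional constraints can be made concrete with a small brute-force check. The sketch below (Python; it assumes the standard Mermin–Peres magic-square constraints, in which each row of observables multiplies to +I and the columns multiply to +I, +I, −I, and the function name is ours) verifies that no noncontextual assignment of fixed ±1 values can satisfy all six product constraints at once:

```python
from itertools import product

# Mermin-Peres magic square: 9 observables in a 3x3 grid.  Quantum
# mechanically each row multiplies to +I and each column to +I, except
# the third column, which multiplies to -I.  A noncontextual value
# assignment would give each observable a fixed value in {-1, +1}
# obeying all six product constraints; brute force shows none exists.

def assignment_exists():
    for v in product((-1, 1), repeat=9):
        g = [v[0:3], v[3:6], v[6:9]]
        rows_ok = all(g[r][0] * g[r][1] * g[r][2] == 1 for r in range(3))
        cols = [g[0][c] * g[1][c] * g[2][c] for c in range(3)]
        if rows_ok and cols[0] == 1 and cols[1] == 1 and cols[2] == -1:
            return True
    return False

print(assignment_exists())  # False: no noncontextual assignment exists
```

The contradiction is a parity argument: multiplying all row constraints gives +1 for the product of all nine values, while multiplying all column constraints gives −1 for the same product.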
In addition, Kochen and Specker constructed an explicitly noncontextual hidden-variable model for the two-dimensional qubit case in their paper on the subject, thereby completing the characterisation of the dimensionality of quantum systems that can demonstrate contextual behaviour. Bell's proof invoked a weaker version of Gleason's theorem, reinterpreting the theorem to show that quantum contextuality exists only in Hilbert space dimension greater than two.
Frameworks for contextuality
Sheaf-theoretic framework
The sheaf-theoretic, or Abramsky–Brandenburger, approach to contextuality initiated by Samson Abramsky and Adam Brandenburger is theory-independent and can be applied beyond quantum theory to any situation in which empirical data arises in contexts. As well as being used to study forms of contextuality arising in quantum theory and other physical theories, it has also been used to study formally equivalent phenomena in logic, relational databases, natural language processing, and constraint satisfaction.
In essence, contextuality arises when empirical data is locally consistent but globally inconsistent.
This framework gives rise in a natural way to a qualitative hierarchy of contextuality:
(Probabilistic) contextuality may be witnessed in measurement statistics, e.g. by the violation of an inequality. A representative example is the KCBS proof of contextuality.
Logical contextuality may be witnessed in the "possibilistic" information about which outcome events are possible and which are not possible. A representative example is Hardy's proof of nonlocality.
Strong contextuality is a maximal form of contextuality. Whereas (probabilistic) contextuality arises when measurement statistics cannot be reproduced by a mixture of global value assignments, strong contextuality arises when no global value assignment is even compatible with the possible outcome events. A representative example is the original Kochen–Specker proof of contextuality.
Each level in this hierarchy strictly includes the next. An important intermediate level that lies strictly between the logical and strong contextuality classes is all-versus-nothing contextuality, a representative example of which is the Greenberger–Horne–Zeilinger proof of nonlocality.
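Strong contextuality can be checked directly for small scenarios by searching for a global value assignment consistent with the possibilistic data. The sketch below (Python; the PR-box support relation is standard, while the function names are ours) performs this search for the Popescu–Rohrlich box and, for contrast, for an ordinary perfectly correlated box:

```python
from itertools import product

def pr_box_support(a, b, x, y):
    # PR box: only outcomes with x XOR y = a AND b are possible
    return (x ^ y) == (a & b)

def classical_support(a, b, x, y):
    # perfectly correlated box: x == y in every context
    return x == y

def has_global_assignment(support):
    """True iff some global value assignment (A0, A1, B0, B1) yields a
    possible outcome in every measurement context (a, b)."""
    for A0, A1, B0, B1 in product((0, 1), repeat=4):
        vals = ((A0, A1), (B0, B1))
        if all(support(a, b, vals[0][a], vals[1][b])
               for a, b in product((0, 1), repeat=2)):
            return True
    return False

print(has_global_assignment(pr_box_support))    # False: strongly contextual
print(has_global_assignment(classical_support)) # True: noncontextual
```

No global assignment exists for the PR box because XOR-ing its four constraints demands 0 = 1, which is the sense in which its data are locally consistent but globally inconsistent.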
Graph and hypergraph frameworks
Adán Cabello, Simone Severini, and Andreas Winter introduced a general graph-theoretic framework for studying contextuality of different physical theories. Within this framework experimental scenarios are described by graphs, and certain invariants of these graphs were shown to have particular physical significance. One way in which contextuality may be witnessed in measurement statistics is through the violation of noncontextuality inequalities (also known as generalized Bell inequalities). With respect to certain appropriately normalised inequalities, the independence number, Lovász number, and fractional packing number of the graph of an experimental scenario provide tight upper bounds on the degree to which classical theories, quantum theory, and generalised probabilistic theories, respectively, may exhibit contextuality in an experiment of that kind. A more refined framework based on hypergraphs rather than graphs is also used.
Contextuality-by-default (CbD) framework
In the CbD approach, developed by Ehtibar Dzhafarov, Janne Kujala, and colleagues, (non)contextuality is treated as a property of any system of random variables, defined as a set in which each random variable $R_q^c$ is labeled by its content $q$ (the property it measures) and its context $c$ (the set of recorded circumstances under which it is recorded, including but not limited to which other random variables it is recorded together with); $R_q^c$ stands for "$q$ is measured in $c$". The variables within a context are jointly distributed, but variables from different contexts are stochastically unrelated, defined on different sample spaces. A (probabilistic) coupling of the system is defined as a system $S$ in which all variables are jointly distributed and, in any context $c$, $S_q^c$ and $R_q^c$ are identically distributed. The system is considered noncontextual if it has a coupling such that the probabilities $\Pr[S_q^c = S_q^{c'}]$ are maximal possible for all contexts $c, c'$ and contents $q$ such that $q$ is measured in both $c$ and $c'$. If such a coupling does not exist, the system is contextual. For the important class of cyclic systems of dichotomous ($\pm 1$) random variables of rank $n$, it has been shown that such a system is noncontextual if and only if
$$s_{\text{odd}}\big(\langle R_1^1 R_2^1\rangle, \langle R_2^2 R_3^2\rangle, \ldots, \langle R_n^n R_1^n\rangle\big) \le n - 2 + \Delta,$$
where
$$\Delta = \sum_{q=1}^{n} \big|\langle R_q^q\rangle - \langle R_q^{q\ominus 1}\rangle\big| \quad (\ominus \text{ denoting cyclic shift of the index}),$$
and
$$s_{\text{odd}}(x_1, \ldots, x_n) = \max \sum_{i=1}^{n} \lambda_i x_i,$$
with the maximum taken over all $\lambda_1, \ldots, \lambda_n \in \{-1, +1\}$ whose product is $-1$. If $R_q^c$ and $R_q^{c'}$, measuring the same content $q$ in different contexts, are always identically distributed, the system is called consistently connected (satisfying the "no-disturbance" or "no-signaling" principle). Except for certain logical issues, in this case CbD specializes to traditional treatments of contextuality in quantum physics. In particular, for consistently connected cyclic systems the noncontextuality criterion above reduces to $s_{\text{odd}} \le n - 2$, which includes the Bell/CHSH inequality ($n = 4$), the KCBS inequality ($n = 5$), and other famous inequalities. That nonlocality is a special case of contextuality follows in CbD from the fact that being jointly distributed for random variables is equivalent to their being measurable functions of one and the same random variable (this generalizes Arthur Fine's analysis of Bell's theorem). CbD essentially coincides with the probabilistic part of Abramsky's sheaf-theoretic approach if the system is strongly consistently connected, which means that the joint distributions of $(R_{q_1}^{c}, \ldots, R_{q_k}^{c})$ and $(R_{q_1}^{c'}, \ldots, R_{q_k}^{c'})$ coincide whenever the contents $q_1, \ldots, q_k$ are measured in both contexts $c$ and $c'$. However, unlike most approaches to contextuality, CbD allows for inconsistent connectedness, with $R_q^c$ and $R_q^{c'}$ differently distributed. This makes CbD applicable to physics experiments in which the no-disturbance condition is violated, as well as to human behavior, where this condition is violated as a rule. In particular, Víctor Cervantes, Ehtibar Dzhafarov, and colleagues have demonstrated that random variables describing certain paradigms of simple decision making form contextual systems, whereas many other decision-making systems are noncontextual once their inconsistent connectedness is properly taken into account.
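The cyclic-system test is easy to evaluate numerically. The sketch below (Python; it assumes the Kujala–Dzhafarov form of the criterion for consistently connected cyclic systems, $s_{\text{odd}} \le n - 2$, with an optional correction term $\Delta$ for inconsistent connectedness, and the function names are ours) applies it to CHSH-type correlations ($n = 4$):

```python
from itertools import product
import math

def s_odd(xs):
    # max of sum(sign_i * x_i) over sign vectors with an odd number of -1s
    return max(sum(l * x for l, x in zip(ls, xs))
               for ls in product((-1, 1), repeat=len(xs))
               if math.prod(ls) == -1)

def is_noncontextual_cyclic(corrs, delta=0.0):
    # criterion for a cyclic system of rank n; delta == 0 corresponds
    # to a consistently connected ("no-signaling") system
    n = len(corrs)
    return s_odd(corrs) <= n - 2 + delta

t = 1 / math.sqrt(2)
print(is_noncontextual_cyclic([t, t, t, -t]))  # False: Tsirelson-level correlations
print(is_noncontextual_cyclic([1, 1, 1, 1]))   # True: deterministic classical correlations
```

For the first input, $s_{\text{odd}} = 4/\sqrt{2} = 2\sqrt{2} > 2$, reproducing the familiar quantum violation of the CHSH bound.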
Operational framework
An extended notion of contextuality due to Robert Spekkens applies to preparations and transformations as well as to measurements, within a general framework of operational physical theories. With respect to measurements, it removes the assumption of determinism of value assignments that is present in standard definitions of contextuality. This breaks the interpretation of nonlocality as a special case of contextuality, and does not treat irreducible randomness as nonclassical. Nevertheless, it recovers the usual notion of contextuality when outcome determinism is imposed.
Spekkens' contextuality can be motivated using Leibniz's law of the identity of indiscernibles. The law applied to physical systems in this framework mirrors the extended definition of noncontextuality. This was further explored by Simmons et al., who demonstrated that other notions of contextuality could also be motivated by Leibnizian principles, and could be thought of as tools enabling ontological conclusions from operational statistics.
Extracontextuality and extravalence
Given a pure quantum state $|\psi\rangle$, Born's rule tells that the probability to obtain another state $|\phi\rangle$ in a measurement is $|\langle\phi|\psi\rangle|^2$. However, such a number does not define a full probability distribution, i.e. values over a set of mutually exclusive events, summing up to 1. In order to obtain such a set one needs to specify a context, that is, a complete set of commuting operators (CSCO), or equivalently a set of $N$ orthogonal projectors $\Pi_i$ that sum to identity, where $N$ is the dimension of the Hilbert space. Then one has $\sum_{i=1}^{N} \langle\psi|\Pi_i|\psi\rangle = 1$ as expected. In that sense, one can tell that a state vector alone is predictively incomplete, as long as a context has not been specified. The actual physical state, now defined by $|\psi\rangle$ within a specified context, has been called a modality by Auffèves and Grangier.
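The way a context completes the Born-rule probabilities can be illustrated numerically. In the sketch below (Python; the specific state and basis are arbitrary examples of ours), a context is represented by an orthonormal basis of a 3-dimensional Hilbert space, and the Born probabilities within that context sum to 1:

```python
import math

# A context = an orthonormal basis {phi_i}, i.e. N orthogonal rank-1
# projectors |phi_i><phi_i| summing to the identity (N = dimension).
# Born's rule then yields a full distribution over N exclusive outcomes.

def born_probs(psi, basis):
    # p_i = |<phi_i|psi>|^2 for each basis state of the chosen context
    return [abs(sum(p.conjugate() * s for p, s in zip(phi, psi))) ** 2
            for phi in basis]

psi = (1 / math.sqrt(2), 1j / math.sqrt(2), 0)  # a pure state in dimension 3

# one possible measurement context: an orthonormal basis of C^3
c = 1 / math.sqrt(2)
context = [(c, c, 0), (c, -c, 0), (0, 0, 1)]

probs = born_probs(psi, context)
print(sum(probs))  # ≈ 1.0: probabilities within a context sum to 1
```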
Since it is clear that $|\psi\rangle$ alone does not define a modality, what is its status? One sees easily that $|\psi\rangle$ is associated with an equivalence class of modalities, belonging to different contexts, but connected between themselves with certainty, even if the different CSCO observables do not commute. This equivalence class is called an extravalence class, and the associated transfer of certainty between contexts is called extracontextuality. As a simple example, the usual singlet state for two spins 1/2 can be found in the (non-commuting) CSCOs associated with the measurement of the total spin (with $S = 0$), or with a Bell measurement, and actually it appears in infinitely many different CSCOs, but obviously not in all possible ones.
The concepts of extravalence and extracontextuality are very useful to spell out the role of contextuality in quantum mechanics, which is neither non-contextual (as classical physics would be) nor fully contextual, since modalities belonging to incompatible (non-commuting) contexts may be connected with certainty. Starting now from extracontextuality as a postulate, the fact that certainty can be transferred between contexts, and is then associated with a given projector, is the very basis of the hypotheses of Gleason's theorem, and thus of Born's rule. Also, associating a state vector with an extravalence class clarifies its status as a mathematical tool to calculate probabilities connecting modalities, which correspond to the actual observed physical events or results. This point of view is quite useful, and it can be used everywhere in quantum mechanics.
Other frameworks and extensions
A form of contextuality that may present in the dynamics of a quantum system was introduced by Shane Mansfield and Elham Kashefi, and has been shown to relate to computational quantum advantages. As a notion of contextuality that applies to transformations it is inequivalent to that of Spekkens. Examples explored to date rely on additional memory constraints which have a more computational than foundational motivation. Contextuality may be traded-off against Landauer erasure to obtain equivalent advantages.
Fine's theorem
The Kochen–Specker theorem proves that quantum mechanics is incompatible with realistic noncontextual hidden variable models. On the other hand, Bell's theorem proves that quantum mechanics is incompatible with factorisable hidden variable models in an experiment in which measurements are performed at distinct spacelike separated locations. Arthur Fine showed that in the experimental scenario in which the famous CHSH inequalities and proof of nonlocality apply, a factorisable hidden variable model exists if and only if a noncontextual hidden variable model exists. This equivalence was proven to hold more generally in any experimental scenario by Samson Abramsky and Adam Brandenburger. It is for this reason that we may consider nonlocality to be a special case of contextuality.
Measures of contextuality
Contextual fraction
A number of methods exist for quantifying contextuality. One approach is by measuring the degree to which some particular noncontextuality inequality is violated, e.g. the KCBS inequality, the Yu–Oh inequality, or some Bell inequality. A more general measure of contextuality is the contextual fraction.
Given a set of measurement statistics $e$, consisting of a probability distribution over joint outcomes for each measurement context, we may consider factoring $e$ into a noncontextual part $e^{NC}$ and some remainder $e'$:
$$e = \lambda\, e^{NC} + (1 - \lambda)\, e'.$$
The maximum value of $\lambda$ over all such decompositions is the noncontextual fraction of $e$, denoted $\mathrm{NCF}(e)$, while the remainder $\mathrm{CF}(e) = 1 - \mathrm{NCF}(e)$ is the contextual fraction of $e$. The idea is that we look for a noncontextual explanation for the highest possible fraction of the data, and what is left over is the irreducibly contextual part. Indeed, for any such decomposition that maximises $\lambda$, the leftover $e'$ is known to be strongly contextual. This measure of contextuality takes values in the interval $[0,1]$, where 0 corresponds to noncontextuality and 1 corresponds to strong contextuality. The contextual fraction may be computed using linear programming.
It has also been proved that CF(e) is an upper bound on the extent to which e violates any normalised noncontextuality inequality. Here normalisation means that violations are expressed as fractions of the algebraic maximum violation of the inequality. Moreover, the dual linear program to that which maximises λ computes a noncontextual inequality for which this violation is attained. In this sense the contextual fraction is a more neutral measure of contextuality, since it optimises over all possible noncontextual inequalities rather than checking the statistics against one inequality in particular.
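For small scenarios the linear program can be written out explicitly. The sketch below (Python; it assumes NumPy and SciPy are available, and the event ordering and function names are ours) computes the noncontextual fraction of a CHSH-type empirical model by maximizing the total weight of a sub-convex combination of deterministic global assignments dominated componentwise by the model:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Scenario: measurement choices a, b in {0, 1}, outcomes x, y in {0, 1}.
# An empirical model e is the 16-vector of p(x, y | a, b), with contexts
# (a, b) outermost and outcomes (x, y) innermost in the flattening.

def deterministic_behaviors():
    """Columns: the 16 global value assignments (A0, A1, B0, B1)."""
    cols = []
    for A0, A1, B0, B1 in product((0, 1), repeat=4):
        A, B = (A0, A1), (B0, B1)
        cols.append([1.0 if (x == A[a] and y == B[b]) else 0.0
                     for a in (0, 1) for b in (0, 1)
                     for x in (0, 1) for y in (0, 1)])
    return np.array(cols).T          # shape: (16 events, 16 assignments)

def noncontextual_fraction(e):
    """LP: maximize sum(c) subject to D @ c <= e, c >= 0."""
    D = deterministic_behaviors()
    res = linprog(-np.ones(16), A_ub=D, b_ub=e,
                  bounds=[(0, None)] * 16, method="highs")
    return -res.fun

def pr_box():
    """PR box: only outcomes with x XOR y = a AND b occur."""
    return np.array([0.5 if (x ^ y) == (a & b) else 0.0
                     for a in (0, 1) for b in (0, 1)
                     for x in (0, 1) for y in (0, 1)])

print(noncontextual_fraction(pr_box()))           # ≈ 0: strongly contextual
print(noncontextual_fraction(np.full(16, 0.25)))  # ≈ 1: uniform noise is noncontextual
```

The PR box attains the extreme value $\mathrm{NCF} = 0$ because every deterministic assignment places weight on at least one event the box forbids, matching the characterisation of strong contextuality above.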
Measures of (non)contextuality within the Contextuality-by-Default (CbD) framework
Several measures of the degree of contextuality in contextual systems were proposed within the CbD framework, but only one of them, denoted CNT2, has been shown to naturally extend into a measure of noncontextuality in noncontextual systems, NCNT2. This is important because, at least in the non-physical applications of CbD, contextuality and noncontextuality are of equal interest. Both CNT2 and NCNT2 are defined as the $L_1$-distance between a probability vector representing a system and the surface of the noncontextuality polytope representing all possible noncontextual systems with the same single-variable marginals. For cyclic systems of dichotomous random variables, closed-form expressions are known for CNT2 when the system is contextual and for NCNT2 when it is noncontextual, the latter in terms of the $L_1$-distance from the probability vector to the surface of the box circumscribing the noncontextuality polytope. More generally, NCNT2 and CNT2 are computed by means of linear programming. The same is true for other CbD-based measures of contextuality. One of them, denoted CNT3, uses the notion of a quasi-coupling, which differs from a coupling in that the probabilities in the joint distribution of its values are replaced with arbitrary reals (allowed to be negative but summing to 1). The class of quasi-couplings maximizing the probabilities is always nonempty, and the minimal total variation of the signed measure in this class is a natural measure of contextuality.
Contextuality as a resource for quantum computing
Recently, quantum contextuality has been investigated as a source of quantum advantage and computational speedups in quantum computing.
Magic state distillation
Magic state distillation is a scheme for quantum computing in which quantum circuits constructed only of Clifford operators, which by themselves are fault-tolerant but efficiently classically simulable, are injected with certain "magic" states that promote the computational power to universal fault-tolerant quantum computing. In 2014, Mark Howard et al. showed that contextuality characterizes magic states for qudits of odd prime dimension and for qubits with real wavefunctions. Extensions to the qubit case have been investigated by Juani Bermejo-Vega et al. This line of research builds on earlier work by Ernesto Galvão, which showed that Wigner function negativity is necessary for a state to be "magic"; it later emerged that Wigner negativity and contextuality are in a sense equivalent notions of nonclassicality.
Measurement-based quantum computing
Measurement-based quantum computation (MBQC) is a model for quantum computing in which a classical control computer interacts with a quantum system by specifying measurements to be performed and receiving measurement outcomes in return. The measurement statistics for the quantum system may or may not exhibit contextuality. A variety of results have shown that the presence of contextuality enhances the computational power of an MBQC.
In particular, researchers have considered an artificial situation in which the power of the classical control computer is restricted to only being able to compute linear Boolean functions, i.e. to solve problems in the Parity L complexity class ⊕L. For interactions with multi-qubit quantum systems a natural assumption is that each step of the interaction consists of a binary choice of measurement which in turn returns a binary outcome. An MBQC of this restricted kind is known as an l2-MBQC.
Anders and Browne
In 2009, Janet Anders and Dan Browne showed that two specific examples of nonlocality and contextuality were sufficient to compute a non-linear function. This in turn could be used to boost computational power to that of a universal classical computer, i.e. to solve problems in the complexity class P. This is sometimes referred to as measurement-based classical computation. The specific examples made use of the Greenberger–Horne–Zeilinger nonlocality proof and the supra-quantum Popescu–Rohrlich box.
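The Popescu–Rohrlich-box example can be reproduced in a few lines: a control computer restricted to XOR (parity) operations computes the non-linear AND function with a single query to an ideal PR box. This is a standard illustration, though the code itself is a sketch of ours:

```python
import random

def pr_box(a, b):
    # ideal (supra-quantum) Popescu-Rohrlich box: outputs x, y are
    # individually uniform but always satisfy x XOR y = a AND b
    x = random.randint(0, 1)
    return x, x ^ (a & b)

def and_via_parity(a, b):
    # an XOR-limited control computer feeds its two input bits to the
    # box and XORs the outputs, obtaining the non-linear AND function
    x, y = pr_box(a, b)
    return x ^ y

for a in (0, 1):
    for b in (0, 1):
        assert and_via_parity(a, b) == (a & b)  # holds on every run
```

Because each box output is individually random, the AND value appears only in the parity of the two outputs, which is exactly the operation the restricted (⊕L-style) control computer is allowed to perform.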
Raussendorf
In 2013, Robert Raussendorf showed more generally that access to strongly contextual measurement statistics is necessary and sufficient for an l2-MBQC to compute a non-linear function. He also showed that to compute non-linear Boolean functions with sufficiently high probability requires contextuality.
Abramsky, Barbosa and Mansfield
A further generalization and refinement of these results due to Samson Abramsky, Rui Soares Barbosa and Shane Mansfield appeared in 2017, proving a precise quantifiable relationship between the probability of successfully computing any given non-linear function and the degree of contextuality present in the l2-MBQC as measured by the contextual fraction: $1 - p_S \ge (1 - \mathrm{CF}(e))\,\nu(f)$, where $p_S$, $\mathrm{CF}(e)$ and $\nu(f)$ are the probability of success, the contextual fraction of the measurement statistics $e$, and a measure of the non-linearity of the function to be computed $f$, respectively.
Further examples
The above inequality was also shown to relate quantum advantage in non-local games to the degree of contextuality required by the strategy and an appropriate measure of the difficulty of the game.
Similarly the inequality arises in a transformation-based model of quantum computation analogous to l2-MBQC, where it relates the degree of sequential contextuality present in the dynamics of the quantum system to the probability of success and the degree of non-linearity of the target function.
Preparation contextuality has been shown to enable quantum advantages in cryptographic random-access codes and in state-discrimination tasks.
In classical simulations of quantum systems, contextuality has been shown to incur memory costs.
See also
Kochen–Specker theorem
Mermin–Peres square
KCBS pentagram
Quantum nonlocality
Quantum foundations
Quantum indeterminacy
References
Quantum mechanics | Quantum contextuality | [
"Physics"
] | 4,501 | [
"Theoretical physics",
"Quantum mechanics"
] |
38,409,255 | https://en.wikipedia.org/wiki/Singapore%20Synchrotron%20Light%20Source | Singapore Synchrotron Light Source (SSLS) is a synchrotron radiation facility located on Kent Ridge campus of the National University of Singapore.
History
The SSLS building project commenced in 1997 and concluded in 1999. Following the completion, the Helios 2 storage ring was relocated into the facility, and in 2000, an accelerator system was commissioned along with the construction of a beamline. In October 2001, user pilot operation commenced, starting with a phase-contrast imaging beamline. Additional facilities were subsequently added, and routine user operation was successfully established by 2003.
Footnotes
References
External links
Official website
Synchrotron radiation facilities | Singapore Synchrotron Light Source | [
"Physics",
"Materials_science"
] | 130 | [
"Particle physics stubs",
"Materials testing",
"Particle physics",
"Synchrotron radiation facilities"
] |
44,058,821 | https://en.wikipedia.org/wiki/Computers%20are%20social%20actors | Computers are social actors (CASA) is a paradigm which states that humans unthinkingly apply the same social heuristics used for human interactions to computers, because they call to mind similar social attributes as humans.
History and context
Clifford Nass and Youngme Moon's scientific article, "Machines and Mindlessness: Social Responses to Computers", published in 2000 in the Journal of Social Issues, is the origin for CASA. It states that CASA is the concept that people mindlessly apply social rules and expectations to computers, even though they know that these machines do not have feelings, intentions or human motivations.
In their 2000 article, Nass and Moon attribute their observation of anthropocentric reactions to computers and previous research on mindlessness as factors that led them to study the phenomenon of computers as social actors. Specifically, they observed consistent anthropocentric treatment of computers by individuals in natural and lab settings, even though these individuals agreed that computers are not human and shouldn't be treated as such.
Additionally, Nass and Moon found a similarity between this behavior and research by Harvard psychology professor Ellen Langer on mindlessness. Langer states that mindlessness is when a specific context triggers an individual to rely on categories, associations, and habits of thought from the past with little to no conscious awareness. When these contexts are triggered, the individual becomes oblivious to novel or alternative aspects of the situation. In this respect, mindlessness is similar to habits and routines, but different in that with only one exposure to information, a person will create a cognitive commitment to the information and freeze its potential meaning. With mindlessness, alternative meanings or uses of the information become unavailable for active cognitive use.
Social attributes that computers have which are similar to humans include:
Words for output
Interactivity (the computer 'responds' when a button is touched)
Ability to perform traditional human tasks
According to CASA, the above attributes trigger scripts for human-human interaction, which leads an individual to ignore cues revealing the asocial nature of a computer. Although individuals using computers exhibit a mindless social response to the computer, individuals who are sensitive to the situation can observe the inappropriateness of the cued social behaviors. CASA has been extended to include robots and AI. However, there have recently been challenges to the CASA paradigm. To account for advances in technology, the "media are social actors" (MASA) paradigm has been put forward as a significant extension of CASA.
Attributes
Cued social behaviors observed in research settings include some of the following:
Gender stereotyping: When voice outputs are used on computers, this triggers gender stereotype scripts, expectations, and attributions from individuals. For example, a 1997 study revealed that female-voiced tutor computers were rated as more informative about love and relationships than male-voiced computers, whereas male-voiced computers were more proficient in technical subjects than female-voiced computers.
Reciprocity: When a computer provides help, favours, or benefits, this triggers the mindless response of the participant feeling obliged to 'help' the computer. For example, an experiment in 1997 found that when a specific computer 'helped' a person, that person was more likely to do more 'work' for that computer.
Specialist versus generalist: When a technology is labeled as 'specialist', this triggers a mindless response by influencing people's perceptions of the content the labeled technology presents. For example, a 2000 study revealed when people watched a television labeled 'News Television', they thought the news segments on that TV were higher in quality, had more information, and were more interesting than people who saw the identical information on a TV labeled 'News and Entertainment Television'.
Personality: When a computer user mindlessly creates a personality for a computer based on verbal or paraverbal cues in the interface. For example, research from 1996 and 2001 found people with dominant personalities preferred computers that also had a 'dominant personality'; that is, the computer used strong, assertive language during tasks.
Academic research
Three research articles have represented some of the advances in the field of CASA. Researchers in this field are looking at how novel variables, manipulations, and new computer software influence mindlessness.
A 2010 article, "Cognitive load on social response to computers" by E.J. Lee discussed research on how human likeness of a computer interface, individuals' rationality, and cognitive load moderate the extent to which people apply social attributes to computers. The research revealed that participants were more socially attracted to a computer that flattered them than a generic-comment computer, but they became more suspicious about the validity of the flattery computer's claims and more likely to dismiss its answer. These negative effects disappeared when participants simultaneously engaged in a secondary task.
A 2011 study, "Computer emotion – impacts on trust" by Dimitrios Antos, Celso De Melo, Jonathan Gratch, and Barbara Grosz investigated whether computer agents can use the expression of emotion to influence human perceptions of trustworthiness in the context of a negotiation activity followed by a trust activity. They found that computer agents displaying emotions congruent with their actions were preferred as partners in the trust game over computer agents whose emotion expressions and actions did not match. They also found that when emotion did not carry useful new information, it did not strongly influence human decision-making behavior in a negotiation setting.
A 2011 study "Cloud computing – reexamination of CASA" by Hong and Sundar found that when people are in a cloud computing environment, they shift their source orientation—that is, users evaluate the system by focusing on service providers over the internet, instead of the machines in front of them. Hong and Sundar concluded their study by stating, "if individuals no longer respond socially to computers in clouds, there will need to be a fundamental re-examination of the mindless social response of humans to computers."
One example of how CASA research can impact consumer behaviour and attitude is Moon's experiment, which tested the application of the principle of reciprocity and disclosure in a consumer context. Moon tested this principle with intimate self-disclosure of high-risk information (when disclosure makes the person feel vulnerable) to a computer, and observed how that disclosure affects future attitudes and behaviors. Participants interacted with a computer which questioned them using reciprocal wording and gradual revealing of intimate information, then participants did a puzzle on paper, and finally half the group went back to the same computer and the other half went to a different computer. Both groups were shown 20 products and asked if they would purchase them. Participants who used the same computer throughout the experiment had a higher purchase likelihood score and a higher attraction score toward the computer in the product presentation than participants who did not use the same computer throughout the experiment. Studies also show that CASA can be applied to virtual influencers: virtual influencers with a human-like appearance achieve higher message credibility than anime-like virtual influencers.
References
Social psychology
Human–computer interaction | Computers are social actors | [
"Engineering"
] | 1,406 | [
"Human–computer interaction",
"Human–machine interaction"
] |
44,059,936 | https://en.wikipedia.org/wiki/Coherent%20turbulent%20structure | Turbulent flows are complex multi-scale and chaotic motions that need to be classified into more elementary components, referred to as coherent turbulent structures. Such a structure must have temporal coherence, i.e. it must persist in its form for long enough periods that the methods of time-averaged statistics can be applied. Coherent structures are typically studied on very large scales, but can be broken down into more elementary structures with coherent properties of their own; examples include hairpin vortices. Hairpins and coherent structures have been studied and noticed in data since the 1930s, and have since been cited in thousands of scientific papers and reviews.
Flow visualization experiments, using smoke and dye as tracers, have been historically used to simulate coherent structures and verify theories, but computer models are now the dominant tools widely used in the field to verify and understand the formation, evolution, and other properties of such structures. The kinematic properties of these motions include size, scale, shape, vorticity, and energy, and the dynamic properties govern the way coherent structures grow, evolve, and decay. Most coherent structures are studied only within the confined forms of simple wall turbulence, which approximates the coherence to be steady, fully developed, incompressible, and with a zero pressure gradient in the boundary layer. Although such approximations depart from reality, they contain sufficient parameters needed to understand turbulent coherent structures at a highly conceptual level.
History and discovery
The presence of organized motions and structures in turbulent shear flows was apparent for a long time, and was additionally implied by the mixing length hypothesis even before the concept was explicitly stated in the literature. Early correlation data were also found by measuring jets and turbulent wakes, particularly by Corrsin and Roshko. Hama's hydrogen bubble technique, which used flow visualization to observe the structures, received widespread attention, and many researchers followed up, including Kline. Flow visualization is a laboratory experimental technique that is used to visualize and understand the structures of turbulent shear flows.
With a much better understanding of coherent structures, it is now possible to discover and recognize many coherent structures in previous flow-visualization pictures collected of various turbulent flows taken decades ago. Computer simulations are now the dominant tool for understanding and visualizing coherent flow structures. The ability to compute the necessary time-dependent Navier–Stokes equations produces graphic presentations at a much more sophisticated level, and can additionally be visualized at different planes and resolutions, exceeding the expected sizes and speeds previously generated in laboratory experiments. However, controlled flow visualization experiments are still necessary to direct, develop, and validate the numerical simulations now dominant in the field.
Definition
A turbulent flow is a flow regime in fluid dynamics where fluid velocity varies significantly and irregularly in both position and time. Furthermore, a coherent structure is defined as a turbulent flow whose vorticity expression, which is usually stochastic, contains orderly components that can be described as being instantaneously coherent over the spatial extent of the flow structure. In other words, underlying the three-dimensional chaotic vorticity expressions typical of turbulent flows, there is an organized component of that vorticity which is phase-correlated over the entire space of the structure. The instantaneously space and phase correlated vorticity found within the coherent structure expressions can be defined as coherent vorticity, hence making coherent vorticity the main characteristic identifier for coherent structures. Another characteristic inherent in turbulent flows is their intermittency, but intermittency is a very poor identifier of the boundaries of a coherent structure, hence it is generally accepted that the best way to characterize the boundary of a structure is by identifying and defining the boundary of the coherent vorticity.
By defining and identifying coherent structure in this manner, turbulent flows can be decomposed into coherent structures and incoherent structures depending on their coherence, particularly their correlations with their vorticity. Hence, similarly organized events in an ensemble average of organized events can be defined as a coherent structure, and whatever events are not identified as similar or phase- and space-aligned in the ensemble average are incoherent turbulent structures.
Coherent structures can also be defined through the correlation between their momenta or pressure and their turbulent flows. However, this approach often leads to false indications of turbulence, since pressure and velocity fluctuations over a fluid could be well correlated in the absence of any turbulence or vorticity. Some coherent structures, such as vortex rings, can be large-scale motions comparable to the extent of the shear flow. There are also coherent motions at much smaller scales, such as hairpin vortices and typical eddies, which are known as coherent substructures, that is, coherent structures which can be broken up into smaller, more elementary substructures.
Characteristics
Although a coherent structure is by definition characterized by high levels of coherent vorticity, Reynolds stress, production, and heat and mass transportation, it does not necessarily require a high level of kinetic energy. In fact, one of the main roles of coherent structures is the large-scale transport of mass, heat, and momentum without requiring the high amounts of energy normally needed. Consequently, this implies that coherent structures are not the main production and cause of Reynolds stress, and incoherent turbulence can be similarly significant.
Coherent structures cannot superimpose, i.e. they cannot overlap, and each coherent structure has its own independent domain and boundary. Since eddies coexist as spatial superpositions, a coherent structure is not an eddy. For example, eddies dissipate energy by obtaining energy from the mean flow at large scales, and eventually dissipating it at the smallest scales. There is no such analogous exchange of energy between coherent structures, and any interaction such as tearing between coherent structures simply results in a new structure. However, two coherent structures can interact and influence each other. The mass of a structure changes with time, with the typical case being that structures increase in volume via the diffusion of vorticity.
One of the most fundamental quantities of coherent structures is the coherent vorticity. Perhaps the next most critical measures of coherent structures are the coherent and incoherent Reynolds stresses. These represent the transport of momentum, and their relative strength indicates how much momentum is being transported by coherent structures as compared to incoherent structures. The next most significant measures include contoured depictions of coherent strain rate and shear production. A useful property of such contours is that they are invariant under Galilean transformations, hence the contours of coherent vorticity constitute an excellent identifier of a structure's boundaries. The contours of these properties not only locate where exactly coherent structure quantities have their peaks and saddles, but also identify where the incoherent turbulent structures are when overlaid on their directional gradients. In addition, spatial contours can be drawn to describe the shape, size, and strength of coherent structures, depicting not only the mechanics but also the dynamical evolution of coherent structures. For example, in order for a structure to be evolving, and hence dominant, its coherent vorticity, coherent Reynolds stress, and production terms should be larger than the time-averaged values of the flow structures.
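The coherent/incoherent split described above can be illustrated numerically with the standard triple decomposition (time mean plus phase-averaged coherent part plus random residual). The sketch below uses a purely synthetic phase-locked signal; all amplitudes, names, and parameters are illustrative, not taken from any cited experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic signal: mean + phase-locked (coherent) wave + random (incoherent)
# noise, sampled over many identical cycles so phase averaging is possible.
n_cycles, n_per_cycle = 200, 64
phase = np.linspace(0.0, 2 * np.pi, n_per_cycle, endpoint=False)
u = 1.0 + 0.5 * np.sin(phase) + 0.1 * rng.standard_normal((n_cycles, n_per_cycle))
v = 0.0 + 0.3 * np.sin(phase + 0.5) + 0.1 * rng.standard_normal((n_cycles, n_per_cycle))

def triple_decompose(f):
    """Split f into time mean, coherent (phase-averaged) and incoherent parts."""
    mean = f.mean()              # time average
    phase_avg = f.mean(axis=0)   # ensemble average at each phase
    coherent = phase_avg - mean  # organized, phase-correlated component
    incoherent = f - phase_avg   # residual random turbulence
    return mean, coherent, incoherent

_, u_c, u_r = triple_decompose(u)
_, v_c, v_r = triple_decompose(v)

# Coherent vs incoherent Reynolds stresses: how much momentum transport each
# component carries. Here the noise is uncorrelated, so the incoherent stress
# is near zero while the phase-locked waves produce a finite coherent stress.
stress_coherent = np.mean(u_c * v_c)
stress_incoherent = np.mean(u_r * v_r)
print(stress_coherent, stress_incoherent)
```

For these synthetic waves the coherent stress approaches 0.075·cos(0.5), the analytic correlation of the two phase-locked sinusoids.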
Formation
Coherent structures form due to some sort of instability, e.g. the Kelvin–Helmholtz instability. Identifying an instability, and hence the initial formation of a coherent structure, requires knowledge of the initial conditions of the flow structure. Hence, documentation of the initial conditions is essential for capturing the evolution and interactions of coherent structures, since initial conditions are quite variable. Initial conditions were commonly neglected in early studies because researchers did not appreciate their significance. Initial conditions include the mean velocity profile, thickness, shape, the probability densities of velocity and momentum, the spectrum of Reynolds stress values, etc. These measures of initial flow conditions can be organized and grouped into three broad categories: laminar, highly disturbed, and fully turbulent.
Out of the three categories, coherent structures typically arise from instabilities in laminar or turbulent states. After an initial triggering, their growth is determined by evolutionary changes due to non-linear interactions with other coherent structures, or their decay onto incoherent turbulent structures. Observed rapid changes lead to the belief that there must be a regenerative cycle that takes place during decay. For example, after a structure decays, the result may be that the flow is now turbulent and becomes susceptible to a new instability determined by the new flow state, leading to a new coherent structure being formed. It is also possible that structures do not decay and instead distort by splitting into substructures or interacting with other coherent structures.
Categories of coherent structures
Lagrangian coherent structures
Lagrangian coherent structures (LCSs) are influential material surfaces that create clearly recognizable patterns in passive tracer distributions advected by an unsteady flow. LCSs can be classified as hyperbolic (locally maximally attracting or repelling material surfaces), elliptic (material vortex boundaries), and parabolic (material jet cores). These surfaces are generalizations of classical invariant manifolds, known in dynamical systems theory, to finite-time unsteady flow data. This Lagrangian perspective on coherence is concerned with structures formed by fluid elements, as opposed to the Eulerian notion of coherence, which considers features in the instantaneous velocity field of the fluid. Various mathematical techniques have been developed to identify LCSs in two- and three-dimensional data sets, and have been applied to laboratory experiments, numerical simulations and geophysical observations.
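As an illustration of the hyperbolic case, repelling LCSs are commonly approximated by ridges of the finite-time Lyapunov exponent (FTLE) field. The sketch below computes an FTLE field for the analytic double-gyre flow, a standard test case for LCS methods; the grid resolution, integration time, and parameters are illustrative choices, not prescribed by any particular study:

```python
import numpy as np

# Double-gyre velocity field, a standard analytic test flow for LCS methods
A, eps, om = 0.1, 0.25, 2 * np.pi / 10

def velocity(x, y, t):
    a = eps * np.sin(om * t)
    f = a * x**2 + (1 - 2 * a) * x
    dfdx = 2 * a * x + (1 - 2 * a)
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(x, y, t0, T, dt=0.1):
    """RK4 advection of a particle grid from t0 to t0 + T."""
    t = t0
    for _ in range(int(round(T / dt))):
        k1 = velocity(x, y, t)
        k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
        k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
        x = x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y = y + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return x, y

# Flow map on a grid, then FTLE = ln(lambda_max) / (2|T|), where lambda_max is
# the largest eigenvalue of the Cauchy-Green tensor C = F^T F, with F the
# finite-difference gradient of the flow map.
nx, ny, T = 101, 51, 15.0
X, Y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
Xf, Yf = advect(X, Y, 0.0, T)

dx, dy = X[0, 1] - X[0, 0], Y[1, 0] - Y[0, 0]
dXdx = np.gradient(Xf, dx, axis=1); dXdy = np.gradient(Xf, dy, axis=0)
dYdx = np.gradient(Yf, dx, axis=1); dYdy = np.gradient(Yf, dy, axis=0)

c11 = dXdx**2 + dYdx**2
c12 = dXdx * dXdy + dYdx * dYdy
c22 = dXdy**2 + dYdy**2
lam = 0.5 * (c11 + c22) + np.sqrt(0.25 * (c11 - c22)**2 + c12**2)
ftle = np.log(np.maximum(lam, 1e-12)) / (2 * abs(T))
# Ridges of the ftle array mark candidate repelling (hyperbolic) LCSs
```

Plotting `ftle` as a contour map reveals the characteristic ridge separating the two gyres.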
Hairpin vortices
Hairpin vortices are found on top of turbulent bulges of the turbulent wall, wrapping around the turbulent wall in hairpin shaped loops, where the name originates. The hairpin-shaped vortices are believed to be one of the most important and elementary sustained flow patterns in turbulent boundary layers. Hairpins are perhaps the simplest structures, and models that represent large scale turbulent boundary layers are often constructed by breaking down individual hairpin vortices, which could explain most of the features of wall turbulence. Although hairpin vortices form the basis of simple conceptual models of flow near a wall, actual turbulent flows may contain a hierarchy of competing vortices, each with their own degree of asymmetry and disturbances.
Hairpin vortices resemble the horseshoe vortex, which exists because of small upward perturbations arising from differences in upward flow velocity at different distances from the wall. These form multiple packets of hairpin vortices, where hairpin packets of different sizes could generate new vortices to add to the packet. Specifically, close to the surface, the tail ends of hairpin vortices could gradually converge, resulting in provoked eruptions producing new hairpin vortices. Hence, such eruptions are a regenerative process, in which they act to create vortices near the surface and eject them out onto the outer regions of the turbulent wall. Based on the eruptive properties, such flows can be inferred to be very efficient at heat transfer because of mixing. Specifically, eruptions carry hot fluids up while cooler flows are brought downwards during the converging of tails of the hairpin vortices before erupting.
It is believed that production of, and contributions to, the Reynolds stress occur during strong interactions between the inner and outer walls of hairpins. During the production of this Reynolds stress term, the contributions come in sharp intermittent time segments when eruptions bring new vortices outward.
Formation of hairpin vortices has been observed in experiments and numerical simulations of single hairpins; however, observational evidence for them in nature is still limited. Theodorsen produced sketches indicating the presence of hairpin vortices in his flow visualization experiments, in which smaller elementary structures can be seen overlaying the main vortex. The sketches were well advanced for the time, but with the advent of computers came better depictions. Robinson later isolated two types of flow structures that he named the "horseshoe", or arch, vortex and the "quasi-streamwise" vortex.
Since computers came into widespread use, direct numerical simulations (DNS) have been used widely, producing vast data sets describing the complex evolution of flow. DNS indicates that many complicated three-dimensional vortices are embedded in regions of high shear near the surface. Researchers look around this region of high shear for indications of individual vortex structures based on accepted definitions, such as coherent vortices. Historically, a vortex has been thought of as a region in the flow where a group of vortex lines come together, indicating the presence of a vortex core with groups of instantaneous circular paths about the core. In 1991, Robinson defined a vortex structure as a core consisting of convected low-pressure regions, where instantaneous streamlines can form circles or spiral shapes relative to the plane normal to the vortex core. Although it is not possible to track the evolution of hairpins over long periods, it is possible to identify and trace their evolution over short time periods. Some of the key notable features of hairpin vortices are how they interact with the background shear flow, with other vortices, and with the flow near the surface.
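Vortex-identification criteria of this kind can be made concrete with a small example. The sketch below applies the Q-criterion, a common Eulerian alternative to Robinson's low-pressure-core definition that flags regions where rotation dominates strain, to a synthetic two-dimensional vortex embedded in weak shear; the field and all parameters are illustrative:

```python
import numpy as np

# Synthetic 2-D velocity field: a Gaussian vortex plus a weak background shear
n = 64
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
u = -Y * np.exp(-r2) + 0.1 * Y   # vortex + shear 0.1*Y
v = X * np.exp(-r2)

# Velocity gradients by central differences
dx = x[1] - x[0]
dudx = np.gradient(u, dx, axis=1); dudy = np.gradient(u, dx, axis=0)
dvdx = np.gradient(v, dx, axis=1); dvdy = np.gradient(v, dx, axis=0)

# Q = 1/2 (||Omega||^2 - ||S||^2): rotation-rate minus strain-rate magnitude;
# Q > 0 marks regions where rotation dominates, i.e. candidate vortex cores.
S2 = dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx)**2
O2 = 0.5 * (dudy - dvdx)**2
Q = 0.5 * (O2 - S2)

print(Q[n // 2, n // 2] > 0)  # True: the vortex core near the origin is flagged
```

In the far field, where the flow reduces to nearly pure shear, Q is close to zero, so the criterion cleanly separates the core from the background.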
References
Aerodynamics
Concepts in physics
Turbulence
Dynamical systems | Coherent turbulent structure | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,767 | [
"Turbulence",
"Aerodynamics",
"Mechanics",
"nan",
"Aerospace engineering",
"Dynamical systems",
"Fluid dynamics"
] |
54,109,664 | https://en.wikipedia.org/wiki/Liouville%20space | In the mathematical physics of quantum mechanics, Liouville space, also known as line space, is the space of operators on Hilbert space. Liouville space is itself a Hilbert space under the Hilbert-Schmidt inner product.
Abstractly, Liouville space is equivalent (isometrically isomorphic) to the tensor product of a Hilbert space with its dual. A common computational technique to organize computations in Liouville space is vectorization.
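A minimal numerical sketch of this vectorization, assuming the column-stacking convention so that vec(ABC) = (C^T kron A) vec(B) and the commutator superoperator [H, .] becomes I kron H - H^T kron I; the operators below are randomly generated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # Hilbert-space dimension; Liouville space then has dimension d*d

# Random Hermitian "Hamiltonian" and a normalized density-matrix-like operator
H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = H + H.conj().T
rho = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = rho @ rho.conj().T
rho /= np.trace(rho)

def vec(A):
    """Column-stacking vectorization: operator -> Liouville-space vector."""
    return A.reshape(-1, order="F")

# Commutator superoperator acting on vectorized operators
I = np.eye(d)
L = np.kron(I, H) - np.kron(H.T, I)

lhs = L @ vec(rho)
rhs = vec(H @ rho - rho @ H)
print(np.allclose(lhs, rhs))  # True: the superoperator reproduces [H, rho]
```

The Hilbert–Schmidt inner product tr(A†B) likewise becomes the ordinary vector inner product of vec(A) and vec(B), which is what makes Liouville space a Hilbert space in its own right.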
Liouville space underlies the density operator formalism and is a common computation technique in the study of open quantum systems.
References
Hilbert spaces
Linear algebra
Operator theory
Functional analysis | Liouville space | [
"Physics",
"Mathematics"
] | 126 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Quantum mechanics",
"Mathematical relations",
"Linear algebra",
"Hilbert spaces",
"Algebra"
] |
54,112,223 | https://en.wikipedia.org/wiki/Transcriptomics%20technologies | Transcriptomics technologies are the techniques used to study an organism's transcriptome, the sum of all of its RNA transcripts. The information content of an organism is recorded in the DNA of its genome and expressed through transcription. Here, mRNA serves as a transient intermediary molecule in the information network, whilst non-coding RNAs perform additional diverse functions. A transcriptome captures a snapshot in time of the total transcripts present in a cell. Transcriptomics technologies provide a broad account of which cellular processes are active and which are dormant.
A major challenge in molecular biology is to understand how a single genome gives rise to a variety of cells. Another is how gene expression is regulated.
The first attempts to study whole transcriptomes began in the early 1990s. Subsequent technological advances since the late 1990s have repeatedly transformed the field and made transcriptomics a widespread discipline in biological sciences. There are two key contemporary techniques in the field: microarrays, which quantify a set of predetermined sequences, and RNA-Seq, which uses high-throughput sequencing to record all transcripts. As the technology improved, the volume of data produced by each transcriptome experiment increased. As a result, data analysis methods have steadily been adapted to more accurately and efficiently analyse increasingly large volumes of data. Transcriptome databases are getting bigger and more useful as transcriptomes continue to be collected and shared by researchers. It would be almost impossible to interpret the information contained in a transcriptome without the knowledge of previous experiments.
Measuring the expression of an organism's genes in different tissues or conditions, or at different times, gives information on how genes are regulated and reveals details of an organism's biology. It can also be used to infer the functions of previously unannotated genes. Transcriptome analysis has enabled the study of how gene expression changes in different organisms and has been instrumental in the understanding of human disease. An analysis of gene expression in its entirety allows detection of broad coordinated trends which cannot be discerned by more targeted assays.
History
Transcriptomics has been characterised by the development of new techniques which have redefined what is possible every decade or so and rendered previous technologies obsolete. The first attempt at capturing a partial human transcriptome was published in 1991 and reported 609 mRNA sequences from the human brain. In 2008, two human transcriptomes, composed of millions of transcript-derived sequences covering 16,000 genes, were published, and by 2015 transcriptomes had been published for hundreds of individuals. Transcriptomes of different disease states, tissues, or even single cells are now routinely generated. This explosion in transcriptomics has been driven by the rapid development of new technologies with improved sensitivity and economy.
Before transcriptomics
Studies of individual transcripts were being performed several decades before any transcriptomics approaches were available. Libraries of silkmoth mRNA transcripts were collected and converted to complementary DNA (cDNA) for storage using reverse transcriptase in the late 1970s. In the 1980s, low-throughput sequencing using the Sanger method was used to sequence random transcripts, producing expressed sequence tags (ESTs). The Sanger method of sequencing was predominant until the advent of high-throughput methods such as sequencing by synthesis (Solexa/Illumina). ESTs came to prominence during the 1990s as an efficient method to determine the gene content of an organism without sequencing the entire genome. Amounts of individual transcripts were quantified using Northern blotting, nylon membrane arrays, and later reverse transcriptase quantitative PCR (RT-qPCR) methods, but these methods are laborious and can only capture a tiny subsection of a transcriptome. Consequently, the manner in which a transcriptome as a whole is expressed and regulated remained unknown until higher-throughput techniques were developed.
Early attempts
The word "transcriptome" was first used in the 1990s. In 1995, one of the earliest sequencing-based transcriptomic methods was developed, serial analysis of gene expression (SAGE), which worked by Sanger sequencing of concatenated random transcript fragments. Transcripts were quantified by matching the fragments to known genes. A variant of SAGE using high-throughput sequencing techniques, called digital gene expression analysis, was also briefly used. However, these methods were largely overtaken by high throughput sequencing of entire transcripts, which provided additional information on transcript structure such as splice variants.
Development of contemporary techniques
The dominant contemporary techniques, microarrays and RNA-Seq, were developed in the mid-1990s and 2000s. Microarrays that measure the abundances of a defined set of transcripts via their hybridisation to an array of complementary probes were first published in 1995. Microarray technology allowed the assay of thousands of transcripts simultaneously, at a greatly reduced cost per gene and with considerable labour savings. Both spotted oligonucleotide arrays and Affymetrix high-density arrays were the method of choice for transcriptional profiling until the late 2000s. Over this period, a range of microarrays were produced to cover known genes in model or economically important organisms. Advances in design and manufacture of arrays improved the specificity of probes and allowed more genes to be tested on a single array. Advances in fluorescence detection increased the sensitivity and measurement accuracy for low abundance transcripts.
RNA-Seq is accomplished by reverse transcribing RNA in vitro and sequencing the resulting cDNAs. Transcript abundance is derived from the number of counts from each transcript. The technique has therefore been heavily influenced by the development of high-throughput sequencing technologies. Massively parallel signature sequencing (MPSS) was an early example based on generating 16–20 bp sequences via a complex series of hybridisations, and was used in 2004 to validate the expression of ten thousand genes in Arabidopsis thaliana. The earliest RNA-Seq work was published in 2006 with one hundred thousand transcripts sequenced using 454 technology. This was sufficient coverage to quantify relative transcript abundance. RNA-Seq began to increase in popularity after 2008 when new Solexa/Illumina technologies allowed one billion transcript sequences to be recorded. This yield now allows for the quantification and comparison of human transcriptomes.
Data gathering
Generating data on RNA transcripts can be achieved via either of two main principles: sequencing of individual transcripts (ESTs, or RNA-Seq) or hybridisation of transcripts to an ordered array of nucleotide probes (microarrays).
Isolation of RNA
All transcriptomic methods require RNA to first be isolated from the experimental organism before transcripts can be recorded. Although biological systems are incredibly diverse, RNA extraction techniques are broadly similar and involve mechanical disruption of cells or tissues, disruption of RNase with chaotropic salts, disruption of macromolecules and nucleotide complexes, separation of RNA from undesired biomolecules including DNA, and concentration of the RNA via precipitation from solution or elution from a solid matrix. Isolated RNA may additionally be treated with DNase to digest any traces of DNA. It is necessary to enrich messenger RNA as total RNA extracts are typically 98% ribosomal RNA. Enrichment for transcripts can be performed by poly-A affinity methods or by depletion of ribosomal RNA using sequence-specific probes. Degraded RNA may affect downstream results; for example, mRNA enrichment from degraded samples will result in the depletion of 5’ mRNA ends and an uneven signal across the length of a transcript. Snap-freezing of tissue prior to RNA isolation is typical, and care is taken to reduce exposure to RNase enzymes once isolation is complete.
Expressed sequence tags
An expressed sequence tag (EST) is a short nucleotide sequence generated from a single RNA transcript. RNA is first copied as complementary DNA (cDNA) by a reverse transcriptase enzyme before the resultant cDNA is sequenced. Because ESTs can be collected without prior knowledge of the organism from which they come, they can be made from mixtures of organisms or environmental samples. Although higher-throughput methods are now used, EST libraries commonly provided sequence information for early microarray designs; for example, a barley microarray was designed from 350,000 previously sequenced ESTs.
Serial and cap analysis of gene expression (SAGE/CAGE)
Serial analysis of gene expression (SAGE) was a development of EST methodology to increase the throughput of the tags generated and allow some quantitation of transcript abundance. cDNA is generated from the RNA but is then digested into 11 bp "tag" fragments using restriction enzymes that cut DNA at a specific sequence, and 11 base pairs along from that sequence. These cDNA tags are then joined head-to-tail into long strands (>500 bp) and sequenced using low-throughput, but long read-length methods such as Sanger sequencing. The sequences are then divided back into their original 11 bp tags using computer software in a process called deconvolution. If a high-quality reference genome is available, these tags may be matched to their corresponding gene in the genome. If a reference genome is unavailable, the tags can be directly used as diagnostic markers if found to be differentially expressed in a disease state.
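The tag-splitting ("deconvolution") step can be sketched as follows. This is a toy illustration, not a published SAGE pipeline: the sequences and the tag-to-gene table are hypothetical, and real software also handles the anchoring-enzyme context and sequencing errors:

```python
from collections import Counter

TAG_LEN = 11  # SAGE tags are 11 bp long

def deconvolve(concatemer, tag_len=TAG_LEN):
    """Split a sequenced concatemer string back into fixed-length tags."""
    tags = [concatemer[i:i + tag_len] for i in range(0, len(concatemer), tag_len)]
    return [t for t in tags if len(t) == tag_len]  # drop any trailing partial tag

def count_tags(concatemers):
    """Tally tag occurrences across all sequenced concatemers."""
    counts = Counter()
    for c in concatemers:
        counts.update(deconvolve(c))
    return counts

# Hypothetical reference table: tag -> gene (in practice built from a genome)
tag_to_gene = {"ACGTACGTACG": "geneA", "TTTTGGGGCCC": "geneB"}

reads = ["ACGTACGTACGTTTTGGGGCCCACGTACGTACG"]  # one 3-tag concatemer
counts = count_tags(reads)
expression = {tag_to_gene.get(t, "unknown"): n for t, n in counts.items()}
print(expression)  # tag counts serve as a digital measure of transcript abundance
```

Here the 33 bp concatemer splits into three tags, giving counts of 2 for geneA and 1 for geneB.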
The cap analysis gene expression (CAGE) method is a variant of SAGE that sequences tags from the 5’ end of an mRNA transcript only. Therefore, the transcriptional start site of genes can be identified when the tags are aligned to a reference genome. Identifying gene start sites is of use for promoter analysis and for the cloning of full-length cDNAs.
SAGE and CAGE methods produce information on more genes than was possible when sequencing single ESTs, but sample preparation and data analysis are typically more labour-intensive.
Microarrays
Principles and advances
Microarrays usually consist of a grid of short nucleotide oligomers, known as "probes", typically arranged on a glass slide. Transcript abundance is determined by hybridisation of fluorescently labelled transcripts to these probes. The fluorescence intensity at each probe location on the array indicates the transcript abundance for that probe sequence. Groups of probes designed to measure the same transcript (i.e., hybridizing a specific transcript in different positions) are usually referred to as "probesets".
Microarrays require some genomic knowledge from the organism of interest, for example, in the form of an annotated genome sequence, or a library of ESTs that can be used to generate the probes for the array.
Methods
Microarrays for transcriptomics typically fall into one of two broad categories: low-density spotted arrays or high-density short probe arrays. Transcript abundance is inferred from the intensity of fluorescence derived from fluorophore-tagged transcripts that bind to the array.
Spotted low-density arrays typically feature picolitre drops of a range of purified cDNAs arrayed on the surface of a glass slide. These probes are longer than those of high-density arrays and cannot identify alternative splicing events. Spotted arrays use two different fluorophores to label the test and control samples, and the ratio of fluorescence is used to calculate a relative measure of abundance. High-density arrays use a single fluorescent label, and each sample is hybridised and detected individually. High-density arrays were popularised by the Affymetrix GeneChip array, where each transcript is quantified by several short 25-mer probes that together assay one gene.
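The two-colour ratio calculation for spotted arrays can be sketched as below. The intensities are made up, and the median-centring step shown is just one simple global normalisation among the many used in practice:

```python
import numpy as np

# Toy two-colour analysis: relative abundance from the ratio of test ("R",
# e.g. Cy5) to control ("G", e.g. Cy3) fluorescence at each spot.
R = np.array([1500.0, 800.0, 4000.0, 250.0])   # test-sample intensities
G = np.array([1450.0, 1600.0, 1000.0, 240.0])  # control-sample intensities

# MA representation: M = log2 ratio (differential expression),
# A = mean log2 intensity (overall signal strength)
M = np.log2(R) - np.log2(G)
A = 0.5 * (np.log2(R) + np.log2(G))

# Simple global normalisation: assume most genes are unchanged, so shift
# the median log-ratio to zero.
M_norm = M - np.median(M)
print(np.round(M_norm, 2))
```

After centring, spots with M near zero are unchanged between samples, while the third spot here shows a roughly four-fold induction.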
NimbleGen arrays were a high-density array produced by a maskless-photochemistry method, which permitted flexible manufacture of arrays in small or large numbers. These arrays had 100,000s of 45 to 85-mer probes and were hybridised with a one-colour labelled sample for expression analysis. Some designs incorporated up to 12 independent arrays per slide.
RNA-Seq
Principles and advances
RNA-Seq refers to the combination of a high-throughput sequencing methodology with computational methods to capture and quantify transcripts present in an RNA extract. The nucleotide sequences generated are typically around 100 bp in length, but can range from 30 bp to over 10,000 bp depending on the sequencing method used. RNA-Seq leverages deep sampling of the transcriptome with many short fragments from a transcriptome to allow computational reconstruction of the original RNA transcript by aligning reads to a reference genome or to each other (de novo assembly). Both low-abundance and high-abundance RNAs can be quantified in an RNA-Seq experiment (dynamic range of 5 orders of magnitude)—a key advantage over microarray transcriptomes. In addition, input RNA amounts are much lower for RNA-Seq (nanogram quantity) compared to microarrays (microgram quantity), which allow examination of the transcriptome even at a single-cell resolution when combined with amplification of cDNA. Theoretically, there is no upper limit of quantification in RNA-Seq, and background noise is very low for 100 bp reads in non-repetitive regions.
RNA-Seq may be used to identify genes within a genome, or identify which genes are active at a particular point in time, and read counts can be used to accurately model the relative gene expression level. RNA-Seq methodology has constantly improved, primarily through the development of DNA sequencing technologies to increase throughput, accuracy, and read length. Since the first descriptions in 2006 and 2008, RNA-Seq has been rapidly adopted and overtook microarrays as the dominant transcriptomics technique in 2015.
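As a small example of modelling relative expression from read counts, the widely used transcripts-per-million (TPM) normalisation divides counts by transcript length before rescaling, so that longer transcripts (which accumulate more reads per molecule) are not overcounted. The counts and lengths below are made up:

```python
import numpy as np

counts = np.array([500.0, 1000.0, 100.0])  # reads mapped to each gene
lengths_kb = np.array([2.0, 4.0, 0.5])     # transcript lengths in kilobases

def tpm(counts, lengths_kb):
    """Convert raw counts to transcripts per million (within-sample)."""
    rate = counts / lengths_kb        # length-normalised read rate
    return rate / rate.sum() * 1e6    # rescale so the values sum to one million

vals = tpm(counts, lengths_kb)
print(vals)
```

Note that the first two genes receive equal TPM despite a two-fold difference in raw counts, because the second transcript is twice as long.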
The quest for transcriptome data at the level of individual cells has driven advances in RNA-Seq library preparation methods, resulting in dramatic advances in sensitivity. Single-cell transcriptomes are now well described and have even been extended to in situ RNA-Seq where transcriptomes of individual cells are directly interrogated in fixed tissues.
Methods
RNA-Seq was established in concert with the rapid development of a range of high-throughput DNA sequencing technologies. However, before the extracted RNA transcripts are sequenced, several key processing steps are performed. Methods differ in the use of transcript enrichment, fragmentation, amplification, single or paired-end sequencing, and whether to preserve strand information.
The sensitivity of an RNA-Seq experiment can be increased by enriching classes of RNA that are of interest and depleting known abundant RNAs. The mRNA molecules can be separated using oligonucleotide probes which bind their poly-A tails. Alternatively, ribo-depletion can be used to specifically remove abundant but uninformative ribosomal RNAs (rRNAs) by hybridisation to probes tailored to the taxon's specific rRNA sequences (e.g. mammal rRNA, plant rRNA). However, ribo-depletion can also introduce some bias via non-specific depletion of off-target transcripts. Small RNAs, such as micro RNAs, can be purified based on their size by gel electrophoresis and extraction.
Since mRNAs are longer than the read-lengths of typical high-throughput sequencing methods, transcripts are usually fragmented prior to sequencing. The fragmentation method is a key aspect of sequencing library construction. Fragmentation may be achieved by chemical hydrolysis, nebulisation, sonication, or reverse transcription with chain-terminating nucleotides. Alternatively, fragmentation and cDNA tagging may be done simultaneously by using transposase enzymes.
During preparation for sequencing, cDNA copies of transcripts may be amplified by PCR to enrich for fragments that contain the expected 5’ and 3’ adapter sequences. Amplification is also used to allow sequencing of very low input amounts of RNA, down to as little as 50 pg in extreme applications. Spike-in controls of known RNAs can be used for quality control assessment to check library preparation and sequencing, in terms of GC-content, fragment length, as well as the bias due to fragment position within a transcript. Unique molecular identifiers (UMIs) are short random sequences that are used to individually tag sequence fragments during library preparation so that every tagged fragment is unique. UMIs provide an absolute scale for quantification, the opportunity to correct for subsequent amplification bias introduced during library construction, and accurately estimate the initial sample size. UMIs are particularly well-suited to single-cell RNA-Seq transcriptomics, where the amount of input RNA is restricted and extended amplification of the sample is required.
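A toy sketch of UMI-based deduplication, assuming reads have already been mapped: fragments sharing the same mapping position and UMI are treated as PCR duplicates of one original molecule and counted once. The read tuples and gene names are hypothetical, and real tools additionally collapse UMIs within a small edit distance to absorb sequencing errors:

```python
from collections import defaultdict

# (gene, mapping position, UMI) triples for a set of sequenced reads;
# the structure here is illustrative, not any real pipeline's format.
reads = [
    ("geneA", 101, "ACGT"),
    ("geneA", 101, "ACGT"),  # PCR duplicate of the read above
    ("geneA", 101, "TTGA"),  # same position, different original molecule
    ("geneB", 550, "GGCC"),
    ("geneB", 550, "GGCC"),  # duplicate
]

def umi_counts(reads):
    """Count unique (position, UMI) combinations per gene."""
    molecules = defaultdict(set)
    for gene, pos, umi in reads:
        molecules[gene].add((pos, umi))
    return {gene: len(umis) for gene, umis in molecules.items()}

print(umi_counts(reads))  # geneA: 2 molecules, geneB: 1 molecule
```

The returned counts estimate the number of original molecules, correcting for amplification bias introduced during library construction.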
Once the transcript molecules have been prepared they can be sequenced in just one direction (single-end) or both directions (paired-end). A single-end sequence is usually quicker to produce, cheaper than paired-end sequencing and sufficient for quantification of gene expression levels. Paired-end sequencing produces more robust alignments/assemblies, which is beneficial for gene annotation and transcript isoform discovery. Strand-specific RNA-Seq methods preserve the strand information of a sequenced transcript. Without strand information, reads can be aligned to a gene locus but do not inform in which direction the gene is transcribed. Stranded-RNA-Seq is useful for deciphering transcription for genes that overlap in different directions and to make more robust gene predictions in non-model organisms.
Legend: NCBI SRA – National Center for Biotechnology Information Sequence Read Archive.
Currently RNA-Seq relies on copying RNA molecules into cDNA molecules prior to sequencing; therefore, the subsequent platforms are the same for transcriptomic and genomic data. Consequently, the development of DNA sequencing technologies has been a defining feature of RNA-Seq. Direct sequencing of RNA using nanopore sequencing represents a current state-of-the-art RNA-Seq technique. Nanopore sequencing of RNA can detect modified bases that would be otherwise masked when sequencing cDNA and also eliminates amplification steps that can otherwise introduce bias.
The sensitivity and accuracy of an RNA-Seq experiment are dependent on the number of reads obtained from each sample. A large number of reads are needed to ensure sufficient coverage of the transcriptome, enabling detection of low abundance transcripts. Experimental design is further complicated by sequencing technologies with a limited output range, the variable efficiency of sequence creation, and variable sequence quality. Added to those considerations is that every species has a different number of genes and therefore requires a tailored sequence yield for an effective transcriptome. Early studies determined suitable thresholds empirically, but as the technology matured suitable coverage was predicted computationally by transcriptome saturation. Somewhat counter-intuitively, the most effective way to improve detection of differential expression in low expression genes is to add more biological replicates rather than adding more reads. The current benchmarks recommended by the Encyclopedia of DNA Elements (ENCODE) Project are for 70-fold exome coverage for standard RNA-Seq and up to 500-fold exome coverage to detect rare transcripts and isoforms.
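The coverage figures above follow from simple arithmetic relating read count, read length, and target size. The sketch below makes that relation explicit; the ~30 Mb exome size and 100 bp read length are illustrative assumptions for the example, not values taken from the ENCODE guidelines.

```python
def mean_coverage(n_reads, read_length_bp, target_size_bp):
    """Mean fold coverage: total sequenced bases divided by target size."""
    return n_reads * read_length_bp / target_size_bp

def reads_for_coverage(fold, read_length_bp, target_size_bp):
    """Invert the relation to plan the sequencing yield for a desired depth."""
    return fold * target_size_bp / read_length_bp

# Assuming a ~30 Mb exome target and 100 bp reads (illustrative figures):
standard = reads_for_coverage(70, 100, 30_000_000)   # 21 million reads
deep = reads_for_coverage(500, 100, 30_000_000)      # 150 million reads
check = mean_coverage(standard, 100, 30_000_000)     # round-trips to 70-fold
```

Under these assumptions, the 500-fold depth recommended for rare transcript detection requires roughly seven times the sequencing yield of a standard experiment, which is why adding biological replicates is often the more economical route to power.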
Data analysis
Transcriptomics methods are highly parallel and require significant computation to produce meaningful data for both microarray and RNA-Seq experiments. Microarray data is recorded as high-resolution images, requiring feature detection and spectral analysis. Microarray raw image files are each about 750 MB in size, while the processed intensities are around 60 MB in size. Multiple short probes matching a single transcript can reveal details about the intron-exon structure, requiring statistical models to determine the authenticity of the resulting signal. RNA-Seq studies produce billions of short DNA sequences, which must be aligned to reference genomes composed of millions to billions of base pairs. De novo assembly of reads within a dataset requires the construction of highly complex sequence graphs. RNA-Seq operations are highly repetitious and benefit from parallelised computation but modern algorithms mean consumer computing hardware is sufficient for simple transcriptomics experiments that do not require de novo assembly of reads. A human transcriptome could be accurately captured using RNA-Seq with 30 million 100 bp sequences per sample. This example would require approximately 1.8 gigabytes of disk space per sample when stored in a compressed fastq format. Processed count data for each gene would be much smaller, equivalent to processed microarray intensities. Sequence data may be stored in public repositories, such as the Sequence Read Archive (SRA). RNA-Seq datasets can be uploaded via the Gene Expression Omnibus.
Image processing
Microarray image processing must correctly identify the regular grid of features within an image and independently quantify the fluorescence intensity for each feature. Image artefacts must be additionally identified and removed from the overall analysis. Fluorescence intensities directly indicate the abundance of each sequence, since the sequence of each probe on the array is already known.
The first steps of RNA-Seq also include similar image processing; however, conversion of images to sequence data is typically handled automatically by the instrument software. The Illumina sequencing-by-synthesis method results in an array of clusters distributed over the surface of a flow cell. The flow cell is imaged up to four times during each sequencing cycle, with tens to hundreds of cycles in total. Flow cell clusters are analogous to microarray spots and must be correctly identified during the early stages of the sequencing process. In Roche’s pyrosequencing method, the intensity of emitted light determines the number of consecutive nucleotides in a homopolymer repeat. There are many variants on these methods, each with a different error profile for the resulting data.
RNA-Seq data analysis
RNA-Seq experiments generate a large volume of raw sequence reads which have to be processed to yield useful information. Data analysis usually requires a combination of bioinformatics software tools (see also List of RNA-Seq bioinformatics tools) that vary according to the experimental design and goals. The process can be broken down into four stages: quality control, alignment, quantification, and differential expression. Most popular RNA-Seq programs are run from a command-line interface, either in a Unix environment or within the R/Bioconductor statistical environment.
Quality control
Sequence reads are not perfect, so the accuracy of each base in the sequence needs to be estimated for downstream analyses. Raw data is examined to ensure: quality scores for base calls are high, the GC content matches the expected distribution, short sequence motifs (k-mers) are not over-represented, and the read duplication rate is acceptably low. Several software options exist for sequence quality analysis, including FastQC and FaQCs. Abnormalities may be removed (trimming) or tagged for special treatment during later processes.
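As a minimal illustration of the quality checks described above, the sketch below parses the quality strings of a FASTQ record and computes a mean Phred base quality. The function name and toy record are assumptions for the example; the Phred+33 ASCII offset is the modern Sanger/Illumina 1.8+ encoding convention.

```python
def mean_phred_quality(fastq_text, offset=33):
    """Mean Phred base quality across all reads in a FASTQ string.

    FASTQ records are four lines each; every fourth line is a quality
    string whose characters encode Phred scores as ASCII codes shifted
    by an offset (33 in the Phred+33 convention).
    """
    lines = fastq_text.strip().split("\n")
    quals = []
    for i in range(3, len(lines), 4):  # quality line of each record
        quals.extend(ord(c) - offset for c in lines[i])
    return sum(quals) / len(quals)

record = "@read1\nACGT\n+\nIIII\n"  # 'I' (ASCII 73) encodes Phred 40
q = mean_phred_quality(record)       # -> 40.0
```

A Phred score of 40 corresponds to a 1-in-10,000 base-call error probability; tools such as FastQC summarise exactly this kind of per-base statistic across millions of reads.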
Alignment
In order to link sequence read abundance to the expression of a particular gene, transcript sequences are aligned to a reference genome or de novo aligned to one another if no reference is available. The key challenges for alignment software include sufficient speed to permit billions of short sequences to be aligned in a meaningful timeframe, flexibility to recognise and deal with intron splicing of eukaryotic mRNA, and correct assignment of reads that map to multiple locations. Software advances have greatly addressed these issues, and increases in sequencing read length reduce the chance of ambiguous read alignments. A list of currently available high-throughput sequence aligners is maintained by the EBI.
Alignment of primary transcript mRNA sequences derived from eukaryotes to a reference genome requires specialised handling of intron sequences, which are absent from mature mRNA. Short read aligners perform an additional round of alignments specifically designed to identify splice junctions, informed by canonical splice site sequences and known intron splice site information. Identification of intron splice junctions prevents reads from being misaligned across splice junctions or erroneously discarded, allowing more reads to be aligned to the reference genome and improving the accuracy of gene expression estimates. Since gene regulation may occur at the mRNA isoform level, splice-aware alignments also permit detection of isoform abundance changes that would otherwise be lost in a bulked analysis.
De novo assembly can be used to align reads to one another to construct full-length transcript sequences without use of a reference genome. Challenges particular to de novo assembly include larger computational requirements compared to a reference-based transcriptome, additional validation of gene variants or fragments, and additional annotation of assembled transcripts. The first metrics used to describe transcriptome assemblies, such as N50, have been shown to be misleading and improved evaluation methods are now available. Annotation-based metrics are better assessments of assembly completeness, such as contig reciprocal best hit count. Once assembled de novo, the assembly can be used as a reference for subsequent sequence alignment methods and quantitative gene expression analysis.
Legend: RAM – random access memory; MPI – message passing interface; EST – expressed sequence tag.
Quantification
Quantification of sequence alignments may be performed at the gene, exon, or transcript level. Typical outputs include a table of read counts for each feature supplied to the software; for example, for genes in a general feature format file. Gene and exon read counts may be calculated quite easily using HTSeq, for example. Quantitation at the transcript level is more complicated and requires probabilistic methods to estimate transcript isoform abundance from short read information; for example, using cufflinks software. Reads that align equally well to multiple locations must be identified and either removed, aligned to one of the possible locations, or aligned to the most probable location.
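The gene-level counting and multi-mapper handling described above can be sketched as a toy: uniquely aligned reads are assigned to the gene interval containing them, and ambiguous multi-mappers are discarded (the simplest of the three strategies mentioned). This is an illustrative miniature, not the actual HTSeq implementation; all identifiers and coordinates are assumptions for the example.

```python
def count_reads_per_gene(alignments, genes):
    """Count uniquely aligned reads per gene.

    alignments: {read_id: [(chrom, pos), ...]} - all reported hits per read.
    genes: {gene_id: (chrom, start, end)} - non-overlapping gene intervals.
    Reads with more than one hit are discarded rather than assigned.
    """
    counts = {gene_id: 0 for gene_id in genes}
    for hits in alignments.values():
        if len(hits) != 1:
            continue  # drop ambiguous multi-mappers
        chrom, pos = hits[0]
        for gene_id, (g_chrom, start, end) in genes.items():
            if chrom == g_chrom and start <= pos < end:
                counts[gene_id] += 1
                break
    return counts

genes = {"geneA": ("chr1", 0, 1000), "geneB": ("chr1", 2000, 3000)}
alignments = {
    "r1": [("chr1", 500)],                  # unique hit in geneA
    "r2": [("chr1", 2500)],                 # unique hit in geneB
    "r3": [("chr1", 100), ("chr1", 2100)],  # multi-mapper, discarded
}
counts = count_reads_per_gene(alignments, genes)
```

Transcript-level quantification cannot use so simple a rule, because isoforms share exons; that is where the probabilistic models mentioned above become necessary.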
Some quantification methods can circumvent the need for an exact alignment of a read to a reference sequence altogether. The kallisto software method combines pseudoalignment and quantification into a single step that runs 2 orders of magnitude faster than contemporary methods such as those used by tophat/cufflinks software, with less computational burden.
Differential expression
Once quantitative counts of each transcript are available, differential gene expression is measured by normalising, modelling, and statistically analysing the data. Most tools will read a table of genes and read counts as their input, but some programs, such as cuffdiff, will accept binary alignment map format read alignments as input. The final outputs of these analyses are gene lists with associated pair-wise tests for differential expression between treatments and the probability estimates of those differences.
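As a minimal illustration of the normalisation step, counts-per-million (CPM) scaling removes library-size differences before samples are compared. Production tools use more sophisticated models (dispersion estimation, trimmed-mean scaling factors); the function below is only a sketch with made-up counts.

```python
def counts_per_million(raw_counts):
    """Library-size normalisation: rescale each gene's count to counts
    per million mapped reads, so samples sequenced to different depths
    become comparable before statistical testing."""
    total = sum(raw_counts.values())
    return {gene: count * 1_000_000 / total
            for gene, count in raw_counts.items()}

sample = {"geneA": 150, "geneB": 350, "geneC": 500}  # 1,000 reads total
cpm = counts_per_million(sample)  # geneA -> 150000.0 CPM
```

After this rescaling, a gene with 150 reads in a 1,000-read library and one with 1,500 reads in a 10,000-read library receive the same normalised value, which is the precondition for any meaningful differential test.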
Legend: mRNA – messenger RNA.
Validation
Transcriptomic analyses may be validated using an independent technique, for example, quantitative PCR (qPCR), which is recognisable and statistically assessable. Gene expression is measured against defined standards both for the gene of interest and control genes. The measurement by qPCR is similar to that obtained by RNA-Seq wherein a value can be calculated for the concentration of a target region in a given sample. qPCR is, however, restricted to amplicons smaller than 300 bp, usually toward the 3’ end of the coding region, avoiding the 3’UTR. If validation of transcript isoforms is required, an inspection of RNA-Seq read alignments should indicate where qPCR primers might be placed for maximum discrimination. The measurement of multiple control genes along with the genes of interest produces a stable reference within a biological context. qPCR validation of RNA-Seq data has generally shown that different RNA-Seq methods are highly correlated.
Functional validation of key genes is an important consideration for post transcriptome planning. Observed gene expression patterns may be functionally linked to a phenotype by an independent knock-down/rescue study in the organism of interest.
Applications
Diagnostics and disease profiling
Transcriptomic strategies have seen broad application across diverse areas of biomedical research, including disease diagnosis and profiling. RNA-Seq approaches have allowed for the large-scale identification of transcriptional start sites, uncovered alternative promoter usage, and novel splicing alterations. These regulatory elements are important in human disease and, therefore, defining such variants is crucial to the interpretation of disease-association studies. RNA-Seq can also identify disease-associated single nucleotide polymorphisms (SNPs), allele-specific expression, and gene fusions, which contributes to the understanding of disease causal variants.
Retrotransposons are transposable elements which proliferate within eukaryotic genomes through a process involving reverse transcription. RNA-Seq can provide information about the transcription of endogenous retrotransposons that may influence the transcription of neighboring genes by various epigenetic mechanisms that lead to disease. Similarly, the potential for using RNA-Seq to understand immune-related disease is expanding rapidly due to the ability to dissect immune cell populations and to sequence T cell and B cell receptor repertoires from patients.
Human and pathogen transcriptomes
RNA-Seq of human pathogens has become an established method for quantifying gene expression changes, identifying novel virulence factors, predicting antibiotic resistance, and unveiling host-pathogen immune interactions. A primary aim of this technology is to develop optimised infection control measures and targeted individualised treatment.
Transcriptomic analysis has predominantly focused on either the host or the pathogen. Dual RNA-Seq has been applied to simultaneously profile RNA expression in both the pathogen and host throughout the infection process. This technique enables the study of the dynamic response and interspecies gene regulatory networks in both interaction partners from initial contact through to invasion and the final persistence of the pathogen or clearance by the host immune system.
Responses to environment
Transcriptomics allows identification of genes and pathways that respond to and counteract biotic and abiotic environmental stresses. The non-targeted nature of transcriptomics allows the identification of novel transcriptional networks in complex systems. For example, comparative analysis of a range of chickpea lines at different developmental stages identified distinct transcriptional profiles associated with drought and salinity stresses, including identifying the role of transcript isoforms of AP2-EREBP. Investigation of gene expression during biofilm formation by the fungal pathogen Candida albicans revealed a co-regulated set of genes critical for biofilm establishment and maintenance.
Transcriptomic profiling also provides crucial information on mechanisms of drug resistance. Analysis of over 1000 isolates of Plasmodium falciparum, a virulent parasite responsible for malaria in humans, identified that upregulation of the unfolded protein response and slower progression through the early stages of the asexual intraerythrocytic developmental cycle were associated with artemisinin resistance in isolates from Southeast Asia.
The use of transcriptomics is also important to investigate responses in the marine environment. In marine ecology, "stress" and "adaptation" have been among the most common research topics, especially related to anthropogenic stress, such as global change and pollution. Most of the studies in this area have been done in animals, though invertebrates remain underrepresented. One remaining issue is a deficiency in functional genetic studies, which hampers gene annotation, especially for non-model species, and can lead to vague conclusions about the effects of the responses studied.
Gene function annotation
All transcriptomic techniques have been particularly useful in identifying the functions of genes and identifying those responsible for particular phenotypes. Transcriptomics of Arabidopsis ecotypes that hyperaccumulate metals correlated genes involved in metal uptake, tolerance, and homeostasis with the phenotype. Integration of RNA-Seq datasets across different tissues has been used to improve annotation of gene functions in commercially important organisms (e.g. cucumber) or threatened species (e.g. koala).
Assembly of RNA-Seq reads is not dependent on a reference genome and so is ideal for gene expression studies of non-model organisms with non-existing or poorly developed genomic resources. For example, a database of SNPs used in Douglas fir breeding programs was created by de novo transcriptome analysis in the absence of a sequenced genome. Similarly, genes that function in the development of cardiac, muscle, and nervous tissue in lobsters were identified by comparing the transcriptomes of the various tissue types without use of a genome sequence. RNA-Seq can also be used to identify previously unknown protein coding regions in existing sequenced genomes.
Non-coding RNA
Transcriptomics is most commonly applied to the mRNA content of the cell. However, the same techniques are equally applicable to non-coding RNAs (ncRNAs) that are not translated into a protein, but instead have direct functions (e.g. roles in protein translation, DNA replication, RNA splicing, and transcriptional regulation). Many of these ncRNAs affect disease states, including cancer, cardiovascular, and neurological diseases.
Transcriptome databases
Transcriptomics studies generate large amounts of data that have potential applications far beyond the original aims of an experiment. As such, raw or processed data may be deposited in public databases to ensure their utility for the broader scientific community. For example, as of 2018, the Gene Expression Omnibus contained millions of experiments.
Legend: NCBI – National Center for Biotechnology Information; EBI – European Bioinformatics Institute; DDBJ – DNA Data Bank of Japan; ENA – European Nucleotide Archive; MIAME – Minimum Information About a Microarray Experiment; MINSEQE – Minimum Information about a high-throughput nucleotide SEQuencing Experiment.
See also
omics
Genomics
Proteomics
Metabolomics
Interactomics
References
Notes
Further reading
Comparative Transcriptomics Analysis in Reference Module in Life Sciences
Software used in transcriptomics:
cufflinks
kallisto
tophat
Omics
Molecular biology | Transcriptomics technologies | [
"Chemistry",
"Biology"
] | 6,859 | [
"Biochemistry",
"Bioinformatics",
"Omics",
"Molecular biology"
] |
54,116,133 | https://en.wikipedia.org/wiki/DH5-Alpha%20Cell | DH5-Alpha Cells are E. coli cells engineered by American biologist Douglas Hanahan to maximize transformation efficiency. They are defined by three mutations: recA1 and endA1, which help plasmid insertion, and lacZΔM15, which enables blue-white screening. The cells are competent and often used with calcium chloride transformation to insert the desired plasmid. A study of four transformation methods and six bacteria strains showed that the most efficient combination was the DH5 strain with the Hanahan method.
Mutations
The recA1 mutation is a single point mutation that replaces glycine 160 of the RecA polypeptide with an aspartic acid residue, disabling recombinase activity and inactivating homologous recombination.
The endA1 mutation inactivates an intracellular endonuclease to prevent it from degrading the inserted plasmid.
References
Escherichia coli
Molecular biology | DH5-Alpha Cell | [
"Chemistry",
"Biology"
] | 196 | [
"Biochemistry",
"Model organisms",
"Escherichia coli",
"Molecular biology"
] |
54,117,020 | https://en.wikipedia.org/wiki/Unrestricted%20algorithm | An unrestricted algorithm is an algorithm for the computation of a mathematical function that puts no restrictions on the range of the argument or on the precision that may be demanded in the result. The idea of such an algorithm was put forward by C. W. Clenshaw and F. W. J. Olver in a paper published in 1980.
In the problem of developing algorithms for computing the values of a real-valued function of a real variable (e.g., g(x)), a "restricted" algorithm specifies in advance the error that can be tolerated in the result, together with an interval on the real line for the argument values at which the function is to be evaluated. Different algorithms may have to be applied for evaluating the function outside that interval. An unrestricted algorithm, by contrast, envisages a situation in which a user may stipulate the value of x and also the precision required in g(x) quite arbitrarily. The algorithm should then produce an acceptable result without failure.
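The idea can be sketched with Python's arbitrary-precision `decimal` module: the user supplies both the argument and the number of digits, and the working precision adapts until the result is good enough. The choice of exp(x) and the guard-digit strategy below are illustrative assumptions, not Clenshaw and Olver's actual construction.

```python
from decimal import Decimal, getcontext

def exp_unrestricted(x, digits):
    """Evaluate exp(x) to any requested number of significant digits.

    The Taylor series is summed with extra guard digits until the next
    term can no longer affect the result, so neither the argument range
    nor the precision is fixed in advance -- the defining property of an
    unrestricted algorithm.
    """
    getcontext().prec = digits + 10          # guard digits against rounding
    x = Decimal(str(x))
    term = Decimal(1)
    total = Decimal(1)
    n = 0
    while abs(term) > Decimal(10) ** -(digits + 5):
        n += 1
        term *= x / n                        # next Taylor term x^n / n!
        total += term
    getcontext().prec = digits
    return +total                            # re-round to requested precision

e_50 = exp_unrestricted(1, 50)  # e to 50 significant digits
```

The same routine serves a user who wants three digits of exp(0.5) and one who wants five hundred digits of exp(10); a restricted algorithm would instead fix a polynomial approximation valid only for one precision and interval.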
References
Numerical analysis
Theoretical computer science | Unrestricted algorithm | [
"Mathematics"
] | 216 | [
"Theoretical computer science",
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
54,117,828 | https://en.wikipedia.org/wiki/International%20Society%20for%20Prosthetics%20and%20Orthotics | The International Society for Prosthetics and Orthotics (ISPO) is a non-governmental organization of people working in or interested in prosthetics, orthotics, mobility and assistive devices technology.
It was founded in 1970 in Copenhagen, Denmark by a committee chaired by Knud Jansen. It currently has about 3,500 members in over 100 countries.
ISPO, in partnership with the World Health Organization (WHO) has developed the WHO Standards for Prosthetics and Orthotics that were launched in May 2017 at the 16th World Congress of the International Society of Prosthetics and Orthotics (ISPO) in Cape Town, South Africa.
ISPO is also responsible for Prosthetics and Orthotics International, a quarterly academic journal that publishes papers related to prosthetics and orthotics.
References
External links
Prosthetics
International medical and health organizations
Non-profit organizations based in Copenhagen
1970 establishments in Denmark | International Society for Prosthetics and Orthotics | [
"Engineering",
"Biology"
] | 203 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
48,914,877 | https://en.wikipedia.org/wiki/PET%20radiotracer | A PET radiotracer is a type of radioligand used for diagnostic purposes via the positron emission tomography (PET) imaging technique.
Mechanism
PET is a functional imaging technique that produces a three-dimensional image of functional processes in the body. The system detects pairs of gamma rays emitted indirectly by a positron-emitting radionuclide (tracer), which is introduced into the body on a biologically active molecule.
Pharmacology
In in vivo systems it is often used to quantify the binding of a test molecule to the binding site of a radioligand. The higher the affinity of the test molecule, the more radioligand is displaced from the binding site, and the resulting change in radioactivity can be measured by scintigraphy. This assay is commonly used to calculate the binding constants of molecules at receptors. Because of their potential to cause harm, PET radiotracers cannot be administered at the normal doses of therapeutic medications; the binding affinity (pKd) of PET tracers must therefore be high. In addition, since PET imaging is intended to investigate a specific function accurately, selective binding to the specific target is very important.
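For example, when the displacement assay described above yields an IC50 for the test molecule, its inhibition constant is commonly derived via the Cheng–Prusoff relation Ki = IC50 / (1 + [L]/Kd). The function name and all numerical values below are purely illustrative.

```python
def cheng_prusoff_ki(ic50_nM, radioligand_conc_nM, radioligand_kd_nM):
    """Convert an IC50 from a competition binding assay into an
    inhibition constant Ki via the Cheng-Prusoff relation:

        Ki = IC50 / (1 + [L] / Kd)

    where [L] is the radioligand concentration used in the assay and Kd
    is its dissociation constant at the receptor.
    """
    return ic50_nM / (1 + radioligand_conc_nM / radioligand_kd_nM)

# Illustrative values: IC50 = 30 nM measured with 2 nM radioligand, Kd = 1 nM.
ki = cheng_prusoff_ki(30.0, 2.0, 1.0)  # -> 10.0 nM
```

The correction matters because the measured IC50 depends on how much radioligand is competing; Ki, by contrast, characterises the test molecule itself.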
See also
Medicinal radiocompounds
List of PET radiotracers
Positron emission tomography
Medicinal radiochemistry
Radioligand
References
Positron emission tomography
Neuroimaging
Nuclear medicine
Radiopharmaceuticals
Medicinal radiochemistry
Chemicals in medicine | PET radiotracer | [
"Physics",
"Chemistry"
] | 299 | [
"Antimatter",
"Medicinal radiochemistry",
"Positron emission tomography",
"PET radiotracers",
"Radiopharmaceuticals",
"Medicinal chemistry",
"Chemicals in medicine",
"Matter"
] |
48,915,162 | https://en.wikipedia.org/wiki/Social%20Credit%20System | The Social Credit System () is a national credit rating and blacklist implemented by the government of the People's Republic of China. The social credit system is a record system so that businesses, individuals and government institutions can be tracked and evaluated for trustworthiness. The national regulatory method is based on varying degrees of whitelisting (termed redlisting in China) and blacklisting.
There has been a widespread misconception that China operates a nationwide and unitary social credit "score" based on individuals' behavior, leading to punishments if the score is too low. Media reports in the West have sometimes exaggerated or inaccurately described this concept. In 2019, the central government voiced dissatisfaction with pilot cities experimenting with social credit scores. It issued guidelines clarifying that citizens could not be punished for having low scores, and that punishments should only be limited to legally defined crimes and civil infractions. As a result, pilot cities either discontinued their point-based systems or restricted them to voluntary participation with no major consequences for having low scores. According to a February 2022 report by the Mercator Institute for China Studies (MERICS), a social credit "score" is a myth as there is "no score that dictates citizen's place in society".
The origin of the concept can be traced back to the 1980s when the Chinese government attempted to develop a personal banking and financial credit rating system, especially for rural individuals and small businesses who lacked documented records. The program first emerged in the early 2000s, inspired by the credit scoring systems in other countries. The program initiated regional trials in 2009, before launching a national pilot with eight credit scoring firms in 2014.
The Social Credit System is an extension to the existing legal and financial credit rating system in China. Managed by the National Development and Reform Commission (NDRC), the People's Bank of China (PBOC) and the Supreme People's Court (SPC), the system was intended to standardize the credit rating function and perform financial and social assessment for businesses, government institutions, individuals and non-government organizations. The Chinese government's stated aim is to enhance trust in society with the system and regulate businesses in areas such as food safety, intellectual property, and financial fraud. By 2023, most private social credit initiatives had been shut down by the PBOC.
History
Background
The origin of the Social Credit System can be traced back to the early 1990s as part of attempts to develop personal banking and financial credit rating systems in China, and was inspired by Western commercial credit systems like FICO, Equifax, and TransUnion. The credit system aims to facilitate financial assessment in rural areas, where individuals and small business entities often lacked financial documents.
In 1999, businesswoman Huang Wenyun wrote a report following her negative experiences with domestic business trustworthiness and her research into credit management in the United States business environment. At the time, credit management and rating were largely unfamiliar concepts within the Chinese economy. Huang sent her report to Premier Zhu Rongji, who approved it and in August 1999 ordered the People's Bank of China to take immediate action. In September 1999, the Institute of Economics of the Chinese Academy of Social Sciences began a research project on establishing a national credit management system. Huang contributed more than RMB 300,000 to fund the research initiative and sponsored fieldwork in the United States and Europe. In the United States, the research group studied and prepared translations of 17 American credit reporting laws, including the Fair Credit Reporting Act.
In January 2000, the research group from the Chinese Academy of Social Sciences compiled their research into a text titled National Credit Management System. Among these academics was Lin Junyue, who became an important intellectual figure in the development of social credit. Premier Zhu approved the text and instructed government figures from ten ministries and commissions to begin studying the creation of a social credit management system. In late January 2000, the State Council released an essay by Zhu in which Zhu stated that China must "vigorously rectify social credit." In March 2000, Zhu delivered the government's work report to the National People's Congress, in which Zhu talked about the need to rectify social credit in the context of supervision of financial institutions, fraud, tax evasion, and debt repayments.
2002 to 2014
In 2002, the construction of a social credit system was formally announced during the 16th National Congress of the Chinese Communist Party. The central government had not developed a specific vision for what a finished system might look like. Local governments were to develop pilot initiatives which could then guide the larger policy approach.
In 2003, the State Council stated that the basic framework and operational mechanisms for a social credit should be established within five years. Most of the goals in this period were missed, although the financial aspects of social credit developed much further than non-financial aspects.
Among the financial aspects of social credit which developed quickly was credit reporting. In March 2006, the People's Bank of China established the Credit Reference Center, which has information regarding financial credit worthiness and has established basic financial records for 990 million Chinese citizens as of 2019. Its records relate only to finance and does not have any blacklist mechanism.
In 2007, the Inter-Ministerial Joint Conference on the Establishment of the SCS was established, replacing the leading small group which had previously been the top policy organ for social credit issues. The initial blueprints of the Social Credit System were drafted in 2007 by government bodies. The social credit system also attempts to solve the moral vacuum problem, insufficient market supervision and income inequality generated by the rapid economic and social changes since Chinese economic reform in 1978. As a result of these problems, trust issues emerged in Chinese society such as food safety scandals, labor law violations, intellectual property thefts and corruption. Among the purposes of social credit is promotion and moral education regarding personal integrity and honesty. The policy of the social credit system traces its origin from both policing and work management practices.
The government of modern China has maintained systems of paper records on individuals and households such as the dàng'àn () and hùkǒu () which officials might refer to, but these systems do not provide the same degree and rapidity of feedback and consequences for Chinese citizens as the integrated electronic system because of the much greater difficulty of aggregating paper records for rapid, robust analysis.
The Social Credit System also originated from grid-style social management, a policing strategy first implemented in select locations from 2001 and 2002 (during the administration of Chinese Communist Party General Secretary Jiang Zemin) in specific locations across mainland China. In 2002, the Jiang administration proposed a social credit system as part of the promotion of a "unified, open, competitive, and orderly modern market system." In its first phase, grid-style policing was a system for more effective communication between public security bureaus. Within a few years, the grid system was adapted for use in distributing social services. Grid management provided the authorities not only with greater situational awareness on the group level, but also enhanced the tracking and monitoring of individuals. In 2018, sociologist Zhang Lifan explained that Chinese society today is still deficient in trust. People often expect to be cheated or to get in trouble even if they are innocent. He believes that it is due to the Cultural Revolution, where friends and family members were deliberately pitted against each other and millions of Chinese were killed. The stated purpose of the social credit system is to help Chinese people trust each other again.
One focus of social credit is to build judicial credibility through more effective enforcement of court orders. In 2013, the Supreme People's Court (SPC) of China started a blacklist of debtors with roughly 32,000 names. The list has since been described as a first step towards a national Social Credit System by state-owned media. The SPC's blacklist is composed of Chinese citizens and companies that refuse to comply with court orders (typically court orders to pay a fine or to repay a loan) despite having the ability to do so. It is hosted online at the Supreme People's Court judgment defaulter blacklist portal, and the information is shared with Credit China and the National Enterprise Credit Information Publicity System. The SPC also began working with private companies. For example, Sesame Credit began deducting credit points from people who defaulted on court fines.
Although there was institutional enthusiasm for a social credit system during the 2004 to 2014 period, implementation was adversely impacted by planning difficulties stemming from the relationship between credit reporting initiatives (which were defined narrowly) and regulatory objectives (which were more vaguely defined). A lack of central coordination resulted in institutional bottlenecks.
2014 to 2020
The State Council sought to accelerate the development of social credit and, in 2014, issued the Planning Outline for the Construction of a Social Credit System (2014-2020). The Planning Outline was a major step in China's approach to developing a social credit system; before the 2014 Planning Outline, there had been only one high-level policy document (issued in 2007). Since the Planning Outline, the State Council has issued new guidance annually.
The Planning Outline focused primarily on economic activity in commerce, government affairs, social integrity, and judicial credibility. It set broad goals intended to be reached by 2020:
a reward and punishment mechanism should be fully effective,
a basic credit investigation that covers the whole of society should be established,
credit oversight mechanisms should be established,
credit service markets should be performing well, and
fundamental social credit laws, regulations, and standards should be established.
In 2015, the People's Bank of China licensed eight companies to begin a trial of social credit systems. Among these eight firms were Sesame Credit (owned by Alibaba Group and operated by Ant Financial), Tencent, and China's biggest ride-sharing and online-dating services, Didi Chuxing and Baihe.com, respectively. In general, multiple firms collaborated with the government to develop the software and algorithms used to calculate credit. These commercial pilot programs were developed by private Chinese conglomerates authorized by the state to test out social credit experiments. The pilots are more widespread than their local government counterparts but function on a voluntary basis: citizens can decide to opt out of these systems at any time on request. Users with good scores are offered advantages such as easier access to credit loans, discounts for car and bike sharing services, fast-tracked visa applications, free health check-ups and preferential treatment at hospitals.
In 2016, the State Council encouraged market entities to provide preferential treatment to those with outstanding financial credit records and differentiated services to those with seriously untrustworthy records.
The Chinese central government originally considered having the Social Credit System be run by a private firm, but by 2017, it acknowledged the need for third-party administration. However, no licenses were granted to private companies. By mid-2017, the Chinese government had decided that none of the pilot programs would receive authorization to be official credit reporting systems. The reasons included conflicts of interest, the government's desire to retain control, and a lack of cooperation in data sharing among the firms participating in the development. However, operation of the Social Credit System by a seemingly external association, such as a formal collaboration between private firms, has not been ruled out. In November 2017, Sesame Credit denied that its data was shared with the Chinese government. In 2017, the People's Bank of China issued a jointly owned license to Baihang Credit valid for three years. Baihang Credit is co-owned by the National Internet Finance Association (36%) and the eight other companies (8% each), allowing the state to maintain control and oversee the creation of new commercial pilot programs. As of mid-2018, only pilot schemes had been tested without any official implementation.
Private companies have also signed contracts with provincial governments to set up the basic infrastructure for the Social Credit System at the provincial level. As of March 2017, 137 commercial credit reporting companies were active on the Chinese market. As part of the development of the Social Credit System, the Chinese government has been monitoring the progress of third-party Chinese credit rating systems. Ultimately, the Chinese government dropped its support for privately developed credit rating systems, and these pilot projects survived only as corporate loyalty programs.
In December 2017, the National Development and Reform Commission and the People's Bank of China selected "model cities" that demonstrated the steps needed to make a functional and efficient implementation of the Social Credit System. Among them are: Hangzhou, Nanjing, Xiamen, Chengdu, Suzhou, Suqian, Huizhou, Wenzhou, Weihai, Weifang, Yiwu and Rongcheng. These pilots were deemed successful in their handling of "blacklists and 'redlists'", their creation of "credit sharing platforms" and their "data sharing efforts with the other cities".
By 2018, some restrictions had been placed on citizens which state-owned media described as the first step toward creating a nationwide social credit system.
According to Antonia Hmaidi of the Mercator Institute for China Studies (MERICS), the local government Social Credit System experiments are focused more on the construction of transparent rule-based systems, in contrast with the rating systems used in the commercial pilots. Citizens often begin with an initial score, to which points are added or deducted depending on their actions. The specific number of points for each action is often listed in publicly available catalogs. Cities also experimented with a multi-level system, in which districts decide on scorekeepers who are responsible for reporting scores to higher-ups. Some experiments also allowed citizens to appeal the scores attributed to them.
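The point-catalog mechanism described above can be sketched as follows. This is a minimal illustration only: the action names, point values, and initial score are invented for the example and do not correspond to any real city's catalog.

```python
# Hypothetical sketch of a local pilot's point-based scoring: citizens
# start from an initial score, and points are added or deducted per
# action according to a published catalog. All values are invented.

INITIAL_SCORE = 1000

# Example catalog mapping recorded actions to point adjustments
# (invented entries for illustration).
CATALOG = {
    "blood_donation": +30,
    "community_volunteering": +20,
    "court_judgment_default": -100,
    "traffic_violation": -10,
}

def apply_actions(actions, initial=INITIAL_SCORE, catalog=CATALOG):
    """Return the score after applying a list of recorded actions.

    Actions not listed in the catalog are ignored, mirroring the idea
    that only publicly cataloged behaviors may affect the score.
    """
    score = initial
    for action in actions:
        score += catalog.get(action, 0)
    return score

print(apply_actions(["blood_donation", "traffic_violation"]))  # 1020
```

A multi-level variant, as described above, would simply have district scorekeepers compute such totals locally and report them upward.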
In 2019, the central government expressed "unhappiness" at the pilot cities that were experimenting with social credit scores and issued guidelines stating that no citizen could be punished for having a low score and that punishment could only be imposed for legally defined crimes and civil infractions. Consequently, pilot cities either changed their programs to be encouragement-only or never materialized at all.
In July 2019, an NDRC spokesperson stated at a press conference that "personal credit scores can be combined with incentives for trustworthiness, but cannot be used for punishments". The Hong Kong Government stated in July 2019 that claims that the social credit system will be rolled out in Hong Kong are "totally unfounded" and stated that the system will not be implemented there.
In 2019, high-level NDRC officials stated that over 10% of people blacklisted for their commission of tax fraud had repaid their taxes, that the bad credit rate had decreased by 22.7%, and that the proportion of companies blacklisted had decreased. In the view of these officials, these were "remarkable results."
2020 to present
In 2020, the Supreme People's Court announced that a nationwide total of 7.51 million blacklisted judgment defaulters had fulfilled their legal obligations and been removed from the judgment defaulter blacklist, accounting for half of the blacklisted judgment defaulters as of that date.
As a result of the COVID-19 pandemic, various aspects of social credit were modified. On February 1, 2020, the People's Bank of China announced it would temporarily suspend the inclusion of mortgage and credit card payments in the credit record of people impacted by the pandemic. Private financial credit scoring companies, including Sesame Credit, suspended financial credit ratings. Various cities established mechanisms to incentivize companies to provide pandemic relief, with measures including redlisting for those donating funds and supplies with benefits like simplified administrative procedures, increased policy support, or increased financial support. On the enforcement side of social credit, provinces and cities promulgated regulations emphasizing heavy penalties for price hikes, violence against doctors, counterfeit medical supplies, refusal to comply with pandemic prevention measures, and wildlife trade violations.
In 2020, the rights protection metrics in the NDRC's City Credit Status Monitoring and Early Warning Indicators emphasized that cities must establish transparent credit repair procedures handled within an appropriate timeframe. It also emphasized that cities should prevent the over-generalization of the concept of credit, stating that individual behavior such as petitioning the government, unpaid property fees, and running red lights (among other listed examples) must not be included in a person's credit record.
The State Council issued its Guiding Opinions on Further Improving Systems for Restraining the Untrustworthy and Building Mechanisms for Building Credit Worthiness that have Long-term Effect in November 2020. The central message of the Guiding Opinions was that new blacklists should not be created on an ad hoc basis and that social credit should not be applied in policy areas without sufficient consensus. It stated that credit repair processes must be improved, that blacklists must only be used in instances of severe harm, and that information security and privacy should be prioritized.
In November 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted a Recommendation on the Ethics of AI. Among its recommendations is that "AI systems should not be used for social scoring or mass surveillance purposes." China is a signatory of the document.
Following their submission for public comment, China in December 2021 issued the National List of Basic Penalty Measures for Untrustworthiness and the National Directory of Public Credit Information. The National Directory establishes limitations on what types of credit information can be collected or used as a basis for social credit penalties or rewards. It describes three categories of data:
information that is appropriate for consideration,
information on violations that can be considered only when the circumstances of the violation are severe, and
information that can never be included as part of social credit.
Information appropriate for consideration includes records of the execution of judicial judgments and administrative violations, among other material, as well as positive recognition for trustworthy behavior. Information that may be considered only when the circumstances of the violation are severe includes small payment arrears or public transportation fare evasion. The National Directory bans the consideration of private information such as religious preferences or government petitioning activity.
The December 2021 National List's purpose is to further standardize penalty measures. It specifies that administrative bodies cannot extend penalties beyond those provided in national level law and regulation. In a 2022 directive, the State Council stated that it will "actively explore innovative ways to use the credit concept and methods to solve difficulties, bottlenecks, and painful points that restrict the country's economic and social activities." On 14 November 2022, the NDRC issued a draft Law on the Establishment of the Social Credit System. According to academic Vincent Brussee, the draft "was deeply unsatisfactory to SCS observers worldwide. It did not stipulate anything not already regulated in one of the many recent documents on the system. The draft just copy-pasted bits from those." Academic Haiqing Yu writes that "the draft law is a patchwork of existing policies and regulations that prioritise unification rather than clarification."
As of 2022, over 62 different Social Credit System pilot programs were implemented by local governments. The pilot programs began following the release of the 2014 "Planning Outline for the Construction of a Social Credit System" by Chinese authorities. The government oversees the creation and development of these governmental pilots by requesting they each publish a regular "interdepartmental agreement on joint enforcement of rewards and punishments for 'trustworthy' and 'untrustworthy' conduct."
Though some reports stated social credit would be powered by artificial intelligence (AI), as of 2023 penalty decisions were made by humans, not AI, and digitization remained limited. Credit systems for local governments remained undeveloped and resembled incentivized loyalty programs like those run by airlines. Participation is fully voluntary, and there are no penalties beyond losing access to minor rewards. For fear of overreach and pushback, the Chinese central government banned punishments for low scores and minor offences. During the city trials, pilot programs saw only limited participation, and many people living in pilot program cities are unaware of the programs. In Xiamen, 210,059 users activated their social credit accounts, roughly 5% of the city's population; in Wuhu, 60,000 people, or 1.5% of the population, participated in the system; Hangzhou had 1,872,316 participants (15%), though fewer regularly used the system. Scores are not shared between cities, as the scoring criteria and mechanisms differ.
By 2023, most private social credit initiatives had been shut down by the People's Bank of China, and regulations had cracked down on most local scoring pilot programs.
Organization
Social credit in China is a broad policy category seeking to enforce legal obligations including laws, regulations, and contracts. Social credit does not itself bring new restrictions; it focuses on increasing implementation of existing restrictions. There are multiple social credit systems in China, some of which are designed and operated by the state, while others are operated by private companies. China's governmental approaches to social credit are described by various sets of documents issued by different institutions. There is no integrated system, nor a comprehensive document setting out a unified approach. Generally, the different approaches to social credit are united by the theme of increasing digitization, data collection, and data centralization.
There is no unified, numerical credit score for businesses or individuals; rather, national and local platforms use different evaluation or rating systems. Due to the differences among the various pilot programs and the fragmented system structure, information regarding the scoring mechanism is often conflicting. Inspired by FICO, a numerical social credit score calculated from individual behavior and activities was given to citizens in certain pilot programs developed by financial firms or localized initiatives. However, these practices were not widely applied, and eventually the numerical score mechanism was limited to private credit rating and loyalty programs. Private involvement was ultimately abandoned by the government.
The system includes sanctions for offenders; unlike in the past, where offenders were punished by one supervising agency or court, they now face sanctions from multiple agencies, greatly increasing their effect. Though the sanctions are severe, they affect only a small share of companies and individuals. By publicizing these punishments and blacklists through state media and other agencies, the system aims to create a deterrent effect.
Social credit is an example of China's "top-level design" approach. It is coordinated by the Central Comprehensively Deepening Reforms Commission. Social credit, as referred to by the Chinese government, generally covers two different concepts. The first is "traditional financial creditworthiness", which documents the financial history of individuals and companies and scores how well they are able to pay off future loans. The second is "social creditworthiness", reflecting the government's position that there needs to be greater "trust in society". To build such trust, the government has proposed to combat corruption, scammers, tax evasion, counterfeiting of goods, false advertising, pollution and other problematic issues, and to create mechanisms to hold individuals and companies accountable for such transgressions.
Conceptualization
Scholars have conceptualized four different types of systems. These four systems are not interconnected, but relatively independent from each other with their own jurisdictions, rules and logic.
Business trustworthiness system – Blacklist system for discredited business organizations. This system is regulated by the People's Bank of China's financial credit-rating system and commercial credit-rating system.
Government trustworthiness system – Evaluation system targeting civil servants and government institutions.
Social trustworthiness system – Blacklist system for discredited individuals. The social trustworthiness system most closely relates to China's mass surveillance systems.
Judiciary public trust system – Blacklist system for judgment defaulters. This system is regulated by the Supreme People's Court.
As of 2023, the government has only created a system that is primarily focused on assessing businesses rather than individuals, and consists of a database that collects data on corporate regulatory compliance from a number of government agencies. Kendra Schaefer, head of tech policy research at the Beijing-based consultancy firm Trivium China, described the system in a report for the US government's US-China Economic and Security Review Commission as being "roughly equivalent to the IRS, FBI, EPA, USDA, FDA, HHS, HUD, Department of Energy, Department of Education, and every courthouse, police station, and major utility company in the US sharing regulatory records across a single platform". The database can be openly accessed by any Chinese citizen on the newly created website called "Credit China". The database also includes miscellaneous information such as a list of approved robot building companies, hospitals that have committed insurance fraud, universities that are deemed legitimate, and a list of individuals who have defaulted on a court judgment.
Implementation
Social credit does not itself bring new restrictions; it focuses on increasing implementation of existing restrictions. Although the Chinese government announced in 2014 that it would implement a nationwide social credit system by 2020, as of 2023 no full-fledged system exists.
Implementation of social credit is primarily focused on marketplace behavior. As of 2023, about 1% of companies and 0.3% of individuals receive social credit-related penalties per year.
Financial credit reporting
National financial credit reporting for businesses and individuals is provided by the People's Bank of China, which does not assign any numerical scoring.
Red Lists
Red-listing practices seek to incentivize exemplary personal behavior or business compliance. Red-list practices vary significantly, and there are no top-level regulations or guidance addressing red lists in detail. The most common benefits to red-listed companies include reduced administrative burdens or simplified procedures. Part of the government's logic for red-listing companies is that it allows regulators to focus on companies with worse compliance records. Red-listed individuals may receive benefits like parking and public transit discounts or discounted tourist site tickets.
Blacklists
Blacklisting is based on specific instances of misconduct, not any numerical score. The Central Government operates a number of national and regional blacklists based on various types of violations. The court system is available for businesses, organizations and individuals to appeal their violations. As of 2019, it typically took 2–5 years to be removed from the blacklist, but early removal is also possible if the blacklisted person "fulfills legal obligations or remedies". By the end of 2021, over five million citizens had been affected by the blacklisting scheme in some form.
Three main types of blacklists exist: the judgment defaulter blacklist, sectoral blacklists, and no-fly/no-ride lists.
Before being added to a blacklist, a person or company must be informed of the decision and the legal basis for it. Blacklists may be publicized, although as of at least 2023 there is no uniform method for doing so. Some blacklist portals can be searched online while others are uploaded as PDFs or image files. Blacklisted parties are sometimes displayed in public settings, including on the Internet, in newspapers, or television.
Judgment defaulter blacklist
Before 2013, the process of obtaining court-ordered enforcement against judgment debtors was fragmented. In 2013, the Supreme People's Court issued the Several Provisions on Announcement of the Judgment Defaulter Blacklist which became the foundational regulation for the judgment defaulter blacklist. It stated that to be included on the list, a defaulter must be capable of complying with the court orders, but actively avoids doing so. Based on the idea that judgment defaulters should repay their debts before purchasing luxuries, once added to the list, judgment defaulters are restricted from:
(1) travelling via plane, high speed train, or first class non-high speed train,
(2) staying at star-rated hotels or golf courses,
(3) purchasing real estate,
(4) leasing "high-grade" office buildings, hotels, or apartments,
(5) purchasing "non-business essential" vehicles,
(6) holiday trips,
(7) sending children to high fee private schools,
(8) purchasing high-premium insurance products, and
(9) "other non-life and non-work essential consumption behavior."
In 2019, a Hebei court released an app showing a "map of deadbeat debtors" within 500 meters and encouraged users to report individuals who they believed could repay their debts. According to China Daily, a spokesman for the court stated that "it's a part of our measures to enforce our rulings and create a socially credible environment."
The Supreme People's Court's blacklist is one of its most important enforcement tools and its use has resulted in the recovery of tens of trillions of RMB for fines and delinquent repayments as of 2023. Chinese founders are increasingly placed on the national debtor blacklist by venture capitalists seeking a return of invested funds.
Sectoral blacklists
Many sectoral blacklists exist and are managed by a variety of regulatory and administrative bodies. Primarily, the penalties for being included on these blacklists are discretionary restrictions in administrative processes and interactions with the government. For example, regulators may exclude a company on a sectoral blacklist from participating in public procurement, revoke government funding or subsidies, cancel permits or revoke qualifications or certifications, or restrict the issuance of corporate bonds. Penalties cannot be developed ad hoc and must instead be based in national level law and regulation. Penalties from inclusion on sectoral blacklists may be imposed both on the violating company as well as legal representatives, senior company management, and the staff directly responsible for the violation that placed the company on the blacklist. Multiple government bodies may impose restrictions as a result of a person or company's inclusion on a sectoral blacklist. The public availability of sectoral blacklists also means that potential business partners may act accordingly and decline to deal with a blacklisted company.
No-fly and no-ride lists
Inclusion on the no-ride list or no-fly list results from specific instances of misconduct on trains or planes. Misconduct resulting in inclusion on the no-ride or no-fly lists can include violation of safety regulations, harassing other passengers or transportation workers, smoking, scalping tickets, or using counterfeit tickets. Inclusion on the list prohibits a person from buying new tickets for a designated time period, usually six to twelve months. This is the only penalty under the no-ride or no-fly lists, and inclusion on these blacklists has no impact in other areas of life or business.
By May 2018, several million flight and high-speed train trips had been denied to people who had been blacklisted either through misbehavior on planes or trains, or failing to follow a court-ordered judgment. As of June 2019, according to the National Development and Reform Commission of China, 26.82 million air tickets as well as 5.96 million high-speed rail tickets had been denied to people who were deemed "untrustworthy" (on a blacklist), and 4.37 million blacklisted people had chosen to fulfill their duties required by the law, such as repaying court-ordered judgments, before being allowed to travel on high-speed rail and planes. In July 2019, an additional 2.56 million flight tickets as well as 90 thousand high-speed train tickets were denied to those on the blacklist.
The no-fly list is administered by the Civil Aviation Administration of China. The no-ride list is administered by the National Railway Administration.
Procedures for removal from blacklists
After a blacklist decision becomes effective, the blacklisted party can file for credit repair. Through the credit repair process, a violator corrects the impact of the underlying violation and commits to abide by laws and regulations in the future. Companies undergoing credit repair typically must supply evidence that they have corrected their violations. Companies may also have to agree to a credit pledge in which they commit to upholding laws and regulations, commit to abiding by contracts, and agree to be subject to more severe penalties for any future violations. If authorities approve of the request for credit repair, the violator is removed from the blacklist and penalties are ended.
For companies
The Social Credit System is meant to provide an answer to the problem of lack of trust in the Chinese market. The corporate regulation function of the system appears to be more advanced than other parts of the system, and the "Corporate Social Credit System" has been the primary focus of government attention. Over 73.3% of enforcement actions since 2014 have targeted companies, the largest share of all enforcement, while around 1–2% of all companies were sanctioned by the system annually.
For businesses, the Social Credit System is meant to serve as a market regulation mechanism. The goal is to establish a self-enforcing regulatory regime fueled by big data in which businesses exercise "self-restraint" (企业自我约束). The basic idea is that with a functional credit system in place, companies will comply with government policies and regulations to avoid having their scores lowered by disgruntled employees, customers or clients. For example, the central government can use social credit data to offer risk-assessed grants and loans to small and medium enterprises (SMEs), encouraging banks to offer greater loan access for SMEs.
As currently envisioned, companies with good credit scores will enjoy benefits such as good credit conditions, lower tax rates, less custom checks, and more investment opportunities. Companies with bad credit scores will potentially face unfavorable conditions for new loans, higher tax rates, investment restrictions and lower chances to participate in publicly funded projects. Government plans also envision real-time monitoring of a business's activities. In that case, infractions on the part of a business could result in a lower score almost instantly. However, whether this will actually happen depends on the future implementation of the system as well as on the availability of technology needed for this kind of monitoring.
To improve their credit scores, companies need to conform to government rules, such as following COVID-19 containment guidelines.
For government institutions
Government institutions receive the second highest number of enforcement actions, accounting for 13.3% of penalties, while less than 0.1% of all government entities were sanctioned by the system annually. The social credit system targets government agencies, assesses local governments' performance and focuses on financial problems such as local governments' debts and contract defaults. The Central Government hopes the system can improve "government self-discipline." Local governments are also encouraged and rewarded by the social credit system if they successfully implement and follow the orders from the central government.
For individuals
As of 2020, individuals receive 10.3% of all enforcement actions, affecting around 0.15% to 0.3% of the national population annually. The social credit system's treatment of individuals focuses on the financial trustworthiness of individual citizens, primarily debt repayment, though major violations of the law have also been sanctioned. One major focus is the debt-dodger (laolai), a phrase which refers to those who can pay their debts but choose not to. A laolai blacklist is maintained by the Supreme People's Court.
In addition to dishonest and fraudulent financial behavior, some cities have proposed officially listing several behaviors as negative factors in credit ratings, including playing loud music or eating on rapid transit, violating traffic rules such as jaywalking and running red lights, making reservations at restaurants or hotels but not showing up, failing to correctly sort personal waste, and fraudulently using other people's public transportation ID cards. Behaviors proposed as positive factors include donating blood, donating to charity, volunteering for community services, praising government efforts on social media, and so on. However, because the system mainly relies on digitized administrative documents, early efforts to integrate such behavioral data into the system were mostly discarded.
There are various punishments for debtors. Delinquent debtors are placed on blacklists maintained by Chinese courts and shared with the Ministry of Public Security, which controls the country's entry-exit checkpoints. Individuals with outstanding debts can be subject to exit bans and prevented from leaving the country as a way of encouraging or forcing the collection of debt. According to the Financial Times, as of 2017, some 6.7 million debtors had already been placed on blacklists and prevented from exiting the country as a result of the new policy. Future rewards of having a high score might include easier access to loans and jobs and priority during bureaucratic paperwork. A person with poor social credit may be denied employment in places such as banks, state-owned enterprises, or as a business executive. The Chinese government encourages employers to check whether candidates' names appear on the blacklist when hiring.
In certain test programs, public humiliation is used as a mechanism to deter sanctioned individuals. Mugshots of blacklisted individuals are sometimes displayed on large LED screens on buildings or shown before the movie in movie theaters. Certain personal information of the blacklisted people is deliberately made accessible to the public and is displayed online as well as at various public venues such as movie theaters and buses, while some cities have also banned children of "untrustworthy" residents from attending private schools and even universities. People with high credit ratings may receive rewards such as less waiting time at hospitals and government agencies, discounts at hotels, greater likelihood of receiving employment offers, and so on.
According to Sarah Cook of Freedom House in 2019, city-level pilot projects for the social credit system have included rewarding individuals for aiding authorities in enforcing restrictions of religious practices, including coercing practitioners of Falun Gong to renounce their beliefs and reporting on Uighurs who publicly pray, fast during Ramadan or perform other Islamic practices. In an October 2022 study, professors from Princeton University, Freie Universität Berlin and Pennsylvania State University also found that "repressing protesters, petitioners, journalists, and political activists via the SCS is common among Chinese localities."
For social organizations
As of 2020, non-government organizations receive 3.3% of all enforcement actions. Although these enforcement actions remain small in numerical terms, their inclusion has an important implication, as it affects foreign NGOs operating within China.
Examples of city trial policies
Most initiatives under the social credit system do not involve actual numerical scores; instead, documentation of specific offenses is recorded in one's credit profile, the exception being the trial programs launched by some cities and communities. The actual policy varies greatly from city to city, and participation is voluntary. Local credit profiles are not shared between cities.
Since the early 2010s, several cities in China launched pilot programs to test and develop a potential social credit system. Some of these programs assigned scores to individuals, but many of the scoring programs faced criticism. The main criticism of these pilot programs came from Chinese state media, which denounced these practices as having unfairly restricted legal rights or tracked personal behaviors that were completely unrelated to the concept of "credit." In 2019, the Chinese government reinforced this criticism by issuing clear guidelines to prevent misuse, explicitly stating that "scores" cannot be used to punish citizens. As a result, many pilot programs were discontinued, while some pilot cities revised their programs. One example was Wenzhou, which abandoned its initial program and, in 2019, revised it to be an "encouragement-only scheme". Another was Rongcheng, which changed its pilot program in 2021 so that it was strictly voluntary and could only issue rewards. According to a 2022 article from the Mercator Institute for China Studies (MERICS), the only social credit system programs that continue to have "personal scores" of individuals are strictly for issuing positive incentives. Under some policies, higher scores can earn a participant cheaper public transportation, shorter security lines in subways, or tax reductions.
Public opinions
Writing in 2023, academic Vincent Brussee observes that European misconceptions of social credit in China have become a source of amusement among Chinese Internet users.
Approvals
A series of studies have concluded that social credit is well-received domestically. In a 2018 study, 80% of respondents either strongly approved or approved of China's Social Credit System, while one percent disapproved. The study was conducted by Professor Genia Kostka of Free University of Berlin and was based on a cross-regional Internet survey of 2,209 Chinese citizens of various backgrounds. The study found "a surprisingly high degree of approval of SCSs across respondent groups" and that "more socially advantaged citizens (wealthier, better-educated and urban residents) show the strongest approval of SCSs, along with older people". Kostka explained in the paper that "while one might expect such knowledgeable citizens to be most concerned about the privacy implications of SCS, they instead appear to embrace SCSs because they interpret it through frames of benefit-generation and promoting honest dealings in society and the economy instead of privacy-violation."
In August 2019, assistant researcher Zhengjie Fan of the China Institute of International Studies published an article claiming that the current punishment policies, such as the blacklist, do not overstep the limits of law. He argued that since 2014, China's Social Credit System and the credit system of the market had grown to complement each other, forming a mutually beneficial interaction. According to Doing Business 2019 by the World Bank Group, which ranked "190 countries on the ease of doing business within their borders", China rose from 78th place in the previous year to 46th place, and Fan claimed that the Social Credit System had played an important role. In 2020, it further improved to 31st place in the now-defunct Ease of Doing Business index.
In an October 2022 study, professors from Princeton University, Freie Universität Berlin (Genia Kostka), and Pennsylvania State University discovered through a field survey of college students in China that "revealing the repressive potential of the SCS significantly reduces support for the system, whereas emphasizing its function in maintaining social order does not increase support." Additionally, the professors found that a nationwide survey of Chinese netizens showed higher support for the SCS among Chinese citizens who learned about it through state media.
Criticism
Chinese academics have produced a substantial body of work analyzing social credit in China. As of 2023, the large majority of Chinese scholarship accepts the legitimacy of social credit as a whole, although there are also criticisms of different approaches or implementation efforts. In several instances, academics' criticisms of social credit have been adopted and re-issued by state media outlets, including Xinhua and People's Daily.
In October 2019, Professor Kui Shen of the Law School of Peking University published a paper in China Legal Science, suggesting that some of the then-current credit policies violated the "rule of law" or "Rechtsstaat": that they infringed the legal rights of residents and organizations, possibly violated the principle of respecting and protecting human rights, especially the right to reputation, the right to privacy as well as personal dignity and overstepped the boundary of reasonable punishment. In May 2020, Chinese investigative media group Caixin reported that business social credit systems in China were insufficient in deterring problematic business activities and that the social credit system was easy to game in favour of businesses.
China's Social Credit System has been implicated in a number of controversies. Western critics view social credit as an intrusive mechanism that infringes on privacy. In October 2018, U.S. Vice President Mike Pence criticized the social credit system, describing it as "an Orwellian system premised on controlling virtually every facet of human life." In January 2019, George Soros criticized the social credit system, saying it would give CCP leader Xi Jinping "total control over the people of China".
From 2017 to 2018, researchers argued that the credit system would be part of the government's plan to automate their authoritarian rule over the Chinese population. In June 2019, Samantha Hoffman of the Australian Strategic Policy Institute argued that "there are no genuine protections for the people and entities subject to the system... In China there is no such thing as the rule of law. Regulations that can be largely apolitical on the surface can be political when the Chinese Communist Party (CCP) decides to use them for political purposes." In August 2018, Professor Genia Kostka of Free University of Berlin stated in her published paper that "if successful in [their] effort, the Communist Party will possess a powerful means of quelling dissent, one that is comparatively low-cost and which does not require the overt (and unpopular) use of coercion by the state." In December 2017, Human Rights Watch described the proposed social credit system as "chilling" and filled with arbitrary abuses.
Misconceptions
There has been a degree of misreporting and misconception in English-language mass media due to translation errors, sensationalism, conflicting information and a lack of comprehensive analysis. Examples of such popular misconceptions include the widespread assumptions that Chinese citizens are rewarded and punished based on a numerical score (a "social credit score") assigned by the system, that its decisions are taken by AI, and that it constantly monitors Chinese citizens.
In July 2019, Wired reported that there existed misconceptions regarding the Social Credit System of China. It argued that "Western concerns about what could happen with China's Social Credit System have in some ways outstripped discussions about what's already really occurring...The exaggerated portrayals may also help to downplay surveillance efforts in other parts of the world." The rise of misconceptions, according to Jeremy Daum of Yale University, is attributable to translation errors, differences in word usage and so on.
In May 2019, Logic published an article by Shazeda Ahmed, who argued that "[f]oreign media has distorted the social credit system into a technological dystopia far removed from what is actually happening in China." She pointed out that common misconceptions included the beliefs that surveillance data is connected with a centralized database; that human activities online and offline are assigned with actual values that can be deducted and that every citizen in China has a numerical score that is calculated by computer algorithm.
In February and March 2019, MIT Technology Review stated that, "[i]n the West, the system is highly controversial and often portrayed as an AI-powered surveillance regime that violates human rights." However, the magazine reported that "many scholars argue that social credit scores won't have the wide-scale controlling effect presumed...the system acts more as a tool of propaganda than a tool of enforcement" and that "[o]thers point out that it is simply an extension of Chinese culture's long tradition of promoting good moral behavior and that Chinese citizens have a completely different perspective on privacy and freedom."
In November 2018, Foreign Policy listed some factors which contributed to the misconception of China's credit system. The potential factors included the scale and variety of the social credit system program and the difficulties of comprehensive reporting that comes with it.
In May 2018, Rogier Creemers of Leiden University stated that despite the Chinese government's intentions of utilizing big data and artificial intelligence, the regulatory method of SCS remained relatively crude. His research concluded that it is "... perhaps more accurate to conceive of the SCS as an ecosystem of initiatives broadly sharing a similar underlying logic, than a fully unified and integrated machine for social control."
In November 2018, Bing Song, director of the Berggruen Institute China Center, posted an opinion piece in The Washington Post, arguing that Western media and institutions have misreported the details and mechanics of the Social Credit System. The article suggested that media have confused private score reporting mechanisms with the national system. He also noted that penalties are executed based on Supreme Court laws and regulations, while private scoring companies and government agencies are not capable of enacting penalties. He argued that widespread media reports often ignored the fact that local governments can be targeted in the blacklists, and that the scoring systems and their effects were exaggerated by many media stories. He also argued that the cultural expectations of the government and its role in China are different from those in other countries.
In March 2021, The Diplomat remarked that the assumption by Western observers that the Social Credit System is an Orwellian surveillance system exaggerates the reality and purpose of the system in real life. Despite the claim, the social credit system is "an extension of bond issuance risk assessment credit ratings introduced in China in the 1980s" and primarily serves the function of a financial risk assessment tool.
In October 2021, the Washington-based think tank The Jamestown Foundation explored the function of the Social Credit System and concluded that there were widespread misinterpretations regarding the function and mechanism of the SCS. The think tank found that misinformed perceptions of an algorithm-driven citizen-rating system originated from early analyses that confused the regulation-enforcement mechanisms and the morality propaganda campaigns of the SCS initiative. Furthermore, many failed to distinguish between the government regulations and the private credit rating systems. Corporations hyperbolically promoted the scores' predictive abilities, which may have resonated with Western anxiety and concerns surrounding corporate data collection and government access to personal information.
In 2022, academics Diana Fu and Rui Hou noted the persistence of Western misconceptions in their article Rating Citizens with China's Social Credit System, stating, "Western media articles initially compared the system to an episode of the British sci-fi series Black Mirror in which individuals' every day behavior, down to the minutiae, were tracked and rated by other people and a "big brother" government. Since then, scholars and journalists have sought to dispel this dystopian depiction of the social credit system, but the image continued to live on, particularly after the Trump administration started to use it as part of its anti-China policy in 2017 and 2018."
In 2023, academic Filip Šebok wrote that perhaps the most common myth associated with social credit is that there is a single numerical score that records individuals' behavior. No such score exists.
Academic Vincent Brussee writes that as of 2023, "hundreds of headlines have discussed the system, but few have systematically broken down what the [social credit system] is and how it works. Some studies refer to the 'breathtaking' ambition of the system and the 'massive quantities of behavioural data' going into the system without substantiating these claims in any way. Others rely on assumptions of what the system will look like, erroneously speculating that everyone will receive a social credit score, that this score will be publicly available, and that a bad rating will have far-reaching consequences. It is like a game of Chinese Whispers gone wrong."
Misconceptions of Zhima Credit and 2015 pilot programs
Alibaba's Zhima Credit, also rendered in English as Sesame Credit, is a private market credit initiative which ultimately became a loyalty program. It has frequently been mistaken for social credit.
In 2015, the PBOC designated eight private companies to pilot personal credit reporting (zhengxin) mechanisms. Because the pilot programs were zhengxin mechanisms, they had little connection to the idea of social credit more broadly. Zhima Credit was one of the pilot zhengxin mechanisms. It was an opt-in scoring initiative proposed to assess users' credit worthiness even if those users lacked formal credit history. It did not include standard industry metrics like income or debts; instead, it assessed factors like user spending ability and whether users showed up for travel bookings.
Following the release of Zhima Credit, there was significant media speculation that it might turn into a national social credit system by 2020. This did not occur. Zhima Credit and the other pilot initiatives were never linked to the broader financial system. Zhima Credit did not prove to be an effective credit evaluation mechanism because the data showed no statistically significant link between its metrics and a user's ability to repay loans.
In one interview, Alibaba's technology director suggested that people who played too many video games might be considered less trustworthy. Various news outlets around the world incorrectly suggested that people could lose social credit for playing too many video games. No video game playing metric was ever implemented.
Ultimately Zhima Credit became a loyalty program that rewarded users for using Alibaba services and shopping platforms. PBOC decided not to extend the credit licenses of the eight private pilot programs from 2015.
In popular culture
In 2021, the social credit system was popularized as an Internet meme on various social media platforms. VICE reported that the memes' popularity reflects the "widespread discontent toward the Chinese government over its restrictions of people's freedoms", however, the article noted the trend continued the existing misapprehension and misinformation regarding the SCS mechanism, such as the idea that people in China are rewarded or punished based on a numerical "social credit score". The joke is often posed as a positive or negative action towards the Chinese government which affects the poster's "social credit score" positively or negatively.
According to a 2022 article in The Spectator, the Western narrative of the "social credit score" at the time received widespread mockery and satirical comments from the Chinese Internet community, due to the Western perception being drastically different from the reality in China.
Comparison to other countries
Russia
Around 80% of Russians will reportedly get a digital profile that will document personal successes and failures in less than a decade under the government's comprehensive plans to digitize the economy. Observers have compared this to China's social credit system, although Deputy Prime Minister Maxim Akimov has denied that, saying a Chinese-style social credit system is a "threat".
Spain
In Spain, people who cannot repay their home mortgages may declare bankruptcy. Bankruptcy and foreclosure discharge the obligation to pay mortgage interest, but not mortgage principal. If mortgage principal is not paid, the debtor is placed on a list of untrustworthy people.
United Kingdom
In 2018, the New Economics Foundation compared the Chinese citizen score to other rating systems in the United Kingdom. These included using data from a citizen's credit score, phone usage, rent payment, and so on, to filter job applications, determine access to social services, determine advertisements served, etc.
United States
Some media outlets have compared the social credit system to credit scoring systems in the United States. According to Mike Elgan of Fast Company, "an increasing number of societal "privileges" related to transportation, accommodations, communications and the rates US citizens pay for services (like insurance) are either controlled by technology companies or affected by how we use technology services. And Silicon Valley's rules for being allowed to use their services are getting stricter."
Venezuela
In 2017, Venezuela started developing a smart-card ID known as the "carnet de la patria" or "fatherland card", with the help of the Chinese telecom company ZTE. The system included a database which stores details like birthdays, family information, employment and income, property owned, medical history, state benefits received, presence on social media, membership in a political party and whether a person voted. Many in Venezuela have expressed concern that the card is an attempt to tighten social control through monitoring all aspects of daily life.
References
External links
Data
Reputation management
Credit scoring
Nudge theory
Social status
Social systems
Social influence
Social impact
Politics of China
Social information processing
Technology in society
Sociology of technology
Government by algorithm
Internet memes introduced in 2021

Fundamentals of Biochemistry
https://en.wikipedia.org/wiki/Fundamentals%20of%20Biochemistry

Fundamentals of Biochemistry: Life at the Molecular Level is a biochemistry textbook written by Donald Voet, Judith G. Voet and Charlotte W. Pratt. Published by John Wiley & Sons, it is a common undergraduate biochemistry textbook.
As of 2016, the book has been published in 5 editions.
References
Biochemistry textbooks

Maintenance of traffic
https://en.wikipedia.org/wiki/Maintenance%20of%20traffic

Maintenance of traffic (MOT), also known as temporary traffic control or temporary traffic management, is the process of establishing a work zone and providing related transportation management and temporary traffic control on street and highway rights-of-way. This process does not apply to law enforcement officers.
The establishment of a work zone and management of temporary traffic control is conducted by traffic controllers, also known as flaggers, traffic observers, or spotters. Standards of operations are established by the department of transportation of each state, and may vary from state to state.
Temporary traffic control is critical to maintaining safety and minimizing disruption in temporary work zones and during events and other short-term traffic disruptions.
In the United States, traffic control devices are set up according to the Manual on Uniform Traffic Control Devices, sometimes along with state supplements.
Maintenance of traffic training in the United States is provided by the American Traffic Safety Services Association.
References
External links
Manual on Uniform Traffic Control Devices (Part VI) - Occupational Safety and Health Administration (US DOL)
Road traffic management
Transportation engineering
Road safety
Transportation in the United States
United States Department of Transportation
Federal Highway Administration
Road infrastructure
Road transport

LIPID MAPS
https://en.wikipedia.org/wiki/LIPID%20MAPS

LIPID MAPS (Lipid Metabolites and Pathways Strategy) is a web portal designed to be a gateway to lipidomics resources. The resource has spearheaded a classification of biological lipids, dividing them into eight general categories, and provides standardised methodologies for mass spectrometry analysis of lipids.
LIPID MAPS has been cited as evidence of a growing appreciation of the study of lipid metabolism and the rapid development and standardisation of the lipidomics field.
Key LIPID MAPS resources include:
LIPID MAPS Structure Database (LMSD) - a database of structures and annotations of biologically relevant lipids, containing over 49000 different lipids. The paper describing this resource has, according to PubMed, been cited more than 200 times.
LIPID MAPS In-Silico Structure Database (LMISSD) - a database of computationally predicted lipids generated by expansion of headgroups for commonly occurring lipid classes
LIPID MAPS Gene/Proteome Database (LMPD) - a database of genes and gene products which are involved in lipid metabolism
Tools available from LIPID MAPS enable scientists to identify likely lipids in their samples from mass spectrometry data, a common method to analyse lipids in biological specimens. In particular, LipidFinder enables analysis of MS data. Tutorials and educational material on lipids are also available at the site.
In January 2020, LIPID MAPS became an ELIXIR service, and in 2024 a core data resource. In addition, it joined the Global Biodata Coalition as a core biodata resource.
History
LIPID MAPS was founded in 2003 with NIH funding. LIPID MAPS was previously funded by a multi-institutional grant from Wellcome, and is now funded under an MRC Partnership award, held jointly by University of Cardiff led by Prof Valerie O'Donnell, the Babraham Institute, UCSD and Swansea University, and The University of Edinburgh. Wakelam's obituary describes LIPID MAPS as unifying the field of lipidomics.
LIPID MAPS is sponsored by Cayman Chemical and Avanti Polar Lipids.
References
Biological databases
Lipids

CUT&Tag sequencing
https://en.wikipedia.org/wiki/CUT%26Tag%20sequencing

CUT&Tag-sequencing, also known as cleavage under targets and tagmentation, is a method used to analyze protein interactions with DNA. CUT&Tag-sequencing combines antibody-targeted controlled cleavage by a protein A-Tn5 fusion with massively parallel DNA sequencing to identify the binding sites of DNA-associated proteins. It can be used to map global DNA binding sites precisely for any protein of interest. Currently, ChIP-Seq is the most common technique utilized to study protein–DNA interactions; however, it suffers from a number of practical and economic limitations that CUT&RUN and CUT&Tag sequencing do not. CUT&Tag sequencing is an improvement over CUT&RUN because it does not require cells to be lysed or chromatin to be fractionated. CUT&RUN is not suitable for single-cell platforms, so CUT&Tag is advantageous for these.
Uses
CUT&Tag-sequencing can be used to examine gene regulation or to analyze transcription factor and other chromatin-associated protein binding. Protein-DNA interactions regulate gene expression and are responsible for many biological processes and disease states. This epigenetic information is complementary to genotype and expression analysis. CUT&Tag is an alternative to the current standard of ChIP-seq. ChIP-Seq suffers from limitations due to the cross-linking step in ChIP-Seq protocols, which can promote epitope masking and generate false-positive binding sites. In addition, ChIP-seq suffers from suboptimal signal-to-noise ratios and poor resolution. CUT&RUN-sequencing and CUT&Tag have the advantage of being simpler techniques with lower costs due to the high signal-to-noise ratio, requiring less depth in sequencing.
Specific DNA sites in direct physical interaction with proteins such as transcription factors can be isolated by Protein-A (pA) conjugated Tn5 bound to a protein of interest. Tn5 mediated cleavage produces a library of target DNA sites bound to a protein of interest in situ. Sequencing of prepared DNA libraries and comparison to whole-genome sequence databases allows researchers to analyze the interactions between target proteins and DNA, as well as differences in epigenetic chromatin modifications. Therefore, the CUT&Tag method may be applied to proteins and modifications, including transcription factors, polymerases, structural proteins, protein modifications, and DNA modifications.
Sequencing
Unlike ChIP-Seq there is no size selection required before sequencing. A single sequencing run can scan for genome-wide associations with high resolution, due to the low background achieved by performing the reaction in situ with the CUT&RUN-sequencing methodology. ChIP-Seq, by contrast, requires ten times the sequencing depth because of the intrinsically high background associated with the method. The data is then collected and analyzed using software that aligns sample sequences to a known genomic sequence to identify the CUT&Tag DNA fragments.
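As a toy sketch of that alignment step (real pipelines use dedicated aligners such as Bowtie2 or BWA against a full genome index; the reference and read sequences below are made-up examples), exact-match placement of sequenced fragments against a reference can be illustrated as:

```python
def map_reads(reference: str, reads: list[str]) -> dict[str, int]:
    """Return the 0-based position of each read's first exact match
    in the reference, or -1 if it does not align anywhere."""
    return {read: reference.find(read) for read in reads}

# Made-up reference genome fragment and reads, for illustration only
reference = "ACGTTAGCCGATCGTTAGC"
reads = ["TAGCC", "CGTTAGC", "GGGG"]
print(map_reads(reference, reads))
```

Real aligners additionally handle mismatches, indels, base-quality scores and multi-mapping reads, which is why exact substring search is only a conceptual stand-in here.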
Protocols
There are detailed CUT&Tag workflows available in an open-access methods repository.
CUT&Tag for efficient epigenomic profiling of small samples and single cells
Sensitivity
CUT&Run-Sequencing or CUT&Tag-Sequencing provide low levels of background signal because of in situ profiling, which retains in vivo 3D conformations of transcription factor-DNA interactions, so antibodies access only exposed surfaces. Sensitivity of sequencing depends on the depth of the sequencing run (i.e. the number of mapped sequence tags), the size of the genome and the distribution of the target factor. The sequencing depth is directly correlated with cost and negatively correlated with background. Therefore, low-background CUT&Tag sequencing is inherently more cost-effective than high-background ChIP-Sequencing.
Limitations
The primary limitation of CUT&Tag-seq is the likelihood of over-digestion of DNA due to inappropriate timing of the Magnesium-dependent Tn5 reaction. A similar limitation exists for contemporary ChIP-Seq protocols where enzymatic or sonicated DNA shearing must be optimized. As with ChIP-Seq, a good quality antibody targeting the protein of interest is required. As with other techniques using Tn5, the library preparation has a strong GC bias and has poor sensitivity in low GC regions or genomes with high variance in GC content.
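GC content, the quantity behind the bias noted above, is simply the fraction of G and C bases in a sequence; a minimal sketch (the example sequence is made up):

```python
def gc_content(seq: str) -> float:
    """Fraction of bases in seq that are G or C (case-insensitive)."""
    seq = seq.upper()
    if not seq:
        return 0.0
    return (seq.count("G") + seq.count("C")) / len(seq)

print(gc_content("ATGGCCTA"))  # prints 0.5
```

Tn5-based libraries tend to under-represent regions where this fraction is low, which is the sensitivity problem described above.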
Similar methods
Sono-Seq: Identical to ChIP-Seq but without the immunoprecipitation step.
HITS-CLIP: Also called CLIP-Seq, employed to detect interactions with RNA rather than DNA.
PAR-CLIP: A method for identifying the binding sites of cellular RNA-binding proteins.
RIP-Chip: Similar to ChIP-Seq, but does not employ cross linking methods and utilizes microarray analysis instead of sequencing.
SELEX: Employed to determine consensus binding sequences.
Competition-ChIP: Measures relative replacement dynamics on DNA.
ChiRP-Seq: Measures RNA-bound DNA and proteins.
ChIP-exo: Employs exonuclease treatment to achieve up to single base-pair resolution
ChIP-nexus: Potential improvement on ChIP-exo, capable of achieving up to single base-pair resolution.
DRIP-seq: Employs S9.6 antibody to precipitate three-stranded DNA:RNA hybrids called R-loops.
TCP-seq: Principally similar method to measure mRNA translation dynamics.
DamID: Uses enrichment of methylated DNA sequences to detect protein-DNA interaction without antibodies.
CUT&RUN: Uses protein A-Mnase
See also
ChIP-on-chip
ChIP-Seq
CUT&RUN
ChIL-Seq
References
DNA sequencing

Borate carbonate
https://en.wikipedia.org/wiki/Borate%20carbonate

The borate carbonates are mixed anion compounds containing both borate and carbonate ions. Compared to mixed anion compounds containing halides, these are quite rare. They are hard to make, requiring higher temperatures, which are likely to decompose carbonate to carbon dioxide. The reason for the difficulty of formation is that when entering a crystal lattice, the anions have to be correctly located, and correctly oriented. They are also known as carbonatoborates or borocarbonates. Although these compounds have been termed carboborate, that word also refers to the C=B=C5− anion, or CB11H12− anion. This last anion should be called 1-carba-closo-dodecaborate or monocarba-closo-dodecaborate.
Some borate carbonates have additional different anions and can be borate carbonate halides or borate carbonate nitrites.
List
References
Borates
Carbonates
Mixed anion compounds

Darcy number
https://en.wikipedia.org/wiki/Darcy%20number

In fluid dynamics through porous media, the Darcy number (Da) represents the relative effect of the permeability of the medium versus its cross-sectional area—commonly the diameter squared. The number is named after Henry Darcy and is found from nondimensionalizing the differential form of Darcy's law. This number should not be confused with the Darcy friction factor, which applies to pressure drop in a pipe. It is defined as

Da = K / d²
where
K is the permeability of the medium (SI unit: m²);
d is the characteristic length, e.g. the diameter of the particle (SI unit: m).
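As a quick numeric illustration of the definition Da = K/d² (the permeability and particle diameter below are assumed, illustrative values, not data for any particular medium):

```python
def darcy_number(permeability_m2: float, length_m: float) -> float:
    """Darcy number Da = K / d^2 (dimensionless)."""
    return permeability_m2 / length_m ** 2

# Illustrative values only: K = 1e-12 m^2, d = 1e-3 m
da = darcy_number(1e-12, 1e-3)
print(da)  # prints 1e-06
```

Because both K and d² carry units of m², the ratio is dimensionless, as required of a similarity parameter.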
Alternative forms of this number do exist depending on the approach by which Darcy's Law is made dimensionless and the geometry of the system. The Darcy number is commonly used in heat transfer through porous media.
See also
Capillary pressure
Dimensionless quantity
References
Dimensionless numbers of fluid mechanics

Dennis Wojtkiewicz
https://en.wikipedia.org/wiki/Dennis%20Wojtkiewicz

Dennis Wojtkiewicz (born 1956) is an American Hyperrealist painter and draughtsman.
Wojtkiewicz graduated from Southern Illinois University and is an artist associated with the Hyperrealist movement. He is best known for his large scale renderings of sliced fruit and flowers. Wojtkiewicz relies on traditional oil paint and pastels for his drawings.
Dennis Wojtkiewicz's work is exhibited in leading fine art galleries across the world. His work is owned by many leading private, corporate and public collections including the Evanston Museum of Art, Fidelity Investments in Boston and the University of South Dakota.
Selected solo exhibitions
2000 M.A. Doran Gallery, Tulsa, OK
2006 J. Cacciola Gallery, New York, NY
2009 Peterson-Cody Gallery, Santa Fe, NM
2012 Art Revolution Taipei International Art Fair, Taipei World Trade Center, Taipei, Taiwan
2012 Peterson-Cody Gallery, Santa Fe, NM
2014 Sugarman Peterson Gallery, Santa Fe, NM
Notes
External links
Official Website of Dennis Wojtkiewicz
1956 births
Living people
20th-century American painters
20th-century American male artists
American male painters
21st-century American painters
21st-century American male artists
Painters from Chicago
Hyperreality
Southern Illinois University alumni | Dennis Wojtkiewicz | [
"Technology"
] | 251 | [
"Hyperreality",
"Science and technology studies"
] |
39,836,130 | https://en.wikipedia.org/wiki/Strelka%20Institute | Strelka Institute for Media, Architecture and Design is a non-profit international educational project, founded in 2009 and located in Moscow. Strelka incorporates an education programme on urbanism and urban development aimed at professionals with a higher education, a public summer programme, the Strelka Press publishing house, and KB Strelka, the consulting arm of the Institute. Strelka has been listed among the top-100 best architecture schools in 2014, according to Domus magazine.
The Institute has been directed since 2013 by Varvara Melnikova. After the start of the Russo-Ukrainian war in 2022, Strelka suspended its operations.
Education programme
The Institute aims to educate the next generation of architects, designers and media professionals, enabling them to shape the 21st century world. Each year, Strelka welcomes young professionals and gives them the opportunity to work together with experts in the fields of urbanism, architecture and communications from all over the world. During this nine-month post-graduate programme, the researchers explore the issues related to Russia's urban development through a multidisciplinary method conducted in English. Experimental methods, a holistic approach to architecture, media and design, and an emphasis on research are the main characteristics of the programme. The prominent architect and architecture theorist, Rem Koolhaas (AMO/OMA), contributed to the designing of the Institute's education programme.
Since 2016, Benjamin H. Bratton, design theorist and author of The Stack: On Software and Sovereignty, has been programme director. The programme theme, The New Normal, focuses on long-term urban futures in relation to technological, geographic and ecological complexities.
Notable faculty at the Strelka Institute have included Keller Easterling, Benjamin H. Bratton, Winy Maas, Joseph Grima, Reinier De Graaf, Carlo Ratti, and Rem Koolhaas.
KB Strelka
KB Strelka provides strategic consulting services in the fields of architecture and urban planning, as well as cultural and spatial programming. The company was founded in 2013 by the executive board of the Strelka Institute. KB’s method is based on the implementation of transparent competition procedures, involving international experts, forecasting of expenses, and risk analysis at the early stages of project realisation. In 2013, KB organised several key international competitions for Russia: Zaryadye Park, the National Centre for Contemporary Arts, the Museum and Educational Centre of the Polytechnic Museum and Lomonosov Moscow State University, and the International Financial Centre in Rublyovo-Arkhangelskoye. Despite transparency efforts, KB Strelka's urbanisation projects in different cities in Russia have received criticism for the costs and the methods employed, such as the forcible clearing of small street kiosks, corruption, or incompetent design.
Summer at Strelka
From the end of May until mid-September, Strelka’s courtyard hosts a public programme that is open to all. Its programme includes: lectures by prominent architects, urbanists, designers, social activists and scholars; discussions on topical urban issues; workshops; film screenings; theatre performances; concerts and fairs.
Strelka Press
Strelka Press publishes books and essays on modern issues of architecture, design and urban development in both English and Russian. The publishing house releases both printed and digital books. Strelka Press is based in London and Moscow. Strelka Press has published books by Donald Norman, Keller Easterling, and others.
Other information
Strelka curated the Russian pavilion for the XIV Venice Architectural Biennale.
Strelka took part in the renovation of Moscow’s Gorky Park, designed the concept for Big Moscow development project, and framed the programme for the Moscow Urban Forum 2012-2013.
In 2013, Strelka launched What Moscow Wants, an on-line platform to crowdsource ideas for improving the development of Moscow.
References
External links
Official website
Architecture schools in Russia
Art schools in Russia
Education in Moscow
Urban planning
Educational institutions established in 2009
2009 establishments in Russia | Strelka Institute | [
"Engineering"
] | 826 | [
"Urban planning",
"Architecture"
] |
47,188,687 | https://en.wikipedia.org/wiki/PTI-2 | PTI-2 (SGT-49) is an indole-based synthetic cannabinoid. It is one of few synthetic cannabinoids containing a thiazole group and is closely related to PTI-1. These compounds may be viewed as simplified analogues of indole-3-heterocycle compounds originally developed by Organon and subsequently further researched by Merck.
See also
JWH-018
LBP-1 (drug)
PTI-1
PTI-3
References
Indoles
Thiazoles
Cannabinoids
Designer drugs
Isopropylamino compounds
Ethers | PTI-2 | [
"Chemistry"
] | 126 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
47,189,627 | https://en.wikipedia.org/wiki/Sf%20caspase-1 | The protein Sf caspase-1 is the insect ortholog of the human effector caspases CASP3 (CPP32) and CASP7 (MCH3) in the species Spodoptera frugiperda (Fall armyworm). It was identified as the target of the baculoviral caspase inhibitor protein P35, which it cleaves and by which it is inhibited. Like other caspases, Sf caspase-1 is an aspartate-specific cysteine protease that is produced as an inactive proenzyme and becomes activated by autocatalytic cleavage. The Sf caspase-1 proenzyme is cleaved after the amino acid residues Asp-28 and Asp-195, resulting in a smaller 12 kDa fragment and a larger 19 kDa fragment. Just like with human caspases CASP3 or CASP7, the two cleavage fragments form heterodimers, which again form biologically active dimers-of-heterodimers consisting of two smaller and two larger fragments. Some experiments also showed cleavage of Sf caspase-1 at the residue Asp-184, resulting in an 18 kDa instead of 19 kDa fragment, however this result is likely an in vitro artefact. The insect immunophilin FKBP46 is a substrate of Sf caspase-1, which cleaves full length FKBP46 (~46 kDa) resulting in a ~25 kDa fragment.
References
Insect proteins
Apoptosis
EC 3.4.22 | Sf caspase-1 | [
"Chemistry"
] | 336 | [
"Apoptosis",
"Signal transduction"
] |
47,190,749 | https://en.wikipedia.org/wiki/CGAS%E2%80%93STING%20cytosolic%20DNA%20sensing%20pathway | The cGAS–STING pathway is a component of the innate immune system that functions to detect the presence of cytosolic DNA and, in response, trigger expression of inflammatory genes that can lead to senescence or to the activation of defense mechanisms. DNA is normally found in the nucleus of the cell. Localization of DNA to the cytosol is associated with tumorigenesis, viral infection, and invasion by some intracellular bacteria. The cGAS–STING pathway acts to detect cytosolic DNA and induce an immune response.
Upon binding DNA, the protein cyclic GMP-AMP Synthase (cGAS) triggers reaction of GTP and ATP to form cyclic GMP-AMP (cGAMP). cGAMP binds to Stimulator of Interferon Genes (STING) which triggers phosphorylation of IRF3 via TBK1. IRF3 can then go to the nucleus to trigger transcription of inflammatory genes. This pathway plays a critical role in mediating immune defense against double-stranded DNA viruses.
The innate immune system relies on germline encoded pattern recognition receptors (PRRs) to recognize distinct pathogen-associated molecular patterns (PAMPs). Upon recognition of a PAMP, PRRs generate signal cascades leading to transcription of genes associated with the immune response. Because all pathogens utilize nucleic acid to propagate, DNA and RNA can be recognized by PRRs to trigger immune activation. In normal cells, DNA is confined to the nucleus or mitochondria. The presence of DNA in the cytosol is indicative of cellular damage or infection and leads to activation of genes associated with the immune response. One way cytosolic DNA is sensed is via the cGAS/STING pathway, specifically by the cyclic-GMP-AMP synthase (cGAS). Upon DNA recognition, cGAS dimerizes and stimulates the formation of cyclic-GMP-AMP (cGAMP). cGAMP then binds directly to stimulator of interferon genes (STING) which triggers phosphorylation/activation of the transcription factor IRF3 via TBK1. IRF3 is able to enter the nucleus to promote transcription of inflammatory genes, such as IFN-β.
Cyclic GMP-AMP synthase (cGAS)
Structure
cGAS is a 522 amino acid protein and a member of the nucleotidyltransferase family. N-terminal residues 1-212 are necessary to bind dsDNA. This region may contain two different DNA binding domains. C-terminal residues 213-522 contain part of the nucleotidyltransferase (NTase) motif and a Mab21 domain and are highly conserved in cGAS from zebrafish to humans. These regions are necessary to form the catalytic pocket for the cGAS substrates: GTP and ATP, and to perform the necessary cyclization reaction.
Function
cGAS is found at the plasma membrane and is responsible for detecting cytosolic double-stranded DNA, normally found in the cell nucleus, in order to stimulate production of IFN-β. cGAS is also found in the nucleus, where tight tethering to chromatin prevents its activation by self-DNA. Upon directly binding cytosolic DNA, cGAS forms dimers to catalyze production of 2’3’-cGAMP from ATP and GTP. cGAMP then acts as a second messenger, binding to STING, to trigger activation of the transcription factor IRF3. IRF3 leads to transcription of type-1 IFN-β. cGAS is unable to produce 2’3’-cGAMP in the presence of RNA.
Discovery
Prior to the discovery of cGAS, it was known that interferon beta was produced in the presence of cytosolic dsDNA and that STING-deficient cells were unable to produce interferon in the presence of dsDNA. Through biochemical fractionation of cell extracts and quantitative mass spectrometry, Sun, et al. identified cGAS as the DNA-sensing protein able to trigger interferon beta by synthesizing the second messenger, 2’3’-cGAMP. This activity is dependent on cytosolic DNA.
Enzymatic activity
cGAS catalyzes formation of cGAMP in the presence of dsDNA. cGAS directly binds dsDNA via positively charged amino acid residues interacting with the negatively charged DNA phosphate backbone. Mutations in the positively charged residues completely abrogate DNA binding and subsequent interferon production through STING. Upon binding dsDNA, cGAS dimerizes and undergoes conformational changes that open up a catalytic nucleotide binding pocket, allowing GTP and ATP to enter. Here they are stabilized through base stacking, hydrogen bonds, and divalent cations in order to catalyze phosphodiester bond formation to produce the cyclic dinucleotide cGAMP.
Cyclic GMP-AMP (cGAMP)
Structure
Cyclic GMP-AMP (cGAMP) is a cyclic dinucleotide (CDN) and the first to be found in metazoans. Other CDNs (c-di-GMP and c-di-AMP) are commonly found in bacteria, archaea, and protozoa. As the name suggests, cGAMP is a cyclic molecule composed of one adenosine monophosphate (AMP) and one guanosine monophosphate (GMP) connected by two phosphodiester bonds. However, cGAMP differs from other CDNs in that it contains a unique phosphodiester bond between the 2’ OH of GMP and the 5’ phosphate of AMP. The other bond is between the 3’ OH of AMP and the 5’ phosphate of GMP. The unique 2’-5’ phosphodiester bond may be advantageous because it is less susceptible to degradation caused by 3’-5’ phosphodiesterases. Another advantage of the unique 2’-5’ linkage may be that cGAMP is able to bind multiple allelic variants of STING found in the human population, while other CDNs, composed of only 3’-5’ linkages, are not.
Discovery
cGAMP was discovered by Zhijian "James" Chen and colleagues by collecting cytoplasmic extracts from cells transfected with different types of DNA. Cellular extracts were assayed for STING activation by detecting activated IRF3 dimers. Using affinity purification chromatography, the STING activating substance was purified and mass spectrometry was used to identify the substance as cyclic-GMP-AMP (cGAMP).
Chemically synthesized cGAMP was shown to trigger IRF3 activation and IFN-β production. cGAMP was found to be much more potent than other cyclic di-nucleotides (c-di-GMP and c-di-AMP). cGAMP was shown to definitively bind STING by using radiolabeled cGAMP cross-linked to STING. Adding in unlabeled cGAMP, c-di-GMP, or c-di-AMP was found to compete with radio-labeled cGAMP, suggesting that CDN binding sites overlap. It was later shown that cGAMP has a unique 2’-5’ phosphodiester bond, which differs from conventional 3’-5’ linked CDNs and that this bond may explain some of the unique signaling properties of cGAMP.
Stimulator of Interferon Genes (STING)
STING is an endoplasmic reticulum resident protein and has been shown to directly bind to a variety of different cyclic-di-nucleotides, such as Cyclic adenosine-inosine monophosphate.
Expression
STING is expressed broadly in numerous tissue types, of both immune and non-immune origin. STING was identified in murine embryonic fibroblasts, and is required for the type 1 interferon response in both immune and non-immune cells.
Structure
STING is a 378 amino acid protein. Its N-terminal region (residues 1-154) contains four trans-membrane domains. Its C-terminal domain contains the dimerization domain, the cyclic dinucleotide interaction domain, as well as a domain responsible for interacting and activating TBK1. Upon binding of 2’-3’ cGAMP, STING undergoes a significant conformational change (approximately 20 Angstrom inward rotation) that encloses cGAMP.
Function
Upon binding of 2’-3’ cGAMP (and other bacterial CDNs), STING activates TBK1 to phosphorylate downstream transcription factors IRF3, which induces the type 1 IFN response, and STAT6, which induces chemokines such as CCL2 and CCL20 independently of IRF3. STING is also thought to activate the NF-κB transcription factor through the activity of the IκB kinase (IKK), though the mechanism of NF-κB activation downstream of STING remains to be determined. The signaling pathways activated by STING combine to induce an innate immune response to cells with ectopic DNA in the cytosol. Loss of STING activity inhibits the ability of mouse embryonic fibroblasts to fight against infection by certain viruses, and more generally, is required for the type 1 IFN response to introduced cytosolic DNA.
STING’s general role as an adaptor molecule in the cytosolic DNA–type 1 IFN response across cell types has been suggested to operate through dendritic cells (DCs). DCs link the innate immune system with the adaptive immune system through phagocytosis and MHC presentation of foreign antigen. The type 1 IFN response initiated by DCs, perhaps through recognition of phagocytosed DNA, has an important co-stimulatory effect. This has recently led to speculation that 2’-3’ cGAMP could be used as a more efficient and direct adjuvant than DNA to induce immune responses.
Allelic variation
Naturally occurring variations in human STING (hSTING) have been found at amino acid position 232 (R232 and H232). H232 variants have diminished type 1 IFN responses and mutation at this position to alanine abrogates the response to bacterial CDNs. Substitutions enhancing ligand binding were also found. G230A substitutions were shown to increase hSTING signaling upon c-di-GMP binding. This residue is found on the lid of the binding pocket, possibly increasing c-di-GMP binding ability.
Biological importance of the cGAS–STING pathway
Role in viral response
The cGAS-cGAMP-STING pathway is able to generate interferon beta in response to cytosolic DNA. It was shown that DNA viruses, such as HSV-1, are able to trigger cGAMP production and subsequent activation of interferon beta via STING. RNA viruses, such as VSV or Sendai virus, are unable to trigger interferon via cGAS-STING. Mice defective in cGAS or STING are unable to produce interferon in response to HSV-1 infection, which eventually leads to death, while mice with normal cGAS and STING function are able to recover.
Retroviruses, such as HIV-1, were also shown to activate IFN via the cGAS/STING pathway. In these studies, inhibitors of retroviral reverse transcription abrogated IFN production, suggesting that it is the viral cDNA which is activating cGAS.
Role in tumor surveillance
The cGAS/STING pathway also has a role in tumor surveillance. In response to cellular stress, such as DNA damage, cells will upregulate NKG2D ligands so that they may be recognized and destroyed by Natural Killer (NK) and T cells. In many tumor cells, the DNA damage response is constitutively active, leading to the accumulation of cytoplasmic DNA. This activates the cGAS/STING pathway leading to activation of IRF3. It was shown in lymphoma cells that the NKG2D ligand, Rae1, was upregulated in a STING/IRF3 dependent manner. Transfection of DNA into these cells also triggered Rae1 expression that was dependent on STING. In this model, the transcription factor IRF3, via cGAS/STING, upregulates stress-induced ligands, such as Rae1, in tumor cells, so as to aid in NK-mediated tumor clearance. Moreover, activation of the STING pathway in bone marrow macrophages has been shown to inhibit the growth of acute myeloid leukaemia cells in mice models.
Role in autoimmune disease
Cytoplasmic DNA, due to viral infection, can lead to activation of interferon beta to help clear the infection. However, chronic activation of STING, due to host DNA in the cytosol, can also activate the cGAS/STING pathway, leading to autoimmune disorders. An example of this occurs in Aicardi–Goutières syndrome (AGS). Mutations in the 3’ repair exonuclease, TREX1, cause endogenous retroelements to accumulate in the cytosol, which can lead to cGAS/STING activation, resulting in IFN production. Excessive IFN production leads to an over-active immune system, resulting in AGS and other immune disorders. In mice, it was found that autoimmune symptoms associated with TREX1 deficiency were relieved by cGAS, STING, or IRF3 knockout, implying the importance of aberrant DNA sensing in autoimmune disorders.
Role in cellular senescence
It has been shown that the depletion of cGAS and STING in mouse embryonic fibroblasts and in primary human fibroblasts prevents senescence and SASP (senescence-associated secretory phenotype) establishment.
Therapeutic role
Potential vaccine adjuvant
DNA has been shown to be a potent adjuvant to boost the immune response to antigens encoded by vaccines. cGAMP, through STING-mediated activation of IRF3, stimulates transcription of interferon. This makes cGAMP a potential vaccine adjuvant capable of boosting inflammatory responses. Studies have shown that vaccines encoded with the chicken antigen, ovalbumin (OVA), in conjunction with cGAMP, were able to activate antigen-specific T and B cells in a STING-dependent manner in vivo. When stimulated with OVA peptide, the T cells from mice vaccinated with OVA + cGAMP were shown to have elevated IFN-γ and IL-2 when compared to animals receiving only OVA. Furthermore, the enhanced stability of cGAMP, due to the unique 2’-5’ phosphodiester bond, may make it a preferred adjuvant to DNA for in vivo applications.
References
DNA
Immune system | CGAS–STING cytosolic DNA sensing pathway | [
"Biology"
] | 3,080 | [
"Immune system",
"Organ systems"
] |
47,192,473 | https://en.wikipedia.org/wiki/Digital%20heritage | The Charter on the Preservation of Digital Heritage of UNESCO defines digital heritage as embracing "cultural, educational, scientific and administrative resources, as well as technical, legal, medical and other kinds of information created digitally, or converted into digital form from existing analogue resources".
Digital heritage also includes the use of digital media in the service of understanding and preserving cultural or natural heritage.
The digitization of both cultural heritage and natural heritage serves to enable the permanent access of current and future generations to culturally important objects ranging from literature and paintings to flora, fauna, or habitats. It is also used in the preservation and access of objects with enduring or significant historical, scientific, or cultural value, including buildings, archeological sites, and natural phenomena. The main idea is the transformation of a material object into a virtual copy. It should not be confused with digital humanities, which uses digitizing technology to specifically help with research. There have been several debates concerning the efficiency of the process of digitizing heritage. Some of the drawbacks refer to the deterioration and technological obsolescence due to the lack of funding for archival materials and underdeveloped policies that would regulate such a process. Another main social debate has taken place around the restricted accessibility due to the digital divide that exists around the world. Nevertheless, new technologies enable easy, instant and cross-border access to the digitized work. Many of these technologies include spatial and surveying technology to gain aerial or 3D images.
Digital heritage is also used to monitor cultural heritage sites over years to help with preservation, maintenance, and sustainable tourism. It aims to observe any changes, diseases, or deterioration that may occur on objects.
Cultural and natural heritage
Digital Heritage that is not born-digital can be divided into two separate groups—digital cultural heritage and digital natural heritage.
Digital cultural heritage is the maintenance or preservation of cultural objects through digitization. These are objects, in some cases entire cities, that are considered of cultural importance. These objects are sometimes able to be digitized or physically represented in minute detail. Digital cultural heritage also includes intangible heritage. These are things such as "oral traditions, customs, value systems, skills, traditional dances, diets, performances" and other unique features of a culture. Intangible heritage is particularly vulnerable to destruction due to urbanization.
There are several projects and programs which concentrate on digital cultural heritage. One such project is Mapping Gothic France, which aims to document and preserve cathedrals across France using images, VR tours, laser scans, and panoramas. This allows for scientific and historical study and preservation of the cathedrals and also provides detailed access to the sites for anyone in the world. The aim of projects like these is to help with the preservation and restoration of cultural objects. After the fire at Notre-Dame de Paris in 2019, digital scans are a major component in the ongoing restoration.
Digital natural heritage pertains to objects of natural heritage that are considered of cultural, scientific, or aesthetic importance. Digital heritage in this instance is used not only to grant access to these objects, but to monitor any changes over time, such as with plant or animal habitats. Geographic information systems are a form of technology that is used primarily in the study of natural heritage. Western Australia has one such digital heritage project where they have created a digital repository of native plants important to both the region and the Aboriginal people. This is in order to protect and preserve the important biological heritage of Western Australia.
Educational impact
The digitization of these heritage objects has impacts around the world and across many disciplines. The increase of digital items means that people, especially the youth, are able to learn about new objects and cultures online through various media. They provide viewers with a more in-depth experience with an item or place, instead of just an image. The media is also able to be curated to age- or educational-level appropriateness, making learning easier. Some of the technology used in education, especially in museums, includes mobile apps, virtual reality, social media, and video games. Cultural heritage institutions are using this technology to try to expand access, increase appreciation for these items, and to gain new viewpoints on their collections. Digital heritage also helps scientists, archeologists, or other historians and specialists collect data on these objects, providing more information on the objects and the past.
Digital heritage is still being studied and improved by several sectors invested in cultural and intellectual preservation. It is particularly of interest to museums, governments, and academic institutions. Research by these groups is creating new concepts, methodologies, and techniques for the implementation of digital heritage to protect this type of cultural and natural heritage. As new technologies are created, museums and other heritage institutions are provided with more ways of disseminating their information and engaging with the public. A lack of resources within certain groups may still hinder everyone from accessing digital heritage.
Technologies used
The digitization of cultural heritage is attained through several means. Some of the main technology used is spatial and surveying technology.
Space archaeological technology - Observations from space satellites are non-intrusive and can be integrated with other technologies on the ground. It is used to photograph vast areas of earth and help with research. Remnants of ancient civilizations or other human objects are also able to be spotted via satellite imaging.
Unmanned aerial vehicles - UAV, such as drones, are commonly used in digitization of cultural heritage objects. The Great Wall of China is one such site that has been digitized and analyzed through unmanned aerial vehicle investigation. The resulting images, 3-D scans, maps, and other data are used to evaluate and maintain the Great Wall.
Laser Scanning - Laser scanning is used to scan an area and recreate spatially accurate depictions, such as a 3D model.
Virtual and Augmented Reality - VR is used primarily for education but does have uses for reconstruction and research. It is used to provide users with an immersive experience, as though they are actually at the site.
Geographic Information systems - GIS are used primarily to study objects and sites over time. It is also important in studying the socioeconomic status of the past.
3D Modeling - 3D modeling has become more widely used due to an increase in technology that works specifically with heritage sites. It is often used in tandem with GIS to reconstruct objects for restoration, documentation, preservation, and educational purposes. Data is collected using satellite or other aerial imaging and ground-based imaging. There is some concern about the accuracy and authenticity of these types of digital reconstructions and their effects on the sites themselves.
A major barrier to digital heritage is the amount of resources it takes to undertake such projects, such as money, time, and technology. Money and the lack of qualified personnel are two that are considered the most obstructive. This is especially an issue in less developed areas or within underfunded groups such as minorities.
Virtual heritage
A particular branch of digital heritage, known as "virtual heritage", is formed by the use of information technology with the aim of recreating the experience of existing cultural heritage, as in (approximations of) virtual reality. It is hard to differentiate this branch from the core contribution of digital heritage, which is storing the heritage data digitally. Parsinejad et al. developed two techniques for digital twinning of architectural assets and their virtual representation in the museum context: hand recording and digital recording. Both face challenges in the adoption and implementation of the digital twin as a revolutionary concept.
Digital heritage stewardship
Digital heritage stewardship is a form of digital curation which is modeled after collaborative curation. Digital heritage stewardship means stepping away from typical curatorial practices (e.g. discovering, arranging, and sharing information, material, and/or content) in favor of practices which allow its stakeholders the opportunity to contribute historical, political, and social context and culture. The collaborative practice encourages the creation, engagement, and maintenance of relationships with the relative communities from which certain information, material, and/or content originates.
A notable use of digital heritage stewardship is for the preservation of Indigenous heritage. The Plateau Peoples' Web Portal is an online archive developed and collaborated on by representatives from six different tribes — the Colville, Coeur d'Alene, Spokane, Umatilla, Yakama, and Warm Springs — along with the team for Washington State University Libraries' Manuscripts, Archives, and Special Collections to curate Plateau peoples' cultural materials.
Digital heritage studies
Digital heritage studies examines how people use the Internet to engage with elements of the past and attribute social and cultural meanings to them in the present. They also look into how concepts of history can change depending on the groups of people that engage with the objects or historical concepts. Digital heritage studies have also led to investigations on heritage as experiences.
See also
Archaeogaming
Digital archaeology
Digital humanities
References
Cultural heritage
Digital media
Digital preservation | Digital heritage | [
"Technology"
] | 1,781 | [
"Multimedia",
"Digital media"
] |
47,197,348 | https://en.wikipedia.org/wiki/Dose-fractionation%20theorem | The dose-fractionation theorem for tomographic imaging is a statement that says the total dose required to achieve statistical significance for each voxel of a computed 3D reconstruction is the same as that required to obtain a single 2D image of that isolated voxel at the same level of statistical significance. Hegerl and Hoppe have pointed out that a statistically significant 3D image can be computed from statistically insignificant projections, as long as the total dose that is distributed among these projections is high enough that it would have resulted in a statistically significant projection, if applied to only one image. The original derivations assumed weak-contrast imaging with additive noise; however, the dose-fractionation theorem was demonstrated using a more complete noise model by Yalisove, Sung, et al.
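The statistical claim can be illustrated with a small simulation (my own sketch, not from the works cited, using Poisson counting statistics): a fixed total dose split over many low-dose projections yields the same per-voxel noise as one full-dose exposure, because independent Poisson counts add.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_counts = 100.0   # expected counts for one voxel at the full dose
n_views = 50          # number of projections sharing the same total dose
n_trials = 20000

# Single full-dose measurement of the voxel
full_dose = rng.poisson(mean_counts, n_trials)

# Same total dose fractionated over n_views projections, then summed
fractionated = rng.poisson(mean_counts / n_views, (n_trials, n_views)).sum(axis=1)

# Both estimators share the same mean (100) and Poisson noise (std ~ sqrt(100) = 10)
print(full_dose.mean(), full_dose.std())
print(fractionated.mean(), fractionated.std())
```

The sum of 50 independent Poisson(2) counts is itself Poisson(100), so the fractionated estimate carries no statistical penalty relative to the single exposure, which is the essence of the theorem in this simplified noise model.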
References
Condensed matter physics
Electron microscopy
Medical imaging
Geometric measurement
X-ray computed tomography
Multidimensional signal processing | Dose-fractionation theorem | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 179 | [
"Geometric measurement",
"Electron",
"Materials science stubs",
"Electron microscopy",
"Physical quantities",
"Quantity",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Geometry",
"Microscopy",
"Condensed matter stubs",
"Matter"
] |
47,198,640 | https://en.wikipedia.org/wiki/Crowther%20criterion | The resolution of a tomographic reconstruction is conventionally evaluated using the Crowther criterion.
The minimum number of views, m, to reconstruct a particle of diameter D to a resolution of d (=1/R) is given by
m = πD/d
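Assuming the standard form of the criterion, m = πD/d (with D and d in the same length unit; the helper name below is my own), the minimum view count can be sketched as:

```python
import math

def crowther_min_views(D: float, d: float) -> int:
    """Minimum number of equally spaced tilt views needed to reconstruct
    a particle of diameter D at resolution d, per the Crowther criterion
    m = pi * D / d. D and d must share the same length unit."""
    return math.ceil(math.pi * D / d)

# A 200 Å particle reconstructed to 10 Å resolution:
print(crowther_min_views(200, 10))  # 63
```

Note that m grows linearly with particle size and with the targeted resolution R = 1/d, so halving d doubles the number of required projections (and, by the dose-fractionation argument, the same total dose can be spread over them).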
References
Condensed matter physics
Electron microscopy
Medical imaging
Geometric measurement
X-ray computed tomography
Multidimensional signal processing | Crowther criterion | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 75 | [
"Geometric measurement",
"Electron",
"Materials science stubs",
"Electron microscopy",
"Physical quantities",
"Quantity",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Geometry",
"Microscopy",
"Condensed matter stubs",
"Matter"
] |
59,017,566 | https://en.wikipedia.org/wiki/Frog%20coffin | Frog coffins are burials of frogs in miniature coffins in Finland for the purposes of folk magic. These coffins are known from finds secreted in churches, as well as from references to their use in folk magic at other locations.
Overview
Burials of frogs in miniature coffins were discovered in churches in eastern Finland around the turn of the twentieth century, and were briefly recorded by , who explained them as placed objects with the magical purpose of stealing the luck of more successful fishermen. A typical size for the coffin was long.
Churches where such coffins were found include Kuopio Cathedral (about 32 coffins); Tuusniemi Church (about 100 coffins); Kiihtelysvaara Church (4 coffins); Pielavesi Old Church (coffin and bound frogs); coffins and bound frogs have also been found at Nilsiä Old Church; Heinävesi Old Church; Turku Cathedral; and Church (in Sweden) – along with the coffins other finds included puppets made of alder or birch bark, parts of fishing nets, and textiles.
Physical finds
One such find was the group of coffins discovered during restoration work in the choir of Kuopio Cathedral, recorded in the newspaper in 1895 – the cathedral itself was consecrated in 1816 – according to the report in Savo-Karjala, the 'coffins' had been pushed into the space through ventilation hatches. At the time of discovery some of the coffins were relatively new. Five further coffins were found in 1901, and the find was recorded in Savo-Karjala again – the newspaper surmised from the number that coffins were being added yearly. Two coffins were kept by the National Museum of Finland in Helsinki, and another in the Kuopio Museum – features of these offerings included a coffin carved from alder wood; a frog inside the coffin; fishing net covering or wrapping the frog; and a needle impaling the frog, with white thread in the needle; in one case it is thought the frog's mouth had been stitched shut.
Finds with very similar features were made in 1907 at the church in Tuusniemi (built 1869). At the same church similar coffins were found in 1918 in the church's bell tower – these coffins also contained bedbugs, animal hair, or grains. Another find was made in the 1930s under the church's stone foundation.
Possibly related finds include a cat in an alder coffin at Kiihtelysvaara Church, and alder coffins (about ) containing a carved human figure found at The Old Church of Pielavesi.
Whilst most coffins have been found in eastern Finland, an example has been found in the west at Turku Cathedral – this deposit was a 'high quality' work made of varnished pine with cloth, and the initials 'HM' on the base. Radiocarbon dating, building history, and style have dated this coffin to the late 17th century or early 18th century.
Folklore
There is extensive recorded folklore concerning the placement of frogs in coffins in eastern Finland, including central Finland, Savo, Karelia, North Ostrobothnia, Kainuu, and as far north as south Lapland – these areas are mostly Lutheran in modern religion, except Karelia which is Orthodox.
Generally the burial of a miniature coffin is a key part of the ritual. The majority of recorded lore is about counter-magic – intended to reflect evil intentions back to those sending them. A lesser part of recorded rituals are malicious in intent – the ritual may be similar to the counter-magic one, but include the burial of an item from the victim in the coffin, though the intent of the ritual is also key. In these rituals frogs are the commonest animal (about 70%), though others may be used, including squirrels, pike, or even a human foetus. Animal substances (milk, feathers, hooves etc.) may have also been buried in a miniature coffin.
In recorded folklore accounts it was thought that such a practise was powerful magic, and could kill an intended victim. Other spells or rituals could be healing, such as a cure for epilepsy which included burying a piece of the afflicted's undergarments with a frog coffin – this cure was likely another form of a protective 'reflective' spell, with the illness assumed to be caused by malicious sorcery. One example is a ritual to dispel the problem of cows not returning home at night, recorded from the cunning man Mikko Koljonen (born 1812) of Viitasaari.
Rituals against epilepsy also used a 'frog coffin'; one recorded ritual describes how such coffins might end up deposited in churches.
Frog coffins were believed to keep cattle healthy, if buried near a cattle shed.
Christian interpretations and influence
Both the malicious intent of 'frog coffin' rituals, and also those intended to 'reflect' evil intent back on the sender were at odds with the Christian world view of forgiveness, though the protective use against other persons' malice could co-exist to some extent as it did not harm the innocent. Contemporary newspaper reports of finds of such burials were scathing of such practises occurring and continuing to occur. notes the use of Christian holy places as geographic focuses for non-Christian practices. It is thought that the Väki ('Power') for these spells may have come from the dead associated with the church and churchyard. In some cases miniature coffin rituals included elements of Christian practice, such as reciting parts of the Lord's Prayer, but not performed by a priest.
In other cultures
The Zhuang people of China idolize the frog – on the first day of the Lunar Year a societal ritual ("Yaogui") takes place, including a hunt for hibernating frogs ("Gui") and their sacrifice and placement in a coffin made from a section of bamboo. On the 25th day after the sacrifice, the frog's bones are exhumed and used to foretell the next harvest.
See also
Apotropaic magic
Church grim
Concealed shoes
Dried cat
Horse skulls
Witch bottle
References
Notes
Sources
Further reading
Magic items
Finnish folklore
Frogs in culture
Coffins | Frog coffin | [
"Physics"
] | 1,264 | [
"Magic items",
"Physical objects",
"Matter"
] |
59,018,821 | https://en.wikipedia.org/wiki/Zoliflodacin | Zoliflodacin (development codes AZD0914 and ETX0914) is an experimental antibiotic that is being studied for the treatment of infection with Neisseria gonorrhoeae (gonorrhea). It has a novel mechanism of action which involves inhibition of bacterial type II topoisomerases. Zoliflodacin is being developed as part of a public-private partnership between Innoviva Specialty Therapeutics and the Global Antibiotic Research & Development Partnership (GARDP), and the drug has demonstrated clinical efficacy equivalent to ceftriaxone in Phase III clinical trials.
Susceptible bacteria
Zoliflodacin has shown in vitro activity against the following species of bacteria:
Staphylococcus aureus
Streptococcus pyogenes
Streptococcus agalactiae
Streptococcus pneumoniae
Haemophilus influenzae
Moraxella catarrhalis
Mycoplasma pneumoniae
Neisseria gonorrhoeae
Chlamydia trachomatis
Mycoplasma genitalium
Pharmacology
Mechanism of action
Zoliflodacin is primarily active against Gram-positive bacteria, but also has activity against fastidious Gram-negative bacteria. It functions by inhibiting DNA gyrase, an enzyme necessary to separate bacterial DNA, thereby inhibiting cell replication.
History
A high-throughput screening campaign aimed at identifying compounds with whole-cell antibacterial activity, performed at Pharmacia & Upjohn, identified compound PNU-286607, a progenitor of zoliflodacin, as having the desired activity. Subsequent biological profiling of PNU-286607 showed that the compound inhibited DNA synthesis in susceptible bacteria, and analysis of mutants resistant to the compound's activity indicated that these compounds acted on DNA gyrase at a site distinct from that of the fluoroquinolone antibiotics.
Subsequent research at AstraZeneca led to the discovery that the nitroaromatic group in PNU-286607 could be replaced with a fused benzisoxazole ring, which allowed for an exploration of different groups at the 3-position of the heterocycle. This work was continued at Entasis Pharmaceuticals, where extensive optimization resulted in the discovery of ETX0914, which was renamed zoliflodacin in the course of its clinical development.
References
Experimental drugs
Antibiotics
2-Oxazolidinones
Benzisoxazoles
Fluoroarenes
Barbiturates
Tetrahydroquinolines
Morpholines
Spiro compounds | Zoliflodacin | [
"Chemistry",
"Biology"
] | 530 | [
"Biotechnology products",
"Organic compounds",
"Antibiotics",
"Biocides",
"Spiro compounds"
] |
59,024,032 | https://en.wikipedia.org/wiki/Polyestriol%20phosphate | Polyestriol phosphate (PE3P, SEP), sold under the brand names Gynäsan, Klimadurin, and Triodurin, is an estrogen medication which was previously used in menopausal hormone therapy (i.e., to treat menopausal symptoms in postmenopausal women) and is no longer available.
Medical uses
PE3P has been used at a dosage of 40 to 80 mg by intramuscular injection once every 4 to 8 weeks in menopausal hormone therapy.
Available forms
PE3P has been available in the form of ampoules containing 50 to 80 mg in 1 or 2 mL aqueous solution.
Pharmacology
PE3P is similar to polyestradiol phosphate (PEP), and is, likewise, an estrogen ester – specifically, an ester and prodrug of estriol – in the form of a polymer with phosphate linkers. When adjusted for differences in molecular weight, PE3P contains the equivalent of about 80% estriol by weight. As such, 40 mg PE3P corresponds to about 32 mg estriol. Doses of PE3P of 10 mg or more have an extended duration of action. A single intramuscular injection of 40 mg PE3P has a duration of about 1 month, and of 80 mg about 2 months.
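The molecular-weight adjustment amounts to simple scaling; a hypothetical helper using the ~80% figure above:

```python
def estriol_equivalent(pe3p_dose_mg: float, estriol_fraction: float = 0.8) -> float:
    """Approximate estriol content of a PE3P dose, using the ~80%
    estriol-by-weight figure quoted for the polymer (hypothetical helper)."""
    return pe3p_dose_mg * estriol_fraction

print(estriol_equivalent(40))  # ≈ 32 mg estriol, as stated in the text
```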
The effects of PE3P on the vagina, uterus, pregnancy, prostate gland, coagulation, and fibrinolysis, as well as on mammary and endometrial cancer risk, have been studied. The endometrial proliferation dose of PE3P over 14 days in women is 40 to 60 mg by intramuscular injection.
Chemistry
PE3P is a water-soluble polymer of estriol with phosphoric acid.
History
PE3P was developed by the Swedish pharmaceutical company Leo Läkemedel AB in the 1960s. It was introduced for medical use by 1968.
Society and culture
Brand names
PE3P was marketed under brand names including Gynäsan, Klimadurin, and Triodurin.
Availability
PE3P was marketed in Germany and Spain.
See also
Estriol phosphate
Polytestosterone phloretin phosphate
Polydiethylstilbestrol phosphate
References
Abandoned drugs
Copolymers
Estranes
Estriol esters
Phosphate esters
Phosphatase inhibitors
Prodrugs
Synthetic estrogens | Polyestriol phosphate | [
"Chemistry"
] | 506 | [
"Chemicals in medicine",
"Drug safety",
"Prodrugs",
"Abandoned drugs"
] |
36,984,235 | https://en.wikipedia.org/wiki/C14H10O5 |
The molecular formula C14H10O5 (molar mass: 258.23 g/mol, exact mass: 258.0528 u) may refer to:
Alternariol
Salsalate
Molecular formulas | C14H10O5 | [
"Physics",
"Chemistry"
] | 58 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
36,986,767 | https://en.wikipedia.org/wiki/Divertor | In magnetic confinement fusion, a divertor is a magnetic field configuration which diverts the heat and particles escaping from the magnetically confined plasma to dedicated plasma-facing components, thus spatially separating the region of plasma-surface interactions from the confined core (in contrast to the limited configuration). This requires establishing a separatrix-bounded magnetic configuration, typically achieved by creating poloidal field nulls (X-points) using external coils.
The divertor is a critical part of magnetic confinement fusion devices, first introduced by Lyman Spitzer in the 1950s for the stellarator concept. It extracts heat and ash produced by the fusion reaction while protecting the main chamber from thermal loads, and reduces the level of plasma contamination due to sputtered impurities. In tokamaks, high confinement modes are more readily achieved in diverted configurations.
At present, it is expected that future fusion power plants will generate divertor heat loads greatly exceeding the engineering limits of the plasma-facing components. The search for mitigation strategies to the divertor power exhaust challenge is a major topic in nuclear fusion research.
Tokamak divertors
A tokamak featuring a divertor is known as a divertor tokamak or divertor configuration tokamak. In this configuration, the particles escape through a magnetic "gap" (separatrix), which allows the energy absorbing part of the divertor to be placed outside the plasma.
The divertor configuration also makes it easier to obtain a more stable H-mode of operation. The plasma facing material in the divertor faces significantly different stresses compared to the majority of the first wall.
Stellarator divertors
In stellarators, low-order magnetic islands can be used to form a divertor volume, the island divertor, for managing power and particle exhaust. The island divertor has shown success in accessing and stabilizing detached scenarios and has demonstrated reliable heat flux and detachment control with hydrogen gas injection and impurity seeding in the W7-X stellarator. The magnetic island chain in the plasma edge can control plasma fueling. Despite some challenges, the island divertor concept has demonstrated great potential for managing power and particle exhaust in fusion reactors, and further research could lead to more efficient and reliable operation in the future.
The helical divertor, as employed in the Large Helical Device (LHD), utilizes large helical coils to create a diverting field. This design permits adjustment of the stochastic layer size, situated between the confined plasma volume and the field lines ending on the divertor plate. However, the compatibility of the Helical Divertor with stellarators optimized for neoclassical transport remains uncertain.
The non-resonant divertor provides an alternative design for optimized stellarators with significant bootstrap currents. This approach leverages sharp "ridges" on the plasma boundary to channel flux. The bootstrap currents modify the shape, not the location, of these ridges, providing an effective channeling mechanism. This design, although promising, has not been experimentally tested yet.
Given the complexity of the design of stellarator divertors, compared to their two-dimensional tokamak counterparts, a thorough understanding of their performance is crucial in stellarator optimization. The experiments with divertors in the W7-X and LHD have shown promising results and provide valuable insights for future improvements in shape and performance. Furthermore, the advent of non-resonant divertors offers an exciting path forward for quasi-symmetric stellarators and other configurations not optimized for minimizing plasma currents.
See also
Nuclear fusion
ITER
References
Further reading
Snowflake and the multiple divertor concepts. March 2016
External links
Limiters
Divertors
Fusion power | Divertor | [
"Physics",
"Chemistry"
] | 754 | [
"Nuclear fusion",
"Plasma physics stubs",
"Fusion power",
"Plasma physics"
] |
36,989,300 | https://en.wikipedia.org/wiki/Absolute%20angular%20momentum | In meteorology, absolute angular momentum is the angular momentum in an 'absolute' coordinate system (absolute time and space).
Introduction
Angular momentum L equates with the cross product of the position vector r of a particle (or fluid parcel) and its absolute linear momentum p, equal to mv, the product of mass and velocity. Mathematically, L = r × mv.
Definition
Absolute angular momentum sums the angular momentum of a particle or fluid parcel in a relative coordinate system and the angular momentum of that relative coordinate system.
Meteorologists typically express the three vector components of velocity as u (eastward), v (northward), and w (upward). The magnitude of the absolute angular momentum per unit mass is

M = u r cos φ + Ω r² cos² φ

where
M represents absolute angular momentum per unit mass of the fluid parcel (in m² s⁻¹),
r represents the distance from the center of the Earth to the fluid parcel (in m),
u represents the earth-relative eastward component of velocity of the fluid parcel (in m s⁻¹),
φ represents latitude (in radians), and
Ω represents the angular rate of Earth's rotation (in rad s⁻¹, usually 7.292 × 10⁻⁵ rad s⁻¹).
The first term represents the angular momentum of the parcel with respect to the surface of the Earth, which depends strongly on weather. The second term represents the angular momentum of the Earth itself at a particular latitude (essentially constant at least on non-geological timescales).
Applications
In the shallow troposphere of the Earth, one can approximate r ≈ a, the distance between the fluid parcel and the center of the Earth being approximately equal to the mean Earth radius:

M = u a cos φ + Ω a² cos² φ

where
a represents the Earth radius (in m, usually 6.371 × 10⁶ m),
M represents absolute angular momentum per unit mass of the fluid parcel (in m² s⁻¹),
u represents the Earth-relative eastward component of velocity of the fluid parcel (in m s⁻¹),
φ represents latitude (in radians), and
Ω represents the angular rate of Earth's rotation (in rad s⁻¹, usually 7.292 × 10⁻⁵ rad s⁻¹).
At the North Pole and South Pole (latitude φ = ±90°), no absolute angular momentum can exist (M = 0 because cos φ = 0). If a fluid parcel with no eastward wind speed (u = 0) originating at the equator (φ = 0, so M = Ωa²) conserves its angular momentum as it moves poleward, then its eastward wind speed increases dramatically: Ωa² = u a cos φ + Ω a² cos² φ. Solving for u gives u = Ωa (1 − cos² φ)/cos φ = Ωa sin² φ/cos φ. If φ = 30°, then u ≈ 134 m s⁻¹.
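A minimal numerical sketch of this conservation argument, using the standard values Ω ≈ 7.292 × 10⁻⁵ rad s⁻¹ and a ≈ 6.371 × 10⁶ m (the function name is illustrative):

```python
import math

OMEGA = 7.292e-5   # Earth's rotation rate, rad/s
A = 6.371e6        # mean Earth radius, m

def u_from_conservation(lat_deg: float) -> float:
    """Eastward wind of a parcel that starts at rest on the equator
    (M = Omega * a**2) and conserves absolute angular momentum:
    u = Omega * a * sin(lat)**2 / cos(lat)."""
    lat = math.radians(lat_deg)
    return OMEGA * A * math.sin(lat) ** 2 / math.cos(lat)

print(round(u_from_conservation(30), 1))  # ≈ 134.1 m/s
```

The rapid growth of u with latitude is one reason angular-momentum-conserving flow cannot extend all the way to the poles.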
The zonal pressure gradient and eddy stresses cause torque that changes the absolute angular momentum of fluid parcels.
References
Angular momentum
Meteorological concepts
Rotation | Absolute angular momentum | [
"Physics",
"Mathematics"
] | 458 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Angular momentum",
"Momentum",
"Moment (physics)"
] |
64,005,900 | https://en.wikipedia.org/wiki/Magnetic%202D%20materials | Magnetic 2D materials or magnetic van der Waals materials are two-dimensional materials that display ordered magnetic properties such as antiferromagnetism or ferromagnetism. After the discovery of graphene in 2004, the family of 2D materials has grown rapidly, with reports of materials from nearly every class except magnets. Since 2016, however, there have been numerous reports of 2D magnetic materials that can be exfoliated with ease, just like graphene.
The first few-layered van der Waals magnetism was reported in 2017 (Cr2Ge2Te6 and CrI3). One reason for this seemingly late discovery is that thermal fluctuations destroy magnetic order more easily in 2D magnets than in 3D bulk. It is also generally accepted in the community that low-dimensional materials have magnetic properties different from those of the bulk. The prospect of measuring the transition from 3D to 2D magnetism has been the driving force behind much of the recent work on van der Waals magnets. This much-anticipated transition has since been observed in both antiferromagnets and ferromagnets: FePS3, Cr2Ge2Te6, CrI3, NiPS3, MnPS3, and Fe3GeTe2.
Although the field has been only around since 2016, it has become one of the most active fields in condensed matter physics and materials science and engineering. There have been several review articles written up to highlight its future and promise.
Overview
Magnetic van der Waals materials are a new addition to the growing list of 2D materials. The special feature of these new materials is that they exhibit a magnetic ground state, either antiferromagnetic or ferromagnetic, when they are thinned down to very few sheets or even one layer. Another, probably more important, feature of these materials is that they can be easily produced in few-layer or monolayer form using simple means such as scotch tape, which is rather uncommon among other magnetic materials like oxide magnets.
Interest in these materials is based on the possibility of producing two-dimensional magnetic materials with ease. The field started with a series of papers in 2016, with a conceptual paper and a first experimental demonstration. It was expanded further with the publication of similar observations in ferromagnetism the following year. Since then, several new materials have been discovered and several review papers have been published.
Theory
Magnetic materials have their magnetic moments (spins) aligned over a macroscopic length scale. Alignment of the spins is typically driven by the exchange interaction between neighboring spins. While at absolute zero (T = 0) the alignment can always exist, thermal fluctuations misalign magnetic moments at temperatures above the Curie temperature (T_C), causing a phase transition to a non-magnetic state. Whether T_C is above absolute zero depends heavily on the dimensions of the system.
For a 3D system, the Curie temperature is always above zero, while a one-dimensional system can only be in a ferromagnetic state at T = 0.
For 2D systems, the transition temperature depends on the spin dimensionality (n_s). In a system with n_s = 1, the spins can be oriented either in or out of plane. A spin dimensionality of two means that the spins are free to point in any direction parallel to the plane. A system with a spin dimensionality of three means there are no constraints on the direction of the spin. A system with n_s = 1 is described by the 2D Ising model. Onsager's solution to the model demonstrates that T_C = 2J/[k_B ln(1 + √2)] ≈ 2.269 J/k_B, thus allowing magnetism at obtainable temperatures. On the contrary, an infinite system with n_s = 3, described by the isotropic Heisenberg model, does not display magnetism at any finite temperature. The long-range ordering of the spins for an infinite system is prevented by the Mermin–Wagner theorem, which states that the spontaneous symmetry breaking required for magnetism is not possible in isotropic two-dimensional magnetic systems. Spin waves in this case are gapless, have a finite density of states, and are therefore easy to excite, destroying magnetic order. Therefore, an external source of magnetocrystalline anisotropy, such as an external magnetic field, or a finite-sized system is required for materials with n_s = 3 to demonstrate magnetism.
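Onsager's closed-form result for the square-lattice Ising model gives the critical temperature explicitly; a one-line check of the standard value k_B T_C / J = 2/ln(1 + √2), a textbook result rather than anything specific to the materials above:

```python
import math

# Onsager (1944): the square-lattice Ising model orders below
# k_B * T_C / J = 2 / ln(1 + sqrt(2)), i.e. at a finite temperature.
tc_over_j = 2 / math.log(1 + math.sqrt(2))
print(round(tc_over_j, 4))  # 2.2692
```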
The 2D Ising model describes the behavior of FePS3, CrI3 and Fe3GeTe2, while Cr2Ge2Te6 and MnPS3 behave like the isotropic Heisenberg model. The intrinsic anisotropy in CrI3 and Fe3GeTe2 is caused by strong spin–orbit coupling, allowing them to remain magnetic down to a monolayer, while Cr2Ge2Te6 only exhibits magnetism as a bilayer or thicker. The XY model describes the case where n_s = 2. In this system, there is no transition between the ordered and unordered states, but instead the system undergoes a so-called Kosterlitz–Thouless transition at a finite temperature T_KT, below which the system has quasi-long-range magnetic order. The theoretical predictions of the XY model have been reported to be consistent with experimental observations of NiPS3. The Heisenberg model describes the case where n_s = 3. In this system, there is no transition between the ordered and unordered states because of the Mermin–Wagner theorem. An experimental realization of the Heisenberg model was reported using MnPS3.
The above systems can be described by a generalized Heisenberg spin Hamiltonian:

H = −(1/2) Σ_(i,j) (J_ij S_i · S_j + Λ_ij S_i^z S_j^z) − Σ_i A_i (S_i^z)²,

where J_ij is the exchange coupling between spins S_i and S_j, and A_i and Λ_ij are the on-site and inter-site magnetic anisotropies, respectively. Taking the anisotropies to infinity, A, Λ → ±∞, recovers the 2D Ising model and the XY model (positive sign for the Ising model and negative for the XY model), while A = 0 and Λ = 0 recovers the Heisenberg model (n_s = 3). Along with the idealized models described above, the spin Hamiltonian can be used for most experimental setups, and it can also model dipole–dipole interactions by renormalization of the parameter Λ. However, sometimes including further neighbours or using different exchange couplings, such as antisymmetric exchange, is required.
Measuring two-dimensional magnetism
Magnetic properties of two-dimensional materials are usually measured using Raman spectroscopy, the magneto-optic Kerr effect, magnetic circular dichroism or anomalous Hall effect techniques. The dimensionality of the system can be determined by measuring the scaling behaviour of the magnetization (M), susceptibility (χ) or correlation length (ξ) as a function of temperature. The corresponding critical exponents are β, γ and ν, respectively. They can be retrieved by fitting

M(T) ∝ (T_C − T)^β,

χ(T) ∝ (T − T_C)^(−γ), or

ξ(T) ∝ |T − T_C|^(−ν)

to the data. The critical exponents depend on the system and its dimensionality, as demonstrated in Table 1. Therefore, an abrupt change in any of the critical exponents indicates a transition between two models. Furthermore, the Curie temperature can be measured as a function of the number of layers (N). For large N this relation takes the finite-size scaling form

T_C(N) = T_C(∞)[1 − (C/N)^λ],

where C is a material-dependent constant and λ is a shift exponent. For thin layers, the behavior crosses over to a different, material-dependent dependence on N.
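The power-law fits described above can be illustrated with synthetic data; T_C and β below are arbitrary demonstration values (β = 0.125 is the 2D Ising value), and the log-log least-squares fit stands in for whatever fitting routine an experiment would actually use:

```python
import numpy as np

# Synthetic magnetization data M(T) ∝ (Tc - T)^beta below an assumed Tc.
TC_TRUE, BETA_TRUE = 45.0, 0.125   # arbitrary demo values (Ising-like beta)
T = np.linspace(20.0, 44.0, 60)
M = 3.0 * (TC_TRUE - T) ** BETA_TRUE

# With Tc known, beta is the slope of log M versus log (Tc - T).
slope, intercept = np.polyfit(np.log(TC_TRUE - T), np.log(M), 1)
print(round(slope, 3))  # recovers beta ≈ 0.125
```

In practice T_C is usually a fit parameter as well, and noisy data close to T_C dominate the uncertainty, but the scaling logic is the same.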
Applications
Magnetic 2D materials can be used as part of van der Waals heterostructures: layered materials consisting of different 2D materials held together by van der Waals forces. One example of such a structure is a thin insulating/semiconducting layer between layers of 2D magnetic material, producing a magnetic tunnel junction. This structure can have a significant spin valve effect, and thus many applications in the field of spintronics. Another newly emerging direction came from the rather unexpected observation of magnetic excitons in NiPS3.
References
Magnetism
Ferromagnetic materials
Materials science
Two-dimensional nanomaterials
Condensed matter physics
Semiconductors | Magnetic 2D materials | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,543 | [
"Electrical resistance and conductance",
"Applied and interdisciplinary physics",
"Physical quantities",
"Semiconductors",
"Ferromagnetic materials",
"Phases of matter",
"Materials science",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"nan",
"Solid state engineering",
... |
64,011,351 | https://en.wikipedia.org/wiki/Discovery%20of%20nuclear%20fission | Nuclear fission was discovered in December 1938 by chemists Otto Hahn and Fritz Strassmann and physicists Lise Meitner and Otto Robert Frisch. Fission is a nuclear reaction or radioactive decay process in which the nucleus of an atom splits into two or more smaller, lighter nuclei and often other particles. The fission process often produces gamma rays and releases a very large amount of energy, even by the energetic standards of radioactive decay. Scientists already knew about alpha decay and beta decay, but fission assumed great importance because the discovery that a nuclear chain reaction was possible led to the development of nuclear power and nuclear weapons. Hahn was awarded the 1944 Nobel Prize in Chemistry for the discovery of nuclear fission.
Hahn and Strassmann at the Kaiser Wilhelm Institute for Chemistry in Berlin bombarded uranium with slow neutrons and discovered that barium had been produced. Hahn suggested a bursting of the nucleus, but he was unsure of what the physical basis for the results were. They reported their findings by mail to Meitner in Sweden, who a few months earlier had fled Nazi Germany. Meitner and her nephew Frisch theorised, and then proved, that the uranium nucleus had been split and published their findings in Nature. Meitner calculated that the energy released by each disintegration was approximately 200 megaelectronvolts, and Frisch observed this. By analogy with the division of biological cells, he named the process "fission".
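Meitner's ~200 MeV figure can be reproduced from the mass defect she used: the fission fragments are lighter than the uranium nucleus by roughly one-fifth of a proton mass, and E = Δm c² converts that to energy. A quick check with standard constants (the one-fifth figure follows the historical estimate, not a precise mass balance):

```python
# Energy release per fission estimated from the mass defect,
# following the Meitner–Frisch estimate of ~1/5 of a proton mass.
PROTON_MASS_MEV = 938.27   # proton rest energy m*c^2, in MeV
delta_m_c2 = PROTON_MASS_MEV / 5
print(round(delta_m_c2))   # ≈ 188 MeV, close to the quoted ~200 MeV
```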
The discovery came after forty years of investigation into the nature and properties of radioactivity and radioactive substances. The discovery of the neutron by James Chadwick in 1932 created a new means of nuclear transmutation. Enrico Fermi and his colleagues in Rome studied the results of bombarding uranium with neutrons, and Fermi concluded that his experiments had created new elements with 93 and 94 protons, which his group dubbed ausonium and hesperium. Fermi won the 1938 Nobel Prize in Physics for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". However, not everyone was convinced by Fermi's analysis of his results. Ida Noddack suggested that instead of creating a new, heavier element 93, it was conceivable that the nucleus had broken up into large fragments, and Aristid von Grosse suggested that what Fermi's group had found was an isotope of protactinium.
This spurred Hahn and Meitner, the discoverers of the most stable isotope of protactinium, to conduct a four-year-long investigation into the process with their colleague Strassmann. After much hard work and many discoveries, they determined that what they were observing was fission, and that the new elements that Fermi had found were fission products. Their work overturned long-held beliefs in physics and paved the way for the discovery of the real elements 93 (neptunium) and 94 (plutonium), for the discovery of fission in other elements, and for the determination of the role of the uranium-235 isotope in that of uranium. Niels Bohr and John Wheeler reworked the liquid drop model to explain the mechanism of fission.
Background
Radioactivity
In the last years of the 19th century, scientists frequently experimented with the cathode-ray tube, which by then had become a standard piece of laboratory equipment. A common practice was to aim the cathode rays at various substances and to see what happened. Wilhelm Röntgen had a screen coated with barium platinocyanide that would fluoresce when exposed to cathode rays. On 8 November 1895, he noticed that even though his cathode-ray tube was not pointed at his screen, which was covered in black cardboard, the screen still fluoresced. He soon became convinced that he had discovered a new type of rays, which are today called X-rays. The following year Henri Becquerel was experimenting with fluorescent uranium salts, and wondered if they too might produce X-rays. On 1 March 1896 he discovered that they did indeed produce rays, but of a different kind, and even when the uranium salt was kept in a dark drawer, it still made an intense image on an X-ray plate, indicating that the rays came from within, and did not require an external energy source.
Unlike Röntgen's discovery, which was the object of widespread curiosity from scientists and lay people alike for the ability of X-rays to make visible the bones within the human body, Becquerel's discovery made little impact at the time, and Becquerel himself soon moved on to other research. Marie Curie tested samples of as many elements and minerals as she could find for signs of Becquerel rays, and in April 1898 also found them in thorium. She gave the phenomenon the name "radioactivity". Along with Pierre Curie and Gustave Bémont, she began investigating pitchblende, a uranium-bearing ore, which was found to be more radioactive than the uranium it contained. This indicated the existence of additional radioactive elements. One was chemically akin to bismuth, but strongly radioactive, and in July 1898 they published a paper in which they concluded that it was a new element, which they named "polonium". The other was chemically like barium, and in a December 1898 paper they announced the discovery of a second hitherto unknown element, which they called "radium". Convincing the scientific community was another matter. Separating radium from the barium in the ore proved very difficult. It took three years for them to produce a tenth of a gram of radium chloride, and they never did manage to isolate polonium.
In 1898, Ernest Rutherford noted that thorium gave off a radioactive gas. In examining the radiation, he classified Becquerel radiation into two types, which he called α (alpha) and β (beta) radiation. Subsequently, Paul Villard discovered a third type of Becquerel radiation which, following Rutherford's scheme, were called "gamma rays", and Curie noted that radium also produced a radioactive gas. Identifying the gas chemically proved frustrating; Rutherford and Frederick Soddy found it to be inert, much like argon. It later came to be known as radon. Rutherford identified beta rays as cathode rays (electrons), and hypothesised—and in 1909 with Thomas Royds proved—that alpha particles were helium nuclei. Observing the radioactive disintegration of elements, Rutherford and Soddy classified the radioactive products according to their characteristic rates of decay, introducing the concept of a half-life. In 1903, Soddy and Margaret Todd applied the term "isotope" to atoms that were chemically and spectroscopically identical but had different radioactive half-lives. Rutherford proposed a model of the atom in which a very small, dense and positively charged nucleus of protons was surrounded by orbiting, negatively charged electrons (the Rutherford model). Niels Bohr improved upon this in 1913 by reconciling it with the quantum behaviour of electrons (the Bohr model).
Protactinium
Soddy and Kasimir Fajans independently observed in 1913 that alpha decay caused atoms to shift down two places in the periodic table, while the loss of two beta particles restored it to its original position. In the resulting reorganisation of the periodic table, radium was placed in group II, actinium in group III, thorium in group IV and uranium in group VI. This left a gap between thorium and uranium. Soddy predicted that this unknown element, which he referred to (after Dmitri Mendeleev) as "ekatantalium", would be an alpha emitter with chemical properties similar to tantalium (now known as tantalum). It was not long before Fajans and Oswald Helmuth Göhring discovered it as a decay product of a beta-emitting product of thorium. Based on the radioactive displacement law of Fajans and Soddy, this was an isotope of the missing element, which they named "brevium" after its short half-life. However, it was a beta emitter, and therefore could not be the mother isotope of actinium. This had to be another isotope.
Two scientists at the Kaiser Wilhelm Institute (KWI) in Berlin-Dahlem took up the challenge of finding the missing isotope. Otto Hahn had graduated from the University of Marburg as an organic chemist, but had been a post-doctoral researcher at University College London under Sir William Ramsay, and under Rutherford at McGill University, where he had studied radioactive isotopes. In 1906, he returned to Germany, where he became an assistant to Emil Fischer at the University of Berlin. At McGill he had become accustomed to working closely with a physicist, so he teamed up with Lise Meitner, who had received her doctorate from the University of Vienna in 1906, and had then moved to Berlin to study physics under Max Planck at the Friedrich-Wilhelms-Universität. Meitner found Hahn, who was her own age, less intimidating than older, more distinguished colleagues. Hahn and Meitner moved to the recently established Kaiser Wilhelm Institute for Chemistry in 1913, and by 1920 had become the heads of their own laboratories there, with their own students, research programs and equipment. The new laboratories offered new opportunities, as the old ones had become too contaminated with radioactive substances to investigate feebly radioactive substances. They developed a new technique for separating the tantalum group from pitchblende, which they hoped would speed the isolation of the new isotope.
The work was interrupted by the outbreak of the First World War in 1914. Hahn was called up into the German Army, and Meitner became a volunteer radiographer in Austrian Army hospitals. She returned to the Kaiser Wilhelm Institute in October 1916. Between the summer of 1914 and late 1916, Hahn moved between the western and eastern fronts, Berlin and Leverkusen; in December 1916 he joined the new gas command unit at Imperial Headquarters in Berlin.
Most of the students, laboratory assistants and technicians had been called up, so Hahn, who was stationed in Berlin between January and September 1917, and Meitner had to do everything themselves. By December 1917 she had isolated the substance, and after further work was able to prove that it was indeed the missing isotope. In March 1918 Meitner submitted her and Hahn's findings for publication in the journal Physikalische Zeitschrift under the title Die Muttersubstanz des Actiniums, ein neues radioaktives Element von langer Lebensdauer ("The mother substance of actinium, a new radioactive element with a long lifetime").
Although Fajans and Göhring had been the first to discover the element, custom required that an element was represented by its longest-lived and most abundant isotope, and brevium did not seem appropriate. Fajans agreed to Meitner and Hahn naming the element protactinium, and assigning it the chemical symbol Pa. In June 1918, Soddy and John Cranston announced that they had extracted a sample of the isotope, but unlike Hahn and Meitner were unable to describe its characteristics. They acknowledged Hahn's and Meitner's priority, and agreed to the name. The connection to uranium remained a mystery, as neither of the known isotopes of uranium decayed into protactinium. It remained unsolved until uranium-235 was discovered in 1935.
For their discovery Hahn and Meitner were repeatedly nominated for the Nobel Prize in Chemistry in the 1920s by several scientists, among them Max Planck, Heinrich Goldschmidt, and Fajans himself. In 1949, the International Union of Pure and Applied Chemistry (IUPAC) named the new element definitively protactinium, and confirmed Hahn and Meitner as discoverers.
Transmutation
Patrick Blackett was able to accomplish nuclear transmutation of nitrogen into oxygen in 1925, using alpha particles directed at nitrogen. In modern notation for the atomic nuclei, the reaction was:
¹⁴N + ⁴He → ¹⁷O + p
This was the first observation of a nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. A fully artificial nuclear reaction and nuclear transmutation was achieved in April 1932 by Ernest Walton and John Cockcroft, who used artificially accelerated protons against lithium to break this nucleus into two alpha particles. The feat was popularly known as "splitting the atom", but was not nuclear fission, as it was not the result of initiating an internal radioactive decay process.
Just a few weeks before Cockcroft and Walton's feat, another scientist at the Cavendish Laboratory, James Chadwick, discovered the neutron, using an ingenious device made with sealing wax, through the reaction of beryllium with alpha particles:
⁹Be + ⁴He → ¹²C + n
Irène Curie and Frédéric Joliot irradiated aluminium foil with alpha particles and found that this results in a short-lived radioactive isotope of phosphorus with a half-life of around three minutes:
²⁷Al + ⁴He → ³⁰P + n
which then decays to a stable isotope of silicon
³⁰P → ³⁰Si + e⁺
They noted that radioactivity continued after the neutron emissions ceased. Not only had they discovered a new form of radioactive decay in the form of positron emission, they had transmuted an element into a hitherto unknown radioactive isotope of another, thereby inducing radioactivity where there had been none before. Radiochemistry was now no longer confined to certain heavy elements, but extended to the entire periodic table.
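Each of the reactions above conserves mass number (A) and charge (Z) on both sides. A small check of the equations as written (a sketch; the helper function and (A, Z) tuples are my own):

```python
# Each particle as (A, Z): mass number and atomic number.
ALPHA, PROTON, NEUTRON, POSITRON = (4, 2), (1, 1), (1, 0), (0, 1)

def balanced(lhs, rhs):
    """True if total mass number and total charge match on both sides."""
    return (sum(a for a, _ in lhs) == sum(a for a, _ in rhs)
            and sum(z for _, z in lhs) == sum(z for _, z in rhs))

# Blackett: 14N + 4He -> 17O + p
print(balanced([(14, 7), ALPHA], [(17, 8), PROTON]))     # True
# Chadwick: 9Be + 4He -> 12C + n
print(balanced([(9, 4), ALPHA], [(12, 6), NEUTRON]))     # True
# Joliot-Curie: 27Al + 4He -> 30P + n, then 30P -> 30Si + e+
print(balanced([(27, 13), ALPHA], [(30, 15), NEUTRON]))  # True
print(balanced([(30, 15)], [(30, 14), POSITRON]))        # True
```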
Chadwick noted that being electrically neutral, neutrons would be able to penetrate the nucleus more easily than protons or alpha particles. Enrico Fermi and his colleagues in Rome—Edoardo Amaldi, Oscar D'Agostino, Franco Rasetti and Emilio Segrè—picked up on this idea. Rasetti visited Meitner's laboratory in 1931, and again in 1932 after Chadwick's discovery of the neutron. Meitner showed him how to prepare a polonium-beryllium neutron source. On returning to Rome, Rasetti built Geiger counters and a cloud chamber modelled after Meitner's. Fermi initially intended to use polonium as a source of alpha particles, as Chadwick and Curie had done. Radon was a stronger source of alpha particles than polonium, but it also emitted beta and gamma rays, which played havoc with the detection equipment in the laboratory. But Rasetti went on his Easter vacation without preparing the polonium-beryllium source, and Fermi realised that since he was interested in the products of the reaction, he could irradiate his sample in one laboratory and test it in another down the hall. The neutron source was easy to prepare by mixing radon with powdered beryllium in a sealed capsule. Moreover, radon was easily obtained; Giulio Cesare Trabacchi had more than a gram of radium and was happy to supply Fermi with radon. With a half-life of only 3.82 days it would only go to waste otherwise, and the radium continually produced more.
Working in assembly-line fashion, they started by irradiating water, and then progressed up the periodic table through lithium, beryllium, boron and carbon, without inducing any radioactivity. When they got to aluminium and then fluorine, they had their first successes. Induced radioactivity was ultimately found through the neutron bombardment of 22 different elements. Meitner was one of the select group of physicists to whom Fermi mailed advance copies of his papers, and she was able to report that she had verified his findings with respect to aluminium, silicon, phosphorus, copper and zinc. When a new copy of La Ricerca Scientifica arrived at Niels Bohr's Institute for Theoretical Physics at the University of Copenhagen, her nephew, Otto Frisch, as the only physicist there who could read Italian, found himself in demand from colleagues wanting a translation. The Rome group had no samples of the rare earth metals, but at Bohr's institute George de Hevesy had a complete set of their oxides that had been given to him by Auergesellschaft, so de Hevesy and Hilde Levi carried out the process with them.
When the Rome group reached uranium, they had a problem: the radioactivity of natural uranium was almost as great as that of their neutron source. What they observed was a complex mixture of half-lives. Following the displacement law, they checked for the presence of lead, bismuth, radium, actinium, thorium and protactinium (skipping the elements whose chemical properties were unknown), and (correctly) found no indication of any of them. Fermi noted three types of reactions were caused by neutron irradiation: emission of an alpha particle (n, α); proton emission (n, p); and gamma emission (n, γ). Invariably, the new isotopes decayed by beta emission, which caused elements to move up the periodic table.
Based on the periodic table of the time, Fermi believed that element 93 was ekarhenium—the element below rhenium—with characteristics similar to manganese and rhenium. Such an element was found, and Fermi tentatively concluded that his experiments had created new elements with 93 and 94 protons, which he dubbed ausenium and hesperium. The results were published in Nature in June 1934. However, in this paper Fermi cautioned that "a careful search for such heavy particles has not yet been carried out, as they require for their observation that the active product should be in the form of a very thin layer. It seems therefore at present premature to form any definite hypothesis on the chain of disintegrations involved." In retrospect, what they had detected was indeed an unknown rhenium-like element, technetium, which lies between manganese and rhenium on the periodic table.
Leo Szilard and Thomas A. Chalmers reported that neutrons generated by gamma rays acting on beryllium were captured by iodine, a reaction that Fermi had also noted. When Meitner repeated their experiment, she found that neutrons from the gamma-beryllium sources were captured by heavy elements like iodine, silver and gold, but not by lighter ones like sodium, aluminium and silicon. She concluded that slow neutrons were more likely to be captured than fast ones, a finding she reported in Naturwissenschaften in October 1934. Everyone had been thinking that energetic neutrons were required, as was the case with alpha particles and protons, but high energy was needed only to overcome the Coulomb barrier; the electrically neutral neutrons were more likely to be captured by the nucleus if they spent more time in its vicinity. A few days later, Fermi considered a curiosity that his group had noted: uranium seemed to react differently in different parts of the laboratory; neutron irradiation conducted on a wooden table induced more radioactivity than on a marble table in the same room. Fermi thought about this and tried placing a piece of paraffin wax between the neutron source and the uranium. This resulted in a dramatic increase in activity. He reasoned that the neutrons had been slowed by collisions with hydrogen atoms in the paraffin and wood. The departure of D'Agostino meant that the Rome group no longer had a chemist, and the subsequent loss of Rasetti and Segrè reduced the group to just Fermi and Amaldi, who abandoned the research into transmutation to concentrate on exploring the physics of slow neutrons.
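Fermi's paraffin observation has a simple kinematic basis: in an elastic collision a neutron sheds the most energy against a nucleus of equal mass, i.e. hydrogen. A rough estimate of how many collisions thermalize a fast neutron, using the standard mean logarithmic energy decrement ξ (my own illustration, not a calculation from the source):

```python
import math

def collisions_to_thermalize(e_start_ev, e_end_ev, mass_number):
    """Average number of elastic collisions needed to slow a neutron
    from e_start_ev down to e_end_ev against nuclei of the given mass
    number, using the mean logarithmic energy decrement xi
    (xi = 1 exactly for hydrogen, mass number 1)."""
    if mass_number == 1:
        xi = 1.0
    else:
        alpha = ((mass_number - 1) / (mass_number + 1)) ** 2
        xi = 1 + alpha * math.log(alpha) / (1 - alpha)
    return math.log(e_start_ev / e_end_ev) / xi

# A 2 MeV neutron slowed to thermal energy (0.025 eV):
print(round(collisions_to_thermalize(2e6, 0.025, 1)))   # 18  (hydrogen)
print(round(collisions_to_thermalize(2e6, 0.025, 12)))  # 115 (carbon, for contrast)
```

The contrast between hydrogen (about 18 collisions) and heavier nuclei is why hydrogen-rich paraffin and wood moderated Fermi's neutrons so effectively.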
The current model of the nucleus in 1934 was the liquid drop model first proposed by George Gamow in 1930. His simple and elegant model was refined and developed by Carl Friedrich von Weizsäcker and, after the discovery of the neutron, by Werner Heisenberg in 1935 and Niels Bohr in 1936; it agreed closely with observations. In the model, the nucleons were held together in the smallest possible volume (a sphere) by the strong nuclear force, which was capable of overcoming the longer ranged Coulomb electrical repulsion between the protons. The model remained in use for certain applications into the 21st century, when it attracted the attention of mathematicians interested in its properties, but in its 1934 form it confirmed what physicists thought they already knew: that nuclei were static, and that the odds of a collision chipping off more than an alpha particle were practically zero.
Discovery
Objections
Fermi won the 1938 Nobel Prize in Physics for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". However, not everyone was convinced by Fermi's analysis of his results. Ida Noddack suggested in September 1934 that instead of creating a new, heavier element 93: "It is conceivable that the nucleus breaks up into several large fragments, which would of course be isotopes of known elements but would not be neighbours of the irradiated element."
Noddack's article was read by Fermi's team in Rome, Curie and Joliot in Paris, and Meitner and Hahn in Berlin. However, the quoted objection comes some distance down, and is but one of several gaps she noted in Fermi's claim. Bohr's liquid drop model had not yet been formulated, so there was no theoretical way to calculate whether it was physically possible for the uranium atoms to break into large pieces. Noddack and her husband, Walter Noddack, were renowned chemists who had been nominated for the Nobel Prize in Chemistry for the discovery of rhenium, although at the time they were also embroiled in a controversy over the discovery of element 43, which they called "masurium". The discovery of technetium by Emilio Segrè and Carlo Perrier put an end to their claim, but did not occur until 1937. It is unlikely that Meitner or Curie had any prejudice against Noddack because of her sex, but Meitner was not afraid to tell Hahn Hähnchen, von Physik verstehst Du Nichts ("Hahn dear, of physics you understand nothing"). The same attitude carried over to Noddack, who did not propose an alternative nuclear model, nor conduct experiments to support her claim. Although Noddack was a renowned analytical chemist, she lacked the background in physics to appreciate the enormity of what she was proposing.
Noddack was not the only critic of Fermi's claim. Aristid von Grosse suggested that what Fermi had found was an isotope of protactinium. Meitner was eager to investigate Fermi's results, but she recognised that a highly skilled chemist was required, and she wanted the best one she knew: Hahn, although they had not collaborated for many years. Initially, Hahn was not interested, but von Grosse's mention of protactinium changed his mind. "The only question", Hahn later wrote, "seemed to be whether Fermi had found isotopes of transuranian elements, or isotopes of the next-lower element, protactinium. At that time Lise Meitner and I decided to repeat Fermi's experiments in order to find out whether the 13-minute isotope was a protactinium isotope or not. It was a logical decision, having been the discoverers of protactinium."
Hahn and Meitner were joined by Fritz Strassmann. Strassmann had received his doctorate in analytical chemistry from the Technical University of Hannover in 1929, and had come to the Kaiser Wilhelm Institute for Chemistry to study under Hahn, believing that this would improve his employment prospects. He enjoyed the work and the people so much that he stayed on after his stipend expired in 1932. After the Nazi Party came to power in Germany in 1933, he declined a lucrative offer of employment because it required political training and Nazi Party membership, and he resigned from the Society of German Chemists when it became part of the Nazi German Labour Front. As a result, he could neither work in the chemical industry nor receive his habilitation, which was required to become an independent researcher in Germany. Meitner persuaded Hahn to hire Strassmann using money from the director's special circumstances fund. In 1935, Strassmann became an assistant on half pay. Soon he would be credited as a collaborator on the papers they produced.
The 1933 Law for the Restoration of the Professional Civil Service removed Jewish people from the civil service, which included academia. Meitner never tried to conceal her Jewish descent, but initially was exempt from its impact on multiple grounds: she had been employed before 1914, had served in the military during the World War, was an Austrian rather than a German citizen, and the Kaiser Wilhelm Institute was a government-industry partnership. However, she was dismissed from her adjunct professorship at the University of Berlin on the grounds that her World War I service was not at the front, and she had not completed her habilitation until 1922. Carl Bosch, the director of IG Farben, a major sponsor of the Kaiser Wilhelm Institute for Chemistry, assured Meitner that her position there was safe, and she agreed to stay. Meitner, Hahn and Strassmann drew closer together personally as their anti-Nazi politics increasingly alienated them from the rest of the organisation, but it gave them more time for research, as administration was devolved to Hahn's and Meitner's assistants.
Research
The Berlin group started by irradiating uranium salt with neutrons from a radon-beryllium source similar to the one that Fermi had used. They dissolved it and added potassium perrhenate, platinum chloride and sodium hydroxide. The residue was then treated with hydrogen sulphide, precipitating platinum sulphide and rhenium sulphide. Fermi had noted four radioactive isotopes, with the longest-lived having 13- and 90-minute half-lives, and these were detected in the precipitate. The Berlin group then tested for protactinium by adding protactinium-234 to the solution. When this was precipitated, it was found to be separated from the 13- and 90-minute half-life isotopes, demonstrating that von Grosse was incorrect, and they were not isotopes of protactinium. Moreover, the chemical reactions involved ruled out all elements from mercury and above on the periodic table. They were able to precipitate the 90-minute activity with osmium sulphide and the 13-minute one with rhenium sulphide, ruling out their being isotopes of the same element. All this provided strong evidence that they were indeed transuranium elements, with chemical properties similar to osmium and rhenium.
Fermi had also reported that fast and slow neutrons had produced different activities. This indicated that more than one reaction was taking place. When the Berlin group could not replicate the Rome group's findings, they commenced their own research into the effects of fast and slow neutrons. To minimise radioactive contamination if there were an accident, different phases were carried out in different rooms, all in Meitner's section on the ground floor of the Kaiser Wilhelm Institute. Neutron irradiation was carried out in one laboratory, chemical separation in another, and measurements were conducted in a third. The equipment they used was simple and mostly hand made.
By March 1936, they had identified ten different half-lives, with varying degrees of certainty. To account for them, Meitner had to hypothesise a new (n, 2n) class of reaction and the alpha decay of uranium, neither of which had ever been reported before, and for which physical evidence was lacking. So while Hahn and Strassmann refined their chemical procedures, Meitner devised new experiments to shine more light on the reaction processes. In May 1937, they issued parallel reports, one in Zeitschrift für Physik with Meitner as the principal author, and one in Chemische Berichte with Hahn as the principal author. Hahn concluded his by stating emphatically: Vor allem steht ihre chemische Verschiedenheit von allen bisher bekannten Elementen außerhalb jeder Diskussion ("Above all, their chemical distinction from all previously known elements needs no further discussion.")
Meitner was increasingly uncertain. They had now constructed three (n, γ) reactions:
²³⁸U + n → ²³⁹U (10 seconds) → ²³⁹EkaRe (2.2 minutes) → ²³⁹EkaOs (59 minutes) → ²³⁹EkaIr (66 hours) → ²³⁹EkaPt (2.5 hours) → (?)
²³⁸U + n → ²³⁹U (40 seconds) → ²³⁹EkaRe (16 minutes) → ²³⁹EkaOs (5.7 hours) → (?)
²³⁸U + n → ²³⁹U (23 minutes) → ²³⁹EkaRe
Meitner was certain that these had to be (n, γ) reactions, as slow neutrons lacked the energy to chip off protons or alpha particles. She considered the possibility that the reactions were from different isotopes of uranium; three were known: uranium-238, uranium-235 and uranium-234. However, when she calculated the neutron cross section it was too large to be anything other than the most abundant isotope, uranium-238. She concluded that it must be a case of nuclear isomerism, which had been discovered in protactinium by Hahn in 1922. Nuclear isomerism had been given a physical explanation by von Weizsäcker, who had been Meitner's assistant in 1936, but had since taken a position at the Kaiser Wilhelm Institute for Physics. Different nuclear isomers of protactinium had different half-lives, and this could be the case for uranium too, but if so it was somehow being inherited by the daughter and granddaughter products, which seemed to be stretching the argument to breaking point. Then there was the third reaction, an (n, γ) one, which occurred only with slow neutrons. Meitner therefore ended her report on a very different note to Hahn, reporting that: "The process must be neutron capture by uranium-238, which leads to three isomeric nuclei of uranium-239. This result is very difficult to reconcile with current concepts of the nucleus."
After this, the Berlin group moved on to working with thorium, as Strassmann put it, "to recover from the horror of the work with uranium". However, thorium was no easier to work with than uranium. For a start, it had a decay product, radiothorium (²²⁸Th), that overwhelmed weaker neutron-induced activity. But Hahn and Meitner had a sample from which they had regularly removed its mother isotope, mesothorium (²²⁸Ra), over a period of several years, allowing the radiothorium to decay away. Even then, it was still more difficult to work with, because its induced decay products from neutron irradiation were isotopes of the same elements produced by thorium's own radioactive decay. What they found was three different decay series, all alpha emitters—a form of decay not found in any other heavy element, and for which Meitner once again had to postulate multiple isomers. They did find an interesting result: under bombardment with 2.5 MeV fast neutrons, these (n, α) decay series occurred simultaneously; for slow neutrons, an (n, γ) reaction that formed ²³³Th was favoured.
In Paris, Irène Curie and Pavel Savitch had also set out to replicate Fermi's findings. In collaboration with Hans von Halban and Peter Preiswerk, they irradiated thorium and produced the ²³³Th isotope with a 22-minute half-life that Fermi had noted. In all, Curie's group detected eight different half-lives in their irradiated thorium. Curie and Savitch detected a radioactive substance with a 3.5-hour half-life. The Paris group proposed that it might be an isotope of thorium. Meitner asked Strassmann, who was now doing most of the chemistry work, to check. He detected no sign of thorium. Meitner wrote to Curie with their results, and suggested a quiet retraction. Nonetheless, Curie persisted. They investigated the chemistry, and found that the 3.5-hour activity was coming from something that seemed to be chemically similar to lanthanum (which in fact it was), which they attempted unsuccessfully to isolate with a fractional crystallization process. (It is possible that their precipitate was contaminated with yttrium, which is chemically similar.) By using Geiger counters and skipping the chemical precipitation, Curie and Savitch detected the 3.5-hour half-life in irradiated uranium.
With the Anschluss, Germany's unification with Austria on 12 March 1938, Meitner lost her Austrian citizenship. James Franck offered to sponsor her immigration to the United States, and Bohr offered a temporary place at his institute, but when she went to the Danish embassy for a visa, she was told that Denmark no longer recognised her Austrian passport as valid. On 13 July 1938, Meitner departed for the Netherlands with Dutch physicist Dirk Coster. Before she left, Otto Hahn gave her a diamond ring he had inherited from his mother to sell if necessary. She reached safety, but with only her summer clothes. Meitner later said that she left Germany forever with 10 marks in her purse. With the help of Coster and Adriaan Fokker, she flew to Copenhagen, where she was greeted by Frisch, and stayed with Niels and Margrethe Bohr at their holiday house in Tisvilde. On 1 August she took the train to Stockholm, where she was met by Eva von Bahr.
Interpretation
The Paris group published their results in September 1938. Hahn dismissed the isotope with the 3.5-hour half-life as contamination, but after looking at the details of the Paris group's experiments and the decay curves, Strassmann was worried. He decided to repeat the experiment, using his more efficient method of separating radium. This time, they found what they thought was radium, which Hahn suggested resulted from two alpha decays:
²³⁸U + n → α + ²³⁵Th → α + ²³¹Ra
Meitner found this very hard to believe.
In November, Hahn travelled to Copenhagen, where he met with Bohr and Meitner. They told him that they were very unhappy about the proposed radium isomers. On Meitner's instructions, Hahn and Strassmann began to redo the experiments, even as Fermi was collecting his Nobel Prize in Stockholm. Assisted by Clara Lieber and Irmgard Bohne, Hahn and Strassmann isolated the three radium isotopes (verified by their half-lives) and used fractional crystallisation to separate them from the barium carrier by adding barium bromide crystals in four steps. Since radium precipitates preferentially in a solution of barium bromide, at each step the fraction drawn off would contain less radium than the one before. However, they found no difference between each of the fractions. In case their process was faulty in some way, they verified it with known isotopes of radium; the process was fine. Hahn and Strassmann also found a fourth radium isotope, and determined half-lives for all four.
On 19 December, Hahn wrote to Meitner, informing her that the radium isotopes behaved chemically like barium. Anxious to finish up before the Christmas break, Hahn and Strassmann submitted their findings to Naturwissenschaften on 22 December without waiting for Meitner to reply. Hahn understood that a "burst" of the atomic nuclei had occurred, but he was unsure about that interpretation. He concluded the article in Naturwissenschaften with: "As chemists... we should substitute the symbols Ba, La, Ce for Ra, Ac, Th. As 'nuclear chemists' fairly close to physics we cannot yet bring ourselves to take this step which contradicts all previous experience in physics."
Frisch normally celebrated Christmas with Meitner in Berlin, but in 1938 she accepted an invitation from Eva von Bahr to spend it with her family at Kungälv, and Meitner asked Frisch to join her there. Meitner received the letter from Hahn describing his chemical proof that some of the product of the bombardment of uranium with neutrons was barium. Barium had an atomic mass 40% less than uranium, and no previously known methods of radioactive decay could account for such a large difference in the mass of the nucleus.
Nonetheless, she had immediately written back to Hahn to say: "At the moment the assumption of such a thoroughgoing breakup seems very difficult to me, but in nuclear physics we have experienced so many surprises, that one cannot unconditionally say: 'It is impossible.'" Meitner felt that Hahn was too careful a chemist to make an elementary blunder, but found the results difficult to explain. All the nuclear reactions that had been documented involved chipping protons or alpha particles from the nucleus. Breaking it up seemed far more difficult. However, the liquid drop model that Gamow had postulated suggested the possibility that an atomic nucleus could become elongated and overcome the surface tension that held it together.
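Meitner and Frisch's famous back-of-the-envelope check can be sketched numerically: the mutual Coulomb repulsion of two just-touching fragments, and the mass defect (about one-fifth of a proton mass, per Frisch's later account), both come out on the order of 200 MeV. The constants and fragment choice below are textbook values of my own, not figures from the source:

```python
# Mass-defect estimate: E = delta_m * c^2, with delta_m ~ m_p / 5.
PROTON_REST_MEV = 938.3
e_defect = PROTON_REST_MEV / 5          # about 188 MeV

# Coulomb estimate: U = 1.44 MeV*fm * Z1*Z2 / d for two touching spheres,
# with nuclear radius R = 1.2 * A**(1/3) fm; fragments taken as
# barium (Z=56, A~144) and krypton (Z=36, A~95).
def radius_fm(mass_number):
    return 1.2 * mass_number ** (1 / 3)

d = radius_fm(144) + radius_fm(95)      # centre-to-centre separation, fm
e_coulomb = 1.44 * 56 * 36 / d          # roughly 250 MeV, the same order

print(round(e_defect), round(e_coulomb))
```

Both crude routes land near 200 MeV per fission, an enormous energy by chemical standards and consistent with the recoil pulses Frisch went on to measure.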
According to Frisch:
Meitner and Frisch had correctly interpreted Hahn's results to mean that the nucleus of uranium had split roughly in half. The first two reactions that the Berlin group had observed were light elements created by the breakup of uranium nuclei; the third, the 23-minute one, was a decay into the real element 93. On returning to Copenhagen, Frisch informed Bohr, who slapped his forehead and exclaimed "What idiots we have been!" Bohr promised not to say anything until they had a paper ready for publication. To speed the process, they decided to submit a one-page note to Nature. At this point, the only evidence that they had was the barium. Logically, if barium was formed, the other element must be krypton, although Hahn mistakenly believed that the atomic masses had to add up to 239 rather than the atomic numbers adding up to 92, and thought it was masurium (technetium), and so did not check for it:
²³⁸U + n → Ba + Kr + some n
Over a series of long-distance phone calls, Meitner and Frisch came up with a simple experiment to bolster their claim: to measure the recoil of the fission fragments, using a Geiger counter with the threshold set above that of the alpha particles. Frisch conducted the experiment on 13 January 1939, and found the pulses caused by the reaction just as they had predicted. He decided he needed a name for the newly discovered nuclear process. He spoke to William A. Arnold, an American biologist working with de Hevesy and asked him what biologists called the process by which living cells divided into two cells. Arnold told him that biologists called it fission. Frisch then applied that name to the nuclear process in his paper. Frisch mailed both the jointly-authored note on fission and his paper on the recoil experiment to Nature on 16 January 1939; the former appeared in print on 11 February and the latter on 18 February. In their second publication on nuclear fission in February 1939, Hahn and Strassmann used the term Uranspaltung (uranium fission) for the first time, and predicted the existence and liberation of additional neutrons during the fission process, opening up the possibility of a nuclear chain reaction.
In an 8 March 1959 interview, Meitner said: "It [the discovery of nuclear fission] was achieved with an unusually good chemistry by Hahn and Strassmann, with a fantastically good chemistry that nobody else could do at that time. Later, the Americans learned it. But at that time Hahn and Strassmann were really the only ones who could do it at all because they were such good chemists. They really demonstrated a physical process with chemistry, so to speak."
Reception
Bohr brings the news to the United States
Before departing for the United States on 7 January 1939 with his son Erik to attend the Fifth Washington Conference on Theoretical Physics, Bohr promised Frisch that he would not mention fission until the papers appeared in print, but during the Atlantic crossing on the MS Drottningholm, Bohr discussed the mechanism of fission with Leon Rosenfeld, and failed to inform him that the information was confidential. On arrival in New York City on 16 January, they were met by Fermi and his wife Laura Capon, and by John Wheeler, who had been a fellow at Bohr's institute in 1934–1935. As it happened, there was a meeting of Princeton University's Physics Journal Club that evening, and when Wheeler asked Rosenfeld if he had any news to report, Rosenfeld told them. An embarrassed Bohr fired off a note to Nature defending Meitner and Frisch's claim to the priority of the discovery. Hahn was annoyed that while Bohr mentioned his and Strassmann's work in the note, he cited only Meitner and Frisch.
News spread quickly of the new discovery, which was correctly seen as an entirely novel physical effect with great scientific—and potentially practical—possibilities. Isidor Isaac Rabi and Willis Lamb, two Columbia University physicists working at Princeton, heard the news and carried it back to Columbia. Rabi said he told Fermi; Fermi gave credit to Lamb. For Fermi, the news came as a profound embarrassment, as the transuranic elements that he had partly been awarded the Nobel Prize for discovering had not been transuranic elements at all, but fission products. He added a footnote to this effect to his Nobel Prize acceptance speech. Bohr soon thereafter went from Princeton to Columbia to see Fermi. Not finding Fermi in his office, Bohr went down to the cyclotron area and found Herbert L. Anderson. Bohr grabbed him by the shoulder and said: "Young man, let me explain to you about something new and exciting in physics."
Further research
It was clear to many scientists at Columbia that they should try to detect the energy released in the nuclear fission of uranium from neutron bombardment. On 25 January 1939, a Columbia University group conducted the first nuclear fission experiment in the United States, which was done in the basement of Pupin Hall. The experiment involved placing uranium oxide inside an ionization chamber, irradiating it with neutrons, and measuring the energy thus released. The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C., under the joint auspices of The George Washington University and the Carnegie Institution of Washington. From there, the news on nuclear fission spread even further, which fostered many more experimental demonstrations.
Bohr and Wheeler overhauled the liquid drop model to explain the mechanism of nuclear fission, with conspicuous success. Their paper appeared in Physical Review on 1 September 1939, the day Germany invaded Poland, starting World War II in Europe. As the experimental physicists studied fission, they uncovered more puzzling results. George Placzek asked Bohr why uranium fissioned with both very fast and very slow neutrons. Walking to a meeting with Wheeler, Bohr had an insight that the fission at low energies was due to the uranium-235 isotope, while at high energies it was mainly due to the far more abundant uranium-238 isotope. This was based on Meitner's 1937 measurements of the neutron capture cross-sections. This would be experimentally verified in February 1940, after Alfred Nier was able to produce sufficient pure uranium-235 for John R. Dunning, Aristid von Grosse and Eugene T. Booth to test.
Other scientists resumed the search for the elusive element 93, which seemed to be straightforward, as they now knew it resulted from the decay of the 23-minute half-life isotope of uranium. At the Radiation Laboratory in Berkeley, California, Emilio Segrè and Edwin McMillan used the cyclotron to create the isotope. They then detected a beta activity with a 2-day half-life, but it had rare-earth element chemical characteristics, and element 93 was supposed to have chemistry akin to rhenium. It was therefore overlooked as just another fission product. Another year passed before McMillan and Philip Abelson determined that the 2-day half-life activity belonged to the elusive element 93, which they named "neptunium". They paved the way for the discovery by Glenn Seaborg, Emilio Segrè and Joseph W. Kennedy of element 94, which they named "plutonium" in 1941.
Another avenue of research, spearheaded by Meitner, was to determine if other elements could fission after being irradiated with neutrons. It was soon determined that thorium and protactinium could. Measurements were also made of the amount of energy released. Hans von Halban, Frédéric Joliot-Curie and Lew Kowarski demonstrated that uranium bombarded by neutrons emitted more neutrons than it absorbed, suggesting the possibility of a nuclear chain reaction. Fermi and Anderson did so too a few weeks later. It was apparent to many scientists that, in theory at least, an extremely powerful energy source could be created, although most still considered an atomic bomb an impossibility.
Nobel Prize
Both Hahn and Meitner had been nominated for the chemistry and the physics Nobel Prizes many times even before the discovery of nuclear fission for their work on radioactive isotopes and protactinium. Several more nominations followed for the discovery of fission between 1940 and 1943. Nobel Prize nominations were vetted by committees of five, one for each award. Although both Hahn and Meitner received nominations for physics, radioactivity and radioactive elements had traditionally been seen as the domain of chemistry, and so the Nobel Committee for Chemistry evaluated the nominations in 1944.
The committee received reports from Theodor Svedberg in 1941 and in 1942. These chemists were impressed by Hahn's work, but felt that the experimental work of Meitner and Frisch was not extraordinary. They did not understand why the physics community regarded their work as seminal. As for Strassmann, although his name was on the papers, there was a long-standing policy of conferring awards on the most senior scientist in a collaboration. In 1944 the Nobel Committee for Chemistry voted to recommend that Hahn alone be given the Nobel Prize in Chemistry for 1944. However, Germans had been forbidden to accept Nobel Prizes after the Nobel Peace Prize had been awarded to Carl von Ossietzky in 1936. The committee's recommendation was rejected by the Royal Swedish Academy of Sciences, which decided to defer the award for one year.
The war was over when the academy reconsidered the award in September 1945. The Nobel Committee for Chemistry had now become more cautious, as it was apparent that much research had been undertaken by the Manhattan Project in the United States in secret, and it suggested deferring the 1944 Nobel Prize in Chemistry for another year. The academy was swayed by Göran Liljestrand, who argued that it was important for the academy to assert its independence from the Allies of World War II, and award the Nobel Prize in Chemistry to a German, as it had done after World War I when it had awarded it to Fritz Haber. Hahn therefore became the sole recipient of the 1944 Nobel Prize in Chemistry "for his discovery of the fission of heavy nuclei".
Meitner wrote about the award in a letter to her friend Birgit Broomé-Aminoff on 20 November 1945.
In 1946, the Nobel Committee for Physics considered nominations for Meitner and Frisch received from Max von Laue, Niels Bohr, Oskar Klein, Egil Hylleraas and James Franck. Reports were written for the committee by Erik Hulthén, who held the chair of experimental physics at Stockholm University, in 1945 and 1946. Hulthén argued that theoretical physics should be considered award-worthy only if it inspired great experiments. The role of Meitner and Frisch in being the first to understand and explain fission was not understood. There may also have been personal factors: the chairman of the committee, Manne Siegbahn, disliked Meitner, and had a professional rivalry with Klein. Meitner and Frisch would continue to be nominated regularly for many years, but would never be awarded a Nobel Prize.
In history and memory
At the end of the war in Europe, Hahn was taken into custody and incarcerated at Farm Hall with nine other senior scientists, all of whom except Max von Laue had been involved with the German nuclear weapons program, and all except Hahn and Paul Harteck were physicists. It was here that they heard the news of the atomic bombings of Hiroshima and Nagasaki. Unwilling to accept that they were years behind the Americans, and unaware that their conversations were being recorded, many of them said in conversations that they had never wanted their nuclear weapons program to succeed in the first place. Hahn did not believe them. Hahn was still there when his Nobel Prize was announced in November 1945. The Farm Hall scientists would spend the rest of their lives attempting to rehabilitate the image of German science that had been tarnished by the Nazi period. Inconvenient details like the thousands of female slave labourers from Sachsenhausen concentration camp who mined uranium ore for their experiments were swept under the rug.
For Hahn, this necessarily involved asserting his claim of the discovery of fission for himself, for chemistry, and for Germany. He used his Nobel Prize acceptance speech to assert this narrative, although he did mention both Meitner's and Strassmann's involvement in his Nobel lecture. Hahn's message resonated strongly in Germany, where he was revered as the proverbial good German, a decent man who had been a staunch opponent of the Nazi regime, but had remained in Germany where he had pursued pure science. As president of the Max Planck Society from 1946 to 1960, he projected an image of German science as undiminished in brilliance and untainted by Nazism to an audience that wanted to believe it. After the Second World War, Hahn came out strongly against the use of nuclear energy for military purposes. He saw the application of his scientific discoveries to such ends as a misuse, or even a crime. Lawrence Badash wrote: "His wartime recognition of the perversion of science for the construction of weapons and his postwar activity in planning the direction of his country's scientific endeavours now inclined him increasingly toward being a spokesman for social responsibility."
In contrast, in the immediate aftermath of the war Meitner and Frisch were hailed as the discoverers of fission in English-speaking countries. Japan was seen as a puppet state of Germany and the destruction of Hiroshima and Nagasaki as poetic justice for the persecution of the Jewish people. In January 1946, Meitner toured the United States, where she gave lectures and received honorary degrees. She attended a cocktail party for Lieutenant General Leslie Groves, the director of the Manhattan Project (who gave her sole credit for the discovery of fission in his 1962 memoirs), and was named Woman of the Year by the Women's National Press Club. At the reception for this award, she sat next to the President of the United States, Harry S. Truman. But Meitner did not enjoy public speaking, especially in English, nor did she relish the role of a celebrity, and she declined the offer of a visiting professorship at Wellesley College. Hahn nominated Meitner and Frisch for the Nobel Prize in Physics in 1948. He and Meitner remained close friends after the war.
In 1966, the United States Atomic Energy Commission jointly awarded the Enrico Fermi Award to Hahn, Strassmann and Meitner for their discovery of fission. The ceremony was held in the Hofburg palace in Vienna. It was the first time that the Enrico Fermi Award had been given to non-Americans, and the first time it was presented to a woman. Meitner's diploma bore the words: "For pioneering research in the naturally occurring radioactivities and extensive experimental studies leading to the discovery of fission". Hahn's diploma was slightly different: "For pioneering research in the naturally occurring radioactivities and extensive experimental studies culminating in the discovery of fission." Hahn and Strassmann were present, but Meitner was too ill to attend, so Frisch accepted the award on her behalf.
During combined celebrations in Germany of the 100th birthdays of Einstein, Hahn, Meitner and von Laue in 1978, Hahn's narrative of the discovery of fission began to crumble. Hahn and Meitner had died in 1968, but Strassmann was still alive, and he asserted the importance of his analytical chemistry and Meitner's physics in the discovery, and their role as more than just assistants. A detailed biography of Strassmann appeared in 1981, a year after his death, and a prize-winning one of Meitner for young adults in 1986. Scientists questioned the focus on chemistry, historians challenged the accepted narrative of the Nazi period, and feminists saw Meitner as yet another example of the Matilda effect, where a woman had been airbrushed from the pages of history. By 1990, Meitner had been restored to the narrative, although her role remained contested, particularly in Germany.
Weizsäcker, a colleague of Hahn and Meitner during their time in Berlin, and a fellow inmate with Hahn in Farm Hall, strongly supported Hahn's role in the discovery of nuclear fission. He told an audience that had gathered for the ceremonial inclusion of a bust of Meitner in the Ehrensaal (Hall of Fame) at the Deutsches Museum in Munich on 4 July 1991 that neither Meitner nor physics had contributed to the discovery of fission, which, he declared, was "a discovery of Hahn's and not of Lise Meitner's."
Notes
References
Further reading
1938 in science
December 1938
December 1938 events in Europe
Fission, discovery of
Nuclear fission
Nuclear chemistry
Radioactivity | Discovery of nuclear fission | [
"Physics",
"Chemistry"
] | 10,996 | [
"Nuclear fission",
"Nuclear chemistry",
"nan",
"Nuclear physics",
"Radioactivity"
] |
64,013,419 | https://en.wikipedia.org/wiki/Octanol-water%20partition%20coefficient | The n-octanol-water partition coefficient, Kow, is a partition coefficient for the two-phase system consisting of n-octanol and water. Kow is also frequently referred to by the symbol P, especially in the English literature. It is also called the n-octanol-water partition ratio.
Kow serves as a measure of the relationship between lipophilicity (fat solubility) and hydrophilicity (water solubility) of a substance. The value is greater than one if a substance is more soluble in fat-like solvents such as n-octanol, and less than one if it is more soluble in water.
If a substance is present as several chemical species in the octanol-water system due to association or dissociation, each species is assigned its own Kow value. A related value, D, does not distinguish between different species, only indicating the concentration ratio of the substance between the two phases.
History
In 1899, Charles Ernest Overton and Hans Horst Meyer independently proposed that the tadpole toxicity of non-ionizable organic compounds depends on their ability to partition into lipophilic compartments of cells. They further proposed the use of the partition coefficient in an olive oil/water mixture as an estimate of this lipophilic associated toxicity. Corwin Hansch later proposed the use of n-octanol as an inexpensive synthetic alcohol that could be obtained in a pure form as an alternative to olive oil.
Applications
Kow values are used, among others, to assess the environmental fate of persistent organic pollutants. Chemicals with high partition coefficients, for example, tend to accumulate in the fatty tissue of organisms (bioaccumulation). Under the Stockholm Convention, chemicals with a log Kow greater than 5 are considered to bioaccumulate.
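The screening criterion above reduces to a one-line check. A minimal sketch; the compound names and log Kow values below are hypothetical placeholders, not measured data:

```python
# Stockholm Convention screening rule from the text: log Kow > 5
# indicates a chemical is considered to bioaccumulate.
LOG_KOW_THRESHOLD = 5.0

def is_bioaccumulative(log_kow: float) -> bool:
    """Apply the log Kow > 5 bioaccumulation screening criterion."""
    return log_kow > LOG_KOW_THRESHOLD

# Hypothetical example values (illustration only):
compounds = {"compound A": -1.2, "compound B": 3.5, "compound C": 6.4}
flagged = [name for name, lk in compounds.items() if is_bioaccumulative(lk)]
print(flagged)  # -> ['compound C']
```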
Furthermore, the parameter plays an important role in drug research (Rule of Five) and toxicology. Ernst Overton and Hans Meyer discovered as early as 1900 that the efficacy of an anaesthetic increased with increasing Kow value (the so-called Meyer-Overton rule).
Kow values also provide a good estimate of how a substance is distributed within a cell between the lipophilic biomembranes and the aqueous cytosol.
Estimation
Since it is not possible to measure Kow for all substances, various models have been developed to predict it, e.g. quantitative structure–activity relationships (QSAR) or linear free energy relationships (LFER) such as the Hammett equation.
A variant of the UNIFAC system can also be used to estimate octanol-water partition coefficients.
Equations
Definition of the Kow or P-value
The Kow or P-value always refers only to a single species or substance:

Pi = ci(o) / ci(w)

with:
ci(o) — the concentration of species i of the substance in the octanol-rich phase
ci(w) — the concentration of species i of the substance in the water-rich phase
If different species occur in the octanol-water system by dissociation or association, several P-values and one D-value exist for the system. If, on the other hand, the substance is present as only a single species, the P and D values are identical.
P is usually expressed as a common logarithm, Log P = log10 P (also written Log Pow or, less frequently, log POW).
Log P is positive for lipophilic and negative for hydrophilic substances or species.
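In code the definition is a single ratio; a minimal sketch, with hypothetical concentrations in matching (arbitrary) units:

```python
import math

def log_p(c_octanol: float, c_water: float) -> float:
    """Log P = log10(c_octanol / c_water) for a single species,
    with both concentrations in the same units."""
    return math.log10(c_octanol / c_water)

# A species 100x more concentrated in the octanol phase (lipophilic):
print(log_p(1.0, 0.01))   # -> 2.0
# A species 10x more concentrated in the water phase (hydrophilic):
print(log_p(0.1, 1.0))    # -> -1.0
```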
Definition of the D-value
The P-value correctly refers only to the concentration ratio of a single species distributed between the octanol and water phases. For a substance that occurs as multiple species, the D-value is therefore calculated by summing the concentrations of all n species in the octanol phase and dividing by the sum of the concentrations of all n species in the aqueous phase:

D = (Σi ci(o)) / (Σi ci(w))

with:
Σi ci(o) — the total concentration of the substance in the octanol-rich phase
Σi ci(w) — the total concentration of the substance in the water-rich phase

D values are also usually given in the form of the common logarithm, Log D = log10 D.
Like Log P, Log D is positive for lipophilic and negative for hydrophilic substances. While P values are largely independent of the pH value of the aqueous phase due to their restriction to only one species, D values are often strongly dependent on the pH value of the aqueous phase.
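The pH dependence of Log D can be illustrated for a monoprotic acid under the standard textbook assumption that only the neutral (un-ionized) species partitions into octanol, giving D = P / (1 + 10^(pH − pKa)). This specific relation is a common approximation, not something stated above, so treat the sketch as illustrative:

```python
import math

def log_d_acid(log_p_neutral: float, pka: float, ph: float) -> float:
    """Log D for a monoprotic acid, assuming only the neutral species
    partitions into octanol: D = P / (1 + 10**(pH - pKa))."""
    return log_p_neutral - math.log10(1.0 + 10.0 ** (ph - pka))

# At pH == pKa half the acid is ionized, so Log D = Log P - log10(2):
print(round(log_d_acid(2.0, 4.5, 4.5), 3))   # -> 1.699
# Two pH units above the pKa the acid is ~99% ionized; Log D drops sharply:
print(round(log_d_acid(2.0, 4.5, 6.5), 3))   # -> -0.004
```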
Example values
Values for log Kow typically range between -3 (very hydrophilic) and +10 (extremely lipophilic/hydrophobic).
For example, acetamide is hydrophilic, and 2,2′,4,4′,5-pentachlorobiphenyl is lipophilic.
See also
Hydrophobic effect
Dortmund Data Bank
References
Further reading
External links
Virtual Computational Chemistry Laboratory interactive calculation and interactive comparison of several methods
Log P calculation software from ACD (commercial)
Directory of reference works and databases with octanol-water partition coefficients
Comprehensive free database of evaluated octanol-water partition coefficients from Sangster Research Laboratories
Ecotoxicology
Pharmacology
Physical chemistry | Octanol-water partition coefficient | [
"Physics",
"Chemistry"
] | 1,060 | [
"Pharmacology",
"Applied and interdisciplinary physics",
"nan",
"Medicinal chemistry",
"Physical chemistry"
] |
65,482,003 | https://en.wikipedia.org/wiki/Wolfgang%20Arlt | Wolfgang Arlt is a German thermodynamicist. He was a professor at the TU Berlin and, from 2004 until his retirement in 2018, at the Friedrich-Alexander-Universität Erlangen-Nürnberg.
Life
After studying chemistry with a focus on physical chemistry at the University of Dortmund, he became a research assistant for Ulfert Onken at the same university in 1976 in the field of chemical engineering. During his doctorate, he helped set up the Dortmund Data Bank. After completing his doctorate as Dr.-Ing. in 1981, he moved to Bayer, where he worked on thermal separation processes. In 1987 he switched to in-house plastics research and worked in a leading position in setting up a production facility for a thermoplastic in Antwerp. After completing this work, he returned to the process engineering department in Leverkusen.
In 1992 he accepted a position as professor for thermodynamics and thermal process engineering at Technische Universität Berlin. During this time he developed, among other things, a recycling process for mixed thermoplastics (e.g. packaging material). He donated part of the proceeds of the corresponding patent to the Philotherm Prize founded by Prof. Knapp, which honors students for special achievements in thermodynamics. In 2004 he moved to the Friedrich-Alexander-Universität Erlangen-Nürnberg, where he held the chair for thermal process engineering until his retirement. In 2009 he founded the Siegfried Peter Prize for high pressure technology, which is usually awarded every two years for outstanding research in the field of high pressure process engineering.
From 2011 to the beginning of 2017 he was the spokesman for the scientific management of the , which he initiated. In 2018 he received the for his groundbreaking developments in fluid process engineering. Together with Peter Wasserscheid and Daniel Teichmann, he was nominated for the German Future Prize 2018 for his work on the development of liquid organic hydrogen carriers (LOHC).
References
German chemical engineers
Scientists from Dortmund
Living people
Thermodynamicists
Academic staff of Technische Universität Berlin
Process engineering
Academic staff of the University of Erlangen-Nuremberg
Plastics
Year of birth missing (living people) | Wolfgang Arlt | [
"Physics",
"Chemistry",
"Engineering"
] | 461 | [
"Process engineering",
"Unsolved problems in physics",
"Mechanical engineering by discipline",
"Thermodynamics",
"Thermodynamicists",
"Amorphous solids",
"Plastics"
] |
44,069,171 | https://en.wikipedia.org/wiki/Visual%20MIMO | Visual MIMO is an optical communication system. The name is derived from MIMO, where the multiple-transmitter, multiple-receiver model has been adopted for light in the visible and non-visible spectrum. In Visual MIMO, an LED or electronic visual display serves as the transmitter, while a camera serves as the receiver.
References
External links
IEEE
http://winlab.rutgers.edu/~aashok/visualmimo/Home.html
http://winlab.rutgers.edu/~aashok/papers/wyuan_wacv2012.pdf
http://winlab.rutgers.edu/~aashok/papers/wyuan_procams11.pdf
Optical communications
Telecommunications
Light | Visual MIMO | [
"Physics",
"Technology",
"Engineering"
] | 153 | [
"Information and communications technology",
"Optical communications",
"Physical phenomena",
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Telecommunications",
"Waves",
"Light"
] |
44,070,426 | https://en.wikipedia.org/wiki/Poisson%20wavelet | In mathematics, in functional analysis, several different wavelets are known by the name Poisson wavelet. In one context, the term "Poisson wavelet" is used to denote a family of wavelets labeled by the set of positive integers, the members of which are associated with the Poisson probability distribution. These wavelets were first defined and studied by Karlene A. Kosanovich, Allan R. Moser and Michael J. Piovoso in 1995–96. In another context, the term refers to a certain wavelet which involves a form of the Poisson integral kernel. In still another context, the terminology is used to describe a family of complex wavelets indexed by positive integers which are connected with the derivatives of the Poisson integral kernel.
Wavelets associated with Poisson probability distribution
Definition
For each positive integer n the Poisson wavelet ψn is defined by

ψn(t) = ((t − n)/n!) t^(n−1) e^(−t) for t ≥ 0, and ψn(t) = 0 for t < 0.
To see the relation between the Poisson wavelet and the Poisson distribution, let X be a discrete random variable having the Poisson distribution with parameter (mean) t and, for each non-negative integer n, let Prob(X = n) = pn(t). Then we have

pn(t) = (t^n / n!) e^(−t).

The Poisson wavelet is now given by

ψn(t) = pn(t) − pn−1(t).
Basic properties
ψn is the backward difference of the values of the Poisson distribution:

ψn(t) = pn(t) − pn−1(t).
The "waviness" of the members of this wavelet family follows from

∫0^∞ ψn(t) dt = 0.
The Fourier transform of ψn is given by

ψ̂n(ω) = ∫0^∞ ψn(t) e^(−iωt) dt = −iω / (1 + iω)^(n+1).
The admissibility constant associated with ψn is

cn = ∫ |ψ̂n(ω)|² / |ω| dω = 1/n.
The Poisson wavelets ψn do not form an orthogonal family of wavelets.
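The backward-difference relation above is easy to check numerically. A minimal sketch, using the Poisson probabilities pn(t) = e^(−t) t^n/n! from the definition; the vanishing integral reflects the "waviness" of the family:

```python
import math

def poisson_pmf(n: int, t: float) -> float:
    """p_n(t) = Prob(X = n) for X ~ Poisson(mean = t)."""
    return math.exp(-t) * t ** n / math.factorial(n)

def poisson_wavelet(n: int, t: float) -> float:
    """psi_n(t) as the backward difference p_n(t) - p_{n-1}(t) for t >= 0."""
    if t < 0:
        return 0.0
    return poisson_pmf(n, t) - poisson_pmf(n - 1, t)

# Each p_n integrates to 1 over [0, inf), so the integral of psi_n vanishes.
# Check with a simple rectangle rule for n = 3:
dt, T = 0.001, 60.0
integral = sum(poisson_wavelet(3, k * dt) for k in range(int(T / dt))) * dt
print(abs(integral) < 1e-6)   # -> True
```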
Poisson wavelet transform
The Poisson wavelet family can be used to construct the family of Poisson wavelet transforms of functions defined in the time domain. Since the Poisson wavelets also satisfy the admissibility condition, functions in the time domain can be reconstructed from their Poisson wavelet transforms using the formula for inverse continuous-time wavelet transforms.
If f(t) is a function in the time domain its n-th Poisson wavelet transform is given by
In the reverse direction, given the n-th Poisson wavelet transform of a function f(t) in the time domain, the function f(t) can be reconstructed as follows:
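Numerically, the transform is a straightforward quadrature. The sketch below assumes the closed form ψn(t) = ((t − n)/n!) t^(n−1) e^(−t) and the generic continuous-wavelet-transform normalization a^(−1/2); both are assumptions, since the displayed formulas are not reproduced above, so treat this as an illustration of the idea rather than the article's exact convention:

```python
import math

def poisson_wavelet(n: int, t: float) -> float:
    """psi_n(t) = ((t - n)/n!) * t**(n - 1) * exp(-t) for t >= 0, else 0
    (assumed closed form of the Poisson wavelet)."""
    if t < 0:
        return 0.0
    return (t - n) / math.factorial(n) * t ** (n - 1) * math.exp(-t)

def poisson_wavelet_transform(f, n, a, b, dt=0.001, T=100.0):
    """Rectangle-rule sketch of W_n f(a, b) = a**-0.5 * integral of
    f(t) * psi_n((t - b)/a) dt over t >= 0 (normalization assumed)."""
    total = sum(f(k * dt) * poisson_wavelet(n, (k * dt - b) / a)
                for k in range(int(T / dt))) * dt
    return total / math.sqrt(a)

# Transform of a decaying exponential -- the kind of time-domain signal
# the text says these methods handle well:
f = lambda t: math.exp(-0.5 * t)
print(poisson_wavelet_transform(f, n=2, a=1.0, b=0.0))  # ≈ -4/27 ≈ -0.1481
```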
Applications
Poisson wavelet transforms have been applied in multi-resolution analysis, system identification, and parameter estimation. They are particularly useful in studying problems in which the functions in the time domain consist of linear combinations of decaying exponentials with time delay.
Wavelet associated with Poisson kernel
Definition
The Poisson wavelet ψ is defined by the function

ψ(t) = (1/π) (1 − t²) / (1 + t²)².

This can be expressed in the form

ψ(t) = −(∂/∂y) Ky(t), evaluated at y = 1,

where Ky(x) = y / (π (x² + y²)) is the Poisson kernel for the upper half-plane.
Relation with Poisson kernel
The function appears as an integral kernel in the solution of a certain initial value problem of the Laplace operator.
This is the initial value problem: given any function f in Lp(ℝ), find a harmonic function u(x, y) defined in the upper half-plane satisfying the following conditions:

u(x, y) → f(x) as y → 0+, and

u(x, y) → 0 as y → ∞ in the upper half-plane.
The problem has the following solution: there is exactly one function u(x, y) satisfying the two conditions, and it is given by

u(x, y) = (f ∗ Ky)(x),

where Ky(x) = y / (π (x² + y²)) and where "∗" denotes the convolution operation in the variable x. The function Ky is the integral kernel for the function u. The function u is the harmonic continuation of f into the upper half plane.
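The convolution solution can be sketched numerically. This assumes the standard upper-half-plane Poisson kernel Ky(x) = y/(π(x² + y²)) (the displayed formula is not reproduced above); the check uses the kernel's semigroup property Ky ∗ K1 = K1+y, so continuing the boundary data K1 to height y must reproduce K1+y:

```python
import math

def poisson_kernel(y: float, x: float) -> float:
    """Standard upper-half-plane Poisson kernel K_y(x) = y / (pi (x^2 + y^2))."""
    return y / (math.pi * (x * x + y * y))

def harmonic_continuation(f, x: float, y: float, dx=0.01, L=400.0):
    """u(x, y) = (f * K_y)(x), approximated on the truncated line [-L, L]."""
    return sum(poisson_kernel(y, x - (-L + k * dx)) * f(-L + k * dx)
               for k in range(int(2 * L / dx))) * dx

# Boundary data f = K_1; its harmonic continuation to height y = 1 is K_2.
u = harmonic_continuation(lambda t: poisson_kernel(1.0, t), x=0.0, y=1.0)
print(abs(u - poisson_kernel(2.0, 0.0)) < 1e-3)   # -> True
```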
Properties
The "waviness" of the function ψ follows from

∫−∞^∞ ψ(t) dt = 0.
The Fourier transform of ψ is given by

ψ̂(ω) = ∫−∞^∞ ψ(t) e^(−iωt) dt = |ω| e^(−|ω|).
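The closed form of the transform can be checked numerically. The sketch assumes ψ(t) = (1/π)(1 − t²)/(1 + t²)² (the standard form of this wavelet; the displayed equation is not reproduced above) and the convention ψ̂(ω) = ∫ ψ(t) e^(−iωt) dt, under which the transform works out to |ω| e^(−|ω|):

```python
import math

def psi(t: float) -> float:
    """Assumed Poisson wavelet: (1/pi) * (1 - t^2) / (1 + t^2)^2."""
    return (1.0 - t * t) / (math.pi * (1.0 + t * t) ** 2)

def psi_hat(omega: float, dx=0.002, L=200.0) -> float:
    """Numerical Fourier transform; psi is even, so the integral
    reduces to a real cosine integral over the truncated line."""
    return sum(psi(-L + k * dx) * math.cos(omega * (-L + k * dx))
               for k in range(int(2 * L / dx))) * dx

# Compare with the claimed closed form |omega| * exp(-|omega|):
print(abs(psi_hat(1.0) - math.exp(-1.0)) < 1e-3)   # -> True
```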
The admissibility constant is

cψ = ∫−∞^∞ |ψ̂(ω)|² / |ω| dω = 1/2.
A class of complex wavelets associated with the Poisson kernel
Definition
The Poisson wavelets ψn form a family of complex-valued functions indexed by the set of positive integers and defined by
where
Relation with Poisson kernel
The function ψn can be expressed as an n-th derivative as follows:
Writing the function in terms of the Poisson integral kernel as
we have
Thus can be interpreted as a function proportional to the derivatives of the Poisson integral kernel.
Properties
The Fourier transform of ψn is given by
where is the unit step function.
References
Wavelets
Time–frequency analysis
Signal processing
Continuous wavelets
Poisson distribution | Poisson wavelet | [
"Physics",
"Technology",
"Engineering"
] | 778 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Spectrum (physical sciences)",
"Time–frequency analysis",
"Frequency-domain analysis"
] |
41,202,355 | https://en.wikipedia.org/wiki/Runcicantellated%2024-cell%20honeycomb | In four-dimensional Euclidean geometry, the runcicantellated 24-cell honeycomb is a uniform space-filling honeycomb.
Alternate names
Runcicantellated icositetrachoric tetracomb/honeycomb
Prismatorhombated icositetrachoric tetracomb (pricot)
Great diprismatodisicositetrachoric tetracomb
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Rectified 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 118
o3x3x4o3x - apricot - O118
5-polytopes
Honeycombs (geometry) | Runcicantellated 24-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 346 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
41,202,531 | https://en.wikipedia.org/wiki/Stericantitruncated%2016-cell%20honeycomb | In four-dimensional Euclidean geometry, the stericantitruncated 16-cell honeycomb is a uniform space-filling honeycomb.
Alternate names
Great cellirhombated icositetrachoric tetracomb (gicaricot)
Runcicantic hexadecachoric tetracomb
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Rectified 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 121 (Wrongly named runcinated icositetrachoric honeycomb)
x3x3x4o3x - gicaricot - O130
5-polytopes
Honeycombs (geometry) | Stericantitruncated 16-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 348 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
41,204,100 | https://en.wikipedia.org/wiki/Citizens%20for%20Conservation | Citizens for Conservation (commonly called CFC) is a nonprofit organization, centered in Barrington, Illinois, established in 1971. CFC's motto is "Saving Living Space for Living Things" through protection, restoration and stewardship of land, conservation of natural resources and education. It is a member of Chicago Wilderness and the Land Trust Alliance.
CFC specializes in habitat restoration, both on properties it owns and nearby forest preserves of Lake County Forest Preserve District and Forest Preserve District of Cook County. CFC relies almost entirely on volunteers, meeting at least once a week year-round. In addition, student interns are hired during the summer.
CFC received the 2011 Conservation and Native Landscaping award from the U.S. EPA and Chicago Wilderness for its restoration work on the Flint Creek Savanna, their largest property and location of their headquarters.
CFC properties
As of early 2020, CFC owned 12 properties for a total of 476 acres. Much of this is agricultural land that was donated or purchased, and restored back to natural habitat, primarily oak savanna, tallgrass prairie, and wetlands. Removal of invasive species and re-seeding of native species from local seed sources is the main focus of habitat restoration. It has the largest holding of fee simple lands (direct ownership) of any non-profit in Lake County, Illinois.
Education
CFC offers periodic programs for children as part of the No Child Left Inside project, and works with the local school district to introduce 3rd and 4th graders to the prairie. It also provides occasional community education programs for adults.
References
Nature conservation organizations based in the United States
Ecological restoration | Citizens for Conservation | [
"Chemistry",
"Engineering"
] | 322 | [
"Ecological restoration",
"Environmental engineering"
] |
41,205,460 | https://en.wikipedia.org/wiki/Keatite | Keatite is a silicate mineral with the chemical formula SiO2 (silicon dioxide) that was discovered in nature in 2013. It is a tetragonal polymorph of silica first known as a synthetic phase. It was reported as minute inclusions within clinopyroxene (diopside) crystals in an ultra high pressure garnet pyroxenite body. The host rock is part of the Kokchetav Massif in Kazakhstan.
Keatite was synthesized in 1954 and named for Paul P. Keat, who discovered it while studying the role of soda in the crystallization of amorphous silica. Keatite was well known before 1970, as evidenced by a few studies from that era.
References
Silica polymorphs | Keatite | [
"Materials_science",
"Engineering"
] | 158 | [
"Silica polymorphs",
"Materials science stubs",
"Polymorphism (materials science)",
"Materials science"
] |
41,205,552 | https://en.wikipedia.org/wiki/Caius%20Iacob | Caius Iacob (March 29, 1912 – February 6, 1992) was a Romanian mathematician, professor at the University of Bucharest, and titular member of the Romanian Academy. After the fall of communism in 1989, he was elected to the Senate of Romania.
Biography
He was born in Arad, the son of and Camelia, née Moldovan. His father was professor of Canon Law and served as delegate for Arad at the Great National Assembly of Alba Iulia of 1 December 1918. Caius Iacob attended the Moise Nicoară High School in his native city, and then completed his secondary education at the Emanuil Gojdu High School in Oradea. After passing his baccalaureate examination with the highest mark in the nation, he was admitted in 1928 at the Faculty of Sciences of the University of Bucharest, from where he graduated in 1931, aged nineteen. Iacob continued his studies at the Faculty of Sciences of the University of Paris, with thesis advisor Henri Villat. He defended his thesis, Sur la détermination des fonctions harmoniques par certaines conditions aux limites: applications à l'hydrodynamique, on 24 June 1935.
His most important work was in classical hydrodynamics, fluid mechanics, mathematical analysis, and compressible-flow theory.
Iacob started his academic career in 1935 at Politehnica University of Timișoara, after which he became a professor at the University of Bucharest and at Babeș-Bolyai University in Cluj. In 1955, he was elected a corresponding member of the Romanian Academy, becoming a titular member in 1963. From 1980 to the end of his life he served as President of the Mathematics section of the Romanian Academy.
He was awarded several prizes for his work: the Henri de Parville Prize by the French Academy of Sciences (1940), the (1952), and the Order of the Star of the Romanian People's Republic, 3rd class (1964).
In May 1990, he was elected senator for the Christian Democratic National Peasants' Party — the only member of the party to be elected to the upper chamber of the Parliament of Romania that year. He died in Bucharest in February 1992.
Legacy
Iacob was one of the founders of the Institute of Applied Mathematics of the Romanian Academy in 1991. Ten years later, the institute merged with the Center for Mathematical Statistics of the Academy (that had been founded by Gheorghe Mihoc in 1964), becoming the current Gheorghe Mihoc–Caius Iacob Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy.
A high school and a middle school, as well as a street and a plaza in Arad also bear his name.
References
1912 births
1992 deaths
People from Arad, Romania
Romanian Austro-Hungarians
University of Bucharest alumni
University of Paris alumni
20th-century Romanian mathematicians
Mathematical analysts
Fluid dynamicists
Academic staff of the University of Bucharest
Academic staff of the Politehnica University of Timișoara
Academic staff of Babeș-Bolyai University
Titular members of the Romanian Academy
Members of the Romanian Academy of Sciences
Recipients of the Order of the Star of the Romanian Socialist Republic
National Peasants' Party politicians
21st-century Romanian politicians
Members of the Senate of Romania | Caius Iacob | [
"Chemistry",
"Mathematics"
] | 660 | [
"Mathematical analysis",
"Fluid dynamicists",
"Mathematical analysts",
"Fluid dynamics"
] |
41,207,008 | https://en.wikipedia.org/wiki/Biological%20photovoltaics | Biological photovoltaics, also called biophotovoltaics or BPV, is an energy-generating technology which uses oxygenic photoautotrophic organisms, or fractions thereof, to harvest light energy and produce electrical power. Biological photovoltaic devices are a type of biological electrochemical system, or microbial fuel cell, and are sometimes also called photo-microbial fuel cells or “living solar cells”. In a biological photovoltaic system, electrons generated by photolysis of water are transferred to an anode. A relatively high-potential reaction takes place at the cathode, and the resulting potential difference drives current through an external circuit to do useful work. It is hoped that using a living organism (which is capable of self-assembly and self-repair) as the light harvesting material will make biological photovoltaics a cost-effective alternative to synthetic light-energy-transduction technologies such as silicon-based photovoltaics.
Principle of operation
Like other fuel cells, biological photovoltaic systems are divided into anodic and cathodic half-cells.
Oxygenic photosynthetic biological material, such as purified photosystems or whole algal or cyanobacterial cells, are employed in the anodic half-cell. These organisms are able to use light energy to drive the oxidation of water, and a fraction of the electrons produced by this reaction are transferred to the extracellular environment, where they can be used to reduce an anode. No heterotrophic organisms are included in the anodic chamber - electrode reduction is performed directly by the photosynthetic material.
The higher electrode potential of the cathodic reaction relative to the reduction of the anode drives current through an external circuit. In the illustration, oxygen is being reduced to water at the cathode, though other electron acceptors can be used. If water is regenerated there is a closed loop in terms of electron flow (similar to a conventional photovoltaic system), i.e. light energy is the only net input required for production of electrical power. Alternatively, electrons can be used at the cathode for electrosynthetic reactions that produce useful compounds, such as the reduction of protons to hydrogen gas.
Distinctive properties
Similar to microbial fuel cells, biological photovoltaic systems which employ whole organisms have the advantage over non-biological fuel cells and photovoltaic systems of being able to self-assemble and self-repair (i.e. the photosynthetic organism is able to reproduce itself). The ability of the organism to store energy allows for power generation from biological photovoltaic systems in the dark, circumventing the grid supply and demand problems sometimes faced by conventional photovoltaics. Additionally, the use of photosynthetic organisms that fix carbon dioxide means the 'assembly' of the light harvesting material in a biological photovoltaic system could have a negative carbon footprint.
Compared to microbial fuel cells, which use heterotrophic microorganisms, biological photovoltaic systems need no input of organic compounds to supply reducing equivalents to the system. This improves the efficiency of light-to-electricity conversion by minimising the number of reactions separating the capture of light energy and reduction of the anode. A disadvantage of using oxygenic photosynthetic material in bioelectrochemical systems is that the production of oxygen in the anodic chamber has a detrimental effect on the cell voltage.
Types of biological photovoltaic system
Biological photovoltaic systems are defined by the type of light harvesting material that they employ, and the mode of electron transfer from the biological material to the anode.
Light harvesting materials
The light harvesting materials employed in biological photovoltaic devices can be categorised by their complexity; more complex materials are typically less efficient but more robust.
Isolated photosystems
Isolated photosystems offer the most direct connection between water photolysis and anode reduction. Typically, photosystems are isolated and adsorbed to a conductive surface. A soluble redox mediator (a small molecule capable of accepting and donating electrons) may be required to improve the electrical communication between photosystem and anode. Because other cellular components required for repair are absent, biological photovoltaic systems based on isolated photosystems have relatively short lifetimes (a few hours) and often require low temperatures to improve stability.
Sub-cellular fractions
Sub-cellular fractions of photosynthetic organisms, such as purified thylakoid membranes, can also be used in biological photovoltaic systems. A benefit of using material that contains both photosystem II and photosystem I is that electrons extracted from water by photosystem II can be donated to the anode at a more negative redox potential (from the reductive end of photosystem I). A redox mediator (e.g. ferricyanide) is required to transfer electrons between the photosynthetic components and the anode.
Whole organisms
Biological photovoltaic systems that employ whole organisms are the most robust type, and lifetimes of multiple months have been observed. The insulating outer membranes of whole cells impede electron transfer from the sites of electron generation inside the cell to the anode. As a result, conversion efficiencies are low unless lipid-soluble redox mediators are included in the system. Cyanobacteria are typically used in these systems because their relatively simple arrangement of intracellular membranes compared to eukaryotic algae facilitates electron export. Potential catalysts such as platinum can be used to increase the permeability of the cellular membrane.
Electron transfer to the anode
Reduction of the anode by the photosynthetic material can be achieved by a direct electron transfer, or via a soluble redox mediator. Redox mediators may be lipid-soluble (e.g. vitamin K2), allowing them to pass through cell membranes, and can either be added to the system or produced by the biological material.
Inherent electrode reduction activity
Isolated photosystems and sub-cellular photosynthetic fractions may be able to directly reduce the anode if the biological redox components are close enough to the electrode for electron transfer to occur. In contrast to organisms such as dissimilatory metal reducing bacteria, algae and cyanobacteria are poorly adapted for extracellular electron export - no molecular mechanisms enabling direct reduction of an insoluble extracellular electron acceptor have been conclusively identified. Nevertheless, a low rate of anode reduction has been observed from whole photosynthetic organisms without the addition of exogenous redox-active compounds. It has been speculated that electron transfer occurs through the release of low concentrations of endogenous redox mediator compounds. Improving the electron export activity of cyanobacteria for use in biological photovoltaic systems is a topic of current research.
Artificial electron mediators
Redox mediators are often added to experimental systems to improve the rate of electron export from the biological material and/or electron transfer to the anode, especially when whole cells are employed as the light harvesting material. Quinones, phenazines, and viologens have all been successfully employed to increase current output from photosynthetic organisms in biological photovoltaic devices. Adding artificial mediators is considered an unsustainable practice in scaled-up applications, so most modern research is on mediator-free systems.
Efficiency
The conversion efficiency of biological photovoltaic devices is presently too low for scaled-up versions to achieve grid parity. Genetic engineering approaches are being employed to increase the current output from photosynthetic organisms for use in biological photovoltaic systems.
References
External links
An introduction to biological photovoltaics video on YouTube
Bioelectrochemistry
Fuel cells
Electrochemistry | Biological photovoltaics | [
"Chemistry"
] | 1,631 | [
"Electrochemistry",
"Bioelectrochemistry"
] |
41,213,990 | https://en.wikipedia.org/wiki/Neutrino%20minimal%20standard%20model | The neutrino minimal standard model (often abbreviated as νMSM) is an extension of the Standard Model of particle physics, by the addition of three right-handed neutrinos with masses smaller than the electroweak scale. Introduced by Takehiko Asaka and Mikhail Shaposhnikov in 2005, it has provided a highly constrained model for many topics in physics and cosmology, such as baryogenesis and neutrino oscillations.
References
External links
Brief technical description
Neutrinos
Physics beyond the Standard Model | Neutrino minimal standard model | [
"Physics"
] | 111 | [
"Particle physics stubs",
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model"
] |
55,684,327 | https://en.wikipedia.org/wiki/Horndeski%27s%20theory | Horndeski's theory is the most general theory of gravity in four dimensions whose Lagrangian is constructed out of the metric tensor and a scalar field and leads to second-order equations of motion. The theory was first proposed by Gregory Horndeski in 1974 and has found numerous applications, particularly in the construction of cosmological models of inflation and dark energy. Horndeski's theory contains many theories of gravity, including general relativity, Brans–Dicke theory, quintessence, dilaton, chameleon and covariant Galileon models as special cases.
Action
Horndeski's theory can be written in terms of an action as

$$S[g_{\mu\nu}, \phi] = \int \mathrm{d}^4x \, \sqrt{-g} \left[ \sum_{i=2}^{5} \frac{1}{8\pi G_\mathrm{N}} \mathcal{L}_i + \mathcal{L}_\mathrm{m} \right]$$

with the Lagrangian densities

$$\mathcal{L}_2 = G_2(\phi, X)$$

$$\mathcal{L}_3 = -G_3(\phi, X) \, \Box\phi$$

$$\mathcal{L}_4 = G_4(\phi, X) R + G_{4,X}(\phi, X) \left[ (\Box\phi)^2 - \phi_{;\mu\nu}\phi^{;\mu\nu} \right]$$

$$\mathcal{L}_5 = G_5(\phi, X) G_{\mu\nu}\phi^{;\mu\nu} - \frac{1}{6} G_{5,X}(\phi, X) \left[ (\Box\phi)^3 + 2 \phi_{;\mu}{}^{\nu} \phi_{;\nu}{}^{\alpha} \phi_{;\alpha}{}^{\mu} - 3 \phi_{;\mu\nu}\phi^{;\mu\nu} \Box\phi \right]$$

Here $G_\mathrm{N}$ is Newton's constant, $\mathcal{L}_\mathrm{m}$ represents the matter Lagrangian, $G_2$ to $G_5$ are generic functions of $\phi$ and $X$, $R$ and $G_{\mu\nu}$ are the Ricci scalar and Einstein tensor, $g_{\mu\nu}$ is the Jordan frame metric, semicolons indicate covariant derivatives, commas indicate partial derivatives (e.g. $G_{4,X} = \partial G_4 / \partial X$), $X = -\tfrac{1}{2}\phi^{;\mu}\phi_{;\mu}$, and repeated indices are summed over following Einstein's convention.
Constraints on parameters
Many of the free parameters of the theory have been constrained: the coupling of the scalar field to the top quark and, via coupling to jets, down to low coupling values with proton collisions at the ATLAS experiment, while $G_{4,X}$ and $G_5$ are strongly constrained by the direct measurement of the speed of gravitational waves following GW170817.
See also
Classical theories of gravitation
General relativity
Brans–Dicke theory
Dual graviton
Massive gravity
Lovelock theory of gravity
Alternatives to general relativity
References
General relativity | Horndeski's theory | [
"Physics"
] | 316 | [
"General relativity",
"Theory of relativity"
] |
60,502,508 | https://en.wikipedia.org/wiki/Macromolecular%20cages | In host–guest chemistry, macromolecular cages are a type of macromolecule structurally consisting of a three-dimensional chamber surrounded by a molecular framework. Macromolecular cage architectures come in various sizes ranging from 1-50 nm and have varying topologies as well as functions. They can be synthesized through covalent bonding or self-assembly through non-covalent interactions. Most macromolecular cages that are formed through self-assembly are sensitive to pH, temperature, and solvent polarity.
Metal Organic Polyhedra
Metal Organic Polyhedra (MOPs) are a specific type of self-assembled macromolecular cage that is formed through unique coordination and is typically chemically and thermally stable. MOPs have cage-like frameworks with an enclosed cavity. The discrete self-assembly of metal ions and organic scaffolds to form MOPs with highly symmetrical architectures is a modular process and has various applications. The self-assembly of subunits that results in high symmetry is a common occurrence in biological systems. Specific examples of this are ferritin, capsids, and the tobacco mosaic virus, which are formed by the self-assembly of protein subunits into a polyhedral symmetry. Nonbiological polyhedra formed with metal ions and organic linkers are metal-based macromolecular cages that have nanocavities with multiple openings or pores that allow small molecules to permeate and pass through. MOPs have been used to encapsulate a number of guests through various host-guest interactions (e.g. electrostatic interactions, hydrogen bonding, and steric interactions). MOPs are biomimetic materials that have potential for biomedical and biochemical applications. In order for the cage to work effectively and have biomedical relevance, it has to be chemically stable, biocompatible, and able to operate in aqueous media. Macromolecular cages in general can be used for a variety of applications (e.g. nanoencapsulation, biosensing, drug delivery, regulation of nanoparticle synthesis, and catalysis).
Cage Shaped Polymers
There is also a class of macromolecular cages that are synthetically formed through covalent bonding as opposed to self-assembly. Through the covalent-bond-forming strategy the cage molecules can be synthesized methodically with customizable functionality and regulated cavity size. Cage-shaped polymers are macromolecular analogues of molecular cages such as cryptands. A cage molecule of this type can be tuned by the degree of polymerization. The polymers typically used to make polymer-based macromolecular cages are star-shaped polymers or nonlinear polymer precursors. The molecular size of the polymeric macromolecular cage is controlled by the molecular weight of the star-shaped or branched polymer. The macromolecular cages made from non-linear polymers are designed to have molecular recognition, respond to external stimuli and self-assemble into higher-order structures.
Fullerenes
Fullerenes are a class of carbon allotropes that were first discovered in 1985 and are also an example of macromolecular cages. In buckminsterfullerene (C60), the 60 carbon atoms are arranged in a cage-like structure whose framework resembles a soccer ball; the molecule has icosahedral symmetry. C60 has versatile applications due to its macromolecular cage structure; for example, it can be used for water purification, catalysis, bio-pharmaceuticals, and drug delivery, and can serve as a carrier of radionuclides for MRI.
Macromolecular Cage Architecture in Biology
There are many examples of highly symmetrical macromolecular cage motifs known as protein cages in biological systems. The term protein cage delineates a diverse range of protein structures that are formed by the self-assembly of protein subunits into hollow macromolecular nanoparticles. These protein cages are nanoparticles that have one or more cavities present in their structure. The size of the cavity contributes to the size of the particle that the cavity can enclose, for example inorganic nanoparticles, nucleic acids, and even other proteins. The interior or chamber portion of the protein cage is usually accessible through a pore which is located in between protein subunits. The RNA exosome has nuclease active sites that are present in a cavity where 3' RNA degradation takes place; access to this cavity is controlled by a pore and this serves to prevent uncontrollable RNA decay. Some protein cages are dynamic structures that assemble and disassemble in response to external stimuli. Other examples of protein cages are clathrin cages, viral envelopes, chaperonins, and the iron storage protein ferritin.
Synthetic Strategies to form Macromolecular Cages
There are various methods used to form polymeric macromolecular cages. One synthetic method uses ring opening and multiple click chemistry in the first step to form trefoil- and quatrefoil-shaped polymers, which can then be topologically converted into cages using hydrogenolysis. The initiator in this synthesis is azido- and hydroxy-functionalized p-xylene and the monomer is butylene oxide. The ring-opening polymerization and simultaneous click cyclizations of butylene oxide with the initiator are catalyzed by t-Bu-P4. This synthetic strategy was used to form cage-shaped polybutylene oxides; cage-shaped block copolymers are also formed using a similar method. Another synthetic strategy utilizes atom transfer radical polymerization and click chemistry methods to form figure-eight and cage-shaped polystyrene; in this case the precursor is nonlinear polystyrene. A further synthetic strategy employs intramolecular ring-opening metathesis oligomerization of a star polymer, a reaction catalyzed by dilute Grubbs' third-generation catalyst.
Covalent Organic Frameworks (COFs) have also been used to form cage architectures and in one such example Schiff base cyclization was used to form the macromolecular cage molecule. In this synthesis 1,3,5-triformylbenzene and (R,R)-(1,2)-diphenylethylenediamine undergo cycloimination in dichloromethane with trifluoroacetic acid as a catalyst to form a COF cage molecule. Macrocyclizations have also been employed to form peptoid based macromolecular cages, the specific methodology utilizes a one pot synthesis to form steroid-aryl hybrid cages using two- and three-fold Ugi type macrocyclization reactions.
Genetically engineered macromolecular cages made from biomolecules
Macromolecular cages can also be formed synthetically using biomolecules. Protein cages can be genetically engineered, and the outside of the cage can be tailored with synthetic polymers, which is known as protein-polymer conjugation. Preformed polymer chains can be attached to the surface of the protein using chemical linkers. Polymerization can also occur from the protein surface, and the polymer can also be bound to the surface of protein cages via electrostatic interactions. The purpose of this modification is to make synthetic protein cages more biocompatible; this post synthetic modification makes the protein cage less susceptible to an immune response and stabilizes the cage from degradation from proteases. Virus-like protein (VLP) cages have also been synthesized and recombinant DNA technology is used to form non-native virus-like proteins. The first reported case of the formation of non-native VLP constructs into a capsid-like structure utilized a functionalized gold core for nucleation. The self-assembly of the VLP was initiated by the electrostatic interaction of the functionalized gold nanoparticles which is similar to the interaction of a native virus with its nucleic acid component. These viral protein cages have potential applications in biosensing and medical imaging. DNA origami is another strategy to form macromolecular cages or containers. In one case, a 3D macromolecular cage with icosahedral symmetry (resembling viral capsids) was formed based on the synthetic strategy in 2D origami. The structure had an inside volume or hollow cavity encased by triangular faces, similar to a pyramid. This close-faced cage was designed to potentially encapsulate other materials such as proteins and metal nanoparticles.
References
Macromolecules | Macromolecular cages | [
"Physics",
"Chemistry"
] | 1,764 | [
"Molecules",
"Macromolecules",
"Matter"
] |
38,413,368 | https://en.wikipedia.org/wiki/WELMEC | WELMEC is a body set up to promote European cooperation in the field of legal metrology. WELMEC members are drawn from the national authorities responsible for legal metrology in European Union (EU) and European Free Trade Association (EFTA) member states.
WELMEC state their mission as being "to develop and maintain mutual acceptance among its members and to maintain effective cooperation to achieve a harmonised and consistent approach to society's needs for legal metrology and for the benefit of all stakeholders including consumers and businesses."
WELMEC was established in 1990, at a meeting in Bern, Switzerland, and was originally the acronym for the "Western European Legal Metrology Cooperation". WELMEC has 30 members and 7 associate members. Today, although the name is still WELMEC, as the European Union extended its membership outside Western Europe, so did WELMEC, the organisation's membership encompassing EU member states, EFTA members and aspiring EU members: one of the aims of WELMEC being the provision of assistance to aspiring EU members in aligning their legal metrology process with those of the EU.
As of 2013, WELMEC's principal activities centered on the operation of the EU Nonautomatic Weighing Instruments Directive (NAWI – EU directive 2009/23/EC) and the implementation of the EU Measuring Instruments Directive (MID – EU directive 2004/22/EC). The organisation's working parties, which map onto various aspects of these two directives, are:
WG 2 Directive Implementation (2009/23/EC)
WG 5 Metrological supervision
WG 6 Prepackages
WG 7 Software
WG 8 Measuring Instruments Directive
WG 10 Measuring equipment for liquids other than water
WG 11 Gas and Electricity Meters
WG 13 Water and Thermal Energy Meters
See also
EURAMET, the European Association of National Metrology Institutes
International Organization of Legal Metrology
References
External links
Measurement
Standards organizations | WELMEC | [
"Physics",
"Mathematics"
] | 393 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
38,420,593 | https://en.wikipedia.org/wiki/F%2A%20%28programming%20language%29 | F* (pronounced F star) is a high-level, multi-paradigm, functional and object-oriented programming language inspired by the languages ML, Caml, and OCaml, and intended for program verification. It is a joint project of Microsoft Research, and the French Institute for Research in Computer Science and Automation (Inria). Its type system includes dependent types, monadic effects, and refinement types. This allows expressing precise specifications for programs, including functional correctness and security properties. The F* type-checker aims to prove that programs meet their specifications using a combination of satisfiability modulo theories (SMT) solving and manual proofs. For execution, programs written in F* can be translated to OCaml, F#, C, WebAssembly (via KaRaMeL tool), or assembly language (via Vale toolchain). Prior F* versions could also be translated to JavaScript.
It was introduced in 2011 and is under active development on GitHub.
History
Versions
Until version 2022.03.24, F* was written entirely in a common subset of F* and F#, and supported bootstrapping in both OCaml and F#. F# support was dropped starting in version 2022.04.02.
Overview
Operators
F* supports common arithmetic operators such as +, -, *, and /. Also, F* supports relational operators like <, <=, ==, !=, >, and >=.
Data types
Common primitive data types in F* are bool, int, float, char, and unit.
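Beyond these primitives, a distinctive feature of F*'s type system is refinement types: for instance, `nat` is defined in the standard prelude as `x:int{x >= 0}`, and the type-checker discharges such constraints statically with an SMT solver. As a rough analogy only (F* itself needs no runtime checks), the following Python sketch mimics a refinement with a runtime assertion; the helper names `refine`, `nat`, and `abs_nat` are invented for this illustration and are not F* API:

```python
def refine(value, predicate, name):
    """Runtime analogue of an F* refinement type: reject values that fail
    the predicate. F* proves such conditions at type-checking time via SMT;
    here the check only fires at run time."""
    if not predicate(value):
        raise TypeError(f"value {value!r} does not satisfy refinement {name}")
    return value

def nat(x):
    # Analogue of F*'s `nat`, i.e. `x:int{x >= 0}`
    return refine(x, lambda v: isinstance(v, int) and v >= 0, "nat")

def abs_nat(x: int) -> int:
    # An F* signature `val abs : int -> nat` promises a non-negative result;
    # here the promise is enforced dynamically instead.
    return nat(x if x >= 0 else -x)
```

In F* itself, a function typed `int -> nat` whose body could return a negative value would simply be rejected at compile time.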
References
Sources
Swamy, Nikhil; Martínez, Guido; Rastogi, Aseem (2024). Proof-Oriented Programming in F*.
External links
F* tutorial
High-level programming languages
Functional languages
OCaml programming language family
.NET programming languages
Microsoft programming languages
Microsoft Research
Microsoft free software
Dependently typed languages
Automated theorem proving
Programming languages created in 2011
Proof assistants
2011 software
Cross-platform free software
Software using the Apache license
Statically typed programming languages | F* (programming language) | [
"Mathematics"
] | 424 | [
"Mathematical logic",
"Computational mathematics",
"Automated theorem proving"
] |
57,440,610 | https://en.wikipedia.org/wiki/Photovoltaic%20system%20performance | Photovoltaic system performance is a function of the climatic conditions, the equipment used and the system configuration. PV performance can be measured as the ratio of actual solar PV system output to expected values, a measurement essential for the proper operation and maintenance of a solar PV facility. The primary energy input is the global light irradiance in the plane of the solar arrays, and this in turn is a combination of the direct and the diffuse radiation.
The performance is measured by PV monitoring systems, which include a data logging device and often also a weather measurement device (an on-site device or an independent weather data source). Photovoltaic performance monitoring systems serve several purposes - they are used to track trends in a single photovoltaic (PV) system, to identify faults in or damage to solar panels and inverters, to compare the performance of a system to design specifications, or to compare PV systems at different locations. This range of applications requires various sensors and monitoring systems, adapted to the intended purpose. Specifically, there is a need for both electronic monitoring sensors and independent weather sensing (irradiance, temperature and more) in order to normalize PV facility output expectations. Irradiance sensing is very important for the PV industry and can be classified into two main categories - on-site pyranometers and satellite remote sensing; when on-site pyranometers are not available, regional weather stations are also sometimes utilized, though at lower data quality. A third option, sensorless measurement powered by the Industrial IoT, has recently emerged.
Sensors and photovoltaic monitoring systems are standardized in IEC 61724-1 and classified into three levels of accuracy, denoted by the letters “A”, “B” or “C”, or by the labels “High accuracy”, “Medium accuracy” and “Basic accuracy”. A parameter called the 'performance ratio' has been developed to evaluate the total value of PV system losses.
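Under its common definition, the performance ratio is the final yield (energy delivered per kW of rated capacity) divided by the reference yield (plane-of-array insolation normalized by the STC irradiance of 1 kW/m²). The following Python sketch applies this definition to hypothetical numbers:

```python
def performance_ratio(energy_kwh, rated_power_kw, insolation_kwh_m2, g_stc_kw_m2=1.0):
    """Performance ratio PR = Y_f / Y_r, where
    Y_f = final yield: energy delivered per kW of rated capacity (kWh/kW), and
    Y_r = reference yield: plane-of-array insolation divided by the
          STC irradiance of 1 kW/m^2 (equivalent peak-sun hours)."""
    y_f = energy_kwh / rated_power_kw
    y_r = insolation_kwh_m2 / g_stc_kw_m2
    return y_f / y_r

# Hypothetical month: a 10 kW array delivers 1,230 kWh under
# 150 kWh/m^2 of plane-of-array insolation
pr = performance_ratio(1230.0, 10.0, 150.0)  # → 0.82
```

A PR of 0.82 would mean that 18% of the theoretically available energy was lost to temperature, soiling, wiring, inverter conversion and other system losses.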
Principles
Photovoltaic system performance is generally dependent on incident irradiance in the plane of the solar panels, the temperature of the solar cells, and the spectrum of the incident light. Furthermore, it is dependent upon the inverter, which typically sets the operating voltage of the system. The voltage and current output of the system changes as lighting, temperature and load conditions change, so there is no specific voltage, current, or wattage at which the system always operates. Hence, system performance varies depending on its architecture (direction and tilt of modules), geographic location and the time of day, weather conditions (amount of solar insolation, cloud cover, temperature), and local disturbances such as shading, soiling, state of charge, and system component availability.
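As a sketch of the irradiance and temperature dependence described above, a deliberately simplified expected-output model can scale rated power linearly with plane-of-array irradiance and derate it by a module power temperature coefficient. Both the model form and the coefficient below (a typical −0.4 %/°C for crystalline silicon) are illustrative assumptions, not a standard:

```python
def expected_power_kw(rated_kw, irradiance_w_m2, cell_temp_c,
                      gamma_per_c=-0.004, g_stc=1000.0, t_stc=25.0):
    """Simplified expected output: linear in plane-of-array irradiance,
    derated by the module power temperature coefficient relative to the
    STC cell temperature of 25 degC."""
    return rated_kw * (irradiance_w_m2 / g_stc) * (1.0 + gamma_per_c * (cell_temp_c - t_stc))

# A 10 kW array at 800 W/m^2 and a 50 degC cell temperature:
# 10 * 0.8 * (1 - 0.004 * 25) = 7.2 kW expected
p = expected_power_kw(10.0, 800.0, 50.0)  # → 7.2
```

Real performance models add further terms (spectral response, soiling, inverter efficiency), but this captures the two dominant effects named above.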
Performance by system type
Solar PV parks
Solar parks of industrial and utility scale may reach high performance figures. In modern solar parks the performance ratio should typically be in excess of 80%. Many solar PV parks utilize advanced performance monitoring solutions, which are supplied by a variety of technology providers.
Distributed solar PV
In rooftop solar systems it typically takes longer to identify a malfunction and send a technician, due to the lower availability of adequate photovoltaic performance monitoring tools and the higher cost of human labor. As a result, rooftop solar PV systems typically suffer from lower-quality operation and maintenance and, consequently, lower levels of system availability and energy output.
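The basic comparison behind automated fault alerts (measured output against a modelled expectation) can be sketched as follows; the 85% threshold is an arbitrary illustration, not an industry figure:

```python
def flag_underperformance(actual_kwh, expected_kwh, threshold=0.85):
    """Return (flag, ratio): flag is True when measured energy falls below
    the given fraction of the modelled expectation for the same period.
    A zero expectation (e.g. night-time) is treated as full underperformance
    here purely to keep the sketch total."""
    ratio = actual_kwh / expected_kwh if expected_kwh > 0 else 0.0
    return ratio < threshold, ratio

# A system producing 70 kWh against an expected 100 kWh is flagged:
flag, ratio = flag_underperformance(70.0, 100.0)  # → (True, 0.7)
```

In practice the expectation comes from irradiance data and a performance model, and alert logic also filters out transient dips such as passing clouds.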
Off-grid solar PV
Most off-grid solar PV facilities lack any performance monitoring tools, due to a number of reasons - including monitoring equipment costs, cloud connection availability and O&M availability.
Performance measurement and monitoring
A number of technical solutions exist to provide performance measurement and monitoring for solar photovoltaic installations, differing according to data quality, compatibility with irradiance sensors as well as pricing.
Weather data acquisition
Weather data acquisition generally relies on physical weather sensors and on remote sensing with satellites.
Energy generation data availability and quality
An essential part of PV system performance evaluation is the availability and the quality of energy generation data. Access to the Internet has allowed a further improvement in energy monitoring and communication.
Typically, PV plant data is transmitted via a data logger to a central monitoring portal. Data transmission depends on local cloud connectivity: it is highly available in OECD countries but more limited in developing countries. According to Samuel Zhang, vice president of Huawei Smart PV, over 90% of global PV plants will be fully digitalized by 2025.
Performance monitoring
In general, monitoring solutions can be classified to inverter manufacturer-provided logger and monitoring software solutions, independent data-logger solutions with custom software and finally agnostic monitoring software-only solutions compatible with different inverters and data-loggers.
Monitoring solutions by inverter manufacturers
Dedicated performance monitoring systems are available from a number of vendors. For solar PV systems that use microinverters (panel-level DC to AC conversion), module power data is automatically provided. Some systems allow setting performance alerts that trigger phone/email/text warnings when limits are reached. These solutions provide data for the system owner and/or the installer. Installers are able to remotely monitor multiple installations, and see at-a-glance the status of their entire installed base. All the major inverter manufacturers provide a data acquisition unit - whether a data logger or a direct means of communication with the portal.
These solutions have the advantage of providing the maximum amount of information from the inverter, supplying it on a local display or transmitting it over the internet, in particular alerts from the inverter itself (temperature overload, loss of network connection, etc.).
Some of those monitoring solutions are:
Fronius accessible via Solar.web portal;
SMA's Webbox/Inverter Manager/Cluster-Controller loggers accessible via Sunnyportal and EnnexOS portals;
SolarEdge accessible via SolarEdge Monitoring and MySolarEdge (only application) portals;
Sungrow accessible via the iSolarCloud portal;
Independent data logging solutions connected to inverters
Generic data logging solutions connected to inverters overcome the major drawback of inverter-specific manufacturer solutions by being compatible with several different manufacturers. These data acquisition units connect to the serial links of the inverters, complying with each manufacturer's protocol. Generic data logging solutions are generally more affordable than inverter manufacturer solutions and allow aggregation of solar PV system fleets with varying inverter manufacturers.
Some of those monitoring solutions are:
AlsoEnergy loggers accessible via the PowerTrack portal;
Solar-Log loggers accessible via the WEB Enerest™ 4 portal;
Meteocontrol loggers accessible via the VCOM Cloud portal;
Solar Analytics' "Smart Solar logger"s accessible via the Solar Analytics portal;
Independent monitoring solutions
The last category is the most recent segment in the solar photovoltaic monitoring domain. These are software-based aggregation portals, able to aggregate information from inverter-specific portals and data loggers as well as from independent data loggers. Such solutions are becoming more widespread as inverter-to-cloud communication increasingly bypasses data loggers in favour of direct data connections.
Omnidian residential solar performance insurance partner Omnidian;
Solytic generic solar monitoring Solytic portal;
Sunreport device-agnostic cloud solar monitoring Sunreport platform;
Weather data sources
On-site irradiance sensors
On-site irradiance measurements are an important part of PV performance monitoring systems. Irradiance can be measured in the same orientation as the PV panels, so-called plane of array (POA) measurements, or horizontally, so-called global horizontal irradiance (GHI) measurements. Typical sensors used for such irradiance measurements include thermopile pyranometers, PV reference devices and photodiode sensors. To conform to a specific accuracy class, each sensor type must meet a certain set of specifications. These specifications are listed in the table below.
If an irradiance sensor is placed in POA, it must be placed at the same tilt angle as the PV module, either by attaching it to the module itself or with an extra platform or arm at the same tilt level. Checking if the sensor is properly aligned can be done with portable tilt sensors or with an integrated tilt sensor.
Sensor maintenance
The standard also specifies a required maintenance schedule per accuracy class. Class C sensors require maintenance per manufacturer's requirement. Class B sensors need to be re-calibrated every 2 years and require a heater to prevent precipitation or condensation. Class A sensors need to be re-calibrated once per year, require cleaning once per week, require a heater and require ventilation (for thermopile pyranometers).
Satellite remote sensing of irradiance
PV performance can also be estimated by satellite remote sensing. These measurements are indirect because the satellites measure the solar radiance reflected off the Earth's surface. In addition, the radiance is filtered by the spectral absorption of Earth's atmosphere. This method is typically used in non-instrumented class B and class C monitoring systems to avoid the costs and maintenance of on-site sensors. If the satellite-derived data is not corrected for local conditions, an error in radiance of up to 10% is possible.
Equipment and performance standards
Sensors and monitoring systems are standardized in IEC 61724-1 and classified into three levels of accuracy, denoted by the letters “A”, “B” or “C”, or by the labels “High accuracy”, “Medium accuracy” and “Basic accuracy”.
In California, solar PV performance monitoring has been regulated by the State government. As of 2017, the governmental agency California Solar Initiative (CSI) provided a Performance Monitoring & Reporting Service certificate to eligible companies active in the solar segment and acting in line with CSI requirements.
A parameter called the 'performance ratio' has been developed to evaluate the total value of PV system losses. The performance ratio gives a measure of the output AC power delivered as a proportion of the total DC power which the solar modules should be able to deliver under the ambient climatic conditions.
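The performance ratio can be computed directly from metered energy and irradiation data. The sketch below uses illustrative plant numbers (the nameplate power, irradiation, and metered energy are assumptions, not values from the article):

```python
# Performance ratio (PR): measured AC energy as a fraction of the energy the
# array would deliver at its rated DC power under the measured irradiation.

G_STC = 1.0          # reference irradiance at Standard Test Conditions, kW/m^2
p_rated_kw = 100.0   # array nameplate DC power, kW (illustrative)

h_poa_kwh_m2 = 150.0   # plane-of-array irradiation over the period, kWh/m^2
e_ac_kwh = 12_000.0    # metered AC energy over the same period, kWh

reference_yield = h_poa_kwh_m2 / G_STC          # equivalent full-sun hours
expected_dc_kwh = p_rated_kw * reference_yield  # ideal, lossless DC energy
pr = e_ac_kwh / expected_dc_kwh

print(f"PR = {pr:.2f}")   # 12000 / 15000 = 0.80
```

A PR around 0.75–0.85 is typical for well-performing plants; a sustained drop flags soiling, shading, or equipment losses.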
See also
Photovoltaics
Pyranometer
Remote sensing
Atmosphere of Earth
Absorption (electromagnetic radiation)
References
External links
NREL - Analytics of PV System Energy Performance Evaluation Method
Photovoltaic Geographical Information System (PVGIS) provides information on solar radiation and photovoltaic system performance for any location in the world, except the North and South Poles
Photovoltaics
Maintenance

| Photovoltaic system performance | ["Engineering"] | 2,146 | ["Maintenance", "Mechanical engineering"] |

57,449,266 | https://en.wikipedia.org/wiki/Thomas%20Edwin%20Nevin

Thomas Edwin Nevin (4 October 1906 in Bristol, Somerset – 16 July 1986 in Dublin) was an Irish physicist and academic who had a distinguished career in the field of molecular spectroscopy. He was Professor of Experimental Physics and Dean of the Faculty of Science in University College Dublin from 1963 to 1979.
Personal life
Thomas E. Nevin was born in Bristol, Somerset on 4 October 1906. He was the eldest of seven children born to Thomas Nevin of Cashel, County Tipperary, and Alice Nevin (née Higginson) of Herefordshire. Áine Ní Chnáimhín (1908–2001) who wrote a biography of Pádraic Ó Conaire was Nevin's sister; historian and trade unionist Donal Nevin was his brother.
In January 1936 he married Monica T. M. Morrissey, a UCD graduate in Celtic studies who went on to serve on the Council of the Royal Society of Antiquaries of Ireland and did research on antiquarian matters for Irish History Online. The couple had four daughters together.
Education and career
Born in 1906 in Bristol, England, the oldest of seven children of an Irish father and English mother, he spent his youth in Ireland, where the family soon settled. From 1919 to 1924 he attended the CBS Sexton Street secondary school in Limerick City. The school had no science program, but Nevin was interested in physics and managed to learn the subject thoroughly on his own. In 1924 he won a scholarship to University College Dublin, where he excelled in mathematics and physics, winning first-class honours every year and earning an honours B.Sc. in Experimental Physics and Mathematics in 1927. He earned his M.Sc. under J. J. Nolan in 1928 for a treatise on "The Effect of Water Vapour on the Diffusion Coefficients and Mobilities of Ions in the Air". That year he was also awarded an 1851 Research Fellowship, which enabled him to study spectroscopy at Imperial College, London (1929–1931). In 1931, he returned to Dublin to continue his research and was appointed an assistant in the department of experimental physics.
In 1940, he was awarded a D.Sc. degree at National University of Ireland for previously published work, and in 1942, he was awarded an honorary doctorate at Queen's University Belfast. Throughout the 1930s and 1940s he continued his research in molecular spectroscopy, often working with research groups in fundamental particle and cosmic ray physics. He was a capable administrator at UCD, serving on the university's finance and buildings committees, as well as the academic council and governing body, and he initiated many improvements to the physics department. When J. J. Nolan died in 1952, Nevin succeeded him as Professor of Experimental Physics, a position he held until his retirement in 1979.
Nevin was a strong advocate for expansion of the UCD campus, which for half a century was based at Earlsfort Terrace in the city. As a key member of UCD's academic council and a member of its buildings committee (1957–76), he was instrumental in moving the science faculty to the new Belfield campus in the southern suburbs in 1964.
He was a key figure in the formation of the Irish branch of Institute of Physics.
At the Dublin Institute for Advanced Studies he was a member of the governing boards of the school of theoretical physics (1943-1961) and the school of cosmic physics (1948-1956).
On 16 March 1942 he was elected a member of the Royal Irish Academy and served on its council from 1944 to 1968.
Thomas E. Nevin Medal
The Thomas E. Nevin Medal and Prize is given annually, in honour of Nevin, to the graduate who passes with first-class honours and is placed first in the BSc (Honours) degree examination in Physics at University College Dublin.
Papers
1985 Jeremiah Hogan and University College Dublin by Thomas E. Nevin, in Studies, An Irish Quarterly Review published by Irish Province of the Society of Jesus, Vol 74, No 295, pp. 325–335
1931 The spectrum of barium fluoride in the extreme red and near infra-red by Thomas E Nevin, Proceedings of the Physical Society, Vol 43, No 5.
1930 The Effect of Water Vapour on the Diffusion Coefficients and Mobilities of Ions in the Air by J. J. Nolan and T. E. Nevin, Proceedings of the Royal Society, London, 127, 155–174.
References
External links
"Physicists of Ireland, Passion and Precision", by Mark McCartney (Editor), Andrew Whitaker (Editor), 1 edition (15 September 2003) CRC Press.
20th-century Irish physicists
Spectroscopists
Academics of University College Dublin
Alumni of the National University of Ireland
Members of the Royal Irish Academy
1906 births
1986 deaths
Alumni of University College Dublin

| Thomas Edwin Nevin | ["Physics", "Chemistry"] | 974 | ["Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy"] |

50,406,093 | https://en.wikipedia.org/wiki/Grassmann%20bundle

In algebraic geometry, the Grassmann d-plane bundle of a vector bundle E on an algebraic scheme X is a scheme over X:

p : Gd(E) → X
such that the fiber p−1(x) is the Grassmannian Gd(Ex) of the d-dimensional vector subspaces of the fiber Ex of E. For example, G1(E) = P(E) is the projective bundle of E. In the other direction, a Grassmann bundle is a special case of a (partial) flag bundle. Concretely, the Grassmann bundle can be constructed as a Quot scheme.
Like the usual Grassmannian, the Grassmann bundle comes with natural vector bundles on it; namely, there are a universal or tautological subbundle S and a universal quotient bundle Q that fit into the exact sequence

0 → S → p*E → Q → 0.
Specifically, if V is in the fiber p−1(x), then the fiber of S over V is V itself; thus, S has rank r = d = dim(V) and det(S) = ∧d S is the determinant line bundle. Now, by the universal property of a projective bundle, the injection ∧d S → p*(∧d E) corresponds to the morphism over X:

Gd(E) → P(∧d E),

which is nothing but a family of Plücker embeddings.
The relative tangent bundle TGd(E)/X of Gd(E) is given by

TGd(E)/X = Hom(S, Q) = S∨ ⊗ Q,

which morally is given by the second fundamental form. In the case d = 1, it is given as follows: if V is a finite-dimensional vector space, then for each line L in V passing through the origin (a point of P(V)), there is the natural identification (see Chern class#Complex projective space for example):

TL P(V) = Hom(L, V/L),

and the above is the family version of this identification. (The general case is a generalization of this.)
In the case d = 1, the earlier exact sequence tensored with the dual of S = O(−1) gives:

0 → O → p*E ⊗ O(1) → TP(E)/X → 0,

which is the relative version of the Euler sequence.
References
Algebraic geometry

| Grassmann bundle | ["Mathematics"] | 376 | ["Fields of abstract algebra", "Algebraic geometry"] |

50,406,738 | https://en.wikipedia.org/wiki/Cellular%20dewetting

Cellular dewetting refers to the process of nucleation and enlargement of transendothelial cell macroaperture (TEM) tunnels in endothelial cells (Figure 1). This phenomenon is analogous to the nucleation and growth of dry patches in viscous liquids spreading on a non-wettable substrate (Figure 2). Cellular dewetting is triggered by several protein toxins from pathogenic bacteria, notably the EDIN-like factors from Staphylococcus aureus and from Clostridium botulinum, as well as edema toxin from Bacillus anthracis. TEMs form in response to the rupture of cytoskeleton physical connections through the cytoplasm due to inhibition of the RhoA/ROCK pathway or to induction of the flux of cyclic-AMP (cAMP) broad signaling molecule.
Physics behind cellular dewetting
The phenomenon of cellular dewetting can be interpreted by physical modeling (Figure 2). The driving force responsible for the spontaneous formation of TEM tunnels and their opening is the membrane tension that results from the spreading of cells due to actomyosin relaxation. Opposite to liquid dewetting, TEMs reach a maximum diameter, at which the driving force is balanced by a resisting force that develops along TEM edges (Figure 2). This resisting force is referred to as line tension and is uncharacterized at the molecular level.
Physical parameters
Driving forces pull on a tunnel of radius R, as depicted in Figure 2. Here, pulling is due to the tensioning of the cell membrane (σ) that is partly counteracted by a line tension around the tunnel (T). In these conditions, the net driving force (FD) consists of two contributions:

FD = 2σ − T/R.

Dewetting proceeds if FD > 0.
Membrane tension (σ) depends on the tunnel radius R. A tunnel increase in size relaxes the membrane, inducing a decrease in membrane tension, as described by Helfrich’s law.
Line tension (T) corresponds to the resisting force along the edge of the tunnel that opposes membrane tension and limits dewetting. This line tension can have physical and molecular components.
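A toy numerical sketch of this force balance can illustrate why TEM tunnels nucleate above a critical radius and stop at a maximum one. It assumes a simple exponential, Helfrich-like tension relaxation and illustrative values for σ, T and the relaxation scale; none of these numbers are measured values:

```python
import math

# Net driving force on a TEM tunnel of radius R: F_D(R) = 2*sigma(R) - T/R.
# sigma(R) decays as the membrane relaxes while the tunnel grows (Helfrich-like).

sigma0 = 2.5e-5   # initial membrane tension, N/m (assumed)
T_line = 1.0e-11  # line tension at the tunnel edge, N (assumed)
R_c = 2.0e-6      # relaxation scale of the tension decay, m (assumed)

def sigma(R):
    return sigma0 * math.exp(-(R / R_c) ** 2)

def driving_force(R):
    return 2.0 * sigma(R) - T_line / R

# Scan radii from 0.1 to 9.9 µm: small tunnels reseal (F_D < 0), tunnels past
# a nucleation radius open (F_D > 0), and tension relaxation eventually
# halts growth again (F_D < 0), setting the maximum diameter.
radii = [r * 1e-7 for r in range(1, 100)]
signs = [driving_force(R) > 0 for R in radii]
open_radii = [R for R, s in zip(radii, signs) if s]
print(f"tunnel opens between ~{min(open_radii)*1e6:.1f} "
      f"and ~{max(open_radii)*1e6:.1f} um")
```

With constant tension the force balance would give only the nucleation radius R = T/(2σ); it is the decay of σ with R that produces a finite maximum tunnel size, as described above.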
References
Cell biology
Biophysics

| Cellular dewetting | ["Physics", "Biology"] | 454 | ["Cell biology", "Applied and interdisciplinary physics", "Biophysics"] |

50,407,105 | https://en.wikipedia.org/wiki/Pyramidal%20inversion

In chemistry, pyramidal inversion (also umbrella inversion) is a fluxional process in pyramidal molecules, such as ammonia (NH3), in which the molecule "turns inside out". It is a rapid oscillation of the atom and substituents, the molecule or ion passing through a planar transition state. For a compound that would otherwise be chiral due to a stereocenter, pyramidal inversion allows its enantiomers to racemize. The general phenomenon of pyramidal inversion applies to many types of molecules, including carbanions, amines, phosphines, arsines, stibines, and sulfoxides.
Energy barrier
The identity of the inverting atom has a dominating influence on the barrier. Inversion of ammonia is rapid at room temperature, inverting 30 billion times per second. Three factors contribute to the rapidity of the inversion: a low energy barrier (24.2 kJ/mol; 5.8 kcal/mol), a narrow barrier width (distance between geometries), and the low mass of hydrogen atoms, which combine to give a further 80-fold rate enhancement due to quantum tunnelling. In contrast, phosphine (PH3) inverts very slowly at room temperature (energy barrier: 132 kJ/mol). Consequently, amines of the type RR′R"N usually are not optically stable (enantiomers racemize rapidly at room temperature), but P-chiral phosphines are. Appropriately substituted sulfonium salts, sulfoxides, arsines, etc. are also optically stable near room temperature. Steric effects can also influence the barrier.
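As a rough illustration of how strongly the barrier height dominates, a classical Eyring-equation estimate can be sketched from the barriers quoted above. This deliberately ignores the quantum tunnelling contribution (which the text notes adds a further ~80-fold for ammonia), so it is an order-of-magnitude sketch, not the experimental rate:

```python
import math

# Classical transition-state (Eyring) rate: k = (k_B*T/h) * exp(-Ea/(R*T)).
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)
T = 298.15           # room temperature, K

def eyring_rate(barrier_j_mol: float, temp: float = T) -> float:
    """Classical estimate of inversions per second over the given barrier."""
    return (KB * temp / H) * math.exp(-barrier_j_mol / (R * temp))

k_nh3 = eyring_rate(24_200.0)    # ammonia, 24.2 kJ/mol: order 1e8 /s
k_ph3 = eyring_rate(132_000.0)   # phosphine, 132 kJ/mol: effectively frozen

print(f"NH3: {k_nh3:.2e} /s, PH3: {k_ph3:.2e} /s")
```

The ~15 orders of magnitude between the two rates shows why amines racemize at room temperature while P-chiral phosphines are optically stable, and the classical NH3 estimate times the ~80-fold tunnelling factor is consistent with the 30 billion inversions per second quoted above.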
Nitrogen inversion
Pyramidal inversion in nitrogen and amines is known as nitrogen inversion. It is a rapid oscillation of the nitrogen atom and substituents, the nitrogen "moving" through the plane formed by the substituents (although the substituents also move - in the other direction); the molecule passing through a planar transition state. For a compound that would otherwise be chiral due to a nitrogen stereocenter, nitrogen inversion provides a low energy pathway for racemization, usually making chiral resolution impossible.
Quantum effects
Ammonia exhibits a quantum tunnelling due to a narrow tunneling barrier, and not due to thermal excitation. Superposition of two states leads to energy level splitting, which is used in ammonia masers.
Examples
The inversion of ammonia was first detected by microwave spectroscopy in 1934.
In one study the inversion in an aziridine was slowed by a factor of 50 by placing the nitrogen atom in the vicinity of a phenolic alcohol group compared to the oxidized hydroquinone.
The system interconverts by oxidation by oxygen and reduction by sodium dithionite.
Exceptions
Conformational strain and structural rigidity can effectively prevent the inversion of amine groups. Tröger's base analogs (including the Hünlich's base) are examples of compounds whose nitrogen atoms are chirally stable stereocenters and therefore have significant optical activity.
References
Physical chemistry
Stereochemistry
Organic chemistry

| Pyramidal inversion | ["Physics", "Chemistry"] | 649 | ["Applied and interdisciplinary physics", "Stereochemistry", "Space", "nan", "Spacetime", "Physical chemistry"] |

50,408,821 | https://en.wikipedia.org/wiki/C12orf42

Chromosome 12 Open Reading Frame 42 (C12orf42) is a protein-encoding gene in Homo sapiens.
Gene
Locus
The genomic location for this gene is as follows: it starts at 103,237,591 bp and ends at 103,496,010 bp. The cytogenetic location for C12orf42 is 12q23.2. It is located on the negative strand.
mRNA
Fifteen different mRNAs are made by transcription: fourteen alternative splice variants and one unspliced form.
Protein
The protein encoded by this gene is known as uncharacterized protein C12orf42. There are three isoforms of this protein, produced by alternative splicing. The first isoform is the canonical sequence. The second isoform differs from the first in that it does not contain 1-95 aa of the sequence. The third isoform differs from the canonical sequence in two ways:
87-107 aa is the sequence GSHHGQATQKLQGAMVLHLEE instead of VFPERTQNSMACKRLLHTCQY
the entire sequence 108-360 aa doesn't exist in this isoform
Secondary structure
The C12orf42 protein takes on several secondary structures: alpha helices, beta sheets, and random coils. The C12orf42 protein is soluble. Soluble proteins have a hydrophilic exterior and a hydrophobic interior. Proteins with this type of structure are able to float freely inside a cell, due to the liquid composition of the cytosol.
Subcellular location
C12orf42 is an intracellular protein, as indicated by the lack of transmembrane domains or signal peptides. It is predicted to be a nuclear protein, given the nuclear localization signals (NLS) found: PRDRRPQ at 292 aa and a bipartite signal KRLIKVCSSAPPRPTRR at 325 aa.
Post-translation modification
Predicted post-translation modification sites are seen below in the table. Nuclear proteins are known for having phosphorylation, acetylation, sumoylation, and O-GlcNAc as types of modifications:
Phosphorylation affects proteins-protein interaction and the stability of the protein.
Acetylation promotes protein folding and improves stability.
Sumoylation is involved in nuclear-cytosolic transport and DNA repair.
Glycosylation (known as O-GlcNAc while in the nucleus) promotes protein folding and stability.
Expression
Tissue profiles
Microarray data shows expression of the C12orf42 gene in different tissues throughout the human body. There is high expression in the lymph node, spleen, and thymus. There is significant expression in the brain, bladder, epididymis, and the helper T cell. Therefore, there is statistically significant expression of C12orf42 gene throughout the nervous system, immune system, and male reproductive system.
In situ hybridization
The table below shows the areas in the mouse brain where C12orf42 is expressed. The gene name in the mouse is 1700113H08Rik; it is the mouse homolog of human C12orf42. Areas one and two of the brain manage body and skeletal movement. Areas three and four are for sensory functions; area four specializes in perception of smell. Area five functions in emotional learning and memory.
Homology
Paralog
The C12orf42 gene has only one other member in its gene family: the Neuroligin 4, Y-linked gene (NLGN4Y).
Orthologs
C12orf42 orthologs are mostly mammals. One exception that was found is Pelodiscus sinensis, more commonly known as the Chinese softshell turtle.
Conserved domain structure
The domain structure that is most important is DUF4607, it is conserved in the Eutheria clade in the Mammalia class. The order that it is conserved in is as follows: Artiodactyla, Carnivora, Chiroptera, Lagomorpha, Perissodactyla, Primates, Proboscidea, and Rodentia.
Clinical significance
In an experiment, fine-tiling comparative genomic hybridization (FT-CGH) and ligation-mediated PCR (LM-PCR) were combined. This resulted in the finding of a chromosomal translocation t(12;14)(q23;q11.2) in T-lymphoblastic lymphoma (T-LBL). The chromosomal translocation occurs during T-receptor delta gene-deleting rearrangement, which is important in T-cell differentiation. This translocation disrupts C12orf42 and it brings the gene ASCL1 closer to the T-cell receptor alpha (TRA) enhancer. As a result, the cross-fused gene encodes vital transcription factors that are found in medullary thyroid cancer and small-cell lung cancer.
References
Genes
Proteins

| C12orf42 | ["Chemistry"] | 1,040 | ["Biomolecules by chemical classification", "Proteins", "Molecular biology"] |

50,410,750 | https://en.wikipedia.org/wiki/Continuous%20monitoring%20and%20adaptive%20control%20%28stormwater%20management%29

Continuous monitoring and adaptive control (CMAC) is a category of stormwater best management practice that allows for a wider range of operation of detention and retention ponds. CMAC systems typically consist of a water level sensor, an actuated valve, and an internet connection.
Specific applications of CMAC include flood protection, water quality treatment, water reuse, and channel protection.
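The control logic behind these applications can be sketched as a simple rule set driven by the level sensor and a weather forecast feed. The thresholds, interface, and rules below are hypothetical illustrations of the idea, not taken from any particular CMAC product:

```python
# Toy CMAC controller for a detention pond: map the water-level reading and a
# rain forecast to an actuated-valve opening. All thresholds are assumptions.

def valve_command(level_m: float, rain_forecast_mm: float,
                  max_level_m: float = 2.0,
                  drawdown_level_m: float = 0.5) -> float:
    """Return a valve opening fraction in [0, 1]."""
    if level_m >= max_level_m:
        return 1.0        # flood protection: pond nearly full, release now
    if rain_forecast_mm > 10.0:
        return 0.75       # pre-storm drawdown: free up storage before rain
    if level_m > drawdown_level_m:
        return 0.1        # slow release between storms lets solids settle
    return 0.0            # retain the remaining pool for water reuse

opening = valve_command(level_m=1.2, rain_forecast_mm=25.0)
print(f"valve opening: {opening:.0%}")  # pre-storm drawdown -> 75%
```

A conventional passive pond has a fixed outlet; the point of CMAC is that the same pond, with this kind of rule set evaluated continuously, can serve flood protection, water quality, and reuse goals at different times.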
See also
Urban runoff
References
External links
Chesapeake Bay Urban Stormwater Work Group
Flood control
Environmental engineering
Stormwater management
Water and the environment

| Continuous monitoring and adaptive control (stormwater management) | ["Chemistry", "Engineering", "Environmental_science"] | 101 | ["Water treatment", "Stormwater management", "Chemical engineering", "Water pollution", "Flood control", "Civil engineering", "Environmental engineering"] |

50,412,926 | https://en.wikipedia.org/wiki/Human%20sperm%20competition

Sperm competition is a form of post-copulatory sexual selection whereby male sperm simultaneously physically compete to fertilize a single ovum. Sperm competition occurs between sperm from two or more rival males when they make an attempt to fertilize a female within a sufficiently short period of time. This results primarily as a consequence of polyandrous mating systems, or due to extra-pair copulations of females, which increases the chance of cuckoldry, in which the male mate raises a child that is not genetically related to him. Sperm competition among males has resulted in numerous physiological and psychological adaptations, including the relative size of testes, the size of the sperm midpiece, prudent sperm allocation, and behaviors relating to sexual coercion. However, this is not without consequences: the production of large amounts of sperm is costly, and researchers have therefore predicted that males will produce larger amounts of semen when there is a perceived or known increase in sperm competition risk.
Sperm competition is not exclusive to humans, and has been studied extensively in other primates, as well as throughout much of the animal kingdom. The differing rates of sperm competition among other primates indicates that sperm competition is highest in primates with multi-male breeding systems, and lowest in primates with single-male breeding systems. Compared to other animals, and primates in particular, humans show low-to-intermediate levels of sperm competition, suggesting that humans have a history of little selection pressure for sperm competition.
Physiological adaptations to sperm competition
Physiological evidence, including testis size relative to body weight and the volume of sperm in ejaculations, suggests that humans have experienced a low-to-intermediate level of selection pressure for sperm competition in their evolutionary history. Nevertheless, there is a large body of research that explores the physiological adaptations males do have for sperm competition.
Testis size and body weight
Evidence suggests that, among the great apes, relative testis size is associated with the breeding system of each primate species. In humans, testis size relative to body weight is intermediate between monogamous primates (such as gorillas) and promiscuous primates (such as chimpanzees), indicating an evolutionary history of moderate selection pressures for sperm competition.
Ejaculate volume
The volume of sperm in ejaculates scales proportionately with testis size and, consistent with the intermediate weight of human males' testes, ejaculate volume is also intermediate between primates with high and low levels of sperm competition. Human males, like other animals, exhibit prudent sperm allocation, a physiological response to the high cost of sperm production as it relates to the actual or perceived risk of sperm competition at each insemination. In situations where the risk of sperm competition is higher, males will allocate more energy to producing higher ejaculate volumes. Studies have found that the volume of sperm does vary between ejaculates, and that sperm produced during copulatory ejaculations are of a higher quality (younger, more motile, etc.) than those sperm produced during masturbatory ejaculates or nocturnal emissions. This suggests that, at least within males, there is evidence of allocation of higher quality sperm production for copulatory purposes. Researchers have suggested that males produce more and higher quality sperm after spending time apart from their partners, implying that males are responding to an increased risk of sperm competition, although this view has been challenged in recent years. It is also possible that males may be producing larger volumes of sperm in response to actions from their partners, or it may be that males who produce larger volumes of sperm may be more likely to spend more time away from their partners.
Size of sperm midpiece
The size of the sperm midpiece is determined in part by the volume of mitochondria in the sperm. Sperm midpiece size is tied to sperm competition in that individuals with a larger midpiece will have more mitochondria, and will thus have more highly motile sperm than those with a lower volume of mitochondria. Among humans, as with relative testis size and ejaculate volume, the size of the sperm midpiece is small compared to other primates, and is most similar in size to that of primates with low levels of sperm competition, supporting the theory that humans have had an evolutionary history of intermediate levels of sperm competition.
Penis anatomy
Several features of the anatomy of the human penis are proposed to serve as adaptations to sperm competition, including the length of the penis and the shape of the penile head. By weight, the relative penis size of human males is similar to that of chimpanzees, although the overall length of the human penis is the largest among primates. It has been suggested by some authors that penis size is constrained by the size of the female reproductive tract (which, in turn is likely constrained by the availability of space in the female body), and that longer penises may have an advantage in depositing semen closer to the female cervix. Other studies have suggested that over our evolutionary history, the penis would have been conspicuous without clothing, and may have evolved its increased size due to female preference for longer penises.
The shape of the glans and coronal ridge of the penis may function to displace semen from rival males, although displacement of semen is only observed when the penis is inserted a minimum of 75% of its length into the vagina. After allegations of female infidelity or separation from their partner, both men and women report that men thrust the penis more deeply and more quickly into the vagina at the couple's next copulation.
Psychological adaptations to sperm competition
In addition to physiological adaptations to sperm competition, men also have been shown to have psychological adaptations, including certain copulatory behaviors, behaviors relating to sexual coercion, investment in relationships, sexual arousal, performance of oral sex, and mate choice.
Copulatory behaviors
Human males have several physiological adaptations that have evolved in response to pressures from sperm competition, such as the size and shape of the penis. In addition to the anatomy of male sex organs, men have several evolved copulatory behaviors that are proposed to displace rival male semen. For example, males who are at a higher risk of sperm competition (defined as having female partners with high reproductive value, such as being younger and physically attractive) engaged more frequently in semen-displacing behaviors during sexual intercourse than men who were at a lower risk of sperm competition. These semen-displacing behaviors include deeper and quicker thrusts, increased duration of intercourse, and an increased number of thrusts.
Sexual coercion and relationship investment
Men who are more invested into a relationship have more to lose if their female partner is engaging in extra-pair copulations. This has led to the development of the cuckoldry risk hypothesis, which states that men who are at a higher risk of sperm competition due to female partner infidelity are more likely to sexually coerce their partners through threatening termination of the relationship, making their partners feel obligated to have sex, and other emotional manipulations of their partners, in addition to physically forcing partners to have sex. In forensic cases, it has been found that men who rape their partners experienced cuckoldry risk prior to raping their partners. Additionally, men who spend more time away from their partners are not only more likely to sexually coerce their partners, but they are also more likely to report that their partner is more attractive (as well as reporting that other men find her more attractive), in addition to reporting a greater interest in engaging in intercourse with her. Men who perceive that their female partners spend time with other men also are more likely to report that she is more interested in copulating with him.
Sexual arousal and sexual fantasies
Sperm competition has also been proposed to influence men's sexual fantasies and arousal. Some researchers have found that much pornography contains scenarios with high sperm competition, and it is more common to find pornography depicting one woman with multiple men than it is to find pornography depicting one man with multiple women, although this may be confounded by the fact that it is less expensive to hire male pornographic actors than female actors. Kilgallon and Simmons documented that men produce a higher percentage of motile sperm in their ejaculates after viewing sexually explicit images of two men and one woman (a sperm competition risk) than after viewing sexually explicit images of three women, likely indicating a response to an active risk of sperm competition.
Oral sex
It is unknown whether or not men's willingness and desire to perform oral sex on their female partners is an adaptation. Oral sex is not unique to humans, and it is proposed to serve a number of purposes relating to sperm competition risk. Some researchers have proposed that oral sex may serve to assess the reproductive health of a female partner and her fertility status, to increase her arousal, thereby reducing the likelihood of her having extra-pair copulations, to increase the arousal of the male to increase his semen quality, and thereby increase the likelihood of insemination, or to detect the presence of semen of other males in the vagina.
Mate choice
Sperm competition risk also influences males' choice of female partners. Men prefer as low a sperm competition risk as possible and therefore tend to choose short-term sexual partners who are not in a sexual relationship with other men. Women who are perceived as the most desirable short-term sexual partners are those who are not in a committed relationship and who also do not have casual sexual partners, while women who are in a committed long-term relationship are the least desirable. Accordingly, women at an intermediate risk of sperm competition, that is, women who are not in a long-term relationship but who do engage in short-term mating or have casual sexual partners, are considered intermediate in desirability as short-term sexual partners.
Effects of sperm competition on human mating strategies
High levels of sperm competition among the great apes are generally seen among species with polyandrous (multimale) mating systems, while lower rates of competition are seen in species with monogamous or polygynous (multifemale) mating systems. Humans have intermediate levels of sperm competition, as seen by humans' intermediate relative testis size, ejaculate volume, and sperm midpiece size, compared with other primates. This suggests that there has been a relatively high degree of monogamous behavior throughout our evolutionary history. Additionally, the lack of a baculum in humans suggests a history of monogamous mating systems.
Males have the goal of reducing sperm competition by selecting women who are at low risk for sperm competition as the most ideal mating partners.
Intra-ejaculate sperm competition
Noticing that sperm in a mixed sample tends to clump together (making it less motile) and to have a high mortality rate, reproductive biologist Robin Baker, formerly of the University of Manchester, proposed about a decade ago that some mammals, including humans, manufacture "killer" sperm whose only function is to attack foreign spermatozoa, destroying themselves in the process.
To test this idea, reproductive biologist Harry Moore and evolutionary ecologist Tim Birkhead of the University of Sheffield in the U.K. mixed sperm samples from 15 men in various combinations and checked for how the cells moved, clumped together, or developed abnormal shapes. "These are very simple experiments, but we tried to mimic what goes on in the reproductive tract," Moore says. The team found no excess casualties from any particular donor or other evidence of warring sperm, they report in the 7 December Proceedings of the Royal Society. "The kamikaze sperm hypothesis is probably not a mechanism in human sperm competition," says Birkhead.
The findings are "the nail in the coffin for the kamikaze hypothesis," says Michael Bedford, a reproductive biologist at Cornell University's Weill Medical Center in New York City. He says he had never given the idea much credence.
Female responses to sperm competition
A survey of 67 studies reporting nonpaternity suggests that for men with high paternity confidence, rates of nonpaternity are (excluding studies of unknown methodology) typically 1.9%, substantially less than the typical rates of 10% or higher cited by many researchers. Cuckolded fathers are rare in human populations. "Media and popular scientific literature often claim that many alleged fathers are being cuckolded into raising children that biologically are not their own," said Maarten Larmuseau of KU Leuven in Belgium. "Surprisingly, the estimated rates within human populations are quite low, around 1 or 2 percent." Reliable data on contemporary populations that have become available over the last decade, mainly as supplementary results of medical studies, do not support the notion that one in 10 people do not know who their "real" fathers are. The findings suggest that any potential advantage of cheating in order to have children that are perhaps better endowed is offset for the majority of women by the potential costs, the researchers say. Those costs likely include spousal aggression, divorce, or reduced paternal investment by the social partner or his relatives. "The observed low cuckoldry rates in contemporary and past human populations clearly challenge the well-known idea that women routinely 'shop around' for good genes by engaging in extra-pair copulations to obtain genetic benefits for their children," Larmuseau said.
Women are loyal to men who are good providers. "With DNA tests now widely available, so-called paternity fraud has become a staple of talk shows and TV crime series. Aggrieved men accuse tearful wives who profess their fidelity, only to have their extramarital affairs brought to light...The rule of thumb seems to be that males of higher socioeconomic status, and from more conventionally bourgeois societies, have greater warranted paternity confidence. Lower paternity confidence among those who are the principals for sensational media shouldn’t be surprising then."
Sperm competition in other primates
The relative testis size of human males is larger than that of primates with single-male (monogamous or polygynous) mating systems, such as gorillas and orangutans, but smaller than that of primates with polyandrous mating systems, such as bonobos and chimpanzees. While it is possible that the large testis size of some primates could be due to seasonal breeding (and consequently a need to fertilize a large number of females in a short period of time), evidence suggests that primate groups with multi-male mating systems have significantly larger testes than primate groups with single-male mating systems, regardless of whether the species breeds seasonally. Similarly, primate species with high levels of sperm competition also have larger ejaculate volumes and larger sperm midpieces.
Unlike all other Old World great apes and monkeys, humans do not have a baculum (penile bone). Dixson demonstrated that increased baculum length is associated with primates who live in dispersed groups, while small bacula are found in primates who live in pairs. Those primates that have multi-male mating systems tend to have bacula that are larger in size, in addition to prolongation of post-ejaculatory intromission and larger relative testis size.
References
Human reproduction
Sexual selection | Human sperm competition | [
"Biology"
] | 3,162 | [
"Evolutionary processes",
"Behavior",
"Sexual selection",
"Mating"
] |
50,413,747 | https://en.wikipedia.org/wiki/Energy%20modeling | Energy modeling or energy system modeling is the process of building computer models of energy systems in order to analyze them. Such models often employ scenario analysis to investigate different assumptions about the technical and economic conditions at play. Outputs may include the system feasibility, greenhouse gas emissions, cumulative financial costs, natural resource use, and energy efficiency of the system under investigation. A wide range of techniques are employed, ranging from broadly economic to broadly engineering. Mathematical optimization is often used to determine the least-cost in some sense. Models can be international, regional, national, municipal, or stand-alone in scope. Governments maintain national energy models for energy policy development.
Energy models are usually intended to contribute variously to system operations, engineering design, or energy policy development. This page concentrates on policy models. Individual building energy simulations are explicitly excluded, although they too are sometimes called energy models. IPCC-style integrated assessment models, which also contain a representation of the world energy system and are used to examine global transformation pathways through to 2050 or 2100, are not considered here in detail.
Energy modeling has increased in importance as the need for climate change mitigation has grown. The energy supply sector is the largest contributor to global greenhouse gas emissions. The IPCC reports that climate change mitigation will require a fundamental transformation of the energy supply system, including the substitution of unabated (not captured by CCS) fossil fuel conversion technologies by low-GHG alternatives.
Model types
A wide variety of model types are in use. This section attempts to categorize the key types and their usage. The divisions provided are not hard and fast and mixed-paradigm models exist. In addition, the results from more general models can be used to inform the specification of more detailed models, and vice versa, thereby creating a hierarchy of models. Models may, in general, need to capture "complex dynamics such as:
energy system operation
technology stock turnover
technology innovation
firm and household behaviour
energy and non-energy capital investment and labour market adjustment dynamics leading to economic restructuring
infrastructure deployment and urban planning"
Models may be limited in scope to the electricity sector or they may attempt to cover an energy system in its entirety (see below).
Most energy models are used for scenario analysis. A scenario is a coherent set of assumptions about a possible system. New scenarios are tested against a baseline scenario – normally business-as-usual (BAU) – and the differences in outcome noted.
The time horizon of the model is an important consideration. Single-year models – set in either the present or the future (say 2050) – assume a non-evolving capital structure and focus instead on the operational dynamics of the system. Single-year models normally embed considerable temporal (typically hourly resolution) and technical detail (such as individual generation plant and transmissions lines). Long-range models – cast over one or more decades (from the present until say 2050) – attempt to encapsulate the structural evolution of the system and are used to investigate capacity expansion and energy system transition issues.
Models often use mathematical optimization to solve for redundancy in the specification of the system. Some of the techniques used derive from operations research. Most rely on linear programming (including mixed-integer programming), although some use nonlinear programming. Solvers may use classical or genetic optimization, such as CMA-ES. Models may be recursive-dynamic, solving sequentially for each time interval and thus evolving through time, or they may be framed as a single forward-looking intertemporal problem, thereby assuming perfect foresight. Single-year engineering-based models usually attempt to minimize the short-run financial cost, while single-year market-based models use optimization to determine market clearing. Long-range models, usually spanning decades, attempt to minimize both the short- and long-run costs as a single intertemporal problem.
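A minimal sketch of the single-year least-cost logic described above can be written in Python. With only one demand-balance constraint and per-plant capacity bounds, the linear-programming optimum reduces to merit-order dispatch (filling the cheapest plants first); all plant names and numbers below are hypothetical:

```python
# Hypothetical plant data: (name, marginal cost in EUR/MWh, capacity in MW).
plants = [("coal", 30.0, 500.0), ("gas", 60.0, 400.0), ("wind", 5.0, 200.0)]

def merit_order_dispatch(plants, demand):
    """Least-cost single-period dispatch under one demand-balance
    constraint; filling the cheapest plants first is the LP optimum here."""
    dispatch, remaining = {}, demand
    for name, cost, capacity in sorted(plants, key=lambda p: p[1]):
        output = min(capacity, remaining)   # run plant up to its capacity
        dispatch[name] = output
        remaining -= output
    if remaining > 1e-9:
        raise ValueError("demand exceeds installed capacity")
    cost_of = {name: cost for name, cost, _ in plants}
    total = sum(out * cost_of[name] for name, out in dispatch.items())
    return dispatch, total

dispatch, total = merit_order_dispatch(plants, demand=700.0)
print(dispatch)   # wind runs first, then coal; gas stays off
print(total)      # 16000.0
```

A real model would add ramping limits, network constraints, and many time steps, at which point a general LP or MILP solver is required rather than this closed-form shortcut.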
The demand-side (or end-user domain) has historically received relatively scant attention, often modeled by just a simple demand curve. End-user energy demand curves, in the short-run at least, are normally found to be highly inelastic.
As intermittent energy sources and energy demand management grow in importance, models have needed to adopt an hourly temporal resolution in order to better capture their real-time dynamics. Long-range models are often limited to calculations at yearly intervals, based on typical day profiles, and are hence less suited to systems with significant variable renewable energy. Day-ahead dispatching optimization is used to aid in the planning of systems with a significant portion of intermittent energy production in which uncertainty around future energy predictions is accounted for using stochastic optimization.
Implementing languages include GAMS, MathProg, MATLAB, Mathematica, Python, Pyomo, R, Fortran, Java, C, C++, and Vensim. Occasionally spreadsheets are used.
As noted, IPCC-style integrated models (also known as integrated assessment models or IAM) are not considered here in any detail. Integrated models combine simplified sub-models of the world economy, agriculture and land-use, and the global climate system in addition to the world energy system. Examples include GCAM, MESSAGE, and REMIND.
Published surveys on energy system modeling have focused on techniques, general classification, an overview, decentralized planning, modeling methods, renewables integration, energy efficiency policies, electric vehicle integration, international development, and the use of layered models to support climate protection policy. Deep Decarbonization Pathways Project researchers have also analyzed model typologies. A 2014 paper outlines the modeling challenges ahead as energy systems become more complex and human and social factors become increasingly relevant.
Electricity sector models
Electricity sector models are used to model electricity systems. The scope may be national or regional, depending on circumstances. For instance, given the presence of national interconnectors, the western European electricity system may be modeled in its entirety.
Engineering-based models usually contain a good characterization of the technologies involved, including the high-voltage AC transmission grid where appropriate. Some models (for instance, models for Germany) may assume a single common bus or "copper plate" where the grid is strong. The demand-side in electricity sector models is typically represented by a fixed load profile.
Market-based models, in addition, represent the prevailing electricity market, which may include nodal pricing.
Game theory and agent-based models are used to capture and study strategic behavior within electricity markets.
Energy system models
In addition to the electricity sector, energy system models include the heat, gas, mobility, and other sectors as appropriate. Energy system models are often national in scope, but may be municipal or international.
So-called top-down models are broadly economic in nature and based on either partial equilibrium or general equilibrium. General equilibrium models represent a specialized activity and require dedicated algorithms. Partial equilibrium models are more common.
So-called bottom-up models capture the engineering well and often rely on techniques from operations research. Individual plants are characterized by their efficiency curves (also known as input/output relations), nameplate capacities, investment costs (capex), and operating costs (opex). Some models allow for these parameters to depend on external conditions, such as ambient temperature.
Producing hybrid top-down/bottom-up models to capture both the economics and the engineering has proved challenging.
Established models
This section lists some of the major models in use. These are typically run by national governments.
In a community effort, a large number of existing energy system models were collected in model fact sheets on the Open Energy Platform.
LEAP
LEAP, the Low Emissions Analysis Platform (formerly known as the Long-range Energy Alternatives Planning System) is a software tool for energy policy analysis, air pollution abatement planning and climate change mitigation assessment.
LEAP was developed at the Stockholm Environment Institute's (SEI) US Center. LEAP can be used to examine city, statewide, national, and regional energy systems. It is normally used for studies spanning 20–50 years, with most of its calculations occurring at yearly intervals. LEAP allows policy analysts to create and evaluate alternative scenarios and to compare their energy requirements, social costs and benefits, and environmental impacts. As of June 2021, LEAP had over 6,000 users in 200 countries and territories.
Power system simulation
General Electric's MAPS (Multi-Area Production Simulation) is a production simulation model used by various Regional Transmission Organizations and Independent System Operators in the United States to plan for the economic impact of proposed electric transmission and generation facilities in FERC-regulated electric wholesale markets. Portions of the model may also be used for the commitment and dispatch phase (updated on 5-minute intervals) in the operation of wholesale electric markets for RTO and ISO regions. ABB's PROMOD is a similar software package. These ISO and RTO regions also utilize a GE software package called MARS (Multi-Area Reliability Simulation) to ensure the power system meets reliability criteria (a loss of load expectation (LOLE) of no greater than 0.1 days per year). Further, a GE software package called PSLF (Positive Sequence Load Flow) and a Siemens software package called PSSE (Power System Simulation for Engineering) analyze load flow on the power system for short circuits and stability during preliminary planning studies by RTOs and ISOs.
MARKAL/TIMES
MARKAL (MARKet ALlocation) is an integrated energy systems modeling platform, used to analyze energy, economic, and environmental issues at the global, national, and municipal level over time-frames of up to several decades. MARKAL can be used to quantify the impacts of policy options on technology development and natural resource depletion. The software was developed by the Energy Technology Systems Analysis Programme (ETSAP) of the International Energy Agency (IEA) over a period of almost two decades.
TIMES (The Integrated MARKAL-EFOM System) is an evolution of MARKAL – both energy models have many similarities. TIMES succeeded MARKAL in 2008. Both models are technology explicit, dynamic partial equilibrium models of energy markets. In both cases, the equilibrium is determined by maximizing the total consumer and producer surplus via linear programming. Both MARKAL and TIMES are written in GAMS.
The TIMES model generator was also developed under the Energy Technology Systems Analysis Program (ETSAP). TIMES combines two different, but complementary, systematic approaches to modeling energy – a technical engineering approach and an economic approach. TIMES is a technology rich, bottom-up model generator, which uses linear programming to produce a least-cost energy system, optimized according to a number of user-specified constraints, over the medium to long-term. It is used for "the exploration of possible energy futures based on contrasted scenarios".
The MARKAL and TIMES model generators are in use in 177 institutions spread over 70 countries.
NEMS
NEMS (National Energy Modeling System) is a long-standing United States government policy model, run by the Department of Energy (DOE). NEMS computes equilibrium fuel prices and quantities for the US energy sector. To do so, the software iteratively solves a sequence of linear programs and nonlinear equations. NEMS has been used to explicitly model the demand-side, in particular to determine consumer technology choices in the residential and commercial building sectors.
NEMS is used to produce the Annual Energy Outlook each year – for instance in 2015.
Criticisms
Public policy energy models have been criticized for being insufficiently transparent. The source code and data sets should at least be available for peer review, if not explicitly published. To improve transparency and public acceptance, some models are undertaken as open-source software projects, often developing a diverse community as they proceed. OSeMOSYS is an example of such a model. The Open Energy Outlook is an open community that has produced a long-term outlook of the U.S. energy system using the open-source TEMOA model.
Not a criticism per se, but it is necessary to understand that model results do not constitute predictions of the future.
See also
General
Climate change mitigation – actions to limit long-term climate change
Climate change mitigation scenarios – possible futures in which global warming is reduced by deliberate actions
Economic model
Energy system – the interpretation of the energy sector in system terms
Energy Modeling Forum – a Stanford University-based modeling forum
Open Energy Modelling Initiative – an open source energy modeling initiative, centered on Europe
Open energy system databases – database projects which collect, clean, and republish energy-related datasets
Open energy system models – a review of energy system models that are also open source
Power system simulation
Models
iNEMS (Integrated National Energy Modeling System) – a national energy model for China
MARKAL – an energy model
NEMS – the US government national energy model
POLES (Prospective Outlook on Long-term Energy Systems) – an energy sector world simulation model
KAPSARC Energy Model - an energy sector model for Saudi Arabia
Further reading
Introductory video on open energy system modeling with python language example
Introductory video with reference to public policy
References
External links
COST TD1207 Mathematical Optimization in the Decision Support Systems for Efficient and Robust Energy Networks wiki – a typology for optimization models
EnergyPLAN — a freeware energy model from the Department of Development and Planning, Aalborg University, Denmark
Open Energy Modelling Initiative open models page – a list of open energy models
model.energy — an online "toy" model utilizing the PyPSA framework that allows the public to experiment
Building Energy Modeling Tools by National Renewable Energy Laboratory
Climate change policy
Computational science
Computer programming
Economics models
Energy models
Energy policy
Mathematical modeling
Mathematical optimization
Simulation
Systems theory | Energy modeling | [
"Mathematics",
"Technology",
"Engineering",
"Environmental_science"
] | 2,755 | [
"Mathematical analysis",
"Mathematical modeling",
"Applied mathematics",
"Computer programming",
"Energy policy",
"Computational science",
"Software engineering",
"Mathematical optimization",
"Computers",
"Environmental social science"
] |
52,792,035 | https://en.wikipedia.org/wiki/De%20novo%20sequence%20assemblers | De novo sequence assemblers are a type of program that assembles short nucleotide sequences into longer ones without the use of a reference genome. These are most commonly used in bioinformatic studies to assemble genomes or transcriptomes. Two common types of de novo assemblers are greedy algorithm assemblers and De Bruijn graph assemblers.
Types of de novo assemblers
There are two types of algorithms that are commonly utilized by these assemblers: greedy, which aim for local optima, and graph method algorithms, which aim for global optima. Different assemblers are tailored for particular needs, such as the assembly of (small) bacterial genomes, (large) eukaryotic genomes, or transcriptomes.
Greedy algorithm assemblers find local optima in alignments of smaller reads. They typically feature several steps: 1) pairwise distance calculation of reads, 2) clustering of reads with greatest overlap, 3) assembly of overlapping reads into larger contigs, and 4) repeat. These algorithms typically do not work well for larger read sets, as they do not easily reach a global optimum in the assembly, and they perform poorly on read sets that contain repeat regions. Early de novo sequence assemblers, such as SEQAID (1984) and CAP (1992), used greedy algorithms, such as overlap-layout-consensus (OLC) algorithms. These algorithms find overlaps between all reads, use the overlaps to determine a layout (or tiling) of the reads, and then produce a consensus sequence. Some programs that used OLC algorithms featured filtration (to remove read pairs that will not overlap) and heuristic methods to increase the speed of the analyses.
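The greedy steps above can be illustrated with a toy assembler that repeatedly merges the pair of reads with the greatest suffix-prefix overlap. This is a deliberate simplification: real assemblers score inexact alignments rather than requiring exact overlaps, and the read data here is invented:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that exactly matches a prefix of b."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(reads, min_len=3):
    """Toy greedy assembler: repeatedly merge the read pair with the
    greatest overlap until no pair overlaps by at least min_len."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for a in reads:
            for b in reads:
                if a is not b:
                    olen = overlap(a, b, min_len)
                    if olen > best[0]:
                        best = (olen, a, b)
        olen, a, b = best
        if olen == 0:
            break                        # no overlaps left: emit contigs
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[olen:])       # merge into a longer contig
    return reads

reads = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]
print(greedy_assemble(reads))   # one contig: ATTAGACCTGCCGGAATAC
```

On read sets with repeats, this greedy merging can pick the wrong pair and collapse repeat copies, which is exactly the weakness noted above.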
Graph method assemblers come in two varieties: string graph and De Bruijn graph. String graph and De Bruijn graph method assemblers were introduced at a DIMACS workshop in 1994 by Waterman and Gene Myers. These methods represented an important step forward in sequence assembly, as they both use algorithms to reach a global optimum instead of a local optimum. While both of these methods made progress towards better assemblies, the De Bruijn graph method has become the most popular in the age of next-generation sequencing. During the assembly of the De Bruijn graph, reads are broken into smaller fragments of a specified size, k. The k-mers are then used as edges in the graph; the nodes are the (k-1)-mer prefixes and suffixes that each edge connects. The assembler then constructs sequences based on the De Bruijn graph. De Bruijn graph assemblers typically perform better than greedy algorithm assemblers on larger read sets, especially when those read sets contain repeat regions.
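The k-mer construction just described can be sketched as follows. The walk shown is a simplification (a real assembler finds Eulerian paths and handles branches, tips, and bubbles), and the reads are invented:

```python
from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Break reads into k-mers; each k-mer is an edge from its
    (k-1)-mer prefix node to its (k-1)-mer suffix node."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

def walk(graph, start):
    """Follow edges from a start node, consuming each edge once.
    A real assembler would instead compute an Eulerian path and
    resolve branches; this toy walk assumes an unbranched graph."""
    contig, node = start, start
    while graph[node]:
        node = graph[node].pop(0)
        contig += node[-1]   # each step appends one base
    return contig

reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]
graph = de_bruijn_edges(reads, k=4)
print(walk(graph, "ATG"))   # reconstructs ATGGCGTGCAAT
```

Note that overlapping reads contribute duplicate edges (e.g. GGC to GCG appears twice here); real assemblers use these edge multiplicities, together with coverage information, to detect errors and repeats.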
Commonly used programs
Different assemblers are designed for different type of read technologies. Reads from second generation technologies (called short read technologies) like Illumina are typically short (with lengths of the order of 50-200 base pairs) and have error rates of around 0.5-2%, with the errors chiefly being substitution errors. However, reads from third generation technologies like PacBio and fourth generation technologies like Oxford Nanopore (called long read technologies) are longer with read lengths typically in the thousands or tens of thousands and have much higher error rates of around 10-20% with errors being chiefly insertions and deletions. This necessitates different algorithms for assembly from short and long read technologies.
Assemblathon
There are numerous programs for de novo sequence assembly and many have been compared in the Assemblathon. The Assemblathon is a periodic, collaborative effort to test and improve the numerous assemblers available. Thus far, two assemblathons have been completed (2011 and 2013) and a third is in progress (as of April 2017). Teams of researchers from across the world choose a program and assemble simulated genomes (Assemblathon 1) and the genomes of model organisms that have been previously assembled and annotated (Assemblathon 2). The assemblies are then compared and evaluated using numerous metrics.
Assemblathon 1
Assemblathon 1 was conducted in 2011 and featured 59 assemblies from 17 different groups and the organizers. The goal of this Assemblathon was to most accurately and completely assemble a genome that consisted of two haplotypes (each with three chromosomes of 76.3, 18.5, and 17.7 Mb, respectively), generated using Evolver. Numerous metrics were used to assess the assemblies, including: NG50 (the scaffold length at which 50% of the total genome size is reached when scaffold lengths are summed from longest to shortest), LG50 (the number of scaffolds greater than, or equal to, the NG50 length), genome coverage, and substitution error rate.
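The NG50 and LG50 metrics can be computed directly from the scaffold lengths and the (known or estimated) genome size; the scaffold lengths below are hypothetical:

```python
def ng50_lg50(scaffold_lengths, genome_size):
    """NG50: the scaffold length at which the running sum (longest
    first) reaches 50% of the genome size.  LG50: how many scaffolds
    that takes."""
    running = 0
    for count, length in enumerate(sorted(scaffold_lengths, reverse=True), 1):
        running += length
        if running >= genome_size / 2:
            return length, count
    return None, None   # assembly covers less than half the genome

# Hypothetical assembly of a 100 kb genome:
lengths = [40_000, 25_000, 15_000, 10_000, 5_000]
print(ng50_lg50(lengths, genome_size=100_000))   # (25000, 2)
```

Unlike N50, which sums toward 50% of the total assembly size, NG50 sums toward 50% of the genome size, so assemblies of different total lengths remain comparable.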
Software compared: ABySS, Phusion2, phrap, Velvet, SOAPdenovo, PRICE, ALLPATHS-LG
N50 analysis: assemblies by the Plant Genome Assembly Group (using the assembler Meraculous) and ALLPATHS, Broad Institute, USA (using ALLPATHS-LG) performed the best in this category, by an order of magnitude over other groups. These assemblies scored an N50 of >8,000,000 bases.
Coverage of genome by assembly: for this metric, BGI's assembly via SOAPdenovo performed best, with 98.8% of the total genome being covered. All assemblers performed relatively well in this category, with all but three groups having coverage of 90% and higher, and the lowest total coverage being 78.5% (Dept. of Comp. Sci., University of Chicago, USA via Kiki).
Substitution errors: the assembly with the lowest substitution error rate was submitted by the Wellcome Trust Sanger Institute, UK team using the software SGA.
Overall: No one assembler performed significantly better than the others in all categories. While some assemblers excelled in one category, they did not in others, suggesting that there is still much room for improvement in assembler software quality.
Assemblathon 2
Assemblathon 2 improved on Assemblathon 1 by incorporating the genomes of multiple vertebrates (a bird, Melopsittacus undulatus; a fish, Maylandia zebra; and a snake, Boa constrictor constrictor), with genomes estimated to be 1.2, 1.0, and 1.6 Gbp in length, and by assessment with over 100 metrics. Each team was given four months to assemble its genome from next-generation sequencing (NGS) data, including Illumina and Roche 454 sequence data.
Software compared: ABySS, ALLPATHS-LG, PRICE, Ray, and SOAPdenovo
N50 analysis: for the assembly of the bird genome, the Baylor College of Medicine Human Genome Sequencing Center and ALLPATHS teams had the highest NG50s, at over 16,000,000 and over 14,000,000 bp, respectively.
Presence of core genes: Most assemblies performed well in this category (~80% or higher), with only one dropping to just over 50% in their bird genome assembly (Wayne State University via HyDA).
Overall: Overall, the Baylor College of Medicine Human Genome Sequencing Center utilizing a variety of assembly methods (SeqPrep, KmerFreq, Quake, BWA, Newbler, ALLPATHS-LG, Atlas-Link, Atlas-GapFill, Phrap, CrossMatch, Velvet, BLAST, and BLASR) performed the best for the bird and fish assemblies. For the snake genome assembly, the Wellcome Trust Sanger Institute using SGA, performed best. For all assemblies, SGA, BCM, Meraculous, and Ray submitted competitive assemblies and evaluations. The results of the many assemblies and evaluations described here suggest that while one assembler may perform well on one species, it may not perform as well on another. The authors make several suggestions for assembly: 1) use more than one assembler, 2) use more than one metric for evaluation, 3) select an assembler that excels in metrics of more interest (e.g., N50, coverage), 4) low N50s or assembly sizes may not be concerning, depending on user needs, and 5) assess the levels of heterozygosity in the genome of interest.
See also
Sequence assembly
Sequence alignment
De novo transcriptome assembly
References
Bioinformatics algorithms
Bioinformatics software
DNA sequencing
Metagenomics software | De novo sequence assemblers | [
"Chemistry",
"Biology"
] | 1,789 | [
"Bioinformatics algorithms",
"Bioinformatics software",
"Bioinformatics",
"Molecular biology techniques",
"DNA sequencing"
] |
44,079,682 | https://en.wikipedia.org/wiki/Cardiac%20Pacemakers%2C%20Inc. | Cardiac Pacemakers, Inc. (CPI), doing business as Guidant Cardiac Rhythm Management, manufactured implantable cardiac rhythm management devices, such as pacemakers and defibrillators. It sold microprocessor-controlled insulin pumps and equipment to regulate heart rhythm. It developed therapies to treat irregular heartbeat. The company was founded in 1971 and is based in Saint Paul, Minnesota, and is presently a subsidiary of Boston Scientific.
History
CPI was founded in February 1972 in Saint Paul, Minnesota. The first $50,000 capitalization for CPI was raised from a phone booth on the Minneapolis skyway system. They began designing and testing their implantable cardiac pacemaker powered with a new longer-life lithium battery in 1971. The first heart patient to receive a CPI pacemaker emerged from surgery in June 1973. Within two years, the upstart company that challenged Medtronic had sold approximately 8,500 pacemakers.
Medtronic at the time held 65% of the artificial pacemaker market. CPI was the first spin-off from Medtronic, and it competed using the world's first lithium-powered pacemaker. Medtronic's market share plummeted to 35%.
Founding partners Anthony Adducci, Manny Villafaña, Jim Baustert, and Art Schwalm, were former Medtronic employees. Lawsuits ensued, all of which were settled out of court.
Eli Lilly acquisition
The company sold 8,500 pacemakers, increasing sales from zero in 1972 to over $47 million.
In early 1978, CPI was concerned about a friendly takeover attempt. Despite impressive sales, the company's stock price had fluctuated wildly the year before, dropping from $33 to $11 per share. Some speculated that the stock was being sold short, while others attributed the price to the natural volatility of high-tech stock. As a one-product company, CPI was susceptible to changing market conditions, and its founders knew they needed to diversify. They considered two options: acquiring other medical device companies or being acquired themselves. They chose the latter.
Several companies expressed interest in acquiring CPI, including 3M, American Hospital Supply, Pfizer, and Johnson & Johnson. However, Eli Lilly and Company, one of the premier pharmaceutical companies in the United States, was the most enthusiastic suitor. "Lilly had the research expertise, highly compatible interests, and similar values," Anthony Adducci recalls. "At CPI, we haven't been able to dedicate the dollars and time necessary to develop new products beyond our staple lithium-powered pacemaker. Lilly was a $2 billion company. We knew they had tremendous resources, especially in research and development." Additionally, Eli Lilly and CPI were already interested in developing insulin pumps, and Lilly was working with cardiovascular drugs, a natural link to CPI's heart pacemaker business.
Before the final negotiations in late 1978, there were numerous flights between Minneapolis and Indianapolis for CPI principals and representatives of Piper, Jaffray & Hopwood's corporate finance department. Lilly, a pharmaceutical giant, and CPI, the upstart pacemaker company, sat down at a bargaining table at a motel in suburban Bloomington, Minnesota. CPI's negotiation team included Anthony Adducci, Art Schwalm, Tom King, and Hunt Greene.
In December 1978, the company was acquired by Eli Lilly and Company for $127 million.
Lithium battery
CPI designed and manufactured the world's first pacemaker with a lithium anode and a lithium-iodide solid-state electrolyte battery. The pacemaker structure is enclosed in a hermetically sealed metallic enclosure that allows the electrode leads to pass through in a sealed relationship. The surface of the casing is polished metal, with a zone through which the external electrode leads pass.
The lithium-iodide (lithium anode) cells revolutionized the medical industry by increasing pacemaker life from one year to up to eleven years. They became the standard for pacemaker designs.
References
American inventions
Biomedical engineering
Cardiac electrophysiology
Eli Lilly and Company
Embedded systems
Implants (medicine) | Cardiac Pacemakers, Inc. | [
"Technology",
"Engineering",
"Biology"
] | 828 | [
"Biological engineering",
"Computer engineering",
"Biomedical engineering",
"Embedded systems",
"Computer systems",
"Computer science",
"Medical technology"
] |
44,081,062 | https://en.wikipedia.org/wiki/Dynomak | Dynomak is a spheromak fusion reactor concept developed by the University of Washington using U.S. Department of Energy funding.
A dynomak is a spheromak that is started and maintained by magnetic flux injection. It is formed when an alternating current is used to induce a magnetic flux into the plasma; an electric alternating-current transformer uses the same induction process to create a secondary current. Once formed, the plasma inside a dynomak relaxes into its lowest energy state while conserving overall flux. This is termed a Taylor state, and the plasma structure that forms inside the machine is a spheromak.
Technical roots
Plasma is a fluid that conducts electricity, which gives it the unique property that it can self-organize into vortex rings (smoke-ring-like objects), which include field-reversed configurations and spheromaks. A structured plasma has the advantage that it is hotter, denser and more controllable, which makes it a good choice for a fusion reactor. But forming these plasma structures has been challenging since they were first observed in 1959, because they are inherently unstable.
In 1974, Dr. John B. Taylor proposed that a spheromak could be formed by inducing a magnetic flux into a loop of plasma. The plasma would then relax naturally into a spheromak, also termed a Taylor state. This process worked if the plasma:
Conserved the total magnetic flux
Minimized the total energy
Later, in 1979, these claims were checked by Marshall Rosenbluth. In 1974, Dr. Taylor could only use results from the ZETA pinch device to back up these claims. But, since then, Taylor states have been formed in multiple machines including:
Compact Torus Experiment (CTX) at Los Alamos National Laboratory (LANL). The CTX ran from ~1979 to ~1987. It reached electron temperatures of 4.6 million kelvin, ran for 3 microseconds, and had a plasma-to-magnetic pressure ratio of 0.2.
Sustained Spheromak Physics Experiment (SSPX) at Lawrence Livermore National Laboratory (LLNL) was a more advanced version of the CTX that was used to measure the relaxation process that led to a Taylor state. The machine ran from 1999 to 2007.
Caltech Spheromak Experiment at the California Institute of Technology (Caltech) was a small machine run by Dr. Paul Bellan's lab, from ~2000 to ~2010.
Helicity Injected Torus-Steady Inductive (HIT-SI) at the University of Washington was run by Dr. Jarboe from ~2004 to ~2012. It was the precursor to the dynomak. The machine created 90 kiloamps of stable plasma current over a few (<2) microseconds, and demonstrated the first Imposed-Dynamo Current Drive (IDCD) in 2011. The IDCD breakthrough enabled Dr. Jarboe's group to envision the first reactor-scale version of this machine, named the dynomak.
The dynomak evolved from the HIT-SI experiment. HIT-SI went through several upgrades: the HIT-SI3 (~2013 to ~2020) and HIT-SIU (post ~2020), both were variants on the same machine. These machines demonstrated that an inductive current can be used to make and sustain a spheromak plasma structure.
Magnetic induction drive
By definition, a dynomak is a plasma structure that is started, formed, and sustained using magnetic flux injection. Electric transformers use a similar process; a magnetic flux is created on the primary loop, and this makes an alternating current on the secondary side. Because of Faraday's law of induction, only a changing magnetic field can induce a secondary current – this is why a direct current transformer cannot exist. In a dynomak, magnetic induction is used to create a plasma current inside a plasma filled chamber. This gets the plasma moving and the system eventually relaxes into a Taylor state or spheromak. The relaxation process involves the flow of magnetic helicity (a twist in the field lines) from the injectors into the center of the machine.
Supporters of this heating approach have argued that induction is 2-3 orders of magnitude more efficient than radio frequency (RF) or neutral beam heating. If true, this gives a dynomak several distinct advantages over other fusion approaches such as tokamaks or magnetic mirrors. This remains an open area of research; the following describes how inductive drive creates plasma current inside a dynomak.
A dynomak uses injectors: curved arms attached to the main chamber. An alternating current is applied around the curve of these arms, creating the magnetic flux that drives the dynomak. The University of Washington experimented with both two- and three-injector configurations. The phases of the alternating currents are offset to allow continuous injection of flux into the dynomak. The injector count sets the offset angle: the drive currents, and thus injectors, are offset by 90 degrees with two injectors, and by 60 degrees with three injectors.
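One way to see why those particular offsets allow continuous injection: if each injector's contribution scales with the square of its sinusoidal drive current, the quoted offsets make the squares sum to a constant in time. This is a minimal numerical check under our own simplifying assumptions (identical amplitudes, unit frequency), not a model of the actual hardware:

```python
import math

def total_injection(t, n_injectors):
    """Sum of squared sinusoidal injector currents at time t, with each
    injector phase-offset by 90 degrees (two injectors) or 60 degrees (three).
    """
    offset = math.pi / 2 if n_injectors == 2 else math.pi / 3
    return sum(math.sin(t + i * offset) ** 2 for i in range(n_injectors))
```

With two injectors the sum is sin² + cos² = 1 at every instant; with three injectors offset by 60 degrees it is constant at 3/2, so in either configuration the total drive never drops to zero.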
Advantages
A spheromak plasma structure forms naturally, with no added technology needed. Supporters argue that this gives dynomaks several inherent advantages, including:
It may avoid the kink, interchange, and other plasma instabilities that normally plague plasma structures. For this reason, a dynomak may be able to pressurize and heat a plasma up to the Mercer limit on beta number. If true, this could ultimately shrink a reactor relative to other fusion approaches.
An inductive drive can be 2-3 orders of magnitude more efficient than heating via RF or neutral beam. This is an open area of research.
A dynomak may need no added heating hardware such as neutral beam injection.
A dynomak has no central solenoid, in contrast to a tokamak, lowering mass, cost, and operating power needs for a reactor.
As of 2014, plasma densities reached 5×10¹⁹ m⁻³, temperatures of 60 eV, and a maximum operation time of 1.5 ms. No confinement time results were available. At those temperatures, fusion, alpha heating, and neutron production do not occur.
Commercialization
Once the technical principles were proven in the HIT-SI machine, Dr. Jarboe challenged his students in a University of Washington class to come up with a fusion reactor based on this approach. The students designed the dynomak as a reactor-level power plant that built on discoveries made with the HIT-SI and earlier machines.
Eventually, these students founded CT Fusion in 2015 as a spin-off from the University of Washington to commercialize the dynomak. The company had exclusive rights to 3 University of Washington patents and raised over $3.6 million from 2015 to 2019 in public and private funding. The acronym CT stands for Compact Toroid, which is what spheromaks were called for decades. The company received funding as part of an Advanced Research Projects Agency – Energy (ARPA-E) funding award for fusion. CT Fusion shut down in 2023.
Unlike other fusion reactor designs (such as ITER), a dynomak could be, according to its engineering team, comparable in cost to a conventional coal plant. A dynomak is calculated to cost a tenth of ITER and produce five times more energy at an efficiency of 40 percent. A one-gigawatt dynomak would cost US$2.7 billion, compared to US$2.8 billion for a coal plant.
Design
The dynomak incorporates an ITER-developed cryogenic pumping system. A spheromak uses an oblate spheroid instead of a tokamak configuration, with no central core and none of the large, complex superconducting magnets found in many tokamaks, e.g., ITER. The magnetic fields are produced by injecting electric currents into the center of the plasma using superconducting tapes wrapped around the vessel, such that the plasma contains itself.
A dynomak is smaller, simpler, and cheaper to build than a tokamak such as ITER, while producing more power. The fusion reaction is self-sustaining, as excess heat is drawn off by a molten salt blanket to power a steam turbine. The prototype was about one tenth the scale of a commercial project and can sustain plasma efficiently. Higher output would require larger scale and higher plasma temperature.
Criticisms
A dynomak relies on a copper wall to conserve and direct the magnetic flux that is injected into the machine. This wall butts up against the plasma, creating the possibility of high conduction losses through the metal. The HIT-SI coated the inside of the copper wall with an aluminum-oxide insulator to reduce these losses, but this could still be a major loss mechanism if the machine goes to fusion reactor conditions.
Further, the injection of magnetic helicity into the field forces the machine to break the magnetic flux surfaces that hold and sustain the plasma structure. The breaking of these surfaces has been cited as a reason that a dynomak's heating mechanism does not work as efficiently as predicted.
Lastly, a dynomak has a complex chamber geometry, which complicates maintenance and vacuum forming.
See also
Spheromak
Field-reversed configuration, a similar concept
Spherical tokamak, essentially a spheromak formed around a central conductor–magnet
Taylor state
John Bryan Taylor
References
Research projects
Fusion power
University of Washington | Dynomak | [
"Physics",
"Chemistry"
] | 1,994 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
44,082,541 | https://en.wikipedia.org/wiki/Density%20matrix%20embedding%20theory | The density matrix embedding theory (DMET) is a numerical technique to solve strongly correlated electronic structure problems. By mapping the system to a fragment plus its entangled quantum bath, the local electron correlation effects on the fragment can be accurately modeled by a post-Hartree–Fock solver. This method has shown high-quality results in 1D and 2D Hubbard models,
and in chemical model systems incorporating the fully interacting electronic Hamiltonian, including long-range interactions.
The basis of DMET is the Schmidt decomposition for quantum states, which shows that a given quantum many-body state, with macroscopically many degrees of freedom K, can be represented exactly by an impurity model consisting of 2N degrees of freedom for N ≪ K. Using an existing approximation (here called the effective lattice model) to the many-body state (for example the mean-field approximation, in which correlations are neglected), DMET relates this effective lattice model to the impurity model by a one-body local potential, U. This potential is then optimised by requiring that the density matrix of the impurity model match that of the effective lattice model projected onto the impurity cluster. When this matching is determined self-consistently, the U thus derived in principle exactly models the correlations of the system (since the mapping from the full Hamiltonian to the impurity Hamiltonian is exact).
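The Schmidt-decomposition step can be sketched numerically: given a mean-field (idempotent) one-particle density matrix of a large lattice, the SVD of its environment-fragment block yields the bath orbitals entangled with the fragment, and at most one bath orbital per fragment orbital survives. The function names below are ours (not a library API), and a half-filled 1D tight-binding chain stands in for the effective lattice model:

```python
import numpy as np

def dmet_bath(D, n_frag, tol=1e-8):
    """Bath orbitals from the environment-fragment block of a 1-RDM D.

    The block has at most n_frag nonzero singular values, so the fragment
    plus bath impurity space has 2*n_frag single-particle degrees of
    freedom regardless of the lattice size.
    """
    coupling = D[n_frag:, :n_frag]          # environment-fragment block
    u, s, _ = np.linalg.svd(coupling, full_matrices=False)
    return u[:, s > tol]                    # orthonormal bath orbitals

# Effective lattice model: a 12-site tight-binding chain at half filling.
K, n_frag = 12, 2
h = -(np.eye(K, k=1) + np.eye(K, k=-1))     # nearest-neighbour hopping
_, c = np.linalg.eigh(h)
occ = c[:, :K // 2]                         # occupy the lowest K/2 orbitals
D = occ @ occ.T                             # mean-field 1-RDM (idempotent)
bath = dmet_bath(D, n_frag)                 # fragment = first n_frag sites
```

Here the 12-site lattice is reduced to an embedding space of only 2 fragment plus 2 bath orbitals, which is the compression the Schmidt decomposition guarantees.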
References
Matrices
Computational physics
Computational chemistry | Density matrix embedding theory | [
"Physics",
"Chemistry",
"Mathematics"
] | 290 | [
"Theoretical chemistry stubs",
"Mathematical objects",
"Computational physics",
"Matrices (mathematics)",
"Theoretical chemistry",
"Computational chemistry",
"Computational chemistry stubs",
"Matrix stubs",
"Physical chemistry stubs",
"Computational physics stubs"
] |
42,634,883 | https://en.wikipedia.org/wiki/International%20Thorium%20Energy%20Committee | The international Thorium Energy Committee (iThEC) was founded in late 2012 at CERN in Geneva by scientists, engineers, political figures and industrialists under the leadership of its honorary president Carlo Rubbia, to promote the cause of using thorium as a means of reducing existing and future nuclear waste, and also for generating electricity.
International conference
After its founding, the first action of the committee was to organise an international conference on thorium, ThEC13, using mostly private funding and institutional support from CERN. The conference lasted four days and attracted wide support from research institutes, energy companies and private individuals, who contributed to establishing the current state of the art in thorium technology. Among the many contributions to the conference were the announcement of the decision by the companies Solvay and Areva to jointly fund research in thorium development, and the tests by the Norwegian company Thor Energy of thorium fuel rods in the Halden Reactor.
Expansion
The committee is expanding its membership to reach a wider audience, and attracted to the ThEC13 conference keynote speakers such as Pascal Couchepin, former president of the Swiss Confederation and member of the Liberal Party of Switzerland, and Hans Blix, former head of the International Atomic Energy Agency.
See also
Subcritical reactor
Thorium Energy Alliance
Thorium fuel cycle
Thorium-based nuclear power
References
Nuclear organizations | International Thorium Energy Committee | [
"Engineering"
] | 278 | [
"Nuclear organizations",
"Energy organizations"
] |
42,637,669 | https://en.wikipedia.org/wiki/List%20of%20countries%20by%20stem%20cell%20research%20trials | This is a list of countries by stem cell research trials for the purpose of commercializing treatments as of June 2020, using data from ClinicalTrials.gov.
References
Research trials by country
Cell biology
Cloning
Lists of countries | List of countries by stem cell research trials | [
"Engineering",
"Biology"
] | 50 | [
"Cell biology",
"Stem cell research",
"Biotechnology by country",
"Cloning",
"Genetic engineering",
"Induced stem cells"
] |
42,638,967 | https://en.wikipedia.org/wiki/E.%20Coli%20Metabolome%20Database | The E. coli Metabolome Database (ECMDB) is a freely accessible, online database of small molecule metabolites found in or produced by Escherichia coli (E. coli strain K12, MG1655). Escherichia coli is perhaps the best studied bacterium on earth and has served as the "model microbe" in microbiology research for more than 60 years. The ECMDB is essentially an E. coli "omics" encyclopedia containing detailed data on the genome, proteome and metabolome of E. coli. ECMDB is part of a suite of organism-specific metabolomics databases that includes DrugBank, HMDB, YMDB
and SMPDB. As a metabolomics resource, the ECMDB is designed to facilitate research in the areas of gut/microbiome metabolomics and environmental metabolomics. The ECMDB contains two kinds of data: 1) chemical data and 2) molecular biology and/or biochemical data. The chemical data includes more than 2700 metabolite structures with detailed metabolite descriptions along with nearly 5000 NMR, GC-MS and LC-MS spectra corresponding to these metabolites. The biochemical data includes nearly 1600 protein (and DNA) sequences and more than 3100 biochemical reactions that are linked to these metabolite entries. Each metabolite entry in the ECMDB contains more than 80 data fields, with approximately 65% of the information devoted to chemical data and the other 35% devoted to enzymatic or biochemical data. Many data fields are hyperlinked to other databases (KEGG, PubChem, MetaCyc, ChEBI, PDB, UniProt, and GenBank). The ECMDB also has a variety of structure and pathway viewing applets. The ECMDB database offers a number of text, sequence, spectral, chemical structure and relational query searches. These are described in more detail below.
Accessing the database
The ECMDB's content may be explored or searched using a variety of database-specific tools. The text search box (located at the top of every ECMDB page) allows users to conduct a general text search of the database's textual data, including names, synonyms, numbers and identifiers. The ECMDB employs a software tool called "Elastic Search" that allows misspellings and fuzzy text matching. Using the text search, users may select either metabolites or proteins in the "search for" field using the pull-down box located on the right side of the text search box. In this way it is possible to restrict the search to only return results for those items associated with E. coli metabolites or with E. coli proteins. The ECMDB has seven selectable tabs located at the top of every page: 1) Home; 2) Browse; 3) Search; 4) About; 5) Help; 6) Downloads; and 7) Contact Us. The ECMDB's browser (accessed via the Browse tab) can be used to browse through the database and to re-sort its contents. Six different browse options are available: 1) Metabolite Browse (Fig. 1); 2) Protein Browse; 3) Reaction Browse (Fig. 2); 4) Pathway Browse (Fig. 3); 5) Class Browse; and 6) Concentration Browse. By selecting a specific Browse option, the ECMDB's content can be displayed in a synoptic tabular format with the ECMDB identifiers, names and other data displayed in re-sortable tables. Clicking on an ECMDB MetaboCard or ProteinCard button will bring up the full data content for the corresponding metabolite (Fig. 4) or protein. The ECMDB also offers a number of Search options listed under the Search link. These include: 1) Chem Query; 2) Text Query; 3) Sequence Search; 4) Data Extractor; and four other MS or NMR spectral search tools. The Chem Query option allows users to sketch or to type (via a SMILES string) a chemical compound and to search the ECMDB for metabolites similar or identical to the query compound.
The Sequence Search can be used to perform BLAST (protein) sequence searches against all the protein sequences contained in ECMDB. Single and multiple sequence (i.e. whole proteome) BLAST queries are supported through this search tool. It is also possible to perform detailed spectral searches of ECMDB's reference compound NMR and MS spectral data through the ECMDB's MS, MS/MS, GC/MS and NMR Spectra Search links. These tools are intended to support the identification and characterization of bacterial (mainly E. coli) metabolites using NMR spectroscopy, GC-MS spectrometry and LC-MS spectrometry. The ECMDB also contains a large number of statistical tables, with detailed information about not only its content but also about E. coli, in general. In particular, under the "About" tab, a section called "E. coli numbers and stats" contains hundreds of interesting factoids about E. coli and E. coli physiology. Many components of the ECMDB are fully downloadable, including most of textual data, chemical structures and sequence data. These may be retrieved by clicking on the Download button, scrolling through the different files and selecting the appropriate hyperlinks.
Scope and access
All data in ECMDB is non-proprietary or is derived from a non-proprietary source. It is freely accessible and available to anyone. In addition, nearly every data item is fully traceable and explicitly referenced to the original source. ECMDB data is available through a public web interface and downloads.
See also
EcoCyc
KEGG
Metabolic network
Metabolome
Metabolomics
List of biological databases
References
Metabolomic databases
Medical databases
Model organism databases | E. Coli Metabolome Database | [
"Biology"
] | 1,236 | [
"Model organism databases",
"Model organisms"
] |
42,640,251 | https://en.wikipedia.org/wiki/Dynamic%20site%20acceleration | Dynamic Site Acceleration (DSA) is a group of technologies which make the delivery of dynamic websites more efficient. Manufacturers of application delivery controllers and content delivery networks (CDNs) use a host of techniques to accelerate dynamic sites, including:
Improved connection management, by multiplexing client connections and HTTP keep-alive
Prefetching of uncachable web responses
Dynamic cache control
On-the-fly compression
Full page caching
Off-loading SSL termination
Response based TTL-assignment (bending)
TCP optimization
Route optimization
Techniques
TCP multiplexing
An edge device capable of TCP multiplexing, either an ADC or a CDN, can be placed between web servers and clients to offload origin servers and accelerate content delivery.
Usually, each connection between client and server requires a dedicated process that lives on the origin for the duration of the connection. When clients have a slow connection, this ties up part of the origin server, because the process has to stay alive while the server waits for a complete request. With TCP multiplexing the situation is different: the device obtains a complete and valid request from the client and only then sends it to the origin. This offloads application and database servers, which are slower and more expensive to run than ADCs or CDNs.
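The buffering behaviour described above can be sketched as follows; this is a toy illustration with hypothetical names, not a real ADC implementation. Bytes from a slow client accumulate at the edge, and the origin is contacted only once the request's header block is complete:

```python
class EdgeBuffer:
    """Accumulates client bytes and forwards a request to the origin only
    once it has fully arrived (end of HTTP headers, b"\\r\\n\\r\\n")."""

    def __init__(self, send_to_origin):
        self.data = b""
        self.send_to_origin = send_to_origin  # callback to the origin server

    def on_client_bytes(self, chunk):
        self.data += chunk
        if b"\r\n\r\n" in self.data:          # request fully arrived
            self.send_to_origin(self.data)
            self.data = b""
```

However slowly the client trickles in its chunks, the origin sees exactly one call per request, which is what frees origin processes from waiting on slow connections.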
Dynamic cache control
HTTP has a built-in system for cache control, using headers such as ETag, "expires" and "last modified". Many CDNs and ADCs that claim to offer DSA have replaced this with their own systems, calling it dynamic caching or dynamic cache control. These give them more options to invalidate and bypass the cache than standard HTTP cache control.
The purpose of dynamic cache control is to increase the cache-hit ratio of a website, which is the rate between requests served by the cache and those served by the normal server.
Due to the dynamic nature of web 2.0 websites, it is difficult to use static web caching. The reason is that dynamic sites, by definition, have personalized content for different users and regions. For example, mobile users may see different content from what desktop users see, and registered users may need to see different content from what anonymous users see. Even among registered users, content may vary widely, a common example being social media websites.
Static caching of dynamic, user-specific pages introduces a potential risk of serving irrelevant content, or a third party's content, to the wrong users, if the identifier that allows the caching system to differentiate content (the URL/GET request) is not correctly varied by appending user-specific tokens/keys to it.
Dynamic cache control has more options to configure caching, such as cookie-based cache control, which allows serving content from cache based on the presence or absence of specific cookies. A cookie stores the unique identifier key of a logged-in user on their device, and it is already used to authenticate users whenever a page that opens a session is executed. In a dynamic caching system, caches are keyed by the URL as well as by cookie keys, making it possible to serve default caches to anonymous users and personalized caches to logged-in users, without modifying the application code to append additional user identifiers to the URL as a static caching system would require.
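As a sketch of the idea (all names here, including the "session_id" cookie, are illustrative assumptions rather than any vendor's API), a cache key can vary on both the URL path and a session cookie, so anonymous users share one cached copy while each logged-in user gets their own entry:

```python
from urllib.parse import urlsplit

cache = {}  # in-memory stand-in for an edge cache

def cache_key(url, cookies):
    """Key on the URL path plus the session cookie, if any."""
    path = urlsplit(url).path
    session = cookies.get("session_id")   # None for anonymous visitors
    return (path, session)

def fetch(url, cookies, render_page):
    """Serve from cache when possible; otherwise render at the origin."""
    key = cache_key(url, cookies)
    if key not in cache:
        cache[key] = render_page(url, cookies)   # cache miss
    return cache[key]
```

With this keying, repeated anonymous requests for the same path hit the origin once, while two different logged-in users never see each other's cached pages.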
Prefetching
If personalized content cannot be cached, it can be queued on an edge device. This means the system stores a list of possible responses that might be needed in the future, allowing them to be served readily. This differs from caching in that prefetched responses are only served once; it is especially useful for accelerating responses of third-party APIs, such as advertisements.
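A toy sketch of the serve-once property that distinguishes prefetching from caching (the function names are ours, purely illustrative):

```python
from collections import deque

# Responses fetched ahead of time wait in a queue; each is served exactly
# once, unlike a cache entry, which can be served repeatedly.
prefetched = deque()

def prefetch(response):
    prefetched.append(response)

def serve(fetch_live):
    """Serve the oldest prefetched response, or fall back to a live fetch."""
    return prefetched.popleft() if prefetched else fetch_live()
```

Once the queue is drained, requests fall through to a live fetch, which is why prefetching suits one-shot third-party responses such as advertisements.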
Route Optimization
Route optimization, also known as "latency-based routing", optimizes the route of traffic between clients and the different origin servers in order to minimize latency. Route optimization can be done by a DNS provider or by a CDN.
Route optimization comes down to measuring multiple paths between the client and origin server, and then recording the fastest path between them. This path is then used to serve content when a client in a specific geographical zone makes a request.
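The measure-and-record step can be sketched minimally (function and origin names are ours, purely illustrative): probe each candidate path a few times, take the median to resist one-off spikes, and route to the fastest.

```python
def best_route(probe, origins, samples=3):
    """Pick the origin with the lowest median probed latency.

    probe(origin) returns one latency measurement in milliseconds.
    """
    def median_latency(origin):
        times = sorted(probe(origin) for _ in range(samples))
        return times[len(times) // 2]     # median resists outlier spikes
    return min(origins, key=median_latency)
```

A real CDN or DNS provider would run such probes per client region and cache the winner, so that later requests from that zone are answered over the recorded fastest path.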
Relationship with Front-end Optimization
Although Front-end Optimization (FEO) and DSA both describe a group of techniques to improve online content delivery, they work over different aspects. There are overlaps, such as on-the-fly data compression and improved cache-control, however, the key differences are:
FEO focuses on changing the actual content, whereas DSA focuses on improving content delivery without touching the content (i.e. DSA delivers content verbatim). DSA optimizes bit delivery across the network without changing the content, while FEO aims to decrease the number of objects required to download a website and to decrease the total amount of traffic. This can be done by device-aware content serving (e.g. lowering image quality), minification, resource consolidation and inlining. Because FEO changes the actual traffic, configuration tends to be more difficult, as there is a risk of degrading the user experience by serving content that was incorrectly changed.
DSA focuses on decreasing page-loading times and offloading web servers, especially for dynamic sites. FEO focuses primarily on decreasing page-loading times and reducing bandwidth. Still, implementing FEO can also yield cost savings on origin servers, since it decreases page-loading time without rewriting code, saving the man-hours that would normally be needed to optimize the code. Revenue might also increase from lower page-loading times.
References
Computer networking
Web services | Dynamic site acceleration | [
"Technology",
"Engineering"
] | 1,177 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
42,642,303 | https://en.wikipedia.org/wiki/SLAC%20bag%20model | The SLAC bag model is a simple theoretical model for a possible structure for hadrons. The MIT bag model is another similar model. The "SLAC" in the name stands for Stanford Linear Accelerator Center.
The chiral bag model is a variant of the MIT bag model that couples pions to the bag boundary, with the pion field being modeled by the skyrmion. In this model, the boundary condition is that the axial vector current is continuous across the boundary, with free (non-interacting) quarks on the inside, obeying the boundary condition.
See also
Fermi ball
Partons
References
Particle physics | SLAC bag model | [
"Physics"
] | 128 | [
"Nuclear and atomic physics stubs",
"Particle physics",
"Nuclear physics"
] |
36,995,178 | https://en.wikipedia.org/wiki/RxNorm | RxNorm is a US-specific terminology in medicine that contains all medications available on the US market. It can also be used in personal health records applications. RxNorm is part of the Unified Medical Language System (UMLS) and is maintained by the United States National Library of Medicine (NLM).
Use
NLM provides six APIs related to RxNorm. There is also a web application called RxMix that allows users to access the RxNorm APIs without writing their own programs.
See also
Anatomical Therapeutic Chemical Classification System
References
External links
National Institutes of Health
Pharmacological classification systems
United States National Library of Medicine | RxNorm | [
"Chemistry"
] | 135 | [
"Pharmacological classification systems",
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
36,996,546 | https://en.wikipedia.org/wiki/Chemical%20reaction%20network%20theory | Chemical reaction network theory is an area of applied mathematics that attempts to model the behaviour of real-world chemical systems. Since its foundation in the 1960s, it has attracted a growing research community, mainly due to its applications in biochemistry and theoretical chemistry. It has also attracted interest from pure mathematicians due to the interesting problems that arise from the mathematical structures involved.
History
Dynamical properties of reaction networks were studied in chemistry and physics after the invention of the law of mass action. The essential steps in this study were introduction of detailed balance for the complex chemical reactions by Rudolf Wegscheider (1901), development of the quantitative theory of chemical chain reactions by Nikolay Semyonov (1934), development of kinetics of catalytic reactions by Cyril Norman Hinshelwood, and many other results.
Three eras of chemical dynamics can be revealed in the flux of research and publications. These eras may be associated with leaders: the first is the van 't Hoff era, the second may be called the Semenov–Hinshelwood era and the third is definitely the Aris era.
The "eras" may be distinguished based on the main focuses of the scientific leaders:
van’t Hoff was searching for the general law of chemical reaction related to specific chemical properties. The term "chemical dynamics" belongs to van’t Hoff.
The Semenov–Hinshelwood focus was the explanation of critical phenomena observed in many chemical systems, in particular in flames. The concept of chain reactions elaborated by these researchers influenced many sciences, especially nuclear physics and engineering.
Aris’ activity was concentrated on the detailed systematization of mathematical ideas and approaches.
The mathematical discipline of "chemical reaction network theory" was originated by Rutherford Aris, a famous expert in chemical engineering, with the support of Clifford Truesdell, the founder and editor-in-chief of the journal Archive for Rational Mechanics and Analysis. The paper of R. Aris in this journal was communicated to the journal by C. Truesdell. It opened a series of papers by other authors (which were then communicated by R. Aris himself). The well-known papers of this series include the works of Frederick J. Krambeck, Roy Jackson, Friedrich Josef Maria Horn, Martin Feinberg and others, published in the 1970s. In his second "prolegomena" paper, R. Aris mentioned the work of N.Z. Shapiro and L.S. Shapley (1965), in which an important part of his scientific program was realized.
Since then, the chemical reaction network theory has been further developed by a large number of researchers internationally, e.g.: P. De Leenheer, D. Angeli and E. D. Sontag, "Monotone chemical reaction networks", J. Math. Chem., 41(3):295–314, 2007; G. Craciun and C. Pantea, "Identifiability of chemical reaction networks", J. Math. Chem., 44:1, 2008; A. N. Gorban and G. S. Yablonsky, "Extended detailed balance for systems with irreversible reactions", Chemical Engineering Science, 66:5388–5399, 2011; I. Otero-Muras, J. R. Banga and A. A. Alonso, "Characterizing multistationarity regimes in biochemical reaction networks", PLoS ONE, 7(7):e39194, 2012.
Overview
A chemical reaction network (often abbreviated to CRN) comprises a set of reactants, a set of products (often intersecting the set of reactants), and a set of reactions. For example, the pair of combustion reactions

2H2 + O2 → 2H2O
C + O2 → CO2

form a reaction network. The reactions are represented by the arrows. The reactants appear to the left of the arrows; in this example they are H2 (hydrogen), O2 (oxygen) and C (carbon). The products appear to the right of the arrows; here they are H2O (water) and CO2 (carbon dioxide). In this example, since the reactions are irreversible and neither of the products is used up in the reactions, the set of reactants and the set of products are disjoint.
Mathematical modelling of chemical reaction networks usually focuses on what happens to the concentrations of the various chemicals involved as time passes. Following the example above, let x1 represent the concentration of H2 in the surrounding air, x2 the concentration of O2, x3 the concentration of H2O, and so on. Since all of these concentrations will not in general remain constant, they are written as functions of time, e.g. x1(t), x2(t), etc.
These variables can then be combined into a vector

x(t) = (x1(t), x2(t), …, xn(t)),

and their evolution with time can be written

dx/dt = f(x(t)).

This is an example of a continuous autonomous dynamical system, commonly written in the form ẋ = f(x). The number of molecules of each reactant used up each time a reaction occurs is constant, as is the number of molecules produced of each product. These numbers are referred to as the stoichiometry of the reaction, and the difference between the two (i.e. the overall number of molecules used up or produced) is the net stoichiometry. This means that the equation representing the chemical reaction network can be rewritten as

dx/dt = Γ v(x(t)).

Here, each column of the constant matrix Γ represents the net stoichiometry of a reaction, and so Γ is called the stoichiometry matrix. v is a vector-valued function where each output value represents a reaction rate, referred to as the kinetics.
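As an illustration, the combustion network above can be simulated numerically. Mass-action kinetics and the rate constants k1, k2, along with the forward-Euler integrator, are choices made for this sketch, not part of the theory:

```python
import numpy as np

# Species order: (H2, O2, H2O, C, CO2); columns are the net stoichiometries
# of the two reactions R1: 2H2 + O2 -> 2H2O and R2: C + O2 -> CO2.
GAMMA = np.array([
    [-2.0,  0.0],   # H2
    [-1.0, -1.0],   # O2
    [ 2.0,  0.0],   # H2O
    [ 0.0, -1.0],   # C
    [ 0.0,  1.0],   # CO2
])

def v(x, k1=1.0, k2=0.5):
    """Mass-action kinetics: each rate is a constant times the product of
    reactant concentrations raised to their stoichiometric coefficients."""
    h2, o2, h2o, c, co2 = x
    return np.array([k1 * h2**2 * o2, k2 * c * o2])

def simulate(x0, t_end=20.0, dt=1e-3):
    """Forward-Euler integration of dx/dt = GAMMA @ v(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        x = x + dt * GAMMA @ v(x)
    return x

x_final = simulate([1.0, 2.0, 0.0, 1.0, 0.0])
```

Because elemental balances are linear invariants of the stoichiometry matrix, quantities such as total hydrogen (2[H2] + 2[H2O]) and total carbon ([C] + [CO2]) stay constant along the simulated trajectory, which is a convenient sanity check.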
Common assumptions
For physical reasons, it is usually assumed that reactant concentrations cannot be negative, and that each reaction only takes place if all its reactants are present, i.e. all have non-zero concentration. For mathematical reasons, it is usually assumed that is continuously differentiable.
It is also commonly assumed that no reaction features the same chemical as both a reactant and a product (i.e. no catalysis or autocatalysis), and that increasing the concentration of a reactant increases the rate of any reactions that use it up. This second assumption is compatible with all physically reasonable kinetics, including mass action, Michaelis–Menten and Hill kinetics. Sometimes further assumptions are made about reaction rates, e.g. that all reactions obey mass action kinetics.
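The three rate laws named above can be written down directly. The parameter values below are arbitrary illustrations, and the assertions check the two assumed properties: zero rate when the reactant is absent, and monotone increase with its concentration:

```python
def mass_action(s, k=2.0):
    """Mass action: rate proportional to the reactant concentration."""
    return k * s

def michaelis_menten(s, vmax=1.0, km=0.5):
    """Michaelis-Menten: saturating rate law, approaches vmax for large s."""
    return vmax * s / (km + s)

def hill(s, vmax=1.0, k=0.5, n=3):
    """Hill kinetics: sigmoidal rate law with cooperativity exponent n."""
    return vmax * s**n / (k**n + s**n)

grid = [0.1 * i for i in range(1, 50)]
for rate in (mass_action, michaelis_menten, hill):
    assert rate(0.0) == 0.0                      # no reactant, no reaction
    values = [rate(s) for s in grid]
    assert all(b > a for a, b in zip(values, values[1:]))   # monotone increase
```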
Other assumptions include mass balance, constant temperature, constant pressure, spatially uniform concentration of reactants, and so on.
Types of results
As chemical reaction network theory is a diverse and well-established area of research, there is a significant variety of results. Some key areas are outlined below.
Number of steady states
These results relate to whether a chemical reaction network can produce significantly different behaviour depending on the initial concentrations of its constituent reactants. This has applications in e.g. modelling biological switches—a high concentration of a key chemical at steady state could represent a biological process being "switched on" whereas a low concentration would represent being "switched off".
For example, the catalytic trigger is the simplest catalytic reaction without autocatalysis that allows multiplicity of steady states (1976):

A2 + 2Z → 2AZ
B + Z → BZ
AZ + BZ → AB + 2Z

(V.I. Bykov, V.I. Elokhin, G.S. Yablonskii, "The simplest catalytic mechanism permitting several steady states of the surface", React. Kinet. Catal. Lett., 4(2):191–198, 1976.)
This is the classical adsorption mechanism of catalytic oxidation.
Here, A2, B and AB are gases (for example, O2, CO and CO2), Z is the "adsorption place" on the surface of the solid catalyst (for example, Pt), AZ and BZ are the intermediates on the surface (adatoms, adsorbed molecules or radicals).
This system may have two stable steady states of the surface for the same concentrations of the gaseous components.
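The bistability can be demonstrated numerically. In this sketch, mass-action surface kinetics are assumed, the gas concentrations are folded into effective rate constants, and all numerical values are illustrative choices rather than data from the cited paper; coverages are normalised so that z + θA + θB = 1:

```python
def rates(theta_a, theta_b, k1=1.0, k2=0.5, k3=1.0):
    """Mass-action rates for the three steps of the catalytic trigger."""
    z = 1.0 - theta_a - theta_b          # fraction of free adsorption places
    r1 = k1 * z * z                      # A2 + 2Z -> 2AZ (dissociative adsorption)
    r2 = k2 * z                          # B  +  Z -> BZ
    r3 = k3 * theta_a * theta_b          # AZ + BZ -> AB + 2Z
    return r1, r2, r3

def evolve(theta_a, theta_b, t_end=400.0, dt=0.01):
    """Forward-Euler integration of the surface coverages."""
    for _ in range(int(t_end / dt)):
        r1, r2, r3 = rates(theta_a, theta_b)
        theta_a += dt * (2.0 * r1 - r3)
        theta_b += dt * (r2 - r3)
    return theta_a, theta_b

reactive = evolve(0.0, 0.0)    # clean surface -> reactive steady state
poisoned = evolve(0.0, 0.95)   # almost B-covered surface -> stays poisoned
```

For these constants, the same gas-phase conditions admit a reactive steady state near (θA, θB) = (0.5, 0.25) and a B-poisoned state near (0, 1); which one is reached depends on the initial surface composition.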
Stability of steady states
Stability determines whether a given steady state solution is likely to be observed in reality. Since real systems (unlike deterministic models) tend to be subject to random background noise, an unstable steady state solution is unlikely to be observed in practice. Instead, stable oscillations or other types of attractors may appear.
Persistence
Persistence has its roots in population dynamics. A non-persistent species in population dynamics can go extinct for some (or all) initial conditions. Similar questions are of interest to chemists and biochemists, i.e. if a given reactant was present to start with, can it ever be completely used up?
Existence of stable periodic solutions
Results regarding stable periodic solutions attempt to rule out "unusual" behaviour. If a given chemical reaction network admits a stable periodic solution, then some initial conditions will converge to an infinite cycle of oscillating reactant concentrations. For some parameter values it may even exhibit quasiperiodic or chaotic behaviour. While stable periodic solutions are unusual in real-world chemical reaction networks, well-known examples exist, such as the Belousov–Zhabotinsky reactions. The simplest catalytic oscillator (nonlinear self-oscillations without autocatalysis)
can be produced from the catalytic trigger by adding a "buffer" step

B + Z ⇌ (BZ),

where (BZ) is an intermediate that does not participate in the main reaction.
Network structure and dynamical properties
One of the main problems of chemical reaction network theory is the connection between network structure and properties of dynamics. This connection is important even for linear systems, for example, the simple cycle with equal interaction weights has the slowest decay of the oscillations among all linear systems with the same number of states.
For nonlinear systems, many connections between structure and dynamics have been discovered. First of all, these are results about stability. For some classes of networks, explicit construction of Lyapunov functions is possible without a priori assumptions about special relations between rate constants. Two results of this type are well known: the deficiency zero theorem and the theorem about systems without interactions between different components.
The deficiency zero theorem gives sufficient conditions for the existence of a Lyapunov function in the classical free-energy form G = Σi ci(ln(ci/ci^eq) − 1), where ci is the concentration of the ith component. The theorem about systems without interactions between different components states that if a network consists of reactions of the form αr Air → Σj βrj Aj (for r = 1, …, m, where m is the number of reactions, Ai is the symbol of the ith component, and αr, βrj are non-negative integers) and allows the stoichiometric conservation law M = Σi mi ci (where all mi > 0), then the weighted L1 distance between two solutions with the same M(c) monotonically decreases in time.
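The monotone decrease of the weighted L1 distance can be checked numerically on a small single-reactant network A → B → C → A with first-order kinetics; the rate constants and initial conditions below are illustrative, and the weights are mi = 1:

```python
import numpy as np

# Generator of the first-order network A -> B (k=1), B -> C (k=1), C -> A (k=0.5);
# every reaction has a single reactant, so the theorem applies with m_i = 1.
K = np.array([
    [-1.0,  0.0,  0.5],
    [ 1.0, -1.0,  0.0],
    [ 0.0,  1.0, -0.5],
])

def step(x, dt=0.01):
    """One forward-Euler step of dx/dt = K x."""
    return x + dt * K @ x

a = np.array([1.0, 0.0, 0.0])   # two solutions with the same
b = np.array([0.2, 0.3, 0.5])   # conserved total mass M = 1
dists = []
for _ in range(5000):
    dists.append(np.abs(a - b).sum())   # weighted L1 distance, m_i = 1
    a, b = step(a), step(b)
dists.append(np.abs(a - b).sum())
```

For small dt, the Euler step matrix is column-stochastic, so the distance is non-increasing at every step; since the network is a single irreducible cycle, the distance in fact decays to zero.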
Model reduction
Modelling of large reaction networks meets various difficulties: the models include too many unknown parameters and high dimension makes the modelling computationally expensive. The model reduction methods were developed together with the first theories of complex chemical reactions. Three simple basic ideas have been invented:
The quasi-equilibrium (or pseudo-equilibrium, or partial equilibrium) approximation (a fraction of reactions approach their equilibrium fast enough and, after that, remain almost equilibrated).
The quasi steady state approximation or QSS (some of the species, very often some of the intermediates or radicals, exist in relatively small amounts; they quickly reach their QSS concentrations and then follow, as dependent quantities, the dynamics of the other species, remaining close to the QSS). The QSS is defined as the steady state under the condition that the concentrations of the other species do not change.
The limiting step or bottleneck is a relatively small part of the reaction network, in the simplest cases a single reaction, whose rate is a good approximation to the reaction rate of the whole network.
The quasi-equilibrium approximation and the quasi steady state methods were developed further into the methods of slow invariant manifolds and computational singular perturbation. The methods of limiting steps gave rise to many methods of the analysis of the reaction graph.
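The QSS idea can be illustrated on the textbook enzyme mechanism E + S ⇌ ES → E + P (not taken from the text above; rate constants are illustrative). Setting the intermediate ES to its quasi steady state reduces the full two-variable model to the Michaelis–Menten rate law:

```python
def full_model(s0=10.0, e0=0.1, k1=1.0, k_1=1.0, k2=1.0, t_end=50.0, dt=1e-3):
    """Forward-Euler integration of E + S <-> ES -> E + P; returns final [S]."""
    s, es = s0, 0.0
    for _ in range(int(t_end / dt)):
        e = e0 - es                                  # enzyme conservation
        bind, unbind, cat = k1 * e * s, k_1 * es, k2 * es
        s += dt * (unbind - bind)
        es += dt * (bind - unbind - cat)
    return s

def qss_model(s0=10.0, e0=0.1, k1=1.0, k_1=1.0, k2=1.0, t_end=50.0, dt=1e-3):
    """Reduced model: ds/dt = -k2*e0*s/(Km + s), with Km = (k_1 + k2)/k1."""
    km, s = (k_1 + k2) / k1, s0
    for _ in range(int(t_end / dt)):
        s -= dt * k2 * e0 * s / (km + s)
    return s

s_full, s_qss = full_model(), qss_model()
```

Because the total enzyme e0 is small compared with Km + s0, the reduced model tracks the full one closely; increasing e0 degrades the approximation, which is exactly the smallness condition stated above.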
References
External links
Specialist wiki on the mathematics of reaction networks
Mathematical chemistry | Chemical reaction network theory | [
"Chemistry",
"Mathematics"
] | 2,458 | [
"Drug discovery",
"Applied mathematics",
"Molecular modelling",
"Mathematical chemistry",
"Theoretical chemistry"
] |
36,997,969 | https://en.wikipedia.org/wiki/Thyristor-switched%20capacitor | A thyristor-switched capacitor (TSC) is a type of equipment used for compensating reactive power in electrical power systems. It consists of a power capacitor connected in series with a bidirectional thyristor valve and, usually, a current limiting reactor (inductor). The thyristor switched capacitor is an important component of a Static VAR Compensator (SVC), where it is often used in conjunction with a thyristor controlled reactor (TCR). Static VAR compensators are a member of the Flexible AC transmission system (FACTS) family.
Circuit diagram
A TSC is usually a three-phase assembly, connected either in a delta or a star arrangement. Unlike the TCR, a TSC generates no harmonics and so requires no filtering. For this reason, some SVCs have been built with only TSCs. This can lead to a relatively cost-effective solution where the SVC only requires capacitive reactive power, although a disadvantage is that the reactive power output can only be varied in steps. Continuously variable reactive power output is only possible where the SVC contains a TCR or another variable element such as a STATCOM.
Operating principles
Unlike the TCR, the TSC is only ever operated fully on or fully off. An attempt to operate a TSC in "phase control" would result in the generation of very large amplitude resonant currents, leading to overheating of the capacitor bank and thyristor valve, and harmonic distortion in the AC system to which the SVC is connected.
Steady state current
When the TSC is on, or "deblocked", the current leads the voltage by 90° (as with any capacitor). The rms current is given by:

Itsc = (Vsvc / √3) × (2πf·Ctsc) / (1 − (2πf)²·Ltsc·Ctsc)

Where:
Vsvc is the rms value of the line-to-line busbar voltage to which the SVC is connected
Ctsc is the total TSC capacitance per phase
Ltsc is the total TSC inductance per phase
f is the frequency of the AC system
The TSC forms an inductor-capacitor (LC) resonant circuit with a characteristic frequency of:

fr = 1 / (2π·√(Ltsc·Ctsc))

The tuned frequency is usually chosen to be in the range 150-250 Hz on 60 Hz systems or 120-210 Hz on 50 Hz systems. It is an economic choice between the size of the TSC reactor (which increases with decreasing frequency) and the need to protect the thyristor valve from excessive oscillatory currents when the TSC is turned on at an incorrect point of wave ("misfiring").
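The two formulas above can be evaluated together. All numerical values below (capacitance, inductance, busbar voltage, star connection) are illustrative assumptions, chosen so that the tuning frequency lands in the quoted 120-210 Hz band for a 50 Hz system:

```python
import math

f = 50.0        # grid frequency, Hz
C = 100e-6      # Ctsc, capacitance per phase, F (assumed)
L = 8.3e-3      # Ltsc, reactor inductance per phase, H (assumed)
V_ll = 33e3     # Vsvc, line-to-line busbar voltage, V (assumed)

# Characteristic (tuning) frequency of the LC circuit
f_r = 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Steady-state rms current per phase; the 1/(1 - w^2*L*C) factor shows how
# the series reactor slightly magnifies the capacitor current
w = 2.0 * math.pi * f
I_tsc = (V_ll / math.sqrt(3)) * w * C / (1.0 - w**2 * L * C)
```

With these numbers f_r ≈ 175 Hz (n = f_r/f = 3.5), and the reactor raises the fundamental current by about 9% over a plain capacitor bank.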
The TSC is usually tuned to a non-integer harmonic of the mains frequency so as to avoid the risk of the TSC being overloaded by harmonic currents flowing into it from the AC system.
Off-state voltage
When the TSC is switched off, or "blocked", no current flows and the voltage is supported by the thyristor valve. After the TSC has been switched off for a long time (hours) the capacitor will be fully discharged, and the thyristor valve will experience only the AC voltage of the SVC busbar. However, when the TSC turns off, it does so at zero current, corresponding to peak capacitor voltage. The capacitor only discharges very slowly, so the voltage experienced by the thyristor valve will reach a peak of more than twice the peak AC voltage, about half a cycle after blocking. The thyristor valve needs to contain enough thyristors in series to withstand this voltage safely.
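A short numerical check of this statement, with the AC waveform and the trapped capacitor charge expressed in per-unit (the sampling grid is just an implementation detail of the sketch):

```python
import math

V_peak = 1.0              # peak AC voltage, per-unit
V_trapped = V_peak        # capacitor blocked at a voltage peak

# Valve voltage = trapped capacitor voltage minus the AC waveform;
# scan one full cycle after blocking (1000 samples)
valve = [V_trapped - V_peak * math.cos(2.0 * math.pi * t / 1000.0)
         for t in range(1001)]
worst = max(abs(u) for u in valve)
# the worst-case valve voltage is twice the AC peak, half a cycle after blocking
```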
Deblocking – normal conditions
When the TSC is turned on ("deblocked") again, care must be taken to choose the correct instant in order to avoid creating very large oscillatory currents. Since the TSC is a resonant circuit, any sudden shock excitation will produce a high-frequency ringing effect which could damage the thyristor valve.
The optimum time to turn on a TSC is when the capacitor is still charged to its normal peak value and the turn-on command is sent at the minimum of valve voltage. If the TSC is deblocked at this point, the transition back into the conducting state will be smooth.
Deblocking – abnormal conditions
Sometimes, however, the TSC may turn on at an incorrect instant (as a result of a control or measurement fault), or the capacitor may become charged to a voltage above the normal value so that even at the minimum of valve voltage, a large transient current results. The current in the TSC will then consist of a fundamental-frequency component (50 Hz or 60 Hz) superimposed on a much larger current at the tuned frequency of the TSC. This transient current can take hundreds of milliseconds to die away, during which time the cumulative heating in the thyristors may be excessive.
Main equipment
A TSC normally comprises three main items of equipment: the main capacitor bank, the thyristor valve and a current-limiting reactor, which is usually air-cored.
Capacitor bank
The largest item of equipment in a TSC, the capacitor bank, is constructed from rack-mounted outdoor capacitor units, each unit typically having a rating in the range 500–1000 kilovars (kVAr).
TSC reactor
The function of the TSC reactor is to limit the peak current and rate of rise of current (di/dt) when the TSC turns on at an incorrect time. The reactor is usually an air-cored reactor, similar to that of a TCR, but smaller. The size and cost of the TSC reactor is heavily influenced by the tuning frequency of the TSC, lower frequencies requiring larger reactors.
The TSC reactor is usually located outside, close to the main capacitor bank.
Thyristor valve
The thyristor valve typically consists of 10-30 inverse-parallel-connected pairs of thyristors connected in series. The inverse-parallel connection is needed because most commercially available thyristors can conduct current in only one direction. The series connection is needed because the maximum voltage rating of commercially available thyristors (up to approximately 8.5 kV) is insufficient for the voltage at which the TSC is connected. For some low-voltage applications, it may be possible to avoid the series-connection of thyristors; in such cases the thyristor valve is simply an inverse-parallel connection of two thyristors.
In addition to the thyristors themselves, each inverse-parallel pair of thyristors has a resistor–capacitor "snubber" circuit connected across it, to force the voltage across the valve to divide uniformly amongst the thyristors and to damp the "commutation overshoot" which occurs when the valve turns off.
The thyristor valve for a TSC is very similar to that of a TCR, but (for a given AC voltage) generally has between 1.5 and 2 times as many thyristors connected in series because of the need to withstand both the AC voltage and the trapped capacitor voltage after blocking.
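The series-count comparison can be sketched as simple arithmetic. The voltage levels, the 8.5 kV device rating and the 1.5 design margin are illustrative assumptions, not manufacturer data:

```python
import math

def levels(v_peak_withstand, v_device=8.5e3, margin=1.5):
    """Number of series thyristor levels needed to block a peak voltage,
    with a design margin (hypothetical sizing sketch)."""
    return math.ceil(margin * v_peak_withstand / v_device)

v_ac_peak = 30e3                     # peak AC voltage across the valve (assumed)
n_tcr_like = levels(v_ac_peak)       # TCR duty: AC voltage only
n_tsc = levels(2.0 * v_ac_peak)      # TSC duty: AC plus trapped capacitor voltage
```

Here the TSC valve needs 11 levels against 6 for the comparable TCR duty, i.e. roughly the factor of 1.5 to 2 quoted above.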
The thyristor valve is usually installed in a purpose-built, ventilated building, or a modified shipping container. Cooling for the thyristors and snubber resistors is usually provided by deionised water.
Special types of TSC
Some TSCs have been built with the capacitor and inductor arranged not as a simple tuned LC circuit but rather as a damped filter. This type of arrangement is useful when the power system to which the TSC is connected contains significant levels of background harmonic distortion, or where there is a risk of resonance between the power system and the TSC.
In several “Relocatable SVCs” built for National Grid (Great Britain), three TSCs of unequal size were provided, in each case with the capacitor and inductor arranged as a “C-type” damped filter. In a C-type filter, the capacitor is split into two series-connected sections. A damping resistor is connected across one of the two capacitor sections and the inductor, the tuned frequency of this section being equal to the grid frequency. In this way, damping is provided for harmonic frequencies but the circuit incurs no power loss at grid frequency.
See also
Switched capacitor (SC)
References
External links
ABB FACTS
Alstom Grid FACTS Solutions Alstom Grid homepage
Siemens Flexible AC Transmission Systems (FACTS), Siemens, Energy Sector homepage
https://web.archive.org/web/20090614120113/http://www.amsc.com/products/transmissiongrid/static-VAR-compensators-SVC.html
https://web.archive.org/web/20111008175713/http://www.amsc.com/products/transmissiongrid/reactive-power-AC-transmission.html
Electric power
Electric power systems components | Thyristor-switched capacitor | [
"Physics",
"Engineering"
] | 1,880 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
37,003,705 | https://en.wikipedia.org/wiki/Trespa | Trespa is the brand name of a type of high-pressure laminate (HPL) plate manufactured by Trespa International BV, based in Weert, the Netherlands. Their panels are used for exterior cladding, decorative facades and interior surfaces. It is composed of woodbased fibres or Kraft paper with phenolic resin applied.
History
The company was founded by Hermann Krages (1909-1992), the son of a Bremen merchant in wood and fibreboard and the brother of the racer Louis Krages. He is known mostly today for his speculation on the German stock market. Krages initially was given a fiberboard plant in the Ore Mountains by his father, which he lost at the end of WWII. He then moved to Scheuerfeld where he acquired the Berger paper mill and began the "Deutsche Duroleum Gesellschaft", making fiberboard plates again. He expanded with new factories in Etzbach, Höxter an der Weser, Leutkirch im Allgäu and in Bremen. The company in Weert was founded in 1960.
Initially the company focused mainly on the sale and storage of panels produced in the German plant at Leutkirch. Gradually it switched to the production of hardboard for mattresses. This activity was later incorporated into the company Thermopal after Krages sold his holdings due to his stock market speculation in the 1960s. In 1964 the name changed to "Weerter Plastics Industry" (“Weerter Kunststoffen Fabrieken”, or WKF). In 1967, the company was acquired by Hoechst, which used its product for surfaces in its laboratories. In 1991 the company passed to HAL Holding NV.
Manufacture
Trespa plate is made by compressing impregnated paper or wood fibers and epoxy, phenolic or polypropylene resin at high pressure and high temperature. A special surface made with Electron Beam Curing (EBC), a coating technique developed by Trespa, ensures durability and scratch resistance. Because colored pigments can be added to the surface during curing, a variety of colors are possible. The production technique for the wood-based plates is also called "Dry Forming" technology. In this technique, cheaper prepregs are used instead of the more expensive impregnated paper layers applied in the production of fiber boards. These prepregs consist of wood fibers and thermosetting resins. The technique was applied for the first time in 1984. Because the surface of Trespa plate has a dense molecular coating, it is virtually impervious to weather (temperature, UV radiation and humidity). Contamination, such as graffiti, can also be removed quite easily. Because of these advantages, the material has been popular since the 1980s in the production of laboratory surfaces and outdoor signage, but also for shower stalls and toilet cubicles in educational, hospital, and campground facilities.
A new production line, EBC 2, was completed in 2015 to improve the quality of Trespa panel surfaces and increase production.
Today Trespa markets HPL panels under various brand names, with different qualities for indoor and outdoor applications: Trespa Virtuon for indoor applications; Trespa TopLabPLUS, Trespa TopLab BASE, Trespa TopLab VERTICAL and, newly introduced in 2022, Trespa TopLab PLUS ALIGN for laboratories; and Trespa Meteon and Pura NFC for outdoor applications.
References
History of Trespa on the Trespa website
Patents held by Trespa International on IPEXL
BePlas
Composite materials
Companies based in Limburg (Netherlands)
Buildings and structures in Weert | Trespa | [
"Physics"
] | 753 | [
"Materials",
"Composite materials",
"Matter"
] |
39,848,086 | https://en.wikipedia.org/wiki/Fiber-optic%20current%20sensor | A fiber-optic current sensor (FOCS) is a device designed to measure direct current. Utilizing a single-ended optical fiber wrapped around the current conductor, FOCS exploits the magneto-optic effect (Faraday effect). The FOCS can measure uni- or bi-directional DC currents up to 600 kA, with an accuracy within ±0.1% of the measured value.
Design
The fiber-optic current sensor uses an interferometer to measure the phase change in the light produced by a magnetic field. As it does not require a magnetic yoke, the FOCS is smaller and lighter than Hall effect current sensors, and its accuracy is not reduced by saturation effects. The inherent insulating properties of the optical fiber make it easier to maintain electrical isolation. It also does not need recalibration after installation or during its service life.
The optical phase detection circuit, light source and digital signal processor are contained within the sensor electronics; this technology has been proven in highly demanding applications such as navigation systems in the air, on land and at sea.
Interferometric fiber-optic current sensors (FOCS) send circularly polarized light along a closed loop path around the magnetic flux generated by the current in a conductor. The light reflects off a mirror and experiences a phase shift as the refractive index, and hence the effective path length, is modulated by the magnetic field, which optically induces circular birefringence. The interference pattern relative to a reference waveform yields a transduced optical intensity value corresponding to the current magnitude.
Such configurations are vulnerable to acoustic perturbations in the fiber-optic cables. Changes to the linear birefringence of the fiber cause additional phase shifts between the orthogonally polarized modes, which must be of equal magnitude to produce circular polarization: an exact quarter-wave displacement between the fast-axis and slow-axis modes is required for a circular polarization state. Additional phase shifts in the sensing network cause the circularly polarized measurement photons, which are phase-shifted in the sensing coil in proportion to the magnetic flux density, to degenerate to a random form of elliptical polarization. This degrades the interference measurement, as the measurement and reference waveforms become non-coherent at the analyzer.
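The scale of the measured effect can be sketched as follows. The Verdet constant value and turn count are assumed order-of-magnitude figures for illustration, and the reflective (double-pass) geometry is taken to give the commonly quoted phase shift Δφ = 4·V·N·I:

```python
VERDET = 1.0e-6    # Verdet constant, rad per ampere-turn (assumed magnitude)
TURNS = 20         # fiber loops around the conductor (assumed)

def faraday_phase_shift(current_a, verdet=VERDET, turns=TURNS):
    """Accumulated Faraday phase shift (rad) in a reflective FOCS: 4*V*N*I."""
    return 4.0 * verdet * turns * current_a

phi_100kA = faraday_phase_shift(100e3)
# the phase shift is strictly linear in the current, which is what makes the
# interferometric read-out a direct measurement of the current
```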
Applications
As FOCS are resistant to effects from magnetic or electrical field interferences, they are ideal for the measurement of electrical currents and high voltages in electrical power stations.
In 2013, ABB introduced a 420 kV Disconnecting Circuit Breaker (DCB) that integrates FOCS technology and replaces many conventional current transformers, thereby simplifying the engineering and design of the substation. By minimizing the need for materials (including insulation), a 420 kV DCB with integrated FOCS can reduce a substation's footprint by over 50% in comparison to conventional solutions involving live tank breakers with disconnectors and current transformers.
References
Sensors
Electrical components
Electric power distribution
Electric power systems components
de:FOCS | Fiber-optic current sensor | [
"Technology",
"Engineering"
] | 628 | [
"Electrical components",
"Measuring instruments",
"Electrical engineering",
"Sensors",
"Components"
] |
39,848,877 | https://en.wikipedia.org/wiki/Quantum%20Hall%20transitions | Quantum Hall transitions are the quantum phase transitions that occur between different robustly quantized electronic phases of the quantum Hall effect. The robust quantization of these electronic phases is due to strong localization of electrons in their disordered, two-dimensional potential. But, at the quantum Hall transition, the electron gas delocalizes as can be observed in the laboratory. This phenomenon is understood in the language of topological field theory. Here, a vacuum angle (or 'theta angle') distinguishes between topologically different sectors in the vacuum. These topological sectors correspond to the robustly quantized phases. The quantum Hall transitions can then be understood by looking at the topological excitations (instantons) that occur between those phases.
Historical perspective
Just after the first measurements on the quantum Hall effect in 1980, physicists wondered how the strongly localized electrons in the disordered potential were able to delocalize at their phase transitions. At that time, the field theory of Anderson localization did not yet include a topological angle, and hence it predicted that "for any given amount of disorder, all states in two dimensions are localized", a result that was irreconcilable with the observations on delocalization. Without knowing the solution to this problem, physicists resorted to a semi-classical picture of localized electrons that, given a certain energy, were able to percolate through the disorder. This percolation mechanism was what was assumed to delocalize the electrons.
As a result of this semi-classical idea, many numerical computations were done based on the percolation picture. On top of the classical percolation phase transition, quantum tunneling was included in computer simulations to calculate the critical exponent of the "semi-classical percolation phase transition". To compare this result with the measured critical exponent, the Fermi-liquid approximation was used, where the Coulomb interactions between electrons are assumed to be finite. Under this assumption, the ground state of the free electron gas can be adiabatically transformed into the ground state of the interacting system, and this gives rise to an inelastic scattering length so that the canonical correlation length exponent can be compared to the measured critical exponent.
But, at the quantum phase transition, the localization lengths of the electrons become infinite (i.e. they delocalize), and this compromises the Fermi-liquid assumption of an inherently free electron gas (where individual electrons must be well-distinguished). The quantum Hall transition will therefore not be in the Fermi-liquid universality class, but in the 'F-invariant' universality class, which has a different value for the critical exponent. The semi-classical percolation picture of the quantum Hall transition is therefore outdated (although still widely used), and the delocalization mechanism needs to be understood as an instanton effect.
Disorder in the sample
The random disorder in the potential landscape of the two-dimensional electron gas plays a key role in the observation of topological sectors and their instantons (phase transitions). Because of the disorder, the electrons are localized and thus they cannot flow across the sample. But if we consider a loop around a localized 2D electron, we can notice that current is still able to flow in the direction around this loop. This current is able to renormalize to larger scales and eventually becomes the Hall current that rotates along the edge of the sample. A topological sector corresponds to an integer number of rotations and it is now visible macroscopically, in the robustly quantized behavior of the measurable Hall current. If the electrons were not sufficiently localized, this measurement would be blurred out by the usual flow of current through the sample.
For the subtle observations on phase transitions it is important that the disorder is of the right kind. The random nature of the potential landscape should be apparent on a scale sufficiently smaller than the sample size in order to clearly distinguish the different phases of the system. These phases are only observable by the principle of emergence, so the difference between self-similar scales has to be multiple orders of magnitude for the critical exponent to be well-defined. On the opposite side, when the disorder correlation length is too small, the states are not sufficiently localized to observe them delocalize.
Renormalization group flow diagram
On the basis of the Renormalization Group Theory of the instanton vacuum one can form a general flow diagram where the topological sectors are represented by attractive fixed points. When scaling the effective system to larger sizes, the system generally flows to a stable phase at one of these points and as we can see in the flow diagram on the right, the longitudinal conductivity will vanish and the Hall conductivity takes on a quantized value. If we started with a Hall conductivity that is halfway between two attractive points, we would end up on the phase transition between topological sectors. As long as the symmetry isn't broken, the longitudinal conductivity doesn't vanish and is even able to increase when scaling to a larger system size. In the flow diagram, we see fixed points that are repulsive in the direction of the Hall current and attractive in the direction of the longitudinal current. It is most interesting to approach these fixed saddle points as close as possible and measure the (universal) behavior of the quantum Hall transitions.
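Such a flow diagram can be caricatured with a toy beta function (this is only an illustrative sketch, not Pruisken's actual two-parameter flow): the Hall conductivity, in units of e²/h, is attracted to the nearest integer fixed point, while the longitudinal conductivity decays away from the half-integer transition points:

```python
import math

def toy_flow(sxy, sxx, steps=4000, dl=0.01):
    """Toy RG flow: integers are attractive fixed points for sxy,
    half-integers are the repulsive transition points."""
    for _ in range(steps):
        sxy -= dl * math.sin(2.0 * math.pi * sxy)
        sxx -= dl * sxx * math.cos(2.0 * math.pi * sxy)
    return sxy, sxx

plateau_0 = toy_flow(0.3, 0.5)   # starts below the transition at 1/2
plateau_1 = toy_flow(0.7, 0.5)   # starts above it
```

Starting on either side of the half-integer point, the system flows to the adjacent quantized plateau with vanishing longitudinal conductivity; exactly at the half-integer point, sxy does not move, mimicking the saddle points of the diagram.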
Super-universality
If the system is rescaled, the change in conductivity depends only on the distance between a fixed saddle point and the conductivity. The scaling behavior near the quantum Hall transitions is then universal and different quantum Hall samples will give the same scaling results. But, by studying the quantum Hall transitions theoretically, many different systems that are all in different universality classes have been found to share a super-universal fixed point structure. This means that many different systems that are all in different universality classes still share the same fixed point structure. They all have stable topological sectors and also share other super-universal features. That these features are super-universal is due to the fundamental nature of the vacuum angle that governs the scaling behavior of the systems. The topological vacuum angle can be constructed in any quantum field theory but only under the right circumstances can its features be observed. The vacuum angle also appears in quantum chromodynamics and might have been important in the formation of the early universe.
See also
Quantum Hall effect
Anderson localization
Fermi-liquid theory
Instantons
Universality (dynamical systems)
References
Hall effect
Phase transitions | Quantum Hall transitions | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,310 | [
"Physical phenomena",
"Phase transitions",
"Hall effect",
"Phases of matter",
"Critical phenomena",
"Electric and magnetic fields in matter",
"Electrical phenomena",
"Statistical mechanics",
"Solid state engineering",
"Matter"
] |
54,132,405 | https://en.wikipedia.org/wiki/Zden%C4%9Bk%20Herman | Zdeněk Herman (24 March 1934 – 25 February 2021) was a Czech physical chemist.
Life and work
Herman was born on 24 March 1934 in Libušín. He studied physical chemistry and radiochemistry at the School of Mathematics and Physics of Charles University, Prague (1952–1957). He then joined the Institute of Physical Chemistry of the Czech Academy of Sciences, to which he remained affiliated.
Herman's early work, with Vladimír Čermák, concerned mass spectrometric studies of the kinetics of collision and ionization processes of ions (chemical reactions of ions, Penning and associative ionization). During his post-doctoral years (1964–1965), with Richard Wolfgang at Yale University, Herman built one of the first crossed beam machines to study ion-molecule processes.
Herman also built an improved crossed beam machine that was used in Prague with colleagues to investigate the dynamics of ion-molecule and charge transfer reactions of cations and dications, and ion-surface collisions by the scattering method (1970–2010).
Herman published over 240 scientific articles in this field.
Awards
Herman's academic awards include the Jan Marcus Marci Medal (Czech Spectroscopic Society, 1989), the Alexander von Humboldt Research Prize (awarded in Germany in 1992, the first time the prize was awarded to a Czech natural scientist), the Česká hlava ("Czech Head") National Prize for lifetime achievements (2003), an Honorary Degree from the Leopold-Franzens University in Innsbruck (2007), and honorary membership of the Czech Mass Spectrometric Society.
Special honorary issues of The Journal of Physical Chemistry (1995) and The International Journal of Mass Spectrometry (2009) were issued to celebrate his 60th and 75th birthdays respectively. Since 2014 the Resonance Foundation awards "The Zdeněk Herman Prize" for the best PhD thesis in chemical physics and mass spectrometry. Since 2016 the international conference MOLEC (Dynamics of Molecular Systems) awards the "Zdeněk Herman Young Scientist Prize".
In his free time, Herman painted and sculpted, and exhibited his work on several occasions. Busts by Herman of founders of several institutes of the Academy of Sciences are on display at those institutes. Three statues sculpted by Herman stand in the countryside around Rakovník (e.g., in the park in Pavlíkov).
References
1934 births
2021 deaths
Czech chemists
Physical chemists
Charles University alumni
People from Libušín | Zdeněk Herman | [
"Chemistry"
] | 505 | [
"Physical chemists"
] |
54,136,572 | https://en.wikipedia.org/wiki/Glass%20breaker | A glass breaker is a hand tool designed to break through a window glass in an emergency. It is a common safety device found in vehicles to aid in the emergency extrication of occupants from a vehicle, as well as in some buildings.
Most glass breakers are standalone devices containing a sharp, pointed metal tip for breaking tempered glass, and many also feature a sharp shielded knife for slicing through seatbelts. There are also many examples of glass breakers being built into other tools, such as flashlights or multitools.
Materials
One variation found in glass breakers is the material from which the metal tip is made. Although all glass breakers are made of strong materials, some make breaking glass easier than others, depending on the hardness of the metal the tip is made of. It is often believed that tips made of harder metals make breaking the glass easier, and some sources have found that tungsten carbide tips make breaking glass less difficult. However, laminated glass cannot be broken by any glass breaker on the market (though it can be broken by other sharp tools, such as a chisel and hammer); glass breakers are only effective on tempered windows.
Types
Glass breakers come in two main styles: hammer and spring-loaded.
Glass breakers can also be divided into categories based on size, and on whether they are intended to remain stationary in the vehicle or to be carried every day by a person. Compared to emergency hammers such as those used in buses, glass breakers for personal use are generally smaller in size. They are often incorporated into other items such as flashlights and multi-tools, and there are many variations of glass breakers on the market.
Emergency hammer
An emergency hammer is a type of glass breaker shaped like a hammer. Emergency hammers are also known under various names, such as bus mallets, dotty hammers, safety mallets, and bus hammers. Many are attached to a cable or an alarm device to deter theft or vandalism.
It is a simple tool with a plastic handle and a steel tip. Its primary use is for breaking through vehicle windows and vertical glazing, which are often tempered, in the event of a crash which prevents exit through the doors. They are commonly found on public transport, in particular trains and buses, and in buildings worldwide (except in North America, i.e., Canada and the United States). There can also be a cutting tool at the other end of the hammer, used for cutting through seatbelts in the event that they are inhibiting a passenger's exit.
Emergency hammers can be purchased by consumers in store for their vehicles, homes, hotels etc. to provide a means of escape should the doors/windows become unusable, such as in a collision, if the vehicle falls into water and is sinking or there is a fire within a building.
Spring-loaded glass breaker
A spring-loaded glass breaker removes the need to swing a tool to break a window. To break the glass, the metal tip is held up against the window and a pin is pulled back and released to activate a spring, either automatically or manually. Spring-loaded glass breakers can be designed to be used underwater or simply to reduce the strength needed to shatter a window. When a car is submerged underwater, a hammer-style glass breaker becomes significantly more difficult to use. While spring-loaded glass breakers can be convenient in some cases, most are limited by the amount of force the spring can deliver, which could limit the user's ability to break tougher glass.
Gunpowder driven glass breaker
Larry Goodman's safety glass breaker has many similarities to a spring-loaded glass breaker, but instead of relying on a stored spring force, it depends on detonating a .22 blank cartridge with a spring-loaded firing pin, driving the glass striker with the help of expanding gases rather than spring tension. This allegedly makes the device less susceptible to mechanical failure and applies a much greater force on the window surface, which in turn increases the likelihood of success in breaking the window.
Seatbelt cutter
A seatbelt cutter (also known as a ligature cutter or ligature knife, such as the Hoffman Design 911 Rescue Tool), with no open cutting surface, may be incorporated into the device.
History and creation
The glass breaker was invented in 1932 by Oscar Nisbett and Tobias Hockaday in the Chicago area. Glass breakers were originally used by the notorious criminals Nisbett and Hockaday to steal from expensive vehicles until their arrest in 1941, after which the glass breaker began to be mass-produced.
A patent for a glass breaker also was filed in 2001 as US patent 6,418,628.
References
External links
Tools
Hammers
Transport safety
Vehicle safety technologies | Glass breaker | [
"Physics"
] | 930 | [
"Physical systems",
"Transport",
"Transport safety"
] |
54,139,796 | https://en.wikipedia.org/wiki/Van%20der%20Corput%20inequality | In mathematics, the van der Corput inequality is a corollary of the Cauchy–Schwarz inequality that is useful in the study of correlations among vectors, and hence random variables. It is also useful in the study of equidistributed sequences, for example in the Weyl equidistribution estimate. Loosely stated, the van der Corput inequality asserts that if a unit vector in an inner product space is strongly correlated with many unit vectors , then many of the pairs must be strongly correlated with each other. Here, the notion of correlation is made precise by the inner product of the space : when the absolute value of is close to , then and are considered to be strongly correlated. (More generally, if the vectors involved are not unit vectors, then strong correlation means that .)
Statement of the inequality
Let $V$ be a real or complex inner product space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\| \cdot \|$. Suppose that $v, u_1, \ldots, u_n \in V$ and that $\| v \| = 1$. Then
$$\left( \sum_{i=1}^{n} |\langle v, u_i \rangle| \right)^2 \leq \sum_{i, j = 1}^{n} |\langle u_i, u_j \rangle| .$$
In terms of the correlation heuristic mentioned above, if $v$ is strongly correlated with many unit vectors $u_i$, then the left-hand side of the inequality will be large, which then forces a significant proportion of the vectors $u_i$ to be strongly correlated with one another.
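As an illustration (not part of the original article), the inequality can be checked numerically. The sketch below is a hypothetical example; the dimension, number of vectors, and random seed are arbitrary choices, and note that only $v$ needs to be a unit vector — the $u_i$ may be arbitrary.

```python
import math
import random

# Hedged sketch: verify (sum_i |<v,u_i>|)^2 <= sum_{i,j} |<u_i,u_j>|
# for a random unit vector v and arbitrary vectors u_1, ..., u_n.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

random.seed(0)
d, n = 5, 8  # arbitrary dimension and number of vectors
v = normalize([random.gauss(0, 1) for _ in range(d)])
us = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

lhs = sum(abs(dot(v, u)) for u in us) ** 2
rhs = sum(abs(dot(ui, uj)) for ui in us for uj in us)
assert lhs <= rhs + 1e-12  # the van der Corput inequality holds
```

Running the same check with other seeds or dimensions never produces a violation, since the inequality holds for every choice of vectors.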
Proof of the inequality
We start by noticing that for each $1 \leq i \leq n$ there exists a scalar $\varepsilon_i$ (real or complex) such that $|\varepsilon_i| = 1$ and $\varepsilon_i \langle v, u_i \rangle = |\langle v, u_i \rangle|$. Then,
$$\begin{aligned}
\left( \sum_{i=1}^{n} |\langle v, u_i \rangle| \right)^2
&= \left( \sum_{i=1}^{n} \varepsilon_i \langle v, u_i \rangle \right)^2 \\
&= \left\langle v, \sum_{i=1}^{n} \varepsilon_i u_i \right\rangle^2 && \text{since the inner product is bilinear} \\
&\leq \| v \|^2 \, \left\| \sum_{i=1}^{n} \varepsilon_i u_i \right\|^2 && \text{by the Cauchy–Schwarz inequality} \\
&= \| v \|^2 \left\langle \sum_{i=1}^{n} \varepsilon_i u_i, \sum_{j=1}^{n} \varepsilon_j u_j \right\rangle && \text{by the definition of the induced norm} \\
&= \sum_{i=1}^{n} \sum_{j=1}^{n} \varepsilon_i \overline{\varepsilon_j} \, \langle u_i, u_j \rangle && \text{since } v \text{ is a unit vector and the inner product is bilinear} \\
&\leq \sum_{i=1}^{n} \sum_{j=1}^{n} |\langle u_i, u_j \rangle| && \text{since } |\varepsilon_i| = 1 \text{ for all } i.
\end{aligned}$$
External links
A blog post by Terence Tao on correlation transitivity, including the van der Corput inequality
Inequalities
Diophantine approximation | Van der Corput inequality | [
"Mathematics"
] | 335 | [
"Mathematical theorems",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Diophantine approximation",
"Approximations",
"Number theory"
] |
54,140,352 | https://en.wikipedia.org/wiki/Commandino%27s%20theorem | Commandino's theorem, named after Federico Commandino (1509–1575), states that the four medians of a tetrahedron are concurrent at a point S, which divides them in a 3:1 ratio. In a tetrahedron a median is a line segment that connects a vertex with the centroid of the opposite face – that is, the centroid of the opposite triangle. The point S is also the centroid of the tetrahedron.
History
The theorem is attributed to Commandino, who stated, in his work De Centro Gravitatis Solidorum (The Center of Gravity of Solids, 1565), that the four medians of the tetrahedron are concurrent. However, according to the 19th century scholar Guillaume Libri, Francesco Maurolico (1494–1575) claimed to have found the result earlier. Libri nevertheless thought that it had been known even earlier to Leonardo da Vinci, who seemed to have used it in his work. Julian Coolidge shared that assessment but pointed out that he couldn't find any explicit description or mathematical treatment of the theorem in da Vinci's works. Other scholars have speculated that the result may have already been known to Greek mathematicians during antiquity.
Generalizations
Commandino's theorem has a direct analog for simplexes of any dimension:
Let $\Delta$ be a $k$-simplex of some dimension $k$ in $\mathbb{R}^n$ and let $P_0, \ldots, P_k$ be its vertices. Furthermore, let $\ell_0, \ldots, \ell_k$ be the medians of $\Delta$, the lines joining each vertex $P_i$ with the centroid of the opposite $(k-1)$-dimensional facet $F_i$. Then, these lines intersect each other in a point $S$, in a ratio of $k : 1$.
Full generality
The former analog is easy to prove via the following, more general result, which is analogous to the way levers in physics work:
Let $k$ and $l$ be natural numbers, so that in an $\mathbb{R}$-vector space $V$, $k + l$ pairwise different points $X_1, \ldots, X_k, Y_1, \ldots, Y_l \in V$ are given.
Let $S_X$ be the centroid of the points $X_1, \ldots, X_k$, let $S_Y$ be the centroid of the points $Y_1, \ldots, Y_l$, and let $S$ be the centroid of all of these $k + l$ points.
Then, one has
$$S = S_Y + \frac{k}{k+l} \left( S_X - S_Y \right) .$$
In particular, the centroid $S$ lies on the line $\overline{S_X S_Y}$ and divides it in a ratio of $l : k$.
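A numeric check of this lever-style result (not from the original article) is sketched below; the group sizes, coordinates, and random seed are arbitrary assumptions.

```python
import random

# Hedged sketch: the centroid of the union of two point sets is the
# weighted average of the two group centroids, with weights k and l,
# so it divides the segment between them in a ratio of l : k.
random.seed(1)

k, l = 3, 5
xs = [(random.random(), random.random()) for _ in range(k)]
ys = [(random.random(), random.random()) for _ in range(l)]

def centroid(pts):
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

s_x, s_y, s = centroid(xs), centroid(ys), centroid(xs + ys)
expected = tuple((k * a + l * b) / (k + l) for a, b in zip(s_x, s_y))
assert all(abs(a - b) < 1e-12 for a, b in zip(s, expected))
```

The weighted-average form makes the lever analogy explicit: the larger group pulls the combined centroid proportionally closer to its own centroid.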
Reusch's theorem
The previous theorem has further interesting consequences beyond the aforementioned generalization of Commandino's theorem. It can be used to prove the following theorem about the centroid of a tetrahedron, first described in the Mathematische Unterhaltungen by the German physicist Reusch:
One may find the centroid of a tetrahedron by taking the midpoints of two pairs of two of its opposite edges and connecting the corresponding midpoints through their respective midline. The intersection point of both midlines will be the centroid of the tetrahedron.
Since a tetrahedron has six edges in three opposite pairs, one obtains the following corollary:
In a tetrahedron, the three midlines corresponding to opposite edge midpoints are concurrent, and their intersection point is the centroid of the tetrahedron.
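This corollary can likewise be checked numerically; the following sketch (not from the original article) uses an arbitrary tetrahedron.

```python
# Hedged sketch: in a tetrahedron, the midpoint of the segment joining
# the midpoints of any pair of opposite edges coincides with the
# centroid of the tetrahedron, so all three midlines meet there.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 3.0)]

def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

centroid = tuple(sum(c) / 4 for c in zip(*verts))

# The three pairs of opposite edges: (01, 23), (02, 13), (03, 12).
pairs = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
for (a, b), (c, d) in pairs:
    m1 = midpoint(verts[a], verts[b])
    m2 = midpoint(verts[c], verts[d])
    assert all(abs(x - y) < 1e-12 for x, y in zip(midpoint(m1, m2), centroid))
```

The identity behind the check is that the midpoint of the two edge midpoints is $\tfrac{1}{2}\left(\tfrac{A+B}{2} + \tfrac{C+D}{2}\right) = \tfrac{A+B+C+D}{4}$ for every pairing of opposite edges.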
Varignon's theorem
Varignon's theorem, named after Pierre Varignon, is the special case of Reusch's theorem in which all four vertices of the tetrahedron are coplanar, so that the tetrahedron degenerates into a quadrilateral. It states the following:
Let a quadrilateral $ABCD$ in $\mathbb{R}^n$ be given. Then the two midlines connecting opposite edge midpoints intersect in the centroid of the quadrilateral and are divided in half by it.
References
External links
A Couple of Nice Extensions of the Median Properties
Theorems in geometry
Euclidean geometry
Tetrahedra | Commandino's theorem | [
"Mathematics"
] | 730 | [
"Mathematical theorems",
"Mathematical problems",
"Geometry",
"Theorems in geometry"
] |