Dataset columns: id (int64, 39 to 79M) · url (string, 32–168 chars) · text (string, 7–145k chars) · source (string, 2–105 chars) · categories (list, 1–6 items) · token_count (int64, 3–32.2k) · subcategories (list, 0–27 items)
51,149,398
https://en.wikipedia.org/wiki/Oil%20Mines%20Regulations-1984
Oil Mines Regulations-1984 (OMR 1984) replaced the Oil Mines Regulations-1933, with effect from October 1984, to deal with matters for the prevention of possible dangers in oil mines in India. OMR 1984 was published in 1986 by the Directorate General of Mines Safety, Ministry of Labour, in Dhanbad, Jharkhand.

Salient Features
Chapter I: Preliminary. Short title; extent; application; definitions. (Reg 1–2)
Chapter II: Returns, Notices and Plans. (Reg 3–9)
Chapter III: Inspectors, Management and Duties. Qualifications; appointment; general management; duties of persons employed in mines for various functions. (Reg 10–23)
Chapter IV: Drilling and Workover. Reg 24 – Derricks; 25 – Derrick platforms and floors; 26 – Ladders; 27 – Safety belts and life lines; 28 – Emergency escape device; 29 – Weight indicator; 30 – Escape exits; 31 – Guardrails, handrails and covers; 32 – Draw-works; 33 – Cathead and cat line; 34 – Tongs; 35 – Safety chains or wire lines; 36 – Casing lines; 37 – Rigging equipment for material handling; 38 – Storage of materials; 39 – Construction and loading of pipe-racks; 40 – Rigging-up and rig dismantling; 41 – Mud tanks and mud pumps; 42 – Blowout preventer assembly; 43 – Control system for blowout preventers; 44 – Testing of blowout preventer assembly; 45 – Precautions against blowout; 46 – Precautions after a blowout has occurred; 47 – Drilling, workover and other operations; 48 – Precautions during drill stem test.
Chapter V: Production. Well completion, testing and activation (Reg 49–50); group gathering station and emergency plan (Reg 51–51A); precautions during acidizing operations, fracturing operations, and loading and unloading of petroleum tankers (Reg 52–54); storage tanks; well servicing operations; artificial lifting of oil; temporary closure of producing wells; plugging requirements of abandoned wells (Reg 55–59); application (Reg 60).
Chapter VI: Transport and Pipelines. Approval of the route and design of pipelines, their laying, and emergency procedure. (Reg 61–64)
Chapter VII: Protection against Gases and Fires. Storage and use of flammable material; precautions against noxious and flammable gases and fire; fire-fighting equipment; contingency plan. (Reg 65–72)
Chapter VIII: Machinery, Plant and Equipment. Use of certain machinery and equipment; classification of hazardous areas; use of electrical equipment in hazardous areas; general provisions on construction and maintenance of machinery; internal combustion engines; apparatus under pressure; precautions regarding moving parts of machinery; engine rooms and their exits; working and examination of machinery. (Reg 73–81)
Chapter IX: General Safety Provisions. Housekeeping; general/emergency lighting; supply and use of protective equipment; protection against noise, toxic dusts, gases and ionising radiation; communication; protection against pollution of the environment; fencing; general safety. (Reg 82–98)
Chapter X: Miscellaneous. Safety and health education, instructions and inspections; returns, notices, correspondence and appeals. (Reg 99–106)

References

See also: Oil spill; Mining law and governance; Mine safety; Occupational safety and health organizations; Safety engineering

Indian legislation Oil spills Technology hazards Bird mortality Ocean pollution Product safety scandals Road hazards
Oil Mines Regulations-1984
[ "Chemistry", "Technology", "Engineering", "Environmental_science" ]
717
[ "Ocean pollution", "Systems engineering", "Safety engineering", "Road hazards", "Water pollution", "nan", "Oil spills" ]
51,151,663
https://en.wikipedia.org/wiki/RepRap%20Morgan
The RepRap Morgan is an open-source fused deposition modeling 3D printer. The Morgan is part of the RepRap project and has an unusual SCARA arm design. The first Morgan printer was designed by Quentin Harley, a South African engineer (working for Siemens at the time), at the House4Hack makerspace in Centurion. The SCARA arm design was developed because of the limited availability and relatively high cost of components for existing 3D printer designs in South Africa. In 2013 the Morgan won the HumanityPlus Uplift Personal Manufacturing Prize and third place in the Gauteng Accelerator Program. The Morgan name comes from the RepRap convention of naming printers after famous deceased biologists. The Morgan printer was named after Thomas Hunt Morgan, who worked on the genome of the common fruit fly with his wife, Lilian Vaughan Morgan. Their names were used as the development codenames for the first two generations of Morgan 3D printers. Morgan printers are now manufactured full-time by the inventor in a small workshop factory in the House4Hack makerspace.

Versions
There are four versions of the RepRap Morgan: the Morgan v1 (codenamed Thomas), Morgan Pro, Morgan Mega and Morgan Pro 2 (codenamed Lilian).

External links
Official website
RepRap Morgan page on RepRap.org
RepRap Morgan files on GitHub

References

Open hardware electronic devices 3D printers RepRap project
RepRap Morgan
[ "Engineering", "Biology" ]
295
[ "RepRap project", "Self-replication", "3D printers", "Industrial machinery" ]
51,153,954
https://en.wikipedia.org/wiki/Stuart%20Dalziel
Stuart Bruce Dalziel is a British and New Zealand fluid dynamicist. He is currently based at the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, where he has directed the GKB Laboratory since 1997. He was promoted to the rank of Professor in 2016. Dalziel completed his PhD in Cambridge in 1988, under the supervision of Paul Linden. Dalziel's research areas include stratified turbulence and internal gravity waves. References Year of birth missing (living people) Living people Fluid dynamicists British physicists Fellows of the American Physical Society 20th-century New Zealand physicists 21st-century British physicists Alumni of the University of Cambridge
Stuart Dalziel
[ "Chemistry" ]
136
[ "Fluid dynamicists", "Fluid dynamics" ]
38,295,028
https://en.wikipedia.org/wiki/Supercritical%20hydrolysis
Supercritical hydrolysis is a chemical engineering process in which water in the supercritical state is employed to achieve a variety of reactions within seconds. To cope with the extremely short reaction times on an industrial scale, the process should be continuous. This continuity enables the ratio of the amount of water to the other reactants to be less than unity, which minimizes the energy needed to heat the water above 374 °C, the critical temperature of water. Application of the process to biomass provides simple sugars in near-quantitative yield by supercritical hydrolysis of the constituent polysaccharides. The phenolic polymer components of the biomass, usually exemplified by lignins, are converted into a water-insoluble liquid mixture of low-molecular-weight phenols (monomerization). A private company, Renmatix, based in King of Prussia, PA, has developed a supercritical hydrolysis technology to convert a range of non-food biomass feedstocks into cellulosic sugars for application in biochemicals and biofuels. It has a demonstration facility in Georgia, currently capable of processing three dry tons of hardwood biomass into cellulosic sugar daily. In Australia, a government-sponsored entity called Licella is similarly transforming sawdust. Both processes require high ratios of water to the amount of feedstock. This energy profligacy can be avoided by the use of a plastic-type extruder through which the solid, but wet, biomass is conveyed to a small inductively heated reaction zone, as shown by Xtrudx Technologies Inc of Seattle. Supercritical hydrolysis can be considered a broadly applicable green chemistry process that utilizes water simultaneously as a heat transfer agent, a solvent, a reactant, a source of hydrogen and a char-reduction component.

References
Ethanol Producers Magazine 2012, 18(3), 70-72
US Patent 7,955,508, June 11, 2011
US Patent 8,057,666, November 15, 2011
US Patent 8,890,143, March 17, 2015

Environmental chemistry Green chemistry
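To see why keeping the water-to-reactant ratio low matters energetically, here is a rough back-of-the-envelope sketch (an illustration, not from the article); the enthalpy figures are approximate steam-table values assumed for demonstration only.

```python
# Approximate heating energy per kg of feedstock versus water:feedstock ratio.
# Enthalpies are rough steam-table values (assumed): liquid water at 25 degC
# ~105 kJ/kg; water near the critical point (~374 degC, ~22 MPa) ~2100 kJ/kg.
H_AMBIENT = 105e3        # J/kg
H_NEAR_CRITICAL = 2.1e6  # J/kg

def heating_energy_per_kg_feedstock(water_ratio):
    """Energy (J) needed to heat the water accompanying 1 kg of feedstock."""
    return water_ratio * (H_NEAR_CRITICAL - H_AMBIENT)

for ratio in (0.8, 1.0, 5.0, 10.0):
    mj = heating_energy_per_kg_feedstock(ratio) / 1e6
    print(f"water:feedstock = {ratio:4.1f} -> ~{mj:5.1f} MJ per kg of feedstock")
```

At a 10:1 ratio the water heating load is roughly an order of magnitude larger than at 1:1, which is the point the article makes about continuous, low-water-ratio operation.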
Supercritical hydrolysis
[ "Chemistry", "Engineering", "Environmental_science" ]
421
[ "Green chemistry", "Chemical engineering", "Environmental chemistry", "nan" ]
38,296,930
https://en.wikipedia.org/wiki/Blade%20element%20momentum%20theory
Blade element momentum theory is a theory that combines both blade element theory and momentum theory. It is used to calculate the local forces on a propeller or wind-turbine blade. Blade element theory is combined with momentum theory to alleviate some of the difficulties in calculating the induced velocities at the rotor.

This article emphasizes application of blade element theory to ground-based wind turbines, but the principles apply as well to propellers. Whereas the streamtube area is reduced by a propeller, it is expanded by a wind turbine. For either application, a highly simplified but useful approximation is the Rankine–Froude "momentum" or "actuator disk" model (1865, 1889). This article explains the application of the "Betz limit" to the efficiency of a ground-based wind turbine.

Froude's blade element theory (1878) is a mathematical process to determine the behavior of propellers, later refined by Glauert (1926). Betz (1921) provided an approximate correction to the momentum "Rankine–Froude actuator-disk" theory to account for the sudden rotation imparted to the flow by the actuator disk (NACA TN 83, "The Theory of the Screw Propeller" and NACA TM 491, "Propeller Problems"). In blade element momentum theory, angular momentum is included in the model, meaning that the wake (the air after interaction with the rotor) has angular momentum. That is, the air begins to rotate about the z-axis immediately upon interaction with the rotor (see diagram below). Angular momentum must be taken into account since the rotor, which is the device that extracts the energy from the wind, is rotating as a result of the interaction with the wind.

Rankine–Froude model
The "Betz limit," not yet taking advantage of Betz's contribution to account for rotational flow, with emphasis on propellers, applies the Rankine–Froude "actuator disk" theory to obtain the maximum efficiency of a stationary wind turbine. The following analysis is restricted to axial motion of the air:

In our streamtube we have fluid flowing from left to right, and an actuator disk that represents the rotor. We will assume that the rotor is infinitesimally thin. From above, we can see that at the start of the streamtube, fluid flow is normal to the actuator disk. The fluid interacts with the rotor, thus transferring energy from the fluid to the rotor. The fluid then continues to flow downstream. Thus we can break our system/streamtube into two sections: pre-actuator disk and post-actuator disk. Before interaction with the rotor, the total energy in the fluid is constant. Furthermore, after interacting with the rotor, the total energy in the fluid is constant.

Bernoulli's equation describes the different forms of energy that are present in fluid flow where the net energy is constant, i.e. when a fluid is not transferring any energy to some other entity such as a rotor. The energy consists of static pressure, gravitational potential energy, and kinetic energy. Mathematically, we have the following expression:

$\tfrac{1}{2}\rho v^2 + p + \rho g h = \text{constant}$

where $\rho$ is the density of the fluid, $v$ is the velocity of the fluid along a streamline, $p$ is the static pressure, $g$ is the acceleration due to gravity, and $h$ is the height above the ground.
For the purposes of this analysis, we will assume that gravitational potential energy is unchanging during fluid flow from left to right, so that we have the following:

$\tfrac{1}{2}\rho v^2 + p = \text{constant}$

Thus, if we have two points on a streamline, point 1 and point 2, and at point 1 the velocity of the fluid along the streamline is $v_1$ and the pressure at 1 is $p_1$, and at point 2 the velocity of the fluid along the streamline is $v_2$ and the pressure at 2 is $p_2$, and no energy has been extracted from the fluid between points 1 and 2, then we have the following expression:

$\tfrac{1}{2}\rho v_1^2 + p_1 = \tfrac{1}{2}\rho v_2^2 + p_2$

Now let us return to our initial diagram. Consider pre-actuator flow. Far upstream, the fluid velocity is $v_\infty$; the fluid velocity then decreases and pressure increases as it approaches the rotor. In accordance with mass conservation, the mass flow rate through the rotor must be constant. The mass flow rate, $\dot{m}$, through a surface of area $A$ is given by the following expression:

$\dot{m} = \rho A v$

where $\rho$ is the density and $v$ is the velocity of the fluid along a streamline. Thus, if the mass flow rate is constant, increases in area must result in decreases in fluid velocity along a streamline. This means the kinetic energy of the fluid is decreasing. If the flow is expanding but not transferring energy, then Bernoulli applies. Thus the reduction in kinetic energy is countered by an increase in static pressure energy.

So we have the following situation pre-rotor: far upstream, fluid pressure is the same as atmospheric, $p_\infty$; just before interaction with the rotor, fluid pressure has increased and so kinetic energy has decreased. This can be described mathematically using Bernoulli's equation:

$\tfrac{1}{2}\rho v_\infty^2 + p_\infty = \tfrac{1}{2}\rho v_\infty^2 (1-a)^2 + p_D^{+}$

where we have written the fluid velocity at the rotor as $v_\infty(1-a)$, where $a$ is the axial induction factor. The pressure of the fluid on the upstream side of the actuator disk is $p_D^{+}$. We are treating the rotor as an actuator disk that is infinitely thin. Thus we will assume no change in fluid velocity across the actuator disk. Since energy has been extracted from the fluid, the pressure must have decreased.

Now consider post-rotor: immediately after interacting with the rotor, the fluid velocity is still $v_\infty(1-a)$, but the pressure has dropped to a value $p_D^{-}$; far downstream, the pressure of the fluid has reached equilibrium with the atmosphere; this has been accomplished in the natural and dynamically slow process of decreasing the velocity of flow in the stream tube in order to maintain dynamic equilibrium (i.e. $p_w = p_\infty$ far downstream). Assuming no further energy transfer, we can apply Bernoulli downstream:

$\tfrac{1}{2}\rho v_w^2 + p_\infty = \tfrac{1}{2}\rho v_\infty^2 (1-a)^2 + p_D^{-}$

where $v_w$ is the velocity far downstream in the wake. Thus we can obtain an expression for the pressure difference between fore and aft of the rotor:

$p_D^{+} - p_D^{-} = \tfrac{1}{2}\rho\,(v_\infty^2 - v_w^2)$

If we have a pressure difference across the area of the actuator disc, there is a force acting on the actuator disk, which can be determined from $F = A\,\Delta p$:

$F = A_D\,(p_D^{+} - p_D^{-}) = \tfrac{1}{2}\rho A_D\,(v_\infty^2 - v_w^2)$

where $A_D$ is the area of the actuator disk. If the rotor is the only thing absorbing energy from the fluid, the rate of change in axial momentum of the fluid is the force that is acting on the rotor. The rate of change of axial momentum can be expressed as the difference between the initial and final axial velocities of the fluid, multiplied by the mass flow rate:

$F = \dot{m}\,(v_\infty - v_w) = \rho A_D v_\infty (1-a)\,(v_\infty - v_w)$

Equating the two expressions for the force, we can arrive at an expression for the fluid velocity far downstream:

$v_w = v_\infty\,(1 - 2a)$

This force is acting at the rotor. The power taken from the fluid is the force acting on the fluid multiplied by the velocity of the fluid at the point of power extraction:

$P = F\,v_\infty(1-a) = 2\rho A_D v_\infty^3\,a(1-a)^2$

Maximum power
Suppose we are interested in finding the maximum power that can be extracted from the fluid.
The power in the fluid is given by the following expression:

$P_{\text{fluid}} = \tfrac{1}{2}\rho A v_\infty^3$

where $\rho$ is the fluid density as before, $v_\infty$ is the fluid velocity, and $A$ is the area of an imaginary surface through which the fluid is flowing. The power extracted from the fluid by a rotor in the scenario described above is some fraction of this power expression. We will call the fraction the power coefficient, $C_P$. Thus the power extracted, $P$, is given by the following expression:

$P = C_P \cdot \tfrac{1}{2}\rho A v_\infty^3$

Our question is this: what is the maximum value of $C_P$ using the Betz model? Let us return to our derived expression for the power transferred from the fluid to the rotor ($P = 2\rho A_D v_\infty^3\,a(1-a)^2$). We can see that the power extracted is dependent on the axial induction factor. If we differentiate with respect to $a$, we get the following result:

$\frac{dP}{da} = 2\rho A_D v_\infty^3\,(1-a)(1-3a)$

If we have maximised our power extraction, we can set the above to zero. This allows us to determine the value of $a$ which yields maximum power extraction. This value is $a = \tfrac{1}{3}$. Thus we are able to find that $C_{P,\max} = \tfrac{16}{27} \approx 0.593$. In other words, the rotor cannot extract more than 59.3 per cent of the power in the fluid.

Blade element momentum theory
Compared to the Rankine–Froude model, blade element momentum theory accounts for the angular momentum of the rotor. Consider the left hand side of the figure below. We have a streamtube, in which there is the fluid and the rotor. We will assume that there is no interaction between the contents of the streamtube and everything outside of it. That is, we are dealing with an isolated system. In physics, isolated systems must obey conservation laws. An example of such is the conservation of angular momentum. Thus, the angular momentum within the streamtube must be conserved. Consequently, if the rotor acquires angular momentum through its interaction with the fluid, something else must acquire equal and opposite angular momentum. As already mentioned, the system consists of just the fluid and the rotor, so the fluid must acquire angular momentum in the wake. As we related the change in axial momentum with the axial induction factor $a$, we will relate the change in angular momentum of the fluid with the tangential induction factor, $a'$.

Consider the following setup. We will break the rotor area up into annular rings of infinitesimally small thickness. We are doing this so that we can assume that axial induction factors and tangential induction factors are constant throughout the annular ring. An assumption of this approach is that annular rings are independent of one another, i.e. there is no interaction between the fluids of neighboring annular rings.

Bernoulli for rotating wake
Let us now go back to Bernoulli:

$\tfrac{1}{2}\rho v^2 + p = \text{constant}$

The velocity $v$ is the velocity of the fluid along a streamline. The streamline may not necessarily run parallel to a particular co-ordinate axis, such as the z-axis. Thus the velocity may consist of components in the axes that make up the co-ordinate system. For this analysis, we will use cylindrical polar co-ordinates $(r, \theta, z)$. Thus $v^2 = v_r^2 + v_\theta^2 + v_z^2$. NOTE: We will, in fact, be working in cylindrical co-ordinates for all aspects, e.g. $\mathbf{v} = (v_r, v_\theta, v_z)$.

Now consider the setup shown above. As before, we can break the setup into two components: upstream and downstream.

Pre-rotor:

$\tfrac{1}{2}\rho v_1^2 + p_\infty = \tfrac{1}{2}\rho v_2^2 + p_D^{+}$

where $v_1$ is the velocity of the fluid along a streamline far upstream, and $v_2$ is the velocity of the fluid just prior to the rotor. Written in cylindrical polar co-ordinates, we have the following expression:

$\tfrac{1}{2}\rho v_{1z}^2 + p_\infty = \tfrac{1}{2}\rho v_{2z}^2 + p_D^{+}$

where $v_{1z}$ and $v_{2z}$ are the z-components of the velocity far upstream and just prior to the rotor respectively. This is exactly the same as the upstream equation from the Betz model.
As can be seen from the figure above, the flow expands as it approaches the rotor, a consequence of the increase in static pressure and the conservation of mass. This would imply a non-zero radial velocity component upstream. However, for the purpose of this analysis, that effect will be neglected.

Post-rotor:

$\tfrac{1}{2}\rho v_3^2 + p_D^{-} = \tfrac{1}{2}\rho v_w^2 + p_w$

where $v_3$ is the velocity of the fluid just after interacting with the rotor. This can be written as $\mathbf{v}_3 = (0, v_{3\theta}, v_{3z})$. The radial component of the velocity will be zero; this must be true if we are to use the annular ring approach; to assume otherwise would suggest interference between annular rings at some point downstream. Since we assume that there is no change in axial velocity across the disc, $v_{3z} = v_{2z}$. Angular momentum must be conserved in an isolated system. Thus the rotation of the wake must not die away, and the angular momentum per unit mass, $r v_\theta$, in the downstream section is constant. The rotational contribution therefore appears on both sides of the downstream Bernoulli equation and cancels, and Bernoulli simplifies in the downstream section:

$\tfrac{1}{2}\rho v_{3z}^2 + p_D^{-} = \tfrac{1}{2}\rho v_{wz}^2 + p_w$

In other words, the Bernoulli equations up and downstream of the rotor are the same as the Bernoulli expressions in the Betz model. Therefore, we can use results such as power extraction and wake speed that were derived in the Betz model, i.e.

$v_w = v_\infty(1-2a), \qquad P = 2\rho A_D v_\infty^3\,a(1-a)^2$

This allows us to calculate maximum power extraction for a system that includes a rotating wake. This can be shown to give the same value as that of the Betz model, i.e. 0.59. This method involves recognising that the torque generated in the rotor is given by the following expression:

$dQ = 4\pi r^3 \rho v_\infty \Omega\,a'(1-a)\,dr$

with the necessary terms defined immediately below.

Blade forces
Consider fluid flow around an airfoil. The flow of the fluid around the airfoil gives rise to lift and drag forces. By definition, lift is the force that acts on the airfoil normal to the apparent fluid flow speed seen by the airfoil. Drag is the force that acts tangential to the apparent fluid flow speed seen by the airfoil. What do we mean by an apparent speed? Consider the diagram below:

The speed seen by the rotor blade is dependent on three things: the axial velocity of the fluid, $v_\infty(1-a)$; the tangential velocity of the fluid due to the acceleration round an airfoil, $\Omega r a'$; and the rotor motion itself, $\Omega r$. That is, the apparent fluid velocity seen by the blade has axial component $v_\infty(1-a)$ and tangential component $\Omega r(1+a')$. Thus the apparent wind speed is just the magnitude of this vector, i.e.:

$W = \sqrt{v_\infty^2(1-a)^2 + \Omega^2 r^2 (1+a')^2}$

We can also work out the angle $\phi$ from the above figure:

$\tan\phi = \frac{v_\infty(1-a)}{\Omega r(1+a')}$

Supposing we know the angle $\phi$, we can then work out the angle of attack $\alpha$ simply by using the relation $\alpha = \phi - \theta_p$, where $\theta_p$ is the local pitch angle of the blade section; we can then work out the lift coefficient, $C_l$, and the drag coefficient, $C_d$, from which we can work out the lift and drag forces acting on the blade.

Consider the annular ring, which is partially occupied by blade elements. The length of each blade section occupying the annular ring is $dr$ (see figure below). The lift acting on those parts of the blades/airfoils, each with chord $c$, is given by the following expression:

$dL = \tfrac{1}{2}\rho W^2\,B c\,C_l\,dr$

where $C_l$ is the lift coefficient, which is a function of the angle of attack, and $B$ is the number of blades. Additionally, the drag acting on that part of the blades/airfoils with chord $c$ is given by the following expression:

$dD = \tfrac{1}{2}\rho W^2\,B c\,C_d\,dr$

Remember that these forces calculated are normal and tangential to the apparent speed. We are interested in forces in the $z$ and $\theta$ axes. Thus we need to consider the diagram below:

$dF_\theta = dL\sin\phi - dD\cos\phi, \qquad dF_z = dL\cos\phi + dD\sin\phi$

Thus we can see the following: $dF_\theta$ is the force that is responsible for the rotation of the rotor blades; $dF_z$ is the force that is responsible for the bending of the blades.

Recall that for an isolated system the net angular momentum of the system is conserved. If the rotor acquired angular momentum, so must the fluid in the wake. Let us suppose that the fluid in the wake acquires a tangential velocity $v_\theta = 2\Omega r a'$.
Thus the torque in the air is given by

$dQ = \dot{m}\,v_\theta\,r = 4\pi r^3 \rho v_\infty \Omega\,a'(1-a)\,dr$

By the conservation of angular momentum, this balances the torque in the blades of the rotor; thus,

$r\,dF_\theta = 4\pi r^3 \rho v_\infty \Omega\,a'(1-a)\,dr$

Furthermore, the rate of change of linear momentum in the air is balanced by the out-of-plane bending force acting on the blades, $dF_z$. From momentum theory, the rate of change of linear momentum in the air is as follows:

$dT = \dot{m}\,(v_\infty - v_w)$

which may be expressed as

$dT = 4\pi r \rho v_\infty^2\,a(1-a)\,dr$

Balancing this with the out-of-plane bending force gives

$dF_z = 4\pi r \rho v_\infty^2\,a(1-a)\,dr$

Let us now make the following definitions:

$\sigma' = \frac{B c}{2\pi r}, \qquad C_n = C_l\cos\phi + C_d\sin\phi, \qquad C_t = C_l\sin\phi - C_d\cos\phi$

So we have the following equations:

$dF_z = \tfrac{1}{2}\rho W^2\,B c\,C_n\,dr = 4\pi r \rho v_\infty^2\,a(1-a)\,dr$

$dF_\theta = \tfrac{1}{2}\rho W^2\,B c\,C_t\,dr, \qquad r\,dF_\theta = 4\pi r^3 \rho v_\infty \Omega\,a'(1-a)\,dr$

Let us make reference to the following equation, which can be seen from analysis of the above figure:

$W\sin\phi = v_\infty(1-a), \qquad W\cos\phi = \Omega r(1+a')$

Thus, with these three equations, it is possible to get the following result through some algebraic manipulation:

$\frac{a}{1-a} = \frac{\sigma' C_n}{4\sin^2\phi}$

We can derive an expression for $a'$ in a similar manner:

$\frac{a'}{1+a'} = \frac{\sigma' C_t}{4\sin\phi\cos\phi}$

This allows us to understand what is going on with the rotor and the fluid. Equations of this sort are then solved by iterative techniques; a numerical sketch is given at the end of this entry.

Assumptions and possible drawbacks of BEM models:
- Assumes that each annular ring is independent of every other annular ring.
- Does not account for wake expansion.
- Does not account for tip losses, though correction factors can be included.
- Does not account for yaw, though it can be made to do so.
- Based on steady flow (non-turbulent).

References

Fluid dynamics Propellers Wind turbines
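The Betz optimum and the induction-factor relations above lend themselves to quick numerical checks. The sketch below (an illustration, not from the article) first verifies the Betz limit and then solves a single annular ring by fixed-point iteration; the airfoil polar (thin-airfoil lift, constant drag), geometry and operating point are assumed for demonstration only.

```python
import math

# 1) Betz limit check: C_P(a) = 4a(1-a)^2 is maximised at a = 1/3, C_P = 16/27.
a_opt = max((i / 10000.0 for i in range(5000)), key=lambda a: 4 * a * (1 - a) ** 2)
print(f"Betz: a ~ {a_opt:.4f}, C_P ~ {4*a_opt*(1-a_opt)**2:.4f} (16/27 = {16/27:.4f})")

# 2) Fixed-point BEM iteration for one annular ring, using
#    a/(1-a) = sigma'*C_n/(4 sin^2 phi) and a'/(1+a') = sigma'*C_t/(4 sin phi cos phi).
def bem_ring(r, c, B, pitch, v_inf, omega, tol=1e-10, max_iter=500):
    sigma = B * c / (2.0 * math.pi * r)       # local solidity sigma'
    a = ap = 0.0                              # axial and tangential induction
    for _ in range(max_iter):
        phi = math.atan2(v_inf * (1 - a), omega * r * (1 + ap))
        alpha = phi - pitch                   # angle of attack
        cl = 2.0 * math.pi * alpha            # thin-airfoil lift slope (assumed)
        cd = 0.01                             # constant drag coefficient (assumed)
        cn = cl * math.cos(phi) + cd * math.sin(phi)
        ct = cl * math.sin(phi) - cd * math.cos(phi)
        k = sigma * cn / (4.0 * math.sin(phi) ** 2)
        kp = sigma * ct / (4.0 * math.sin(phi) * math.cos(phi))
        a_new, ap_new = k / (1 + k), kp / (1 - kp)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            break
        a, ap = a_new, ap_new
    return a, ap

a, ap = bem_ring(r=10.0, c=1.0, B=3, pitch=math.radians(5), v_inf=8.0, omega=2.0)
print(f"ring: a = {a:.4f}, a' = {ap:.5f}")
```

In practice the ring solver is run for every annular ring along the blade, and tip-loss and high-induction corrections are layered on top, as noted in the assumptions list above.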
Blade element momentum theory
[ "Chemistry", "Engineering" ]
3,128
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
38,300,392
https://en.wikipedia.org/wiki/Geometrothermodynamics
In physics, geometrothermodynamics (GTD) is a formalism developed in 2007 by Hernando Quevedo to describe the properties of thermodynamic systems in terms of concepts of differential geometry.

Consider a thermodynamic system in the framework of classical equilibrium thermodynamics. The states of thermodynamic equilibrium are considered as points of an abstract equilibrium space in which a Riemannian metric can be introduced in several ways. In particular, one can introduce Hessian metrics like the Fisher information metric, the Weinhold metric, the Ruppeiner metric and others, whose components are calculated as the Hessian of a particular thermodynamic potential. Another possibility is to introduce metrics which are independent of the thermodynamic potential, a property which is shared by all thermodynamic systems in classical thermodynamics. Since a change of thermodynamic potential is equivalent to a Legendre transformation, and Legendre transformations do not act in the equilibrium space, it is necessary to introduce an auxiliary space to correctly handle the Legendre transformations. This is the so-called thermodynamic phase space. If the phase space is equipped with a Legendre invariant Riemannian metric, a smooth map can be introduced that induces a thermodynamic metric in the equilibrium manifold. The thermodynamic metric can then be used with different thermodynamic potentials without changing the geometric properties of the equilibrium manifold. One expects the geometric properties of the equilibrium manifold to be related to the macroscopic physical properties. The details of this relation can be summarized in three main points:
- Curvature is a measure of the thermodynamical interaction.
- Curvature singularities correspond to curvature phase transitions.
- Thermodynamic geodesics correspond to quasi-static processes.

Geometric aspects
The main ingredient of GTD is a (2n + 1)-dimensional manifold $\mathcal{T}$ with coordinates $Z^A = \{\Phi, E^a, I^a\}$, where $\Phi$ is an arbitrary thermodynamic potential, $E^a$, $a = 1, \dots, n$, are the extensive variables, and $I^a$ the intensive variables. It is also possible to introduce in a canonical manner the fundamental one-form

$\Theta = d\Phi - \delta_{ab}\, I^a\, dE^b$

(summation over repeated indices), with $\delta_{ab} = \mathrm{diag}(1, \dots, 1)$, which satisfies the condition $\Theta \wedge (d\Theta)^n \neq 0$, where $n$ is the number of thermodynamic degrees of freedom of the system, and is invariant with respect to Legendre transformations

$\{Z^A\} \rightarrow \{\tilde{Z}^A\} = \{\tilde{\Phi}, \tilde{E}^a, \tilde{I}^a\}: \qquad \Phi = \tilde{\Phi} - \delta_{kl}\,\tilde{E}^k \tilde{I}^l, \quad E^i = -\tilde{I}^i, \quad E^j = \tilde{E}^j, \quad I^i = \tilde{E}^i, \quad I^j = \tilde{I}^j$

where $i \cup j$ is any disjoint decomposition of the set of indices $\{1, \dots, n\}$, and $k, l \in i$. In particular, for $i = \{1, \dots, n\}$ and $i = \emptyset$ we obtain the total Legendre transformation and the identity, respectively.

It is also assumed that in $\mathcal{T}$ there exists a metric $G$ which is also invariant with respect to Legendre transformations. The triad $(\mathcal{T}, \Theta, G)$ defines a Riemannian contact manifold which is called the thermodynamic phase space (phase manifold). The space of thermodynamic equilibrium states (equilibrium manifold) is an n-dimensional Riemannian submanifold $\mathcal{E} \subset \mathcal{T}$ induced by a smooth map $\varphi: \mathcal{E} \rightarrow \mathcal{T}$, i.e. $\varphi: \{E^a\} \mapsto \{\Phi(E^a), E^a, I^a(E^a)\}$, with $\Phi = \Phi(E^a)$ and $I^a = I^a(E^a)$, such that $\varphi^*(\Theta) = 0$ holds, where $\varphi^*$ is the pullback of $\varphi$. The manifold $\mathcal{E}$ is naturally equipped with the Riemannian metric $g = \varphi^*(G)$. The purpose of GTD is to demonstrate that the geometric properties of $\mathcal{E}$ are related to the thermodynamic properties of a system with fundamental thermodynamic equation $\Phi = \Phi(E^a)$.

The condition of invariance with respect to total Legendre transformations leads to the metrics

$G^{I/II} = (d\Phi - \delta_{ab}\, I^a dE^b)^2 + \Lambda\,(\xi_{ab}\, E^a I^b)(\chi_{cd}\, dE^c\, dI^d)$

where $\xi_{ab}$ and $\chi_{cd}$ are constant diagonal matrices that can be expressed in terms of $\delta_{ab}$ and $\eta_{ab} = \mathrm{diag}(-1, 1, \dots, 1)$, and $\Lambda$ is an arbitrary Legendre invariant function of $\Phi$. The metrics $G^{I}$ and $G^{II}$ have been used to describe thermodynamic systems with first and second order phase transitions, respectively.
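To make the Hessian construction mentioned above concrete, the following small sketch (an illustration, not taken from the article) computes the components of a Ruppeiner-type metric as minus the Hessian of the entropy, using the monatomic ideal gas as an assumed example.

```python
# Hessian ("Ruppeiner-type") thermodynamic metric for a monatomic ideal gas,
# with entropy S(U, V) = N k [ (3/2) ln U + ln V ] + const and metric
# components g_ij = - d^2 S / dx^i dx^j, where x = (U, V).
import sympy as sp

U, V, N, k = sp.symbols("U V N k", positive=True)
S = N * k * (sp.Rational(3, 2) * sp.log(U) + sp.log(V))

x = (U, V)
g = sp.Matrix(2, 2, lambda i, j: -sp.diff(S, x[i], x[j]))
sp.pprint(sp.simplify(g))   # diagonal: 3*N*k/(2*U**2) and N*k/V**2
```

A metric built this way depends on the chosen potential, which is exactly the issue the Legendre-invariant metrics of GTD are designed to avoid.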
The most general metric which is invariant with respect to partial Legendre transformations is $G^{III}$. The components of the corresponding metric for the equilibrium manifold can be computed as the pullback

$g = \varphi^*(G), \qquad g_{ab} = \frac{\partial Z^A}{\partial E^a}\,\frac{\partial Z^B}{\partial E^b}\,G_{AB}$

Applications
GTD has been applied to describe laboratory systems like the ideal gas, the van der Waals gas, the Ising model, etc.; more exotic systems like black holes in different gravity theories; in the context of relativistic cosmology; and to describe chemical reactions.

References

Branches of thermodynamics Geometry
Geometrothermodynamics
[ "Physics", "Chemistry", "Mathematics" ]
829
[ "Branches of thermodynamics", "Thermodynamics", "Geometry" ]
38,301,499
https://en.wikipedia.org/wiki/Congruence%20ideal
In algebra, the congruence ideal of a surjective ring homomorphism f : B → C of commutative rings is the image under f of the annihilator of the kernel of f. It is called a congruence ideal because when B is a Hecke algebra and f is a homomorphism corresponding to a modular form, the congruence ideal describes congruences between the modular form of f and other modular forms.

Example
Suppose C and D are rings with homomorphisms to a ring E, and let B = C ×E D be the pullback, given by the subring of C × D of pairs (c, d) where c and d have the same image in E. If f is the natural projection from B to C, then the kernel is the ideal J of elements (0, d) where d has image 0 in E. If J has annihilator 0 in D, then its annihilator in B is the ideal (I, 0), where I is the kernel of the map from C to E. So the congruence ideal of f is the image of (I, 0) under f, namely the ideal I of C.

Suppose that B is the Hecke algebra generated by the Hecke operators Tn acting on the 2-dimensional space of modular forms of level 1 and weight 12. This space is spanned by the eigenforms given by the Eisenstein series E12 and the modular discriminant Δ. The map taking a Hecke operator Tn to its eigenvalues (σ11(n), τ(n)) gives a homomorphism from B into the ring Z × Z (where τ is the Ramanujan tau function and σ11(n) is the sum of the 11th powers of the divisors of n). The image is the set of pairs (c, d) with c and d congruent mod 691, because of Ramanujan's congruence σ11(n) ≡ τ(n) mod 691. If f is the homomorphism taking (c, d) to c in Z, then the congruence ideal is (691). So the congruence ideal describes the congruences between the forms E12 and Δ.

References

Commutative algebra Modular forms
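The mod-691 congruence above is easy to check numerically. The following self-contained sketch (an illustration, not part of the article) computes τ(n) from the q-expansion Δ = q·∏(1 − q^m)^24 and compares it with σ11(n):

```python
# Verify Ramanujan's congruence sigma_11(n) = tau(n) (mod 691) for small n.
N = 30  # number of q-expansion coefficients to keep

# Build prod_{m=1..N} (1 - q^m)^24 as a truncated integer power series.
series = [0] * (N + 1)
series[0] = 1
for m in range(1, N + 1):
    for _ in range(24):                    # multiply by (1 - q^m), 24 times
        for i in range(N, m - 1, -1):      # descending: reuse old coefficients
            series[i] -= series[i - m]

tau = [0] + series[:N]                     # multiply by q: tau(n) = coeff of q^n

def sigma11(n):
    return sum(d ** 11 for d in range(1, n + 1) if n % d == 0)

for n in range(1, 11):
    assert (sigma11(n) - tau[n]) % 691 == 0
    print(f"n={n:2d}  tau={tau[n]:12d}  tau mod 691 = {tau[n] % 691:3d}"
          f"  sigma11 mod 691 = {sigma11(n) % 691:3d}")
```

The two residues agree for every n, reflecting that the image of the Hecke algebra lies in the pairs congruent mod 691.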
Congruence ideal
[ "Mathematics" ]
466
[ "Modular forms", "Fields of abstract algebra", "Commutative algebra", "Number theory" ]
38,303,683
https://en.wikipedia.org/wiki/Divided%20domain
In algebra, a divided domain is an integral domain R in which every prime ideal P satisfies P = PR_P, where R_P denotes the localization of R at P; equivalently, every prime ideal is comparable under inclusion to every principal ideal. A locally divided domain is an integral domain whose localization at every maximal ideal is a divided domain. A Prüfer domain is a basic example of a locally divided domain. Divided domains were introduced by Akiba (1967), who called them AV-domains.

References

Commutative algebra
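A minimal worked example, assuming the definition above: in a valuation domain the ideals are totally ordered by inclusion, so every prime ideal is automatically comparable to every principal ideal, and every valuation domain is divided. Concretely, for the integers localized at a prime p:

\[
R = \mathbb{Z}_{(p)}, \qquad P = p\,\mathbb{Z}_{(p)}:\quad
R \text{ is local with maximal ideal } P, \text{ so } R_P = R \text{ and } P R_P = P.
\]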
Divided domain
[ "Mathematics" ]
74
[ "Fields of abstract algebra", "Commutative algebra" ]
41,085,102
https://en.wikipedia.org/wiki/Wave%20action%20%28continuum%20mechanics%29
In continuum mechanics, wave action refers to a conservable measure of the wave part of a motion. For small-amplitude and slowly varying waves, the wave action density is:

$\mathcal{A} = \frac{E}{\omega_i}$

where $E$ is the intrinsic wave energy and $\omega_i$ is the intrinsic frequency of the slowly modulated waves, intrinsic here implying: as observed in a frame of reference moving with the mean velocity of the motion.

The action of a wave was introduced by Sturrock (1962) in the study of the (pseudo) energy and momentum of waves in plasmas. Whitham (1965) derived the conservation of wave action, identified as an adiabatic invariant, from an averaged Lagrangian description of slowly varying nonlinear wave trains in inhomogeneous media:

$\frac{\partial \mathcal{A}}{\partial t} + \nabla \cdot \mathbf{B} = 0$

where $\mathbf{B}$ is the wave-action density flux and $\nabla \cdot \mathbf{B}$ is the divergence of $\mathbf{B}$. The description of waves in inhomogeneous and moving media was further elaborated by Bretherton and Garrett (1968) for the case of small-amplitude waves; they also called the quantity wave action (by which name it has been referred to subsequently). For small-amplitude waves the conservation of wave action becomes:

$\frac{\partial}{\partial t}\left(\frac{E}{\omega_i}\right) + \nabla \cdot \left[\left(\mathbf{U} + \mathbf{c}_g\right)\frac{E}{\omega_i}\right] = 0,$

using $\mathcal{A} = E/\omega_i$ and $\mathbf{B} = \left(\mathbf{U} + \mathbf{c}_g\right)\mathcal{A},$

where $\mathbf{c}_g$ is the group velocity and $\mathbf{U}$ the mean velocity of the inhomogeneous moving medium. While the total energy (the sum of the energies of the mean motion and of the wave motion) is conserved for a non-dissipative system, the energy of the wave motion is not conserved, since in general there can be an exchange of energy with the mean motion. However, wave action is a quantity which is conserved for the wave-part of the motion.

The equation for the conservation of wave action is for instance used extensively in wind wave models to forecast sea states as needed by mariners, the offshore industry and for coastal defense. Also in plasma physics and acoustics the concept of wave action is used.

The derivation of an exact wave-action equation for more general wave motion, not limited to slowly modulated waves, small-amplitude waves or (non-dissipative) conservative systems, was provided and analysed by Andrews and McIntyre (1978) using the framework of the generalised Lagrangian mean for the separation of wave and mean motion.

Notes

References

Continuum mechanics Waves
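As a small numerical illustration of these definitions (supplied here, with assumed wave parameters), consider a deep-water gravity wave riding on a uniform current; the intrinsic frequency follows from the deep-water dispersion relation.

```python
# Wave action density A = E / sigma for a deep-water gravity wave on a
# uniform current U. All wave parameters below are illustrative assumptions.
import math

g, rho = 9.81, 1025.0   # gravity (m/s^2), seawater density (kg/m^3)
H = 2.0                 # wave height (m)
k = 0.1                 # wavenumber (rad/m)
U = 0.5                 # mean current along the wave direction (m/s)

E = rho * g * H ** 2 / 8.0     # wave energy density (J/m^2)
sigma = math.sqrt(g * k)       # intrinsic frequency, deep water (rad/s)
omega = sigma + k * U          # absolute (Doppler-shifted) frequency (rad/s)
A = E / sigma                  # wave action density (J*s/m^2)

print(f"E = {E:.0f} J/m^2, sigma = {sigma:.3f} rad/s, omega = {omega:.3f} rad/s")
print(f"wave action density A = {A:.0f} J*s/m^2")
```

The distinction between the intrinsic frequency sigma and the Doppler-shifted absolute frequency omega is exactly the "moving frame" point made in the opening paragraph.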
Wave action (continuum mechanics)
[ "Physics" ]
425
[ "Physical phenomena", "Continuum mechanics", "Classical mechanics", "Waves", "Motion (physics)" ]
41,086,554
https://en.wikipedia.org/wiki/Nanoparticles%20for%20drug%20delivery%20to%20the%20brain
Nanoparticles for drug delivery to the brain is a method for transporting drug molecules across the blood–brain barrier (BBB) using nanoparticles. These nanoparticles cross the BBB and deliver pharmaceuticals to the brain for therapeutic treatment of neurological disorders. These disorders include Parkinson's disease, Alzheimer's disease, schizophrenia, depression, and brain tumors. Part of the difficulty in finding cures for these central nervous system (CNS) disorders is that there is as yet no truly efficient delivery method for drugs to cross the BBB. Antibiotics, antineoplastic agents, and a variety of CNS-active drugs, especially neuropeptides, are a few examples of molecules that cannot pass the BBB alone. With the aid of nanoparticle delivery systems, however, studies have shown that some drugs can now cross the BBB, and even exhibit lower toxicity and decreased adverse effects throughout the body. Toxicity is an important concept for pharmacology because high toxicity levels in the body could be detrimental to the patient by affecting other organs and disrupting their function. Further, the BBB is not the only physiological barrier for drug delivery to the brain. Other biological factors influence how drugs are transported throughout the body and how they target specific locations for action. Some of these pathophysiological factors include blood flow alterations, edema and increased intracranial pressure, metabolic perturbations, and altered gene expression and protein synthesis. Though there exist many obstacles that make developing a robust delivery system difficult, nanoparticles provide a promising mechanism for drug transport to the CNS.

Background
The first successful delivery of a drug across the BBB occurred in 1995. The drug used was the hexapeptide dalargin, an anti-nociceptive peptide that cannot cross the BBB alone. It was encapsulated in polysorbate 80-coated nanoparticles and intravenously injected. This was a huge breakthrough in the nanoparticle drug delivery field, and it helped advance research and development toward clinical trials of nanoparticle delivery systems. Nanoparticles range in size from 10 to 1000 nm (1 μm) and can be made from natural or artificial polymers, lipids, dendrimers, and micelles. Most polymers used for nanoparticle drug delivery systems are natural, biocompatible, and biodegradable, which helps prevent contamination in the CNS. Several current methods for drug delivery to the brain include the use of liposomes, prodrugs, and carrier-mediated transporters. Many different delivery methods exist to transport these drugs into the body, such as peroral, intranasal, intravenous, and intracranial. For nanoparticles, most studies have shown increasing progression with intravenous delivery. Along with delivery and transport methods, there are several means of functionalizing, or activating, the nanoparticle carriers. These means include dissolving or absorbing a drug throughout the nanoparticle, encapsulating a drug inside the particle, or attaching a drug on the surface of the particle.

Types of nanoparticles for CNS drug delivery

Lipid-based
One type of nanoparticle involves use of liposomes as drug molecule carriers. The diagram on the right shows a standard liposome. It has a phospholipid bilayer separating the interior from the exterior of the cell. Liposomes are composed of vesicular bilayers, lamellae, made of biocompatible and biodegradable lipids such as sphingomyelin, phosphatidylcholine, and glycerophospholipids.
Cholesterol, a type of lipid, is also often incorporated in the lipid-nanoparticle formulation. Cholesterol can increase stability of a liposome and prevent leakage of a bilayer because its hydroxyl group can interact with the polar heads of the bilayer phospholipids. Liposomes have the potential to protect the drug from degradation, target sites for action, and reduce toxicity and adverse effects. Lipid nanoparticles can be manufactured by high pressure homogenization, a current method used to produce parenteral emulsions. This process can ultimately form a uniform dispersion of small droplets in a fluid substance by subdividing particles until the desired consistency is acquired. This manufacturing process is already scaled and in use in the food industry, which therefore makes it more appealing for researchers and for the drug delivery industry. Liposomes can also be functionalized by attaching various ligands on the surface to enhance brain-targeted delivery. Cationic liposomes Another type of lipid-nanoparticle that can be used for drug delivery to the brain is a cationic liposome. These are lipid molecules that are positively charged. One example of cationic liposomes uses bolaamphiphiles, which contain hydrophilic groups surrounding a hydrophobic chain to strengthen the boundary of the nano-vesicle containing the drug. Bolaamphiphile nano-vesicles can cross the BBB, and they allow controlled release of the drug to target sites. Lipoplexes can also be formed from cationic liposomes and DNA solutions, to yield transfection agents. Cationic liposomes cross the BBB through adsorption mediated endocytosis followed by internalization in the endosomes of the endothelial cells. By transfection of endothelial cells through the use of lipoplexes, physical alterations in the cells could be made. These physical changes could potentially improve how some nanoparticle drug-carriers cross the BBB. Metallic Metal nanoparticles are promising as carriers for drug delivery to the brain. Common metals used for nanoparticle drug delivery are gold, silver, and platinum, owing to their biocompatibility. These metallic nanoparticles are used due to their large surface area to volume ratio, geometric and chemical tunability, and endogenous antimicrobial properties. Silver cations released from silver nanoparticles can bind to the negatively charged cellular membrane of bacteria and increase membrane permeability, allowing foreign chemicals to enter the intracellular fluid. Metal nanoparticles are chemically synthesized using reduction reactions. For example, drug-conjugated silver nanoparticles are created by reducing silver nitrate with sodium borohydride in the presence of an ionic drug compound. The drug binds to the surface of the silver, stabilizing the nanoparticles and preventing the nanoparticles from aggregation. Metallic nanoparticles typically cross the BBB via transcytosis. Nanoparticle delivery through the BBB can be increased by introducing peptide conjugates to improve permeability to the central nervous system. For instance, recent studies have shown an improvement in gold nanoparticle delivery efficiency by conjugating a peptide that binds to the transferrin receptors expressed in brain endothelial cells. Solid lipid Also, solid lipid nanoparticles (SLNs) are lipid nanoparticles with a solid interior as shown in the diagram on the right. SLNs can be made by replacing the liquid lipid oil used in the emulsion process with a solid lipid. 
In solid lipid nanoparticles, the drug molecules are dissolved in the particle's solid hydrophobic lipid core; this is called the drug payload, and it is surrounded by an aqueous solution. Many SLNs are developed from triglycerides, fatty acids, and waxes. High-pressure homogenization or micro-emulsification can be used for manufacturing. Further, functionalizing the surface of solid lipid nanoparticles with polyethylene glycol (PEG) can result in increased BBB permeability. Different colloidal carriers such as liposomes, polymeric nanoparticles, and emulsions have reduced stability, shelf life and encapsulation efficacy. Solid lipid nanoparticles are designed to overcome these shortcomings, and have excellent drug release and physical stability in addition to targeted delivery of drugs.

Nanoemulsions
Another form for nanoparticle delivery systems is oil-in-water emulsions done on a nano-scale. This process uses common biocompatible oils such as triglycerides and fatty acids, and combines them with water and surface-coating surfactants. Oils rich in omega-3 fatty acids especially contain important factors that aid in penetrating the tight junctions of the BBB.

Polymer-based
Other nanoparticles are polymer-based, meaning they are made from a natural polymer such as polylactic acid (PLA), poly D,L-glycolide (PLG), polylactide-co-glycolide (PLGA), and polycyanoacrylate (PCA). Some studies have found that polymeric nanoparticles may provide better results for drug delivery relative to lipid-based nanoparticles because they may increase the stability of the drugs or proteins being transported. Polymeric nanoparticles may also contain beneficial controlled-release mechanisms. Nanoparticles made from natural polymers that are biodegradable have the ability to target specific organs and tissues in the body, to carry DNA for gene therapy, and to deliver larger molecules such as proteins, peptides, and even genes. To manufacture these polymeric nanoparticles, the drug molecules are first dissolved and then encapsulated or attached to a polymer nanoparticle matrix. Three different structures can then be obtained from this process: nanoparticles; nanocapsules (in which the drug is encapsulated and surrounded by the polymer matrix); and nanospheres (in which the drug is dispersed throughout the polymeric matrix in a spherical form). One of the most important traits for nanoparticle delivery systems is that they must be biodegradable on the scale of a few days. A few common polymer materials used for drug delivery studies are polybutyl cyanoacrylate (PBCA), poly(isohexyl cyanoacrylate) (PIHCA), polylactic acid (PLA), and polylactide-co-glycolide (PLGA). PBCA undergoes degradation through enzymatic cleavage of its ester bond on the alkyl side chain to produce water-soluble byproducts. PBCA also proves to be the fastest biodegradable material, with studies showing an 80% reduction 24 hours after intravenous injection. PIHCA, however, was recently found to display an even lower degradation rate, which in turn further decreases toxicity. PIHCA, due to this slight advantage, is currently undergoing phase III clinical trials for transporting the drug doxorubicin as a treatment for hepatocellular carcinomas. Human serum albumin (HSA) and chitosan are also materials of interest for the generation of nanoparticle delivery systems. Using albumin nanoparticles for stroke therapy can overcome numerous limitations.
For instance, albumin nanoparticles can enhance BBB permeability, increase solubility, and increase half-life in circulation. Patients who have brain cancer overexpress albumin-binding proteins, such as SPARC and gp60, in their BBB and tumor cells, naturally increasing the uptake of albumin into the brain. Using this relationship, researchers have formed albumin nanoparticles that co-encapsulate two anticancer drugs, paclitaxel and fenretinide, modified with low molecular weight protamine (LMWP), a type of cell-penetrating peptide, for anti-glioma therapy. Once injected into the patient's body, the albumin nanoparticles can cross the BBB more easily, bind to the proteins and penetrate glioma cells, and then release the contained drugs. This nanoparticle formulation enhances tumor-targeting delivery efficiency and improves the solubility issue of hydrophobic drugs. Specifically, cationic bovine serum albumin-conjugated tanshinone IIA PEGylated nanoparticles injected into an MCAO rat model decreased the volume of infarction and neuronal apoptosis. Chitosan, a naturally abundant polysaccharide, is particularly useful due to its biocompatibility and lack of toxicity. With its adsorptive and mucoadhesive properties, chitosan can overcome the limitations of intranasal administration to the brain. It has been shown that cationic chitosan nanoparticles interact with the negatively charged brain endothelium. Coating these polymeric nanoparticle devices with different surfactants can also aid BBB crossing and uptake in the brain. Surfactants such as polysorbate 80, 20, 40, 60, and poloxamer 188 demonstrated positive drug delivery through the blood–brain barrier, whereas other surfactants did not yield the same results. It has also been shown that functionalizing the surface of nanoparticles with polyethylene glycol (PEG) can induce the "stealth effect", allowing the drug-loaded nanoparticle to circulate throughout the body for prolonged periods of time. Further, the stealth effect, caused in part by the hydrophilic and flexible properties of the PEG chains, facilitates an increase in localizing the drug at target sites in tissues and organs.

Mechanisms for delivery

Liposomes
A mechanism for liposome transport across the BBB is lipid-mediated free diffusion, a type of facilitated diffusion, or lipid-mediated endocytosis. There exist many lipoprotein receptors which bind lipoproteins to form complexes that in turn transport the liposome nano-delivery system across the BBB. Apolipoprotein E (apoE) is a protein that facilitates transport of lipids and cholesterol. ApoE constituents bind to nanoparticles, and then this complex binds to a low-density lipoprotein receptor (LDLR) in the BBB and allows transport to occur.

Polymeric nanoparticles
The mechanism for the transport of polymer-based nanoparticles across the BBB has been characterized as receptor-mediated endocytosis by the brain capillary endothelial cells. Transcytosis then occurs to transport the nanoparticles across the tight junction of endothelial cells and into the brain. Surface-coating nanoparticles with surfactants such as polysorbate 80 or poloxamer 188 was also shown to increase uptake of the drug into the brain. This mechanism also relies on certain receptors located on the luminal surface of endothelial cells of the BBB. Ligands coated on the nanoparticle's surface bind to specific receptors to cause a conformational change.
Once bound to these receptors, transcytosis can commence, and this involves the formation of vesicles from the plasma membrane pinching off the nanoparticle system after internalization. Additional receptors identified for receptor-mediated endocytosis of nanoparticle delivery systems are the scavenger receptor class B type I (SR-BI), LDL receptor (LRP1), transferrin receptor, and insulin receptor. As long as a receptor exists on the endothelial surface of the BBB, any ligand can be attached to the nanoparticle's surface to functionalize it so that it can bind and undergo endocytosis. Another mechanism is adsorption-mediated transcytosis, where electrostatic interactions are involved in mediating nanoparticle crossing of the BBB. Cationic nanoparticles (including cationic liposomes) are of interest for this mechanism, because their positive charges assist binding on the brain's endothelial cells. Using TAT peptides, a type of cell-penetrating peptide, to functionalize the surface of cationic nanoparticles can further improve drug transport into the brain.

Magnetic and magnetoelectric nanoparticles
In contrast to the above mechanisms, delivery with magnetic fields does not strongly depend on the biochemistry of the brain. In this case, nanoparticles are literally pulled across the BBB via application of a magnetic field gradient. The nanoparticles can be pulled in as well as removed from the brain merely by controlling the direction of the gradient. For the approach to work, the nanoparticles must have a non-zero magnetic moment and a diameter of less than 50 nm. Both magnetic and magnetoelectric nanoparticles (MENs) satisfy the requirements. However, it is only the MENs which display a non-zero magnetoelectric (ME) effect. Due to the ME effect, MENs can provide direct access to local intrinsic electric fields at the nanoscale to enable two-way communication with the neural network at the single-neuron level. MENs, proposed by the research group of Professor Sakhrat Khizroev at Florida International University (FIU), have been used for targeted drug delivery and externally controlled release across the BBB to treat HIV and brain tumors, as well as to wirelessly stimulate neurons deep in the brain for treatment of neurodegenerative diseases such as Parkinson's disease.

Focused ultrasound
Studies have shown that focused ultrasound bursts can noninvasively be used to disrupt tight junctions in desired locations of the BBB, allowing for the increased passage of particles at that location. This disruption can last up to four hours after burst administration. Focused ultrasound works by generating oscillating microbubbles, which physically interact with the cells of the BBB by oscillating at a frequency which can be tuned by the ultrasound burst. This physical interaction is believed to cause cavitation and ultimately the disintegration of the tight junction complexes, which may explain why this effect lasts for several hours. However, the energy applied from ultrasound can result in tissue damage. Fortunately, studies have demonstrated that this risk can be reduced if preformed microbubbles are first injected before focused ultrasound is applied, reducing the energy required from the ultrasound. This technique has applications in the treatment of various diseases. For example, one study has shown that using focused ultrasound with oscillating bubbles loaded with a chemotherapeutic drug, carmustine, facilitates the safe treatment of glioblastoma in an animal model.
This drug, like many others, normally requires large dosages to reach the target brain tissue by diffusion from the blood, leading to systemic toxicity and the possibility of multiple harmful side effects manifesting throughout the body. However, focused ultrasound has the potential to increase the safety and efficacy of drug delivery to the brain.

Toxicity
A study was performed to assess the toxicity effects of doxorubicin-loaded polymeric nanoparticle systems. It was found that doses up to 400 mg/kg of PBCA nanoparticles alone did not cause any toxic effects on the organism. These low toxicity effects can most likely be attributed to the controlled release and modified biodistribution of the drug due to the traits of the nanoparticle delivery system. Toxicity is a highly important factor and limit of drug delivery studies, and a major area of interest in research on nanoparticle delivery to the brain. Metal nanoparticles are associated with risks of neurotoxicity and cytotoxicity. These heavy metals generate reactive oxygen species, which cause oxidative stress and damage the cells' mitochondria and endoplasmic reticulum. This leads to further issues in cellular toxicity, such as damage to DNA and disruption of cellular pathways. Silver nanoparticles in particular have a higher degree of toxicity compared to other metal nanoparticles such as gold or iron. Silver nanoparticles can circulate through the body and accumulate easily in multiple organs, as discovered in a study on silver nanoparticle distribution in rats. Traces of silver accumulated in the rats' lungs, spleen, kidney, liver, and brain after the nanoparticles were injected subcutaneously. In addition, silver nanoparticles generate more reactive oxygen species compared to other metals, which leads to an overall larger issue of toxicity.

Research
In the early 21st century, extensive research is occurring in the field of nanoparticle drug delivery systems to the brain. One of the common diseases being studied in neuroscience is Alzheimer's disease. Many studies have been done to show how nanoparticles can be used as a platform to deliver therapeutic drugs to patients with the disease. A few Alzheimer's drugs that have been studied especially are rivastigmine, tacrine, quinoline, piperine, and curcumin. PBCA, chitosan, and PLGA nanoparticles were used as delivery systems for these drugs. Overall, the results from each drug injection with these nanoparticles showed remarkable improvements in the effects of the drug relative to non-nanoparticle delivery systems. This possibly suggests that nanoparticles could provide a promising solution to how these drugs could cross the BBB. One factor that still must be considered and accounted for is nanoparticle accumulation in the body. With the long-term and frequent injections that are often required to treat chronic diseases such as Alzheimer's disease, polymeric nanoparticles could potentially build up in the body, causing undesirable effects. This area of concern would have to be further assessed to analyze these possible effects and improve upon them.

References

External links
Brain Nanomedicine
Nanoparticles for drug delivery to the brain
[ "Chemistry", "Materials_science" ]
4,530
[ "Nanomedicine", "Pharmacology", "Drug delivery devices", "Nanotechnology" ]
41,090,148
https://en.wikipedia.org/wiki/Nitridoborate
The nitridoborates are chemical compounds of boron and nitrogen with metals. These compounds are typically produced at high temperature by reacting hexagonal boron nitride (α-BN) with metal nitrides, or by metathesis reactions involving nitridoborates. A wide range of these compounds have been made involving lithium, alkaline earth metals and lanthanides, and their structures determined using crystallographic techniques such as X-ray crystallography. Structurally, one of their interesting features is the presence of polyatomic anions of boron and nitrogen whose geometry and B–N bond lengths have been interpreted in terms of π-bonding. Many of the compounds produced can be described as ternary compounds of metal, boron and nitrogen; examples of these are Li3BN2, Mg3BN3, La3B3N6 and La5B4N9. However, there are examples of compounds with more than one metal, for example La3Ni2B2N3, and compounds containing anions such as Cl−, for example Mg2BN2Cl.

Structures and bonding
Examination of the crystallographic data shows the presence of polyatomic units consisting of boron and nitrogen. These units have structures similar to those of isoelectronic anions, which have π-bonded structures. The bonding in some of these compounds is ionic in character, such as Ca3[BN2]2; other compounds have metallic characteristics, where the bonding has been described in terms of π-bonded anions with extra electrons in anti-bonding orbitals that not only cause a lengthening of the B–N bonds but also form part of the conduction band of the solid. The simplest ion, BN^n−, is comparable to the acetylide ion C2^2−, but attempts to prepare the compound CaBN analogous to calcium carbide CaC2 failed. The bonding of compounds containing the diatomic BN anion has been explained in terms of electrons entering anti-bonding orbitals and reducing the B–N bond order from 3 (triple bond) in BN^2− to 2 (double bond) in BN^4−. Some nitridoborates are salt-like, such as Li3BN2 and LiCa4[BN2]3; others have a metallic lustre, such as LiEu4[BN2]3. Bonding calculations show that the energy of the valence orbitals of metal atoms of group 2 and lanthanide elements is higher than that of the bonding orbitals in BNx ions, which indicates an ionic-like interaction between a metal atom and a BNx ion. With lanthanide compounds, where extra electrons enter the anti-bonding orbitals of an ion, there can be a smaller band gap, giving the compounds metal-like properties such as lustre. With transition metals the d orbitals can be similar in energy to bonding orbitals in the BN anions, suggesting covalent interactions.

References

Boron–nitrogen compounds Anions
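The bond-order argument can be made explicit with a simple molecular-orbital electron count (an illustration supplied here, not part of the original article). BN^2− has ten valence electrons (3 from B, 5 from N, 2 from the charge), making it isoelectronic with C2^2−, N2 and CO:

\[
\text{bond order} = \tfrac{1}{2}\left(n_{\text{bonding}} - n_{\text{antibonding}}\right):
\qquad
\mathrm{BN}^{2-}\ (10\,e^-):\ \tfrac{1}{2}(8-2)=3,
\qquad
\mathrm{BN}^{4-}\ (12\,e^-):\ \tfrac{1}{2}(8-4)=2,
\]

the two additional electrons of BN^4− entering the π* anti-bonding level, consistent with the B–N bond lengthening described above.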
Nitridoborate
[ "Physics", "Chemistry" ]
614
[ "Ions", "Matter", "Anions" ]
41,093,622
https://en.wikipedia.org/wiki/Electrodynamic%20droplet%20deformation
Electrohydrodynamic droplet deformation is a phenomenon that occurs when liquid droplets suspended in a second immiscible liquid are exposed to an oscillating electric field. Under these conditions, the droplet will periodically deform between prolate and oblate spheroids. The characteristic frequency and magnitude of the deformation is determined by a balance of electrodynamic, hydrodynamic, and capillary stresses acting on the droplet interface. This phenomenon has been studied extensively both mathematically and experimentally because of the complex fluid dynamics that occur. Characterization and modulation of electrodynamic droplet deformation is of particular interest for engineering applications because of the growing need to improve the performance of complex industrial processes (e.g. two-phase cooling, crude oil demulsification). The primary advantage of using oscillatory droplet deformation to improve these engineering processes is that the phenomenon does not require sophisticated machinery or the introduction of heat sources. This effectively means that improving performance via oscillatory droplet deformation is simple and in no way diminishes the effectiveness of the existing engineering system.

Motivation
The heat transfer dynamics in two-phase, two-component flow systems are governed by the dynamic behavior of droplets/bubbles that are injected into the circulating coolant stream. The injected bubbles/droplets are typically of a lower density than the coolant and thus experience an upward buoyancy force. They enhance the thermal performance of cooling systems because, as they float upwards in heated pipes, the coolant is forced to flow around the bubbles/droplets. The secondary flow around the droplets modifies the coolant flow, creating a quasi-mixing effect in the bulk fluid that increases the heat transfer from the pipe walls to the coolant. Current two-component, two-phase cooling systems, such as nuclear reactors, control the cooling rate by optimizing solely the coolant type, flow rate, and bubble/droplet injection rate. This approach modifies only bulk flow settings and does not provide engineers the option of directly modulating the mechanisms that govern the heat transfer dynamics. Inducing oscillations in the bubbles/droplets is a promising approach to improving convective cooling because it creates secondary and tertiary flow patterns that could improve heat transfer without introducing significant heat to the system.

Electrodynamic droplet deformation is also of particular interest in crude oil processing as a method to improve the separation rate of water and salts from the bulk. In its unprocessed form, crude oil cannot be used directly in industrial processes because the presence of salts can corrode heat exchangers and distillation equipment. To avoid fouling due to these impurities it is necessary to first remove the salt, which is concentrated in suspended water droplets. Exposing batches of crude oil to both DC and AC high-voltage electric fields induces droplet deformation that ultimately causes the water droplets to coalesce into larger droplets. Droplet coalescence improves the separation rate of water from crude oil because the rise or settling velocity of a sphere is proportional to the square of the sphere's radius. This can be easily shown by considering the gravitational force, buoyancy, and Stokes flow drag.
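The force balance just mentioned can be written out explicitly (a standard derivation supplied here for completeness; it is not spelled out in the original text). Equating the net gravitational/buoyancy force on a spherical droplet of radius $r$ and density $\rho_d$, suspended in a fluid of density $\rho_c$ and viscosity $\mu$, against the Stokes drag gives

\[
\frac{4}{3}\pi r^{3}\,(\rho_d - \rho_c)\,g = 6\pi \mu\, r\, v
\qquad\Longrightarrow\qquad
v = \frac{2\,(\rho_d - \rho_c)\,g\,r^{2}}{9\,\mu},
\]

so the steady rise or settling speed scales as $r^2$, which is why coalescence into larger droplets accelerates the separation of water from the oil phase.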
It has been reported that increasing both the amplitude and frequency of the applied electric fields can significantly increase water separation, by up to 90%. Taylor’s 1966 Solution Taylor’s 1966 solution to internal and external flow of a sphere induced by an electric field was the first to provide an argument that accounted for pressure induced by fluid flow both inside the droplet and in the external fluid field. Unlike some of his contemporaries, Taylor argued that surface tension and a uniform internal pressure could not balance the spatially varying normal stress on a droplet interface that resulted from the presence of a steady, uniform electric field. He posited that in order for a droplet interface to remain in a non-deformed state in the presence of an electric field, there must be fluid flow both inside and outside the droplet interface. He developed a solution for the internal and external flow field using a streamfunction approach similar to that of creeping flow past a sphere. Taylor confirmed the validity of his solution by comparing it to images from flow visualization studies that observed circulation both inside and outside the droplet interface. Torza's solution Torza's 1971 solution for droplet deformation under the presence of a uniform, time-varying electric field is the most widely referenced model for predicting small amplitude droplet deformations. Similar to the solution developed by Taylor, Torza developed a solution for electrodynamic droplet deformation by considering fluid circulation both inside and outside of a droplet interface. His solution is innovative because it derives an expression for the instantaneous droplet deformation ratio by considering separate sub-problems to derive the effects of electric stress, internal hydrodynamic stress, external hydrodynamic stress, and the surface tension on the droplet interface. The droplet deformation ratio D is a quantity that expresses the relative extension and shortening of the vertical and horizontal dimensions of a sphere. The electric stress sub-problem is formulated by defining electric potential fields on the inside and outside of the droplet interface that are expressible as complex phasors with the same oscillation frequency as the imposed electric field. Since Torza treats the fluid inside the droplet and outside the droplet as having no net charge, the governing equation for the electric stress sub-problem reduces to Gauss's law with a spatial charge density of zero. By re-expressing the electric field in terms of the gradient of the electric potential, the governing electric equation reduces to Laplace's equation. Separation of variables can be used to derive a solution to this equation in the form of a power series multiplied by the cosine of the polar angle taken relative to the direction of the electric field. Using the solutions for the magnitude of the electric potentials on the inside and outside of the droplet, the electric stress created on the bubble/droplet interface can be determined using the definition of the Maxwell stress tensor and neglecting the magnetic field contribution. It is worth noting that because the electric field is in the form of a phasor, the scalar product and tensor product of the electric field with itself, as are present in the Maxwell stress tensor, result in a doubling of the oscillation frequency.
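This frequency doubling is easy to check numerically: squaring a sinusoid at frequency f produces a constant part plus a component at 2f. The parameter values below are illustrative only.

```python
# Numerical illustration of the frequency doubling noted above: the Maxwell
# stress involves products of the field with itself, so a field at frequency
# f produces a stress-like quantity that oscillates at 2f.
import numpy as np

f = 50.0                                  # imposed field frequency (Hz), arbitrary
t = np.linspace(0.0, 1.0, 20000, endpoint=False)
E = np.cos(2.0 * np.pi * f * t)           # real part of a unit-amplitude field phasor
stress_like = E**2                        # stands in for the E*E terms in the stress tensor

spectrum = np.abs(np.fft.rfft(stress_like - stress_like.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print("dominant oscillation frequency:", freqs[np.argmax(spectrum)], "Hz")   # ~100 Hz = 2*f
```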
The sub-problem Torza solves to determine the velocity fields and hydrodynamic stresses that result from the electric stress is of exactly the same form as the one Taylor used for his solution for steady electric fields. Specifically, Torza solves the streamfunction formulation of the curl of the Navier–Stokes equations in spherical coordinates by adopting Taylor’s streamfunction solution form and imposing stress balance conditions at the interface. Using the streamfunction solution, Torza derived analytical expressions for the velocity fields that could be used to derive analytical expressions for hydrodynamic stress on the interface for incompressible Newtonian fluids. To incorporate the effect of surface tension into the periodic deformation of a droplet, Torza calculated the difference in electric and hydrodynamic stresses across the interface and used that as the driving stress in the Laplace pressure equation. This is the most important relation for this system because it describes the mechanism by which differences in stress across the droplet interface can induce deformation by inducing a change of the principal radii of curvature. Using this relation between surface pressures in conjunction with geometrical arguments derived by Taylor for small deformations, Torza was able to derive an analytical expression for the deformation ratio as the sum of a steady component and an oscillating component with a frequency that is twice that of the imposed electric field. The important terms to recognize in this expression are phi in the steady term, the cosine in the time-varying term, and gamma in both terms. The phi term is what Taylor and Torza refer to as a “discriminating function” because its value determines whether the droplet will tend to spend more time in either a prolate or oblate shape. It is a function of all the material properties and the frequency of oscillation, but is completely independent of time. The time-varying cosine term shows that the droplet does in fact oscillate at twice the frequency of the imposed electric field but is also generally out of phase due to the constant alpha term that arises from the mathematics. The other variables are constants that depend on the geometric, electric, and thermodynamic properties of the relevant liquids in addition to the oscillation frequency. In general, it is apparent that the magnitude of the droplet deformation is constrained by the interfacial tension, represented by gamma. As the interfacial tension increases, the net magnitude decreases due to an increase in capillary forces. Since the equilibrium shape of a droplet tends towards the one with the minimum energy, a large value of interfacial tension tends to drive the droplet shape towards a sphere. Safety and practical considerations Although periodic droplet deformation is widely studied for its practical industrial applications, its implementation poses significant safety issues and physical limitations due to the use of an electric field. In order to induce periodic droplet deformation using an electric field, an extremely large amplitude electric field must be applied. Research studies using water droplets suspended in silicone oil required root-mean-square values as high as 10^6 V/m. Even for a small electrode spacing, this type of field requires electric potentials greater than 500 V, which is roughly three times the wall voltage in the United States.
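The order of magnitude of the required potentials follows from V = E·d. The sketch below evaluates this for a few electrode gaps; the gap values are illustrative and not taken from a specific study.

```python
# Required electrode potential V = E * d for the RMS field strength quoted
# above (~1e6 V/m). The electrode gaps are illustrative placeholders.
E_rms = 1.0e6                                   # V/m, as quoted in the text
for gap_mm in (0.1, 0.5, 1.0, 5.0):
    d = gap_mm * 1e-3                           # gap in metres
    print(f"gap = {gap_mm:4.1f} mm -> required RMS potential ~ {E_rms * d:8.0f} V")
```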
Practically speaking, an electric field of this magnitude can only be achieved if the electrode spacing is very small (~ O(0.1 mm)) or if a high-voltage amplifier is available. It is for this reason that the majority of studies of this phenomenon are currently being conducted in research laboratories using small-diameter tubes; tubes of this size are in fact present in industrial cooling systems, such as nuclear reactors. References Fluid dynamics Electrodynamics
Electrodynamic droplet deformation
[ "Chemistry", "Mathematics", "Engineering" ]
2,033
[ "Electrodynamics", "Chemical engineering", "Piping", "Fluid dynamics", "Dynamical systems" ]
36,873,736
https://en.wikipedia.org/wiki/Fenske%E2%80%93Hall%20method
The Fenske–Hall method is a molecular orbital method in computational chemistry, usually applied to inorganic compounds. This method was developed in Richard F. Fenske's research group at the University of Wisconsin. The method is named after Fenske and Michael B. Hall, who co-authored the last paper in its development. The Fenske–Hall method is derived from the Roothaan equations. It is ab initio in the sense that it does not make use of parameters from experimental data (see semi-empirical quantum chemistry method). Electronic exchange is considered, but electron correlation is not. It is able to predict shapes and energies of molecular orbitals similar to those obtained from the more rigorous analysis of density functional theory (DFT), while using fewer computational resources. As a result, some consider the Fenske–Hall method an approximation to DFT. Fenske–Hall calculations may be performed with Jimp 2. See also Molecular orbital theory Quantum chemistry computer programs Semi-empirical quantum chemistry methods References Computational chemistry
Fenske–Hall method
[ "Chemistry" ]
208
[ "Theoretical chemistry stubs", "Theoretical chemistry", "Computational chemistry", "Computational chemistry stubs", "Physical chemistry stubs" ]
46,254,141
https://en.wikipedia.org/wiki/AuthaGraph%20projection
AuthaGraph is an approximately equal-area world map projection invented by Japanese architect Hajime Narukawa in 1999. The map is made by equally dividing a spherical surface into 96 triangles, transferring it to a tetrahedron while maintaining area proportions, and unfolding it in the form of a rectangle: it is a polyhedral map projection. The map substantially preserves sizes and shapes of all continents and oceans while it reduces distortions of their shapes, as inspired by the Dymaxion map. The projection does not have some of the major distortions of the Mercator projection, like the expansion of countries in far northern latitudes, and allows for Antarctica to be displayed accurately and in whole. Triangular world maps are also possible using the same method. The name is derived from "authalic" and "graph". The method used to construct the projection ensures that the 96 regions of the sphere that are used to define the projection each have the correct area, but the projection does not qualify as equal-area because the method does not control area at infinitesimal scales or even within those regions. The AuthaGraph world map can be tiled in any direction without visible seams. From this map-tiling, a new world map with triangular, rectangular or a parallelogram's outline can be framed with various regions at its center. This tessellation allows for depicting temporal themes, such as a satellite's long-term movement around the Earth in a continuous line. In 2011 the AuthaGraph mapping projection was selected by the Japanese National Museum of Emerging Science and Innovation (Miraikan) as its official mapping tool. In October 2016, the AuthaGraph mapping projection won the 2016 Good Design Grand Award from the Japan Institute of Design Promotion. On April 16, 2024, Nebraska Governor Jim Pillen signed a law that requires public schools to use only maps based on the Gall–Peters projection, a similar cylindrical equal-area projection, or the AuthaGraph projection, beginning in the 2024–2025 school year. See also List of map projections Lee conformal world in a tetrahedron, another tetrahedral projection, 1965 Dymaxion map, 1943 Peirce quincuncial projection, 1879 Polyhedral map projection, earliest known is by Leonardo da Vinci, 1514 References External links Good Design Award 1999 introductions Map projections Japanese inventions
AuthaGraph projection
[ "Mathematics" ]
483
[ "Map projections", "Coordinate systems" ]
46,255,905
https://en.wikipedia.org/wiki/Black%20box%20group
In computational group theory, a black box group (black-box group) is a group G whose elements are encoded by bit strings of length N, and group operations are performed by an oracle (the "black box"). These operations include: taking a product g·h of elements g and h, taking an inverse g⁻¹ of element g, and deciding whether g = 1. This class is defined to include both the permutation groups and the matrix groups. The upper bound on the order of G given by |G| ≤ 2^N shows that G is finite. Applications The black box groups were introduced by Babai and Szemerédi in 1984. They were used as a formalism for (constructive) group recognition and property testing. Notable algorithms include Babai's algorithm for finding random group elements, the Product Replacement Algorithm, and testing group commutativity. Many early algorithms in CGT, such as the Schreier–Sims algorithm, require a permutation representation of a group and thus are not black box. Many other algorithms require finding element orders. Since there are efficient ways of finding the order of an element in a permutation group or in a matrix group (a method for the latter is described by Celler and Leedham-Green in 1997), a common recourse is to assume that the black box group is equipped with a further oracle for determining element orders. See also Implicit graph Matroid oracle Notes References Derek F. Holt, Bettina Eick, Eamonn A. O'Brien, Handbook of computational group theory, Discrete Mathematics and its Applications (Boca Raton). Chapman & Hall/CRC, Boca Raton, Florida, 2005. Ákos Seress, Permutation group algorithms, Cambridge Tracts in Mathematics, vol. 152, Cambridge University Press, Cambridge, 2003. Computational group theory Finite groups
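A minimal sketch of the oracle interface described above. The permutation-based backend here is only a stand-in so the example runs; a genuine black-box group exposes nothing but the encoded strings and the three operations.

```python
# Toy black-box group oracle: elements are opaque byte strings, and callers
# may only multiply, invert, and test for the identity. The internal
# permutation representation is an illustrative assumption, not part of the
# black-box model itself.

class BlackBoxOracle:
    def __init__(self, degree=4):
        self._degree = degree

    def encode(self, perm):
        """Encode a permutation (tuple of images of 0..n-1) as an opaque bit string."""
        return bytes(perm)

    def multiply(self, g, h):
        """Return the encoding of the product g*h (composition g after h)."""
        return bytes(g[h[i]] for i in range(self._degree))

    def inverse(self, g):
        """Return the encoding of g^-1."""
        inv = [0] * self._degree
        for i, gi in enumerate(g):
            inv[gi] = i
        return bytes(inv)

    def is_identity(self, g):
        """Decide whether g = 1."""
        return all(g[i] == i for i in range(self._degree))

oracle = BlackBoxOracle()
a = oracle.encode((1, 0, 2, 3))                       # a transposition
assert oracle.is_identity(oracle.multiply(a, oracle.inverse(a)))
```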
Black box group
[ "Mathematics" ]
383
[ "Mathematical structures", "Algebraic structures", "Finite groups" ]
46,260,673
https://en.wikipedia.org/wiki/Vinylcyclohexene%20dioxide
4-Vinylcyclohexene dioxide (VCD) is an organic compound that contains two epoxide functional groups. It is industrially used as a crosslinking agent for the production of epoxy resins. It is a colourless liquid. It is an intermediate for the synthesis of organic compounds. Preparation and properties 4-Vinylcyclohexene dioxide is prepared by epoxidation of 4-vinylcyclohexene with peroxybenzoic acid. Its viscosity is 15 mPa·s. Safety 4-Vinylcyclohexene dioxide, like other volatile epoxides, is classified as an alkylating agent. VCD has toxic effects on fertility. It destroys oocytes (the eggs in a female's ovaries) in immature ovarian follicles in mice and rats. In pest control, it has been used as an ovotoxic agent for reducing rat fertility. References Epoxides Cyclohexanes Monomers
Vinylcyclohexene dioxide
[ "Chemistry", "Materials_science" ]
211
[ "Monomers", "Polymer chemistry" ]
46,260,684
https://en.wikipedia.org/wiki/Straight-Through%20Quality
Straight-Through Quality (STQ) refers to approaches and outputs of test automation that have quality and deliver business benefit. STQ takes its name from the business concept of straight-through processing (STP), and also acts as a tool and enabler for STP. Traditional techniques for testing and delivery have often required a great deal of manual support and intervention. These approaches are subject to human error, cost of delay and lack of reuse. They also have the negative side-effect of being unable to deliver 'fail-fast' approaches, which have proven popular with Agile practitioners. Traditional approaches have typically been expensive, with whole siloed departments created within commercial companies to deliver quality and deployment alone. STQ as an approach aims to resolve this problem. Examples Tangible examples of STQ approaches in the software industry are continuous integration (CI) and continuous delivery (CD). Combined, these can ensure that software delivery is integrated, automatically tested and ready for automatic delivery at any time. Together, CI/CD can enable STQ, which can be used as business-facing terminology for users who do not understand the technical complexities of CI/CD. See also Straight-through processing Continuous integration Continuous delivery References External links Business Case for Test Automation Quality Automation
Straight-Through Quality
[ "Engineering" ]
259
[ "Control engineering", "Automation" ]
58,891,956
https://en.wikipedia.org/wiki/Cerdulatinib
Cerdulatinib is a small molecule SYK/JAK kinase inhibitor in development for treatment of hematological malignancies. It has low-nanomolar IC50 values against TYK2, JAK1, JAK2, JAK3, FMS, and SYK. It is being developed by Portola Pharmaceuticals; in September 2018 the FDA granted orphan drug status to cerdulatinib for the treatment of peripheral T-cell lymphoma (PTCL). See also Ruxolitinib Fostamatinib Entospletinib References Experimental cancer drugs Protein kinase inhibitors Aminopyrimidines
Cerdulatinib
[ "Chemistry" ]
131
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
58,892,481
https://en.wikipedia.org/wiki/Ulam%E2%80%93Warburton%20automaton
The Ulam–Warburton cellular automaton (UWCA) is a 2-dimensional fractal pattern that grows on a regular grid of cells consisting of squares. Starting with one square initially ON and all others OFF, successive iterations are generated by turning ON all squares that share precisely one edge with an ON square. This is the von Neumann neighborhood. The automaton is named after the Polish-American mathematician and scientist Stanislaw Ulam and the Scottish engineer, inventor and amateur mathematician Mike Warburton. Properties and relations The UWCA is a 2D 5-neighbor outer totalistic cellular automaton using rule 686. The number of cells turned ON in iteration n is denoted u(n) and has an explicit formula: u(1) = 1 and u(n) = 4·3^(wt(n−1)−1) for n ≥ 2, where wt(n) is the Hamming weight function, which counts the number of 1's in the binary expansion of n. The total number of cells turned ON after n iterations is denoted U(n) = u(1) + u(2) + ... + u(n) = 1 + (4/3)·(3^wt(1) + 3^wt(2) + ... + 3^wt(n−1)). Table of wt(n), u(n) and U(n) The table shows that different inputs can lead to the same output. This surjective property emerges from the simple rule of growth – a new cell is born if it shares only one edge with an existing ON cell – the process appears disorderly and is modeled by functions involving wt, but within the chaos there is regularity. U(n) is OEIS sequence A147562 and u(n) is OEIS sequence A147582. Counting cells with quadratics For all integer sequences of the form where and Let ( is OEIS sequence A130665) Then the total number of ON cells in the integer sequence is given by Or in terms of we have Table of integer sequences nm and Um Upper and lower bounds U(n) has fractal-like behavior with a sharp upper bound given by U(n) ≤ (4n² − 1)/3. The upper bound only contacts U(n) at 'high-water' points, when n is a power of 2. These are also the generations at which the UWCA based on squares, the Hex–UWCA based on hexagons and the Sierpinski triangle return to their base shape. Limit superior and limit inferior We have The lower limit was obtained by Robert Price (OEIS sequence A261313) and took several weeks to compute and is believed to be twice the lower limit of where is the total number of toothpicks in the toothpick sequence up to generation Relationship to Hexagonal UWCA The Hexagonal-Ulam–Warburton cellular automaton (Hex-UWCA) is a 2-dimensional fractal pattern that grows on a regular grid of cells consisting of hexagons. The same growth rule for the UWCA applies and the pattern returns to a hexagon in generations that are powers of 2, when the first hexagon is considered as generation 1. The UWCA has two reflection lines that pass through the corners of the initial cell dividing the square into four quadrants; similarly the Hex-UWCA has three reflection lines dividing the hexagon into six sections, and the growth rule follows the symmetries. Cells whose centers lie on a line of reflection symmetry are never born. The Hex-UWCA pattern can be explored here. Sierpinski triangle The Sierpinski triangle appears in 13th century Italian floor mosaics. Wacław Sierpiński described the triangle in 1915. If we consider the growth of the triangle, with each row corresponding to a generation and the top row, generation 1, being a single triangle, then like the UWCA and the Hex-UWCA it returns to its starting shape in generations that are powers of 2. Toothpick sequence The toothpick pattern is constructed by placing a single toothpick of unit length on a square grid, aligned with the vertical axis. At each subsequent stage, for every exposed toothpick end, place a perpendicular toothpick centred at that end. The resulting structure has a fractal-like appearance.
The toothpick and UWCA structures are examples of cellular automata defined on a graph, and when considered as a subgraph of the infinite square grid the structure is a tree. The toothpick sequence returns to its base rotated ‘H’ shape in certain generations. The toothpick sequence and various toothpick-like sequences can be explored here. Combinatorial game theory A subtraction game called LIM, in which two players alternately modify three piles of tokens by taking an equal amount of tokens from two of the piles and adding the same amount to the third pile, has a set of winning positions that can be described using the Ulam–Warburton automaton. History The beginnings of automata go back to a conversation Ulam had with Stanislaw Mazur in a coffee house in Lwów, Poland, when Ulam was twenty in 1929. Ulam worked with John von Neumann during the war years when they became good friends and discussed cellular automata. Von Neumann used these ideas in his concept of a universal constructor and the digital computer. Ulam focussed on biological and ‘crystal like’ patterns, publishing a sketch of the growth of a square-based cell structure using a simple rule in 1962. Mike Warburton is an amateur mathematician working in probabilistic number theory who was educated at George Heriot's School in Edinburgh. His son's mathematics GCSE coursework involved investigating the growth of equilateral triangles or squares in the Euclidean plane with the rule – a new generation is born if and only if connected to the last by only one edge. That coursework concluded with a recursive formula for the number of ON cells born in each generation. Later, Warburton found the sharp upper bound formula, which he wrote up as a note in the Open University’s M500 magazine in 2002. David Singmaster read the article, analysed the structure and named the object the Ulam–Warburton cellular automaton in his 2003 article. Since then it has given rise to numerous integer sequences. References External links Explore the UWCA, Hex-UWCA and related integer sequence animations Neil Sloane: Terrific Toothpick Patterns - Numberphile. (The UWCA starts at time 8:20) Cellular automaton rules Fractals
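A direct simulation of the one-edge growth rule is short enough to sketch; it reproduces the per-generation cell counts u(n) discussed in the Properties and relations section. This is an illustrative implementation, not code from any of the cited sources.

```python
# Direct simulation of the Ulam-Warburton growth rule: a cell is born when it
# shares an edge with exactly one ON cell. The printed list reproduces the
# sequence 1, 4, 4, 12, 4, 12, 12, 36, ... (OEIS A147582).

def ulam_warburton(generations):
    on = {(0, 0)}                      # generation 1: a single ON cell
    born_per_gen = [1]
    for _ in range(generations - 1):
        candidates = {}
        for (x, y) in on:
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb not in on:
                    candidates[nb] = candidates.get(nb, 0) + 1
        newborn = {cell for cell, count in candidates.items() if count == 1}
        on |= newborn
        born_per_gen.append(len(newborn))
    return born_per_gen

print(ulam_warburton(10))              # [1, 4, 4, 12, 4, 12, 12, 36, 4, 12]
```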
Ulam–Warburton automaton
[ "Mathematics" ]
1,275
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Fractals", "Mathematical relations" ]
58,898,994
https://en.wikipedia.org/wiki/Non-orthogonal%20frequency-division%20multiplexing
Non-orthogonal frequency-division multiplexing (N-OFDM) is a method of encoding digital data on multiple carrier frequencies with non-orthogonal intervals between the frequencies of the sub-carriers. N-OFDM signals can be used in communication and radar systems. Subcarriers system The low-pass equivalent N-OFDM signal is expressed as s(t) = Σ_{k=0}^{N−1} X_k·e^{j2πf_k t} for 0 ≤ t < T, where the X_k are the data symbols, N is the number of sub-carriers, and T is the N-OFDM symbol time. A sub-carrier spacing that is not an integer multiple of 1/T makes the sub-carriers non-orthogonal over each symbol period. History The history of the theory of N-OFDM signals started in 1992 with the Patent of the Russian Federation No. 2054684. In this patent, Vadym Slyusar proposed the first method of optimal processing for N-OFDM signals after the Fast Fourier transform (FFT). In this regard it should be noted that W. Kozek and A. F. Molisch wrote in 1998 about N-OFDM signals that "it is not possible to recover the information from the received signal, even in the case of an ideal channel." In 2001, V. Slyusar proposed non-orthogonal frequency digital modulation (N-OFDM) as an alternative to OFDM for communications systems. The next publication about this method, in July 2002, has priority over the conference paper regarding SEFDM by I. Darwazeh and M.R.D. Rodrigues (September 2003). Advantages of N-OFDM Despite the increased complexity of demodulating N-OFDM signals compared to OFDM, the transition to a non-orthogonal subcarrier frequency arrangement provides several advantages: higher spectral efficiency, which makes it possible to reduce the frequency band occupied by the signal and to improve the electromagnetic compatibility of many terminals; adaptive detuning from interference concentrated in frequency by changing the nominal frequencies of the subcarriers; an ability to take into account Doppler frequency shifts of subcarriers when working with subscribers moving at high speeds; reduction of the peak factor of the multi-frequency signal mixture. Idealized system model This section describes a simple idealized N-OFDM system model suitable for a time-invariant AWGN channel. Transmitter N-OFDM signals An N-OFDM carrier signal is the sum of a number of non-orthogonal subcarriers, with baseband data on each subcarrier being independently modulated, commonly using some type of quadrature amplitude modulation (QAM) or phase-shift keying (PSK). This composite baseband signal is typically used to modulate a main RF carrier. The input is a serial stream of binary digits. By inverse multiplexing, these are first demultiplexed into parallel streams, and each one mapped to a (possibly complex) symbol stream using some modulation constellation (QAM, PSK, etc.). Note that the constellations may be different, so some streams may carry a higher bit-rate than others. Digital signal processing is then applied to each set of symbols, giving a set of complex time-domain samples. These samples are then quadrature-mixed to passband in the standard way. The real and imaginary components are first converted to the analogue domain using digital-to-analogue converters (DACs); the analogue signals are then used to modulate cosine and sine waves at the carrier frequency, respectively. These signals are then summed to give the transmission signal. Demodulation Receiver The receiver picks up the signal, which is then quadrature-mixed down to baseband using cosine and sine waves at the carrier frequency. This also creates signals centered on twice the carrier frequency, so low-pass filters are used to reject these.
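Before the rest of the receiver chain, a minimal numerical sketch of the low-pass equivalent signal model from the Subcarriers system section: N data symbols modulate sub-carriers whose spacing is deliberately not a multiple of 1/T. All parameter values are illustrative only.

```python
# Toy N-OFDM transmitter: QPSK symbols on sub-carriers spaced by alpha/T with
# alpha < 1, which makes them non-orthogonal over the symbol period.
import numpy as np

rng = np.random.default_rng(0)
N, T, alpha = 16, 1.0, 0.8                       # sub-carriers, symbol time, spacing factor
fs = 64 * N / T                                  # sample rate (samples/s), illustrative
t = np.arange(0.0, T, 1.0 / fs)

X = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N)   # QPSK data symbols
sub_freqs = alpha * np.arange(N) / T             # non-orthogonal sub-carrier frequencies
subcarriers = np.exp(2j * np.pi * np.outer(sub_freqs, t))
baseband = X @ subcarriers                       # low-pass equivalent N-OFDM symbol

print(baseband.shape, float(np.max(np.abs(baseband))))
```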
The baseband signals are then sampled and digitised using analog-to-digital converters (ADCs), and a forward FFT is used to convert back to the frequency domain. This returns parallel streams, which are then fed to an appropriate symbol detector. Demodulation after FFT The first method of optimal processing for N-OFDM signals after the FFT was proposed in 1992. Demodulation without FFT Demodulation using ADC samples The method of optimal processing for N-OFDM signals without the FFT was proposed in October 2003. In this case, ADC samples can be used directly. Demodulation after discrete Hartley transform N-OFDM+MIMO The combination of N-OFDM and MIMO technology is similar to that of OFDM. A digital antenna array can be used as the transmitter and receiver of N-OFDM signals to build a MIMO system. Fast-OFDM The Fast-OFDM method was proposed in 2002. Filter-bank multi-carrier modulation (FBMC) Filter-bank multi-carrier modulation (FBMC) is a related class of multi-carrier schemes; wavelet N-OFDM can be considered an example of FBMC. Wavelet N-OFDM N-OFDM has become a technique for power-line communications (PLC). In this area of research, a wavelet transform is introduced to replace the DFT as the method of creating non-orthogonal frequencies. This is due to the advantages wavelets offer, which are particularly useful on noisy power lines. To create the sender signal, wavelet N-OFDM uses a synthesis bank consisting of a multi-band transmultiplexer followed by the transform function. On the receiver side, an analysis bank is used to demodulate the signal again. This bank contains an inverse transform followed by another multi-band transmultiplexer; the two transform functions are chosen so that the analysis bank inverts the synthesis bank. Spectrally-efficient FDM (SEFDM) N-OFDM is a spectrally efficient method. All SEFDM methods are similar to N-OFDM. Generalized frequency division multiplexing (GFDM) Generalized frequency division multiplexing (GFDM) is a related non-orthogonal multi-carrier technique. See also OFDM COFDM References Multiplexing Quantized radio modulation modes Software-defined radio
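Because the sub-carriers are not orthogonal, a plain per-bin FFT decision suffers inter-carrier leakage. The sketch below recovers the symbols by least-squares projection onto the known sub-carrier matrix; this is only a toy illustration, not the optimal N-OFDM processing described in the cited patents and papers.

```python
# Toy N-OFDM receiver: joint symbol detection by least squares against the
# sub-carrier matrix, demonstrated on a noiseless synthetic signal.
import numpy as np

rng = np.random.default_rng(1)
N, T, alpha = 16, 1.0, 0.8
t = np.arange(0.0, T, T / (64 * N))
X = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N)   # transmitted QPSK symbols
A = np.exp(2j * np.pi * (alpha / T) * np.outer(np.arange(N), t))       # sub-carrier matrix
received = X @ A                                                       # noiseless channel

X_hat, *_ = np.linalg.lstsq(A.T, received, rcond=None)                 # joint detection
decisions = np.sign(X_hat.real) + 1j * np.sign(X_hat.imag)
print(bool(np.allclose(decisions, X)))                                 # True
```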
Non-orthogonal frequency-division multiplexing
[ "Engineering" ]
1,215
[ "Radio electronics", "Software-defined radio" ]
58,901,225
https://en.wikipedia.org/wiki/E-NABLE
e-NABLE is a distributed, open source community that creates and shares open source designs for assistive devices. It is known for creating the first 3D printable prosthetic hand and sharing the designs and code for bioelectric limbs. History In 2011, Ivan Owen created a metal, functional puppet hand for a Steampunk costume. After posting a video of the hand on YouTube, he was contacted by South African carpenter Richard Van As who had lost his fingers in a woodworking accident. Owen and Van As worked on prototypes of a prosthetic hand, before Owen decided to incorporate 3D printing into the design process. This led to the creation of the first 3D printed mechanical hand. The sharing of the design of this hand on an Open License led to the creation of the community. The e-NABLE community "started with around 100 or so people who were simply offering to print the files that were already in existence". Chapters of the organisation exist in many countries, and each works in different ways. For example, one Canadian chapter recycles excess plastic waste to create the prosthetics. A chapter in Aden, Yemen, is producing prosthetic hands for people injured in Yemen's civil war. The Open Source nature of the project is enabling diverse groups around the world to create prosthetics for people within their own communities. A Colombian engineer called Christian Silva has created superhero-themed prosthetic arms for children. In 2016, an Iron Man-themed arm created by Albert Menero was given to a child by Iron Man actor Robert Downey Jr. How it works The E-nable website contains a tool called the “Handomatic,” which is used to fit prosthetic hands according to the measurements of the individual recipient. The tool then creates a custom design which can then be downloaded. Categories of design Body powered arms and hands Functional lower legs Myoelectric upper limbs Upper limb exoskeleton Tools Devices for people with vision impairment Teaching manipulatives References Robotic manipulators Prosthetics Biological engineering Biomedical engineering
E-NABLE
[ "Engineering", "Biology" ]
423
[ "Biological engineering", "Medical technology", "Biomedical engineering" ]
58,903,229
https://en.wikipedia.org/wiki/Claude%20Itzykson
Claude Georges Itzykson (11 April 1938 – 22 May 1995) was a French theoretical physicist who worked in quantum field theory and statistical mechanics. Biography Itzykson was separated from his parents during World War II; his father was taken to a Nazi concentration camp, and Itzykson was raised in a Jewish orphanage in Maisons-Laffitte. After studying at the Lycée Condorcet, Itzykson graduated from the École Polytechnique in 1959. He joined the Theoretical Physics Department of the CEA in Saclay in 1962, then headed by Claude Bloch. He spent most of his career at Saclay, apart from numerous visiting positions he held throughout his working life, such as at the Institute for Advanced Study, Princeton. Works He was a specialist in quantum field theory and applications of group theory in physics. In particular, he worked on the symmetries of the hydrogen atom, the discretization of gauge theories on a lattice, integrals over large matrices and their applications to problems of combinatorics and the physics of random surfaces, and conformal field theories and their classification. His first works were done in collaboration with Maurice Jacob and Raymond Stora. In 1980 he published a treatise on quantum field theory with Jean-Bernard Zuber that became a staple textbook on the subject. Awards In 1995 Itzykson received the Ampère Prize of the French Academy of Sciences. Bibliography Textbooks Selected publications References 1938 births 1995 deaths 20th-century French physicists Quantum physicists French theoretical physicists Mathematical physicists École Polytechnique alumni Members of the Académie Française Members of the French Academy of Sciences Scientists from Paris
Claude Itzykson
[ "Physics" ]
324
[ "Quantum physicists", "Quantum mechanics" ]
60,380,042
https://en.wikipedia.org/wiki/Liquid%20Haskell
Liquid Haskell is a program verifier for the programming language Haskell which allows specifying correctness properties by using refinement types. Properties are verified using a satisfiability modulo theories (SMT) solver which is SMTLIB2-compliant, such as the Z3 Theorem Prover. See also Formal verification References Further reading External links Formal methods tools Static program analysis tools Type systems Free software programmed in Haskell Software using the BSD license
Liquid Haskell
[ "Mathematics" ]
98
[ "Mathematical structures", "Type systems", "Type theory", "Formal methods tools", "Mathematical software" ]
60,385,043
https://en.wikipedia.org/wiki/John%20Mitchell%20Watt
John Mitchell Watt (1 December 1892 – 23 April 1980) was a 20th-century South African physician and pharmacologist. He served in both World Wars. He made extensive catalogues of traditional African medicines. Life He was born in Port Elizabeth in Cape Colony on 1 December 1892. He was educated at the Grey Institute High School in Port Elizabeth. His family moved to Scotland and he completed his education at Stirling High School. He studied medicine at the University of Edinburgh graduating with an MB ChB in 1916. He then joined the Royal Army Medical Corps. In 1921 he became Professor of Pharmacology at University College, Johannesburg. In 1933 he was elected a Fellow of the Royal Society of Edinburgh. His proposers were John Phillips, Robert Burns Young, James Harvey Pirie and Sir William Wright Smith. In the Second World War he was in charge of medical supplies for the South African Defence Headquarters for the entire war. In 1957 he joined the South African Institute for Medical Research. In 1965 he moved back to Britain to teach at the Plymouth College of Technology. He went into semi-retirement in 1965, also moving to Australia. He was a part-time Demonstrator in Physiology at the University of Queensland. Rand Afrikaans University awarded him an honorary doctorate (LLD) for his academic writing in 1972. He died in Brisbane on 23 April 1980. Family He was married twice: firstly in 1920 to Yelena Nikonova, secondly in 1942 to Betty Gwendoline Lory. Publications Basuto Medicines (1927) Salanocapsine (1932) with H L Heimann The Medicinal and Poisonous Plants of Southern and Eastern Africa (1932, revised 1962) Practical Notes on Pharmacology (1940) References 1892 births 1980 deaths People from Gqeberha 20th-century South African physicians Alumni of the University of Edinburgh Pharmacologists Fellows of the Royal Society of South Africa Fellows of the Royal Society of Edinburgh Royal Army Medical Corps officers British Army personnel of World War I South African military personnel of World War II South African Army officers
John Mitchell Watt
[ "Chemistry" ]
415
[ "Pharmacology", "Biochemists", "Pharmacologists" ]
60,388,478
https://en.wikipedia.org/wiki/Unit%20system%20of%20machinery
The unit system of machinery was a method of arranging a ship's propulsion machinery into separate units that could each operate autonomously in case of damage to the ship. For a steamship, this would be a boiler room supplying steam to an engine room. There might also be a gearing room that housed the transmission that actually turned the propeller shaft(s). Ideally each "unit" should have an additional compartment between them to further reduce the risk. Many ships were able to provide steam via cross-connections from either boiler room to either engine room. The unit system was developed during World War I to help mitigate damage and flooding from damage inflicted by a weapon and to preserve a ship's mobility by physically separating the engines and boilers into at least two groups so that a single torpedo hit, for example, could not flood all of the boiler or engine rooms, disabling all of the ship's propulsion machinery. A single World War II torpedo hit would typically blow a hole in the hull and would compromise the integrity of adjacent watertight bulkheads over twice that length and further if the ship was of riveted construction rather than welded. This would usually flood two compartments and possibly three. The unit system invariably added length to accommodate the additional piping to cross-connect the engines and boilers and the widely separated boilers required two funnels which reduced the fields of fire of the ship's anti-aircraft guns and added topweight. There could be significant knock-on costs as well. For example, the second batch of the British Leander-class light cruisers of the 1930s (which were all eventually sold to Australia) were modified to use the unit system. This increased the length of the machinery spaces by and the waterline belt armor needed to protect the boilers increased by a length of . The extra weight of the armor required the beam to be increased by to preserve stability. All of these changes made the ships more expensive than their predecessors. Citations Bibliography Naval architecture
Unit system of machinery
[ "Engineering" ]
397
[ "Naval architecture", "Marine engineering" ]
60,389,003
https://en.wikipedia.org/wiki/RISE%20project
The RISE Project (Rivera Submersible Experiments) was a 1979 international marine research project which mapped and investigated seafloor spreading in the Pacific Ocean, at the crest of the East Pacific Rise (EPR) at 21° north latitude. Using a deep sea submersible (ALVIN) to search for hydrothermal activity at depths around 2600 meters, the project discovered a series of vents emitting dark mineral particles at extremely high temperatures which gave rise to the popular name, "black smokers". Biologic communities found at 21° N vents, based on chemosynthesis and similar to those found at the Galápagos spreading center, established that these communities are not unique. Discovery of a deep-sea ecosystem not based on sunlight spurred theories of the origin of life on Earth. Location The RISE expedition took place on the East Pacific Rise spreading center at depths around , at 21° north latitude about south of Baja California, and southwest of Mazatlán, Mexico. The study area at 21° N was selected following results from a series of detailed near-bottom geophysical surveys that were designed to map the geologic features associated with a known spreading center. Experiments The project objective was detecting and mapping the sub-seafloor magma chamber that feeds lavas and igneous intrusions that create the oceanic crust and lithosphere in the process of seafloor spreading. The approach comprised many geophysical techniques including seismology, magnetism, crustal electrical properties, and gravity. The major experiment effort though, was seafloor observation and sample collection using the deep submergence submersible ALVIN on the crest of the EPR at depths of 2600 meters or more. RISE was part of the RITA (Rivera-Tamayo expeditions) project, which included submersible investigations (CYAMEX) at 21° N and at the Tamayo Fracture zone at the mouth of the Gulf of California. The RITA project used the French submersible CYANA on the CYAMEX expeditions. CYANA dives at 21° N occurred in 1978, one year prior to the RISE expedition. Participants American, French, and Mexican biologists, geologists, and geophysicists participated in both the RISE and RITA expeditions. The RISE expedition was directed by scientists at the Scripps Institution of Oceanography, part of the University of California, San Diego. Project leaders were Fred Spiess and Ken Macdonald. Woods Hole Oceanographic Institution provided the ALVIN and its support tender the catamaran Lulu. Scripps provided surface survey vessels the Melville and New Horizon. The expedition took place during March to May 1979. The RITA Project was directed by French scientists and was led by Jean Francheteau. Findings The major finding of the RISE project was discovery of very hot hydrothermal fluids emanating from the sea floor from vents at separate locations along the crest of the rise. These were anticipated by the discovery during the CYAMEX expedition a year earlier of massive sulfide mineral deposits on the sea floor at 21°N, which were presumed to be due to hydrothermal activity, but which was not then observed. During RISE dives, the hot vents were found and were marked by mineralized chimneys, about a half-meter in diameter and one to a few meters high, composed of sulfide minerals of zinc, copper and iron. Emitting from the chimneys were black plumes or jets of fine particles of these minerals, giving rise to the popular name "black smokers". Temperatures measured of these jets were 380±30 °C. 
Several vents with lower-temperature emissions (<23 °C) were also found. These warm vents were similar to those discovered at the Galapagos Spreading Center a few years earlier. Hot vents and black smokers were not found at the Galapagos. Modeling of gravity data measured on the seafloor suggested that much of the upper ocean crust at 21°N was fractured and filled with warm water. Scientific impact Massive sulfide deposits have been mined on land in places including Cyprus, Oman and Australia. The discovery of massive sulfide deposits associated with vent fields at spreading centers provided a model for how these deposits formed. It also spurred commercial efforts to mine these deep sea deposits found elsewhere. Marine geologists were puzzled for years by conductive heat flow data from the seafloor that showed the measured values at spreading centers were too low for theoretical models of seafloor spreading. The convective crustal heat transfer computed for the first time from the vent plumes was estimated to be many times the observed conductive heat flow at a spreading center. These observations pointed to the importance of convective heat flow at spreading centers and provided an answer to the low heat flow problem. The discovery of biological communities at low temperature warm vents at 21°N, populated by a benthic community the same or similar to that discovered at the Galapagos spreading center, established that life forms found at the Galapagos were not unique. Further, the discovery at both the Galapagos site and 21°N of a chemosynthetic ecosystem that was not dependent on sunlight, existed at high pressures, and was based on chemicals emitted via volcanism provided a model for how life could have originated on Earth. See also Tanya Atwater Robert Ballard Jack Corliss Rachel Haymon Miriam Kastner Bruce P. Luyendyk Endeavor Hydrothermal Vents Magic Mountain (vents offshore British Columbia, Canada) Rivera Plate References Further reading External links Discovery narrative by WHOI for black smokers Oceanography Hydrothermal vents Pacific Ocean
RISE project
[ "Physics", "Environmental_science" ]
1,124
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
32,693,647
https://en.wikipedia.org/wiki/Flow%20Science%2C%20Inc.
Flow Science, Inc. is a developer of software for computational fluid dynamics, also known as CFD, a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. History The firm was founded by Dr. C. W. "Tony" Hirt, previously a scientist at Los Alamos National Laboratory (LANL). Hirt is known for having pioneered the volume of fluid method (VOF) for tracking and locating the free surface or fluid-fluid interface. Hirt left LANL and founded Flow Science in 1980 to develop CFD software for industrial and scientific applications using the VOF method. The company is located in Santa Fe, New Mexico. The company opened an office in Japan in June 2011, and an office in Germany in 2012. In December 2021 the holding company Dr. Flender Holding GmbH, of Aachen, Germany, acquired 100% of Flow Science Inc. shares. Products The company's products include FLOW-3D, a CFD software product for analyzing various physical flow processes; FLOW-3D CAST, a software product for metal casting users; FLOW-3D AM, a software product for simulating additive manufacturing and laser welding processes; FLOW-3D HYDRO, a software product for civil, environmental, and coastal engineers; FLOW-3D CLOUD, a cloud computing service installed on Penguin Computing On Demand (POD); FLOW-3D POST, a post-processing software built on ParaView; and FLOW-3D (x), an optimization and workflow automation software. There are high-performance computing (HPC) versions of both FLOW-3D and FLOW-3D CAST. FLOW-3D software uses a fractional areas/volumes approach called FAVOR for defining problem geometry, and a free-gridding technique for mesh generation. Desktop Engineering Magazine, in a review of FLOW-3D Version 10.0, said: “Key enhancements include fluid structure interaction (FSI) and thermal stress evolution (TSE) models that use a combination of conforming finite-element and structured finite-difference meshes. You use these to simulate and analyze the deformations of solid components as well as solidified fluid regions and resulting stresses in response to pressure forces and thermal gradients.” Key improvements of FLOW-3D Version 11.0 included increased meshing capabilities, solution sub-domains, an improved core gas model and an improved surface tension model. FLOW-3D v11.0 also included a new visualization tool, FlowSight. Key improvements of FLOW-3D Version 12.0 included a visual overhaul of the GUI, an immersed boundary method, a sludge settling model, a 2-fluid 2-temperature model, and a steady-state accelerator. Applications Blue Hill Hydraulics used FLOW-3D software to update the design of a fish ladder on Mt. Desert Island, Maine, that helps alewife migrate to the fresh water spawning habitat. AECOM Technology Corporation studied emergency overflows from the Powell Butte Reservoir and demonstrated that the existing energy dissipation structure was not capable of handling the maximum expected overflow per day. The FLOW-3D simulation demonstrated that the problem could be solved by increasing the height of the wing walls by exactly one foot. Researchers from the CAST Cooperative Research Centre and M. Murray Associates developed flow and thermal control methods for the high pressure die casting of thin-walled aluminum components with thicknesses of less than 1 mm. FLOW-3D simulation predicted the complex structure of the metal flow in the die and subsequent casting solidification.
Researchers at DuPont used FLOW-3D to optimize coating processes for a solution-coated active-matrix organic light-emitting diode (AMOLED) display technology. Eastman Kodak Company researchers rapidly developed an inkjet printer technology using FLOW-3D simulation technology to predict the performance of printhead designs. A research team composed of members from Auburn University, Lamar University and RJR Engineering used Flow Science’s TruVOF method as a virtual laboratory to evaluate the performance of highway pavement and drainage inlets with different geometries. Researchers at Albany Chicago LLC and the University of Wisconsin – Milwaukee used FLOW-3D in conjunction with a one-dimensional algorithm to analyze the slow-shot and fast-shot die casting processes in order to reduce the number of iterations required to achieve the desired process parameters. References Companies based in Santa Fe, New Mexico Computational fluid dynamics Software companies based in New Mexico Software companies of the United States
Flow Science, Inc.
[ "Physics", "Chemistry" ]
900
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
48,808,564
https://en.wikipedia.org/wiki/List%20of%20most-polluted%20cities%20by%20particulate%20matter%20concentration
This list contains the top 500 cities by PM2.5 annual mean concentration measurement as documented by the World Health Organization covering the period from 2010 to 2022. The January 2024 version of the WHO database contains results of ambient (outdoor) air pollution monitoring from almost 5,390 towns and cities in 63 countries. Air quality in the database is represented by the annual mean concentration of particulate matter (PM10 and PM2.5, i.e. particles smaller than 10 or 2.5 micrometers, respectively). See also Healthy city Zero-carbon city References Cities Cities Cities Cities Pollution Pollution by city Pollution Cities
List of most-polluted cities by particulate matter concentration
[ "Physics", "Chemistry", "Mathematics" ]
128
[ "Visibility", "Physical quantities", "Quantity", "Particulates", "Particle technology", "Wikipedia categories named after physical quantities" ]
48,808,889
https://en.wikipedia.org/wiki/Air%20quality%20guideline
The World Health Organization air quality guidelines were most recently updated in 2021. The guidelines offer guidance about these air pollutants: particulate matter (PM), ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2) and carbon monoxide (CO). The WHO first released the air quality guidelines in 1987, then updated them in 1997. The reports provide guidance intended to reduce the health effects of air pollution. The guidelines stipulate that PM2.5 should not exceed 5 μg/m3 annual mean, or 15 μg/m3 24-hour mean; and that PM10 should not exceed 15 μg/m3 annual mean, or 45 μg/m3 24-hour mean. For ozone (O3), the guidelines suggest values no higher than 100 μg/m3 for an 8-hour mean and 60 μg/m3 peak season mean. For nitrogen dioxide (NO2), the guidelines set 10 μg/m3 for the annual mean or 25 μg/m3 for a 24-hour mean. For sulfur dioxide (SO2), the guidelines stipulate concentrations not exceeding 40 μg/m3 24-hour mean. For carbon monoxide (CO), the guidelines stipulate concentrations not exceeding 4 mg/m3 24-hour mean. In terms of health effects, the guideline states that a PM2.5 concentration of 10 μg/m3 is the lowest level at which total, cardiopulmonary and lung cancer mortality have been shown to increase with more than 95% confidence in response to long-term exposure to PM2.5. The risk of cardiopulmonary and lung cancer deaths is strongly correlated with fine particulate matter and sulfur dioxide-related pollution. A 2002 study found that "Each 10 μg/m3 elevation in fine particulate air pollution was associated with approximately a 4%, 6% and 8% increased risk of all-cause, cardiopulmonary, and lung cancer mortality, respectively." A 2021 study found that outdoor air pollution is associated with substantially increased mortality "even at low pollution levels below the current European and North American standards and WHO guideline values". Shortly afterwards, on 22 September 2021, for the first time since 2005, the WHO, after a systematic review of the accumulated evidence, adjusted their air quality guidelines, adherence to which "could save millions of lives, protect against future diseases and help meet climate goals". On 4 April 2022 the WHO released their report based on the new guidelines. Pollutants for which new guidelines for the annual mean have been set are PM2.5, with a guideline value half the previous one, PM10, which is decreased by 25%, and nitrogen dioxide (NO2), which is four times lower than the previous guideline. See also Air pollution List of most-polluted cities by particulate matter concentration References WHO - WHO Air quality guidelines for particulate matter, ozone, nitrogen dioxide and sulfur dioxide - Page 11 Particulates Pollutants Visibility Air pollution Environmental policy World Health Organization
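The 2021 guideline levels quoted above can be collected into a small lookup table with a compliance check; the sketch below is purely illustrative and encodes only the values stated in the text.

```python
# 2021 WHO air quality guideline levels as quoted in the text. Units are
# micrograms per cubic metre except CO, which is mg/m3.
WHO_2021_GUIDELINES = {
    ("PM2.5", "annual"): 5,    ("PM2.5", "24h"): 15,
    ("PM10",  "annual"): 15,   ("PM10",  "24h"): 45,
    ("O3",    "8h"): 100,      ("O3",    "peak season"): 60,
    ("NO2",   "annual"): 10,   ("NO2",   "24h"): 25,
    ("SO2",   "24h"): 40,
    ("CO",    "24h"): 4,       # mg/m3
}

def exceeds_guideline(pollutant, averaging_period, measured):
    """Return True if a measured concentration exceeds the guideline level."""
    return measured > WHO_2021_GUIDELINES[(pollutant, averaging_period)]

print(exceeds_guideline("PM2.5", "annual", 12.0))   # True: above the 5 ug/m3 guideline
```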
Air quality guideline
[ "Physics", "Chemistry", "Mathematics" ]
631
[ "Visibility", "Physical quantities", "Quantity", "Particulates", "Particle technology", "Wikipedia categories named after physical quantities" ]
48,808,954
https://en.wikipedia.org/wiki/Normalized%20difference%20water%20index
Normalized Difference Water Index (NDWI) may refer to one of at least two remote sensing-derived indexes related to liquid water: One is used to monitor changes in water content of leaves, using near-infrared (NIR) and short-wave infrared (SWIR) wavelengths, proposed by Gao in 1996: NDWI = (NIR − SWIR) / (NIR + SWIR). Another is used to monitor changes related to water content in water bodies, using green and NIR wavelengths, defined by McFeeters (1996): NDWI = (Green − NIR) / (Green + NIR). Overview In remote sensing, ratio image or spectral ratioing are enhancement techniques in which a raster pixel from one spectral band is divided by the corresponding value in another band. Both the indexes above share this same functional form; the choice of bands used is what makes them appropriate for a specific purpose. If looking to monitor vegetation in drought-affected areas, then it is advisable to use the NDWI index proposed by Gao utilizing NIR and SWIR. The SWIR reflectance in this index reflects changes in both the vegetation water content and the spongy mesophyll structure in vegetation canopies. The NIR reflectance is affected by leaf internal structure and leaf dry matter content, but not by water content. The combination of the NIR with the SWIR removes variations induced by leaf internal structure and leaf dry matter content, improving the accuracy in retrieving the vegetation water content. The NDWI concept as formulated by Gao, combining reflectance of NIR and SWIR, is more common and has a wider range of application. It can be used for exploring water content at the single leaf level as well as at the canopy/satellite level. The range of application of NDWI (Gao, 1996) spreads from agricultural monitoring for crop irrigation and pasture management to forest monitoring for assessing fire risk and live fuel moisture, particularly relevant in the context of climate change. Different SWIR bands can be used to characterize the water absorption in a generalized form of the NDWI defined above. Two major water absorption features in the SWIR spectral region are centered near 1450 nm and 1950 nm, while two minor absorption features are centered near 970 and 1200 nm in a living vegetation spectrum. Sentinel-2 MSI has two spectral bands in the SWIR region: band 11 (central wavelength 1610 nm) and band 12 (central wavelength 2200 nm). The spectral band in the NIR region with a similar 20 m ground resolution is band 8A (central wavelength 865 nm). Sentinel-2 NDWI for agricultural monitoring of drought and irrigation management can be constructed using either combination: band 8A (864 nm) and band 11 (1610 nm), or band 8A (864 nm) and band 12 (2200 nm). Both formulations are suitable. Sentinel-2 NDWI for waterbody detection, the McFeeters index, can be constructed by using the "Green" Band 3 (559 nm) and "NIR" Band 8A (864 nm). If looking for water bodies or change in water level (e.g. flooding), then it is advisable to use the green and NIR spectral bands or the green and SWIR spectral bands. A modification of the normalised difference water index (MNDWI) has been suggested for improved detection of open water by replacing the NIR spectral band with SWIR. Interpretation Visual or digital interpretation of the output image/raster created is similar to NDVI: -1 to 0 - bright surface with no vegetation or water content; +1 - represents water content. For the second variant of the NDWI, another threshold can also be found that avoids creating false alarms in urban areas: < 0.3 - non-water; >= 0.3 - water.
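A minimal sketch of computing both indexes from reflectance arrays, for example Sentinel-2 bands resampled to a common grid. The arrays and values below are placeholders; the band-to-index mapping follows the text above.

```python
# Compute the Gao (1996) and McFeeters (1996) NDWI variants from reflectance
# arrays, and apply the >= 0.3 water threshold mentioned above.
import numpy as np

def normalized_difference(a, b):
    a = a.astype("float64")
    b = b.astype("float64")
    return (a - b) / (a + b + 1e-12)          # small epsilon avoids division by zero

def ndwi_gao(nir, swir):
    """Vegetation water content: NIR vs SWIR (e.g. Sentinel-2 B8A and B11)."""
    return normalized_difference(nir, swir)

def ndwi_mcfeeters(green, nir):
    """Open water detection: green vs NIR (e.g. Sentinel-2 B3 and B8A)."""
    return normalized_difference(green, nir)

green = np.array([[0.08, 0.10], [0.09, 0.30]])    # placeholder reflectances
nir   = np.array([[0.30, 0.28], [0.32, 0.05]])
water_mask = ndwi_mcfeeters(green, nir) >= 0.3    # threshold suggested in the text
print(water_mask)
```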
External links https://edo.jrc.ec.europa.eu/documents/factsheets/factsheet_ndwi.pdf (NDWI for crop monitoring: index by Gao, 1996) https://developers.google.com/earth-engine/datasets/catalog/MODIS_MYD09GA_006_NDWI (MODIS NDWI calculation) https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_32DAY_NDWI (Landsat NDWI calculation) http://deltas.usgs.gov/fm/data/data_ndwi.aspx (regarding the McFeeters index for water bodies) http://space4water.org/taxonomy/term/1246 (Modification of the McFeeters index for improved detection of water bodies) References Measurement Infrared spectroscopy Remote sensing
Normalized difference water index
[ "Physics", "Chemistry", "Mathematics", "Environmental_science" ]
961
[ "Hydrology", "Spectrum (physical sciences)", "Physical quantities", "Quantity", "Measurement", "Size", "Infrared spectroscopy", "Water", "Spectroscopy" ]
52,670,479
https://en.wikipedia.org/wiki/Uehling%20potential
In quantum electrodynamics, the Uehling potential describes the interaction potential between two electric charges which, in addition to the classical Coulomb potential, contains an extra term responsible for the electric polarization of the vacuum. This potential was found by Edwin Albrecht Uehling in 1935. Uehling's corrections take into account that the electromagnetic field of a point charge does not act instantaneously at a distance, but rather is an interaction that takes place via exchange particles, the photons. In quantum field theory, due to the uncertainty principle between energy and time, a single photon can briefly form a virtual particle-antiparticle pair that influences the point charge. This effect is called vacuum polarization, because it makes the vacuum appear like a polarizable medium. By far the dominant contribution comes from the lightest charged elementary particle, the electron. The corrections by Uehling are negligible in everyday practice, but they allow the spectral lines of hydrogen-like atoms to be calculated with high precision. Definition The Uehling potential is given by (in natural units, ℏ = c = 1) V(r) = (q1·q2/4πr)·[1 + (2α/3π) ∫₁^∞ e^(−2mrx)·(√(x²−1)/x²)·(1 + 1/(2x²)) dx], from where it is apparent that this potential is a refinement of the classical Coulomb potential. Here m is the electron mass, q1 and q2 are the two charges, and α is the fine structure constant, expressed through the elementary charge measured at large distances. If mr ≫ 1, this potential simplifies to V(r) ≈ (q1·q2/4πr)·[1 + (α/4√π)·e^(−2mr)/(mr)^(3/2)], while for mr ≪ 1 we have V(r) ≈ (q1·q2/4πr)·[1 + (2α/3π)·(ln(1/(mr)) − γ − 5/6)], where γ is the Euler–Mascheroni constant (0.57721...). Properties It was recently demonstrated that the above integral in the expression for the Uehling potential can be evaluated in closed form by using the modified Bessel functions of the second kind and their successive integrals. Effect on atomic spectra Since the Uehling potential only makes a significant contribution at small distances close to the nucleus, it mainly influences the energy of the s orbitals. Quantum mechanical perturbation theory can be used to calculate this influence in the atomic spectrum of atoms. The quantum electrodynamics corrections for the degenerate energy levels of the hydrogen atom are given, up to leading order in the fine structure constant α, by ΔE(n, l=0) = −4α⁵mc²/(15πn³) ≈ −9×10⁻⁷ eV/n³. Here eV stands for electronvolts. Since the wave function of the s orbitals does not vanish at the origin, the corrections provided by the Uehling potential are of the order of α⁵mc² (where α is the fine structure constant) and they become less important for orbitals with a higher azimuthal quantum number. This energy splitting in the spectra is about ten times smaller than the fine structure corrections provided by the Dirac equation, and this splitting is known as the Lamb shift (which includes the Uehling potential and additional higher corrections from quantum electrodynamics). The Uehling effect is also central to muonic hydrogen, as most of the energy shift is due to vacuum polarization. In contrast to other quantities such as the splitting through the fine structure, which scale with the mass of the muon, i.e. by a factor of about 207, the light electron mass continues to set the decisive length scale for the Uehling potential. The energy corrections there are on the order of tenths of an electronvolt. See also QED vacuum Virtual particles Anomalous magnetic dipole moment Schwinger limit Schwinger effect Euler–Heisenberg Lagrangian References Further reading More on the vacuum polarization in QED, Quantum electrodynamics Quantum mechanical potentials Quantum field theory
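The relative size of the vacuum-polarization term can be evaluated numerically. The sketch below uses the standard textbook form of the integral given above, with distances measured in units of the reduced electron Compton wavelength (1/m); the printed values are purely illustrative.

```python
# Numerical sketch of the fractional vacuum-polarization correction to the
# Coulomb potential, evaluating the Uehling integral in natural units
# (hbar = c = 1, electron mass m = 1, so r is in reduced Compton wavelengths).
import numpy as np
from scipy.integrate import quad

ALPHA = 1.0 / 137.035999   # fine structure constant

def uehling_correction(r):
    """Relative correction to the Coulomb potential at separation r (units of 1/m)."""
    integrand = lambda x: (np.exp(-2.0 * r * x)
                           * np.sqrt(x * x - 1.0) / x**2
                           * (1.0 + 1.0 / (2.0 * x * x)))
    value, _ = quad(integrand, 1.0, np.inf, limit=200)
    return (2.0 * ALPHA / (3.0 * np.pi)) * value

for r in (0.01, 0.1, 1.0, 10.0):
    print(f"r = {r:5.2f} -> relative correction = {uehling_correction(r):.3e}")
```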
Uehling potential
[ "Physics" ]
668
[ "Quantum mechanical potentials", "Quantum field theory", "Quantum mechanics" ]
52,673,746
https://en.wikipedia.org/wiki/C20H34
The molecular formula C20H34 (molar mass: 274.49 g/mol, exact mass: 274.2661 u) may refer to: 19-Norpregnane, or 13β-methyl-17β-ethylgonane Phyllocladane Tigliane Molecular formulas
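For reference, the quoted average molar mass follows directly from standard atomic weights; a one-line check (an illustrative aside, not from the article, using rounded IUPAC values):

```python
# Check the molar mass quoted above from standard (rounded) atomic weights.
C, H = 12.011, 1.008  # g/mol for carbon and hydrogen
print(f"C20H34: {20 * C + 34 * H:.2f} g/mol")  # -> 274.49
```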
C20H34
[ "Physics", "Chemistry" ]
78
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
50,265,507
https://en.wikipedia.org/wiki/Heat%20transfer%20through%20fins
Fins are extensions on exterior surfaces of objects that increase the rate of heat transfer to or from the object by increasing convection. This is achieved by increasing the surface area of the body, which in turn increases the total heat transfer rate. This is an efficient way of increasing the rate, since the alternatives are to increase either the heat transfer coefficient (which depends on the nature of the materials being used and the conditions of use) or the temperature gradient (which depends on the conditions of use); changing the shape of the body is usually more convenient. Fins are therefore a very popular solution for increasing the heat transfer from surfaces and are widely used in a number of objects. The fin material should preferably have high thermal conductivity. In most applications the fin is surrounded by a fluid in motion, which heats or cools it quickly due to the large surface area, and subsequently the heat gets transferred to or from the body quickly due to the high thermal conductivity of the fin. In order to design a fin for optimal heat transfer performance with minimal cost, the dimensions and shape of the fin have to be calculated for specific applications. A common way of doing so is by creating a model of the fin and then simulating it under the required service conditions. Modeling Consider a body with fins on its outer surface, with air flowing around it. The heat transfer rate depends on: the shape and geometry of the external surface; the surface area of the body; the velocity of the wind (or of any other fluid); and the temperature of the surroundings. Modelling of the fins in this case involves experimenting on this physical model and optimizing the number of fins and the fin pitch for maximum performance. Experimentally obtained correlations express the fin surface heat transfer coefficient k [W/m2K] as a function of the fin length a [mm], the fin pitch θ [mm] and the wind velocity v [km/h]: one such correlation applies at low wind velocities; another, obtained from experiments conducted by Gibson, applies at high fluid velocities; and a third, more accurate correlation gives the average fin surface heat transfer coefficient k(avg) in terms of the fin pitch and the wind velocity. All these correlations can be used to evaluate the average heat transfer coefficient for various fin designs. Design The momentum conservation equation for this case is $\rho\left(\frac{\partial \mathbf{v}}{\partial t}+(\mathbf{v}\cdot\nabla)\mathbf{v}\right) = -\nabla p + \mu\nabla^2\mathbf{v}$. This is used in combination with the continuity equation. The energy equation is also needed, which is $\rho c_p\left(\frac{\partial T}{\partial t}+\mathbf{v}\cdot\nabla T\right) = \nabla\cdot(k\,\nabla T) + q$. The above equation, on solving, gives the temperature profile for the fluid region. When solved as a scalar equation in the solid, it can be used to calculate the temperatures at the fin and cylinder surfaces, reducing to $\nabla\cdot(k\,\nabla T) + q = 0$, where q = internal heat generation = 0 (in this case), k here denotes the thermal conductivity, and $\partial T/\partial t = 0$ due to the steady state assumption. These flow and energy equations can be set up and solved in any simulation software, e.g. Fluent. In order to do so, all parameters of the flow and thermal conditions, such as the fluid velocity and the temperature of the body, have to be specified according to the requirements, along with the boundary conditions and any assumptions. This yields velocity profiles and temperature profiles for the various surfaces, and this knowledge can be used to design the fin. References Unit operations Transport phenomena Heat transfer
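Before committing to a full CFD simulation, a first estimate is often obtained from the classical one-dimensional fin model. The sketch below is an illustration under assumed material and flow values (it does not use the experimental correlations mentioned above) and evaluates the standard adiabatic-tip straight-fin temperature profile, heat rate and efficiency:

```python
import numpy as np

# Classical 1-D straight fin with adiabatic tip. All numerical values below
# are illustrative assumptions, not data from the article.
h = 25.0      # convective heat transfer coefficient [W/m^2 K] (assumed)
k = 200.0     # fin thermal conductivity [W/m K] (aluminium, assumed)
L = 0.05      # fin length [m]
t = 0.002     # fin thickness [m]
w = 0.10      # fin width [m]

P = 2.0 * (w + t)              # perimeter of the fin cross-section [m]
A = w * t                      # cross-sectional area [m^2]
m = np.sqrt(h * P / (k * A))   # fin parameter [1/m]

T_base, T_inf = 380.0, 300.0   # base and ambient temperatures [K]

x = np.linspace(0.0, L, 6)
theta = np.cosh(m * (L - x)) / np.cosh(m * L)       # (T - T_inf)/(T_base - T_inf)
q_fin = np.sqrt(h * P * k * A) * (T_base - T_inf) * np.tanh(m * L)  # heat rate [W]
eta = np.tanh(m * L) / (m * L)                      # fin efficiency

print("T(x) [K]:", np.round(T_inf + theta * (T_base - T_inf), 1))
print(f"q_fin = {q_fin:.1f} W, efficiency = {eta:.2f}")
```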
Heat transfer through fins
[ "Physics", "Chemistry", "Engineering" ]
713
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Unit operations", "Chemical engineering", "Thermodynamics", "Chemical process engineering" ]
50,270,017
https://en.wikipedia.org/wiki/Von%20Baer%27s%20laws%20%28embryology%29
In developmental biology, von Baer's laws of embryology (or laws of development) are four rules proposed by Karl Ernst von Baer to explain the observed pattern of embryonic development in different species. Von Baer formulated the laws in his book On the Developmental History of Animals (Über Entwickelungsgeschichte der Thiere), published in 1828, while working at the University of Königsberg. He specifically intended to rebut Johann Friedrich Meckel's 1808 recapitulation theory. According to that theory, embryos pass through successive stages that represent the adult forms of less complex organisms in the course of development, ultimately reflecting the scala naturae (the great chain of being). Von Baer believed that such linear development is impossible. He posited that instead of linear progression, embryos start from one or a few basic forms that are similar in different animals, and then develop in a branching pattern into increasingly different organisms. Defending his ideas, he also opposed Charles Darwin's 1859 theory of common ancestry and descent with modification, and particularly Ernst Haeckel's revised recapitulation theory with its slogan "ontogeny recapitulates phylogeny". Darwin was however broadly supportive of von Baer's view of the relationship between embryology and evolution. The laws Von Baer described his laws in his book Über Entwickelungsgeschichte der Thiere. Beobachtung und Reflexion, published in 1828. They are a series of statements generally summarised into four points, as translated by Thomas Henry Huxley in his Scientific Memoirs: The more general characters of a large group appear earlier in the embryo than the more special characters. From the most general forms the less general are developed, and so on, until finally the most special arises. Every embryo of a given animal form, instead of passing through the other forms, rather becomes separated from them. The embryo of a higher form never resembles any other form, but only its embryo. Description Von Baer discovered the blastula (the early hollow ball stage of an embryo) and the development of the notochord (the stiffening rod along the back of all chordates, which forms after the blastula and gastrula stages). From his observations of these stages in different vertebrates, he realised that Johann Friedrich Meckel's recapitulation theory must be wrong. For example, he noticed that the yolk sac is found in birds, but not in frogs. According to the recapitulation theory, such structures should invariably be present in frogs because they were assumed to be at a lower level in the evolutionary tree. Von Baer concluded that while structures like the notochord are recapitulated during embryogenesis, whole organisms are not. In terms of taxonomic hierarchy, according to von Baer, characters in the embryo are formed in top-to-bottom sequence: first those of the largest and oldest taxon, the phylum, then in turn class, order, family, genus, and finally species. Reception The laws received a mixed appreciation. While they were criticised in detail, they formed the foundation of modern embryology. Charles Darwin The most important supporter of von Baer's laws was Charles Darwin. Darwin came across von Baer's laws through the work of Johannes Peter Müller in 1842, and realised that they supported his own theory of descent with modification.
Darwin was a critic of the recapitulation theory and agreed with von Baer that an adult animal is not reflected by an embryo of another animal, and that only embryos of different animals appear similar. He wrote in On the Origin of Species (first edition, 1859): It has already been casually remarked that certain organs in the individual, which when mature become widely different and serve for different purposes, are in the embryo exactly alike. The embryos, also, of distinct animals within the same class are often strikingly similar: a better proof of this cannot be given, than a circumstance mentioned by Agassiz, namely, that having forgotten to ticket the embryo of some vertebrate animal, he cannot now tell whether it be that of a mammal, bird, or reptile. Darwin's attribution to Louis Agassiz was a mistake, and was corrected in the third edition to von Baer. He further explained in the later editions of Origin of Species (from the third to the sixth editions), writing: It might be thought that the amount of change which the various parts and organs [of vertebrates] undergo in their development from the embryo to maturity would suffice as a standard of comparison; but there are cases, as with certain parasitic crustaceans, in which several parts of the structure become less perfect, so that the mature animal cannot be called higher than its larva. Von Baer's standard seems the most widely applicable and the best, namely, the amount of differentiation of the different parts (in the adult state, as I should be inclined to add) and their specialisation for different functions. Even so, von Baer was a vociferous anti-Darwinist, although he believed in the common ancestry of species. Devoting much of his scholarly effort to criticising natural selection, his criticism culminated with his last work Über Darwins Lehre ("On Darwin's Doctrine"), published in the year of his death in 1876. Later biologists The British zoologist Adam Sedgwick studied the developing embryos of dogfish and chicken, and in 1894 noted a series of differences, such as the green yolk in the dogfish and yellow yolk in the chicken, the absence of an embryonic rim in chick embryos, the absence of a blastopore in dogfish, and differences in the gill slits and gill clefts, concluding that embryos of different species are distinguishable throughout development. Modern biologists still debate the validity of the laws. In one line of argument, it is said that although every detail of von Baer's laws may not hold, the basic assumption that early developmental stages of animals are highly conserved is a biological fact. An opposing view says that there are conserved genetic conditions in embryos, but not conserved genetic events governing development. One example of a problem with von Baer's laws is the formation of the notochord before the heart: the heart is present in many invertebrates, which never have a notochord. See also Evolutionary developmental biology References Biological rules Biology theories Evolutionary biology Animal developmental biology 1828 in science
Von Baer's laws (embryology)
[ "Biology" ]
1,333
[ "Evolutionary biology", "Biological rules", "Biology theories", "nan" ]
50,274,456
https://en.wikipedia.org/wiki/HUMARA%20assay
HUMARA assay is one of the most widely used methods to determine the clonal origin of a tumor. The method is based on X chromosome inactivation and takes advantage of the differing methylation status of the HUMARA gene (short for human androgen receptor) located on the X chromosome. Because, once one X chromosome is inactivated in a cell, all cells derived from it will have the same X chromosome inactivated, this approach becomes a tool to differentiate a monoclonal population from a polyclonal one in a female tissue. The HUMARA gene, in particular, has three important features that make it highly convenient for the purpose: The gene is located on the X chromosome and undergoes inactivation by methylation in the normal embryogenesis of a female infant. Because most genes on the X chromosome undergo inactivation, this feature is important. Human androgen receptor gene alleles have varying numbers of CAG repeats. Thus, when DNA from a healthy female tissue is amplified by polymerase chain reaction (PCR) for a specific region of the gene, two separate bands can be seen on the gel. The region that is amplified by PCR also contains recognition sites that make it susceptible to digestion by the HpaII (or HhaI) enzyme when it is not methylated. This allows researchers to differentiate a methylated allele from an unmethylated allele. Due to these qualities of the HUMARA gene, the clonal origin of any tissue from a female mammalian organism can be determined. Process The basic process is as follows: DNA from the tissue is isolated. The isolated DNA is treated with the suitable enzyme (such as HpaII) under optimal conditions for a suggested amount of time (e.g. overnight). The DNA is cleaned and the selected region of the HUMARA gene is amplified by PCR using suitable primers (see References for an example). After running the PCR products through a gel, the gel is visualized and the results are analyzed accordingly. Interpretation If two bands are apparent, the tissue studied is most likely of polyclonal origin. If a single band is observed, the tissue is monoclonal, unless the two alleles have exactly the same number of CAG repeats, or the tumor was initiated by different cells that happen to have the same X chromosome inactivated, in which case the tissue appears monoclonal although it is actually polyclonal. In order to draw a conclusion about the clonality of a tumor, DNA from a normal tissue of the same person is taken, and a sample without enzyme treatment is amplified as a control. If a single band is observed even in normal tissue without enzyme treatment, it may be explained as follows: the person may have the genetic pattern XO (this possibility can be excluded if a single band is observed after enzyme treatment, because if XO were indeed the genetic pattern of the sample, then there would be no methylation, and therefore no band should be visible after digesting with the enzyme; if a band is observed after enzyme treatment, the person most likely has two X chromosomes with exactly the same CAG repeats). When two bands appear for normal tissue (both enzyme-treated and untreated), and two bands are observed for both the enzyme-treated tumor sample and the untreated tumor DNA, the tumor is polyclonal. However, if the untreated tumor sample shows two bands but only a single band remains after enzyme treatment, there is a high chance that the tumor is monoclonal, though this is not certain, as it is possible for both alleles to have exactly the same CAG repeats. References Carcinogenesis Molecular genetics
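The interpretation rules above can be summarized as a simple decision table. The following Python sketch (an illustrative aid, not part of any published protocol) encodes them; the band counts are hypothetical inputs, and the caveats about identical CAG repeats still apply:

```python
# Illustrative decision logic for reading a HUMARA gel, following the
# interpretation rules described above. Inputs are numbers of bands seen
# for each sample/treatment combination.

def interpret_humara(normal_untreated: int, normal_treated: int,
                     tumor_untreated: int, tumor_treated: int) -> str:
    if normal_untreated == 1:
        # Single band even without HpaII digestion of normal tissue.
        if normal_treated == 0:
            return ("Likely XO genotype (no methylated allele survives "
                    "digestion); assay uninformative")
        return "Two alleles with identical CAG repeats; assay uninformative"
    if tumor_untreated == 2 and tumor_treated == 2:
        return "Tumor is polyclonal"
    if tumor_untreated == 2 and tumor_treated == 1:
        return "Tumor is most likely monoclonal (identical-repeat caveat applies)"
    return "Pattern not covered by these simple rules; review the gel"

print(interpret_humara(2, 2, 2, 1))  # -> most likely monoclonal
```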
HUMARA assay
[ "Chemistry", "Biology" ]
743
[ "Molecular genetics", "Molecular biology" ]
50,275,381
https://en.wikipedia.org/wiki/Provirus%20silencing
Provirus silencing, or proviral silencing, is the repression of expression of proviral genes in cells. A provirus is a viral DNA that has been incorporated into the chromosome of a host cell, often by retroviruses such as HIV. Endogenous retroviruses are always in the provirus state in the host cell and replicate through reverse transcription. By integrating their genome into the host cell genome, they make use of the host cell's transcription and translation mechanisms to achieve their own propagation, often with a harmful impact on the host. However, in recent gene therapy techniques, retroviruses are often used to deliver desired genes, instead of their own viral genome, into the host genome. As such, researchers are interested in the host cell's mechanisms for silencing such gene expression, to find out, firstly, how the host cell manages provirus transcription to eliminate the deleterious effects of retroviruses, and secondly, how researchers can ensure stable and long-term expression of retrovirus-mediated gene transfer. Mechanisms and Pathways It has been found that the level of transcription of integrated retroviruses depends on both genetic factors and chromatin remodeling at the site of integration. Mechanisms such as DNA methylation and histone modification appear to play important roles in the suppression of provirus transcription, such that proviral activity can be silenced. The location of integration also plays a crucial role in the level of silencing that is observed; for example, integration into H3K4me3 regions (areas of the genome wrapped around histone H3 proteins that are tri-methylated at the fourth lysine residue, a mark of transcriptionally active chromatin) influences the degree of silencing observed. It has been reported that the manipulation or insertion of CpG dinucleotide islands can lead to the disruption of proviral silencing. Silencing frequently begins with the binding of a zinc finger DNA-binding protein to the primer-binding sequence, targeting the expression of the provirus itself rather than attempting to remove the sequence. The protein then proceeds to recruit other enzymes that complete the silencing through DNA or histone methylation. However, studies within the field note that the patterns are species-specific with regard to the virus in question, so caution should be taken when attempting to generalize to all cases. Additionally, many studies focus on proviral silencing within murine embryonic cells as opposed to human cells. Some researchers also posit that proviral silencing may be more complex than a simple question of whether the virus is repressed or not. They suggest that proviruses played more of a role in transcriptional regulation as they integrated and evolved with the host sequence over time, occasionally serving as promoters or enhancers. Challenges with Effective Silencing It has been shown that the orientation of proviruses can have dramatic effects on their expression. With regard to HIV-1, the viral genome is frequently inserted into the introns of active genes. Perhaps unsurprisingly, when the viral genome is oriented in the same direction as the host gene, expression is increased. The converse is also true, with proviruses oriented in the opposite direction of the gene showing reduced expression. This produces challenges for effective therapeutics because it can lead to large variations in detectability, and to struggles for physicians who are attempting to maintain HIV latency.
HIV reservoirs, or cells that are infected with HIV but not actively producing viral particles, additionally contribute to this problem. CD4+ T cells are considered to be the main reservoir and are reported to have a half-life of over three years. Because these cells temporarily silence the expression of HIV, the condition is essentially impossible to eradicate. Additionally, DNA methylation has been linked to aging and geriatric disease. Increases in DNA methylation have been linked to diseases including various types of cancer, Alzheimer's disease, type 2 diabetes, and cardiovascular disease. From a proviral silencing standpoint this makes logical sense, as individuals naturally accumulate more proviruses over their lifetimes. It also poses a concern: as groups research the utility of DNA methylation clocks for predicting age, there is a risk that treatments which alter DNA methylation with the goal of reducing biological age inadvertently increase proviral expression in patients. It must be emphasized, however, that most of the work in this field is correlational rather than causal. Managing Proviral Silencing in a Gene Therapy Context The expression of transgenes is often hindered by the mechanisms associated with proviral silencing. This naturally proves to be an issue when attempting to create longer-lasting gene therapies or transgenic cell lines. Most methods center around choosing a specific locus of integration. Recently, researchers have demonstrated that targeted integration of a lentiviral payload using homology-directed repair can result in stable integration and expression. In this approach, CRISPR-associated ribonucleoprotein complexes (CRISPR RNP complexes) are used to create double-stranded breaks upstream of an endogenously promoted essential gene. The payload is designed so that it contains the transgene flanked by two regions of DNA that are homologous (identical) to the regions around the cut site upstream of the gene, enabling it to integrate in the same reading frame as the gene; a toy sketch of this donor layout is given below. This approach is similar to other strategies that seek to integrate into areas that are less susceptible to silencing through more mechanistic methods. References Genetic engineering Viral genes
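The donor design described in the last paragraph can be pictured with a minimal sketch. The following Python fragment merely assembles the described layout of homology arms around an in-frame payload; all sequences and the frame check are invented placeholders, not a protocol:

```python
# Toy sketch of an HDR donor layout: transgene flanked by arms matching the
# genomic region around the cut site upstream of an endogenously promoted
# essential gene. Sequences below are invented placeholders.

LEFT_ARM = "ATGGCTAGCTGA"    # would match the genome left of the Cas9 cut site
RIGHT_ARM = "CCGGTTAACGTT"   # would match the genome right of the cut site
PAYLOAD = "ATGGTGAGCAAG"     # transgene; length kept a multiple of 3

def build_hdr_donor(left: str, payload: str, right: str) -> str:
    """Concatenate homology arms around the payload, checking the reading frame."""
    if len(payload) % 3 != 0:
        raise ValueError("payload would shift the downstream reading frame")
    return left + payload + right

print(build_hdr_donor(LEFT_ARM, PAYLOAD, RIGHT_ARM))
```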
Provirus silencing
[ "Chemistry", "Engineering", "Biology" ]
1,141
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
61,525,719
https://en.wikipedia.org/wiki/Control%20%28optimal%20control%20theory%29
In optimal control theory, a control is a variable chosen by the controller or agent to manipulate state variables, similar to an actual control valve. Unlike the state variable, it does not have a predetermined equation of motion. The goal of optimal control theory is to find some sequence of controls (within an admissible set) that achieves an optimal path for the state variables (with respect to a loss function). A control given as a function of time only is referred to as an open-loop control. In contrast, a control that gives the optimal solution over the remaining period as a function of the state variable at the beginning of that period is called a closed-loop control. See also Control loop References Control loop theory
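The distinction between state and control variables can be made concrete with the standard finite-horizon problem statement below (a generic textbook formulation, not taken from this article); an open-loop control is a function $u(t)$ of time alone, while a closed-loop control has the feedback form $u(x(t), t)$:

```latex
% A standard finite-horizon optimal control problem: choose the control u
% to minimize a loss functional subject to the state's equation of motion.
\begin{aligned}
  \min_{u(\cdot)} \quad & J = \Phi\bigl(x(T)\bigr) + \int_0^T L\bigl(x(t),u(t),t\bigr)\,dt \\
  \text{s.t.} \quad & \dot{x}(t) = f\bigl(x(t),u(t),t\bigr), \qquad x(0)=x_0, \\
  & u(t) \in U \quad \text{(the admissible set)}.
\end{aligned}
```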
Control (optimal control theory)
[ "Mathematics" ]
147
[ "Applied mathematics", "Applied mathematics stubs" ]
61,528,341
https://en.wikipedia.org/wiki/C17H9NO3
The molecular formula C17H9NO3 (molar mass: 275.258 g/mol) may refer to: Liriodenine 3-Nitrobenzanthrone (3-nitro-7H-benz[de]anthracen-7-one) Molecular formulas
C17H9NO3
[ "Physics", "Chemistry" ]
77
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,529,657
https://en.wikipedia.org/wiki/Cyclo%2818%29carbon
Cyclooctadeca-1,3,5,7,9,11,13,15,17-nonayne or cyclo[18]carbon is an allotrope of carbon with molecular formula C18. The molecule is a ring of eighteen carbon atoms, connected by alternating triple and single bonds; thus, it is a polyyne and a cyclocarbon. Cyclo[18]carbon is the smallest cyclo[n]carbon predicted to be thermodynamically stable, with a computed strain energy of 72 kilocalories per mole. Above 122 K, it explosively decomposes to amorphous graphite. A collaboration between teams at IBM and the University of Oxford reported its synthesis in the solid state in 2019, by decarbonylation of several masked sites of a cyclocarbon oxide precursor containing cyclobutanone-like units. According to the IBM researchers, the electronic structure of their product consists of alternating triple bonds and single bonds, rather than a cumulene-type structure of consecutive double bonds; this bond alternation would make the molecule a semiconductor. Later, researchers from Spain used computational techniques to probe the structural and electronic properties of the molecule, and found it to be an electron acceptor. References Aromatic compounds Group IV semiconductors Polyynes Cyclocarbons Substances discovered in the 2010s Cycloalkynes
Cyclo(18)carbon
[ "Chemistry" ]
286
[ "Organic compounds", "Aromatic compounds", "Semiconductor materials", "Group IV semiconductors" ]
61,534,334
https://en.wikipedia.org/wiki/Vibrational%20Spectroscopy
Vibrational Spectroscopy is a bi-monthly peer-reviewed scientific journal covering all aspects of Raman spectroscopy, infrared spectroscopy and near-infrared spectroscopy. Publication began in December 1990 under the original editors Jeanette G. Grasselli and John van der Maas. The current editor-in-chief is Keith C. Gordon. In addition to research articles and communications, the journal also publishes review articles. Abstracting and indexing According to the Journal Citation Reports, the journal has a 2018 impact factor of 1.861. See also Journal of Raman Spectroscopy References Raman spectroscopy Infrared spectroscopy English-language journals Academic journals established in 1990 Spectroscopy journals
Vibrational Spectroscopy
[ "Physics", "Chemistry", "Astronomy" ]
140
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Astronomy stubs", "Infrared spectroscopy", "Spectroscopy journals", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
55,578,085
https://en.wikipedia.org/wiki/Stem%20Cell%20Reports
Stem Cell Reports is a monthly peer-reviewed open access journal covering research into stem cells. It was established in 2013 and is published exclusively online by Cell Press. It is the official journal of the International Society for Stem Cell Research. The editor-in-chief is Martin Pera (Jackson Laboratory). According to the Journal Citation Reports, the journal has a 2020 impact factor of 7.765. References External links Regenerative medicine journals Stem cell research Cell Press academic journals Academic journals associated with international learned and professional societies English-language journals Academic journals established in 2013 Monthly journals Online-only journals Open access journals
Stem Cell Reports
[ "Chemistry", "Biology" ]
125
[ "Regenerative medicine journals", "Translational medicine", "Tissue engineering", "Stem cell research" ]
55,582,147
https://en.wikipedia.org/wiki/Araki%E2%80%93Sucher%20correction
In atomic, molecular and optical physics, the Araki–Sucher correction is a leading-order correction to the energy levels of atoms and molecules due to effects of quantum electrodynamics (QED). It is named after Huzihiro Araki and Joseph Sucher, who first calculated it for the helium atom in 1957. The method is based on a perturbative expansion of the energy in the Bethe–Salpeter equation, and has since been used to calculate corrections for atoms other than helium (e.g. beryllium and lithium), and for systems with more than two electrons. The correction typically involves the fine-structure constant α and may sometimes include terms of third order and higher in α. References Eponymous equations of physics Quantum mechanics Atomic, molecular, and optical physics
Araki–Sucher correction
[ "Physics", "Chemistry" ]
162
[ " and optical physics stubs", "Equations of physics", "Eponymous equations of physics", "Quantum mechanics", " molecular", "Atomic", "Physical chemistry stubs", "Quantum physics stubs", " and optical physics" ]
55,583,484
https://en.wikipedia.org/wiki/Long-Term%20Pavement%20Performance
Long-Term Pavement Performance Program, known as LTPP, is a research project supported by the Federal Highway Administration (FHWA) to collect and analyze pavement data in the United States and Canada. Currently, the LTPP maintains the largest road performance database. History The LTPP program was initiated by the Transportation Research Board (TRB) of the National Research Council (NRC) in the early 1980s. The FHWA, with the cooperation of the American Association of State Highway and Transportation Officials (AASHTO), sponsored the program. The program focused on examining the deterioration of the nation's highway and bridge infrastructure system. In the early 1980s, TRB and NRC suggested that a "Strategic Highway Research Program (SHRP)" should be started to concentrate on research and development activities that would contribute substantially to highway transportation improvement. Later, in 1986, the detailed plans were published under the title "Strategic Highway Research Program—Research Plans". The LTPP program collects data from in-service roads and analyzes it as planned by the SHRP. The LTPP aims to understand the possible reasons behind the poor or good performance of pavements. Hence, the effects of different parameters, such as weather, maintenance actions, material and traffic, on performance are studied. Data is collected in a timely manner and then analyzed to understand and predict the performance of roads. The LTPP program was transferred from SHRP to FHWA in 1992 to continue the work. Database LTPP data is collected by four regional contractors. New data is uploaded to the online platform every six months. The number of pavement test sections monitored in the LTPP program is more than 2,500. These pavement sections include both asphalt and Portland cement concrete. Road sections are located across different states and provinces of the United States and Canada. Data analysis contest The LTPP holds an annual international data analysis contest in collaboration with the ASCE. Participants are required to use the LTPP data. References External links LTPP InfoPave Federal Highway Administration Pavements Asphalt
Long-Term Pavement Performance
[ "Physics", "Chemistry" ]
412
[ "Amorphous solids", "Asphalt", "Unsolved problems in physics", "Chemical mixtures" ]
51,158,586
https://en.wikipedia.org/wiki/ICORES
The International Conference on Operations Research and Enterprise Systems (ICORES) is an annual conference in the field of operations research. Two tracks are held simultaneously, covering domain-independent methodologies and technologies and also practical work developed in specific application areas. These tracks are present in the conference not only in technical sessions but also in poster sessions, keynote lectures and tutorials. The works presented at the conference are published in the conference proceedings and are made available in the SCITEPRESS digital library. A cooperation with Springer is usually established for post-publication of a selection of the conference's best papers. The first edition of ICORES was held in 2012 in conjunction with the International Conference on Agents and Artificial Intelligence (ICAART) and the International Conference on Pattern Recognition Applications and Methods (ICPRAM). Areas Methodologies and Technologies Analytics for Enterprise (Engineering) Systems Inventory theory Linear programming Management sciences Network optimization Optimization Predictive analytics Queuing theory Simulation Stochastic optimization Data mining and business analytics Decision analysis Design in ES (e.g., real options for flexible design, etc.) Dynamic programming Forecasting Game theory Industrial engineering Information systems Applications Automation of operations OR in education OR in emergency management OR in health OR in national defense/international security OR in telecommunications OR in transportation Project management Resource allocation Risk management Routing Decision support systems Scheduling Supply chain management Systems of systems/teams and socio-technical systems Energy and environment Engines for innovation Globalization and Productivity Logistics Maintenance New applications of OR Optimization in finance Current chairs Conference Chair Marc Demange, RMIT University, School of Science - Mathematical Sciences, Australia Program Co-Chairs Greg H. Parlier, NCSU, United States Federico Liberatore, Cardiff University, United Kingdom Editions ICORES 2019 - Prague, Czech Republic Proceedings - Proceedings of the International Conference on Operations Research and Enterprise Systems. Best paper award Area: Methodologies and Technologies - Rainer Schlosser. "Stochastic Dynamic Pricing with Strategic Customers and Reference Price Effects" Area: Methodologies and Technologies - Sally I. McClean, David A. Stanford, Lalit Garg and Naveed Khan. "Using Phase-type Models to Monitor and Predict Process Target Compliance" Best Student Paper Award Area: Methodologies and Technologies - Marek Vlk, Antonin Novak and Zdenek Hanzalek. "Makespan Minimization with Sequence-dependent Non-overlapping Setups" Best poster award Area: Applications - Clémence Bisot. "Dynamic Linear Assignment for Pairing Two Parts in Production - A Case Study in Aeronautics Industry" Area: Methodologies and Technologies - Maria E. Bruni, Lorenzo Brusco, Giuseppe Ielpa and Patrizia Beraldi. "The Risk-averse Profitable Tour Problem" ICORES 2018 - Funchal, Madeira, Portugal Proceedings - Proceedings of the International Conference on Operations Research and Enterprise Systems. Best paper award Area: Methodologies and Technologies - Rainer Schlosser and Keven Richly. "Dynamic Pricing Strategies in a Finite Horizon Duopoly with Partial Information" Best student paper award Area: Methodologies and Technologies - Claudio Arbib, Fabrizio Marinelli, Andrea Pizzuti and Roberto Rosetti.
"A Heuristic for a Rich and Real Two-dimensional Woodboard Cutting Problem" Best poster award Area: Applications - Clémence Bisot. "Dynamic Linear Assignment for Pairing Two Parts in Production - A Case Study in Aeronautics Industry" Area: Methodologies and Technologies - Maria Elena Bruni, Luigi Di Puglia Pugliese, Patrizia Beraldi and Francesca Guerriero. "A Two-stage Stochastic Programming Model for the Resource Constrained Project Scheduling Problem under Uncertainty" ICORES 2017 - Porto, Portugal Proceedings - Proceedings of the International Conference on Operations Research and Enterprise Systems. Best paper award Area: Methodologies and Technologies - Michael Dreyfuss and Yahel Giat. "Optimizing Spare Battery Allocation in an Electric Vehicle Battery Swapping System" Best student paper award Area: Applications - Martina Fischetti and David Pisinger. "On the Impact of using Mixed Integer Programming Techniques on Real-world Offshore Wind Parks" Best poster award Area: Applications - Alexander Hämmerle and Georg Weichhart. "Variable Neighbourhood Search Solving Sub-problems of a Lagrangian Flexible Scheduling Problem" ICORES 2016 - Lisbon, Portugal Proceedings - Proceedings of the International Conference on Operations Research and Enterprise Systems. Best paper award Area: Applications - Yujie Chen, Fiona Polack, Peter Cowling, Philip Mourdjis and Stephen Remde. "Risk Driven Analysis of Maintenance for a Large-scale Drainage System" Area: Methodologies and Technologies - Vikas Vikram Singh, Oualid Jouini and Abdel Lisser. "A Complementarity Problem Formulation for Chance-constraine Games" Best student paper award Area: Applications - Jose L. Saez and Victor M. Albornoz. "Delineation of Rectangular Management Zones Under Uncertainty Conditions" Best PhD project award Parisa Madhooshiarzanagh. "Preference Dissagrigation Model of ELECTRE TRI-NC and Its Application on Identifying Preferred Climates for Tourism". ICORES 2015 - Lisbon, Portugal Proceedings - Proceedings of the International Conference on Operations Research and Enterprise Systems. Best paper award Area: Applications - L. Berghman, C. Briand, R. Leus and P. Lopez. "The Truck Scheduling Problem at Crossdocking Terminals" Area: Methodologies and Technologies - Kailiang Xu and Gang Zheng. "Schedule Two-machine Flow-shop with Controllable Processing Times Using Tabu-search" Best student paper award Area: Applications - Wasakorn Laesanklang, Dario Landa-Silva and J. Arturo Castillo Salazar. "Mixed Integer Programming with Decomposition to Solve a Workforce Scheduling and Routing Problem" Area: Methodologies and Technologies - Jan Bok and Milan Hladík. "Selection-based Approach to Cooperative Interval Games" Best PhD project award Nigel M. Clay, John Hearne, Babak Abbasi and Andrew Eberhard. "Ensuring Blood is Available When it is Needed Most" Sandy Jorens, Annelies De Corte, Kenneth Sörensen and Gunther Steenackers. "The Air Distribution Network Design Problem - A Complex Non-linear Combinatorial Optimization Problem " ICORES 2014 - ESEO, Angers, Loire Valley, France Proceedings - Proceedings of the 3rd International Conference on Operations Research and Enterprise Systems. Best paper award Area: Applications - Céline Gicquel and Michel Minoux. "New Multi-product Valid Inequalities for a Discrete Lot-sizing Problem" Area: Methodologies and Technologies - Nadia Chaabane Fakhfakh, Cyril Briand and Marie-José Huguet. 
"A Multi-Agent Min-Cost Flow problem with Controllable Capacities" Best student paper award Area: Applications - Nhat-Vinh Vo, Pauline Fouillet and Christophe Lenté. "General Lower Bounds for the Total Completion Time in a Flowshop Scheduling Problem" Area: Methodologies and Technologies - António Quintino, João Carlos Lourenço and Margarida Catalão-Lopes. "Managing Price Risk for an Oil and Gas Company" Best PhD project award Laura Wagner and Mustafa Çagri Gürbüz. "Sourcing Decisions for Goods with Potentially Imperfect Quality under the Presence of Supply Disruption " ICORES 2013 - Barcelona, Spain Proceedings - Proceedings of the 2nd International Conference on Operations Research and Enterprise Systems. Best paper award Area: Applications - Daniel Reich, Sandra L. Winkler and Erica Klampfl. "The Pareto Frontier for Vehicle Fleet Purchases" Area: Methodologies and Technologies - N. Perel, J. L. Dorsman and M. Vlasiou. "Cyclic-type Polling Models with Preparation Times" Best student paper award Area: Applications - Ahmad Almuhtady, Seungchul Lee, Edwin Romeijn and Jun Ni. "A Maintenance-optimal Swapping Policy" Area: Methodologies and Technologies - Pablo Adasme, Abdel Lisser and Chen Wang. "A Distributionally Robust Formulation for Stochastic Quadratic Bi-level Programming" ICORES 2012 - Vilamoura, Algarve, Portugal Proceedings - Proceedings of the 1st International Conference on Operations Research and Enterprise Systems. Best paper award Area: Applications - Rita Macedo, Saïd Hanafi, François Clautiaux, Cláudio Alves and J. M. Valério de Carvalho. "GENERALIZED DISAGGREGATION ALGORITHM FOR THE VEHICLE ROUTING PROBLEM WITH TIME WINDOWS AND MULTIPLE ROUTES" Area: Methodologies and Technologies - Herwig Bruneel, Willem Mélange, Bart Steyaert, Dieter Claeys and Joris Walraevens. "IMPACT OF BLOCKING WHEN CUSTOMERS OF DIFFERENT CLASSES ARE ACCOMMODATED IN ONE COMMON QUEUE" Best student paper award Area: Applications - Jianqiang Cheng, Stefanie Kosuch and Abdel Lisser. "STOCHASTIC SHORTEST PATH PROBLEM WITH UNCERTAIN DELAYS" Area: Methodologies and Technologies - A. Papayiannis, P. Johnson, D. Yumashev, S. Howell, N. Proudlove and P. Duck. "CONTINUOUS-TIME REVENUE MANAGEMENT IN CARPARKS" References External links Science and Technology Events Science and Technology Publications Event management system WikiCfp call for papers Operations research Computer science conferences Academic conferences
ICORES
[ "Mathematics", "Technology" ]
1,909
[ "Applied mathematics", "Computer science", "Computer science conferences", "Operations research" ]
51,159,172
https://en.wikipedia.org/wiki/One-relator%20group
In the mathematical subject of group theory, a one-relator group is a group given by a group presentation with a single defining relation. One-relator groups play an important role in geometric group theory by providing many explicit examples of finitely presented groups. Formal definition A one-relator group is a group G that admits a group presentation of the form $G = \langle X \mid r = 1 \rangle$ (1), where X is a set (in general possibly infinite), and where $r \in F(X)$ is a freely and cyclically reduced word. If Y is the set of all letters $x \in X$ that appear in $r$ and $Y \neq X$ then $G = \langle Y \mid r = 1 \rangle \ast F(X \setminus Y)$. For that reason X in (1) is usually assumed to be finite where one-relator groups are discussed, in which case (1) can be rewritten more explicitly as $G = \langle x_1, \dots, x_n \mid r = 1 \rangle$ (2), where $r \in F(x_1, \dots, x_n)$ for some integer $n \ge 1$. Freiheitssatz Let G be a one-relator group given by presentation (1) above. Recall that r is a freely and cyclically reduced word in F(X). Let $z \in X$ be a letter such that $z$ or $z^{-1}$ appears in r. Let $H = \langle X \setminus \{z\} \rangle \le G$. The subgroup H is called a Magnus subgroup of G. A famous 1930 theorem of Wilhelm Magnus, known as the Freiheitssatz, states that in this situation H is freely generated by $X \setminus \{z\}$, that is, $H \cong F(X \setminus \{z\})$. Several other proofs are also known. Properties of one-relator groups Here we assume that a one-relator group G is given by presentation (2) with a finite generating set $X = \{x_1, \dots, x_n\}$ and a nontrivial freely and cyclically reduced defining relation $r \in F(X)$. A one-relator group G is torsion-free if and only if $r$ is not a proper power in $F(X)$. Every one-relator group G is virtually torsion-free, that is, admits a torsion-free subgroup of finite index. A one-relator presentation is diagrammatically aspherical. If $r$ is not a proper power then the presentation complex P for presentation (2) is a finite Eilenberg–MacLane complex $K(G,1)$. If $r$ is not a proper power then a one-relator group G has cohomological dimension $\le 2$. A one-relator group G is free if and only if $r$ is a primitive element of $F(X)$; in this case G is free of rank n − 1. Suppose the element $r$ is of minimal length under the action of $\operatorname{Aut}(F(X))$, and suppose that for every $x_i \in X$ either $x_i$ or $x_i^{-1}$ occurs in $r$. Then the group G is freely indecomposable. If $r$ is not a proper power then a one-relator group G is locally indicable, that is, every nontrivial finitely generated subgroup of G admits a group homomorphism onto $\mathbb{Z}$. Every one-relator group G has algorithmically decidable word problem. If G is a one-relator group and $H$ is a Magnus subgroup then the subgroup membership problem for H in G is decidable. It is unknown if one-relator groups have solvable conjugacy problem. It is unknown if the isomorphism problem is decidable for the class of one-relator groups. A one-relator group G given by presentation (2) has rank n (that is, it cannot be generated by fewer than n elements) unless $r$ is a primitive element. Let G be a one-relator group given by presentation (2). If $n \ge 3$ then the center of G is trivial, $Z(G) = \{1\}$. If $n = 2$ and G is non-abelian with non-trivial center, then the center of G is infinite cyclic. Let $r, s \in F(X)$ be nontrivial freely and cyclically reduced words. Let $R = \langle\langle r \rangle\rangle$ and $S = \langle\langle s \rangle\rangle$ be the normal closures of r and s in F(X) accordingly. Then $R = S$ if and only if $r$ is conjugate to $s$ or $s^{-1}$ in F(X). There exists a finitely generated one-relator group that is not Hopfian and therefore not residually finite, for example the Baumslag–Solitar group $B(2,3) = \langle a, t \mid t a^2 t^{-1} = a^3 \rangle$. Let G be a one-relator group given by presentation (2). Then G satisfies the following version of the Tits alternative. If G is torsion-free then every subgroup of G either contains a free group of rank 2 or is solvable. If G has nontrivial torsion, then every subgroup of G either contains a free group of rank 2, or is cyclic, or is infinite dihedral. Let G be a one-relator group given by presentation (2).
Then the normal closure $N = \langle\langle r \rangle\rangle$ of $r$ in $F(X)$ admits a free basis of the form $\{u r u^{-1} \mid u \in U\}$ for some family $U$ of elements of $F(X)$. One-relator groups with torsion Suppose a one-relator group G is given by presentation (2) where $r = s^m$, where $m \ge 2$ and where $s \in F(X)$ is not a proper power (and thus s is also freely and cyclically reduced). Then the following hold: The element s has order m in G, and every element of finite order in G is conjugate to a power of s. Every finite subgroup of G is conjugate to a subgroup of $\langle s \rangle$ in G. Moreover, the subgroup of G generated by all torsion elements is a free product of a family of conjugates of $\langle s \rangle$ in G. G admits a torsion-free normal subgroup of finite index. Newman's "spelling theorem": Let $w \in F(X)$ be a nontrivial freely reduced word such that $w = 1$ in G. Then w contains a subword v such that v is also a subword of $r$ or $r^{-1}$, of length $|v| > (m-1)|s|$. Since $m \ge 2$ that means that $|v| > |r|/2$ and presentation (2) of G is a Dehn presentation. G has virtual cohomological dimension $\le 2$. G is a word-hyperbolic group. G has decidable conjugacy problem. G is coherent, that is, every finitely generated subgroup of G is finitely presentable. The isomorphism problem is decidable for finitely generated one-relator groups with torsion, by virtue of their hyperbolicity. G is residually finite. G is virtually free-by-cyclic, i.e. G has a subgroup $H$ of finite index such that there is a free normal subgroup $N' \trianglelefteq H$ with cyclic quotient $H/N'$. Magnus–Moldavansky method Starting with the work of Magnus in the 1930s, most general results about one-relator groups are proved by induction on the length |r| of the defining relator r. The presentation below follows Section 6 of Chapter II of Lyndon and Schupp and Section 4.4 of Magnus, Karrass and Solitar for Magnus' original approach, and Section 5 of Chapter IV of Lyndon and Schupp for Moldavansky's HNN-extension version of that approach. Let G be a one-relator group given by presentation (2) with a finite generating set X. Assume also that every generator from X actually occurs in r. One can usually assume that $n \ge 2$ (since otherwise G is cyclic and whatever statement is being proved about G is usually obvious). The main case to consider is when some generator, say t, from X occurs in r with exponent sum 0 on t. Say $X = \{t, x_1, \dots, x_{n-1}\}$ in this case. For every generator $x_i$ one denotes $x_{i,j} = t^{-j} x_i t^{j}$, where $j \in \mathbb{Z}$. Then r can be rewritten as a word $\tilde r$ in these new generators, with $|\tilde r| < |r|$. For example, if $r = t^{-1} x_1 t x_1^{-1}$ then $\tilde r = x_{1,1} x_{1,0}^{-1}$. Let $A$ be the alphabet consisting of the portion of $\{x_{i,j} \mid 1 \le i \le n-1,\ j \in \mathbb{Z}\}$ given by all $x_{i,j}$ with $m_i \le j \le M_i$, where $m_i$ and $M_i$ are the minimum and the maximum subscripts with which $x_i$ occurs in $\tilde r$. Magnus observed that the subgroup $L = \langle A \rangle \le G$ is itself a one-relator group, with the one-relator presentation $L = \langle A \mid \tilde r = 1 \rangle$. Note that since $|\tilde r| < |r|$, one can usually apply the inductive hypothesis to $L$ when proving a particular statement about G. Moreover, if $\tilde r_k$ for $k \in \mathbb{Z}$ denotes the word obtained from $\tilde r$ by shifting all subscripts by $k$, then $L_k = \langle A_k \mid \tilde r_k = 1 \rangle$ is also a one-relator group, where $A_k$ is obtained from $A$ by shifting all subscripts by $k$. Then the normal closure $N$ of $\{x_1, \dots, x_{n-1}\}$ in G is $N = \langle x_{i,j} \mid 1 \le i \le n-1,\ j \in \mathbb{Z} \rangle$. Magnus' original approach exploited the fact that N is actually an iterated amalgamated product of the groups $L_k$, amalgamated along suitably chosen Magnus free subgroups. His proof of the Freiheitssatz and of the solution of the word problem for one-relator groups was based on this approach. Later Moldavansky simplified the framework and noted that in this case G itself is an HNN-extension of L, with associated subgroups being Magnus free subgroups of L. If for every generator $x_i$ its minimum and maximum subscripts in $\tilde r$ are equal, that is $m_i = M_i$, then the associated subgroups are trivial, $G \cong L \ast \langle t \rangle$, and the inductive step is usually easy to handle in this case. Suppose then that some generator $x_i$ occurs in $\tilde r$ with at least two distinct subscripts.
We put $B$ to be the set of all generators $x_{i,j} \in A$ with non-maximal subscripts and we put $C$ to be the set of all generators $x_{i,j} \in A$ with non-minimal subscripts. (Hence every generator from $B$ and from $C$ occurs in $\tilde r$ with a non-unique subscript.) Then $\langle B \rangle$ and $\langle C \rangle$ are free Magnus subgroups of L and $t^{-1} \langle B \rangle t = \langle C \rangle$. Moldavansky observed that in this situation $G$ is an HNN-extension of L, with stable letter t and associated subgroups $\langle B \rangle$ and $\langle C \rangle$. This fact often allows proving something about G using the inductive hypothesis about the one-relator group L, via the use of normal form methods and structural algebraic properties for the HNN-extension G. The general case, both in Magnus' original setting and in Moldavansky's simplification of it, requires treating the situation where no generator from X occurs with exponent sum 0 in r. Suppose that distinct letters $t, y \in X$ occur in r with nonzero exponent sums $\alpha$ and $\beta$ accordingly. Consider a homomorphism $f : F(X) \to F(X)$ given by $f(t) = t y^{-\beta}$ and $f(y) = y^{\alpha}$, and fixing the other generators from X. Then for $f(r)$ the exponent sum on y is equal to 0. The map f induces a group homomorphism $\bar f : G \to G_1 = \langle X \mid f(r) = 1 \rangle$ that turns out to be an embedding. The one-relator group $G_1$ can then be treated using Moldavansky's approach. When $G_1$ splits as an HNN-extension of a one-relator group L, the defining relator of L still turns out to be shorter than r, allowing for inductive arguments to proceed. Magnus' original approach used a similar version of an embedding trick for dealing with this case. Two-generator one-relator groups It turns out that many two-generator one-relator groups split as semidirect products of the form $F \rtimes \mathbb{Z}$ with $F$ a free group of finite rank. This fact was observed by Ken Brown when analyzing the BNS-invariant of one-relator groups using the Magnus–Moldavansky method. Namely, let G be a one-relator group given by presentation (2) with $n = 2$ and let $\phi : G \to \mathbb{Z}$ be an epimorphism. One can then change a free basis of $F(X)$ to a basis $t, x$ such that $\phi(t) = 1$ and $\phi(x) = 0$, and rewrite the presentation of G in these generators as $G = \langle t, x \mid r = 1 \rangle$, where $r = r(t,x)$ is a freely and cyclically reduced word. Since $\phi(r) = 0$, the exponent sum on t in r is equal to 0. Again putting $x_j = t^{-j} x t^{j}$, we can rewrite r as a word $\tilde r$ in the $x_j$, $j \in \mathbb{Z}$. Let $m$ and $M$ be the minimum and the maximum subscripts of the generators $x_j$ occurring in $\tilde r$. Brown showed that $\ker(\phi)$ is finitely generated if and only if both $x_m$ and $x_M$ occur exactly once in $\tilde r$, and moreover, in that case the group $\ker(\phi)$ is free (see the worked example after the Generalizations section below). Therefore if $\phi : G \to \mathbb{Z}$ is an epimorphism with a finitely generated kernel, then G splits as $G = F \rtimes \mathbb{Z}$, where $F = \ker(\phi)$ is a free group of finite rank. Later Dunfield and Thurston proved that if a one-relator two-generator group $G = \langle a, b \mid r = 1 \rangle$ is chosen "at random" (that is, a cyclically reduced word r of length n in $F(a,b)$ is chosen uniformly at random) then the probability $p_n$ that a homomorphism from G onto $\mathbb{Z}$ with a finitely generated kernel exists stays bounded away from both 0 and 1 for all sufficiently large n. Moreover, their experimental data indicates that the limiting value for $p_n$ is close to 0.94. Examples of one-relator groups The free abelian group $\mathbb{Z}^2 = \langle a, b \mid a b a^{-1} b^{-1} = 1 \rangle$. The Baumslag–Solitar group $BS(m,n) = \langle a, t \mid t a^m t^{-1} = a^n \rangle$, where $m, n \neq 0$. The torus knot group $\langle x, y \mid x^p = y^q \rangle$, where $p, q$ are coprime integers. The Baumslag–Gersten group $\langle a, t \mid (t^{-1} a t)^{-1} a \,(t^{-1} a t) = a^2 \rangle$. The oriented surface group $\langle a_1, b_1, \dots, a_g, b_g \mid [a_1, b_1] \cdots [a_g, b_g] = 1 \rangle$, where $g \ge 1$ and where $[a,b] = a b a^{-1} b^{-1}$. The non-oriented surface group $\langle a_1, \dots, a_g \mid a_1^2 \cdots a_g^2 = 1 \rangle$, where $g \ge 1$. Generalizations and open problems If A and B are two groups, and $w \in A \ast B$ is an element in their free product, one can consider a one-relator product $G = (A \ast B)/\langle\langle w \rangle\rangle$. The so-called Kervaire conjecture, also known as the Kervaire–Laudenbach conjecture, asks if it is true that if A is a nontrivial group and B is infinite cyclic then for every $w \in A \ast B$ the one-relator product $G = (A \ast B)/\langle\langle w \rangle\rangle$ is nontrivial. Klyachko proved the Kervaire conjecture for the case where A is torsion-free. A conjecture attributed to Gersten says that a finitely generated one-relator group is word-hyperbolic if and only if it contains no Baumslag–Solitar subgroups.
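To make Brown's criterion concrete, the following worked example (added here for illustration; it is not part of the original exposition) applies it to the Baumslag–Solitar group $BS(1,2)$:

```latex
% Brown's criterion applied to BS(1,2), with the epimorphism phi(t) = 1,
% phi(x) = 0 and the shifted generators x_j = t^{-j} x t^{j}.
\begin{aligned}
G &= \langle t, x \mid t^{-1} x t \, x^{-2} = 1 \rangle = BS(1,2), \\
\tilde r &= x_{1} \, x_{0}^{-2}, \qquad m = 0, \; M = 1.
\end{aligned}
```

Here $x_M = x_1$ occurs exactly once in $\tilde r$ but $x_m = x_0$ occurs twice, so by Brown's criterion $\ker(\phi)$ is not finitely generated; indeed, this kernel is the additive group of dyadic rationals $\mathbb{Z}[1/2]$.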
See also 3-manifolds Geometric topology Small cancellation theory Sources Wilhelm Magnus, Abraham Karrass, Donald Solitar, Combinatorial group theory. Presentations of groups in terms of generators and relations, Reprint of the 1976 second edition, Dover Publications, Inc., Mineola, NY, 2004. References External links Andrew Putman's notes on one-relator groups, University of Notre Dame Group theory Algebraic topology Geometric topology
One-relator group
[ "Mathematics" ]
2,582
[ "Algebraic topology", "Geometric topology", "Group theory", "Fields of abstract algebra", "Topology" ]
51,164,063
https://en.wikipedia.org/wiki/STARlight
STARlight is a computer simulation (Monte Carlo) event generator program to simulate ultra-peripheral collisions among relativistic nuclei. It simulates both photonuclear and two-photon interactions. It can simulate multiple interactions among a single ion pair, such as vector meson photoproduction accompanied by mutual Coulomb excitation. These reactions are currently the primary method of studying photo-nuclear and two-photon interactions. History STARlight was initially written in the late 1990s, in FORTRAN. After a period of expansion to include additional final states, etc. it was recoded into C++ in the early 2000s. The code is now hosted on the Hepforge code repository. Reactions simulated Two-photon production of lepton pairs Two-photon production of single mesons Photonproduction of vector mesons Generalized photoproduction (via an interface to DPMJet) STARlight has been used by both STAR and PHENIX, at RHIC, and at the ALICE, CMS, ATLAS and LHC-b experiment at the Large Hadron Collider, for simulations of ultra-peripheral collisions. STARlight is designed to handle complex reactions involving multiple photon exchange between a single ion pair. These reactions are important at heavy ion colliders, because, with the large nuclear charges, the probability of multi-photon interactions in near grazing collisions (impact parameter b just slightly above twice the nuclear radius) is large. STARlight does this by calculating cross-sections in an impact-parameter dependent formalism. One of its major successes was the successful prediction of the cross-sections for ρ0 photoproduction at both RHIC and the LHC. It also accurately predicted the cross-section for e+e− pair production at RHIC and the LHC, using lowest order quantum electrodynamics. The latter reaction is important because it shows that there are no large higher order corrections, as could be expected because of the large nuclear charge. In both of the RHIC results, the presence of neutrons in downstream zero-degree calorimeters was used in the trigger, selecting events with impact parameters less than about 40 fermi; these events were then searched for photoproduced ρ0. A detailed description of the code is available. References Computational particle physics Nuclear physics
STARlight
[ "Physics" ]
469
[ "Nuclear physics", "Particle physics", "Computational particle physics", "Computational physics" ]
51,165,385
https://en.wikipedia.org/wiki/TCP-seq
Translation complex profile sequencing (TCP-seq) is a molecular biology method for obtaining snapshots of the momentary distribution of protein synthesis complexes along messenger RNA (mRNA) chains. Application Expression of the genetic code in all life forms consists of two major processes: synthesis of copies of the genetic code recorded in DNA in the form of mRNA (transcription), and protein synthesis itself (translation), whereby the code copies in mRNA are decoded into the amino acid sequences of the respective proteins. Both transcription and translation are highly regulated processes that essentially control everything that happens in live cells (and, consequently, in multicellular organisms). Control of translation is especially important in eukaryotic cells, where it forms part of the post-transcriptional regulatory networks of gene expression. This additional functionality is reflected in the increased complexity of the translation process, making it a hard object to investigate. Yet details of when and which mRNA is translated, and of the mechanisms responsible for this control, are key to understanding normal and pathological cell functionality. TCP-seq can be used to obtain this information. Principles With the advent of high-throughput DNA and RNA sequence identification methods (such as Illumina sequencing), it became possible to efficiently analyse the nucleotide sequences of large numbers of relatively short DNA and RNA fragments. Sequences of these fragments can be superimposed to reconstruct the source. Alternatively, if the source sequence is already known, the fragments can be found within it ("mapped"), and their individual numbers counted. Thus, if an initial stage exists whereby the fragments are differentially present or selected ("enriched"), this approach can be used to quantitatively describe such a stage over even a very large number or length of input sequences, most usually encompassing the entire DNA or RNA of the cell. TCP-seq is based on these capabilities of high-throughput RNA sequencing and further uses the nucleic acid protection phenomenon. The protection is manifested as resistance to depolymerisation or modification of stretches of nucleic acids (particularly RNA) that are tightly bound to or engulfed by other biomolecules, which thus leave their "footprints" over the nucleic acid strand. These "footprint" fragments therefore represent the locations on the nucleic acid chain where the interaction occurs. By sequencing and mapping the fragments back to the source sequence, it is possible to precisely identify the locations and counts of these intermolecular contacts. In the case of TCP-seq, ribosomes and ribosomal subunits engaged in interaction with mRNA are first rapidly crosslinked to it chemically with formaldehyde, to preserve the existing state of interactions (a "snapshot" of the distribution) and to block any possible non-equilibrium processes. The crosslinking can be performed directly in, but is not restricted to, live cells. The RNA is then partially degraded (e.g. with ribonuclease) so that only fragments protected by the ribosomes or ribosomal subunits are left. The protected fragments are then purified according to the sedimentation dynamics of the attached ribosomes or ribosomal subunits, de-blocked, sequenced and mapped to the source transcriptome, giving the original locations of the translation complexes over the mRNA. TCP-seq merges several elements typical of other transcriptome-wide analyses of its kind.
In particular, the polysome profiling and ribosome (translation) profiling approaches are also employed to identify mRNA involved in polysome formation and the locations of elongating ribosomes over the coding regions of transcripts, respectively. These methods, however, do not use chemical stabilisation of translation complexes or purification of the covalently bound intermediates from live cells. TCP-seq can thus be considered more a functional equivalent of ChIP-seq and similar methods for investigating momentary interactions of DNA, redesigned to be applicable to translation. Advantages and disadvantages The advantages of the method include: a uniquely wide field of view (because translation complexes of any type, including scanning small ribosomal subunits, are captured for the first time); a potentially more natural representation of complex dynamics (because all, and not only selected, translation processes are arrested by formaldehyde fixation); and possibly more faithful and/or sensitive detection of the locations of translation complexes (as covalent fixation prevents detachment of the fragments from the ribosomes or their subunits). The disadvantages include: higher overall complexity of the experimental procedure (due to the requirement of the initial isolation of translated mRNA and preparative sedimentation to separate ribosomes and ribosomal subunits); higher contamination of the useful sequencing read depth with undesired fragments of the ribosomal RNA (inherited from the wide size-selection window used for the protected RNA fragments); and a pre-requirement to optimize the formaldehyde fixation procedure for each new cell or sample type (as optimal formaldehyde fixation timings strongly depend on sample morphology, and both over- and under-fixation will compromise the results). Development The method is being developed further; it has been applied to investigate translation dynamics in live yeast cells and extends, rather than simply combines, the capabilities of the previous techniques. The only other transcriptome-wide method for mapping ribosome positions over mRNA with nucleotide precision is ribosome (translation) profiling. However, it captures the positions of only elongating ribosomes, and the most dynamic and functionally important intermediates of translation at the initiation stage are not detected. TCP-seq was designed to specifically target these blind spots. It can essentially provide the same level of detail for the elongation phase as ribosome (translation) profiling, but also includes recording of the initiation, termination and recycling intermediates of protein synthesis (and basically any other possible translation complexes, as long as the ribosome or its subunits contact and protect the mRNA) that previously remained out of reach. Therefore, TCP-seq provides a single approach for a complete insight into the translation process of a biological sample. This particular aspect of the method can be expected to develop further, as the dynamics of ribosomal scanning on mRNA during translation initiation is largely unknown across life. A current dataset containing TCP-seq data for translation initiation is available for the yeast Saccharomyces cerevisiae, and is likely to be extended to other organisms in the future. References Molecular biology techniques Biochemistry methods Molecular biology
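Conceptually, the final mapping-and-counting step can be pictured with a tiny example. The following Python sketch uses invented transcript and footprint sequences; real TCP-seq analyses rely on dedicated read aligners and transcriptome annotations rather than exact substring search:

```python
# Minimal sketch of the counting step described above: map protected RNA
# fragments ("footprints") to known transcript sequences and accumulate
# per-position counts, giving the distribution of translation complexes
# along each mRNA. All sequences are invented placeholders.

transcripts = {"mRNA1": "AUGGCUAGCUAAGGAUCCAUGUAA"}
footprints = ["GCUAGC", "AUGGCU", "GGAUCC", "GCUAGC"]

coverage = {name: [0] * len(seq) for name, seq in transcripts.items()}

for fp in footprints:
    for name, seq in transcripts.items():
        start = seq.find(fp)  # naive mapping: first exact match only
        if start != -1:
            for i in range(start, start + len(fp)):
                coverage[name][i] += 1

print(coverage["mRNA1"])
```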
TCP-seq
[ "Chemistry", "Biology" ]
1,313
[ "Biochemistry methods", "Molecular biology techniques", "Biochemistry", "Molecular biology" ]
42,523,620
https://en.wikipedia.org/wiki/Desalting%20and%20buffer%20exchange
Desalting and buffer exchange are methods to separate soluble macromolecules from smaller molecules (desalting) or replace the buffer system used for another one suitable for a downstream application (buffer exchange). These methods are based on gel filtration chromatography, also called molecular sieve chromatography, which is a form of size-exclusion chromatography. Desalting and buffer exchange are two of the most common gel filtration chromatography applications, and they can be performed using the same resin. Desalting and buffer exchange both entail recovering the components of a sample in whatever buffer is used to pre-equilibrate the small, porous polymer beads (resin). Desalting occurs when buffer salts and other small molecules are removed from a sample in exchange for water (with the resin being pre-equilibrated in water). Buffer exchange occurs when the buffer salts in a sample are exchanged for those in another buffer. Applications Desalting is used to remove salts from protein solutions, phenol or unincorporated nucleotides from nucleic acids or excess crosslinking or labeling reagents from conjugated proteins. Buffer exchange is used to transfer a protein solution into a buffer system appropriate for downstream applications such as ion exchange, electrophoresis or affinity chromatography. Principles of Desalting and Buffer Exchange Size exclusion chromatography applications for separating macromolecules based on subtle differences in size typically use resins with large and varied pore sizes in long chromatography columns. However, for buffer exchange and desalting applications, it is mainly the maximum effective pore size (exclusion limit or molecular weight cut off (MWCO) of the resin) that determines the size of molecules that can be separated. Molecules that are significantly smaller than the MWCO penetrate into the pores of the resin, while molecules larger than the MWCO are unable to enter the pores and remain together in the void volume of the column. By passing samples through a column resin bed with sufficient length and volume, macromolecules can be fully separated from small molecules that travel a greater distance though the pores of the resin bed. No significant separation of molecules larger than the exclusion limit occurs. In order for the desired macromolecules to remain in the void volume, resins with very small pores sizes must be utilized. For typical desalting and buffer exchange applications choosing a resin with a molecular weight cut off between 5 and 10KDa is usually best. For other applications, such as separating peptides from full-sized proteins, resins with larger exclusion limits may be necessary. The macromolecular components are recovered in the buffer used to pre-equilibrate the gel-filtration matrix, while the small molecules can be collected in a later fraction volume or be left trapped in the resin. One important feature to note when choosing a resin is that the small molecules targeted for removal must be several times smaller than the MWCO for proper separation. Desalting and Buffer Exchange vs. Dialysis Dialysis is useful for many of the same desalting and buffer exchange applications performed with gel filtration chromatography, as both methods are based on similar molecular weight cut-off limits. Gel filtration has the advantage of speed (a few minutes vs. 
hours for dialysis) along with the ability to remove contaminants from relatively small-volume samples compared to dialysis which is an important feature when working with toxic or radioactive substances. Dialysis, on the other hand, is much less dependent on sample size as related to device format. For dialysis applications, achieving a high percentage sample recovery and molecule removal is generally straight forward with little optimization. For gel filtration applications it is important to select a column size and format that is suitable for your sample. Gel Filtration Formats for Small Sample Processing There are a number of common formats for performing gel filtration for smaller (less than 4mL) volumes: Chromatography columns Gravity-flow columns Chromatography cartridges Centrifuge columns Centrifuge plates Gravity-flow, or drip, columns use head-pressure from a buffer-chase to push the sample through the gel filtration matrix. Sample is loaded into the top of an upright column and allowed to flow into the resin bed. The sample is then chased through the column by adding additional buffer or water to the top of the column. During this process, small fractions are typically collected and each is tested for the macromolecules of interest. In some cases, several fractions might contain the protein and may have to be pooled to improve yield. In order to eliminate the time and monitoring assorted with drip columns, fractions often equal to the full exclusion volume of the column are collected regardless of sample volume resulting in significant dilution of sample. Sealed chromatography cartridges or columns work similarly except the sample and buffer is pumped into and through the resin by an external device such as a liquid chromatographic (LC) system, also requiring collection and monitoring of several fractions. Even though this method is often semi-automated, using chromatography cartridges is typically limited to processing one sample at a time and some sample dilution from the chase buffer is still likely to occur. To eliminate sample dilution and the collecting and monitoring of fractions, centrifuge column or plate -based gel filtration, also referred to as spin desalting, methods are commonly used. Spin desalting is unique in that a centrifuge is used to first clear the void volume of liquid in the resin, followed by sample addition and centrifugation. After centrifugation, the macromolecules in the sample have moved through the column in approximately the same initial volume, but the small molecules have been forced into the pores of the resin and replaced by the buffer that was used to pre-equilibrate the gel-filtration matrix. Spin columns and plates eliminate the need to wait for samples to emerge by gravity flow and require no chromatography system, allowing for multiple-sample processing simultaneously. Desalting and Buffer Exchange Column and Plate Suppliers Desalting spin columns are widely available with various volumes and MWCO limits: Thermo Scientific Pierce Products Bio-Rad Laboratories, Inc. Cytiva External Resources Animation of desalting using gel filtration chromatography References Chromatography
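The resin-selection rule of thumb described above (macromolecule well above the exclusion limit, contaminant several times smaller than the MWCO) can be expressed as a simple check. The numeric margins below are illustrative assumptions, not vendor specifications.

```python
# Rule-of-thumb resin check; the 3x / 2x margins are illustrative only.
def resin_is_suitable(macromolecule_kda, contaminant_kda, mwco_kda,
                      small_margin=3.0, large_margin=2.0):
    small_enough = contaminant_kda * small_margin <= mwco_kda   # contaminant enters pores
    large_enough = macromolecule_kda >= mwco_kda * large_margin  # target stays in void volume
    return small_enough and large_enough

# A 50 kDa protein and a 0.3 kDa buffer salt on a 7 kDa MWCO desalting resin
print(resin_is_suitable(50.0, 0.3, 7.0))  # True
```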
Desalting and buffer exchange
[ "Chemistry" ]
1,339
[ "Chromatography", "Separation processes" ]
56,948,914
https://en.wikipedia.org/wiki/Neuroprostanes
The neuroprostanes are prostaglandin-like compounds formed in vivo from the free radical-catalyzed peroxidation of essential fatty acids (primarily docosahexaenoic acid) without the direct action of cyclooxygenase (COX) enzymes. The result is the formation of isoprostane-like compounds F4-, D4-, E4-, A4-, and J4-neuroprostanes which have been shown to be produced in vivo. These oxygenated essential fatty acids possess potent biological activity as anti-inflammatory mediators inhibiting the response of human macrophages that augment the perception of pain. See also Isoprostanes Prostaglandin References Prostaglandins
Neuroprostanes
[ "Chemistry", "Biology" ]
159
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
56,951,751
https://en.wikipedia.org/wiki/Necrobiome
The necrobiome has been defined as the community of species associated with decaying remains after the death of an organism. The process of decomposition is complex. Microbes decompose cadavers, but other organisms including fungi, nematodes, insects, and larger scavenger animals also contribute. Once the immune system is no longer active, microbes colonizing the intestines and lungs decompose their respective tissues and then travel throughout the body via the circulatory and lymphatic systems to break down other tissue and bone. During this process, gases are released as a by-product and accumulate, causing bloating. Eventually, the gases seep through the body's wounds and natural openings, providing a way for some microbes to exit from the inside of the cadaver and inhabit the outside. The microbial communities colonizing the internal organs of a cadaver are referred to as the thanatomicrobiome. The region outside of the cadaver that is exposed to the external environment is referred to as the epinecrotic microbial communities of the necrobiome, and is especially important when determining the time and location of death for an individual. Different microbes play specific roles during each stage of the decomposition process. The microbes that colonize the cadaver and the rate of their activity are determined by the cadaver itself and the cadaver's surrounding environmental conditions. History There is textual evidence that human cadavers were first studied around the third century BC to gain an understanding of human anatomy. Many of the first human cadaver studies took place in Italy, where the earliest record of determining the cause of death from a human corpse dates back to 1286. However, understanding of the human body progressed slowly, in part because the spread of Christianity and other religious beliefs resulted in human dissection becoming illegal. Only non-human animals were dissected for anatomical understanding until the 13th century, when officials realized human cadavers were necessary for a better understanding of the human body. It was not until 1676 that Antonie van Leeuwenhoek designed a lens that made it possible to visualize microbes, and not until the late 18th century that microbes were considered useful in understanding the body after death. In modern times, human cadavers are used for research, but other animal models can provide larger sample sizes and produce more controlled studies. Microbial colonization between humans and some non-human animals is so similar that those models can be used to understand the decomposition process for humans. Swine have been used repeatedly to understand the human decomposition process in terrestrial environments. Pigs are suitable for studying human decomposition because of their size, sparse hair, and the similar bacteria found in their GI tracts. Pig carcasses also minimize the issue of variation that exists when using human cadavers as study subjects. Sophisticated molecular techniques have made it possible to identify the microbial communities that inhabit and decompose cadavers; however, this research is fairly new. Studying the necrobiome has become increasingly useful in determining the time and cause of death, which is valuable in crime scene investigations. Applications in forensics Microbial forensics As the necrobiome deals with the various communities of bacteria and other organisms that catalyze the decomposition of plants and animals, this particular biome is an increasingly vital part of forensic science. 
The microbes occupying the space underneath and around a decomposing body are unique to it—similar to how fingerprints are exclusively unique to only one person. Using this differentiation, forensic investigators at a crime scene are able to distinguish between burial sites, as well as gain concrete factual information about how long the body has been there and the predicted area in which the death possibly occurred. Forensic microbiologists investigate ways to determine time and place of death by analyzing the microbes present on the corpse. The microbial timeline of how a body decays is known as the microbial clock. It estimates how long a body has been in a certain place based on microbes present or missing. The succession of bacterial species populating the body after a period of four days is an indicator of minimum time since death. Recent studies have taken place to determine if bacteria alone can inform the post-mortem interval. Bacteria responsible for decomposing cadavers can be difficult to study because the bacteria found on a cadaver vary and change quickly. Bacteria can be brought to a cadaver by scavengers, air, or water. Other environmental factors like temperature and soil can impact the microbes found on a cadaver. The time of death can be estimated not only by the type and amount of bacteria on a cadaver, but also by the chemical compounds produced by those bacteria. Forensic anthropologist Arpad Vass determined, from research he undertook in the 1990s, that three types of fatty acids, produced when bacteria break down fat tissues, muscles, and food remnants in the stomach are useful in predicting the time since death during forensic investigations. Forensic entomology Forensic entomology, the study of insects (arthropods) found in decomposing humans, is useful in determining the post-mortem interval after 3–4 days have passed since the death. Various types of flies are usually drawn to a cadaver and typically lay their eggs there. Therefore, both the developmental stages of one species of fly and the succession of different species can give an estimate of how long the person has been deceased. Since the presence and life cycle of insects varies by temperature and environmental conditions, this type of analysis cannot give the actual time of death, but results only in a minimum time since death. The deceased could not have been dead longer than the oldest maggot found. Insect activity can also indicate the cause of death. Blowflies typically lay their eggs in natural body cavities that are easily assessible, yet also sheltered. If the pattern of maggot activity appears elsewhere, that could indicate an injury, such as a stab wound, even if the surrounding tissue has decomposed. In the event of a death caused by poison, traces of the toxin may have been consumed by the maggots, without harming them. Since insect species tend to have certain geographic ranges and known habitat preferences, forensic entomologists can determine if a body has been moved after death. Analysis of the insects in the necrobiome can indicate if the death occurred in a different ecological or geographical environment than where the cadaver was found. Research Human cadavers The decomposition of human bodies is studied at research facilities known as body farms. 
Seven educational institutions house such facilities in the United States: University of Tennessee in Knoxville, Western Carolina University, Texas State University, Sam Houston State University, Southern Illinois University, Colorado Mesa University, and University of South Florida. These facilities study the decomposition of cadavers in all possible manners of decay, including in open or frozen environments, buried underground, or within cars. Through the study of the cadavers, experts examine the microbial timeline and document what is typical in each stage in the various locations in which each body is placed. In 2013, at the Southeast Texas Applied Forensics Science facility at Sam Houston State University, researchers documented the bacteria growing in two decomposing cadavers placed in a natural outdoor environment. Their focus was on the bloat stage, when hydrogen sulfide and methane produced by bacteria build up and inflate the cadaver. They found that "by the end of the bloat period...anaerobic bacteria such as Clostridia had become dominant" and swabs of the oral cavity "showed a shift toward Firmicutes, a group of bacteria that includes Clostridia." By 2019, Jennifer Pechal, a forensic science researcher at Michigan State University, had worked with microbes on almost 2,000 human remains in a spectrum of conditions. She proposed a pattern in the necrobiome that concurs with data from scientists in Italy, Austria, and France. They found that a "large, consistent shift in the microbial community" occurs about 48 hours after death, making it "fairly easy to tell if a body has been dead for more or less than 2 days." Pechal also hopes that microbial tests can be used in the future to help pathologists determine undiagnosed medical conditions that were the cause of death. Non-human remains A 2019 study at the University of Huddersfield in West Yorkshire, United Kingdom, sought to investigate the influence fur has on the necrobiome of rabbits. The experiment involved six dead rabbits purchased from the pet food company Kiezebrink. The fur was removed from the torsos of three of the test subjects. All six samples were placed on "sterile sand in clean plastic containers." Lids covering the containers prevented birds and other scavengers from accessing the carcasses, while small holes drilled into the sides of the containers allowed air flow and insect activity while the containers were exposed on the roof of a university building. Samples were collected from inside of the mouth, the upper skin of the torso exposed to the air environment, and the bottom skin of the torso in contact with the sand. Proteobacteria were the most abundant group present, followed by Firmicutes, Bacteroidetes, and Actinobacteria, during the active stage of decomposition. During the advanced stage of decomposition, Proteobacteria decreased from 99.4% to 81.6% in the oral cavity but were most abundant in the non-fur samples. Firmicutes were the most abundant for the skin samples in both fur and non-fur samples. Finally, Proteobacteria were most abundant at the soil interface during the beginning of decomposition in both fur and non-fur samples. The researchers also noted that Actinobacteria were the least abundant in the active stage and decreased even more during the dry stage. The conclusion of the experiment was that while bacterial communities changed over the course of decomposition, the most significant variation is attributed to different anatomical regions "but independently of the presence of the fur." 
Technology and techniques Techniques for analyzing the necrobiome involve phospholipid fatty acid (PLFA) analysis, total soil fatty acid methyl esters, and DNA profiling. This technology is used to simplify the sample collection into sequences that scientists can read. The simplified sequence of the necrobiome is run through a data bank to match the name of it. Due to the lack of universal algorithm technology, there is a knowledge gap in various platforms across different regions of the world. In order to close that gap, there needs to be an expansion of the technology. However, there are a few obstacles, including identifying needs, research, prototype development, acceptance, and adoption. Researchers are working on an algorithm to predict time since death with an accuracy of within two days, which would be an improvement over time frames given by forensic entomology. Jennifer Pechal states that those computer models must "be tested on bodies with a known time of death to ensure they are accurate." As of 2020, that technology is still 5 to 10 years away from becoming available. See also Microbiology of decomposition Biome Human microbiome References Bacteriology Bacteria and humans Microbiology Microbiomes Medical aspects of death Forensic science
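As a hedged illustration of the kind of "microbial clock" regression described above, the sketch below fits a random-forest model that maps taxon abundances to post-mortem interval. The data are synthetic placeholders; real studies train on sequenced community profiles from remains with known times of death.

```python
# Illustrative sketch only (synthetic data, not a validated forensic model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((60, 5))                                   # 60 samples x 5 taxa abundances
pmi_days = 10 * X[:, 0] + 5 * X[:, 3] + rng.normal(0, 1, 60)  # synthetic PMI in days

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:50], pmi_days[:50])                          # train on 50 "cadavers"
print("predicted PMI (days):", model.predict(X[50:]).round(1))
```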
Necrobiome
[ "Chemistry", "Biology", "Environmental_science" ]
2,312
[ "Bacteria and humans", "Microbiology", "Bacteria", "Microscopy", "Microbiomes", "Environmental microbiology" ]
47,106,648
https://en.wikipedia.org/wiki/Wei-Shou%20Hu
Wei-Shou Hu (born November 5, 1951) is a Taiwanese-American chemical engineer. He is currently the Distinguished McKnight University Professor of Chemical Engineering and Material Science at the University of Minnesota. Education He earned his B.S. in agricultural chemistry from National Taiwan University in 1974 and his Ph.D. in biochemical engineering from the Massachusetts Institute of Technology under the guidance of Daniel I.C. Wang in 1983. He has been a professor with the University of Minnesota since 1983. Hu has long impacted the field of cell culture bioprocessing since its infancy by steadfastly introducing quantitative and systematic analysis into this field. His work, which covers areas such as modeling and controlling cell metabolism, modulating glycosylation, and process data mining, has helped shape the advances of biopharmaceutical process technology. He recently led an industrial consortium to embark on genomic research on Chinese hamster ovary cells, the main workhorse of biomanufacturing, and to promote post-genomic research in cell bioprocessing. Hu's research focuses on the field of cell culture bioprocessing, particularly metabolic control of the physiological state of the cell. In addition to his work with Chinese hamster ovary cells, his work has enabled the use of process engineering for cell therapy, especially with liver cells. Hu has written four different biotechnology books. One of his articles is cited by 63. He is the 2005 recipient of the Marvin Johnson Award from the American Chemical Society, the distinguished service award of Society of Biological Engineers, a special award from Asia Pacific Biochemical Engineering Conference (2009), and the Amgen Award from Engineering Conferences International, as well as both the distinguished service award and the Division award from the Food, Pharmaceuticals and Bioengineering Division of the American Institute of Chemical Engineers. He has authored the books Bioseparations, Cell Culture Technology for Pharmaceutical and Cell-Based Therapies and Cell Culture Bioprocess Engineering References External links Hu Group Website University of Minnesota page Cellular Bioprocess Technology Course 1951 births Living people 20th-century American engineers 21st-century American engineers American chemical engineers Biochemical engineering National Taiwan University alumni Minnesota CEMS MIT School of Engineering alumni Place of birth missing (living people) Taiwanese chemical engineers Taiwanese emigrants to the United States University of Minnesota faculty
Wei-Shou Hu
[ "Chemistry", "Engineering", "Biology" ]
478
[ "Biochemistry", "Chemical engineering", "Biological engineering", "Biochemical engineering" ]
47,110,350
https://en.wikipedia.org/wiki/Thermal%20integrity%20profiling
Thermal Integrity Profiling (TIP) is a non-destructive testing method used to evaluate the integrity of concrete foundations. It is standardized by ASTM D7949 - Standard Test Methods for Thermal Integrity Profiling of Concrete Deep Foundations. The testing method was first developed in the mid 1990s at the University of South Florida. It relates the heat generated by curing of cement to the integrity and quality of drilled shafts, augered cast in place (ACIP) piles and other concrete foundations. In general, a shortage of competent concrete (necks or inclusions) is registered by relative cool regions; the presence of extra concrete (over-pour bulging into soft soil strata) is registered by relative warm regions. Concrete temperatures along the length of the foundation element are sampled throughout the concrete hydration process. TIP analysis is performed at the point of peak temperature, generally 18 to 24hrs post-concreting. Measurements are available relatively soon after pouring (6 to 72 hours), generally before other integrity testing methods such as cross hole sonic logging and low strain integrity testing can be performed. TIP can be performed using a probe lowered down standard access tubes or by installing embedded thermal wires along the length of the reinforcement cage. Four thermal wires are commonly installed along the steel cage, each 90 degrees from one another, forming a north-east-south-west configuration. If records at a certain depth show regions with cooler temperatures (when compared to the average temperature at that depth), a concrete deficiency or defect may be present. An average temperature at a certain depth that is significantly lower than the average temperatures at other depths may also be indication of a potential problem. It is also possible to estimate the effective area of the foundation, and to assess if the reinforcing cage is properly aligned and centered. References Cement Concrete Corrosion Nondestructive testing 1990s introductions Temperature
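The screening logic described above (a wire reading noticeably cooler than the average of all wires at that depth suggests a possible deficiency) can be sketched as follows. The 2 °C threshold is an arbitrary illustration, not an ASTM D7949 acceptance criterion.

```python
# Simple anomaly screen over a TIP temperature profile (illustrative only).
def flag_cool_anomalies(temps_by_depth, threshold_c=2.0):
    """temps_by_depth: dict mapping depth (m) -> list of wire temperatures (deg C)."""
    anomalies = []
    for depth, readings in temps_by_depth.items():
        avg = sum(readings) / len(readings)
        for wire, t in enumerate(readings):
            if avg - t > threshold_c:          # relatively cool region
                anomalies.append((depth, wire, t, avg))
    return anomalies

profile = {1.0: [61.2, 60.8, 61.0, 60.9], 5.0: [60.5, 55.9, 60.2, 60.4]}
print(flag_cool_anomalies(profile))            # flags the cool reading at 5.0 m
```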
Thermal integrity profiling
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
373
[ "Physical quantities", "Metallurgy", "Thermodynamics", "Wikipedia categories named after physical quantities", "Electrochemistry stubs", "Physical chemistry stubs", "Scalar physical quantities", "Temperature", "Civil engineering", "Civil engineering stubs", "Concrete", "Materials degradation",...
38,308,532
https://en.wikipedia.org/wiki/Benchtop%20nuclear%20magnetic%20resonance%20spectrometer
A Benchtop nuclear magnetic resonance spectrometer (Benchtop NMR spectrometer) refers to a Fourier transform nuclear magnetic resonance (FT-NMR) spectrometer that is significantly more compact and portable than the conventional equivalents, such that it is portable and can reside on a laboratory benchtop. This convenience comes from using permanent magnets, which have a lower magnetic field and decreased sensitivity compared to the much larger and more expensive cryogen cooled superconducting NMR magnets. Instead of requiring dedicated infrastructure, rooms and extensive installations these benchtop instruments can be placed directly on the bench in a lab and moved as necessary (e.g., to the fumehood). These spectrometers offer improved workflow, even for novice users, as they are simpler and easy to use. They differ from relaxometers in that they can be used to measure high resolution NMR spectra and are not limited to the determination of relaxation or diffusion parameters (e.g., T1, T2 and D). Magnet development The first generation of NMR spectrometers used large Electromagnets weighing hundreds of kilograms or more. Slightly smaller permanent magnet systems were developed in the 1960s-70s at proton resonance frequencies of 60 and 90 MHz and were widely used for chemical analysis using continuous wave methods, but these permanent magnets still weighed hundreds of kilograms and could not be placed on a benchtop. Superconducting magnets were developed to achieve stronger magnetic fields for higher resolution and increased sensitivity. However, these superconducting magnets are expensive, large, and require specialized building facilities. In addition, the cryogens needed for the superconductors are hazardous, and represent an ongoing maintenance cost. As a result, these instruments are usually installed in dedicated NMR rooms or facilities for use by multiple research groups. Since the early 2000s there has been a renaissance in permanent-magnet technology and design, with advances sufficient to allow development of much smaller NMR instruments with useful resolution and sensitivity for education, research and industrial applications. Samarium–cobalt and neodymium hard ferromagnets have reduced the size of NMR permanent magnets, and fields up to 2.9 T have been reached, corresponding to a 125 MHz proton Larmor frequency. These designs, which operate with magnet temperatures from room temperature to 60 °C, allow instruments to be made small enough to fit on a lab bench, and are safe to operate in a typical lab environment. They require only single phase local power, and with UPS systems can be made portable and can perform NMR analyses at different points in a manufacturing area. Disadvantages of Small-Size Magnets and Methods to Overcome them One of the biggest disadvantages of low-field (0.3-1.5T) NMR spectrometers is the temperature dependence of the permanent magnets used to produce the main magnetic field. For small magnets there was a concern that the intensity of external magnetic fields may adversely affect the main field, however the use of magnetic shielding materials inside the spectrometer eliminates this problem. The currently available spectrometers are easily moved from one location to another, including some that are mounted on portable trolleys with continuous power supplies. Another related difficulty is that currently available spectrometers do not support elevated sample temperatures which may be required for some in-situ measurements in chemical reactions. 
A recent paper suggests that a special experimental setup, with two or more coils and synchronous oscillators, may help overcome this problem and allow it to work with unstable magnetic fields and with affordable oscillators. NMR spectra acquired at low field suffer from less signal dispersion, which also leads to more complicated spectra with overlapping signals and higher order effects. The complete interpretation of such spectra requires computational quantum mechanical spectral analysis, for 1H-1D NMR spectra also known as HiFSA. Applications NMR spectroscopy can be used for chemical analysis, reaction monitoring, and quality assurance/quality control experiments. Higher-field instruments enable unparalleled resolution for structure determination, particularly for complex molecules. Cheaper, more robust, and more versatile medium and low field instruments have sufficient sensitivity and resolution for reaction monitoring and QA/QC analyses. As such permanent magnet technology offers the potential to extend the accessibility and availability of NMR to institutions that do not have access to super-conducting spectrometers (e.g., beginning undergraduates or small-businesses). Many automated applications utilizing multivariate statistical analyses (chemometrics) approaches to derive structure-property and chemical and physical property correlations between 60 MHz 1H NMR spectra and primary analysis data particularly for petroleum and petrochemical process control applications have been developed over the past decade. Available Benchtop NMR Spectrometers Development of this new class of spectrometers began in the mid-2000s leaving this one of the last molecular spectroscopy techniques to be made available for the benchtop. Spinsolve New Zealand- and Germany-based Magritek's Spinsolve instrument, operating at 90 MHz, 80 MHz, and 60 MHz, offers very good sensitivity and resolution less than 0.4 Hz and weighs 115 kg, 73 kg, and 60 kg respectively. The ULTRA model has an even higher resolution of 0.2 Hz with a lineshape of 0.2 Hz/ 6 Hz/ 12 Hz comparable to high field NMR specifications. 1H Proton, 19F Fluorine, 13C Carbon, 31P Phosphorus and other X-nuclei such as 7Li, 23Na, 29Si and others can be measured. Multiple X nuclei can be included on a single spectrometer, without sensitivity loss, using the Multi X option. A wide range of NMR spectra can be acquired including 1D, 1D with decoupling, solvent suppression, DEPT, T1, T2 and 2D HETCOR, HMBC, HMQC, COSY and JRES spectra. Pulsed field gradients for spectroscopy are included, and optional Diffusion pulsed field gradients can also be added. The magnet is stabilised with an external lock, which means it does not require the use of deuterated solvents. An online reaction monitoring accessory using a flow cell, and an autosampler are available. Samples are measured using standard 5 mm NMR tubes and the spectrometer is controlled through an external computer where standard NMR data collection and processing takes place. In 2009, , based in Boulder, Colorado, launched the first benchtop NMR spectrometer with the . A small (7 x 5.75 x 11.5”) 45 MHz spectrometer with good resolution (< 1.8 Hz) and mid-to-low-range sensitivity that weighs 4.76 kg (10.5 lbs) and can acquire a 1D 1H or 19F spectra. was acquired by Thermo Fisher Scientific in December 2012, and subsequently renamed the product . 
Instead of the traditional static 5 mm NMR tubes, the spectrometer uses a flow-through system that requires sample injection into an 0.4 mm ID PTFE and quartz capillary. Deuterated solvents are optional due to the presence of a software lock. It needs only a web browser on any external computer or mobile device for control as the spectrometer has a built-in web server board; no installed software on a dedicated PC is required. In August 2013 a second version was introduced, the , that operates at 82 MHz with a resolution of 1.2 Hz and ten times the sensitivity of the original . Nanalysis Calgary, AB, Canada based Nanalysis Corp offers two benchtop NMR platforms: 60 and 100 MHz, which is 1.4 T and 2.35 T, respectively. The spectrometers are in an all-in-one enclosure (magnet, electronics and touchscreen computer) making them easier to site but all systems can be controlled locally or remotely by an external computer as preferred by the user. The 60 MHz is the smallest 60 MHz available on the market, weighing about 25 kg, and the 100 MHz, just under 100 kg. Both platforms come in an ‘e’ model, which can acquire 1H/19F or in a ‘PRO’ model that observes 1H/19F/X (where X is defined by the customer but is most commonly 7Li, 11B, 13C, 31P). Depending on the model of instrument, it can perform 1D 1H, 13C{1H}, 19F, 31P, 31P{1H}, COSY, JRES, DEPT, APT, HSQC, HSQC-ME, HMBC, T1 and T2 experiments. The spectrometers use standard 5mm NMR tubes and are compatible with most third party NMR software suites. Nanalysis acquired RS2D in 2020, expanding their magnetic resonance technology portfolio to include their superior cameleon4 technology, NMR consoles , preclinical MRI, and MR product lines. In 2021 Nanalysis also acquired the New York based software company, One Moon Scientific, to both offer routine, high-performance data processing and expand the analysis of NMR data including machine learning, database construction and search algorithms. X-Pulse / Pulsar In 2019, Oxford Instruments launched a new 60 MHz spectrometer called X-Pulse. This instrument is a significant improvement on the previous Pulsar system, launched in 2013. X-Pulse has the highest, as standard, resolution (<0.35 Hz / 10 Hz) of the currently available benchtop, cryogen-free NMR analysers. It incorporates a 60 MHz rare-earth permanent magnet. X-Pulse is the only benchtop NMR system to offer a full broadband X channel for the allowing the measurement of 1H,19F, 13C, 31P, 7Li, 29Si, 11B and 23Na on a single probe. A large range of 1D and 2D measurements can be performed on all nuclei, 1D spectra, T1, T2, HETCOR, COSY, HSQC, HMBC, JRES, and many others including solvent suppression and selective excitation. X-Pulse also has options for flow NMR and a variable temperature probe allowing the measurement of samples in NMR tubes at temperatures from 20 °C to 60 °C. The magnet and spectrometer are in two separate boxes with the magnet weighing 149 kg and the electronics weighing 22 kg. X-Pulse requires a standard mains electrical supply and uses standard 5mm NMR tubes. Instrument control comes from the SpinFlow workflow package, while the processing and manipulation of data is achieved using third-party NMR software suites. Pulsar instruments were discontinued in 2019 following the launch of X-Pulse. Bruker In 2019, Bruker, a long time manufacturer and market leader of high performance NMR machines, introduced a Benchtop NMR, Fourier 80 FT-NMR. 
The machine uses permanent magnets and operates using Bruker's standard software (the full-featured TopSpin 4 software for Windows and Linux, as well as a Python-based API for Windows and Linux, and a simplified app called GoScan). The machine can be configured for 1H and 13C spectra (possibly more by custom order) in 1D and 2D modes, and operates at 80 MHz (1.88 T). The machine weighs about 93 kg and consumes less than 300 W when operating. Q Magnetics In late 2021, Q Magnetics introduced the QM-125, a 125 MHz (2.9 T) 1H benchtop NMR spectrometer with resolution better than 0.5 Hz. The instrument is contained in a single enclosure with a mass of 28 kg, and is connected to a controlling computer by a USB interface. The QM-125 spectrometer does not require the user to first transfer their sample to an NMR tube. It may be used in two ways: a walk-up mode, where a sample is drawn from a source with a syringe and then injected into the spectrometer; and an automated or hyphenated mode, where the sample is delivered to the RF coil by flow from another instrument. Other features that support automated and hyphenated applications are a stable shim, open-source Python control software, and front-panel fluid connections. Power consumption of less than 50 W and relatively low cost support integration into vertical and dedicated applications. References Nuclear magnetic resonance spectroscopy
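The field strengths and proton frequencies quoted throughout this article are related by the Larmor relation f = (γ/2π)·B0, with γ/2π ≈ 42.577 MHz/T for 1H. A quick numerical check of the fields mentioned above:

```python
# Proton Larmor frequency from magnetic field strength.
GAMMA_1H_MHZ_PER_T = 42.577  # gyromagnetic ratio of 1H divided by 2*pi, MHz/T

def proton_larmor_mhz(b0_tesla):
    return GAMMA_1H_MHZ_PER_T * b0_tesla

for b0 in (1.4, 1.88, 2.35, 2.9):
    print(f"{b0} T -> {proton_larmor_mhz(b0):.0f} MHz")
# 1.4 T -> ~60 MHz, 1.88 T -> ~80 MHz, 2.35 T -> ~100 MHz, 2.9 T -> ~123 MHz (quoted as 125 MHz)
```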
Benchtop nuclear magnetic resonance spectrometer
[ "Physics", "Chemistry" ]
2,565
[ "Nuclear magnetic resonance", "Spectroscopy", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy" ]
38,308,976
https://en.wikipedia.org/wiki/Energy-dispersive%20X-ray%20diffraction
Energy-dispersive X-ray diffraction (EDXRD) is an analytical technique for characterizing materials. It differs from conventional X-ray diffraction by using polychromatic photons as the source and is usually operated at a fixed angle. With no need for a goniometer, EDXRD is able to collect full diffraction patterns very quickly. EDXRD is almost exclusively used with synchrotron radiation which allows for measurement within real engineering materials. History EDXRD was originally proposed independently by Buras et al. and Giessen and Gordon in 1968. Advantages The advantages of EDXRD are (1) it uses a fixed scattering angle, (2) it works directly in reciprocal space, (3) fast collection time, and (4) parallel data collection. The fixed scattering angle geometry makes EDXRD especially suitable for in situ studies in special environments (e.g. under very low or high temperatures and pressures). When the EDXRD method is used, only one entrance and one exit window are needed. The fixed scattering angle also allows for measurement of the diffraction vector directly. This allows for high-accuracy measurement of lattice parameters. It allows for rapid structure analysis and the ability to study materials that are unstable and only exist for short periods of time. Because the whole spectrum of diffracted radiation is obtained simultaneously, it enables parallel data collection studies where structural changes can be determined over time. Facilities References Diffraction Synchrotron-related techniques
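Because the scattering angle is fixed, each lattice spacing d selects a particular diffracted photon energy through the energy-dispersive form of Bragg's law, E = hc / (2 d sin θ). The sketch below works one illustrative example; the numerical values are not taken from the article.

```python
# Energy-dispersive Bragg relation: E[keV] = 12.398 / (2 * d[Angstrom] * sin(theta)).
import math

HC_KEV_ANGSTROM = 12.398  # h*c in keV*Angstrom

def diffracted_energy_kev(d_angstrom, two_theta_deg):
    theta = math.radians(two_theta_deg / 2)
    return HC_KEV_ANGSTROM / (2 * d_angstrom * math.sin(theta))

# A 2.0 Angstrom spacing observed at a fixed 2*theta of 10 degrees:
print(f"{diffracted_energy_kev(2.0, 10.0):.1f} keV")  # ~35.6 keV
```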
Energy-dispersive X-ray diffraction
[ "Physics", "Chemistry", "Materials_science" ]
312
[ "Spectrum (physical sciences)", "Crystallography", "Diffraction", "Analytical chemistry stubs", "X-ray crystallography", "Spectroscopy" ]
38,314,318
https://en.wikipedia.org/wiki/List%20of%20PET%20radiotracers
This is a list of positron emission tomography (PET) radiotracers. These are chemical compounds in which one or more atoms have been replaced by a short-lived, positron emitting radioisotope. Cardiology water ammonia Rubidium-82 chloride Acetate (Also used in oncology) Neurology [11C] 25B-NBOMe (Cimbi-36) [18F] Altanserin [11C] Carfentanil [11C] DASB [11C] DTBZ or [18F]Fluoropropyl-DTBZ [11C] [11C] ME@HAPTHI [18F] Fallypride [18F] Florbetaben [18F] Flubatine [18F] Fluspidine [18F] Florbetapir [18F] or [11C] Flumazenil [18F] Flutemetamol [18F] Fluorodopa [18F] Desmethoxyfallypride [18F] Mefway [18F] MPPF [18F] Nifene [11C] Pittsburgh compound B [11C] Raclopride [18F] Setoperone [18F] or [11C] N-Methylspiperone [11C] Verapamil NIMH maintains a list of CNS radiotracers that may be useful for additional information. Neuroepigenetics [11C] Martinostat Oncology [18F] Fludeoxyglucose (18F) (FDG)-glucose analogue [11C] Acetate [11C] Methionine [11C] choline [18F] EF5 [18F] Fluciclovine [18F] Fluorocholine [18F] FET [18F] FMISO [18F] Fluorothymidine F-18 [64Cu] Cu-ETS2 [64Cu] Copper-64 DOTA-TATE [68Ga] DOTA-pseudopeptides [68Ga] DOTA-TATE [68Ga] PSMA [68Ga] CXCR4; solid and hematologic cancers Infectious diseases [18F] Fluorodeoxysorbitol (FDS) Further reading CNS Radiotracers that have been advanced for use in Human Studies References Neuroimaging PET radiotracers
List of PET radiotracers
[ "Chemistry" ]
519
[ "Chemicals in medicine", "Medicinal radiochemistry", "PET radiotracers" ]
54,025,008
https://en.wikipedia.org/wiki/Chandrasekhar%20virial%20equations
In astrophysics, the Chandrasekhar virial equations are a hierarchy of moment equations of the Euler equations, developed by the Indian American astrophysicist Subrahmanyan Chandrasekhar, and the physicist Enrico Fermi and Norman R. Lebovitz. Mathematical description Consider a fluid mass of volume with density and an isotropic pressure with vanishing pressure at the bounding surfaces. Here, refers to a frame of reference attached to the center of mass. Before describing the virial equations, let's define some moments. The density moments are defined as the pressure moments are the kinetic energy moments are and the Chandrasekhar potential energy tensor moments are where is the gravitational constant. All the tensors are symmetric by definition. The moment of inertia , kinetic energy and the potential energy are just traces of the following tensors Chandrasekhar assumed that the fluid mass is subjected to pressure force and its own gravitational force, then the Euler equations is First order virial equation Second order virial equation In steady state, the equation becomes Third order virial equation In steady state, the equation becomes Virial equations in rotating frame of reference The Euler equations in a rotating frame of reference, rotating with an angular velocity is given by where is the Levi-Civita symbol, is the centrifugal acceleration and is the Coriolis acceleration. Steady state second order virial equation In steady state, the second order virial equation becomes If the axis of rotation is chosen in direction, the equation becomes and Chandrasekhar shows that in this case, the tensors can take only the following form Steady state third order virial equation In steady state, the third order virial equation becomes If the axis of rotation is chosen in direction, the equation becomes Steady state fourth order virial equation With being the axis of rotation, the steady state fourth order virial equation is also derived by Chandrasekhar in 1968. The equation reads as Virial equations with viscous stresses Consider the Navier-Stokes equations instead of Euler equations, and we define the shear-energy tensor as With the condition that the normal component of the total stress on the free surface must vanish, i.e., , where is the outward unit normal, the second order virial equation then be This can be easily extended to rotating frame of references. See also Virial theorem Dirichlet's ellipsoidal problem Chandrasekhar tensor References Stellar dynamics Fluid dynamics
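The displayed equations appear to have been lost from this entry. As a hedged reconstruction from the standard treatment (Chandrasekhar's Ellipsoidal Figures of Equilibrium), and not a restoration of the original markup, the moments referred to above are usually written as follows.

```latex
% Standard definitions of the moments named in the text (assumed forms):
\begin{align}
  I_{ij}   &= \int_V \rho\, x_i x_j \,\mathrm{d}\mathbf{x}, &
  \Pi_{ij} &= \delta_{ij}\int_V p \,\mathrm{d}\mathbf{x}, \\
  T_{ij}   &= \tfrac{1}{2}\int_V \rho\, u_i u_j \,\mathrm{d}\mathbf{x}, &
  W_{ij}   &= -\tfrac{1}{2}\int_V \rho\, \Phi_{ij} \,\mathrm{d}\mathbf{x},
\end{align}
% with the Chandrasekhar potential-energy tensor kernel
\begin{equation}
  \Phi_{ij}(\mathbf{x}) = G\int_V \rho(\mathbf{x}')\,
  \frac{(x_i - x_i')(x_j - x_j')}{|\mathbf{x}-\mathbf{x}'|^{3}}\,\mathrm{d}\mathbf{x}',
\end{equation}
% so that I = I_{ii}, T = T_{ii}, W = W_{ii}. In steady state the second-order
% virial equation then takes the familiar form
\begin{equation}
  2T_{ij} + W_{ij} + \delta_{ij}\,\Pi = 0, \qquad \Pi = \int_V p\,\mathrm{d}\mathbf{x}.
\end{equation}
```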
Chandrasekhar virial equations
[ "Physics", "Chemistry", "Engineering" ]
497
[ "Chemical engineering", "Astrophysics", "Piping", "Stellar dynamics", "Fluid dynamics" ]
54,031,414
https://en.wikipedia.org/wiki/Lysibody
Although cell wall carbohydrates are ideal immunotherapeutic targets due to their abundance in bacteria and high level of conservation, their poor immunogenicity compared with protein targets complicates their use for the development of protective antibodies. A lysibody is a chimeric antibody in which the Fab region is the binding domain from a bacteriophage lysin, or the binding domain from an autolysin or bacteriocin, all of which bind to bacterial cell wall carbohydrate epitopes. This is linked to the Fc of Immunoglobulin G (IgG). The chimera forms a stable homodimer held together by hinge-region disulfide bonds. Thus, lysibodies are homodimeric hybrid immunoglobulin G molecules that can bind with high affinity and specificity to a carbohydrate substrate in the bacterial cell wall peptidoglycan. Lysibodies behave like authentic IgG by binding at high affinity to their bacterial wall receptor, fix complement and therefore promote phagocytosis by macrophages and neutrophils, protecting mice from infection in model systems. Since cell wall hydrolases, autolysins and bacteriocins are ubiquitous in nature, production of lysibodies specific for difficult to treat pathogenic bacteria is possible. Binding domains may be linked to either the N-terminus of the IgG Fc (as is the case for autolysins) or to the C-terminus (as seen with phage lysins - see figure). In both cases the binding domains are able to bind their substrates in the bacterial cell wall and the Fc is able to perform its effector functions (see ref 2 for more detail). Lysibodies may be used prophylactically to help protect surgical patients from bacterial infections, particularly methicillin resistant Staphylococcus aureus (MRSA) and boost immune clearance in infected individuals. References Monoclonal antibodies Glycoproteins Immune system
Lysibody
[ "Chemistry", "Biology" ]
436
[ "Glycoproteins", "Glycobiology", "Organ systems", "Immune system" ]
54,033,014
https://en.wikipedia.org/wiki/Kerr%20frequency%20comb
Kerr frequency combs (also known as microresonator frequency combs) are optical frequency combs which are generated from a continuous wave pump laser by the Kerr nonlinearity. This coherent conversion of the pump laser to a frequency comb takes place inside an optical resonator which is typically of micrometer to millimeter in size and is therefore termed a microresonator. The coherent generation of the frequency comb from a continuous wave laser with the optical nonlinearity as a gain sets Kerr frequency combs apart from today's most common optical frequency combs. These frequency combs are generated by mode-locked lasers where the dominating gain stems from a conventional laser gain medium, which is pumped incoherently. Because Kerr frequency combs only rely on the nonlinear properties of the medium inside the microresonator and do not require a broadband laser gain medium, broad Kerr frequency combs can in principle be generated around any pump frequency. While the principle of Kerr frequency combs is applicable to any type of optical resonator, the requirement for Kerr frequency comb generation is a pump laser field intensity above the parametric threshold of the nonlinear process. This requirement is easier to fulfill inside a microresonator because of the possible very low losses inside microresonators (and corresponding high quality factors) and because of the microresonators’ small mode volumes. These two features combined result in a large field enhancement of the pump laser inside the microresonator which allow the generation of broad Kerr frequency combs for reasonable powers of the pump laser. One important property of Kerr frequency combs, which is a direct consequence of the small dimensions of the microresonators and their resulting large free spectral ranges (FSR), is the large mode spacing of typical Kerr frequency combs. For mode-locked lasers this mode spacing, which defines the distance in between adjacent teeth of the frequency comb, is typically in the range of 10 MHz to 1 GHz. For Kerr frequency combs the typical range is from around 10 GHz to 1 THz. The coherent generation of an optical frequency comb from a continuous wave pump laser is not a unique property of Kerr frequency combs. Optical frequency combs generated with cascaded optical modulators also possess this property. For certain application this property can be advantageous. For example, to stabilize the offset frequency of the Kerr frequency comb one can directly apply feedback to the pump laser frequency. In principle it is also possible to generate a Kerr frequency comb around a particular continuous wave laser in order to use the bandwidth of the frequency comb to determine the exact frequency of the continuous wave laser. Since their first demonstration in silica micro-toroid resonators, Kerr frequency combs have been demonstrated in a variety of microresonator platforms which notably also include crystalline microresonators and integrated photonics platforms such as waveguide resonators made from silicon nitride. More recent research has expanded the range of available platforms further which now includes diamond, aluminum nitride, lithium niobate, and, for mid-infrared pump wavelengths, silicon. Because both use the nonlinear effects of the propagation medium, the physics of Kerr frequency combs and of supercontinuum generation from pulsed lasers is very similar. In addition to the nonlinearity, the chromatic dispersion of the medium also plays a crucial role for these systems. 
As a result of the interplay of nonlinearity and dispersion, solitons can form. The most relevant type of solitons for Kerr frequency comb generation are bright dissipative cavity solitons, which are sometimes also called dissipative Kerr solitons (DKS). These bright solitons have helped to significantly advance the field of Kerr frequency combs as they provide a way to generate ultra-short pulses which in turn represent a coherent, broadband optical frequency comb, in a more reliable fashion than what was possible before. In its simplest form with only the Kerr nonlinearity and second order dispersion the physics of Kerr frequency combs and dissipative solitons can be described well by the Lugiato–Lefever equation. Other effects such as the Raman effect and higher order dispersion effects require additional terms in the equation. See also Frequency comb Mode-locking Lugiato–Lefever equation References Nonlinear optics Laser science Spectroscopy Photonics
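For reference, a commonly used dimensionless form of the Lugiato–Lefever equation mentioned above is given below. This is the standard normalized textbook form, quoted here as an assumption about notation rather than the specific convention of any cited work.

```latex
% Normalized Lugiato-Lefever equation (Kerr nonlinearity + second-order dispersion):
\begin{equation}
  \frac{\partial \psi(\theta, t)}{\partial t}
  = -(1 + i\alpha)\,\psi
    + i\,|\psi|^{2}\psi
    - i\,\frac{\beta}{2}\,\frac{\partial^{2}\psi}{\partial \theta^{2}}
    + F
\end{equation}
% psi: normalized intracavity field; theta: azimuthal coordinate along the resonator;
% alpha: pump-cavity detuning; beta: second-order dispersion; F: normalized CW pump amplitude.
% The Raman effect and higher-order dispersion require additional terms.
```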
Kerr frequency comb
[ "Physics", "Chemistry" ]
891
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
41,097,043
https://en.wikipedia.org/wiki/Impedance%20%28accelerator%20physics%29
In accelerator physics, impedance is a quantity that characterizes the self-interaction of a charged particle beam, mediated by the beam environment, such as the vacuum chamber, RF cavities, and other elements encountered along the accelerator or storage ring. Definition in terms of the wake function The impedance is defined as the Fourier transform of the wake function. From this expression and the fact that the wake function is real, one can derive the property: Important sources of impedance The impedance is defined at all positions along the beam trajectory. The beam travels through a vacuum chamber. Substantial impedance is generated in transitions, where the shape of the beam pipe changes. The RF cavities are another important source. Impedance models In the absence of detailed geometric modeling, one can use various models to represent different aspects of the accelerator beam pipe structure. One such model is the broadband resonator. For the longitudinal case, one has with the shunt impedance, , the quality factor, and the resonant frequency. Resistive wall Given a circular beam pipe of radius , and conductivity , the impedance is given by The corresponding longitudinal wakefield is approximately given by The transverse wake function from the resistive wall is given by Effect of impedance on the beam The impedance acts back on the beam and can cause a variety of effects, often considered deleterious for accelerator functioning. In general, impedance effects are classified as "collective effects" because the whole beam must be considered together, and not just a single particle. The whole beam may, however, cause particular changes in the dynamics of individual particles such as tune shifts and coupling. Whole-beam changes include emittance growth and instabilities that can lead to beam loss. See also https://impedance.web.cern.ch/impedance/ References Accelerator physics
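The equation displays appear to be missing from this entry. The standard textbook forms of the quantities named in the text are given below as a hedged reconstruction; sign and normalization conventions vary between references.

```latex
% Longitudinal impedance as the Fourier transform of the wake function,
% and the symmetry property following from W being real:
\begin{equation}
  Z_{\parallel}(\omega) = \int_{-\infty}^{\infty} W_{\parallel}(t)\, e^{-i\omega t}\,\mathrm{d}t,
  \qquad
  Z_{\parallel}(-\omega) = Z_{\parallel}^{*}(\omega).
\end{equation}
% Broadband-resonator model with shunt impedance R_s, quality factor Q and
% resonant frequency omega_r:
\begin{equation}
  Z_{\parallel}(\omega) =
  \frac{R_s}{\,1 + i\,Q\left(\dfrac{\omega_r}{\omega} - \dfrac{\omega}{\omega_r}\right)}.
\end{equation}
```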
Impedance (accelerator physics)
[ "Physics" ]
380
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
41,100,419
https://en.wikipedia.org/wiki/Structure%20validation
Macromolecular structure validation is the process of evaluating reliability for 3-dimensional atomic models of large biological molecules such as proteins and nucleic acids. These models, which provide 3D coordinates for each atom in the molecule (see example in the image), come from structural biology experiments such as x-ray crystallography or nuclear magnetic resonance (NMR). The validation has three aspects: 1) checking on the validity of the thousands to millions of measurements in the experiment; 2) checking how consistent the atomic model is with those experimental data; and 3) checking consistency of the model with known physical and chemical properties. Proteins and nucleic acids are the workhorses of biology, providing the necessary chemical reactions, structural organization, growth, mobility, reproduction, and environmental sensitivity. Essential to their biological functions are the detailed 3D structures of the molecules and the changes in those structures. To understand and control those functions, we need accurate knowledge about the models that represent those structures, including their many strong points and their occasional weaknesses. End-users of macromolecular models include clinicians, teachers and students, as well as the structural biologists themselves, journal editors and referees, experimentalists studying the macromolecules by other techniques, and theoreticians and bioinformaticians studying more general properties of biological molecules. Their interests and requirements vary, but all benefit greatly from a global and local understanding of the reliability of the models. Historical summary Macromolecular crystallography was preceded by the older field of small-molecule x-ray crystallography (for structures with less than a few hundred atoms). Small-molecule diffraction data extends to much higher resolution than feasible for macromolecules, and has a very clean mathematical relationship between the data and the atomic model. The residual, or R-factor, measures the agreement between the experimental data and the values back-calculated from the atomic model. For a well-determined small-molecule structure the R-factor is nearly as small as the uncertainty in the experimental data (well under 5%). Therefore, that one test by itself provides most of the validation needed, but a number of additional consistency and methodology checks are done by automated software as a requirement for small-molecule crystal structure papers submitted to the International Union of Crystallography (IUCr) journals such as Acta Crystallographica section B or C. Atomic coordinates of these small-molecule structures are archived and accessed through the Cambridge Structural Database (CSD) or the Crystallography Open Database (COD). The first macromolecular validation software was developed around 1990, for proteins. It included Rfree cross-validation for model-to-data match, bond length and angle parameters for covalent geometry, and sidechain and backbone conformational criteria. For macromolecular structures, the atomic models are deposited in the Protein Data Bank (PDB), still the single archive of this data. The PDB was established in the 1970s at Brookhaven National Laboratory, moved in 2000 to the RCSB (Research Collaboration for Structural Biology) centered at Rutgers, and expanded in 2003 to become the wwPDB (worldwide Protein Data Bank), with access sites added in Europe () and Asia (), and with NMR data handled at the BioMagResBank (BMRB) in Wisconsin. 
Validation rapidly became standard in the field, with further developments described below. A large boost was given to the applicability of comprehensive validation for both x-ray and NMR as of February 1, 2008, when the worldwide Protein Data Bank (wwPDB) made mandatory the deposition of experimental data along with atomic coordinates. Since 2012 strong forms of validation have been in the process of being adopted for wwPDB deposition from recommendations of the wwPDB Validation Task Force committees for x-ray crystallography, for NMR, for SAXS (small-angle x-ray scattering), and for cryoEM (cryo-Electron Microscopy). Stages of validation Validation can be broken into three stages: validating the raw data collected (data validation), the interpretation of the data into the atomic model (model-to-data validation), and finally validation of the model itself. While the first two steps are specific to the technique used, validating the arrangement of atoms in the final model is not. Model validation Geometry Conformation (dihedrals): protein & RNA The backbone and side-chain dihedral angles of protein and RNA have been shown to have specific combinations of angles which are allowed (or forbidden). For protein backbone dihedrals (φ, ψ), this has been addressed by the Ramachandran plot, while for side-chain dihedrals (χ's) one should refer to the Dunbrack backbone-dependent rotamer library. Though mRNA structures are generally short-lived and single-stranded, there is an abundance of non-coding RNAs with different secondary and tertiary folding (tRNA, rRNA etc.) which contain a preponderance of the canonical Watson-Crick (WC) base-pairs, together with a significant number of non-Watson-Crick (NWC) base-pairs, so such RNAs also qualify for the regular structural validation that applies to nucleic acid helices. The standard practice is to analyse the intra-base-pair (translational: Shear, Stagger, Stretch; rotational: Buckle, Propeller, Opening) and inter-base-pair geometrical parameters (translational: Shift, Slide, Rise; rotational: Tilt, Roll, Twist), checking whether they are in range or out of range with respect to their suggested values. These parameters describe the relative orientations of the two paired bases with respect to each other in two strands (intra) along with those of the two stacked base pairs (inter) with respect to each other, and, hence, together, they serve to validate nucleic acid structures in general. Since RNA helices are short (average: 10-20 bps), the use of electrostatic surface potential as a validation parameter has been found to be beneficial, particularly for modelling purposes. Packing and Electrostatics: globular proteins For globular proteins, interior atomic packing (arising from short-range, local interactions) of side-chains has been shown to be pivotal in the structural stabilization of the protein fold. On the other hand, the electrostatic harmony (non-local, long-range) of the overall fold has also been shown to be essential for its stabilization. Packing anomalies include steric clashes, short contacts, holes and cavities, while electrostatic disharmony refers to unbalanced partial charges in the protein core (particularly relevant for designed protein interiors). While the clash score of MolProbity identifies steric clashes at a very high resolution, the Complementarity Plot combines packing anomalies with electrostatic imbalance of side-chains and signals for either or both. 
Carbohydrates The branched and cyclic nature of carbohydrates poses particular problems to structure validation tools. At higher resolutions, it is possible to determine the sequence/structure of oligo- and poly-saccharides, both as covalent modifications and as ligands. However, at lower resolutions (typically lower than 2.0 Å), sequences/structures should either match known structures, or be supported by complementary techniques such as mass spectrometry. Also, monosaccharides have clear conformational preferences (saturated rings are typically found in chair conformations), but errors introduced during model building and/or refinement (wrong linkage chirality or distance, or wrong choice of model; see the cited literature for recommendations on carbohydrate model building and refinement, and for reviews of general errors in carbohydrate structures) can bring their atomic models out of the more likely low-energy state. Around 20% of the deposited carbohydrate structures are in a higher-energy conformation not justified by the structural data (measured using the real-space correlation coefficient). A number of carbohydrate validation web services are available at glycosciences.de (including nomenclature checks and linkage checks by pdb-care, and cross-validation with mass spectrometry data through the use of GlycanBuilder), whereas the CCP4 suite currently distributes Privateer, a tool that is integrated into the model building and refinement process itself. Privateer is able to check stereo- and regio-chemistry, ring conformation and puckering, linkage torsions, and real-space correlation against positive omit density, generating aperiodic torsion restraints on ring bonds, which can be used by any refinement software in order to maintain the monosaccharide's minimal-energy conformation. Privateer also generates scalable two-dimensional SVG diagrams according to the Essentials of Glycobiology standard symbol nomenclature, containing all the validation information as tooltip annotations. This functionality is currently integrated into other CCP4 programs, such as the molecular graphics program CCP4mg (through the Glycoblocks 3D representation, which conforms to the standard symbol nomenclature) and the suite's graphical interface, CCP4i2. Validation for crystallography Overall considerations Global vs local criteria Many evaluation criteria apply globally to an entire experimental structure, most notably the resolution, the anisotropy or incompleteness of the data, and the residual or R-factor that measures overall model-to-data match (see below). Those help a user choose the most accurate among related Protein Data Bank entries to answer their questions. Other criteria apply to individual residues or local regions in the 3D structure, such as fit to the local electron density map or steric clashes between atoms. Those are especially valuable to the structural biologist for making improvements to the model, and to the user for evaluating the reliability of that model right around the place they care about - such as a site of enzyme activity or drug binding. Both types of measures are very useful, but although global criteria are easier to state or publish, local criteria make the greatest contribution to scientific accuracy and biological relevance. As expressed in the Rupp textbook, "Only local validation, including assessment of both geometry and electron density, can give an accurate picture of the reliability of the structure model or any hypothesis based on local features of the model." 
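The local fit-to-density measure used in several of the checks above, the real-space correlation coefficient (RSCC), is simply the Pearson correlation between experimental density and density calculated from the model, sampled on the same grid points around a residue. A minimal sketch, with hypothetical density values standing in for a real map:

```python
import numpy as np

def real_space_cc(rho_obs, rho_calc):
    """Pearson correlation between observed and model-calculated density
    values sampled at the same grid points around a residue."""
    a = np.asarray(rho_obs, dtype=float)
    b = np.asarray(rho_calc, dtype=float)
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Hypothetical density samples on a small grid around one residue.
rng = np.random.default_rng(0)
rho_calc = rng.random(50)
rho_obs = rho_calc + 0.1 * rng.standard_normal(50)  # a well-fitted residue
print(f"RSCC = {real_space_cc(rho_obs, rho_calc):.2f}")  # close to 1.0
```

Residues with low RSCC are exactly the local weak spots the Rupp quotation warns about: the global R-factor can look fine while an individual loop or ligand fits its density poorly.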
Relationship to resolution and B-factor Data validation Structure factors Twinning Model-to-data validation Residuals and Rfree Real-space correlation Improvement by correcting diagnosed problems In nuclear magnetic resonance Data Validation: Chemical Shifts, NOEs, RDCs AVS Assignment validation suite (AVS) checks the chemical shifts list in BioMagResBank (BMRB) format for problems. PSVS Protein Structure Validation Server at the NESG, based on information retrieval statistics PROSESS PROSESS (Protein Structure Evaluation Suite & Server) is a new web server that offers an assessment of protein structural models by NMR chemical shifts as well as NOEs, geometrical, and knowledge-based parameters. LACS Linear analysis of chemical shifts is used for absolute referencing of chemical shift data. Model-to-data validation TALOS+ predicts protein backbone torsion angles from chemical shift data. It is frequently used to generate further restraints applied to a structure model during refinement. Model validation: as above Dynamics: core vs loops, tails, and mobile domains One of the critical needs for NMR structural ensemble validation is to distinguish well-determined regions (those that have experimental data) from regions that are highly mobile and/or have no observed data. There are several current or proposed methods for making this distinction, such as Random Coil Index, but so far the NMR community has not standardized on one. Software and websites In cryo-EM Cryo-EM presents special challenges to model-builders, as the observed electron density is frequently insufficient to resolve individual atoms, leading to a higher likelihood of errors. Geometry-based validation tools similar to those used in X-ray crystallography can be used to highlight implausible modeling choices and guide the modeler toward more native-like structures. The CaBLAM method, which only uses Cα atoms, is suitable for low-resolution structures from cryo-EM. A way to compute the difference density map has been formulated for cryo-EM. Cross-validation using a "free" map, comparable to the use of a free R-factor, is also available. Other methods for checking model-map fit include correlation coefficients, model-map FSC, confidence maps, CryoEF (orientation bias check), and TEMPy SMOC. In SAXS SAXS (small-angle x-ray scattering) is a rapidly growing area of structure determination, both as a source of approximate 3D structure for initial or difficult cases and as a component of hybrid-method structure determination when combined with NMR, EM, crystallographic, cross-linking, or computational information. There is great interest in the development of reliable validation standards for SAXS data interpretation and for quality of the resulting models, but there are as yet no established methods in general use. Three recent steps in this direction are the creation of a Small-Angle Scattering Validation Task Force committee by the worldwide Protein Data Bank and its initial report, a set of suggested standards for data inclusion in publications, and an initial proposal of statistically derived criteria for automated quality evaluation. For computational biology It is difficult to do meaningful validation of an individual, purely computational, macromolecular model in the absence of experimental data for that molecule, because the model with the best geometry and conformational score may not be the one closest to the right answer. Therefore, much of the emphasis in validation of computational modeling is in assessment of the methods. 
To avoid bias and wishful thinking, double-blind prediction competitions have been organized, the original example of which (held every 2 years since 1994) is CASP (Critical Assessment of Structure Prediction), which evaluates predictions of 3D protein structure for newly solved crystallographic or NMR structures held in confidence until the end of the relevant competition. The major criterion for CASP evaluation is a weighted score called GDT-TS for the match of Cα positions between the predicted and the experimental models. See also List of biophysically important macromolecular crystal structures References External links Computational prediction CASP experiments home page Model validation in Yasara General-purpose structure validation wwPDB validation/deposition site MolProbity web service (has NMR-specific features) PDBREPORT Protein structure validation database WHAT_CHECK software ProCheck software Complementarity Plot pdb-care (carbohydrate validation) Privateer (carbohydrate validation) OOPS2, part of the Uppsala Software Factory ProSA web service Verify-3D profile analysis NUPARM (Nucleic Acid validation) RNAhelix (RNA validation) X-ray EDS (Electron Density Server) Coot - modeling software (built-in validation) PDB-REDO - X-ray model optimization: rebuilding and refining all PDB models using up-to-date techniques PROSESS - Protein Structure Evaluation Suite & Server Resolution by Proxy, ResProx - protein model resolution-by-proxy VADAR - Volume, Area, Dihedral Angle Reporter NMR PSVS (Protein Structure Validation Server at the NESG) CING (Common Interface for NMR structure Generation) software ProCheck - stereochemical quality check for X-ray and NMR TALOS+ Software & Server (server for predicting protein backbone torsion angles from chemical shift) VADAR - Volume, Area, Dihedral Angle Reporter PROSESS - Protein Structure Evaluation Suite & Server ResProx - protein model resolution-by-proxy Cryo-EM EM Data Bank, for EM map deposition EMDB at the PDB, info on ftp download of maps CERES, rebuilds (and hopefully improves) cryo-EM models using the latest version of PHENIX Link references Further reading Structural biology Protein methods Protein structure
Structure validation
[ "Chemistry", "Biology" ]
3,324
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Structural biology", "Biochemistry", "Protein structure" ]
41,104,045
https://en.wikipedia.org/wiki/Urs%20Schreiber
Urs Schreiber (born 1974) is a mathematician specializing in the connection between mathematics and theoretical physics (especially string theory) and currently working as a researcher at New York University Abu Dhabi. He was previously a researcher at the Czech Academy of Sciences, Institute of Mathematics, Department for Algebra, Geometry and Mathematical Physics. Education Schreiber obtained his doctorate from the University of Duisburg-Essen in 2005 with a thesis supervised by Robert Graham and titled From Loop Space Mechanics to Nonabelian Strings. Work Schreiber's research fields include the mathematical foundation of quantum field theory. Schreiber is a co-creator of the nLab, a wiki for research mathematicians and physicists working in higher category theory. Selected writings With Hisham Sati, Mathematical Foundations of Quantum Field and Perturbative String Theory, Proceedings of Symposia in Pure Mathematics, volume 83 AMS (2011) Notes References Interview of John Baez and Urs Schreiber External links Home page in nLab Category theorists 21st-century German mathematicians Living people 1974 births Science bloggers 21st-century science writers String theorists University of Duisburg-Essen alumni
Urs Schreiber
[ "Mathematics" ]
240
[ "Category theorists", "Mathematical structures", "Category theory" ]
41,104,382
https://en.wikipedia.org/wiki/Frobenius%20category
In category theory, a branch of mathematics, a Frobenius category is an exact category with enough projectives and enough injectives, where the classes of projectives and injectives coincide. It is an analog of a Frobenius algebra. Properties The stable category of a Frobenius category is canonically a triangulated category. See also Dagger compact category Tannakian category References Monoidal categories
Frobenius category
[ "Mathematics" ]
90
[ "Monoidal categories", "Mathematical structures", "Category theory", "Category theory stubs" ]
41,104,429
https://en.wikipedia.org/wiki/Kenneth%20Brown%20%28mathematician%29
Kenneth Stephen Brown (born 1945) is a professor of mathematics emeritus who spent his career at Cornell University, working in category theory and cohomology theory as well as in buildings. Among other things, he is known for Ken Brown's lemma in the theory of model categories. He is also the author of the book Cohomology of Groups (Graduate Texts in Mathematics 87, Springer, 1982). Brown grew up in University City, Missouri, outside St. Louis, and graduated from University City High School in 1963. Using a National Merit Scholarship, he attended Stanford University, graduating from there in 1967 with an A.B. degree. Brown earned his Ph.D. in 1971 from the Massachusetts Institute of Technology, under the supervision of Daniel Quillen, with thesis Abstract Homotopy Theory and Generalized Sheaf Cohomology. Hired by Cornell University in 1971, Brown started as an assistant professor. Courses that Brown taught in his early years there included "Calculus [IV]", "Linear Algebra", and "Algebra and Number Theory". Brown became an associate professor in 1976 and a full professor in 1981. He served as chair of the mathematics department at Cornell beginning in 2002, and concluding in 2006. He also directed the Summer Math Institute in 2009. He was an invited speaker at the International Congress of Mathematicians in 1978 in Helsinki. In 2012 he became a fellow of the American Mathematical Society. Known for his teaching, at Cornell Brown received the college-wide Clark Teaching Award and also the Mathematics Department Senior Faculty Award. An October 2010 conference entitled "Approaches to Group Theory" was held in his honor at Cornell. It was organized by his colleagues and former students, and led by the mathematicians Susan Hermiller, John Meier, Karen Vogtmann, and David Webb. Brown became a professor emeritus at Cornell in 2014. Selected publications Cohomology of Groups, Graduate Texts in Mathematics 87, Springer-Verlag, New York, 1982 [reprinted including corrections, 1994] Buildings, Springer-Verlag, New York, 1989 [reprinted 1998] Buildings: Theory and Applications, Graduate Texts in Mathematics 248, Springer, New York, 2008 [co-author with Peter Abramenko] References External links – Faculty page in College of Arts & Sciences 1945 births Place of birth missing (living people) Living people 20th-century American mathematicians 21st-century American mathematicians People from St. Louis County, Missouri Stanford University alumni Massachusetts Institute of Technology School of Science alumni Cornell University faculty Fellows of the American Mathematical Society Category theorists American topologists
Kenneth Brown (mathematician)
[ "Mathematics" ]
511
[ "Category theorists", "Mathematical structures", "Category theory" ]
60,392,089
https://en.wikipedia.org/wiki/Cyclopentadienyl%20magnesium%20bromide
Cyclopentadienyl magnesium bromide is a chemical compound with the molecular formula . The molecule consists of a magnesium atom bonded to a bromine atom and a cyclopentadienyl group, a ring of five carbons each with one hydrogen atom. The compound is a Grignard reagent, a type of organometallic compound that features a magnesium atom bonded to a halogen atom and to a carbon atom of some organic functional group. This compound is of historic importance as the starting material for the first published synthesis of ferrocene by Peter Pauson and Thomas J. Kealy in 1951. Preparation The compound can be prepared by reacting cyclopentadiene with magnesium and bromoethane in anhydrous benzene. References Magnesium compounds Bromides Cyclopentadienyl complexes
Cyclopentadienyl magnesium bromide
[ "Chemistry" ]
170
[ "Bromides", "Organometallic chemistry", "Cyclopentadienyl complexes", "Salts" ]
60,401,671
https://en.wikipedia.org/wiki/Suspension%20culture
A cell suspension or suspension culture is a type of cell culture in which single cells or small aggregates of cells are allowed to function and multiply in an agitated growth medium, thus forming a suspension. Suspension culture is one of the two classical types of cell culture, the other being adherent culture. The history of suspension cell culture closely aligns with the history of cell culture overall, but differs in maintenance methods and commercial applications. The cells themselves can either be derived from homogenized tissue or from heterogeneous cell solutions. Suspension cell culture is commonly used to culture nonadhesive cell lines like hematopoietic cells, plant cells, and insect cells. While some cell lines are cultured in suspension, the majority of commercially available mammalian cell lines are adherent. Suspension cell cultures must be agitated to maintain cells in suspension, and may require specialized equipment (e.g. magnetic stir plates, orbital shakers, incubators) and flasks (e.g. culture flasks, spinner flasks, shaker flasks). These cultures need to be maintained with nutrient-containing media and cultured in a specific cell density range to avoid cell death. History The history of suspension cell culture is closely tied to the overall history of cell and tissue culture. In 1885, Wilhelm Roux laid the groundwork for future tissue culture by developing a saline buffer that was used to maintain living cells (chicken embryos) for a few days. Ross Granville Harrison in 1907 then developed in vitro cell culture techniques, including modifying the hanging drop technique for nerve cells and introducing aseptic technique to the culture process. Later in 1910, Montrose Thomas Burrows adapted Harrison's technique and collaborated with Alexis Carrel to establish multiple tissue cultures that could be maintained in vitro using fresh plasma combined with saline solutions. Carrel went on to develop the first known cell line, a line derived from chicken embryo heart which was maintained continuously for 34 years. Though the "immortality" of the cell line was later challenged by Leonard Hayflick, this was a major breakthrough and inspired others to pursue creating other cell lines. Notably in 1952, George Otto Gey and his assistant Mary Kubicek cultured the first human-derived immortalized cell line - HeLa. While the other cell lines were adherent, HeLa cells were able to be maintained in suspension. Methods and maintenance Isolating cells and starting a culture All primary cells (cells derived directly from a subject) must first be removed from a subject, isolated (using digestion enzymes), and suspended in media before being cultured. However, this does not mean that these cells are compatible with suspension culture, as most mammalian cells are adherent and need to attach to a surface to divide. White blood cells can be taken from a subject and cultured in suspension, since they naturally exist in suspension in blood. Adhesion of white blood cells in vivo is typically the result of an inflammatory immune response and requires specific cell-cell interactions that should not occur in a suspension of a single type of white blood cell. Immortalized mammalian cell lines (cells that are able to replicate indefinitely), plant cells, and insect cells can be obtained cryopreserved from manufacturers and used to start a suspension culture. To start a culture from cryopreserved cells, the cells must first be thawed and added to a flask or bioreactor containing media. 
Depending upon the cryoprotectant agent, the cells might need to be washed to avoid deleterious effects from the agent. Suspension cell culture maintenance for laboratories Suspension cell cultures are similar to adherent cultures in a number of ways. Both require specialized nutrient-containing media, containers that allow for gas transfer, aseptic conditions to avoid contamination, and frequent passaging to prevent overcrowding of cells. However, even within these similarities there are a few key differences between these culture methods. For example, though both adherent and suspension cell cultures can be maintained in standard flasks such as the T-75 tissue culture flask, adherent cultures benefit from flat flasks with a large surface area (to promote cell adhesion), whereas suspension cultures must be agitated; otherwise the cells will settle to the bottom of the flask, greatly impairing their access to nutrients and oxygen and eventually resulting in cell death. For this reason, specialized flasks (including the spinner flask and shaker flask, discussed below) have been developed to agitate media and keep the cells in suspension. However, the agitation of media subjects the cells to shear forces, which can stress the cells and negatively impact growth. Although both adherent and suspension cell cultures require media, media used in suspension culture may contain a surfactant to protect cells from shear forces, in addition to the amino acids, vitamins and salt solution contained in culture media such as DMEM. Spinner flasks Spinner flasks, which are used for suspension cultures, contain a magnetic spinner bar which circulates the media throughout the flask and keeps cells in suspension. Spinner flasks contain one central capped opening flanked by two protruding arms which are also capped and allow for additional gas exchange. The magnetic spinner bar itself is typically suspended from a rod attached to the central cap so that it maximizes media circulation in the cell suspension. When culturing cells, the spinner flask containing cells is placed on a magnetic stir plate inside an incubator, and the spinning parameters must be adjusted carefully to avoid killing cells with shear forces. Shaker flasks Shaker flasks are also used for suspension cultures, and appear similar to typical Erlenmeyer flasks but have a semi-permeable lid to allow for gas exchange. During suspension cell culturing, shaker flasks are loaded with cells and the appropriate media before they are placed on an orbital shaker. To optimize cell culture proliferation, the revolutions per minute of the orbital shaker must be adjusted within an acceptable range depending on the cells and media used. The media must be kept stirring, but without disturbing the cells so much that they suffer excessive stress. Shaker flasks are often used for fermentation cultures with microorganisms such as yeast. Passaging (subculturing) cells Passaging, or subculturing, suspension cell cultures is more straightforward than passaging adherent cells. While adherent cells require initial processing with a digestion enzyme to remove them from the culture flask surface, suspension cells are floating freely in media. A sample from the culture can then be taken and analyzed to determine the ratio of living to dead cells (using a stain such as trypan blue) and the total concentration of cells in the flask (using a hemocytometer). 
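The counting step just described reduces to standard hemocytometer arithmetic: each large counting square holds 0.1 µL, so cells/mL = mean count per square × dilution factor × 10^4. A minimal sketch (all counts and volumes are hypothetical):

```python
def hemocytometer(viable_per_square, dead_per_square, dilution_factor):
    """Each large hemocytometer square holds 0.1 uL, hence the 1e4 factor."""
    mean = lambda counts: sum(counts) / len(counts)
    viable = mean(viable_per_square) * dilution_factor * 1e4  # cells/mL
    dead = mean(dead_per_square) * dilution_factor * 1e4
    return viable, viable / (viable + dead)

# Hypothetical trypan-blue counts from four squares of a 1:2 diluted sample.
viable_per_ml, viability = hemocytometer([52, 48, 55, 49], [3, 2, 4, 3], 2)
print(f"{viable_per_ml:.2e} viable cells/mL, {viability:.0%} viable")

# Volume of culture needed to seed a fresh flask at a target density.
target_density, flask_volume_ml = 2e5, 30.0  # hypothetical seeding parameters
print(f"transfer {target_density * flask_volume_ml / viable_per_ml:.1f} mL")
```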
Using this information, a portion of the current suspension culture is transferred to a fresh flask and supplemented with media. The passage number should be recorded, particularly if the cells are primary and not immortalized, as primary cell lines will eventually undergo senescence. Suspension cells are often passaged outright without changing the media. In order to change the media for a suspension culture, all cells from the current container should be removed and centrifuged into a pellet. The spent media is then removed from the centrifuged sample, and the flask is refilled with fresh media before re-adding the cells to the flask. Media changes and subculturing are important to maintain cell lines, since cells consume the nutrients in media as they expand. Cells will also grow exponentially until the environment becomes inhospitable due to lack of nutrients, extreme pH, or lack of space to grow. Commercial applications of suspension cell culture Unlike adherent cultures, which are limited by the surface area provided for them to expand on, suspension cultures are limited by the volume of their container. This means suspension cells can be grown in much larger quantities in a given vessel, and they are preferred when using cells to make products including proteins, antibodies and metabolites, or simply to produce a high volume of cells. However, there are far fewer mammalian suspension cell lines than mammalian adhesive cell lines. Most large-scale suspension culture involves non-mammalian cells and takes place in bioreactors. Some examples of suspension cell culture: Antibody production by hybridomas Fermentation cultures for beer Therapeutic protein production by CHO cells Secondary metabolite production for drugs in plant cells Recombinant protein production in insect cells Bulk protein production for enzyme and vaccine research Producing cell suspension cultures to support oncolytic adenovirus used in cancer immunotherapy List of suspension cell lines See also Cell culture Adherent culture Bio-MEMS Cell adhesion Cell adhesion molecule References Cell biology Cell culture techniques
Suspension culture
[ "Chemistry", "Biology" ]
1,798
[ "Biochemistry methods", "Cell culture techniques", "Cell biology" ]
52,684,573
https://en.wikipedia.org/wiki/OMNeT%2B%2B
OMNeT++ (Objective Modular Network Testbed in C++) is a modular, component-based C++ simulation library and framework, primarily for building network simulators. OMNeT++ can be used for free for non-commercial simulations, such as at academic institutions and for teaching. OMNEST is an extended version of OMNeT++ for commercial use. OMNeT++ itself is a simulation framework without models for network protocols like IP or HTTP. The main computer network simulation models are available in several external frameworks. The most commonly used one is INET, which offers a variety of models for all kinds of network protocols and technologies, such as IPv6 and BGP. INET also offers a set of mobility models to simulate node movement in simulations. The INET models are licensed under the LGPL or GPL. NED (NEtwork Description) is the topology description language of OMNeT++. To manage large-scale simulations and reduce the time they take to carry out, additional tools have been developed, for example based on Python. See also MLDesigner QualNet NEST (software) References Computer networking Computer network analysis Simulation software Telecommunications engineering
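At its core, a framework like OMNeT++ is built around a discrete-event simulation loop: events are kept in a time-ordered queue, and executing one event may schedule future ones. The toy sketch below illustrates only that general principle in Python; it is not the OMNeT++ API, and the node names and the 0.3 s link delay are hypothetical.

```python
import heapq
import itertools

_tie = itertools.count()  # tie-breaker so the heap never compares handlers

def schedule(events, time, handler, payload):
    heapq.heappush(events, (time, next(_tie), handler, payload))

def run(events, until=10.0):
    """Minimal discrete-event loop: repeatedly pop the earliest pending
    event and let its handler schedule follow-up events."""
    while events:
        time, _, handler, payload = heapq.heappop(events)
        if time > until:
            break
        handler(events, time, payload)

def send(events, time, payload):
    print(f"{time:.1f}s  node A sends {payload!r}")
    schedule(events, time + 0.3, receive, payload)  # hypothetical link delay

def receive(events, time, payload):
    print(f"{time:.1f}s  node B receives {payload!r}")

events = []
schedule(events, 0.0, send, "ping")
schedule(events, 1.0, send, "ping")
run(events)
```

In OMNeT++ itself, the modules, their connections and parameters would instead be declared in NED, with the event handlers written as C++ module classes.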
OMNeT++
[ "Technology", "Engineering" ]
239
[ "Computer networking", "Telecommunications engineering", "Computer engineering", "Computer science", "Electrical engineering" ]
32,701,959
https://en.wikipedia.org/wiki/Ion%20layer%20gas%20reaction
Ion layer gas reaction (ILGAR®) is a non-vacuum, thin-film deposition technique developed and patented by the group of Professor Dr. Christian-Herbert Fischer at the Helmholtz-Zentrum Berlin for Materials and Energy in Berlin, Germany. It is a sequential and cyclic process that enables the deposition of semiconductor thin films, mainly for (although not restricted to) photovoltaic applications, especially chalcopyrite absorber layers and buffer layers. The ILGAR technique was named a German High Tech Champion 2011 by the Fraunhofer Society. ILGAR is a chemical process that allows for the deposition of layers in a homogeneous, adherent and mechanically stable form without using vacuum or high temperatures. It is a sequential and cyclic process which can be automated and scaled up. It basically consists of the following steps: Application of a precursor solution on a substrate by dipping (dip-ILGAR) or spraying (spray-ILGAR). Reaction of the dry solid precursor layer with a hydrogen chalcogenide gas. These steps are repeated until the desired layer thickness is obtained. In the case of spray-ILGAR, the spray deposition of the ionic layer is performed using equipment similar to that of atmospheric pressure aerosol-assisted chemical vapour deposition or spray pyrolysis. Spray pyrolysis can be regarded as a simplified version of the spray-ILGAR process, where there is no reaction of the precursor layer with a reactant gas. The cyclical nature of this process makes it similar to atomic layer deposition (ALD), which is also used for buffer layer deposition. Applications The applications of the ILGAR and spray-pyrolysis techniques at the Helmholtz-Zentrum Berlin lie mainly in the field of chalcopyrite thin-film solar cells, although these techniques can be used for other applications involving substrate coating with thin films. The following list summarizes the applications of these techniques: Buffer layers for chalcopyrite-based thin-film devices: Replacement of the standard CdS by ecologically more favorable materials (In2S3, etc.) deposited using spray-ILGAR on different absorber materials. Nano-dot passivation layers: nanodots can be deposited in a controlled way using the spray-ILGAR technique. ZnS nanodots have been used as passivation layers-point contact buffer layers in chalcopyrite-based thin-film solar cells. These dots (5–10 nm in diameter) act as a passivation layer at the absorber-buffer interface, which results in some cases in an efficiency gain of up to 2% absolute. Al2O3 barrier layers: Thin-film solar modules on metallic substrates like steel foil need a barrier layer between substrate and Mo back-contact for electrical insulation and to prevent a detrimental iron diffusion into the absorber. Also uncontrolled sodium diffusion from the glass substrate can be stopped by a barrier before intentionally doping the absorber with the desired amount of sodium. Al2O3 layers deposited by spray-pyrolysis result in fully functioning barrier layers for the cases stated above. ZnO window layers: high-quality i-ZnO window layers have been grown by spray pyrolysis and constitute a feasible replacement of the standard sputtered i-ZnO layers. Chalcopyrite absorber layers: Spray-ILGAR is a low-temperature technique that enables the growth of chalcopyrite absorber layers such as CuInS2. Spray-ILGAR CuInS2 layers can be used as absorbers in thin-film solar cells. 
Surface Coating: The surface coating of ceramics, metal, glass and even plastics for catalytic purposes as well as anti-corrosion, antistatic or mechanical protection is feasible using the ILGAR technique. ILGAR as a replacement for chemical bath deposition The advantage of ILGAR compared to chemical bath deposition (CBD) lies in the fact that it is easier to deposit high quality precursor layers and convert them to the chalcogenide than to directly deposit chalcogenide thin films. It is also possible to grow films with graded properties or compositions by changing the precursors or the process parameters. Furthermore, ILGAR is an in-line process whereas chemical bath deposition is intrinsically a batch process. References Semiconductor device fabrication Thin film deposition Coatings
Ion layer gas reaction
[ "Chemistry", "Materials_science", "Mathematics" ]
866
[ "Microtechnology", "Thin film deposition", "Coatings", "Thin films", "Semiconductor device fabrication", "Planes (geometry)", "Solid state engineering" ]
32,702,686
https://en.wikipedia.org/wiki/Wellman%20Group
The Wellman Group is a group of manufacturing companies that make boilers and advanced defence equipment. It is one of the main boilermakers in the UK, particularly for large-scale industrial applications, having taken over many well-known boiler companies. History The main company began as the Wellman Smith Owen Engineering Corporation, a large conglomerate British engineering company. It derived from the English company of Samuel T. Wellman, a steel industry pioneer. It was based at Parnell House on Wilton Road in London, next to London Victoria station. It supplied furnaces to the British steel industry. The site at Oldbury has been making boilers since 1862. In August 1965 it split into several subsidiaries, including Wellman Machines (at Darlaston), Wellman Incandescent Furnace Company (at Darlaston, Staffordshire), Wellman Steelworks Engineering, and Wellman Incandescent International. Wellman became a private company in December 1997, when bought by Alchemy Partners for £82 million, by setting up a nominal transitional consortium company called Newmall (an anagram of Wellman). In 2003 it formed a commercial alliance with Loos International, a German boiler maker. In August 2005, Alchemy split the company into two – Newmall, for the US subsidiaries, and Really Newmall for the other subsidiaries; this became Wellman Group Ltd, owned by Kwikpower International, in the Kwikpower Wellman division. In May 2009 the company formed an alliance with Wulff Energy Technologies GmbH of Husum to form Wellman Wulff. Robey of Lincoln Robey of Lincoln was an agricultural firm that went into making boilers in 1870. It was bought by Babcock International in July 1985, when Robey had a turnover of £7 million. Structure It is headquartered on the A457 on the western side of Oldbury, next to the Birmingham Canal. It has three subsidiaries. Wellman Hunt-Graham Wellman Graham merged with Hunt Thermal Engineering Ltd to form Wellman Hunt-Graham in 2012. It works in the heat transfer industry. It is the UK's largest manufacturer of shell and tube heat exchangers. Wellman Graham began in 1956 as Heat Transfer Ltd. The company changed its name to Graham Manufacturing Ltd in 1977, was sold to Wellman Group in 1995 and renamed Wellman Graham Ltd, and was based in Gloucester before moving its design and manufacturing facility to Oldbury in the West Midlands. Wellman Hunt Graham was acquired by Corac Group plc in 2012 and renamed Hunt Graham Ltd, and in 2013 changed its name to Hunt Thermal Technologies. Corac Group plc was renamed TP Group plc in 2015 and Hunt Thermal Technologies has since been renamed TPG Engineering. Wellman Thermal Services This makes industrial furnaces. A division of the company, Wellman Process Engineering, makes evaporators and crystallisers. The company makes boilers for combined heat and power schemes. Wellman-Robey boilers are made at Oldbury. Wellman Defence This started as the research division of John Brown Engineers and Constructors Ltd in 1957. It became part of Wellman Group in 1996. Its main significance is that it developed the equipment for purified air that allows the Royal Navy's nuclear submarines to be submerged for months at a time – Submarine Atmosphere Control. This uses an electrolyser. Carbon dioxide from the submarine reacts with hydrogen from the electrolyser and is removed. It also supplies oxygen generation equipment to other countries, such as France for the new Barracuda-class submarine. 
Wellman Defence was acquired by Corac Group plc in 2012 and renamed Atmosphere Control International. Corac Group plc was renamed TP Group plc in 2015 and Atmosphere Control International has since been renamed TPG Maritime. Products Boilers – 100 kW to 35 MW Steam boilers Water heat recovery boilers Furnaces – now part of the Almor Group (www.wellman-furnaces.com) Combined heat and power schemes Heat exchangers Air purifiers for nuclear submarines Installations North British Distillery, Edinburgh References External links Wellman Defence Wellman Hunt Graham Wellman Robey Boilers EPCB Boiler Website Graces Guide Profile Companies based in the West Midlands (county) Boilers Industrial furnaces Heat exchangers Heating, ventilation, and air conditioning companies Defence companies of the United Kingdom Engineering companies of the United Kingdom Conglomerate companies established in 1919 1919 establishments in England Manufacturing companies established in 1919
Wellman Group
[ "Chemistry", "Engineering" ]
901
[ "Chemical equipment", "Metallurgical processes", "Industrial furnaces", "Heat exchangers", "Boilers", "Pressure vessels" ]
32,703,814
https://en.wikipedia.org/wiki/Chirality
Chirality is a property of asymmetry important in several branches of science. The word chirality is derived from the Greek χείρ (kheir), "hand", a familiar chiral object. An object or a system is chiral if it is distinguishable from its mirror image; that is, it cannot be superposed (not to be confused with superimposed) onto it. Conversely, a mirror image of an achiral object, such as a sphere, cannot be distinguished from the object. A chiral object and its mirror image are called enantiomorphs (Greek, "opposite forms") or, when referring to molecules, enantiomers. A non-chiral object is called achiral (sometimes also amphichiral) and can be superposed on its mirror image. The term was first used by Lord Kelvin in 1893 in the second Robert Boyle Lecture at the Oxford University Junior Scientific Club, which was published in 1894: "I call any geometrical figure, or group of points, 'chiral', and say that it has chirality, if its image in a plane mirror, ideally realized, cannot be brought to coincide with itself." Human hands are perhaps the most recognized example of chirality. The left hand is a non-superposable mirror image of the right hand; no matter how the two hands are oriented, it is impossible for all the major features of both hands to coincide across all axes. This difference in symmetry becomes obvious if someone attempts to shake the right hand of a person using their left hand, or if a left-handed glove is placed on a right hand. In mathematics, chirality is the property of a figure that is not identical to its mirror image. Mathematics In mathematics, a figure is chiral (and said to have chirality) if it cannot be mapped to its mirror image by rotations and translations alone. For example, a right shoe is different from a left shoe, and clockwise is different from anticlockwise. See chirality (mathematics) for a full mathematical definition. A chiral object and its mirror image are said to be enantiomorphs. The word enantiomorph stems from the Greek ἐναντίος (enantios) 'opposite' + μορφή (morphē) 'form'. A non-chiral figure is called achiral or amphichiral. The helix (and by extension a spun string, a screw, a propeller, etc.) and the Möbius strip are chiral two-dimensional objects in three-dimensional ambient space. The J, L, S and Z-shaped tetrominoes of the popular video game Tetris also exhibit chirality, but only in a two-dimensional space. Many other familiar objects exhibit the same chiral symmetry of the human body, such as gloves, glasses (sometimes), and shoes. A similar notion of chirality is considered in knot theory, as explained below. Some chiral three-dimensional objects, such as the helix, can be assigned a right or left handedness, according to the right-hand rule. Geometry In geometry, a figure is achiral if and only if its symmetry group contains at least one orientation-reversing isometry. In two dimensions, every figure that possesses an axis of symmetry is achiral, and it can be shown that every bounded achiral figure must have an axis of symmetry. In three dimensions, every figure that possesses a plane of symmetry or a center of symmetry is achiral. There are, however, achiral figures lacking both plane and center of symmetry. In terms of point groups, all chiral figures lack an improper axis of rotation (Sn). This means that they cannot contain a center of inversion (i) or a mirror plane (σ). Only figures with a point group designation of C1, Cn, Dn, T, O, or I can be chiral. Knot theory A knot is called achiral if it can be continuously deformed into its mirror image, otherwise it is called chiral. For example, the unknot and the figure-eight knot are achiral, whereas the trefoil knot is chiral. 
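The mathematical definition above lends itself to direct computation: a finite set of labeled points is achiral exactly when its mirror image can be brought into coincidence with it by a rotation and a translation. A minimal sketch using the standard Kabsch superposition (the point coordinates are hypothetical):

```python
import numpy as np

def aligned_rmsd(P, Q):
    """RMSD of two labeled point sets after the optimal rigid superposition
    (rotation + translation only), via the Kabsch/SVD closed form."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # force a proper rotation
    S[-1] *= d
    msd = (np.sum(P**2) + np.sum(Q**2) - 2.0 * np.sum(S)) / len(P)
    return float(np.sqrt(max(msd, 0.0)))

def is_chiral(points, tol=1e-6):
    """Chiral if no rotation+translation superposes the set on its mirror."""
    mirror = points * np.array([-1.0, 1.0, 1.0])  # reflect through the x=0 plane
    return aligned_rmsd(points, mirror) > tol

# Four generic points, mimicking a carbon with four distinct substituents.
tetra = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, -2.0],
                  [-1.0, 1.0, -3.0], [-1.0, -1.0, 4.0]])
flat = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(is_chiral(tetra), is_chiral(flat))  # True False (planar sets are achiral)
```

Because every reflection differs from the chosen one only by a rotation, testing a single mirror plane suffices.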
Physics In physics, chirality may be found in the spin of a particle, where the handedness of the object is determined by the direction in which the particle spins. Not to be confused with helicity, which is the projection of the spin along the linear momentum of a subatomic particle, chirality is an intrinsic quantum mechanical property, like spin. Although both chirality and helicity can have left-handed or right-handed properties, only in the massless case are they identical. In particular, for a massless particle the helicity is the same as the chirality, while for an antiparticle they have opposite sign. The handedness in both chirality and helicity relates the rotation of a particle, as it proceeds in linear motion, to the human hands. The thumb of the hand points towards the direction of linear motion whilst the fingers curl into the palm, representing the direction of rotation of the particle (i.e. clockwise and counterclockwise). Depending on the linear and rotational motion, the particle can be defined by either left-handedness or right-handedness. A symmetry transformation between the two is called a parity transformation. Invariance of a Dirac fermion under parity transformation is called chiral symmetry. Electromagnetism Electromagnetic waves can have handedness associated with their polarization. Polarization of an electromagnetic wave is the property that describes the orientation, i.e., the time-varying direction and amplitude, of the electric field vector. For example, the electric field vectors of left-handed or right-handed circularly polarized waves form helices of opposite handedness in space. Circularly polarized waves of opposite handedness propagate through chiral media at different speeds (circular birefringence) and with different losses (circular dichroism). Both phenomena are jointly known as optical activity. Circular birefringence causes rotation of the polarization state of electromagnetic waves in chiral media and can cause a negative index of refraction for waves of one handedness when the effect is sufficiently large. While optical activity occurs in structures that are chiral in three dimensions (such as helices), the concept of chirality can also be applied in two dimensions. 2D-chiral patterns, such as flat spirals, cannot be superposed with their mirror image by translation or rotation in two-dimensional space (a plane). 2D chirality is associated with directionally asymmetric transmission (reflection and absorption) of circularly polarized waves. 2D-chiral materials, which are also anisotropic and lossy, exhibit different total transmission (reflection and absorption) levels for the same circularly polarized wave incident on their front and back. The asymmetric transmission phenomenon arises from different, e.g. left-to-right, circular polarization conversion efficiencies for opposite propagation directions of the incident wave, and therefore the effect is referred to as circular conversion dichroism. Just as the twist of a 2D-chiral pattern appears reversed for opposite directions of observation, 2D-chiral materials have interchanged properties for left-handed and right-handed circularly polarized waves that are incident on their front and back. In particular, left-handed and right-handed circularly polarized waves experience opposite directional transmission (reflection and absorption) asymmetries. 
While optical activity is associated with 3D chirality and circular conversion dichroism is associated with 2D chirality, both effects have also been observed in structures that are not chiral by themselves. For the observation of these chiral electromagnetic effects, chirality does not have to be an intrinsic property of the material that interacts with the electromagnetic wave. Instead, both effects can also occur when the propagation direction of the electromagnetic wave together with the structure of an (achiral) material form a chiral experimental arrangement. This case, where the mutual arrangement of achiral components forms a chiral (experimental) arrangement, is known as extrinsic chirality. Chiral mirrors are a class of metamaterials that reflect circularly polarized light of a certain helicity in a handedness-preserving manner, while absorbing circular polarization of the opposite handedness. However, most absorbing chiral mirrors operate only in a narrow frequency band, as limited by the causality principle. Employing a different design methodology that allows undesired waves to pass through instead of being absorbed, chiral mirrors are able to show good broadband performance. Chemistry A chiral molecule is a type of molecule that has a non-superposable mirror image. The feature that is most often the cause of chirality in molecules is the presence of an asymmetric carbon atom. The term "chiral" in general is used to describe an object that is non-superposable on its mirror image. In chemistry, chirality usually refers to molecules. Two mirror images of a chiral molecule are called enantiomers or optical isomers. Pairs of enantiomers are often designated as "right-" or "left-handed" or, if they have no bias, "achiral". As polarized light passes through a chiral molecule, the plane of polarization, when viewed along the axis toward the source, will be rotated clockwise (to the right) or anticlockwise (to the left). A right-handed rotation is dextrorotatory (d); that to the left is levorotatory (l). The d- and l-isomers are the same compound but are called enantiomers. An equimolar mixture of the two optical isomers, which is called a racemic mixture, will produce no net rotation of polarized light as it passes through. Left-handed molecules have l- prefixed to their names; d- is prefixed to right-handed molecules. However, this d- and l- notation for distinguishing enantiomers says nothing about the actual spatial arrangement of the ligands/substituents around the stereogenic center, which is defined as configuration. Another nomenclature system employed to specify configuration is the Fischer convention. This is also referred to as the D- and L-system. Here the relative configuration is assigned with reference to D-(+)-glyceraldehyde and L-(−)-glyceraldehyde, taken as standards. The Fischer convention is widely used in sugar chemistry and for α-amino acids. Due to the drawbacks of the Fischer convention, it has been almost entirely replaced by the Cahn-Ingold-Prelog convention, also known as the sequence rule or R and S nomenclature. This was further extended to assign absolute configuration to cis-trans isomers with the E-Z notation. Molecular chirality is of interest because of its application to stereochemistry in inorganic chemistry, organic chemistry, physical chemistry, biochemistry, and supramolecular chemistry. 
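The optical rotation described earlier in this section obeys a simple proportionality, Biot's law: α = [α] · l · c, where [α] is the specific rotation of the compound, l the path length in decimeters and c the concentration in g/mL; the net rotation of a mixture is the sum over its enantiomers. A small sketch (the specific-rotation value is hypothetical):

```python
def observed_rotation(specific_rotation, path_dm, conc_g_per_ml):
    """Biot's law: alpha = [alpha] * l * c, in degrees."""
    return specific_rotation * path_dm * conc_g_per_ml

d_rot, l_rot = +52.0, -52.0   # hypothetical enantiomer pair, deg mL / (g dm)
path = 1.0                    # 1 dm polarimeter tube

# Racemic mixture: equal concentrations, net rotation cancels to zero.
net = observed_rotation(d_rot, path, 0.010) + observed_rotation(l_rot, path, 0.010)
print(f"racemic: {net:+.3f} deg")

# 80:20 mixture at the same total concentration (60% enantiomeric excess).
net = observed_rotation(d_rot, path, 0.016) + observed_rotation(l_rot, path, 0.004)
print(f"80:20  : {net:+.3f} deg")  # 0.6 * 52 * 0.02 = +0.624 deg
```

Running a polarimeter reading backwards through the same arithmetic is how enantiomeric excess is estimated in practice.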
More recent developments in chiral chemistry include the development of chiral inorganic nanoparticles that may have a tetrahedral geometry similar to that of the sp3 carbon centers traditionally associated with chiral compounds, but at a larger scale. Helical and other symmetries of chiral nanomaterials have also been obtained. Biology All of the known life-forms show specific chiral properties in chemical structures as well as in macroscopic anatomy, development and behavior. In any specific organism or evolutionarily related set thereof, individual compounds, organs, or behavior are found in the same single enantiomorphic form. Deviation (having the opposite form) can be found in a small number of chemical compounds, or in a certain organ or behavior, but that variation strictly depends upon the genetic make-up of the organism. At the chemical level (molecular scale), biological systems show extreme stereospecificity in synthesis, uptake, sensing and metabolic processing. A living system usually deals with two enantiomers of the same compound in drastically different ways. In biology, homochirality is a common property of amino acids and carbohydrates. The chiral protein-making amino acids, which are translated through the ribosome from genetic coding, occur in the L form. However, D-amino acids are also found in nature. The monosaccharides (carbohydrate units) are commonly found in the D-configuration. The DNA double helix is chiral (as any kind of helix is chiral), and the B-form of DNA shows a right-handed turn. Sometimes, when two enantiomers of a compound are found in organisms, they significantly differ in their taste, smell and other biological actions. For example, (+)-carvone is responsible for the smell of caraway seed oil, whereas (−)-carvone is responsible for the smell of spearmint oil. However, it is a commonly held misconception that (+)-limonene is found in oranges (causing its smell), and (−)-limonene is found in lemons (causing its smell). In 2021, after rigorous experimentation, it was found that all citrus fruits contain only (+)-limonene and the odor difference is because of other contributing factors. Also for artificial compounds, including medicines, the two enantiomers of a chiral drug sometimes show a remarkable difference in their biological actions. Darvon (dextropropoxyphene) is a painkiller, whereas its enantiomer, Novrad (levopropoxyphene), is an anti-cough agent. In the case of penicillamine, the (S)-isomer is used in the treatment of primary chronic arthritis, whereas the (R)-isomer has no therapeutic effect, as well as being highly toxic. In some cases, the less therapeutically active enantiomer can cause side effects. For example, (S)-naproxen is an analgesic but the (R)-isomer causes renal problems. In situations where one of the enantiomers of a racemic drug is active and the other has an undesirable or toxic effect, one may switch from the racemate to a single-enantiomer drug for better therapeutic value. Such a switch from a racemic drug to an enantiopure drug is called a chiral switch. The naturally occurring plant form of alpha-tocopherol (vitamin E) is RRR-α-tocopherol, whereas the synthetic form (all-racemic vitamin E, or dl-tocopherol) is equal parts of the stereoisomers RRR, RRS, RSS, SSS, RSR, SRS, SRR, and SSR, with progressively decreasing biological equivalency, so that 1.36 mg of dl-tocopherol is considered equivalent to 1.0 mg of d-tocopherol. 
Macroscopic examples of chirality are found in the plant kingdom, the animal kingdom and all other groups of organisms. A simple example is the coiling direction of any climber plant, which can grow to form either a left- or right-handed helix. In anatomy, chirality is found in the imperfect mirror-image symmetry of many kinds of animal bodies. Organisms such as gastropods exhibit chirality in their coiled shells, resulting in an asymmetrical appearance. Over 90% of gastropod species have dextral (right-handed) shells in their coiling, but a small minority of species and genera are virtually always sinistral (left-handed). A very few species (for example Amphidromus perversus) show an equal mixture of dextral and sinistral individuals. In humans, chirality (also referred to as handedness or laterality) is an attribute of humans defined by their unequal distribution of fine motor skill between the left and right hands. An individual who is more dexterous with the right hand is called right-handed, and one who is more skilled with the left is said to be left-handed. Chirality is also seen in the study of facial asymmetry and is known as aurofacial asymmetry. According to the Axial Twist theory, vertebrate animals develop into a left-handed chirality. Due to this, the brain is turned around and the heart and bowels are turned by 90°. In the case of the health condition situs inversus totalis, in which all the internal organs are flipped horizontally (i.e. the heart placed slightly to the right instead of the left), chirality poses some problems should the patient require a liver or heart transplant, as these organs are chiral, meaning that the blood vessels which supply them would need to be rearranged should a normal, non-situs-inversus (situs solitus) organ be required. In the monocot bloodroot family, the species of the genera Wachendorfia and Barberetta have individuals whose style points either to the right or to the left, with both morphs appearing within the same populations. This is thought to increase outcrossing and so boost genetic diversity, which in turn may help them survive in a changing environment. Remarkably, the related genus Dilatris also has chirally dimorphic flowers, but here both morphs occur on the same plant. In flatfish, the summer flounder or fluke are left-eyed, while halibut are right-eyed. Resources and Research Journal Chirality – a scientific journal focused on chirality in chemistry and biochemistry with respect to biological, chemical, materials, pharmacological, spectroscopic and physical properties. Selected Books Creutz, Michael (2018). From Quarks to Pions: Chiral Symmetry and Confinement. World Scientific. Wolf, Christian (2008). Dynamic Stereochemistry of Chiral Compounds: Principles and Applications. Cambridge: RSC Publishing. Beesley, Thomas E.; Scott, Raymond P. W. (1998). Chiral Chromatography. Separation Science Series. Chichester: Wiley. See also Handedness Chiral drugs Chiral switch Chiral inversion Metachirality Orientation (space) Sinistral and dextral Tendril perversion Chirality (physics) References External links Asymmetry Biochemistry Stereochemistry Pharmacology Origin of life 1890s neologisms
Chirality
[ "Physics", "Chemistry", "Biology" ]
3,828
[ "Pharmacology", "Origin of life", "Biochemistry", "Stereochemistry", "Chirality", "Space", "Medicinal chemistry", "Asymmetry", "nan", "Spacetime", "Symmetry", "Biological hypotheses" ]
32,704,529
https://en.wikipedia.org/wiki/Ping%20test%20%28engineering%29
A ping test is a physical test to determine the natural frequency of an object or assembly. The test consists of instrumenting the object or assembly with measuring devices and then tapping it with another metallic object (usually a hammer). Once struck, the system vibrates freely at its natural frequency. The ping test is used on assemblies and objects where vibration can be an issue. See also Ping Ping test References Materials testing Mechanical tests
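In practice the measurement reduces to finding the dominant peak in the frequency spectrum of the recorded ring-down. A minimal sketch, with a synthetic decaying sinusoid standing in for real accelerometer data:

```python
import numpy as np

def natural_frequency(signal, sample_rate):
    """Estimate the natural frequency (Hz) as the dominant FFT peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin

# Synthetic ring-down: a 440 Hz mode with light exponential damping.
fs = 10_000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
ping = np.exp(-5.0 * t) * np.sin(2.0 * np.pi * 440.0 * t)
print(f"estimated natural frequency: {natural_frequency(ping, fs):.1f} Hz")
```

A real assembly shows several such peaks, one per vibration mode; a ping test is usually concerned with the lowest ones.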
Ping test (engineering)
[ "Materials_science", "Engineering" ]
87
[ "Mechanical tests", "Materials testing", "Materials science", "Mechanical engineering" ]
32,708,967
https://en.wikipedia.org/wiki/Minimal%20coupling
In analytical mechanics and quantum field theory, minimal coupling refers to a coupling between fields which involves only the charge distribution and not higher multipole moments of the charge distribution. This minimal coupling is in contrast to, for example, Pauli coupling, which includes the magnetic moment of an electron directly in the Lagrangian. Electrodynamics In electrodynamics, minimal coupling is adequate to account for all electromagnetic interactions. Higher moments of particles are consequences of minimal coupling and non-zero spin. Non-relativistic charged particle in an electromagnetic field In Cartesian coordinates, the Lagrangian of a non-relativistic classical particle in an electromagnetic field is (in SI units): $$\mathcal{L} = \sum_i \tfrac{1}{2} m \dot{x}_i^2 + \sum_i q \dot{x}_i A_i - q \varphi,$$ where $q$ is the electric charge of the particle, $\varphi$ is the electric scalar potential, and the $A_i$, $i = 1, 2, 3$, are the components of the magnetic vector potential, which may all explicitly depend on the $x_i$ and $t$. This Lagrangian, combined with the Euler–Lagrange equation, produces the Lorentz force law $$m \ddot{\mathbf{x}} = q \mathbf{E} + q \dot{\mathbf{x}} \times \mathbf{B},$$ and is called minimal coupling. Note that the values of the scalar potential and vector potential change during a gauge transformation, and the Lagrangian itself picks up extra terms as well, but the extra terms in the Lagrangian add up to a total time derivative of a scalar function and therefore still produce the same Euler–Lagrange equation. The canonical momenta are given by $$p_i = \frac{\partial \mathcal{L}}{\partial \dot{x}_i} = m \dot{x}_i + q A_i.$$ Note that canonical momenta are not gauge invariant, and are not physically measurable. However, the kinetic momenta $$m \dot{x}_i = p_i - q A_i$$ are gauge invariant and physically measurable. The Hamiltonian, as the Legendre transformation of the Lagrangian, is therefore $$\mathcal{H} = \sum_i \dot{x}_i p_i - \mathcal{L} = \frac{\left(\mathbf{p} - q \mathbf{A}\right)^2}{2m} + q \varphi.$$ This equation is used frequently in quantum mechanics. Under a gauge transformation $$\mathbf{A} \to \mathbf{A} + \nabla f, \qquad \varphi \to \varphi - \frac{\partial f}{\partial t},$$ where $f(\mathbf{r}, t)$ is any scalar function of space and time, the aforementioned Lagrangian, canonical momenta and Hamiltonian transform like $$\mathcal{L} \to \mathcal{L} + q \frac{df}{dt}, \qquad \mathbf{p} \to \mathbf{p} + q \nabla f, \qquad \mathcal{H} \to \mathcal{H} - q \frac{\partial f}{\partial t},$$ which still produces the same Hamilton's equations of motion. In quantum mechanics, the wave function will also undergo a local U(1) group transformation during the gauge transformation, which implies that all physical results must be invariant under local U(1) transformations. Relativistic charged particle in an electromagnetic field The relativistic Lagrangian for a particle (rest mass $m$ and charge $q$) is given by: $$\mathcal{L} = -mc^2 \sqrt{1 - \frac{\dot{\mathbf{x}}^2}{c^2}} + q \dot{\mathbf{x}} \cdot \mathbf{A}(\mathbf{x}, t) - q \varphi(\mathbf{x}, t).$$ Thus the particle's canonical momentum is $$\mathbf{p} = \frac{\partial \mathcal{L}}{\partial \dot{\mathbf{x}}} = \frac{m \dot{\mathbf{x}}}{\sqrt{1 - \frac{\dot{\mathbf{x}}^2}{c^2}}} + q \mathbf{A},$$ that is, the sum of the kinetic momentum and the potential momentum. Solving for the velocity, we get $$\dot{\mathbf{x}} = \frac{c \left(\mathbf{p} - q \mathbf{A}\right)}{\sqrt{\left(\mathbf{p} - q \mathbf{A}\right)^2 + m^2 c^2}}.$$ So the Hamiltonian is $$\mathcal{H} = \dot{\mathbf{x}} \cdot \mathbf{p} - \mathcal{L} = c \sqrt{\left(\mathbf{p} - q \mathbf{A}\right)^2 + m^2 c^2} + q \varphi.$$ This results in the force equation (equivalent to the Euler–Lagrange equation) $$\dot{\mathbf{p}} = -\frac{\partial \mathcal{H}}{\partial \mathbf{x}} = q \nabla \left(\dot{\mathbf{x}} \cdot \mathbf{A}\right) - q \nabla \varphi,$$ from which one can derive $$\frac{d}{dt} \left( \frac{m \dot{\mathbf{x}}}{\sqrt{1 - \frac{\dot{\mathbf{x}}^2}{c^2}}} \right) = q \mathbf{E} + q \dot{\mathbf{x}} \times \mathbf{B}.$$ The above derivation makes use of the vector calculus identity: $$\nabla \left(\mathbf{a} \cdot \mathbf{b}\right) = \left(\mathbf{a} \cdot \nabla\right) \mathbf{b} + \left(\mathbf{b} \cdot \nabla\right) \mathbf{a} + \mathbf{a} \times \left(\nabla \times \mathbf{b}\right) + \mathbf{b} \times \left(\nabla \times \mathbf{a}\right).$$ An equivalent expression for the Hamiltonian as a function of the relativistic (kinetic) momentum, $\mathbf{P} = \gamma m \dot{\mathbf{x}} = \mathbf{p} - q \mathbf{A}$, is $$\mathcal{H} = \sqrt{c^2 \mathbf{P}^2 + m^2 c^4} + q \varphi.$$ This has the advantage that kinetic momentum $\mathbf{P}$ can be measured experimentally whereas canonical momentum $\mathbf{p}$ cannot. Notice that the Hamiltonian (total energy) can be viewed as the sum of the relativistic energy (kinetic + rest), $E = \sqrt{c^2 \mathbf{P}^2 + m^2 c^4}$, plus the potential energy, $V = q \varphi$. Inflation In studies of cosmological inflation, minimal coupling of a scalar field usually refers to minimal coupling to gravity. This means that the action for the inflaton field $\varphi$ is not coupled to the scalar curvature. Its only coupling to gravity is the coupling to the Lorentz invariant measure $\sqrt{g} \, d^4x$ constructed from the metric (in Planck units): $$S = \int d^4x \, \sqrt{g} \left( \tfrac{1}{2} g^{\mu\nu} \nabla_\mu \varphi \nabla_\nu \varphi - V(\varphi) \right),$$ where $g \equiv |\det g_{\mu\nu}|$ and $\nabla_\mu$ is the (gauge) covariant derivative, which reduces to the ordinary partial derivative for a scalar field. References Gauge theories Hamiltonian mechanics Lagrangian mechanics
Minimal coupling
[ "Physics", "Mathematics" ]
696
[ "Theoretical physics", "Lagrangian mechanics", "Classical mechanics", "Hamiltonian mechanics", "Dynamical systems" ]
32,713,174
https://en.wikipedia.org/wiki/Diffractive%20beam%20splitter
The diffractive beam splitter (also known as multispot beam generator or array beam generator) is a single optical element that divides an input beam into multiple output beams. Each output beam retains the same optical characteristics as the input beam, such as size, polarization and phase. A diffractive beam splitter can generate either a 1-dimensional beam array (1xN) or a 2-dimensional beam matrix (MxN), depending on the diffractive pattern on the element. The diffractive beam splitter is used with monochromatic light such as a laser beam, and is designed for a specific wavelength and angle of separation between output beams.

Applications
Normally, a diffractive beam splitter is used in tandem with a focusing lens so that the output beam array becomes an array of focused spots on a plane at a given distance from the lens, called the "working distance". The focal length of the lens, together with the separation angle between the beams, determines the separation distance between the focused spots. This simple optical set-up is used in a variety of high-power laser research and industrial applications that typically include:
Laser scribing (solar cells)
Glass dicing (LCD displays)
Perforation (cigarette filters)
Beam sampling (power monitoring and control)
3-D motion sensing
Medical/aesthetic applications (skin treatment)

Design principle
The theory of operation is based on the wave nature of light and Huygens' Principle (see also Diffraction). Designing the diffractive pattern for a beam splitter follows the same principle as a diffraction grating, with a repetitive pattern etched on the surface of a substrate. The depth of the etching pattern is roughly on the order of the wavelength of light in the application, with an adjustment factor related to the substrate's index of refraction. The etching pattern is composed of "periods" – identical sub-pattern units that repeat cyclically. The width d of the period is related to the separation angle θ between output beams according to the grating equation:

$\sin\theta_m = \frac{m\lambda}{d},$

where λ is the wavelength and m represents the order of the diffracted beam, with the zero order output simply being the undiffracted continuation of the input beam. While the grating equation determines the direction of the output beams, it does not determine the distribution of light intensity among those beams. The power distribution is defined by the etching profile within the unit period, which can involve many (not less than two) etching transitions of varying duty cycles. In a 1-dimensional diffractive beam splitter, the diffractive pattern is linear, while a 2-dimensional element will have a complex pattern. For information on the manufacturing process, see lithography.

References

External links
HOLOOR Diffractive beam-splitter
Video presenting diffractive beam splitter development (IFTA)
Light propagation simulation through diffractive splitter
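To make the grating relation concrete, here is a minimal numerical sketch (not part of the original entry) that computes output angles and paraxial spot positions behind a focusing lens; the wavelength, grating period and focal length are invented example values.

import numpy as np

wavelength = 532e-9   # assumed laser wavelength (m)
d = 20e-6             # assumed grating period (m)
f = 0.1               # assumed lens focal length (m)

for m in range(-3, 4):                 # diffraction orders -3 .. +3
    s = m * wavelength / d             # sin(theta_m) = m * lambda / d
    if abs(s) <= 1.0:                  # propagating orders only
        theta = np.arcsin(s)
        spot = f * np.tan(theta)       # spot position in the focal plane
        print(f"order {m:+d}: theta = {np.degrees(theta):7.3f} deg, "
              f"spot = {spot * 1e3:7.3f} mm")

The spot separation scales with focal length and wavelength and inversely with the grating period, which is why a given splitter is designed for one specific wavelength.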
Diffractive beam splitter
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
598
[ "Glass engineering and science", "Optical components", "Spectrum (physical sciences)", "Crystallography", "Diffraction", "Spectroscopy", "Components" ]
48,814,120
https://en.wikipedia.org/wiki/Natural%20isotopes
Natural isotopes are either stable isotopes or radioactive isotopes that have a sufficiently long half-life to allow them to exist in substantial concentrations in the Earth (such as bismuth-209, with a half-life of 1.9×10^19 years, and potassium-40, with a half-life of 1.251(3)×10^9 years), daughter products of those isotopes (such as 234Th, with a half-life of 24 days) or cosmogenic nuclides. The heaviest stable isotope is lead-208, but the heaviest 'natural' isotope is U-238. Many elements have both natural and artificial isotopes. For example, hydrogen has three natural isotopes and another four known artificial isotopes. A further distinction among stable natural isotopes is division into primordial (existed when the Solar System formed) and cosmogenic (created by cosmic ray bombardment or other similar processes).

What defines a natural isotope
Natural isotopes must either be stable, have a half-life exceeding about 7×10^8 years (there are 35 isotopes in this category, see stable isotope for more details) or be generated in large amounts cosmogenically (such as 14C, which has a half-life of only about 5,730 years but is made by cosmic rays colliding with 14N).

Naturally occurring radioisotopes
Some radioisotopes occur in nature with a half-life of much less than 7×10^8 years (carbon-14: 5,730 ± 40 years, tritium: 12.32 years, etc.). They are synthesised all the time by cosmic radiation. A practical use is radiocarbon dating with carbon-14.

See also
Stable isotope
Environmental isotopes

Bibliography
Isotopes
Natural isotopes
[ "Physics", "Chemistry" ]
343
[ "Isotopes", "Nuclear physics" ]
48,815,776
https://en.wikipedia.org/wiki/Impedance%20microbiology
Impedance microbiology is a microbiological technique used to measure the microbial number density (mainly bacteria but also yeasts) of a sample by monitoring the electrical parameters of the growth medium. The ability of microbial metabolism to change the electrical conductivity of the growth medium was discovered by Stewart and further studied by other scientists such as Oker-Blom, Parson and Allison in the first half of the 20th century. However, it was only in the late 1970s that, thanks to computer-controlled systems used to monitor impedance, the technique showed its full potential, as discussed in the works of Fistenberg-Eden & Eden, Ur & Brown and Cady.

Principle of operation
When a pair of electrodes is immersed in the growth medium, the system composed of electrodes and electrolyte can be modeled with the electrical circuit of Fig. 1, where Rm and Cm are the resistance and capacitance of the bulk medium, while Ri and Ci are the resistance and capacitance of the electrode-electrolyte interface. However, when the frequency of the sinusoidal test signal applied to the electrodes is relatively low (lower than 1 MHz) the bulk capacitance Cm can be neglected and the system can be modeled with a simpler circuit consisting only of a resistance Rs and a capacitance Cs in series. The resistance Rs accounts for the electrical conductivity of the bulk medium while the capacitance Cs is due to the capacitive double-layer at the electrode-electrolyte interface. During the growth phase, bacterial metabolism transforms uncharged or weakly charged compounds of the bulk medium into highly charged compounds that change the electrical properties of the medium. This results in a decrease of the resistance Rs and an increase of the capacitance Cs.

The impedance microbiology technique works as follows: the sample with the initially unknown bacterial concentration (C0) is placed at a temperature favoring bacterial growth (in the range 37 to 42 °C if a mesophilic microbial population is the target) and the electrical parameters Rs and Cs are measured at regular time intervals of a few minutes by means of a pair of electrodes in direct contact with the sample. As long as the bacterial concentration is lower than a critical threshold CTH, the electrical parameters Rs and Cs remain essentially constant (at their baseline values). CTH depends on various parameters such as electrode geometry, bacterial strain, chemical composition of the growth medium etc., but it is always in the range 10^6 to 10^7 cfu/ml. When the bacterial concentration increases over CTH, the electrical parameters deviate from their baseline values (generally, in the case of bacteria, there is a decrease of Rs and an increase of Cs; the opposite happens in the case of yeasts). The time needed for the electrical parameters Rs and Cs to deviate from their baseline values is referred to as the Detect Time (DT) and is the parameter used to estimate the initial unknown bacterial concentration C0. In Fig. 2 a typical curve for Rs as well as the corresponding bacterial concentration are plotted vs. time. Fig. 3 shows typical Rs curves vs. time for samples characterized by different bacterial concentrations. Since DT is the time needed for the bacterial concentration to grow from the initial value C0 to CTH, highly contaminated samples are characterized by lower values of DT than samples with low bacterial concentration. Given three samples with bacterial concentrations C1 > C2 > C3, it follows that DT1 < DT2 < DT3.
Data from the literature show that DT is a linear function of the logarithm of C0:

DT = A − B · log10(C0),

where B > 0 and the parameters A and B depend on the particular type of sample under test, the bacterial strains, the type of enriching medium used and so on. These parameters can be calculated by calibrating the system using a set of samples whose bacterial concentration is known and computing the linear regression line that will be used to estimate the bacterial concentration from the measured DT (a numerical sketch is given below).

Impedance microbiology has several advantages over the standard plate count technique for measuring bacterial concentration. It is characterized by a faster response time: in the case of mesophilic bacteria, response times range from 2–3 hours for highly contaminated samples (10^5–10^6 cfu/ml) to over 10 hours for samples with very low bacterial concentration (less than 10 cfu/ml). As a comparison, for the same bacterial strains the plate count technique is characterized by response times from 48 to 72 hours. Impedance microbiology is a method that can be easily automated and implemented as part of an industrial machine or realized as an embedded portable sensor, while plate count is a manual method that needs to be carried out in a laboratory by trained personnel.

Instrumentation
Over the past decades different instruments (either laboratory built or commercially available) to measure bacterial concentration using impedance microbiology have been built. One of the best-selling and most widely accepted instruments in the industry is the Bactometer by Biomerieux. The original instrument of 1984 features a multi-incubator system capable of monitoring up to 512 samples simultaneously with the ability to set 8 different incubation temperatures. Other instruments with performance comparable to the Bactometer are Malthus by Malthus Instruments Ltd (Bury, UK), RABIT by Don Whitley Scientific (Shipley, UK) and Bac Trac by Sy-Lab (Purkensdorf, Austria). A portable embedded system for microbial concentration measurement in liquid and semi-liquid media using impedance microbiology has recently been proposed. The system is composed of a thermoregulated incubation chamber where the sample under test is stored and a controller for thermoregulation and impedance measurements.

Applications
Impedance microbiology has been extensively used in the past decades to measure the concentration of bacteria and yeasts in different types of samples, mainly for quality assurance in the food industry. Some applications are the determination of the shelf life of pasteurized milk and the measurement of total bacterial concentration in raw milk, frozen vegetables, grain products, meat products and beer. The technique has also been used in environmental monitoring to detect the coliform concentration in water samples as well as other bacterial pathogens like E. coli present in water bodies, in the pharmaceutical industry to test the efficiency of novel antibacterial agents, and in the testing of final products.

References
Microbiology techniques Microbiology Impedance measurements
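A minimal numerical sketch (not part of the original entry) of the calibration just described: fit the linear relation between DT and log10(C0) on samples of known concentration, then invert it for an unknown sample. All numbers are invented for illustration.

import numpy as np

known_conc = np.array([1e2, 1e3, 1e4, 1e5, 1e6])     # cfu/ml, calibration set
measured_dt = np.array([11.8, 10.1, 7.9, 6.2, 4.1])  # hours

# least-squares fit of DT = intercept + slope * log10(C0)
slope, intercept = np.polyfit(np.log10(known_conc), measured_dt, 1)
print(f"DT = {intercept:.2f} + ({slope:.2f}) * log10(C0)")

dt_unknown = 8.8                                     # measured detect time (h)
log_c0 = (dt_unknown - intercept) / slope            # invert the regression
print(f"estimated C0 ~ {10 ** log_c0:.2e} cfu/ml")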
Impedance microbiology
[ "Physics", "Chemistry", "Biology" ]
1,292
[ "Physical quantities", "Microbiology", "Microbiology techniques", "Microscopy", "Impedance measurements", "Electrical resistance and conductance" ]
61,539,682
https://en.wikipedia.org/wiki/C14H16N4O3
{{DISPLAYTITLE:C14H16N4O3}} The molecular formula C14H16N4O3 (molar mass: 288.302 g/mol, exact mass: 288.1222 u) may refer to: Obidoxime Piromidic acid Molecular formulas
C14H16N4O3
[ "Physics", "Chemistry" ]
66
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,547,718
https://en.wikipedia.org/wiki/Mathematics%20of%20artificial%20neural%20networks
An artificial neural network (ANN) combines biological principles with advanced statistics to solve problems in domains such as pattern recognition and game-play. ANNs adopt the basic model of neuron analogues connected to each other in a variety of ways.

Structure
Neuron
A neuron with label $j$ receiving an input $p_j(t)$ from predecessor neurons consists of the following components:
an activation $a_j(t)$, the neuron's state, depending on a discrete time parameter,
an optional threshold $\theta_j$, which stays fixed unless changed by learning,
an activation function $f$ that computes the new activation at a given time from $a_j(t)$, $\theta_j$ and the net input $p_j(t)$, giving rise to the relation
$a_j(t+1) = f(a_j(t), p_j(t), \theta_j),$
and an output function $f_{\mathrm{out}}$ computing the output from the activation
$o_j(t) = f_{\mathrm{out}}(a_j(t)).$
Often the output function is simply the identity function.

An input neuron has no predecessor but serves as input interface for the whole network. Similarly an output neuron has no successor and thus serves as output interface of the whole network.

Propagation function
The propagation function computes the input $p_j(t)$ to the neuron $j$ from the outputs $o_i(t)$ of predecessor neurons and typically has the form
$p_j(t) = \sum_i o_i(t) w_{ij}.$

Bias
A bias term can be added, changing the form to the following:
$p_j(t) = \sum_i o_i(t) w_{ij} + w_{0j},$
where $w_{0j}$ is a bias.

Neural networks as functions
Neural network models can be viewed as defining a function $f \colon X \to Y$ that takes an input (observation) and produces an output (decision), or a distribution over $X$ or both $X$ and $Y$. Sometimes models are intimately associated with a particular learning rule. A common use of the phrase "ANN model" is really the definition of a class of such functions (where members of the class are obtained by varying parameters, connection weights, or specifics of the architecture such as the number of neurons, number of layers or their connectivity).

Mathematically, a neuron's network function $f(x)$ is defined as a composition of other functions $g_i(x)$, that can further be decomposed into other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between functions. A widely used type of composition is the nonlinear weighted sum,
$f(x) = K\left(\sum_i w_i g_i(x)\right),$
where $K$ (commonly referred to as the activation function) is some predefined function, such as the hyperbolic tangent, sigmoid function, softmax function, or rectifier function. The important characteristic of the activation function is that it provides a smooth transition as input values change, i.e. a small change in input produces a small change in output. The following refers to a collection of functions $g_i$ as a vector $g = (g_1, g_2, \ldots, g_n)$.

This figure depicts such a decomposition of $f$, with dependencies between variables indicated by arrows. These can be interpreted in two ways. The first view is the functional view: the input $x$ is transformed into a 3-dimensional vector $h$, which is then transformed into a 2-dimensional vector $g$, which is finally transformed into $f$. This view is most commonly encountered in the context of optimization. The second view is the probabilistic view: the random variable $F = f(G)$ depends upon the random variable $G = g(H)$, which depends upon $H = h(X)$, which depends upon the random variable $X$. This view is most commonly encountered in the context of graphical models. The two views are largely equivalent. In either case, for this particular architecture, the components of individual layers are independent of each other (e.g., the components of $g$ are independent of each other given their input $h$). This naturally enables a degree of parallelism in the implementation.

Networks such as the previous one are commonly called feedforward, because their graph is a directed acyclic graph.
Networks with cycles are commonly called recurrent. Such networks are commonly depicted in the manner shown at the top of the figure, where $f$ is shown as dependent upon itself. However, an implied temporal dependence is not shown.

Backpropagation
Backpropagation training algorithms fall into three categories: steepest descent (with variable learning rate and momentum, resilient backpropagation); quasi-Newton (Broyden–Fletcher–Goldfarb–Shanno, one step secant); Levenberg–Marquardt and conjugate gradient (Fletcher–Reeves update, Polak–Ribiére update, Powell–Beale restart, scaled conjugate gradient).

Algorithm
Let $N$ be a network with $e$ connections, $m$ inputs and $n$ outputs. Below, $x_1, x_2, \ldots$ denote vectors in $\mathbb{R}^m$, $y_1, y_2, \ldots$ vectors in $\mathbb{R}^n$, and $w_0, w_1, w_2, \ldots$ vectors in $\mathbb{R}^e$. These are called inputs, outputs and weights, respectively. The network corresponds to a function $y = f_N(w, x)$ which, given a weight $w$, maps an input $x$ to an output $y$. In supervised learning, a sequence of training examples $(x_1, y_1), \ldots, (x_p, y_p)$ produces a sequence of weights $w_0, w_1, \ldots, w_p$ starting from some initial weight $w_0$, usually chosen at random. These weights are computed in turn: first compute $w_i$ using only $(x_i, y_i, w_{i-1})$ for $i = 1, \ldots, p$. The output of the algorithm is then $w_p$, giving a new function $x \mapsto f_N(w_p, x)$. The computation is the same in each step, hence only the case $i = 1$ is described. $w_1$ is calculated from $(x_1, y_1, w_0)$ by considering a variable weight $w$ and applying gradient descent to the function $w \mapsto E(f_N(w, x_1), y_1)$ to find a local minimum, starting at $w = w_0$. This makes $w_1$ the minimizing weight found by gradient descent.

Learning pseudocode
To implement the algorithm above, explicit formulas are required for the gradient of the function $w \mapsto E(f_N(w, x), y)$, where the error function is $E(y, y') = |y - y'|^2$. The learning algorithm can be divided into two phases: propagation and weight update.

Propagation
Propagation involves the following steps:
Propagation forward through the network to generate the output value(s)
Calculation of the cost (error term)
Propagation of the output activations back through the network using the training pattern target to generate the deltas (the difference between the targeted and actual output values) of all output and hidden neurons.

Weight update
For each weight:
Multiply the weight's output delta and input activation to find the gradient of the weight.
Subtract the ratio (percentage) of the weight's gradient from the weight.
The learning rate is the ratio (percentage) that influences the speed and quality of learning. The greater the ratio, the faster the neuron trains, but the lower the ratio, the more accurate the training. The sign of the gradient of a weight indicates whether the error varies directly with or inversely to the weight. Therefore, the weight must be updated in the opposite direction, "descending" the gradient. Learning is repeated (on new batches) until the network performs adequately.

Pseudocode
Pseudocode for a stochastic gradient descent algorithm for training a three-layer network (one hidden layer):

initialize network weights (often small random values)
do
    for each training example named ex do
        prediction = neural-net-output(network, ex)  // forward pass
        actual = teacher-output(ex)
        compute error (prediction - actual) at the output units
        compute deltas for all weights from hidden layer to output layer  // backward pass
        compute deltas for all weights from input layer to hidden layer   // backward pass continued
        update network weights  // input layer not modified by error estimate
until error rate becomes acceptably low
return the network

The lines labeled "backward pass" can be implemented using the backpropagation algorithm, which calculates the gradient of the error of the network regarding the network's modifiable weights.

References
Computational statistics Classification algorithms Computational neuroscience
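As a concrete companion to the pseudocode above, here is a minimal sketch (not from the article) of stochastic gradient descent with backpropagation for a three-layer network in Python/numpy; the XOR data, layer sizes, learning rate and epoch count are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
Y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# initialize network weights (often small random values)
W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)
lr = 0.5                                                     # learning rate

for epoch in range(20000):
    for x, y in zip(X, Y):
        h = sigmoid(x @ W1 + b1)                     # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)                   # forward pass, output layer
        delta_out = (out - y) * out * (1 - out)      # backward pass
        delta_h = (delta_out @ W2.T) * h * (1 - h)   # backward pass continued
        W2 -= lr * np.outer(h, delta_out); b2 -= lr * delta_out
        W1 -= lr * np.outer(x, delta_h);   b1 -= lr * delta_h

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # should approach [0, 1, 1, 0]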
Mathematics of artificial neural networks
[ "Mathematics" ]
1,370
[ "Computational statistics", "Computational mathematics" ]
61,551,787
https://en.wikipedia.org/wiki/Moffatt%20eddies
Moffatt eddies are sequences of eddies that develop in corners bounded by plane walls (or sometimes between a wall and a free surface) due to an arbitrary disturbance acting at asymptotically large distances from the corner. Although the source of motion is the arbitrary disturbance at large distances, the eddies develop quite independently, and thus the solution for these eddies emerges from an eigenvalue problem, a self-similar solution of the second kind.

The eddies are named after Keith Moffatt, who discovered them in 1964, although some of the results were already obtained by William Reginald Dean and P. E. Montagnon in 1949. Lord Rayleigh also studied the problem of flow near the corner with homogeneous boundary conditions in 1911. Moffatt eddies inside cones were solved by P. N. Shankar.

Flow description
Near the corner, the flow can be assumed to be Stokes flow. Describing the two-dimensional planar problem by the cylindrical coordinates $(r, \theta)$ with velocity components $(u_r, u_\theta)$ defined by a stream function $\psi$ such that

$u_r = \frac{1}{r}\frac{\partial\psi}{\partial\theta}, \qquad u_\theta = -\frac{\partial\psi}{\partial r},$

the governing equation can be shown to be simply the biharmonic equation $\nabla^4\psi = 0$. The equation has to be solved with homogeneous boundary conditions (conditions taken for two walls separated by angle $2\alpha$)

$\psi = \frac{\partial\psi}{\partial\theta} = 0 \quad \text{at } \theta = \pm\alpha.$

The Taylor scraping flow is similar to this problem but driven by an inhomogeneous boundary condition. The solution is obtained by the eigenfunction expansion

$\psi = \sum_n A_n r^{\lambda_n} f_{\lambda_n}(\theta),$

where $A_n$ are constants and the real part of the eigenvalues $\lambda_n$ is always greater than unity. The eigenvalues are a function of the angle $\alpha$, but regardless the eigenfunctions can be written down for any $\lambda$:

$f_\lambda(\theta) = A\cos\lambda\theta + B\sin\lambda\theta + C\cos(\lambda - 2)\theta + D\sin(\lambda - 2)\theta.$

For the antisymmetrical solution, the eigenfunction is even and hence $B = D = 0$, and the boundary conditions demand

$\sin 2(\lambda - 1)\alpha = -(\lambda - 1)\sin 2\alpha.$

This equation admits no real root when the full corner angle $2\alpha$ is less than about 146.3°. These complex eigenvalues indeed correspond to the Moffatt eddies; the complex roots of the transcendental equation must in general be determined numerically (see the sketch below).

See also
Taylor scraping flow

References
Fluid dynamics Flow regimes
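The transcendental condition above is easy to explore numerically. The following is a minimal sketch (not from the article) that applies Newton's method with a complex starting guess to the antisymmetric eigenvalue condition; the corner angle and starting guess are arbitrary illustrations, and the guess may need tuning for other angles.

import numpy as np

alpha = np.deg2rad(30.0)            # corner half-angle: 2*alpha = 60 deg < 146.3 deg
sin2a = np.sin(2.0 * alpha)

def F(s):                           # eigenvalue condition in terms of s = lambda - 1
    return np.sin(2.0 * s * alpha) + s * sin2a

def dF(s):                          # derivative for Newton's method
    return 2.0 * alpha * np.cos(2.0 * s * alpha) + sin2a

s = 3.0 + 1.0j                      # complex initial guess
for _ in range(100):                # Newton iteration
    s = s - F(s) / dF(s)

print("sigma  =", s)
print("lambda =", 1.0 + s, " residual =", abs(F(s)))

A nonzero imaginary part of the eigenvalue is what produces the infinite, geometrically decaying sequence of eddies toward the corner.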
Moffatt eddies
[ "Chemistry", "Engineering" ]
386
[ "Piping", "Chemical engineering", "Flow regimes", "Fluid dynamics" ]
46,265,020
https://en.wikipedia.org/wiki/Pepper%20%28cryptography%29
In cryptography, a pepper is a secret added to an input such as a password during hashing with a cryptographic hash function. This value differs from a salt in that it is not stored alongside a password hash, but rather the pepper is kept separate in some other medium, such as a Hardware Security Module. Note that the National Institute of Standards and Technology refers to this value as a secret key rather than a pepper. A pepper is similar in concept to a salt or an encryption key. It is like a salt in that it is a randomized value that is added to a password hash, and it is similar to an encryption key in that it should be kept secret.

A pepper performs a comparable role to a salt or an encryption key, but while a salt is not secret (merely unique) and can be stored alongside the hashed output, a pepper is secret and must not be stored with the output. The hash and salt are usually stored in a database, but a pepper must be stored separately to prevent it from being obtained by the attacker in case of a database breach. A pepper should be long enough to remain secret from brute force attempts to discover it (NIST recommends at least 112 bits).

History
The idea of a site- or service-specific salt (in addition to a per-user salt) has a long history, with Steven M. Bellovin proposing a local parameter in a Bugtraq post in 1995. In 1996 Udi Manber also described the advantages of such a scheme, terming it a secret salt. The term pepper has been used, by analogy to salt, but with a variety of meanings. For example, when discussing a challenge-response scheme, pepper has been used for a salt-like quantity, though not used for password storage; it has been used for a data transmission technique where a pepper must be guessed; and even as a part of jokes. The term pepper was proposed for a secret or local parameter stored separately from the password in a discussion of protecting passwords from rainbow table attacks. This usage did not immediately catch on: for example, Fred Wenzel added support to Django password hashing for storage based on a combination of bcrypt and HMAC with separately stored nonces, without using the term. Usage has since become more common.

Types
There are multiple different types of pepper:
A secret unique to each user.
A shared secret that is common to all users.
A randomly-selected number that must be re-discovered on every password input.

Algorithm
As an incomplete example of using a pepper constant to save passwords, consider two username–password combinations. The passwords themselves are not saved; only the hashed output values are stored, and the 8-byte (64-bit) pepper 44534C70C6883DE2 is saved in a safe place separate from them. Unlike the salt, the pepper does not provide protection to users who use the same password, but it protects against dictionary attacks, unless the attacker has the pepper value available. Since the same pepper is not shared between different applications, an attacker is unable to reuse the hashes of one compromised database against another. A complete scheme for saving passwords usually includes both salt and pepper use (a minimal code sketch of such a combined scheme is given below).

Shared-secret pepper
In the case of a shared-secret pepper, a single compromised password (via password reuse or other attack) along with a user's salt can lead to an attack to discover the pepper, rendering it ineffective. If an attacker knows a plaintext password and a user's salt, as well as the algorithm used to hash the password, then discovering the pepper can be a matter of brute forcing the values of the pepper.
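A minimal sketch of the combined salt-plus-pepper scheme referenced above, using only Python's standard library. The pepper value reuses the 8-byte example constant from the text; the choice of HMAC-SHA-256 followed by PBKDF2 and the iteration count are illustrative assumptions, not prescriptions from this article or from NIST.

import hashlib, hmac, os

PEPPER = bytes.fromhex("44534C70C6883DE2")  # secret, stored outside the database

def hash_password(password, salt=None):
    # per-user random salt, stored alongside the resulting hash
    salt = salt if salt is not None else os.urandom(16)
    # mix in the secret pepper, then apply a slow password hash (illustrative choices)
    peppered = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
    digest = hashlib.pbkdf2_hmac("sha256", peppered, salt, 100_000)
    return salt, digest

def verify(password, salt, expected):
    return hmac.compare_digest(hash_password(password, salt)[1], expected)

salt, stored = hash_password("hunter2")
print(verify("hunter2", salt, stored))  # True
print(verify("wrong", salt, stored))    # False

An attacker who steals the database obtains (salt, digest) pairs but not PEPPER, so even a weak password cannot be cracked without first recovering the pepper from its separate store or by brute force.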
Because the pepper can in principle be recovered by such an exhaustive search, NIST recommends the secret value be at least 112 bits, so that discovering it by brute force is intractable. The pepper must be generated anew for every application it is deployed in; otherwise a breach of one application would result in lowered security of another application. Without knowledge of the pepper, other passwords in the database will be far more difficult to extract from their hashed values, as the attacker would need to guess the password as well as the pepper. A pepper adds security to a database of salts and hashes because, unless the attacker is able to obtain the pepper, cracking even a single hash is intractable, no matter how weak the original password. Even with a list of (salt, hash) pairs, an attacker must also guess the secret pepper in order to find the password which produces the hash. The NIST specification for a secret salt suggests using a Password-Based Key Derivation Function (PBKDF) with an approved Pseudorandom Function such as HMAC with SHA-3 as the hash function of the HMAC. The NIST recommendation is also to perform at least 1000 iterations of the PBKDF, and a further minimum 1000 iterations using the secret salt in place of the non-secret salt.

Unique pepper per user
In the case of a pepper that is unique to each user, the tradeoff is gaining extra security at the cost of storing more information securely. Compromising one password hash and revealing its secret pepper will have no effect on other password hashes and their secret pepper, so each pepper must be individually discovered, which greatly increases the time taken to attack the password hashes.

See also
Salt (cryptography)
HMAC
passwd

References

External links
Cryptography Password authentication
Pepper (cryptography)
[ "Mathematics", "Engineering" ]
1,101
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
46,266,901
https://en.wikipedia.org/wiki/SAES%20Getters
SAES Getters S.p.A. is an Italian joint stock company, established in 1940. It is the parent company of the SAES industrial group, which focusses its business on the production of components and systems in advanced materials patented by the same company and used in various industrial and medical applications. History 1940s and 50s In 1940 the company S.A.E.S. (Società Apparecchi Elettrici e Scientifici) was formed in Florence at the initiative of Ernesto Gabbrielli, an engineer from Montecatini Terme and two other shareholders. The impetus for the foundation of the company was the discovery by Gabbrielli of a new method for the production of getters, with nickel lids to protect barium plastics, to prevent the phenomenon of oxidation. S.A.E.S. initially devoted itself to producing time clocks and a barium-magnesium-aluminium alloy. After a few years, it transferred its headquarters to Milan and began to produce electrical resistance heaters. In 1946, the Della Porta and Canale families entered the shareholding structure, and in 1949, Paolo della Porta joined the company, subsequently taking over management in 1952. A period began that was distinguished by major innovations, supported by the Research and Development Laboratory, notably the invention of ring-shaped getters in barium-aluminium alloy. The company expanded into Europe, appointing its first agents in France, Germany, and England. S.A.E.S. continued to invest in scientific research during a period of major change in the electronics sector, characterised by the extensive spread of transistors, to the detriment of vacuum tubes intended for radio and television reception and transmission. In 1957 S.A.E.S. filed a patent for getters for television cathode tubes, first for black and white and then for colour television, launching production on an industrial scale. 1960s and 70s This period was characterised by company consolidations, innovations and commercial successes, including at international level. S.A.E.S. also commissioned its first mass production plant. At the "3rd Symposium on Residual Gases", held in Rome in 1967, S.A.E.S. presented a new configuration of getter, consisting of a metallic tape coated with St 101 alloy, obtained from the combination of zirconium and aluminium. This technological evolution allowed the company to execute new products with Non-Evaporable Getters (NEG) and getter pumps. The non-evaporable getter pumps (NEG pumps) are devices which have the same purpose as barium getters but do not require an evaporation process; they have extremely high absorptive capacity and are still used today, in more advanced forms, in applications requiring a high or ultra-high vacuum. Exploiting its own technological skills in the field of metallurgy, during these two decades, S.A.E.S. developed new alloys (such as the St 707 alloy in zirconium, vanadium and iron) and was involved in projects on the catalytic module and purifiers of inert gases. It also pursued its internationalisation with the creation of subsidiaries with commercial responsibilities in the United Kingdom (1966), United States (1969), Canada (1969) and Japan (1973). Other commercial representative offices were opened in France (1978) and in Germany (1979). In 1977, SAES Getters USA Inc. was founded in Colorado Springs, the first foreign subsidiary with a manufacturing mission for satisfying the growing demand for porous non-evaporable getters from the US military industry. S.A.E.S. 
continued to grow by virtue of the progressive launch of new products and the expansion of its production structure, both through the construction of new plants and acquisitions of other companies. In 1978, it had grown to 300 employees, redefined its company structure and changed its name, from S.A.E.S. to SAES Getters. 1980s In 1982, a plant was commissioned in Nanjing, in China, for the production of barium getters, the installations for which were supplied directly by SAES Getters. The objective during this period was the vertical integration of production (or the internal management of all processes), which over the years would become one of the group's strong points. From this perspective, in 1984, SAES Metallurgia was founded in Avezzano (Province of L’Aquila), for the production of barium and barium-aluminium alloy, SAES Engineering, for the execution of instruments and devices necessary for the metallurgical processing of alloys, and SAES Gemedis dedicated to the production of Gemedis (Getter Mercury Dispenser), i.e. alloys containing mercury. Over the course of the 1990s, the three companies merged into SAES Advanced Technologies S.p.A., adding to their own portfolio numerous other articles and devices used constantly in Hi-Tech applications requiring a vacuum or ultrahigh vacuum. In the mid-1980s, SAES Getters concluded two important acquisitions in the United States in the field of barium getters for cathode tubes and created Getters Corporation of America. In 1986, SAES Getters was listed on the stock exchange, with the financial resources raised allowing it to pursue its acquisitions policy, most notably, towards the end of the decade, the Californian company Cryolab Inc., subsequently renamed SAES Pure Gas Inc., within which it still develops and produces gas purification products, principally for the semiconductor industry. In 1989, Massimo della Porta, Paolo's son, began to work for SAES Metallurgia. Over the years, Massimo took on increasingly important roles until he was appointed Chairman of SAES Getters in 2009. 1990s At the start of the 1990s, the entire activity of the group was encompassed in three sectors: barium getters for the television industry, non-evaporable getters, NEG pumps and metal dispensers for industrial and scientific applications, gas purifiers and analyzers for the semiconductor industry. Over the decade, production companies were established in South Korea and China and commercial companies in Singapore and Taiwan. In 1996, to meet the new requirements of the company in the fields of production and research, the new headquarters was inaugurated in Lainate. During the same year, SAES Getters became the first Italian company to be listed on Nasdaq, the most important equity market in the US for high-tech companies (on which the company remained until 2003, the year in which it requested a delisting). During the second half of the 1990s, with the succession of technologies in the field of televisions, SAES Getters, sensing the importance of new market developments, expanded its own field of production and focused on technologies widespread in the flat display sector, in particular, mercury dispensers for backlight lamps for LCD (Liquid Crystal Display) for monitors and televisions. 2000s In 2000, Paolo della Porta was named the Entrepreneur of the year in Italy and in 2001, he was appointed "World Entrepreneur of the Year" in Italy by Ernst & Young - this recognition consolidated the image of the company at an international level. 
Overall, this decade signalled a notable change for the entire company, with its structure reorganised through new acquisitions and company policy evolving significantly, which, characterised by innovation and diversification, focused increasingly on expanding its technology portfolio in advanced materials. This strategy led SAES Getters to specialise in the execution of components and systems for high-tech industrial and medical applications, allowing the company to survive unscathed after the collapse of a number of key markets, such as barium getters for the television industry or back lights for LCD screens. Furthermore, during this period, the company developed highly innovative technologies and processes for the depositing of getters on silicon wafers for so-called MEMS (Micro Electro-Mechanical Systems), miniaturised devices intended for various applications, such as sensors and gyroscopes. The company entered the shape memory alloy sector (SMA), becoming a key reference for the sector and the first producer of SMA devices and materials for use in industry. In this way, SAES acquired Memory Metalle GmbH (renamed Memry GmbH in 2010), a German company with metallurgical and application skills relating to SMA in the field of medicine. It also acquired two companies in the United States: Memry Corporation, specialising in the production of SMA devices for medical use and the business division of Special Metal Corporation, dedicated to the production of NiTiNol (renamed SAES Smart Materials). NiTiNol is the commercial name of shape memory alloys used in medicine and the SAES group is one of the major international suppliers of this material. SAES then launched the production of SMA wires and springs for industrial applications at its Italian facilities, with these now concentrated at its headquarters in Lainate. In this context, in 2011, together with the German company Alfmeier, SAES Getters established the 50-50 joint venture in Germany, Actuator Solutions GmbH, to boost its own competitiveness at international level in the fields of development, production and marketing of SMA-wire-based actuators. In 2014, the joint venture won the "German Innovation Award" in the medium-sized company category. The group is also strengthening its presence in the field of purification, with the acquisition of a division of the company Power & Energy (Ivyland, Pennsylvania, United States), with the aim of expanding its production of palladium membrane purifiers. At the same time, the Research and Innovation area of the company is developing innovative hybrid technologies, which integrate getter materials into polymer matrices, initially concentrating on the development of dispensable absorbers for organic electronics applications, in particular OLED (Organic Light Emitting Diodes) light displays and sources. From 2013 onwards, further developments of the polymer technological platform have permitted the group to execute new functional polymer compounds, with the properties of interacting with the gases and optical, mechanical and surface modifying functions, according to the requirements and applications of interest, including implantable medical devices, food packaging and the field of energy storage (lithium batteries and super condensers). In 2010, it also established the company ETC, the fruit of a collaboration between CNR and SAES Getters (the majority shareholder). An innovative research programme was launched within ETC for the development of OLET technology (Organic Light Emitting Transistor). 
Technologies Material science The group is mainly focused on advanced alloys, advanced inorganic materials, polymer-matrix composites, and thin metal films. Solid state chemistry SAES Getters exploits solid-state chemistry to manage multiphase solid-state reactions among one or more solid phases. Evaporable getters, mercury dispensers, alkali metal dispensers are examples of products based on solid-state reactions. Metallurgy The company has been dealing for decades with vacuum metallurgy (arc melting and vacuum induction melting) and powder metallurgy (milling technologies, sieving, powders classification and mixing, screen printing, and sintering). In particular, by the available sintering technologies, processes under high vacuum or inert atmosphere and controlled conditions of temperature, time (traditional sintering), and pressure (hot uniaxial pressing and cold isostatic pressing) can be performed, allowing to obtain either highly porous either full dense sintered bodies. Shape memory alloys SMA are materials that can exhibit the property of remembering their original shape even after being severely deformed. Thanks to their intrinsic shape memory and superelastic effects, they represent an enabling technology for implantable medical devices and actuators. SAES Getters deals with designing new materials on a theoretical level, developing new alloys on an industrial scale, and modeling and manufacturing new components. Functional Polymer composites chemistry It is a bundle of technologies grown at SAES to develop advanced polymeric materials that integrate getter properties and rapidly expand towards other functionalities. These materials can be in the form of dispensable polymer composites, functional compounds, and functional coatings for a wide variety of applications, from consumer electronics to implantable medical devices to food packaging and special packaging in general. Gas ultra purification The SAES group manages the purification of gases, either under ultra-high vacuum, or at atmospheric pressure. It deals with enabling the generation of ultra-high pure gases, characterized by impurity concentrations below 1 part-per-billion (ppb). Vacuum science and technology This is the oldest and strongest technology owned by the SAES group, which deals with the design of both ultra-high vacuum pumps and pure metal sources, based on a variety of vacuum gas dynamics codes and on the building of proprietary high-vacuum manufacturing tools, ranging from thin film deposition tools to vacuum reactors. Physical vapour deposition The company developed a range of technologies enabling the deposition of pure metals and alloys on several kinds of substrates. Sputtering is normally used to deposit high surface area getter alloys onto silicon wafers, used by the vacuum MEMS industry as cap wafers, to create and maintain a well-defined gas composition inside MEMS cavities. Products The SAES group's organizational structure is composed of three units dedicated to different technologic solutions: Industrial Applications Business Unit, Shape Memory Alloys (SMA) Business Unit, Business Development Unit. Industrial Applications Business Unit Electronic & Photonic Devices The SAES group provides advanced technological solutions to the electronic devices of a wide range of markets, including the aeronautical, medical, industrial, security, defence, and basic research sectors. 
The products developed in this division include getters of different types and formats, alkaline metal dispensers, cathodes, and materials for thermal management. The offered products are employed in various devices such as X-ray tubes, microwave tubes, solid-state lasers, electron sources, photomultiplier, and radiofrequency amplification systems. Sensors and Detectors SAES Getters produces getters of different types and formats that are employed in various devices such as night vision devices based on infrared sensors, pressure sensors, gyroscopes for navigation systems, and MEMS devices of various natures. Light Sources The company supplies getters and metal dispensers for lamps. Vacuum Systems The company produces pumps based on non-vaporable getter materials (NEG), which can be applied in both industrial and scientific fields (for example, in analytical instrumentation, vacuum systems for research activities, and particle accelerators). Thermal Insulation The solutions for vacuum thermal insulation include NEG products for cryogenic applications, for solar collectors both for home applications and operating at high temperatures and for thermos. Furthermore, SAES is particularly active in the development of innovative getter solutions for vacuum insulating panels for the white goods industry. Pure Gas Handling In the microelectronics market, SAES Getters develops and sells advanced gas purification systems for the semiconductors industry and other industries that use pure gases. Through the subsidiary SAES Pure Gas, Inc., the group offers a full range of purifiers for bulk gases and special gases. Shape Memory Alloys (SMA) Business Unit The SAES group produces semi-finished products, components, and devices in shape memory alloy, and a special alloy made of nickel-titanium (NiTinol), characterized by super-elasticity (a property that allows the material to withstand even large deformations, returning then to its original form) and by the property of assuming predefined forms when subjected to heat treatment. SMA Medical Applications NiTinol is used in a wide range of medical devices, particularly in the cardiovascular field. In fact, its superelastic properties are ideal for the manufacturing of the devices used in the field of non-invasive surgery, such as catheters to navigate within the cardiovascular system and self-expanding devices (aortic and peripheral stents or heart valves). SMA Industrial Applications The shape memory alloy is used in producing various devices (valves, proportional valves, actuators, release systems, and mini-actuators). The use of SMA devices in the industrial field goes across the board of many application areas such as domotics, the white goods industry, the automotive business, and consumer electronics. Business Development Unit The SAES group has developed the platform of Functional Polymer Composites in the past few years, where getter functionalities, as well as optical and mechanical features, are incorporated into polymer matrices. Originally designed and used for the protection of OLED (Organic Light Emitting Diodes) displays and lamps, these new materials are now being tailored also for new areas such as food packaging and implantable medical devices among others. Relying on the same FPC platform, the group is also active in the field of new-generation electrochemical devices for energy storage, such as super-capacitors and lithium batteries, primarily intended for the market of hybrid and electric engines. 
Corporate affairs The main shareholders are: SGG Holding S.p.A. – 47.32% Carisma S.p.A. – 5.80% Berger Trust s.r.l. – 2.73% References External links Official Web Site SAES Pure Gas Inc. Web Site Interview to Massimo della Porta (7/03/2015) Actuator Solutions GmbH Web Site Electronics companies established in 1940 Electrical engineering companies of Italy Multinational companies headquartered in Italy Vacuum systems Italian companies established in 1940
SAES Getters
[ "Physics", "Engineering" ]
3,618
[ "Vacuum systems", "Vacuum", "Matter" ]
46,268,782
https://en.wikipedia.org/wiki/Volume%20combustion%20synthesis
Volume combustion synthesis (VCS) is a method of chemical synthesis in which the reactants are heated uniformly in a controlled manner until a reaction ignites throughout the volume of the reaction chamber. The VCS mode is typically used for weakly exothermic reactions that require preheating prior to ignition. References Chemical synthesis Combustion
Volume combustion synthesis
[ "Chemistry" ]
66
[ "Combustion", "nan", "Chemical synthesis", "Chemical reaction stubs", "Chemical process stubs" ]
46,272,958
https://en.wikipedia.org/wiki/HBTU
HBTU (Hexafluorophosphate Benzotriazole Tetramethyl Uronium) is a coupling reagent used in solid phase peptide synthesis. It was introduced in 1978 and shows resistance against racemization. It is used because of its mild activating properties. HBTU is prepared by reaction of HOBt with TCFH under basic conditions and was assigned a uronium-type structure, presumably by analogy with the corresponding phosphonium salts, which bear a positive carbon atom instead of the phosphonium residue. Later, it was shown by X-ray analysis that such salts crystallize as aminium salts rather than the corresponding uronium salts. Mechanism HBTU activates carboxylic acids by forming a stabilized HOBt (Hydroxybenzotriazole) leaving group. The activated intermediate species attacked by the amine during aminolysis is the HOBt ester. To create the HOBt ester, the carboxyl group of the acid attacks the imide carbonyl carbon of HBTU. Subsequently, the displaced anionic benzotriazole N-oxide attacks the carbonyl of the acid, giving the tetramethylurea byproduct and the activated ester. Aminolysis displaces the benzotriazole N-oxide to form the desired amide. Safety In vivo dermal sensitization studies according to OECD 429 confirmed HBTU is a moderate skin sensitizer, showing a response at 0.9 wt% in the Local Lymph Node Assay (LLNA), placing it in Globally Harmonized System of Classification and Labelling of Chemicals (GHS) Dermal Sensitization Category 1A. Thermal hazard analysis by differential scanning calorimetry (DSC) shows HBTU is potentially explosive. See also EDC HATU BOP reagent PyBOP PyAOP reagent References Hexafluorophosphates Peptide coupling reagents Benzotriazoles Dimethylamino compounds
HBTU
[ "Chemistry", "Biology" ]
425
[ "Reagents for biochemistry", "Peptide coupling reagents", "Reagents for organic chemistry" ]
46,273,230
https://en.wikipedia.org/wiki/Hafnium%28IV%29%20iodide
Hafnium(IV) iodide is the inorganic compound with the formula HfI4. It is a red-orange, moisture sensitive, sublimable solid that is produced by heating a mixture of hafnium with excess iodine. It is an intermediate in the crystal bar process for producing hafnium metal. In this compound, the hafnium centers adopt octahedral coordination geometry. Like most binary metal halides, the compound is polymeric: it is a one-dimensional polymer consisting of chains of edge-shared bioctahedral Hf2I8 subunits, similar to the motif adopted by HfCl4. The nonbridging iodide ligands have shorter bonds to Hf than the bridging iodide ligands. References Iodides Hafnium compounds Metal halides
Hafnium(IV) iodide
[ "Chemistry" ]
172
[ "Inorganic compounds", "Metal halides", "Salts" ]
50,282,077
https://en.wikipedia.org/wiki/Drop-in%20replacement
Drop-in replacement is a term used in computer science and other fields. It refers to the ability to replace one hardware or software component with another, without any other code or configuration changes being required and resulting in no negative impacts. Usually, the replacement has some benefits including one or more of the following: increased security increased speed increased feature set increased compatibility (e.g. with other components or standards support) increased support (e.g. the old component may no longer be supported, maintained, or manufactured) See also Pin compatibility Plug compatible Clone (computing) Backward compatibility Kludge Software architecture
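As a small illustration (not from the article), in Python a third-party module that exposes the same interface as a standard one can act as a drop-in replacement, with no changes to calling code. A commonly cited example is swapping the standard json module for a faster implementation such as ujson; whether a given replacement is fully compatible for a particular program is an assumption to verify, which is why a fallback import is shown.

try:
    import ujson as json   # drop-in candidate: same loads/dumps surface
except ImportError:
    import json            # fall back to the standard library

data = json.loads('{"a": 1}')  # caller code is identical either way
print(json.dumps(data))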
Drop-in replacement
[ "Engineering" ]
122
[ "Software engineering", "Software engineering stubs" ]
50,288,657
https://en.wikipedia.org/wiki/Quantum%20tunneling%20of%20water
The quantum tunneling of water occurs when water molecules in nanochannels exhibit quantum tunneling behavior that smears out the positions of the hydrogen atoms into a pair of correlated rings. In that state, the water molecules become delocalized around a ring and assume an unusual double top-like shape. At low temperatures, the phenomenon showcases the quantum motion of water through the separating potential walls, which is forbidden in classical mechanics, but allowed in quantum mechanics. The quantum tunneling of water occurs under ultraconfinement in rocks, soil and cell walls. The phenomenon is predicted to help scientists better understand the thermodynamic properties and behavior of water in confined environments such as water diffusion, transport in the channels of cell membranes and in carbon nanotubes. History Quantum tunneling in water was reported as early as 1992. At that time it was known that motions can destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers. On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamer. Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds. On 22 April 2016, the journal Physical Review Letters reported the quantum tunneling of water molecules as demonstrated at the Spallation Neutron Source and Rutherford Appleton Laboratory. First indications of this phenomenon were seen by scientists from Russia and Germany in 2013 based on the splitting of terahertz absorption lines of a water molecule captured in five-ångström channels in beryl. Subsequently it was directly observed using neutron scattering and analyzed by ab initio simulations. In a beryl channel, the water molecule can occupy six symmetrical orientations, in agreement with the known crystal structure. A single orientation has the oxygen atom approximately in the center of the channel, with the two hydrogens pointing to the same side toward one of the channel’s six hexagonal faces. Other orientations point to other faces, but are separated from each other by energy barriers of around 50 meV. These barriers, however, do not stop the hydrogens from tunneling among the six orientations and thus split the ground state energy into multiple levels. References Quantum chemistry Water physics
Quantum tunneling of water
[ "Physics", "Chemistry", "Materials_science" ]
455
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "Condensed matter physics", "Atomic", "Water physics", " and optical physics" ]
51,167,783
https://en.wikipedia.org/wiki/Silicon%20nanowire
Silicon nanowires, also referred to as SiNWs, are a type of semiconductor nanowire most often formed from a silicon precursor by etching of a solid or through catalyzed growth from a vapor or liquid phase. Such nanowires have promising applications in lithium-ion batteries, thermoelectrics and sensors. Initial synthesis of SiNWs is often accompanied by thermal oxidation steps to yield structures of accurately tailored size and morphology. SiNWs have unique properties that are not seen in bulk (three-dimensional) silicon materials. These properties arise from an unusual quasi one-dimensional electronic structure and are the subject of research across numerous disciplines and applications. SiNWs are considered one of the most important one-dimensional materials because they could serve as building blocks for nanoscale electronics assembled without the need for complex and costly fabrication facilities. SiNWs are frequently studied towards applications including photovoltaics, nanowire batteries, thermoelectrics and non-volatile memory.

Applications
Owing to their unique physical and chemical properties, silicon nanowires are a promising candidate for a wide range of applications that draw on their unique physico-chemical characteristics, which differ from those of bulk silicon material. SiNWs exhibit charge trapping behavior which renders such systems of value in applications necessitating electron–hole separation such as photovoltaics and photocatalysts. Recent experiments on nanowire solar cells have led to a remarkable improvement of the power conversion efficiency of SiNW solar cells from <1% to >17% in the last few years. The ability of lithium ions to intercalate into silicon structures renders various Si nanostructures of interest towards applications as anodes in Li-ion batteries (LiBs). SiNWs are of particular merit as such anodes as they exhibit the ability to undergo significant lithiation while maintaining structural integrity and electrical connectivity. Silicon nanowires are efficient thermoelectric generators because they combine a high electrical conductivity, owing to the bulk properties of doped Si, with a low thermal conductivity due to the small cross section.

Silicon nanowire field-effect transistor (SiNWFET)
Charge trapping behavior and tunable surface-governed transport properties of SiNWs render this category of nanostructures of interest towards use as metal–insulator–semiconductor devices and field-effect transistors, where the silicon nanowire is the main channel of the FET, connecting the source to the drain terminal and facilitating electron transfer between the two terminals, with further applications as nano-electronic storage devices, in flash memory, logic devices as well as chemical, gas and biological sensors. Since the SiNWFET was first reported in 2001, it has attracted wide attention in the sensor area because of superior physical properties such as high carrier mobility, high current switch ratio, and a close to ideal subthreshold slope. Furthermore, it is cost-efficient and can be manufactured on a large scale, since its fabrication is compatible with CMOS technology. Specifically, in bioresearch, the SiNWFET has high sensitivity and specificity to biological targets and can offer label-free detection after being modified with small biological molecules to match the target object. Moreover, SiNWFETs can be fabricated in arrays and selectively functionalized, which enables the simultaneous detection and analysis of multiple targets.
Multiplexed detection could greatly improve throughput and efficiency of biodetection.

Synthesis
Several synthesis methods are known for SiNWs and these can be broadly divided into methods which start with bulk silicon and remove material to yield nanowires, also known as top-down synthesis, and methods which use a chemical or vapor precursor to build nanowires in a process generally considered to be bottom-up synthesis.

Top-down synthesis methods
These methods use material removal techniques to produce nanostructures from a bulk precursor:
Laser beam ablation
Ion-beam etching
Thermal evaporation oxide-assisted growth (OAG)
Metal-assisted chemical etching (MaCE)

Bottom-up synthesis methods
Vapour–liquid–solid (VLS) growth – a type of catalysed CVD often using silane as Si precursor and gold nanoparticles as catalyst (or 'seed').
Molecular beam epitaxy – a form of PVD applied in plasma environment
Precipitation from a solution – a variation of the VLS method, aptly named supercritical fluid liquid solid (SFLS), that uses a supercritical fluid (e.g. organosilane at high temperature and pressure) as Si precursor instead of vapor. The catalyst would be a colloid in solution, such as colloidal gold nanoparticles, and the SiNWs are grown in this solution

Thermal oxidation
Subsequent to physical or chemical processing, either top-down or bottom-up, to obtain initial silicon nanostructures, thermal oxidation steps are often applied in order to obtain materials with desired size and aspect ratio. Silicon nanowires exhibit a distinct and useful self-limiting oxidation behaviour whereby oxidation effectively ceases due to diffusion limitations, which can be modeled (a toy numerical sketch is given below). This phenomenon allows accurate control of dimensions and aspect ratios in SiNWs and has been used to obtain high aspect ratio SiNWs with diameters below 5 nm. The self-limiting oxidation of SiNWs is of value towards lithium-ion battery materials.

Outlook
There is significant interest in SiNWs for their unique properties and the ability to control size and aspect ratio with great accuracy. As yet, limitations in large-scale fabrication impede the uptake of this material in the full range of investigated applications. Combined studies of synthesis methods, oxidation kinetics and properties of SiNW systems aim to overcome the present limitations and facilitate the implementation of SiNW systems, for example, high quality vapor-liquid-solid–grown SiNWs with smooth surfaces can be reversibly stretched with 10% or more elastic strain, approaching the theoretical elastic limit of silicon, which could open the doors for the emerging "elastic strain engineering" and flexible bio-/nano-electronics.

References
Materials Nanotechnology Nanoelectronics Nanomaterials Nanowire Silicon
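The self-limiting oxidation mentioned above can be illustrated with a deliberately crude toy model (not from the article and not a validated physical model): a constant oxidation rate multiplied by an invented suppression factor that vanishes as the shrinking silicon core approaches a limiting radius, mimicking the diffusion- and stress-limited slowdown.

r_core = 10.0           # initial silicon core radius (nm), invented
k0, r_stop = 0.5, 3.0   # invented rate constant and limiting radius (nm)
dt = 1.0                # time step, arbitrary units

for step in range(201):
    # growth rate decays as the core approaches the limiting radius
    rate = k0 * max(0.0, 1.0 - (r_stop / r_core) ** 2)
    r_core -= rate * dt
    if step % 50 == 0:
        print(f"t = {step * dt:5.0f}   core radius = {r_core:.2f} nm")

The printed radii approach but never cross the limiting value, the qualitative signature of self-limiting oxidation exploited to reach sub-5 nm diameters.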
Silicon nanowire
[ "Physics", "Materials_science", "Engineering" ]
1,247
[ "Materials science", "Materials", "Nanoelectronics", "Nanotechnology", "Nanomaterials", "Matter" ]
51,170,495
https://en.wikipedia.org/wiki/Regularity%20structure
Martin Hairer's theory of regularity structures provides a framework for studying a large class of subcritical parabolic stochastic partial differential equations arising from quantum field theory. The framework covers the Kardar–Parisi–Zhang equation, the $\Phi^4_3$ equation and the parabolic Anderson model, all of which require renormalization in order to have a well-defined notion of solution. A key advantage of regularity structures over previous methods is the ability to pose the solution of singular non-linear stochastic equations in terms of fixed-point arguments in a space of "controlled distributions" over a fixed regularity structure. The notion of a regularity structure is an extension of the notion of a rough path. Hairer won the 2021 Breakthrough Prize in Mathematics for introducing regularity structures. Definition A regularity structure is a triple $\mathcal{T} = (A, T, G)$ consisting of: a subset (index set) $A$ of $\mathbb{R}$ that is bounded from below and has no accumulation points; the model space: a graded vector space $T = \bigoplus_{\alpha \in A} T_\alpha$, where each $T_\alpha$ is a Banach space; and the structure group: a group $G$ of continuous linear operators $\Gamma \colon T \to T$ such that, for each $\alpha \in A$ and each $\tau \in T_\alpha$, we have $\Gamma \tau - \tau \in \bigoplus_{\beta < \alpha} T_\beta$. A further key notion in the theory of regularity structures is that of a model for a regularity structure, which is a concrete way of associating to any $\tau \in T$ and $x_0 \in \mathbb{R}^d$ a "Taylor polynomial" based at $x_0$ and represented by $\tau$, subject to some consistency requirements. More precisely, a model for $\mathcal{T} = (A, T, G)$ on $\mathbb{R}^d$, with $d \geq 1$, consists of two maps $\Pi \colon \mathbb{R}^d \to \mathrm{Lin}(T; \mathcal{S}'(\mathbb{R}^d))$, $\Gamma \colon \mathbb{R}^d \times \mathbb{R}^d \to G$. Thus, $\Pi$ assigns to each point $x$ a linear map $\Pi_x$, which is a linear map from $T$ into the space of distributions on $\mathbb{R}^d$; $\Gamma$ assigns to any two points $x$ and $y$ a bounded operator $\Gamma_{xy}$, which has the role of converting an expansion based at $y$ into one based at $x$. These maps $\Pi$ and $\Gamma$ are required to satisfy the algebraic conditions $\Gamma_{xy} \Gamma_{yz} = \Gamma_{xz}$, $\Pi_x \Gamma_{xy} = \Pi_y$, and the analytic conditions that, given any $r > |\inf A|$, any compact set $K \subset \mathbb{R}^d$, and any $\gamma > 0$, there exists a constant $C > 0$ such that the bounds $|(\Pi_x \tau)(\varphi_x^\lambda)| \leq C \lambda^{\alpha} \|\tau\|_{T_\alpha}$, $\|\Gamma_{xy} \tau\|_{T_\beta} \leq C |x - y|^{\alpha - \beta} \|\tau\|_{T_\alpha}$, hold uniformly for all $r$-times continuously differentiable test functions $\varphi \colon \mathbb{R}^d \to \mathbb{R}$ with unit $\mathcal{C}^r$ norm, supported in the unit ball about the origin in $\mathbb{R}^d$, for all points $x, y \in K$, all $0 < \lambda \leq 1$, and all $\tau \in T_\alpha$ with $\alpha \leq \gamma$ (the second bound being required for all $\beta < \alpha$). Here $\varphi_x^\lambda \colon \mathbb{R}^d \to \mathbb{R}$ denotes the shifted and scaled version of $\varphi$ given by $\varphi_x^\lambda(y) = \lambda^{-d} \varphi\!\left(\tfrac{y - x}{\lambda}\right)$. References Stochastic differential equations Quantum field theory Statistical mechanics
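A standard concrete instance may help fix the abstract definition. The sketch below is the canonical polynomial regularity structure, a textbook example from the regularity-structures literature; the symbols $X^k$, $\Gamma_h$, $\Pi_x$ are notation introduced here for illustration and do not appear in the article text above, and the sign conventions are one common choice among several.

```latex
% Canonical polynomial regularity structure on R^d -- a standard illustrative example.
% Index set: the non-negative integers.
\[ A = \mathbb{N}_0 = \{0, 1, 2, \dots\} \]
% Model space: T_n is spanned by the abstract monomials X^k of total degree |k| = n.
\[ T_n = \operatorname{span}\{\, X^k : k \in \mathbb{N}_0^d,\ |k| = n \,\}, \qquad T = \bigoplus_{n \ge 0} T_n \]
% Structure group: translations h in R^d, re-expanding monomials about a shifted base point;
% note that \Gamma_h X^k - X^k only involves monomials of strictly lower degree, as the definition requires.
\[ \Gamma_h X^k = (X - h)^k = \sum_{\ell \le k} \binom{k}{\ell} (-h)^{\,k - \ell} X^{\ell} \]
% Canonical model: \Pi_x realises X^k as the concrete monomial centred at x, and
% \Gamma_{xy} = \Gamma_{y-x} converts an expansion based at y into one based at x, so \Pi_x \Gamma_{xy} = \Pi_y.
\[ (\Pi_x X^k)(y) = (y - x)^k, \qquad \Gamma_{xy} = \Gamma_{y - x} \]
```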
Regularity structure
[ "Physics", "Mathematics" ]
437
[ "Quantum field theory", "Mathematical analysis", "Mathematical analysis stubs", "Quantum mechanics", "Statistical mechanics" ]
51,175,067
https://en.wikipedia.org/wiki/C/1989%20Y2%20%28McKenzie%E2%80%93Russell%29
Comet McKenzie–Russell, formally designated as C/1989 Y2, is a hyperbolic comet that was discovered by Australian astronomers Patricia McKenzie and Kenneth S. Russell in December 1989. Discovery and observations Robert H. McNaught reported the discovery of a new comet that Patricia McKenzie found on photographic plates that Kenneth S. Russell took on 21 December 1989. Prediscovery images from Japan were taken on 19 December 1989 but went unnoticed until the next year. Orbital calculations revealed that the comet was already on its outbound flight, as it had reached perihelion a month before discovery, and as a result it continued to fade in the following days. It was last observed on 24 January 1990. References External links Non-periodic comets Hyperbolic comets Discoveries by Kenneth S. Russell
C/1989 Y2 (McKenzie–Russell)
[ "Astronomy" ]
160
[ "Astronomy stubs", "Comet stubs" ]
56,956,044
https://en.wikipedia.org/wiki/Petrolex
Petrolex Oil & Gas Limited is a Nigerian company and part of Petrolex Group, an African integrated energy conglomerate. The company was founded in February 2007 by Segun Adebutu, a Nigerian entrepreneur. It provides services to the oil and gas industry. It is mainly involved in the refining, storage, distribution and retail of petroleum products in Nigeria and Africa. Petrolex is best known for starting, in December 2017, the construction of a $3.6 billion high-capacity refinery and Sub-Saharan Africa's largest tank farm as part of its Mega Oil City project in Ogun State, Nigeria. Background Petrolex CEO Adebutu started an oil and fuel trading business around 2005 and later invested in "mid-stream infrastructure" worth $330 million. His experience in the family business laid the foundation for new ideas in his business career. Over the years, Adebutu was involved in bold projects in oil and gas, solid minerals, construction and maritime. This background inspired Adebutu to replicate similar practices with his new initiative, Petrolex Oil & Gas Ltd. In December 2017, Petrolex announced its plan to build a $3.6 billion refinery plant with an output capacity of 250,000 barrels a day. The company is currently working on the "front-end engineering design" and expects to complete construction in 2021. This initiative is part of a larger government program to end petroleum product imports within two years. With support from partners, Petrolex Group has invested over $330 million in the Ibefun tank farm with a 600,000 million litres monthly capacity. The farm was commissioned by the Vice President of Nigeria, Yemi Osinbajo, as part of phase one of a 10-year expansion program. This phase would ease the Apapa and Ibafon tanker traffic gridlock, a source of anxiety for stakeholders. Petrolex Mega Oil City project Petrolex provides services in the refining, storage, distribution and retailing of petroleum products. The company intends to be listed on the Nigerian Stock Exchange in the coming decade. The company launched the planning, design and development of the Petrolex Mega Oil City in Ibefun, Ogun State in 2012. The complex spreads over 101 square kilometres, about 10 per cent of the size of Lagos State. It houses a residential estate for staff, an army barracks, 30 loading gantries for product disbursement, and a 4,000-truck-capacity trailer park with accommodation for drivers. The Oil City project is the original idea of Segun Adebutu, CEO of Petrolex and son of the Nigerian entrepreneur Sir Kesington Adebutu. Its goal is to create the largest petrochemical industrial estate in Sub-Saharan Africa. Upon completion, this estate will include a large-capacity refinery, a tank farm, a liquefied petroleum gas processing plant, a lubricant facility and raw-material industries (e.g. fertiliser plants). The company has also negotiated the addition of 12,000 acres to expand the Oil City. Operations overview Downstream operations Petrolex downstream operations include the processing of petroleum products; the supply and distribution of gas oil and kerosene; and the retail marketing of specific oil products. Petrolex has built a storage-tank farm and other "mid-stream infrastructure" for $330 million. The company is connecting its infrastructure to the Nigeria System 2B pipeline at Mosimi to support the supply and distribution of petroleum products around the country. This infrastructure includes the procurement of barges, tug boats and a daughter vessel.
References External links Petrolex official website Petroleum industry Oil and gas companies of Nigeria Companies based in Lagos 2007 establishments in Nigeria Energy companies established in 2007 Non-renewable resource companies established in 2007
Petrolex
[ "Chemistry" ]
749
[ "Petroleum industry", "Petroleum", "Chemical process engineering" ]
56,956,125
https://en.wikipedia.org/wiki/Multi-homogeneous%20B%C3%A9zout%20theorem
In algebra and algebraic geometry, the multi-homogeneous Bézout theorem is a generalization to multi-homogeneous polynomials of Bézout's theorem, which counts the number of isolated common zeros of a set of homogeneous polynomials. This generalization is due to Igor Shafarevich. Motivation Given a polynomial equation or a system of polynomial equations it is often useful to compute or to bound the number of solutions without computing explicitly the solutions. In the case of a single equation, this problem is solved by the fundamental theorem of algebra, which asserts that the number of complex solutions is bounded by the degree of the polynomial, with equality, if the solutions are counted with their multiplicities. In the case of a system of $n$ polynomial equations in $n$ unknowns, the problem is solved by Bézout's theorem, which asserts that, if the number of complex solutions is finite, their number is bounded by the product of the degrees of the polynomials. Moreover, if the number of solutions at infinity is also finite, then the product of the degrees equals the number of solutions counted with multiplicities and including the solutions at infinity. However, it is rather common that the number of solutions at infinity is infinite. In this case, the product of the degrees of the polynomials may be much larger than the number of roots, and better bounds are useful. The multi-homogeneous Bézout theorem provides such a better bound when the unknowns may be split into several subsets such that the degree of each polynomial in each subset is lower than the total degree of the polynomial. For example, let $f_1, \ldots, f_{2n}$ be polynomials of degree two which are of degree one in the $n$ indeterminates $x_1, \ldots, x_n$, and also of degree one in $y_1, \ldots, y_n$ (that is, the polynomials are bilinear). In this case, Bézout's theorem bounds the number of solutions by $2^{2n}$, while the multi-homogeneous Bézout theorem gives the bound (using Stirling's approximation) $\binom{2n}{n} = \frac{2^{2n}}{\sqrt{\pi n}}(1 + o(1))$. Statement A multi-homogeneous polynomial is a polynomial that is homogeneous with respect to several sets of variables. More precisely, consider $k$ positive integers $n_1, \ldots, n_k$, and, for $i = 1, \ldots, k$, the $n_i + 1$ indeterminates $x_{i,0}, x_{i,1}, \ldots, x_{i,n_i}$. A polynomial in all these indeterminates is multi-homogeneous of multi-degree $d_1, \ldots, d_k$ if it is homogeneous of degree $d_i$ in $x_{i,0}, \ldots, x_{i,n_i}$. A multi-projective variety is a projective subvariety of the product of projective spaces $\mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_k}$, where $\mathbb{P}^n$ denotes the projective space of dimension $n$. A multi-projective variety may be defined as the set of the common nontrivial zeros of an ideal of multi-homogeneous polynomials, where "nontrivial" means that $x_{i,0}, \ldots, x_{i,n_i}$ are not simultaneously 0, for each $i$. Bézout's theorem asserts that $n$ homogeneous polynomials of degrees $d_1, \ldots, d_n$ in $n + 1$ indeterminates define either an algebraic set of positive dimension, or a zero-dimensional algebraic set consisting of $d_1 d_2 \cdots d_n$ points counted with their multiplicities. For stating the generalization of Bézout's theorem, it is convenient to introduce new indeterminates $t_1, \ldots, t_k$ and to represent the multi-degree $d_1, \ldots, d_k$ by the linear form $\mathbf{d} = d_1 t_1 + \cdots + d_k t_k$. In the following, "multi-degree" will refer to this linear form rather than to the sequence of degrees. Setting $n = n_1 + \cdots + n_k$, the multi-homogeneous Bézout theorem is the following. 
With the above notation, $n$ multi-homogeneous polynomials of multi-degrees $\mathbf{d}_1, \ldots, \mathbf{d}_n$ define either a multi-projective algebraic set of positive dimension, or a zero-dimensional algebraic set consisting of $B$ points, counted with multiplicities, where $B$ is the coefficient of $t_1^{n_1} t_2^{n_2} \cdots t_k^{n_k}$ in the product of linear forms $\mathbf{d}_1 \mathbf{d}_2 \cdots \mathbf{d}_n$. Non-homogeneous case The multi-homogeneous Bézout bound on the number of solutions may be used for non-homogeneous systems of equations, when the polynomials may be (multi)-homogenized without increasing the total degree. However, in this case, the bound may be not sharp, if there are solutions "at infinity". Without insight on the problem that is studied, it may be difficult to group the variables for a "good" multi-homogenization. Fortunately, there are many problems where such a grouping results directly from the problem that is modeled. For example, in mechanics, equations are generally homogeneous or almost homogeneous in the lengths and in the masses. References Theorems about polynomials Algebraic geometry
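The theorem reduces the bound to extracting a coefficient from a product of linear forms, which is easy to mechanize. The sketch below is an illustration (not taken from the article): it multiplies the multi-degree forms as polynomials in t_1, ..., t_k and reads off the coefficient of t_1^{n_1}...t_k^{n_k}; for a bilinear system it reproduces the binomial-coefficient bound discussed above.

```python
from math import comb

def multihomogeneous_bezout_bound(group_sizes, multidegrees):
    """Coefficient of t1^n1 * ... * tk^nk in the product of the linear forms
    d_j = d_{j,1} t_1 + ... + d_{j,k} t_k, one form per polynomial."""
    k = len(group_sizes)
    poly = {(0,) * k: 1}                       # polynomial in t_1..t_k as {exponent tuple: coefficient}
    for d in multidegrees:                     # multiply by one linear form at a time
        new = {}
        for expo, c in poly.items():
            for i, di in enumerate(d):
                if di == 0:
                    continue
                e = list(expo)
                e[i] += 1
                new[tuple(e)] = new.get(tuple(e), 0) + c * di
        poly = new
    return poly.get(tuple(group_sizes), 0)

# Bilinear example: 2n equations of multi-degree (1, 1) in two groups of n variables each.
n = 5
bound_multi = multihomogeneous_bezout_bound([n, n], [(1, 1)] * (2 * n))
bound_bezout = 2 ** (2 * n)                    # classical Bezout: product of total degrees
print(bound_multi, comb(2 * n, n), bound_bezout)   # 252 252 1024
```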
Multi-homogeneous Bézout theorem
[ "Mathematics" ]
826
[ "Theorems in algebra", "Theorems about polynomials", "Fields of abstract algebra", "Algebraic geometry" ]
56,960,128
https://en.wikipedia.org/wiki/Kristina%20H%C3%A5kansson
Kristina Håkansson is an analytical chemist known for her contributions to Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry for biomolecular identification and structural characterization. Education Håkansson received an M.Sc. in Molecular Biotechnology in 1996 and a Ph.D., also in Molecular Biotechnology, in 2000, from Uppsala University. Following her graduation, she did post-doctoral research with Alan G. Marshall at the National High Magnetic Field Laboratory of Florida State University. Career and research Håkansson began her academic career at the University of Michigan in 2003 and became the director of the National High Magnetic Field Laboratory's Ion Cyclotron Resonance facility at Florida State University in 2024. She served as editor for Rapid Communications in Mass Spectrometry from 2021 to 2023. Her research focuses on mass spectrometry, primarily the identification and characterization of protein posttranslational modifications by complementary fragmentation techniques such as electron-capture dissociation (ECD)/negative ion ECD (niECD) and infrared multiphoton dissociation (IRMPD) at low (femtomole) levels. Awards 2022 Berzelius Gold Medal, Swedish Society for Mass Spectrometry 2018 Agilent Thought Leader Award 2017 Hach Lecturer, University of Wyoming 2016 Biemann Medal, American Society for Mass Spectrometry 2006–2011 National Science Foundation CAREER Award 2005–2007 Eli Lilly Analytical Chemistry Award 2005–2008 Dow Corning Assistant Professorship, University of Michigan 2005 American Society for Mass Spectrometry Research Award 2004 Elisabeth Caroline Crosby Research Award, University of Michigan 2004–2007 Searle Scholar Award 2000–2002 Swedish Foundation for International Cooperation in Research and Higher Education (STINT) postdoctoral fellow References External links Year of birth missing (living people) Living people Uppsala University alumni University of Michigan faculty Mass spectrometrists Florida State University faculty Swedish scientists
Kristina Håkansson
[ "Physics", "Chemistry" ]
387
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
56,961,153
https://en.wikipedia.org/wiki/Metallabenzene
The parent metallacyclobenzene has the formula LnM(CH)5. Metallabenzenes can be viewed as derivatives of benzene wherein a CH center has been replaced by a transition metal complex. Most metallabenzenes do not feature the M(CH)5 ring itself; instead, some of the H atoms are replaced by other substituents. Classification Metallabenzene complexes have been classified into three varieties; in such compounds, the parent acyclic hydrocarbon ligand is viewed as the anion C5H5−. The 6 π electrons in the metallacycle conform to the Hückel (4n+2) rule. Preparation and structure The first reported stable metallabenzene was the osmabenzene Os(C5H4S)CO(PPh3)2. Characteristic of other metallaarenes, the Os-C bonds are about 0.6 Å longer than the C-C bonds (in benzene these are 1.39 Å), resulting in a distortion of the hexagonal ring. 1H NMR signals for the ring protons are downfield, consistent with an aromatic "ring current." Osmabenzene and its derivatives can be regarded as Os(II), d6 octahedral complexes. Metallabenzenes have also been characterized with the metals ruthenium, iridium, platinum, and rhenium. References Organometallic chemistry Cyclic compounds
Metallabenzene
[ "Chemistry" ]
307
[ "Organometallic chemistry" ]
56,962,667
https://en.wikipedia.org/wiki/Southern%20Hydrate%20Ridge
Southern Hydrate Ridge, located about 90 km offshore Oregon Coast, is an active methane seeps site located on the southern portion of Hydrate Ridge. It extends 25 km in length and 15 km across, trending north-northeast-south-southwest at the depth of approximately 800 m. Southern Hydrate Ridge has been the site of numerous submersible dives with the human occupied Alvin submarine, extensive visits by numerous robotic vehicles including the Canadian ROV ROPOS, Jason (US National Deep Submersible Facility), and Tiburon (MBARI), and time-series geophysical studies that document changes in the subsurface distribution of methane. It is also a key site of the National Science Foundations Regional Cabled Array that is part of the Ocean Observatories Initiative (OOI), which includes eight types of cabled instruments streaming live data back to shore 24/7/365 at the speed of light, as well as uncabled instruments. Geological background The geologic history of the Southern Hydrate Ridge has been reconstructed through seismic imaging, which provides constraints on the origin of methane ice deposits found in this region. Hydrate Ridge is in a region where faults along the Cascadia Margin transition from seaward-verging to landward-verging. This fault reorientation corresponds to the transition from sedimentary accretion to subduction in this active accretionary margin. Seaward-verging thrust faults characterize the ridge deformation front, extending down to ~7 km beneath the summit. Initiation of the uplift of Southern Hydrate Ridge is predicted to have initiated about 1 million years ago. Sedimentary characteristics Clay-rich sediments have been found at the Southern Hydrate Ridge. These sediments are from Pleistocene to Holocene in age, and composed of 29% smectite, 31% illite, and 40% (chlorite + kaolinite) on average. Underlying the Pleistocene-Holocene strata is the late-Pliocene-early-Pleistocene accretionary material, composed of 38% smectite, 27% illite, and 35% (chlorite + kaolinite). A thick permeable zone of coarse-grained turbidites underlies the sediments. Located along the Cascadia accretionary margin, sediment build-up in this region is driven by two subduction-related processes: Scraping of sediments off of the subducting Juan de Fuca plate onto the overlying North American plate, and Underplating of subducted sediments onto the overlying plate. Continuous duplexing and underplating of sediment has caused thickening of sediments through uplifting. Furthermore, compaction and dewatering in this region has led to increased local pore pressure. Methane Ice at Southern Hydrate Ridge Methane ice at Southern Hydrate Ridge has been found within the shallow sediments, and more rarely exposed on the seafloor. Because Southern Hydrate Ridge is located on the upper continental slope, the regional hydrate stability zone (RHSZ), which is controlled by the sediment pore pressure and temperature, is very shallow. As organic material in the sediments is utilized by microbes, producing methane saturation within the sediment pores, methane ice forms within the RHSZ. The base of the RHSZ marks the transition from methane-ice-rich sediment, to clay sediments. Due to the impedance contrast between RHSZ and the underlying sediments, the depth of RHSZ can be detected using seismic imaging techniques. Associated microbially-mediated carbonate formations Methane hydrate formation is associated with extensive authigenic carbonate. 
These carbonate deposits are associated with the local chemosynthetic communities such as sulfide-oxidizing bacteria, mussels, vesicomyid clams, snails and tube worms (although tube worms are not observed at Southern Hydrate Ridge). Migration and egress of methane-rich fluids and microbial interactions can lead to the formation of chemoherms through anaerobic oxidation of methane. At Southern Hydrate Ridge, in addition to a gentle rampart of authigenic carbonate cobbles that rims the main seep site, there is a 60-m tall massive carbonate deposit called the Pinnacle. Uranium-thorium dating of carbonate material from the Pinnacle indicates that the Pinnacle is between ~ 7,000 and 11,000 years old. Methane venting: spatial and temporal discontinuity Methane venting includes the release of methane in the form of fluid and gases from methane seeps as methane ice dissociates. Due to the narrow RHSZ at the upper continental slope, methane ice at Southern Hydrate Ridge is metastable, such that changes in seafloor temperature and pressure may lead to destabilization of methane ice and its dissociation into fluid and gas. Methane venting at Southern Hydrate Ridge has been observed to be transient and episodic, with temporal variations of hours to days. This area is characterized by multiple sites of venting, which are thought to reflect different fracture networks. While active venting may maintain open fracture networks, fractures may also be filled by hydrates when there is no venting. As venting reactivates, a new fracture system may be created. While temporal and spatial variations in venting have been observed at this seep site, the local venting rate has been found to vary over six orders of magnitude; the controls are still not well understood. New instrumentation at this site, including cabled multibeam sonar systems developed by the University of Bremen, now images the entire seep area of Southern Hydrate Ridge, scanning for plumes every two hours. An overview sonar and a quantification sonar at the main study site, "Einstein's Grotto", are providing new insights into the temporal and spatial variability and intensity of the plumes and into the quantification of methane flux from this highly dynamic environment. Significance Release of methane from marine seep sites into the atmosphere may have been a factor in past climate warming events, such as the Paleocene-Eocene Thermal Maximum (PETM). It is estimated that there are gigatons of carbon trapped as methane in margin environments, and the release of methane from seeps is thought to be responsible for 5 to 10% of the global atmospheric methane. Scientific investigation Since the discovery of methane seeps and novel microbial communities and macrofauna at Hydrate Ridge in 1986, the Southern Hydrate Ridge has become an extensively studied site. Currently, it is one of the study sites under the OOI Regional Cabled Array. Infrastructure, including a diverse suite of instruments, was installed and became fully operational in 2014. Sensors that are currently at this site include: Pressure Sensor measures the pressure exerted by the overlying water column at the seafloor and is installed to study the impacts of lunar tides on methane release. Current Meter measures the current velocity and temperature of the water using acoustic signals. Acoustic Doppler Current Profiler (ADCP) measures the current velocity of the water profile in the region using acoustic signals. This instrument is installed by the OOI for understanding the local fluxes of heat, mass and momentum.
An example of such an application is the study of bubble plume evolution over time. Digital Still Camera records the changes in seafloor morphology and biology, as well as methane plumes. This is important for understanding how the local system and biosphere evolve through time. Mass Spectrometer measures the dissolved gas concentration, which is important for understanding the local biogeochemical processes and for quantifying methane release from the seafloor. Low-frequency Hydrophone records sound waves that propagate through the water column for examination of seismic activity. Bottom Ocean Seismometers detect seismic activity at local and regional scales. At Southern Hydrate Ridge, there is currently one broadband seismometer with an accelerometer, and three short-period seismometers (for examination of local seismic events that may provide insights into the fracture distribution in the subsurface). 'Osmo' Fluid Sampler samples the fluid issuing from the seep sites by drawing fluid into capillary-like tubing. Benthic Flow Sensors measure the fluid flow rates into and out of the sediment, which are important for determining the local methane and sulfide flux into the ocean. References Clathrate hydrates
Southern Hydrate Ridge
[ "Chemistry" ]
1,709
[ "Clathrates", "Hydrates", "Clathrate hydrates" ]
56,963,959
https://en.wikipedia.org/wiki/A%20Primer%20of%20Real%20Functions
A Primer of Real Functions is a revised edition of a classic Carus Monograph on the theory of functions of a real variable. It is authored by R. P. Boas, Jr and updated by his son Harold P. Boas. References 1960 non-fiction books Mathematics textbooks Functions and mappings Mathematical Association of America books
A Primer of Real Functions
[ "Mathematics" ]
68
[ "Mathematical objects", "Mathematical analysis", "Mathematical relations", "Functions and mappings" ]
43,966,007
https://en.wikipedia.org/wiki/Titanium%20perchlorate
Titanium perchlorate is a molecular compound of titanium and perchlorate groups with formula Ti(ClO4)4. Anhydrous titanium perchlorate decomposes explosively at 130 °C and melts at 85 °C with slight decomposition. It sublimes in a vacuum at temperatures as low as 70 °C. Being a molecular compound with four perchlorate ligands, it is an unusual example of a transition metal perchlorate complex. Properties In Ti(ClO4)4, the four perchlorate groups bind as bidentate ligands. Thus the Ti center is bound to eight oxygen atoms, so the molecule could also be called tetrakis(perchlorato-O,O′)titanium(IV). In the solid form it forms monoclinic crystals, with unit cell parameters a = 12.451 Å, b = 7.814 Å, c = 12.826 Å, α = 108.13°. The unit cell volume is 1186 Å3 at −100 °C. There are four molecules per unit cell. It reacts with petrolatum, nitromethane, acetonitrile and dimethylformamide, and, above 25 °C, with carbon tetrachloride. Titanyl perchlorate forms solvates with water, dimethyl sulfoxide, dioxane, pyridine-N-oxide, and quinoline-N-oxide. Thermolysis of titanium perchlorate gives TiO2, ClO2 and dioxygen (O2); the titanyl species TiO(ClO4)2 is an intermediate in this decomposition: Ti(ClO4)4 → TiO2 + 4ClO2 + 3O2, ΔH = . Formation Titanium perchlorate can be formed by reacting titanium tetrachloride with perchloric acid enriched in dichlorine heptoxide. Another route uses titanium tetrachloride with dichlorine hexoxide. This forms a complex with Cl2O6 which, when warmed to 55 °C in a vacuum, sublimes, and the pure anhydrous product can be crystallised from the vapour. Related In the salt dicaesium hexaperchloratotitanate, Cs2Ti(ClO4)6, the perchlorate groups are monodentate, connected to titanium by one oxygen. Titanium perchlorate can also form complexes with other ligands bound to the titanium atom, including binol and gluconic acid. A polymeric oxychloroperchlorato compound of titanium, Ti6O4Clx(ClO4)16−x, is made from excess TiCl4 and dichlorine hexoxide. This has a varying composition, and ranges from light to dark yellow. References Perchlorates Titanium(IV) compounds
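The thermolysis equation quoted above is easy to sanity-check by counting atoms on both sides. The sketch below is an illustrative helper (not from the article); the per-species atom counts are written out by hand.

```python
# Quick check that Ti(ClO4)4 -> TiO2 + 4 ClO2 + 3 O2 is atom-balanced.

def total(side):
    """Sum elemental composition over (coefficient, composition dict) pairs."""
    out = {}
    for coeff, comp in side:
        for element, n in comp.items():
            out[element] = out.get(element, 0) + coeff * n
    return out

Ti_ClO4_4 = {"Ti": 1, "Cl": 4, "O": 16}   # Ti(ClO4)4
TiO2      = {"Ti": 1, "O": 2}
ClO2      = {"Cl": 1, "O": 2}
O2        = {"O": 2}

left  = total([(1, Ti_ClO4_4)])
right = total([(1, TiO2), (4, ClO2), (3, O2)])
print(left, right, left == right)   # both sides give {'Ti': 1, 'Cl': 4, 'O': 16} -> True
```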
Titanium perchlorate
[ "Chemistry" ]
588
[ "Perchlorates", "Salts" ]
43,966,823
https://en.wikipedia.org/wiki/Multi-state%20modeling%20of%20biomolecules
Multi-state modeling of biomolecules refers to a series of techniques used to represent and compute the behaviour of biological molecules or complexes that can adopt a large number of possible functional states. Biological signaling systems often rely on complexes of biological macromolecules that can undergo several functionally significant modifications that are mutually compatible. Thus, they can exist in a very large number of functionally different states. Modeling such multi-state systems poses two problems: The problem of how to describe and specify a multi-state system (the "specification problem") and the problem of how to use a computer to simulate the progress of the system over time (the "computation problem"). To address the specification problem, modelers have in recent years moved away from explicit specification of all possible states, and towards rule-based modeling that allow for implicit model specification, including the κ-calculus, BioNetGen, the Allosteric Network Compiler and others. To tackle the computation problem, they have turned to particle-based methods that have in many cases proved more computationally efficient than population-based methods based on ordinary differential equations, partial differential equations, or the Gillespie stochastic simulation algorithm. Given current computing technology, particle-based methods are sometimes the only possible option. Particle-based simulators further fall into two categories: Non-spatial simulators such as StochSim, DYNSTOC, RuleMonkey, and NFSim and spatial simulators, including Meredys, SRSim and MCell. Modelers can thus choose from a variety of tools; the best choice depending on the particular problem. Development of faster and more powerful methods is ongoing, promising the ability to simulate ever more complex signaling processes in the future. Introduction Multi-state biomolecules in signal transduction In living cells, signals are processed by networks of proteins that can act as complex computational devices. These networks rely on the ability of single proteins to exist in a variety of functionally different states achieved through multiple mechanisms, including post-translational modifications, ligand binding, conformational change, or formation of new complexes. Similarly, nucleic acids can undergo a variety of transformations, including protein binding, binding of other nucleic acids, conformational change and DNA methylation. In addition, several types of modifications can co-exist, exerting a combined influence on a biological macromolecule at any given time. Thus, a biomolecule or complex of biomolecules can often adopt a very large number of functionally distinct states. The number of states scales exponentially with the number of possible modifications, a phenomenon known as "combinatorial explosion". This is of concern for computational biologists who model or simulate such biomolecules, because it raises questions about how such large numbers of states can be represented and simulated. Examples of combinatorial explosion Biological signaling networks incorporate a wide array of reversible interactions, post-translational modifications and conformational changes. Furthermore, it is common for a protein to be composed of several - identical or nonidentical - subunits, and for several proteins and/or nucleic acid species to assemble into larger complexes. A molecular species with several of those features can therefore exist in a large number of possible states. 
For instance, it has been estimated that the yeast scaffold protein Ste5 can be a part of 25666 unique protein complexes. In E. coli, chemotaxis receptors of four different kinds interact in groups of three, and each individual receptor can exist in at least two possible conformations and has up to eight methylation sites, resulting in billions of potential states. The protein kinase CaMKII is a dodecamer of twelve catalytic subunits, arranged in two hexameric rings. Each subunit can exist in at least two distinct conformations, and each subunit features various phosphorylation and ligand binding sites. A recent model incorporated conformational states, two phosphorylation sites and two modes of binding calcium/calmodulin, for a total of around one billion possible states per hexameric ring. A model of coupling of the EGF receptor to a MAP kinase cascade presented by Danos and colleagues accounts for distinct molecular species, yet the authors note several points at which the model could be further extended. A more recent model of ErbB receptor signalling even accounts for more than one googol () distinct molecular species. The problem of combinatorial explosion is also relevant to synthetic biology, with a recent model of a relatively simple synthetic eukaryotic gene circuit featuring 187 species and 1165 reactions. Of course, not all of the possible states of a multi-state molecule or complex will necessarily be populated. Indeed, in systems where the number of possible states is far greater than that of molecules in the compartment (e.g. the cell), they cannot be. In some cases, empirical information can be used to rule out certain states if, for instance, some combinations of features are incompatible. In the absence of such information, however, all possible states need to be considered a priori. In such cases, computational modeling can be used to uncover to what extent the different states are populated. The existence (or potential existence) of such large numbers of molecular species is a combinatorial phenomenon: It arises from a relatively small set of features or modifications (such as post-translational modification or complex formation) that combine to dictate the state of the entire molecule or complex, in the same way that the existence of just a few choices in a coffee shop (small, medium or large, with or without milk, decaf or not, extra shot of espresso) quickly leads to a large number of possible beverages (24 in this case; each additional binary choice will double that number). Although it is difficult for us to grasp the total numbers of possible combinations, it is usually not conceptually difficult to understand the (much smaller) set of features or modifications and the effect each of them has on the function of the biomolecule. The rate at which a molecule undergoes a particular reaction will usually depend mainly on a single feature or a small subset of features. It is the presence or absence of those features that dictates the reaction rate. The reaction rate is the same for two molecules that differ only in features which do not affect this reaction. Thus, the number of parameters will be much smaller than the number of reactions. (In the coffee shop example, adding an extra shot of espresso will cost 40 cent, no matter what size the beverage is and whether or not it has milk in it). It is such "local rules" that are usually discovered in laboratory experiments. 
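The combinatorial scaling described above is easy to make concrete. The sketch below is illustrative only; the feature counts are simplified versions of the examples in the text (coffee options, a chemotaxis-receptor-like subunit with binary methylation sites), not exact reproductions of the cited models.

```python
from math import prod

def state_count(options_per_feature):
    """States of one molecule = product of the number of options of each independent feature."""
    return prod(options_per_feature)

# Coffee-shop analogy from the text: 3 sizes x 2 (milk?) x 2 (decaf?) x 2 (extra shot?) = 24 beverages.
print(state_count([3, 2, 2, 2]))           # 24

# A simplified chemotaxis-receptor-like subunit: 2 conformations x 2^8 methylation patterns.
per_subunit = state_count([2] + [2] * 8)
print(per_subunit)                          # 512 states per subunit

# Three such subunits interacting as a group (ignoring symmetry): counts multiply across subunits.
print(per_subunit ** 3)                     # ~1.3e8 states for just three subunits

# Each additional binary modification doubles the count.
print(state_count([2] * 20))                # ~1e6 states for 20 binary sites on one molecule
```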
Thus, a multi-state model can be conceptualised in terms of combinations of modular features and local rules. This means that even a model that can account for a vast number of molecular species and reactions is not necessarily conceptually complex. Specification vs computation The combinatorial complexity of signaling systems involving multi-state proteins poses two kinds of problems. The first problem is concerned with how such a system can be specified; i.e. how a modeler can specify all complexes, all changes those complexes undergo and all parameters and conditions governing those changes in a robust and efficient way. This problem is called the "specification problem". The second problem concerns computation. It asks questions about whether a combinatorially complex model, once specified, is computationally tractable, given the large number of states and the even larger number of possible transitions between states, whether it can be stored electronically, and whether it can be evaluated in a reasonable amount of computing time. This problem is called the "computation problem". Among the approaches that have been proposed to tackle combinatorial complexity in multi-state modeling, some are mainly concerned with addressing the specification problem, some are focused on finding effective methods of computation. Some tools address both specification and computation. The sections below discuss rule-based approaches to the specification problem and particle-based approaches to solving the computation problem. A wide range of computational tools exist for multi-state modeling. The specification problem Explicit specification The most naïve way of specifying, e.g., a protein in a biological model is to specify each of its states explicitly and use each of them as a molecular species in a simulation framework that allows transitions from state to state. For instance, if a protein can be ligand-bound or not, exist in two conformational states (e.g. open or closed) and be located in two possible subcellular areas (e.g. cytosolic or membrane-bound), then the eight possible resulting states can be explicitly enumerated as: bound, open, cytosol bound, open, membrane bound, closed, cytosol bound, closed, membrane unbound, open, cytosol unbound, open, membrane unbound, closed, cytosol unbound, closed, membrane Enumerating all possible states is a lengthy and potentially error-prone process. For macromolecular complexes that can adopt multiple states, enumerating each state quickly becomes tedious, if not impossible. Moreover, the addition of a single additional modification or feature to the model of the complex under investigation will double the number of possible states (if the modification is binary), and it will more than double the number of transitions that need to be specified. Rule-based model specification It is clear that an explicit description, which lists all possible molecular species (including all their possible states), all possible reactions or transitions these species can undergo, and all parameters governing these reactions, very quickly becomes unwieldy as the complexity of the biological system increases. Modelers have therefore looked for implicit, rather than explicit, ways of specifying a biological signaling system. An implicit description is one that groups reactions and parameters that apply to many types of molecular species into one reaction template. It might also add a set of conditions that govern reaction parameters, i.e. 
the likelihood or rate at which a reaction occurs, or whether it occurs at all. Only properties of the molecule or complex that matter to a given reaction (either affecting the reaction or being affected by it) are explicitly mentioned, and all other properties are ignored in the specification of the reaction. For instance, the rate of ligand dissociation from a protein might depend on the conformational state of the protein, but not on its subcellular localization. An implicit description would therefore list two dissociation processes (with different rates, depending on conformational state), but would ignore attributes referring to subcellular localization, because they do not affect the rate of ligand dissociation, nor are they affected by it. This specification rule has been summarized as "Don't care, don't write". Since it is not written in terms of reactions, but in terms of more general "reaction rules" encompassing sets of reactions, this kind of specification is often called "rule-based". This description of the system in terms of modular rules relies on the assumption that only a subset of features or attributes are relevant for a particular reaction rule. Where this assumption holds, a set of reactions can be coarse-grained into one reaction rule. This coarse-graining preserves the important properties of the underlying reactions. For instance, if the reactions are based on chemical kinetics, so are the rules derived from them. Many rule-based specification methods exist. In general, the specification of a model is a separate task from the execution of the simulation. Therefore, among the existing rule-based model specification systems, some concentrate on model specification only, allowing the user to then export the specified model into a dedicated simulation engine. However, many solutions to the specification problem also contain a method of interpreting the specified model. This is done by providing a method to simulate the model or a method to convert it into a form that can be used for simulations in other programs. An early rule-based specification method is the κ-calculus, a process algebra that can be used to encode macromolecules with internal states and binding sites and to specify rules by which they interact. The κ-calculus is merely concerned with providing a language to encode multi-state models, not with interpreting the models themselves. A simulator compatible with Kappa is KaSim. BioNetGen is a software suite that provides both specification and simulation capacities. Rule-based models can be written down using a specified syntax, the BioNetGen language (BNGL). The underlying concept is to represent biochemical systems as graphs, where molecules are represented as nodes (or collections of nodes) and chemical bonds as edges. A reaction rule, then, corresponds to a graph rewriting rule. BNGL provides a syntax for specifying these graphs and the associated rules as structured strings. BioNetGen can then use these rules to generate ordinary differential equations (ODEs) to describe each biochemical reaction. Alternatively, it can generate a list of all possible species and reactions in SBML, which can then be exported to simulation software packages that can read SBML. One can also make use of BioNetGen's own ODE-based simulation software and its capability to generate reactions on-the-fly during a stochastic simulation. In addition, a model specified in BNGL can be read by other simulation software, such as DYNSTOC, RuleMonkey, and NFSim. 
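The "Don't care, don't write" convention described above can be illustrated with a minimal sketch. This is not the syntax of any of the tools named here (κ-calculus, BioNetGen, etc.); it is a plain-Python toy in which a molecule is an attribute map and a rule only constrains and updates the attributes it mentions, leaving all others untouched.

```python
# Minimal illustration of rule-based ("don't care, don't write") specification.

def matches(molecule, pattern):
    """A rule applies if every attribute mentioned in its pattern has the required value."""
    return all(molecule.get(attr) == value for attr, value in pattern.items())

def apply_rule(molecule, rule):
    """Return an updated copy of the molecule if the rule matches, otherwise return it unchanged."""
    if matches(molecule, rule["pattern"]):
        return {**molecule, **rule["update"]}
    return molecule

# Ligand dissociation depends only on conformation, not on subcellular localization,
# so localization is neither tested nor written.
dissociation_open = {"pattern": {"ligand": "bound", "conformation": "open"},
                     "update": {"ligand": "unbound"},
                     "rate": 0.5}

protein = {"ligand": "bound", "conformation": "open", "location": "membrane"}
print(apply_rule(protein, dissociation_open))
# {'ligand': 'unbound', 'conformation': 'open', 'location': 'membrane'}  -- location untouched
```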
Another tool that generates full reaction networks from a set of rules is the Allosteric Network Compiler (ANC). Conceptually, ANC sees molecules as allosteric devices with a Monod-Wyman-Changeux (MWC) type regulation mechanism, whose interactions are governed by their internal state, as well as by external modifications. A very useful feature of ANC is that it automatically computes dependent parameters, thereby imposing thermodynamic correctness. An extension of the κ-calculus is provided by React(C). The authors of React C show that it can express the stochastic π calculus. They also provide a stochastic simulation algorithm based on the Gillespie stochastic algorithm for models specified in React(C). ML-Rules is similar to React(C), but provides the added possibility of nesting: A component species of the model, with all its attributes, can be part of a higher-order component species. This enables ML-Rules to capture multi-level models that can bridge the gap between, for instance, a series of biochemical processes and the macroscopic behaviour of a whole cell or group of cells. For instance, a proof-of-concept model of cell division in fission yeast includes cyclin/cdc2 binding and activation, pheromone secretion and diffusion, cell division and movement of cells. Models specified in ML-Rules can be simulated using the James II simulation framework. A similar nested language to represent multi-level biological systems has been proposed by Oury and Plotkin. A specification formalism based on molecular finite automata (MFA) framework can then be used to generate and simulate a system of ODEs or for stochastic simulation using a kinetic Monte Carlo algorithm. Some rule-based specification systems and their associated network generation and simulation tools have been designed to accommodate spatial heterogeneity, in order to allow for the realistic simulation of interactions within biological compartments. For instance, the Simmune project includes a spatial component: Users can specify their multi-state biomolecules and interactions within membranes or compartments of arbitrary shape. The reaction volume is then divided into interfacing voxels, and a separate reaction network generated for each of these subvolumes. The Stochastic Simulator Compiler (SSC) allows for rule-based, modular specification of interacting biomolecules in regions of arbitrarily complex geometries. Again, the system is represented using graphs, with chemical interactions or diffusion events formalised as graph-rewriting rules. The compiler then generates the entire reaction network before launching a stochastic reaction-diffusion algorithm. A different approach is taken by PySB, where model specification is embedded in the programming language Python. A model (or part of a model) is represented as a Python programme. This allows users to store higher-order biochemical processes such as catalysis or polymerisation as macros and re-use them as needed. The models can be simulated and analysed using Python libraries, but PySB models can also be exported into BNGL, kappa, and SBML. Models involving multi-state and multi-component species can also be specified in Level 3 of the Systems Biology Markup Language (SBML) using the multi package. A draft specification is available. 
Thus, by only considering states and features important for a particular reaction, rule-based model specification eliminates the need to explicitly enumerate every possible molecular state that can undergo a similar reaction, and thereby allows for efficient specification. The computation problem When running simulations on a biological model, any simulation software evaluates a set of rules, starting from a specified set of initial conditions, and usually iterating through a series of time steps until a specified end time. One way to classify simulation algorithms is by looking at the level of analysis at which the rules are applied: they can be population-based, single-particle-based or hybrid. Population-based rule evaluation In Population-based rule evaluation, rules are applied to populations. All molecules of the same species in the same state are pooled together. Application of a specific rule reduces or increases the size of one of the pools, possibly at the expense of another. Some of the best-known classes of simulation approaches in computational biology belong to the population-based family, including those based on the numerical integration of ordinary and partial differential equations and the Gillespie stochastic simulation algorithm. Differential equations describe changes in molecular concentrations over time in a deterministic manner. Simulations based on differential equations usually do not attempt to solve those equations analytically, but employ a suitable numerical solver. The stochastic Gillespie algorithm changes the composition of pools of molecules through a progression of randomness reaction events, the probability of which is computed from reaction rates and from the numbers of molecules, in accordance with the stochastic master equation. In population-based approaches, one can think of the system being modeled as being in a given state at any given time point, where a state is defined according to the nature and size of the populated pools of molecules. This means that the space of all possible states can become very large. With some simulation methods implementing numerical integration of ordinary and partial differential equations or the Gillespie stochastic algorithm, all possible pools of molecules and the reactions they undergo are defined at the start of the simulation, even if they are empty. Such "generate-first" methods scale poorly with increasing numbers of molecular states. For instance, it has recently been estimated that even for a simple model of CaMKII with just 6 states per subunits and 10 subunits, it would take 290 years to generate the entire reaction network on a 2.54 GHz Intel Xeon processor. In addition, the model generation step in generate-first methods does not necessarily terminate, for instance when the model includes assembly of proteins into complexes of arbitrarily large size, such as actin filaments. In these cases, a termination condition needs to be specified by the user. Even if a large reaction system can be successfully generated, its simulation using population-based rule evaluation can run into computational limits. In a recent study, a powerful computer was shown to be unable to simulate a protein with more than 8 phosphorylation sites ( phosphorylation states) using ordinary differential equations. Methods have been proposed to reduce the size of the state space. One is to consider only the states adjacent to the present state (i.e. the states that can be reached within the next iteration) at each time point. 
This eliminates the need for enumerating all possible states at the beginning. Instead, reactions are generated "on-the-fly" at each iteration. These methods are available both for stochastic and deterministic algorithms. These methods still rely on the definition of an (albeit reduced) reaction network - in contrast to the "network-free" methods discussed below. Even with "on-the-fly" network generation, networks generated for population-based rule evaluation can become quite large, and thus difficult - if not impossible - to handle computationally. An alternative approach is provided by particle-based rule evaluation. Particle-based rule evaluation In particle-based (sometimes called "agent-based") simulations, proteins, nucleic acids, macromolecular complexes or small molecules are represented as individual software objects, and their progress is tracked through the course of the entire simulation. Because particle-based rule evaluation keeps track of individual particles rather than populations, it comes at a higher computational cost when modeling systems with a high total number of particles, but a small number of kinds (or pools) of particles. In cases of combinatorial complexity, however, the modeling of individual particles is an advantage because, at any given point in the simulation, only existing molecules, their states and the reactions they can undergo need to be considered. Particle-based rule evaluation does not require the generation of complete or partial reaction networks at the start of the simulation or at any other point in the simulation and is therefore called "network-free". This method reduces the complexity of the model at the simulation stage, and thereby saves time and computational power. The simulation follows each particle, and at each simulation step, a particle only "sees" the reactions (or rules) that apply to it. This depends on the state of the particle and, in some implementation, on the states of its neighbours in a holoenzyme or complex. As the simulation proceeds, the states of particles are updated according to the rules that are fired. Some particle-based simulation packages use an ad-hoc formalism for specification of reactants, parameters and rules. Others can read files in a recognised rule-based specification format such as BNGL. Non-spatial particle-based methods StochSim is a particle-based stochastic simulator used mainly to model chemical reactions and other molecular transitions. The algorithm used in StochSim is different from the more widely known Gillespie stochastic algorithm in that it operates on individual entities, not entity pools, making it particle-based rather than population-based. In StochSim, each molecular species can be equipped with a number of binary state flags representing a particular modification. Reactions can be made dependent on a set of state flags set to particular values. In addition, the outcome of a reaction can include a state flag being changed. Moreover, entities can be arranged in geometric arrays (for instance, for holoenzymes consisting of several subunits), and reactions can be "neighbor-sensitive", i.e. the probability of a reaction for a given entity is affected by the value of a state flag on a neighboring entity. These properties make StochSim ideally suited to modeling multi-state molecules arranged in holoenzymes or complexes of specified size. Indeed, StochSim has been used to model clusters of bacterial chemotactic receptors, and CaMKII holoenzymes. 
An extension to StochSim includes a particle-based simulator DYNSTOC, which uses a StochSim-like algorithm to simulate models specified in the BioNetGen language (BNGL), and improves the handling of molecules within macromolecular complexes. Another particle-based stochastic simulator that can read BNGL input files is RuleMonkey. Its simulation algorithm differs from the algorithms underlying both StochSim and DYNSTOC in that the simulation time step is variable. The Network-Free Stochastic Simulator (NFSim) differs from those described above by allowing for the definition of reaction rates as arbitrary mathematical or conditional expressions and thereby facilitates selective coarse-graining of models. RuleMonkey and NFsim implement distinct but related simulation algorithms. A detailed review and comparison of both tools is given by Yang and Hlavacek. It is easy to imagine a biological system where some components are complex multi-state molecules, whereas others have few possible states (or even just one) and exist in large numbers. A hybrid approach has been proposed to model such systems: Within the Hybrid Particle/Population (HPP) framework, the user can specify a rule-based model, but can designate some species to be treated as populations (rather than particles) in the subsequent simulation. This method combines the computational advantages of particle-based modeling for multi-state systems with relatively low molecule numbers and of population-based modeling for systems with high molecule numbers and a small number of possible states. Specification of HPP models is supported by BioNetGen, and simulations can be performed with NFSim. Spatial particle-based methods Spatial particle-based methods differ from the methods described above by their explicit representation of space. One example of a particle-based simulator that allows for a representation of cellular compartments is SRSim. SRSim is integrated in the LAMMPS molecular dynamics simulator and allows the user to specify the model in BNGL. SRSim allows users to specify the geometry of the particles in the simulation, as well as interaction sites. It is therefore especially good at simulating the assembly and structure of complex biomolecular complexes, as evidenced by a recent model of the inner kinetochore. MCell allows individual molecules to be traced in arbitrarily complex geometric environments which are defined by the user. This allows for simulations of biomolecules in realistic reconstructions of living cells, including cells with complex geometries like those of neurons. The reaction compartment is a reconstruction of a dendritic spine. MCell uses an ad-hoc formalism within MCell itself to specify a multi-state model: In MCell, it is possible to assign "slots" to any molecular species. Each slot stands for a particular modification, and any number of slots can be assigned to a molecule. Each slot can be occupied by a particular state. The states are not necessarily binary. For instance, a slot describing binding of a particular ligand to a protein of interest could take the states "unbound", "partially bound", and "fully bound". The slot-and-state syntax in MCell can also be used to model multimeric proteins or macromolecular complexes. When used in this way, a slot is a placeholder for a subunit or a molecular component of a complex, and the state of the slot will indicate whether a specific protein component is absent or present in the complex. 
A way to think about this is that MCell macromolecules can have several dimensions: A "state dimension" and one or more "spatial dimensions". The "state dimension" is used to describe the multiple possible states making up a multi-state protein, while the spatial dimension(s) describe topological relationships between neighboring subunits or members of a macromolecular complex. One drawback of this method for representing protein complexes, compared to Meredys, is that MCell does not allow for the diffusion of complexes, and hence, of multi-state molecules. This can in some cases be circumvented by adjusting the diffusion constants of ligands that interact with the complex, by using checkpointing functions or by combining simulations at different levels. Examples of multi-state models in biology A (by no means exhaustive) selection of models of biological systems involving multi-state molecules and using some of the tools discussed here is give in the table below. See also Multiscale modeling Rule-based modeling References Biomolecules Cell signaling Chemical bonding Proteins Enzyme kinetics Stochastic simulation
Multi-state modeling of biomolecules
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
5,707
[ "Biomolecules by chemical classification", "Natural products", "Biochemistry", "Enzyme kinetics", "Organic compounds", "Condensed matter physics", "nan", "Biomolecules", "Structural biology", "Proteins", "Chemical bonding", "Chemical kinetics", "Molecular biology" ]
42,528,326
https://en.wikipedia.org/wiki/Emotional%20eating
Emotional eating, also known as stress eating and emotional overeating, is defined as the "propensity to eat in response to positive and negative emotions". While the term commonly refers to eating as a means of coping with negative emotions, it sometimes includes eating for positive emotions, such as overeating when celebrating an event or to enhance an already good mood. Background Emotional eating includes eating in response to any emotion, whether that be positive or negative. Most frequently, people refer to emotional eating as "eating to cope with negative emotions." In these situations, emotional eating can be considered a form of disordered eating, which is defined as "an increase in food intake in response to negative emotions" and can be considered a maladaptive strategy. More specifically, emotional eating in order to relieve negative emotions would qualify as a form of emotion-focused coping, which attempts to minimize, regulate, and prevent emotional distress. One study found that emotional eating sometimes does not reduce emotional distress, but instead it enhances emotional distress by sparking feelings of intense guilt after an emotional eating session. Those who eat as a coping strategy are at an especially high risk of developing binge-eating disorder, and those with eating disorders are at a higher risk to engage in emotional eating as a means to cope. In a clinical setting, emotional eating can be assessed by the Dutch Eating Behavior Questionnaire, which contains a scale for restrained, emotional, and external eating. Other questionnaires, such as the Palatable Eating Motives Scale, can determine reasons why a person eats tasty foods when they are not hungry; sub-scales include eating for reward enhancement, coping, social, and conformity. Characteristics Emotional eating usually occurs when one is attempting to satisfy his or her hedonic drive, or the drive to eat palatable food to obtain pleasure in the absence of an energy deficit but can also occur when one is seeking food as a reward, eating for social reasons (such as eating at a party), eating to conform (which involves eating because friends or family wants the individual to), or eating to regulate inner emotional states. When one is engaging in emotional eating, they are usually seeking out energy-dense foods rather than just food in general, which may result in weight gain. In some cases, emotional eating can lead to something called "mindless eating" during which the individual is eating without being mindful of what or how much they are consuming; this can occur during both positive and negative settings. Emotional hunger does not originate from the stomach, such as with a rumbling or growling stomach, but tends to start when a person thinks about a craving or wants something specific to eat. Emotional responses are also different. Giving in to a craving or eating because of stress can cause feelings of regret, shame, or guilt, and these responses tend to be associated with emotional hunger. On the other hand, satisfying a physical hunger is giving the body the nutrients or calories it needs to function and is not associated with negative feelings. Major theories behind eating to cope Current research suggests that certain individual factors may increase one's likelihood of using emotional eating as a coping strategy. The inadequate affect regulation theory posits that individuals engage in emotional eating because they believe overeating alleviates negative feelings. 
Escape theory builds upon inadequate affect regulation theory by suggesting that people not only overeat to cope with negative emotions, but find that overeating diverts their attention away from a stimulus that threatens self-esteem and toward a pleasurable stimulus like food. Restraint theory suggests that overeating as a result of negative emotions occurs among individuals who already restrain their eating. While these individuals typically limit what they eat, when faced with negative emotions they cope by engaging in emotional eating. Restraint theory supports the idea that individuals with other eating disorders are more likely to engage in emotional eating. Together, these three theories suggest that an individual's aversion to negative emotions, particularly negative feelings that arise in response to a threat to the ego or intense self-awareness, increases the propensity to use emotional eating as a means of coping with this aversion.

The biological stress response may also contribute to the development of emotional eating tendencies. In a crisis, corticotropin-releasing hormone (CRH) is secreted by the hypothalamus, suppressing appetite and triggering the release of glucocorticoids from the adrenal gland. These steroid hormones increase appetite and, unlike CRH, remain in the bloodstream for a prolonged period of time, often resulting in hyperphagia. Those who experience this biologically instigated increase in appetite during times of stress are therefore primed to rely on emotional eating as a coping mechanism.

Contributing factors

Negative affect

Overall, high levels of the negative affect trait are related to emotional eating. Negative affectivity is a personality trait involving negative emotions and poor self-concept; negative emotions experienced within negative affect include anger, guilt, and nervousness. Certain negative affect regulation scales have been found to predict emotional eating: an inability to articulate and identify one's emotions makes the individual feel inadequate at regulating negative affect and thus more likely to engage in emotional eating as a means of coping with those negative emotions. Further studies of the relationship between negative affect and eating find that, after a stressful event, food consumption is associated with reduced feelings of negative affect (i.e., feeling less bad) for those enduring high levels of chronic stress. This relationship between eating and feeling better suggests a self-reinforcing cyclical pattern between high levels of chronic stress and consumption of highly palatable foods as a coping mechanism. Contrarily, a study conducted by Spoor et al. found that negative affect is not significantly related to emotional eating; rather, the two are indirectly associated through emotion-focused coping and avoidance-distraction behaviors. While these results differ somewhat, both suggest that negative affect plays a role in emotional eating but may be accounted for by other variables.

Childhood development

For some people, emotional eating is a learned behavior. During childhood, their parents give them treats to help them deal with a tough day or situation, or as a reward for something good. Over time, the child who reaches for a cookie after getting a bad grade on a test may become an adult who grabs a box of cookies after a rough day at work.
In an example such as this, the roots of emotional eating run deep, which can make breaking the habit extremely challenging. In some cases, individuals may eat in order to conform; for example, individuals may be told "you have to finish your plate" and may eat past the point at which they feel satisfied. At the same time, stress and negative emotions can affect appetite differently: while some children and adults experience an increase in appetite, others experience a decrease. These patterns are referred to as emotional overeating (EOE) and emotional undereating (EUE). As observed in the Gemini twin study, EOE and EUE stem not from genes, as expected, but generally from the early childhood environment: shared environmental influences played a significant role in both EOE and EUE, and non-shared environmental factors also had a moderate impact. Notably, shared environmental factors were the only ones common to both behaviors, as neither genetic nor non-shared environmental correlations were found to be significant in this context. There is also a positive correlation between EOE and EUE: certain children have a tendency both to overeat and to undereat in reaction to stress. The findings indicate that both EOE and EUE behaviors are primarily learned during childhood, with the environment shared among family members having the most significant impact; genetic factors played a minimal and insignificant role in these behaviors.

Related disorders

Emotional eating as a means to cope may be a precursor to developing eating disorders such as binge eating or bulimia nervosa. The relationship between emotional eating and other disorders is largely due to the fact that they share key characteristics; more specifically, both are related to emotion-focused coping, maladaptive coping strategies, and a strong aversion to negative feelings and stimuli. It is important to note that the causal direction has not been definitively established: while emotional eating is considered a precursor to these eating disorders, it may also be a consequence of them. The latter hypothesis, that emotional eating happens in response to another eating disorder, is supported by research showing that emotional eating is more common among individuals already suffering from bulimia nervosa. Additionally, in a study involving children diagnosed with ADHD (attention-deficit/hyperactivity disorder) or ASD (autism spectrum disorder), children with either diagnosis had more issues in their eating behaviors than children without any diagnosis. It was suggested that children with ADHD might experience higher instances of emotional overeating (EOE) and emotional undereating (EUE) than those without any diagnosis, while children with ASD seem more likely to experience EUE.

Biological and environmental factors

Stress affects food preferences. Numerous studies (granted, many of them in animals) have shown that physical or emotional distress increases the intake of food high in fat, sugar, or both, even in the absence of caloric deficits. Once ingested, fat- and sugar-filled foods seem to have a feedback effect that damps stress-related responses and emotions, as these foods trigger dopamine and opioid releases, which protect against the negative consequences of stress.
These foods really are "comfort" foods in that they seem to counteract stress, but rat studies demonstrate that intermittent access to and consumption of these highly palatable foods creates symptoms resembling opioid withdrawal, suggesting that high-fat and high-sugar foods can become neurologically addictive. A few examples from the American diet include hamburgers, pizza, French fries, sausages, and savory pasties. The most common food preferences are, in decreasing order: sweet energy-dense foods, non-sweet energy-dense foods, and then fruits and vegetables. This may contribute to people's stress-induced craving for those foods. The stress response is a highly individualized reaction, and personal differences in physiological reactivity may also contribute to the development of emotional eating habits. Women are more likely than men to resort to eating as a coping mechanism for stress, as are obese individuals and those with histories of dietary restraint. In one study, women were exposed to an hour-long social stressor task or a neutral control condition, each condition on a different day. After the tasks, the women were invited to a buffet with both healthy and unhealthy snacks. Those who had high chronic stress levels and low cortisol reactivity to the acute stress task consumed significantly more calories from chocolate cake than women with low chronic stress levels, after both the control and stress conditions. High cortisol levels, in combination with high insulin levels, may be responsible for stress-induced eating, as research shows high cortisol reactivity is associated with hyperphagia, an abnormally increased appetite for food, during stress. Furthermore, since glucocorticoids trigger hunger and specifically increase one's appetite for high-fat and high-sugar foods, those whose adrenal glands naturally secrete larger quantities of glucocorticoids in response to a stressor are more inclined toward hyperphagia. Additionally, those whose bodies require more time to clear the bloodstream of excess glucocorticoids are similarly predisposed. These biological factors can interact with environmental elements to further trigger hyperphagia: frequent intermittent stressors trigger repeated, sporadic releases of glucocorticoids in intervals too short to allow a complete return to baseline levels, leading to sustained, elevated appetite. Those whose lifestyles or careers entail frequent intermittent stressors over prolonged periods thus have a greater biological incentive to develop patterns of emotional eating, which puts them at risk for long-term adverse health consequences such as weight gain or cardiovascular disease. Macht (2008) described a five-way model to explain the reasoning behind stressful eating: (1) emotional control of food choice, (2) emotional suppression of food intake, (3) impairment of cognitive eating controls, (4) eating to regulate emotions, and (5) emotion-congruent modulation of eating. These break down into subgroups of coping, reward enhancement, social, and conformity motives, providing a stronger understanding of an individual's emotional eating.

Positive affect

Geliebter and Aversa (2003) conducted a study comparing individuals in three weight groups: underweight, normal weight, and overweight. Both positive and negative emotions were evaluated.
When individuals were experiencing positive emotional states or situations, the underweight group reported eating more than the other two groups. One explanation is that underweight individuals typically eat less, and during times of stress eat even less; when positive emotional states or situations arise, however, they are more likely to indulge themselves with food.

Impact

Emotional eating may qualify as avoidant coping and/or emotion-focused coping. As coping methods in these broad categories focus on temporary reprieve rather than practical resolution of stressors, they can initiate a vicious cycle of maladaptive behavior reinforced by fleeting relief from stress. Additionally, in the presence of the high insulin levels characteristic of the recovery phase of the stress response, glucocorticoids trigger the creation of an enzyme that stores the nutrients circulating in the bloodstream after an episode of emotional eating as visceral fat, or fat located in the abdominal area. Those who struggle with emotional eating are therefore at greater risk for abdominal obesity, which is in turn linked to a greater risk of metabolic and cardiovascular disease.

Treatment

There are numerous ways in which individuals can reduce emotional distress without engaging in emotional eating as a means to cope. The most salient choice is to minimize maladaptive coping strategies and to maximize adaptive ones. A study conducted by Corstorphine et al. in 2007 investigated the relationship between distress tolerance and disordered eating, focusing specifically on how different coping strategies affect distress tolerance and disordered eating. The researchers found that individuals who engage in disordered eating often employ emotional avoidance strategies: faced with strong negative emotions, they may choose to avoid the situation by distracting themselves through overeating. Discouraging emotional avoidance is thus an important facet of emotional eating treatment. The most obvious way to limit emotional avoidance is to confront the issue through techniques like problem solving; Corstorphine et al. showed that engaging in problem-solving strategies enhances one's ability to tolerate emotional distress. Since emotional distress is correlated with emotional eating, the ability to better manage one's negative affect should allow an individual to cope with a situation without resorting to overeating. One way to combat emotional eating is to employ mindfulness techniques. For example, approaching cravings with nonjudgmental inquisitiveness can help differentiate between hunger and emotionally driven cravings. An individual may ask whether the craving developed rapidly, as emotional eating tends to be triggered spontaneously, and may also take the time to note bodily sensations, such as hunger pangs, and coinciding emotions, such as guilt or shame, in order to make conscious decisions to avoid emotional eating. Emotional eating can also be improved by evaluating physical factors such as hormone balance; female hormones, in particular, can alter cravings and even self-perception of one's body. Additionally, emotional eating can be exacerbated by social pressure to be thin: the cultural focus on thinness and dieting can make young girls, especially, vulnerable to falling into food restriction and subsequent emotional eating behavior.
Emotional eating disorder predisposes individuals to more serious eating disorders and physiological complications; combatting disordered eating before such progression takes place has therefore become the focus of many clinical psychologists.

Emotional undereating

In a smaller percentage of individuals, emotional eating may conversely consist of eating less, called stress fasting or emotional undereating. This is believed to result from the fight-or-flight response. In some individuals, depression and other psychological disorders can also lead to emotional fasting or starvation. While emotional overeating is typically the focal point in addressing emotional eating issues, some individuals experience symptoms of emotional eating as undereating, self-deprivation, or decreased appetite. Emotional overeating and undereating issues generally arise during the preschool years. Understanding the childhood indicators of emotional overeating (EOE) and emotional undereating (EUE) is crucial, as both cause various negative health impacts; for instance, studies have found that young people with restrictive eating disorders had permanently stunted height growth. EOE is generally associated with excess weight, while EUE is linked to lower weight. Despite their different connections to weight, the two conditions exhibit a positive correlation, and some children display tendencies toward both EOE and EUE in response to stressful situations; that is, a child who emotionally overeats may tend to emotionally undereat as well. The study conducted on twins revealed that shared environment is one of the factors underlying EOE and EUE: genetic factors had less impact than expected, playing only a 7% role, whereas the shared environment accounted for a substantial 91% of the influence. The family environment emerged as a significant factor in shaping a child's eating behaviors. Children whose families use food to calm them have a higher likelihood of experiencing EOE; pressuring children to eat, imposing strict rules, or restricting how much they eat were also associated with EOE. Another study highlighted that lack of social support and a negative family environment were more closely linked to EUE. For instance, there is a higher probability of EUE in children from hostile family relationships, and in many women diagnosed with anorexia, lack of social support and childhood EUE were observed.

See also

Comfort food
Food addiction
Hedonic hunger
Overeating
Social determinants of health

References

Eating behaviors
Emotional issues
Habit and impulse disorders
Physiology
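The 7% genetic / 91% shared-environment split quoted above is the kind of figure produced by twin-study variance decomposition. As a rough illustration only (not the Gemini study's actual method or data), Falconer's classic approximation splits trait variance into additive genetic (A), shared-environment (C), and non-shared-environment (E) components from monozygotic and dizygotic twin correlations; the correlations below are hypothetical values chosen merely to reproduce the reported proportions.

```python
def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Falconer's approximation: decompose trait variance into additive
    genetic (A), shared-environment (C), and non-shared-environment (E)
    components from MZ and DZ twin correlations."""
    a = 2 * (r_mz - r_dz)   # MZ twins share roughly twice the segregating genes of DZ twins
    c = 2 * r_dz - r_mz     # similarity not attributable to genes
    e = 1 - r_mz            # non-shared environment (plus measurement error)
    return {"A": a, "C": c, "E": e}

# Hypothetical correlations chosen to reproduce the ~7% genetic /
# ~91% shared-environment split reported for emotional over/undereating.
print(falconer_ace(r_mz=0.98, r_dz=0.945))
# ≈ {'A': 0.07, 'C': 0.91, 'E': 0.02}  (up to floating-point noise)
```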
Emotional eating
[ "Biology" ]
3,748
[ "Biological interactions", "Eating behaviors", "Behavior", "Physiology" ]
42,528,507
https://en.wikipedia.org/wiki/Birnbaumins
Birnbaumins are a pair of alkaloids and toxic yellow pigment compounds first isolated from the flowerpot parasol mushroom. These toxins can cause gastric ulcers if consumed.

References

Tryptamine alkaloids
Mycotoxins found in Basidiomycota
Amidines
Oximes
Hydroxylamines
Biological pigments
Conjugated ketones
Amides
Birnbaumins
[ "Chemistry", "Biology" ]
78
[ "Tryptamine alkaloids", "Amidines", "Hydroxylamines", "Functional groups", "Reducing agents", "Alkaloids by chemical classification", "Oximes", "Biological pigments", "Amides", "Bases (chemistry)", "Pigmentation" ]
42,528,617
https://en.wikipedia.org/wiki/Hexahydro-1%2C3%2C5-triazine
In chemistry, hexahydro-1,3,5-triazine is a class of heterocyclic compounds with the formula (CH2NR)3. Known as aldehyde ammonias, these compounds characteristically crystallize with water. They are reduced derivatives of 1,3,5-triazine, which has the formula (CHN)3 and belongs to a family of aromatic heterocycles. They are also called triazacyclohexanes or TACHs, but this acronym is also applied to cis,cis-1,3,5-triaminocyclohexane.

Preparation

N,N',N''-trisubstituted hexahydro-1,3,5-triazines arise from the condensation of a primary amine and formaldehyde, as illustrated by the route to 1,3,5-trimethyl-1,3,5-triazacyclohexane:

3 CH2O + 3 H2NMe → (CH2NMe)3 + 3 H2O

The C-substituted derivatives are obtained by reaction of aldehydes and ammonia:

3 RCHO + 3 NH3 → (RCHNH)3 + 3 H2O

1-Alkanolamines are intermediates in these condensation reactions. The parent hexahydro-1,3,5-triazine, (CH2NH)3, has been detected as an intermediate in the condensation of formaldehyde and ammonia; this reaction affords hexamethylenetetramine. The N-substituted derivatives are more stable. The N,N',N''-triacyltriazines are triazines with acyl groups attached to the three nitrogen centers of the ring. These triacyltriazines arise from the reaction of hexamethylenetetramine with acid chlorides or the condensation of amides with formaldehyde.

Structure

Unlike the parent triazines, the hexahydro derivatives are conformationally flexible.

Related compounds and derivatives

Trimers of isocyanates are sometimes labeled as 2,4,6-trioxohexahydro-1,3,5-triazines. They have the formula (RNC(O))3 and are based on the isocyanuric (trione) tautomer of cyanuric acid. The N,N',N''-trisubstituted hexahydro-1,3,5-triazines function as tridentate ligands, called TACH (triazacyclohexane) ligands. Examples include Mo(CO)3[(CH2)3(NMe)3], formed from the TACH ligand and molybdenum hexacarbonyl. Hexahydro-1,3,5-triazine polymers have also been synthesized.

References

Amines
Triazines
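As a quick sanity check on the condensation stoichiometry given above (an illustrative sketch, not part of the original article), a few lines of Python confirm that 3 CH2O + 3 CH3NH2 → (CH2NMe)3 + 3 H2O is atom-balanced:

```python
from collections import Counter

# Empirical formulas as atom counts.
CH2O   = Counter({"C": 1, "H": 2, "O": 1})   # formaldehyde
MeNH2  = Counter({"C": 1, "H": 5, "N": 1})   # methylamine, CH3NH2
unit   = Counter({"C": 2, "H": 5, "N": 1})   # one CH2NMe repeat unit of the trimer
H2O    = Counter({"H": 2, "O": 1})

def scale(formula: Counter, n: int) -> Counter:
    """Multiply every atom count in a formula by a stoichiometric coefficient n."""
    return Counter({atom: n * count for atom, count in formula.items()})

# 3 CH2O + 3 MeNH2  vs  (CH2NMe)3 + 3 H2O
lhs = scale(CH2O, 3) + scale(MeNH2, 3)
rhs = scale(unit, 3) + scale(H2O, 3)
assert lhs == rhs, (lhs, rhs)
print("balanced:", dict(lhs))   # balanced: {'C': 6, 'H': 21, 'O': 3, 'N': 3}
```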
Hexahydro-1,3,5-triazine
[ "Chemistry" ]
616
[ "Amines", "Bases (chemistry)", "Functional groups" ]
42,529,692
https://en.wikipedia.org/wiki/Central%20Glass%20and%20Ceramic%20Research%20Institute
Central Glass and Ceramic Research Institute (CGCRI) is a Kolkata-based national research institute under the Council of Scientific and Industrial Research, India. Established in 1950, it focuses on the areas of glass, ceramics, mica, refractories, etc.

References

External links

Research institutes in Kolkata
Glass engineering and science
Research institutes in West Bengal
Ceramics
1950 establishments in West Bengal
Research institutes established in 1950
Central Glass and Ceramic Research Institute
[ "Materials_science", "Engineering" ]
82
[ "Glass engineering and science", "Materials science" ]
42,531,058
https://en.wikipedia.org/wiki/Drag%20cost
Drag cost is a project management metric developed by Stephen Devaux as part of the Total Project Control (TPC) approach to project schedule and cost analysis. It is the amount by which a project's expected return on investment (ROI) is reduced due to the critical path drag of a specific critical path activity (task) or other specific schedule factor, such as a schedule lag or other delaying constraint. Drag cost is computed at the activity level, but is caused by impact at the project level due to:

1. A reduction in the project's expected value because of later completion, or
2. An increase in the project's cost due to its indirect costs being increased by a longer project duration.

Drag cost computation is often used on projects to justify additional project resources. For example, if a project's expected ROI will be reduced by $5,000 for every day of duration, then an activity that has critical path drag of ten days (i.e., is delaying project completion by ten days) will have a drag cost of $50,000. If the addition of a resource that costs $10,000 would reduce the activity's drag to five days, the drag cost would be reduced by $25,000 and the project's expected ROI would be increased by $15,000 ($25,000 minus the additional $10,000 of resource costs); the sketch below walks through this arithmetic. On projects performed for non-monetary reasons, such as public literacy programs or emergency response, drag cost can be measured in units such as the reduction in citizens educated, or lives lost, due to the additional time taken by critical path activities. Just as drag is only found on the critical path, the same is true of drag cost.

Notes and references

Cost engineering
Business analysis
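The worked example above reduces to simple arithmetic; here is a minimal sketch of that calculation (illustrative only — the function name and figures come from the article's example, not from any standard TPC library):

```python
def drag_cost(drag_days: float, roi_loss_per_day: float) -> float:
    """Drag cost = expected project ROI lost due to an activity's critical path drag."""
    return drag_days * roi_loss_per_day

# The article's example: $5,000 of expected ROI lost per day of project duration.
roi_loss_per_day = 5_000.0

before = drag_cost(drag_days=10, roi_loss_per_day=roi_loss_per_day)  # $50,000
after  = drag_cost(drag_days=5,  roi_loss_per_day=roi_loss_per_day)  # $25,000 after adding the resource

extra_resource_cost = 10_000.0
net_roi_gain = (before - after) - extra_resource_cost
print(f"drag cost reduced by ${before - after:,.0f}; "
      f"net ROI gain ${net_roi_gain:,.0f}")
# drag cost reduced by $25,000; net ROI gain $15,000
```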
Drag cost
[ "Engineering" ]
362
[ "Cost engineering" ]