Dataset columns: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list)
939,466
https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28energy%29
This list compares various energies in joules (J), organized by order of magnitude. Sections: below 1 J; 1 to 10^5 J; 10^6 to 10^11 J; 10^12 to 10^17 J; 10^18 to 10^23 J; over 10^24 J; SI multiples. See also Conversion of units of energy Energy conversion efficiency Energy density Metric system Outline of energy Scientific notation TNT equivalent Notes
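As a quick illustration of how entries in such a list are binned (not part of the article itself; the bin labels simply mirror the section headings above), here is a minimal Python sketch:

```python
import math

# Classify an energy (in joules) by order of magnitude, mirroring the
# bins used in the list (below 1 J, 1 to 10^5 J, and so on).
BINS = [(1e0, "below 1 J"), (1e6, "1 to 10^5 J"), (1e12, "10^6 to 10^11 J"),
        (1e18, "10^12 to 10^17 J"), (1e24, "10^18 to 10^23 J")]

def order_of_magnitude(energy_j: float) -> int:
    """Return floor(log10(E)) for an energy E in joules."""
    return math.floor(math.log10(energy_j))

def bin_label(energy_j: float) -> str:
    for upper, label in BINS:
        if energy_j < upper:
            return label
    return "over 10^24 J"

# One tonne of TNT is about 4.184e9 J: order 9, i.e. the 10^6 to 10^11 J bin.
print(order_of_magnitude(4.184e9), bin_label(4.184e9))
```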
Orders of magnitude (energy)
[ "Physics", "Mathematics" ]
75
[ "Physical quantities", "Quantity", "Units of energy", "Energy (physics)", "Energy", "Orders of magnitude", "Units of measurement" ]
940,160
https://en.wikipedia.org/wiki/Combined%20diesel%20and%20gas
Combined diesel and gas (CODAG) is a type of propulsion system for ships that need a maximum speed considerably faster than their cruise speed, particularly warships like modern frigates or corvettes. Pioneered by Germany, a CODAG system consists of diesel engines for cruising and gas turbines that can be switched on for high-speed transits. In most cases the difference in power output between the diesel engines alone and the diesels and turbine combined is too large for controllable-pitch propellers to limit the shaft speed, so the diesels cannot continue to operate without changing the gear ratios of their transmissions. Because of that, special multi-speed gearboxes are needed. This contrasts with combined diesel or gas (CODOG) systems, which couple the diesels to the shaft through a simple, fixed-ratio gearbox but disengage them when the turbine is powered up. For example, in the new CODAG-propelled frigates of the Royal Norwegian Navy, the gear ratio for the diesel engines is changed from about 1:7.7 (engine:propeller) in diesel-only mode to 1:5.3 in diesel-and-turbine mode. Some ships even have three different gear ratios for the diesel engines: one each for single-diesel and double-diesel cruising, and a third for when the gas turbine is engaged. Such a propulsion system has a smaller footprint than a diesel-only power plant with the same maximum power output, since smaller engines can be used and the gas turbine and gearbox do not need much additional space. Still, it retains the high fuel efficiency of diesel engines when cruising, allowing greater range and lower fuel costs than with gas turbines alone. On the other hand, a more complex, heavy, and troublesome gearing is needed. CODAG warships typically cruise on diesel power and reach a considerably higher maximum speed with the turbine switched on. Turbines and diesels on separate shafts Sometimes an arrangement in which the diesel engines and gas turbines drive their own separate shafts and propellers is also called CODAG. Such installations avoid a complicated switching gearbox, but have some disadvantages compared to true CODAG systems: since more propellers have to be used, they have to be smaller and thus less efficient, and the propellers of the idling system cause drag. CODAG WARP CODAG Water jet And Refined Propeller (WARP), a system developed by Blohm+Voss as an option for their MEKO line of ships, also falls into this category, but avoids the above-mentioned problems. CODAG WARP uses two diesel engines to drive two propellers in a combined diesel and diesel (CODAD) arrangement, i.e., both shafts can also be powered by any single engine, plus a centerline water jet powered by a gas turbine. The idling water jet does not cause drag, and since its nozzle can be placed further aft and higher, it does not affect the size of the propellers. CODAG-electric Another way to combine the two types of engines is to connect them to generators and drive the propellers electrically, as in a diesel-electric arrangement. See combined diesel-electric and gas (CODLAG) and integrated electric propulsion (IEP). This also permits propeller pods, with the propulsion motors located inside the pods. Land vehicles The Swedish Stridsvagn 103 utilizes a diesel engine for slow cruising and aiming, and a gas turbine for additional power. See also Combined gas and gas (COGAG) References External links CODAG WARP @ naval-technology.com
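A minimal Python sketch of the gear-ratio arithmetic described above; the 1:7.7 and 1:5.3 ratios are the ones quoted in the text, while the engine speed is a hypothetical illustrative value:

```python
# Illustrative numbers only: the engine speed is hypothetical; the gear
# ratios (1:7.7 diesel-only, 1:5.3 diesel-and-turbine) are the ones quoted
# above for the Royal Norwegian Navy's CODAG frigates.
def propeller_rpm(engine_rpm: float, ratio: float) -> float:
    """Reduction gearbox: the propeller turns once per `ratio` engine turns."""
    return engine_rpm / ratio

diesel_rpm = 1000.0                     # hypothetical diesel cruise rpm
print(propeller_rpm(diesel_rpm, 7.7))   # ~130 rpm in diesel-only mode
# With the turbine engaged, the lower reduction ratio lets the same diesel
# rpm drive the shaft faster, so the diesels stay in their operating band
# while the combined plant spins the propeller near its design maximum.
print(propeller_rpm(diesel_rpm, 5.3))   # ~189 rpm in combined mode
```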
Combined diesel and gas
[ "Engineering" ]
714
[ "Marine propulsion", "Marine engineering" ]
943,382
https://en.wikipedia.org/wiki/Tsirelson%27s%20bound
A Tsirelson bound is an upper limit to quantum mechanical correlations between distant events. Given that quantum mechanics violates Bell inequalities (i.e., it cannot be described by a local hidden-variable theory), a natural question to ask is how large the violation can be. The answer is precisely the Tsirelson bound for the particular Bell inequality in question. In general, this bound is lower than the bound that would be obtained if more general theories, constrained only by "no-signalling" (i.e., that they do not permit communication faster than light), were considered, and much research has been dedicated to the question of why this is the case. The Tsirelson bounds are named after Boris S. Tsirelson (or Cirel'son, in a different transliteration), the author of the article in which the first one was derived. Bound for the CHSH inequality The first Tsirelson bound was derived as an upper bound on the correlations measured in the CHSH inequality. It states that if we have four (Hermitian) dichotomic observables $A_0$, $A_1$, $B_0$, $B_1$ (i.e., two observables for Alice and two for Bob) with outcomes $\pm 1$ such that $[A_i, B_j] = 0$ for all $i, j$, then $\langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \le 2\sqrt{2}$. For comparison, in the classical case (or local realistic case) the upper bound is 2, whereas if any arbitrary assignment of $\pm 1$ is allowed, it is 4. The Tsirelson bound is attained already if Alice and Bob each make measurements on a qubit, the simplest non-trivial quantum system. Several proofs of this bound exist, but perhaps the most enlightening one is based on the Khalfin–Tsirelson–Landau identity. If we define an observable $\mathcal{B} = A_0 B_0 + A_0 B_1 + A_1 B_0 - A_1 B_1$ and assume $A_i^2 = B_j^2 = \mathbb{1}$, i.e., that the observables' outcomes are $\pm 1$, then $\mathcal{B}^2 = 4\,\mathbb{1} - [A_0, A_1][B_0, B_1]$. If $[A_0, A_1] = 0$ or $[B_0, B_1] = 0$, which can be regarded as the classical case, it already follows that $\langle \mathcal{B} \rangle \le 2$. In the quantum case, we need only notice that $\|[A_0, A_1][B_0, B_1]\| \le 4$, and the Tsirelson bound $\langle \mathcal{B} \rangle \le 2\sqrt{2}$ follows. Other Bell inequalities Tsirelson also showed that for any bipartite full-correlation Bell inequality with m inputs for Alice and n inputs for Bob, the ratio between the Tsirelson bound and the local bound is at most the Grothendieck constant $K_G$; since already the Grothendieck constant of order 2 equals $\sqrt{2}$, this bound implies the above result about the CHSH inequality. In general, obtaining a Tsirelson bound for a given Bell inequality is a hard problem that has to be solved on a case-by-case basis; it is not even known to be decidable. The best known computational method for upper-bounding it is a convergent hierarchy of semidefinite programs, the NPA hierarchy, which in general does not halt. The exact values are known for a few more Bell inequalities: for the Braunstein–Caves (chained) inequalities and for the WWŻB inequalities the Tsirelson bounds are known in closed form. For the $I_{3322}$ inequality the Tsirelson bound is not known exactly: concrete realisations give a lower bound, the NPA hierarchy gives a nearly matching upper bound, and it is conjectured that only infinite-dimensional quantum states can reach it. Derivation from physical principles Significant research has been dedicated to finding a physical principle that explains why quantum correlations go only up to the Tsirelson bound and no further. Three such principles have been found: no-advantage for non-local computation, information causality, and macroscopic locality. That is to say, if one could achieve a CHSH correlation exceeding Tsirelson's bound, all such principles would be violated. Tsirelson's bound also follows if the Bell experiment admits a strongly positive quantal measure. Tsirelson's problem There are two different ways of defining the Tsirelson bound of a Bell expression. 
One demands that the measurements have a tensor product structure; the other demands only that they commute. Tsirelson's problem is the question of whether these two definitions are equivalent. More formally, let $B = \sum_{a,b,x,y} \mu_{abxy}\, p(ab|xy)$ be a Bell expression, where $p(ab|xy)$ is the probability of obtaining outcomes $a, b$ with the settings $x, y$. The tensor-product Tsirelson bound $T_t$ is then the supremum of the value attained in this Bell expression by making measurements $A^x_a \otimes \mathbb{1}$ and $\mathbb{1} \otimes B^y_b$ on a quantum state $|\psi\rangle \in \mathcal{H}_A \otimes \mathcal{H}_B$: $T_t = \sup \sum_{a,b,x,y} \mu_{abxy} \langle \psi | A^x_a \otimes B^y_b | \psi \rangle$. The commuting Tsirelson bound $T_c$ is the supremum of the value attained in this Bell expression by making measurements $A^x_a$ and $B^y_b$ such that $[A^x_a, B^y_b] = 0$ on a quantum state $|\psi\rangle \in \mathcal{H}$: $T_c = \sup \sum_{a,b,x,y} \mu_{abxy} \langle \psi | A^x_a B^y_b | \psi \rangle$. Since tensor product algebras in particular commute, $T_t \le T_c$. In finite dimensions commuting algebras are always isomorphic to (direct sums of) tensor product algebras, so only in infinite dimensions is it possible that $T_t \ne T_c$. Tsirelson's problem is the question of whether $T_t = T_c$ for all Bell expressions. This question was first considered by Boris Tsirelson in 1993, when he asserted without proof that $T_t = T_c$. Upon being asked for a proof by Antonio Acín in 2006, he realized that the one he had in mind didn't work, and issued the question as an open problem. Together with Miguel Navascués and Stefano Pironio, Antonio Acín had developed a hierarchy of semidefinite programs, the NPA hierarchy, that converges to the commuting Tsirelson bound $T_c$ from above, and they wanted to know whether it also converged to the tensor-product Tsirelson bound $T_t$, the most physically relevant one. Since one can produce a converging sequence of approximations to $T_t$ from below by considering finite-dimensional states and observables, if $T_t = T_c$, then this procedure can be combined with the NPA hierarchy to produce a halting algorithm to compute the Tsirelson bound, making it a computable number (note that in isolation neither procedure halts in general). Conversely, if $T_t$ is not computable, then $T_t \ne T_c$. In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen claimed to have proven that $T_t$ is not computable, thus solving Tsirelson's problem in the negative. Tsirelson's problem has been shown to be equivalent to Connes' embedding problem, so the same proof also implies that the Connes embedding conjecture is false. See also Quantum nonlocality Bell's theorem EPR paradox CHSH inequality Quantum pseudo-telepathy References
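A small numerical check of the CHSH Tsirelson bound, as a sketch assuming the standard optimal qubit observables (the specific operator choice is ours, not taken from the article):

```python
import numpy as np

# Verify the CHSH Tsirelson bound 2*sqrt(2) numerically, using the
# conventional optimal qubit observables (an assumption of this example).
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

# CHSH operator A0 B0 + A0 B1 + A1 B0 - A1 B1; tensor-product observables
# commute across the two factors, as the definition above requires.
C = (np.kron(A0, B0) + np.kron(A0, B1)
     + np.kron(A1, B0) - np.kron(A1, B1))

print(np.max(np.linalg.eigvalsh(C)))   # ~2.8284...
print(2 * np.sqrt(2))                  # the Tsirelson bound
```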
Tsirelson's bound
[ "Physics" ]
1,293
[ "Quantum measurement", "Quantum mechanics" ]
943,817
https://en.wikipedia.org/wiki/Convective%20overturn
The convective overturn model of supernovae was proposed by Bethe and Wilson in 1985, and received a dramatic test with SN 1987A and the detection of neutrinos from the explosion. The model is for type II supernovae, which take place in stars more massive than 8 solar masses. When the iron core of a massive star becomes heavier than electron degeneracy pressure can support, the core of the star collapses, and the iron core is compressed by gravity until nuclear densities are reached, when a strong rebound sends a shock wave through the rest of the star and tears it apart in a large supernova explosion. The remains of this core will eventually become a neutron star. The collapse produces two reactions: one breaks apart iron nuclei into 13 helium nuclei and 4 neutrons, absorbing energy; and the second produces a wave of neutrinos that forms a shock wave. While all models agree there is a convective shock, there is disagreement as to how important that shock is to the supernova explosion. In the convective overturn model, the core collapses faster and faster, exceeding the speed of sound inside the star and producing a supersonic shock wave. This shock wave moves outward until it stalls when it reaches the neutrinosphere, where the pressure of the star collapsing inward exceeds the pressure of the neutrinos radiating outward. This point produces heavier elements as the neutrinos are absorbed. The stalling of the shock wave represents the supernova problem, because once stalled, the shock wave should not be "reenergized". The prompt convection model states that the shock wave will increase the luminosity of the neutrinos produced by the core collapse, and this increase in energy will restart the shock wave. The neutron fingers model has an instability near the core expel another wave of energized neutrinos, which reenergizes the shock wave. The entropy convection model has matter falling inward from above the shock layer down to the gain radius, which would not increase the neutrino luminosity but would allow the shock wave to continue outward. All of these models exhibit convective overturn in that they rely on a convection mechanism to re-energize the stalled shock wave and complete the supernova explosion. There are still open issues in both the convective models and in the more general core collapse model, which include not taking into account flavor mixing and the mass of neutrinos, and the inability to model large explosions. Current models indicate that the collapse may occur more slowly than previously thought, which would mean the shock wave would penetrate further into the upper layers of the star. The proto-neutron star boosts neutrino luminosities, and the additional neutrinos emitted help re-energize the shock wave. These changes remove some, but not all, of the supernova problem, and strengthen the idea of convection being an important factor in supernova explosions. References Current convection models and problems; Core collapse issues, 2004 conference; Bethe, H.A., & Wilson, J.R. 1985, ApJ, 295, 14.
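A rough bookkeeping sketch of the energy absorbed by the iron-to-helium photodisintegration step described above, using standard approximate binding-energy values (the numbers below are textbook figures, not taken from the article):

```python
# Fe-56 -> 13 He-4 + 4 n absorbs energy because iron is more tightly
# bound than the fragments. Binding energies are approximate tabulated
# values; free neutrons contribute no binding energy.
BE_FE56 = 492.26   # MeV, total binding energy of Fe-56
BE_HE4 = 28.30     # MeV, total binding energy of He-4

absorbed = BE_FE56 - 13 * BE_HE4
print(f"~{absorbed:.0f} MeV absorbed per iron nucleus")  # ~124 MeV
```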
Convective overturn
[ "Physics", "Chemistry", "Astronomy" ]
655
[ "Transport phenomena", "Physical phenomena", "Convection", "Astrophysics", "Thermodynamics", "Astronomical sub-disciplines" ]
944,442
https://en.wikipedia.org/wiki/Brans%E2%80%93Dicke%20theory
In physics, the Brans–Dicke theory of gravitation (sometimes called the Jordan–Brans–Dicke theory) is a competitor to Einstein's general theory of relativity. It is an example of a scalar–tensor theory, a gravitational theory in which the gravitational interaction is mediated by a scalar field as well as the tensor field of general relativity. The gravitational constant is not presumed to be constant but instead is replaced by a scalar field which can vary from place to place and with time. The theory was developed in 1961 by Robert H. Dicke and Carl H. Brans, building upon, among others, the earlier 1959 work of Pascual Jordan. At present, both Brans–Dicke theory and general relativity are generally held to be in agreement with observation. Brans–Dicke theory represents a minority viewpoint in physics. Comparison with general relativity Both Brans–Dicke theory and general relativity are examples of a class of relativistic classical field theories of gravitation, called metric theories. In these theories, spacetime is equipped with a metric tensor, $g_{ab}$, and the gravitational field is represented (in whole or in part) by the Riemann curvature tensor $R_{abcd}$, which is determined by the metric tensor. All metric theories satisfy the Einstein equivalence principle, which in modern geometric language states that in a very small region (too small to exhibit measurable curvature effects), all the laws of physics known in special relativity are valid in local Lorentz frames. This implies in turn that metric theories all exhibit the gravitational redshift effect. As in general relativity, the source of the gravitational field is considered to be the stress–energy tensor or matter tensor. However, the way in which the immediate presence of mass-energy in some region affects the gravitational field in that region differs from general relativity. So does the way in which spacetime curvature affects the motion of matter. In the Brans–Dicke theory, in addition to the metric, which is a rank-two tensor field, there is a scalar field, $\phi$, which has the physical effect of changing the effective gravitational constant from place to place. (This feature was actually a key desideratum of Dicke and Brans; see the paper by Brans cited below, which sketches the origins of the theory.) The field equations of Brans–Dicke theory contain a parameter, $\omega$, called the Brans–Dicke coupling constant. This is a true dimensionless constant which must be chosen once and for all. However, it can be chosen to fit observations. Such parameters are often called tunable parameters. In addition, the present ambient value of the effective gravitational constant must be chosen as a boundary condition. General relativity contains no dimensionless parameters whatsoever, and therefore is easier to falsify (show whether false) than Brans–Dicke theory. Theories with tunable parameters are sometimes deprecated on the principle that, of two theories which both agree with observation, the more parsimonious is preferable. On the other hand, it seems as though they are a necessary feature of some theories, such as the weak mixing angle of the Standard Model. Brans–Dicke theory is "less stringent" than general relativity in another sense: it admits more solutions. 
In particular, exact vacuum solutions to the Einstein field equation of general relativity, augmented by the trivial scalar field $\phi = 1$, become exact vacuum solutions in Brans–Dicke theory, but some spacetimes which are not vacuum solutions to the Einstein field equation become, with the appropriate choice of scalar field, vacuum solutions of Brans–Dicke theory. Similarly, an important class of spacetimes, the pp-wave metrics, are also exact null dust solutions of both general relativity and Brans–Dicke theory, but here too, Brans–Dicke theory allows additional wave solutions having geometries which are incompatible with general relativity. Like general relativity, Brans–Dicke theory predicts light deflection and the precession of perihelia of planets orbiting the Sun. However, the precise formulas which govern these effects, according to Brans–Dicke theory, depend upon the value of the coupling constant $\omega$. This means that it is possible to set an observational lower bound on the possible value of $\omega$ from observations of the solar system and other gravitational systems. The value of $\omega$ consistent with experiment has risen with time. In 1973, $\omega > 5$ was consistent with known data. By 1981, $\omega > 30$ was consistent with known data. In 2003, evidence derived from the Cassini–Huygens experiment showed that the value of $\omega$ must exceed 40,000. It is also often taught that general relativity is obtained from the Brans–Dicke theory in the limit $\omega \to \infty$. But Faraoni claims that this breaks down when the trace of the stress–energy tensor vanishes, i.e. $T = 0$, an example of which is the Campanelli–Lousto wormhole solution. Some have argued that only general relativity satisfies the strong equivalence principle. The field equations The field equations of the Brans–Dicke theory are $G_{ab} = \frac{8\pi}{\phi} T_{ab} + \frac{\omega}{\phi^2}\left(\partial_a\phi\,\partial_b\phi - \tfrac{1}{2}\, g_{ab}\,\partial_c\phi\,\partial^c\phi\right) + \frac{1}{\phi}\left(\nabla_a\nabla_b\phi - g_{ab}\,\Box\phi\right) - \frac{V(\phi)}{2\phi}\, g_{ab}$ and $\Box\phi = \frac{8\pi T + \phi\, V'(\phi) - 2 V(\phi)}{3 + 2\omega}$, where $\omega$ is the dimensionless Dicke coupling constant; $g_{ab}$ is the metric tensor; $G_{ab}$ is the Einstein tensor, a kind of average curvature; $R_{ab}$ is the Ricci tensor, a kind of trace of the curvature tensor; $R$ is the Ricci scalar, the trace of the Ricci tensor; $T_{ab}$ is the stress–energy tensor; $T$ is the trace of the stress–energy tensor; $\phi$ is the scalar field; $V(\phi)$ is the scalar potential; $V'(\phi)$ is the derivative of the scalar potential with respect to $\phi$; and $\Box$ is the Laplace–Beltrami operator or covariant wave operator, $\Box\phi = \nabla^a\nabla_a\phi$. The first equation describes how the stress–energy tensor and scalar field $\phi$ together affect spacetime curvature. The left-hand side, the Einstein tensor, can be thought of as a kind of average curvature. It is a matter of pure mathematics that, in any metric theory, the Riemann tensor can always be written as the sum of the Weyl curvature (or conformal curvature tensor) and a piece constructed from the Einstein tensor. The second equation says that the trace of the stress–energy tensor acts as the source for the scalar field $\phi$. Since electromagnetic fields contribute only a traceless term to the stress–energy tensor, this implies that in a region of spacetime containing only an electromagnetic field (plus the gravitational field), the right-hand side vanishes, and $\phi$ obeys the (curved spacetime) wave equation. Therefore, changes in $\phi$ propagate through electrovacuum regions; in this sense, we say that $\phi$ is a long-range field. For comparison, the field equation of general relativity is simply $G_{ab} = 8\pi T_{ab}$. This means that in general relativity, the Einstein curvature at some event is entirely determined by the stress–energy tensor at that event; the other piece, the Weyl curvature, is the part of the gravitational field which can propagate as a gravitational wave across a vacuum region. 
But in the Brans–Dicke theory, the Einstein tensor is determined partly by the immediate presence of mass–energy and momentum, and partly by the long-range scalar field $\phi$. The vacuum field equations of both theories are obtained when the stress–energy tensor vanishes. This models situations in which no non-gravitational fields are present. The action principle The following Lagrangian contains the complete description of the Brans–Dicke theory: $S = \frac{1}{16\pi}\int \mathrm{d}^4x\,\sqrt{-g}\left(\phi R - \frac{\omega}{\phi}\,\partial_a\phi\,\partial^a\phi - V(\phi)\right) + \int \mathrm{d}^4x\,\sqrt{-g}\,\mathcal{L}_\mathrm{M}$, where $g$ is the determinant of the metric, $\sqrt{-g}\,\mathrm{d}^4x$ is the four-dimensional volume form, and $\mathcal{L}_\mathrm{M}$ is the matter term, or matter Lagrangian density. The matter term includes the contribution of ordinary matter (e.g. gaseous matter) and also electromagnetic fields. In a vacuum region, the matter term vanishes identically; the remaining term is the gravitational term. To obtain the vacuum field equations, we must vary the gravitational term in the Lagrangian with respect to the metric $g^{ab}$; this gives the first field equation above. When we vary with respect to the scalar field $\phi$, we obtain the second field equation. Note that, unlike for the general relativity field equations, the term $\int \phi\, g^{ab}\,\delta R_{ab}\,\sqrt{-g}\,\mathrm{d}^4x$ does not vanish, as the integrand is not a total derivative. It can be shown that $\int \phi\, g^{ab}\,\delta R_{ab}\,\sqrt{-g}\,\mathrm{d}^4x = \int \left(g_{ab}\,\Box\phi - \nabla_a\nabla_b\phi\right)\delta g^{ab}\,\sqrt{-g}\,\mathrm{d}^4x$. To prove this result, use the Palatini identity $\delta R_{ab} = \nabla_c\,\delta\Gamma^c{}_{ab} - \nabla_b\,\delta\Gamma^c{}_{ca}$. By evaluating the $\delta\Gamma$s in Riemann normal coordinates, six individual terms vanish; six further terms combine, when manipulated using Stokes' theorem, to provide the desired $\left(g_{ab}\,\Box\phi - \nabla_a\nabla_b\phi\right)\delta g^{ab}$. For comparison, the Lagrangian defining general relativity is $S = \frac{1}{16\pi}\int \mathrm{d}^4x\,\sqrt{-g}\,R + \int \mathrm{d}^4x\,\sqrt{-g}\,\mathcal{L}_\mathrm{M}$. Varying the gravitational term with respect to $g^{ab}$ gives the vacuum Einstein field equation. In both theories, the full field equations can be obtained by variations of the full Lagrangian. See also Classical theories of gravitation Dilaton General relativity Mach's principle Scientific importance of GW170817 Notes References See Box 39.1. External links Scholarpedia article on the subject by Carl H. Brans
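A hedged numerical sketch relating the coupling-constant bounds quoted above to observable deviations from general relativity; it assumes the standard Brans–Dicke PPN relation $\gamma = (1+\omega)/(2+\omega)$, which is not spelled out in the article text:

```python
# In Brans-Dicke theory the PPN parameter gamma equals (1 + omega)/(2 + omega),
# so light-deflection and time-delay measurements of gamma bound omega from
# below; the Cassini-derived bound omega > 40,000 corresponds to a deviation
# |gamma - 1| = 1/(2 + omega) of a few times 1e-5.
def gamma(omega: float) -> float:
    return (1.0 + omega) / (2.0 + omega)

for omega in (5, 30, 40_000):
    print(omega, 1.0 - gamma(omega))   # deviation from the GR value gamma = 1
# As omega -> infinity, gamma -> 1 and the solar-system predictions become
# indistinguishable from general relativity.
```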
Brans–Dicke theory
[ "Physics" ]
1,793
[ "Theoretical physics", "Theories of gravity" ]
27,789,063
https://en.wikipedia.org/wiki/Materials%20Science%20Citation%20Index
The Materials Science Citation Index is a citation index established in 1992 by Thomson ISI (Thomson Reuters). Its overall focus is cited-reference searching of the notable and significant journal literature in materials science. The database makes accessible the various properties, behaviors, and materials studied in the materials science discipline, encompassing applied physics, ceramics, composite materials, metals and metallurgy, polymer engineering, semiconductors, thin films, biomaterials, dental technology, and optics. The database indexes relevant materials science information from over 6,000 scientific journals that are part of the multidisciplinary ISI database. Author abstracts are searchable, and citation linking connects articles sharing one or more bibliographic references. The database also allows a researcher to use an appropriate (or related) article as a base to search forward in time and discover more recently published articles that cite it. The Materials Science Citation Index lists 625 high-impact journals and is accessible via the Science Citation Index Expanded collection of databases. Editions Coverage of materials science is accomplished with the following editions: Materials Science, Ceramics; Materials Science, Characterization & Testing; Materials Science, Biomaterials; Materials Science, Coatings & Films; Materials Science, Composites; Materials Science, Paper & Wood; Materials Science, Multidisciplinary; Materials Science, Textiles. See also Science Citation Index Academic publishing List of academic databases and search engines Social Sciences Citation Index, which covers over 1500 journals, beginning with 1956 Arts and Humanities Citation Index, which covers over 1000 journals, beginning with 1975 Impact factor VINITI Database RAS References
Materials Science Citation Index
[ "Materials_science", "Engineering" ]
332
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
27,791,209
https://en.wikipedia.org/wiki/Phosphodiesterase%204
At least four types of the enzyme phosphodiesterase 4 (PDE4) are known: PDE4A PDE4B PDE4C PDE4D See also 3',5'-cyclic-AMP phosphodiesterase Phosphodiesterase (PDE) PDE4 inhibitor
Phosphodiesterase 4
[ "Chemistry", "Biology" ]
73
[ "Biochemistry", "Molecular biology" ]
27,791,233
https://en.wikipedia.org/wiki/Log-polar%20coordinates
In mathematics, log-polar coordinates (or logarithmic polar coordinates) are a coordinate system in two dimensions, where a point is identified by two numbers, one for the logarithm of the distance to a certain point, and one for an angle. Log-polar coordinates are closely connected to polar coordinates, which are usually used to describe domains in the plane with some sort of rotational symmetry. In areas like harmonic and complex analysis, log-polar coordinates are more canonical than polar coordinates. Definition and coordinate transformations Log-polar coordinates in the plane consist of a pair of real numbers (ρ,θ), where ρ is the logarithm of the distance between a given point and the origin and θ is the angle between a line of reference (the x-axis) and the line through the origin and the point. The angular coordinate is the same as for polar coordinates, while the radial coordinate is transformed according to the rule $\rho = \log r$, where $r$ is the distance to the origin. The formulas for transformation from Cartesian coordinates to log-polar coordinates are given by $\rho = \log\sqrt{x^2 + y^2}$ and $\theta = \arctan(y/x)$ (with the angle taken in the correct quadrant), and the formulas for transformation from log-polar to Cartesian coordinates are $x = e^{\rho}\cos\theta$, $y = e^{\rho}\sin\theta$. By using complex numbers $z = x + iy$, the latter transformation can be written as $z = e^{\rho + i\theta}$, i.e. the complex exponential function. From this it follows that basic equations in harmonic and complex analysis will have the same simple form as in Cartesian coordinates. This is not the case for polar coordinates. Some important equations in log-polar coordinates Laplace's equation Laplace's equation in two dimensions is given by $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$ in Cartesian coordinates. Writing the same equation in polar coordinates gives the more complicated equation $\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} = 0$, or equivalently $\left(r\frac{\partial}{\partial r}\right)^2 u + \frac{\partial^2 u}{\partial\theta^2} = 0$. However, from the relation $r = e^{\rho}$ it follows that $r\frac{\partial}{\partial r} = \frac{\partial}{\partial\rho}$, so Laplace's equation in log-polar coordinates, $\frac{\partial^2 u}{\partial\rho^2} + \frac{\partial^2 u}{\partial\theta^2} = 0$, has the same simple expression as in Cartesian coordinates. This is true for all coordinate systems where the transformation to Cartesian coordinates is given by a conformal mapping. Thus, when considering Laplace's equation for a part of the plane with rotational symmetry, e.g. a circular disk, log-polar coordinates are the natural choice. Cauchy–Riemann equations A similar situation arises when considering analytic functions. An analytic function $f(x,y) = u(x,y) + iv(x,y)$ written in Cartesian coordinates satisfies the Cauchy–Riemann equations: $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$, $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$. If the function instead is expressed in polar form $f = Re^{i\Phi}$, the Cauchy–Riemann equations take a more complicated form. Just as in the case of Laplace's equation, the simple form of Cartesian coordinates is recovered by changing polar into log-polar coordinates (let $P = \log R$): $\frac{\partial P}{\partial\rho} = \frac{\partial\Phi}{\partial\theta}$, $\frac{\partial P}{\partial\theta} = -\frac{\partial\Phi}{\partial\rho}$. The Cauchy–Riemann equations can also be written in one single equation as $\left(\frac{\partial}{\partial x} + i\frac{\partial}{\partial y}\right) f = 0$. By expressing $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ in terms of $\frac{\partial}{\partial\rho}$ and $\frac{\partial}{\partial\theta}$, this equation can be written in the equivalent form $\left(\frac{\partial}{\partial\rho} + i\frac{\partial}{\partial\theta}\right) f = 0$. Euler's equation When one wants to solve the Dirichlet problem in a domain with rotational symmetry, the usual thing to do is to use the method of separation of variables for partial differential equations for Laplace's equation in polar form. This means that one writes $u(r,\theta) = R(r)\Theta(\theta)$. Laplace's equation is then separated into two ordinary differential equations $\Theta''(\theta) + \nu^2\Theta(\theta) = 0$ and $r^2 R''(r) + r R'(r) - \nu^2 R(r) = 0$, where $\nu$ is a constant. The first of these has constant coefficients and is easily solved. The second is a special case of Euler's equation $r^2 R''(r) + c\,r R'(r) + d\,R(r) = 0$, where $c$ and $d$ are constants. 
This equation is usually solved by the ansatz $R(r) = r^{\lambda}$, but through use of the log-polar radius it can be changed into an equation with constant coefficients: writing $R(r) = Q(\rho)$ with $r = e^{\rho}$ gives $Q''(\rho) + (c-1)\,Q'(\rho) + d\,Q(\rho) = 0$. When considering Laplace's equation, $c = 1$ and $d = -\nu^2$, so the equation for $Q$ takes the simple form $Q''(\rho) - \nu^2 Q(\rho) = 0$. When solving the Dirichlet problem in Cartesian coordinates, these are exactly the equations for $X(x)$ and $Y(y)$ in the separation $u = X(x)Y(y)$. Thus, once again, the natural choice for a domain with rotational symmetry is not polar, but rather log-polar, coordinates. Discrete geometry In order to solve a PDE numerically in a domain, a discrete coordinate system must be introduced in this domain. If the domain has rotational symmetry and a grid consisting of rectangles is wanted, polar coordinates are a poor choice, since in the center of the circle they give rise to triangles rather than rectangles. However, this can be remedied by introducing log-polar coordinates in the following way. Divide the plane into a grid of squares with side length $2\pi/n$, where n is a positive integer. Use the complex exponential function to create a log-polar grid in the plane. The left half-plane is then mapped onto the unit disc, with the number of radii equal to n. It can be even more advantageous to instead map the diagonals in these squares, which gives a discrete coordinate system in the unit disc consisting of spirals. Dirichlet-to-Neumann operator The latter coordinate system is for instance suitable for dealing with Dirichlet and Neumann problems. If the discrete coordinate system is interpreted as an undirected graph in the unit disc, it can be considered as a model for an electrical network. To every line segment in the graph is associated a conductance given by a function $\gamma$. The electrical network will then serve as a discrete model for the Dirichlet problem in the unit disc, where the Laplace equation takes the form of Kirchhoff's law. On the nodes on the boundary of the circle, an electrical potential (Dirichlet data) is defined, which induces an electric current (Neumann data) through the boundary nodes. The linear operator $\Lambda$ from Dirichlet data to Neumann data is called a Dirichlet-to-Neumann operator, and depends on the topology and conductance of the network. In the case of the continuous disc, it follows that if the conductance is homogeneous, say $\gamma = 1$ everywhere, then the Dirichlet-to-Neumann operator satisfies the following equation: $\Lambda^2 = -\frac{\partial^2}{\partial\theta^2}$. Image analysis Already at the end of the 1970s, applications for the discrete spiral coordinate system were given in image analysis (image registration). Representing an image in this coordinate system rather than in Cartesian coordinates gives computational advantages when rotating or zooming in on an image. Also, the photoreceptors in the retina of the human eye are distributed in a way that has strong similarities to the spiral coordinate system. It can also be found in the Mandelbrot fractal. Log-polar coordinates can also be used to construct fast methods for the Radon transform and its inverse. See also Polar coordinates Cartesian coordinates Cylindrical coordinates Spherical coordinates Log-polar mapping in retinotopy References External links Non-Newtonian calculus website
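A minimal Python sketch of the coordinate transforms and the log-polar grid construction described above; the grid parameters are illustrative:

```python
import numpy as np

# The coordinate transforms defined in the article.
def to_log_polar(x: float, y: float):
    return np.log(np.hypot(x, y)), np.arctan2(y, x)   # (rho, theta)

def to_cartesian(rho: float, theta: float):
    z = np.exp(rho + 1j * theta)       # the complex exponential form
    return z.real, z.imag

# Log-polar grid: a uniform (rho, theta) lattice maps to circles and radii
# in the plane; the left half-plane (rho < 0) lands inside the unit disc.
n = 8                                   # number of radii (illustrative)
rho = np.linspace(-2.0, 0.0, 5)
theta = np.arange(n) * 2 * np.pi / n    # n equally spaced angles
R, T = np.meshgrid(rho, theta)
grid = np.exp(R + 1j * T)               # points of the log-polar grid
print(np.abs(grid).min(), np.abs(grid).max())   # all within the unit disc
```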
Log-polar coordinates
[ "Mathematics" ]
1,320
[ "Calculus", "Non-Newtonian calculus", "Coordinate systems" ]
27,795,355
https://en.wikipedia.org/wiki/Ion%20milling%20machine
Ion milling is a specialized physical etching technique that is a crucial step in preparing specimens for material analysis. After a specimen goes through ion milling, its surface becomes much smoother and more defined, which allows scientists to study the material much more easily. The ion mill generates high-energy particles to remove material from the surface of a specimen, similar to how sand and dust particles wear away at rocks in a canyon to create a smooth surface. Relative to other techniques, ion milling creates much less surface damage, which makes it well suited to surface-sensitive analytical techniques. This article discusses the principles, equipment, applications, and significance of ion milling. Principles Ion milling operates on the principles of sputtering and erosion. Sputtering occurs as the high-energy ions bombard the sample surface: ions collide with the atoms and molecules on the surface and knock off surface atoms. As the high-energy ions are directed onto the material's surface, a collision cascade occurs. Ions bombard the surface of the specimen, and energy is transferred from the ions to the surface atoms. If the transferred energy surpasses the binding energy of the target atoms, they are dislodged from the surface. Material that juts out has less surface binding energy and is more likely to be ejected through sputtering. As the ion milling process continues, the sample surface is slowly eroded away, resulting in a thin, flat, and damage-free surface. Specific results can be achieved by changing the angle of incidence of the ions, the ion energy, and the type of ions used. Equipment Ion source Ion sources are fundamental to ion milling; their design and operation are crucial to producing accurate results. The most commonly used ion source relies on radiofrequency (RF) fields and direct current (DC) electric fields to generate and accelerate ions from a gas, typically a noble gas like argon or xenon. RF fields are used for ionization because they allow a high degree of control and efficiency. RF ion sources can efficiently produce ions by creating an alternating radiofrequency electric field in a resonant cavity. RF sources use a frequency of several megahertz, which works best for most of the gases used. The RF field causes the gas to repeat cycles of ionization and electron detachment, which creates a plasma: the alternating electric field ionizes the gas by ripping off electrons and leaving positive ions. The ions are then accelerated away from the plasma using a DC electric field: an extraction electrode accelerates the ions towards the specimen due to the voltage difference between the electrode and the plasma region. The synergy between the RF and DC fields is crucial for optimizing the ion source's performance; the precise combination of these fields gives the ion beam the specific characteristics it needs, such as energy and current. Sample holder To guarantee that the surface is eroded uniformly, the specimen must be held in place while the ion mill operates. The specimen itself needs to have a surface that is mostly level and clean. Prior to ion milling, the surface should be fairly flat, because the process does not remove much material. If the specimen's surface is dirty or has other particles on top of it, the ion mill will operate on the layer on top rather than the actual specimen surface. Vacuum system The specimen should be in a high-vacuum environment for optimal milling results. 
The vacuum ensures that there are few air particles that could interfere with the ion beam. This way, the energy in the ion beam can be transferred to the surface with much less energy loss. Analysis Analyzing and monitoring the ion milling process is crucial for achieving desired outcomes and ensuring the quality of the results. There are many techniques and instruments that examine key parameters during ion milling. Scanning electron microscopy (SEM) SEM is used to analyze the surface morphology of samples after ion milling. SEM imaging is used to assess material removal, surface roughness, and cross-sectional features. Secondary ion mass spectrometry (SIMS) After the samples are milled, elemental and isotopic analysis is performed using SIMS. After the primary ions hit the surface, secondary ions and particles are released during the bombardment of the surface. Scientists can gather comprehensive data regarding the material's composition by understanding which ions are utilized for milling and which secondary ions are released. X-ray photoelectron spectroscopy (XPS) XPS is utilized to analyze the chemical composition of the surface. X-rays are used to irradiate the sample and measure the energies of the emitted photoelectrons. XPS assesses the surface chemistry and can detect any chemical changes induced by ion milling; this can tell how much damage ion milling has caused to the surface after ion bombardment. In-situ monitoring techniques In-situ monitoring techniques observe the ion milling process in real time. One type of in-situ monitoring is optical emission spectroscopy (OES), which monitors the emission of light during ion milling and gives information about the plasma. Applications Electron microscopy Ion milling can be used for thinning specimens to electron transparency for transmission electron microscopy (TEM). Microelectronics The accurate and damage-free surfaces ion milling provides make it well suited to the precise fabrication of semiconductors. Using ion milling for microelectronics can create well-defined features and patterns on semiconductor wafers. Cross-sectional analysis Ion milling can be used to create cross-sectional samples of materials. Cross-sectional analysis shows interfaces, layer structures, and defects of the material. Surface smoothing and polishing Ion milling is able to take off a few atoms at a time, which allows it to create smooth and polished surfaces on certain materials. Enhancing surface quality is crucial in anything that requires precision, such as optics or semiconductors. Advantages and limitations Advantages: precise control over material removal; a low amount of specimen damage; improved surfaces for further processing. Limitations: long processing times for thicker samples; the possibility of ion-induced damage; the need for specialized equipment and expertise. Conclusion Ion milling revolutionized the fields of materials engineering and mechanical engineering, allowing researchers and scientists to obtain high-quality specimens for advanced material analysis. Its applications in various industries and its role in advancing microelectronics make it an indispensable tool for modern research and development. References J. Goldstein et al., Scanning Electron Microscopy and X-Ray Microanalysis. New York, NY: Springer, 2018.
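A back-of-the-envelope sketch of the energy-transfer argument from the Principles section; the elastic-collision transfer factor is standard kinematics, while the beam energy and the silicon surface binding energy are illustrative assumptions:

```python
# In an elastic binary collision, the maximum fraction of the ion energy
# passed to a surface atom is 4*m1*m2 / (m1 + m2)**2. If the transferred
# energy exceeds the surface binding energy, the atom can be sputtered.
def max_energy_transfer(ion_energy_ev: float, m_ion: float, m_atom: float) -> float:
    k = 4.0 * m_ion * m_atom / (m_ion + m_atom) ** 2
    return k * ion_energy_ev

# Argon (~40 u) on silicon (~28 u) with a 500 eV beam -- illustrative values.
transferred = max_energy_transfer(500.0, 40.0, 28.0)
surface_binding_ev = 4.7   # rough surface binding energy for Si (assumption)
print(transferred, transferred > surface_binding_ev)  # sputtering is possible
```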
Ion milling machine
[ "Chemistry" ]
1,301
[ "Electron", "Electron microscopy", "Microscopy" ]
27,797,130
https://en.wikipedia.org/wiki/Torpedo%20%28petroleum%29
A torpedo is an explosive device used, especially in the early days of the petroleum industry, to fracture the surrounding rock at the bottom of an oil well to stimulate the flow of oil and to remove built-up paraffin wax that would restrict the flow. Earlier torpedoes used gunpowder, but the use of nitroglycerin eventually became widespread. The development of hydraulic fracturing, which is the primary fracturing process used today, rendered torpedoes obsolete. Use A torpedo consisted of canisters that were filled with an explosive and lowered into a well via a rope or wire. Gunpowder was used in the first torpedoes, but nitroglycerin was found to work better despite its instability. The well is usually filled with water to prevent the explosion from escaping upwards. Originally, the topmost canister had a percussion cap to detonate the main charge; an iron weight was dropped down the well to set the torpedo off. After incidents of premature explosions, a second method was developed in which a tube of the explosive was placed in a larger tube that was packed with sand. A fuse was wound around the inner tube, connected to a blasting cap. When the torpedo was to be used, the inner tube was filled with nitroglycerin and corked; the fuse was lit and the torpedo was dropped down the well. Torpedoes were generally used to remove buildup of paraffin wax from an oil well. Before the use of torpedoes caught on, boiling water or benzene was often poured down wells to try to dissolve the paraffin. Torpedoes were also used to fracture the rock to allow the oil to flow more easily. History Edward A. L. Roberts developed the first torpedo and submitted a patent application in November 1864. Roberts, an American Civil War veteran, came up with the concept of using water to "tamp" the resulting explosion after watching Confederate artillery rounds explode in a canal at the Battle of Fredericksburg. Roberts developed his first torpedoes in 1865 and 1866. In November 1866 he was granted a patent on his torpedo application, and founded the Roberts Petroleum Torpedo Company. William Reed also developed a torpedo design and went on to found a rival company "for the purpose of infringing and breaking down the Roberts patent." Roberts charged $100–200 per torpedo as well as a royalty on the increased oil production. To avoid paying the exorbitant fees, an owner of a well would often hire men who illegally produced their own torpedoes and used them at night, the practice giving rise to the term "moonlighting". Roberts spent $250,000 to protect his patent from the "moonlighters" by hiring the Pinkerton National Detective Agency and filing numerous lawsuits. Roberts' torpedo patents expired in 1879. Torpedoes manufactured today use modern explosives, with the last nitroglycerin torpedo being used on May 5, 1990. References
Torpedo (petroleum)
[ "Chemistry", "Engineering" ]
584
[ "Petroleum engineering", "Petroleum technology" ]
27,801,005
https://en.wikipedia.org/wiki/Semenogelin
Semenogelin is a protein involved in the formation of a gel matrix that encases ejaculated spermatozoa, preventing capacitation. It blocks capacitation mainly via inhibition of reactive oxygen species (ROS) generation. Proteolysis by prostate-specific antigen (PSA) breaks down the gel matrix and allows the spermatozoa to move more freely. The cleavage products of the semenogelins constitute the main antibacterial components of human seminal plasma. There are two variants of the semenogelin protein: semenogelin 1 and semenogelin 2. Semenogelin, along with prostate-specific antigen, is commonly tested for during crime scene investigation. References
Semenogelin
[ "Chemistry" ]
153
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
27,801,066
https://en.wikipedia.org/wiki/Parallel%20Redundancy%20Protocol
Parallel Redundancy Protocol (PRP) is a network protocol standard for Ethernet that provides seamless failover against failure of any single network component. This redundancy is invisible to the application. PRP nodes have two ports and are attached to two separated networks of similar topology. PRP can be implemented entirely in software, i.e. integrated in the network driver. Nodes with single attachment can be attached to one network only. This is in contrast to the companion standard HSR (IEC 62439-3 Clause 5), with which PRP shares its operating principle. PRP and HSR are independent of the application protocol and can be used by most Industrial Ethernet protocols in the IEC 61784 suite. PRP and HSR are standardized in IEC 62439-3:2016. They have been adopted for substation automation in the framework of IEC 61850. PRP and HSR are suited for applications that demand high availability and short switchover times, such as protection for electrical substations and synchronized drives, for instance in printing machines or high-power inverters. For such applications, the recovery time of commonly used protocols such as the Rapid Spanning Tree Protocol (RSTP) is too long. The cost of PRP is a duplication of all network elements that require it. The cost impact is low, since it makes little difference whether the spares lie on the shelf or are actually working in the plant. The maintenance interval is shortened since more components can fail in use, but such an outage remains invisible to the application. PRP does not cover end-node failures, but redundant nodes may be connected via a PRP network. Topology Each PRP network node (DANP) has two Ethernet ports attached to two separate local area networks of arbitrary, but similar, topology. The two LANs have no links connecting them and are assumed to be fail-independent, to avoid common-mode failures. Nodes with single attachment (such as a printer) are either attached to one network only (and therefore can communicate only with other nodes attached to the same network), or are attached through a RedBox, a device that behaves like a doubly attached node. Since HSR and PRP use the same duplicate identification mechanism, PRP and HSR networks can be connected without a single point of failure, and the same nodes can be built to be used in both PRP and HSR networks. Operation A source node (DANP) sends two copies of a frame simultaneously, one over each port. The two frames travel through their respective LANs until they reach a destination node (DANP), with a certain time skew. The destination node accepts the first frame of a pair and discards the second (if it arrives). Therefore, as long as one LAN is operational, the destination application always receives one frame. PRP provides zero-time recovery and allows the redundancy to be checked continuously to detect lurking failures. Frame format To simplify the detection of duplicates, the frames are identified by their source address and a sequence number that is incremented for each frame sent according to the PRP protocol. The sequence number, the frame size, the path identifier, and an Ethertype are appended just before the Ethernet checksum in a 6-octet PRP trailer. This trailer is ignored (considered as padding) by all nodes that are unaware of the PRP protocol, and therefore these singly attached nodes (SANs) can operate in the same network. Note: all legacy devices should accept Ethernet frames up to 1528 octets; this is below the theoretical limit of 1535 octets. 
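A minimal Python sketch of PRP's duplicate-discard rule as described above, keyed on the source MAC address and the trailer sequence number; the class name, window size, and eviction strategy are illustrative assumptions rather than the standard's exact algorithm:

```python
from collections import defaultdict, deque

class DuplicateDiscard:
    """Accept the first copy of each (source MAC, sequence number) pair.

    Real implementations age entries out of the table; this sketch simply
    keeps a bounded per-source memory of recently seen sequence numbers.
    """
    def __init__(self, window: int = 128):
        self.seen = defaultdict(lambda: deque(maxlen=window))

    def accept(self, src_mac: str, seq: int) -> bool:
        """Return True for the first copy of a frame, False for a duplicate."""
        if seq in self.seen[src_mac]:
            return False
        self.seen[src_mac].append(seq)
        return True

dd = DuplicateDiscard()
print(dd.accept("aa:bb:cc:00:00:01", 7))  # True  -- first copy, say from LAN A
print(dd.accept("aa:bb:cc:00:00:01", 7))  # False -- duplicate from LAN B
```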
Implementation The two Ethernet interfaces of a node use the same MAC address. This is allowed since the two LANs have no connection. Therefore, PRP is a layer 2 redundancy, which allows higher-layer network protocols to operate without modification. A PRP node needs only one IP address. In particular, the ARP protocol will correctly relate the MAC to the IP address. Clock synchronization IEC 62439-3 Annex C specifies the Precision Time Protocol Industry Profile, which supports clock synchronization over PRP with an accuracy of 1 μs after 15 network elements, as a profile of IEEE Std 1588 Precision Time Protocol. Clocks can be doubly attached according to PRP, but since the correction differs according to the path, the duplicate discard method of PRP is not applicable. Also, delay measurement messages (Pdelay_Req and Pdelay_Resp) are not duplicated, since they are link-local. About every second, a master clock sends two copies of a Sync message, though not at exactly the same time, since the ports are separate; the original Syncs therefore already carry different time stamps. A slave receives the two Sync messages at different times and applies the Best Master Clock Algorithm (BMCA); when the two Syncs come from the same grandmaster, the clock quality is used as a tie-breaker. A slave will normally listen to one port and supervise the other, rather than switching back and forth or using both Syncs. This method works for several options in 1588, with layer 2 / layer 3 operation, and with peer-to-peer / end-to-end delay measurement. IEC 62439-3 defines these two profiles as: L3E2E (layer 3, end-to-end), which addresses the requirements of ODVA, and L2P2P (layer 2, peer-to-peer), which addresses the requirements of power utilities in IEC 61850 and has been adopted by IEEE in IEC/IEEE 61850-9-3. Legacy versions The original standard IEC 62439:2010 incremented the sequence number of the Redundancy Control Trailer (RCT) in the PRP frames on a per-connection basis. This gave good error-detection coverage but made the transition difficult from PRP to the High-availability Seamless Redundancy (HSR) protocol, which uses a ring topology instead of parallel networks. The revised standard IEC 62439-3:2012 aligned PRP with HSR using the same duplicate discard algorithm. This allowed building transparent PRP-HSR connection bridges and nodes that can operate both as PRP (DANP) and HSR (DANH). The old IEC 62439:2010 standard, still used in some control systems, is sometimes referred to as PRP-0, and the 2012 revision simply as PRP. Applications An interesting application of PRP was found in the area of wireless communication as a "Timing Combiner", yielding significant improvement in packet loss and timing behaviour over parallel redundant wireless links. See also High-availability Seamless Redundancy Media Redundancy Protocol IEC/IEEE 61850-9-3 References External links ZHAW Tutorial on Parallel Redundancy Protocol (PRP) PRP in the Wireshark Wiki Tutorial on Parallel Redundancy Protocol (PRP) Tutorial on High-availability Seamless Redundancy (HSR) Tutorial on Precision Time Protocol with seamless redundancy in PRP and HSR Commercial implementation for Microsoft Windows by Siemens SIMATIC
Parallel Redundancy Protocol
[ "Technology", "Engineering" ]
1,485
[ "Networking standards", "Computer standards", "Computer networks engineering" ]
34,633,465
https://en.wikipedia.org/wiki/Geometric%20feature%20learning
Geometric feature learning is a technique combining machine learning and computer vision to solve visual tasks. The main goal of this method is to find a set of representative features of geometric form to represent an object, by collecting geometric features from images and learning them using efficient machine learning methods. Humans solve visual tasks and can respond quickly to the environment by extracting perceptual information from what they see. Researchers simulate humans' ability to recognize objects in order to solve computer vision problems. For example, M. Mata et al. (2002) applied feature learning techniques to mobile robot navigation tasks in order to avoid obstacles. They used genetic algorithms for learning features and recognizing objects. Geometric feature learning methods can not only solve recognition problems but also predict subsequent actions by analyzing a set of sequential input sensory images, usually some extracted features of the images. Through learning, hypotheses about the next action are generated, and the most probable action is chosen according to the probability of each hypothesis. This technique is widely used in the area of artificial intelligence. Introduction Geometric feature learning methods extract distinctive geometric features from images. Geometric features are features of objects constructed from a set of geometric elements like points, lines, curves, or surfaces. These features can be corner features, edge features, blobs, ridges, salient points, image texture, and so on, which can be detected by feature detection methods. Geometric features Primitive features Corners: corners are a very simple but significant feature of objects. In particular, complex objects usually have corner features that differ from one another. Corners of an object can be extracted through corner detection. Cho and Dunn used a different way to define a corner, by the distance and angle between two straight line segments; this defines features as a parameterized composition of several components. Edges: edges are one-dimensional structural features of an image. They represent the boundary between different image regions. The outline of an object can easily be detected by finding the edges using edge detection techniques. Blobs: blobs represent regions of images, which can be detected using blob detection methods. Ridges: from a practical viewpoint, a ridge can be thought of as a one-dimensional curve that represents an axis of symmetry; for detection methods, see ridge detection. Salient points: see the Kadir–Brady saliency detector. Image texture. Compound features Geometric composition A geometric component feature is a combination of several primitive features, and it always consists of more than two primitive features like edges, corners, or blobs. The geometric feature vector at location $x$ is computed with respect to a reference point, where $x$ denotes the location of the feature, $\theta$ its orientation, and $\sigma$ its intrinsic scale. Boolean composition A Boolean compound feature consists of two sub-features, which can be primitive features or compound features. There are two types of Boolean features: conjunctive features, whose value is the product of the two sub-features, and disjunctive features, whose value is the maximum of the two sub-features. Feature space Feature space was first considered in the computer vision area by Segen. He used a multilevel graph to represent the geometric relations of local features. 
Learning algorithms There are many learning algorithms which can be applied to learn to find distinctive features of objects in an image. Learning can be incremental, meaning that object classes can be added at any time. Geometric feature extraction methods Corner detection Curve fitting Edge detection Global structure extraction Feature histograms Line detection Connected-component labeling Image texture Motion estimation Feature learning algorithm 1. Acquire a new training image "I". 2. According to the recognition algorithm, evaluate the result. If the result is true, new object classes are recognised. Recognition algorithm: the key point of the recognition algorithm is to find the most distinctive features among all features of all classes, by maximising a distinctiveness measure of each feature over the training images and localising the feature. Evaluation: after the features are recognised, the results should be evaluated to determine whether the classes can be recognised. There are five evaluation categories of recognition results: correct, wrong, ambiguous, confused, and ignorant. When the evaluation is correct, add a new training image and train it. If the recognition failed, the feature nodes should maximise their distinctive power, which is defined by the Kolmogorov–Smirnov distance (KSD). 3. Feature learning algorithm: after a feature is recognised, it should be applied to a Bayesian network to recognise the image, using the feature learning algorithm to test. The main purpose of the feature learning algorithm is to find a new feature from a sample image to test whether the classes are recognised or not. Two cases should be considered: searching for a new feature of the true class and of a wrong class in the sample image, respectively. If a new feature of the true class is detected and the wrong class is not recognised, then the class is recognised and the algorithm terminates. If a feature of the true class is not detected and a feature of a false class is detected in the sample image, the false class should be prevented from being recognised and the feature should be removed from the Bayesian network. A Bayesian network is used to realise the test process. PAC model based feature learning algorithm Learning framework The probably approximately correct (PAC) model was applied by D. Roth (2002) to solve computer vision problems by developing a distribution-free learning theory based on this model. This theory relied heavily on the development of a feature-efficient learning approach. The goal of this algorithm is to learn an object represented by some geometric features in an image. The input is a feature vector and the output is 1, meaning the object was successfully detected, or 0 otherwise. The main point of this learning approach is collecting representative elements which can represent the object through a function, and testing by recognising an object from an image to find the representation with high probability. The learning algorithm aims to predict whether the learned target concept belongs to a class, where X is the instance space of parameters, and then to test whether the prediction is correct. Evaluation framework After learning features, there should be evaluation algorithms to assess the learning algorithms. D. Roth applied two learning algorithms: 1. Sparse Network of Winnows (SNoW) system. SNoW-Train. Initial step: initialise the set of features linked to each target t. 
T is the set of object targets. If a target object in T occurs together with a list of active features, link each active feature to the target and set an initial weight at the same time. Evaluate the targets: compare the summed weight $\sum_i w_{i,t}$ of the active features with a threshold $\theta_t$, where $w_{i,t}$ is the weight on the link connecting feature i to target t, and $\theta_t$ is the threshold for target t. Update the weights according to the result of the evaluation. There are two cases: predicted positive on a negative example (the target is not in the list of active targets), and predicted negative on a positive example (the target is in the list of active targets). SNoW-Evaluation: evaluate each target using the same function as introduced above. Prediction: make a decision by selecting the dominant active target node. 2. Support vector machines. The main purpose of an SVM is to find a hyperplane separating the set of samples $\{(x_i, y_i)\}$, where $x_i$ is an input vector, a selection of features, and $y_i$ is the label of $x_i$. The hyperplane has the form $f(x) = \sum_i \alpha_i y_i K(x_i, x) + b$, where $K$ is a kernel function. Both algorithms separate training data by finding a linear function. Applications Landmark learning for topological navigation Simulation of the object-detection process of human vision behaviour Learning self-generated actions Vehicle tracking References
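A hedged sketch of the multiplicative Winnow-style update underlying SNoW; the parameter names and the promotion/demotion factor are our illustrative choices, not details taken from the article:

```python
# Winnow-style multiplicative update: promote the weights of active
# features on a missed positive, demote them on a false positive.
def winnow_update(weights, active, label, predicted, alpha=2.0):
    """weights: dict feature -> weight; active: iterable of active features."""
    if label == 1 and predicted == 0:        # missed a positive example
        for f in active:
            weights[f] = weights.get(f, 1.0) * alpha
    elif label == 0 and predicted == 1:      # predicted positive on a negative
        for f in active:
            weights[f] = weights.get(f, 1.0) / alpha

def predict(weights, active, theta):
    """Fire the target if the summed weight of active features reaches theta."""
    return 1 if sum(weights.get(f, 1.0) for f in active) >= theta else 0

w, theta = {}, 2.0
x = ["corner_a", "edge_b"]                   # hypothetical active features
y_hat = predict(w, x, theta)                 # default weights 1.0 -> predicts 1
winnow_update(w, x, label=0, predicted=y_hat)
print(w)                                     # demoted: both weights now 0.5
```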
Geometric feature learning
[ "Engineering" ]
1,511
[ "Artificial intelligence engineering", "Machine learning" ]
34,637,816
https://en.wikipedia.org/wiki/Directive%2082/501/EC
Directive 82/501/EC was a European Union law aimed at improving the safety of sites containing large quantities of dangerous substances. It is also known as the Seveso Directive, after the Seveso disaster. It was superseded by the Seveso II Directive and then Seveso III directive. See also Seveso II Directive Control of Major Accident Hazards Regulations 1999 External links Council Directive 96/82/EC of 9 December 1996 on the control of major-accident hazards involving dangerous substances Summaries of EU legislation > Environment > Civil protection > Major accidents involving dangerous substances European Commission page about the Seveso Directives 1982/501 European Union directives Chemical safety Process safety 1982 in law 1982 in the European Economic Community Environmental law in the European Union Regulation of chemicals in the European Union Safety codes
Directive 82/501/EC
[ "Chemistry", "Engineering" ]
177
[ "Chemical accident", "Regulation of chemicals in the European Union", "Safety engineering", "Regulation of chemicals", "Process safety", "nan", "Chemical process engineering", "Chemical safety" ]
34,640,254
https://en.wikipedia.org/wiki/MapHook
MapHook is a location-based journal and social networking application that is operated by MapHook Inc., a software applications development firm based in Dulles, Virginia. MapHook combines GPS and mapping technologies to allow users to create geo-tagged digital memories about events, locations, and activities. These geo-tagged "hooks" contain user reviews, anecdotal information, available business details, and user-created images pertaining to the selected location. These hooks are then published per user specifications to the public or select individuals. MapHook also displays points of interest that relate to Wikipedia articles, Groupon offers, or Yelp reviews within a user's selected vicinities. History MapHook was launched in July 2010. In August 2010, the “Gulf Caravan,” an advocacy group from St. Louis, MO, selected MapHook to help create awareness about the businesses impacted by the Deepwater Horizon oil spill. In April 2011, MapHook joined with Groupon and began displaying regional Groupon offerings by user location. In August 2011, MapHook added the ability to attach YouTube videos to "hooks." MapHook also introduced the "Groups" concept, which allowed for the creation of user communities with user-set levels of privacy. MapHook also connected with Facebook, Twitter, and Google+ in order to merge with other social networking platforms. In September 2011, MapHook partnered with ThinkGeek and their “Timmy the Monkey Sticker Map Project,” which documents the global reach of ThinkGeek customers by using MapHook. In March 2012, MapHook partnered with the World Wildlife Fund and their "Tigers or Toilet Paper" project, which aims to draw closer attention to the deforestation and ruin of the Sumatran tiger’s habitat by having users create hooks to spread awareness about the paper products being sold in their area. In April 2012, MapHook and the World Wildlife Fund teamed together again for Earth Hour, which in 2012 took place on March 31. As part of the Earth Hour MapHook Project, the World Wildlife Fund asked Earth Hour participants to post hooks on MapHook in order to chart participation and share experiences. Recognition Upon its release in July 2010, MapHook received recognition in a number of online and paper-based publications: On July 19, 2010, MapHook was included in Gizmodo's "This Week's Best Apps" list. On July 20, 2010, MapHook was recognized by TIME's Techland section as its "App of the Week." On July 29, 2010, MapHook made the NY Times "Quick Calls" list. References External links MapHook website Mobile social software IOS software Geosocial networking Wireless locating
MapHook
[ "Technology" ]
563
[ "Wireless locating" ]
34,641,860
https://en.wikipedia.org/wiki/Eduard%20R%C3%BCchardt
Eduard Rüchardt (March 29, 1888 – March 7, 1962) was a German physicist. In modern times Rüchardt is mainly noted for the experiment named after him. However, Rüchardt's chief topic was the study of canal rays. This work started under the supervision of Wilhelm Wien and continued later in collaborations with Walther Gerlach. Life and work After home-schooling in Moscow, Rüchardt attended the Vitztumsche secondary school in Dresden from 1905 on. He started studying physics in Jena in 1908 and continued in Freiburg and Würzburg in 1910. There he worked towards his doctorate under Wilhelm Wien; the topic of his thesis was "Excitation of phosphorescence through canal rays". In 1920 Rüchardt followed Wien to the Ludwig Maximilian University of Munich to be his assistant. In 1922 he published "Processes of charge reversal in hydrogen canal rays" to gain his professorship. There he taught as an associate professor from 1924 to 1946 and from 1946 to 1955 as a tenured professor. Rüchardt's chief topic under Wien was the physics of canal rays. He was the first to treat, by energy considerations, the problem of light excitation of phosphors by canal rays. Besides the processes of charge reversal in hydrogen canal rays, Rüchardt examined the correlation of neutralization and coverage for the secondary radiation of canal rays and α-rays. From the interaction of canal rays with matter, Rüchardt was able to draw extensive conclusions about the structure and properties of atoms. In this way he succeeded in finding definitive evidence of the oxygen isotope 18O. In the 1930s many dissertations supervised by Rüchardt discussed canal rays. The research included Einstein's rotating-mirror experiment (Spiegeldrehversuch) and the transverse Doppler effect. The work done with Walther Gerlach in 1926 received particular acclaim. The two world wars strongly influenced his academic focus. He developed specific amplifier valves in World War I. During World War II Rüchardt researched the mode of operation of electrical contacts. This work became the focus of his research after 1945, ranging from studies (with numerous students) of the dependence of contact resistance on contact load to the superconductivity of contacts. Legacy and Notability Rüchardt was instrumental in the exposure of the fraudulent results presented by Emil Rupp. He wrote an abstract for the Physikalische Berichte that pointed out that Rupp's vacuum pump appeared in the wrong location. From this he showed that obtaining the kind of freely decaying atoms that Rupp had claimed to do in his experiments would have been impossible. In 1935, following Rupp's fall from grace and in the midst of the controversy over what elements of his work could be trusted, Rüchardt and Walther Gerlach published a short note in the Annalen der Physik in which they made very clear that Rupp had confirmed a mistakenly drawn diagram by Albert Einstein. This is generally considered to be the point when Rupp lost all credibility. Rüchardt's lectures about "Higher Experimental Physics" were exemplary. The experiments demonstrated in the lecture were revised and modernized constantly. He successfully strove for accurate depictions of modern physics in popular culture, as well as the introduction of physical evidence and scientific methods in medicine. The Rüchardt experiment was developed over the years in his lectures. It is now performed as a standard experiment for thermodynamics in several universities, such as UBC, Bayreuth and UCLA.
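The Rüchardt experiment mentioned above extracts the adiabatic index γ of a gas from the period T of a piston (or ball) of mass m and cross-section A oscillating on the gas cushion of a vessel of volume V: linearising the adiabatic law gives T = 2π√(mV/(γpA²)). A minimal sketch of the standard evaluation, with illustrative (not measured) numbers:

import math

# Ruechardt experiment: gamma from the oscillation period of a piston of mass m
# and cross-section A over a gas volume V at equilibrium pressure p. From
# T = 2*pi*sqrt(m*V / (gamma*p*A**2)) one gets
# gamma = 4*pi**2*m*V / (A**2 * p * T**2).
def gamma_from_period(m, V, A, T, p_atm=101_325.0, g=9.81):
    p = p_atm + m * g / A        # ambient pressure plus the piston's weight term
    return 4 * math.pi**2 * m * V / (A**2 * p * T**2)

# illustrative numbers: 16.7 g ball, 10 L flask, 16 mm tube diameter, T = 1.1 s
A = math.pi * (0.016 / 2) ** 2
print(round(gamma_from_period(m=0.0167, V=0.010, A=A, T=1.1), 2))   # ~1.3 for air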
Works and Literature Further works: "Durchgang von Kanalstrahlen durch Materie", in: Handbuch der Physik, ed. H. Geiger and K. Scheel, 2nd ed. 1933, vol. XXII/2, pp. 75–154; Sichtbares und unsichtbares Licht, 1938 (reprinted 1952; also translated into Spanish, English, Polish and Hungarian). Literature: E. Kappler, in: Physikalische Blätter 4, 1948, p. 211; W. Gerlach, ibid. 14, 1958, p. 129; idem, in: Jahrbuch der Bayerischen Akademie der Wissenschaften 1962, pp. 189–195; J. Brandmüller, "Das wissenschaftliche Werk von E. Rüchardt", in: Deutsches Museum München, Wissenschaftliches Jahrbuch 1991, pp. 7–24; Poggendorff VI, VIIa. References 1888 births 1962 deaths German nuclear physicists Mass spectrometrists Soviet physicists
Eduard Rüchardt
[ "Physics", "Chemistry" ]
992
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
38,624,698
https://en.wikipedia.org/wiki/Stochastic%20quantization
In theoretical physics, stochastic quantization is a method for modelling quantum mechanics, introduced by Edward Nelson in 1966, and streamlined by Giorgio Parisi and Yong-Shi Wu. Description Stochastic quantization serves to quantize Euclidean field theories, and is used for numerical applications, such as numerical simulations of gauge theories with fermions. This serves to address the problem of fermion doubling that usually occurs in these numerical calculations. Stochastic quantization takes advantage of the fact that a Euclidean quantum field theory can be modeled as the equilibrium limit of a statistical mechanical system coupled to a heat bath. In particular, in the path integral representation of a Euclidean quantum field theory, the path integral measure is closely related to the Boltzmann distribution of a statistical mechanical system in equilibrium. In this relation, Euclidean Green's functions become correlation functions in the statistical mechanical system. A statistical mechanical system in equilibrium can be modeled, via the ergodic hypothesis, as the stationary distribution of a stochastic process. Then the Euclidean path integral measure can also be thought of as the stationary distribution of a stochastic process; hence the name stochastic quantization. See also Supersymmetric theory of stochastic dynamics Stochastic quantum mechanics References Stochastic processes
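To illustrate the equilibrium-limit idea, one can run Langevin dynamics in a fictitious time τ, ∂φ/∂τ = −δS/δφ + η, whose stationary distribution is exp(−S). The single-site quadratic action below is an assumption chosen only so the answer is analytic (⟨φ²⟩ → 1/m²); it is a toy sketch, not a gauge-theory simulation.

import random, math

# Langevin ("stochastic quantization") sampling of a single-site Euclidean
# action S(phi) = (m2/2) * phi^2; the stationary distribution exp(-S) has
# exact second moment <phi^2> = 1/m2.
def langevin_moment(m2=2.0, dtau=0.01, steps=200_000, seed=1):
    rng = random.Random(seed)
    phi, acc, n = 0.0, 0.0, 0
    for step in range(steps):
        drift = -m2 * phi                          # -dS/dphi
        noise = math.sqrt(2 * dtau) * rng.gauss(0.0, 1.0)
        phi += drift * dtau + noise
        if step > steps // 10:                     # discard thermalisation
            acc += phi * phi
            n += 1
    return acc / n

print(langevin_moment(), "vs exact", 1 / 2.0)      # ~0.5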
Stochastic quantization
[ "Physics" ]
264
[ "Quantum mechanics", "Quantum physics stubs" ]
38,625,913
https://en.wikipedia.org/wiki/Sand%20slinger
Sand slinger is the term for two different types of machine, both of which use a short conveyor belt to direct sand, gravel or similar materials to varying locations. Stationary mold filling machine A stationary sand slinger is a type of machine used for filling and uniformly ramming sand in casting molds, and is particularly advantageous with large molds. This machine consists of a heavy base, a bin or hopper to carry sand, a bucket elevator to which a number of buckets are attached, and a swinging arm which carries a conveyor belt and the sand impeller head. The sand slinger fills the flask uniformly with sand, the impeller throwing a high-speed stream of sand under high pressure. Mobile aggregate moving truck A slinger truck is usually a dump truck equipped with a pivoted conveyor belt used to sling dry bulk material, usually sand or gravel, for a distance of several metres. Casting (manufacturing) Dump trucks
Sand slinger
[ "Engineering" ]
195
[ "Engineering vehicles", "Dump trucks", "Mechanical engineering stubs", "Mechanical engineering" ]
38,627,925
https://en.wikipedia.org/wiki/126%20Tauri
126 Tauri (126 Tau) is a blue subgiant star in the constellation Taurus. Its apparent magnitude is 4.83. It is also a binary star, with an orbital period of 111 years. References Taurus (constellation) B-type subgiants Tauri, 126 Durchmusterung objects 037711 026777 1946 Binary stars
126 Tauri
[ "Astronomy" ]
79
[ "Taurus (constellation)", "Constellations" ]
33,142,938
https://en.wikipedia.org/wiki/I%20Love%20Trash
"I Love Trash" is a song with music and lyrics by Jeff Moss. It was sung by the Muppet character Oscar the Grouch (performed by Caroll Spinney) on Sesame Street. The song was first sung in the first season of the series and has been re-taped several times. In the song, Oscar sings about the trash he so admires; he presents a tattered and worn sneaker that's covered with holes and has torn laces ("A gift from my mother the day I was born..."), a 13-month old newspaper with smelly, cold fish wrapped inside (a piece of waste that Oscar affirms he "wouldn't trade for a big pot of gold"), a defective clock, an old telephone, a broken umbrella and a rusty trombone. In the end, Oscar claims to be "delighted to call them my own." In a Season 29 episode, Oscar sang "Grouches Love Trash" a variation of this song to his niece Irvine. In episode 3891, his old friend Felix the Grouch sang a variation called "I Love Cleaning" while Oscar sang "I Love Trash". Yet another variation occurs when Oscar's trash can was grown to a larger size and the lyrics were adjusted to accommodate (a clip can be seen in Sesame Street All-Star 25th Birthday: Stars and Street Forever!). Oscar sang this song in Here Come the Puppets!, accompanied by Bruno the Trashman on rollerskates. Oscar also sang the song during an appearance on The Bonnie Hunt Show. On September 26, 2013, Oscar and Big Bird sang this song on The Colbert Report. Other versions k. d. lang sang this song when she guest-starred in The Jim Henson Hour episode about garbage. Grover, Telly and Zoe each sing a verse of the song (with changed lyrics to suit their interests) in the CD-ROM game The Three Grouchketeers. Aerosmith frontman Steven Tyler recorded a new version for the album release of Elmopalooza. This version is also included on The Adventures of Elmo in Grouchland soundtrack album. A brief portion of the song was also sung by a group of socks in Sesame Street 4-D Movie Magic, during Oscar's imagination sequence where Sesame Street is turned into a vast garbage dump. On Plaza Sésamo, a Multimonstruo, who loves trash, performed a rock version of this song. At the Jim Henson's Musical World concert on April 14, 2012, Elmo, Ernie, Bert, Cookie Monster, Gordon, Bob, Susan, Leela, Gina, Alan and Maria performed the song. In an episode of Family Guy, Meg sings a very off-key version of the song dressed up as Oscar. A Sesame Street segment features Oscar (now performed by Eric Jacobson) teaching Jack Antonoff (who comes in dressed as a "Grouch", wearing a tie with a tiny smudge) that Grouches like really messy, yucky stuff by singing this song. Oscar and Josh Groban sang the song in an episode of The Not-Too-Late Show with Elmo. References 1969 singles Novelty songs Sesame Street songs 1969 songs Waste
I Love Trash
[ "Physics" ]
664
[ "Materials", "Waste", "Matter" ]
33,149,795
https://en.wikipedia.org/wiki/Aharonov%E2%80%93Casher%20effect
The Aharonov–Casher effect is a quantum mechanical phenomenon predicted in 1984 by Yakir Aharonov and Aharon Casher, in which a traveling magnetic dipole is affected by an electric field. It is dual to the Aharonov–Bohm effect, in which the quantum phase of a charged particle depends upon which side of a magnetic flux tube it comes through. In the Aharonov–Casher effect, the particle has a magnetic moment and the tubes are charged instead. It was observed in a gravitational neutron interferometer in 1989 and later by fluxon interference of magnetic vortices in Josephson junctions. It has also been seen with electrons and atoms. In both effects the particle acquires a phase shift (φ) while traveling along some path P. In the Aharonov–Bohm effect it is

φ_AB = (q/ħ) ∮_P A · dl

while for the Aharonov–Casher effect it is

φ_AC = (1/ħc²) ∮_P (E × μ) · dl

where q is its charge and μ is the magnetic moment. The effects have been observed together. References Bibliography See also Duality (electricity and magnetism) Quantum mechanics Physical phenomena
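A small numerical check of the Aharonov–Casher line integral above: for a magnetic moment μ = μẑ carried around an infinite line charge of density λ, with field E = λr̂/(2πε₀r), the integral ∮(E × μ)·dl is independent of the loop radius and equals −λμ/ε₀, so |φ_AC| = λμ/(ε₀ħc²). The field model and numbers below are illustrative assumptions.

import math

# Numerically integrate (E x mu) . dl around a circle of radius R enclosing an
# infinite line charge (density lam along z); mu = (0, 0, mu). Values assumed.
def ac_loop_integral(lam=1e-12, mu=9.274e-24, R=1.0, N=20_000):
    eps0 = 8.8541878128e-12
    total = 0.0
    for k in range(N):
        t = 2 * math.pi * k / N
        x, y = R * math.cos(t), R * math.sin(t)
        r2 = x * x + y * y
        Ex = lam * x / (2 * math.pi * eps0 * r2)
        Ey = lam * y / (2 * math.pi * eps0 * r2)
        cx, cy = Ey * mu, -Ex * mu              # (E x mu) with mu along z
        dlx, dly = -math.sin(t), math.cos(t)    # unit tangent of the loop
        total += (cx * dlx + cy * dly) * (2 * math.pi * R / N)
    return total

for R in (0.5, 1.0, 2.0):                        # radius-independent: -lam*mu/eps0
    print(R, ac_loop_integral(R=R))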
Aharonov–Casher effect
[ "Physics" ]
216
[ "Physical phenomena", "Theoretical physics", "Quantum mechanics", "Quantum physics stubs" ]
33,149,847
https://en.wikipedia.org/wiki/Angular%20momentum%20of%20light
The angular momentum of light is a vector quantity that expresses the amount of dynamical rotation present in the electromagnetic field of the light. While traveling approximately in a straight line, a beam of light can also be rotating (or "spinning", or "twisting") around its own axis. This rotation, while not visible to the naked eye, can be revealed by the interaction of the light beam with matter. There are two distinct forms of rotation of a light beam, one involving its polarization and the other its wavefront shape. These two forms of rotation are therefore associated with two distinct forms of angular momentum, respectively named light spin angular momentum (SAM) and light orbital angular momentum (OAM). The total angular momentum of light (or, more generally, of the electromagnetic field and the other force fields) and matter is conserved in time. Introduction Light, or more generally an electromagnetic wave, carries not only energy but also momentum, which is a characteristic property of all objects in translational motion. The existence of this momentum becomes apparent in the "radiation pressure" phenomenon, in which a light beam transfers its momentum to an absorbing or scattering object, generating a mechanical pressure on it in the process. Light may also carry angular momentum, which is a property of all objects in rotational motion. For example, a light beam can be rotating around its own axis while it propagates forward. Again, the existence of this angular momentum can be made evident by transferring it to small absorbing or scattering particles, which are thus subject to an optical torque. For a light beam, one can usually distinguish two "forms" of rotation, the first associated with the dynamical rotation of the electric and magnetic fields around the propagation direction, and the second with the dynamical rotation of light rays around the main beam axis. These two rotations are associated with two forms of angular momentum, namely SAM and OAM. However this distinction becomes blurred for strongly focused or diverging beams, and in the general case only the total angular momentum of a light field can be defined. An important limiting case in which the distinction is instead clear and unambiguous is that of a "paraxial" light beam, that is a well collimated beam in which all light rays (or, more precisely, all Fourier components of the optical field) only form small angles with the beam axis. For such a beam, SAM is strictly related with the optical polarization, and in particular with the so-called circular polarization. OAM is related with the spatial field distribution, and in particular with the wavefront helical shape. In addition to these two terms, if the origin of coordinates is located outside the beam axis, there is a third angular momentum contribution obtained as the cross-product of the beam position and its total momentum. This third term is also called "orbital", because it depends on the spatial distribution of the field. However, since its value is dependent on the choice of the origin, it is termed "external" orbital angular momentum, as opposed to the "internal" OAM appearing for helical beams. Mathematical expressions for the angular momentum of light One commonly used expression for the total angular momentum of an electromagnetic field is the following one, in which there is no explicit distinction between the two forms of rotation:

J = ε₀ ∫ r × (E × B) d³r

where E and B are the electric and magnetic fields, respectively, ε₀ is the vacuum permittivity and we are using SI units.
However, another expression of the angular momentum naturally arising from Noether’s theorem is the following one, in which there are two separate terms that may be associated with SAM (S) and OAM (L):

J = S + L = ε₀ ∫ E × A d³r + ε₀ ∫ Σ_{i=x,y,z} E^i (r × ∇) A^i d³r

where A is the vector potential of the magnetic field, and the i-superscripted symbols denote the Cartesian components of the corresponding vectors. These two expressions can be proved to be equivalent to each other for any electromagnetic field that satisfies Maxwell’s equations with no source charges and vanishes fast enough outside a finite region of space. The two terms in the second expression however are physically ambiguous, as they are not gauge-invariant. A gauge-invariant version can be obtained by replacing the vector potential A and the electric field E with their “transverse” or radiative components A_⊥ and E_⊥, thus obtaining the following expression:

J = ε₀ ∫ E_⊥ × A_⊥ d³r + ε₀ ∫ Σ_{i=x,y,z} E_⊥^i (r × ∇) A_⊥^i d³r

A justification for taking this step is yet to be provided. The latter expression has further problems, as it can be shown that the two terms are not true angular momenta as they do not obey the correct quantum commutation rules. Their sum, that is the total angular momentum, instead does. An equivalent but simpler expression for a monochromatic wave of frequency ω, using the complex notation for the fields, is the following:

J = (ε₀/(2iω)) ∫ E* × E d³r + (ε₀/(2iω)) ∫ Σ_{i=x,y,z} E^{i*} (r × ∇) E^i d³r

Let us now consider the paraxial limit, with the beam axis assumed to coincide with the z axis of the coordinate system. In this limit the only significant component of the angular momentum is the z one, that is the angular momentum measuring the light beam rotation around its own axis, while the other two components are negligible:

J_z = S_z + L_z,
S_z = (ε₀/(2ω)) ∫ (|E_L|² − |E_R|²) d³r,
L_z = (ε₀/(2iω)) ∫ Σ_{i=x,y,z} E^{i*} (∂E^i/∂φ) d³r,

where E_L and E_R denote the left and right circular polarization components, respectively. Exchange of spin and orbital angular momentum with matter When a light beam carrying nonzero angular momentum impinges on an absorbing particle, its angular momentum can be transferred to the particle, thus setting it in rotational motion. This occurs both with SAM and OAM. However, if the particle is not at the beam center the two angular momenta will give rise to different kinds of rotation of the particle. SAM will give rise to a rotation of the particle around its own center, i.e., to a particle spinning. OAM, instead, will generate a revolution of the particle around the beam axis. These phenomena are schematically illustrated in the figure. In the case of transparent media, in the paraxial limit, the optical SAM is mainly exchanged with anisotropic systems, for example birefringent crystals. Indeed, thin slabs of birefringent crystals are commonly used to manipulate the light polarization. Whenever the polarization ellipticity is changed, in the process, there is an exchange of SAM between light and the crystal. If the crystal is free to rotate, it will do so. Otherwise, the SAM is finally transferred to the holder and to the Earth. Spiral phase plate (SPP) In the paraxial limit, the OAM of a light beam can be exchanged with material media that have a transverse spatial inhomogeneity. For example, a light beam can acquire OAM by crossing a spiral phase plate, with an inhomogeneous thickness (see figure). Pitch-fork hologram A more convenient approach for generating OAM is based on using diffraction on a fork-like or pitchfork hologram (see figure). Holograms can also be generated dynamically under the control of a computer by using a spatial light modulator. As a result, this allows one to obtain arbitrary values of the orbital angular momentum.
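The paraxial expressions given earlier can be checked numerically: sampling a left-circularly polarized beam with helical phase exp(ilφ) on a transverse grid and evaluating S_z and L_z by quadrature (overall constants and the longitudinal integral drop out of the ratio), L_z/S_z should come out close to l. The Gaussian-type envelope and grid parameters are illustrative assumptions.

import numpy as np

l = 3                                      # assumed OAM index
n, L = 400, 8.0
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="xy")
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)
u = R**abs(l) * np.exp(-R**2 + 1j * l * PHI)          # helical-phase envelope
Ex, Ey = u / np.sqrt(2), 1j * u / np.sqrt(2)          # left circular polarization

def dphi(F):                               # d/dphi = x d/dy - y d/dx
    dFdy, dFdx = np.gradient(F, x, x)
    return X * dFdy - Y * dFdx

dA = (x[1] - x[0]) ** 2
Sz = (np.sum(np.conj(Ex) * Ey - np.conj(Ey) * Ex) * dA / 2j).real
Lz = (np.sum(np.conj(Ex) * dphi(Ex) + np.conj(Ey) * dphi(Ey)) * dA / 2j).real
print(Lz / Sz)                             # should be close to l = 3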
Q-plate Another method for generating OAM is based on the SAM-OAM coupling that may occur in a medium which is both anisotropic and inhomogeneous. In particular, the so-called q-plate is a device, currently realized using liquid crystals, polymers or sub-wavelength gratings, which can generate OAM by exploiting a SAM sign-change. In this case, the OAM sign is controlled by the input polarization. Cylindrical mode converters OAM can also be generated by converting a Hermite-Gaussian beam into a Laguerre-Gaussian one by using an astigmatic system with two well-aligned cylindrical lenses placed at a specific distance (see figure) in order to introduce a well-defined relative phase between horizontal and vertical Hermite-Gaussian beams. Possible applications of the orbital angular momentum of light The applications of the spin angular momentum of light are indistinguishable from the innumerable applications of the light polarization and will not be discussed here. The possible applications of the orbital angular momentum of light are instead currently the subject of research. In particular, the following applications have already been demonstrated in research laboratories, although they have not yet reached the stage of commercialization: Orientational manipulation of particles or particle aggregates in optical tweezers High-bandwidth information encoding in free-space optical communication Higher-dimensional quantum information encoding, for possible future quantum cryptography or quantum computation applications Sensitive optical detection See also Angular momentum Circular polarization Electromagnetic wave Helmholtz equation Light Light orbital angular momentum Light spin angular momentum Optical vortices Orbital angular momentum multiplexing Polarization (waves) Photon polarization References External links Phorbitech Glasgow Optics Group Leiden Institute of Physics ICFO Università Di Napoli "Federico II" Università Di Roma "La Sapienza" University of Ottawa Further reading Angular momentum of light Light
Angular momentum of light
[ "Physics", "Mathematics" ]
1,797
[ "Physical phenomena", "Physical quantities", "Spectrum (physical sciences)", "Quantity", "Angular momentum of light", "Electromagnetic spectrum", "Waves", "Light", "Angular momentum" ]
33,150,172
https://en.wikipedia.org/wiki/Sense%20and%20respond
Sense and respond has been used in control theory for several decades, primarily in closed systems such as refineries where comparisons are made between measurements and desired values, and system settings are adjusted to narrow the gap between the two. Since the early 1980s, sense and respond has also been used to describe the behavior of certain open systems. Sense and respond is based on lean principles and follows URSLIMM: U - understand customer value R - remove waste S - standardize L - learn by doing I - involve everyone M - measure what matters M - manage performance visually The term "sense and respond" as a business concept was used in a 1992 American Management Association Management Review article by Stephan H. Haeckel. It was developed by Haeckel at IBM’s Advanced Business Institute. Publications 2010 “The Post-Industrial Manager,” Marketing Management Magazine, Fall, 2010, pp 24–32. 2003 “Leading On Demand Businesses – Executives as Architects,” IBM Systems Journal, Vol 42, No 3, 2003, pp 405–413 2003 “Making Meaning Out of Apparent Noise,” in Long Range Planning, April, 2004, Special Issue of articles from May 4, 2003 Wharton Conference “Peripheral Vision: Sensing and Acting on Weak Signals,” Vol 37/2 pp 181–189 2000 “Managing Knowledge in Adaptive Enterprises,” Chapter in Knowledge Horizons: The Present and the Promise of Knowledge Management, edited by C. Despres and D. Chauvel, Butterworth-Heinemann 1999 Adaptive Enterprise: Creating and Leading Sense-and-Respond Organizations, Harvard Business School Press 1993 “Managing By Wire,” Harvard Business Review: Vol. 71, No. 5, September–October (with R.L. Nolan) 1992 “From ‘Make and Sell’ to ‘Sense and Respond,’” Management Review, American Management Association, October Control theory Management Organization design
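In the closed-system sense described above, a sense-and-respond loop is ordinary feedback control: sense a measurement, compare it with the desired value, and adjust a setting to narrow the gap. A minimal sketch (the process model and gain are illustrative assumptions, not taken from the sources above):

# Minimal sense-and-respond (closed-loop) sketch: a proportional controller
# nudging a refinery-like process variable toward a setpoint.
def run_loop(setpoint=100.0, value=80.0, gain=0.3, steps=20):
    history = []
    for _ in range(steps):
        error = setpoint - value      # sense: compare measurement and target
        value += gain * error         # respond: adjust to narrow the gap
        history.append(round(value, 2))
    return history

print(run_loop())                     # converges toward the 100.0 setpoint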
Sense and respond
[ "Mathematics", "Engineering" ]
381
[ "Applied mathematics", "Control theory", "Organization design", "Design", "Dynamical systems" ]
31,619,812
https://en.wikipedia.org/wiki/Quillen%E2%80%93Lichtenbaum%20conjecture
In mathematics, the Quillen–Lichtenbaum conjecture is a conjecture relating étale cohomology to algebraic K-theory introduced by Daniel Quillen, who was inspired by earlier conjectures of Stephen Lichtenbaum. Rognes and Weibel proved the Quillen–Lichtenbaum conjecture at the prime 2 for some number fields. Voevodsky, using some important results of Markus Rost, has proved the Bloch–Kato conjecture, which implies the Quillen–Lichtenbaum conjecture for all primes. Statement The conjecture in Quillen's original form states that if A is a finitely-generated algebra over the integers and ℓ is prime, then there is a spectral sequence analogous to the Atiyah–Hirzebruch spectral sequence, starting at

E₂^{p,q} = H^p_ét(Spec A[ℓ⁻¹], Z_ℓ(−q/2))

(which is understood to be 0 if q is odd) and abutting to

K_{−p−q}(A) ⊗ Z_ℓ

for −p − q > 1 + dim A. K-theory of the integers Assuming the Quillen–Lichtenbaum conjecture and the Vandiver conjecture, the K-groups of the integers, Kn(Z), are given by: 0 if n = 0 mod 8 and n > 0, and Z if n = 0; Z ⊕ Z/2 if n = 1 mod 8 and n > 1, and Z/2 if n = 1; Z/ck ⊕ Z/2 if n = 2 mod 8; Z/8dk if n = 3 mod 8; 0 if n = 4 mod 8; Z if n = 5 mod 8; Z/ck if n = 6 mod 8; Z/4dk if n = 7 mod 8; where ck/dk is the Bernoulli number B2k/k in lowest terms and n is 4k − 1 or 4k − 2. References Algebraic K-theory Conjectures that have been proved
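The table above can be evaluated mechanically: for n congruent to 2, 3, 6 or 7 mod 8, take k with n = 4k − 1 or 4k − 2, write B2k/k in lowest terms as ck/dk, and read off the group. A short sketch using SymPy's Bernoulli numbers (the string output format is our own convention; valid for n > 1):

from sympy import bernoulli

# K_n(Z) for n > 1, assuming the Quillen-Lichtenbaum and Vandiver conjectures,
# read off from the table above; c_k/d_k is B_{2k}/k in lowest terms.
def k_group_of_Z(n):
    r = n % 8
    if r in (0, 4):
        return "0"
    if r == 5:
        return "Z"
    if r == 1:
        return "Z + Z/2"
    k = (n + 2) // 4                 # n = 4k - 1 or 4k - 2
    b = bernoulli(2 * k) / k         # a sympy Rational
    ck, dk = abs(b.p), abs(b.q)
    return {2: f"Z/{ck} + Z/2", 3: f"Z/{8 * dk}",
            6: f"Z/{ck}", 7: f"Z/{4 * dk}"}[r]

for n in range(2, 10):
    print(n, k_group_of_Z(n))        # K_3 -> Z/48, K_7 -> Z/240; Z/1 is trivial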
Quillen–Lichtenbaum conjecture
[ "Mathematics" ]
360
[ "Mathematical theorems", "Mathematical problems", "Conjectures that have been proved" ]
31,620,091
https://en.wikipedia.org/wiki/Synchronization%20network
A synchronization network is a network of coupled dynamical systems. It consists of a network connecting oscillators, where oscillators are nodes that emit a signal with somewhat regular (possibly variable) frequency, and are also capable of receiving a signal. Particularly interesting is the phase transition where the entire network (or a very large percentage) of oscillators begins pulsing at the same frequency, known as synchronization. The synchronization network then becomes the substrate through which synchronization of these oscillators travels. Since there is no central authority organizing nodes, this is a form of self organizing system. Definition Generally, oscillators can be biological, electronic, or physical. Some examples are fireflies, crickets, heart cells, lasers, microwave oscillators, and neurons. Further examples can be found in many domains. In a particular system, oscillators may be identical or non-identical. That is, either the network is made up of homogeneous or heterogeneous nodes. Properties of oscillators include: frequency, phase and natural frequency. Network edges describe couplings between oscillators. Couplings may be physical attachment, or consist of some proximity measure through a medium such as air or space. Networks have several properties, including: number of nodes (oscillators), network topology, and coupling strength between oscillators. Kuramoto model Kuramoto developed a major analytical framework for coupled dynamical systems, as follows: A network of oscillators with varied natural frequencies will be incoherent while the coupling strength is weak. Letting θ_i be the phase of the ith oscillator and ω_i be its natural frequency, randomly selected from a Cauchy–Lorentz distribution as follows,

g(ω) = γ / (π [γ² + (ω − ω₀)²]),

having width γ and central value ω₀, we obtain a description of collective synchronization:

dθ_i/dt = ω_i + Σ_{j=1}^N K_{ij} sin(θ_j − θ_i),

where N is the number of nodes (oscillators), and K_{ij} is the coupling strength between nodes i and j. Kuramoto has also developed an "order parameter" r, which measures synchronization between nodes:

r e^{iψ} = (1/N) Σ_{j=1}^N e^{iθ_j}

This leads to the asymptotic definition of K_c, the critical coupling strength, as K_c = 2γ and r = √(1 − K_c/K) with K ≥ K_c. Note that r = 0 means no synchronization, and r = 1 perfect synchronization. Beyond K_c, each oscillator will belong to one of two groups: a group that is synchronized, and a group that will never synchronize, since its members' natural frequencies differ too greatly from the synchronization frequency. Network topology Synchronization networks may have many topologies. Topology may have a great deal of influence over the spread of dynamics. Some major topologies are listed below: Regular networks: This describes networks where every node has the same number of links. Lattices, rings, and fully connected networks are some examples of this topology. Random graphs: Developed by Erdős and Rényi, these graphs are characterized by a constant probability of a link existing between any two nodes. Small world networks: These networks are the result of rewiring a certain number of edges in regular lattice networks. The resulting networks have much smaller average path length than the original networks. Scale-free networks: Found ubiquitously in naturally occurring systems, scale free networks are characterized by a large number of high-degree nodes. In particular, the degree distribution follows a power-law. History Coupled oscillators have been studied for many years, at least since the Wilberforce pendulum in 1896. In particular, pulse coupled oscillators were pioneered by Peskin in 1975 with his study of cardiac cells.
Winfree developed a mean-field approach to synchronization in 1967, which was developed further in the Kuramoto model in the 1970s and 1980s to describe large systems of coupled oscillators. Crawford brought the tools of manifold theory and bifurcation theory to bear on the stability of synchronization with his work in the mid-1990s. These works coincided with the development of a more general theory of coupled dynamical systems and popularization by Strogatz et al. in 1990, continuing through the early 2000s. See also Kuramoto model Complex Networks Coupled Oscillators Dynamical Systems Statistical Physics Self-Organizing Systems References External links Strogatz @ Cornell Self Organizing Systems Research Group at Harvard Nextgen Network Synchronization Oscillation
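A minimal simulation of the mean-field Kuramoto model described above (with uniform coupling K_{ij} = K/N), Euler-integrating the phase equation and tracking the order parameter r: with Lorentzian frequencies of width γ the transition should appear near K_c = 2γ. Population size, time step and seed are illustrative assumptions.

import cmath, math, random

# Mean-field Kuramoto: K_ij = K/N; order parameter r = |(1/N) sum_j exp(i*theta_j)|.
def kuramoto_r(K, N=1000, gamma=0.5, dt=0.05, steps=1500, seed=0):
    rng = random.Random(seed)
    # natural frequencies from a Cauchy-Lorentz distribution of width gamma
    omega = [gamma * math.tan(math.pi * (rng.random() - 0.5)) for _ in range(N)]
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(N)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / N      # r * exp(i*psi)
        r, psi = abs(z), cmath.phase(z)
        # mean-field identity: (K/N) sum_j sin(theta_j - theta_i) = K*r*sin(psi - theta_i)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / N)

for K in (0.5, 1.0, 2.0):            # K_c = 2*gamma = 1.0 here
    print(K, round(kuramoto_r(K), 2))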
Synchronization network
[ "Physics" ]
891
[ "Mechanics", "Oscillation" ]
31,620,180
https://en.wikipedia.org/wiki/C19H29NO
The molecular formula C19H29NO (molar mass: 287.44 g/mol, exact mass: 287.2249 u) may refer to: Cycrimine Gephyrotoxin, also known as Histrionicotoxin D Molecular formulas
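The quoted molar mass can be reproduced by summing standard atomic weights over the formula; the small parser below handles only simple Hill-style formulas, and the atomic weights are rounded values.

import re

# Rounded standard atomic weights (g/mol) for the elements appearing here.
WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula):
    # split e.g. "C19H29NO" into (element, count) pairs; a missing count is 1
    return sum(WEIGHTS[el] * (int(num) if num else 1)
               for el, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula))

print(round(molar_mass("C19H29NO"), 2))   # ~287.45, matching the quoted 287.44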
C19H29NO
[ "Physics", "Chemistry" ]
70
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
31,621,776
https://en.wikipedia.org/wiki/Potassium%20heptafluorotantalate
Potassium heptafluorotantalate is an inorganic compound with the formula K2[TaF7]. It is the potassium salt of the heptafluorotantalate anion [TaF7]2−. This white, water-soluble solid is an intermediate in the purification of tantalum from its ores and is the precursor to the metal. Preparation Industrial Potassium heptafluorotantalate is an intermediate in the industrial production of metallic tantalum. Its production involves leaching tantalum ores, such as columbite and tantalite, with hydrofluoric acid and sulfuric acid to produce the water-soluble hydrogen heptafluorotantalate. Ta2O5 + 14 HF → 2 H2[TaF7] + 5 H2O This solution is subjected to a number of liquid-liquid extraction steps to remove metallic impurities (most importantly niobium) before being treated with potassium fluoride to produce K2[TaF7] H2[TaF7] + 2 KF → K2[TaF7] + 2 HF Lab-scale Hydrofluoric acid is both corrosive and toxic, making it unappealing to work with; as such a number of alternative processes have been developed for small-scale syntheses. Potassium heptafluorotantalate can be produced by both anhydrous and wet methods. The anhydrous method involves the reaction of tantalum oxide with potassium bifluoride or ammonium bifluoride according to the following equation: Ta2O5 + 4 KHF2 + 6 HF → 2 K2[TaF7] + 5 H2O The method was originally reported by Berzelius. K2[TaF7] can also be precipitated from solutions in hydrofluoric acid provided that the concentration of HF is below about 42%. Solutions having higher concentrations of HF yield potassium hexafluorotantalate, K[TaF6]. The potassium salt can also be precipitated from a solution of tantalum pentachloride in hydrofluoric acid: 5 HF + 2 KF + TaCl5 → K2[TaF7] + 5 HCl Structure Potassium heptafluorotantalate exists in at least two polymorphs. α-K2[TaF7] is the most common form and crystallises in the monoclinic P21/c space group. The structure is composed of [TaF7]2− units interconnected by potassium ions. [TaF7]2− polyhedra may be described as monocapped trigonal prisms with the capping atom located on one of the rectangular faces. Potassium atoms are 9-coordinated and may be viewed as distorted monocapped square prisms. In terms of the coordination sphere of the heavy metal, potassium heptafluoroniobate is similar to the tantalum salt. At temperatures above 230 °C this converts to β-K2[TaF7], which is orthorhombic (space group: Pnma). This structure also consists of potassium ions and the complex anion [TaF7]2−. The structure of the 7-coordinate [TaF7]2− units is essentially unchanged. However the potassium atoms now exist in 2 environments where they coordinate to either 11 or 8 fluorine atoms. Reactions K2[TaF7] is primarily used to produce metallic tantalum by reduction with sodium. This takes place at approximately 800 °C in molten salt and proceeds via a number of potential pathways. K2[TaF7] + 5 Na → Ta + 5 NaF + 2 KF K2[TaF7] is susceptible to hydrolysis. For example, a boiling aqueous solution of K2[TaF7] yields potassium oxyfluorotantalate (K2Ta2O3F6), known as “Marignac’s salt”. In order to prevent hydrolysis and co-precipitation of potassium oxyfluorotantalate, a small excess of HF is added to the solution. References Fluoro complexes Tantalates Potassium compounds Fluorometallates
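The reduction equation above fixes the theoretical mass balance; a quick sketch (rounded molar masses) of the sodium demand and tantalum yield per kilogram of K2[TaF7]:

# K2[TaF7] + 5 Na -> Ta + 5 NaF + 2 KF, per the reduction equation above.
M = {"K": 39.098, "Ta": 180.948, "F": 18.998, "Na": 22.990}   # g/mol, rounded
m_salt = 2 * M["K"] + M["Ta"] + 7 * M["F"]                    # K2[TaF7], ~392 g/mol
mol = 1000.0 / m_salt                                         # moles per kg of salt
print(f"{m_salt:.1f} g/mol; per kg of salt: "
      f"{mol * 5 * M['Na']:.0f} g Na consumed, {mol * M['Ta']:.0f} g Ta produced")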
Potassium heptafluorotantalate
[ "Chemistry" ]
892
[ "Tantalates", "Salts" ]
31,622,034
https://en.wikipedia.org/wiki/Hymenoptera%20Genome%20Database
The Hymenoptera Genome Database (HGD) is a comprehensive resource supporting genomics of Hymenoptera. See also BeeBase References External links Genome database Model organism databases
Hymenoptera Genome Database
[ "Biology" ]
41
[ "Model organism databases", "Model organisms" ]
31,623,698
https://en.wikipedia.org/wiki/Gataparsen
Gataparsen (development code LY2181308) is an antisense oligonucleotide that binds complementarily to survivin mRNA and inhibits its expression in tumor tissue. It is being investigated for a number of different cancers. It is targeted at survivin, a protein that prevents cells from dying. Clinical trials It has completed a phase II trial for acute myeloid leukemia. A phase II trial for non-small cell lung cancer has started. A phase II trial for hormone-refractory prostate cancer will run until early 2012. References Antisense RNA Nucleic acids
Gataparsen
[ "Chemistry" ]
124
[ "Biomolecules by chemical classification", "Nucleic acids" ]
31,623,814
https://en.wikipedia.org/wiki/Dual%20total%20correlation
In information theory, dual total correlation, information rate, excess entropy, or binding information is one of several known non-negative generalizations of mutual information. While total correlation is bounded by the sum of entropies of the n elements, the dual total correlation is bounded by the joint entropy of the n elements. Although well behaved, dual total correlation has received much less attention than the total correlation. A measure known as "TSE-complexity" defines a continuum between the total correlation and dual total correlation. Definition For a set of n random variables X_1, …, X_n, the dual total correlation is given by

D(X_1, …, X_n) = H(X_1, …, X_n) − Σ_{i=1}^n H(X_i | X_1, …, X_{i−1}, X_{i+1}, …, X_n),

where H(X_1, …, X_n) is the joint entropy of the variable set and H(X_i | ·) is the conditional entropy of variable X_i, given the rest. Normalized The dual total correlation normalized between [0,1] is simply the dual total correlation divided by its maximum value H(X_1, …, X_n),

ND(X_1, …, X_n) = D(X_1, …, X_n) / H(X_1, …, X_n).

Relationship with Total Correlation Dual total correlation is non-negative and bounded above by the joint entropy H(X_1, …, X_n). Secondly, dual total correlation has a close relationship with total correlation, C(X_1, …, X_n), and can be written in terms of differences between the total correlation of the whole and that of all subsets of size n − 1:

D(X) = (n − 1) C(X) − Σ_{i=1}^n C(X_{−i}),

where X = {X_1, …, X_n} and X_{−i} denotes the set X with variable X_i removed. Furthermore, the total correlation and dual total correlation are related by the following bounds:

C(X)/(n − 1) ≤ D(X) ≤ (n − 1) C(X).

Finally, the difference between the total correlation and the dual total correlation defines a novel measure of higher-order information-sharing: the O-information,

Ω(X) = C(X) − D(X).

The O-information (first introduced as the "enigmatic information" by James and Crutchfield) is a signed measure that quantifies the extent to which the information in a multivariate random variable is dominated by synergistic interactions (in which case Ω(X) < 0) or redundant interactions (in which case Ω(X) > 0). History Han (1978) originally defined the dual total correlation as

D(X_1, …, X_n) ≡ Σ_{i=1}^n H(X_1, …, X_{i−1}, X_{i+1}, …, X_n) − (n − 1) H(X_1, …, X_n).

However, Abdallah and Plumbley (2010) showed its equivalence to the easier-to-understand form of the joint entropy minus the sum of conditional entropies via the following: since H(X_i | X_{−i}) = H(X) − H(X_{−i}), one has D(X) = H(X) − Σ_i [H(X) − H(X_{−i})] = Σ_i H(X_{−i}) − (n − 1) H(X). See also Interaction information Mutual information Total correlation Bibliography Footnotes References Information theory Probability theory Covariance and correlation
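A small sketch computing total correlation, dual total correlation and the O-information from an empirical joint distribution of discrete variables; the XOR triple used as a test case is a standard example of pure synergy (TC = 1 bit, DTC = 2 bits, Ω = −1 bit).

import math
from collections import Counter
from itertools import product

def H(samples, idx):
    """Shannon entropy (bits) of the variables at positions idx."""
    counts = Counter(tuple(s[i] for i in idx) for s in samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def info_measures(samples):
    k = len(samples[0])
    joint = H(samples, range(k))
    tc = sum(H(samples, [i]) for i in range(k)) - joint
    # H(X_i | rest) = H(X) - H(X_-i)
    dtc = joint - sum(joint - H(samples, [j for j in range(k) if j != i])
                      for i in range(k))
    return tc, dtc, tc - dtc                 # O-information = TC - DTC

# XOR triple: X3 = X1 xor X2 -> (TC, DTC, O-information) = (1.0, 2.0, -1.0)
xor = [(a, b, a ^ b) for a, b in product((0, 1), repeat=2)]
print(info_measures(xor))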
Dual total correlation
[ "Mathematics", "Technology", "Engineering" ]
405
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
3,888,215
https://en.wikipedia.org/wiki/Jefimenko%27s%20equations
In electromagnetism, Jefimenko's equations (named after Oleg D. Jefimenko) give the electric field and magnetic field due to a distribution of electric charges and electric current in space, that takes into account the propagation delay (retarded time) of the fields due to the finite speed of light and relativistic effects. Therefore, they can be used for moving charges and currents. They are the particular solutions to Maxwell's equations for any arbitrary distribution of charges and currents. Equations Electric and magnetic fields Jefimenko's equations give the electric field E and magnetic field B produced by an arbitrary charge or current distribution, of charge density ρ and current density J:

E(r, t) = (1/(4πε₀)) ∫ [ ((r − r′)/|r − r′|³) ρ(r′, t_r) + ((r − r′)/(c|r − r′|²)) ∂ρ(r′, t_r)/∂t − (1/(c²|r − r′|)) ∂J(r′, t_r)/∂t ] d³r′

B(r, t) = (μ₀/(4π)) ∫ [ J(r′, t_r)/|r − r′|³ + (1/(c|r − r′|²)) ∂J(r′, t_r)/∂t ] × (r − r′) d³r′

where r′ is a point in the charge distribution, r is a point in space, and t_r = t − |r − r′|/c is the retarded time. There are similar expressions for D and H. These equations are the time-dependent generalization of Coulomb's law and the Biot–Savart law to electrodynamics, which were originally true only for electrostatic and magnetostatic fields, and steady currents. Origin from retarded potentials Jefimenko's equations can be found from the retarded potentials φ and A:

φ(r, t) = (1/(4πε₀)) ∫ ρ(r′, t_r)/|r − r′| d³r′,
A(r, t) = (μ₀/(4π)) ∫ J(r′, t_r)/|r − r′| d³r′,

which are the solutions to Maxwell's equations in the potential formulation, then substituting in the definitions of the electromagnetic potentials themselves:

E = −∇φ − ∂A/∂t, B = ∇ × A,

and using the relation c² = 1/(ε₀μ₀) replaces the potentials φ and A by the fields E and B. Heaviside–Feynman formula The Heaviside–Feynman formula, also known as the Jefimenko–Feynman formula, can be seen as the point-like electric charge version of Jefimenko's equations. Actually, it can be (non trivially) deduced from them using Dirac delta functions, or using the Liénard-Wiechert potentials. It is mostly known from The Feynman Lectures on Physics, where it was used to introduce and describe the origin of electromagnetic radiation. The formula provides a natural generalization of the Coulomb's law for cases where the source charge is moving:

E = −(q/(4πε₀)) [ e_{r′}/r′² + (r′/c) (d/dt)(e_{r′}/r′²) + (1/c²) (d²/dt²) e_{r′} ],
B = −e_{r′} × E/c.

Here, E and B are the electric and magnetic fields respectively, q is the electric charge, ε₀ is the vacuum permittivity (electric field constant) and c is the speed of light. The vector e_{r′} is a unit vector pointing from the observer to the charge and r′ is the distance between observer and charge. Since the electromagnetic field propagates at the speed of light, both these quantities are evaluated at the retarded time t − r′/c. The first term in the formula for E represents the Coulomb's law for the static electric field. The second term is the time derivative of the first Coulombic term multiplied by r′/c, which is the propagation time of the electric field. Heuristically, this can be regarded as nature "attempting" to forecast what the present field would be by linear extrapolation to the present time. The last term, proportional to the second derivative of the unit direction vector e_{r′}, is sensitive to charge motion perpendicular to the line of sight. It can be shown that the electric field generated by this term is proportional to a_⊥(t_r)/r′, where a_⊥ is the transverse acceleration at the retarded time. As it decreases only as 1/r′ with distance compared to the standard 1/r′² Coulombic behavior, this term is responsible for the long-range electromagnetic radiation caused by the accelerating charge. The Heaviside–Feynman formula can be derived from Maxwell's equations using the technique of the retarded potential. It allows, for example, the derivation of the Larmor formula for overall radiation power of the accelerating charge.
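The retarded-time structure of the Heaviside–Feynman formula can be made concrete numerically: for a charge oscillating as z_q(t) = a cos(Ωt), solve t_r = t − |r − r_q(t_r)|/c by fixed-point iteration and evaluate the three terms by finite differences. All numbers below are illustrative assumptions; far from the charge only the 1/r′ radiation term contributes appreciably to the transverse (z) component.

import math

c = 299_792_458.0
a, Om = 1e-3, 2 * math.pi * 1e6          # 1 mm amplitude, 1 MHz (assumed)

def charge_pos(t):                       # charge oscillating on the z axis
    return (0.0, 0.0, a * math.cos(Om * t))

def retarded_time(obs, t, iters=8):      # fixed point of tr = t - |r - r_q(tr)|/c
    tr = t
    for _ in range(iters):
        tr = t - math.dist(obs, charge_pos(tr)) / c
    return tr

def unit_and_r(obs, t):                  # e_r' (observer -> charge) and r'
    p = charge_pos(retarded_time(obs, t))
    r = math.dist(obs, p)
    return [(pi - oi) / r for pi, oi in zip(p, obs)], r

def E_field(obs, t, q=1.602e-19, h=1e-10):
    eps0 = 8.8541878128e-12
    e0, r0 = unit_and_r(obs, t)
    ep, rp = unit_and_r(obs, t + h)      # samples for numerical time derivatives
    em, rm = unit_and_r(obs, t - h)
    pref = -q / (4 * math.pi * eps0)
    return [pref * (e0[i] / r0**2
                    + (r0 / c) * (ep[i] / rp**2 - em[i] / rm**2) / (2 * h)
                    + (ep[i] - 2 * e0[i] + em[i]) / (h**2 * c**2))
            for i in range(3)]

print(E_field((500.0, 0.0, 0.0), t=0.0))  # z component dominated by radiation term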
Discussion There is a widespread interpretation of Maxwell's equations indicating that spatially varying electric and magnetic fields can cause each other to change in time, thus giving rise to a propagating electromagnetic wave (electromagnetism). However, Jefimenko's equations show an alternative point of view. Jefimenko says, "...neither Maxwell's equations nor their solutions indicate an existence of causal links between electric and magnetic fields. Therefore, we must conclude that an electromagnetic field is a dual entity always having an electric and a magnetic component simultaneously created by their common sources: time-variable electric charges and currents." As pointed out by McDonald, Jefimenko's equations seem to appear first in 1962 in the second edition of Panofsky and Phillips's classic textbook. David Griffiths, however, clarifies that "the earliest explicit statement of which I am aware was by Oleg Jefimenko, in 1966" and characterizes equations in Panofsky and Phillips's textbook as only "closely related expressions". According to Andrew Zangwill, the equations analogous to Jefimenko's but in the Fourier frequency domain were first derived by George Adolphus Schott in his treatise Electromagnetic Radiation (University Press, Cambridge, 1912). An essential feature of these equations is easily observed: the right-hand sides involve "retarded" time, which reflects the "causality" of the expressions. In other words, the left side of each equation is actually "caused" by the right side, unlike the normal differential expressions for Maxwell's equations, where both sides are evaluated simultaneously. In the typical expressions for Maxwell's equations there is no doubt that both sides are equal to each other, but as Jefimenko notes, "... since each of these equations connects quantities simultaneous in time, none of these equations can represent a causal relation." See also Liénard–Wiechert potential Notes Electrodynamics Electromagnetism Eponymous equations of physics
Jefimenko's equations
[ "Physics", "Mathematics" ]
1,146
[ "Electromagnetism", "Physical phenomena", "Equations of physics", "Eponymous equations of physics", "Fundamental interactions", "Electrodynamics", "Dynamical systems" ]
3,889,704
https://en.wikipedia.org/wiki/Emerging%20technologies
Emerging technologies are technologies whose development, practical applications, or both are still largely unrealized. These technologies are generally new but also include old technologies finding new applications. Emerging technologies are often perceived as capable of changing the status quo. Emerging technologies are characterized by radical novelty (in application even if not in origins), relatively fast growth, coherence, prominent impact, and uncertainty and ambiguity. In other words, an emerging technology can be defined as "a radically novel and relatively fast growing technology characterised by a certain degree of coherence persisting over time and with the potential to exert a considerable impact on the socio-economic domain(s) which is observed in terms of the composition of actors, institutions and patterns of interactions among those, along with the associated knowledge production processes. Its most prominent impact, however, lies in the future and so in the emergence phase is still somewhat uncertain and ambiguous." Emerging technologies include a variety of technologies such as educational technology, information technology, nanotechnology, biotechnology, robotics, and artificial intelligence. New technological fields may result from the technological convergence of different systems evolving towards similar goals. Convergence brings previously separate technologies such as voice (and telephony features), data (and productivity applications) and video together so that they share resources and interact with each other, creating new efficiencies. Emerging technologies are those technical innovations which represent progressive developments within a field for competitive advantage; converging technologies represent previously distinct fields which are in some way moving towards stronger inter-connection and similar goals. However, the opinion on the degree of the impact, status and economic viability of several emerging and converging technologies varies. History of emerging technologies In the history of technology, emerging technologies are contemporary advances and innovation in various fields of technology. Over the centuries, innovative methods and new technologies have been developed and have opened up new fields. Some of these technologies are due to theoretical research, and others from commercial research and development. Technological growth includes incremental developments and disruptive technologies. An example of the former was the gradual roll-out of DVD (digital video disc) as a development intended to follow on from the previous optical technology, the compact disc. By contrast, disruptive technologies are those where a new method replaces the previous technology and makes it redundant, for example, the replacement of horse-drawn carriages by automobiles and other vehicles. Emerging technology debates Many writers, including computer scientist Bill Joy, have identified clusters of technologies that they consider critical to humanity's future. Joy warns that the technology could be used by elites for good or evil. They could use it as "good shepherds" for the rest of humanity or decide everyone else is superfluous and push for the mass extinction of those made unnecessary by technology. Advocates of the benefits of technological change typically see emerging and converging technologies as offering hope for the betterment of the human condition.
Cyberphilosophers Alexander Bard and Jan Söderqvist argue in The Futurica Trilogy that while Man himself is basically constant throughout human history (genes change very slowly), all relevant change is rather a direct or indirect result of technological innovation (memes change very fast) since new ideas always emanate from technology use and not the other way around. Man should consequently be regarded as history's main constant and technology as its main variable. However, critics of the risks of technological change, and even some advocates such as transhumanist philosopher Nick Bostrom, warn that some of these technologies could pose dangers, perhaps even contribute to the extinction of humanity itself; i.e., some of them could involve existential risks. Much ethical debate centers on issues of distributive justice in allocating access to beneficial forms of technology. Some thinkers, including environmental ethicist Bill McKibben, oppose the continuing development of advanced technology partly out of fear that its benefits will be distributed unequally in ways that could worsen the plight of the poor. By contrast, inventor Ray Kurzweil is among techno-utopians who believe that emerging and converging technologies could and will eliminate poverty and abolish suffering. Some analysts such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, argue that as information technology advances, robots and other forms of automation will ultimately result in significant unemployment as machines and software begin to match and exceed the capability of workers to perform most routine jobs. As robotics and artificial intelligence develop further, even many skilled jobs may be threatened. Technologies such as machine learning may ultimately allow computers to do many knowledge-based jobs that require significant education. This may result in substantial unemployment at all skill levels, stagnant or falling wages for most workers, and increased concentration of income and wealth as the owners of capital capture an ever-larger fraction of the economy. This in turn could lead to depressed consumer spending and economic growth as the bulk of the population lacks sufficient discretionary income to purchase the products and services produced by the economy. Examples of emerging technologies Artificial intelligence Artificial intelligence (AI) is the sub intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with animal-like intelligence. Major AI researchers and textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the study of making intelligent machines". The central functions (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long-term goals. Currently, popular approaches include deep learning, statistical methods, computational intelligence and traditional symbolic AI. There is an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others. 
3D printing 3D printing, also known as additive manufacturing, has been posited by Jeremy Rifkin and others as part of the third industrial revolution. Combined with Internet technology, 3D printing would allow for digital blueprints of virtually any material product to be sent instantly to another person to be produced on the spot, making purchasing a product online almost instantaneous. Although this technology is still too crude to produce most products, it is rapidly developing and created a controversy in 2013 around the issue of 3D printed firearms. Gene therapy Gene therapy was first successfully demonstrated in late 1990/early 1991 for adenosine deaminase deficiency, though the treatment was somatic – that is, did not affect the patient's germ line and thus was not heritable. This led the way to treatments for other genetic diseases and increased interest in germ line gene therapy – therapy affecting the gametes and descendants of patients. Between September 1990 and January 2014, there were around 2,000 gene therapy trials conducted or approved. Cancer vaccines A cancer vaccine is a vaccine that treats existing cancer or prevents the development of cancer in certain high-risk individuals. Vaccines that treat existing cancer are known as therapeutic cancer vaccines. There are currently no vaccines able to prevent cancer in general. On April 14, 2009, The Dendreon Corporation announced that their Phase III clinical trial of Provenge, a cancer vaccine designed to treat prostate cancer, had demonstrated an increase in survival. It received U.S. Food and Drug Administration (FDA) approval for use in the treatment of advanced prostate cancer patients on April 29, 2010. The approval of Provenge has stimulated interest in this type of therapy. Cultured meat Cultured meat, also called in vitro meat, clean meat, cruelty-free meat, shmeat, and test-tube meat, is an animal-flesh product that has never been part of a living animal, with the exception of the fetal calf serum taken from a slaughtered cow. In the 21st century, several research projects have worked on in vitro meat in the laboratory. The first in vitro beefburger, created by a Dutch team, was eaten at a demonstration for the press in London in August 2013. There remain difficulties to be overcome before in vitro meat becomes commercially available. Cultured meat is prohibitively expensive, but it is expected that the cost could be reduced to compete with that of conventionally obtained meat as technology improves. In vitro meat is also an ethical issue. Some argue that it is less objectionable than traditionally obtained meat because it does not involve killing and reduces the risk of animal cruelty, while others disagree with eating meat that has not developed naturally. Nanotechnology Nanotechnology (sometimes shortened to nanotech) is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers.
This definition reflects the fact that quantum mechanical effects are important at this scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold. Robotics Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing. These technologies deal with automated machines that can take the place of humans in dangerous environments, factories, warehouses, or kitchens; or resemble humans in appearance, behavior, and/or cognition. A good example of a robot that resembles humans is Sophia, a social humanoid robot developed by Hong Kong-based company Hanson Robotics which was activated on April 19, 2015. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics. Stem-cell therapy Stem cell therapy is an intervention strategy that introduces new adult stem cells into damaged tissue in order to treat disease or injury. Many medical researchers believe that stem cell treatments have the potential to change the face of human disease and alleviate suffering. The ability of stem cells to self-renew and give rise to subsequent generations with variable degrees of differentiation capacities offers significant potential for generation of tissues that can potentially replace diseased and damaged areas in the body, with minimal risk of rejection and side effects. Chimeric antigen receptor (CAR)-modified T cells have risen to prominence among other immunotherapies for cancer treatment, being implemented against B-cell malignancies. Despite the promising outcomes of this innovative technology, CAR-T cells are not exempt from limitations that have yet to be overcome in order to provide reliable and more efficient treatments against other types of cancer. Distributed ledger technology Distributed ledger or blockchain technology provides a transparent and immutable list of transactions. A wide range of uses has been proposed for where an open, decentralised database is required, ranging from supply chains to cryptocurrencies. Smart contracts are self-executing transactions which occur when pre-defined conditions are met. The aim is to provide security that is superior to traditional contract law, and to reduce transaction costs and delays. The original idea was conceived by Nick Szabo in 1994, but remained unrealised until the development of blockchains. Augmented reality This type of technology, in which digital graphics are overlaid onto live footage, has been around since the 20th century, but thanks to the arrival of more powerful computing hardware and the implementation of open source, it has been able to do things that we never thought were possible. It has been used in apps such as Pokémon Go, in Snapchat and Instagram filters, and in other apps that place fictional objects in real scenes. Multi-use rockets Reusable rockets, in contrast to single-use rockets that are disposed of after launch, are able to land propulsively and safely in a pre-specified place, where they are recovered to be used again in later launches.
Early prototypes include the McDonnell Douglas DC-X tested in the 1990s, but the company SpaceX was the first to use propulsive reusability on the first stage of an operational orbital launch vehicle, the Falcon 9, in the 2010s. SpaceX is also developing a fully reusable rocket known as Starship. Other entities developing reusable rockets include Blue Origin and Rocket Lab. Development of emerging technologies As innovation drives economic growth, and large economic rewards come from new inventions, a great deal of resources (funding and effort) goes into the development of emerging technologies. Some of the sources of these resources are described below. Research and development Research and development is directed towards the advancement of technology in general, and therefore includes development of emerging technologies. See also List of countries by research and development spending. Applied research is a form of systematic inquiry involving the practical application of science. It accesses and uses some part of the research community's (academia's) accumulated theories, knowledge, methods, and techniques, for a specific, often state-, business-, or client-driven purpose. Science policy is the area of public policy which is concerned with the policies that affect the conduct of the science and research enterprise, including the funding of science, often in pursuance of other national policy goals such as technological innovation to promote commercial product development, weapons development, health care and environmental monitoring. Patents Patents provide inventors with a limited period of time (minimum of 20 years, but duration based on jurisdiction) of exclusive right in the making, selling, use, leasing or otherwise of their novel technological inventions. Artificial intelligence, robotic inventions, new materials, or blockchain platforms may be patentable, the patent protecting the technological know-how used to create these inventions. In 2019, the World Intellectual Property Organization (WIPO) reported that AI was the most prolific emerging technology in terms of number of patent applications and granted patents, while the Internet of things was estimated to be the largest in terms of market size. It was followed, again in market size, by big data technologies, robotics, AI, 3D printing and the fifth generation of mobile services (5G). Since AI emerged in the 1950s, 340,000 AI-related patent applications have been filed by innovators and 1.6 million scientific papers have been published by researchers, with the majority of all AI-related patent filings published since 2013. Companies represent 26 out of the top 30 AI patent applicants, with universities or public research organizations accounting for the remaining four. DARPA DARPA (Defense Advanced Research Projects Agency) is an agency of the U.S. Department of Defense responsible for the development of emerging technologies for use by the military. DARPA was created in 1958 as the Advanced Research Projects Agency (ARPA) by President Dwight D. Eisenhower. Its purpose was to formulate and execute research and development projects to expand the frontiers of technology and science, with the aim to reach beyond immediate military requirements. Projects funded by DARPA have provided significant technologies that influenced many non-military fields, such as the Internet and Global Positioning System technology. 
Technology competitions and awards There are awards that provide incentive to push the limits of technology (generally synonymous with emerging technologies). Note that while some of these awards reward achievement after-the-fact via analysis of the merits of technological breakthroughs, others provide incentive via competitions for awards offered for goals yet to be achieved. The Orteig Prize was a $25,000 award offered in 1919 by French hotelier Raymond Orteig for the first nonstop flight between New York City and Paris. In 1927, underdog Charles Lindbergh won the prize in a modified single-engine Ryan aircraft called the Spirit of St. Louis. In total, nine teams spent $400,000 in pursuit of the Orteig Prize. The XPRIZE series of awards, public competitions designed and managed by the non-profit organization called the X Prize Foundation, is intended to encourage technological development that could benefit mankind. The most high-profile XPRIZE to date was the $10,000,000 Ansari XPRIZE relating to spacecraft development, which was awarded in 2004 for the development of SpaceShipOne. The Turing Award is an annual prize given by the Association for Computing Machinery (ACM) to "an individual selected for contributions of a technical nature made to the computing community." It is stipulated that the contributions should be of lasting and major technical importance to the computer field. The Turing Award is generally recognized as the highest distinction in computer science, and in 2014 grew to $1,000,000. The Millennium Technology Prize is awarded once every two years by Technology Academy Finland, an independent fund established by Finnish industry and the Finnish state in partnership. The first recipient was Tim Berners-Lee, inventor of the World Wide Web. In 2003, David Gobel seed-funded the Methuselah Mouse Prize (Mprize) to encourage the development of new life extension therapies in mice, which are genetically similar to humans. So far, three Mouse Prizes have been awarded: one for breaking longevity records to Dr. Andrzej Bartke of Southern Illinois University; one for late-onset rejuvenation strategies to Dr. Stephen Spindler of the University of California; and one to Dr. Z. Dave Sharp for his work with the pharmaceutical rapamycin. Role of science fiction Science fiction has often affected innovation and new technology by presenting creative, intriguing possibilities for technological advancement. For example, many rocketry pioneers were inspired by science fiction. The documentary How William Shatner Changed the World describes a number of examples of imagined technologies that became real. Bleeding edge The term bleeding edge has been used to refer to some new technologies, formed as an allusion to the similar terms "leading edge" and "cutting edge". It tends to imply even greater advancement, albeit at an increased risk because of the unreliability of the software or hardware. The first documented example of this term being used dates to early 1983, when an unnamed banking executive was quoted as having used it in reference to Storage Technology Corporation. 
See also List of emerging technologies Bioconservatism Bioethics Biopolitics Current research in evolutionary biology Foresight (futures studies) Futures studies Future of Humanity Institute Institute for Ethics and Emerging Technologies Institute on Biotechnology and the Human Future Technological change Differential technological development Accelerating change Moore's law Innovation Technological revolution Technological innovation system Technological utopianism Techno-progressivism Transhumanism Technological singularity :Category:Upcoming software Notes References Citations Further reading General Giersch, H. (1982). Emerging technologies: Consequences for economic growth, structural change, and employment: symposium 1981. Tübingen: Mohr. Jones-Garmil, K. (1997). The wired museum: Emerging technology and changing paradigms. Washington, DC: American Association of Museums. Kaldis, Byron (2010). "Converging Technologies". Sage Encyclopedia of Nanotechnology and Society. Thousand Oaks, CA: Sage. Law and policy Branscomb, L. M. (1993). Empowering technology: Implementing a U.S. strategy. Cambridge, Mass: MIT Press. Raysman, R., & Raysman, R. (2002). Emerging technologies and the law: Forms and analysis. Commercial law intellectual property series. New York, N.Y.: Law Journal Press. Information and learning Hung, D., & Khine, M. S. (2006). Engaged learning with emerging technologies. Dordrecht: Springer. Kendall, K. E. (1999). Emerging information technologies: Improving decisions, cooperation, and infrastructure. Thousand Oaks, Calif: Sage Publications. Other Cavin, R. K., & Liu, W. (1996). Emerging technologies: Designing low power digital systems. [New York]: Institute of Electrical and Electronics Engineers. Bioethics Transhumanism Technology development
Emerging technologies
[ "Technology", "Engineering", "Biology" ]
4,082
[ "Bioethics", "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
3,891,878
https://en.wikipedia.org/wiki/Probabilistic%20metric%20space
In mathematics, probabilistic metric spaces are a generalization of metric spaces where the distance no longer takes values in the non-negative real numbers, but in distribution functions. Let D+ be the set of all probability distribution functions F such that F(0) = 0 (F is a nondecreasing, left continuous mapping from R into [0, 1] such that max(F) = 1). Then given a non-empty set S and a function F: S × S → D+ where we denote F(p, q) by Fp,q for every (p, q) ∈ S × S, the ordered pair (S, F) is said to be a probabilistic metric space if: For all u and v in S, u = v if and only if Fu,v(x) = 1 for all x > 0. For all u and v in S, Fu,v = Fv,u. For all u, v and w in S, Fu,v(x) = 1 and Fv,w(y) = 1 implies Fu,w(x + y) = 1, for all x, y > 0. History Probabilistic metric spaces were initially introduced by Menger, who termed them statistical metrics. Shortly after, Wald criticized the generalized triangle inequality and proposed an alternative one. However, both authors had come to the conclusion that in some respects the Wald inequality was too stringent a requirement to impose on all probability metric spaces, a conclusion that is partly included in the work of Schweizer and Sklar. Later, probabilistic metric spaces were found to be very suitable for use with fuzzy sets, and were further developed into fuzzy metric spaces. Probability metric of random variables A probability metric D between two random variables X and Y may be defined, for example, as D(X, Y) = ∫∫ |x − y| F(x, y) dx dy, where F(x, y) denotes the joint probability density function of the random variables X and Y. If X and Y are independent from each other, then the equation above transforms into D(X, Y) = ∫∫ |x − y| f(x) g(y) dx dy, where f(x) and g(y) are the probability density functions of X and Y respectively. One may easily show that such probability metrics do not satisfy the first metric axiom, or satisfy it only if both of the arguments X and Y are certain events described by Dirac delta density probability distribution functions. In this case the probability metric simply transforms into the metric between the expected values μx, μy of the variables X and Y: D(X, Y) = |μx − μy|. For all other random variables X, Y the probability metric does not satisfy the identity of indiscernibles condition required to be satisfied by the metric of the metric space; that is, D(X, X) > 0. Example For example, if both probability distribution functions of random variables X and Y are normal distributions (N) having the same standard deviation σ, integrating yields: DNN(X, Y) = μxy + (2σ/√π) exp(−μxy²/(4σ²)) − μxy erfc(μxy/(2σ)), where μxy = |μx − μy| and erfc is the complementary error function. In this case: DNN(X, X) = 2σ/√π. Probability metric of random vectors The probability metric of random variables may be extended into a metric D(X, Y) of random vectors X, Y by substituting |x − y| with any metric operator d(x, y): D(X, Y) = ∫∫ d(x, y) F(X, Y) dX dY, where F(X, Y) is the joint probability density function of random vectors X and Y. For example, substituting d(x, y) with the Euclidean metric and providing the vectors X and Y are mutually independent would yield D(X, Y) = ∫∫ ||x − y|| f(x) g(y) dx dy. References Probability distributions Metric geometry
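As a sanity check on the normal-distribution example above, the closed form DNN can be compared against direct numerical integration of the defining double integral. The following is a minimal sketch (assuming SciPy is available; the function names are illustrative, not from any standard library):

```python
import math
from scipy import integrate
from scipy.stats import norm

def D_closed_form(mu_x, mu_y, sigma):
    """D_NN(X, Y) for independent X ~ N(mu_x, sigma), Y ~ N(mu_y, sigma)."""
    m = abs(mu_x - mu_y)  # mu_xy in the text
    return (m
            + (2 * sigma / math.sqrt(math.pi)) * math.exp(-m**2 / (4 * sigma**2))
            - m * math.erfc(m / (2 * sigma)))

def D_numeric(mu_x, mu_y, sigma):
    """Double integral of |x - y| f(x) g(y) over a wide truncated domain."""
    lo = min(mu_x, mu_y) - 8 * sigma
    hi = max(mu_x, mu_y) + 8 * sigma
    val, _err = integrate.dblquad(
        lambda y, x: abs(x - y) * norm.pdf(x, mu_x, sigma) * norm.pdf(y, mu_y, sigma),
        lo, hi,                      # outer variable x
        lambda x: lo, lambda x: hi)  # inner variable y
    return val

print(D_closed_form(0.0, 1.5, 1.0))  # ~1.7097
print(D_numeric(0.0, 1.5, 1.0))      # agrees to several decimal places
```

Note that D_closed_form(mu, mu, sigma) returns 2σ/√π rather than zero, illustrating why such a probability metric fails the identity of indiscernibles for non-degenerate distributions.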
Probabilistic metric space
[ "Mathematics" ]
629
[ "Functions and mappings", "Mathematical relations", "Mathematical objects", "Probability distributions" ]
3,892,303
https://en.wikipedia.org/wiki/Tutte%20polynomial
The Tutte polynomial, also called the dichromate or the Tutte–Whitney polynomial, is a graph polynomial. It is a polynomial in two variables which plays an important role in graph theory. It is defined for every undirected graph G and contains information about how the graph is connected. It is denoted by TG. The importance of this polynomial stems from the information it contains about G. Though originally studied in algebraic graph theory as a generalization of counting problems related to graph coloring and nowhere-zero flow, it contains several famous other specializations from other sciences such as the Jones polynomial from knot theory and the partition functions of the Potts model from statistical physics. It is also the source of several central computational problems in theoretical computer science. The Tutte polynomial has several equivalent definitions. It is essentially equivalent to Whitney’s rank polynomial, Tutte’s own dichromatic polynomial and Fortuin–Kasteleyn’s random cluster model under simple transformations. It is essentially a generating function for the number of edge sets of a given size and connected components, with immediate generalizations to matroids. It is also the most general graph invariant that can be defined by a deletion–contraction recurrence. Several textbooks about graph theory and matroid theory devote entire chapters to it. Definitions Definition. For an undirected graph G = (V, E) one may define the Tutte polynomial as TG(x, y) = ∑_(A ⊆ E) (x − 1)^(k(A) − k(E)) (y − 1)^(k(A) + |A| − |V|), where k(A) denotes the number of connected components of the graph (V, A). In this definition it is clear that TG is well-defined and a polynomial in x and y. The same definition can be given using slightly different notation by letting r(A) = |V| − k(A) denote the rank of the graph (V, A). Then the Whitney rank generating function is defined as RG(u, v) = ∑_(A ⊆ E) u^(r(E) − r(A)) v^(|A| − r(A)). The two functions are equivalent under a simple change of variables: TG(x, y) = RG(x − 1, y − 1). Tutte’s dichromatic polynomial QG is the result of another simple transformation: TG(x, y) = (x − 1)^(−k(E)) QG(x − 1, y − 1). Tutte’s original definition of TG is equivalent but less easily stated. For connected G we set TG(x, y) = ∑_(i,j) t_(i,j) x^i y^j, where t_(i,j) denotes the number of spanning trees of internal activity i and external activity j. A third definition uses a deletion–contraction recurrence. The edge contraction G/e of graph G by the edge e = uv is the graph obtained by merging the vertices u and v and removing the edge e. We write G − e for the graph where the edge e is merely removed. Then the Tutte polynomial is defined by the recurrence relation TG = TG−e + TG/e if e is neither a loop nor a bridge, with base case TG(x, y) = x^i y^j if G contains i bridges and j loops and no other edges. Especially, TG = 1 if G contains no edges. The random cluster model from statistical mechanics due to Fortuin and Kasteleyn provides yet another equivalent definition. The partition sum ZG(q, w) = ∑_(F ⊆ E) q^(k(F)) w^(|F|) is equivalent to TG under the transformation TG(x, y) = (x − 1)^(−k(E)) (y − 1)^(−|V|) ZG((x − 1)(y − 1), y − 1). Properties The Tutte polynomial factors into connected components. If G is the union of disjoint graphs H and H′ then TG = TH · TH′. If G is planar and G∗ denotes its dual graph then TG(x, y) = TG∗(y, x). Especially, the chromatic polynomial of a planar graph is the flow polynomial of its dual. Tutte refers to such functions as V-functions. Examples Isomorphic graphs have the same Tutte polynomial, but the converse is not true. For example, the Tutte polynomial of every tree on m edges is x^m. Tutte polynomials are often given in tabular form by listing the coefficients t_(i,j) of x^i y^j in row i and column j; the Tutte polynomials of the Petersen graph and of the octahedron graph, for example, are usually presented as such coefficient tables (not reproduced here). 
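To make the subset expansion in the definition above concrete, here is a small brute-force evaluator, written as an illustrative Python sketch (the function name and graph representation are ad hoc, and the running time is exponential in the number of edges, so it is only usable on tiny graphs):

```python
from itertools import combinations

def tutte_polynomial(vertices, edges, x, y):
    """Evaluate T_G(x, y) = sum over A subseteq E of
    (x-1)**(k(A)-k(E)) * (y-1)**(k(A)+|A|-|V|),
    where k(A) counts connected components of (V, A)."""
    def k(edge_subset):
        # union-find over the vertex set
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, w in edge_subset:
            parent[find(u)] = find(w)
        return len({find(v) for v in vertices})

    kE, n = k(edges), len(vertices)
    total = 0
    for r in range(len(edges) + 1):
        for A in combinations(edges, r):
            total += (x - 1) ** (k(A) - kE) * (y - 1) ** (k(A) + len(A) - n)
    return total

# The triangle has T(x, y) = x**2 + x + y, so T(1, 1) = 3 spanning trees:
print(tutte_polynomial([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 1, 1))  # -> 3
```

Both exponents in the sum are non-negative (removing edges can only increase the number of components, and k(A) + |A| ≥ |V| always holds), so the expression is well-defined at every point.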
History W. T. Tutte’s interest in the deletion–contraction formula started in his undergraduate days at Trinity College, Cambridge, originally motivated by perfect rectangles and spanning trees. He often applied the formula in his research and “wondered if there were other interesting functions of graphs, invariant under isomorphism, with similar recursion formulae.” R. M. Foster had already observed that the chromatic polynomial is one such function, and Tutte began to discover more. His original terminology for graph invariants that satisfy the deletion–contraction recursion was W-function, and V-function if multiplicative over components. Tutte writes, “Playing with my W-functions I obtained a two-variable polynomial from which either the chromatic polynomial or the flow-polynomial could be obtained by setting one of the variables equal to zero, and adjusting signs.” Tutte called this function the dichromate, as he saw it as a generalization of the chromatic polynomial to two variables, but it is usually referred to as the Tutte polynomial. In Tutte’s words, “This may be unfair to Hassler Whitney who knew and used analogous coefficients without bothering to affix them to two variables.” (There is “notable confusion” about the terms dichromate and dichromatic polynomial, introduced by Tutte in a different paper, and which differ only slightly.) The generalisation of the Tutte polynomial to matroids was first published by Crapo, though it appears already in Tutte’s thesis. Independently of the work in algebraic graph theory, Potts began studying the partition function of certain models in statistical mechanics in 1952. The work by Fortuin and Kasteleyn on the random cluster model, a generalisation of the Potts model, provided a unifying expression that showed the relation to the Tutte polynomial. Specialisations At various points and lines of the xy-plane, the Tutte polynomial evaluates to quantities that have been studied in their own right in diverse fields of mathematics and physics. Part of the appeal of the Tutte polynomial comes from the unifying framework it provides for analysing these quantities. Chromatic polynomial At y = 0, the Tutte polynomial specialises to the chromatic polynomial: χG(λ) = (−1)^(r(E)) λ^(k(G)) TG(1 − λ, 0), where k(G) denotes the number of connected components of G. For integer λ the value of the chromatic polynomial equals the number of vertex colourings of G using a set of λ colours. It is clear that χG(λ) does not depend on the set of colours. What is less clear is that it is the evaluation at λ of a polynomial with integer coefficients. To see this, we observe: If G has n vertices and no edges, then χG(λ) = λ^n. If G contains a loop (a single edge connecting a vertex to itself), then χG(λ) = 0. If e is an edge which is not a loop, then χG(λ) = χG−e(λ) − χG/e(λ). The three conditions above enable us to calculate χG(λ), by applying a sequence of edge deletions and contractions; but they give no guarantee that a different sequence of deletions and contractions will lead to the same value. The guarantee comes from the fact that χG(λ) counts something, independently of the recurrence. In particular, (−1)^(|V|) χG(−1) gives the number of acyclic orientations. 
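As a quick illustration of this specialisation, the following sketch (illustrative Python, reusing the tutte_polynomial function from the sketch in the Definitions section) evaluates the chromatic polynomial through the identity above and recovers λ(λ − 1)(λ − 2) for the triangle:

```python
def chromatic(vertices, edges, lam):
    """chi_G(lam) = (-1)**r(E) * lam**k(G) * T_G(1 - lam, 0).
    Reuses tutte_polynomial() from the earlier sketch."""
    # k(G): connected components of the whole graph, via union-find
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    k = len({find(v) for v in vertices})
    r = len(vertices) - k  # rank r(E)
    return (-1) ** r * lam ** k * tutte_polynomial(vertices, edges, 1 - lam, 0)

# Triangle: chi(lam) = lam*(lam-1)*(lam-2), so 6 proper 3-colourings.
print(chromatic([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 3))  # -> 6
```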
Jones polynomial Along the hyperbola xy = 1, the Tutte polynomial of a planar graph specialises to the Jones polynomial of an associated alternating knot. Individual points (2,1) counts the number of forests, i.e., the number of acyclic edge subsets. (1,1) counts the number of spanning forests (edge subsets without cycles and the same number of connected components as G). If the graph is connected, TG(1,1) counts the number of spanning trees. (1,2) counts the number of spanning subgraphs (edge subsets with the same number of connected components as G). (2,0) counts the number of acyclic orientations of G. (0,2) counts the number of strongly connected orientations of G. (2,2) is the number 2^(|E|), where |E| is the number of edges of graph G. (0,−2) If G is a 4-regular graph, then the evaluation at (0,−2) counts the number of Eulerian orientations of G, up to a sign and a factor depending on the number of connected components of G. (3,3) If G is the m × n grid graph, then 2 TG(3,3) counts the number of ways to tile a rectangle of width 4m and height 4n with T-tetrominoes. If G is a planar graph, then 2 TG(3,3) equals the sum over weighted Eulerian orientations in a medial graph of G, where the weight of an orientation is 2 to the number of saddle vertices of the orientation (that is, the number of vertices with incident edges cyclically ordered "in, out, in, out"). Potts and Ising models Define the hyperbola H2 in the xy-plane by (x − 1)(y − 1) = 2. Along it, the Tutte polynomial specialises to the partition function of the Ising model studied in statistical physics; specifically, along the hyperbola the two are related by an explicit change of variables, valid for all complex values of the coupling constant. More generally, if for any positive integer q we define the hyperbola Hq by (x − 1)(y − 1) = q, then the Tutte polynomial specialises to the partition function of the q-state Potts model. Various physical quantities analysed in the framework of the Potts model translate to specific parts of Hq. Flow polynomial At x = 0, the Tutte polynomial specialises to the flow polynomial studied in combinatorics. For a connected and undirected graph G and integer k, a nowhere-zero k-flow is an assignment of "flow" values to the edges of an arbitrary orientation of G such that the total flow entering and leaving each vertex is congruent modulo k. The flow polynomial CG(k) denotes the number of nowhere-zero k-flows. This value is intimately connected with the chromatic polynomial; in fact, if G is a planar graph, the chromatic polynomial of G is equivalent to the flow polynomial of its dual graph G∗ in the following sense. Theorem (Tutte). For a connected planar graph G, χG(λ) = λ · CG∗(λ). The connection to the Tutte polynomial is given by: CG(k) = (−1)^(|E| − r(E)) TG(0, 1 − k). Reliability polynomial At x = 1, the Tutte polynomial specialises to the all-terminal reliability polynomial studied in network theory. For a connected graph G remove every edge with probability p; this models a network subject to random edge failures. Then the reliability polynomial is a function RG(p), a polynomial in p, that gives the probability that every pair of vertices in G remains connected after the edges fail. The connection to the Tutte polynomial is given by RG(p) = (1 − p)^(r(E)) p^(|E| − r(E)) TG(1, 1/p). Dichromatic polynomial Tutte also defined a closer 2-variable generalization of the chromatic polynomial, the dichromatic polynomial of a graph. This is QG(u, v) = ∑_(A ⊆ E) u^(k(A)) v^(|A| + k(A) − |V|), where k(A) is the number of connected components of the spanning subgraph (V, A). This is related to the corank-nullity polynomial by QG(u, v) = u^(k(E)) RG(u, v). The dichromatic polynomial does not generalize to matroids because k(A) is not a matroid property: different graphs with the same matroid can have different numbers of connected components. Related polynomials Martin polynomial The Martin polynomial of an oriented 4-regular graph was defined by Pierre Martin in 1977. 
He showed that if G is a plane graph and Gm is its directed medial graph, then the diagonal evaluation TG(x, x) of the Tutte polynomial of G coincides with the Martin polynomial of Gm. Algorithms Deletion–contraction The deletion–contraction recurrence for the Tutte polynomial, TG = TG−e + TG/e, immediately yields a recursive algorithm for computing it for a given graph: as long as you can find an edge e that is not a loop or bridge, recursively compute the Tutte polynomial for when that edge is deleted, and when that edge is contracted. Then add the two sub-results together to get the overall Tutte polynomial for the graph. The base case is a monomial x^m y^n, where m is the number of bridges and n is the number of loops. Within a polynomial factor, the running time t of this algorithm can be expressed in terms of the number of vertices n and the number of edges m of the graph, t(n + m) = t(n + m − 1) + t(n + m − 2), a recurrence relation that scales as the Fibonacci numbers, with solution t(n + m) = O(((1 + √5)/2)^(n + m)). The analysis can be improved to within a polynomial factor of the number of spanning trees of the input graph. For sparse graphs with m = O(n) this running time is e^(O(n)). For regular graphs of degree k, the number of spanning trees can be bounded by O(νk^n) for a constant νk depending only on k, so the deletion–contraction algorithm runs within a polynomial factor of this bound. In practice, graph isomorphism testing is used to avoid some recursive calls. This approach works well for graphs that are quite sparse and exhibit many symmetries; the performance of the algorithm depends on the heuristic used to pick the edge e. 
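The recursion just described translates directly into code. Below is an illustrative Python sketch (no memoization or isomorphism caching, so it exhibits exactly the exponential behaviour discussed above); a multigraph is represented as a list of vertex pairs, with loops as pairs (u, u):

```python
def tutte_dc(edges, x, y):
    """Evaluate the Tutte polynomial at (x, y) by deletion-contraction."""
    for i, (u, v) in enumerate(edges):
        if u == v:
            continue  # loops are handled in the base case
        rest = edges[:i] + edges[i + 1:]
        if connected_within(rest, u, v):  # e is not a bridge
            contracted = [(u if a == v else a, u if b == v else b)
                          for a, b in rest]  # merge v into u
            return tutte_dc(rest, x, y) + tutte_dc(contracted, x, y)
    # base case: only bridges and loops remain
    bridges = sum(1 for u, v in edges if u != v)
    loops = len(edges) - bridges
    return x ** bridges * y ** loops

def connected_within(edges, u, v):
    """True if u and v lie in the same component of the given edge list."""
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for a, b in edges:
            nxt = b if a == w else a if b == w else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

# Triangle again: x**2 + x + y at (2, 2) gives 4 + 2 + 2 = 8 = 2**|E|.
print(tutte_dc([(1, 2), (2, 3), (1, 3)], 2, 2))  # -> 8
```

Note how contraction can create loops (parallel edges between the merged endpoints become loops), which is why the base case must count both bridges and loops.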
Gaussian elimination In some restricted instances, the Tutte polynomial can be computed in polynomial time, ultimately because Gaussian elimination efficiently computes the matrix operations determinant and Pfaffian. These algorithms are themselves important results from algebraic graph theory and statistical mechanics. TG(1, 1) equals the number of spanning trees of a connected graph. This is computable in polynomial time as the determinant of a maximal principal submatrix of the Laplacian matrix of G, an early result in algebraic graph theory known as Kirchhoff’s Matrix–Tree theorem. Likewise, the evaluation at (−1, −1), which is determined up to sign by the dimension of the bicycle space, can be computed in polynomial time by Gaussian elimination. For planar graphs, the partition function of the Ising model, i.e., the Tutte polynomial at the hyperbola H2, can be expressed as a Pfaffian and computed efficiently via the FKT algorithm. This idea was developed by Fisher, Kasteleyn, and Temperley to compute the number of dimer covers of a planar lattice model. Markov chain Monte Carlo Using a Markov chain Monte Carlo method, the Tutte polynomial can be arbitrarily well approximated along the positive branch of H2, equivalently, the partition function of the ferromagnetic Ising model. This exploits the close connection between the Ising model and the problem of counting matchings in a graph. The idea behind this celebrated result of Jerrum and Sinclair is to set up a Markov chain whose states are the matchings of the input graph. The transitions are defined by choosing edges at random and modifying the matching accordingly. The resulting Markov chain is rapidly mixing and leads to “sufficiently random” matchings, which can be used to recover the partition function using random sampling. The resulting algorithm is a fully polynomial-time randomized approximation scheme (fpras). Computational complexity Several computational problems are associated with the Tutte polynomial. The most straightforward one is Input: A graph G. Output: The coefficients of TG. In particular, the output allows evaluating TG(−2, 0), which is equivalent to counting the number of 3-colourings of G. This latter question is #P-complete, even when restricted to the family of planar graphs, so the problem of computing the coefficients of the Tutte polynomial for a given graph is #P-hard even for planar graphs. Much more attention has been given to the family of problems called Tutte(x, y), defined for every complex pair (x, y): Input: A graph G. Output: The value of TG(x, y). The hardness of these problems varies with the coordinates (x, y). Exact computation If both x and y are non-negative integers, the problem of computing TG(x, y) belongs to #P. For general integer pairs, the Tutte polynomial contains negative terms, which places the problem in the complexity class GapP, the closure of #P under subtraction. To accommodate rational coordinates (x, y), one can define a rational analogue of #P. The computational complexity of exactly computing TG(x, y) falls into one of two classes for any (x, y). The problem is #P-hard unless (x, y) lies on the hyperbola H1 given by (x − 1)(y − 1) = 1 or is one of the points (1, 1), (−1, −1), (0, −1), (−1, 0), (i, −i), (−i, i), (j, j²), (j², j), where j = e^(2πi/3), in which cases it is computable in polynomial time. If the problem is restricted to the class of planar graphs, the points on the hyperbola H2 become polynomial-time computable as well. All other points remain #P-hard, even for bipartite planar graphs. In his paper on the dichotomy for planar graphs, Vertigan claims (in his conclusion) that the same result holds when further restricted to graphs with vertex degree at most three, save for the point (0, −2), which counts nowhere-zero Z3-flows and is computable in polynomial time. These results contain several notable special cases. For example, the problem of computing the partition function of the Ising model is #P-hard in general, even though celebrated algorithms of Onsager and Fisher solve it for planar lattices. Also, the Jones polynomial is #P-hard to compute. Finally, computing the number of four-colorings of a planar graph is #P-complete, even though the decision problem is trivial by the four color theorem. In contrast, it is easy to see that counting the number of three-colorings for planar graphs is #P-complete because the decision problem is known to be NP-complete via a parsimonious reduction. Approximation The question which points (x, y) admit a good approximation algorithm has been very well studied. Apart from the points that can be computed exactly in polynomial time, the only approximation algorithm known for Tutte(x, y) is Jerrum and Sinclair’s FPRAS, which works for points on the “Ising” hyperbola H2 for y > 0. If the input graphs are restricted to dense instances, with minimum degree Ω(n), there is an FPRAS if x ≥ 1, y ≥ 1. Even though the situation is not as well understood as for exact computation, large areas of the plane are known to be hard to approximate. See also Bollobás–Riordan polynomial A Tutte–Grothendieck invariant is any evaluation of the Tutte polynomial Notes References External links PlanetMath Chromatic polynomial Steven R. Pagano: Matroids and Signed Graphs Sandra Kingan: Matroid theory. Many links. Code for computing Tutte, Chromatic and Flow Polynomials by Gary Haggard, David J. Pearce and Gordon Royle Computational problems Duality theories Matroid theory Polynomials Graph invariants
Tutte polynomial
[ "Mathematics" ]
3,534
[ "Mathematical structures", "Polynomials", "Matroid theory", "Graph theory", "Computational problems", "Combinatorics", "Graph invariants", "Mathematical relations", "Duality theories", "Geometry", "Category theory", "Mathematical problems", "Algebra" ]
3,892,380
https://en.wikipedia.org/wiki/Dakin%20oxidation
The Dakin oxidation (or Dakin reaction) is an organic redox reaction in which an ortho- or para-hydroxylated phenyl aldehyde (2-hydroxybenzaldehyde or 4-hydroxybenzaldehyde) or ketone reacts with hydrogen peroxide (H2O2) in base to form a benzenediol and a carboxylate. Overall, the carbonyl group is oxidised, whereas the H2O2 is reduced. The Dakin oxidation, which is closely related to the Baeyer–Villiger oxidation, is not to be confused with the Dakin–West reaction, though both are named after Henry Drysdale Dakin. Reaction mechanism The Dakin oxidation starts with (1) nucleophilic addition of a hydroperoxide ion to the carbonyl carbon, forming a (2) tetrahedral intermediate. The intermediate collapses, causing [1,2]-aryl migration, hydroxide elimination, and formation of a (3) phenyl ester. The phenyl ester is subsequently hydrolyzed: nucleophilic addition of hydroxide ion from solution to the ester carbonyl carbon forms a (4) second tetrahedral intermediate, which collapses, eliminating a (5) phenoxide ion and forming a carboxylic acid. Finally, the phenoxide extracts the acidic hydrogen from the carboxylic acid, yielding the (6) collected products. Factors affecting reaction kinetics The Dakin oxidation has two rate-limiting steps: nucleophilic addition of hydroperoxide to the carbonyl carbon and [1,2]-aryl migration. Therefore, the overall rate of oxidation is dependent on the nucleophilicity of hydroperoxide, the electrophilicity of the carbonyl carbon, and the speed of [1,2]-aryl migration. The alkyl substituents on the carbonyl carbon, the relative positions of the hydroxyl and carbonyl groups on the aryl ring, the presence of other functional groups on the ring, and the reaction mixture pH are four factors that affect these rate-limiting steps. Alkyl substituents In general, phenyl aldehydes are more reactive than phenyl ketones because the ketone carbonyl carbon is less electrophilic than the aldehyde carbonyl carbon. The difference can be mitigated by increasing the temperature of the reaction mixture. Relative positions of hydroxyl and carbonyl groups o-Hydroxy phenyl aldehydes and ketones oxidize faster than p-hydroxy phenyl aldehydes and ketones in weakly basic conditions. In o-hydroxy compounds, when the hydroxyl group is protonated, an intramolecular hydrogen bond can form between the hydroxyl hydrogen and the carbonyl oxygen, stabilizing a resonance structure with positive charge on the carbonyl carbon, thus increasing the carbonyl carbon's electrophilicity (7). Lacking this stabilization, the carbonyl carbon of p-hydroxy compounds is less electrophilic. Therefore, o-hydroxy compounds are oxidized faster than p-hydroxy compounds when the hydroxyl group is protonated. m-Hydroxy compounds do not oxidize to m-benzenediols and carboxylates. Rather, they form phenyl carboxylic acids. Variations in the aryl rings' migratory aptitudes can explain this. Hydroxyl groups ortho or para to the carbonyl group concentrate electron density at the aryl carbon bonded to the carbonyl carbon (10c, 11d). Phenyl groups have low migratory aptitude, but higher electron density at the migrating carbon increases migratory aptitude, facilitating [1,2]-aryl migration and allowing the reaction to continue. m-Hydroxy compounds do not concentrate electron density at the migrating carbon (12a, 12b, 12c, 12d); their aryl groups' migratory aptitude remains low. The benzylic hydrogen, which has the highest migratory aptitude, migrates instead (8), forming a phenyl carboxylic acid (9). 
Other functional groups on the aryl ring Substitution of phenyl hydrogens with electron-donating groups ortho or para to the carbonyl group increases electron density at the migrating carbon, promotes [1,2]-aryl migration, and accelerates oxidation. Substitution with electron-donating groups meta to the carbonyl group does not change electron density at the migrating carbon; because unsubstituted phenyl group migratory aptitude is low, hydrogen migration dominates. Substitution with electron-withdrawing groups ortho or para to the carbonyl decreases electron density at the migrating carbon (13c), inhibits [1,2]-aryl migration, and favors hydrogen migration. pH The hydroperoxide anion is a more reactive nucleophile than neutral hydrogen peroxide. Consequently, oxidation accelerates as pH increases toward the pKa of hydrogen peroxide and hydroperoxide concentration climbs. At pH higher than 13.5, however, oxidation does not occur, possibly due to deprotonation of the second peroxidic oxygen. Deprotonation of the second peroxidic oxygen would prevent [1,2]-aryl migration because the lone oxide anion is too basic to be eliminated (2). Deprotonation of the hydroxyl group increases electron donation from the hydroxyl oxygen. When the hydroxyl group is ortho or para to the carbonyl group, deprotonation increases the electron density at the migrating carbon, promoting faster [1,2]-aryl migration. Therefore, [1,2]-aryl migration is facilitated by the pH range that favors deprotonated over protonated hydroxyl group. Variants Acid-catalyzed Dakin oxidation The Dakin oxidation can occur in mild acidic conditions as well, with a mechanism analogous to the base-catalyzed mechanism. In methanol, hydrogen peroxide, and catalytic sulfuric acid, the carbonyl oxygen is protonated (14), after which hydrogen peroxide adds as a nucleophile to the carbonyl carbon, forming a tetrahedral intermediate (15). Following an intramolecular proton transfer (16,17), the tetrahedral intermediate collapses, [1,2]-aryl migration occurs, and water is eliminated (18). Nucleophilic addition of methanol to the carbonyl carbon forms another tetrahedral intermediate (19). Following a second intramolecular proton transfer (20,21), the tetrahedral intermediate collapses, eliminating a phenol and forming an ester protonated at the carbonyl oxygen (22). Finally, deprotonation of the carbonyl oxygen yields the collected products and regenerates the acid catalyst (23). Boric acid-catalyzed Dakin oxidation Adding boric acid to the acid-catalyzed reaction mixture increases the yield of phenol product over phenyl carboxylic acid product, even when using phenyl aldehyde or ketone reactants with electron-donating groups meta to the carbonyl group or electron-withdrawing groups ortho or para to the carbonyl group. Boric acid and hydrogen peroxide form a complex in solution that, once added to the carbonyl carbon, favors aryl migration over hydrogen migration, maximizing the yield of phenol and reducing the yield of phenyl carboxylic acid. Methyltrioxorhenium-catalyzed Dakin oxidation Using an ionic liquid solvent with catalytic methyltrioxorhenium (MTO) dramatically accelerates Dakin oxidation. MTO forms a complex with hydrogen peroxide that increases the rate of addition of hydrogen peroxide to the carbonyl carbon. MTO does not, however, change the relative yields of phenol and phenyl carboxylic acid products. Urea-catalyzed Dakin oxidation Mixing urea and hydrogen peroxide yields urea-hydrogen peroxide complex (UHC). 
Adding dry UHC to solventless phenyl aldehyde or ketone also accelerates Dakin oxidation. Like MTO, UHC increases the rate of nucleophilic addition of hydrogen peroxide. But unlike the MTO-catalyzed variant, the urea-catalyzed variant does not produce potentially toxic heavy metal waste; it has also been applied to the synthesis of amine oxides such as pyridine-N-oxide. Synthetic applications The Dakin oxidation is most commonly used to synthesize benzenediols and alkoxyphenols. Catechol, for example, is synthesized from o-hydroxy and o-alkoxy phenyl aldehydes and ketones, and is used as the starting material for synthesis of several compounds, including the catecholamines, catecholamine derivatives, and 4-tert-butylcatechol, a common antioxidant and polymerization inhibitor. Other synthetically useful products of the Dakin oxidation include guaiacol, a precursor of several flavorants; hydroquinone, a common photograph-developing agent; and 2-tert-butyl-4-hydroxyanisole and 3-tert-butyl-4-hydroxyanisole, two antioxidants commonly used to preserve packaged food. In addition, the Dakin oxidation is useful in the synthesis of indolequinones, naturally occurring compounds that exhibit high antibiotic, antifungal, and antitumor activities. See also Baeyer–Villiger oxidation Beckmann rearrangement Reimer–Tiemann reaction References Organic oxidation reactions Name reactions
Dakin oxidation
[ "Chemistry" ]
2,057
[ "Name reactions", "Organic oxidation reactions", "Organic redox reactions", "Organic reactions" ]
3,892,745
https://en.wikipedia.org/wiki/Dissolved%20air%20flotation
Dissolved air flotation (DAF) is a water treatment process that clarifies wastewaters (or other waters) by the removal of suspended matter such as oil or solids. The removal is achieved by dissolving air in the water or wastewater under pressure and then releasing the air at atmospheric pressure in a flotation tank basin. The released air forms tiny bubbles which adhere to the suspended matter, causing the suspended matter to float to the surface of the water where it may then be removed by a skimming device. Dissolved air flotation is very widely used in treating the industrial wastewater effluents from oil refineries, petrochemical and chemical plants, natural gas processing plants, paper mills, general water treatment and similar industrial facilities. A very similar process known as induced gas flotation is also used for wastewater treatment. Froth flotation is commonly used in the processing of mineral ores. In the oil industry, dissolved gas flotation (DGF) units do not use air as the flotation medium due to the explosion risk. Nitrogen gas is used instead to create the bubbles. Process description The feed water to the DAF float tank is often (but not always) dosed with a coagulant (such as ferric chloride or aluminum sulfate) to coagulate the colloidal particles and/or a flocculant to conglomerate the particles into bigger clusters. A portion of the clarified effluent water leaving the DAF tank is pumped into a small pressure vessel (called the air drum) into which compressed air is also introduced. This results in saturating the pressurized effluent water with air. The air-saturated water stream is recycled to the front of the float tank and flows through a pressure reduction valve just as it enters the front of the float tank, which results in the air being released in the form of tiny bubbles. Bubbles form at nucleation sites on the surface of the suspended particles, adhering to the particles. As more bubbles form, the lift from the bubbles eventually overcomes the force of gravity. This causes the suspended matter to float to the surface where it forms a froth layer which is then removed by a skimmer. The froth-free water exits the float tank as the clarified effluent from the DAF unit. Some DAF unit designs utilize parallel plate packing material (e.g. lamellas) to provide more separation surface and therefore to enhance the separation efficiency of the unit. DAF systems can be categorized as circular (more efficient) and rectangular (more residence time). The former type requires just 3 minutes. The rectangular type requires 20 to 30 minutes. One of the bigger advantages of the circular type is its spiral scoop. Drinking water treatment Drinking water supplies that are particularly vulnerable to unicellular algal blooms, and supplies with low turbidity and high colour often employ DAF. After coagulation and flocculation processes, water flows to DAF tanks where air diffusers on the tank bottom create fine bubbles that attach to floc resulting in a floating mass of concentrated floc. The floating floc blanket is removed from the surface and clarified water is withdrawn from the bottom of the DAF tank. 
See also API oil-water separator Flotation process Industrial wastewater treatment Industrial water treatment List of waste-water treatment technologies Microflotation References External links Treatment and Disposal of Ship-Generated Solid and Liquid Wastes (REMPEC Regional Marine Pollution Emergency Response Centre for the Mediterranean Sea, Project MED.B4.4100.97.0415.8, April 2004) Dissolved Air Flotation (DAF) Knowledge Encyclopedia Flotation processes Water treatment Waste treatment technology
Dissolved air flotation
[ "Chemistry", "Engineering", "Environmental_science" ]
765
[ "Water treatment", "Water pollution", "Environmental engineering", "Oil refining", "Flotation processes", "Water technology", "Waste treatment technology" ]
6,882,629
https://en.wikipedia.org/wiki/Goldman%E2%80%93Hodgkin%E2%80%93Katz%20flux%20equation
The Goldman–Hodgkin–Katz flux equation (or GHK flux equation or GHK current density equation) describes the ionic flux across a cell membrane as a function of the transmembrane potential and the concentrations of the ion inside and outside of the cell. Since both the voltage and the concentration gradients influence the movement of ions, this process is a simplified version of electrodiffusion. Electrodiffusion is most accurately defined by the Nernst–Planck equation and the GHK flux equation is a solution to the Nernst–Planck equation with the assumptions listed below. Origin The American David E. Goldman of Columbia University, and the English Nobel laureates Alan Lloyd Hodgkin and Bernard Katz derived this equation. Assumptions Several assumptions are made in deriving the GHK flux equation (Hille 2001, p. 445): The membrane is a homogeneous substance. The electrical field is constant, so that the transmembrane potential varies linearly across the membrane. The ions access the membrane instantaneously from the intra- and extracellular solutions. The permeant ions do not interact. The movement of ions is affected by both concentration and voltage differences. Equation The GHK flux equation for an ion S (Hille 2001, p. 445) is ΦS = PS zS² (Vm F²/(R T)) · ([S]i − [S]o exp(−zS Vm F/(R T))) / (1 − exp(−zS Vm F/(R T))), where ΦS is the current density (flux) outward through the membrane carried by ion S, measured in amperes per square meter (A·m−2); PS is the permeability of the membrane for ion S measured in m·s−1; zS is the valence of ion S; Vm is the transmembrane potential in volts; F is the Faraday constant, equal to 96,485 C·mol−1 or J·V−1·mol−1; R is the gas constant, equal to 8.314 J·K−1·mol−1; T is the absolute temperature, measured in kelvins (= degrees Celsius + 273.15); [S]i is the intracellular concentration of ion S, measured in mol·m−3 or mmol·l−1; [S]o is the extracellular concentration of ion S, measured in mol·m−3. Implicit definition of reversal potential The reversal potential is shown to be contained in the GHK flux equation (Flax 2008). The proof is replicated from the reference (Flax 2008) here. We wish to show that when the flux is zero, the transmembrane potential is not zero (provided [S]i ≠ [S]o). Formally it is written ΦS = 0 ⇒ Vm ≠ 0, which is equivalent to writing its contrapositive Vm = 0 ⇒ ΦS ≠ 0, which states that when the transmembrane potential is zero, the flux is not zero. However, due to the form of the GHK flux equation, when Vm = 0 the expression for ΦS takes the form 0/0. This is a problem, as the value of 0/0 is indeterminate. We turn to l'Hôpital's rule to find the solution for the limit, differentiating the numerator and the denominator with respect to Vm (where f′ represents the differential of f); the result is ΦS|Vm=0 = PS zS F ([S]i − [S]o). It is evident from the previous equation that when Vm = 0, ΦS ≠ 0 if [S]i ≠ [S]o, and thus ΦS = 0 ⇒ Vm ≠ 0, which is the definition of the reversal potential. By setting ΦS = 0 we can also obtain the reversal potential: 0 = PS zS² (Vm F²/(R T)) · ([S]i − [S]o exp(−zS Vm F/(R T))) / (1 − exp(−zS Vm F/(R T))), which reduces to [S]i = [S]o exp(−zS Vm F/(R T)) and produces the Nernst equation: Vm = (R T/(zS F)) ln([S]o/[S]i). Rectification Since one of the assumptions of the GHK flux equation is that the ions move independently of each other, the total flow of ions across the membrane is simply equal to the sum of two oppositely directed fluxes. Each flux approaches an asymptotic value as the membrane potential diverges from zero. These asymptotes are ΦS → PS zS² (F² Vm/(R T)) [S]i for zS Vm large and positive, and ΦS → PS zS² (F² Vm/(R T)) [S]o for zS Vm large and negative, where subscripts 'i' and 'o' denote the intra- and extracellular compartments, respectively. Intuitively one may understand these limits as follows: if an ion is only found outside a cell, then the flux is Ohmic (proportional to voltage) when the voltage causes the ion to flow into the cell, but no voltage could cause the ion to flow out of the cell, since there are no ions inside the cell in the first place. 
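As an illustration of the equation and of the reversal-potential argument above, here is a minimal Python sketch (the function name and example numbers are illustrative; a potassium-like gradient is assumed, and the Vm = 0 branch implements the l'Hôpital limit derived above):

```python
import math

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(K*mol)

def ghk_flux(P, z, Vm, S_in, S_out, T=310.0):
    """GHK current density Phi_S (A/m^2, outward positive).
    P: permeability (m/s); z: valence; Vm: membrane potential (V);
    S_in, S_out: concentrations (mol/m^3); T: temperature (K)."""
    u = z * Vm * F / (R * T)          # dimensionless reduced voltage
    if abs(u) < 1e-10:                # l'Hopital limit at Vm = 0
        return P * z * F * (S_in - S_out)
    return P * z * F * u * (S_in - S_out * math.exp(-u)) / (1.0 - math.exp(-u))

# Potassium-like example: [S]i = 140, [S]o = 5 (in mol/m^3, i.e. mM)
E_rev = (R * 310.0 / F) * math.log(5.0 / 140.0)   # Nernst potential, ~ -0.089 V
print(ghk_flux(1e-8, +1, E_rev, 140.0, 5.0))      # ~ 0: no net flux at E_rev
print(ghk_flux(1e-8, +1, E_rev + 0.05, 140.0, 5.0))  # ~ +0.050 A/m^2 outward
print(ghk_flux(1e-8, +1, E_rev - 0.05, 140.0, 5.0))  # ~ -0.021 A/m^2 inward
```

The last two lines apply equal and opposite 50 mV displacements from the reversal potential; the unequal magnitudes of the resulting currents are exactly the rectification discussed in this section.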
Keeping all terms except Vm constant, the equation yields a straight line when plotting ΦS against Vm. It is evident that the ratio between the two asymptotes is merely the ratio between the two concentrations of S, [S]i and [S]o. Thus, if the two concentrations are identical, the slope will be identical (and constant) throughout the voltage range (corresponding to Ohm's law scaled by the surface area). As the ratio between the two concentrations increases, so does the difference between the two slopes, meaning that the current is larger in one direction than the other, given an equal driving force of opposite signs. This is contrary to the result obtained if using Ohm's law scaled by the surface area, and the effect is called rectification. The GHK flux equation is mostly used by electrophysiologists when the ratio between [S]i and [S]o is large and/or when one or both of the concentrations change considerably during an action potential. The most common example is probably intracellular calcium, [Ca2+]i, which during a cardiac action potential cycle can change 100-fold or more, and the ratio between [Ca2+]o and [Ca2+]i can reach 20,000 or more. References Hille, Bertil (2001). Ion channels of excitable membranes, 3rd ed., Sinauer Associates, Sunderland, Massachusetts. Flax, Matt R. and Holmes, W. Harvey (2008). Goldman-Hodgkin-Katz Cochlear Hair Cell Models – a Foundation for Nonlinear Cochlear Mechanics, Conference proceedings: Interspeech 2008. See also Goldman equation Nernst equation Reversal potential Electrochemical equations Biophysics Bioelectrochemistry
Goldman–Hodgkin–Katz flux equation
[ "Physics", "Chemistry", "Mathematics", "Biology" ]
1,175
[ "Applied and interdisciplinary physics", "Bioelectrochemistry", "Mathematical objects", "Equations", "Electrochemistry", "Biophysics", "Electrochemical equations" ]
6,885,256
https://en.wikipedia.org/wiki/EF-Tu
EF-Tu (elongation factor thermo unstable) is a prokaryotic elongation factor responsible for catalyzing the binding of an aminoacyl-tRNA (aa-tRNA) to the ribosome. It is a G-protein, and facilitates the selection and binding of an aa-tRNA to the A-site of the ribosome. As a reflection of its crucial role in translation, EF-Tu is one of the most abundant and highly conserved proteins in prokaryotes. It is found in eukaryotic mitochondria as TUFM. As a family of elongation factors, EF-Tu also includes its eukaryotic and archaeal homolog, the alpha subunit of eEF-1 (EF-1A). Background Elongation factors are part of the mechanism that synthesizes new proteins through translation in the ribosome. Transfer RNAs (tRNAs) carry the individual amino acids that become integrated into a protein sequence, and have an anticodon for the specific amino acid that they are charged with. Messenger RNA (mRNA) carries the genetic information that encodes the primary structure of a protein, and contains codons that code for each amino acid. The ribosome creates the protein chain by following the mRNA code and integrating the amino acid of an aminoacyl-tRNA (also known as a charged tRNA) to the growing polypeptide chain. There are three sites on the ribosome for tRNA binding. These are the aminoacyl/acceptor site (abbreviated A), the peptidyl site (abbreviated P), and the exit site (abbreviated E). The P-site holds the tRNA connected to the polypeptide chain being synthesized, and the A-site is the binding site for a charged tRNA with an anticodon complementary to the mRNA codon associated with the site. After binding of a charged tRNA to the A-site, a peptide bond is formed between the growing polypeptide chain on the P-site tRNA and the amino acid of the A-site tRNA, and the entire polypeptide is transferred from the P-site tRNA to the A-site tRNA. Then, in a process catalyzed by the prokaryotic elongation factor EF-G (historically known as translocase), the coordinated translocation of the tRNAs and mRNA occurs, with the P-site tRNA moving to the E-site, where it dissociates from the ribosome, and the A-site tRNA moves to take its place in the P-site. Biological functions Protein synthesis EF-Tu participates in the polypeptide elongation process of protein synthesis. In prokaryotes, the primary function of EF-Tu is to transport the correct aa-tRNA to the A-site of the ribosome. As a G-protein, it uses GTP to facilitate its function. Outside of the ribosome, EF-Tu complexed with GTP (EF-Tu • GTP) complexes with aa-tRNA to form a stable EF-Tu • GTP • aa-tRNA ternary complex. EF-Tu • GTP binds all correctly-charged aa-tRNAs with approximately identical affinity, except those charged with initiation residues and selenocysteine. This can be accomplished because although different amino acid residues have varying side-chain properties, the tRNAs associated with those residues have varying structures to compensate for differences in side-chain binding affinities. The binding of an aa-tRNA to EF-Tu • GTP allows for the ternary complex to be translocated to the A-site of an active ribosome, in which the anticodon of the tRNA binds to the codon of the mRNA. If the correct anticodon binds to the mRNA codon, the ribosome changes configuration and alters the geometry of the GTPase domain of EF-Tu, resulting in the hydrolysis of the GTP associated with the EF-Tu to GDP and Pi. As such, the ribosome functions as a GTPase-activating protein (GAP) for EF-Tu. 
Upon GTP hydrolysis, the conformation of EF-Tu changes drastically and dissociates from the aa-tRNA and ribosome complex. The aa-tRNA then fully enters the A-site, where its amino acid is brought near the P-site's polypeptide and the ribosome catalyzes the covalent transfer of the polypeptide onto the amino acid. In the cytoplasm, the deactivated EF-Tu • GDP is acted on by the prokaryotic elongation factor EF-Ts, which causes EF-Tu to release its bound GDP. Upon dissociation of EF-Ts, EF-Tu is able to complex with a GTP due to the 5– to 10–fold higher concentration of GTP than GDP in the cytoplasm, resulting in reactivated EF-Tu • GTP, which can then associate with another aa-tRNA. Maintaining translational accuracy EF-Tu contributes to translational accuracy in three ways. In translation, a fundamental problem is that near-cognate anticodons have similar binding affinity to a codon as cognate anticodons, such that anticodon-codon binding in the ribosome alone is not sufficient to maintain high translational fidelity. This is addressed by the ribosome not activating the GTPase activity of EF-Tu if the tRNA in the ribosome's A-site does not match the mRNA codon, thus preferentially increasing the likelihood for the incorrect tRNA to leave the ribosome. Additionally, regardless of tRNA matching, EF-Tu also induces a delay after freeing itself from the aa-tRNA, before the aa-tRNA fully enters the A-site (a process called accommodation). This delay period is a second opportunity for incorrectly charged aa-tRNAs to move out of the A-site before the incorrect amino acid is irreversibly added to the polypeptide chain. A third mechanism is the less well understood function of EF-Tu to crudely check aa-tRNA associations and reject complexes where the amino acid is not bound to the correct tRNA coding for it. Other functions EF-Tu has been found in large quantities in the cytoskeletons of bacteria, co-localizing underneath the cell membrane with MreB, a cytoskeletal element that maintains cell shape. Defects in EF-Tu have been shown to result in defects in bacterial morphology. Additionally, EF-Tu has displayed some chaperone-like characteristics, with some experimental evidence suggesting that it promotes the refolding of a number of denatured proteins in vitro. EF-Tu has been found to moonlight on the cell surface of the pathogenic bacteria Staphylococcus aureus, Mycoplasma pneumoniae, and Mycoplasma hyopneumoniae, where EF-Tu is processed and can bind to a range of host molecules. In Bacillus cereus, EF-Tu also moonlights on the surface, where it acts as an environmental sensor and binds to substance P. Structure EF-Tu is a monomeric protein with molecular weight around 43 kDa in Escherichia coli. The protein consists of three structural domains: a GTP-binding domain and two oligonucleotide-binding domains, often referred to as domain 2 and domain 3. The N-terminal domain I of EF-Tu is the GTP-binding domain. It consists of a six beta-strand core flanked by six alpha-helices. Domains II and III of EF-Tu, the oligonucleotide-binding domains, both adopt beta-barrel structures. The GTP-binding domain I undergoes a dramatic conformational change upon GTP hydrolysis to GDP, allowing EF-Tu to dissociate from aa-tRNA and leave the ribosome. Reactivation of EF-Tu is achieved by GTP binding in the cytoplasm, which leads to a significant conformational change that reactivates the tRNA-binding site of EF-Tu. 
In particular, GTP binding to EF-Tu results in a ~90° rotation of domain I relative to domains II and III, exposing the residues of the tRNA-binding active site. Domain 2 adopts a beta-barrel structure, and is involved in binding to charged tRNA. This domain is structurally related to the C-terminal domain of EF2, to which it displays weak sequence similarity. This domain is also found in other proteins such as translation initiation factor IF-2 and tetracycline-resistance proteins. Domain 3 represents the C-terminal domain, which adopts a beta-barrel structure, and is involved in binding to both charged tRNA and to EF1B (or EF-Ts). Evolution The GTP-binding domain is conserved in both EF-1alpha/EF-Tu and also in EF-2/EF-G and thus seems typical for GTP-dependent proteins which bind non-initiator tRNAs to the ribosome. The GTP-binding translation factor family also includes the eukaryotic peptide chain release factor GTP-binding subunits and prokaryotic peptide chain release factor 3 (RF-3); the prokaryotic GTP-binding protein lepA and its homologue in yeast (GUF1) and Caenorhabditis elegans (ZK1236.1); yeast HBS1; rat Eef1a1 (formerly "statin S1"); and the prokaryotic selenocysteine-specific elongation factor selB. Disease relevance Along with the ribosome, EF-Tu is one of the most important targets for antibiotic-mediated inhibition of translation. Antibiotics targeting EF-Tu can be categorized into one of two groups, depending on the mechanism of action, and one of four structural families. The first group includes the antibiotics pulvomycin and GE2270A, and inhibits the formation of the ternary complex. The second group includes the antibiotics kirromycin and enacyloxin, and prevents the release of EF-Tu from the ribosome after GTP hydrolysis. See also Prokaryotic elongation factors EF-Ts (elongation factor thermo stable) EF-G (elongation factor G) EF-P (elongation factor P) eEF-1 EFR (EF-Tu receptor) References External links Protein biosynthesis Protein domains
EF-Tu
[ "Chemistry", "Biology" ]
2,302
[ "Protein biosynthesis", "Gene expression", "Protein classification", "Biosynthesis", "Protein domains" ]
6,885,778
https://en.wikipedia.org/wiki/Separable%20partial%20differential%20equation
A separable partial differential equation can be broken into a set of equations of lower dimensionality (fewer independent variables) by a method of separation of variables. It generally relies upon the problem having some special form or symmetry. In this way, the partial differential equation (PDE) can be solved by solving a set of simpler PDEs, or even ordinary differential equations (ODEs) if the problem can be broken down into one-dimensional equations. The most common form of separation of variables is simple separation of variables. A solution is obtained by assuming a solution of the form given by a product of functions of each individual coordinate. There is a special form of separation of variables called R-separation of variables which is accomplished by writing the solution as a particular fixed function of the coordinates multiplied by a product of functions of each individual coordinate. Laplace's equation on R^n is an example of a partial differential equation that admits solutions through R-separation of variables; in the three-dimensional case this uses 6-sphere coordinates. (This should not be confused with the case of a separable ODE, which refers to a somewhat different class of problems that can be broken into a pair of integrals; see separation of variables.) Example For example, consider the time-independent Schrödinger equation [−∇² + V(x)] ψ(x) = E ψ(x) for the function ψ(x) (in dimensionless units, for simplicity). (Equivalently, consider the inhomogeneous Helmholtz equation.) If the function V(x) in three dimensions is of the form V(x1, x2, x3) = V1(x1) + V2(x2) + V3(x3), then it turns out that the problem can be separated into three one-dimensional ODEs for functions ψ1(x1), ψ2(x2), and ψ3(x3), and the final solution can be written as ψ(x) = ψ1(x1) · ψ2(x2) · ψ3(x3). (More generally, the separable cases of the Schrödinger equation were enumerated by Eisenhart in 1948.) References Differential equations
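The separation step in this example can be made explicit. The following LaTeX sketch spells out the derivation, under the stated assumption that the potential is additive across the coordinates:

```latex
% Time-independent Schrodinger equation in dimensionless units,
% with the product ansatz psi = psi_1 psi_2 psi_3:
\[
  \left[-\nabla^2 + V(\mathbf{x})\right]\psi(\mathbf{x}) = E\,\psi(\mathbf{x}),
  \qquad
  \psi(\mathbf{x}) = \psi_1(x_1)\,\psi_2(x_2)\,\psi_3(x_3).
\]
% Substituting the ansatz and dividing by psi splits the equation into
% three terms, each depending on a single coordinate:
\[
  \sum_{k=1}^{3}\left[-\frac{\psi_k''(x_k)}{\psi_k(x_k)} + V_k(x_k)\right] = E.
\]
% Each bracket must therefore equal a separation constant E_k, giving
% three one-dimensional ODEs with E_1 + E_2 + E_3 = E:
\[
  -\psi_k''(x_k) + V_k(x_k)\,\psi_k(x_k) = E_k\,\psi_k(x_k),
  \qquad k = 1, 2, 3.
\]
```

Here the ψk and the separation constants Ek are illustrative names for the separated factors; each such product solves the original PDE.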
Separable partial differential equation
[ "Mathematics" ]
360
[ "Mathematical objects", "Differential equations", "Equations" ]
37,188,422
https://en.wikipedia.org/wiki/Brewster%20angle%20microscope
A Brewster angle microscope (BAM) is a microscope for studying thin films on liquid surfaces, most typically Langmuir films. In a Brewster angle microscope, both the microscope and a polarized light source are aimed towards a liquid surface at that liquid's Brewster angle, such that the microscope captures an image of any light from the light source that is reflected by the liquid surface. Because there is no p-polarized reflection from the pure liquid when both are angled towards it at the Brewster angle, light is only reflected when some other phenomenon such as a surface film affects the liquid surface. The technique was first introduced in 1991. Applications Brewster angle microscopes enable the visualization of Langmuir monolayers or adsorbate films at the air-water interface, for example as a function of packing density. They can be used either to study the properties of the Langmuir layer, or to indicate a suitable deposition pressure for Langmuir-Blodgett (LB) deposition. They can be used for example in the LB deposition of nanoparticles. Applications include: Monolayer/film homogeneity. When combined with a Langmuir-Blodgett Trough, observation can be performed during compression/expansion at known surface pressures. Optimizing the deposition parameters. Selecting optimal deposition pressure and other deposition parameters for LB coating. Monolayer/film behavior. Observing phase changes, phase separation, domain size, shape and packing. Monitoring of surface reactions. Photochemical reactions, polymerizations and enzyme kinetics can be followed in real time. Monitoring and detection of surface active materials. For example, protein adsorption and nanoparticle flotation. Lee et al. used a Brewster angle microscope to study optimal deposition parameters for Fe3O4 nanoparticles. Daear et al. have written a recent review on the usage of BAMs in biological applications. See also Brewster's angle Langmuir–Blodgett film Nanoparticle deposition References External links Brewster Angle Microscopy Microscopes Polarization (waves) Materials science 1991 introductions
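For concreteness, the angle at which such a microscope is aimed follows from Brewster's relation, theta_B = arctan(n2/n1). A minimal Python sketch, with illustrative refractive indices not taken from the text:

```python
import math

def brewster_angle(n1: float, n2: float) -> float:
    """Return Brewster's angle in degrees for light travelling in a medium
    of refractive index n1 and reflecting off a medium of index n2."""
    return math.degrees(math.atan2(n2, n1))

# Air (n ~ 1.00) over water (n ~ 1.33): the aiming angle for a BAM.
print(f"{brewster_angle(1.00, 1.33):.1f} deg")  # ~53.1 deg
```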
Brewster angle microscope
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
424
[ "Applied and interdisciplinary physics", "Materials science", "Astrophysics", "Measuring instruments", "Microscopes", "nan", "Microscopy", "Polarization (waves)" ]
37,191,231
https://en.wikipedia.org/wiki/Capped%20octahedral%20molecular%20geometry
In chemistry, the capped octahedral molecular geometry describes the shape of compounds where seven atoms or groups of atoms or ligands are arranged around a central atom defining the vertices of a gyroelongated triangular pyramid. This shape has C3v symmetry and is one of the three common shapes for heptacoordinate transition metal complexes, along with the pentagonal bipyramid and the capped trigonal prism. Examples of the capped octahedral molecular geometry are the heptafluoromolybdate () and the heptafluorotungstate () ions. The "distorted octahedral geometry" exhibited by some AX6E1 molecules such as xenon hexafluoride (XeF6) is a variant of this geometry, with the lone pair occupying the "cap" position. References Stereochemistry Molecular geometry
Capped octahedral molecular geometry
[ "Physics", "Chemistry" ]
181
[ "Molecules", "Molecular geometry", "Stereochemistry", "Space", "Stereochemistry stubs", "nan", "Spacetime", "Matter" ]
37,192,612
https://en.wikipedia.org/wiki/List%20of%20lichens%20of%20Namibia
Lichens are a composite organism consisting of fungi and algae living in symbiotic relationship. They are well adapted to survive in harsh conditions. One of the many places they can be found is the Namib, the desert that gave Namibia its name. Fog in the coastal parts of the desert provides the necessary moisture for the organisms' survival. In the Namib they grow on shrubs, rocks and pebbles of the gravel plains. These small organisms can densely cover large areas, forming lichen fields. The desert hosts 120 lichen species. Most of them are rare and a significant number of them occur only there. "Many are endemic to this region and others show affinities between the Namib lichen biota and other fog deserts of the world, such as the Atacama in South America and Baja California in Mexico and California". Besides the Namib Desert, lichens also occur in suitable places elsewhere in Namibia, with at least 232 species recorded from the country as a whole. Ecological functions Soil stabilizers Plant succession Bioaccumulators that contribute to nutrient cycling in the forestry of Waterberg biosphere Food and habitat for other organisms Threats Lichen vegetation is very vulnerable to pollution and mechanical damage. Lichen fields in the Namib are at risk from off-road driving and mining. However, the Wlotzkasbaken lichen field north of Swakopmund was considered for protection after an Environmental Impact Assessment was done before the development of a desalination plant serving Trekopje uranium mine. Fourteen kilometers of fencing was erected around the northeastern side of the field to protect them from damage caused by vehicles taking shortcuts through the desert. Signs were set up by the Ministry of Environment and Tourism to announce the site location and vulnerability, including several colorful information boards on lichens. The mine also put up an information stand. References Theron, G.L. (2006). Plant Studies 1 study guide Lichenology Lists of plants by location
List of lichens of Namibia
[ "Biology" ]
403
[ "Lichenology" ]
37,194,611
https://en.wikipedia.org/wiki/Flory%E2%80%93Schulz%20distribution
The Flory–Schulz distribution is a discrete probability distribution named after Paul Flory and Günter Victor Schulz that describes the relative ratios of polymers of different length that occur in an ideal step-growth polymerization process. The probability mass function (pmf) for the mass fraction of chains of length k is a^2 k (1 − a)^(k−1). In this equation, k is the number of monomers in the chain, and 0<a<1 is an empirically determined constant related to the fraction of unreacted monomer remaining. The form of this distribution implies that shorter polymers are favored over longer ones — the chain length is geometrically distributed. Apart from polymerization processes, this distribution is also relevant to the conceptually related Fischer–Tropsch process, where it is known as the Anderson–Schulz–Flory (ASF) distribution and describes how lighter hydrocarbons are converted to the heavier hydrocarbons desirable as liquid fuel. The pmf of this distribution is a solution of the following equation: References Polymers Continuous distributions
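A small numerical sketch of the pmf given above; the value of a is an arbitrary illustrative choice:

```python
def flory_schulz_pmf(k: int, a: float) -> float:
    """Mass fraction of chains with k monomers: a^2 * k * (1-a)^(k-1)."""
    return a * a * k * (1.0 - a) ** (k - 1)

a = 0.1  # illustrative value of the empirical constant
masses = [flory_schulz_pmf(k, a) for k in range(1, 2000)]
print(sum(masses))  # -> ~1.0, confirming the pmf is normalised
# Modal chain length (the most abundant chain size by mass):
print(max(range(1, 2000), key=lambda k: flory_schulz_pmf(k, a)))
```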
Flory–Schulz distribution
[ "Chemistry", "Materials_science" ]
212
[ "Polymers", "Polymer chemistry" ]
37,196,385
https://en.wikipedia.org/wiki/Viscous%20stress%20tensor
The viscous stress tensor is a tensor used in continuum mechanics to model the part of the stress at a point within some material that can be attributed to the strain rate, the rate at which it is deforming around that point. The viscous stress tensor is formally similar to the elastic stress tensor (Cauchy tensor) that describes internal forces in an elastic material due to its deformation. Both tensors map the normal vector of a surface element to the density and direction of the stress acting on that surface element. However, elastic stress is due to the amount of deformation (strain), while viscous stress is due to the rate of change of deformation over time (strain rate). In viscoelastic materials, whose behavior is intermediate between those of liquids and solids, the total stress tensor comprises both viscous and elastic ("static") components. For a completely fluid material, the elastic term reduces to the hydrostatic pressure. In an arbitrary coordinate system, the viscous stress and the strain rate at a specific point and time can be represented by 3 × 3 matrices of real numbers. In many situations there is an approximately linear relation between those matrices; that is, a fourth-order viscosity tensor such that . The tensor has four indices and consists of 3 × 3 × 3 × 3 real numbers (of which only 21 are independent). In a Newtonian fluid, by definition, the relation between and is perfectly linear, and the viscosity tensor is independent of the state of motion or stress in the fluid. If the fluid is isotropic as well as Newtonian, the viscosity tensor will have only three independent real parameters: a bulk viscosity coefficient, that defines the resistance of the medium to gradual uniform compression; a dynamic viscosity coefficient that expresses its resistance to gradual shearing, and a rotational viscosity coefficient which results from a coupling between the fluid flow and the rotation of the individual particles. In the absence of such a coupling, the viscous stress tensor will have only two independent parameters and will be symmetric. In non-Newtonian fluids, on the other hand, the relation between and can be extremely non-linear, and may even depend on other features of the flow besides . Definition Viscous versus elastic stress Internal mechanical stresses in a continuous medium are generally related to deformation of the material from some "relaxed" (unstressed) state. These stresses generally include an elastic ("static") stress component, that is related to the current amount of deformation and acts to restore the material to its rest state; and a viscous stress component, that depends on the rate at which the deformation is changing with time and opposes that change. The viscous stress tensor Like the total and elastic stresses, the viscous stress around a certain point in the material, at any time, can be modeled by a stress tensor, a linear relationship between the normal direction vector of an ideal plane through the point and the local stress density on that plane at that point. In any chosen coordinate system with axes numbered 1, 2, 3, this viscous stress tensor can be represented as a 3 × 3 matrix of real numbers: Note that these numbers usually change with the point and time . Consider an infinitesimal flat surface element centered on the point , represented by a vector whose length is the area of the element and whose direction is perpendicular to it. 
Let be the infinitesimal force due to viscous stress that is applied across that surface element to the material on the side opposite to . The components of along each coordinate axis are then given by In any material, the total stress tensor is the sum of this viscous stress tensor , the elastic stress tensor and the hydrostatic pressure . In a perfectly fluid material, which by definition cannot have static shear stress, the elastic stress tensor is zero: where is the unit tensor, such that is 1 if and 0 if . While the viscous stresses are generated by physical phenomena that depend strongly on the nature of the medium, the viscous stress tensor is only a description of the local momentary forces between adjacent parcels of the material, and not a property of the material. Symmetry Ignoring the torque on an element due to the flow ("extrinsic" torque), the viscous "intrinsic" torque per unit volume on a fluid element is written (as an antisymmetric tensor) as and represents the rate of change of intrinsic angular momentum density with time. If the particles have rotational degrees of freedom, this will imply an intrinsic angular momentum, and if this angular momentum can be changed by collisions, it is possible that this intrinsic angular momentum can change in time, resulting in an intrinsic torque that is not zero, which will imply that the viscous stress tensor will have an antisymmetric component with a corresponding rotational viscosity coefficient. If the fluid particles have negligible angular momentum, or if their angular momentum is not appreciably coupled to the external angular momentum, or if the equilibration time between the external and internal degrees of freedom is practically zero, the torque will be zero and the viscous stress tensor will be symmetric. External forces can result in an asymmetric component to the stress tensor (e.g. ferromagnetic fluids, which can suffer torque by external magnetic fields). Physical causes of viscous stress In a solid material, the elastic component of the stress can be ascribed to the deformation of the bonds between the atoms and molecules of the material, and may include shear stresses. In a fluid, elastic stress can be attributed to the increase or decrease in the mean spacing of the particles, which affects their collision or interaction rate and hence the transfer of momentum across the fluid; it is therefore related to the microscopic thermal random component of the particles' motion, and manifests itself as an isotropic hydrostatic pressure stress. The viscous component of the stress, on the other hand, arises from the macroscopic mean velocity of the particles. It can be attributed to friction or particle diffusion between adjacent parcels of the medium that have different mean velocities. The viscosity equation The strain rate tensor In a smooth flow, the rate at which the local deformation of the medium is changing over time (the strain rate) can be approximated by a strain rate tensor , which is usually a function of the point and time . With respect to any coordinate system, it can be expressed by a 3 × 3 matrix. The strain rate tensor can be defined as the derivative of the strain tensor with respect to time, or, equivalently, as the symmetric part of the gradient (derivative with respect to space) of the flow velocity vector : where denotes the velocity gradient. 
In Cartesian coordinates, is the Jacobian matrix, and therefore Either way, the strain rate tensor expresses the rate at which the mean velocity changes in the medium as one moves away from the point – except for the changes due to rotation of the medium about as a rigid body, which do not change the relative distances of the particles and only contribute to the rotational part of the viscous stress via the rotation of the individual particles themselves. (These changes comprise the vorticity of the flow, which is the curl (rotational) of the velocity, and is also the antisymmetric part of the velocity gradient .) General flows The viscous stress tensor is only a linear approximation of the stresses around a point , and does not account for higher-order terms of its Taylor series. However, in almost all practical situations these terms can be ignored, since they become negligible at the size scales where the viscous stress is generated and affects the motion of the medium. The same can be said of the strain rate tensor as a representation of the velocity pattern around . Thus, the linear models represented by the tensors and are almost always sufficient to describe the viscous stress and the strain rate around a point, for the purpose of modelling its dynamics. In particular, the local strain rate is the only property of the velocity flow that directly affects the viscous stress at a given point. On the other hand, the relation between and can be quite complicated, and depends strongly on the composition, physical state, and microscopic structure of the material. It is also often highly non-linear, and may depend on the strains and stresses previously experienced by the material that is now around the point in question. General Newtonian media A medium is said to be Newtonian if the viscous stress is a linear function of the strain rate , and this function does not otherwise depend on the stresses and motion of fluid around . No real fluid is perfectly Newtonian, but many important fluids, including gases and water, can be assumed to be, as long as the flow stresses and strain rates are not too high. In general, a linear relationship between two second-order tensors is a fourth-order tensor. In a Newtonian medium, specifically, the viscous stress and the strain rate are related by the viscosity tensor : The viscosity coefficient is a property of a Newtonian material that, by definition, does not depend otherwise on or . The strain rate tensor is symmetric by definition, so it has only six linearly independent elements. Therefore, the viscosity tensor has only 6 × 9 = 54 degrees of freedom rather than 81. In most fluids the viscous stress tensor too is symmetric, which further reduces the number of viscosity parameters to 6 × 6 = 36. Shear and bulk viscous stress In the absence of rotational effects, the viscous stress tensor will be symmetric. As with any symmetric tensor, the viscous stress tensor can be expressed as the sum of a traceless symmetric tensor and a scalar multiple of the identity tensor. In coordinate form, This decomposition is independent of the coordinate system and is therefore physically significant. The constant part of the viscous stress tensor manifests itself as a kind of pressure, or bulk stress, that acts equally and perpendicularly on any surface independent of its orientation. Unlike the ordinary hydrostatic pressure, it may appear only while the strain is changing, acting to oppose the change; and it can be negative. 
The isotropic Newtonian case In a Newtonian medium that is isotropic (i.e. whose properties are the same in all directions), each part of the stress tensor is related to a corresponding part of the strain rate tensor. where and are the scalar isotropic and the zero-trace parts of the strain rate tensor , and and are two real numbers. Thus, in this case the viscosity tensor has only two independent parameters. The zero-trace part of is a symmetric 3 × 3 tensor that describes the rate at which the medium is being deformed by shearing, ignoring any changes in its volume. Thus the zero-trace part of is the familiar viscous shear stress that is associated with progressive shearing deformation. It is the viscous stress that occurs in fluid moving through a tube with uniform cross-section (a Poiseuille flow) or between two parallel moving plates (a Couette flow), and resists those motions. The part of acts as a scalar multiplier (like ), the average expansion rate of the medium around the point in question. (It is represented in any coordinate system by a 3 × 3 diagonal matrix with equal values along the diagonal.) It is numerically equal to of the divergence of the velocity which in turn is the relative rate of change of volume of the fluid due to the flow. Therefore, the scalar part of is a stress that may be observed when the material is being compressed or expanded at the same rate in all directions. It is manifested as an extra pressure that appears only while the material is being compressed, but (unlike the true hydrostatic pressure) is proportional to the rate of change of compression rather than the amount of compression, and vanishes as soon as the volume stops changing. This part of the viscous stress, usually called bulk viscosity or volume viscosity, is often important in viscoelastic materials, and is responsible for the attenuation of pressure waves in the medium. Bulk viscosity can be neglected when the material can be regarded as incompressible (for example, when modeling the flow of water in a channel). The coefficient , often denoted by , is called the coefficient of bulk viscosity (or "second viscosity"); while is the coefficient of common (shear) viscosity. See also Vorticity equation Navier–Stokes equations References Tensor physical quantities Viscosity
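In standard notation the decomposition described above reads as follows; the symbols mu (shear viscosity) and zeta (bulk viscosity) are the conventional names, not taken from the text:

```latex
% Strain rate tensor: symmetric part of the velocity gradient.
E_{ij} = \frac{1}{2}\left(\frac{\partial v_i}{\partial x_j}
       + \frac{\partial v_j}{\partial x_i}\right), \qquad
% Isotropic Newtonian viscous stress: shear part plus bulk part.
\varepsilon_{ij} = 2\mu\left(E_{ij}
       - \tfrac{1}{3}(\nabla\cdot\mathbf{v})\,\delta_{ij}\right)
       + \zeta\,(\nabla\cdot\mathbf{v})\,\delta_{ij}.
% The first term is the traceless (shear) contribution; the second,
% the bulk contribution, vanishes for incompressible flow, where
% \nabla\cdot\mathbf{v} = 0.
```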
Viscous stress tensor
[ "Physics", "Mathematics", "Engineering" ]
2,551
[ "Physical phenomena", "Tensors", "Physical quantities", "Quantity", "Tensor physical quantities", "Wikipedia categories named after physical quantities", "Viscosity", "Physical properties" ]
37,196,658
https://en.wikipedia.org/wiki/Database%20encryption
Database encryption can generally be defined as a process that uses an algorithm to transform data stored in a database into "cipher text" that is incomprehensible without first being decrypted. It can therefore be said that the purpose of database encryption is to protect the data stored in a database from being accessed by individuals with potentially "malicious" intentions. The act of encrypting a database also reduces the incentive for individuals to hack the aforementioned database as "meaningless" encrypted data adds extra steps for hackers to retrieve the data. There are multiple techniques and technologies available for database encryption, the most important of which will be detailed in this article. Transparent/External database encryption Transparent data encryption (often abbreviated as TDE) is used to encrypt an entire database, which therefore involves encrypting "data at rest". Data at rest can generally be defined as "inactive" data that is not currently being edited or pushed across a network. As an example, a text file stored on a computer is "at rest" until it is opened and edited. Data at rest are stored on physical storage media solutions such as tapes or hard disk drives. The act of storing large amounts of sensitive data on physical storage media naturally raises concerns of security and theft. TDE ensures that the data on physical storage media cannot be read by malicious individuals that may have the intention to steal them. Data that cannot be read is worthless, thus reducing the incentive for theft. Perhaps the most important strength that is attributed to TDE is its transparency. Given that TDE encrypts all data it can be said that no applications need to be altered in order for TDE to run correctly. It is important to note that TDE encrypts the entirety of the database as well as backups of the database. The transparent element of TDE has to do with the fact that TDE encrypts on "the page level", which essentially means that data is encrypted when stored and decrypted when it is called into the system's memory. The contents of the database are encrypted using a symmetric key that is often referred to as a "database encryption key". Column-level encryption In order to explain column-level encryption it is important to outline basic database structure. A typical relational database is divided into tables that are divided into columns that each have rows of data. Whilst TDE usually encrypts an entire database, column-level encryption allows for individual columns within a database to be encrypted. It is important to establish that the granularity of column-level encryption causes specific strengths and weaknesses to arise when compared to encrypting an entire database. Firstly, the ability to encrypt individual columns allows for column-level encryption to be significantly more flexible when compared to encryption systems that encrypt an entire database such as TDE. Secondly, it is possible to use an entirely unique and separate encryption key for each column within a database. This effectively increases the difficulty of generating rainbow tables which thus implies that the data stored within each column is less likely to be lost or leaked. The main disadvantage associated with column-level database encryption is speed, or a loss thereof. Encrypting separate columns with different unique keys in the same database can cause database performance to decrease, and additionally also decreases the speed at which the contents of the database can be indexed or searched. 
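As an illustration of column-level encryption with a per-column key, here is a minimal Python sketch using the Fernet symmetric scheme from the cryptography package; the table and column names are hypothetical:

```python
import sqlite3
from cryptography.fernet import Fernet

column_key = Fernet.generate_key()   # a separate key per encrypted column
f = Fernet(column_key)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, ssn BLOB)")

# Encrypt the sensitive column value before it is written to the database.
conn.execute("INSERT INTO users (ssn) VALUES (?)", (f.encrypt(b"123-45-6789"),))

# Decrypt on read; note the ciphertext itself cannot be indexed or searched.
cipher = conn.execute("SELECT ssn FROM users").fetchone()[0]
print(f.decrypt(cipher))  # b'123-45-6789'
```

As the text observes, this flexibility comes at a cost: queries against the encrypted column can no longer use ordinary indexes.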
Field-level encryption Experimental work is being done on providing database operations (like searching or arithmetical operations) on encrypted fields without the need to decrypt them. Strong encryption is required to be randomized - a different result must be generated each time. This is known as probabilistic encryption. Deterministic encryption is weaker than randomized encryption, but it allows users to test for equality without decrypting the data. Filesystem-level encryption Encrypting File System (EFS) It is important to note that traditional database encryption techniques normally encrypt and decrypt the contents of a database. Databases are managed by "Database Management Systems" (DBMS) that run on top of an existing operating system (OS). This raises a potential security concern, as an encrypted database may be running on an accessible and potentially vulnerable operating system. EFS can encrypt data that is not part of a database system, which implies that the scope of encryption for EFS is much wider when compared to a system such as TDE that is only capable of encrypting database files. Whilst EFS does widen the scope of encryption, it also decreases database performance and can cause administration issues, as system administrators require operating system access to use EFS. Due to the issues concerning performance, EFS is not typically used in database applications that require frequent database input and output. In order to offset the performance issues, it is often recommended that EFS systems be used in environments with few users. Full disk encryption BitLocker does not have the same performance concerns associated with EFS. Symmetric and asymmetric database encryption Symmetric database encryption Symmetric encryption in the context of database encryption involves a private key being applied to data that is stored and called from a database. This private key alters the data in a way that causes it to be unreadable without first being decrypted. Data is encrypted when saved, and decrypted when opened, given that the user knows the private key. Thus if the data is to be shared through a database the receiving individual must have a copy of the secret key used by the sender in order to decrypt and view the data. A clear disadvantage related to symmetric encryption is that sensitive data can be leaked if the private key is spread to individuals that should not have access to the data. However, given that only one key is involved in the encryption process it can generally be said that speed is an advantage of symmetric encryption. Asymmetric database encryption Asymmetric encryption expands on symmetric encryption by incorporating two different types of keys into the encryption method: private and public keys. A public key can be accessed by anyone and is unique to one user whereas a private key is a secret key that is unique to and only known by one user. In most scenarios the public key is the encryption key whereas the private key is the decryption key. As an example, if individual A would like to send a message to individual B using asymmetric encryption, he would encrypt the message using individual B's public key and then send the encrypted version. Individual B would then be able to decrypt the message using his private key. Individual C would not be able to decrypt individual A's message, as individual C's private key is not the same as individual B's private key. 
Asymmetric encryption is often described as being more secure in comparison to symmetric database encryption given that private keys do not need to be shared, as two separate keys handle encryption and decryption processes. For performance reasons, asymmetric encryption is used in key management rather than to encrypt the data itself, which is usually done with symmetric encryption. Key management The Symmetric & Asymmetric Database Encryption section introduced the concept of public and private keys with basic examples in which users exchange keys. The act of exchanging keys becomes impractical from a logistical point of view when many different individuals need to communicate with each other. In database encryption the system handles the storage and exchange of keys. This process is called key management. If encryption keys are not managed and stored properly, highly sensitive data may be leaked. Additionally, if a key management system deletes or loses a key, the information that was encrypted via said key is essentially rendered "lost" as well. The complexity of key management logistics is also a topic that needs to be taken into consideration. As the number of applications that a firm uses increases, the number of keys that need to be stored and managed increases as well. Thus it is necessary to establish a way in which keys from all applications can be managed through a single channel, which is also known as enterprise key management. Enterprise key management solutions are sold by a great number of suppliers in the technology industry. These systems essentially provide a centralised key management solution that allows administrators to manage all keys in a system through one hub. Thus it can be said that the introduction of enterprise key management solutions has the potential to lessen the risks associated with key management in the context of database encryption, as well as to reduce the logistical troubles that arise when many individuals attempt to manually share keys. Hashing Hashing is used in database systems as a method to protect sensitive data such as passwords; however, it is also used to improve the efficiency of database referencing. Inputted data is manipulated by a hashing algorithm. The hashing algorithm converts the inputted data into a string of fixed length that can then be stored in a database. Hashing systems have two crucially important characteristics that will now be outlined. Firstly, hashes are "unique and repeatable". As an example, running the word "cat" through the same hashing algorithm multiple times will always yield the same hash, however it is extremely difficult to find a word that will return the same hash that "cat" does. Secondly, hashing algorithms are not reversible. To relate this back to the example provided above, it would be nearly impossible to convert the output of the hashing algorithm back to the original input, which was "cat". In the context of database encryption, hashing is often used in password systems. When a user first creates their password it is run through a hashing algorithm and saved as a hash. When the user logs back into the website, the password that they enter is run through the hashing algorithm and is then compared to the stored hash. Given the fact that hashes are unique, if both hashes match then it is said that the user inputted the correct password. One example of a popular hash function is SHA-256. 
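The two properties outlined above, repeatability and practical irreversibility, can be demonstrated with SHA-256 from Python's standard hashlib module:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Unique and repeatable: the same input always yields the same digest.
assert sha256_hex(b"cat") == sha256_hex(b"cat")

stored = sha256_hex(b"correct horse battery staple")   # saved at registration
attempt = sha256_hex(b"correct horse battery staple")  # computed at login
print(attempt == stored)  # True -> the entered password matches
```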
Salting One issue that arises when using hashing for password management in the context of database encryption is the fact that a malicious user could potentially use a rainbow table (a precomputed input-to-hash lookup table) for the specific hashing algorithm that the system uses. This would effectively allow the individual to reverse the hashes and thus gain access to stored passwords. A solution for this issue is to 'salt' the hash. Salting is the process of hashing more than just the password in a database. The more information that is added to a string that is to be hashed, the more difficult it becomes to collate rainbow tables. As an example, a system may combine a user's email and password into a single hash. This increase in the complexity of a hash means that it is far more difficult and thus less likely for rainbow tables to be generated. This naturally implies that the threat of sensitive data loss is minimised through salting hashes. Pepper Some systems incorporate a "pepper" in addition to salts in their hashing systems. Pepper systems are controversial; however, it is still necessary to explain their use. A pepper is a value that is added to a hashed password that has been salted. This pepper is often unique to one website or service, and it is important to note that the same pepper is usually added to all passwords saved in a database. In theory the inclusion of peppers in password hashing systems has the potential to decrease the risk of rainbow (input-to-hash) tables, given the system-level specificity of peppers, however the real world benefits of pepper implementation are highly disputed. Application-level encryption In application-level encryption, the process of encrypting data is completed by the application that has been used to generate or modify the data that is to be encrypted. Essentially this means that data is encrypted before it is written to the database. This unique approach to encryption allows for the encryption process to be tailored to each user based on the information (such as entitlements or roles) that the application knows about its users. According to Eugene Pilyankevich, "Application-level encryption is becoming a good practice for systems with increased security requirements, with a general drift toward perimeter-less and more exposed cloud systems". Advantages of application-level encryption One of the most important advantages of application-level encryption is the fact that application-level encryption has the potential to simplify the encryption process used by a company. If an application encrypts the data that it writes/modifies from a database then a secondary encryption tool will not need to be integrated into the system. The second main advantage relates to the overarching theme of theft. Given that data is encrypted before it is written to the server, a hacker would need to have access to the database contents as well as the applications that were used to encrypt and decrypt the contents of the database in order to decrypt sensitive data. Disadvantages of application-level encryption The first important disadvantage of application-level encryption is that applications used by a firm will need to be modified to encrypt data themselves. This has the potential to consume a significant amount of time and other resources. Given the nature of opportunity cost, firms may not believe that application-level encryption is worth the investment. In addition, application-level encryption may have a limiting effect on database performance. 
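A minimal sketch of salted password hashing, using PBKDF2 from the Python standard library; the iteration count is an illustrative choice, not a recommendation from the text:

```python
import os
import hashlib

def hash_password(password, salt=None):
    """Return (salt, digest); a fresh random salt defeats precomputed
    rainbow tables, since each user's hash input is unique."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

salt, digest = hash_password("hunter2")        # stored in the database
_, check = hash_password("hunter2", salt)      # recomputed at login, same salt
print(check == digest)  # True
```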
If all data on a database is encrypted by a multitude of different applications then it becomes impossible to index or search data on the database. To ground this in reality in the form of a basic example: it would be impossible to construct a glossary in a single language for a book that was written in 30 languages. Lastly the complexity of key management increases, as multiple different applications need to have the authority and access to encrypt data and write it to the database. Risks of database encryption When discussing the topic of database encryption it is imperative to be aware of the risks that are involved in the process. The first set of risks is related to key management. If private keys are not managed in an "isolated system", system administrators with malicious intentions may have the ability to decrypt sensitive data using keys that they have access to. The fundamental principle of keys also gives rise to a potentially devastating risk: if keys are lost then the encrypted data is essentially lost as well, as decryption without keys is almost impossible. Using encryption to secure data in a database Encryption enhances the security of data stored in a database by converting the information into an unreadable format using an algorithm. The encrypted data can only be accessed and deciphered with a decryption key, ensuring that even if the database is compromised, the information remains confidential. By encrypting sensitive data such as passwords, financial records, and personal information, organizations can safeguard their data from unauthorized access and data breaches, mitigating the risk of data theft and helping to ensure compliance with data protection regulations. Implementing encryption in a database involves technologies such as the Advanced Encryption Standard (AES) for data at rest or Transport Layer Security (TLS) for data in transit. Encryption keys must be securely managed to prevent unauthorized decryption of data. References Cryptography Data security
Database encryption
[ "Mathematics", "Engineering" ]
3,145
[ "Applied mathematics", "Data security", "Cryptography", "Cybersecurity engineering" ]
30,612,745
https://en.wikipedia.org/wiki/Wilks%27%20theorem
In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test. Statistical tests (such as hypothesis testing) generally require knowledge of the probability distribution of the test statistic. This is often a problem for likelihood ratios, where the probability distribution can be very difficult to determine. A convenient result by Samuel S. Wilks says that as the sample size approaches infinity, the distribution of the test statistic asymptotically approaches the chi-squared () distribution under the null hypothesis . Here, denotes the likelihood ratio, and the distribution has degrees of freedom equal to the difference in dimensionality of and , where is the full parameter space and is the subset of the parameter space associated with . This result means that for large samples and a great variety of hypotheses, a practitioner can compute the likelihood ratio for the data and compare to the value corresponding to a desired statistical significance as an approximate statistical test. The theorem no longer applies when the true value of the parameter is on the boundary of the parameter space: Wilks' theorem assumes that the 'true' but unknown values of the estimated parameters lie within the interior of the supported parameter space. In practice, one will notice the problem if the estimate lies on that boundary. In that event, the likelihood-ratio test is still a sensible test statistic and even possesses some asymptotic optimality properties, but the significance (the -value) cannot be reliably estimated using the chi-squared distribution with the number of degrees of freedom prescribed by Wilks. In some cases, the asymptotic null-hypothesis distribution of the statistic is a mixture of chi-square distributions with different numbers of degrees of freedom. Use Each of the two competing models, the null model and the alternative model, is separately fitted to the data and the log-likelihood recorded. The test statistic (often denoted by ) is twice the log of the likelihoods ratio, i.e., it is twice the difference in the log-likelihoods: The model with more parameters (here alternative) will always fit at least as well — i.e., have the same or greater log-likelihood — as the model with fewer parameters (here null). Whether the fit is significantly better and should thus be preferred is determined by deriving how likely (-value) it is to observe such a difference  by chance alone, if the model with fewer parameters were true. Where the null hypothesis represents a special case of the alternative hypothesis, the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom equal to , respectively the number of free parameters of models alternative and null. For example: If the null model has 1 parameter and a log-likelihood of −8024 and the alternative model has 3 parameters and a log-likelihood of −8012, then the probability of this difference is that of a chi-squared value of with degrees of freedom, and is equal to . Certain assumptions must be met for the statistic to follow a chi-squared distribution, but empirical -values may also be computed if those conditions are not met. Examples Coin tossing An example of Pearson's test is a comparison of two coins to determine whether they have the same probability of coming up heads. 
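A short sketch reproducing the worked example above, assuming SciPy is available:

```python
from scipy.stats import chi2

ll_null, ll_alt = -8024.0, -8012.0  # log-likelihoods from the example
df = 3 - 1                          # difference in number of free parameters
lam = 2 * (ll_alt - ll_null)        # test statistic, here 24.0
p_value = chi2.sf(lam, df)          # survival function = 1 - CDF
print(lam, p_value)                 # 24.0, ~6.1e-06
```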
The observations can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times each coin came up heads or tails. The contents of this table are our observations . Here consists of the possible combinations of values of the parameters , , , and , which are the probability that coins 1 and 2 come up heads or tails. In what follows, and . The hypothesis space is constrained by the usual constraints on a probability distribution, , and . The space of the null hypothesis is the subspace where . The dimensionality of the full parameter space is 2 (either of the and either of the may be treated as free parameters under the hypothesis ), and the dimensionality of is 1 (only one of the may be considered a free parameter under the null hypothesis ). Writing for the best estimates of under the hypothesis , the maximum likelihood estimate is given by Similarly, the maximum likelihood estimates of under the null hypothesis are given by which does not depend on the coin . The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the logarithm of the likelihood ratio to have the desired distribution. Since the constraint causes the two-dimensional to be reduced to the one-dimensional , the asymptotic distribution for the test will be , the distribution with one degree of freedom. For the general contingency table, we can write the log-likelihood ratio statistic as Invalidity for random or mixed effects models Wilks’ theorem assumes that the true but unknown values of the estimated parameters are in the interior of the parameter space. This is commonly violated in random or mixed effects models, for example, when one of the variance components is negligible relative to the others. In some such cases, one variance component can be effectively zero, relative to the others, or in other cases the models can be improperly nested. To be clear: These limitations on Wilks’ theorem do not negate any power properties of a particular likelihood ratio test. The only issue is that a distribution is sometimes a poor choice for estimating the statistical significance of the result. Bad examples Pinheiro and Bates (2000) showed that the true distribution of this likelihood ratio chi-square statistic could be substantially different from the naïve – often dramatically so. The naïve assumptions could give significance probabilities (-values) that are, on average, far too large in some cases and far too small in others. In general, to test random effects, they recommend using Restricted maximum likelihood (REML). For fixed-effects testing, they say, “a likelihood ratio test for REML fits is not feasible”, because changing the fixed effects specification changes the meaning of the mixed effects, and the restricted model is therefore not nested within the larger model. As a demonstration, they set either one or two random effects variances to zero in simulated tests. In those particular examples, the simulated -values with restrictions most closely matched a 50–50 mixture of and . (With , is 0 with probability 1. This means that a good approximation was ) Pinheiro and Bates also simulated tests of different fixed effects. 
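Returning to the coin-tossing example above, here is a sketch with hypothetical head/tail counts (the observation numbers are invented for illustration); under the null hypothesis the statistic is approximately chi-squared with one degree of freedom:

```python
from math import log
from scipy.stats import chi2

def loglik(heads, tails, p):
    """Binomial log-likelihood up to a constant factor."""
    return heads * log(p) + tails * log(1.0 - p)

# Hypothetical observations: (heads, tails) for coins 1 and 2.
h1, t1, h2, t2 = 60, 40, 45, 55

# MLEs: separate head probabilities under H1, a common one under H0.
p1, p2 = h1 / (h1 + t1), h2 / (h2 + t2)
p0 = (h1 + h2) / (h1 + t1 + h2 + t2)

lam = 2 * (loglik(h1, t1, p1) + loglik(h2, t2, p2)
           - loglik(h1, t1, p0) - loglik(h2, t2, p0))
print(lam, chi2.sf(lam, df=1))  # dimensionality drops from 2 to 1, so df = 1
```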
In one test of a factor with 4 levels (degrees of freedom = 3), they found that a 50–50 mixture of and was a good match for actual -values obtained by simulation – and the error in using the naïve “may not be too alarming.” However, in another test of a factor with 15 levels, they found a reasonable match to – 4 more degrees of freedom than the 14 that one would get from a naïve (inappropriate) application of Wilks’ theorem, and the simulated -value was several times the naïve . They conclude that for testing fixed effects, “it's wise to use simulation.” See also Bayes factor Model selection Sup-LR test Akaike information criterion (AIC) Notes References Other sources External links Statistical ratios Statistical tests
Wilks' theorem
[ "Mathematics" ]
1,503
[ "Mathematical theorems", "Mathematical problems", "Theorems in statistics" ]
30,625,026
https://en.wikipedia.org/wiki/Consensus%20CDS%20Project
The Consensus Coding Sequence (CCDS) Project is a collaborative effort to maintain a dataset of protein-coding regions that are identically annotated on the human and mouse reference genome assemblies. The CCDS project tracks identical protein annotations on the reference mouse and human genomes with a stable identifier (CCDS ID), and ensures that they are consistently represented by the National Center for Biotechnology Information (NCBI), Ensembl, and UCSC Genome Browser. The integrity of the CCDS dataset is maintained through stringent quality assurance testing and on-going manual curation. Motivation and background Biological and biomedical research has come to rely on accurate and consistent annotation of genes and their products on genome assemblies. Reference annotations of genomes are available from various sources, each with their own independent goals and policies, which results in some annotation variation. The CCDS project was established to identify a gold standard set of protein-coding gene annotations that are identically annotated on the human and mouse reference genome assemblies by the participating annotation groups. The CCDS gene sets that have been arrived at by consensus of the different partners now consist of over 18,000 human and over 20,000 mouse genes (see CCDS release history). The CCDS dataset is increasingly representing more alternative splicing events with each new release. Contributing groups Participating annotation groups include: National Center for Biotechnology Information (NCBI) European Bioinformatics Institute (EBI) Wellcome Trust Sanger Institute (WTSI) HUGO Gene Nomenclature Committee (HGNC) Mouse Genome Informatics (MGI) Manual annotation is provided by: Reference Sequence (RefSeq) at NCBI Human and Vertebrate Analysis and Annotation (HAVANA) at WTSI Defining the CCDS gene set "Consensus" is defined as protein-coding regions that agree at the start codon, stop codon, and splice junctions, and for which the prediction meets quality assurance benchmarks. A combination of manual and automated genome annotations provided by (NCBI) and Ensembl (which incorporates manual HAVANA annotations) are compared to identify annotations with matching genomic coordinates. Quality assurance testing In order to ensure that CDSs are of high quality, multiple quality assurance (QA) tests are performed (Table 1). All tests are performed following the annotation comparison step of each CCDS build and are independent of individual annotation group QA tests performed prior to the annotation comparison. Annotations that fail QA tests undergo a round of manual checking that may improve results or reach a decision to reject annotation matches based on QA failure. Review process The CCDS database is unique in that the review process must be carried out by multiple collaborators, and agreement must be reached before any changes can be made. This is made possible with a collaborator coordination system that includes a work process flow and forums for analysis and discussion. The CCDS database operates an internal website that serves multiple purposes including curator communication, collaborator voting, providing special reports and tracking the status of CCDS representations. When a collaborating CCDS group member identifies a CCDS ID that may need review, a voting process is employed to decide on the final outcome. Manual curation Coordinated manual curation is supported by a restricted-access website and a discussion e-mail list. 
CCDS curation guidelines were established to address specific conflicts that were observed at a higher frequency. Establishment of CCDS curation guidelines has helped to make the CCDS curation process more efficient by reducing the number of conflicting votes and time spent in discussion to reach a consensus agreement. A link to the CCDS curation guidelines can be found here. Curation policies established for the CCDS data set have been integrated into the RefSeq and HAVANA annotation guidelines and thus, new annotations provided by both groups are more likely to be concordant and result in addition of a CCDS ID. These standards address specific problem areas, are not a comprehensive set of annotation guidelines, and do not restrict the annotation policies of any collaborating group. Examples include standardized curation guidelines for selection of the initiation codon and interpretation of upstream ORFs and transcripts that are predicted to be candidates for nonsense-mediated decay. Curation occurs continuously, and any of the collaborating centers can flag a CCDS ID as a potential update or withdrawal. Conflicting opinions are addressed by consulting with scientific experts or other annotation curation groups such as the HUGO Gene Nomenclature Committee (HGNC) and Mouse Genome Informatics (MGI). If a conflict cannot be resolved, then collaborators agree to withdraw the CCDS ID until more information becomes available. Curation challenges and annotation guidelines Nonsense-mediated decay (NMD): NMD is the most powerful mRNA surveillance process. NMD eliminates defective mRNA before it can be translated into protein. This is important because if the defective mRNA is translated, the truncated protein may cause disease. Different mechanisms have been proposed to explain NMD, one being the exon junction complex (EJC) model. In this model, if the stop codon is >50 nt upstream of the last exon-exon junction, the transcript is assumed to be an NMD candidate. The CCDS collaborators use a conservative method, based on the EJC model, to screen mRNA transcripts. Any transcripts determined to be NMD candidates are excluded from the CCDS data set except in the following situations: all transcripts at one particular locus are assessed to be NMD candidates, but the locus is previously known to be a protein-coding region; there is experimental evidence suggesting that a functional protein is produced from the NMD candidate transcript. Previously, NMD candidate transcripts were considered to be protein coding transcripts by both RefSeq and HAVANA, and thereby, these NMD candidate transcripts were represented in the CCDS data set. The RefSeq group and the HAVANA project have subsequently revised their annotation policies. Multiple in-frame translation start sites: Multiple factors contribute to translation initiation, such as upstream open reading frames (uORFs), secondary structure and the sequence context around the translation initiation site. A common start site is defined within the Kozak consensus sequence: (GCC) GCCACCAUGG in vertebrates. The sequence in brackets (GCC) is a motif with unknown biological impact. There are variations within the Kozak consensus sequence, such as G or A observed three nucleotides upstream (at position -3) of AUG. Bases between positions -3 and +4 of the Kozak sequence have the most significant impact on translational efficiency. Hence, a sequence (A/G)NNAUGG is defined as a strong Kozak signal in the CCDS project. 
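A toy Python sketch of screening an mRNA sequence for AUG codons in the strong Kozak context (A/G)NNAUGG defined above; the example sequence is invented:

```python
import re

# Strong Kozak signal as defined above: A or G at position -3 and G at +4,
# i.e. the pattern (A/G)NNAUGG around the AUG start codon. A lookahead is
# used so that overlapping candidate contexts are all reported.
STRONG_KOZAK = re.compile(r"(?=([AG]..AUGG))")

def strong_start_sites(mrna):
    """Return 0-based positions of AUG codons in a strong Kozak context."""
    return [m.start() + 3 for m in STRONG_KOZAK.finditer(mrna)]

print(strong_start_sites("GGCUAGCCACCAUGGCGAUGCAGUAA"))  # [11]: only the
# first AUG sits in a strong context; the second lacks G at position +4.
```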
According to the scanning mechanism, the small ribosomal subunit can initiate translation from the first start codon reached. There are exceptions to the scanning model: when the initiation site is not surrounded by a strong Kozak signal, which results in leaky scanning. Thereby, the ribosome skips this AUG and initiates translation from a downstream start site; when a shorter ORF can allow the ribosome to re-initiate translation at a downstream ORF. According to the CCDS annotation guidelines, the longest ORF must be annotated except when there is experimental evidence that an internal start site is used to initiate translation. Additionally, other types of new data, such as ribosome profiling data, can be used to identify start codons. The CCDS data set records one translation initiation site per CCDS ID. Any alternative start sites may be used for translation and will be stated in a CCDS public note. Upstream open reading frames: AUG initiation codons located within transcript leaders are known as upstream AUGs (uAUGs). Sometimes, uAUGs are associated with uORFs. uORFs are found in approximately 50% of human and mouse transcripts. The existence of uORFs is another challenge for the CCDS data set. The scanning mechanism for translation initiation suggests that small ribosomal subunits (40S) bind at the 5' end of a nascent mRNA transcript and scan for the first AUG start codon. It is possible that a uAUG is recognised first, and the corresponding uORF is then translated. The translated uORF could be an NMD candidate, although studies have shown that some uORFs can avoid NMD. The average size limit for uORFs that will escape NMD is approximately 35 amino acids. It also has been suggested that uORFs inhibit translation of the downstream gene by trapping a ribosome initiation complex and causing the ribosome to dissociate from the mRNA transcript before it reaches the protein-coding regions. Currently, no studies have reported the global impact of uORFs on translational regulation. The current CCDS annotation guidelines allow the inclusion of mRNA transcripts containing uORFs if they meet the following two biological requirements: the mRNA transcript has a strong Kozak signal; the mRNA transcript is either ≥ 35 amino acids or overlaps with the primary open reading frame. Read-through transcripts: Read-through transcripts are also known as conjoined genes or co-transcribed genes. Read-through transcripts are defined as transcripts combining at least part of one exon from each of two or more distinct known (partner) genes which lie on the same chromosome in the same orientation. The biological function of read-through transcripts and their corresponding protein molecules remains unknown. However, the definition of a read-through gene in the CCDS data set is that the individual partner genes must be distinct, and the read-through transcripts must share ≥ 1 exon (or ≥ 2 splice sites except in the case of a shared terminal exon) with each of the distinct shorter loci. Transcripts are not considered to be read-through transcripts in the following circumstances: when transcripts are produced from overlapping genes but do not share the same splice sites; when transcripts are translated from genes that have nested structures relative to each other. In this instance, the CCDS collaborators and the HGNC have agreed that the read-through transcript be represented as a separate locus. 
Quality of reference genome sequence: As the CCDS data set is built to represent genomic annotations of human and mouse, the quality problems with the human and mouse reference genome sequences become another challenge. Quality problems occur when the reference genome is misassembled. Thereby the misassembled genome may contain premature stop codons, frame-shift indels, or likely polymorphic pseudogenes. Once these quality problems are identified, the CCDS collaborators report the issues to the Genome Reference Consortium, which investigates and makes the necessary corrections. Access to CCDS data The CCDS project is available from the NCBI CCDS data set page (here), which provides FTP download links and a query interface to acquire information about CCDS sequences and locations. CCDS reports can be obtained by using the query interface, which is located at the top of the CCDS data set page. Users can select various types of identifiers such as CCDS ID, gene ID, gene symbol, nucleotide ID and protein ID to search for specific CCDS information. The CCDS reports (Figure 1) are presented in a table format, providing links to specific resources, such as a history report, Entrez Gene or re-query the CCDS data set. The sequence identifiers table presents transcript information in VEGA, Ensembl and Blink. The chromosome location table includes the genomic coordinates for each individual exon of the specific coding sequence. This table also provides links to several different genome browsers, which allow you to visualise the structure of the coding region. Exact nucleotide sequence and protein sequence of the specific coding sequence are also displayed in the section of CCDS sequence data. Current applications The CCDS dataset is an integral part of the GENCODE gene annotation project and it is used as a standard for high-quality coding exon definition in various research fields, including clinical studies, large-scale epigenomic studies, exome projects and exon array design. Due to the consensus annotation of CCDS exons by the independent annotation groups, exome projects in particular have regarded CCDS coding exons as reliable targets for downstream studies (e.g., for single nucleotide variant detection), and these exons have been used as coding region targets in commercially available exome kits. CCDS release history The CCDS data set size has continued to increase with both the computational genome annotation updates, which integrate new data sets submitted to the International Nucleotide Sequence Database Collaboration (INSDC), and on ongoing curation activities that supplement or improve upon that annotation. Table 2 summarises the key statistics for each CCDS build where Public CCDS IDs are all those that were not under review or pending an update or withdrawal at the time of the current release date. The complete set of release statistics can be found at the official CCDS website on their Releases & Statistics page. Future prospects Long-term goals include the addition of attributes that indicate where transcript annotation is also identical (including the UTRs) and to indicate splice variants with different UTRs that have the same CCDS ID. It is also anticipated that as more complete and high-quality genome sequence data become available for other organisms, annotations from these organisms may be in scope for CCDS representation. 
The CCDS set will become more complete as the independent curation groups agree on cases where they initially differ, as additional experimental validation of weakly supported genes occurs, and as automatic annotation methods continue to improve. Communication among the CCDS collaborating groups is ongoing and will resolve differences and identify refinements between CCDS update cycles. Human updates are expected to occur roughly every 6 months and mouse releases yearly. See also GENCODE Human Genome Mouse Genome Informatics RefSeq Ensembl References External links CCDS home page Biological databases Genetic engineering in the United Kingdom Genetics databases Science and technology in Cambridgeshire South Cambridgeshire District Wellcome Trust
Consensus CDS Project
[ "Biology" ]
2,926
[ "Bioinformatics", "Biological databases" ]
30,625,834
https://en.wikipedia.org/wiki/Brightray
Brightray is a nickel-chromium alloy that is noted for its resistance to erosion by gas flow at high temperatures. It was used for hard-facing the exhaust valve heads and seats of petrol engines, particularly aircraft engines from the 1930s onwards. It was developed by Henry Wiggin and Co at Birmingham. As well as its use as a coating, it is also used in wire and strip form for electrical heating elements. The original Brightray alloy was composed of 80% nickel / 20% chromium. This alloy is still in use today as Brightray S and can be used at temperatures up to 1050°C. Several other variants are now available. These include nickel-iron-chromium Brightray F that offers better resistance to both reducing and oxidizing environments. Brightray C is a nickel-chromium alloy with rare-earth additions to extend its lifetime under fluctuating temperatures, particularly with heating elements that are being continually switched on and off. See also Nimonic References Refractory metals Nickel–chromium alloys Nickel alloys Chromium alloys Aerospace materials
Brightray
[ "Chemistry", "Engineering" ]
224
[ "Nickel alloys", "Alloy stubs", "Aerospace materials", "Refractory metals", "Alloys", "Aerospace engineering", "Chromium alloys" ]
30,627,411
https://en.wikipedia.org/wiki/Vincenzo%20Balzani
Vincenzo Balzani (born 15 November 1936 in Forlimpopoli, Italy) is an Italian chemist, now emeritus professor at the University of Bologna. Career He has spent most of his professional life at the "Giacomo Ciamician" Department of Chemistry of the University of Bologna, becoming full professor in 1973. He was appointed emeritus professor on November 1, 2010. Teaching activity He taught courses on General and Inorganic Chemistry, Photochemistry, and Supramolecular Chemistry. He was chairman of the PhD course in Chemical Sciences from 2002 to 2007 and of the "laurea specialistica" in Photochemistry and Material Chemistry from 2004 to 2007. In the academic year 2008–2009, he founded an interdisciplinary course on Science and Society at the University of Bologna. Scientific activity He has carried out intense scientific activity in the fields of photochemistry, photophysics, electron transfer reactions, supramolecular chemistry, nanotechnology, machines and devices at the molecular level, and the photochemical conversion of solar energy. With some 650 publications cited more than 64,000 times in the scientific literature (H-index 119), he is one of the best-known chemists in the world. He is the author or co-author of English-language texts for researchers, some translated into Chinese and Japanese, which are currently used in universities in many countries. A few of the most significant texts are: Photochemistry of Coordination Compounds (1970), Supramolecular Photochemistry (1991), Molecular Devices and Machines - Concepts and Perspectives for the Nanoworld (2008), Energy for a Sustainable World (2011), and Photochemistry and Photophysics: Concepts, Research, Applications (2014). Public education activity For many years, alongside his scientific research, he has been intensely active in public outreach, including on the relationship between science and society and between science and peace, with particular reference to energy and resource issues. He is convinced that scientists have a great responsibility deriving from their knowledge, and that it is therefore their duty to contribute actively to solving the problems of humanity, particularly those connected to the current energy and climate crisis. Every year he gives dozens of seminars in primary and secondary schools, as well as public lectures, illustrating to students and citizens the problems created by the use of fossil fuels: climate change, ecological unsustainability and the social unease deriving from growing inequalities. He believes that three transitions are necessary: from fossil fuels to renewable energies, from the linear economy to the circular economy, and from consumerism to sobriety. On these themes he is the co-author of books much appreciated by secondary-school students and teachers: Chimica (2000); Energia oggi e domani: Prospettive, sfide, speranze (2004); Energia per l'astronave Terra (2017), whose first edition (2007) won the Galileo award for scientific dissemination; Chimica! Leggere e scrivere il libro della natura (2012), English version: Chemistry! Reading and writing the book of Nature (2014); Energia, risorse, ambiente (2014); and Le macchine molecolari (2018), a finalist for the Giancarlo Dosi National Award for Scientific Dissemination. Other activities Visiting professor: University of British Columbia, Vancouver, Canada, 1972; Energy Research Center, Hebrew University of Jerusalem, Israel, 1979; University of Strasbourg, France, 1990; University of Leuven, Belgium, 1991; University of Bordeaux, France, 1994. 
Chairman: Gruppo Italiano di Fotochimica (1982–1986); European Photochemistry Association (1988–92); XII IUPAC Symposium on Photochemistry (1988); International Symposium on "Photochemistry and Photophysics of Coordination Compounds" (since 1989, now Honorary Chairman); PhD course in Chemical Sciences (2002–2007) and Laurea specialistica in Photochemistry and Chemistry of Materials (2004–2007), University of Bologna. Director: Institute of Photochemistry and High Energy Radiations (FRAE), National Research Council (Italy), Bologna (1977–1988), and Center for the Photochemical Conversion of Solar Energy, University of Bologna (1981–1998). Member of the scientific committees of several international scientific journals. Member of the Scientific Committee of the Urban Plan for Sustainable Mobility (PUMS) of the Bologna metropolitan area (2008–). Political activity: In 2009 he started the Science and Society interdisciplinary course at the University of Bologna with the aim of bridging the gap between the university and the city; he has long hoped for the strengthening of similar initiatives for the cultural growth of the metropolitan city. In 2014 he founded the Energia per l'Italia group, formed by 22 professors and researchers of the university and of the most important research centres of Bologna, with the aim of offering the government and local politicians guidelines to tackle the energy problem from a broad perspective that includes scientific, social, environmental and cultural aspects. Coordinator and editor: Supramolecular Photochemistry, NATO ASI Series no. 214, Reidel, Dordrecht (1987); Supramolecular Chemistry, NATO ASI Series no. 371, Reidel, Dordrecht (1992) (with L. De Cola); guest editor, Supramolecular Photochemistry, New J. Chem., nos. 7–8, vol. 20 (1996); editor-in-chief of the Handbook on Electron Transfer in Chemistry, in five volumes, Wiley-VCH, Weinheim (2001); Topics in Current Chemistry, volumes 280 and 281, on Photochemistry and Photophysics of Coordination Compounds (2007). Associations and academies He is a member of: Società Chimica Italiana; Accademia delle Scienze di Bologna; Accademia delle Scienze di Torino; Società Nazionale di Scienze, Lettere ed Arti in Napoli; Accademia Nazionale delle Scienze detta dei XL; Accademia Nazionale dei Lincei; European Photochemistry Association; ChemPubSoc Europe; Academia Europaea; European Academy of Sciences; European Academy of Sciences and Arts; American Association for the Advancement of Science. Honors and awards Pacific West Coast Inorganic Lectureship, USA and Canada, 1985; Gold Medal "S. 
Cannizzaro", Italian Chemical Society, 1988; Doctorate "Honoris Causa", University of Fribourg (CH), 1989; Accademia dei Lincei Award in Chemistry, Italy, 1992; Ziegler-Natta Lecturer, Gesellschaft Deutscher Chemiker, Germany, 1994; Italgas European Prize for Research and Innovation, 1994; Centenary Lecturer, The Royal Chemical Society (U.K.), 1995; Porter Medal for Photochemistry, 2000; Prix Franco-Italien de la Société Française de Chimie, 2002; Grande Ufficiale dell’Ordine al Merito della Repubblica Italiana, 2006; Quilico Gold Metal, Organic Division, Italian Chemical Society, 2008; Honor Professor, East China University of Science and Technology of Shanghai, 2009; Blaise Pascal Medal, European Academy of Sciences, 2009; Rotary Club Galileo International Prize for scientific research, 2011; Nature Award for Mentoring in Science, 2013; Archiginnasio d’oro, Città di Bologna, 2016; Grand Prix de la Maison de la Chimie (France) 2016; Leonardo da Vinci Award, European Academy of Sciences, 2017; Nicholas J. Turro Award, Inter-American Photochemical Society, 2018; Cavaliere di Gran Croce della Repubblica Italiana per meriti scientifici, 2019; Primo Levi Award, Gesellschaft Deutscher Chemiker and Società Chimica Italiana, 2019; UNESCO-Russia Mendeleev Prize, 2021. Publications Scientific books Translated into Chinese and Japanese. Translated into Chinese. Translated into Chinese. Translated into Chinese Educational books (in Italian) Some important papers in scientific journals References External links Homepage at the University of Bologna. Accessed 2011-07-19. CV. Accessed 2015-09-06. List of publications. Accessed 2015-09-06. 1936 births 20th-century Italian chemists Living people Academic staff of the University of Bologna National Research Council (Italy) people Photochemists Members of Academia Europaea
Vincenzo Balzani
[ "Chemistry" ]
1,727
[ "Photochemists", "Physical chemists" ]
25,923,914
https://en.wikipedia.org/wiki/Plutonium%E2%80%93gallium%20alloy
Plutonium–gallium alloy (Pu–Ga) is an alloy of plutonium and gallium, used in nuclear weapon pits, the component of a nuclear weapon where the fission chain reaction is started. This alloy was developed during the Manhattan Project. Overview Metallic plutonium has several different solid allotropes. The δ phase is the least dense and most easily machinable. It is formed at temperatures of 310–452 °C at ambient pressure (1 atmosphere), and is thermodynamically unstable at lower temperatures. However, plutonium can be stabilized in the δ phase by alloying it with a small amount of another metal. The preferred alloy is 3.0–3.5 mol.% (0.8–1.0 wt.%) gallium. Pu–Ga has many practical advantages: stable between −75 and 475 °C, very low thermal expansion, low susceptibility to corrosion (4% of the corrosion rate of pure plutonium), good castability; since plutonium has the rare property that the molten state is denser than the solid state, the tendency to form bubbles and internal defects is decreased. Use in nuclear weapons Stabilized δ-phase Pu–Ga is ductile, and can be rolled into sheets and machined by conventional methods. It is suitable for shaping by hot pressing at about 400 °C. This method was used for forming the first nuclear weapon pits. More modern pits are produced by casting. Subcritical testing showed that wrought and cast plutonium performance is the same. As only the ε-δ transition occurs during cooling, casting Pu-Ga is easier than casting pure plutonium. δ phase Pu–Ga is still thermodynamically unstable, so there are concerns about its aging behavior. There are substantial differences of density (and therefore volume) between the various phases. The transition between δ-phase and α-phase plutonium occurs at a low temperature of 115 °C and can be reached by accident. Prevention of the phase transition and the associated mechanical deformations and consequent structural damage and/or loss of symmetry is of critical importance. Under 4 mol.% gallium the pressure-induced phase change is irreversible. However, the phase change is useful during the operation of a nuclear weapon. As the reaction starts, it generates enormous pressures, in the range of hundreds of gigapascals. Under these conditions, δ phase Pu–Ga transforms to α phase, which is 25% denser and thus more critical. Effect of gallium Plutonium in its α phase has a low internal symmetry, caused by uneven bonding between the atoms, more resembling (and behaving like) a ceramic than a metal. Addition of gallium causes the bonds to become more even, increasing the stability of the δ phase. The α phase bonds are mediated by the 5f shell electrons, and can be disrupted by increased temperature or by presence of suitable atoms in the lattice which reduce the available number of 5f electrons and weaken their bonds. The alloy is denser in molten state than in solid state, which poses an advantage for casting as the tendency to form bubbles and internal defects is decreased. Gallium tends to segregate in plutonium, causing "coring"—gallium-rich centers of grains and gallium-poor grain boundaries. To stabilize the lattice and reverse and prevent segregation of gallium, annealing is required at the temperature just below the δ–ε phase transition, so gallium atoms can diffuse through the grains and create homogeneous structure. The time to achieve homogenization of gallium increases with increasing grain size of the alloy and decreases with increasing temperature. 
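The roughly 25% densification on the pressure-induced δ→α transition mentioned above can be cross-checked against handbook densities for the unalloyed plutonium allotropes (about 15.92 g/cm³ for δ and 19.86 g/cm³ for α). The alloy's exact values differ slightly, so the sketch below is only an illustrative consistency check:

```python
# Cross-check of the ~25% delta -> alpha densification using textbook
# densities for unalloyed plutonium (alloy values differ slightly).
rho_delta = 15.92  # g/cm^3, delta-phase plutonium
rho_alpha = 19.86  # g/cm^3, alpha-phase plutonium

density_increase = rho_alpha / rho_delta - 1.0  # mass fixed, density up
volume_collapse = 1.0 - rho_delta / rho_alpha   # corresponding volume loss

print(f"density increase: {density_increase:.1%}")  # ~24.8%
print(f"volume collapse:  {volume_collapse:.1%}")   # ~19.8%
```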
The structure of stabilized plutonium at room temperature is the same as that of unstabilized plutonium at δ-phase temperature, with the difference that gallium atoms substitute for plutonium in the fcc lattice. The presence of gallium in plutonium signifies its origin from weapon plants or decommissioned nuclear weapons. The isotopic signature of the plutonium then allows rough identification of its origin, manufacturing method, the type of reactor used in its production, and the rough history of the irradiation, as well as matching to other samples, which is of importance in the investigation of nuclear smuggling. Aging There are several plutonium-gallium intermetallic compounds: PuGa, Pu3Ga, and Pu6Ga. During aging of the stabilized δ alloy, gallium segregates from the lattice, forming regions of Pu3Ga (ζ'-phase) within the α phase, with the corresponding dimensional and density change and a buildup of internal strains. The decay of plutonium, however, produces energetic particles (alpha particles and uranium-235 nuclei) that cause local disruption of the ζ' phase, establishing a dynamic equilibrium with only a modest amount of ζ' phase present; this explains the alloy's unexpectedly slow, graceful aging. The alpha particles are trapped as interstitial helium atoms in the lattice, coalescing into tiny (about 1 nm diameter) helium-filled bubbles in the metal and causing negligible levels of void swelling; the size of the bubbles appears to be limited, though their number increases with time. Addition of 7.5 wt.% of plutonium-238, which has a significantly faster decay rate, increases the aging damage rate by 16 times, assisting plutonium aging research. The Blue Gene supercomputer aided with simulations of plutonium aging processes. Production Plutonium alloys can be produced by adding a metal to molten plutonium. However, if the alloying metal is sufficiently reducing, plutonium can be added in the form of oxides or halides. The δ-phase plutonium–gallium and plutonium–aluminium alloys are produced by adding plutonium(III) fluoride to molten gallium or aluminium, which has the advantage of avoiding direct handling of the highly reactive plutonium metal. Reprocessing into MOX fuel For reprocessing of surplus warhead pits into MOX fuel, the majority of the gallium has to be removed, as its high content could interfere with the fuel rod cladding (gallium attacks zirconium) and with the migration of fission products in the fuel pellets. In the ARIES process, the pits are converted to oxide by converting the material to plutonium hydride, then optionally to nitride, and then to oxide. Gallium is then mostly removed from the solid oxide mixture by heating at 1100 °C in a 94% argon / 6% hydrogen atmosphere, reducing the gallium content from 1% to 0.02%. Further dilution of the plutonium oxide during MOX fuel manufacture brings the gallium content to levels considered negligible. A wet route of gallium removal, using ion exchange, is also possible. Electrorefining is another way to separate gallium and plutonium. Development history During the Manhattan Project (1942–1945), the maximum fraction of diluent atoms that would not affect the explosion efficiency of plutonium was calculated to be 5 mol.%. Two stabilizing elements were considered, silicon and aluminium. However, only aluminium produced satisfactory alloys. But aluminium's tendency to react with α-particles and emit neutrons limited its maximum content to 0.5 mol.%; the next element in the boron group, gallium, was then tried and found to be satisfactory. 
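The composition quoted earlier, 3.0–3.5 mol.% gallium corresponding to 0.8–1.0 wt.%, follows directly from the atomic weights. A quick check, assuming round molar masses of about 69.7 g/mol for gallium and 239 g/mol for weapons-grade plutonium (both figures are assumptions for illustration):

```python
# Convert gallium atom fraction to weight fraction in Pu-Ga.
M_GA, M_PU = 69.72, 239.05  # g/mol, approximate molar masses

def ga_wt_percent(ga_mol_percent: float) -> float:
    x = ga_mol_percent / 100.0
    mass_ga = x * M_GA
    mass_total = mass_ga + (1.0 - x) * M_PU
    return 100.0 * mass_ga / mass_total

for mol in (3.0, 3.5):
    print(f"{mol} mol.% Ga = {ga_wt_percent(mol):.2f} wt.%")
# Prints ~0.89 wt.% and ~1.05 wt.%, matching the quoted
# 0.8-1.0 wt.% range to rounding accuracy.
```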
The early atomic bomb design secrets passed to the Soviets by spy Klaus Fuchs included the gallium trick for stabilizing phases of plutonium, and thus the first Soviet atomic bomb used this alloy also. References Gallium alloys Plutonium compounds Low thermal expansion materials Nuclear weapons
Plutonium–gallium alloy
[ "Physics", "Chemistry" ]
1,559
[ "Gallium alloys", "Low thermal expansion materials", "Materials", "Alloys", "Matter" ]
25,925,741
https://en.wikipedia.org/wiki/Thiosulfate%E2%80%93citrate%E2%80%93bile%20salts%E2%80%93sucrose%20agar
Thiosulfate–citrate–bile salts–sucrose agar, or TCBS agar, is a type of selective agar culture plate that is used in microbiology laboratories to isolate Vibrio species. TCBS agar is highly selective for the isolation of V. cholerae and V. parahaemolyticus as well as other Vibrio species. Apart from TCBS agar, rapid tests such as immunochromatographic dipsticks are also used in endemic areas such as Asia, Africa and Latin America, although TCBS agar culture is still required for confirmation. This becomes immensely important in cases of gastroenteritis caused by Campylobacter species, whose symptoms mimic those of cholera. Since Campylobacter species produce no yellow growth on TCBS agar, an incorrect diagnosis can be corrected. TCBS agar contains high concentrations of sodium thiosulfate and sodium citrate to inhibit the growth of Enterobacteriaceae. Inhibition of gram-positive bacteria is achieved by the incorporation of ox gall, a naturally occurring substance containing a mixture of bile salts, and of sodium cholate, a pure bile salt. Sodium thiosulfate also serves as a sulfur source, and its presence, in combination with ferric citrate, allows for the easy detection of hydrogen sulfide production. Saccharose (sucrose) is included as a fermentable carbohydrate for metabolism by Vibrio species. The alkaline pH of the medium enhances the recovery of V. cholerae and inhibits the growth of others. Thymol blue and bromothymol blue are included as indicators of pH changes. Formula Approximate amounts per liter:
Yeast extract 5.0 g
Proteose peptone 10.0 g
Sodium thiosulfate 10.0 g
Sodium citrate 10.0 g
Ox gall 5.0 g
Sodium cholate 3.0 g
Saccharose 20.0 g
Sodium chloride 10.0 g
Ferric citrate 1.0 g
Bromothymol blue 0.04 g
Thymol blue 0.04 g
Agar 15.0 g
pH 8.6 ± 0.2 at 25 °C
Expected results Typical colony morphology V. cholerae: Large yellow colonies. V. parahaemolyticus: Colonies with blue to green centers. V. alginolyticus: Large yellow mucoidal colonies. V. harveyi / V. fischeri: Greyish-green to bluish-green colonies which show luminescence in the dark; older colonies fail to show bioluminescence. Proteus / Enterococci: Partial inhibition; if growth occurs, colonies are small and yellow to translucent. Pseudomonas / Aeromonas: Partial inhibition; if growth occurs, colonies are blue. Bacteria that are not Vibrio but produce hydrogen sulfide grow as small black colonies. This is because hydrogen sulphide, produced from thiosulphate (which acts as a source of sulphur and creates the reduced oxygen tension in which the facultatively anaerobic Vibrio can grow), combines with ferric ions from ferric citrate to produce ferric sulphide, which is black. TCBS agar is both selective and differential. It is highly selective for Vibrio species and differential due to the presence of sucrose and the dyes. Sucrose fermentation produces acid, which changes the colour of bromothymol blue or thymol blue. Using two dyes rather than one makes the medium produce an array of yellow, green, or blue, so that differentiating among various Vibrio species is possible. Control of Acanthaster planci TCBS agar has also been used to control outbreaks of the crown-of-thorns sea star (Acanthaster planci), which is a threat to coral reefs. Single injections lead to deaths of sea stars within 24 hours, with symptoms of "discolored and necrotic skin, ulcerations, loss of body turgor, accumulation of colourless mucus on many spines especially at their tip, and loss of spines. 
Blisters on the dorsal integument broke through the skin surface and resulted in large, open sores that exposed the internal organs." This was due to promotion of the growth of naturally occurring Vibrio species to high densities, with subsequent symbiont imbalance. References Bacteriology Biochemistry detection reactions Microbiological media
Thiosulfate–citrate–bile salts–sucrose agar
[ "Chemistry", "Biology" ]
951
[ "Biochemistry detection reactions", "Microbiology equipment", "Biochemical reactions", "Microbiology techniques", "Microbiological media" ]
25,935,140
https://en.wikipedia.org/wiki/Percolation%20critical%20exponents
In the context of the physical and mathematical theory of percolation, a percolation transition is characterized by a set of universal critical exponents, which describe the fractal properties of the percolating medium at large scales and sufficiently close to the transition. The exponents are universal in the sense that they only depend on the type of percolation model and on the space dimension. They are expected to not depend on microscopic details such as the lattice structure, or whether site or bond percolation is considered. This article deals with the critical exponents of random percolation. Percolating systems have a parameter $p$ which controls the occupancy of sites or bonds in the system. At a critical value $p_c$, the mean cluster size goes to infinity and the percolation transition takes place. As one approaches $p_c$, various quantities either diverge or go to a constant value by a power law in $|p - p_c|$, and the exponent of that power law is the critical exponent. While the exponent of that power law is generally the same on both sides of the threshold, the coefficient or "amplitude" is generally different, leading to a universal amplitude ratio. Description Thermodynamic or configurational systems near a critical point or a continuous phase transition become fractal, and the behavior of many quantities in such circumstances is described by universal critical exponents. Percolation theory is a particularly simple and fundamental model in statistical mechanics which has a critical point, and a great deal of work has been done in finding its critical exponents, both theoretically (exact results being limited to two dimensions) and numerically. Critical exponents exist for a variety of observables, but most of them are linked to each other by exponent (or scaling) relations. Only a few of them are independent, and the choice of the fundamental exponents depends on the focus of the study at hand. One choice is the set motivated by the cluster size distribution; another choice is motivated by the structure of the infinite cluster. So-called correction exponents extend these sets; they refer to higher orders of the asymptotic expansion around the critical point. Definitions of exponents Self-similarity at the percolation threshold Percolation clusters become self-similar precisely at the threshold density $p = p_c$ for sufficiently large length scales, entailing the following asymptotic power laws: The fractal dimension $d_f$ relates how the mass of the incipient infinite cluster depends on the radius or another length measure, $M(r) \sim r^{d_f}$ at $p = p_c$ and for large probe sizes. Other notation: the magnetic exponent and co-dimension. The Fisher exponent $\tau$ characterizes the cluster-size distribution $n_s$, which is often determined in computer simulations. The latter counts the number of clusters with a given size (volume) $s$, normalized by the total volume (number of lattice sites). The distribution obeys a power law at the threshold, $n_s \sim s^{-\tau}$ asymptotically as $s \to \infty$. The probability for two sites separated by a distance $r$ to belong to the same cluster decays as $g(r) \sim r^{-(d - 2 + \eta)}$, or equivalently $g(r) \sim r^{-2(d - d_f)}$, for large distances, which introduces the anomalous dimension $\eta = 2 + d - 2 d_f$. The exponent $\Omega$ is connected with the leading correction to scaling, which appears, e.g., in the asymptotic expansion of the cluster-size distribution, $n_s \sim s^{-\tau}\,(1 + C s^{-\Omega})$ for $s \gg 1$. For quantities like the mean cluster size, the corrections are controlled by a corresponding correction exponent. 
The minimum or chemical distance or shortest-path exponent $d_{\min}$ describes how the average minimum (chemical) distance $\ell$ relates to the Euclidean distance $r$, namely $\ell \sim r^{d_{\min}}$. Note that it is more appropriate and practical to measure the average $\langle \ell \rangle$ for a given $r$. The elastic backbone has the same fractal dimension as the shortest path. A related quantity is the spreading dimension $d_\ell$, which describes the scaling of the mass $M$ of a critical cluster within a chemical distance $\ell$ as $M \sim \ell^{d_\ell}$, and is related to the fractal dimension of the cluster by $d_\ell = d_f / d_{\min}$. The chemical distance can also be thought of as a time in an epidemic growth process, which relates $d_{\min}$ to a dynamical exponent. Also related to the minimum dimension is the simultaneous growth of two nearby clusters. The probability that the two clusters coalesce at exactly time $t$ scales as a power of $t$, with an exponent related to $d_{\min}$. The dimension of the backbone, which is defined as the subset of cluster sites carrying the current when a voltage difference is applied between two sites far apart, is the backbone fractal dimension $d_B$. The fractal dimension of the random walk on an infinite incipient percolation cluster is the walk dimension $d_w$. The spectral dimension $d_s$ is defined such that the average number of distinct sites visited in an $N$-step random walk scales as $N^{d_s/2}$. Critical behavior close to the percolation threshold The approach to the percolation threshold is governed by power laws again, which hold asymptotically close to $p_c$: The exponent $\nu$ describes the divergence of the correlation length $\xi$ as the percolation transition is approached, $\xi \sim |p - p_c|^{-\nu}$. The infinite cluster becomes homogeneous at length scales beyond the correlation length; further, it is a measure for the linear extent of the largest finite cluster. Other notation: the thermal exponent $y_t = 1/\nu$. Off criticality, only finite clusters exist up to a largest cluster size $s_{\max}$, and the cluster-size distribution is smoothly cut off by a rapidly decaying function, $n_s \sim s^{-\tau} f(s/s_{\max})$. The exponent $\sigma$ characterizes the divergence of the cutoff parameter, $s_{\max} \sim |p - p_c|^{-1/\sigma}$. From the fractal relation $s_{\max} \sim \xi^{d_f}$, yielding $\sigma = 1/(\nu d_f)$. The density of clusters (number of clusters per site) $n_c$ is continuous at the threshold, but its third derivative goes to infinity as determined by the exponent $\alpha$: the singular part of $n_c$ behaves as $A_\pm |p - p_c|^{2 - \alpha}$, where $A_\pm$ represents the coefficient above and below the transition point. The strength or weight of the percolating cluster, $P$ or $P_\infty$, is the probability that a site belongs to an infinite cluster. $P$ is zero below the transition and is non-analytic. Just above the transition, $P \sim (p - p_c)^\beta$, defining the exponent $\beta$. $P$ plays the role of an order parameter. The divergence of the mean cluster size, $S \sim |p - p_c|^{-\gamma}$, introduces the exponent $\gamma$. The gap exponent $\Delta$ is defined as $\Delta = \beta + \gamma = 1/\sigma$ and represents the "gap" in critical exponent values from one moment of the cluster-size distribution to the next. The conductivity exponent $t$ describes how the electrical conductivity $\Sigma$ goes to zero in a conductor–insulator mixture, $\Sigma \sim (p - p_c)^t$. Surface critical exponents The probability that a point at a surface belongs to the percolating or infinite cluster for $p > p_c$ scales as $(p - p_c)^{\beta_1}$, defining the surface exponent $\beta_1$. The surface fractal dimension follows from $\beta_1$ and $\nu$ in analogy with the bulk relation. Correlations parallel and perpendicular to the surface decay with corresponding exponents $\eta_\parallel$ and $\eta_\perp$. The mean size of finite clusters connected to a site in the surface diverges as $|p - p_c|^{-\gamma_1}$. The mean number of surface sites connected to a site in the surface diverges as $|p - p_c|^{-\gamma_{1,1}}$. Scaling relations Hyperscaling relations (valid below the upper critical dimension $d_c = 6$): $d\nu = 2\beta + \gamma$ and $\tau = 1 + d/d_f$. Relations based on $\sigma$ and $\tau$: $\beta = (\tau - 2)/\sigma$ and $\gamma = (3 - \tau)/\sigma$. Relations based on the fractal dimension: $d_f = d - \beta/\nu = (\beta + \gamma)/\nu$. Conductivity scaling relations Surface scaling relations Exponents for standard percolation At and above the upper critical dimension $d_c = 6$, the exponents take their mean-field values: $\beta = \gamma = 1$, $\nu = 1/2$, $\sigma = 1/2$, $\tau = 5/2$ and $d_f = 4$. Exponents for protected percolation In protected percolation, bonds are removed one at a time only from the percolating cluster. 
Isolated clusters are no longer modified. The protected exponents (conventionally written with primes) are connected to the standard ones by scaling relations. Exponents for standard percolation on a non-trivial planar lattice (weighted planar stochastic lattice (WPSL)) Note that it has been claimed that the numerical values of the exponents of percolation depend only on the dimension of the lattice. However, percolation on the WPSL is an exception: although it is two-dimensional, it does not belong to the same universality class as all the other planar lattices. Exponents for directed percolation Directed percolation (DP) refers to percolation in which the fluid can flow only in one direction along bonds, such as only in the downward direction on a square lattice rotated by 45 degrees. This system is referred to as "1 + 1 dimensional DP", where the two dimensions are thought of as space and time. $\nu_\perp$ and $\nu_\parallel$ are the transverse (perpendicular) and longitudinal (parallel) correlation-length exponents, respectively; the dynamical exponent is their ratio, $z = \nu_\parallel / \nu_\perp$. Another convention is sometimes used for these exponents, defined through a related scaling relation. $\delta$ is the exponent corresponding to the behavior of the survival probability as a function of time: $P(t) \sim t^{-\delta}$. $\theta$ (sometimes called $\eta$) is the exponent corresponding to the behavior of the average number of visited sites at time $t$ (averaged over all samples, including ones that have stopped spreading): $N(t) \sim t^{\theta}$. These exponents satisfy the hyperscaling relation $\theta + 2\delta = d/z$ in $d$ spatial dimensions. The $d$(space)+1(time)-dimensional exponents are given below. Scaling relations for directed percolation Exponents for dynamic percolation For dynamic percolation (epidemic growth of ordinary percolation clusters), the spreading exponents can be expressed in terms of the static percolation exponents together with the chemical-distance exponent $d_{\min}$. See also Critical exponent Percolation theory Percolation threshold Percolation surface critical behavior Conductivity near the percolation threshold Notes References Further reading Percolation theory Critical phenomena Random graphs Critical exponents (phase transitions)
Percolation critical exponents
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,839
[ "Physical phenomena", "Phase transitions", "Physical quantities", "Critical phenomena", "Percolation theory", "Graph theory", "Combinatorics", "Critical exponents (phase transitions)", "Physical constants", "Mathematical relations", "Condensed matter physics", "Random graphs", "Statistical m...
35,830,148
https://en.wikipedia.org/wiki/Condensation%20particle%20counter
A condensation particle counter or CPC is a particle counter that detects and counts aerosol particles by first enlarging them, using the particles as nucleation centers to create droplets in a supersaturated gas. Three techniques have been used to produce the supersaturation: Adiabatic expansion using an expansion chamber. This was the original technique used by John Aitken in 1888. Thermal diffusion. Mixing of hot and cold gases. The most commonly used (and most efficient) method is cooling by thermal diffusion. The most widely used working fluid is n-butanol; in recent years water has also come into use. Condensation particle counters are able to detect particles with dimensions of 2 nm and larger. This is of special importance because particles smaller than about 50 nm are generally undetectable with conventional optical techniques. Usually the supersaturation in the condensation chamber is about 100–200%, even though heterogeneous nucleation (droplet growth on the surface of a suspended solid particle) can occur at supersaturations as small as 1%. The greater vapour content is needed because, according to the Kelvin effect, the equilibrium vapour pressure over a convex surface is greater than over a plane, so a higher vapour content in the air is required to achieve actual supersaturation at the droplet surface. The required supersaturation grows as the particle size decreases; the smallest particle diameter for which condensation can occur at a given saturation level is called the Kelvin diameter. Quantitatively, the Kelvin diameter is given by d_K = 4γM/(ρRT ln S), where γ, M and ρ are the surface tension, molar mass and liquid density of the working fluid, and S is the saturation ratio. The supersaturation level must, however, be kept small enough to prevent homogeneous nucleation (in which liquid molecules collide so often that they form clusters stable enough to ensure further growth), which would produce false counts. This usually starts at about 300% supersaturation. In a diffusional thermal-cooling CPC, in order to ensure a high vapour content, the working liquid is in contact with a hollow block of porous material that is heated. The humidified air then enters the cooler, where nucleation occurs. The temperature difference between the heater and the cooler determines the supersaturation, which in turn determines the minimal size of particles that will be detected (the greater the difference, the smaller the particles that are counted). As proper nucleation conditions occur in the center of the flow, the incoming flow is sometimes divided: most of it is filtered to form a sheath flow, into which the remainder, still containing particles, is injected via a capillary. The more uniform the supersaturation, the sharper the minimum-size cutoff. During the heterogeneous nucleation process in the nucleation chamber, particles grow to 10–12 μm and so are conveniently detected by usual techniques, such as laser nephelometry (measurement of light pulses scattered by the grown droplets). References Meteorological instrumentation and equipment Counting instruments Particle detectors Aerosols Air pollution Aerosol measurement
Condensation particle counter
[ "Chemistry", "Mathematics", "Technology", "Engineering" ]
611
[ "Meteorological instrumentation and equipment", "Counting instruments", "Colloids", "Measuring instruments", "Particle detectors", "Aerosols", "Numeral systems" ]
35,837,025
https://en.wikipedia.org/wiki/Kinetic%20triangulation
A kinetic triangulation data structure is a kinetic data structure that maintains a triangulation of a set of moving points. Maintaining a kinetic triangulation is important for applications that involve motion planning, such as video games, virtual reality, dynamic simulations and robotics. Choosing a triangulation scheme The efficiency of a kinetic data structure is defined based on the ratio of the number of internal events to external events, thus good runtime bounds can sometimes be obtained by choosing to use a triangulation scheme that generates a small number of external events. For simple affine motion of the points, the number of discrete changes to the convex hull is estimated by , thus the number of changes to any triangulation is also lower bounded by . Finding any triangulation scheme that has a near-quadratic bound on the number of discrete changes is an important open problem. Delaunay triangulation The Delaunay triangulation seems like a natural candidate, but a tight worst-case analysis of the number of discrete changes that will occur to the Delaunay triangulation (external events) was considered an open problem until 2015; it has now been bounded to be between and . There is a kinetic data structure that efficiently maintains the Delaunay triangulation of a set of moving points, in which the ratio of the total number of events to the number of external events is . Other triangulations Kaplan et al. developed a randomized triangulation scheme that experiences an expected number of external events, where is the maximum number of times each triple of points can become collinear, , and is the maximum length of a Davenport-Schinzel sequence of order s + 2 on n symbols. Pseudo-triangulations There is a kinetic data structure (due to Agarwal et al.) which maintains a pseudo-triangulation in events total. All events are external and require time to process. References Kinetic data structures Triangulation (geometry)
Kinetic triangulation
[ "Mathematics" ]
400
[ "Triangulation (geometry)", "Planes (geometry)", "Planar graphs" ]
2,900,052
https://en.wikipedia.org/wiki/Calorimeter%20%28particle%20physics%29
In experimental particle physics, a calorimeter is a type of detector that measures the energy of particles. Particles enter the calorimeter and initiate a particle shower in which their energy is deposited in the calorimeter, collected, and measured. The energy may be measured in its entirety, requiring total containment of the particle shower, or it may be sampled. Typically, calorimeters are segmented transversely to provide information about the direction of the particle or particles, as well as the energy deposited, and longitudinal segmentation can provide information about the identity of the particle based on the shape of the shower as it develops. Calorimeter design is an active area of research in particle physics. Types of calorimeters Electromagnetic versus hadronic An electromagnetic calorimeter (ECAL) measures the energy of particles that interact primarily via the electromagnetic interaction, such as electrons, positrons and photons. A hadronic calorimeter (HCAL) measures the energy of particles that interact via the strong nuclear force, such as protons, neutrons and charged pions. (See types of particle showers for the differences between the two.) Calorimeters are characterized by the radiation length (for ECALs) and nuclear interaction length (for HCALs) of their active material. ECALs tend to be 15–30 radiation lengths deep, while HCALs are 5–8 nuclear interaction lengths deep. Homogeneous versus sampling An ECAL or an HCAL can be either a sampling calorimeter or a homogeneous calorimeter. Calorimeters in high-energy physics experiments Most particle physics experiments use some form of calorimetry. Often it is the most practical way to detect and measure neutral particles from an interaction. In addition, calorimeters are necessary for calculating "missing energy", which can be attributed to particles that rarely interact with matter and escape the detector, such as neutrinos. In most experiments the calorimeter works in conjunction with other components, like a central tracker and a muon detector. All the detector components work together to achieve the objective of reconstructing a physics event. See also Calorimeter (for other uses of the term) Total absorption spectroscopy, a technique whose main measuring device is a calorimeter References External links Calorimeter section of The Particle Detector BriefBook Explanation of Calorimeters on Quantumdiaries.org Particle detectors
Calorimeter (particle physics)
[ "Technology", "Engineering" ]
430
[ "Particle detectors", "Measuring instruments" ]
2,900,805
https://en.wikipedia.org/wiki/Octahedral%20symmetry
A regular octahedron has 24 rotational (or orientation-preserving) symmetries, and 48 symmetries altogether. These include transformations that combine a reflection and a rotation. A cube has the same set of symmetries, since it is the polyhedron that is dual to an octahedron. The group of orientation-preserving symmetries is S4, the symmetric group or the group of permutations of four objects, since there is exactly one such symmetry for each permutation of the four diagonals of the cube. Details Chiral and full (or achiral) octahedral symmetry are the discrete point symmetries (or equivalently, symmetries on the sphere) with the largest symmetry groups compatible with translational symmetry. They are among the crystallographic point groups of the cubic crystal system. As the hyperoctahedral group of dimension 3, the full octahedral group is the wreath product $\mathbb{Z}_2 \wr S_3$, and a natural way to identify its elements is as pairs consisting of a permutation and a sign vector. But as it is also the direct product $S_4 \times \mathbb{Z}_2$, one can simply label the elements of the tetrahedral subgroup Td and mark their inversions with a prime. So e.g. the identity is represented as 0 and the inversion as 0′, and likewise each remaining element is paired with its primed inversion (6 and 6′, and so on). A rotoreflection is a combination of rotation and reflection. Chiral octahedral symmetry O, 432, or [4,3]+, of order 24, is chiral octahedral symmetry or rotational octahedral symmetry. This group is like chiral tetrahedral symmetry T, but the C2 axes are now C4 axes, and additionally there are 6 C2 axes, through the midpoints of the edges of the cube. Td and O are isomorphic as abstract groups: they both correspond to S4, the symmetric group on 4 objects. Td is the union of T and the set obtained by combining each element of O \ T with inversion. O is the rotation group of the cube and the regular octahedron. Full octahedral symmetry Oh, *432, [4,3], or m3m, of order 48 – achiral octahedral symmetry or full octahedral symmetry. This group has the same rotation axes as O, but with mirror planes, comprising both the mirror planes of Td and Th. This group is isomorphic to $S_4 \times \mathbb{Z}_2$, and is the full symmetry group of the cube and octahedron. It is the hyperoctahedral group for dimension 3. See also the isometries of the cube. With the 4-fold axes as coordinate axes, a fundamental domain of Oh is given by 0 ≤ x ≤ y ≤ z. An object with this symmetry is characterized by the part of the object in the fundamental domain, for example the cube is given by $z = 1$, and the octahedron by $x + y + z = 1$ (or the corresponding inequalities, to get the solid instead of the surface). $ax + by + cz = 1$ with $0 < a < b < c$ gives a polyhedron with 48 faces, e.g. the disdyakis dodecahedron. Faces are 8-by-8 combined to larger faces for $a = b = 0$ (cube) and 6-by-6 for $a = b = c$ (octahedron). The 9 mirror planes of full octahedral symmetry can be divided into two subgroups of 3 and 6, representing two orthogonal subsymmetries: D2h and Td. D2h symmetry can be doubled to D4h by restoring 2 mirrors from one of three orientations. Rotation matrices Take the set of all 3×3 permutation matrices and assign a + or − sign to each of the three 1s. There are $3! = 6$ permutations and $2^3 = 8$ sign combinations for a total of 48 matrices, giving the full octahedral group (a brute-force enumeration verifying these counts is sketched after the isometries list below). 24 of these matrices have a determinant of +1; these are the rotation matrices of the chiral octahedral group. The other 24 matrices have a determinant of −1 and correspond to a reflection or inversion. Three reflectional generator matrices are needed for octahedral symmetry, which represent the three mirrors of a Coxeter–Dynkin diagram. 
The product of the reflections produces 3 rotational generators. Subgroups of full octahedral symmetry The isometries of the cube The cube has 48 isometries (symmetry elements), forming the symmetry group Oh, isomorphic to S4 × Z2. They can be categorized as follows: O (the identity and 23 proper rotations) with the following conjugacy classes (in parentheses are given the permutations of the body diagonals and the unit quaternion representation): identity (identity; 1) rotation about an axis from the center of a face to the center of the opposite face by an angle of 90°: 3 axes, 2 per axis, together 6 ((1 2 3 4), etc.; $(1 \pm i)/\sqrt{2}$, etc.) ditto by an angle of 180°: 3 axes, 1 per axis, together 3 ((1 2) (3 4), etc.; i, j, k) rotation about an axis from the center of an edge to the center of the opposite edge by an angle of 180°: 6 axes, 1 per axis, together 6 ((1 2), etc.; $(i \pm j)/\sqrt{2}$, etc.) rotation about a body diagonal by an angle of 120°: 4 axes, 2 per axis, together 8 ((1 2 3), etc.; (1 ± i ± j ± k)/2) The same with inversion (x is mapped to −x) (also 24 isometries). Note that rotation by an angle of 180° about an axis combined with inversion is just reflection in the perpendicular plane. The combination of inversion and rotation about a body diagonal by an angle of 120° is rotation about the body diagonal by an angle of 60°, combined with reflection in the perpendicular plane (the rotation itself does not map the cube to itself; the intersection of the reflection plane with the cube is a regular hexagon). An isometry of the cube can be identified in various ways: by the faces that three given adjacent faces (say 1, 2, and 3 on a die) are mapped to by the image of a cube with on one face a non-symmetric marking: the face with the marking, whether it is normal or a mirror image, and the orientation by a permutation of the four body diagonals (each of the 24 permutations is possible), combined with a toggle for inversion of the cube, or not For cubes with colors or markings (like dice have), the symmetry group is a subgroup of Oh. Examples: C4v, [4], (*422): if one face has a different color (or two opposite faces have colors different from each other and from the other four), the cube has 8 isometries, like a square has in 2D. D2h, [2,2], (*222): if opposite faces have the same colors, different for each set of two, the cube has 8 isometries, like a cuboid. D4h, [4,2], (*422): if two opposite faces have the same color, and all other faces have one different color, the cube has 16 isometries, like a square prism (square box). C2v, [2], (*22): if two adjacent faces have the same color, and all other faces have one different color, the cube has 4 isometries. if three faces, of which two opposite to each other, have one color and the other three one other color, the cube has 4 isometries. if two opposite faces have the same color, and two other opposite faces also, and the last two have different colors, the cube has 4 isometries, like a piece of blank paper with a shape with a mirror symmetry. Cs, [ ], (*): if two adjacent faces have colors different from each other, and the other four have a third color, the cube has 2 isometries. if two opposite faces have the same color, and all other faces have different colors, the cube has 2 isometries, like an asymmetric piece of blank paper. C3v, [3], (*33): if three faces, of which none opposite to each other, have one color and the other three one other color, the cube has 6 isometries. 
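The counting argument in the Rotation matrices section above is easy to verify by brute force. The following sketch (plain Python, written for illustration only) enumerates the 48 signed permutation matrices and confirms that exactly 24 of them are proper rotations:

```python
from itertools import permutations, product

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

matrices = []
for perm in permutations(range(3)):           # column of the 1 in each row
    for signs in product((1, -1), repeat=3):  # a +/- sign for each 1
        m = [[signs[r] if c == perm[r] else 0 for c in range(3)]
             for r in range(3)]
        matrices.append(m)

rotations = [m for m in matrices if det3(m) == 1]
print(len(matrices), len(rotations))  # prints: 48 24
```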
For some larger subgroups a cube with that group as symmetry group is not possible with just coloring whole faces. One has to draw some pattern on the faces. Examples: D2d, [2+,4], (2*2): if one face has a line segment dividing the face into two equal rectangles, and the opposite has the same in perpendicular direction, the cube has 8 isometries; there is a symmetry plane and 2-fold rotational symmetry with an axis at an angle of 45° to that plane, and, as a result, there is also another symmetry plane perpendicular to the first, and another axis of 2-fold rotational symmetry perpendicular to the first. Th, [3+,4], (3*2): if each face has a line segment dividing the face into two equal rectangles, such that the line segments of adjacent faces do not meet at the edge, the cube has 24 isometries: the even permutations of the body diagonals and the same combined with inversion (x is mapped to −x). Td, [3,3], (*332): if the cube consists of eight smaller cubes, four white and four black, put together alternatingly in all three standard directions, the cube has again 24 isometries: this time the even permutations of the body diagonals and the inverses of the other proper rotations. T, [3,3]+, (332): if each face has the same pattern with 2-fold rotational symmetry, say the letter S, such that at all edges a top of one S meets a side of the other S, the cube has 12 isometries: the even permutations of the body diagonals. The full symmetry of the cube, Oh, [4,3], (*432), is preserved if and only if all faces have the same pattern such that the full symmetry of the square is preserved, with for the square a symmetry group, Dih4, [4], of order 8. The full symmetry of the cube under proper rotations, O, [4,3]+, (432), is preserved if and only if all faces have the same pattern with 4-fold rotational symmetry, Z4, [4]+. Octahedral symmetry of the Bolza surface In Riemann surface theory, the Bolza surface, sometimes called the Bolza curve, is obtained as the ramified double cover of the Riemann sphere, with ramification locus at the set of vertices of the regular inscribed octahedron. Its automorphism group includes the hyperelliptic involution which flips the two sheets of the cover. The quotient by the order 2 subgroup generated by the hyperelliptic involution yields precisely the group of symmetries of the octahedron. Among the many remarkable properties of the Bolza surface is the fact that it maximizes the systole among all genus 2 hyperbolic surfaces. Solids with octahedral chiral symmetry Solids with full octahedral symmetry See also Tetrahedral symmetry Icosahedral symmetry Binary octahedral group Hyperoctahedral group References Peter R. Cromwell, Polyhedra (1997), p. 295 The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, N.W. Johnson: Geometries and Transformations, (2018) Chapter 11: Finite symmetry groups, 11.5 Spherical Coxeter groups External links Groupprops: Direct product of S4 and Z2 Finite groups Rotational symmetry
Octahedral symmetry
[ "Physics", "Mathematics" ]
2,570
[ "Mathematical structures", "Finite groups", "Algebraic structures", "Symmetry", "Rotational symmetry" ]
2,900,960
https://en.wikipedia.org/wiki/Insulin%20shock%20therapy
Insulin shock therapy or insulin coma therapy was a form of psychiatric treatment in which patients were repeatedly injected with large doses of insulin in order to produce daily comas over several weeks. It was introduced in 1927 by Austrian-American psychiatrist Manfred Sakel and used extensively in the 1940s and 1950s, mainly for schizophrenia, before falling out of favour and being replaced by neuroleptic drugs in the 1960s. It was one of a number of physical treatments introduced into psychiatry in the first four decades of the 20th century. These included the convulsive therapies (cardiazol/metrazol therapy and electroconvulsive therapy), deep sleep therapy, and psychosurgery. Insulin coma therapy and the convulsive therapies are collectively known as the shock therapies. Origins In 1927, Sakel, who had recently qualified as a medical doctor in Vienna and was working in a psychiatric clinic in Berlin, began to use low (sub-coma) doses of insulin to treat drug addicts and psychopaths, and when one of the patients experienced improved mental clarity after having slipped into an accidental coma, Sakel reasoned the treatment might work for mentally ill patients. Having returned to Vienna, he treated schizophrenic patients with larger doses of insulin in order to deliberately produce coma and sometimes convulsions. Sakel made his results public in 1933, and his methods were soon taken up by other psychiatrists. Joseph Wortis, after seeing Sakel practice it in 1935, introduced it to the US. British psychiatrists from the Board of Control visited Vienna in 1935 and 1936, and by 1938, 31 hospitals in England and Wales had insulin treatment units. In 1936, Sakel moved to New York and promoted the use of insulin coma treatment in US psychiatric hospitals. By the late 1940s, the majority of psychiatric hospitals in the US were using insulin coma treatment. Technique Insulin coma therapy was a labour-intensive treatment that required trained staff and a special unit. Patients, who were almost invariably diagnosed with schizophrenia, were selected on the basis of having a good prognosis and the physical strength to withstand an arduous treatment. There were no standard guidelines for treatment. Different hospitals and psychiatrists developed their own protocols. Typically, injections were administered six days a week for about two months. The daily insulin dose was gradually increased to 100–150 units (1 unit = 34.7 μg) until comas were produced, at which point the dose would be levelled out. Occasionally doses of up to 450 units were used. After about 50 or 60 comas, or earlier if the psychiatrist thought that maximum benefit had been achieved, the dose of insulin was rapidly reduced before treatment was stopped. Courses of up to 2 years have been documented. After the insulin injection patients would experience various symptoms of decreased blood glucose: flushing, pallor, perspiration, salivation, drowsiness or restlessness. Sopor and coma—if the dose was high enough—would follow. Each coma would last for up to an hour and be terminated by intravenous glucose or via naso-gastric tube. Seizures occurred before or during the coma. Many would be tossing, rolling, moaning, twitching, spasming or thrashing around. Some psychiatrists regarded seizures as therapeutic and patients were sometimes also given electroconvulsive therapy or cardiazol/metrazol convulsive therapy during the coma, or on the day of the week when they didn't have insulin treatment. 
When they were not in a coma, insulin coma patients were kept together in a group and given special treatment and attention. One handbook for psychiatric nurses, written by British psychiatrist Eric Cunningham Dax, instructs nurses to take their insulin patients out walking and occupy them with games and competitions, flower-picking and map-reading, etc. Patients required continuous supervision as there was a danger of hypoglycemic aftershocks after the coma. In "modified insulin therapy", used in the treatment of neurosis, patients were given lower (sub-coma) doses of insulin. Effects A few psychiatrists (including Sakel) claimed success rates for insulin coma therapy of over 80% in the treatment of schizophrenia. A few others argued that it merely accelerated remission in those patients who were undergoing remission anyway. The consensus at the time was somewhere in between, claiming a success rate of about 50% in patients who had been ill for less than a year (about double the spontaneous remission rate) with no influence on relapse. Sakel suggested the therapy worked by "causing an intensification of the tonus of the parasympathetic end of the autonomic nervous system, by blockading the nerve cell, and by strengthening the anabolic force which induces the restoration of the normal function of the nerve cell and the recovery of the patient." The shock therapies in general had developed on the erroneous premise that epilepsy and schizophrenia rarely occurred in the same patient. The premise was supported by neuropathologic studies that found a dearth of glia in the brains of schizophrenic patients and a surplus of glia in epileptic brains. These observations led the Hungarian neuropsychiatrist Ladislas Meduna to induce seizures in schizophrenic patients with injections of camphor, soon replaced by pentylenetetrazol (Metrazole). Another theory was that patients were somehow "jolted" out of their mental illness. The hypoglycemia (pathologically low glucose levels) that resulted from insulin coma therapy made patients extremely restless, sweaty, and liable to further convulsions and "after-shocks". In addition, patients invariably emerged from the long course of treatment "grossly obese", probably due to glucose rescue-induced glycogen storage disease. The most severe risks of insulin coma therapy were death and brain damage, resulting from irreversible or prolonged coma respectively. A study at the time claimed that many of the cases of brain damage were actually therapeutic improvement because they showed "loss of tension and hostility". Mortality risk estimates varied from about 1% to 4.9%. Respected singer-songwriter Townes Van Zandt was said to have lost much of his long-term memory from this treatment, performed on him for bipolar disorder, preceding a life of substance abuse and depression. Decline Insulin coma therapy was used in most hospitals in the US and the UK during the 1940s and 1950s. The numbers of patients were restricted by the requirement for intensive medical and nursing supervision and the length of time it took to complete a course of treatment. For example, at one typical large British psychiatric hospital, Severalls Hospital in Essex, insulin coma treatment was given to 39 patients in 1956. In the same year, 18 patients received modified insulin treatment, while 432 patients were given electroconvulsive treatment. 
In 1953, British psychiatrist Harold Bourne published a paper entitled "The insulin myth" in the Lancet, in which he argued that there was no sound basis for believing that insulin coma therapy counteracted the schizophrenic process in a specific way. If treatment worked, he said, it was because patients were chosen for their good prognosis and were given special treatment: "insulin patients tend to be an elite group sharing common privileges and perils". Prior to publishing "The insulin myth" in The Lancet, Bourne had tried to submit the article to the Journal of Mental Science; after a 12-month delay, the Journal informed Bourne they had rejected the article, telling him to "get more experience". In 1957, when insulin coma treatment use was declining, The Lancet published the results of a randomized, controlled trial in which patients were either given insulin coma treatment or identical treatment but with unconsciousness produced by barbiturates. There was no difference in outcome between the groups and the authors concluded that, whatever the benefits of the coma regimen, insulin was not the specific therapeutic agent. In 1958, American neuropsychiatrist Max Fink published in the Journal of the American Medical Association the results of a random controlled comparison in 60 patients treated with 50 iatrogenic insulin-induced comas or chlorpromazine in doses from 300 mg to 2000 mg/day. The results were essentially the same in relief and discharge ratings but chlorpromazine was safer with fewer side-effects, easier to administer, and better suited to long-term care. In 1958, Bourne published a paper on increasing disillusionment in the psychiatric literature about insulin coma therapy for schizophrenia. He suggested there were several reasons it had received almost universal uncritical acceptance by reviews and textbooks for several decades despite the occasional disquieting negative finding, including that, by the 1930s when it all started, schizophrenics were considered inherently unable to engage in psychotherapy, and insulin coma therapy "provided a personal approach to the schizophrenic, suitably disguised as a physical treatment so as to slip past the prejudices of the age." By the 1970s, insulin shock therapy had mostly fallen out of use in the United States, though was still practiced in some hospitals. Its use may have continued longer in China, India, and the Soviet Union. Recent writing Recent articles about insulin coma treatment have attempted to explain why it was given such uncritical acceptance. In the US, Deborah Doroshow wrote that insulin coma therapy secured its foothold in psychiatry not because of scientific evidence or knowledge of any mechanism of therapeutic action, but due to the impressions it made on the minds of the medical practitioners within the local world in which it was administered and the dramatic recoveries observed in some patients. Today, she writes, those who were involved are often ashamed, recalling it as unscientific and inhumane. Administering insulin coma therapy made psychiatry seem a more legitimate medical field. Harold Bourne, who questioned the treatment at the time, said: "It meant that psychiatrists had something to do. It made them feel like real doctors instead of just institutional attendants". One retired psychiatrist who was interviewed by Doroshow "described being won over because his patients were so sick and alternative treatments did not exist". 
Doroshow argues that "psychiatrists used complications to exert their practical and intellectual expertise in a hospital setting" and that collective risk-taking established "especially tight bonds among unit staff members". She finds it ironic that psychiatrists "who were willing to take large therapeutic risks were extremely careful in their handling of adverse effects". Psychiatrists interviewed by Doroshow recalled how insulin coma patients were provided with various routines and recreational and group-therapeutic activities, to a much greater extent than most psychiatric patients. Insulin coma specialists often chose patients whose problems were the most recent and who had the best prognosis; in one case discussed by Doroshow a patient had already started to show improvement before insulin coma treatment, and after the treatment denied that it had helped, but the psychiatrists nevertheless argued that it had. A Beautiful Mind In 1959, the 1994 Nobel Prize winner in Economics, John Nash, was diagnosed with schizophrenia and was initially treated at McLean Hospital. When he relapsed, he was admitted to Trenton Psychiatric Hospital in New Jersey. His associates at Princeton University pleaded with the hospital director to have Nash admitted to the insulin coma unit, recognizing that it was better staffed than other hospital units. He responded to treatment and was subsequently medicated with neuroleptics. Nash's life story was presented in the film A Beautiful Mind, which depicted the seizures associated with his treatments. In a review of the Nash history, Fink ascribed the success of coma treatments to the 10% of associated seizures, noting that physicians often augmented the comas by convulsions induced by ECT. He envisioned insulin coma treatment as a weak form of convulsive therapy. Other explanations In the UK, psychiatrist Kingsley Jones sees the support of the Board of Control as important in persuading psychiatrists to use insulin coma therapy. The treatment then acquired the privileged status of a standard procedure, protected by professional organizational interests. He also notes that it has been suggested that the Mental Treatment Act 1930 encouraged psychiatrists to experiment with physical treatments. British lawyer Phil Fennell notes that patients "must have been terrified" by the insulin coma therapy procedures and the effects of the massive overdoses of insulin, and were often rendered more compliant and easier to manage after a course. Leonard Roy Frank, an American activist from the psychiatric survivors movement who underwent 50 forced insulin coma treatments combined with ECT, described the treatment as "the most devastating, painful and humiliating experience of my life", a "flat-out atrocity" glossed over by psychiatric euphemism, and a violation of basic human rights. In 2013, French physician and novelist Laurent Seksik wrote a historical novel about the tragic life of Eduard Einstein: Le cas Eduard Einstein. He related the encounter between Dr Sakel and Mileva Maric, Albert Einstein's first wife (and Eduard's mother), and the way Sakel's therapy had been given to Eduard, who had schizophrenia. Representation in media Like many new medical treatments for diseases previously considered incurable, depictions of insulin coma therapy in the media were initially favorable. In the 1940 film Dr. Kildare's Strange Case, young Kildare uses the new "insulin shock cure for schizophrenia" to bring a man back from insanity.
The film dramatically shows a five-hour treatment that ends with a patient eating jelly sandwiches and reconnecting with his wife. Other films of the era began to show a more sinister approach, beginning with the 1946 film Shock, in which actor Vincent Price plays a doctor who plots to murder a patient with an overdose of insulin in order to keep his own crime a secret. More recent films include Frances (1982), which depicts actress Frances Farmer undergoing insulin coma treatment, and A Beautiful Mind, which depicted genius John Nash undergoing insulin treatment. In an episode of the medical drama House M.D., House induces insulin shock in himself to try to make his hallucinations disappear. Sylvia Plath's The Bell Jar refers to insulin coma therapy in chapter 15. In Kelly Rimmer's book, The German Wife, the character Henry Davis undergoes insulin shock therapy to treat 'combat fatigue'. See also Deep sleep therapy Electroconvulsive therapy Manfred Sakel References "Under My Skin". House M.D., season 5, episode 23 (2009). Plot synopsis at IMDb. External links The History of Shock Therapy in Psychiatry Drug Treatment in Modern Psychiatry 1944 textbook extract on 'The Insulin Treatment of Schizophrenia' Insulin Coma Therapy by the head of the insulin coma unit at the Hillside Hospital in New York from 1952 to 1958 Shock Treatment - The Killing of Susan Kelly A poem by insulin/electro shock survivor Dorothy Dundas Psychopharmacology Insulin delivery Schizophrenia Physical psychiatric treatments History of psychiatry Experimental medical treatments Coma 1927 introductions
Insulin shock therapy
[ "Chemistry" ]
3,084
[ "Psychopharmacology", "Pharmacology" ]
2,900,961
https://en.wikipedia.org/wiki/Total%20body%20irradiation
Total body irradiation (TBI) is a form of radiotherapy used primarily as part of the preparative regimen for haematopoietic stem cell (or bone marrow) transplantation. As the name implies, TBI involves irradiation of the entire body, though in modern practice the lungs are often partially shielded to lower the risk of radiation-induced lung injury. Total body irradiation in the setting of bone marrow transplantation serves to destroy or suppress the recipient's immune system, preventing immunologic rejection of transplanted donor bone marrow or blood stem cells. Additionally, high doses of total body irradiation can eradicate residual cancer cells in the transplant recipient, increasing the likelihood that the transplant will be successful. Dosage Doses of total body irradiation used in bone marrow transplantation typically range from 10 to >12 Gy. For reference, an unfractionated (i.e. single exposure) dose of 4.5 Gy is fatal in 50% of exposed individuals without aggressive medical care. This 10–12 Gy dose is typically delivered in multiple fractions to minimise toxicity to the patient. Early research in bone marrow transplantation by E. Donnall Thomas and colleagues demonstrated that this process of splitting TBI into multiple smaller doses resulted in lower toxicity and better outcomes than delivering a single, large dose. The time interval between fractions allows normal tissues time to repair some of the damage caused. However, the dosing is still high enough that the ultimate result is the destruction of both the patient's bone marrow (allowing donor marrow to engraft) and any residual cancer cells. Non-myeloablative bone marrow transplantation uses lower doses of total body irradiation, typically about 2 Gy, which do not destroy the host bone marrow but do suppress the host immune system sufficiently to promote donor engraftment. Usage in other cancers In addition to its use in bone marrow transplantation, total body irradiation has been explored as a treatment modality for high-risk Ewing sarcoma. However, subsequent findings suggest that TBI in this setting causes toxicity without improving disease control, and TBI is not currently used in the treatment of Ewing sarcoma outside of clinical trials. Fertility Total body irradiation results in infertility in most cases, with recovery of gonadal function occurring in 10–14% of females. The number of pregnancies observed after hematopoietic stem cell transplantation involving such a procedure is lower than 2%. Fertility preservation measures mainly include cryopreservation of ovarian tissue, embryos or oocytes. Gonadal function has been reported to recover in less than 20% of males after TBI. References Transplantation medicine Radiobiology
Total body irradiation
[ "Chemistry", "Biology" ]
568
[ "Radiobiology", "Radioactivity" ]
2,901,658
https://en.wikipedia.org/wiki/Cryomodule
A cryomodule is a section of a modern particle accelerator composed of superconducting RF (SRF) acceleration cavities, which need very low operating temperatures (often around 2 kelvin). The cryomodule is a complex, state-of-the-art supercooled component in which particle beams are accelerated for scientific research. The superconducting cavities are cooled with liquid helium. A cryomodule section of an accelerator is composed of superconducting cavities that accelerate the beam, also including a magnetic lattice that provides focusing and steering. Design considerations SRF cavities tend to be thin-walled structures immersed in a bath of liquid helium having temperatures of 1.6 K to 4.5 K. Careful engineering is required to insulate the helium bath from the room-temperature external environment. This is accomplished by: A vacuum chamber surrounding the cold components to eliminate convective heat transfer by gases. Multi-layer insulation wrapped around cold components. This insulation is composed of dozens of alternating layers of aluminized Mylar and thin fiberglass sheet, which reflects infrared radiation that shines through the vacuum insulation from the 300 K exterior walls. Low thermal conductivity mechanical connections between the cold mass and the room temperature vacuum vessel. These connections are required, for example, to support the mass of the helium vessel inside the vacuum vessel and to connect the apertures in the SRF cavity to the accelerator beamline. Both types of connections transition from internal cryogenic temperatures to room temperature at the vacuum vessel boundary. The thermal conductivity of these parts is minimized by having a small cross-sectional area and being composed of low thermal conductivity material, such as stainless steel for the vacuum beampipe and fiber reinforced epoxies (G10) for mechanical support. The vacuum beampipe also requires good electrical conductivity on its interior surface to propagate the image currents of the beam, which is accomplished by about 100 μm of copper plating on the interior surface. References External links Cryomodules at Jefferson lab Cryomodule images A Cryomodule performance (PDF) Accelerator physics
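The support-design trade-off described above (small cross-section, low-conductivity material, long path) can be illustrated with a back-of-the-envelope conduction estimate. The sketch below is our own, not taken from any cryomodule specification; it assumes a constant thermal conductivity, whereas real cryostat design integrates the temperature-dependent k(T) between the two temperatures:

```python
# Illustrative estimate of conductive heat leak through a cryomodule
# support, using Fourier's law Q = k * A * (T_hot - T_cold) / L.
# Constant conductivity is an assumption; real designs integrate k(T).

def heat_leak_watts(k_w_per_m_k, area_m2, t_hot_k, t_cold_k, length_m):
    """Steady-state conductive heat load through a support member."""
    return k_w_per_m_k * area_m2 * (t_hot_k - t_cold_k) / length_m

# Assumed values: a G10 rod spanning from the 300 K vacuum vessel
# down to the 2 K helium vessel.
k_g10 = 0.5    # W/(m*K), rough order-of-magnitude value for G10
area = 1e-4    # m^2 (a 1 cm^2 cross-section)
length = 0.3   # m

print(f"heat leak ~ {heat_leak_watts(k_g10, area, 300.0, 2.0, length):.2f} W")
```

Even this crude figure makes the design pressure clear: at 2 K every stray watt must be removed by the helium refrigeration plant, so halving the cross-section or doubling the path length pays off directly.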
Cryomodule
[ "Physics" ]
437
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
2,901,975
https://en.wikipedia.org/wiki/Calorimeter%20constant
A calorimeter constant (denoted Ccal) is a constant that quantifies the heat capacity of a calorimeter. It may be calculated by applying a known amount of heat to the calorimeter and measuring the calorimeter's corresponding change in temperature. In SI units, the calorimeter constant is then calculated by dividing the change in enthalpy (ΔH) in joules by the change in temperature (ΔT) in kelvins or degrees Celsius: Ccal = ΔH / ΔT. The calorimeter constant is usually presented in units of joules per degree Celsius (J/°C) or joules per kelvin (J/K). Every calorimeter has a unique calorimeter constant. Uses The calorimeter constants are used in constant pressure calorimetry to calculate the amount of heat required to achieve a certain rise in the temperature of the calorimeter's contents. Example To determine the change in enthalpy in a neutralization reaction (ΔHneutralization), a known amount of basic solution may be placed in a calorimeter, and the temperature of this solution alone recorded. Then, a known amount of acidic solution may be added and the change in temperature measured using a thermometer. The difference in temperature (ΔT, in units K or °C) may be calculated by subtracting the initial temperature from the final temperature. The enthalpy of neutralization ΔHneutralization may then be calculated according to the following equation: ΔHneutralization = Ccal × ΔT. Regardless of the specific chemical process, with a known calorimeter constant and a known change in temperature the heat added to the system may be calculated by multiplying the calorimeter constant by that change in temperature. See also Thermodynamics References Calorimetry Thermochemistry
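The two steps just described, calibration followed by application, amount to one division and one multiplication. A minimal numerical sketch, with all values invented purely for illustration:

```python
# Step 1: calibration -- apply a known amount of heat and measure the
# temperature rise to obtain the calorimeter constant Ccal = dH / dT.
known_heat_J = 500.0      # J, assumed known (e.g., electrical) heat input
dT_calibration = 2.0      # K, measured temperature rise

C_cal = known_heat_J / dT_calibration       # J/K, calorimeter constant
print(f"C_cal = {C_cal:.1f} J/K")

# Step 2: application -- a neutralization reaction raises the temperature
# by a measured amount; the heat released follows from C_cal * dT.
dT_reaction = 3.4                           # K, measured
q_released = C_cal * dT_reaction            # J
print(f"heat released = {q_released:.0f} J")
```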
Calorimeter constant
[ "Chemistry" ]
368
[ "Thermochemistry" ]
2,902,438
https://en.wikipedia.org/wiki/Plectasin
Plectasin is an antibiotic protein from the mushroom Pseudoplectania nigrella. It was initially discovered in 2005 and commercialised by Novozymes. Plectasin belongs to the antimicrobial peptide class called fungal defensins, a class that is also found in invertebrates such as flies and mussels. Clinical trials Pre-clinical tests in mice have shown promising results: multiresistant bacteria appear to develop resistance to plectasin only with difficulty. The peptide acts by directly binding the bacterial cell-wall precursor Lipid II in a supramolecular complex. At the end of 2008, Novozymes signed a global licensing agreement with Sanofi-Aventis for the further development and marketing of NZ2114, a derivative of plectasin, as a treatment for gram-positive bacterial infections, e.g. Streptococcus and Staphylococcus strains that are resistant to existing antibiotics. References Antibiotics Peptides Defensins
Plectasin
[ "Chemistry", "Biology" ]
204
[ "Biomolecules by chemical classification", "Biotechnology products", "Antibiotics", "Molecular biology", "Biocides", "Peptides" ]
2,903,138
https://en.wikipedia.org/wiki/Formula%20game
A formula game is an artificial game represented by a fully quantified Boolean formula. One player (E) has the goal of choosing values so as to make the formula true, and selects values for the variables that are existentially quantified with ∃. The opposing player (A) has the goal of making the formula false, and selects values for the variables that are universally quantified with ∀. The players take turns according to the order of the quantifiers, each assigning a value to the next bound variable in the original formula. Once all variables have been assigned values, Player E wins if the resulting expression is true. In computational complexity theory, the language FORMULA-GAME is defined as the set of all fully quantified Boolean formulas φ such that Player E has a winning strategy in the game represented by φ. FORMULA-GAME is PSPACE-complete because it is exactly the same decision problem as True quantified Boolean formula. Player E has a winning strategy for φ exactly when φ is true; that is, when, no matter what choices Player A makes, Player E can always choose values that make the formula true. References Sipser, Michael. (2006). Introduction to the Theory of Computation. Boston: Thomson Course Technology. Satisfiability problems Boolean algebra PSPACE-complete problems
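A minimal sketch of the game in code. The recursive evaluator below and the example formula are our own illustration (the article's original example formula did not survive extraction); the recursion simply mirrors the turn order of the quantifiers:

```python
# Decide FORMULA-GAME for a small fully quantified Boolean formula by
# brute-force recursion over the quantifier prefix.

def e_wins(quantifiers, matrix, assignment=()):
    """Return True iff Player E has a winning strategy for the
    fully quantified Boolean formula Q1 x1 ... Qn xn [matrix]."""
    if len(assignment) == len(quantifiers):
        return matrix(*assignment)          # all variables set: evaluate
    q = quantifiers[len(assignment)]
    outcomes = (e_wins(quantifiers, matrix, assignment + (v,))
                for v in (False, True))
    # E (exists) needs some move that wins; A (forall) forces E to win
    # under every move.
    return any(outcomes) if q == 'E' else all(outcomes)

# Illustrative instance: ∃x ∀y ∃z [(x ∨ y) ∧ (¬y ∨ z)], which is true,
# so Player E wins.
phi = lambda x, y, z: (x or y) and ((not y) or z)
print(e_wins(['E', 'A', 'E'], phi))   # True
```

Note that this exhaustive search uses time exponential in the number of variables but only space linear in it, which is the intuition behind membership in PSPACE.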
Formula game
[ "Mathematics", "Technology" ]
256
[ "Boolean algebra", "Automated theorem proving", "PSPACE-complete problems", "Mathematical logic", "Computational problems", "Computer science stubs", "Fields of abstract algebra", "Computer science", "Computing stubs", "Mathematical problems", "Satisfiability problems" ]
24,510,339
https://en.wikipedia.org/wiki/Antitarget
In pharmacology, an antitarget (or off-target) is a receptor, enzyme, or other biological target that, when affected by a drug, causes undesirable side-effects. During drug design and development, it is important for pharmaceutical companies to ensure that new drugs do not show significant activity at any of a range of antitargets, most of which are discovered largely by chance. Among the best-known and most significant antitargets are the hERG channel and the 5-HT2B receptor, both of which cause long-term problems with heart function that can prove fatal (long QT syndrome and cardiac fibrosis, respectively), in a small but unpredictable proportion of users. Both of these targets were discovered as a result of high levels of distinctive side-effects during the marketing of certain medicines, and, while some older drugs with significant hERG activity are still used with caution, most drugs that have been found to be strong 5-HT2B agonists were withdrawn from the market, and any new compound will almost always be discontinued from further development if initial screening shows high affinity for these targets. Agonism of the 5-HT2A receptor is an antitarget because 5-HT2A receptor agonists are associated with hallucinogenic effects. According to David E. Nichols, "Discussions over the years with many colleagues working in the pharmaceutical industry have informed me that if upon screening a potential new drug is found to have serotonin 5-HT2A agonist activity, it nearly always signals the end to any further development of that molecule." There are some exceptions however, for instance efavirenz and lorcaserin, which can activate the 5-HT2A receptor and cause psychedelic effects at high doses. The growth of the field of chemoproteomics has offered a variety of strategies to identify off-targets on a proteome wide scale. See also Off-target activity References Medicinal chemistry Drug discovery
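In practice, the initial screening mentioned above often reduces to comparing measured potencies at known antitargets against cutoff values. A schematic filter, with thresholds and data invented purely for illustration:

```python
# Toy antitarget screen: flag compounds with high affinity (low IC50)
# at hERG or 5-HT2B. The cutoffs below are illustrative assumptions,
# not regulatory or industry-standard values.
ANTITARGET_CUTOFFS_UM = {"hERG": 10.0, "5-HT2B": 1.0}   # IC50 cutoffs, uM

def passes_antitarget_screen(ic50_um_by_target):
    """Return True if the compound is weaker than every cutoff
    (a missing measurement is treated as no detected activity)."""
    return all(ic50_um_by_target.get(name, float("inf")) > cutoff
               for name, cutoff in ANTITARGET_CUTOFFS_UM.items())

print(passes_antitarget_screen({"hERG": 30.0, "5-HT2B": 8.0}))   # True
print(passes_antitarget_screen({"hERG": 0.5}))                   # False
```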
Antitarget
[ "Chemistry", "Biology" ]
417
[ "Life sciences industry", "Drug discovery", "nan", "Medicinal chemistry", "Biochemistry" ]
24,510,992
https://en.wikipedia.org/wiki/Proofs%20related%20to%20chi-squared%20distribution
The following are proofs of several characteristics related to the chi-squared distribution. Derivations of the pdf Derivation of the pdf for one degree of freedom Let random variable Y be defined as Y = X² where X has a normal distribution with mean 0 and variance 1 (that is, X ~ N(0,1)). Then, for y ≥ 0, F_Y(y) = P(Y ≤ y) = P(X² ≤ y) = P(−√y ≤ X ≤ √y) = F_X(√y) − F_X(−√y), where F_X and f_X are the cdf and pdf of the corresponding random variable X. Then f_Y(y) = (d/dy)F_Y(y) = f_X(√y)·(1/(2√y)) + f_X(−√y)·(1/(2√y)) = e^(−y/2)/√(2πy). Alternative proof directly using the change of variable formula The change of variable formula (implicitly derived above), for a monotonic transformation y = g(x), is: f_Y(y) = f_X(g⁻¹(y))·|d g⁻¹(y)/dy|. In this case the change is not monotonic, because every value of y has two corresponding values of x (one positive and one negative). However, because of symmetry, both halves will transform identically, i.e. f_Y(y) = 2·f_X(g⁻¹(y))·|d g⁻¹(y)/dy|. In this case, the transformation is: x = g⁻¹(y) = √y, and its derivative is d g⁻¹(y)/dy = 1/(2√y). So here: f_Y(y) = 2·(1/√(2π))·e^(−y/2)·(1/(2√y)) = y^(1/2 − 1)·e^(−y/2)/(2^(1/2)·Γ(1/2)). And one gets the chi-squared distribution, noting the property of the gamma function: Γ(1/2) = √π. Derivation of the pdf for two degrees of freedom There are several methods to derive the chi-squared distribution with 2 degrees of freedom. Here is one based on the distribution with 1 degree of freedom. Suppose that x₁ and x₂ are two independent variables satisfying x₁ ~ χ²(1) and x₂ ~ χ²(1), so that the probability density functions of x₁ and x₂ are respectively: f(x₁) = e^(−x₁/2)/√(2πx₁) and f(x₂) = e^(−x₂/2)/√(2πx₂), and of course, by independence, the joint density is their product. Then, we can derive the joint distribution of (x₁, x₂): f(x₁, x₂) = (1/(2π√(x₁x₂)))·e^(−(x₁+x₂)/2), where x₁ > 0 and x₂ > 0. Further, let and , we can get that: and or, inversely and Since the two variable change policies are symmetric, we take the upper one and multiply the result by 2. The Jacobian determinant can be calculated as: Now we can change to : where the leading constant 2 is to take both the two variable change policies into account. Finally, we integrate out the auxiliary variable to get the distribution of s = x₁ + x₂, i.e.: Substituting gives: So, the result is: f(s) = (1/2)·e^(−s/2), the chi-squared distribution with 2 degrees of freedom. Derivation of the pdf for k degrees of freedom Consider the k samples x₁, …, x_k to represent a single point in a k-dimensional space. The chi square distribution for k degrees of freedom will then be given by: P(Q) dQ = ∫ over the shell of ∏ᵢ N(xᵢ) dxᵢ, where N(x) = e^(−x²/2)/√(2π) is the standard normal distribution and the shell is that elemental shell volume at Q(x), which is proportional to the (k − 1)-dimensional surface in k-space for which Q = ∑ᵢ xᵢ². It can be seen that this surface is the surface of a k-dimensional ball or, alternatively, an n-sphere where n = k − 1 with radius R = √Q, and that the term in the exponent, ∏ᵢ e^(−xᵢ²/2) = e^(−Q/2), is simply expressed in terms of Q. Since it is a constant over the shell, it may be removed from inside the integral: P(Q) dQ = (1/(2π)^(k/2))·e^(−Q/2) dV. The integral is now simply the surface area A of the (k − 1)-sphere times the infinitesimal thickness of the sphere, which is dR = dQ/(2√Q). The area of a (k − 1)-sphere of radius R is: A = 2π^(k/2)·R^(k−1)/Γ(k/2). Substituting, realizing that R^(k−1) = Q^((k−1)/2), and cancelling terms yields: P(Q) = (1/(2^(k/2)·Γ(k/2)))·Q^(k/2 − 1)·e^(−Q/2). Article proofs
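For quick reference, the end results of the three derivations above can be restated compactly (these are the standard textbook forms):

```latex
% Chi-squared densities for 1, 2 and k degrees of freedom, valid for x > 0
f_{1}(x) = \frac{e^{-x/2}}{\sqrt{2\pi x}}, \qquad
f_{2}(x) = \tfrac{1}{2}\, e^{-x/2}, \qquad
f_{k}(x) = \frac{1}{2^{k/2}\,\Gamma(k/2)}\, x^{k/2-1} e^{-x/2}.
```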
Proofs related to chi-squared distribution
[ "Mathematics" ]
561
[ "Article proofs" ]
24,511,251
https://en.wikipedia.org/wiki/An%20Act%20to%20amend%20the%20Telecommunications%20Act%20%28Internet%20neutrality%29
An Act to amend the Telecommunications Act (Internet neutrality) (Bill C-398) was tabled in the Parliament of Canada by the MP for Timmins and James Bay, Charlie Angus, on May 29, 2009, during the second session of the 40th Parliament. Bill C-398 aimed to prohibit various forms of discrimination by telecommunications service providers. "Network management practices that favour, degrade or prioritize any content, application or service transmitted over a broadband network based on its source, ownership, destination or type" are specifically prohibited, subject to certain exceptions. Telecommunications service providers may use reasonable management practices in order to alleviate extraordinary congestion, may prioritize emergency communications, and may assure the security of computers and networks in a reasonable manner. ISPs, according to the proposed Bill, would also be allowed to charge users on a usage-based basis as well as offer directly to users consumer protection services that may discriminate, provided notice is given to users as well as a possibility to opt out. The Bill would also prohibit telecommunications service providers from hampering foreign device attachment to their networks, provided the device would neither damage nor degrade the network. Telecommunications service providers would furthermore be obligated to make available to users at all times information about the speed, limitations and management practices that are in effect. History The Bill is a re-submitted version of Bill C-552, which was introduced one day after 300 protesters came to Parliament in May 2008. The net neutrality movement in Canada had accelerated since telecom providers Bell Canada and Rogers were found to have throttled their users' P2P traffic. Liberal MP David McGuinty had followed closely on the heels of Mr. Angus' first private member initiative with a private Bill of his own, C-555. References Internet in Canada Net neutrality Canadian federal legislation Proposed laws of Canada 2009 in Canadian law
An Act to amend the Telecommunications Act (Internet neutrality)
[ "Engineering" ]
375
[ "Net neutrality", "Computer networks engineering" ]
24,511,700
https://en.wikipedia.org/wiki/Tutte%20graph
In the mathematical field of graph theory, the Tutte graph is a 3-regular graph with 46 vertices and 69 edges named after W. T. Tutte. It has chromatic number 3, chromatic index 3, girth 4 and diameter 8. The Tutte graph is a cubic polyhedral graph, but is non-Hamiltonian. Therefore, it is a counterexample to Tait's conjecture that every 3-regular polyhedron has a Hamiltonian cycle. Published by Tutte in 1946, it is the first counterexample constructed for this conjecture. Other counterexamples were found later, in many cases based on Grinberg's theorem. Construction From a small planar graph called the Tutte fragment, W. T. Tutte constructed a non-Hamiltonian polyhedron, by putting together three such fragments. The "compulsory" edges of the fragments, which must be part of any Hamiltonian path through the fragment, are connected at the central vertex; because any cycle can use only two of these three edges, there can be no Hamiltonian cycle. The resulting graph is 3-connected and planar, so by Steinitz' theorem it is the graph of a polyhedron. It has 25 faces. It can be realized geometrically from a tetrahedron (the faces of which correspond to the four large nine-sided faces in the drawing, three of which are between pairs of fragments and the fourth of which forms the exterior) by multiply truncating three of its vertices. Algebraic properties The automorphism group of the Tutte graph is Z/3Z, the cyclic group of order 3. The characteristic polynomial of the Tutte graph is: Related graphs Although the Tutte graph is the first 3-regular non-Hamiltonian polyhedral graph to be discovered, it is not the smallest such graph. In 1965 Lederberg found the Barnette–Bosák–Lederberg graph on 38 vertices. In 1968, Grinberg constructed additional small counterexamples to Tait's conjecture – the Grinberg graphs on 42, 44 and 46 vertices. In 1974 Faulkner and Younger published two more graphs – the Faulkner–Younger graphs on 42 and 44 vertices. Finally, Holton and McKay showed there are exactly six 38-vertex non-Hamiltonian polyhedra that have nontrivial three-edge cuts. They are formed by replacing two of the vertices of a pentagonal prism by the same fragment used in Tutte's example. References Individual graphs Regular graphs Planar graphs Hamiltonian paths and cycles
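The stated invariants are easy to verify computationally; the NetworkX library ships a generator for this graph. A minimal sketch:

```python
import networkx as nx

# The Tutte graph is available as a named small graph in NetworkX.
G = nx.tutte_graph()

assert G.number_of_nodes() == 46
assert G.number_of_edges() == 69
assert all(d == 3 for _, d in G.degree())   # 3-regular (cubic)
assert nx.check_planarity(G)[0]             # planar, as a polyhedral graph
print(nx.diameter(G))                       # 8
```

Checking non-Hamiltonicity itself is harder (Hamiltonian cycle detection is NP-complete in general), which is part of why explicit counterexamples such as this one are valuable.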
Tutte graph
[ "Mathematics" ]
522
[ "Planes (geometry)", "Planar graphs" ]
24,511,708
https://en.wikipedia.org/wiki/Mathukumalli%20Vidyasagar
Mathukumalli Vidyasagar (born 29 September 1947) is a leading control theorist and a Fellow of the Royal Society. He is currently a Distinguished Professor in Electrical Engineering at IIT Hyderabad. Previously he was the Cecil & Ida Green (II) Chair of Systems Biology Science at the University of Texas at Dallas. Prior to that he was an executive vice-president at Tata Consultancy Services (TCS), where he headed the Advanced Technology Center. Earlier, he was the director of the Centre for Artificial Intelligence and Robotics (CAIR), a DRDO defence lab in Bangalore. He is the son of the eminent mathematician M. V. Subbarao. His Erdős number is two and his Einstein number is three. Early life and education He completed his bachelor's, master's and Ph.D. degrees at the University of Wisconsin–Madison. Career He began his career as an assistant professor at Marquette University in 1969. Awards and honors Vidyasagar received several awards and honors, including: 1983: Fellow of the Institute of Electrical and Electronics Engineers (IEEE), at the age of 35, one of the youngest to receive this honor, "for contributions to the stability analysis of linear and nonlinear distributed systems" 1984: the Frederick Emmons Terman Award from the American Society for Engineering Education 2004: IEEE Spectrum named him as one of forty "Tech Gurus" 2008: the IEEE Control Systems Award 2012: Became a Fellow of the Royal Society 2012: Rufus Oldenburger Medal 2013: John R. Ragazzini Award, American Automatic Control Council - for outstanding contributions to automatic control education through publication of textbooks and research monographs 2015: Jawaharlal Nehru Science Fellowship, Government of India 2017: Fellow, International Federation of Automatic Control 2017: Named one of 125 "People of Impact" during the 125th anniversary of the Department of Electrical Engineering, University of Wisconsin Books 1978. Nonlinear Systems Analysis 1981. Input-Output Analysis of Large-Scale Interconnected Systems: Decomposition, Well-Posedness and Stability 1985. Control System Synthesis: A Factorization Approach 1989. Robot dynamics and control. with Mark W. Spong 1993. Nonlinear Systems Analysis, (Second Edition) 1997. A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems 2003. Learning and Generalization With Applications to Neural Networks, (Second Edition) 2006. Robot modeling and control. with S. Hutchinson and Mark W. Spong 2012. Computational Cancer Biology: An Interaction Networks Approach 2014. Hidden Markov Processes: Theory and Applications to Biology References External links Home page Control theorists Indian roboticists Fellows of the IEEE University of Texas at Dallas faculty 1947 births Living people Telugu people Engineers from Andhra Pradesh Systems biologists American people of Indian descent American people of Telugu descent Tata Consultancy Services people Scientists from Andhra Pradesh Indian business executives Fellows of the Royal Society 20th-century Indian engineers Indian technology writers 20th-century Indian non-fiction writers
Mathukumalli Vidyasagar
[ "Engineering" ]
587
[ "Control engineering", "Control theorists" ]
24,511,932
https://en.wikipedia.org/wiki/FFKM
FFKMs (per the ASTM D1418 standard; equivalent to FFPMs per the ISO/DIN 1629 standard) are perfluoroelastomeric compounds containing an even higher amount of fluorine than FKM fluoroelastomers. They have improved resistance to high temperatures and chemicals and even withstand environments where oxygen plasma is present for many hours. Certain grades have maximum continuous service temperatures well above 300 °C. They are commonly used to make O-rings and gaskets that are used in applications that involve contact with hydrocarbons or highly corrosive fluids, or when a wide range of temperatures is encountered. For vacuum applications demanding very low contamination (out-gassing and particle emission) as well as high-temperature operation (200–300 °C) for prolonged out-baking or processing times, and where a copper or metal sealing is not possible or very inconvenient/expensive, a custom-made, clean-room-manufactured seal such as Kalrez® 9100, SCVBR, Chemraz®, or Perlast can be used. After manufacturing, they are cleaned with oxygen plasma under vacuum (and/or vacuum baked) to reach out-gassing performance similar to Teflon while reaching vacuum leak tightness (permeability rates) similar to FKM (Viton) compounds. This combination of properties allows FFKM seals to reach well into UHV pressures without the use of metal seals. However, they are significantly more expensive than standard FKM O-rings. References Organofluorides Elastomers Materials science Fluoropolymers
FFKM
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
326
[ "Applied and interdisciplinary physics", "Synthetic materials", "Materials science", "Elastomers", "nan" ]
24,512,697
https://en.wikipedia.org/wiki/Paddle%20wheel
A paddle wheel is a form of waterwheel or impeller in which a number of paddles are set around the periphery of the wheel. It has several uses, of which some are: Very low-lift water pumping, such as flooding paddy fields lying only a small height above the water source. To move and mix algae culture in the raceway ponds used for algaculture. Propulsion of watercraft (as a paddlewheel) Low head hydro power (as a waterwheel) Flow sensors Aerators The paddle wheel is an ancient invention but is still used today in a wide range of industrial and agricultural applications. Ship propulsion Paddle wheels enabled ships to travel without needing wind or oars. They were made obsolete by propellers, which provided greater propulsion with lower weight and fuel usage. This was demonstrated by an 1845 tug-of-war competition between the screw-driven HMS Rattler and the paddle steamer HMS Alecto, in which Rattler pulled Alecto backward. Physics The paddle wheel is a device for converting between rotary motion of a shaft and linear motion of a fluid. In the linear-to-rotary direction, it is placed in a fluid stream to convert the linear motion of the fluid into rotation of the wheel. Such a rotation can be used as a source of power, or as an indication of the speed of flow. In the rotary-to-linear direction, it is driven by a prime mover such as an electric motor or steam engine and used to pump a fluid or propel a vehicle such as a paddle-wheel steamer or a steamship. References Watermills Pumps
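For the flow-sensor use listed above, the measured rotation rate maps back to fluid speed through the wheel geometry. A toy conversion, with an assumed slip factor (real sensors lag the flow and are calibrated empirically):

```python
import math

def flow_speed_m_per_s(rpm, paddle_radius_m, slip_factor=1.0):
    """Estimate fluid speed from paddle-wheel rotation.

    Assumes paddle tip speed tracks the fluid speed; slip_factor < 1
    models the wheel lagging the flow, so the true fluid speed is the
    tip speed divided by the slip factor.
    """
    omega = rpm * 2 * math.pi / 60.0          # angular speed, rad/s
    return omega * paddle_radius_m / slip_factor

# 120 rpm on a 2 cm radius wheel, ideal (no-slip) coupling.
print(f"{flow_speed_m_per_s(120, 0.02):.3f} m/s")   # ~0.251 m/s
```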
Paddle wheel
[ "Physics", "Chemistry" ]
316
[ "Physical systems", "Hydraulics", "Turbomachinery", "Pumps" ]
24,513,174
https://en.wikipedia.org/wiki/U-30%20tow%20tractor
The U-30 tow tractor is an aircraft towing vehicle used by the United States Air Force and allied air forces. There are several models of the U-30 tractor, with varying weight characteristics. The first generation was designed and delivered in 1968 to the Air Force by the Oshkosh Truck Corporation. The vehicle was required for the then-new Lockheed C-5 Galaxy aircraft, the largest in the USAF's inventory. Oshkosh delivered a total of 45 vehicles under that contract. Since then, several manufacturers have produced the U-30 vehicle, including Stewart & Stevenson and FMC Technologies (now JBT AeroTech). The Air Force also purchases remanufactured U-30 tow tractors. Melton Sales and Service of Bordentown, New Jersey currently has this contract, under which three variants of the U-30 tow tractor are overhauled. See also MB-2 tow tractor M2 high-speed tractor Omni Directional Vehicle References Aircraft ground handling Tractors Oshkosh vehicles
U-30 tow tractor
[ "Engineering" ]
203
[ "Engineering vehicles", "Tractors" ]
24,513,631
https://en.wikipedia.org/wiki/Protein%20Science
Protein Science is a peer-reviewed scientific journal covering research on the structure, function, and biochemical significance of proteins, their role in molecular and cell biology, genetics, and evolution, and their regulation and mechanisms of action. It is published by Wiley-Blackwell on behalf of The Protein Society. The 2022 impact factor of the journal is 8.0. Abstracting and indexing Since January 2008, published articles are deposited in PubMed Central with a 12-month embargo. Protein Science is indexed and abstracted in MEDLINE, Science Citation Index, and Scopus. References External links The Protein Society Academic journals established in 1992 Biochemistry journals Wiley-Blackwell academic journals Monthly journals English-language journals
Protein Science
[ "Chemistry" ]
143
[ "Biochemistry stubs", "Biochemistry journals", "Biochemistry literature", "Biochemistry journal stubs" ]
24,514,868
https://en.wikipedia.org/wiki/Consensus%20site
A consensus site is a term in molecular biology that refers to a site on a protein that is often modified in a particular way. Modifications may be N- or O-linked glycosylation, phosphorylation, tyrosine sulfation or others. References Molecular biology
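As a concrete illustration, the best-known consensus site for N-linked glycosylation is the sequon Asn-X-Ser/Thr, where X is any residue except proline. A minimal scanner for it (our own sketch, not taken from the article):

```python
import re

# N-glycosylation sequon: Asn, then any residue except Pro, then Ser/Thr.
SEQUON = re.compile(r"N[^P][ST]")

def find_n_glyc_sites(protein_seq):
    """Return 1-based positions of candidate N-glycosylation consensus sites."""
    return [m.start() + 1 for m in SEQUON.finditer(protein_seq)]

# NQT at position 3 matches; NPS at position 8 is excluded by the proline;
# NAS at position 13 matches.
print(find_n_glyc_sites("MKNQTALNPSWVNASG"))   # [3, 13]
```

A matching sequon is only a candidate site; whether it is actually modified depends on the protein's folding and cellular context.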
Consensus site
[ "Chemistry", "Biology" ]
61
[ "Biochemistry", "Molecular biology stubs", "Molecular biology" ]
24,515,182
https://en.wikipedia.org/wiki/Spectralon
Spectralon is a fluoropolymer that has the highest diffuse reflectance of any known material or coating over the ultraviolet, visible, and near-infrared regions of the spectrum. It is the whitest substance available and reflects 99% of the light. It exhibits highly Lambertian behavior, and can be machined into a wide variety of shapes for the construction of optical components such as calibration targets, integrating spheres, and optical pump cavities for lasers. Characteristics Spectralon's reflectance generally exceeds 99 percent over a range from 400 to 1500 nm and 95 percent from 250 to 2500 nm; however, grades are available with added carbon to achieve various gray levels. The material consists of PTFE powder that has been compressed into solid forms and sintered for stability, with approximately 40 percent void volume to enhance scattering of light. Surface or subsurface contamination may lower the reflectance at the extreme upper and lower ends of the spectral range. The material is also highly Lambertian at wavelengths from 257 nm to 10,600 nm, although reflectivity decreases at wavelengths beyond the near infrared. Spectralon exhibits absorbances at 2800 nm, then absorbs strongly (less than 20 percent reflectance) from 5400 to 8000 nm. Although the high diffuse reflectance allows efficient laser pumping, the material has a fairly low damage threshold of four joules per square centimeter, limiting its use to lower-powered systems. Its Lambertian reflectance arises from the material's surface and immediate subsurface structure. The porous network of thermoplastic produces multiple reflections in the first few tenths of a millimeter. Spectralon can partially depolarize the light it reflects, but this effect decreases at high incidence angles. Although it is extremely hydrophobic, this open structure readily absorbs nonpolar solvents, greases and oils. Impurities are difficult to remove from Spectralon; thus, the material should be kept free from contaminants to maintain its reflectance properties. The material has a hardness roughly equal to that of high-density polyethylene and is thermally stable to over 350 °C. It is chemically inert to all but the most powerful bases, such as sodium amide and organosodium or lithium compounds. Gross contamination of the material or marring of the optical surface can be remedied by sanding under a stream of running water. This surface refinishing both restores the original topography of the surface and returns the material to its original reflectance. Weathering tests on the material show no damage upon exposure to atmospheric ultraviolet flux. The material shows no sign of optical or physical degradation after long-term immersion testing in sea water. Applications Three grades of Spectralon reflectance material are available: optical grade, laser grade, and space grade. Optical-grade Spectralon has a high reflectance and Lambertian behavior and is used primarily as a reference standard or target for calibration of spectrophotometers. Laser-grade Spectralon offers the same physical characteristics as optical-grade material but is a different formulation of resin that gives enhanced performance when used in laser pump cavities. Spectralon is used in a variety of "side-pumped" lasers. Space-grade Spectralon combines high reflectance with an extremely Lambertian reflectance profile and is used for terrestrial remote sensing applications. Spectralon's optical properties make it ideal as a reference surface in remote sensing and spectroscopy. 
For instance, it is used to obtain leaf reflectance and bidirectional reflectance distribution functions in the laboratory. It can be applied to obtain vegetation fluorescence using Fraunhofer lines. Spectralon allows removal of contributions in the emitted light that are linked not to the surface (leaf) properties but to geometrical factors. History Spectralon was developed by Labsphere and has been available since 1986. References External links Spectralon Product Details Spectralon Tech Guide Optical materials Fluoropolymers Thermoplastics
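The reference-panel use described above is a simple ratio measurement: the target's reflectance factor is the target signal divided by the panel signal, scaled by the panel's calibrated reflectance. A minimal sketch with invented numbers:

```python
def reflectance_factor(target_signal, panel_signal, panel_reflectance=0.99):
    """Reflectance of a target measured against a Spectralon panel.

    Assumes identical illumination and viewing geometry for both
    measurements; panel_reflectance comes from the panel's calibration
    data (here an assumed nominal 0.99).
    """
    return (target_signal / panel_signal) * panel_reflectance

# e.g. a leaf returning 450 counts versus 1800 counts off the panel
print(f"{reflectance_factor(450, 1800):.3f}")   # 0.248
```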
Spectralon
[ "Physics" ]
809
[ "Materials", "Optical materials", "Matter" ]
24,517,676
https://en.wikipedia.org/wiki/UCSC%20Genome%20Browser
The UCSC Genome Browser is an online and downloadable genome browser hosted by the University of California, Santa Cruz (UCSC). It is an interactive website offering access to genome sequence data from a variety of vertebrate and invertebrate species and major model organisms, integrated with a large collection of aligned annotations. The Browser is a graphical viewer optimized to support fast interactive performance and is an open-source, web-based tool suite built on top of a MySQL database for rapid visualization, examination, and querying of the data at many levels. The Genome Browser Database, browsing tools, downloadable data files, and documentation can all be found on the UCSC Genome Bioinformatics website. History Initially built and still managed by Jim Kent, then a graduate student, and David Haussler, professor of Computer Science (now Biomolecular Engineering) at the University of California, Santa Cruz in 2000, the UCSC Genome Browser began as a resource for the distribution of the initial fruits of the Human Genome Project. Funded by the Howard Hughes Medical Institute and the National Human Genome Research Institute, NHGRI (one of the US National Institutes of Health), the browser offered a graphical display of the first full-chromosome draft assembly of human genome sequence. Today the browser is used by geneticists, molecular biologists and physicians as well as students and teachers of evolution for access to genomic information. Genomes In the years since its inception, the UCSC Browser has expanded to accommodate genome sequences of all vertebrate species and selected invertebrates for which high-coverage genomic sequence is available, now including 108 species. High coverage is necessary to allow overlap to guide the construction of larger contiguous regions. Genomic sequences with less coverage are included in multiple-alignment tracks on some browsers, but the fragmented nature of these assemblies makes them unsuitable for building full-featured browsers (more below on multiple-alignment tracks). Apart from these 108 species and their assemblies, the UCSC Genome Browser also offers Assembly Hubs, web-accessible directories of genomic data that can be viewed on the browser and include assemblies that are not hosted natively on it. There, users can load and annotate unique assemblies for which UCSC does not provide an annotation database. A full list of species and their assemblies can be viewed in the GenArk Portal, including 2,589 assemblies hosted by both UCSC Genome Browser database and Assembly Hubs. An example can be seen in the Vertebrate Genomes Project assembly hub.
Blocks of color along the coordinate axis show the locations of the alignments of the various data types. The ability to show this large variety of data types on a single coordinate axis makes the browser a handy tool for the vertical integration of the data. To find a specific gene or genomic region, the user may type in the gene name, a DNA sequence, an accession number for an RNA, the name of a genomic cytological band (e.g., 20p13 for band 13 on the short arm of chr20) or a chromosomal position (chr17:38,450,000-38,531,000 for the region around the gene BRCA1). Presenting the data in the graphical format allows the browser to present link access to detailed information about any of the annotations. The gene details page of the UCSC Genes track provides a large number of links to more specific information about the gene at many other data resources, such as Online Mendelian Inheritance in Man (OMIM) and SwissProt. Designed for the presentation of complex and voluminous data, the UCSC Browser is optimized for speed. By pre-aligning millions of RNA sequences from GenBank to each of the 244 genome assemblies (many of the 108 species have more than one assembly), the browser allows instant access to the alignments of any RNA to any of the hosted species. The juxtaposition of the many types of data allows researchers to display exactly the combination of data that will answer specific questions. A pdf/postscript output functionality allows export of a camera-ready image for publication in academic journals. One unique and useful feature that distinguishes the UCSC Browser from other genome browsers is the continuously variable nature of the display. A sequence of any size can be displayed, from a single DNA base up to an entire chromosome (human chr1 = 245 million bases, Mb) with full annotation tracks. Researchers can display a single gene, a single exon, or an entire chromosome band, showing dozens or hundreds of genes and any combination of the many annotations. A convenient drag-and-zoom feature allows the user to choose any region in the genome image and expand it to occupy the full screen. Researchers may also use the browser to display their own data via the Custom Tracks tool. This feature allows users to upload a file of their own data and view the data in the context of the reference genome assembly. Users may also use the data hosted by UCSC, creating subsets of the data of their choosing with the Table Browser tool (such as only the SNPs that change the amino acid sequence of a protein) and display this specific subset of the data in the browser as a Custom Track. Any browser view created by a user, including those containing Custom Tracks, may be shared with other users via the Saved Sessions tool. Tracks Below the displayed images of the UCSC Genome browser are eleven categories of additional tracks that can be selected and displayed alongside the original data. Researchers can select tracks which best represent their query to allow for more applicable data to be displayed depending on the type and depth of research being done. Analysis tools The UCSC site hosts a set of genome analysis tools, including a full-featured GUI interface for mining the information in the browser database, a FASTA format sequence alignment tool BLAT that is also useful for simply finding sequences in the massive sequence (human genome = 3.23 billion bases [Gb]) of any of the featured genomes.
A liftOver tool uses whole-genome alignments to allow conversion of sequences from one assembly to another or between species. The Genome Graphs tool allows users to view all chromosomes at once and display the results of genome-wide association studies (GWAS). The Gene Sorter displays genes grouped by parameters not linked to genome location, such as expression pattern in tissues. Open source / mirrors The UCSC Browser code base is open-source for non-commercial use, and is mirrored locally by many research groups, allowing private display of data in the context of the public data. The UCSC Browser is mirrored at several locations worldwide. The Browser code is also used in separate installations by the UCSC Malaria Genome Browser and the Archaea Browser. See also Ensembl ENCODE List of biological databases References External links On-line Training/Tutorials & User's Guides UCSC Genome tutorials (videos on YouTube) Bioinformatics software Genome databases Bioinformatics National Institutes of Health Computational biology Biological databases University of California, Santa Cruz
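As a small illustration of the position syntax described above, the parser below (our own sketch, not UCSC code) converts a chrom:start-end string into numeric coordinates:

```python
import re

def parse_position(pos):
    """Parse a UCSC-style position string such as
    'chr17:38,450,000-38,531,000' into (chrom, start, end)."""
    m = re.fullmatch(r"(\w+):([\d,]+)-([\d,]+)", pos)
    if not m:
        raise ValueError(f"not a chrom:start-end position: {pos!r}")
    chrom = m.group(1)
    start = int(m.group(2).replace(",", ""))
    end = int(m.group(3).replace(",", ""))
    return chrom, start, end

print(parse_position("chr17:38,450,000-38,531,000"))
# ('chr17', 38450000, 38531000)
```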
UCSC Genome Browser
[ "Biology" ]
1,635
[ "Bioinformatics", "Bioinformatics software", "Computational biology" ]
24,519,232
https://en.wikipedia.org/wiki/Dynamic%20combinatorial%20chemistry
Dynamic combinatorial chemistry (DCC), also known as constitutional dynamic chemistry (CDC), is a method for the generation of new molecules formed by reversible reaction of simple building blocks under thermodynamic control. The library of these reversibly interconverting building blocks is called a dynamic combinatorial library (DCL). All constituents in a DCL are in equilibrium, and their distribution is determined by their thermodynamic stability within the DCL. The interconversion of these building blocks may involve covalent or non-covalent interactions. When a DCL is exposed to an external influence (such as proteins or nucleic acids), the equilibrium shifts and those components that interact with the external influence are stabilised and amplified, allowing more of the active compound to be formed. History By modern definition, dynamic combinatorial chemistry is generally considered to be a method of facilitating the generation of new chemical species by the reversible linkage of simple building blocks, under thermodynamic control. This principle is known to select the most thermodynamically stable product from an equilibrating mixture of a number of components, a concept commonly utilised in synthetic chemistry to direct the control of reaction selectivity. Although this approach was arguably utilised in the work of Fischer and Werner as early as the 19th century, their respective studies of carbohydrate and coordination chemistry were restricted to rudimentary speculation, requiring the rationale of modern thermodynamics. It was not until supramolecular chemistry revealed early concepts of molecular recognition, complementarity and self-organisation that chemists could begin to employ strategies for the rational design and synthesis of macromolecular targets. The concept of template synthesis was further developed and rationalised through the pioneering work of Busch in the 1960s, which clearly defined the role of a metal ion template in stabilising the desired ‘thermodynamic’ product, allowing for its isolation from the complex equilibrating mixture. Although the work of Busch helped to establish the template method as a powerful synthetic route to stable macrocyclic structures, this approach remained exclusively within the domain of inorganic chemistry until the early 1990s, when Sanders et al. first proposed the concept of dynamic combinatorial chemistry. Their work combined thermodynamic templation in tandem with combinatorial chemistry, to generate an ensemble of complex porphyrin and imine macrocycles using a modest selection of simple building blocks. Sanders then developed this early manifestation of dynamic combinatorial chemistry as a strategy for organic synthesis; the first example being the thermodynamically-controlled macrolactonisation of oligocholates to assemble cyclic steroid-derived macrocycles capable of interconversion via component exchange. Early work by Sanders et al. employed transesterification to generate dynamic combinatorial libraries. In retrospect, it was unfortunate that esters were selected for mediating component exchange, as transesterification processes are inherently slow and require vigorous anhydrous conditions. However, their subsequent investigations identified that both the disulfide and hydrazone covalent bonds exhibit effective component exchange processes and so present a reliable means of generating dynamic combinatorial libraries capable of thermodynamic templation.
This chemistry now forms the basis of much research in the developing field of dynamic covalent chemistry, and has in recent years emerged as a powerful tool for the discovery of molecular receptors. Protein-directed One of the key developments within the field of DCC is the use of proteins (or other biological macromolecules, such as nucleic acids) to influence the evolution and generation of components within a DCL. Protein-directed DCC provides a way to generate, identify and rank novel protein ligands, and therefore has huge potential in the areas of enzyme inhibition and drug discovery. Reversible covalent reactions The development of protein-directed DCC has not been straightforward because the reversible reactions employed must occur in aqueous solution at biological pH and temperature, and the components of the DCL must be compatible with proteins. Several reversible reactions have been proposed and/or applied in protein-directed DCC. These include boronate ester formation, diselenide–disulfide exchange, disulfide formation, hemithioacetal formation, hydrazone formation, imine formation and thiol-enone exchange. Pre-equilibrated DCL For reversible reactions that do not occur in aqueous buffers, the pre-equilibrated DCC approach can be used. The DCL is initially generated (or pre-equilibrated) in organic solvent, and then diluted into aqueous buffer containing the protein target for selection. Organic-based reversible reactions, including Diels-Alder and alkene cross metathesis reactions, have been proposed or applied to protein-directed DCC using this method. Reversible non-covalent reactions Reversible non-covalent reactions, such as metal-ligand coordination, have also been applied in protein-directed DCC. This strategy is useful for the investigation of the optimal ligand stereochemistry for the binding site of the target protein. Enzyme-catalysed reversible reactions Enzyme-catalysed reversible reactions, such as protease-catalysed amide bond formation/hydrolysis reactions and the aldolase-catalysed aldol reactions, have also been applied to protein-directed DCC. Analytical methods A protein-directed DCC system must be amenable to efficient screening. Several analytical techniques have been applied to the analysis of protein-directed DCLs. These include HPLC, mass spectrometry, NMR spectroscopy, and X-ray crystallography. Multi-protein approach Although most applications of protein-directed DCC to date involved the use of a single protein in the DCL, it is possible to identify protein ligands by using multiple proteins simultaneously, as long as a suitable analytical technique is available to detect the protein species that interact with the DCL components. This approach may be used to identify specific inhibitors or broad-spectrum enzyme inhibitors. Other applications DCC is useful in identifying molecules with unusual binding properties, and provides synthetic routes to complex molecules that are not easily accessible by other means. These include smart materials, foldamers, self-assembling molecules with interlocking architectures and new soft materials. The application of DCC to detect volatile bioactive compounds, i.e. the amplification and sensing of scent, was proposed in a concept paper. Recently, DCC was also used to study the abiotic origins of life.
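The core idea that the library distribution is set by thermodynamic stability, and re-equilibrates when a template stabilises one member, can be illustrated with Boltzmann weights. The free energies below are invented; the sketch shows only the qualitative amplification effect:

```python
import math

RT = 2.48   # kJ/mol, approximately RT at 298 K

def equilibrium_fractions(free_energies_kj):
    """Boltzmann-weighted mole fractions of DCL members (toy model)."""
    weights = [math.exp(-g / RT) for g in free_energies_kj]
    total = sum(weights)
    return [w / total for w in weights]

members = ["A", "B", "C"]
g_free = [0.0, 1.0, 2.0]          # kJ/mol, assumed relative stabilities
g_templated = [0.0, 1.0, -8.0]    # template binding stabilises member C

for g in (g_free, g_templated):
    fracs = equilibrium_fractions(g)
    print({m: round(f, 2) for m, f in zip(members, fracs)})
# Member C goes from a minor species to the dominant one once templated.
```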
See also Combinatorial biology Combinatorial chemistry Drug discovery Fragment-based lead discovery High-throughput screening References External links University of North Carolina at Chapel Hill: Center for Dynamic Combinatorial Chemistry University of Cambridge: Dynamic Combinatorial Chemistry Cheminformatics Drug discovery Combinatorial chemistry
Dynamic combinatorial chemistry
[ "Chemistry", "Materials_science", "Mathematics", "Engineering", "Biology" ]
1,438
[ "Combinatorial chemistry", "Life sciences industry", "Drug discovery", "Materials science", "Combinatorics", "Computational chemistry", "Medicinal chemistry", "Cheminformatics", "nan" ]
24,519,420
https://en.wikipedia.org/wiki/Graded%20manifold
In algebraic geometry, graded manifolds are extensions of the concept of manifolds based on ideas coming from supersymmetry and supercommutative algebra. Both graded manifolds and supermanifolds are phrased in terms of sheaves of graded commutative algebras. However, graded manifolds are characterized by sheaves on smooth manifolds, while supermanifolds are constructed by gluing of sheaves of supervector spaces. Graded manifolds A graded manifold of dimension is defined as a locally ringed space where is an -dimensional smooth manifold and is a -sheaf of Grassmann algebras of rank where is the sheaf of smooth real functions on . The sheaf is called the structure sheaf of the graded manifold , and the manifold is said to be the body of . Sections of the sheaf are called graded functions on a graded manifold . They make up a graded commutative -ring called the structure ring of . The well-known Batchelor theorem and Serre–Swan theorem characterize graded manifolds as follows. Serre–Swan theorem for graded manifolds Let be a graded manifold. There exists a vector bundle with an -dimensional typical fiber such that the structure sheaf of is isomorphic to the structure sheaf of sections of the exterior product of , whose typical fibre is the Grassmann algebra . Let be a smooth manifold. A graded commutative -algebra is isomorphic to the structure ring of a graded manifold with a body if and only if it is the exterior algebra of some projective -module of finite rank. Graded functions Note that above mentioned Batchelor's isomorphism fails to be canonical, but it often is fixed from the beginning. In this case, every trivialization chart of the vector bundle yields a splitting domain of a graded manifold , where is the fiber basis for . Graded functions on such a chart are -valued functions , where are smooth real functions on and are odd generating elements of the Grassmann algebra . Graded vector fields Given a graded manifold , graded derivations of the structure ring of graded functions are called graded vector fields on . They constitute a real Lie superalgebra with respect to the superbracket , where denotes the Grassmann parity of . Graded vector fields locally read . They act on graded functions by the rule . Graded exterior forms The -dual of the module graded vector fields is called the module of graded exterior one-forms . Graded exterior one-forms locally read so that the duality (interior) product between and takes the form . Provided with the graded exterior product , graded one-forms generate the graded exterior algebra of graded exterior forms on a graded manifold. They obey the relation , where denotes the form degree of . The graded exterior algebra is a graded differential algebra with respect to the graded exterior differential , where the graded derivations , are graded commutative with the graded forms and . There are the familiar relations . Graded differential geometry In the category of graded manifolds, one considers graded Lie groups, graded bundles and graded principal bundles. One also introduces the notion of jets of graded manifolds, but they differ from jets of graded bundles. Graded differential calculus The differential calculus on graded manifolds is formulated as the differential calculus over graded commutative algebras similarly to the differential calculus over commutative algebras. Physical outcome Due to the above-mentioned Serre–Swan theorem, odd classical fields on a smooth manifold are described in terms of graded manifolds. 
Extended to graded manifolds, the variational bicomplex provides the strict mathematical formulation of Lagrangian classical field theory and Lagrangian BRST theory. See also Connection (algebraic framework) Graded (mathematics) Serre–Swan theorem Supergeometry Supermanifold Supersymmetry References C. Bartocci, U. Bruzzo, D. Hernandez Ruiperez, The Geometry of Supermanifolds (Kluwer, 1991) T. Stavracou, Theory of connections on graded principal bundles, Rev. Math. Phys. 10 (1998) 47 B. Kostant, Graded manifolds, graded Lie theory, and prequantization, in Differential Geometric Methods in Mathematical Physics, Lecture Notes in Mathematics 570 (Springer, 1977) p. 177 A. Almorox, Supergauge theories in graded manifolds, in Differential Geometric Methods in Mathematical Physics, Lecture Notes in Mathematics 1251 (Springer, 1987) p. 114 D. Hernandez Ruiperez, J. Munoz Masque, Global variational calculus on graded manifolds, J. Math. Pures Appl. 63 (1984) 283 G. Giachetta, L. Mangiarotti, G. Sardanashvily, Advanced Classical Field Theory (World Scientific, 2009). External links G. Sardanashvily, Lectures on supergeometry. Supersymmetry Generalized manifolds
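As a postscript to this entry, the three basic identities of the graded calculus, reconstructed above from context, can be collected in one compilable LaTeX fragment; the notation ([u] for Grassmann parity, |φ| for form degree) is the reconstruction's, not necessarily the original article's.

```latex
% Summary of the graded calculus sketched above (notation as reconstructed in the text).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Superbracket of graded vector fields:
\[ [u,u'] = u \circ u' - (-1)^{[u][u']}\, u' \circ u . \]
Graded commutativity of exterior forms:
\[ \phi \wedge \sigma = (-1)^{|\phi||\sigma| + [\phi][\sigma]}\, \sigma \wedge \phi . \]
Graded Leibniz rule and nilpotency of the exterior differential:
\[ d(\phi \wedge \sigma) = d\phi \wedge \sigma + (-1)^{|\phi|}\, \phi \wedge d\sigma ,
   \qquad d \circ d = 0 . \]
\end{document}
```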
Graded manifold
[ "Physics" ]
1,000
[ "Unsolved problems in physics", "Supersymmetry", "Symmetry", "Physics beyond the Standard Model" ]
21,534,694
https://en.wikipedia.org/wiki/Spinmechatronics
Spinmechatronics is a neologism referring to an emerging field of research concerned with the exploitation of spin-dependent phenomena and established spintronic methodologies and technologies in conjunction with electro-mechanical, magneto-mechanical, acousto-mechanical and opto-mechanical systems. Most especially, spinmechatronics (or spin mechatronics) concerns the integration of micro- and nano-mechatronic systems with spin physics and spintronics. History and origins While spinmechatronics has been recognised only recently (2008) as an independent field, hybrid spin-mechanical system development dates back to the early nineteen-nineties, with devices combining spintronics and micromechanics emerging at the turn of the twenty-first century. One of the longest-established spinmechatronic systems is the Magnetic Resonance Force Microscope or MRFM. First proposed by J. A. Sidles in a seminal paper of 1991 – and since extensively developed both theoretically and experimentally by a number of international research groups – the MRFM operates by coupling a magnetically loaded micro-mechanical cantilever to an excited nuclear, proton or electron spin system. The MRFM concept effectively combines scanning atomic force microscopy (AFM) with magnetic resonance spectroscopy to provide a spectroscopic tool of unparalleled sensitivity. Nanometre resolution is possible, and the technique potentially forms the basis for ultra-high-sensitivity, ultra-high-resolution magnetic, biochemical, biomedical, and clinical diagnostics. The synergy of micromechanics and established spintronic technologies for sensing applications is one of the most significant spinmechatronic developments of the last decade. At the beginning of this century, strain sensors incorporating magnetoresistive technologies emerged, and a wide range of devices exploiting similar principles are likely to realize research and commercial potential by 2015. Contemporary innovation in spinmechatronics drives forward the independent advancement of cutting-edge science in spin physics, spintronics and micro- and nano-mechatronics, and catalyses the development of wholly new instrumentation, control and fabrication techniques to facilitate and exploit their integration. Key constitutive technologies Micro- and nano-mechatronics MEMS: micro-electromechanical systems are the key ingredient of micro-mechatronics. Micro-electromechanical systems are – as the name suggests – devices with significant dimensions in the micrometre regime or less. Highly suited to integration with electronic and microwave circuitry, they provide the key to electro-mechanical functionalities unachievable with classical precision mechatronics. Commercialisation of mass-produced MEMS products is rapidly picking up pace and includes printer ink-jet technology, 3D accelerometers, integrated pressure sensors, and Digital Light Processing (DLP) displays. At the cutting edge of MEMS fabrication and integration technologies are nano-electromechanical systems (NEMS). Typical examples are micrometres long, tens of nanometres thick, and have mechanical resonance frequencies approaching 100 MHz. Their small physical dimensions and mass (of the order of picograms) make them highly sensitive to changes in stiffness; this, their synergy with mechanical and data processing systems, and the option of attaching chemical or biological molecules make them ideal for ultra-high-performance mechanical, chemical and biological sensing applications. 
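To make the NEMS figures just quoted concrete, here is a rough order-of-magnitude check using standard Euler–Bernoulli beam theory; the silicon constants and beam dimensions are illustrative assumptions, not values from this article.

```python
# Illustrative estimate (not from the article): fundamental flexural resonance of a
# doubly clamped silicon nanobeam, f1 = (lambda1^2 / 2*pi) * sqrt(E*I / (rho*A*L^4)).
import math

E = 169e9        # Young's modulus of silicon, Pa (assumed value)
rho = 2330.0     # density of silicon, kg/m^3
L, w, t = 2e-6, 200e-9, 50e-9   # hypothetical beam: 2 um long, 200 nm wide, 50 nm thick

I = w * t**3 / 12.0   # second moment of area of the rectangular cross-section
A = w * t             # cross-sectional area
lam1 = 4.730          # first clamped-clamped mode eigenvalue

f1 = (lam1**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))
mass = rho * L * w * t

print(f"fundamental resonance ~ {f1 / 1e6:.0f} MHz")   # ~1e2 MHz: "approaching 100 MHz"
print(f"beam mass ~ {mass * 1e15:.3f} pg")             # tens of femtograms at this size;
                                                       # somewhat larger beams reach picograms
```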
Spin physics Spin physics is a broad and active area of condensed-matter physics research. ‘Spin’ in this context refers to a quantum mechanical property of certain elementary particles and nuclei, and should not be confused with the classical (and better-known) concept of rotation. Spin physics spans studies of nuclear, electron and proton magnetic resonance, magnetism, and certain areas of optics. Spintronics is a branch of spin physics. Perhaps the two best-known applications of spin physics are Magnetic Resonance Imaging (or MRI) and the spintronic giant-magnetoresistive (GMR) hard disk read head. Spintronics Spintronic magnetoresistance is a major scientific and commercial success story. Today, most families own a spintronic device: the giant-magnetoresistive (GMR) hard disk read head in their computer. The science that gave rise to this phenomenal business opportunity – and earned the 2007 Nobel Prize in Physics – was the recognition that electrical carriers are characterized by both charge and spin. At present, tunnelling magnetoresistance (TMR) – which uses the electron spin as a label to allow or forbid electron tunnelling – dominates the hard disk market and is rapidly establishing itself in areas as diverse as magnetic logic devices and biosensors. Ongoing development is pushing the frontiers of TMR devices towards the nanoscale. See also Spintronics Mechatronics Microelectromechanical systems Nanoelectromechanical systems Magnetic resonance force microscopy List of nanotechnology applications Giant magnetoresistance Albert Fert Peter Grünberg Nobel Prize in Physics Hard disk drive Magnetism Quantum tunnelling References External links Electric and magnetic fields in matter Electrical engineering Materials science Nanoelectronics Microtechnology Electromechanical engineering Control engineering
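As a quantitative illustration of the 'spin as a label' idea behind TMR, the sketch below evaluates Julliere's model, a textbook estimate that this article does not itself invoke; the electrode spin polarizations are assumed values.

```python
# Julliere's model (textbook estimate, not from the article): TMR ratio of a magnetic
# tunnel junction from the spin polarizations P1, P2 of its two electrodes.
def julliere_tmr(p1: float, p2: float) -> float:
    """Return TMR = (R_AP - R_P) / R_P = 2*P1*P2 / (1 - P1*P2)."""
    return 2 * p1 * p2 / (1 - p1 * p2)

# Assumed polarizations, roughly representative of common ferromagnetic electrodes.
print(f"TMR ~ {julliere_tmr(0.5, 0.5):.0%}")   # ~67% for P1 = P2 = 0.5
```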
Spinmechatronics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,064
[ "Applied and interdisciplinary physics", "Microtechnology", "Spintronics", "Electric and magnetic fields in matter", "Materials science", "Control engineering", "Electromechanical engineering", "Condensed matter physics", "nan", "Nanoelectronics", "Mechanical engineering by discipline", "Nanot...
21,541,088
https://en.wikipedia.org/wiki/Morison%20equation
In fluid dynamics, the Morison equation is a semi-empirical equation for the inline force on a body in oscillatory flow. It is sometimes called the MOJS equation after all four authors—Morison, O'Brien, Johnson and Schaaf—of the 1950 paper in which the equation was introduced. The Morison equation is used to estimate the wave loads in the design of oil platforms and other offshore structures. Description The Morison equation is the sum of two force components: an inertia force in phase with the local flow acceleration and a drag force proportional to the (signed) square of the instantaneous flow velocity. The inertia force is of the functional form as found in potential flow theory, while the drag force has the form as found for a body placed in a steady flow. In the heuristic approach of Morison, O'Brien, Johnson and Schaaf these two force components, inertia and drag, are simply added to describe the inline force in an oscillatory flow. The transverse force—perpendicular to the flow direction, due to vortex shedding—has to be addressed separately. The Morison equation contains two empirical hydrodynamic coefficients—an inertia coefficient and a drag coefficient—which are determined from experimental data. As shown by dimensional analysis and in experiments by Sarpkaya, these coefficients depend in general on the Keulegan–Carpenter number, Reynolds number and surface roughness. The descriptions given below of the Morison equation are for uni-directional onflow conditions as well as body motion. Fixed body in an oscillatory flow In an oscillatory flow with flow velocity u(t), the Morison equation gives the inline force parallel to the flow direction: F = ρ Cm V (du/dt) + ½ ρ Cd A u|u|, with ρ the fluid density, and where F(t) is the total inline force on the object, du/dt is the flow acceleration, i.e. the time derivative of the flow velocity u(t), the inertia force ρ Cm V (du/dt) is the sum of the Froude–Krylov force ρ V (du/dt) and the hydrodynamic mass force ρ Ca V (du/dt), the drag force ½ ρ Cd A u|u| is according to the drag equation, Cm = 1 + Ca is the inertia coefficient and Ca the added mass coefficient, A is a reference area, e.g. the cross-sectional area of the body perpendicular to the flow direction, and V is the volume of the body. For instance, for a circular cylinder of diameter D in oscillatory flow, the reference area per unit cylinder length is A = D and the cylinder volume per unit cylinder length is V = (π/4) D². As a result, the total force per unit cylinder length is F = ρ Cm (π/4) D² (du/dt) + ½ ρ Cd D u|u| (a numerical sketch of this form follows at the end of this entry). Besides the inline force, there are also oscillatory lift forces perpendicular to the flow direction, due to vortex shedding. These are not covered by the Morison equation, which is only for the inline forces. Moving body in an oscillatory flow In case the body moves as well, with velocity v(t), the Morison equation becomes: F = ρ V (du/dt) + ρ Ca V (du/dt − dv/dt) + ½ ρ Cd A (u − v)|u − v|, where the total force contributions are: a: the Froude–Krylov force ρ V (du/dt), b: the hydrodynamic mass force ρ Ca V (du/dt − dv/dt), c: the drag force ½ ρ Cd A (u − v)|u − v|. Note that the added mass coefficient Ca is related to the inertia coefficient Cm as Cm = 1 + Ca. Limitations The Morison equation is a heuristic formulation of the force fluctuations in an oscillatory flow. The first assumption is that the flow acceleration is more-or-less uniform at the location of the body. For instance, for a vertical cylinder in surface gravity waves this requires that the diameter of the cylinder is much smaller than the wavelength. If the diameter of the body is not small compared to the wavelength, diffraction effects have to be taken into account. 
Second, it is assumed that the asymptotic forms, the inertia and drag force contributions, valid for very small and very large Keulegan–Carpenter numbers respectively, can just be added to describe the force fluctuations at intermediate Keulegan–Carpenter numbers. However, from experiments it is found that in this intermediate regime—where both drag and inertia are giving significant contributions—the Morison equation is not capable of describing the force history very well, although the inertia and drag coefficients can be tuned to give the correct extreme values of the force. Third, when extended to orbital flow, which is a case of non-unidirectional flow, for instance encountered by a horizontal cylinder under waves, the Morison equation does not give a good representation of the forces as a function of time. References Further reading Fluid dynamics Equations of fluid dynamics Water waves Marine engineering
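As a postscript to this entry, and as promised above, a numerical sketch of the reconstructed fixed-cylinder form of the equation; the density, diameter, coefficient values and flow parameters are illustrative assumptions.

```python
# Morison inline force per unit length on a fixed vertical cylinder in sinusoidal flow:
# f(t) = rho*Cm*(pi/4)*D^2*(du/dt) + 0.5*rho*Cd*D*u*|u|   (form reconstructed above)
import math

rho = 1025.0          # seawater density, kg/m^3
D = 1.0               # cylinder diameter, m (assumed)
Cm, Cd = 2.0, 1.0     # inertia and drag coefficients (illustrative; in practice taken
                      # from Keulegan-Carpenter number, Reynolds number and roughness)
U, T = 2.0, 10.0      # velocity amplitude (m/s) and oscillation period (s), assumed

omega = 2 * math.pi / T
for t in (0.0, T / 8, T / 4):
    u = U * math.sin(omega * t)              # flow velocity
    dudt = U * omega * math.cos(omega * t)   # flow acceleration
    f_inertia = rho * Cm * math.pi / 4 * D**2 * dudt
    f_drag = 0.5 * rho * Cd * D * u * abs(u)
    print(f"t = {t:5.2f} s: inertia {f_inertia:8.1f} N/m, drag {f_drag:8.1f} N/m")
```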
Morison equation
[ "Physics", "Chemistry", "Engineering" ]
924
[ "Physical phenomena", "Equations of fluid dynamics", "Equations of physics", "Water waves", "Chemical engineering", "Waves", "Marine engineering", "Piping", "Fluid dynamics" ]
21,544,094
https://en.wikipedia.org/wiki/Flow-shop%20scheduling
Flow-shop scheduling is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job-scheduling problem, we are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m machines with varying processing power, while trying to minimize the makespan – the total length of the schedule (that is, when all the jobs have finished processing). In the specific variant known as flow-shop scheduling, each job contains exactly m operations. The i-th operation of the job must be executed on the i-th machine. No machine can perform more than one operation simultaneously. For each operation of each job, execution time is specified. Flow-shop scheduling is a special case of job-shop scheduling where there is a strict order of all operations to be performed on all jobs. Flow-shop scheduling applies to production facilities as well as to computing designs. A special type of flow-shop scheduling problem is the permutation flow-shop scheduling problem, in which the processing order of the jobs on the resources is the same for each subsequent step of processing. In the standard three-field notation for optimal-job-scheduling problems, the flow-shop variant is denoted by F in the first field. For example, the problem denoted by "F3|pij=1|Cmax" is a 3-machine flow-shop problem with unit processing times, where the goal is to minimize the maximum completion time. Formal definition There are m machines and n jobs. Each job contains exactly m operations. The i-th operation of the job must be executed on the i-th machine. No machine can perform more than one operation simultaneously. For each operation of each job, execution time is specified. Operations within one job must be performed in the specified order. The first operation gets executed on the first machine, then (as the first operation is finished) the second operation on the second machine, and so on until the m-th operation. Jobs can be executed in any order, however. The problem definition implies that this job order is exactly the same for each machine. The problem is to determine the optimal such arrangement, i.e. the one with the shortest possible total job execution makespan. Sequencing performance measurements (γ) The sequencing problem can be stated as determining a sequence S such that one or several sequencing objectives are optimized: (average) flow time; makespan, Cmax; (average) tardiness; and others. A detailed discussion of performance measurement can be found in Malakooti (2013). Complexity of flow-shop scheduling As presented by Garey et al. (1976), most extensions of the flow-shop scheduling problem are NP-hard, and few of them can be solved optimally in O(n log n); for example, F2|prmu|Cmax can be solved optimally by using Johnson's rule. Taillard provides substantial benchmark problems for scheduling flow shops, open shops, and job shops. Solution methods The proposed methods to solve flow-shop scheduling problems can be classified as exact algorithms, such as branch and bound, and heuristic algorithms, such as genetic algorithms. Minimizing makespan, Cmax F2|prmu|Cmax (and certain special cases of F3|prmu|Cmax) can be solved optimally by using Johnson's rule, but for the general case there is no known efficient algorithm that guarantees an optimal solution. Consider a flow shop containing n jobs simultaneously available at time zero, to be processed by two machines arranged in series with unlimited storage in between them. The processing times of all jobs are known with certainty. 
It is required to schedule the n jobs on the two machines so as to minimize the makespan. Johnson's rule for scheduling jobs in a two-machine flow shop is given below (a runnable sketch of the rule follows at the end of this entry). In an optimal schedule, job i precedes job j if min{p1i, p2j} < min{p1j, p2i}, where p1i is the processing time of job i on machine 1 and p2i is the processing time of job i on machine 2; similarly, p1j and p2j are the processing times of job j on machine 1 and machine 2 respectively. For Johnson's algorithm: Let p1j be the processing time of job j on machine 1 and p2j the processing time of job j on machine 2. Johnson's algorithm: Form set1 containing all the jobs with p1j < p2j. Form set2 containing all the jobs with p1j > p2j; the jobs with p1j = p2j may be put in either set. Form the sequence as follows: (i) the jobs in set1 go first in the sequence, and they go in increasing order of p1j (SPT); (ii) the jobs in set2 follow in decreasing order of p2j (LPT). Ties are broken arbitrarily. This type of schedule is referred to as an SPT(1)–LPT(2) schedule. A detailed discussion of the available solution methods is provided by Malakooti (2013). See also Open-shop scheduling Job-shop scheduling References Optimal scheduling Workflow technology Engineering management
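As a postscript to this entry, the forward reference above points here: a direct implementation of the SPT(1)–LPT(2) construction, with invented job data.

```python
# Johnson's rule for the two-machine flow shop F2|prmu|Cmax, as described above.
def johnson_sequence(p1, p2):
    """p1[j], p2[j]: processing times of job j on machines 1 and 2."""
    jobs = range(len(p1))
    # Ties (p1j == p2j) may go in either set; here they go to set1.
    set1 = sorted((j for j in jobs if p1[j] <= p2[j]), key=lambda j: p1[j])   # SPT on machine 1
    set2 = sorted((j for j in jobs if p1[j] > p2[j]), key=lambda j: -p2[j])   # LPT on machine 2
    return set1 + set2

def makespan(seq, p1, p2):
    """Completion time of the last job when both machines process jobs in order seq."""
    c1 = c2 = 0
    for j in seq:
        c1 += p1[j]                # machine 1 is never idle
        c2 = max(c2, c1) + p2[j]   # machine 2 must wait for machine 1's output
    return c2

# Hypothetical five-job instance.
p1 = [3, 5, 1, 6, 7]
p2 = [6, 2, 2, 6, 5]
seq = johnson_sequence(p1, p2)
print("sequence:", seq, "makespan:", makespan(seq, p1, p2))  # sequence [2, 0, 3, 4, 1], makespan 24
```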
Flow-shop scheduling
[ "Engineering" ]
1,075
[ "Optimal scheduling", "Engineering economics", "Engineering management", "Industrial engineering" ]
29,081,224
https://en.wikipedia.org/wiki/Protection%20of%20the%20Arctic%20Marine%20Environment
The Protection of the Arctic Marine Environment Working Group (PAME) is one of six working groups encompassed by the Arctic Council. Founded as part of the 1991 Arctic Environmental Protection Strategy, it was assimilated into the structure of the Council following the signing of the 1996 Ottawa Declaration by the eight Arctic states. The Working Group operates across the domains of Arctic shipping, maritime pollution, marine protected areas, ecosystem approaches to management, resource exploitation and development, and associations with the marine environment. Where necessary, it is tasked with producing guidelines and recommendations for policy improvement, with projects approved every two years by the council. The working group includes representatives from each state (Canada, Denmark (representing both Greenland and the Faroe Islands), Finland, Iceland, Norway, Russia, Sweden, and the United States) as well as its Permanent Participants (Aleut International Association (AIA), Arctic Athabaskan Council (AAC), Gwich'in Council International (GCI), Inuit Circumpolar Council (ICC), Russian Association of Indigenous Peoples of the North (RAIPON), and the Saami Council) representing the region's indigenous populations, and a number of observers. The Secretariat for PAME is located in Akureyri, Iceland. Objectives The overarching objectives for PAME were formally outlined in the 2009 meeting held in Oslo, Norway. These objectives are: To improve knowledge of, and respond to emerging knowledge of, the Arctic marine environment. To determine the adequacy of applicable international/regional commitments and promote their implementation and compliance. To facilitate partnerships, programme and technical cooperation, and support communication, reporting and outreach both within and outside the Arctic Council. Arctic Maritime Strategic Plans PAME's objectives, and strategies to further their realisation, are periodically delineated in the group's Arctic Maritime Strategic Plans. As of May 2020, the group has published two such documents: one for the years between 2005 and 2015, and the other for those between 2015 and 2025. Arctic Maritime Strategic Plan 2005–2015 The first of the Working Group's Arctic Maritime Strategic Plans was approved by the Arctic Council in November 2004. The plan centred around: The curtailment and prevention of Arctic marine pollution, The development of sustainable mechanisms for employment of Arctic marine resources, The conservation of marine biodiversity and ecosystems, and The 'health and prosperity' of all Arctic inhabitants. Arctic Maritime Strategic Plan 2015–2025 The second Arctic Maritime Strategic Plan delineated four distinct goals relating to protection of the Arctic marine environment for the following 10 years. It was approved by the Council in April 2015 at a meeting in Iqaluit, Canada. The plan called for: The furthering of knowledge relating to the Arctic marine environment and the continuation of the domain's monitoring and assessment structures, The conservation of ecosystem function and encouragement of marine biodiversity, Sustainable employment of the marine environment, centred around cumulative environmental impacts, and The enhancement of the 'economic, social and cultural well-being of Arctic inhabitants', including the Arctic's indigenous population, and the strengthening of their capacity to adapt to ongoing changes in the Arctic marine environment. 
The second Arctic Maritime Strategic Plan was accompanied by two plans to strengthen its progress: the Implementation Plan and the Communication Plan. The former was intended to provide the Arctic Council with a more structured approach to achieving the Working Group's goals, as well as providing guidelines against which progress could be appraised. The latter aimed to facilitate communication and understanding to ensure the fulfilment of the working group's goals. Cooperation PAME works in partnership with the Arctic Council's five additional working groups to construct its strategic plans and suggest mechanisms for their implementation. The five additional working groups of the Arctic Council are: Arctic Monitoring and Assessment Programme (AMAP) Conservation of Arctic Flora & Fauna (CAFF) Emergency Prevention, Preparedness & Response (EPPR) Sustainable Development Working Group (SDWG) Arctic Contaminants Action Program (ACAP) Projects Arctic Marine Shipping Assessment In 2009, the working group released a major report on Arctic marine shipping, the Arctic Marine Shipping Assessment, which analyses current and predicted trends in Arctic transport. PAME was tasked to conduct this research after the 2004 Arctic Climate Impact Assessment concluded that Arctic sea ice decline "is very likely to increase marine transport and access to resources". PAME identifies increases in marine tourism and in transport supporting the exploration and extraction of marine resources as potential catalysts of environmental degradation. Maritime pollution A key role of PAME relates to the proliferation of Arctic maritime contamination and pollution, stemming from both off-shore and on-shore activities. As of May 2020, PAME is in the process of developing a regional action plan on marine litter in the Arctic, as an extension of a preliminary study on Arctic marine litter and micro-plastics which was carried out between 2017 and 2019. The group's focus on Arctic maritime pollution can be identified in its 1998 Regional Programme of Action for the Protection of the Arctic Marine Environment from Land-based Activities (Arctic RPA), updated in 2009 to reflect contemporary challenges in preventing Arctic marine pollution. Marine protected areas The expert group of PAME's Marine Protected Areas subdivision is co-led by Canada, Norway and the United States. PAME adheres to the definition of a 'Marine Protected Area' offered by the International Union for Conservation of Nature. The definition focuses on the existence of a distinct geographical area managed in such a way as to ensure the long-term conservation and protection of its natural ecosystems. There are seven categories to accommodate the various characters of protected areas that exist globally, with the Arctic states each possessing policy tools to designate and manage these. Ecosystem approaches to management Between 2006 and 2017, PAME produced a total of 11 ecosystem approaches to management (EA) progress reports, and has developed a framework for application of the approach to its projects. The framework principles involve identifying the character of local ecosystems before considering them adjacent to the group's assessments, objectives and values. Decisions with respect to the management of local human activity are made based on the outcomes of these processes. 
The expert group on Arctic ecosystem-based management (EBM) was established in 2011, and its centrality to the work of the Council was reinforced in both 2015 and 2017, with council ministers agreeing on the requirement for distinct guidelines for implementing the ecosystem approach in the Arctic region. PAME and the Arctic Council define ecosystem approaches to management as the management of human activities based on contemporary, scientific knowledge of local ecosystems. The purpose of the approach is to facilitate management decisions on human activity which balance the exploitation of marine resources with ecosystem prosperity. Marine resource exploitation and development In 2009, PAME released guidelines for the exploitation of Arctic off-shore gas and oil, delineating the Arctic Council's understanding of "good practice" for the stages of planning, exploring, developing, producing and decommissioning areas and equipment employed in resource exploitation and development. Since the commissioning of these guidelines, PAME has published eight follow-up documents detailing further development of local regulation of Arctic oil and gas drilling activities. PAME meetings PAME convenes on an annual or biannual basis, and meets with the Ministers of the Arctic Council once every two years. The location of PAME meetings alternates between cities situated in the eight Arctic states. Past meetings by month, year and location See also International Arctic Science Committee United Nations Environment Programme Ilulissat Declaration Arctic cooperation and politics References Other sources https://archive.today/20080511222501/http://arctic-council.org/working_group/pame http://www.pame.is Ocean pollution Government of the Arctic
Protection of the Arctic Marine Environment
[ "Chemistry", "Environmental_science" ]
1,556
[ "Ocean pollution", "Water pollution" ]
29,083,835
https://en.wikipedia.org/wiki/Diamond%20Ranch%20Academy
Diamond Ranch Academy was a therapeutic boarding school just outside the town of Hurricane, Utah, United States. It admitted adolescents aged 12–18 with various issues, including anger management issues and major depressive disorder. Diamond Ranch Academy was founded in Idaho Falls in 1999 by Rob Dias and later moved to southern Utah, where it occupied a ranch. It closed in August 2023 after a decision by Utah officials not to renew the school's license. Its education programs were accredited by the Northwest Accreditation Commission and The Joint Commission, and its courses generally lasted between ten and twelve months. Activities included various sports, including interscholastic competition, as well as caring for farm animals. Diamond Ranch Academy charged a tuition fee of $12,000 per month. Some students who required special education services had their tuition fees covered by school districts in California and Washington. In 2022 a student died at the school after a period of illness, and the Utah Department of Health subsequently issued an extreme-level citation to Diamond Ranch Academy for failure to provide and seek necessary medical care for a client. History When Diamond Ranch Academy first opened in 1999, it was a working ranch in Idaho. Students were expected to take part in a cattle drive. During the first 2–6 weeks at Diamond Ranch Academy there was no educational component; students would take part in a wilderness component of the program. Afterwards the student would receive "continuing education packets" that had been developed by Brigham Young University. Enrollees aged 12 to 17 were housed in groups based on age and gender in four separate areas of the ranch. Students who had reached the age of 18 before completing the program were housed in a fifth area. Diamond Ranch Academy had three locations where it ran its programs: Timber Creek Ranch, near the town of Salmon, Idaho; the Swan Valley Ranch near Jackson Hole, Wyoming; and the Pitchfork Ranch in southern Idaho. In 2001, the school moved to a campus outside of the town of Hurricane, Utah. In 2012, a new campus was opened at a site near Hurricane. In December 2022 a 17-year-old girl, Taylor Goodridge, collapsed and died from sepsis caused by acute peritonitis while attending the school, which resulted in media attention. The State of Utah Department of Human Services found in a subsequent investigation that she had been ill since October 2022, reporting back pain, difficulty breathing, and difficulty sleeping because of the pain. Goodridge was found to have vomited at least 14 times in the 12-day period prior to her death. Goodridge's parents have since filed a civil lawsuit against the school, claiming that she "begged for help" multiple times before she died without being provided medical care. Between December 2022 and March 2023, Diamond Ranch Academy was suspended from taking on new students by the State of Utah Department of Human Services while the student's death was investigated. In February 2023 Sky News published an article discussing Goodridge's death. It included claims by a previous client who alleged that she had suffered partial facial paralysis after being restrained by staff on the campus. Goodridge's death was the third recorded fatality of a student at Diamond Ranch Academy. On July 11, 2023, the Utah Department of Human Services declined Diamond Ranch Academy's request for license renewal in its capacity as a Residential Treatment Center and Therapeutic Boarding School. 
Diamond Ranch Academy officially closed on August 14, 2023. In 2024, staff members and administrators from Diamond Ranch Academy were listed on a license application for RAFA Academy, a boarding school at the old Diamond Ranch Academy campus. School structure Diamond Ranch Academy used a token economy system as part of its program. Students earned credits by completing school work, the reward being extra benefits and activities. Notable staff Chad Ryan Huntsman Former headteachers Cory Henwood Bo Iverson Oversight State of Utah Department of Human Services National Association of Therapeutic Schools and Programs References External links Official site Diamond Ranch Academy Reviews Private high schools in Utah Private middle schools in Utah Therapeutic boarding schools in the United States Educational institutions established in 1999 Schools in Washington County, Utah 1999 establishments in Idaho Behavior modification Troubled teen programs
Diamond Ranch Academy
[ "Biology" ]
840
[ "Behavior modification", "Behavior", "Human behavior", "Behaviorism" ]
29,089,664
https://en.wikipedia.org/wiki/Diagnosis%20of%20myocardial%20infarction
A diagnosis of myocardial infarction is made by integrating the history of the presenting illness and physical examination with electrocardiogram findings and cardiac markers (blood tests for heart muscle cell damage). A coronary angiogram allows visualization of narrowings or obstructions on the heart vessels, and therapeutic measures can follow immediately. At autopsy, a pathologist can diagnose a myocardial infarction based on anatomopathological findings. A chest radiograph and routine blood tests may indicate complications or precipitating causes and are often performed upon arrival to an emergency department. New regional wall motion abnormalities on an echocardiogram are also suggestive of a myocardial infarction. Echo may be performed in equivocal cases by the on-call cardiologist. In stable patients whose symptoms have resolved by the time of evaluation, technetium (99mTc) sestamibi (i.e. a "MIBI scan"), thallium-201 chloride or rubidium-82 chloride can be used in nuclear medicine to visualize areas of reduced blood flow in conjunction with physiologic or pharmacologic stress. Thallium may also be used to determine viability of tissue, distinguishing whether non-functional myocardium is actually dead or merely in a state of hibernation or of being stunned. Diagnostic criteria According to the WHO criteria as revised in 2000, a cardiac troponin rise accompanied by either typical symptoms, pathological Q waves, ST elevation or depression, or coronary intervention is diagnostic of MI. Previous WHO criteria formulated in 1979 put less emphasis on cardiac biomarkers; according to these, a patient is diagnosed with myocardial infarction if two (probable) or three (definite) of the following criteria are satisfied: Clinical history of ischaemic type chest pain lasting for more than 20 minutes Changes in serial ECG tracings Rise and fall of serum cardiac biomarkers such as creatine kinase-MB fraction and troponin Physical examination The general appearance of patients may vary according to the experienced symptoms; the patient may be comfortable, or restless and in severe distress with an increased respiratory rate. A cool and pale skin is common and points to vasoconstriction. Some patients have low-grade fever (38–39 °C). Blood pressure may be elevated or decreased, and the pulse can become irregular. If heart failure ensues, elevated jugular venous pressure and hepatojugular reflux, or swelling of the legs due to peripheral edema may be found on inspection. Rarely, a cardiac bulge with a pace different from the pulse rhythm can be felt on precordial examination. Various abnormalities can be found on auscultation, such as a third and fourth heart sound, systolic murmurs, paradoxical splitting of the second heart sound, a pericardial friction rub and rales over the lungs. Electrocardiogram The primary purpose of the electrocardiogram is to detect ischemia or acute coronary injury in broad, symptomatic emergency department populations. A serial ECG may be used to follow rapid changes in time. The standard 12 lead ECG does not directly examine the right ventricle, and is relatively poor at examining the posterior basal and lateral walls of the left ventricle. In particular, acute myocardial infarction in the distribution of the circumflex artery is likely to produce a nondiagnostic ECG. The use of additional ECG leads like right-sided leads V3R and V4R and posterior leads V7, V8, and V9 may improve sensitivity for right ventricular and posterior myocardial infarction. 
The 12 lead ECG is used to classify patients into one of three groups: those with ST segment elevation or new bundle branch block (suspicious for acute injury and possible candidates for acute reperfusion therapy with thrombolytics or primary PCI), those with ST segment depression or T wave inversion (suspicious for ischemia), and those with a so-called non-diagnostic or normal ECG. A normal ECG does not rule out acute myocardial infarction. Mistakes in interpretation are relatively common, and the failure to identify high risk features has a negative effect on the quality of patient care. It should be determined if a person is at high risk for myocardial infarction before conducting imaging tests to make a diagnosis. People who have a normal ECG and who are able to exercise, for example, do not merit routine imaging. Imaging tests such as stress radionuclide myocardial perfusion imaging or stress echocardiography can confirm a diagnosis when a person's history, physical exam, ECG and cardiac biomarkers suggest the likelihood of a problem. Cardiac markers Cardiac markers or cardiac enzymes are proteins that leak out of injured myocardial cells through their damaged cell membranes into the bloodstream. Until the 1980s, the enzymes SGOT and LDH were used to assess cardiac injury. Now, the markers most widely used in detection of MI are the MB subtype of the enzyme creatine kinase and the cardiac troponins T and I, as they are more specific for myocardial injury. The cardiac troponins T and I, which are released within 4–6 hours of an attack of MI and remain elevated for up to 2 weeks, have nearly complete tissue specificity and are now the preferred markers for assessing myocardial damage. Heart-type fatty acid binding protein is another marker, used in some home test kits. Elevated troponins in the setting of chest pain may accurately predict a high likelihood of a myocardial infarction in the near future. New markers such as glycogen phosphorylase isoenzyme BB are under investigation. Note that only the cardiac troponins are used clinically for the diagnosis of myocardial infarction, as creatine kinase adds little value while adding to system cost. The diagnosis of myocardial infarction requires two out of three components (history, ECG, and enzymes). When damage to the heart occurs, levels of cardiac markers rise over time, which is why blood tests for them are taken over a 24-hour period. Because these enzyme levels are not elevated immediately following a heart attack, patients presenting with chest pain are generally treated with the assumption that a myocardial infarction has occurred and then evaluated for a more precise diagnosis. Angiography In difficult cases or in situations where intervention to restore blood flow is appropriate, coronary angiography can be performed. A catheter is inserted into an artery (typically the radial or femoral artery) and advanced to the vessels supplying the heart. A radio-opaque dye is administered through the catheter and a sequence of x-rays (fluoroscopy) is performed. Obstructed or narrowed arteries can be identified, and angioplasty applied as a therapeutic measure (see below). Angioplasty requires extensive skill, especially in emergency settings. It is performed by a physician trained in interventional cardiology. Histopathology Histopathological examination of the heart may reveal infarction at autopsy. Gross examination may reveal signs of myocardial infarction. 
Under the microscope, myocardial infarction presents as a circumscribed area of ischemic, coagulative necrosis (cell death). On gross examination, the infarct is not identifiable within the first 12 hours. Although earlier changes can be discerned using electron microscopy, one of the earliest changes visible under a normal microscope is the appearance of so-called wavy fibers. Subsequently, the myocyte cytoplasm becomes more eosinophilic (pink) and the cells lose their transverse striations, with typical changes and eventually loss of the cell nucleus. The interstitium at the margin of the infarcted area is initially infiltrated with neutrophils, then with lymphocytes and macrophages, which phagocytose ("eat") the myocyte debris. The necrotic area is surrounded and progressively invaded by granulation tissue, which will replace the infarct with a fibrous (collagenous) scar (which are typical steps in wound healing). The interstitial space (the space between cells outside of blood vessels) may be infiltrated with red blood cells. These features can be recognized in cases where the perfusion was not restored; reperfused infarcts can have other hallmarks, such as contraction band necrosis. These tables give an overview of the histopathology seen in myocardial infarction by time after obstruction. By individual parameters Some authors summarize the vascular and early fibrotic changes as 'granulation tissue', which is maximal at 2–3 weeks Differential diagnoses for myocardial fibrosis: Interstitial fibrosis, which is nonspecific, having been described in congestive heart failure, hypertension, and normal aging. Subepicardial fibrosis, which is associated with non-infarction diagnoses such as myocarditis and non-ischemic cardiomyopathy. Chronological See also Myocardial infarction management Myocardial infarction complications Notes References Aging-associated diseases Cardiovascular diseases Diagnostic cardiology Ischemic heart diseases Medical emergencies Cardiac procedures
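As a postscript to this entry, the 1979 WHO rule quoted earlier ("two of three probable, three of three definite") is simple enough to state as code; this is an illustrative paraphrase, not a clinical decision tool.

```python
# Toy encoding of the 1979 WHO criteria described above: two of three criteria give a
# "probable" MI, three of three a "definite" MI. Illustrative only - not clinical software.
def who_1979_classification(ischemic_chest_pain_over_20_min: bool,
                            serial_ecg_changes: bool,
                            biomarker_rise_and_fall: bool) -> str:
    n = sum([ischemic_chest_pain_over_20_min, serial_ecg_changes, biomarker_rise_and_fall])
    if n == 3:
        return "definite myocardial infarction"
    if n == 2:
        return "probable myocardial infarction"
    return "criteria not met"

print(who_1979_classification(True, True, False))   # probable myocardial infarction
```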
Diagnosis of myocardial infarction
[ "Biology" ]
1,959
[ "Senescence", "Aging-associated diseases" ]
29,090,067
https://en.wikipedia.org/wiki/Nitroreductase
Nitroreductases are a family of evolutionarily related proteins involved in the reduction of nitrogen-containing compounds, including those containing the nitro functional group. Members of this family utilise flavin mononucleotide as a cofactor and are often found to be homodimers. Members of this family include oxygen-insensitive NAD(P)H nitroreductase (flavin mononucleotide-dependent nitroreductase), 6,7-dihydropteridine reductase, and NADH dehydrogenase. A number of these proteins are described as oxidoreductases. They are primarily found in bacterial lineages, though a number of eukaryotic homologs have been identified in C. elegans, D. melanogaster, mouse and human. This protein is not found in photosynthetic eukaryotes; the sequences containing this entry in photosynthetic organisms are possible false positives. The nitroreductase of Enterobacter cloacae was identified by Bryant and DeLuca in a strain isolated from a munitions facility, on the basis of its ability to metabolize TNT (trinitrotoluene). Since then many homologues have been identified, and the family is now known to include members in diverse organisms that catalyse diverse reactions. The iodotyrosine deiodinase of mammals is a dehalogenase, while the BluB protein of Sinorhizobium meliloti cannibalizes its bound flavin mononucleotide to furnish a critical intermediate in vitamin B12 biosynthesis. Crystal structures of the E. cloacae and E. coli enzymes have been published with a variety of substrates and analogues bound. An example of a potential cold-active enzyme for prodrug therapy was described using a cold-active nitroreductase, Ssap-NtrB (Çelik and Yetis, 2012). Despite being derived from a mesophilic bacterium, Ssap-NtrB showed optimal activity at 20 °C against cancer prodrugs. The authors comment that the cold-activity of this novel enzyme may be useful for therapies in combination with cryotherapy, exposing the target tissue to low temperatures in order to trigger the enzyme activity and activate the drug only where it is required. Moreover, the enzyme could also be used for bioremediation of compounds of an explosive and volatile nature in regions where high activity at low temperatures is needed. Subfamilies Cob(II)yrinic acid a,c-diamide reductase Human proteins containing this domain Iodotyrosine deiodinase (IYD) References Protein domains Single-pass transmembrane proteins
Nitroreductase
[ "Biology" ]
587
[ "Protein domains", "Protein classification" ]
29,092,679
https://en.wikipedia.org/wiki/Sigma-t
Sigma-t (σT) is a quantity used in oceanography to measure the density of seawater at a given temperature. It is defined as σT = ρ(S,T) − 1000 kg m−3, where ρ(S,T) is the density of a sample of seawater at temperature T and salinity S, measured in kg m−3 at standard atmospheric pressure. For example, a water sample with a density of 1.027 g/cm3, i.e. 1027 kg m−3, has a σT value of 27. References See also Density of saltwater and ice Units of density
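As a postscript, the definition amounts to a subtraction, so the worked example from the text can be checked in two lines; the helper name is ours.

```python
# sigma-t from the definition above: sigma_t = rho(S, T) - 1000 kg/m^3.
def sigma_t(density_kg_m3: float) -> float:
    return density_kg_m3 - 1000.0

print(sigma_t(1027.0))  # water of 1.027 g/cm^3 -> sigma-t of 27, as in the text
```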
Sigma-t
[ "Physics", "Mathematics" ]
117
[ "Physical quantities", "Units of density", "Quantity", "Density", "Units of measurement" ]
29,094,048
https://en.wikipedia.org/wiki/Inverse%20problem%20in%20optics
The inverse problem in optics (or the inverse optics problem) refers to the fundamentally ambiguous mapping between sources of retinal stimulation and the retinal images that are caused by those sources. For example, the size of an object, the orientation of the object, and its distance from the observer are conflated in the retinal image. For any given projection on the retina there are an infinite number of pairings of object size, orientation and distance that could have given rise to that projection on the retina. Because the image on the retina does not specify which pairing did in fact cause the image, this and other aspects of vision qualify as inverse problems. References Inverse problems Optics
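As a postscript, the size–distance conflation described above can be demonstrated numerically: pairs of object size and distance in a fixed ratio subtend the same visual angle, so the retinal image alone cannot distinguish them. The formula and numbers are illustrative.

```python
# Size-distance ambiguity: all (size, distance) pairs with the same size/distance ratio
# subtend the same visual angle on the retina.
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

for size, dist in [(0.1, 1.0), (1.0, 10.0), (10.0, 100.0)]:
    print(f"{size:5.1f} m object at {dist:6.1f} m -> {visual_angle_deg(size, dist):.3f} deg")
# All three lines print the same angle (~5.725 deg).
```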
Inverse problem in optics
[ "Physics", "Chemistry", "Mathematics" ]
143
[ "Applied and interdisciplinary physics", "Optics", "Applied mathematics", " molecular", "Atomic", "Inverse problems", " and optical physics" ]
29,096,486
https://en.wikipedia.org/wiki/Jscrambler
Jscrambler is a technology company mainly known for its JavaScript obfuscator and eponymous monitoring framework. The obfuscator makes it harder to reverse engineer a web application's client-side code and tamper with its integrity, while the monitoring framework provides real-time detection of web skimming, DOM tampering and user-interface changes. Jscrambler's products are used in a number of sectors, including finance, broadcasting and online gaming. History Jscrambler started as AuditMark, which was founded in 2009 by Pedro Fortuna and Rui Ribeiro. The company's initial focus was developing a solution to fight click fraud in advertising campaigns, since the traffic-audit mechanism was JavaScript-dependent. The name of the flagship product – Jscrambler – also became the name of the company, which was officially founded in 2014. In September 2021, Jscrambler raised US$15 million in a series A round led by Ace Capital Partners, with participation from Sonae IM and Portugal Ventures. References JavaScript programming tools Software obfuscation Internet properties established in 2009 Computer security software companies Computer security software
Jscrambler
[ "Technology", "Engineering" ]
235
[ "Cybersecurity engineering", "Computer security software", "Software obfuscation" ]
34,645,248
https://en.wikipedia.org/wiki/Rosati%20involution
In mathematics, a Rosati involution, named after Carlo Rosati, is an involution of the rational endomorphism ring of an abelian variety induced by a polarisation. Let A be an abelian variety, let Â denote the dual abelian variety, and for a ∈ A, let ta : A → A be the translation-by-a map, ta(x) = x + a. Then each divisor D on A defines a map φD : A → Â via φD(a) = [ta*D − D]. The map φD is a polarisation if D is ample. The Rosati involution of End(A) ⊗ Q relative to the polarisation φD sends a map ψ ∈ End(A) ⊗ Q to the map ψ′ = φD⁻¹ ∘ ψ̂ ∘ φD, where ψ̂ : Â → Â is the dual map induced by the action of ψ on Pic(A). Let NS(A) denote the Néron–Severi group of A. The polarisation φD also induces an inclusion Φ : NS(A) ⊗ Q → End(A) ⊗ Q via ΦE = φD⁻¹ ∘ φE. The image of Φ is equal to the set of endomorphisms fixed by the Rosati involution. The operation α ⋆ β = ½(αβ + βα) then gives the image of Φ the structure of a formally real Jordan algebra. References Algebraic geometry Ring theory
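As a postscript, the defining formula and the standard positivity property of the Rosati involution can be displayed together; the positivity statement is a well-known fact recalled here for context rather than taken from this entry's text.

```latex
% Defining formula of the Rosati involution and its standard positivity property.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
For $\psi \in \operatorname{End}(A) \otimes \mathbb{Q}$ and a polarisation $\phi = \phi_D$,
\[ \psi' = \phi^{-1} \circ \hat{\psi} \circ \phi . \]
The involution is positive: the symmetric bilinear form
\[ (\psi, \varphi) \mapsto \operatorname{Tr}(\psi' \circ \varphi) \]
is positive definite on $\operatorname{End}(A) \otimes \mathbb{Q}$.
\end{document}
```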
Rosati involution
[ "Mathematics" ]
195
[ "Fields of abstract algebra", "Ring theory", "Algebraic geometry" ]
34,653,288
https://en.wikipedia.org/wiki/Alternating%20electric%20field%20therapy
Alternating electric field therapy, sometimes called tumor treating fields (TTFields), is a type of electromagnetic field therapy using low-intensity, intermediate-frequency electrical fields to treat cancer. TTFields interfere with cell division by disturbing dipole alignment and inducing dielectrophoresis of critical molecules and organelles during mitosis. These anti-mitotic effects lead to cell death, slowing cancer growth. A TTField-treatment device manufactured by the Israeli company Novocure is approved in the United States and Europe for the treatment of newly diagnosed and recurrent glioblastoma and malignant pleural mesothelioma (MPM), and is undergoing clinical trials for several other tumor types. Despite the technology's regulatory approval, its efficacy remains controversial among medical experts. Mechanism All living cells contain polar molecules and will respond to changes in electric fields. Alternating electric field therapy, or Tumor Treating Fields (TTFields), uses insulated electrodes to apply very-low-intensity, intermediate-frequency alternating electrical fields to a target area containing cancerous cells. Polar molecules play a key role in cell division, making mitosis particularly susceptible to interference from outside electric fields. TTFields disrupt dipole alignment and induce dielectrophoresis during mitosis, killing proliferating cells. Dipole Alignment Polar molecules critical to mitosis include α/β-tubulin and the mitotic septin heterotrimer. Tubulin is necessary for mitotic spindle formation during metaphase, while septins stabilize the cell during cytokinesis. When exposed to TTFields, these molecules align their dipoles with the electric field, freezing them in one orientation. This prevents tubulin and septin molecules from moving to and binding where they are needed for successful cell division. This results in mitotic catastrophe, initiating cell death through apoptosis. Uneven chromosome splitting can also be a result of TTFields' effect on dipole alignment, resulting in daughter cells with abnormal chromosome numbers. Dielectrophoresis Cells that successfully complete metaphase are later susceptible to TTFields during telophase. At this stage in cell division, the cell takes on an hourglass shape as it prepares to divide in two. This results in a non-uniform electric field within the cell, with high field density at the cell's furrow. This causes polar molecules and organelles to migrate with the electric field toward the furrow. This disrupts the cell's division and leads to cell death. Optimization In principle, this approach could be selective for cancer cells in regions of the body, such as the brain, where the majority of normal cells are non-proliferating. The frequency of the TTField can be adjusted between 100 and 300 kHz to target cancer cells and avoid harming healthy cells. Current research suggests that cell size is inversely proportional to the optimal TTField frequency. TTFields can also be optimized by orienting two transducer arrays perpendicular to each other to maximize the number of cells that will be affected. Cells divide in different orientations and are most affected by an electric field that is parallel to their direction of division (perpendicular to the mitotic plate). Clinicians determine where to place the transducer arrays to optimize treatment using software that analyzes tumor location and the patient's morphometry. 
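If the reported inverse proportionality between cell size and optimal frequency is taken at face value, frequency selection can be caricatured as below; the proportionality constant and cell diameters are invented for illustration and carry no clinical meaning.

```python
# Toy illustration of "optimal frequency is inversely proportional to cell size".
# The constant K is hypothetical: chosen only so that typical diameters land in the
# 100-300 kHz band mentioned in the text. Not a clinical or published model.
K = 3600.0  # kHz * um, invented proportionality constant

def optimal_frequency_khz(cell_diameter_um: float) -> float:
    f = K / cell_diameter_um
    return min(max(f, 100.0), 300.0)   # clamp to the device's 100-300 kHz range

for d in (12.0, 18.0, 30.0):           # hypothetical cell diameters, micrometres
    print(f"diameter {d:4.1f} um -> ~{optimal_frequency_khz(d):.0f} kHz")
```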
Other Biological Effects Emerging evidence suggests that alternating electric field therapy disrupts various biological processes, including DNA repair, cell permeability and immunological responses, to elicit therapeutic effects. Greater mechanistic understanding of TTFields may pave the way for new, more effective TTFields-based therapeutic combinations in the future. Medical uses Recurrent glioblastoma The American National Comprehensive Cancer Network's official guidelines list TTFields as an option for the treatment of recurrent glioblastoma, but note substantial disagreement among the members of the expert panel making this recommendation. High-quality evidence for the efficacy of TTFields in oncology is limited. The first randomized clinical trial evaluating TTFields was published in November 2014, and evaluated the efficacy of this approach in patients with recurrent glioblastoma. This trial was the primary basis for regulatory approval of NovoTTF-100A / Optune in the United States and Europe. In this study, patients with glioblastoma that had recurred after initial conventional therapy were randomized to treatment either with a TTFields device (NovoTTF-100A / Optune) or with their treating physician's choice of standard chemotherapy. Median survival in this trial was approximately 6 months and was not significantly better in the TTFields group than in the conventional-therapy group. The results suggested that TTFields and standard chemotherapy might be equally beneficial to patients in this setting, but with different side-effect profiles. Two earlier clinical studies had suggested a benefit of TTFields treatment in recurrent glioblastoma, but definitive conclusions could not be drawn due to their lack of randomized control groups. Newly diagnosed glioblastoma Initial results of a Novocure-sponsored, phase-3, randomized clinical trial of TTFields in patients with newly diagnosed glioblastoma were reported in November 2014 and published in December 2015. Interim analysis showed a statistically significant benefit in median survival for patients treated with TTFields plus conventional therapy (temozolomide, radiation, and surgery) versus patients treated with conventional therapy alone, a result which led the trial's independent data monitoring committee to recommend early study termination. This was the first large-scale trial in a decade to show a survival benefit for patients with newly diagnosed glioblastoma. On the basis of these results, the FDA approved a modification of the trial protocol, allowing all patients on the trial to be offered TTFields. Potential methodological concerns in this trial included the lack of a "sham" control group, raising the possibility of a placebo effect, and the fact that patients receiving TTFields received more cycles of chemotherapy than control patients. This discrepancy might have been a result of improved health and survival in TTFields-treated patients, allowing for more cycles of chemotherapy, but also could have been due to conscious or unconscious bias on the part of clinical investigators. An expert clinical review called the preliminary results "encouraging". Medical device A clinical TTFields device is manufactured by Novocure under the trade name Optune (formerly NovoTTF-100A), and is approved in the United States, Japan, Israel and multiple countries in Europe for the treatment of recurrent glioblastoma. These devices generate alternating electric fields at frequencies between 100 and 300 kHz. 
The devices can be used in conjunction with regular patterns of care for patients, but are only available in certain treatment centers, and require specific training and certification on the part of the prescribing physician. When a TTFields device is used, electrodes resembling a kind of "electric hat" are placed onto a patient's shaved scalp. When not in use, the device's batteries are plugged into a power outlet to be re-charged. Side effects The adverse effects of TTFields include local skin rashes and irritation caused by prolonged electrode use. Compared with other cancer treatment methods, these effects are minimal and tolerable for the patient. The irritation can be controlled with steroid creams and periodic breaks from treatment. Regulatory approval The NovoTTF-100A / Optune device was approved by the U.S. Food and Drug Administration (FDA) in April 2011 for the treatment of patients with recurrent glioblastoma, based on clinical trial evidence suggesting a benefit in this population. Because the evidence for therapeutic efficacy was not deemed conclusive, the device manufacturer was required to conduct additional clinical trials as a condition of device approval. Critics suggested that pleas of cancer patients in the room at the FDA hearing swayed the opinions of many on the related FDA panel, and that approval was granted despite "huge misgivings on several points". Optune was approved by the FDA for newly diagnosed glioblastoma on Oct. 5, 2015, as a result of randomized phase 3 trial results that reported a 3-month advantage in overall survival and progression-free survival when added to chemotherapy with temozolomide. In the US, Medicare covers treatment, as of February 2020. Company Novocure Ltd. (Nasdaq: NVCR) was founded in 2000. As of December 2020, Novocure Ltd. has over 1000 employees and makes hundreds of millions of dollars in annual sales. Yoram Palti, an Israeli professor of physiology and biophysics at the Israel Institute of Technology, is the company's founder and chief technology officer. Novocure Ltd. owns 145 patents. See also Angiogenesis inhibitors Experimental cancer treatment Pulsed electromagnetic field therapy References Further reading External links Approval acknowledgement from FDA Angiogenesis inhibitors Cancer treatments Medical devices
Alternating electric field therapy
[ "Biology" ]
1,817
[ "Angiogenesis", "Angiogenesis inhibitors", "Medical devices", "Medical technology" ]
34,655,097
https://en.wikipedia.org/wiki/MINAS
MINAS is a database of metal ions in nucleic acids. References External links http://www.minas.uzh.ch Biological databases Biochemistry Inorganic chemistry
MINAS
[ "Chemistry", "Biology" ]
34
[ "Biomolecules by chemical classification", "Bioinformatics", "nan", "Biochemistry", "Biological databases", "Nucleic acids" ]
34,655,565
https://en.wikipedia.org/wiki/Carceplex
In host–guest chemistry, a carceplex is a class of chemical structures in the carcerand family that are hinged, and can be closed using reagents that react with the carceplex and trap precursors of reactive intermediates, and are unreactive with the trapped precursor or reactive intermediate. This is useful for determining the spectroscopic and crystallographic properties of reactive intermediates in relative isolation, particularly compounds prone to dimerization like cyclobutadiene. References Supramolecular chemistry
Carceplex
[ "Chemistry", "Materials_science" ]
109
[ "Nanotechnology", "nan", "Supramolecular chemistry" ]
34,656,547
https://en.wikipedia.org/wiki/Staggered%20extension%20process
The staggered extension process (also referred to as StEP) is a common technique used in biotechnology and molecular biology to create new, mutated genes combining qualities of one or more initial genes. The technique itself is a modified polymerase chain reaction with very short (approximately 10-second) cycles. In these cycles the extension step is so brief that only short stretches of DNA (a few hundred base pairs) are synthesized, and the partially extended fragments then anneal to complementary stretches of other template strands. In this way, mutations of the initial genes are shuffled, and in the end genes with new combinations of mutations are amplified. The StEP protocol has been found to be useful as a method of directed evolution for the discovery of enzymes useful to industry. References Genetics Molecular biology techniques
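As a postscript, the shuffling effect of the short cycles can be mimicked in silico by modelling each truncated extension as copying a short stretch from a randomly chosen parent before "reannealing" elsewhere; this is a cartoon of the protocol, not a simulation of PCR chemistry, and all parameters are invented.

```python
# Cartoon of StEP recombination: build a chimeric gene by repeatedly extending a short
# stretch from a randomly chosen parent template. Illustrative only.
import random

def step_shuffle(parents, stretch=10, seed=None):
    """Return a chimera of equal-length parent sequences, switching template every
    `stretch` bases to mimic the brief extension of each staggered cycle."""
    rng = random.Random(seed)
    length = len(parents[0])
    chimera = []
    for start in range(0, length, stretch):
        template = rng.choice(parents)   # partial product anneals to a new strand
        chimera.append(template[start:start + stretch])
    return "".join(chimera)

parent_a = "A" * 40   # toy 'genes'; real inputs would be homologous mutant sequences
parent_b = "B" * 40
print(step_shuffle([parent_a, parent_b], stretch=10, seed=1))  # e.g. AAAAAAAAAABBBBBBBBBB...
```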
Staggered extension process
[ "Chemistry", "Biology" ]
144
[ "Molecular biology techniques", "Genetics", "Molecular biology" ]
2,088,843
https://en.wikipedia.org/wiki/Zentner
The zentner (German Zentner, from Latin centenarius, derived from centum meaning "hundred") is a name for a unit of mass which was used predominantly in Germany, Austria, and Switzerland, although it was also sometimes used in the United Kingdom – for example, as a measure of the weight of certain crops including hops for beer production – and similar units were used in Scandinavia. Like the notion of hundredweight, the zentner is the weight of 100 units, where the value of the unit depends on the time and location. Traditionally the unit was one hundred pounds (German Pfund), with the precise value of the pound being context-dependent, making one zentner equal to about 50 kg. In later times, with the adoption of the metric system, the value came to denote exactly 50 kg, at least in Germany; in Austria and Switzerland the term is now in use for a measure of 100 kg, as it is in Russia (центнер, tsentner). In Germany a measure of 100 kg is named a Doppelzentner. See also German obsolete units of measurement Quintal (centner) References Notes Units of mass Obsolete units of measurement
Zentner
[ "Physics", "Mathematics" ]
240
[ "Obsolete units of measurement", "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
2,089,044
https://en.wikipedia.org/wiki/IMSI-catcher
An international mobile subscriber identity-catcher, or IMSI-catcher, is a telephone eavesdropping device used for intercepting mobile phone traffic and tracking the location data of mobile phone users. Essentially a "fake" mobile tower acting between the target mobile phone and the service provider's real towers, it is considered a man-in-the-middle (MITM) attack. The 3G wireless standard offers some risk mitigation due to the mutual authentication required from both the handset and the network. However, sophisticated attacks may be able to downgrade 3G and LTE to non-LTE network services, which do not require mutual authentication. IMSI-catchers are used in a number of countries by law enforcement and intelligence agencies, but their use has raised significant civil liberty and privacy concerns and is strictly regulated in some countries, such as under the German Strafprozessordnung (StPO / Code of Criminal Procedure). Some countries do not encrypt phone data traffic (or use very weak encryption), rendering an IMSI-catcher unnecessary there. Overview A virtual base transceiver station (VBTS) is a device for identifying the temporary mobile subscriber identity (TMSI) and international mobile subscriber identity (IMSI) of a nearby GSM mobile phone and intercepting its calls; some are even advanced enough to detect the international mobile equipment identity (IMEI). It was patented and first commercialized by Rohde & Schwarz in 2003, although on 4 January 2012 the Court of Appeal of England and Wales held that the patent is invalid for obviousness. The device can be viewed as simply a modified cell tower with a malicious operator. IMSI-catchers are often deployed by court order without a search warrant, the lower judicial standard of a pen register and trap-and-trace order being preferred by law enforcement. They can also be used in search-and-rescue operations for missing persons. Police departments have been reluctant to reveal their use of these programs and their contracts with vendors such as Harris Corporation, the maker of the Stingray and Kingfish phone-tracker devices. In the UK, the first public body to admit using IMSI-catchers was the Scottish Prison Service, though it is likely that the Metropolitan Police Service has been using IMSI-catchers since 2011 or before. Body-worn IMSI-catchers that target nearby mobile phones are being advertised to law enforcement agencies in the US. The GSM specification requires the handset to authenticate to the network, but does not require the network to authenticate to the handset. This well-known security hole is exploited by an IMSI-catcher. The IMSI-catcher masquerades as a base station and logs the IMSI numbers of all the mobile stations in the area as they attempt to attach to it. It allows forcing the mobile phone connected to it to use no call encryption (A5/0 mode) or to use easily breakable encryption (A5/1 or A5/2 mode), making the call data easy to intercept and convert to audio. The 3G wireless standard mitigates this risk and enhances the security of the protocol by requiring mutual authentication from both the handset and the network, which removes the false-base-station attack possible in GSM. Some sophisticated attacks against 3G and LTE may be able to downgrade to non-LTE network services, which then do not require mutual authentication. Functionalities Identifying an IMSI Every mobile phone is required to optimize its reception. 
If there is more than one base station of the subscribed network operator accessible, it will always choose the one with the strongest signal. An IMSI-catcher masquerades as a base station and causes every mobile phone of the simulated network operator within a defined radius to log in. With the help of a special identity request, it is able to force the transmission of the IMSI. Tapping a mobile phone The IMSI-catcher subjects the phones in its vicinity to a man-in-the-middle attack, appearing to them as a preferred base station in terms of signal strength. With the help of a SIM, it simultaneously logs into the GSM network as a mobile station. Since the encryption mode is chosen by the base station, the IMSI-catcher can induce the mobile station to use no encryption at all. Hence it can read the plaintext traffic from the mobile station, and then re-encrypt it and pass it on to the real base station. A targeted mobile phone is sent signals that the user cannot distinguish from those of the authentic cell service provider's infrastructure. This means that the device is able to retrieve the data that a normal cell tower would receive from registered mobile phones. There is only an indirect connection from the mobile station via the IMSI-catcher to the GSM network. For this reason, incoming phone calls cannot generally be patched through to the mobile station by the GSM network, although more modern versions of these devices have their own mobile patch-through solutions in order to provide this functionality. Passive IMSI detection The difference between a passive IMSI-catcher and an active IMSI-catcher is that an active IMSI-catcher intercepts the data in transfer, such as voice, text, email, and web traffic, between the endpoint and the cell tower. Active IMSI-catchers generally also intercept all conversations and data traffic within a large range and are therefore also called rogue cell towers. An active catcher sends a signal with a plethora of commands to the endpoints, which respond by establishing a connection; it then routes all conversations and data traffic between the endpoints and the actual cell tower for as long as the attacker wishes. A passive IMSI-catcher, on the other hand, only detects the IMSI, TMSI or IMEI of an endpoint. Once the IMSI, TMSI or IMEI address is detected, the endpoint is immediately released. The passive IMSI-catcher sends out a signal with only one specific command to the endpoints, which respond to it and share the identifiers of the endpoint with the passive IMSI-catcher. The vendors of passive IMSI-catchers take privacy more into account. Universal Mobile Telecommunications System (UMTS) False base station attacks are prevented by a combination of key freshness and integrity protection of signaling data, not by authenticating the serving network. To provide high network coverage, the UMTS standard allows for inter-operation with GSM. Therefore, not only UMTS but also GSM base stations are connected to the UMTS service network. This fallback is a security disadvantage and allows a new possibility of a man-in-the-middle attack. Tell-tales and difficulties The deployment of an IMSI-catcher presents a number of difficulties: It must be ensured that the mobile phone of the observed person is in standby mode and that the correct network operator has been identified. Otherwise, the mobile station has no need to log into the simulated base station. Depending on the signal strength of the IMSI-catcher, numerous IMSIs can be located. The problem is to find out the right one. 
All mobile phones in the area covered by the catcher have no access to the network. Incoming and outgoing calls cannot be patched through for these subscribers. Only the observed person has an indirect connection. There are, however, some tell-tale signs. In most cases, the operation cannot be recognized immediately by the subscriber. But there are a few mobile phones that show a small symbol on the display, e.g. an exclamation point, if encryption is not used. This "Ciphering Indication Feature" can, however, be suppressed by the network provider by setting the OFM bit in EFAD on the SIM card. Since the network access is handled with the SIM/USIM of the IMSI-catcher, the receiver cannot see the number of the calling party. Of course, this also implies that the tapped calls are not listed in the itemized bill. Deployment near the original base station can be difficult, due to its high signal level. As most mobile phones prefer the faster modes of communication such as 4G or 3G, downgrading to 2G can require blocking frequency ranges for 4G and 3G. Detection and counter-measures Some preliminary research has been done in trying to detect and frustrate IMSI-catchers. One such project works through the Osmocom open source mobile station software. This is a special type of mobile phone firmware that can be used to detect and fingerprint certain network characteristics of IMSI-catchers, and warn the user that there is such a device operating in their area. But this firmware/software-based detection is strongly limited to a select few, outdated GSM mobile phones (e.g., certain Motorola models) that are no longer available on the open market. The main problem is the closed-source nature of the major mobile phone producers. The application Android IMSI-Catcher Detector (AIMSICD) is being developed to detect and circumvent IMSI-catchers (such as the StingRay) and silent SMS. Technology for a stationary network of IMSI-catcher detectors has also been developed. Several apps listed on the Google Play Store as IMSI-catcher detector apps include SnoopSnitch, Cell Spy Catcher, and GSM Spy Finder, each with between 100,000 and 500,000 downloads. However, these apps have limitations in that they do not have access to the phone's underlying hardware and may offer only minimal protection; a toy illustration of the kind of heuristic scoring such detectors rely on is sketched below. See also Telephone tapping Stingray phone tracker Mobile phone jammer External links Chris Paget's presentation Practical Cellphone Spying at DEF CON 18 Verrimus - Mobile Phone Intercept Detection Footnotes Further reading External links Mobile Phone Networks: a tale of tracking, spoofing and owning mobile phones IMSI-catcher Seminar paper and presentation Mini IMSI and IMEI catcher The OsmocomBB project MicroNet: Proximus LLC GSM IMSI and IMEI dual band catcher MicroNet-U: Proximus LLC UMTS catcher iParanoid: IMSI Catcher Intrusion Detection System presentation Vulnerability by Design in Mobile Network Security Surveillance Mobile security Telephone tapping Telephony equipment Law enforcement equipment
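The following Python sketch shows, in toy form, the kind of heuristic scoring a detector app might apply to an observed base station. It is a hypothetical example based on the tell-tale signs described above, not the actual algorithm of any real app; all field names and thresholds are invented for illustration.

# Hypothetical heuristic for flagging a suspicious base station, loosely based
# on the tell-tale signs above (no encryption, 2G downgrade, implausible signal,
# never-before-seen cell). Real detectors use far richer baseband data.
from dataclasses import dataclass

@dataclass
class CellObservation:
    cell_id: int
    cipher: str          # e.g. "A5/0" (none), "A5/1", "A5/3"
    radio_tech: str      # "2G", "3G", "4G"
    signal_dbm: float
    seen_before: bool    # cell id present in a historical scan database

def suspicion_score(obs: CellObservation) -> int:
    score = 0
    if obs.cipher == "A5/0":      # encryption disabled: classic catcher sign
        score += 3
    if obs.radio_tech == "2G":    # possible downgrade from 3G/4G
        score += 1
    if obs.signal_dbm > -50:      # implausibly strong "tower"
        score += 2
    if not obs.seen_before:       # cell never observed at this location
        score += 1
    return score

obs = CellObservation(cell_id=42, cipher="A5/0", radio_tech="2G",
                      signal_dbm=-45, seen_before=False)
print(suspicion_score(obs))       # 7 -> above an (arbitrary) alert threshold of 4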
IMSI-catcher
[ "Technology", "Engineering" ]
2,085
[ "Mobile security", "Cybersecurity engineering" ]
2,090,057
https://en.wikipedia.org/wiki/Kernel%20density%20estimation
In statistics, kernel density estimation (KDE) is the application of kernel smoothing for probability density estimation, i.e., a non-parametric method to estimate the probability density function of a random variable based on kernels as weights. KDE answers a fundamental data smoothing problem where inferences about the population are made based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form. One of the famous applications of kernel density estimation is in estimating the class-conditional marginal densities of data when using a naive Bayes classifier, which can improve its prediction accuracy. Definition Let (x1, x2, ..., xn) be independent and identically distributed samples drawn from some univariate distribution with an unknown density ƒ at any given point x. We are interested in estimating the shape of this function ƒ. Its kernel density estimator is f̂h(x) = (1/n) Σi Kh(x − xi) = (1/(nh)) Σi K((x − xi)/h), where K is the kernel — a non-negative function — and h > 0 is a smoothing parameter called the bandwidth or simply width. A kernel with subscript h is called the scaled kernel and defined as Kh(x) = (1/h) K(x/h). Intuitively one wants to choose h as small as the data will allow; however, there is always a trade-off between the bias of the estimator and its variance. The choice of bandwidth is discussed in more detail below. A range of kernel functions are commonly used: uniform, triangular, biweight, triweight, Epanechnikov (parabolic), normal, and others. The Epanechnikov kernel is optimal in a mean square error sense, though the loss of efficiency is small for the kernels listed previously. Due to its convenient mathematical properties, the normal kernel is often used, which means K(x) = ϕ(x), where ϕ is the standard normal density function. The kernel density estimator then becomes f̂h(x) = (1/(nh)) Σi ϕ((x − xi)/h), with the bandwidth h typically chosen in proportion to σ̂, the standard deviation of the sample (see the rule of thumb below). The construction of a kernel density estimate finds interpretations in fields outside of density estimation. For example, in thermodynamics, this is equivalent to the amount of heat generated when heat kernels (the fundamental solution to the heat equation) are placed at each data point location xi. Similar methods are used to construct discrete Laplace operators on point clouds for manifold learning (e.g. diffusion map). Example Kernel density estimates are closely related to histograms, but can be endowed with properties such as smoothness or continuity by using a suitable kernel. A comparison based on a sample of 6 data points illustrates this relationship: For the histogram, first, the horizontal axis is divided into sub-intervals or bins which cover the range of the data: in this case, six bins each of width 2. Whenever a data point falls inside a bin, a box of height 1/12 is placed there. If more than one data point falls inside the same bin, the boxes are stacked on top of each other. For the kernel density estimate, normal kernels with a standard deviation of 1.5 are placed on each of the data points xi. The kernels are summed to make the kernel density estimate. The smoothness of the kernel density estimate (compared to the discreteness of the histogram) illustrates how kernel density estimates converge faster to the true underlying density for continuous random variables. 
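To make the definition concrete, here is a minimal Python sketch of a Gaussian kernel density estimator using the rule-of-thumb bandwidth discussed in the next section. It is a teaching sketch, not a production implementation; real work would normally use one of the library implementations listed later (for example scipy.stats.gaussian_kde).

# Minimal Gaussian KDE with Silverman's rule-of-thumb bandwidth.
import numpy as np

def silverman_bandwidth(x):
    n = len(x)
    sigma = np.std(x, ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sigma, iqr / 1.34) * n ** (-1 / 5)

def gaussian_kde(x, grid, h=None):
    """Evaluate f_hat(grid) = (1/(n*h)) * sum_i phi((grid - x_i)/h)."""
    h = silverman_bandwidth(x) if h is None else h
    u = (grid[:, None] - x[None, :]) / h              # (m, n) standardized distances
    phi = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)    # standard normal kernel
    return phi.sum(axis=1) / (len(x) * h)

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=200)
grid = np.linspace(-4, 4, 81)
density = gaussian_kde(sample, grid)
dx = grid[1] - grid[0]
print(f"h = {silverman_bandwidth(sample):.3f}, integral ≈ {density.sum() * dx:.3f}")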
Bandwidth selection The bandwidth of the kernel is a free parameter which exhibits a strong influence on the resulting estimate. To illustrate its effect, we take a simulated random sample from the standard normal distribution. The grey curve in such a comparison is the true density (a normal density with mean 0 and variance 1). In comparison, the red curve is undersmoothed since it contains too many spurious data artifacts arising from using a bandwidth h = 0.05, which is too small. The green curve is oversmoothed since using the bandwidth h = 2 obscures much of the underlying structure. The black curve with a bandwidth of h = 0.337 is considered to be optimally smoothed since its density estimate is close to the true density. An extreme situation is encountered in the limit h → 0 (no smoothing), where the estimate is a sum of n delta functions centered at the coordinates of the analyzed samples. In the other extreme limit h → ∞ the estimate retains the shape of the used kernel, centered on the mean of the samples (completely smooth). The most common optimality criterion used to select this parameter is the expected L2 risk function, also termed the mean integrated squared error: MISE(h) = E ∫ (f̂h(x) − ƒ(x))^2 dx. Under weak assumptions on ƒ and K (ƒ is the, generally unknown, real density function), MISE(h) = AMISE(h) + o(1/(nh) + h^4), where o is the little-o notation and n the sample size (as above). The AMISE is the asymptotic MISE, i.e., the two leading terms: AMISE(h) = R(K)/(nh) + (1/4) m2(K)^2 h^4 R(ƒ''), where R(g) = ∫ g(x)^2 dx for a function g, m2(K) = ∫ x^2 K(x) dx, ƒ'' is the second derivative of ƒ, and K is the kernel. The minimum of this AMISE is the solution to the differential equation (d/dh) AMISE(h) = −R(K)/(nh^2) + m2(K)^2 h^3 R(ƒ'') = 0, or hAMISE = [R(K) / (m2(K)^2 R(ƒ''))]^(1/5) n^(−1/5). Neither the AMISE nor the hAMISE formulas can be used directly since they involve the unknown density function ƒ or its second derivative ƒ''. To overcome that difficulty, a variety of automatic, data-based methods have been developed to select the bandwidth. Several review studies have been undertaken to compare their efficacies, with the general consensus that the plug-in selectors and cross-validation selectors are the most useful over a wide range of data sets. Substituting any bandwidth h which has the same asymptotic order n^(−1/5) as hAMISE into the AMISE gives that AMISE(h) = O(n^(−4/5)), where O is the big O notation. It can be shown that, under weak assumptions, there cannot exist a non-parametric estimator that converges at a faster rate than the kernel estimator. Note that the n^(−4/5) rate is slower than the typical n^(−1) convergence rate of parametric methods. If the bandwidth is not held fixed, but is varied depending upon the location of either the estimate (balloon estimator) or the samples (pointwise estimator), this produces a particularly powerful method termed adaptive or variable bandwidth kernel density estimation. Bandwidth selection for kernel density estimation of heavy-tailed distributions is relatively difficult. A rule-of-thumb bandwidth estimator If Gaussian basis functions are used to approximate univariate data, and the underlying density being estimated is Gaussian, the optimal choice for h (that is, the bandwidth that minimises the mean integrated squared error) is h = (4σ̂^5/(3n))^(1/5) ≈ 1.06 σ̂ n^(−1/5), where σ̂ is the standard deviation of the samples. The estimate is considered more robust if it improves the fit for long-tailed and skewed distributions or for bimodal mixture distributions. This is often done empirically by replacing σ̂ with the parameter A = min(σ̂, IQR/1.34), where IQR is the interquartile range. Another modification that will improve the model is to reduce the factor from 1.06 to 0.9. Then the final formula would be h = 0.9 min(σ̂, IQR/1.34) n^(−1/5), where n is the sample size. 
This approximation is termed the normal distribution approximation, Gaussian approximation, or Silverman's rule of thumb. While this rule of thumb is easy to compute, it should be used with caution, as it can yield wildly inaccurate estimates when the density is not close to being normal. For example, when estimating a bimodal Gaussian mixture from a sample of 200 points, a comparison of the true density with two kernel density estimates — one using the rule-of-thumb bandwidth and the other using a solve-the-equation bandwidth — shows that the estimate based on the rule-of-thumb bandwidth is significantly oversmoothed. Relation to the characteristic function density estimator Given the sample (x1, x2, ..., xn), it is natural to estimate the characteristic function as φ̂(t) = (1/n) Σj exp(i t xj). Knowing the characteristic function, it is possible to find the corresponding probability density function through the Fourier transform formula. One difficulty with applying this inversion formula is that it leads to a diverging integral, since the estimate φ̂(t) is unreliable for large t’s. To circumvent this problem, the estimator is multiplied by a damping function ψh(t) = ψ(ht), which is equal to 1 at the origin and then falls to 0 at infinity. The “bandwidth parameter” h controls how fast we try to dampen the function φ̂(t). In particular when h is small, then ψh(t) will be approximately one for a large range of t’s, which means that φ̂(t) remains practically unaltered in the most important region of t’s. The most common choice for the function ψ is either the uniform function ψ(t) = 1 for −1 ≤ t ≤ 1 and 0 otherwise, which effectively means truncating the interval of integration in the inversion formula to [−1/h, 1/h], or a Gaussian function such as ψ(t) = exp(−πt^2). Once the function ψ has been chosen, the inversion formula may be applied, and the density estimator will be f̂(x) = (1/(2π)) ∫ ψh(t) φ̂(t) exp(−itx) dt = (1/(nh)) Σj K((x − xj)/h), where K is the Fourier transform of the damping function ψ. Thus the kernel density estimator coincides with the characteristic function density estimator. Geometric and topological features We can extend the definition of the (global) mode to a local sense and define the local modes: M = {x: ƒ(x) is a local maximum}. Namely, M is the collection of points for which the density function is locally maximized. A natural estimator of M is a plug-in from KDE, M̂ = {x: f̂(x) is a local maximum}, where f̂ and M̂ are the KDE versions of ƒ and M. Under mild assumptions, M̂ is a consistent estimator of M. Note that one can use the mean shift algorithm to compute the estimator M̂ numerically. Statistical implementation A non-exhaustive list of software implementations of kernel density estimators includes: In Analytica release 4.4, the Smoothing option for PDF results uses KDE, and from expressions it is available via the built-in Pdf function. In C/C++, FIGTree is a library that can be used to compute kernel density estimates using normal kernels. MATLAB interface available. In C++, libagf is a library for variable kernel density estimation. In C++, mlpack is a library that can compute KDE using many different kernels. It allows to set an error tolerance for faster computation. Python and R interfaces are available. In C# and F#, Math.NET Numerics is an open source library for numerical computation which includes kernel density estimation In CrimeStat, kernel density estimation is implemented using five different kernel functions – normal, uniform, quartic, negative exponential, and triangular. Both single- and dual-kernel density estimate routines are available. 
Kernel density estimation is also used in interpolating a Head Bang routine, in estimating a two-dimensional Journey-to-crime density function, and in estimating a three-dimensional Bayesian Journey-to-crime estimate. In ELKI, kernel density functions can be found in the package de.lmu.ifi.dbs.elki.math.statistics.kernelfunctions In ESRI products, kernel density mapping is managed out of the Spatial Analyst toolbox and uses the Quartic (biweight) kernel. In Excel, the Royal Society of Chemistry has created an add-in to run kernel density estimation based on their Analytical Methods Committee Technical Brief 4. In gnuplot, kernel density estimation is implemented by the smooth kdensity option; the datafile can contain a weight and bandwidth for each point, or the bandwidth can be set automatically according to "Silverman's rule of thumb" (see above). In Haskell, kernel density is implemented in the statistics package. In IGOR Pro, kernel density estimation is implemented by the StatsKDE operation (added in Igor Pro 7.00). Bandwidth can be user specified or estimated by means of Silverman, Scott or Bowman and Azzalini. Kernel types are: Epanechnikov, Bi-weight, Tri-weight, Triangular, Gaussian and Rectangular. In Java, the Weka machine learning package provides weka.estimators.KernelEstimator, among others. In JavaScript, the visualization package D3.js offers a KDE package in its science.stats package. In JMP, the Graph Builder platform utilizes kernel density estimation to provide contour plots and high density regions (HDRs) for bivariate densities, and violin plots and HDRs for univariate densities. Sliders allow the user to vary the bandwidth. Bivariate and univariate kernel density estimates are also provided by the Fit Y by X and Distribution platforms, respectively. In Julia, kernel density estimation is implemented in the KernelDensity.jl package. In KNIME, 1D and 2D Kernel Density distributions can be generated and plotted using nodes from the Vernalis community contribution, e.g. 1D Kernel Density Plot, among others. The underlying implementation is written in Java. In MATLAB, kernel density estimation is implemented through the ksdensity function (Statistics Toolbox). As of the 2018a release of MATLAB, both the bandwidth and kernel smoother can be specified, including other options such as specifying the range of the kernel density. Alternatively, a free MATLAB software package which implements an automatic bandwidth selection method is available from the MATLAB Central File Exchange for 1-dimensional, 2-dimensional, and n-dimensional data. A free MATLAB toolbox with implementation of kernel regression, kernel density estimation, kernel estimation of hazard function and many others is available on these pages (this toolbox is a part of the book). In Mathematica, numeric kernel density estimation is implemented by the function SmoothKernelDistribution and symbolic estimation is implemented using the function KernelMixtureDistribution both of which provide data-driven bandwidths. In Minitab, the Royal Society of Chemistry has created a macro to run kernel density estimation based on their Analytical Methods Committee Technical Brief 4. In the NAG Library, kernel density estimation is implemented via the g10ba routine (available in both the Fortran and the C versions of the Library). In Nuklei, C++ kernel density methods focus on data from the Special Euclidean group. In Octave, kernel density estimation is implemented by the kernel_density option (econometrics package). 
In Origin, a 2D kernel density plot can be made from its user interface, and two functions, Ksdensity for 1D and Ks2density for 2D, can be used from its LabTalk, Python, or C code. In Perl, an implementation can be found in the Statistics-KernelEstimation module. In PHP, an implementation can be found in the MathPHP library. In Python, many implementations exist: pyqt_fit.kde Module in the PyQt-Fit package, SciPy (scipy.stats.gaussian_kde), Statsmodels (KDEUnivariate and KDEMultivariate), and scikit-learn (KernelDensity) (see comparison). KDEpy supports weighted data and its FFT implementation is orders of magnitude faster than the other implementations. The commonly used pandas library offers support for kde plotting through the plot method (df.plot(kind='kde')). The getdist package for weighted and correlated MCMC samples supports optimized bandwidth, boundary correction and higher-order methods for 1D and 2D distributions. One newly used package for kernel density estimation is seaborn (import seaborn as sns; sns.kdeplot()). A GPU implementation of KDE also exists. In R, it is implemented through density in the base distribution, and through the bw.nrd0 function in the stats package, which uses the optimized formula from Silverman's book. Other R implementations include bkde in the KernSmooth library, ParetoDensityEstimation in the DataVisualizations library (for pareto distribution density estimation), kde in the ks library, dkden and dbckden in the evmix library (the latter for boundary-corrected kernel density estimation for bounded support), npudens in the np library (numeric and categorical data), and sm.density in the sm library. For an implementation of the kde.R function, which does not require installing any packages or libraries, see kde.R. The btb library, dedicated to urban analysis, implements kernel density estimation through kernel_smoothing. In SAS, proc kde can be used to estimate univariate and bivariate kernel densities. In Apache Spark, KDE is available via the KernelDensity() class. In Stata, it is implemented through kdensity; for example, histogram x, kdensity. Alternatively a free Stata module KDENS is available allowing a user to estimate 1D or 2D density functions. In Swift, it is implemented through SwiftStats.KernelDensityEstimation in the open-source statistics library SwiftStats. See also Kernel (statistics) Kernel smoothing Kernel regression Density estimation (with presentation of other examples) Mean-shift Scale space: The triplets {(x, h, KDE with bandwidth h evaluated at x): all x, h > 0} form a scale space representation of the data. Multivariate kernel density estimation Variable kernel density estimation Head/tail breaks Further reading Härdle, Müller, Sperlich, Werwatz, Nonparametric and Semiparametric Methods, Springer-Verlag Berlin Heidelberg 2004, pp. 39–83 References External links Introduction to kernel density estimation A short tutorial which motivates kernel density estimators as an improvement over histograms. Kernel Bandwidth Optimization A free online tool that generates an optimized kernel density estimate. Free Online Software (Calculator) computes the Kernel Density Estimation for a data series according to the following Kernels: Gaussian, Epanechnikov, Rectangular, Triangular, Biweight, Cosine, and Optcosine. Kernel Density Estimation Applet An online interactive example of kernel density estimation. Requires .NET 3.0 or later. Estimation of densities Nonparametric statistics Machine learning
Kernel density estimation
[ "Engineering" ]
3,730
[ "Artificial intelligence engineering", "Machine learning" ]
2,091,495
https://en.wikipedia.org/wiki/Nucleic%20acid%20double%20helix
In molecular biology, the term double helix refers to the structure formed by double-stranded molecules of nucleic acids such as DNA. The double helical structure of a nucleic acid complex arises as a consequence of its secondary structure, and is a fundamental component in determining its tertiary structure. The structure was discovered by Maurice Wilkins, Rosalind Franklin, her student Raymond Gosling, James Watson, and Francis Crick, while the term "double helix" entered popular culture with the 1968 publication of Watson's The Double Helix: A Personal Account of the Discovery of the Structure of DNA. The DNA double helix biopolymer of nucleic acid is held together by nucleotides which base pair together. In B-DNA, the most common double helical structure found in nature, the double helix is right-handed with about 10–10.5 base pairs per turn. The double helix structure of DNA contains a major groove and minor groove. In B-DNA the major groove is wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to B-DNA do so through the wider major groove. History The double-helix model of DNA structure was first published in the journal Nature by James Watson and Francis Crick in 1953 (X,Y,Z coordinates in 1954), based on the work of Rosalind Franklin and her student Raymond Gosling, who took the crucial X-ray diffraction image of DNA labeled as "Photo 51", and Maurice Wilkins, Alexander Stokes, and Herbert Wilson, and on base-pairing chemical and biochemical information from Erwin Chargaff. Before this, Linus Pauling—who had already accurately characterised the conformation of protein secondary structure motifs—and his collaborator Robert Corey had posited, erroneously, that DNA would adopt a triple-stranded conformation. The realization that the structure of DNA is that of a double-helix elucidated the mechanism of base pairing by which genetic information is stored and copied in living organisms and is widely considered one of the most important scientific discoveries of the 20th century. Crick, Wilkins, and Watson each received one-third of the 1962 Nobel Prize in Physiology or Medicine for their contributions to the discovery. Nucleic acid hybridization Hybridization is the process of complementary base pairs binding to form a double helix. Melting is the process by which the interactions between the strands of the double helix are broken, separating the two nucleic acid strands. These bonds are weak, easily separated by gentle heating, enzymes, or mechanical force. Melting occurs preferentially at certain points in the nucleic acid. T and A rich regions are more easily melted than C and G rich regions. Some base steps (pairs) are also susceptible to DNA melting, such as the TA and TG steps. These mechanical features are reflected by the use of sequences such as TATA at the start of many genes to assist RNA polymerase in melting the DNA for transcription. Strand separation by gentle heating, as used in polymerase chain reaction (PCR), is simple, providing the molecules have fewer than about 10,000 base pairs (10 kilobase pairs, or 10 kbp). The intertwining of the DNA strands makes long segments difficult to separate. The cell avoids this problem by allowing its DNA-melting enzymes (helicases) to work concurrently with topoisomerases, which can chemically cleave the phosphate backbone of one of the strands so that it can swivel around the other. 
Helicases unwind the strands to facilitate the advance of sequence-reading enzymes such as DNA polymerase. Base pair geometry The geometry of a base, or base pair step, can be characterized by 6 coordinates: shift, slide, rise, tilt, roll, and twist. These values precisely define the location and orientation in space of every base or base pair in a nucleic acid molecule relative to its predecessor along the axis of the helix. Together, they characterize the helical structure of the molecule. In regions of DNA or RNA where the normal structure is disrupted, the change in these values can be used to describe such disruption. For each base pair, considered relative to its predecessor, there are the following base pair geometries to consider: Shear Stretch Stagger Buckle Propeller: rotation of one base with respect to the other in the same base pair. Opening Shift: displacement along an axis in the base-pair plane perpendicular to the first, directed from the minor to the major groove. Slide: displacement along an axis in the plane of the base pair directed from one strand to the other. Rise: displacement along the helix axis. Tilt: rotation around the shift axis. Roll: rotation around the slide axis. Twist: rotation around the rise axis. x-displacement y-displacement inclination tip pitch: the height per complete turn of the helix. Rise and twist determine the handedness and pitch of the helix. The other coordinates, by contrast, can be zero. Slide and shift are typically small in B-DNA, but are substantial in A- and Z-DNA. Roll and tilt make successive base pairs less parallel, and are typically small. "Tilt" has often been used differently in the scientific literature, referring to the deviation of the first, inter-strand base-pair axis from perpendicularity to the helix axis. This corresponds to slide between a succession of base pairs, and in helix-based coordinates is properly termed "inclination". Helix geometries At least three DNA conformations are believed to be found in nature: A-DNA, B-DNA, and Z-DNA. The B form described by James Watson and Francis Crick is believed to predominate in cells. It is 23.7 Å wide and extends 34 Å per 10 bp of sequence. The double helix makes one complete turn about its axis every 10.4–10.5 base pairs in solution. This frequency of twist (termed the helical pitch) depends largely on stacking forces that each base exerts on its neighbours in the chain. The absolute configuration of the bases determines the direction of the helical curve for a given conformation. A-DNA and Z-DNA differ significantly in their geometry and dimensions from B-DNA, although they still form helical structures. It was long thought that the A form only occurs in dehydrated samples of DNA in the laboratory, such as those used in crystallographic experiments, and in hybrid pairings of DNA and RNA strands, but DNA dehydration does occur in vivo, and A-DNA is now known to have biological functions. Segments of DNA that cells have methylated for regulatory purposes may adopt the Z geometry, in which the strands turn about the helical axis the opposite way to A-DNA and B-DNA. There is also evidence of protein-DNA complexes forming Z-DNA structures. Other conformations are possible; A-DNA, B-DNA, C-DNA, E-DNA, L-DNA (the enantiomeric form of D-DNA), P-DNA, S-DNA, Z-DNA, etc. have been described so far. In fact, only the letters F, Q, U, V, and Y are available to describe any new DNA structure that may appear in the future. 
However, most of these forms have been created synthetically and have not been observed in naturally occurring biological systems. There are also triple-stranded DNA forms and quadruplex forms such as the G-quadruplex and the i-motif. Grooves Twin helical strands form the DNA backbone. Another double helix may be found by tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not directly opposite each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form. Non-double helical forms Alternative non-helical models were briefly considered in the late 1970s as a potential solution to problems in DNA replication in plasmids and chromatin. However, the models were set aside in favor of the double-helical model due to subsequent experimental advances such as X-ray crystallography of DNA duplexes and later the nucleosome core particle, and the discovery of topoisomerases. Also, the non-double-helical models are not currently accepted by the mainstream scientific community. Bending DNA is a relatively rigid polymer, typically modelled as a worm-like chain. It has three significant degrees of freedom: bending, twisting, and compression, each of which places certain limits on what is possible with DNA within a cell. Twisting-torsional stiffness is important for the circularisation of DNA and the orientation of DNA-bound proteins relative to each other, and bending-axial stiffness is important for DNA wrapping and circularisation and protein interactions. Compression-extension is relatively unimportant in the absence of high tension. Persistence length, axial stiffness DNA in solution does not take a rigid structure but is continually changing conformation due to thermal vibration and collisions with water molecules, which makes classical measures of rigidity impossible to apply. Hence, the bending stiffness of DNA is measured by the persistence length, defined as the length over which the time-averaged direction of the chain persists: the correlation between the tangent directions at two points separated by a contour distance s decays approximately as exp(−s/P), where P is the persistence length. This value may be directly measured using an atomic force microscope to directly image DNA molecules of various lengths. In an aqueous solution, the average persistence length has been found to be around 50 nm (or 150 base pairs). More broadly, it has been observed to be between 45 and 60 nm or 132–176 base pairs (the diameter of DNA is 2 nm). This can vary significantly due to variations in temperature, aqueous solution conditions and DNA length. This makes DNA a moderately stiff molecule. The persistence length of a section of DNA is somewhat dependent on its sequence, and this can cause significant variation. The variation is largely due to base stacking energies and the residues which extend into the minor and major grooves.
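As an illustration of the tangent-correlation definition just given, the following Python sketch generates a toy discretized worm-like chain and re-estimates its persistence length from the simulated tangents. The discretization scheme, step size, and parameter values are assumptions made for this example, not measurements or a model from any particular paper.

# Toy discretized worm-like chain: tangent directions decorrelate roughly as
# exp(-s/P); we simulate a 3D chain and recover P from the decay of the
# tangent-tangent correlation. Illustrative only.
import numpy as np

def simulate_wlc(n_steps=20000, ds=1.0, P=50.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.zeros((n_steps, 3))
    t[0] = (0.0, 0.0, 1.0)
    for i in range(1, n_steps):
        # small random transverse kick of variance ~ ds/P, then renormalize
        kick = rng.normal(scale=np.sqrt(ds / P), size=3)
        kick -= (kick @ t[i - 1]) * t[i - 1]     # keep kick perpendicular to t
        v = t[i - 1] + kick
        t[i] = v / np.linalg.norm(v)
    return t

def estimate_persistence_length(t, ds=1.0, max_lag=200):
    lags = np.arange(1, max_lag)
    corr = np.array([np.mean(np.sum(t[:-k] * t[k:], axis=1)) for k in lags])
    # <t(0).t(s)> ~ exp(-s/P)  =>  slope of log-correlation is -1/P
    slope = np.polyfit(lags * ds, np.log(corr), 1)[0]
    return -1.0 / slope

tangents = simulate_wlc(P=50.0)
print(f"estimated P ≈ {estimate_persistence_length(tangents):.1f} (input 50.0)")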
Consistent with the worm-like chain model is the observation that bending DNA is also described by Hooke's law at very small (sub-piconewton) forces. For DNA segments less than the persistence length, the bending force is approximately constant and behaviour deviates from the worm-like chain predictions. This effect results in unusual ease in circularising small DNA molecules and a higher probability of finding highly bent sections of DNA. Bending preference DNA molecules often have a preferred direction to bend, i.e., anisotropic bending. This is, again, due to the properties of the bases which make up the DNA sequence - a random sequence will have no preferred bend direction, i.e., isotropic bending. Preferred DNA bend direction is determined by the stability of stacking each base on top of the next. If unstable base stacking steps are always found on one side of the DNA helix then the DNA will preferentially bend away from that direction. As the bend angle increases, steric hindrances and the ability to roll the residues relative to each other also play a role, especially in the minor groove. A and T residues are preferentially found in the minor grooves on the inside of bends. This effect is particularly seen in DNA-protein binding where tight DNA bending is induced, such as in nucleosome particles. See base step distortions above. DNA molecules with exceptional bending preference can become intrinsically bent. This was first observed in trypanosomatid kinetoplast DNA. Typical sequences which cause this contain stretches of 4-6 T and A residues separated by G and C rich sections which keep the A and T residues in phase with the minor groove on one side of the molecule. The intrinsically bent structure is induced by the 'propeller twist' of base pairs relative to each other, allowing unusual bifurcated hydrogen bonds between base steps. At higher temperatures this structure is denatured, and so the intrinsic bend is lost. All DNA which bends anisotropically has, on average, a longer persistence length and greater axial stiffness. This increased rigidity is required to prevent random bending which would make the molecule act isotropically. Circularization DNA circularization depends on both the axial (bending) stiffness and torsional (rotational) stiffness of the molecule. For a DNA molecule to successfully circularize it must be long enough to easily bend into the full circle and must have the correct number of bases so the ends are in the correct rotation to allow bonding to occur. The optimum length for circularization of DNA is around 400 base pairs (136 nm), with an integral number of turns of the DNA helix, i.e., multiples of 10.4 base pairs. Having a non-integral number of turns presents a significant energy barrier for circularization; for example, a 10.4 x 30 = 312 base pair molecule will circularize hundreds of times faster than a 10.4 x 30.5 ≈ 317 base pair molecule. The bending of short circularized DNA segments is non-uniform. Rather, for circularized DNA segments less than the persistence length, DNA bending is localised to 1-2 kinks that form preferentially in AT-rich segments. If a nick is present, bending will be localised to the nick site. Stretching Elastic stretching regime Longer stretches of DNA are entropically elastic under tension. When DNA is in solution, it undergoes continuous structural variations due to the energy available in the thermal bath of the solvent. 
This is due to the thermal vibration of the molecule combined with continual collisions with water molecules. For entropic reasons, more compact relaxed states are more thermally accessible than stretched-out states, and so DNA molecules are almost universally found in tangled, relaxed layouts. For this reason, one molecule of DNA will stretch under a force, straightening it out. Using optical tweezers, the entropic stretching behavior of DNA has been studied and analyzed from a polymer physics perspective, and it has been found that DNA behaves largely like the Kratky-Porod worm-like chain model under physiologically accessible energy scales. Phase transitions under stretching Under sufficient tension and positive torque, DNA is thought to undergo a phase transition with the bases splaying outwards and the phosphates moving to the middle. This proposed structure for overstretched DNA has been called P-form DNA, in honor of Linus Pauling who originally presented it as a possible structure of DNA. Evidence from mechanical stretching of DNA in the absence of imposed torque points to a transition or transitions leading to further structures which are generally referred to as S-form DNA. These structures have not yet been definitively characterised due to the difficulty of carrying out atomic-resolution imaging in solution while under applied force, although many computer simulation studies have been made. Proposed S-DNA structures include those which preserve base-pair stacking and hydrogen bonding (GC-rich), while releasing extension by tilting, as well as structures in which partial melting of the base-stack takes place, while base-base association is nonetheless overall preserved (AT-rich). Periodic fracture of the base-pair stack with a break occurring once per three bp (therefore one out of every three bp-bp steps) has been proposed as a regular structure which preserves planarity of the base-stacking and releases the appropriate amount of extension, with the term "Σ-DNA" introduced as a mnemonic, with the three right-facing points of the Sigma character serving as a reminder of the three grouped base pairs. The Σ form has been shown to have a sequence preference for GNC motifs which are believed under the GNC hypothesis to be of evolutionary importance. Supercoiling and topology The B form of the DNA helix twists 360° per 10.4-10.5 bp in the absence of torsional strain. But many molecular biological processes can induce torsional strain. A DNA segment with excess or insufficient helical twisting is referred to, respectively, as positively or negatively supercoiled. DNA in vivo is typically negatively supercoiled, which facilitates the unwinding (melting) of the double-helix required for RNA transcription. Within the cell most DNA is topologically restricted. DNA is typically found in closed loops (such as plasmids in prokaryotes) which are topologically closed, or as very long molecules whose diffusion coefficients produce effectively topologically closed domains. Linear sections of DNA are also commonly bound to proteins or physical structures (such as membranes) to form closed topological loops. Francis Crick was one of the first to propose the importance of linking numbers when considering DNA supercoils. In a paper published in 1976, Crick outlined the problem as follows: In considering supercoils formed by closed double-stranded molecules of DNA certain mathematical concepts, such as the linking number and the twist, are needed. 
The meaning of these for a closed ribbon is explained and also that of the writhing number of a closed curve. Some simple examples are given, some of which may be relevant to the structure of chromatin. Analysis of DNA topology uses three values: L = linking number - the number of times one DNA strand wraps around the other. It is an integer for a closed loop and constant for a closed topological domain. T = twist - total number of turns in the double stranded DNA helix. This will normally tend to approach the number of turns that a topologically open double stranded DNA helix makes free in solution: number of bases/10.5, assuming there are no intercalating agents (e.g., ethidium bromide) or other elements modifying the stiffness of the DNA. W = writhe - number of turns of the double stranded DNA helix around the superhelical axis. L = T + W and ΔL = ΔT + ΔW. Any change of T in a closed topological domain must be balanced by a change in W, and vice versa. This results in higher order structure of DNA. For example, a relaxed circular plasmid of 5,250 bp has T ≈ 5,250/10.5 = 500 and W = 0, so L = 500; if the linking number is reduced to L = 475, the deficit of 25 must be absorbed as reduced twist, as negative writhe (supercoiling), or as a combination of the two. A circular DNA molecule with a writhe of 0 will be circular. If the twist of this molecule is subsequently increased or decreased by supercoiling then the writhe will be appropriately altered, making the molecule undergo plectonemic or toroidal superhelical coiling. When the ends of a piece of double stranded helical DNA are joined so that it forms a circle the strands are topologically knotted. This means the single strands cannot be separated by any process that does not involve breaking a strand (such as heating). The task of un-knotting topologically linked strands of DNA falls to enzymes termed topoisomerases. These enzymes are dedicated to un-knotting circular DNA by cleaving one or both strands so that another double or single stranded segment can pass through. This un-knotting is required for the replication of circular DNA and various types of recombination in linear DNA which have similar topological constraints. The linking number paradox For many years, the origin of residual supercoiling in eukaryotic genomes remained unclear. This topological puzzle was referred to by some as the "linking number paradox". However, when experimentally determined structures of the nucleosome displayed an over-twisted left-handed wrap of DNA around the histone octamer, this paradox was considered to be solved by the scientific community. See also Comparison of nucleic acid simulation software DNA nanotechnology G-quadruplex Molecular models of DNA Molecular structure of Nucleic Acids (publication) Non-B database Triple-stranded DNA References DNA Biophysics Molecular structure Helices
Nucleic acid double helix
[ "Physics", "Chemistry", "Biology" ]
4,167
[ "Applied and interdisciplinary physics", "Molecular geometry", "Molecules", "Stereochemistry", "Biophysics", "Matter" ]
2,092,576
https://en.wikipedia.org/wiki/Dym%20equation
In mathematics, and in particular in the theory of solitons, the Dym equation (HD) is the third-order partial differential equation u_t = u^3 u_xxx. It is often written in the equivalent form v_t = (v^(−1/2))_xxx for some function v of one space variable and time; a symbolic check of this equivalence is sketched below. The Dym equation first appeared in Kruskal and is attributed to an unpublished paper by Harry Dym. The Dym equation represents a system in which dispersion and nonlinearity are coupled together. HD is a completely integrable nonlinear evolution equation that may be solved by means of the inverse scattering transform. It obeys an infinite number of conservation laws; it does not possess the Painlevé property. The Dym equation has strong links to the Korteweg–de Vries equation. C.S. Gardner, J.M. Greene, M.D. Kruskal, and R.M. Miura applied the Dym equation to the solution of the corresponding problem for the Korteweg–de Vries equation. The Lax pair of the Harry Dym equation is associated with the Sturm–Liouville operator. The Liouville transformation transforms this operator isospectrally into the Schrödinger operator. Thus by the inverse Liouville transformation solutions of the Korteweg–de Vries equation are transformed into solutions of the Dym equation. An explicit solution of the Dym equation, valid in a finite interval, is found by an auto-Bäcklund transform. Notes References Solitons Exactly solvable models Integrable systems
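As a quick sanity check on the two forms quoted above, the following sympy snippet applies the substitution v = u^(−2) (a choice made for this sketch, not taken from the source) and confirms that the v-form reduces to the u-form up to a constant rescaling:

# Symbolic check that v = u**(-2) relates the two forms of the Dym equation:
# v_t = (v**(-1/2))_xxx becomes u_t proportional to u**3 * u_xxx.
import sympy as sp

x, t = sp.symbols("x t")
u = sp.Function("u")(x, t)
v = u**-2                         # trial substitution, so v**(-1/2) = u (u > 0)

lhs = sp.diff(v, t)               # v_t = -2*u**-3*u_t
rhs = sp.diff(u, x, 3)            # (v**(-1/2))_xxx = u_xxx

# Solve lhs = rhs for u_t: expect u_t = -(1/2)*u**3*u_xxx,
# i.e. the first form of the Dym equation up to rescaling.
u_t = sp.solve(sp.Eq(lhs, rhs), sp.Derivative(u, t))[0]
print(u_t)                        # -u(x, t)**3*Derivative(u(x, t), (x, 3))/2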
Dym equation
[ "Physics" ]
305
[ "Integrable systems", "Theoretical physics" ]
2,094,449
https://en.wikipedia.org/wiki/Cluster%20decay
Cluster decay, also named heavy particle radioactivity, heavy ion radioactivity or heavy cluster decay, is a rare type of nuclear decay in which an atomic nucleus emits a small "cluster" of neutrons and protons, more than in an alpha particle, but less than a typical binary fission fragment. Ternary fission into three fragments also produces products in the cluster size range. Description The loss of protons from the parent nucleus changes it to the nucleus of a different element, the daughter, with a mass number Ad = A − Ae and atomic number Zd = Z − Ze, where Ae = Ne + Ze. For example: 223Ra → 209Pb + 14C. According to "Ronen's golden rule" of cluster decay, the emitted nucleus tends to be one with a high binding energy per nucleon, and especially one with a magic number of nucleons. This rare type of decay mode has been observed in radioisotopes that decay predominantly by alpha emission, and it occurs only in a small percentage of the decays for all such isotopes. The branching ratio with respect to alpha decay is rather small (in 223Ra, for example, about one cluster emission per billion alpha decays; see below). Ta and Tc are the half-lives of the parent nucleus relative to alpha decay and cluster radioactivity, respectively. Cluster decay, like alpha decay, is a quantum tunneling process: in order to be emitted, the cluster must penetrate a potential barrier. This is a different process from the more random nuclear disintegration that precedes light fragment emission in ternary fission, which may be a result of a nuclear reaction, but can also be a type of spontaneous radioactive decay in certain nuclides, demonstrating that input energy is not necessarily needed for fission, which remains a fundamentally different process mechanistically. In the absence of any energy loss for fragment deformation and excitation, as in cold fission phenomena or in alpha decay, the total kinetic energy is equal to the Q-value and is divided between the particles in inverse proportion with their masses, as required by conservation of linear momentum: the emitted cluster carries the kinetic energy Ee = Q·Ad/A, where Ad is the mass number of the daughter, Ad = A − Ae. Cluster decay exists in an intermediate position between alpha decay (in which a nucleus spits out a 4He nucleus) and spontaneous fission, in which a heavy nucleus splits into two (or more) large fragments and an assorted number of neutrons. Spontaneous fission ends up with a probabilistic distribution of daughter products, which sets it apart from cluster decay. In cluster decay for a given radioisotope, the emitted particle is a light nucleus and the decay method always emits this same particle. For heavier emitted clusters, there is otherwise practically no qualitative difference between cluster decay and spontaneous cold fission. History The first information about the atomic nucleus was obtained at the beginning of the 20th century by studying radioactivity. For a long period of time only three kinds of nuclear decay modes (alpha, beta, and gamma) were known. They illustrate three of the fundamental interactions in nature: strong, weak, and electromagnetic. Spontaneous fission became better studied soon after its discovery in 1940 by Konstantin Petrzhak and Georgy Flyorov because of both the military and the peaceful applications of induced fission. Induced fission itself had been discovered circa 1939 by Otto Hahn, Lise Meitner, and Fritz Strassmann. There are many other kinds of radioactivity, e.g. cluster decay, proton emission, various beta-delayed decay modes (p, 2p, 3p, n, 2n, 3n, 4n, d, t, alpha, f), fission isomers, particle accompanied (ternary) fission, etc. 
The height of the potential barrier, mainly of Coulomb nature, for emission of the charged particles is much higher than the observed kinetic energy of the emitted particles. The spontaneous decay can only be explained by quantum tunneling, in a similar way to the first application of quantum mechanics to nuclei given by G. Gamow for alpha decay. Usually the theory explains an already experimentally observed phenomenon. Cluster decay is one of the rare examples of phenomena predicted before experimental discovery. Theoretical predictions were made in 1980, four years before experimental discovery. Four theoretical approaches were used: fragmentation theory, by solving a Schrödinger equation with mass asymmetry as a variable to obtain the mass distributions of fragments; penetrability calculations similar to those used in traditional theory of alpha decay; and superasymmetric fission models, numerical (NuSAF) and analytical (ASAF). Superasymmetric fission models are based on the macroscopic-microscopic approach using the asymmetrical two-center shell model level energies as input data for the shell and pairing corrections. Either the liquid drop model or the Yukawa-plus-exponential model extended to different charge-to-mass ratios have been used to calculate the macroscopic deformation energy. Penetrability theory predicted eight decay modes: 14C, 24Ne, 28Mg, 32,34Si, 46Ar, and 48,50Ca from the following parent nuclei: 222,224Ra, 230,232Th, 236,238U, 244,246Pu, 248,250Cm, 250,252Cf, 252,254Fm, and 252,254No. The first experimental report was published in 1984, when physicists at Oxford University discovered that 223Ra emits one 14C nucleus for every billion (10^9) decays by alpha emission. Theory The quantum tunneling may be calculated either by extending fission theory to a larger mass asymmetry or by extending alpha decay theory to a heavier emitted particle. Both fission-like and alpha-like approaches are able to express the decay constant λ as a product of three model-dependent quantities, λ = ν S Ps, where ν is the frequency of assaults on the barrier per second, S is the preformation probability of the cluster at the nuclear surface, and Ps is the penetrability of the external barrier. In alpha-like theories S is an overlap integral of the wave functions of the three partners (parent, daughter, and emitted cluster). In a fission theory the preformation probability is the penetrability of the internal part of the barrier from the initial turning point Ri to the touching point Rt. Very frequently it is calculated by using the Wentzel-Kramers-Brillouin (WKB) approximation. A very large number, of the order of 10^5, of parent–emitted cluster combinations were considered in a systematic search for new decay modes. The large amount of computation could be performed in a reasonable time by using the ASAF model developed by Dorin N. Poenaru, Walter Greiner, et al. The model was the first to be used to predict measurable quantities in cluster decay. More than 150 cluster decay modes had been predicted before any other kind of half-life calculation was reported. Comprehensive tables of half-lives, branching ratios, and kinetic energies have been published. Potential barrier shapes similar to those considered within the ASAF model have been calculated by using the macroscopic-microscopic method. Previously it was shown that even alpha decay may be considered a particular case of cold fission. 
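As a rough numerical illustration of the penetrability factor Ps, the following Python sketch evaluates the WKB integral through a bare Coulomb barrier for 14C emission from 223Ra. The radius constant, the sharp touching-point geometry, and the Q-value are simplifying assumptions chosen for this sketch; real models such as ASAF use far more detailed potentials with deformation and shell effects.

# Rough WKB penetrability P_s = exp(-G) through a pure Coulomb barrier for
# 223Ra -> 209Pb + 14C. Toy model: no preformation or deformation physics;
# Q and r0 are approximate assumed values.
import numpy as np
from scipy.integrate import quad

HBARC = 197.327      # MeV*fm
AMU   = 931.494      # MeV/c^2
E2    = 1.44         # e^2/(4*pi*eps0) in MeV*fm

A1, Z1 = 14, 6       # emitted cluster (14C)
A2, Z2 = 209, 82     # daughter (209Pb)
Q      = 31.85       # MeV, approximate released energy
r0     = 1.2         # fm, assumed radius constant

mu = AMU * A1 * A2 / (A1 + A2)              # reduced mass, MeV/c^2
r_touch = r0 * (A1**(1/3) + A2**(1/3))      # inner turning point, fm
r_outer = Z1 * Z2 * E2 / Q                  # outer turning point, fm

def integrand(r):
    # local WKB wavenumber under the barrier, in fm^-1
    return np.sqrt(2 * mu * (Z1 * Z2 * E2 / r - Q)) / HBARC

G, _ = quad(integrand, r_touch, r_outer)
G *= 2
print(f"G ≈ {G:.1f}, so P_s ≈ exp(-G) ≈ 1e{-G/np.log(10):.0f}")
# G of roughly 67, i.e. P_s of order 1e-29: consistent with the extreme rarity
# of cluster decay relative to alpha emission described above.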
The ASAF model may be used to describe in a unified manner cold alpha decay, cluster decay, and cold fission (see figure 6.7, p. 287 of Ref. [2]). One can obtain, with good approximation, one universal curve (UNIV) for any kind of cluster decay mode with a mass number Ae, including alpha decay. On a logarithmic scale, the equation log T = f(log Ps) represents a single straight line which can be conveniently used to estimate the half-life. A single universal curve for alpha decay and cluster decay modes results from expressing log T + log S = f(log Ps). The experimental data on cluster decay in three groups of even-even, even-odd, and odd-even parent nuclei are reproduced with comparable accuracy by both types of universal curves: the fission-like UNIV and the UDL derived using alpha-like R-matrix theory. In order to find the released energy, Q = [M − (Md + Me)]c^2, one can use the compilation of measured masses M, Md, and Me of the parent, daughter, and emitted nuclei; c is the speed of light. The mass excess is transformed into energy according to Einstein's formula E = mc^2. Experiments The main experimental difficulty in observing cluster decay comes from the need to identify a few rare events against a background of alpha particles. The quantities experimentally determined are the partial half-life, Tc, and the kinetic energy Ek of the emitted cluster. There is also a need to identify the emitted particle. Detection of radiation is based on its interactions with matter, leading mainly to ionizations. Using a semiconductor telescope and conventional electronics to identify 14C ions, Rose and Jones ran their experiment for about six months in order to obtain 11 useful events. With modern magnetic spectrometers (SOLENO and Enge split-pole), at Orsay and Argonne National Laboratory (see ch. 7 in Ref. [2], pp. 188–204), a very strong source could be used, so that results were obtained in a run of a few hours. Solid state nuclear track detectors (SSNTD), which are insensitive to alpha particles, and magnetic spectrometers, in which alpha particles are deflected by a strong magnetic field, have been used to overcome this difficulty. SSNTD are cheap and handy, but they need chemical etching and microscope scanning. A key role in experiments on cluster decay modes performed in Berkeley, Orsay, Dubna, and Milano was played by P. Buford Price, Eid Hourany, Michel Hussonnois, Svetlana Tretyakova, A. A. Ogloblin, Roberto Bonetti, and their coworkers. The main region of the 20 emitters experimentally observed until 2010 is above Z = 86: 221Fr, 221-224,226Ra, 223,225Ac, 228,230Th, 231Pa, 230,232-236U, 236,238Pu, and 242Cm. Only upper limits could be set in the following cases: 12C decay of 114Ba, 15N decay of 223Ac, 18O decay of 226Th, 24,26Ne decays of 232Th and of 236U, 28Mg decays of 232,233,235U, 30Mg decay of 237Np, and 34Si decays of 240Pu and of 241Am. Some of the cluster emitters are members of the three natural radioactive families; others must be produced by nuclear reactions. To date, no odd-odd emitter has been observed. Of the many decay modes with half-lives and branching ratios relative to alpha decay predicted with the analytical superasymmetric fission (ASAF) model, the following 11 have been experimentally confirmed: 14C, 20O, 23F, 22,24-26Ne, 28,30Mg, and 32,34Si. The experimental data are in good agreement with predicted values. A strong shell effect can be seen: as a rule, the shortest half-life is obtained when the daughter nucleus has a magic number of neutrons (Nd = 126) and/or protons (Zd = 82). 
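As a worked illustration of the Q-value relation and the kinetic-energy sharing discussed above, the following Python sketch evaluates the 14C decay of 223Ra. The mass excesses are rounded values of the kind tabulated in atomic mass evaluations and should be treated as approximate assumptions for the illustration, not authoritative data:

```python
# Illustrative Q-value bookkeeping for 223Ra -> 14C + 209Pb.
# Approximate mass excesses in MeV (rounded, assumed for illustration):
mass_excess = {"223Ra": 17.23, "14C": 3.02, "209Pb": -17.61}

A, A_d = 223, 209  # mass numbers of parent and daughter

# Q = [M - (Md + Me)] c^2, written directly in terms of mass excesses:
Q = mass_excess["223Ra"] - (mass_excess["209Pb"] + mass_excess["14C"])

# Momentum conservation gives the light cluster the fraction Ad/A of Q:
E_cluster = Q * A_d / A

print(f"Q       = {Q:.2f} MeV")         # ~31.8 MeV
print(f"Ek(14C) = {E_cluster:.2f} MeV")  # ~29.8 MeV, the scale seen in experiments
```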
Fine structure The fine structure in the 14C radioactivity of 223Ra was discussed for the first time by M. Greiner and W. Scheid in 1986. The superconducting spectrometer SOLENO of IPN Orsay has been used since 1984 to identify 14C clusters emitted from 222–224,226Ra nuclei. Moreover, it was used to discover the fine structure by observing transitions to excited states of the daughter. A predicted transition involving an excited state of 14C has not yet been observed. Surprisingly, the experimentalists observed a transition to the first excited state of the daughter that was stronger than the transition to the ground state. A transition is favoured if the uncoupled nucleon is left in the same state in both parent and daughter nuclei; otherwise the difference in nuclear structure leads to a large hindrance. The interpretation was confirmed: the main spherical component of the deformed parent wave function has an i11/2 character, i.e. the main component is spherical. References External links National Nuclear Data Center Nuclear physics Radioactivity
Cluster decay
[ "Physics", "Chemistry" ]
2,530
[ "Radioactivity", "Nuclear physics" ]
5,245,810
https://en.wikipedia.org/wiki/Enzyme%20induction%20and%20inhibition
Enzyme induction is a process in which a molecule (e.g. a drug) induces (i.e. initiates or enhances) the expression of an enzyme. Enzyme inhibition can refer either to the inhibition of the expression of the enzyme by another molecule, or to interference at the enzyme level, i.e. with how the enzyme works. Such interference can take the form of competitive inhibition, uncompetitive inhibition, non-competitive inhibition, or partially competitive inhibition. If the molecule induces enzymes that are responsible for its own metabolism, this is called auto-induction (or auto-inhibition if there is inhibition). These processes are particular forms of gene expression regulation. These terms are of particular interest to pharmacology, and more specifically to drug metabolism and drug interactions. They also apply to molecular biology. History In the late 1950s and early 1960s, the French molecular biologists François Jacob and Jacques Monod became the first to explain enzyme induction, in the context of the lac operon of Escherichia coli. In the absence of lactose, the constitutively expressed lac repressor protein binds to the operator region of the DNA and prevents the transcription of the operon genes. When present, lactose binds to the lac repressor, causing it to separate from the DNA and thereby enabling transcription to occur. Monod and Jacob generated this theory following 15 years of work by them and others (including Joshua Lederberg), partially as an explanation for Monod's observation of diauxie. Previously, Monod had hypothesized that enzymes could physically adapt themselves to new substrates; a series of experiments by him, Jacob, and Arthur Pardee eventually demonstrated this to be incorrect and led them to the modern theory, for which he and Jacob shared the 1965 Nobel Prize in Physiology or Medicine (together with André Lwoff). Aryl hydrocarbon receptor Potency Index inducers (or simply inducers) predictably induce metabolism via a given pathway and are commonly used in prospective clinical drug-drug interaction studies. Strong, moderate, and weak inducers are drugs that decrease the AUC of sensitive index substrates of a given metabolic pathway by ≥80%, ≥50% to <80%, and ≥20% to <50%, respectively. References External links Pharmacokinetics Gene expression
Enzyme induction and inhibition
[ "Chemistry", "Biology" ]
468
[ "Pharmacology", "Pharmacokinetics", "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
5,248,484
https://en.wikipedia.org/wiki/Tanabe%E2%80%93Sugano%20diagram
In coordination chemistry, Tanabe–Sugano diagrams are used to predict absorptions in the ultraviolet (UV), visible and infrared (IR) electromagnetic spectrum of coordination compounds. The results from a Tanabe–Sugano diagram analysis of a metal complex can also be compared to experimental spectroscopic data. They are qualitatively useful and can be used to approximate the value of 10Dq, the ligand field splitting energy. Tanabe–Sugano diagrams can be used for both high spin and low spin complexes, unlike Orgel diagrams, which apply only to high spin complexes. Tanabe–Sugano diagrams can also be used to predict the size of the ligand field necessary to cause high-spin to low-spin transitions. In a Tanabe–Sugano diagram, the ground state is used as a constant reference, in contrast to Orgel diagrams. The energy of the ground state is taken to be zero for all field strengths, and the energies of all other terms and their components are plotted with respect to the ground term. Background Until Yukito Tanabe and Satoru Sugano published their paper "On the absorption spectra of complex ions" in 1954, little was known about the excited electronic states of complex metal ions. They used Hans Bethe's crystal field theory and Giulio Racah's linear combinations of Slater integrals, now called Racah parameters, to explain the absorption spectra of octahedral complex ions in a more quantitative way than had been achieved previously. After many spectroscopic experiments, they estimated the values for two of Racah's parameters, B and C, for each d-electron configuration based on the trends in the absorption spectra of isoelectronic first-row transition metals. The plots of the energies calculated for the electronic states of each electron configuration are now known as Tanabe–Sugano diagrams. The ratio C/B must be fitted for each octahedral coordination complex, because it can deviate strongly from the theoretical value of 4.0. This ratio changes the relative energies of the levels in the Tanabe–Sugano diagrams, and thus the diagrams may vary slightly between sources depending on which C/B ratio was selected when plotting. Parameters The x-axis of a Tanabe–Sugano diagram is expressed in terms of the ligand field splitting parameter, Δ, or Dq (for "differential of quanta"), divided by the Racah parameter B. The y-axis is in terms of energy, E, also scaled by B. Three Racah parameters exist, A, B, and C, which describe various aspects of interelectronic repulsion. A is an average total interelectron repulsion. B and C correspond to individual d-electron repulsions. A is constant among d-electron configurations, and it is not necessary for calculating relative energies, hence its absence from Tanabe and Sugano's studies of complex ions. C is necessary only in certain cases. B is the most important of Racah's parameters in this case. One line corresponds to each electronic state. The bending of certain lines is due to the mixing of terms with the same symmetry. Although electronic transitions are only "allowed" if the spin multiplicity remains the same (i.e. electrons do not change from spin up to spin down or vice versa when moving from one energy level to another), energy levels for "spin-forbidden" electronic states are included in the diagrams; these are not included in Orgel diagrams. Each state is given its molecular-symmetry label (e.g. A1g, T2g, etc.), but "g" and "u" subscripts are usually left off because it is understood that all the states are gerade. 
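Since both axes are scaled by the Racah parameter B, comparing an experimental band with a diagram is largely unit bookkeeping. The short Python sketch below shows this conversion; all numbers are hypothetical placeholders (an assumed absorption maximum, B, and Δ), not data for any particular complex:

```python
# Minimal sketch: converting an observed absorption maximum and assumed
# values of B and Delta into the reduced coordinates used on a
# Tanabe-Sugano diagram. All numbers are hypothetical placeholders.
wavelength_nm = 500.0   # assumed absorption maximum, in nm
B_cm1 = 860.0           # assumed Racah parameter B, in cm^-1
delta_cm1 = 20000.0     # assumed ligand field splitting (10Dq), in cm^-1

energy_cm1 = 1.0e7 / wavelength_nm  # band energy in cm^-1 (1e7 nm per cm)

x = delta_cm1 / B_cm1   # abscissa of the diagram: Delta/B
y = energy_cm1 / B_cm1  # ordinate at which the band should sit: E/B

print(f"E = {energy_cm1:.0f} cm^-1, Delta/B = {x:.1f}, E/B = {y:.1f}")
```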
Labels for each state are usually written on the right side of the diagram, though for more complicated diagrams (e.g. d6) labels may be written in other locations for clarity. Term symbols (e.g. 3P, 1S, etc.) for a specific dn free ion are listed, in order of increasing energy, on the y-axis of the diagram. The relative order of energies is determined using Hund's rules. For an octahedral complex, the spherical free-ion term symbols split accordingly: S → A1g; P → T1g; D → Eg + T2g; F → A2g + T1g + T2g; G → A1g + Eg + T1g + T2g; H → Eg + 2T1g + T2g; I → A1g + A2g + Eg + T1g + 2T2g. Certain Tanabe–Sugano diagrams (d4, d5, d6, and d7) also have a vertical line drawn at a specific Dq/B value, which is accompanied by a discontinuity in the slopes of the excited states' energy levels. This pucker in the lines occurs when the identity of the ground state changes, as shown in the diagram below. The left panel depicts the relative energies of the d7 ion states as functions of crystal field strength (Dq), showing an intersection of the 4T1 and the 2E states near Dq/B ~ 2.1. Subtracting the ground state energy produces the standard Tanabe–Sugano diagram shown on the right. This change in identity generally happens when the spin pairing energy, P, is equal to the ligand field splitting energy, Dq. Complexes to the left of this line (lower Dq/B values) are high-spin, while complexes to the right (higher Dq/B values) are low-spin. There is no low-spin or high-spin designation for d2, d3, or d8 because none of the states cross at reasonable crystal field energies. Tanabe–Sugano diagrams The seven Tanabe–Sugano diagrams for octahedral complexes are shown below. Unnecessary diagrams: d1, d9 and d10 d1 There is no electron repulsion in a d1 complex, and the single electron resides in the t2g orbital ground state. A d1 octahedral metal complex, such as [Ti(H2O)6]3+, shows a single absorption band in a UV-vis experiment. The term symbol for d1 is 2D, which splits into the 2T2g and 2Eg states. The t2g orbital set holds the single electron and has a 2T2g state energy of -4Dq. When that electron is promoted to an eg orbital, it is excited to the 2Eg state energy, +6Dq. This is in accordance with the single absorption band in a UV-vis experiment, whose energy corresponds to 10Dq. The prominent shoulder in this absorption band is due to a Jahn–Teller distortion, which lifts the degeneracy of the two components of the 2Eg state. However, since these two transitions overlap in a UV-vis spectrum, the analysis of the 2T2g to 2Eg transition does not require a Tanabe–Sugano diagram. d9 Similar to d1 metal complexes, d9 octahedral metal complexes have a 2D spectral term. The transition is from the (t2g)6(eg)3 configuration (2Eg state) to the (t2g)5(eg)4 configuration (2T2g state). This could also be described as a positive "hole" that moves from the eg to the t2g orbital set. The sign of Dq is opposite that for d1, with a 2Eg ground state and a 2T2g excited state. Like the d1 case, d9 octahedral complexes do not require the Tanabe–Sugano diagram to predict their absorption spectra. d10 There are no d-d electron transitions in d10 metal complexes because the d orbitals are completely filled. Thus, UV-vis absorption bands are not observed and a Tanabe–Sugano diagram does not exist. Diagrams for tetrahedral symmetry Tetrahedral Tanabe–Sugano diagrams are generally not found in textbooks because the diagram for a dn tetrahedral complex will be similar to that for a d(10-n) octahedral complex, remembering that ΔT for tetrahedral complexes is approximately 4/9 of ΔO for an octahedral complex. 
A consequence of the much smaller ΔT is that (almost) all tetrahedral complexes are high spin, and therefore the change in ground-state term seen on the x-axis of the octahedral d4–d7 diagrams is not required for interpreting spectra of tetrahedral complexes. Advantages over Orgel diagrams In Orgel diagrams, the magnitude of the splitting energy exerted by the ligands on the d orbitals, as a free ion approaches a ligand field, is compared to the electron-repulsion energy; together these suffice to determine the placement of electrons. However, if the ligand field splitting energy, 10Dq, is greater than the electron-repulsion energy, then Orgel diagrams fail in determining electron placement. In this case, Orgel diagrams are restricted to high-spin complexes only. Tanabe–Sugano diagrams do not have this restriction, and can be applied to situations where 10Dq is significantly greater than the electron repulsion. Thus, Tanabe–Sugano diagrams can be used to determine electron placements for both high-spin and low-spin metal complexes. However, they are limited in that they have only qualitative significance. Even so, Tanabe–Sugano diagrams are useful in interpreting UV-vis spectra and determining the value of 10Dq. Applications as a qualitative tool In a centrosymmetric ligand field, such as in octahedral complexes of transition metals, the arrangement of electrons in the d orbitals is not only limited by the electron-repulsion energy but is also related to the splitting of the orbitals due to the ligand field. This leads to many more electron configuration states than is the case for the free ion. The relative magnitudes of the repulsion energy and the splitting energy define the high-spin and low-spin states. Considering both weak and strong ligand fields, a Tanabe–Sugano diagram shows the energy splitting of the spectral terms with increasing ligand field strength. It makes it possible to understand how the energies of the different configuration states are distributed at a given ligand field strength. The restriction of the spin selection rule makes it even easier to predict the possible transitions and their relative intensities. Although they are qualitative, Tanabe–Sugano diagrams are very useful tools for analyzing UV-vis spectra: they are used to assign bands and calculate Dq values for the ligand field splitting. Examples Manganese(II) hexahydrate In the [Mn(H2O)6]2+ metal complex, manganese has an oxidation state of +2; thus it is a d5 ion. H2O is a weak field ligand (spectrum shown below), and according to the Tanabe–Sugano diagram for d5 ions, the ground state is 6A1. Note that there is no sextet spin multiplicity in any excited state; hence the transitions from this ground state are expected to be spin-forbidden, and the band intensities should be low. In the spectrum, only very low intensity bands are observed (low molar absorptivity (ε) values on the y-axis). Cobalt(II) hexahydrate Another example is [Co(H2O)6]2+. Note that the ligand is the same as in the last example. Here the cobalt ion has the oxidation state of +2, and it is a d7 ion. From the high-spin (left) side of the d7 Tanabe–Sugano diagram, the ground state is 4T1(F), and the spin multiplicity is a quartet. The diagram shows that there are three quartet excited states: 4T2, 4A2, and 4T1(P). From the diagram one can predict that there are three spin-allowed transitions. However, the spectrum of [Co(H2O)6]2+ does not show three distinct peaks that correspond to the three predicted excited states. 
Instead, the spectrum has a broad peak (spectrum shown below). Based on the T–S diagram, the lowest energy transition is 4T1 to 4T2, which is seen in the near IR and is not observed in the visible spectrum. The main peak is the energy transition 4T1(F) to 4T1(P), and the slightly higher energy transition (the shoulder) is predicted to be 4T1 to 4A2. The small energy difference leads to the overlap of the two peaks, which explains the broad peak observed in the visible spectrum. See also Character tables Laporte rule References Coordination chemistry Spectroscopy Inorganic chemistry Transition metals Eponymous diagrams of chemistry
Tanabe–Sugano diagram
[ "Physics", "Chemistry" ]
2,571
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Coordination chemistry", "nan", "Spectroscopy" ]
5,250,644
https://en.wikipedia.org/wiki/Shell%20higher%20olefin%20process
The Shell higher olefin process (SHOP) is a chemical process for the production of linear alpha olefins via ethylene oligomerization and olefin metathesis, invented and exploited by Shell plc. The olefin products are converted to fatty aldehydes and then to fatty alcohols, which are precursors to plasticizers and detergents. The annual global production of olefins through this method is over one million tonnes. History The process was discovered by chemists at Shell Development Emeryville in 1968. At the time, ecological considerations demanded the replacement of the branched fatty alcohols widely used in detergents by linear fatty alcohols, because the slow biodegradation of the branched compounds caused foaming of surface waters. At the same time, new gas oil crackers were being commissioned and ethylene supply was outpacing demand. The process was commercialized in 1977 by Shell plc, and following an expansion of the Geismar, Louisiana (USA) plant in 2002, global annual production capacity was 1.2 million tons. Process Ethylene reacts in the presence of the catalyst to give longer chains. Unlike the Ziegler–Natta process, which aims to produce very long polymers, the oligomer stops growing after addition of 1–10 repeating units of ethylene. The fraction containing C12 to C18 olefins (40–50%) has direct commercial value in detergent production and is removed. For the remaining fraction to be of commercial interest, two additional steps are required. The first step is liquid-phase isomerization using an alkaline alumina catalyst, leading to internal double bonds. For example, 1-octene is converted to 4-octene, and 1-eicosene (a C20 hydrocarbon) is converted to 10-eicosene. In the second step, olefin metathesis converts mixtures like these to 2-tetradecene, which is a C14 component and again within the commercial range. The internal olefins can also be reacted with an excess of ethylene, with rhenium(VII) oxide supported on alumina as catalyst, in an ethenolysis reaction that cleaves the internal double bond to form a mixture of α-olefins with odd and even carbon chain lengths of the desired molecular weight. The C12 to C18 olefins are subsequently subjected to hydroformylation (the oxo process) to give aldehydes. The aldehydes are hydrogenated to give fatty alcohols, which are suitable for manufacturing detergents. Catalytic cycle The first step in this process is the ethylene oligomerization to a mixture of even-numbered α-olefins at 80 to 120 °C and 70 to 140 bar (7 to 14 MPa), catalyzed by a nickel-phosphine complex. Such catalysts are typically prepared from diarylphosphino carboxylic acids, such as (C6H5)2PCH2CO2H. The process and its mechanism were elucidated by the group of Wilhelm Keim, first at Shell and later at RWTH Aachen. Alternative routes In another Shell olefin application, cyclododecatriene is partially hydrogenated to cyclododecene and then subjected to ethenolysis to give the terminal, linear, open-chain diene. The process was still in use at the Essar Stanlow refinery until a serious explosion and subsequent fire led to the closure of the plant, and the alcohol units it fed, in 2018. References Chemical processes Catalysis American inventions
Shell higher olefin process
[ "Chemistry" ]
733
[ "Catalysis", "Chemical processes", "nan", "Chemical process engineering", "Chemical kinetics" ]