id: int64 (range 39 to 79M)
url: string (lengths 32 to 168)
text: string (lengths 7 to 145k)
source: string (lengths 2 to 105)
categories: list (lengths 1 to 6)
token_count: int64 (range 3 to 32.2k)
subcategories: list (lengths 0 to 27)
34,113,697
https://en.wikipedia.org/wiki/List%20of%20valves
Valves are quite diverse and may be classified into a number of types. Basic types - by operating principle Valves can be categorized into the following types, based on their operating mechanism: Ball valve, for on–off control without pressure drop. Ideal for quick shut-off, since a 90° turn completely shuts off, compared to multiple 360° turns for other manual valves Butterfly valve, for on–off flow control in large diameter pipes Choke valve, a solid cylinder placed around or inside a second cylinder with multiple holes or slots, inside a housing. Shifting the solid cylinder exposes more or fewer holes. Used in oil and gas wellheads, where the pressure drop is high. (Not to be confused with engine choke valve, below.) Diaphragm valve or membrane valve, controls flow by movement of a diaphragm. Used in pharmaceutical applications Gate valve, mainly for on–off control, with low pressure drop Globe valve, good for regulating flow. Uses a cylinder movement over a seat Knife valve, similar to a gate valve, but usually more compact. Often used for slurries or powders on–off control Needle valve for accurate flow control Pinch valve, for slurry flow regulation and control Piston valve, for regulating fluids that carry solids in suspension Piston valve (steam engine) Plug valve, slim valve for on–off control but with some pressure drop Solenoid valve, an electrically actuated valve for hydraulic or pneumatic fluid control Spool valve, for hydraulic control; similar to the choke valve Basic types - by function Valves can be categorized also based on their function: Check valve or non-return valve, allows the fluid to pass in one direction only Flow control valve, to maintain and control a variable flow rate through the valve Poppet valve, commonly used in piston engines to regulate the fuel mixture intake and exhaust Pressure-balanced valve Pressure reducing valve, regulates the pressure of a fluid Safety valve or relief valve: operates automatically at a set pressure to correct a potentially dangerous situation, typically over-pressure Sampling valve Specific types These are more specific types of valves, used only in particular fields or applications. Often they are subcategories of the classification by operating principle and by function: Aspin valve: a cone-shaped metal part fitted to the cylinder head of an engine Ballcock: often used as a water level controller (cistern) Bibcock: provides a connection to a flexible hosepipe Blast valve: prevents rapid overpressuring in a fallout shelter or a bunker Boston valve: three-part two-port check valve used on inflatable boats, air mattresses, airbeds etc.; available in two sizes, normal and small Ceramic valve, used mainly in high duty cycle applications or on abrasive fluids. Ceramic disc can also provide Class IV seat leakage Cock: colloquial term for a small valve or a stopcock Choke valve, Butterfly valve used to limit air intake in internal combustion engine. (Not to be confused with choke valves used in industrial flow control, above.) 
Clapper valve: a type of check valve used in the Siamese fire appliance to allow only one hose to be connected instead of two (the clapper valve blocks the other side from leaking out) Demand valve, part of a diving regulator Double beat valve Double check valve Duckbill valve Fill and drain valve: a valve used in space and missile industry which achieves extremely tight leakage, while providing redundant inhibits against external leakage Flapper valve Flow divider valve: a valve providing a plurality of output flows from a single fluid source Flutter (Heimlich) valve: a specific one-way valve used on the end of chest drain tubes to treat a pneumothorax Foot valve: a check valve on the foot of a suction line to prevent backflow Four-way valve: was used to control the flow of steam to the cylinder of early double-acting steam engines Freeze seal or Freeze plug: in which freezing and melting the fluid creates and removes a plug of frozen material acting as the valve Gas pressure regulator regulates the flow and pressure of a gas Heart valve: regulates blood flow through the heart in many organisms Hydrodynamic vortex valve: a passive flow control valve that uses hydrodynamic forces to regulate flow Larner–Johnson Valve: needle control valve often in large sizes used in water supply systems Leaf valve: one-way valve consisting of a diagonal obstruction with an opening covered by a hinged flap Line blind valve: a thin sheet oriented perpendicular to the pipe. The sheet has a solid end and a flow-through end; sliding it from one position to the other opens or stops the flow. Also called sliding blind valve Outflow valve: regulates flow and pressure, part of cabin pressurization Pilot valve: regulates flow or pressure to other valves Petcock, a small shut-off valve Pinch valve, "beach ball valve": simple, single-part two-port check valve made from soft plastic and molded onto inflatable units such as beach balls, air mattresses, water wings; can be inflated by pump or by mouth Plunger valve: To regulate flow while lowering the pressure Poppet valve and sleeve valve: commonly used in piston engines to regulate the fuel mixture intake and exhaust Pressure regulator or pressure reducing valve (PRV): reduces pressure to a preset level downstream of the valve Pressure sustaining valve, or back-pressure regulator: maintains pressure at a preset level upstream of the valve Presta valve, Schrader valve, or Dunlop valve, holds the air inside bicycle tires Pulse valve, extremely fast pulsing valve Reed valve: consists of two or more flexible materials pressed together along much of their length, but with the influx area open to allow one-way flow, much like a heart valve Regulator: used in SCUBA diving equipment and in gas cooking equipment to reduce the high pressure gas supply to a lower working pressure Rocker valve Rotary valves and piston valves: parts of brass instruments used to change their pitch Rotolock valve Rupture disc: a one-time-use replaceable valve for rapid pressure relief, used to protect piping systems from excessive pressure or vacuum; more reliable than a safety valve Saddle valve: where allowed, is used to tap a pipe for a low-flow need Schrader valve: holds the air inside automobile tires Security valve, a type of automatic shut-off valve (ASV): used to stop the flow of natural gas upon sudden pressure changes. Increased pressure or decreased pressure closes the valve (without using any external power sources). 
Installing a bypass pipe with a push-button valve (a normally spring-closed valve) between the inlet and outlet of the security valve lets the upstream and downstream pressure be equalized on both sides of the security valve, so it can be reset. Security valves are used at schools, hospitals, prisons, and other hard-to-evacuate buildings. Slide gate valve: Ideal for handling dry bulk material in gravity flow, dilute phase, or dense phase pneumatic conveying applications. Similar to a sliding line blind valve, but the latter is for higher-pressure applications Slide valve: used in early steam engines to control admission and emission of steam from the piston Stopcock: restricts or isolates flow through a pipe Swirl valve: a specially designed Joule–Thomson pressure reduction/expansion valve imparting a centrifugal force upon the discharge stream for improving gas–liquid phase separation Tap (British English), faucet (American English): the common name for a valve used in homes to regulate water flow Tesla valve: a form of check valve with no moving parts, invented by Nikola Tesla for use with fluids Thermally operated valves: Thermal expansion valve, used in refrigeration and air conditioning systems Thermostatic mixing valve Thermostatic radiator valve Thermal shut off valve or Thermally released shut off valve, protects against excessive temperature, mandatory in the gas installation of some countries Trap primer: sometimes includes other types of valves, or is itself a valve Vacuum breaker valve: prevents the back-siphonage of contaminated water into pressurized drinkable water supplies References Engineering-related lists
List of valves
[ "Physics", "Chemistry" ]
1,660
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
34,115,448
https://en.wikipedia.org/wiki/Maximal%20information%20coefficient
In statistics, the maximal information coefficient (MIC) is a measure of the strength of the linear or non-linear association between two variables X and Y. The MIC belongs to the maximal information-based nonparametric exploration (MINE) class of statistics. In a simulation study, MIC outperformed some selected low-power tests; however, concerns have been raised regarding reduced statistical power in detecting some associations in settings with low sample size when compared to powerful methods such as distance correlation and Heller–Heller–Gorfine (HHG). Comparisons with these methods, in which MIC was outperformed, were made in Simon and Tibshirani and in Gorfine, Heller, and Heller. It is claimed that MIC approximately satisfies a property called equitability which is illustrated by selected simulation studies. It was later proved that no non-trivial coefficient can exactly satisfy the equitability property as defined by Reshef et al., although this result has been challenged. Some criticisms of MIC are addressed by Reshef et al. in further studies published on arXiv. Overview The maximal information coefficient uses binning as a means to apply mutual information to continuous random variables. Binning has been used for some time as a way of applying mutual information to continuous distributions; what MIC contributes in addition is a methodology for selecting the number of bins and picking a maximum over many possible grids. The rationale is that the bins for both variables should be chosen in such a way that the mutual information between the variables is maximal. That is achieved when the binned variables Xb and Yb satisfy H(Xb) = H(Yb) = I(Xb; Yb). Thus, when the mutual information is maximal over a binning of the data, we should expect that the following two properties hold, as far as the nature of the data allows. First, the bins would have roughly the same size, because the entropies H(Xb) and H(Yb) are maximized by equal-sized binning. And second, each bin of X will roughly correspond to a bin in Y. Because the variables X and Y are real numbers, it is almost always possible to create exactly one bin for each (x,y) datapoint, and that would yield a very high value of the MI. To avoid forming this kind of trivial partitioning, the authors of the paper propose taking a number of bins nx for X and ny for Y whose product is relatively small compared with the size N of the data sample. Concretely, they propose requiring nx × ny ≤ B(N), with B(N) = N^0.6 used in practice. In some cases it is possible to achieve a good correspondence between the binned variables with very few bins, while in other cases the number of bins required may be higher. The maximum attainable mutual information is determined by the marginal entropies, which are in turn determined by the number of bins in each axis; therefore, the mutual information value will depend on the number of bins selected for each variable. In order to compare mutual information values obtained with partitions of different sizes, the mutual information value is normalized by dividing by the maximum achievable value for the given partition size. It is worth noting that a similar adaptive binning procedure for estimating mutual information had been proposed previously. Entropy is maximized by uniform probability distributions, or in this case, bins with the same number of elements. Also, joint entropy is minimized by having a one-to-one correspondence between bins. If we substitute such values in the formula I(X; Y) = H(X) + H(Y) − H(X, Y), we can see that the maximum value achievable by the MI for a given pair of bin counts nx and ny is log(min(nx, ny)). Thus, this value is used as a normalizing divisor for each pair of bin counts.
Last, the normalized maximal mutual information values for the different combinations of nx and ny are tabulated, and the maximum value in the table is selected as the value of the statistic. It is important to note that trying all possible binning schemes that satisfy nx × ny ≤ B(N) is computationally infeasible even for small N. Therefore, in practice the authors apply a heuristic which may or may not find the true maximum. Notes References Information theory Covariance and correlation
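To make the grid-search idea above concrete, here is a minimal sketch of the normalized-mutual-information computation over candidate grids. It is a simplified illustration, not the reference implementation: the exhaustive optimisation over grid placements is replaced by equal-frequency bin edges, and the bound B(N) = N^0.6 is assumed as in the original paper's default.

import numpy as np
from itertools import product

def normalized_mi(x, y, nx, ny):
    # Bin each variable into equal-frequency bins, then compute I(Xb;Yb) / log2(min(nx, ny)).
    xb = np.searchsorted(np.quantile(x, np.linspace(0, 1, nx + 1)[1:-1]), x)
    yb = np.searchsorted(np.quantile(y, np.linspace(0, 1, ny + 1)[1:-1]), y)
    joint = np.zeros((nx, ny))
    for i, j in zip(xb, yb):
        joint[i, j] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log2(joint[nz] / (px[:, None] * py[None, :])[nz]))
    return mi / np.log2(min(nx, ny))

def mic_sketch(x, y):
    n = len(x)
    b = int(n ** 0.6)                      # assumed grid-size bound B(n) = n^0.6
    best = 0.0
    for nx, ny in product(range(2, b + 1), repeat=2):
        if nx * ny > b:                    # only grids with nx * ny <= B(n) are allowed
            continue
        best = max(best, normalized_mi(x, y, nx, ny))
    return best

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
print(mic_sketch(x, x ** 2))                    # strong non-linear association: near 1
print(mic_sketch(x, rng.uniform(-1, 1, 200)))   # independent noise: substantially lower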
Maximal information coefficient
[ "Mathematics", "Technology", "Engineering" ]
816
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
34,115,523
https://en.wikipedia.org/wiki/Diffuse%20extragalactic%20background%20radiation
The diffuse extragalactic background radiation (DEBRA) refers to the photon field of extragalactic origin that fills our Universe. It contains photons whose energies span more than twenty orders of magnitude, from 10⁻⁷ eV to more than 100 GeV. This range covers everything from the microwaves emitted by free hydrogen atoms to ultra-high-energy gamma rays, which can only be emitted by the most powerful physical processes in the modern universe, such as kilonovae and merging black holes. The origin and the physical processes involved are different within every wavelength range. There is plenty of observational evidence that supports the existence of the DEBRA. The figure shows a schematic picture, based on many different data sets, of the spectral intensity (also called spectral radiance) multiplied by wavelength of the DEBRA over the whole electromagnetic spectrum. This representation is convenient because the area under the curve is the energy. The nature and history of the universe are coded in this radiation field, and any realistic cosmological model must be able to describe it. Understanding the DEBRA is a major challenge of modern cosmology with huge consequences in other fields of astrophysics; therefore, extraordinary efforts are being made by theoreticians, observers, and instrumentalists to do so. Regions of the DEBRA The overall diffuse extragalactic radiation field may be divided into different regions according to their origin and the physical processes involved. This is a standard classification from the highest down to the lowest energies: Diffuse extragalactic gamma-ray radiation (also known as cosmic gamma-ray background) Cosmic X-ray background Extragalactic background light (which includes the cosmic infrared background) Cosmic microwave background Cosmic radio background See also Photon underproduction crisis References External links Caltech papers Extragalactic astronomy Cosmic rays Cosmic background radiation Unsolved problems in astronomy
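As a quick illustrative check on the quoted energy range (an added calculation, not from the article), converting photon energy to wavelength via λ = hc/E shows how far apart the two ends of the DEBRA spectrum lie:

import math

# Convert photon energy to wavelength: lambda = h * c / E
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt

for label, energy_eV in [("low-energy end", 1e-7), ("high-energy end", 100e9)]:
    wavelength = h * c / (energy_eV * eV)
    print(f"{label}: {energy_eV:g} eV -> wavelength ~ {wavelength:.3g} m")

# low-energy end:  1e-07 eV -> wavelength ~ 12.4 m     (radio regime)
# high-energy end: 1e+11 eV -> wavelength ~ 1.24e-17 m (gamma rays)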
Diffuse extragalactic background radiation
[ "Physics", "Astronomy" ]
364
[ "Physical phenomena", "Unsolved problems in astronomy", "Astronomical sub-disciplines", "Concepts in astronomy", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Radiation", "Astronomical controversies", "Extragalactic astronomy", "Cosmic rays" ]
34,119,149
https://en.wikipedia.org/wiki/Geotechnical%20centrifuge%20modeling
Geotechnical centrifuge modeling is a technique for testing physical scale models of geotechnical engineering systems such as natural and man-made slopes, earth retaining structures, and building or bridge foundations. The scale model is typically constructed in the laboratory and then loaded onto the end of the centrifuge, whose radius is typically a few metres. The purpose of spinning the models on the centrifuge is to increase the g-forces on the model so that stresses in the model are equal to stresses in the prototype. For example, the stress beneath a layer of model soil spun at a centrifugal acceleration of 50 g is equivalent to the stress beneath a prototype layer of soil fifty times as thick in Earth's gravity. The idea to use centrifugal acceleration to simulate increased gravitational acceleration was first proposed by Phillips (1869). Pokrovsky and Fedorov (1936) in the Soviet Union and Bucky (1931) in the United States were the first to implement the idea. Andrew N. Schofield (e.g. Schofield 1980) played a key role in the modern development of centrifuge modeling. Principles of centrifuge modeling Typical applications A geotechnical centrifuge is used to test models of geotechnical problems such as the strength, stiffness and capacity of foundations for bridges and buildings, settlement of embankments, stability of slopes, earth retaining structures, tunnel stability and seawalls. Other applications include explosive cratering, contaminant migration in ground water, frost heave and sea ice. The centrifuge may be useful for scale modeling of any large-scale nonlinear problem for which gravity is a primary driving force. Reason for model testing on the centrifuge Geotechnical materials such as soil and rock have non-linear mechanical properties that depend on the effective confining stress and stress history. The centrifuge applies an increased "gravitational" acceleration to physical models in order to produce identical self-weight stresses in the model and prototype. The one-to-one scaling of stress enhances the similarity of geotechnical models and makes it possible to obtain accurate data to help solve complex problems such as earthquake-induced liquefaction, soil-structure interaction and underground transport of pollutants such as dense non-aqueous phase liquids. Centrifuge model testing provides data to improve our understanding of basic mechanisms of deformation and failure and provides benchmarks useful for verification of numerical models. Scaling laws Note that in this article, the asterisk on any quantity represents the scale factor for that quantity: for a quantity x, the scale factor is x* = x_m / x_p, where the subscript m represents "model" and the subscript p represents "prototype". The reason for spinning a model on a centrifuge is to enable small scale models to feel the same effective stresses as a full-scale prototype. This goal can be stated mathematically as σ'* = σ'_m / σ'_p = 1, where the asterisk represents the scale factor, σ'_m is the effective stress in the model and σ'_p is the effective stress in the prototype. In soil mechanics the vertical effective stress, for example, is typically calculated by σ'_v = σ_v − u, where σ_v is the total vertical stress and u is the pore pressure. For a uniform layer with no pore pressure, the total vertical stress at a depth H may be calculated by σ_v = ρ g H, where ρ represents the density of the layer and g represents gravity.
In the conventional form of centrifuge modeling, it is typical that the same materials are used in the model and prototype; therefore the densities are the same in model and prototype, i.e., ρ* = 1. Furthermore, in conventional centrifuge modeling all lengths are scaled by the same factor L*. To produce the same stress in the model as in the prototype, we thus require ρ* g* L* = 1, which may be rewritten as g* = 1 / L*. The above scaling law states that if lengths in the model are reduced by some factor n (so that L* = 1/n), then gravitational accelerations must be increased by the same factor n in order to preserve equal stresses in model and prototype. Dynamic problems For dynamic problems where gravity and accelerations are important, all accelerations must scale as gravity is scaled, i.e. a* = g* = 1 / L*. Since acceleration has units of length per time squared, it is required that L* / (t*)² = 1 / L*. Hence it is required that (t*)² = (L*)², or t* = L*. Frequency has units of inverse time and velocity has units of length per time, so for dynamic problems we also obtain f* = 1 / L* and v* = 1. Diffusion problems For model tests involving both dynamics and diffusion, the conflict in time scale factors may be resolved by scaling the permeability of the soil. Scaling of other quantities Scale factors for energy, force, pressure, acceleration, velocity, and so on follow from the relationships above. Note that stress has units of pressure, or force per unit area; thus we can show that σ* = F* / (L*)². Substituting F = m·a (Newton's law, force = mass × acceleration) and ρ = m/L³ (from the definition of mass density) gives σ* = ρ* a* L*. Scale factors for many other quantities can be derived from the above relationships. Common scale factors for centrifuge model tests are summarized in Garnier et al. (2007); the table itself is not reproduced here. Value of centrifuge in geotechnical earthquake engineering Large earthquakes are infrequent and unrepeatable but they can be devastating. All of these factors make it difficult to obtain the required data to study their effects by post-earthquake field investigations. Instrumentation of full scale structures is expensive to maintain over the long periods of time that may elapse between major temblors, and the instrumentation may not be placed in the most scientifically useful locations. Even if engineers are lucky enough to obtain timely recordings of data from real failures, there is no guarantee that the instrumentation is providing repeatable data. In addition, scientifically educational failures from real earthquakes come at the expense of the safety of the public. Understandably, after a real earthquake, most of the interesting data is rapidly cleared away before engineers have an opportunity to adequately study the failure modes. Centrifuge modeling is a valuable tool for studying the effects of ground shaking on critical structures without risking the safety of the public. The efficacy of alternative designs or seismic retrofitting techniques can be compared in a repeatable scientific series of tests. Verification of numerical models Centrifuge tests can also be used to obtain experimental data to verify a design procedure or a computer model. The rapid development of computational power over recent decades has revolutionized engineering analysis. Many computer models have been developed to predict the behavior of geotechnical structures during earthquakes and other loads. Before a computer model can be used with confidence, it must be proven to be valid based on evidence. The meager and unrepeatable data provided by natural earthquakes, for example, is usually insufficient for this purpose.
Verification of the validity of assumptions made by a computational algorithm is especially important in the area of geotechnical engineering due to the complexity of soil behavior. Soils exhibit highly non-linear behavior, their strength and stiffness depend on their stress history and on the water pressure in the pore fluid, all of which may evolve during the loading caused by an earthquake. The computer models which are intended to simulate these phenomena are very complex and require extensive verification. Experimental data from centrifuge tests is useful for verifying assumptions made by a computational algorithm. If the results show the computer model to be inaccurate, the centrifuge test data provides insight into the physical processes which in turn stimulates the development of better computer models. See also Andrew N. Schofield Civil engineer Geotechnical engineering Network for Earthquake Engineering Simulation Physical model Scale model Soil mechanics References Schofield (1993), From cam clay to centrifuge models, JSSMFE Vol. 41, No. 5 Ser. No. 424 pp 83– 87, No. 6 Ser. No. 425 pp 84–90, No. 7, Ser. No. 426 pp 71–78. External links Technical committee on physical modelling in geotechnics International Society for Soil Mechanics and Geotechnical Engineering American Society of Civil Engineers Tests in geotechnical laboratories Civil engineering Scale modeling
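As a rough illustration of the scaling relationships derived in the "Scaling laws" section above, the common scale factors can be computed directly. This is a sketch under the conventional assumptions stated in the article (same soil in model and prototype, so ρ* = 1, and a single length scale L* = 1/n); it is not taken from the Garnier et al. table.

# Scale factors (model/prototype) for a centrifuge test at n gravities.
def centrifuge_scale_factors(n: float) -> dict:
    L = 1.0 / n            # length:         L* = 1/n
    g = n                  # acceleration:   g* = a* = n
    t = L                  # time (dynamic): t* = L* = 1/n
    return {
        "length": L,
        "acceleration": g,
        "stress": 1.0,                 # by design, sigma* = rho* g* L* = 1
        "time (dynamic)": t,
        "frequency": 1.0 / t,          # f* = n
        "velocity": L / t,             # v* = 1
        "force": 1.0 * g * L**3,       # F* = rho* a* (L*)^3 = 1/n^2
        "energy": 1.0 * g * L**4,      # E* = F* L* = 1/n^3
    }

for quantity, factor in centrifuge_scale_factors(50).items():
    print(f"{quantity:>15}: {factor:g}")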
Geotechnical centrifuge modeling
[ "Physics", "Engineering" ]
1,655
[ "Construction", "Scale modeling", "Civil engineering" ]
36,686,502
https://en.wikipedia.org/wiki/Geographical%20midpoint%20of%20Asia
The location of the geographical centre of Asia depends on the definition of the borders of Asia, mainly whether remote islands are included to define the extreme points of Asia, on the method of calculating the final result, and on the projection used (radial projection onto a plane vs. projection onto a geoid); there is no objectively correct way of finding "the centre of Asia". Thus, several places claim to host this hypothetical centre. The first official declaration of the Centre of Asia was made in the 1890s by a British traveller, who calculated it to lie near the manor house of the Safyanov estate in Saldam (modern Tuva, Russia). There is a monument commemorating that fact in the estate garden. Current measurements China The Geographical Centre of the Asian Continent is the name of a monument indicating the supposed geographical centre of the Asian continent. It is located south-west of Ürümqi, Xinjiang, People's Republic of China. The measurement on which it is based dates to 1992. It was based on calculating the geographical centre of 49 Asian countries, including island states such as Cyprus and Japan (and, reflecting the People's Republic of China's political perspective, counting Palestine and Sikkim as separate countries), which placed the geographical centre of all these countries at the site of the monument. Before the completion of the monument, the site was marked by a wooden pole stating "Geographic Centre of Asia" (亚洲地理中心). The village Baojia Caozi that happened to be located at the site where the monument was to be built was relocated, and the new village is now known as the "Heart of Asia" (亚心). The site has a tower labelled "Centre of Asia" which represents 48 countries of Asia. The monument was completed in the late 1990s. Russia The Obelisk "Center of Asia" is a monument indicating the supposed geographical centre of the Asian continent. It is located in Kyzyl, Tyva Republic, Russian Federation, in the Tos-Bulak area south of the city, to the north-north-east of the Ürümqi monument. The monument was completed in 1968. References Asia Monuments and memorials in Russia Monuments and memorials in China Geography of Asia Kyzyl
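The article does not give the calculation method used for the monuments, but a common way to compute a geographical midpoint of several locations is to average their positions as 3-D unit vectors and project the result back onto the sphere. A minimal sketch with made-up sample coordinates, purely for illustration (not the 49-country data set used for the Ürümqi monument):

import math

def geographic_midpoint(points):
    """Average latitude/longitude pairs (in degrees) as unit vectors on a sphere."""
    x = y = z = 0.0
    for lat, lon in points:
        lat, lon = math.radians(lat), math.radians(lon)
        x += math.cos(lat) * math.cos(lon)
        y += math.cos(lat) * math.sin(lon)
        z += math.sin(lat)
    n = len(points)
    x, y, z = x / n, y / n, z / n
    lon = math.atan2(y, x)
    lat = math.atan2(z, math.hypot(x, y))
    return math.degrees(lat), math.degrees(lon)

# Hypothetical sample: a handful of points spread across Asia.
sample = [(35.0, 33.3), (36.2, 138.2), (55.8, 37.6), (1.35, 103.8), (28.6, 77.2)]
print(geographic_midpoint(sample))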
Geographical midpoint of Asia
[ "Physics", "Mathematics" ]
474
[ "Point (geometry)", "Geometric centers", "Geographical centres", "Symmetry" ]
36,688,638
https://en.wikipedia.org/wiki/Thin%20walled%20beams
A thin walled beam is a type of structural beam whose cross section is not solid. The cross section of thin walled beams is made up of thin panels connected together. Typical closed sections include round, square, and rectangular tubes. Open sections include I-beams, T-beams, L-beams, and so on. The advantages of thin walled beams are their lighter weight and their bending stiffness per unit cross sectional area, which is much higher than for solid cross sections such as a rod or bar. Thin-walled beams are found almost everywhere, in civil and naval engineering, as well as aeronautics and aerospace designs. Apart from lightweight construction, high rigidity, and load resistance, there are also lower manufacturing costs, and lower transport and maintenance costs. They also give the designer more flexibility in the choice of material and shape to meet any specific requirements. Thin walled beams are particularly useful when the material is a composite laminate. Pioneering work in this regard was done by Librescu. References Statics Solid mechanics Structural system
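To illustrate the claim about bending stiffness per unit cross-sectional area (an added calculation with assumed dimensions, not from the article): compare the second moment of area of a thin-walled circular tube with that of a solid rod made from the same amount of material.

import math

# Solid rod and thin-walled tube with equal cross-sectional area (same material used).
radius_rod = 0.02                     # m, assumed
area = math.pi * radius_rod**2

wall_thickness = 0.002                # m, assumed
mean_radius_tube = area / (2 * math.pi * wall_thickness)   # chosen so 2*pi*r*t == area

I_rod = math.pi * radius_rod**4 / 4                        # second moment of area, solid circle
I_tube = math.pi * mean_radius_tube**3 * wall_thickness    # thin-wall approximation

print(f"I_rod  = {I_rod:.3e} m^4")
print(f"I_tube = {I_tube:.3e} m^4")
print(f"stiffness ratio at equal area: {I_tube / I_rod:.1f}")   # the tube is far stiffer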
Thin walled beams
[ "Physics", "Technology", "Engineering" ]
211
[ "Structural engineering", "Solid mechanics", "Statics", "Building engineering", "Materials stubs", "Classical mechanics", "Structural system", "Materials", "Mechanics", "Matter" ]
36,692,110
https://en.wikipedia.org/wiki/Rad%C3%B3%E2%80%93Kneser%E2%80%93Choquet%20theorem
In mathematics, the Radó–Kneser–Choquet theorem, named after Tibor Radó, Hellmuth Kneser and Gustave Choquet, states that the Poisson integral of a homeomorphism of the unit circle is a harmonic diffeomorphism of the open unit disk. The result was stated as a problem by Radó and solved shortly afterwards by Kneser in 1926. Choquet, unaware of the work of Radó and Kneser, rediscovered the result with a different proof in 1945. Choquet also generalized the result to the Poisson integral of a homeomorphism from the unit circle to a simple Jordan curve bounding a convex region. Statement Let f be an orientation-preserving homeomorphism of the unit circle |z| = 1 in C and define the Poisson integral of f by F_f(re^(iθ)) = (1/2π) ∫₀^{2π} [(1 − r²) / (1 − 2r cos(θ − φ) + r²)] f(e^(iφ)) dφ for r < 1. Standard properties of the Poisson integral show that F_f is a harmonic function on |z| < 1 which extends by continuity to f on |z| = 1. With the additional assumption that f is an orientation-preserving homeomorphism of this circle, F_f is an orientation-preserving diffeomorphism of the open unit disk. Proof To prove that F_f is locally an orientation-preserving diffeomorphism, it suffices to show that the Jacobian at a point a in the unit disk is positive. This Jacobian is given by J(a) = |∂_z F_f(a)|² − |∂_z̄ F_f(a)|². On the other hand, if g is a Möbius transformation preserving the unit circle and the unit disk, then F_(f∘g) = F_f ∘ g. Taking g so that g(a) = 0 and taking the change of variable ζ = g(z), the chain rule relates the Jacobian of F_f at a to the Jacobian of F_(f∘g) at 0. It is therefore enough to prove positivity of the Jacobian when a = 0. In that case the Jacobian at 0 equals |a₁|² − |a₋₁|², where the a_n are the Fourier coefficients of f: f(e^(iθ)) = Σ_n a_n e^(inθ). The Jacobian at 0 can also be expressed as a double integral. Writing f(e^(iθ)) = e^(ih(θ)), where h is a strictly increasing continuous function satisfying h(θ + 2π) = h(θ) + 2π, the double integral can be rewritten so that its integrand R is the sum of the sines of four non-negative angles with sum 2π, so it is always non-negative. But then the Jacobian at 0 is strictly positive and F_f is therefore locally a diffeomorphism. It remains to deduce that F_f is a homeomorphism. By continuity its image is compact, so closed. The non-vanishing of the Jacobian implies that F_f is an open mapping on the unit disk, so that the image of the open disk is open. Hence the image of the closed disk is an open and closed subset of the closed disk. By connectivity, it must be the whole disk. For |w| < 1, the inverse image of w is closed, so compact, and entirely contained in the open disk. Since F_f is locally a homeomorphism, it must be a finite set. The set of points w in the open disk with exactly n preimages is open. By connectivity every point has the same number N of preimages. Since the open disk is simply connected, N = 1. In fact, taking any preimage of the origin, every radial line has a unique lifting to a preimage, and so there is an open subset of the unit disk mapping homeomorphically onto the open disk. If N > 1, its complement would also have to be open, contradicting connectivity. Notes References Theorems in harmonic analysis
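A quick numerical illustration of the theorem (a sketch with an arbitrarily chosen boundary homeomorphism, not part of the article): take the orientation-preserving circle homeomorphism f(e^(iθ)) = e^(i(θ + 0.5 sin θ)), evaluate its Poisson integral on a grid inside the disk, and check that the Jacobian estimated by finite differences stays positive.

import numpy as np

phi = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
boundary = np.exp(1j * (phi + 0.5 * np.sin(phi)))   # f(e^{i phi}), a circle homeomorphism

def poisson_integral(z):
    """Harmonic extension of the boundary values at a point |z| < 1."""
    r, theta = abs(z), np.angle(z)
    kernel = (1 - r**2) / (1 - 2 * r * np.cos(theta - phi) + r**2)
    return np.mean(kernel * boundary)    # (1/2pi) integral, averaged over equally spaced samples

h = 1e-4
min_jac = np.inf
for rr in np.linspace(0.1, 0.9, 9):
    for tt in np.linspace(0.0, 2 * np.pi, 12, endpoint=False):
        z = rr * np.exp(1j * tt)
        fx = (poisson_integral(z + h) - poisson_integral(z - h)) / (2 * h)            # d/dx
        fy = (poisson_integral(z + 1j * h) - poisson_integral(z - 1j * h)) / (2 * h)  # d/dy
        # Writing F = u + iv, the Jacobian is u_x v_y - u_y v_x.
        jac = fx.real * fy.imag - fy.real * fx.imag
        min_jac = min(min_jac, jac)

print(f"smallest Jacobian found: {min_jac:.4f}  (positive, as the theorem asserts)")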
Radó–Kneser–Choquet theorem
[ "Mathematics" ]
699
[ "Theorems in mathematical analysis", "Theorems in harmonic analysis" ]
38,095,180
https://en.wikipedia.org/wiki/Ramanujam%E2%80%93Samuel%20theorem
In algebraic geometry, the Ramanujam–Samuel theorem gives conditions for a divisor of a local ring to be principal. It was introduced independently by Samuel, in answer to a question of Grothendieck, and by C. P. Ramanujam in an appendix to a paper by Seshadri, and was later generalized by Grothendieck. Statement Grothendieck's version of the Ramanujam–Samuel theorem is as follows. Suppose that A is a local Noetherian ring with maximal ideal m, whose completion is integral and integrally closed, and ρ is a local homomorphism from A to a local Noetherian ring B of larger dimension such that B is formally smooth over A and the residue field of B is finite over that of A. Then a cycle of codimension 1 in Spec(B) that is principal at the point mB is principal. References Theorems in algebraic geometry
Ramanujam–Samuel theorem
[ "Mathematics" ]
185
[ "Theorems in algebraic geometry", "Theorems in geometry" ]
38,095,963
https://en.wikipedia.org/wiki/Gallocyanin
Gallocyanin is a chemical compound classified as a phenoxazine dye. In combination with certain metals, it is used to prepare gallocyanin stains that are used in identifying nucleic acids. References Phenoxazines Oxazine dyes Carboxylic acids
Gallocyanin
[ "Chemistry" ]
60
[ "Carboxylic acids", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
38,096,337
https://en.wikipedia.org/wiki/TurboSwing
TurboSwing is a type of grease filter used in kitchen ventilation to remove grease particles from the air. It is typically installed inside the extractor hoods of restaurant kitchens. Its operation is based on a rotating filtering medium. How it works The main difference between TurboSwing and most common filters is that in TurboSwing filters the filtering medium is not static: a perforated disk rotates at high speed. When the grease particles go through the rotating disk they are separated from the air. After separation, the centrifugal force imparted by the rotating disk throws the particles against the inner walls of the filter. Particles then drip down the walls of the chamber onto the lower collection basin, where they stay until they are removed through the tap at the bottom of the filter dome. Smaller particles TurboSwing filters can remove grease particles as small as 4 μm, as opposed to 8 μm for common filters. This is because the filtering medium is moving, which increases the probability of collision between the filter and the particle. Varying airflow TurboSwing filters can work with varying airflows; the grease extraction level is not affected by the airflow. This means that this kind of filter can be used in restaurants that turn down the air volumes at non-peak times in order to save energy. The explanation is that if the airflow is lower, the particles go through the rotating disk at a slower speed, therefore increasing the collision probability. Other filters, such as cyclonic filters, require the airflow to be high on a permanent basis, or else the performance of the filter drops. Therefore, the use of filters like TurboSwing makes it possible to save significant amounts of energy in restaurant kitchen ventilation. Heat recovery TurboSwing grease filters make it possible to carry out heat recovery with the air of a kitchen. Unlike common filters, TurboSwing filters extract the small particles responsible for making the heat exchanger dirty. Heat recovery makes it possible to save energy in the ventilation of a building. In particular, kitchen air is hotter than the air in most other rooms, and therefore a large amount of energy can potentially be saved. However, when it comes to the ventilation of a kitchen, if the correct kind of filter is not used, heat recovery can be very difficult or even impossible, because of the presence of grease particles in the air. Grease particles accumulate on the heat exchanger, rendering it useless very quickly. In order to have heat recovery in a kitchen, the air must be completely clear of grease; in other words, both large and small grease particles must be removed from the air. Static filters cannot adequately deal with small particles, therefore making it impossible to recover heat. TurboSwing filters exhibit high performance in dealing with small particles, and this is why they enable heat recovery to be done with kitchen air. References Ventilation Heating, ventilation, and air conditioning Kitchen Energy recovery Sustainable building Filters
TurboSwing
[ "Chemistry", "Engineering" ]
584
[ "Sustainable building", "Chemical equipment", "Building engineering", "Filters", "Construction", "Filtration" ]
38,097,000
https://en.wikipedia.org/wiki/Spin%20canting
Some antiferromagnetic materials exhibit a non-zero magnetic moment at temperatures near absolute zero. This effect is ascribed to spin canting, a phenomenon through which spins are tilted by a small angle about their axis rather than being exactly antiparallel. Spin canting is due to two competing factors: isotropic exchange would align the spins exactly antiparallel, while antisymmetric exchange arising from relativistic effects (spin–orbit coupling) would align the spins at 90° to each other. The net result is a small perturbation, the extent of which depends on the relative strength of these effects. This effect is observable in many materials, such as hematite. References Magnetic ordering Spintronics
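As a purely illustrative sketch of the geometry (assumed values, not from the article): two sublattice moments of equal magnitude that are nominally antiparallel but each canted by a small angle leave a small net moment perpendicular to the antiferromagnetic axis.

import math

moment = 1.0                   # magnitude of each sublattice moment (arbitrary units)
theta = math.radians(1.0)      # canting angle away from the antiferromagnetic axis, assumed 1 degree

# Sublattice moments nominally along +y and -y, each tilted by theta toward +x.
spin_up   = (moment * math.sin(theta),  moment * math.cos(theta))
spin_down = (moment * math.sin(theta), -moment * math.cos(theta))

net = (spin_up[0] + spin_down[0], spin_up[1] + spin_down[1])
print(f"net moment = ({net[0]:.4f}, {net[1]:.4f})")   # small moment along x, ~2*sin(theta)
print(f"fraction of a fully aligned pair: {math.hypot(*net) / (2 * moment):.4%}")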
Spin canting
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
154
[ "Spintronics", "Electric and magnetic fields in matter", "Materials science", "Magnetic ordering", "Condensed matter physics" ]
38,097,861
https://en.wikipedia.org/wiki/Georgetown%20Coal%20Gasification%20Plant
Georgetown Coal Gasification Plant, also known as the Georgetown Service and Gas Company, is a historic coal gasification plant located at Georgetown, Sussex County, Delaware. It was built in the late 19th century, and is a rectangular one-story, three-bay-by-three-bay brick structure measuring 40 feet by 25 feet. It has a gable roof with a smaller gable-roofed ventilator. Also on the property are a small gable-roofed brick building measuring 8 feet by 10 feet; a small, square concrete building; a large, cylindrical "surge tank"; a 500-gallon bottled-gas tank; and a covered pit for impurities. The complex was privately owned and developed starting in the 1880s to provide metered gas for domestic lighting, town street lights, and municipal and domestic uses. The coal gasification process was discontinued in the 1940s. The site was added to the National Register of Historic Places in 1985. It is listed on the Delaware Cultural and Historic Resources GIS system as destroyed or demolished. References Industrial buildings and structures on the National Register of Historic Places in Delaware Buildings and structures in Georgetown, Delaware Coal gasification technologies National Register of Historic Places in Sussex County, Delaware
Georgetown Coal Gasification Plant
[ "Chemistry" ]
238
[ "Coal gasification technologies", "Synthetic fuel technologies" ]
38,100,144
https://en.wikipedia.org/wiki/Transit%20metropolis
A transit metropolis is an urbanized region with high-quality public transportation services and settlement patterns that are conducive to riding public transit. While transit villages and transit-oriented developments (TODs) focus on creating compact, mixed-use neighborhoods around rail stations, transit metropolises represent a regional constellation of TODs that benefit from having both trip origins and destinations oriented to public transport stations. In an effort to reduce mounting traffic congestion problems and improve environmental conditions, a number of Chinese mega-cities, including Beijing and Shenzhen, have embraced the transit metropolis model for guiding urban growth and public-transport investment decisions. Around the world, mass transit has been struggling to compete with the private automobile, and in many places its market share is eroding. The transit metropolis and TOD are among the planning strategies being introduced to help reverse ridership losses and advance more sustainable patterns of urban development. Transit metropolises recognize that one or two TODs as islands in a sea of automobile-oriented development (AOD) will do little to get people out of cars and into trains and buses. Only when TODs are organized along linear corridors, as in Stockholm, Copenhagen and Curitiba, or interconnected by high-capacity transit at a regional scale, can they significantly reduce car dependence and improve environmental conditions. See also Commuter town New Urbanism Smart growth Streetcar suburb Transit-oriented development Transit village References Public transport Sustainable transport Transportation planning
Transit metropolis
[ "Physics" ]
284
[ "Physical systems", "Transport", "Sustainable transport", "Transport stubs" ]
38,104,075
https://en.wikipedia.org/wiki/General%20Data%20Protection%20Regulation
The General Data Protection Regulation (Regulation (EU) 2016/679), abbreviated GDPR, is a European Union regulation on information privacy in the European Union (EU) and the European Economic Area (EEA). The GDPR is an important component of EU privacy law and human rights law, in particular Article 8(1) of the Charter of Fundamental Rights of the European Union. It also governs the transfer of personal data outside the EU and EEA. The GDPR's goals are to enhance individuals' control and rights over their personal information and to simplify the regulations for international business. It supersedes the Data Protection Directive 95/46/EC and, among other things, simplifies the terminology. The European Parliament and Council of the European Union adopted the GDPR on 14 April 2016, to become effective on 25 May 2018. As an EU regulation (instead of a directive), GDPR is directly applicable with force of law on its own without the need of transposition. However, it also provides flexibility for individual member states to modify (derogate from) some of its provisions. As an example of the Brussels effect, the regulation became a model for many other laws around the world, including in Brazil, Japan, Singapore, South Africa, South Korea, Sri Lanka, and Thailand. After leaving the European Union the United Kingdom enacted its "UK GDPR", identical to the GDPR. The California Consumer Privacy Act (CCPA), adopted on 28 June 2018, has many similarities with the GDPR. Contents The GDPR 2016 has eleven chapters, concerning general provisions, principles, rights of the data subject, duties of data controllers or processors, transfers of personal data to third countries, supervisory authorities, cooperation among member states, remedies, liability or penalties for breach of rights, and miscellaneous final provisions. Recital 4 proclaims that ‘processing of personal data should be designed to serve mankind’. General provisions The regulation applies if the data controller (an organisation that collects information about living people, whether they are in the EU or not), or processor (an organisation that processes data on behalf of a data controller like cloud service providers), or the data subject (person) is based in the EU. Under certain circumstances, the regulation also applies to organisations based outside the EU if they collect or process personal data of individuals located inside the EU. The regulation does not apply to the processing of data by a person for a "purely personal or household activity and thus with no connection to a professional or commercial activity." (Recital 18). According to the European Commission, "Personal data is information that relates to an identified or identifiable individual. If you cannot directly identify an individual from that information, then you need to consider whether the individual is still identifiable. You should take into account the information you are processing together with all the means reasonably likely to be used by either you or any other person to identify that individual." The precise definitions of terms such as "personal data", "processing", "data subject", "controller", and "processor" are stated in Article 4. 
The regulation does not purport to apply to the processing of personal data for national security activities or law enforcement of the EU; however, industry groups concerned about facing a potential conflict of laws have questioned whether Article 48 could be invoked to seek to prevent a data controller subject to a third country's laws from complying with a legal order from that country's law enforcement, judicial, or national security authorities to disclose to such authorities the personal data of an EU person, regardless of whether the data resides in or out of the EU. Article 48 states that any judgement of a court or tribunal and any decision of an administrative authority of a third country requiring a controller or processor to transfer or disclose personal data may not be recognised or enforceable in any manner unless based on an international agreement, like a mutual legal assistance treaty in force between the requesting third (non-EU) country and the EU or a member state. The data protection reform package also includes a separate Data Protection Directive for the police and criminal justice sector that provides rules on personal data exchanges at State level, Union level, and international levels. A single set of rules applies to all EU member states. Each member state establishes an independent supervisory authority (SA) to hear and investigate complaints, sanction administrative offences, etc. SAs in each member state co-operate with other SAs, providing mutual assistance and organising joint operations. If a business has multiple establishments in the EU, it must have a single SA as its "lead authority", based on the location of its "main establishment" where the main processing activities take place. The lead authority thus acts as a "one-stop shop" to supervise all the processing activities of that business throughout the EU. A European Data Protection Board (EDPB) co-ordinates the SAs. EDPB thus replaces the Article 29 Data Protection Working Party. There are exceptions for data processed in an employment context or in national security that still might be subject to individual country regulations. Principles and lawful purposes Article 5 sets out six principles relating to the lawfulness of processing personal data. The first of these specifies that data must be processed lawfully, fairly and in a transparent manner. Article 6 develops this principle by specifying that personal data may not be processed unless there is at least one legal basis for doing so. The other principles refer to "purpose limitation", "data minimisation", "accuracy", "storage limitation", and "integrity and confidentiality". Article 6 states that the lawful purposes are: (a) If the data subject has given consent to the processing of his or her personal data; (b) To fulfill contractual obligations with a data subject, or for tasks at the request of a data subject who is in the process of entering into a contract; (c) To comply with a data controller's legal obligations; (d) To protect the vital interests of a data subject or another individual; (e) To perform a task in the public interest or in official authority; (f) For the legitimate interests of a data controller or a third party, unless these interests are overridden by interests of the data subject or her or his rights according to the Charter of Fundamental Rights (especially in the case of children). 
If informed consent is used as the lawful basis for processing, consent must have been explicit for the data collected and for each purpose for which the data is used. Consent must be a specific, freely given, plainly worded, and unambiguous affirmation given by the data subject; an online form which has consent options structured as an opt-out selected by default is a violation of the GDPR, as the consent is not unambiguously affirmed by the user. In addition, multiple types of processing may not be "bundled" together into a single affirmation prompt, as this is not specific to each use of data, and the individual permissions are not freely given (Recital 32). Data subjects must be allowed to withdraw this consent at any time, and the process of doing so must not be harder than it was to opt in. A data controller may not refuse service to users who decline consent to processing that is not strictly necessary in order to use the service. Consent for children, defined in the regulation as being less than 16 years old (although with the option for member states to individually make it as low as 13 years old), must be given by the child's parent or custodian, and must be verifiable. If consent to processing was already provided under the Data Protection Directive, a data controller does not have to re-obtain consent if the processing is documented and obtained in compliance with the GDPR's requirements (Recital 171). Rights of the data subject Transparency and modalities Article 12 requires the data controller to provide information to the "data subject in a concise, transparent, intelligible and easily accessible form, using clear and plain language, in particular for any information addressed specifically to a child." Information and access The right of access (Article 15) is a data subject right. It gives people the right to access their personal data and information about how this personal data is being processed. A data controller must provide, upon request, an overview of the categories of data that are being processed as well as a copy of the actual data; furthermore, the data controller has to inform the data subject on details about the processing, such as the purposes of the processing, with whom the data is shared, and how it acquired the data. A data subject must be able to transfer personal data from one electronic processing system to and into another, without being prevented from doing so by the data controller. Data that has been sufficiently anonymised is excluded, but data that has been only de-identified yet remains possible to link to the individual in question, such as by providing the relevant identifier, is not. In practice, however, providing such identifiers can be challenging, such as in the case of Apple's Siri, where voice and transcript data is stored with a personal identifier that the manufacturer restricts access to, or in online behavioural targeting, which relies heavily on device fingerprints that can be challenging to capture, send, and verify. Both data being 'provided' by the data subject and data being 'observed', such as about behaviour, are included. In addition, the data must be provided by the controller in a structured and commonly used standard electronic format. The right to data portability is provided by Article 20. Rectification and erasure A right to be forgotten was replaced by a more limited right of erasure in the version of the GDPR that was adopted by the European Parliament in March 2014.
Article 17 provides that the data subject has the right to request erasure of personal data related to them on any one of a number of grounds, including noncompliance with Article 6(1) (lawfulness) that includes a case (f) if the legitimate interests of the controller are overridden by the interests or fundamental rights and freedoms of the data subject, which require protection of personal data (see also Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González). Right to object and automated decisions Article 21 of the GDPR allows an individual to object to processing personal information for marketing or non-service related purposes. This means the data controller must allow an individual the right to stop or prevent controller from processing their personal data. There are some instances where this objection does not apply. For example, if: Legal or official authority is being carried out "Legitimate interest", where the organisation needs to process data in order to provide the data subject with a service they signed up for A task being carried out for public interest. GDPR is also clear that the data controller must inform individuals of their right to object from the first communication the controller has with them. This should be clear and separate from any other information the controller is providing and give them their options for how best to object to the processing of their data. There are instances the controller can refuse a request, in the circumstances that the objection request is "manifestly unfounded" or "excessive", so each case of objection must be looked at individually. Other countries such as Canada are also, following the GDPR, considering legislation to regulate automated decision making under privacy laws, even though there are policy questions as to whether this is the best way to regulate AI. Right to compensation Article 82 of the GDPR stipulates that any person who has suffered material or non-material damage as a result of an infringement of this Regulation shall have the right to receive compensation from the controller or processor for the damage suffered. In the judgment Österreichische Post (C-300/21) the Court of Justice of the European Union gave an interpretation of the right to compensation. Article 82(1) GDPR requires for the award of damages (i) an infringement of the GDPR, (ii) (actual) damage suffered and (iii) a causal link between the infringement and the damage suffered. It is not necessary that the damage suffered reaches a certain degree of seriousness. There is no European defined concept of damage. Compensation is determined nationally in accordance with national law. The principles of equivalence and effectiveness must be taken into account. See also the Opinion of the Advocate General in the case Krankenversicherung Nordrhein (C-667/21). Controller and processor Data controllers must clearly disclose any data collection, declare the lawful basis and purpose for data processing, and state how long data is being retained and if it is being shared with any third parties or outside of the EEA. Firms have the obligation to protect data of employees and consumers to the degree where only the necessary data is extracted with minimum interference with data privacy from employees, consumers, or third parties. Firms should have internal controls and regulations for various departments such as audit, internal controls, and operations. 
Data subjects have the right to request a portable copy of the data collected by a controller in a common format, as well as the right to have their data erased under certain circumstances. Public authorities, and businesses whose core activities consist of regular or systematic processing of personal data, are required to employ a data protection officer (DPO), who is responsible for managing compliance with the GDPR. Data controllers must report data breaches to national supervisory authorities within 72 hours if they have an adverse effect on user privacy. In some cases, violators of the GDPR may be fined up to €20 million or up to 4% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater. To be able to demonstrate compliance with the GDPR, the data controller must implement measures that meet the principles of data protection by design and by default. Article 25 requires data protection measures to be designed into the development of business processes for products and services. Such measures include pseudonymising personal data, by the controller, as soon as possible (Recital 78). It is the responsibility and the liability of the data controller to implement effective measures and be able to demonstrate the compliance of processing activities even if the processing is carried out by a data processor on behalf of the controller (Recital 74). When data is collected, data subjects must be clearly informed about the extent of data collection, the legal basis for the processing of personal data, how long data is retained, if data is being transferred to a third-party and/or outside the EU, and any automated decision-making that is made on a solely algorithmic basis. Data subjects must be informed of their privacy rights under the GDPR, including their right to revoke consent to data processing at any time, their right to view their personal data and access an overview of how it is being processed, their right to obtain a portable copy of the stored data, their right to erasure of their data under certain circumstances, their right to contest any automated decision-making that was made on a solely algorithmic basis, and their right to file complaints with a Data Protection Authority. As such, the data subject must also be provided with contact details for the data controller and their designated data protection officer, where applicable. Data protection impact assessments (Article 35) have to be conducted when specific risks occur to the rights and freedoms of data subjects. Risk assessment and mitigation is required and prior approval of the data protection authorities is required for high risks. Article 25 requires data protection to be designed into the development of business processes for products and services. Privacy settings must therefore be set at a high level by default, and technical and procedural measures shall be taken by the controller to make sure that the processing, throughout the whole processing lifecycle, complies with the regulation. Controllers shall also implement mechanisms to ensure that personal data is not processed unless necessary for each specific purpose. This is known as data minimisation. A report by the European Union Agency for Network and Information Security elaborates on what needs to be done to achieve privacy and data protection by default. 
It specifies that encryption and decryption operations must be carried out locally, not by a remote service, because both keys and data must remain in the power of the data owner if any privacy is to be achieved. The report specifies that outsourced data storage on remote clouds is practical and relatively safe if only the data owner, not the cloud service, holds the decryption keys. Pseudonymisation According to the GDPR, pseudonymisation is a required process for stored data that transforms personal data in such a way that the resulting data cannot be attributed to a specific data subject without the use of additional information (as an alternative to the other option of complete data anonymisation). An example is encryption, which renders the original data unintelligible in a process that cannot be reversed without access to the correct decryption key. The GDPR requires the additional information (such as the decryption key) to be kept separately from the pseudonymised data. Another example of pseudonymisation is tokenisation, which is a non-mathematical approach to protecting data at rest that replaces sensitive data with non-sensitive substitutes, referred to as tokens. While the tokens have no extrinsic or exploitable meaning or value, they allow for specific data to be fully or partially visible for processing and analytics while sensitive information is kept hidden. Tokenisation does not alter the type or length of data, which means it can be processed by legacy systems such as databases that may be sensitive to data length and type. This also requires far fewer computational resources to process and less storage space in databases than traditionally encrypted data. Pseudonymisation is a privacy-enhancing technology and is recommended to reduce the risks to the concerned data subjects and also to help controllers and processors to meet their data protection obligations (Recital 28). Records of processing activities According to Article 30, records of processing activities have to be maintained by each organisation matching one of the following criteria: employing more than 250 people; the processing it carries out is likely to result in a risk to the rights and freedoms of data subjects; the processing is not occasional; processing includes special categories of data as referred to in Article 9(1) or personal data relating to criminal convictions and offences referred to in Article 10. Such requirements may be modified by each EU country. The records shall be in electronic form and the controller or the processor and, where applicable, the controller's or the processor's representative, shall make the record available to the supervisory authority on request.
Records of controller shall contain all of the following information: the name and contact details of the controller and, where applicable, the joint controller, the controller's representative and the data protection officer; the purposes of the processing; a description of the categories of data subjects and of the categories of personal data; the categories of recipients to whom the personal data have been or will be disclosed including recipients in third countries or international organisations; where applicable, transfers of personal data to a third country or an international organisation, including the identification of that third country or international organisation and, in the case of transfers referred to in the second subparagraph of Article 49(1), the documentation of suitable safeguards; where possible, the envisaged time limits for erasure of the different categories of data; where possible, a general description of the technical and organisational security measures referred to in Article 32(1). Records of processor shall contain all of the following information: the name and contact details of the processor or processors and of each controller on behalf of which the processor is acting, and, where applicable, of the controller's or the processor's representative, and the data protection officer; the categories of processing carried out on behalf of each controller; where applicable, transfers of personal data to a third country or an international organisation, including the identification of that third country or international organisation and, in the case of transfers referred to in the second subparagraph of Article 49(1), the documentation of suitable safeguards; where possible, a general description of the technical and organisational security measures referred to in Article 32(1). Security of personal data Controllers and processors of personal data must put in place appropriate technical and organizational measures to implement the data protection principles. Business processes that handle personal data must be designed and built with consideration of the principles and provide safeguards to protect data (for example, using pseudonymization or full anonymization where appropriate). Data controllers must design information systems with privacy in mind. For instance, using the highest-possible privacy settings by default, so that the datasets are not publicly available by default and cannot be used to identify a subject. No personal data may be processed unless this processing is done under one of the six lawful bases specified by the regulation (consent, contract, public task, vital interest, legitimate interest or legal requirement). When the processing is based on consent the data subject has the right to revoke it at any time. Article 33 states the data controller is under a legal obligation to notify the supervisory authority without undue delay unless the breach is unlikely to result in a risk to the rights and freedoms of the individuals. There is a maximum of 72 hours after becoming aware of the data breach to make the report. Individuals have to be notified if a high risk of an adverse impact is determined. In addition, the data processor will have to notify the controller without undue delay after becoming aware of a personal data breach. 
However, the notice to data subjects is not required if the data controller has implemented appropriate technical and organisational protection measures that render the personal data unintelligible to any person who is not authorised to access it, such as encryption. Data protection officer Article 37 requires the appointment of a data protection officer. If processing is carried out by a public authority (except for courts or independent judicial authorities when acting in their judicial capacity), or if processing operations involve regular and systematic monitoring of data subjects on a large scale, or if special categories of data and personal data relating to criminal convictions and offences (Articles 9 and 10) are processed on a large scale, a data protection officer (DPO)—a person with expert knowledge of data protection law and practices—must be designated to assist the controller or processor in monitoring their internal compliance with the Regulation. A designated DPO can be a current member of staff of a controller or processor, or the role can be outsourced to an external person or agency through a service contract. In any case, the processing body must make sure that there is no conflict of interest in other roles or interests that a DPO may hold. The contact details for the DPO must be published by the processing organisation (for example, in a privacy notice) and registered with the supervisory authority. The DPO is similar to a compliance officer and is also expected to be proficient at managing IT processes, data security (including dealing with cyberattacks) and other critical business continuity issues associated with the holding and processing of personal and sensitive data. The skill set required stretches beyond understanding legal compliance with data protection laws and regulations. The DPO must maintain a living data inventory of all data collected and stored on behalf of the organization. More details on the function and the role of the data protection officer were given on 13 December 2016 (revised 5 April 2017) in a guideline document. Organisations based outside the EU must also appoint an EU-based person as a representative and point of contact for their GDPR obligations. This is a distinct role from a DPO, although there is overlap in responsibilities that suggests this role can also be held by the designated DPO. GDPR Certification Articles 42 and 43 of the GDPR set the legal basis for formal GDPR certifications. They set the basis for two categories of certifications: National certification schemes, whose application is limited to a single EU/EEA country; European Data Protection Seals, which are recognized by all EU and EEA jurisdictions. According to Art. 42 GDPR, the purpose of this certification is to demonstrate “compliance with the GDPR of processing operations by controllers and processors”. There are over 70 references to certification in the GDPR, encompassing various obligations such as: Adequacy of the technical and organizational measures; Data sharing with data processors; Data protection by design and by default; International data transfers. The GDPR certification also contributes to reducing the legal and financial risks of applicants, as well as of data controllers using certified data processing services. The adoption of European Data Protection Seals is the responsibility of the European Data Protection Board (EDPB), and such seals are recognized across all EU and EEA Member States.
In October 2022, the Europrivacy certification criteria were officially recognized by the European Data Protection Board (EDPB) to serve as a European Data Protection Seal. Europrivacy was developed by the European research programme and is managed by the European Centre for Certification and Privacy (ECCP) in Luxembourg. Remedies, liability and penalties Besides any definition as a criminal offence according to national law, the following sanctions can be imposed under Article 83 GDPR: a warning in writing in cases of first and non-intentional noncompliance; regular periodic data protection audits; a fine of up to €10 million or up to 2% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater, if there has been an infringement of the following provisions (Article 83, Paragraph 4): the obligations of the controller and the processor pursuant to Articles 8, 11, 25 to 39, and 42 and 43, the obligations of the certification body pursuant to Articles 42 and 43, or the obligations of the monitoring body pursuant to Article 41(4); a fine of up to €20 million or up to 4% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater, if there has been an infringement of the following provisions (Article 83, Paragraphs 5 and 6): the basic principles for processing, including conditions for consent, pursuant to Articles 5, 6, 7, and 9, the data subjects' rights pursuant to Articles 12 to 22, the transfers of personal data to a recipient in a third country or an international organisation pursuant to Articles 44 to 49, any obligations pursuant to member state law adopted under Chapter IX, or noncompliance with an order or a temporary or definitive limitation on processing or the suspension of data flows by the supervisory authority pursuant to Article 58(2) or failure to provide access in violation of Article 58(1). Exemptions Some cases are not addressed in the GDPR specifically and are thus treated as exemptions: personal or household activities, law enforcement, and national security. Conversely, an entity or more precisely an "enterprise" has to be engaged in "economic activity" to be covered by the GDPR. Economic activity is defined broadly under European Union competition law. Applicability outside of the European Union The GDPR also applies to data controllers and processors outside of the European Economic Area (EEA) if they are engaged in the "offering of goods or services" (regardless of whether a payment is required) to data subjects within the EEA, or are monitoring the behaviour of data subjects within the EEA (Article 3(2)). The regulation applies regardless of where the processing takes place. This has been interpreted as intentionally giving the GDPR extraterritorial jurisdiction over non-EU establishments if they are doing business with people located in the EU. It is questionable whether the EU or its member states will in practice be able to enforce the GDPR against organisations which have no establishment in the EU. EU Representative Under Article 27, non-EU establishments subject to the GDPR are obliged to have a designee within the European Union, an "EU Representative", to serve as a point of contact for their obligations under the regulation. The EU Representative is the Controller's or Processor's contact person vis-à-vis European privacy supervisors and data subjects, in all matters relating to processing, to ensure compliance with the GDPR.
A natural (individual) or legal (corporation) person can play the role of an EU Representative. The non-EU establishment must issue a duly signed document (letter of accreditation) designating a given individual or company as its EU Representative. The said designation can only be given in writing. An establishment's failure to designate an EU Representative is considered ignorance of the regulation and relevant obligations, which itself is a violation of the GDPR subject to fines of up to €10 million or up to 2% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater. The intentional or negligent (willful blindness) character of the infringement (failure to designate an EU Representative) may rather constitute aggravating factors. An establishment does not need to name an EU Representative if they only engage in occasional processing that does not include, on a large scale, processing of special categories of data as referred to in Article 9(1) of GDPR or processing of personal data relating to criminal convictions and offences referred to in Article 10, and such processing is unlikely to result in a risk to the rights and freedoms of natural persons, taking into account the nature, context, scope and purposes of the processing. Non-EU public authorities and bodies are equally exempted. Third countries Chapter V of the GDPR forbids the transfer of the personal data of EU data subjects to countries outside of the EEA — known as third countries — unless appropriate safeguards are imposed, or the third country's data protection regulations are formally considered adequate by the European Commission (Article 45). Binding corporate rules, standard contractual clauses for data protection issued by a Data Processing Agreement (DPA), or a scheme of binding and enforceable commitments by the data controller or processor situated in a third country, are among examples. United Kingdom implementation The applicability of GDPR in the United Kingdom is affected by Brexit. Although the United Kingdom formally withdrew from the European Union on 31 January 2020, it remained subject to EU law, including GDPR, until the end of the transition period on 31 December 2020. The United Kingdom granted royal assent to the Data Protection Act 2018 on 23 May 2018, which augmented the GDPR, including aspects of the regulation that are to be determined by national law, and criminal offences for knowingly or recklessly obtaining, redistributing, or retaining personal data without the consent of the data controller. Under the European Union (Withdrawal) Act 2018, existing and relevant EU law was transposed into local law upon completion of the transition, and the GDPR was amended by statutory instrument to remove certain provisions no longer needed due to the UK's non-membership in the EU. Thereafter, the regulation will be referred to as "UK GDPR". The UK will not restrict the transfer of personal data to countries within the EEA under UK GDPR. However, the UK will become a third country under the EU GDPR, meaning that personal data may not be transferred to the country unless appropriate safeguards are imposed, or the European Commission performs an adequacy decision on the suitability of British data protection legislation (Chapter V). As part of the withdrawal agreement, the European Commission committed to perform an adequacy assessment. 
In April 2019, the UK Information Commissioner's Office (ICO) issued a children's code of practice for social networking services when used by minors, enforceable under GDPR, which also includes restrictions on "like" and "streak" mechanisms in order to discourage social media addiction and on the use of this data for processing interests. In March 2021, Secretary of State for Digital, Culture, Media and Sport Oliver Dowden stated that the UK was exploring divergence from the EU GDPR in order to "[focus] more on the outcomes that we want to have and less on the burdens of the rules imposed on individual businesses". Misconceptions Some common misconceptions about GDPR include: All processing of personal data requires consent of the data subject In fact, data can be processed without consent if one of the other five lawful bases for processing applies, and obtaining consent may often be inappropriate. Individuals have an absolute right to have their data deleted (right to be forgotten) Whilst there is an absolute right to opt-out of direct marketing, data controllers can continue to process personal data where they have a lawful basis to do so, as long as the data remain necessary for the purpose for which it was originally collected. Removing individuals' names from records takes them out of scope of GDPR "Pseudonymous" data where an individual is identified by a number can still be personal data if the data controller is capable of tying that data back to an individual in another way. GDPR applies to anyone processing personal data of EU citizens anywhere in the world In fact, it applies to non-EU established organizations only where they are processing data of data subjects located in the EU (irrespective of their citizenship) and then only when supplying goods or services to them, or monitoring their behaviour. Reception As per a study conducted by Deloitte in 2018, 92% of companies believe they are able to comply with GDPR in their business practices in the long run. Companies operating outside of the EU have invested heavily to align their business practices with GDPR. The area of GDPR consent has a number of implications for businesses who record calls as a matter of practice. A typical disclaimer is not considered sufficient to gain assumed consent to record calls. Additionally, when recording has commenced, should the caller withdraw their consent, then the agent receiving the call must be able to stop a previously started recording and ensure the recording does not get stored. IT professionals expect that compliance with the GDPR will require additional investment overall: over 80 percent of those surveyed expected GDPR-related spending to be at least US$ 100,000. The concerns were echoed in a report commissioned by the law firm Baker & McKenzie that found that "around 70 percent of respondents believe that organizations will need to invest additional budget/effort to comply with the consent, data mapping and cross-border data transfer requirements under the GDPR." The total cost for EU companies is estimated at €200 billion while for US companies the estimate is for $41.7 billion. It has been argued that smaller businesses and startup companies might not have the financial resources to adequately comply with the GDPR, unlike the larger international technology firms (such as Facebook and Google) that the regulation is ostensibly meant to target first and foremost. A lack of knowledge and understanding of the regulations has also been a concern in the lead-up to its adoption. 
A counter-argument to this has been that companies were made aware of these changes two years prior to them coming into effect and should have had enough time to prepare. The regulations, including whether an enterprise must have a data protection officer, have been criticized for potential administrative burden and unclear compliance requirements. Although data minimisation is a requirement, with pseudonymisation being one of the possible means, the regulation provides no guidance on how or what constitutes an effective data de-identification scheme, with a grey area on what would be considered as inadequate pseudonymisation subject to Section 5 enforcement actions. There is also concern regarding the implementation of the GDPR in blockchain systems, as the transparent and fixed record of blockchain transactions contradicts the very nature of the GDPR. Many media outlets have commented on the introduction of a "right to explanation" of algorithmic decisions, but legal scholars have since argued that the existence of such a right is highly unclear without judicial tests and is limited at best. The GDPR has garnered support from businesses who regard it as an opportunity to improve their data management. Mark Zuckerberg has also called it a "very positive step for the Internet", and has called for GDPR-style laws to be adopted in the US. Consumer rights groups such as The European Consumer Organisation are among the most vocal proponents of the legislation. Other supporters have attributed its passage to the whistleblower Edward Snowden. Free software advocate Richard Stallman has praised some aspects of the GDPR but called for additional safeguards to prevent technology companies from "manufacturing consent". Impact Academic experts who participated in the formulation of the GDPR wrote that the law "is the most consequential regulatory development in information policy in a generation. The GDPR brings personal data into a complex and protective regulatory regime." Despite having had at least two years to prepare and do so, many companies and websites changed their privacy policies and features worldwide directly prior to GDPR's implementation, and customarily provided email and other notifications discussing these changes. This was criticised for resulting in a fatiguing number of communications, while experts noted that some reminder emails incorrectly asserted that new consent for data processing had to be obtained for when the GDPR took effect (any previously obtained consent to processing is valid as long as it met the regulation's requirements). Phishing scams also emerged using falsified versions of GDPR-related emails, and it was also argued that some GDPR notice emails may have actually been sent in violation of anti-spam laws. In March 2019, a provider of compliance software found that many websites operated by EU member state governments contained embedded tracking from ad technology providers. The deluge of GDPR-related notices also inspired memes, including those surrounding privacy policy notices being delivered by atypical means (such as a Ouija board or Star Wars opening crawl), suggesting that Santa Claus's "naughty or nice" list was a violation, and a recording of excerpts from the regulation by a former BBC Radio 4 Shipping Forecast announcer. A blog, GDPR Hall of Shame, was also created to showcase unusual delivery of GDPR notices, and attempts at compliance that contained egregious violations of the regulation's requirements. 
Its author remarked that the regulation "has a lot of nitty gritty, in-the-weeds details, but not a lot of information about how to comply", but also acknowledged that businesses had two years to comply, making some of its responses unjustified. Research indicates that approximately 25% of software vulnerabilities have GDPR implications. Since Article 33 emphasizes breaches, not bugs, security experts advise companies to invest in processes and capabilities to identify vulnerabilities before they can be exploited, including coordinated vulnerability disclosure processes. An investigation of Android apps' privacy policies, data access capabilities, and data access behaviour has shown that numerous apps display a somewhat privacy-friendlier behaviour since the GDPR was implemented, although they still retain most of their data access privileges in their code. An investigation of the Norwegian Consumer Council into the post-GDPR data subject dashboards on social media platforms (such as Google dashboard) has concluded that large social media firms deploy deceptive tactics in order to discourage their customers from sharpening their privacy settings. On the effective date, some websites began to block visitors from EU countries entirely (including Instapaper, Unroll.me, and Tribune Publishing-owned newspapers, such as the Chicago Tribune and the Los Angeles Times) or redirect them to stripped-down versions of their services (in the case of National Public Radio and USA Today) with limited functionality and/or no advertising so that they will not be liable. Some companies, such as Klout, and several online video games, ceased operations entirely to coincide with its implementation, citing the GDPR as a burden on their continued operations, especially due to the business model of the former. The volume of online behavioural advertising placements in Europe fell 25–40% on 25 May 2018. In 2020, two years after the GDPR began its implementation, the European Commission assessed that users across the EU had increased their knowledge about their rights, stating that "69% of the population above the age of 16 in the EU have heard about the GDPR and 71% of people heard about their national data protection authority." The commission also found that privacy has become a competitive quality for companies which consumers are taking into account in their decisionmaking processes. Enforcement and inconsistency Facebook and subsidiaries WhatsApp and Instagram, as well as Google LLC (targeting Android), were immediately sued by Max Schrems's non-profit NOYB just hours after midnight on 25 May 2018, for their use of "forced consent". Schrems asserts that both companies violated Article 7(4) by not presenting opt-ins for data processing consent on an individualized basis, and requiring users to consent to all data processing activities (including those not strictly necessary) or would be forbidden from using the services. On 21 January 2019, Google was fined €50 million by the French DPA for showing insufficient control, consent, and transparency over use of personal data for behavioural advertising. In November 2018, following a journalistic investigation into Liviu Dragnea, the Romanian DPA (ANSPDCP) used a GDPR request to demand information on the RISE Project's sources. 
In July 2019, the British Information Commissioner's Office issued an intention to fine British Airways a record £183 million (1.5% of turnover) for poor security arrangements that enabled a 2018 web skimming attack affecting around 380,000 transactions. British Airways was ultimately fined a reduced amount of £20m, with the ICO noting that they had "considered both representations from BA and the economic impact of COVID-19 on their business before setting a final penalty". In December 2019, Politico reported that Ireland and Luxembourg – two smaller EU countries that have had a reputation as tax havens and (especially in the case of Ireland) as a base for European subsidiaries of U.S. big tech companies – were facing significant backlogs in their investigations of major foreign companies under the GDPR, with Ireland citing the complexity of the regulation as a factor. Critics interviewed by Politico also argued that enforcement was being hampered by varying interpretations between member states, the prioritisation of guidance over enforcement by some authorities, and a lack of cooperation between member states. In November 2021, the Irish Council for Civil Liberties lodged a formal complaint with the Commission, alleging that it was in breach of its obligation under EU law to carefully monitor how Ireland applies the GDPR. The Commission subsequently published a new commitment based on the ICCL complaint in January 2023. While companies are now subject to legal obligations, there are still various inconsistencies in the practical and technical implementation of the GDPR. As an example, according to the GDPR's right of access, companies are obliged to provide data subjects with the data they gather about them. However, in a study on loyalty cards in Germany, companies did not provide the data subjects with the exact information about the purchased articles. One might argue that such companies do not collect the information about the purchased articles at all, although this would be at odds with their business models. Data subjects therefore tend to see this as a GDPR violation, and studies have consequently suggested stronger oversight by the supervisory authorities. According to the GDPR, end-users' consent should be valid, freely given, specific, informed and active. However, the lack of enforceability regarding obtaining lawful consents has been a challenge. As an example, a 2020 study showed that Big Tech, i.e. Google, Amazon, Facebook, Apple, and Microsoft (GAFAM), use dark patterns in their consent-obtaining mechanisms, which raises doubts regarding the lawfulness of the acquired consent. In March 2021, EU member states led by France were reported to be attempting to modify the impact of the privacy regulation in Europe by exempting national security agencies. After around 160 million euros in GDPR fines were imposed in 2020, the figure was already over one billion euros in 2021. Influence on foreign laws Mass adoption of these new privacy standards by multinational companies has been cited as an example of the "Brussels effect", a phenomenon wherein European laws and regulations are used as a baseline due to their gravitas. The U.S. state of California passed the California Consumer Privacy Act on 28 June 2018, taking effect on 1 January 2020; it grants rights to transparency and control over the collection of personal information by companies in a similar manner to the GDPR.
Critics have argued that such laws need to be implemented at the federal level to be effective, as a collection of state-level laws would have varying standards that would complicate compliance. Two other U.S. states have since enacted similar legislation: Virginia passed the Consumer Data Protection Act on 2 March 2021, and Colorado enacted the Colorado Privacy Act on 8 July 2021. The Republic of Turkey, a candidate for European Union membership, adopted the Law on the Protection of Personal Data on 24 March 2016 in compliance with the EU acquis. China's 2021 Personal Information Protection Law is the country's first comprehensive law on personal data rights and is modeled after the GDPR. Switzerland will also adopt a new data protection law that largely follows the EU's GDPR. With overseas regions of the European Union joining non-governmental organisational (NGO) bodies in the Caribbean region, such as the Organisation of Eastern Caribbean States, the GDPR rules have become necessary to consider in the absence of any current legislation in the region concerning privacy rights, and for maintaining compliance with the laws of those outer regions. Website views and revenue A 2024 study found that GDPR reduced both EU user website page views and website revenue by 12%. Timeline 25 January 2012: The proposal for the GDPR was released. 21 October 2013: The European Parliament Committee on Civil Liberties, Justice and Home Affairs (LIBE) had its orientation vote. 15 December 2015: Negotiations between the European Parliament, Council and Commission (Formal Trilogue meeting) resulted in a joint proposal. 17 December 2015: The European Parliament's LIBE Committee voted for the negotiations between the three parties. 8 April 2016: Adoption by the Council of the European Union. The only member state voting against was Austria, which argued that the level of data protection in some respects fell short compared to the 1995 directive. 14 April 2016: Adoption by the European Parliament. 24 May 2016: The regulation entered into force, 20 days after its publication in the Official Journal of the European Union. 6 May 2018: The Data Protection Directive for the police and justice sectors had to be transposed into national legislation by this day. 25 May 2018: Its provisions became directly applicable in all member states, two years after the regulation entered into force. 20 July 2018: The GDPR became valid in the EEA countries (Iceland, Liechtenstein, and Norway), after the EEA Joint Committee and the three countries agreed to follow the regulation. EU Digital Single Market The EU Digital Single Market strategy relates to "digital economy" activities related to businesses and people in the EU. As part of the strategy, the GDPR and the NIS Directive both apply from 25 May 2018. The proposed ePrivacy Regulation was also planned to be applicable from 25 May 2018, but will be delayed for several months. The eIDAS Regulation is also part of the strategy. In an initial assessment, the European Council has stated that the GDPR should be considered "a prerequisite for the development of future digital policy initiatives".
See also Similar privacy laws in other countries: General Personal Data Protection Law (LGPD) (Brazil) California Consumer Privacy Act (CCPA) Children's Online Privacy Protection Act (COPPA) (USA) Personal Information Protection Law (PIPL) (China) Nigeria Data Protection Act, 2023 (NDP Act) (Nigeria) Personal Data Protection Act 2012 (PDPA) (Singapore) Protection of Personal Information Act (PoPIA) (South Africa) Personal Data Protection Act, No. 9 of 2022 (PDPA) (Sri Lanka) Related EU regulation: Cyber Security and Resilience Bill - UK proposed legislation 2024. Data Act, proposed EU law from 2022 Data Governance Act, proposed EU law from 2020 Digital Markets Act Digital Services Act EU–US Privacy Shield European Data Protection Board (EDPB) European Health Data Space Privacy and Electronic Communications Directive 2002 (ePrivacy Directive, ePD) Related concepts: Convention on Cybercrime Data portability Do Not Track legislation ePrivacy Regulation Privacy Impact Assessment Compliance tactics by certain companies: Consent or pay Footnotes References External links Data protection, European Commission Procedure for the proposed revised legal framework (General Data Protection Regulation) Handbook on European data protection law, European Union Agency for Fundamental Rights Europrivacy Official Website ECCP - European Centre for Certification and Privacy GDPR Certification Privacy law Information privacy European Union regulations European Union data protection law Data protection 2016 establishments in Europe 2018 in Europe Juncker Commission Regulation of artificial intelligence
General Data Protection Regulation
[ "Technology", "Engineering" ]
9,957
[ "Cybersecurity engineering", "Computing and society", "Information privacy", "Regulation of artificial intelligence" ]
38,104,403
https://en.wikipedia.org/wiki/Polar%20code%20%28coding%20theory%29
In information theory, polar codes are linear block error-correcting codes. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high reliability or low reliability (in other words, they polarize or become sparse), and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity. Polar codes were developed by Erdal Arikan, a professor of electrical engineering at Bilkent University. Notably, polar codes have modest encoding and decoding complexity of O(N log N) in the block length N, which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can reach the fundamental lower bounds for energy consumption of two-dimensional circuitry to within an O(N^ε) factor for any ε > 0. Industrial applications Polar codes have some limitations when used in industrial applications. Primarily, the original design of the polar codes achieves capacity when block sizes are asymptotically large with a successive cancellation decoder. However, with the block sizes used in industry, the performance of successive cancellation decoding is poor compared to well-defined and implemented coding schemes such as low-density parity-check (LDPC) codes and turbo codes. Polar performance can be improved with successive cancellation list decoding, but its usability in real applications is still questionable due to very poor implementation efficiencies caused by the iterative approach. In October 2016, Huawei announced that it had achieved 27 Gbit/s in 5G field trial tests using polar codes for channel coding. The improvements have been introduced so that the channel performance has now almost closed the gap to the Shannon limit, which sets the bar for the maximum rate for a given bandwidth and a given noise level. In November 2016, 3GPP agreed to adopt polar codes for the eMBB (Enhanced Mobile Broadband) control channels for the 5G NR (New Radio) interface. At the same meeting, 3GPP agreed to use LDPC for the corresponding data channel. PAC codes In 2019, Arıkan suggested employing a convolutional pre-transformation before polar coding. These pre-transformed variants of polar codes were dubbed polarization-adjusted convolutional (PAC) codes. It was shown that the pre-transformation can effectively improve the distance properties of polar codes by reducing the number of minimum-weight and in general small-weight codewords, resulting in the improvement of block error rates under near-maximum-likelihood (ML) decoding algorithms such as Fano decoding and list decoding. Fano decoding is a tree search algorithm that determines the transmitted codeword by utilizing an optimal metric function to efficiently guide the search process. PAC codes are also equivalent to post-transforming polar codes with certain cyclic codes. At short blocklengths, such codes outperform both convolutional codes and CRC-aided list decoding of conventional polar codes. Neural Polar Decoders Neural Polar Decoders (NPDs) are an advancement in channel coding that combine neural networks (NNs) with polar codes, providing unified decoding for channels with or without memory, without requiring an explicit channel model.
They use four neural networks to approximate the functions of polar decoding: the embedding (E) NN, the check-node (F) NN, the bit-node (G) NN, and the embedding-to-LLR (H) NN. The weights of these NNs are determined by estimating the mutual information of the synthetic channels. By the end of training, the weights of the NPD are fixed and can then be used for decoding. The computational complexity of NPDs is determined by the parameterization of the neural networks, unlike successive cancellation (SC) trellis decoders, whose complexity is determined by the channel model and are typically used for finite-state channels (FSCs). The computational complexity of NPDs is , where is the number of hidden units in the neural networks, is the dimension of the embedding, and is the block length. In contrast, the computational complexity of SC trellis decoders is , where is the state space of the channel model. NPDs can be integrated into SC decoding schemes such as SC list decoding and CRC-aided SC decoding. They are also compatible with non-uniform and i.i.d. input distributions by integrating them into the Honda-Yamamoto scheme. This flexibility allows NPDs to be used in various decoding scenarios, improving error correction performance while maintaining manageable computational complexity. References External links AFF3CT home page: A Fast Forward Error Correction Toolbox for high speed polar code simulations in software Error detection and correction Coding theory Capacity-achieving codes Capacity-approaching codes
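The recursive kernel construction described at the start of this entry can be illustrated with a short sketch. This is not production encoder code; in particular, the frozen-bit positions below are chosen arbitrarily for illustration rather than by a real reliability (polarization) construction:

```python
# Minimal polar encoding sketch over GF(2) using the 2x2 Arikan kernel:
# each recursion maps a pair (u1, u2) to (u1 XOR u2, u2).

def polar_transform(u):
    """Recursively apply the kernel to a bit vector whose length is a power of two."""
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    upper = polar_transform([u[2 * i] ^ u[2 * i + 1] for i in range(half)])
    lower = polar_transform([u[2 * i + 1] for i in range(half)])
    return upper + lower

def encode(data_bits, frozen_positions, block_length):
    """Put data bits on the non-frozen positions (frozen bits are 0), then transform."""
    u = [0] * block_length
    it = iter(data_bits)
    for i in range(block_length):
        if i not in frozen_positions:
            u[i] = next(it)
    return polar_transform(u)

# Example: N = 8, four information bits, four (arbitrarily chosen) frozen positions.
print(encode([1, 0, 1, 1], frozen_positions={0, 1, 2, 4}, block_length=8))
```

In a real code the frozen set would be chosen from the least reliable synthetic channels, and decoding would use successive cancellation or one of the list or neural variants discussed above.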
Polar code (coding theory)
[ "Mathematics", "Engineering" ]
1,022
[ "Discrete mathematics", "Coding theory", "Reliability engineering", "Error detection and correction" ]
48,599,028
https://en.wikipedia.org/wiki/Evert%20Jan%20Baerends
Evert Jan Baerends (born 17 September 1945) is a Dutch theoretical chemist. He is an emeritus professor of the Vrije Universiteit Amsterdam. Baerends is known for his development and application of electronic structure calculations, which over time led to the development of the Amsterdam Density Functional. He worked extensively on density functional theory. Career Baerends was born on 17 September 1945 in Voorst. He obtained his PhD at the Vrije Universiteit Amsterdam under professor Pieter Ros. Baerends became a professor of Theoretical chemistry at the Vrije Universiteit Amsterdam. He did extensive research on density functional theory and was involved in the development and application of electronic structure calculations, which later led to the development of the Amsterdam Density Functional. Baerends became a member of the Royal Netherlands Academy of Arts and Sciences in 2004. In 2010 he was awarded the Schrödinger Medal by the World Association of Theoretical and Computational Chemists, being noted for: "For his pioneering contributions to the development of computational density functional methods and his fundamental contributions to density functional theory and density matrix theory." Baerends is a member of the International Academy of Quantum Molecular Science. After his retirement in the Netherlands Baerends lectured at Pohang University of Science and Technology in South Korea. In 2019 he received an honorary doctorate from the University of Girona. References 1945 births Living people Computational chemists 20th-century Dutch chemists Members of the Royal Netherlands Academy of Arts and Sciences People from Voorst Academic staff of Pohang University of Science and Technology Schrödinger Medal recipients Theoretical chemists Vrije Universiteit Amsterdam alumni Academic staff of Vrije Universiteit Amsterdam 21st-century Dutch chemists
Evert Jan Baerends
[ "Chemistry" ]
346
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists" ]
48,606,512
https://en.wikipedia.org/wiki/Lagrange%20stability
Lagrange stability is a concept in the stability theory of dynamical systems, named after Joseph-Louis Lagrange. For any point x in the state space X of a real continuous dynamical system, the motion through x is said to be positively Lagrange stable if the positive semi-orbit of x is compact. If the negative semi-orbit is compact, then the motion is said to be negatively Lagrange stable. The motion through x is said to be Lagrange stable if it is both positively and negatively Lagrange stable. If the state space X is a Euclidean space, then the above definitions are equivalent to the positive and negative semi-orbits being bounded, respectively. A dynamical system is said to be positively-/negatively-/Lagrange stable if, for each point x in X, the motion through x is positively-/negatively-/Lagrange stable, respectively. References Elias P. Gyftopoulos, Lagrange Stability and Liapunov's Direct Method. Proc. of Symposium on Reactor Kinetics and Control, 1963. (PDF) Lagrangian mechanics Stability theory Dynamical systems
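The definitions can be restated compactly in standard notation. This is a sketch: the flow symbol π, the point x, and the semi-orbit symbols γ⁺(x) and γ⁻(x) are conventional choices assumed here, not notation taken from the source:

```latex
% Semi-orbits of the motion through x under a flow \pi on the state space X
\gamma^{+}(x) = \{\pi(x,t) : t \ge 0\}, \qquad
\gamma^{-}(x) = \{\pi(x,t) : t \le 0\}

% Lagrange stability of the motion through x
\text{positively Lagrange stable} \iff \gamma^{+}(x)\ \text{is compact}
\quad\text{(equivalently, bounded when } X = \mathbb{R}^{n}\text{)}
\text{negatively Lagrange stable} \iff \gamma^{-}(x)\ \text{is compact}
\text{Lagrange stable} \iff \text{both conditions hold}
```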
Lagrange stability
[ "Physics", "Mathematics" ]
221
[ "Mathematical analysis", "Mathematical analysis stubs", "Lagrangian mechanics", "Classical mechanics", "Stability theory", "Mechanics", "Dynamical systems" ]
43,742,131
https://en.wikipedia.org/wiki/De%20novo%20peptide%20sequencing
In mass spectrometry, de novo peptide sequencing is the method in which a peptide amino acid sequence is determined from tandem mass spectrometry. Knowing the amino acid sequence of peptides from a protein digest is essential for studying the biological function of the protein. Historically, this was accomplished by the Edman degradation procedure. Today, analysis by a tandem mass spectrometer is a more common method to solve the sequencing of peptides. Generally, there are two approaches: database search and de novo sequencing. Database search is the simpler approach: the mass spectral data of the unknown peptide are searched against a database of known peptide sequences, and the peptide with the highest matching score is selected. This approach fails to recognize novel peptides since it can only match to existing sequences in the database. De novo sequencing is an assignment of fragment ions from a mass spectrum. Different algorithms are used for interpretation and most instruments come with de novo sequencing programs. Peptide fragmentation Peptides are protonated in positive-ion mode. The proton initially locates at the N-terminus or a basic residue side chain, but because of internal solvation it can move along the backbone, and cleavage at different sites results in different fragments. The fragmentation rules are well described in several publications. Three different types of backbone bonds can be broken to form peptide fragments: the alkyl carbonyl bond (CHR-CO), the peptide amide bond (CO-NH), and the amino alkyl bond (NH-CHR). Different types of fragment ions When the backbone bonds cleave, six different types of sequence ions are formed as shown in Fig. 1. The N-terminal charged fragment ions are classed as a, b or c, while the C-terminal charged ones are classed as x, y or z. The subscript n is the number of amino acid residues. The nomenclature was first proposed by Roepstorff and Fohlman; Biemann then modified it, and this became the most widely accepted version. Among these sequence ions, a-, b- and y-ions are the most common ion types, especially in low-energy collision-induced dissociation (CID) mass spectrometers, since the peptide amide bond (CO-NH) is the most vulnerable and b-ions can further lose CO to form a-ions. Mass of b-ions = Σ (residue masses) + 1 (H+) Mass of y-ions = Σ (residue masses) + 19 (H2O+H+) Mass of a-ions = mass of b-ions – 28 (CO) Double backbone cleavage produces internal ions, acylium-type like H2N-CHR2-CO-NH-CHR3-CO+ or immonium-type like H2N-CHR2-CO-NH+=CHR3. These ions usually appear as disturbances in the spectra. Further cleavage happens under high-energy CID at the side chain of C-terminal residues, forming dn, vn, wn-ions. Fragmentation rules summary Most fragment ions are b- or y-ions. a-ions are also frequently seen, formed by the loss of CO from b-ions. Satellite ions (wn, vn, dn-ions) are formed by high-energy CID. Ser-, Thr-, Asp- and Glu-containing ions generate neutral molecular loss of water (-18). Asn-, Gln-, Lys-, Arg-containing ions generate neutral molecular loss of ammonia (-17). Neutral loss of ammonia from Arg leads to fragment ions (y-17) or (b-17) ions with higher abundance than their corresponding ions. When the C-terminus has a basic residue, the peptide generates a (bn-1+18) ion. A complementary b-y ion pair can be observed in spectra of multiply charged ions. For this b-y ion pair, the sum of their subscripts is equal to the total number of amino acid residues in the unknown peptide.
If the C-terminus is Arg or Lys, y1-ion can be found in the spectrum to prove it. Methods for peptide fragmentation In low energy collision induced dissociation (CID), b- and y-ions are the main product ions. In addition, loss of ammonia (-17 Da) is observed in fragment with RKNQ amino acids in it. Loss of water (-18 Da) can be observed in fragment with STED amino acids in it. No satellite ions are shown in the spectra. In high energy CID, all different types of fragment ions can be observed but no losses of ammonia or water. In electron transfer dissociation (ETD) and electron capture dissociation (ECD), the predominant ions are c, y, z+1, z+2 and sometimes w ions. For post source decay (PSD) in MALDI, a, b, y-ions are most common product ions. Factors affecting fragmentation are the charge state (the higher charge state, the less energy is needed for fragmentation), mass of the peptide (the larger mass, the more energy is required), induced energy (higher energy leads to more fragmentation), primary amino acid sequence, mode of dissociation and collision gas. Guidelines for interpretation For interpretation, first, look for single amino acid immonium ions (H2N+=CHR2). Corresponding immonium ions for amino acids are listed in Table 1. Ignore a few peaks at the high-mass end of the spectrum. They are ions that undergo neutral molecules losses (H2O, NH3, CO2, HCOOH) from [M+H]+ ions. Find mass differences at 28 Da since b-ions can form a-ions by loss of CO. Look for b2-ions at low-mass end of the spectrum, which helps to identify yn-2-ions too. Mass of b2-ions are listed in Table 2, as well as single amino acids that have equal mass to b2-ions. The mass of b2-ion = mass of two amino acid residues + 1. Identify a sequence ion series by the same mass difference, which matches one of the amino acid residue masses (see Table 1). For example, mass differences between an and an-1, bn and bn-1, cn and cn-1 are the same. Identify yn-1-ion at the high-mass end of the spectrum. Then continue to identify yn-2, yn-3... ions by matching mass differences with the amino acid residue masses (see Table 1). Look for the corresponding b-ions of the identified y-ions. The mass of b+y ions is the mass of the peptide +2 Da. After identifying the y-ion series and b-ion series, assign the amino acid sequence and check the mass. The other method is to identify b-ions first and then find the corresponding y-ions. Algorithms and software Manual de novo sequencing is labor-intensive and time-consuming. Usually algorithms or programs come with the mass spectrometer instrument are applied for the interpretation of spectra. Development of de novo sequencing algorithms An old method is to list all possible peptides for the precursor ion in mass spectrum, and match the mass spectrum for each candidate to the experimental spectrum. The possible peptide that has the most similar spectrum will have the highest chance to be the right sequence. However, the number of possible peptides may be large. For example, a precursor peptide with a molecular weight of 774 has 21,909,046 possible peptides. Even though it is done in the computer, it takes a long time. Another method is called "subsequencing", which instead of listing whole sequence of possible peptides, matches short sequences of peptides that represent only a part of the complete peptide. When sequences that highly match the fragment ions in the experimental spectrum are found, they are extended by residues one by one to find the best matching. 
In the third method, a graphical display of the data is applied, in which fragment ions that differ by the mass of one amino acid residue are connected by lines. In this way, it is easier to get a clear image of ion series of the same type. This method could be helpful for manual de novo peptide sequencing, but does not work under high-throughput conditions. The fourth method, which is considered to be successful, is based on graph theory. Applying graph theory in de novo peptide sequencing was first mentioned by Bartels. Peaks in the spectrum are transformed into vertices in a graph called a "spectrum graph". If two vertices have the same mass difference as one or several amino acids, a directed edge is added. The SeqMS, Lutefisk, and Sherenga algorithms are some examples of this type. Deep Learning More recently, deep learning techniques have been applied to solve the de novo peptide sequencing problem. The first breakthrough was DeepNovo, which adopted a convolutional neural network structure, achieved major improvements in sequence accuracy, and enabled complete protein sequence assembly without assisting databases. Subsequently, additional network structures, such as PointNet (PointNovo), have been adopted to extract features from a raw spectrum. The de novo peptide sequencing problem is then framed as a sequence prediction problem. Given a previously predicted partial peptide sequence, neural-network-based de novo peptide sequencing models repeatedly generate the most probable next amino acid until the predicted peptide's mass matches the precursor mass. At inference time, search strategies such as beam search can be adopted to explore a larger search space while keeping the computational cost low. Compared with previous methods, neural-network-based models have demonstrated significantly better accuracy and sensitivity. Moreover, with a careful model design, deep-learning-based de novo peptide sequencing algorithms can also be fast enough to achieve real-time peptide de novo sequencing. PEAKS software incorporates this neural network learning in its de novo sequencing algorithms. Software packages As described by Andreotti et al. in 2012, Antilope is a combination of Lagrangian relaxation and an adaptation of Yen's k shortest paths. It is based on the 'spectrum graph' method, contains different scoring functions, and is comparable in running time and accuracy to "the popular state-of-the-art programs" PepNovo and NovoHMM. Grossmann et al. presented AUDENS in 2005 as an automated de novo peptide sequencing tool containing a preprocessing module that can recognize signal peaks and noise peaks. Lutefisk can solve de novo sequencing from CID mass spectra. In this algorithm, significant ions are first found, and then the N- and C-terminal evidence lists are determined. Based on the sequence list, it generates complete sequences in spectra and scores them against the experimental spectrum. However, the result may include several sequence candidates that differ only slightly, so it is hard to find the right peptide sequence. A second program, CIDentify, which is a modified version by Alex Taylor of Bill Pearson's FASTA algorithm, can be applied to distinguish those uncertain similar candidates. Mo et al. presented the MSNovo algorithm in 2007 and proved that it performed "better than existing de novo tools on multiple data sets". This algorithm can interpret de novo sequencing data from LCQ and LTQ mass spectrometers and from singly, doubly, and triply charged ions.
Different from other algorithms, it applies a novel scoring function and uses a mass array instead of a spectrum graph. Fisher et al. proposed the NovoHMM method of de novo sequencing. A hidden Markov model (HMM) is applied as a new way to solve de novo sequencing in a Bayesian framework. Instead of scoring single symbols of the sequence, this method considers posterior probabilities for amino acids. In the paper, this method is shown to perform better than other popular de novo peptide sequencing methods such as PepNovo on many example spectra. PEAKS is a complete software package for the interpretation of peptide mass spectra. It contains de novo sequencing, database search, PTM identification, homology search and quantification in data analysis. Ma et al. described a new model and algorithm for de novo sequencing in PEAKS, and compared its performance with Lutefisk on several tryptic peptides of standard proteins measured on a quadrupole time-of-flight (Q-TOF) mass spectrometer. PepNovo is a high-throughput de novo peptide sequencing tool that uses a probabilistic network as its scoring method. It usually takes less than 0.2 seconds to interpret one spectrum. As described by Frank et al., PepNovo works better than several popular algorithms such as Sherenga, PEAKS and Lutefisk. A newer version, PepNovo+, is now available. Chi et al. presented pNovo+ in 2013 as a new de novo peptide sequencing tool that uses complementary HCD and ETD tandem mass spectra. In this method, a component algorithm, pDAG, largely speeds up the acquisition time of peptide sequencing to 0.018 s on average, which is three times as fast as other popular de novo sequencing software. As described by Jeong et al., compared with other de novo peptide sequencing tools, which work well on only certain types of spectra, UniNovo is a more universal tool that performs well on various types of spectra or spectral pairs such as CID, ETD, HCD, and CID/ETD. It has better accuracy than PepNovo+ or PEAKS. Moreover, it provides the error rate of the reported peptide sequences. Ma published Novor in 2015 as a real-time de novo peptide sequencing engine. The tool sought to improve de novo sequencing speed by an order of magnitude while retaining accuracy similar to other de novo tools on the market. On a MacBook Pro laptop, Novor has achieved more than 300 MS/MS spectra per second. Pevtsov et al. compared the performance of the above five de novo sequencing algorithms: AUDENS, Lutefisk, NovoHMM, PepNovo, and PEAKS. QSTAR and LCQ mass spectrometer data were employed in the analysis and evaluated by the relative sequence distance (RSD) value, which is the similarity between the de novo peptide sequence and the true peptide sequence, calculated by a dynamic programming method. Results showed that all algorithms had better performance on QSTAR data than on LCQ data: PEAKS, the best, had a success rate of 49.7% on QSTAR data, and NovoHMM, the best, had a success rate of 18.3% on LCQ data. The performance order on QSTAR data was PEAKS > Lutefisk, PepNovo > AUDENS, NovoHMM, and on LCQ data it was NovoHMM > PepNovo, PEAKS > Lutefisk > AUDENS. Compared over a range of spectrum quality, PEAKS and NovoHMM also showed the best performance on both data sets among all five algorithms. PEAKS and NovoHMM had the best sensitivity in both QSTAR and LCQ data as well. However, no evaluated algorithm exceeded 50% exact identification on either data set.
Recent progress in mass spectrometers has made it possible to generate mass spectra of ultra-high resolution. The improved accuracy, together with the increased amount of mass spectrometry data being generated, has drawn interest in applying deep learning techniques to de novo peptide sequencing. In 2017 Tran et al. proposed DeepNovo, the first deep-learning-based de novo sequencing software. The benchmark analysis in the original publication demonstrated that DeepNovo outperformed previous methods, including PEAKS, Novor and PepNovo, by a significant margin. DeepNovo is implemented in Python with the TensorFlow framework. To represent a mass spectrum as a fixed-dimensional input to the neural network, DeepNovo discretizes each spectrum into a vector of length 150,000. This unnecessarily large spectrum representation, and the single-thread CPU usage in the original implementation, prevent DeepNovo from performing peptide sequencing in real time. To further improve the efficiency of de novo peptide sequencing models, Qiao et al. proposed PointNovo in 2020. PointNovo is Python software implemented with the PyTorch framework, and it avoids the space-consuming spectrum vector representation adopted by DeepNovo. Compared with DeepNovo, PointNovo achieves better accuracy and efficiency at the same time by directly representing a spectrum as a set of m/z and intensity pairs. References Mass spectrometry Proteomic sequencing
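The b-/y-ion bookkeeping given in the fragmentation section above can be illustrated with a few lines of code. This is a sketch using nominal (integer) residue masses and the stated rules b = Σ(residues) + 1 and y = Σ(residues) + 19; the example peptide and helper names are made up, and real software uses accurate monoisotopic masses:

```python
# Nominal (integer) amino acid residue masses, in Da.
NOMINAL_RESIDUE_MASS = {
    "G": 57, "A": 71, "S": 87, "P": 97, "V": 99, "T": 101, "C": 103,
    "L": 113, "I": 113, "N": 114, "D": 115, "Q": 128, "K": 128,
    "E": 129, "M": 131, "H": 137, "F": 147, "R": 156, "Y": 163, "W": 186,
}

def fragment_ladders(peptide: str):
    """Return the singly charged b- and y-ion mass ladders for a peptide."""
    masses = [NOMINAL_RESIDUE_MASS[aa] for aa in peptide]
    b_ions = [sum(masses[:i]) + 1 for i in range(1, len(masses))]         # N-terminal fragments
    y_ions = [sum(masses[i:]) + 19 for i in range(1, len(masses))][::-1]  # C-terminal fragments
    return b_ions, y_ions

b, y = fragment_ladders("SAMPLER")
print("b-ions:", b)
print("y-ions:", y)

# Complementarity check from the text: a b/y pair from the same cleavage site
# sums to the peptide mass + 2 (here, sum of residues + H2O + 2).
peptide_sum = sum(NOMINAL_RESIDUE_MASS[aa] for aa in "SAMPLER")
print(all(bi + yi == peptide_sum + 20 for bi, yi in zip(b, reversed(y))))
```

De novo algorithms essentially run this bookkeeping in reverse: they look for ladders of peaks whose successive mass differences match residue masses, as in the spectrum-graph methods described above.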
De novo peptide sequencing
[ "Physics", "Chemistry", "Biology" ]
3,427
[ "Spectrum (physical sciences)", "Instrumental analysis", "Proteomic sequencing", "Mass", "Molecular biology techniques", "Mass spectrometry", "Matter" ]
43,742,565
https://en.wikipedia.org/wiki/Wear%20coefficient
The wear coefficient is a physical coefficient used to measure, characterize and correlate the wear of materials. Background Traditionally, the wear of materials has been characterized by weight loss and wear rate. However, studies have found that the wear coefficient is more suitable because it takes the wear rate, the applied load, and the hardness of the wear pin into account. Although measurement variations of the order of 10−1 have been observed, the variations can be minimized if suitable precautions are taken. A wear volume versus distance curve can be divided into at least two regimes, the transient wear regime and the steady-state wear regime. The volume or weight loss is initially curvilinear. The wear rate per unit sliding distance in the transient wear regime decreases until it has reached a constant value in the steady-state wear regime. Hence the standard wear coefficient value obtained from a volume loss versus distance curve is a function of the sliding distance. Measurement The steady-state wear equation relates the volumetric loss V to the normal load P and the sliding distance L through the Brinell hardness H (expressed in pascals) and the dimensionless standard wear coefficient K, and the wear coefficient in the abrasive model is defined in the same terms. As V can be estimated from the weight loss and the density, the wear coefficient can also be expressed in terms of these quantities. As the standard method uses the total volume loss and the total sliding distance, there is a need to define a net steady-state wear coefficient in terms of the steady-state sliding distance and the steady-state wear volume. With regard to the sliding wear model, K can be expressed in terms of the plastically deformed zone. If the coefficient of friction is defined as the ratio of the tangential force to the normal load, then K can be defined for abrasive wear as the ratio of the work done to create abrasive wear particles by cutting to the external work done. In an experimental situation the hardness of the uppermost layer of material in the contact may not be known with any certainty; consequently, the ratio K/H is more useful. This is known as the dimensional wear coefficient or the specific wear rate, and it is usually quoted in units of mm3 N−1 m−1. Composite material As metal matrix composite (MMC) materials have come to be used more often, owing to their better physical, mechanical and tribological properties compared with the matrix materials, it is necessary to adjust the equation. The proposed equation introduces a term that is a function of the average particle diameter and the volume fraction of particles, together with a term that is a function of the applied load, the pin hardness and the gradient of the wear curve, so that the effects of load and pin hardness can be shown. As wear testing is a time-consuming process, it was shown to be possible to save time by using a predictive method. See also wear rate weight loss References Notes Further reading Nam P. Suh, Tribophysics, Prentice-Hall, 1986 External links Materials science Materials degradation Building engineering
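A minimal sketch of how the wear coefficient and the specific wear rate described above could be computed from a pin-on-disc measurement. The plain Archard form K = V·H/(P·L) is used here; some formulations (e.g. Rabinowicz's) include an extra factor of 3, and all numerical inputs below are invented purely for illustration.

# Illustrative Archard-type calculation; the numerical values are made up
# for the example, and some formulations include an extra factor of 3.

def specific_wear_rate(volume_loss_mm3, load_N, distance_m):
    """Dimensional wear coefficient k = V / (P * L), in mm^3 N^-1 m^-1."""
    return volume_loss_mm3 / (load_N * distance_m)

def wear_coefficient(volume_loss_mm3, load_N, distance_m, hardness_Pa):
    """Dimensionless K = V * H / (P * L) in the plain Archard form."""
    volume_m3 = volume_loss_mm3 * 1e-9          # mm^3 -> m^3
    return volume_m3 * hardness_Pa / (load_N * distance_m)

V = 0.12      # mm^3 of material removed (assumed)
P = 20.0      # N normal load (assumed)
L = 1000.0    # m sliding distance (assumed)
H = 1.2e9     # Pa Brinell hardness of the pin (assumed)

print(specific_wear_rate(V, P, L))   # ~6.0e-6 mm^3 N^-1 m^-1
print(wear_coefficient(V, P, L, H))  # ~7.2e-6 (dimensionless)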
Wear coefficient
[ "Physics", "Materials_science", "Engineering" ]
607
[ "Applied and interdisciplinary physics", "Building engineering", "Materials science", "Civil engineering", "nan", "Materials degradation", "Architecture" ]
43,744,246
https://en.wikipedia.org/wiki/The%20Machine%20%28computer%20architecture%29
The Machine is an experimental computer made by Hewlett Packard Enterprise. It was created as part of a research project to develop a new type of computer architecture for servers. The design focused on a “memory centric computing” architecture, where NVRAM replaced traditional DRAM and disks in the memory hierarchy. The NVRAM was byte addressable and could be accessed from any CPU via a photonic interconnect. The aim of the project was to build and evaluate this new design. Hardware overview The Machine was a computer cluster with many individual nodes connected over a memory fabric. The fabric interconnect used VCSEL-based silicon photonics with a custom chip called the X1. Access to memory is non-uniform and may include multiple hops. The Machine was envisioned to be a rack-scale computer initially with 80 processors and 320 TB of fabric attached memory, with potential for scaling to more enclosures up to 32 ZB. The fabric attached memory is not cache coherent and requires software to be aware of this property. Since traditional locks need cache coherency, hardware was added to the bridges to do atomic operations at that level. Each node also has a limited amount of local private cache-coherent memory (256 GB). Storage and compute on each node had completely separate power domains. The whole fabric attached memory of The Machine is too large to be mapped into a processor's virtual address space (which was 48 bits wide). A way is needed to map windows of the fabric attached memory into processor memory. Therefore, communication between each node SoC and the memory pool goes through an FPGA-based “Z-bridge” component that manages memory mapping of the local SoC to the fabric attached memory. The Z-bridge deals with two different kinds of addresses: 53-bit logical Z addresses and 75-bit Z addresses, which allow addressing 8 PB and 32 ZB respectively. Each Z-bridge also contained a firewall to enforce access control. The interconnect protocol was developed in-house and known as Next Generation Memory Interconnect (NGMI). This protocol evolved into the open Gen-Z standard. The Z-bridge connects to the SoC using PCIe, avoiding major software changes. A half rack prototype of the machine was unveiled at HPE Discover in London in 2016. Each node contained ARMv8-A based Broadcom/Cavium ThunderX2 SoCs. In total there were 40 32-core SoCs. Due to the unavailability of adequate memristor-based NVRAM or phase-change memory, the prototype used 160 TB of battery-backed DRAM. Despite this setback, software architect Keith Packard said this "can be used to prove the other parts of the design before switching". According to The Register, HPE's partnership with SK Hynix to develop memristor-based NVRAM ran into funding and directional problems and they were working with Sandisk on Resistive RAM (ReRAM) for The Machine. According to The Next Platform, HPE considered switching to Intel Optane DIMMs "when production quantities [of the DIMMs] are available on the market". The Next Platform estimated the rack prototype to consume 24 kW to 36 kW of power. Software overview Two major software projects were created for the Machine. The first was an experimental version of Linux called Linux++, with all the necessary enhancements to configure the hardware and work with traditional programming models. This included bridge configuration, access control and mapping using the DAX subsystem.
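The capacities quoted for the Z-bridge address formats follow directly from byte addressability and powers of two; a quick check (binary prefixes are assumed here):

# Capacity implied by a byte-addressable address width of n bits: 2**n bytes.
def capacity_bytes(address_bits):
    return 2 ** address_bits

PIB = 2 ** 50   # pebibyte
ZIB = 2 ** 70   # zebibyte

print(capacity_bytes(53) / PIB)        # 8.0  -> the 8 PB quoted for 53-bit logical Z addresses
print(capacity_bytes(75) / ZIB)        # 32.0 -> the 32 ZB quoted for 75-bit Z addresses
print(capacity_bytes(48) / (2 ** 40))  # 256.0 TiB: the 48-bit processor virtual address space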
In parallel, a new operating system (OS) called Carbon was announced that would be designed from first principles to take full advantage of an NVRAM based computer. Primary workloads for The Machine included in-memory databases, Hadoop-style software, and real-time big data analytics. HPE claimed that a memory-driven computing design like The Machine could "improve speeds by up to 8000x compared to conventional systems". In the prototype system, the fabric attached memory of the system was organised by a "top of rack" management server component called The Librarian. The Librarian divided the memory into "shelves" of 8GB "books", and hardware protections could be configured on book boundaries. A fine grained 64KB "booklet" was also supported. The mapping of memory is handled by the OS, while the access controls for the memory are configured by the management infrastructure of The Machine system as a whole. Software needs to be aware that fabric attached memory reads can have synchronous errors whilst writes can have asynchronous errors. On the Linux system, when a memory error occurs the SIGBUS operating system signal is used. Programming model and data structure changes were also explored, including changes to thread libraries and heap data structures to be resilient to non-volatile memory failure modes. History A few years after HP’s re-discovery of the Memristor, the newly appointed CTO of HP, Martin Fink, created an HP Labs project to build a computer system based on memristors to tackle the slowing of Moore's law. He announced the project at HP’s Discover event in the summer of 2014. Some of the ideas of The Machine also came from Dragonhawk system designs. Three-quarters of HP Labs’s 200 staff were focused on the hardware and software of the machine. Speaking to Bloomberg, HP said it would commercialize The Machine within a few years, “or fall on its face trying.” Kirk Bresniker served as Chief Architect, and Keith Packard was hired to work on the Linux enhancements. Bdale Garbee was hired to manage open source development. In 2015, Hewlett-Packard split into two separate companies, HP Inc and Hewlett Packard Enterprise (HPE), with The Machine project assigned to the latter. In late 2016, Martin Fink retired as HPE CTO. Fink's retirement announcement also said that Hewlett Packard Labs staff would be moved into the Enterprise product group to "align our R&D work on The Machine with the business". By early 2017, Hewlett Packard Labs had a slide saying that the project's aim was “to demonstrate progress, not develop products” and they would “collaborate to deliver differentiating Machine value into existing architectures as well as disruptive architectures”. BleepingComputer said "In other words, The Machine is no longer a product in its own right. Instead it will provide technologies that will be used in other HPE products going forward." HPE restructured its pure R&D organization and placed it in the products group. Yahoo! Finance reported that the Machine prototype "remains years away from being commercially available". In 2018, HPE stated that the project had reached the stage where it needed commercial applications from customers in the next step of its evolution. References Computer architecture Supercomputers Non-volatile memory Silicon photonics devices
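The Librarian's book-keeping arithmetic follows directly from the figures quoted above; the sketch below assumes binary interpretations of GB and KB (1 GB = 2^30 bytes, 1 KB = 2^10 bytes), which is an assumption rather than something the article states.

# Assuming binary units; the round numbers suggest this is what was intended,
# but it is an assumption.
FABRIC_MEMORY = 320 * 2 ** 40      # 320 TB of fabric-attached memory in the prototype
BOOK = 8 * 2 ** 30                 # 8 GB "book"
BOOKLET = 64 * 2 ** 10             # 64 KB "booklet"

print(FABRIC_MEMORY // BOOK)       # 40960 books across the prototype rack
print(BOOK // BOOKLET)             # 131072 booklets per book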
The Machine (computer architecture)
[ "Technology", "Engineering" ]
1,426
[ "Supercomputers", "Computer engineering", "Supercomputing", "Computer architecture", "Computers" ]
43,747,856
https://en.wikipedia.org/wiki/Zone%20axis
Zone axis, a term sometimes used to refer to "high-symmetry" orientations in a crystal, most generally refers to any direction referenced to the direct lattice (as distinct from the reciprocal lattice) of a crystal in three dimensions. It is therefore indexed with direct lattice indices, instead of with Miller indices. High-symmetry zone axes through a crystal lattice, in particular, often lie in the direction of tunnels through the crystal between planes of atoms. This is because, as we see below, such zone axis directions generally lie within more than one plane of atoms in the crystal. Zone-axis indexing The translational invariance of a crystal lattice is described by a set of unit cell, direct lattice basis vectors (contravariant or polar) called a, b, and c, or equivalently by the lattice parameters, i.e. the magnitudes of the vectors, called a, b and c, and the angles between them, called α (between b and c), β (between c and a), and γ (between a and b). Direct lattice vectors have components measured in distance units, like meters (m) or angstroms (Å). A lattice vector is indexed by its coordinates in the direct lattice basis system and is generally placed between square brackets []. Thus a direct lattice vector [uvw] is defined as ua + vb + wc. Angle brackets 〈〉 are used to refer to a symmetrically equivalent class of lattice vectors (i.e. the set of vectors generated by an action of the lattice's symmetry group). In the case of a cubic lattice, for instance, 〈100〉 represents [100], [010], [001] and their negatives [1̄00], [01̄0] and [001̄], because each of these vectors is symmetrically equivalent under a 90 degree rotation about an axis. A bar over a coordinate is equivalent to a negative sign (e.g., [1̄00] denotes [−100]). The term "zone axis" more specifically refers to the direction of a direct-space lattice vector. For example, since the [120] and [240] lattice vectors are parallel, their orientations both correspond to the 〈120〉 zone of the crystal. Just as a set of lattice planes in direct space corresponds to a reciprocal lattice vector in the complementary space of spatial frequencies and momenta, a "zone" is defined as a set of reciprocal lattice planes in frequency space that corresponds to a lattice vector in direct space. The reciprocal space analog to a zone axis is a "lattice plane normal" or "g-vector direction". Reciprocal lattice vectors (one-form or axial) are Miller-indexed using coordinates in the reciprocal lattice basis instead, generally between round brackets () (similar to square brackets [] for direct lattice vectors). Curly brackets {} (not to be confused with a mathematical set) are used to refer to a symmetrically equivalent class of reciprocal lattice vectors, similar to angle brackets 〈〉 for classes of direct lattice vectors. Here, the reciprocal basis vectors are a* = (b × c)/Vc, b* = (c × a)/Vc, and c* = (a × b)/Vc, where the unit cell volume is Vc = a · (b × c) (· denotes a dot product and × a cross product). Thus a reciprocal lattice vector (hkl), namely ha* + kb* + lc*, has a direction perpendicular to a crystallographic plane and a magnitude equal to the reciprocal of the spacing between those planes, measured in spatial frequency units, e.g. of cycles per angstrom (cycles/Å). A useful and quite general rule of crystallographic "dual vector spaces in 3D", e.g. reciprocal lattices, is that the condition for a direct lattice vector [uvw] (or zone axis) to be perpendicular to a reciprocal lattice vector (hkl) can be written with a dot product as hu + kv + lw = 0. This is true even if, as is often the case, the basis vector set used to describe the lattice is not Cartesian.
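The perpendicularity condition above is easy to check numerically; the sketch below lists the low-index planes belonging to an example [110] zone. The choice of zone axis and the index range are illustrative only, and the test is valid in any crystal system because the direct and reciprocal indices live in mutually dual bases.

from itertools import product

def in_zone(hkl, uvw):
    """Zone condition: the plane (hkl) belongs to the zone [uvw]
    when h*u + k*v + l*w == 0."""
    h, k, l = hkl
    u, v, w = uvw
    return h * u + k * v + l * w == 0

zone = (1, 1, 0)   # an example zone axis [110]
low_index_planes = [p for p in product((-1, 0, 1), repeat=3)
                    if p != (0, 0, 0) and in_zone(p, zone)]
print(low_index_planes)   # e.g. (0, 0, 1), (1, -1, 0), (-1, 1, 1), ...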
Zone-axis patterns By extension, a [uvw] zone-axis pattern (ZAP) is a diffraction pattern taken with an incident beam, e.g. of electrons, X-rays or neutrons, traveling along a lattice direction specified by the zone-axis indices [uvw]. Because of their small wavelength λ, high-energy electrons used in electron microscopes have a very large Ewald sphere radius (1/λ), so that electron diffraction generally "lights up" diffraction spots with g-vectors (hkl) that are perpendicular to [uvw]. One result of this, as illustrated in the figure above, is that "low-index" zones are generally perpendicular to "low-Miller index" lattice planes, which in turn have small spatial frequencies (g-values) and hence large lattice periodicities (d-spacings). A possible intuition behind this is that, in electron microscopy, directing the beam down a low-index (and by association high-symmetry) zone axis tends to send the electrons along wide, easily visible tunnels between the columns of atoms in the crystal. See also Crystallography Dual basis Reciprocal lattice Miller index Diffraction Electron diffraction Transmission electron microscopy Footnotes External links International Tables for Crystallography Materials science Crystals Electron microscopy Crystallography Diffraction
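To make the "very large Ewald sphere radius" concrete, the sketch below evaluates the relativistically corrected electron wavelength and its reciprocal for an assumed accelerating voltage of 200 kV (a typical transmission electron microscope value, chosen here only for illustration).

from math import sqrt

# Physical constants (SI)
h  = 6.62607015e-34    # Planck constant, J s
m0 = 9.1093837015e-31  # electron rest mass, kg
e  = 1.602176634e-19   # elementary charge, C
c  = 2.99792458e8      # speed of light, m/s

def electron_wavelength_angstrom(voltage_volts):
    """Relativistically corrected de Broglie wavelength of an electron."""
    ev = e * voltage_volts
    p = sqrt(2 * m0 * ev * (1 + ev / (2 * m0 * c ** 2)))  # momentum
    return (h / p) * 1e10

lam = electron_wavelength_angstrom(200e3)  # 200 kV, an assumed typical value
print(lam)        # ~0.0251 Angstrom
print(1 / lam)    # Ewald sphere radius ~40 cycles/Angstrom, far larger than typical g-vector magnitudes (~1/d)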
Zone axis
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,052
[ "Electron", "Electron microscopy", "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Materials science", "Crystallography", "Diffraction", "Crystals", "Condensed matter physics", "nan", "Microscopy", "Spectroscopy" ]
42,332,421
https://en.wikipedia.org/wiki/Biomaterials%20%28journal%29
Biomaterials is a peer-reviewed scientific journal covering research on and applications of biomaterials. It is published by Elsevier. The editor-in-chief is Kam W. Leong (Columbia University) and David Williams is an honorary editor. The journal was established in 1980. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2022 impact factor of 14.0. See also Materials Today Acta Biomaterialia Materials Science and Engineering C References External links Elsevier academic journals Materials science journals Academic journals established in 1985 English-language journals
Biomaterials (journal)
[ "Materials_science", "Engineering" ]
130
[ "Materials science journals", "Materials science" ]
42,333,823
https://en.wikipedia.org/wiki/Dirac%20hole%20theory
Dirac hole theory is a theory in quantum mechanics, named after English theoretical physicist Paul Dirac, who introduced it in 1929. The theory posits that the continuum of negative energy states that are solutions to the Dirac equation is filled with electrons, and that the vacancies in this continuum (holes) are manifested as positrons with energy and momentum that are the negative of those of the state. The discovery of the positron in 1932 gave considerable support to the Dirac hole theory. While Enrico Fermi, Niels Bohr and Wolfgang Pauli were skeptical about the theory, other physicists, like Guido Beck and Kurt Sitte, made use of Dirac hole theory in alternative theories of beta decay. Gian-Carlo Wick extended Dirac hole theory to cover neutrinos, introducing the anti-neutrino as a hole in a neutrino Dirac sea. Pair production and annihilation Hole theory provides an alternative perspective on the processes of pair production and annihilation – when a photon of sufficient energy is incident upon an occupied state in the negative energy 'sea', it can excite an electron into the positive energy region, creating an observable electron while leaving a vacant state (hole) in the negative energy region – an anti-electron, or more commonly, a positron. Conversely, due to the principle of least action, the close proximity of an electron and a positron presents an opportunity for the electron to de-excite, releasing a photon and reducing the overall energy of the system – this is observationally identical to the process of annihilation. References Dirac equation Paul Dirac
Dirac hole theory
[ "Physics" ]
338
[ "Equations of physics", "Eponymous equations of physics", "Quantum mechanics", "Dirac equation", "Quantum physics stubs" ]
42,334,954
https://en.wikipedia.org/wiki/Aquaver
Aquaver is a cleantech company headquartered in Voorburg, Netherlands, with offices at the High Tech Campus Eindhoven. Aquaver is acknowledged to be the first company worldwide to develop commercial systems based on membrane distillation, a novel technology for water treatment. Technology The technology of the Aquaver systems is based on membrane distillation. Membrane distillation combines membrane separation and distillation, using hydrophobic membranes and differences in vapour pressure. The Vacuum Multi Effect Membrane Distillation (VMEMD) configuration used in Aquaver systems adds the advantages of low-temperature operation and multiple effects to the membrane distillation characteristics. Aquaver has collaborated with memsys and Philips to develop its water treatment systems. Applications Aquaver membrane distillation units are focused on desalination, industrial water treatment, and 'difficult-to-treat' waters. In February 2014 Aquaver commissioned in Gulhi, Maldives, the world's first desalination plant based on membrane distillation. The desalination plant makes use of the waste heat produced by the existing diesel generators, which provide electricity to the island, to power the water purification process. Aquaver is also participating with Abengoa and Masdar in a 7.9 million dollar project to develop an innovative desalination pilot plant in Ghantoot, on Abu Dhabi's border with Dubai. The desalination plant will have a capacity of 1,000 m3/d of desalted water, using a hybrid system consisting of reverse osmosis in combination with an innovative membrane distillation system, provided by Aquaver, to optimize the traditional reverse osmosis process. Awards Aquaver has received the Water Innovator of the Year 2013 award and the Frost & Sullivan 2014 European New Product Innovation Leadership Award for the development of its membrane distillation water treatment systems. Merger In 2015 Aquaver merged with memsys, which provides the membrane distillation modules used in Aquaver systems. The Aquaver management team, who started the company and brought membrane distillation to the market, has left to start other new ventures. See also Desalination Membrane distillation Sewage treatment Water pollution List of waste-water treatment technologies Industrial wastewater treatment References External links Companies based in South Holland Technology companies established in 2011 Dutch companies established in 2011 Sewerage Water technology
Aquaver
[ "Chemistry", "Engineering", "Environmental_science" ]
484
[ "Water technology", "Sewerage", "Environmental engineering", "Water pollution" ]
60,166,827
https://en.wikipedia.org/wiki/Ortho%20effect
The ortho effect is an organic chemistry phenomenon in which the presence of a chemical group at the ortho position (the 1,2-relationship) of a phenyl ring, relative to another functional group such as a carboxyl group, changes the chemical properties of the compound. It is caused by steric effects and bonding interactions, along with polar effects of the various substituents in a given molecule, resulting in changes in its chemical and physical properties. The ortho effect is associated with substituted benzene compounds. There are three main ortho effects in substituted benzene compounds: Steric hindrance causes benzoic acids substituted at the ortho position to become stronger acids. Steric inhibition of protonation causes ortho-substituted anilines to become weaker bases than their meta- and para-substituted isomers. In electrophilic aromatic substitution of disubstituted benzene compounds, steric effects determine the regioselectivity of the incoming electrophile. Ortho substituted benzoic acids When a substituent group is located ortho to the carboxyl group in a substituted benzoic acid, the compound becomes more acidic, surpassing unsubstituted benzoic acid. Generally ortho-substituted benzoic acids are stronger acids than their meta and para isomers. Mechanism of action When ortho substitution occurs in benzoic acid, steric hindrance causes the carboxyl group to twist out of the plane of the benzene ring. The twisting inhibits the resonance of the carboxyl group with the phenyl ring, leading to increased acidity of the carboxyl group. This increased acidity contrasts with the reduced acidity caused by destabilizing cross-conjugation; the destabilizing cross-conjugation causes the decreased acidity of benzoic acid compared to formic acid. pKa values The pKa values of various monosubstituted benzoic acids illustrate this ordering, with the ortho isomers generally the most acidic. Ortho substituted aniline When any group is present ortho to the amino group (NH2) in aniline, the basic character of the compound becomes weaker. For example, see the order of basicity of the following substituted anilines: p-Toluidine > m-Toluidine > Aniline > o-Toluidine Aniline > m-Nitroaniline > p-Nitroaniline > o-Nitroaniline Aniline > p-Haloaniline > m-Haloaniline > o-Haloaniline p-Aminophenol pKb=8.50 > o-Aminophenol pKb=9.28 > Aniline pKb=9.38 > m-Aminophenol pKb=9.80 The protonation of a substituted aniline is inhibited by steric hindrance. When protonated, the nitrogen in the amino group changes its orbital hybridization from sp2 to sp3, becoming non-planar. This leads to steric hindrance between the ortho substituent and the hydrogen atoms of the amino group, reducing the stability of the conjugate acid and consequently decreasing the basicity of the substituted aniline. Electrophilic aromatic substitution of disubstituted benzene compounds The ortho effect also occurs when a meta-directing group is positioned meta to an ortho–para-directing group: a new substituent introduced into the molecule then tends to preferentially occupy the position ortho to the meta-directing group rather than the para position. Currently, there is no definitive explanation for the ortho effect, but it is hypothesized that there may be intramolecular assistance from the meta-directing group influencing the positioning of the incoming substituent.
For example, the electrophilic aromatic nitration of 1-methyl-3-nitrobenzene affords 4-methyl-1,2-dinitrobenzene and 1-methyl-2,3-dinitrobenzene in 60.1% and 28.4% yields, respectively. In contrast, 2-methyl-1,4-dinitrobenzene is isolated in only 9.9% yield. As witnessed in the above example, when a π-acceptor substituent (πAS) is meta to a π-donor substituent (πDS), the electrophilic aromatic nitration occurs ortho to the πAS rather than para. Similar results were also observed in the nitration of 3-methylbenzoic acid, in which 5-methyl-2-nitrobenzoic acid and 3-methyl-2-nitrobenzoic acid were obtained as the major compounds, whereas 3-methyl-4-nitrobenzoic acid was reported as a minor compound. Similarly, in the nitration of 3-bromobenzoic acid, 5-bromo-2-nitrobenzoic acid (83% yield) was obtained as the major product and 3-bromo-2-nitrobenzoic acid (13% yield) as the minor product. Notably, the potential isomer 3-bromo-4-nitrobenzoic acid was not detected. Diels-Alder reactions The ortho effect also occurs in Diels-Alder reactions, where Z-substituted dienophiles react with 1-substituted butadienes to give 3,4-disubstituted cyclohexenes, independent of the nature of the diene substituents. References External links Supplemental Topics § The Ortho Effect – Department of Chemistry, Michigan State University Organic chemistry Benzene Isomerism Chemical bonding
Ortho effect
[ "Physics", "Chemistry", "Materials_science" ]
1,219
[ "Stereochemistry", "Condensed matter physics", "nan", "Isomerism", "Chemical bonding" ]
60,167,773
https://en.wikipedia.org/wiki/Nuclear%20fallout%20effects%20on%20an%20ecosystem
This article uses Chernobyl as a case study of nuclear fallout effects on an ecosystem. Chernobyl Officials used hydrometeorological data to create an image of what the potential nuclear fallout looked like after the Chernobyl disaster in 1986. Using this method, they were able to determine the distribution of radionuclides in the surrounding area, and discovered emissions from the nuclear reactor itself. These emissions included fuel particles, radioactive gases, and aerosol particles. The fuel particles were due to the violent interaction between hot fuel and the cooling water in the reactor, and attached to these particles were cerium, zirconium, lanthanum, and strontium. All of these elements have low volatility, meaning they prefer to stay in a liquid or solid state rather than passing into the atmosphere as vapor. Cerium and lanthanum can cause irreversible damage to marine life by deteriorating cell membranes, affecting reproductive capability, and crippling the nervous system. Strontium in its stable isotopes is harmless; however, when the radioactive isotope strontium-90 is released into the atmosphere it can lead to anemia, oxygen shortages, and cancers. The aerosol particles carried traces of tellurium, a toxic element which can harm developing fetuses, along with caesium, an unstable, highly reactive, and toxic element. Also found in the aerosol particles was enriched uranium-235. The most prevalent radioactive gas detected was radon, a noble gas with no odor, color, or taste that can travel into the atmosphere or into bodies of water. Radon is directly linked to lung cancer and is the second leading cause of lung cancer in the general population. All of these nuclides diminish only through radioactive decay, which is characterized by a half-life. The half-lives of the nuclides previously discussed range from mere hours to many thousands of years. The shortest-lived of those mentioned is zirconium-95, an isotope of zirconium with a half-life given as 1.4 hours; the longest-lived is plutonium-239, with a half-life of approximately 24,000 years. While the initial release of these particles and elements was rather large, there were multiple low-level releases for at least a month after the initial incident at Chernobyl. Local effects Surrounding flora and fauna were drastically affected by Chernobyl's explosions. Coniferous trees, which are plentiful in the surrounding landscape, were heavily affected due to their biological sensitivity to radiation exposure. Within days of the initial explosion many pine trees in a 4 km radius died, with lessening yet still harmful effects observed up to 120 km away. Many trees experienced interruptions in their growth, reproduction was crippled, and there were multiple observations of morphological changes. Hot particles also landed on these forests, burning holes and hollows into the trees. The surrounding soil was covered in radionuclides, which prevented substantial new growth. Deciduous trees such as aspen, birch, alder, and oak are more resistant to radiation exposure than coniferous trees; however, they are not immune. Damage seen on these trees was less severe than that observed on the pine trees. Much of the new deciduous growth suffered from necrosis (death of living tissue), and foliage on existing trees turned yellow and fell off. The resilience of deciduous trees has allowed them to bounce back, and they have populated areas where many coniferous trees, mostly pines, once stood.
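The half-lives quoted above translate into remaining fractions through the usual exponential decay law N(t) = N0 · 2^(−t/T½); the sketch below plugs in the two half-lives mentioned in the text, purely for illustration.

def remaining_fraction(elapsed, half_life):
    """Fraction of a radionuclide remaining after time `elapsed`,
    both arguments in the same unit, via N(t) = N0 * 2**(-t / T_half)."""
    return 2 ** (-elapsed / half_life)

# Fractions remaining one week (168 h) after release, for the two very
# different half-lives quoted in the text above:
print(remaining_fraction(168, 1.4))            # 1.4-hour nuclide: ~8e-37, essentially gone
print(remaining_fraction(7, 24000 * 365.25))   # 24,000-year nuclide after 7 days: ~0.9999994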
Herbaceous vegetation was also affected by the radiation fallout. There were many observations of color changes in the cells, chlorophyll mutation, lack of flowering, growth depression, and vegetation death. Mammals are a highly radio-sensitive class, and observations of mice in the surrounding area of Chernobyl showed a population decrease. Embryonic mortality increased as well; however, migration of rodents from surrounding areas brought the population numbers back up. Among the small rodents affected, increasing problems in the blood and liver were observed, which correlate directly with radiation exposure. Issues such as liver cirrhosis, enlarged spleens, increased peroxide oxidation of tissue lipids, and decreased enzyme levels were all present in the rodents exposed to the radioactive releases. Larger wildlife did not fare much better. Although most livestock were relocated a safe distance away, horses and cattle left on an isolated island 6 km from the Chernobyl reactor were not spared: hyperthyroidism, stunted growth, and death plagued the animals left on the island. The loss of the human population from the area around Chernobyl, sometimes referred to as the "exclusion zone", has allowed the ecosystems to recover. The use of herbicides, pesticides, and fertilizers has decreased because there is less agricultural activity. The biodiversity of plants and wildlife has increased, and animal populations have also increased. However, radiation continues to impact the local wildlife. Global effects Factors such as rainfall, wind currents, and the initial explosions at Chernobyl themselves caused the nuclear fallout to spread throughout Europe and Asia, as well as to parts of North America. Not only did the various radioactive elements previously mentioned spread, but there were also problems with what are known as hot particles. The Chernobyl reactor did not just expel aerosol particles, fuel particles, and radioactive gases; there was an additional expulsion of uranium fuel fused together with radionuclides. These hot particles could spread for thousands of kilometers and could produce concentrated substances in the form of raindrops known as liquid hot particles. These particles were potentially hazardous, even in low-level radiation areas. The activity of an individual hot particle could be as high as 10 kBq, a substantial amount of radioactivity for a single particle. These liquid hot particle droplets could be absorbed in two main ways: ingestion through food or water, and inhalation. Evolutionary effects Mutated organisms themselves also have effects beyond the immediate area. Møller & Mousseau (2011) found that individuals carrying deleterious mutations are not selected out immediately but instead survive for many generations. As such, they are expected to have descendants far from the contaminated sites that produced them, spreading deleterious mutations into those populations and causing fitness decline. References Aftermath of war Aftermath of the Chernobyl disaster Environmental impact of nuclear power Nuclear chemistry Nuclear weapons Radiation health effects Radioactive contamination Radiobiology Radiological weapons Nuclear fallout
Nuclear fallout effects on an ecosystem
[ "Physics", "Chemistry", "Materials_science", "Technology", "Biology" ]
1,327
[ "Radiation health effects", "Radioactive contamination", "Aftermath of the Chernobyl disaster", "Nuclear chemistry", "Radiobiology", "Nuclear fallout", "Environmental impact of nuclear power", "nan", "Nuclear physics", "Radiation effects", "Radioactivity" ]
56,745,603
https://en.wikipedia.org/wiki/Passive%20autocatalytic%20recombiner
Passive autocatalytic recombiner (PAR) is a device that removes hydrogen from the containment of a nuclear power plant during an accident. Its purpose is to prevent hydrogen explosions. Recombiners come into action spontaneously as soon as the hydrogen concentration increases. They are passive devices because their operation does not require external energy. Hydrogen may be generated in a nuclear accident if the reactor fuel overheats and zirconium cladding of the fuel rods reacts chemically with steam. If the hydrogen is released from the reactor to the containment, it may get mixed with air and form a flammable or even explosive mixture. A hydrogen explosion could break the containment and cause radioactive materials to be released to the environment. Recombiners aim at removing hydrogen and thereby preventing explosions. Inside a recombiner there are plates or pellets that are coated with platinum or palladium catalyst. On the surface of the catalyst, hydrogen and oxygen molecules react chemically at low temperature and low hydrogen concentration. The reaction generates steam. The reaction starts spontaneously when the hydrogen concentration reaches 1–2 percent. Burning of hydrogen in air requires at least 4 percent hydrogen concentration, and even higher for an explosion. Therefore, a recombiner is able to remove hydrogen from the containment before a flammable concentration is reached. A recombiner is a box that is open from the bottom and from the top. The catalyst is located at the lower part of the box. The reaction of hydrogen and oxygen on the catalyst surface generates heat, and temperature in the recombiner reaches hundreds of degrees Celsius. Hot steam is lighter than the air in the containment, so buoyancy is caused inside the recombiner, much like in a chimney. This causes a strong airflow through the recombiner, feeding hydrogen and oxygen from the containment to the device. Hundreds of kilograms of hydrogen may be generated in a few hours during a severe reactor accident. The most efficient recombiner made by Framatome (formerly Areva) removes slightly over five kilograms of hydrogen per hour when the hydrogen concentration is four percent. Therefore, many recombiners are needed. For example, the containment of Olkiluoto 3 EPR in Finland has 50 recombiners. Manufacturers of passive autocatalytic recombiners include Framatome, SNC-Lavalin (formerly Atomic Energy of Canada Ltd, AECL), and German Siempelkamp-NIS. See also Hydrogen safety References Catalysis Nuclear power plant components Nuclear safety and security Hydrogen
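To see why "many recombiners are needed", one can compare the quoted figures directly. The release scenario in the sketch below (300 kg of hydrogen over 3 hours) is an assumption made only for illustration; it is not a figure from this article or from any plant safety analysis.

# Illustrative sizing estimate; the 300 kg / 3 h release scenario is assumed.
hydrogen_released_kg = 300.0
release_duration_h = 3.0
removal_rate_per_unit_kg_per_h = 5.0   # "slightly over five kilograms per hour" at 4% H2

generation_rate = hydrogen_released_kg / release_duration_h       # 100 kg/h
units_needed = generation_rate / removal_rate_per_unit_kg_per_h   # 20 units just to keep pace

print(generation_rate, units_needed)
# Real installations use many more units (e.g. 50 at Olkiluoto 3), partly because
# the quoted removal rate applies only at a 4 percent hydrogen concentration.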
Passive autocatalytic recombiner
[ "Chemistry" ]
529
[ "Catalysis", "Chemical kinetics" ]
56,746,642
https://en.wikipedia.org/wiki/%CE%91-Galactosylceramide
α-Galactosylceramide (α-GalCer, KRN7000) is a synthetic glycolipid derived from structure-activity relationship studies of galactosylceramides isolated from the marine sponge Agelas mauritianus. α-GalCer is a strong immunostimulant and shows potent anti-tumour activity in many in vivo models. Immunostimulatory properties α-GalCer is a potent activator of iNKT cells, and a model CD1d antigen. The invariant T cell receptor of the iNKT cell is able to bind the CD1d:glycolipid complex leading to iNKT cell activation in both mice and humans. Adjuvant activity In combination with a peptide antigen, α-GalCer is able to stimulate a strong immune response against the epitope. The CD1d:glycolipid:TCR interaction activates the iNKT cell which can then activate the dendritic cell. This causes the release of a range of cytokines and licenses the dendritic cell to activate a peptide-specific T cell response. This adjuvant acts through this cellular interaction, rather than through classic pattern recognition receptor pathways. References Immunology Lipids
Α-Galactosylceramide
[ "Chemistry", "Biology" ]
262
[ "Organic compounds", "Biomolecules by chemical classification", "Immunology", "Lipids" ]
56,747,102
https://en.wikipedia.org/wiki/Sergei%20Tretyakov%20%28scientist%29
Sergei Anatolyevich Tretyakov (born in 1956) is a Russian-Finnish scientist specializing in electromagnetic field theory, complex media electromagnetics and microwave engineering. He is currently a professor at the Department of Electronics and Nanoengineering, Aalto University (formerly Helsinki University of Technology), Finland. His main research area in recent years has been metamaterials and metasurfaces, from fundamentals to applications. He was the president of the European Virtual Institute for Artificial Electromagnetic Materials and Metamaterials ("Metamorphose VI") and general chair of the Metamaterials Congresses from 2007 to 2013. He is a fellow or member of many scientific associations, such as the IEEE, URSI, the Electromagnetics Academy, and OSA. He is also an Honorary Doctor of Francisk Skorina Gomel State University. Education Sergei Tretyakov received the Engineer's degree and the Candidate of Sciences (PhD) degree in radiophysics from the Leningrad Polytechnic Institute, USSR, in 1980 and 1987, respectively. In 1994 he was granted a Docent Diploma by the Ministry of Education of the Russian Federation, and in the following year he received the Doctor of Sciences degree from St. Petersburg State Technical University, Russia. Tretyakov obtained his Full Professor Diploma, granted by the Ministry of Education of Russia, in 1997. Career Sergei Tretyakov's professional career started in 1980 at the Radiophysics Department of the Leningrad Polytechnic Institute, where he was an engineer and junior researcher until 1986. In 1986 he was promoted to the position of assistant professor and in 1989 to the position of associate professor. In October 1988, Tretyakov made a 10-month research visit to Helsinki University of Technology (from 2010, Aalto University) under the exchange program between the Ministries of Education of Finland and the Soviet Union. During the following 8 years, Tretyakov was affiliated with both the Electromagnetics Laboratory of Helsinki University of Technology, where he worked with Ismo Lindell and Ari Sihvola, and St. Petersburg State Technical University, where he worked with Constantin Simovski. Tretyakov visited CEA Cesta (a French Alternative Energies and Atomic Energy Commission research centre), also being affiliated with the Laboratory of Wave-Material Interactions at the University of Bordeaux, for 6 months in 1994 as a visiting scientist. In 1996, he was promoted to a full professor position at St. Petersburg State Technical University, where he also became director of the Complex Media Electromagnetics Laboratory. From January 1999 until July 2000 Tretyakov was a visiting professor in the Electromagnetics Laboratory of Helsinki University of Technology, and in August 2000 he moved to Helsinki University of Technology as a full professor of Radio Engineering. Later, as a visiting professor, he visited the Abbe Center of Photonics at Friedrich Schiller University Jena, Germany, during June–July 2013, and the Department of Photonics Engineering at the Technical University of Denmark during January–April 2013. He has supervised 13 doctors of science. Research Tretyakov has authored or co-authored more than 280 papers in refereed journals, 5 books, and 17 book chapters. Tretyakov's research career started with his diploma thesis under the supervision of Prof. V.A. Rozov. The thesis was devoted to the problem of diffraction at an edge of dense planar arrays of metal wires, structures now referred to as metasurfaces or two-dimensional metamaterials.
During his doctoral studies, Tretyakov worked on ferrite-based anisotropic layered structures under the supervision of Prof. M.I. Kontorovich. The first research visit to Helsinki University of Technology profoundly influenced his research interests, shifting them towards the novel and very promising direction of complex electromagnetic materials (now called metamaterials). Since then Tretyakov has worked actively in this research direction, with the main contributions listed below. Electromagnetics of chiral and general bianisotropic media Tretyakov made important contributions to the study of bianisotropic media. Together with co-authors, he developed the general theory of electromagnetic wave interactions with bianisotropic materials and layers. Moreover, Tretyakov proposed and experimentally characterized the first non-reciprocal bianisotropic scatterers of two types: the so-called Tellegen scatterer (named after Bernard D. H. Tellegen, who suggested the gyrator as a circuit element with equivalent electromagnetic response) and the artificial "moving" scatterer (a composite based on such scatterers emulates the response of a truly moving medium). In 1997, Tretyakov and his colleagues demonstrated that chiral effects (optical rotation and circular dichroism) can be achieved even with an infinitely thin composite layer without broken mirror symmetry. This effect was subsequently named planar chirality and was independently discovered by the team of Nikolay I. Zheludev in 2003. Chiral nihility and negative refractive index The possibility of a backward-wave medium, in which electromagnetic waves propagate with anti-parallel phase and group velocities, was suggested by several scientists throughout the twentieth century: Arthur Schuster, Horace Lamb, Leonid Mandelstam, Victor Veselago, and others. However, due to the absence of materials with such properties in nature, wide interest in backward-wave media arose only in the early 2000s, when the team of David R. Smith experimentally demonstrated the first negative-index metamaterial. In 2003, Tretyakov and colleagues suggested an alternative way to achieve backward waves by using bianisotropic chiral materials. In this case it is not necessary to engineer negative permittivity and permeability; instead, one only has to ensure a sufficiently strong chiral response of the material. In the extreme case of so-called chiral nihility (when both the relative permittivity and the permeability are much smaller than the chirality parameter), the two eigenwaves are "forward" and "backward" circularly polarized waves with equal phase velocities. The existence of backward waves in chiral media was independently suggested by John Pendry in 2004. Broadband cloaking of cylindrical objects Inspired by the idea of transformation-optics-based electromagnetic cloaking, Tretyakov's team developed an alternative realization of the same effect for cylindrical objects. In contrast to previous designs, Tretyakov's cloaking device exhibits significantly increased bandwidth and a lower amount of dissipation loss. Moreover, it does not require the use of exotic metamaterials with gradient permittivity and permeability, but is instead based on conducting plates with a simple geometry. Strong spatial dispersion in wire media In 2003, Tretyakov's group demonstrated that a dense array of metal wires (a wire medium) generally exhibits a strong nonlocal response (spatial dispersion), i.e. it cannot be described by the usual local material parameters such as permittivity.
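The backward-wave condition in isotropic chiral media mentioned above can be stated compactly. The following is a standard textbook form of the dispersion relation rather than an expression taken from Tretyakov's paper, and the sign convention for the chirality parameter κ varies between authors.

% Wavenumbers of the two circularly polarized eigenwaves in an isotropic
% chiral medium with relative parameters (epsilon, mu) and chirality kappa.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  k_{\pm} = k_0\left(\sqrt{\varepsilon\mu} \pm \kappa\right),
\]
so that for $\kappa > \sqrt{\varepsilon\mu}$ one eigenwave acquires a negative
effective refractive index, i.e.\ it is a backward wave. In the limit of
chiral nihility, $\varepsilon,\mu \to 0$ with $\kappa \neq 0$, the two
wavenumbers reduce to $k_{\pm} \approx \pm k_0\kappa$: forward and backward
circularly polarized waves with equal phase velocities.
\end{document}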
The property of strong spatial dispersion enables the use of wire media for subwavelength imaging and transmission of images over long distances. Superlensing The concept of the superlens, introduced by John Pendry in 2000 as an extension of the work done by Victor Veselago, showed the theoretical possibility of achieving optical resolution well below the wavelength. In 2003, Stanislav Maslovski and Sergei Tretyakov showed that an alternative to Pendry's device can be constructed using layers that impose the necessary boundary conditions at two parallel planes in free space. Later, in 2004, Tretyakov and co-authors explored the necessary electromagnetic properties of the layers and confirmed the effect with experiments. Constitutive parameters of metamaterials By definition, metamaterials are realized as lattices whose periodicity is assumed to be much smaller than the wavelength. However, it is important that, though small, the periodicity is not negligible with respect to the wavelength. For this reason, if one formally introduces constitutive parameters for such a regime, they will not be measurable response functions, and it will not be possible to use them for a sample of other dimensions or for a sample excited in another way. In other words, such formally introduced material parameters cannot satisfy the conditions of locality. In 2007, Tretyakov and colleagues explained the physical meaning of such calculated material parameters, which differs from the meaning of the local constitutive parameters. High-impedance surfaces and metasurfaces High Impedance Surfaces (HIS), also known as Artificial Magnetic Conductors (AMC), are artificial structures designed by applying special textures to a conducting surface. In a narrow band of frequencies, these structures have very high impedances and can be used as ground planes for novel low-profile antennas and other electromagnetic structures. In 2008, Tretyakov and colleagues developed analytical formulas for the calculation of the grid impedance of electrically dense arrays of strips and square patches and their application to HIS. Tretyakov also made an important contribution to clarifying the role of spatial dispersion in the mushroom structure in 2009. This work demonstrated that, under some conditions, spatial dispersion is suppressed. More recently, he has worked on the modelling and applications of thin composite layers with engineered electromagnetic properties (metasurfaces), in particular developing approaches to full control of reflected and transmitted waves. Awards and recognition Honorary Doctor, Francisk Skorina Gomel State University (Belarus) President, European Virtual Institute for Artificial Electromagnetic Materials and Metamaterials ("Metamorphose VI"), International Association of European universities, 2007 – 2013 General chair, International Congress Series on Advanced Electromagnetic Materials in Microwaves and Optics (Metamaterials), 2007-2013 Founder and Chairman, St.
Petersburg IEEE ED/MTT/AP Chapter, 1995-1998 Deputy Member, URSI (International Union of Radio Science) Finnish National Committee, from 2006; Individual member of URSI since 2018 Coordinator, FP6 Network of Excellence Metamorphose, 2004-2008 Fellow, the Institute of Electrical and Electronics Engineers (IEEE) Fellow, the Electromagnetics Academy (USA) Senior member, Optical Society of America (OSA) Member, European Microwave Association Member, Finnish Academy of Technical Sciences (Teknillisten Tieteiden Akatemia) Member, Tenure-track Committee, Aalto University School of Electrical Engineering, since 2011 Member, Expert Advisory Group for Nanosciences, Nanotechnologies, Materials and New Production Technologies (European Commission, 7th Framework Programme), 2007 – 2011 Member, European Microwave Conferences Management Committee, 2000-2002 Steering Committee Member, European Doctoral Degree Programmes on Metamaterials EUPROMETA Important monographs "Analytical Modeling in Applied Electromagnetics" "Electromagnetics of bi-anisotropic materials: Theory and applications" "Modern Electromagnetic Scattering Theory with Applications" "Electromagnetic waves in chiral and bi-isotropic media" References Metamaterials scientists Living people 1956 births Engineers from Saint Petersburg Peter the Great St. Petersburg Polytechnic University alumni Academic staff of Aalto University Finnish scientists Fellows of the IEEE Finnish people of Russian descent Naturalized citizens of Finland Russian-speaking Finns
Sergei Tretyakov (scientist)
[ "Materials_science" ]
2,261
[ "Metamaterials scientists", "Metamaterials" ]
56,747,378
https://en.wikipedia.org/wiki/Resistive%20pulse%20sensing
Resistive pulse sensing (RPS) is the generic, non-commercial term given for the well-developed technology used to detect, and measure the size of, individual particles in fluid. First invented by Wallace H. Coulter in 1953, the RPS technique is the basic principle behind the Coulter principle, which is a trademarked term. Resistive pulse sensing is also known as the electrical zone sensing technique, reflecting its fundamentally electrical nature, which distinguishes it from other particle sizing technologies such as the optically based dynamic light scattering (DLS) and nanoparticle tracking analysis (NTA). An international standard for the use of the resistive pulse sensing technique has been developed by the International Organization for Standardization. Construction and operation The basic design principle underlying resistive pulse sensing is shown in Fig. 1. Individual particles, suspended in a conductive fluid, flow one at a time through a constriction. The fluids most commonly used are water containing some amount of dissolved salts, sufficient to carry an electrical current. The salinity levels of seawater, or of a wide range of concentrations of phosphate-buffered saline, are easily sufficient for this purpose, with electrical conductivity in the mS-S range and salt concentrations of order 1 percent. Typical tap water often contains sufficient dissolved minerals to conduct sufficiently for this application as well. Electrical contact is made with the fluid using metal electrodes, in the best case using platinum or other low-electrode-potential metals, as are found in electrochemical cell constructions. Biasing the electrodes with an electrical potential of order 1 volt will cause an electrical current to flow through the fluid. If the device is properly designed, the electrical resistance of the constriction will dominate the total electrical resistance of the circuit. Particles that flow through the constriction while the electrical current is being monitored will cause an obscuration of that current, resulting in an increase in the voltage drop between the two electrodes. In other words, the particle causes a change in the electrical resistance of the constriction. The change in the electrical resistance as a particle passes through a constriction is shown schematically in Fig. 2. Theory of operation The quantitative relationship between the measured change in electrical resistance and the size of the particle that caused that change was worked out by De Blois and Bean in 1970. De Blois and Bean found the very simple result that the resistance change is proportional to the ratio of the particle volume to the effective volume of the constriction, with a proportionality factor that depends on the detailed geometry of the constriction and the electrical conductivity of the working fluid. Hence, by monitoring the electrical resistance as indicated by changes in the voltage drop across the constriction, one can count particles, as each increase in resistance indicates passage of a particle through the constriction, and one can measure the size of each particle, as the magnitude of the resistance change during the particle passage is proportional to that particle's volume. As one can usually calculate the volumetric flow rate of fluid through the constriction, controlled externally by setting the pressure difference across the constriction, one can then calculate the concentration of particles.
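A minimal sketch of the sizing logic that follows from the De Blois and Bean result described above. The geometry factor is set to 1 and the pore dimensions are invented for the example; a real instrument would use the calibrated factor for its particular constriction geometry and fluid conductivity.

from math import pi

# Illustrative pore geometry (assumed): a cylindrical constriction
pore_diameter_um = 2.0
pore_length_um = 10.0
pore_volume_um3 = pi * (pore_diameter_um / 2) ** 2 * pore_length_um

def particle_diameter_um(delta_R_over_R, shape_factor=1.0):
    """Estimate a spherical particle's diameter from the relative resistance
    change, using delta_R/R ~ shape_factor * (particle volume / pore volume).
    The unit shape factor is a simplifying assumption."""
    particle_volume = delta_R_over_R * pore_volume_um3 / shape_factor
    return (6 * particle_volume / pi) ** (1 / 3)

print(particle_diameter_um(1e-3))   # a 0.1% pulse -> ~0.4 um particle for this pore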
With a large enough number of particle transients to provide adequate statistical significance, the concentration as a function of particle size, also known as the concentration spectral density, with units of number per volume of fluid per volume of particle, can be calculated. Minimum detectable size and dynamic range Two important considerations when evaluating a resistive pulse sensing (RPS) instrument are the minimum detectable particle size and the dynamic range of the instrument. The minimum detectable size is determined by the volume of the constriction, the voltage difference applied across that constriction, and the noise of the first-stage amplifier used to detect the particle signal. In other words, one must evaluate the minimum signal-to-noise ratio of the system. The minimum particle size can be defined as the size of the particle that generates a signal whose magnitude is equal to the noise integrated over the same frequency bandwidth as the signal. The dynamic range of an RPS instrument is set at its upper end by the diameter of the constriction, as that is the maximum size of particle that can pass through the constriction. One can also instead choose a somewhat smaller maximum, perhaps setting it to 70 percent of this maximum volume. The dynamic range is then equal to the ratio of the maximum particle size to the minimum detectable size. This ratio can be quoted either as the ratio of the maximum to minimum particle volume, or as the ratio of the maximum to minimum particle diameter (the cube root of the volume ratio). Microfluidic resistive pulse sensing (MRPS) The original Coulter counter was designed using a special technology to fabricate small pores in glass volumes, but the expense and complexity of fabricating these elements mean they become a semi-permanent part of the analytical RPS instrument. This also limited the minimum constriction diameter that could be reliably fabricated, making the RPS technique challenging to use for particles below roughly 1 micron in diameter. There was therefore significant interest in applying the fabrication techniques developed for microfluidic circuits to RPS sensing. This translation of RPS technology to the microfluidic domain enables very small constrictions, well below effective diameters of 1 micron; this extends the minimum detectable particle size to the deep sub-micron range. Using microfluidics technology also allows the use of inexpensive cast plastic or elastomer parts for defining the critical constriction component, which can also be made disposable. The use of a disposable element eliminates concerns about sample cross-contamination and obviates the need for time-consuming cleaning of the RPS instrument. Scientific advances demonstrating these capabilities have been published in the scientific literature, for example by Kasianowicz et al., Saleh and Sohn, and Fraikin et al. Together these illustrate a variety of methods to fabricate microfluidic or lab-on-a-chip versions of the Coulter counter technology. References Cell culture techniques Microfluidics
Resistive pulse sensing
[ "Chemistry", "Materials_science", "Biology" ]
1,273
[ "Biochemistry methods", "Microfluidics", "Cell culture techniques", "Microtechnology" ]
56,748,043
https://en.wikipedia.org/wiki/BioSimGrid
BioSimGrid was a project to make data sets from computer simulations in the field of modelling biological systems, particularly biomolecular structures, more accessible to researchers. The project began in 2004 and had been halted by 2009. In 2004 its designers presented the concept of the project as a "protein data bank extended in time". Other developers presented a web portal for accessing data in the project, and other designers described the project as efficient. A review in 2006 described BioSimGrid as a model project for making data from research more open. BioSimGrid contributors accepted a grant from the National Institutes of Health in 2007. References External links biosimgrid.org (now defunct) and a file archive at SourceForge Digital libraries Online databases Bioinformatics
BioSimGrid
[ "Engineering", "Biology" ]
154
[ "Bioinformatics", "Biological engineering" ]
56,750,253
https://en.wikipedia.org/wiki/Hayaatun%20Sillem
Dr. Hayaatun Sillem (née Is’harc) is the chief executive officer (CEO) of the Royal Academy of Engineering. Education Sillem grew up in South Africa. She attended Godolphin and Latymer School. She earned a master's degree in biochemistry from the University of Oxford in 1998. She completed a PhD, funded by Cancer Research UK, at University College London in 2002, investigating the JAK-STAT signaling pathway under the supervision of Ian M. Kerr. Career Sillem joined the Royal Academy of Engineering in 2002 as an Engineering Policy Advisor "despite, if I’m honest, not knowing anything about engineering or policy”, she says. In 2004 she became a Committee Specialist to the Science and Technology Select Committee, and later served as a Specialist Adviser to the House of Commons Science & Technology Committee. She joined the Department for International Development in 2005. In 2006, Sillem rejoined the Royal Academy of Engineering as Head of International Activities. She led the Academy's partnership with Africa. In this role she published Engineering Change: Towards a sustainable future in the developing world. She went on to publish Engineers for Africa: Identifying engineering capacity needs in Sub-Saharan Africa, a summary report. The report identified the capacity needs of engineering that are felt across Sub-Saharan Africa, and developed approaches to meeting these needs. Sillem was appointed Director of Programmes and Fellowship in 2011. She is interested in how science and engineering can help with humanitarian aid, and how engineering can drive international development. She published Investing in Innovation in 2015. In May 2016, Sillem was appointed Director of Strategy and Deputy Chief Executive of the Royal Academy of Engineering. In March 2017 she was appointed a Fellow of the Institution of Engineering and Technology. She spoke at the launch of Angela Saini's book Inferior: How Science Got Women Wrong and the New Research That’s Rewriting the Story in June 2017. Sillem co-founded the Royal Academy of Engineering Enterprise Hub. She hosted the 10th Young Arab Women Leaders STEM conference in London in December 2017. She was appointed CEO of the Royal Academy of Engineering in January 2018. She is a champion of the Government's Year of Engineering, looking to increase diversity amongst the UK's engineering workforce through the campaign This is Engineering. In 2019, Sillem was 31st in Computer Weekly's shortlist of the 50 Most Influential Women in UK Tech. She is a trustee of the London Transport Museum. She is a judge for the St Andrews Prize for the Environment. She has written for The Huffington Post. Awards and honours Sillem was appointed Commander of the Order of the British Empire (CBE) in the 2020 New Year Honours for services to international engineering. In 2021, Sillem received an Engineering and Physical Sciences Suffrage Science award. In 2022, Sillem was awarded an honorary doctorate in engineering from Newcastle University. References Living people Women chief executives Fellows of the Institution of Engineering and Technology British biochemists Women biochemists People educated at Godolphin and Latymer School Year of birth missing (living people) Commanders of the Order of the British Empire Alumni of the University of Oxford
Hayaatun Sillem
[ "Chemistry", "Engineering" ]
645
[ "Institution of Engineering and Technology", "Biochemists", "Fellows of the Institution of Engineering and Technology", "Women biochemists" ]
56,753,159
https://en.wikipedia.org/wiki/Sodium%201%2C3-dithiole-2-thione-4%2C5-dithiolate
Sodium 1,3-dithiole-2-thione-4,5-dithiolate is the organosulfur compound with the formula Na2C3S5, abbreviated Na2dmit. It is the sodium salt of the conjugate base of 4,5-bis(sulfanyl)-1,3-dithiole-2-thione. The salt is a precursor to dithiolene complexes and tetrathiafulvalenes. Reduction of carbon disulfide with sodium affords sodium 1,3-dithiole-2-thione-4,5-dithiolate together with sodium trithiocarbonate: 4 Na + 4 CS2 → Na2C3S5 + Na2CS3 Before the characterization of dmit2−, reduction of CS2 was thought to give tetrathiooxalate (Na2C2S4). The dianion C3S52− is purified as the tetraethylammonium salt of the zincate complex [Zn(C3S5)2]2−. This salt converts to the bis(thioester) upon treatment with benzoyl chloride: [N(C2H5)4]2[Zn(C3S5)2] + 4 C6H5COCl → 2 C3S3(SC(O)C6H5)2 + [N(C2H5)4]2[ZnCl4] Cleavage of the thioester with sodium methoxide gives sodium 1,3-dithiole-2-thione-4,5-dithiolate: C3S3(SC(O)C6H5)2 + 2 NaOCH3 → Na2C3S5 + 2 C6H5CO2Me Na2dmit undergoes S-alkylation. Heating solutions of Na2dmit gives the isomeric 1,2-dithioledithiolate. References Thiolates Alkene derivatives Sodium compounds
Sodium 1,3-dithiole-2-thione-4,5-dithiolate
[ "Chemistry" ]
432
[ "Thiolates", "Functional groups" ]
56,754,296
https://en.wikipedia.org/wiki/Fuglede%27s%20conjecture
Fuglede's conjecture is an open problem in mathematics proposed by Bent Fuglede in 1974. It states that every domain of R^d (i.e. every subset of R^d with positive finite Lebesgue measure) is a spectral set if and only if it tiles R^d by translation. Spectral sets and translational tiles Spectral sets in R^d A set Ω ⊂ R^d with positive finite Lebesgue measure is said to be a spectral set if there exists a set Λ ⊂ R^d such that the exponentials exp(2πi⟨λ, x⟩), λ ∈ Λ, form an orthogonal basis of L2(Ω). The set Λ is then said to be a spectrum of Ω and (Ω, Λ) is called a spectral pair. Translational tiles of R^d A set Ω ⊂ R^d is said to tile R^d by translation (i.e. Ω is a translational tile) if there exists a discrete set T ⊂ R^d such that the translates Ω + t, t ∈ T, cover R^d up to a set of measure zero, and the Lebesgue measure of (Ω + t) ∩ (Ω + t') is zero for all distinct t, t' in T. Partial results Fuglede proved in 1974 that the conjecture holds if Ω is a fundamental domain of a lattice. In 2003, Alex Iosevich, Nets Katz and Terence Tao proved that the conjecture holds if Ω is a convex planar domain. In 2004, Terence Tao showed that the conjecture is false on R^d for d ≥ 5. It was later shown by Bálint Farkas, Mihail N. Kolountzakis, Máté Matolcsi and Péter Móra that the conjecture is also false for d = 4 and d = 3. However, the conjecture remains open for d = 1 and d = 2. In 2015, Alex Iosevich, Azita Mayeli and Jonathan Pakianathan showed that an extension of the conjecture holds in Z_p × Z_p, where Z_p is the cyclic group of order p. In 2017, Rachel Greenfeld and Nir Lev proved the conjecture for convex polytopes in R^3. In 2019, Nir Lev and Máté Matolcsi settled the conjecture for convex domains affirmatively in all dimensions. References Conjectures
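The two definitions above combine into a single equivalence. The LaTeX display below is an informal restatement (Ω is the domain, Λ the candidate spectrum, T the set of translations), not a quotation from Fuglede's 1974 paper.

```latex
\[
\underbrace{\exists\,\Lambda\subset\mathbb{R}^{d}:\ \bigl\{e^{2\pi i\langle\lambda,x\rangle}\bigr\}_{\lambda\in\Lambda}
\ \text{is an orthogonal basis of}\ L^{2}(\Omega)}_{\Omega\ \text{is spectral}}
\;\Longleftrightarrow\;
\underbrace{\exists\,T\subset\mathbb{R}^{d}\ \text{discrete}:\ \bigcup_{t\in T}(\Omega+t)=\mathbb{R}^{d}
\ \text{with overlaps of measure zero}}_{\Omega\ \text{tiles}\ \mathbb{R}^{d}\ \text{by translation}}
\]
```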
Fuglede's conjecture
[ "Mathematics" ]
342
[ "Unsolved problems in mathematics", "Mathematical problems", "Conjectures" ]
56,755,814
https://en.wikipedia.org/wiki/Vulpecula%20in%20Chinese%20astronomy
The modern constellation Vulpecula lies across one of the quadrants symbolized by the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ), and Three Enclosures (三垣, Sān Yuán), that divide the sky in traditional Chinese uranography. The name of the western constellation in modern Chinese is 狐狸座 (hú li zuò), meaning "the fox constellation". Stars The map of the Chinese constellations in the area of Vulpecula consists of: See also Traditional Chinese star names Chinese constellations References External links 香港太空館研究資源 中國星區、星官及星名英譯表 天象文學 台灣自然科學博物館天文教育資訊網 中國古天文 中國古代的星象系統 Astronomy in China Vulpecula
Vulpecula in Chinese astronomy
[ "Astronomy" ]
181
[ "Astronomy in China", "Vulpecula", "Constellations", "History of astronomy" ]
55,328,384
https://en.wikipedia.org/wiki/Infinite-order%20hexagonal%20tiling
In 2-dimensional hyperbolic geometry, the infinite-order hexagonal tiling is a regular tiling. It has Schläfli symbol of {6,∞}. All vertices are ideal, located at "infinity", seen on the boundary of the Poincaré hyperbolic disk projection. Symmetry There is a half symmetry form, , seen with alternating colors: Related polyhedra and tiling This tiling is topologically related as a part of a sequence of regular polyhedra and tilings with vertex figure (6^n). See also Hexagonal tiling Uniform tilings in hyperbolic plane List of regular polytopes References External links Hyperbolic and Spherical Tiling Gallery Hyperbolic tilings Infinite-order tilings Isogonal tilings Isohedral tilings Hexagonal tilings Regular tilings
Infinite-order hexagonal tiling
[ "Physics" ]
168
[ "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Isohedral tilings", "Symmetry" ]
55,330,205
https://en.wikipedia.org/wiki/Proper%20generalized%20decomposition
The proper generalized decomposition (PGD) is an iterative numerical method for solving boundary value problems (BVPs), that is, partial differential equations constrained by a set of boundary conditions, such as Poisson's equation or Laplace's equation. The PGD algorithm computes an approximation of the solution of the BVP by successive enrichment. This means that, in each iteration, a new component (or mode) is computed and added to the approximation. In principle, the more modes obtained, the closer the approximation is to its theoretical solution. Unlike POD principal components, PGD modes are not necessarily orthogonal to each other. By selecting only the most relevant PGD modes, a reduced order model of the solution is obtained. Because of this, PGD is considered a dimensionality reduction algorithm. Description The proper generalized decomposition is a method characterized by a variational formulation of the problem, a discretization of the domain in the style of the finite element method, the assumption that the solution can be approximated as a separate representation, and a numerical greedy algorithm to find the solution. Variational formulation In the Proper Generalized Decomposition method, the variational formulation involves translating the problem into a format where the solution can be approximated by minimizing (or sometimes maximizing) a functional. A functional is a scalar quantity that depends on a function, which in this case represents the problem. The most commonly implemented variational formulation in PGD is the Bubnov-Galerkin method. This method is chosen for its ability to provide an approximate solution to complex problems, such as those described by partial differential equations (PDEs). In the Bubnov-Galerkin approach, the idea is to project the problem onto a space spanned by a finite number of basis functions. These basis functions are chosen to approximate the solution space of the problem. In the Bubnov-Galerkin method, we seek an approximate solution that satisfies the integral form of the PDEs over the domain of the problem. This is different from directly solving the differential equations. By doing so, the method transforms the problem into finding the coefficients that best fit this integral equation in the chosen function space. While the Bubnov-Galerkin method is prevalent, other variational formulations are also used in PGD, depending on the specific requirements and characteristics of the problem, such as: Petrov-Galerkin Method: This method is similar to the Bubnov-Galerkin approach but differs in the choice of test functions. In the Petrov-Galerkin method, the test functions (used to project the residual of the differential equation) are different from the trial functions (used to approximate the solution). This can lead to improved stability and accuracy for certain types of problems. Collocation Method: In collocation methods, the differential equation is satisfied at a finite number of points in the domain, known as collocation points. This approach can be simpler and more direct than the integral-based methods like Galerkin's, but it may also be less stable for some problems. Least Squares Method: This approach involves minimizing the square of the residual of the differential equation over the domain. It is particularly useful when dealing with problems where traditional methods struggle with stability or convergence.
Mixed Finite Element Method: In mixed methods, additional variables (such as fluxes or gradients) are introduced and approximated along with the primary variable of interest. This can lead to more accurate and stable solutions for certain problems, especially those involving incompressibility or conservation laws. Discontinuous Galerkin Method: This is a variant of the Galerkin method where the solution is allowed to be discontinuous across element boundaries. This method is particularly useful for problems with sharp gradients or discontinuities. Domain discretization The discretization of the domain is a well-defined set of procedures that cover (a) the creation of finite element meshes, (b) the definition of basis functions on reference elements (also called shape functions) and (c) the mapping of reference elements onto the elements of the mesh. Separate representation PGD assumes that the solution u of a (multidimensional) problem can be approximated as a separate representation of the form u(x1, x2, ..., xd) ≈ Σ_{i=1}^{N} X1^i(x1) · X2^i(x2) ⋯ Xd^i(xd), where the number of addends N and the functional products X1(x1), X2(x2), ..., Xd(xd), each depending on a variable (or variables), are unknown beforehand. Greedy algorithm The solution is sought by applying a greedy algorithm, usually the fixed point algorithm, to the weak formulation of the problem. For each iteration i of the algorithm, a mode of the solution is computed. Each mode consists of a set of numerical values of the functional products X1(x1), ..., Xd(xd), which enrich the approximation of the solution. Due to the greedy nature of the algorithm, the term 'enrich' is used rather than 'improve', since some modes may actually worsen the approximation. The number of computed modes required to obtain an approximation of the solution below a certain error threshold depends on the stopping criterion of the iterative algorithm. Features PGD is suitable for solving high-dimensional problems, since it overcomes the limitations of classical approaches. In particular, PGD avoids the curse of dimensionality, as solving decoupled problems is computationally much less expensive than solving multidimensional problems. Therefore, PGD makes it possible to recast parametric problems into a multidimensional framework by setting the parameters of the problem as extra coordinates: u(x1, ..., xd, k1, ..., kp) ≈ Σ_{i=1}^{N} X1^i(x1) ⋯ Xd^i(xd) · K1^i(k1) ⋯ Kp^i(kp), where a series of functional products K1(k1), K2(k2), ..., Kp(kp), each depending on a parameter (or parameters), has been incorporated into the equation. In this case, the obtained approximation of the solution is called a computational vademecum: a general meta-model containing all the particular solutions for every possible value of the involved parameters. Sparse Subspace Learning The Sparse Subspace Learning (SSL) method leverages the use of hierarchical collocation to approximate the numerical solution of parametric models. With respect to traditional projection-based reduced order modeling, the use of collocation enables a non-intrusive approach based on sparse adaptive sampling of the parametric space. This allows the low-dimensional structure of the parametric solution subspace to be recovered while also learning the functional dependence on the parameters in explicit form. A sparse low-rank approximate tensor representation of the parametric solution can be built through an incremental strategy that only needs access to the output of a deterministic solver. Non-intrusiveness makes this approach straightforwardly applicable to challenging problems characterized by nonlinearity or non-affine weak forms.
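To make the greedy enrichment and alternating fixed-point iteration described above concrete, the following sketch applies a PGD-style rank-one enrichment to a 2D Poisson problem discretized by finite differences, for which the operator takes the separated (Sylvester) form A1 U + U A2 = F. It is an illustrative toy implementation; the mode count, iteration counts, tolerance and test problem are arbitrary choices, not taken from any particular PGD code.

```python
# Minimal PGD-style sketch: greedy rank-one enrichment with an alternating
# fixed-point solve, for the discretized Poisson problem A1 U + U A2 = F
# (1D finite-difference Laplacians, homogeneous Dirichlet boundary conditions).
# Illustrative only; tolerances, mode counts and the test problem are assumptions.
import numpy as np

def laplacian_1d(n, h):
    """1D finite-difference approximation of -d2/dx2 with Dirichlet BCs (positive definite)."""
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0 / h**2)
    np.fill_diagonal(A[1:], -1.0 / h**2)      # sub-diagonal
    np.fill_diagonal(A[:, 1:], -1.0 / h**2)   # super-diagonal
    return A

def pgd_poisson(F, A1, A2, n_modes=10, inner_iters=25, tol=1e-10):
    """Approximate the solution U of A1 U + U A2 = F as a sum of rank-one modes x y^T."""
    n1, n2 = F.shape
    U = np.zeros_like(F)
    for _ in range(n_modes):
        R = F - A1 @ U - U @ A2                       # residual of the current approximation
        if np.linalg.norm(R) < tol * np.linalg.norm(F):
            break                                     # already converged, stop enriching
        x, y = np.ones(n1), np.ones(n2)               # initial guess for the new mode
        for _ in range(inner_iters):                  # alternating (fixed-point) updates
            # x-update with y frozen: (||y||^2 A1 + (y^T A2 y) I) x = R y
            x = np.linalg.solve((y @ y) * A1 + (y @ A2 @ y) * np.eye(n1), R @ y)
            x /= np.linalg.norm(x)                    # normalize to keep the iteration well-scaled
            # y-update with x frozen: ((x^T A1 x) I + ||x||^2 A2) y = R^T x
            y = np.linalg.solve((x @ A1 @ x) * np.eye(n2) + (x @ x) * A2, R.T @ x)
        U = U + np.outer(x, y)                        # enrich the separated representation
    return U

# Usage: smooth separable source on the unit square; compare with the analytic solution.
n = 40
h = 1.0 / (n + 1)
A = laplacian_1d(n, h)
s = np.sin(np.pi * np.linspace(h, 1 - h, n))
F = np.outer(s, s)                                    # f(x, y) = sin(pi x) sin(pi y)
U = pgd_poisson(F, A, A, n_modes=5)
U_exact = F / (2 * np.pi**2)                          # analytic solution for this f
print("relative error vs analytic solution:",
      np.linalg.norm(U - U_exact) / np.linalg.norm(U_exact))
```

For a separable right-hand side like this one the first mode already captures essentially all of the solution, which is exactly the behaviour the greedy enrichment exploits.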
References Numerical analysis Mathematical modeling Dimension reduction Boundary value problems
Proper generalized decomposition
[ "Mathematics" ]
1,407
[ "Mathematical modeling", "Applied mathematics", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Approximations" ]
53,823,632
https://en.wikipedia.org/wiki/Gravity%20sewer
A gravity sewer is a conduit utilizing the energy resulting from a difference in elevation to remove unwanted water. The term sewer implies removal of sewage or surface runoff rather than water intended for use; and the term gravity excludes water movement induced through force mains or vacuum sewers. Most sewers are gravity sewers because gravity offers reliable water movement with no energy costs wherever grades are favorable. Gravity sewers may drain to sumps where pumping is required to either force sewage to a distant location or lift sewage to a higher elevation for entry into another gravity sewer, and lift stations are often required to lift sewage into sewage treatment plants. Gravity sewers can be either sanitary sewers, combined sewers, storm sewers or effluent sewers. Gravity sewer hydraulics Gravity sewer systems typically resemble the regional runoff pattern with large trunk sewers in each valley receiving flow from smaller lateral sewers extending up hillsides. Sewer systems within comparatively level terrain require careful planning and construction to minimize energy losses in free falls, sharp bends, or turbulent junctions. Every reach of the sewer should routinely experience the minimum velocity necessary to maintain solids in suspension and avoid blockage from solids deposition in low-velocity areas. Sewers in hilly areas, however, may require energy dissipation features to avoid sewer damage from high fluid velocities and the scouring effects of gritty solids in turbulent flow. Covered sewers are buried below the frost line to avoid freezing, and deep enough to receive gravity flow from anticipated wastewater sources. Long gravity sewers may require significant excavation depths or tunneling to maintain acceptable gradients near the sewer outfall. Pumping alternatives to gravity Availability of reliable pumps allows lifting accumulations of water into gravity sewers from collection sumps in excavations like mines or building foundations, but the cost of pumping surface runoff discourages use of lift stations in storm drains or combined sewers. The capitalized cost of operation and maintenance of lift stations and emergency power supplies usually justifies considerable first cost for excavation or tunneling to build a gravity sewer. Sewage treatment is most efficient at centralized locations; and pumping is often required to lift sewage from lower elevations to the sewage treatment plant headworks. Structures called regulators allow overflow into gravity outfall sewers when peak flow in combined sewers exceeds pumping capacity. Sanitary sewers are preferred for cost-effective pumping of sewage when treatment is required. Gravity sewers are preferred where grades are favorable, but lift stations often move sewage to sewage treatment plants. Vacuum sewers have a permanent negative pressure; due to improvements in technology, they have become more comparable to gravity sewers in operation and maintenance costs, and are cheaper to install. History The earliest sewers were ditches to remove standing water from muddy locations where dry ground was preferable for human activity. Early sewers served a function similar to modern storm drains (sometimes called storm sewers.) Combined sewers evolved from the practice of using flow in early drainage ditches to remove other wastes including draft animal feces. Sewers became offensive as waste concentrations increased in communities with high population density. Culverts were installed to cover the offensive liquids. 
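As a rough illustration of the self-cleansing velocity idea from the hydraulics discussion above, the sketch below applies Manning's formula to a circular pipe flowing full. The roughness coefficient, pipe size, grade and the 0.6 m/s threshold are assumed example values, not design-code requirements.

```python
# Rough illustration of the check implied by the hydraulics section above:
# does a proposed slope give a full-flow velocity above a self-cleansing minimum?
# Uses Manning's formula in SI units; n, diameter, slope and the 0.6 m/s
# threshold are illustrative assumptions, not design standards.
import math

def manning_full_pipe_velocity(diameter_m, slope, n=0.013):
    """Full-flow velocity (m/s) from Manning's formula V = (1/n) R^(2/3) S^(1/2)."""
    hydraulic_radius = diameter_m / 4.0          # R = A/P = D/4 for a full circular pipe
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)

def full_pipe_flow(diameter_m, slope, n=0.013):
    """Full-flow discharge (m^3/s): velocity times cross-sectional area."""
    area = math.pi * diameter_m ** 2 / 4.0
    return manning_full_pipe_velocity(diameter_m, slope, n) * area

# Example: a 300 mm concrete sewer laid at a 0.5% grade.
v = manning_full_pipe_velocity(0.300, 0.005)
q = full_pipe_flow(0.300, 0.005)
print(f"velocity {v:.2f} m/s, capacity {q * 1000:.0f} L/s,",
      "self-cleansing" if v >= 0.6 else "below typical self-cleansing minimum")
```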
Prosperous communities used masonry and brickwork to cover early sewers. Terracotta pipes were used for low volume sewers. The portion of a sewer discharging into a natural water feature is called an outfall. Improved sewer materials The Industrial Revolution increased population density in manufacturing districts, and produced pipes useful for drain-waste-vent systems from buildings to sewers. Gravity sewers have been assembled from cast iron pipe, vitrified clay pipe, precast concrete pipe, asbestos-cement pipe, and plastic pipe. While some older brickwork sewers remain in use, new sewers of larger diameters typically use reinforced concrete. Corrugated metal pipe may be used for storm drains or wastewaters with similarly low risk of corrosive conditions. Non-circular cross-sections may have advantages for large-diameter sewers. References Sources Hydraulic engineering Sewerage infrastructure
Gravity sewer
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
808
[ "Hydrology", "Water treatment", "Sewerage infrastructure", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
53,827,714
https://en.wikipedia.org/wiki/Human%20germline%20engineering
Human germline engineering (HGE) is the process by which the genome of an individual is modified in such a way that the change is heritable. This is achieved by altering the genes of the germ cells, which mature into eggs and sperm. For safety, ethical, and social reasons, the scientific community and the public have concluded that germline editing for reproduction is inappropriate. HGE is prohibited by law in more than 70 countries and by a binding international treaty of the Council of Europe. In November 2015, a group of Chinese researchers used CRISPR/Cas9 to edit single-celled, non-viable embryos to assess its effectiveness. This attempt was unsuccessful; only a small fraction of the embryos successfully incorporated the genetic material and many of the embryos contained a large number of random mutations. The non-viable embryos that were used contained an extra set of chromosomes, which may have been problematic. In 2016, a similar study was performed in China on non-viable embryos with extra sets of chromosomes. This study showed similar results to the first, except that no embryos adopted the desired gene. In November 2018, researcher He Jiankui created the first human babies from genetically edited embryos, known by their pseudonyms, Lulu and Nana. In May 2019, lawyers in China reported that regulations had been drafted stipulating that anyone manipulating the human genome would be held responsible for any related adverse consequences. Techniques CRISPR-Cas9 The CRISPR-Cas9 system consists of an enzyme called Cas9 and a special piece of guide RNA (gRNA). Cas9 acts as a pair of ‘molecular scissors’ that can cut the DNA at a specific location in the genome so that genes can be added or removed. The guide RNA has complementary bases to those at the target location, so it binds only there. Once bound, Cas9 makes a cut across both DNA strands, allowing base pairs to be inserted or removed. Afterwards, the cell recognizes that the DNA is damaged and tries to repair it. Although CRISPR/Cas9 can be used in humans, it is more commonly used in other species or cell culture systems, including in experiments to study genes potentially involved in human diseases. Speculative uses Genetic engineering is in widespread use, particularly in agriculture. Human germline engineering has two potential applications: to prevent genetic disorders from passing to descendants, and to modify traits such as height that are not disease related. For example, the Berlin Patient has a genetic mutation in the CCR5 gene that suppresses the expression of CCR5. This confers innate resistance to HIV. Modifying human embryos to give the CCR5 Δ32 allele protects them from the disease. Another use would be to cure genetic disorders. In the first study published regarding human germline engineering, the researchers attempted to edit the HBB gene, which codes for the human β-globin protein. HBB mutations produce β-thalassaemia, which can be fatal. Genome editing in patients who have these HBB mutations would leave copies of the unmutated gene, effectively curing the disease. If the germline could be edited, this normal copy of the HBB genes could be passed on to future generations. Designer babies Eugenic modifications to humans yield "designer babies", with deliberately-selected traits, possibly extending to the entire genome. HGE potentially allows for enhancement of these traits. The concept has produced strong objections, particularly among bioethicists.
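The guide-RNA targeting logic described in the CRISPR-Cas9 section above can be illustrated with a small sketch. SpCas9 is commonly described as requiring an 'NGG' PAM immediately 3' of a 20-nucleotide protospacer and cutting roughly 3 bp upstream of the PAM; the sequence and names below are invented for illustration, not a real human locus or any study's actual target.

```python
# Toy illustration of guide-RNA target selection for SpCas9: find 'NGG' PAM
# sites on the top strand, report the 20-nt protospacer 5' of each PAM and the
# approximate blunt cut position (~3 bp upstream of the PAM). Example only.
import re

def find_cas9_targets(dna, guide_len=20):
    """Return (protospacer, pam, approximate cut index) for each NGG PAM site."""
    targets = []
    for m in re.finditer(r"(?=([ACGT]GG))", dna):     # every (possibly overlapping) NGG
        pam_start = m.start(1)
        if pam_start >= guide_len:                    # need room for a full protospacer
            protospacer = dna[pam_start - guide_len:pam_start]
            cut_site = pam_start - 3                  # blunt cut ~3 bp 5' of the PAM
            targets.append((protospacer, dna[pam_start:pam_start + 3], cut_site))
    return targets

example = "ATGCTTACCGGATCAGGTTCACCTGAAGGCTTAACCGTAGGCTTCAAGTGG"   # invented sequence
for protospacer, pam, cut in find_cas9_targets(example):
    print(f"protospacer {protospacer}  PAM {pam}  cut between positions {cut - 1} and {cut}")
```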
In a 2019 animal study with Liang Guang small spotted pigs, precise editing of the myostatin signal peptide yielded increased muscle mass. Myostatin is a negative regulator of muscle growth, so mutating the gene's signal peptide region could promote muscle growth. One study mutated myostatin genes in 955 embryos at several locations with CRISPR/Cas9 and implanted them into five surrogates, resulting in 16 piglets. Only specific mutations to the myostatin signal peptide increased muscle mass, mainly due to an increase in muscle fibers. A similar study in mice knocked out the myostatin gene, which also increased muscle mass. This showed that muscle mass could be increased with germline editing, which is likely applicable to humans because the myostatin gene regulates human muscle growth. Research HGE is widely debated, and more than 40 countries formally outlaw it. No legislation explicitly prohibits germline engineering in the United States. The Consolidated Appropriations Act of 2016 bans the use of US FDA funds to engage in human germline modification research. In April 2015, a research team published an unsuccessful experiment in which they used CRISPR to edit a gene that is associated with blood disease in non-living human embryos. Researchers using CRISPR/Cas9 have run into issues when it comes to mammals due to their complex diploid cells. Studies in microorganisms have examined loss of function genetic screening. Some studies used mice as a subject. Because RNA processes differ between bacteria and mammalian cells, researchers have had difficulties coding for mRNA's translated data without RNA interference. Studies have successfully used a Cas9 nuclease with a single guide RNA to allow for larger knockout regions in mice. Lack of international regulation The lack of international regulation led researchers to attempt to create an international framework of ethical guidelines. The framework lacks the requisite international treaties for enforcement. At the first International Summit on Human Gene Editing in December 2015, researchers issued the first international guidelines. These guidelines allowed pre-clinical research into gene editing in human cells as long as the embryos were not used to establish a pregnancy. Genetic alteration of somatic cells for therapeutic purposes was considered ethically acceptable in part because somatic cells cannot pass modifications to subsequent generations. However, the lack of consensus and the risks of inaccurate editing led the conference to call for restraint on germline modifications. On March 13, 2019 researchers Eric Lander, Françoise Baylis, Feng Zhang, Emmanuelle Charpentier, Paul Berg, and others called for a framework that did not foreclose any outcome, but included a voluntary pledge and a call for a coordinating body to monitor the HGE moratorium with an attempt to reach social consensus before furthering research. The World Health Organization announced on December 18, 2018 plans to convene an international committee on the topic. He Jiankui Major studies The first known HGE research was by Chinese researchers in April 2015 in Protein and Cell. The researchers used tripronuclear (3PN) zygotes fertilized by two sperm and therefore non-viable, to investigate CRISPR/Cas9-mediated gene editing in human cells. The researchers found that while CRISPR/Cas9 could effectively cleave the β-globin gene (HBB), homologous recombination-directed repair after CRISPR/Cas9 cleavage was inefficient and failed in a majority of trials.
Problems arose, such as off-target cleavage, and competitive recombination with the endogenous delta-globin gene led to unexpected mutations. The study results indicated that HBB repair in the embryos occurred preferentially through alternative pathways. In the end only 4 of the 54 zygotes carried the intended genetic information, and even then the successfully edited embryos were mosaics containing the preferential genetic code and the mutation. In March 2017, researchers claimed to have successfully edited three viable human embryos. The study showed that CRISPR/Cas9 could effectively be used as a gene-editing tool in human 2PN zygotes, which could potentially lead to a viable pregnancy. The researchers used injection of Cas9 protein complexed with the relevant sgRNAs and homology donors into human embryos. The researchers observed CRISPR/Cas9-induced, homologous recombination-mediated alteration of G6PD. The researchers also noted the limitations of their study and called for further research. An August 2017 study reported the successful use of CRISPR to edit out a mutation responsible for congenital heart disease. The study looked at heterozygous MYBPC3 mutation in human embryos. The study claimed a precise CRISPR/Cas9 homology-directed repair response with high accuracy and precision. By modifying the cell cycle stage at which the DSB was induced, they were able to avoid mosaicism in cleaving embryos, prominent in earlier studies, and achieve a large percentage of homozygous embryos carrying the wild-type MYBPC3 gene without evidence of unintended mutations. The researchers concluded that the technique may be used to correct mutations in human embryos. The claims of this study were however pushed back on by critics who argued the evidence was unpersuasive. In a June 2018 study, researchers reported a potential link between edited cells and increased cancerous potential. The study reported that CRISPR/Cas9 induced DNA damage response and stopped the cell cycle. The study was conducted in human retinal pigment epithelial cells, and the use of CRISPR led to a selection against cells with a functional p53 pathway. The study concluded that p53 inhibition might increase HGE efficiency and that p53 function would need to be watched when developing CRISPR/Cas9 based therapy. A November 2018 study reported using CRISPR/Cas9 to correct a single mistaken amino acid in 16 out of 18 attempts in human embryos. The unusual level of precision was achieved with a base editor (BE) system that was constructed by fusing the deaminase to the dCas9 protein. The BE system efficiently edited the targeted C to T or G to A without the use of a donor and without DSB formation. The study focused on the FBN1 mutation that is causative for Marfan syndrome. The study supported the corrective value of gene therapy for the FBN1 mutation in both somatic and germline cells. Ethical and moral debates As early in the history of biotechnology as 1990, there have been researchers opposed to attempts to modify the human germline using these new tools, and such concerns have continued as technology progressed. In March 2015, with the advent of new techniques like CRISPR, researchers urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited. In April 2015, researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR, creating controversy.
A committee of the American National Academy of Sciences and National Academy of Medicine gave support to human genome editing in 2017 once answers had been found to safety and efficiency problems, "but only for serious conditions under stringent oversight." The American Medical Association's Council on Ethical and Judicial Affairs stated that "genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics." Several religious positions have been published with regard to human germline engineering. According to them, many see germline modification as being more moral than the alternative, which would be either discarding of the embryo, or birth of a diseased human. The main conditions when it comes to whether or not it is morally and ethically acceptable lie within the intent of the modification, and the conditions in which the engineering is done. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases. For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery. Another theorist claims that moral concerns limit but do not prohibit germline engineering. Consent One issue related to human genome editing relates to the impact of the technology on future individuals whose genes are modified without their consent. Clinical ethics accepts the idea that parents are, almost always, the most appropriate surrogate medical decision makers for their children until the children develop their own autonomy and decision-making capacity. This is based on the assumption that, except under rare circumstances, parents have the most to lose or gain from a decision and will ultimately make decisions that reflect the future values and beliefs of their children. According to this assumption, it could be assumed that parents are the most appropriate decision makers for their future children as well. However, there are anecdotal reports of children and adults who disagree with the medical decisions made by a parent during pregnancy or early childhood, such as when death was a possible outcome. There are also published patient stories by individuals who feel that they would not wish to change or remove their own medical condition if given the choice and individuals who disagree with medical decisions made by their parents during childhood. Other researchers and philosophers have noted that the issue of the lack of prior consent applies as well to individuals born via traditional sexual reproduction. Philosopher David Pearce further argues that “old-fashioned sexual reproduction is itself an untested genetic experiment”, often compromising a child's wellbeing and pro-social capacities even if the child grows up in a healthy environment.
According to Pearce, “the question of [human germline engineering] comes down to an analysis of risk-reward ratios – and our basic ethical values, themselves shaped by our evolutionary past.” Bioethicist Julian Savulescu in turn proposes the principle of procreative beneficence, according to which “couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others, based on the relevant, available information”. Some ethicists argue that the principle of procreative beneficence would justify or even require genetically enhancing one's children. A relevant issue concerns “off-target effects”: large genomes may contain identical or homologous DNA sequences, and the CRISPR/Cas9 enzyme complex may unintentionally cleave these sequences, causing mutations that may lead to cell death. The mutations can cause important genes to be turned on or off, such as genetic anti-cancer mechanisms, which could speed up disease exacerbation. Unequal distribution of benefits The other ethical concern is the potential for “designer babies”, or the creation of humans with "perfect", or "desirable" traits. There is a debate as to whether this is morally acceptable as well. Such debate ranges from the ethical obligation to use safe and efficient technology to prevent disease to seeing some actual benefit in genetic disabilities. There are concerns that the introduction of desirable traits in a certain part of the population (instead of the entire population) could cause economic inequalities (“positional” good). However, this is not the case if the same desirable trait were introduced across the entire population (similar to vaccines). Another ethical concern pertains to potential unequal distribution of benefits, even in the case of genome editing being inexpensive. For example, corporations may be able to take unfair advantage of patent law or other ways of restricting access to genome editing and thereby increase inequalities. There are already disputes in the courts where CRISPR-Cas9 patents and access issues are being negotiated. Therapeutic and non-therapeutic use There remains debate on whether the permissibility of human germline engineering for reproduction depends on whether the application is therapeutic or non-therapeutic. In a survey by the UK's Royal Society, 76% of participants in the UK supported therapeutic human germline engineering to prevent or correct disease; however, for non-therapeutic edits such as enhancing intelligence or altering eye or hair color in embryos, there was only 40% and 31% support, respectively. There was a similar result in a study at the University of Bogota, Colombia, where students as well as professors generally agreed that therapeutic genome editing is acceptable, while non-therapeutic genome editing is not. There is also debate on whether there can be a defined distinction between therapeutic and non-therapeutic germline editing. An example would be if two embryos are predicted to grow up to be very short in height. Boy 1 will be short because of a mutation in his Human Growth Hormone gene, while boy 2 will be short because his parents are very short. Editing the embryo of boy 1 to make him of average height would be a therapeutic germline edit, while editing the embryo of boy 2 to be of average height would be a non-therapeutic germline edit.
In both cases, with no editing of the boys' genomes, they would grow up to be very short, which would decrease their wellbeing in life. Likewise, editing both of the boys' genomes would allow them to grow up to be of average height. In this scenario, editing for the same phenotype for being of average height falls under both therapeutic and non-therapeutic germline engineering. Current global policy There is a distinction in some countries' policies, including but not limited to official regulation and legislation, between human germline engineering for reproductive use and for laboratory research. As of October 2020, there are 96 countries that have policies involving the use of germline engineering in human cells. Reproductive use Reproductive use of human germline engineering involves implanting the edited embryo to be born. 70 countries currently explicitly prohibit the use of human germline engineering in reproduction, while 5 countries prohibit it for reproduction with exceptions. No countries permit the use of human germline engineering for reproduction. Countries that explicitly prohibit any use of human germline engineering for reproduction are: Albania, Argentina, Australia, Austria, Bahrain, Belarus, Benin, Bosnia and Herzegovina, Brazil, Bulgaria, Burundi, Canada, Chile, China, Congo, Costa Rica, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, India, Iran, Ireland, Israel, Japan, Kenya, Latvia, Lebanon, Lithuania, Malaysia, Malta, Mexico, Moldova, Montenegro, Netherlands, New Zealand, Nigeria, North Macedonia, Norway, Oman, Pakistan, Poland, Portugal, Qatar, Romania, Russia, San Marino, Saudi Arabia, Serbia, Slovakia, Slovenia, South Korea, Spain, Sweden, Switzerland, Thailand, Tunisia, Turkey, the United Kingdom, the United States, Uruguay, and the Vatican Countries that explicitly prohibit (with exceptions) the use of human germline engineering for reproduction are: Belgium, Colombia, Italy, Panama, and the United Arab Emirates Laboratory research Laboratory research use involves human germline engineering restricted to in vitro use, where edited cells will not be implanted to be born. 19 countries currently explicitly prohibit any use of human germline engineering for in vitro use, while 4 prohibit it with exceptions, and 11 permit it. Countries that explicitly prohibit any use of germline engineering for in vitro use are: Albania, Austria, Bahrain, Belarus, Brazil, Canada, Costa Rica, Croatia, Germany, Greece, Lebanon, Malaysia, Malta, Pakistan, Saudi Arabia, Sweden, Switzerland, Uruguay, and the Vatican Countries that explicitly prohibit (with exceptions) the use of germline engineering for in vitro use are: Colombia, Finland, Italy, and Panama Countries that explicitly permit the use of germline engineering for in vitro use are: Burundi, China, Congo, India, Iran, Ireland, Japan, Norway, Thailand, the United Kingdom, and the United States See also Human genetic engineering Gene therapy Germinal choice technology Human genetic enhancement CRISPR Designer Baby References Further reading Genetics Genome editing Biotechnology in China 2010s in biotechnology
Human germline engineering
[ "Engineering", "Biology" ]
4,043
[ "Genetics techniques", "Biotechnology by country", "Genome editing", "Genetic engineering", "Biotechnology in China" ]
53,834,843
https://en.wikipedia.org/wiki/Organotantalum%20chemistry
Organotantalum chemistry is the chemistry of chemical compounds containing a carbon-to-tantalum chemical bond. A wide variety of compounds have been reported, initially with cyclopentadienyl and CO ligands. Oxidation states vary from Ta(V) to Ta(-I). Classes of organotantalum compounds Alkyl and aryl complexes Pentamethyltantalum was reported by Richard Schrock in 1974. Salts of [Ta(CH3)6]− are prepared by alkylation of TaF5 using methyl lithium: TaF5 + 6 LiCH3 → Li[Ta(CH3)6] + 5 LiF Alkylidene complexes Tantalum alkylidene complexes arise by treating trialkyltantalum dichloride with alkyl lithium reagents. This reaction initially forms a thermally unstable tetraalkyl-monochloro-tantalum complex, which undergoes α-hydrogen elimination, followed by alkylation of the remaining chloride. Tantalum alkylidene complexes are nucleophilic. They effect a number of reactions, including olefinations, olefin metathesis, hydroaminoalkylation of olefins, and conjugate allylation of enones. Ethylene, propylene, and styrene react with tantalum alkylidene complexes to yield olefin metathesis products. Cyclopentadienyl complexes Some of the first reported organotantalum complexes were cyclopentadienyl derivatives. These arise from the salt metathesis reactions of sodium cyclopentadienide and tantalum pentachloride. An example of this is the first transition metal trihydride, Cp2TaH3. More soluble and better developed are derivatives of pentamethylcyclopentadiene such as Cp*TaCl4, Cp*2TaCl2, and Cp*2TaH3. Tantalum carbonyls and isocyanides Reduction of TaCl5 under an atmosphere of CO gives the salts of [Ta(CO)6]−. These same anions can be obtained by carbonylation of tantalum arene complexes. A number of tantalum isocyanide complexes are also known. Tantalum arenes and alkyne complexes Treatment of tantalum pentachloride with hexamethylbenzene (C6Me6), aluminium, and aluminium trichloride gives [M(η6-C6Me6)AlCl4]2. Tantalum-alkyne complexes catalyze cyclotrimerizations. Some tantalum-alkyne complexes are precursors to allylic alcohols. Tantalacyclopropenes are invoked as intermediates. Tantalum-amido complexes Organotantalum compounds are invoked as intermediates in C-alkylation of secondary amines with 1-alkenes using Ta(NMe2)5. The chemistry developed by Maspero was later brought to fruition when Hartwig and Herzon reported the hydroaminoalkylation of olefins to form alkylamines: The catalytic cycle may proceed by β-hydrogen abstraction of the bisamide, which forms the metallaaziridine. Subsequent olefin insertion, protonolysis of the tantalum-carbon bond, and β-hydrogen abstraction affords the alkylamine product. Transmetalation Organotantalum reagents arise via transmetalation of organotin compounds with tantalum(V) chloride. These organotantalum reagents promote the conjugate allylation of enones. Although the direct allylation of carbonyl groups is prevalent throughout the literature, little has been reported on the conjugate allylation of enones. Applications Organotantalum compounds are of academic interest, but few or no commercial applications have been described. References Tantalum compounds Organometallic chemistry
Organotantalum chemistry
[ "Chemistry" ]
836
[ "Organometallic chemistry" ]
53,835,236
https://en.wikipedia.org/wiki/Krische%20allylation
The Krische allylation involves the enantioselective iridium-catalyzed addition of an allyl group to an aldehyde or an alcohol, resulting in the formation of a secondary homoallylic alcohol. The mechanism of the Krische allylation involves primary alcohol dehydrogenation or, when using aldehyde reactants, hydrogen transfer from 2-propanol. Unlike other allylation methods, the Krische allylation avoids the use of preformed allyl metal reagents and enables the direct conversion of primary alcohols to secondary homoallylic alcohols (precluding alcohol to aldehyde oxidation). Background Enantioselective carbonyl allylations are frequently applied to the synthesis of polyketide natural products. In 1978, Hoffmann reported the first asymmetric carbonyl allylation using a chiral allylmetal reagent, an allylborane derived from camphor. Subsequently, other chiral allylmetal reagents were developed by Kumada, Roush, Brown, Leighton, and others. These methods utilize preformed allyl metal reagents and generate stoichiometric quantities of metal byproducts. In 1991, Yamamoto disclosed the first catalytic enantioselective method for carbonyl allylation, which employed a chiral boron Lewis acid-catalyst in combination with allyltrimethylsilane. Numerous catalytic enantioselective methods for carbonyl allylation followed, including work by Umani-Ronchi and Keck. While these methods had a significant impact, they do not circumvent the use of preformed allylmetal reagents. Catalytic variants of the Nozaki-Hiyama-Kishi reaction represent an alternative method for asymmetric carbonyl allylation, but stoichiometric metallic reductants are required. Whereas the allylmetal reagents used in these first-generation technologies are often difficult to prepare and handle, the Krische allylation exploits highly tractable allylic acetates. Additionally, the Krische allylation avoids the use of preformed allyl metal reagents or metallic reductants and chiral auxiliaries, significantly reducing waste generation. Reaction features The Krische allylation involves “transfer hydrogenative” carbon-carbon bond formations. In a series of papers published in the early 2000s, Krische and coworkers demonstrated that allenes, dienes, and allyl acetates could be converted to transient allylmetal nucleophiles via hydrogenation, transfer hydrogenation or hydrogen auto-transfer. This strategy for enantioselective carbonyl allylation avoids preformed organometallic reagents or metallic reductants. A remarkable feature of these reactions is the ability to conduct carbonyl allylation from the alcohol oxidation state. Due to a kinetic preference for primary alcohol dehydrogenation, diols containing both primary and secondary alcohols undergo site-selective carbonyl allylation at the primary alcohol without the need for protecting groups. Additionally, by using alcohol reactants, the use of chiral α-stereogenic aldehydes, which are prone to racemization, can be avoided. The excellent functional group compatibility of the Krische allylation combined with the tractability of the allyl acetate pronucleophiles enables the use of allyl donors bearing highly complex nitrogen-rich substituents. The figure below shows some of the different allyl donors that have been used in the Krische allylation. These methods are summarized in the review literature. Mechanism The active catalyst in the Krische allylation is a cyclometallated π-allyliridium C,O-benzoate complex. 
This complex can be generated in situ or can be isolated via precipitation or conventional chromatography on silica gel. The mechanism of the Krische allylation has been corroborated by DFT calculations. Entry into the catalytic cycle involves protonation of the cyclometallated π-allyliridium precatalyst to generate the iridium alkoxide I. β-Hydride elimination of alkoxide I generates the aldehyde, which dissociates to form the iridium hydride III. Deprotonation of the iridium hydride III provides an anionic iridium(I) species IV, which upon oxidative addition to the allyl donor forms the π-allyliridium complex V. Association of the aldehyde with the σ-allyliridium species VI triggers carbonyl addition by way of the six-centered transition structure VII to form the homoallylic alkoxide VIII. The homoallylic alkoxide VIII is stable with respect to beta-hydride elimination due to coordination of the double bond with the metal. Exchange with the primary alcohol reactant regenerates the iridium alkoxide I and releases the reaction product. Applications in synthesis The iridium-catalyzed transfer-hydrogenative carbonyl allylation method has been applied to the synthesis of polyketide natural products. Some examples are shown below. In every case, the target compound was prepared in significantly fewer steps than was previously achieved. For example, total syntheses of roxaticin, bryostatin and cryptocaryol were accomplished via double Krische allylation of 1,3-propane diol. This method was also used in the synthesis of mandelalide A. The Krische bisallylation has been applied to the synthesis of psymberin in 17 LLS and 32 total steps. Through the use of the Krische allylation, this synthesis was accomplished via a much shorter route than previous syntheses. Krische applied the allylation to his synthesis of callyspongiolide using the chiral SEGPHOS catalyst complex. In 2018, Harran also prepared callyspongiolide using the Krische allylation as a convergent method for fragment union. Double crotylation was used by Krische to prepare 6-deoxyerythronolide B and swinholide A. Related articles Organostannane addition Carbonyl allylation References External links Krische Group Website Iridium Catalysis Organometallic chemistry Organic reactions
Krische allylation
[ "Chemistry" ]
1,290
[ "Catalysis", "Chemical kinetics", "Organometallic chemistry", "Organic reactions" ]
40,882,801
https://en.wikipedia.org/wiki/Propidium%20monoazide
Propidium monoazide (PMA) is a photoreactive DNA-binding dye that preferentially binds to dsDNA. It is used to detect viable microorganisms by qPCR. Visible light (high-power halogen lamps or specific LED devices) induces a photoreaction that leads to a covalent bond between PMA and the dsDNA. This process renders the DNA insoluble and results in its loss during subsequent genomic DNA extraction. Theoretically, dead microorganisms lose their capability to maintain their membranes intact, which leaves the "naked" DNA in the cytosol ready to react with PMA. The DNA of living organisms is not exposed to PMA, as living cells have an intact cell membrane. After treatment with the chemical, only the DNA from living bacteria is usable in qPCR, so that only DNA from living organisms is amplified. This is helpful in determining which pathogens are active in specific samples. The main use of PMA is in Viability PCR, but the same principle can be applied in flow cytometry or fluorescence microscopy. However, the ability of PMA to differentiate viable and non-viable cells varies between bacteria; for example, the permeability of PMA differs between gram-positive and gram-negative cell membranes. Therefore, the application of PMA to mixed communities is still limited. PMA was developed at Biotium, Inc. as an improvement on ethidium monoazide (EMA). PMA provides better discrimination between live and dead bacteria because it is excluded from live cells more efficiently than EMA. References DNA-binding substances Dyes Molecular biology Organoazides
Propidium monoazide
[ "Chemistry", "Biology" ]
368
[ "Biochemistry", "DNA-binding substances", "Genetics techniques", "Molecular biology" ]
40,887,862
https://en.wikipedia.org/wiki/Chemical%20phosphorus%20removal
Chemical phosphorus removal is a wastewater treatment method, where phosphorus is removed using salts of aluminum (e.g. alum or polyaluminum chloride), iron (e.g. ferric chloride), or calcium (e.g. lime). Phosphate forms precipitates with the metal ions and is removed together with the sludge in the separation unit (sedimentation tank, flotation tank, etc.). Aluminum sulfate treatment to reduce phosphorus content of lakes One method of eutrophication remediation is the application of aluminum sulfate, a salt commonly used in the coagulation process of drinking water treatment. Aluminum sulfate, or "alum" as it is commonly called, has been found to be an effective lake management tool by reducing the phosphorus load. Alum was first applied in 1968 to a lake in Sweden. Its first application to an American lake followed in 1970. Today, alum has been utilized with improved effectiveness and understanding. In a large-scale study, 114 lakes were monitored for the effectiveness of alum at phosphorus reduction. Across all lakes, alum effectively reduced the phosphorus for 11 years. While there was variation in the longevity (21 years in deep lakes and 5.7 years in shallow lakes), the results demonstrate the effectiveness of alum at controlling phosphorus within lakes. Mechanism Alum treatment begins with the addition of aluminum sulfate salt to a water body. Once added, the salt dissolves and dissociates, introducing Al(III) ions to the water. The aluminum ions participate in a series of hydrolysis reactions, forming different aluminum species across pH ranges. As more aluminum sulfate is added, water pH decreases. At higher pH, the soluble species Al(OH)4− is present. In neutral pH ranges (6–8), the insoluble aluminum hydroxide (Al(OH)3) occurs. As pH decreases further, the Al(III) ion remains present. Maintaining optimal pH is important for the removal of phosphorus from water. Phosphorus is most effectively removed at the neutral pH range, when the insoluble aluminum hydroxide is present. This hydroxide functions as a Lewis acid, creating a flocculation environment similar to conventional wastewater treatment. The insoluble Al(OH)3 floc adsorbs phosphorus, as well as other species, and removes them from the water column. As floc adsorption continues, the floc becomes larger, eventually settling to the bottom of the water column in the sediment. The resulting aluminum hydroxide layer covering the lake bottom additionally blocks the diffusion of phosphorus from sediment into the water column, further regulating internally loaded phosphorus. Implementation For most alum treatments, aluminum sulfate salt is applied to substrate at the lake's bottom, within the hypolimnion. The alum then reduces phosphorus levels by inactivating the phosphorus released from these lake sediments, thereby controlling phosphorus in the entire water column. This phosphorus supplied from within the lake sediments is known as "internally loaded" phosphorus, as opposed to "externally loaded" phosphorus supplied by sources outside the lake, such as runoff. Although alum is typically applied to the hypolimnion, reducing phosphorus universally within the lake, it may also be applied to the epilimnion or locally to point sources. This style of alum treatment is similar to the use of alum in conventional water treatment, and is more effective at reducing externally loaded phosphorus than universal application of alum to the hypolimnion.
When alum is applied to the epilimnion, boats powered by an outboard motor and carrying aluminum sulfate are deployed onto the lake. After determining the necessary dosage and location of the application, the aluminum sulfate is added to the surface of the lake near the wake of the outboard motor. This provides sufficient mixing of the aluminum sulfate within the epilimnion. The necessary dosage of alum is determined by a variety of parameters. Changes in pH, dissolved oxygen levels, metal content of lake sediment, and lake size are all important for consideration. Alum dosage is calculated by scientists and engineers to increase the effectiveness. Limitations Alum treatment is less effective in deep lakes, as well as lakes with substantial external phosphorus loading. In deep lakes, the inactivation of phosphorus is not spread throughout the entire water column, as it is in shallower lakes, due to the localization of aluminum hydroxide to the hypolimnion. Furthermore, externally loaded phosphorus often diffuses slowly downward from the lake surface, limiting its interaction with aluminum hydroxide within the hypolimnion and allowing phosphorus accumulation higher in the water column. Therefore, alum treatment is most effectively applied to shallow lakes with primarily internally loaded phosphorus. One exception is point sources of externally loaded phosphorus, which can be effectively regulated by direct application of aluminum sulfate to the source. Another physical property to be considered is the ability of a lake to withstand mixing in the water column. Lakes with a higher Osgood Index, a parameter used to determine the amount of mixing that occurs in a lake due to wind, have been found to result in more effective alum treatment. Another parameter is the ratio of the watershed area to the lake surface area. Lakes with lower watershed to lake area ratios experienced greater longevity following treatment. These lakes tend to be correlated with longer residence times and tend to be influenced by internally loaded phosphorus, which aids in successful treatment. Regardless of application strategy, repeated alum treatment is often necessary for most lakes every 5 to 15 years. The necessity of repeated treatment requires continuous management and phosphorus monitoring to ensure optimal effectiveness. Biological implications are another important consideration of alum treatment. Treatments increase water clarity, which has been correlated with increased plant growth at greater depths within the lake. Increased plant growth within lakes changes the character of the substrate, which is sometimes a factor in biodiversity. Lakes with benthic feeding fish such as carp tend to have lower success at removing phosphorus. These species forage in lake sediments, which disturbs the aluminum hydroxide flocs binding phosphorus to the lake bottom. An additional concern is that aluminum salts can acidify lakes, making them potentially toxic to aquatic organisms. However, the aluminum sulfate dosage used for lake treatment is generally not high enough to pose significant toxicity to fish, although declines in algae and invertebrates have been observed in treated lakes. The alum dosage is also insufficient to cause toxicity in humans, and is often similar to alum doses used in conventional drinking water treatment. To reduce negative biological effects, the accepted limit for dissolved aluminum concentrations in a water body is 50 μg Al/L and pH should be restricted to a range of 5.5-9.
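As a rough illustration of the stoichiometry behind the dosing considerations discussed above, the sketch below converts a target phosphorus reduction into an alum dose. It assumes an idealized 1:1 Al:P molar ratio scaled by a user-chosen excess factor; real doses depend on lake chemistry and are determined empirically (for example by jar testing), so every number here is an assumed example.

```python
# Back-of-the-envelope stoichiometric dose estimate for alum treatment.
# Assumes an idealized 1:1 Al:P molar ratio (AlPO4-like precipitation) and the
# 14-hydrate form of alum; real dosing is set higher and determined empirically,
# so the excess factor and target concentration below are illustrative only.
M_P = 30.97        # g/mol phosphorus
M_ALUM = 594.4     # g/mol Al2(SO4)3.14H2O (contains 2 Al per formula unit)

def alum_dose_mg_per_L(p_removed_mg_per_L, al_to_p_molar_ratio=1.0):
    """Alum dose in mg/L needed to bind the given phosphorus concentration."""
    mmol_p = p_removed_mg_per_L / M_P                 # mmol/L of P to remove
    mmol_al = mmol_p * al_to_p_molar_ratio            # mmol/L of Al required
    return mmol_al * (M_ALUM / 2.0)                   # mg/L of alum (2 Al per alum unit)

# Example: lower a lake's phosphorus by 0.05 mg/L with a twofold aluminum excess.
dose = alum_dose_mg_per_L(0.05, al_to_p_molar_ratio=2.0)
print(f"approximate alum dose: {dose:.2f} mg/L")      # about 0.96 mg/L
```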
References External links Phosphorus removal from wastewater - Lenntech Water treatment
Chemical phosphorus removal
[ "Chemistry", "Engineering", "Environmental_science" ]
1,376
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
46,919,484
https://en.wikipedia.org/wiki/Monoidal%20category%20action
In algebra, an action of a monoidal category S on a category X is a functor ⋅ : S × X → X such that there are natural isomorphisms (s ⊗ t) ⋅ x ≅ s ⋅ (t ⋅ x) and 1 ⋅ x ≅ x, where 1 is the unit object of S, and those natural isomorphisms satisfy the coherence conditions analogous to those in S. If there is such an action, S is said to act on X. For example, S acts on itself via the monoid operation ⊗. Notes References Monoidal categories Functors
Monoidal category action
[ "Mathematics" ]
83
[ "Algebra stubs", "Mathematical structures", "Functions and mappings", "Monoidal categories", "Mathematical objects", "Category theory", "Mathematical relations", "Functors", "Algebra" ]
46,919,643
https://en.wikipedia.org/wiki/Uniform%20tiling%20symmetry%20mutations
In geometry, a symmetry mutation is a mapping of fundamental domains between two symmetry groups. They are compactly expressed in orbifold notation. These mutations can occur from spherical tilings to Euclidean tilings to hyperbolic tilings. Hyperbolic tilings can also be divided between compact, paracompact and divergent cases. The uniform tilings are the simplest application of these mutations, although more complex patterns can be expressed within a fundamental domain. This article expresses progressive sequences of uniform tilings within symmetry families. Mutations of orbifolds Orbifolds with the same structure can be mutated between different symmetry classes, including across curvature domains from spherical to Euclidean to hyperbolic. This table shows mutation classes. This table is not complete for possible hyperbolic orbifolds. *n22 symmetry Regular tilings Prism tilings Antiprism tilings *n32 symmetry Regular tilings Truncated tilings Quasiregular tilings Expanded tilings Omnitruncated tilings Snub tilings *n42 symmetry Regular tilings Quasiregular tilings Truncated tilings Expanded tilings Omnitruncated tilings Snub tilings *n52 symmetry Regular tilings *n62 symmetry Regular tilings *n82 symmetry Regular tilings References Sources John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, From hyperbolic 2-space to Euclidean 3-space: Tilings and patterns via topology Stephen Hyde Polyhedra Euclidean tilings Hyperbolic tilings
Uniform tiling symmetry mutations
[ "Physics", "Mathematics" ]
317
[ "Tessellation", "Euclidean plane geometry", "Hyperbolic tilings", "Euclidean tilings", "Planes (geometry)", "Symmetry" ]
46,922,946
https://en.wikipedia.org/wiki/ESITH
The School of Textile and Clothing Industries (ESITH) is a Moroccan engineering school, established in 1996, that focuses on textiles and clothing. It was created in collaboration with ENSAIT and ENSISA, as a result of a public–private partnership designed to grow a key sector in the Moroccan economy. The partnership was successful and has been used as a model for other schools. ESITH is the only engineering school in Morocco that provides a comprehensive program in textile engineering, with internships for students at the Canadian Group CTT. ESITH offers three programs in industrial engineering: product management, supply chain and logistics, and textile and clothing. References Universities and colleges in Morocco 1996 establishments in Morocco Educational institutions established in 1996 Textile engineering 20th-century architecture in Morocco Education in Casablanca Universities and colleges established in 1996
ESITH
[ "Physics", "Engineering" ]
160
[ "Applied and interdisciplinary physics", "Textile engineering" ]
46,923,835
https://en.wikipedia.org/wiki/Oil%20discharge%20monitoring%20equipment
Oil discharge monitoring equipment (ODME) measures the oil content of ballast and slop water to verify conformance with discharge regulations. The apparatus is equipped with a GPS receiver, data recording functionality, an oil content meter and a flow meter. By interpreting these data, a computing unit either allows the discharge to continue or stops it by means of a valve on deck. Operating principle A sample point on the discharge line allows the analyzer to determine the oil content of the ballast and slop water in ppm. The analyzer is self-maintaining through periodic cleaning with fresh water, and therefore requires a minimum of active maintenance from the crew. The results of the analyzer are sent to a computer, which determines whether the oil content values permit overboard discharge or not. The valves that direct the ballast water either overboard or to the slop tank are controlled by the integrated computer, and a GPS signal further automates the process by accounting for special areas and completing the required input for the Oil Record Book. All oil tankers with a gross tonnage larger than 150 must have efficient oil discharge monitoring equipment on board. The oily discharge is sent out to sea through a pump. The oily mixture has to pass through a series of sensors to determine whether it is acceptable to be sent to the discharge pipe. Based on regulations, the following values must be recorded by the system: Date and time of the discharge Location of the ship Oil content of the discharge in ppm Total quantity discharged Discharge rate All records of oil discharge monitoring equipment must be stored on board ships for no less than 3 years. Oil discharge monitoring systems today consist of a computing unit that is installed in the cargo control room. The computing unit controls and receives data from the other ODME components. ODME systems also have an analyzing unit that contains the oil content meter, a fresh water valve for cleaning purposes, and a pressure transmitter that monitors the sample flow through the measuring cell. See also Marpol 73/78 Marpol Annex I Oily water separators Oily water separator (marine) Oil Content Meter Magic Pipe IMO Port Reception Facilities References Waste treatment technology Ocean pollution
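As a rough illustration of the decision logic described above, the following sketch mirrors the recorded quantities listed in the text (date and time, position, oil content in ppm, discharge rate, total quantity) and routes the flow either overboard or to the slop tank. The ppm threshold, the class fields and the function names are assumptions for illustration only; they are not taken from MARPOL or from any particular ODME vendor.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

PPM_LIMIT = 15  # illustrative threshold only; the actual limits come from MARPOL

@dataclass
class DischargeRecord:
    timestamp: datetime          # date and time of the discharge
    latitude: float              # ship position from GPS
    longitude: float
    oil_content_ppm: float       # from the oil content meter
    flow_rate_m3_per_h: float    # from the flow meter
    total_discharged_m3: float   # running total

def route_discharge(record: DischargeRecord, in_special_area: bool) -> str:
    """Decide whether the mixture goes overboard or back to the slop tank."""
    if in_special_area or record.oil_content_ppm > PPM_LIMIT:
        return "slop_tank"       # close the overboard valve
    return "overboard"           # discharge may continue

rec = DischargeRecord(datetime.now(timezone.utc), 35.0, 18.2, 9.5, 120.0, 450.0)
print(route_discharge(rec, in_special_area=False))
```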
Oil discharge monitoring equipment
[ "Chemistry", "Engineering", "Environmental_science" ]
449
[ "Ocean pollution", "Water treatment", "Water pollution", "Environmental engineering", "Waste treatment technology" ]
46,926,261
https://en.wikipedia.org/wiki/Sub-Doppler%20cooling
Sub-Doppler cooling is a class of laser cooling techniques that reduce the temperature of atoms and molecules below the Doppler cooling limit. In experimental implementations, Doppler cooling is limited by the natural linewidth of the atomic transition used for cooling. Regardless of the transition used, however, Doppler cooling processes have an intrinsic cooling limit that is characterized by the momentum recoil from the emission of a photon by the particle. This is called the recoil temperature and is usually far below the linewidth-based limit mentioned above. With laser cooling methods that go beyond the two-level approximation of the atom, temperatures below the Doppler limit can be achieved. Optical pumping between the sublevels that make up an atomic state introduces a new mechanism for achieving ultra-low temperatures. The essential feature of sub-Doppler cooling is the non-adiabaticity of the moving atoms with respect to the light field. For a spatially dependent light field, the orientation of moving atoms is adjusted by optical pumping to fit the local conditions of the light field. Yet the moving atoms do not instantly adjust to the light field as they move; their orientation always lags behind the orientation that would exist for stationary atoms, which determines the velocity-dependent differential absorption and hence the cooling. With this cooling process, lower temperatures can be obtained. Various methods have been used independently, or combined in an experimental sequence, to achieve sub-Doppler cooling. One method to produce spatially dependent optical pumping is polarization gradient cooling, where the superposition of two counter-propagating laser beams of orthogonal polarizations leads to a light field whose polarization varies on the wavelength scale. A specific mechanism within polarization gradient cooling is Sisyphus cooling, where atoms climb "potential hills" created by the interaction of their internal energy states with spatially varying light fields. The light field in a three-dimensional optical molasses also has a polarization gradient. Other methods of sub-Doppler cooling include evaporative cooling, free-space Raman cooling, Raman side-band cooling, resolved sideband cooling, electromagnetically induced transparency (EIT) cooling, and the use of a dark magneto-optical trap. These techniques can be used depending on the minimum temperature needed and the specifications of the individual setup. For example, an optical molasses time-of-flight technique was used to cool sodium (Doppler limit ) to . Motivations for sub-Doppler cooling include cooling to the motional ground state, a requirement for maintaining fidelity during many quantum computation operations. Dark magneto-optical trap A magneto-optical trap (MOT) is commonly used for cooling and trapping a substance by Doppler cooling. In the process of Doppler cooling, red-detuned light is preferentially absorbed by atoms moving toward the light source and re-emitted in a random direction. The electrons of the atoms can decay to an alternative ground state if the atoms have more than one hyperfine ground level. If all the atoms accumulate in a ground state other than the one addressed by the Doppler-cooling light, the system cannot cool the atoms further. To solve this problem, a re-pumping light is applied to the system to return the atoms to the cooling cycle and restart the Doppler cooling process. 
This re-pumping, however, induces additional fluorescence from the atoms, which can be reabsorbed by other atoms and act as a repulsive force. Because of this reabsorption, the effective temperature limit rises above the usual Doppler limit. When the re-pumping beam contains a dark spot or dark lines, the atoms in the middle of the atomic cloud are not excited by the re-pumping light, which reduces the repulsive force described above. This can help to cool the atoms to a lower temperature than the typical Doppler cooling limit. This is called a dark magneto-optical trap (DMOT). Limits The Doppler cooling limit is set by balancing the cooling against the heating from random momentum kicks. Applying the results of the Fokker–Planck equation to the sub-Doppler processes would suggest an arbitrarily low final temperature as the damping coefficient becomes arbitrarily large. A few more considerations are therefore needed. For instance, when a photon is scattered, the momentum change of the atom is assumed to be small relative to its overall momentum, but when the atom slows down to the region of , the momentum change becomes significant. Thus at low velocities, spontaneous emission leaves the atom with a residual momentum around , which sets a minimum velocity scale. The velocity distribution around cannot be well described by the Fokker–Planck equation, and this sets an intuitive lower limit on the temperature. Furthermore, polarization gradient cooling depends on the ability to localize atoms to a scale of , where is the wavelength of the light. Due to the uncertainty principle, this localization also imposes a minimum momentum spread , which likewise limits how much the atoms can be cooled. These predictions have been tested in analytical and numerical calculations with a one-dimensional polarization gradient molasses. It was shown that in the limit of large detuning, the velocity distribution depends only on a dimensionless parameter, the light shift of the ground state divided by the recoil energy. The minimum kinetic energy was found to be on the order of 40 times the recoil energy. References Atomic, molecular, and optical physics
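The two temperature scales discussed above are usually written in the following standard forms, supplied here from textbook conventions rather than from this article: the Doppler limit set by the natural linewidth Γ of the cooling transition, and the single-photon recoil energy for a photon of wavevector k scattered by an atom of mass m, which sets the scale of the recoil temperature bounding polarization-gradient cooling.

```latex
% Doppler cooling limit (natural linewidth \Gamma of the cooling transition):
\[
k_{B} T_{\mathrm{Doppler}} = \frac{\hbar \Gamma}{2}
\]
% Single-photon recoil energy; the recoil temperature is of this order
% (the exact prefactor depends on the convention used):
\[
E_{\mathrm{rec}} = \frac{\hbar^{2} k^{2}}{2m},
\qquad
k_{B} T_{\mathrm{rec}} \sim E_{\mathrm{rec}}
\]
```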
Sub-Doppler cooling
[ "Physics", "Chemistry" ]
1,120
[ "Atomic", " molecular", " and optical physics" ]
46,926,884
https://en.wikipedia.org/wiki/Quantum%20feedback
Quantum feedback or quantum feedback control is a class of methods used to prepare and manipulate a quantum system in which that system's quantum state or trajectory is used to evolve the system towards some desired outcome. Just as in the classical case, feedback occurs when outputs from the system are used as inputs that control the dynamics (e.g. by controlling the Hamiltonian of the system). The feedback signal is typically filtered or processed in a classical way, which is often described as measurement-based feedback. However, quantum feedback also allows the possibility of maintaining the quantum coherence of the output as the signal is processed (via unitary evolution), which has no classical analogue. Measurement based feedback In closed-loop quantum control, the feedback may be entirely dynamical; that is, the plant and controller form a single dynamical system, with the two influencing each other through direct interaction. This is named coherent control. Alternatively, the feedback may be entirely information-theoretic, insofar as the controller gains information about the plant through measurement of the plant. This is measurement-based control. Coherent feedback Unlike measurement-based feedback, where the quantum state is measured (causing it to collapse) and control is conditioned on the classical measurement outcome, coherent feedback maintains the full quantum state and implements deterministic, non-destructive operations on the state, using fully quantum devices. One example is a mirror, reflecting photons (the quantum states) back to the emitter. Notes References H. M. Wiseman and G. J. Milburn, Quantum Measurement and Control (Cambridge University Press, 2009). Quantum mechanics
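A toy numerical illustration of measurement-based feedback may help make the distinction concrete. The sketch below is not drawn from the Wiseman–Milburn reference; the function names and the choice of target state are invented for illustration. A single qubit is projectively measured in the computational basis and, conditioned on the classical outcome, a corrective bit-flip is applied to steer the system toward the state |0⟩.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X correction gate

def measure_z(state):
    """Projective measurement in the computational basis: (outcome, collapsed state)."""
    p0 = abs(state[0]) ** 2
    if rng.random() < p0:
        return 0, np.array([1, 0], dtype=complex)
    return 1, np.array([0, 1], dtype=complex)

def feedback_step(state):
    """Measure, then apply a correction conditioned on the classical record."""
    outcome, post = measure_z(state)
    if outcome == 1:          # classical processing of the measurement result
        post = X @ post       # conditional control acting back on the quantum system
    return post

psi = np.array([np.sqrt(0.3), np.sqrt(0.7)], dtype=complex)  # arbitrary initial qubit
print(feedback_step(psi))    # always [1, 0]: the target state after feedback
```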
Quantum feedback
[ "Physics" ]
327
[ "Theoretical physics", "Quantum mechanics" ]
46,927,724
https://en.wikipedia.org/wiki/P-adic%20cohomology
In mathematics, p-adic cohomology means a cohomology theory for varieties of characteristic p whose values are modules over a ring of p-adic integers. Examples (in roughly historical order) include: Serre's Witt vector cohomology Monsky–Washnitzer cohomology Infinitesimal cohomology Crystalline cohomology Rigid cohomology See also p-adic Hodge theory Étale cohomology, taking values over a ring of l-adic integers for l≠p Arithmetic geometry Cohomology theories
P-adic cohomology
[ "Mathematics" ]
114
[ "Arithmetic geometry", "Number theory" ]
50,991,882
https://en.wikipedia.org/wiki/Finite%20point%20method
The finite point method (FPM) is a meshfree method for solving partial differential equations (PDEs) on scattered distributions of points. The FPM was proposed in the mid-nineties in (Oñate, Idelsohn, Zienkiewicz & Taylor, 1996a), (Oñate, Idelsohn, Zienkiewicz, Taylor & Sacco, 1996b) and (Oñate & Idelsohn, 1998a) with the purpose of facilitating the solution of problems involving complex geometries, free surfaces, moving boundaries and adaptive refinement. Since then, the FPM has evolved considerably, showing satisfactory accuracy and capabilities to deal with different fluid and solid mechanics problems. History Similar to other meshfree methods for PDEs, the finite point method (FPM) has its origins in techniques developed for scattered data fitting and interpolation, basically in the line of weighted least-squares methods (WLSQ). The latter can be regarded as particular forms of the moving least-squares method (MLS) proposed by Lancaster and Salkauskas. WLSQ methods have been widely used in meshfree techniques because they retain most of the advantages of MLS while being more efficient and simpler to implement. With these goals in mind, an outstanding investigation which led to the development of the FPM began in (Oñate, Idelsohn & Zienkiewicz, 1995a) and (Taylor, Zienkiewicz, Oñate & Idelsohn, 1995). The technique proposed was characterized by WLSQ approximations on local clouds of points and an equation discretization procedure based on point collocation (in the line of Batina’s works, 1989, 1992). The first applications of the FPM focused on adaptive compressible flow problems (Fischer, Onate & Idelsohn, 1995; Oñate, Idelsohn & Zienkiewicz, 1995a; Oñate, Idelsohn, Zienkiewicz & Fisher, 1995b). The effects of the local clouds and weighting functions on the approximation were also analyzed using linear and quadratic polynomial bases (Fischer, 1996). Additional studies in the context of convection-diffusion and incompressible flow problems gave the FPM a more solid base; cf. (Oñate, Idelsohn, Zienkiewicz & Taylor, 1996a) and (Oñate, Idelsohn, Zienkiewicz, Taylor & Sacco, 1996b). These works and (Oñate & Idelsohn, 1998) defined the basic FPM technique in use today. Numerical approximation The approximation in the FPM can be summarized as follows. For each point in the analysis domain (star point), an approximated solution is locally constructed by using a subset of surrounding supporting points , which belong to the problem domain (local cloud of points ). The approximation is computed as a linear combination of the cloud unknown nodal values (or parameters) and certain metric coefficients. These are obtained by solving a WLSQ problem at the cloud level, in which the distances between the nodal parameters and the approximated solution are minimized in an LSQ sense. Once the approximation metric coefficients are known, the governing PDEs of the problem are sampled at each star point by using a collocation method. The continuous variables (and their derivatives) are replaced in the sampled equations by the discrete approximated forms, and the solution of the resulting system allows the unknown nodal values to be calculated. Hence, the approximated solution satisfying the governing equations of the problem can be obtained. It is important to note that the highly local character of the FPM makes the method suitable for implementing efficient parallel solution schemes. The construction of the typical FPM approximation is described in (Oñate & Idelsohn, 1998). 
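The weighted least-squares (WLSQ) construction described above can be sketched in a few lines. The code below is a minimal one-dimensional illustration assuming a linear polynomial basis and a Gaussian weighting function; it is not the production FPM algorithm of Oñate and co-workers, and all names and example values are invented for illustration.

```python
import numpy as np

def wlsq_shape_functions(x_star, cloud_x, h=1.0):
    """Metric coefficients N such that u(x_star) ≈ N · u_cloud (1D, linear basis)."""
    # Polynomial basis p(x) = [1, x - x_star] evaluated at the cloud points.
    P = np.column_stack([np.ones_like(cloud_x), cloud_x - x_star])
    # Gaussian weights centred on the star point of the local cloud.
    W = np.diag(np.exp(-((cloud_x - x_star) / h) ** 2))
    # Normal equations of the weighted least-squares fit: (P^T W P) a = P^T W u.
    A = P.T @ W @ P
    # Shape functions: u_h(x_star) = p(x_star)^T A^{-1} P^T W u, with p(x_star) = [1, 0].
    return np.linalg.solve(A, P.T @ W)[0, :]

cloud = np.array([0.0, 0.4, 0.9, 1.3, 2.0])   # local cloud of supporting points
u_cloud = cloud ** 2                          # nodal values of a test function u = x^2
N = wlsq_shape_functions(1.0, cloud)
print(N @ u_cloud)                            # value of the local WLSQ fit at the star point
```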
An analysis of the approximation parameters can be found in (Ortega, Oñate & Idelsohn, 2007) and a more comprehensive study is conducted in (Ortega, 2014). Other approaches have also been proposed, see for instance (Boroomand, Tabatabaei and Oñate, 2005). An extension of the FPM approximation is presented in (Boroomand, Najjar & Oñate, 2009). Applications Fluid mechanics The early lines of research and applications of the FPM to fluid flow problems are summarized in (Fischer, 1996). There, convective-diffusive problems were studied using LSQ and WLSQ polynomial approximations. The study focused on the effects of the cloud of points and weighting functions on the accuracy of the local approximation, which helped to understand the basic behavior of the FPM. The results showed that the 1D FPM approximation leads to discrete derivative forms similar to those obtained with central difference approximations, which are second-order accurate. However, the accuracy degrades to first-order for non-symmetric clouds, depending on the weighting function. Preliminary criteria about the selection of points conforming the local clouds were also defined with the aim to improve the ill-conditioning of the minimization problem. The flow solver employed in that work was based on a two-step Taylor-Galerkin scheme with explicit artificial dissipation. The numerical examples involved inviscid subsonic, transonic and supersonic two-dimensional problems, but a viscous low-Reynolds number test case was also provided. In general, the results obtained in this work were satisfactory and demonstrated that the introduction of weighting in the LSQ minimization leads to superior results (linear basis were used). In a similar line of research, a residual stabilization technique derived in terms of flux balancing in a finite domain, known as Finite Increment Calculus (FIC) (Oñate, 1996, 1998), was introduced. The results were comparable to those obtained with explicit artificial dissipation, but with the advantage that the stabilization in FIC is introduced in a consistent manner, see (Oñate, Idelsohn, Zienkiewicz, Taylor & Sacco, 1996b) and (Oñate & Idelsohn, 1998a). Among these developments, the issue of point generation was firstly addressed in (Löhner & Oñate, 1998). Based on an advancing front technique, the authors showed that point discretizations suitable for meshless computations can be generated more efficiently by avoiding the usual quality checks needed in conventional mesh generation. Highly competitive generation times were achieved in comparison with traditional meshers, showing for the first time that meshless methods are a feasible alternative to alleviate discretization problems. Incompressible 2D flows were first studied in (Oñate, Sacco & Idelsohn, 2000) using a projection method stabilized through the FIC technique. A detailed analysis of this approach was carried out in (Sacco, 2002). Outstanding achievements from that work have given the FPM a more solid base; among them, the definition of local and normalized approximation bases, a procedure for constructing local clouds of points based on local Delaunay triangulation, and a criterion for evaluating the quality of the resultant approximation. The numerical applications presented focused mainly on two-dimensional (viscous and inviscid) incompressible flows, but a three-dimensional application example was also provided. 
A preliminary application of the FPM in a Lagrangian framework, presented in (Idelsohn, Storti & Oñate, 2001), is also worth of mention. Despite the interesting results obtained for incompressible free surface flows, this line of research was not continued under the FPM and later formulations were exclusively based on Eulerian flow descriptions. The first application of the FPM to the solution of 3D compressible flows was presented in a pioneer work by (Löhner, Sacco, Oñate & Idelsohn, 2002). There, a reliable and general procedure for constructing local clouds of points (based on a Delaunay technique) and a well-suited scheme for solving the flow equations were developed. In the solution scheme proposed, the discrete flux derivatives are written along edges connecting the cloud's points as a central difference-like expression plus an upwind biased term that provides convective stabilization. An approximate Riemann solver of Roe and van Leer flux vector splitting were used for this purpose. The approach proposed is more accurate (also more expensive) than artificial dissipation methods and, additionally, does not require the definition of geometrical measures in the local cloud and problem dependent parameters. The time integration of the equations was performed through a multi-stage explicit scheme in the line of Runge-Kutta methods. Some years later, further research was carried out in relation to 3D FPM approximations in (Ortega, Oñate & Idelsohn, 2007). This work focused on constructing robust approximations regardless of the characteristics of the local support. To this end, local automatic adjusting of the weighting function and other approximation parameters were proposed. Further 3D applications of the method involved compressible aerodynamics flows with adaptive refinement (Ortega, Oñate & Idelsohn, 2009) and moving/deforming boundary problems (Ortega, Oñate & Idelsohn, 2013). In these works, the FPM showed satisfactory robustness and accuracy, and capabilities to address practical computations. Among other achievements, it was demonstrated that a complete regeneration of the model discretization could be an affordable solution strategy, even in large simulation problems. This result presents new possibilities for the meshless analysis of moving/deforming domain problems. The FPM was also applied with success to adaptive shallow water problems in (Ortega, Oñate, Idelsohn & Buachart, 2011) and (Buachart, Kanok-Nukulchai, Ortega & Oñate, 2014). A proposal to exploit meshless advantages in high-Reynolds viscous flow problems is presented in (Ortega, Oñate, Idelsohn & Flores, 2014a). In the same field of applications, a major study on the accuracy, computational cost and parallel performance of the FPM was carried out in (Ortega, Oñate, Idelsohn & Flores, 2014b). There, the FPM was compared with an equivalent Finite Element-based solver, which provided a standard for assessing both, the characteristics of the meshless solver and its suitability to address practical applications. Some simplifications of the FPM technique were proposed in this work to improve efficiency and reduce the performance gap with FEM. Then, grid convergence studies using a wing-body configuration were conducted. The results showed comparable accuracy and performance, revealing the FPM competitive with respect to its FEM counterpart. This is important because meshless techniques are often considered impractical due to the poor efficiency of the initial implementations. 
The FPM has also been applied in aeroacoustics in (Bajko, Cermak & Jicha, 2014). The solution scheme proposed is based on a linearized Riemann solver and successfully exploits the advantages of high-order FPM approximations. The results obtained are indicative of the potential of the FPM to address sound propagation problems. Solid mechanics Current lines of investigation Current efforts are mainly oriented to exploit the capabilities of the FPM to work in parallel environments for solving large-scale practical problems, particularly in areas where meshless procedures can make useful contributions, for example problems involving complex geometry, moving/deforming domain, adaptive refinement and multiscale phenomena. References Fluid mechanics Solid mechanics
Finite point method
[ "Physics", "Engineering" ]
2,376
[ "Solid mechanics", "Civil engineering", "Mechanics", "Fluid mechanics" ]
39,555,438
https://en.wikipedia.org/wiki/Hasse%20derivative
In mathematics, the Hasse derivative is a generalisation of the derivative which allows the formulation of Taylor's theorem in coordinate rings of algebraic varieties. Definition Let k[X] be a polynomial ring over a field k. The r-th Hasse derivative of Xn is if n ≥ r and zero otherwise. In characteristic zero we have Properties The Hasse derivative is a generalized derivation on k[X] and extends to a generalized derivation on the function field k(X), satisfying an analogue of the product rule and an analogue of the chain rule. Note that the are not themselves derivations in general, but are closely related. A form of Taylor's theorem holds for a function f defined in terms of a local parameter t on an algebraic variety: Notes References Differential algebra
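The displayed formulas in the definition are missing from the text above. The standard expressions, supplied here from the usual definition as an assumption about what the original displays contained, are:

```latex
% r-th Hasse derivative of X^n (and zero when n < r):
\[
D^{(r)} X^{n} = \binom{n}{r} X^{\,n-r}
\]
% In characteristic zero it is a divided power of the ordinary derivative:
\[
D^{(r)} = \frac{1}{r!} \left( \frac{d}{dX} \right)^{r}
\]
```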
Hasse derivative
[ "Mathematics" ]
160
[ "Differential algebra", "Fields of abstract algebra", "Algebra stubs", "Algebra" ]
39,555,966
https://en.wikipedia.org/wiki/Barban%E2%80%93Davenport%E2%80%93Halberstam%20theorem
In mathematics, the Barban–Davenport–Halberstam theorem is a statement about the distribution of prime numbers in an arithmetic progression. It is known that in the long run primes are distributed equally across possible progressions with the same difference. Theorems of the Barban–Davenport–Halberstam type give estimates for the error term, determining how close to uniform the distributions are. Statement Let a be coprime to q and be a weighted count of primes in the arithmetic progression a mod q. We have where φ is Euler's totient function and the error term E is small compared to x. We take a sum of squares of error terms Then we have for and every positive A, where O is Landau's Big O notation. This form of the theorem is due to Gallagher. The result of Barban is valid only for for some B depending on A, and the result of Davenport–Halberstam has B = A + 5. See also Bombieri–Vinogradov theorem Elliott–Halberstam conjecture References Theorems in analytic number theory
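The displayed formulas are missing from the text above. A standard formulation of the statement, reconstructed from the usual references and therefore to be read as an assumption rather than a quotation of the original article, is:

```latex
% Weighted prime count in the progression a mod q and its error term:
\[
\vartheta(x;q,a) = \sum_{\substack{p \le x \\ p \equiv a \ (\mathrm{mod}\ q)}} \log p,
\qquad
E(x;q,a) = \vartheta(x;q,a) - \frac{x}{\varphi(q)} .
\]
% Sum of squares of the error terms over moduli q \le Q and residues a coprime to q:
\[
\sum_{q \le Q} \ \sum_{\substack{1 \le a \le q \\ (a,q)=1}} E(x;q,a)^{2}
= O\!\left( Q\, x \log x \right)
\qquad \text{for } x (\log x)^{-A} \le Q \le x .
\]
```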
Barban–Davenport–Halberstam theorem
[ "Mathematics" ]
222
[ "Theorems in mathematical analysis", "Theorems in number theory", "Theorems in analytic number theory" ]
57,205,820
https://en.wikipedia.org/wiki/Peters%20four-step%20chemistry
Peters four-step chemistry is a systematically reduced mechanism for methane combustion, named after Norbert Peters, who derived it in 1985. The mechanism reads as The mechanism predicted four different regimes where each reaction takes place. The third reaction, known as radical consumption layer, where most of the heat is released, and the first reaction, also known as fuel consumption layer, occur in a narrow region at the flame. The fourth reaction is the hydrogen oxidation layer, whose thickness is much larger than the former two layers. Finally, the carbon monoxide oxidation layer is the largest of them all, corresponding to the second reaction, and oxidizes very slowly. Peters-Williams three-step chemistry A three-step mechanism was derived in 1987 by Peters and Forman A. Williams by assuming steady-state approximation for the hydrogen radical. Then, See also Zeldovich–Liñán model References Chemical kinetics Combustion Reaction mechanisms Chemical reactions Methane
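The reactions of the mechanism are not reproduced above. In the form usually quoted in secondary sources, given here as an assumption rather than a quotation of the original article, the four global steps and the three-step Peters–Williams reduction (obtained by putting the H radical in steady state) are:

```latex
% Peters four-step mechanism for methane (as usually quoted):
\begin{align*}
\mathrm{I:}   &\quad \mathrm{CH_4 + 2H + H_2O \;\rightarrow\; CO + 4H_2}      && \text{(fuel consumption layer)}\\
\mathrm{II:}  &\quad \mathrm{CO + H_2O \;\rightleftharpoons\; CO_2 + H_2}     && \text{(CO oxidation layer)}\\
\mathrm{III:} &\quad \mathrm{H + H + M \;\rightarrow\; H_2 + M}               && \text{(radical consumption layer)}\\
\mathrm{IV:}  &\quad \mathrm{O_2 + 3H_2 \;\rightleftharpoons\; 2H + 2H_2O}    && \text{(hydrogen oxidation layer)}
\end{align*}
% Peters--Williams three-step reduction (H radical in steady state):
\begin{align*}
&\mathrm{CH_4 + O_2 \;\rightarrow\; CO + H_2 + H_2O}\\
&\mathrm{CO + H_2O \;\rightleftharpoons\; CO_2 + H_2}\\
&\mathrm{O_2 + 2H_2 \;\rightarrow\; 2H_2O}
\end{align*}
```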
Peters four-step chemistry
[ "Chemistry" ]
187
[ "Reaction mechanisms", "Chemical reaction engineering", "Methane", "Combustion", "Physical organic chemistry", "nan", "Greenhouse gases", "Chemical kinetics" ]
57,206,613
https://en.wikipedia.org/wiki/Economic%20optimization%20of%20electric%20conductors
Economic optimization of electric conductors (also known as economic cable sizing, ECS) is the process of selecting cable based on both safety and economic analysis. The objective of ECS is to minimise the lifetime costs of cables and to reduce emissions due to losses in cables. Cable selection criteria Four criteria are typically used for cable selection: Current-carrying capacity; Voltage drop; Short-circuit temperature rise; and Economic optimization. Until recently, energy costs were sufficiently low that ongoing cable losses were not significant. However, two recent developments have changed this situation: The increased cost of energy; and The increased allowable operating temperature of cables. Principles of economic optimization There are two primary lifetime costs associated with power cables: Upfront costs: the larger the cable, the more costly it is to purchase and install (the cost of installation is usually not a significant factor). Ongoing costs: electrical energy losses in the cable while it is carrying current. The larger the cable, the smaller the losses – hence the lower the ongoing costs. The total lifetime cost is the sum of these costs, all of which are related to cable size. The objective of economic cable sizing is to find the lowest overall total cost while maintaining safety standards. There are several approaches to this, all based on these fundamental principles. The following standards cover the economic sizing of cables: IEC 60287-3-2, Electric cables – Calculation of the current rating – Part 3-2. JCS 4521-1, Calculation of the Environmental Current Rating for the Electric Cables, Part 1. AS/NZS 3008.1.1:2017, Electrical installations – Selection of cables. Australia In New South Wales, Australia it is permissible to use the calculated energy savings due to implementing ECS in BASIX (Building Sustainability Index) applications. Each ECS-based application is assessed on a case-by-case basis. References Electrical conductors
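The trade-off between upfront cost and ongoing loss cost described above can be illustrated with a small calculation. The sketch below is a generic illustration of the principle only, not the procedure defined in IEC 60287-3-2 or AS/NZS 3008; all prices, resistances and the list of candidate sizes are made-up example values.

```python
# Illustrative economic cable sizing: pick the size with the lowest lifetime cost.
# All numbers are invented example data, not values from any standard.

candidates = {
    # size (mm^2): (purchase + install cost per metre, resistance in ohm/km)
    16: (4.0, 1.15),
    25: (6.0, 0.727),
    35: (8.0, 0.524),
    50: (11.0, 0.387),
    70: (15.0, 0.268),
}

def lifetime_cost(size, length_m, current_a, hours_per_year, years,
                  energy_cost_per_kwh):
    """Upfront cost plus the cost of I^2 R losses over the cable's life (no discounting)."""
    install_cost, r_ohm_per_km = candidates[size]
    upfront = install_cost * length_m
    loss_kw = 3 * current_a ** 2 * r_ohm_per_km * (length_m / 1000) / 1000  # 3-phase I^2 R
    ongoing = loss_kw * hours_per_year * years * energy_cost_per_kwh
    return upfront + ongoing

costs = {s: lifetime_cost(s, length_m=100, current_a=60, hours_per_year=4000,
                          years=25, energy_cost_per_kwh=0.25)
         for s in candidates}
best = min(costs, key=costs.get)
print(costs)
print(f"Economic choice: {best} mm^2")
```

Run with these example figures, the larger conductor sizes win comfortably, which is the point of the exercise: once energy prices are high enough, the loss cost dominates the purchase cost.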
Economic optimization of electric conductors
[ "Physics" ]
391
[ "Materials", "Electrical conductors", "Matter" ]
45,609,427
https://en.wikipedia.org/wiki/Nuprl
Nuprl is a proof development system, providing computer-mediated analysis and proofs of formal mathematical statements, and tools for software verification and optimization. Originally developed in the 1980s by Robert Lee Constable and others, the system is now maintained by the PRL Project at Cornell University. The currently supported version, Nuprl 5, is also known as FDL (Formal Digital Library). Nuprl functions as an automated theorem proving system and can also be used to provide proof assistance. Design Nuprl uses a type system based on Martin-Löf intuitionistic type theory to model mathematical statements in a digital library. Mathematical theories can be constructed and analyzed with a variety of editors, including a graphical user interface, a web-based editor, and an Emacs mode. A variety of evaluators and inference engines can operate on the statements in the library. Translators also allow statements to be manipulated with Java and OCaml programs. The overall system is controlled with a variant of ML. Nuprl 5's architecture is described as "distributed open architecture" and intended primarily to be used as a web service rather than as standalone software. Those interested in using the web service, or migrating theories from older versions of Nuprl, can contact the email address given on the Nuprl System web page. History Nuprl was first released in 1984, and was first described in detail in the book Implementing Mathematics with the Nuprl Proof Development System, published in 1986. Nuprl 2 was the first Unix version. Nuprl 3 provided machine proof for mathematical problems related to Girard's paradox and Higman's lemma. Nuprl 4, the first version developed for the World Wide Web, was used to verify cache coherency protocols and other computer systems. The current system architecture, implemented in Nuprl 5, was first proposed in a 2000 conference paper. A reference manual for Nuprl 5 was published in 2002. Nuprl has been the subject of many computer science publications. Successors Both the JonPRL and RedPRL systems are also based on computational type theory. RedPRL is explicitly "inspired by Nuprl". References External links PRL Project web page at Cornell University. The current maintainers of Nuprl have extensive documentation and publications on Nuprl. . A User-Level Introduction to the Nuprl Proof Development System (2001 paper at the University of Pennsylvania Scholarly Commons) RedPRL web page Automated theorem proving Proof assistants
Nuprl
[ "Mathematics" ]
510
[ "Mathematical logic", "Computational mathematics", "Automated theorem proving" ]
49,982,814
https://en.wikipedia.org/wiki/Short%20interspersed%20nuclear%20element
Short interspersed nuclear elements (SINEs) are non-autonomous, non-coding transposable elements (TEs) that are about 100 to 700 base pairs in length. They are a class of retrotransposons, DNA elements that amplify themselves throughout eukaryotic genomes, often through RNA intermediates. SINEs compose about 13% of the mammalian genome. The internal regions of SINEs originate from tRNA and remain highly conserved, suggesting positive pressure to preserve structure and function of SINEs. While SINEs are present in many species of vertebrates and invertebrates, SINEs are often lineage specific, making them useful markers of divergent evolution between species. Copy number variation and mutations in the SINE sequence make it possible to construct phylogenies based on differences in SINEs between species. SINEs are also implicated in certain types of genetic disease in humans and other eukaryotes. In essence, short interspersed nuclear elements are genetic parasites which have evolved very early in the history of eukaryotes to utilize protein machinery within the organism as well as to co-opt the machinery from similarly parasitic genomic elements. The simplicity of these elements make them remarkably successful at persisting and amplifying (through retrotransposition) within the genomes of eukaryotes. These "parasites" which have become ubiquitous in genomes can be very deleterious to organisms as discussed below. However, eukaryotes have been able to integrate short-interspersed nuclear elements into different signaling, metabolic and regulatory pathways and SINEs have become a great source of genetic variability. They seem to play a particularly important role in the regulation of gene expression and the creation of RNA genes. This regulation extends to chromatin re-organization and the regulation of genomic architecture. The different lineages, mutations, and activities among eukaryotes make short-interspersed nuclear elements a useful tool in phylogenetic analysis. Classification and structure SINEs are classified as non-LTR retrotransposons because they do not contain long terminal repeats (LTRs). There are three types of SINEs common to vertebrates and invertebrates: CORE-SINEs, V-SINEs, and AmnSINEs. SINEs have 50-500 base pair internal regions which contain a tRNA-derived segment with A and B boxes that serve as an internal promoter for RNA polymerase III. Internal structure SINEs are characterized by their different modules, which are essentially a sectioning of their sequence. SINEs can, but do not necessarily have to possess a head, a body, and a tail. The head is at the 5' end of short-interspersed nuclear elements and is evolutionarily derived from an RNA synthesized by RNA Polymerase III such as ribosomal RNAs and tRNAs; the 5' head is indicative of which endogenous element that SINE was derived from and was able to parasitically utilize its transcriptional machinery. For example, the 5' of the Alu SINE is derived from 7SL RNA, a sequence transcribed by RNA Polymerase III, giving rise to the RNA element of SRP, an abundant ribonucleoprotein. The body of SINEs possess an unknown origin but often share much homology with a corresponding LINE which thus allows SINEs to parasitically co-opt endonucleases coded by LINEs (which recognize certain sequence motifs). Lastly, the 3′ tail of SINEs is composed of short simple repeats of varying lengths; these simple repeats are sites where two (or more) short-interspersed nuclear elements can combine to form a dimeric SINE. 
Short-interspersed nuclear elements which only possess a head and tail are called simple SINEs whereas short-interspersed nuclear elements which also possess a body or are a combination of two or more SINEs are complex SINEs. Transcription Short-interspersed nuclear elements are transcribed by RNA polymerase III which is known to transcribe ribosomal RNA and tRNA, two types of RNA vital to ribosomal assembly and mRNA translation. SINEs, like tRNAs and many small-nuclear RNAs possess an internal promoter and thus are transcribed differently than most protein-coding genes. In other words, short-interspersed nuclear elements have their key promoter elements within the transcribed region itself. Though transcribed by RNA polymerase III, SINEs and other genes possessing internal promoters, recruit different transcriptional machinery and factors than genes possessing upstream promoters. Effects on gene expression Changes in chromosome structure influence gene expression primarily by affecting the accessibility of genes to transcriptional machinery. The chromosome has a very complex and hierarchical system of organizing the genome. This system of organization, which includes histones, methyl groups, acetyl groups, and a variety of proteins and RNAs allows different domains within a chromosome to be accessible to polymerases, transcription factors, and other associated proteins to different degrees. Furthermore, the shape and density of certain areas of a chromosome can affect the shape and density of neighboring (or even distant regions) on the chromosome through interaction facilitated by different proteins and elements. Non-coding RNAs such as short-interspersed nuclear elements, which have been known to associate with and contribute to chromatin structure, can thus play huge role in regulating gene expression. Short-interspersed-nuclear-elements similarly can be involved in gene regulation by modifying genomic architecture. In fact Usmanova et al. 2008 suggested that short-interspersed nuclear elements can serve as direct signals in chromatin rearrangement and structure. The paper examined the global distribution of SINEs in mouse and human chromosomes and determined that this distribution was very similar to genomic distributions of genes and CpG motifs. The distribution of SINEs to genes was significantly more similar than that of other non-coding genetic elements and even differed significantly from the distribution of long-interspersed nuclear elements. This suggested that the SINE distribution was not a mere accident caused by LINE-mediated retrotransposition but rather that SINEs possessed a role in gene-regulation. Furthermore, SINEs frequently contain motifs for YY1 polycomb proteins. YY1 is a zinc-finger protein that acts as a transcriptional repressor for a wide-variety of genes essential for development and signaling. Polycomb protein YY1 is believed to mediate the activity of histone deacetylases and histone acetyltransferases to facilitate chromatin re-organization; this is often to facilitate the formation of heterochromatin (gene-silencing state). Thus, the analysis suggests that short-interspersed nuclear elements can function as a ‘signal-booster' in the polycomb-dependent silencing of gene-sets through chromatin re-organization. 
In essence, it is the cumulative effect of many types of interactions that leads to the difference between euchromatin, which is not tightly packed and generally more accessible to transcriptional machinery, and heterochromatin, which is tightly packed and generally not accessible to transcriptional machinery; SINEs seem to play an evolutionary role in this process. In addition to directly affecting chromatin structure, there are a number of ways in which SINEs can potentially regulate gene expression. For example, long non-coding RNA can directly interact with transcriptional repressors and activators, attenuating or modifying their function. This type of regulation can occur in different ways: the RNA transcript can directly bind to the transcription factor as a co-regulator; also, the RNA can regulate and modify the ability of co-regulators to associate with the transcription factor. For example, Evf-2, a certain long non-coding RNA, has been known to function as a co-activator for certain homeobox transcription factors which are critical to nervous system development and organization. Furthermore, RNA transcripts can interfere with the functionality of the transcriptional complex by interacting or associating with RNA polymerases during the transcription or loading processes. Moreover, non-coding RNAs like SINEs can bind or interact directly with the DNA duplex coding the gene and thus prevent its transcription. Also, many non-coding RNAs are distributed near protein-coding genes, often in the reverse direction. This is especially true for short-interspersed nuclear elements as seen in Usmanova et al. These non-coding RNAs, which lie adjacent to or overlap gene-sets provide a mechanism by which transcription factors and machinery can be recruited to increase or repress the transcription of local genes. The particular example of SINEs potentially recruiting the YY1 polycomb transcriptional repressor is discussed above. Alternatively, it also provides a mechanism by which local gene expression can be curtailed and regulated because the transcriptional complexes can hinder or prevent nearby genes from being transcribed. There is research to suggest that this phenomenon is particularly seen in the gene-regulation of pluripotent cells. In conclusion, non-coding RNAs such as SINEs are capable of affecting gene expression on a multitude of different levels and in different ways. Short-interspersed nuclear elements are believed to be deeply integrated into a complex regulatory network capable of fine-tuning gene expression across the eukaryotic genome. Propagation and regulation The RNA transcribed from the short-interspersed nuclear element does not code for any protein product but is nonetheless reverse-transcribed and inserted back into an alternate region in the genome. For this reason, short interspersed nuclear elements are believed to have co-evolved with long interspersed nuclear element (LINEs), as LINEs do in fact encode protein products which enable them to be reverse- transcribed and integrated back into the genome. SINEs are believed to have co-opted the proteins coded by LINEs which are contained in 2 reading frames. Open reading frame 1 (ORF 1) encodes a protein which binds to RNA and acts as a chaperone to facilitate and maintain the LINE protein-RNA complex structure. Open reading frame 2 (ORF 2) codes a protein which possesses both endonuclease and reverse transcriptase activities. 
This enables the LINE mRNA to be reverse-transcribed into DNA and integrated into the genome based on the sequence-motifs recognized by the protein's endonuclease domain. LINE-1 (L1) is transcribed and retrotransposed most frequently in the germ-line and during early development; as a result SINEs move around the genome most during these periods. SINE transcription is down-regulated by transcription factors in somatic cells after early development, though stress can cause up-regulation of normally silent SINEs. SINEs can be transferred between individuals or species via horizontal transfer through a viral vector. SINEs are known to share sequence homology with LINES which gives a basis by which the LINE machinery can reverse transcribe and integrate SINE transcripts. Alternately, some SINEs are believed to use a much more complex system of integrating back into the genome; this system involves the use random double-stranded DNA breaks (rather than the endonuclease coded by related long-interspersed nuclear elements creating an insertion-site). These DNA breaks are utilized to prime reverse transcriptase, ultimately integrating the SINE transcript back into the genome. SINEs nonetheless depend on enzymes coded by other DNA elements and are thus known as non-autonomous retrotransposons as they depend on the machinery of LINEs, which are known as autonomous retrotransposons. The theory that short-interspersed nuclear elements have evolved to utilize the retrotransposon machinery of long-interspersed nuclear elements is supported by studies which examine the presence and distribution of LINEs and SINEs in taxa of different species. For example, LINEs and SINEs in rodents and primates show very strong homology at the insertion-site motif. Such evidence is a basis for the proposed mechanism in which integration of the SINE transcript can be co-opted with LINE-coded protein products. This is specifically demonstrated by a detailed analysis of over 20 rodent species profiled LINEs and SINEs, mainly L1s and B1s respectively; these are families of LINEs and SINEs found at high frequencies in rodents along with other mammals. The study sought to provide phylogenetic clarity within the context of LINE and SINE activity. The study arrived at a candidate taxa believed to be the first instance of L1 LINE extinction; it expectedly discovered that there was no evidence to suggest that B1 SINE activity occurred in species which did not have L1 LINE activity. Also, the study suggested that B1 short-interspersed nuclear element silencing in fact occurred before L1 long-interspersed nuclear element extinction; this is due to the fact that B1 SINEs are silenced in the genus most-closely related to the genus which does not contain active L1 LINEs (though the genus with B1 SINE silencing still contains active L1 LINEs). Another genus was also found which similarly contained active L1 long-interspersed nuclear elements but did not contain B1 short-interspersed nuclear elements; the opposite scenario, in which active B1 SINEs were present in a genus which did not possess active L1 LINEs was not found. This result was expected and strongly supports the theory that SINEs have evolved to co-opt the RNA-binding proteins, endonucleases, and reverse-transcriptases coded by LINEs. In taxa which do not actively transcribe and translate long-interspersed nuclear elements protein-products, SINEs do not have the theoretical foundation by which to retrotranspose within the genome. The results obtained in Rinehart et al. 
are thus very supportive of the current model of SINE retrotransposition. Effects of SINE transposition Insertion of a SINE upstream of a coding region may result in exon shuffling or changes to the regulatory region of the gene. Insertion of a SINE into the coding sequence of a gene can have deleterious effects and unregulated transposition can cause genetic disease. The transposition and recombination of SINEs and other active nuclear elements is thought to be one of the major contributions of genetic diversity between lineages during speciation. Common SINEs Short-interspersed nuclear elements are believed to have parasitic origins in eukaryotic genomes. These SINEs have mutated and replicated themselves a large number of times on an evolutionary time-scale and thus form many different lineages. Their early evolutionary origin has caused them to be ubiquitous in many eukaryotic lineages. Alu elements, short-interspersed nuclear element of about 300 nucleotides, are the most common SINE in humans, with >1,000,000 copies throughout the genome, which is over 10 percent of the total genome; this is not uncommon among other species. Alu element copy number differences can be used to distinguish between and construct phylogenies of primate species. Canines differ primarily in their abundance of SINEC_Cf repeats throughout the genome, rather than other gene or allele level mutations. These dog-specific SINEs may code for a splice acceptor site, altering the sequences that appear as exons or introns in each species. Apart from mammals, SINEs can reach high copy numbers in a range of species, including nonbony vertebrates (elephant shark) and some fish species (coelacanths). In plants, SINEs are often restricted to closely related species and have emerged, decayed, and vanished frequently during evolution. Nevertheless, some SINE families such as the Au-SINEs and the Angio-SINEs are unusually widespread across many often unrelated plant species. Diseases There are >50 human diseases associated with SINEs. When inserted near or within the exon, SINEs can cause improper splicing, become coding regions, or change the reading frame, often leading to disease phenotypes in humans and other animals. Insertion of Alu elements in the human genome is associated with breast cancer, colon cancer, leukemia, hemophilia, Dent's disease, cystic fibrosis, neurofibromatosis, and many others. microRNAs The role of short-interspersed nuclear elements in gene regulation within cells has been supported by multiple studies. One such study examined the correlation between a certain family of SINEs with microRNAs (in zebrafish). The specific family of SINEs being examined was the Anamnia V-SINEs; this family of short interspersed nuclear elements is often found in the untranslated region of the 3' end of many genes and is present in vertebrate genomes. The study involved a computational analysis in which the genomic distribution and activity of the Anamnia V-SINEs in Danio rerio zebrafish was examined; furthermore, these V-SINEs potential to generate novel microRNA loci was analyzed. It was found that genes which were predicted to possess V-SINEs were targeted by microRNAs with significantly higher hybridization E-values (relative to other areas in the genome). The genes that had high hybridization E-values were genes particularly involved in metabolic and signaling pathways. 
Almost all miRNAs found to hybridize strongly to putative V-SINE sequence motifs in genes have been identified (in mammals) as having regulatory roles. These results, which establish a correlation between short-interspersed nuclear elements and different regulatory microRNAs, strongly suggest that V-SINEs have a significant role in attenuating responses to different signals and stimuli related to metabolism, proliferation and differentiation. Many other studies must be undertaken to establish the validity and extent of short-interspersed nuclear element retrotransposons' role in regulatory gene-expression networks. In conclusion, though not much is known about the role and mechanism by which SINEs generate miRNA gene loci, it is generally understood that SINEs have played a significant evolutionary role in the creation of "RNA-genes"; this is also touched upon above in SINEs and pseudogenes. With such evidence suggesting that short-interspersed nuclear elements have been evolutionary sources for microRNA loci generation, it is important to further discuss the potential relationships between the two as well as the mechanism by which the microRNA regulates RNA degradation and, more broadly, gene expression. A microRNA is a non-coding RNA generally 22 nucleotides in length. This non-protein-coding oligonucleotide is usually transcribed from a longer nuclear DNA sequence by RNA polymerase II, which is also responsible for the transcription of most mRNAs and snRNAs in eukaryotes. However, some research suggests that some microRNAs that possess upstream short-interspersed nuclear elements are transcribed by RNA polymerase III, which is widely implicated in ribosomal RNA and tRNA, two transcripts vital to mRNA translation. This provides an alternate mechanism by which short-interspersed nuclear elements could be interacting with or mediating gene-regulatory networks involving microRNAs. The genomic regions producing miRNA can be independent RNA-genes, often being anti-sense to neighboring protein-coding genes, or can be found within the introns of protein-coding genes. The co-localization of microRNA and protein-coding genes provides a mechanistic foundation by which microRNA regulates gene expression. Furthermore, Scarpato et al. reveal (as discussed above) that genes predicted through sequence analysis to possess short-interspersed nuclear elements (SINEs) were targeted and hybridized by microRNAs significantly more often than other genes. This provides an evolutionary path by which the parasitic SINEs were co-opted and utilized to form RNA-genes (such as microRNAs) which have evolved to play a role in complex gene-regulatory networks. The microRNAs are transcribed as part of longer RNA strands of generally about 80 nucleotides which, through complementary base-pairing, are able to form hairpin loop structures. These structures are recognized and processed in the nucleus by the nuclear protein DiGeorge Syndrome Critical Region 8 (DGCR8), which recruits and associates with the Drosha protein. This complex is responsible for cleaving some of the hairpin structures from the pre-microRNA, which is transported to the cytoplasm. The pre-miRNA is processed by the protein DICER into a double-stranded RNA of about 22 nucleotides. Thereafter, one of the strands is incorporated into a multi-protein RNA-induced silencing complex (RISC). Among these proteins are proteins from the Argonaute family which are critical to the complex's ability to interact with and repress the translation of the target mRNA. 
Understanding the different ways in which microRNA regulates gene-expression, including mRNA-translation and degradation is key to understanding the potential evolutionary role of SINEs in gene-regulation and in the generation of microRNA loci. This, in addition to SINEs' direct role in regulatory networks (as discussed in SINEs as long non-coding RNAs) is crucial to beginning to understand the relationship between SINEs and certain diseases. Multiple studies have suggested that increased SINE activity is correlated with certain gene-expression profiles and post-transcription regulation of certain genes. In fact, Peterson et al. 2013 demonstrated that high SINE RNA expression correlates with post-transcriptional downregulation of BRCA1, a tumor suppressor implicated in multiple forms of cancer, namely breast cancer. Furthermore, studies have established a strong correlation between transcriptional mobilization of SINEs and certain cancers and conditions such as hypoxia; this can be due to the genomic instability caused by SINE activity as well as more direct-downstream effects. SINEs have also been implicated in countless other diseases. In essence, short-interspersed nuclear elements have become deeply integrated in countless regulatory, metabolic and signaling pathways and thus play an inevitable role in causing disease. Much is still to be known about these genomic parasites but it is clear they play a significant role within eukaryotic organisms. SINEs and pseudogenes The activity of SINEs however has genetic vestiges which do not seem to play a significant role, positive or negative, and manifest themselves in the genome as pseudogenes. SINEs however should not be mistaken as RNA pseudogenes. In general, pseudogenes are generated when processed mRNAs of protein-coding genes are reverse-transcribed and incorporated back into the genome (RNA pseudogenes are reverse transcribed RNA genes). Pseudogenes are generally functionless as they descend from processed RNAs independent of their evolutionary-context which includes introns and different regulatory elements which enable transcription and processing. These pseudogenes, though non-functional may in some cases still possess promoters, CpG islands, and other features which enable transcription; they thus can still be transcribed and may possess a role in the regulation of gene expression (like SINEs and other non-coding elements). Pseudogenes thus differ from SINEs in that they are derived from transcribed- functional RNA whereas SINEs are DNA elements which retrotranspose by co-opting RNA genes transcriptional machinery. However, there are studies which suggest that retro-transposable elements such as short-interspersed nuclear elements are not only capable of copying themselves in alternate regions in the genome but are also able to do so for random genes too. Thus SINEs can be playing a vital role in the generation of pseudogenes, which themselves are known to be involved in regulatory networks. This is perhaps another means by which SINEs have been able to influence and contribute to gene-regulation. References Molecular biology Repetitive DNA sequences Mobile genetic elements Non-coding DNA Eukaryote genes
Short interspersed nuclear element
[ "Chemistry", "Biology" ]
4,784
[ "Mobile genetic elements", "Molecular genetics", "Repetitive DNA sequences", "Molecular biology", "Biochemistry" ]
49,987,118
https://en.wikipedia.org/wiki/BP%20holin%20family
The β-proteobacterial holin (BP-Hol) family (TC# 1.E.50) is a small family that includes members derived from a number of Burkholderia phages as well as a Polaromonas species. As of April 3, 2016, this family belongs to the Holin superfamily II. Members of the Saier Bioinformatics Lab at the University of California, San Diego found that the BP-Hol family is most closely related to the T7 holin family (TC# 1.E.6). These proteins are 60 to 110 amino acyl residues (aas) in length and exhibit 1 or 2 transmembrane segments (TMSs). Some are annotated as type II holins and may be related to members of the T7 holin family (TC# 1.E.6), although BP-Hol proteins remain functionally uncharacterized. A representative list of the proteins belonging to the BP-Hol family can be found in the Transporter Classification Database. See also Holin Lysin Transporter Classification Database Further reading "BcepMigl_gp72 - Holin - Burkholderia phage BcepMigl - BcepMigl_gp72 gene & protein". www.uniprot.org. Retrieved 2016-03-29. Gründling, Angelika; Manson, Michael D.; Young, Ry (2001-07-31). "Holins kill without warning". Proceedings of the National Academy of Sciences 98 (16): 9348–9352. Preston, Gail M.; Studholme, David J.; Caldelari, Isabelle (2005-04-01). "Profiling the secretomes of plant pathogenic Proteobacteria". FEMS Microbiology Reviews 29 (2): 331–360. Reddy, Bhaskara L.; Saier Jr., Milton H. (2013-11-01). "Topological and phylogenetic analyses of bacterial holin families and superfamilies". Biochimica et Biophysica Acta (BBA) - Biomembranes 1828 (11): 2654–2671. Saier, Milton H.; Reddy, Bhaskara L. (2015-01-01). "Holins in Bacteria, Eukaryotes, and Archaea: Multifunctional Xenologues with Potential Biotechnological and Biomedical Applications". Journal of Bacteriology 197 (1): 7–17. Wang, I. N.; Smith, D. L.; Young, R. (2000-01-01). "Holins: the protein clocks of bacteriophage infections". Annual Review of Microbiology 54: 799–825. References Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins Holins
BP holin family
[ "Biology" ]
636
[ "Protein families", "Protein classification", "Membrane proteins" ]
49,987,239
https://en.wikipedia.org/wiki/LP%20holin%20family
The Putative Listeria Phage Holin (LP-Hol) Family (TC# 1.E.51) consists of several small proteins of 41 amino acyl residues (aas) and 1 transmembrane segment (TMS). They can be found in several Listeria phage as well as in Listeria monocytogenes. While annotated as holins, these proteins remain functionally uncharacterized. A representative list of proteins belonging to the LP-Hol family can be found in the Transporter Classification Database. See also Holin Lysin Transporter Classification Database Further reading Reddy, Bhaskara L.; Saier Jr., Milton H. (2013-11-01). "Topological and phylogenetic analyses of bacterial holin families and superfamilies". Biochimica et Biophysica Acta (BBA) - Biomembranes 1828 (11): 2654–2671. Saier, Milton H.; Reddy, Bhaskara L. (2015-01-01). "Holins in Bacteria, Eukaryotes, and Archaea: Multifunctional Xenologues with Potential Biotechnological and Biomedical Applications". Journal of Bacteriology 197 (1): 7–17. Wang, I. N.; Smith, D. L.; Young, R. (2000-01-01). "Holins: the protein clocks of bacteriophage infections". Annual Review of Microbiology 54: 799–825. Young, R.; Bläsi, U. (1995-08-01). "Holins: form and function in bacteriophage lysis". FEMS Microbiology Reviews 17 (1–2): 191–205. References Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins Holins
LP holin family
[ "Biology" ]
416
[ "Protein families", "Protein classification", "Membrane proteins" ]
49,987,432
https://en.wikipedia.org/wiki/Metal%20stitching
Metal stitching is an industrial technique for repairing cracked and broken cast iron, steel, bronze or aluminium structures and their components. The process is carried out cold, without welding. It allows the repair of cast iron and cast steel, often in-situ, without the distortion from welding, and can be used in other situations where heat cannot be used to achieve a repair. Background The metal stitching process was developed in the late 1930s as an option for repairing cast iron components and equipment on the Texas oil fields. The process was developed to provide a permanent, stress-free repair and was used when the use of heat or open flame was limited or not allowed. Four men have been credited with the development of this new metal locking technique: Lawrence B. Scott, Fred Lewis, Earl Reynolds and Hal W. Harman. However, it was Hal Harman who initially invented the metal stitching technique, and he filed a patent for the technique on 7 August 1937. In 1938 L.B. Scott was officially credited with the invention of the Metalock variation of metal stitching, whilst he was still working for Harman. Scott was given patent rights to the repair technique and materials used. Scott used his patents to secure the repair process, called it the ‘Metalock Repair Process’, and began to offer franchises under the Metalock Corporation trade name after starting his own operation in Long Island City, NY. Shortly thereafter, Thomas O. Oliver Ltd. (based in Ontario, Canada) was the first company to purchase a franchise. Fred Lewis (a partner in the development process) purchased a franchise and began operation in Chicago, IL in 1942. The same year, George Jackman Sr. left T.O. Oliver Ltd. and formed Metal Locking Service, Inc. as a stand-alone company. Hal Harman took his own method forward under the name Chainlock. Initially Harman and Scott both offered competing metal stitching repairs. Then, for several years, Harman and Scott took each other to court to contest patent infringements and design rights. Ultimately Harman succeeded and Scott had to concede ownership of the process to others. The first repairs, in the 1930s, were in hazardous oilfields. Just prior to and during WWII, the process was used secretly on US Naval vessels, and it became a standard repair method approved by the US Navy after the war. It was in this time period that the process was verified as a credible alternative to welding. Over the years alternative variations of the metal stitching process were developed; they use terms such as Metal Stitch, Metal Locking, and Metalock to describe their repair processes. Lock-N-Stitch is a slightly different stitching method that was developed from the original stitching concept by Gary J Reed. Development of the process Major Edward Peckham, a Canadian engineer originally of the Canadian army, was so impressed by what he saw in Texas that he brought the metalock process to Europe, and in 1947 opened an office in London, England. Peckham registered the Metalock Casting Repair Service, which became Metalock (Britain) Ltd. In 1953, to coordinate the expansion of the new process, the Metalock International Association was started in London. An engineering standard was developed to ensure the best possible outcome for a metalock repair. During the early years, from 1953 to the 1970s, research was carried out to improve the process in Sweden, Germany and the UK.
This resulted in improvements in two main areas: the creation of a key material designed for maximum strength under operating conditions, and the development of the key design and dimensions and of how best to lay out the keys. During the mid 20th century, the process gained rapid popularity among engineers and in manufacturing, as evidenced by coverage in specialist engineering publications. Process description The metalock process consists of a series of steps in which metal alloy ‘locks’ or 'keys' are inserted into the cast iron across, and at right angles to, the fracture. The process is applied to a fracture, or to a complete break in the material. There is often related damage around the break that has to be cut out prior to repair. Once complete, the appearance this repair gives is one of a 'stitch' from the sewing of cloth, hence the common term 'metal stitching.’ This method has also been called ‘metal locking’ as it locks in the broken parts of the machine. The durability of the repair is normally high as the technician ensures that the repair maximises the original equipment strength design pattern. Applications As a cold repair process, metal stitching is applicable where heat should not be used, and in situations where the material cannot be successfully repaired by welding. Situations where application of heat would be problematic are particularly appropriate for cold metal stitching; examples include oil installations and engine rooms. Often large equipment cannot be easily dismantled and removed for repairs; the metalock repair process can often be performed in-situ with little or no dismantling. It is this feature that created the foundation for the development of the process. More unusual onsite locations have included the repair of ship propellers whilst they are fixed to the ship, large mining equipment located underground, and underwater repairs. Welding introduces thermal stresses into the base metal, and also changes the grain structure of the metal crystals - altering the characteristics and the strength of that part of the equipment. Heat also distorts the alignment of the original surface. Once the equipment is machined and returned to use, the parent metal is always significantly weaker. Often, the site of the original repair then subsequently fails. The metal stitching repair process, however, tends to maintain alignment of original surfaces, since the lack of heat during the repair produces no distortion of the completed repair. In addition, the parent metal is not weakened due to material changes. Metal stitching dampens and absorbs compression stresses, providing a good ‘expansion joint’ for castings subject to thermal stresses. It distributes the tensional load away from fatigue points and maintains relieved conditions of inherent internal stresses where the rupture initially occurred. Where the repair involves a pressurized interface, the repair process has the ability to seal the joint. References Mechanical fasteners Maintenance
Metal stitching
[ "Engineering" ]
1,274
[ "Mechanical fasteners", "Maintenance", "Mechanical engineering" ]
49,988,182
https://en.wikipedia.org/wiki/Travel%20itinerary
A travel itinerary is a schedule of events relating to planned travel, generally including destinations to be visited at specified times and means of transportation to move between those destinations. For example, both the plan of a business trip and the route of a road trip, or the proposed outline of one, are travel itineraries. The construction of a travel itinerary may be assisted by the use of travel literature, including travel journals and diaries, a guide book containing information for visitors or tourists about the destination, or a trip planner website dedicated to helping the users plan their trips. Typically a travel itinerary is prepared by a travel agent who assists a traveller in arranging their travel for business or leisure. Most commonly a travel agent provides a list of pre-planned travel itineraries to a traveller, who can then pick the one they are most satisfied with. However, with the advent of the internet, online maps, navigation, online trip planners and easier access to travel information in general, travellers, especially younger ones, prefer a more do-it-yourself approach to travel planning. Since a travel itinerary might serve different purposes for different kinds of travellers, it is crucial for a travel agent to know all the characteristics of his or her target customers. A typical business traveller's itinerary might include information about meetings, events and contacts, with some time set aside for leisure travel. Making of reservations When a proposed itinerary has been finalised, the details need to be entered into an airline reservation system, where the appropriate reservations and bookings are made. In the industry, the travel plan is commonly known as the itinerary and the data on the reservation system is known as a passenger name record (PNR). See also Travel plan, a package of actions designed by an organisation to encourage safe, healthy and sustainable travel options References Travel
Travel itinerary
[ "Physics" ]
377
[ "Physical systems", "Transport", "Travel" ]
49,991,314
https://en.wikipedia.org/wiki/Leucine-rich%20repeats%20and%20iq%20motif%20containing%201
Leucine-rich repeats and IQ motif containing 1 is a protein that in humans is encoded by the LRRIQ1 gene. The protein is likely a nuclear-encoded mitochondrial protein and is found in all metazoans. Gene LRRIQ1 is mapped on chromosome 12, at 12q21.31 in humans. LRRIQ1 is near ALX1 on the positive strand, and TSPAN19 and SLC6115 on the negative strand. It covers 208.78 kb, from 85430099 to 8563881 on the direct strand. The gene contains 36 exons. mRNA The gene contains 31 distinct introns, and the transcript produces 10 different mRNAs. LRRIQ1 has two validated alternative polyadenylation sites. The most common isoform is 5,460 base pairs in length, and includes 28 of the total 29 exons. Primates have an elongated 3’ end compared to other mammalian species. Reptiles, birds, and fish have a truncated 3’ end compared to primate transcripts. Protein The protein is a nuclear-encoded mitochondrial protein. The protein in humans has 1760 amino acids. The protein is considered largely neutral, though 17% of the primary structure is composed of the hydrophobic leucine-rich repeats. The leucine-rich repeat forms a structural horseshoe shape, which encourages protein-protein interactions. The most common translated isoform has a predicted molecular weight of 199.3 kDa. Compared to an average of human sequences, the internal composition is rich in leucine, glutamic acid, and lysine. Domains and Motifs LRRIQ1 contains an IQ calmodulin-binding motif found in one isoform. The isoform contains three copies, which serve as binding sites for calmodulin or CaM-like proteins. The Leucine-Rich Repeat domain is found in three isoforms of LRRIQ1. LRRIQ1 contains 4 Leucine Rich Repeats (LRR). The LRR motif provides a structural framework for the formation of protein-protein interactions, forming a coiled horseshoe shape. Homology There are no known paralogs of LRRIQ1 detected in humans. There are many orthologs of LRRIQ1. Orthologous LRRIQ1 is found in all metazoans. LRRIQ1 is not found in plants, bacteria, archaea, fungi, or protists. The most distant homolog is found in Drosophila melanogaster (estimated time of divergence 847 million years ago). The IQ-containing motif and Leucine-rich repeats domains are conserved in Drosophila. Conservation The LRRIQ1 gene has been shown to be highly conserved. The gene has true orthologs throughout mammalian taxa and is found in all metazoans. The time of divergence versus the corrected % divergence (m) was plotted with samples from human, gorilla, domestic cat, bison, orca whale, Arabian camel, domestic horse, African Bush Elephant, Bald Eagle, Adelie Penguin, Japanese Gecko, Carolina Anole, and Western Clawed Frog. To make slopes for fibrinogen (considered a comparatively rapidly evolving protein) and cytochrome c (comparatively slower), Xenopus tropicalis, Xenopus laevis, Takifugu rubripes, and Bos taurus were utilized for comparison. Expression LRRIQ1 is expressed at low levels (0.6 times the average gene) in lung, testis, epithelial tissue, pooled germ cell tumors, brain tissues, embryonic tissues, and adipose tissues. Interacting Proteins The presence of the Leucine-Rich Repeat motif provides a structural framework for protein-protein interactions. HES4 is the only identified protein that interacts with LRRIQ1. HES4 is a transcription factor found in humans; the protein binds DNA at N-box motifs. Clinical significance To date, the clinical significance of this gene is not known. References Further reading Proteins
Leucine-rich repeats and iq motif containing 1
[ "Chemistry" ]
839
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
32,511,431
https://en.wikipedia.org/wiki/Kinetic%20inductance%20detector
The kinetic inductance detector (KID) — also known as a microwave kinetic inductance detector (MKID) — is a type of superconducting photon detector capable of counting single photons whilst simultaneously measuring their energy and arrival time to high precision. They were first developed by scientists at the California Institute of Technology and the Jet Propulsion Laboratory in 2003. These devices operate at cryogenic temperatures, typically below 1 kelvin. They are being developed for high-sensitivity astronomical detection for frequencies ranging from the far-infrared to X-rays. Principle of operation Photons incident on a strip of superconducting material break Cooper pairs and create excess quasiparticles. The kinetic inductance of the superconducting strip is inversely proportional to the density of Cooper pairs, and thus the kinetic inductance increases upon photon absorption. This inductance is combined with a capacitor to form a microwave resonator whose resonant frequency changes with the absorption of photons. This resonator-based readout is useful for developing large-format detector arrays, as each KID can be addressed by a single microwave tone and many detectors can be measured using a single broadband microwave channel, a technique known as frequency-division multiplexing. Applications KIDs are being developed for a range of astronomy applications, including millimeter and submillimeter wavelength detection at the Caltech Submillimeter Observatory, the Atacama Pathfinder Experiment (APEX) on the Llano de Chajnantor Observatory, the CCAT Observatory, the Large Millimeter Telescope, and the IRAM 30-m telescope. They are also being developed for optical and near-infrared detection at the Palomar Observatory. KIDs have also flown on two balloon-borne telescopes, OLIMPO in 2018 and BLAST-TNG in 2020. They are also foreseen for the spectrometers of the planned PRobe far-Infrared Mission for Astrophysics (PRIMA) telescope, which NASA selected as one of two potential future space telescopes. KIDs have also gained popularity as a more compact, lower cost, and less complex alternative to transition edge sensors. See also Kinetic inductance Cryogenic particle detectors References External links SRON website on kinetic inductance detectors Research group of Prof. B. Mazin at UC Santa Barbara YouTube video on kinetic inductance from MIT Superconducting detectors Radiometry Sensors Particle detectors
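The readout principle described above can be illustrated numerically. The short Python sketch below treats the detector as a simple LC resonator and shows how a small increase in kinetic inductance, from broken Cooper pairs, pulls the resonant frequency downward; the inductance, capacitance, and inductance-change values are assumed purely for illustration and are not taken from any particular published device.

import numpy as np

# Assumed, illustrative circuit values (not from a specific KID design)
L_geometric = 8e-9     # geometric inductance, henries
L_kinetic   = 2e-9     # kinetic inductance before photon absorption, henries
C           = 0.5e-12  # capacitance, farads

def resonant_frequency(L, C):
    """Resonant frequency of an ideal LC resonator."""
    return 1.0 / (2.0 * np.pi * np.sqrt(L * C))

f0 = resonant_frequency(L_geometric + L_kinetic, C)

# Photon absorption breaks Cooper pairs, lowering their density and so
# raising the kinetic inductance; the size of the increase is assumed here.
delta_Lk = 1e-13
f1 = resonant_frequency(L_geometric + L_kinetic + delta_Lk, C)

print(f"resonance before absorption: {f0/1e9:.6f} GHz")
print(f"resonance after absorption:  {f1/1e9:.6f} GHz  (shift {f1 - f0:.0f} Hz)")

Because each resonator can be fabricated at a slightly different frequency, many such detectors can share one broadband readout line, which is the frequency-division multiplexing scheme mentioned above.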
Kinetic inductance detector
[ "Materials_science", "Technology", "Engineering" ]
484
[ "Telecommunications engineering", "Superconductivity", "Particle detectors", "Measuring instruments", "Sensors", "Superconducting detectors", "Radiometry" ]
32,512,530
https://en.wikipedia.org/wiki/Severe%20thunderstorm%20outbreak
A severe thunderstorm outbreak, also called a severe weather outbreak or simply a severe outbreak, is an event in which a weather system or combination of weather systems produces a multitude of severe thunderstorms in a region over a continuous span of time. A severe outbreak which is most notable for its tornadoes is called a tornado outbreak. The four kinds of severe weather produced in these outbreaks are tornadoes, severe wind, large hail, and flash flooding. Types Tornado outbreak A tornado outbreak is the occurrence of multiple tornadoes in a region over a relatively short span of time. Usually, a tornado outbreak is the result of multiple supercells. Derecho or other squall line A squall line (commonly abbreviated SQLN) is a line of thunderstorms, most or all of which have attained severe limits, traveling in an organized fashion. The greatest threats within a SQLN are damaging winds, large hail, and flash flooding, though tornadoes are possible. A derecho is a squall line which is long-lived and consistently produces damaging winds across its entire track. Derechos almost exclusively cause flash flooding and wind damage, which can be very severe. Mesoscale convective system A mesoscale convective system is an organized mesoscale system which may produce severe weather along a relatively narrow area or path. The greatest threats in a mesoscale convective system are damaging winds, large hail, and flash flooding, though tornadoes are occasionally possible. Mesoscale convective vortex A mesoscale convective vortex (MCV) is a tropical cyclone-like, warm-core feature which may develop in a squall line, derecho, or mesoscale convective system (MCS). Severe MCVs can behave like small tropical storms or hurricanes, and can in fact develop into true tropical cyclones, as in the case of Hurricane Barry in July 2019. An MCV will often trail a squall line on its south side. The greatest threats in an MCV are (in the center of circulation and south of the center) extreme winds and (north of the center) flash flooding, in addition to tropical cyclone formation. References Weather hazards
Severe thunderstorm outbreak
[ "Physics" ]
444
[ "Weather", "Physical phenomena", "Weather hazards" ]
32,513,367
https://en.wikipedia.org/wiki/Tris%28dimethylamino%29aluminium%20dimer
Tris(dimethylamino)aluminium dimer, formally bis(μ-dimethylamino)tetrakis(dimethylamino)dialuminium, is an amide complex of aluminium. This compound may be used as a precursor to other aluminium complexes. Commercially available, this compound may be prepared from lithium dimethylamide and aluminium trichloride. References Aluminium complexes Metal amides Dimers (chemistry)
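Assuming a straightforward salt-metathesis route, the preparation from lithium dimethylamide and aluminium trichloride can be summarized by the following illustrative balanced equation (a sketch of the stoichiometry, not a quoted procedure):

6 LiN(CH3)2 + 2 AlCl3 → Al2[N(CH3)2]6 + 6 LiCl

The formal name given above, bis(μ-dimethylamino)tetrakis(dimethylamino)dialuminium, reflects the fact that two of the six dimethylamido groups bridge the two aluminium centres of the dimer.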
Tris(dimethylamino)aluminium dimer
[ "Chemistry", "Materials_science" ]
90
[ "Metal amides", "Coordination chemistry", "Dimers (chemistry)", "Polymer chemistry", "Organic chemistry stubs" ]
52,445,083
https://en.wikipedia.org/wiki/Diamond%20battery
Diamond battery is the name of a nuclear battery concept proposed by the University of Bristol Cabot Institute during its annual lecture held on 25 November 2016 at the Wills Memorial Building. This battery is proposed to run on the radioactivity of waste graphite blocks (previously used as neutron moderator material in graphite-moderated reactors) and would generate small amounts of electricity for thousands of years. The battery is a betavoltaic cell using carbon-14 (14C) in the form of diamond-like carbon (DLC) as the beta radiation source, and additional normal-carbon DLC to make the necessary semiconductor junction and encapsulate the carbon-14. Prototypes Early prototypes use nickel-63 (63Ni) as their source with diamond non-electrolytes/semiconductors for energy conversion, which are seen as a stepping stone to a 14C diamond battery prototype. University of Bristol prototype In 2016, researchers from the University of Bristol claimed to have constructed one of these 63Ni prototypes. According to their Frequently Asked Questions (FAQ) document, the estimated power of a small C-14 cell is 15 J/day for thousands of years. (For reference, an AA battery of the same size has about 10 kJ total, which is equivalent to 15 J/day for just 2 years.) They note it is not possible to directly replace an AA battery with this technology, because an AA battery can produce bursts of much higher power as well. Instead, the diamond battery is aimed at applications where a low discharge rate over a long period of time is required, such as space exploration, medical devices, seabed communications, microelectronics, etc. Moscow Institute of Physics and Technology prototype In 2018, researchers from the Moscow Institute of Physics and Technology (MIPT), the Technological Institute for Superhard and Novel Carbon Materials (TISNCM), and the National University of Science and Technology (MISIS) announced a prototype using 2-micron-thick layers of 63Ni foil sandwiched between 200 10-micron diamond converters. It produced a power output of about 1 μW at a power density of 10 μW/cm3. At those values, its energy density would be approximately 3.3 Wh/g over its 100-year half-life, about 10 times that of conventional electrochemical batteries. This research was published in April 2018 in the Diamond and Related Materials journal. University of Bristol 14C Battery In December 2024, the University of Bristol announced that they had successfully created a battery using 14C. The battery functions in a way similar to a photocell, but capturing electrons instead of light within the diamond. Carbon-14 Researchers are trying to improve the efficiency and are focusing on the use of radioactive 14C, which is a minor contributor to the radioactivity of nuclear waste. 14C undergoes beta decay, in which it emits a low-energy beta particle to become nitrogen-14, which is stable (not radioactive): 14C → 14N + β− + ν̄e. These beta particles, having an average energy of 50 keV, undergo inelastic collisions with other carbon atoms, thus creating electron-hole pairs which then contribute to an electric current. This can be restated in terms of band theory by saying that due to the high energy of the beta particles, electrons in the carbon valence band jump to its conduction band, leaving behind holes in the valence band where electrons were earlier present. Proposed manufacturing In graphite-moderated reactors, fissile uranium rods are placed inside graphite blocks.
These blocks act as a neutron moderator whose purpose is to slow down fast-moving neutrons so that nuclear chain reactions can occur with thermal neutrons. During their use, some of the non-radioactive carbon-12 and carbon-13 isotopes in graphite get converted into radioactive 14C by capturing neutrons. When the graphite blocks are removed during station decommissioning, their induced radioactivity qualifies them as low-level waste requiring safe disposal. Researchers at the University of Bristol demonstrated that a large amount of the radioactive 14C was concentrated on the inner walls of the graphite blocks. Due to this, they propose that much of it can be effectively removed from the blocks. This can be done by heating them to the sublimation point, which releases the carbon in gaseous form. After this, the blocks will be less radioactive and possibly easier to dispose of, with most of the radioactive 14C having been extracted. These researchers propose that this 14C gas could be collected and used to produce man-made diamonds by a process known as chemical vapor deposition using low pressure and elevated temperature, noting that this diamond would be a thin sheet and not of the stereotypical diamond cut. The resulting diamond made of radioactive 14C would still produce beta radiation which researchers claim would allow it to be used as a betavoltaic source. Researchers also claim this diamond would be sandwiched between non-radioactive man-made diamonds made from 12C which would block radiation from the source and would also be used for energy conversion as a diamond semiconductor instead of conventional silicon semiconductors. Proposed applications Due to its very low power density, low conversion efficiency and high cost, a 14C betavoltaic device is very similar to other existing betavoltaic devices which are suited to niche applications needing very little power (microwatts) for several years in situations where conventional batteries cannot be replaced or recharged using conventional energy harvesting techniques. Due to their longer half-life, 14C betavoltaics may have an advantage in service life when compared to other betavoltaics using tritium or nickel. However, this will likely come at the cost of further reduced power density. Commercialization In September 2020, Morgan Boardman, an Industrial Fellow and Strategic Advisory Consultant with the Aspire Diamond Group at the South West Nuclear Hub of the University of Bristol, was appointed CEO of a new company called Arkenlight, which was created explicitly to commercialize their diamond battery technology and possibly other nuclear radiation devices under research or development at Bristol University. In September 2024, Arkenlight announced that they had created a 14C diamond. References External links The speech that proposed the battery Are Radioactive ‘Diamond Batteries’ a Real Thing? Radioactivity Battery
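The figures quoted above can be sanity-checked with simple arithmetic. The Python sketch below only reproduces the back-of-envelope comparison from the Bristol FAQ (15 J/day versus roughly 10 kJ for an AA cell); the numbers are the quoted estimates, not measurements.

# Back-of-envelope check of the quoted figures (estimates, not measured data)
daily_energy_j = 15.0        # estimated output of a small 14C cell, joules per day
aa_capacity_j  = 10_000.0    # rough total energy content of an AA cell, joules

days = aa_capacity_j / daily_energy_j
print(f"An AA cell stores about {days:.0f} days ({days/365:.1f} years) of 15 J/day output")

years = 100
century_output_j = daily_energy_j * 365 * years
print(f"At a constant 15 J/day, a century of operation delivers about {century_output_j/1e3:.0f} kJ")

The result, roughly 550 kJ over a century against about 10 kJ in an AA cell, is why the concept targets very low-power, long-duration applications rather than AA-style loads.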
Diamond battery
[ "Physics", "Chemistry" ]
1,272
[ "Nuclear technology", "Radioactivity", "Nuclear physics" ]
52,457,311
https://en.wikipedia.org/wiki/Bouligand%20structure
A Bouligand structure is a layered and rotated microstructure resembling plywood, which is frequently found in naturally evolved materials. It consists of multiple lamellae, or layers, each one composed of aligned fibers. Adjacent lamellae are progressively rotated with respect to their neighbors. This structure enhances the mechanical properties of materials, especially its fracture resistance, and enables strength and in plane isotropy. It is found in various natural structures, including the cosmoid scale of the coelacanth, and the dactyl club of the mantis shrimp and many other stomatopods. In physics, these structures were conceived in 1869 by Ernest Reusch and are called Reusch piles. Due to its desirable mechanical properties, there are ongoing attempts to replicate Bouligand arrangements in the creation of failure resistant bioinspired materials. For example, it has been shown that layered composites (such as CFRP) utilizing this structure have enhanced impact properties. However, replicating the structure on small length scales is challenging, and the development and advancement of manufacturing techniques continually improves the ability to replicate this desirable structure. Mechanical Properties Toughening Mechanisms The Bouligand structure found in many natural materials is credited with imparting a very high toughness and fracture resistance to the overall material it is a part of. The mechanisms by which this toughening occurs are many, and no one mechanism has yet to be identified as the main source of the structure's toughness. Both computational work and physical experiments have been done to determine these pathways by which the structure resists fracture so that synthetic tough Bouligand structures can be taken advantage of. Crack deflection of one form or another is considered the main toughening mechanism in the bouligand structure. Deflection can take the form of crack tilting, and crack bridging. In the former, the crack propagates along the direction of the fiber plane; at the interface with the matrix material. Once the energy release rate at the tip is sufficiently low, the crack can no longer propagate along the fiber direction and must switch to crack bridging. This mode involves the crack changing direction drastically and cutting through fibers to reach a new plane to propagate along. A combination of crack tilting and crack bridging in the bouligand structure results in a highly distorted and enlarged crack. This causes the new surface area created by the propagating crack to increase dramatically relative to a straight crack; making further propagation less and less favorable and in turn toughening the material. In addition to crack deflection which simply causes a single crack to change direction and follow a more tortuous path, the bouligand structure can also tolerate multiple cracks to form and keep them from coalescing. This is sometimes termed crack twisting. Inherently accompanying crack deflection, tilting, bridging and twisting is the mixing of fracture modes. Fracture modes include opening, in-plane shear, and out-of-plane shear. The mixing of these modes via crack bridging, tilting and twisting all greatly complicate the stress fields experienced by the material; helping to dissipate the force on any one laminate plane. Impact Resistance Impact resistance in materials is differentiated from toughening in general by the rate at which stress is applied. 
In impact testing, the rate at which either stress or strain is applied to the sample is much higher than in so-called static testing. In synthetic nano-cellulose films formed into Bouligand structures, it was shown that as the pitch angle is increased, the density quickly drops to a roughly constant value because the films are not able to stack neatly onto each other. This value rises again between 42 and 60 degrees and re-stabilizes at higher angles. This reduction in density is accompanied by a sharp increase in both specific ballistic limit velocity and specific energy absorption. The relatively small angles of 18 to 42 degrees that correlate with the lowest density for the Bouligand structure are also shown to give better impact resistance and better energy absorption than traditional synthetic quasi-isotropic structures made for impact resistance. This experimentally optimized range of angles for impact resistance is consistent with the range of angles between fiber layers found in natural examples of the Bouligand structure. Another means of toughening the Bouligand structure is shear wave filtering. The periodic and hierarchical nature of the Bouligand structure creates a shear wave filtering effect that is especially effective during high-intensity dynamic loads. As the force is applied, specific shear frequencies are not permitted to transmit through the layered structure, creating a band gap in the transmitted energies and decreasing the effective energy felt by the system. The pitch angle of the layers, thickness of the layers, and number of layers present in the material all affect which frequencies are filtered out. Adaptability Adjustment of the Bouligand structure during loading has been measured using small angle X-ray scattering (SAXS). The two adjustment effects are the change in angle between the collagen fibrils and tensile axis, and the stretching of collagen fibrils. There are four mechanisms through which these adjustments occur. Fibrils rotate because of interfibrillar shear: As a tensile force is applied, fibrils rotate to align with the tensile direction. During deformation, the shear component of the applied stress causes the hydrogen bonds between fibrils to break and then reform after fibril adjustment. Collagen fibrils stretch: Collagen fibrils can elastically stretch, resulting in fibrils re-orientating to align with the tensile direction. Tensile opening of interfibrillar gaps: Fibrils highly misoriented with the tensile direction can separate, creating an opening. "Sympathetic" lamella rotation: A lamella is able to rotate away from the tensile direction if it is sandwiched between two lamellae that are reorienting themselves towards the tensile direction. This can happen if the bonding between these lamellae is high. Ψ refers to the angle between the tensile axis and the collagen fibril. Mechanisms 1 and 2 both decrease Ψ. Mechanisms 3 and 4 can increase Ψ, that is, the fibril moves away from the tensile axis. Fibrils with a small Ψ stretch elastically. Fibrils with a large Ψ are compressed, since adjacent lamellae contract in accordance with Poisson's ratio, which is a function of strain anisotropy. Single vs. Double Bouligand Structure The most common Bouligand structure found in nature is the twisted plywood structure where there is a constant angle of misalignment between layers. A rare variation of this structure is the so-called "double twisted" Bouligand structure seen in the coelacanth.
This structure uses stacks of two as units to be twisted with respect to each other at some constant misalignment angle. The two fibril layers in each of these units in this case lay such that their fibril orientation is perpendicular to each other. The mechanical differences between the single and double twisted bouligand structure has been observed. It was shown that the double bouligand structure is stiffer and tougher than the more common single bouligand structure. The increase in stiffness is also accompanied by a reduction of flexibility. The increased strength is attributed in part to an addition to the structure of "inter-bundle fibrils" that run up and down the stack of layers, perpendicular to the twisted fiber planes. These fiber bundles help keep the structure together by greatly increasing the energy needed for inter-fibril sliding. These bundles are coupled with the double twisted nature of the plywood arraignment, which shifts the direction a crack would like to grow drastically with each layer. It has also been observed that a structure can form mostly similar to the single twisted bouligand structure, but with a non-constant angle of misalignment. It is still unclear how this particular structural difference affects mechanical properties. Examples in Nature Arthropods The arthropod exoskeleton is highly hierarchical. Polysaccharide chitin fibrils arrange with proteins to form fibers, the fibers coalesce into bundles, and then the bundles arrange into horizontal planes which are stacked helicoidally, forming the twisted plywood Bouligand structure. Repeating Bouligand structures form the exocuticle and endocuticle. Differences in the Bouligand structure of the exocuticle and endocuticle have been found to be critical for analyzing the mechanical properties of both regions. Arthropods have exoskeletons that provide protection from the environment, mechanical load support, and body structure. The outer layer, called the epicuticle, is thin and waxy and is the main waterproofing barrier. Below is the procuticle, which is designed as the main structural element to the body. The procuticle is made of two sections, the exocuticle on the outer part, and the endocuticle on the inner part. The exocuticle is denser than the endocuticle; the endocuticle makes up about 90 volume % of the exoskeleton. Both the exocuticle and endocuticle are made with a Bouligand structure. Crabs In crab exoskeletons, calcite and amorphous calcium carbonate are the minerals deposited in the chitin-protein hierarchical matrix. The sheep crab (Loxorhynchun grandis), like other crabs, has a highly anisotropic exoskeleton. The spacing between the (x-y) plane Bouligand lamellae in the crab exocuticle is ~3-5μm, whereas the interlamellar spacing in the endocuticle is much greater, about 10-15μm. The smaller spacing of the exocuticle results in a higher lamellae density in the exocuticle. There is a higher hardness measurement in the exocuticle than the endocuticle, which is attributed to a higher mineral content in the exocuticle. This gives a higher wear resistance and hardness on the surface of the exoskeleton, thus giving the crab a greater degree of protection. Under stress, the Bouligand planes fail via normal bundle fracture or bundle separation mechanisms. The exocuticle-endocuticle interface is the most critical region and typically where failure first occurs, due to the anisotropic structure and discontinuity of Bouligand planes and spacing at this interface. 
In the z-direction, porous tubules that penetrate the exoskeleton exist normal to the Bouligand planes. The function of these tubules is to transport ions and nutrients to the new exoskeleton during the molting process. The presence of these tubules, which have a helical structure, results in a ductile necking region during tension. An increased degree of ductility increases the toughness of the crab exoskeleton. Lobster The Homarus americanus (American lobster) is an arthropod with an exoskeleton structure similar to the crabs above, and with similar trends comparing the endo- and exo- cuticles. An important note for the lobster exoskeleton's structural and mechanical properties is the impact of the honeycomb structure formed by the Bouligand planes. The stiffness values for the exocuticle in lobster range from 8.5–9.5 GPa, while the endocuticle ranges from 3–4.5 GPa. Gradients in the honeycomb network, especially at the interface between the endo- and exo- cuticle, are believed to be the reason for this discrepancy between the structures. Mantis Shrimp Stomatopods have thoracic appendages that are used to hunt prey. The appendages can either be spear-like or club-like, depending on the species. Mantis shrimp with a club-like appendage, or "dactyl club", use it to smash the shells of prey such as mollusks or crabs. The peacock mantis shrimp is a species of mantis shrimp that has a dactyl club. The clubs are able to withstand fracture under the high stress waves associated with blows against prey. This is possible due to the multi-regional structure of the clubs, which includes a region incorporating a Bouligand structure. The outer, top region of the club is called the impact region. The impact region is supported by periodic zones and a striated region. The periodic regions are below the impact region, on the inside of the club. The striated region is present on the sides of the club, surrounding the edges of the periodic region. The impact region is about 50 to 70 μm thick, and is made with highly crystallized hydroxyapatite. The periodic region is dominated by an amorphous calcium carbonate phase. Surrounded by the amorphous mineral phase are chitin fibrils, which make up a Bouligand structure. The layered arrangement of the periodic region corresponds to a complete 180° rotation of the fibers. The impact region has a similar structure, but with a larger pitch distance (the length over which a complete 180° rotation occurs). The striated region is made of highly aligned parallel chitin fiber bundles. The club appendage can sustain high intensity load by shear wave filtering because of the periodicity and chirality of its Bouligand structure. Catastrophic crack growth is hindered in two ways. When crack growth follows the helicoidal structure between layers of chitin fibers, a large surface area per crack length is produced. Therefore, there is high total energy dissipated during club impact and crack propagation. When cracks propagate through neighboring layers, growth is hampered because of modulus oscillation. The Bouligand structure has anisotropic stiffness, resulting in an elastic modulus oscillation through the layers. Overall damage tolerance is improved, with crack propagation depending on growth direction in relation to chitin fiber orientation. Fish Arapaima The Arapaima fish's outer scales are designed to resist piranha bites. This is achieved through the scales' hierarchical architecture. The thinness of the scales and their overlapping arrangement allow for flexibility during movement.
This also influences how much a single scale will bend when a predator attacks. In the species Arapaima gigas, each scale has two distinct structural regions which results in a scale that is resistant to puncture and bending. The outer layer is about 0.5 mm thick and is highly mineralized, which makes it hard, promoting predator tooth fracture. The inner layer is about 1 mm thick and is made of mineralized collagen fibrils arranged in a Bouligand structure. In the fibrils, collagen molecules are embedded with hydroxyapatite mineral nanocrystals. Collagen fibrils align in the same direction to make a layer of collagen lamella, of about 50 μm in thickness. Lamellae are stacked with a misalignment in orientation, creating a Bouligand structure. When the scales bend during an attack, stress is distributed due to the corrugated morphology. The largest deformation is designed to occur in the inner core layer. The inner layer can support more plastic deformation than the brittle outer layer. This is because the Bouligand structure can adjust its lamellar layers to adapt to applied forces. Adjustment of the Bouligand structure during loading has been measured using small angle X-ray scattering (SAXS). The four mechanisms through which adjustments occur are fibril rotation, collagen fibril stretching, tensile opening between fibrils, and sympathetic lamella rotation. Fibrils adapting to the loading environment enhance the flexibility of the lamellae. This contributes resistance to scale bending, and therefore increases fracture resistance. As a whole, the outer scale layer is hard and brittle, while the inner layer is ductile and tough. Carp A similar Bouligand structure was found in the scales of the common carp (Cyprinus carpio). Compared to the arapaima, the mineral content in carp scales is lower, while exhibiting higher total energy dissipation in tensile testing as well as higher fibril extensibility. Biomimicry Additive Manufacturing Additive manufacturing is a popular upcoming form of industry which allows for complex geometries and unique performance characteristics for AM parts. The main issue with mechanical properties of AM parts is the introduction of microstructural heterogeneities within layers of deposited material. These defects, including porosity and unique interfaces, result in anisotropy of the mechanical response of the workpiece, which is undesirable. To combat this anisotropic mechanical response, a Bouligand-inspired tool path is used to deposit the material in a twisted Bouligand structure. This results in a stress transfer mechanism which uses interlayer heterogeneities as stress deflection points, thus strengthening the workpiece at these points. Bouligand tool paths are used specifically in cement/ceramic deposition AM. Bouligand-inspired AM parts have been observed to behave better than cast elements under mechanical stress. Pitch Angle A critical parameter in the development of the Bouligand-inspired tool path is the pitch angle. The pitch angle γ is the angle at which the helicoidal structure is formed. The relative size of the pitch angle is critical for the mechanical response of a Bouligand-inspired AM tool piece. For γ < 45° (small angle), interfacial crack growth and interfacial microcracking is observed. For 45° < γ < 90° (large pitch angle), dominant crack growth through the solid is observed. Battery Electrodes Crab shells which already have the Bouligand structure can be used as templates for nanostructured battery electrodes. 
Crab shells are a low-cost, sustainable alternative to otherwise expensive starting materials and processing methods for nanostructured batteries. The crab shells have a Bouligand structure composed of highly mineralized chitin fibers. The structure can be used as a bio-template to make hollow carbon nanofibers. The desired battery materials, often sulfur and silicon, can be contained in these hollow fibers to create the cathodes and anodes. Nanocellulose Films Cellulose nanocrystals self-assemble into helicoidal thin films; the pitch angle between the layers can then be modified via solvent processing. The resulting nanocellulose films, which have a Bouligand structure, can be manipulated to achieve various effects on the material properties. These nanocellulose films are impact-resistant, sustainable, and multi-functional and can be used in various applications such as stretchable electronics, protective coatings, eyewear, and body armor. References Materials
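For the helicoidal layups discussed above, the pitch angle fixes how many lamellae make up one half-turn of the structure. The short Python sketch below is a generic illustration of that relationship, not a description of any specific natural or manufactured material.

def bouligand_ply_angles(pitch_angle_deg, total_rotation_deg=180.0):
    """Fiber orientation of each lamella over one half-turn of a helicoidal (Bouligand) stack."""
    n_steps = int(round(total_rotation_deg / pitch_angle_deg))
    return [i * pitch_angle_deg for i in range(n_steps + 1)]

# A pitch angle of 18 degrees (within the 18-42 degree range noted above)
# needs 10 rotation steps, i.e. 11 lamellae, to complete a 180 degree turn.
print(bouligand_ply_angles(18.0))   # [0.0, 18.0, 36.0, ..., 180.0]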
Bouligand structure
[ "Physics" ]
3,958
[ "Materials", "Matter" ]
43,751,582
https://en.wikipedia.org/wiki/Equivalent%20input
Equivalent input (also input-referred, referred-to-input (RTI), or input-related) is a method of referring to the signal or noise level at the output of a system as if it were due to an input to the same system. This input's value is called the equivalent input. This is accomplished by removing all signal changes (e.g. amplifier gain, transducer sensitivity, etc.) so that the units match those of the input. Examples Equivalent input noise A microphone converts acoustical energy to electrical energy. Microphones have some level of electrical noise at their output. This noise may have contributions from random diaphragm movement, thermal noise, or a dozen other sources, but those can all be thought of as an imaginary acoustic noise source injecting sound into the (now noiseless) microphone. The units on this noise are no longer volts, but units of sound pressure (pascals or dBSPL), which can be directly compared to the desired sound pressure inputs. This is called equivalent input noise (EIN), or input-referred noise (IRN), or referred-to-input (RTI) noise. Input-related interference level A device which uses a microphone may be susceptible to electromagnetic interference which causes sonic artifacts. The problem is not in the microphone, but the interference level can be related back to the input to compare to the level of typical inputs to see how audible the artifact is. This is called input-related interference level (IRIL). References Further reading Acoustics Noise (electronics)
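As a worked example of referring an output quantity back to the input, the Python sketch below converts an output noise voltage into an equivalent input noise in dB SPL by dividing out a microphone sensitivity; both the sensitivity and the noise voltage are assumed values chosen only for illustration, not data for a specific device.

import math

sensitivity_v_per_pa = 0.012   # assumed microphone sensitivity: 12 mV per pascal
output_noise_v       = 3.0e-6  # assumed noise measured at the output, volts rms

# Remove the transducer sensitivity so the noise is expressed in input units (pascals)
equivalent_input_noise_pa = output_noise_v / sensitivity_v_per_pa

# Express the result in dB SPL relative to the 20 micropascal reference pressure
ein_db_spl = 20.0 * math.log10(equivalent_input_noise_pa / 20e-6)
print(f"equivalent input noise = {ein_db_spl:.1f} dB SPL")

The same divide-out-the-gain step applies to the input-related interference level: interference measured at the output is referred back through the signal chain so it can be compared directly with typical input levels.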
Equivalent input
[ "Physics" ]
326
[ "Classical mechanics", "Acoustics" ]
43,752,778
https://en.wikipedia.org/wiki/Gauss%27s%20method
In orbital mechanics (a subfield of celestial mechanics), Gauss's method is used for preliminary orbit determination from at least three observations (more observations increases the accuracy of the determined orbit) of the orbiting body of interest at three different times. The required information are the times of observations, the position vectors of the observation points (in Equatorial Coordinate System), the direction cosine vector of the orbiting body from the observation points (from Topocentric Equatorial Coordinate System) and general physical data. Working in 1801, Carl Friedrich Gauss developed important mathematical techniques (summed up in Gauss's methods) which were specifically used to determine the orbit of Ceres. The method shown following is the orbit determination of an orbiting body about the focal body where the observations were taken from, whereas the method for determining Ceres' orbit requires a bit more effort because the observations were taken from Earth while Ceres orbits the Sun. Observer position vector The observer position vector (in Equatorial coordinate system) of the observation points can be determined from the latitude and local sidereal time (from Topocentric coordinate system) at the surface of the focal body of the orbiting body (for example, the Earth) via either: or where, is the respective observer position vector (in Equatorial Coordinate System) is the equatorial radius of the central body (e.g., 6,378 km for Earth) is the geocentric distance is the oblateness (or flattening) of the central body (e.g., 0.003353 for Earth) is the eccentricity of the central body (e.g., 0.081819 for Earth) is the geodetic latitude (the angle between the normal line of horizontal plane and the equatorial plane) is the geocentric latitude (the angle between the radius and the equatorial plane) is the geodetic altitude is the local sidereal time of observation site Orbiting body direction cosine vector The orbiting body direction cosine vector can be determined from the right ascension and declination (from Topocentric Equatorial Coordinate System) of the orbiting body from the observation points via: where, is the respective unit vector in the direction of the position vector (from observation point to orbiting body in Topocentric Equatorial Coordinate System) is the respective declination is the respective right ascension Algorithm The initial derivation begins with vector addition to determine the orbiting body's position vector. Then based on the conservation of angular momentum and Keplerian orbit principles (which states that an orbit lies in a two dimensional plane in three dimensional space), a linear combination of said position vectors is established. Also, the relation between a body's position and velocity vector by Lagrange coefficients is used which results in the use of said coefficients. Then with vector manipulation and algebra, the following equations were derived. For detailed derivation, refer to Curtis. NOTE: Gauss's method is a preliminary orbit determination, with emphasis on preliminary. The approximation of the Lagrange coefficients and the limitations of the required observation conditions (i.e., insignificant curvature in the arc between observations, refer to Gronchi for more details) causes inaccuracies. Gauss's method can be improved, however, by increasing the accuracy of sub-components, such as solving Kepler's equation. Another way to increase the accuracy is through more observations. 
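Expressed in code, the two preparatory quantities above can be computed as follows. This minimal Python sketch uses the geodetic (oblate-body) form of the observer position vector and the right-ascension/declination direction cosines; the default equatorial radius and flattening are the Earth values listed above and are assumptions of the sketch rather than part of the method itself.

import numpy as np

def observer_position(geodetic_latitude, altitude, local_sidereal_time,
                      equatorial_radius=6378.137, flattening=0.003353):
    """Observer position vector in the Equatorial Coordinate System (km).

    Angles are in radians; the altitude and the returned vector are in km.
    The default radius and flattening are Earth values (assumed here).
    """
    e2 = 2.0 * flattening - flattening**2            # square of the body's eccentricity
    denom = np.sqrt(1.0 - e2 * np.sin(geodetic_latitude)**2)
    r_xy = (equatorial_radius / denom + altitude) * np.cos(geodetic_latitude)
    r_z = (equatorial_radius * (1.0 - flattening)**2 / denom + altitude) * np.sin(geodetic_latitude)
    return np.array([r_xy * np.cos(local_sidereal_time),
                     r_xy * np.sin(local_sidereal_time),
                     r_z])

def direction_cosine(right_ascension, declination):
    """Unit line-of-sight vector toward the orbiting body (angles in radians)."""
    return np.array([np.cos(declination) * np.cos(right_ascension),
                     np.cos(declination) * np.sin(right_ascension),
                     np.sin(declination)])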
Step 1 Calculate time intervals, subtract the times between observations: where is the time interval is the respective observation time Step 2 Calculate cross products, take the cross products of the observational unit direction (order matters): where is the cross product of vectors is the respective cross product vector is the respective unit vector Step 3 Calculate common scalar quantity (scalar triple product), take the dot product of the first observational unit vector with the cross product of the second and third observational unit vector: where is the dot product of vectors and is the common scalar triple product is the respective cross product vector is the respective unit vector Step 4 Calculate nine scalar quantities (similar to step 3): where is the respective scalar quantity is the respective observer position vector is the respective cross product vector Step 5 Calculate scalar position coefficients: where , , and are scalar position coefficients is the common scalar quantity is the respective scalar quantities is the time interval is the respective observer position vector is the respective unit vector Step 6 Calculate the squared scalar distance of the second observation, by taking the dot product of the position vector of the second observation: where is the squared distance of the second observation is the position vector of the second observation Step 7 Calculate the coefficients of the scalar distance polynomial for the second observation of the orbiting body: where are coefficients of the scalar distance polynomial for the second observation of the orbiting body are scalar position coefficients is the gravitational parameter of the focal body of the orbiting body Step 8 Find the root of the scalar distance polynomial for the second observation of the orbiting body: where is the scalar distance for the second observation of the orbiting body (it and its vector, r2, are in the Equatorial Coordinate System) are coefficients as previously stated Various methods can be used to find the root, a suggested method is the Newton–Raphson method. The root must be physically possible (i.e., not negative nor complex) and if multiple roots are suitable, each must be evaluated and compared to any available data to confirm their validity. Step 9 Calculate the slant range, the distance from the observer point to the orbiting body at their respective time: where is the respective slant range (it and its vector, , are in the Topocentric Equatorial Coordinate System) is the common scalar quantity is the respective scalar quantities is the time interval. 
is the scalar distance for the second observation of the orbiting body is the gravitational parameter of the focal body of the orbiting body Step 10 Calculate the orbiting body position vectors, by adding the observer position vector to the slant direction vector (which is the slant distance multiplied by the slant direction vector): where is the respective orbiting body position vector (in Equatorial Coordinate System) is the respective observer position vector is the respective slant range is the respective unit vector Step 11 Calculate the Lagrange coefficients: where, , , and are the Lagrange coefficients (these are just the first two terms of the series expression based on the assumption of small time interval) is the gravitational parameter of the focal body of the orbiting body is the scalar distance for the second observation of the orbiting body is the time interval Step 12 Calculate the velocity vector for the second observation of the orbiting body: where is the velocity vector for the second observation of the orbiting body (in Equatorial Coordinate System) , , and are the Lagrange coefficients is the respective orbiting body position vector Step 13 The orbital state vectors have now been found, the position (r2) and velocity (v2) vector for the second observation of the orbiting body. With these two vectors, the orbital elements can be found and the orbit determined. See also Inscribed angle theorem and three-point form for ellipses References Der, Gim J.. "New Angles-only Algorithms for Initial Orbit Determination." Advanced Maui Optical and Space Surveillance Technologies Conference. (2012). Print. Astrodynamics Orbits Carl Friedrich Gauss Equations of astronomy
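The steps above can be collected into a single routine. The following Python sketch condenses them, following the formulation given in Curtis; the default gravitational parameter and the Newton–Raphson starting guess are assumptions suited to an Earth-centred case, the Lagrange coefficients are the truncated approximations of step 11, and no iterative refinement is applied, so the result is a preliminary estimate only.

import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2; assumed gravitational parameter for an Earth-centred case

def gauss_preliminary_orbit(R1, R2, R3, q1, q2, q3, t1, t2, t3, mu=MU_EARTH):
    """Angles-only preliminary orbit determination (steps 1-13 above).

    R1..R3 are observer position vectors (km), q1..q3 the unit line-of-sight
    vectors, and t1..t3 the observation times (s). Returns the position (km)
    and velocity (km/s) vectors of the orbiting body at the second observation.
    """
    R1, R2, R3 = map(np.asarray, (R1, R2, R3))
    q1, q2, q3 = map(np.asarray, (q1, q2, q3))

    # Step 1: time intervals relative to the middle observation
    tau1, tau3 = t1 - t2, t3 - t2
    tau = tau3 - tau1

    # Step 2: cross products of the observational unit vectors
    p1, p2, p3 = np.cross(q2, q3), np.cross(q1, q3), np.cross(q1, q2)

    # Step 3: common scalar triple product
    D0 = q1 @ p1

    # Step 4: the nine scalar quantities D[i, j] = R_i . p_j
    D = np.array([[Ri @ pj for pj in (p1, p2, p3)] for Ri in (R1, R2, R3)])

    # Step 5: scalar position coefficients
    A = (-D[0, 1] * tau3 / tau + D[1, 1] + D[2, 1] * tau1 / tau) / D0
    B = (D[0, 1] * (tau3**2 - tau**2) * tau3 / tau
         + D[2, 1] * (tau**2 - tau1**2) * tau1 / tau) / (6.0 * D0)
    E = R2 @ q2

    # Steps 6-7: coefficients of the eighth-degree polynomial in the distance r2
    R2sq = R2 @ R2
    a = -(A**2 + 2.0 * A * E + R2sq)
    b = -2.0 * mu * B * (A + E)
    c = -(mu * B)**2

    # Step 8: Newton-Raphson root of r2^8 + a*r2^6 + b*r2^3 + c = 0
    # (starting guess is an assumption; as noted above, the chosen root should
    # be physically plausible and checked against any available data)
    r2 = 1.5 * np.sqrt(R2sq)
    for _ in range(100):
        F = r2**8 + a * r2**6 + b * r2**3 + c
        dF = 8.0 * r2**7 + 6.0 * a * r2**5 + 3.0 * b * r2**2
        step = F / dF
        r2 -= step
        if abs(step) < 1e-8:
            break

    # Step 9: slant ranges from each observation point to the orbiting body
    rho2 = A + mu * B / r2**3
    rho1 = ((6.0 * (D[2, 0] * tau1 / tau3 + D[1, 0] * tau / tau3) * r2**3
             + mu * D[2, 0] * (tau**2 - tau1**2) * tau1 / tau3)
            / (6.0 * r2**3 + mu * (tau**2 - tau3**2)) - D[0, 0]) / D0
    rho3 = ((6.0 * (D[0, 2] * tau3 / tau1 - D[1, 2] * tau / tau1) * r2**3
             + mu * D[0, 2] * (tau**2 - tau3**2) * tau3 / tau1)
            / (6.0 * r2**3 + mu * (tau**2 - tau1**2)) - D[2, 2]) / D0

    # Step 10: position vectors of the orbiting body
    r1_vec, r2_vec, r3_vec = R1 + rho1 * q1, R2 + rho2 * q2, R3 + rho3 * q3

    # Step 11: truncated Lagrange coefficients
    f1 = 1.0 - 0.5 * mu * tau1**2 / r2**3
    f3 = 1.0 - 0.5 * mu * tau3**2 / r2**3
    g1 = tau1 - mu * tau1**3 / (6.0 * r2**3)
    g3 = tau3 - mu * tau3**3 / (6.0 * r2**3)

    # Steps 12-13: velocity vector at the second observation
    v2_vec = (-f3 * r1_vec + f1 * r3_vec) / (f1 * g3 - f3 * g1)
    return r2_vec, v2_vec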
Gauss's method
[ "Physics", "Astronomy", "Engineering" ]
1,492
[ "Concepts in astronomy", "Aerospace engineering", "Astrodynamics", "Equations of astronomy" ]
43,758,806
https://en.wikipedia.org/wiki/Gus%20Crystal
Gus Crystal is a Russian manufacturer of glass (lead glass or so-called "crystal"). The company is the oldest surviving manufacturer of Russian crystal and was founded in 1756 on the Gus River. The company gave its name to the town of Gus-Khrustalny and its district. It was founded by Akim Maltsov, a merchant from the Oryol region. Since 2013 the plant has been known as the Gusevskaya Crystal Plant and is named after Akim Maltsov. History In the summer of 1756, Akim Maltsov, a merchant from the Oryol region, founded a glass factory in the Vladimir Province of the Moscow Governorate, near the Gus River. Initially, the factory produced only simple glasses and tumblers, but in 1830, the founder's heir, Ivan Maltsov, established crystal production, making it as high-quality as Bohemian crystal but more affordable. For a century and a half after its founding, the factory operated successfully and expanded. In the final years of the Russian Empire, the Maltsov heirs not only renovated the factory but also reconstructed much of the city, building red brick houses for workers that still stand today, as well as individual cottages for management and the St. George Cathedral. The construction of the cathedral involved the participation of Leonty Benois and Viktor Vasnetsov. Today, the cathedral houses a crystal museum, which displays thousands of unique pieces produced by the Gus Crystal Factory. After the October Revolution of 1917 and the resulting devastation, the factory ceased operations. Production was only resumed in 1923 after a visit to Gus-Khrustalny by Mikhail Kalinin and the allocation of special funding. During the Soviet era, the factory became known for producing faceted glasses, which were reputedly designed by Vera Mukhina. The factory produced them in quantities of tens of millions. At the same time, the factory also produced artistic glass, including multicolored glass, and glassblowers continued their work there. Additionally, the factory produced products incorporating colored Venetian threads. Recent history In the 1990s, the factory was privatized, with each workshop becoming its own legal entity. Each of these entities not only supplied products to the neighboring workshop but sold them to it, leading to a markup at each stage and ultimately making the final product's price uncompetitive. At the same time, criminal interests became involved with the factory, focusing on immediate profit rather than the development of the enterprise. As a result, one by one, the workshops declared themselves financially insolvent, and in 2000, the main factory declared bankruptcy as well. On January 19, 2012, the factory in its previous form ceased to exist, and the last hundred employees were laid off. On December 26, 2013, crystal production was resumed at the factory known as "Gusevsky Crystal Factory named after Akim Maltsev". New equipment was installed in the old workshop, and instead of traditional vases and glasses, the production of handcrafted crystal art pieces made to individual orders was established. References External links Official website Official website of the Gus Crystal plant Official website of the Administration of the Gus-Khrustalny city Glassmaking companies Companies established in 1756 Manufacturing companies of Russia Russian brands Companies nationalised by the Soviet Union Companies based in Vladimir Oblast Manufacturing companies established in 1856
Gus Crystal
[ "Materials_science", "Engineering" ]
673
[ "Glass engineering and science", "Glassmaking companies", "Engineering companies" ]
48,614,086
https://en.wikipedia.org/wiki/Calming%20signals
Calming signals is a term conceived by Norwegian dog trainer and canine ethologist Turid Rugaas to describe the patterns of behavior used by dogs interacting with each other in environments that cause heightened stress and when conveying their desires or intentions. The term has been used interchangeably with "appeasement signals." Calming signals, or appeasement signals, are communicative cues used by dogs to de-escalate aggressive encounters or to prevent the development of aggressive encounters completely. Calming signals are performed by one dog (the sender) and directed towards one or more individual(s) (the recipient(s)), which could be dogs or individuals of other species, such as humans. When calming signals are ignored, a dog may display warning signals of aggression, and this has the potential to escalate to outright conflict between individuals. The domestication of dogs by humans has significantly altered the behavioral patterns observed in ancestral species, such as the wolf (C. lupus). Dogs have developed changes in body language, as well as changes in auditory and olfactory displays, over the course of some 30,000 years, and many of these modified behavioral patterns, or calming signals, can differ in meaning depending on the intended signal receiver's species. Calming signals can be released by an individual voluntarily, or they can be an involuntary response to environmental stimuli as a result of stress-induced changes to body chemistry, such as the release of an odour from the body when anxious. History In the past, studies on social behavior in wolves have been used to provide insight into social behavior patterns in domesticated dogs. Although the domesticated dog (C. familiaris) shares a common ancestor with the wolf and may present certain similarities, distinct differences in morphology and in the environments in which the two species evolved can lead to inaccurate conclusions about communication behavior patterns in domesticated dogs when knowledge gained from the study of wolves is applied to them. Thus, treating ancestral and descendant species as a single group is not considered an appropriate method for studying calming signals in domesticated dogs. The threshold for aggressive behavior in domestic dogs differs from that of wolves. Most domestic dog breeds are less likely to engage in aggressive behavior than their ancestral counterparts, and are therefore more likely to display calming signals to defuse conflict. In dog breeds that differ greatly from wolf morphology, like pugs, some visual signals will be absent or highly modified as they no longer have the physical capacity or means to convey these signals. Neoteny can also account for the loss of certain visual signals in domestic dogs and the retention of novel signals over subsequent generations. Types of Calming Signals Dogs use visual, auditory, and olfactory indicators to communicate with both conspecifics and other species, such as humans. The majority of calming signals, and the best-studied, are visual, and they are sometimes accompanied by auditory cues (e.g. a sharp whine accompanying a yawn).
Examples of Visual Calming Signals Examples of behaviors classified as calming signals: Head turning Softening of the eyes Turning away Lip and/or nose licking Freezing of the body Slow body movements Displaying a play bow Sitting Lying down Yawning Sniffing the ground Walking in a curve Wagging the tail in a low position Reducing body size Licking the recipient's mouth Blinking Smacking of the lips Lifting a paw The behavioral patterns listed above have the potential to be used as calming signals, but are only classified as such in the appropriate context. For example, a dog lying down when resting would not be considered a calming signal, but a dog lying down when approached by another dog would be considered a calming signal. Thus, calming signals are context-dependent behavioral responses to a dog's environment. Not all calming signals have the same efficacy in de-escalating aggressive encounters or conveying a dog's intention, and dogs preferentially display certain calming signals over others depending on external factors such as the distance between the sender and the recipient of the signal and the familiarity of the recipient to the sender. A dog is most likely to display a calming signal when it is directly interacting with another dog and when the dogs are separated by a small distance. Lip-licking is a calming signal whose use is noted to increase in frequency as the distance between the sender and the recipient decreases. However, sniffing the ground and yawning, which are both considered calming signals, are most often displayed when the distance between the sender and the recipient increases. Calming signals that are most commonly displayed by dogs overall are freezing, licking of the nose, and turning of the body away from the source of the escalation (e.g. a dog baring its teeth or growling). Dog-Human Interactions Domestic dogs display interspecific signaling, particularly towards humans. Because domestic dogs co-inhabit the same environment as their owners/handlers, humans are their principal social partners, and there is a great level of interaction between the two. Licking of the lips and looking away are calming signal-categorized behaviors that are used by dogs in both conspecific and heterospecific interactions, and in both instances are thought to be used to appease the recipient. Lip licking is also used as a greeting behavior to establish a peaceful basis for future interactions. Understanding canine calming signals is crucial to experiencing positive interactions with dogs. Children under the age of six are least likely to correctly interpret auditory and visual calming signals displayed by dogs and, in the US, younger children have a greater probability of becoming victims of a dog attack. In cases where children cannot appropriately interpret and respond to calming signals, the dog-human interaction is likely to escalate, and the dog may exhibit aggressive behaviors, such as biting. Hugging can feel confusing and threatening to dogs, and it is therefore a common cause of dog bites to the faces of young children. Conspecific Interactions Calming signals are often used by dogs post-conflict to defuse aggressive behaviors and to regain a peaceful social environment. Dogs have evolved peacemaking social mechanisms to alleviate, prevent, or resolve conflicts. Some of these behavior mechanisms are calming signals.
Social groups of dogs display two types of post-conflict calming signal behavior patterns: calming signals may be exchanged between the two opponents in the conflict (reconciliation), or between a third-party member of the social group and one of the opponents (third-party initiated post-conflict affiliation). Familiarity and distance between two individuals affect the frequency of use of calming signals and the types of calming signals used. Calming signals are used between dogs to prevent the escalation of an agonistic encounter. Intraspecific calming signals can be voluntary, such as licking the lips, or involuntary, such as the release of odors from glands during high-stress interactions. In both cases, the recipient receives the signal, understands its meaning, and acts on this information, often taking action to mitigate the stressful environment by changing its body language or demeanor. Calming signals are not displayed in intraspecific interactions when the level of aggression or threat exceeds the aggression threshold of the sender. In these cases, dogs are more likely to rely on submissive behaviors than calming signals. Calming signals are only useful to a dog when there is a great enough probability that the direction of the encounter can be changed to de-escalate aggression. References Dog training and behavior Ethology
Calming signals
[ "Biology" ]
1,490
[ "Behavioural sciences", "Ethology", "Behavior" ]
48,614,306
https://en.wikipedia.org/wiki/National%20Network%20Management%20Centre
The National Network Management Centre is the main national network operations centre of BT Group, situated in Shropshire. History BT moved to the countryside site in the 1980s. The NMC is also known as the Customer Experience and Management Centre, the International Network Management Centre (INMC), or the National Control Centre (NCC). The BT Global Media Network delivers television content around the world. BT Retail split into BT Consumer and BT Business. The transformation of BT's network to digital began in 1985 and finished in July 1990. BT's Worldwide Network Management Centre at Oswestry opened on 5 September 1990, at a cost of £4m. The site appeared on the BBC Two documentary Genius of Invention. Structure It is situated on the A495, within a few hundred yards of the road's western terminus at the roundabout with the A5, in the west of the parish of Whittington. The site has a staff of around 440. Jamie Ford, the former chief executive of Plusnet, runs BT IT Services. There were originally two buildings - A and B. Building C was built to link these together and forms the hexagonal reception building. Building D was built to be the new Network Management Centre, which has now been superseded by the New Wave Building. Display screen The site has a large display screen, built by Synelec (bought by Planar Systems), made from 140 composite screens. The screen is ten feet high and seventy feet wide. Function The site is the home of the UK's speaking clock. BT IT Services, headquartered at Barlborough in Derbyshire on the A616, has a main operation in the building. The centre monitors traffic on BT's network across the UK, including 0800, 0845 and 999 numbers. It also monitors televotes provided by RIDE (Recorded Information Distribution Equipment). RIDE is accessed via the SOAP protocol. At peak periods it implements call gapping load control. BT Wholesale controls its traffic from the site. Every two minutes the site contacts around 700 of BT's telephone exchanges to find out how busy they are. 80% of BT's network consists of optical fibre. See also Institute of Telecommunications Professionals Telecommunications in the United Kingdom Network performance Erlang distribution Vodafone World Headquarters Yarnfield Park in Yarnfield in Staffordshire west of the M6 Stafford services, the former main BT training centre, which now has a training site for telegraph poles and fibre References External links BT IT Services Independent October 2011 British Telecom buildings and structures Buildings and structures in Shropshire History of Shropshire Network management Organisations based in Shropshire Science and technology in Shropshire Telecommunications company headquarters in the United Kingdom Teletraffic 1980s establishments in the United Kingdom
National Network Management Centre
[ "Engineering" ]
554
[ "Computer networks engineering", "Network management" ]
36,698,220
https://en.wikipedia.org/wiki/C19H24N2O3
The molecular formula C19H24N2O3 (molar mass: 328.4 g/mol, exact mass: 328.1787 u) may refer to: Labetalol Omzotirome TRC-150094 Molecular formulas
C19H24N2O3
[ "Physics", "Chemistry" ]
55
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
36,699,162
https://en.wikipedia.org/wiki/Cambridge%20Algebra%20System
Cambridge Algebra System (CAMAL) is a computer algebra system written at the University of Cambridge by David Barton, Steve Bourne, and John Fitch. It was initially used for computations in celestial mechanics and general relativity. The original code was written in assembly language for the Titan computer. In 1973, when Titan was replaced with an IBM 370/85, it was rewritten in ALGOL 68C and later in BCPL, which allowed it to run on IBM mainframes and assorted microcomputers. References Further reading Computer algebra systems
Cambridge Algebra System
[ "Mathematics" ]
105
[ "Computer algebra systems", "Mathematical software" ]
36,699,980
https://en.wikipedia.org/wiki/Sobolev%20spaces%20for%20planar%20domains
In mathematics, Sobolev spaces for planar domains are one of the principal techniques used in the theory of partial differential equations for solving the Dirichlet and Neumann boundary value problems for the Laplacian in a bounded domain in the plane with smooth boundary. The methods use the theory of bounded operators on Hilbert space. They can be used to deduce regularity properties of solutions and to solve the corresponding eigenvalue problems. Sobolev spaces with boundary conditions Let be a bounded domain with smooth boundary. Since is contained in a large square in , it can be regarded as a domain in by identifying opposite sides of the square. The theory of Sobolev spaces on can be found in , an account which is followed in several later textbooks such as and . For an integer, the (restricted) Sobolev space is defined as the closure of in the standard Sobolev space . . Vanishing properties on boundary: For the elements of are referred to as " functions on which vanish with their first derivatives on ." In fact if agrees with a function in , then is in . Let be such that in the Sobolev norm, and set . Thus in . Hence for and , By Green's theorem this implies where with the unit normal to the boundary. Since such form a dense subspace of , it follows that on . Support properties: Let be the complement of and define restricted Sobolev spaces analogously for . Both sets of spaces have a natural pairing with . The Sobolev space for is the annihilator in the Sobolev space for of and that for is the annihilator of . In fact this is proved by locally applying a small translation to move the domain inside itself and then smoothing by a smooth convolution operator. Suppose in annihilates . By compactness, there are finitely many open sets covering such that the closure of is disjoint from and each is an open disc about a boundary point such that in small translations in the direction of the normal vector carry into . Add an open with closure in to produce a cover of and let be a partition of unity subordinate to this cover. If translation by is denoted by , then the functions tend to as decreases to and still lie in the annihilator, indeed they are in the annihilator for a larger domain than , the complement of which lies in . Convolving by smooth functions of small support produces smooth approximations in the annihilator of a slightly smaller domain still with complement in . These are necessarily smooth functions of compact support in . Further vanishing properties on the boundary: The characterization in terms of annihilators shows that lies in if (and only if) it and its derivatives of order less than vanish on . In fact can be extended to by setting it to be on . This extension defines an element in using the formula for the norm Moreover satisfies for g in . Duality: For , define to be the orthogonal complement of in . Let be the orthogonal projection onto , so that is the orthogonal projection onto . When , this just gives . If and , then This implies that under the pairing between and , and are each other's duals. Approximation by smooth functions: The image of is dense in for . This is obvious for since the sum + is dense in . Density for follows because the image of is dense in and annihilates . Canonical isometries: The operator gives an isometry of into and of onto . In fact the first statement follows because it is true on . 
That is an isometry on follows using the density of in : for we have: Since the adjoint map between the duals can by identified with this map, it follows that is a unitary map. Application to Dirichlet problem Invertibility of The operator defines an isomorphism between and . In fact it is a Fredholm operator of index . The kernel of in consists of constant functions and none of these except zero vanish on the boundary of . Hence the kernel of is and is invertible. In particular the equation has a unique solution in for in . Eigenvalue problem Let be the operator on defined by where is the inclusion of in and of in , both compact operators by Rellich's theorem. The operator is compact and self-adjoint with for all . By the spectral theorem, there is a complete orthonormal set of eigenfunctions in with Since , lies in . Setting , the are eigenfunctions of the Laplacian: Sobolev spaces without boundary condition To determine the regularity properties of the eigenfunctions and solutions of enlargements of the Sobolev spaces have to be considered. Let be the space of smooth functions on which with their derivatives extend continuously to . By Borel's lemma, these are precisely the restrictions of smooth functions on . The Sobolev space is defined to the Hilbert space completion of this space for the norm This norm agrees with the Sobolev norm on so that can be regarded as a closed subspace of . Unlike , is not naturally a subspace of , but the map restricting smooth functions from to is continuous for the Sobolev norm so extends by continuity to a map . Invariance under diffeomorphism: Any diffeomorphism between the closures of two smooth domains induces an isomorphism between the Sobolev space. This is a simple consequence of the chain rule for derivatives. Extension theorem: The restriction of to the orthogonal complement of its kernel defines an isomorphism onto . The extension map is defined to be the inverse of this map: it is an isomorphism (not necessarily norm preserving) of onto the orthogonal complement of such that . On , it agrees with the natural inclusion map. Bounded extension maps of this kind from to were constructed first constructed by Hestenes and Lions. For smooth curves the Seeley extension theorem provides an extension which is continuous in all the Sobolev norms. A version of the extension which applies in the case where the boundary is just a Lipschitz curve was constructed by Calderón using singular integral operators and generalized by . It is sufficient to construct an extension for a neighbourhood of a closed annulus, since a collar around the boundary is diffeomorphic to an annulus with a closed interval in . Taking a smooth bump function with , equal to 1 near the boundary and 0 outside the collar, will provide an extension on . On the annulus, the problem reduces to finding an extension for in . Using a partition of unity the task of extending reduces to a neighbourhood of the end points of . Assuming 0 is the left end point, an extension is given locally by Matching the first derivatives of order k or less at 0, gives This matrix equation is solvable because the determinant is non-zero by Vandermonde's formula. It is straightforward to check that the formula for , when appropriately modified with bump functions, leads to an extension which is continuous in the above Sobolev norm. Restriction theorem: The restriction map is surjective with . 
This is an immediate consequence of the extension theorem and the support properties for Sobolev spaces with boundary condition. Duality: is naturally the dual of H−k0(Ω). Again this is an immediate consequence of the restriction theorem. Thus the Sobolev spaces form a chain: The differentiation operators carry each Sobolev space into the larger one with index 1 less. Sobolev embedding theorem: is contained in . This is an immediate consequence of the extension theorem and the Sobolev embedding theorem for . Characterization: consists of in such that all the derivatives ∂αf lie in for |α| ≤ k. Here the derivatives are taken within the chain of Sobolev spaces above. Since is weakly dense in , this condition is equivalent to the existence of functions fα such that To prove the characterization, note that if is in , then lies in Hk−|α|(Ω) and hence in . Conversely the result is well known for the Sobolev spaces : the assumption implies that the is in and the corresponding condition on the Fourier coefficients of shows that lies in . Similarly the result can be proved directly for an annulus . In fact by the argument on the restriction of to any smaller annulus [−δ',δ'] × T lies in : equivalently the restriction of the function lies in for . On the other hand in as , so that must lie in . The case for a general domain reduces to these two cases since can be written as with ψ a bump function supported in such that is supported in a collar of the boundary. Regularity theorem: If in has both derivatives and in then lies in . This is an immediate consequence of the characterization of above. In fact if this is true even when satisfied at the level of distributions: if there are functions g, h in such that (g,φ) = (f, φx) and (h,φ) = (f,φy) for φ in , then is in . Rotations on an annulus: For an annulus , the extension map to is by construction equivariant with respect to rotations in the second variable, On it is known that if is in , then the difference quotient in ; if the difference quotients are bounded in Hk then ∂yf lies in . Both assertions are consequences of the formula: These results on imply analogous results on the annulus using the extension. Regularity for Dirichlet problem Regularity for dual Dirichlet problem If with in and in with , then lies in . Take a decomposition with supported in and supported in a collar of the boundary. Standard Sobolev theory for can be applied to : elliptic regularity implies that it lies in and hence . lies in of a collar, diffeomorphic to an annulus, so it suffices to prove the result with a collar and replaced by The proof proceeds by induction on , proving simultaneously the inequality for some constant depending only on . It is straightforward to establish this inequality for , where by density can be taken to be smooth of compact support in : The collar is diffeomorphic to an annulus. The rotational flow on the annulus induces a flow on the collar with corresponding vector field . Thus corresponds to the vector field . The radial vector field on the annulus is a commuting vector field which on the collar gives a vector field proportional to the normal vector field. The vector fields and commute. The difference quotients can be formed for the flow . The commutators are second order differential operators from to . Their operators norms are uniformly bounded for near ; for the computation can be carried out on the annulus where the commutator just replaces the coefficients of by their difference quotients composed with . 
On the other hand, lies in , so the inequalities for apply equally well for : The uniform boundedness of the difference quotients implies that lies in with It follows that lies in where is the vector field Moreover, satisfies a similar inequality to . Let be the orthogonal vector field It can also be written as for some smooth nowhere vanishing function on a neighbourhood of the collar. It suffices to show that lies in . For then so that and lie in and must lie in . To check the result on , it is enough to show that and lie in . Note that are vector fields. But then with all terms on the right hand side in . Moreover, the inequalities for show that Hence Smoothness of eigenfunctions It follows by induction from the regularity theorem for the dual Dirichlet problem that the eigenfunctions of in lie in . Moreover, any solution of with in and in must have in . In both cases by the vanishing properties, the eigenfunctions and vanish on the boundary of . Solving the Dirichlet problem The dual Dirichlet problem can be used to solve the Dirichlet problem: By Borel's lemma is the restriction of a function in . Let be the smooth solution of with on . Then solves the Dirichlet problem. By the maximal principle, the solution is unique. Application to smooth Riemann mapping theorem The solution to the Dirichlet problem can be used to prove a strong form of the Riemann mapping theorem for simply connected domains with smooth boundary. The method also applies to a region diffeomorphic to an annulus. For multiply connected regions with smooth boundary have given a method for mapping the region onto a disc with circular holes. Their method involves solving the Dirichlet problem with a non-linear boundary condition. They construct a function such that: is harmonic in the interior of ; On we have: , where is the curvature of the boundary curve, is the derivative in the direction normal to and is constant on each boundary component. gives a proof of the Riemann mapping theorem for a simply connected domain with smooth boundary. Translating if necessary, it can be assumed that . The solution of the Dirichlet problem shows that there is a unique smooth function on which is harmonic in and equals on . Define the Green's function by . It vanishes on and is harmonic on away from . The harmonic conjugate of is the unique real function on such that is holomorphic. As such it must satisfy the Cauchy–Riemann equations: The solution is given by where the integral is taken over any path in . It is easily verified that and exist and are given by the corresponding derivatives of . Thus is a smooth function on , vanishing at . By the Cauchy-Riemann is smooth on , holomorphic on and . The function is only defined up to multiples of , but the function is a holomorphic on and smooth on . By construction, and for . Since has winding number , so too does . On the other hand, only for where there is a simple zero. So by the argument principle assumes every value in the unit disc, , exactly once and does not vanish inside . To check that the derivative on the boundary curve is non-zero amounts to computing the derivative of , i.e. the derivative of should not vanish on the boundary curve. By the Cauchy-Riemann equations these tangential derivative are up to a sign the directional derivative in the direction of the normal to the boundary. But vanishes on the boundary and is strictly negative in since . The Hopf lemma implies that the directional derivative of in the direction of the outward normal is strictly positive. 
So on the boundary curve, has nowhere vanishing derivative. Since the boundary curve has winding number one, defines a diffeomorphism of the boundary curve onto the unit circle. Accordingly, is a smooth diffeomorphism, which restricts to a holomorphic map and a smooth diffeomorphism between the boundaries. Similar arguments can be applied to prove the Riemann mapping theorem for a doubly connected domain bounded by simple smooth curves (the inner curve) and (the outer curve). By translating we can assume 1 lies on the outer boundary. Let be the smooth solution of the Dirichlet problem with on the outer curve and on the inner curve. By the maximum principle for in and so by the Hopf lemma the normal derivatives of are negative on the outer curve and positive on the inner curve. The integral of over the boundary is zero by Stokes' theorem so the contributions from the boundary curves cancel. On the other hand, on each boundary curve the contribution is the integral of the normal derivative along the boundary. So there is a constant such that satisfies on each boundary curve. The harmonic conjugate of can again be defined by and is well-defined up to multiples of . The function is smooth on and holomorphic in . On the outer curve and on the inner curve . The tangential derivatives on the outer curves are nowhere vanishing by the Cauchy-Riemann equations, since the normal derivatives are nowhere vanishing. The normalization of the integrals implies that restricts to a diffeomorphism between the boundary curves and the two concentric circles. Since the images of outer and inner curve have winding number and about any point in the annulus, an application of the argument principle implies that assumes every value within the annulus exactly once; since that includes multiplicities, the complex derivative of is nowhere vanishing in . This is a smooth diffeomorphism of onto the closed annulus , restricting to a holomorphic map in the interior and a smooth diffeomorphism on both boundary curves. Trace map The restriction map extends to a continuous map for . In fact so the Cauchy–Schwarz inequality yields where, by the integral test, The map is onto since a continuous extension map can be constructed from to . In fact set where Thus . If g is smooth, then by construction Eg restricts to g on 1 × T. Moreover, E is a bounded linear map since It follows that there is a trace map τ of Hk(Ω) onto Hk − 1/2(∂Ω). Indeed, take a tubular neighbourhood of the boundary and a smooth function ψ supported in the collar and equal to 1 near the boundary. Multiplication by ψ carries functions into Hk of the collar, which can be identified with Hk of an annulus for which there is a trace map. The invariance under diffeomorphisms (or coordinate change) of the half-integer Sobolev spaces on the circle follows from the fact that an equivalent norm on Hk + 1/2(T) is given by It is also a consequence of the properties of τ and E (the "trace theorem"). In fact any diffeomorphism f of T induces a diffeomorphism F of T2 by acting only on the second factor. Invariance of Hk(T2) under the induced map F* therefore implies invariance of Hk − 1/2(T) under f*, since f* = τ ∘ F* ∘ E. Further consequences of the trace theorem are the two exact sequences and where the last map takes f in H2(Ω) to f|∂Ω and ∂nf|∂Ω. 
There are generalizations of these sequences to Hk(Ω) involving higher powers of the normal derivative in the trace map: The trace map to takes f to Abstract formulation of boundary value problems The Sobolev space approach to the Neumann problem cannot be phrased quite as directly as that for the Dirichlet problem. The main reason is that for a function in , the normal derivative cannot be a priori defined at the level of Sobolev spaces. Instead an alternative formulation of boundary value problems for the Laplacian on a bounded region in the plane is used. It employs Dirichlet forms, sesqulinear bilinear forms on , or an intermediate closed subspace. Integration over the boundary is not involved in defining the Dirichlet form. Instead, if the Dirichlet form satisfies a certain positivity condition, termed coerciveness, solution can be shown to exist in a weak sense, so-called "weak solutions". A general regularity theorem than implies that the solutions of the boundary value problem must lie in , so that they are strong solutions and satisfy boundary conditions involving the restriction of a function and its normal derivative to the boundary. The Dirichlet problem can equally well be phrased in these terms, but because the trace map is already defined on , Dirichlet forms do not need to be mentioned explicitly and the operator formulation is more direct. A unified discussion is given in and briefly summarised below. It is explained how the Dirichlet problem, as discussed above, fits into this framework. Then a detailed treatment of the Neumann problem from this point of view is given following . The Hilbert space formulation of boundary value problems for the Laplacian on a bounded region in the plane proceeds from the following data: A closed subspace . A Dirichlet form for given by a bounded Hermitian bilinear form defined for such that for . is coercive, i.e. there is a positive constant and a non-negative constant such that . A weak solution of the boundary value problem given initial data in is a function u satisfying for all g. For both the Dirichlet and Neumann problem For the Dirichlet problem . In this case By the trace theorem the solution satisfies in . For the Neumann problem is taken to be . Application to Neumann problem The classical Neumann problem on consists in solving the boundary value problem Green's theorem implies that for Thus if in and satisfies the Neumann boundary conditions, , and so is constant in . Hence the Neumann problem has a unique solution up to adding constants. Consider the Hermitian form on defined by Since is in duality with , there is a unique element in such that The map is an isometry of onto , so in particular is bounded. In fact So On the other hand, any in defines a bounded conjugate-linear form on sending to . By the Riesz–Fischer theorem, there exists such that Hence and so is surjective. Define a bounded linear operator on by where is the map , a compact operator, and is the map , its adjoint, so also compact. The operator has the following properties: is a contraction since it is a composition of contractions is compact, since and are compact by Rellich's theorem is self-adjoint, since if , they can be written with so has positive spectrum and kernel , for and implies and hence . There is a complete orthonormal basis of consisting of eigenfunctions of . Thus with and decreasing to . The eigenfunctions all lie in since the image of lies in . The are eigenfunctions of with Thus are non-negative and increase to . 
The eigenvalue occurs with multiplicity one and corresponds to the constant function. For if satisfies , then so is constant. Regularity for Neumann problem Weak solutions are strong solutions The first main regularity result shows that a weak solution expressed in terms of the operator and the Dirichlet form is a strong solution in the classical sense, expressed in terms of the Laplacian and the Neumann boundary conditions. Thus if with , then , satisfies and . Moreover, for some constant independent of , Note that since Take a decomposition with supported in and supported in a collar of the boundary. The operator is characterized by Then so that The function and are treated separately, being essentially subject to usual elliptic regularity considerations for interior points while requires special treatment near the boundary using difference quotients. Once the strong properties are established in terms of and the Neumann boundary conditions, the "bootstrap" regularity results can be proved exactly as for the Dirichlet problem. Interior estimates The function lies in where is a region with closure in . If and By continuity the same holds with replaced by and hence . So Hence regarding as an element of , . Hence . Since for , we have . Moreover, so that Boundary estimates The function is supported in a collar contained in a tubular neighbourhood of the boundary. The difference quotients can be formed for the flow and lie in , so the first inequality is applicable: The commutators are uniformly bounded as operators from to . This is equivalent to checking the inequality for , smooth functions on a collar. This can be checked directly on an annulus, using invariance of Sobolev spaces under dffeomorphisms and the fact that for the annulus the commutator of with a differential operator is obtained by applying the difference operator to the coefficients after having applied to the function: Hence the difference quotients are uniformly bounded, and therefore with Hence and satisfies a similar inequality to : Let be the orthogonal vector field. As for the Dirichlet problem, to show that , it suffices to show that . To check this, it is enough to show that . As before are vector fields. On the other hand, for , so that and define the same distribution on . Hence Since the terms on the right hand side are pairings with functions in , the regularity criterion shows that . Hence since both terms lie in and have the same inner products with 's. Moreover, the inequalities for show that Hence It follows that . Moreover, Neumann boundary conditions Since , Green's theorem is applicable by continuity. Thus for , Hence the Neumann boundary conditions are satisfied: where the left hand side is regarded as an element of and hence . Regularity of strong solutions The main result here states that if and , then and for some constant independent of . Like the corresponding result for the Dirichlet problem, this is proved by induction on . For , is also a weak solution of the Neumann problem so satisfies the estimate above for . The Neumann boundary condition can be written Since commutes with the vector field corresponding to the period flow , the inductive method of proof used for the Dirichlet problem works equally well in this case: for the difference quotients preserve the boundary condition when expressed in terms of . Smoothness of eigenfunctions It follows by induction from the regularity theorem for the Neumann problem that the eigenfunctions of in lie in . 
Moreover, any solution of with in and in must have in . In both cases by the vanishing properties, the normal derivatives of the eigenfunctions and vanish on . Solving the associated Neumann problem The method above can be used to solve the associated Neumann boundary value problem: By Borel's lemma is the restriction of a function . Let be a smooth function such that near the boundary. Let be the solution of with . Then solves the boundary value problem. Notes References Partial differential equations Harmonic analysis Operator theory Functional analysis
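For reference, one standard convention for the Sobolev norms invoked repeatedly above, stated here only as an assumed (and equivalent) normalisation rather than the exact formula used in the original account: on the torus, and on the circle for the half-integer trace spaces,
\|f\|_{H^k(\mathbf{T}^2)}^2 = \sum_{n \in \mathbf{Z}^2} (1 + |n|^2)^k \, |\hat f(n)|^2, \qquad \|g\|_{H^{k - 1/2}(\mathbf{T})}^2 = \sum_{m \in \mathbf{Z}} (1 + m^2)^{k - 1/2} \, |\hat g(m)|^2,
where \hat f(n) and \hat g(m) denote Fourier coefficients; the second expression is the kind of equivalent norm referred to in the discussion of the trace map and of its invariance under diffeomorphisms of the circle.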
Sobolev spaces for planar domains
[ "Mathematics" ]
5,304
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
36,700,195
https://en.wikipedia.org/wiki/Earthquake%20weather
Earthquake weather is a type of weather popularly believed to precede earthquakes. History Since ancient times, the notion that weather can somehow foreshadow coming seismic activity has been the topic of much discussion and debate. Geologist Russell Robinson has described "earthquake weather" as one of the most common pseudoscientific methods of predicting earthquakes. Aristotle proposed in the 4th century BC that earthquakes were caused by winds trapped in caves. Small tremors were thought to have been caused by air pushing on the cavern roofs, and large ones by the air breaking the surface. This theory led to a belief in 'earthquake weather', that because a large amount of air was trapped underground, the weather would be hot and calm before an earthquake. A later theory stated that earthquakes occurred in calm, cloudy conditions, and were usually preceded by strong winds, fireballs, and meteors. A modern theory proposes that certain cloud formations may be used to predict earthquakes; however, this idea is rejected by most geologists. Background on earthquakes An earthquake is caused by a sudden slip on a fault. Tectonic plates are always slowly moving, but they can get stuck at their edges due to friction. When the stress on the edge of a tectonic plate overcomes the friction, there is an earthquake that releases energy in waves that travel through the Earth's crust and cause the shaking that is felt. For example, in California, there are two plates, the Pacific plate and the North American plate. The Pacific plate consists of most of the Pacific Ocean floor, and also includes Baja California and the California coastline. The North American plate comprises most of the North American continent, including the inland parts of California, as well as parts of the Atlantic and Arctic Oceans' floors. The primary boundary between these two plates is the San Andreas Fault. The San Andreas Fault is more than 800 miles long and extends to depths of at least 10 miles. Many other smaller faults like the Hayward (San Francisco Bay Area) and the San Jacinto (Southern California) join with the San Andreas to form the San Andreas Fault Zone. The Pacific plate grinds northwestward past the North American plate at a rate of about two inches per year. Earthquake cloud Earthquake clouds are clouds claimed to be signs of imminent earthquakes. They have been described in antiquity: In chapter 32 of his work Brihat Samhita, Indian scholar Varahamihira (505–587) discussed a number of signs warning of earthquakes, including extraordinary clouds occurring a week before the earthquake. In modern times, some scientists have claimed to accurately predict earthquake occurrences by observing clouds. However, these claims have very little support in the scientific community. Psychology It has been proposed by W. J. Humphreys that earthquake weather is not of geological causes, but merely a psychological manifestation. Humphreys argued that "the general state of irritation and sensitiveness developed in us during the hot, calm, perhaps sultry weather given this name, inclines us to sharper observation of earthquake disturbances and accentuates the impression they make on our senses, so that we retain more vivid memories of such quakes while possibly over-looking entirely the occurrences on other more soothing days". Scientific validity Some recent research has found a correlation between a sudden relative spike in atmospheric temperature 2–5 days before an earthquake. 
It is speculated that this rise is caused by the movement of ions within the Earth's crust, related to an oncoming earthquake. Furthermore, this relative temperature change would not cause any single recognizable weather pattern that could be labelled "earthquake weather". At the 2011 American Geophysical Union Fall Meeting, Shimon Wdowinski announced an apparent temporal connection between tropical cyclones and earthquakes. In April 2013, a team of seismologists at the Georgia Institute of Technology re-examined data from the 2011 Virginia earthquake using pattern-recognition software and found a correlation between Hurricane Irene's nearby passage and an unexpected rise in the number of aftershocks. See also Earthquake light Earthquake prediction References External links Weather lore Anomalous weather Earthquake and seismic risk mitigation
Earthquake weather
[ "Physics", "Engineering" ]
820
[ "Structural engineering", "Physical phenomena", "Weather lore", "Weather", "Anomalous weather", "Earthquake and seismic risk mitigation" ]
36,701,390
https://en.wikipedia.org/wiki/Modular%20vehicle
A modular vehicle is one in which substantial components of the vehicle are interchangeable. This modularity is intended to make repairs and maintenance easier, or to allow the vehicle to be reconfigured to suit different functions. Another application of modular vehicle design is to enable the exchange of batteries in an electric vehicle. In a modular electric vehicle, the power system, wheels and suspension can be contained in a single module or chassis. When the batteries need recharging, the vehicle's body is lifted off and placed onto a fresh power module. By using this modular vehicle system, the vehicle's batteries do not have to be removed or reinstalled, and their connections remain intact. History of the modern modular vehicle The world's first road-licensed quick-change modular electric vehicle, based on a patent awarded to Dr Gordon E Dower in 2000, was shown at the World Electric Vehicle Association 2003 Electric Vehicle Symposium EVS-20 in Long Beach, California, USA. Dower described the vehicle's two parts as its motorized deck, shortened to Modek, and its "containing module" or Ridon. When attached to each other, the vehicle thus formed was dubbed the Ridek. Mechanical connections between the modules for braking and steering automatically engage when the body is lowered onto the chassis. In 2004, General Motors attempted to patent a modular vehicle called Autonomy, but the attempt was unsuccessful because Dower’s patent already existed. A team at GM did, however, continue to work on Autonomy, which was intended to be powered by a hydrogen fuel cell. They unveiled a non-drivable version of their modular vehicle in January 2002 at the Detroit Auto Show. GM unveiled a drivable prototype, called Hy-wire, at the Paris Auto Show in September 2002. The name referred to the hydrogen fuel and the "drive by wire" system that electronically connected the vehicle modules for steering, braking and controlling the four wheel motors. Hy-wire did not go into production. In the 2010s, a number of modular platforms were developed by car manufacturers. Geely Auto developed the Compact Modular Architecture platform (2017), the B-segment Modular Architecture platform (2018), and the Sustainable Experience Architecture platform (2021). PSA Group and Dongfeng developed the Common Modular Platform (2018). Flexibility Modular vehicles make it possible to use different types of bodies, e.g. sedan, sports car or pickup truck, on one standardized chassis. Also, the modular chassis, with its batteries and motor, is relatively easy to work on, since there is no vehicle body to impede access. See also Electric vehicles Modular design Rolling chassis Skateboard (automotive platform) References External links Chernoff, Adrian B: Hy-wire webpage 2018 article on various modular concept vehicles Automotive styling features Automotive technologies Automotive engineering Electric vehicle technologies Modular design
Modular vehicle
[ "Engineering" ]
577
[ "Systems engineering", "Automotive engineering", "Mechanical engineering by discipline", "Design", "Modular design" ]
36,703,918
https://en.wikipedia.org/wiki/Cantelli%27s%20inequality
In probability theory, Cantelli's inequality (also called the Chebyshev-Cantelli inequality and the one-sided Chebyshev inequality) is an improved version of Chebyshev's inequality for one-sided tail bounds. The inequality states that, for λ > 0, Pr(X − E[X] ≥ λ) ≤ σ² / (σ² + λ²), where X is a real-valued random variable, Pr is the probability measure, E[X] is the expected value of X, and σ² is the variance of X. Applying the Cantelli inequality to −X gives a bound on the lower tail, Pr(X − E[X] ≤ −λ) ≤ σ² / (σ² + λ²) for λ > 0. While the inequality is often attributed to Francesco Paolo Cantelli, who published it in 1928, it originates in Chebyshev's work of 1874. When bounding the event that a random variable deviates from its mean in only one direction (positive or negative), Cantelli's inequality gives an improvement over Chebyshev's inequality. The Chebyshev inequality has "higher moments versions" and "vector versions", and so does the Cantelli inequality. Comparison to Chebyshev's inequality For one-sided tail bounds, Cantelli's inequality is better, since Chebyshev's inequality can only give Pr(X − E[X] ≥ λ) ≤ Pr(|X − E[X]| ≥ λ) ≤ σ² / λ². On the other hand, for two-sided tail bounds, Cantelli's inequality gives Pr(|X − E[X]| ≥ λ) ≤ 2σ² / (σ² + λ²), which is always worse than Chebyshev's inequality (when λ ≥ σ; otherwise, both inequalities bound a probability by a value greater than one, and so are trivial). Generalizations Various stronger inequalities can be shown. He, Zhang, and Zhang showed (Corollary 2.3) when and : In the case this matches a bound in Berger's "The Fourth Moment Method", This improves over Cantelli's inequality in that we can get a non-zero lower bound, even when . See also Chebyshev's inequality Paley–Zygmund inequality References Probabilistic inequalities
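As an illustration (not part of the original article), the one-sided bound can be checked numerically against a sampled distribution. The helper names and the choice of an exponential test distribution below are arbitrary; this is a minimal sketch, not a reference implementation.

import numpy as np

def cantelli_bound(sigma2, lam):
    # One-sided Cantelli bound: Pr(X - mu >= lam) <= sigma^2 / (sigma^2 + lam^2), for lam > 0.
    return sigma2 / (sigma2 + lam ** 2)

def chebyshev_bound(sigma2, lam):
    # Two-sided Chebyshev bound applied to one tail: Pr(|X - mu| >= lam) <= sigma^2 / lam^2.
    return sigma2 / lam ** 2

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # mean 1, variance 1
mu, sigma2 = x.mean(), x.var()
for lam in (1.0, 2.0, 3.0):
    empirical = (x - mu >= lam).mean()
    print(f"lam={lam}: empirical={empirical:.4f}, "
          f"Cantelli={cantelli_bound(sigma2, lam):.4f}, Chebyshev={chebyshev_bound(sigma2, lam):.4f}")

In every row the empirical tail probability should sit below the Cantelli bound, which for this one-sided event is smaller than the Chebyshev bound for every positive lam.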
Cantelli's inequality
[ "Mathematics" ]
390
[ "Theorems in probability theory", "Probabilistic inequalities", "Inequalities (mathematics)" ]
34,121,965
https://en.wikipedia.org/wiki/Freudenthal%20spectral%20theorem
In mathematics, the Freudenthal spectral theorem is a result in Riesz space theory proved by Hans Freudenthal in 1936. It roughly states that any element dominated by a positive element in a Riesz space with the principal projection property can in a sense be approximated uniformly by simple functions. Numerous well-known results may be derived from the Freudenthal spectral theorem. The well-known Radon–Nikodym theorem, the validity of the Poisson formula and the spectral theorem from the theory of normal operators can all be shown to follow as special cases of the Freudenthal spectral theorem. Statement Let e be any positive element in a Riesz space E. A positive element p in E is called a component of e if p ∧ (e − p) = 0. If p1, ..., pn are pairwise disjoint components of e, any real linear combination of p1, ..., pn is called an e-simple function. The Freudenthal spectral theorem states: Let E be any Riesz space with the principal projection property and e any positive element in E. Then for any element f in the principal ideal generated by e, there exist sequences (sn) and (tn) of e-simple functions, such that (sn) is monotone increasing and converges e-uniformly to f, and (tn) is monotone decreasing and converges e-uniformly to f. Relation to the Radon–Nikodym theorem Let (X, Σ) be a measure space and M the real space of signed σ-additive measures on (X, Σ). It can be shown that M is a Dedekind complete Banach lattice with the total variation norm, and hence has the principal projection property. For any positive measure μ, μ-simple functions (as defined above) can be shown to correspond exactly to measurable simple functions on (X, Σ) (in the usual sense). Moreover, since by the Freudenthal spectral theorem any measure ν in the band generated by μ can be monotonously approximated from below by μ-simple functions, by Lebesgue's monotone convergence theorem ν can be shown to correspond to an L1(μ) function, and this establishes an isometric lattice isomorphism between the band generated by μ and the Banach lattice L1(μ). See also Radon–Nikodym theorem References Theorems in functional analysis
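A sketch of the correspondence just described, with notation chosen here purely for illustration (the article's own symbols were not preserved): for a positive measure μ, the components of μ are the restricted measures μ(· ∩ A) for measurable sets A, so a μ-simple element and its associated simple function take the form
s_n = \sum_i \alpha_{n,i} \, \mu(\,\cdot\, \cap A_{n,i}) \quad \longleftrightarrow \quad g_n = \sum_i \alpha_{n,i} \, \mathbf{1}_{A_{n,i}}.
If ν is a positive measure in the band generated by μ and s_n increases to ν as in the Freudenthal approximation, then g_n increases to some g, and the monotone convergence theorem gives g ∈ L¹(μ) with
\nu(A) = \int_A g \, \mathrm{d}\mu \quad \text{for all measurable } A,
so that g plays the role of the Radon–Nikodym derivative dν/dμ.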
Freudenthal spectral theorem
[ "Mathematics" ]
435
[ "Theorems in mathematical analysis", "Theorems in functional analysis" ]
34,122,362
https://en.wikipedia.org/wiki/Fluence%20response
Both fluence rates and irradiance of light are important signals for plants and are detected by phytochrome. Exploiting different modes of photoreversibility in this molecule allow plants to respond to different levels of light. There are three main types of fluence rate governed responses that are brought about by different levels of light. Very low fluence responses As the name would suggest this type of response is triggered by very low levels of light and is thought to be mediated by phytochrome A. It can be initiated by fluences as low as 0.0001μmol/m2 up to about 0.05μmol/m2. Germination of Arabidopsis can be induced with very low levels of red light, as can oat seedlings. Such low levels of light are sufficient for inducing this response, since they only convert 0.02% of the phytochrome to its active form. The backward reaction by far red light is only 98% efficient making the conversion non-photoreversible and allowing the response to proceed. VLFRs can also be induced by making up the required fluence by brief flashes of light. Since this depends on light levels and time it is known as the law of reciprocity. Low fluence responses These responses require at least 1μmol/m2 to be initiated and become saturated at about 1000μmol/m2. Unlike VLFRs, these responses are photoreversible. This was shown by exposing lettuce seed to a brief flash of red light causing germination. It was then shown, if this red flash was followed by a flash of far red light, germination was again inhibited. LFRs also follow the law of reciprocity. Other examples of LFRs include leaf de-etiolation and enhancement of rate of chlorophyll production. High-irradiance responses HIRs require long exposure to relatively high light levels. The degree of response will depend on the level of light. They are characterised by the fact that they do not follow the law of reciprocity and depend on the rate of photons hitting the leaf surface, as opposed to the total light levels. This means that neither long exposure to dim levels of light nor very bright flashes of light are enough to trigger these responses. HIR does not show red and far red photoreversibility and does not obey the law of reciprocity. References Physical quantities Radiometry Photosynthesis
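To make the reciprocity relation concrete (an illustrative calculation, not a figure from the article): reciprocity means the response is set by the total fluence, the product of fluence rate and exposure time,
H = E \cdot t, \qquad \text{e.g. } E = 0.005\ \mu\text{mol m}^{-2}\text{ s}^{-1},\; t = 10\ \text{s} \;\Rightarrow\; H = 0.05\ \mu\text{mol m}^{-2},
which, under reciprocity, is expected to produce the same response as 0.05 μmol m⁻² s⁻¹ delivered for 1 s. High-irradiance responses break this equivalence because they depend on the fluence rate E itself rather than on the accumulated fluence H.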
Fluence response
[ "Physics", "Chemistry", "Mathematics", "Engineering", "Biology" ]
513
[ "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Quantity", "Photosynthesis", "Biochemistry", "Physical properties", "Radiometry" ]
34,128,384
https://en.wikipedia.org/wiki/Wellcome%20Centre%20for%20Human%20Genetics
The Wellcome Centre for Human Genetics is a human genetics research centre of the Nuffield Department of Medicine in the Medical Sciences Division, University of Oxford, funded by the Wellcome Trust among others. Facilities & resources The centre is located at the Henry Wellcome Building of Genomic Medicine, which cost £20 million and was officially opened in June 2000 with Anthony Monaco as the director. Within the WHG a number of 'cores' provide services to the researchers: Oxford Genomics Centre The Oxford Genomics Centre provides high-throughput sequencing services, using Illumina HiSeq 4000, HiSeq 2500, NextSeq 500 and MiSeq instruments. It also offers Oxford Nanopore MinION and PromethION sequencing. There are also array platforms for genotyping, gene expression, and methylation, including Illumina Infinium, Affymetrix and Fluidigm. Research Computing Core The Research Computing Core provides access to computer resources including 4120 cores and 4.2 PB of storage. Transgenics The Transgenics Core provides access to genetically modified mice and cell lines. Cellular Imaging The Cellular Imaging Core provides microscopy facilities, including fluorescence microscopy techniques such as Fluorescence Correlation Spectroscopy (FCS), Fluorescence Lifetime Correlation Spectroscopy (FLCS), Fluorescence Lifetime Imaging Microscopy (FLIM), Total Internal Reflection Fluorescence Microscopy (TIRF), Photoactivated Localisation Microscopy (PALM), Spectral Imaging (SI) and Single Particle Tracking (SPT). Research Statistical and population genetics The WHG has been involved in many international statistical genetics advances including the Wellcome Trust Case Control Consortia (WTCCC, WTCCC2), the 1000 Genomes Project and the International HapMap Project. References 1994 establishments in England Biotechnology in the United Kingdom Departments of the University of Oxford Educational institutions established in 1994 Genetics in the United Kingdom Genetics or genomics research institutions Research institutes in Oxford Wellcome Trust Medical research institutes in the United Kingdom Human genetics
Wellcome Centre for Human Genetics
[ "Biology" ]
403
[ "Biotechnology in the United Kingdom", "Biotechnology by country" ]
34,130,293
https://en.wikipedia.org/wiki/Thompson%20sampling
Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that address the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief. Description Consider a set of contexts , a set of actions , and rewards in . The aim of the player is to play actions under the various contexts, such as to maximize the cumulative rewards. Specifically, in each round, the player obtains a context , plays an action and receives a reward following a distribution that depends on the context and the issued action. The elements of Thompson sampling are as follows: a likelihood function ; a set of parameters of the distribution of ; a prior distribution on these parameters; past observations triplets ; a posterior distribution , where is the likelihood function. Thompson sampling consists of playing the action according to the probability that it maximizes the expected reward; action is chosen with probability where is the indicator function. In practice, the rule is implemented by sampling. In each round, parameters are sampled from the posterior , and an action chosen that maximizes , i.e. the expected reward given the sampled parameters, the action, and the current context. Conceptually, this means that the player instantiates their beliefs randomly in each round according to the posterior distribution, and then acts optimally according to them. In most practical applications, it is computationally onerous to maintain and sample from a posterior distribution over models. As such, Thompson sampling is often used in conjunction with approximate sampling techniques. History Thompson sampling was originally described by Thompson in 1933. It was subsequently rediscovered numerous times independently in the context of multi-armed bandit problems. A first proof of convergence for the bandit case has been shown in 1997. The first application to Markov decision processes was in 2000. A related approach (see Bayesian control rule) was published in 2010. In 2010 it was also shown that Thompson sampling is instantaneously self-correcting. Asymptotic convergence results for contextual bandits were published in 2011. Thompson Sampling has been widely used in many online learning problems including A/B testing in website design and online advertising, and accelerated learning in decentralized decision making. A Double Thompson Sampling (D-TS) algorithm has been proposed for dueling bandits, a variant of traditional MAB, where feedback comes in the form of pairwise comparison. Relationship to other approaches Probability matching Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, the observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances. Bayesian control rule A generalization of Thompson sampling to arbitrary dynamical environments and causal structures, known as Bayesian control rule, has been shown to be the optimal solution to the adaptive coding problem with actions and observations. In this formulation, an agent is conceptualized as a mixture over a set of behaviours. 
As the agent interacts with its environment, it learns the causal properties and adopts the behaviour that minimizes the relative entropy to the behaviour with the best prediction of the environment's behaviour. If these behaviours have been chosen according to the maximum expected utility principle, then the asymptotic behaviour of the Bayesian control rule matches the asymptotic behaviour of the perfectly rational agent. The setup is as follows. Let $a_1, a_2, \ldots, a_T$ be the actions issued by an agent up to time $T$, and let $o_1, o_2, \ldots, o_T$ be the observations gathered by the agent up to time $T$. Then, the agent issues the action $a_{T+1}$ with probability $P(a_{T+1} \mid \hat{a}_{1:T}, o_{1:T})$, where the "hat"-notation $\hat{a}_t$ denotes the fact that $a_t$ is a causal intervention (see Causality), and not an ordinary observation. If the agent holds beliefs $\theta \in \Theta$ over its behaviours, then the Bayesian control rule becomes $P(a_{T+1} \mid \hat{a}_{1:T}, o_{1:T}) = \int_{\Theta} P(a_{T+1} \mid \theta, \hat{a}_{1:T}, o_{1:T})\, P(\theta \mid \hat{a}_{1:T}, o_{1:T})\, d\theta$, where $P(\theta \mid \hat{a}_{1:T}, o_{1:T})$ is the posterior distribution over the parameter $\theta$ given actions $a_{1:T}$ and observations $o_{1:T}$. In practice, the Bayesian control rule amounts to sampling, at each time step, a parameter $\theta^{\ast}$ from the posterior distribution $P(\theta \mid \hat{a}_{1:T}, o_{1:T})$, where the posterior distribution is computed using Bayes' rule by only considering the (causal) likelihoods of the observations $o_1, o_2, \ldots, o_T$ and ignoring the (causal) likelihoods of the actions $a_1, a_2, \ldots, a_T$, and then by sampling the action $a^{\ast}_{T+1}$ from the action distribution $P(a_{T+1} \mid \theta^{\ast}, \hat{a}_{1:T}, o_{1:T})$. Upper-Confidence-Bound (UCB) algorithms Thompson sampling and upper-confidence-bound algorithms share a fundamental property that underlies many of their theoretical guarantees. Roughly speaking, both algorithms allocate exploratory effort to actions that might be optimal and are in this sense "optimistic". Leveraging this property, one can translate regret bounds established for UCB algorithms into Bayesian regret bounds for Thompson sampling, or unify regret analysis across both these algorithms and many classes of problems. References Artificial intelligence engineering Heuristic algorithms Sequential methods Sequential experiments
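The sampling rule described above can be made concrete with a small, self-contained sketch. The following Python snippet is an illustration, not code from any of the cited works: it implements Thompson sampling for a context-free Bernoulli bandit with independent Beta(1, 1) priors on each arm, and the arm success probabilities are invented for demonstration.

```python
# Thompson sampling for a Bernoulli multi-armed bandit (no contexts).
# Each arm's success probability gets an independent Beta(1, 1) prior,
# which stays a Beta distribution after every observed reward.
import random

true_probs = [0.3, 0.55, 0.6]          # unknown to the player; invented values
successes = [0] * len(true_probs)      # posterior Beta(1 + successes, 1 + failures)
failures = [0] * len(true_probs)

total_reward = 0
for t in range(10_000):
    # Sample one parameter per arm from its posterior, then act greedily on the sample.
    samples = [random.betavariate(1 + s, 1 + f)
               for s, f in zip(successes, failures)]
    arm = max(range(len(samples)), key=lambda i: samples[i])

    reward = 1 if random.random() < true_probs[arm] else 0
    total_reward += reward
    if reward:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("total reward:", total_reward)
print("pulls per arm:", [s + f for s, f in zip(successes, failures)])
```

Run repeatedly, the pull counts concentrate on the best arm while the other arms still receive occasional exploratory pulls, which is the qualitative behaviour the description above predicts.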
Thompson sampling
[ "Engineering" ]
1,007
[ "Artificial intelligence engineering", "Software engineering" ]
34,136,411
https://en.wikipedia.org/wiki/C11H11Cl2N3O
{{DISPLAYTITLE:C11H11Cl2N3O}} The molecular formula C11H11Cl2N3O (molar mass: 272.131 g/mol, exact mass: 271.0279 u) may refer to: Muzolimine WAY-161503 Molecular formulas
C11H11Cl2N3O
[ "Physics", "Chemistry" ]
69
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
38,106,226
https://en.wikipedia.org/wiki/Tesamorelin
Tesamorelin (INN) (trade name Egrifta SV) is a synthetic form of growth-hormone-releasing hormone (GHRH) which is used in the treatment of HIV-associated lipodystrophy, approved initially in 2010. It is produced and developed by Theratechnologies, Inc. of Canada. The drug is a synthetic peptide consisting of all 44 amino acids of human GHRH with the addition of a trans-3-hexenoic acid group. Mechanism of action Tesamorelin is an N-terminally modified compound based on the 44-amino-acid sequence of human GHRH. This modified synthetic form is more potent and stable than the natural peptide. It is also more resistant to cleavage by dipeptidyl aminopeptidase than human GHRH. It stimulates the synthesis and release of endogenous GH, with an increase in the level of insulin-like growth factor 1 (IGF-1). The released GH then binds to receptors present on various body organs and regulates body composition. This regulation is mainly due to a combination of anabolic and lipolytic mechanisms. However, it has been found that the main mechanism by which tesamorelin reduces body fat mass is lipolysis followed by a reduction in triglyceride levels. Contraindications Tesamorelin therapy may cause glucose intolerance and increase the risk of type 2 diabetes. It is contraindicated in pregnancy (category X) because it may cause harm to the fetus. It is also contraindicated in patients affected by disruption of the hypothalamic-pituitary axis due to a pituitary gland tumor, head irradiation or hypopituitarism. Adverse effects Adverse effects include injection-site erythema, peripheral edema, injection-site pruritus and diarrhea. See also List of growth hormone secretagogues References Growth hormone secretagogues Growth hormone-releasing hormone receptor agonists HIV/AIDS Drugs developed by Merck Peptides Recombinant proteins World Anti-Doping Agency prohibited substances
Tesamorelin
[ "Chemistry", "Biology" ]
437
[ "Biomolecules by chemical classification", "Biotechnology products", "Recombinant proteins", "Molecular biology", "Peptides" ]
38,112,103
https://en.wikipedia.org/wiki/11%20Orionis
11 Orionis is a solitary Ap star in the equatorial constellation of Orion, near the border with Taurus. It is visible to the naked eye with an apparent visual magnitude of 4.65, and it is located approximately 365 light years away from the Sun based on parallax. The star is moving further from the Sun with a heliocentric radial velocity of +16.8 km/s. This object is a chemically peculiar star, known as an Ap star, with enhanced silicon and chromium lines in its spectrum. It is an α² CVn variable, ranging from 4.65 to 4.69 magnitude with a period of 4.64 days. The magnetic field measured from metal lines has a strength of . References B-type subgiants A-type main-sequence stars Alpha2 Canum Venaticorum variables Ap stars Orion (constellation) BD+15 732 Orionis, 11 032549 023607 1638 Orionis, V1032
11 Orionis
[ "Astronomy" ]
203
[ "Constellations", "Orion (constellation)" ]
55,336,593
https://en.wikipedia.org/wiki/Principles%20of%20Quantum%20Mechanics
Principles of Quantum Mechanics is a textbook by Ramamurti Shankar. The book has been through two editions. It is used in many college courses around the world. Contents Mathematical Introduction Review of Classical Mechanics All Is Not Well with Classical Mechanics The Postulates – a General Discussion Simple Problems in One Dimension The Classical Limit The Harmonic Oscillator The Path Integral Formulation of Quantum Theory The Heisenberg Uncertainty Relations Systems with N Degrees of Freedom Symmetries and Their Consequences Rotational Invariance and Angular Momentum The Hydrogen Atom Spin Addition of Angular Momenta Variational and WKB Methods Time-Independent Perturbation Theory Time-Dependent Perturbation Theory Scattering Theory The Dirac Equation Path Integrals – II Appendix Reviews Physics Bulletin said about the book, "No matter how gently one introduces students to the concept of Dirac's bras and kets, many are turned off. Shankar attacks the problem head-on in the first chapter, and in a very informal style suggests that there is nothing to be frightened of". American Scientist called it "An excellent text … The postulates of quantum mechanics and the mathematical underpinnings are discussed in a clear, succinct manner". See also Modern Quantum Mechanics by J. J. Sakurai List of textbooks on classical and quantum mechanics References Physics textbooks Quantum mechanics
Principles of Quantum Mechanics
[ "Physics" ]
266
[ "Quantum mechanics", "Works about quantum mechanics" ]
56,767,513
https://en.wikipedia.org/wiki/Zariski%27s%20finiteness%20theorem
In algebra, Zariski's finiteness theorem gives a positive answer to Hilbert's 14th problem for the polynomial ring in two variables, as a special case. Precisely, it states: Given a normal domain $A$, finitely generated as an algebra over a field $k$, if $L$ is a subfield of the field of fractions of $A$ containing $k$ such that the transcendence degree of $L$ over $k$ is at most two, then the $k$-subalgebra $L \cap A$ is finitely generated. References Hilbert's problems Invariant theory Commutative algebra
Zariski's finiteness theorem
[ "Physics", "Mathematics" ]
101
[ "Symmetry", "Mathematical theorems", "Group actions", "Theorems in algebra", "Commutative algebra", "Hilbert's problems", "Fields of abstract algebra", "Invariant theory", "Mathematical problems", "Algebra" ]
60,170,617
https://en.wikipedia.org/wiki/Ruby%20pressure%20scale
The ruby fluorescence pressure scale is an optical method to measure pressure within the sample chamber of a diamond anvil cell apparatus. Since it is an optical method, which makes full use of the transparency of the diamond anvils and only requires access to a small-scale laser, it has become the most prevalent pressure gauge in high-pressure science. Principles Ruby is chromium-doped corundum (Al2O3). The Cr3+ ions in the corundum lattice occupy octahedral sites surrounded by oxygen ions. The octahedral crystal field together with the spin-orbital interaction results in different energy levels. Once the 3d electrons in Cr3+ are excited by a laser, they are promoted to the 4T2 and 2T2 levels. They then relax to the 2E levels, and the R1 and R2 lines arise from luminescence from the 2E levels to the 4A2 ground level. The energy difference between the 2E levels is 29 cm−1, corresponding to a splitting of the R1 and R2 lines of 1.39 nm. Development The ruby fluorescence spectrum has two strong sharp lines, R1 and R2. R1 is the stronger, lower-energy (longer-wavelength) line and is used to gauge pressure. Pressure is calculated as P = (a/b) [(λ/λ0)^b − 1], where λ is the measured R1 wavelength, λ0 is the R1 wavelength measured at 1 atm, and a and b are calibration constants (e.g. a = 19.04 Mbar, b = 5). Since first demonstrated by Forman and colleagues in 1972, many scientists have contributed to establishing accurate ruby pressure scales under various experimental conditions. References High pressure science Optics Applied and interdisciplinary physics
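A minimal sketch of the pressure calculation above, written in Python. It assumes the commonly quoted Mao-type constants (a = 19.04 Mbar, b = 5) and an ambient R1 wavelength near 694.25 nm; published calibrations differ slightly, so the numbers here are illustrative rather than authoritative.

```python
def ruby_pressure_gpa(wavelength_nm, wavelength0_nm=694.25, a_mbar=19.04, b=5.0):
    """Pressure (GPa) from the measured ruby R1 wavelength.

    Implements P = (a / b) * [(lambda / lambda0)**b - 1], with a given in Mbar
    and the result converted to GPa (1 Mbar = 100 GPa).
    """
    ratio = wavelength_nm / wavelength0_nm
    p_mbar = (a_mbar / b) * (ratio ** b - 1.0)
    return p_mbar * 100.0

# Example: an R1 line shifted from 694.25 nm to 697.0 nm gives about 7.6 GPa.
print(f"{ruby_pressure_gpa(697.0):.1f} GPa")
```

The near-linear behaviour at small shifts (roughly 0.36 nm per GPa) falls out of the same expression, which is why the quasi-hydrostatic scale is often quoted as a simple shift-per-pressure rule of thumb at low pressure.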
Ruby pressure scale
[ "Physics", "Chemistry" ]
352
[ "Applied and interdisciplinary physics", "Optics", "High pressure science", " molecular", "Atomic", " and optical physics" ]
60,173,605
https://en.wikipedia.org/wiki/COMOS
COMOS is a plant engineering software from Siemens. Its applications lie in the process industries, covering the engineering, operation, and maintenance of process plants as well as their asset management. History The COMOS (acronym for Component Object Server) software system was originally developed and sold by Logotec Software GmbH, then by innotec GmbH (founded in 1991 with headquarters in Schwelm, Germany). The first version hit the market in 1996. In 2008, innotec GmbH was taken over by Siemens. COMOS has since been developed further and marketed by a subsidiary, Siemens Industry Software GmbH. The current status is COMOS Generation 10. Product characteristics Originally, COMOS was developed as an integrated CAE system for engineering in plant construction: all process engineering trades and the involved electrical, instrumentation and control engineering disciplines should be able to work together seamlessly on one system platform. The system incorporates the characteristics of object orientation, central data administration, and open system architecture. Interfaces enable integration into existing IT infrastructures or cooperation with supplementary software systems. The COMOS software system is based on a central data platform and includes applications that can be combined. They help with the engineering and set-up, operation, and shut-down of industrial plants. Applications The software is used by plant developers (e.g. EPC contractors) to plan process plants (chemical, energy, water / waste water, pharmaceuticals, oil, natural gas, food, etc.). It is also used by plant owners and operators in the mentioned industries, since COMOS supports not only engineering but also operational processes. There are regular user conferences. Its architecture makes COMOS suitable for engineering: it can manage large quantities of data and provide them on an integrated basis. Siemens cooperates in the standardization of export and import interfaces (DEXPI - Data Exchange in the Process Industry), an initiative together with BASF, Bayer, and Evonik. Scope of functions The software system is modular. The functionalities of the COMOS platform support the digital transformation of a plant via the object-oriented database and a special layer technology that permits joint and consistent work on data and documents. Object properties or attributes can be changed in data sheets and various entry masks. Batch queries and changes are also possible. The system is used for process design. Integration with standard process simulators allows process data to be defined at an early planning stage using process flow diagrams and combined with the engineering of the processing plant. Another module is used to refine this data. Pipework engineering based on piping and instrumentation diagrams follows specified industry standards for the respective pipe classes. Data is exchanged during further geometrical planning using isometrics based on ISO 15926, resulting in a virtual 3D design of the plant. The system serves for the electrical engineering of plants all the way to their complete automation: it covers electrical, measurement, control, and regulation (EMSR) processes. Functional plans and sequences can be generated based on common standards. Sequence controls can also be represented graphically. This information can be exchanged directly with distributed control systems (DCS) such as Simatic PCS 7. 
It supports plant operation after start-up. Engineering data can be used and expanded in the operating phase. Repairs or maintenance work can be reported directly from the field to the central system using mobile maintenance processes. It permits traceable document and data management. It meets the strict requirements of the FDA. Secure access possibilities permit work with distributed information all over the world. It also makes it possible to train plant personnel with visualization and simulation in 3D VR models combined with corresponding training scenarios. Walkinside was developed by the 3D specialist VRcontext and was integrated into the software after the takeover of the company by Siemens in 2011. References Computer-aided design software Piping Mechanical engineering Chemical engineering Process engineering Siemens
COMOS
[ "Physics", "Chemistry", "Engineering" ]
792
[ "Process engineering", "Applied and interdisciplinary physics", "Building engineering", "Chemical engineering", "Mechanical engineering by discipline", "nan", "Mechanical engineering", "Piping" ]
45,625,735
https://en.wikipedia.org/wiki/Iranian%20crewed%20spacecraft
The Iranian crewed spacecraft is a proposal by the Iranian Space Agency and the Iranian Aerospace Research Institute of the Iranian Space Research Center (ISRC) to put an astronaut into space. Iran first expressed its intention to send a human to space during the summit of the Soviet and Iranian presidents in 1990. Soviet President Mikhail Gorbachev reached an agreement in principle with then-President Akbar Hashemi Rafsanjani on joint Soviet-Iranian crewed flights to the Mir space station, but the agreement was never realized after the dissolution of the USSR. The Iranian news agency claimed on 21 November 2005 that Iran had a human space program, along with plans for the development of a spacecraft and a space laboratory. On 20 August 2008, Iran Aerospace Industries Organization (IAIO) head Reza Taghipour revealed that Iran intended to launch a human mission into space within a decade. This goal was described as the country's top priority for the next 10 years, in order to make Iran the leading space power of the region by 2021. In August 2010, President Ahmadinejad announced that Iran's first astronaut should be sent into space on board an Iranian spacecraft no later than 2019. Some details of the design were published by the institute in its "Astronaut" publication in February 2015. A mock-up of the spaceship was displayed on 17 February 2015 during the ceremony of Iran's national day of space. The head of the institute announced that the spaceship would be launched into space in about one year, which did not happen. The Iranian President and several of the ministers were present at the unveiling and the ceremony. If funded and developed, it would be comparable to the American Mercury and Soviet Vostok spacecraft that carried the first humans into space in the early 1960s. The small Iranian capsule would carry a single astronaut to an altitude of 175 km and return them to Earth. The spacecraft was designated the code name "Class E Kavoshgar" project. The main components include the launcher adapter, spacecraft, and the launch abort system. A sub-orbital test flight with a monkey was conducted in 2016. According to Iran's space administrator, this program was put on hold indefinitely in 2017. According to unofficial Chinese internet sources, Iranian participation in the future Chinese space station program has been under discussion. Iran currently does not have a medium-lift rocket comparable to the Long March 2F, GSLV Mk III or H-IIA; therefore, development of a full-scale spacecraft able to dock with a space station is unlikely due to the lack of such a launcher. On 6 December 2023, Iran launched live animals into space in a capsule mounted on a new type of rocket called "Salman". Al Jazeera reported that the Iranian government says this is in line with its current 10-year program to revive its human space program. Iran's telecommunications director claims the aim is to send at least one Iranian astronaut to space in a fully indigenous crewed spacecraft on a fully indigenous rocket by 2029. See also Pishgam References External links Aerospace Research Institute Iranian Space Research Center Iran Space Agency Space program of Iran Human spaceflight programs
Iranian crewed spacecraft
[ "Engineering" ]
629
[ "Space programs", "Human spaceflight programs" ]
45,626,191
https://en.wikipedia.org/wiki/Neville%20theta%20functions
In mathematics, the Neville theta functions, named after Eric Harold Neville, are defined in terms of the complete elliptic integral of the first kind K(m), its complement K'(m) = K(1 − m), and the elliptic nome q(m) = exp(−πK'(m)/K(m)). Note that the functions θp(z,m) are sometimes defined in terms of the nome q(m) and written θp(z,q) (e.g. NIST). The functions may also be written in terms of the τ parameter, θp(z|τ), where τ = iK'(m)/K(m). Relationship to other functions The Neville theta functions may be expressed in terms of the Jacobi theta functions: θs(z,m) = θ3²(0,q) θ1(z′,q)/θ1′(0,q), θc(z,m) = θ2(z′,q)/θ2(0,q), θd(z,m) = θ3(z′,q)/θ3(0,q), θn(z,m) = θ4(z′,q)/θ4(0,q), where z′ = z/θ3²(0,q). The Neville theta functions are related to the Jacobi elliptic functions. If pq(u,m) is a Jacobi elliptic function (p and q are one of s, c, n, d), then pq(u,m) = θp(u,m)/θq(u,m). Examples Symmetry Complex 3D plots Notes References Special functions Theta functions Elliptic functions Analytic functions
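A numerical sketch of the relations to the Jacobi theta functions quoted above, using mpmath's jtheta, ellipk and ellipfun. The final check relies on the identity sn(u,m) = θs(u,m)/θn(u,m); the particular argument values are arbitrary test inputs.

```python
from mpmath import mp, jtheta, ellipk, ellipfun, exp, pi

def neville_thetas(z, m):
    """Neville theta functions computed from Jacobi theta functions."""
    q = exp(-pi * ellipk(1 - m) / ellipk(m))   # elliptic nome q(m)
    s = jtheta(3, 0, q) ** 2                   # theta_3(0,q)^2 = 2K(m)/pi
    zp = z / s                                 # z' = z / theta_3(0,q)^2
    theta_s = s * jtheta(1, zp, q) / jtheta(1, 0, q, 1)  # theta_1'(0,q) in denominator
    theta_c = jtheta(2, zp, q) / jtheta(2, 0, q)
    theta_d = jtheta(3, zp, q) / jtheta(3, 0, q)
    theta_n = jtheta(4, zp, q) / jtheta(4, 0, q)
    return theta_s, theta_c, theta_d, theta_n

mp.dps = 25
z, m = mp.mpf("0.7"), mp.mpf("0.3")
ts, tc, td, tn = neville_thetas(z, m)

# Consistency check: sn(z, m) should equal theta_s / theta_n.
print(ts / tn)
print(ellipfun("sn", z, m=m))
```

The same ratio test works for the other Jacobi elliptic functions, e.g. cn = θc/θn and dn = θd/θn, which is a convenient way to validate an implementation of the table of relations.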
Neville theta functions
[ "Mathematics" ]
188
[ "Special functions", "Combinatorics" ]
45,626,463
https://en.wikipedia.org/wiki/Castelnuovo%27s%20contraction%20theorem
In mathematics, Castelnuovo's contraction theorem is used in the classification theory of algebraic surfaces to construct the minimal model of a given smooth algebraic surface. More precisely, let $X$ be a smooth projective surface over an algebraically closed field and $C$ a (−1)-curve on $X$ (which means a smooth rational curve of self-intersection number −1); then there exists a morphism $f \colon X \to Y$ from $X$ to another smooth projective surface $Y$ such that the curve $C$ has been contracted to one point $P$, and moreover this morphism is an isomorphism outside $C$ (i.e., $X \setminus C$ is isomorphic with $Y \setminus \{P\}$). This contraction morphism is sometimes called a blowdown, which is the inverse operation of blowup. The curve $C$ is also called an exceptional curve of the first kind. References Algebraic surfaces Theorems in geometry
Castelnuovo's contraction theorem
[ "Mathematics" ]
157
[ "Mathematical theorems", "Mathematical problems", "Geometry", "Theorems in geometry" ]
45,627,703
https://en.wikipedia.org/wiki/Native-language%20identification
Native-language identification (NLI) is the task of determining an author's native language (L1) based only on their writings in a second language (L2). NLI works through identifying language-usage patterns that are common to specific L1 groups and then applying this knowledge to predict the native language of previously unseen texts. This is motivated in part by applications in second-language acquisition, language teaching and forensic linguistics, amongst others. Overview NLI works under the assumption that an author's L1 will dispose them towards particular language production patterns in their L2, as influenced by their native language. This relates to cross-linguistic influence (CLI), a key topic in the field of second-language acquisition (SLA) that analyzes transfer effects from the L1 on later learned languages. Using large-scale English data, NLI methods achieve over 80% accuracy in predicting the native language of texts written by authors from 11 different L1 backgrounds. This can be compared to a baseline of 9% for choosing randomly. Applications Pedagogy and language transfer This identification of L1-specific features has been used to study language transfer effects in second-language acquisition. This is useful for developing pedagogical material, teaching methods, L1-specific instructions and generating learner feedback that is tailored to their native language. Forensic linguistics NLI methods can also be applied in forensic linguistics as a method of performing authorship profiling in order to infer the attributes of an author, including their linguistic background. This is particularly useful in situations where a text, e.g. an anonymous letter, is the key piece of evidence in an investigation and clues about the native language of a writer can help investigators in identifying the source. This has already attracted interest and funding from intelligence agencies. Methodology Natural language processing methods are used to extract and identify language usage patterns common to speakers of an L1-group. This is done using language learner data, usually from a learner corpus. Next, machine learning is applied to train classifiers, like support vector machines, for predicting the L1 of unseen texts. A range of ensemble based systems have also been applied to the task and shown to improve performance over single classifier systems. Various linguistic feature types have been applied for this task. These include syntactic features such as constituent parses, grammatical dependencies and part-of-speech tags. Surface level lexical features such as character, word and lemma n-grams have also been found to be quite useful for this task. However, it seems that character n-grams are the single best feature for the task. 2013 shared task The Building Educational Applications (BEA) workshop at NAACL 2013 hosted the inaugural NLI shared task. The competition resulted in 29 entries from teams across the globe, 24 of which also published a paper describing their systems and approaches. See also References Computational linguistics Second-language acquisition Natural language processing Machine learning Applied linguistics Bilingualism
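As a hedged illustration of the methodology described above (not code from any of the cited studies), the sketch below trains a character n-gram support vector classifier with scikit-learn. The example sentences and their L1 labels are invented purely for demonstration; a real system would use a learner corpus such as those employed in the 2013 shared task.

```python
# Character n-gram + linear SVM baseline of the kind described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy learner sentences with hypothetical L1 labels, for illustration only.
train_texts = [
    "I am agree with the opinion of the author .",
    "He suggested me to take the train instead .",
    "The informations in this report are very useful .",
    "I look forward to hear from you soon .",
]
train_l1 = ["Spanish", "Italian", "French", "German"]

# Character n-grams (here 1-4, within word boundaries) are reported to be
# among the strongest single feature types for NLI.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)),
    LinearSVC(),
)
model.fit(train_texts, train_l1)

print(model.predict(["She explained me the problem very detailed ."]))
```

Ensemble systems of the kind mentioned above typically combine several such classifiers, each trained on a different feature type (character n-grams, part-of-speech n-grams, dependency features), and vote or stack their predictions.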
Native-language identification
[ "Technology", "Engineering" ]
603
[ "Machine learning", "Computational linguistics", "Natural language processing", "Artificial intelligence engineering", "Natural language and computing" ]
45,630,591
https://en.wikipedia.org/wiki/International%20Radio%20Corporation
The International Radio Corporation (IRC) was an American radio receiver manufacturing company based in Ann Arbor, Michigan. It was established in 1931 by Charles Albert Verschoor with financial backing from Ann Arbor mayor William E. Brown, Jr., and a group of local business leaders. IRC manufactured numerous different radios, many bearing the Kadette name, including the first mass-produced AC/DC radio, the first pocket radio, and the first clock radio. Due to the seasonal nature of radio sales, the company attempted to diversify its offerings with a product that would sell well during the summer, eventually settling on a camera that would become the Argus. In 1939, IRC sold its radio-manufacturing business to its former General Sales Manager, W. Keene Jackson, although his new Kadette Radio Corporation only survived for a year before it went defunct. After World War II, International Industries and its International Research division became wholly owned subsidiaries of Argus, Inc., after which point the International name ceased to exist. History Establishment The International Radio Corporation was founded in 1931 in Ann Arbor, Michigan, the creation of Charles Albert Verschoor, who had begun making radios in the 1920s. Described as a "colorful old-time promoter" in a January 1945 Fortune magazine article and as a "go-getting inventor" by Mary Hunt, Verschoor had previous experience in automobile manufacturing as well. The company was initially financed with $10,000 raised by Ann Arbor mayor William E. Brown, Jr., and a group of local business leaders who desired to create a new local company with substantial potential for growth and job creation during the Great Depression. It was based out of a former furniture factory located at 405 Fourth Street on Ann Arbor's west side. Early products and profitability IRC debuted its first radio, the International Duo, on August 7, 1931; it was named for its ability to receive both local longwave and European shortwave radio signals. It was notably compact at a time when most table radios were considerably longer even without their separate speakers. Shortly thereafter, IRC introduced the Kadette, the first mass-produced AC/DC radio; it was a four-tube radio small enough to be easily portable and featured an innovative plastic cabinet. This cabinet material, called Bakelite, was fairly cheap to produce and helped IRC to turn a substantial profit on its radio sales. Manufactured by the Chicago Molded Products Company, the Kadette's plastic cabinet was the first to be used on a radio, although its Gothic styling gave it a fairly traditional appearance. The radio also boasted an innovative new circuit design, while its ability to operate on either alternating (AC) or direct current (DC) allowed it to operate without a power transformer, resulting in it being cheaper, smaller, and lighter than its competitors; it also allowed the Kadette to be plugged into typical household wall sockets. Furthermore, IRC released a kit that instructed customers how to modify their Kadettes for battery-powered mobile applications, such as in railroad cars and automobiles; in the words of Robert E. Mayer, this kit "effectively started the car radio market". The popularity of the Kadette led to "almost immediate profitability" for IRC, and by 1933 it was the only company in Ann Arbor that was still able to pay dividends to its shareholders. 
During the early 1930s, Ann Arbor was less adversely affected by the Great Depression than Detroit or most other Michigan communities, although lost orders and inability to pay dividends were common occurrences for Ann Arbor-based companies. Later products and financial difficulties Following after the Kadette were a variety of other models, many of which were innovative in their own right: the Kadette Jr., the world's first pocket radio; the Kadette Jewel, the original Kadette's successor that was available in five different color combinations; the Kadette Classic, built with three different types of plastic; and the Kadette Clockette, which resembled a small mantel clock and was available in four different wooden case styles. IRC also introduced a number of related accessories, including the Tunemaster, a portable radio remote control, and the Kadette Autime, the first mass-produced clock radio. In 1937, as its sales had climbed to $2,700,000, IRC introduced a 10-tube Kadette radio for $19.95, a price comparable with many four- and five-tube sets when its 10-tube competitors cost $100 or more. With three ballast tubes, these 10-tube radios were met with largely negative reviews; in the words of Alan Voorhees, they were "$20 sets with extra ballast tubes thrown in". They were also reminiscent of 10-tube radios that Verschoor had built between 1925 and 1930 under the "Arborphone" name, which had only five functioning tubes alongside five superfluous ones intended simply to impress prospective customers. Furthermore, when radio dealers sold IRC's 10-tube Kadettes, they achieved profit margins of 15% at most, far less than what they could earn selling premium models made by competitors. After the company began requiring its dealers to stock its slower selling units in order to also have access to its 10-tube Kadettes, some dealers resorted to giving unauthorized discounts to move the less attractive models, resulting in their total profit margins on the whole Kadette line falling to as low as 5% in some cases. As their profit margins fell, many dealers dropped Kadettes from their catalogs altogether; while IRC made efforts to reverse this trend, in many cases irreparable damage had already been done. Diversification While IRC's radio business was initially successful, it was generally seasonal in nature; due to better reception in winter as well as general patterns of behavior before the widespread adoption of air conditioning, sales of radios were much higher during the winter months than during the summer. This prompted Verschoor to explore possibilities for expanding the company's product line in order to reduce the slack periods caused by the seasonal variation in its radio sales. Looking for a product that could be produced relatively cheaply and that would also sell well during the summer months, he decided upon an inexpensive Leica-inspired camera that would ultimately become the Argus, which launched to nearly instant success in 1936. That same year, when IRC had 150 employees, it sold its Kadette AC/DC patents to RCA. Final years In 1938, Verschoor departed from IRC after being pressured to leave. By the early 1940s the company was being run by a "modern management team". In 1939, International Industries sold its radio-manufacturing business to the company's former General Sales Manager, W. Keene Jackson. 
After renaming it the Kadette Radio Corporation, Jackson expressed his desire to expand its product line by adding television sets, vowing that the new company would "employ every technical resource to bring the price of efficient television reception to the point where every American home can enjoy this new art as quickly as possible." However, Jackson's company suffered from the same problems that IRC had, and just a year after its establishment it was already out of business. While its radio business had faltered, International Industries had found success in the camera and optical equipment fields with its Argus line; by 1942, Argus, Inc.'s sales had climbed to $4,800,000, and during World War II the company employed 1,200 people. After the war, International Industries and its International Research division became wholly owned subsidiaries of Argus, Inc., and shortly thereafter the International name ceased to exist. Notes References External links Radio electronics Companies based in Ann Arbor, Michigan American companies established in 1931 Electronics companies established in 1931 Manufacturing companies established in 1931 1931 establishments in Michigan Defunct manufacturing companies based in Michigan
International Radio Corporation
[ "Engineering" ]
1,592
[ "Radio electronics" ]
45,630,784
https://en.wikipedia.org/wiki/Stacking%20fault
In crystallography, a stacking fault is a planar defect that can occur in crystalline materials. Crystalline materials form repeating patterns of layers of atoms. Errors can occur in the sequence of these layers and are known as stacking faults. Stacking faults are in a higher energy state which is quantified by the formation enthalpy per unit area called the stacking-fault energy. Stacking faults can arise during crystal growth or from plastic deformation. In addition, dislocations in low stacking-fault energy materials typically dissociate into an extended dislocation, which is a stacking fault bounded by partial dislocations. The most common example of stacking faults is found in close-packed crystal structures. Face-centered cubic (fcc) structures differ from hexagonal close packed (hcp) structures only in stacking order: both structures have close-packed atomic planes with sixfold symmetry, in which the atoms form equilateral triangles. When stacking one of these layers on top of another, the atoms are not directly on top of one another. The first two layers are identical for hcp and fcc, and labelled AB. If the third layer is placed so that its atoms are directly above those of the first layer, the stacking will be ABA; this is the hcp structure, and it continues ABABABAB. However, there is another possible location for the third layer, such that its atoms are not above the first layer. Instead, it is the atoms in the fourth layer that are directly above the first layer. This produces the stacking ABCABCABC, which is in the [111] direction of a cubic crystal structure. In this context, a stacking fault is a local deviation from one of the close-packed stacking sequences to the other one. Usually, only one- two- or three-layer interruptions in the stacking sequence are referred to as stacking faults. An example for the fcc structure is the sequence ABCABABCAB. Formation of stacking faults in FCC crystal Stacking faults are two dimensional planar defects that can occur in crystalline materials. They can be formed during crystal growth, during plastic deformation as partial dislocations move as a result of dissociation of a perfect dislocation, or by condensation of point defects during high-rate plastic deformation. The start and finish of a stacking fault are marked by partial line dislocations such as a partial edge dislocation. Line dislocations tend to occur on the closest packed plane in the closest packed direction. For an FCC crystal, the closest packed plane is the (111) plane, which becomes the glide plane, and the closest packed direction is the [110] direction. Therefore, a perfect line dislocation in FCC has the Burgers vector ½<110>, which is a translational vector. Splitting into two partial dislocations is favorable because the energy of a line defect is proportional to the square of the Burgers vector magnitude. For example, an edge dislocation may split into two Shockley partial dislocations with Burgers vectors of 1/6<112>. This direction is no longer the closest-packed direction, and because the two Burgers vectors are at 60 degrees with respect to each other in order to complete a perfect dislocation, the two partial dislocations repel each other. This repulsion is a consequence of stress fields around each partial dislocation affecting the other. The force of repulsion depends on factors such as the shear modulus, the Burgers vector, Poisson's ratio, and the distance between the dislocations. As the partial dislocations repel each other, a stacking fault is created in between. 
Because a stacking fault is a defect, it has a higher energy than the perfect crystal and therefore acts to pull the partial dislocations back together. When this attractive force balances the repulsive force described above, the defects are in an equilibrium state. The stacking fault energy can then be determined from the width of the dislocation dissociation using the relation γ = G b₂·b₃/(2πd), where b₂ and b₃ are the Burgers vectors of the dissociated partial dislocations (b is their magnitude), G is the shear modulus, and d is the distance between the partial dislocations. Stacking faults may also be created by Frank partial dislocations with Burgers vector of 1/3<111>. There are two types of stacking faults caused by Frank partial dislocations: intrinsic and extrinsic. An intrinsic stacking fault forms by vacancy agglomeration, and there is a missing plane with sequence ABCA_BA_BCA, where BA is the stacking fault. An extrinsic stacking fault is formed from interstitial agglomeration, where there is an extra plane with sequence ABCA_BAC_ABCA. Visualizing stacking faults using electron microscopy Stacking faults can be visualized using electron microscopy. One commonly used technique is transmission electron microscopy (TEM); another is electron channeling contrast imaging (ECCI) in a scanning electron microscope (SEM). In an SEM, near-surface defects can be identified because the backscattered electron yield differs in defect regions where the crystal is strained, and this gives rise to different contrasts in the image. In order to identify the stacking fault, it is important to set the exact Bragg condition for certain lattice planes in the matrix such that regions without defects yield few backscattered electrons and thus appear dark. Meanwhile, regions with the stacking fault will not satisfy the Bragg condition and thus yield more backscattered electrons, appearing bright in the image. Inverting the contrast gives images where the stacking fault appears dark in the midst of a bright matrix. In a TEM, bright field imaging is one technique used to identify the location of stacking faults. A typical image of a stacking fault is dark with bright fringes near a low-angle grain boundary, sandwiched by dislocations at the ends of the stacking fault. Fringes indicate that the stacking fault is inclined with respect to the viewing plane. Stacking faults in semiconductors Many compound semiconductors, e.g. those combining elements from groups III and V or from groups II and VI of the periodic table, crystallize in the fcc zincblende or hcp wurtzite crystal structures. In a semiconductor crystal, the fcc and hcp phases of a given material will usually have different band gap energies. As a consequence, when the crystal phase of a stacking fault has a lower band gap than the surrounding phase, it forms a quantum well, which in photoluminescence experiments leads to light emission at lower energies (longer wavelengths) than for the bulk crystal. In the opposite case (higher band gap in the stacking fault), it constitutes an energy barrier in the band structure of the crystal that can affect the current transport in semiconductor devices. References Crystallographic defects
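A small numerical check of the dissociation energetics described above. The script assumes the standard FCC geometry (a perfect ½<110> dislocation splitting into two 1/6<112> Shockley partials) and uses b² as a proxy for the elastic energy per unit length of each dislocation; it is an illustration of Frank's energy criterion, not a full anisotropic elasticity calculation.

```python
import numpy as np

a = 1.0  # lattice parameter, arbitrary units

b_perfect = (a / 2) * np.array([1, 1, 0])    # perfect dislocation, 1/2[110]
b_partial1 = (a / 6) * np.array([2, 1, 1])   # Shockley partial, 1/6[211]
b_partial2 = (a / 6) * np.array([1, 2, -1])  # Shockley partial, 1/6[12-1]

# The Burgers vector is conserved by the dissociation.
assert np.allclose(b_partial1 + b_partial2, b_perfect)

E_perfect = b_perfect @ b_perfect                                   # a^2 / 2
E_partials = b_partial1 @ b_partial1 + b_partial2 @ b_partial2      # a^2 / 3

print(f"b^2 of perfect dislocation : {E_perfect:.3f} a^2")
print(f"sum of b^2 of partials     : {E_partials:.3f} a^2")
print("dissociation favorable     :", E_partials < E_perfect)

# The 60-degree angle between the partials, mentioned above, also falls out:
cos_angle = (b_partial1 @ b_partial2) / (np.linalg.norm(b_partial1) * np.linalg.norm(b_partial2))
print("angle between partials     :", np.degrees(np.arccos(cos_angle)), "deg")
```

The partial sum (a²/3) is smaller than the perfect value (a²/2), so the splitting lowers the elastic energy, and the stacking fault energy then sets how far apart the two partials sit at equilibrium.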
Stacking fault
[ "Chemistry", "Materials_science", "Engineering" ]
1,445
[ "Crystallographic defects", "Crystallography", "Materials degradation", "Materials science" ]
58,663,333
https://en.wikipedia.org/wiki/Thiocarbonyldiimidazole
1,1'-Thiocarbonyldiimidazole (TCDI) is a thiourea containing two imidazole rings. It is the sulfur analog of the peptide coupling reagent carbonyldiimidazole (CDI). Synthesis TCDI is commercially available but can also be prepared via the reaction of thiophosgene with two equivalents of imidazole. Reactions The imidazole groups on TCDI can be easily displaced, allowing it to act as a safer alternative to thiophosgene. This behaviour has been used in the Corey–Winter olefin synthesis. It may also replace carbonothioyl species (RC(S)Cl) in the Barton–McCombie deoxygenation. Other uses include the synthesis of thioamides and thiocarbamates. Like the analogous CDI, it may be used for peptide coupling. References Thioureas Imidazoles Peptide coupling reagents
Thiocarbonyldiimidazole
[ "Chemistry", "Biology" ]
208
[ "Reagents for biochemistry", "Peptide coupling reagents", "Reagents for organic chemistry" ]
58,663,416
https://en.wikipedia.org/wiki/Neuroanatomy%20of%20handedness
An estimated 90% of the world's human population consider themselves to be right-handed. The human brain's control of motor function is a mirror image in terms of connectivity; the left hemisphere controls the right hand and vice versa. This theoretically means that the hemisphere contralateral to the dominant hand tends to be more dominant than the ipsilateral hemisphere; however, this is not always the case, and numerous other factors contribute in complex ways to physical hand preference. Language and speech areas Language areas are represented unilaterally in the human brain. In around 95% of right-handers, these brain areas are located on the left hemisphere, but the proportion falls to around 70% in left-handers. Therefore, 7 in every 100 individuals are right-hemisphered for language and left-hand dominant. It is unclear whether left-hemisphered left-handers suffer any language or writing deficits because of this. Broca's area has been found to have differing grey matter structure depending on handedness. The inferior frontal sulcus, which contains Broca's area, was found to be more continuous in the hemisphere ipsilateral to the dominant hand. Corpus callosum Because the left arm is controlled by the right hemisphere and vice versa, the corpus callosum has been found to be larger in left-handers. This is theoretically so that language comprehension and production can move more efficiently from the primary language areas into the motor areas which control the contralateral arm. No research has investigated the effect of being right-hemispheric for language whilst being left-handed, and whether or not the corpus callosum is still larger without the need to communicate across hemispheres, as would be the case in right-hemispheric left-handers. Planum temporale The planum temporale is a brain region within Wernicke's area, and is thought to be the most asymmetric area of the human brain, with the left side having been shown to be five times the size of the right in some individuals. However, in people who are left-handed, this asymmetry has been shown to be reduced. Motor areas Handedness correlates in motor areas have been found to be more subtle and less pronounced than those in language areas, but are nevertheless still detectable. Central sulcus The surface area of the central sulcus has been found to be larger in the dominant hemisphere, and the 'hand knob', an area in the primary motor cortex which is responsible for hand movements, is located more dorsally in the left hemisphere of people who are right-handed compared to left-handed. Right shift theory Marian Annett devised the Right Shift Theory in 1972, which states that development of the language areas and motor cortex is preferential in the left hemisphere due to the theoretical gene RS+. The theory also states that there is no particular gene which causes increased right-hemispheric development compared to left; instead, without the RS+ gene, development follows a centred Gaussian distribution. The presence of the RS+ gene promotes left-hemispheric dominance, in turn introducing a right-handedness bias which shifts the curve towards the right. Corticospinal tract The corticospinal tract is a bundle of white matter which connects the cerebral cortex with motor neurons in the spinal cord. Notably, humans show a natural asymmetry between the left and right tracts, such that the left tract (and therefore the connections to the right hand) is significantly larger. 
However, this asymmetry has been found to be reduced in left-handers, suggesting a less biased connection to both hands. Forced handedness In order to untangle causality, some research employs a 'forced handedness' group. Left-handers who were forced during childhood to use their right hand showed a larger surface area of the central sulcus in their left hemisphere, which is associated with right-handedness. Structures in the basal ganglia such as the putamen also mirrored developmentally right-hand dominant individuals in the forced group. Face processing The fusiform face area is typically unilateral, much like the language areas, and is localized on the right fusiform gyrus. However, this brain region has been found to be more bilateral in left-handers; that is, the left fusiform gyrus responds more to faces in left-handers than in right-handers. The occipital face area shows no such correlation, and so handedness is thought to affect face processing at a level of the hierarchy which does not involve the occipital face area but does include the fusiform gyrus. Complications Handedness inventory Handedness in and of itself tends to be a grey area. The requirements for someone to be right- as opposed to left-handed have been debated, and because individuals who identify as left-handed may also use their right hand for a large number of tasks, identifying two clear-cut groups of subjects is a challenging task. The Edinburgh Handedness Inventory is a common parametric test used to determine handedness by comparing individuals to the population at large. However, use of this inventory varies between researchers, and it has been criticized for its use in modern research. This means that an individual whom one research group classifies as a left-hander may be classified as ambidextrous by another, leading to difficulties in comparison between the two. Conflicting evidence A number of asymmetrical findings have been disputed, with various studies stating null results in opposition to previously reported differences. This is an issue in handedness neuroscience, as imaging methods are highly susceptible to type 1 errors due to the number of comparisons which they make. Complexity of causality The relationship between handedness and its neuronal correlates is complex. Language areas themselves are not concretely correlated, and motor areas show exceedingly subtle differences. Because of this, the literature shows many differing opinions. Clearly, advances in research are still necessary to unveil true causal relationships between structural differences and their manifestation in the form of handedness. Frontal Right/Left Areas and Psychopathology Cases have been reported of inmates showing a larger right prefrontal cortex while remaining dominant in, or controlled by, their left prefrontal cortex, and this pattern has been associated with criminal behaviour and with psychopathic traits. A review associated it with impulsive behaviour, with mostly left and/or crossed lateralities, and above all with eyedness (eye laterality) as a key to detecting and relating brain lateralization to that behavioural disorder when eye-hand laterality is crossed. One study reported a sample of 5% crossed left-handedness within a population of 57 left-handers (5-6%), found to be possibly associated with emotions and the limbic system, as well as with the emergence of a need for, or lack of, self-regulation. 
References Handedness Neuroanatomy Neuropsychology Motor skills
Neuroanatomy of handedness
[ "Physics", "Chemistry", "Biology" ]
1,448
[ "Behavior", "Motor skills", "Motor control", "Chirality", "Asymmetry", "Handedness", "Symmetry" ]
58,677,840
https://en.wikipedia.org/wiki/LiteBIRD
LiteBIRD (Lite (Light) satellite for the studies of B-mode polarization and Inflation from cosmic background Radiation Detection) is a planned small space observatory that aims to detect the footprint of primordial gravitational waves on the cosmic microwave background (CMB) in the form of a polarization pattern called the B-mode. LiteBIRD and OKEANOS were the two finalists for Japan's second Large-Class Mission. In May 2019, LiteBIRD was selected by the Japanese space agency. LiteBIRD is planned to be launched in 2032 on an H3 launch vehicle for three years of observations at the Sun-Earth Lagrangian point L2. Overview Cosmological inflation is the leading theory of the first instant of the universe within the Big Bang theory. Inflation postulates that the universe underwent a period of rapid expansion an instant after its formation, and it provides a convincing explanation for cosmological observations. Inflation predicts that primordial gravitational waves were created during the inflationary era, about 10−38 second after the beginning of the universe. The primordial gravitational waves are expected to be imprinted in the CMB polarization map as a special pattern, called the B-mode. Measurements of the polarization of the CMB radiation are considered the best probe for detecting the primordial gravitational waves, which could bring profound knowledge of how the Universe began and may open a new era of testing theoretical predictions of quantum gravity, including those of superstring theory. The science goal of LiteBIRD is to measure the CMB polarization over the entire sky with a sensitivity of δr < 0.001, which allows the major single-field slow-roll inflation models to be tested experimentally. The design concept is being studied by an international team of scientists from Japan, the U.S., Canada and Europe. Telescopes In order to separate the CMB from the galactic emission, the measurements will cover 40 GHz to 400 GHz during a 3-year full-sky survey using two telescopes on LiteBIRD. The Low Frequency Telescope (LFT) covers 40 GHz to 235 GHz, and the High Frequency Telescope (HFT) covers 280 GHz to 400 GHz. The LFT is a 400 mm aperture crossed-Dragone telescope, and the HFT is a 200 mm aperture on-axis refractor with two silicon lenses. The baseline design considers an array of 2,622 superconducting polarimetric detectors. The entire optical system will be cryogenically cooled to minimize thermal emission, and the focal plane is cooled to 100 mK with a two-stage sub-Kelvin cooler. See also POLARBEAR BICEP and Keck Array References External links LiteBIRD Official home page ISAS/JAXA LiteBIRD page LiteBird System Overview from 2012 2030s in spaceflight Proposed spacecraft Satellites of Japan Space telescopes Physical cosmology Cosmic microwave background experiments
LiteBIRD
[ "Physics", "Astronomy" ]
601
[ "Theoretical physics", "Astrophysics", "Space telescopes", "Physical cosmology", "Astronomical sub-disciplines" ]
51,000,464
https://en.wikipedia.org/wiki/Vishnevsky%20liniment
Vishnevsky liniment or balsamic liniment is a topical medication which has been used to treat wounds, burns, skin ulcers and suppurations. Developed by Russian surgeon Alexander Vishnevsky in 1927, the liniment contains birch tar, xeroformium (bismuth tribromophenolate) and castor oil, and has been broadly used as a topical medication in the former Soviet Union. Vishnevsky liniment was broadly used in the Soviet army during World War II. It was later shown that prolonged application of Vishnevsky liniment to chronic skin ulcers, wounds or burns can be associated with a higher risk of skin cancer and of hematologic or other malignancies. See also List of Russian drugs References Ointments Abandoned drugs Soviet inventions Drugs in the Soviet Union
Vishnevsky liniment
[ "Chemistry" ]
176
[ "Pharmacology", "Drug safety", "Medicinal chemistry stubs", "Pharmacology stubs", "Abandoned drugs" ]
51,001,398
https://en.wikipedia.org/wiki/Physics%20and%20Chemistry%20of%20Glasses
Physics and Chemistry of Glasses: European Journal of Glass Science and Technology Part B is a bimonthly peer-reviewed scientific journal published by the Society of Glass Technology. It was established in 2006, from the merger of the Society of Glass Technology's journal Physics and Chemistry of Glasses and the Deutsche Glastechnische Gesellschaft's journal Glass Science and Technology. Abstracting and indexing The journal is abstracted and indexed in the Science Citation Index, Current Contents/Physical, Chemical & Earth Sciences, Current Contents/Engineering, Computing & Technology, and Scopus. According to the Journal Citation Reports, the journal has a 2015 impact factor of 0.630. References External links Materials science journals English-language journals Bimonthly journals Academic journals established in 2006 Glass engineering and science
Physics and Chemistry of Glasses
[ "Materials_science", "Engineering" ]
158
[ "Glass engineering and science", "Materials science journals", "Materials science" ]