Dataset columns: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
26,916,166
https://en.wikipedia.org/wiki/Gas%20in%20scattering%20media%20absorption%20spectroscopy
Gas in scattering media absorption spectroscopy (GASMAS) is an optical technique for sensing and analysis of gas located within porous and highly scattering solids, e.g. powders, ceramics, wood, fruit, translucent packages, pharmaceutical tablets, foams, human paranasal sinuses, etc. It was introduced in 2001 by Prof. Sune Svanberg and co-workers at Lund University (Sweden). The technique is related to conventional high-resolution laser spectroscopy for sensing and spectroscopy of gas (e.g. tunable diode laser absorption spectroscopy, TDLAS), but the fact that the gas here is "hidden" inside solid materials gives rise to important differences. Basic Principles Free gases exhibit very sharp spectral features, and different gas species have their own unique spectral fingerprints. At atmospheric pressure, absorption linewidths are typically on the order of 0.1 cm−1 (i.e. ~3 GHz in optical frequency or 0.006 nm in wavelength), while solid media have dull spectral behavior with absorption features a thousand times wider. By looking for the sharp absorption imprints in light emerging from porous samples, it is thus possible to detect gases confined in solids – even though the solid often attenuates light much more strongly than the gas itself. The basic principle of GASMAS is shown in figure 1. Laser light is sent into a sample with gas cavities, which could either be small pores (left) or larger gas-filled chambers. The heterogeneous nature of the porous material often gives rise to strong light scattering, and pathlengths are often surprisingly long (10 or 100 times the sample dimension is not uncommon). In addition, light will experience absorption related to the solid material. When travelling through the material, light will travel partly through the pores, and will thus experience the spectrally sharp gas absorption. Light leaving the material will carry this information, and can be collected by a detector either in a transmission mode (left) or in a reflection mode (right). In order to detect the spectrally sharp fingerprints related to the gas, GASMAS has so far relied on high-resolution tunable diode laser absorption spectroscopy (TDLAS). In principle, this means that a nearly monochromatic (narrow-bandwidth) laser is scanned across an absorption line of the gas, and a detector records the transmission profile. In order to increase sensitivity, modulation techniques are often employed. The strength of the gas absorption will depend, as given by the Beer-Lambert law, both on the gas concentration and the path-length that the light has travelled through the gas. In conventional TDLAS, the path-length is known and the concentration is readily calculated from the transmittance. In GASMAS, extensive scattering renders the pathlength unknown, and the determination of the gas concentration is therefore more difficult. In many applications, however, the gas concentration is known and other parameters are in focus. Furthermore, as discussed below, there are complementary techniques that can provide information on the optical pathlength, thus also allowing gas concentrations to be evaluated. Challenges Diffuse light Unknown interaction pathlength Optical interference noise It is well known that optical interference is often a major problem in laser-based gas spectroscopy. In conventional laser-based gas spectrometers, the optical interference originates from e.g. etalon-type interference effects in (or between) optical components and multi-pass gas cells.
Throughout the years, great efforts have been devoted to handling this problem. Proper optical design is important to minimize interference from the beginning (e.g. by tilting optical components, avoiding transmissive optics and using anti-reflection coating), but interference patterns cannot be completely avoided and are often difficult to separate from gas absorption. Since gas spectroscopy often involves measurement of small absorption fractions (down to 10−7), appropriate handling of interference is crucial. Utilised countermeasures include customized optical design, tailored laser modulation, mechanical dithering, signal post-processing, sample modulation, and baseline recording and interference subtraction. In the case of GASMAS, optical interference is particularly cumbersome. This is related to the severe speckle-type interference that originates from the interaction between laser light and highly scattering solid materials. Since this highly non-uniform interference is generated in the same place as the utility signal, it cannot be removed by design. The optical properties of the porous material under study determine the interference pattern, and the level of interference is often much stronger than the actual gas absorption signals. Random mechanical dithering (e.g. laser beam dithering and/or sample rotation) has been found effective in GASMAS. However, this approach converts stable interference into random noise that must be averaged away, thus requiring longer acquisition times. Baseline recording and interference subtraction may be applicable in some GASMAS applications, as may some of the other methods described above. Applications Medical diagnostics Optical porosimetry Monitoring of drying processes Pharmaceutical applications Monitoring of food and food packaging Much of the food that we consume today is put in a wide variety of packages to ensure food quality and to make transportation and distribution possible. Many of these packages are air- or gas-tight, making it difficult to study the gas composition without perforation. In many cases it is of great value to study the composition of gases without destroying the package. Perhaps the best example is studies of the amount of oxygen in food packages. Oxygen is naturally present in most food and food packages as it is a major component of air. However, oxygen is also a major cause of aging in biological substances, since it promotes chemical and microbiological activity. Today, methods like modified atmosphere packaging (MAP) and controlled atmosphere packaging (CAP) are implemented to reduce and control the oxygen content in food packages to prolong shelf life and ensure safe food. To ensure the effectiveness of these methods it is important to regularly measure the concentration of oxygen (and other gases) inside these packages. GASMAS provides the possibility of doing this non-intrusively, without destroying any food or packages. The two main advantages of measuring the gas composition in packages without perforation are that no food is wasted in the monitoring process and that the same package can be checked repeatedly over an extended time period to monitor any time-dependence of the gas composition. Such studies can be used to verify the tightness of packages but also to follow food deterioration processes. Much food itself contains free gas distributed in internal pores. Examples are fruit, bread, flour, beans, cheese, etc.
This gas can also be of great value to study in order to monitor quality and maturity level. Spectroscopy of gas confined in nanoporous materials References External links GASMAS technology for packaging, food, and medical applications. Spectroscopy
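As a concrete illustration of the Beer-Lambert dependence mentioned above - the measured absorption fixes only the product of concentration and path length, which is why the unknown, scattering-enhanced path length matters in GASMAS - here is a minimal sketch with purely illustrative numbers (the absorption coefficient below is not from the article):

import math

def transmittance(absorption_coeff_per_m: float, concentration_fraction: float, path_m: float) -> float:
    """Beer-Lambert: T = exp(-alpha * c * L); only the product c * L enters."""
    return math.exp(-absorption_coeff_per_m * concentration_fraction * path_m)

alpha = 0.27  # illustrative line-centre absorption coefficient for the pure gas, 1/m
# The same transmittance results from 21% gas over 1 m and 10.5% gas over 2 m,
# so concentration cannot be recovered without knowing the path length:
print(transmittance(alpha, 0.21, 1.0))   # ~0.9449
print(transmittance(alpha, 0.105, 2.0))  # ~0.9449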
Gas in scattering media absorption spectroscopy
[ "Physics", "Chemistry" ]
1,401
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
8,432,385
https://en.wikipedia.org/wiki/Dispatchable%20generation
Dispatchable generation refers to sources of electricity that can be programmed on demand at the request of power grid operators, according to market needs. Dispatchable generators may adjust their power output according to an order. Non-dispatchable renewable energy sources such as wind power and solar photovoltaic (PV) power cannot be controlled by operators. Other types of renewable energy that are dispatchable without separate energy storage are hydroelectric, biomass, geothermal and ocean thermal energy conversion. Startup time Dispatchable plants have varying startup times, depending on the technology used and the time elapsed since the previous operation. For example, a "hot startup" can be performed a few hours after a preceding shutdown, while a "cold startup" is performed after a few days of inoperation. The fastest plants to dispatch are grid batteries, which can dispatch in milliseconds. Hydroelectric power plants can often dispatch in tens of seconds to minutes, and natural gas power plants can generally dispatch in tens of minutes. For example, the 1,728 MW Dinorwig pumped storage power plant can reach full output in 16 seconds. Gas turbine (Brayton cycle) thermal plants require around 15-30 minutes to start up. Coal thermal plants based on steam turbines (Rankine cycle) are dispatchable sources that require hours to start up. Combined cycle power plants consist of a few stages with varying startup times, with more than 8 hours required to get to full power from a cold state: the gas turbine can start in 15-30 minutes; the steam turbine (ST) heating process takes from 1 hour (for a hot startup) to 6 hours (for a cold startup); the ST load increase takes an additional 20 minutes (if "hot") to 2 hours ("cold"). Nuclear power plants have the longest startup times, of a few days for a cold startup (less than a week). A typical boiling water reactor goes through the following stages: establishment of a chain reaction (up to 6 hours); getting to nominal temperature and pressure in the reactor (12 hours); warming up the steam generation (12 hours); increasing the load (2-3 days). Benefits The primary benefits of dispatchable power plants include: providing spinning reserve (frequency control) balancing the electric power system (load following) optimizing economic generation dispatch (merit order) contributing to clearing grid congestion (redispatch) These capabilities of dispatchable generators allow: Load matching - slow changes in power demand between, for example, night and day, require changes in supply too, as the system needs to be balanced at all times (see also Electricity). Peak matching - short periods of time during which demand exceeds the output of load matching plants; generation capable of satisfying these peaks in demand is implemented through quick deployment of dispatchable sources. Lead-in times - periods during which an alternative source is employed to cover the lead time required by large coal or natural gas fueled plants to reach full output; these alternative power sources can be deployed in a matter of seconds or minutes to adapt to rapid shocks in demand or supply that cannot be satisfied by peak matching generators.
Frequency regulation or intermittent power sources - changes in the electricity output sent into the system may change the quality and stability of the transmission system itself because of a change in the frequency of the electricity transmitted; renewable sources such as wind and solar are intermittent and need flexible power sources to smooth out their changes in energy production. Backup for base-load generators - Nuclear power plants, for example, are equipped with nuclear reactor safety systems that can stop the generation of electricity in less than a second in case of emergency. Alternative classification A 2018 study suggested a new classification of energy generation sources, which accounts for the fast increase in the penetration of variable renewable energy (VRE) sources, which results in high energy prices during periods of low availability: "Fuel-saving" variable renewable energy sources, which have near-zero variable costs and zero fuel costs by using the power of wind, the Sun and run-of-river hydropower. With a large share of these sources, "capacity needs are driven by periods with low VRE availability" and therefore their proposed role is to replace other high-variable-cost sources at periods when they are available. "Fast-burst" sources are energy sources that can be instantly dispatched during periods of high demand and high energy prices, but perform poorly in long-term continuous operation. These include energy storage (batteries), flexible demand and demand response. "Firm" low-carbon sources, which provide a stable energy supply during all seasons and during periods up to weeks or months, and include nuclear power, hydro plants with large reservoirs, fossil fuels with carbon capture, geothermal and biofuels. See also Peaking power plant Load following power plant Intermittent energy source References Sources Electric power
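To make the startup-time comparison above concrete, here is a small sketch that filters technologies by how quickly they can be brought online; the numbers are the rough figures quoted above, reduced to single representative values for illustration only:

# Representative startup times in minutes, loosely taken from the figures quoted above.
STARTUP_MINUTES = {
    "grid battery": 0.001,          # milliseconds
    "pumped storage hydro": 0.3,    # Dinorwig: ~16 seconds to full output
    "open-cycle gas turbine": 20,   # 15-30 minutes
    "coal steam plant": 300,        # hours
    "combined cycle (cold)": 480,   # more than 8 hours
    "nuclear (cold)": 4320,         # a few days
}

def dispatchable_within(minutes: float) -> list[str]:
    """Return technologies that can reach output within the given lead time."""
    return sorted(tech for tech, m in STARTUP_MINUTES.items() if m <= minutes)

print(dispatchable_within(30))  # ['grid battery', 'open-cycle gas turbine', 'pumped storage hydro']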
Dispatchable generation
[ "Physics", "Engineering" ]
947
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
8,433,618
https://en.wikipedia.org/wiki/Ubbelohde%20viscometer
An Ubbelohde type viscometer or suspended-level viscometer is a measuring instrument which uses a capillary-based method of measuring viscosity. It is recommended for higher viscosity cellulosic polymer solutions. The advantage of this instrument is that the values obtained are independent of the total volume. The device was developed by the German chemist Leo Ubbelohde (1877-1964). ASTM and other test methods are: ISO 3104, ISO 3105, ASTM D445, ASTM D446, ASTM D4020, IP 71, BS 188. The Ubbelohde viscometer is closely related to the Ostwald viscometer. Both are U-shaped pieces of glassware with a reservoir on one side and a measuring bulb with a capillary on the other. A liquid is introduced into the reservoir then sucked through the capillary and measuring bulb. The liquid is allowed to travel back through the measuring bulb, and the time it takes for the liquid to pass through two calibrated marks is a measure of the viscosity. The Ubbelohde device has a third arm extending from the end of the capillary and open to the atmosphere. In this way the pressure head only depends on a fixed height and no longer on the total volume of liquid. Determination of viscosity The determination of viscosity is based on Poiseuille's law, where t is the time it takes for a volume V to elute (the standard relations are sketched below). This ratio depends on the capillary radius R, on the average applied pressure P, on the capillary length L and on the dynamic viscosity η. The average pressure head is determined by ρ, the density of the liquid, g, the standard gravity, and H, the average head of the liquid. In this way the viscosity of a fluid can be determined. Usually the viscosity of a pure liquid is compared to that of the same liquid with an analyte, for example a polymer, dissolved in it. The relative viscosity is the ratio of the two, where t0 and ρ0 are the elution time and density of the pure liquid. When the solution is very dilute, the so-called specific viscosity is used. This specific viscosity is related to the concentration of the analyte through the intrinsic viscosity [η] by a power series; the ratio of the specific viscosity to the concentration is called the viscosity number. The intrinsic viscosity can be determined experimentally by measuring the viscosity number as a function of concentration and taking the Y-axis intercept. References Laboratory glassware Polymer chemistry Viscosity meters
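A compact sketch of the relations referred to above, assuming the standard Hagen-Poiseuille form for capillary flow and the usual definitions of relative, specific and intrinsic viscosity (k_H below is a system-dependent Huggins-type coefficient introduced here only for illustration):

\frac{V}{t} = \frac{\pi P R^{4}}{8 L \eta}, \qquad P = \rho g H

\eta_{\mathrm{rel}} = \frac{\eta}{\eta_{0}} = \frac{t\,\rho}{t_{0}\,\rho_{0}}, \qquad \eta_{\mathrm{sp}} = \eta_{\mathrm{rel}} - 1

\frac{\eta_{\mathrm{sp}}}{c} = [\eta] + k_{H}[\eta]^{2} c + \dots, \qquad [\eta] = \lim_{c \to 0}\frac{\eta_{\mathrm{sp}}}{c}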
Ubbelohde viscometer
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
531
[ "Viscosity meters", "Materials science", "Polymer chemistry", "Measuring instruments" ]
8,437,941
https://en.wikipedia.org/wiki/Noncommutative%20standard%20model
In theoretical particle physics, the non-commutative Standard Model (best known as the Spectral Standard Model) is a model based on noncommutative geometry that unifies a modified form of general relativity with the Standard Model (extended with right-handed neutrinos). The model postulates that space-time is the product of a 4-dimensional compact spin manifold by a finite space. The full Lagrangian (in Euclidean signature) of the Standard Model minimally coupled to gravity is obtained as pure gravity over that product space. It is therefore close in spirit to Kaluza–Klein theory but without the problem of a massive tower of states. The parameters of the model live at the unification scale, and physical predictions are obtained by running the parameters down through renormalization. It is worth stressing that it is more than a simple reformulation of the Standard Model. For example, the scalar sector and the fermion representations are more constrained than in effective field theory. Motivation Following ideas from Kaluza–Klein and Albert Einstein, the spectral approach seeks unification by expressing all forces as pure gravity on a space. The group of invariance of such a space should combine the group of invariance of general relativity with the group of maps from the manifold to the standard model gauge group. The diffeomorphism group acts on the group of gauge maps by permutations, and the full group of symmetries of the space is their semi-direct product. Note that this group of invariance is not a simple group, as it always contains the group of gauge maps as a normal subgroup. It was proved by Mather and Thurston that for ordinary (commutative) manifolds, the connected component of the identity in the diffeomorphism group is always a simple group, therefore no ordinary manifold can have this semi-direct product structure. It is nevertheless possible to find such a space by enlarging the notion of space. In noncommutative geometry, spaces are specified in algebraic terms. The algebraic object corresponding to a diffeomorphism is the automorphism of the algebra of coordinates. If the algebra is taken to be non-commutative, it has non-trivial automorphisms (so-called inner automorphisms). These inner automorphisms form a normal subgroup of the group of automorphisms and provide the correct group structure. Picking different algebras then gives rise to different symmetries. The Spectral Standard Model takes as input a product algebra, where one factor is the algebra of differentiable functions encoding the 4-dimensional manifold and the other is a finite-dimensional algebra encoding the symmetries of the standard model. History The first ideas to use noncommutative geometry in particle physics appeared in 1988-89, and were formalized a couple of years later by Alain Connes and John Lott in what is known as the Connes-Lott model. The Connes-Lott model did not incorporate the gravitational field. In 1997, Ali Chamseddine and Alain Connes published a new action principle, the Spectral Action, that made it possible to incorporate the gravitational field into the model. Nevertheless, it was quickly noted that the model suffered from the notorious fermion-doubling problem (quadrupling of the fermions) and required neutrinos to be massless. One year later, experiments at Super-Kamiokande and the Sudbury Neutrino Observatory began to show that solar and atmospheric neutrinos change flavors and therefore are massive, ruling out the Spectral Standard Model. Only in 2006 was a solution to the latter problem proposed, independently and almost at the same time, by John W. Barrett and Alain Connes.
They showed that massive neutrinos can be incorporated into the model by disentangling the KO-dimension (which is defined modulo 8) from the metric dimension (which is zero) for the finite space. By setting the KO-dimension to be 6, not only were massive neutrinos possible, but the see-saw mechanism was imposed by the formalism and the fermion doubling problem was also addressed. The new version of the model was subsequently studied, and under an additional assumption, known as the "big desert" hypothesis, computations were carried out that predicted the Higgs boson mass around 170 GeV and postdicted the top quark mass. In August 2008, Tevatron experiments excluded a Higgs mass of 158 to 175 GeV at the 95% confidence level. Alain Connes acknowledged on a blog about non-commutative geometry that the prediction about the Higgs mass was invalidated. In July 2012, CERN announced the discovery of the Higgs boson with a mass around 125 GeV/c². A proposal to address the problem of the Higgs mass was published by Ali Chamseddine and Alain Connes in 2012 by taking into account a real scalar field that was already present in the model but was neglected in previous analyses. Another solution to the Higgs mass problem was put forward by Christopher Estrada and Matilde Marcolli by studying renormalization group flow in the presence of gravitational correction terms. See also Noncommutative geometry Noncommutative algebraic geometry Noncommutative quantum field theory Timeline of atomic and subatomic physics Notes References External links Alain Connes' official website with downloadable papers. Alain Connes's Standard Model. Physics beyond the Standard Model Noncommutative geometry
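For orientation, a sketch of the input algebra and symmetry group the passage alludes to, written in the notation that is standard in the spectral model literature (assumed here, since the original symbols did not survive in the text):

\mathcal{A} = C^{\infty}(M) \otimes \mathcal{A}_{F}, \qquad \mathcal{A}_{F} = \mathbb{C} \oplus \mathbb{H} \oplus M_{3}(\mathbb{C})

\mathcal{G} = \mathrm{Map}(M, G)\ \text{with}\ G = SU(3)\times SU(2)\times U(1), \qquad U = \mathcal{G} \rtimes \mathrm{Diff}(M)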
Noncommutative standard model
[ "Physics" ]
1,075
[ "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model" ]
27,287,411
https://en.wikipedia.org/wiki/CIT%20Program%20Tumor%20Identity%20Cards
The "Cartes d'Identité des Tumeurs (CIT)" program, launched and funded by the French charity "Ligue Nationale contre le Cancer," aims to improve or develop better targeted therapeutic approaches by refining molecular knowledge of multiple types of tumors. The CIT program mainly relies on the large-scale and systematic profiling of large cohorts of tumors at various molecular levels including at least the genome, the epigenome, and the transcriptome. See also Precision medicine Oncology Cancer Research Bioinformatics Computational genomics Oncogenomics Genomics Transcriptome Gene expression profiling References External links Official web site List of main scientific publications Evidence-based medicine Medical diagnosis Bioinformatics Biostatistics Cancer research
CIT Program Tumor Identity Cards
[ "Engineering", "Biology" ]
153
[ "Bioinformatics", "Biological engineering" ]
4,928,954
https://en.wikipedia.org/wiki/Tricine
Tricine is an organic compound that is used in buffer solutions. The name tricine comes from tris and glycine, from which it was derived. It is a white crystalline powder that is moderately soluble in water. It is a zwitterionic amino acid that has a pKa1 value of 2.3 at 25 °C, while its pKa2 at 20 °C is 8.15. Its useful buffering range is pH 7.4-8.8. Along with bicine, it is one of Good's buffering agents. Good first prepared tricine to buffer chloroplast reactions. Applications Tricine is a commonly used electrophoresis buffer and is also used in resuspension of cell pellets. It has a more negative charge than glycine, allowing it to migrate faster. In addition, its high ionic strength causes more ion movement and less protein movement. This allows low-molecular-weight proteins to be separated in lower-percentage acrylamide gels. Tricine has been documented in the separation of proteins in the range of 1 to 100 kDa by electrophoresis. A tricine buffer at 25 mmol/L was found to be the most effective buffer among the ten tested for ATP assays using firefly luciferase. Tricine has also been found to be an effective scavenger of hydroxyl radicals in a study of radiation-induced membrane damage. See also SDS-PAGE Ampholyte Glycine Bicine References Triols Buffer solutions Alpha-Amino acids Amino acid derivatives Zwitterions Hydroxy acids Ethanolamines
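As an illustration of how the quoted pKa2 translates into a working buffer pH, here is a minimal sketch using the Henderson-Hasselbalch relation; the concentration ratios below are arbitrary examples, not values from the article:

import math

def buffer_ph(pka: float, base_molar: float, acid_molar: float) -> float:
    """Henderson-Hasselbalch: pH = pKa + log10([base] / [acid])."""
    return pka + math.log10(base_molar / acid_molar)

# Tricine's second dissociation, pKa2 ~ 8.15 (20 degrees C), governs its buffering range.
print(buffer_ph(8.15, 0.0125, 0.0125))          # equal forms -> pH 8.15
print(round(buffer_ph(8.15, 0.005, 0.020), 2))  # excess acid form -> pH ~7.55, still in range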
Tricine
[ "Physics", "Chemistry" ]
334
[ "Buffer solutions", "Ions", "Zwitterions", "Matter" ]
4,930,033
https://en.wikipedia.org/wiki/Stationary%20state
A stationary state is a quantum state with all observables independent of time. It is an eigenvector of the energy operator (instead of a quantum superposition of different energies). It is also called energy eigenvector, energy eigenstate, energy eigenfunction, or energy eigenket. It is very similar to the concept of atomic orbital and molecular orbital in chemistry, with some slight differences explained below. Introduction A stationary state is called stationary because the system remains in the same state as time elapses, in every observable way. For a single-particle Hamiltonian, this means that the particle has a constant probability distribution for its position, its velocity, its spin, etc. (This is true assuming the particle's environment is also static, i.e. the Hamiltonian is unchanging in time.) The wavefunction itself is not stationary: It continually changes its overall complex phase factor, so as to form a standing wave. The oscillation frequency of the standing wave, multiplied by the Planck constant, is the energy of the state according to the Planck–Einstein relation. Stationary states are quantum states that are solutions to the time-independent Schrödinger equation Ĥ|Ψ⟩ = E_Ψ|Ψ⟩, where Ĥ is the Hamiltonian (the energy operator), |Ψ⟩ is the state and E_Ψ is its energy. This is an eigenvalue equation: Ĥ is a linear operator on a vector space, |Ψ⟩ is an eigenvector of Ĥ, and E_Ψ is its eigenvalue. If a stationary state |Ψ⟩ is plugged into the time-dependent Schrödinger equation, the result is iħ ∂|Ψ(t)⟩/∂t = E_Ψ|Ψ(t)⟩. Assuming that the Hamiltonian is time-independent (unchanging in time), this equation holds for any time t. Therefore, this is a differential equation describing how |Ψ(t)⟩ varies in time. Its solution is |Ψ(t)⟩ = e^(−iE_Ψt/ħ)|Ψ(0)⟩. Therefore, a stationary state is a standing wave that oscillates with an overall complex phase factor, and its oscillation angular frequency is equal to its energy divided by ħ. Stationary state properties As shown above, a stationary state is not mathematically constant, since its overall complex phase factor keeps changing. However, all observable properties of the state are in fact constant in time. For example, if |Ψ(t)⟩ represents a simple one-dimensional single-particle wavefunction Ψ(x, t), the probability that the particle is at location x is |Ψ(x, t)|² = |Ψ(x, 0)|², which is independent of the time t. The Heisenberg picture is an alternative mathematical formulation of quantum mechanics where stationary states are truly mathematically constant in time. As mentioned above, these equations assume that the Hamiltonian is time-independent. This means simply that stationary states are only stationary when the rest of the system is fixed and stationary as well. For example, a 1s electron in a hydrogen atom is in a stationary state, but if the hydrogen atom reacts with another atom, then the electron will of course be disturbed. Spontaneous decay Spontaneous decay complicates the question of stationary states. For example, according to simple (nonrelativistic) quantum mechanics, the hydrogen atom has many stationary states: 1s, 2s, 2p, and so on, are all stationary states. But in reality, only the ground state 1s is truly "stationary": An electron in a higher energy level will spontaneously emit one or more photons to decay into the ground state. This seems to contradict the idea that stationary states should have unchanging properties. The explanation is that the Hamiltonian used in nonrelativistic quantum mechanics is only an approximation to the Hamiltonian from quantum field theory. The higher-energy electron states (2s, 2p, 3s, etc.) are stationary states according to the approximate Hamiltonian, but not stationary according to the true Hamiltonian, because of vacuum fluctuations.
On the other hand, the 1s state is truly a stationary state, according to both the approximate and the true Hamiltonian. Comparison to "orbital" in chemistry An orbital is a stationary state (or approximation thereof) of a one-electron atom or molecule; more specifically, an atomic orbital for an electron in an atom, or a molecular orbital for an electron in a molecule. For a molecule that contains only a single electron (e.g. atomic hydrogen or H2+), an orbital is exactly the same as a total stationary state of the molecule. However, for a many-electron molecule, an orbital is completely different from a total stationary state, which is a many-particle state requiring a more complicated description (such as a Slater determinant). In particular, in a many-electron molecule, an orbital is not the total stationary state of the molecule, but rather the stationary state of a single electron within the molecule. This concept of an orbital is only meaningful under the approximation that, if we ignore the instantaneous electron–electron repulsion terms in the Hamiltonian as a simplifying assumption, we can decompose the total eigenvector of a many-electron molecule into separate contributions from individual electron stationary states (orbitals), each of which is obtained under the one-electron approximation. (Luckily, chemists and physicists can often (but not always) use this "single-electron approximation".) In this sense, in a many-electron system, an orbital can be considered as the stationary state of an individual electron in the system. In chemistry, calculations of molecular orbitals typically also assume the Born–Oppenheimer approximation. See also Transition of state Quantum number Quantum mechanical vacuum or vacuum state Virtual particle Steady state References Further reading Stationary states, Alan Holden, Oxford University Press, 1971, Quantum mechanics
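A minimal numerical sketch of the point made above - for an energy eigenstate, the probability density does not change in time even though the phase does - using an assumed particle-in-a-box example with m = 1 and ħ = 1 (not from the article):

import numpy as np

hbar = 1.0
box_length = 1.0
n = 2                                                          # quantum number of the chosen eigenstate
energy = (n * np.pi * hbar) ** 2 / (2.0 * box_length ** 2)     # infinite-well energy for mass m = 1

x = np.linspace(0.0, box_length, 201)
psi0 = np.sqrt(2.0 / box_length) * np.sin(n * np.pi * x / box_length)   # stationary state at t = 0

for t in (0.0, 0.7, 3.0):
    psi_t = psi0 * np.exp(-1j * energy * t / hbar)             # only an overall complex phase evolves
    print(t, np.allclose(np.abs(psi_t) ** 2, psi0 ** 2))       # True: |psi|^2 is time-independent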
Stationary state
[ "Physics" ]
1,118
[ "Theoretical physics", "Quantum mechanics" ]
4,930,696
https://en.wikipedia.org/wiki/MagneRide
MagneRide is an automotive adaptive suspension system with magnetorheological dampers, developed by the Delphi Automotive corporation, that uses magnetically controlled dampers, or shock absorbers, for a highly adaptive ride. As opposed to traditional suspension systems, MagneRide has no mechanical valves or even small moving parts that can wear. The system consists of monotube dampers, one on each corner of the vehicle, a sensor set, and an ECU (electronic control unit) to control the system. Background The dampers are filled with magnetorheological fluid, a mixture of easily magnetized iron particles in a synthetic hydrocarbon oil. In each of the monotube dampers is a piston containing two electromagnetic coils and two small fluid passages through the piston. The electromagnets are able to create a variable magnetic field across the fluid passages. When the magnets are off, the fluid travels through the passages freely. When the magnets are turned on, the iron particles in the fluid create a fibrous structure through the passages in the same direction as the magnetic field. The strength of the bonds between the magnetized iron particles causes the effective viscosity of the fluid to increase, resulting in a stiffer suspension. Altering the strength of the current results in an instantaneous change in the force of the piston. If the sensors sense any body roll, they communicate the information to the ECU. The ECU will compensate for this by changing the strength of the current to the appropriate dampers. History The first generation was created by Delphi Corporation during a period when it was a subsidiary of General Motors (GM). Originally licensed only to General Motors vehicles, it debuted on the 2002.5 Cadillac Seville STS. The first sports car to use the technology was the 2003 C5 Corvette. Delphi would later license the technology to other manufacturers such as Ferrari and Audi. BeijingWest Industries (BWI) acquired the MagneRide IP in 2009. Differentiating features Low-velocity damping control Ability to "draw" force-velocity curve Fast response Improvements Generation II MagneRide continued to use a single electromagnetic coil inside the damper piston. Changes from the previous generation include uprated seals and bearings to extend its application to heavier cars and SUVs. The most notable improvements in the new system are the ECU and coils. A smaller, lighter, more capable ECU debuted with Gen II. The legislative requirement for lead-free ECUs caused BWI to redesign their control unit for the third generation. Because they could not use lead, BWI designed their new ECU from scratch. The new and improved ECU has three times the computing capacity of the previous edition as well as ten times more memory. It also has greater tuneability. Dual coils The third generation introduced a second electromagnetic coil in the piston of each damper, improving turn-off response. With the single electromagnetic coil, there was a small delay from when the ECU turned off the current to when the damper lost its magnetic field. This was caused by a temporary electric current, or eddy current, in the electromagnet. BWI greatly reduced this delay with its dual coil system. The two coils are wound in opposite directions to each other, cancelling out the eddy currents. The dual coil system effectively eliminated the delay, resulting in a quicker-responding suspension system.
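To make the sensor-to-coil-current loop described above concrete, here is a minimal illustrative sketch; the class, gain and current values are hypothetical stand-ins, not Delphi/BWI specifications:

class Damper:
    """Hypothetical stand-in for one corner's monotube magnetorheological damper."""
    def __init__(self, corner: str) -> None:
        self.corner = corner
        self.coil_current_ma = 0.0

    def set_coil_current(self, milliamps: float) -> None:
        # Higher current -> stronger field across the fluid passages -> stiffer damping.
        self.coil_current_ma = milliamps

def current_for_roll(body_roll_deg: float, gain_ma_per_deg: float = 800.0,
                     max_ma: float = 5000.0) -> float:
    """Map a measured body-roll angle to a coil current: more roll, stiffer damper."""
    return min(max_ma, max(0.0, gain_ma_per_deg * abs(body_roll_deg)))

dampers = [Damper(c) for c in ("FL", "FR", "RL", "RR")]
for d in dampers:
    d.set_coil_current(current_for_roll(1.5))        # e.g. 1.5 degrees of roll -> 1200 mA
print([(d.corner, d.coil_current_ma) for d in dampers])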
Applications MagneRide was first used by General Motors in the Cadillac Seville STS (2002.5) sedan, first used in a sports car in the 2003 C5 Corvette, and is now used as a standard suspension or an option in many models for Cadillac, Buick, Chevrolet, and other GM vehicles. It can also be found on some Holden Special Vehicles, Ferrari, Lamborghini, Ford and Audi vehicles. Specific Applications: Buick Lucerne: CXS trim; Lucerne Super Chevrolet Camaro: Standard equipment on ZL1 trim (2012–2024) and optional on SS trim (2016–2024) Chevrolet Corvette C5: Standard equipment on 2003 50th anniversary model, optional on 2003-2004 model years Chevrolet Corvette C6: optional in coupe trim starting in 2005 model year and in hardtop (Z06) trim starting in 2012 model year; standard equipment in ZR1. Chevrolet Corvette C7: Optional with Z51 package, standard on Z06 and ZR1 Chevrolet Corvette C8: Optional with Z51 package, standard on Z06 Chevrolet SS (2015-2017) Cadillac XLR and Cadillac XLR-V (2004–2009) standard on all models Cadillac ATS and Cadillac ATS-V (2013–): standard with 3.6L or 2.0T option package Cadillac Celestiq (2023-): standard along with air suspension and Active Roll Control Cadillac CT4-V (2020–): standard on CT4-V Cadillac CT5-V (2020–): fourth generation Magnetic Ride Control standard on CT5-V Cadillac CTS and Cadillac CTS-V (2009–) (Magnetic Ride Control) Cadillac CT6 (2016–): standard on Platinum, optional on other models except PHEV Cadillac Escalade (2008-): standard Cadillac SRX (2004–09): standard with Performance or Premium option package. Cadillac DTS (2006–11): standard with Performance or Premium option package. Cadillac STS (2005–11): standard with Northstar V8 and 1SG option package. Cadillac Seville STS (2002–03): Debut application for MagneRide, replacing Continuously Variable Road-Sensing Suspension (CVRSS). GMC Sierra (Denali Trim) (2015-) GMC Yukon and Yukon XL (LTZ Trim) (2015-) Ford Mustang Ecoboost (2018–): Optional in Performance Package (2018–2019); Standard in Handling Package, which requires High Performance Package (2020–) Ford Mustang GT (2018–): Optional in Performance Package, Standard in Performance Package Level 2 Ford Mustang Bullitt (2019–2020): Optional Ford Mustang Shelby GT350 (2015–2020) and GT500 (2020–2022): Standard Ford Mustang Mach-E: GT Performance Edition HSV Senator HSV GTS HSV W427 Acura MDX Sport Package Acura ZDX Acura NSX Acura TLX (2021–) Audi TT (magnetic ride) Audi S3 (magnetic ride) Audi R8 (magnetic ride) Land Rover Discovery Sport Land Rover Range Rover Evoque Ferrari 599 Ferrari F12berlinetta Ferrari California Ferrari FF Ferrari 458 Italia Ferrari La Ferrari Ferrari Roma Lamborghini Aventador Lamborghini Huracán Aftermarket : Tesla Model 3 (AWD,RWD) Tesla Model y (AWD,RWD) GWM TANK 300 Mercedes Benz W447 V class & Vito References External links BWI Group MagneRide Company "BWI wraps up Delphi deal", China Daily Automotive suspension technologies Automotive technology tradenames Vehicle safety technologies Auto parts Mechanical power control
MagneRide
[ "Physics" ]
1,438
[ "Mechanics", "Mechanical power control" ]
4,931,186
https://en.wikipedia.org/wiki/Blood%20fractionation
Blood fractionation is the process of fractionating whole blood, or separating it into its component parts. This is typically done by centrifuging the blood. The resulting components are: a clear solution of blood plasma in the upper phase (which can be separated into its own fractions, see Blood plasma fractionation), the buffy coat, which is a thin layer of leukocytes (white blood cells) mixed with platelets in the middle, and erythrocytes (red blood cells) at the bottom of the centrifuge tube. Serum separation tubes (SSTs) are tubes used in phlebotomy containing a silicone gel; when centrifuged the silicone gel forms a layer on top of the buffy coat, allowing the blood serum to be removed more effectively for testing and related purposes. As an alternative to energy-consuming centrifugation, more energy-efficient technologies have been studied, such as ultrasonic fractionation. Plasma protein fractionation Plasma proteins are separated by using the inherent differences of each protein. Fractionation involves changing the conditions of the pooled plasma (e.g., the temperature or the acidity) so that proteins that are normally dissolved in the plasma fluid become insoluble, forming large clumps, called precipitate. The insoluble protein can be collected by centrifugation. One very effective way of carrying out this process is the addition of alcohol to the plasma pool while simultaneously cooling the pool. This process is sometimes called cold alcohol fractionation or ethanol fractionation. It was first described by, and bears the eponym of, Dr Edwin J. Cohn. This procedure is carried out in a series of steps so that a single pool of plasma yields several different protein products, such as albumin and immune globulin. Human serum albumin prepared by this process is used in some vaccines, for treating burn victims, and other medical applications. See also Blood plasma fractionation References Blood Fractionation Medical technology
Blood fractionation
[ "Chemistry", "Biology" ]
411
[ "Fractionation", "Medical technology", "Separation processes" ]
4,932,111
https://en.wikipedia.org/wiki/Capacitor
In electrical engineering, a capacitor is a device that stores electrical energy by accumulating electric charges on two closely spaced surfaces that are insulated from each other. The capacitor was originally known as the condenser, a term still encountered in a few compound names, such as the condenser microphone. It is a passive electronic component with two terminals. The utility of a capacitor depends on its capacitance. While some capacitance exists between any two electrical conductors in proximity in a circuit, a capacitor is a component designed specifically to add capacitance to some part of the circuit. The physical form and construction of practical capacitors vary widely and many types of capacitor are in common use. Most capacitors contain at least two electrical conductors, often in the form of metallic plates or surfaces separated by a dielectric medium. A conductor may be a foil, thin film, sintered bead of metal, or an electrolyte. The nonconducting dielectric acts to increase the capacitor's charge capacity. Materials commonly used as dielectrics include glass, ceramic, plastic film, paper, mica, air, and oxide layers. When an electric potential difference (a voltage) is applied across the terminals of a capacitor, for example when a capacitor is connected across a battery, an electric field develops across the dielectric, causing a net positive charge to collect on one plate and net negative charge to collect on the other plate. No current actually flows through a perfect dielectric. However, there is a flow of charge through the source circuit. If the condition is maintained sufficiently long, the current through the source circuit ceases. If a time-varying voltage is applied across the leads of the capacitor, the source experiences an ongoing current due to the charging and discharging cycles of the capacitor. Capacitors are widely used as parts of electrical circuits in many common electrical devices. Unlike a resistor, an ideal capacitor does not dissipate energy, although real-life capacitors do dissipate a small amount (see Non-ideal behavior). The earliest forms of capacitors were created in the 1740s, when European experimenters discovered that electric charge could be stored in water-filled glass jars that came to be known as Leyden jars. Today, capacitors are widely used in electronic circuits for blocking direct current while allowing alternating current to pass. In analog filter networks, they smooth the output of power supplies. In resonant circuits they tune radios to particular frequencies. In electric power transmission systems, they stabilize voltage and power flow. The property of energy storage in capacitors was exploited as dynamic memory in early digital computers, and still is in modern DRAM. History Natural capacitors have existed since prehistoric times. The most common example of natural capacitance are the static charges accumulated between clouds in the sky and the surface of the Earth, where the air between them serves as the dielectric. This results in bolts of lightning when the breakdown voltage of the air is exceeded. In October 1745, Ewald Georg von Kleist of Pomerania, Germany, found that charge could be stored by connecting a high-voltage electrostatic generator by a wire to a volume of water in a hand-held glass jar. Von Kleist's hand and the water acted as conductors and the jar as a dielectric (although details of the mechanism were incorrectly identified at the time). 
Von Kleist found that touching the wire resulted in a powerful spark, much more painful than that obtained from an electrostatic machine. The following year, the Dutch physicist Pieter van Musschenbroek invented a similar capacitor, which was named the Leyden jar, after the University of Leiden where he worked. He also was impressed by the power of the shock he received, writing, "I would not take a second shock for the kingdom of France." Daniel Gralath was the first to combine several jars in parallel to increase the charge storage capacity. Benjamin Franklin investigated the Leyden jar and came to the conclusion that the charge was stored on the glass, not in the water as others had assumed. He also adopted the term "battery", (denoting the increase of power with a row of similar units as in a battery of cannon), subsequently applied to clusters of electrochemical cells. In 1747, Leyden jars were made by coating the inside and outside of jars with metal foil, leaving a space at the mouth to prevent arcing between the foils. The earliest unit of capacitance was the jar, equivalent to about 1.11 nanofarads. Leyden jars or more powerful devices employing flat glass plates alternating with foil conductors were used exclusively up until about 1900, when the invention of wireless (radio) created a demand for standard capacitors, and the steady move to higher frequencies required capacitors with lower inductance. More compact construction methods began to be used, such as a flexible dielectric sheet (like oiled paper) sandwiched between sheets of metal foil, rolled or folded into a small package. Early capacitors were known as condensers, a term that is still occasionally used today, particularly in high power applications, such as automotive systems. The term condensatore was used by Alessandro Volta in 1780 to refer to a device, similar to his electrophorus, he developed to measure electricity, and translated in 1782 as condenser, where the name referred to the device's ability to store a higher density of electric charge than was possible with an isolated conductor. The term became deprecated because of the ambiguous meaning of steam condenser, with capacitor becoming the recommended term in the UK from 1926, while the change occurred considerably later in the United States. Since the beginning of the study of electricity, non-conductive materials like glass, porcelain, paper and mica have been used as insulators. Decades later, these materials were also well-suited for use as the dielectric for the first capacitors. Paper capacitors, made by sandwiching a strip of impregnated paper between strips of metal and rolling the result into a cylinder, were commonly used in the late 19th century; their manufacture started in 1876, and they were used from the early 20th century as decoupling capacitors in telephony. Porcelain was used in the first ceramic capacitors. In the early years of Marconi's wireless transmitting apparatus, porcelain capacitors were used for high voltage and high frequency application in the transmitters. On the receiver side, smaller mica capacitors were used for resonant circuits. Mica capacitors were invented in 1909 by William Dubilier. Prior to World War II, mica was the most common dielectric for capacitors in the United States. Charles Pollak (born Karol Pollak), the inventor of the first electrolytic capacitors, found out that the oxide layer on an aluminum anode remained stable in a neutral or alkaline electrolyte, even when the power was switched off. 
In 1896 he was granted U.S. Patent No. 672,913 for an "Electric liquid capacitor with aluminum electrodes". Solid electrolyte tantalum capacitors were invented by Bell Laboratories in the early 1950s as a miniaturized and more reliable low-voltage support capacitor to complement their newly invented transistor. With the development of plastic materials by organic chemists during the Second World War, the capacitor industry began to replace paper with thinner polymer films. One very early development in film capacitors was described in British Patent 587,953 in 1944. Electric double-layer capacitors (now supercapacitors) were invented in 1957 when H. Becker developed a "Low voltage electrolytic capacitor with porous carbon electrodes". He believed that the energy was stored as a charge in the carbon pores used in his capacitor as in the pores of the etched foils of electrolytic capacitors. Because the double layer mechanism was not known by him at the time, he wrote in the patent: "It is not known exactly what is taking place in the component if it is used for energy storage, but it leads to an extremely high capacity." The MOS capacitor was later widely adopted as a storage capacitor in memory chips, and as the basic building block of the charge-coupled device (CCD) in image sensor technology. In 1966, Dr. Robert Dennard invented modern DRAM architecture, combining a single MOS transistor per capacitor. Theory of operation Overview A capacitor consists of two conductors separated by a non-conductive region. The non-conductive region can either be a vacuum or an electrical insulator material known as a dielectric. Examples of dielectric media are glass, air, paper, plastic, ceramic, and even a semiconductor depletion region chemically identical to the conductors. From Coulomb's law a charge on one conductor will exert a force on the charge carriers within the other conductor, attracting opposite polarity charge and repelling like polarity charges, thus an opposite polarity charge will be induced on the surface of the other conductor. The conductors thus hold equal and opposite charges on their facing surfaces, and the dielectric develops an electric field. An ideal capacitor is characterized by a constant capacitance C, in farads in the SI system of units, defined as the ratio of the positive or negative charge Q on each conductor to the voltage V between them: A capacitance of one farad (F) means that one coulomb of charge on each conductor causes a voltage of one volt across the device. Because the conductors (or plates) are close together, the opposite charges on the conductors attract one another due to their electric fields, allowing the capacitor to store more charge for a given voltage than when the conductors are separated, yielding a larger capacitance. In practical devices, charge build-up sometimes affects the capacitor mechanically, causing its capacitance to vary. In this case, capacitance is defined in terms of incremental changes: Hydraulic analogy In the hydraulic analogy, voltage is analogous to water pressure and electrical current through a wire is analogous to water flow through a pipe. A capacitor is like an elastic diaphragm within the pipe. Although water cannot pass through the diaphragm, it moves as the diaphragm stretches or un-stretches. Capacitance is analogous to diaphragm elasticity. 
In the same way that the ratio of charge differential to voltage would be greater for a larger capacitance value (), the ratio of water displacement to pressure would be greater for a diaphragm that flexes more readily. In an AC circuit, a capacitor behaves like a diaphragm in a pipe, allowing the charge to move on both sides of the dielectric while no electrons actually pass through. For DC circuits, a capacitor is analogous to a hydraulic accumulator, storing the energy until pressure is released. Similarly, they can be used to smooth the flow of electricity in rectified DC circuits in the same way an accumulator damps surges from a hydraulic pump. Charged capacitors and stretched diaphragms both store potential energy. The more a capacitor is charged, the higher the voltage across the plates (). Likewise, the greater the displaced water volume, the greater the elastic potential energy. Electrical current affects the charge differential across a capacitor just as the flow of water affects the volume differential across a diaphragm. Just as capacitors experience dielectric breakdown when subjected to high voltages, diaphragms burst under extreme pressures. Just as capacitors block DC while passing AC, diaphragms displace no water unless there is a change in pressure. Circuit equivalence at short-time limit and long-time limit In a circuit, a capacitor can behave differently at different time instants. However, it is usually easy to think about the short-time limit and long-time limit: In the long-time limit, after the charging/discharging current has saturated the capacitor, no current would come into (or get out of) either side of the capacitor; Therefore, the long-time equivalence of capacitor is an open circuit. In the short-time limit, if the capacitor starts with a certain voltage V, since the voltage drop on the capacitor is known at this instant, we can replace it with an ideal voltage source of voltage V. Specifically, if V=0 (capacitor is uncharged), the short-time equivalence of a capacitor is a short circuit. Parallel-plate capacitor The simplest model of a capacitor consists of two thin parallel conductive plates each with an area of separated by a uniform gap of thickness filled with a dielectric of permittivity . It is assumed the gap is much smaller than the dimensions of the plates. This model applies well to many practical capacitors which are constructed of metal sheets separated by a thin layer of insulating dielectric, since manufacturers try to keep the dielectric very uniform in thickness to avoid thin spots which can cause failure of the capacitor. Since the separation between the plates is uniform over the plate area, the electric field between the plates is constant, and directed perpendicularly to the plate surface, except for an area near the edges of the plates where the field decreases because the electric field lines "bulge" out of the sides of the capacitor. This "fringing field" area is approximately the same width as the plate separation, , and assuming is small compared to the plate dimensions, it is small enough to be ignored. Therefore, if a charge of is placed on one plate and on the other plate (the situation for unevenly charged plates is discussed below), the charge on each plate will be spread evenly in a surface charge layer of constant charge density coulombs per square meter, on the inside surface of each plate. From Gauss's law the magnitude of the electric field between the plates is . 
The voltage (difference) between the plates is defined as the line integral of the electric field over a line (in the z-direction) from one plate to another. The capacitance is defined as the ratio of the charge to this voltage; substituting the expressions above into this definition (see the sketch after this passage) shows that, in a capacitor, the highest capacitance is achieved with a high permittivity dielectric material, large plate area, and small separation between the plates. Since the area of the plates increases with the square of the linear dimensions and the separation increases linearly, the capacitance scales with the linear dimension of a capacitor, or as the cube root of the volume. A parallel plate capacitor can only store a finite amount of energy before dielectric breakdown occurs. The capacitor's dielectric material has a dielectric strength Ud which sets the capacitor's breakdown voltage at the product of Ud and the plate separation. The maximum energy that the capacitor can store is therefore a function of dielectric volume, permittivity, and dielectric strength. Changing the plate area and the separation between the plates while maintaining the same volume causes no change of the maximum amount of energy that the capacitor can store, so long as the distance between plates remains much smaller than both the length and width of the plates. In addition, these equations assume that the electric field is entirely concentrated in the dielectric between the plates. In reality there are fringing fields outside the dielectric, for example between the sides of the capacitor plates, which increase the effective capacitance of the capacitor. This is sometimes called parasitic capacitance. For some simple capacitor geometries this additional capacitance term can be calculated analytically. It becomes negligibly small when the ratios of plate width to separation and length to separation are large. For unevenly charged plates: If one plate carries one charge while the other carries a different charge, and if both plates are separated from other materials in the environment, then the inner surfaces of the two plates carry equal and opposite charge, and the voltage between the plates is set by that inner-surface charge. The charges on the outer surfaces of the two plates are equal to each other, but those charges do not affect the voltage between the plates. If one plate is charged while the other plate is connected to ground, then the inner surfaces of the two plates again carry equal and opposite charge, the outer surfaces of both plates carry zero charge, and the voltage between the plates is again set by the inner-surface charge. Interleaved capacitor For a capacitor with a number of interleaved plates, the total capacitance is the capacitance of a single plate pair multiplied by the number of plates minus one. As shown in the figure on the right, the interleaved plates can be seen as parallel plates connected to each other. Every pair of adjacent plates acts as a separate capacitor; the number of pairs is always one less than the number of plates, hence the multiplier. Energy stored in a capacitor To increase the charge and voltage on a capacitor, work must be done by an external power source to move charge from the negative to the positive plate against the opposing force of the electric field. If the voltage on the capacitor is V, the work required to move a small increment of charge dq from the negative to the positive plate is dW = V dq. The energy is stored in the increased electric field between the plates.
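A compact sketch of the parallel-plate relations the derivation above refers to, in the usual notation (A plate area, d plate separation, ε permittivity, σ = Q/A surface charge density, Ud dielectric strength):

E = \frac{\sigma}{\varepsilon} = \frac{Q}{\varepsilon A}, \qquad V = E\,d = \frac{Q\,d}{\varepsilon A}, \qquad C = \frac{Q}{V} = \frac{\varepsilon A}{d}

V_{\text{bd}} = U_{d}\, d, \qquad E_{\max} = \tfrac{1}{2} C V_{\text{bd}}^{2} = \tfrac{1}{2}\,\varepsilon\, U_{d}^{2}\,(A\,d)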
The total energy stored in a capacitor (expressed in joules) is equal to the total work done in establishing the electric field from an uncharged state. where is the charge stored in the capacitor, is the voltage across the capacitor, and is the capacitance. This potential energy will remain in the capacitor until the charge is removed. If charge is allowed to move back from the positive to the negative plate, for example by connecting a circuit with resistance between the plates, the charge moving under the influence of the electric field will do work on the external circuit. If the gap between the capacitor plates is constant, as in the parallel plate model above, the electric field between the plates will be uniform (neglecting fringing fields) and will have a constant value . In this case the stored energy can be calculated from the electric field strength The last formula above is equal to the energy density per unit volume in the electric field multiplied by the volume of field between the plates, confirming that the energy in the capacitor is stored in its electric field. Current–voltage relation The current I(t) through any component in an electric circuit is defined as the rate of flow of a charge Q(t) passing through it. Actual charges – electrons – cannot pass through the dielectric of an ideal capacitor. Rather, one electron accumulates on the negative plate for each one that leaves the positive plate, resulting in an electron depletion and consequent positive charge on one electrode that is equal and opposite to the accumulated negative charge on the other. Thus the charge on the electrodes is equal to the integral of the current as well as proportional to the voltage, as discussed above. As with any antiderivative, a constant of integration is added to represent the initial voltage V(t0). This is the integral form of the capacitor equation: Taking the derivative of this and multiplying by C yields the derivative form: for independent of time, voltage and electric charge. The dual of the capacitor is the inductor, which stores energy in a magnetic field rather than an electric field. Its current-voltage relation is obtained by exchanging current and voltage in the capacitor equations and replacing with the inductance . RC circuits A series circuit containing only a resistor, a capacitor, a switch and a constant DC source of voltage is known as a charging circuit. If the capacitor is initially uncharged while the switch is open, and the switch is closed at , it follows from Kirchhoff's voltage law that Taking the derivative and multiplying by C, gives a first-order differential equation: At , the voltage across the capacitor is zero and the voltage across the resistor is V0. The initial current is then . With this assumption, solving the differential equation yields where is the time constant of the system. As the capacitor reaches equilibrium with the source voltage, the voltages across the resistor and the current through the entire circuit decay exponentially. In the case of a discharging capacitor, the capacitor's initial voltage () replaces . The equations become AC circuits Impedance, the vector sum of reactance and resistance, describes the phase difference and the ratio of amplitudes between sinusoidally varying voltage and sinusoidally varying current at a given frequency. Fourier analysis allows any signal to be constructed from a spectrum of frequencies, whence the circuit's reaction to the various frequencies may be found. 
The reactance and impedance of a capacitor are respectively X = −1/(ωC) and Z = 1/(jωC) = −j/(ωC), where j is the imaginary unit and ω is the angular frequency of the sinusoidal signal. The −j phase indicates that the AC voltage lags the AC current by 90°: the positive current phase corresponds to increasing voltage as the capacitor charges; zero current corresponds to instantaneous constant voltage, etc. Impedance decreases with increasing capacitance and increasing frequency. This implies that a higher-frequency signal or a larger capacitor results in a lower voltage amplitude per current amplitude – an AC "short circuit" or AC coupling. Conversely, for very low frequencies, the reactance is high, so that a capacitor is nearly an open circuit in AC analysis – those frequencies have been "filtered out". Capacitors are different from resistors and inductors in that the impedance is inversely proportional to the defining characteristic; i.e., capacitance. A capacitor connected to an alternating voltage source has a displacement current flowing through it. In the case that the voltage source is V0cos(ωt), the displacement current can be expressed as I = C·dV/dt = −ωCV0·sin(ωt). At sin(ωt) = −1, the capacitor has a maximum (or peak) current whereby I0 = ωCV0. The ratio of peak voltage to peak current is due to capacitive reactance (denoted XC): XC = V0/I0 = 1/(ωC). XC approaches zero as ω approaches infinity. If XC approaches 0, the capacitor resembles a short wire that strongly passes current at high frequencies. XC approaches infinity as ω approaches zero. If XC approaches infinity, the capacitor resembles an open circuit that poorly passes low frequencies. The current of the capacitor may be expressed in the form of cosines to better compare with the voltage of the source: I = I0·cos(ωt + 90°) = ωCV0·cos(ωt + π/2). In this situation, the current is out of phase with the voltage by +π/2 radians or +90 degrees, i.e. the current leads the voltage by 90°.

Laplace circuit analysis (s-domain)

When using the Laplace transform in circuit analysis, the impedance of an ideal capacitor with no initial charge is represented in the s domain by Z(s) = 1/(sC), where C is the capacitance and s is the complex frequency.

Circuit analysis

Capacitors in parallel: Capacitors in a parallel configuration each have the same applied voltage. Their capacitances add up: Ceq = C1 + C2 + … + Cn. Charge is apportioned among them by size. Using the schematic diagram to visualize parallel plates, it is apparent that each capacitor contributes to the total surface area.

Capacitors in series: Connected in series, the capacitances combine by reciprocals, 1/Ceq = 1/C1 + 1/C2 + … + 1/Cn; the schematic diagram reveals that the separation distance, not the plate area, adds up. The capacitors each store instantaneous charge build-up equal to that of every other capacitor in the series. The total voltage difference from end to end is apportioned to each capacitor according to the inverse of its capacitance. The entire series acts as a capacitor smaller than any of its components. Capacitors are combined in series to achieve a higher working voltage, for example for smoothing a high voltage power supply. The voltage ratings, which are based on plate separation, add up, if capacitance and leakage currents for each capacitor are identical. In such an application, on occasion, series strings are connected in parallel, forming a matrix. The goal is to maximize the energy storage of the network without overloading any capacitor. For high-energy storage with capacitors in series, some safety considerations must be applied to ensure one capacitor failing and leaking current does not apply too much voltage to the other series capacitors.
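As a numerical illustration of the capacitive reactance XC = 1/(ωC) and of the parallel and series combination rules just described, a minimal Python sketch (the 100 nF part value is an arbitrary assumption):

```python
import math

def reactance(c_farad, freq_hz):
    """Magnitude of capacitive reactance X_C = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * freq_hz * c_farad)

def parallel(*caps):
    """Capacitances in parallel simply add."""
    return sum(caps)

def series(*caps):
    """Capacitances in series combine by reciprocals."""
    return 1.0 / sum(1.0 / c for c in caps)

if __name__ == "__main__":
    C = 100e-9                                   # hypothetical 100 nF part
    for f in (50, 1e3, 1e6):
        print(f"f = {f:>9.0f} Hz  ->  |X_C| = {reactance(C, f):10.2f} ohm")
    print("two 100 nF in parallel:", parallel(100e-9, 100e-9))   # 200 nF
    print("two 100 nF in series:  ", series(100e-9, 100e-9))     # 50 nF
```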
Series connection is also sometimes used to adapt polarized electrolytic capacitors for bipolar AC use. Voltage distribution in parallel-to-series networks. To model the distribution of voltages from a single charged capacitor connected in parallel to a chain of capacitors in series : Note: This is only correct if all capacitance values are equal. The power transferred in this arrangement is: Non-ideal behavior In practice, capacitors deviate from the ideal capacitor equation in several aspects. Some of these, such as leakage current and parasitic effects are linear, or can be analyzed as nearly linear, and can be accounted for by adding virtual components to form an equivalent circuit. The usual methods of network analysis can then be applied. In other cases, such as with breakdown voltage, the effect is non-linear and ordinary (normal, e.g., linear) network analysis cannot be used, the effect must be considered separately. Yet another group of artifacts may exist, including temperature dependence, that may be linear but invalidates the assumption in the analysis that capacitance is a constant. Finally, combined parasitic effects such as inherent inductance, resistance, or dielectric losses can exhibit non-uniform behavior at varying frequencies of operation. Breakdown voltage Above a particular electric field strength, known as the dielectric strength Eds, the dielectric in a capacitor becomes conductive. The voltage at which this occurs is called the breakdown voltage of the device, and is given by the product of the dielectric strength and the separation between the conductors, The maximum energy that can be stored safely in a capacitor is limited by the breakdown voltage. Exceeding this voltage can result in a short circuit between the plates, which can often cause permanent damage to the dielectric, plates, or both. Due to the scaling of capacitance and breakdown voltage with dielectric thickness, all capacitors made with a particular dielectric have approximately equal maximum energy density, to the extent that the dielectric dominates their volume. For air dielectric capacitors the breakdown field strength is of the order 2–5 MV/m (or kV/mm); for mica the breakdown is 100–300 MV/m; for oil, 15–25 MV/m; it can be much less when other materials are used for the dielectric. The dielectric is used in very thin layers and so absolute breakdown voltage of capacitors is limited. Typical ratings for capacitors used for general electronics applications range from a few volts to 1 kV. As the voltage increases, the dielectric must be thicker, making high-voltage capacitors larger per capacitance than those rated for lower voltages. The breakdown voltage is critically affected by factors such as the geometry of the capacitor conductive parts; sharp edges or points increase the electric field strength at that point and can lead to a local breakdown. Once this starts to happen, the breakdown quickly tracks through the dielectric until it reaches the opposite plate, leaving carbon behind and causing a short (or relatively low resistance) circuit. The results can be explosive, as the short in the capacitor draws current from the surrounding circuitry and dissipates the energy. However, in capacitors with particular dielectrics and thin metal electrodes, shorts are not formed after breakdown. It happens because a metal melts or evaporates in a breakdown vicinity, isolating it from the rest of the capacitor. 
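The relation between breakdown voltage, dielectric strength and separation, Vbd = Eds·d, can be checked with rough numbers. The sketch below uses the lower ends of the dielectric-strength ranges quoted above; the layer thicknesses are assumed purely for illustration.

```python
# Breakdown voltage Vbd = Eds * d for a few dielectrics.
# Field strengths are the lower bounds of the ranges quoted in the text;
# the layer thicknesses are arbitrary illustrative assumptions.

DIELECTRIC_STRENGTH = {   # V/m
    "air":  2e6,
    "oil": 15e6,
    "mica": 100e6,
}

def breakdown_voltage(dielectric, thickness_m):
    return DIELECTRIC_STRENGTH[dielectric] * thickness_m

if __name__ == "__main__":
    for name, d in (("air", 1e-3), ("oil", 50e-6), ("mica", 25e-6)):
        print(f"{name:4s} {d*1e6:7.1f} um -> Vbd ~ {breakdown_voltage(name, d):8.0f} V")
```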
The usual breakdown route is that the field strength becomes large enough to pull electrons in the dielectric from their atoms thus causing conduction. Other scenarios are possible, such as impurities in the dielectric, and, if the dielectric is of a crystalline nature, imperfections in the crystal structure can result in an avalanche breakdown as seen in semi-conductor devices. Breakdown voltage is also affected by pressure, humidity and temperature. Equivalent circuit An ideal capacitor only stores and releases electrical energy, without dissipation. In practice, capacitors have imperfections within the capacitor's materials that result in the following parasitic components: , the equivalent series inductance, due to the leads. This is usually significant only at relatively high frequencies. Two resistances that add a real-valued component to the total impedance, which wastes power: , a small series resistance in the leads. Becomes more relevant as frequency increases. , a small conductance (or reciprocally, a large resistance) in parallel with the capacitance, to account for imperfect dielectric material. This causes a small leakage current across the dielectric (see ) that slowly discharges the capacitor over time. This conductance dominates the total resistance at very low frequencies. Its value varies greatly depending on the capacitor material and quality. Simplified RLC series model As frequency increases, the capacitive impedance (a negative reactance) reduces, so the dielectric's conductance becomes less important and the series components become more significant. Thus, a simplified RLC series model valid for a large frequency range simply treats the capacitor as being in series with an equivalent series inductance and a frequency-dependent equivalent series resistance , which varies little with frequency. Unlike the previous model, this model is not valid at DC and very low frequencies where is relevant. Inductive reactance increases with frequency. Because its sign is positive, it counteracts the capacitance. At the RLC circuit's natural frequency , the inductance perfectly cancels the capacitance, so total reactance is zero. Since the total impedance at is just the real-value of , average power dissipation reaches its maximum of , where V is the root mean square (RMS) voltage across the capacitor. At even higher frequencies, the inductive impedance dominates, so the capacitor undesirably behaves instead like an inductor. High-frequency engineering involves accounting for the inductance of all connections and components. Q factor For a simplified model of a capacitor as an ideal capacitor in series with an equivalent series resistance , the capacitor's quality factor (or Q) is the ratio of the magnitude of its capacitive reactance to its resistance at a given frequency : The Q factor is a measure of its efficiency: the higher the Q factor of the capacitor, the closer it approaches the behavior of an ideal capacitor. Dissipation factor is its reciprocal. Ripple current Ripple current is the AC component of an applied source (often a switched-mode power supply) whose frequency may be constant or varying. Ripple current causes heat to be generated within the capacitor due to the dielectric losses caused by the changing field strength together with the current flow across the slightly resistive supply lines or the electrolyte in the capacitor. The equivalent series resistance (ESR) is the amount of internal series resistance one would add to a perfect capacitor to model this. 
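A minimal sketch of the simplified RLC series model and of the Q factor described above. The ESR, equivalent series inductance and capacitance values are assumptions chosen to show the shape of the impedance curve and the self-resonant frequency 1/(2π√(LC)), not data for any particular part.

```python
import math

def series_model_impedance(freq_hz, c_farad, esr_ohm, esl_henry):
    """Complex impedance of the simplified ESR + ESL + C series model."""
    w = 2.0 * math.pi * freq_hz
    return complex(esr_ohm, w * esl_henry - 1.0 / (w * c_farad))

def self_resonant_frequency(c_farad, esl_henry):
    """Frequency at which the inductive and capacitive reactances cancel."""
    return 1.0 / (2.0 * math.pi * math.sqrt(esl_henry * c_farad))

def q_factor(freq_hz, c_farad, esr_ohm):
    """Q = |X_C| / ESR at the given frequency."""
    return 1.0 / (2.0 * math.pi * freq_hz * c_farad * esr_ohm)

if __name__ == "__main__":
    C, ESR, ESL = 10e-6, 0.05, 5e-9       # hypothetical 10 uF, 50 mOhm, 5 nH
    f0 = self_resonant_frequency(C, ESL)
    print(f"self-resonance ~ {f0/1e3:.0f} kHz")
    for f in (1e3, f0, 10e6):
        z = series_model_impedance(f, C, ESR, ESL)
        print(f"f = {f:>12.0f} Hz  |Z| = {abs(z):.4f} ohm")
    print(f"Q at 1 kHz = {q_factor(1e3, C, ESR):.1f}")
```

Below self-resonance the part behaves capacitively, at self-resonance the impedance bottoms out near the ESR, and above it the inductive term dominates, as described in the text.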
Some types of capacitors, primarily tantalum and aluminum electrolytic capacitors, as well as some film capacitors have a specified rating value for maximum ripple current. Tantalum electrolytic capacitors with solid manganese dioxide electrolyte are limited by ripple current and generally have the highest ESR ratings in the capacitor family. Exceeding their ripple limits can lead to shorts and burning parts. Aluminum electrolytic capacitors, the most common type of electrolytic, suffer a shortening of life expectancy at higher ripple currents. If ripple current exceeds the rated value of the capacitor, it tends to result in explosive failure. Ceramic capacitors generally have no ripple current limitation and have some of the lowest ESR ratings. Film capacitors have very low ESR ratings but exceeding rated ripple current may cause degradation failures. Capacitance instability The capacitance of certain capacitors decreases as the component ages. In ceramic capacitors, this is caused by degradation of the dielectric. The type of dielectric, ambient operating and storage temperatures are the most significant aging factors, while the operating voltage usually has a smaller effect, i.e., usual capacitor design is to minimize voltage coefficient. The aging process may be reversed by heating the component above the Curie point. Aging is fastest near the beginning of life of the component, and the device stabilizes over time. Electrolytic capacitors age as the electrolyte evaporates. In contrast with ceramic capacitors, this occurs towards the end of life of the component. Temperature dependence of capacitance is usually expressed in parts per million (ppm) per °C. It can usually be taken as a broadly linear function but can be noticeably non-linear at the temperature extremes. The temperature coefficient may be positive or negative, depending mostly on the dielectric material. Some, designated C0G/NP0, but called NPO, have a somewhat negative coefficient at one temperature, positive at another, and zero in between. Such components may be specified for temperature-critical circuits. Capacitors, especially ceramic capacitors, and older designs such as paper capacitors, can absorb sound waves resulting in a microphonic effect. Vibration moves the plates, causing the capacitance to vary, in turn inducing AC current. Some dielectrics also generate piezoelectricity. The resulting interference is especially problematic in audio applications, potentially causing feedback or unintended recording. In the reverse microphonic effect, the varying electric field between the capacitor plates exerts a physical force, moving them as a speaker. This can generate audible sound, but drains energy and stresses the dielectric and the electrolyte, if any. Current and voltage reversal Current reversal occurs when the current changes direction. Voltage reversal is the change of polarity in a circuit. Reversal is generally described as the percentage of the maximum rated voltage that reverses polarity. In DC circuits, this is usually less than 100%, often in the range of 0 to 90%, whereas AC circuits experience 100% reversal. In DC circuits and pulsed circuits, current and voltage reversal are affected by the damping of the system. Voltage reversal is encountered in RLC circuits that are underdamped. The current and voltage reverse direction, forming a harmonic oscillator between the inductance and capacitance. 
The current and voltage tends to oscillate and may reverse direction several times, with each peak being lower than the previous, until the system reaches an equilibrium. This is often referred to as ringing. In comparison, critically damped or overdamped systems usually do not experience a voltage reversal. Reversal is also encountered in AC circuits, where the peak current is equal in each direction. For maximum life, capacitors usually need to be able to handle the maximum amount of reversal that a system may experience. An AC circuit experiences 100% voltage reversal, while underdamped DC circuits experience less than 100%. Reversal creates excess electric fields in the dielectric, causes excess heating of both the dielectric and the conductors, and can dramatically shorten the life expectancy of the capacitor. Reversal ratings often affect the design considerations for the capacitor, from the choice of dielectric materials and voltage ratings to the types of internal connections used. Dielectric absorption Capacitors made with any type of dielectric material show some level of "dielectric absorption" or "soakage". On discharging a capacitor and disconnecting it, after a short time it may develop a voltage due to hysteresis in the dielectric. This effect is objectionable in applications such as precision sample and hold circuits or timing circuits. The level of absorption depends on many factors, from design considerations to charging time, since the absorption is a time-dependent process. However, the primary factor is the type of dielectric material. Capacitors such as tantalum electrolytic or polysulfone film exhibit relatively high absorption, while polystyrene or Teflon allow very small levels of absorption. In some capacitors where dangerous voltages and energies exist, such as in flashtubes, television sets, microwave ovens and defibrillators, the dielectric absorption can recharge the capacitor to hazardous voltages after it has been shorted or discharged. Any capacitor containing over 10 joules of energy is generally considered hazardous, while 50 joules or higher is potentially lethal. A capacitor may regain anywhere from 0.01 to 20% of its original charge over a period of several minutes, allowing a seemingly safe capacitor to become surprisingly dangerous. Leakage No material is a perfect insulator, thus all dielectrics allow some small level of current to leak through, which can be measured with a megohmmeter.<ref>Robinson's Manual of Radio Telegraphy and Telephony by S.S. Robinson -- US Naval Institute 1924 Pg. 170</ref> Leakage is equivalent to a resistor in parallel with the capacitor. Constant exposure to factors such as heat, mechanical stress, or humidity can cause the dielectric to deteriorate resulting in excessive leakage, a problem often seen in older vacuum tube circuits, particularly where oiled paper and foil capacitors were used. In many vacuum tube circuits, interstage coupling capacitors are used to conduct a varying signal from the plate of one tube to the grid circuit of the next stage. A leaky capacitor can cause the grid circuit voltage to be raised from its normal bias setting, causing excessive current or signal distortion in the downstream tube. In power amplifiers this can cause the plates to glow red, or current limiting resistors to overheat, even fail. 
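Since leakage is equivalent to a resistor in parallel with the capacitor, self-discharge follows a simple exponential decay with time constant Rleak·C. The sketch below is illustrative only; the 470 μF value and 1 MΩ leakage resistance are assumptions, not typical figures for any specific capacitor type.

```python
import math

def self_discharge_voltage(t_seconds, v0, c_farad, r_leak_ohm):
    """Voltage of an isolated capacitor discharging through its own leakage resistance."""
    return v0 * math.exp(-t_seconds / (r_leak_ohm * c_farad))

if __name__ == "__main__":
    C, R_LEAK, V0 = 470e-6, 1e6, 25.0   # hypothetical 470 uF with 1 MOhm leakage path
    for minutes in (0, 5, 15, 60):
        v = self_discharge_voltage(minutes * 60, V0, C, R_LEAK)
        print(f"after {minutes:3d} min: {v:6.3f} V")
```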
Similar considerations apply to solid-state (transistor) amplifiers, but owing to lower heat production and the use of modern polyester dielectric barriers, this once-common problem has become relatively rare.

Electrolytic failure from disuse

Aluminum electrolytic capacitors are conditioned when manufactured by applying a voltage sufficient to initiate the proper internal chemical state. This state is maintained by regular use of the equipment. If a system using electrolytic capacitors is unused for a long period of time it can lose its conditioning. Sometimes such capacitors fail with a short circuit when next operated.

Lifespan

All capacitors have varying lifespans, depending upon their construction, operational conditions, and environmental conditions. Solid-state ceramic capacitors generally have very long lives under normal use, with little dependence on factors such as vibration or ambient temperature, but factors like humidity, mechanical stress, and fatigue play a primary role in their failure. Failure modes may differ. Some capacitors may experience a gradual loss of capacitance, increased leakage or an increase in equivalent series resistance (ESR), while others may fail suddenly or even catastrophically. For example, metal-film capacitors are more prone to damage from stress and humidity, but will self-heal when a breakdown in the dielectric occurs. The formation of a glow discharge at the point of failure prevents arcing by vaporizing the metallic film in that spot, neutralizing any short circuit with minimal loss in capacitance. When enough pinholes accumulate in the film, a total failure occurs in a metal-film capacitor, generally happening suddenly without warning.

Electrolytic capacitors generally have the shortest lifespans. They are affected very little by vibration or humidity, but factors such as ambient and operational temperatures play a large role in their failure, which occurs gradually as an increase in ESR (up to 300%) and as much as a 20% decrease in capacitance. The capacitors contain electrolytes which will eventually diffuse through the seals and evaporate. An increase in temperature also increases internal pressure, and increases the reaction rate of the chemicals. Thus, the life of an electrolytic capacitor is generally defined by a modification of the Arrhenius equation, which is used to determine chemical-reaction rates. Manufacturers often use this relation to supply an expected lifespan, in hours, for electrolytic capacitors when used at their designed operating temperature, which is affected by ambient temperature, ESR, and ripple current. However, these ideal conditions may not exist in every use. The rule of thumb for predicting lifespan under different conditions of use is Lx = L0 × 2^((T0 − TA)/10). This says that the capacitor's life decreases by half for every 10 degrees Celsius that the temperature is increased, where: L0 is the rated life under rated conditions, e.g. 2000 hours; T0 is the rated maximum operational temperature; TA is the average operational temperature; Lx is the expected lifespan under given conditions.

Capacitor types

Practical capacitors are available commercially in many different forms. The type of internal dielectric, the structure of the plates and the device packaging all strongly affect the characteristics of the capacitor, and its applications.
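The 10-degree halving rule quoted in the Lifespan section above, Lx = L0 × 2^((T0 − TA)/10), is easy to evaluate. A small sketch, reusing the 2000 hour rated-life example from the text and assuming a 105 °C rated part and some example operating temperatures:

```python
def expected_life_hours(rated_life_h, rated_temp_c, operating_temp_c):
    """Rule of thumb: life doubles for every 10 C below the rated temperature."""
    return rated_life_h * 2.0 ** ((rated_temp_c - operating_temp_c) / 10.0)

if __name__ == "__main__":
    L0, T0 = 2000.0, 105.0                  # e.g. a 2000 h @ 105 C rated electrolytic
    for t_op in (105, 85, 65, 45):
        print(f"{t_op:3d} C -> ~{expected_life_hours(L0, T0, t_op):8.0f} h")
```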
Values available range from very low (picofarad range; while arbitrarily low values are in principle possible, stray (parasitic) capacitance in any circuit is the limiting factor) to about 5 kF supercapacitors. Above approximately 1 microfarad electrolytic capacitors are usually used because of their small size and low cost compared with other types, unless their relatively poor stability, life and polarised nature make them unsuitable. Very high capacity supercapacitors use a porous carbon-based electrode material. Dielectric materials Most capacitors have a dielectric spacer, which increases their capacitance compared to air or a vacuum. In order to maximise the charge that a capacitor can hold, the dielectric material needs to have as high a permittivity as possible, while also having as high a breakdown voltage as possible. The dielectric also needs to have as low a loss with frequency as possible. However, low value capacitors are available with a high vacuum between their plates to allow extremely high voltage operation and low losses. Variable capacitors with their plates open to the atmosphere were commonly used in radio tuning circuits. Later designs use polymer foil dielectric between the moving and stationary plates, with no significant air space between the plates. Several solid dielectrics are available, including paper, plastic, glass, mica and ceramic. Paper was used extensively in older capacitors and offers relatively high voltage performance. However, paper absorbs moisture, and has been largely replaced by plastic film capacitors. Most of the plastic films now used offer better stability and ageing performance than such older dielectrics such as oiled paper, which makes them useful in timer circuits, although they may be limited to relatively low operating temperatures and frequencies, because of the limitations of the plastic film being used. Large plastic film capacitors are used extensively in suppression circuits, motor start circuits, and power-factor correction circuits. Ceramic capacitors are generally small, cheap and useful for high frequency applications, although their capacitance varies strongly with voltage and temperature and they age poorly. They can also suffer from the piezoelectric effect. Ceramic capacitors are broadly categorized as class 1 dielectrics, which have predictable variation of capacitance with temperature or class 2 dielectrics, which can operate at higher voltage. Modern multilayer ceramics are usually quite small, but some types have inherently wide value tolerances, microphonic issues, and are usually physically brittle. Glass and mica capacitors are extremely reliable, stable and tolerant to high temperatures and voltages, but are too expensive for most mainstream applications. Electrolytic capacitors and supercapacitors are used to store small and larger amounts of energy, respectively, ceramic capacitors are often used in resonators, and parasitic capacitance occurs in circuits wherever the simple conductor-insulator-conductor structure is formed unintentionally by the configuration of the circuit layout. Electrolytic capacitors use an aluminum or tantalum plate with an oxide dielectric layer. The second electrode is a liquid electrolyte, connected to the circuit by another foil plate. Electrolytic capacitors offer very high capacitance but suffer from poor tolerances, high instability, gradual loss of capacitance especially when subjected to heat, and high leakage current. 
Poor quality capacitors may leak electrolyte, which is harmful to printed circuit boards. The conductivity of the electrolyte drops at low temperatures, which increases equivalent series resistance. While widely used for power-supply conditioning, poor high-frequency characteristics make them unsuitable for many applications. Electrolytic capacitors suffer from self-degradation if unused for a period (around a year), and when full power is applied may short circuit, permanently damaging the capacitor and usually blowing a fuse or causing failure of rectifier diodes. For example, in older equipment, this may cause arcing in rectifier tubes. They can be restored before use by gradually applying the operating voltage, often performed on antique vacuum tube equipment over a period of thirty minutes by using a variable transformer to supply AC power. The use of this technique may be less satisfactory for some solid state equipment, which may be damaged by operation below its normal power range, requiring that the power supply first be isolated from the consuming circuits. Such remedies may not be applicable to modern high-frequency power supplies as these produce full output voltage even with reduced input. Tantalum capacitors offer better frequency and temperature characteristics than aluminum, but higher dielectric absorption and leakage. Polymer capacitors (OS-CON, OC-CON, KO, AO) use solid conductive polymer (or polymerized organic semiconductor) as electrolyte and offer longer life and lower ESR at higher cost than standard electrolytic capacitors. A feedthrough capacitor is a component that, while not serving as its main use, has capacitance and is used to conduct signals through a conductive sheet. Several other types of capacitor are available for specialist applications. Supercapacitors store large amounts of energy. Supercapacitors made from carbon aerogel, carbon nanotubes, or highly porous electrode materials, offer extremely high capacitance (up to 5 kF ) and can be used in some applications instead of rechargeable batteries. Alternating current capacitors are specifically designed to work on line (mains) voltage AC power circuits. They are commonly used in electric motor circuits and are often designed to handle large currents, so they tend to be physically large. They are usually ruggedly packaged, often in metal cases that can be easily grounded/earthed. They also are designed with direct current breakdown voltages of at least five times the maximum AC voltage. Voltage-dependent capacitors The dielectric constant for a number of very useful dielectrics changes as a function of the applied electrical field, for example ferroelectric materials, so the capacitance for these devices is more complex. For example, in charging such a capacitor the differential increase in voltage with charge is governed by: where the voltage dependence of capacitance, , suggests that the capacitance is a function of the electric field strength, which in a large area parallel plate device is given by . This field polarizes the dielectric, which polarization, in the case of a ferroelectric, is a nonlinear S-shaped function of the electric field, which, in the case of a large area parallel plate device, translates into a capacitance that is a nonlinear function of the voltage. Corresponding to the voltage-dependent capacitance, to charge the capacitor to voltage an integral relation is found: which agrees with only when does not depend on voltage . 
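The integral relation for a voltage-dependent capacitance, Q = ∫0^V C(v) dv, can be evaluated numerically. The C(V) curve below is a made-up, ferroelectric-like placeholder used only to show how the integrated charge differs from the naive C(V)·V product:

```python
def charge_from_cv(c_of_v, v_target, steps=10000):
    """Numerically integrate Q = integral_0^V C(v) dv (trapezoidal rule)."""
    dv = v_target / steps
    q = 0.0
    for i in range(steps):
        v_lo, v_hi = i * dv, (i + 1) * dv
        q += 0.5 * (c_of_v(v_lo) + c_of_v(v_hi)) * dv
    return q

def example_cv(v):
    """Hypothetical capacitance that falls off with bias voltage (illustration only)."""
    return 1e-6 / (1.0 + (v / 10.0) ** 2)      # 1 uF at 0 V, dropping with bias

if __name__ == "__main__":
    V = 20.0
    q = charge_from_cv(example_cv, V)
    print(f"Q by integration      = {q*1e6:.2f} uC")
    print(f"naive C(V)*V estimate = {example_cv(V)*V*1e6:.2f} uC")
```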
By the same token, the energy stored in the capacitor now is given by Integrating: where interchange of the order of integration is used. The nonlinear capacitance of a microscope probe scanned along a ferroelectric surface is used to study the domain structure of ferroelectric materials. Another example of voltage dependent capacitance occurs in semiconductor devices such as semiconductor diodes, where the voltage dependence stems not from a change in dielectric constant but in a voltage dependence of the spacing between the charges on the two sides of the capacitor. This effect is intentionally exploited in diode-like devices known as varicaps. Frequency-dependent capacitors If a capacitor is driven with a time-varying voltage that changes rapidly enough, at some frequency the polarization of the dielectric cannot follow the voltage. As an example of the origin of this mechanism, the internal microscopic dipoles contributing to the dielectric constant cannot move instantly, and so as frequency of an applied alternating voltage increases, the dipole response is limited and the dielectric constant diminishes. A changing dielectric constant with frequency is referred to as dielectric dispersion, and is governed by dielectric relaxation processes, such as Debye relaxation. Under transient conditions, the displacement field can be expressed as (see electric susceptibility): indicating the lag in response by the time dependence of , calculated in principle from an underlying microscopic analysis, for example, of the dipole behavior in the dielectric. See, for example, linear response function. The integral extends over the entire past history up to the present time. A Fourier transform in time then results in: where εr(ω) is now a complex function, with an imaginary part related to absorption of energy from the field by the medium. See permittivity. The capacitance, being proportional to the dielectric constant, also exhibits this frequency behavior. Fourier transforming Gauss's law with this form for displacement field: where is the imaginary unit, is the voltage component at angular frequency , is the real part of the current, called the conductance, and determines the imaginary part of the current and is the capacitance. is the complex impedance. When a parallel-plate capacitor is filled with a dielectric, the measurement of dielectric properties of the medium is based upon the relation: where a single prime denotes the real part and a double prime the imaginary part, is the complex impedance with the dielectric present, is the so-called complex capacitance with the dielectric present, and is the capacitance without the dielectric. (Measurement "without the dielectric" in principle means measurement in free space, an unattainable goal inasmuch as even the quantum vacuum is predicted to exhibit nonideal behavior, such as dichroism. For practical purposes, when measurement errors are taken into account, often a measurement in terrestrial vacuum, or simply a calculation of C0, is sufficiently accurate.) Using this measurement method, the dielectric constant may exhibit a resonance at certain frequencies corresponding to characteristic response frequencies (excitation energies) of contributors to the dielectric constant. These resonances are the basis for a number of experimental techniques for detecting defects. The conductance method measures absorption as a function of frequency. 
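The dielectric dispersion described above can be illustrated with the single-relaxation-time Debye model, εr(ω) = ε∞ + (εs − ε∞)/(1 + jωτ). This is a generic textbook model, used here with assumed (roughly water-like) parameters rather than values from the article; the complex capacitance is then εr(ω)·C0:

```python
import math

def debye_permittivity(freq_hz, eps_static, eps_inf, tau_s):
    """Complex relative permittivity for a single Debye relaxation."""
    w = 2.0 * math.pi * freq_hz
    return eps_inf + (eps_static - eps_inf) / complex(1.0, w * tau_s)

def complex_capacitance(freq_hz, c0_farad, eps_static, eps_inf, tau_s):
    """C*(w) = eps_r(w) * C0, where C0 is the capacitance without the dielectric."""
    return debye_permittivity(freq_hz, eps_static, eps_inf, tau_s) * c0_farad

if __name__ == "__main__":
    C0 = 10e-12                             # hypothetical 10 pF empty-cell capacitance
    eps_s, eps_inf, tau = 80.0, 5.0, 1e-8   # assumed, roughly water-like parameters
    for f in (1e3, 1e7, 1e9):
        c = complex_capacitance(f, C0, eps_s, eps_inf, tau)
        print(f"f = {f:>10.0e} Hz  C' = {c.real*1e12:6.2f} pF   C'' = {-c.imag*1e12:6.2f} pF")
```

The real part falls from its static value toward the high-frequency limit, while the imaginary (loss) part peaks near ω = 1/τ, which is the dispersion behavior described above.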
Alternatively, the time response of the capacitance can be used directly, as in deep-level transient spectroscopy. Another example of frequency dependent capacitance occurs with MOS capacitors, where the slow generation of minority carriers means that at high frequencies the capacitance measures only the majority carrier response, while at low frequencies both types of carrier respond. At optical frequencies, in semiconductors the dielectric constant exhibits structure related to the band structure of the solid. Sophisticated modulation spectroscopy measurement methods based upon modulating the crystal structure by pressure or by other stresses and observing the related changes in absorption or reflection of light have advanced our knowledge of these materials. Styles The arrangement of plates and dielectric has many variations in different styles depending on the desired ratings of the capacitor. For small values of capacitance (microfarads and less), ceramic disks use metallic coatings, with wire leads bonded to the coating. Larger values can be made by multiple stacks of plates and disks. Larger value capacitors usually use a metal foil or metal film layer deposited on the surface of a dielectric film to make the plates, and a dielectric film of impregnated paper or plasticthese are rolled up to save space. To reduce the series resistance and inductance for long plates, the plates and dielectric are staggered so that connection is made at the common edge of the rolled-up plates, not at the ends of the foil or metalized film strips that comprise the plates. The assembly is encased to prevent moisture entering the dielectricearly radio equipment used a cardboard tube sealed with wax. Modern paper or film dielectric capacitors are dipped in a hard thermoplastic. Large capacitors for high-voltage use may have the roll form compressed to fit into a rectangular metal case, with bolted terminals and bushings for connections. The dielectric in larger capacitors is often impregnated with a liquid to improve its properties. Capacitors may have their connecting leads arranged in many configurations, for example axially or radially. "Axial" means that the leads are on a common axis, typically the axis of the capacitor's cylindrical bodythe leads extend from opposite ends. Radial leads are rarely aligned along radii of the body's circle, so the term is conventional. The leads (until bent) are usually in planes parallel to that of the flat body of the capacitor, and extend in the same direction; they are often parallel as manufactured. Small, cheap discoidal ceramic capacitors have existed from the 1930s onward, and remain in widespread use. After the 1980s, surface mount packages for capacitors have been widely used. These packages are extremely small and lack connecting leads, allowing them to be soldered directly onto the surface of printed circuit boards. Surface mount components avoid undesirable high-frequency effects due to the leads and simplify automated assembly, although manual handling is made difficult due to their small size. Mechanically controlled variable capacitors allow the plate spacing to be adjusted, for example by rotating or sliding a set of movable plates into alignment with a set of stationary plates. Low cost variable capacitors squeeze together alternating layers of aluminum and plastic with a screw. 
Electrical control of capacitance is achievable with varactors (or varicaps), which are reverse-biased semiconductor diodes whose depletion region width varies with applied voltage. They are used in phase-locked loops, amongst other applications.

Capacitor markings

Marking codes for larger parts: Most capacitors have designations printed on their bodies to indicate their electrical characteristics. Larger capacitors, such as electrolytic types, usually display the capacitance as a value with an explicit unit, for example 220 μF. For typographical reasons, some manufacturers print MF on capacitors to indicate microfarads (μF).

Three-/four-character marking code for small capacitors: Smaller capacitors, such as ceramic types, often use a shorthand notation consisting of three digits and an optional letter, where the digits (XYZ) denote the capacitance in picofarads (pF), calculated as XY × 10^Z, and the letter indicates the tolerance. Common tolerances are ±5%, ±10%, and ±20%, denoted as J, K, and M, respectively. A capacitor may also be labeled with its working voltage, temperature, and other relevant characteristics. Example: A capacitor labeled or designated as 473K 330V has a capacitance of 47 × 10^3 pF = 47 nF (±10%) with a maximum working voltage of 330 V. The working voltage of a capacitor is nominally the highest voltage that may be applied across it without undue risk of breaking down the dielectric layer.

Two-character marking code for small capacitors: For capacitances following the E3, E6, E12 or E24 series of preferred values, the former ANSI/EIA-198-D:1991, ANSI/EIA-198-1-E:1998 and ANSI/EIA-198-1-F:2002 as well as the amendment IEC 60062:2016/AMD1:2019 to IEC 60062 define a special two-character marking code for very small parts which leave no room to print the above-mentioned three-/four-character code onto them. The code consists of an uppercase letter denoting the two significant digits of the value followed by a digit indicating the multiplier. The EIA standard also defines a number of lowercase letters to specify a number of values not found in E24.

RKM code: The RKM code following IEC 60062 and BS 1852 is a notation to state a capacitor's value in a circuit diagram. It avoids using a decimal separator and replaces the decimal separator with the SI prefix symbol for the particular value (and the letter F for weight 1). The code is also used for part markings. Example: 4n7 for 4.7 nF or 2F2 for 2.2 F.

Historical: In texts prior to the 1960s, and on some capacitor packages until more recently, obsolete capacitance units were used in electronics books, magazines, and electronics catalogs. The old units "mfd" and "mf" meant microfarad (μF); and the old units "mmfd", "mmf", "uuf", "μμf", "pfd" meant picofarad (pF); but they are rarely used any more. Also, "micromicrofarad" or "micro-microfarad" are obsolete units found in some older texts, equivalent to the picofarad (pF). Summary of obsolete capacitance units (upper/lower case variations are not shown): μF (microfarad) = mf, mfd; pF (picofarad) = mmf, mmfd, pfd, μμF.

Applications

Energy storage: A capacitor can store electric energy when disconnected from its charging circuit, so it can be used like a temporary battery, or like other types of rechargeable energy storage system. Capacitors are commonly used in electronic devices to maintain power supply while batteries are being changed. (This prevents loss of information in volatile memory.)
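Returning to the three-digit marking code described above, decoding it is a one-line calculation (digits XY × 10^Z picofarads plus a tolerance letter). A small sketch; the tolerance table is limited to the three letters mentioned in the text:

```python
TOLERANCE = {"J": "±5%", "K": "±10%", "M": "±20%"}

def decode_marking(code):
    """Decode e.g. '473K' -> (capacitance in farads, tolerance string)."""
    digits, letter = code[:3], code[3:] or None
    picofarads = int(digits[:2]) * 10 ** int(digits[2])
    tolerance = TOLERANCE.get(letter, "unspecified")
    return picofarads * 1e-12, tolerance

if __name__ == "__main__":
    for code in ("473K", "104", "225M"):
        farads, tol = decode_marking(code)
        print(f"{code:5s} -> {farads*1e9:8.1f} nF  ({tol})")
```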
A capacitor can facilitate conversion of kinetic energy of charged particles into electric energy and store it. There are tradeoffs between capacitors and batteries as storage devices. Without external resistors or inductors, capacitors can generally release their stored energy in a very short time compared to batteries. Conversely, batteries can hold a far greater charge per their size. Conventional capacitors provide less than 360 joules per kilogram of specific energy, whereas a conventional alkaline battery has a density of 590 kJ/kg. There is an intermediate solution: supercapacitors, which can accept and deliver charge much faster than batteries, and tolerate many more charge and discharge cycles than rechargeable batteries. They are, however, 10 times larger than conventional batteries for a given charge. On the other hand, it has been shown that the amount of charge stored in the dielectric layer of the thin film capacitor can be equal to, or can even exceed, the amount of charge stored on its plates. In car audio systems, large capacitors store energy for the amplifier to use on demand. Also, for a flash tube, a capacitor is used to hold the high voltage. Digital memory In the 1930s, John Atanasoff applied the principle of energy storage in capacitors to construct dynamic digital memories for the first binary computers that used electron tubes for logic. Pulsed power and weapons Pulsed power is used in many applications to increase the power intensity (watts) of a volume of energy (joules) by releasing that volume within a very short time. Pulses in the nanosecond range and powers in the gigawatts are achievable. Short pulses often require specially constructed, low-inductance, high-voltage capacitors that are often used in large groups (capacitor banks) to supply huge pulses of current for many pulsed power applications. These include electromagnetic forming, Marx generators, pulsed lasers (especially TEA lasers), pulse forming networks, radar, fusion research, and particle accelerators. Large capacitor banks (reservoir) are used as energy sources for the exploding-bridgewire detonators or slapper detonators in nuclear weapons and other specialty weapons. Experimental work is under way using banks of capacitors as power sources for electromagnetic armour and electromagnetic railguns and coilguns. Power conditioning Reservoir capacitors are used in power supplies where they smooth the output of a full or half wave rectifier. They can also be used in charge pump circuits as the energy storage element in the generation of higher voltages than the input voltage. Capacitors are connected in parallel with the power circuits of most electronic devices and larger systems (such as factories) to shunt away and conceal current fluctuations from the primary power source to provide a "clean" power supply for signal or control circuits. Audio equipment, for example, uses several capacitors in this way, to shunt away power line hum before it gets into the signal circuitry. The capacitors act as a local reserve for the DC power source, and bypass AC currents from the power supply. This is used in car audio applications, when a stiffening capacitor compensates for the inductance and resistance of the leads to the lead–acid car battery. Power-factor correction In electric power distribution, capacitors are used for power-factor correction. Such capacitors often come as three capacitors connected as a three phase load. 
Usually, the values of these capacitors are not given in farads but rather as a reactive power in volt-amperes reactive (var). The purpose is to counteract inductive loading from devices like electric motors and transmission lines to make the load appear to be mostly resistive. Individual motor or lamp loads may have capacitors for power-factor correction, or larger sets of capacitors (usually with automatic switching devices) may be installed at a load center within a building or in a large utility substation. Suppression and coupling Signal coupling Because capacitors pass AC but block DC signals (when charged up to the applied DC voltage), they are often used to separate the AC and DC components of a signal. This method is known as AC coupling or "capacitive coupling". Here, a large value of capacitance, whose value need not be accurately controlled, but whose reactance is small at the signal frequency, is employed. Decoupling A decoupling capacitor is a capacitor used to protect one part of a circuit from the effect of another, for instance to suppress noise or transients. Noise caused by other circuit elements is shunted through the capacitor, reducing the effect they have on the rest of the circuit. It is most commonly used between the power supply and ground. An alternative name is bypass capacitor as it is used to bypass the power supply or other high impedance component of a circuit. Decoupling capacitors need not always be discrete components. Capacitors used in these applications may be built into a printed circuit board, between the various layers. These are often referred to as embedded capacitors. The layers in the board contributing to the capacitive properties also function as power and ground planes, and have a dielectric in between them, enabling them to operate as a parallel plate capacitor. High-pass and low-pass filters Noise suppression, spikes, and snubbers When an inductive circuit is opened, the current through the inductance collapses quickly, creating a large voltage across the open circuit of the switch or relay. If the inductance is large enough, the energy may generate a spark, causing the contact points to oxidize, deteriorate, or sometimes weld together, or destroying a solid-state switch. A snubber capacitor across the newly opened circuit creates a path for this impulse to bypass the contact points, thereby preserving their life; these were commonly found in contact breaker ignition systems, for instance. Similarly, in smaller scale circuits, the spark may not be enough to damage the switch but may still radiate undesirable radio frequency interference (RFI), which a filter capacitor absorbs. Snubber capacitors are usually employed with a low-value resistor in series, to dissipate energy and minimize RFI. Such resistor-capacitor combinations are available in a single package. Capacitors are also used in parallel with interrupting units of a high-voltage circuit breaker to equally distribute the voltage between these units. These are called "grading capacitors". In schematic diagrams, a capacitor used primarily for DC charge storage is often drawn vertically in circuit diagrams with the lower, more negative, plate drawn as an arc. The straight plate indicates the positive terminal of the device, if it is polarized (see electrolytic capacitor). Motor starters In single phase squirrel cage motors, the primary winding within the motor housing is not capable of starting a rotational motion on the rotor, but is capable of sustaining one. 
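To make the power-factor-correction sizing described earlier concrete: for a single-phase load the capacitor must supply reactive power Qc = P·(tanφ1 − tanφ2), which corresponds to C = Qc/(2πf·V²). This is a standard textbook calculation; the load, voltage and power factors below are assumed example numbers, not figures from the article.

```python
import math

def correction_capacitance(real_power_w, pf_initial, pf_target, v_rms, freq_hz):
    """Single-phase PFC capacitor needed to raise the power factor from pf_initial to pf_target."""
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    q_var = real_power_w * (math.tan(phi1) - math.tan(phi2))   # reactive power to supply
    c = q_var / (2.0 * math.pi * freq_hz * v_rms ** 2)
    return c, q_var

if __name__ == "__main__":
    C, Q = correction_capacitance(10e3, 0.70, 0.95, 400.0, 50.0)
    print(f"reactive power to compensate: {Q/1e3:.2f} kvar")
    print(f"required capacitance:         {C*1e6:.0f} uF")
```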
To start the motor, a secondary "start" winding has a series non-polarized starting capacitor to introduce a lead in the sinusoidal current. When the secondary (start) winding is placed at an angle with respect to the primary (run) winding, a rotating electric field is created. The force of the rotational field is not constant, but is sufficient to start the rotor spinning. When the rotor comes close to operating speed, a centrifugal switch (or current-sensitive relay in series with the main winding) disconnects the capacitor. The start capacitor is typically mounted to the side of the motor housing. These are called capacitor-start motors, that have relatively high starting torque. Typically they can have up-to four times as much starting torque as a split-phase motor and are used on applications such as compressors, pressure washers and any small device requiring high starting torques. Capacitor-run induction motors have a permanently connected phase-shifting capacitor in series with a second winding. The motor is much like a two-phase induction motor. Motor-starting capacitors are typically non-polarized electrolytic types, while running capacitors are conventional paper or plastic film dielectric types. Signal processing The energy stored in a capacitor can be used to represent information, either in binary form, as in DRAMs, or in analogue form, as in analog sampled filters and CCDs. Capacitors can be used in analog circuits as components of integrators or more complex filters and in negative feedback loop stabilization. Signal processing circuits also use capacitors to integrate a current signal. Tuned circuits Capacitors and inductors are applied together in tuned circuits to select information in particular frequency bands. For example, radio receivers rely on variable capacitors to tune the station frequency. Speakers use passive analog crossovers, and analog equalizers use capacitors to select different audio bands. The resonant frequency f of a tuned circuit is a function of the inductance (L) and capacitance (C) in series, and is given by: where is in henries and is in farads. Sensing Most capacitors are designed to maintain a fixed physical structure. However, various factors can change the structure of the capacitor, and the resulting change in capacitance can be used to sense those factors. Changing the dielectric The effects of varying the characteristics of the dielectric can be used for sensing purposes. Capacitors with an exposed and porous dielectric can be used to measure humidity in air. Capacitors are used to accurately measure the fuel level in airplanes; as the fuel covers more of a pair of plates, the circuit capacitance increases. Squeezing the dielectric can change a capacitor at a few tens of bar pressure sufficiently that it can be used as a pressure sensor. A selected, but otherwise standard, polymer dielectric capacitor, when immersed in a compatible gas or liquid, can work usefully as a very low cost pressure sensor up to many hundreds of bar. Changing the distance between the plates Capacitors with a flexible plate can be used to measure strain or pressure. Industrial pressure transmitters used for process control use pressure-sensing diaphragms, which form a capacitor plate of an oscillator circuit. Capacitors are used as the sensor in condenser microphones, where one plate is moved by air pressure, relative to the fixed position of the other plate. 
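The tuned-circuit relation quoted earlier, f = 1/(2π√(LC)), is easy to evaluate; the coil and variable-capacitor values below are assumptions chosen to land in the AM broadcast range for illustration.

```python
import math

def resonant_frequency(l_henry, c_farad):
    """Resonant frequency of an LC tuned circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

def tuning_capacitance(l_henry, freq_hz):
    """Capacitance needed to tune a given inductance to freq_hz."""
    return 1.0 / ((2.0 * math.pi * freq_hz) ** 2 * l_henry)

if __name__ == "__main__":
    L = 200e-6                                   # hypothetical 200 uH antenna coil
    for c in (50e-12, 200e-12, 500e-12):         # variable-capacitor settings
        print(f"C = {c*1e12:5.0f} pF -> f = {resonant_frequency(L, c)/1e3:8.1f} kHz")
    print(f"to tune 1 MHz: C = {tuning_capacitance(L, 1e6)*1e12:.0f} pF")
```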
Some accelerometers use MEMS capacitors etched on a chip to measure the magnitude and direction of the acceleration vector. They are used to detect changes in acceleration, in tilt sensors, or to detect free fall, as sensors triggering airbag deployment, and in many other applications. Some fingerprint sensors use capacitors. Additionally, a user can adjust the pitch of a theremin musical instrument by moving their hand since this changes the effective capacitance between the user's hand and the antenna. Changing the effective area of the plates Capacitive touch switches are now used on many consumer electronic products. Oscillators A capacitor can possess spring-like qualities in an oscillator circuit. In the image example, a capacitor acts to influence the biasing voltage at the npn transistor's base. The resistance values of the voltage-divider resistors and the capacitance value of the capacitor together control the oscillatory frequency. Producing light A light-emitting capacitor is made from a dielectric that uses phosphorescence to produce light. If one of the conductive plates is made with a transparent material, the light is visible. Light-emitting capacitors are used in the construction of electroluminescent panels, for applications such as backlighting for laptop computers. In this case, the entire panel is a capacitor used for the purpose of generating light. Hazards and safety The hazards posed by a capacitor are usually determined, foremost, by the amount of energy stored, which is the cause of things like electrical burns or heart fibrillation. Factors such as voltage and chassis material are of secondary consideration, which are more related to how easily a shock can be initiated rather than how much damage can occur. Under certain conditions, including conductivity of the surfaces, preexisting medical conditions, the humidity of the air, or the pathways it takes through the body (i.e.: shocks that travel across the core of the body and, especially, the heart are more dangerous than those limited to the extremities), shocks as low as one joule have been reported to cause death, although in most instances they may not even leave a burn. Shocks over ten joules will generally damage skin, and are usually considered hazardous. Any capacitor that can store 50 joules or more should be considered potentially lethal. Capacitors may retain a charge long after power is removed from a circuit; this charge can cause dangerous or even potentially fatal shocks or damage connected equipment. For example, even a seemingly innocuous device such as the flash of a disposable camera, has a photoflash capacitor which may contain over 15 joules of energy and be charged to over 300 volts. This is easily capable of delivering a shock. Service procedures for electronic devices usually include instructions to discharge large or high-voltage capacitors, for instance using a Brinkley stick. Larger capacitors, such as those used in microwave ovens, HVAC units and medical defibrillators may also have built-in discharge resistors to dissipate stored energy to a safe level within a few seconds after power is removed. High-voltage capacitors are stored with the terminals shorted, as protection from potentially dangerous voltages due to dielectric absorption or from transient voltages the capacitor may pick up from static charges or passing weather events. Some old, large oil-filled paper or plastic film capacitors contain polychlorinated biphenyls (PCBs). 
It is known that waste PCBs can leak into groundwater under landfills. Capacitors containing PCBs were labelled as containing "Askarel" and several other trade names. PCB-filled paper capacitors are found in very old (pre-1975) fluorescent lamp ballasts, and other applications. Capacitors may catastrophically fail when subjected to voltages or currents beyond their rating, or in case of polarized capacitors, applied in a reverse polarity. Failures may create arcing that heats and vaporizes the dielectric fluid, causing a build up of pressurized gas that may result in swelling, rupture, or an explosion. Larger capacitors may have vents or similar mechanism to allow the release of such pressures in the event of failure. Capacitors used in RF or sustained high-current applications can overheat, especially in the center of the capacitor rolls. Capacitors used within high-energy capacitor banks can violently explode when a short in one capacitor causes sudden dumping of energy stored in the rest of the bank into the failing unit. High voltage vacuum capacitors can generate soft X-rays even during normal operation. Proper containment, fusing, and preventive maintenance can help to minimize these hazards. High-voltage capacitors may benefit from a pre-charge to limit in-rush currents at power-up of high voltage direct current (HVDC) circuits. This extends the life of the component and may mitigate high-voltage hazards. See also Capacitance meter Capacitor plague Electric displacement field Electroluminescence List of capacitor manufacturers Notes References Bibliography Philosophical Transactions of the Royal Society LXXII, Appendix 8, 1782 (Volta coins the word condenser) Further reading Tantalum and Niobium-Based Capacitors – Science, Technology, and Applications; 1st Ed; Yuri Freeman; Springer; 120 pages; 2018; . Capacitors; 1st Ed; R. P. Deshpande; McGraw-Hill; 342 pages; 2014; . The Capacitor Handbook; 1st Ed; Cletus Kaiser; Van Nostrand Reinhold; 124 pages; 1993; . Understanding Capacitors and their Uses; 1st Ed; William Mullin; Sams Publishing; 96 pages; 1964. (archive) Fixed and Variable Capacitors; 1st Ed; G. W. A. Dummer and Harold Nordenberg; Maple Press; 288 pages; 1960. (archive) The Electrolytic Capacitor''; 1st Ed; Alexander Georgiev; Murray Hill Books; 191 pages; 1945. (archive) External links The First Condenser – A Beer Glass – SparkMuseum How Capacitors Work – Howstuffworks Capacitor Tutorial Electrical components Energy storage Science and technology in the Dutch Republic Dutch inventions 18th-century inventions German inventions
Capacitor
[ "Physics", "Technology", "Engineering" ]
15,951
[ "Electrical components", "Physical quantities", "Capacitors", "Capacitance", "Electrical engineering", "Components" ]
4,932,763
https://en.wikipedia.org/wiki/Characterization%20%28materials%20science%29
Characterization, when used in materials science, refers to the broad and general process by which a material's structure and properties are probed and measured. It is a fundamental process in the field of materials science, without which no scientific understanding of engineering materials could be ascertained. The scope of the term often differs; some definitions limit the term's use to techniques which study the microscopic structure and properties of materials, while others use the term to refer to any materials analysis process including macroscopic techniques such as mechanical testing, thermal analysis and density calculation. The scale of the structures observed in materials characterization ranges from angstroms, such as in the imaging of individual atoms and chemical bonds, up to centimeters, such as in the imaging of coarse grain structures in metals. While many characterization techniques have been practiced for centuries, such as basic optical microscopy, new techniques and methodologies are constantly emerging. In particular the advent of the electron microscope and secondary ion mass spectrometry in the 20th century has revolutionized the field, allowing the imaging and analysis of structures and compositions on much smaller scales than was previously possible, leading to a huge increase in the level of understanding as to why different materials show different properties and behaviors. More recently, atomic force microscopy has further increased the maximum possible resolution for analysis of certain samples in the last 30 years. Microscopy Microscopy is a category of characterization techniques which probe and map the surface and sub-surface structure of a material. These techniques can use photons, electrons, ions or physical cantilever probes to gather data about a sample's structure on a range of length scales. Some common examples of microscopy techniques include: Optical microscopy Scanning electron microscopy (SEM) Transmission electron microscopy (TEM) Field ion microscopy (FIM) Scanning probe microscopy (SPM) Atomic force microscopy (AFM) Scanning tunneling microscopy (STM) X-ray diffraction topography (XRT) Atom-Probe Tomography (APT) Spectroscopy Spectroscopy is a category of characterization techniques which use a range of principles to reveal the chemical composition, composition variation, crystal structure and photoelectric properties of materials. 
Some common examples of spectroscopy techniques include: Optical radiation Ultraviolet-visible spectroscopy (UV-vis) Fourier transform infrared spectroscopy (FTIR) Thermoluminescence (TL) Photoluminescence (PL) X-ray X-ray diffraction (XRD) Small-angle X-ray scattering (SAXS) Energy-dispersive X-ray spectroscopy (EDX, EDS) Wavelength dispersive X-ray spectroscopy (WDX, WDS) Electron energy loss spectroscopy (EELS) X-ray photoelectron spectroscopy (XPS) Auger electron spectroscopy (AES) X-ray photon correlation spectroscopy (XPCS) Mass spectrometry Modes of mass spectrometry: Electron ionization (EI) Thermal ionization mass spectrometry (TI-MS) MALDI-TOF Secondary ion mass spectrometry (SIMS) Nuclear spectroscopy Nuclear magnetic resonance spectroscopy (NMR) Mössbauer spectroscopy (MBS) Perturbed angular correlation (PAC) Other Photon correlation spectroscopy/Dynamic light scattering (DLS) Terahertz spectroscopy (THz) Electron paramagnetic/spin resonance (EPR, ESR) Small-angle neutron scattering (SANS) Rutherford backscattering spectrometry (RBS) Spatially resolved acoustic spectroscopy (SRAS) Macroscopic testing A huge range of techniques are used to characterize various macroscopic properties of materials, including: Mechanical testing, including tensile, compressive, torsional, creep, fatigue, toughness and hardness testing Differential thermal analysis (DTA) Dielectric thermal analysis (DEA, DETA) Thermogravimetric analysis (TGA) Differential scanning calorimetry (DSC) Impulse excitation technique (IET) Ultrasound techniques, including resonant ultrasound spectroscopy and time domain ultrasonic testing methods See also Analytical chemistry Instrumental chemistry Semiconductor characterization techniques Wafer bond characterization Polymer characterization Lipid bilayer characterization Lignin characterization Characterization of nanoparticles MEMS for in situ mechanical characterization References Materials science
Characterization (materials science)
[ "Physics", "Materials_science", "Engineering" ]
854
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
2,679,447
https://en.wikipedia.org/wiki/Bel%20decomposition
In semi-Riemannian geometry, the Bel decomposition, taken with respect to a specific timelike congruence, is a way of breaking up the Riemann tensor of a pseudo-Riemannian manifold into lower-order tensors with properties similar to the electric field and magnetic field. Such a decomposition was partially described by Alphonse Matte in 1953 and by Lluis Bel in 1958. This decomposition is particularly important in general relativity. This is the case for four-dimensional Lorentzian manifolds, for which there are only three pieces with simple properties and individual physical interpretations. Decomposition of the Riemann tensor In four dimensions the Bel decomposition of the Riemann tensor, with respect to a timelike unit vector field , not necessarily geodesic or hypersurface orthogonal, consists of three pieces: the electrogravitic tensor Also known as the tidal tensor. It can be physically interpreted as giving the tidal stresses on small bits of a material object (which may also be acted upon by other physical forces), or the tidal accelerations of a small cloud of test particles in a vacuum solution or electrovacuum solution. the magnetogravitic tensor Can be interpreted physically as specifying possible spin-spin forces on spinning bits of matter, such as spinning test particles. the topogravitic tensor Can be interpreted as representing the sectional curvatures for the spatial part of a frame field. Because these are all transverse (i.e. projected to the spatial hyperplane elements orthogonal to our timelike unit vector field), they can be represented as linear operators on three-dimensional vectors, or as three-by-three real matrices. They are respectively symmetric, traceless, and symmetric (6, 8, 6 linearly independent components, for a total of 20). If we write these operators as E, B, L respectively, the principal invariants of the Riemann tensor are obtained as follows: is the trace of E² + L² - 2 B Bᵀ, is the trace of B (E - L), is the trace of E L - B². See also Bel–Robinson tensor Ricci decomposition Tidal tensor Papapetrou–Dixon equations Curvature invariant References Lorentzian manifolds Tensors in general relativity
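As a minimal numerical sketch of the trace formulas just quoted, assuming the three Bel pieces have already been expressed as 3×3 matrices E, B, L (E and L symmetric, B traceless); the matrix entries below are arbitrary placeholders rather than data from any particular spacetime.

```python
import numpy as np

# Placeholder 3x3 matrices standing in for the Bel pieces (arbitrary values,
# chosen only so that E and L are symmetric and B is traceless).
E = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.3, 0.0],
              [0.0, 0.0, 0.1]])
B = np.array([[0.0, 0.4, 0.0],
              [-0.4, 0.0, 0.2],
              [0.0, 0.2, 0.0]])
L = np.array([[0.1, 0.0, 0.0],
              [0.0, 0.2, 0.0],
              [0.0, 0.0, 0.3]])

inv1 = np.trace(E @ E + L @ L - 2.0 * B @ B.T)  # trace of E^2 + L^2 - 2 B B^T
inv2 = np.trace(B @ (E - L))                    # trace of B (E - L)
inv3 = np.trace(E @ L - B @ B)                  # trace of E L - B^2
print(inv1, inv2, inv3)
```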
Bel decomposition
[ "Physics", "Engineering" ]
457
[ "Tensors", "Physical quantities", "Tensor physical quantities", "Tensors in general relativity", "Relativity stubs", "Theory of relativity" ]
2,679,476
https://en.wikipedia.org/wiki/Hinman%20collator
The Hinman collator, an early optical collator, was an opto-mechanical device for comparing pairs of documents for differences in the text. Documents that appeared similar were said to “collate”. The collator resulted in rapid advances in the study of literary works. Invented by Charlton Hinman in the late 1940s, the device used lights and mirrors to superimpose images of the two documents so that differences in text alignment or wording stood out. This resulted in huge improvements in speed and efficiency compared to the traditional cross-referencing of texts by eye. The idea built on earlier work such as Carl Pulfrich's blink comparator used to help identify the former planet Pluto, and Hinman's work analysing aerial photographs during World War II. Hinman used his device to compare the many slightly different impressions of the First Folio of William Shakespeare's works. The printing and bookbinding processes used in the time of Shakespeare often resulted in variations in the pages bound into the final books, and the collator enabled Hinman to describe the exact order in which the Folios had been composited and printed. He used the collator to compare 55 different copies of the First Folio held by the Folger Shakespeare Library, and subsequently wrote about his findings in Printing and Proof-reading of the First Folio of Shakespeare in 1963. In the wake of Hinman's success, the device was purchased by a number of universities, libraries and other institutions (allegedly including the CIA). As more compact types of collator were developed in the 1960s, the last Hinman was built in 1978. In his 2002 survey of mechanical collators, Steven Escar Smith estimates from scattered records that as many as 59 Hinman Collators were produced, 41 of these surviving at the time of his survey's publication. A more portable collator was developed by Randall McLeod. See also Blink comparator vdiff References External links Kelsey Jackson Williams, 'The Hinman Collator', Stirling (Scotland): The Pathfoot Press, 2018. Books and Mirrors Charlton Hinman and The Roots of Mechanical Collation Instructions for implementing Blink Comparator on personal computer Optical devices Bibliography Textual scholarship
Hinman collator
[ "Materials_science", "Engineering" ]
471
[ "Glass engineering and science", "Optical devices" ]
2,680,508
https://en.wikipedia.org/wiki/Carminati%E2%80%93McLenaghan%20invariants
In general relativity, the Carminati–McLenaghan invariants or CM scalars are a set of 16 scalar curvature invariants for the Riemann tensor. This set is usually supplemented with at least two additional invariants. Mathematical definition The CM invariants consist of 6 real scalars plus 5 complex scalars, making a total of 16 invariants. They are defined in terms of the Weyl tensor and its right (or left) dual , the Ricci tensor , and the trace-free Ricci tensor In the following, it may be helpful to note that if we regard as a matrix, then is the square of this matrix, so the trace of the square is , and so forth. The real CM scalars are: (the trace of the Ricci tensor) The complex CM scalars are: The CM scalars have the following degrees: is linear, are quadratic, are cubic, are quartic, are quintic. They can all be expressed directly in terms of the Ricci spinors and Weyl spinors, using Newman–Penrose formalism; see the link below. Complete sets of invariants In the case of spherically symmetric spacetimes or planar symmetric spacetimes, it is known that comprise a complete set of invariants for the Riemann tensor. In the case of vacuum solutions, electrovacuum solutions and perfect fluid solutions, the CM scalars comprise a complete set. Additional invariants may be required for more general spacetimes; determining the exact number (and possible syzygies among the various invariants) is an open problem. See also Curvature invariant, for more about curvature invariants in (semi)-Riemannian geometry in general Curvature invariant (general relativity), for other curvature invariants which are useful in general relativity References External links The GRTensor II website includes a manual with definitions and discussions of the CM scalars. Implementation in the Maxima computer algebra system Tensors in general relativity
Carminati–McLenaghan invariants
[ "Physics", "Engineering" ]
405
[ "Tensors in general relativity", "Tensors", "Tensor physical quantities", "Physical quantities" ]
2,680,620
https://en.wikipedia.org/wiki/Parameterized%20post-Newtonian%20formalism
In physics, precisely in the study of the theory of general relativity and many alternatives to it, the post-Newtonian formalism is a calculational tool that expresses Einstein's (nonlinear) equations of gravity in terms of the lowest-order deviations from Newton's law of universal gravitation. This allows approximations to Einstein's equations to be made in the case of weak fields. Higher-order terms can be added to increase accuracy, but for strong fields, it may be preferable to solve the complete equations numerically. Some of these post-Newtonian approximations are expansions in a small parameter, which is the ratio of the velocity of the matter forming the gravitational field to the speed of light, which in this case is better called the speed of gravity. In the limit, when the fundamental speed of gravity becomes infinite, the post-Newtonian expansion reduces to Newton's law of gravity. The parameterized post-Newtonian formalism or PPN formalism, is a version of this formulation that explicitly details the parameters in which a general theory of gravity can differ from Newtonian gravity. It is used as a tool to compare Newtonian and Einsteinian gravity in the limit in which the gravitational field is weak and generated by objects moving slowly compared to the speed of light. In general, PPN formalism can be applied to all metric theories of gravitation in which all bodies satisfy the Einstein equivalence principle (EEP). The speed of light remains constant in PPN formalism and it assumes that the metric tensor is always symmetric. History The earliest parameterizations of the post-Newtonian approximation were performed by Sir Arthur Stanley Eddington in 1922. However, they dealt solely with the vacuum gravitational field outside an isolated spherical body. Ken Nordtvedt (1968, 1969) expanded this to include seven parameters in papers published in 1968 and 1969. Clifford Martin Will introduced a stressed, continuous matter description of celestial bodies in 1971. The versions described here are based on Wei-Tou Ni (1972), Will and Nordtvedt (1972), Charles W. Misner et al. (1973) (see Gravitation (book)), and Will (1981, 1993) and have ten parameters. Beta-delta notation Ten post-Newtonian parameters completely characterize the weak-field behavior of the theory. The formalism has been a valuable tool in tests of general relativity. In the notation of Will (1971), Ni (1972) and Misner et al. (1973) they have the following values: is the 4 by 4 symmetric metric tensor with indexes and going from 0 to 3. Below, an index of 0 will indicate the time direction and indices and (going from 1 to 3) will indicate spatial directions. In Einstein's theory, the values of these parameters are chosen (1) to fit Newton's Law of gravity in the limit of velocities and mass approaching zero, (2) to ensure conservation of energy, mass, momentum, and angular momentum, and (3) to make the equations independent of the reference frame. In this notation, general relativity has PPN parameters and Alpha-zeta notation In the more recent notation of Will & Nordtvedt (1972) and Will (1981, 1993, 2006) a different set of ten PPN parameters is used. is calculated from The meaning of these is that , and measure the extent of preferred frame effects. , , , and measure the failure of conservation of energy, momentum and angular momentum. 
In this notation, general relativity has PPN parameters and The mathematical relationship between the metric, metric potentials and PPN parameters for this notation is: where repeated indexes are summed. is on the order of potentials such as , the square magnitude of the coordinate velocities of matter, etc. is the velocity vector of the PPN coordinate system relative to the mean rest-frame of the universe. is the square magnitude of that velocity. if and only if , otherwise. There are ten metric potentials, , , , , , , , , and , one for each PPN parameter to ensure a unique solution. 10 linear equations in 10 unknowns are solved by inverting a 10 by 10 matrix. These metric potentials have forms such as: which is simply another way of writing the Newtonian gravitational potential, where is the density of rest mass, is the internal energy per unit rest mass, is the pressure as measured in a local freely falling frame momentarily comoving with the matter, and is the coordinate velocity of the matter. Stress-energy tensor for a perfect fluid takes form How to apply PPN Examples of the process of applying PPN formalism to alternative theories of gravity can be found in Will (1981, 1993). It is a nine step process: Step 1: Identify the variables, which may include: (a) dynamical gravitational variables such as the metric , scalar field , vector field , tensor field and so on; (b) prior-geometrical variables such as a flat background metric , cosmic time function , and so on; (c) matter and non-gravitational field variables. Step 2: Set the cosmological boundary conditions. Assume a homogeneous isotropic cosmology, with isotropic coordinates in the rest frame of the universe. A complete cosmological solution may or may not be needed. Call the results , , , . Step 3: Get new variables from , with , or if needed. Step 4: Substitute these forms into the field equations, keeping only such terms as are necessary to obtain a final consistent solution for . Substitute the perfect fluid stress tensor for the matter sources. Step 5: Solve for to . Assuming this tends to zero far from the system, one obtains the form where is the Newtonian gravitational potential and may be a complicated function including the gravitational "constant" . The Newtonian metric has the form , , . Work in units where the gravitational "constant" measured today far from gravitating matter is unity so set . Step 6: From linearized versions of the field equations solve for to and to . Step 7: Solve for to . This is the messiest step, involving all the nonlinearities in the field equations. The stress–energy tensor must also be expanded to sufficient order. Step 8: Convert to local quasi-Cartesian coordinates and to standard PPN gauge. Step 9: By comparing the result for with the equations presented in PPN with alpha-zeta parameters, read off the PPN parameter values. Comparisons between theories of gravity A table comparing PPN parameters for 23 theories of gravity can be found in Alternatives to general relativity#Parametric post-Newtonian parameters for a range of theories. Most metric theories of gravity can be lumped into categories. Scalar theories of gravitation include conformally flat theories and stratified theories with time-orthogonal space slices. In conformally flat theories such as Nordström's theory of gravitation the metric is given by and for this metric , which drastically disagrees with observations. 
In stratified theories such as Yilmaz theory of gravitation the metric is given by and for this metric , which also disagrees drastically with observations. Another class of theories is the quasilinear theories such as Whitehead's theory of gravitation. For these . The relative magnitudes of the harmonics of the Earth's tides depend on and , and measurements show that quasilinear theories disagree with observations of Earth's tides. Another class of metric theories is the bimetric theory. For all of these is non-zero. From the precession of the solar spin we know that , and that effectively rules out bimetric theories. Another class of metric theories is the scalar–tensor theories, such as Brans–Dicke theory. For all of these, . The limit of means that would have to be very large, so these theories are looking less and less likely as experimental accuracy improves. The final main class of metric theories is the vector–tensor theories. For all of these the gravitational "constant" varies with time and is non-zero. Lunar laser ranging experiments tightly constrain the variation of the gravitational "constant" with time and , so these theories are also looking unlikely. There are some metric theories of gravity that do not fit into the above categories, but they have similar problems. Accuracy from experimental tests A table of bounds on the PPN parameters, with footnotes, is given in Will (2006) and Will (2014). One footnoted bound is based on results from Will (1976, 2006); it is theoretically possible for an alternative model of gravity to bypass this bound, in which case the bound from Ni (1972) applies. See also Alternatives to general relativity#Parametric post-Newtonian parameters for a range of theories Effective one-body formalism Linearized gravity Peskin–Takeuchi parameter The same thing as PPN, but for electroweak theory instead of gravitation Tests of general relativity References Eddington, A. S. (1922) The Mathematical Theory of Relativity, Cambridge University Press. Misner, C. W., Thorne, K. S. & Wheeler, J. A. (1973) Gravitation, W. H. Freeman and Co. Will, C. M. (1981, 1993) Theory and Experiment in Gravitational Physics, Cambridge University Press. Will, C. M. (2006) The Confrontation between General Relativity and Experiment, https://web.archive.org/web/20070613073754/http://relativity.livingreviews.org/Articles/lrr-2006-3/ Theories of gravity Formalism (deductive) General relativity
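A hedged sketch of the Newtonian limit that Step 5 of the procedure above refers to; this is the standard form used in PPN treatments (geometrized units, Will's conventions), stated here as background rather than as a formula quoted from this text.

```latex
% Newtonian-limit metric assumed in Step 5 (sketch; sign conventions vary between references)
g_{00} = -1 + 2U, \qquad g_{0j} = 0, \qquad g_{ij} = \delta_{ij},
\qquad U(\mathbf{x},t) = \int \frac{\rho(\mathbf{x}',t)}{\lvert \mathbf{x}-\mathbf{x}' \rvert}\, d^{3}x' .
```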
Parameterized post-Newtonian formalism
[ "Physics" ]
1,976
[ "General relativity", "Theoretical physics", "Theory of relativity", "Theories of gravity" ]
2,683,414
https://en.wikipedia.org/wiki/Perseus%20Cluster
The Perseus cluster (Abell 426) is a cluster of galaxies in the constellation Perseus. It has a recession speed of 5,366 km/s and a diameter of 863. It is one of the most massive objects in the known universe, containing thousands of galaxies immersed in a vast cloud of multimillion-degree gas. X-radiation from the cluster The Perseus galaxy cluster is the brightest cluster in the sky when observed in the X-ray band. The cluster contains the radio source 3C 84 that is currently blowing bubbles of relativistic plasma into the core of the cluster. These are seen as holes in an X-ray image of the cluster, as they push away the X-ray emitting gas. They are known as radio bubbles, because they appear as emitters of radio waves due to the relativistic particles in the bubble. The galaxy NGC 1275 is located at the centre of the cluster, where the X-ray emission is brightest. The first detection of X-ray emission from the Perseus cluster (astronomical designation Per XR-1) occurred during an Aerobee rocket flight on March 1, 1970. The X-ray source may be associated with NGC 1275 (Per A, 3C 84), and was reported in 1971. If the source is NGC 1275, then Lx is about 4 × 10⁴⁵ ergs/s. More detailed observations from Uhuru confirmed the earlier detection and its source within the Perseus cluster. Perseus galaxy cluster's Cosmic music note In 2003, a team of astronomers led by Andrew Fabian at Cambridge University discovered one of the deepest notes ever detected, after 53 hours of Chandra observations. No human will actually hear the note, because its time period between oscillations is 9.6 million years, which is 57 octaves below the keys in the middle of a piano. The sound waves appear to be generated by the inflation of bubbles of relativistic plasma by the central active galactic nucleus in NGC 1275. The bubbles are visible as ripples in the X-ray band since the X-ray brightness of the intracluster medium that fills the cluster is strongly dependent on the density of the plasma. In May 2022, NASA reported the sonification (converting astronomical data associated with pressure waves into sound) of the black hole at the center of the Perseus galaxy cluster. A similar case also happens in the nearby Virgo Cluster, generated by an even larger supermassive black hole in the galaxy Messier 87, also detected by Chandra. Like the former, no human will hear the note. The tone is variable, and even lower than those generated by NGC 1275, from 56 octaves below middle C on minor eruptions, to as low as 59 octaves below middle C on major eruptions. See also References External links Fabian, A.C., et al. A deep Chandra observation of the Perseus cluster: shocks and ripples. Monthly Notices of the Royal Astronomical Society. Vol. 344 (2003): L43 (arXiv:astro-ph/0306036v2). The galaxy cluster Abell 426 (Perseus). A catalogue of 660 galaxy positions, isophotal magnitudes and morphological types, Brunzendorf, J.; Meusinger, H., Astronomy and Astrophysics Supplement, v.139, p. 141–161, 1999. The galaxy cluster Abell 426 (Perseus). A deep image of Abell 426 Space.com article about "Sound of a Black Hole." in the Perseus cluster http://aida.astroinfo.org/ Perseus A: Mysterious X-ray Signal Intrigues Astronomers The clickable Perseus cluster Galaxy clusters Perseus (constellation) Perseus-Pisces Supercluster 426 Abell richness class 2
Perseus Cluster
[ "Astronomy" ]
813
[ "Perseus (constellation)", "Galaxy clusters", "Astronomical objects", "Constellations" ]
2,683,666
https://en.wikipedia.org/wiki/Filler%20metal
In metalworking, a filler metal is a metal added in the making of a joint through welding, brazing, or soldering. Soldering Soldering and brazing processes rely on a filler metal added to the joint to form the junction between the base metal parts. Soft soldering uses a filler that melts at a lower temperature than the workpiece, often a lead-tin solder alloy. Brazing and hard soldering use a higher temperature filler that melts at a temperature which may approach that of the base metal, and which may form a eutectic alloy with the base metal. Filler alloys have a lower melting point than the base metal, so that the joint may be made by bringing the whole assembly up to temperature without everything melting as one. Complex joints, typically for jewelry or live steam boilermaking, may be made in stages, with filler metals of progressively lower melting points used in turn. Early joints are thus not destroyed by heating to the later temperatures. Welding Welding processes work around the melting point of the base metal and require the base metal itself to begin melting. They usually require more precise distribution of heat from a small torch, as melting the entire workpiece is avoided by controlling the distribution of heat over space, rather than limiting the maximum heat. If filler is used, it is of a similar alloy and melting point to the base metal. Not all welding processes require filler metal. Autogenous welding processes only require part of the existing base metal to be melted and this is sufficient, provided that the joint is already mechanically close-fitting before welding. Forge- or hammer welding uses hammering to close up the hot joint and also to locally increase its heat. Many gas welding processes, such as lead burning, are typically autogenous and a separate wire filler rod of the same metal is only added if there is a gap to fill. Some metals, such as lead or Birmabright aluminium alloy, use offcut strips of the same metal as filler. Steels are usually welded with a filler alloy made specially for the purpose. To prevent rusting in storage, these wires are often lightly copper plated. With electric arc welding, a major use for the filler rod is as a consumable electrode that also generates heat in the workpiece. An electrical discharge from this electrode provides heat that melts both the electrode and heats the base metal. TIG welding is an electric welding process that uses a non-consumed tungsten electrode to provide heat, with the filler rod added manually. This is more like gas welding as a process, but with a different heat source. Hardfacing A specialist use for filler metal is where a deliberately different metal is to be deposited. This is often done for hardfacing excavating tools or digger bucket teeth. A hard, but more expensive and sometimes brittle, facing alloy is deposited onto the wear surfaces of mild steel tools. Four types of filler metals exist—covered electrodes, bare electrode wire or rod, tubular electrode wire, and welding fluxes. Sometimes non-consumable electrodes are included as well, but since these metals are not consumed by the welding process, they are normally excluded. Usage Covered electrodes Covered electrodes are used extensively in shielded metal arc welding and are a major factor in that method's popularity. Bare electrode wires Bare electrode wires are used in gas metal arc welding and bare electrode rods are used in gas tungsten arc welding. Tubular electrode wires Tubular electrode wire is used in flux-cored arc welding. 
Welding fluxes Welding fluxes are used in submerged arc welding. See also Amorphous brazing foil Autogenous welding, welding processes without filler References Cary, Howard B. and Scott C. Helzer (2005). Modern Welding Technology. Upper Saddle River, New Jersey: Pearson Education. . Welding
Filler metal
[ "Engineering" ]
788
[ "Welding", "Mechanical engineering" ]
2,684,031
https://en.wikipedia.org/wiki/Atomic%20diffusion
In chemical physics, atomic diffusion is a diffusion process whereby the random, thermally-activated movement of atoms in a solid results in the net transport of atoms. For example, helium atoms inside a balloon can diffuse through the wall of the balloon and escape, resulting in the balloon slowly deflating. Other air molecules (e.g. oxygen, nitrogen) have lower mobilities and thus diffuse more slowly through the balloon wall. There is a concentration gradient in the balloon wall, because the balloon was initially filled with helium, and thus there is plenty of helium on the inside, but there is relatively little helium on the outside (helium is not a major component of air). The rate of transport is governed by the diffusivity and the concentration gradient. In crystals In the crystal solid state, diffusion within the crystal lattice occurs by either interstitial or substitutional mechanisms and is referred to as lattice diffusion. In interstitial lattice diffusion, a diffusant (such as C in an iron alloy) will diffuse in between the lattice sites of another crystalline element. In substitutional lattice diffusion (self-diffusion for example), the atom can only move by swapping places with another atom. Substitutional lattice diffusion is often contingent upon the availability of point vacancies throughout the crystal lattice. Diffusing particles migrate from point vacancy to point vacancy by the rapid, essentially random jumping about (jump diffusion). Since the prevalence of point vacancies increases in accordance with the Arrhenius equation, the rate of crystal solid state diffusion increases with temperature. For a single atom in a defect-free crystal, the movement can be described by the "random walk" model. In three dimensions it can be shown that after jumps of length the atom will have moved, on average, a distance of: If the jump frequency is given by (in jumps per second) and time is given by , then is proportional to the square root of : Diffusion in polycrystalline materials can involve short circuit diffusion mechanisms. For example, along the grain boundaries and certain crystalline defects such as dislocations there is more open space, thereby allowing for a lower activation energy for diffusion. Atomic diffusion in polycrystalline materials is therefore often modeled using an effective diffusion coefficient, which is a combination of lattice and grain boundary diffusion coefficients. In general, surface diffusion occurs much faster than grain boundary diffusion, and grain boundary diffusion occurs much faster than lattice diffusion. See also Kirkendall effect Mass diffusivity References External links Classical and nanoscale diffusion (with figures and animations) Diffusion Crystallographic defects
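A small simulation sketch of the random-walk picture described above, under the assumption of jumps of fixed length along randomly chosen lattice directions: the ensemble root-mean-square displacement after n jumps of length α comes out close to α√n, and hence grows as the square root of time for a fixed jump frequency. The lattice type, jump length, and ensemble size are arbitrary illustrative choices.

```python
import numpy as np

# Random-walk sketch: n jumps of fixed length alpha along random cubic-lattice
# directions give a root-mean-square displacement of about alpha*sqrt(n).
rng = np.random.default_rng(0)
alpha, n_jumps, n_atoms = 1.0, 4000, 1000

axes = rng.integers(0, 3, size=(n_atoms, n_jumps))         # which axis each jump is along
signs = rng.choice([-1.0, 1.0], size=(n_atoms, n_jumps))   # + or - along that axis
displacement = np.zeros((n_atoms, 3))
for k in range(3):
    displacement[:, k] = alpha * np.sum(signs * (axes == k), axis=1)

r_rms = np.sqrt(np.mean(np.sum(displacement**2, axis=1)))
print(r_rms, alpha * np.sqrt(n_jumps))  # the two values should nearly agree (~63.2)
```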
Atomic diffusion
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
534
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Crystallographic defects", "Materials science", "Crystallography", "Materials degradation" ]
2,684,087
https://en.wikipedia.org/wiki/Momentum%20diffusion
Momentum diffusion most commonly refers to the diffusion, or spread, of momentum between particles (atoms or molecules) of matter, often in the fluid state. This transport of momentum can occur in any direction of the fluid flow. Momentum diffusion can be attributed to either external pressure or shear stress or both. Diffusion due to pressure When pressure is applied on an incompressible fluid, the velocity of the fluid will change. The fluid accelerates or decelerates depending on the relative direction of pressure with respect to the flow direction. This is because applying pressure on the fluid has caused momentum diffusion in that direction. Understanding the exact nature of diffusion is a key aspect toward understanding momentum diffusion due to pressure. Momentum diffusion due to shear stresses A fluid flowing along a flat plate will stick to it at the point of contact and this is known as the no-slip condition. This is an outcome of the adhesive forces between the flat plate and the fluid. The presence of the wall has an effect up to a certain distance in the fluid (in the direction perpendicular to the wall area and flow) and this is known as the boundary layer. Any layer of fluid that is not in contact with the wall will be flowing with a certain velocity and will be sandwiched between two layers of fluid. Now the layer just above it (flowing with a greater velocity) will try to drag it in the direction of flow, whereas the layer just below it (flowing with a lesser velocity) will try to slow it down. The attraction between the layers of the fluid is the result of cohesive forces, and viscosity is the property that explains the nature and strength of cohesive forces within a fluid. The flowing fluid exerts a certain amount of force on the plate, trying to pull it along in the flow direction, and the flat plate exerts an equal and opposite force on the fluid (Newton's third law). Experiments on fluid flow parallel to a flat plate reveal that the force per unit area, known as the shear stress, can be expressed as the product of the viscosity and the local velocity gradient (a hedged reconstruction of the formula is sketched after this entry). Note this is valid only for one-dimensional fluid flow in rectangular coordinates. The shear stress at any layer of the fluid is proportional to the local velocity gradient, taken in the direction perpendicular to the flow and to the area of the flat plate, with the viscosity as the constant of proportionality. The units of shear stress are force per unit area (N/m², or kg·m⁻¹·s⁻² in the M.K.S. system). However, these are also the units of momentum flux (momentum per unit area per unit time). This is the precise reason why shear stress in a fluid can also be interpreted as the flux of momentum. The diffusion of momentum is in the direction of decreasing velocity. This means that momentum is being transferred from the fluid in the upper layers (which has greater momentum) toward the fluid that is close to the wall (which has lesser momentum due to its lower velocity). The phrase "momentum diffusion" can also refer to the diffusion of the probability for a single particle to have a particular momentum. In this case, it is the probability distribution function that diffuses in momentum space, rather than the (conserved) quantity of momentum that diffuses among many particles. References Diffusion Fluid dynamics
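The relation described in words above is Newton's law of viscosity; a hedged reconstruction for one-dimensional flow u(y) parallel to the plate is sketched below. The symbols τ for shear stress, μ for viscosity, and u for velocity are assumed notation rather than symbols taken from this text.

```latex
% Newton's law of viscosity for one-dimensional flow u(y) parallel to the plate
% (reconstruction with assumed symbols: tau = shear stress, mu = viscosity)
\tau_{yx} \;=\; \mu\,\frac{\partial u}{\partial y},
\qquad
[\tau] = \mathrm{N\,m^{-2}} = \mathrm{kg\,m^{-1}\,s^{-2}}
```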
Momentum diffusion
[ "Physics", "Chemistry", "Engineering" ]
653
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Chemical engineering", "Piping", "Fluid dynamics" ]
2,684,175
https://en.wikipedia.org/wiki/Photon%20diffusion
Photon diffusion is a situation where photons travel through a material without being absorbed, but rather undergoing repeated scattering events which change the direction of their path. The path of any given photon is then effectively a random walk. A large ensemble of such photons can be said to exhibit diffusion in the material, and can be described with a diffusion equation. Astrophysics In astrophysics, photon diffusion occurs inside a stellar atmosphere. To describe this phenomenon, one should develop the transfer equation in moments and use the Eddington approximation to radiative transfer (i.e. the diffusion approximation). In 3D the results are two equations for the photon energy flux: where is the opacity. By substituting the first equation into the second, one obtains the diffusion equation for the photon energy density: Medical science In medicine, the diffusion of photons can be used to create images of the body (mainly brain and breast) and has contributed much to the advance of certain fields of research, such as neuroscience. This technique is known as diffuse optical imaging. See also Diffuse reflection Diffusion damping Global dimming Medical optical imaging Optical tomography Radiative transfer equation and diffusion theory for photon transport in biological tissue References Diffusion Photonics
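A hedged sketch of the pair of relations alluded to above in the Eddington (diffusion) approximation, with U the photon energy density, F the energy flux, κ the opacity per unit mass, and ρ the mass density; factors and sign conventions vary between texts, so this is indicative rather than a quotation of the article's own equations.

```latex
% Diffusion approximation for radiative transfer (sketch; opacity/sign conventions vary)
\mathbf{F} \;=\; -\,\frac{c}{3\,\kappa\rho}\,\nabla U,
\qquad
\frac{\partial U}{\partial t} \;=\; \nabla\!\cdot\!\left(\frac{c}{3\,\kappa\rho}\,\nabla U\right)
```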
Photon diffusion
[ "Physics", "Chemistry" ]
246
[ "Transport phenomena", "Physical phenomena", "Diffusion" ]
2,684,396
https://en.wikipedia.org/wiki/Brush%20%28electric%29
A brush or carbon brush is an electrical contact, often made from specially prepared carbon, which conducts current between stationary and rotating parts (the latter most commonly being a rotating shaft) of an electrical machine. Typical applications include electric motors, alternators and electric generators. The lifespan of a carbon brush depends on how much the motor is used, and how much power is put through the motor. Etymology For certain types of electric motors or generators to function, the coils of the rotor must be connected to complete an electrical circuit. Originally this was accomplished by affixing a copper or brass commutator or 'slip ring' to the shaft, with springs pressing braided copper wire 'brushes' onto the slip rings or commutator which conduct the current. Such brushes arced and even welded as the commutator rotated, because the brush short–circuited adjacent segments. The cure was the introduction of 'high resistance brushes' made from graphite (sometimes with added copper). Although the resistance was of the order of tens of milliohms, they were high resistance enough to provide a gradual shift of current from one commutator segment to the next. Carbon brushes are available in four main grade categories: carbon graphite, electrographitic, graphite, and metal graphite. The term brush remains in use. Since the brushes wear out, they can be replaced in products intended to allow maintenance. During World War II, high–altitude aircraft generators had very rapid brush wear, requiring reformulated brush compounds for acceptable life. Metal fiber brushes are currently being developed. They may have advantages over current brush technology, but have not yet seen wide implementation. Manufacturing process Mixing components Exact composition of the brush depends on the application. Graphite/carbon powder is commonly used. Copper is used for better conductance (rare for AC applications). In order to maximize electrical conductivity and green strength, highly dendritic (electrolytic) copper powder is used. Binders, mostly phenol or other resins or pitch, are mixed in so the powder holds its shape when compacted. Other additives include metal powders, and solid lubricants like MoS2, WS2. Much know-how and research is needed in order to define a brush grade mixture for each application or motor. Compacting the mixture The brush compound is compacted in a tool consisting of upper and lower punch and die, on mechanical or hydraulic presses. In this step, depending on later processing, the copper wire (called shunt wire) can be inserted automatically through a hole in the upper punch and fixed into the pressed brush block by the powder pressed around. This operation, called "tamping", is usually performed using electrolytic copper powder, possibly with silver coating for some high performance applications. After this process, the brush is still very fragile and in professional jargon called a 'green brush'. Firing of green brushes Next follows heat treatment of the "green brushes" under artificial atmosphere (usually hydrogen and nitrogen). Temperatures range up to 1200 °C. This process is called sintering or baking. During sintering, the binders either burn off or carbonize and form a crystalline structure between the carbon, copper and other additives. Baking is followed by graphitization (heat treatment). The heat treatment is transformed by a temperature curve exactly defined for each material mixture. 
Besides the mixture composition, the used temperature curve is the second big “secret” of each brush manufacturer. After the heat treatment, the brush structure is modified in a way which makes copying of the brush nearly impossible for competing companies. Secondary operations Sintering causes the brushes to shrink and to bend. They must be ground to net shape. Some companies use additional treatments in order to make the brush more durable by methods such as impregnation of the running surface by special oils, resins and grease. Manufacturing of carbon brushes requires an in-depth knowledge of materials and experience in mixture compositions. Very small changes in brush contents by just a few percent of components by weight can significantly change the properties of brushes on their applications. There are just a handful of brush developing companies in the world, which are mostly specialized on certain types of brushes. Carbon brushes are one of the least costly parts in an electric motor. On the other hand, they usually are the key part which delivers the durability (“life-time”) and performance to the motor they are used in. Their production requires very high attention to quality control and production process control throughout all steps of the production process. Liquid metal brushes From time to time the use of liquid metals to make contacts is researched. Drawbacks to this approach include the need to contain the liquid metal (as it is usually toxic or corrosive) and power losses from induction and turbulence. See also Commutator (electric) § Brush construction Slip ring References Electrical power connectors Electric motors Scottish inventions
Brush (electric)
[ "Technology", "Engineering" ]
997
[ "Electrical engineering", "Engines", "Electric motors" ]
2,684,566
https://en.wikipedia.org/wiki/Drag%20count
A drag count is a dimensionless unit used by aerospace engineers. One drag count is equal to a drag coefficient of 0.0001. As the drag forces present on automotive vehicles are smaller than for aircraft, one drag count is commonly referred to as 0.0001 of the drag coefficient. Definition A drag count is defined in terms of the drag coefficient, which is built from the following quantities (a hedged reconstruction of the formula is given after this entry): the drag force, which is by definition the force component in the direction of the flow velocity; the mass density of the fluid; the speed of the object relative to the fluid; and the reference area. The drag coefficient is used to compare the solutions of different geometries by means of a dimensionless number. A drag count is more user-friendly than the drag coefficient, as the latter is usually much less than 1. A drag count of 200 to 400 is typical for an airplane at cruise. A reduction of one drag count on a subsonic civil transport airplane means about more in payload. Notes References See also Drag coefficient Zero-lift drag coefficient Drag (physics) Equations Force
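A hedged reconstruction of the definition, assuming the standard form of the drag coefficient built from the quantities listed above (drag force F_d, fluid density ρ, speed u, reference area A):

```latex
% Drag count reconstructed under the standard drag-coefficient definition (assumed form)
\Delta C_D \;=\; 10^{4}\, C_D
\;=\; 10^{4}\,\frac{F_d}{\tfrac{1}{2}\,\rho\, u^{2} A}
```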
Drag count
[ "Physics", "Chemistry", "Mathematics" ]
207
[ "Fluid dynamics stubs", "Drag (physics)", "Force", "Physical quantities", "Quantity", "Mass", "Mathematical objects", "Classical mechanics", "Equations", "Wikipedia categories named after physical quantities", "Matter", "Fluid dynamics" ]
25,424,454
https://en.wikipedia.org/wiki/Dentin%20phosphoprotein
Dentin phosphoprotein, or phosphophoryn, is one of three proteins formed from dentin sialophosphoprotein and is important in the regulation of mineralization of dentin, one of the main constituent materials of teeth (see dentinogenesis). Phosphophoryn is the most acidic protein ever discovered and has an isoelectric point of 1. This extreme acidity is achieved by its amino acid sequence. Many portions of its chain are repeating (aspartic acid-serine-serine) sequences. In protein chemistry, net acidity equates to negative charge. Being highly negative, dentin phosphoprotein is able to attract large amounts of calcium. In vitro studies also indicate phosphophoryn can initiate hydroxyapatite formation. References Teeth Proteins
Dentin phosphoprotein
[ "Chemistry" ]
177
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins" ]
25,425,327
https://en.wikipedia.org/wiki/SHARON%20Wastewater%20Treatment
SHARON (Single reactor system for High activity Ammonium Removal Over Nitrite) is a sewage treatment process: a partial nitrification process used for the removal of ammonia and organic nitrogen components from wastewater streams. The process results in stable nitrite formation, rather than complete oxidation to nitrate. Nitrate formation by nitrite oxidising bacteria (NOB) (such as Nitrobacter) is prevented by adjusting temperature, pH, and retention time to select for nitrifying ammonia oxidising bacteria (AOB) (such as Nitrosomonas). Denitrification of waste streams utilizing SHARON reactors can proceed with an anoxic reduction, such as anammox. Mechanism The SHARON wastewater treatment process combines two established nitrogen-removing reactions: nitrification of ammonia to nitrite by fast-growing nitrifiers, and anammox, the denitrification of nitrite to atmospheric nitrogen using ammonia as the electron donor (a sketch of the two overall reactions is given after this entry). The combination of the two processes allows for a more efficient conversion of ammonia and prevents a buildup of nitrate in the water. This combination also provides an improvement over the already established processes by having much lower energy and COD requirements. Waste entering this process must first undergo the nitrification of ammonia to nitrite so there is nitrite in high enough concentration for anammox to be fueled. This first step typically needs to convert ~50% of the ammonia in the waste stream before anammox can begin. To obtain this delay, a key technique is to exploit the different growth rates of ammonia oxidizers and nitrite oxidizers. Ammonia oxidizers have a significantly higher growth rate than nitrite oxidizers at high temperatures, so, if utilized properly, ammonia oxidizers will reach the log phase and begin the nitrification before the nitrite oxidizers become established. Once the anammox process begins, the reaction rates of nitrification and anammox must be closely controlled so as to completely remove ammonia from the stream. To maintain the balance between these two processes, the pH is controlled. The reactor works as a chemostat, making the sludge retention time (SRT) the same as the hydraulic retention time (HRT). This characteristic makes the process insensitive to suspended solids concentrations in the water. References External links M2T Technologies Anaerobic digestion
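A hedged sketch of the two overall conversions referred to above, written with the stoichiometries commonly quoted for partial nitritation and anammox; the article itself does not give these equations, so they are stated here as standard textbook forms rather than quotations.

```latex
% Partial nitritation by ammonia-oxidizing bacteria (commonly quoted stoichiometry)
\mathrm{NH_4^+ + 1.5\,O_2 \;\longrightarrow\; NO_2^- + H_2O + 2\,H^+}
% Anammox, simplified overall reaction
\mathrm{NH_4^+ + NO_2^- \;\longrightarrow\; N_2 + 2\,H_2O}
```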
SHARON Wastewater Treatment
[ "Chemistry", "Engineering" ]
514
[ "Water technology", "Anaerobic digestion", "Environmental engineering" ]
25,429,092
https://en.wikipedia.org/wiki/Endiandric%20acid%20C
Endiandric acid C, isolated from the tree Endiandra introrsa, is a well characterized chemical compound. Endiandric acid C is reported to have better antibiotic activity than ampicillin. This genus of trees is in the family Lauraceae. These trees are found in the north-eastern Australian rainforests and other tropical and subtropical regions. However, they are also found in southern Canada and in Chile. Endiandric acid C is also isolated from the species E. xanthocarpa. Endiandric acids are also found in Beilschmiedia trees, which were formerly categorized under the genus Endiandra but were moved to their own genus; they are found in cold, high-latitude areas, and even in New Zealand. Other endiandric acids are found in B. oligandra and B. anacardioides, which are found in the Western Province of Cameroon. Bioactivity This compound has the best antibacterial activity of the endiandric acid A–G compounds. Endiandric acid C was tested against five strains of bacteria, which included Bacillus subtilis, Micrococcus luteus, Streptococcus faecalis, Pseudomonas palida, and Escherichia coli, by examining zone of inhibition and minimum inhibitory concentration, which was found to range between 0.24 μg/mL and 500 μg/mL. Endiandric acid C has also been used to treat uterine tumors, rubella, female genital infections, and rheumatism. Biosynthesis Many biochemists, when examining K. C. Nicolaou's biomimetic synthesis of the endiandric acid cascade, predicted that enzymes aided this reaction in the biosynthesis. The biomimetic series determined that this process took place synthetically through a series of Diels-Alder cyclization reactions and therefore led researchers to believe that a Diels-Alderase assisted the formation of endiandric acid C. Although it has since been discovered that many famous cyclization reactions, like that of lovastatin, do result from a Diels-Alderase, it has been determined that the endiandric acid cascade does not involve enzymes but rather spontaneously undergoes ring formation from a derivative of bisnoryangonin 5, which results from both the shikimate and acetate pathways. 4-Hydroxycinnamoyl-CoA, compound 2, is the precursor that comes from the shikimate pathway. Two units of malonyl-CoA are then added through the acetate pathway to give compound 3. Compound 3 is then reduced to the di-enol form, which tautomerizes to give bisnoryangonin 5. A small amount of compound 5 can be isolated; however, S-adenosyl methionine methylates most of it to give yangonin 6. It has been proposed that a bisnoryangonin derivative, 7, is then reduced by a dehydrogenase to give the polyene precursor 8, which goes through spontaneous 8π conrotatory, 6π disrotatory, and [4+2] cyclization reactions to form endiandric acid C. This proposal is supported by the fact that endiandric acids naturally occur as racemic mixtures and not in the enantiomerically pure form that would be expected if enzymes mediated this process. The Diels-Alder reaction itself is a powerful reaction that can give cyclic compounds with many stereogenic centers. Biomimetic total synthesis K. C. Nicolaou's group successfully synthesized endiandric acid, 1, in 1982 as a test of Black's biosynthetic conjecture, using a biomimetic strategy involving a series of stereocontrolled electrocyclic reactions. Specifically, they observed that the natural products endiandric acids A and C could have arisen from a common precursor, via slightly different 6π [4s+2s] cycloaddition (Diels-Alder) reactions.
This key precursor was in turn accessible biosynthetically via two further thermally allowed sequential 6π electron and 8π electron electrocyclizations. The Nicolaou group therefore sought to synthesize endiandric acid C from an acyclic symmetric diacetylenic diol precursor, 14 (as shown); they began with "mild hydrogenation" in the presence of Lindlar catalyst and quinoline, anticipating tetraene diol 15, cyclooctatriene 16, or the fully cyclized bicyclo[4.2.0]octadiene (bicyclic diol) 17. Remarkably, following this 3-6 hour, 25 °C process, a 45-55% yield of bicyclic diol 17 could be isolated. Hence, it was not necessary to do anything specific to promote the required sequence of 8π conrotatory and 6π disrotatory cyclizations (further highlighted in supplementary image); they occurred spontaneously on generation of tetraene-diol 15. Protection of a single alcohol moiety (as TBDPS) was accomplished using the silyl chloride via the corresponding tricyclic iodoether intermediate (not shown), with the internally masked remaining hydroxyl group being released on treatment with zinc dust in acetic acid (giving 18 in 70-80% yield). Bromination of the alcohol under Appel conditions followed by its displacement on treatment with sodium cyanide in HMPA gave nitrile 20, the key intermediate in all of this group's endiandric acid syntheses. The title compound was then pursued via DIBAL reduction of the nitrile at low temperature, followed by mild acidic hydrolysis to release aldehyde 21. A series of 7 further steps—condensation to form trans-butenoate 22, thermal intramolecular Diels-Alder reaction to create the tetracyclic endiandric core structure 23, desilylation to unmask alcohol 24, bromination and nitrile formation (as described above) to give 25 and 26, respectively, then hydrolysis of the methyl ester and repeat of the earlier DIBAL/acid hydrolysis sequence—generated the endiandric core structure with pendant aldehyde, 28, that was poised for the final step. Its treatment with diethyl cinnamylphosphonate and LDA at low temperature in THF (generating en route the anionic olefination reagent) formed the desired diene in good yield in a "geometrically controlled manner", thus providing the desired endiandric acid C product. References Further reading Bandaranayake, W. M.; Banfield, J. E.; Black, D. S. C.; Fallon, G. D.; Gatehouse, B. M. Constituents of Endiandra-Spp 1. Endiandric-Acid a Novel Carboxylic-Acid from Endiandra-Introrsa Lauraceae and a Derived Lactone. Aust. J. of Chem. 1981, 34, 1655-1667. Bandaranayake, W. M.; Banfield, J. E.; Black, D. S. C.; Fallon, G. D.; Gatehouse, B. M. Constituents of Endiandra Species. Iii. 4-[(E,E)-5'-Phenylpenta-2',4'-Dien-1'-Yl]Tetracyclo[5.4.0.02.5.03.9]Undec-10-Ene-8-Carboxylic Acid from Endiandra Introrsa (Lauraceae). Aust. J. of Chem. 1982, 35, 567-579. Banfield, J. E.; Black, D. S. C.; Collins, D. J.; Hyland, B. P. M.; Lee, J. J.; Pranowo, S. R. Constituents of Some Species of Beilschmiedia and Endiandra (Lauraceae): New Endiandric Acid and Benzopyran Derivatives Isolated from B. Oligandra. Aust. J. of Chem. 1994, 47, 587-607. Chouna, J. R.; Nkeng-Efouet, P. A.; Lenta, B. N.; Devkota, K. P.; Neumann, B.; Stammler, H.-G.; Kimbu, S. F.; Sewald, N. Antibacterial Endiandric Acid Derivatives from Beilschmiedia Anacardioides. Phytochemistry. 2009, 70, 684-688. Gravel, E.; Poupon, E. Biogenesis and Biomimetic Chemistry: Can Complex Natural Products Be Assembled Spontaneously? Eur. J. Org. Chem. 2008, 27-42. Miller, A. K.; Trauner, D.
Mapping the Chemistry of Highly Unsaturated Pyrone Polyketides. Synlett 2006, 2295-2316. Milne, B. F.; Long, P. F.; Starcevic, A.; Hranueli, D.; Jaspars, M. Spontaneity in the Patellamide Biosynthetic Pathway. Org. Biomol. Chem. 2006, 4, 631-638. Nicolaou, K. C.; Petasis, N. A.; Zipkin, R. E.; Uenishi, J. The Endiandric Acid Cascade. Electrocyclizations in Organic Synthesis. 1. Stepwise, Stereocontrolled Total Synthesis of Endiandric Acids A and B. J. Am. Chem. Soc. 1982, 104, 5555-5557. Nicolaou, K. C.; Petasis, N. A.; Uenishi, J.; Zipkin, R. E. The Endiandric Acid Cascade. Electrocyclizations in Organic Synthesis. 2. Stepwise, Stereocontrolled Total Synthesis of Endiandric Acids C-G. J. Am. Chem. Soc. 1982, 104, 5557-5558. Nicolaou, K. C.; Zipkin, R. E.; Petasis, N. A. The Endiandric Acid Cascade. Electrocyclizations in Organic Synthesis. 3. "Biomimetic" Approach to Endiandric Acids A-G. Synthesis of Precursors. J. Am. Chem. Soc. 1982, 104, 5558-5560. Oikawa, H. Involvement of the Diels-Alderases in the Biosynthesis of Natural Products. Bull. Chem. Soc. Jpn. 2005, 78, 537-554. Antibiotics Carboxylic acids Total synthesis
Endiandric acid C
[ "Chemistry", "Biology" ]
2,461
[ "Biotechnology products", "Carboxylic acids", "Functional groups", "Antibiotics", "Chemical synthesis", "Total synthesis", "Biocides" ]
25,430,394
https://en.wikipedia.org/wiki/Fina%20%28architecture%29
In Mediterranean architecture, the fina is a physical space used in urban design, corresponding to the approximately 1-meter-wide public space alongside buildings. It is used to describe the placement of design items within traditional architectural elements. It also mandates public rules of behaviour for the neighbours concerning the usage and maintenance of finas in their buildings. For instance, a person has the right to use the part of the fina immediately in front of his home for the loading or unloading of his vehicle but he has no right to block it. Fina is identified as a convention in ancient Levant architecture that denotes a zone along the street wall of a building where balconies, downspouts, and other protruding features were allowed as long as they did not impede the passage of public transport and other users of the street. In Islamic architecture, fina or Al-Fina, which emerged in old Islamic cities that were organized by Islamic law, refers to a patio – an open-sky courtyard of a central building. It serves to illuminate and ventilate rooms and spaces inside buildings. This particular architectural concept is still used in urban spaces in the Middle East such as Egypt as a form of environmental organizer. This in-between space also influences the urban fabric and character of the city. Fina has two types of uses: temporary and permanent. Trees, flower pots, window gratings and other decorations constitute the temporary uses of fina. Its permanent use are represented by built-in structures such as stairs, benches, and water-related infrastructure, among others. These also include the sabat, which is a structure built between the opposite buildings on both sides of a narrow street. It is constituted by rooms bridging the street. It provides a passageway to respect the right of way, and the supporting pillars of the resulting arch must be within the fina. References Further reading Arabic-Islamic cities: building and planning principles. BS Hakim - 1986 - Kegan Paul Intl Mediterranean urban and building codes: origins, content, impact, and lessons, Urban Design International Learning from Traditional Mediterranean Codes by Besim Hakim. The Town Paper -Council report III/IV - April 2003 Generative processes for revitalising historic towns or heritage districts by Besim Hakim. INTBAU - International Network for Traditional Building, Architecture & Urbanism. Urban design
Fina (architecture)
[ "Engineering" ]
481
[ "Architecture stubs", "Architecture" ]
3,642,362
https://en.wikipedia.org/wiki/Electroless%20deposition
Electroless deposition (ED) or electroless plating is an autocatalytic process through which metals and metal alloys are deposited onto conductive and nonconductive surfaces. These nonconductive surfaces include plastics, ceramics, and glass, which can then become decorative, anti-corrosive, and conductive depending on their final functions. Electroplating, unlike electroless deposition, only deposits on other conductive or semi-conductive materials when an external current is applied. Electroless deposition deposits metals onto 2D and 3D structures such as screws, nanofibers, and carbon nanotubes, unlike other plating methods such as Physical Vapor Deposition (PVD), Chemical Vapor Deposition (CVD), and electroplating, which are limited to 2D surfaces. Commonly, the surface of the substrate is characterized via pXRD, SEM-EDS, and XPS, which relay set parameters based on the coating's final functionality. These parameters are referred to as key performance indicators, crucial for a researcher's or company's purpose. Electroless deposition continues to rise in importance within the microelectronics, oil and gas, and aerospace industries. History Electroless deposition was serendipitously discovered by Charles Wurtz in 1846. Wurtz noticed that the nickel-phosphorus bath, when left sitting on the benchtop, spontaneously decomposed and formed a black powder. Seventy years later François Auguste Roux rediscovered the electroless deposition process and patented it in the United States as the 'Process of producing metallic deposits'. Roux deposited nickel-phosphorus (Ni-P) electrolessly onto a substrate, but his invention went uncommercialized. In 1946 the process was re-discovered by Abner Brenner and Grace E. Riddell while working at the National Bureau of Standards. They presented their discovery at the 1946 Convention of the American Electroplaters' Society (AES); a year later, at the same conference, they proposed the term "electroless" for the process and described optimized bath formulations, which resulted in a patent. However, neither Brenner nor Riddell benefited financially from the filed patent. The first commercial application of Ni-P deposition was by the Leonhardt Plating Company in Cincinnati, followed by the Kannigen Co. Ltd in Japan, which revolutionized the industry. The Leonhardt commercialization of electroless deposition was a catalyst for the design and patenting of several deposition baths, including plating of metals such as Pt, Sn, Ag, and their alloys. An elementary electroless deposition process is Tollens' reaction, which is often used in scientific demonstrations. Tollens' reaction deposits a uniform metallic silver layer via ED on glass, forming a reflective surface, hence its description as 'silvering' mirrors. This reaction is used to test for aldehydes in a basic solution of silver nitrate. It is often used as a crude method in chemistry demonstrations for the oxidation of an aldehyde to a carboxylic acid, and the reduction of the silver cation into elemental silver (reflective surface). Preparation and Bath Stability Electroless deposition is an important process in the electronic industry for metallization of substrates. Other substrate metallization methods include physical vapor deposition (PVD), chemical vapor deposition (CVD), and electroplating, which produce thin metal films but require high temperature, vacuum, and a power source, respectively.
Electroless deposition is advantageous in comparison to PVD, CVD, and electroplating deposition methods because it can be performed at ambient conditions. The plating methods for Ni-P, Ni-Au, Ni-B, and Cu baths are distinct; however, the processes involve the same general approach. The electroless deposition process is defined by four steps: Pretreatment or functionalization of the substrate cleans the surface of the substrate to remove any contaminants that would otherwise affect nanoparticle size and lead to poor plating. Pretreatment determines the porosity of the elemental metal deposit and the initiation site of elemental deposition. Sensitization introduces an activator ion that can reduce the active metal in the deposition bath and serves as a catalytic site for the templation of the active metal. Activation accelerates the deposition by acting as a catalytic seed on the substrate surface for the final electroless deposition bath metal. Electroless deposition is the process by which the metal cation is reduced to elemental metal by a powerful reducing agent. The electroless deposition bath contains the following reagents, which affect side-product formation, bath lifetime, and plating rate. A source of metal cation, which is provided by a metal salt (e.g. Cu2+ from CuSO4 and Ni2+ from NiCl2) Reducing agent, which donates electrons to the metal cation (e.g. CH2O, formaldehyde, for Cu2+ and NaH2PO2, sodium hypophosphite, for Ni2+) Suitable complexing agent, which provides buffering action by preventing drastic falls and rises of pH, prevents nickel salt precipitation, and reduces the concentration of free nickel ions in solution (e.g. tartrate, EDTA, acetate, etc.) Stabilizer, which controls the plating rate and prevents decomposition of the bath. Deposition from a plating bath is preceded by hydrogen gas evolution, and stabilizers are added to prevent random deposition from the ED bath. They are meticulously chosen to prevent loss of hydrogenation and dehydrogenation catalyst activity. Stabilizers fine-tune the autocatalytic nature of the bath while controlling the heterogeneous deposition of nanoparticles. Buffering agent for pH stability. Deposition baths produce hydronium ions, which cause a decrease in pH. If a bath becomes too acidic, hydrogen is reduced at a higher rate than the metal, which reduces the wt% of elemental metal produced; the metal is hydrolyzed and falls out of solution. The relationship between pH and standard potential (E0) enters through the activity of the hydronium ion in the Nernst equation. The potential decreases as the solution becomes more basic, a relationship described by the Pourbaix diagram. All the above parameters are responsible for controlling side-product release. Side-product formation negatively affects the bath by poisoning the catalytic sites and disrupting the morphology of the metal nanoparticles. Process Fundamental principle The electroless deposition process is based on redox chemistry, in which electrons are released from a reducing agent and a metal cation is reduced to elemental metal. Equations (1) and (2) show the simplified ED process in which a reducing agent releases electrons and the metal cation is reduced, respectively. The electroless deposition and electroplating baths actively perform cathodic and anodic reactions at the surface of the substrate. The standard electrode potentials of the metal and reducing agent are important as a driving force for electron exchange. The standard potential is defined as a measure of a compound's tendency to be reduced. 
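In generic form, with Red denoting the reducing agent, Ox its oxidized form, and M a metal of charge n+ (an illustrative generalization only; the actual species depend on the bath chemistry), the two half-reactions referred to as equations (1) and (2) can be sketched as:

$$\mathrm{Red} \longrightarrow \mathrm{Ox} + n\,e^{-} \qquad (1)$$

$$\mathrm{M}^{n+} + n\,e^{-} \longrightarrow \mathrm{M}^{0} \qquad (2)$$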
Examples are shown in Table 1, in which Zn, with a lower standard potential (−0.7618 V), acts as a reducing agent for copper (+0.3419 V). The calculated potential for the reaction of the copper salt with zinc metal is ~1.1 V, meaning the reaction is spontaneous. Since electroless deposition also uses the principles of standard electrode potentials, we are also able to calculate the potential, E, of metal ions in solution as governed by the Nernst equation (3), E = E0 − (RT/nF) ln Q, where E is the potential of the reaction, E0 is the standard reduction potential of the redox reaction, and Q is the concentration of the products divided by the concentration of the reactants. Electrons for ED are produced by powerful reducing agents in the deposition bath, e.g. formaldehyde, sodium borohydride, glucose, sodium hypophosphite, hydrogen peroxide, and ascorbic acid. These reducing agents have negative standard potentials that drive the deposition process. The standard potentials of the reducing agent and metal salt are not the only determinants of the redox reaction for electroless deposition. Conventional deposition of copper nanoparticles uses formaldehyde as a reducing agent, but the E0 of formaldehyde is pH dependent: at pH 0 the E0 of formaldehyde is 0.056 V, while at pH 14 the E0 is −1.070 V. Formaldehyde at pH 14 is a more suitable reducing agent than at pH 0 because of its more negative standard potential, which makes it a powerful reducing agent. The potential dependence on pH is described by the Pourbaix diagram. Four classic deposition mechanisms The first mechanism for electroless deposition, the atomic hydrogen mechanism, was proposed by Brenner and Riddell for a nickel deposition bath. This led the way for other scientists to propose several other mechanisms. The four examples of classical electroless deposition mechanisms for Ni-P codeposition are: (1) the atomic hydrogen mechanism, (2) the hydride transfer mechanism, (3) the electrochemical mechanism, and (4) the metal hydroxide mechanism. The classic mechanisms focused on the formation of Ni-P nanoparticles on a substrate. Electroless nickel plating uses nickel salts as the metal cation source and either hypophosphite (H2PO2−) (or a borohydride-like compound) as a reducer. A side reaction forms elemental phosphorus (or boron), which is incorporated in the coating. The classical deposition methods follow these steps: Diffusion of reactants (Ni2+, H2PO2−) to the surface Adsorption of reactants at the surface Chemical reaction at the surface Desorption of products (H2PO3−, H2, H+, H−) from the surface Diffusion of the product from the surface or adhesion of the product onto the surface Atomic hydrogen mechanism Brenner and Riddell proposed the atomic hydrogen mechanism for the evolution of Ni and H2 from a Ni salt, reducing agent, complexing agent, and stabilizers. They used a nickel chloride salt (NiCl2), a sodium hypophosphite (NaH2PO2) reducing agent, commonly used complexing agents (e.g. citrate, EDTA, and tridentate ligands), and stabilizers such as cetyltrimethylammonium bromide (CTAB). The redox reactions [4]–[6] propose that adsorbed hydrogen (Had) reduces Ni2+ at the catalytic surface, with a secondary reaction in which H2 gas evolves. In 1946 it was discovered that a Ni-P alloy and hydrogen gas were formed instead, due to a secondary reaction of hypophosphite with atomic hydrogen to form elemental phosphorus. The standard potentials for equations [4], [5], and [6] are 0.50 V, −0.25 V, and 0 V, respectively. The potential of the bath overall is 0.25 V. 
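As a rough illustration of this bookkeeping (the values below are the Zn/Cu figures quoted above; the function names and the chosen reaction quotient are purely illustrative, not from any standard library), a minimal Python sketch of the standard cell potential and the Nernst correction is:

```python
import math

# Standard reduction potentials from the Zn/Cu example above (V vs. SHE)
E0_CU = 0.3419    # Cu2+ + 2e- -> Cu
E0_ZN = -0.7618   # Zn2+ + 2e- -> Zn

R = 8.314         # gas constant, J/(mol K)
F = 96485.0       # Faraday constant, C/mol
T = 298.15        # temperature, K

def cell_potential(e0_reduction, e0_oxidation):
    """Standard cell potential: E0_cell = E0(red) - E0(ox)."""
    return e0_reduction - e0_oxidation

def nernst(e0_cell, n, Q):
    """Nernst equation (3): E = E0 - (RT/nF) ln Q."""
    return e0_cell - (R * T / (n * F)) * math.log(Q)

e0 = cell_potential(E0_CU, E0_ZN)   # ~1.10 V, so the reaction is spontaneous
e = nernst(e0, n=2, Q=0.01)         # illustrative reaction quotient only
print(f"E0_cell = {e0:.3f} V, E at Q=0.01: {e:.3f} V")
```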
NB: the potential for equation [4] is +0.50 V because the reaction has been reversed to illustrate oxidation. Calculation: E = Ered − Eox = (−0.25 V) − (−0.50 V) = 0.25 V (spontaneous reaction) However, the atomic hydrogen mechanism did not account for the co-deposition of Ni-P. Hydride transfer mechanism The hydride transfer mechanism was proposed by Hersh in 1955 and was expanded in 1964 by R.M. Lukes to explain the deposition of elemental P. Hydride transfer in a basic environment was purported [7] to form the hydride ion (H−), which reduces Ni2+ to Ni0 [8] and combines with water to form H2 gas [9]. Lukes reasoned that the hydride ion came from the hypophosphite, which thus accounts for the Ni-P codeposition through a secondary reaction. The standard potentials for equations [7], [8], and [9] are 1.65 V, −0.25 V, and 0 V, respectively. NB: the potential for equation [7] is +1.65 V because the reaction has been reversed to illustrate oxidation. Calculation: E = Ered − Eox = (−0.25 V) − (−1.65 V) = 1.40 V (spontaneous reaction) Electrochemical mechanism The electrochemical mechanism was also proposed by Brenner and Riddell but was later modified by others, including the scientists Machu and El-Gendi. They proposed that an electrolytic reaction occurs at the surface of the substrate, and that H2 [11] and P [13] are byproducts of the Ni2+ ion reduction [10][11]. The anodic reaction [10] has a reduction potential of 0.50 V. The cathodic reactions [10], [11], [12], and [13] have reduction potentials of 0.50 V, −0.25 V, 0 V, and 0.50 V, respectively. The potential of the reaction is 1.25 V (spontaneous reaction). NB: the potentials for equations [10] and [13] are +0.50 V because the reactions have been reversed to illustrate oxidation. Calculation, 1° reaction of [10] and [11]: E = Ered − Eox = (−0.25 V) − (−0.50 V) = 0.25 V (spontaneous reaction) Calculation, 2° reaction of [11] and [13]: E = Ered − Eox = (−0.25 V + 0.50 V) − (−0.50 V) = 0.75 V (spontaneous reaction) The 1° and 2° reactions have positive potentials and are therefore competing reactions within the same bath. Metal hydroxide mechanism In the metal hydroxide mechanism, proposed in 1968, solvated Ni ions at the catalytic surface ionize water and form a hydroxide-coordinated Ni ion. The hydrolyzed Ni2+ ion catalyzes the production of Ni, P, and H2. Water is ionized at the Ni surface [14], and Ni2+ ions coordinate with hydroxide ions [15]. The coordinated Ni2+ is reduced [16] and NiOH+ab is adsorbed on the substrate surface. At the surface, H2PO2− reduces NiOH+ab to elemental Ni0 [17]. The released elemental H recombines to form hydrogen gas [18], and elemental Ni catalyzes the production of P [19]. The deposited Ni acts as a catalyst due to continued reduction by H2PO2− [17]. Cavallotti and Salvago also proposed that the NiOH+ab [20] and water combination oxidizes to Ni2+ and elemental H. The NiOH+ab participates in competing reactions [21a] (which refers to reaction [17]) and [21b] to form elemental Ni and hydrolyzed Ni, respectively. Finally, H2PO2− is oxidized [22] and the elemental H from [21a]/[21b] recombines so that H2 evolves in both reactions. The overall reaction is shown in equation [23]. NB: the potentials for equations [16], [19], [21a], [21b], and [22] are +0.50 V because the reactions have been reversed to illustrate oxidation. 
Calculation, 1° reaction of [17]: E = Ered − Eox = (−0.25 V) − (−0.50 V) = 0.25 V (spontaneous reaction) Calculation, 2° reaction of [19]: E = Ered − Eox = (0.50 V) − (0.25 V) = 0.25 V (spontaneous reaction) Overall reaction [23], including the reduction of Ni2+: E = Ered − Eox = (−0.25 V + 0.50 V) − (−0.50 V) = 0.75 V (spontaneous reaction) Industrial applications Electroless deposition changes the mechanical properties, magnetism, internal stress, conductivity, and brightness of the substrate. Since its first industrial application by the Leonhardt Plating Company, electroless deposition has flourished into the metallization of plastics and textiles, the prevention of corrosion, and jewelry. It also serves the microelectronics industry, including the manufacture of circuit boards, semiconductor devices, batteries, and sensors. Metallization of plastics by electroless deposition Typical metallization of plastics includes nickel-phosphorus, nickel-gold, nickel-boron, palladium, copper, and silver. Metallized plastics are used to reduce the weight of metal products and to reduce the cost associated with the use of precious metals. Electroless nickel plating is used in a variety of industries, including the aviation, construction, textile, and oil and gas industries. Electromagnetic interference shielding Electromagnetic interference shielding (EMI shielding) refers to the process by which devices are protected from interference from electromagnetic radiation. The interference negatively affects the function of the devices; EMI sources include radio waves, cell phones, and TV receivers. The Federal Aviation Administration and the Federal Communications Commission prohibit the use of cellphones after an airplane is airborne to avoid interference with navigation. Elemental Ni, Cu, and Ni/Cu coatings on planes absorb noise signals in the 14 Hz to 1 GHz range. Oil and gas production Elemental nickel coating prevents corrosion of the steel tubulars used for drilling. At the core of this industry, nickel coats pressure vessels, compressor blades, reactors, turbine blades, and valves. See also Electroless copper deposition Electroless nickel-boron deposition Nanomaterials Nanotechnology References Metal plating Corrosion prevention Printed circuit board manufacturing
Electroless deposition
[ "Chemistry", "Engineering" ]
3,653
[ "Corrosion prevention", "Metallurgical processes", "Coatings", "Corrosion", "Printed circuit board manufacturing", "Electronic engineering", "Electrical engineering", "Metal plating" ]
3,643,964
https://en.wikipedia.org/wiki/Chemical%20field-effect%20transistor
A ChemFET is a chemically sensitive field-effect transistor, that is, a field-effect transistor used as a sensor for measuring chemical concentrations in solution. When the target analyte concentration changes, the current through the transistor will change accordingly. Here, the analyte solution separates the source and gate electrodes. A concentration gradient between the solution and the gate electrode arises due to a semi-permeable membrane on the FET surface containing receptor moieties that preferentially bind the target analyte. This concentration gradient of charged analyte ions creates a chemical potential between the source and gate, which is in turn measured by the FET. Construction A ChemFET's source and drain are constructed as for an ISFET, with the gate electrode separated from the source electrode by a solution. The gate electrode's interface with the solution is a semi-permeable membrane containing the receptors, and a gap to allow the substance under test to come in contact with the sensitive receptor moieties. A ChemFET's threshold voltage depends on the concentration gradient between the analyte in solution and the analyte in contact with its receptor-embedded semi-permeable barrier. Often, ionophores are used to facilitate analyte ion mobility through the substrate to the receptor. For example, when targeting anions, quaternary ammonium salts (such as tetraoctylammonium bromide) are used to give the membrane a cationic character, facilitating anion mobility through the substrate to the receptor moieties. Applications ChemFETs can be utilized in either the liquid or the gas phase to detect a target analyte, requiring reversible binding of the analyte with a receptor located in the gate electrode membrane. There is a wide range of applications of ChemFETs, most notably anion- or cation-selective sensing. More work has been done with cation-sensing ChemFETs than anion-sensing ChemFETs. Anion sensing is more complicated than cation sensing in ChemFETs due to many factors, including the size, shape, geometry, polarity, and pH of the species of interest. Practical limitations The body of a ChemFET is generally found to be robust. However, the unavoidable requirement for a separate reference electrode makes the system more bulky overall and potentially more fragile. History Dutch engineer Piet Bergveld studied the MOSFET and realized it could be adapted into a sensor for chemical and biological applications. In 1970, Bergveld invented the ion-sensitive field-effect transistor (ISFET). He described the ISFET as "a special type of MOSFET with a gate at a certain distance". In the ISFET structure, the metal gate of a standard MOSFET is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. ChemFETs are based on a modified ISFET, a concept developed by Bergveld in the 1970s. There is some confusion as to the relationship between ChemFETs and ISFETs. Whereas an ISFET only detects ions, a ChemFET detects any chemical (including ions). See also References Acid–base chemistry Biosensors Electrochemistry Electrodes Field-effect transistors Measuring instruments MOSFETs Sensors Transistor types
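As a rough illustration of the idealized (Nernstian) limit of this threshold-voltage dependence, rather than a model of any specific ChemFET, the following sketch computes the textbook shift of about 59 mV per decade of analyte activity for a monovalent ion at room temperature; the function name is purely illustrative.

```python
import math

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol
T = 298.15     # temperature, K

def nernstian_shift(activity_ratio, z=1):
    """Ideal Nernstian change in threshold voltage (V) for a z-valent ion
    when the analyte activity changes by the given ratio."""
    return (R * T) / (z * F) * math.log(activity_ratio)

# One decade change in activity for a monovalent ion -> roughly 59 mV
print(f"{nernstian_shift(10.0) * 1e3:.1f} mV per decade")
```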
Chemical field-effect transistor
[ "Chemistry", "Technology", "Engineering", "Biology" ]
701
[ "Acid–base chemistry", "Electrodes", "Measuring instruments", "Biosensors", "Equilibrium chemistry", "Electrochemistry", "nan", "Sensors" ]
3,644,058
https://en.wikipedia.org/wiki/Sommerfeld%20identity
The Sommerfeld identity is a mathematical identity, due to Arnold Sommerfeld, used in the theory of propagation of waves: $\frac{e^{ikR}}{R} = \int_0^\infty I_0(\lambda r)\, e^{-\mu |z|}\, \frac{\lambda\, d\lambda}{\mu}$, where $\mu = \sqrt{\lambda^2 - k^2}$ is to be taken with positive real part, to ensure the convergence of the integral and its vanishing in the limit $z \to \pm\infty$, and $R^2 = r^2 + z^2$. Here, $R$ is the distance from the origin while $r$ is the distance from the central axis of a cylinder as in the cylindrical coordinate system. Here the notation for Bessel functions follows the German convention, to be consistent with the original notation used by Sommerfeld. The function $I_0$ is the zeroth-order Bessel function of the first kind, better known by the notation $J_0$ in English literature. This identity is known as the Sommerfeld identity. In alternative notation, the Sommerfeld identity can be more easily seen as an expansion of a spherical wave in terms of cylindrically-symmetric waves: $\frac{e^{ik_0 r}}{r} = i \int_0^\infty \frac{k_\rho}{k_z}\, J_0(k_\rho \rho)\, e^{i k_z |z|}\, dk_\rho$, where $k_z = \left(k_0^2 - k_\rho^2\right)^{1/2}$. The notation used here is different from that above: $r$ is now the distance from the origin and $\rho$ is the radial distance in a cylindrical coordinate system defined as $\rho^2 = x^2 + y^2$. The physical interpretation is that a spherical wave can be expanded into a summation of cylindrical waves in the $\rho$ direction, multiplied by a two-sided plane wave in the $z$ direction; see the Jacobi-Anger expansion. The summation has to be taken over all the wavenumbers $k_\rho$. The Sommerfeld identity is closely related to the two-dimensional Fourier transform with cylindrical symmetry, i.e., the Hankel transform. It is found by transforming the spherical wave along the in-plane coordinates ($x$, $y$, or $\rho$, $\phi$) but not transforming along the height coordinate $z$. Notes References Mathematical identities Wave mechanics
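The identity lends itself to a direct numerical check. The following sketch uses illustrative parameter values only; a small positive imaginary part is given to k so that the integral converges quickly and the principal square root keeps Re(mu) >= 0. It compares the spherical wave on the left with the cylindrical-wave integral on the right.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Illustrative values: a slightly lossy medium so the integrand decays nicely
k = 2.0 + 0.1j
r, z = 1.3, 0.7
R = np.hypot(r, z)

lhs = np.exp(1j * k * R) / R          # spherical wave  e^{ikR}/R

def integrand(lam):
    mu = np.sqrt(lam**2 - k**2)       # principal branch -> Re(mu) >= 0
    return j0(lam * r) * np.exp(-mu * abs(z)) * lam / mu

# Integrate real and imaginary parts separately; the e^{-mu|z|} factor
# makes the tail negligible well before lam = 60.
re, _ = quad(lambda t: integrand(t).real, 0, 60, limit=400)
im, _ = quad(lambda t: integrand(t).imag, 0, 60, limit=400)

print(lhs, re + 1j * im)              # the two values should agree closely
```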
Sommerfeld identity
[ "Physics", "Mathematics" ]
318
[ "Physical phenomena", "Mathematical theorems", "Classical mechanics", "Waves", "Wave mechanics", "Mathematical identities", "Mathematical problems", "Algebra" ]
6,470,547
https://en.wikipedia.org/wiki/Targeted%20drug%20delivery
Targeted drug delivery, sometimes called smart drug delivery, is a method of delivering medication to a patient in a manner that increases the concentration of the medication in some parts of the body relative to others. This means of delivery is largely founded on nanomedicine, which plans to employ nanoparticle-mediated drug delivery in order to combat the downfalls of conventional drug delivery. These nanoparticles would be loaded with drugs and targeted to specific parts of the body where there is solely diseased tissue, thereby avoiding interaction with healthy tissue. The goal of a targeted drug delivery system is to prolong, localize, target and have a protected drug interaction with the diseased tissue. The conventional drug delivery system is the absorption of the drug across a biological membrane, whereas the targeted release system releases the drug in a dosage form. The advantages of the targeted release system are a reduction in the frequency of the dosages taken by the patient, a more uniform effect of the drug, a reduction in drug side-effects, and reduced fluctuation in circulating drug levels. The disadvantages of the system are its high cost, which makes productivity more difficult, and the reduced ability to adjust the dosages. Targeted drug delivery systems have been developed to optimize regenerative techniques. The system is based on a method that delivers a certain amount of a therapeutic agent for a prolonged period of time to a targeted diseased area within the body. This helps maintain the required plasma and tissue drug levels in the body, thereby preventing any damage to the healthy tissue via the drug. The drug delivery system is highly integrated and requires specialists from various disciplines, such as chemists, biologists, and engineers, to join forces to optimize this system. Background In traditional drug delivery systems such as oral ingestion or intravascular injection, the medication is distributed throughout the body through the systemic blood circulation. For most therapeutic agents, only a small portion of the medication reaches the organ to be affected, such as in chemotherapy where roughly 99% of the drugs administered do not reach the tumor site. Targeted drug delivery seeks to concentrate the medication in the tissues of interest while reducing the relative concentration of the medication in the remaining tissues. For example, by avoiding the host's defense mechanisms and inhibiting non-specific distribution in the liver and spleen, a system can reach the intended site of action in higher concentrations. Targeted delivery is believed to improve efficacy while reducing side-effects. When implementing a targeted release system, the following design criteria for the system must be taken into account: the drug properties, side-effects of the drugs, the route taken for the delivery of the drug, the targeted site, and the disease. Increasing development of novel treatments requires a controlled microenvironment that is accomplished only through the implementation of therapeutic agents whose side-effects can be avoided with targeted drug delivery. Advances in the field of targeted drug delivery to cardiac tissue will be an integral component to regenerate cardiac tissue. There are two kinds of targeted drug delivery: active targeted drug delivery, such as some antibody medications, and passive targeted drug delivery, such as the enhanced permeability and retention effect (EPR effect). 
Targeting methods This ability of nanoparticles to concentrate in areas of solely diseased tissue is accomplished through either one or both means of targeting: passive or active. Passive targeting Passive targeting is achieved by incorporating the therapeutic agent into a macromolecule or nanoparticle that passively reaches the target organ. In passive targeting, the drug's success is directly related to circulation time. This is achieved by cloaking the nanoparticle with some sort of coating. Several substances can achieve this, one of them being polyethylene glycol (PEG). By adding PEG to the surface of the nanoparticle, it is rendered hydrophilic, thus allowing water molecules to bind to the oxygen molecules on PEG via hydrogen bonding. The result of this bond is a film of hydration around the nanoparticle which makes the substance antiphagocytic. The particles obtain this property because the coating shields them from the hydrophobic interactions that are natural to the reticuloendothelial system (RES); thus the drug-loaded nanoparticle is able to stay in circulation for a longer period of time. To work in conjunction with this mechanism of passive targeting, nanoparticles that are between 10 and 100 nanometers in size have been found to circulate systemically for longer periods of time. Active targeting Active targeting of drug-loaded nanoparticles enhances the effects of passive targeting to make the nanoparticle more specific to a target site. There are several ways that active targeting can be accomplished. One way to actively target solely diseased tissue in the body is to know the nature of a receptor on the cell to which the drug will be targeted. Researchers can then utilize cell-specific ligands that will allow the nanoparticle to bind specifically to the cell that has the complementary receptor. This form of active targeting was found to be successful when utilizing transferrin as the cell-specific ligand. The transferrin was conjugated to the nanoparticle to target tumor cells that possess transferrin-receptor mediated endocytosis mechanisms on their membrane. This means of targeting was found to increase uptake compared with non-conjugated nanoparticles. Another cell-specific ligand is the RGD motif, which binds to the integrin αvβ3. This integrin is upregulated in tumor and activated endothelial cells. Conjugation of RGD to chemotherapeutic-loaded nanoparticles has been shown to increase cancer cell uptake in vitro and therapeutic efficacy in vivo. Active targeting can also be achieved by utilizing magnetoliposomes, which usually serve as contrast agents in magnetic resonance imaging. Thus, by grafting these liposomes with a desired drug to deliver to a region of the body, magnetic positioning could aid with this process. Furthermore, a nanoparticle could possess the capability to be activated by a trigger that is specific to the target site, such as utilizing materials that are pH responsive. Most of the body has a consistent, neutral pH. However, some areas of the body are naturally more acidic than others, and, thus, nanoparticles can take advantage of this by releasing the drug when they encounter a specific pH. Another specific triggering mechanism is based on the redox potential. One of the side effects of tumors is hypoxia, which alters the redox potential in the vicinity of the tumor. By modifying the redox potential that triggers payload release, the vesicles can be made selective for different types of tumors. 
By utilizing both passive and active targeting, a drug-loaded nanoparticle has a heightened advantage over a conventional drug. It is able to circulate throughout the body for an extended period of time until it is successfully attracted to its target through the use of cell-specific ligands, magnetic positioning, or pH-responsive materials. Because of these advantages, side effects from conventional drugs will be largely reduced as a result of the drug-loaded nanoparticles affecting only diseased tissue. However, an emerging field known as nanotoxicology has concerns that the nanoparticles themselves could pose a threat to both the environment and human health with side effects of their own. Active targeting can also be achieved through peptide-based drug targeting systems. Delivery vehicles There are different types of drug delivery vehicles, such as polymeric micelles, liposomes, lipoprotein-based drug carriers, nano-particle drug carriers, dendrimers, etc. An ideal drug delivery vehicle must be non-toxic, biocompatible, non-immunogenic, and biodegradable, and must avoid recognition by the host's defense mechanisms[3]. Peptides Cell surface peptides provide one way to introduce drug delivery into a target cell. This method is accomplished by the peptide binding to a target cell's surface receptors in a way that bypasses immune defenses that would otherwise compromise a slower delivery, without causing harm to the host. In particular, peptides, such as intercellular adhesion molecule-1, have shown a great deal of binding ability in a target cell. This method has shown a degree of efficacy in treating both autoimmune diseases as well as forms of cancer as a result of this binding affinity. Peptide-mediated delivery is also of promise due to the low cost of creating the peptides as well as the simplicity of their structure. Liposomes The most common vehicle currently used for targeted drug delivery is the liposome. Liposomes are non-toxic, non-hemolytic, and non-immunogenic even upon repeated injections; they are biocompatible and biodegradable and can be designed to avoid clearance mechanisms (reticuloendothelial system (RES), renal clearance, chemical or enzymatic inactivation, etc.). Lipid-based, ligand-coated nanocarriers can store their payload in the hydrophobic shell or the hydrophilic interior depending on the nature of the drug/contrast agent being carried. The only problem with using liposomes in vivo is their immediate uptake and clearance by the RES system and their relatively low stability in vitro. To combat this, polyethylene glycol (PEG) can be added to the surface of the liposomes. Increasing the mole percent of PEG on the surface of the liposomes by 4-10% significantly increased circulation time in vivo from 200 to 1000 minutes. PEGylation of the liposomal nanocarrier elongates the half-life of the construct while maintaining the passive targeting mechanism that is commonly conferred to lipid-based nanocarriers. When used as a delivery system, the ability to induce instability in the construct is commonly exploited, allowing the selective release of the encapsulated therapeutic agent in close proximity to the target tissue/cell in vivo. This nanocarrier system is commonly used in anti-cancer treatments, as the acidity of the tumour mass caused by an over-reliance on glycolysis triggers drug release. 
Additional endogenous trigger pathways have been explored through the exploitation of inner and outer tumor environments, such as reactive oxygen species, glutathione, enzymes, hypoxia, and adenosine-5'-triphosphate (ATP), all of which are generally highly present in and around tumors. External triggers are also used, such as light, low frequency ultrasound (LFUS), electrical fields, and magnetic fields. In particular, LFUS has demonstrated high efficacy in the controlled triggering of various drugs in mice, such as cisplatin and calcein. Micelles and dendrimers Another type of drug delivery vehicle used is polymeric micelles. They are prepared from certain amphiphilic co-polymers consisting of both hydrophilic and hydrophobic monomer units. They can be used to carry drugs that have poor solubility. This method offers little in terms of size control or function malleability. Techniques that utilize reactive polymers along with a hydrophobic additive to produce larger micelles covering a range of sizes have been developed. Dendrimers are also polymer-based delivery vehicles. They have a core that branches out in regular intervals to form a small, spherical, and very dense nanocarrier. Biodegradable particles Biodegradable particles have the ability to target diseased tissue as well as deliver their payload as a controlled-release therapy. Biodegradable particles bearing ligands to P-selectin, endothelial selectin (E-selectin) and ICAM-1 have been found to adhere to inflamed endothelium. Therefore, biodegradable particles can also be used for cardiac tissue. Microalgae-based delivery There are biocompatible microalgae hybrid microrobots for active drug delivery in the lungs and the gastrointestinal tract. The microrobots proved effective in tests with mice. In the two studies, "Fluorescent dye or cell membrane–coated nanoparticle functionalized algae motors were further embedded inside a pH-sensitive capsule" and "antibiotic-loaded neutrophil membrane-coated polymeric nanoparticles [were attached] to natural microalgae". Artificial DNA nanostructures The success of DNA nanotechnology in constructing artificially designed nanostructures out of nucleic acids such as DNA, combined with the demonstration of systems for DNA computing, has led to speculation that artificial nucleic acid nanodevices can be used to target drug delivery based upon directly sensing its environment. These methods make use of DNA solely as a structural material and a chemical, and do not make use of its biological role as the carrier of genetic information. Nucleic acid logic circuits that could potentially be used as the core of a system that releases a drug only in response to a stimulus such as a specific mRNA have been demonstrated. In addition, a DNA "box" with a controllable lid has been synthesized using the DNA origami method. This structure could encapsulate a drug in its closed state, and open to release it only in response to a desired stimulus. Applications Targeted drug delivery can be used to treat many diseases, such as cardiovascular diseases and diabetes. However, the most important application of targeted drug delivery is to treat cancerous tumors. In doing so, the passive method of targeting tumors takes advantage of the enhanced permeability and retention (EPR) effect. This is a situation specific to tumors that results from rapidly forming blood vessels and poor lymphatic drainage. 
When the blood vessels form so rapidly, large fenestrae result that are 100 to 600 nanometers in size, which allows enhanced nanoparticle entry. Further, the poor lymphatic drainage means that the large influx of nanoparticles rarely leaves; thus, the tumor retains more nanoparticles for successful treatment to take place. The American Heart Association rates cardiovascular disease as the number one cause of death in the United States. Each year 1.5 million myocardial infarctions (MI), also known as heart attacks, occur in the United States, with 500,000 leading to deaths. The costs related to heart attacks exceed $60 billion per year. Therefore, there is a need to come up with an optimum recovery system. The key to solving this problem lies in the effective use of pharmaceutical drugs that can be targeted directly to the diseased tissue. This technique can help develop many more regenerative techniques to cure various diseases. The development of a number of regenerative strategies in recent years for curing heart disease represents a paradigm shift away from conventional approaches that aim to manage heart disease. Stem cell therapy can be used to help regenerate myocardial tissue and restore the contractile function of the heart by creating/supporting a microenvironment before the MI. Developments in targeted drug delivery to tumors have provided the groundwork for the burgeoning field of targeted drug delivery to cardiac tissue. Recent developments have shown that there are different endothelial surfaces in tumors, which has led to the concept of endothelial cell adhesion molecule-mediated targeted drug delivery to tumors. Liposomes can be used as drug delivery vehicles for the treatment of tuberculosis. The traditional treatment for TB is chemotherapy, which is not overly effective; this may be due to the failure of chemotherapy to reach a high enough concentration at the infection site. The liposome delivery system allows for better macrophage penetration and builds a higher concentration at the infection site. The delivery of the drugs works intravenously and by inhalation. Oral intake is not advised because the liposomes break down in the gastrointestinal system. 3D printing is also used by doctors to investigate how to target cancerous tumors in a more efficient way. By printing a plastic 3D shape of the tumor and filling it with the drugs used in the treatment, the flow of the liquid can be observed, allowing the modification of the doses and targeting location of the drugs. See also Targeted therapy Nanomedicine Antibody-drug conjugate Retrometabolic drug design Magnetic drug delivery PH-responsive tumor-targeted drug delivery References Further reading YashRoy R.C. (1999) Targeted drug delivery. Proceedings ICAR Short Course on "Recent approaches on clinical pharmacokinetics and therapeutic monitoring of drugs in farm animals", Oct 25 to Nov 3, 1999, Div of Pharmacology and Toxicology, IVRI, Izatnagar (India), pp. 129–136. https://www.researchgate.net/publication/233426779_Targeted_drug_delivery?ev=prf_pub External links Drug delivery right on target Pharmacokinetics Medicinal chemistry Drug discovery
Targeted drug delivery
[ "Chemistry", "Materials_science", "Biology" ]
3,489
[ "Pharmacology", "Pharmacokinetics", "Drug delivery devices", "Nanomedicine", "nan", "Medicinal chemistry", "Biochemistry", "Nanotechnology" ]
6,473,308
https://en.wikipedia.org/wiki/Catalyst%20poisoning
Catalyst poisoning is the partial or total deactivation of a catalyst by a chemical compound. Poisoning refers specifically to chemical deactivation, rather than other mechanisms of catalyst degradation such as thermal decomposition or physical damage. Although usually undesirable, poisoning may be helpful when it results in improved catalyst selectivity (e.g. Lindlar's catalyst). An important historic example was the poisoning of catalytic converters by leaded fuel. Poisoning of Pd catalysts Organic functional groups and inorganic anions often have the ability to strongly adsorb to metal surfaces. Common catalyst poisons include carbon monoxide, halides, cyanides, sulfides, sulfites, phosphates, phosphites and organic molecules such as nitriles, nitro compounds, oximes, and nitrogen-containing heterocycles. Agents vary in their effect on catalytic properties because of the nature of the transition metal. Lindlar catalysts are prepared by the reduction of palladium chloride in a slurry of calcium carbonate (CaCO3) followed by poisoning with lead acetate. In a related case, the Rosenmund reduction of acyl halides to aldehydes, the palladium catalyst (over barium sulfate or calcium carbonate) is intentionally poisoned by the addition of sulfur or quinoline in order to lower the catalyst activity and thereby prevent over-reduction of the aldehyde product to the primary alcohol. Poisoning process Poisoning often involves compounds that chemically bond to a catalyst's active sites. Poisoning decreases the number of active sites, and the average distance that a reactant molecule must diffuse through the pore structure before undergoing reaction increases as a result. Poisoned sites can no longer contribute to the rate of reaction. Large scale production of substances such as ammonia in the Haber–Bosch process includes steps to remove potential poisons from the product stream. When the poisoning reaction rate is slow relative to the rate of diffusion, the poison will be evenly distributed throughout the catalyst and will result in homogeneous poisoning of the catalyst. Conversely, if the reaction rate is fast compared to the rate of diffusion, a poisoned shell will form on the exterior layers of the catalyst, a situation known as "pore-mouth" poisoning, and the rate of catalytic reaction may become limited by the rate of diffusion through the inactive shell. Homogeneous and "pore-mouth" poisoning occurrences are most frequently observed when using a porous medium catalyst. Selective poisoning If the catalyst and reaction conditions are indicative of low effectiveness, selective poisoning may be observed, where poisoning of only a small fraction of the catalyst's surface gives a disproportionately large drop in activity. If η is the effectiveness factor of the poisoned surface and hp is the Thiele modulus for the poisoned case (for uniform poisoning, hp = hT√(1 − α)): η = tanh(hp)/hp. When the ratio of the reaction rates of the poisoned pore to the unpoisoned pore is considered: F = √(1 − α) · tanh(hT√(1 − α)) / tanh(hT), where F is the ratio of the rate in the poisoned pore to that in the unpoisoned pore, hT is the Thiele modulus for the unpoisoned case, and α is the fraction of the surface that is poisoned. The above equation simplifies depending on the value of hT. When the surface is fully accessible (hT is negligible): F = 1 − α. This represents the "classical case" of nonselective poisoning where the fraction of the activity remaining is equal to the fraction of the unpoisoned surface remaining. 
When hT is very large, it becomes: F = √(1 − α). In this case, the catalyst effectiveness factors are considerably less than unity, and the effects of the portion of the poison adsorbed near the closed end of the pore are not as apparent as when hT is small. The rate of diffusion of the reactant through the poisoned region is equal to the rate of reaction and is given by: And the rate of reaction within a pore is given by: The fraction of the catalyst surface available for reaction can be obtained from the ratio of the poisoned reaction rate to the unpoisoned reaction rate: Benefits of selective poisoning Usually, catalyst poisoning is undesirable as it leads to the wasting of expensive metals or their complexes. However, poisoning of catalysts can be used to improve the selectivity of reactions. Poisoning can allow selective intermediates to be isolated and desirable final products to be produced. Hydrodesulfurization catalysts In the purification of petroleum products, the process of hydrodesulfurization is utilized. Sulfur-containing compounds, such as thiophene, are treated with H2 to produce H2S and hydrocarbons of varying chain length. Common catalysts used are tungsten and molybdenum sulfide. Adding cobalt and nickel to the edges, or partially incorporating them into the crystal lattice structure, can improve the catalyst's efficiency. The synthesis of the catalyst creates a supported hybrid that prevents poisoning of the cobalt nuclei. Other examples In catalytic converters used on automobiles, the combustion of leaded gasoline produces elemental lead, lead(II) oxide, lead(II) chloride, and lead(II) bromide. Lead alloys with the metals present in the catalyst, while lead oxides and halides coat the catalyst's surfaces, reducing the converter's ability to reduce NOx emissions. In fuel cells using platinum catalysts, the fuels must be free of sulfur and carbon monoxide, unless a desulfurization system is used. Ziegler-Natta catalysts for the production of polyolefins (e.g. polyethylene, polypropylene, etc.) are poisoned by water and oxygen. This poisoning applies to both homogeneous catalysts and heterogeneous catalysts for olefin polymerization. This requires the monomers (ethylene, propylene, etc.) to be purified. See also Hydrogen purity Reaction inhibitor Enzyme inhibitor References Catalysis Fuel cells fr:Poison de catalyseur
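As a hedged numerical illustration of the uniform-poisoning relations quoted above (these are the standard single-pore, Wheeler-type expressions, not equations specific to this article, and the parameter values are illustrative only), the following sketch shows the remaining activity F approaching 1 − α when hT is small and √(1 − α) when diffusion limits the rate:

```python
import numpy as np

def effectiveness(h):
    """Single-pore effectiveness factor eta = tanh(h)/h."""
    return np.tanh(h) / h

def remaining_activity(alpha, h_T):
    """Fraction of activity left after uniformly poisoning a fraction alpha
    of the surface, for an unpoisoned Thiele modulus h_T."""
    h_p = h_T * np.sqrt(1.0 - alpha)   # Thiele modulus of the poisoned pore
    return (1.0 - alpha) * effectiveness(h_p) / effectiveness(h_T)

alpha = 0.5
for h_T in (0.1, 1.0, 10.0, 100.0):
    print(h_T, remaining_activity(alpha, h_T))
# small h_T -> F ~ 1 - alpha = 0.5
# large h_T -> F ~ sqrt(1 - alpha) ~ 0.707
```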
Catalyst poisoning
[ "Chemistry" ]
1,206
[ "Catalysis", "Chemical kinetics" ]
1,935,605
https://en.wikipedia.org/wiki/Micropropagation
Micropropagation or tissue culture is the practice of rapidly multiplying plant stock material to produce many progeny plants, using modern plant tissue culture methods. Micropropagation is used to multiply a wide variety of plants, such as those that have been genetically modified or bred through conventional plant breeding methods. It is also used to provide a sufficient number of plantlets for planting from seedless plants, plants that do not respond well to vegetative reproduction or where micropropagation is the cheaper means of propagating (e.g. orchids). Cornell University botanist Frederick Campion Steward discovered and pioneered micropropagation and plant tissue culture in the late 1950s and early 1960s. Steps In short, micropropagation can be divided into four stages: Selection of mother plant Multiplication Rooting and acclimatizing Transfer of new plant to soil Selection of mother plant Micropropagation begins with the selection of plant material to be propagated. The plant tissues are removed from an intact plant under sterile conditions. Clean stock materials that are free of viruses and fungi are important in the production of the healthiest plants. Once the plant material is chosen for culture, the collection of explant(s) begins and is dependent on the type of tissue to be used, including stem tips, anthers, petals, pollen and other plant tissues. The explant material is then surface sterilized, usually in multiple courses of bleach and alcohol washes, and finally rinsed in sterilized water. This small portion of plant tissue, sometimes only a single cell, is placed on a growth medium, typically containing macro- and micronutrients, water, sucrose as an energy source and one or more plant growth regulators (plant hormones). Usually, the medium is thickened with a gelling agent, such as agar, to create a gel which supports the explant during growth. Some plants are easily grown on simple media, but others require more complicated media for successful growth; the plant tissue grows and differentiates into new tissues depending on the medium. For example, media containing cytokinin are used to create branched shoots from plant buds. Multiplication Multiplication is the taking of tissue samples produced during the first stage and increasing their number. Following the successful introduction and growth of plant tissue, the establishment stage is followed by multiplication. Through repeated cycles of this process, a single explant sample may be increased from one to hundreds and thousands of plants. Depending on the type of tissue grown, multiplication can involve different methods and media. If the plant material grown is callus tissue, it can be placed in a blender and cut into smaller pieces and recultured on the same type of culture medium to grow more callus tissue. If the tissue is grown as small plants called plantlets, hormones are often added that cause the plantlets to produce many small offshoots. After the formation of multiple shoots, these shoots are transferred to a rooting medium with a high auxin/cytokinin ratio. After the development of roots, plantlets can be used for hardening. Pretransplant This stage involves treating the plantlets/shoots produced to encourage root growth and "hardening." It is performed in vitro, or in a sterile "test tube" environment. "Hardening" refers to the preparation of the plants for a natural growth environment. Until this stage, the plantlets have been grown in "ideal" conditions, designed to encourage rapid growth. 
Due to the controlled nature of their maturation, the plantlets often do not have fully functional dermal coverings. This causes them to be highly susceptible to disease and inefficient in their use of water and energy. In vitro conditions are high in humidity, and plants grown under these conditions often do not form a working cuticle and stomata that keep the plant from drying out. When taken out of culture, the plantlets need time to adjust to more natural environmental conditions. Hardening typically involves slowly weaning the plantlets from a high-humidity, low light, warm environment to what would be considered a normal growth environment for the species in question. Transfer from culture In the final stage of plant micropropagation, the plantlets are removed from the plant media and transferred to soil or (more commonly) potting compost for continued growth by conventional methods. This stage is often combined with the "pretransplant" stage. Methods There are many methods of plant micropropagation. Meristem culture In meristem culture, the meristem and a few subtending leaf primordia are placed into a suitable growing medium, where they are induced to form new meristems. These meristems are then divided and further grown and multiplied. To produce plantlets, the meristems are taken off their proliferation medium and put on a regeneration medium. When an elongated rooted plantlet is produced after some weeks, it can be transferred to the soil. A disease-free plant can be produced by this method. Experimental results also suggest that this technique can be successfully utilized for the rapid multiplication of various plant species, e.g. coconut, strawberry, and sugarcane. Callus culture A callus is a mass of undifferentiated parenchymatous cells. When a living plant tissue is placed in an artificial growing medium with other conditions favorable, callus is formed. The growth of callus varies with the endogenous levels of auxin and cytokinin and can be manipulated by exogenous supply of these growth regulators in the culture medium. The callus growth and its organogenesis or embryogenesis can be divided into three different stages. Stage I: Rapid production of callus after placing the explants in culture medium Stage II: The callus is transferred to another medium containing growth regulators for the induction of adventitious organs. Stage III: The new plantlet is then exposed gradually to environmental conditions. Embryo culture In embryo culture, the embryo is excised and placed into a culture medium with proper nutrients under aseptic conditions. To obtain quick and optimal growth into plantlets, it is later transferred to soil. It is particularly important for the production of interspecific and intergeneric hybrids and to overcome the embryo. Protoplast culture In protoplast culture, the plant cell can be isolated with the help of wall-degrading enzymes and grown in a suitable culture medium under controlled conditions for the regeneration of plantlets. Under suitable conditions the protoplast develops a cell wall, followed by an increase in cell division and differentiation, and grows into a new plant. The protoplast is first cultured in liquid medium at 25 to 28 °C with a light intensity of 100 to 500 lux, or in the dark, and after undergoing substantial cell division, the cells are transferred into a solid medium congenial for morphogenesis. Many horticultural crops respond well to protoplast culture. 
Advantages Micropropagation has a number of advantages over traditional plant propagation techniques: The main advantage of micropropagation is the production of many plants that are clones of each other. Micropropagation can be used to produce disease-free plants. It can have an extraordinarily high fecundity rate, producing thousands of propagules while conventional techniques might only produce a fraction of this number. It is the only viable method of regenerating genetically modified cells or cells after protoplast fusion. It is useful in multiplying plants which produce seeds in uneconomical amounts, or when plants are sterile and do not produce viable seeds or when seed cannot be stored (see recalcitrant seeds). Micropropagation often produces more robust plants, leading to accelerated growth compared to similar plants produced by conventional methods - like seeds or cuttings. Some plants with very small seeds, including most orchids, are most reliably grown from seed in sterile culture. A greater number of plants can be produced per square meter and the propagules can be stored longer and in a smaller area. Disadvantages Micropropagation is not always the perfect means of multiplying plants. Conditions that limits its use include: Labour may make up 50–69% of operating costs. All plants produced via micropropagation are genetically identical clones, leading to a lack of overall disease resilience, as all progeny plants may be vulnerable to the same infections. An infected plant sample can produce infected progeny. This is uncommon as the stock plants are carefully screened and vetted to prevent culturing plants infected with virus or fungus. Not all plants can be successfully tissue cultured, often because the proper medium for growth is not known or the plants produce secondary metabolic chemicals that stunt or kill the explant. Sometimes plants or cultivars do not come true to type after being tissue cultured. This is often dependent on the type of explant material utilized during the initiation phase or the result of the age of the cell or propagule line. Some plants are very difficult to disinfect of fungal organisms. The major limitation in the use of micropropagation for many plants is the cost of production; for many plants the use of seeds, which are normally disease free and produced in good numbers, readily produce plants (see orthodox seed) in good numbers at a lower cost. For this reason, many plant breeders do not utilize micropropagation because the cost is prohibitive. Other breeders use it to produce stock plants that are then used for seed multiplication. Mechanisation of the process could reduce labour costs, but has proven difficult to achieve, despite active attempts to develop technological solutions. Applications Micropropagation facilitates the growth, storage, and maintenance of a large number of plants in small spaces, which makes it a cost-effective process. Micropropagation is used for germplasm storage and the protection of endangered species. Micropropagation is widely used in ornamental plants to efficiently produce large quantities of uniform, disease-free specimens, significantly enhancing commercial horticulture operations. Among the species broadly propagated in vitro, one can mention chrysanthemum, damask rose, Saintpaulia ionantha, Zamioculcas zamiifolia and bleeding heart. Micropropagation can also be used with fruit trees, e.g. Pyrus communis. 
In order to reduce expenditures, natural plant extracts can be used as substitutes for traditional plant growth regulators. References Agronomy Horticulture Plant reproduction
Micropropagation
[ "Biology" ]
2,148
[ "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
1,935,842
https://en.wikipedia.org/wiki/Evidence-based%20design
Evidence-based design (EBD) is the process of constructing a building or physical environment based on scientific research to achieve the best possible outcomes. Evidence-based design is especially important in evidence-based medicine, where research has shown that environment design can affect patient outcomes. It is also used in architecture, interior design, landscape architecture, facilities management, education, and urban planning. Evidence-based design is part of the larger movement towards evidence-based practices. Background Evidence-based design (EBD) was popularized by the seminal study by Ulrich (1984) that showed the impact of a window view on patient recovery. Studies have since examined the relationships between design of the physical environment of hospitals with outcomes in health, the results of which show how the physical environment can lower the incidence of nosocomial infections, medical errors, patient falls, and staff injuries; and reduce stress of facility users, improve safety and productivity, reduce resource waste, and enhance sustainability. Evidence in EBD may include a wide range of sources of knowledge, from systematic literature reviews to practice guidelines and expert opinions. Evidence-based design was first defined as "the deliberate attempt to base design decisions on the best available research evidence" and that "an evidence-based designer, together with an informed client, makes decisions based on the best available information from research and project evaluations". The Center for Health Design (CHD), a non-profit organization that supports healthcare and design professionals in improving the understanding and application of design that influences the performance of healthcare, patient satisfaction, staff productivity and safety, bases its model on the importance of working in partnership with the client and interdisciplinary team to foster understanding of the client, preferences and resources. The roots of evidence-based design could go back to 1860 when Florence Nightingale identified fresh air as "the very first canon of nursing," and emphasized the importance of quiet, proper lighting, warmth and clean water. Nightingale applied statistics to nursing, notably with "Diagram of the causes of mortality in the army in the East". This statistical study led to advances in sanitation, although the germ theory of disease was not yet fully accepted. Nightingale was also an enthusiast for the therapeutic benefits of sunlight and views from windows. She wrote: "Second only to fresh air … I should be inclined to rank light in importance for the sick. Direct sunlight, not only daylight, is necessary for speedy recovery … I mention from experience, as quite perceptible in promoting recovery, the being able to see out of a window, instead of looking against a dead wall; the bright colours of flowers; the being able to read in bed by the light of the window close to the bed-head. It is generally said the effect is upon the mind. Perhaps so, but it is not less so upon the body on that account ...." Nightingale’s ideas appear to have been influential on E R Robson, architect to the London School Board, when he wrote: “It is well known that the rays of the sun have a beneficial influence on the air of a room, tending to promote ventilation, and that they are to a young child very much what they are to a flower.” The evidence-based design movement began in the 1970s with Archie Cochrane's book Effectiveness and Efficiency: Random Reflections on Health Services. 
to collect, codify, and disseminate "evidence" gathered in randomised controlled trials relative to the built environment. A 1984 study by Roger Ulrich seemed to support Nightingale's ideas from more than a century before: he found that surgical patients with a view of nature suffered fewer complications, used less pain medication and were discharged sooner than those who looked out on a brick wall; this laid the foundation for what has now become a discipline known as evidence-based design. Studies exist about the psychological effects of lighting, carpeting and noise on critical-care patients, and evidence links the physical environment with improvements in patient and staff safety, wellness and satisfaction. Architectural researchers have studied the impact of hospital layout on staff effectiveness, and social scientists studied guidance and wayfinding. In the 1960s and 1970s numerous studies were carried out using methods drawn from behavioural psychology to examine both people’s behaviour in relation to buildings and their responses to different designs – see for example the book by David Canter and Terence Lee. More recently, architectural researchers have conducted post-occupancy evaluations (POE) to provide advice on improving building design and quality. While the EBD process is particularly suited to healthcare, it may also be used in other fields for positive health outcomes and provision of healing environments. While healthcare proved to be one of the most prominent sectors to examine the evidence base for how good design benefits building occupants, visitors and the public, other sectors also have considerable bodies of evidence. And many sectors benefit from literature reviews that draw together and summarise the evidence. In the UK some were led by the UK Commission for Architecture and the Built Environment, a government watchdog established by the Labour Party following its election in 1997 and its commitment to improving the quality of the UK stock of public sector buildings. Other reviews were supported by various public or private organisations, and some were undertaken in academia. Reviews were undertaken at the urban scale, some were cross-sectoral and others were sector based (hospitals, schools, higher education). An academic paper by Sebastian Macmillan gives an overview of the field as it was in 2006. A cautionary note about the strength of evidence in the built environment In supporting evidence-based design, some caution is needed to ascertain the robustness of the evidence: the architectural psychology movement eventually drew criticism for its tendency towards ‘architectural determinism’ – a confusion between correlation and causality with the implication that there were mechanistic and causal links between the built environment and human behaviour. As some of the studies reviewed below reveal, the evidence is often weak or, worse, conflicting. In an early review of evidence in the healthcare sector, Rubin, Owens & Golden examined the medical literature for research papers on the effect of the physical environment on patient outcomes. They concluded that, if the demanding standards of proof used in medical research were used, almost all the studies would have to be regarded as methodologically flawed or at least limited. Unfortunately, strongly held opinions are not the same as rigorously collected evidence. 
Evidence-base for architecture generally, housing and urban environments In 2002, CABE published a cross-sectoral study that set a pattern by reviewing a selection of the evidence (which it called the key research) for healthcare buildings, educational buildings, housing, urban environments, and business premises. It claimed: "Good design is not just about the aesthetic improvement of our environment, it is as much about improved quality of life, equality of opportunity and economic growth. … Good design does not cost more when measured across the lifetime of the building or place …" At the urban scale, in 2001, CABE and DETR published a study on the value of urban design which includes a literature review plus some case studies. In New Zealand, a landmark review was supported by the Ministry for the Environment. The study categorised the evidence as conclusive, strong, suggestive or anecdotal, and also noted the difficulty of establishing causation since various design elements may be found in combination with other features. The authors state that urban design is context-specific and caution against automatically adopting what works elsewhere in New Zealand. In its 2003 review of the evidence about housing, CABE expressed similar concerns about the evidence base when it said: "The most striking finding in a review of the literature relating to the quality of residential design is the almost complete absence of any empirical attempts to measure the implications of high quality on costs, prices or values." David Halpern's book brings together and reviews a substantial number of studies covering, among other issues: mental ill-health in city centres; social isolation in out-of-town housing estates; residential satisfaction; and estate layouts, semi-private spaces and a sense of community. He concludes that there is substantial evidence to show the physical environment has real and significant effects on group and friendship formation, and on patterns of neighbourly behaviour. Other literature reviews include a 2006 study by the Scottish Executive and one by the UK NWDA/RENEW North West. Public open space CABE's 2004 literature review on public open space draws attention to the physical and mental health benefits associated with access to recreational space, as well as the environmental value of biodiversity and improved air quality. In a follow-up 2005 study entitled Does Money Grow on Trees? CABE assessed the impact on the value of residential property of proximity to a park, drawing on valuations prepared by local property experts in which external variables (shops, schools, busy roads) were controlled for. Economic and non-monetary benefits from the proximity were identified. Schools and Higher Education A comprehensive review of the literature was undertaken in 2005 for the Design Council. It concluded that there was evidence for the effect of basic physical variables (air quality, temperature, noise) on learning but that once minimum standards were achieved, further improvements were less significant. The reviewers found forceful opinions on the effects of lighting and colour but that the supporting evidence was conflicting. It was difficult to draw generalizable conclusions about other physical characteristics, and the interactions between different elements were as important as single elements. Other literature reviews of the education sector include two by PricewaterhouseCoopers and one by researchers at the University of Salford. 
In the higher education sector, a review by CABE reports on the links between building design and the recruitment, retention and performance of staff and students. Fifty articles are reviewed, and five new case studies reported. Offices The offices sector has been widely studied, with the major concerns focusing on productivity. A study in 2000 by Sheffield Hallam University reported that, apart from surveys of occupants of individual offices, the evidence base on new workplaces was mainly journalistic and biased towards interviews with successes and failures. Some companies claimed that new spatial arrangements led to reduced costs, reduced absenteeism and easier recruitment, faster development of new ideas, and increased profitability. But others reported the exact opposite, and the reasons for this remained unclear. CABE and the British Council for Offices published a joint study in 2005. The paper reports that four main issues have been studied: the largest body of work concerns environmental and ergonomic issues related to the comfort of individual office workers; the second, research on the efficiency with which office space is used; the third, adaptability and flexibility; and the last, research related to supporting work processes. The report is critical of the disproportionate focus on the performance of building services compared with other aspects of buildings. Evidence-based design for healthcare facilities There is a growing awareness among healthcare professionals and medical planners of the need to create patient-centered environments that can help patients and families cope with the stress that accompanies illness. There is also a growing body of research and evidence, from various studies, showing both the influence of well-designed environments on positive patient health outcomes and the negative effects of poor design, including longer hospital stays. Using biophilic design concepts in interior environments is increasingly argued to have positive impacts on health and well-being through improving direct and indirect experiences of nature. Numerous studies have demonstrated improved patient health outcomes through environmental measures; exposing patients to nature has been shown to produce substantial alleviation of pain, and limited research also suggests that patients experience less pain when exposed to higher levels of daylight in their hospital rooms. Patients have an increased need for sleep during illness, but suffer from poor sleep when hospitalised. Approaches such as single-bed rooms and reduced noise have been shown to improve patient sleep. Natural daylight in patient rooms helps to maintain circadian rhythms and improve sleep. According to Heerwagen, an environmental psychologist, medical models of health integrate behavioral, social, psychological, and mental processes. Drawing on EBD research into well-being outcomes and building features, contact with nature and daylight has been found to enhance emotional functioning. Positive feelings such as calmness increase, while anxiety, anger, or other negative emotions diminish with views of nature. In contrast, there is also convincing evidence that built environments lacking nature can worsen stress and are ineffective in fostering restoration. A few studies have shown the restorative effects of gardens for stressed patients, families and staff. Behavioural observation and interview methods in post-occupancy studies of hospital gardens have shown a faster recovery from stress by nearly all garden users. 
Limited evidence suggests increased benefits when these gardens contain foliage, flowers, water and pleasant nature sounds, such as birdsong and running water. Related approaches Performance-based building design EBD is closely related to performance-based building design (PBBD) practices. As an approach to design, PBBD tries to create clear statistical relationships between design decisions and satisfaction levels demonstrated by the building systems. Like EBD, PBBD uses research evidence to predict performance related to design decisions. The decision-making process is non-linear, since the building environment is a complex system. Choices cannot be based on cause-and-effect predictions; instead, they depend on variable components and mutual relationships. Technical systems, such as heating, ventilation and air-conditioning, have interrelated design choices, and the related performance requirements (such as energy use, comfort and use cycles) are variable components. Evidence-based medicine Evidence-based medicine (EBM) is a systematic process of evaluating scientific research which is used as the basis for clinical treatment choices. Sackett, Rosenberg, Gray, Haynes and Richardson argue that "evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients". It is used in the healthcare industry to convince decision-makers to invest the time and money to build better buildings, realizing strategic business advantages as a result. As medicine has become increasingly evidence-based, healthcare design uses EBD to link hospitals' physical environments with healthcare outcomes. Research-informed design Research-informed design (RID) is a less-developed concept that is commonly misunderstood and used synonymously with EBD, although they are different. It can be defined as the process of applying credible research in integration with the project team to inform the environmental design to achieve the project goals. Credible research here includes qualitative, quantitative, and mixed-methods approaches with the highest standards of rigor suitable for their methodology. The literature for "research-informed" practices comes from education, and not from the healthcare disciplines. The process involves application of the outcomes from literature review and empirical investigation to inform design during the design phase, given the constraints, and sharing the process and the lessons learnt, just as in EBD. Research and accreditation As EBD is supported by research, many healthcare organizations are adopting its principles with the guidance of evidence-based designers. The Center for Health Design developed the Pebble Project, a joint research effort by CHD and selected healthcare providers on the effect of building environments on patients and staff. The Health Environment Research & Design journal and the Health Care Advisory Board are additional sources of information and databases on EBD. The Evidence Based Design Accreditation and Certification (EDAC) program was introduced in 2009 by The Center for Health Design to provide internationally recognized certification and promote the use of EBD in healthcare building projects, making EBD an accepted and credible approach to improving healthcare outcomes. EDAC identifies those experienced in EBD and teaches about the research process: identifying, hypothesizing, implementing, gathering and reporting data associated with a healthcare project. 
Process There are four components to evidence-based design: Gather qualitative and quantitative intelligence Map strategic, cultural and research goals Hypothesize outcomes, innovate, and implement translational design Measure and share outcomes Meta-analysis template for literature review In his book Evidence-based Policy: A Realistic Perspective, Ray Pawson suggests a meta-analysis template which may be applied to EBD. With this protocol, the field will be able to provide designers with a source for evidence-based design. A systematic review process should follow five steps: Formulating the review question Identifying and collecting evidence Evaluating the quality of the evidence Extracting, processing and systematizing data Disseminating findings Conceptual model According to Hamilton, architects have a responsibility for the translation of research in the field and for its application in informing designs. He further illustrates a conceptual model architects could use, which identifies four levels of addressing research and methods based on varying levels of commitment: Level 1 Informed design decisions based on the available literature on environmental research, selected for applicability, such as the use of a state-of-the-art technology or strategy suited to the physical setting of the project Level 2 Design decisions based on predictive performance and measurable outcomes, rather than subjective decisions based on random choice Level 3 Results reported publicly, with the objective of moving information on the methods and results beyond the design team. Peer review makes the process more robust, as it can include varying perspectives from those who may or may not agree with the findings Level 4 Publishing findings in peer-reviewed journals Collaborating with academic and social scientists Working model A white paper (series 3/5) from the Center for Health Design presents a working model to help designers implement EBD decision-making. The primary goal is providing a healing environment; positive outcomes depend on three investments: Designed infrastructure, including the built environment and technology Re-engineered clinical and administrative practices to maximize infrastructure investment Leadership to maximize human and infrastructure investments All three investments depend on existing research. Strategies A white paper from the Center for Health Design identifies ten strategies to aid EBD decision-making: Start with problems. Identify the problems the project is trying to solve and for which the facility design plays an important role (for example, adding or upgrading technology, expanding services to meet growing market demand, replacing aging infrastructure) Use an integrated multidisciplinary approach with consistent senior involvement, ensuring that everyone with problem-solving tools is included. It is essential to stimulate synergy between different communities to maximize efforts, outcomes and interchanges. Maintain a patient- and family-centered approach; patient and family experiences are key to defining aims and assessing outcomes. Focus on financial operations past the first-cost impact, exploring the cost-effectiveness of design options over time and considering multi-year investment returns. Apply disciplined participation and criteria management. 
These processes use decision-making tools such as SWOT analysis, analytic hierarchy processes and decision trees, which may also be used in design (particularly of technical aspects such as structure, fire safety or energy use). Establish incentive-linked criteria to increase design-team motivation and involve end users with checklists, surveys and simulations. Use strategic partnerships to create new products with hospital-staff expertise and influence. Encourage simulation and testing, assuming the patient's perspective when making lighting and energy models and computer visualizations. Use a lifecycle perspective (30–50 years) from planning to product, exploring the lifecycle return on investment of design strategies for safety and workforce outcomes. Overcommunicate. Positive outcomes are connected with the involvement of clinical staff and community members through meetings, newsletters, webcams and other tools. Tools Evidence-based design has been applied to efficacy measurements of a building's design, and is usually done at the post-construction stage as a part of a post-occupancy evaluation (POE). The POE assesses strengths and weaknesses of design decisions in relation to human behaviour in a built environment. Issues include acoustics, odor control, vibration, lighting and user-friendliness, and are binary-choice (acceptable or unacceptable). Other research techniques, such as observation, photography, checklists, interviews, surveys and focus groups, supplement traditional design-research methods. Assessment tools have been developed by The Center for Health Design and the Picker Institute to help healthcare managers and designers gather information on consumer needs, assess their satisfaction and measure quality improvements: The Patient Environmental Checklist assesses an existing facility's strong and weak points. Specific environmental features are evaluated by patients and their families on a 5-point scale, and the checklist quickly identifies areas needing improvement. The Patient Survey gathers information on patients' experiences with the built environment. The range of questions is wide, since patients' priorities may differ significantly from those of administrators or designers. Focus groups with consumers explore specific needs and generate ideas for future solutions. References Cama, R., "Patient room advances and controversies: Are you in the evidence-based healthcare design game?", Healthcare Design, March 2009. Hall, C.R., "CHD rolls out evidence-based design accreditation and certification", Health Facilities Management, July 2009. Hamilton, D.K., "Research Informed Design & Outcomes for Healthcare" in Evidence Based Hospital Design Forum, Washington, January 2009. Stankos, M. and Scharz, B., "Evidence-Based Design in Healthcare: A Theoretical Dilemma", IDRP Interdisciplinary Design and Research e-Journal, Volume I, Issue I (Design and Health), January 2007. Ulrich, R.S., "Effects of Healthcare Environmental Design on Medical Outcomes" in Design & Health – The therapeutic benefits of design, proceedings of the 2nd Annual International Congress on Design and Health, Karolinska Institute, Stockholm, June 2000. Webster, L. and Steinke, C., "Evidence-based design: A new direction for health care", Design Quarterly, Winter 2009. Sadler, B.L., Dubose, J.R., Malone, E.B. and Zimring, C.M., "The business case for building better hospitals through evidence based design". 
White Paper Series 1/5, Evidence-Based Design Resources for Healthcare Executives, Center for Health Design, September 2008. Ulrich, R.S., Zimring, C.M., Zhu, X., Dubose, J., Seo, H.B., Choi, Y.S., Quan, X. and Joseph, A., "A review of the research literature on evidence based healthcare design", White Paper Series 5/5, Evidence-Based Design Resources for Healthcare Executives, Center for Health Design, September 2008. Further reading A Visual Reference to Evidence-Based Design by Jain Malkin. Study Guide 1: An Introduction to Evidence-Based Design: Exploring Healthcare and Design. Study Guide 2: Building the Evidence-Base: Understanding Research in Healthcare Design. Study Guide 3: Integrating Evidence-Based Design: Practicing the Healthcare Design Process. A Practitioner's Guide to Evidence-Based Design by Debra D. Harris, PhD, Anjali Joseph, PhD, Franklin Becker, PhD, Kirk Hamilton, FAIA, FACHA, Mardelle McCuskey Shepley, AIA, D.Arch. Evidence-Based Design for Multiple Building Types by D. Kirk Hamilton and David H. Watkins. Stout, Chris E. and Hayes, Randy A. The evidence-based practice: methods, models, and tools for mental health professionals. John Wiley and Sons, January 2005. Ulrich, R., Quan, X., Zimring, C., Joseph, A. and Choudhary, R., "The Role of the Physical Environment in the Hospital of the 21st Century". Report to the Center for Health Design, September 2004. Cama, R. (2009). Evidence-Based Healthcare Design. Hoboken, New Jersey: John Wiley & Sons, Inc. Phiri, M. (2015). Design Tools for Evidence-Based Healthcare Design. Abingdon & New York: Routledge. Phiri, M. & Chen, B. (2014). Sustainability and Evidence-Based Design in Healthcare Estate. Heidelberg: Springer. External links The Center for Health Design Role of the Physical Environment in the Hospital of the 21st Century: Report published by The Center for Health Design in 2004 summarizing evidence-based design research for healthcare InformeDesign: Research database of studies linking environment to outcomes Center for Health Systems and Design Picker Institute Tulane Center for Evidence-Based Global Health Health care quality Decision-making Health informatics Evidence-based practices
Evidence-based design
[ "Biology" ]
4,932
[ "Health informatics", "Medical technology" ]
1,935,972
https://en.wikipedia.org/wiki/Diethyl%20azodicarboxylate
Diethyl azodicarboxylate, conventionally abbreviated as DEAD and sometimes as DEADCAT, is an organic compound with the structural formula C2H5O2C–N=N–CO2C2H5. Its molecular structure consists of a central azo functional group, RN=NR, flanked by two ethyl ester groups. This orange-red liquid is a valuable reagent but also quite dangerous and explodes upon heating. Therefore, commercial shipment of pure diethyl azodicarboxylate is prohibited in the United States and is carried out either in solution or on polystyrene particles. DEAD is an aza-dienophile and an efficient dehydrogenating agent, converting alcohols to aldehydes, thiols to disulfides and hydrazo groups to azo groups; it is also a good electron acceptor. While DEAD is used in numerous chemical reactions, it is mostly known as a key component of the Mitsunobu reaction, a common strategy for the preparation of an amine, azide, ether, thioether, or ester from the corresponding alcohol. It is used in the synthesis of various natural products and pharmaceuticals such as zidovudine, an AIDS drug; FdUMP, a potent antitumor agent; and procarbazine, a chemotherapy drug. Properties DEAD is an orange-red liquid whose color fades to yellow, or disappears entirely, upon dilution or chemical reaction. This color change is conventionally used for visual monitoring of the synthesis. DEAD dissolves in most common organic solvents, such as toluene, chloroform, ethanol, tetrahydrofuran and dichloromethane, but has low solubility in water or carbon tetrachloride; the solubility in water is higher for the related azo compound dimethyl azodicarboxylate. DEAD is a strong electron acceptor and easily oxidizes a solution of sodium iodide in glacial acetic acid. It also reacts vigorously with hydrazine hydrate, producing diethyl hydrazodicarboxylate and evolving nitrogen. Linear combination of atomic orbitals molecular orbital method (LCAO-MO) calculations suggest that the molecule of DEAD is unusual in having a high-lying vacant bonding orbital, and therefore tends to withdraw hydrogen atoms from various hydrogen donors. Photoassisted removal of hydrogen by DEAD was demonstrated for isopropyl alcohol, resulting in pinacol and tetraethyl tetrazanetetracarboxylate, and for acetaldehyde yielding diacetyl and diethyl hydrazodicarboxylate. Similarly, reacting DEAD with ethanol and cyclohexanol abstracts hydrogen, producing acetaldehyde and cyclohexanone, respectively. Those reactions also proceed without light, although at much lower yields. Thus, in general DEAD is an aza-dienophile and dehydrogenating agent, converting alcohols to aldehydes, thiols to disulfides and hydrazo groups to azo groups. It also undergoes pericyclic reactions with alkenes and dienes via ene and Diels–Alder mechanisms. Preparation Although available commercially, diethyl azodicarboxylate can be prepared fresh in the laboratory, especially if required in pure, non-diluted form. A two-step synthesis starts from hydrazine, first by reaction with ethyl chloroformate, followed by treating the resulting diethyl hydrazodicarboxylate with chlorine (bubbling through the solution), hypochlorous acid, concentrated nitric acid or red fuming nitric acid. The reaction is carried out in an ice bath, and the reagents are added dropwise so that the temperature does not rise above 20 °C. Diethyl hydrazodicarboxylate is a solid with a melting temperature of 131–133 °C, which is collected as a residue; it is significantly more stable to heating than DEAD and is conventionally dried at a temperature of about 80 °C. 
Applications Mitsunobu reaction DEAD is a reagent in the Mitsunobu reaction where it forms an adduct with phosphines (usually triphenylphosphine) and assists the synthesis of esters, ethers, amines and thioethers from alcohols. Reactions normally result in inversion of stereochemical configuration. DEAD was used in the original 1967 article by Oyo Mitsunobu, and his 1981 review on the use of diethyl azodicarboxylate is a top-cited chemistry article. The Mitsunobu reaction has several applications in the synthesis of natural products and pharmaceuticals. In the above reaction, which is assisted either by DEAD or DIAD (diisopropyl azodicarboxylate), thymidine 1 transforms to the derivative 2. The latter easily converts to zidovudine 4 (also known as azidothymidine or AZT), an important antiviral drug, used among others in the treatment of AIDS. Another example of pharmaceutical application of the DEAD-assisted Mitsunobu reaction is the synthesis of the bis[(pivaloyloxy)methyl] [PIVz] derivative of 2'-deoxy-5-fluorouridine 5'-monophosphate (FdUMP), which is a potent antitumor agent. Michael reaction The azo group in DEAD is a Michael acceptor. In the presence of a copper(II) catalyst, DEAD assists conversion of β-keto esters to the corresponding hydrazine derivatives. The substitution of boronic acid esters proceeds similarly: Other reactions DEAD is an efficient component in Diels-Alder reactions and in click chemistry, for example the synthesis of bicyclo[2.1.0]pentane, which originates from Otto Diels. It has also been used to generate aza-Baylis-Hillman adducts with acrylates. DEAD can be used for synthesis of heterocyclic compounds. Thus, pyrazoline derivatives convert by condensation to α,β-unsaturated ketones: Another application is the use of DEAD as an enophile in ene reactions: Safety DEAD is toxic, shock and light sensitive; it can violently explode when its undiluted form is heated above 100 °C. Shipment by air of pure diethyl azodicarboxylate is prohibited in the United States and is carried out in solution, typically about 40% DEAD in toluene. Alternatively, DEAD is transported and stored on 100–300 mesh polystyrene particles at a concentration of about 1 mmol/g. The time-weighted average threshold limit value for exposure to DEAD over a typical 40-hour working week is 50 parts per million; that is, DEAD is half as toxic as, e.g., carbon monoxide. Safety hazards have resulted in rapid decline of DEAD usage and replacement with DIAD and other similar compounds. References Azo compounds Reagents for organic chemistry Ethyl esters Carboxylate esters
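As a back-of-the-envelope companion to the handling figures quoted above (a roughly 40 wt% solution in toluene, or polystyrene-supported DEAD at about 1 mmol/g), the following sketch works out how much stock to weigh for a given reaction scale. The molar mass is computed from the formula C6H10N2O4; the helper function and the exact stock specifications are illustrative assumptions, not values taken from any particular supplier.

```python
# Hypothetical helper for working out how much DEAD stock to weigh out.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}   # g/mol
FORMULA = {"C": 6, "H": 10, "N": 2, "O": 4}                          # C6H10N2O4

molar_mass = sum(ATOMIC_MASS[el] * n for el, n in FORMULA.items())   # ~174.2 g/mol

def grams_of_stock(mmol, form="solution", wt_fraction=0.40, loading_mmol_per_g=1.0):
    """Grams of stock needed to deliver `mmol` of DEAD (illustrative only)."""
    if form == "solution":                       # e.g. ~40 wt% DEAD in toluene
        return mmol * molar_mass / 1000.0 / wt_fraction
    if form == "resin":                          # e.g. ~1 mmol/g on polystyrene
        return mmol / loading_mmol_per_g
    raise ValueError(f"unknown form: {form}")

print(f"molar mass of DEAD ~ {molar_mass:.1f} g/mol")
print(f"5 mmol from a 40% toluene solution: weigh ~{grams_of_stock(5):.2f} g")
print(f"5 mmol from 1 mmol/g resin: weigh ~{grams_of_stock(5, 'resin'):.1f} g")
```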
Diethyl azodicarboxylate
[ "Chemistry" ]
1,466
[ "Reagents for organic chemistry" ]
1,936,485
https://en.wikipedia.org/wiki/Serotype
A serotype or serovar is a distinct variation within a species of bacteria or virus or among immune cells of different individuals. These microorganisms, viruses, or cells are classified together based on their shared reactivity between their surface antigens and a particular antiserum, allowing the classification of organisms to a level below the species. A group of serovars with common antigens is called a serogroup or sometimes serocomplex. Serotyping often plays an essential role in determining species and subspecies. The Salmonella genus of bacteria, for example, has been determined to have over 2600 serotypes. Vibrio cholerae, the species of bacteria that causes cholera, has over 200 serotypes, based on cell antigens. Only two of them have been observed to produce the potent enterotoxin that results in cholera: O1 and O139. Serotypes were discovered in hemolytic streptococci by the American microbiologist Rebecca Lancefield in 1933. Procedure Serotyping is the process of determining the serotype of an organism, using prepared antisera that bind to a set of known antigens. Some antisera detect multiple known antigens and are known as polyvalent or broad; others are monovalent. For example, what was once described as HLA-A9 is now subdivided into two more specific serotypes ("split antigens"), HLA-A23 and HLA-A24. As a result, A9 is now known as a "broad" serotype. For organisms with many possible serotypes, first obtaining a polyvalent match can reduce the number of tests required. The binding between a surface antigen and the antiserum can be experimentally observed in many forms. A number of bacteria species, including Streptococcus pneumoniae, display the Quellung reaction visible under a microscope. Others such as Shigella (and E. coli) and Salmonella are traditionally detected using a slide agglutination test. HLA types are originally determined with the complement fixation test. Newer procedures include the latex fixation test and various other immunoassays. "Molecular serotyping" refers to methods that replace the antibody-based test with a test based on the nucleic acid sequence – therefore actually a kind of genotyping. By analyzing which surface antigen-defining allele(s) are present, these methods can produce faster results. However, their results may not always agree with traditional serotyping, as they can fail to account for factors that affect the expression of antigen-determining genes. Role in organ transplantation The immune system is capable of discerning a cell as being 'self' or 'non-self' according to that cell's serotype. In humans, that serotype is largely determined by human leukocyte antigen (HLA), the human version of the major histocompatibility complex. Cells determined to be non-self are usually recognized by the immune system as foreign, causing an immune response, such as hemagglutination. Serotypes differ widely between individuals; therefore, if cells from one human (or animal) are introduced into another random human, those cells are often determined to be non-self because they do not match the self-serotype. For this reason, transplants between genetically non-identical humans often induce a problematic immune response in the recipient, leading to transplant rejection. In some situations, this effect can be reduced by serotyping both recipient and potential donors to determine the closest HLA match. Human leukocyte antigens Bacteria Most bacteria produce antigenic substances on the outer surface that can be distinguished by serotyping. 
Almost all species of Gram-negative bacteria produce a layer of lipopolysaccharide on the outer membrane. The outermost portion of the LPS accessible to antibodies is the O antigen. Variation in the O antigen can be caused by genetic differences in the biosynthetic pathway or the transporter used to move the building-blocks to the outside of the cell. The flagella on motile bacteria are called the H antigen in serotyping. Minute genetic differences in the components of the flagella lead to variations detectable by antibodies. Some bacteria produce a polysaccharide capsule, called the K antigen in serotyping. The LPS (O) and capsule (K) antigens are themselves important pathogenicity factors. Some antigens are invariant among a taxonomic group. Presence of these antigens would not be useful for classification lower than the species level, but may inform identification. One example is the enterobacterial common antigen (ECA), universal to all Enterobacterales. E. coli E. coli have 187 possible O antigens (6 later removed from the list, 3 actually producing no LPS), 53 H antigens, and at least 72 K antigens. Among these three, the O antigen has the best correlation with lineages; as a result, the O antigen is used to define the "serogroup" and is also used to define strains in taxonomy and epidemiology. Shigella Shigella are only classified by their O antigen, as they are non-motile and produce no flagella. Across the four "species", there are 15 + 11 + 20 + 2 = 48 serotypes. Some of these O antigens have equivalents in E. coli, which also cladistically include Shigella. Salmonella The Kauffman–White classification scheme is the basis for naming the manifold serovars of Salmonella. To date, more than 2600 different serotypes have been identified. A Salmonella serotype is determined by the unique combination of reactions of cell surface antigens. For Salmonella, the O and H antigens are used. There are two species of Salmonella: Salmonella bongori and Salmonella enterica. Salmonella enterica can be subdivided into six subspecies. The process of identifying the serovar of the bacterium consists of finding the formula of surface antigens which represents the variation of the bacterium. The traditional method for determining the antigen formula is agglutination reactions on slides. The agglutination between the antigen and the antibody is made with a specific antiserum, which reacts with the antigen to produce a mass. The antigen O is tested with a bacterial suspension from an agar plate, whereas the antigen H is tested with a bacterial suspension from a broth culture. The scheme classifies the serovar depending on its antigen formula obtained via the agglutination reactions. Additional serotyping methods and alternative subtyping methodologies have been reviewed by Wattiau et al. Streptococcus Streptococcus pneumoniae has 93 capsular serotypes. 91 of these serotypes use the Wzy enzyme pathway. The Wzy pathway is used by almost all Gram-positive bacteria, by lactococci and streptococci (exopolysaccharide), and is also responsible for group 1 and 4 Gram-negative capsules. Viruses Other organisms Many other organisms can be classified using recognition by antibodies. The malaria pathogen Plasmodium falciparum is notorious for its many surface antigen variants. A certain vaccine candidate is designed to cover all of these serotypes. Toxoplasma gondii can be classified into serotypes. Trypanosoma cruzi, which causes Chagas disease, can be serotyped using whole parasites. 
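To make the idea of an antigenic formula concrete, here is a minimal, Kauffman–White-style lookup in code. The two table entries are abbreviated illustrative formulas written for this sketch (they should not be relied on as an authoritative scheme, which covers well over 2600 serovars), and the function name is hypothetical.

```python
# Toy serotyping lookup: (O antigens, H phase 1, H phase 2) -> serovar name.
SCHEME = {
    (frozenset({"4", "5", "12"}), "i", "1,2"): "Salmonella ser. Typhimurium (illustrative entry)",
    (frozenset({"9", "12"}), "g,m", "-"):      "Salmonella ser. Enteritidis (illustrative entry)",
}

def identify_serovar(o_antigens, h_phase1, h_phase2):
    """Match slide-agglutination results against the (tiny) scheme above."""
    key = (frozenset(o_antigens), h_phase1, h_phase2)
    return SCHEME.get(key, "no match in this toy scheme")

# An isolate that agglutinates with O4, O5 and O12 antisera and expresses H antigens i and 1,2:
print(identify_serovar({"4", "5", "12"}, "i", "1,2"))
```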
See also Biovar Morphovar References External links HLA Allele and Haplotype Frequency Database Serology Speciation Biological classification Microbiology Infraspecific bacteria taxa Infraspecific virus taxa
Serotype
[ "Chemistry", "Biology" ]
1,613
[ "Evolutionary processes", "Speciation", "Microbiology", "nan", "Microscopy" ]
1,936,653
https://en.wikipedia.org/wiki/Samuel%20L.%20Braunstein
Samuel Leon Braunstein (born 1961) is a professor at the University of York, England. He is a member of a research group in non-standard computation and has a particular interest in quantum information, quantum computation, and black hole thermodynamics. Braunstein has written or edited three books and has published more than 140 papers, which have been cited over 36,000 times. His most important work is on quantum teleportation, and published in a paper titled Unconditional Quantum Teleportation. The paper has been cited more than 3,000 times and received significant coverage in both the scientific and mainstream press. In February 2006, Braunstein made the news due to his involvement in the first successful demonstration of quantum telecloning. From 2009, he began to research black hole thermodynamics, contributing to the black hole information paradox and the firewall paradox. Braunstein co-authored papers with Gilles Brassard and Simone Severini, with whom he introduced the Braunstein-Ghosh-Severini Entropy of a graph. Education Braunstein completed his PhD in 1988 at Caltech, under Carlton M. Caves. His dissertation was titled Novel Quantum States and Measurements. Academic career University of Melbourne - BSc and MSc in Physics California Institute of Technology - PhD in Physics, awarded in 1988 University of Arizona, USA - Research Associate (1988 - 1991) Technion, Israel - Lady Davis Fellow (1991 - 1993) Weizmann Institute of Science, Israel - Feinberg Fellow (1993 - 1995) University of Ulm, Germany - Humboldt Fellow (1995 - 1996) School of Informatics, University of Wales, Bangor, Wales - Lecturer through Professor (1996 - 2003) Department of Computer Science, University of York, England - Professor (2003-) Awards and honors 2001 — Fellow of the Institute of Physics 2003 — Royal Society Wolfson Research Merit Award 2008 — Fellow of The Optical Society 2011 — Fellow of the American Association for the Advancement of Science Books Samuel L. Braunstein: Quantum Computing: Where Do We Want To Go Tomorrow?, Wiley-VCH, Samuel L. Braunstein and Hoi-Kwong Lo: Scalable Quantum Computers: Paving the Way to Realization, Wiley-VCH, Samuel L. Braunstein and Arun K. Pati (Eds.): Quantum Information with Continuous Variables, Springer, See also Quantum Aspects of Life Arun K. Pati Continuous-variable quantum information Notes External links Sam Braunstein's homepage Abstract and PDF of Unconditional Quantum Teleportation Braunstein's math genealogy Living people 1961 births Royal Society Wolfson Research Merit Award holders 21st-century Australian physicists Australian Jews Quantum physicists Academics of the University of York Quantum information scientists
Samuel L. Braunstein
[ "Physics" ]
557
[ "Quantum physicists", "Quantum mechanics" ]
1,936,865
https://en.wikipedia.org/wiki/Glucose-6-phosphate%20isomerase
Glucose-6-phosphate isomerase (GPI), alternatively known as phosphoglucose isomerase/phosphoglucoisomerase (PGI) or phosphohexose isomerase (PHI), is an enzyme ( ) that in humans is encoded by the GPI gene on chromosome 19. This gene encodes a member of the glucose phosphate isomerase protein family. The encoded protein has been identified as a moonlighting protein based on its ability to perform mechanistically distinct functions. In the cytoplasm, the gene product functions as a glycolytic enzyme (glucose-6-phosphate isomerase) that interconverts glucose-6-phosphate (G6P) and fructose-6-phosphate (F6P). Extracellularly, the encoded protein (also referred to as neuroleukin) functions as a neurotrophic factor that promotes survival of skeletal motor neurons and sensory neurons, and as a lymphokine that induces immunoglobulin secretion. The encoded protein is also referred to as autocrine motility factor (AMF) based on an additional function as a tumor-secreted cytokine and angiogenic factor. Defects in this gene are the cause of nonspherocytic hemolytic anemia, and a severe enzyme deficiency can be associated with hydrops fetalis, immediate neonatal death and neurological impairment. Alternative splicing results in multiple transcript variants. [provided by RefSeq, Jan 2014] Structure Functional GPI is a 64-kDa dimer composed of two identical monomers. The two monomers interact notably through the two protrusions in a hugging embrace. The active site of each monomer is formed by a cleft between the two domains and the dimer interface. GPI monomers are made of two domains, one made of two separate segments called the large domain and the other made of the segment in between called the small domain. The two domains are each αβα sandwiches, with the small domain containing a five-strand β-sheet surrounded by α-helices while the large domain has a six-stranded β-sheet. The large domain, located at the N-terminal, and the C-terminal of each monomer also contain "arm-like" protrusions. Several residues in the small domain serve to bind phosphate, while other residues, particularly His388, from the large and C-terminal domains are crucial to the sugar ring-opening step catalyzed by this enzyme. Since the isomerization activity occurs at the dimer interface, the dimer structure of this enzyme is critical to its catalytic function. It is hypothesized that serine phosphorylation of this protein induces a conformational change to its secretory form. Mechanism The mechanism that GPI uses to interconvert glucose 6-phosphate and fructose 6-phosphate (aldose to ketose) consists of three major steps: opening the glucose ring, isomerizing glucose into fructose through an enediol intermediate, and closing the fructose ring. Isomerization of glucose Glucose 6-phosphate binds to GPI in its pyranose form. The ring is opened in a "push-pull" mechanism by His388, which protonates the C5 oxygen, and Lys518, which deprotonates the C1 hydroxyl group. This creates an open chain aldose. Then, the substrate is rotated about the C3-C4 bond to position it for isomerization. At this point, Glu357 deprotonates C2 to create a cis-enediolate intermediate stabilized by Arg272. To complete the isomerization, Glu357 donates its proton to C1, the C2 hydroxyl group loses its proton and the open-chain ketose fructose 6-phosphate is formed. Finally, the ring is closed by rotating the substrate about the C3-C4 bond again and deprotonating the C5 hydroxyl with Lys518. 
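The isomerization described above is freely reversible, and, as the next section notes, its direction in the cell is set by the relative G6P and F6P concentrations. As a rough numerical illustration, the sketch below estimates the equilibrium mixture from a standard Gibbs energy change of about +1.7 kJ/mol for G6P → F6P; that value is a commonly quoted textbook figure assumed here for illustration, not a number taken from this article.

```python
import math

R = 8.314        # gas constant, J/(mol*K)
T = 310.15       # approximate body temperature, K
dG0 = 1.7e3      # assumed standard Gibbs energy change for G6P -> F6P, J/mol

K_eq = math.exp(-dG0 / (R * T))      # equilibrium ratio [F6P]/[G6P]
frac_f6p = K_eq / (1.0 + K_eq)

print(f"K_eq ~ {K_eq:.2f}")                                        # ~0.5
print(f"equilibrium mixture: ~{frac_f6p:.0%} F6P, ~{1 - frac_f6p:.0%} G6P")
```

With these assumed numbers the equilibrium lies modestly on the G6P side, so the direction of net flux in vivo is set by how quickly the downstream pathway consumes F6P.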
When going from fructose-6-phosphate toward glucose-6-phosphate, the result could be mannose-6-phosphate if carbon C2 is given the wrong chirality, but the enzyme does not permit that result except at a very low, non-physiological, rate. Function This gene belongs to the GPI family. The protein encoded by this gene is a dimeric enzyme that catalyzes the reversible isomerization of G6P and F6P. Since the reaction is reversible, its direction is determined by G6P and F6P concentrations. glucose 6-phosphate ↔ fructose 6-phosphate The protein has different functions inside and outside the cell. In the cytoplasm, the protein is involved in glycolysis and gluconeogenesis, as well as the pentose phosphate pathway. Outside the cell, it functions as a neurotrophic factor for spinal and sensory neurons, in which role it is called neuroleukin. The same protein is also secreted by cancer cells, where it is called autocrine motility factor and stimulates metastasis. Extracellular GPI is also known to function as a maturation factor. Neuroleukin Though GPI and neuroleukin were originally treated as separate proteins, cloning technology demonstrated that they are almost identical. Neuroleukin is a neurotrophic factor for spinal and sensory neurons. It is found in large amounts in muscle, brain, heart, and kidneys. Neuroleukin also acts as a lymphokine secreted by T cells stimulated by lectin. It induces immunoglobulin secretion in B cells as part of a response that activates antibody-secreting cells. Autocrine motility factor Cloning experiments also revealed that GPI is identical to the protein known as autocrine motility factor (AMF). AMF is produced and secreted by cancer cells and stimulates cell growth and motility as a growth factor. AMF is thought to play a key role in cancer metastasis by activating the MAPK/ERK or PI3K/AKT pathways. In the PI3K/AKT pathway, AMF interacts with gp78/AMFR to regulate ER calcium release, and therefore protects against apoptosis in response to ER stress. Prokaryotic orthologs In some archaea and bacteria, glucose-6-phosphate isomerase activity occurs via a bifunctional enzyme that also exhibits phosphomannose isomerase (PMI) activity. Though not closely related to eukaryotic GPIs, the bifunctional enzyme is similar enough that the sequence includes the cluster of threonines and serines that forms the sugar phosphate-binding site in conventional GPI. The enzyme is thought to use the same catalytic mechanisms for both glucose ring-opening and isomerization for the interconversion of G6P to F6P. Clinical significance A deficiency of GPI is responsible for 4% of the hemolytic anemias due to glycolytic enzyme deficiencies. Several cases of GPI deficiency have recently been identified. Elevated serum GPI levels have been used as a prognostic biomarker for colorectal, breast, lung, kidney, gastrointestinal, and other cancers. As AMF, GPI is credited with regulating cell migration during invasion and metastasis. One study showed that the external layers of breast tumor spheroids (BTS) secrete GPI, which induces epithelial–mesenchymal transition (EMT), invasion, and metastasis in BTS. The GPI inhibitors ERI4P and 6PG were found to block metastasis of BTS but not BTS glycolysis or fibroblast viability. In addition, GPI is secreted exclusively by tumor cells and not normal cells. For these reasons, GPI inhibitors may be a safer, more targeted approach for anti-cancer therapy. 
GPI also participates in a positive feedback loop with HER2, a major breast cancer therapeutic target, as GPI enhances HER2 expression and HER2 overexpression enhances GPI expression, and so on. As a result, GPI activity likely confers resistance in breast cancer cells against HER2-based therapies using Herceptin/Trastuzumab, and should be considered as an additional target when treating patients. Applications Human GPI is capable of inducing arthritis in mice with varied genetic backgrounds via intradermal injection. See also Fructose-1-phosphate-aldolase enzyme, which converts fructose to glucose Interactions GPI is known to interact with: AMFR, and HER2. Interactive pathway map References Further reading External links Glucose-6-phosphate isomerase in PROSITE Phosphoglucose Isomerase Glucose phosphate isomerase deficiency Protein domains EC 5.3.1 Tumor markers Glycolysis enzymes Glycolysis
Glucose-6-phosphate isomerase
[ "Chemistry", "Biology" ]
1,906
[ "Carbohydrate metabolism", "Biomarkers", "Tumor markers", "Glycolysis", "Protein classification", "Protein domains", "Chemical pathology" ]
1,936,945
https://en.wikipedia.org/wiki/Dual%20superconductor%20model
In the theory of quantum chromodynamics, dual superconductor models attempt to explain confinement of quarks in terms of an electromagnetic dual theory of superconductivity. Overview In an electromagnetic dual theory the roles of electric and magnetic fields are interchanged. The BCS theory of superconductivity explains superconductivity as the result of the condensation of electric charges to Cooper pairs. In a dual superconductor an analogous effect occurs through the condensation of magnetic charges (also called magnetic monopoles). In ordinary electromagnetic theory, no monopoles have been shown to exist. However, in quantum chromodynamics — the theory of color charge which explains the strong interaction between quarks — the color charges can be viewed as (non-abelian) analogues of electric charges and corresponding magnetic monopoles are known to exist. Dual superconductor models posit that condensation of these magnetic monopoles in a superconductive state explains color confinement — the phenomenon that only neutrally colored bound states are observed at low energies. Qualitatively, confinement in dual superconductor models can be understood as a result of the dual to the Meissner effect. The Meissner effect says that a superconducting metal will try to expel magnetic field lines from its interior. If a magnetic field is forced to run through the superconductor, the field lines are compressed in magnetic flux "tubes" known as fluxons. In a dual superconductor the roles of magnetic and electric fields are exchanged and the Meissner effect tries to expel electric field lines. Quarks and antiquarks carry opposite color charges, and for a quark–antiquark pair 'electric' field lines run from the quark to the antiquark. If the quark–antiquark pair are immersed in a dual superconductor, then the electric field lines get compressed to a flux tube. The energy associated to the tube is proportional to its length, and the potential energy of the quark–antiquark is proportional to their separation. The potential energy of colored objects becomes infinite in the limit of large separation, all else being equal, though in reality, when it becomes large enough to form a new quark-anti-quark pair from the vacuum, these split the flux tube and bind to the original anti-quark and quark. A quark–antiquark will therefore always bind regardless of their separation, which explains why no unbound quarks are ever found. Dual superconductors are described by (a dual to) the Landau–Ginzburg model, which is equivalent to the Abelian Higgs model. The MIT bag model boundary conditions for gluon fields are those of the dual color superconductor. The dual superconductor model is motivated by several observations in calculations using lattice gauge theory. The model, however, also has some shortcomings. In particular, although it confines colored quarks, it fails to confine color of some gluons, allowing colored bound states at energies observable in particle colliders. Notes References See also QCD vacuum Maximum Abelian gauge Gauge theories Quantum chromodynamics
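To put a rough number on the linear potential described above, the sketch below evaluates the flux-tube energy at a few quark–antiquark separations and estimates where string breaking would set in. The string tension (~0.9 GeV/fm) and the pair-creation threshold (~0.7 GeV for a light quark–antiquark pair) are order-of-magnitude values assumed purely for illustration; they are not taken from this article.

```python
# Linear ("flux tube") potential between a static quark and antiquark: V(r) = sigma * r.
sigma = 0.9          # assumed string tension, GeV per fm
threshold = 0.7      # assumed energy cost of creating a light quark-antiquark pair, GeV

for r_fm in (0.2, 0.5, 1.0, 2.0):
    print(f"separation {r_fm:3.1f} fm -> flux-tube energy ~ {sigma * r_fm:4.2f} GeV")

# Naive estimate of the separation at which the stored energy suffices to
# pop a new pair out of the vacuum and "break" the string.
r_break = threshold / sigma
print(f"string breaking expected around r ~ {r_break:.1f} fm (order of magnitude)")
```

The point of the sketch is only that the stored energy grows without bound with separation, so well before two colored objects could be pulled apart it becomes energetically cheaper to create a new pair, which is the confinement behaviour the dual Meissner picture is meant to explain.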
Dual superconductor model
[ "Physics" ]
677
[ "Quantum mechanics", "Quantum physics stubs" ]
1,936,957
https://en.wikipedia.org/wiki/General%20transcription%20factor
General transcription factors (GTFs), also known as basal transcriptional factors, are a class of protein transcription factors that bind to specific sites (promoters) on DNA to activate transcription of genetic information from DNA to messenger RNA. GTFs, RNA polymerase, and the mediator (a multi-protein complex) constitute the basic transcriptional apparatus that first binds to the promoter and then starts transcription. GTFs are also intimately involved in the process of gene regulation, and most are required for life. A transcription factor is a protein that binds to specific DNA sequences (enhancer or promoter), either alone or with other proteins in a complex, to control the rate of transcription of genetic information from DNA to messenger RNA by promoting (serving as an activator) or blocking (serving as a repressor) the recruitment of RNA polymerase. As a class of protein, general transcription factors bind to promoters along the DNA sequence or form a large transcription preinitiation complex to activate transcription. General transcription factors are necessary for transcription to occur. Types In bacteria, transcription initiation requires an RNA polymerase and a single GTF: sigma factor. In archaea and eukaryotes, transcription initiation requires an RNA polymerase and a set of multiple GTFs to form a transcription preinitiation complex. Transcription initiation by eukaryotic RNA polymerase II involves the following GTFs: TFIIA – stabilizes the interaction between the TATA box and TFIID/TATA binding protein (TBP) TFIIB – recognizes the B recognition element (BRE) in promoters TFIID – binds to TBP and recognizes TBP associated factors (TAFs), also adds promoter selectivity TFIIE – attracts and regulates TFIIH TFIIF – stabilizes RNA polymerase interaction with TBP and TFIIB; helps attract TFIIE and TFIIH TFIIH – unwinds DNA at the transcription start point, phosphorylates Ser5 of the RNA polymerase CTD, releases RNA polymerase from the promoter Function and mechanism In bacteria A sigma factor is a protein needed only for initiation of RNA synthesis in bacteria. Sigma factors provide promoter recognition specificity to the RNA polymerase (RNAP) and contribute to DNA strand separation, and then dissociate from the RNA polymerase core enzyme following transcription initiation. The RNA polymerase core associates with the sigma factor to form the RNA polymerase holoenzyme. The sigma factor reduces the affinity of RNA polymerase for nonspecific DNA while increasing specificity for promoters, allowing transcription to initiate at correct sites. The core enzyme of RNA polymerase has five protein subunits (~400 kDa). Because of its association with the sigma factor, the complete RNA polymerase therefore has six subunits: the sigma subunit, in addition to the two alpha (α), one beta (β), one beta prime (β'), and one omega (ω) subunits that make up the core enzyme (~450 kDa). In addition, many bacteria can have multiple alternative σ factors. The level and activity of the alternative σ factors are highly regulated and can vary depending on environmental or developmental signals. In archaea and eukaryotes The transcription preinitiation complex is a large complex of proteins that is necessary for the transcription of protein-coding genes in eukaryotes and archaea. It attaches to the promoter of the DNA (e.g., the TATA box) and helps position RNA polymerase II at the gene transcription start site, denatures the DNA, and then starts transcription. 
Transcription preinitiation complex assembly The assembly of transcription preinitiation complex follows these steps: TATA binding protein (TBP), a subunit of TFIID (the largest GTF) binds to the promoter (TATA box), creating a sharp bend in the promoter DNA. Then the TBP-TFIIA interactions recruit TFIIA to the promoter. TBP-TFIIB interactions recruit TFIIB to the promoter. RNA polymerase II and TFIIF assemble to form the Polymerase II complex. TFIIB helps the Pol II complex bind correctly. TFIIE and TFIIH then bind to the complex and form the transcription preinitiation complex. TFIIA/B/E/H leave once RNA elongation begins. TFIID will stay until elongation is finished. Subunits within TFIIH that have ATPase and helicase activity create negative superhelical tension in the DNA. This negative superhelical tension causes approximately one turn of DNA to unwind and form the transcription bubble. The template strand of the transcription bubble engages with the RNA polymerase II active site, then RNA synthesis starts. References External links Holoenzymes at the US National Library of Medicine Medical Subject Headings DNA Transcription YouTube Video Transcription factors
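Since the assembly order above is essentially a small protocol, it can be captured directly as a data structure; the sketch below simply re-encodes the steps and roles listed in this section (the factor names and the note about which factors leave after initiation are taken from the text; the variable names are of course arbitrary).

```python
# Order of assembly of the RNA polymerase II preinitiation complex, as described above.
PIC_ASSEMBLY = [
    ("TFIID (via its TBP subunit)", "binds the TATA box and bends the promoter DNA"),
    ("TFIIA",                       "recruited through TBP-TFIIA interactions"),
    ("TFIIB",                       "recruited through TBP-TFIIB interactions"),
    ("RNA polymerase II + TFIIF",   "polymerase complex binds, positioned correctly with help from TFIIB"),
    ("TFIIE",                       "joins the complex"),
    ("TFIIH",                       "joins the complex; ATPase/helicase activity opens the transcription bubble"),
]

LEAVES_AFTER_INITIATION = {"TFIIA", "TFIIB", "TFIIE", "TFIIH"}   # TFIID stays until elongation finishes

for step, (factor, role) in enumerate(PIC_ASSEMBLY, start=1):
    print(f"step {step}: {factor} - {role}")
```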
General transcription factor
[ "Chemistry", "Biology" ]
1,004
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
1,937,128
https://en.wikipedia.org/wiki/Phosphorus%20halide
In chemistry, there are three series of binary phosphorus halides, containing phosphorus in the oxidation states +5, +3 and +2. All compounds have been described, in varying degrees of detail, although serious doubts have been cast on the existence of PI5. Mixed chalcogen halides also exist. Oxidation state +5 (PX5) In the gas phase, the phosphorus pentahalides have trigonal bipyramidal molecular geometry as explained by VSEPR theory. Phosphorus pentafluoride is a relatively inert gas, notable as a mild Lewis acid and a fluoride ion acceptor. It is a fluxional molecule in which the axial (ax) and equatorial (eq) fluorine atoms interchange positions by the Berry pseudorotation mechanism. Phosphorus pentachloride, phosphorus pentabromide, and phosphorus heptabromide are ionic in the solid and liquid states; PCl5 is formulated as PCl4+PCl6–, but in contrast, PBr5 is formulated as PBr4+ Br−, and PBr7 is formulated as PBr4+ Br3−. They are widely used as chlorinating and brominating agents in organic chemistry. Oxidation state +3 (PX3) The phosphorus(III) halides are the best known of the three series. They are usually prepared by direct reaction of the elements, or by transhalogenation. Phosphorus trifluoride is used as a ligand in coordination chemistry, where it resembles carbon monoxide. Phosphorus trichloride is a major industrial chemical and a widely used starting material for phosphorus chemistry. Phosphorus tribromide is used in organic chemistry to convert alcohols to alkyl bromides and carboxylic acids to acyl bromides (e.g. in the Hell-Volhard-Zelinsky reaction). Phosphorus triiodide also finds use in organic chemistry, as a mild oxygen acceptor. The trihalides are fairly readily oxidized by chalcogens to give the corresponding oxyhalides or equivalents. Oxidation state +2 (P2X4) Phosphorus(II) halides may be prepared by passing an electric discharge through a mixture of the trihalide vapour and hydrogen gas. The relatively stable P2I4 is known to have a trans, bent configuration similar to hydrazine and finds some uses in organic syntheses; the others are of purely academic interest at the present time. Diphosphorus tetrabromide is particularly poorly described. They are subhalides of phosphorus. Oxyhalides, thiohalides and selenohalides The oxyhalides may be prepared from the corresponding trihalides by reaction with organic peroxides or ozone: they are sometimes referred to as phosphoryl halides. The thiohalides, also known as thiophosphoryl halides, may be prepared from the trihalides by reaction with elemental sulfur in an inert solvent. The corresponding selenohalides are also known. The oxyhalides and thiohalides are significantly more electrophilic than the corresponding phosphorus(III) species, and present a significant toxic hazard. References External links WebElements NIST Standard Reference Database Inorganic phosphorus compounds
Phosphorus halide
[ "Chemistry" ]
676
[ "Inorganic phosphorus compounds", "Inorganic compounds" ]
1,937,145
https://en.wikipedia.org/wiki/S%20wave
In seismology and other areas involving elastic waves, S waves, secondary waves, or shear waves (sometimes called elastic S waves) are a type of elastic wave and are one of the two main types of elastic body waves, so named because they move through the body of an object, unlike surface waves. S waves are transverse waves, meaning that the direction of particle movement of an S wave is perpendicular to the direction of wave propagation, and the main restoring force comes from shear stress. Therefore, S waves cannot propagate in liquids with zero (or very low) viscosity; however, they may propagate in liquids with high viscosity. The name secondary wave comes from the fact that they are the second type of wave to be detected by an earthquake seismograph, after the compressional primary wave, or P wave, because S waves travel more slowly in solids. Unlike P waves, S waves cannot travel through the molten outer core of the Earth, and this causes a shadow zone for S waves opposite to their origin. They can still propagate through the solid inner core: when a P wave strikes the boundary of molten and solid cores at an oblique angle, S waves will form and propagate in the solid medium. When these S waves hit the boundary again at an oblique angle, they will in turn create P waves that propagate through the liquid medium. This property allows seismologists to determine some physical properties of the Earth's inner core. History In 1830, the mathematician Siméon Denis Poisson presented to the French Academy of Sciences an essay ("memoir") with a theory of the propagation of elastic waves in solids. In his memoir, he states that an earthquake would produce two different waves: one having a certain speed $a$ and the other having a speed $a/\sqrt{3}$. At a sufficient distance from the source, when they can be considered plane waves in the region of interest, the first kind consists of expansions and compressions in the direction perpendicular to the wavefront (that is, parallel to the wave's direction of motion); while the second consists of stretching motions occurring in directions parallel to the front (perpendicular to the direction of motion). Theory Isotropic medium For the purpose of this explanation, a solid medium is considered isotropic if its strain (deformation) in response to stress is the same in all directions. Let $\mathbf{u} = (u_1, u_2, u_3)$ be the displacement vector of a particle of such a medium from its "resting" position $\mathbf{x} = (x_1, x_2, x_3)$ due to elastic vibrations, understood to be a function of the rest position $\mathbf{x}$ and time $t$. The deformation of the medium at that point can be described by the strain tensor $e$, the 3×3 matrix whose elements are $e_{ij} = \tfrac{1}{2}\left(\partial_i u_j + \partial_j u_i\right)$, where $\partial_i$ denotes partial derivative with respect to position coordinate $x_i$. The strain tensor is related to the 3×3 stress tensor $\tau$ by the equation $\tau_{ij} = \lambda \delta_{ij} \sum_k e_{kk} + 2\mu\, e_{ij}$. Here $\delta_{ij}$ is the Kronecker delta (1 if $i = j$, 0 otherwise) and $\lambda$ and $\mu$ are the Lamé parameters ($\mu$ being the material's shear modulus). It follows that $\tau_{ij} = \lambda \delta_{ij} \sum_k \partial_k u_k + \mu \left(\partial_i u_j + \partial_j u_i\right)$. From Newton's law of inertia, one also gets $\rho\, \partial_t^2 u_i = \sum_j \partial_j \tau_{ij}$, where $\rho$ is the density (mass per unit volume) of the medium at that point, and $\partial_t$ denotes partial derivative with respect to time. Combining the last two equations one gets the seismic wave equation in homogeneous media $\rho\, \partial_t^2 u_i = \lambda\, \partial_i \sum_k \partial_k u_k + \mu \sum_j \partial_j \left(\partial_i u_j + \partial_j u_i\right)$. Using the nabla operator notation of vector calculus, $\nabla = (\partial_1, \partial_2, \partial_3)$, with some approximations, this equation can be written as $\rho\, \partial_t^2 \mathbf{u} = \left(\lambda + 2\mu\right) \nabla\left(\nabla \cdot \mathbf{u}\right) - \mu\, \nabla \times \left(\nabla \times \mathbf{u}\right)$. Taking the curl of this equation and applying vector identities, one gets $\partial_t^2 \left(\nabla \times \mathbf{u}\right) = \frac{\mu}{\rho}\, \nabla^2 \left(\nabla \times \mathbf{u}\right)$. This formula is the wave equation applied to the vector quantity $\nabla \times \mathbf{u}$, which is the material's shear strain. 
Its solutions, the S waves, are linear combinations of sinusoidal plane waves of various wavelengths and directions of propagation, but all with the same speed $\beta = \sqrt{\mu/\rho}$. Assuming that the medium of propagation is linear, elastic, isotropic, and homogeneous, this equation can be rewritten as $\omega = \beta k$, where $\omega$ is the angular frequency and $k$ is the wavenumber. Thus, $\beta = \omega/k$. Taking the divergence of the seismic wave equation in homogeneous media, instead of the curl, yields a wave equation describing propagation of the quantity $\nabla \cdot \mathbf{u}$, which is the material's compression strain. The solutions of this equation, the P waves, travel at the faster speed $\alpha = \sqrt{(\lambda + 2\mu)/\rho}$. The steady state SH waves are defined by the Helmholtz equation $(\nabla^2 + k^2)\,\mathbf{u} = 0$, where $k = \omega/\beta$ is the wave number. S waves in viscoelastic materials As in an elastic medium, in a viscoelastic material the speed of a shear wave is described by a similar relationship $\beta(\omega) = \sqrt{G(\omega)/\rho}$; however, here $G(\omega)$ is a complex, frequency-dependent shear modulus and $\beta(\omega)$ is the frequency-dependent phase velocity. One common approach to describing the shear modulus in viscoelastic materials is through the Voigt model, which states $G(\omega) = \mu + i\omega\eta$, where $\mu$ is the stiffness of the material and $\eta$ is the viscosity. S wave technology Magnetic resonance elastography Magnetic resonance elastography (MRE) is a method for studying the properties of biological materials in living organisms by propagating shear waves at desired frequencies throughout the desired organic tissue. This method uses a vibrator to send the shear waves into the tissue and magnetic resonance imaging to view the response in the tissue. The wave speed and wavelength are then measured to determine elastic properties such as the shear modulus. MRE has seen use in studies of a variety of human tissues including liver, brain, and bone tissues. See also Earthquake Early Warning (Japan) Lamb waves Longitudinal wave Love wave Rayleigh wave Shear wave splitting References Further reading Waves Seismology
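The speed relationships above lend themselves to a quick numerical check. The following is a minimal sketch (not part of the article) that computes the P- and S-wave speeds from assumed, representative crustal-rock values of the Lamé parameters and density; the specific numbers are illustrative only.

```python
import math

def seismic_wave_speeds(lam: float, mu: float, rho: float) -> tuple[float, float]:
    """Return (alpha, beta): P- and S-wave speeds for an isotropic,
    homogeneous, linear-elastic medium.

    lam, mu : Lame parameters in Pa (mu is the shear modulus)
    rho     : density in kg/m^3
    """
    alpha = math.sqrt((lam + 2.0 * mu) / rho)  # compressional (P) wave speed
    beta = math.sqrt(mu / rho)                 # shear (S) wave speed
    return alpha, beta

# Illustrative values only (roughly crustal rock), not taken from the article.
alpha, beta = seismic_wave_speeds(lam=3.0e10, mu=3.0e10, rho=2700.0)
print(f"P wave ~ {alpha:.0f} m/s, S wave ~ {beta:.0f} m/s, alpha/beta = {alpha/beta:.2f}")
# For lam == mu (a "Poisson solid"), alpha/beta = sqrt(3) ~ 1.73,
# which is why S waves arrive after P waves on a seismogram.
```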
S wave
[ "Physics" ]
1,087
[ "Waves", "Physical phenomena", "Motion (physics)" ]
1,937,408
https://en.wikipedia.org/wiki/Watchmaker%20analogy
The watchmaker analogy or watchmaker argument is a teleological argument, an argument for the existence of God. In broad terms, the watchmaker analogy states that just as it is readily observed that a watch (e.g.: a pocket watch) did not come to be accidentally or on its own but rather through the intentional handiwork of a skilled watchmaker, it is also readily observed that nature did not come to be accidentally or on its own but through the intentional handiwork of an intelligent designer. The watchmaker analogy originated in natural theology and is often used to argue for the pseudoscientific concept of intelligent design. The analogy states that a design implies a designer, namely an intelligent designer, i.e. a creator deity. The watchmaker analogy was given by William Paley in his 1802 book Natural Theology or Evidences of the Existence and Attributes of the Deity. The original analogy played a prominent role in natural theology and the "argument from design," where it was used to support arguments for the existence of God and for the intelligent design of the universe, in both Christianity and Deism. Prior to Paley, however, Sir Isaac Newton, René Descartes, and others from the time of the Scientific Revolution had each believed "that the physical laws he [each] had uncovered revealed the mechanical perfection of the workings of the universe to be akin to a watch, wherein the watchmaker is God." The 1859 publication of Charles Darwin's book on natural selection put forward an alternative to the watchmaker analogy as an explanation for complexity and adaptation. In the 19th century, deists, who championed the watchmaker analogy, held that Darwin's theory fit with "the principle of uniformitarianism—the idea that all processes in the world occur now as they have in the past" and that deistic evolution "provided an explanatory framework for understanding species variation in a mechanical universe." When evolutionary biology began being taught in American high schools in the 1960s, Christian fundamentalists used versions of the argument to dispute the concepts of evolution and natural selection, and there was renewed interest in the watchmaker argument. Evolutionary biologist Richard Dawkins referred to the analogy in his 1986 book The Blind Watchmaker when explaining the mechanism of evolution. Others, however, consider the watchmaker analogy to be compatible with evolutionary creation, opining that the two concepts are not mutually exclusive. History Ancient predecessor In the second century Epictetus argued that, by analogy to the way a sword is made by a craftsman to fit with a scabbard, so human genitals and the desire of humans to fit them together suggest a type of design or craftsmanship of the human form. Epictetus attributed this design to a type of Providence woven into the fabric of the universe, rather than to a personal monotheistic god. Scientific Revolution The Scientific Revolution "nurtured a growing awareness" that "there were universal laws of nature at work that ordered the movement of the world and its parts." Amos Yong writes that in "astronomy, the Copernican revolution regarding the heliocentrism of the solar system, Johannes Kepler's (1571–1630) three laws of planetary motion, and Isaac Newton's (1642–1727) law of universal gravitation—laws of gravitation and of motion, and notions of absolute space and time—all combined to establish the regularities of heavenly and earthly bodies". 
Simultaneously, the development of machine technology and the emergence of the mechanical philosophy encouraged mechanical imagery unlikely to have come to the fore in previous ages. With such a backdrop, "deists suggested the watchmaker analogy: just as watches are set in motion by watchmakers, after which they operate according to their pre-established mechanisms, so also was the world begun by God as creator, after which it and all its parts have operated according to their pre-established natural laws. With these laws perfectly in place, events have unfolded according to the prescribed plan." For Sir Isaac Newton, "the regular motion of the planets made it reasonable to believe in the continued existence of God". Newton also upheld the idea that "like a watchmaker, God was forced to intervene in the universe and tinker with the mechanism from time to time to ensure that it continued operating in good working order". Similarly to Newton, René Descartes (1596–1650) speculated on "the cosmos as a great time machine operating according to fixed laws, a watch created and wound up by the great watchmaker". William Paley Watches and timepieces have been used as examples of complicated technology in philosophical discussions. For example, Cicero, Voltaire and René Descartes all used timepieces in arguments regarding purpose. The watchmaker analogy, as described here, was used by Bernard le Bovier de Fontenelle in 1686, but was most famously formulated by Paley. Paley used the watchmaker analogy in his book Natural Theology, or Evidences of the Existence and Attributes of the Deity collected from the Appearances of Nature, published in 1802. In it, Paley wrote that if a pocket watch is found on a heath, it is most reasonable to assume that someone dropped it and that it was made by at least one watchmaker, not by natural forces: Paley went on to argue that the complex structures of living things and the remarkable adaptations of plants and animals required an intelligent designer. He believed the natural world was the creation of God and showed the nature of the creator. According to Paley, God had carefully designed "even the most humble and insignificant organisms" and all of their minute features (such as the wings and antennae of earwigs). He believed, therefore, that God must care even more for humanity. Paley recognised that there is great suffering in nature and nature appears to be indifferent to pain. His way of reconciling that with his belief in a benevolent God was to assume that life had more pleasure than pain. As a side note, a charge of wholesale plagiarism from this book was brought against Paley in The Athenaeum for 1848, but the famous illustration of the watch was not peculiar to Nieuwentyt and had been used by many others before either Paley or Nieuwentyt. But the charge of plagiarism was based on more similarities. For example, Nieuwentyt wrote "in the middle of a Sandy down, or in a desart {sic} and solitary Place, where few People are used to pass, any one should find a Watch ..." Joseph Butler William Paley taught the works of Joseph Butler and appears to have built on Butler's 1736 design arguments of inferring a designer from evidence of design. Butler noted: "As the manifold Appearances of Design and of final Causes, in the Constitution of the World, prove it to be the Work of an intelligent Mind ... The appearances of Design and of final Causes in the constitution of nature as really prove this acting agent to be an intelligent Designer... 
ten thousand Instances of Design, cannot but prove a Designer.". Jean-Jacques Rousseau Rousseau also mentioned the watchmaker theory. He wrote the following in his 1762 book, Emile: I am like a man who sees the works of a watch for the first time; he is never weary of admiring the mechanism, though he does not know the use of the instrument and has never seen its face. I do not know what this is for, says he, but I see that each part of it is fitted to the rest, I admire the workman in the details of his work, and I am quite certain that all these wheels only work together in this fashion for some common end which I cannot perceive. Let us compare the special ends, the means, the ordered relations of every kind, then let us listen to the inner voice of feeling; what healthy mind can reject its evidence? Unless the eyes are blinded by prejudices, can they fail to see that the visible order of the universe proclaims a supreme intelligence? What sophisms must be brought together before we fail to understand the harmony of existence and the wonderful co-operation of every part for the maintenance of the rest? Criticism David Hume Before Paley published his book, David Hume (1711–1776) had already put forward a number of philosophical criticisms of the watch analogy, and to some extent anticipated the concept of natural selection. His criticisms can be separated into three major distinctions. His first objection is that we have no experience of world-making. Hume highlighted the fact that everything we claim to know the cause of, we have derived the inductions from previous experiences of similar objects being created or seen the object itself being created ourselves. For example, with a watch, we know it has to be created by a watchmaker because we can observe it being made and compare it to the making of other similar watches or objects to deduce they have alike causes in their creation. However, he argues that we have no experience of the universe's creation or any other universe's creations to compare our own universe to and never will; therefore, it would be illogical to infer that our universe has been created by an intelligent designer in the same way that a watch has. The second criticism that Hume offers is about the form of the argument as an analogy in itself. An analogical argument claims that because object X (a watch) is like object Y (the universe) in one respect, both are therefore probably alike in another, hidden, respect (their cause, having to be created by an intelligent designer). He points out that for an argument from analogy to be successful, the two things that are being compared have to have an adequate number of similarities that are relevant to the respect that are analogised. For example, a kitten and a lion may be very similar in many respects, but just because a lion makes a "roar", it would not be correct to infer a kitten also "roars": the similarities between the two objects being not enough and the degree of relevance to what sound they make being not relevant enough. Hume then argues that the universe and a watch also do not have enough relevant or close similarities to infer that they were both created the same way. For example, the universe is made of organic natural material, but the watch is made of artificial mechanic materials. He claims that in the same respect, the universe could be argued to be more analogous to something more organic such as a vegetable (which we can observe for ourselves does not need a 'designer' or a 'watchmaker' to be created). 
Although he admits the analogy of a universe to a vegetable to seem ridiculous, he says that it is just as ridiculous to analogize the universe with a watch. The third criticism that Hume offers is that even if the argument did give evidence for a designer; it still gives no evidence for the traditional 'omnipotent', 'benevolent' (all-powerful and all-loving) God of traditional Christian theism. One of the main assumptions of Paley's argument is that 'like effects have like causes'; or that machines (like the watch) and the universe have similar features of design and so both also have the same cause of their existence: they must both have an intelligent designer. However, Hume points out that what Paley does not comprehend is to what extent 'like causes' extend: how similar the creation of a universe is to the creation of a watch. Instead, Paley moves straight to the conclusion that this designer of the universe is the 'God' he believes in of traditional Christianity. Hume, however takes the idea of 'like causes' and points out some potential absurdities in how far the 'likeness' of these causes could extend to if the argument were taken further as to explain this. One example that he uses is how a machine or a watch is usually designed by a whole team of people rather than just one person. Surely, if we are analogizing the two in this way, it would point to there being a group of gods who created the universe, not just a single being. Another example he uses is that complex machines are usually the result of many years of trial and error with every new machine being an improved version of the last. Also by analogy of the two, would that not hint that the universe could also have been just one of many of God's 'trials' and that there are much better universes out there? However, if that were taken to be true, surely the 'creator' of it all would not be 'all loving' and 'all powerful' if they had to carry out the process of 'trial and error' when creating the universe? Hume also points out there is still a possibility that the universe could have been created by random chance but still show evidence of design as the universe is eternal and would have an infinite amount of time to be able to form a universe so complex and ordered as our own. He called that the 'Epicurean hypothesis'. It argued that when the universe was first created, the universe was random and chaotic, but if the universe is eternal, over an unlimited period of time, natural forces could have naturally 'evolved' by random particles coming together over time into the incredibly ordered system we can observe today without the need of an intelligent designer as an explanation. The last objection that he makes draws on the widely discussed problem of evil. He argues that all the daily unnecessary suffering that goes on everywhere within the world is yet another factor that pulls away from the idea that God is an 'omnipotent' 'benevolent' being. Charles Darwin When Darwin completed his studies of theology at Christ's College, Cambridge in 1831, he read Paley's Natural Theology and believed that the work gave rational proof of the existence of God. That was because living beings showed complexity and were exquisitely fitted to their places in a happy world. Subsequently, on the voyage of the Beagle, Darwin found that nature was not so beneficent, and the distribution of species did not support ideas of divine creation. 
In 1838, shortly after his return, Darwin conceived his theory that natural selection, rather than divine design, was the best explanation for gradual change in populations over many generations. He published the theory in On the Origin of Species in 1859, and in later editions, he noted responses that he had received: Darwin reviewed the implications of this finding in his autobiography: The idea that nature was governed by laws was already common, and in 1833, William Whewell as a proponent of the natural theology that Paley had inspired had written that "with regard to the material world, we can at least go so far as this—we can perceive that events are brought about not by insulated interpositions of Divine power, exerted in each particular case, but by the establishment of general laws." Darwin, who spoke of the "fixed laws" concurred with Whewell, writing in his second edition of On The Origin of Species: By the time that Darwin published his theory, theologians of liberal Christianity were already supporting such ideas, and by the late 19th century, their modernist approach was predominant in theology. In science, evolution theory incorporating Darwin's natural selection became completely accepted. Richard Dawkins In The Blind Watchmaker, Richard Dawkins argues that the watch analogy conflates the complexity that arises from living organisms that are able to reproduce themselves (and may become more complex over time) with the complexity of inanimate objects, unable to pass on any reproductive changes (such as the multitude of parts manufactured in a watch). The comparison breaks down because of this important distinction. In a BBC Horizon episode, also entitled The Blind Watchmaker, Dawkins described Paley's argument as being "as mistaken as it is elegant". In both contexts, he saw Paley as having made an incorrect proposal as to a certain problem's solution, but Dawkins did not disrespect him. In his essay The Big Bang, Steven Pinker discusses Dawkins's coverage of Paley's argument, adding: "Biologists today do not disagree with Paley's laying out of the problem. They disagree only with his solution." In his book The God Delusion, Dawkins argues that rather than luck, the evolution of human life is the result of natural selection. He suggests that it is fallacious to view "coming about by chance" and "coming about by design" as the only possibilities, with natural selection being the alternative to the existence of an intelligent designer. By amassing a large number of small changes, the theory of natural selection allows for a seemingly impossible end product to be produced. In addition, he argues that the watchmaker's creation of the watch implies that the watchmaker must be more complex than the watch. Design is top-down, someone or something more complex designs something less complex. To follow the line upwards demands that the watch was designed by a (necessarily more complex) watchmaker, the watchmaker must have been created by a more complex being than himself. So the question becomes who designed the designer? Dawkins argues that (a) this line continues ad infinitum, and (b) it does not explain anything. Evolution, on the other hand, takes a bottom-up approach; it explains how more complexity can arise gradually by building on or combining lesser complexity. 
Richerson and Boyd Biologist Peter Richerson and anthropologist Robert Boyd offer an oblique criticism by arguing that watches were not "hopeful monsters created by single inventors," but were created by watchmakers building up their skills in a cumulative fashion over time, each contributing to a watch-making tradition from which any individual watchmaker draws their designs. Contemporary usage In the early 20th century, the modernist theology of higher criticism was contested in the United States by Biblical literalists, who campaigned successfully against the teaching of evolution and began calling themselves creationists in the 1920s. When teaching of evolution was reintroduced into public schools in the 1960s, they adopted what they called creation science that had a central concept of design in similar terms to Paley's argument. That idea was then relabeled intelligent design, which presents the same analogy as an argument against evolution by natural selection without explicitly stating that the "intelligent designer" was God. The argument from the complexity of biological organisms was now presented as the irreducible complexity argument, the most notable proponent of which was Michael Behe, and, leveraging off the verbiage of information theory, the specified complexity argument, the most notable proponent of which was William Dembski. The watchmaker analogy was referenced in the 2005 Kitzmiller v. Dover Area School District trial. Throughout the trial, Paley was mentioned several times. The defense's expert witness John Haught noted that both intelligent design and the watchmaker analogy are "reformulations" of the same theological argument. On day 21 of the trial, Mr. Harvey walked Dr. Minnich through a modernized version of Paley's argument, substituting a cell phone for the watch. In his ruling, the judge stated that the use of the argument from design by intelligent design proponents "is merely a restatement of the Reverend William Paley's argument applied at the cell level," adding "Minnich, Behe, and Paley reach the same conclusion, that complex organisms must have been designed using the same reasoning, except that Professors Behe and Minnich refuse to identify the designer, whereas Paley inferred from the presence of design that it was God." The judge ruled that such an inductive argument is not accepted as science because it is unfalsifiable. See also Existence of God Cosmological argument Genetic algorithm God of the gaps Infinite monkey theorem Irreducible complexity Junkyard tornado Objections to evolution References Sources External links The Divine Watchmaker Robert Hooke William Paley (1743–1805) The Autobiography of Charles Darwin, revised version published in 1958 by Darwin's granddaughter Nora Barlow. "Recapitulation and Conclusion", By Charles Darwin. Creationist objections to evolution Deism Intelligent design Arguments for the existence of God
Watchmaker analogy
[ "Engineering" ]
4,117
[ "Intelligent design", "Design" ]
1,937,828
https://en.wikipedia.org/wiki/Nanostructure
A nanostructure is a structure of intermediate size between microscopic and molecular structures. Nanostructural detail is microstructure at nanoscale. In describing nanostructures, it is necessary to differentiate between the number of dimensions in the volume of an object which are on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length can be far more. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) are often used synonymously although UFP can reach into the micrometre range. The term nanostructure is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure. Properties of nanoscale objects and ensembles of these objects are widely studied in physics. List of nanostructures See also Nanoarchitecture Nanomaterials Nanotechnology Tube-based nanostructures List of software for nanostructures modeling Nanocar NanoPutian Nano-I-beam References External links Applications of Nanoparticles Nanomaterials
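As a small illustration of the dimensional classification described above, the following sketch (not part of the article) counts how many spatial dimensions of an object fall within the 0.1 to 100 nm range; the example objects and dimensions are made up for illustration.

```python
def nanoscale_dimensions(dims_nm: tuple[float, ...], lo: float = 0.1, hi: float = 100.0) -> int:
    """Count how many of an object's dimensions (in nanometres) are on the nanoscale."""
    return sum(lo <= d <= hi for d in dims_nm)

# Hypothetical example objects, dimensions in nanometres.
examples = {
    "nanotextured surface": (5.0, 2e6, 2e6),      # only the coating thickness is nanoscale
    "nanotube": (1.5, 1.5, 5e4),                  # diameter nanoscale, length much longer
    "spherical nanoparticle": (20.0, 20.0, 20.0), # nanoscale in every direction
}
for name, dims in examples.items():
    print(f"{name}: {nanoscale_dimensions(dims)} nanoscale dimension(s)")
```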
Nanostructure
[ "Materials_science" ]
316
[ "Nanotechnology", "Materials science stubs", "Nanotechnology stubs", "Nanomaterials" ]
1,938,356
https://en.wikipedia.org/wiki/Axial%20ratio
Axial ratio, for any structure or shape with two or more axes, is the ratio of the length (or magnitude) of those axes to each other - the longer axis divided by the shorter. In chemistry or materials science, the axial ratio (symbol P) is used to describe rigid rod-like molecules. It is defined as the length of the rod divided by the rod diameter. In physics, the axial ratio describes electromagnetic radiation with elliptical, or circular, polarization. The axial ratio is the ratio of the magnitudes of the major and minor axis defined by the electric field vector. See also Aspect ratio Degree of polarization References Ratios Polymer physics
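A minimal numerical sketch of both uses of the term (the values below are arbitrary examples, not taken from the article): the geometric axial ratio is simply the longer axis divided by the shorter, and for an elliptically polarized wave the same major-to-minor ratio is commonly also quoted in decibels.

```python
import math

def axial_ratio(axis_a: float, axis_b: float) -> float:
    """Longer axis divided by the shorter axis."""
    return max(axis_a, axis_b) / min(axis_a, axis_b)

# Rod-like molecule: length 50 nm, diameter 2 nm -> P = 25 (illustrative numbers)
print("rod axial ratio P =", axial_ratio(50.0, 2.0))

# Elliptical polarization: major/minor electric field amplitudes (arbitrary units)
ar = axial_ratio(1.0, 0.5)
print("polarization AR =", ar, "=", round(20 * math.log10(ar), 2), "dB")
```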
Axial ratio
[ "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
132
[ "Polymer physics", "Materials science stubs", "Polymer stubs", "Materials science", "Ratios", "Arithmetic", "Polymer chemistry", "Organic chemistry stubs" ]
1,938,536
https://en.wikipedia.org/wiki/Wavellite
Wavellite is an aluminium basic phosphate mineral with formula Al3(PO4)2(OH, F)3·5H2O. Distinct crystals are rare, and it normally occurs as translucent green radial or spherical clusters. Discovery and occurrence Wavellite was first described in 1805 for an occurrence at High Down, Filleigh, Devon, England and named by William Babington in 1805 in honor of Dr. William Wavell (1750–1829), a Devon-based physician, botanist, historian, and naturalist, who brought the mineral to the attention of fellow mineralogists. It occurs in association with crandallite and variscite in fractures in aluminous metamorphic rock, in hydrothermal regions and in phosphate rock deposits. It is found in a wide variety of locations notably in the Mount Ida, Arkansas area in the Ouachita Mountains. It is sometimes used as a gemstone. See also List of minerals Apatite, fluoro-phosphate of calcium Pyromorphite, chloro-phosphate of lead Turquoise, a hydrated phosphate of copper and aluminium References External links Aluminium minerals Phosphate minerals Halide minerals Orthorhombic minerals Minerals in space group 62 Luminescent minerals Fluorine minerals Hydroxide minerals Pentahydrate minerals Gemstones
Wavellite
[ "Physics", "Chemistry" ]
267
[ "Luminescence", "Luminescent minerals", "Materials", "Gemstones", "Matter" ]
1,940,015
https://en.wikipedia.org/wiki/Nature%20Reviews%20Molecular%20Cell%20Biology
Nature Reviews Molecular Cell Biology is a monthly peer-reviewed review journal published by Nature Portfolio. It was established in October 2000 and covers all aspects of molecular and cell biology. The editor-in-chief is Kim Baumann. According to the Journal Citation Reports, the journal has a 2021 impact factor of 113.915, ranking it 1st out of 194 journals in the category "Cell Biology". References External links Nature Research academic journals Academic journals established in 2000 Molecular and cellular biology journals Monthly journals English-language journals Review journals
Nature Reviews Molecular Cell Biology
[ "Chemistry" ]
108
[ "Molecular and cellular biology journals", "Molecular biology" ]
24,048,043
https://en.wikipedia.org/wiki/Hairpin%20clip
A hairpin clip, also known as a retaining pin, is a type of formed wire used on a grooved shaft. It is designed to be easily installed and uninstalled, and is reusable. They are commonly made from 1050 carbon steel and 300 series stainless steel. References Fasteners Steel objects
Hairpin clip
[ "Engineering" ]
67
[ "Construction", "Fasteners" ]
24,048,122
https://en.wikipedia.org/wiki/Grain%20boundary%20diffusion%20coefficient
The grain boundary diffusion coefficient is the diffusion coefficient of a diffusant along a grain boundary in a polycrystalline solid. It is a physical constant denoted $D_b$, and it is important in understanding how grain boundaries affect atomic diffusivity. Grain boundary diffusion is a commonly observed route for solute migration in polycrystalline materials. It dominates the effective diffusion rate at lower temperatures in metals and metal alloys. Take the apparent self-diffusion coefficient for single-crystal and polycrystal silver, for example. At high temperatures, the coefficient $D$ is the same in both types of samples. However, at temperatures below 700 °C, the values of $D$ for polycrystal silver consistently lie above the values of $D$ for a single crystal. Measurement The general way to measure grain boundary diffusion coefficients was suggested by Fisher. In the Fisher model, a grain boundary is represented as a thin layer of high-diffusivity uniform and isotropic slab embedded in a low-diffusivity isotropic crystal. Suppose that the thickness of the slab is $\delta$, the length is $L$, and the depth is a unit length; the diffusion process can then be described by the following formulas: $\frac{\partial c}{\partial t} = D\left(\frac{\partial^2 c}{\partial x^2} + \frac{\partial^2 c}{\partial y^2}\right)$ for $|x| > \delta/2$, and $\frac{\partial c_b}{\partial t} = D_b\,\frac{\partial^2 c_b}{\partial y^2} + \frac{2D}{\delta}\left.\frac{\partial c}{\partial x}\right|_{x = \delta/2}$ for $|x| < \delta/2$, where $D$ is the volume (lattice) diffusion coefficient, $c$ is the volume concentration of the diffusing atoms and $c_b$ is their concentration in the grain boundary. The first equation represents diffusion in the volume, while the second shows diffusion along the grain boundary, respectively. To solve the equations, Whipple introduced an exact analytical solution. He assumed a constant surface composition, and used a Fourier–Laplace transform to obtain a solution in integral form, from which the diffusion profile can be depicted. To further determine $D_b$, two common methods are used. The first is used for accurate determination of $D_b$. The second technique is useful for comparing the relative $D_b$ of different boundaries. Method 1: Suppose the slab is cut into a series of thin slices parallel to the sample surface; we measure the distribution of the in-diffused solute in the slices, $\bar{c}(y)$, and then use the formula developed by Whipple to get $D_b$. Method 2: To compare the length of penetration of a given concentration at the boundary with the length of lattice penetration from the surface far from the boundary. References See also Kirkendall effect Phase transformations in solids Mass diffusivity Diffusion
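To illustrate why grain boundary transport dominates at low temperature (as in the silver example above), here is a minimal sketch using a simple Hart-type mixing estimate, D_eff = f·D_gb + (1 − f)·D_v, with Arrhenius forms for both coefficients. The mixing rule, activation energies, prefactors, boundary width, and grain size below are textbook-style assumptions chosen for illustration, not values from the article.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius(d0: float, q: float, temp: float) -> float:
    """Diffusion coefficient D = D0 * exp(-Q / (k_B * T))."""
    return d0 * math.exp(-q / (K_B * temp))

def effective_diffusivity(temp: float, delta: float = 5e-10, grain_size: float = 1e-5) -> float:
    """Hart-type estimate D_eff = f*D_gb + (1 - f)*D_v, with f ~ delta/d the
    fraction of atoms sitting in grain boundaries (all numbers illustrative)."""
    d_volume = arrhenius(d0=1e-5, q=1.9, temp=temp)    # lattice diffusion (m^2/s, eV)
    d_boundary = arrhenius(d0=1e-5, q=0.9, temp=temp)  # grain boundary diffusion
    f = delta / grain_size                             # boundary site fraction
    return f * d_boundary + (1.0 - f) * d_volume

for t_celsius in (400, 700, 1000):
    t = t_celsius + 273.15
    print(f"{t_celsius:5d} C  D_eff ~ {effective_diffusivity(t):.3e} m^2/s")
# At low temperature the boundary term dominates D_eff; at high temperature
# lattice diffusion takes over, mirroring the single-crystal vs. polycrystal
# silver comparison in the article.
```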
Grain boundary diffusion coefficient
[ "Physics", "Chemistry" ]
459
[ "Transport phenomena", "Physical phenomena", "Diffusion" ]
24,048,956
https://en.wikipedia.org/wiki/Potential%20flow%20around%20a%20circular%20cylinder
In mathematics, potential flow around a circular cylinder is a classical solution for the flow of an inviscid, incompressible fluid around a cylinder that is transverse to the flow. Far from the cylinder, the flow is unidirectional and uniform. The flow has no vorticity and thus the velocity field is irrotational and can be modeled as a potential flow. Unlike a real fluid, this solution indicates a net zero drag on the body, a result known as d'Alembert's paradox. Mathematical solution A cylinder (or disk) of radius $R$ is placed in a two-dimensional, incompressible, inviscid flow. The goal is to find the steady velocity vector $\mathbf{V}$ and pressure $p$ in a plane, subject to the condition that far from the cylinder the velocity vector (relative to unit vectors $\mathbf{i}$ and $\mathbf{j}$) is $\mathbf{V} = U\mathbf{i} + 0\mathbf{j}$, where $U$ is a constant, and at the boundary of the cylinder $\mathbf{V} \cdot \hat{\mathbf{n}} = 0$, where $\hat{\mathbf{n}}$ is the vector normal to the cylinder surface. The upstream flow is uniform and has no vorticity. The flow is inviscid, incompressible and has constant mass density $\rho$. The flow therefore remains without vorticity, or is said to be irrotational, with $\nabla \times \mathbf{V} = 0$ everywhere. Being irrotational, there must exist a velocity potential $\phi$: $\mathbf{V} = \nabla\phi$. Being incompressible, $\nabla \cdot \mathbf{V} = 0$, so $\phi$ must satisfy Laplace's equation: $\nabla^2\phi = 0$. The solution for $\phi$ is obtained most easily in polar coordinates $r$ and $\theta$, related to conventional Cartesian coordinates by $x = r\cos\theta$ and $y = r\sin\theta$. In polar coordinates, Laplace's equation is (see Del in cylindrical and spherical coordinates): $\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial\phi}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2\phi}{\partial\theta^2} = 0$. The solution that satisfies the boundary conditions is $\phi(r,\theta) = U\left(r + \frac{R^2}{r}\right)\cos\theta$. The velocity components in polar coordinates are obtained from the components of $\nabla\phi$ in polar coordinates: $V_r = \frac{\partial\phi}{\partial r} = U\left(1 - \frac{R^2}{r^2}\right)\cos\theta$ and $V_\theta = \frac{1}{r}\frac{\partial\phi}{\partial\theta} = -U\left(1 + \frac{R^2}{r^2}\right)\sin\theta$. Being inviscid and irrotational, Bernoulli's equation allows the solution for the pressure field to be obtained directly from the velocity field: $p = \frac{1}{2}\rho\left(U^2 - V^2\right) + p_\infty$, where the constants $U$ and $p_\infty$ appear so that far from the cylinder, where $V = U$, $p = p_\infty$. Using $V^2 = V_r^2 + V_\theta^2$, $p = \frac{1}{2}\rho U^2\left(2\frac{R^2}{r^2}\cos 2\theta - \frac{R^4}{r^4}\right) + p_\infty$. In the figures, the colorized field referred to as "pressure" is a plot of $2\frac{p - p_\infty}{\rho U^2}$. On the surface of the cylinder, or $r = R$, pressure varies from a maximum of 1 (shown in the diagram in red) at the stagnation points at $\theta = 0$ and $\theta = \pi$ to a minimum of −3 (shown in blue) on the sides of the cylinder, at $\theta = \frac{\pi}{2}$ and $\theta = \frac{3\pi}{2}$. Likewise, $V$ varies from $V = 0$ at the stagnation points to $V = 2U$ on the sides, in the low pressure. Stream function The flow being incompressible, a stream function $\psi$ can be found such that $\mathbf{V} = \nabla \times (\psi\,\mathbf{k})$, with $\mathbf{k}$ the unit vector perpendicular to the plane of the flow. It follows from this definition, using vector identities, $\mathbf{V} \cdot \nabla\psi = 0$. Therefore, a contour of a constant value of $\psi$ will also be a streamline, a line tangent to $\mathbf{V}$. For the flow past a cylinder, we find: $\psi = U\left(r - \frac{R^2}{r}\right)\sin\theta$. Physical interpretation Laplace's equation is linear, and is one of the most elementary partial differential equations. This simple equation yields the entire solution for both $\mathbf{V}$ and $p$ because of the constraint of irrotationality and incompressibility. Having obtained the solution for $\mathbf{V}$ and $p$, the consistency of the pressure gradient with the accelerations can be noted. The dynamic pressure at the upstream stagnation point has a value of $\frac{1}{2}\rho U^2$, a value needed to decelerate the free stream flow of speed $U$. This same value appears at the downstream stagnation point; this high pressure is again needed to decelerate the flow to zero speed. This symmetry arises only because the flow is completely frictionless. The low pressure on the sides of the cylinder is needed to provide the centripetal acceleration of the flow: $\frac{\partial p}{\partial r} = \frac{\rho V^2}{L}$, where $L$ is the radius of curvature of the flow. But $L \approx R$, and $V \approx U$. 
The integral of the equation for centripetal acceleration over a distance $\Delta r \approx R$ will thus yield $p_\infty - p \approx \rho U^2$. The exact solution has, for the lowest pressure, $p_\infty - p = \frac{3}{2}\rho U^2$. The low pressure, which must be present to provide the centripetal acceleration, will also increase the flow speed as the fluid travels from higher to lower values of pressure. Thus we find the maximum speed in the flow, $V = 2U$, in the low pressure on the sides of the cylinder. A value of $V > U$ is consistent with conservation of the volume of fluid. With the cylinder blocking some of the flow, $V$ must be greater than $U$ somewhere in the plane through the center of the cylinder and transverse to the flow. Comparison with flow of a real fluid past a cylinder The symmetry of this ideal solution has a stagnation point on the rear side of the cylinder, as well as on the front side. The pressure distribution over the front and rear sides are identical, leading to the peculiar property of having zero drag on the cylinder, a property known as d'Alembert's paradox. Unlike an ideal inviscid fluid, a viscous flow past a cylinder, no matter how small the viscosity, will acquire a thin boundary layer adjacent to the surface of the cylinder. Boundary layer separation will occur, and a trailing wake will exist in the flow behind the cylinder. The pressure at each point on the wake side of the cylinder will be lower than on the upstream side, resulting in a drag force in the downstream direction. Janzen–Rayleigh expansion The problem of potential compressible flow over a circular cylinder was first studied by O. Janzen in 1913 and by Lord Rayleigh in 1916 with small compressible effects. Here, the small parameter is the square of the Mach number $M^2 = U^2/c^2 \ll 1$, where $c$ is the speed of sound. Then the solution to first-order approximation in terms of the velocity potential is the incompressible potential above plus a correction of order $M^2$, where $a$ is the radius of the cylinder. Potential flow over a circular cylinder with slight variations Regular perturbation analysis for a flow around a cylinder with slight perturbation in the configurations can be found in Milton Van Dyke (1975). In the following, $\varepsilon$ will represent a small positive parameter and $a$ is the radius of the cylinder. For more detailed analyses and discussions, readers are referred to Milton Van Dyke's 1975 book Perturbation Methods in Fluid Mechanics. Slightly distorted cylinder Here the radius of the cylinder is not $a$, but a slightly distorted form. Then the solution to first-order approximation is obtained by expanding the potential in powers of $\varepsilon$. Slightly pulsating circle Here the radius of the cylinder varies slightly with time, so the solution to first-order approximation likewise carries a time-dependent correction of order $\varepsilon$. Flow with slight vorticity In general, the free-stream velocity is uniform, in other words $\mathbf{V} = U\mathbf{i}$, but here a small vorticity is imposed in the outer flow. Linear shear Here a linear shear in the free-stream velocity is introduced, where $\varepsilon$ is the small parameter. The governing equation is the Poisson equation for the stream function, and the solution to first-order approximation follows from it. Parabolic shear Here a parabolic shear in the outer velocity is introduced. Then the solution to the first-order approximation is a particular solution plus a homogeneous solution to the Laplace equation which restores the boundary conditions. Slightly porous cylinder Let $C_{ps}$ represent the surface pressure coefficient for an impermeable cylinder: $C_{ps} = \frac{p_s - p_\infty}{\frac{1}{2}\rho U^2} = 2\cos 2\theta - 1$, where $p_s$ is the surface pressure of the impermeable cylinder. Now let $C_{pi}$ be the internal pressure coefficient inside the cylinder; then a slight normal velocity due to the slight porousness is driven by the difference between the internal and external pressure, but the zero net flux condition requires that this normal velocity integrate to zero over the surface. 
Therefore, the internal pressure coefficient is fixed by this condition, and the solution to the first-order approximation follows. Corrugated quasi-cylinder If the cylinder has variable radius in the axial direction, the $z$-axis, then the solution to the first-order approximation in terms of the three-dimensional velocity potential is expressed using the modified Bessel function of the first kind of order one. See also Joukowsky transform Kutta condition Magnus effect References Fluid dynamics
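As a quick numerical illustration of the incompressible solution above, the sketch below evaluates the polar velocity components and the surface pressure coefficient; the sampling angles and free-stream values are arbitrary choices made for the example.

```python
import math

def cylinder_potential_flow(r: float, theta: float, U: float, R: float):
    """Velocity components and pressure coefficient for ideal (inviscid,
    irrotational) flow past a circular cylinder of radius R in a free
    stream of speed U."""
    vr = U * (1.0 - (R / r) ** 2) * math.cos(theta)   # radial velocity
    vt = -U * (1.0 + (R / r) ** 2) * math.sin(theta)  # tangential velocity
    v2 = vr * vr + vt * vt
    cp = 1.0 - v2 / (U * U)                           # (p - p_inf) / (0.5*rho*U^2)
    return vr, vt, cp

U, R = 1.0, 1.0  # arbitrary free-stream speed and cylinder radius
for deg in (0, 45, 90, 180):
    th = math.radians(deg)
    vr, vt, cp = cylinder_potential_flow(r=R, theta=th, U=U, R=R)
    print(f"theta={deg:3d}  Vr={vr:+.2f}  Vtheta={vt:+.2f}  Cp={cp:+.2f}")
# On the surface Cp = 1 - 4*sin(theta)^2: +1 at the stagnation points
# (0 and 180 degrees) and -3 on the sides (90 degrees). The fore-aft
# symmetry gives zero net drag, which is d'Alembert's paradox.
```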
Potential flow around a circular cylinder
[ "Chemistry", "Engineering" ]
1,482
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
24,050,458
https://en.wikipedia.org/wiki/Dynamic%20circuit%20network
A dynamic circuit network (DCN) is an advanced computer networking technology that combines traditional packet-switched communication based on the Internet Protocol, as used in the Internet, with circuit-switched technologies that are characteristic of traditional telephone network systems. This combination allows user-initiated ad hoc dedicated allocation of network bandwidth for high-demand, real-time applications and network services, delivered over an optical fiber infrastructure. Implementation Dynamic circuit networks were pioneered by the Internet2 advanced networking consortium. The experimental Internet2 HOPI infrastructure, decommissioned in 2007, was a forerunner to the current SONET-based Ciena Network underlying the Internet2 DCN. The Internet2 DCN began operation in late 2007 as part of the larger Internet2 network. It provides advanced networking capabilities and resources to the scientific and research communities, such as the Large Hadron Collider (LHC) project. The Internet2 DCN is based on open-source, standards-based software, the Inter-domain Controller (IDC) protocol, developed in cooperation with ESnet and GÉANT2. The entire software set is known as the Dynamic Circuit Network Software Suite (DCN SS). Inter-domain Controller protocol The Inter-domain Controller protocol manages the dynamic provisioning of network resources participating in a dynamic circuit network across multiple administrative domain boundaries. It is a SOAP-based XML messaging protocol, secured by Web Services Security (v1.1) using the XML Digital Signature standard. It is transported over HTTP Secure (HTTPS) connections. See also Internet Protocol Suite IPv6 Fiber-optic communication References External links Internet2 Website Dynamic Circuit Network Suite Computer networks engineering Fiber-optic communications Wide area networks Network architecture Routing
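To make the transport described above concrete, here is a deliberately generic sketch of posting a SOAP request over HTTPS in Python. The endpoint URL, namespace, and element names are placeholders invented for illustration; they are not the actual IDC message schema, and a real IDC client would additionally attach WS-Security XML Digital Signature headers, which are omitted here.

```python
import urllib.request

# Placeholder endpoint and payload; NOT the real IDC schema.
ENDPOINT = "https://idc.example.org/reservation"  # hypothetical URL
soap_envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <createReservation xmlns="urn:example:idc">  <!-- hypothetical element -->
      <srcEndpoint>hostA</srcEndpoint>
      <dstEndpoint>hostB</dstEndpoint>
      <bandwidthMbps>1000</bandwidthMbps>
    </createReservation>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
    method="POST",
)
# A real client would sign the message (WS-Security / XML Digital Signature)
# before sending, then parse the SOAP response body for the reservation status.
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:200])
```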
Dynamic circuit network
[ "Technology", "Engineering" ]
339
[ "Network architecture", "Computer networks engineering", "Computer engineering" ]
22,547,607
https://en.wikipedia.org/wiki/Tron%3A%20Legacy
Tron: Legacy (stylized as TRON: Legacy) is a 2010 American science fiction action film directed by Joseph Kosinski from a screenplay by Adam Horowitz and Edward Kitsis, based on a story by Horowitz, Kitsis, Brian Klugman, and Lee Sternthal. The second installment in the Tron series, it serves as a sequel to Tron (1982), whose director Steven Lisberger returned to co-produce. The cast includes Jeff Bridges and Bruce Boxleitner reprising their roles as Kevin Flynn and Alan Bradley, respectively, as well as Garrett Hedlund, Olivia Wilde, James Frain, Beau Garrett, and Michael Sheen. The story follows Flynn's adult son Sam, who responds to a message from his long-lost father and is transported into a virtual reality called "the Grid", where Sam, his father, and the algorithm Quorra must stop the malevolent program Clu from invading the real world. Interest in creating a sequel to Tron arose after the film garnered a cult following. After much speculation, Walt Disney Pictures began a concerted effort in 2005 to devise a sequel, with the hiring of Klugman and Sternthal as writers. Kosinski was recruited as director two years later. As he was not optimistic about Disney's The Matrix-esque approach to the film, Kosinski filmed a concept trailer, which he used to conceptualize the universe of Tron: Legacy and convince the studio to greenlight the film. Principal photography took place in Vancouver over 67 days, in and around the city's central business district. Most sequences were shot in 3D and ten companies were involved with the extensive visual effects work. Chroma keying and other techniques were used to allow more freedom in creating effects. Daft Punk composed the musical score, incorporating orchestral sounds with their trademark electronic music. Tron: Legacy premiered in Tokyo on November 30, 2010, and was released in the United States on December 17, by Walt Disney Studios Motion Pictures. Disney vigorously promoted the film across multiple media platforms, including merchandising, consumer products, theme parks, and advertising. Upon its release, the film received mixed reviews from critics, who criticized the story and character development, but praised the performances of Bridges and Sheen, the visual effects, production design, and soundtrack. It was a commercial success, grossing $409 million during its worldwide theatrical run against a $170 million production budget. The film was nominated for an Academy Award for Best Sound Editing at the 83rd Academy Awards, but lost to Inception. Like its predecessor, Tron: Legacy has been described as a cult film since its release. A standalone sequel, Tron: Ares, is scheduled to be released on October 10, 2025. Plot In 1989, Kevin Flynn, who was promoted to CEO of ENCOM International seven years earlier, disappears. Twenty years later, his son Sam, now ENCOM's primary shareholder, pranks the corporation by releasing the company's signature operating system online for free. ENCOM executive Alan Bradley, Kevin's old friend, approves of this, believing it aligns with Flynn's ideals of free software. Nonetheless, Sam is arrested for trespassing. Alan posts bail for Sam and tells him of a pager message originating from Flynn's shuttered video arcade, after being disconnected for 20 years. There, Sam discovers a hidden basement with a large computer and laser, which suddenly digitizes and downloads him into the Grid, a virtual reality created by Kevin. 
He is captured and sent to "the Games", where he must fight a masked computer program named Rinzler. When Sam is injured and bleeds, Rinzler realizes Sam is human, or a "User". He takes Sam to Clu, the Grid's corrupt ruling program, who resembles a young Kevin. Clu nearly kills Sam in a Light Cycle match, but Sam is rescued by Quorra, an "apprentice" of Flynn, who shows him Kevin's hideout outside Clu's territory. Kevin explains that he had been working to create a "perfect" computer system and had appointed Clu and security program Tron as its co-creators. The trio discovered a species of naturally occurring "isomorphic algorithms" (ISOs), with the potential to resolve various natural mysteries. Clu, considering them an aberration, betrayed Kevin, killed Tron, and destroyed the ISOs. The "Portal" permitting travel between the two worlds closed, leaving Kevin trapped in the system. Clu sent the message to Alan hoping to lure him into the Grid (though Sam serves his purpose just as well) and reopen the Portal for a limited time. Since Flynn's "identity disc" is the master key to the Grid and the only way to traverse the Portal, Clu expects Sam to bring Kevin to the Portal so he can take Flynn's disc, go through the Portal himself, and impose his idea of perfection on the human world. Against his father's wishes, Sam returns to Clu's territory to find Zuse, a program who can provide safe passage to the Portal. At the End of Line Club, the owner reveals himself to be Zuse, then betrays Sam to Clu's guards. In the resulting fight, Kevin rescues his son, but Quorra is injured and Zuse gains possession of Flynn's disc. Zuse attempts to bargain with Clu over the disc, but Clu instead destroys the club along with Zuse. Kevin and Sam stow away aboard a "Solar Sailer" transport program, where Flynn restores Quorra and reveals her to be the last surviving ISO. The transport is intercepted by Clu's warship. As a diversion, Quorra allows herself to be captured by Rinzler, whom Kevin recognizes as Tron, not killed by Clu but rather reprogrammed. Sam reclaims Flynn's disc and rescues Quorra, while Kevin takes control of a Light Fighter. Clu, Rinzler, and several guards pursue the trio in Light Jets. Rinzler remembers his past as Tron and deliberately collides with Clu's Light Jet, then falls into the Sea of Simulation below. Clu confronts the others at the Portal, but Kevin reintegrates with his digital duplicate, destroying Clu along with himself. Quorra – having switched discs with Kevin – gives Flynn's disc to Sam, and they escape together to the real world as the ensuing explosion from Kevin's sacrifice levels the Sea of Simulation. In Flynn's arcade, Sam backs up and deactivates the system. He then tells a waiting Alan that he plans to retake control of ENCOM, naming Alan chairman of the board. Sam departs on his motorcycle with Quorra as the sun rises. Cast Garrett Hedlund as Samuel "Sam" Flynn, a primary shareholder of ENCOM who, while investigating his father's disappearance, is transported onto the Grid himself. Hedlund won a "Darwinian casting process" which tested hundreds of actors, being chosen for having the "unique combination of intelligence, wit, humor, look and physicality" that the producers were looking for in Flynn's son. The actor trained hard to do his own stunts, which included jumping over cars and copious wire and harness work. Owen Best as Young Sam Flynn. 
Jeff Bridges as Kevin Flynn, the former CEO of ENCOM International and creator of the popular arcade game Tron based on his own experiences in ENCOM's virtual reality, who disappeared in 1989 while developing "a digital frontier that will reshape the human condition." Bridges also portrays CLU (Codified Likeness Utility) via digital makeup and voiceover, while John Reardon portrays CLU physically. CLU is a more advanced incarnation of Flynn's original computer-hacking program, designed as an "exact duplicate of himself" within the Grid. Olivia Wilde as Quorra, an "isomorphic algorithm," adept warrior, and confidante of Kevin Flynn in the Grid. Flynn refers to her as his "apprentice" and has imparted volumes of information to her regarding the world outside of the Grid, which she longs to experience. She is shown to have a love of human literature, particularly the writings of Jules Verne, and plays Go with Flynn. She comments that her 'aggressive strategy' is usually foiled by Flynn's patience. Wilde describes Quorra as akin to Joan of Arc. Her hairstyle was influenced by singer Karen O. Wilde added that although "[Quorra] could have just been another slinky, vampy temptress," it was important for her to appeal to both men and women, and that character tried to avoid the typical female lead by having a naiveté and childlike innocence adequate for such an "evolving and learning organism." Quorra's action scenes led Wilde to work out and train in martial arts. Bruce Boxleitner as Alan Bradley, a board member executive for ENCOM, and close friend of Kevin Flynn who, after receiving a cryptic page from the office at the shuttered Flynn's Arcade, encourages Sam to investigate its origin. Boxleitner also portrays Tron / Rinzler, a security program originally developed by Bradley to monitor ENCOM's Master Control Program and later reassigned by Flynn to defend the Grid. He was overpowered and re-purposed by Clu as a masked command program wielding an identity disk that splits into two. Anis Cheurfa, a stunt actor, portrayed Rinzler, while Boxleitner provided the dialogue and physically appeared as Tron in flashback sequences via the same treatment as Bridges' younger self for CLU. Rinzler is named after author and Lucasfilm Executive Editor J.W. Rinzler. Michael Sheen as Zuse / Castor, a flamboyant probability program who runs the End of Line Club at the top of the tallest tower in the system. Sheen describes his performance as containing elements of performers such as David Bowie, Joel Grey from Cabaret, and a bit of Frank-N-Furter from The Rocky Horror Show. James Frain as Jarvis, an administration program who serves as CLU's right-hand man and chief intelligence officer. Frain had to shave his head, bleach his eyebrows white, and wear make-up. The refraction on Jarvis's helmet led Frain to walk in a "slightly squinty, blind stagger" which the actor felt was helpful to get him into character. Frain described Jarvis as "a fun, comic character that's a little off-beat," considering him "more human, in terms of being fallible and absurd" compared to the zanier Castor. Beau Garrett appears as Gem, one of four programs known as Sirens. The Sirens operate the Grid's game armory, equipping combatants with the armor needed to compete in the games, while also reporting to Castor. Serinda Swan, Yaya DaCosta, and Elizabeth Mathis depict the other three Sirens. 
Jeffrey Nordling stars as Richard Mackey, the chairman of ENCOM's executive board, and Cillian Murphy makes an uncredited appearance as Edward Dillinger, Jr., the head of ENCOM's software design team and the son of former ENCOM Senior Executive Ed Dillinger portrayed by David Warner in the original film. Daft Punk, who composed the score for the film, cameo as disc jockey programs at Castor's End of Line Club, and Tron creator Steven Lisberger makes an appearance as Shaddix, a bartender in the End of Line Club. Production Background Steven Lisberger relocated to Boston, Massachusetts, from Philadelphia, Pennsylvania, in the 1970s to pursue a career in computer animation. Since the computer animation field was mainly concentrated in Los Angeles, Lisberger had very little competition operating on the East Coast: "Nobody back then did Hollywood stuff, so there was no competition and no one telling us that we couldn't do it." He later produced and directed the American science fiction film Tron (1982) for Walt Disney Productions, the first computer animation-based feature film. Although the film garnered some critical praise, it generated only modest sales at the box office — the cumulative North American gross was just $33 million. Producer Sean Bailey, who saw the film with his father and Lisberger, was captivated by the finished product. Although Tron performed below Disney studio's expectations, it later developed a cult following, which fueled speculation, in 1999, of Pixar's alleged interest in creating a sequel. Rumors of a Tron sequel were further ignited after the 2003 release of the first-person shooter video game, Tron 2.0. Lisberger hinted that a third installment could be in the works, depending on the commercial success of the game. Writing Shortly after hiring Kosinski, Bailey approached screenwriting duo Adam Horowitz and Edward Kitsis, who accepted, describing themselves as "obsessed about Tron." Horowitz later claimed the challenge was to "homage the first movie, continue the story, expand it and take it to another place and open up space for new fans," and Kitsis claimed that the film would start a whole new mythology "of which we're only scratching the surface." Horowitz and Kitsis first created a story outline, and developed and fine-tuned the plot with Bailey and Kosinski across a period of two days in La Quinta. The writers also consulted Lisberger to get the Tron creator's input on the story. Lisberger gave his blessing, particularly as he has a son the same age as Sam, which, Kitsis stated, "was like we had tapped into something he was feeling without even realizing it." The Pixar team contributed rewrites for additional shooting after being shown a rough cut in March 2010, which helped in particular with the development of Sam's story line. The writing staff cited The Wizard of Oz as a source of thematic influence for Tron: Legacy in writing the script, with Kitsis stating that "They both have very similar DNA, which is Tron really lives on, in a lot of ways, trying to get home. You're put on this world and you want to go home and what is home? That's in a lot of way inspired us." Kitsis also added that they had to include an "emotional spine to take us into the story or else it just becomes a bunch of moves or gags and stuff," eventually deciding on adding a mysterious destiny to Flynn and giving him a legendary aura – "Kevin Flynn to us was Steve Jobs and Bill Gates all wrapped up into one and John Lennon." 
The writers decided to create the character of Clu as an evil embodiment of "how you look back on your younger self, (...) that guy [that] thought he knew everything, but he really knew nothing." Bridges liked the idea of the dual perspectives, and worked with the writers on the characterization of Flynn as a sanguine Zen master by suggesting that they draw inspiration from various Buddhist texts. Part of the concepts emerged from a meeting the producers had with scientists from the California Institute of Technology and the Jet Propulsion Laboratory to discuss concepts such as isomorphic algorithms and the digitizing of organic matter. Horowitz revealed the film would contain many light cycle battles, and asserted that the scripts for the scenes were "incredibly detailed" and involved an intricate collaborative process. For the disc game, Horowitz and Kitsis wrote a rough draft of the scene, and sent the script to Kosinski; he summarized his perspective of the sequence's visuals to them. "He described them as these underlying platforms," said Horowitz, "that would then coalesce and then the way you would go from round to round in the game is you defeat someone, they kinda come together as you see in the movie." After giving his input, Kosinski sent various sketches of the scene to the writers and would often revise the script. Kitsis considered illustrating the characters' stories to be the most difficult task in writing Tron: Legacy. The writers collaborated in the creative process throughout production, which was helpful especially considering the difficulties of describing in a tangible way a digital world that "in its very nature defies basic screenwriting conventions." Conception Plans for creating Tron: Legacy began to materialize in 2005, when Walt Disney Studios hired screenwriters Brian Klugman and Lee Sternthal as writers for the film. The two had recently finished writing the script for Warrior. According to Variety columnist Michael Fleming, Klugman and Sternthal felt "that the world has caught up to Lisberger's original concept." Klugman said of the original film: "It was remembered not only for story, but a visual style that nobody had ever used before. We are contemporizing it, taking ideas that were ahead of the curve and applying them to the present, and we feel the film has a chance to resonate to a younger audience." In 2007, Disney began to negotiate with Joseph Kosinski to direct Tron: Legacy. Kosinski admitted that at the time he was not keen on the idea, but it later grew on him. Kosinski was involved in a meeting with Bailey, president of Walt Disney Pictures. "Disney owns the property, Tron," Bailey stated. "Do you know it? Are you interested? What would your take be? In a post-Matrix world, how do you go back to the world of Tron?" Kosinski wanted to embrace the general ambiance of the film and wished to not use the Internet as a model or use a formula emulative of The Matrix film series. As the two were not in agreement on the perspective from which to conceive the film, Kosinski asked Bailey to lend him money in order to create a conceptual prototype of the Tron: Legacy universe, which was eventually presented at the 2009 San Diego Comic-Con. "So, we went into Disney," he recalled, "and I told them, 'We can talk about this all day, but in order to really get on the same page, I need to show you what this world looks and feels like. 
Give me some money and let me do a small test that will give you a hint for a couple minutes of it, and see what you think.'" A graduate of Columbia University architecture school, Kosinski's knowledge of architecture was pivotal in conceptualizing the Tron: Legacy universe. His approach in cultivating a prototype was different from other film directors because, according to Kosinski, he came "from a design point of view"; "Some of my favorite directors come from outside of the film business, so that made my approach different from other directors, but a design background makes sense for a movie like this because the whole world has to be made from scratch." Lisberger would later state that he left the sequel to a different production team because "after thirty years I don't want to compete with myself," and to showcase how the next generation dealt with the themes contained in Tron – "If I brought my network in, it would be a little bit like one of those Clint Eastwood movies where all the old guys go to space." Lisberger added that "I dig this role of being the Obi-Wan or the Yoda on this film more than being the guy in the trenches," stating that unlike Kosinski his age was a hindering factor – "I cannot work sixteen hours a day staring at twenty-five monitors for most of that time." Themes Tron: Legacy is imbued with several references to religious themes, particularly those relating to Christianity and Buddhism. Olivia Wilde's character, Quorra, was inspired/formed by the historical Catholic figure Joan of Arc. Wilde sought inspiration from her six months before production of the film commenced. She, alongside Kosinski, collaborated with the writers on editing the characters so she would contain the characteristics of Joan of Arc. Wilde assessed the characteristics of the figure: "She's this unlikely warrior, very strong but compassionate, and completely led by selflessness. Also, she thinks she's in touch with some higher power and has one foot in another world. All of these were elements of Quorra." Since she epitomizes the concept of androgyny, producers conceived Quorra from an androgynous perspective, notably giving her a short haircut. Bridges opined that Tron: Legacy was evocative of a modern myth, adding that ideas alluding to technological advancement were prevalent throughout the film. To Cyriaque Lamar of io9, the film's approach to technology was reminiscent of a kōan. "One of the things that brought me to this film," affirmed Bridges, "was the idea of helping to create a modern-day myth to help us navigate through these technological waters [...]. I dig immediate gratification as much as anybody, but it happens so fast that if you make a decision like that, you can go far down the wrong path. Think about those plastic single-use water bottles. Where did that come from? Who decided that? You can have a couple of swigs of water [...] and those bottles don't disintegrate entirely. Microscopic animals eat the plastic, and the fish eat those, and we're all connected. It's a finite situation here." According to screenwriter Adam Horowitz, Kosinski stated that the film's universal theme was "finding a human connection in a digital world." They followed this by "approach[ing] the world from the perspective of character, using Kevin Flynn as an organizing principle, and focus on the emotional relationship from father and son and their reconciliation, which brings profound turns in their respective individual lives." 
Development At the 2008 San Diego Comic-Con, a preliminary teaser trailer (labeled as TR2N and directed by Joseph Kosinski) was shown as a surprise to convention guests. It depicted a yellow Program engaged in a light cycle battle with a blue Program, and it prominently featured Jeff Bridges reprising his role as an aged Kevin Flynn (from the first film). At the end of the trailer, the yellow Program showed his face, which appeared identical to Flynn's earlier program Clu (resembling the younger Flynn in Tron). While the trailer did not confirm that a Tron sequel was in production, it showed that Disney was serious about a sequel. In an interview with Sci-Fi Wire, Bridges revealed that the test footage was unlikely to appear in the finished film. On July 23, 2009, Disney revealed the film's title at their panel at Comic-Con. Bridges explained that the title is in reference to the story's theme: "It's basically a story about a son's search for his father." They also showed a trailer similar to the one shown at Comic-Con 2009, with updated visuals. At the time, the film had just wrapped production and they had a year of post-production ahead of them. Because none of the footage from inside the computer world was finished, they premiered concept images from the production. Art included the Recognizer, which has been updated from the original film. Concept photos were also shown of Disc Wars, which has also been revised from the original film into a 16-game tournament. The arena is set up so that the game court organically changes, and all 16 games are going on at the same time. The boards also combine in real time until the last two Disc warriors are connected. Light cycles make a return, with new designs by Daniel Simon. According to the press conference at Comic-Con 2008, a new vehicle appears called a "Light Runner," a two-seat version of the light cycle, and Kevin Flynn's own cycle, a "Second Generation Light Cycle" designed in 1989 by Flynn and is "still the fastest thing on The Grid." It incorporates some of the look of both films. A life-size model of the light cycle was put on display at a booth at Fan Expo 2009 in Toronto, Ontario from August 28–30, 2009, along with a special presentation of material from the production. The conceptual art shown at Comic-Con was shown in the session, along with some test film of the martial artists who play a more athletic style of Disc Wars. A segment from the film showed Flynn's son entering the now-decrepit arcade, playing a Tron stand-up arcade video game, noticing a passage in the wall behind the Tron game and entering it, the passage closing behind him. Flynn's son makes the visit to the arcade after Alan Bradley receives a page from the disconnected phone number of the arcade. The footage was used later as part of the trailer released on March 5, 2010. The character of Yori and her user, Dr. Lora Baines, do not appear in the sequel, even though the film refers to Alan Bradley being married to Lora. Fans have lobbied for actress Cindy Morgan to be in the film with active campaigns online, such as "Yori Lives" on Facebook, which is independent of Morgan herself. "All I know is what I'm seeing online," Morgan said. "I am so thrilled and touched and excited about the fan reaction and about people talking about the first one and how it relates to the second one. I can't tell you how warm a feeling I get from that. It just means so much." 
No one from Tron: Legacy had contacted Morgan, and she did not directly speak with anyone from the sequel's cast and crew. As Dr. Lora Baines, Cindy Morgan had appeared with Bruce Boxleitner (as Alan Bradley) at the Encom Press Conference in San Francisco, April 2, 2010. Filming Principal photography took place in Vancouver, British Columbia, in April 2009, and lasted for approximately 67 days. Many filming locations were established in Downtown Vancouver and its surroundings. Stage shooting for the film took place at the Canadian Motion Picture Park studio in Burnaby, an adjacent city that forms part of Metro Vancouver. Kosinski devised and constructed twelve to fifteen of the film's sets, including Kevin Flynn's safe house, a creation he illustrated on a napkin for a visual effects test. "I wanted to build as much as possible. It was important to me that this world feel real, and anytime I could build something I did. So I hired guys that I went to architecture school with to work on the sets for this film, and hopefully people who watch the film feel like there's a certain physicality to this world that hopefully they appreciate, knowing that real architects actually put this whole thing together." The film was shot in dual camera 3D using Pace Fusion rigs like James Cameron's Avatar, but unlike the Sony F950 cameras on that film, Tron used the F35s. "The benefit of [the F35s]," according to director Kosinski, "is that it has a full 35mm sensor which gives you that beautiful cinematic shallow depth of field." The film's beginning portions were shot in 2D, while forty minutes of the film were vertically enhanced for IMAX. Digital Domain was contracted to work on the visual effects, while companies such as Prime Focus Group, DD Vancouver, and Mr. X were brought on to collaborate with producer on the post-production junctures of Tron: Legacy. Post-production wrapped on November 25, 2009. The sequences on the Grid were wholly shot in 3D, utilizing cameras specifically designed for it, and employed a 3D technique that combined other special effects techniques. The real-world sequences were filmed in 2D, and eventually altered using the three-dimensional element. Bailey stated that it was a challenge shooting Tron: Legacy in 3D because the cameras were bigger and heavier, and variations needed to be taken into account. Despite these concerns, he opined that it was a "great reason to go to the movies because it's an experience you just can't recreate on an iPhone or a laptop." In some sequences the image shows a fine mesh pattern and some blurring. That is not interference or a production fault, but indicates that that sequence is a flashback and to simulate an older form of video representation technology. Stunt work on the film was designed and coordinated by 87Eleven, who also designed and trained fight sequences for 300 and Watchmen. Olivia Wilde described it as an honor to train with them. Design In defining his method for creating Tron: Legacy, Kosinski declared that his main objective was to "make it feel real," adding that he wanted the audience to feel like filming actually occurred in the fictional universe. For this, many physical sets were built, as Kosinski "wanted the materials to be real materials: glass, concrete, steel, so it had this kind of visceral quality." Kosinski collaborated with people who specialized in fields outside of the film industry, such as architecture and automotive design. 
The looks for the Grid aimed for a more advanced version of the cyberspace visited by Flynn in Tron, which Lisberger described as "a virtual Galapagos, which has evolved on its own." As Bailey put, the Grid would not have any influence from the Internet as it had turned offline from the real world in the 1980s, and "grew on its own server into something powerful and unique." Kosinski added that as the simulation became more realistic, it would try to become closer to the real world with environmental effects such as rain and wind, and production designer Darren Gilford stated that there would be a juxtaposition between the variety of texture and color of the real-world introduction in contrast with the "clean surfaces and lines" of the Grid. As the design team considered the lights a major part of the Tron look, particularly for being set in a dark world—described by effects art director Ben Procter as "dark silhouetted objects dipped in an atmosphere with clouds in-between, in a kind of Japanese landscape painting" where "the self-lighting of the objects is the main light source"—lighting was spread through every prop on the set, including the floor in Flynn's hideout. Lisberger also stated that while the original Tron "reflected the way cyberspace was," the sequel was "going to be like a modern day, like contemporary plus, in terms of how much resolution, the texturing, the feel, the style," adding that "it doesn't have that Pong Land vibe to it anymore." The skintight suits worn by the actors were reminiscent of the outfits worn by the actors in the original film. Kosinski believed that the costumes could be made to be practical due to the computerized nature of the film, as physically illuminating each costume would be costly to the budget. Christine Bieselin Clark worked with Michael Wilkinson in designing the lighted costumes, which used electroluminescent lamps derived from a flexible polymer film and featured hexagonal patterns. The lights passed through the suit via Light Tape, a substance composed of Honeywell lamination and Sylvania phosphors. To concoct a color, a transparent 3M Vinyl film was applied onto the phosphor prior to lamination. While most of the suits were made out of foam latex, others derived from spandex, which was sprayed with balloon rubber, ultimately giving the illusion of a lean shape. The actors had to be compressed to compensate for the bulk of the electronics. In addition, Clark and Wilkinson designed over 140 background costumes. The two sought influence from various fashion and shoe designers in building the costumes. On the back of the suit was an illuminated disc, which consisted of 134 LED lights. It was attached to the suit via a magnet, and was radio-controlled. All the costumes had to be sewn in such a way that the stitches did not appear, as the design team figured that in a virtual environment the clothes would just materialize, with no need for buttons, zippers or enclosures. According to Neville Page, the lead designer for the helmets, "The art departments communicated very well with each other to realise Joe's [...] vision. We would look over each other's shoulders to find inspiration from one another. The development of the costumes came from trying to develop the form language which came from within the film." The majority of the suits were designed using ZBrush. A scan of an actor's body was taken, which was then encased to decipher the fabric, the location of the foam, amongst other concerns. 
With computer numerical control (CNC) cutting of dense foam, a small-scale output would be created to perfect fine details before construction of the suit began. Once the participant's body scan had been downloaded, the illustrations were overlaid on it to provide an output element for manufacturing. Describing the CNC process, Chris Lavery of Clothes on Film noted that it had a tendency to produce bubbles and striations. Clark stated: "The [...] suit is all made of a hexagon mesh which we also printed and made the fabric from 3D files. This would go onto the hard form; it would go inside the mould which was silicon matrix. We would put those together and then inject foam into the negative space. The wiring harness is embedded into the mould and you get a torso. We then paint it and that's your finished suit." Sound and visual effects Crowd effects for the gaming arena were recorded at the 2010 San Diego Comic-Con. During one of the Tron: Legacy panels, the crowd was given instructions via a large video screen while techs from Skywalker Sound recorded the performance. The audience performed chants and stomping effects similar to what is heard in modern sports arenas. It took two years and 10 companies to create the 1,565 visual effects shots of Tron: Legacy. The majority of the effects were done by Digital Domain, who created 882 shots under supervisor Eric Barba. The production team blended several special effect techniques, such as chroma keying, to allow more freedom in creating effects. Similar to Tron, this approach was seen as pushing the boundaries of modern technology. "I was going more on instinct rather than experience," Kosinski remarked. Although he had previously used the technology in producing advertisements, this was the first time Kosinski had used it on such a large scale. Darren Gilford was approached as the production designer, while David Levy was hired as a concept artist. Levy translated Kosinski's ideas into drawings and other visual designs. "Joe's vision evolved the visuals of the first film," he stated. "He wanted the Grid to feel like reality, but with a twist." An estimated twenty to twenty-five artists from the art department developed concepts of the Tron: Legacy universe, which varied from real world locations to fully digital sets. Gilford suggested that there were between sixty and seventy settings in the film, split up into fifteen fully constructed sets with different levels of computer-created landscapes. Rather than using makeup techniques, such as those used in A Beautiful Mind, to give Jeff Bridges a younger appearance, the filmmakers rendered the character of Clu entirely through computer generation. To show that this version of Clu was created some time after the events of the original film, the visual effects artists based his appearance on how Bridges looked in Against All Odds, released two years after Tron. The effects team hired makeup artist Rick Baker to construct a molded likeness of a younger Bridges's head to serve as the basis for their CG work. The mold was soon scrapped because the team wanted the likeness to be more youthful; with no time to make another mold, they reworked it digitally. On set, Bridges would perform first, followed by body double John Reardon, who would mimic his actions. Reardon's head was replaced in post-production with the digital version of the young Bridges. 
Barba – who was involved in a similar experience for The Curious Case of Benjamin Button — stated that they used four microcameras with infrared sensors to capture all 134 dots on Bridges face that would be the basis of the facial movements, a similar process that was used in Avatar. It took over two years to not only create the likeness of Clu, but also the character's movements (such as muscle movement). Bridges called the experience surreal and said it was "Just like the first Tron, but for real!" Musical score and soundtrack album The French electronic duo Daft Punk composed the film score of Tron: Legacy, which features over 30 tracks. The score was arranged and orchestrated by Joseph Trapanese. Jason Bentley served as the film's music supervisor. Director Joseph Kosinski referred to the score as a mixture of orchestral and electronic elements. An electronic music fan, Kosinski stated that to replicate the innovative electronic Tron score by Wendy Carlos "rather than going with a traditional film composer, I wanted to try something fresh and different," adding that "there was a lot of interest from different electronic bands that I follow to work on the film" but he eventually picked Daft Punk. Kosinski added that he knew the band was "more than just dance music guys" for side projects such as their film Electroma. The duo were first contacted by producers in 2007, when Tron: Legacy was still in the early stages of production. Since they were touring at the time, producers were unsuccessful in contacting the group. They were again approached by Kosinski, eventually agreeing to take part in the film a year later. Kosinski added that Daft Punk were huge Tron fans, and that his meeting with them "was almost like they were interviewing me to make sure that I was going to hold up to the Tron legacy." The duo started composing the soundtrack before production began, and is a notable departure from the duo's previous works, as Daft Punk placed higher emphasis on orchestral elements rather than relying solely on synthesizers. "Synths are a very low level of artificial intelligence," explained member Guy-Manuel de Homem-Christo, "whereas you have a Stradivarius that will live for a thousand years. We knew from the start that there was no way that we were going to do this film score with two synthesizers and a drum machine." "Derezzed" was taken from the album and released as its sole single. The album was released by Walt Disney Records on December 3, 2010, and sold 71,000 copies in its first week in the United States. Peaking at number six on the Billboard 200, it eventually acquired a platinum certification by the Recording Industry Association of America, denoting shipments of 1,000,000 copies. A remix album for the soundtrack, titled Tron: Legacy Reconfigured, became available on April 5, 2011 to coincide with the film's home media release. Marketing Marketing and promotions On July 21, 2009, several film-related websites posted they had received via mail a pair of "Flynn's Arcade" tokens along with a flash drive. Its content was an animated GIF that showed CSS code lines. Four of them were put together and part of the code was cracked, revealing the URL to Flynnlives.com, a fictitious site maintained by activists who believe Kevin Flynn is alive, even though he has been missing since 1989. Clicking on a tiny spider in the lower section of the main page led to a countdown clock that hit zero on July 23, 2009, 9:30 pm PDT. Within the Terms of Use Section, an address was found. 
It lies in San Diego, California, US near the city's convention center where the Comic-Con 2009 took place and some footage and information on the sequel was released. Flynn's Arcade was re-opened at that location, with several Space Paranoids arcade machines and a variety of '80s video games. A full-size light cycle from the new film was on display. A ninth viral site, homeoftron.com, was found. It portrays some of the history of Flynn's Arcade as well as a fan memoir section. On December 19, 2009, a new poster was revealed, along with the second still from the film. Banners promoting the film paved the way to the 2010 Comic-Con convention center, making this a record third appearance for the film at the annual event. Disney also partnered with both Coke Zero and Norelco on Tron: Legacy. Disney's subsidiary Marvel Comics had special covers of their superheroes in Tron garb, and Nokia had trailers for the film preloaded on Nokia N8 phones while doing a promotion to attend the film's London premiere. While Sam picks up a can of Coors in the film, it was not product placement, with the beer appearing because Kosinski "just liked the color and thought it would look good on screen." Attractions At the Walt Disney World Resort in Florida, one monorail train was decorated with special artwork depicting light cycles with trailing beams of light, along with the film's logo. This Tron-themed monorail, formerly the "Coral" monorail, was renamed the "Tronorail" and unveiled in March 2010. At the Disneyland Resort in California, a nighttime dance party named "ElecTRONica" premiered on October 8, 2010, and was set to close in May 2011, but it was extended until April 2012 due to positive guest response, in Hollywood Land at Disney California Adventure Park. Winners of America's Best Dance Crew, Poreotics, performed at ElecTRONica. As part of ElecTRONica, a sneak peek with scenes from the film is shown in 3D with additional in-theater effects in the Muppet*Vision 3D theater. On October 29, 2010, the nighttime show World of Color at Disney California Adventure Park began soft-openings after its second show of a Tron: Legacy-themed encore using a Daft Punk music piece titled "The Game Has Changed" from the film soundtrack, using new effects and projections on Paradise Pier attractions. The encore officially premiered on November 1, 2010. On December 12, 2010, the show Extreme Makeover: Home Edition, as part of a house rebuild, constructed a Tron: Legacy-themed bedroom for one of the occupants' young boys. The black painted room not only consisted of life-sized Tron city graphics, but also glowing blue line graphics on the walls, floor and furniture, a desk with glowing red-lit Recognizers for the legs and a Tron suit-inspired desk chair, a light cycle-shaped chair with blue lighting accents, a projection mural system that projected Tron imagery on a glass wall partition, a laptop computer, a flat panel television, several Tron: Legacy action figures, a daybed in black and shimmering dark blue and blue overhead lit panels. Disney was involved with the Ice Hotel in Jukkasjärvi, Sweden through association with designers Ian Douglas-Jones at I-N-D-J and Ben Rousseau to create "The Legacy of the River," a high-tech suite inspired by Tron: Legacy. The suite uses electroluminescent wire to capture the art style of the film. It consists of over 60 square meters of 100mm thick ice equating to approximately six tons. 
160 linear meters of electroluminescent wire were routed out, sandwiched and then glued with powdered snow and water to create complex geometric forms. The Ice Hotel is expected to get 60,000 visitors for the season, which lasts December 2010 through April 2011. On November 19, 2010, the Tron: Legacy Pop Up Shop opened at Royal-T Cafe and Art Space in Culver City, California. The shop featured many of the collaborative products created as tie-ins with the film from brands such as Oakley, Hurley and Adidas. The space was decorated in theme and the adjacent cafe had a tie in menu with Tron-inspired dishes. The shop remained open until December 23, 2010. Following the release of the film, the TRON Lightcycle Power Run attraction, based on the film, opened at Shanghai Disneyland and Magic Kingdom in 2016 and 2023, respectively. Merchandising Electronics and toy lines inspired by the film were released during late 2010. A line of Tron-inspired jewelry, shoes and apparel was also released, and Disney created a pop-up store to sell them in Culver City. Custom Tron branded gaming controllers have been released for Xbox 360, PlayStation 3 and Wii. A tie-in video game, entitled Tron: Evolution, was released on November 25, 2010. The story takes place between the original film and Tron: Legacy. Teaser trailers were released in November 2009, while a longer trailer was shown during the Spike Video Game Awards on December 12, 2009. There were also two games released for the iOS devices (iPhone, iPod, and iPad) as tie-ins to the films. Disney commissioned N-Space to develop a series of multiplayer games based on Tron: Legacy for the Wii console. IGN reviewed the PlayStation 3 version of the game but gave it only a "passable" 6 out of 10. A tie-in 128-page graphic novel Tron: Betrayal was released by Disney Press on November 16, 2010. It includes an 11-page retelling of the original Tron story, in addition to a story taking place between the original film and Tron: Legacy. IGN reviewed the comic and gave it a "passable" score of 6.5 out of 10. Release Premiere and theaters On October 28, 2010, a 23-minute preview of the film was screened on many IMAX theaters all over the world, (presented by ASUS). The tickets for this event were sold out within an hour on October 8. Stand-by tickets for the event were also sold shortly before the presentation started. Original merchandise from the film was also available for sale. Announced through the official Tron Facebook page, the red carpet premiere of the film was broadcast live on the Internet. Tron: Legacy was released in theaters on December 17, 2010, in the United States and United Kingdom. The film was originally set to be released in the UK on December 26, 2010, but was brought forward due to high demand. The film was presented in IMAX 3D and Disney Digital 3D. The film was also released with D-BOX motion code in select theaters and released in 50 Iosono-enhanced cinemas, creating "3D sound." On December 10, 2010, in Toronto, Ontario, Canada, a special premiere was hosted by George Stroumboulopoulos organised through Twitter, open to the first 100 people who showed up at the CN Tower. After the film ended the tower was lit up blue to mirror The Grid. On December 13, 2010, in select cities all over the United States, a free screening of the entire film in 3D was available to individuals on a first-come, first-served basis. Free "Flynn Lives" pins were handed out to the attendees. 
The announcement of the free screenings was made on the official Flynn Lives Facebook page. On January 21, 2011, the German designer Michael Michalsky hosted the German premiere of the film at his cultural event StyleNite during Berlin Fashion Week. Home media release Tron: Legacy was released by Walt Disney Studios Home Entertainment on Blu-ray Disc, DVD, and digital download in North America on April 5, 2011. Tron: Legacy was available stand-alone as a single-disc DVD, a two-disc DVD and Blu-ray combo pack, and a four-disc box set adding a Blu-ray 3D and a digital copy. A five-disc box set featuring both Tron films was also released, entitled The Ultimate Tron Experience, with collectible packaging resembling an identity disk. The digital download of Tron: Legacy was available in both high definition and standard definition, including versions with or without the digital extras. A short film sequel to the film, Tron: The Next Day, as well as a preview of the 19-episode animated series Tron: Uprising, is included in all versions of the home media release. Tron: Legacy was the second Walt Disney Studios Home Entertainment release that included Disney Second Screen, a feature accessible via a computer or iPad app download that provides additional content as the user views the film. Forty minutes of the film were shot in 2.39:1 and then vertically enhanced for IMAX. These scenes are presented in 1.78:1 in a similar way to the Blu-ray release of The Dark Knight. Reception Box office Leading up to the release, various commercial analysts predicted that Tron: Legacy would gross $40–$50 million during its opening weekend, a figure that Los Angeles Times commentator Ben Fritz wrote would be "solid but not spectacular." Although the studio hoped to attract a broad audience, the film primarily appealed to men: "Women appear to be more hesitant about the science-fiction sequel," wrote Fritz. Jay Fernandez of The Hollywood Reporter felt that the disproportionate audience would be problematic for the film's long-term box office prospects. Writing for Box Office Mojo, Brandon Gray attributed pre-release hype to "unwarranted blockbuster expectations from fanboys," given that the original Tron was considered a box office disappointment when it was released and that the film's cult fandom "amounted to a niche." In North America, the film earned $43.6 million during the course of its opening weekend. On its opening day, it grossed $17.6 million, including $3.6 million from midnight showings in 2,000 theaters, 29% of which were IMAX screenings, and it went on to claim the top spot for the weekend, ahead of Yogi Bear and How Do You Know. Tron: Legacy grossed roughly $68 million during its first week, and surpassed $100 million on its 12th day in release. Outside North America, Tron: Legacy grossed $23 million on its opening weekend, averaging $6,000 per theater. According to Disney, 65% of foreign grosses originated from five key markets: Japan, Australia, Brazil, the United Kingdom, and Spain. The film performed best in Japan, where it took $4.7 million from 350 theaters, followed by Australia ($3.4 million), the United Kingdom ($3.2 million), Brazil ($1.9 million), and Spain ($1.9 million). By the following week, Tron: Legacy had taken $65.5 million from foreign markets, bringing its total gross to $153.8 million. At the end of its theatrical run, Tron: Legacy had grossed $409.9 million: $172.1 million in North America and $237.8 million in other countries. 
Critical reception Review aggregator website Rotten Tomatoes reported that 51% of critics gave the film a positive review, based on 248 reviews. With a mean score of 5.86/10, the site's consensus stated: "Tron: Legacy boasts dazzling visuals, but its human characters and story get lost amidst its state-of-the-art production design." At Metacritic, which assigns a normalized rating out of 100 based on reviews from mainstream critics, Tron: Legacy received an average rating of 49, based on 40 reviews, indicating "mixed or average reviews". Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale. The visual effects were cited as the central highlight of the film. In his three-star review, Roger Ebert of the Chicago Sun-Times felt that the environment was aesthetically pleasing, and added that its score displayed an "electronic force" that complemented the visuals. Rolling Stone columnist Peter Travers echoed these sentiments, concluding that the effects were "award-caliber." J. Hoberman of The Village Voice noted that while it was extensively enhanced, Tron: Legacy retained the streamlined visuals that were seen in its predecessor, while Variety's Peter Debruge affirmed that the visuals and the accompanying "cutting-edge" score made for a "stunning virtual ride." To Nick de Semlyen of Empire, "This is a movie of astonishing high-end gloss, fused to a pounding Daft Punk soundtrack, populated with sleek sirens and chiselled hunks, boasting electroluminescent landscapes to make Blu-ray players weep." Some critics were not as impressed with the film's special effects. Manohla Dargis of The New York Times argued that, despite its occasional notability, the film's "vibrating kaleidoscopic colors that gave the first movie its visual punch have been replaced by a monotonous palette of glassy black and blue and sunbursts of orange and yellow." Though declaring that Tron: Legacy was "eye-popping," the San Francisco Chronicle's Amy Biancolli found the special effects "spectacular"—albeit cheesy. Joe Morgenstern, a columnist for The Wall Street Journal, denounced the producers' emphasis on technological advancement, which he felt could have been directed toward other ends, such as drama. The performances of various cast members were frequently mentioned in the critiques. Michael Sheen's portrayal of Castor was particularly acclaimed by commentators, who—because of his flamboyance—drew parallels to pop-rock icon David Bowie, as well as fictional characters such as Alex, the lead character of A Clockwork Orange. Dargis, Debruge, Puig, and Carrie Rickey of The Philadelphia Inquirer were among the journalists to praise his acting; Dargis found Sheen's performance exceptional next to a comparatively "uninteresting" cast. To Philadelphia Daily News film critic Gary Thompson, the film became humorous with the scenes involving Castor. Star Tribune critic Colin Covert believed that Sheen's campy antics were the "too brief" highlights of Tron: Legacy. With other cast members—particularly Garrett Hedlund, Olivia Wilde, and Jeff Bridges—commentary reflected diverse attitudes. The film received "a little boost from" Wilde, according to Rickey. The Boston Globe's Wesley Morris called Hedlund a "dud stud"; "None of what he sees impresses," he elaborated. "The feeling is mutual. At an alleged cost of $200 million, that's some yawn. If he can't be thrilled, why should we?" 
To Salon commentator Andrew O'Hehir, even Bridges—an individual he regarded as "one of America's most beloved and distinctive" actors—was "weird and complicated" rather than being the "sentimental and alluring" portrayer in the original Tron. Critics were divided with the character development and the storylines in Tron: Legacy. Writing for The New Yorker, Bruce Jones commented that the audience did not connect with the characters, as they were lacking emotion and substance. "Disney may be looking for a merchandising bonanza with this long-gestating sequel to the groundbreaking 1982 film," remarked Jones, "but someone in the corporate offices forgot to add any human interest to its action-heavy script." Likewise, USA Today journalist Claudia Puig found Tron: Legacy to resonate with "nonsensical" and "unimaginative, even obfuscating" dialogue, and that "most of the story just doesn't scan." As Dana Stevens from Slate summed up, "Tron: Legacy is the kind of sensory-onslaught blockbuster that tends to put me to sleep, the way babies will nap to block out overwhelming stimuli. I confess I may have snoozed through one or two climactic battles only to be startled awake by an incoming neon Frisbee." Although he proclaimed the plot of Tron: Legacy and its predecessor to be spotty, Ian Buckwater of NPR was lenient on the latter film due to its youth-friendly nature. In contrast to negative responses, Michelle Alexander of Eclipse adored the plot of Tron: Legacy, a reaction that was paralleled by Rossiter Drake from 7x7, who wrote that it was "buoyed" by its "sometimes convoluted, yet hard to resist" story. Metros Larushka Ivan-Zadeh complained about the underdeveloped plot, saying "In 2010, issues surrounding the immersive nature of gaming and all-consuming power of modern technology are more pertinent than ever, so it's frustrating the script does nothing with them." However, she conceded that "it's the best 3D flick since Avatar and a super-groovy soundtrack by Daft Punk nonetheless makes for an awesome watch." Accolades Tron: Legacy received an award for Best Original Score from the Austin Film Critics Association. The film was also nominated for "Excellence in Production Design for a Fantasy Film" by the Art Directors Guild, and for Sound Editing by the Academy of Motion Picture Arts and Sciences. The film made the final shortlist for the Academy Award for Best Visual Effects, although it did not receive a nomination. In other media Manga A manga version of Tron: Legacy was released by Earth Star Entertainment in Japan on June 30, 2011. Video games and pinball Tron: Legacy was adapted as a location named "The Grid" in the 2012 Nintendo 3DS game Kingdom Hearts 3D: Dream Drop Distance and the later HD remastered version in Kingdom Hearts HD 2.8 Final Chapter Prologue. In 2011, Stern Pinball released Tron: Legacy the pinball machine. Television Tron: Uprising, an animated television series, premiered on June 7, 2012, on the Disney XD network across the United States. Tron: Legacy writers Adam Horowitz and Eddie Kitsis revealed that the series tells the story of what happened in the Grid in between the films. Bruce Boxleitner and Olivia Wilde reprise their roles as Tron and Quorra from Tron: Legacy, while Elijah Wood, Lance Henriksen, Mandy Moore, Emmanuelle Chriqui, Paul Reubens, and Nate Corddry voice new characters. 
Sequel Steven Lisberger stated on October 28, 2010, before the film's release, that a sequel was in planning and that Adam Horowitz and Edward Kitsis, screenwriters for Tron: Legacy, were in the early stages of producing a script for the new film. In March 2015, it was revealed that Disney had green-lit the third film with Hedlund reprising his role as Sam and Kosinski returning to direct the sequel. Wilde was revealed in April to be returning as Quorra. Filming was expected to start in Vancouver in October 2015. However, in May 2015, The Hollywood Reporter reported that Walt Disney Studios had chosen not to continue with a third installment, which was confirmed by Wilde the following month. Hedlund later stated that the box office failure of Tomorrowland right before the third Tron would have begun filming led Disney to cancel the project. However, during a 2017 Q&A session with Joseph Kosinski, he revealed that Tron 3 had not been scrapped, instead saying it was in "cryogenic freeze." A few days later, it was reported that Jared Leto was attached to portray a new character named Ares in the sequel. However, Disney had not officially confirmed that the project was in development. In June 2020, Walt Disney Studios President of Music & Soundtracks Mitchell Leib confirmed in an interview that a third Tron film was being actively worked on at Disney. He said that Disney has a script written and was looking for a director, though was hopeful that Kosinski would return, as well as saying that it was a high priority for them that Daft Punk return to do the score, though the band's break up in 2021 leaves their return uncertain. In August 2020, Deadline reported that Garth Davis had officially been tapped to direct the film from a screenplay by Jesse Wigutow. In March 2022, while promoting Morbius, Leto confirmed that the film is still happening. By January 2023, Davis had exited as director, with Joachim Rønning entering negotiations to take the directing job. Leto was still attached, with production planned to begin in Vancouver on July 3, but delayed by the strikes is scheduled to be released on October 10, 2025. In August 2024, Nine Inch Nails was announced to be providing the score for the film, replacing Daft Punk. Notes References External links 2010 films 2010 3D films 2010 science fiction action films 2010s science fiction adventure films American sequel films American 3D films American chase films American science fiction action films American science fiction adventure films Cyberpunk films Films about computing Films about computer and internet entrepreneurs Films about telepresence Films about video games Films about virtual reality Films directed by Joseph Kosinski Films set in 1989 Films set in 2009 Films shot in Vancouver Genocide in fiction IMAX films Films using motion capture Religion in science fiction Tron films Walt Disney Pictures films Films set in computers Films about computer hacking 2010 directorial debut films Films about coups d'état Films about father–son relationships 2010s English-language films 2010s American films Films scored by musical groups English-language science fiction adventure films English-language science fiction action films Saturn Award–winning films
Tron: Legacy
[ "Technology" ]
12,629
[ "Works about computing", "Films about computing" ]
22,549,668
https://en.wikipedia.org/wiki/Comparison%20of%20instruction%20set%20architectures
An instruction set architecture (ISA) is an abstract model of a computer, also referred to as computer architecture. A realization of an ISA is called an implementation. An ISA permits multiple implementations that may vary in performance, physical size, and monetary cost (among other things). Because the ISA serves as the interface between software and hardware, software that has been written for an ISA can run on different implementations of the same ISA. This has enabled binary compatibility between different generations of computers to be easily achieved, and the development of computer families. Both of these developments have helped to lower the cost of computers and to increase their applicability. For these reasons, the ISA is one of the most important abstractions in computing today. An ISA defines everything a machine language programmer needs to know in order to program a computer. What an ISA defines differs between ISAs; in general, ISAs define the supported data types, what state there is (such as the main memory and registers) and their semantics (such as the memory consistency and addressing modes), the instruction set (the set of machine instructions that comprises a computer's machine language), and the input/output model. Data representation In the early decades of computing, there were computers that used binary, decimal and even ternary. Contemporary computers are almost exclusively binary. Characters are encoded as strings of bits or digits, using a wide variety of character sets; even within a single manufacturer there were character set differences. Integers are encoded with a variety of representations, including sign-magnitude, ones' complement, two's complement, offset binary, nines' complement and ten's complement. Similarly, floating point numbers are encoded with a variety of representations for the sign, exponent and mantissa. In contemporary machines IBM hexadecimal floating-point and IEEE 754 floating point have largely supplanted older formats. Addresses are typically unsigned integers generated from a combination of fields in an instruction, data from registers and data from storage; the details vary depending on the architecture. Bits Computer architectures are often described as n-bit architectures. In the first part of the 20th century, n was often 12, 18, 24, 30, 36, 48 or 60. In the last part of the 20th century, n was often 8, 16, or 32, and in the 21st century, n is often 16, 32 or 64, but other sizes have been used (including 6, 39, 128). This is actually a simplification, as a computer architecture often has a few more or less "natural" data sizes in the instruction set, but the hardware implementation of these may be very different. Many instruction set architectures have instructions that, on some implementations of that instruction set architecture, operate on half and/or twice the size of the processor's major internal datapaths. Examples of this are the Z80, MC68000, and the IBM System/360. On these types of implementations, a twice as wide operation typically also takes around twice as many clock cycles (which is not the case on high performance implementations). On the 68000, for instance, this means 8 instead of 4 clock ticks, and this particular chip may be described as a 32-bit architecture with a 16-bit implementation. 
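As a rough illustration of the integer representations listed above (sign-magnitude, ones' complement, two's complement), the following Python sketch encodes a small signed value in each scheme for an 8-bit word. It is a toy illustration only, not tied to any particular ISA, and the function names are illustrative.

```python
# Minimal sketch: three historical signed-integer encodings, shown for an
# 8-bit word. Purely illustrative; a real ISA fixes one scheme.

def sign_magnitude(value: int, bits: int = 8) -> str:
    sign = '1' if value < 0 else '0'
    return sign + format(abs(value), f'0{bits - 1}b')

def ones_complement(value: int, bits: int = 8) -> str:
    if value >= 0:
        return format(value, f'0{bits}b')
    return format((1 << bits) - 1 + value, f'0{bits}b')  # all bits of |value| inverted

def twos_complement(value: int, bits: int = 8) -> str:
    return format(value & ((1 << bits) - 1), f'0{bits}b')

for v in (+5, -5):
    print(v, sign_magnitude(v), ones_complement(v), twos_complement(v))
# -5 -> 10000101 (sign-magnitude), 11111010 (ones' complement), 11111011 (two's complement)
```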
The IBM System/360 instruction set architecture is 32-bit, but several models of the System/360 series, such as the IBM System/360 Model 30, have smaller internal data paths, while others, such as the 360/195, have larger internal data paths. The external data bus width is not used to determine the width of the architecture; the NS32008, NS32016 and NS32032 were basically the same 32-bit chip with different external data buses; the NS32764 had a 64-bit bus, and used 32-bit registers. Early 32-bit microprocessors often had a 24-bit address, as did the System/360 processors. Digits In the first part of the 20th century, word-oriented decimal computers typically had 10-digit words with a separate sign, using all ten digits in integers and using two digits for exponents in floating point numbers. Endianness An architecture may use "big" or "little" endianness, or both, or be configurable to use either. Little-endian processors order bytes in memory with the least significant byte of a multi-byte value in the lowest-numbered memory location. Big-endian architectures instead arrange bytes with the most significant byte at the lowest-numbered address. The x86 architecture as well as several 8-bit architectures are little-endian. Most RISC architectures (SPARC, Power, PowerPC, MIPS) were originally big-endian (ARM was little-endian), but many (including ARM) are now configurable as either. Endianness only applies to processors that allow individual addressing of units of data (such as bytes) that are smaller than some of the data formats. Instruction formats Opcodes In some architectures, an instruction has a single opcode. In others, some instructions have an opcode and one or more modifiers. E.g., on the IBM System/370, byte 0 is the opcode, but when byte 0 is the hexadecimal value B2, byte 1 selects a specific instruction, e.g., B205 is store clock (STCK). Operands Addressing modes Architectures typically allow instructions to include some combination of operand addressing modes: Direct: the instruction specifies a complete address. Immediate: the instruction specifies a value rather than an address. Indexed: the instruction specifies a register to use as an index; in some architectures the index is scaled by the operand length. Indirect: the instruction specifies the location of a pointer word that describes the operand, possibly involving multiple levels of indexing and indirection. Truncated. Base-displacement: the instruction specifies a displacement from an address in a register. Autoincrement/autodecrement: a register used for indexing, or a pointer word used by indirect addressing, is incremented or decremented by 1, an operand size or an explicit delta. Number of operands The number of operands is one of the factors that may give an indication about the performance of the instruction set. A three-operand architecture (2-in, 1-out) will allow A := B + C to be computed in one instruction: ADD B, C, A. A two-operand architecture (1-in, 1-in-and-out) will allow A := A + B to be computed in one instruction: ADD B, A; but it requires that A := B + C be done in two instructions: MOVE B, A; ADD C, A. Encoding length As can be seen in the table below, some instruction sets keep to a very simple fixed encoding length, and others have variable length. Usually it is RISC architectures that have fixed encoding length and CISC architectures that have variable length, but not always. Instruction sets The table below compares basic information about instruction set architectures. Notes: Usually the number of registers is a power of two, e.g. 8, 16, 32. 
In some cases a hardwired-to-zero pseudo-register is included, as "part" of register files of architectures, mostly to simplify indexing modes. The column "Registers" only counts the integer "registers" usable by general instructions at any moment. Architectures always include special-purpose registers such as the program counter (PC). Those are not counted unless mentioned. Note that some architectures, such as SPARC, have register windows; for those architectures, the count indicates how many registers are available within a register window. Also, non-architected registers for register renaming are not counted. In the "Type" column, "Register–Register" is a synonym for a common type of architecture, "load–store", meaning that no instruction can directly access memory except some special ones, i.e. load to or store from register(s), with the possible exceptions of memory locking instructions for atomic operations. In the "Endianness" column, "Bi" means that the endianness is configurable. See also Central processing unit (CPU) Processor design Comparison of CPU microarchitectures Instruction set architecture Microprocessor Benchmark (computing) Notes References Computer architecture Computing comparisons
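To make the endianness discussion earlier in this article concrete, the following short Python snippet (standard library only) shows the same 32-bit value packed as little-endian and big-endian byte sequences; it is an illustration, not part of any ISA definition.

```python
# Illustrative only: the same 32-bit value laid out under the two byte orders
# described above. Uses only the Python standard library.
import struct
import sys

value = 0x01020304
print(sys.byteorder)                   # byte order of the host machine
print(struct.pack('<I', value).hex())  # little-endian layout: '04030201'
print(struct.pack('>I', value).hex())  # big-endian layout:    '01020304'
```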
Comparison of instruction set architectures
[ "Technology", "Engineering" ]
1,719
[ "Computer engineering", "Computers", "Computing comparisons", "Computer architecture" ]
22,553,448
https://en.wikipedia.org/wiki/Steglich%20esterification
The Steglich esterification is a variation of an esterification with dicyclohexylcarbodiimide as a coupling reagent and 4-dimethylaminopyridine as a catalyst. The reaction was first described by Wolfgang Steglich in 1978. It is an adaptation of an older method for the formation of amides by means of DCC (dicyclohexylcarbodiimide) and 1-hydroxybenzotriazole (HOBT). This reaction generally takes place at room temperature. A variety of polar aprotic solvents can be used. Because the reaction is mild, esters can be obtained that are inaccessible through other methods, for instance esters of the sensitive 2,4-dihydroxybenzoic acid. A characteristic feature is the formal uptake of the water generated in the reaction by DCC, forming the urea compound dicyclohexylurea (DCU). Reaction mechanism In the first step of the mechanism, the carboxylic acid adds to DCC to form an O-acylisourea intermediate, which is then attacked by the alcohol. With amines, the reaction proceeds without problems to the corresponding amides because amines are more nucleophilic. If the esterification is slow, a side-reaction occurs, diminishing the final yield or complicating purification of the product. This side-reaction is a 1,3-rearrangement of the O-acylisourea intermediate to an N-acylurea which is unable to further react with the alcohol. DMAP suppresses this side reaction by acting as an acyl transfer reagent: it reacts with the O-acylisourea to give a reactive acylpyridinium species, which then acylates the alcohol. References Further reading J. Otera: Esterification. 1st edition, Wiley-VCH, Weinheim, 2003. External links Mechanism for the Steglich esterification Name reactions Esterification reactions
Steglich esterification
[ "Chemistry" ]
374
[ "Coupling reactions", "Esterification reactions", "Name reactions", "Organic reactions" ]
22,553,477
https://en.wikipedia.org/wiki/BOSS%20%28molecular%20mechanics%29
Biochemical and Organic Simulation System (BOSS) is a general-purpose molecular modeling program that performs molecular mechanics calculations, Metropolis Monte Carlo statistical mechanics simulations, and semiempirical Austin Model 1 (AM1), PM3, and PDDG/PM3 quantum mechanics calculations. The molecular mechanics calculations cover energy minimizations, normal mode analysis and conformational searching with the Optimized Potentials for Liquid Simulations (OPLS) force fields. BOSS is developed by Prof. William L. Jorgensen at Yale University, and distributed commercially by Cemcomco, LLC and Schrödinger, Inc. Key features OPLS force field inventor Geometry optimization Semiempirical quantum chemistry MC simulations for pure liquids, solutions, clusters or gas-phase systems Free energies are computed from statistical perturbation (free energy perturbation (FEP)) theory TIP3P, TIP4P, and TIP5P water models See also References External links Molecular modelling software Monte Carlo molecular modelling software
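As background for the free energy perturbation (FEP) feature listed above, the following is the standard Zwanzig perturbation formula on which FEP calculations are generally based; it is included as a general statement of the method, not as a description of the specific implementation in BOSS.

```latex
% Zwanzig free-energy perturbation relation (general form):
% free-energy difference between states 0 and 1 estimated from sampling of state 0.
\Delta A_{0 \to 1} = -k_{B} T \,
  \ln \left\langle \exp\!\left( -\frac{U_{1} - U_{0}}{k_{B} T} \right) \right\rangle_{0}
```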
BOSS (molecular mechanics)
[ "Chemistry" ]
207
[ "Molecular modelling", "Molecular modelling software", "Computational chemistry software" ]
22,555,068
https://en.wikipedia.org/wiki/PstI
PstI is a type II restriction endonuclease isolated from the Gram-negative species Providencia stuartii. Function PstI cleaves DNA at the recognition sequence 5′-CTGCA/G-3′, generating fragments with 3′-cohesive termini. This cleavage yields sticky ends four nucleotides long. PstI is catalytically active as a dimer. The two subunits are related by a 2-fold symmetry axis which, in the complex with the substrate, coincides with the dyad axis of the recognition sequence. It has a molecular weight of 69,500 and contains 54 positively and 41 negatively charged residues. The PstI restriction/modification (R/M) system has two components: a restriction enzyme that cleaves foreign DNA, and a methyltransferase which protects native DNA strands by methylation of the adenine base inside the recognition sequence. Together, the two provide a defense mechanism against invading viruses. The methyltransferase and endonuclease are encoded as two separate proteins and act independently. In the PstI system, the genes are encoded on opposite strands and hence must be transcribed divergently from separate promoters. The transcription initiation sites are separated by only 70 base pairs. A delay in the expression of the endonuclease relative to the methylase is due to inherent differences between the two proteins: the endonuclease is a dimer, requiring a second step for assembly, whereas the methylase is a monomer. PstI is functionally equivalent to BsuBI. Both enzymes recognize the target sequence 5'CTGCAG. The enzyme systems have similar methyltransferases (41% amino acid identity), restriction endonucleases (46% amino acid identity), and genetic makeup (58% nucleotide identity). These observations suggest a shared evolutionary history. In studies of the preferential double-strand cleavage of DNA, the restriction endonuclease PstI has been shown to bind to pSM1 plasmid DNA. DNA cloning PstI is a useful enzyme for DNA cloning as it provides a selective system for generating hybrid DNA molecules. These hybrid DNA molecules can then be cleaved at the regenerated PstI sites. Its use is not limited to molecular cloning; it is also used in restriction site mapping, genotyping, Southern blotting, restriction fragment length polymorphism (RFLP) analysis and SNP analysis. It is also an isoschizomer of the restriction enzyme SalPI from Streptomyces albus P. Cleavage PstI preferentially cleaves purified pSM1 DNA without being influenced by the superhelicity of the substrate. However, it is not known whether the effects of this preferential cleavage arise upon binding to the recognition site or upon DNA scission. Its differential cleavage rates at different restriction sites are attributed to several features of duplex structure: proximity to the ends of a linear DNA molecule, variation in DNA sequence within the recognition sites, a short distance between regions of unusual DNA sequence and the recognition sites, and special structures such as loops and hairpins. The collective effect of these factors could affect the accessibility of the restriction enzyme to its recognition sites. References Restriction enzymes
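To make the recognition and cut pattern described above concrete, here is a small, hedged Python sketch (standard library only; the function name and example sequence are invented for illustration) that locates 5′-CTGCAG-3′ sites on a top-strand sequence and lists the resulting linear fragments.

```python
# Illustrative sketch (not from the article): find PstI recognition sites
# (5'-CTGCAG-3') in a DNA string and report top-strand cut positions.
# PstI cuts between the A and the G on the top strand (CTGCA / G),
# leaving 4-nucleotide 3' overhangs.

RECOGNITION = "CTGCAG"
TOP_STRAND_CUT_OFFSET = 5  # cut after CTGCA, i.e. 5 bases into the site

def pstI_cut_positions(seq: str):
    seq = seq.upper()
    positions = []
    start = seq.find(RECOGNITION)
    while start != -1:
        positions.append(start + TOP_STRAND_CUT_OFFSET)
        start = seq.find(RECOGNITION, start + 1)
    return positions

dna = "AAACTGCAGTTTTCCGGACTGCAGGGA"
cuts = pstI_cut_positions(dna)
print(cuts)                      # top-strand cut coordinates, e.g. [8, 23]
fragments = [dna[i:j] for i, j in zip([0] + cuts, cuts + [len(dna)])]
print(fragments)                 # linear top-strand fragments after digestion
```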
PstI
[ "Biology" ]
662
[ "Genetics techniques", "Restriction enzymes" ]
13,087,180
https://en.wikipedia.org/wiki/Mass%20%28mass%20spectrometry%29
The mass recorded by a mass spectrometer can refer to different physical quantities depending on the characteristics of the instrument and the manner in which the mass spectrum is displayed. Units The dalton (symbol: Da) is the standard unit that is used for indicating mass on an atomic or molecular scale (atomic mass). The unified atomic mass unit (symbol: u) is equivalent to the dalton. One dalton is approximately the mass of a single proton or neutron. The unified atomic mass unit has a value of approximately 1.66054×10−27 kg. The amu without the "unified" prefix is an obsolete unit based on oxygen, which was replaced in 1961. Molecular mass The molecular mass (abbreviated Mr) of a substance, formerly also called molecular weight and abbreviated as MW, is the mass of one molecule of that substance, relative to the unified atomic mass unit u (equal to 1/12 the mass of one atom of 12C). Because it is a relative quantity, the molecular mass of a substance is commonly referred to as the relative molecular mass, abbreviated to Mr. Average mass The average mass of a molecule is obtained by summing the average atomic masses of the constituent elements. For example, the average mass of natural water with formula H2O is 1.00794 + 1.00794 + 15.9994 = 18.01528 Da. Mass number The mass number, also called the nucleon number, is the number of protons and neutrons in an atomic nucleus. The mass number is unique for each isotope of an element and is written either after the element name or as a superscript to the left of an element's symbol. For example, carbon-12 (12C) has 6 protons and 6 neutrons. Nominal mass The nominal mass for an element is the mass number of its most abundant naturally occurring stable isotope, and for an ion or molecule, the nominal mass is the sum of the nominal masses of the constituent atoms. Isotope abundances are tabulated by IUPAC: for example carbon has two stable isotopes, 12C at 98.9% natural abundance and 13C at 1.1% natural abundance, thus the nominal mass of carbon is 12. The nominal mass is not always based on the lowest-mass isotope; for example, iron has isotopes 54Fe, 56Fe, 57Fe, and 58Fe with abundances 6%, 92%, 2%, and 0.3%, respectively, and a nominal mass of 56 Da. For a molecule, the nominal mass is obtained by summing the nominal masses of the constituent elements; for example, water has two hydrogen atoms with nominal mass 1 Da and one oxygen atom with nominal mass 16 Da, therefore the nominal mass of H2O is 18 Da. In mass spectrometry, the difference between the nominal mass and the monoisotopic mass is the mass defect. This differs from the definition of mass defect used in physics, which is the difference between the mass of a composite particle and the sum of the masses of its constituent parts. Accurate mass The accurate mass (more appropriately, the measured accurate mass) is an experimentally determined mass that allows the elemental composition to be determined. For molecules with mass below 200 Da, 5 ppm accuracy is often sufficient to uniquely determine the elemental composition. Exact mass The exact mass of an isotopic species (more appropriately, the calculated exact mass) is obtained by summing the masses of the individual isotopes of the molecule. For example, the exact mass of water containing two hydrogen-1 (1H) and one oxygen-16 (16O) is 1.0078 + 1.0078 + 15.9949 = 18.0105 Da. The exact mass of heavy water, containing two hydrogen-2 (deuterium or 2H) and one oxygen-16 (16O), is 2.0141 + 2.0141 + 15.9949 = 20.0231 Da. 
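The arithmetic in the average mass and exact mass paragraphs can be reproduced in a few lines of Python. This is only a sketch using the numeric values quoted above; the dictionaries and function names are illustrative, not taken from any standard library.

```python
# Sketch of the arithmetic above: the average mass uses average atomic masses,
# the exact (monoisotopic) mass uses the masses of specific isotopes.
# Numeric values are the ones quoted in the text (in Da).

AVERAGE_ATOMIC_MASS = {"H": 1.00794, "O": 15.9994}
ISOTOPE_MASS = {"1H": 1.0078, "2H": 2.0141, "16O": 15.9949}

def average_mass(formula: dict) -> float:
    return sum(AVERAGE_ATOMIC_MASS[el] * n for el, n in formula.items())

def exact_mass(isotopes: dict) -> float:
    return sum(ISOTOPE_MASS[iso] * n for iso, n in isotopes.items())

print(round(average_mass({"H": 2, "O": 1}), 5))    # 18.01528 (H2O, average mass)
print(round(exact_mass({"1H": 2, "16O": 1}), 4))   # 18.0105  (H2O, exact mass)
print(round(exact_mass({"2H": 2, "16O": 1}), 4))   # 20.0231  (D2O, exact mass)
```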
When an exact mass value is given without specifying an isotopic species, it normally refers to the most abundant isotopic species. Monoisotopic mass The monoisotopic mass is the sum of the masses of the atoms in a molecule using the unbound, ground-state, rest mass of the principal (most abundant) isotope for each element. The monoisotopic mass of a molecule or ion is the exact mass obtained using the principal isotopes. Monoisotopic mass is typically expressed in daltons. For typical organic compounds, where the monoisotopic mass is most commonly used, this also results in the lightest isotope being selected. For some heavier atoms such as iron and argon the principal isotope is not the lightest isotope. The mass spectrum peak corresponding to the monoisotopic mass is often not observed for large molecules, but can be determined from the isotopic distribution. Most abundant mass This refers to the mass of the molecule with the most highly represented isotope distribution, based on the natural abundance of the isotopes. Isotopomer and isotopologue Isotopomers (isotopic isomers) are isomers having the same number of each isotopic atom, but differing in the positions of the isotopic atoms. For example, CH3CHDCH3 and CH3CH2CH2D are a pair of structural isotopomers. Isotopomers should not be confused with isotopologues, which are chemical species that differ in the isotopic composition of their molecules or ions. For example, three isotopologues of the water molecule with different isotopic composition of hydrogen are: HOH, HOD and DOD, where D stands for deuterium (2H). Kendrick mass The Kendrick mass is a mass obtained by multiplying the measured mass by a numeric factor. The Kendrick mass is used to aid in the identification of molecules of similar chemical structure from peaks in mass spectra. The method of stating mass was suggested in 1963 by the chemist Edward Kendrick. According to the procedure outlined by Kendrick, the mass of CH2 is defined as 14.000 Da, instead of 14.01565 Da. The Kendrick mass for a family of compounds F is given by: Kendrick mass = (observed mass) × (nominal mass of F) / (exact mass of F). For hydrocarbon analysis, F = CH2, so the observed mass is multiplied by 14.00000/14.01565. Mass defect (mass spectrometry) The mass defect used in nuclear physics is different from its use in mass spectrometry. In nuclear physics, the mass defect is the difference between the mass of a composite particle and the sum of the masses of its component parts. In mass spectrometry the mass defect is defined as the difference between the exact mass and the nearest integer mass. The Kendrick mass defect is the exact Kendrick mass subtracted from the nearest integer Kendrick mass. Mass defect filtering can be used to selectively detect compounds with a mass spectrometer based on their chemical composition. Packing fraction (mass spectrometry) The term packing fraction was defined by Aston as the difference of the measured mass M and the nearest integer mass I (based on the oxygen-16 mass scale), divided by the mass number and multiplied by ten thousand: packing fraction = 10^4 × (M − I) / I. Aston's early model of nuclear structure (prior to the discovery of the neutron) postulated that the electromagnetic fields of closely packed protons and electrons in the nucleus would interfere and a fraction of the mass would be destroyed. A low packing fraction is indicative of a stable nucleus. 
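A minimal Python sketch of the Kendrick mass and Kendrick mass defect calculations described above, assuming the CH2 base unit; the example masses are monoisotopic masses of three saturated hydrocarbons and the function names are illustrative.

```python
# Sketch of the Kendrick mass calculation described above, using CH2 as the
# base unit (nominal mass 14.00000, exact mass 14.01565). Function names are
# illustrative, not from any particular library.

CH2_NOMINAL = 14.00000
CH2_EXACT = 14.01565

def kendrick_mass(observed_mass: float) -> float:
    return observed_mass * CH2_NOMINAL / CH2_EXACT

def kendrick_mass_defect(observed_mass: float) -> float:
    km = kendrick_mass(observed_mass)
    return round(km) - km   # nearest-integer Kendrick mass minus exact Kendrick mass

# Homologous hydrocarbons (differing by CH2) share the same Kendrick mass defect:
for m in (226.26605, 240.28170, 254.29735):   # e.g. C16H34, C17H36, C18H38 (monoisotopic)
    print(round(kendrick_mass(m), 4), round(kendrick_mass_defect(m), 4))
```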
Nitrogen rule The nitrogen rule states that organic compounds containing exclusively hydrogen, carbon, nitrogen, oxygen, silicon, phosphorus, sulfur, and the halogens either have an odd nominal mass that indicates an odd number of nitrogen atoms are present or an even nominal mass that indicates an even number of nitrogen atoms are present in the molecular ion. Prout's hypothesis and the whole number rule The whole number rule states that the masses of the isotopes are integer multiples of the mass of the hydrogen atom. The rule is a modified version of Prout's hypothesis proposed in 1815, to the effect that atomic weights are multiples of the weight of the hydrogen atom. See also List of elements by atomic mass Dalton (unit) References External links web tools to compute molecule masses & isotopic distribution Mass Mass spectrometry
Mass (mass spectrometry)
[ "Physics", "Chemistry", "Mathematics" ]
1,671
[ "Scalar physical quantities", "Physical quantities", "Spectrum (physical sciences)", "Instrumental analysis", "Quantity", "Mass", "Size", "Mass spectrometry", "Wikipedia categories named after physical quantities", "Matter" ]
13,089,065
https://en.wikipedia.org/wiki/RGS4
Regulator of G protein signaling 4 also known as RGP4 is a protein that in humans is encoded by the RGS4 gene. RGP4 regulates G protein signaling. Function Regulator of G protein signalling (RGS) family members are regulatory molecules that act as GTPase activating proteins (GAPs) for G alpha subunits of heterotrimeric G proteins. RGS proteins are able to deactivate G protein subunits of the Gi alpha, Go alpha and Gq alpha subtypes. They drive G proteins into their inactive GDP-bound forms. Regulator of G protein signaling 4 belongs to this family. All RGS proteins share a conserved 120-amino acid sequence termed the RGS domain which conveys GAP activity. Regulator of G protein signaling 4 protein is 37% identical to RGS1 and 97% identical to rat Rgs4. This protein negatively regulates signaling upstream or at the level of the heterotrimeric G protein and is localized in the cytoplasm. Clinical significance A number of studies associate the RGS4 gene with schizophrenia, while some fail to detect an association. RGS4 is also of interest as one of the three main RGS proteins (along with RGS9 and RGS17) involved in terminating signalling by the mu opioid receptor, and may be important in the development of tolerance to opioid drugs. Inhibitors cyclic peptides CCG-4986 Interactions RGS4 has been shown to interact with: COPB2, ERBB3, and GNAQ. References Further reading Proteins
RGS4
[ "Chemistry" ]
318
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
13,089,880
https://en.wikipedia.org/wiki/Parallel%20mesh%20generation
Parallel mesh generation in numerical analysis is a new research area between the boundaries of two scientific computing disciplines: computational geometry and parallel computing. Parallel mesh generation methods decompose the original mesh generation problem into smaller subproblems which are solved (meshed) in parallel using multiple processors or threads. The existing parallel mesh generation methods can be classified in terms of two basic attributes: the sequential technique used for meshing the individual subproblems and the degree of coupling between the subproblems. One of the challenges in parallel mesh generation is to develop parallel meshing software using off-the-shelf sequential meshing codes. Overview Parallel mesh generation procedures in general decompose the original 2-dimensional (2D) or 3-dimensional (3D) mesh generation problem into N smaller subproblems which are solved (i.e., meshed) concurrently using P processors or threads. The subproblems can be formulated to be either tightly coupled, partially coupled or even decoupled. The coupling of the subproblems determines the intensity of the communication and the amount/type of synchronization required between the subproblems. The challenges in parallel mesh generation methods are: to maintain stability of the parallel mesher (i.e., retain the quality of finite elements generated by state-of-the-art sequential codes) and at the same time achieve 100% code re-use (i.e., leverage the continuously evolving and fully functional off-the-shelf sequential meshers) without substantial deterioration of the scalability of the parallel mesher. There is a difference between parallel mesh generation and parallel triangulation. In parallel triangulation a pre-defined set of points is used to generate, in parallel, triangles that cover the convex hull of the set of points. A very efficient algorithm for parallel Delaunay triangulations appears in Blelloch et al. This algorithm is extended in Clemens and Walkington for parallel mesh generation. Parallel mesh generation software While many solvers have been ported to parallel machines, grid generators have been left behind. Still, the preprocessing step of mesh generation remains a sequential bottleneck in the simulation cycle, which is why the need to develop stable 3D parallel grid generators is well justified. A parallel version of the MeshSim mesh generator by Simmetrix Inc. is available for both research and commercial use. It includes parallel implementations of surface, volume and boundary layer mesh generation as well as parallel mesh adaptivity. The algorithms it uses are based on those in the referenced work and are scalable (both in the parallel sense and in the sense that they give speedup compared to the serial implementation) and stable. For multicore or multiprocessor systems, there is also a multithreaded version of these algorithms that is available in the base MeshSim product. Another parallel mesh generator, D3D, was developed by Daniel Rypl at Czech Technical University in Prague. D3D is a mesh generator capable of discretizing 3D domains, in parallel (or sequentially), into mixed meshes. BOXERMesh is an unstructured hybrid mesh generator developed by Cambridge Flow Solutions. Implemented as distributed-memory fully parallelised software, it is specifically designed to overcome the traditional bottlenecks constraining engineering simulation, delivering advanced meshing on geometries of arbitrary complexity and size. 
Its scalability has been demonstrated on very large meshes generated on HPC clusters. Challenges in parallel mesh generation It takes substantial time to develop the algorithmic and software infrastructure for commercial sequential mesh generation libraries. Moreover, improvements in terms of quality, speed, and functionality are open-ended, which makes the task of creating leading-edge parallel mesh generation codes challenging. An area with immediate, high benefits for parallel mesh generation is domain decomposition. The domain decomposition (DD) problem is still open for 3D geometries, and its solution will help to deliver stable and scalable methods that rely on off-the-shelf mesh generation codes for Delaunay and advancing-front techniques. Finally, a long-term investment in parallel mesh generation is to attract the attention of mathematicians to open problems in mesh generation with broader impact in mathematics. See also Mesh generation Parallel computing References Mesh generation Parallel computing
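As a toy illustration of the decoupled formulation described in the overview, the Python sketch below cuts the unit square into vertical strips and triangulates each strip independently in its own worker process. It is a deliberately simplified sketch: real parallel mesh generators use unstructured meshers for the subproblems and must also reconcile the interfaces between subdomains, which is omitted here.

```python
from multiprocessing import Pool

def mesh_strip(args):
    """Structured triangulation of one vertical strip [x0, x1] x [0, 1]."""
    x0, x1, nx, ny = args
    dx, dy = (x1 - x0) / nx, 1.0 / ny
    pts = [(x0 + i * dx, j * dy) for j in range(ny + 1) for i in range(nx + 1)]
    tris = []
    for j in range(ny):
        for i in range(nx):
            a = j * (nx + 1) + i             # lower-left corner of the cell
            b, c, d = a + 1, a + nx + 1, a + nx + 2
            tris += [(a, b, d), (a, d, c)]   # split each quad into 2 triangles
    return len(pts), len(tris)

if __name__ == "__main__":
    n_strips, nx, ny = 4, 8, 32
    jobs = [(k / n_strips, (k + 1) / n_strips, nx, ny) for k in range(n_strips)]
    with Pool(processes=n_strips) as pool:
        results = pool.map(mesh_strip, jobs)  # strips are meshed concurrently
    for k, (npts, ntris) in enumerate(results):
        print(f"strip {k}: {npts} points, {ntris} triangles")
```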
Parallel mesh generation
[ "Physics" ]
856
[ "Tessellation", "Mesh generation", "Symmetry" ]
13,090,363
https://en.wikipedia.org/wiki/Error-correcting%20codes%20with%20feedback
In mathematics, computer science, telecommunication, information theory, and searching theory, error-correcting codes with feedback are error-correcting codes designed to work in the presence of feedback from the receiver to the sender. Problem Alice (the sender) wishes to send a value x to Bob (the receiver). The communication channel between Alice and Bob is imperfect, and can introduce errors. Solution An error-correcting code is a way of encoding x as a message such that Bob will successfully understand the value x as intended by Alice, even if the message Alice sends and the message Bob receives differ. In an error-correcting code with feedback, the channel is two-way: Bob can send feedback to Alice about the message he received. Noisy feedback In an error-correcting code without noisy feedback, the feedback received by the sender is always free of errors. In an error-correcting code with noisy feedback, errors can occur in the feedback, as well as in the message. An error-correcting code with noiseless feedback is equivalent to an adaptive search strategy with errors. History In 1956, Claude Shannon introduced the discrete memoryless channel with noiseless feedback. In 1961, Alfréd Rényi introduced the Bar-Kochba game (also known as Twenty questions), with a given percentage of wrong answers, and calculated the minimum number of randomly chosen questions to determine the answer. In his 1964 dissertation, Elwyn Berlekamp considered error-correcting codes with noiseless feedback. In Berlekamp's scenario, the receiver chose a subset of possible messages and asked the sender whether the given message was in this subset, expecting a 'yes' or 'no' answer. Based on this answer, the receiver then chose a new subset and repeated the process. The game is further complicated by noise: some of the answers will be wrong. See also Noisy channel coding theorem References Error detection and correction
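The equivalence with adaptive searching can be made concrete with a small Rényi–Ulam-style game. The Python sketch below identifies a secret number by asking for its bits while the responder may lie at most a fixed number of times. The strategy used (repeat every bit question 2·max_lies + 1 times and take a majority vote) is deliberately naive and far from Berlekamp's optimal question count, but it is guaranteed to succeed because at most max_lies of the repetitions of any single question can be lies.

```python
import random

def adversarial_answer(truth, lies_left):
    """Answer a yes/no question, lying with some probability while lies remain."""
    if lies_left[0] > 0 and random.random() < 0.3:
        lies_left[0] -= 1
        return not truth
    return truth

def identify(secret, n_bits, max_lies, rng_seed=0):
    """Recover secret (< 2**n_bits) despite at most max_lies wrong answers."""
    random.seed(rng_seed)
    lies_left = [max_lies]
    guess = 0
    for bit in range(n_bits):
        truth = bool((secret >> bit) & 1)
        votes = sum(adversarial_answer(truth, lies_left)
                    for _ in range(2 * max_lies + 1))
        if votes > max_lies:          # majority of the 2*max_lies + 1 answers
            guess |= 1 << bit
    return guess

secret = 733
print(identify(secret, n_bits=10, max_lies=3), "== expected", secret)
```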
Error-correcting codes with feedback
[ "Engineering" ]
393
[ "Error detection and correction", "Reliability engineering" ]
13,091,426
https://en.wikipedia.org/wiki/Sequence%20hypothesis
The sequence hypothesis was first formally proposed in the review "On Protein Synthesis" by Francis Crick in 1958. It states that the sequence of bases in the genetic material (DNA or RNA) determines the sequence of amino acids for which that segment of nucleic acid codes, and this amino acid sequence determines the three-dimensional structure into which the protein folds. The three-dimensional structure of a protein is required for a protein to be functional. This hypothesis then lays the essential link between information stored and inherited in nucleic acids to the chemical processes which enable life to exist. Or, as Crick put it in 1958: This description is further amplified in the article and, in discussing how a protein folds up into its three-dimensional structure, Crick suggested that "the folding is simply a function of the order of the amino acids" in the protein. References See also Central dogma of molecular biology Nucleic acids Biology theories
Sequence hypothesis
[ "Chemistry", "Biology" ]
187
[ "Biomolecules by chemical classification", "Biology theories", "Nucleic acids" ]
25,435,134
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20database%20method
The nuclear magnetic resonance database method enables the identification of the stereochemistry of chiral molecules, especially polyols. It relies on the observation that NMR spectroscopy data depend only on the immediate environment near an asymmetric carbon, not on the entire molecular structure. All stereoisomers of a certain class of compounds are synthesized, and their proton NMR and carbon-13 NMR chemical shifts and coupling constants are compared. Yoshito Kishi's group at Harvard University has reported NMR databases for 1,3,5-triols, 1,2,3-triols, 1,2,3,4-tetraols, and 1,2,3,4,5-pentaols. The stereochemistry of any 1,2,3-triol may be determined by comparing it with the database, even if the remainder of the unknown molecule is different from the database template compounds. References Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance database method
[ "Physics", "Chemistry", "Astronomy" ]
194
[ "Spectroscopy stubs", "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy", "Astronomy stubs", "Nuclear chemistry stubs", "Nuclear magnetic resonance stubs", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
25,436,307
https://en.wikipedia.org/wiki/Roark%27s%20Formulas%20for%20Stress%20and%20Strain
Roark's Formulas for Stress and Strain is a mechanical engineering design book written by Richard G. Budynas and Ali M. Sadegh. It was first published in 1938 and the most current ninth edition was published in March 2020. Subjects The book covers various subjects, including bearing and shear stress, experimental stress analysis, stress concentrations, material behavior, and stress and strain measurement. It also features expanded tables and cases, improved notations and figures within the tables, consistent table and equation numbering, and verification of correction factors. The formulas are organized into tables in a hierarchical format: chapter, table, case, subcase, and each case and subcase is accompanied by diagrams. The main topics of the book include: • The behavior of bodies under stress • Analytical, numerical, and experimental methods • Tension, compression, shear, and combined stress • Beams and curved beams • Torsion, flat plates, and columns • Shells of revolution, pressure vessels, and pipes • Bodies under direct pressure and shear stress • Elastic stability • Dynamic and temperature stresses • Stress concentration • Fatigue and fracture • Stresses in fasteners and joints • Composite materials and solid biomechanics Topics The topics covered in the 7th Edition: Chapter 1 – Introduction Chapter 2 – Stress and Strain: Important Relationships Chapter 3 – The Behavior of Bodies Under Stress Chapter 4 – Principles and Analytical Methods Chapter 5 – Numerical Methods Chapter 6 – Experimental Methods Chapter 7 – Tension, Compression, Shear, and Combined Stress Chapter 8 – Beams; Flexure of Straight Bars Chapter 9 – Bending of Curved Beams Chapter 10 – Torsion Chapter 11 – Flat Plates Chapter 12 – Columns and Other Compression Members Chapter 13 – Shells of Revolution; Pressure Vessels; Pipes Chapter 14 – Bodies in Contact Undergoing Direct Bearing and Shear Stress Chapter 15 – Elastic Stability Chapter 16 – Dynamic and Temperature Stresses Chapter 17 – Stress Concentration Factors Appendix A – Properties of a Plane Area Appendix B – Glossary Appendix C – Composite Materials In all, there are over 5,000 formulas for over 1,500 different load/support conditions for various structural members. Editions 1st Edition 1938 2nd Edition 1943 3rd Edition 1954 4th Edition 1965 5th Edition 1975 – 6th Edition 1989 – 7th Edition September 13, 2001 (851 Pages) – 8th Edition November 28, 2011 (1072 Pages) – 9th Edition March 9, 2020 (928 Pages) Biography Richard G. Budynas is professor of mechanical engineering at Rochester Institute of Technology. He is author of a newly revised McGraw-Hill textbook, Applied Strength and Applied Stress Analysis, 2nd Edition. Ali M. Sadegh is a professor and the Founder and Director of the Center for Advanced Engineering Design at The City College of New York. He is a Licensed Professional Engineer, P.E., and a Certified Manufacturing Engineer, CMfgE. Warren C. Young is professor emeritus in the department of mechanical engineering at the University of Wisconsin, Madison, where he was on the faculty for over 40 years. Dr. Young has also taught as a visiting professor at Bengal Engineering College in Calcutta, India, and served as chief of the Energy Manpower and Training Project sponsored by USAir in Bandung, Indonesia. References 1938 non-fiction books Engineering textbooks Handbooks and manuals Mechanical engineering
Roark's Formulas for Stress and Strain
[ "Physics", "Engineering" ]
650
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
25,441,497
https://en.wikipedia.org/wiki/Stokes%27%20theorem
Stokes' theorem, also known as the Kelvin–Stokes theorem after Lord Kelvin and George Stokes, the fundamental theorem for curls, or simply the curl theorem, is a theorem in vector calculus on . Given a vector field, the theorem relates the integral of the curl of the vector field over some surface, to the line integral of the vector field around the boundary of the surface. The classical theorem of Stokes can be stated in one sentence: The line integral of a vector field over a loop is equal to the surface integral of its curl over the enclosed surface. Stokes' theorem is a special case of the generalized Stokes theorem. In particular, a vector field on can be considered as a 1-form in which case its curl is its exterior derivative, a 2-form. Theorem Let be a smooth oriented surface in with boundary . If a vector field is defined and has continuous first order partial derivatives in a region containing , then More explicitly, the equality says that The main challenge in a precise statement of Stokes' theorem is in defining the notion of a boundary. Surfaces such as the Koch snowflake, for example, are well-known not to exhibit a Riemann-integrable boundary, and the notion of surface measure in Lebesgue theory cannot be defined for a non-Lipschitz surface. One (advanced) technique is to pass to a weak formulation and then apply the machinery of geometric measure theory; for that approach see the coarea formula. In this article, we instead use a more elementary definition, based on the fact that a boundary can be discerned for full-dimensional subsets of . A more detailed statement will be given for subsequent discussions. Let be a piecewise smooth Jordan plane curve. The Jordan curve theorem implies that divides into two components, a compact one and another that is non-compact. Let denote the compact part; then is bounded by . It now suffices to transfer this notion of boundary along a continuous map to our surface in . But we already have such a map: the parametrization of . Suppose is piecewise smooth at the neighborhood of , with . If is the space curve defined by then we call the boundary of , written . With the above notation, if is any smooth vector field on , then Here, the "" represents the dot product in . Special case of a more general theorem Stokes' theorem can be viewed as a special case of the following identity: where is any smooth vector or scalar field in . When is a uniform scalar field, the standard Stokes' theorem is recovered. Proof The proof of the theorem consists of 4 steps. We assume Green's theorem, so what is of concern is how to boil down the three-dimensional complicated problem (Stokes' theorem) to a two-dimensional rudimentary problem (Green's theorem). When proving this theorem, mathematicians normally deduce it as a special case of a more general result, which is stated in terms of differential forms, and proved using more sophisticated machinery. While powerful, these techniques require substantial background, so the proof below avoids them, and does not presuppose any knowledge beyond a familiarity with basic vector calculus and linear algebra. At the end of this section, a short alternative proof of Stokes' theorem is given, as a corollary of the generalized Stokes' theorem. Elementary proof First step of the elementary proof (parametrization of integral) As in , we reduce the dimension by using the natural parametrization of the surface. Let and be as in that section, and note that by change of variables where stands for the Jacobian matrix of at . 
Now let be an orthonormal basis in the coordinate directions of . Recognizing that the columns of are precisely the partial derivatives of at , we can expand the previous equation in coordinates as Second step in the elementary proof (defining the pullback) The previous step suggests we define the function Now, if the scalar value functions and are defined as follows, then, This is the pullback of along , and, by the above, it satisfies We have successfully reduced one side of Stokes' theorem to a 2-dimensional formula; we now turn to the other side. Third step of the elementary proof (second equation) First, calculate the partial derivatives appearing in Green's theorem, via the product rule: Conveniently, the second term vanishes in the difference, by equality of mixed partials. So, But now consider the matrix in that quadratic form—that is, . We claim this matrix in fact describes a cross product. Here the superscript "" represents the transposition of matrices. To be precise, let be an arbitrary matrix and let Note that is linear, so it is determined by its action on basis elements. But by direct calculation Here, represents an orthonormal basis in the coordinate directions of . Thus for any . Substituting for , we obtain We can now recognize the difference of partials as a (scalar) triple product: On the other hand, the definition of a surface integral also includes a triple product—the very same one! So, we obtain Fourth step of the elementary proof (reduction to Green's theorem) Combining the second and third steps and then applying Green's theorem completes the proof. Green's theorem asserts the following: for any region D bounded by the Jordans closed curve γ and two scalar-valued smooth functions defined on D; We can substitute the conclusion of STEP2 into the left-hand side of Green's theorem above, and substitute the conclusion of STEP3 into the right-hand side. Q.E.D. Proof via differential forms The functions can be identified with the differential 1-forms on via the map Write the differential 1-form associated to a function as . Then one can calculate that where is the Hodge star and is the exterior derivative. Thus, by generalized Stokes' theorem, Applications Irrotational fields In this section, we will discuss the irrotational field (lamellar vector field) based on Stokes' theorem. Definition 2-1 (irrotational field). A smooth vector field on an open is irrotational (lamellar vector field) if . This concept is very fundamental in mechanics; as we'll prove later, if is irrotational and the domain of is simply connected, then is a conservative vector field. Helmholtz's theorem In this section, we will introduce a theorem that is derived from Stokes' theorem and characterizes vortex-free vector fields. In classical mechanics and fluid dynamics it is called Helmholtz's theorem. Theorem 2-1 (Helmholtz's theorem in fluid dynamics). Let be an open subset with a lamellar vector field and let be piecewise smooth loops. If there is a function such that [TLH0] is piecewise smooth, [TLH1] for all , [TLH2] for all , [TLH3] for all . Then, Some textbooks such as Lawrence call the relationship between and stated in theorem 2-1 as "homotopic" and the function as "homotopy between and ". However, "homotopic" or "homotopy" in above-mentioned sense are different (stronger than) typical definitions of "homotopic" or "homotopy"; the latter omit condition [TLH3]. So from now on we refer to homotopy (homotope) in the sense of theorem 2-1 as a tubular homotopy (resp. tubular-homotopic). 
Proof of Helmholtz's theorem In what follows, we abuse notation and use "" for concatenation of paths in the fundamental groupoid and "" for reversing the orientation of a path. Let , and split into four line segments . so that By our assumption that and are piecewise smooth homotopic, there is a piecewise smooth homotopy Let be the image of under . That follows immediately from Stokes' theorem. is lamellar, so the left side vanishes, i.e. As is tubular(satisfying [TLH3]), and . Thus the line integrals along and cancel, leaving On the other hand, , , so that the desired equality follows almost immediately. Conservative forces Above Helmholtz's theorem gives an explanation as to why the work done by a conservative force in changing an object's position is path independent. First, we introduce the Lemma 2-2, which is a corollary of and a special case of Helmholtz's theorem. Lemma 2-2. Let be an open subset, with a Lamellar vector field and a piecewise smooth loop . Fix a point , if there is a homotopy such that [SC0] is piecewise smooth, [SC1] for all , [SC2] for all , [SC3] for all . Then, Above Lemma 2-2 follows from theorem 2–1. In Lemma 2-2, the existence of satisfying [SC0] to [SC3] is crucial;the question is whether such a homotopy can be taken for arbitrary loops. If is simply connected, such exists. The definition of simply connected space follows: Definition 2-2 (simply connected space). Let be non-empty and path-connected. is called simply connected if and only if for any continuous loop, there exists a continuous tubular homotopy from to a fixed point ; that is, [SC0'] is continuous, [SC1] for all , [SC2] for all , [SC3] for all . The claim that "for a conservative force, the work done in changing an object's position is path independent" might seem to follow immediately if the M is simply connected. However, recall that simple-connection only guarantees the existence of a continuous homotopy satisfying [SC1-3]; we seek a piecewise smooth homotopy satisfying those conditions instead. Fortunately, the gap in regularity is resolved by the Whitney's approximation theorem. In other words, the possibility of finding a continuous homotopy, but not being able to integrate over it, is actually eliminated with the benefit of higher mathematics. We thus obtain the following theorem. Theorem 2-2. Let be open and simply connected with an irrotational vector field . For all piecewise smooth loops Maxwell's equations In the physics of electromagnetism, Stokes' theorem provides the justification for the equivalence of the differential form of the Maxwell–Faraday equation and the Maxwell–Ampère equation and the integral form of these equations. For Faraday's law, Stokes' theorem is applied to the electric field, : For Ampère's law, Stokes' theorem is applied to the magnetic field, : Notes References Electromagnetism Mechanics Vectors (mathematics and physics) Vector calculus Theorems in calculus
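For reference, the identities the article appeals to can be written out explicitly. The LaTeX fragment below states the classical theorem and the two Maxwell applications in one common convention (SI units, with the positively oriented boundary of the oriented surface); sign and unit conventions vary between textbooks, so this is a summary rather than the article's original notation.

```latex
% Classical (Kelvin-Stokes) theorem for a smooth vector field F defined on a
% region containing the oriented surface \Sigma with boundary \partial\Sigma:
\oint_{\partial\Sigma} \mathbf{F}\cdot \mathrm{d}\mathbf{r}
  \;=\; \iint_{\Sigma} (\nabla\times\mathbf{F})\cdot \mathrm{d}\mathbf{S}

% Applied to the electric field E (Maxwell-Faraday law, integral form):
\oint_{\partial\Sigma} \mathbf{E}\cdot \mathrm{d}\boldsymbol{\ell}
  \;=\; -\,\frac{\mathrm{d}}{\mathrm{d}t}\iint_{\Sigma} \mathbf{B}\cdot \mathrm{d}\mathbf{S}

% Applied to the magnetic field B (Ampere-Maxwell law, integral form, vacuum):
\oint_{\partial\Sigma} \mathbf{B}\cdot \mathrm{d}\boldsymbol{\ell}
  \;=\; \mu_0 \iint_{\Sigma}\!\left(\mathbf{J}
      + \varepsilon_0\,\frac{\partial\mathbf{E}}{\partial t}\right)\cdot \mathrm{d}\mathbf{S}
```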
Stokes' theorem
[ "Physics", "Mathematics", "Engineering" ]
2,238
[ "Theorems in mathematical analysis", "Electromagnetism", "Physical phenomena", "Theorems in calculus", "Calculus", "Mechanics", "Fundamental interactions", "Mechanical engineering" ]
25,444,252
https://en.wikipedia.org/wiki/Molecular%20processor
A molecular processor is a processor that is based on a molecular platform rather than on an inorganic semiconductor in integrated circuit format. Current technology Molecular processors are currently in their infancy and currently only a few exist. At present a basic molecular processor is any biological or chemical system that uses a complementary DNA (cDNA) template to form a long chain amino acid molecule. A key factor that differentiates molecular processors is "the ability to control output" of protein or peptide concentration as a function of time. Simple formation of a molecule becomes the task of a chemical reaction, bioreactor or other polymerization technology. Current molecular processors take advantage of cellular processes to produce amino acid based proteins and peptides. The formation of a molecular processor currently involves integrating cDNA into the genome and should not replicate and re-insert, or be defined as a virus after insertion. Current molecular processors are replication incompetent, non-communicable and cannot be transmitted from cell to cell, animal to animal or human to human. All must have a method to terminate if implanted. The most effective methodology for insertion of cDNA (template with control mechanism) uses capsid technology to insert a payload into the genome. A viable molecular processor is one that dominates cellular function by re-task and or reassignment but does not terminate the cell. It will continuously produce protein or produce on demand and have method to regulate dosage if qualifying as a "drug delivery" molecular processor. Potential applications range from up-regulation of functional CFTR in cystic fibrosis and hemoglobin in sickle cell anemia to angiogenesis in cardiovascular stenosis to account for protein deficiency (used in gene therapy.) Example A vector inserted to form a molecular processor is described in part. The objective was to promote angiogenesis, blood vessel formation and improve cardiovasculature. Vascular endothelial growth factor (VEGF) and enhanced green fluorescent protein (EGFP) cDNA was ligated to either side of an internal ribosomal re-entry site (IRES) to produce inline production of both the VEGF and EGFP proteins. After in vitro insertion and quantification of integrating units (IUs), engineered cells produce a bioluminescent marker and a chemotactic growth factor. In this instance, increased fluorescence of EGFP is used to show VEGF production in individual cells with active molecular processors. The production was exponential in nature and regulated through use of an integrating promoter, cell numbers, the number of integrated units (IUs) of molecular processors and or cell numbers. The measure the molecular processors efficacy was performed by FC/FACS to indirectly measure VEGF through fluorescence intensity. Proof of functional molecular processing was quantified by ELISA to show VEGF effect through chemotactic and angiogenesis models. The result involved directed assembly and coordination of endothelial cells for tubule formation by engineered cells on endothelial cells. The research goes on to show implantation and VEGF with dosage capabilities to promote revascularization, validating mechanisms of molecular processor control. 
See also Biocomputers Computational gene DNA computing List of emerging technologies Molecular electronics Organic semiconductor References External links CNN -Moving toward molecular chips Newscientist - Atomic Logic Softpedia - IBM Is Working on DNA-Based processors Biological engineering Molecular electronics Nanoelectronics DNA
Molecular processor
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
688
[ "Biological engineering", "Molecular physics", "Molecular electronics", "Nanoelectronics", "Nanotechnology" ]
26,925,616
https://en.wikipedia.org/wiki/List%20of%20tribology%20organizations
This is a list of organizations involved in research in or advocacy of tribology, the scientific and engineering discipline related to friction, lubrication and wear. Government Argonne National Laboratory NASA - Glenn Research Center National Center for Scientific Research (CNRS) National Research Council (Canada) Southwest Research Institute Advocacy and Professional societies American Bearing Manufacturers Association (ABMA) American Gear Manufacturers Association (AGMA) American Society of Mechanical Engineers (ASME) International Federation for the Promotion of Mechanism and Machine Science (IFToMM) Institution of Engineering and Technology Institution of Mechanical Engineers (IMechE) Institute of Physics Society of Tribologists and Lubrication Engineers (STLE) (USA) Publications Lubricants Proceedings of the Institution of Mechanical Engineers, Part J: Journal of Engineering Tribology Tribology Letters Tribology Transactions Wear Higher education Austria TU Wien Australia Curtin University University of New South Wales University of Western Australia Belgium Ghent University KU Leuven Brazil University of São Paulo Federal University of Uberlandia Federal University of Espírito Santo Canada Dalhousie University University of Waterloo University of Windsor China Chinese Academy of Sciences Chinese Academy of Sciences Tsinghua University Qingdao University of Technology Chile Pontificia Universidad Católica de Chile (UC) Czech Republic Brno University of Technology France Ecole Centrale de Lyon INSA Lyon / University of Lyon University of Poitiers Germany Clausthal University of Technology (ITR) Karlsruhe Institute of Technology (KIT) Leibniz Universität Hannover (IMKT) Otto von Guericke University Magdeburg RWTH Aachen University Technische Universität Berlin Technical University of Munich India Indian Institute of Science Bangalore Indian Institute of Technology Delhi Indian Institute of Technology Kanpur Indian Institute of Technology (Indian School of Mines), Dhanbad Indian Institute of Technology Rupnagar National Institute of Technology Srinagar Indian Institute of Technology Tirupati Indian Institute of Technology Indore Japan Kanazawa University Kobe University Nagoya University Niigata University Tokyo University Korea KAIST Korea University Yonsei University Kookmin University Malaysia International Islamic University Malaysia Universiti Malaya Universiti Kebangsaan Malaysia Universiti Teknologi MARA Universiti Teknikal Malaysia Melaka Universiti Teknologi Malaysia Universiti Sains Malaysia Netherlands Delft University of Technology University of Twente New Zealand Auckland University of Technology Norway Norwegian University of Science and Technology Pakistan National University of Science and Technology Portugal Aveiro University University of Coimbra Russia Slovenia University of Ljubljana Sweden Luleå University of Technology Uppsala University KTH Royal Institute of Technology Switzerland EPFL ETH Zurich United Kingdom Bournemouth University Cardiff University Imperial College London National Physical Laboratory University of Bradford University of Cambridge - Tribology Research Group University of Central Lancashire - Jost Institute for Tribotechnology University of Leeds - Institute of Functional Surfaces University of Leicester - Mechanics of Materials Research Group University of Loughborough - Dynamics Research Group University of Oxford - Solid Mechanics & Materials Engineering University of Sheffield - The Leonardo Tribology Centre University of Southampton - 
National Centre for Advanced Tribology (nCATs) University of Strathclyde United States Auburn University - Multiscale Tribology Laboratory and the Undergraduate Tribology Minor Georgia Institute of Technology - Tribology Research Group George Mason University - Tribology and Surface Mechanics (TSM) Group Gonzaga University - Tribology Research Laboratory Iowa State University - Convergent Manufacturing, Processing and Advanced Surface Science (CoMPASS) Laboratory Lehigh University - Surface Interfaces and Materials Tribology Laboratory Louisiana State University - Center for Rotating Machinery Massachusetts Institute of Technology - Sloan Automotive Laboratory Miami University - Ye Research Group Northwestern University - Center for Surface Engineering and Tribology Pennsylvania State University - Tribology/Materials Processing Lab Purdue University - Mechanical Engineering Tribology Laboratory Rice University - Additive Manufacturing, Performance and Tribology Center Rochester Institute of Technology - Tribology Laboratory Texas A&M University - Rotor Dynamics Laboratory Tribology Group University at Buffalo - Dynamics, Control and Mechatronics University of Akron - Timken Engineered Surfaces Laboratories University of California, Berkeley - Tribology Group University of California, Merced - Tribology Group University of Dayton - Tribology Group University of Delaware - Materials Tribology Laboratory University of Florida - Tribology Laboratory University of Illinois Urbana-Champaign - Tribology and Microtribodynamics Laboratory University of Pennsylvania - Carpick Group University of Texas at Arlington - Turbomachinery and Energy Systems Laboratory University of Utah - Nanotribology and Precision Engineering Laboratory University of Nevada Reno University of Wisconsin - Madison - Eriten Research Group See also Tribology Friction Wear Lubrication Tribology ar:قائمة منظمات تقنية النانو
List of tribology organizations
[ "Chemistry", "Materials_science", "Engineering" ]
981
[ "Tribology", "Mechanical engineering", "Materials science", "Surface science" ]
21,044,436
https://en.wikipedia.org/wiki/SIGraDi
The Sociedad Iberoamericana de Gráfica Digital, SIGraDi (Iberoamerican Society of Digital Graphics), gathers researchers, educators and professionals in architecture, urban design, communication design, product design and art whose work involves new digital media. It is a sister organization to ACADIA, eCAADe, CAADRIA and ASCAAD. SIGraDi organizes a yearly congress at which recent digital technologies and applications are presented and debated. Annual congress SIGraDi congresses are intended as a region-wide effort for the interchange of experiences, debate of the disciplines' advancements and the creation of references for the Iberoamerican groups involved in digital media applied to education, research and professional practice. SIGraDi Congresses The annual SIGraDi congress is the main event organised under the auspices of the association. It is organised by a member in good standing, who volunteers for the organisation. The organiser is supported by members of the International Executive Committee. Over the years, SIGraDi has developed the policy of rotating the conference location in such a way that the southern, central and northern areas of Latin America and Iberoamerica are reached regularly. In past years, the following SIGraDi Congresses have been organised. References Architectural design International non-profit organizations Computer-aided design software Information technology organizations based in South America
SIGraDi
[ "Engineering" ]
271
[ "Design", "Architectural design", "Architecture" ]
21,045,118
https://en.wikipedia.org/wiki/Haefliger%20structure
In mathematics, a Haefliger structure on a topological space is a generalization of a foliation of a manifold, introduced by André Haefliger in 1970. Any foliation on a manifold induces a special kind of Haefliger structure, which uniquely determines the foliation. Definition A codimension- Haefliger structure on a topological space consists of the following data: a cover of by open sets ; a collection of continuous maps ; for every , a diffeomorphism between open neighbourhoods of and with ; such that the continuous maps from to the sheaf of germs of local diffeomorphisms of satisfy the 1-cocycle condition for The cocycle is also called a Haefliger cocycle. More generally, , piecewise linear, analytic, and continuous Haefliger structures are defined by replacing sheaves of germs of smooth diffeomorphisms by the appropriate sheaves. Examples and constructions Pullbacks An advantage of Haefliger structures over foliations is that they are closed under pullbacks. More precisely, given a Haefliger structure on , defined by a Haefliger cocycle , and a continuous map , the pullback Haefliger structure on is defined by the open cover and the cocycle . As particular cases we obtain the following constructions: Given a Haefliger structure on and a subspace , the restriction of the Haefliger structure to is the pullback Haefliger structure with respect to the inclusion Given a Haefliger structure on and another space , the product of the Haefliger structure with is the pullback Haefliger structure with respect to the projection Foliations Recall that a codimension- foliation on a smooth manifold can be specified by a covering of by open sets , together with a submersion from each open set to , such that for each there is a map from to local diffeomorphisms with whenever is close enough to . The Haefliger cocycle is defined by germ of at u. As anticipated, foliations are not closed in general under pullbacks but Haefliger structures are. Indeed, given a continuous map , one can take pullbacks of foliations on provided that is transverse to the foliation, but if is not transverse the pullback can be a Haefliger structure that is not a foliation. Classifying space Two Haefliger structures on are called concordant if they are the restrictions of Haefliger structures on to and . There is a classifying space for codimension- Haefliger structures which has a universal Haefliger structure on it in the following sense. For any topological space and continuous map from to the pullback of the universal Haefliger structure is a Haefliger structure on . For well-behaved topological spaces this induces a 1:1 correspondence between homotopy classes of maps from to and concordance classes of Haefliger structures. References Differential geometry Smooth manifolds Topological spaces Structures on manifolds Foliations
Haefliger structure
[ "Mathematics" ]
651
[ "Topological spaces", "Mathematical structures", "Topology", "Space (mathematics)" ]
21,045,989
https://en.wikipedia.org/wiki/Principal%20indecomposable%20module
In mathematics, especially in the area of abstract algebra known as module theory, a principal indecomposable module has many important relations to the study of a ring's modules, especially its simple modules, projective modules, and indecomposable modules. Definition A (left) principal indecomposable module of a ring R is a (left) submodule of R that is a direct summand of R and is an indecomposable module. Alternatively, it is an indecomposable, projective, cyclic module. Principal indecomposable modules are also called PIMs for short. Relations The projective indecomposable modules over some rings have very close connections with those rings' simple, projective, and indecomposable modules. If the ring R is Artinian or even semiperfect, then R is a direct sum of principal indecomposable modules, and there is one isomorphism class of PIM per isomorphism class of simple module. To each PIM P is associated its head, P/JP, which is a simple module, being an indecomposable semi-simple module. To each simple module S is associated its projective cover P, which is a PIM, being an indecomposable, projective, cyclic module. Similarly over a semiperfect ring, every indecomposable projective module is a PIM, and every finitely generated projective module is a direct sum of PIMs. In the context of group algebras of finite groups over fields (which are semiperfect rings), the representation ring describes the indecomposable modules, and the modular characters of simple modules represent both a subring and a quotient ring. The representation ring over the complex field is usually better understood and since PIMs correspond to modules over the complexes using p-modular system, one can use PIMs to transfer information from the complex representation ring to the representation ring over a field of positive characteristic. Roughly speaking this is called block theory. Over a Dedekind domain that is not a PID, the ideal class group measures the difference between projective indecomposable modules and principal indecomposable modules: the projective indecomposable modules are exactly the (modules isomorphic to) nonzero ideals and the principal indecomposable modules are precisely the (modules isomorphic to) nonzero principal ideals. References Representation theory of finite groups Module theory
Principal indecomposable module
[ "Mathematics" ]
510
[ "Fields of abstract algebra", "Module theory" ]
21,047,750
https://en.wikipedia.org/wiki/Oxylipin
Oxylipins constitute a family of oxygenated natural products which are formed from fatty acids by pathways involving at least one step of dioxygen-dependent oxidation. These small polar lipid compounds are metabolites of polyunsaturated fatty acids (PUFAs), including omega-3 fatty acids and omega-6 fatty acids. Oxylipins are formed by enzymatic or non-enzymatic oxidation of PUFAs. In animal species, four main pathways of oxylipin production prevail: the lipoxygenase (LOX) pathway, the cyclooxygenase (COX) route, the cytochrome P450 (CYP) pathway, and the reactive oxygen species (ROS) route. These pathways result in the formation of many different oxylipin molecules which are important for a number of processes in living organisms. These processes include inflammation, blood flow, energy metabolism, cellular life, cell signaling, and muscle contraction. Oxylipins have both pro- and anti-inflammatory roles. Oxylipins are widespread in aerobic organisms including plants, animals and fungi. Many oxylipins have physiological significance. Typically, oxylipins are not stored in tissues but are formed on demand by liberation of precursor fatty acids from esterified forms. Biosynthesis Biosynthesis of oxylipins is initiated by dioxygenases or monooxygenases; non-enzymatic autoxidative processes also contribute to oxylipin formation (phytoprostanes, isoprostanes). Dioxygenases include lipoxygenases (plants, animals, fungi), heme-dependent fatty acid oxygenases (plants, fungi), and cyclooxygenases (animals). Fatty acid hydroperoxides or endoperoxides are formed by the action of these enzymes. Monooxygenases involved in oxylipin biosynthesis are members of the cytochrome P450 superfamily and can oxidize double bonds with epoxide formation, or saturated carbons to form alcohols. Nature has evolved numerous enzymes which metabolize oxylipins into secondary products, many of which possess strong biological activity. Of special importance are the cytochrome P450 enzymes in animals, including CYP5A1 (thromboxane synthase), CYP8A1 (prostacyclin synthase), and the CYP74 family of hydroperoxide-metabolizing enzymes in plants, lower animals and bacteria. In the plant and animal kingdoms, the C18 and C20 polyenoic fatty acids, respectively, are the major precursors of oxylipins. Structure and function Oxylipins in animals, referred to as eicosanoids (Greek icosa, twenty) because of their formation from twenty-carbon essential fatty acids, have potent and often opposing effects on e.g. smooth muscle (vasculature, myometrium) and blood platelets. Certain eicosanoids (leukotrienes B4 and C4) are proinflammatory, whereas others (resolvins, protectins) are anti-inflammatory and are involved in the resolution process which follows tissue injury. Plant oxylipins are mainly involved in the control of ontogenesis, reproductive processes and resistance to various microbial pathogens and other pests. Oxylipins most often act in an autocrine or paracrine manner, notably in targeting peroxisome proliferator-activated receptors (PPARs) to modify adipocyte formation and function. Most oxylipins in the body are derived from linoleic acid or alpha-linolenic acid. Linoleic acid oxylipins are usually present in blood and tissue in higher concentrations than any other PUFA oxylipin, despite the fact that alpha-linolenic acid is more readily metabolized to oxylipins. Linoleic acid oxylipins can be anti-inflammatory, but are more often pro-inflammatory, associated with atherosclerosis, non-alcoholic fatty liver disease, and Alzheimer's disease. 
Centenarians have shown reduced levels of linoleic acid oxylipins in their blood circulation. Lowering dietary linoleic acid results in fewer linoleic acid oxylipins in humans. From 1955 to 2005 the linoleic acid content of human adipose tissue has risen an estimated 136% in the United States. In general, oxylipins derived from omega-6 fatty acids are more pro-inflammatory, vasoconstrictive, and proliferative than those derived from omega-3 fatty acids. The omega-3 eicosapentaenoic acid (EPA)-derived and docosahexaenoic acid (DHA)-derived oxylipins are anti-inflammatory and vasodilatory. In a clinical trial of men with high triglycerides, 3 grams daily of DHA compared with placebo (olive oil) given for 91 days nearly tripled the DHA in red blood cells while reducing oxylipins in those cells. Both groups were given Vitamin C (ascorbyl palmitate) and Vitamin E (mixed tocopherol) supplements. Oxylipins and disease Oxylipins play important role in many diseases, for example, diabetes, obesity, cardiovascular diseases, cancer, COVID-19, or neurodegenerative disorders. Changes in oxylipin metabolism have been reported in these diseases. In 2021, Alzheimer's disease was associated with changes in oxylipin levels in plasma and cerebrospinal fluid (CSF) for the first time. Interestingly, improvement in neurodegenerative diseases and also cardiovascular diseases may be achieved by using inhibitors of an enzyme (soluble epoxide hydrolase) involved in formation of oxylipins. In Parkinson's disease, oxylipin profiles reflect the stage of the disease. This should be taken into consideration when choosing the suitable medication for Parkinson's disease. References Lipids
Oxylipin
[ "Chemistry" ]
1,282
[ "Organic compounds", "Biomolecules by chemical classification", "Lipids" ]
21,051,205
https://en.wikipedia.org/wiki/Perfectly%20orderable%20graph
In graph theory, a perfectly orderable graph is a graph whose vertices can be ordered in such a way that a greedy coloring algorithm with that ordering optimally colors every induced subgraph of the given graph. Perfectly orderable graphs form a special case of the perfect graphs, and they include the chordal graphs, comparability graphs, and distance-hereditary graphs. However, testing whether a graph is perfectly orderable is NP-complete. Definition The greedy coloring algorithm, when applied to a given ordering of the vertices of a graph G, considers the vertices of the graph in sequence and assigns each vertex its first available color, the minimum excluded value for the set of colors used by its neighbors. Different vertex orderings may lead this algorithm to use different numbers of colors. There is always an ordering that leads to an optimal coloring – this is true, for instance, of the ordering determined from an optimal coloring by sorting the vertices by their color – but it may be difficult to find. The perfectly orderable graphs are defined to be the graphs for which there is an ordering that is optimal for the greedy algorithm not just for the graph itself, but for all of its induced subgraphs. More formally, a graph G is said to be perfectly orderable if there exists an ordering π of the vertices of G, such that every induced subgraph of G is optimally colored by the greedy algorithm using the subsequence of π induced by the vertices of the subgraph. An ordering π has this property exactly when there do not exist four vertices a, b, c, and d for which abcd is an induced path, a appears before b in the ordering, and c appears after d in the ordering. Computational complexity Perfectly orderable graphs are NP-complete to recognize. However, it is easy to test whether a particular ordering is a perfect ordering of a graph. Consequently, it is also NP-hard to find a perfect ordering of a graph, even if the graph is already known to be perfectly orderable. Related graph classes Every perfectly orderable graph is a perfect graph. Chordal graphs are perfectly orderable; a perfect ordering of a chordal graph may be found by reversing a perfect elimination ordering for the graph. Thus, applying greedy coloring to a perfect ordering provides an efficient algorithm for optimally coloring chordal graphs. Comparability graphs are also perfectly orderable, with a perfect ordering being given by a topological ordering of a transitive orientation of the graph. The complement graphs of tolerance graphs are perfectly orderable. Another class of perfectly orderable graphs is given by the graphs G such that, in every subset of five vertices from G, at least one of the five has a closed neighborhood that is a subset of (or equal to) the closed neighborhood of another of the five vertices. Equivalently, these are the graphs in which the partial order of closed neighborhoods, ordered by set inclusion, has width at most four. The 5-vertex cycle graph has a neighborhood partial order of width five, so four is the maximum width that ensures perfect orderability. As with the chordal graphs (and unlike the perfectly orderable graphs more generally) the graphs with width four are recognizable in polynomial time. 
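The two ingredients above, greedy colouring along a fixed order and the four-vertex obstruction that characterises perfect orderings, are small enough to sketch directly. The Python code below is only an illustrative brute-force sketch (the obstruction check enumerates all ordered four-tuples, so it is far from efficient); graphs are given as plain adjacency dictionaries.

```python
def greedy_coloring(adj, order):
    """Colour vertices in the given order with the smallest available colour."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

def violates_perfect_order(adj, order):
    """True if some induced path a-b-c-d has a before b and d before c."""
    pos = {v: i for i, v in enumerate(order)}
    for a in adj:
        for b in adj[a]:
            for c in adj[b]:
                if c == a or c in adj[a]:
                    continue
                for d in adj[c]:
                    if d in (a, b) or d in adj[a] or d in adj[b]:
                        continue
                    # a-b-c-d is an induced P4; test the forbidden pattern
                    if pos[a] < pos[b] and pos[d] < pos[c]:
                        return True
    return False

# Path on four vertices.  Ordering it along the path is a perfect ordering,
# but putting both endpoints first creates the forbidden pattern and makes
# the greedy algorithm waste a colour.
P4 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(greedy_coloring(P4, [1, 2, 3, 4]))         # 2 colours (optimal)
print(violates_perfect_order(P4, [1, 2, 3, 4]))  # False
print(greedy_coloring(P4, [1, 4, 2, 3]))         # 3 colours (suboptimal)
print(violates_perfect_order(P4, [1, 4, 2, 3]))  # True
```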
A concept intermediate between the perfect elimination ordering of a chordal graph and a perfect ordering is a semiperfect elimination ordering: in an elimination ordering, there is no three-vertex induced path in which the middle vertex is the first of the three to be eliminated, and in a semiperfect elimination ordering, there is no four-vertex induced path in which one of the two middle vertices is the first to be eliminated. The reverse of this ordering therefore satisfies the requirements of a perfect ordering, so graphs with semiperfect elimination orderings are perfectly orderable. In particular, the same lexicographic breadth-first search algorithm used to find perfect elimination orders of chordal graphs can be used to find semiperfect elimination orders of distance-hereditary graphs, which are therefore also perfectly orderable. The graphs for which every vertex ordering is a perfect ordering are the cographs. Because cographs are the graphs with no four-vertex induced path, they cannot violate the path-ordering requirement on a perfect ordering. Several additional classes of perfectly orderable graphs are known. Notes References . As cited by . . . As cited by . . . . . . . Graph coloring Perfect graphs NP-complete problems
Perfectly orderable graph
[ "Mathematics" ]
892
[ "Graph coloring", "Graph theory", "Computational problems", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
21,054,515
https://en.wikipedia.org/wiki/Drug%20reference%20standard
A drug reference standard or pharmaceutical reference standard is a highly characterized material suitable to test the identity, strength, quality and purity of substances for pharmaceutical use and medicinal products. Pharmacopoeial reference standards Pharmacopoeial reference standards are a subset of pharmaceutical reference standards. They are established for the intended use described in pharmacopeial texts (monographs and general chapters). Pharmacopeial reference standards are available from various pharmacopoeias such as United States Pharmacopeia and the European Pharmacopoeia. Where pharmacopoeial tests or assays call for the use of a pharmacopoeial reference standard, only those results obtained using the specified pharmacopoeial reference standard are conclusive. See also Standard (metrology) Pharmacopoeia References External links European Pharmacopoeia United States Pharmacopeia Reference Standards of the European Pharmacopoeia Reference Standards of the United States Pharmacopeia US National Institute of Standards and Technology EDQM Reference Standards Training Resources Drugs Health standards
Drug reference standard
[ "Chemistry" ]
228
[ "Pharmacology", "Chemicals in medicine", "Drugs", "Products of chemical industry" ]
21,054,852
https://en.wikipedia.org/wiki/Anwell%20Technologies
Anwell Technologies Limited was a Hong Kong multinational manufacturing company. Founded in 2000, the company initially designed machines that mass-produced optical discs, but later began manufacturing thin-film solar cells and organic light-emitting diodes (OLEDs) as well. The company was listed on the Singapore Exchange in 2004, but delisted in 2019 as the company shut down its operations. History Anwell was founded in 2000 by chairman and CEO Fan Kai Leung (), known as Franky Fan, and five other engineering partners with initial capital of US$100,000. In 2004, the company was listed on the mainboard of the Singapore Stock Exchange. In September 2009, Anwell produced its first thin-film solar cell at their production plant located in Anyang, Henan, China. The following month, Anwell's wholly owned subsidiary Sungen signed a memorandum of understanding with American energy company Solargen to supply solar panels for their solar farm projects. In 2011, Anwell received a total of RMB 800 million in funding from the municipal government of Dongguan for the construction of a second manufacturing base in the city, as well as RMB 700 million increase production capacity at its existing plant in Anyang. In February 2012, Anwell secured its first engineering, procurement, and construction contract for a solar power plant in Thailand, in a deal worth US$25 million. Legal issues and closure In November 2017, Anwell's judicial managers RSM Corporate Advisory announced that Anwell's Chinese subsidiary, Dongguan Anwell Digital Machinery, as well as Anwell CEO Fan, executive director Wu Wai Kin (known as Ken Wu), and group financial controller Kwong Chi Kit (known as Victor Kwong) were found guilty of fraud by Chinese courts. Dongguan Anwell was ordered to pay a total of RMB 1.2 billion in fines and other payments; Fan was sentenced to life imprisonment and a seizure of personal assets worth up to RMB 5 million, while Wu and Kwong were fined RMB 4 million each and sentenced to 20 and 19 years' imprisonment, respectively. In March 2018, the Singapore High Court granted an application for the company to shut down its operations and begin the process of liquidation. The company applied to delist from the Singapore Exchange in January 2019. References External links Official website (archived 4 October 2017) Companies formerly listed on the Singapore Exchange Thin-film cells Solar energy companies Thin-film cell manufacturers Photovoltaics manufacturers Engineering companies of Hong Kong Hong Kong brands
Anwell Technologies
[ "Materials_science", "Mathematics", "Engineering" ]
511
[ "Thin-film cells", "Engineering companies", "Photovoltaics manufacturers", "Planes (geometry)", "Thin films" ]
19,941,455
https://en.wikipedia.org/wiki/Depth%E2%80%93slope%20product
The depth–slope product is used to calculate the shear stress at the bed of an open channel containing fluid that is undergoing steady, uniform flow. It is widely used in river engineering, stream restoration, sedimentology, and fluvial geomorphology. It is the product of the water depth and the mean bed slope, along with the acceleration due to gravity and density of the fluid. Formulation The use of the depth–slope product in computing the bed shear stress rests on two assumptions that are widely applicable to natural river channels: that the angle of the channel from horizontal is small enough that it can be approximated as the slope by the small-angle formula, and that the channel is much wider than it is deep, so that sidewall effects can be ignored. Although it is a simplistic approach to finding the shear stress in what can often be a locally unsteady fluvial system, when averaged over distances of kilometers these local variations average out and the depth–slope product becomes a useful tool for understanding shear stress in open channels such as rivers. Depth and hydraulic radius The first assumption is that the channel is much wider than it is deep, and the equations can be solved as if the channel were infinitely wide. This means that side-wall effects can be ignored, and that the hydraulic radius, R_h = A/P, can be assumed to be equal to the channel depth, h, where A is the cross-sectional area of flow and P is the wetted perimeter. For a semicircular channel, the hydraulic radius would simply be the true radius. For an approximately rectangular channel (for simplicity in the mathematics of the explanation of the assumption), A = bh and P = b + 2h, where b is the width (breadth) of the channel, so that R_h = bh/(b + 2h). For b >> h, b + 2h ≈ b, and therefore R_h ≈ h. Formally, this assumption can generally be held to hold when the width is greater than about 20 times the height; the exact amount of error accrued can be found by comparing the height to the hydraulic radius. For channels with a lower width-to-depth ratio, a better solution can be found by using the hydraulic radius instead of the above simplification. Pressure The total stress on the bed of an open channel of infinite width is given by the hydrostatic pressure acting on the bed. For a fluid of density ρ, an acceleration due to gravity g, and a flow depth h, the pressure exerted on the bed is simply the weight per unit volume of an element of fluid, ρg, times the depth of the flow, h. From this, we get the expression for the total pressure, P_bed = ρgh, acting on the bed. Shear stress In order to convert the pressure into a shear stress, it is necessary to determine the component of the pressure that provides shear on the bed. For a channel that is at an angle α from horizontal, the shear component of the stress acting on the bed, τ_b, which is the component acting tangentially to the bed, equals the total pressure times the sine of the angle: τ_b = ρgh sin(α). In natural rivers, the angle α is typically very small. As a result, the small-angle formula gives sin(α) ≈ tan(α). The tangent of the angle is, by definition, equal to the slope of the channel, S. From this, we arrive at the final form of the relation between bed shear stress and depth–slope product: τ_b = ρghS. Scaling Assuming a single, well-mixed, homogeneous fluid and a single acceleration due to gravity (both are good assumptions in natural rivers, and the second is a good assumption for processes on Earth, or any planetary body with a dominant influence on the local gravitational field), the only two variables that determine the boundary shear stress are the depth and the slope.
This is the significance of the name of the formula. For natural streams, in the mks or SI system (units of pascals for shear stress), a typical useful relationship to remember is that τ_b ≈ 10,000·h·S pascals (with h in metres), for water with a density of 1000 kg/m3 and approximating the acceleration due to gravity as 10 m/s2 (the error in this assumption is typically much smaller than the error from measurements). Uses Bed shear stress can be used to find: The vertical velocity profile within the fluid flow The ability of the fluid to carry sediment The rate of shear dispersion of contaminants and tracers. See also Sediment Fluid mechanics References Leopold, Wolman, and Miller (1964), Fluvial Processes in Geomorphology, Dover Publications, Mineola, NY, USA, 535 pp. Fluid dynamics Geological techniques Civil engineering Environmental engineering
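The relationship above is straightforward to apply numerically. Below is a minimal Python sketch, where the function name and example values are illustrative and not taken from the article, that computes bed shear stress from depth and slope and compares the full expression with the 10,000·h·S rule of thumb.

```python
# Bed shear stress from the depth-slope product: tau_b = rho * g * h * S
# Minimal sketch; the depth and slope below are illustrative examples.

RHO_WATER = 1000.0   # fluid density, kg/m^3
G = 9.81             # acceleration due to gravity, m/s^2

def bed_shear_stress(depth_m: float, slope: float,
                     rho: float = RHO_WATER, g: float = G) -> float:
    """Return boundary shear stress in pascals for steady, uniform flow
    in a wide channel (hydraulic radius ~ depth, small-angle slope)."""
    return rho * g * depth_m * slope

if __name__ == "__main__":
    h, S = 2.0, 0.001                   # 2 m deep river with a 0.1% slope (example)
    tau = bed_shear_stress(h, S)
    rule_of_thumb = 10_000 * h * S      # rho = 1000 kg/m^3, g ~ 10 m/s^2
    print(f"tau_b = {tau:.1f} Pa (rule of thumb: {rule_of_thumb:.1f} Pa)")
```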
Depth–slope product
[ "Chemistry", "Engineering" ]
901
[ "Chemical engineering", "Construction", "Civil engineering", "Environmental engineering", "Piping", "Fluid dynamics" ]
19,947,748
https://en.wikipedia.org/wiki/Reid%20vapor%20pressure
Reid vapor pressure (RVP) is a common measure of the volatility of gasoline and other petroleum products. It is defined as the absolute vapor pressure exerted by the vapor of the liquid and any dissolved gases/moisture at 37.8 °C (100 °F) as determined by the test method ASTM-D-323, which was first developed in 1930 and has been revised several times (the latest version is ASTM D323-15a). The test method measures the vapor pressure of gasoline, volatile crude oil, jet fuels, naphtha, and other volatile petroleum products but is not applicable for liquefied petroleum gases. ASTM D323-15a requires that the sample be chilled to 0 to 1 degrees Celsius and then poured into the apparatus; for any material that solidifies at this temperature, this step cannot be performed. RVP is commonly reported in kilopascals (kPa) or pounds per square inch (psi) and represents volatization at atmospheric pressure because ASTM-D-323 measures the gauge pressure of the sample in a non-evacuated chamber. The matter of vapor pressure is important relating to the function and operation of gasoline-powered, especially carbureted, vehicles and is also important for many other reasons. High levels of vaporization are desirable for winter starting and operation and lower levels are desirable in avoiding vapor lock during summer heat. Fuel cannot be pumped when there is vapor in the fuel line (summer) and winter starting will be more difficult when liquid gasoline in the combustion chambers has not vaporized. Thus, oil refineries manipulate the Reid vapor pressure seasonally specifically to maintain gasoline engine reliability. The Reid vapor pressure (RVP) can differ substantially from the true vapor pressure (TVP) of a liquid mixture, since (1) RVP is the vapor pressure measured at 37.8 °C (100 °F) and the TVP is a function of the temperature; (2) RVP is defined as being measured at a vapor-to-liquid ratio of 4:1, whereas the TVP of mixtures can depend on the actual vapor-to-liquid ratio; (3) RVP will include the pressure associated with the presence of dissolved water and air in the sample (which is excluded by some but not all definitions of TVP); and (4) the RVP method is applied to a sample which has had the opportunity to volatilize somewhat prior to measurement: i.e., the sample container is required to be only 70-80% full of liquid (so that whatever volatilizes into the container headspace is lost prior to analysis); the sample then again volatilizes into the headspace of the D323 test chamber before it is heated to 37.8 degrees Celsius. See also Crude oil assay Gasoline volatility Vapor pressure External links ASTM D323 - 06 Standard Test Method for Vapor Pressure of Petroleum Products (Reid Method) Reid Vapor Pressure Requirements for Ethanol Congressional Research Service USA's Environmental Protection Agency (EPA) publication AP-42, Compilation of Air Pollutant Emissions. Chapter 7 (RVP is a parameter in the estimation of petroleum tank evaporative losses) References Chemical properties Physical chemistry Engineering thermodynamics Natural gas Oil refining Petroleum production
Reid vapor pressure
[ "Physics", "Chemistry", "Engineering" ]
678
[ "Applied and interdisciplinary physics", "Engineering thermodynamics", "Petroleum technology", "Thermodynamics", "Oil refining", "nan", "Mechanical engineering", "Physical chemistry" ]
19,949,953
https://en.wikipedia.org/wiki/Water%20power%20engine
A water power engine includes prime movers driven by water, which may be classified under three categories: Water pressure motors, having a piston and cylinder with inlet and outlet valves: their action is analogous to that of a steam or gas engine, with water as the working fluid – see water engine Water wheels Turbines, deriving their energy either from a high-velocity jet or jets (the impulse type), or from water supplied under pressure and passing through the vanes of a runner which is thereby caused to rotate (the reaction type) Hydro power is generated when the natural force of the water's current moves a device (fan, propeller, wheel). Ordinary water weighs about 8.34 lbs per US gallon (1 kg per liter). The force makes the turbine mechanism spin, generating electricity; as long as there is flow, it is possible to produce electricity. The advantage of electricity generated in this way is that it comes from a renewable resource. A small-scale micro hydro installation can be a reliable and long-lasting piece of technology. The disadvantage of the system is that the technology remains comparatively undeveloped. Stanley Meyer As gasoline prices continued to soar, the prolific inventor Stanley Meyer worked on what he claimed was a solution that would cut the cost of fueling cars and help the planet. Worries over the supply and price of vehicle fuel would, he argued, become a distant memory if he could make his invention work for all vehicles. Meyer converted a dune buggy's fuel system into one that, he claimed, used water in place of gasoline, with the idea that ordinary cars could be altered to accommodate such water-powered engines. On June 24, 1992, Meyer applied to have the work patented. He already held a number of patents, including Process and apparatus for the production of fuel gas and the enhanced release of thermal energy from such gas, Method for the production of a fuel gas, Controlled process for the production of thermal energy from gases and apparatus useful therefore, Electrical pulse generator, Gas electrical hydrogen generator, Start-up/shut-down for a hydrogen gas burner, Light-guide lens, Solar heating system, and Multi-stage solar storage system. Two years later, on March 15, 1994, Meyer was granted a patent for Hydrogen gas fuel and management system for an internal combustion engine using hydrogen gas fuel, patent number 5,293,857. The invention described a 2:1 mixture of hydrogen to oxygen, with the density of the hydrogen component regulated so that the burn rate of the mixture approximates that of a fossil fuel, together with a system for maintaining this gas fuel mixture and its characteristics in an internal combustion engine. According to Meyer, the converted dune buggy could run 100 miles per gallon of water, which he attributed to a process called electrolysis. Investors and the courts, however, concluded that Meyer and his inventions were fraudulent: two years later Meyer was accused of fraud and ordered to repay his investors, on the grounds that he had not invented anything new or useful and that the device was a simple application of electrolysis. It was also objected that water cannot practically serve as a fuel for cars. Normal water electrolysis requires the passage of current measured in amps, whereas Meyer claimed his cell achieved the same effect in milliamps.
Furthermore ordinary tap water requires the addition of an electrolyte such as sulphuric acid to aid current conduction; Meyer's cell functions at greatest efficiency with pure water. Pros of water-power engines and hydro power Water-powered engines and hydro power can have many advantages in a society that relies mostly on non-renewable resources such as oil and coal. Water covers an estimated 71 percent of the Earth's surface. In conjunction with normal weather patterns such as evaporation and precipitation, water is a natural renewable resource that is in abundance on Earth. Hydroelectric power has been a popular method of energy dating back to the late 19th century. The main advantage of using hydropower is that it is a clean form of energy, otherwise known as "green" energy. Since the process of using waterpower does not require burning fossil fuels, it is more environmentally friendly. Fewer Greenhouse gasses are emitted into the atmosphere contributing to climate change, lower levels of smog in large cities, and a lesser chance of acid rain taking place. In the current economy, fossil fuels account for most competitive markets between big businesses. This leads to a constant fluctuation in economic prices being considerably high or low depending on supply and demand. Unlike fossil fuels which are non-renewable, rivers, lakes, and ocean water are an infinite resource. Dams are a product of the water-power engine and provide consistent energy to nearby populated areas. Murray 1 and 2 Hydro Electric Power Stations and the Tumut 3 Hydroelectric Power Station in Australia is responsible for generating between 550 megawatts and 1,800 megawatts of electricity. The water powered turbines used in these dams need little maintenance, are easily upgradable with modern technology, and have a lifespan of 50–100 years. Clean energy created by hydro power plants attracts positive results in otherwise remote areas. It enhances commerce and gives rise to more industry. The overall education improves in these areas as well as healthcare. Dams that run on hydro powered engines create lakes that attract tourists and boosts the economy in those areas. Such as the Hoover Dam which attracts 7 million tourists every year. The advantages of using hydro power and controlling water flow also has irrigation benefits. In areas that have less rainfall, such as Arizona and Nevada, the ability to control the waterpower engine's water consumption saves water during dry seasons making the region less reliant on natural rainfall. Cons of water-power engines and hydro power Although beneficial in the long run, water powered engines require materials that require a high financial price. Hydro powered dams require less maintenance once fully constructed, however the time it takes to earn its revenue back may take almost its whole lifespan. Some water powered engines and the plants associated with them can emit large amounts of methane and carbon dioxide into the atmosphere. This is mainly due to the surrounding reservoirs that have stagnant water where over long periods of time plants and other biological material decomposes and produces environmentally harmful pollutants into the air. Recent inventions Currently in Israel, MayMaan Research, LLC, has developed a powerful piston engine that runs on a combination of water and ethanol (or another alcohol) and does not require the use of diesel or regular gasoline. The water powered piston engine eliminates both nitrogen and sulfur oxides that create harmful air quality. 
They estimate that it is 60 percent more efficient than gasoline and can save on 50 percent of fuel costs. They plan to not only focus on automobiles but overall transportation such as ships, trains, and large trucks. The mission that MayMaan Research and its founders are trying to accomplish is to eliminate the dependency on fossil fuels worldwide and to create a greener environment for all. Ten examples of relevant inventions: Helicoid penstocks – Similar to a rifle barrel, etched spiral grooves on the inside, rushed water flows through the helicoid penstock and starts to spin then the pipes flows the water directly on an electric turbine which then improves the turbines performance. Fish ladders – The Thompson Falls hydroelectric plant in Montana houses the most technologically advanced fish ladder in the United States of America. Hydrosphere – A hydroelectric generator which uses the intense pressure differentials in the deeper ends of lakes and or oceans. The hydrosphere's inventor Rick Dickson states it can generate up to 500 megawatts of continuous renewable energy. Air–water gravity generator – Another invention from Rick Dickson, believed to be the hydro plant of the future. Pressured water is let into the Air – Water- Gravity generator which generates power by entering a vacuum chamber which then forces a piston to climb a stator. Electricity is generated at that point. Wave power – Less known about waves, they are considered to produce hydrokinetic energy as kinetic energy is in the movement of the waves crashing against the shores and rocks.  Scientists believe if they could extract 15 percent of energy it could generate as much electricity as all the hydroelectric dams in the nation. Tidal power – Again the powers of the water are at work here. It is noted that the water can carry immense power and pressure therefore can be used in several ways that benefit us and in this case we can use it to generate electricity. Resembling a lawn mower better known as a under sea windmill the turbines rotate when tides rush in and out which generates power and can give power up to 30 homes. River power – This happens to be an idea that can produce energy but also preserves wildlife as it does not require the alterations and constructions like dams and fish ladders. It would consist of modules- turbines, stabilizer, mooring system, and energy conversion systems. Again, water flows through the turbines which then allows the river's energy to be collected and drives a generator. The river's energy can generate 50 kilowatts with a water speed of 4 knots. With this system in place it does not disrupt the natural livelihood of fish and river traffic. Vortex power – VIVACE – Vortex Induced Vibration for Aquatic Clean Energy. This system is based on fish movement and their use of pushing their bodies off vortices to move forward. This invention can capture energy on slow moving rivers. Pipe power – A newer invention to capture energy through water, this system uses municipal pipes. Created by Leviathan. Benkatina turbine works off water that flows through enclosed pipes, sewer pipes, canals, and pipes used to remove wastewater from factories.  Making a splash – Fzulton Innovation has created Lilliputian hydroelectric technologies . This system allows one to power their home appliances with the use of the water from their own faucets. It is noted it can also create emergency lighting and can charge batteries. 
This seems to be a useful invention for families who live in areas frequently affected by severe weather and power outages. Many inventions involve water; the challenge is finding ones that are useful to people broadly and help curb poverty, as well as curb the negative impacts inventions have on the natural world and wildlife. Hoaxes There have been a number of hoaxes claiming the invention of water-powered engines. No water-powered engine has ever been successfully demonstrated to work as claimed. Conspiracy theorists believe that there is a global suppression effort surrounding the idea of a successful water fuel cell or fully water-powered engine. This stems from the idea that large oil companies, which control most of the revenue related to gas and fossil fuels, do not want water-fueled technology to displace current gasoline- and electricity-reliant vehicles. Such technology would not only offer a cheaper, cleaner, and more efficient engine but would also eventually make oil companies obsolete. The uncertainty surrounding events such as these is what fuels conspiracy theorists to continue to support the claim that clean energy technology is being suppressed by unknown entities. Stanley Meyer's water fuel cell The idea of a water-powered car has been around since Stanley Meyer's "water fuel cell" made it popular in the late 20th century. However, he was met with pushback from an Ohio court, which ruled that such an automobile could not possibly work. Meyer abruptly died in 1998 while eating at a restaurant. According to his brother, Meyer ran out of the restaurant screaming "They've poisoned me" before collapsing and dying. Because of this account, some believe that Meyer was poisoned by those trying to dismantle the idea of clean energy, especially at a time when the crude oil industry was booming. This was never shown to be true, as an autopsy revealed that Meyer died from a cerebral aneurysm and not from poison. Genepax One such event that raised eyebrows was the company Genepax and its "water powered car". In 2008, the Japan-based company unveiled a working concept car that it claimed ran solely on air and water, using a special water energy system and a membrane electrode assembly (MEA). Combined, these two technologies were said to split water into hydrogen and oxygen through a chemical reaction. The design was presented as bringing water-powered technology within reach as a future source of clean energy. Genepax, however, suddenly shut its doors for good without notice the following year. For a company that made many public appearances showcasing the supposed future of automobiles, many found it strange that it ended so suddenly, and conspiracy theorists group the closure with the sudden death of Stanley Meyer as evidence of suppression. See also Water engine Water-fuelled car Water-returning engine Water turbine Working fluid Rankine cycle Hydroelectricity References Engines Hydraulics Hydraulic engineering Sustainable technologies
Water power engine
[ "Physics", "Chemistry", "Technology", "Engineering", "Environmental_science" ]
2,626
[ "Machines", "Hydrology", "Engines", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering", "Fluid dynamics" ]
19,950,193
https://en.wikipedia.org/wiki/Hydraulic%20redistribution
Hydraulic redistribution is a passive mechanism where water is transported from moist to dry soils via subterranean networks. It occurs in vascular plants that commonly have roots in both wet and dry soils, especially plants with both taproots that grow vertically down to the water table, and lateral roots that sit close to the surface. In the late 1980s, there was a movement to understand the full extent of these subterranean networks. Since then it was found that vascular plants are assisted by fungal networks which grow on the root system to promote water redistribution. Process Hot, dry periods, when the surface soil dries out to the extent that the lateral roots exude whatever water they contain, will result in the death of such lateral roots unless the water is replaced. Similarly, under extremely wet conditions when lateral roots are inundated by flood waters, oxygen deprivation will also lead to root peril. In plants that exhibit hydraulic redistribution, there are xylem pathways from the taproots to the laterals, such that the absence or abundance of water at the laterals creates a pressure potential analogous to that of transpirational pull. In drought conditions, ground water is drawn up through the taproot to the laterals and exuded into the surface soil, replenishing that which was lost. Under flooding conditions, plant roots perform a similar function in the opposite direction. Though often referred to as hydraulic lift, movement of water by the plant roots has been shown to occur in any direction. This phenomenon has been documented in over sixty plant species spanning a variety of plant types (from herbs and grasses to shrubs and trees) and over a range of environmental conditions (from the Kalahari Desert to the Amazon Rainforest). Causes The movement of this water can be explained by a water transport theory throughout a plant. This well-established water transport theory is called the cohesion-tension theory. In brief, it explains the movement of water throughout the plant depends on having a continuous column of water, from the leaves to roots. Water is then pulled up from the roots to the leaves moving throughout the plant's vascular system, all facilitated by the differences in water potential in the boundary layers of the soil and the atmosphere. Therefore, the driving force for moving water through a plant is the cohesive strength of water molecules and a pressure gradient from the roots to the leaves. This theory is still applied when the boundary layer to the atmosphere is closed, e.g. when plant stomata are closed or in senesced plants. The pressure gradient is developed between soil layers with different water potentials causing water to move by the roots from wetter to drier soil layers in a similar manner as when a plant is transpiring. Fungal associations It has been understood that hydraulic lift aids the host plant and its neighboring plants in the transportation of water and other vital nutrients. At that time, the hydraulic lift described as the movement of water and soil nutrients from a vascularized host into the soil during at night mostly. Then after studies in the 2000s, a more comprehensive word was taken into consideration where it described a bi-directional and passive movement exhibited by the plant roots and further assisted by mycorrhizal networks. A 2015 study then described a "direct transfer of hydraulically redistributed water" between the host and fungi into the surrounding root system. 
As mentioned, hydraulic redistribution transports not only water but nutrients as well. The fungi most likely to form water and nutrient networks are ectomycorrhizae and arbuscular mycorrhizae. Significance The ecological importance of hydraulically redistributed water is becoming better understood as the phenomenon is more carefully examined. Water redistribution by plant roots has been found to influence crop irrigation, where watering schemes leave strong heterogeneity in soil moisture. The process also assists in seedling success. Plant roots have been shown to smooth, or homogenize, soil moisture, and this smoothing is important in maintaining plant root health. The redistribution of water from deep moist layers to shallow drier layers by large trees has been shown to increase the moisture available in the daytime to meet transpiration demand. Hydraulic redistribution thus appears to have an important influence on plant ecosystems. Whether or not plants redistribute water through the soil layers can affect plant population dynamics, such as the facilitation of neighboring species. The increase in available daytime soil moisture can also offset low transpiration rates due to drought (see also drought rhizogenesis) or alleviate competition for water between competing plant species. Water redistributed to the near-surface layers may also influence plant nutrient availability. Observations and modeling Because of the ecological significance of hydraulically redistributed water, there is an ongoing effort to categorize the plants exhibiting this behaviour and to incorporate the physiological process into land-surface models to improve model predictions. Traditional methods of observing hydraulic redistribution include deuterium isotope tracing, sap flow measurements, and soil moisture measurements. In attempts to characterize the magnitude of the water redistributed, numerous models (both empirically and theoretically based) have been developed. See also Cohesion tension theory Evapotranspiration Mycorrhizal network Soil plant atmosphere continuum Water potential References Further reading Plant physiology Hydrology Plant roots Ecological processes Soil physics Water and the environment
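The passive, gradient-driven nature of the process described above can be illustrated with a toy numerical model. The sketch below, in which all parameter names and values are illustrative assumptions rather than measurements from the literature, moves water between a wet and a dry soil layer through a simple root "conductance", so that flow reverses direction automatically when the water-potential gradient reverses.

```python
# Toy model of hydraulic redistribution: water flows passively through roots
# from the soil layer with higher (less negative) water potential to the layer
# with lower water potential. Conductance and potentials are illustrative.

def root_flow(psi_source_mpa: float, psi_sink_mpa: float,
              conductance: float = 1e-7) -> float:
    """Volumetric flow (m^3/s) from layer 1 to layer 2; negative means reversed.
    Linear conductance model, analogous to Ohm's law, for water potential."""
    return conductance * (psi_source_mpa - psi_sink_mpa)

if __name__ == "__main__":
    # Night-time with a dry surface: the deep layer is wetter than the shallow layer.
    deep, shallow = -0.1, -1.5          # water potentials in MPa (illustrative)
    print("upward flow (hydraulic lift):", root_flow(deep, shallow))
    # After rain or irrigation the gradient reverses, and so does the flow.
    deep, shallow = -0.8, -0.05
    print("downward flow (inverse lift):", root_flow(deep, shallow))
```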
Hydraulic redistribution
[ "Physics", "Chemistry", "Engineering", "Biology", "Environmental_science" ]
1,084
[ "Plant physiology", "Physical phenomena", "Earth phenomena", "Applied and interdisciplinary physics", "Hydrology", "Plants", "Soil physics", "Ecological processes", "Environmental engineering" ]
19,953,006
https://en.wikipedia.org/wiki/Plasmonic%20nanolithography
Plasmonic nanolithography (also known as plasmonic lithography or plasmonic photolithography) is a nanolithographic process that utilizes surface plasmon excitations such as surface plasmon polaritons (SPPs) to fabricate nanoscale structures. SPPs, which are surface waves that propagate along planar dielectric–metal interfaces in the optical regime, can bypass the diffraction limit on the optical resolution that acts as a bottleneck for conventional photolithography. Theory Surface plasmon polaritons are surface electromagnetic waves that propagate along the interface between two media whose permittivities have opposite signs. They originate from coupling of photons to plasma oscillations, quantized as plasmons. SPPs result in evanescent fields that decay perpendicularly to the interface along which the propagation occurs. The dispersion relation for SPPs permits the excitation of wavelengths shorter than the free-space wavelength of the inbound light, additionally ensuring subwavelength field confinement. Exciting SPPs with free-space light, however, requires compensating for the momentum mismatch between the photon and the SPP; prism and grating coupling methods are commonly used for this purpose. For plasmonic nanolithography processes, this is achieved through surface roughness and perforations. Methods Plasmonic contact lithography, a modification of evanescent near-field lithography, uses a metal photomask on which the SPPs are excited. Similar to common photolithographic processes, photoresist is exposed to SPPs that propagate from the mask. Photomasks with holes enable grating coupling of SPPs; the resulting fields extend only nanometers from the mask. Srituravanich et al. have demonstrated the lithographic process experimentally with a 2D silver hole array mask; 90 nm hole arrays were produced at 365 nm wavelength, which is beyond the diffraction limit. Zayats and Smolyaninov utilized a multi-layered metal film mask to enhance the subwavelength aperture; such structures can be realized by thin film deposition methods. Bowtie apertures and nanogaps were also suggested as alternative apertures. A version of the method, named surface plasmon interference nanolithography by Liu et al., uses SPP interference patterns. Despite offering high resolution and throughput, plasmonic contact lithography is regarded as an expensive and complex method; contamination due to contact is also a limiting factor. Planar lens imaging nanolithography uses plasmonic lenses or negative-index superlenses, which were first proposed by John Pendry. Many superlens designs, such as Pendry's thin silver film or Fang et al.'s superlens, exploit plasmonic excitations to recover evanescent Fourier components of the incoming light, allowing focusing beyond the diffraction limit. Chaturvedi et al. have demonstrated the imaging of a 30 nm chromium grating through silver superlens photolithography at 380 nm, while Shi et al. simulated a 20 nm lithography resolution at 193 nm wavelength with an aluminum superlens. Srituravanich et al. have developed a mechanically adjustable, hovering plasmonic lens for maskless near-field nanolithography, whereas another maskless approach by Pan et al. uses a "multi-stage plasmonic lens" for progressive coupling. Plasmonic direct writing is a maskless form of photolithography that is based on scanning probe lithography; the method uses localized surface plasmon (LSP) enhancements from embedded plasmonic scanning probes to expose the photoresist. Wang et al. experimentally demonstrated 100 nm field confinement with this method. Kim et al. have developed a ~50 nm resolution scanning probe with a patterning speed of ~10 mm/s.
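For reference, the SPP dispersion relation alluded to in the Theory section above has the standard textbook form for a single metal–dielectric interface; this is the generic formula, not an expression quoted from the cited papers.

```latex
% Standard SPP dispersion relation at a single metal--dielectric interface;
% \varepsilon_m(\omega) is the metal permittivity, \varepsilon_d the dielectric
% permittivity, and k_0 = \omega/c the free-space wavenumber.
\[
  k_{\mathrm{SPP}} \;=\; k_0
  \sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}},
  \qquad
  \lambda_{\mathrm{SPP}} \;=\; \frac{2\pi}{\operatorname{Re} k_{\mathrm{SPP}}} \;<\; \lambda_0 .
\]
% In the optical regime \varepsilon_m < 0 with |\varepsilon_m| > \varepsilon_d,
% so the square-root factor exceeds unity and k_SPP > k_0: the SPP wavelength is
% shorter than the free-space wavelength, which is the basis for sub-diffraction
% patterning and also the reason the photon--SPP momentum mismatch must be
% bridged by prisms, gratings, roughness, or apertures.
```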
Gold nanoparticles and other plasmonic nanostructures such as nanogaps have been used as masks for lithography; etching in this case can be achieved through either through photomasking principles or enhanced local heating in the vicinity of the nanostructure due to the LSP resonances. Lin et al. also used localized thermal excitations in gold nanoparticles to fabricate two-dimensional structures such as patterned graphene and molybdenum disulfide monolayers in a process termed as "optothermoplasmonic nanolithography." Photochemical effects of LSP resonances were also used as a catalyst in lithographic processes: Saito et al. demonstrated selective etching of silver nanocubes on titanium dioxide substrates by the means of plasmon-induced charge separation. See also Electron-beam lithography Nanoimprint lithography Nanosphere lithography Plasmonic metamaterial References Lithography (microfabrication) Plasmonics
Plasmonic nanolithography
[ "Physics", "Chemistry", "Materials_science" ]
1,029
[ "Plasmonics", "Microtechnology", "Surface science", "Condensed matter physics", "Nanotechnology", "Solid state engineering", "Lithography (microfabrication)" ]
27,292,230
https://en.wikipedia.org/wiki/Graphical%20models%20for%20protein%20structure
Graphical models have become powerful frameworks for protein structure prediction, protein–protein interaction, and free energy calculations for protein structures. Using a graphical model to represent the protein structure allows the solution of many problems including secondary structure prediction, protein-protein interactions, protein-drug interaction, and free energy calculations. There are two main approaches to using graphical models in protein structure modeling. The first approach uses discrete variables for representing the coordinates or the dihedral angles of the protein structure. The variables are originally all continuous values and, to transform them into discrete values, a discretization process is typically applied. The second approach uses continuous variables for the coordinates or dihedral angles. Discrete graphical models for protein structure Markov random fields, also known as undirected graphical models are common representations for this problem. Given an undirected graph G = (V, E), a set of random variables X = (Xv)v ∈ V indexed by V, form a Markov random field with respect to G if they satisfy the pairwise Markov property: any two non-adjacent variables are conditionally independent given all other variables: In the discrete model, the continuous variables are discretized into a set of favorable discrete values. If the variables of choice are dihedral angles, the discretization is typically done by mapping each value to the corresponding rotamer conformation. Model Let X = {Xb, Xs} be the random variables representing the entire protein structure. Xb can be represented by a set of 3-d coordinates of the backbone atoms, or equivalently, by a sequence of bond lengths and dihedral angles. The probability of a particular conformation x can then be written as: where represents any parameters used to describe this model, including sequence information, temperature etc. Frequently the backbone is assumed to be rigid with a known conformation, and the problem is then transformed to a side-chain placement problem. The structure of the graph is also encoded in . This structure shows which two variables are conditionally independent. As an example, side chain angles of two residues far apart can be independent given all other angles in the protein. To extract this structure, researchers use a distance threshold, and only a pair of residues which are within that threshold are considered connected (i.e. have an edge between them). Given this representation, the probability of a particular side chain conformation xs given the backbone conformation xb can be expressed as where C(G) is the set of all cliques in G, is a potential function defined over the variables, and Z is the partition function. To completely characterize the MRF, it is necessary to define the potential function . To simplify, the cliques of a graph are usually restricted to only the cliques of size 2, which means the potential function is only defined over pairs of variables. In Goblin System, these pairwise functions are defined as where is the energy of interaction between rotamer state p of residue and rotamer state q of residue and is the Boltzmann constant. Using a PDB file, this model can be built over the protein structure. From this model, free energy can be calculated. Free energy calculation: belief propagation It has been shown that the free energy of a system is calculated as where E is the enthalpy of the system, T the temperature and S, the entropy. 
Now if we associate a probability with each state of the system, (p(x) for each conformation value, x), G can be rewritten as Calculating p(x) on discrete graphs is done by the generalized belief propagation algorithm. This algorithm calculates an approximation to the probabilities, and it is not guaranteed to converge to a final value set. However, in practice, it has been shown to converge successfully in many cases. Continuous graphical models for protein structures Graphical models can still be used when the variables of choice are continuous. In these cases, the probability distribution is represented as a multivariate probability distribution over continuous variables. Each family of distribution will then impose certain properties on the graphical model. Multivariate Gaussian distribution is one of the most convenient distributions in this problem. The simple form of the probability and the direct relation with the corresponding graphical model makes it a popular choice among researchers. Gaussian graphical models of protein structures Gaussian graphical models are multivariate probability distributions encoding a network of dependencies among variables. Let be a set of variables, such as dihedral angles, and let be the value of the probability density function at a particular value D. A multivariate Gaussian graphical model defines this probability as follows: Where is the closed form for the partition function. The parameters of this distribution are and . is the vector of mean values of each variable, and , the inverse of the covariance matrix, also known as the precision matrix. Precision matrix contains the pairwise dependencies between the variables. A zero value in means that conditioned on the values of the other variables, the two corresponding variable are independent of each other. To learn the graph structure as a multivariate Gaussian graphical model, we can use either L-1 regularization, or neighborhood selection algorithms. These algorithms simultaneously learn a graph structure and the edge strength of the connected nodes. An edge strength corresponds to the potential function defined on the corresponding two-node clique. We use a training set of a number of PDB structures to learn the and . Once the model is learned, we can repeat the same step as in the discrete case, to get the density functions at each node, and use analytical form to calculate the free energy. Here, the partition function already has a closed form, so the inference, at least for the Gaussian graphical models is trivial. If the analytical form of the partition function is not available, particle filtering or expectation propagation can be used to approximate Z, and then perform the inference and calculate free energy. References Time Varying Undirected Graphs, Shuheng Zhou and John D. Lafferty and Larry A. Wasserman, COLT 2008 Free Energy Estimates of All-atom Protein Structures Using Generalized Belief Propagation, Hetunandan Kamisetty Eric P. Xing Christopher J. Langmead, RECOMB 2008 External links http://www.liebertonline.com/doi/pdf/10.1089/cmb.2007.0131 https://web.archive.org/web/20110724225908/http://www.learningtheory.org/colt2008/81-Zhou.pdf Predicting Protein Folds with Structural Repeats Using a Chain Graph Model Graphical models Protein methods Computational chemistry
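Several display equations in the two sections above were lost in extraction (the blanks after "can then be written as", "calculated as", and "as follows"). The block below restates the standard forms those passages describe, namely the pairwise Markov property, the clique factorization with Boltzmann-weighted pair potentials, the free energy, and the multivariate Gaussian density. It is a reconstruction of the textbook expressions, not a quotation of the original article.

```latex
% Pairwise Markov property for non-adjacent u, v in G = (V, E):
\[
  X_u \perp X_v \mid X_{V \setminus \{u, v\}} \quad \text{whenever } (u, v) \notin E .
\]
% Side-chain conformation probability as a pairwise Markov random field:
\[
  p(x_s \mid x_b) \;=\; \frac{1}{Z} \prod_{c \in C(G)} \psi_c(x_c),
  \qquad
  \psi_{ij}(x_i^{p}, x_j^{q}) \;=\; \exp\!\left(-\frac{E_{ij}(p, q)}{k_B T}\right),
\]
% where E_ij(p, q) is the interaction energy between rotamer state p of residue i
% and rotamer state q of residue j, and Z is the partition function.
% Free energy, and its form as an expectation over state probabilities p(x),
% using S = -k_B \sum_x p(x) \ln p(x):
\[
  G \;=\; E - TS
  \;=\; \sum_x p(x)\, E(x) \;+\; k_B T \sum_x p(x) \ln p(x) .
\]
% Multivariate Gaussian graphical model over n continuous variables D,
% with mean vector \mu and precision matrix \Sigma^{-1}:
\[
  p(D) \;=\; \frac{1}{Z}
  \exp\!\left(-\tfrac{1}{2}\,(D - \mu)^{\!\top} \Sigma^{-1} (D - \mu)\right),
  \qquad
  Z = (2\pi)^{n/2}\, \lvert \Sigma \rvert^{1/2} .
\]
```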
Graphical models for protein structure
[ "Chemistry", "Biology" ]
1,364
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Theoretical chemistry", "Computational chemistry" ]
27,296,457
https://en.wikipedia.org/wiki/Endocannabinoid%20reuptake%20inhibitor
Endocannabinoid reuptake inhibitors (eCBRIs), also called cannabinoid reuptake inhibitors (CBRIs), are drugs which limit the reabsorption of endocannabinoid neurotransmitters by the releasing neuron. Pharmacology The method of transport of endocannabinoids through the cell membrane and cytoplasm to their respective degradation enzymes has been rigorously debated for nearly two decades, and a putative endocannabinoid membrane transporter was proposed. However, as lipophilic molecules endocannabinoids readily pass through the cell lipid bilayer without assistance and would more likely need a chaperone through the cytoplasm to the endoplasmic reticulum where the enzyme FAAH is located. More recently fatty acid-binding proteins (FABPs) and heat shock proteins (Hsp70s) have been described and verified as such chaperones, and their inhibitors have been synthesized. The inhibition of endocannabinoid reuptake raises the amount of those neurotransmitters available in the synaptic cleft and therefore increases neurotransmission. Following the increase of neurotransmission in the endocannabinoid system is the stimulation of its functions which, in humans, include: suppression of pain perception (analgesia), increased appetite, mood elevation and inhibition of short-term memory. Examples of eCBRIs AM404 – active metabolite of paracetamol (acetaminophen) AM1172 Guineensine LY-2183240 O-2093 OMDM-2 RX-055 SYT-510 UCM-707 VDM-11 WOBE437 See also Endocannabinoid enhancer Endocannabinoid system Reuptake inhibitor Cannabinoid receptor antagonist Endocannabinoid transporters FAAH inhibitor MAGL inhibitor References Neurochemistry Endocannabinoid reuptake inhibitors
Endocannabinoid reuptake inhibitor
[ "Chemistry", "Biology" ]
421
[ "Biochemistry", "Neurochemistry" ]
18,790,525
https://en.wikipedia.org/wiki/SCR-658%20radar
The SCR-658 radar is a radio direction finding set introduced by the U.S. Army in 1944 and developed in conjunction with the SCR-268 radar. It was preceded by the SCR-258. Its primary purpose was to track weather balloons; prior to this, weather balloons could only be tracked visually with a theodolite, which was difficult in poor weather conditions. The set is small enough to be portable and carried in a Ben Hur trailer. Surviving examples There is one known survivor at the Air Force museum in Dayton, Ohio. See also Signal Corps Radio Radiosonde Notes References TM 11-1158 TM 11-2409 mobile Meteorological station Air Defense Artillery Journal March–April 1949 External links http://www.photolib.noaa.gov/htmls/wea01200.htm https://web.archive.org/web/20100413132056/http://www.gordon.army.mil/ocos/museum/equipment.asp SCR and BC lists https://web.archive.org/web/20081121225613/http://6thweathermobile.org/1949_(part%201).htm excellent pics. http://www.srh.noaa.gov/ssd/tstm/html/tstorm.htm Meteorological instrumentation and equipment Weather radars Military radars of the United States World War II radars World War II American electronics Military equipment introduced from 1940 to 1944
SCR-658 radar
[ "Technology", "Engineering" ]
379
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
18,790,726
https://en.wikipedia.org/wiki/High-pressure%20electrolysis
High-pressure electrolysis (HPE) is the electrolysis of water by decomposition of water (H2O) into oxygen (O2) and hydrogen gas (H2) due to the passing of an electric current through the water. The difference with a standard proton exchange membrane (PEM) electrolyzer is the compressed hydrogen output around at 70 °C. By pressurising the hydrogen in the electrolyser the need for an external hydrogen compressor is eliminated, the average energy consumption for internal differential pressure compression is around 3%. Approaches As the required compression power for water is less than that for hydrogen-gas the water is pumped up to a high-pressure, in the other approach differential pressure is used. There is also an importance for the electrolyser stacks to be able to accept a fluctuating electrical input, such as that found with renewable energy. This then enables the ability to help with grid balancing and energy storage. Ultrahigh-pressure electrolysis Ultrahigh-pressure electrolysis is high-pressure electrolysis operating at . At ultra-high pressures the water solubility and cross-permeation across the membrane of H2 and O2 is affecting hydrogen purity, modified PEMs are used to reduce cross-permeation in combination with catalytic H2/O2 recombiners to maintain H2 levels in O2 and O2 levels in H2 at values compatible with hydrogen safety requirements. Research The US DOE believes that high-pressure electrolysis, supported by ongoing research and development, will contribute to the enabling and acceptance of technologies where hydrogen is the energy carrier between renewable energy resources and clean energy consumers. High-pressure electrolysis is being investigated by the DOE for efficient production of hydrogen from water. The target total in 2005 is $4.75 per gge H2 at an efficiency of 64%. The total goal for the DOE in 2010 is $2.85 per gge H2 at an efficiency of 75%. As of 2005 the DOE provided a total of $1,563,882 worth of funding for research. Mitsubishi is pursuing such technology with its High-pressure hydrogen energy generator (HHEG) project. The Forschungszentrum Jülich, in Jülich Germany is currently researching the cost reduction of components used in high-pressure PEM electrolysis in the EKOLYSER project. The primary goal of this research is to improve performance and gas purity, reduce cost and volume of expensive materials and reach the alternative energy targets set forth by the German government for 2050 in the Energy Concept published in 2010. ThalesNano Energy released a lab-scale high pressure (100 bar) hydrogen generator as a replacement for hydrogen cylinders in chemistry laboratories. Commercial Products Honda installed its Smart Hydrogen Station (SHS) in Los Angeles for use by fuel cell automobiles. See also Regenerative fuel cell Electrochemical engineering High-temperature electrolysis References External links High pressure electrolyzer EC-supported STREP program on high pressure PEM water electrolysis Hydrogen technologies Electrolysis Hydrogen production
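The claim that internal pressurisation costs only a few percent of extra energy can be sanity-checked with the ideal (reversible, isothermal) electrochemical compression term from the Nernst equation. The sketch below is a back-of-the-envelope illustration with assumed example pressures and an assumed cell voltage, not a statement about any particular commercial electrolyser.

```python
# Ideal extra cell voltage needed to deliver hydrogen at elevated pressure,
# from the Nernst pressure term: dV = (R*T / (2*F)) * ln(p_out / p_in).
# The 70 degC temperature and the 1 -> 350 bar example are illustrative assumptions.
import math

R = 8.314        # gas constant, J/(mol*K)
F = 96485.0      # Faraday constant, C/mol
N_ELECTRONS = 2  # electrons transferred per H2 molecule

def compression_voltage(p_out_bar: float, p_in_bar: float = 1.0,
                        temp_c: float = 70.0) -> float:
    """Reversible voltage penalty (V) for electrochemical H2 compression."""
    t_kelvin = temp_c + 273.15
    return (R * t_kelvin) / (N_ELECTRONS * F) * math.log(p_out_bar / p_in_bar)

if __name__ == "__main__":
    dv = compression_voltage(350.0)        # roughly 0.09 V
    assumed_cell_voltage = 1.8             # V, a rough PEM operating point (assumed)
    print(f"extra voltage: {dv:.3f} V "
          f"(~{100 * dv / assumed_cell_voltage:.1f}% of an assumed {assumed_cell_voltage} V cell)")
```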
High-pressure electrolysis
[ "Chemistry" ]
622
[ "Electrochemistry", "Electrolysis" ]
18,793,978
https://en.wikipedia.org/wiki/Flame%20speed
The flame speed is the measured rate of expansion of the flame front in a combustion reaction. Whereas flame velocity is generally used for a fuel, a related term is explosive velocity, which is the same relationship measured for an explosive. Combustion engineers differentiate between the laminar flame speed and turbulent flame speed. Flame speed is typically measured in m/s, cm/s, etc. The underlying physical phenomena are those of combustion. In engines In an internal combustion engine, the flame speed of a fuel is a property which determines its ability to undergo controlled combustion without detonation. Flame speed is used along with adiabatic flame temperature to help determine the engine's efficiency. According to one source, "...high flame-speed combustion processes, which closely approximate constant-volume processes, should reflect in high efficiencies." The laminar flame speeds quoted for fuels are not the actual flame speeds in an engine: a 12:1 compression ratio gasoline engine at 1500 rpm would have a flame speed of about 16.5 m/s, and a similar hydrogen engine yields 48.3 m/s, but such engine flame speeds are also very dependent on stoichiometry. See also Chemical kinetics Deflagration G-Equation Burn rate (chemistry) Wobbe index Octane rating References Combustion
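To see why flame speed matters for engine operation, consider the time a flame front needs to cross the combustion chamber relative to the time available in one crank revolution. The short sketch below uses the illustrative engine flame speeds quoted above (16.5 m/s for gasoline, 48.3 m/s for hydrogen at 1500 rpm); the bore diameter and spark-at-centre geometry are assumed example values, not figures from the article.

```python
# Rough comparison of flame travel time vs. time available per crank revolution.
# Flame speeds are the figures quoted in the text above; the 86 mm bore and
# spark-at-centre geometry are illustrative assumptions.

def flame_travel_ms(distance_m: float, flame_speed_m_s: float) -> float:
    """Time (ms) for the flame front to cover a given distance."""
    return 1000.0 * distance_m / flame_speed_m_s

if __name__ == "__main__":
    bore_m = 0.086                   # assumed bore; flame travels roughly half of it
    rpm = 1500
    rev_time_ms = 60_000.0 / rpm     # 40 ms per revolution at 1500 rpm
    for fuel, speed in [("gasoline", 16.5), ("hydrogen", 48.3)]:
        t = flame_travel_ms(bore_m / 2, speed)
        print(f"{fuel}: {t:.2f} ms to cross half the bore "
              f"({100 * t / rev_time_ms:.1f}% of one revolution)")
```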
Flame speed
[ "Chemistry" ]
259
[ "Chemical reaction stubs", "Combustion", "Chemical process stubs" ]
18,795,045
https://en.wikipedia.org/wiki/TP-003
TP-003 is an anxiolytic drug with a novel chemical structure, which is used in scientific research. It has similar effects to benzodiazepine drugs, but is structurally distinct and so is classed as a nonbenzodiazepine anxiolytic. TP-003 is a positive allosteric modulator at the benzodiazepine binding site of GABAA receptors. It possesses relative selectivity for benzodiazepine sites on α3-containing GABAA receptors, which are thought to contribute to the anxiolytic effects of benzodiazepines (in tandem with those containing α2 subunits). It has modest anticonvulsant activity although less than that of diazepam. See also Imidazopyridine L-838,417 Alpidem References Tertiary alcohols Anxiolytics GABAA receptor positive allosteric modulators Imidazopyridines Nitriles Fluoroarenes
TP-003
[ "Chemistry" ]
209
[ "Nitriles", "Functional groups" ]
4,936,677
https://en.wikipedia.org/wiki/Cranege%20brothers
Thomas and George Cranege (also spelled Cranage), who worked in the ironworking industry in England in the 1760s, are notable for introducing a new method of producing wrought iron from pig iron. Experiment of 1766 The process of converting pig iron into wrought iron (also known as bar iron) was at that time carried out in a finery forge, which was fuelled by charcoal. Charcoal was a limited resource, but coal, more widely available, could not be used because the sulphur in coal would adversely affect the quality of the wrought iron. George Cranege worked in Coalbrookdale in Shropshire, at the ironworks established by Abraham Darby I, and his brother Thomas worked at a forge in Bridgnorth in Shropshire. They suggested to Richard Reynolds, manager of the works at Coalbrookdale, that the conversion process could be done in a reverbatory furnace, where the iron did not mix with the coal. Reynolds was sceptical, but authorized the brothers to try out the idea. Richard Reynolds, in a letter dated 25 April 1766 to his colleague Thomas Goldney III, described his conversation with the Craneges and the experiment: I told them, consistent with the notion I had adopted in common with all others I had conversed with, that I thought it impossible, because the vegetable salts in the charcoal being an alkali acted as an absorbent to the sulphur of the iron, which occasions the red-short quality of the iron, and pit coal abounding with sulphur would increase it.... They replied that from the observations they had made, and repeated conversations together, they were both firmly of opinion that the alteration from the quality of pig iron into that of bar iron was effected merely by heat, and if I would give them leave, they would make a trial some day.... A trial of it has been made this week, and the success has surpassed the most sanguine expectations.... I look upon it as one of the most important discoveries ever made.... A patent for the process, dated 17 June 1766, in the name of the brothers Cranege, was secured. It apparently made little difference to the lives of the brothers. The process was improved soon afterwards, by Peter Onions who received a patent in 1783, and by Henry Cort who received patents in 1783 and 1784 for his improvements. References History of metallurgy People of the Industrial Revolution
Cranege brothers
[ "Chemistry", "Materials_science" ]
502
[ "Metallurgy", "History of metallurgy" ]
4,938,776
https://en.wikipedia.org/wiki/Gerotor
A gerotor is a positive displacement pump. The name gerotor is derived from "generated rotor." A gerotor unit consists of an inner and an outer rotor. The inner rotor has n teeth, while the outer rotor has n + 1 teeth, with n defined as a natural number greater than or equal to 2. The axis of the inner rotor is offset from the axis of the outer rotor and both rotors rotate on their respective axes. The geometry of the two rotors partitions the volume between them into n different dynamically-changing volumes. During the assembly's rotation cycle, each of these volumes changes continuously, so any given volume first increases, and then decreases. An increase creates a vacuum. This vacuum creates suction, and hence, this part of the cycle is where the inlet is located. As a volume decreases, compression occurs. During this compression period, fluids can be pumped or, if they are gaseous fluids, compressed. Gerotor pumps are generally designed using a trochoidal inner rotor and an outer rotor formed by a circle with intersecting circular arcs. A gerotor can also function as a pistonless rotary engine. High-pressure gas enters the intake and pushes against the inner and outer rotors, causing both to rotate as the volume between the inner and outer rotor increases. During the compression period, the exhaust is pumped out. History At the most basic level, a gerotor is essentially one that is moved via fluid power. Originally, this fluid was water; today, the wider use is in hydraulic devices. Myron F. Hill, who might be called the father of the gerotor, in his booklet "Kinematics of Ge-rotors", lists efforts by Galloway in 1787, by Nash and Tilden in 1879, by Cooley in 1900, by Professor Lilly of Dublin University in 1915, and by Feuerheerd in 1918. These men were all working to perfect an internal gear mechanism by a one-tooth difference to provide displacement. Myron Hill made his first efforts in 1906, then in 1921, gave his entire time to developing the gerotor. He developed a great deal of geometric theory bearing upon these rotors, coined the word GE-ROTOR (meaning generated rotor), and secured basic patents on GE-ROTOR. Gerotors are widely used today throughout industry, and are produced in a variety of shapes and sizes by a number of different methods. Uses Engine Fuel pump Gas compressor Hydraulic motor Limited-slip differential Oil pump (internal combustion engine) Power steering units See also Conical screw compressor Gear pump Quasiturbine Wankel engine References https://www.academia.edu/10200507/Gerotor_Modeling_with_NX3 External links Cascon Inc. Nichols Portland LLC Pump School - Gerotor pump description and animation Step by step drawing Engine technology Gas compressors Pumps
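As a rough illustration of how the n dynamically-changing volumes translate into flow, the sketch below models each chamber's volume with a simple sinusoidal variation between a minimum and a maximum and derives a theoretical flow rate. The sinusoidal profile, the assumption that each chamber cycles once per revolution, and the numbers used are all simplifying assumptions for illustration; real chamber volumes follow the generated (trochoidal) geometry.

```python
# Idealized gerotor flow model: n chambers, each assumed to cycle once per
# revolution between V_min and V_max with a sinusoidal profile. This is a
# simplification for illustration, not the true generated-rotor geometry.
import math

def chamber_volume(theta: float, i: int, n: int,
                   v_min: float, v_max: float) -> float:
    """Volume of chamber i (0..n-1) at rotor angle theta (radians), assumed sinusoidal."""
    phase = theta + 2.0 * math.pi * i / n
    return v_min + 0.5 * (v_max - v_min) * (1.0 - math.cos(phase))

def displacement_per_rev(n: int, v_min: float, v_max: float) -> float:
    """Swept volume per revolution if each of the n chambers fills and empties once."""
    return n * (v_max - v_min)

if __name__ == "__main__":
    n, v_min, v_max = 6, 0.2e-6, 1.4e-6     # 6 chambers, volumes in m^3 (assumed)
    rpm = 3000
    q = displacement_per_rev(n, v_min, v_max) * rpm / 60.0   # m^3/s
    print(f"theoretical flow ~ {q * 60_000:.1f} L/min at {rpm} rpm")
```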
Gerotor
[ "Physics", "Chemistry", "Technology" ]
592
[ "Pumps", "Turbomachinery", "Engines", "Gas compressors", "Physical systems", "Engine technology", "Hydraulics" ]
4,939,073
https://en.wikipedia.org/wiki/Homomorphic%20encryption
Homomorphic encryption is a form of encryption that allows computations to be performed on encrypted data without first having to decrypt it. The resulting computations are left in an encrypted form which, when decrypted, result in an output that is identical to that of the operations performed on the unencrypted data. While homomorphic encryption does not protect against side-channel attacks that observe behavior, it can be used for privacy-preserving outsourced storage and computation. This allows data to be encrypted and outsourced to commercial cloud environments for processing, all while encrypted. As an example of a practical application of homomorphic encryption: encrypted photographs can be scanned for points of interest, without revealing the contents of a photo. However, observation of side-channels can see a photograph being sent to a point-of-interest lookup service, revealing the fact that photographs were taken. Thus, homomorphic encryption eliminates the need for processing data in the clear, thereby preventing attacks that would enable an attacker to access that data while it is being processed, using privilege escalation. For sensitive data, such as healthcare information, homomorphic encryption can be used to enable new services by removing privacy barriers inhibiting data sharing or increasing security to existing services. For example, predictive analytics in healthcare can be hard to apply via a third-party service provider due to medical data privacy concerns. But if the predictive-analytics service provider could operate on encrypted data instead, without having the decryption keys, these privacy concerns are diminished. Moreover, even if the service provider's system is compromised, the data would remain secure. Description Homomorphic encryption is a form of encryption with an additional evaluation capability for computing over encrypted data without access to the secret key. The result of such a computation remains encrypted. Homomorphic encryption can be viewed as an extension of public-key cryptography. Homomorphic refers to homomorphism in algebra: the encryption and decryption functions can be thought of as homomorphisms between plaintext and ciphertext spaces. Homomorphic encryption includes multiple types of encryption schemes that can perform different classes of computations over encrypted data. The computations are represented as either Boolean or arithmetic circuits. Some common types of homomorphic encryption are partially homomorphic, somewhat homomorphic, leveled fully homomorphic, and fully homomorphic encryption: Partially homomorphic encryption encompasses schemes that support the evaluation of circuits consisting of only one type of gate, e.g., addition or multiplication. Somewhat homomorphic encryption schemes can evaluate two types of gates, but only for a subset of circuits. Leveled fully homomorphic encryption supports the evaluation of arbitrary circuits composed of multiple types of gates of bounded (pre-determined) depth. Fully homomorphic encryption (FHE) allows the evaluation of arbitrary circuits composed of multiple types of gates of unbounded depth and is the strongest notion of homomorphic encryption. For the majority of homomorphic encryption schemes, the multiplicative depth of circuits is the main practical limitation in performing computations over encrypted data. Homomorphic encryption schemes are inherently malleable. 
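One way to get a feel for the single-gate-type ("partially homomorphic") case described above is the classical Paillier cryptosystem, which appears in the Pre-FHE history below and is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The sketch below is a toy educational implementation with tiny hard-coded primes and no hardening; it is not secure and is not meant as a production reference.

```python
# Toy Paillier cryptosystem: a partially homomorphic scheme in which the
# product of two ciphertexts decrypts to the SUM of the plaintexts.
# Educational sketch only -- tiny hard-coded primes, no padding, not secure.
import math, random

def keygen(p: int = 7919, q: int = 104729):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1
    mu = pow(lam, -1, n)            # modular inverse (Python 3.8+); valid because g = n + 1
    return (n, g), (lam, mu, n)

def encrypt(pub, m: int) -> int:
    n, g = pub
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    l = (pow(c, lam, n * n) - 1) // n    # the L(u) = (u - 1) / n function
    return (l * mu) % n

if __name__ == "__main__":
    pub, priv = keygen()
    a, b = 1234, 5678
    c = (encrypt(pub, a) * encrypt(pub, b)) % (pub[0] ** 2)   # addition under encryption
    assert decrypt(priv, c) == a + b
    print("Enc(a) * Enc(b) decrypts to a + b =", decrypt(priv, c))
```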
In terms of malleability, homomorphic encryption schemes have weaker security properties than non-homomorphic schemes. History Homomorphic encryption schemes have been developed using different approaches. Specifically, fully homomorphic encryption schemes are often grouped into generations corresponding to the underlying approach. Pre-FHE The problem of constructing a fully homomorphic encryption scheme was first proposed in 1978, within a year of publishing of the RSA scheme. For more than 30 years, it was unclear whether a solution existed. During that period, partial results included the following schemes: RSA cryptosystem (unbounded number of modular multiplications) ElGamal cryptosystem (unbounded number of modular multiplications) Goldwasser–Micali cryptosystem (unbounded number of exclusive or operations) Benaloh cryptosystem (unbounded number of modular additions) Paillier cryptosystem (unbounded number of modular additions) Sander-Young-Yung system (after more than 20 years solved the problem for logarithmic depth circuits) Boneh–Goh–Nissim cryptosystem (unlimited number of addition operations but at most one multiplication) Ishai-Paskin cryptosystem (polynomial-size branching programs) First-generation FHE Craig Gentry, using lattice-based cryptography, described the first plausible construction for a fully homomorphic encryption scheme in 2009. Gentry's scheme supports both addition and multiplication operations on ciphertexts, from which it is possible to construct circuits for performing arbitrary computation. The construction starts from a somewhat homomorphic encryption scheme, which is limited to evaluating low-degree polynomials over encrypted data; it is limited because each ciphertext is noisy in some sense, and this noise grows as one adds and multiplies ciphertexts, until ultimately the noise makes the resulting ciphertext indecipherable. Gentry then shows how to slightly modify this scheme to make it bootstrappable, i.e., capable of evaluating its own decryption circuit and then at least one more operation. Finally, he shows that any bootstrappable somewhat homomorphic encryption scheme can be converted into a fully homomorphic encryption through a recursive self-embedding. For Gentry's "noisy" scheme, the bootstrapping procedure effectively "refreshes" the ciphertext by applying to it the decryption procedure homomorphically, thereby obtaining a new ciphertext that encrypts the same value as before but has lower noise. By "refreshing" the ciphertext periodically whenever the noise grows too large, it is possible to compute an arbitrary number of additions and multiplications without increasing the noise too much. Gentry based the security of his scheme on the assumed hardness of two problems: certain worst-case problems over ideal lattices, and the sparse (or low-weight) subset sum problem. Gentry's Ph.D. thesis provides additional details. The Gentry-Halevi implementation of Gentry's original cryptosystem reported a timing of about 30 minutes per basic bit operation. Extensive design and implementation work in subsequent years have improved upon these early implementations by many orders of magnitude runtime performance. In 2010, Marten van Dijk, Craig Gentry, Shai Halevi and Vinod Vaikuntanathan presented a second fully homomorphic encryption scheme, which uses many of the tools of Gentry's construction, but which does not require ideal lattices. 
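The following is a minimal secret-key sketch in the spirit of the integer-based somewhat homomorphic scheme mentioned here (and described further below); it is not the actual public-key construction, and the parameters are invented and far too small to be secure. It is included only to show how noise accumulates with each homomorphic operation until decryption fails, which is the problem bootstrapping addresses.

```python
import secrets

# Toy secret-key sketch in the spirit of the "somewhat homomorphic scheme that
# uses integers" (van Dijk-Gentry-Halevi-Vaikuntanathan), for illustration only:
# a ciphertext is P*q + 2*r + bit, where P is the secret key and r is small noise.

P = secrets.randbits(64) | (1 << 63) | 1      # secret odd integer (the key)

def encrypt(bit: int, noise_bits: int = 16) -> int:
    q = secrets.randbits(128)                  # large random multiplier of the key
    r = secrets.randbits(noise_bits)           # small noise
    return P * q + 2 * r + bit

def decrypt(c: int) -> int:
    centered = c % P
    if centered > P // 2:                      # reduce into (-P/2, P/2]
        centered -= P
    return centered % 2

a, b = encrypt(1), encrypt(0)
assert decrypt(a + b) == 1                     # addition XORs the plaintext bits
assert decrypt(a * b) == 0                     # multiplication ANDs them

# Repeated multiplications multiply the noise; once it exceeds P/2,
# decryption no longer recovers the plaintext.
c = encrypt(1)
for depth in range(1, 8):
    c = c * encrypt(1)
    print(depth, decrypt(c))                   # stays 1 only while the noise is small
```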
Instead, they show that the somewhat homomorphic component of Gentry's ideal lattice-based scheme can be replaced with a very simple somewhat homomorphic scheme that uses integers. The scheme is therefore conceptually simpler than Gentry's ideal lattice scheme, but has similar properties with regards to homomorphic operations and efficiency. The somewhat homomorphic component in the work of Van Dijk et al. is similar to an encryption scheme proposed by Levieil and Naccache in 2008, and also to one that was proposed by Bram Cohen in 1998. Cohen's method is not even additively homomorphic, however. The Levieil–Naccache scheme supports only additions, but it can be modified to also support a small number of multiplications. Many refinements and optimizations of the scheme of Van Dijk et al. were proposed in a sequence of works by Jean-Sébastien Coron, Tancrède Lepoint, Avradip Mandal, David Naccache, and Mehdi Tibouchi. Some of these works included also implementations of the resulting schemes. Second-generation FHE The homomorphic cryptosystems of this generation are derived from techniques that were developed starting in 2011–2012 by Zvika Brakerski, Craig Gentry, Vinod Vaikuntanathan, and others. These innovations led to the development of much more efficient somewhat and fully homomorphic cryptosystems. These include: The Brakerski-Gentry-Vaikuntanathan (BGV, 2011) scheme, building on techniques of Brakerski-Vaikuntanathan; The NTRU-based scheme by Lopez-Alt, Tromer, and Vaikuntanathan (LTV, 2012); The Brakerski/Fan-Vercauteren (BFV, 2012) scheme, building on Brakerski's cryptosystem; The NTRU-based scheme by Bos, Lauter, Loftus, and Naehrig (BLLN, 2013), building on LTV and Brakerski's scale-invariant cryptosystem; The security of most of these schemes is based on the hardness of the (Ring) Learning With Errors (RLWE) problem, except for the LTV and BLLN schemes that rely on an overstretched variant of the NTRU computational problem. This NTRU variant was subsequently shown vulnerable to subfield lattice attacks, which is why these two schemes are no longer used in practice. All the second-generation cryptosystems still follow the basic blueprint of Gentry's original construction, namely they first construct a somewhat homomorphic cryptosystem and then convert it to a fully homomorphic cryptosystem using bootstrapping. A distinguishing characteristic of the second-generation cryptosystems is that they all feature a much slower growth of the noise during the homomorphic computations. Additional optimizations by Craig Gentry, Shai Halevi, and Nigel Smart resulted in cryptosystems with nearly optimal asymptotic complexity: Performing operations on data encrypted with security parameter has complexity of only . These optimizations build on the Smart-Vercauteren techniques that enable packing of many plaintext values in a single ciphertext and operating on all these plaintext values in a SIMD fashion. Many of the advances in these second-generation cryptosystems were also ported to the cryptosystem over the integers. Another distinguishing feature of second-generation schemes is that they are efficient enough for many applications even without invoking bootstrapping, instead operating in the leveled FHE mode. Third-generation FHE In 2013, Craig Gentry, Amit Sahai, and Brent Waters (GSW) proposed a new technique for building FHE schemes that avoids an expensive "relinearization" step in homomorphic multiplication. 
Zvika Brakerski and Vinod Vaikuntanathan observed that for certain types of circuits, the GSW cryptosystem features an even slower growth rate of noise, and hence better efficiency and stronger security. Jacob Alperin-Sheriff and Chris Peikert then described a very efficient bootstrapping technique based on this observation. These techniques were further improved to develop efficient ring variants of the GSW cryptosystem: FHEW (2014) and TFHE (2016). The FHEW scheme was the first to show that by refreshing the ciphertexts after every single operation, it is possible to reduce the bootstrapping time to a fraction of a second. FHEW introduced a new method to compute Boolean gates on encrypted data that greatly simplifies bootstrapping and implemented a variant of the bootstrapping procedure. The efficiency of FHEW was further improved by the TFHE scheme, which implements a ring variant of the bootstrapping procedure using a method similar to the one in FHEW. Fourth-generation FHE In 2016, Cheon, Kim, Kim and Song (CKKS) proposed an approximate homomorphic encryption scheme that supports a special kind of fixed-point arithmetic that is commonly referred to as block floating point arithmetic. The CKKS scheme includes an efficient rescaling operation that scales down an encrypted message after a multiplication. For comparison, such rescaling requires bootstrapping in the BGV and BFV schemes. The rescaling operation makes CKKS scheme the most efficient method for evaluating polynomial approximations, and is the preferred approach for implementing privacy-preserving machine learning applications. The scheme introduces several approximation errors, both nondeterministic and deterministic, that require special handling in practice. A 2020 article by Baiyu Li and Daniele Micciancio discusses passive attacks against CKKS, suggesting that the standard IND-CPA definition may not be sufficient in scenarios where decryption results are shared. The authors apply the attack to four modern homomorphic encryption libraries (HEAAN, SEAL, HElib and PALISADE) and report that it is possible to recover the secret key from decryption results in several parameter configurations. The authors also propose mitigation strategies for these attacks, and include a Responsible Disclosure in the paper suggesting that the homomorphic encryption libraries already implemented mitigations for the attacks before the article became publicly available. Further information on the mitigation strategies implemented in the homomorphic encryption libraries has also been published. Partially homomorphic cryptosystems In the following examples, the notation is used to denote the encryption of the message . Unpadded RSA If the RSA public key has modulus and encryption exponent , then the encryption of a message is given by . The homomorphic property is then ElGamal In the ElGamal cryptosystem, in a cyclic group of order with generator , if the public key is , where , and is the secret key, then the encryption of a message is , for some random . The homomorphic property is then Goldwasser–Micali In the Goldwasser–Micali cryptosystem, if the public key is the modulus and quadratic non-residue , then the encryption of a bit is , for some random . The homomorphic property is then where denotes addition modulo 2, (i.e., exclusive-or). Benaloh In the Benaloh cryptosystem, if the public key is the modulus and the base with a blocksize of , then the encryption of a message is , for some random . 
The homomorphic property is then Paillier In the Paillier cryptosystem, if the public key is the modulus and the base , then the encryption of a message is , for some random . The homomorphic property is then Other partially homomorphic cryptosystems Okamoto–Uchiyama cryptosystem Naccache–Stern cryptosystem Damgård–Jurik cryptosystem Sander–Young–Yung encryption scheme Boneh–Goh–Nissim cryptosystem Ishai–Paskin cryptosystem Joye-Libert cryptosystem Castagnos–Laguillaumie cryptosystem Fully homomorphic encryption A cryptosystem that supports on ciphertexts is known as fully homomorphic encryption (FHE). Such a scheme enables the construction of programs for any desirable functionality, which can be run on encrypted inputs to produce an encryption of the result. Since such a program need never decrypt its inputs, it can be run by an untrusted party without revealing its inputs and internal state. Fully homomorphic cryptosystems have great practical implications in the outsourcing of private computations, for instance, in the context of cloud computing. Implementations A list of open-source FHE libraries implementing second-generation (BGV/BFV), third-generation (FHEW/TFHE), and/or fourth-generation (CKKS) FHE schemes is provided below. There are several open-source implementations of fully homomorphic encryption schemes. Second-generation and fourth-generation FHE scheme implementations typically operate in the leveled FHE mode (though bootstrapping is still available in some libraries) and support efficient SIMD-like packing of data; they are typically used to compute on encrypted integers or real/complex numbers. Third-generation FHE scheme implementations often bootstrap after each operation but have limited support for packing; they were initially used to compute Boolean circuits over encrypted bits, but have been extended to support integer arithmetics and univariate function evaluation. The choice of using a second-generation vs. third-generation vs fourth-generation scheme depends on the input data types and the desired computation. In addition, FHE has been combined with zero knowledge proofs, the blockchain technology that proves that something is true, without revealing any private information. zkFHE enables data encryption throughout data processing, while the results of any processing are verified in a confidential manner. Standardization In 2017, researchers from IBM, Microsoft, Intel, the NIST, and others formed the open Homomorphic Encryption Standardization Consortium, which maintains a community security Homomorphic Encryption Standard. See also Homomorphic secret sharing Homomorphic signatures for network coding Private biometrics Verifiable computing using a fully homomorphic scheme Client-side encryption Confidential computing Searchable symmetric encryption Secure multi-party computation Format-preserving encryption Polymorphic code Private set intersection References External links FHE.org Community (conference, meetup and discussion group) Daniele Micciancio's FHE references Vinod Vaikuntanathan's FHE references A list of homomorphic encryption implementations maintained on GitHub Homomorphic encryption Cryptographic primitives Public-key cryptography Information privacy
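To make the additive homomorphism of the Paillier cryptosystem concrete, here is a compact, insecure toy implementation in Python (toy primes, no encoding or padding); a real deployment would use large keys and a vetted library.

```python
from math import gcd
import secrets

# Toy Paillier implementation for illustration only. Requires Python 3.8+.

p, q = 293, 433                          # toy primes (illustrative assumption)
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lambda = lcm(p-1, q-1)
g = n + 1                                # standard simple choice of base

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)      # modular inverse used in decryption

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n)
        if r > 0 and gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

m1, m2 = 123, 456
c1, c2 = encrypt(m1), encrypt(m2)

# Multiplying Paillier ciphertexts adds the underlying plaintexts modulo n:
assert decrypt((c1 * c2) % n2) == (m1 + m2) % n
# Raising a ciphertext to a constant k multiplies the plaintext by k:
assert decrypt(pow(c1, 5, n2)) == (5 * m1) % n
print("homomorphic addition and scalar multiplication verified")
```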
Homomorphic encryption
[ "Engineering" ]
3,578
[ "Cybersecurity engineering", "Information privacy" ]
4,939,491
https://en.wikipedia.org/wiki/Parylene
Parylene is the common name of a polymer whose backbone consists of para-benzenediyl rings −− connected by 1,2-ethanediyl bridges −−−. It can be obtained by polymerization of para-xylylene ==. The name is also used for several polymers with the same backbone, where some hydrogen atoms are replaced by other functional groups. Some of these variants are designated in commerce by letter-number codes such as "parylene C" and "parylene AF-4". Some of these names are registered trademarks in some countries. Coatings of parylene are often applied to electronic circuits and other equipment as electrical insulation, moisture barriers, or protection against corrosion and chemical attack (conformal coating). They are also used to reduce friction and in medicine to prevent adverse reactions to implanted devices. These coatings are typically applied by chemical vapor deposition in an atmosphere of the monomer para-xylylene. Parylene is considered a "green" polymer because its polymerization needs no initiator or other chemicals to terminate the chain; and the coatings can be applied at or near room temperature, without any solvent. History Parylene was discovered in 1947 by Michael Szwarc as one of the thermal decomposition products of para-xylene −− above 1000 °C. Szwarc identified para-xylylene as the precursor by observing that reaction with iodine yielded para-xylylene di-iodide as the only product. The reaction yield was only a few percent. A more efficient route was found in 1965 by William F. Gorham at Union Carbide. He deposited parylene films by the thermal decomposition of [2.2]paracyclophane at temperatures exceeding 550 °C and in vacuum below 1 Torr. This process did not require a solvent and resulted in chemically resistant films free from pinholes. Union Carbide commercialized a parylene coating system in 1965. Union Carbide went on to undertake research into the synthesis of numerous parylene precursors, including parylene AF-4, throughout the 1960s into the early 1970s. Union Carbide purchased NovaTran (a parylene coater) in 1984 and combined it with other electronic chemical coating businesses to form the Specialty Coating Systems division. The division was sold to Cookson Electronics in 1994. There are parylene coating service companies located around the world, but there is limited commercial availability of parylene. The [2.2]paracyclophane precursors can be purchased for parylene N, C, D, AF-4 and VT-4. Parylene services are provided for N, C, AF-4, VT-4 and E (copolymer of N and E). Varieties Parylene N Parylene N is the un-substituted polymer obtained by polymerization of the para-xylene intermediate. Chlorinated parylenes Derivatives of parylene can be obtained by replacing hydrogen atoms on the phenyl ring or the aliphatic bridge by other functional groups. The most common of these variants is parylene C, which has one hydrogen atom in the aryl ring replaced by chlorine. Another common variant is parylene D, with two such substitutions on the ring. Parylene C is the most used variety, due to its low cost of its precursor and to the balance of its properties as dielectric and moisture barrier properties and ease of deposition. A major disadvantage for many applications is its insolubility in any solvent at room temperature, which prevents removal of the coating when the part has to be re-worked. Parylene C is also the most commonly used because of its relatively low cost. 
It can be deposited at room temperature while still possessing a high degree of conformality and uniformity and a moderate deposition rate in a batch process. Also, the chlorine on the phenyl ring of the parylene C repeat unit is problematic for RoHS compliance, especially for the printed circuit board manufacture. Moreover, some of the dimer precursor is decomposed by breaking of the aryl-chlorine bond during pyrolysis, generating carbonaceous material that contaminates the coating, and hydrogen chloride that may harm vacuum pumps and other equipment. The chlorine atom leaves the phenyl ring in the pyrolysis tube at all temperatures; however, optimizing the pyrolysis temperature will minimize this problem. The free-radical (phenyl radical) generated in this process is not resonance-stabilized and mitigates the deposition of a parylene-like material on the downside of the pyrolysis tube. This material becomes carbonized and generates particles in situ to contaminate clean rooms and create defects on printed-circuit boards that are often called "stringers and nodules". Parylene N and E do not have this problem and therefore are preferred for manufacturing and clean room use. Fluorinated parylenes Another common halogenated variant is parylene AF-4, with the four hydrogen atoms on the aliphatic chain replaced by fluorine atoms. This variant is also marketed under the trade names of parylene SF (Kisco) and HT parylene (SCS). The −− unit that comprises the ethylene chain is the same as the repeating unit of PTFE (Teflon), consistent with its superior oxidative and UV stability. Parylene AF-4 has been used to protect outdoor LED displays and lighting from water, salt and pollutants successfully. Another fluorinated variant is parylene VT-4 (also called parylene F), with fluorine substituted for the four hydrogens on the aryl ring. This variant is marketed by Kisco with the trademark Parylene CF. Because of the aliphatic −CH2− units, it has poor oxidative and UV stability, but still better than N, C, or D. Alkyl-substituted parylenes The hydrogen atoms can be replaced also by alkyl groups. Substitution may occur on either the phenyl ring or the ethylene bridge, or both. Specifically, replacement of one hydrogen on the phenyl ring by a methyl group or an ethyl group yields parylene M and E respectively. These substitutions increase the intermolecular (chain-to-chain) distance, which makes the polymer more soluble and permeable. For example, compared to parylene C, parylene M was shown to have a lower dielectric constant (2.48 vs. 3.2 at 1 kHz). Parylene E had a lower tensile modulus (175 kpsi (1.21 GPa) vs. 460 kpsi (3.17 GPa)), a lower dielectric constant (2.34 vs. 3.05 at 10 kHz), slightly worse moisture barrier properties (4.1 vs. 0.6 g·mil/(atom·100 in²·24 hr) (11 vs. 1.6 kg·m·pmol⁻¹·m⁻²·s⁻¹)), and equivalent dielectric breakdown 5–6 kV/mil for a 1-mil coating) but better solubility. However, the copolymer of parylene N and E has equivalent barrier performance of parylene C. Replacement of one hydrogen by methyl on each carbon of the ethyl bridge yields parylene AM-2, (not to be confused with an amine-substituted variant trademarked by Kisco). The solubility of parylene AM-2 is not as good as parylene E. Reactive parylenes While parylene coatings are mostly used to protect an object from water and other chemicals, some applications require a coating that can bind to adhesives or other coated parts, or immobilize various molecules such as dyes, catalysts, or enzymes. 
These "reactive" parylene coatings can be obtained with chemically active substituents. Two commercially available products are parylene A, featuring one amine substituent − in each unit, and parylene AM, with one methylene amine group − per unit. Both are trademarks of Kisco. Parylene AM is more reactive than the A variant. The amine of the latter, being adjacent to the phenyl ring, is in resonance stabilization and therefore less basic. However, parylene A is much easier to synthesize and hence cheaper. Another reactive variant is parylene X, which features an ethinyl group − attached to the phenyl ring in some of the units. This variant, which contains no elements other than hydrogen and carbon, can be cross-linked by heat or with UV light and can react with copper or silver salts to generate the corresponding metalorganic complexes Cu-acetylide or Ag-acetylide. It can also undergo "click chemistry" and can be used as an adhesive, allowing parylene-to-parylene bonding without any by-products during processing. Unlike most other variants, parylene X is amorphous (non-crystalline). Colored parylenes It is possible to attach a chromophore directly to the [2.2]paracyclophane base molecule to impart color to parylene. Parylene-like copolymers Copolymers and nanocomposites (SiO2/parylene C) of parylene have been deposited at near-room temperature previously. With strongly electron withdrawing comonomers, parylene can be used as an initiator to initiate polymerizations, such as with N-phenyl maleimide. Using the parylene C/SiO2 nanocomposites, parylene C could be used as a sacrificial layer to make nanoporous silica thin films with a porosity of >90%. Properties Transparency and crystallinity Parylene thin films and coatings are transparent; however, they are not amorphous except for the alkylated parylenes, e.g. parylene E. As a result of this semi-crystallinity, they scatter light. Parylene N and C have a low degree of crystallinity; however, parylene VT-4 and AF-4 are highly crystalline ~60% in their as-deposited condition (hexagonal crystal structure) and therefore are generally not suitable as optical materials. Parylene C will become more crystalline if heated at elevated temperatures until its melting point at 270 °C. Parylene N has a monoclinic crystal structure in its as-deposited condition and it does not appreciably become more crystalline until it undergoes a crystallographic phase transformation at ~220 °C to hexagonal, at which point it becomes highly crystalline like the fluorinated parylenes. It can reach 80% crystallinity at anneal temperatures up to 400 °C, after which point it degrades. Mechanical and chemical Parylenes are relatively flexible (0.5 GPa for parylene N), except for cross-linked parylene X (1.0 GPa), and have poor oxidative resistance (~60–100 °C, depending on failure criteria) and UV stability, except for parylene AF-4. However, parylene AF-4 is more expensive due to a three-step synthesis of its precursor with low yield and poor deposition efficiency. Their UV stability is so poor that parylene cannot be exposed to regular sunlight without yellowing. Nearly all the parylenes are insoluble at room temperature, except for the alkylated parylenes, one of which is parylene E, and the alkylated-ethynyl parylenes. This lack of solubility has made it difficult to re-work printed circuit boards coated with parylene. Permeability As a moisture diffusion barrier, the efficacy of halogneated parylene coatings scales non-linearly with their density. 
Halogen atoms such as F, Cl and Br add much density to the coating and therefore allow the coating to be a better diffusion barrier; however, if parylenes are used as a diffusion barrier against water then the apolar chemistries such as parylene E are much more effective. For moisture barriers the three principal material parameters to be optimized are: coating density, coating polarity (olefin chemistry is best) and a glass-transition temperature above room temperature and ideally above the service limit of the printed-circuit board, device or part. In this regard parylene E is a best choice although it has a low density compared to, for example, parylene C. Industry specifications Coating process Parylene coatings are generally applied by chemical vapor deposition in an atmosphere of the monomer para-xylylene or a derivative thereof. This method has one very strong benefit, namely it does not generate any byproducts besides the parylene polymer, which would need to be removed from the reaction chamber and could interfere with the polymerization. Parts to be coated need to be clean in order to ensure good adherence of the film. Since the monomer diffuses, areas that are not to be coated must be hermetically sealed, without gaps, crevices or other openings. The part must be maintained in a relatively narrow window of pressure and temperature. The process involves three steps: generation of the gaseous monomer, adsorption on the part's surface, and polymerization of adsorbed film. Polymerization Polymerization of the adsorbed p-xylylene monomer requires a minimum threshold temperature. For parylene N, its threshold temperature is 40 °C. The p-xylylene intermediate has two quantum mechanical states, the benzoid state (triplet state) and the quinoid state (singlet state). The triplet state is effectively the initiator and the singlet state is effectively the monomer. The triplet state can be de-activated when in contact with transition metals or metal oxides including Cu/CuOx. Many of the parylenes exhibit this selectivity based on quantum mechanical deactivation of the triplet state, including parylene X. Polymerization may proceed by a variety of routes that differ in the transient termination of the growing chains, such as a radical group − or a negative anion group : Physisorption The monomer polymerizes only after it is physically adsorbed (physisorbed) on the part's surface. This process has inverse Arrhenius kinetics, meaning that it is stronger at lower temperatures than higher temperatures. There is critical threshold temperature above which there is practically no physisorption, and hence no deposition. The closer the deposition temperature is to the threshold temperature the weaker the physisorption. Parylene C has a higher threshold temperature, 90 °C, and therefore has a much higher deposition rate, greater than 1 nm/s, while still yielding fairly uniform coatings. In contrast, the threshold temperature of parylene AF-4 is very close to room temperature (30–35 °C), as a result, its deposition efficiency is poor. An important property of the monomer is the so-called 'sticking coefficient', that expresses the degree to which it adsorbs on the polymer. A lower coefficient results more uniform deposition thickness and a more conformal coating. Another relevant property for the deposition process is polarizability, which determines how strongly the monomer interacts with the surface. Deposition of halogenated parylenes strongly correlates with molecular weight of the monomer. 
The fluorinated variants are an exception: the polarizability of parylene AF-4 is low, resulting in inefficient deposition. Monomer generation From the cyclic dimer The p-xylylene monomer is normally generated during the coating process by evaporating the cyclic dimer [2.2]para-cyclophane at a relatively low temperature, then decomposing the vapor at 450–700 °C and a pressure of 0.01–1.0 Torr. This method (Gorham process) yields 100% monomer with no by-products or decomposition of the monomer. The dimer can be synthesized from p-xylene in several steps involving bromination, amination and Hofmann elimination. The same method can be used to deposit substituted parylenes. For example, parylene C can be obtained from the dimeric precursor dichloro[2.2]para-cyclophane, except that the temperature must be carefully controlled since the chlorine-aryl bond breaks at 680 °C. The standard Gorham process is shown above for parylene AF-4. The octafluoro[2.2]para-cyclophane precursor dimer can be sublimed below 100 °C and cracked at 700–750 °C, higher than the temperature (680 °C) used to crack the unsubstituted cyclophane since the −CF2−CF2− bond is stronger than the −CH2−CH2− bond. This resonance-stabilized intermediate is transported to a room-temperature deposition chamber where polymerization occurs under low pressure (1–100 mTorr) conditions. From substituted p-xylenes Another route to generation of the monomer is to use a para-xylene precursor with a suitable substituent on each methyl group, whose elimination generates para-xylylene. Selection of a leaving group may consider its toxicity (which excludes sulfur and amine-based reactions), how easily it leaves the precursor, and possible interference with the polymerization. The leaving group can either be trapped before the deposition chamber, or it can be highly volatile so that it does not condense in the latter. For example, the precursor α,α'-dibromo-α,α,α',α'-tetrafluoro-para-xylene yields parylene AF-4 with elimination of bromine. The advantage of this process is the low cost of synthesis for the precursor. The precursor is also a liquid and can be delivered by standard methods developed in the semiconductor industry, such as with a vaporizer, a vaporizer with a bubbler, or a mass-flow controller. Originally the precursor was just thermally cracked, but suitable catalysts lower the pyrolysis temperature, resulting in less char residue and a better coating. By either method an atomic bromine free radical is given off from each methyl end, which can be converted to hydrogen bromide and removed from the monomer flow. Special precautions are needed since bromine and HBr are toxic and corrosive towards most metals and metal alloys, and bromine can damage Viton O-rings. A similar synthesis for parylene N uses the precursor α,α'-dimethoxy-p-xylene. The methoxy group −OCH3 is the leaving group; while it condenses in the deposition chamber, it does not interfere with the deposition of the polymer. This precursor is much less expensive than [2.2]para-cyclophane. Moreover, being a liquid just above room temperature, this precursor can be delivered reliably using a mass-flow controller, whereas the generation and delivery of the gaseous monomer in the Gorham process are difficult to measure and control. The same chemistry can generate parylene AM-2 from the precursor α,α'-dimethyl-α,α'-dimethoxy-p-xylene. Another example of this approach is the synthesis of parylene AF-4 from α,α'-diphenoxy-α,α,α',α'-tetrafluoro-para-xylene. 
In this case, the leaving group is phenoxy −, which can be condensed before the deposition chamber. Characteristics and advantages Parylenes may confer several desirable qualities to the coated parts. Among other properties, they are Hydrophobic, chemically resistant, and mostly impermeable to gases (including water vapor) and inorganic and organic liquids (including strong acids and bases). Good electrical insulator with a low dielectric constant (average in-plane and out-of-plane: 2.67 parylene N and 2.5 parylene AF-4, SF, HT) Stable and accepted in biological tissues, having been approved by the US FDA for various medical applications. Dense and pinhole free, for thickness above 1.4 nm Homogeneous and uniformly thick, even within cavities. Stable to oxidation up to 350 °C (AF-4, SF, HT) Low coefficient of friction (AF-4, HT, SF) Since the coating process takes place at ambient temperature in a mild vacuum, it can be applied even to temperature-sensitive objects such as dry biological specimens. The low temperature also results in low intrinsic stress in the thin film. Moreover, the only gas in the deposition chamber is the monomer, without any solvents, catalysts, or byproducts that could attack the object. Parylene AF-4 and VT-4 are both fluorinated and as a result very expensive compared to parylene N and C, which has severely limited their commercial use, except for niche applications. Applications Parylene C and to a lesser extent AF-4, SF, HT (all the same polymer) are used for coating printed circuit boards (PCBs) and medical devices. There are numerous other applications as parylene is an excellent moisture barrier. It is the most bio-accepted coating for stents, defibrillators, pacemakers and other devices permanently implanted into the body. Molecular layers The classic molecular layer chemistries are self-assembled monolayers (SAMs). SAMs are long-chain alkyl chains, which interact with surfaces based on sulfur-metal interaction (alkylthiolates) or a sol-gel type reaction with a hydroxylated oxide surface (trichlorosilyl alkyls or trialkoxy alkyls). However, unless the gold or oxide surface is carefully treated and the alkyl chain is long, these SAMs form disordered monolayers, which do not pack well. This lack of packing causes issues in, for example, stiction in MEMS devices. The observation that parylenes could form ordered molecular layers (MLs) came with contact angle measurements, where MLs thicker than 10 Å had an equilibrium contact angle of 80 degrees (same as bulk parylene N) but those thinner had a reduced contact angle. This was also confirmed with electrical measurements (bias-temperature stress measurements) using metal-insulator-semiconductor capacitors (MISCAPs). In short, parylene N and AF-4 (those parylenes with no functional groups) are pin-hole free at ~14 Å. This results because the parylene repeat units possess a phenyl ring and due to the high electronic polarizability of the phenyl ring adjacent repeat units order themselves in the XY-plane. As a result of this interaction parylene MLs are surface independent, except for transition metals, which de-activate the triplet (benzoid) state and therefore the parylenes cannot be initiated. This finding of parylenes as molecular layers is very powerful for industrial applications because of the robustness of the process and that the MLs are deposited at room temperature. In this way parylenes can be used as diffusion barriers and for reducing the polarizability of surface (de-activation of oxide surfaces). 
Combining the properties of the reactive parylenes with the observation that they can form dense pin-hole-free molecular layers, parylene X has been utilized as a genome sequencing interface layer. One caveat with the molecular-layer parylenes is that they are deposited as oligomers rather than as high polymer. As a result, a vacuum anneal is needed to convert the oligomers to high polymer. For parylene N that temperature is 250 °C, whereas it is 300 °C for parylene AF-4. Typical applications Parylene films have been used in various applications, including Hydrophobic coating (moisture barriers, e.g., for biomedical hoses) Barrier layers (e.g., for filters, diaphragms, valves) Microwave electronics (e.g., protection of PTFE dielectric substrates from oil contamination) Implantable medical devices Sensors in rough environments (e.g., automotive fuel/air sensors) Electronics for space travel and defense Corrosion protection for metallic surfaces Reinforcement of micro-structures Protection of plastic, rubber, etc., from harmful environmental conditions Reduction of friction, e.g., for guiding catheters, acupuncture needles and microelectromechanical systems. See also Conformal coating References Polymers
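As a small worked check of the unit conversions quoted earlier in this article (tensile modulus in kpsi versus GPa, dielectric strength in kV/mil), the following Python snippet performs the arithmetic. The conversion factors are standard; the material values are those quoted in the text above.

```python
# Quick arithmetic check of the unit conversions quoted earlier in the article.

PSI_TO_PA = 6894.757          # 1 psi in pascal
MIL_TO_UM = 25.4              # 1 mil (0.001 inch) in micrometres

for kpsi in (175, 460):
    print(f"{kpsi} kpsi = {kpsi * 1000 * PSI_TO_PA / 1e9:.2f} GPa")
# -> 175 kpsi ~ 1.21 GPa and 460 kpsi ~ 3.17 GPa, matching the figures above

for kv_per_mil in (5, 6):
    print(f"{kv_per_mil} kV/mil = {kv_per_mil * 1000 / MIL_TO_UM:.0f} V/um")
# -> the quoted 5-6 kV/mil breakdown corresponds to roughly 197-236 V/um
```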
Parylene
[ "Chemistry", "Materials_science" ]
5,235
[ "Polymers", "Polymer chemistry" ]
4,941,216
https://en.wikipedia.org/wiki/Protein%20pKa%20calculations
{{DISPLAYTITLE:Protein pKa calculations}} In computational biology, protein pKa calculations are used to estimate the pKa values of amino acids as they exist within proteins. These calculations complement the pKa values reported for amino acids in their free state, and are used frequently within the fields of molecular modeling, structural bioinformatics, and computational biology. Amino acid pKa values pKa values of amino acid side chains play an important role in defining the pH-dependent characteristics of a protein. The pH-dependence of the activity displayed by enzymes and the pH-dependence of protein stability, for example, are properties that are determined by the pKa values of amino acid side chains. The pKa values of an amino acid side chain in solution is typically inferred from the pKa values of model compounds (compounds that are similar to the side chains of amino acids). See Amino acid for the pKa values of all amino acid side chains inferred in such a way. There are also numerous experimental studies that have yielded such values, for example by use of NMR spectroscopy. The table below lists the model pKa values that are often used in a protein pKa calculation, and contains a third column based on protein studies. The effect of the protein environment When a protein folds, the titratable amino acids in the protein are transferred from a solution-like environment to an environment determined by the 3-dimensional structure of the protein. For example, in an unfolded protein, an aspartic acid typically is in an environment which exposes the titratable side chain to water. When the protein folds, the aspartic acid could find itself buried deep in the protein interior with no exposure to solvent. Furthermore, in the folded protein, the aspartic acid will be closer to other titratable groups in the protein and will also interact with permanent charges (e.g. ions) and dipoles in the protein. All of these effects alter the pKa value of the amino acid side chain, and pKa calculation methods generally calculate the effect of the protein environment on the model pKa value of an amino acid side chain. Typically, the effects of the protein environment on the amino acid pKa value are divided into pH-independent effects and pH-dependent effects. The pH-independent effects (desolvation, interactions with permanent charges and dipoles) are added to the model pKa value to give the intrinsic pKa value. The pH-dependent effects cannot be added in the same straightforward way and have to be accounted for using Boltzmann summation, Tanford–Roxby iterations or other methods. The interplay of the intrinsic pKa values of a system with the electrostatic interaction energies between titratable groups can produce quite spectacular effects such as non-Henderson–Hasselbalch titration curves and even back-titration effects. The image on the right shows a theoretical system consisting of three acidic residues. One group is displaying a back-titration event (blue group). pKa calculation methods Several software packages and webserver are available for the calculation of protein pKa values. Using the Poisson–Boltzmann equation Some methods are based on solutions to the Poisson–Boltzmann equation (PBE), often referred to as FDPB-based methods (FDPB stands for "finite difference Poisson–Boltzmann"). The PBE is a modification of Poisson's equation that incorporates a description of the effect of solvent ions on the electrostatic field around a molecule. 
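As a concrete illustration of the Boltzmann-summation approach mentioned above (and not of any particular program discussed in this article), the following Python sketch enumerates every protonation microstate of a few coupled acidic sites and averages over them. The intrinsic pKa values and interaction energies are invented, and the sign and unit conventions are one common choice.

```python
from itertools import product
import math

LN10 = math.log(10.0)

def titration_curves(intrinsic_pkas, w_ph_units, ph_values):
    """Protonated fraction of each acidic site as a function of pH.

    intrinsic_pkas : intrinsic pKa of each site (model pKa plus pH-independent shifts)
    w_ph_units     : symmetric matrix of unfavorable interactions, in pH units
                     (multiples of kT*ln10), between pairs of *deprotonated* sites
    """
    n = len(intrinsic_pkas)
    states = list(product((0, 1), repeat=n))       # 1 = deprotonated (charged)
    curves = [[] for _ in range(n)]
    for ph in ph_values:
        weights = []
        for s in states:
            # microstate energy in units of kT*ln10, relative to the fully protonated state
            e = sum(s[i] * (intrinsic_pkas[i] - ph) for i in range(n))
            e += sum(w_ph_units[i][j] * s[i] * s[j]
                     for i in range(n) for j in range(i + 1, n))
            weights.append(math.exp(-LN10 * e))
        z = sum(weights)
        for i in range(n):
            curves[i].append(sum(w for w, s in zip(weights, states) if s[i] == 0) / z)
    return curves

ph_values = [x / 10 for x in range(0, 141)]
curves = titration_curves(
    intrinsic_pkas=[4.0, 4.5, 5.5],
    w_ph_units=[[0.0, 2.0, 0.5],
                [2.0, 0.0, 0.5],
                [0.5, 0.5, 0.0]],
    ph_values=ph_values,
)
# With strong coupling, the resulting curves deviate from simple
# Henderson-Hasselbalch sigmoids (cf. the three-acid example described above).
```

For a single, non-interacting site this reduces exactly to the Henderson–Hasselbalch curve, which is a useful sanity check of the convention used.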
The H++ web server, the pKD webserver, MCCE2, Karlsberg+, PETIT and GMCT use the FDPB method to compute pKa values of amino acid side chains. FDPB-based methods calculate the change in the pKa value of an amino acid side chain when that side chain is moved from a hypothetical fully solvated state to its position in the protein. To perform such a calculation, one needs theoretical methods that can calculate the effect of the protein interior on a pKa value, and knowledge of the pKa values of amino acid side chains in their fully solvated states. Empirical methods A set of empirical rules relating the protein structure to the pKa values of ionizable residues have been developed by Li, Robertson, and Jensen. These rules form the basis for the web-accessible program called PROPKA for rapid predictions of pKa values. A recent empirical pKa prediction program was released by Tan KP et.al. with the online server DEPTH web server. Molecular dynamics (MD)-based methods Molecular dynamics methods of calculating pKa values make it possible to include full flexibility of the titrated molecule. Molecular dynamics based methods are typically much more computationally expensive, and not necessarily more accurate, ways to predict pKa values than approaches based on the Poisson–Boltzmann equation. Limited conformational flexibility can also be realized within a continuum electrostatics approach, e.g., for considering multiple amino acid sidechain rotamers. In addition, current commonly used molecular force fields do not take electronic polarizability into account, which could be an important property in determining protonation energies. Determining pKa values from titration curves or free energy calculations From the titration of protonatable group, one can read the so-called pKa which is equal to the pH value where the group is half-protonated (i.e. when 50% such groups would be protonated). The pKa is equal to the Henderson–Hasselbalch pKa (pK) if the titration curve follows the Henderson–Hasselbalch equation. Most pKa calculation methods silently assume that all titration curves are Henderson–Hasselbalch shaped, and pKa values in pKa calculation programs are therefore often determined in this way. In the general case of multiple interacting protonatable sites, the pKa value is not thermodynamically meaningful. In contrast, the Henderson–Hasselbalch pKa value can be computed from the protonation free energy via and is thus in turn related to the protonation free energy of the site via The protonation free energy can in principle be computed from the protonation probability of the group (pH) which can be read from its titration curve Titration curves can be computed within a continuum electrostatics approach with formally exact but more elaborate analytical or Monte Carlo (MC) methods, or inexact but fast approximate methods. MC methods that have been used to compute titration curves are Metropolis MC or Wang–Landau MC. Approximate methods that use a mean-field approach for computing titration curves are the Tanford–Roxby method and hybrids of this method that combine an exact statistical mechanics treatment within clusters of strongly interacting sites with a mean-field treatment of intercluster interactions. In practice, it can be difficult to obtain statistically converged and accurate protonation free energies from titration curves if is close to a value of 1 or 0. 
In this case, one can use various free energy calculation methods to obtain the protonation free energy such as biased Metropolis MC, free-energy perturbation, thermodynamic integration, the non-equilibrium work method or the Bennett acceptance ratio method. Note that the pK value does in general depend on the pH value. This dependence is small for weakly interacting groups like well solvated amino acid side chains on the protein surface, but can be large for strongly interacting groups like those buried in enzyme active sites or integral membrane proteins. While many protein pKa prediction methods are available, their accuracies often differ significantly due to subtle and often drastic differences in strategy. References External links AccelrysPKA — Accelrys CHARMm based pKa calculation H++ — Poisson–Boltzmann based pKa calculations MCCE2 — Multi-Conformation Continuum Electrostatics (Version 2) Karlsberg+ — pKa computation with multiple pH adapted conformations PETIT — Proton and Electron TITration GMCT — Generalized Monte Carlo Titration DEPTH web server — Empirical calculation of pKa values using Residue Depth as a major feature Protein methods Equilibrium chemistry
Protein pKa calculations
[ "Chemistry", "Biology" ]
1,687
[ "Equilibrium chemistry", "Protein methods", "Protein biochemistry", "Biochemistry methods" ]
4,941,851
https://en.wikipedia.org/wiki/Schedule
A schedule (, ) or a timetable, as a basic time-management tool, consists of a list of times at which possible tasks, events, or actions are intended to take place, or of a sequence of events in the chronological order in which such things are intended to take place. The process of creating a schedule — deciding how to order these tasks and how to commit resources between the variety of possible tasks — is called scheduling, and a person responsible for making a particular schedule may be called a scheduler. Making and following schedules is an ancient human activity. Some scenarios associate this kind of planning with learning life skills. Schedules are necessary, or at least useful, in situations where individuals need to know what time they must be at a specific location to receive a specific service, and where people need to accomplish a set of goals within a set time. Schedules can usefully span both short periods, such as a daily or weekly schedule, and long-term planning for periods of several months or years. They are often made using a calendar, where the person making the schedule can note the dates and times at which various events are planned to occur. Schedules that do not set forth specific times for events to occur may instead list algorithmically an expected order in which events either can or must take place. In some situations, schedules can be uncertain, such as where the conduct of daily life relies on environmental factors outside human control. People who are vacationing or otherwise seeking to reduce stress and achieve relaxation may intentionally avoid having a schedule for a certain period of time. Kinds of schedules Publicly available schedules Certain kinds of schedules reflect information that is generally made available to the public, so that members of the public can plan certain activities around them. These may include things like: Hours of operation of businesses, tourist attractions, and government offices, which allow consumers of these services to know when they can obtain them. Transportation schedules, such as airline timetables, train schedules, bus schedules, and various public transport timetables are published to allow commuters to plan their travels. From the perspective of the organization responsible for making transportation available, schedules must provide for the possibility of schedule delay, a term in transport modeling which refers to a difference between a desired time of arrival or departure and the actual time. Despite the use of "delay", it can refer to a difference in either the early or late direction. In broadcast programming, the minute planning of the content of a radio or television broadcast channel, the result of that activity is the generation of a list of shows to be broadcast at regular times or at specific times, which is then distributed to the public so that the potential audience for the show will know when it will be available to them. Concerts and sporting events are typically scheduled so that fans can plan to buy tickets and attend the events. Internal schedules An internal schedule is a schedule that is only of importance to the people who must directly abide by it. It has been noted that "groups often begin with a schedule imposed from the outside, but effective groups also develop an internal schedule that sets goals for the completion of micro-tasks". Unlike schedules for public events or publicly available amenities, there is no need to go to the time and effort of publicizing the internal schedule. 
To the contrary, an internal schedule may be kept confidential as a matter of security or propriety. An example of an internal schedule is a workplace schedule, which lists the hours that specific employees are expected to be in the workplace, ensuring sufficient staffing at all times while in some instances avoiding overstaffing. A work schedule for a business that is open to the public must correspond to the hours of operation of the business, so that employees are available at times when customers are able to use the services of the business. One common method of scheduling employees to ensure the availability of appropriate resources is a Gantt chart. Another example of an internal schedule is the class schedule of an individual student, indicating what days and times their classes will be held. Project management scheduling A schedule may also involve the completion of a project with which the public has no interaction prior to its completion. In project management, a formal schedule will often be created as an initial step in carrying out a specific project, such as the construction of a building, development of a product, or launch of a program. Establishing a project management schedule involves listing milestones, activities, and deliverables with intended start and finish dates, of which the scheduling of employees may be an element. A production process schedule is used for planning the production or the operation, while a resource schedule aids in the logistical planning for sharing resources among several entities. In such cases, a schedule "is obtained by estimating the duration of each task and noting any dependencies amongst those tasks". Dependencies, in turn, are tasks that must be completed in order to make other tasks possible, such as renting a truck before loading materials on the truck (since nothing can be loaded until the truck is available). Scheduling of projects, therefore, requires the identification of all of the tasks necessary to complete the project, and the earliest time at which each task can be completed. In creating a schedule, a certain amount of time is usually set aside as a contingency against unforeseen delays. This time is called scheduling variance, or float, and is a core concept for the critical path method. In computing Scheduling is important as an internal process in computer science, wherein a database transaction schedule is a list of actions from a set of transactions in databases, and scheduling is the way various processes are assigned in computer multitasking and multiprocessing operating system design. This kind of scheduling is incorporated into the computer program, and the user may be completely unaware of what tasks are being carried out and when. Scheduling operations and issues in computing may include: The operation of a network scheduler or packet scheduler, an arbiter program that manages the movement of certain pieces of information in the computer. Open-shop scheduling, job shop scheduling, and flow shop scheduling, optimization problems in computer science. I/O scheduling, the order in which I/O requests are submitted to a block device in operating systems. Job scheduler, an enterprise software application in charge of unattended background executions. In wireless communications Wireless networks should have a flexible service architecture to integrate different types of services on a single air-interface because terminals have different service requirements. 
On top of the flexible service architecture, effective quality of service (QoS) management schemes are also needed. Therefore, wireless resources need to be shared among all terminals carefully and it is desirable to schedule the usage of wireless resources as efficiently as possible, while maximizing the overall network performance. In operations research The scheduling of resources, usually subject to constraints, is the subject of several problems that are in the area of research known as operations research, usually in terms of finding an optimal solution or method for solving. For example, the nurse scheduling problem is concerned with scheduling a number of employees with typical constraints such as rotation of shifts, limits on overtime, etc. The travelling salesman problem is concerned with scheduling a series of journeys to minimize time or distance. Some of these problems may be solved efficiently with linear programming, but many scheduling problems require integer variables. Although efficient algorithms exist to give integer solutions in some situations (see network flow models), most problems that require integer solutions cannot yet be solved efficiently. In transportation planning Scheduling is useful in transportation planning. The important components of transportation improvement proposals include (a) comprehensive evaluations of the scope of work to be completed, (b) reasonably accurate cost estimates for finishing the task, and (c) a feasible project schedule. If any of these factors are not accurately defined, then there is a strong possibility of unexpected difficulties. Poor scoping and/or scheduling may result in serious budget problems, delays and cancellations of transportation improvements, and sometimes even a domino effect that can negatively impact the entire area's transportation planning. In education In an educational institution, a timetable must be established that refers students and teachers to classrooms each hour. The challenge of constructing this schedule for larger institutions was addressed by Gunther Schmidt and Thomas Ströhlein in 1976. They formalized the timetable construction problem, and indicated an iterative process using logical matrices and hypergraphs to obtain a solution. See also Automated planning and scheduling Calendaring software Employee scheduling software Notation for theoretic scheduling problems Timeblocking References Time management Scheduling (computing)
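The following Python sketch illustrates the dependency, float, and critical-path arithmetic described above on an invented task list; it is a toy forward/backward pass, not a full critical path method implementation.

```python
# Toy critical-path arithmetic: earliest/latest finish times and float for a
# small set of tasks with dependencies. Task names and durations are invented.

tasks = {                          # task: (duration_in_days, prerequisites)
    "rent_truck":   (1, []),
    "buy_material": (2, []),
    "load_truck":   (1, ["rent_truck", "buy_material"]),
    "deliver":      (1, ["load_truck"]),
    "paperwork":    (2, ["rent_truck"]),
}

def critical_path_report(tasks):
    earliest_finish = {}

    def ef(name):                  # earliest finish = duration + longest prerequisite chain
        if name not in earliest_finish:
            dur, deps = tasks[name]
            earliest_finish[name] = dur + max((ef(d) for d in deps), default=0)
        return earliest_finish[name]

    project_end = max(ef(t) for t in tasks)

    # Backward pass in reverse topological order (decreasing earliest finish is a
    # valid reverse topological order here because all durations are positive).
    latest_finish = {t: project_end for t in tasks}
    for t in sorted(tasks, key=lambda x: earliest_finish[x], reverse=True):
        dur, deps = tasks[t]
        for d in deps:
            latest_finish[d] = min(latest_finish[d], latest_finish[t] - dur)

    for t in tasks:
        dur, _ = tasks[t]
        total_float = latest_finish[t] - earliest_finish[t]
        tag = "on the critical path" if total_float == 0 else f"float = {total_float} day(s)"
        print(f"{t:12s} earliest start {earliest_finish[t] - dur}, "
              f"latest finish {latest_finish[t]}: {tag}")

critical_path_report(tasks)
```

Tasks with zero float lie on the critical path; delaying any of them delays the whole project, whereas the others can slip by their float without affecting the end date.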
Schedule
[ "Physics" ]
1,719
[ "Spacetime", "Physical quantities", "Time", "Time management" ]
4,942,384
https://en.wikipedia.org/wiki/Conjugacy%20problem
In abstract algebra, the conjugacy problem for a group G with a given presentation is the decision problem of determining, given two words x and y in G, whether or not they represent conjugate elements of G. That is, the problem is to determine whether there exists an element z of G such that The conjugacy problem is also known as the transformation problem. The conjugacy problem was identified by Max Dehn in 1911 as one of the fundamental decision problems in group theory; the other two being the word problem and the isomorphism problem. The conjugacy problem contains the word problem as a special case: if x and y are words, deciding if they are the same word is equivalent to deciding if is the identity, which is the same as deciding if it's conjugate to the identity. In 1912 Dehn gave an algorithm that solves both the word and conjugacy problem for the fundamental groups of closed orientable two-dimensional manifolds of genus greater than or equal to 2 (the genus 0 and genus 1 cases being trivial). It is known that the conjugacy problem is undecidable for many classes of groups. Classes of group presentations for which it is known to be solvable include: free groups (no defining relators) one-relator groups with torsion braid groups knot groups finitely presented conjugacy separable groups finitely generated abelian groups (relators include all commutators) Gromov-hyperbolic groups biautomatic groups CAT(0) groups Fundamental groups of geometrizable 3-manifolds References Group theory Undecidable problems
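For the free-group case listed above, the classical solution is to cyclically reduce both words and test whether one is a cyclic rotation of the other. The following Python sketch implements that test, writing inverses as uppercase letters (an encoding chosen here purely for convenience).

```python
# Conjugacy test in a free group: two words are conjugate exactly when their
# cyclic reductions are cyclic rotations of one another.
# Encoding: lowercase = generator, uppercase = its inverse, e.g. "aB" = a b^-1.

def free_reduce(word: str) -> str:
    out = []
    for ch in word:
        if out and out[-1] == ch.swapcase():
            out.pop()                            # cancel adjacent x x^-1
        else:
            out.append(ch)
    return "".join(out)

def cyclic_reduce(word: str) -> str:
    w = free_reduce(word)
    while len(w) >= 2 and w[0] == w[-1].swapcase():
        w = free_reduce(w[1:-1])                 # strip inverse first/last letters
    return w

def conjugate_in_free_group(x: str, y: str) -> bool:
    a, b = cyclic_reduce(x), cyclic_reduce(y)
    if len(a) != len(b):
        return False
    return (a in b + b) if a else True           # rotation test via doubling

assert conjugate_in_free_group("ab", "ba")       # ba = a^-1 (ab) a
assert not conjugate_in_free_group("ab", "aB")
assert conjugate_in_free_group("aBAb", "AbaB")
```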
Conjugacy problem
[ "Mathematics" ]
345
[ "Computational problems", "Group theory", "Fields of abstract algebra", "Undecidable problems", "Mathematical problems" ]
6,476,735
https://en.wikipedia.org/wiki/Zinc-finger%20nuclease
Zinc-finger nucleases (ZFNs) are artificial restriction enzymes generated by fusing a zinc finger DNA-binding domain to a DNA-cleavage domain. Zinc finger domains can be engineered to target specific desired DNA sequences and this enables zinc-finger nucleases to target unique sequences within complex genomes. By taking advantage of endogenous DNA repair machinery, these reagents can be used to precisely alter the genomes of higher organisms. Alongside CRISPR/Cas9 and TALEN, ZFN is a prominent tool in the field of genome editing. It was initially created by researcher Srinivasan Chandrasegaran. Domains DNA-binding domain The DNA-binding domains of individual ZFNs typically contain between three and six individual zinc finger repeats and can each recognize between 9 and 18 basepairs. If the zinc finger domains perfectly recognize a 3 basepair DNA sequence, they can generate a 3-finger array that can recognize a 9 basepair target site. Other procedures can utilize either 1-finger or 2-finger modules to generate zinc-finger arrays with six or more individual zinc fingers. The main drawback with this procedure is the specificities of individual zinc fingers can overlap and can depend on the context of the surrounding zinc fingers and DNA. Without methods to account for this "context dependence", the standard modular assembly procedure often fails. Numerous selection methods have been used to generate zinc-finger arrays capable of targeting desired sequences. Initial selection efforts utilized phage display to select proteins that bound a given DNA target from a large pool of partially randomized zinc-finger arrays. More recent efforts have utilized yeast one-hybrid systems, bacterial one-hybrid and two-hybrid systems, and mammalian cells. A promising new method to select novel zinc-finger arrays utilizes a bacterial two-hybrid system and has been dubbed "OPEN" by its creators. This system combines pre-selected pools of individual zinc fingers that were each selected to bind a given triplet and then utilizes a second round of selection to obtain 3-finger arrays capable of binding a desired 9-bp sequence. This system was developed by the Zinc-Finger Consortium as an alternative to commercial sources of engineered zinc-finger arrays. (see: Zinc finger chimera for more info on zinc finger selection techniques) DNA-cleavage domain The non-specific cleavage domain from the type IIs restriction endonuclease FokI is typically used as the cleavage domain in ZFNs. This cleavage domain must dimerize in order to cleave DNA and thus a pair of ZFNs are required to target non-palindromic DNA sites. Standard ZFNs fuse the cleavage domain to the C-terminus of each zinc finger domain. To let the two cleavage domains dimerize and cleave DNA, the two individual ZFNs must bind opposite strands of DNA with their C-termini a certain distance apart. The most commonly used linker sequences between the zinc finger domain and the cleavage domain requires the 5′ edge of each binding site to be separated by 5 to 7 bp. Several different protein engineering techniques have been employed to improve both the activity and specificity of the nuclease domain used in ZFNs. Directed evolution has been employed to generate a FokI variant with enhanced cleavage activity that the authors dubbed "Sharkey". Structure-based design has also been employed to improve the cleavage specificity of FokI by modifying the dimerization interface so that only the intended heterodimeric species are active. 
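The following Python sketch illustrates the targeting arithmetic described above in a deliberately simplified model: one zinc-finger array binds a half-site on the top strand, its partner binds the opposite strand, and FokI dimerizes in a 5–7 bp spacer between the two half-sites. All sequences below are invented.

```python
# Simplified scan for paired ZFN target sites: left half-site on the top strand,
# right half-site given 5'->3' on the bottom strand (so its reverse complement is
# searched downstream on the top strand), separated by a 5-7 bp spacer.

COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def find_zfn_pair_sites(genome: str, left_half: str, right_half: str,
                        min_spacer: int = 5, max_spacer: int = 7):
    """Return a list of (start_of_left_half, spacer_length) matches."""
    right_on_top = reverse_complement(right_half)
    hits = []
    i = genome.find(left_half)
    while i != -1:
        for spacer in range(min_spacer, max_spacer + 1):
            j = i + len(left_half) + spacer
            if genome[j:j + len(right_on_top)] == right_on_top:
                hits.append((i, spacer))
        i = genome.find(left_half, i + 1)
    return hits

genome = "TTAGCCGGCATGATCGGAAATGCCGGCTT"
print(find_zfn_pair_sites(genome, left_half="GCCGGCATG", right_half="GCCGGCATT"))
# -> [(3, 6)]: one candidate site with a 6 bp spacer between the half-sites
```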
Applications Zinc finger nucleases are useful for manipulating the genomes of many plants and animals including arabidopsis, tobacco, soybean, corn, Drosophila melanogaster, C. elegans, Platynereis dumerilii, sea urchin, silkworm, zebrafish, frogs, mice, rats, rabbits, pigs, cattle, and various types of mammalian cells. Zinc finger nucleases have also been used in a mouse model of haemophilia, and a clinical trial found CD4+ human T-cells with the CCR5 gene disrupted by zinc finger nucleases to be safe as a potential treatment for HIV/AIDS. ZFNs are also used to create a new generation of genetic disease models called isogenic human disease models. Disabling an allele ZFNs can be used to disable dominant mutations in heterozygous individuals by producing double-strand breaks (DSBs) in the DNA (see Genetic recombination) in the mutant allele, which will, in the absence of a homologous template, be repaired by non-homologous end-joining (NHEJ). NHEJ repairs DSBs by joining the two ends together and usually produces no mutations, provided that the cut is clean and uncomplicated. In some instances, however, the repair is imperfect, resulting in deletion or insertion of base-pairs, producing a frameshift and preventing the production of the harmful protein. Multiple pairs of ZFNs can also be used to completely remove entire large segments of genomic sequence. To monitor the editing activity, a PCR of the target area amplifies both alleles and, if one contains an insertion, deletion, or mutation, it results in a heteroduplex single-strand bubble that cleavage assays can easily detect. ZFNs have also been used to modify disease-causing alleles in triplet repeat disorders. Expanded CAG/CTG repeat tracts are the genetic basis for more than a dozen inherited neurological disorders including Huntington's disease, myotonic dystrophy, and several spinocerebellar ataxias. It has been demonstrated in human cells that ZFNs can direct double-strand breaks (DSBs) to CAG repeats and shrink the repeat from long pathological lengths to short, less toxic lengths. Recently, a group of researchers have successfully applied the ZFN technology to genetically modify the gol pigment gene and the ntl gene in zebrafish embryos. Specific zinc-finger motifs were engineered to recognize distinct DNA sequences. The ZFN-encoding mRNA was injected into one-cell embryos and a high percentage of animals carried the desired mutations and phenotypes. Their research work demonstrated that ZFNs can specifically and efficiently create heritable mutant alleles at loci of interest in the germ line, and ZFN-induced alleles can be propagated in subsequent generations. Similar research using ZFNs to create specific mutations in zebrafish embryos has also been carried out by other research groups. The kdr gene in zebrafish encodes the vascular endothelial growth factor-2 receptor. Mutagenic lesions at this target site were induced using the ZFN technique by a group of researchers in the US. They suggested that the ZFN technique allows straightforward generation of a targeted allelic series of mutants; it does not rely on the existence of species-specific embryonic stem cell lines and is applicable to other vertebrates, especially those whose embryos are easily available; finally, it is also feasible to achieve targeted knock-ins in zebrafish, making it possible to create human disease models that were heretofore inaccessible.
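As a small illustration of the disruption strategy described above, the sketch below classifies the outcome of an NHEJ-repaired cut from the net indel length: an indel whose length is not a multiple of 3 shifts the reading frame. The categories and example values are assumptions for illustration only.

```python
# Minimal sketch: classify a coding-region NHEJ repair outcome by net indel
# length, as in the allele-disruption strategy described above. The example
# indel sizes are illustrative, not experimental data.

def classify_indel(net_indel_bp: int) -> str:
    """Return a coarse description of a coding-region indel."""
    if net_indel_bp == 0:
        return "perfect repair (no mutation)"
    if net_indel_bp % 3 == 0:
        return "in-frame insertion/deletion (protein may retain function)"
    return "frameshift (downstream codons scrambled, likely knockout)"

for indel in (0, -7, +3, +1):
    print(f"{indel:+d} bp: {classify_indel(indel)}")
```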
Allele editing ZFNs are also used to rewrite the sequence of an allele by invoking the homologous recombination (HR) machinery to repair the DSB using the supplied DNA fragment as a template. The HR machinery searches for homology between the damaged chromosome and the extra-chromosomal fragment and copies the sequence of the fragment between the two broken ends of the chromosome, regardless of whether the fragment contains the original sequence. If the subject is heterozygous for the target allele, the efficiency of the technique is reduced since the undamaged copy of the allele may be used as a template for repair instead of the supplied fragment. Gene therapy The success of gene therapy depends on the efficient insertion of therapeutic genes at an appropriate chromosomal target site within the human genome, without causing cell injury, oncogenic mutations, or an immune response. The construction of plasmid vectors is simple and straightforward. Custom-designed ZFNs that combine the non-specific cleavage domain (N) of FokI endonuclease with zinc-finger proteins (ZFPs) offer a general way to deliver a site-specific DSB to the genome, and stimulate local homologous recombination by several orders of magnitude. This makes targeted gene correction or genome editing a viable option in human cells. Since ZFN-encoding plasmids could be used to transiently express ZFNs to target a DSB to a specific gene locus in human cells, they offer an excellent way for targeted delivery of the therapeutic genes to a pre-selected chromosomal site. The ZFN-encoding plasmid-based approach has the potential to circumvent all the problems associated with the viral delivery of therapeutic genes. The first therapeutic applications of ZFNs are likely to involve ex vivo therapy using a patient's own stem cells. After editing the stem cell genome, the cells could be expanded in culture and reinserted into the patient to produce differentiated cells with corrected functions. Initial targets likely include the causes of monogenic diseases, such as the IL2Rγ gene and the β-globin gene for gene correction and the CCR5 gene for mutagenesis and disablement. Potential problems Off-target cleavage If the zinc finger domains are not specific enough for their target site or they do not target a unique site within the genome of interest, off-target cleavage may occur. Such off-target cleavage may lead to the production of enough double-strand breaks to overwhelm the repair machinery and, as a consequence, yield chromosomal rearrangements and/or cell death. Off-target cleavage events may also promote random integration of donor DNA. Two separate methods have been demonstrated to decrease off-target cleavage for 3-finger ZFNs that target two adjacent 9-basepair sites. Other groups use ZFNs with 4, 5 or 6 zinc fingers that target longer and presumably rarer sites, and such ZFNs could theoretically yield less off-target activity. A comparison of a pair of 3-finger ZFNs and a pair of 4-finger ZFNs detected off-target cleavage in human cells at 31 loci for the 3-finger ZFNs and at 9 loci for the 4-finger ZFNs. Whole genome sequencing of C. elegans modified with a pair of 5-finger ZFNs found only the intended modification and a deletion at a site "unrelated to the ZFN site", indicating that this pair of ZFNs was capable of targeting a unique site in the C. elegans genome.
Immunogenicity As with many foreign proteins inserted into the human body, there is a risk of an immunological response against the therapeutic agent and the cells in which it is active. Since the protein must be expressed only transiently, however, the time over which a response may develop is short. Liu et al. targeted ZFNickases to the endogenous β-casein (CSN2) locus to stimulate lysostaphin and human lysozyme gene addition by homology-directed repair, deriving cows that secrete lysostaphin. Prospects The ability to precisely manipulate the genomes of plants and animals has numerous applications in basic research, agriculture, and human therapeutics. Using ZFNs to modify endogenous genes has traditionally been a difficult task due mainly to the challenge of generating zinc finger domains that target the desired sequence with sufficient specificity. Improved methods of engineering zinc finger domains and the availability of ZFNs from a commercial supplier now put this technology in the hands of increasing numbers of researchers. Several groups are also developing other types of engineered nucleases including engineered homing endonucleases and nucleases based on engineered TAL effectors. TAL effector nucleases (TALENs) are particularly interesting because TAL effectors appear to be very simple to engineer and TALENs can be used to target endogenous loci in human cells. But to date no one has reported the isolation of clonal cell lines or transgenic organisms using such reagents. One type of ZFN, known as SB-728-T, has been tested for potential application in the treatment of HIV. Zinc-finger nickases Zinc-finger nickases (ZFNickases) are created by inactivating the catalytic activity of one ZFN monomer in the ZFN dimer required for double-strand cleavage. ZFNickases demonstrate strand-specific nicking activity in vitro and thus provide for highly specific single-strand breaks in DNA. These SSBs are processed by the same cellular DNA-repair mechanisms that ZFNs exploit, but they show a significantly reduced frequency of mutagenic NHEJ repairs at their target nicking site. This reduction provides a bias for HR-mediated gene modifications. ZFNickases can induce targeted HR in cultured human and livestock cells, although at lower levels than the corresponding ZFNs from which they were derived, because nicks can be repaired without genetic alteration. A major limitation of ZFN-mediated gene modifications is the competition between NHEJ and HR repair pathways. Regardless of the presence of a DNA donor construct, both repair mechanisms can be activated following DSBs induced by ZFNs. Thus, ZFNickases are the first plausible attempt at engineering a method to favor the HR method of DNA repair as opposed to the error-prone NHEJ repair. By reducing NHEJ repairs, ZFNickases can thereby reduce the spectrum of unwanted off-target alterations. The ease with which ZFNickases can be derived from ZFNs provides a useful platform for further studies on optimizing ZFNickases, possibly increasing their levels of targeted HR while still maintaining their reduced NHEJ frequency.
See also Chimeric nuclease Genome editing Gene targeting Zinc finger protein Zinc finger chimera Protein engineering Zinc finger nuclease treatment of HIV CompoZr References Further reading External links Zinc finger selector Zinc Finger Consortium website Zinc Finger Consortium materials from Addgene A commercial supplier of ZFNs Engineered proteins Genes Genome editing History of biotechnology Non-coding RNA Repetitive DNA sequences Zinc proteins
Zinc-finger nuclease
[ "Engineering", "Biology" ]
2,969
[ "Genetics techniques", "History of biotechnology", "Genome editing", "Genetic engineering", "Molecular genetics", "Repetitive DNA sequences" ]
8,440,094
https://en.wikipedia.org/wiki/Serial%20manipulator
Serial manipulators are the most common industrial robots and they are designed as a series of links connected by motor-actuated joints that extend from a base to an end-effector. Often they have an anthropomorphic arm structure described as having a "shoulder", an "elbow", and a "wrist". Serial robots usually have six joints, because at least six degrees of freedom are required to place a manipulated object in an arbitrary position and orientation in the workspace of the robot. A popular application for serial robots in today's industry is the pick-and-place assembly robot, called a SCARA robot, which has four degrees of freedom. Structure In its most general form, a serial robot consists of a number of rigid links connected to joints. Simplicity considerations in manufacturing and control have led to robots with only revolute or prismatic joints and orthogonal, parallel and/or intersecting joint axes (instead of arbitrarily placed joint axes). Donald L. Pieper derived the first practically relevant result in this context, referred to as the 321 kinematic structure: The inverse kinematics of serial manipulators with six revolute joints, and with three consecutive joint axes intersecting, can be solved in closed form, i.e. analytically. This result had a tremendous influence on the design of industrial robots. The main advantage of a serial manipulator is a large workspace with respect to the size of the robot and the floor space it occupies. The main disadvantages of these robots are: the low stiffness inherent to an open kinematic structure, the accumulation and amplification of errors from link to link, the fact that they have to carry and move the large weight of most of the actuators, and the relatively low effective load that they can manipulate. Kinematics The position and orientation of a robot's end effector are derived from the joint positions by means of a geometric model of the robot arm. For serial robots, the mapping from joint positions to end-effector pose is easy, while the inverse mapping is more difficult. Therefore, most industrial robots have special designs that reduce the complexity of the inverse mapping. Workspace The reachable workspace of a robot's end-effector is the manifold of reachable frames. The dextrous workspace consists of the points of the reachable workspace where the robot can generate velocities that span the complete tangent space at that point, i.e., it can translate the manipulated object with three degrees of freedom, and rotate the object with three degrees of rotational freedom. The relationships between joint space and Cartesian space coordinates of the object held by the robot are in general multiple-valued: the same pose can be reached by the serial arm in different ways, each with a different set of joint coordinates. Hence the reachable workspace of the robot is divided into configurations (also called assembly modes), in which the kinematic relationships are locally one-to-one. Singularity A singularity is a configuration of a serial manipulator in which the joint parameters no longer completely define the position and orientation of the end-effector. Singularities occur in configurations where joint axes align in a way that reduces the ability of the arm to position the end-effector. For example, when a serial manipulator is fully extended, it is in what is known as a boundary singularity. At a singularity, the end-effector loses one or more degrees of twist freedom (instantaneously, the end-effector cannot move in these directions).
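A minimal sketch of the joint-space to Cartesian-space mapping and of a boundary singularity, using a planar 2-link arm as a toy stand-in for the 6-joint manipulators discussed above. The link lengths and joint angles are arbitrary illustrative values.

```python
# Minimal sketch: forward kinematics and singularity check for a planar
# 2-link arm with revolute joints (q1, q2) and link lengths (l1, l2).
# The Jacobian determinant vanishes when the arm is fully extended (q2 = 0),
# which is the boundary singularity described above.

import numpy as np

def forward_kinematics(q1, q2, l1=1.0, l2=0.8):
    """End-effector position (x, y) from joint angles in radians."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.array([x, y])

def jacobian(q1, q2, l1=1.0, l2=0.8):
    """2x2 geometric Jacobian d(x, y)/d(q1, q2)."""
    return np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])

for q2 in (np.pi / 3, 0.0):                 # bent elbow vs. fully extended
    det = np.linalg.det(jacobian(0.2, q2))
    print(f"q2 = {q2:.2f} rad, pos = {forward_kinematics(0.2, q2).round(3)}, "
          f"det(J) = {det:.3f}",
          "-> boundary singularity" if abs(det) < 1e-9 else "")
```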
Serial robots with fewer than six independent joints are always singular in the sense that they can never span a six-dimensional twist space. This is often called an architectural singularity. A singularity is usually not an isolated point in the workspace of the robot, but a sub-manifold. Redundant manipulator A redundant manipulator has more than six degrees of freedom, which means that it has additional joint parameters that allow the configuration of the robot to change while it holds its end-effector in a fixed position and orientation. A typical redundant manipulator has seven joints, for example, three at the shoulder, one at the elbow, and three at the wrist. This manipulator can move its elbow around a circle while it maintains a specific position and orientation of its end-effector. A snake robot has many more than six degrees of freedom and is often called hyper-redundant. Manufacturers ABB Robotics Adept Technology Comau Epson Robots FANUC Robotics Kawasaki Robotics KUKA Mitsubishi Motoman Staubli Robotics Design Universal Robots See also Parallel manipulator Robot kinematics References Robot kinematics - Robotic manipulators
Serial manipulator
[ "Engineering" ]
966
[ "Industrial robots", "Robotics engineering", "Robot kinematics" ]
8,441,756
https://en.wikipedia.org/wiki/Perosamine
Perosamine (or GDP-perosamine) is a mannose-derived 4-aminodeoxysugar produced by some bacteria. Biological role N-acetyl-perosamine is found in the O-antigen of Gram-negative bacteria such as Vibrio cholerae O1, E. coli O157:H7 and Caulobacter crescentus CB15. The sugar is also found in perimycin, an antibiotic produced by the Gram-positive organism Streptomyces coelicolor var. aminophilus. Biosynthesis Its biosynthesis from mannose-1-phosphate follows a pathway similar to that of colitose, but is different in that it is aminated and does not undergo 3-OH deoxygenation or C-5 epimerization. GDP-4-keto-6-deoxymannose-4-aminotransferase (GDP-perosamine synthase) GDP-perosamine synthase is a PLP-dependent enzyme that transfers a nitrogen from glutamate to the 4-keto position of GDP-4-keto-6-deoxymannose during the biosynthesis of GDP-perosamine. References Hexosamines Deoxy sugars
Perosamine
[ "Chemistry" ]
275
[ "Deoxy sugars", "Carbohydrates" ]
8,442,558
https://en.wikipedia.org/wiki/Nickel%20oxide%20hydroxide
Nickel oxide hydroxide is the inorganic compound with the chemical formula NiO(OH). It is a black solid that is insoluble in all solvents but attacked by base and acid. It is a component of the nickel–metal hydride battery and of the nickel–iron battery. Related materials Nickel(III) oxides are often poorly characterized and are assumed to be nonstoichiometric compounds. Nickel(III) oxide (Ni2O3) has not been verified crystallographically. For applications in organic chemistry, nickel oxides or peroxides are generated in situ and lack crystallographic characterization. For example, "nickel peroxide" (CAS# 12035-36-8) is also closely related to or even identical with NiO(OH). Synthesis and structure Its layered structure resembles that of the brucite polymorph of nickel(II) hydroxide, but with half as many hydrogens. The oxidation state of nickel is 3+. It can be prepared by the reaction of nickel(II) hydroxide with aqueous potassium hydroxide and bromine as the oxidant: 2 Ni(OH)2 + 2 KOH + Br2 → 2 KBr + 2 H2O + 2 NiOOH Use in organic chemistry Nickel(III) oxides catalyze the oxidation of benzyl alcohol to benzoic acid using bleach. Similarly, they catalyze the double oxidation of 3-butenoic acid to fumaric acid. References Inorganic compounds Catalysts Electrochemistry Nickel compounds Non-stoichiometric compounds Transition metal oxides
Nickel oxide hydroxide
[ "Chemistry" ]
332
[ "Catalysis", "Catalysts", "Physical chemistry stubs", "Inorganic compounds", "Non-stoichiometric compounds", "Electrochemistry", "Chemical kinetics", "Electrochemistry stubs" ]
8,447,200
https://en.wikipedia.org/wiki/Cosmic%20Origins%20Spectrograph
The Cosmic Origins Spectrograph (COS) is a science instrument that was installed on the Hubble Space Telescope during Servicing Mission 4 (STS-125) in May 2009. It is designed for ultraviolet (90–320 nm) spectroscopy of faint point sources with a resolving power of ≈1,550–24,000. Science goals include the study of the origins of large scale structure in the universe, the formation and evolution of galaxies, and the origin of stellar and planetary systems and the cold interstellar medium. COS was developed and built by the Center for Astrophysics and Space Astronomy (CASA-ARL) at the University of Colorado at Boulder and the Ball Aerospace and Technologies Corporation in Boulder, Colorado. COS is installed into the axial instrument bay previously occupied by the Corrective Optics Space Telescope Axial Replacement (COSTAR) instrument, and is intended to complement the Space Telescope Imaging Spectrograph (STIS) that was repaired during the same mission. While STIS operates across a wider wavelength range, COS is many times more sensitive in the UV. Instrument overview The Cosmic Origins Spectrograph is an ultraviolet spectrograph that is optimized for high sensitivity and moderate spectral resolution of compact (point like) objects (stars, quasars, etc.). COS has two principal channels, one for Far Ultraviolet (FUV) spectroscopy covering 90–205 nm and one for Near Ultraviolet (NUV) spectroscopy spanning 170–320 nm. The FUV channel can work with one of three diffraction gratings, the NUV with one of four, providing both low and medium resolution spectra (table 1). In addition, COS has a narrow field of view NUV imaging mode intended for target acquisition. One key technique for achieving high sensitivity in the FUV is minimizing the number of optics. This is done because FUV reflection and transmission efficiencies are typically quite low compared to what is common at visible wavelengths. In accomplishing this, the COS FUV channel uses a single (selectable) optic to diffract the light from HST, correct for the Hubble spherical aberration, focus the diffracted light onto the FUV detector and correct for astigmatism typical of this sort of instrument. Since aberration correction is performed after the light passes into the instrument, the entrance to the spectrograph must be an extended aperture, rather than the traditional narrow entrance slit, in order to allow the entire aberrated HST image from a point source to enter the instrument. The 2.5 arc second diameter entrance aperture allows ≈ 95% of the light from compact sources to enter COS, yielding high sensitivity at the design resolution for compact sources. Post launch performance closely matched expectations. Instrument sensitivity is close to pre-launch calibration values, and detector background is exceptionally low (0.16 counts per resolution element per 1000 seconds for the FUV detector, and 1.7 counts per resolution element per 100 seconds for the NUV detector). FUV resolution is slightly lower than pre-launch predictions due to mid-frequency polishing errors on the HST primary mirror, while NUV resolution exceeds pre-launch values in all modes. Thanks to the minimal number of reflections, the G140L mode, and G130M central wavelength settings added after 2010, can observe light at wavelengths down to ~90 nm, and shorter, despite the very low reflectivity of the MgF2 coated optics at these wavelengths. 
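As a rough illustration of what the quoted resolving powers mean, the sketch below converts R = λ/Δλ into a wavelength and velocity resolution element. The wavelengths and R values are picked from within the ranges quoted above and are illustrative only, not an official COS calibration.

```python
# Minimal sketch: resolution element implied by a resolving power R.
# delta_lambda = lambda / R, and the corresponding velocity width is c / R.
# Wavelengths and R values below are illustrative choices from the quoted
# FUV/NUV ranges, not instrument calibration data.

C_KM_S = 299_792.458   # speed of light, km/s

def resolution_element(wavelength_nm: float, resolving_power: float):
    """Return (delta_lambda in nm, delta_v in km/s) for a given R."""
    d_lambda = wavelength_nm / resolving_power
    d_v = C_KM_S / resolving_power
    return d_lambda, d_v

for lam_nm, R, label in [(130.0, 20000, "FUV, medium resolution"),
                         (130.0, 1550, "FUV, low resolution"),
                         (250.0, 20000, "NUV, medium resolution")]:
    dl, dv = resolution_element(lam_nm, R)
    print(f"{label}: R = {R}, d_lambda = {dl * 1000:.1f} pm, d_v = {dv:.0f} km/s")
```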
Science Goals The Cosmic Origins Spectrograph is designed to enable the observation of faint, point-like UV targets at moderate spectral resolution, allowing COS to observe hot stars (OB stars, white dwarfs, cataclysmic variables and binary stars) in the Milky Way and to observe the absorption features in the spectra of active galactic nuclei. Observations of extended objects are also planned. Spectroscopy provides a wealth of information about distant astronomical objects that is unobtainable through imaging: Spectroscopy lies at the heart of astrophysical inference. Our understanding of the origin and evolution of the cosmos critically depends on our ability to make quantitative measurements of physical parameters such as the total mass, distribution, motions, temperatures, and composition of matter in the Universe. Detailed information on all of these properties can be gleaned from high-quality spectroscopic data. For distant objects, some of these properties (e.g., motions and composition) can only be measured through spectroscopy. Ultraviolet (UV) spectroscopy provides some of the most fundamental diagnostic data necessary for discerning the physical characteristics of planets, stars, galaxies, and interstellar and intergalactic matter. The UV offers access to spectral features that provide key diagnostic information that cannot be obtained at other wavelengths. Obtaining absorption spectra of interstellar and intergalactic gas forms the basis of many of the COS science programs. These spectra will address questions such as how the Cosmic Web formed, how much mass can be found in interstellar and intergalactic gas, and what the composition, distribution and temperature of this gas are. In general, COS will address questions such as: What is the large-scale structure of matter in the Universe? How did galaxies form out of the intergalactic medium? What types of galactic halos and outflowing winds do star-forming galaxies produce? How were the chemical elements for life created in massive stars and supernovae? How do stars and planetary systems form from dust grains in molecular clouds? What is the composition of planetary atmospheres and comets in our Solar System (and beyond)? Some specific programs include the following: Large-Scale Structure of Baryonic Matter: With its high FUV spectroscopic sensitivity, COS is uniquely suited for exploring the Lyman-alpha forest. This is the ‘forest’ of absorption lines seen in the spectra of distant galaxies and quasars, caused by intergalactic gas clouds, which may contain the majority of baryonic matter in the universe. Because the most useful absorption lines for these observations are in the far ultraviolet and the sources are faint, a high sensitivity FUV spectrograph with wide wavelength coverage is needed to perform these observations. By determining the redshift and line width of the intervening absorbers, COS will be able to map out the temperature, density and composition of dark baryonic matter in the Cosmic Web. Warm–hot intergalactic medium: Absorption line studies of highly ionized (hot) gas (O VI, N V, etc.) and broad Lyman-alpha will explore the ionization state and distribution of hot intergalactic gas. Great Wall Structure: Background active galactic nuclei will be used to study intergalactic absorbers to estimate their transverse size and physical density and determine how the distribution of material correlates with nearby galaxy distributions in the CFA2 Great Wall.
He II Reionization: Highly redshifted ionized helium will be used study the reionization process at a redshift (z) of ≈ 3. Additional instrument design details COS has two channels, the Far Ultraviolet (FUV) covering 90–205 nm and the Near Ultraviolet (NUV) covering 170–320 nm. All COS optics are reflective (except for the bright object aperture filter and NUV order sorters) to maximize efficiency and to avoid chromatic aberration. Principal COS observing modes are summarized in table 1. Light from the Hubble Space Telescope enters the instrument through either the Primary Science Aperture (PSA) or the Bright Object Aperture (BOA). The BOA introduces a neutral density filter to the optical path that attenuates the light by approximately a factor of one hundred (five astronomical magnitudes). Both apertures are oversized (2.5 arc second clear aperture) permitting more than 95% of the light from a point source to enter the spectrograph. After passing through the PSA or BOA the light travels to one of the optics on the first of two optic select wheels, either one of the three FUV diffraction gratings or the first of the NUV collimation mirrors (table 1), depending on whether an FUV, NUV, or target acquisition channel is selected. All optics on the first wheel have an aspheric profile to correct for the Hubble spherical aberration. The FUV channel has two medium and one low resolution spectroscopy modes. The FUV channels are modified Rowland Circle spectrographs in which the single holographically ruled aspheric concave diffraction grating simultaneously focuses and diffracts the incoming light and corrects both for the HST spherical aberration and for aberrations introduced by the extreme off-Rowland layout. The diffracted light is focused onto a 170x10 mm cross delay line microchannel plate detector. The FUV detector active area is curved to match the spectrograph's focal surface and is divided into two physically distinct segments separated by a small gap. The NUV channel has three medium and one low resolution spectroscopy modes as well as an imaging mode with an approximately 1.0 arc second unvignetted field of view. The NUV channels utilize a modified Czerny-Turner design in which collimated light is fed to the selected grating, followed by three camera mirrors that direct the diffracted light onto three separate stripes on a 25×25 mm Multi Anode Microchannel Array (MAMA) detector. The imaging mode is primarily intended for target acquisition. See also Advanced Camera for Surveys Faint Object Camera Faint Object Spectrograph Goddard High Resolution Spectrograph Near Infrared Camera and Multi-Object Spectrometer Space Telescope Imaging Spectrograph Wide Field and Planetary Camera Wide Field and Planetary Camera 2 Wide Field Camera 3 Photon underproduction crisis References External links The Cosmic Origins Spectrograph Web Site at the University of Colorado the COS Web site at the Space Telescope Science Institute (STScI) Ultraviolet telescopes Hubble Space Telescope instruments Spectrographs
Cosmic Origins Spectrograph
[ "Physics", "Chemistry" ]
2,035
[ "Spectrographs", "Spectroscopy", "Spectrum (physical sciences)" ]
8,447,393
https://en.wikipedia.org/wiki/Ligand%20cone%20angle
In coordination chemistry, the ligand cone angle (θ) is a measure of the steric bulk of a ligand in a transition metal coordination complex. It is defined as the solid angle formed with the metal at the vertex of a cone and the outermost edge of the van der Waals spheres of the ligand atoms at the perimeter of the base of the cone. Tertiary phosphine ligands are commonly classified using this parameter, but the method can be applied to any ligand. The term cone angle was first introduced by Chadwick A. Tolman, a research chemist at DuPont. Tolman originally developed the method for phosphine ligands in nickel complexes, determining the angles from measurements of accurate physical models. Asymmetric cases The concept of cone angle is most easily visualized with symmetrical ligands, e.g. PR3. But the approach has been refined to include less symmetrical ligands of the type PRR′R″ as well as diphosphines. In such asymmetric cases, the substituents' half-angles, θi/2, are averaged and then doubled to find the total cone angle: θ = (2/3) Σ (θi/2). In the case of diphosphines, the half-angle of the backbone is approximated as half the chelate bite angle, assuming a bite angle of 74°, 85°, and 90° for diphosphines with methylene, ethylene, and propylene backbones, respectively. The Manz cone angle is often easier to compute than the Tolman cone angle. Variations The Tolman cone angle method assumes empirical bond data and defines the perimeter as the maximum possible circumscription of an idealized free-spinning substituent. The metal-ligand bond length in the Tolman model was determined empirically from crystal structures of tetrahedral nickel complexes. In contrast, the solid-angle concept derives both bond length and the perimeter from empirical solid state crystal structures. There are advantages to each system. If the geometry of a ligand is known, either through crystallography or computations, an exact cone angle (θ) can be calculated. No assumptions about the geometry are made, unlike the Tolman method. Application The concept of cone angle is of practical importance in homogeneous catalysis because the size of the ligand affects the reactivity of the attached metal center. For example, the selectivity of hydroformylation catalysts is strongly influenced by the size of the coligands. Despite being monovalent, some phosphines are large enough to occupy more than half of the coordination sphere of a metal center. Recent research has found that other descriptors, such as percent buried volume, are more accurate than cone angle at capturing the relevant steric effects of the phosphine ligand(s) when bound to the metal center. See also Steric effects (versus electronic effects) Tolman electronic parameter References Tertiary phosphines Stereochemistry Organometallic chemistry Coordination chemistry
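A minimal sketch of the averaging rule for asymmetric ligands described above: the three substituent half-angles are averaged and the result is doubled. The half-angle values used here are illustrative placeholders, not tabulated Tolman data.

```python
# Minimal sketch of the asymmetric-case rule above for a phosphine PRR'R'':
# theta = 2 * average(theta_i / 2) = (2/3) * sum(theta_i / 2).
# Half-angle values are illustrative placeholders, not published data.

def tolman_cone_angle(half_angles_deg):
    """Cone angle in degrees from the three substituent half-angles (theta_i / 2)."""
    assert len(half_angles_deg) == 3, "expects one half-angle per substituent"
    return 2.0 * sum(half_angles_deg) / 3.0

# Symmetric consistency check: three identical half-angles of 72.5 deg give 145 deg.
print(tolman_cone_angle([72.5, 72.5, 72.5]))   # 145.0
# Hypothetical asymmetric ligand with three different substituents:
print(tolman_cone_angle([59.0, 72.5, 91.0]))   # ~148.3
```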
Ligand cone angle
[ "Physics", "Chemistry" ]
598
[ "Stereochemistry", "Coordination chemistry", "Space", "nan", "Spacetime", "Organometallic chemistry" ]
1,312,802
https://en.wikipedia.org/wiki/Scroll%20compressor
A scroll compressor (also called spiral compressor, scroll pump and scroll vacuum pump) is a device for compressing air or refrigerant. It is used in air conditioning equipment, as an automobile supercharger (where it is known as a scroll-type supercharger) and as a vacuum pump. Many residential central heat pump and air conditioning systems and a few automotive air conditioning systems employ a scroll compressor instead of the more traditional rotary, reciprocating, and wobble-plate compressors. A scroll compressor operating in reverse is a scroll expander, and can generate mechanical work. History Léon Creux first patented a scroll compressor in 1905 in France and the US. Creux invented the compressor as a rotary steam engine concept, but the metal casting technology of the period was not sufficiently advanced to construct a working prototype, since a scroll compressor demands very tight tolerances to function effectively. In the 1905 patent, Creux defines a co-orbiting or spinning reversible steam expander driven by a fixed radius crank on a single shaft. However, the scroll expander engine could not overcome the machining hurdles of radial compliance inherent to achieving efficiency in scroll operation that would not be adequately addressed until the works of Niels Young in 1975. The first practical scroll compressors did not appear on the market until after World War II, when higher-precision machine tools enabled their construction. In 1981, Sanden began manufacturing the first commercially available scroll compressors for automobile air conditioners. They were not commercially produced for room air conditioning until 1983 when Hitachi launched the world's first air conditioner with a hermetic scroll compressor. Design A scroll compressor uses two interleaving scrolls to pump, compress or pressurize fluids such as liquids and gases. The vane geometry may be involute, Archimedean spiral, or hybrid curves. Often, one of the scrolls is fixed, while the other orbits eccentrically without rotating, thereby trapping and pumping or compressing pockets of fluid between the scrolls. An eccentric shaft can provide the orbital motion but the scroll must be prevented from rotating, typically with an Oldham-type coupling, additional eccentric idler shafts, or a bellows joint (particularly for high-purity applications). Another method for producing the compression motion is co-rotating the scrolls, in synchronous motion, but with offset centers of rotation. The relative motion is the same as if one were orbiting. Leaks from axial gaps are prevented by the use of spiral-shaped tip seals, placed into grooves on the tips of both spirals. These tip seals also help lower the friction and can be replaced when worn down. Some compressors use the pressurized discharge gas to push both scrolls together, eliminating the need for tip seals and improving sealing with use; these compressors are said to wear-in instead of wear-out. Engineering comparison to other pumps These devices are known for operating more smoothly, quietly, and reliably than conventional compressors in some applications. Rotations and pulse flow The compression process occurs over approximately 2 to 2½ rotations of the crankshaft, compared to one rotation for rotary compressors, and one-half rotation for reciprocating compressors. 
The scroll discharge and suction processes occur for a full rotation, compared to less than a half-rotation for the reciprocating suction process, and less than a quarter-rotation for the reciprocating discharge process. Reciprocating compressors have multiple cylinders (typically, anywhere from two to six), while scroll compressors only have one compression element. The presence of multiple cylinders in reciprocating compressors reduces suction and discharge pulsations. Therefore, it is difficult to state whether scroll compressors have lower pulsation levels than reciprocating compressors, as has often been claimed by some suppliers of scroll compressors. The steadier flow yields lower gas pulsations, lower sound, and lower vibration of attached piping, while having no influence on the compressor's operating efficiency. Valves Scroll compressors never have a suction valve, but depending on the application may or may not have a discharge valve. The use of a dynamic discharge valve is more prominent in high pressure ratio applications, typical of refrigeration. Typically, an air-conditioning scroll does not have a dynamic discharge valve. The use of a dynamic discharge valve improves scroll compressor efficiency over a wide range of operating conditions, when the operating pressure ratio is well above the built-in pressure ratio of the compressors. If the compressor is designed to operate near a single operating point, then the scroll compressor can actually gain efficiency around this point if there is no dynamic discharge valve present (since there are additional discharge flow losses associated with the presence of the discharge valve, and discharge ports tend to be smaller when a discharge valve is present). Efficiency The isentropic efficiency of scroll compressors is slightly higher than that of a typical reciprocating compressor when the compressor is designed to operate near one selected rating point. The scroll compressors are more efficient in this case because they do not have a dynamic discharge valve that introduces additional throttling losses. However, the efficiency of a scroll compressor that does not have a discharge valve begins to fall below that of the reciprocating compressor at higher pressure ratio operation. This is a result of under-compression losses that occur at high pressure ratio operation of the positive displacement compressors that do not have a dynamic discharge valve. The scroll compression process is nearly 100% volumetrically efficient in pumping the trapped fluid. The suction process creates its own volume, separate from the compression and discharge processes further inside. By comparison, reciprocating compressors leave a small amount of compressed gas in the cylinder, because it is not practical for the piston to touch the head or valve plate. That remnant gas from the last cycle then occupies space intended for suction gas. The reduction in capacity (i.e. volumetric efficiency) depends on the suction and discharge pressures, with greater reductions occurring at higher ratios of discharge to suction pressures. Size Scroll compressors tend to be very compact and smooth running and so do not require spring suspension. This allows them to have very small shell enclosures, which reduces overall cost but also results in smaller free volume. Reliability Scroll compressors have fewer moving parts than reciprocating compressors, which, theoretically, should improve reliability.
According to Emerson Climate Technologies, manufacturer of Copeland scroll compressors, scroll compressors have 70 percent fewer moving parts than conventional reciprocating compressors. At least one manufacturer found through testing that the scroll compressor design delivered better reliability and efficiency in operation than reciprocating compressors. Scroll expander The scroll expander is a work-producing device used mostly in low-pressure heat recovery applications. It is essentially a scroll compressor working in reverse; high enthalpy working fluid or gas enters the discharge side of the compressor and rotates the eccentric scroll before discharging from the compressor inlet. The basic modification required to convert the scroll compressor to a scroll expander is to remove the non-return valve from the compressor discharge. See also Compressed air battery References External links – a video showing how the scroll compressor works Gas compressors Vacuum pumps
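As a rough illustration of the built-in pressure ratio discussed in the efficiency section above, the sketch below compares the pressure ratio implied by a fixed built-in volume ratio (assuming ideal-gas, isentropic compression) with the pressure ratio the system demands. All numbers are illustrative assumptions, not data for any particular compressor.

```python
# Minimal sketch: built-in pressure ratio of a fixed-volume-ratio machine.
# For an ideal gas compressed isentropically, the scroll geometry's volume
# ratio Vr implies an internal pressure ratio of Vr**k. If the system demands
# a higher ratio, a valveless scroll opens to the discharge line while the gas
# is still below line pressure (under-compression); if it demands a lower
# ratio, the gas is over-compressed. Values are illustrative only.

def built_in_pressure_ratio(volume_ratio: float, k: float = 1.3) -> float:
    """Isentropic pressure ratio achieved by a fixed built-in volume ratio."""
    return volume_ratio ** k

def compression_regime(system_pr: float, volume_ratio: float, k: float = 1.3) -> str:
    internal_pr = built_in_pressure_ratio(volume_ratio, k)
    if system_pr > internal_pr * 1.01:
        return f"under-compression (internal {internal_pr:.2f} < system {system_pr:.2f})"
    if system_pr < internal_pr * 0.99:
        return f"over-compression (internal {internal_pr:.2f} > system {system_pr:.2f})"
    return f"matched (internal {internal_pr:.2f})"

for pr in (2.0, 3.0, 4.5):                      # system pressure ratios
    print(pr, "->", compression_regime(pr, volume_ratio=2.3))
```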
Scroll compressor
[ "Physics", "Chemistry", "Engineering" ]
1,470
[ "Turbomachinery", "Gas compressors", "Vacuum pumps", "Vacuum", "Vacuum systems", "Matter" ]
1,313,639
https://en.wikipedia.org/wiki/Skyrmion
In particle theory, the skyrmion is a topologically stable field configuration of a certain class of non-linear sigma models. It was originally proposed as a model of the nucleon by (and named after) Tony Skyrme in 1961. As a topological soliton in the pion field, it has the remarkable property of being able to model, with reasonable accuracy, multiple low-energy properties of the nucleon, simply by fixing the nucleon radius. It has since found application in solid-state physics, as well as having ties to certain areas of string theory. Skyrmions as topological objects are important in solid-state physics, especially in the emerging technology of spintronics. A two-dimensional magnetic skyrmion, as a topological object, is formed, e.g., from a 3D effective-spin "hedgehog" (in the field of micromagnetics: out of a so-called "Bloch point" singularity of homotopy degree +1) by a stereographic projection, whereby the positive north-pole spin is mapped onto a far-off edge circle of a 2D-disk, while the negative south-pole spin is mapped onto the center of the disk. In a spinor field, such as photonic or polariton fluids, the skyrmion topology corresponds to a full Poincaré beam (a spin vortex comprising all the states of polarization mapped by a stereographic projection of the Poincaré sphere to the real plane). A dynamical pseudospin skyrmion results from the stereographic projection of a rotating polariton Bloch sphere in the case of dynamical full Bloch beams. Skyrmions have been reported, but not conclusively proven, to appear in Bose–Einstein condensates, thin magnetic films, and chiral nematic liquid crystals, as well as in free-space optics. As a model of the nucleon, the topological stability of the skyrmion can be interpreted as a statement that the baryon number is conserved; i.e. that the proton does not decay. The Skyrme Lagrangian is essentially a one-parameter model of the nucleon. Fixing the parameter fixes the proton radius, and also fixes all other low-energy properties, which appear to be correct to about 30%, a significant level of predictive power. Hollowed-out skyrmions form the basis for the chiral bag model (Cheshire Cat model) of the nucleon. The exact results for the duality between the fermion spectrum and the topological winding number of the non-linear sigma model have been obtained by Dan Freed. This can be interpreted as a foundation for the duality between a quantum chromodynamics (QCD) description of the nucleon (but consisting only of quarks, and without gluons) and the Skyrme model for the nucleon. The skyrmion can be quantized to form a quantum superposition of baryons and resonance states. It could be predicted from some nuclear matter properties. Topological soliton In field theory, skyrmions are homotopically non-trivial classical solutions of a nonlinear sigma model with a non-trivial target manifold topology – hence, they are topological solitons. An example occurs in chiral models of mesons, where the target manifold is the homogeneous space (SU(N)L × SU(N)R) / SU(N)diag, where SU(N)L and SU(N)R are the left and right chiral symmetries, and SU(N)diag is the diagonal subgroup. In nuclear physics, for N = 2, the chiral symmetries are understood to be the isospin symmetry of the nucleon. For N = 3, the isoflavor symmetry between the up, down and strange quarks is more broken, and the skyrmion models are less successful or accurate.
If spacetime has the topology S3×R, then classical configurations can be classified by an integral winding number because the third homotopy group π3(S3) ≅ ℤ is equivalent to the ring of integers, with the congruence sign referring to homeomorphism. A topological term can be added to the chiral Lagrangian, whose integral depends only upon the homotopy class; this results in superselection sectors in the quantised model. In (1 + 1)-dimensional spacetime, a skyrmion can be approximated by a soliton of the Sine–Gordon equation; after quantisation by the Bethe ansatz or otherwise, it turns into a fermion interacting according to the massive Thirring model. Lagrangian The Lagrangian for the skyrmion, as written for the original chiral SU(2) effective Lagrangian of the nucleon-nucleon interaction (in (3 + 1)-dimensional spacetime), can be written as L = (fπ²/4) tr(∂μU† ∂^μU) + (1/(32g²)) tr([U†∂μU, U†∂νU][U†∂^μU, U†∂^νU]), where U = exp(i τ⃗·θ⃗(x)), τ1, τ2, τ3 are the isospin Pauli matrices, [ , ] is the Lie bracket commutator, and tr is the matrix trace. The meson field (pion field, up to a dimensional factor) at spacetime coordinate x is θ⃗(x). A broad review of the geometric interpretation of U is presented in the article on sigma models. When written this way, U is clearly an element of the Lie group SU(2), and θ⃗(x)·τ⃗ an element of the Lie algebra su(2). The pion field can be understood abstractly to be a section of the tangent bundle of the principal fiber bundle of SU(2) over spacetime. This abstract interpretation is characteristic of all non-linear sigma models. The first term is just an unusual way of writing the quadratic term of the non-linear sigma model. When used as a model of the nucleon, one writes U = exp(i τ⃗·θ⃗(x)/fπ), with the dimensional factor fπ being the pion decay constant. (In 1 + 1 dimensions, this constant is not dimensional and can thus be absorbed into the field definition.) The second term establishes the characteristic size of the lowest-energy soliton solution; it determines the effective radius of the soliton. As a model of the nucleon, it is normally adjusted so as to give the correct radius for the proton; once this is done, other low-energy properties of the nucleon are automatically fixed, to within about 30% accuracy. It is this result, of tying together what would otherwise be independent parameters, and doing so fairly accurately, that makes the Skyrme model of the nucleon so appealing and interesting. Thus, for example, the constant g in the quartic term is interpreted as the vector-pion coupling ρ–π–π between the rho meson (the nuclear vector meson) and the pion; the skyrmion relates the value of this constant to the baryon radius. Topological charge or winding number The local winding number density (or topological charge density) is given by Bμ = (1/(24π²)) ε^μναβ tr[(U†∂νU)(U†∂αU)(U†∂βU)], where ε^μναβ is the totally antisymmetric Levi-Civita symbol (equivalently, the Hodge star, in this context). As a physical quantity, this can be interpreted as the baryon current; it is conserved, ∂μBμ = 0, and the conservation follows as a Noether current for the chiral symmetry. The corresponding charge is the baryon number B = ∫ d³x B0(x), which is conserved due to topological reasons and is always an integer. For this reason, it is associated with the baryon number of the nucleus. As a conserved charge, it is time-independent, dB/dt = 0, the physical interpretation of which is that protons do not decay. In the chiral bag model, one cuts a hole out of the center and fills it with quarks. Despite this obvious "hackery", the total baryon number is conserved: the missing charge from the hole is exactly compensated by the spectral asymmetry of the vacuum fermions inside the bag.
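The winding number above has a two-dimensional analogue, Q = (1/4π) ∫ n · (∂x n × ∂y n) dx dy, that is commonly used for the magnetic skyrmions discussed in the next section. The sketch below evaluates it numerically on a grid for a standard axisymmetric ansatz; the profile, grid, and parameters are illustrative assumptions, not taken from any specific material.

```python
# Minimal numerical sketch: 2D topological charge of a magnetic-skyrmion-like
# spin texture, Q = (1/4*pi) * sum over the grid of n . (d_x n x d_y n) dx^2.
# The axisymmetric ansatz below (spin down at the core, up far away) should
# give |Q| close to 1; the sign depends on core polarity and winding conventions.

import numpy as np

def skyrmion_texture(npts=301, extent=15.0, radius=1.5):
    """Unit-vector spin field on a square grid for an axisymmetric ansatz."""
    xs = np.linspace(-extent, extent, npts)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    r = np.hypot(X, Y)
    phi = np.arctan2(Y, X)
    theta = 2.0 * np.arctan2(radius, r)        # pi at the core, -> 0 far away
    field = np.stack([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)], axis=-1)
    return field, xs[1] - xs[0]

def topological_charge(field, dx):
    """Discretized Q = (1/4*pi) * integral of n . (d_x n x d_y n)."""
    dn_dx = np.gradient(field, dx, axis=0)
    dn_dy = np.gradient(field, dx, axis=1)
    density = np.einsum("ijk,ijk->ij", field, np.cross(dn_dx, dn_dy))
    return density.sum() * dx * dx / (4.0 * np.pi)

field, dx = skyrmion_texture()
print(f"numerical topological charge Q = {topological_charge(field, dx):+.3f}")
```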
Magnetic materials/data storage One particular form of skyrmions is magnetic skyrmions, found in magnetic materials that exhibit spiral magnetism due to the Dzyaloshinskii–Moriya interaction, double-exchange mechanism or competing Heisenberg exchange interactions. They form "domains" as small as 1 nm (e.g. in Fe on Ir(111)). The small size and low energy consumption of magnetic skyrmions make them a good candidate for future data-storage solutions and other spintronics devices. Researchers could read and write skyrmions using scanning tunneling microscopy. The topological charge, representing the existence and non-existence of skyrmions, can represent the bit states "1" and "0". Room-temperature skyrmions were reported. Skyrmions operate at current densities that are several orders of magnitude weaker than conventional magnetic devices. In 2015 a practical way to create and access magnetic skyrmions under ambient room-temperature conditions was announced. The device used arrays of magnetized cobalt disks as artificial Bloch skyrmion lattices atop a thin film of cobalt and palladium. Asymmetric magnetic nanodots were patterned with controlled circularity on an underlayer with perpendicular magnetic anisotropy (PMA). Polarity is controlled by a tailored magnetic-field sequence and demonstrated in magnetometry measurements. The vortex structure is imprinted into the underlayer's interfacial region by suppressing the PMA by a critical ion-irradiation step. The lattices are identified with polarized neutron reflectometry and have been confirmed by magnetoresistance measurements. A recent (2019) study demonstrated a way to move skyrmions, purely using electric field (in the absence of electric current). The authors used Co/Ni multilayers with a thickness slope and Dzyaloshinskii–Moriya interaction and demonstrated skyrmions. They showed that the displacement and velocity depended directly on the applied voltage. In 2020, a team of researchers from the Swiss Federal Laboratories for Materials Science and Technology (Empa) has succeeded for the first time in producing a tunable multilayer system in which two different types of skyrmions – the future bits for "0" and "1" – can exist at room temperature. See also Hopfion, 3D counterpart of skyrmions References Further reading Developments in Magnetic Skyrmions Come in Bunches, IEEE Spectrum 2015 web article Hypothetical particles Quantum chromodynamics
Skyrmion
[ "Physics" ]
2,130
[ "Hypothetical particles", "Matter", "Unsolved problems in physics", "Physics beyond the Standard Model", "Subatomic particles" ]
1,313,664
https://en.wikipedia.org/wiki/Selection%20rule
In physics and chemistry, a selection rule, or transition rule, formally constrains the possible transitions of a system from one quantum state to another. Selection rules have been derived for electromagnetic transitions in molecules, in atoms, in atomic nuclei, and so on. The selection rules may differ according to the technique used to observe the transition. Selection rules also play a role in chemical reactions, where some reactions are formally spin-forbidden, that is, reactions where the spin state changes at least once from reactants to products. In the following, mainly atomic and molecular transitions are considered. Overview In quantum mechanics the basis for a spectroscopic selection rule is the value of the transition moment integral ∫ ψ1* μ ψ2 dτ, where ψ1 and ψ2 are the wave functions of the two states, "state 1" and "state 2", involved in the transition, and μ is the transition moment operator. This integral represents the propagator (and thus the probability) of the transition between states 1 and 2; if the value of this integral is zero then the transition is "forbidden". In practice, to determine a selection rule the integral itself does not need to be calculated: It is sufficient to determine the symmetry of the transition moment function ψ1* μ ψ2. If the transition moment function is symmetric over all of the totally symmetric representation of the point group to which the atom or molecule belongs, then the integral's value is (in general) not zero and the transition is allowed. Otherwise, the transition is "forbidden". The transition moment integral is zero if the transition moment function is anti-symmetric or odd, i.e. if y(x) = −y(−x) holds. The symmetry of the transition moment function is the direct product of the parities of its three components. The symmetry characteristics of each component can be obtained from standard character tables. Rules for obtaining the symmetries of a direct product can be found in texts on character tables. Examples Electronic spectra The Laporte rule is a selection rule formally stated as follows: In a centrosymmetric environment, transitions between like atomic orbitals, such as s-s, p-p, d-d, or f-f transitions, are forbidden. The Laporte rule (law) applies to electric dipole transitions, so the operator has u symmetry (meaning ungerade, odd). p orbitals also have u symmetry, so the symmetry of the transition moment function is given by the product (formally, the product is taken in the group) u×u×u, which has u symmetry. The transitions are therefore forbidden. Likewise, d orbitals have g symmetry (meaning gerade, even), so the triple product g×u×g also has u symmetry and the transition is forbidden. The wave function of a single electron is the product of a space-dependent wave function and a spin wave function. Spin is directional and can be said to have odd parity. It follows that transitions in which the spin "direction" changes are forbidden. In formal terms, only states with the same total spin quantum number are "spin-allowed". In crystal field theory, d-d transitions that are spin-forbidden are much weaker than spin-allowed transitions. Both can be observed, in spite of the Laporte rule, because the actual transitions are coupled to vibrations that are anti-symmetric and have the same symmetry as the dipole moment operator. Vibrational spectra In vibrational spectroscopy, transitions are observed between different vibrational states. In a fundamental vibration, the molecule is excited from its ground state (v = 0) to the first excited state (v = 1).
The symmetry of the ground-state wave function is the same as that of the molecule. It is, therefore, a basis for the totally symmetric representation in the point group of the molecule. It follows that, for a vibrational transition to be allowed, the symmetry of the excited state wave function must be the same as the symmetry of the transition moment operator. In infrared spectroscopy, the transition moment operator transforms as either x and/or y and/or z. The excited state wave function must also transform as at least one of these vectors. In Raman spectroscopy, the operator transforms as one of the second-order terms in the right-most column of the character table, below. The molecule methane, CH4, may be used as an example to illustrate the application of these principles. The molecule is tetrahedral and has Td symmetry. The vibrations of methane span the representations A1 + E + 2T2. Examination of the character table shows that all four vibrations are Raman-active, but only the T2 vibrations can be seen in the infrared spectrum. In the harmonic approximation, it can be shown that overtones are forbidden in both infrared and Raman spectra. However, when anharmonicity is taken into account, the transitions are weakly allowed. In Raman and infrared spectroscopy, the selection rules predict certain vibrational modes to have zero intensities in the Raman and/or the IR. Displacements from the ideal structure can result in relaxation of the selection rules and appearance of these unexpected phonon modes in the spectra. Therefore, the appearance of new modes in the spectra can be a useful indicator of symmetry breakdown. Rotational spectra The selection rule for rotational transitions, derived from the symmetries of the rotational wave functions in a rigid rotor, is ΔJ = ±1, where J is a rotational quantum number. Coupled transitions There are many types of coupled transition such as are observed in vibration–rotation spectra. The excited-state wave function is the product of two wave functions such as vibrational and rotational. The general principle is that the symmetry of the excited state is obtained as the direct product of the symmetries of the component wave functions. In rovibronic transitions, the excited states involve three wave functions. The infrared spectrum of hydrogen chloride gas shows rotational fine structure superimposed on the vibrational spectrum. This is typical of the infrared spectra of heteronuclear diatomic molecules. It shows the so-called P and R branches. The Q branch, located at the vibration frequency, is absent. Symmetric top molecules display the Q branch. This follows from the application of selection rules. Resonance Raman spectroscopy involves a kind of vibronic coupling. It results in much-increased intensity of fundamental and overtone transitions as the vibrations "steal" intensity from an allowed electronic transition. In spite of appearances, the selection rules are the same as in Raman spectroscopy. Angular momentum In general, electric (charge) radiation or magnetic (current, magnetic moment) radiation can be classified into multipoles E (electric) or M (magnetic) of order 2, e.g., E1 for electric dipole, E2 for quadrupole, or E3 for octupole. In transitions where the change in angular momentum between the initial and final states makes several multipole radiations possible, usually the lowest-order multipoles are overwhelmingly more likely, and dominate the transition. 
The emitted particle carries away angular momentum, with quantum number ℓ, which for the photon must be at least 1, since it is a vector particle (i.e., it has JP = 1−). Thus, there is no radiation from E0 (electric monopoles) or M0 (magnetic monopoles, which do not seem to exist). Since the total angular momentum has to be conserved during the transition, we have that Ji = Jf + L, where L is the angular momentum carried away by the radiation and its z-projection is mℓ, and where Ji and Jf are, respectively, the initial and final angular momenta of the atom. The corresponding quantum numbers ℓ and mℓ (z-axis angular momentum) must satisfy |Ji − Jf| ≤ ℓ ≤ Ji + Jf and Mi = Mf + mℓ. Parity is also preserved. For electric multipole transitions the product of the initial and final parities is (−1)^ℓ, while for magnetic multipoles it is (−1)^(ℓ+1). Thus, parity does not change for E-even or M-odd multipoles, while it changes for E-odd or M-even multipoles. These considerations generate different sets of transition rules depending on the multipole order and type. The expression forbidden transitions is often used, but this does not mean that these transitions cannot occur, only that they are electric-dipole-forbidden. These transitions are perfectly possible; they merely occur at a lower rate. If the rate for an E1 transition is non-zero, the transition is said to be permitted; if it is zero, then M1, E2, etc. transitions can still produce radiation, albeit with much lower transition rates. The transition rate decreases by a factor of about 1000 from one multipole to the next one, so the lowest multipole transitions are most likely to occur. Semi-forbidden transitions (resulting in so-called intercombination lines) are electric dipole (E1) transitions for which the selection rule that the spin does not change is violated. This is a result of the failure of LS coupling. Summary table In the summary table, J is the total angular momentum, L is the azimuthal quantum number, S is the spin quantum number, and MJ is the secondary total angular momentum quantum number. Which transitions are allowed is based on the hydrogen-like atom. A separate symbol is used in the table to indicate a forbidden transition. In hyperfine structure, the total angular momentum of the atom is F = I + J, where I is the nuclear spin angular momentum and J is the total angular momentum of the electron(s). Since F has a similar mathematical form as J, it obeys a selection rule table similar to the table above. Surface In surface vibrational spectroscopy, the surface selection rule is applied to identify the peaks observed in vibrational spectra. When a molecule is adsorbed on a substrate, the molecule induces opposite image charges in the substrate. The dipole moment of the molecule and the image charges perpendicular to the surface reinforce each other. In contrast, the dipole moments of the molecule and the image charges parallel to the surface cancel out. Therefore, only molecular vibrational peaks giving rise to a dynamic dipole moment perpendicular to the surface will be observed in the vibrational spectrum. See also Superselection rule Spin-forbidden reactions Singlet fission Notes References Further reading Section 4.1.5: Selection rules for Raman activity. Chapter 4: The interaction of radiation with a crystal. External links National Institute of Standards and Technology Lecture notes from The University of Sheffield Quantum mechanics Spectroscopy Nuclear magnetic resonance
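The familiar hydrogen-like electric-dipole rules that the summary table above collects (Δℓ = ±1, Δmℓ ∈ {0, ±1}, ΔS = 0) can be encoded directly. The sketch below labels a few candidate transitions; it ignores the finer effects (configuration mixing, higher multipoles) discussed in the text, and the orbital labels are illustrative only.

```python
# Minimal sketch (hydrogen-like, LS-coupling picture): electric-dipole (E1)
# allowedness from delta_l = +/-1, |delta_m_l| <= 1, delta_s = 0.
# Candidate transitions and quantum numbers are illustrative examples.

def e1_allowed(l1, ml1, s1, l2, ml2, s2):
    """True if an electric-dipole transition between the two orbitals is allowed."""
    return abs(l2 - l1) == 1 and abs(ml2 - ml1) <= 1 and s1 == s2

candidates = [
    ("2p -> 1s", 1, 0, 0.5, 0, 0, 0.5),
    ("2s -> 1s", 0, 0, 0.5, 0, 0, 0.5),   # forbidden: delta_l = 0
    ("3d -> 2p", 2, 1, 0.5, 1, 0, 0.5),
    ("3d -> 1s", 2, 0, 0.5, 0, 0, 0.5),   # forbidden: delta_l = 2
]
for name, *q in candidates:
    print(name, "E1 allowed" if e1_allowed(*q) else "E1 forbidden")
```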
Selection rule
[ "Physics", "Chemistry" ]
2,072
[ "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Molecular physics", "Instrumental analysis", "Theoretical physics", "Quantum mechanics", "Nuclear physics", "Spectroscopy" ]
1,314,419
https://en.wikipedia.org/wiki/Active%20vibration%20control
Active vibration control is the active application of force in an equal and opposite fashion to the forces imposed by external vibration. With this application, a precision industrial process can be maintained on a platform essentially vibration-free. Many precision industrial processes cannot take place if the machinery is being affected by vibration. For example, the production of semiconductor wafers requires that the machines used for the photolithography steps be used in an essentially vibration-free environment or the sub-micrometre features will be blurred. Active vibration control is now also commercially available for reducing vibration in helicopters, offering better comfort with less weight than traditional passive technologies. In the past, passive techniques were used. These include traditional vibration dampers, shock absorbers, and base isolation. The typical active vibration control system uses several components: A massive platform suspended by several active drivers (that may use voice coils, hydraulics, pneumatics, piezo-electric or other techniques) Three accelerometers that measure acceleration in the three degrees of freedom An electronic amplifier system that amplifies and inverts the signals from the accelerometers. A PID controller can be used to get better performance than a simple inverting amplifier. For very large systems, pneumatic or hydraulic components that provide the high drive power required. If the vibration is periodic, then the control system may adapt to the ongoing vibration, thereby providing better cancellation than would have been provided simply by reacting to each new acceleration without referring to past accelerations. Active vibration control has been successfully implemented for vibration attenuation of beam, plate and shell structures by numerous researchers. For effective active vibration control, the structure should be smart enough to sense external disturbances and react accordingly. In order to develop an active structure (also known as a smart structure), smart materials must be integrated into or embedded in the structure. The smart structure involves sensors (strain, acceleration, velocity, force etc.), actuators (force, inertial, strain etc.) and a control algorithm (feedback or feed forward). A number of smart materials have been investigated and fabricated over the years; among them are shape memory alloys, piezoelectric materials, optical fibers, electro-rheological fluids, and magneto-strictive materials. See also Active noise control Active vibration isolation Magnetorheological fluid Noise-cancelling headphones References Mechanical vibrations Earthquake engineering
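A minimal simulation sketch of the feedback idea in the component list above: a platform modelled as a mass-spring-damper, a sinusoidal floor disturbance, and a PID controller that drives the actuator from the measured acceleration. All parameter values are assumed for illustration and do not describe any specific product.

```python
# Minimal sketch: active vibration control of a mass-spring-damper platform.
# The accelerometer signal feeds a PID controller whose output is applied as
# an opposing actuator force on the next time step. Masses, stiffnesses,
# gains, and the disturbance are illustrative assumptions.

import math

def simulate(use_control, kp=300.0, ki=150.0, kd=2.0,
             m=10.0, c=40.0, k=4000.0, dt=1e-4, t_end=3.0, f_dist=15.0):
    """Return the peak |acceleration| of the platform over the last second."""
    x = v = prev_a = u = 0.0
    integ = 0.0
    peak = 0.0
    for n in range(int(t_end / dt)):
        t = n * dt
        disturbance = 5.0 * math.sin(2 * math.pi * f_dist * t)   # N, floor vibration
        a = (disturbance + u - c * v - k * x) / m                 # u is the actuator force
        v += a * dt
        x += v * dt
        if use_control:
            # PID acting on the accelerometer reading from this step.
            integ += a * dt
            deriv = (a - prev_a) / dt
            u = -(kp * a + ki * integ + kd * deriv)
            prev_a = a
        if t > t_end - 1.0:
            peak = max(peak, abs(a))
    return peak

print("passive peak acceleration:", round(simulate(False), 4))
print("active  peak acceleration:", round(simulate(True), 4))
```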
Active vibration control
[ "Physics", "Engineering" ]
488
[ "Structural engineering", "Civil engineering", "Mechanics", "Mechanical vibrations", "Earthquake engineering" ]