Dataset columns: id (int64, 39 – 79M), url (string, 32–168 chars), text (string, 7–145k chars), source (string, 2–105 chars), categories (list, 1–6 items), token_count (int64, 3–32.2k), subcategories (list, 0–27 items)
2,521,046
https://en.wikipedia.org/wiki/Free%20induction%20decay
In Fourier transform nuclear magnetic resonance spectroscopy, free induction decay (FID) is the observable nuclear magnetic resonance (NMR) signal generated by non-equilibrium nuclear spin magnetization precessing about the magnetic field (conventionally along z). This non-equilibrium magnetization is generally created by applying a radio-frequency pulse close to the Larmor frequency of the nuclear spins. If the magnetization vector has a non-zero component in the XY plane, then the precessing magnetization will induce a corresponding oscillating voltage in a detection coil surrounding the sample. This time-domain signal (a decaying sinusoid) is typically digitised and then Fourier transformed in order to obtain a frequency spectrum of the NMR signal, i.e. the NMR spectrum. The duration of the NMR signal is ultimately limited by T2 relaxation, but mutual interference of the different NMR frequencies present also causes the signal to be damped more quickly. When NMR frequencies are well-resolved, as is typically the case in the NMR of samples in solution, the overall decay of the FID is relaxation-limited and the FID is approximately exponential (with the time constant T2 replaced by the effective constant T2*). FID durations will then be of the order of seconds for nuclei such as 1H. Particularly if a limited number of frequency components are present, the FID may be analysed directly for quantitative determinations of physical properties, such as hydrogen content in aviation fuel or the solid-to-liquid ratio in dairy products (time-domain NMR). Advances in the development of quantum-scale sensors, particularly NV centres, have enabled the observation of the FID of single nuclei. When measuring the precession of a single nucleus, quantum mechanical measurement back action has to be considered. In this special case, the measurement itself also contributes to the decay, as predicted by quantum mechanics. References Nuclear magnetic resonance
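As an illustration of the time-domain-to-spectrum step described above, the following minimal Python sketch (not part of the original article) models an FID as an exponentially damped complex sinusoid and Fourier transforms it into a Lorentzian line; the offset frequency, T2* value and sampling parameters are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch, assuming an ideal single-resonance FID: an exponentially damped
# complex sinusoid sampled at a fixed dwell time, then Fourier transformed.
dt = 1e-4            # dwell time in seconds (assumed)
n = 32768            # number of complex points (assumed)
t = np.arange(n) * dt
offset_hz = 250.0    # resonance offset from the carrier, Hz (assumed)
t2_star = 0.5        # effective decay constant T2*, s (assumed)

fid = np.exp(2j * np.pi * offset_hz * t) * np.exp(-t / t2_star)

spectrum = np.fft.fftshift(np.fft.fft(fid))
freq = np.fft.fftshift(np.fft.fftfreq(n, d=dt))

# The absorption-mode line appears at the offset frequency with a full width at
# half maximum of 1 / (pi * T2*), here about 0.64 Hz.
peak_hz = freq[np.argmax(spectrum.real)]
print(f"peak at {peak_hz:.2f} Hz")
```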
Free induction decay
[ "Physics", "Chemistry" ]
398
[ "Nuclear chemistry stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance stubs", "Nuclear physics" ]
20,616,237
https://en.wikipedia.org/wiki/Boltzmann%E2%80%93Matano%20analysis
The Boltzmann–Matano method is used to convert the partial differential equation resulting from Fick's law of diffusion into a more easily solved ordinary differential equation, which can then be applied to calculate the diffusion coefficient as a function of concentration. Ludwig Boltzmann worked on Fick's second law to convert it into an ordinary differential equation, whereas Chujiro Matano performed experiments with diffusion couples and calculated the diffusion coefficients as a function of concentration in metal alloys. Specifically, Matano proved that the diffusion rate of A atoms into a B-atom crystal lattice is a function of the amount of A atoms already in the B lattice. The importance of the classic Boltzmann–Matano method consists in the ability to extract diffusivities from concentration–distance data. These methods, also known as inverse methods, have both proven to be reliable, convenient and accurate with the assistance of modern computational techniques. Boltzmann's transformation Boltzmann's transformation converts Fick's second law into an easily solvable ordinary differential equation. Assuming a diffusion coefficient D that is in general a function of concentration c, Fick's second law is ∂c/∂t = ∂/∂x (D ∂c/∂x), where t is time and x is distance. Boltzmann's transformation consists in introducing a variable ξ, defined as a combination of t and x: ξ = x/√t. The partial derivatives of ξ are ∂ξ/∂x = 1/√t and ∂ξ/∂t = −x/(2t√t) = −ξ/(2t). To introduce ξ into Fick's law, we express its partial derivatives in terms of ξ, using the chain rule: ∂c/∂t = (dc/dξ)(∂ξ/∂t) = −(ξ/2t) dc/dξ and ∂c/∂x = (dc/dξ)(∂ξ/∂x) = (1/√t) dc/dξ. Inserting these expressions into Fick's law produces the following modified form: −(ξ/2t) dc/dξ = (1/√t) ∂/∂x (D dc/dξ). Note how the time variable on the right-hand side could be taken outside of the partial derivative, since the latter regards only the variable x. It is now possible to remove the last reference to x by using again the same chain rule used above to obtain ∂ξ/∂x: −(ξ/2t) dc/dξ = (1/t) d/dξ (D dc/dξ). Because of the appropriate choice in the definition of ξ, the time variable t can now also be eliminated, leaving ξ as the only variable in the equation, which is now an ordinary differential equation: −(ξ/2) dc/dξ = d/dξ (D dc/dξ). This form is significantly easier to solve numerically, and one only needs to perform a back-substitution of t or x into the definition of ξ to find the value of the other variable. The parabolic law Observing the previous equation, a trivial solution is found for the case dc/dξ = 0, that is, when concentration is constant over ξ. This can be interpreted as the rate of advancement of a concentration front being proportional to the square root of time (x ∝ √t), or, equivalently, as the time necessary for a concentration front to arrive at a certain position being proportional to the square of the distance (t ∝ x²); the square term gives the name parabolic law. Matano's method Chujiro Matano applied Boltzmann's transformation to obtain a method to calculate diffusion coefficients as a function of concentration in metal alloys. Two alloys with different concentrations are put into contact and annealed at a given temperature for a given time t, typically several hours; the sample is then cooled to ambient temperature, and the concentration profile is virtually "frozen". The concentration profile c at time t can then be extracted as a function of the x coordinate. In Matano's notation, the two concentrations are indicated as cL and cR (L and R for left and right, as shown in most diagrams), with the implicit assumption that cL > cR; this is however not strictly necessary, as the formulas hold also if cR is the larger one.
The initial conditions are c(x < 0, t = 0) = cL and c(x > 0, t = 0) = cR. Also, the alloys on both sides are assumed to stretch to infinity, which means in practice that they are large enough that the concentration at their far ends is unaffected by the transient for the entire duration of the experiment. To extract D from Boltzmann's formulation above, we integrate it from ξ = +∞, where c = cR at all times, to a generic ξ*; we can immediately simplify dξ, and with a change of variables we get −(1/2) ∫_{cR}^{c*} ξ dc = D(c*) (dc/dξ)|_{ξ*}. We can translate ξ back into its definition and bring the t terms out of the integrals, as t is constant and given as the time of annealing in the Matano method; on the right-hand side, extraction from the integral is trivial and follows from the definition. We know that dc/dx → 0 as c → cR, that is, the concentration curve "flattens out" when approaching the limit concentration value. We can then rearrange: D(c*) = −(1/(2t)) (dx/dc)|_{c*} ∫_{cR}^{c*} x dc. Knowing the concentration profile c(x) at annealing time t, and assuming it is invertible as x(c), we can then calculate the diffusion coefficient for all concentrations between cR and cL. The Matano interface The last formula has one significant shortcoming: no information is given about the reference according to which x should be measured. It was not necessary to introduce one as Boltzmann's transformation worked fine without a specific reference for x; it is easy to verify that the Boltzmann transformation holds also when using x − XM instead of plain x. XM is often indicated as the Matano interface, and is in general not coincident with x = 0: since D is in general variable with concentration c, the concentration profile is not necessarily symmetric. Introducing XM in the expression for D(c*) above, however, introduces a bias that appears to make the value of D completely an arbitrary function of which XM we choose. XM, however, can only assume one value due to physical constraints. Since the denominator term dc/dx goes to zero for c → cL (as the concentration profile flattens out), the integral in the numerator must also tend to zero in the same conditions. If this were not the case, D(cL) would tend to infinity, which is not physically meaningful. Note that, strictly speaking, this does not guarantee that D does not tend to infinity, but it is one of the necessary conditions to ensure that it does not. The condition is then ∫_{cR}^{cL} (x − XM) dc = 0, i.e. XM = (1/(cL − cR)) ∫_{cR}^{cL} x dc. In other words, XM is the average position weighted over concentrations, and can easily be found from the concentration profile provided it is invertible to the form x(c). Sources M. E. Glicksman, Diffusion in Solids: Field Theory, Solid-State Principles, and Applications, Wiley, New York, 2000. Matano, Chujiro. "On the Relation between the Diffusion-Coefficients and Concentrations of Solid Metals (The Nickel-Copper System)". Japanese Journal of Physics. Jan. 16, 1933. References Diffusion
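As a numerical illustration of the procedure just described (not part of the original article), the sketch below fabricates an error-function concentration profile with a known constant diffusivity, locates the Matano interface, and recovers D(c) from the profile; the annealing time, diffusivity and synthetic profile are assumptions for the example.

```python
import numpy as np
from scipy.special import erfc

# Minimal sketch, assuming a synthetic diffusion couple: the profile below is the
# constant-D error-function solution, so the Matano analysis should return D0 at
# every concentration. Real use would replace x, c with measured data.
t_anneal = 3600.0                      # annealing time, s (assumed)
D0 = 1e-14                             # diffusivity used to build the test profile, m^2/s
cL, cR = 1.0, 0.0                      # terminal concentrations (left, right)
x = np.linspace(-2e-4, 2e-4, 2001)     # position, m
c = cR + (cL - cR) * 0.5 * erfc(x / (2.0 * np.sqrt(D0 * t_anneal)))

def trap(y, xs):
    """Trapezoidal integral of y over xs."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xs)))

dcdx = np.gradient(c, x)
# Matano interface: average position weighted over concentrations
X_M = -trap(x * dcdx, x) / (cL - cR)

def matano_D(i):
    """Diffusivity at concentration c[i] from the Boltzmann-Matano formula."""
    integral = -trap((x[i:] - X_M) * dcdx[i:], x[i:])   # integral of (x - X_M) dc from cR to c[i]
    return -integral / (2.0 * t_anneal * dcdx[i])

# Evaluate away from the flat tails, where dc/dx is not vanishingly small
for frac in (0.25, 0.5, 0.75):
    i = int(np.argmin(np.abs(c - (cR + frac * (cL - cR)))))
    print(f"c = {c[i]:.2f}:  D = {matano_D(i):.3e} m^2/s  (input D0 = {D0:.1e})")
```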
Boltzmann–Matano analysis
[ "Physics", "Chemistry" ]
1,326
[ "Transport phenomena", "Physical phenomena", "Diffusion" ]
20,616,573
https://en.wikipedia.org/wiki/Genwi
GENWI is a privately held technology company based in San Jose, CA that provides a mobile content enablement platform. GENWI is short for "Generation Wireless". History GENWI was a free web-based news reader, or aggregator, initially released in March 2007. Genwi provided a news feed service by enabling users to publish their feeds to one profile and follow others' news feeds in the feed reader – this feed reader was called "Wire" and was capable of reading RSS, Media RSS, iTunes RSS and ATOM feeds. Genwi offered a suite of social networking features built into the RSS reader. Users were able to add friends, send messages, leave comments and share individual feed items. The site underwent a major redesign in November 2008 and was shut down in 2009. In January 2010, GENWI, Inc. used the same technology that built their RSS reader to launch iSites.us, a smartphone app builder and management system, which enables businesses to build applications for iPhone and Android using RSS, ATOM or social feeds. GENWI uses cloud-based technology to keep more than 1,500 native apps up-to-date and to instantly build HTML5 apps for iPhone. In September 2011, GENWI launched Condé Nast's "The Daily W" app and rebranded the iSites brand back to GENWI and now helps publishers and brands create engaging native and HTML5 apps with a cloud-based mobile content management system, or mCMS. See also Enterprise mobile application Mobile commerce References External links GENWI Mobile technology Social information processing Smartphones
Genwi
[ "Technology" ]
323
[ "nan" ]
22,115,338
https://en.wikipedia.org/wiki/Fractional%20Calculus%20and%20Applied%20Analysis
Fractional Calculus and Applied Analysis is a peer-reviewed mathematics journal published by Walter de Gruyter. It covers research on fractional calculus, special functions, integral transforms, and some closely related areas of applied analysis. The journal is abstracted and indexed in Science Citation Index Expanded, Scopus, Current Contents/Physical, Chemical and Earth Sciences, Zentralblatt MATH, and Mathematical Reviews. The journal's Founding Editors were Professors Eric Love, Ian Sneddon, Bogoljub Stanković, Rudolf Gorenflo, Danuta Przeworska-Rolewicz, Gary Roach, Anatoly Kilbas, and Wen Chen. References External links Mathematical analysis journals Academic journals established in 1998 Quarterly journals De Gruyter academic journals English-language journals
Fractional Calculus and Applied Analysis
[ "Mathematics" ]
159
[ "Mathematical analysis", "Mathematical analysis journals" ]
22,116,598
https://en.wikipedia.org/wiki/GHS%20precautionary%20statements
Precautionary statements form part of the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). They are intended to form a set of standardized phrases giving advice about the correct handling of chemical substances and mixtures, which can be translated into different languages. As such, they serve the same purpose as the well-known S-phrases, which they are intended to replace. Precautionary statements are one of the key elements for the labelling of containers under the GHS, along with: an identification of the product; one or more hazard pictograms (where necessary); a signal word – either Danger or Warning – where necessary; hazard statements, indicating the nature and degree of the risks posed by the product; and the identity of the supplier (who might be a manufacturer or importer). Each precautionary statement is designated a code, starting with the letter P and followed by three digits. Statements which correspond to related hazards are grouped together by code number, so the numbering is not consecutive. The code is used for reference purposes, for example to help with translations, but it is the actual phrase which should appear on labels and safety data sheets. Some precautionary phrases are combinations, indicated by a plus sign "+". In several cases, there is a choice of wording, for example "Avoid breathing dust/fume/gas/mist/vapours/spray": the supplier or regulatory agency should choose the appropriate wording for the product concerned. General precautionary statements Note: "" = to be specified Prevention precautionary statements Response precautionary statements Storage precautionary statements Disposal precautionary statements References External links ("GHS Rev.10") (the "CLP Regulation") Chemical Hazard & Precautionary Phrases in 23 European Languages, machine-readable and versioned Precautionary statements
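The coding scheme described above (the letter P, three digits, and optional "+"-joined combinations) lends itself to simple machine validation. The following short sketch is not part of the GHS text; the example strings are used purely to illustrate the format, not as an authoritative list of codes.

```python
import re

# Minimal sketch, assuming we only care about the format "P" + three digits, with
# combined statements joined by "+". The example strings below are format
# illustrations, not an authoritative list of GHS precautionary statement codes.
P_CODE = re.compile(r"^P\d{3}(\+P\d{3})*$")

for code in ["P210", "P301+P310", "P12", "H315"]:
    print(code, "matches" if P_CODE.match(code) else "does not match")
```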
GHS precautionary statements
[ "Chemistry" ]
389
[ "Globally Harmonized System" ]
22,122,235
https://en.wikipedia.org/wiki/Topology%20control
Topology control is a technique used in distributed computing to alter the underlying network (modeled as a graph) to reduce the cost of distributed algorithms if run over the resulting graphs. It is a basic technique in distributed algorithms. For instance, a (minimum) spanning tree is used as a backbone to reduce the cost of broadcast from O(m) to O(n), where m and n are the number of edges and vertices in the graph, respectively. The term "topology control" is used mostly by the wireless ad hoc and sensor networks research community. The main aim of topology control in this domain is to save energy, reduce interference between nodes and extend the lifetime of the network. However, recently the term has also been gaining traction with regard to control of the network structure of electric power systems. Topology construction and maintenance Lately, topology control algorithms have been divided into two subproblems: topology construction, in charge of the initial reduction, and topology maintenance, in charge of the maintenance of the reduced topology so that characteristics like connectivity and coverage are preserved. Topology construction is the first stage of a topology control protocol. Once the initial topology is deployed, especially when the location of the nodes is random, the administrator has no control over the design of the network; for example, some areas may be very dense, showing a high number of redundant nodes, which will increase the number of message collisions and will provide several copies of the same information from similarly located nodes. However, the administrator has control over some parameters of the network: transmission power of the nodes, state of the nodes (active or sleeping), role of the nodes (clusterhead, gateway, regular), etc. By modifying these parameters, the topology of the network can change. At the same time that a topology is reduced and the network starts serving its purpose, the selected nodes start spending energy: the reduced topology starts losing its optimality as soon as full network activity evolves. After some time being active, some nodes will start to run out of energy. Especially in wireless sensor networks with multihopping, intensive packet forwarding causes nodes that are closer to the sink to spend higher amounts of energy than nodes that are farther away. Topology control has to be executed periodically in order to preserve the desired properties such as connectivity, coverage, and density. Topology construction algorithms There are many ways to perform topology construction: Optimizing the node locations during the deployment phase Change the transmission range of the nodes Turn off nodes from the network Create a communication backbone Clustering Adding new nodes to the network to preserve connectivity (Federated Wireless sensor networks) Some examples of topology construction algorithms are: Tx range-based Geometry-based: Gabriel graph (GG), Relative neighborhood graph (RNG), Voronoi diagram Spanning Tree Based: LMST, iMST Direction Based: Yao graph and Nearest neighbor graph, Cone Based Topology Control (CBTC), Distributed RNG Neighbor based: KNeigh, XTC Routing based: COMPOW Hierarchical CDS-based: A3, EECDS, CDS-Rule K Cluster-based: Low Energy Adaptive Clustering Hierarchy (LEACH), HEED Graphical examples Topology maintenance algorithms In the same manner as topology construction, there are many ways to perform topology maintenance: Global Vs. Local Dynamic Vs. Static Vs. Hybrid Triggered by time, energy, density, random, etc.
Some examples of topology maintenance algorithms are: Global DGTRec (Dynamic Global Topology Recreation): Periodically, wake up all inactive nodes, reset the existing reduced topology in the network and apply a topology construction protocol. SGTRot (Static Global Topology Rotation): Initially, the topology construction protocol must create more than one reduced topology (hopefully as disjoint as possible). Then, periodically, wake up all inactive nodes, and change the current active reduced topology to the next, like in a Christmas tree. HGTRotRec (Hybrid Global Topology Rotation and Recreation): Works like SGTRot, but when the current active reduced topology detects a certain level of disconnection, it resets the reduced topology and invokes the topology construction protocol to recreate that particular reduced topology. Local DL-DSR (Dynamic Local DSR-based TM): This protocol, based on the Dynamic Source Routing (DSR) routing algorithm, recreates the paths of disconnected nodes when a node fails. All of the above protocols can be found in the references. In Atarraya, two versions of each of these protocols are implemented with different triggers: one by time, and the other one by energy. In addition, Atarraya allows the pairing of all the topology construction and topology maintenance protocols in order to test the optimal maintenance policy for a particular construction protocol; it is important to mention that many papers on topology control have not performed any study in this regard. Further reading Many books and papers have been written on the topic: Topology Control for Wireless Sensor Networks. ACM MobiCom 2003. Topology Control in Wireless Sensor Networks: with a companion simulation tool for teaching and research. Miguel Labrador and Pedro Wightman. Springer. 2009. Topology Control in Wireless Ad Hoc and Sensor Networks. Paolo Santi. Wiley. 2005. Protocols and Architectures for Wireless Sensor Networks. Holger Karl and Andreas Willig. Wiley-Interscience. 2007. Capacity-Optimized Topology Control for MANETs with Cooperative Communications. 2011. Robust Topology control for indoor wireless sensor networks. 2008. Simulation of topology control There are many networking simulation tools; however, there is one specifically designed for the testing, design and teaching of topology control algorithms: Atarraya. Atarraya is an event-driven simulator developed in Java that presents a new framework for designing and testing topology control algorithms. It is an open source application, distributed under the GNU V.3 license. It was developed by Pedro Wightman, a Ph.D. candidate at the University of South Florida, with the collaboration of Dr. Miguel Labrador. A paper with the detailed description of the simulator was presented at SIMUTools 2009. The paper can be found online. References Network topology Wireless sensor network
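To make the construction idea concrete, here is a small, self-contained sketch (not from the article) of one of the spanning-tree-based approaches listed above: nodes are scattered at random, every pair within radio range forms a candidate link, and a distance-weighted minimum spanning tree is kept as the reduced backbone. The node count, radio range and distance-as-cost weighting are assumptions of the example.

```python
import random
import networkx as nx

# Minimal sketch, assuming random node placement in a unit square and a fixed radio
# range. The distance-weighted minimum spanning tree serves as the reduced topology;
# if the full graph happens to be disconnected, a spanning forest is returned instead.
random.seed(1)
n_nodes, radio_range = 50, 0.25
pos = {i: (random.random(), random.random()) for i in range(n_nodes)}

full = nx.Graph()
full.add_nodes_from(pos)
for i in pos:
    for j in pos:
        if i < j:
            d = ((pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2) ** 0.5
            if d <= radio_range:
                full.add_edge(i, j, weight=d)   # weight approximates transmission cost

backbone = nx.minimum_spanning_tree(full, weight="weight")
print(f"full topology: {full.number_of_edges()} links, "
      f"reduced backbone: {backbone.number_of_edges()} links")
```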
Topology control
[ "Mathematics", "Technology" ]
1,233
[ "Network topology", "Wireless networking", "Topology", "Wireless sensor network" ]
22,122,416
https://en.wikipedia.org/wiki/Glass%20transition
The glass–liquid transition, or glass transition, is the gradual and reversible transition in amorphous materials (or in amorphous regions within semicrystalline materials) from a hard and relatively brittle "glassy" state into a viscous or rubbery state as the temperature is increased. An amorphous solid that exhibits a glass transition is called a glass. The reverse transition, achieved by supercooling a viscous liquid into the glass state, is called vitrification. The glass-transition temperature Tg of a material characterizes the range of temperatures over which this glass transition occurs (as an experimental definition, typically marked as 100 s of relaxation time). It is always lower than the melting temperature, Tm, of the crystalline state of the material, if one exists, because the glass is a higher energy state (or enthalpy at constant pressure) than the corresponding crystal. Hard plastics like polystyrene and poly(methyl methacrylate) are used well below their glass transition temperatures, i.e., when they are in their glassy state. Their Tg values are both around 100 °C. Rubber elastomers like polyisoprene and polyisobutylene are used above their Tg, that is, in the rubbery state, where they are soft and flexible; crosslinking prevents free flow of their molecules, thus endowing rubber with a set shape at room temperature (as opposed to a viscous liquid). Despite the change in the physical properties of a material through its glass transition, the transition is not considered a phase transition; rather it is a phenomenon extending over a range of temperature and defined by one of several conventions. Such conventions include a constant cooling rate and a viscosity threshold of 10^12 Pa·s, among others. Upon cooling or heating through this glass-transition range, the material also exhibits a smooth step in the thermal-expansion coefficient and in the specific heat, with the location of these effects again being dependent on the history of the material. The question of whether some phase transition underlies the glass transition is a matter of ongoing research. Characteristics The glass transition of a liquid to a solid-like state may occur with either cooling or compression. The transition comprises a smooth increase in the viscosity of a material by as much as 17 orders of magnitude within a temperature range of 500 K without any pronounced change in material structure. This transition is in contrast to the freezing or crystallization transition, which is a first-order phase transition in the Ehrenfest classification and involves discontinuities in thermodynamic and dynamic properties such as volume, energy, and viscosity. In many materials that normally undergo a freezing transition, rapid cooling will avoid this phase transition and instead result in a glass transition at some lower temperature. Other materials, such as many polymers, lack a well defined crystalline state and easily form glasses, even upon very slow cooling or compression. The tendency for a material to form a glass while quenched is called glass forming ability. This ability depends on the composition of the material and can be predicted by the rigidity theory. Below the transition temperature range, the glassy structure does not relax in accordance with the cooling rate used. The expansion coefficient for the glassy state is roughly equivalent to that of the crystalline solid.
If slower cooling rates are used, the increased time for structural relaxation (or intermolecular rearrangement) to occur may result in a higher density glass product. Similarly, by annealing (and thus allowing for slow structural relaxation) the glass structure in time approaches an equilibrium density corresponding to the supercooled liquid at this same temperature. Tg is located at the intersection between the cooling curve (volume versus temperature) for the glassy state and the supercooled liquid. The configuration of the glass in this temperature range changes slowly with time towards the equilibrium structure. The principle of the minimization of the Gibbs free energy provides the thermodynamic driving force necessary for the eventual change. At somewhat higher temperatures than Tg, the structure corresponding to equilibrium at any temperature is achieved quite rapidly. In contrast, at considerably lower temperatures, the configuration of the glass remains sensibly stable over increasingly extended periods of time. Thus, the liquid-glass transition is not a transition between states of thermodynamic equilibrium. It is widely believed that the true equilibrium state is always crystalline. Glass is believed to exist in a kinetically locked state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon. Time and temperature are interchangeable quantities (to some extent) when dealing with glasses, a fact often expressed in the time–temperature superposition principle. On cooling a liquid, internal degrees of freedom successively fall out of equilibrium. However, there is a longstanding debate whether there is an underlying second-order phase transition in the hypothetical limit of infinitely long relaxation times. In a more recent model of the glass transition, the glass transition temperature corresponds to the temperature at which the largest openings between the vibrating elements in the liquid matrix become smaller than the smallest cross-sections of the elements or parts of them when the temperature is decreasing. As a result of the fluctuating input of thermal energy into the liquid matrix, the harmonics of the oscillations are constantly disturbed and temporary cavities ("free volume") are created between the elements, the number and size of which depend on the temperature. The glass transition temperature Tg0 defined in this way is a fixed material constant of the disordered (non-crystalline) state that is dependent only on the pressure. As a result of the increasing inertia of the molecular matrix when approaching Tg0, the setting of thermal equilibrium is successively delayed, so that the usual measuring methods for determining the glass transition temperature in principle deliver Tg values that are too high. In principle, the slower the temperature change rate is set during the measurement, the closer the measured Tg value approaches Tg0. Techniques such as dynamic mechanical analysis can be used to measure the glass transition temperature. Formal definitions The definition of the glass and the glass transition are not settled, and many definitions have been proposed over the past century. Franz Simon: Glass is a rigid material obtained from freezing-in a supercooled liquid in a narrow temperature range. Zachariasen: Glass is a topologically disordered network, with short range order equivalent to that in the corresponding crystal.
Glass is a "frozen liquid" (i.e., a liquid whose ergodicity has been broken), which spontaneously relaxes towards the supercooled liquid state over a long enough time. Glasses are thermodynamically non-equilibrium, kinetically stabilized amorphous solids, in which the molecular disorder and the thermodynamic properties corresponding to the state of the respective under-cooled melt at a temperature T* are frozen-in. Hereby T* differs from the actual temperature T. Glass is a nonequilibrium, non-crystalline condensed state of matter that exhibits a glass transition. The structure of glasses is similar to that of their parent supercooled liquids (SCL), and they spontaneously relax toward the SCL state. Their ultimate fate is to solidify, i.e., crystallize. Transition temperature Tg Refer to the figure on the bottom right plotting the heat capacity as a function of temperature. In this context, Tg is the temperature corresponding to point A on the curve. Different operational definitions of the glass transition temperature Tg are in use, and several of them are endorsed as accepted scientific standards. Nevertheless, all definitions are arbitrary, and all yield different numeric results: at best, values of Tg for a given substance agree within a few kelvins. One definition refers to the viscosity, fixing Tg at a value of 10^13 poise (or 10^12 Pa·s). As evidenced experimentally, this value is close to the annealing point of many glasses. In contrast to viscosity, the thermal expansion, heat capacity, shear modulus, and many other properties of inorganic glasses show a relatively sudden change at the glass transition temperature. Any such step or kink can be used to define Tg. To make this definition reproducible, the cooling or heating rate must be specified. The most frequently used definition of Tg uses the energy release on heating in differential scanning calorimetry (DSC, see figure). Typically, the sample is first cooled at 10 K/min and then heated at that same rate. Yet another definition of Tg uses the kink in dilatometry (a.k.a. thermal expansion): refer to the figure on the top right. Here, relatively slow heating rates are common. The linear sections below and above Tg are colored green. Tg is the temperature at the intersection of the red regression lines (this line-intersection construction is illustrated numerically in the sketch further below). Summarized below are Tg values characteristic of certain classes of materials. Polymers Dry nylon-6 has a glass transition temperature of . Nylon-6,6 in the dry state has a glass transition temperature of about . Polyethene, by contrast, has a much lower glass transition range. The above are only mean values, as the glass transition temperature depends on the cooling rate and molecular weight distribution and could be influenced by additives. For a semi-crystalline material, such as polyethene that is 60–80% crystalline at room temperature, the quoted glass transition refers to what happens to the amorphous part of the material upon cooling. Silicates and other covalent network glasses Linear heat capacity In 1971, Zeller and Pohl discovered that when glass is at a very low temperature ~1 K, its specific heat has a linear component: c ≈ c1·T + c3·T³. This is an unusual effect, because crystalline material typically has c ∝ T³, as in the Debye model. This was explained by the two-level system hypothesis, which states that a glass is populated by two-level systems, which look like a double potential well separated by a wall. The wall is high enough such that resonance tunneling does not occur, but thermal tunneling does occur.
Namely, if the two wells have an energy difference ε, then a particle in one well can tunnel to the other well by thermal interaction with the environment. Now, imagine that there are many two-level systems in the glass, and their ε is randomly distributed but fixed ("quenched disorder"); then as temperature drops, more and more of these two-level systems are frozen out (meaning that it takes such a long time for a tunneling to occur that they cannot be experimentally observed). Consider a single two-level system that is not frozen-out, whose energy gap is ε. It is in a Boltzmann distribution, so its average energy is ⟨E⟩ = ε/(e^(ε/kT) + 1). Now, assume that the two-level systems are all quenched, so that each ε varies little with temperature. In that case, we can write n(ε) as the density of states with energy gap ε. We also assume that n(ε) is positive and smooth near ε = 0. Then, the total energy contributed by those two-level systems is E(T) ≈ ∫ n(ε) ε/(e^(ε/kT) + 1) dε ≈ n(0) (π²/12) (kT)². The effect is that the average energy in these two-level systems grows as T², leading to a term in the heat capacity that is linear in T. Experimental data In experimental measurements, the specific heat capacity of glass is measured at different temperatures, and a graph is plotted. Assuming that c = c1·T + c3·T³, a plot of c/T against T² should show a straight line, with the slope giving the typical Debye-like heat capacity coefficient c3 and the vertical intercept giving the anomalous linear component c1. Kauzmann's paradox As a liquid is supercooled, the difference in entropy between the liquid and solid phase decreases. By extrapolating the heat capacity of the supercooled liquid below its glass transition temperature, it is possible to calculate the temperature at which the difference in entropies becomes zero. This temperature has been named the Kauzmann temperature. If a liquid could be supercooled below its Kauzmann temperature, and it did indeed display a lower entropy than the crystal phase, this would be paradoxical, as the liquid phase should have the same vibrational entropy, but much higher positional entropy, as the crystal phase. This is the Kauzmann paradox, still not definitively resolved. Possible resolutions There are many possible resolutions to the Kauzmann paradox. Kauzmann himself resolved the entropy paradox by postulating that all supercooled liquids must crystallize before the Kauzmann temperature is reached. Perhaps at the Kauzmann temperature, glass reaches an ideal glass phase, which is still amorphous, but has a long-range amorphous order which decreases its overall entropy to that of the crystal. The ideal glass would be a true phase of matter. The ideal glass is hypothesized, but cannot be observed naturally, as it would take too long to form. Something approaching an ideal glass has been observed as "ultrastable glass" formed by vapor deposition. Perhaps there must be a phase transition before the entropy of the liquid decreases. In this scenario, the transition temperature is known as the calorimetric ideal glass transition temperature T0c. In this view, the glass transition is not merely a kinetic effect, i.e. merely the result of fast cooling of a melt, but there is an underlying thermodynamic basis for glass formation. Perhaps the heat capacity of the supercooled liquid near the Kauzmann temperature smoothly decreases to a smaller value. Perhaps a first-order phase transition to another liquid state occurs before the Kauzmann temperature, with the heat capacity of this new state being less than that obtained by extrapolation from higher temperature.
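The linear-in-T argument above can be checked numerically. The short sketch below is not part of the original article; it assumes k_B = 1 and a constant density n(ε) = 1 of two-level systems up to a cutoff much larger than T, and confirms that the resulting heat capacity divided by T approaches the constant π²/6.

```python
import numpy as np

# Minimal sketch, assuming units with k_B = 1 and a flat density of two-level systems
# n(eps) = 1 up to a cutoff much larger than T (both assumptions of this example).
eps = np.linspace(1e-6, 50.0, 200_000)     # energy gaps, with cutoff >> T

def trap(y, xs):
    """Trapezoidal integral of y over xs."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xs)))

def total_energy(T):
    # E(T) = integral of n(eps) * eps / (exp(eps/T) + 1) d eps
    return trap(eps / (np.exp(np.minimum(eps / T, 700.0)) + 1.0), eps)

for T in (0.05, 0.1, 0.2):
    dT = 1e-3 * T
    C = (total_energy(T + dT) - total_energy(T - dT)) / (2.0 * dT)   # heat capacity
    print(f"T = {T:.2f}:  C/T = {C / T:.4f}   (expected pi^2/6 = {np.pi**2 / 6:.4f})")
```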
In specific materials Silica, SiO2 Silica (the chemical compound SiO2) has a number of distinct crystalline forms in addition to the quartz structure. Nearly all of the crystalline forms involve tetrahedral SiO4 units linked together by shared vertices in different arrangements (stishovite, composed of linked SiO6 octahedra, is the main exception). Si-O bond lengths vary between the different crystal forms. For example, in α-quartz the bond length is , whereas in α-tridymite it ranges from . The Si-O-Si bond angle also varies from 140° in α-tridymite to 144° in α-quartz to 180° in β-tridymite. Any deviations from these standard parameters constitute microstructural differences or variations that represent an approach to an amorphous, vitreous or glassy solid. The transition temperature Tg in silicates is related to the energy required to break and re-form covalent bonds in an amorphous (or random network) lattice of covalent bonds. The Tg is clearly influenced by the chemistry of the glass. For example, addition of elements such as B, Na, K or Ca to a silica glass, which have a valency less than 4, helps in breaking up the network structure, thus reducing the Tg. Alternatively, P, which has a valency of 5, helps to reinforce an ordered lattice, and thus increases the Tg. Tg is directly proportional to bond strength; e.g. it depends on quasi-equilibrium thermodynamic parameters of the bonds, e.g. on the enthalpy Hd and entropy Sd of configurons – broken bonds: Tg = Hd / [Sd + R ln((1 − fc)/fc)], where R is the gas constant and fc is the percolation threshold. For strong melts such as SiO2, the percolation threshold in the above equation is the universal Scher–Zallen critical density in 3-D space, e.g. fc = 0.15; however, for fragile materials the percolation thresholds are material-dependent and fc ≪ 1. The enthalpy Hd and the entropy Sd of configurons – broken bonds – can be found from available experimental data on viscosity. On the surface of SiO2 films, scanning tunneling microscopy has resolved clusters of ca. 5 SiO2 in diameter that move in a two-state fashion on a time scale of minutes. This is much faster than dynamics in the bulk, but in agreement with models that compare bulk and surface dynamics. Polymers In polymers the glass transition temperature, Tg, is often expressed as the temperature at which the Gibbs free energy is such that the activation energy for the cooperative movement of 50 or so elements of the polymer is exceeded. This allows molecular chains to slide past each other when a force is applied. From this definition, we can see that the introduction of relatively stiff chemical groups (such as benzene rings) will interfere with the flowing process and hence increase Tg. The stiffness of thermoplastics decreases due to this effect (see figure). When the glass transition temperature has been reached, the stiffness stays the same for a while, i.e., at or near E2, until the temperature exceeds Tm, and the material melts. This region is called the rubber plateau. In ironing, a fabric is heated through this transition so that the polymer chains become mobile. The weight of the iron then imposes a preferred orientation. Tg can be significantly decreased by addition of plasticizers into the polymer matrix. Smaller molecules of plasticizer embed themselves between the polymer chains, increasing the spacing and free volume, and allowing them to move past one another even at lower temperatures.
Addition of plasticizer can effectively take control of polymer chain dynamics and dominate the amounts of the associated free volume, so that the increased mobility of polymer ends is not apparent. The addition of nonreactive side groups to a polymer can also make the chains stand off from one another, reducing Tg. If a plastic with some desirable properties has a Tg that is too high, it can sometimes be combined with another in a copolymer or composite material with a Tg below the temperature of intended use. Note that some plastics are used at high temperatures, e.g., in automobile engines, and others at low temperatures. In viscoelastic materials, the presence of liquid-like behavior depends on the properties of the material and so varies with the rate of applied load, i.e., how quickly a force is applied. The silicone toy Silly Putty behaves quite differently depending on the time rate of applying a force: pull slowly and it flows, acting as a heavily viscous liquid; hit it with a hammer and it shatters, acting as a glass. On cooling, rubber undergoes a liquid-glass transition, which has also been called a rubber-glass transition. Mechanics of vitrification Molecular motion in condensed matter can be represented by a Fourier series whose physical interpretation consists of a superposition of longitudinal and transverse waves of atomic displacement with varying directions and wavelengths. In monatomic systems, these waves are called density fluctuations. (In polyatomic systems, they may also include compositional fluctuations.) Thus, thermal motion in liquids can be decomposed into elementary longitudinal vibrations (or acoustic phonons) while transverse vibrations (or shear waves) were originally described only in elastic solids exhibiting the highly ordered crystalline state of matter. In other words, simple liquids cannot support an applied force in the form of a shearing stress, and will yield mechanically via macroscopic plastic deformation (or viscous flow). Furthermore, the fact that a solid deforms locally while retaining its rigidity – while a liquid yields to macroscopic viscous flow in response to the application of an applied shearing force – is accepted by many as the mechanical distinction between the two. The inadequacies of this conclusion, however, were pointed out by Frenkel in his revision of the kinetic theory of solids and the theory of elasticity in liquids. This revision follows directly from the continuous character of the viscoelastic crossover from the liquid state into the solid one when the transition is not accompanied by crystallization—ergo the supercooled viscous liquid. Thus we see the intimate correlation between transverse acoustic phonons (or shear waves) and the onset of rigidity upon vitrification, as described by Bartenev in his mechanical description of the vitrification process. The velocities of longitudinal acoustic phonons in condensed matter are directly responsible for the thermal conductivity that levels out temperature differentials between compressed and expanded volume elements. Kittel proposed that the behavior of glasses be interpreted in terms of an approximately constant "mean free path" for lattice phonons, and that the value of the mean free path is of the order of magnitude of the scale of disorder in the molecular structure of a liquid or solid. The thermal phonon mean free paths or relaxation lengths of a number of glass formers have been plotted versus the glass transition temperature, indicating a linear relationship between the two.
This has suggested a new criterion for glass formation based on the value of the phonon mean free path. It has often been suggested that heat transport in dielectric solids occurs through elastic vibrations of the lattice, and that this transport is limited by elastic scattering of acoustic phonons by lattice defects (e.g. randomly spaced vacancies). These predictions were confirmed by experiments on commercial glasses and glass ceramics, where mean free paths were apparently limited by "internal boundary scattering" to length scales of . The relationship between these transverse waves and the mechanism of vitrification has been described by several authors who proposed that the onset of correlations between such phonons results in an orientational ordering or "freezing" of local shear stresses in glass-forming liquids, thus yielding the glass transition. Electronic structure The influence of thermal phonons and their interaction with electronic structure is a topic that was appropriately introduced in a discussion of the resistance of liquid metals. Lindemann's theory of melting is referenced, and it is suggested that the drop in conductivity in going from the crystalline to the liquid state is due to the increased scattering of conduction electrons as a result of the increased amplitude of atomic vibration. Such theories of localization have been applied to transport in metallic glasses, where the mean free path of the electrons is very small (on the order of the interatomic spacing). The formation of a non-crystalline form of a gold-silicon alloy by the method of splat quenching from the melt led to further considerations of the influence of electronic structure on glass forming ability, based on the properties of the metallic bond. Other work indicates that the mobility of localized electrons is enhanced by the presence of dynamic phonon modes. One claim against such a model is that if chemical bonds are important, the nearly free electron models should not be applicable. However, if the model includes the buildup of a charge distribution between all pairs of atoms just like a chemical bond (e.g., silicon, when a band is just filled with electrons) then it should apply to solids. Thus, if the electrical conductivity is low, the mean free path of the electrons is very short. The electrons will only be sensitive to the short-range order in the glass since they do not get a chance to scatter from atoms spaced at large distances. Since the short-range order is similar in glasses and crystals, the electronic energies should be similar in these two states. For alloys with lower resistivity and longer electronic mean free paths, the electrons could begin to sense that there is disorder in the glass, and this would raise their energies and destabilize the glass with respect to crystallization. Thus, the glass formation tendencies of certain alloys may therefore be due in part to the fact that the electron mean free paths are very short, so that only the short-range order is ever important for the energy of the electrons. It has also been argued that glass formation in metallic systems is related to the "softness" of the interaction potential between unlike atoms. Some authors, emphasizing the strong similarities between the local structure of the glass and the corresponding crystal, suggest that chemical bonding helps to stabilize the amorphous structure. 
Other authors have suggested that the electronic structure yields its influence on glass formation through the directional properties of bonds. Non-crystallinity is thus favored in elements with a large number of polymorphic forms and a high degree of bonding anisotropy. Crystallization becomes more unlikely as bonding anisotropy is increased from isotropic metallic to anisotropic metallic to covalent bonding, thus suggesting a relationship between the group number in the periodic table and the glass forming ability in elemental solids. See also Gardner transition Glass formation References External links Fragility VFT Eqn. Polymers I Polymers II Angell: Aqueous media DoITPoMS Teaching and Learning Package- "The Glass Transition in Polymers" Glass Transition Temperature short overview Cryobiology Glass engineering and science Glass physics Phase transitions Polymer chemistry Rubber properties Threshold temperatures
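Returning to the operational definitions of Tg given earlier, the dilatometric convention (intersection of regression lines fitted below and above the transition) is easy to reproduce numerically. The sketch below is not from the article; it uses synthetic volume–temperature data with assumed expansion slopes and fitting windows purely to illustrate the construction.

```python
import numpy as np

# Minimal sketch, assuming synthetic dilatometry data: volume grows linearly with a small
# slope in the glassy region and a larger slope in the liquid region. Tg is estimated as
# the intersection of straight lines fitted to the two regions.
T = np.linspace(300.0, 700.0, 401)            # temperature, K
true_Tg = 520.0                               # value built into the synthetic data
alpha_glass, alpha_liquid = 1.0e-4, 3.0e-4    # expansion slopes (arbitrary units)
V = 1.0 + alpha_glass * (T - 300.0)
V = np.where(T > true_Tg, V + (alpha_liquid - alpha_glass) * (T - true_Tg), V)

low, high = T < 450.0, T > 600.0              # clearly glassy / clearly liquid windows (assumed)
m1, b1 = np.polyfit(T[low], V[low], 1)
m2, b2 = np.polyfit(T[high], V[high], 1)

Tg_estimate = (b2 - b1) / (m1 - m2)           # intersection of the two fitted lines
print(f"estimated Tg = {Tg_estimate:.1f} K (constructed value: {true_Tg} K)")
```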
Glass transition
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
5,091
[ "Glass engineering and science", "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Cryobiology", "Threshold temperatures", "Materials science", "Glass physics", "Condensed matter physics", "Polymer chemistry", "Biochemistry", "Statistical mechanics", "Ma...
22,124,452
https://en.wikipedia.org/wiki/Einstein%E2%80%93Infeld%E2%80%93Hoffmann%20equations
The Einstein–Infeld–Hoffmann equations of motion, jointly derived by Albert Einstein, Leopold Infeld and Banesh Hoffmann, are the differential equations describing the approximate dynamics of a system of point-like masses due to their mutual gravitational interactions, including general relativistic effects. It uses a first-order post-Newtonian expansion and thus is valid in the limit where the velocities of the bodies are small compared to the speed of light and where the gravitational fields affecting them are correspondingly weak. Given a system of N bodies, labelled by indices A = 1, ..., N, the barycentric acceleration vector of body A is given by: where: is the barycentric position vector of body A is the barycentric velocity vector of body A is the barycentric acceleration vector of body A is the coordinate distance between bodies A and B is the unit vector pointing from body B to body A is the mass of body A. is the speed of light is the gravitational constant and the big O notation is used to indicate that terms of order c−4 or beyond have been omitted. The coordinates used here are harmonic. The first term on the right hand side is the Newtonian gravitational acceleration at A; in the limit as c → ∞, one recovers Newton's law of motion. The acceleration of a particular body depends on the accelerations of all the other bodies. Since the quantity on the left hand side also appears in the right hand side, this system of equations must be solved iteratively. In practice, using the Newtonian acceleration instead of the true acceleration provides sufficient accuracy. References Further reading Differential equations General relativity Albert Einstein
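The iterative strategy mentioned above starts from the Newtonian term. The sketch below is not the Einstein–Infeld–Hoffmann equations themselves (the post-Newtonian correction terms are omitted); it only computes the zeroth-order, Newtonian barycentric accelerations that are substituted on the right-hand side when iterating, with example masses and positions chosen arbitrarily.

```python
import numpy as np

# Minimal sketch, assuming the 1PN correction terms are dropped: only the Newtonian
# first term of the acceleration is evaluated here. Masses/positions are arbitrary.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newtonian_accelerations(masses, positions):
    """Zeroth-order (Newtonian) barycentric accelerations for N point masses."""
    n = len(masses)
    acc = np.zeros_like(positions)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            r_ab = positions[a] - positions[b]          # vector from body B to body A
            acc[a] += -G * masses[b] * r_ab / np.linalg.norm(r_ab) ** 3
    return acc

masses = np.array([1.989e30, 5.972e24])                 # Sun and Earth, kg
positions = np.array([[0.0, 0.0, 0.0], [1.496e11, 0.0, 0.0]])
print(newtonian_accelerations(masses, positions))
```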
Einstein–Infeld–Hoffmann equations
[ "Physics", "Mathematics" ]
344
[ "Mathematical objects", "Differential equations", "Equations", "General relativity", "Relativity stubs", "Theory of relativity" ]
10,389,193
https://en.wikipedia.org/wiki/Temperature-sensitive%20mutant
Temperature-sensitive mutants are variants of genes that allow normal function of the organism at low temperatures, but altered function at higher temperatures. Cold-sensitive mutants are variants of genes that allow normal function of the organism at higher temperatures, but altered function at low temperatures. Mechanism Most temperature-sensitive mutations affect proteins, and cause loss of protein function at the non-permissive temperature. The permissive temperature is one at which the protein typically can fold properly, or remain properly folded. At higher temperatures, the protein is unstable and ceases to function properly. These mutations are usually recessive in diploid organisms. Temperature-sensitive mutants provide a reversible mechanism for reducing particular gene products at chosen stages of growth, simply by changing the growth temperature. Permissive temperature The permissive temperature is the temperature at which a temperature-sensitive mutant gene product takes on a normal, functional phenotype. When a temperature-sensitive mutant is grown in a permissive condition, the mutant gene product behaves normally (meaning that the phenotype is not observed), even if there is a mutant allele present. This results in the survival of the cell or organism, as if it were a wild-type strain. In contrast, the nonpermissive temperature or restrictive temperature is the temperature at which the mutant phenotype is observed. Temperature-sensitive mutations are usually missense mutations, which slightly modify the energy landscape of protein folding. The mutant protein will function at the standard, permissive, low temperature, lack function at a higher, non-permissive temperature, and display a hypomorphic phenotype (partial loss of gene function) at an intermediate, semi-permissive temperature. Use in research Temperature-sensitive mutants are useful in biological research. They allow the study of essential processes required for the survival of the cell or organism. Mutations to essential genes are generally lethal, and hence temperature-sensitive mutants enable researchers to induce the phenotype at the restrictive temperatures and study the effects. The temperature-sensitive phenotype could be expressed during a specific developmental stage to study the effects. Examples In the late 1970s, the Saccharomyces cerevisiae secretory pathway, essential for viability of the cell and for growth of new buds, was dissected using temperature-sensitive mutants, resulting in the identification of twenty-three essential genes. In the 1970s, several temperature-sensitive mutant genes were identified in Drosophila melanogaster, such as shibirets, which led to the first genetic dissection of synaptic function. In the 1990s, the heat shock promoter hsp70 was used in temperature-modulated gene expression in the fruit fly. Bacteriophage An infection of an Escherichia coli host cell by a bacteriophage (phage) T4 temperature-sensitive (ts) conditionally lethal mutant at a high restrictive temperature generally leads to no phage growth. However, a co-infection under restrictive conditions with two ts mutants defective in different genes generally leads to robust growth because of intergenic complementation. The discovery of ts mutants of phage T4, and the employment of such mutants in complementation tests, contributed to the identification of many of the genes in this organism.
Because multiple copies of a polypeptide specified by a gene often form multimers, mixed infections with two different ts mutants defective in the same gene often lead to mixed multimers and partial restoration of function, a phenomenon referred to as intragenic complementation. Intragenic complementation of ts mutants defective in the same gene can provide information on the structural organization of the multimer. Growth of phage ts mutants under partially restrictive conditions has been used to identify the functions of genes. Thus genes employed in the repair of DNA damage were identified, as well as genes affecting genetic recombination. For example, growing a ts DNA repair mutant at an intermediate temperature will allow some progeny phage to be produced. However, if that ts mutant is irradiated with UV light, its survival will be more strongly reduced compared to the reduction in survival of irradiated wild-type phage T4. Conditional lethal mutants able to grow at high temperatures, but unable to grow at low temperatures, were also isolated in phage T4. These cold-sensitive mutants defined a discrete set of genes, some of which had been previously identified by other types of conditional lethal mutants. References Temperature Cell biology Biology terminology
Temperature-sensitive mutant
[ "Physics", "Chemistry", "Biology" ]
905
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Cell biology", "Physical quantities", "SI base quantities", "Intensive quantities", "Thermodynamics", "nan", "Wikipedia categories named after physical quantities" ]
10,389,277
https://en.wikipedia.org/wiki/Mixed%20Hodge%20module
In mathematics, mixed Hodge modules are the culmination of Hodge theory, mixed Hodge structures, intersection cohomology, and the decomposition theorem, yielding a coherent framework for discussing variations of degenerating mixed Hodge structures through the six functor formalism. Essentially, these objects are a pair of a filtered D-module together with a perverse sheaf such that the functor from the Riemann–Hilbert correspondence sends the former to the latter. This makes it possible to construct a Hodge structure on intersection cohomology, one of the key problems when the subject was discovered. This was solved by Morihiko Saito, who found a way to use the filtration on a coherent D-module as an analogue of the Hodge filtration for a Hodge structure. This made it possible to give a Hodge structure on an intersection cohomology sheaf, one of the simple objects in the abelian category of perverse sheaves. Abstract structure Before going into the nitty gritty details of defining mixed Hodge modules, which is quite elaborate, it is useful to get a sense of what the category of mixed Hodge modules actually provides. Given a complex algebraic variety there is an abelian category of mixed Hodge modules (p. 339) with the following functorial properties: There is a faithful functor called the rationalization functor. This gives the underlying rational perverse sheaf of a mixed Hodge module. There is a faithful functor sending a mixed Hodge module to its underlying D-module. These functors behave well with respect to the Riemann–Hilbert correspondence, meaning for every mixed Hodge module there is a corresponding isomorphism between the de Rham complex of its underlying D-module and the complexification of its underlying perverse sheaf. In addition, there are the following categorical properties: The category of mixed Hodge modules over a point is equivalent to the category of mixed Hodge structures. Every object admits a weight filtration such that every morphism of mixed Hodge modules preserves the weight filtration strictly, the associated graded objects are semi-simple, and in the category of mixed Hodge modules over a point this corresponds to the weight filtration of a mixed Hodge structure. There is a dualizing functor lifting the Verdier dualizing functor on constructible sheaves, and it is an involution on the category of mixed Hodge modules. For a morphism of algebraic varieties, the associated six functors on complexes of mixed Hodge modules have the following properties: the pullback and proper (compactly supported) pushforward do not increase the weights of a complex of mixed Hodge modules, while the ordinary pushforward and exceptional inverse image do not decrease them. Relation between derived categories The derived category of mixed Hodge modules is intimately related to the derived category of constructible sheaves, which is equivalent to the derived category of perverse sheaves. This is because of how the rationalization functor is compatible with the cohomology functor of a complex of mixed Hodge modules. When taking the rationalization, there is an isomorphism for the middle perversity. Note (p. 310) that this perversity convention differs from the one used for pseudomanifolds. Recall this is defined as taking the composition of perverse truncations with the shift functor (p. 341). This kind of setup is also reflected in the derived push and pull functors and with nearby and vanishing cycles: the rationalization functor takes these to their analogous perverse functors on the derived category of perverse sheaves. Tate modules and cohomology Here we consider the canonical projection to a point.
One of the first mixed Hodge modules available is the weight 0 Tate object, which is defined as the pullback along this projection of the corresponding object over a point. The underlying Hodge structure is the weight 0 Tate object in the category of mixed Hodge structures. This object is useful because it can be used to compute the various cohomologies of a variety through the six functor formalism and give them a mixed Hodge structure. These can be summarized in a table. Moreover, given a closed embedding there is an associated local cohomology group. Variations of Mixed Hodge structures For a morphism of varieties the pushforward functors give degenerating variations of mixed Hodge structures on the base. In order to better understand these variations, the decomposition theorem and intersection cohomology are required. Intersection cohomology One of the defining features of the category of mixed Hodge modules is the fact that intersection cohomology can be phrased in its language. This makes it possible to use the decomposition theorem for maps of varieties. To define the intersection complex, let U be the open smooth part of a variety X. Then the intersection complex of X can be defined as the intermediate extension of the (suitably shifted) weight 0 Tate object on U, just as with perverse sheaves (p. 311). In particular, this setup can be used to show that the intersection cohomology groups have a pure Hodge structure. See also Mixed motives (math) Deligne cohomology References A young person's guide to mixed Hodge modules Algebraic geometry Generalized manifolds Homological algebra Hodge theory
Mixed Hodge module
[ "Mathematics", "Engineering" ]
968
[ "Mathematical structures", "Tensors", "Hodge theory", "Differential forms", "Fields of abstract algebra", "Category theory", "Algebraic geometry", "Homological algebra" ]
10,389,335
https://en.wikipedia.org/wiki/Wave%20turbulence
In continuum mechanics, wave turbulence is a set of nonlinear waves far from thermal equilibrium. Such a state is usually accompanied by dissipation. It is either decaying turbulence or requires an external source of energy to sustain it. Examples are waves on a fluid surface excited by winds or ships, and waves in a plasma excited by electromagnetic waves, etc. Appearance External sources usually excite, by some resonant mechanism, waves with frequencies and wavelengths in some narrow interval. For example, shaking a container with frequency ω excites surface waves with frequency ω/2 (parametric resonance, discovered by Michael Faraday). When wave amplitudes are small – which usually means that the wave is far from breaking – only those waves exist that are directly excited by an external source. When, however, wave amplitudes are not very small (for surface waves: when the fluid surface is inclined by more than a few degrees) waves with different frequencies start to interact. That leads to an excitation of waves with frequencies and wavelengths in wide intervals, not necessarily in resonance with an external source. In experiments with high shaking amplitudes one initially observes waves that are in resonance with one another. Thereafter, both longer and shorter waves appear as a result of wave interaction. The appearance of shorter waves is referred to as a direct cascade, while longer waves are part of an inverse cascade of wave turbulence. Statistical wave turbulence and discrete wave turbulence Two generic types of wave turbulence should be distinguished: statistical wave turbulence (SWT) and discrete wave turbulence (DWT). In SWT theory exact and quasi-resonances are omitted, which allows using some statistical assumptions and describing the wave system by kinetic equations and their stationary solutions – the approach developed by Vladimir E. Zakharov. These solutions are called Kolmogorov–Zakharov (KZ) energy spectra and have the form k^(−α), with k the wavenumber and α a positive constant depending on the specific wave system. The form of the KZ spectra does not depend on the details of the initial energy distribution over the wave field or on the initial magnitude of the complete energy in a wave turbulent system. Only the fact that energy is conserved over some inertial interval is important. The subject of DWT, first introduced in , is exact and quasi-resonances. Prior to the two-layer model of wave turbulence, the standard counterpart of SWT was low-dimensional systems characterized by the small number of modes included. However, DWT is characterized by resonance clustering, and not by the number of modes in particular resonance clusters – which can be fairly big. As a result, while SWT is completely described by statistical methods, in DWT both integrable and chaotic dynamics are accounted for. A graphical representation of a resonant cluster of wave components is given by the corresponding NR-diagram (nonlinear resonance diagram). In some wave turbulent systems both discrete and statistical layers of turbulence are observed simultaneously; this wave turbulent regime has been described in and is called mesoscopic. Accordingly, three wave turbulent regimes can be singled out: kinetic, discrete and mesoscopic, described by KZ spectra, resonance clustering and their coexistence, respectively. The energetic behavior of the kinetic wave turbulent regime is usually described by Feynman-type diagrams (i.e. 
Wyld's diagrams), while NR-diagrams are suitable for representing finite resonance clusters in the discrete regime and energy cascades in the mesoscopic regime. Notes References Further reading Nonlinear systems Water waves Oceanography
Wave turbulence
[ "Physics", "Chemistry", "Mathematics", "Environmental_science" ]
715
[ "Physical phenomena", "Hydrology", "Applied and interdisciplinary physics", "Water waves", "Oceanography", "Waves", "Nonlinear systems", "Dynamical systems", "Fluid dynamics" ]
10,390,090
https://en.wikipedia.org/wiki/Chip-scale%20atomic%20clock
A chip scale atomic clock (CSAC) is a compact, low-power atomic clock fabricated using techniques of microelectromechanical systems (MEMS) and incorporating a low-power semiconductor laser as the light source. The first CSAC physics package was demonstrated at the National Institute of Standards and Technology (NIST) in 2003, based on an invention made in 2001. The work was funded by the US Department of Defense's Defense Advanced Research Projects Agency (DARPA) with the goal of developing a microchip-sized atomic clock for use in portable equipment. In military equipment it is expected to provide improved location and battlespace situational awareness for dismounted soldiers when the global positioning system is not available, but many civilian applications are also envisioned. Commercial manufacturing of these atomic clocks began in 2011. The CSAC, the world's smallest atomic clock, is 4 x 3.5 x 1 cm (1.5 x 1.4 x 0.4 inches) in size, weighs 35 grams, consumes only 115 mW of power, and can keep time to within 100 microseconds per day after several years of operation. A more stable design based on the vibration of rubidium atoms was demonstrated by NIST in 2019. How it works Like other caesium atomic clocks, the clock keeps time by a precise 9.192631770 GHz microwave signal emitted by electron spin transitions between two hyperfine energy levels in atoms of caesium-133. A feedback mechanism keeps a quartz crystal oscillator on the chip locked to this frequency, which is divided down by digital counters to give 10 MHz and 1 Hz clock signals provided to output pins. On the chip, liquid metal caesium in a tiny 2 mm capsule, fabricated using silicon micromachining techniques, is heated to vaporize the alkali metal. A semiconductor laser shines a beam of infrared light modulated by the microwave oscillator through the capsule onto a photodetector. When the oscillator is at the precise frequency of the transition, the optical absorption of the caesium atoms is reduced, increasing the output of the photodetector. The output of the photodetector is used as feedback in a frequency-locked loop circuit to keep the oscillator at the correct frequency. Development Conventional vapor cell atomic clocks are about the size of a deck of cards, consume about 10 W of electrical power and cost about $3,000. Shrinking these to the size of a semiconductor chip required extensive development and several breakthroughs. An important part of development was designing the device so it could be manufactured using standard semiconductor fabrication techniques where possible, to keep its cost low enough that it could become a mass-market device. Conventional caesium clocks use a glass tube containing caesium, which is challenging to make smaller than 1 cm. In the CSAC, MEMS techniques were used to create a caesium capsule only 2 cubic millimeters in size. The light source in conventional atomic clocks is a rubidium atomic-vapor discharge lamp, which is bulky and consumes large amounts of power. In the CSAC this was replaced by an infrared vertical cavity surface emitting laser (VCSEL) fabricated on the chip, with its beam radiating upward into the caesium capsule above it. Another advance was the elimination of the microwave cavity used in conventional clocks, whose size, equal to a wavelength at the microwave frequency (about 3 cm), formed the fundamental lower limit to the size of the clock. The cavity was made unnecessary by the use of a quantum technique, coherent population trapping. 
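The quoted timekeeping figure can be related to the underlying transition frequency with simple arithmetic; the sketch below is a back-of-the-envelope illustration, not a vendor or NIST specification, and the 72-hour outage duration is an invented example.

```python
# Illustrative only: relate the "100 microseconds per day" figure quoted above
# to a fractional frequency offset of the 9.192631770 GHz caesium transition,
# then estimate drift over a hypothetical GPS-denied interval.

F_CS = 9_192_631_770.0           # Hz, caesium-133 hyperfine transition
SECONDS_PER_DAY = 86_400.0

time_error_per_day = 100e-6      # s/day, figure quoted in the article

# Fractional frequency offset implied by that timing error
fractional_offset = time_error_per_day / SECONDS_PER_DAY
frequency_offset_hz = fractional_offset * F_CS

# Accumulated timing error over an assumed 72-hour GPS outage
outage_hours = 72.0
accumulated_error = fractional_offset * outage_hours * 3600.0

print(f"fractional frequency offset ~ {fractional_offset:.2e}")      # ~1.2e-09
print(f"offset at 9.19 GHz          ~ {frequency_offset_hz:.1f} Hz")  # ~10.6 Hz
print(f"error after {outage_hours:.0f} h outage ~ {accumulated_error*1e6:.0f} us")
```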
Commercialization The CSAC program achieved a hundredfold size reduction while using 50 times less power than traditional atomic clocks, which led to extensive CSAC use in military and commercial applications. According to an October 2023 report, the CSAC market is expected to grow at a "remarkable" compound annual growth rate (CAGR) from 2023 to 2030. Major commercial players include Microsemi (Microchip Technology), Teledyne, Chengdu Spaceon Electronics, and AccuBeat. External links NIST on a chip References Atomic clocks Electronic test equipment
Chip-scale atomic clock
[ "Technology", "Engineering" ]
847
[ "Electronic test equipment", "Measuring instruments" ]
10,391,074
https://en.wikipedia.org/wiki/Acetic%20acid/hydrocortisone
Acetic acid/hydrocortisone is a commonly used combination drug to treat infections of the outer ear and ear canal. Branded as Vosol HC and Acetasol HC, it combines the antibacterial and antifungal action of acetic acid with the anti-inflammatory functions of hydrocortisone. References Acetasol HC Vosol HC Antibiotics Antifungals Combination drugs
Acetic acid/hydrocortisone
[ "Chemistry", "Biology" ]
82
[ "Pharmacology", "Biotechnology products", "Medicinal chemistry stubs", "Antibiotics", "Pharmacology stubs", "Biocides" ]
10,393,385
https://en.wikipedia.org/wiki/Nitrenium%20ion
A nitrenium ion (also called: aminylium ion or imidonium ion (obsolete)) in organic chemistry is a reactive intermediate based on nitrogen with both an electron lone pair and a positive charge and with two substituents (). Nitrenium ions are isoelectronic with carbenes, and can exist in either a singlet or a triplet state. The parent nitrenium ion, , is a ground state triplet species with a gap of to the lowest energy singlet state. Conversely, most arylnitrenium ions are ground state singlets. Certain substituted arylnitrenium ions can be ground state triplets, however. Nitrenium ions can have microsecond or longer lifetimes in water. Aryl nitrenium ions are of biological interest because of their involvement in certain DNA damaging processes. They are generated upon in vivo oxidation of arylamines. The regiochemistry and energetics of the reaction of phenylnitrenium ion with guanine has been investigated using density functional theory computations. Nitrenium species have been exploited as intermediates in organic reactions. They are typically generated via heterolysis of N–X (X = N, O, Halogen) bonds. For instance, they are formed upon treatment of chloramine derivatives with silver salts or by activation of aryl hydroxylamine derivatives or aryl azides with Brønsted or Lewis acids. The Bamberger rearrangement is an early example of a reaction that is now thought to proceed via an aryl nitrenium intermediate. They can also act as electrophiles in electrophilic aromatic substitution. See also The related neutral nitrenes R–N: References Reactive intermediates Nitrogen hydrides
Nitrenium ion
[ "Chemistry" ]
368
[ "Organic compounds", "Reactive intermediates", "Physical organic chemistry" ]
10,394,955
https://en.wikipedia.org/wiki/GCIRS%2013E
GCIRS 13E is an infrared and radio object near the Galactic Center. It is believed to be a cluster of hot massive stars, possibly containing an intermediate-mass black hole (IMBH) at its center. GCIRS 13E was first identified as GCIRS 13, which was later resolved into two components, GCIRS 13E and GCIRS 13W. GCIRS 13E was initially modelled as a single object, possibly a binary system. It was even classified as a Wolf-Rayet star because of its strong emission line spectrum, and named WR 101f. It was then resolved into seven Wolf-Rayet and class O stars. The highest-resolution infrared imaging and spectroscopy can now identify 19 objects in GCIRS 13E, of which 15 are dense gaseous regions. The remaining four objects are stars: WN8 and WC9 Wolf-Rayet stars; an OB supergiant; and a K3 giant. The motions of the members of GCIRS 13E appear to indicate a much higher mass than can be accounted for by the visible objects. It has been proposed that there may be an intermediate-mass black hole with a mass of about at its center. There are a number of problems with this theory, however, and the true nature of the cluster remains unknown. GCIRS 13E is a small cluster dominated by a few massive stars. It is thought that massive stars cannot form so close to a supermassive black hole, and since such massive stars have a short lifespan, it is thought that GCIRS 13E must have migrated inward toward the central black hole within the past 10 million years, probably from about 60 light-years further out than its current orbit. The stars are possibly the remains of a globular cluster in which a middleweight black hole could develop through runaway star collisions. GCIRS 13E could also be a dark star cluster, which forms in the inner Galaxy if the evaporation of stars from the cluster, driven by the strong tidal field, proceeds faster than the depletion of the cluster's black hole content through ejections. References Sagittarius (constellation) Galactic Center Intermediate-mass black holes Wolf–Rayet stars Star clusters
GCIRS 13E
[ "Physics", "Astronomy" ]
444
[ "Black holes", "Star clusters", "Unsolved problems in physics", "Intermediate-mass black holes", "Constellations", "Sagittarius (constellation)", "Astronomical objects" ]
10,399,056
https://en.wikipedia.org/wiki/HITRAN
The HITRAN (an acronym for High Resolution Transmission) molecular spectroscopic database is a compilation of spectroscopic parameters used to simulate and analyze the transmission and emission of light in gaseous media, with an emphasis on planetary atmospheres. Knowledge of the spectroscopic parameters for transitions between energy levels in molecules (and atoms) is essential for interpreting and modeling the interaction of radiation (light) within different media. For half a century, HITRAN has been considered to be an international standard that provides the user with recommended values of the parameters for millions of transitions of different molecules. HITRAN includes both experimental and theoretical data which are gathered from a worldwide network of contributors as well as from articles, books, proceedings, databases, theses, reports, presentations, unpublished data, papers in preparation and private communications. A major effort is then dedicated to evaluating and processing the spectroscopic data. A single transition in HITRAN is described by many parameters, stored by default in a 160-byte fixed-width record format used since HITRAN2004. Wherever possible, the retrieved data are validated against accurate laboratory data. The original version of HITRAN was compiled by the US Air Force Cambridge Research Laboratories in the 1960s in order to enable surveillance of military aircraft through the terrestrial atmosphere. One of the early applications of HITRAN was a program called Atmospheric Radiation Measurement (ARM) for the US Department of Energy. In this program spectral atmospheric measurements were made around the globe in order to better understand the balance between the radiant energy that reaches Earth from the sun and the energy that flows from Earth back out to space. The US Department of Transportation also utilized HITRAN in its early days for monitoring the gas emissions (NO, SO2, NO2) of supersonic transports flying at high altitude. HITRAN was first made publicly available in 1973, and today a multitude of ongoing and future NASA satellite missions incorporate HITRAN. One of the NASA missions currently utilizing HITRAN is the Orbiting Carbon Observatory (OCO), which measures the sources and sinks of CO2 in the global atmosphere. HITRAN is a free resource and is currently maintained and developed at the Center for Astrophysics Harvard & Smithsonian, Cambridge MA, USA (CFA/HITRAN). HITRAN is the worldwide standard for calculating or simulating atmospheric molecular transmission and radiance from the microwave through the ultraviolet region of the spectrum. The HITRAN database is officially released on a quadrennial basis, with updates posted in the intervening years on HITRANonline. A new journal article is published in conjunction with each official release of the HITRAN database, and users are strongly encouraged to use the most recent edition. Throughout HITRAN's history, there have been around 50,000 unique users of the database, and in recent years over 24,000 users have registered on HITRANonline. There are YouTube tutorials on the HITRANonline webpage to answer users' frequently asked questions. Line-by-Line The current version, HITRAN2020, contains 55 molecules in the line-by-line portion of HITRAN along with some of their most significant isotopologues (144 isotopologues in total). These data are archived as a multitude of high-resolution line transitions, each containing many spectral parameters required for high-resolution simulations. 
Absorption Cross-Sections In addition to the traditional line-by-line spectroscopic absorption parameters, the HITRAN database contains information on absorption cross-sections where the line-by-line parameters are absent or incomplete. Typically HITRAN includes absorption cross-sections for heavy polyatomic molecules (with low-lying vibrational modes) which are difficult for detailed analysis due to the high density of the spectral bands/lines, broadening effects, isomerization, and overall modeling complexity. There are 327 molecular species in the current edition of the database provided as cross-section files. The cross-section files are provided in the HITRAN format described on the official HITRAN website (http://hitran.org/docs/cross-sections-definitions/). Collision-Induced Absorption The HITRAN compilation also provides collision-induced absorption (CIA) that was first introduced into HITRAN in the 2012 edition. CIA refers to absorption by transient electric dipoles induced by the interaction between colliding molecules. Instructions for accessing the CIA data files can be found on HITRAN/CIA. Aerosol Refractive Indices HITRAN2020 also has an aerosols refractive indices section, with data in the visible, infrared, and millimeter spectral ranges of many types of cloud and aerosol particles. Knowledge of the refractive indices of the aerosols and cloud particles and their size distributions is necessary in order to specify their optical properties. HITEMP HITEMP is the molecular spectroscopic database analogous to HITRAN for high-temperature modeling of the spectra of molecules in the gas phase. HITEMP encompasses many more bands and transitions than HITRAN for eight absorbers: H2O, CO2, N2O, CO, CH4, NO, NO2 and OH. Due to the extremely large number of transitions required for high-temperature simulations, it was necessary to provide the HITEMP data as separate files to that of HITRAN. The HITEMP line lists retain the same 160-character format that was used for earlier editions of HITRAN. There are numerous applications for HITEMP data, some examples include the thermometry of high-temperature environments, analysis of combustion processes, and modeling spectra of atmospheres in the Solar System, exoplanets, brown dwarfs, and stars. HAPI A Python library HAPI (HITRAN Application Programming Interface) has been developed which serves as a tool for absorption and transmission calculations as well as comparisons of spectroscopic data sets. HAPI extends the functionality of the main site, in particular, for the calculation of spectra using several types of line shape calculations, including the flexible HT (Hartmann-Tran) profile. This HT line shape can also be reduced to a number of conventional line profiles such as Gaussian (Doppler), Lorentzian, Voigt, Rautian, Speed-Dependent Voigt and Speed-Dependent Rautian. In addition to accounting for pressure, temperature and optical path length, the user can include a number of instrumental functions to simulate experimental spectra. HAPI is able to account for broadening of lines due to mixtures of gases as well as make use of all broadening parameters supplied by HITRAN. This includes the traditional broadeners (air, self) as well as additional parameters for CO2, H2O, H2 and He broadening. 
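To give a sense of the HAPI workflow discussed here, a minimal sketch follows; the assumptions are that the hapi package is installed, that network access to HITRANonline is available, and that the local folder name and spectral range are arbitrary examples. Function names follow the HAPI documentation, but the current HAPI manual should be consulted for the authoritative signatures.

```python
# Minimal HAPI sketch (illustrative only): download H2O line-by-line data in
# the 3400-4100 cm^-1 region, then compute an absorption coefficient and a
# transmittance spectrum through a 100 cm path at 296 K and 1 atm.
from hapi import (db_begin, fetch,
                  absorptionCoefficient_Lorentz, transmittanceSpectrum)

db_begin('data')                          # local folder for downloaded tables

# H2O is molecule number 1; use the principal isotopologue (1)
fetch('H2O', 1, 1, 3400, 4100)

# Pressure-broadened (Lorentz) absorption coefficient
nu, coef = absorptionCoefficient_Lorentz(
    SourceTables='H2O',
    Environment={'T': 296.0, 'p': 1.0})

# Transmittance for a 100 cm optical path; instrumental functions omitted
nu, trans = transmittanceSpectrum(nu, coef, Environment={'l': 100.0})
print(nu[:3], trans[:3])
```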
The following spectral functions can be calculated in the current version #1 of HAPI: absorption coefficient absorption spectrum transmittance spectrum radiance spectrum HAPIEST (an acronym for HITRAN Application Programming Interface and Efficient Spectroscopic Tools) is a graphical user interface allowing users to access some of the functionality provided by HAPI without any knowledge of Python programming, including downloading data from HITRAN, and plotting of spectra and cross-sections. The source code for HAPIEST is available on GitHub (HAPIEST), along with binary distributions for Mac and PC. See also Atmospheric radiative--transfer codes Absorption spectrum MODTRAN References Further reading External links hitran.org Official HITRANonline website for accessing HITRAN data HITRAN on the Web HITRAN/CIA HITRAN CIA data access HAPI The page for HAPI on the HITRAN website HAPIEST The GitHub repository for HAPIEST Atmospheric radiative transfer codes Spectroscopy
HITRAN
[ "Physics", "Chemistry" ]
1,559
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
1,211,913
https://en.wikipedia.org/wiki/Least%20fixed%20point
In order theory, a branch of mathematics, the least fixed point (lfp or LFP, sometimes also smallest fixed point) of a function from a partially ordered set ("poset" for short) to itself is the fixed point which is less than each other fixed point, according to the order of the poset. A function need not have a least fixed point, but if it does then the least fixed point is unique. Examples With the usual order on the real numbers, the least fixed point of the real function f(x) = x^2 is x = 0 (since the only other fixed point is 1 and 0 < 1). In contrast, f(x) = x + 1 has no fixed points at all, so has no least one, and f(x) = x has infinitely many fixed points, but has no least one. Let be a directed graph and be a vertex. The set of vertices accessible from can be defined as the least fixed-point of the function , defined as The set of vertices which are co-accessible from is defined by a similar least fixed point. The strongly connected component of is the intersection of those two least fixed-points. Let be a context-free grammar. The set of symbols which produce the empty string can be obtained as the least fixed-point of the function , defined as , where denotes the power set of . Applications Many fixed-point theorems yield algorithms for locating the least fixed point. Least fixed points often have desirable properties that arbitrary fixed points do not. Denotational semantics In computer science, the denotational semantics approach uses least fixed points to obtain from a given program text a corresponding mathematical function, called its semantics. To this end, an artificial mathematical object, , is introduced, denoting the exceptional value "undefined". Given e.g. the program datatype int, its mathematical counterpart is defined as it is made a partially ordered set by defining for each and letting any two different members be incomparable w.r.t. . The semantics of a program definition int f(int n){...} is some mathematical function If the program definition f does not terminate for some input n, this can be expressed mathematically as The set of all mathematical functions is made partially ordered by defining if, for each the relation holds, that is, if is less defined or equal to For example, the semantics of the expression x+x/x is less defined than that of x+1, since the former, but not the latter, maps to and they agree otherwise. Given some program text f, its mathematical counterpart is obtained as the least fixed point of some mapping from functions to functions that can be obtained by "translating" f. For example, the C definition int fact(int n) { if (n == 0) return 1; else return n * fact(n-1); } is translated to a mapping defined as The mapping is defined in a non-recursive way, although fact was defined recursively. Under certain restrictions (see Kleene fixed-point theorem), which are met in the example, necessarily has a least fixed point, , that is for all . It is possible to show that A larger fixed point of is e.g. the function defined by ; however, this function does not correctly reflect the behavior of the above program text for negative , e.g. the call fact(-1) will not terminate at all, let alone return 0. Only the least fixed point, , can reasonably be used as a mathematical program semantics. Descriptive complexity Immerman and Vardi independently showed the descriptive complexity result that the polynomial-time computable properties of linearly ordered structures are definable in FO(LFP), i.e. in first-order logic with a least fixed point operator. 
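Least fixed points of monotone functions on finite posets, such as the ones appearing in these characterisations, can be computed by Kleene iteration: apply the function repeatedly to the least element until nothing changes. A minimal Python sketch for the graph-reachability example given earlier; the graph, start vertex and function names are invented for illustration.

```python
# Kleene iteration: compute the least fixed point of a monotone function f
# on the poset of subsets of a finite set, starting from the empty set.
def least_fixed_point(f, bottom=frozenset()):
    current = bottom
    while True:
        nxt = f(current)
        if nxt == current:       # a fixed point; it is the least one because we
            return current       # started from bottom and f is monotone
        current = nxt

# Reachability: vertices accessible from a start vertex in a directed graph.
edges = {('a', 'b'), ('b', 'c'), ('d', 'a')}    # example graph (assumed)
start = 'a'

def step(reached):
    # X  ->  {start} union {w : there is u in X with (u, w) an edge}
    return frozenset({start}) | frozenset(w for (u, w) in edges if u in reached)

print(least_fixed_point(step))   # frozenset({'a', 'b', 'c'})
```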
However, FO(LFP) is too weak to express all polynomial-time properties of unordered structures (for instance that a structure has even size). Greatest fixed points The greatest fixed point of a function can be defined analogously to the least fixed point, as the fixed point which is greater than any other fixed point, according to the order of the poset. In computer science, greatest fixed points are much less commonly used than least fixed points. Specifically, the posets found in domain theory usually do not have a greatest element, hence for a given function, there may be multiple, mutually incomparable maximal fixed points, and the greatest fixed point of that function may not exist. To address this issue, the optimal fixed point has been defined as the most-defined fixed point compatible with all other fixed points. The optimal fixed point always exists, and is the greatest fixed point if the greatest fixed point exists. The optimal fixed point allows formal study of recursive and corecursive functions that do not converge with the least fixed point. Unfortunately, whereas Kleene's recursion theorem shows that the least fixed point is effectively computable, the optimal fixed point of a computable function may be a non-computable function. See also Knaster–Tarski theorem Fixed-point logic Notes References Immerman, Neil. Descriptive Complexity, 1999, Springer-Verlag. Libkin, Leonid. Elements of Finite Model Theory, 2004, Springer. Order theory Fixed points (mathematics)
Least fixed point
[ "Mathematics" ]
1,103
[ "Mathematical analysis", "Fixed points (mathematics)", "Topology", "Order theory", "Dynamical systems" ]
1,211,986
https://en.wikipedia.org/wiki/Virtual%20work
In mechanics, virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement is different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements, one will minimize the action. This displacement is therefore the displacement followed by the particle according to the principle of least action. The work of a force on a particle along a virtual displacement is known as the virtual work. Historically, virtual work and the associated calculus of variations were formulated to analyze systems of rigid bodies, but they have also been developed for the study of the mechanics of deformable bodies. History The principle of virtual work had always been used in some form since antiquity in the study of statics. It was used by the Greeks, medieval Arabs and Latins, and Renaissance Italians as "the law of lever". The idea of virtual work was invoked by many notable physicists of the 17th century, such as Galileo, Descartes, Torricelli, Wallis, and Huygens, in varying degrees of generality, when solving problems in statics. Working with Leibnizian concepts, Johann Bernoulli systematized the virtual work principle and made explicit the concept of infinitesimal displacement. He was able to solve problems for both rigid bodies as well as fluids. Bernoulli's version of virtual work law appeared in his letter to Pierre Varignon in 1715, which was later published in Varignon's second volume of Nouvelle mécanique ou Statique in 1725. This formulation of the principle is today known as the principle of virtual velocities and is commonly considered as the prototype of the contemporary virtual work principles. In 1743 D'Alembert published his Traité de Dynamique where he applied the principle of virtual work, based on Bernoulli's work, to solve various problems in dynamics. His idea was to convert a dynamical problem into static problem by introducing inertial force. In 1768, Lagrange presented the virtual work principle in a more efficient form by introducing generalized coordinates and presented it as an alternative principle of mechanics by which all problems of equilibrium could be solved. A systematic exposition of Lagrange's program of applying this approach to all of mechanics, both static and dynamic, essentially D'Alembert's principle, was given in his Mécanique Analytique of 1788. Although Lagrange had presented his version of least action principle prior to this work, he recognized the virtual work principle to be more fundamental mainly because it could be assumed alone as the foundation for all mechanics, unlike the modern understanding that least action does not account for non-conservative forces. Overview If a force acts on a particle as it moves from point to point , then, for each possible trajectory that the particle may take, it is possible to compute the total work done by the force along the path. The principle of virtual work, which is the form of the principle of least action applied to these systems, states that the path actually followed by the particle is the one for which the difference between the work along this path and other nearby paths is zero (to the first order). The formal procedure for computing the difference of functions evaluated on nearby paths is a generalization of the derivative known from differential calculus, and is termed the calculus of variations. 
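In symbols, the quantities used in the treatment that follows can be written out as below; this is a standard reconstruction supplied because the article's displayed formulas did not survive extraction, and sign or shift conventions may differ slightly from the original.

```latex
% Work of a force F along a trajectory r(t) from A (t = t_0) to B (t = t_1):
W \;=\; \int_{A}^{B} \mathbf{F}\cdot d\mathbf{r}
   \;=\; \int_{t_0}^{t_1} \mathbf{F}\cdot \mathbf{v}\, dt .

% A nearby (varied) path and its virtual displacement:
\bar{\mathbf{r}}(t) \;=\; \mathbf{r}(t) + \delta \mathbf{r}(t),
\qquad \delta \mathbf{r}(t_0) = \delta \mathbf{r}(t_1) = 0 .

% Principle of virtual work for static equilibrium:
\delta W \;=\; \sum_{i} \mathbf{F}_i \cdot \delta \mathbf{r}_i \;=\; 0
\quad \text{for all virtual displacements consistent with the constraints.}
```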
Consider a point particle that moves along a path which is described by a function from point , where , to point , where . It is possible that the particle moves from to along a nearby path described by , where is called the variation of . The variation satisfies the requirement . The scalar components of the variation , and are called virtual displacements. This can be generalized to an arbitrary mechanical system defined by the generalized coordinates , . In which case, the variation of the trajectory is defined by the virtual displacements , . Virtual work is the total work done by the applied forces and the inertial forces of a mechanical system as it moves through a set of virtual displacements. When considering forces applied to a body in static equilibrium, the principle of least action requires the virtual work of these forces to be zero. Mathematical treatment Consider a particle P that moves from a point A to a point B along a trajectory , while a force is applied to it. The work done by the force is given by the integral where is the differential element along the curve that is the trajectory of P, and is its velocity. It is important to notice that the value of the work depends on the trajectory . Now consider particle P that moves from point A to point B again, but this time it moves along the nearby trajectory that differs from by the variation , where is a scaling constant that can be made as small as desired and is an arbitrary function that satisfies . Suppose the force is the same as . The work done by the force is given by the integral The variation of the work associated with this nearby path, known as the virtual work, can be computed to be If there are no constraints on the motion of P, then 3 parameters are needed to completely describe Ps position at any time . If there are () constraint forces, then parameters are needed. Hence, we can define generalized coordinates (), and express and in terms of the generalized coordinates. That is, Then, the derivative of the variation is given by then we have The requirement that the virtual work be zero for an arbitrary variation is equivalent to the set of requirements The terms are called the generalized forces associated with the virtual displacement . Static equilibrium Static equilibrium is a state in which the net force and net torque acted upon the system is zero. In other words, both linear momentum and angular momentum of the system are conserved. The principle of virtual work states that the virtual work of the applied forces is zero for all virtual movements of the system from static equilibrium. This principle can be generalized such that three dimensional rotations are included: the virtual work of the applied forces and applied moments is zero for all virtual movements of the system from static equilibrium. That is where Fi , i = 1, 2, ..., m and Mj , j = 1, 2, ..., n are the applied forces and applied moments, respectively, and δri , i = 1, 2, ..., m and δφj, j = 1, 2, ..., n are the virtual displacements and virtual rotations, respectively. Suppose the system consists of N particles, and it has f (f ≤ 6N) degrees of freedom. It is sufficient to use only f coordinates to give a complete description of the motion of the system, so f generalized coordinates qk , k = 1, 2, ..., f are defined such that the virtual movements can be expressed in terms of these generalized coordinates. 
That is, The virtual work can then be reparametrized by the generalized coordinates: where the generalized forces Qk are defined as Kane shows that these generalized forces can also be formulated in terms of the ratio of time derivatives. That is, The principle of virtual work requires that the virtual work done on a system by the forces Fi and moments Mj vanishes if it is in equilibrium. Therefore, the generalized forces Qk are zero, that is Constraint forces An important benefit of the principle of virtual work is that only forces that do work as the system moves through a virtual displacement are needed to determine the mechanics of the system. There are many forces in a mechanical system that do no work during a virtual displacement, which means that they need not be considered in this analysis. The two important examples are (i) the internal forces in a rigid body, and (ii) the constraint forces at an ideal joint. Lanczos presents this as the postulate: "The virtual work of the forces of reaction is always zero for any virtual displacement which is in harmony with the given kinematic constraints." The argument is as follows. The principle of virtual work states that in equilibrium the virtual work of the forces applied to a system is zero. Newton's laws state that at equilibrium the applied forces are equal and opposite to the reaction, or constraint forces. This means the virtual work of the constraint forces must be zero as well. Law of the lever A lever is modeled as a rigid bar connected to a ground frame by a hinged joint called a fulcrum. The lever is operated by applying an input force FA at a point A located by the coordinate vector rA on the bar. The lever then exerts an output force FB at the point B located by rB. The rotation of the lever about the fulcrum P is defined by the rotation angle θ. Let the coordinate vector of the point P that defines the fulcrum be rP, and introduce the lengths which are the distances from the fulcrum to the input point A and to the output point B, respectively. Now introduce the unit vectors eA and eB from the fulcrum to the point A and B, so This notation allows us to define the velocity of the points A and B as where eA⊥ and eB⊥ are unit vectors perpendicular to eA and eB, respectively. The angle θ is the generalized coordinate that defines the configuration of the lever, therefore using the formula above for forces applied to a one degree-of-freedom mechanism, the generalized force is given by Now, denote as FA and FB the components of the forces that are perpendicular to the radial segments PA and PB. These forces are given by This notation and the principle of virtual work yield the formula for the generalized force as The ratio of the output force FB to the input force FA is the mechanical advantage of the lever, and is obtained from the principle of virtual work as This equation shows that if the distance a from the fulcrum to the point A where the input force is applied is greater than the distance b from fulcrum to the point B where the output force is applied, then the lever amplifies the input force. If the opposite is true that the distance from the fulcrum to the input point A is less than from the fulcrum to the output point B, then the lever reduces the magnitude of the input force. This is the law of the lever, which was proven by Archimedes using geometric reasoning. Gear train A gear train is formed by mounting gears on a frame so that the teeth of the gears engage. 
Gear teeth are designed to ensure the pitch circles of engaging gears roll on each other without slipping, this provides a smooth transmission of rotation from one gear to the next. For this analysis, we consider a gear train that has one degree-of-freedom, which means the angular rotation of all the gears in the gear train are defined by the angle of the input gear. The size of the gears and the sequence in which they engage define the ratio of the angular velocity ωA of the input gear to the angular velocity ωB of the output gear, known as the speed ratio, or gear ratio, of the gear train. Let R be the speed ratio, then The input torque TA acting on the input gear GA is transformed by the gear train into the output torque TB exerted by the output gear GB. If we assume, that the gears are rigid and that there are no losses in the engagement of the gear teeth, then the principle of virtual work can be used to analyze the static equilibrium of the gear train. Let the angle θ of the input gear be the generalized coordinate of the gear train, then the speed ratio R of the gear train defines the angular velocity of the output gear in terms of the input gear, that is The formula above for the principle of virtual work with applied torques yields the generalized force The mechanical advantage of the gear train is the ratio of the output torque TB to the input torque TA, and the above equation yields Thus, the speed ratio of a gear train also defines its mechanical advantage. This shows that if the input gear rotates faster than the output gear, then the gear train amplifies the input torque. And, if the input gear rotates slower than the output gear, then the gear train reduces the input torque. Dynamic equilibrium for rigid bodies If the principle of virtual work for applied forces is used on individual particles of a rigid body, the principle can be generalized for a rigid body: When a rigid body that is in equilibrium is subject to virtual compatible displacements, the total virtual work of all external forces is zero; and conversely, if the total virtual work of all external forces acting on a rigid body is zero then the body is in equilibrium. If a system is not in static equilibrium, D'Alembert showed that by introducing the acceleration terms of Newton's laws as inertia forces, this approach is generalized to define dynamic equilibrium. The result is D'Alembert's form of the principle of virtual work, which is used to derive the equations of motion for a mechanical system of rigid bodies. The expression compatible displacements means that the particles remain in contact and displace together so that the work done by pairs of action/reaction inter-particle forces cancel out. Various forms of this principle have been credited to Johann (Jean) Bernoulli (1667–1748) and Daniel Bernoulli (1700–1782). Generalized inertia forces Let a mechanical system be constructed from n rigid bodies, Bi, i=1,...,n, and let the resultant of the applied forces on each body be the force-torque pairs, Fi and Ti, i = 1,...,n. Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity Vi and angular velocities ωi, i=1,...,n, for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom. Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. 
Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force Q* associated with the generalized coordinate q is given by This inertia force can be computed from the kinetic energy of the rigid body, by using the formula A system of n rigid bodies with m generalized coordinates has the kinetic energy which can be used to calculate the m generalized inertia forces D'Alembert's form of the principle of virtual work D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that for any set of virtual displacements δqj. This condition yields m equations, which can also be written as The result is a set of m equations of motion that define the dynamics of the rigid body system, known as Lagrange's equations or the generalized equations of motion. If the generalized forces Qj are derivable from a potential energy V(q1,...,qm), then these equations of motion take the form In this case, introduce the Lagrangian, , so these equations of motion become These are known as the Euler-Lagrange equations for a system with m degrees of freedom, or Lagrange's equations of the second kind. Virtual work principle for a deformable body Consider now the free body diagram of a deformable body, which is composed of an infinite number of differential cubes. Let's define two unrelated states for the body: The -State : This shows external surface forces T, body forces f, and internal stresses in equilibrium. The -State : This shows continuous displacements and consistent strains . The superscript * emphasizes that the two states are unrelated. Other than the above stated conditions, there is no need to specify if any of the states are real or virtual. Imagine now that the forces and stresses in the -State undergo the displacements and deformations in the -State: We can compute the total virtual (imaginary) work done by all forces acting on the faces of all cubes in two different ways: First, by summing the work done by forces such as which act on individual common faces (Fig.c): Since the material experiences compatible displacements, such work cancels out, leaving only the virtual work done by the surface forces T (which are equal to stresses on the cubes' faces, by equilibrium). Second, by computing the net work done by stresses or forces such as , which act on an individual cube, e.g. for the one-dimensional case in Fig.(c): where the equilibrium relation has been used and the second order term has been neglected. Integrating over the whole body gives: – Work done by the body forces f. Equating the two results leads to the principle of virtual work for a deformable body: where the total external virtual work is done by T and f. Thus, The right-hand-side of (,) is often called the internal virtual work. The principle of virtual work then states: External virtual work is equal to internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains. It includes the principle of virtual work for rigid bodies as a special case where the internal virtual work is zero. 
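As a numerical illustration of the rigid-body form of the principle, the lever and gear-train results derived above can be checked directly; all numbers in the sketch below are invented for illustration only.

```python
# Numerical check of the law of the lever and the gear-train relation using
# the principle of virtual work.  All numbers are illustrative assumptions.

# Lever: fulcrum distances a (input side) and b (output side), input force FA.
a, b = 0.60, 0.20            # metres (assumed)
FA = 50.0                    # newtons (assumed)

# For a virtual rotation d_theta the input and output points move a*d_theta and
# b*d_theta.  Setting the virtual work FA*a*d_theta - FB*b*d_theta to zero
# gives the equilibrium output force and mechanical advantage a/b.
FB = FA * a / b
d_theta = 1e-3                          # arbitrary small virtual rotation
virtual_work = FA * a * d_theta - FB * b * d_theta
print(FB, virtual_work)                 # 150.0, ~0.0 (amplified, since a > b)

# Gear train: speed ratio R = omega_A / omega_B.  Zero virtual work,
# TA*d_theta - TB*(d_theta / R) = 0, gives TB = R * TA.
R = 4.0                      # input gear turns 4 times per output turn (assumed)
TA = 2.5                     # input torque in N*m (assumed)
TB = R * TA
print(TB)                    # 10.0 N*m: the slower output shaft carries more torque
```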
Proof of equivalence between the principle of virtual work and the equilibrium equation We start by looking at the total work done by surface traction on the body going through the specified deformation: Applying divergence theorem to the right hand side yields: Now switch to indicial notation for the ease of derivation. To continue our derivation, we substitute in the equilibrium equation . Then The first term on the right hand side needs to be broken into a symmetric part and a skew part as follows: where is the strain that is consistent with the specified displacement field. The 2nd to last equality comes from the fact that the stress matrix is symmetric and that the product of a skew matrix and a symmetric matrix is zero. Now recap. We have shown through the above derivation that Move the 2nd term on the right hand side of the equation to the left: The physical interpretation of the above equation is, the External virtual work is equal to internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains. For practical applications: In order to impose equilibrium on real stresses and forces, we use consistent virtual displacements and strains in the virtual work equation. In order to impose consistent displacements and strains, we use equilibriated virtual stresses and forces in the virtual work equation. These two general scenarios give rise to two often stated variational principles. They are valid irrespective of material behaviour. Principle of virtual displacements Depending on the purpose, we may specialize the virtual work equation. For example, to derive the principle of virtual displacements in variational notations for supported bodies, we specify: Virtual displacements and strains as variations of the real displacements and strains using variational notation such as and Virtual displacements be zero on the part of the surface that has prescribed displacements, and thus the work done by the reactions is zero. There remains only external surface forces on the part that do work. The virtual work equation then becomes the principle of virtual displacements: This relation is equivalent to the set of equilibrium equations written for a differential element in the deformable body as well as of the stress boundary conditions on the part of the surface. Conversely, () can be reached, albeit in a non-trivial manner, by starting with the differential equilibrium equations and the stress boundary conditions on , and proceeding in the manner similar to () and (). Since virtual displacements are automatically compatible when they are expressed in terms of continuous, single-valued functions, we often mention only the need for consistency between strains and displacements. The virtual work principle is also valid for large real displacements; however, Eq.() would then be written using more complex measures of stresses and strains. Principle of virtual forces Here, we specify: Virtual forces and stresses as variations of the real forces and stresses. Virtual forces be zero on the part of the surface that has prescribed forces, and thus only surface (reaction) forces on (where displacements are prescribed) would do work. The virtual work equation becomes the principle of virtual forces: This relation is equivalent to the set of strain-compatibility equations as well as of the displacement boundary conditions on the part . It has another name: the principle of complementary virtual work. 
Alternative forms A specialization of the principle of virtual forces is the unit dummy force method, which is very useful for computing displacements in structural systems. According to D'Alembert's principle, inclusion of inertial forces as additional body forces will give the virtual work equation applicable to dynamical systems. More generalized principles can be derived by: allowing variations of all quantities. using Lagrange multipliers to impose boundary conditions and/or to relax the conditions specified in the two states. These are described in some of the references. Among the many energy principles in structural mechanics, the virtual work principle deserves a special place due to its generality that leads to powerful applications in structural analysis, solid mechanics, and finite element method in structural mechanics. See also Flexibility method Unit dummy force method Finite element method in structural mechanics Calculus of variations Lagrangian mechanics Müller-Breslau's principle D'Alembert's principle References External links Examples applications of the virtual work principle Bibliography Bathe, K.J. "Finite Element Procedures", Prentice Hall, 1996. Charlton, T.M. Energy Principles in Theory of Structures, Oxford University Press, 1973. Dym, C. L. and I. H. Shames, Solid Mechanics: A Variational Approach, McGraw-Hill, 1973. Greenwood, Donald T. Classical Dynamics, Dover Publications Inc., 1977, Hu, H. Variational Principles of Theory of Elasticity With Applications, Taylor & Francis, 1984. Langhaar, H. L. Energy Methods in Applied Mechanics, Krieger, 1989. Reddy, J.N. Energy Principles and Variational Methods in Applied Mechanics, John Wiley, 2002. Shames, I. H. and Dym, C. L. Energy and Finite Element Methods in Structural Mechanics, Taylor & Francis, 1995, Tauchert, T.R. Energy Principles in Structural Mechanics, McGraw-Hill, 1974. Washizu, K. Variational Methods in Elasticity and Plasticity, Pergamon Pr, 1982. Wunderlich, W. Mechanics of Structures: Variational and Computational Methods, CRC, 2002. Mechanics Dynamical systems Structural analysis Linkages (mechanical)
Virtual work
[ "Physics", "Mathematics", "Engineering" ]
4,700
[ "Structural engineering", "Structural analysis", "Mechanics", "Mechanical engineering", "Aerospace engineering", "Dynamical systems" ]
1,212,009
https://en.wikipedia.org/wiki/Descriptive%20complexity%20theory
Descriptive complexity is a branch of computational complexity theory and of finite model theory that characterizes complexity classes by the type of logic needed to express the languages in them. For example, PH, the union of all complexity classes in the polynomial hierarchy, is precisely the class of languages expressible by statements of second-order logic. This connection between complexity and the logic of finite structures allows results to be transferred easily from one area to the other, facilitating new proof methods and providing additional evidence that the main complexity classes are somehow "natural" and not tied to the specific abstract machines used to define them. Specifically, each logical system produces a set of queries expressible in it. The queries – when restricted to finite structures – correspond to the computational problems of traditional complexity theory. The first main result of descriptive complexity was Fagin's theorem, shown by Ronald Fagin in 1974. It established that NP is precisely the set of languages expressible by sentences of existential second-order logic; that is, second-order logic excluding universal quantification over relations, functions, and subsets. Many other classes were later characterized in such a manner. The setting When we use the logic formalism to describe a computational problem, the input is a finite structure, and the elements of that structure are the domain of discourse. Usually the input is either a string (of bits or over an alphabet) and the elements of the logical structure represent positions of the string, or the input is a graph and the elements of the logical structure represent its vertices. The length of the input will be measured by the size of the respective structure. Whatever the structure is, we can assume that there are relations that can be tested, for example " is true if and only if there is an edge from to " (in case of the structure being a graph), or " is true if and only if the th letter of the string is 1." These relations are the predicates for the first-order logic system. We also have constants, which are special elements of the respective structure, for example if we want to check reachability in a graph, we will have to choose two constants s (start) and t (terminal). In descriptive complexity theory we often assume that there is a total order over the elements and that we can check equality between elements. This lets us consider elements as numbers: the element represents the number if and only if there are elements with . Thanks to this we also may have the primitive predicate "bit", where is true if only the th bit of the binary expansion of is 1. (We can replace addition and multiplication by ternary relations such that is true if and only if and is true if and only if ). Overview of characterisations of complexity classes If we restrict ourselves to ordered structures with a successor relation and basic arithmetical predicates, then we get the following characterisations: First-order logic defines the class AC0, the languages recognized by polynomial-size circuits of bounded depth, which equals the languages recognized by a concurrent random access machine in constant time. First-order logic augmented with symmetric or deterministic transitive closure operators yield L, problems solvable in logarithmic space. First-order logic with a transitive closure operator yields NL, the problems solvable in nondeterministic logarithmic space. 
First-order logic with a least fixed point operator gives P, the problems solvable in deterministic polynomial time. Existential second-order logic yields NP. Universal second-order logic (excluding existential second-order quantification) yields co-NP. Second-order logic corresponds to the polynomial hierarchy PH. Second-order logic with a transitive closure (commutative or not) yields PSPACE, the problems solvable in polynomial space. Second-order logic with a least fixed point operator gives EXPTIME, the problems solvable in exponential time. HO, the complexity class defined by higher-order logic, is equal to ELEMENTARY. Sub-polynomial time FO without any operators In circuit complexity, first-order logic with arbitrary predicates can be shown to be equal to AC0, the first class in the AC hierarchy. Indeed, there is a natural translation from FO's symbols to nodes of circuits, with being and of size . First-order logic in a signature with arithmetical predicates characterises the restriction of the AC0 family of circuits to those constructible in alternating logarithmic time. First-order logic in a signature with only the order relation corresponds to the set of star-free languages. Transitive closure logic First-order logic gains substantially in expressive power when it is augmented with an operator that computes the transitive closure of a binary relation. The resulting transitive closure logic is known to characterise non-deterministic logarithmic space (NL) on ordered structures. This was used by Immerman to show that NL is closed under complement (i.e. that NL = co-NL). When restricting the transitive closure operator to deterministic transitive closure, the resulting logic exactly characterises logarithmic space on ordered structures. Second-order Krom formulae On structures that have a successor function, NL can also be characterised by second-order Krom formulae. SO-Krom is the set of Boolean queries definable with second-order formulae in conjunctive normal form such that the first-order quantifiers are universal and the quantifier-free part of the formula is in Krom form, which means that the first-order formula is a conjunction of disjunctions, and in each "disjunction" there are at most two variables. Every second-order Krom formula is equivalent to an existential second-order Krom formula. SO-Krom characterises NL on structures with a successor function. Polynomial time On ordered structures, first-order least fixed-point logic captures PTIME: First-order least fixed-point logic FO[LFP] is the extension of first-order logic by a least fixed-point operator, which expresses the fixed-point of a monotone expression. This augments first-order logic with the ability to express recursion. The Immerman–Vardi theorem, shown independently by Immerman and Vardi, shows that FO[LFP] characterises PTIME on ordered structures. As of 2022, it is still open whether there is a natural logic characterising PTIME on unordered structures. The Abiteboul–Vianu theorem states that FO[LFP]=FO[PFP] on all structures if and only if FO[LFP]=FO[PFP] on ordered structures; hence if and only if P=PSPACE. This result has been extended to other fixpoints. Second-order Horn formulae In the presence of a successor function, PTIME can also be characterised by second-order Horn formulae. 
SO-Horn is the set of Boolean queries definable with SO formulae in disjunctive normal form such that the first-order quantifiers are all universal and the quantifier-free part of the formula is in Horn form, which means that it is a big AND of OR, and in each "OR" every variable except possibly one are negated. This class is equal to P on structures with a successor function. Those formulae can be transformed to prenex formulas in existential second-order Horn logic. Non-deterministic polynomial time Fagin's theorem Ronald Fagin's 1974 proof that the complexity class NP was characterised exactly by those classes of structures axiomatizable in existential second-order logic was the starting point of descriptive complexity theory. Since the complement of an existential formula is a universal formula, it follows immediately that co-NP is characterized by universal second-order logic. SO, unrestricted second-order logic, is equal to the Polynomial hierarchy PH. More precisely, we have the following generalisation of Fagin's theorem: The set of formulae in prenex normal form where existential and universal quantifiers of second order alternate k times characterise the kth level of the polynomial hierarchy. Unlike most other characterisations of complexity classes, Fagin's theorem and its generalisation do not presuppose a total ordering on the structures. This is because existential second-order logic is itself sufficiently expressive to refer to the possible total orders on a structure using second-order variables. Beyond NP Partial fixed point is PSPACE The class of all problems computable in polynomial space, PSPACE, can be characterised by augmenting first-order logic with a more expressive partial fixed-point operator. Partial fixed-point logic, FO[PFP], is the extension of first-order logic with a partial fixed-point operator, which expresses the fixed-point of a formula if there is one and returns 'false' otherwise. Partial fixed-point logic characterises PSPACE on ordered structures. Transitive closure is PSPACE Second-order logic can be extended by a transitive closure operator in the same way as first-order logic, resulting in SO[TC]. The TC operator can now also take second-order variables as argument. SO[TC] characterises PSPACE. Since ordering can be referenced in second-order logic, this characterisation does not presuppose ordered structures. Elementary functions The time complexity class ELEMENTARY of elementary functions can be characterised by HO, the complexity class of structures that can be recognized by formulas of higher-order logic. Higher-order logic is an extension of first-order logic and second-order logic with higher-order quantifiers. There is a relation between the th order and non-deterministic algorithms the time of which is bounded by levels of exponentials. Definition We define higher-order variables. A variable of order has an arity and represents any set of -tuples of elements of order . They are usually written in upper-case and with a natural number as exponent to indicate the order. Higher-order logic is the set of first-order formulae where we add quantification over higher-order variables; hence we will use the terms defined in the FO article without defining them again. HO is the set of formulae with variables of order at most . HO is the subset of formulae of the form , where is a quantifier and means that is a tuple of variable of order with the same quantification. 
So HO^i_j is the set of formulae with j alternations of quantifiers of order i, beginning with ∃, followed by a formula of order i − 1. Using the standard notation for towers of exponentials (tetration), exp_0(x) = x and exp_{k+1}(x) = 2^{exp_k(x)}; that is, exp_k(x) is a tower 2^{2^{…^{2^x}}} with k twos. Normal form Every formula of order i is equivalent to a formula in prenex normal form, where we first write quantification over variables of order i and then a formula of order i − 1 in normal form. Relation to complexity classes HO is equal to the class ELEMENTARY of elementary functions. To be more precise, HO^i_1 = NTIME(exp_{i−2}(n^c)), meaning a tower of (i − 2) 2s ending with n^c, where c is a constant. A special case of this is that HO^2_1 = ∃SO = NP, which is exactly Fagin's theorem. Using oracle machines in the polynomial hierarchy, HO^i_j = NTIME(exp_{i−2}(n^c))^{Σ^P_{j−1}}. Notes References External links Neil Immerman's descriptive complexity page, including a diagram Computational complexity theory Finite model theory
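To make the characterisation by existential second-order logic discussed above (Fagin's theorem) concrete, the following sentence is a standard textbook example rather than a formula from the article itself: it expresses 3-colourability of a graph, an NP-complete property, where E is the edge relation of the input structure and the guessed unary relations R, G, B play the role of the three colour classes.

```latex
\exists R\,\exists G\,\exists B\;\Big[\;\forall x\,\big(R(x)\lor G(x)\lor B(x)\big)\;\land\;
\forall x\,\forall y\,\Big(E(x,y)\rightarrow\neg\big(R(x)\land R(y)\big)\land\neg\big(G(x)\land G(y)\big)\land\neg\big(B(x)\land B(y)\big)\Big)\Big]
```

A finite graph satisfies this sentence exactly when it is 3-colourable, and the only second-order quantifiers are existential, in line with the NP characterisation; note that, as remarked above, no ordering on the structure is needed.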
Descriptive complexity theory
[ "Mathematics" ]
2,352
[ "Finite model theory", "Model theory" ]
1,213,271
https://en.wikipedia.org/wiki/Micronization
Micronization is the process of reducing the average diameter of a solid material's particles. Traditional techniques for micronization focus on mechanical means, such as milling and grinding. Modern techniques make use of the properties of supercritical fluids and manipulate the principles of solubility. The term micronization usually refers to the reduction of average particle diameters to the micrometer range, but can also describe further reduction to the nanometer scale. Common applications include the production of active chemical ingredients, foodstuff ingredients, and pharmaceuticals. These chemicals need to be micronized to increase efficacy. Traditional techniques Traditional micronization techniques are based on friction to reduce particle size. Such methods include milling, bashing and grinding. A typical industrial mill is composed of a cylindrical metallic drum that usually contains steel spheres. As the drum rotates the spheres inside collide with the particles of the solid, thus crushing them towards smaller diameters. In the case of grinding, the solid particles are formed when the grinding units of the device rub against each other while particles of the solid are trapped in between. Methods like crushing and cutting are also used for reducing particle diameter, but produce more rough particles compared to the two previous techniques (and are therefore the early stages of the micronization process). Crushing employs hammer-like tools to break the solid into smaller particles by means of impact. Cutting uses sharp blades to cut the rough solid pieces into smaller ones. Modern techniques Modern methods use supercritical fluids in the micronization process. These methods use supercritical fluids to induce a state of supersaturation, which leads to precipitation of individual particles. The most widely applied techniques of this category include the RESS process (Rapid Expansion of Supercritical Solutions), the SAS method (Supercritical Anti-Solvent) and the PGSS method (Particles from Gas Saturated Solutions). These modern techniques allow for greater tuneability of the process. Supercritical carbon dioxide (scCO2) is a commonly used medium in micronization processes. This is because scCO2 is not very reactive and has easily accessible critical point state parameters. As a result, scCO2 can be effectively used to obtain pure crystalline or amorphous micronized forms. Parameters like relative pressure and temperature, solute concentration, and antisolvent to solvent ratio are varied to adjust the output to the producer's needs. Control of particle size in micronization can be influenced by macroscopic factors, such as geometric parameters of the spray nozzle and flow rate, and molecular level changes due to adjustments in state parameters. These adjustments can lead to the nucleation of particles of varying sizes by polymorphic or amorphous transformations, as well as due to the characteristics of aggregation processes, which in some cases is accompanied by changes in conformational equilibria. The supercritical fluid methods result in finer control over particle diameters, distribution of particle size and consistency of morphology. Because of the relatively low pressure involved, many supercritical fluid methods can incorporate thermolabile materials. Modern techniques involve renewable, nonflammable and nontoxic chemicals. 
RESS In the case of RESS (Rapid Expansion of Supercritical Solutions), the supercritical fluid is used to dissolve the solid material under high pressure and temperature, thus forming a homogeneous supercritical phase. Thereafter, the mixture is expanded through a nozzle to form smaller particles. Immediately upon exiting the nozzle, rapid expansion occurs, lowering the pressure. The pressure drops below the supercritical pressure, causing the supercritical fluid - usually carbon dioxide - to return to the gas state. This phase change severely decreases the solubility of the mixture and results in precipitation of particles. The less time it takes the solution to expand and the solute to precipitate, the narrower the particle size distribution will be. Faster precipitation also tends to result in smaller particle diameters. SAS In the SAS method (Supercritical Anti-Solvent), the solid material is dissolved in an organic solvent. The supercritical fluid is then added as an antisolvent, which decreases the solubility of the system. As a result, particles of small diameter are formed. There are various submethods of SAS, which differ in the way the supercritical fluid is introduced into the organic solution. PGSS In the PGSS method (Particles from Gas Saturated Solutions) the solid material is melted and the supercritical fluid is dissolved in it. The gas-saturated solution is then forced to expand through a nozzle, and in this way nanoparticles are formed. The PGSS method has the advantage that, because of the supercritical fluid, the melting point of the solid material is reduced. The solid therefore melts at a lower temperature than its normal melting temperature at ambient pressure. Applications Pharmaceuticals and foodstuff ingredients are the main industries in which micronization is utilized. Particles with reduced diameters have higher dissolution rates, which increases efficacy. Progesterone, for example, is micronized by producing very small crystals of the hormone. Micronized progesterone is manufactured in the laboratory from plant-derived starting materials. It is available for use in hormone replacement therapy (HRT), infertility treatment, and the treatment of progesterone deficiency, including dysfunctional uterine bleeding in premenopausal women. Compounding pharmacies can supply micronized progesterone in sublingual tablets, oil-based capsules, or transdermal creams. Creatine is among the other substances that are commonly micronized. References External links Example of a Micronizing Mill McCrone Micronizing Mill Materials science Unit operations
Micronization
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,165
[ "Applied and interdisciplinary physics", "Unit operations", "Materials science", "nan", "Chemical process engineering" ]
1,213,566
https://en.wikipedia.org/wiki/Quantum%20cosmology
Quantum cosmology is the attempt in theoretical physics to develop a quantum theory of the universe. This approach attempts to answer open questions of classical physical cosmology, particularly those related to the first phases of the universe. Classical cosmology is based on Albert Einstein's general theory of relativity (GTR or simply GR), which describes the evolution of the universe very well as long as one does not approach the Big Bang. At the gravitational singularity and at the Planck time, relativity theory fails to provide what must be demanded of a final theory of space and time. Therefore, a theory is needed that integrates relativity theory and quantum theory. Such an approach is attempted for instance with loop quantum cosmology, loop quantum gravity, string theory and causal set theory. In quantum cosmology, the universe is treated as a wave function instead of classical spacetime. See also String cosmology Brane cosmology Loop quantum cosmology Top-down cosmology Non-standard cosmology Loop quantum gravity Canonical quantum gravity Dark energy Minisuperspace Hamilton–Jacobi–Einstein equation Theory of everything World crystal Quantum vacuum state False vacuum Why is there anything at all?#Something may exist necessarily References Notes External links A Layman's Explanation of Quantum Cosmology Lectures on Quantum Cosmology by J.J. Halliwell Quantum gravity Physical cosmology
Quantum cosmology
[ "Physics", "Astronomy" ]
282
[ "Astronomical sub-disciplines", "Theoretical physics", "Unsolved problems in physics", "Astrophysics", "Quantum gravity", "Physics beyond the Standard Model", "Physical cosmology" ]
8,035,060
https://en.wikipedia.org/wiki/Ecosystem%20model
An ecosystem model is an abstract, usually mathematical, representation of an ecological system (ranging in scale from an individual population, to an ecological community, or even an entire biome), which is studied to better understand the real system. Using data gathered from the field, ecological relationships—such as the relation of sunlight and water availability to photosynthetic rate, or that between predator and prey populations—are derived, and these are combined to form ecosystem models. These model systems are then studied in order to make predictions about the dynamics of the real system. Often, the study of inaccuracies in the model (when compared to empirical observations) will lead to the generation of hypotheses about possible ecological relations that are not yet known or well understood. Models enable researchers to simulate large-scale experiments that would be too costly or unethical to perform on a real ecosystem. They also enable the simulation of ecological processes over very long periods of time (i.e. simulating a process that takes centuries in reality, can be done in a matter of minutes in a computer model). Ecosystem models have applications in a wide variety of disciplines, such as natural resource management, ecotoxicology and environmental health, agriculture, and wildlife conservation. Ecological modelling has even been applied to archaeology with varying degrees of success, for example, combining with archaeological models to explain the diversity and mobility of stone tools. Types of models There are two major types of ecological models, which are generally applied to different types of problems: (1) analytic models and (2) simulation / computational models. Analytic models are typically relatively simple (often linear) systems, that can be accurately described by a set of mathematical equations whose behavior is well-known. Simulation models on the other hand, use numerical techniques to solve problems for which analytic solutions are impractical or impossible. Simulation models tend to be more widely used, and are generally considered more ecologically realistic, while analytic models are valued for their mathematical elegance and explanatory power. Ecopath is a powerful software system which uses simulation and computational methods to model marine ecosystems. It is widely used by marine and fisheries scientists as a tool for modelling and visualising the complex relationships that exist in real world marine ecosystems. Model design The process of model design begins with a specification of the problem to be solved, and the objectives for the model. Ecological systems are composed of an enormous number of biotic and abiotic factors that interact with each other in ways that are often unpredictable, or so complex as to be impossible to incorporate into a computable model. Because of this complexity, ecosystem models typically simplify the systems they are studying to a limited number of components that are well understood, and deemed relevant to the problem that the model is intended to solve. The process of simplification typically reduces an ecosystem to a small number of state variables and mathematical functions that describe the nature of the relationships between them. The number of ecosystem components that are incorporated into the model is limited by aggregating similar processes and entities into functional groups that are treated as a unit. 
After establishing the components to be modeled and the relationships between them, another important factor in ecosystem model structure is the representation of space used. Historically, models have often ignored the confounding issue of space. However, for many ecological problems spatial dynamics are an important part of the problem, with different spatial environments leading to very different outcomes. Spatially explicit models (also called "spatially distributed" or "landscape" models) attempt to incorporate a heterogeneous spatial environment into the model. A spatial model is one that has one or more state variables that are a function of space, or can be related to other spatial variables. Validation After construction, models are validated to ensure that the results are acceptably accurate or realistic. One method is to test the model with multiple sets of data that are independent of the actual system being studied. This is important since certain inputs can cause a faulty model to output correct results. Another method of validation is to compare the model's output with data collected from field observations. Researchers frequently specify beforehand how much of a disparity they are willing to accept between parameters output by a model and those computed from field data. Examples The Lotka–Volterra equations One of the earliest, and best-known, ecological models is the predator-prey model of Alfred J. Lotka (1925) and Vito Volterra (1926). This model takes the form of a pair of ordinary differential equations, one representing a prey species, the other its predator:

dx/dt = αx − βxy
dy/dt = δxy − γy

where x is the number (or density) of prey, y is the number (or density) of predators, α is the intrinsic growth rate of the prey, β is the predation rate, δ is the rate at which consumed prey are converted into new predators, and γ is the predator mortality rate. Volterra originally devised the model to explain fluctuations in fish and shark populations observed in the Adriatic Sea after the First World War (when fishing was curtailed). However, the equations have subsequently been applied more generally. Although simple, they illustrate some of the salient features of ecological models: modelled biological populations experience growth, interact with other populations (as either predators, prey or competitors) and suffer mortality. A credible, simple alternative to the Lotka-Volterra predator-prey model and its common prey dependent generalizations is the ratio dependent or Arditi-Ginzburg model. The two are the extremes of the spectrum of predator interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka-Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio dependent extreme, so if a simple model is needed one can use the Arditi-Ginzburg model as the first approximation. Others The theoretical ecologist Robert Ulanowicz has used information theory tools to describe the structure of ecosystems, emphasizing mutual information (correlations) in studied systems. Drawing on this methodology and prior observations of complex ecosystems, Ulanowicz depicts approaches to determining the stress levels on ecosystems and predicting system reactions to defined types of alteration in their settings (such as increased or reduced energy flow, and eutrophication). Conway's Game of Life and its variations model ecosystems where the proximity of the members of a population is a factor in population growth.
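As a concrete illustration of the Lotka–Volterra equations above, the following Python sketch integrates them numerically; the parameter values and initial populations are arbitrary choices for demonstration only, not data from any study mentioned in the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary illustrative parameters:
# alpha = prey growth rate, beta = predation rate,
# delta = conversion of consumed prey into predators, gamma = predator mortality.
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5

def lotka_volterra(t, z):
    """Right-hand side of the Lotka-Volterra system; z = (prey x, predator y)."""
    x, y = z
    dxdt = alpha * x - beta * x * y   # prey grow, and are removed by predation
    dydt = delta * x * y - gamma * y  # predators grow from predation, and die off
    return [dxdt, dydt]

# Integrate from arbitrary initial populations of 10 prey and 5 predators.
sol = solve_ivp(lotka_volterra, (0.0, 50.0), [10.0, 5.0],
                t_eval=np.linspace(0.0, 50.0, 500))

# The two populations oscillate, with predator peaks lagging behind prey peaks.
print(sol.y[:, -1])  # prey and predator values at the final time step
```

The oscillations produced by such a run are the kind of fluctuation Volterra originally sought to explain; a ratio-dependent variant such as the Arditi-Ginzburg model would differ only in the form of the interaction terms.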
See also Compartmental models in epidemiology Dynamic global vegetation model Ecological forecasting Gordon Arthur Riley Land Surface Model (LSM version 1.0) Liebig's law of the minimum Mathematical biology Population dynamics Population ecology Rapoport's rule Scientific modelling System dynamics References Further reading External links Ecological modelling resources (ecobas.org) Exposure Assessment Models United States Environmental Protection Agency Ecotoxicology & Models (ecotoxmodels.org) Biological systems Environmental terminology Fisheries science Habitat Mathematical and theoretical biology Population models Systems ecology
Ecosystem model
[ "Mathematics", "Biology", "Environmental_science" ]
1,359
[ "Mathematical and theoretical biology", "Applied mathematics", "nan", "Environmental social science", "Systems ecology" ]
8,035,604
https://en.wikipedia.org/wiki/Electrochemical%20migration
Electrochemical migration (ECM) is the dissolution and movement of metal ions in the presence of an electric potential, which results in the growth of dendritic structures between anode and cathode. The process is most commonly observed in printed circuit boards, where it may significantly decrease the insulation between conductors. The main factor facilitating ECM is humidity. In the presence of water, ECM can proceed very quickly. Usually the process involves several stages: water adsorption, anode metal dissolution, ion accumulation, ion migration to the cathode, and dendritic growth. The growth of the dendrite takes a fraction of a second, during which time the resistance between anode and cathode drops almost to zero. Characteristics In addition to the electrical potential difference, the presence of moisture is a driving factor in ECM. If a sufficient moisture film has condensed on the surface, ECM can form a bridging structure between the contacts after just a few minutes, even at low electrical voltage. In general, the process can be broken down into the following steps: adsorption of water through condensation on the surface between the contacts (often promoted by hygroscopic ionic impurities); a shift of the pH value in the water film due to the applied potential difference (the film becomes acidic near the anode and alkaline near the cathode), which initiates the corrosion of (contact) metallizations, e.g. silver, copper, tin; dissolution of the anode material (silver, copper, tin etc.); migration of the metal cations to the cathode; reduction of the migrated cations and deposition on the cathode with the formation of a metallic dendrite; dendrite growth in the opposite direction, towards the anode; and reduction of the resistance between the contacts up to a permanent short circuit. Bridges can also form through interaction with impurities that migrate from the anode to the cathode. This mechanism impairs the reliability and longevity of electronic assemblies. This means that electrochemical migration is often the focus of failure root cause analyses as a possible trigger for malfunctions in the field. See also Whisker (metallurgy) References Electrochemistry
Electrochemical migration
[ "Chemistry" ]
444
[ "Electrochemistry", "Physical chemistry stubs", "Electrochemistry stubs" ]
8,036,048
https://en.wikipedia.org/wiki/Protofection
Protofection is a protein-mediated transfection of foreign mitochondrial DNA (mtDNA) into the mitochondria of cells in a tissue to supplement or replace the native mitochondrial DNA already present. The complete mtDNA genome or just fragments of mtDNA generated by polymerase chain reaction can be transferred into the target mitochondria through the technique. Scientists have hypothesized in recent decades that protofection can be beneficial for patients with mitochondrial diseases. This technique is a recent development and is continuously being improved. As mitochondrial DNA becomes progressively more damaged with age, this may provide a method of at least partially rejuvenating mitochondria in old tissue, restoring them to their original, youthful function. Method Protofection is a developing technique and is continuously being improved. A specific protein transduction system has been created that is complexed with mtDNA, which enables the mtDNA to move across the targeted cell's membrane and specifically target mitochondria. The transduction system used consists of a protein transduction domain, mitochondrial localization sequences, and mitochondrial transcription factor A. Each of these plays a specific role in protofection: A protein transduction domain is needed because such domains are small protein regions that can cross cell membranes independently. A specific mitochondrial localization sequence is used for protofection because it permits the mtDNA to enter the mitochondria. Mitochondrial transcription factor A is used because it unwinds the mtDNA that enters the mitochondria, which is critical for mtDNA replication. This process can lead to an increase in the amount of mtDNA present in the mitochondria of the target cells. The transduction system has been refined and modified since the first use of protofection. The complex, which was previously called the PTD-MLS-TFAM complex, is now named MTD-TFAM for short. MTD stands for mitochondrial transduction domain and it includes the protein transduction domain and the mitochondrial localization sequences. Possible therapeutic uses One hypothesis for mitochondrial diseases is that mitochondrial damage and dysfunction play an important role in aging. Protofection is being researched as a possibly viable laboratory technique for constructing gene therapies for inherited mitochondrial diseases, such as Leber's hereditary optic neuropathy. Studies have shown that protofection can lead to improved mitochondrial function in targeted cells. Protofection could be applied to modified or artificial mitochondria. Mitochondria could be modified to produce few or no free radicals without compromising energy production. Recent studies have demonstrated that mitochondrial transplants may be useful for rejuvenating dead or dying tissue, such as in heart attacks, in which the mitochondria are the first part of the cell to die. References External links What Happened to Protofection? An article giving some context for the past few years of protofection development. Mitochondrial Gene Therapy Augments Mitochondrial Physiology in a Parkinson's Disease Cell Model Development of Mitochondrial Gene Replacement Therapy Novel Therapeutic Approaches for Leber's Hereditary Optic Neuropathy Molecular biology
Protofection
[ "Chemistry", "Biology" ]
639
[ "Biochemistry", "Molecular biology" ]
8,036,355
https://en.wikipedia.org/wiki/Framework-specific%20modeling%20language
A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework. FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices. An FSML concept can be configured by selecting features and providing values for features. Such a concept configuration represents how the concept should be implemented in the code. In other words, a concept configuration describes how the framework should be completed in order to create the implementation of the concept. Applications FSMLs are used in model-driven development for creating models or specifications of software to be built. FSMLs enable the creation of models from the framework completion code (that is, automated reverse engineering); the creation of the framework completion code from the models (that is, automated forward engineering); code verification through constraint checking on the model; and automated round-trip engineering. Examples Eclipse Workbench Part Interaction FSML An example FSML for modeling Eclipse Parts (that is, editors and views) and Part Interactions (for example, listens to parts, requires adapter, provides selection). The prototype implementation supports automated round-trip engineering of Eclipse plug-ins that implement workbench parts and part interactions. See also General-purpose modeling (GPM) Model-driven engineering (MDE) Domain-specific language (DSL) Model-driven architecture (MDA) Meta-Object Facility (MOF) References Specification languages Modeling languages
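The following minimal Python sketch illustrates the idea of a concept configuration and forward engineering described above; it is not any real FSML tool, and the concept name, feature names, and generated class are all hypothetical, chosen only to show how selected features could determine the framework completion code.

```python
from dataclasses import dataclass, field

@dataclass
class ConceptConfiguration:
    """Hypothetical FSML concept: a framework-provided part with selectable features."""
    name: str
    features: dict = field(default_factory=dict)  # feature name -> selected value

def forward_engineer(config: ConceptConfiguration) -> str:
    """Generate hypothetical framework completion code from a concept configuration."""
    lines = [f"class {config.name}(FrameworkPart):"]  # FrameworkPart is an assumed base class
    if config.features.get("listens_to_selection"):
        lines.append("    def on_selection_changed(self, selection): ...")
    if config.features.get("provides_adapter"):
        lines.append("    def get_adapter(self, adapter_type): ...")
    if len(lines) == 1:
        lines.append("    pass")
    return "\n".join(lines)

# Selecting different features yields different implementation steps in the generated code.
config = ConceptConfiguration("MyView", {"listens_to_selection": True})
print(forward_engineer(config))
```

Reverse engineering would be the inverse step: recovering such a configuration by inspecting which of the feature-specific methods the completion code actually implements.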
Framework-specific modeling language
[ "Engineering" ]
312
[ "Software engineering", "Specification languages" ]
8,037,655
https://en.wikipedia.org/wiki/James%20Bay%20Cree%20hydroelectric%20conflict
The James Bay Cree hydroelectric conflict refers to the resistance by James Bay Cree to the James Bay Hydroelectric Project and the Quebec Government, beginning in 1971. The First Phase The Quebec government announced plans in April 1971 for a hydroelectric project in the Baie-James region of northern Quebec. It followed typical practice of neither informing the Cree people living in the area, nor estimating the consequences of the development as far as they were concerned. The Cree decision to present a unified front in negotiations in order to protect their lands and future autonomy provided the foundation for increased contact between the different communities and the start of a regional identity for the James Bay Cree. The Quebec Association of Indians, an ad hoc association of native northern Quebecers, won an injunction on 15 November 1973 blocking the construction of the hydroelectric project. They were represented by their lawyer James O'Reilly, who became one of the foremost experts in Indian law. The day after the Malouf judgement was issued, two appeals were launched, one against the merits of the Malouf judgement and one an application to the effect that the Malouf judgement should be suspended pending the hearing. One week later the Court of Appeal of Quebec heard the case. The three judges, Lucien Tremblay, Jean Turgeron, and P.C. Casey, suspended the Malouf judgement until the Court of Appeal was ready to hear the case. In 1974 the Court of Appeal overturned the Malouf judgement. Although the judgement was suspended seven days later and overturned in 1974, the Malouf judgement confirmed Quebec's legal obligation to negotiate a treaty covering the territory, even as construction proceeded. The Grand Council of the Crees, representing the Cree villages of Northern Quebec, was created in 1974 to better protect Cree rights during negotiations with the governments of Quebec and Canada. The governments of Canada and Quebec and representatives from each of the Cree villages and most of the Inuit villages signed the James Bay and Northern Quebec Agreement on November 11, 1975. The Agreement offered, for the first time, a written contract which explicitly presented the rights of indigenous people. The result of the hydroelectric treaty became an example for future conflicts in other communities with issues of the same nature. It allowed hydroelectric development on Cree lands in exchange for financial compensation, greater autonomy, and improvements to health care, housing, and educational services. The Agreement strengthened the social and political position of the Cree, but drove a split between them and other native groups by establishing what was seen as an undesirable precedent by which native land claims could be resolved. The intention of the Cree was not to 'sell out' and sacrifice a part of their Cree culture to compensate for their place in Canadian society, but to secure and uphold as much of their rooted lifestyle and land as possible, maintaining the power of their native traditions while carefully amalgamating into the economically dominant society. Even during these negotiations, construction of roads and dams for the hydroelectric projects never stopped for an appreciable length of time. The Cree had no legal way of stopping or suspending this development, so even if they had succeeded in obtaining complete recognition of their claims, much of the land would have already been flooded. 
They were well aware of the fact that the damage to their culture and land was inevitable, and desired reimbursement for its repair. Unfortunately, the federal and provincial governments repeatedly failed to fulfil the monetary promises made in the Agreement, and the Cree were forced to use their own compensation money for improvements, such as those to basic water and sewage systems, that would otherwise have waited a long time for a solution. The new village of Chisasibi, on the southern shore of La Grande River, replaced the Fort George settlement on an island at the mouth of the river in 1981. The Fort George settlement itself had been home to people forced to relocate by earlier hydroelectric development. The construction of first phase of the James Bay Project was completed in 1986. The Second Phase In 1986 the Quebec government announced plans for the second phase of the project. The Grande-Baleine hydroelectric project involved the creation of three power plants and the flooding of about 1,700 square kilometres of land (3% of the Grande-Baleine watershed) upstream from the Whapmagoostui village. The Grande-Baleine project represented a source of employment for the citizens of Quebec, and an alteration to the local ecosystem to environmentalists. For the Cree and Inuit in the area, however, the project would not only cause serious change to the environment, but would also have a social impact. Attempts to estimate the social impact of the hydroelectric projects (usually included within the environmental assessment) are complicated by unresolved dilemmas such as whether the changes were caused by the project itself or if they were beginning to happen before the project came to fruition. With the Grande-Baleine project, the Cree community of Whapmagoostui found themselves facing new social changes that they had avoided up to that point. Unfortunately for the community, Hydro-Quebec and the sector of the government most involved in the project took the position that any social effects on the communities were not their problem, and would not impact decisions made regarding the project. The Cree and Inuit worked together with environmentalists to protest the development, but the debate of the Grande-Baleine project was reduced to the issue of political power (whether the Inuit and Cree were to be allowed to exercise their interests in the development and what form it would take), instead of accommodating other (non-political) interests. In 1991, under the direction of Grand Chief Matthew Coon Come, the Cree launched a very visible protest of the Grande-Baleine project in New York City. Following agreements in 1989 and 1992 with the Governments of Canada and Quebec, a new Cree village, Oujé-Bougoumou, was created in 1992 for the 600 Cree of the Chibougamau area. The Quebec government canceled the Grande-Baleine hydroelectric project in 1994, in part due to public concern about its potential impact on the environment and on First Nations communities. Continuing Impact The Cree and the Government of Quebec signed the landmark Agreement Respecting a New Relationship Between the Cree Nation and the Government of Quebec, also known as La Paix des Braves, in 2002. Far more than an economic deal, this was seen as a "nation to nation" agreement. The agreement paved the way for the construction of a final element of the original James Bay Project, the Eastmain-1 power station. 
The Cree and the Government of Quebec signed an agreement in 2004 providing for the joint environmental assessment of the Rupert River Diversion. The Rupert River Diversion was approved in 2007 and construction began. References Source: Canadian Geographic, 1986 External links James Bay Project and the Cree CBC Archives, The James Bay And Northern Quebec Agreement And The Northeastern Quebec Agreement, Indian and Northern Affairs Canada July 1993 Indigenous conflicts in Canada Cree James Bay Project Eeyou Istchee (territory)
James Bay Cree hydroelectric conflict
[ "Engineering" ]
1,410
[ "James Bay Project", "Macro-engineering" ]
8,038,191
https://en.wikipedia.org/wiki/Preservation%20breeding
Preservation breeding is an attempt by many plant and animal breeders to preserve bloodlines within a species, either of a rare breed or of rare pedigrees within a breed. Purpose Preservation breeding can have several purposes: Protection of genetic diversity within a species or a breed; Preservation of valuable genetic traits that may not be popular or in fashion in the present, but may be of great value in the future; Population or re-population of an area where a species previously existed; Support of a wild population that is defective or infected, by breeding healthy individuals and releasing them into the population in order to strengthen the overall health of the population. Mechanism Preservation breeding can take the following forms: Selective breeding of rare breeds and rare pedigrees, particularly monitoring breeding genetics in small populations to ensure diversity is maintained as much as possible; Intentional cross-breeding of rare breeds that suffer life-threatening genetic deficiencies with other breeds that have the critical gene, in order to preserve the rare breed into the future. History The term preservation breeding was first used by notable American Kennel Club judges Douglas Johnson and Bill Shelton in breeder seminars for dog breeders in the early 2000s. The preservation of dog breeds and the conservation of canine genetics started gaining more traction in the mid-2010s. See also Breeding back Conservation genetics References Ecological experiments Evolutionary biology Breeding Rare breed conservation
Preservation breeding
[ "Biology" ]
274
[ "Evolutionary biology", "Behavior", "Breeding", "Reproduction" ]
8,038,607
https://en.wikipedia.org/wiki/Wohl%E2%80%93Aue%20reaction
The Wohl–Aue reaction is an organic reaction between an aromatic nitro compound and an aniline to form a phenazine in the presence of an alkali base. An example is the reaction between nitrobenzene and aniline, which yields phenazine itself. The reaction is named after Alfred Wohl and W. Aue. References Heterocycle forming reactions Organic redox reactions Name reactions
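The overall transformation for the nitrobenzene and aniline case mentioned above can be sketched as follows; the stoichiometry shown is only the idealised mass balance implied by the products named in the article, not a statement about the mechanism or the practical yield.

```latex
\mathrm{C_6H_5NO_2} + \mathrm{C_6H_5NH_2}
\;\xrightarrow{\text{alkali base (e.g. KOH)}}\;
\underbrace{\mathrm{C_{12}H_8N_2}}_{\text{phenazine}} + 2\,\mathrm{H_2O}
```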
Wohl–Aue reaction
[ "Chemistry" ]
80
[ "Name reactions", "Organic redox reactions", "Heterocycle forming reactions", "Organic reactions" ]
8,040,039
https://en.wikipedia.org/wiki/Vault%20%28architecture%29
In architecture, a vault (French voûte, from Italian volta) is a self-supporting arched form, usually of stone or brick, serving to cover a space with a ceiling or roof. As in building an arch, a temporary support is needed while rings of voussoirs are constructed and the rings placed in position. Until the topmost voussoir, the keystone, is positioned, the vault is not self-supporting. Where timber is easily obtained, this temporary support is provided by centering consisting of a framed truss with a semicircular or segmental head, which supports the voussoirs until the ring of the whole arch is completed. Vault types Corbelled vaults, also called false vaults, built of horizontally joined layers of stone, have been documented since prehistoric times; they are attested at Mycenae in the 14th century BC. They were built regionally until modern times. True vault construction, with radially joined stones, was already known to the Egyptians and Assyrians and was introduced into the building practice of the West by the Etruscans. The Romans in particular developed vault construction further and built barrel, cross and dome vaults. Some outstanding examples have survived in Rome, e.g. the Pantheon and the Basilica of Maxentius. Brick vaults were widely used in Egypt from the early 3rd millennium BC, and from the end of the 8th century BC keystone (true) vaults were built. However, monumental temple buildings of the pharaonic culture in the Nile Valley did not use vaults, since even the huge portals with widths of more than 7 meters were spanned with cut stone beams. Dome Amongst the earliest known examples of any form of vaulting are those found in the neolithic village of Khirokitia on Cyprus. Dating from BCE, the circular buildings supported beehive-shaped corbelled dome vaults of unfired mud-bricks and also represent the first evidence for settlements with an upper floor. Similar beehive tombs, called tholoi, exist in Crete and Northern Iraq. Their construction differs from that at Khirokitia in that most appear partially buried and make provision for a dromos entry. The inclusion of domes, however, represents a wider sense of the word vault. The distinction between the two is that a vault is essentially an arch which is extruded into the third dimension, whereas a dome is an arch revolved around its vertical axis. Pitched brick barrel vault Pitched-brick vaults are named for their construction: the bricks are installed vertically (not radially) and lean (are pitched) at an angle, which allows their construction to be completed without the use of centering. Examples have been found in archaeological excavations in Mesopotamia dating to the 2nd and 3rd millennia BCE, which were set in gypsum mortar. Barrel vault A barrel vault is the simplest form of a vault and resembles a barrel or tunnel cut lengthwise in half. The effect is that of a structure composed of continuous semicircular or pointed sections. The earliest known examples of barrel vaults were built by the Sumerians, possibly under the ziggurat at Nippur in Babylonia, which was built of fired bricks cemented with clay mortar. The earliest barrel vaults in ancient Egypt are thought to be those in the granaries built by the 19th dynasty Pharaoh Ramesses II, the ruins of which are behind the Ramesseum, at Thebes.
The span was and the lower part of the arch was built in horizontal courses, up to about one-third of the height, and the rings above were inclined back at a slight angle, so that the bricks of each ring, laid flatwise, adhered till the ring was completed, no centering of any kind being required; the vault thus formed was elliptic in section, arising from the method of its construction. A similar system of construction was employed for the vault over the great hall at Ctesiphon, where the material employed was fired bricks or tiles of great dimensions, cemented with mortar; but the span was close upon , and the thickness of the vault was nearly at the top, there being four rings of brickwork. Assyrian palaces used pitched-brick vaults, made with sun-dried mudbricks, for gates, subterranean graves and drains. During the reign of king Sennacherib they were used to construct aqueducts, such as those at Jerwan. In the provincial city Dūr-Katlimmu they were used to created vaulted platforms. The tradition of their erection, however, would seem to have been handed down to their successors in Mesopotamia, viz. to the Sassanians, who in their palaces in Sarvestan and Firouzabad built domes of similar form to those shown in the Nimrud sculptures, the chief difference being that, constructed in rubble stone and cemented with mortar, they still exist, though probably abandoned on the Islamic invasion in the 7th century. Groin vaults A groin vault is formed by the intersection of two or more barrel vaults, resulting in the formation of angles or groins along the lines of transition between the webs. In these bays the longer transverse arches are semi-circular, as are the shorter longitudinal arches. The curvatures of these bounding arches were apparently used as the basis for the web centrings, which was created in the form of two intersecting tunnels as though each web was an arch projected horizontally in three dimensions. The earliest example is thought to be over a small hall at Pergamum, in Asia Minor, but its first employment over halls of great dimensions is due to the Romans. When two semicircular barrel vaults of the same diameter cross one another their intersection (a true ellipse) is known as a groin vault, down which the thrust of the vault is carried to the cross walls; if a series of two or more barrel vaults intersect one another, the weight is carried on to the piers at their intersection and the thrust is transmitted to the outer cross walls; thus in the Roman reservoir at Baiae, known as the Piscina Mirabilis, a series of five aisles with semicircular barrel vaults are intersected by twelve cross aisles, the vaults being carried on 48 piers and thick external walls. The width of these aisles being only about there was no great difficulty in the construction of these vaults, but in the Roman Baths of Caracalla the tepidarium had a span of , more than twice that of an English cathedral, so that its construction both from the statical and economical point of view was of the greatest importance. The researches of M. Choisy (L'Art de bâtir chez les Romains), based on a minute examination of those portions of the vaults which still remain in situ, have shown that, on a comparatively slight centering, consisting of trusses placed about apart and covered with planks laid from truss to truss, were laid – to begin with – two layers of the Roman brick (measuring nearly square and 2 in. 
thick); on these and on the trusses transverse rings of brick were built with longitudinal ties at intervals; on the brick layers and embedding the rings and cross ties concrete was thrown in horizontal layers, the haunches being filled in solid, and the surface sloped on either side and covered over with a tile roof of low pitch laid direct on the concrete. The rings relieved the centering from the weight imposed, and the two layers of bricks carried the concrete till it had set. As the walls carrying these vaults were also built in concrete with occasional bond courses of brick, the whole structure was homogeneous. One of the important ingredients of the mortar was a volcanic deposit found near Rome, known as pozzolana, which, when the concrete had set, not only made the concrete as solid as the rock itself, but to a certain extent neutralized the thrust of the vaults, which formed shells equivalent to that of a metal lid; the Romans, however, do not seem to have recognized the value of this pozzolana mixture, for they otherwise provided amply for the counteracting of any thrust which might exist by the erection of cross walls and buttresses. In the tepidaria of the Thermae and in the basilica of Constantine, in order to bring the thrust well within the walls, the main barrel vault of the hall was brought forward on each side and rested on detached columns, which constituted the principal architectural decoration. In cases where the cross vaults intersecting were not of the same span as those of the main vault, the arches were either stilted so that their soffits might be of the same height, or they formed smaller intersections in the lower part of the vault; in both of these cases, however, the intersections or groins were twisted, for which it was very difficult to form a centering, and, moreover, they were of disagreeable effect: though every attempt was made to mask this in the decoration of the vault by panels and reliefs modelled in stucco. Rib vault A rib vault is one in which all of the groins are covered by ribs or diagonal ribs in the form of segmental arches. Their curvatures are defined by the bounding arches. Whilst the transverse arches retain the same semi-circular profile as their groin-vaulted counterparts, the longitudinal arches are pointed with both arcs having their centres on the impost line. This allows the latter to correspond more closely to the curvatures of the diagonal ribs, producing a straight tunnel running from east to west. Reference has been made to the rib vault in Roman work, where the intersecting barrel vaults were not of the same diameter. Their construction must at all times have been somewhat difficult, but where the barrel vaulting was carried round over the choir aisle and was intersected (as in St Bartholomew-the-Great in Smithfield, London) by semicones instead of cylinders, it became worse and the groins more complicated. This would seem to have led to a change of system and to the introduction of a new feature, which completely revolutionized the construction of the vault. Hitherto the intersecting features were geometrical surfaces, of which the diagonal groins were the intersections, elliptical in form, generally weak in construction and often twisting. The medieval builder reversed the process, and set up the diagonal ribs first, which were utilized as permanent centres, and on these he carried his vault or web, which henceforward took its shape from the ribs. 
Instead of the elliptical curve which was given by the intersection of two semicircular barrel vaults, or cylinders, he employed the semicircular arch for the diagonal ribs; this, however, raised the centre of the square bay vaulted above the level of the transverse arches and of the wall ribs, and thus gave the appearance of a dome to the vault, such as may be seen in the nave of Sant'Ambrogio, Milan. To meet this, at first the transverse and wall ribs were stilted, or the upper part of their arches was raised, as in the Abbaye-aux-Hommes at Caen, and the Abbey of Lessay, in Normandy. The problem was ultimately solved by the introduction of the pointed arch for the transverse and wall ribs – the pointed arch had long been known and employed, on account of its much greater strength and of the less thrust it exerted on the walls. When employed for the ribs of a vault, however narrow the span might be, by adopting a pointed arch, its summit could be made to range in height with the diagonal rib; and, moreover, when utilized for the ribs of the annular vault, as in the aisle round the apsidal termination of the choir, it was not necessary that the half ribs on the outer side should be in the same plane as those of the inner side; for when the opposite ribs met in the centre of the annular vault, the thrust was equally transmitted from one to the other, and being already a broken arch the change of its direction was not noticeable. The first introduction of the pointed arch rib took place at Cefalù Cathedral and pre-dated the abbey of Saint-Denis. Whilst the pointed rib-arch is often seen as an identifier for Gothic architecture, Cefalù is a Romanesque cathedral whose masons experimented with the possibility of Gothic rib-arches before it was widely adopted by western church architecture. Besides Cefalù Cathedral, the introduction of the pointed arch rib would seem to have taken place in the choir aisles of the abbey of Saint-Denis, near Paris, built by the abbot Suger in 1135. It was in the church at Vezelay (1140) that it was extended to the square bay of the porch. As has been pointed out, the aisles had already in the early Christian churches been covered over with groined vaults, the only advance made in the later developments being the introduction of transverse ribs dividing the bays into square compartments. In the 12th century the first attempts were made to vault over the naves, which were twice the width of the aisles, so it became necessary to include two bays of the aisles to form one rectangular bay in the nave (although this is often mistaken for square). It followed that every alternate pier served no purpose, so far as the support of the nave vault was concerned, and this would seem to have suggested the alternative of providing a supplementary rib across the church and between the transverse ribs. This resulted in what is known as a sexpartite, or six-celled vault, of which one of the earliest examples is found in the Abbaye-aux-Hommes at Caen. This church, built by William the Conqueror, was originally constructed to carry a timber roof only, but nearly a century later the upper part of the nave walls was partly rebuilt, in order that it might be covered with a vault.
The immense size, however, of the square vault over the nave necessitated some additional support, so that an intermediate rib was thrown across the church, dividing the square compartment into six cells, and called the sexpartite vault The intermediate rib, however, had the disadvantage of partially obscuring one side of the clerestory windows, and it threw unequal weights on the alternate piers, so that in the cathedral of Soissons (1205) a quadripartite or four-celled vault was introduced, the width of each bay being half the span of the nave, and corresponding therefore with the aisle piers. To this there are some exceptions, in Sant' Ambrogio, Milan, and San Michele, Pavia (the original vault), and in the cathedrals of Speyer, Mainz and Worms, where the quadripartite vaults are nearly square, the intermediate piers of the aisles being of much smaller dimensions. In England sexpartite vaults exist at Canterbury (1175) (set out by William of Sens), Rochester (1200), Lincoln (1215), Durham (east transept), and St. Faith's chapel, Westminster Abbey. In the earlier stage of rib vaulting, the arched ribs consisted of independent or separate voussoirs down to the springing; the difficulty, however, of working the ribs separately led to two other important changes: (1) the lower part of the transverse diagonal and wall ribs were all worked out of one stone; and (2) the lower horizontal, constituting what is known as the tas-de-charge or solid springer. The tas-de-charge, or solid springer, had two advantages: (1) it enabled the stone courses to run straight through the wall, so as to bond the whole together much better; and (2) it lessened the span of the vault, which then required a centering of smaller dimensions. As soon as the ribs were completed, the web or stone shell of the vault was laid on them. In some English work each course of stone was of uniform height from one side to the other; but, as the diagonal rib was longer than either the transverse or wall rib, the courses dipped towards the former, and at the apex of the vault were cut to fit one another. In the early English Gothic period, in consequence of the great span of the vault and the very slight rise or curvature of the web, it was thought better to simplify the construction of the web by introducing intermediate ribs between the wall rib and the diagonal rib and between the diagonal and the transverse ribs; and in order to meet the thrust of these intermediate ribs a ridge rib was required, and the prolongation of this rib to the wall rib hid the junction of the web at the summit, which was not always very sightly, and constituted the ridge rib. In France, on the other hand, the web courses were always laid horizontally, and they are therefore of unequal height, increasing towards the diagonal rib. Each course also was given a slight rise in the centre, so as to increase its strength; this enabled the French masons to dispense with the intermediate rib, which was not introduced by them till the 15th century, and then more as a decorative than a constructive feature, as the domical form given to the French web rendered unnecessary the ridge rib, which, with some few exceptions, exists only in England. In both English and French vaulting centering was rarely required for the building of the web, a template (Fr. cerce) being employed to support the stones of each ring until it was complete. In Italy, Germany and Spain the French method of building the web was adopted, with horizontal courses and a domical form. 
Sometimes, in the case of comparatively narrow compartments, and more especially in clerestories, the wall rib was stilted, and this caused a peculiar twisting of the web near the springing of the wall rib; to these twisted surfaces the term ploughshare vaulting is given. One of the earliest examples of the introduction of the intermediate rib is found in the nave of Lincoln Cathedral, and there the ridge rib is not carried to the wall rib. It was soon found, however, that the construction of the web was much facilitated by additional ribs, and consequently there was a tendency to increase their number, so that in the nave of Exeter Cathedral three intermediate ribs were provided between the wall rib and the diagonal rib. In order to mask the junction of the various ribs, their intersections were ornamented with richly carved bosses, and this practice increased on the introduction of another short rib, known as the lierne, a term in France given to the ridge rib. Lierne ribs are short ribs crossing between the main ribs, and were employed chiefly as decorative features, as, for instance, in the Liebfrauenkirche (1482) of Mühlacker, Germany. One of the best examples of lierne ribs exists in the vault of the oriel window of Crosby Hall, London. The tendency to increase the number of ribs led to singular results in some cases, as in the choir of Gloucester Cathedral, where the ordinary diagonal ribs become mere ornamental mouldings on the surface of an intersected pointed barrel vault, and again in the cloisters, where the introduction of the fan vault, forming a concave-sided conoid, returned to the principles of the Roman geometrical vault. This is further shown in the construction of these fan vaults, for although in the earliest examples each of the ribs above the tas-de-charge was an independent feature, eventually it was found easier to carve them and the web out of the solid stone, so that the rib and web were purely decorative and had no constructional or independent functions. Fan vault This form of vaulting is found in English late Gothic architecture, in which the vault is constructed as a single surface of dressed stones, with the resulting conoid forming an ornamental network of blind tracery. The fan vault would seem to have owed its origin to the employment of centerings of one curve for all the ribs, instead of having separate centerings for the transverse, diagonal, wall and intermediate ribs; it was facilitated also by the introduction of the four-centred arch, because the lower portion of the arch formed part of the fan, or conoid, and the upper part could be extended at pleasure with a greater radius across the vault. These ribs were often cut from the same stones as the webs, with the entire vault being treated as a single jointed surface covered in interlocking tracery. The earliest example is perhaps the east walk of the cloister at Gloucester, with its surface consisting of intricately decorated panels of stonework forming conical structures that rise from the springers of the vault. In later examples, as in King's College Chapel, Cambridge, on account of the great dimensions of the vault, it was found necessary to introduce transverse ribs, which were required to give greater strength. Similar transverse ribs are found in Henry VII's chapel and in the Divinity School at Oxford, where a new development presented itself.
One of the defects of the fan vault at Gloucester is the appearance it gives of being half sunk in the wall; to remedy this, in the two buildings just quoted, the complete conoid is detached and treated as a pendant. Byzantine vaults and domes The vault of the Basilica of Maxentius, completed by Constantine, was the last great work carried out in Rome before its fall, and two centuries passed before the next important development, which is found in the Church of the Holy Wisdom (Hagia Sophia) at Constantinople. It is probable that the realization of the great advance in the science of vaulting shown in this church owed something to the eastern tradition of dome vaulting seen in the Assyrian domes, which are known to us only by the representations in the bas-relief from Nimrud, because in the great water cisterns in Istanbul, known as the Basilica Cistern and Bin bir direk (cistern with a thousand and one columns), we find the intersecting groin vaults of the Romans already replaced by small cupolas or domes. These domes, however, are of small dimensions when compared with that projected and carried out by Justinian in the Hagia Sophia. Previous to this the greatest dome was that of the Pantheon at Rome, but this was carried on an immense wall thick, and with the exception of small niches or recesses in the thickness of the wall could not be extended, so that Justinian apparently instructed his architect to provide an immense hemicycle or apse at the eastern end, a similar apse at the western end, and great arches on either side, the walls under which would be pierced with windows. Unlike the Pantheon dome, the upper portions of which are made of concrete, Byzantine domes were made of brick, which made them lighter and thinner, but more vulnerable to the forces exerted on them. The solution of the problem can be outlined as follows. If a hemispherical dome is cut by four vertical planes, the intersection gives four semicircular arches; if cut in addition by a horizontal plane tangent to the top of these arches, it describes a circle; that portion of the sphere which is below this circle and between the arches, forming a spherical spandrel, is the pendentive, and its radius is equal to half the diagonal of the square on which the four arches rest. Having obtained a circle for the base of the dome, it is not necessary that the upper portion of the dome should spring from the same level as the arches, or that its domical surface should be a continuation of that of the pendentive. The first dome of the Hagia Sophia apparently fell down, so that Justinian determined to rebuild it to a greater height, possibly to give greater lightness to the structure, but mainly in order to obtain increased light for the interior of the church. This was effected by piercing it with forty windows; the light streaming through these windows gave the dome the appearance of being suspended in the air. The pendentives which carried the dome rested on four great arches, the thrust of those crossing the church being counteracted by immense buttresses which traversed the aisles, and that of the other two partly by smaller arches in the apse, the thrust being carried to the outer walls, and to a certain extent by the side walls which were built under the arches. From the description given by Procopius we gather that the centering employed for the great arches consisted of a wall erected to support them during their erection.
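A short worked version of the pendentive geometry described above, as a sketch using a square bay of side a (an assumed plan dimension, not a figure from the source):

```latex
% The four semicircular arches over a square of side a have radius a/2.
% A vertical plane at distance a/2 from the centre cuts a sphere of radius R
% in a circle of radius \sqrt{R^2 - a^2/4}; setting this equal to a/2 gives
R^{2}-\frac{a^{2}}{4}=\frac{a^{2}}{4}
\quad\Longrightarrow\quad
R=\frac{a}{\sqrt{2}}=\frac{a\sqrt{2}}{2},
% i.e. half the diagonal a\sqrt{2} of the square, while the horizontal circle
% tangent to the arch crowns (the base on which the dome sits) has radius a/2.
```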
The construction of the pendentives is not known, but it is surmised that to the top of the pendentives they were built in horizontal courses of brick, projecting one over the other, the projecting angles being cut off afterwards and covered with stucco in which the mosaics were embedded; this was the method employed in the erection of the Périgordian domes, to which we shall return; these, however, were of less diameter than those of the Hagia Sophia, being only about 40 to instead of The apotheosis of Byzantine architecture, in fact, was reached in Hagia Sophia, for although it formed the model on which all subsequent Byzantine churches were based, so far as their plan was concerned, no domes approaching the former in dimensions were even attempted. The principal difference in some later examples is that which took place in the form of the pendentive on which the dome was carried. Instead of the spherical spandril of Hagia Sophia, large niches were formed in the angles, as in the Mosque of Damascus, which was built by Byzantine workmen for the Al-Walid I in CE 705; these gave an octagonal base on which the hemispherical dome rested; or again, as in the Sassanian palaces of Sarvestan and Firouzabad of the 4th and 5th century, when a series of concentric arch rings, projecting one in front of the other, were built, giving also an octagonal base; each of these pendentives is known as a squinch. There is one other remarkable vault, also built by Justinian, in the Church of the Saints Sergius and Bacchus in Constantinople. The central area of this church was octagonal on plan, and the dome is divided into sixteen compartments; of these eight consist of broad flat bands rising from the centre of each of the walls, and the alternate eight are concave cells over the angles of the octagon, which externally and internally give to the roof the appearance of an umbrella. Romanesque Although the dome constitutes the principal characteristic of the Byzantine church, throughout Asia Minor are numerous examples in which the naves are vaulted with the semicircular barrel vault, and this is the type of vault found throughout the south of France in the 11th and 12th centuries, the only change being the occasional substitution of the pointed barrel vault, adopted not only on account of its exerting a less thrust, but because, as pointed out by Fergusson (vol. ii. p. 46), the roofing tiles were laid directly on the vault and a less amount of filling in at the top was required. The continuous thrust of the barrel vault in these cases was met either by semicircular or pointed barrel vaults on the aisles, which had only half the span of the nave; of this there is an interesting example in the Chapel of Saint John in the Tower of London – and sometimes by half-barrel vaults. The great thickness of the walls, however, required in such constructions would seem to have led to another solution of the problem of roofing over churches with incombustible material, viz. that which is found throughout Périgord and La Charente, where a series of domes carried on pendentives covered over the nave, the chief peculiarities of these domes being the fact that the arches carrying them form part of the pendentives, which are all built in horizontal courses. 
The intersecting and groined vault of the Romans was employed in the early Christian churches in Rome, but only over the aisles, which were comparatively of small span, but in these there was a tendency to raise the centres of these vaults, which became slightly domical; in all these cases centering was employed. Gothic Revival and the Renaissance One good example of the fan vault is that over the staircase leading to the hall of Christ Church, Oxford, where the complete conoid is displayed in its centre carried on a central column. This vault, not built until 1640, is an example of traditional workmanship, probably in Oxford transmitted in consequence of the late vaulting of the entrance gateways to the colleges. Fan vaulting is peculiar to England, the only example approaching it in France being the pendant of the Lady-chapel at Caudebec-en-Caux, in Normandy. In France, Germany, and Spain the multiplication of ribs in the 15th century led to decorative vaults of various kinds, but with some singular modifications. Thus, in Germany, recognizing that the rib was no longer a necessary constructive feature, they cut it off abruptly, leaving a stump only; in France, on the other hand, they gave still more importance to the rib, by making it of greater depth, piercing it with tracery and hanging pendants from it, and the web became a horizontal stone paving laid on the top of these decorated vertical webs. This is the characteristic of the great Renaissance work in France and Spain; but it soon gave way to Italian influence, when the construction of vaults reverted to the geometrical surfaces of the Romans, without, however, always that economy in centering to which they had attached so much importance, and more especially in small structures. In large vaults, where it constituted an important expense, the chief boast of some of the most eminent architects has been that centering was dispensed with, as in the case of the dome of the Santa Maria del Fiore in Florence, built by Filippo Brunelleschi, and Ferguson cites as an example the great dome of the church at Mousta in Malta, erected in the first half of the 19th century, which was built entirely without centering of any kind. Vaulting and faux-vaulting in the Renaissance and after It is important to note that whereas Roman vaults, like that of the Pantheon, and Byzantine vaults, like that at Hagia Sophia, were not protected from above (i.e. the vault from the inside was the same that one saw from the outside), the European architects of the Middle Ages protected their vaults with wooden roofs. In other words, one will not see a Gothic vault from the outside. The reasons for this development are hypothetical, but the fact that the roofed basilica form preceded the era when vaults begin to be made is certainly to be taken into consideration. In other words, the traditional image of a roof took precedence over the vault. The separation between interior and exterior – and between structure and image – was to be developed very purposefully in the Renaissance and beyond, especially once the dome became reinstated in the Western tradition as a key element in church design. Michelangelo's dome for St. Peter's Basilica in Rome, as redesigned between 1585 and 1590 by Giacomo della Porta, for example, consists of two domes of which, however, only the inner is structural. Baltasar Neumann, in his baroque churches, perfected light-weight plaster vaults supported by wooden frames. 
These vaults, which exerted no lateral pressures, were perfectly suited for elaborate ceiling frescoes. In St Paul's Cathedral in London there is a highly complex system of vaults and faux-vaults. The dome that one sees from the outside is not a vault, but a relatively light-weight wooden-framed structure resting on an invisible – and for its age highly original – catenary vault of brick, below which is another dome, (the dome that one sees from the inside), but of plaster supported by a wood frame. From the inside, one can easily assume that one is looking at the same vault that one sees from the outside. India There are two distinctive "other ribbed vaults" (called "Karbandi" in Persian) in India which form no part of the development of European vaults, but have some unusual features; one carries the central dome of the Jumma Musjid at Bijapur (A.D. 1559), and the other is Gol Gumbaz, the tomb of Muhammad Adil Shah II (1626–1660) in the same town. The vault of the latter was constructed over a hall square, to carry a hemispherical dome. The ribs, instead of being carried across the angles only, thus giving an octagonal base for the dome, are carried across to the further pier of the octagon and consequently intersect one another, reducing the central opening to in diameter, and, by the weight of the masonry they carry, serving as counterpoise to the thrust of the dome, which is set back so as to leave a passage about wide round the interior. The internal diameter of the dome is , its height and the ribs struck from four centres have their springing from the floor of the hall. The Jumma Musjid dome was of smaller dimensions, on a square of with a diameter of , and was carried on piers only instead of immensely thick walls as in the tomb; but any thrust which might exist was counteracted by its transmission across aisles to the outer wall. Islamic architecture The Muqarnas is a form of vaulting common in Islamic architecture. Modern vaults Hyperbolic paraboloids The 20th century saw great advances in reinforced concrete design. The advent of shell construction and the better mathematical understanding of hyperbolic paraboloids allowed very thin, strong vaults to be constructed with previously unseen shapes. The vaults in the Church of Saint Sava are made of prefabricated concrete boxes. They were built on the ground and lifted to 40 m on chains. Vegetal vault When made by plants or trees, either artificially or grown on purpose by humans, structures of this type are called tree tunnels. See also References Citations Sources Copplestone, Trewin. (ed). (1963). World architecture – An illustrated history. Hamlyn, London. Further reading Block, Philippe, (2005) Equilibrium Systems, studies in masonry structure. Severy, Ching, Francis D. K. (1995). A Visual Dictionary of Architecture. Van Nostrand Reinhold Company. p. 262. External links Documentation on Arches, Domes and Vaults on the Auroville Earth Institute website Tracing the past: 3D analysis of medieval vaults, a talk for the British Archaeological Association by Dr Alex Buchanan, Dr James Hillson, and Dr Nick Webb Arches and vaults
Vault (architecture)
[ "Engineering" ]
7,055
[ "Structural engineering", "Ceilings" ]
8,042,122
https://en.wikipedia.org/wiki/Cholesterol%207%20alpha-hydroxylase
Cholesterol 7 alpha-hydroxylase also known as cholesterol 7-alpha-monooxygenase or cytochrome P450 7A1 (CYP7A1) is an enzyme that in humans is encoded by the gene which has an important role in cholesterol metabolism. It is a cytochrome P450 enzyme, which belongs to the oxidoreductase class, and converts cholesterol to 7-alpha-hydroxycholesterol, the first and rate limiting step in bile acid synthesis. The inhibition of cholesterol 7-alpha-hydroxylase (CYP7A1) represses bile acid biosynthesis. Evolution Sequence comparisons indicated a huge similarity between cytochromes P450 identified in man and bacteria, and suggested that the superfamily cytochrome P450 first originated from a common ancestral gene some three billion years ago. The superfamily cytochrome P450 was named in 1961, because of the 450-nm spectral peak pigment that cytochrome P450 has when reduced and bound to carbon monoxide. In the early 1960s, P450 was thought to be one enzyme, and by the mid 1960s it was associated with drug and steroid metabolism. However, the membrane-associated and hydrophobic nature of the enzyme system impeded purification, and the number of proteins involved could not be accurately counted. Advances in mRNA purification in the early 1980s allowed to isolate the first cDNA encoding a complete cytochrome P450 (CYP) protein, and thereafter, results of many cloning studies have revealed a large number of different enzymes. Advances in molecular biology and genomics facilitated the biochemical characterisation of individual P450 enzymes: The cytochromes P450 act on many endogenous substrates, introducing oxidative, peroxidative, and reductive changes into small molecules of widely different chemical structures. Substrates identified to date include saturated and unsaturated fatty acids, eicosanoids, sterols and steroids, bile acids, vitamin D3 derivatives, retinoids, and uroporphyrinogens. Many cytochrome P450 enzymes can metabolise various exogenous compounds including drugs, environmental chemicals and pollutants, and natural plant products. Metabolism of foreign chemicals frequently results in successful detoxication of the irritant; However, the actions of P450 enzymes can also generate toxic metabolites that contribute to increased risks of cancer, birth defects, and other toxic effects. The expression of many P450 enzymes is often induced by accumulation of a substrate. The ability of one P450 substrate to affect the concentrations of another in this manner is the basis for so-called drug-drug interactions, which complicate treatment. Molecular structure Cholesterol 7 alpha hydroxylase consists of 491 amino acids, which on folding forms 23 alpha helices and 26 beta sheets. Function Cholesterol 7 alpha-hydroxylase is a cytochrome P450 heme enzyme that oxidizes cholesterol in the position 7 using molecular oxygen. It is an oxidoreductase. CYP7A1 is located in the endoplasmic reticulum (ER) and is important for the synthesis of bile acid and the regulation of cholesterol levels. Synthesis of bile acid Cholesterol 7 alpha-hydroxylase is the rate-limiting enzyme in the synthesis of bile acid from cholesterol via the classic pathway, catalyzing the formation of 7α-hydroxycholesterol. The unique detergent properties of bile acids are essential for the digestion and intestinal absorption of hydrophobic nutrients. Bile acids have powerful toxic properties like membrane disruption and there are a wide range of mechanisms to restrict their accumulation in tissues and blood. 
The discovery of the farnesoid X receptor (FXR), which is located in the liver, has opened new insights. Bile acid activation of FXR represses the expression of CYP7A1 by raising the expression of small heterodimer partner (SHP, NR0B2), a non-DNA binding protein. The increased abundance of SHP causes it to associate with liver receptor homolog (LRH)-1, an obligate factor required for the transcription of CYP7A1. Furthermore, there is an "FXR/SHP-independent" mechanism that also represses CYP7A1 expression. This "FXR/SHP-independent" pathway involves the interaction of bile acids with liver macrophages, which finally induces the expression and secretion of cytokines. These inflammatory cytokines, which include tumor necrosis factor alpha and interleukin-1beta, act upon the liver parenchymal cells causing a rapid repression of the CYP7A1 gene. Regulation of activity Regulation of CYP7A1 occurs at several levels including synthesis. Bile acids, steroid hormones, inflammatory cytokines, insulin, and growth factors inhibit CYP7A1 transcription through the 5′-upstream region of the promoter. The average life of this enzyme is between two and three hours. Activity can be regulated by phosphorylation-dephosphorylation. CYP7A1 is upregulated by the nuclear receptor LXR (liver X receptor) when cholesterol (to be specific, oxysterol) levels are high. The effect of this upregulation is to increase the production of bile acids and reduce the level of cholesterol in hepatocytes. It is downregulated by sterol regulatory element-binding proteins (SREBP) when plasma cholesterol levels are low. Bile acids provide feedback inhibition of CYP7A1 by at least two different pathways, both involving the farnesoid X receptor, FXR. In the liver, bile acids bound to FXR induce small heterodimer partner, SHP, which binds to LRH-1 and so inhibits the transcription of the enzyme. In the intestine, bile acids/FXR stimulate production of FGF15/19 (depending on species), which then acts as a hormone in the liver via FGFR4. Enzymatic mechanism Specificity One feature of enzymes is their high specificity: an enzyme may be specific for a single substrate, for a single type of reaction, or for both. Cholesterol 7 alpha-hydroxylase is specific for the hydroxylation of cholesterol at position 7, catalyzing the reaction that converts cholesterol into 7-alpha-hydroxycholesterol. Clinical significance Deficiency of this enzyme will increase the possibility of cholesterol gallstones. Disruption of CYP7A1 from classic bile acid synthesis in mice leads to either increased postnatal death or a milder phenotype with elevated serum cholesterol. The latter is similar to the case in humans, where CYP7A1 mutations are associated with high plasma low-density lipoprotein and hepatic cholesterol content, as well as deficient bile acid excretion. There is also an association between plasma low-density lipoprotein cholesterol (LDL-C) levels and the risk of coronary artery disease (CAD). Glucose signaling also induces CYP7A1 gene transcription by epigenetic regulation of the histone acetylation status. Glucose induction of bile acid synthesis has important implications for the metabolic control of glucose, lipid, and energy homeostasis under normal and diabetic conditions.
CYP7A1-rs3808607 and apolipoprotein E (APOE) isoform are associated with the extent of reduction in circulating LDL cholesterol in response to plant sterol consumption and could serve as potential predictive genetic markers to identify individuals who would derive maximum LDL cholesterol lowering with plant sterol consumption. Genetic variations in CYP7A1 influence its expression and thus may affect the risk of gallstone disease and gallbladder cancer. One of the many lipid lowering effects of the fibrate drug class is mediated through the inhibition of transcription of this enzyme. This inhibition leads to more cholesterol in the bile, which is the body's only route of cholesterol excretion. This also increases the risk of cholesterol gallstone formation. Inhibition of CYP7A1 is thought to be involved in or responsible for the hepatotoxicity associated with ketoconazole. The levorotatory enantiomer of ketoconazole, levoketoconazole, shows 12-fold reduced potency in inhibition of this enzyme, and is under development for certain indications (e.g., Cushing's syndrome) as a replacement for ketoconazole with reduced toxicity and improved tolerability and safety. See also Steroidogenic enzyme References Further reading External links EC 1.14.13 Metabolism Enzymes of known structure
Cholesterol 7 alpha-hydroxylase
[ "Chemistry", "Biology" ]
1,843
[ "Biochemistry", "Metabolism", "Cellular processes" ]
8,045,367
https://en.wikipedia.org/wiki/BIOVIA
BIOVIA is a software company headquartered in the United States, with representation in Europe and Asia. It provides software for chemical, materials and bioscience research for the pharmaceutical, biotechnology, consumer packaged goods, aerospace, energy and chemical industries. Previously named Accelrys, it is a wholly owned subsidiary of Dassault Systèmes after an April 2014 acquisition and has been renamed BIOVIA. History Accelrys was formed in 2001 as a wholly owned subsidiary of Pharmacopeia, Inc. from the fusion of five companies: Molecular Simulations Inc., Synopsys Scientific Systems, Oxford Molecular, the Genetics Computer Group (GCG), and Synomics Ltd. MSI, itself a result of the combination of Biodesign, Cambridge Molecular Design, Polygen and, later, Biocad and Biosym Technologies. In late 2003, Pharmacopeia, Inc. separated its drug discovery and software development businesses. The drug discovery company retained the name Pharmacopeia and remained in Princeton, New Jersey, while the software company moved to San Diego, California. In 2004, Accelrys acquired SciTegic, producer of the Pipeline pilot software. Accelrys managed a nanotechnology consortium producing software tools for rational nanodesign from 2004 to 2010. In 2010, Symyx Technologies was merged with Accelrys. In May 2011, the company acquired Contur Software AB, an electronic lab notebook software firm. In January 2012, Accelrys acquired VelQuest, a maker of pharmaceutical and medical device-related software, for $35 million in cash. In May 2012, Accelrys purchased Hit Explorer Operating System (HEOS) - a SaaS system that provides groups with project information in the cloud and access to biological assay results, analytics, chemical registration and pharmacokinetics data - from Scynexis. In October 2012, Accelrys acquired Aegis Analytical Corp. for $30 million in cash, expanding Accelrys’ reach for customers in the move from the lab to the manufacturing floor. The company's Discoverant software aggregates and analyzes manufacturing, quality and development data to allow manufacturers for quality by design. In January 2013, Accelrys acquired Swiss biosciences systems integrator Vialis AG for $5 million in cash. In September 2013, Accelrys acquired Environmental Health & Safety (EH&S) compliance provider ChemSW. On January 30, 2014 Dassault Systèmes of France announced the acquisition of Accelrys in an all-cash tender offer for at $12.50 per share, representing a fully diluted equity value for Accelrys of approximately $750 million. After the acquisition, Accelrys was renamed BIOVIA. Products The Accelrys Enterprise Platform, a scientifically aware, service-oriented architecture (SOA) spanning data management and informatics, enterprise lab management, modeling and simulation, and workflow automation. Pipeline Pilot, a program that aggregates and provides immediate access to the volumes of disparate research data locked in silos, automates the scientific analysis of that data, and enables researchers to rapidly explore, visualize and report research results. ISIS/Draw, a chemical drawing tool. ISIS/Base, a personal chemical database counterpart. ISIS/Host, a chemical structure database that uses Oracle Accelrys Draw, a chemical drawing tool. Accelrys Direct, a chemical substance database that uses Oracle's data cartridge technology. The Available Chemicals Directory (ACD) a compilation of supplier catalogues that is searchable by substructure. 
The Accelrys Process Management and Compliance Suite, a "combination of software products for scientists working in early and mid-stage analytical, formulation and process/bioprocess development ... through to stability, material and release testing during late-stage quality control and commercial production." The Suite streamlines product development Symyx Notebook by Accelrys, an Electronic lab notebook. Materials Studio, a suite of modeling and simulation programs for material science. Discovery Studio, a suite of modeling and simulation programs for life sciences. Contur ELN Externalized Collaboration Suite Discoverant iLabber Experiment Knowledge Base Lab Execution System (LES) Commercial versions of otherwise academically licensed programs: CHARMM (Chemistry at Harvard Macromolecular Mechanics) is commercially available from Accelrys. In October 2013, Martin Karplus of Harvard University, Michael Levitt of Stanford University and Arieh Warshel of the University of Southern California were awarded the 2013 Nobel Prize in chemistry for their work in modeling and simulation including CHARMM. MODELLER References External links Sdcexec.com Fiercebiotechit.com Molecular modelling software Clinical data management Nanotechnology companies Software companies established in 2001 Software companies of the United Kingdom Software companies based in California Companies based in San Diego Defunct software companies of the United States Companies formerly listed on the Nasdaq 2014 mergers and acquisitions
BIOVIA
[ "Chemistry", "Materials_science" ]
1,006
[ "Molecular modelling software", "Computational chemistry software", "Nanotechnology companies", "Molecular modelling", "Nanotechnology" ]
6,109,333
https://en.wikipedia.org/wiki/Andreev%20reflection
Andreev reflection, named after the Russian physicist Alexander F. Andreev, is a type of particle scattering which occurs at interfaces between a superconductor (S) and a normal state material (N). It is a charge-transfer process by which normal current in N is converted to supercurrent in S. Each Andreev reflection transfers a charge 2e across the interface, avoiding the forbidden single-particle transmission within the superconducting energy gap. This effect is generally called Andreev reflection but it is also be referred to as Andreev–Saint-James reflection, as it was predicted independently by Saint-James and de Gennes and by Andreev in the early sixties. Overview The process involves an electron incident on the interface from the normal state material at energies less than the superconducting energy gap. The incident electron forms a Cooper pair in the superconductor with the retroreflection of a hole of opposite spin and velocity but equal momentum to the incident electron, as seen in the figure. The barrier transparency is assumed to be high, with no oxide or tunnel layer which reduces instances of normal electron-electron or hole-hole scattering at the interface. Since the pair consists of an up and down spin electron, a second electron of opposite spin to the incident electron from the normal state forms the pair in the superconductor, and hence the retroreflected hole. Through time-reversal symmetry, the process with an incident electron will also work with an incident hole (and retroreflected electron). The process is highly spin-dependent – if only one spin band is occupied by the conduction electrons in the normal-state material (i.e. it is fully spin-polarized), Andreev reflection will be inhibited due to inability to form a pair in the superconductor and impossibility of single-particle transmission. In a ferromagnet or material where spin-polarization exists or may be induced by a magnetic field, the strength of the Andreev reflection (and hence conductance of the junction) is a function of the spin-polarization in the normal state. The spin-dependence of Andreev reflection gives rise to the Point contact Andreev reflection technique, whereby a narrow superconducting tip (often niobium, antimony or lead) is placed into contact with a normal material at temperatures below the critical temperature of the tip. By applying a voltage to the tip, and measuring differential conductance between it and the sample, the spin polarization of the normal metal at that point (and magnetic field) may be determined. This is of use in such tasks as measurement of spin-polarized currents or characterizing spin polarization of material layers or bulk samples, and the effects of magnetic fields on such properties. In an Andreev process, the phase difference between the electron and hole is −π/2 plus the phase of the superconducting order parameter. Crossed Andreev reflection Crossed Andreev reflection, also known as non-local Andreev reflection, occurs when two spatially separated normal state material electrodes form two separate junctions with a superconductor, with the junction separation of the order of the BCS superconducting coherence length of the material in question. 
In such a device, retroreflection of the hole from an Andreev reflection process, resulting from an incident electron at energies less than the superconducting gap at one lead, occurs in the second spatially separated normal lead with the same charge transfer as in a normal Andreev reflection process to a Cooper pair in the superconductor. For crossed Andreev reflection to occur, electrons of opposite spin must exist at each normal electrode (so as to form the pair in the superconductor). If the normal material is a ferromagnet this may be guaranteed by creating opposite spin polarization via the application of a magnetic field to normal electrodes of differing coercivity. Crossed Andreev reflection occurs in competition with elastic cotunneling, the quantum mechanical tunneling of electrons between the normal leads via an intermediate state in the superconductor. This process conserves electron spin. As such, a detectable potential at one electrode on the application of current to the other may be masked by the competing elastic cotunneling process, making clear detection difficult. In addition, normal Andreev reflection may occur at either interface, in conjunction with other normal electron scattering processes from the normal/superconductor interface. The process is of interest in the formation of solid-state quantum entanglement, via the formation of a spatially separated entangled electron-hole (Andreev) pair, with applications in spintronics and quantum computing. References Further reading Books Papers Superconductivity Physical phenomena Scattering Mesoscopic physics
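As a rough illustration of the point-contact Andreev reflection (PCAR) analysis described in the overview above, the sketch below uses the idealized ballistic, fully transparent limit, in which the normalized zero-bias conductance equals 2(1 − P). Real measurements are normally fitted over the full bias range with a spin-polarized BTK model instead, and every number below is invented for illustration only.

```python
# Minimal sketch: estimating transport spin polarization P from a PCAR
# measurement, assuming the idealized ballistic, barrier-free limit where
# G(V=0) / G_normal = 2 * (1 - P).  Real analyses fit the full bias-dependent
# spectrum with a modified BTK model; values here are illustrative only.

def spin_polarization(zero_bias_conductance, normal_state_conductance):
    """Return P estimated from the clean-limit relation G(0)/Gn = 2(1 - P)."""
    g_normalized = zero_bias_conductance / normal_state_conductance
    if not 0.0 <= g_normalized <= 2.0:
        raise ValueError("normalized conductance must lie between 0 and 2 "
                         "in the clean limit")
    return 1.0 - g_normalized / 2.0

# Hypothetical readings: an unpolarized metal shows full conductance doubling
# below the gap, while a fully spin-polarized metal shows none.
print(spin_polarization(2.0, 1.0))   # 0.0  -> no spin polarization
print(spin_polarization(1.1, 1.0))   # 0.45 -> partially polarized
print(spin_polarization(0.0, 1.0))   # 1.0  -> fully spin polarized
```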
Andreev reflection
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
985
[ "Physical phenomena", "Physical quantities", "Superconductivity", "Quantum mechanics", "Materials science", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics", "Mesoscopic physics", "Electrical resistance and conductance" ]
6,109,368
https://en.wikipedia.org/wiki/Thermal%20treatment
Thermal treatment is any waste treatment technology that involves high temperatures in the processing of the waste feedstock. Commonly this involves the combustion of waste materials. Systems that are generally considered to be thermal treatment include: Cement kiln Gasification Incineration Mechanical heat treatment Pyrolysis Thermal depolymerization Waste autoclaves See also Anaerobic digestion List of solid waste treatment technologies Mechanical biological treatment Waste-to-energy Pyrolysis References Waste management Waste treatment technology
Thermal treatment
[ "Chemistry", "Engineering" ]
97
[ "Water treatment", "Waste treatment technology", "Environmental engineering" ]
6,110,795
https://en.wikipedia.org/wiki/Passive%20cooling
Passive cooling is a building design approach that focuses on heat gain control and heat dissipation in a building in order to improve the indoor thermal comfort with low or no energy consumption. This approach works either by preventing heat from entering the interior (heat gain prevention) or by removing heat from the building (natural cooling). Natural cooling utilizes on-site energy, available from the natural environment, combined with the architectural design of building components (e.g. building envelope), rather than mechanical systems to dissipate heat. Therefore, natural cooling depends not only on the architectural design of the building but on how the site's natural resources are used as heat sinks (i.e. everything that absorbs or dissipates heat). Examples of on-site heat sinks are the upper atmosphere (night sky), the outdoor air (wind), and the earth/soil. Passive cooling is an important tool for design of buildings for climate change adaptationreducing dependency on energy-intensive air conditioning in warming environments. Overview Passive cooling covers all natural processes and techniques of heat dissipation and modulation without the use of energy. Some authors consider that minor and simple mechanical systems (e.g. pumps and economizers) can be integrated in passive cooling techniques, as long they are used to enhance the effectiveness of the natural cooling process. Such applications are also called 'hybrid cooling systems'. The techniques for passive cooling can be grouped in two main categories: Preventive techniques that aim to provide protection and/or prevention of external and internal heat gains. Modulation and heat dissipation techniques that allow the building to store and dissipate heat gain through the transfer of heat from heat sinks to the climate. This technique can be the result of thermal mass or natural cooling. Preventive techniques Protection from or prevention of heat gains encompasses all the design techniques that minimizes the impact of solar heat gains through the building's envelope and of internal heat gains that is generated inside the building due occupancy and equipment. It includes the following design techniques: Microclimate and site design - By taking into account the local climate and the site context, specific cooling strategies can be selected to apply which are the most appropriate for preventing overheating through the envelope of the building. The microclimate can play a huge role in determining the most favorable building location by analyzing the combined availability of sun and wind. The bioclimatic chart, the solar diagram and the wind rose are relevant analysis tools in the application of this technique. Solar control - A properly designed shading system can effectively contribute to minimizing the solar heat gains. Shading both transparent and opaque surfaces of the building envelope will minimize the amount of solar radiation that induces overheating in both indoor spaces and building's structure. By shading the building structure, the heat gain captured through the windows and envelope will be reduced. Building form and layout - Building orientation and an optimized distribution of interior spaces can prevent overheating. Rooms can be zoned within the buildings in order to reject sources of internal heat gain and/or allocating heat gains where they can be useful, considering the different activities of the building. 
For example, creating a flat, horizontal plan will increase the effectiveness of cross-ventilation across the plan. Locating the zones vertically can take advantage of temperature stratification. Typically, building zones in the upper levels are warmer than the lower zones due to stratification. Vertical zoning of spaces and activities uses this temperature stratification to accommodate zone uses according to their temperature requirements. Form factor (i.e. the ratio between volume and surface) also plays a major role in the building's energy and thermal profile. This ratio can be used to shape the building form to the specific local climate. For example, more compact forms tend to preserve more heat than less compact forms because the ratio of the internal loads to envelope area is significant. Thermal insulation - Insulation in the building's envelope will decrease the amount of heat transferred by radiation through the facades. This principle applies both to the opaque (walls and roof) and transparent surfaces (windows) of the envelope. Since roofs could be a larger contributor to the interior heat load, especially in lighter constructions (e.g. building and workshops with roof made out of metal structures), providing thermal insulation can effectively decrease heat transfer from the roof. Behavioral and occupancy patterns - Some building management policies such as limiting the number of people in a given area of the building can also contribute effectively to the minimization of heat gains inside a building. Building occupants can also contribute to indoor overheating prevention by: shutting off the lights and equipment of unoccupied spaces, operating shading when necessary to reduce solar heat gains through windows, or dress lighter in order to adapt better to the indoor environment by increasing their thermal comfort tolerance. Internal gain control - More energy-efficient lighting and electronic equipment tend to release less energy thus contributing to less internal heat loads inside the space. Modulation and heat dissipation techniques The modulation and heat dissipation techniques rely on natural heat sinks to store and remove the internal heat gains. Examples of natural sinks are night sky, earth soil, and building mass. Therefore, passive cooling techniques that use heat sinks can act to either modulate heat gain with thermal mass or dissipate heat through natural cooling strategies. Thermal mass - Heat gain modulation of an indoor space can be achieved by the proper use of the building's thermal mass as a heat sink. The thermal mass will absorb and store heat during daytime hours and return it to the space at a later time. Thermal mass can be coupled with night ventilation natural cooling strategy if the stored heat that will be delivered to the space during the evening/night is not desirable. Natural cooling - Natural cooling refers to the use of ventilation or natural heat sinks for heat dissipation from indoor spaces. Natural cooling can be separated into five different categories: ventilation, night flushing, radiative cooling, evaporative cooling, and earth coupling. Ventilation Ventilation as a natural cooling strategy uses the physical properties of air to remove heat or provide cooling to occupants. In select cases, ventilation can be used to cool the building structure, which subsequently may serve as a heat sink. Cross ventilation - The strategy of cross ventilation relies on wind to pass through the building for the purpose of cooling the occupants. 
Cross ventilation requires openings on two sides of the space, called the inlet and outlet. The sizing and placement of the ventilation inlets and outlets will determine the direction and velocity of cross ventilation through the building. Generally, an equal (or greater) area of outlet openings must also be provided to provide adequate cross ventilation. Stack ventilation - Cross ventilation is an effective cooling strategy, however, wind is an unreliable resource. Stack ventilation is an alternative design strategy that relies on the buoyancy of warm air to rise and exit through openings located at ceiling height. Cooler outside air replaces the rising warm air through carefully designed inlets placed near the floor. These two strategies are part of the ventilative cooling strategies. One specific application of natural ventilation is night flushing. Night flushing Night flushing (also known as night ventilation, night cooling, night purging, or nocturnal convective cooling) is a passive or semi-passive cooling strategy that requires increased air movement at night to cool the structural elements of a building. A distinction may be made between free cooling to chill water and night flushing to cool down building thermal mass. To execute night flushing, one typically keeps the building envelope closed during the day. The building structure's thermal mass acts as a sink through the day and absorbs heat gains from occupants, equipment, solar radiation, and conduction through walls, roofs, and ceilings. At night, when the outside air is cooler, the envelope is opened, allowing cooler air to pass through the building so the stored heat can be dissipated by convection. This process reduces the temperature of the indoor air and of the building's thermal mass, allowing convective, conductive, and radiant cooling to take place during the day when the building is occupied. Night flushing is most effective in climates with a large diurnal swing, i.e. a large difference between the daily maximum and minimum outdoor temperature. For optimal performance, the nighttime outdoor air temperature should fall well below the daytime comfort zone limit of , and should have low absolute or specific humidity. In hot, humid climates the dirunial temperature swing is typically small, and the nighttime humidity stays high. Night flushing has limited effectiveness and can introduce high humidity that causes problems and can lead to high energy costs if it is removed by active systems during the day. Thus, night flushing's effectiveness is limited to sufficiently dry climates. For the night flushing strategy to be effective at reducing indoor temperature and energy usage, the thermal mass must be sized sufficiently and distributed over a wide enough surface area to absorb the space's daily heat gains. Also, the total air change rate must be high enough to remove the internal heat gains from the space at night. There are three ways night flushing can be achieved in a building: Natural night flushing by opening windows at night, letting wind-driven or buoyancy-driven airflow cool the space, and then closing windows during the day. Mechanical night flushing by forcing air mechanically through ventilation ducts at night at a high airflow rate and supplying air to the space during the day at a code-required minimum airflow rate. Mixed-mode night flushing through a combination of natural ventilation and mechanical ventilation, also known as mixed-mode ventilation, by using fans to assist the natural nighttime airflow. 
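Whichever of the three approaches just listed is used, a first-order check of whether night flushing can remove the day's heat gains is a simple sensible-heat balance on the ventilation air. The sketch below uses standard air properties; the room volume, air-change rate, temperatures and duration are illustrative assumptions only, not design values.

```python
# First-order sizing check for night flushing: sensible heat removed by
# ventilation air, Q = rho * V_dot * c_p * (T_indoor - T_outdoor).
# All building-specific numbers below are illustrative assumptions.

RHO_AIR = 1.2      # kg/m^3, density of air near room temperature
CP_AIR = 1005.0    # J/(kg.K), specific heat capacity of air

def night_flush_heat_removal(volume_m3, air_changes_per_hour,
                             t_indoor_c, t_outdoor_c, hours):
    """Return the sensible heat removed overnight, in kWh."""
    flow_m3_per_s = volume_m3 * air_changes_per_hour / 3600.0
    power_w = RHO_AIR * flow_m3_per_s * CP_AIR * (t_indoor_c - t_outdoor_c)
    return power_w * hours * 3600.0 / 3.6e6  # convert J to kWh

# Hypothetical office zone: 500 m^3, 10 ACH, 26 C inside, 18 C outside, 8 h.
removed_kwh = night_flush_heat_removal(500.0, 10.0, 26.0, 18.0, 8.0)
print(f"Heat removed overnight: {removed_kwh:.1f} kWh")
# If the zone's daily internal and solar gains exceed this figure, a higher
# air-change rate or more exposed thermal mass would be needed.
```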
These three strategies are part of the ventilative cooling strategies. There are numerous benefits to using night flushing as a cooling strategy for buildings, including improved comfort and a shift in peak energy load. Energy is most expensive during the day. By implementing night flushing, the usage of mechanical ventilation is reduced during the day, leading to energy and money savings. There are also a number of limitations to using night flushing, such as usability, security, reduced indoor air quality, humidity, and poor room acoustics. For natural night flushing, the process of manually opening and closing windows every day can be tiresome, especially in the presence of insect screens. This problem can be eased with automated windows or ventilation louvers, such as in the Manitoba Hydro Place. Natural night flushing also requires windows to be open at night when the building is most likely unoccupied, which can raise security issues. If outdoor air is polluted, night flushing can expose occupants to harmful conditions inside the building. In loud city locations, the opening of windows can create poor acoustical conditions inside the building. In humid climates, night flushing can introduce humid air, typically above 90% relative humidity during the coolest part of the night. This moisture can accumulate in the building overnight leading to increased humidity during the day leading to comfort problems and even mold growth. Radiative cooling Evaporative cooling This design relies on the evaporative process of water to cool the incoming air while simultaneously increasing the relative humidity. A saturated filter is placed at the supply inlet so the natural process of evaporation can cool the supply air. Apart from the energy to drive the fans, water is the only other resource required to provide conditioning to indoor spaces. The effectiveness of evaporative cooling is largely dependent on the humidity of the outside air; dryer air produces more cooling. A study of field performance results in Kuwait revealed that power requirements for an evaporative cooler are approximately 75% less than the power requirements for a conventional packaged unit air-conditioner. As for interior comfort, a study found that evaporative cooling reduced inside air temperature by 9.6 °C compared to outdoor temperature. An innovative passive system uses evaporating water to cool the roof so that a major portion of solar heat does not come inside. Ancient Egypt used evaporative cooling; for instance, reeds were hung in windows and were moistened with trickling water. Evaporation from the soil and transpiration from plants also provides cooling; the water released from the plant evaporates. Gardens and potted plants are used to drive cooling, as in the of a , the of a , and so on. Earth coupling Earth coupling uses the moderate and consistent temperature of the soil to act as a heat sink to cool a building through conduction. This passive cooling strategy is most effective when earth temperatures are cooler than ambient air temperature, such as in hot climates. Direct coupling or earth sheltering occurs when a building uses earth as a buffer for the walls. The earth acts as a heat sink and can effectively mitigate temperature extremes. Earth sheltering improves the performance of building envelopes by reducing heat losses and also reduces heat gains by limiting infiltration. Indirect coupling means that a building is coupled with the earth by means of earth ducts. 
An earth duct is a buried tube that acts as avenue for supply air to travel through before entering the building. The supply air is cooled by conductive heat transfer between the tubes and surrounding soil. Therefore, earth ducts will not perform well as a source of cooling unless the soil temperature is lower than the desired room air temperature. Earth ducts typically require long tubes to cool the supply air to an appropriate temperature before entering the building. A fan is required to draw the air from the earth duct into the building. Some of the factors that affect the performance of an earth duct are: duct length, number of bends, thickness of duct wall, depth of duct, diameter of the duct, and air velocity. In conventional buildings There are "smart-roof coatings" and "smart windows" for cooling that switches to warming during cold temperatures. The whitest paint formulation can reflect up to 98.1% of sunlight. See also Ab anbar Cistern Cross ventilation Passive ventilation Qanat Reflective surfaces (climate engineering) Shinsulator Stepwell Ventilative cooling Windcatcher Yakhchāl References Solar design Energy conservation Environmental design Heating, ventilation, and air conditioning Low-energy building Sustainable architecture Heat transfer Energy and the environment Climate change adaptation
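A minimal model of the indirect earth coupling described in the Earth coupling section treats the buried duct as a heat exchanger whose wall is held at the undisturbed soil temperature, so that the supply air approaches the soil temperature exponentially along the tube. This is a sketch under strong simplifying assumptions; the heat-transfer coefficient, dimensions and temperatures below are illustrative, not design values.

```python
# Minimal earth-duct (earth-air heat exchanger) sketch.  Assumes the tube
# wall stays at the undisturbed soil temperature, so the air temperature
# approaches it exponentially: T_out = T_soil + (T_in - T_soil)*exp(-hA/(m_dot*cp)).
# All numeric inputs are illustrative assumptions.
import math

RHO_AIR = 1.2    # kg/m^3
CP_AIR = 1005.0  # J/(kg.K)

def earth_duct_outlet_temp(t_in_c, t_soil_c, length_m, diameter_m,
                           air_velocity_m_s, h_w_m2k=10.0):
    """Outlet air temperature for a single straight buried duct."""
    wall_area = math.pi * diameter_m * length_m          # duct wall area, m^2
    m_dot = RHO_AIR * air_velocity_m_s * math.pi * (diameter_m / 2.0) ** 2
    ntu = h_w_m2k * wall_area / (m_dot * CP_AIR)         # number of transfer units
    return t_soil_c + (t_in_c - t_soil_c) * math.exp(-ntu)

# Hypothetical duct: 30 m long, 0.2 m diameter, 2 m/s air, 32 C air, 16 C soil.
print(f"Outlet air temperature: {earth_duct_outlet_temp(32.0, 16.0, 30.0, 0.2, 2.0):.1f} C")
# Longer or narrower ducts and lower air velocities raise the NTU and bring
# the outlet air closer to the soil temperature, echoing the factors listed above.
```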
Passive cooling
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
2,871
[ "Transport phenomena", "Environmental design", "Physical phenomena", "Heat transfer", "Sustainable architecture", "Solar design", "Energy engineering", "Thermodynamics", "Design", "Environmental social science", "Architecture" ]
3,461,780
https://en.wikipedia.org/wiki/Carbenium%20ion
A carbenium ion is a positive ion with the structure RR′R″C+, that is, a chemical species with carbon atom having three covalent bonds, and it bears a +1 formal charge. Carbenium ions are a major subset of carbocations, which is a general term for diamagnetic carbon-based cations. In parallel with carbenium ions is another subset of carbocations, the carbonium ions with the formula R5+. In carbenium ions charge is localized. They are isoelectronic with monoboranes such as B(CH3)3. Nomenclature Reactivity Carbenium ions are generally highly reactive due to having an incomplete octet of electrons; however, certain carbenium ions, such as the tropylium ion, are relatively stable due to the positive charge being delocalised between the carbon atoms. Rearrangements Carbenium ions sometimes rearrange readily. For example, when pentan-3-ol is heated with aqueous HCl, the initially formed 3-pentyl carbocation rearranges to a mixture of the 3-pentyl and 2-pentyl. These cations react with chloride ion to produce 3-chloropentane and 2-chloropentane in a ratio of approximately 1:2. Migration of an alkyl group to form a new carbocationic center is also observed. This often occurs with rate constants in excess of 1010 s−1 at ambient temperature and still takes place rapidly (compared to the NMR timescale) at temperatures as low as −120 °C (see Wagner-Meerwein shift). In especially favorable cases like the 2-norbornyl cation, hydrogen shifts may still take place at rates fast enough to interfere with X-ray crystallography at . Typically, carbocations will rearrange to give a tertiary isomer. For instance, all isomers of rapidly rearrange to give the 1-methyl-1-cyclopentyl cation. This fact often complicates synthetic pathways. For example, when 3-pentanol is heated with aqueous HCl, the initially formed 3-pentyl carbocation rearranges to a statistical mixture of the 3-pentyl and 2-pentyl. These cations react with chloride ion to produce about one third 3-chloropentane and two thirds 2-chloropentane. The Friedel–Crafts alkylation suffers from this limitation; for this reason, the acylation (followed by Wolff–Kishner or Clemmensen reduction to give the alkylated product) is more frequently applied. As electrophiles Carbocations are susceptible to attack by nucleophiles, like water, alcohols, carboxylates, azide, and halide ions, to form the addition product. Strongly basic nucleophiles, especially hindered ones, favor elimination over addition. Because even weak nucleophiles will react with carbocations, most can only be directly observed or isolated in non-nucleophilic media like superacids. Types of carbenium ions Stability The stability order of carbocations, from most stable to least stable as reflected by hydride ion affinity (HIA) values, are as follows (HIA values in kcal/mol in parentheses): Since carbenium ions can be highly reactive, a major consideration is their stability. The stability of carbenium ions correlates with the electron-donating properties of the substituents. Trialkylcarbenium ions, such as , are isolable as salts, but cannot. An analogous situation applies to triarylcarbenium ions: salts of triphenylcarbenium are readily isolable (see trityl), and those with amine substituents so robust that they are used as dyes, e.g. crystal violet. Carbenium ions can also be stabilized by conjugation to double bonds giving allyl cations, which enjoy some resonance stabilization. This situation is illustrated by the isolation of protonated benzene. 
Lone-pair bearing heteroatoms also stabilize carbenium ions. Alkylium ions The stability of alkyl-substituted carbocations follows the order tertiary > secondary > primary > methyl. This trend can be inferred from the hydride ion affinity values (231, 246, 273, and 312 kcal/mol for the tert-butyl, isopropyl, ethyl, and methyl cations, respectively). The effect of alkyl substitution is a strong one: tertiary cations are stable and many are directly observable in superacid media. The stabilization by alkyl groups is explained by hyperconjugation. The donation of electron density from a β C-H or C-C bond into the unoccupied p orbital of the carbocation (a σCH/CC → p interaction) allows the positive charge to be delocalized. Secondary cations are usually transient. Only the isopropyl, s-butyl, and cyclopentyl cations have been observed in solution. Primary carbocations have not been directly observed in the solution phase, even as transient intermediates (the ethyl cation has been proposed for reactions in 99.9% sulfuric acid and in ), and the methyl cation has only been unambiguously identified in the gas phase. In most, if not all cases, the ground state of alleged primary carbenium ions consists of bridged structures in which the positive charge is shared by two or more carbon atoms; such species are better described as side-protonated alkenes, edge-protonated cyclopropanes, or corner-protonated cyclopropanes rather than true primary cations. The simple ethyl cation has been demonstrated experimentally and computationally to be bridged and can be thought of as a symmetrically protonated ethylene molecule. The same is true for higher homologues like the 1-propyl and 1-butyl cations. Neopentyl derivatives are thought to ionize with concomitant migration of a methyl group (anchimeric assistance); thus, in most if not all cases, a discrete neopentyl cation is not believed to be involved. Carbenium ions can be prepared directly from alkanes by removing a hydride anion with a strong acid. For example, magic acid, a mixture of antimony pentafluoride and fluorosulfuric acid, turns isobutane into the trimethylcarbenium cation. Extra stabilizing effects A carbocation may be stabilized by resonance with a carbon–carbon double bond or by the lone pair of a heteroatom adjacent to the ionized carbon. The allyl cation and benzyl cation are more stable than most other carbenium ions due to donation of electron density from π systems to the cationic center. The doubly- and triply-benzylic carbocations, diphenylcarbenium and triphenylcarbenium (trityl) cation, are particularly stable. For the same reasons, the partial p character of strained C–C bonds in cyclopropyl groups also allows for donation of electron density and stabilizes the cyclopropylmethyl (cyclopropylcarbinyl) cation. Oxocarbenium and iminium ions have important secondary canonical forms (resonance structures) in which carbon bears a positive charge. As such, they are carbocations according to the IUPAC definition, although some chemists do not regard them as "true" carbocations, as their most important resonance contributors carry the formal positive charge on an oxygen or nitrogen atom, respectively.
It can coordinate as a ligand to metal atoms. The structure is a composite of seven resonance contributors in which each carbon carries part of the positive charge. In 1891 G. Merling obtained a water-soluble salt from a reaction of cycloheptatriene and bromine. The structure was elucidated by Eggers Doering and Knox in 1954. On the other hand, the antiaromatic cyclopentadienyl cation is destabilized by some 40 kcal/mol. Another aromatic carbenium ion is the cyclopropenyl or cyclopropenium ion. Although less stable than the tropylium cation, this carbenium ion can also form salts at room temperature. Solutions of such salts were found to exhibit conventional spectroscopic and chemical properties. The cyclopropenium cation, although somewhat destabilized by angle strain, is still clearly stabilized by aromaticity when compared to its open-chain analog, the allyl cation. These varying cation stabilities, depending on the number of π electrons in the ring system, can furthermore be crucial factors in reaction kinetics. The formation of an aromatic carbocation is much faster than the formation of an anti-aromatic or open-chain carbocation. Arenium ions An arenium ion is a cyclohexadienyl cation that appears as a reactive intermediate in electrophilic aromatic substitution. For historic reasons this complex is also called a Wheland intermediate, or a σ-complex. Two hydrogen atoms bonded to one carbon lie in a plane perpendicular to the benzene ring. The arenium ion is no longer an aromatic species; however, it is relatively stable due to delocalization: the positive charge is delocalized over 5 carbon atoms. Also contributing to the stability of arenium ions is the energy gain resulting from the strong C–E bond (E = electrophile). The smallest arenium ion is protonated benzene. The benzenium ion can be isolated as a stable compound when benzene is protonated by the carborane superacid, H(CB11H(CH3)5Br6). The benzenium salt is crystalline with thermal stability up to 150 °C. Bond lengths deduced from X-ray crystallography are consistent with a cyclohexadienyl cation structure. Acylium ions An acylium ion is a cation with the formula RCO+. The structure is described as R−C≡O+ or R−C+=O. It is an acyl carbocation, but the actual structure has the oxygen and carbon linked by a triple bond. Such species are common reactive intermediates, for example in Friedel–Crafts acylations and also in many other organic reactions such as the Hayashi rearrangement. Salts containing acylium ions can be generated by removal of the halide from acyl halides: RCOCl + SbCl5 → [RCO]+[SbCl6]−. The C–O distance in these cations is near 1.1 ångströms, even shorter than that in carbon monoxide. Acylium cations are characteristic fragments observed in EI-mass spectra of ketones. Vinyl and alkynyl carbenium ions Based on hydride ion affinity, the parent vinyl cation is less stable than even a primary sp2-hybridized carbocation, while an α alkyl-substituted vinyl cation has a stability that is comparable to the latter. Hence, vinyl cations are relatively uncommon intermediates. They can be generated by the ionization of a vinyl electrophile, provided the leaving group is sufficiently good (e.g., , IPh, or ). They have been implicated as intermediates in some vinyl substitution reactions (designated as SN1(vinyl)) and as intermediates in the electrophilic addition reactions of arylalkynes.
With the exception of the parent vinyl cation, which is believed to be a bridged species, and geometrically constrained cyclic vinyl cations, most vinyl cations take on sp hybridization and are linear. Aryl cations are less stable than vinyl cations due to the ring-enforced distortion to a nonlinear geometry and approximately sp2-character of the unoccupied orbital. Only in aryldiazonium salts is a good enough leaving group for the chemical generation of aryl cations. Alkynyl cations are extremely unstable, much less stable than even (hydride ion affinity 386 kcal/mol versus 312 kcal/mol for ) and cannot be generated by purely chemical means. They can, however, be generated radiochemically via the beta decay of tritium: Selected applications Carbenium ions are so integrated into organic chemistry that a full inventory of their commercially useful reactions would be long. For example, catalytic cracking, a major step in petroleum refining involves carbenium ion intermediates. The alkylation of benzene with alpha-olefins to give linear alkylbenzene (LABs) illustrates the behaviour of secondary carbenium ions. The alkylation is initiated by strong acids. LABs are a key precursor to detergents. Derivatives of the triphenylcarbenium are the triarylmethane dyes. Acylium ions are intermediates in Friedel-Crafts acylations and Koch reactions. See also Borenium ion Nitrenium ion References Cations Reactive intermediates Carbocations
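The electron counts quoted in the aromatic carbenium section above (cyclopropenium with 2 π electrons, the cyclopentadienyl cation with 4, tropylium with 6) can be checked against Hückel's 4n + 2 rule with a few lines of code. The function below is only an electron-counting illustration; it says nothing about the other requirements for aromaticity, such as planarity and full conjugation.

```python
# Huckel electron-count check for the ring cations discussed above.
# A count of 4n + 2 pi electrons (n >= 0) satisfies the rule; planarity and
# full conjugation are assumed here, not checked.

def satisfies_huckel(pi_electrons: int) -> bool:
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

cations = {
    "cyclopropenium (C3H3+)": 2,    # aromatic, n = 0
    "cyclopentadienyl cation": 4,   # 4n electrons: antiaromatic
    "tropylium (C7H7+)": 6,         # aromatic, n = 1
}
for name, count in cations.items():
    verdict = "4n+2 satisfied" if satisfies_huckel(count) else "not 4n+2"
    print(f"{name}: {count} pi electrons -> {verdict}")
```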
Carbenium ion
[ "Physics", "Chemistry" ]
2,931
[ "Matter", "Organic compounds", "Physical organic chemistry", "Cations", "Reactive intermediates", "Ions" ]
3,462,904
https://en.wikipedia.org/wiki/Electrical%20load
An electrical load is an electrical component or portion of a circuit that consumes (active) electric power, such as electrical appliances and lights inside the home. The term may also refer to the power consumed by a circuit. This is opposed to a power supply source, such as a battery or generator, which provides power. The term is used more broadly in electronics for a device connected to a signal source, whether or not it consumes power. If an electric circuit has an output port, a pair of terminals that produces an electrical signal, the circuit connected to this port (or its input impedance) is the load. For example, if a CD player is connected to an amplifier, the CD player is the source and the amplifier is the load. To continue the concept, if loudspeakers are connected to that amplifier, then that amplifier becomes a new, second source (to the loudspeakers), and the loudspeakers will be the load for the amplifier (but not for the CD player; these are two separate sources and two separate loads, chained together in series). Load affects the performance of circuits with respect to output voltages or currents, such as in sensors, voltage sources, and amplifiers. Mains power outlets provide an easy example: they supply power at constant voltage, with electrical appliances connected to the power circuit collectively making up the load. When a high-power appliance switches on, it dramatically reduces the load impedance. The voltages will drop if the load impedance is not much higher than the power supply impedance. Therefore, switching on a heating appliance in a domestic environment may cause incandescent lights to dim noticeably. A more technical approach When discussing the effect of load on a circuit, it is helpful to disregard the circuit's actual design and consider only the Thévenin equivalent. (The Norton equivalent could be used instead, with the same results.) The Thévenin equivalent of a circuit looks like this: With no load (open-circuited terminals), all of falls across the output; the output voltage is . However, the circuit will behave differently if a load is added. Therefore, we would like to ignore the details of the load circuit, as we did for the power supply, and represent it as simply as possible. For example, if we use an input resistance to represent the load, the complete circuit looks like this: Whereas the voltage source by itself was an open circuit, adding the load makes a closed circuit and allows charge to flow. This current places a voltage drop across , so the voltage at the output terminal is no longer . The output voltage can be determined by the voltage division rule: If the source resistance is not negligibly small compared to the load impedance, the output voltage will fall. This illustration uses simple resistances, but a similar discussion can be applied in alternating current circuits using resistive, capacitive, and inductive elements. See also Dummy load References Electrical circuits
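The voltage division rule described above can be made concrete with a small numerical sketch. In the standard notation used here (the article's own symbols were lost in extraction), a Thévenin source voltage Vs with internal resistance Rs driving a load resistance RL produces an output voltage Vout = Vs · RL / (Rs + RL); the numbers below are illustrative only.

```python
def loaded_output_voltage(v_source, r_source, r_load):
    """Output voltage of a Thevenin source driving a resistive load
    (voltage division rule): Vout = Vs * RL / (Rs + RL)."""
    return v_source * r_load / (r_source + r_load)

# Illustrative values: a 10 V source with 1 ohm internal resistance.
print(loaded_output_voltage(10.0, 1.0, 1000.0))  # ~9.99 V: light load, little drop
print(loaded_output_voltage(10.0, 1.0, 2.0))     # ~6.67 V: heavy load, large drop
```

The second call mirrors the domestic example above: a heavy load whose resistance is comparable to the supply impedance pulls the output voltage down noticeably.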
Electrical load
[ "Engineering" ]
612
[ "Electrical engineering", "Electronic engineering", "Electrical circuits" ]
3,463,982
https://en.wikipedia.org/wiki/EarthScope
The EarthScope project (2003-2018) was a National Science Foundation (NSF)-funded Earth science program using geological and geophysical techniques to explore the structure and evolution of the North American continent and to understand the processes controlling earthquakes and volcanoes. The project had three components: USArray, the Plate Boundary Observatory, and the San Andreas Fault Observatory at Depth (some of which continued beyond the end of the project). Organizations associated with the project included UNAVCO, the Incorporated Research Institutions for Seismology (IRIS), Stanford University, the United States Geological Survey (USGS), and the National Aeronautics and Space Administration (NASA). Several international organizations also contributed to the initiative. EarthScope data are publicly accessible. Observatories There were three EarthScope project observatories: The San Andreas Fault Observatory at Depth (SAFOD) The Plate Boundary Observatory (PBO) The Seismic and Magnetotelluric Observatory (USArray) These observatories consisted of boreholes into an active fault zone, global positioning system (GPS) receivers, tiltmeters, long-baseline laser strainmeters, borehole strainmeters, permanent and portable seismometers, and magnetotelluric stations. The various EarthScope components provided integrated and highly accessible data on geochronology and thermochronology, petrology and geochemistry, structure and tectonics, surficial processes and geomorphology, geodynamic modeling, rock physics, and hydrogeology. Seismic and Magnetotelluric Observatory (USArray) USArray, managed by IRIS, was a 15-year program to place a dense network of permanent and portable seismographs across the continental United States. These seismographs recorded the seismic waves released by earthquakes that occur around the world. Seismic waves are indicators of energy release within the Earth. By analyzing the records of earthquakes obtained from this dense grid of seismometers, scientists could learn about Earth structure and dynamics and the physical processes controlling earthquakes and volcanoes. The goal of USArray was primarily to gain a better understanding of the structure and evolution of the continental crust, lithosphere, and mantle underneath North America. The USArray was composed of four facilities: a Transportable Array, a Flexible Array, a Reference Network, and a Magnetotelluric Facility. Transportable Array The Transportable Array was composed of 400 seismometers that were deployed in a rolling grid across the United States over a period of 10 years. The stations were placed 70 km apart, and could map the upper 70 km of the Earth. After approximately two years, stations were moved east to the next site on the grid – unless adopted by an organization and made a permanent installation. Once the sweep across the United States was completed, over 2000 locations had been occupied. The Array Network Facility was responsible for data collection from the Transportable Array stations. Flexible Array The Flexible Array was composed of 291 broadband stations, 120 short period stations, and 1700 active source stations. The Flexible Array allowed sites to be targeted in a more focused manner than the broad Transportable Array. Natural or artificially created seismic waves could be used to map structures in the Earth. Reference Network The Reference Network was composed of permanent seismic stations spaced about 300 km apart. 
The Reference Network provided a baseline for the Transportable Array and Flexible Array. EarthScope added and upgraded 39 stations to the already existing Advanced National Seismic System, which was part of the Reference Network. Magnetotelluric Facility The Magnetotelluric Facility was composed of seven permanent and 20 portable sensors that recorded electromagnetic fields. It is the electromagnetic equivalent of the seismic arrays. The portable sensors were moved in a rolling grid similar to the Transportable Array grid, but were only in place about a month before they were moved to the next location. A magnetotelluric station consists of a magnetometer, four electrodes, and a data recording unit that are buried in shallow holes. The electrodes are oriented north-south and east-west and are saturated in a salt solution to improve conductivity with the ground. Plate Boundary Observatory (PBO) The Plate Boundary Observatory (PBO) consisted of a series of geodetic instruments, Global Positioning System (GPS) receivers and borehole strainmeters, that were installed to help understand the boundary between the North American Plate and the Pacific Plate. The PBO network included several major observatory components: a network of 1100 permanent, continuously operating Global Positioning System (GPS) stations, many of which provide data at high rate and in real time, 78 borehole seismometers, 74 borehole strainmeters, 26 shallow borehole tiltmeters, and six long baseline laser strainmeters. These instruments were complemented by InSAR (interferometric synthetic aperture radar) and LiDAR (light detection and ranging) imagery and geochronology acquired as part of the GeoEarthScope initiative. PBO also included comprehensive data products, data management and education and outreach efforts. These permanent networks were supplemented by a pool of portable GPS receivers that could be deployed by researchers in temporary networks to measure the crustal motion at a specific target or in response to a geologic event. The Plate Boundary Observatory portion of EarthScope was operated by UNAVCO. San Andreas Fault Observatory at Depth (SAFOD) The San Andreas Fault Observatory at Depth (SAFOD) consisted of a main borehole that cut across the active San Andreas Fault at a depth of approximately 3 km and a pilot hole about 2 km southwest of the San Andreas Fault. Data from the instruments installed in the holes, which consisted of geophone sensors, data acquisition systems, and GPS clocks, as well as samples collected during drilling, helped to better understand the processes that control the behavior of the San Andreas Fault. Data Products Data collected from the various observatories were used to create different types of data products. Each data product addressed a different scientific problem. P-Wave Tomography Tomography is a method of producing a three-dimensional image of the internal structures of a solid object (such as the human body or the earth) by the observation and recording of differences in the effects on the passage of energy waves impinging on those structures. The waves of energy are P-waves generated by earthquakes, and their recorded arrivals yield the wave velocities. The high quality data that was collected by the permanent seismic stations of USArray and the Advanced National Seismic System (ANSS) allowed the creation of high resolution seismic imaging of the Earth's interior below the United States. 
Seismic tomography helps constrain mantle velocity structure and aids in the understanding of chemical and geodynamic processes that are at work. With the use of the data collected by USArray and global travel-time data, a global tomography model of P-wave velocity heterogeneity in the mantle could be created. The range and resolution of this technique allowed investigation into the suite of problems that are of concern in the North American mantle lithosphere, including the nature of the major tectonic features. This method gives evidence for differences in the thickness and velocity anomaly of the mantle lithosphere between the stable center of the continent and the more active western North America. These data are vital for the understanding of local lithosphere evolution, and when combined with additional global data, allow the mantle to be imaged beyond the current extent of USArray. Receiver Reference Models The EarthScope Automated Receiver Survey (EARS) created a prototype system that was used to address several key elements of the production of EarthScope products. One of the prototype systems was the receiver reference model. It provided crustal thickness and average crustal Vp/Vs ratios beneath USArray transportable array stations. Ambient Seismic Noise The main function of the Advanced National Seismic System (ANSS) and USArray was to provide high quality data for earthquake monitoring, source studies and Earth structure research. The utility of seismic data is greatly increased when noise levels, unwanted vibrations, are reduced; however, broadband seismograms will always contain a certain level of noise. The dominant sources of noise are either from the instrumentation itself or from ambient Earth vibrations. Normally, seismometer self noise will be well below the seismic noise level, and every station will have a characteristic noise pattern that can be calculated or observed. Seismic noise within the Earth is caused by any of the following: the actions of human beings at or near the surface of the Earth, objects moved by wind with the movement being transferred to the ground, running water (river flow), surf, volcanic activity, or long period tilt due to thermal instabilities from poor station design. A new approach to seismic noise studies was introduced with the EarthScope project, in that there were no attempts to screen the continuous waveforms to eliminate body and surface waves from the naturally occurring earthquakes. Earthquake signals are not generally included in the processing of noise data, because they are generally low probability occurrences, even at low power levels. The two objectives behind the collection of the seismic noise data were to provide and document a standard method to calculate ambient seismic background noise, and to characterize the variation of ambient background seismic noise levels across the United States as a function of geography, season, and time of day. The new statistical approach provided the ability to compute probability density functions (PDFs) to evaluate the full range of noise at a given seismic station, allowing the estimation of noise levels over a broad range of frequencies from 0.01–16 Hz (100–0.0625 s period). With the use of this new method it became much easier to compare seismic noise characteristics between different networks in different regions. 
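The PDF-based noise characterization described above can be sketched in a few lines of code. The sketch below is an illustrative outline only: the function and variable names are invented, real processing would also remove the instrument response, and the project's exact segmenting and binning choices are not reproduced. It estimates a power spectral density for each waveform segment with Welch's method, converts to decibels, and histograms the values per frequency bin to form an empirical probability density function.

```python
import numpy as np
from scipy import signal

def noise_pdf(segments, fs, nperseg=4096, db_bins=np.arange(-200.0, -60.0, 1.0)):
    """Build a simple ambient-noise PDF from many waveform segments.

    segments: iterable of 1-D ground-motion arrays sampled at fs (Hz).
    Returns (frequencies, pdf), where pdf[i, j] is the fraction of segments
    whose PSD at frequency i fell into decibel bin j.
    """
    counts = None
    n_segments = 0
    for seg in segments:
        freqs, psd = signal.welch(seg, fs=fs, nperseg=nperseg)
        psd_db = 10.0 * np.log10(np.maximum(psd, 1e-30))  # power in dB
        if counts is None:
            counts = np.zeros((len(freqs), len(db_bins) - 1))
        for i, value in enumerate(psd_db):
            j = np.searchsorted(db_bins, value) - 1
            if 0 <= j < counts.shape[1]:
                counts[i, j] += 1
        n_segments += 1
    return freqs, counts / max(n_segments, 1)
```

Because every segment contributes to the histogram, earthquake arrivals need not be screened out first; as noted above, they are rare enough that they occupy only the low-probability tail of the resulting distribution.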
Earthquake Ground Motion Animations Seismometers of the USArray transportable array recorded the passage of numerous seismic waves through a given point near the Earth's surface, and classically these seismograms are analyzed to deduce properties of the Earth's structure and the seismic source. Given a spatially dense set of seismic recordings, these signals could also be used to visualize the actual continuous seismic waves, providing new insights and interpretation techniques for complex wave propagation effects. Using signals recorded by the array of seismometers, the EarthScope project animated seismic waves as they swept across the USArray transportable array for selected larger earthquakes. This illustrated the regional and teleseismic wave propagation phenomena. The seismic data collected from both permanent and transportable seismic stations was used to provide these computer generated animations. Regional Moment Tensors The seismic moment tensor is one of the fundamental parameters of earthquakes that can be determined from seismic observations. It is directly related to earthquake fault orientation and rupture direction. The moment magnitude, Mw, derived from the moment tensor magnitude, is the most reliable quantity for comparing and measuring the size of an earthquake with other earthquake magnitudes. Moment tensors are used in a wide range of seismological research fields, such as earthquake statistics, earthquake scaling relationships, and stress inversion. The creation of regional moment tensor solutions, with the appropriate software, for moderate-to-large earthquakes in the U.S. came from USArray transportable array and Advanced National Seismic System broadband seismic stations. Results were obtained in the time and the frequency domain. Waveform fit and amplitude-phase match figures were provided to allow users to evaluate moment tensor quality. Geodetic Monitoring of the Western US and Hawaii Global Positioning System (GPS) equipment and techniques provide a unique opportunity for earth scientists to study regional and local tectonic plate motions and conduct natural hazards monitoring. Cleaned network solutions from several GPS arrays were merged into regional clusters in conjunction with the EarthScope project. The arrays included the Pacific Northwest Geodetic Array, EarthScope's Plate Boundary Observatory, the Western Canadian Deformation Array, and networks run by the US Geological Survey. The daily GPS measurements from ~1500 stations along the Pacific/North American plate boundary provided millimeter-scale accuracy and could be used to monitor the displacements of the Earth's crust. With the use of data modeling software and the recorded GPS data, it was possible to quantify crustal deformation caused by plate tectonics, earthquakes, landslides and volcanic eruptions. Time-dependent Strain The goal was to provide models of time-dependent strain associated with a number of recent earthquakes and other geologic events as constrained by GPS data. With the use of InSAR (Interferometric Synthetic Aperture Radar), a remote-sensing technique, and PBO (Plate Boundary Observatory), a fixed array of GPS receivers and strainmeters, the EarthScope project provided spatially continuous strain measurements over wide geographic areas with decimeter to centimeter resolution. 
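The moment magnitude Mw mentioned in the regional moment tensor discussion above is obtained from the scalar seismic moment through the standard Hanks–Kanamori relation. The snippet below is a small illustrative calculation only; it assumes the scalar moment is supplied in newton-metres and is not part of the project's own processing chain.

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude Mw from scalar seismic moment M0 (in N*m),
    using the standard Hanks & Kanamori (1979) relation."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Example: M0 = 1.1e21 N*m corresponds to Mw ~ 8.0, a great earthquake.
print(round(moment_magnitude(1.1e21), 1))
```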
Global Strain Rate Map The Global Strain Rate Map (GSRM) is a project of the International Lithosphere Program whose mission is to determine a globally self-consistent strain rate and velocity field model, consistent with geodetic and geologic field observations collected by GPS, seismometers, and strainmeters. GSRM is a digital model of the global velocity gradient tensor field associated with the accommodation of present-day crustal motions. The overall mission also includes: (1) contributions of global, regional, and local models by individual researchers; (2) archiving existing data sets of geologic, geodetic, and seismic information that can contribute toward a greater understanding of strain phenomena; and (3) archiving existing methods for modeling strain rates and strain transients. A completed global strain rate map provided a large amount of information which will contribute to the understanding of continental dynamics and to the quantification of seismic hazards. Science There were seven topics the EarthScope program addressed with the use of the observatories. Convergent Margin Processes Convergent margins, also known as convergent boundaries, are active regions of deformation between two or more tectonic plates colliding with one another. Convergent margins create areas of tectonic uplift, such as mountain ranges or volcanoes. EarthScope focused on the boundary between the Pacific Plate and the North American Plate in the western United States. EarthScope provided GPS geodetic data, seismic images, detailed seismicity, magnetotelluric data, InSAR, stress field maps, digital elevation models, baseline geology, and paleoseismology for a better understanding of convergent margin processes. A few questions EarthScope addressed include: What controls the lithospheric architecture? What controls the locus of volcanism? How do convergent margin processes contribute to growth of the continent through time? Crustal Strain and Deformation Crustal strain and deformation is the change in shape and volume of continental and oceanic crust caused by stress applied to rock through tectonic forces. An array of variables including composition, temperature, pressure, etc., determines how the crust will deform. A few questions EarthScope addressed include: How do crust and mantle rheology vary with rock type and with depth? How does lithospheric rheology change in the vicinity of a fault zone? What is the distribution of stress in the lithosphere? Continental Deformation Continental deformation is driven by plate interactions through active tectonic processes such as continental transform systems with extensional, strike-slip, and contractional regimes. EarthScope provided velocity field data, portable and continuous GPS data, fault-zone drilling and sampling, reflection seismology, modern seismicity, pre-Holocene seismicity, and magnetotelluric and potential field data for a better understanding of continental deformation. A few questions EarthScope addressed include: What are the fundamental controls on deformation of the continent? What is the strength profile(s) of the lithosphere? What defines tectonic regimes within the continent? Continent Structure and Evolution Earth's continents are compositionally distinct from the oceanic crust. The continents record four billion years of geologic history, while the oceanic crust gets recycled about every 180 million years. Because of the age of continental crusts, the ancient structural evolution of the continents can be studied. 
Data from EarthScope was used to find the mean seismic structure of the continental crust, associated mantle, and crust-mantle transition. Variability in that structure was also studied. EarthScope attempted to define continental lithosphere formation and continent structure and to identify the relationship between continental structure and deformation. A few questions EarthScope addressed include: How does magmatism modify, enlarge, and deform continental lithosphere? How are the crust and lithospheric mantle related? What is the role of extension, orogenic collapse, and rifting in constructing the continents? Faults and Earthquake Processes EarthScope acquired 3D and 4D data that gave scientists a more detailed insight into faulting and earthquakes than ever before. This project provided a much needed data upgrade from work done in previous years thanks to many technological advances. New data enabled an improved study and understanding of faults and earthquakes that increased our knowledge of the complete earthquake process, allowing for the continued development of predictive models. Detailed information on internal fault zone architecture, crust and upper mantle structure, strain rates, and transitions between fault systems and deformation types, as well as heat flow, electromagnetic/magnetotelluric, and seismic waveform data, was all made available. A few questions EarthScope addressed include: How does strain accumulate and release at plate boundaries and within the North American plate? How do earthquakes start, rupture, and stop? What is the absolute strength of faults and the surrounding lithosphere? Deep Earth Structure Through the use of seismology, scientists were able to collect and evaluate data from the deepest parts of our planet, from the continental lithosphere down to the core. The relationship between lithospheric and upper mantle processes is not completely understood, including the upper mantle processes below the United States and their effects on the continental lithosphere. There are many issues of interest, such as determining the source of forces originating in the upper mantle and their effects on the continental lithosphere. Seismic data gave scientists more understanding and insight into the lower mantle and the Earth's core, as well as activity at the core-mantle boundary. A few questions EarthScope hoped to answer included: How is evolution of the continents linked to processes in the upper mantle? What is the level of heterogeneity in the mid-mantle? What is the nature and heterogeneity of the lower mantle and core-mantle boundary? Fluids and Magmas EarthScope hoped to provide a better understanding of the physics of fluids and magmas in active volcanic systems in relation to the deep Earth and how the evolution of continental lithosphere is related to upper mantle processes. The basic idea of how the various melts are formed is known, but not the volumes and rates of magma production outside of mid-ocean ridge basalts. EarthScope provided seismic data and tomographic images of the mantle to better understand these processes. A few questions EarthScope addressed include: Over what temporal and spatial scales do earthquake deformation and volcanic eruptions couple? What controls eruption style? What are the predictive signs of imminent volcanic eruption? What are the structural, rheological, and chemical controls on fluid flow in the crust? 
Education and Outreach The Education and Outreach Program was designed to integrate EarthScope into both the classroom and the community. The program reached out to scientific educators and students as well as industry professionals (engineers, land/resource managers, technical application/data users), partners of the project (UNAVCO, IRIS, USGS, NASA, etc.), and the general public. To accomplish this, the EOP offered a wide array of educational workshops and seminars, directed at various audiences, to offer support on data interpretation and implementation of data products into the classroom. Their job was to make sure that everyone understood what EarthScope was, what it was doing in the community, and how to use the data it was producing. By generating new research opportunities for students in the scientific community, the program also hoped to expand recruitment for future generations of earth scientists. Mission "To use EarthScope data, products, and results to create a measurable and lasting change on the way that Earth science is taught and perceived in the United States." Goals Create a high-profile public identity for EarthScope that emphasizes the integrated nature of the scientific discoveries and the importance of EarthScope research initiatives. Establish a sense of ownership among scientific, professional, and educational communities and the public so that a diverse group of individuals and organizations can and will make contributions to EarthScope. Promote science literacy and understanding of EarthScope among all audiences through informal education venues. Advance formal Earth science education by promoting inquiry-based classroom investigations that focus on understanding Earth and the interdisciplinary nature of EarthScope. Encourage use of EarthScope data, discoveries, and new technology in resolving challenging problems and improving our quality of life. EarthScope In the Classroom Education and outreach developed tools for educators and students across the United States to interpret and apply this information for solving a wide range of scientific issues within the earth sciences. The project tailored its products to the specified needs and requests of educators. K-12 Education The EarthScope Education and Outreach Bulletin was a bulletin targeted at grades 5-8 that summarized a volcanic or tectonic event documented by EarthScope and put it into an easily interpretable format, complete with diagrams and 3D models. They followed specific content standards based on what a child should be learning at those grade levels. The EarthScope Voyager, Jr. allowed students to explore and visualize the various types of data that were collected. In this interactive map, the user could add various types of base maps, features, and plate velocities. Educators could access real-time GPS data of plate movement and influences through the UNAVCO website. University Level EarthScope promised to produce a large amount of geological and geophysical data, opening the door to numerous research opportunities in the scientific community. As the USArray Big Foot project moved across the country, universities adopted seismic stations near their areas. These stations were then monitored and maintained by not only the professors, but their students as well. Scouting for future seismic station locations created field work opportunities for students. The influx of data helped create projects for undergraduate research, master's theses, and doctoral dissertations. 
A list of funded proposals can be found on the NSF website. Legacy Many applications for EarthScope data currently exist, as mentioned above. The EarthScope program was dedicated to determining the three dimensional structure of the North American continent. Future uses of the data that it produced might include hydrocarbon exploration, aquifer boundary establishment, remote sensing technique development, and earthquake risk assessment. Due to the open and free-to-the-public data portals that EarthScope and its partners maintain, the applications are limited only by the creativity of those who wish to sort through the gigabytes of data. Also, because of its scale, the program will undoubtedly be the topic of casual conversation for many people outside of the geologic community. EarthScope will be discussed by people in political, educational, social, and scientific arenas. Geologic Legacy The multidisciplinary character of EarthScope helped create stronger network connections between geologists of all types and from around the country. Building an Earth model of this scale required a complex community effort, and this model is largely the first EarthScope legacy. Researchers analyzing the data left us with a greater scientific understanding of geologic resources in the Great Basin and of the evolution of the plate boundary on the North American west coast. Another geologic legacy desired by the initiative was to invigorate the Earth sciences community. Invigoration is self-perpetuating as evidenced by participation from thousands of organizations from around the world and from all levels of students and researchers. This leads to a significantly heightened awareness within the general public, including the next cohort of prospective Earth scientists. With further evolution of the EarthScope project, there were opportunities to create new observatories with greater capabilities, including extending the USArray over the Gulf of Mexico and the Gulf of California. There is much promise for EarthScope tools and observatories, even after retirement, to be used by universities and professional geologists. These tools include the physical equipment, software invented to analyze the data, and other data and educational products initiated or inspired by EarthScope. Political Legacy The science produced by EarthScope and the researchers using its data products help guide lawmakers in environmental policy, hazard identification, and ultimately, federal funding of more large-scale projects like this one. Besides the three physical dimensions of North America's structure, a fourth dimension of the continent is being described through geochronology using EarthScope data. Improving understanding of the continent's geologic history will allow future generations to more efficiently manage and use geologic resources and live with geologic hazards. Environmental policy laws have been the subject of some controversy since the European settlement of North America. Specifically, water and mineral rights issues have been the focus of dispute. Representatives in Washington D.C. and the state capitals require guidance from authoritative science in drafting the soundest environmental laws for our country. The EarthScope research community was in a position to provide the most reliable course for government to take concerning environmental policy. Hazard identification with EarthScope is an application already in use. 
In fact, the Federal Emergency Management Agency (FEMA) has awarded the Arizona Geological Survey and its partner universities funding to adopt and maintain eight Transportable Array stations. The stations will be used to update Arizona's earthquake risk assessment. Social Legacy For EarthScope to live up to its potential in the Earth sciences, the connections between the research and the education and outreach communities must continue to be cultivated. Enhanced public outreach to museums, the National Park System, and public schools will ensure that these forward-thinking connections are fostered. National media collaboration with high-profile outlets such as Discovery Channel, Science Channel, and National Geographic may secure a lasting legacy within the social consciousness of the world. Earth science has already been promoted as a vital modern discipline, especially in today's “green” culture, to which EarthScope is contributing. The size of the EarthScope project augments the growing public awareness of the broad structure of the planet on which we live. EarthScope Consortium Given that IRIS and UNAVCO operated the seismology and geodesy components of the instrumentation that the project relied on, when these two organizations merged in 2023 they adopted the name EarthScope Consortium to represent the shared vision of the new organization. See also German Continental Deep Drilling Programme (KTB) Kola Superdeep Borehole San Andreas Fault Observatory at Depth (SAFOD project) References External links EarthScope Consortium Historic EarthScope program website archive SAGE GAGE National Science Foundation (NSF) United States Geological Survey (USGS) Seismological observatories, organisations and projects Geophysics Geodesy Seismology Regional geology Satellite navigation Global Positioning System
EarthScope
[ "Physics", "Mathematics", "Technology", "Engineering" ]
5,492
[ "Applied and interdisciplinary physics", "Wireless locating", "Applied mathematics", "Aerospace engineering", "Aircraft instruments", "Geophysics", "Global Positioning System", "Geodesy" ]
3,464,419
https://en.wikipedia.org/wiki/ZINDO
ZINDO is a semi-empirical quantum chemistry method used in computational chemistry. It is a development of the INDO method. It stands for Zerner's Intermediate Neglect of Differential Overlap, as it was developed by Michael Zerner and his coworkers in the 1970s. Unlike INDO, which was really restricted to organic molecules and those containing the atoms B to F, ZINDO covers a wide range of the periodic table, even including the rare-earth elements. There are two distinct versions of the method: ZINDO/1 – for calculating ground-state properties such as bond lengths and bond angles. It refers to an SCF (RHF or ROHF) calculation with the INDO/1 level as suggested by Pople, which provides the reference state MO coefficients. Ground-state dipole moments and ionization potentials are in general very accurate. Geometry optimizations are erratic, which prompted Zerner's group to improve the performance of the code in the late 1990s. ZINDO/S (sometimes just called INDO/S) – uses the INDO/1 molecular orbitals for calculating excited states and hence electronic spectra. It consists of a CI calculation including only the reference state plus a small set of single-electron excitations within a selected active space, typically five HOMOs and five LUMOs. The original BIGSPEC program from the Zerner group is not widely available, but the method is implemented in ORCA, in part in Gaussian, and in SCIGRESS. To obtain good results, it is frequently necessary to fit the parameters to a given molecule, thereby making it ideal only in semi-empirical calculations. References Semiempirical quantum chemistry methods
ZINDO
[ "Chemistry" ]
347
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Computational chemistry", "Physical chemistry stubs", "Semiempirical quantum chemistry methods" ]
3,464,501
https://en.wikipedia.org/wiki/Cosmotron
The Cosmotron was a particle accelerator, specifically a proton synchrotron, at Brookhaven National Laboratory. Its construction was approved by the U.S. Atomic Energy Commission in 1948; it reached its full energy in 1953 and continued to run until 1966. It was dismantled in 1969. It was the first particle accelerator to impart kinetic energy in the range of GeV to a single particle, accelerating protons to 3.3 GeV. It was also the first accelerator to allow the extraction of the particle beam for experiments located physically outside the accelerator. It was used to observe a number of mesons previously seen only in cosmic rays, and to make the first discoveries of heavy, unstable particles (called V particles at the time) leading to the experimental confirmation of the theory of associated production of strange particles. It was the first accelerator that was able to produce all positive and negative mesons known to exist in cosmic rays. Its discoveries include the first vector meson. The name originally chosen for the synchrotron was Cosmitron (representing an ambition to produce cosmic rays), but it was changed to Cosmotron to sound like the cyclotron. The beam size of 64 × 15 cm and an energy goal of about 3 GeV determined the machine parameters. The synchrotron had a 75-foot/22.9-meter diameter. It consisted of 288 magnets each weighing 6 tons and providing up to 1.5 T, forming four curved sections. The range of field change was kept within limits by first accelerating particles to an intermediate energy in another accelerator and then injecting them into the Cosmotron. The straight sections without magnets were worrisome because there was no focusing and the betatron oscillations would change suddenly and might swing wildly. But all these major problems were overcome. Gallery References External links BNL-Cosmotron experiment record on INSPIRE-HEP Cosmotron Magnet Lamina at the Smithsonian Museum of Natural History History from BNL website BNL website on BNL's 60th anniversary Brookhaven National Laboratory Particle physics facilities Particle experiments
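As a rough consistency check on the figures quoted above (3.3 GeV kinetic energy, a peak field of about 1.5 T, and a 75-foot ring with four straight sections), the standard magnetic-rigidity relation B·ρ [T·m] ≈ p [GeV/c] / 0.2998 can be evaluated in a few lines. This sketch is illustrative and not taken from the article.

```python
import math

PROTON_MASS_GEV = 0.938272  # proton rest energy in GeV

def bending_radius(kinetic_energy_gev, field_tesla):
    """Bending radius (m) of a proton of given kinetic energy in a given field,
    via the magnetic-rigidity relation B*rho [T*m] = p [GeV/c] / 0.2998."""
    total_energy = kinetic_energy_gev + PROTON_MASS_GEV
    momentum = math.sqrt(total_energy**2 - PROTON_MASS_GEV**2)  # GeV/c
    rigidity = momentum / 0.2998                                # T*m
    return rigidity / field_tesla

# Roughly 9 m of bending radius, consistent with a 22.9 m diameter ring once
# the four field-free straight sections are taken into account.
print(round(bending_radius(3.3, 1.5), 1))
```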
Cosmotron
[ "Physics" ]
424
[ "Particle physics stubs", "Particle physics" ]
3,465,656
https://en.wikipedia.org/wiki/Plate%20Boundary%20Observatory
The Plate Boundary Observatory (PBO) was the geodetic component of the EarthScope Facility. EarthScope was an Earth science program that explored the 4-dimensional structure of the North American Continent. EarthScope (and PBO) was a 15-year project (2003-2018) funded by the National Science Foundation (NSF) in conjunction with NASA. PBO construction (an NSF MREFC) took place from October 2003 through September 2008. Phase 1 of operations and maintenance concluded in September 2013. Phase 2 of operations ended in September 2018, along with the end of the EarthScope project. In October 2018, PBO was assimilated into a broader Network of the Americas (NOTA), along with networks in Mexico (TLALOCNet) and the Caribbean (COCONet), as part of the NSF's Geodetic Facility for the Advancement of Geosciences (GAGE). GAGE is operated by EarthScope Consortium. PBO precisely measured Earth deformation resulting from the constant motion of the Pacific and North American tectonic plates in the western United States. These Earth movements can be very small and incremental and not felt by people, or they can be very large and sudden, such as those that occur during earthquakes and volcanic eruptions. The high-precision instrumentation of the PBO enabled detection of motions to a sub-centimeter level. PBO measured Earth deformation through a network of instrumentation including: high precision Global Positioning System (GPS) and Global Navigation Satellite System (GNSS) receivers, strainmeters, seismometers, tiltmeters, and other geodetic instruments. The PBO GPS network included 1100 stations extending from the Aleutian Islands south to Baja and eastward across the continental United States. During the construction phase, 891 permanent and continuously operating GPS stations were installed, and another 209 existing stations (the PBO Nucleus stations) were integrated into the network. Geodetic imaging data was transmitted, often in real time, from a wide network of GPS stations, augmented by seismometers, strainmeters and tiltmeters, complemented by InSAR (interferometric synthetic aperture radar), LiDAR (light detection and ranging), and geochronology. The GPS stations were categorized into clusters. The transform cluster was near the San Andreas Fault in California; the subduction cluster was in the Cascadia subduction zone (northern California, Oregon, Washington, and southern British Columbia); the extension cluster was in the Basin and Range region; the volcanic cluster was in the Yellowstone caldera, the Long Valley caldera, and the Cascade Volcanoes; the backbone cluster was at 100–200 km intervals across the United States to provide complete spatial coverage. Data from the PBO was, and NOTA data continue to be, transmitted to the data center of the GAGE Facility, operated by EarthScope Consortium, where it is collected, archived and distributed. These data sets continue to be freely and openly available to the public, with equal access provided for all users. PBO data includes the raw data collected from each instrument, quality-checked data in formats commonly used by PBO's various user communities, and processed data such as calibrated time series, velocity fields, and error estimates. Some scientific questions that were addressed by the EarthScope project and the PBO data include: How does accumulated strain lead to earthquakes? Are there recognizable precursors to earthquakes? How does the evolution of the continent influence the motions that are happening today? What happens to geologic structures at depth? 
What influences the location of features such as faults and mountain ranges? Is it inherited from earlier tectonic events or related to deeper processes in the mantle? How is magma generated? How does it travel from the mantle to reach the surface? What are the precursors to a volcanic eruption? References Global Positioning System Plate tectonics
Plate Boundary Observatory
[ "Technology", "Engineering" ]
789
[ "Global Positioning System", "Aerospace engineering", "Wireless locating", "Aircraft instruments" ]
3,467,050
https://en.wikipedia.org/wiki/Uranium%20dioxide
Uranium dioxide or uranium(IV) oxide (), also known as urania or uranous oxide, is an oxide of uranium, and is a black, radioactive, crystalline powder that naturally occurs in the mineral uraninite. It is used in nuclear fuel rods in nuclear reactors. A mixture of uranium and plutonium dioxides is used as MOX fuel. Prior to 1960, it was used as yellow and black color in ceramic glazes and glass. Production Uranium dioxide is produced by reducing uranium trioxide with hydrogen. UO3 + H2 → UO2 + H2O at 700 °C (973 K) This reaction plays an important part in the creation of nuclear fuel through nuclear reprocessing and uranium enrichment. Chemistry Structure The solid is isostructural with (has the same structure as) fluorite (calcium fluoride), where each U is surrounded by eight O nearest neighbors in a cubic arrangement. In addition, the dioxides of cerium, thorium, and the transuranic elements from neptunium through californium have the same structures. No other elemental dioxides have the fluorite structure. Upon melting, the measured average U-O coordination reduces from 8 in the crystalline solid (UO8 cubes), down to 6.7±0.5 (at 3270 K) in the melt. Models consistent with these measurements show the melt to consist mainly of UO6 and UO7 polyhedral units, where roughly of the connections between polyhedra are corner sharing and are edge sharing. Oxidation Uranium dioxide is oxidized in contact with oxygen to triuranium octoxide. 3 UO2 + O2 → U3O8 at 700 °C (973 K) The electrochemistry of uranium dioxide has been investigated in detail as the galvanic corrosion of uranium dioxide controls the rate at which used nuclear fuel dissolves. See spent nuclear fuel for further details. Water increases the oxidation rate of plutonium and uranium metals. Carbonization Uranium dioxide is carbonized in contact with carbon, forming uranium carbide and carbon monoxide. UO2 + 4 C → UC2 + 2 CO This process must be done under an inert gas as uranium carbide is easily oxidized back into uranium oxide. Uses Nuclear fuel UO2 is used mainly as nuclear fuel, specifically as UO2 or as a mixture of UO2 and PuO2 (plutonium dioxide) called a mixed oxide (MOX fuel), in the form of fuel rods in nuclear reactors. The thermal conductivity of uranium dioxide is very low when compared with elemental uranium, uranium nitride, uranium carbide and zircaloy cladding material as well as most uranium-based alloys. This low thermal conductivity can result in localised overheating in the centres of fuel pellets. The graph below shows the different temperature gradients in different fuel compounds. For these fuels, the thermal power density is the same and the diameters of all the pellets are the same. Color for glass ceramic glaze Uranium oxide (urania) was used to color glass and ceramics prior to World War II, and until the applications of radioactivity were discovered this was its main use. In 1958 the military in both the US and Europe allowed its commercial use again as depleted uranium, and its use began again on a more limited scale. Urania-based ceramic glazes are dark green or black when fired in a reduction or when UO2 is used; more commonly it is used in oxidation to produce bright yellow, orange and red glazes. Orange-colored Fiestaware is a well-known example of a product with a urania-colored glaze. Uranium glass is pale green to yellow and often has strong fluorescent properties. Urania has also been used in formulations of enamel and porcelain. 
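The effect of the low thermal conductivity on fuel-pellet temperature, mentioned in the nuclear fuel discussion above, can be illustrated with the standard conduction result for a cylinder with uniform volumetric heating: the centreline-to-surface temperature rise is ΔT = q_lin / (4πk), where q_lin is the linear heat rate and k the thermal conductivity. The numbers below are assumptions chosen for illustration, not values from the article.

```python
import math

def centreline_temperature_rise(linear_heat_rate_w_per_m, conductivity_w_per_m_k):
    """Centreline-to-surface temperature rise (K) of a cylindrical fuel pellet
    with uniform volumetric heating: dT = q_lin / (4 * pi * k)."""
    return linear_heat_rate_w_per_m / (4.0 * math.pi * conductivity_w_per_m_k)

# Assumed illustrative numbers: ~20 kW/m linear power and k ~ 3 W/(m*K) for UO2
# at operating temperature give a rise of roughly 530 K, showing why the low
# conductivity matters for pellet centre temperatures.
print(round(centreline_temperature_rise(20e3, 3.0)))
```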
It is possible to determine with a Geiger counter if a glaze or glass produced before 1958 contains urania. Other uses Prior to the realisation of the harmfulness of radiation, uranium was included in false teeth and dentures, as its slight fluorescence made the dentures appear more like real teeth in a variety of lighting conditions. Depleted UO2 (DUO2) can be used as a material for radiation shielding. For example, DUCRETE is a "heavy concrete" material where gravel is replaced with uranium dioxide aggregate; this material is being investigated for use in casks for radioactive waste. Casks can also be made of DUO2-steel cermet, a composite material made of an aggregate of uranium dioxide serving as radiation shielding, graphite and/or silicon carbide serving as neutron radiation absorber and moderator, and steel as the matrix, whose high thermal conductivity allows easy removal of decay heat. Depleted uranium dioxide can also be used as a catalyst, e.g. for degradation of volatile organic compounds in gaseous phase, oxidation of methane to methanol, and removal of sulfur from petroleum. It has high efficiency and long-term stability when used to destroy VOCs when compared with some of the commercial catalysts, such as precious metals, TiO2, and Co3O4 catalysts. Much research is being done in this area, DU being favoured for the uranium component due to its low radioactivity. The use of uranium dioxide as a material for rechargeable batteries is being investigated. The batteries could have high power density and a potential of 4.7 V per cell. Another investigated application is in photoelectrochemical cells for solar-assisted hydrogen production where UO2 is used as a photoanode. In earlier times, uranium dioxide was also used as a heat conductor for current limitation (URDOX resistor), which was the first use of its semiconductor properties. Uranium dioxide displays strong piezomagnetism in the antiferromagnetic state, observed at cryogenic temperatures below 30 kelvins. Accordingly, the linear magnetostriction found in UO2 changes sign with the applied magnetic field and exhibits magnetoelastic memory switching phenomena at record high switch-fields of 180,000 Oe. The microscopic origin of the material's magnetic properties lies in the face-centered-cubic crystal lattice symmetry of uranium atoms, and its response to applied magnetic fields. Semiconductor properties The band gap of uranium dioxide is comparable to those of silicon and gallium arsenide, near the optimum of the efficiency vs. band gap curve for absorption of solar radiation, suggesting its possible use for very efficient solar cells based on Schottky diode structure; it also absorbs at five different wavelengths, including infrared, further enhancing its efficiency. Its intrinsic conductivity at room temperature is about the same as that of single crystal silicon. The dielectric constant of uranium dioxide is about 22, which is almost twice as high as that of silicon (11.2) and GaAs (14.1). This is an advantage over Si and GaAs in the construction of integrated circuits, as it may allow higher density integration with higher breakdown voltages and with lower susceptibility to CMOS tunnelling breakdown. The Seebeck coefficient of uranium dioxide at room temperature is about 750 μV/K, a value significantly higher than the 270 μV/K of thallium tin telluride (Tl2SnTe5) and thallium germanium telluride (Tl2GeTe5) and of bismuth-tellurium alloys, other materials promising for thermoelectric power generation applications and Peltier elements. 
The radioactive decay impact of the 235U and 238U on its semiconducting properties was not measured . Due to the slow decay rate of these isotopes, it should not meaningfully influence the properties of uranium dioxide solar cells and thermoelectric devices, but it may become an important factor for VLSI chips. Use of depleted uranium oxide is necessary for this reason. The capture of alpha particles emitted during radioactive decay as helium atoms in the crystal lattice may also cause gradual long-term changes in its properties. The stoichiometry of the material dramatically influences its electrical properties. For example, the electrical conductivity of UO1.994 is orders of magnitude lower at higher temperatures than the conductivity of UO2.001. Uranium dioxide, like U3O8, is a ceramic material capable of withstanding high temperatures (about 2300 °C, in comparison with at most 200 °C for silicon or GaAs), making it suitable for high-temperature applications like thermophotovoltaic devices. Uranium dioxide is also resistant to radiation damage, making it useful for rad-hard devices for special military and aerospace applications. A Schottky diode of U3O8 and a p-n-p transistor of UO2 were successfully manufactured in a laboratory. Toxicity Uranium dioxide is known to be absorbed by phagocytosis in the lungs. See also Cleveite Ducrete Uranium oxide Uranium glass Uranium tile References Further reading External links Semiconducting properties of uranium oxides Free Dictionary Listing for Uranium Dioxide The Uranium dioxide International Bio-Analytical Industries, Inc. Nuclear chemistry Uranium(IV) compounds Nuclear materials Oxides Semiconductor materials Articles containing video clips Fluorite crystal structure
Uranium dioxide
[ "Physics", "Chemistry" ]
1,913
[ "Nuclear chemistry", "Semiconductor materials", "Oxides", "Salts", "Materials", "Nuclear materials", "nan", "Nuclear physics", "Matter" ]
3,467,973
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Graham%20problem
In combinatorial number theory, the Erdős–Graham problem is the problem of proving that, if the set of integers greater than one is partitioned into finitely many subsets, then one of the subsets can be used to form an Egyptian fraction representation of unity. That is, for every , and every -coloring of the integers greater than one, there is a finite monochromatic subset of these integers such that In more detail, Paul Erdős and Ronald Graham conjectured that, for sufficiently large , the largest member of could be bounded by for some constant independent of . It was known that, for this to be true, must be at least Euler's constant . Ernie Croot proved the conjecture as part of his Ph.D thesis, and later (while a post-doctoral researcher at UC Berkeley) published the proof in the Annals of Mathematics. The value Croot gives for is very large: it is at most . Croot's result follows as a corollary of a more general theorem stating the existence of Egyptian fraction representations of unity for sets of smooth numbers in intervals of the form , where contains sufficiently many numbers so that the sum of their reciprocals is at least six. The Erdős–Graham conjecture follows from this result by showing that one can find an interval of this form in which the sum of the reciprocals of all smooth numbers is at least ; therefore, if the integers are -colored there must be a monochromatic subset satisfying the conditions of Croot's theorem. A stronger form of the result, that any set of integers with positive upper density includes the denominators of an Egyptian fraction representation of one, was announced in 2021 by Thomas Bloom, a postdoctoral researcher at the University of Oxford. See also Conjectures by Erdős References External links Ernie Croot's Webpage Combinatorics Conjectures that have been proved Theorems in number theory Egyptian fractions Graham problem
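The Egyptian fraction representations of unity at the heart of the problem are easy to verify computationally with exact rational arithmetic. The example sets below are chosen here purely for illustration and are not taken from the article; the point is only that a finite set of integers greater than one can have reciprocals summing exactly to 1, which is the kind of monochromatic subset the theorem guarantees for any finite coloring.

```python
from fractions import Fraction

def is_egyptian_representation_of_one(denominators):
    """True if the reciprocals of the given integers (all > 1) sum exactly to 1."""
    return sum(Fraction(1, n) for n in denominators) == 1

print(is_egyptian_representation_of_one({2, 3, 6}))        # True: 1/2 + 1/3 + 1/6
print(is_egyptian_representation_of_one({2, 4, 6, 12}))    # True: 1/2 + 1/4 + 1/6 + 1/12
print(is_egyptian_representation_of_one({2, 3, 7}))        # False: 41/42
```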
Erdős–Graham problem
[ "Mathematics" ]
393
[ "Discrete mathematics", "Combinatorics", "Theorems in number theory", "Conjectures that have been proved", "Mathematical problems", "Mathematical theorems", "Number theory" ]
3,468,491
https://en.wikipedia.org/wiki/Banach%20bundle
In mathematics, a Banach bundle is a vector bundle each of whose fibres is a Banach space, i.e. a complete normed vector space, possibly of infinite dimension. Definition of a Banach bundle Let M be a Banach manifold of class Cp with p ≥ 0, called the base space; let E be a topological space, called the total space; let π : E → M be a surjective continuous map. Suppose that for each point x ∈ M, the fibre Ex = π−1(x) has been given the structure of a Banach space. Let be an open cover of M. Suppose also that for each i ∈ I, there is a Banach space Xi and a map τi such that the map τi is a homeomorphism commuting with the projection onto Ui, i.e. the following diagram commutes: and for each x ∈ Ui the induced map τix on the fibre Ex is an invertible continuous linear map, i.e. an isomorphism in the category of topological vector spaces; if Ui and Uj are two members of the open cover, then the map is a morphism (a differentiable map of class Cp), where Lin(X; Y) denotes the space of all continuous linear maps from a topological vector space X to another topological vector space Y. The collection {(Ui, τi)|i∈I} is called a trivialising covering for π : E → M, and the maps τi are called trivialising maps. Two trivialising coverings are said to be equivalent if their union again satisfies the two conditions above. An equivalence class of such trivialising coverings is said to determine the structure of a Banach bundle on π : E → M. If all the spaces Xi are isomorphic as topological vector spaces, then they can be assumed all to be equal to the same space X. In this case, π : E → M is said to be a Banach bundle with fibre X. If M is a connected space then this is necessarily the case, since the set of points x ∈ M for which there is a trivialising map for a given space X is both open and closed. In the finite-dimensional case, the second condition above is implied by the first. Examples of Banach bundles If V is any Banach space, the tangent space TxV to V at any point x ∈ V is isomorphic in an obvious way to V itself. The tangent bundle TV of V is then a Banach bundle with the usual projection This bundle is "trivial" in the sense that TV admits a globally defined trivialising map: the identity function If M is any Banach manifold, the tangent bundle TM of M forms a Banach bundle with respect to the usual projection, but it may not be trivial. Similarly, the cotangent bundle T*M, whose fibre over a point x ∈ M is the topological dual space to the tangent space at x: also forms a Banach bundle with respect to the usual projection onto M. There is a connection between Bochner spaces and Banach bundles. Consider, for example, the Bochner space X = L²([0, T]; H1(Ω)), which might arise as a useful object when studying the heat equation on a domain Ω. One might seek solutions σ ∈ X to the heat equation; for each time t, σ(t) is a function in the Sobolev space H1(Ω). One could also think of Y = [0, T] × H1(Ω), which as a Cartesian product also has the structure of a Banach bundle over the manifold [0, T] with fibre H1(Ω), in which case elements/solutions σ ∈ X are cross sections of the bundle Y of some specified regularity (L², in fact). If the differential geometry of the problem in question is particularly relevant, the Banach bundle point of view might be advantageous. Morphisms of Banach bundles The collection of all Banach bundles can be made into a category by defining appropriate morphisms. Let π : E → M and π′ : E′ → M′ be two Banach bundles. 
A Banach bundle morphism from the first bundle to the second consists of a pair of morphisms For f to be a morphism means simply that f is a continuous map of topological spaces. If the manifolds M and M′ are both of class Cp, then the requirement that f0 be a morphism is the requirement that it be a p-times continuously differentiable function. These two morphisms are required to satisfy two conditions (again, the second one is redundant in the finite-dimensional case): the diagram commutes, and, for each x ∈ M, the induced map is a continuous linear map; for each x0 ∈ M there exist trivialising maps such that x0 ∈ U, f0(x0) ∈ U′, and the map is a morphism (a differentiable map of class Cp). Pull-back of a Banach bundle One can take a Banach bundle over one manifold and use the pull-back construction to define a new Banach bundle on a second manifold. Specifically, let π : E → N be a Banach bundle and f : M → N a differentiable map (as usual, everything is Cp). Then the pull-back of π : E → N is the Banach bundle f*π : f*E → M satisfying the following properties: for each x ∈ M, (f*E)x = Ef(x); there is a commutative diagram with the top horizontal map being the identity on each fibre; if E is trivial, i.e. equal to N × X for some Banach space X, then f*E is also trivial and equal to M × X, and is the projection onto the first coordinate; if V is an open subset of N and U = f−1(V), then and there is a commutative diagram where the maps at the "front" and "back" are the same as those in the previous diagram, and the maps from "back" to "front" are (induced by) the inclusions. References Banach spaces Differential geometry Generalized manifolds Manifolds Nonlinear functional analysis Structures on manifolds Vector bundles
Banach bundle
[ "Mathematics" ]
1,320
[ "Topological spaces", "Topology", "Manifolds", "Space (mathematics)" ]
17,245,684
https://en.wikipedia.org/wiki/ASM%20International
ASM (previously known as ASM International N.V., originally standing for Advanced Semiconductor Materials) is a Dutch-headquartered multinational corporation that specializes in the design, manufacturing, sales and service of semiconductor wafer processing equipment for the fabrication of semiconductor devices. ASM's products are used by semiconductor manufacturers in front-end wafer processing in their semiconductor fabrication plants. ASM's technologies include atomic layer deposition, epitaxy, chemical vapor deposition and diffusion. The company was founded by Arthur del Prado (1931-2016) as 'Advanced Semiconductor Materials' in 1964. From 2008 until 2020, Arthur del Prado's son, Chuck del Prado, was CEO. ASM pioneered important aspects of many established wafer-processing technologies used in industry, including lithography, deposition, ion implantation, single-wafer epitaxy, and in recent years atomic layer deposition. Semiconductor equipment companies ASML, ASM Pacific Technology (ASMPT) and Besi are former divisions of ASM. ASM headquarters is located in Almere, the Netherlands. The company has R&D sites in Almere (the Netherlands), Helsinki (Finland), Leuven (Belgium, near IMEC), Phoenix (Arizona), Tama (Japan), and Dongtan (South Korea). Manufacturing primarily occurs in Singapore and Dongtan (South Korea). ASM also has sales & service offices across the globe, including the United States, South Korea, China, Taiwan, Japan, Singapore and Israel. As of 2021, it has 3,312 staff, located in 14 countries. The shares of the company are listed on the Euronext Amsterdam. In March 2020, ASM was promoted to the AEX index. ASM has a minority stake in ASM Pacific Technology, a Hong Kong–based company active in semiconductor assembly, packaging and surface-mount technology. Technology To create a semiconductor chip, many individual steps are performed using various types of wafer processing equipment, including photolithographic patterning, depositing thin layers, etching to remove material, thermal treatments, and other steps. ASM's systems are designed for deposition processes, in which thin films, or layers, of various materials are grown or deposited onto the wafer. Many different thin-film layers are deposited to complete the full sequence of process steps necessary to manufacture a chip. ASM's technology development is driven by its customers' goal to build faster, cheaper, and more powerful semiconductor chips with reduced energy consumption. This goal drives the need to shrink the dimensions of components on the chip, aiming to double the number of components per unit area on a chip every two years (Moore's law). As part of this scaling of dimensions, ASM supplies its customers – chip manufacturers – with machines that deposit ever thinner films of semiconductor materials. ASM also develops deposition processes for new materials to be used in semiconductor fabrication. During the past 15 years, an increasing array of new materials has been introduced in the fabrication of chips. These new materials were required to achieve the necessary performance improvements of chips, as outlined by Moore's law. For instance, in 2007 in a MOSFET transistor, the silicon oxide gate dielectric was replaced with a high-κ dielectric, a material that has a higher dielectric constant than silicon oxide. In this particular case, ASM pioneered the chemical process and the new deposition method called atomic layer deposition during nearly a decade of R&D.
In addition, increasingly precise deposition methods are required as components on a chip, such as transistors, have moved from planar to 3D structures like FinFETs in the past decade. ASM has a leading position in single wafer atomic layer deposition (ALD). Research ASM offers a number of methods and accompanying machines to deposit these thin films of materials. The company tries to expand the applicability of its deposition technologies and machines as much as possible. R&D is critical in that effort. In 2021, the company spent 151 million euro on R&D (or 9% of its annual revenues). R&D activities stretch from basic research on new materials to the application of new materials in chip manufacturing. Products ASM designs and sells both single-wafer deposition tools, in which the process is performed one wafer at a time, and so-called batch tools, in which the deposition is performed on multiple wafers at a time. The prices of the company's systems vary, but are typically several million euros per system. The products of ASM can be categorized by deposition method: Atomic Layer Deposition is a layer-by-layer process that results in the deposition of thin films one atomic layer at a time in a highly controlled manner. Layers are formed during reaction cycles by alternately pulsing precursors and reactants and purging with inert gas in between each pulse. ASM offers single wafer ALD tools in two technology segments: thermal ALD and plasma enhanced ALD (PEALD). ASM's ALD tools include Synergis, Pulsar and EmerALD. PEALD tools include Eagle XP8 and the XP8 QCM. Epitaxy is a process that is used for depositing precisely controlled crystalline silicon-based layers that are important for semiconductor device electrical properties. The silicon epitaxy process can be used to modify the electrical characteristics of the wafer surface to create high-performance transistors during the manufacturing of semiconductor chips. ASM's epitaxy tools are single wafer tools and include Intrepid and Epsilon. Chemical Vapor Deposition is a chemical deposition process in which the wafer is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired film. Within Chemical Vapor Deposition (CVD) ASM offers two types of tools: single-wafer plasma enhanced CVD (PECVD) and batch low pressure CVD (LPCVD). ASM provides single-wafer PECVD processes on the Dragon XP8 tool. ASM provides batch LPCVD/diffusion processes on the vertical furnace A400 DUO and novel Sonora tools. History 1960s: In 1964, Arthur del Prado founds ASM as 'Advanced Semiconductor Materials' in Bilthoven, the Netherlands. Initially the company operates as a sales agent in semiconductor fabrication technology in Europe. In 1968, the company was formally listed as a private limited company. 1970s: ASM starts to design, manufacture and sell chemical vapor deposition equipment. In 1974 it acquires Fico Toolings, a Dutch manufacturer of semiconductor molds. A Hong Kong sales office ASM Asia, now known and traded as ASM Pacific Technology, is established in 1975. ASM America is founded in Phoenix, Arizona, in 1976. Sales of ASM's horizontal plasma-enhanced chemical vapor deposition furnaces drive the company's growth. 1980s: Following an initial public offering on the Nasdaq in May 1981, the company expands. In 1982 ASM Japan is established. ASM invests in new semiconductor fabrication technologies, like lithography, ion implantation, epitaxy, and wire bonding.
In 1988, the company divests ASML Holding N.V., ASM Ion Implant, and it lists its Hong Kong–based activities as ASM Pacific Technology on the Hong Kong stock exchange in 1989. 1990s: The company reorganizes thoroughly between 1991 and 1994. In 1993, ASM divests ASM Fico to Berliner Electro Holding, now known as Besi. ASM focusses on vertical low-pressure chemical vapor deposition furnaces by ASM Europe, single wafer plasma-enhanced chemical vapor deposition by ASM Japan and single wafer epitaxy by ASM America. From 1996 onwards, the company is also listed on the Euronext, Amsterdam.ASM retains a majority stake in ASM Pacific Technology. 2000s: ASM expands again with investments in 300-mm wafer technology and atomic layer deposition. In 2007, the company successfully brings atomic layer deposition from R&D to high-volume production via the high-κ metal gate application. At the same time, hedge funds question the company's stake in ASM Pacific Technology. In 2008 Arthur del Prado is succeeded as CEO by his son, Chuck del Prado. In 2009 headquarters move from Bilthoven to Almere, the Netherlands. 2010s: The company returns to structural profitability after execution of a worldwide restructuring program, that includes the implementation of a product driven organization, a single global sales organization, consolidation of manufacturing in Singapore, and the establishment of a global human resources, finance, IT, operational excellence and environment, health and safety organization. The application of (plasma enhanced) atomic layer deposition in multiple patterning and high-κ metal gate drives ASM's growth. Other products include epitaxy, PECVD and vertical furnace. Its stake in ASM Pacific Technology is reduced to 25%. 2020s: In 2020, on the Euronext, the company is included on the AEX index. which includes the top-25 of companies listed on the Euronext Amsterdam stock exchange. The same year, after 12 years as CEO, Chuck del Prado decided to step down, and was succeeded by Benjamin Loh. Between 2020 and 2022, ASM renewed its vertical furnace product line with A400DUO (200mm wafers) and Sonora (300mm wafers). Finances Revenues ASM sells its equipment to semiconductor manufacturers worldwide, with the majority of its revenues from Asian customers. In 2021, 1.41 billion euro of the total 1.73 billion euro in revenues was generated through equipment sales, the rest came from spares and service. Market capitalization Shares of ASM are traded on the Euronext stock exchange since 1996. Since March 2020, ASM is included on the AEX index. The market capitalization of ASM Pacific Technology is no longer consolidated after ASM's interest in ASM Pacific Technology decreased to 25 percent in 2013. Between 1981 and 2015 ASM was also listed on the Nasdaq. In 2018 share price averaged at € 48.62 resulting in an average market capitalization of 2.53 billion euro. In 2019 average closing price was € 68.98, resulting in an average market capitalization of 3.38 billion euro. Market capitalization at year-end 2021 was 18.88 billion euro, based on the closing share price of €388.70 on Euronext Amsterdam on December 31, 2021. References External links Companies based in Flevoland Multinational companies headquartered in the Netherlands Equipment semiconductor companies Electronics companies established in 1964 Organisations based in Almere 1964 establishments in the Netherlands Companies listed on Euronext Amsterdam Companies formerly listed on the Nasdaq Companies in the AEX index
ASM International
[ "Engineering" ]
2,205
[ "Equipment semiconductor companies", "Semiconductor fabrication equipment" ]
17,247,558
https://en.wikipedia.org/wiki/D%2A
D* (pronounced "D star") is any one of the following three related incremental search algorithms: The original D*, by Anthony Stentz, is an informed incremental search algorithm. Focused D* is an informed incremental heuristic search algorithm by Anthony Stentz that combines ideas of A* and the original D*. Focused D* resulted from a further development of the original D*. D* Lite is an incremental heuristic search algorithm by Sven Koenig and Maxim Likhachev that builds on LPA*, an incremental heuristic search algorithm that combines ideas of A* and Dynamic SWSF-FP. All three search algorithms solve the same assumption-based path planning problems, including planning with the freespace assumption, where a robot has to navigate to given goal coordinates in unknown terrain. It makes assumptions about the unknown part of the terrain (for example: that it contains no obstacles) and finds a shortest path from its current coordinates to the goal coordinates under these assumptions. The robot then follows the path. When it observes new map information (such as previously unknown obstacles), it adds the information to its map and, if necessary, replans a new shortest path from its current coordinates to the given goal coordinates. It repeats the process until it reaches the goal coordinates or determines that the goal coordinates cannot be reached. When traversing unknown terrain, new obstacles may be discovered frequently, so this replanning needs to be fast. Incremental (heuristic) search algorithms speed up searches for sequences of similar search problems by using experience with the previous problems to speed up the search for the current one. Assuming the goal coordinates do not change, all three search algorithms are more efficient than repeated A* searches. D* and its variants have been widely used for mobile robot and autonomous vehicle navigation. Current systems are typically based on D* Lite rather than the original D* or Focused D*. In fact, even Stentz's lab uses D* Lite rather than D* in some implementations. Such navigation systems include a prototype system tested on the Mars rovers Opportunity and Spirit and the navigation system of the winning entry in the DARPA Urban Challenge, both developed at Carnegie Mellon University. The original D* was introduced by Anthony Stentz in 1994. The name D* comes from the term "Dynamic A*", because the algorithm behaves like A* except that the arc costs can change as the algorithm runs. Operation The basic operation of D* is outlined below. Like Dijkstra's algorithm and A*, D* maintains a list of nodes to be evaluated, known as the "OPEN list". Nodes are marked as having one of several states: NEW, meaning it has never been placed on the OPEN list OPEN, meaning it is currently on the OPEN list CLOSED, meaning it is no longer on the OPEN list RAISE, indicating its cost is higher than the last time it was on the OPEN list LOWER, indicating its cost is lower than the last time it was on the OPEN list Expansion The algorithm works by iteratively selecting a node from the OPEN list and evaluating it. It then propagates the node's changes to all of the neighboring nodes and places them on the OPEN list. This propagation process is termed "expansion". In contrast to canonical A*, which follows the path from start to finish, D* begins by searching backwards from the goal node. This means that the algorithm is actually computing the A* optimal path for every possible start node. 
Each expanded node has a back pointer which refers to the next node leading to the target, and each node knows the exact cost to the target. When the start node is the next node to be expanded, the algorithm is done, and the path to the goal can be found by simply following the back pointers. Obstacle handling When an obstruction is detected along the intended path, all the points that are affected are again placed on the OPEN list, this time marked RAISE. Before a RAISED node increases in cost, however, the algorithm checks its neighbors and examines whether it can reduce the node's cost. If not, the RAISE state is propagated to all of the nodes' descendants, that is, nodes which have back pointers to it. These nodes are then evaluated, and the RAISE state is passed on, forming a wave. When a RAISED node can be reduced, its back pointer is updated, and passes the LOWER state to its neighbors. These waves of RAISE and LOWER states are the heart of D*. By this point, a whole series of other points are prevented from being "touched" by the waves. The algorithm has therefore only worked on the points which are affected by the change of cost. Another deadlock occurs. This time, the deadlock cannot be bypassed so elegantly. None of the points can find a new route via a neighbor to the destination. Therefore, they continue to propagate their cost increase. Only points outside of the channel can be found that can lead to the destination via a viable route. Two LOWER waves then develop, which expand the points marked as unattainable with new route information.
Pseudocode
    while (!openList.isEmpty()) {
        point = openList.getFirst();
        expand(point);
    }
Expand
    void expand(currentPoint) {
        boolean isRaise = isRaise(currentPoint);
        double cost;
        for each (neighbor in currentPoint.getNeighbors()) {
            if (isRaise) {
                if (neighbor.nextPoint == currentPoint) {
                    neighbor.setNextPointAndUpdateCost(currentPoint);
                    openList.add(neighbor);
                } else {
                    cost = neighbor.calculateCostVia(currentPoint);
                    if (cost < neighbor.getCost()) {
                        currentPoint.setMinimumCostToCurrentCost();
                        openList.add(currentPoint);
                    }
                }
            } else {
                cost = neighbor.calculateCostVia(currentPoint);
                if (cost < neighbor.getCost()) {
                    neighbor.setNextPointAndUpdateCost(currentPoint);
                    openList.add(neighbor);
                }
            }
        }
    }
Check for raise
    boolean isRaise(point) {
        double cost;
        if (point.getCurrentCost() > point.getMinimumCost()) {
            for each (neighbor in point.getNeighbors()) {
                cost = point.calculateCostVia(neighbor);
                if (cost < point.getCurrentCost()) {
                    point.setNextPointAndUpdateCost(neighbor);
                }
            }
        }
        return point.getCurrentCost() > point.getMinimumCost();
    }
Variants Focused D* As its name suggests, Focused D* is an extension of D* which uses a heuristic to focus the propagation of RAISE and LOWER toward the robot. In this way, only the states that matter are updated, in the same way that A* only computes costs for some of the nodes. D* Lite D* Lite is not based on the original D* or Focused D*, but implements the same behavior. It is simpler to understand and can be implemented in fewer lines of code, hence the name "D* Lite". Performance-wise, it is as good as or better than Focused D*. D* Lite is based on Lifelong Planning A*, which was introduced by Koenig and Likhachev a few years earlier. Minimum cost versus current cost For D*, it is important to distinguish between current and minimum costs. The former is only important at the time of collection from the OPEN list and the latter is critical because it sorts the OpenList.
The function which returns the minimum cost is always the lowest cost to the current point since it is the first entry of the OpenList. References External links Sven Koenig's web page Anthony Stentz's web page Robot control Search algorithms Graph algorithms
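The pseudocode above leaves the data structures implicit. The following Python sketch shows only the initial backward cost propagation from the goal together with back pointers (the Dijkstra-like phase of D*), not the RAISE/LOWER replanning machinery; the grid encoding, the unit step cost and all names are invented for this example.

    import heapq

    def backward_search(grid, goal):
        """Propagate exact costs-to-goal over a 4-connected grid (0 = free, 1 = obstacle).

        Mirrors only the initial phase of D*: nodes are expanded from the goal outwards,
        and each node stores a back pointer to the next node on its cheapest known route."""
        rows, cols = len(grid), len(grid[0])
        cost = {goal: 0.0}          # exact cost to the goal
        back = {goal: None}         # back pointer towards the goal
        open_list = [(0.0, goal)]   # priority queue ordered by cost
        while open_list:
            c, cur = heapq.heappop(open_list)
            if c > cost.get(cur, float("inf")):
                continue            # stale queue entry
            r, q = cur
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (r + dr, q + dc)
                if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                    new_cost = c + 1.0
                    if new_cost < cost.get(nb, float("inf")):
                        cost[nb] = new_cost
                        back[nb] = cur            # neighbour now routes through cur
                        heapq.heappush(open_list, (new_cost, nb))
        return cost, back

    def extract_path(back, start):
        """Follow back pointers from the start until the goal (back pointer None)."""
        path, node = [], start
        while node is not None:
            path.append(node)
            node = back.get(node)
        return path

    # Usage: a 4x5 grid with a small wall between the start and the goal.
    grid = [[0, 0, 0, 0, 0],
            [0, 1, 1, 1, 0],
            [0, 0, 0, 1, 0],
            [0, 0, 0, 0, 0]]
    cost, back = backward_search(grid, goal=(0, 4))
    print(extract_path(back, start=(3, 0)))

On this toy grid the printed path walks from (3, 0) around the wall to the goal at (0, 4); in a full D* implementation the cost and back-pointer tables would be repaired incrementally when new obstacles are observed rather than recomputed from scratch.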
D*
[ "Engineering" ]
1,674
[ "Robotics engineering", "Robot control" ]
15,515,002
https://en.wikipedia.org/wiki/Royal%20Corps%20of%20Naval%20Constructors
The Royal Corps of Naval Constructors (RCNC) is an institution of the British Royal Navy and Admiralty for training in naval architecture, marine, electrical and weapon engineering. It was established by Order in Council in August 1883, on the recommendation of the naval architect Sir William White. Its precursor was the Royal School of Naval Architecture, London. According to the Royal Navy's Books of Reference 3 Chapter 46, it is a "civilian corps and an integrated part of the Defence Engineering & Science Group". Members in certain posts who do not hold commissions are eligible to wear a uniform similar to that of the Royal Navy and are accorded the same respect as commissioned officers. History From Tudor times, the ships of the Royal Navy were built in the Royal Dockyards under the supervision of the Master Shipwright and to the design of the Surveyor of the Navy who was always an ex-Master Shipwright. In 1805, seeing the growing application of science in industry, Lord Barham’s Commission recommended, that a School of Naval Architecture should be formed to produce men suitably trained both to design the ships of the fleet and to manage the work of the Royal Dockyards. This school was created in 1811 at Portsmouth and after an erratic series of changes it settled down at Greenwich in 1873. The graduates of these schools were Naval Architects who quickly established high professional standards in the field. Their influence, combined with the effects of the Industrial Revolution led to the formation of the Institution (now the Royal Institution) of Naval Architects in 1860. Although the number of professionally qualified Naval Architects employed in the design, building and repair of warships had risen to 27 by 1875, ships were still being designed and built against the Chief Constructor’s advice and there were inevitable disasters. The main obstacle to progress was the poor career prospects of the professionally qualified Naval Architect with the linked difficulty of getting sufficient recruits. To solve these linked problems William White, then Professional Assistant to the Director of Naval Construction, proposed a co-ordinated training programme and career structure and these ideas were approved in 1882 by a committee under Lord Brassey. The first head of the Royal Corps of Naval Constructors was Sir Nathaniel Barnaby. Due to illness his resignation in 1885 led to the appointment of Sir William White as his successor. The professional Naval Architects of the Royal Corps had grown in number to 91 by 1901 and were heavily involved in the build up to the First World War. The rapid and successful design and building of was probably their best known achievement of the time, although the foundations were being laid for future advances in weapons and machinery and also in the field of submarine design. The Royal Corps had a flirtation with airship design between 1915 and 1922 but this was overshadowed by the conversion of ships to operate aircraft and the design and construction of the first purpose built ship to carry aircraft, . The success of these ships, together with that of submarines and escorts designed by the Royal Corps, played a large part in establishing British naval supremacy. The Second World War saw a similar expansion of the shipbuilding effort and the evacuation to Bath of the Director of Naval Construction. Many members of the Royal Corps served in uniform in the ranks up to the level of Constructor Rear-Admiral. 
In the post-war period the major features have been the very considerable achievement in designing and maintaining a fleet of nuclear-powered submarines and the changing nature of the Royal Corps itself. Recognising the increasing impact of a vessel’s equipment on its hull and structure, the Royal Corps combined with the professional Electrical and Mechanical Engineers of the Royal Naval Engineering Service (RNES) in 1977. Further amalgamation with specialist weapons designers was also enacted. In the last decade this more diverse corps has been instrumental in the design and manufacture of the very latest warships such as the Type 45 destroyer, s and s; all of which contain highly complex engineering systems. The Royal Corps currently numbers nearly 100 naval architects, marine, electrical and weapon engineers and, in keeping with its original aims, continues to provide professional engineers for the design, building and maintenance of vessels of the Royal Navy. Six naval constructors gave their lives in the course of duty; Arthur K Stephens, Assistant Constructor 2c, who was lost 31 May 1916 aboard HMS Queen Mary which was sunk at the Battle of Jutland (listed as ‘Admiralty Civilian’). F. Bailey and A.A.F. Hill were lost in the disaster of June 1939. H.H.Palmer was lost at sea on the SS Aguila whilst on route to Gibraltar for Dockyard duties in August 1941, Also during World War Two F. Bryant was killed in the bombing of Bath in 1942 and R. King was killed in Mombasa. The RCNC and the Naval Service Some members of the RCNC are entitled to wear a modified version of the standard RN uniform, the difference being the presence of grey bands between gold stripes worn on the arms and on shoulder boards. Constructors may wear uniform in certain posts in UK establishments (predominantly naval bases) and in several overseas posts. RCNC Uniform Ranks References External links A set of annual albums produced by the Corps 1883 establishments in the United Kingdom Marine engineering organizations History of the Royal Navy Admiralty during World War II Royal Navy
Royal Corps of Naval Constructors
[ "Engineering" ]
1,052
[ "Marine engineering organizations", "Marine engineering" ]
15,515,379
https://en.wikipedia.org/wiki/Taylor%E2%80%93Green%20vortex
In fluid dynamics, the Taylor–Green vortex is an unsteady flow of a decaying vortex, which has an exact closed form solution of the incompressible Navier–Stokes equations in Cartesian coordinates. It is named after the British physicist and mathematician Geoffrey Ingram Taylor and his collaborator A. E. Green. Original work In the original work of Taylor and Green, a particular flow is analyzed in three spatial dimensions, with the three velocity components at time specified by The continuity equation determines that . The small time behavior of the flow is then found through simplification of the incompressible Navier–Stokes equations using the initial flow to give a step-by-step solution as time progresses. An exact solution in two spatial dimensions is known, and is presented below. Incompressible Navier–Stokes equations The incompressible Navier–Stokes equations in the absence of body force, and in two spatial dimensions, are given by The first of the above equation represents the continuity equation and the other two represent the momentum equations. Taylor–Green vortex solution In the domain , the solution is given by where , being the kinematic viscosity of the fluid. Following the analysis of Taylor and Green for the two-dimensional situation, and for , gives agreement with this exact solution, if the exponential is expanded as a Taylor series, i.e. . The pressure field can be obtained by substituting the velocity solution in the momentum equations and is given by The stream function of the Taylor–Green vortex solution, i.e. which satisfies for flow velocity , is Similarly, the vorticity, which satisfies , is given by The Taylor–Green vortex solution may be used for testing and validation of temporal accuracy of Navier–Stokes algorithms. A generalization of the Taylor–Green vortex solution in three dimensions is described in. References Fluid dynamics Vortices Computational fluid dynamics
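The displayed formulas above did not survive extraction, but the two-dimensional solution can still be illustrated numerically. The sketch below uses one common convention for the decaying solution, u = sin x cos y e^(−2νt), v = −cos x sin y e^(−2νt) on the periodic square [0, 2π]² (sign conventions differ between references), and verifies that the field is divergence-free; the viscosity, time and grid size are arbitrary choices for the example.

    import numpy as np

    # One common form of the 2-D Taylor-Green solution; nu, t and n are arbitrary here.
    nu, t, n = 0.1, 0.5, 128
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    F = np.exp(-2.0 * nu * t)                 # viscous decay factor
    u = np.sin(X) * np.cos(Y) * F             # x-velocity
    v = -np.cos(X) * np.sin(Y) * F            # y-velocity

    # Check incompressibility, du/dx + dv/dy = 0, with spectral derivatives.
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers for a 2*pi-periodic box
    dudx = np.real(np.fft.ifft(1j * k[:, None] * np.fft.fft(u, axis=0), axis=0))
    dvdy = np.real(np.fft.ifft(1j * k[None, :] * np.fft.fft(v, axis=1), axis=1))
    print("max |div u| =", np.max(np.abs(dudx + dvdy)))   # ~1e-15, i.e. divergence-free

Fields generated this way can be fed to a Navier–Stokes solver and compared against the analytic exponential decay, which is the validation role mentioned in the article.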
Taylor–Green vortex
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
389
[ "Vortices", "Computational fluid dynamics", "Dynamical systems", "Chemical engineering", "Computational physics", "Piping", "Fluid dynamics" ]
15,515,444
https://en.wikipedia.org/wiki/Bituminous%20waterproofing
Bituminous waterproofing systems are designed to protect residential and commercial buildings. Bitumen (asphalt or coal-tar pitch) is a material made up of organic liquids that are highly sticky, viscous, and waterproof. Systems incorporating bituminous-based substrates are sometimes used to construct roofs, in the form of "roofing felt" or "roll roofing" products. Roofing felt Roofing felt (similar to and often confused with tar paper, but historically made from recycled rags rather than heavy kraft paper) has been used for decades as waterproof coverings in residential and commercial roofs as an underlay(ment) (sarking) beneath other building materials, particularly roofing and siding materials, and is one type of membrane used in asphalt built up roofing (BUR) systems. Over time the felt's natural mesh used as a substrate for asphalt impregnation (derived from fabrics like cotton or burlap) has evolved into synthetic products performing the same function with improved durability. Other changes with time have enhanced performance, with roofing felt remaining a heavier and more durable product than tar paper. Function The rapid application of waterproof or water-resistant roofing underlay protects the roof deck during construction until the roofing material is applied and is required for roofs required to meet Underwriters Laboratory (UL) fire ratings. The separation of the roof covering from the roof deck protects the roof covering from resins in some sheathing materials and cushions unevenness and old nails and splinters in re-roofing applications. The underlayment also sheds any water which penetrates the roof covering from an ordinary leak, a leak from wind-driven rain or snow, wind damage to the roof covering, or ice dams. However, the application of underlays may increase the roof temperature, which is the leading cause of ageing of asphalt shingles. Not installing an underlay may void the roof covering warranty. Weights and grades Felt paper is available in several grades, the most common being Type 1—commonly called 15-pound (15#) or No. 15 (#15)—and Type 2—commonly called 30-pound (30#) or No. 30 (#30). The weight designations originated with organic base felt weighing 15 or 30 pounds per 100 sq. ft. ( or per ). However, modern base felts are made of lighter-weight fibre, so the weight designations, though common colloquially, are no longer literally accurate. A heavier class of materials with a similar construction but designed for civil engineering, environmental protection, and mining applications are known as Bituminous Geomembranes (BGMs). BGMs are distinguished in part by larger roll widths which can exceed 5m and substantial thickness of up to 6.0mm. Another basic designation is organic or inorganic. Organic felt paper has a base material made with formerly living materials such as rag fibre, hessian (burlap), or cellulose fibres (wood, or jute). Organic felt papers are now considered obsolete, having dwindled to just five percent of the market by 1987. Inorganic base products are polyester, glass fibre developed in the 1950s, and historically, asbestos mat. Polyester mat is weaker and less chemically stable than fibreglass but because it is cheap is gaining market share. Polyester mat is primarily used with the more flexible modified-bitumen felt products. Asbestos mat was the first inorganic base material but was outlawed in the 1980s for health reasons but is still in use on some buildings. 
Inorganic felts are lighter, more tear-resistant, more fire-resistant, and do not absorb water. Another type of felt paper is perforated for use in built-up roofing and is not for use as a water-resistant underlay. Heavier material is typically used for underlayment of longer-lived roof materials in order to match their longer life span, and on less sloped roofs, which are more susceptible to leaking. For example, two layers of No. 30 felt might be used under a slate or tile roof, whereas a single layer of No. 15 might be adequate for a steeply raked roof of 24-year asphalt shingles. s0 Manufacturing process Roofing felt is manufactured in roll format. Rolls of base felt are pulled on rollers through large tanks of bitumen mixes until they are saturated with the tar-like bitumen mixture, producing rolls of water-resistant but breathable material. Modified bitumen is mixed with filler components such as limestone, sand, or polymers such as atactic polypropylene (APP) that gives rigidity and tear resistance or styrene-butadiene styrene (SBS), a rubber additive that gives more elastic benefits. Felt paper standards The American Society for Testing Materials (ASTM) standards that apply to felt paper are: ASTM D226 / D226M Standard — 09: Specification for Asphalt-Saturated Organic Felt Used in Roofing and Waterproofing. Type I - #15 or 15 lb. perforated or non-perforated Type II - #30 or 30 lb. perforated or non-perforated ASTM D4869 / D4869M Standard — Specification for Asphalt-Saturated Organic Felt Underlayment Used in Steep Slope Roofing. ASTM 4869-03 now includes the non-perforated felt referred to in ASTM D226-97a which will be phased out. ASTM 4869-03 includes a liquid-water transmission test (shower test) and dimensional stability limits (wrinkling) which ASTM D226-97a does not include. Type 1 - #8. Formerly ASTM D4869-93 Type I Type 2 - #13. Formerly ASTM D226-97a Type I (No. 15) Type 3 - #20. Formerly ASTM D4869-93 Type II Type 4 - #26. Formerly ASTM D226-97a Type II (No. 30) ASTM D2178 / D2178M-15a Standard — Specification for Asphalt Glass Felt Used in Roofing and Waterproofing. Type IV has a 44-pound breaking strength Type VI has a 66-pound breaking strength ASTM D6757 / D6757M-16a Standard — Specification for Underlayment Felt Containing Inorganic Fibres Used in Steep-Slope Roofing. ASTM D6222 / D6222M-16 Standard — Specification for Atactic Polypropylene (APP) Modified Bituminous Sheet Materials Using Polyester Reinforcements. Type 1 Type 2 Grade G, surface coated granules Grade S, smooth surface (uncoated) The Canadian Standards Association standards are: CSA A123.3 Asphalt Saturated Organic Roofing Felt Roll roofing components Roll roofing is a bitumen product similar to asphalt shingles meant for direct exposure to the weather. To protect its asphaltic base from ultraviolet degradation mineral granules are added on top of the felt, also decreasing the product's fire vulnerability. Thin, removable transparent film is added to the base of rolled roofing during manufacturing on all torch-on products. This stops the felt from sticking to the mineral layer when rolled up during the packaging process. A similar removable membrane on self-adhesive rolled roofing separates the adhesive from the mineral layer. Torch-on roofing felt also receives a removable membrane to keep it from sticking to itself prior to application. 
Irritants The complex chemical composition of bitumen makes it difficult to identify the specific component(s) responsible for adverse health effects observed in exposed workers. Known carcinogens have been found in bitumen fumes generated at work sites. Observations of acute irritation in workers from airborne and dermal exposures to fumes and aerosols and the potential for chronic health effects, including cancer, warrant continued diligence in the control of exposures. Reasons to use a roofing underlayment It protects the roof deck from rain before the roofing is installed. It provides an extra weather barrier in case of blow offs or water penetration through the roofing or flashings. It protects the roofing from any resins that bleed out of the sheathing. It helps prevent unevenness in the roof sheathing from telegraphing through the shingles. It is usually required for the UL fire rating to apply (since shingles are usually tested with underlayment). Negative aspects Bitumen is mostly produced from crude oil and is not regarded as a sustainable building product Bitumen is combustible Exposure to extreme heat and UV radiation drastically decreases the lifespan The fumes that are produced during hot application of asphalt or tar can cause dermal and respiratory problems Some felt paper installed on existing buildings may contain asbestos, which has a carcinogenic risk if its dust is inhaled. Malthoid From 1905 to 1988, The Paraffine Paint Co. of San Francisco had Malthoid as a trademark for waterproof and weatherproof building and roofing materials made of paper and felt in whole or in part. However, it had become well known before that. About 1913, Paraffine promoted its Malthoid roofing materials with a 16-page booklet. In 1941, the Duroid Company began making Malthoid in Onehunga, New Zealand. Malthoid was once common enough to be used as a generic description of flat roofing material in New Zealand and South Africa (item 26). A description of a New Zealand house built about 1914 says it was, "built of timber framework. covered by sheets of asbestos. The roof was closely timbered, then covered by strips of Malthoid paper. This was then painted with tar and topped off with a sprinkling of sand." Railway vehicles in Australia were roofed with Malthoid. Malthoid is still available for flat roofs and damp courses. See also Butyl rubber References External links Bituminous Membranes Article Roofs Building materials Roofing materials
Bituminous waterproofing
[ "Physics", "Technology", "Engineering" ]
2,055
[ "Structural engineering", "Building engineering", "Architecture", "Structural system", "Construction", "Materials", "Roofs", "Matter", "Building materials" ]
19,437,874
https://en.wikipedia.org/wiki/Vaporized%20hydrogen%20peroxide
Vaporized hydrogen peroxide (trademarked VHP, also known as hydrogen peroxide vapor, HPV) is a vapor form of hydrogen peroxide (H2O2) with applications as a low-temperature antimicrobial vapor used to decontaminate enclosed and sealed areas such as laboratory workstations, isolation and pass-through rooms, and even aircraft interiors. Use as sterilant Regulatory status VHP is registered by the U.S. Environmental Protection Agency as a sterilant, which the EPA defines as "a substance that destroys or eliminates all forms of microbial life in the inanimate environment, including all forms of vegetative bacteria, bacterial spores, fungi, fungal spores, and viruses". As a sterilant, VHP is one of the chemicals approved for decontamination of anthrax spores from contaminated buildings, such as occurred during the 2001 anthrax attacks in the U.S. It has also been shown to be effective in removing exotic animal viruses, such as avian influenza and Newcastle disease, from equipment and surfaces. Application VHP is produced from a solution of liquid H2O2 and water, by generators specifically designed for the purpose. These generators initially dehumidify the ambient air, then produce VHP by passing aqueous hydrogen peroxide over a vaporizer, and circulate the vapor at a programmed concentration in the air, typically from 140 ppm to 1400 ppm, depending on the infectious agent to be cleared. By comparison, a concentration of 75 ppm is considered to be "Immediately Dangerous to Life or Health" in humans. After the VHP has circulated in the enclosed space for a pre-defined period, it is circulated back through the generator, where it is broken down into water and oxygen by a catalytic converter until concentrations of VHP fall to safe levels (typically <1 ppm). Alternatively, the VHP is vented to the outside air, in cases where recapturing of the VHP is not needed. Use in hospitals Vaporized hydrogen peroxide has been investigated as an airborne disinfectant and infection control measure for hospitals and has been shown to reduce incidence of nosocomial infections from several pathogens. Clostridioides difficile associated disease, VRE and MRSA are all associated with environmental contamination. H2O2 vapor has been used in hospitals to eradicate causal agents, e.g., antibiotic-resistant Klebsiella pneumoniae, from the environment and prevent infection of subsequent patients. Monitoring technologies OSHA mandates a PEL of 1.0 ppm (1.4 mg/m³) for HPV. Typically, safe working environments around sterilization equipment are achieved with electrochemical sensors capable of measuring in the parts per billion and low parts per million levels. These sensors are typically inexpensive and limited to ambient conditions. Moreover, HPV electrochemical sensors are often located near the sterilization equipment to detect possible leaks during the sterilization cycle. In 2014, Advanced Sterilization Products (ASP), the manufacturer of the Sterrad hydrogen peroxide gas plasma sterilizer, issued a letter to hospital risk managers warning them that hydrogen peroxide residues may be found in the sterilization load. HPV present in the sterilization load could lead to the accidental exposure of hospital staff. Monitoring hydrogen peroxide levels inside the sterilization chamber during the sterilization cycle can be challenging.
Technical issues such as condensation, vacuum, and high concentration have prevented many sensing technologies such as electrochemical sensors from providing real-time monitoring of H2O2 concentration. Under these conditions, optical methods such as spectroscopy can be used to ensure that lethal concentrations of H2O2 are achieved in the sterilization chamber. Dangers of manipulation Hydrogen peroxide vapors and fumes can irritate and damage the skin, the respiratory tract, and the eyes. Extreme precautions must be taken when manipulating hydrogen peroxide, and it must not be considered harmless. References Hydrogen peroxide Antiseptics Disinfectants Sterilization (microbiology)
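The OSHA limit quoted above is given in both ppm and mg/m³; the two are related through the molar volume of an ideal gas. The snippet below is a generic conversion rather than part of any cited standard; the 25 °C molar volume and the molecular weight of H2O2 are the stated assumptions.

    # Convert a gas-phase concentration in ppm (by volume) to mg/m^3.
    # Assumes ideal-gas behaviour at 25 degC and 1 atm (molar volume ~24.45 L/mol).
    MOLAR_VOLUME_L = 24.45          # L/mol at 25 degC, 1 atm (assumption)
    MW_H2O2 = 34.01                 # g/mol for hydrogen peroxide

    def ppm_to_mg_per_m3(ppm, molecular_weight=MW_H2O2, molar_volume=MOLAR_VOLUME_L):
        return ppm * molecular_weight / molar_volume

    print(ppm_to_mg_per_m3(1.0))    # ~1.39, consistent with the quoted 1.4 mg/m^3 PEL
    print(ppm_to_mg_per_m3(75.0))   # the IDLH level quoted above, roughly 104 mg/m^3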
Vaporized hydrogen peroxide
[ "Chemistry", "Biology" ]
849
[ "Microbiology techniques", "Sterilization (microbiology)" ]
19,444,228
https://en.wikipedia.org/wiki/Label-free%20quantification
Label-free quantification is a method in mass spectrometry that aims to determine the relative amount of proteins in two or more biological samples. Unlike other methods for protein quantification, label-free quantification does not use a stable isotope containing compound to chemically bind to and thus label the protein. Implementation Label-free quantification may be based on precursor signal intensity or on spectral counting. The first method is useful when applied to high precision mass spectra, such as those obtained using the new generation of time-of-flight (ToF), fourier transform ion cyclotron resonance (FTICR), or Orbitrap mass analyzers. The high-resolution power facilitates the extraction of peptide signals on the MS1 level and thus uncouples the quantification from the identification process. In contrast, spectral counting simply counts the number of spectra identified for a given peptide in different biological samples and then integrates the results for all measured peptides of the protein(s) that are quantified. The computational framework of label free approach includes detecting peptides, matching the corresponding peptides across multiple LC-MS data, selecting discriminatory peptides. Intact protein expression spectrometry (IPEx) is a label-free quantification approach in mass spectrometry under development by the analytical chemistry group at the United States Food and Drug Administration Center for Food Safety and Applied Nutrition and elsewhere. Intact proteins are analyzed by an LCMS instrument, usually a quadrupole time-of-flight in profile mode, and the full protein profile is determined and quantified using data reduction software. Early results are very encouraging. In one study, two groups of treatment replicates from mammalian samples (different organisms with similar treatment histories, but not technical replicates) show dozens of low CV protein biomarkers, suggesting that IPEx is a viable technology for studying protein expression. Detecting peptides Typically, peptide signals are detected at the MS1 level and distinguished from chemical noise through their characteristic isotopic pattern. These patterns are then tracked across the retention time dimension and used to reconstruct a chromatographic elution profile of the mono-isotopic peptide mass. The total ion current of the peptide signal is then integrated and used as a quantitative measurement of the original peptide concentration. For each detected peptide, all isotopic peaks are first found and the charge state is then assigned. Label-free quantification may be based on precursor signal intensity and has problems due to isolation interference: in high-throughput studies, the identity of the peptide precursor ion being measured could easily be a completely different peptide with a similar m/z ratio and which elutes in a time frame overlapping with that of the former peptide. Spectral counting has problems due to the fact that the peptides are identified, thus making it necessary to run an additional MS/MS scan which takes time and therefore reduces the resolution of the experiment. Matching corresponding peptides In contrast to differential labelling, every biological specimen needs to be measured separately in a label-free experiment. The extracted peptide signals are then mapped across few or multiple LC-MS measurements using their coordinates on the mass-to-charge and retention-time dimensions. 
Data from high mass precision instruments greatly facilitate this process and increase the certainty of matching correct peptide signals across runs. Clearly, differential processing of biological samples makes it necessary to have a standard which can be used to adjust the results. Peptides that are not expected to change in their expression levels in different biological samples may be used for this purpose. However, not all peptides ionize well and therefore the choice of candidates should be done after an initial study which should only characterize the protein content of the biological samples that will be investigated. Selecting discriminatory peptides Finally, sophisticated normalization methods are used to remove systematic artefacts in the peptide intensity values between LC-MS measurements. Then, discriminatory peptides are identified by selecting the peptides whose normalized intensities are different (e.g., p-value < 0.05) among multiple groups of samples. In addition, newer hybrid mass spectrometers like LTQ OrbiTrap offer the possibility to acquire MS/MS peptide identifications in parallel to the high mass precision measurement of peptides on the MS1 level. This raises the computational challenge for the processing and integration of these two sources of information and has led to the development of novel promising quantification strategies. References Biochemistry detection methods Mass spectrometry
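The normalization-and-selection step described above can be outlined in a few lines. This is an illustrative sketch only, not the workflow of any particular software package; the log-scale intensity matrix, the two-group design, the median normalization and the 0.05 threshold are all assumptions made for the example.

    import numpy as np
    from scipy import stats

    def normalize_runs(intensities):
        """Median-center each LC-MS run (column) so systematic intensity offsets cancel.

        intensities: peptides x runs matrix of extracted MS1 intensities (log2 scale)."""
        col_medians = np.nanmedian(intensities, axis=0)
        return intensities - (col_medians - np.nanmedian(col_medians))

    def discriminatory_peptides(intensities, groups, alpha=0.05):
        """Return indices of peptides whose normalized intensity differs between two groups."""
        normed = normalize_runs(intensities)
        a = normed[:, groups == 0]
        b = normed[:, groups == 1]
        _, pvals = stats.ttest_ind(a, b, axis=1, nan_policy="omit")
        return np.where(pvals < alpha)[0], pvals

    # Toy data: 4 peptides x 6 runs (3 per group); peptide 2 is up-regulated in group 1.
    rng = np.random.default_rng(0)
    data = rng.normal(20.0, 0.2, size=(4, 6))
    data[2, 3:] += 2.0
    groups = np.array([0, 0, 0, 1, 1, 1])
    hits, pvals = discriminatory_peptides(data, groups)
    print(hits, np.round(pvals, 4))

Real pipelines add peptide-to-protein roll-up and multiple-testing correction on top of this skeleton.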
Label-free quantification
[ "Physics", "Chemistry", "Biology" ]
922
[ "Biochemistry methods", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Chemical tests", "Mass spectrometry", "Biochemistry detection methods", "Matter" ]
19,446,142
https://en.wikipedia.org/wiki/Metabolite%20channeling
Metabolite channeling is the passing of the intermediary metabolic product of one enzyme directly to another enzyme or active site without its release into solution. When several consecutive enzymes of a metabolic pathway channel substrates between themselves, this is called a metabolon. Channeling can make a metabolic pathway more rapid and efficient than it would be if the enzymes were randomly distributed in the cytosol, or prevent the release of unstable intermediates. It can also protect an intermediate from being consumed by competing reactions catalyzed by other enzymes. Mechanisms for channeling Channeling can occur in several ways. One possibility, which occurs in the pyruvate dehydrogenase complex, is by a substrate being attached to a flexible arm that moves between several active sites (not very likely). Another possibility is by two active sites being connected by a tunnel through the protein and the substrate moving through the tunnel; this is seen in tryptophan synthase. A third possibility is by a charged region on the surface of the enzyme acting as a pathway or "electrostatic highway" to guide a substrate that has the opposite charge from one active site to another. This is seen in the bifunctional enzyme dihydrofolate reductase-thymidylate synthase. The channeling of aminoacyl-tRNA for protein synthesis in vivo has been also reported. Controversies Channeling of NADH between oxidoreductases Some authors have maintained that direct transfer of NADH from one enzyme as product to another as substrate is a common phenomenon. However others, such as Gutfreund and Chock and Pettersson have argued that the experimental evidence is too weak to support such a conclusion. In a more recent study Svedružić and colleagues conclude that such direct transfer is a real phenomenon, but they sound a note of caution:Our results also show that it is impossible to design experiments that can conclusively analyze substrate channeling in cells if we do not understand the underlying molecular principles and the properties of the related enzymes. Physiological effects of metabolite channeling It is sometimes suggested, for example by Ovádi, that metabolite channeling decreases the concentration of metabolite in free solution. However, it has also been argued that there is no net effect on the free concentration in steady-state conditions, a claim disputed by others. More recent authors consider this and other questions about channeling to be unresolved: "Substrate channeling in vivo has also been a subject of yet to be resolved debates," or they recognize that an effect on free concentration exists, but is "generally small." See also Enzyme kinetics Enzyme assay Enzyme catalysis References Enzyme kinetics
Metabolite channeling
[ "Chemistry" ]
552
[ "Chemical kinetics", "Enzyme kinetics" ]
19,449,848
https://en.wikipedia.org/wiki/Freeze%20thaw%20resistance
Freeze thaw resistance, or freezing and thawing resistance, is the property of solids to resist cyclic freezing and melting. See also Frost weathering Further reading Phase transitions Condensed matter physics
Freeze thaw resistance
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
38
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter" ]
19,450,493
https://en.wikipedia.org/wiki/Phason
In physics, a phason is a form of collective excitation found in aperiodic crystal structures. Phasons are a type of quasiparticle: an emergent phenomenon of many-particle systems. The phason can also be seen as a degree of freedom unique to quasicrystals. Similar to phonons, phasons are quasiparticles associated with atomic motion. However, whereas phonons are related to the translation of atoms, phasons are associated with atomic rearrangement. As a result of this rearrangement, or modulation, the waves that describe the position of atoms in the crystal change phase, hence the term "phason". In the language of the superspace picture commonly employed in the description of aperiodic crystals, in which the aperiodic function is obtained via projection from a higher dimensional periodic function, the 'phason' displacement can be seen as displacement of the (higher-dimensional) lattice points in the perpendicular space. Phasons can travel faster than the speed of sound within quasicrystalline materials, giving these materials a higher thermal conductivity than materials in which the transfer of heat is carried out only by phonons. Different phasonic modes can change the material properties of a quasicrystal. In the superspace representation, aperiodic crystals can be obtained from a periodic crystal of higher dimension by projection to a lower dimensional space; this is commonly referred to as the cut-and-project method. While phonons change the position of atoms relative to the crystal structure in space, phasons change the position of atoms relative to the quasicrystal structure and the cut-through superspace that defines it. Therefore, phonon modes are excitations of the "in-plane" real (also called parallel, direct, or external) space, whereas phasons are excitations of the perpendicular (also called internal or virtual) space. Phasons may be described in terms of hydrodynamic theory: when going from a homogeneous fluid to a quasicrystal, hydrodynamic theory predicts six new modes arising from the translational symmetry breaking in the parallel and perpendicular spaces. Three of these modes (corresponding to the parallel space) are acoustic phonon modes, while the remaining three are diffusive phason modes. In incommensurately-modulated crystals, phasons may be constructed from a coherent superposition of phonons of the unmodulated parent structure, though this is not possible for quasicrystals. Hydrodynamic analysis of quasicrystals predicts that, while the strain relaxation of phonons is relatively rapid, relaxation of phason strain is diffusive and is much slower. Therefore, metastable quasicrystals grown by rapid quenching from the melt exhibit built-in phason strain associated with shifts and anisotropic broadenings of X-ray and electron diffraction peaks. See also Quasicrystal Quasiparticle References Freedman, B., Lifshitz, R., Fleischer, J. et al. Phason dynamics in nonlinear photonic quasicrystals. Nature Mater 6, 776–781 (2007). https://doi.org/10.1038/nmat1981 Books Quasiparticles Crystallography
Phason
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
713
[ "Matter", "Materials science stubs", "Materials science", "Crystallography", "Condensed matter physics", "Quasiparticles", "Condensed matter stubs", "Subatomic particles" ]
4,662,960
https://en.wikipedia.org/wiki/Intermittency
In dynamical systems, intermittency is the irregular alternation of phases of apparently periodic and chaotic dynamics (Pomeau–Manneville dynamics), or different forms of chaotic dynamics (crisis-induced intermittency). Experimentally, intermittency appears as long periods of almost periodic behavior interrupted by chaotic behavior. As control variables change, the chaotic behavior become more frequent until the system is fully chaotic. This progression is known as the intermittency route to chaos. Pomeau and Manneville described three routes to intermittency where a nearly periodic system shows irregularly spaced bursts of chaos. These (type I, II and III) correspond to the approach to a saddle-node bifurcation, a subcritical Hopf bifurcation, or an inverse period-doubling bifurcation. In the apparently periodic phases the behaviour is only nearly periodic, slowly drifting away from an unstable periodic orbit. Eventually the system gets far enough away from the periodic orbit to be affected by chaotic dynamics in the rest of the state space, until it gets close to the orbit again and returns to the nearly periodic behaviour. Since the time spent near the periodic orbit depends sensitively on how closely the system entered its vicinity (in turn determined by what happened during the chaotic period) the length of each phase is unpredictable. Another kind, on-off intermittency, occurs when a previously transversally stable chaotic attractor with dimension less than the embedding space begins to lose stability. Near unstable orbits within the attractor orbits can escape into the surrounding space, producing a temporary burst before returning to the attractor. In crisis-induced intermittency a chaotic attractor suffers a crisis, where two or more attractors cross the boundaries of each other's basin of attraction. As an orbit moves through the first attractor it can cross over the boundary and become attracted to the second attractor, where it will stay until its dynamics moves it across the boundary again. Intermittent behaviour is commonly observed in fluid flows that are turbulent or near the transition to turbulence. In highly turbulent flows, intermittency is seen in the irregular dissipation of kinetic energy and the anomalous scaling of velocity increments. Understanding and modeling atmospheric flow and turbulence under such conditions are further complicated by “turbulence intermittency,” which manifests as periods of strong turbulent activity interspersed in a more quiescent airflow. It is also seen in the irregular alternation between turbulent and non-turbulent fluid that appear in turbulent jets and other turbulent free shear flows. In pipe flow and other wall bounded shear flows, there are intermittent puffs that are central to the process of transition from laminar to turbulent flow. Intermittent behavior has also been experimentally demonstrated in circuit oscillators and chemical reactions. See also Pomeau–Manneville scenario Crisis (dynamical systems) Turbulent flow Fluorescence intermittency (blinking) of organic molecules and colloidal quantum dots (nanocrystals) References Dynamical systems
Intermittency
[ "Physics", "Mathematics" ]
620
[ "Mechanics", "Dynamical systems" ]
4,665,038
https://en.wikipedia.org/wiki/Sobolev%20inequality
In mathematics, there is in mathematical analysis a class of Sobolev inequalities, relating norms including those of Sobolev spaces. These are used to prove the Sobolev embedding theorem, giving inclusions between certain Sobolev spaces, and the Rellich–Kondrachov theorem showing that under slightly stronger conditions some Sobolev spaces are compactly embedded in others. They are named after Sergei Lvovich Sobolev. Sobolev embedding theorem Let denote the Sobolev space consisting of all real-valued functions on whose weak derivatives up to order are functions in . Here is a non-negative integer and . The first part of the Sobolev embedding theorem states that if , and are two real numbers such that (given , , and this is satisfied for some provided ), then and the embedding is continuous: for every , one has , and In the special case of and , Sobolev embedding gives where is the Sobolev conjugate of , given by and for every , one has and This special case of the Sobolev embedding is a direct consequence of the Gagliardo–Nirenberg–Sobolev inequality. The result should be interpreted as saying that if a function in has one derivative in , then itself has improved local behavior, meaning that it belongs to the space where . (Note that , so that .) Thus, any local singularities in must be more mild than for a typical function in . The second part of the Sobolev embedding theorem applies to embeddings in Hölder spaces . If and with then one has the embedding In other words, for every and , one has , in addition, This part of the Sobolev embedding is a direct consequence of Morrey's inequality. Intuitively, this inclusion expresses the fact that the existence of sufficiently many weak derivatives implies some continuity of the classical derivatives. If then for every . In particular, as long as , the embedding criterion will hold with and some positive value of . That is, for a function on , if has derivatives in and , then will be continuous (and actually Hölder continuous with some positive exponent ). Generalizations The Sobolev embedding theorem holds for Sobolev spaces on other suitable domains . In particular (; ), both parts of the Sobolev embedding hold when is a bounded open set in with Lipschitz boundary (or whose boundary satisfies the cone condition; ) is a compact Riemannian manifold is a compact Riemannian manifold with boundary and the boundary is Lipschitz (meaning that the boundary can be locally represented as a graph of a Lipschitz continuous function). is a complete Riemannian manifold with injectivity radius and bounded sectional curvature. If is a bounded open set in with continuous boundary, then is compactly embedded in (). Kondrachov embedding theorem On a compact manifold with boundary, the Kondrachov embedding theorem states that if andthen the Sobolev embedding is completely continuous (compact). Note that the condition is just as in the first part of the Sobolev embedding theorem, with the equality replaced by an inequality, thus requiring a more regular space . Gagliardo–Nirenberg–Sobolev inequality Assume that is a continuously differentiable real-valued function on with compact support. Then for there is a constant depending only on and such that with . The case is due to Sobolev and the case to Gagliardo and Nirenberg independently. The Gagliardo–Nirenberg–Sobolev inequality implies directly the Sobolev embedding The embeddings in other orders on are then obtained by suitable iteration. 
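Several of the displayed formulas in this article have not survived extraction. For orientation, the Gagliardo–Nirenberg–Sobolev inequality and the Sobolev conjugate are usually stated as follows; this is the standard textbook form, quoted as a reminder rather than as a reconstruction of the missing displays:
\[
\|u\|_{L^{p^*}(\mathbb{R}^n)} \le C(n, p)\, \|\nabla u\|_{L^{p}(\mathbb{R}^n)},
\qquad
p^* = \frac{np}{n - p}, \quad 1 \le p < n,
\]
for every compactly supported \( u \in C^1(\mathbb{R}^n) \). For example, with n = 3 and p = 2 one gets p* = 6, so control of the gradient in L² controls the function itself in L⁶.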
Hardy–Littlewood–Sobolev lemma Sobolev's original proof of the Sobolev embedding theorem relied on the following, sometimes known as the Hardy–Littlewood–Sobolev fractional integration theorem. An equivalent statement is known as the Sobolev lemma in . A proof is in . Let and . Let be the Riesz potential on . Then, for defined by there exists a constant depending only on such that If , then one has two possible replacement estimates. The first is the more classical weak-type estimate: where . Alternatively one has the estimatewhere is the vector-valued Riesz transform, c.f. . The boundedness of the Riesz transforms implies that the latter inequality gives a unified way to write the family of inequalities for the Riesz potential. The Hardy–Littlewood–Sobolev lemma implies the Sobolev embedding essentially by the relationship between the Riesz transforms and the Riesz potentials. Morrey's inequality Assume . Then there exists a constant , depending only on and , such that for all , where Thus if , then is in fact Hölder continuous of exponent , after possibly being redefined on a set of measure 0. A similar result holds in a bounded domain with Lipschitz boundary. In this case, where the constant depends now on and . This version of the inequality follows from the previous one by applying the norm-preserving extension of to . The inequality is named after Charles B. Morrey Jr. General Sobolev inequalities Let be a bounded open subset of , with a boundary. ( may also be unbounded, but in this case its boundary, if it exists, must be sufficiently well-behaved.) Assume . Then we consider two cases: or , In this case we conclude that , where We have in addition the estimate , the constant depending only on , and . Here, we conclude that belongs to a Hölder space, more precisely: where We have in addition the estimate the constant depending only on , and . In particular, the condition guarantees that is continuous (and actually Hölder continuous with some positive exponent). Case If , then is a function of bounded mean oscillation and for some constant depending only on . This estimate is a corollary of the Poincaré inequality. Nash inequality The Nash inequality, introduced by , states that there exists a constant , such that for all , The inequality follows from basic properties of the Fourier transform. Indeed, integrating over the complement of the ball of radius , because . On the other hand, one has which, when integrated over the ball of radius gives where is the volume of the -ball. Choosing to minimize the sum of () and () and applying Parseval's theorem: gives the inequality. In the special case of , the Nash inequality can be extended to the case, in which case it is a generalization of the Gagliardo-Nirenberg-Sobolev inequality (, Comments on Chapter 8). In fact, if is a bounded interval, then for all and all the following inequality holds where: Logarithmic Sobolev inequality The simplest of the Sobolev embedding theorems, described above, states that if a function in has one derivative in , then itself is in , where We can see that as tends to infinity, approaches . Thus, if the dimension of the space on which is defined is large, the improvement in the local behavior of from having a derivative in is small ( is only slightly larger than ). In particular, for functions on an infinite-dimensional space, we cannot expect any direct analog of the classical Sobolev embedding theorems. 
There is, however, a type of Sobolev inequality, established by Leonard Gross () and known as a logarithmic Sobolev inequality, that has dimension-independent constants and therefore continues to hold in the infinite-dimensional setting. The logarithmic Sobolev inequality says, roughly, that if a function is in with respect to a Gaussian measure and has one derivative that is also in , then is in "-log", meaning that the integral of is finite. The inequality expressing this fact has constants that do not involve the dimension of the space and, thus, the inequality holds in the setting of a Gaussian measure on an infinite-dimensional space. It is now known that logarithmic Sobolev inequalities hold for many different types of measures, not just Gaussian measures. Although it might seem as if the -log condition is a very small improvement over being in , this improvement is sufficient to derive an important result, namely hypercontractivity for the associated Dirichlet form operator. This result means that if a function is in the range of the exponential of the Dirichlet form operator—which means that the function has, in some sense, infinitely many derivatives in —then the function does belong to for some ( Theorem 6). References . . , , MAA review , Translated from the Russian by T. O. Shaposhnikova. . . Inequalities Sobolev spaces Compactness theorems
Sobolev inequality
[ "Mathematics" ]
1,853
[ "Compactness theorems", "Binary relations", "Theorems in topology", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
4,665,840
https://en.wikipedia.org/wiki/Reliability-centered%20maintenance
Reliability-centered maintenance (RCM) is a concept of maintenance planning to ensure that systems continue to do what their users require in their present operating context. Successful implementation of RCM will lead to increase in cost effectiveness, reliability, machine uptime, and a greater understanding of the level of risk that the organization is managing. Context It is generally used to achieve improvements in fields such as the establishment of safe minimum levels of maintenance, changes to operating procedures and strategies and the establishment of capital maintenance regimes and plans. Successful implementation of RCM will lead to increase in cost effectiveness, machine uptime, and a greater understanding of the level of risk that the organization is managing. John Moubray characterized RCM as a process to establish the safe minimum levels of maintenance. This description echoed statements in the Nowlan and Heap report from United Airlines. It is defined by the technical standard SAE JA1011, Evaluation Criteria for RCM Processes, which sets out the minimum criteria that any process should meet before it can be called RCM. This starts with the seven questions below, worked through in the order that they are listed: 1. What is the item supposed to do and its associated performance standards? 2. In what ways can it fail to provide the required functions? 3. What are the events that cause each failure? 4. What happens when each failure occurs? 5. In what way does each failure matter? 6. What systematic task can be performed proactively to prevent, or to diminish to a satisfactory degree, the consequences of the failure? 7. What must be done if a suitable preventive task cannot be found? Reliability centered maintenance is an engineering framework that enables the definition of a complete maintenance regimen. It regards maintenance as the means to maintain the functions a user may require of machinery in a defined operating context. As a discipline it enables machinery stakeholders to monitor, assess, predict and generally understand the working of their physical assets. This is embodied in the initial part of the RCM process which is to identify the operating context of the machinery, and write a Failure Mode Effects and Criticality Analysis (FMECA). The second part of the analysis is to apply the "RCM logic", which helps determine the appropriate maintenance tasks for the identified failure modes in the FMECA. Once the logic is complete for all elements in the FMECA, the resulting list of maintenance is "packaged", so that the periodicities of the tasks are rationalised to be called up in work packages; it is important not to destroy the applicability of maintenance in this phase. Lastly, RCM is kept live throughout the "in-service" life of machinery, where the effectiveness of the maintenance is kept under constant review and adjusted in light of the experience gained. RCM can be used to create a cost-effective maintenance strategy to address dominant causes of equipment failure. It is a systematic approach to defining a routine maintenance program composed of cost-effective tasks that preserve important functions. The important functions (of a piece of equipment) to preserve with routine maintenance are identified, their dominant failure modes and causes determined and the consequences of failure ascertained. Levels of criticality are assigned to the consequences of failure. 
Some functions are not critical and are left to "run to failure" while other functions must be preserved at all cost. Maintenance tasks are selected that address the dominant failure causes. This process directly addresses maintenance-preventable failures. Failures caused by unlikely events, non-predictable acts of nature, etc. will usually receive no action provided their risk (combination of severity and frequency) is trivial (or at least tolerable). When the risk of such failures is very high, RCM encourages (and sometimes mandates) the user to consider changing something which will reduce the risk to a tolerable level. The result is a maintenance program that focuses scarce economic resources on those items that would cause the most disruption if they were to fail. RCM emphasizes the use of predictive maintenance (PdM) techniques in addition to traditional preventive measures. Background The term "reliability-centered maintenance" was coined by Tom Matteson, Stanley Nowlan and Howard Heap of United Airlines (UAL) to describe a process used to determine the optimum maintenance requirements for aircraft (having left United Airlines to pursue a consulting career a few months before the publication of the final Nowlan-Heap report, Matteson received no authorial credit for the work). The US Department of Defense (DOD) sponsored the authoring of both a textbook (by UAL) and an evaluation report (by Rand Corporation) on Reliability-Centered Maintenance, both published in 1978. They brought RCM concepts to the attention of a wider audience. The first generation of jet aircraft had a crash rate that would be considered highly alarming today, and both the Federal Aviation Administration (FAA) and the airlines' senior management felt strong pressure to improve matters. In the early 1960s, with FAA approval, the airlines began to conduct a series of intensive engineering studies on in-service aircraft. The studies proved that the fundamental assumption of design engineers and maintenance planners, namely that every aircraft and every major component thereof (such as its engines) had a specific "lifetime" of reliable service after which it had to be replaced (or overhauled) in order to prevent failures, was wrong in nearly every specific example in a complex modern jet airliner. This was one of many astounding discoveries that have revolutionized the managerial discipline of physical asset management and have formed the basis of many developments since this seminal work was published. Among the paradigm shifts inspired by RCM were: an understanding that the vast majority of failures are not necessarily linked to the age of the asset; a shift from efforts to predict life expectancies to trying to manage the process of failure; an understanding of the difference between the requirements of assets from a user perspective and the design reliability of the asset; an understanding of the importance of managing assets on condition (often referred to as condition monitoring, condition-based maintenance and predictive maintenance); an understanding of four basic routine maintenance tasks; and the linking of levels of tolerable risk to maintenance strategy development. Later, RCM was defined in the standard SAE JA1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. This sets out the minimum criteria for what is, and for what is not, able to be defined as RCM. The standard is a watershed event in the ongoing evolution of the discipline of physical asset management. 
Prior to the development of the standard many processes were labeled as RCM even though they were not true to the intentions and the principles in the original report that defined the term publicly. Basic features The RCM process described in the DOD/UAL report recognized three principal risks from equipment failures: threats to safety, to operations, and to the maintenance budget. Modern RCM gives threats to the environment a separate classification, though most forms manage them in the same way as threats to safety. RCM offers five principal options among the risk management strategies: Predictive maintenance tasks, Preventive Restoration or Preventive Replacement maintenance tasks, Detective maintenance tasks, Run-to-Failure, and One-time changes to the "system" (changes to hardware design, to operations, or to other things). RCM also offers specific criteria to use when selecting a risk management strategy for a system that presents a specific risk when it fails. Some are technical in nature (can the proposed task detect the condition it needs to detect? does the equipment actually wear out, with use?). Others are goal-oriented (is it reasonably likely that the proposed task-and-task-frequency will reduce the risk to a tolerable level?). The criteria are often presented in the form of a decision-logic diagram, though this is not intrinsic to the nature of the process. In use After being created by the commercial aviation industry, RCM was adopted by the U.S. military (beginning in the mid-1970s) and by the U.S. commercial nuclear power industry (in the 1980s). Starting in the late 1980s, an independent initiative led by John Moubray corrected some early flaws in the process, and adapted it for use in the wider industry. Moubray was also responsible for popularizing the method and for introducing it to much of the industrial community outside of the aviation industry. In the two decades since this approach (called by the author RCM2) was first released, industry has undergone massive change with advances in lean thinking and efficiency methods. At this point in time many methods sprung up that took an approach of reducing the rigour of the RCM approach. The result was the propagation of methods that called themselves RCM, yet had little in common with the original concepts. In some cases these were misleading and inefficient, while in other cases they were even dangerous. Since each initiative is sponsored by one or more consulting firms eager to help clients use it, there is still considerable disagreement about their relative dangers (or merits). The RCM standard (SAE JA1011, available from http://www.sae.org) provides the minimum criteria that processes must comply with if they are to be called RCM. Although a voluntary standard, it provides a reference for companies looking to implement RCM to ensure they are getting a process, software package or service that is in line with the original report. The Walt Disney Company introduced RCM to its parks in 1997, led by Paul Pressler and consultants McKinsey & Company, laying off a large number of maintenance workers and saving large amounts of money. Some people blamed the new cost-conscious maintenance culture for some of the Incidents at Disneyland Resort that occurred in the following years. 
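As a rough editorial sketch (not the SAE JA1011 decision logic itself, and with purely hypothetical names, categories and example failure modes), the strategy selection described under "Basic features" above might be encoded along these lines:

```python
# Illustrative simplification of RCM task selection; not the SAE JA1011 logic.
# All field names, categories and example failure modes are hypothetical.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    consequence: str                 # "safety", "environmental", "operational" or "non-operational"
    proactive_task_applicable: bool  # a predictive/preventive/detective task is technically feasible
    residual_risk_tolerable: bool    # that task would reduce the risk to a tolerable level

def select_strategy(fm: FailureMode) -> str:
    """Pick one of the five strategy options listed above for a single failure mode."""
    if fm.proactive_task_applicable and fm.residual_risk_tolerable:
        return "proactive task (predictive, preventive restoration/replacement or detective)"
    if fm.consequence in ("safety", "environmental"):
        # No suitable routine task for a high-consequence failure: change the system instead
        return "one-time change (redesign, operating or procedural change)"
    return "run to failure"

if __name__ == "__main__":
    for fm in (
        FailureMode("pump seal leak", "operational", True, True),
        FailureMode("brake actuator seizure", "safety", False, False),
        FailureMode("panel lamp burnout", "non-operational", False, False),
    ):
        print(f"{fm.name}: {select_strategy(fm)}")
```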
See also Maintenance RAMS Notes References Further reading Standard To Define RCM (Part 1), Dana Netherton, Maintenance Technology (1998) Standard To Define RCM (Part 2), Dana Netherton, Maintenance Technology (1998) Standard RCM Process Requirements, Jesús R, Sifonte, Conscious Reliability (2017) What about RCM-R®? How does it stand when compared with SAE JA1011?, Jesús R, Sifonte, Conscious Reliability (2017) Reliability Centered Maintenance: 9 Principles of a Modern Preventive Maintenance Program, Erik Hupje, Reliability Academy (2020) Maintenance Reliability engineering
Reliability-centered maintenance
[ "Engineering" ]
2,096
[ "Systems engineering", "Maintenance", "Mechanical engineering", "Reliability engineering" ]
4,668,173
https://en.wikipedia.org/wiki/Flitch%20beam
A flitch beam (or flitched beam) is a compound beam used in the construction of houses, decks, and other primarily wood-frame structures. Typically, the flitch beam is made up of a vertical steel plate sandwiched between two wood beams, the three layers being held together with bolts. In that common form it is sometimes referenced as a steel flitch beam. Further alternating layers of wood and steel can be used to produce an even stronger beam. The metal plates within the beam are known as flitch plates.[1] Flitch beams were used as a cost-effective way to strengthen long-span wooden beams, and have been largely supplanted by more recent technology. History "Flitch" originally referred to a slab of bacon, which was cut into strips lengthwise. Similarly, a wooden beam was flitched by cutting it lengthwise; one half was then rotated 180 degrees both longitudinally and laterally to ensure that any defects were separated. In the 18th century, before the availability of steel beams, pine beams were flitched with hardwood such as oak. With the availability of affordable steel, flitch beams became a way to strengthen long-span wooden beams cost-effectively while taking up less space than solid wood. An 1883 article from The American Architect and Building News compares three alternatives in a hypothetical railway station "in which the second story is devoted to offices, and where we must use girders to support the second floor of 25-foot span, and not less than 12 feet on centres if we can avoid it. This would give us, to be supported by the girder, a floor area of 12' x 25' = 300 square feet" and 31,500 pounds of load. After performing calculations the beams compare as follows: The article also cites the fire-retardant character of the flitch beam, "in case of a fire would not probably affect the iron until the wooden beams were badly burned." With the advent of high-strength engineered lumber, the advantages of flitch-beams disappeared. For example, comparing the capacity of 2 beams spanning 18 feet: Additionally, use of this type of beam has greatly declined due to the high cost of labor. Engineered lumber can be cut to length and installed much like sawn lumber; the flitch requires shop fabrication and/or field bolting. This, coupled with a much increased self-weight of the beam (11.4 pounds (5.2 kg) for engineered wood vs. 25.2 pounds (11.4 kg) for a flitch beam), decreases the viability of the system. Modern uses Flitch beams are currently mainly used in historic renovations, where they can be used to reinforce aged lumber supports, or for aesthetic purposes, where exposed beams with the appearance of wood and the strength of steel are required. An adaptive use project in the UK, changing stables into offices, required cutting the beam supporting a floor down its entire length, and then inserting a similarly sized steel plate. The resulting flitched beam was then secured with resin and bolts, preserving appearance while providing strength.  Flitch beams were used as columns in a two-story new construction.  Glulam beams were used to support the second floor and the roof.  This allowed the appearance of wooden columns, while providing the necessary strength. The method for calculating the size of a flitch beam to be used in construction is straightforward, using the transformed-section method. The steel plate is treated as an equally stiff piece of wood, with its width modified by the ratio of their moduli of elasticity. 
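A minimal numeric sketch of this transformed-section step follows (an editorial illustration; the dimensions, loads and moduli are hypothetical example values, not figures from the article):

```python
# Editorial sketch of the transformed-section method for a flitch beam.
# All dimensions, loads and moduli are hypothetical example values.

E_WOOD = 1.6e6     # psi, assumed modulus of elasticity of the lumber
E_STEEL = 29.0e6   # psi, assumed modulus of elasticity of the steel plate

def rect_I(width, depth):
    """Second moment of area of a rectangle about its own centroid: b*d^3/12."""
    return width * depth ** 3 / 12.0

def transformed_I(wood_width, steel_width, depth):
    """Treat the steel plate as an 'equivalent wood' ply whose width is scaled by the
    modular ratio E_steel/E_wood; all plies are assumed to share the same depth and
    neutral axis, as in a typical bolted flitch beam."""
    n = E_STEEL / E_WOOD
    return rect_I(wood_width, depth) + rect_I(n * steel_width, depth)

if __name__ == "__main__":
    # Two 1.5 in wood plies sandwiching a 0.5 in steel plate, all 11.25 in deep
    I_eq = transformed_I(wood_width=2 * 1.5, steel_width=0.5, depth=11.25)
    w, L = 50.0, 18 * 12          # uniform load (lb/in) and span (in), hypothetical
    delta = 5 * w * L ** 4 / (384 * E_WOOD * I_eq)   # simply supported, uniform load
    print(f"equivalent I = {I_eq:.0f} in^4, midspan deflection = {delta:.2f} in")
```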
This allows the deflection of the entire beam to be calculated as if it were entirely made up of wood. There is modest business activity involving flitch beams with The Timber Research and Development Association (TRADA) developing a new flitch beam, a construction software program offering calculation for flitch beam designs, and at least one firm offering pre-fabricated flitch beams in various configurations. References External links Flitch Plate & Beam Specifications Structural engineering
Flitch beam
[ "Engineering" ]
821
[ "Structural engineering", "Civil engineering", "Construction" ]
4,668,382
https://en.wikipedia.org/wiki/Lapatinib
Lapatinib (INN), used in the form of lapatinib ditosylate (USAN) (trade names Tykerb and Tyverb, marketed by Novartis), is an orally active drug for breast cancer and other solid tumours. It is a dual tyrosine kinase inhibitor which interrupts the HER2/neu and epidermal growth factor receptor (EGFR) pathways. It is used in combination therapy for HER2-positive breast cancer, in particular for the treatment of patients with advanced or metastatic breast cancer whose tumors overexpress HER2 (ErbB2). Status In March 2007, the U.S. Food and Drug Administration (FDA) approved lapatinib in combination therapy for breast cancer patients already using capecitabine (Xeloda). In January 2010, Tykerb received accelerated approval for the treatment of postmenopausal women with hormone receptor-positive metastatic breast cancer that overexpresses the HER2 receptor and for whom hormonal therapy is indicated (in combination with letrozole). Pharmaceutical company GlaxoSmithKline (GSK) markets the drug under the proprietary names Tykerb (mostly U.S.) and Tyverb (mostly Europe and Russia). The drug currently has approval for sale and clinical use in the US, Australia, Bahrain, Kuwait, Venezuela, Brazil, New Zealand, South Korea, Switzerland, Japan, Jordan, the European Union, Lebanon, India and Pakistan. In August 2013, India's Intellectual Property Appellate Board revoked the patent for Glaxo's Tykerb citing its derivative status, while upholding at the same time the original patent granted for lapatinib. The drug lapatinib ditosylate is classified as S/NM, that is, a synthetic compound showing competitive inhibition of the natural product substrate (a substrate that is naturally derived or naturally inspired). Mode of action Biochemistry Lapatinib inhibits the tyrosine kinase activity associated with two oncogenes, EGFR (epidermal growth factor receptor) and HER2/neu (human EGFR type 2). Overexpression of HER2/neu can be responsible for certain types of high-risk breast cancers in women. Like sorafenib, lapatinib is a protein kinase inhibitor shown to decrease tumor-causing breast cancer stem cells. Lapatinib inhibits receptor signal processes by binding to the ATP-binding pocket of the EGFR/HER2 protein kinase domain, preventing self-phosphorylation and subsequent activation of the signal mechanism (see Receptor tyrosine kinase#Signal transduction). Clinical application Breast cancer Lapatinib is used as a treatment for women's breast cancer in treatment-naïve, ER+/EGFR+/HER2+ breast cancer patients and in patients who have HER2-positive advanced breast cancer that has progressed after previous treatment with other chemotherapeutic agents, such as anthracycline, taxane-derived drugs, or trastuzumab (Herceptin). A 2006 GSK-supported randomized clinical trial in women with breast cancer previously treated with those agents (anthracycline, a taxane and trastuzumab) demonstrated that administering lapatinib in combination with capecitabine delayed the time of further cancer growth compared to regimens that used capecitabine alone. The study also reported that the risk of disease progression was reduced by 51%, and that the combination therapy was not associated with increases in toxic side effects. The outcome of this study resulted in a somewhat complex and rather specific initial indication for lapatinib: use only in combination with capecitabine for HER2-positive breast cancer in women whose cancer has progressed following previous chemotherapy with anthracycline, taxanes and trastuzumab. 
Early clinical trials have been performed suggesting that high-dose intermittent lapatinib might have better efficacy with manageable toxicities in the treatment of HER2-overexpressing breast cancers. Adverse effects Like many small-molecule tyrosine kinase inhibitors, lapatinib is regarded as well tolerated. The most common side effects reported are diarrhea, fatigue, nausea and rashes. Of note, lapatinib-related rash is associated with improved outcome. In clinical studies, elevated liver enzymes have been reported. QT prolongation has been observed with the use of lapatinib ditosylate, but there are no reports of torsades de pointes. Caution is advised in patients with hypokalaemia, hypomagnesaemia, congenital long QT syndrome, or with coadministration of medicines known to cause QT prolongation. In combination with capecitabine, reversible decreases in left ventricular function are common (2%). Ongoing trials in gastric cancer A Phase III study designed to assess lapatinib in combination with chemotherapy for advanced HER2-positive gastric cancer failed in 2013 to meet the primary endpoint of improved overall survival (OS) against chemotherapy alone. The trial did not discover new safety signals; the median OS for patients in the lapatinib and chemotherapy group was 12.2 months against 10.5 months for patients in the placebo plus chemotherapy group. Secondary endpoints of the randomized, double-blinded study were progression-free survival (PFS), response rate and duration of response. Median PFS was 6 months, response rate was 53% and the duration of response was 7.3 months in the investigational combination chemotherapy group, compared to a median PFS of 5.4 months, a response rate of 39% and a duration of response of 5.6 months for patients in the chemotherapy-alone group. Diarrhoea, vomiting, anemia, dehydration and nausea were serious adverse events (SAE) reported in over 2% of patients in the investigational combination chemotherapy group, while vomiting was the most common SAE noted in the chemotherapy group. References External links Amines Anilines Aromatic amines Chloroarenes Furans 3-Fluorophenyl compounds Phenol ethers Quinazolines Receptor tyrosine kinase inhibitors Sulfones Drugs developed by GSK plc Drugs developed by Novartis Antineoplastic drugs
Lapatinib
[ "Chemistry" ]
1,295
[ "Sulfones", "Amines", "Bases (chemistry)", "Functional groups" ]
4,668,395
https://en.wikipedia.org/wiki/Sequential%20space
In topology and related fields of mathematics, a sequential space is a topological space whose topology can be completely characterized by its convergent/divergent sequences. They can be thought of as spaces that satisfy a very weak axiom of countability, and all first-countable spaces (notably metric spaces) are sequential. In any topological space if a convergent sequence is contained in a closed set then the limit of that sequence must be contained in as well. Sets with this property are known as sequentially closed. Sequential spaces are precisely those topological spaces for which sequentially closed sets are in fact closed. (These definitions can also be rephrased in terms of sequentially open sets; see below.) Said differently, any topology can be described in terms of nets (also known as Moore–Smith sequences), but those sequences may be "too long" (indexed by too large an ordinal) to compress into a sequence. Sequential spaces are those topological spaces for which nets of countable length (i.e., sequences) suffice to describe the topology. Any topology can be refined (that is, made finer) to a sequential topology, called the sequential coreflection of The related concepts of Fréchet–Urysohn spaces, -sequential spaces, and -sequential spaces are also defined in terms of how a space's topology interacts with sequences, but have subtly different properties. Sequential spaces and -sequential spaces were introduced by S. P. Franklin. History Although spaces satisfying such properties had implicitly been studied for several years, the first formal definition is due to S. P. Franklin in 1965. Franklin wanted to determine "the classes of topological spaces that can be specified completely by the knowledge of their convergent sequences", and began by investigating the first-countable spaces, for which it was already known that sequences sufficed. Franklin then arrived at the modern definition by abstracting the necessary properties of first-countable spaces. Preliminary definitions Let be a set and let be a sequence in ; that is, a family of elements of , indexed by the natural numbers. In this article, means that each element in the sequence is an element of and, if is a map, then For any index the tail of starting at is the sequence A sequence is eventually in if some tail of satisfies Let be a topology on and a sequence therein. The sequence converges to a point written (when context allows, ), if, for every neighborhood of eventually is in is then called a limit point of A function between topological spaces is sequentially continuous if implies Sequential closure/interior Let be a topological space and let be a subset. The topological closure (resp. topological interior) of in is denoted by (resp. ). The sequential closure of in is the setwhich defines a map, the sequential closure operator, on the power set of If necessary for clarity, this set may also be written or It is always the case that but the reverse may fail. The sequential interior of in is the set(the topological space again indicated with a subscript if necessary). Sequential closure and interior satisfy many of the nice properties of topological closure and interior: for all subsets and ; and ; ; ; and That is, sequential closure is a preclosure operator. Unlike topological closure, sequential closure is not idempotent: the last containment may be strict. Thus sequential closure is not a (Kuratowski) closure operator. 
Sequentially closed and open sets A set is sequentially closed if ; equivalently, for all and such that we must have A set is defined to be sequentially open if its complement is sequentially closed. Equivalent conditions include: or For all and such that eventually is in (that is, there exists some integer such that the tail ). A set is a sequential neighborhood of a point if it contains in its sequential interior; sequential neighborhoods need not be sequentially open (see below). It is possible for a subset of to be sequentially open but not open. Similarly, it is possible for there to exist a sequentially closed subset that is not closed. Sequential spaces and coreflection As discussed above, sequential closure is not in general idempotent, and so not the closure operator of a topology. One can obtain an idempotent sequential closure via transfinite iteration: for a successor ordinal define (as usual)and, for a limit ordinal defineThis process gives an ordinal-indexed increasing sequence of sets; as it turns out, that sequence always stabilizes by index (the first uncountable ordinal). Conversely, the sequential order of is the minimal ordinal at which, for any choice of the above sequence will stabilize. The transfinite sequential closure of is the terminal set in the above sequence: The operator is idempotent and thus a closure operator. In particular, it defines a topology, the sequential coreflection. In the sequential coreflection, every sequentially-closed set is closed (and every sequentially-open set is open). Sequential spaces A topological space is sequential if it satisfies any of the following equivalent conditions: is its own sequential coreflection. Every sequentially open subset of is open. Every sequentially closed subset of is closed. For any subset that is closed in there exists some and a sequence in that converges to (Universal Property) For every topological space a map is continuous if and only if it is sequentially continuous (if then ). is the quotient of a first-countable space. is the quotient of a metric space. By taking and to be the identity map on in the universal property, it follows that the class of sequential spaces consists precisely of those spaces whose topological structure is determined by convergent sequences. If two topologies agree on convergent sequences, then they necessarily have the same sequential coreflection. Moreover, a function from is sequentially continuous if and only if it is continuous on the sequential coreflection (that is, when pre-composed with ). - and -sequential spaces A -sequential space is a topological space with sequential order 1, which is equivalent to any of the following conditions: The sequential closure (or interior) of every subset of is sequentially closed (resp. open). or are idempotent. or Any sequential neighborhood of can be shrunk to a sequentially-open set that contains ; formally, sequentially-open neighborhoods are a neighborhood basis for the sequential neighborhoods. For any and any sequential neighborhood of there exists a sequential neighborhood of such that, for every the set is a sequential neighborhood of Being a -sequential space is incomparable with being a sequential space; there are sequential spaces that are not -sequential and vice-versa. However, a topological space is called a -sequential (or neighborhood-sequential) if it is both sequential and -sequential. An equivalent condition is that every sequential neighborhood contains an open (classical) neighborhood. 
Every first-countable space (and thus every metrizable space) is -sequential. There exist topological vector spaces that are sequential but -sequential (and thus not -sequential). Fréchet–Urysohn spaces A topological space is called Fréchet–Urysohn if it satisfies any of the following equivalent conditions: is hereditarily sequential; that is, every topological subspace is sequential. For every subset For any subset that is not closed in and every there exists a sequence in that converges to Fréchet–Urysohn spaces are also sometimes said to be "Fréchet," but should be confused with neither Fréchet spaces in functional analysis nor the T1 condition. Examples and sufficient conditions Every CW-complex is sequential, as it can be considered as a quotient of a metric space. The prime spectrum of a commutative Noetherian ring with the Zariski topology is sequential. Take the real line and identify the set of integers to a point. As a quotient of a metric space, the result is sequential, but it is not first countable. Every first-countable space is Fréchet–Urysohn and every Fréchet-Urysohn space is sequential. Thus every metrizable or pseudometrizable space — in particular, every second-countable space, metric space, or discrete space — is sequential. Let be a set of maps from Fréchet–Urysohn spaces to Then the final topology that induces on is sequential. A Hausdorff topological vector space is sequential if and only if there exists no strictly finer topology with the same convergent sequences. Spaces that are sequential but not Fréchet-Urysohn Schwartz space and the space of smooth functions, as discussed in the article on distributions, are both widely-used sequential spaces. More generally, every infinite-dimensional Montel DF-space is sequential but not Fréchet–Urysohn. Arens' space is sequential, but not Fréchet–Urysohn. Non-examples (spaces that are not sequential) The simplest space that is not sequential is the cocountable topology on an uncountable set. Every convergent sequence in such a space is eventually constant; hence every set is sequentially open. But the cocountable topology is not discrete. (One could call the topology "sequentially discrete".) Let denote the space of -smooth test functions with its canonical topology and let denote the space of distributions, the strong dual space of ; neither are sequential (nor even an Ascoli space). On the other hand, both and are Montel spaces and, in the dual space of any Montel space, a sequence of continuous linear functionals converges in the strong dual topology if and only if it converges in the weak* topology (that is, converges pointwise). Consequences Every sequential space has countable tightness and is compactly generated. If is a continuous open surjection between two Hausdorff sequential spaces then the set of points with unique preimage is closed. (By continuity, so is its preimage in the set of all points on which is injective.) 
If is a surjective map (not necessarily continuous) onto a Hausdorff sequential space and bases for the topology on then is an open map if and only if, for every basic neighborhood of and sequence in there is a subsequence of that is eventually in Categorical properties The full subcategory Seq of all sequential spaces is closed under the following operations in the category Top of topological spaces: The category Seq is closed under the following operations in Top: Since they are closed under topological sums and quotients, the sequential spaces form a coreflective subcategory of the category of topological spaces. In fact, they are the coreflective hull of metrizable spaces (that is, the smallest class of topological spaces closed under sums and quotients and containing the metrizable spaces). The subcategory Seq is a Cartesian closed category with respect to its own product (not that of Top). The exponential objects are equipped with the (convergent sequence)-open topology. P.I. Booth and A. Tillotson have shown that Seq is the smallest Cartesian closed subcategory of Top containing the underlying topological spaces of all metric spaces, CW-complexes, and differentiable manifolds and that is closed under colimits, quotients, and other "certain reasonable identities" that Norman Steenrod described as "convenient". Every sequential space is compactly generated, and finite products in Seq coincide with those for compactly generated spaces, since products in the category of compactly generated spaces preserve quotients of metric spaces. See also Notes Citations References Arkhangel'skii, A.V. and Pontryagin, L.S., General Topology I, Springer-Verlag, New York (1990) . Engelking, R., General Topology, Heldermann, Berlin (1989). Revised and completed edition. Goreham, Anthony, "Sequential Convergence in Topological Spaces", (2016) General topology Properties of topological spaces
Sequential space
[ "Mathematics" ]
2,525
[ "General topology", "Properties of topological spaces", "Space (mathematics)", "Topological spaces", "Topology" ]
4,668,618
https://en.wikipedia.org/wiki/TFNP
In computational complexity theory, the complexity class TFNP is the class of total function problems which can be solved in nondeterministic polynomial time. That is, it is the class of function problems that are guaranteed to have an answer, and this answer can be checked in polynomial time, or equivalently it is the subset of FNP where a solution is guaranteed to exist. The abbreviation TFNP stands for "Total Function Nondeterministic Polynomial". TFNP contains many natural problems that are of interest to computer scientists. These problems include integer factorization, finding a Nash Equilibrium of a game, and searching for local optima. TFNP is widely conjectured to contain problems that are computationally intractable, and several such problems have been shown to be hard under cryptographic assumptions. However, there are no known unconditional intractability results or results showing NP-hardness of TFNP problems. TFNP is not believed to have any complete problems. Formal definition The class TFNP is formally defined as follows. A binary relation P(x,y) is in TFNP if and only if there is a deterministic polynomial time algorithm that can determine whether P(x,y) holds given both x and y, and for every x, there exists a y which is at most polynomially longer than x such that P(x,y) holds. It was first defined by Megiddo and Papadimitriou in 1989, although TFNP problems and subclasses of TFNP had been defined and studied earlier. Examples Pigeonhole principle problem Input: A (polynomially computable) mapping f which maps a set of n + 1 items to a set of n items. Question: Find two items a and b such that f(a) = f(b). Let x be a mapping, and y a 2-tuple of items in its domain. The binary relation in question P(x,y) has the meaning "the images of both entries of y under x are equal", which, since the mapping is polynomially computable, is polynomially decidable. Moreover, such tuple y must exist for any mapping because of the pigeonhole principle. Connections to other complexity classes F(NP ∩ coNP) The complexity class can be defined in two different ways, and those ways are not known to be equivalent. One way applies F to the machine model for . It is known that with this definition, coincides with TFNP. To see this, first notice that the inclusion follows easily from the definitions of the classes. All "yes" answers to problems in TFNP can be easily verified by definition, and since problems in TFNP are total, there are no "no" answers, so it is vacuously true that "no" answers can be easily verified. For the reverse inclusion, let R be a binary relation in . Decompose R into such that precisely when and y is a "yes" answer, and let R2 be such and y is a "no" answer. Then the binary relation is in TFNP. The other definition uses that is known to be a well-behaved class of decision problems, and applies F to that class. With this definition, if then . Connection to NP NP is one of the most widely studied complexity classes. The conjecture that there are intractable problems in NP is widely accepted and often used as the most basic hardness assumption. Therefore, it is only natural to ask how TFNP is related to NP. It is not difficult to see that solutions to problems in NP can imply solutions to problems in TFNP. However, there are no TFNP problems which are known to be NP-hard. Intuition for this fact comes from the fact that problems in TFNP are total. For a problem to be NP-hard, there must exist a reduction from some NP-complete problem to the problem of interest. 
A typical reduction from problem A to problem B is performed by creating and analyzing a map that sends "yes" instances of A to "yes" instances of B and "no" instances of A to "no" instances of B. However, TFNP problems are total, so there are no "no" instances for this type of reduction, causing common techniques to be difficult to apply. Beyond this rough intuition, there are several concrete results that suggest that it might be difficult or even impossible to prove NP-hardness for TFNP problems. For example, if any TFNP problem is NP-complete, then NP = coNP, which is generally conjectured to be false, but is still a major open problem in complexity theory. This lack of connections with NP is a major motivation behind the study of TFNP as its own independent class. Notable subclasses The structure of TFNP is often studied through the study of its subclasses. These subclasses are defined by the mathematical theorem by which solutions to the problems are guaranteed. One appeal of studying subclasses of TFNP is that although TFNP is believed not to have any complete problems, these subclasses are defined by a certain complete problem, making them easier to reason about. PLS PLS (standing for "Polynomial Local Search") is a class of problems designed to model the process of searching for a local optimum for a function. In particular, it is the class of total function problems that are polynomial-time reducible to the following problem Given input circuits S and C each with n input and output bits, find x such that . It contains the class CLS. PPA PPA (standing for "Polynomial time Parity Argument") is the class of problems whose solution is guaranteed by the handshaking lemma: any undirected graph with an odd degree vertex must have another odd degree vertex. It contains the subclass PPAD. PPP PPP (standing for "Polynomial time Pigeonhole Principle") is the class of problems whose solution is guaranteed by the Pigeonhole principle. More precisely, it is the class of problems that can be reduced in polynomial time to the Pigeon problem, defined as follows Given circuit C with n input and output bits, find x such that or x ≠ y such that . PPP contains the classes PPAD and PWPP. Notable problems in this class include the short integer solution problem. PPAD PPAD (standing for "Polynomial time Parity Argument, Directed") is a restriction of PPA to problems whose solutions are guaranteed by a directed version of the handshake lemma. It is often defined as the set of problems that are polynomial-time reducible to End-of-a-Line: Given circuits S and P with n input and output bits and , find x such that or such that . PPAD is in the intersection of PPA and PPP, and it contains CLS. Here, the circuit S in the definition sends each point of the line to its successor, or to itself if the point is a sink. Likewise P sends each point of the line to its predecessor, or to itself if the point is a source. Points outside of all lines are identified by being fixed under both P and S (in other words, any isolated points are removed from the graph). Then the condition defines the end of a line, which is either a sink or is such that S(x) = S(y) for some other point y; similarly the condition defines the beginning of a line (since we assume that 0 is a source, we require the solution be nonzero in this case). CLS Continuous local search (CLS) is a class of search problems designed to model the process of finding a local optima of a continuous function over a continuous domain. 
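To make the End-of-a-Line problem above concrete, here is an editorial, brute-force illustration in which the successor and predecessor maps are given explicitly rather than as circuits (so the search takes exponential time in n and only shows what counts as a solution):

```python
# Brute-force illustration of End-of-a-Line (editorial example, not an efficient
# algorithm): S and P are total functions on {0, ..., 2**n - 1} playing the role
# of the successor and predecessor circuits.

def end_of_a_line(S, P, n):
    """Return an x with P(S(x)) != x (the end of a line), or a nonzero x with
    S(P(x)) != x (the start of a line other than the distinguished source 0)."""
    for x in range(2 ** n):
        if P(S(x)) != x:
            return x
        if x != 0 and S(P(x)) != x:
            return x
    return None  # cannot happen when 0 really is a source with an outgoing edge

# Toy instance on 3-bit strings: one line 0 -> 1 -> 2 -> 3; all other points isolated.
line = [0, 1, 2, 3]
def S(x): return line[line.index(x) + 1] if x in line[:-1] else x
def P(x): return line[line.index(x) - 1] if x in line[1:] else x

print(end_of_a_line(S, P, 3))   # 3, the sink at the end of the line
```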
It is defined as the class of problems that are polynomial-time reducible to the Continuous Localpoint problem: Given two Lipschitz continuous functions S and C and parameters ε and λ, find an ε-approximate fixed point of S with respect to C or two points that violate the λ-continuity of C or S. This class was first defined by Daskalakis and Papadimitriou in 2011. It is contained in the intersection of PPAD and PLS, and in 2020 it has been proven that . It was designed to be a class of relatively simple optimization problems that still contains many interesting problems that are believed to be hard. Complete problems for CLS are for example finding an ε-KKT point, finding an ε-Banach fixed point and the Meta-Metric-Contraction problem. EOPL and UEOPL EOPL and UEOPL (which stands for "end of potential line" and "unique end of potential line") were introduced in 2020 by. EOPL captures search problems that can be solved by local search, i.e. it is possible to jump from one candidate solution to the next one in polynomial time. A problem in EOPL can be interpreted as an exponentially large, directed, acyclic graph where each node is a candidate solution and has a cost (also called potential) which increases along the edges. The in- and out-degree of each node is at most one which means that the nodes form a collection of exponentially long lines. The end of each line is the node with highest cost on that line. EOPL contains all problems that can be reduced in polynomial time to the search problem End-of-Potential-Line: Given input circuits S, P each with n input and output bits, and C with n input and m output bits, , , and , find x such that x is the end of the line , x is the start of a second line , or x violates the increasing cost , and Here, S sends each vertex of the graph to its successor, or to itself if the vertex is a sink. Likewise P sends each vertex of the graph to its predecessor, or to itself. Points outside the graph are identified by being fixed under both P and S. Then the first and second solution types are respectively the upper and lower ends of the line, and the third solution type is a violation of the condition that the potential increases along the edges. If this last condition is violated, the endpoint may not maximize the potential on the line. Therefore the problem is total: Either a solution is found or a short proof is found that the conditions are not satisfied. UEOPL is defined very similarly, but it is promised that there is only one line. Hence finding the second type of solution above would violate the promise ensuring that the first type of solution is unique. A fourth solution type is added to provide another way of detecting the presence of a second line: two points x, y such that and either or . A solution of this type either indicates that x and y are on different lines, or indicates a violation of the condition that values on the same line are strictly increasing. The advantage of including this condition is that it may be easier to find x and y as required than to find the start of their lines, or an explicit violation of the increasing cost condition. UEOPL contains, among others, the problem of solving the P-matrix-Linear complementarity problem, finding the sink of a Unique sink orientation in cubes, solving a simple stochastic game and the α-Ham Sandwich problem. 
Complete problems of UEOPL are Unique-End-of-Potential-Line, some variants of it with costs increasing exactly by 1 or an instance without the P circuit, and One-Permutation-Discrete-Contraction. EOPL captures search problems like the ones in UEOPL with the relaxation that there are multiple lines allowed and it is searched for any end of a line. There are currently no problems known that are in EOPL but not in UEOPL. EOPL is a subclass of CLS, it is unknown whether they are equal or not. UEOPL is trivially contained in EOPL. FP FP (complexity) (standing for "Function Polynomial") is the class of function problems that can be solved in deterministic polynomial time. , and it is conjectured that this inclusion is strict. This class represents the class of function problems that are believed to be computationally tractable (without randomization). If TFNP = FP, then , which should be intuitive given the fact that . However, it is generally conjectured that , and so TFNP ≠ FP. References Complexity classes Binary relations
TFNP
[ "Mathematics" ]
2,573
[ "Mathematical relations", "Binary relations" ]
4,669,465
https://en.wikipedia.org/wiki/Expressive%20power%20%28computer%20science%29
In computer science, the expressive power (also called expressiveness or expressivity) of a language is the breadth of ideas that can be represented and communicated in that language. The more expressive a language is, the greater the variety and quantity of ideas it can be used to represent. For example, the Web Ontology Language expression language profile (OWL2 EL) lacks ideas (such as negation) that can be expressed in OWL2 RL (rule language). OWL2 EL may therefore be said to have less expressive power than OWL2 RL. These restrictions allow for more efficient (polynomial time) reasoning in OWL2 EL than in OWL2 RL. So OWL2 EL trades some expressive power for more efficient reasoning (processing of the knowledge representation language). Information description The term expressive power may be used with a range of meaning. It may mean a measure of the ideas expressible in that language: regardless of ease (theoretical expressivity) concisely and readily (practical expressivity) The first sense dominates in areas of mathematics and logic that deal with the formal description of languages and their meaning, such as formal language theory, mathematical logic and process algebra. In informal discussions, the term often refers to the second sense, or to both. This is often the case when discussing programming languages. Efforts have been made to formalize these informal uses of the term. The notion of expressive power is always relative to a particular kind of thing that the language in question can describe, and the term is normally used when comparing languages that describe the same kind of things, or at least comparable kinds of things. The design of languages and formalisms involves a trade-off between expressive power and analyzability. The more a formalism can express, the harder it becomes to understand what instances of the formalism say. Decision problems become harder to answer or completely undecidable. Examples In formal language theory Formal language theory mostly studies formalisms to describe sets of strings, such as context-free grammars and regular expressions. Each instance of a formalism, e.g. each grammar and each regular expression, describes a particular set of strings. In this context, the expressive power of a formalism is the set of sets of strings its instances describe, and comparing expressive power is a matter of comparing these sets. An important yardstick for describing the relative expressive power of formalisms in this area is the Chomsky hierarchy. It says, for instance, that regular expressions, nondeterministic finite automata and regular grammars have equal expressive power, while that of context-free grammars is greater; what this means is that the sets of sets of strings described by the first three formalisms are equal, and a proper subset of the set of sets of strings described by context-free grammars. In this area, the cost of expressive power is a central topic of study. It is known, for instance, that deciding whether two arbitrary regular expressions describe the same set of strings is hard, while doing the same for arbitrary context-free grammars is completely impossible. However, it can still be efficiently decided whether any given string is in the set. For more expressive formalisms, this problem can be harder, or even undecidable. For a Turing complete formalism, such as arbitrary formal grammars, not only this problem, but every nontrivial property regarding the set of strings they describe is undecidable, a fact known as Rice's Theorem. 
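As an editorial illustration of the expressive-power gap between regular and context-free formalisms mentioned above: the language of strings a^n b^n is generated by the context-free grammar S -> aSb | ε, but it is not definable by any regular expression in the formal-language-theory sense (without backreferences), because recognizing it requires unbounded counting.

```python
# Editorial illustration: a recognizer for { a^n b^n : n >= 0 }, a context-free
# but non-regular language. The single counter stands in for the pushdown stack.

def is_anbn(s: str) -> bool:
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False   # no 'a' may follow a 'b'
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:
                return False   # more b's than a's
        else:
            return False       # alphabet is {a, b}
    return count == 0

print([is_anbn(s) for s in ["", "ab", "aaabbb", "aab", "ba"]])
# [True, True, True, False, False]
```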
There are some results on conciseness as well; for instance, nondeterministic finite automata and regular grammars are more concise than regular expressions, in the sense that the latter can be translated to the former without a blowup in size (i.e. in O(1)), while the reverse is not possible. Similar considerations apply to formalisms that describe not sets of strings, but sets of trees (e.g. XML schema languages), of graphs, or other structures. In database theory Database theory is concerned, among other things, with database queries, e.g. formulas that, given the contents of a database, specify certain information to be extracted from it. In the predominant relational database paradigm, the contents of a database are described as a finite set of finite mathematical relations; Boolean queries, that always yield true or false, are formulated in first-order logic. It turns out that first-order logic is lacking in expressive power: it cannot express certain types of Boolean queries, e.g. queries involving transitive closure. However, adding expressive power must be done with care: it must still remain possible to evaluate queries with reasonable efficiency, which is not the case, e.g., for second-order logic. Consequently, a literature sprang up in which many query languages and language constructs were compared on the basis of expressive power and efficiency, e.g. various versions of Datalog. Similar considerations apply for query languages on other types of data, e.g. XML query languages such as XQuery. See also Extensible programming Semantic spectrum Turing tarpit References Programming language topics Ontology languages
Expressive power (computer science)
[ "Engineering" ]
1,053
[ "Software engineering", "Programming language topics" ]
4,670,556
https://en.wikipedia.org/wiki/Rotary-screw%20compressor
A rotary-screw compressor is a type of gas compressor, such as an air compressor, that uses a rotary-type positive-displacement mechanism. These compressors are common in industrial applications and replace more traditional piston compressors where larger volumes of compressed gas are needed, e.g. for large refrigeration cycles such as chillers, or for compressed air systems to operate air-driven tools such as jackhammers and impact wrenches. For smaller rotor sizes the inherent leakage in the rotors becomes much more significant, leading to this type of mechanism being less suitable for smaller compressors than piston compressors. The screw compressor is identical to the screw pump except that the pockets of trapped material get progressively smaller along the screw, thus compressing the material held within the pockets. Thus the screw of a screw compressor is asymmetrical along its length, while a screw pump is symmetrical all the way. The gas compression process of a rotary screw is a continuous sweeping motion, so there is very little pulsation or surging of flow, as occurs with piston compressors. This also allows screw compressors to be significantly quieter and produce much less vibration than piston compressors, even at large sizes, and produces some benefits in efficiency. Working Rotary-screw compressors use two very closely meshing spiral rotors to compress the gas. In a dry-running rotary-screw compressor, timing gears ensure that the male and female rotors maintain precise alignment without contact which would produce rapid wear. In an oil-flooded rotary-screw compressor, lubricating oil bridges the space between the rotors, both providing a hydraulic seal and transferring mechanical energy between the rotors, allowing one rotor to be entirely driven by the other. Gas enters at the suction side and moves through the threads as the screws rotate. The meshing rotors force the gas through the compressor, and the gas exits at the end of the screws. The working area is the inter-lobe volume between the male and female rotors. It is larger at the intake end, and decreases along the length of the rotors until the exhaust port. This change in volume is the compression. The intake charge is drawn in at the end of the rotors in the large clearance between the male and female lobes. At the intake end the male lobe is much smaller than its female counterpart, but the relative sizes reverse proportions along the lengths of both rotors (the male becomes larger and the female smaller) until (tangential to the discharge port) the clearance space between each pair of lobes is much smaller. This reduction in volume causes compression of the charge before being presented to the output manifold. The effectiveness of this mechanism is dependent on precisely fitting clearances between the spiral rotors and between the rotors and the chamber for sealing of the compression cavities. However, some leakage is inevitable, and high rotational speeds must be used to minimize the ratio of leakage flow rate over effective flow rate. In contrast to Roots blowers, modern screw compressors are made with different profiles on the two rotors: the male rotor has convex lobes which mesh with the concave cavities of the female rotor. Usually the male rotor has fewer lobes than the female rotor, so that it rotates faster. Originally, screw compressors were made with symmetrical rotor cavity profiles, but modern versions use asymmetrical rotors, with the exact rotor designs being the subject of patents. 
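As a rough editorial idealisation of how the shrinking inter-lobe volume described above translates into pressure (an assumption of this sketch, not a statement from the article): for lossless, isentropic compression of air, the internal pressure ratio follows from the built-in volume ratio as P2/P1 = (V1/V2)^k with k about 1.4; real machines deviate because of leakage, oil injection and cooling.

```python
# Editorial idealisation, not from the article: internal pressure ratio implied by
# a built-in volume ratio under lossless, isentropic compression of air (k = 1.4).

K_AIR = 1.4

def internal_pressure_ratio(built_in_volume_ratio: float, k: float = K_AIR) -> float:
    """P_discharge / P_suction = (V_suction / V_discharge) ** k."""
    return built_in_volume_ratio ** k

for v_ratio in (2.6, 3.6, 4.8):   # hypothetical built-in volume ratios
    print(v_ratio, round(internal_pressure_ratio(v_ratio), 2))
```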
Size The capacities of rotary-screw compressors are typically rated in horsepower (HP), Standard Cubic Feet per Minute (SCFM)* and pounds per square inch gauge (PSIG). For units in the 5 through 30 HP range, the physical size of these units is comparable to a typical two-stage compressor. As horsepower increases, there is a substantial economy of scale in favor of the rotary-screw compressors. As an example, a 250 HP compound compressor is a large piece of equipment that generally requires a special foundation, building accommodations and highly trained riggers to place the equipment. On the other hand, a 250 HP rotary-screw compressor can be placed on an ordinary shop floor using a standard forklift. Within industry, a 250 HP rotary-screw compressor is generally considered to be a compact piece of equipment. Rotary-screw compressors are commonly available in the 5 through 500 HP range and can produce air flows in excess of 2500 SCFM. While the pressure produced by a single-stage screw compressor is limited to 250 PSIG, a two-stage screw compressor can deliver pressures of up to 600 PSIG. Rotary-screw compressors tend to be smooth running with limited vibration, thus not requiring a specialized foundation or mounting system. Normally, rotary-screw compressors are mounted using standard rubber isolation mounts designed to absorb high-frequency vibrations. This is especially true in rotary-screw compressors that operate at high rotational speeds. *To a lesser extent, some compressors are rated in Actual Cubic Feet per Minute (ACFM). Still others are rated in Cubic Feet per Minute (CFM). Using CFM alone to rate a compressor is incomplete, because a volumetric flow rate is only meaningful together with a pressure reference, e.g. 20 CFM at 60 PSI. History The screw compressor was first patented in 1878 by Heinrich Krigar in Germany; however, the patent expired without a working machine being built. The modern helical lobe screw compressor was developed in Sweden by Alf Lysholm, who was the chief engineer at Ljungstroms Angturbin. Lysholm developed the screw compressor while looking for a way to overcome compressor surge in gas turbines. He first considered a Roots-type blower but found it unable to generate a high enough pressure ratio. In 1935, Ljungstroms patented a helical lobe screw compressor, which was then widely licensed to other manufacturers. Ljungstroms Angturbin AB was renamed Svenska Rotor Maskiner (SRM) in 1951. In 1952, the first Holroyd cutting machine was used by the Scottish engineering company Howden to produce helical lobe compressor rotors, greatly reducing both cost and manufacturing time. In 1954, Howden and SRM jointly developed the first oil-flooded screw compressor. Flooding provided both cooling, which allowed higher pressure ratios, and the elimination of timing gears. The first commercially available flooded screw air compressor was introduced in 1957 by Atlas Copco. Slot valves were developed by SRM in the 1950s, allowing for improvements in capacity control, which had been a limiting factor for screw compressor application. Asymmetric rotors were first patented by SRM and subsequently introduced commercially by Sullair in 1969. The introduction of asymmetric rotors improved sealing, further increasing the type's efficiency. Applications Rotary-screw compressors are generally used to supply compressed air for larger industrial applications. 
They are best applied in applications that have a continuous air demand, such as food packaging plants and automated manufacturing systems, although a large enough number of intermittent demands, along with some storage, will also present a suitably continuous load. In addition to fixed units, rotary-screw compressors are commonly mounted on tow-behind trailers and powered with small diesel engines. These portable compression systems are typically referred to as construction compressors. Construction compressors are used to provide compressed air to jackhammers, riveting tools, pneumatic pumps, sand blasting operations and industrial paint systems. They are commonly seen at construction sites and on duty with road repair crews throughout the world. Screw air compressors are also commonly used on rotary, DTH and RC drill rigs used in mining production and exploration drilling applications, and in oil and gas pipeline services such as pneumatic testing or air pigging. Oil-free In an oil-free compressor, the air is compressed entirely through the action of the screws, without the assistance of an oil seal. They usually have lower maximal discharge pressure capability as a result. However, multi-stage oil-free compressors, where the air is compressed by several sets of screws, can achieve considerably higher pressures and output volumes. Oil-free compressors are used in applications where entrained oil carry-over is not acceptable, such as medical research and semiconductor manufacturing. However, this does not preclude the need for filtration, as hydrocarbons and other contaminants ingested from the ambient air must also be removed prior to the point of use. Consequently, air treatment identical to that used for an oil-flooded screw compressor is frequently required to ensure quality compressed air. In small piston compressors sold to carpenters and homeowners, "oil-free" sometimes refers not to the mechanism described here but to designs in which Teflon-type coatings permanently adhered to the wear surfaces take the place of oil lubrication. Oil-injected In an oil-injected rotary-screw compressor, oil is injected into the compression cavities to aid sealing and provide cooling for the gas charge. The oil is separated from the discharge stream, cooled, filtered and recycled. The oil captures non-polar particulates from the incoming air, effectively reducing the particle loading of compressed-air particulate filtration. It is usual for some entrained compressor oil to carry into the compressed-gas stream downstream of the compressor. In many applications, this is rectified by coalescer/filter vessels. Refrigerated compressed air dryers with internal cold coalescing filters are rated to remove more oil and water than coalescing filters that are downstream of air dryers, because after the air is cooled and the moisture is removed, the cold air is used to pre-cool the hot entering air, which warms the exiting air. In other applications, this is rectified by the use of receiver tanks that reduce the local velocity of compressed air, allowing oil to condense, drop out of the air stream, and be removed from the compressed-air system by condensate-management equipment. Oil-flooded screw compressors are used in a wide variety of applications including air compression, gas refrigeration, hydrocarbon processing and power utilization from low-grade heat sources. Sizes range from small workshop air compressors to heavy industrial compressors with very high output pressures. New oil-flooded screw air compressors release <5 mg/m3 of oil carryover. 
Lubricants, polyalkylene glycol (PAG), polyalphaolefin (PAO), mineral oils PAG oil is polyalkylene glycol, also called polyglycol. PAG oil burns off cleanly, leaving no residue, and has been used as a carrier oil for solid lubricants for high-temperature chain lubrication. Some versions are food grade and biodegradable. PAG lubricants are used by the two largest U.S. air compressor OEMs in rotary screw air compressors. PAG oil-injected compressors are not used to spray paint, because PAG oil dissolves paints. Reaction-hardening two-component epoxy resin paints are resistant to PAG oil. Polyglycols are not compatible with mineral-oil-based greases; a mixture of polyglycols with mineral oils results in a gelatinous, gooey mess. Silicone grease, however, does tolerate polyglycols, and one pneumatic controls manufacturer puts silicone grease on its seals and gaskets. Compressors lubricated with mineral oil (but not with polyalkylene glycol oil) are recommended where downstream components have seals coated with mineral-oil greases, such as pneumatic high-speed 4-way valves and air cylinders that operate without mineral-oil lubricators. One manufacturer has rated its pneumatic high-speed 4-way valves for a life of 50 million cycles if not exposed to polyglycol oils. Polyalphaolefin (PAO) oil is compatible with mineral-oil greases. Conical screw compressor The relatively recently developed conical screw compressor is in effect a conical spiral extension of a gerotor. It does not have the inherent "blow-hole" leakage path which, in well-designed screw compressors, is responsible for significant leakage through the assembly. This allows much smaller rotors to have practical efficiency, since at smaller sizes the leakage area does not become as large a portion of the pumping area as in straight screw compressors. In conjunction with the decreasing diameter of the cone-shaped rotor, this also allows much higher compression ratios in a single stage with lower output pulsation. Control schemes Among rotary-screw compressors, there are multiple control schemes, each with differing advantages and disadvantages. Start/stop In a start/stop control scheme, compressor controls actuate relays to apply and remove power to the motor according to compressed air needs. Significant storage is required in most usage cases; if the load is intermittent or is poorly matched to the compressor, the storage required will often be larger than the compressor itself. Load/unload In a load/unload control scheme, the compressor remains continuously powered. However, when the demand for compressed air is satisfied or reduced, instead of disconnecting power to the compressor, a device known as a slide valve is activated. This device uncovers part of the rotor and proportionately reduces capacity of the machine down to typically 25% of the compressor's capability, thereby unloading the compressor. This reduces the number of start/stop cycles for electric motors over a start/stop control scheme in electrically driven compressors, improving equipment service life with a minimal change in operating cost. When a load/unload control scheme is combined with a timer to stop the compressor after a predetermined period of continuously unloaded operation, it is known as a dual-control or auto-dual scheme. This control scheme still requires storage, since there are only two production rates available to match consumption, although significantly less than a start/stop scheme. Most diesel-powered air compressors use this method. 
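As an illustration of the load/unload logic just described, the following minimal sketch shows a hysteresis controller with an auto-stop (dual-control) timer. The pressure set-points and timer value are assumptions chosen only for illustration, not figures from any particular manufacturer.

```python
# Minimal sketch (not vendor logic) of a load/unload control scheme with an
# auto-stop timer: the compressor loads below a cut-in pressure, unloads above
# a cut-out pressure, and stops after running unloaded for a set time.

from dataclasses import dataclass

@dataclass
class LoadUnloadController:
    cut_in_psig: float = 110.0    # load below this pressure (assumed set-point)
    cut_out_psig: float = 125.0   # unload above this pressure (assumed set-point)
    unload_stop_s: float = 300.0  # stop after this long unloaded (assumed timer)
    loaded: bool = False
    running: bool = True
    unloaded_time_s: float = 0.0

    def step(self, pressure_psig: float, dt_s: float) -> str:
        """Advance the controller by one sample of system pressure."""
        if not self.running:
            if pressure_psig < self.cut_in_psig:      # restart on renewed demand
                self.running, self.loaded = True, True
                self.unloaded_time_s = 0.0
            return "stopped"
        if pressure_psig >= self.cut_out_psig:        # demand satisfied: unload
            self.loaded = False
        elif pressure_psig <= self.cut_in_psig:       # pressure sagging: load
            self.loaded = True
        if self.loaded:
            self.unloaded_time_s = 0.0
            return "loaded"
        self.unloaded_time_s += dt_s
        if self.unloaded_time_s >= self.unload_stop_s:  # dual-control auto-stop
            self.running = False
            return "stopped"
        return "unloaded"

if __name__ == "__main__":
    ctrl = LoadUnloadController()
    for p in (100, 120, 126, 124, 112, 109):  # example pressure readings, psig
        print(p, "psig ->", ctrl.step(p, dt_s=60.0))
```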
Modulation Instead of starting and stopping the compressor, a slide valve as described above continuously modulates capacity to the demand rather than being controlled in steps. While this yields a consistent discharge pressure over a wide range of demand, overall power consumption may be higher than with a load/unload scheme, resulting in approximately 70% of full-load power consumption when the compressor is at a zero-load condition. Due to the limited adjustment in compressor power consumption relative to compressed-air output capacity, modulation is a generally inefficient method of control when compared to variable-speed drives. However, for applications where it is not readily possible to frequently cease and resume operation of the compressor (such as when a compressor is driven by an internal-combustion engine and operated without the presence of a compressed-air receiver), modulation is suitable. The continuously variable production rate also eliminates the need for significant storage if the load never exceeds the compressor capacity. Variable displacement Utilized by compressor companies Quincy Compressor, Kobelco, Gardner Denver, Kaishan USA, and Sullair, variable displacement alters the percentage of the screw compressor rotors working to compress air by allowing air flow to bypass portions of the screws. While this does reduce power consumption when compared to a modulation control scheme, a load/unload system can be more effective with large amounts of storage (10 gallons per CFM). If a large amount of storage is not practical, a variable-displacement system can be very effective, especially at greater than 70% of full load. One way that variable displacement may be accomplished is by using multiple lifting valves on the suction side of the compressor, each plumbed to a corresponding location on the discharge. In automotive superchargers, this is analogous to the operation of a bypass valve. Variable speed While an air compressor powered by a variable-speed drive can offer the lowest operating-energy cost without any appreciable reduction in service life over a properly maintained load/unload compressor, the variable-frequency power inverter of a variable-speed drive typically adds significant cost to the design of such a compressor, reducing its economic benefits over a properly sized load/unload compressor if air demand is constant. However, a variable-speed drive provides for a nearly linear relationship between compressor power consumption and free air delivery allowing the most efficient operation over a very wide range of air demand. The compressor will still have to enter start/stop mode for very low demand as efficiency still drops off rapidly at low production rates due to rotor leakage. In harsh environments (hot, humid or dusty) the electronics of variable-speed drives may have to be protected to retain expected service life. Superchargers The twin-screw type supercharger is a positive displacement type device that operates by pushing air through a pair of meshing close-tolerance screws similar to a set of worm gears. Twin-screw superchargers are also known as Lysholm superchargers (or compressors) after their inventor, Alf Lysholm. Each rotor is radially symmetrical, but laterally asymmetric. By comparison, conventional "Roots" type blowers have either identical rotors (with straight rotors) or mirror-image rotors (with helixed rotors). The Whipple-manufactured male rotor has three lobes, the female five lobes. 
The Kenne-Bell male rotor has four lobes, the female six lobes. Females in some earlier designs had four. By comparison, Roots blowers always have the same number of lobes on both rotors, typically 2, 3 or 4. Comparative advantages The rotary screw compressor has low leakage levels and low parasitic losses compared with the Roots type. The supercharger is typically driven directly from the engine's crankshaft via a belt or gear drive. Unlike the Roots-type supercharger, the twin-screw exhibits internal compression, which is the ability of the device to compress air within the housing as it is moved through the device, instead of relying upon resistance to flow downstream of the discharge to establish an increase of pressure. The requirement of high-precision computer-controlled manufacturing techniques makes the screw-type supercharger a more expensive alternative to other forms of available forced induction. With later technology, manufacturing cost has been lowered while performance has increased. All supercharger types benefit from the use of an intercooler to reduce heat produced during pumping and compression. The adoption of twin-screw superchargers by companies such as Ford, Mazda, Mercedes and Mercury Marine demonstrates the effectiveness of the design. While some centrifugal superchargers are consistent and reliable, they typically do not produce full boost until near peak engine rpm, while positive displacement superchargers such as Roots-type superchargers and twin-screw types offer more immediate boost. In addition, twin-screw superchargers can maintain reasonable boost to higher rpm better than other positive displacement superchargers. Related terms The term "blower" is commonly used to define a device placed on engines with a functional need for additional airflow, such as a 2-stroke Diesel engine, where positive intake pressure is needed to "scavenge", or clear spent exhaust gases from the cylinder and force a fresh intake charge into the cylinder before the compression stroke. The term "blower" is applied to rotary screw, Roots-type, and centrifugal compressors when utilized as part of an automotive forced induction system. The term 'cabin blower' is also used for the pressurisation of aircraft for high altitude flight, which used Roots-type compressors particularly in the 1950s (see Marshall supercharger). See also Gas compressor Guided-rotor compressor Reciprocating compressor Vapor-compression refrigeration Variable-speed air compressor References Gas compressors
Rotary-screw compressor
[ "Chemistry" ]
4,016
[ "Gas compressors", "Turbomachinery" ]
20,631,781
https://en.wikipedia.org/wiki/Bacterial%20adhesion%20in%20aquatic%20system
Bacterial adhesion involves the attachment (or deposition) of bacteria on a surface (solid, gel layer, etc.). This interaction plays an important role in natural systems as well as in environmental engineering. The attachment of biomass on a membrane surface will result in membrane fouling, which can significantly reduce the efficiency of treatment systems using membrane filtration processes in wastewater treatment plants. Low adhesion of bacteria to soil is an essential key to the success of in-situ bioremediation in groundwater treatment. However, the contamination of drinking water by pathogens could be linked to the transport of microorganisms in groundwater and other water sources. Controlling and preventing the adverse impacts of bacterial deposition on the aquatic environment requires a deep understanding of the mechanisms of this process. DLVO theory has been used extensively to describe the deposition of bacteria in much current research. Prediction of bacterial deposition by classical DLVO theory DLVO theory describes the interaction potential between charged surfaces. It is the sum of the electrostatic double layer interaction, which can be either attractive or repulsive, and the attractive van der Waals interaction of the charged surfaces. DLVO theory is applied widely in explaining the aggregation and deposition of colloidal particles and nanoparticles, such as fullerene C60, in aquatic systems. Because bacteria and colloid particles share similarities in size and surface charge, the deposition of bacteria can also be described by DLVO theory. The prediction is based on the sphere-plate interaction between one cell and the surface. The electrostatic double layer interaction can be described by the constant-surface-potential expression, in which ε0 is the vacuum permittivity, εr is the relative dielectric permittivity of water, ap is the equivalent spherical radius of the bacteria, κ is the inverse of the Debye length, h is the separation distance between the bacterium and the collector surface, and ψp and ψc are the surface potentials of the bacterial cell and the collector surface. Zeta potentials at the surfaces of the bacteria and the collector were used instead of the surface potentials. The retarded van der Waals interaction potential was calculated using the expression from Gregory (1981), in which A is the Hamaker constant for the bacteria–water–collector surface (quartz) system, equal to 6.5 × 10−21 J, λ is the characteristic wavelength of the dielectric, which can be assumed to be 100 nm, a is the equivalent radius of the bacteria, and h is the separation distance from the collector surface to the bacteria. Thus, the total interaction between the bacteria and the charged surface can be expressed as the sum of these two contributions. Current experimental result Experimental method A radial stagnation point flow (RSPF) system has been used for bacterial adhesion experiments testing the predictions of DLVO theory. It is a well-characterized experimental system and is useful for visualizing the deposition of individual bacteria on a uniformly charged, flat quartz surface. The deposition of bacteria on the surface was observed and estimated through an inverted microscope and recorded at regular intervals (10 s or 20 s) with a digital camera. Flow field at the stagnation point. Many bacterial strains have been used for the experiments. They are: Cryptosporidium parvum oocysts, having 3.7 μm equivalent spherical diameter. 
Escherichia coli, having 1.7 μm equivalent spherical diameter. Pseudomonas aeruginosa, having 1.24 μm equivalent spherical diameter. All of the bacterial strains have negative zeta potentials at the experimental pH (5.5 and 5.8), and these become less negative at higher ionic strength in both mono- and divalent salt solutions. Ultrapure quartz surface collectors have been used extensively due to their surface homogeneity, which is an important factor for applying DLVO theory. The quartz surface originally has a negative potential. However, the surface of the collectors was usually modified to have a positive surface for favorable-deposition experiments. In some experiments, the surface collector was coated with a negatively charged alginate layer to simulate the conditioning films found in natural systems. Result Using DLVO theory, it was concluded that bacterial deposition mainly occurred in a secondary energy minimum. DLVO calculations predicted an energy barrier ranging from 140 kT at 31.6 mM ionic strength to over 2000 kT at 1 mM ionic strength. These predictions were not in agreement with the experimental data, which showed increasing deposition with increasing ionic strength. Therefore, deposition could occur in the secondary minimum, whose depth ranges from 0.09 kT to 8.1 kT at 1 mM and 31.6 mM ionic strength, respectively. This conclusion was further supported by the partial release of deposited bacteria when the ionic strength decreased. Because the amount of released bacteria was less than 100%, it was suggested that bacteria could also deposit in the primary minimum due to the heterogeneity of the collector or bacterial surface. This behavior is not captured by classical DLVO theory. The presence of divalent electrolytes (Ca2+) can neutralize the surface charge of bacteria through binding between Ca2+ and functional groups on the oocyst surface. This resulted in observable bacterial deposition despite the very high electrostatic repulsive energy predicted by DLVO theory. The motility of bacteria also has a significant effect on bacterial adhesion. Nonmotile and motile bacteria showed different behavior in deposition experiments. At the same ionic strength, motile bacteria showed greater adhesion to the surface than nonmotile bacteria, and motile bacteria can attach to the collector surface even at high repulsive electrostatic force. It was suggested that the swimming energy of the cells could overcome the repulsive energy, or that they can adhere to regions of heterogeneity on the surface. The swimming capacity increases with ionic strength, and 100 mM is the optimal concentration for the rotation of flagella. Despite the electrostatic repulsion energy calculated from DLVO theory between the bacteria and the collector surface, deposition can occur due to other interactions, such as the steric effect of flagella and the strong hydrophobicity of the cell. References Physical chemistry Colloidal chemistry
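The sphere–plate expressions referred to in this article can be evaluated numerically. The sketch below uses the constant-potential double-layer form commonly attributed to Hogg, Healy and Fuerstenau together with Gregory's (1981) retarded van der Waals term; the Hamaker constant and characteristic wavelength are the values quoted above, while the radius, zeta potentials and Debye length are illustrative assumptions only.

```python
import numpy as np

# Sketch of a sphere-plate DLVO calculation: constant-potential electrostatic
# double-layer term plus the retarded van der Waals term of Gregory (1981).

EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R = 78.5              # relative permittivity of water (assumed, ~25 C)
KT = 1.381e-23 * 298.0    # thermal energy at 298 K, J

A_HAMAKER = 6.5e-21       # J, bacteria-water-quartz (value from the text)
WAVELEN = 100e-9          # m, characteristic wavelength (value from the text)
RADIUS = 0.85e-6          # m, ~E. coli equivalent radius (assumed)
PSI_P, PSI_C = -0.030, -0.020   # V, zeta potentials (assumed)
KAPPA = 1.0 / 3.0e-9      # 1/m, inverse Debye length (assumed, ~10 mM 1:1 salt)

def v_edl(h):
    """Electrostatic double-layer energy, constant-potential sphere-plate form."""
    e = np.exp(-KAPPA * h)
    return np.pi * EPS0 * EPS_R * RADIUS * (
        2 * PSI_P * PSI_C * np.log((1 + e) / (1 - e))
        + (PSI_P**2 + PSI_C**2) * np.log(1 - e**2)
    )

def v_vdw(h):
    """Retarded van der Waals energy (Gregory 1981), sphere-plate geometry."""
    return -A_HAMAKER * RADIUS / (6 * h * (1 + 14 * h / WAVELEN))

if __name__ == "__main__":
    for h_nm in (1, 2, 5, 10, 20, 50):
        h = h_nm * 1e-9
        total = v_edl(h) + v_vdw(h)
        print(f"h = {h_nm:3d} nm : V_total ≈ {total / KT:8.1f} kT")
```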
Bacterial adhesion in aquatic system
[ "Physics", "Chemistry" ]
1,276
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Colloids", "Surface science", "nan", "Physical chemistry" ]
20,637,356
https://en.wikipedia.org/wiki/Closure%20temperature
In radiometric dating, closure temperature or blocking temperature refers to the temperature of a system, such as a mineral, at the time given by its radiometric date. In physical terms, the closure temperature is the temperature at which a system has cooled so that there is no longer any significant diffusion of the parent or daughter isotopes out of the system and into the external environment. The concept's initial mathematical formulation was presented in a seminal paper by Martin H. Dodson, "Closure temperature in cooling geochronological and petrological systems" in the journal Contributions to Mineralogy and Petrology, 1973, with refinements to a usable experimental formulation by other scientists in later years. This temperature varies broadly among different minerals and also differs depending on the parent and daughter atoms being considered. It is specific to a particular material and isotopic system. The closure temperature of a system can be experimentally determined in the lab by artificially resetting sample minerals using a high-temperature furnace. As the mineral cools, the crystal structure begins to form and diffusion of isotopes slows. At a certain temperature, the crystal structure has formed sufficiently to prevent diffusion of isotopes. This temperature is what is known as blocking temperature and represents the temperature below which the mineral is a closed system to measurable diffusion of isotopes. The age that can be calculated by radiometric dating is thus the time at which the rock or mineral cooled to blocking temperature. These temperatures can also be determined in the field by comparing them to the dates of other minerals with well-known closure temperatures. Closure temperatures are used in geochronology and thermochronology to date events and determine rates of processes in the geologic past. Table of values The following table represents the closure temperatures of some materials. These values are the approximate values of the closure temperatures of certain minerals listed by the isotopic system being used. These values are approximations; better values of the closure temperature require more precise calculations and characterizations of the diffusion characteristics of the mineral grain being studied. Potassium-argon method Uranium-lead method Electron spin resonance dating References Radiometric dating
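Dodson's formulation can be illustrated with a short numerical sketch. The relation below, Tc = (E/R) / ln(A τ D0/a²) with τ = R Tc²/(E |dT/dt|), is the commonly cited form of Dodson's result; the Arrhenius parameters, grain size and cooling rate used here are placeholders, not measured values for any real mineral.

```python
import math

# Fixed-point solution of Dodson's closure-temperature relation (illustrative only).

R = 8.314  # gas constant, J/(mol K)

def closure_temperature(E, D0, a, cooling_rate, A=55.0, tc_guess=600.0):
    """E in J/mol, D0 in m^2/s, grain radius a in m, cooling rate in K/s.
    A = 55 is Dodson's geometry factor for a spherical grain."""
    tc = tc_guess  # K
    for _ in range(100):
        tau = R * tc**2 / (E * cooling_rate)          # characteristic time
        tc_new = (E / R) / math.log(A * tau * D0 / a**2)
        if abs(tc_new - tc) < 1e-6:
            break
        tc = tc_new
    return tc

if __name__ == "__main__":
    # Hypothetical Arrhenius parameters and a 10 K per million years cooling rate
    E = 250e3                      # J/mol (assumed activation energy)
    D0 = 1e-6                      # m^2/s (assumed frequency factor)
    a = 100e-6                     # m, grain radius (assumed)
    rate = 10 / (1e6 * 3.156e7)    # K/s
    print(f"Tc ≈ {closure_temperature(E, D0, a, rate) - 273.15:.0f} °C")
```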
Closure temperature
[ "Physics", "Chemistry" ]
430
[ "Radiometric dating", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Nuclear physics", "Radioactivity" ]
20,639,064
https://en.wikipedia.org/wiki/Interbilayer%20forces%20in%20membrane%20fusion
Membrane fusion is a key biophysical process that is essential for the functioning of life itself. It is defined as the event where two lipid bilayers approach each other and then merge to form a single continuous structure. In living beings, cells are bounded by an outer coat made of lipid bilayers; these bilayers undergo fusion in events such as fertilization, embryogenesis and even infections by various types of bacteria and viruses. It is therefore an extremely important event to study. From an evolutionary angle, fusion is an extremely controlled phenomenon. Random fusion can result in severe problems for the normal functioning of the human body. Fusion of biological membranes is mediated by proteins. Regardless of the complexity of the system, fusion essentially occurs due to the interplay of various interfacial forces, namely hydration repulsion, hydrophobic attraction and van der Waals forces. Inter-bilayer forces Lipid bilayers are structures of lipid molecules consisting of a hydrophobic tail and a hydrophilic head group. Therefore, these structures experience all the characteristic interbilayer forces involved in that regime. Hydration repulsion Two hydrated bilayers experience strong repulsion as they approach each other. These forces have been measured using the surface forces apparatus (SFA), an instrument used for measuring forces between surfaces. This repulsion was first proposed by Langmuir and was thought to arise due to water molecules that hydrate the bilayers. Hydration repulsion can thus be defined as the work required to remove the water molecules around hydrophilic molecules (like lipid head groups) in the bilayer system. As water molecules have an affinity towards hydrophilic head groups, they try to arrange themselves around the head groups of the lipid molecules, and it becomes very hard to separate this favorable combination. Experiments performed with the SFA have confirmed that this force decays exponentially with distance. The potential VR is given by VR(z) = CR exp(−z/λR), where CR (>0) is a measure of the hydration interaction energy for hydrophilic molecules of the given system, λR is a characteristic length scale of hydration repulsion and z is the distance of separation. In other words, it is on distances up to this length that molecules/surfaces fully experience this repulsion. Hydrophobic attraction Hydrophobic forces are the attractive entropic forces between any two hydrophobic groups in aqueous media, e.g. the forces between two long hydrocarbon chains in aqueous solutions. The magnitude of these forces depends on the hydrophobicity of the interacting groups as well as the distance separating them (they are found to decrease roughly exponentially with the distance). The physical origin of these forces is a debated issue, but they have been found to be long-ranged and are the strongest among all the physical interaction forces operating between biological surfaces and molecules. Due to their long-range nature, they are responsible for rapid coagulation of hydrophobic particles in water and play important roles in various biological phenomena, including folding and stabilization of macromolecules such as proteins and fusion of cell membranes. The potential VA is given by VA(z) = CA exp(−z/λA), where CA (<0) is a measure of the hydrophobic interaction energy for the given system, λA is a characteristic length scale of hydrophobic attraction and z is the distance of separation. 
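The two exponential terms just defined can be combined numerically to show how a short-ranged repulsion and a longer-ranged attraction compete. The magnitudes and decay lengths in the sketch below are assumed, illustrative values only, not measurements for any particular lipid system.

```python
import math

# Numerical sketch of the interfacial terms defined above:
#   V_R(z) = C_R * exp(-z / lambda_R)   (hydration repulsion, C_R > 0)
#   V_A(z) = C_A * exp(-z / lambda_A)   (hydrophobic attraction, C_A < 0)

C_R, LAMBDA_R = 5e-3, 0.3e-9     # J/m^2, m  (assumed values)
C_A, LAMBDA_A = -1e-3, 2.0e-9    # J/m^2, m  (assumed values)

def v_total(z):
    """Sum of hydration repulsion and hydrophobic attraction per unit area."""
    return C_R * math.exp(-z / LAMBDA_R) + C_A * math.exp(-z / LAMBDA_A)

if __name__ == "__main__":
    for z_nm in (0.2, 0.5, 1.0, 2.0, 4.0):
        print(f"z = {z_nm:3.1f} nm : V ≈ {v_total(z_nm * 1e-9) * 1e3:+7.3f} mJ/m^2")
```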
van der Waals forces in bilayers These forces arise due to dipole–dipole interactions (induced/permanent) between molecules of the bilayers. As molecules come closer, this attractive force arises due to the ordering of these dipoles, much like magnets that align and attract each other as they approach. This also implies that any surface would experience a van der Waals attraction. In bilayers, the van der Waals interaction potential per unit area, VVDW, takes the form VVDW(z) = −(H/12π)[1/z² + 1/(z + 2D)² − 2/(z + D)²], where H is the Hamaker constant and D and z are the bilayer thickness and the distance of separation, respectively. Background For fusion to take place, the membranes have to overcome large repulsive forces due to the strong hydration repulsion between hydrophilic lipid head groups. However, it has been hard to exactly determine the connection between adhesion, fusion and interbilayer forces. The forces that promote cell adhesion are not the same as the ones that promote membrane fusion. Studies show that by creating a stress on the interacting bilayers, fusion can be achieved without disrupting the interbilayer interactions. It has also been suggested that membrane fusion takes place through a sequence of structural rearrangements that help to overcome the barrier that prevents fusion. Thus, interbilayer fusion takes place through a local approach of the membranes, followed by structural rearrangements that allow the hydration repulsion forces to be overcome, and finally complete merging to form a single entity. Interbilayer interactions during membrane fusion When two lipid bilayers approach each other, they experience weak van der Waals attractive forces and much stronger repulsive forces due to hydration repulsion. These forces are normally dominant over the hydrophobic attractive forces between the membranes. Studies done on membrane bilayers using the surface forces apparatus (SFA) indicate that membrane fusion can occur instantaneously while two bilayers are still at a finite distance from each other, without them having to overcome the short-range repulsive force barrier. This is attributed to molecular rearrangements that allow the membranes to bypass these forces. During fusion, the hydrophobic tails of a small patch of lipids on the cell membrane are exposed to the aqueous phase surrounding them. This results in very strong hydrophobic attractions (which dominate the repulsive force) between the exposed groups, leading to membrane fusion. The attractive van der Waals forces play a negligible role in membrane fusion. Thus, fusion is a result of the hydrophobic attractions between internal hydrocarbon chain groups that are exposed to the normally inaccessible aqueous environment. Fusion is observed to start at points on the membranes where the membrane stresses are either the weakest or the strongest. Applications Interbilayer forces play a key role in mediating membrane fusion, which has extremely important biomedical applications. The most important application of membrane fusion is in the production of hybridomas, which are cells that arise as a result of the fusion of antibody-secreting and immortal B-cells. Hybridomas are used in industry for the production of monoclonal antibodies. Membrane fusion also has a major role in cancer immunotherapy. Currently, one of the approaches in cancer immunotherapy involves vaccination with dendritic cells that express a specific tumor antigen on their membranes. Instead, the hybrid cells obtained from the fusion of dendritic cells with tumor cells can be used. 
These hybrids would help in the expression of a range of tumor-associated antigens on their membranes. Understanding membrane fusion better can also lead to improvements in gene therapy. See also Cell membrane Hydrate Hydrophobic effect Lipid bilayers Surface forces apparatus References Intermolecular forces Membrane biology Surface science Biophysics
Interbilayer forces in membrane fusion
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
1,429
[ "Applied and interdisciplinary physics", "Molecular physics", "Membrane biology", "Materials science", "Surface science", "Intermolecular forces", "Biophysics", "Condensed matter physics", "Molecular biology" ]
446,457
https://en.wikipedia.org/wiki/Iron%28III%29%20chloride
Iron(III) chloride describes the inorganic compounds with the formula (H2O)x. Also called ferric chloride, these compounds are some of the most important and commonplace compounds of iron. They are available both in anhydrous and in hydrated forms, which are both hygroscopic. They feature iron in its +3 oxidation state. The anhydrous derivative is a Lewis acid, while all forms are mild oxidizing agents. It is used as a water cleaner and as an etchant for metals. Electronic and optical properties All forms of ferric chloride are paramagnetic, owing to the presence of unpaired electrons residing in 3d orbitals. Although Fe(III) chloride can be octahedral or tetrahedral (or both, see structure section), all of these forms have five unpaired electrons, one per d-orbital. The high spin d5 electronic configuration requires that d-d electronic transitions are spin forbidden, in addition to violating the Laporte rule. This double forbidden-ness results in its solutions being only pale colored. Or, stated more technically, the optical transitions are non-intense. Aqueous ferric sulfate and ferric nitrate, which contain , are nearly colorless, whereas the chloride solutions are yellow. Thus, the chloride ligands significantly influence the optical properties of the iron center. Structure Iron(III) chloride can exist as an anhydrous material and a series of hydrates, which results in distinct structures. Anhydrous The anhydrous compound is a hygroscopic crystalline solid with a melting point of 307.6 °C. The colour depends on the viewing angle: by reflected light, the crystals appear dark green, but by transmitted light, they appear purple-red. Anhydrous iron(III) chloride has the structure, with octahedral Fe(III) centres interconnected by two-coordinate chloride ligands. Iron(III) chloride has a relatively low melting point and boils at around 315 °C. The vapor consists of the dimer , much like aluminium chloride. This dimer dissociates into the monomeric (with D3h point group molecular symmetry) at higher temperatures, in competition with its reversible decomposition to give iron(II) chloride and chlorine gas. Hydrates Ferric chloride form hydrates upon exposure to water, reflecting its Lewis acidity. All hydrates exhibit deliquescence, meaning that they become liquid by absorbing moisture from the air. Hydration invariably gives derivatives of aquo complexes with the formula . This cation can adopt either trans or cis stereochemistry, reflecting the relative location of the chloride ligands on the octahedral Fe center. Four hydrates have been characterized by X-ray crystallography: the dihydrate , the disesquihydrate , the trisesquihydrate , and finally the hexahydrate . These species differ with respect to the stereochemistry of the octahedral iron cation, the identity of the anions, and the presence or absence of water of crystallization. The structural formulas are , , , and . The first three members of this series have the tetrahedral tetrachloroferrate () anion. Solution Like the solid hydrates, aqueous solutions of ferric chloride also consist of the octahedral of unspecified stereochemistry. Detailed speciation of aqueous solutions of ferric chloride is challenging because the individual components do not have distinctive spectroscopic signatures. Iron(III) complexes, with a high spin d5 configuration, is kinetically labile, which means that ligands rapidly dissociate and reassociate. 
A further complication is that these solutions are strongly acidic, as expected for aquo complexes of a tricationic metal. Iron aquo complexes are prone to olation, the formation of polymeric oxo derivatives. Dilute solutions of ferric chloride produce soluble nanoparticles with molecular weight of 104, which exhibit the property of "aging", i.e., the structure change or evolve over the course of days. The polymeric species formed by the hydrolysis of ferric chlorides are key to the use of ferric chloride for water treatment. In contrast to the complicated behavior of its aqueous solutions, solutions of iron(III) chloride in diethyl ether and tetrahydrofuran are well-behaved. Both ethers form 1:2 adducts of the general formula FeCl3(ether)2. In these complexes, the iron is pentacoordinate. Preparation Several hundred tons of anhydrous iron(III) chloride are produced annually. The principal method, called direct chlorination, uses scrap iron as a precursor: The reaction is conducted at several hundred degrees such that the product is gaseous. Using excess chlorine guarantees that the intermediate ferrous chloride is converted to the ferric state. A similar but laboratory-scale process also has been described. Aqueous solutions of iron(III) chloride are also produced industrially from a number of iron precursors, including iron oxides: In complementary route, iron metal can be oxidized by hydrochloric acid followed by chlorination: A number of variables apply to these processes, including the oxidation of iron by ferric chloride and the hydration of intermediates. Hydrates of iron(III) chloride do not readily yield anhydrous ferric chloride. Attempted thermal dehydration yields hydrochloric acid and iron oxychloride. In the laboratory, hydrated iron(III) chloride can be converted to the anhydrous form by treatment with thionyl chloride or trimethylsilyl chloride: Reactions Being high spin d5 electronic configuration iron(III) chlorides are labile, meaning that its Cl- and H2O ligands exchange rapidly with free chloride and water. In contrast to their kinetic lability, iron(III) chlorides are thermodynamically robust, as reflected by the vigorous methods applied to their synthesis, as described above. Anhydrous FeCl3 Aside from lability, which applies to anhydrous and hydrated forms, the reactivity of anhydrous ferric chloride reveals two trends: It is a Lewis acid and an oxidizing agent. Reactions of anhydrous iron(III) chloride reflect its description as both oxophilic and a hard Lewis acid. Myriad manifestations of the oxophiliicty of iron(III) chloride are available. When heated with iron(III) oxide at 350 °C it reacts to give iron oxychloride: Alkali metal alkoxides react to give the iron(III) alkoxide complexes. These products have more complicated structures than anhydrous iron(III) chloride. In the solid phase a variety of multinuclear complexes have been described for the nominal stoichiometric reaction between and sodium ethoxide: Iron(III) chloride forms a 1:2 adduct with Lewis bases such as triphenylphosphine oxide; e.g., . The related 1:2 complex , has been crystallized from ether solution. Iron(III) chloride also reacts with tetraethylammonium chloride to give the yellow salt of the tetrachloroferrate ion (). Similarly, combining FeCl3 with NaCl and KCl gives and , respectively. In addition to these simple stoichiometric reactions, the Lewis acidity of ferric chloride enables its use in a variety of acid-catalyzed reactions as described below in the section on organic chemistry. 
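For reference, the preparative routes and the oxychloride reaction described above can be summarized by the following balanced equations (standard textbook stoichiometries):

2 Fe + 3 Cl2 → 2 FeCl3 (direct chlorination of scrap iron)

Fe2O3 + 6 HCl → 2 FeCl3 + 3 H2O (from iron oxide and hydrochloric acid)

Fe + 2 HCl → FeCl2 + H2, followed by 2 FeCl2 + Cl2 → 2 FeCl3 (oxidation of iron by hydrochloric acid, then chlorination)

FeCl3·6H2O + 6 SOCl2 → FeCl3 + 6 SO2 + 12 HCl (laboratory dehydration with thionyl chloride)

FeCl3 + Fe2O3 → 3 FeOCl (reaction with iron(III) oxide at 350 °C)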
In terms of its being an oxidant, iron(III) chloride oxidizes iron powder to form iron(II) chloride via a comproportionation reaction: A traditional synthesis of anhydrous ferrous chloride is the reduction of FeCl3 with chlorobenzene: iron(III) chloride releases chlorine gas when heated above 160 °C, generating ferrous chloride: To suppress this reaction, the preparation of iron(III) chloride requires an excess of chlorinating agent, as discussed above. Hydrated FeCl3 Unlike the anhydrous material, hydrated ferric chloride is not a particularly strong Lewis acid since water ligands have quenched the Lewis acidity by binding to Fe(III). Like the anhydrous material, hydrated ferric chloride is oxophilic. For example, oxalate salts react rapidly with aqueous iron(III) chloride to give , known as ferrioxalate. Other carboxylate sources, e.g., citrate and tartrate, bind as well to give carboxylate complexes. The affinity of iron(III) for oxygen ligands was the basis of qualitative tests for phenols. Although superseded by spectroscopic methods, the ferric chloride test is a traditional colorimetric test. The affinity of iron(III) for phenols is exploited in the Trinder spot test. Aqueous iron(III) chloride serves as a one-electron oxidant illustrated by its reaction with copper(I) chloride to give copper(II) chloride and iron(II) chloride. This fundamental reaction is relevant to the use of ferric chloride solutions in etching copper. Organometallic chemistry The interaction of anhydrous iron(III) chloride with organolithium and organomagnesium compounds has been examined often. These studies are enabled because of the solubility of FeCl3 in ethereal solvents, which avoids the possibility of hydrolysis of the nucleophilic alkylating agents. Such studies may be relevant to the mechanism of FeCl3-catalyzed cross-coupling reactions. The isolation of organoiron(III) intermediates requires low-temperature reactions, lest the [FeR4]− intermediates degrade. Using methylmagnesium bromide as the alkylation agent, salts of Fe(CH3)4]− have been isolated. Illustrating the sensitivity of these reactions, methyl lithium reacts with iron(III) chloride to give lithium tetrachloroferrate(II) : To a significant extent, iron(III) acetylacetonate and related beta-diketonate complexes are more widely used than FeCl3 as ether-soluble sources of ferric ion. These diketonate complexes have the advantages that they do not form hydrates, unlike iron(III) chloride, and they are more soluble in relevant solvents. Cyclopentadienyl magnesium bromide undergoes a complex reaction with iron(III) chloride, resulting in ferrocene: This conversion, although not of practical value, was important in the history of organometallic chemistry where ferrocene is emblematic of the field. Uses Water treatment The largest applications of iron(III) chloride are sewage treatment and drinking water production. By forming highly dispersed networks of Fe-O-Fe containing materials, ferric chlorides serve as coagulant and flocculants. In this application, an aqueous solution of is treated with base to form a floc of iron(III) hydroxide (), also formulated as FeO(OH) (ferrihydrite). This floc facilitates the separation of suspended materials, clarifying the water. Iron(III) chloride is also used to remove soluble phosphate from wastewater. Iron(III) phosphate is insoluble and thus precipitates as a solid. One potential advantage of its use in water treatment, is that the ferric ion oxidizes (deodorizes) hydrogen sulfide. 
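The redox reactions referred to in this section can likewise be written out as balanced equations (standard stoichiometries):

Fe + 2 FeCl3 → 3 FeCl2 (comproportionation with iron powder)

2 FeCl3 → 2 FeCl2 + Cl2 (thermal release of chlorine)

FeCl3 + CuCl → FeCl2 + CuCl2 (one-electron oxidation of copper(I) chloride)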
Etching and metal cleaning It is also used as a leaching agent in chloride hydrometallurgy, for example in the production of Si from FeSi (Silgrain process by Elkem). In another commercial application, a solution of iron(III) chloride is useful for etching copper according to the following equation: The soluble copper(II) chloride is rinsed away, leaving a copper pattern. This chemistry is used in the production of printed circuit boards (PCB). Iron(III) chloride is used in many other hobbies involving metallic objects. Organic chemistry In industry, iron(III) chloride is used as a catalyst for the reaction of ethylene with chlorine, forming ethylene dichloride (1,2-dichloroethane): Ethylene dichloride is a commodity chemical, which is mainly used for the industrial production of vinyl chloride, the monomer for making PVC. Illustrating it use as a Lewis acid, iron(III) chloride catalyses electrophilic aromatic substitution and chlorinations. In this role, its function is similar to that of aluminium chloride. In some cases, mixtures of the two are used. Organic synthesis research Although iron(III) chlorides are seldom used in practical organic synthesis, they have received considerable attention as reagents because they are inexpensive, earth abundant, and relatively nontoxic. Many experiments probe both its redox activity and its Lewis acidity. For example, iron(III) chloride oxidizes naphthols to naphthoquinones: 3-Alkylthiophenes are polymerized to polythiophenes upon treatment with ferric chloride. Iron(III) chloride has been shown to promote C-C coupling reaction. Several reagents have been developed based on supported iron(III) chloride. On silica gel, the anhydrous salt has been applied to certain dehydration and pinacol-type rearrangement reactions. A similar reagent but moistened induces hydrolysis or epimerization reactions. On alumina, ferric chloride has been shown to accelerate ene reactions. When pretreated with sodium hydride, iron(III) chloride gives a hydride reducing agent that convert alkenes and ketones into alkanes and alcohols, respectively. Histology Iron(III) chloride is a component of useful stains, such as Carnoy's solution, a histological fixative with many applications. Also, it is used to prepare Verhoeff's stain. Natural occurrence Like many metal halides, naturally occurs as a trace mineral. The rare mineral molysite is usually associated with volcanoes and fumaroles. -based aerosol are produced by a reaction between iron-rich dust and hydrochloric acid from sea salt. This iron salt aerosol causes about 1-5% of naturally-occurring oxidization of methane and is thought to have a range of cooling effects; thus, it has been proposed as a catalyst for Atmospheric Methane Removal. The clouds of Venus are hypothesized to contain approximately 1% dissolved in sulfuric acid. Safety Iron(III) chlorides are widely used in the treatment of drinking water, so they pose few problems as poisons, at low concentrations. Nonetheless, anhydrous iron(III) chloride, as well as concentrated aqueous solution, is highly corrosive, and must be handled using proper protective equipment. Notes References Further reading Chlorides Iron(III) compounds Metal halides Coordination complexes Deliquescent materials Dehydrating agents Acid catalysts
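Two of the reactions described in this part of the article, written as balanced equations (standard stoichiometries), are the copper-etching reaction and the iron(III) chloride-catalysed chlorination of ethylene:

2 FeCl3 + Cu → 2 FeCl2 + CuCl2 (etching of copper)

CH2=CH2 + Cl2 → ClCH2CH2Cl (formation of ethylene dichloride, FeCl3 catalyst)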
Iron(III) chloride
[ "Chemistry" ]
3,217
[ "Chlorides", "Acids", "Inorganic compounds", "Acid catalysts", "Coordination complexes", "Coordination chemistry", "Reagents for organic chemistry", "Salts", "Metal halides", "Deliquescent materials", "Dehydrating agents" ]
446,539
https://en.wikipedia.org/wiki/Fermionic%20condensate
A fermionic condensate (or Fermi–Dirac condensate) is a superfluid phase formed by fermionic particles at low temperatures. It is closely related to the Bose–Einstein condensate, a superfluid phase formed by bosonic atoms under similar conditions. The earliest recognized fermionic condensate described the state of electrons in a superconductor; the physics of other examples including recent work with fermionic atoms is analogous. The first atomic fermionic condensate was created by a team led by Deborah S. Jin using potassium-40 atoms at the University of Colorado Boulder in 2003. Background Superfluidity Fermionic condensates are attained at lower temperatures than Bose–Einstein condensates. Fermionic condensates are a type of superfluid. As the name suggests, a superfluid possesses fluid properties similar to those possessed by ordinary liquids and gases, such as the lack of a definite shape and the ability to flow in response to applied forces. However, superfluids possess some properties that do not appear in ordinary matter. For instance, they can flow at high velocities without dissipating any energy—i.e. zero viscosity. At lower velocities, energy is dissipated by the formation of quantized vortices, which act as "holes" in the medium where superfluidity breaks down. Superfluidity was originally discovered in liquid helium-4 whose atoms are bosons, not fermions. Fermionic superfluids It is far more difficult to produce a fermionic superfluid than a bosonic one, because the Pauli exclusion principle prohibits fermions from occupying the same quantum state. However, there is a well-known mechanism by which a superfluid may be formed from fermions: That mechanism is the BCS transition, discovered in 1957 by J. Bardeen, L.N. Cooper, and R. Schrieffer for describing superconductivity. These authors showed that, below a certain temperature, electrons (which are fermions) can pair up to form bound pairs now known as Cooper pairs. As long as collisions with the ionic lattice of the solid do not supply enough energy to break the Cooper pairs, the electron fluid will be able to flow without dissipation. As a result, it becomes a superfluid, and the material through which it flows a superconductor. The BCS theory was phenomenally successful in describing superconductors. Soon after the publication of the BCS paper, several theorists proposed that a similar phenomenon could occur in fluids made up of fermions other than electrons, such as helium-3 atoms. These speculations were confirmed in 1971, when experiments performed by D.D. Osheroff showed that helium-3 becomes a superfluid below 0.0025 K. It was soon verified that the superfluidity of helium-3 arises from a BCS-like mechanism. Condensates of fermionic atoms When Eric Cornell and Carl Wieman produced a Bose–Einstein condensate from rubidium atoms in 1995, there naturally arose the prospect of creating a similar sort of condensate made from fermionic atoms, which would form a superfluid by the BCS mechanism. However, early calculations indicated that the temperature required for producing Cooper pairing in atoms would be too cold to achieve. In 2001, Murray Holland at JILA suggested a way of bypassing this difficulty. He speculated that fermionic atoms could be coaxed into pairing up by subjecting them to a strong magnetic field. 
In 2003, working on Holland's suggestion, Deborah Jin at JILA, Rudolf Grimm at the University of Innsbruck, and Wolfgang Ketterle at MIT managed to coax fermionic atoms into forming molecular bosons, which then underwent Bose–Einstein condensation. However, this was not a true fermionic condensate. On December 16, 2003, Jin managed to produce a condensate out of fermionic atoms for the first time. The experiment involved 500,000 potassium-40 atoms cooled to a temperature of 5×10−8 K, subjected to a time-varying magnetic field. Examples Chiral condensate A chiral condensate is an example of a fermionic condensate that appears in theories of massless fermions with chiral symmetry breaking, such as the theory of quarks in Quantum Chromodynamics. BCS theory The BCS theory of superconductivity has a fermion condensate. A pair of electrons in a metal with opposite spins can form a scalar bound state called a Cooper pair. The bound states themselves then form a condensate. Since the Cooper pair has electric charge, this fermion condensate breaks the electromagnetic gauge symmetry of a superconductor, giving rise to the unusual electromagnetic properties of such states. QCD In quantum chromodynamics (QCD) the chiral condensate is also called the quark condensate. This property of the QCD vacuum is partly responsible for giving masses to hadrons (along with other condensates like the gluon condensate). In an approximate version of QCD, which has vanishing quark masses for N quark flavours, there is an exact chiral symmetry of the theory. The QCD vacuum breaks this symmetry to SU(N) by forming a quark condensate. The existence of such a fermion condensate was first shown explicitly in the lattice formulation of QCD. The quark condensate is therefore an order parameter of transitions between several phases of quark matter in this limit. This is very similar to the BCS theory of superconductivity. The Cooper pairs are analogous to the pseudoscalar mesons. However, the vacuum carries no charge. Hence all the gauge symmetries are unbroken. Corrections for the masses of the quarks can be incorporated using chiral perturbation theory. Helium-3 superfluid A helium-3 atom is a fermion and at very low temperatures, they form two-atom Cooper pairs which are bosonic and condense into a superfluid. These Cooper pairs are substantially larger than the interatomic separation. See also Fermi gas Bose gas Footnotes References Sources </ref> American inventions Condensed matter physics Phases of matter Quantum field theory Exotic matter Quantum phases Superfluidity
Fermionic condensate
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,345
[ "Quantum phases", "Quantum field theory", "Physical phenomena", "Phase transitions", "Phases of matter", "Quantum mechanics", "Materials science", "Superfluidity", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
446,712
https://en.wikipedia.org/wiki/Josephson%20effect
In physics, the Josephson effect is a phenomenon that occurs when two superconductors are placed in proximity, with some barrier or restriction between them. The effect is named after the British physicist Brian Josephson, who predicted in 1962 the mathematical relationships for the current and voltage across the weak link. It is an example of a macroscopic quantum phenomenon, where the effects of quantum mechanics are observable at ordinary, rather than atomic, scale. The Josephson effect has many practical applications because it exhibits a precise relationship between different physical measures, such as voltage and frequency, facilitating highly accurate measurements. The Josephson effect produces a current, known as a supercurrent, that flows continuously without any voltage applied, across a device known as a Josephson junction (JJ). These consist of two or more superconductors coupled by a weak link. The weak link can be a thin insulating barrier (known as a superconductor–insulator–superconductor junction, or S-I-S), a short section of non-superconducting metal (S-N-S), or a physical constriction that weakens the superconductivity at the point of contact (S-c-S). Josephson junctions have important applications in quantum-mechanical circuits, such as SQUIDs, superconducting qubits, and RSFQ digital electronics. The NIST standard for one volt is achieved by an array of 20,208 Josephson junctions in series. History The DC Josephson effect had been seen in experiments prior to 1962, but had been attributed to "super-shorts" or breaches in the insulating barrier leading to the direct conduction of electrons between the superconductors. In 1962, Brian Josephson became interested into superconducting tunneling. He was then 23 years old and a second-year graduate student of Brian Pippard at the Mond Laboratory of the University of Cambridge. That year, Josephson took a many-body theory course with Philip W. Anderson, a Bell Labs employee on sabbatical leave for the 1961–1962 academic year. The course introduced Josephson to the idea of broken symmetry in superconductors, and he "was fascinated by the idea of broken symmetry, and wondered whether there could be any way of observing it experimentally". Josephson studied the experiments by Ivar Giaever and Hans Meissner, and theoretical work by Robert Parmenter. Pippard initially believed that the tunneling effect was possible but that it would be too small to be noticeable, but Josephson did not agree, especially after Anderson introduced him to a preprint of "Superconductive Tunneling" by Cohen, Falicov, and Phillips about the superconductor-barrier-normal metal system. Josephson and his colleagues were initially unsure about the validity of Josephson's calculations. Anderson later remembered: We were all—Josephson, Pippard and myself, as well as various other people who also habitually sat at the Mond tea and participated in the discussions of the next few weeks—very much puzzled by the meaning of the fact that the current depends on the phase. After further review, they concluded that Josephson's results were valid. Josephson then submitted "Possible new effects in superconductive tunnelling" to Physics Letters in June 1962. The newer journal Physics Letters was chosen instead of the better established Physical Review Letters due to their uncertainty about the results. 
John Bardeen, by then already Nobel Prize winner, was initially publicly skeptical of Josephson's theory in 1962, but came to accept it after further experiments and theoretical clarifications. See also: . In January 1963, Anderson and his Bell Labs colleague John Rowell submitted the first paper to Physical Review Letters to claim the experimental observation of Josephson's effect "Probable Observation of the Josephson Superconducting Tunneling Effect". These authors were awarded patents on the effects that were never enforced, but never challenged. Before Josephson's prediction, it was only known that single (i.e., non-paired) electrons can flow through an insulating barrier, by means of quantum tunneling. Josephson was the first to predict the tunneling of superconducting Cooper pairs. For this work, Josephson received the Nobel Prize in Physics in 1973. John Bardeen was one of the nominators. Applications Types of Josephson junction include the φ Josephson junction (of which π Josephson junction is a special example), long Josephson junction, and superconducting tunnel junction. Other uses include: A "Dayem bridge" is a thin-film Josephson junction where the weak link comprises a superconducting wire measuring a few micrometres or less. The Josephson junction count is a proxy variable for a device's complexity SQUIDs, or superconducting quantum interference devices, are very sensitive magnetometers that operate via the Josephson effect Superfluid helium quantum interference devices (SHeQUIDs) are the superfluid helium analog of a dc-SQUID In precision metrology, the Josephson effect is a reproducible conversion between frequency and voltage. The Josephson voltage standard takes the caesium standard definition of frequency and gives the standard representation of a volt Single-electron transistors are often made from superconducting materials and called "superconducting single-electron transistors". Elementary charge is most precisely measured in terms of the Josephson constant and the von Klitzing constant which is related to the quantum Hall effect RSFQ digital electronics are based on shunted Josephson junctions. Junction switching emits one magnetic flux quantum . Its presence and absence represents binary 1 and 0. Superconducting quantum computing uses Josephon junctions as qubits such as in a flux qubit or other schemes where the phase and charge are conjugate variables. Superconducting tunnel junction detectors are used in superconducting cameras The Josephson equations The Josephson effect can be calculated using the laws of quantum mechanics. A diagram of a single Josephson junction is shown at right. Assume that superconductor A has Ginzburg–Landau order parameter , and superconductor B , which can be interpreted as the wave functions of Cooper pairs in the two superconductors. If the electric potential difference across the junction is , then the energy difference between the two superconductors is , since each Cooper pair has twice the charge of one electron. The Schrödinger equation for this two-state quantum system is therefore: where the constant is a characteristic of the junction. 
To solve the above equation, first calculate the time derivative of the order parameter in superconductor A: and therefore the Schrödinger equation gives: The phase difference of Ginzburg–Landau order parameters across the junction is called the Josephson phase: The Schrödinger equation can therefore be rewritten as: and its complex conjugate equation is: Add the two conjugate equations together to eliminate : Since , we have: Now, subtract the two conjugate equations to eliminate : which gives: Similarly, for superconductor B we can derive that: Noting that the evolution of Josephson phase is and the time derivative of charge carrier density is proportional to current , when , the above solution yields the Josephson equations: where and are the voltage across and the current through the Josephson junction, and is a parameter of the junction named the critical current. Equation (1) is called the first Josephson relation or weak-link current-phase relation, and equation (2) is called the second Josephson relation or superconducting phase evolution equation. The critical current of the Josephson junction depends on the properties of the superconductors, and can also be affected by environmental factors like temperature and externally applied magnetic field. The Josephson constant is defined as: and its inverse is the magnetic flux quantum: The superconducting phase evolution equation can be reexpressed as: If we define: then the voltage across the junction is: which is very similar to Faraday's law of induction. But note that this voltage does not come from magnetic energy, since there is no magnetic field in the superconductors; Instead, this voltage comes from the kinetic energy of the carriers (i.e. the Cooper pairs). This phenomenon is also known as kinetic inductance. Three main effects There are three main effects predicted by Josephson that follow directly from the Josephson equations: The DC Josephson effect The DC Josephson effect is a direct current crossing the insulator in the absence of any external electromagnetic field, owing to tunneling. This DC Josephson current is proportional to the sine of the Josephson phase (phase difference across the insulator, which stays constant over time), and may take values between and . The AC Josephson effect With a fixed voltage across the junction, the phase will vary linearly with time and the current will be a sinusoidal AC (alternating current) with amplitude and frequency . This means a Josephson junction can act as a perfect voltage-to-frequency converter. The inverse AC Josephson effect Microwave radiation of a single (angular) frequency can induce quantized DC voltages across the Josephson junction, in which case the Josephson phase takes the form , and the voltage and current across the junction will be: The DC components are: This means a Josephson junction can act like a perfect frequency-to-voltage converter, which is the theoretical basis for the Josephson voltage standard. Josephson inductance When the current and Josephson phase varies over time, the voltage drop across the junction will also vary accordingly; As shown in derivation below, the Josephson relations determine that this behavior can be modeled by a kinetic inductance named Josephson Inductance. 
Rewrite the Josephson relations as: Now, apply the chain rule to calculate the time derivative of the current: Rearrange the above result in the form of the current–voltage characteristic of an inductor: This gives the expression for the kinetic inductance as a function of the Josephson phase: Here, is a characteristic parameter of the Josephson junction, named the Josephson inductance. Note that although the kinetic behavior of the Josephson junction is similar to that of an inductor, there is no associated magnetic field. This behavior derives from the kinetic energy of the charge carriers, not from energy stored in a magnetic field. Josephson energy Based on the similarity of the Josephson junction to a non-linear inductor, the energy stored in a Josephson junction when a supercurrent flows through it can be calculated. The supercurrent flowing through the junction is related to the Josephson phase by the current-phase relation (CPR): The superconducting phase evolution equation is analogous to Faraday's law: Assume that at time , the Josephson phase is ; at a later time , the Josephson phase has evolved to . The energy increase in the junction is equal to the work done on the junction: This shows that the change of energy in the Josephson junction depends only on the initial and final states of the junction and not on the path. Therefore, the energy stored in a Josephson junction is a state function, which can be defined as: Here is a characteristic parameter of the Josephson junction, named the Josephson energy. It is related to the Josephson inductance by . An alternative but equivalent definition is also often used. Again, note that a non-linear magnetic coil inductor accumulates potential energy in its magnetic field when a current passes through it; in the case of a Josephson junction, however, no magnetic field is created by a supercurrent, and the stored energy comes from the kinetic energy of the charge carriers instead. The RCSJ model The Resistively and Capacitively Shunted Junction (RCSJ) model, or simply shunted junction model, includes the effect of the AC impedance of an actual Josephson junction on top of the two basic Josephson relations stated above. As per Thévenin's theorem, the AC impedance of the junction can be represented by a capacitor and a shunt resistor, both in parallel with the ideal Josephson junction. The complete expression for the current drive becomes: where the first term is the displacement current with – effective capacitance, and the third is the normal current with – effective resistance of the junction. Josephson penetration depth The Josephson penetration depth characterizes the typical length over which an externally applied magnetic field penetrates into a long Josephson junction. It is usually denoted as and is given by the following expression (in SI): where is the magnetic flux quantum, is the critical supercurrent density (A/m2), and characterizes the inductance of the superconducting electrodes where is the thickness of the Josephson barrier (usually an insulator), and are the thicknesses of the superconducting electrodes, and and are their London penetration depths. The Josephson penetration depth usually ranges from a few μm to several mm if the critical current density is very low.
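The closed-form expressions above lend themselves to quick numerical estimates. The following Python sketch is illustrative only: the junction parameters are assumed values, not data for any real device, and the effective magnetic thickness is approximated by the thick-electrode limit (barrier thickness plus the two London penetration depths).

import math

h = 6.62607015e-34        # Planck constant, J*s
e = 1.602176634e-19       # elementary charge, C
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
Phi0 = h / (2 * e)        # magnetic flux quantum, Wb

def josephson_inductance(I_c, phase=0.0):
    # Kinetic (Josephson) inductance derived above: L_J = Phi0 / (2*pi*I_c*cos(phase))
    return Phi0 / (2 * math.pi * I_c * math.cos(phase))

def josephson_penetration_depth(j_c, d_barrier, lambda_1, lambda_2):
    # lambda_J = sqrt(Phi0 / (2*pi*mu0*d_eff*j_c)), with the thick-electrode
    # approximation d_eff = d_barrier + lambda_1 + lambda_2 (an assumption here)
    d_eff = d_barrier + lambda_1 + lambda_2
    return math.sqrt(Phi0 / (2 * math.pi * mu0 * d_eff * j_c))

# Assumed example parameters: I_c = 1 uA, j_c = 100 A/cm^2, 90 nm penetration depths
print(josephson_inductance(1e-6))                            # ~0.33 nH
print(josephson_penetration_depth(1e6, 2e-9, 90e-9, 90e-9))  # ~40 um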
See also Pi Josephson junction φ Josephson junction Josephson diode Andreev reflection Fractional vortices Ginzburg–Landau theory Macroscopic quantum phenomena Macroscopic quantum self-trapping Quantum computer Quantum gyroscope Rapid single flux quantum (RSFQ) Semifluxon Zero-point energy Josephson vortex References Condensed matter physics Superconductivity Sensors Mesoscopic physics Energy (physics)
Josephson effect
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Technology", "Engineering" ]
2,802
[ "Electrical resistance and conductance", "Josephson effect", "Physical quantities", "Quantity", "Superconductivity", "Phases of matter", "Quantum mechanics", "Measuring instruments", "Materials science", "Energy (physics)", "Sensors", "Condensed matter physics", "Wikipedia categories named a...
447,181
https://en.wikipedia.org/wiki/Weierstrass%20elliptic%20function
In mathematics, the Weierstrass elliptic functions are elliptic functions that take a particularly simple form. They are named for Karl Weierstrass. This class of functions are also referred to as ℘-functions and they are usually denoted by the symbol ℘, a uniquely fancy script p. They play an important role in the theory of elliptic functions, i.e., meromorphic functions that are doubly periodic. A ℘-function together with its derivative can be used to parameterize elliptic curves and they generate the field of elliptic functions with respect to a given period lattice. Symbol for Weierstrass -function Motivation A cubic of the form , where are complex numbers with , cannot be rationally parameterized. Yet one still wants to find a way to parameterize it. For the quadric ; the unit circle, there exists a (non-rational) parameterization using the sine function and its derivative the cosine function: Because of the periodicity of the sine and cosine is chosen to be the domain, so the function is bijective. In a similar way one can get a parameterization of by means of the doubly periodic -function (see in the section "Relation to elliptic curves"). This parameterization has the domain , which is topologically equivalent to a torus. There is another analogy to the trigonometric functions. Consider the integral function It can be simplified by substituting and : That means . So the sine function is an inverse function of an integral function. Elliptic functions are the inverse functions of elliptic integrals. In particular, let: Then the extension of to the complex plane equals the -function. This invertibility is used in complex analysis to provide a solution to certain nonlinear differential equations satisfying the Painlevé property, i.e., those equations that admit poles as their only movable singularities. Definition Let be two complex numbers that are linearly independent over and let be the period lattice generated by those numbers. Then the -function is defined as follows: This series converges locally uniformly absolutely in the complex torus . It is common to use and in the upper half-plane as generators of the lattice. Dividing by maps the lattice isomorphically onto the lattice with . Because can be substituted for , without loss of generality we can assume , and then define . Properties is a meromorphic function with a pole of order 2 at each period in . is an even function. That means for all , which can be seen in the following way: The second last equality holds because . Since the sum converges absolutely this rearrangement does not change the limit. The derivative of is given by: and are doubly periodic with the periods and . This means: It follows that and for all . Laurent expansion Let . Then for the -function has the following Laurent expansion where for are so called Eisenstein series. Differential equation Set and . Then the -function satisfies the differential equation This relation can be verified by forming a linear combination of powers of and to eliminate the pole at . This yields an entire elliptic function that has to be constant by Liouville's theorem. Invariants The coefficients of the above differential equation g2 and g3 are known as the invariants. Because they depend on the lattice they can be viewed as functions in and . The series expansion suggests that g2 and g3 are homogeneous functions of degree −4 and −6. That is for . If and are chosen in such a way that , g2 and g3 can be interpreted as functions on the upper half-plane . Let . 
One has: That means g2 and g3 are only scaled by doing this. Set and As functions of are so called modular forms. The Fourier series for and are given as follows: where is the divisor function and is the nome. Modular discriminant The modular discriminant Δ is defined as the discriminant of the characteristic polynomial of the differential equation as follows: The discriminant is a modular form of weight 12. That is, under the action of the modular group, it transforms as where with ad − bc = 1. Note that where is the Dedekind eta function. For the Fourier coefficients of , see Ramanujan tau function. The constants e1, e2 and e3 , and are usually used to denote the values of the -function at the half-periods. They are pairwise distinct and only depend on the lattice and not on its generators. , and are the roots of the cubic polynomial and are related by the equation: Because those roots are distinct the discriminant does not vanish on the upper half plane. Now we can rewrite the differential equation: That means the half-periods are zeros of . The invariants and can be expressed in terms of these constants in the following way: , and are related to the modular lambda function: Relation to Jacobi's elliptic functions For numerical work, it is often convenient to calculate the Weierstrass elliptic function in terms of Jacobi's elliptic functions. The basic relations are: where and are the three roots described above and where the modulus k of the Jacobi functions equals and their argument w equals Relation to Jacobi's theta functions The function can be represented by Jacobi's theta functions: where is the nome and is the period ratio . This also provides a very rapid algorithm for computing . Relation to elliptic curves Consider the embedding of the cubic curve in the complex projective plane For this cubic there exists no rational parameterization, if . In this case it is also called an elliptic curve. Nevertheless there is a parameterization in homogeneous coordinates that uses the -function and its derivative : Now the map is bijective and parameterizes the elliptic curve . is an abelian group and a topological space, equipped with the quotient topology. It can be shown that every Weierstrass cubic is given in such a way. That is to say that for every pair with there exists a lattice , such that and . The statement that elliptic curves over can be parameterized over , is known as the modularity theorem. This is an important theorem in number theory. It was part of Andrew Wiles' proof (1995) of Fermat's Last Theorem. Addition theorems Let , so that . Then one has: As well as the duplication formula: These formulas also have a geometric interpretation, if one looks at the elliptic curve together with the mapping as in the previous section. The group structure of translates to the curve and can be geometrically interpreted there: The sum of three pairwise different points is zero if and only if they lie on the same line in . This is equivalent to: where , and . Typography The Weierstrass's elliptic function is usually written with a rather special, lower case script letter ℘, which was Weierstrass's own notation introduced in his lectures of 1862–1863. It should not be confused with the normal mathematical script letters P: 𝒫 and 𝓅. In computing, the letter ℘ is available as \wp in TeX. In Unicode the code point is , with the more correct alias . In HTML, it can be escaped as &weierp;. 
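As a computational aside (an illustrative sketch rather than part of the classical theory above), the defining lattice sum for ℘ converges absolutely, so it can be approximated numerically in Python by truncating the sum to lattice points with bounded indices. The periods and evaluation point below are arbitrary assumptions; the evenness ℘(z) = ℘(−z) serves as a sanity check.

def weierstrass_p(z, omega1, omega2, N=60):
    # Truncated lattice sum: p(z) = 1/z^2 + sum over nonzero w = m*omega1 + n*omega2
    # of (1/(z - w)^2 - 1/w^2); truncating at |m|, |n| <= N leaves a small error.
    total = 1 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            w = m * omega1 + n * omega2
            total += 1 / (z - w)**2 - 1 / w**2
    return total

# Arbitrary example periods (linearly independent over the reals)
omega1, omega2 = 2.0, 2.0j
z = 0.5 + 0.3j
print(weierstrass_p(z, omega1, omega2))
print(weierstrass_p(-z, omega1, omega2))  # equals the value above, up to truncation error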
See also Weierstrass functions Jacobi elliptic functions Lemniscate elliptic functions Footnotes References N. I. Akhiezer, Elements of the Theory of Elliptic Functions, (1970) Moscow, translated into English as AMS Translations of Mathematical Monographs Volume 79 (1990) AMS, Rhode Island Tom M. Apostol, Modular Functions and Dirichlet Series in Number Theory, Second Edition (1990), Springer, New York (See chapter 1.) K. Chandrasekharan, Elliptic functions (1980), Springer-Verlag Konrad Knopp, Funktionentheorie II (1947), Dover Publications; Republished in English translation as Theory of Functions (1996), Dover Publications Serge Lang, Elliptic Functions (1973), Addison-Wesley, E. T. Whittaker and G. N. Watson, A Course of Modern Analysis, Cambridge University Press, 1952, chapters 20 and 21 External links Weierstrass's elliptic functions on Mathworld. Chapter 23, Weierstrass Elliptic and Modular Functions in DLMF (Digital Library of Mathematical Functions) by W. P. Reinhardt and P. L. Walker. Weierstrass P function and its derivative implemented in C by David Dumas Modular forms Algebraic curves Elliptic functions
Weierstrass elliptic function
[ "Mathematics" ]
1,747
[ "Modular forms", "Number theory" ]
448,321
https://en.wikipedia.org/wiki/Thermoelectric%20effect
The thermoelectric effect is the direct conversion of temperature differences to electric voltage and vice versa via a thermocouple. A thermoelectric device creates a voltage when there is a different temperature on each side. Conversely, when a voltage is applied to it, heat is transferred from one side to the other, creating a temperature difference. This effect can be used to generate electricity, measure temperature or change the temperature of objects. Because the direction of heating and cooling is affected by the applied voltage, thermoelectric devices can be used as temperature controllers. The term "thermoelectric effect" encompasses three separately identified effects: the Seebeck effect (temperature differences cause electromotive forces), the Peltier effect (thermocouples create temperature differences), and the Thomson effect (the Seebeck coefficient varies with temperature). The Seebeck and Peltier effects are different manifestations of the same physical process; textbooks may refer to this process as the Peltier–Seebeck effect (the separation derives from the independent discoveries by French physicist Jean Charles Athanase Peltier and Baltic German physicist Thomas Johann Seebeck). The Thomson effect is an extension of the Peltier–Seebeck model and is credited to Lord Kelvin. Joule heating, the heat that is generated whenever a current is passed through a conductive material, is not generally termed a thermoelectric effect. The Peltier–Seebeck and Thomson effects are thermodynamically reversible, whereas Joule heating is not. Origin At the atomic scale, a temperature gradient causes charge carriers in the material to diffuse from the hot side to the cold side. This is due to charge carrier particles having higher mean velocities (and thus kinetic energy) at higher temperatures, leading them to migrate on average towards the colder side, in the process carrying heat across the material. Depending on the material properties and nature of the charge carriers (whether they are positive holes in a bulk material or electrons of negative charge), heat can be carried in either direction with respect to voltage. Semiconductors of n-type and p-type are often combined in series as they have opposite directions for heat transport, as specified by the sign of their Seebeck coefficients. Seebeck effect The Seebeck effect is the electromotive force (emf) that develops across two points of an electrically conducting material when there is a temperature difference between them. The emf is called the Seebeck emf (or thermo/thermal/thermoelectric emf). The ratio between the emf and temperature difference is the Seebeck coefficient. A thermocouple measures the difference in potential across a hot and cold end for two dissimilar materials. This potential difference is proportional to the temperature difference between the hot and cold ends. First discovered in 1794 by Italian scientist Alessandro Volta, it is named after the Russian born, Baltic German physicist Thomas Johann Seebeck who rediscovered it in 1821. Seebeck observed what he called "thermomagnetic effect" wherein a magnetic compass needle would be deflected by a closed loop formed by two different metals joined in two places, with an applied temperature difference between the joints. Danish physicist Hans Christian Ørsted noted that the temperature difference was in fact driving an electric current, with the generation of magnetic field being an indirect consequence, and so coined the more accurate term "thermoelectricity". 
The Seebeck effect is a classic example of an electromotive force (EMF) and leads to measurable currents or voltages in the same way as any other EMF. The local current density is given by where is the local voltage, and is the local conductivity. In general, the Seebeck effect is described locally by the creation of an electromotive field where is the Seebeck coefficient (also known as thermopower), a property of the local material, and is the temperature gradient. The Seebeck coefficients generally vary as function of temperature and depend strongly on the composition of the conductor. For ordinary materials at room temperature, the Seebeck coefficient may range in value from −100 μV/K to +1,000 μV/K (see Seebeck coefficient article for more information). Applications In practice, thermoelectric effects are essentially unobservable for a localized hot or cold spot in a single homogeneous conducting material, since the overall EMFs from the increasing and decreasing temperature gradients will perfectly cancel out. Attaching an electrode to the hotspot in an attempt to measure the locally shifted voltage will only partly succeed: It means another temperature gradient will appear inside of the electrode, so the overall EMF will depend on the difference in Seebeck coefficients between the electrode and the conductor it is attached to. Thermocouples involve two wires, each of a different material, that are electrically joined in a region of unknown temperature. The loose ends are measured in an open-circuit state (without any current, ). Although the materials' Seebeck coefficients are nonlinearly temperature dependent and different for the two materials, the open-circuit condition means that everywhere. Therefore (see the thermocouple article for more details) the voltage measured at the loose ends of the wires is directly dependent on the unknown temperature, and yet totally independent of other details such as the exact geometry of the wires. This direct relationship allows the thermocouple arrangement to be used as a straightforward uncalibrated thermometer, provided knowledge of the difference in -vs- curves of the two materials, and of the reference temperature at the measured loose wire ends. Thermoelectric sorting functions similarly to a thermocouple but involves an unknown material instead of an unknown temperature: a metallic probe of known composition is kept at a constant known temperature and held in contact with the unknown sample that is locally heated to the probe temperature, thereby providing an approximate measurement of the unknown Seebeck coefficient . This can help distinguish between different metals and alloys. Thermopiles are formed from many thermocouples in series, zig-zagging back and forth between hot and cold. This multiplies the voltage output. Thermoelectric generators are like a thermocouple/thermopile but instead draw some current from the generated voltage in order to extract power from heat differentials. They are optimized differently from thermocouples, using high quality thermoelectric materials in a thermopile arrangement, to maximize the extracted power. Though not particularly efficient, these generators have the advantage of not having any moving parts. Peltier effect When an electric current is passed through a circuit of a thermocouple, heat is generated at one junction and absorbed at the other junction. This is known as the Peltier effect: the presence of heating or cooling at an electrified junction of two different conductors. 
The effect is named after French physicist Jean Charles Athanase Peltier, who discovered it in 1834. When a current is made to flow through a junction between two conductors, A and B, heat may be generated or removed at the junction. The Peltier heat generated at the junction per unit time is where and are the Peltier coefficients of conductors A and B, and is the electric current (from A to B). The total heat generated is not determined by the Peltier effect alone, as it may also be influenced by Joule heating and thermal-gradient effects (see below). The Peltier coefficients represent how much heat is carried per unit charge. Since charge current must be continuous across a junction, the associated heat flow will develop a discontinuity if and are different. The Peltier effect can be considered as the back-action counterpart to the Seebeck effect (analogous to the back-EMF in magnetic induction): if a simple thermoelectric circuit is closed, then the Seebeck effect will drive a current, which in turn (by the Peltier effect) will always transfer heat from the hot to the cold junction. The close relationship between Peltier and Seebeck effects can be seen in the direct connection between their coefficients: (see below). A typical Peltier heat pump involves multiple junctions in series, through which a current is driven. Some of the junctions lose heat due to the Peltier effect, while others gain heat. Thermoelectric heat pumps exploit this phenomenon, as do thermoelectric cooling devices found in refrigerators. Applications The Peltier effect can be used to create a heat pump. Notably, the Peltier thermoelectric cooler is a refrigerator that is compact and has no circulating fluid or moving parts. Such refrigerators are useful in applications where their advantages outweigh the disadvantage of their very low efficiency. Other heat pump applications such as dehumidifiers may also use Peltier heat pumps. Thermoelectric coolers are trivially reversible, in that they can be used as heaters by simply reversing the current. Unlike ordinary resistive electrical heating (Joule heating) that varies with the square of current, the thermoelectric heating effect is linear in current (at least for small currents) but requires a cold sink to replenish with heat energy. This rapid reversing heating and cooling effect is used by many modern thermal cyclers, laboratory devices used to amplify DNA by the polymerase chain reaction (PCR). PCR requires the cyclic heating and cooling of samples to specified temperatures. The inclusion of many thermocouples in a small space enables many samples to be amplified in parallel. Thomson effect For certain materials, the Seebeck coefficient is not constant in temperature, and so a spatial gradient in temperature can result in a gradient in the Seebeck coefficient. If a current is driven through this gradient, then a continuous version of the Peltier effect will occur. This Thomson effect was predicted and later observed in 1851 by Lord Kelvin (William Thomson). It describes the heating or cooling of a current-carrying conductor with a temperature gradient. If a current density is passed through a homogeneous conductor, the Thomson effect predicts a heat production rate per unit volume. where is the temperature gradient, and is the Thomson coefficient. The Thomson effect is a manifestation of the direction of flow of electrical carriers with respect to a temperature gradient within a conductor. 
These absorb energy (heat) flowing in a direction opposite to a thermal gradient, increasing their potential energy, and, when flowing in the same direction as a thermal gradient, they liberate heat, decreasing their potential energy. The Thomson coefficient is related to the Seebeck coefficient as (see below). This equation, however, neglects Joule heating and ordinary thermal conductivity (see full equations below). Full thermoelectric equations Often, more than one of the above effects is involved in the operation of a real thermoelectric device. The Seebeck effect, Peltier effect, and Thomson effect can be gathered together in a consistent and rigorous way, described here; this also includes the effects of Joule heating and ordinary heat conduction. As stated above, the Seebeck effect generates an electromotive force, leading to the current equation To describe the Peltier and Thomson effects, we must consider the flow of energy. If temperature and charge change with time, the full thermoelectric equation for the energy accumulation, , is where is the thermal conductivity. The first term is the Fourier's heat conduction law, and the second term shows the energy carried by currents. The third term, , is the heat added from an external source (if applicable). If the material has reached a steady state, the charge and temperature distributions are stable, so and . Using these facts and the second Thomson relation (see below), the heat equation can be simplified to The middle term is the Joule heating, and the last term includes both Peltier ( at junction) and Thomson ( in thermal gradient) effects. Combined with the Seebeck equation for , this can be used to solve for the steady-state voltage and temperature profiles in a complicated system. If the material is not in a steady state, a complete description needs to include dynamic effects such as relating to electrical capacitance, inductance and heat capacity. The thermoelectric effects lie beyond the scope of equilibrium thermodynamics. They necessarily involve continuing flows of energy. At least, they involve three bodies or thermodynamic subsystems, arranged in a particular way, along with a special arrangement of the surroundings. The three bodies are the two different metals and their junction region. The junction region is an inhomogeneous body, assumed to be stable, not suffering amalgamation by diffusion of matter. The surroundings are arranged to maintain two temperature reservoirs and two electric reservoirs. For an imagined, but not actually possible, thermodynamic equilibrium, heat transfer from the hot reservoir to the cold reservoir would need to be prevented by a specifically matching voltage difference maintained by the electric reservoirs, and the electric current would need to be zero. For a steady state, there must be at least some heat transfer or some non-zero electric current. The two modes of energy transfer, as heat and by electric current, can be distinguished when there are three distinct bodies and a distinct arrangement of surroundings. But in the case of continuous variation in the media, heat transfer and thermodynamic work cannot be uniquely distinguished. This is more complicated than the often considered thermodynamic processes, in which just two respectively homogeneous subsystems are connected. 
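As a numerical illustration of the open-circuit case discussed above (a sketch only; the Seebeck-coefficient functions below are made-up placeholders, not measured material data), the EMF of a thermocouple follows by integrating the difference of the two wires' Seebeck coefficients over the temperature interval spanned by the junctions.

import numpy as np

def thermocouple_emf(S_a, S_b, T_cold, T_hot, n=1001):
    # Open-circuit EMF: integral of (S_A(T) - S_B(T)) dT from T_cold to T_hot,
    # evaluated with a simple trapezoidal rule.
    T = np.linspace(T_cold, T_hot, n)
    f = S_a(T) - S_b(T)
    dT = T[1] - T[0]
    return float(np.sum(0.5 * (f[:-1] + f[1:])) * dT)

# Hypothetical, weakly temperature-dependent Seebeck coefficients, in V/K
S_wire_a = lambda T: 22e-6 + 0.010e-6 * (T - 273.15)
S_wire_b = lambda T: -18e-6 - 0.005e-6 * (T - 273.15)
print(thermocouple_emf(S_wire_a, S_wire_b, 273.15, 373.15))  # roughly 4 mV over a 100 K span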
Thomson relations In 1854, Lord Kelvin found relationships between the three coefficients, implying that the Thomson, Peltier, and Seebeck effects are different manifestations of one effect (uniquely characterized by the Seebeck coefficient). The first Thomson relation is where is the absolute temperature, is the Thomson coefficient, is the Peltier coefficient, and is the Seebeck coefficient. This relationship is easily shown given that the Thomson effect is a continuous version of the Peltier effect. The second Thomson relation is This relation expresses a subtle and fundamental connection between the Peltier and Seebeck effects. It was not satisfactorily proven until the advent of the Onsager relations, and it is worth noting that this second Thomson relation is only guaranteed for a time-reversal symmetric material; if the material is placed in a magnetic field or is itself magnetically ordered (ferromagnetic, antiferromagnetic, etc.), then the second Thomson relation does not take the simple form shown here. Now, using the second relation, the first Thomson relation becomes The Thomson coefficient is unique among the three main thermoelectric coefficients because it is the only one directly measurable for individual materials. The Peltier and Seebeck coefficients can only be easily determined for pairs of materials; hence, it is difficult to find values of absolute Seebeck or Peltier coefficients for an individual material. If the Thomson coefficient of a material is measured over a wide temperature range, it can be integrated using the Thomson relations to determine the absolute values for the Peltier and Seebeck coefficients. This needs to be done only for one material, since the other values can be determined by measuring pairwise Seebeck coefficients in thermocouples containing the reference material and then adding back the absolute Seebeck coefficient of the reference material. For more details on absolute Seebeck coefficient determination, see Seebeck coefficient. Efficiency See also Barocaloric material Nernst effect – a thermoelectric phenomenon when a sample allowing electrical conduction in a magnetic field and a temperature gradient normal (perpendicular) to each other Ettingshausen effect – thermoelectric phenomenon affecting current in a conductor in a magnetic field Pyroelectricity – the creation of an electric polarization in a crystal after heating/cooling, an effect distinct from thermoelectricity Thermionic emission – the liberation of charged particles from a hot electrode Thermogalvanic cell – the production of electrical power from a galvanic cell with electrodes at different temperatures Thermopile Thermophotovoltaic – production of electrical power from thermal energy using the photovoltaic effect References Notes Further reading External links International Thermoelectric Society A news article on the increases in thermal diode efficiency Physical phenomena Energy conversion Thermoelectricity
Thermoelectric effect
[ "Physics" ]
3,391
[ "Physical phenomena" ]
448,518
https://en.wikipedia.org/wiki/Heisenberg%20group
In mathematics, the Heisenberg group , named after Werner Heisenberg, is the group of 3×3 upper triangular matrices of the form under the operation of matrix multiplication. Elements a, b and c can be taken from any commutative ring with identity, often taken to be the ring of real numbers (resulting in the "continuous Heisenberg group") or the ring of integers (resulting in the "discrete Heisenberg group"). The continuous Heisenberg group arises in the description of one-dimensional quantum mechanical systems, especially in the context of the Stone–von Neumann theorem. More generally, one can consider Heisenberg groups associated to n-dimensional systems, and most generally, to any symplectic vector space. Three-dimensional case In the three-dimensional case, the product of two Heisenberg matrices is given by As one can see from the term , the group is non-abelian. The neutral element of the Heisenberg group is the identity matrix, and inverses are given by The group is a subgroup of the 2-dimensional affine group Aff(2): acting on corresponds to the affine transform There are several prominent examples of the three-dimensional case. Continuous Heisenberg group If , are real numbers (in the ring R), then one has the continuous Heisenberg group H3(R). It is a nilpotent real Lie group of dimension 3. In addition to the representation as real 3×3 matrices, the continuous Heisenberg group also has several different representations in terms of function spaces. By Stone–von Neumann theorem, there is, up to isomorphism, a unique irreducible unitary representation of H in which its centre acts by a given nontrivial character. This representation has several important realizations, or models. In the Schrödinger model, the Heisenberg group acts on the space of square integrable functions. In the theta representation, it acts on the space of holomorphic functions on the upper half-plane; it is so named for its connection with the theta functions. Discrete Heisenberg group If are integers (in the ring Z), then one has the discrete Heisenberg group H3(Z). It is a non-abelian nilpotent group. It has two generators: and relations where is the generator of the center of H3. (Note that the inverses of x, y, and z replace the 1 above the diagonal with −1.) By Bass's theorem, it has a polynomial growth rate of order 4. One can generate any element through Heisenberg group modulo an odd prime p If one takes a, b, c in Z/p Z for an odd prime p, then one has the Heisenberg group modulo p. It is a group of order p3 with generators x, y and relations Analogues of Heisenberg groups over finite fields of odd prime order p are called extra special groups, or more properly, extra special groups of exponent p. More generally, if the derived subgroup of a group G is contained in the center Z of G, then the map G/Z × G/Z → Z is a skew-symmetric bilinear operator on abelian groups. However, requiring that G/Z to be a finite vector space requires the Frattini subgroup of G to be contained in the center, and requiring that Z be a one-dimensional vector space over Z/p Z requires that Z have order p, so if G is not abelian, then G is extra special. If G is extra special but does not have exponent p, then the general construction below applied to the symplectic vector space G/Z does not yield a group isomorphic to G. Heisenberg group modulo 2 The Heisenberg group modulo 2 is of order 8 and is isomorphic to the dihedral group D4 (the symmetries of a square). 
Observe that if then and The elements x and y correspond to reflections (with 45° between them), whereas xy and yx correspond to rotations by 90°. The other reflections are xyx and yxy, and rotation by 180° is xyxy (= yxyx). Heisenberg algebra The Lie algebra of the Heisenberg group (over the real numbers) is known as the Heisenberg algebra. It may be represented using the space of 3×3 matrices of the form with . The following three elements form a basis for : These basis elements satisfy the commutation relations The name "Heisenberg group" is motivated by the preceding relations, which have the same form as the canonical commutation relations in quantum mechanics: where is the position operator, is the momentum operator, and is the Planck constant. The Heisenberg group has the special property that the exponential map is a one-to-one and onto map from the Lie algebra to the group : In conformal field theory In conformal field theory, the term Heisenberg algebra is used to refer to an infinite-dimensional generalization of the above algebra. It is spanned by elements with commutation relations Under a rescaling, this is simply a countably-infinite number of copies of the above algebra. Higher dimensions More general Heisenberg groups may be defined for higher dimensions in Euclidean space, and more generally on symplectic vector spaces. The simplest general case is the real Heisenberg group of dimension , for any integer . As a group of matrices, (or to indicate that this is the Heisenberg group over the field of real numbers) is defined as the group of matrices with entries in and having the form where a is a row vector of length n, b is a column vector of length n, In is the identity matrix of size n. Group structure This is indeed a group, as is shown by the multiplication: and Lie algebra The Heisenberg group is a simply-connected Lie group whose Lie algebra consists of matrices where a is a row vector of length n, b is a column vector of length n, 0n is the zero matrix of size n. By letting e1, ..., en be the canonical basis of Rn and setting the associated Lie algebra can be characterized by the canonical commutation relations where p1, ..., pn, q1, ..., qn, z are the algebra generators. In particular, z is a central element of the Heisenberg Lie algebra. Note that the Lie algebra of the Heisenberg group is nilpotent. Exponential map Let which fulfills . The exponential map evaluates to The exponential map of any nilpotent Lie algebra is a diffeomorphism between the Lie algebra and the unique associated connected, simply-connected Lie group. This discussion (aside from statements referring to dimension and Lie group) further applies if we replace R by any commutative ring A. The corresponding group is denoted Hn(A). Under the additional assumption that the prime 2 is invertible in the ring A, the exponential map is also defined, since it reduces to a finite sum and has the form above (e.g. A could be a ring Z/p Z with an odd prime p or any field of characteristic 0). Representation theory The unitary representation theory of the Heisenberg group is fairly simple; it was later generalized by Mackey theory, and it was the motivation for the group's introduction in quantum physics, as discussed below. For each nonzero real number , we can define an irreducible unitary representation of acting on the Hilbert space by the formula This representation is known as the Schrödinger representation.
The motivation for this representation is the action of the exponentiated position and momentum operators in quantum mechanics. The parameter describes translations in position space, the parameter describes translations in momentum space, and the parameter gives an overall phase factor. The phase factor is needed to obtain a group of operators, since translations in position space and translations in momentum space do not commute. The key result is the Stone–von Neumann theorem, which states that every (strongly continuous) irreducible unitary representation of the Heisenberg group in which the center acts nontrivially is equivalent to for some . Alternatively, that they are all equivalent to the Weyl algebra (or CCR algebra) on a symplectic space of dimension 2n. Since the Heisenberg group is a one-dimensional central extension of , its irreducible unitary representations can be viewed as irreducible unitary projective representations of . Conceptually, the representation given above constitutes the quantum-mechanical counterpart to the group of translational symmetries on the classical phase space, . The fact that the quantum version is only a projective representation of is suggested already at the classical level. The Hamiltonian generators of translations in phase space are the position and momentum functions. The span of these functions does not form a Lie algebra under the Poisson bracket, however, because Rather, the span of the position and momentum functions and the constants forms a Lie algebra under the Poisson bracket. This Lie algebra is a one-dimensional central extension of the commutative Lie algebra , isomorphic to the Lie algebra of the Heisenberg group. On symplectic vector spaces The general abstraction of a Heisenberg group is constructed from any symplectic vector space. For example, let (V, ω) be a finite-dimensional real symplectic vector space (so ω is a nondegenerate skew symmetric bilinear form on V). The Heisenberg group H(V) on (V, ω) (or simply V for brevity) is the set V×R endowed with the group law The Heisenberg group is a central extension of the additive group V. Thus there is an exact sequence Any symplectic vector space admits a Darboux basis {ej, fk}1 ≤ j,k ≤ n satisfying ω(ej, fk) = δjk and where 2n is the dimension of V (the dimension of V is necessarily even). In terms of this basis, every vector decomposes as The qa and pa are canonically conjugate coordinates. If {ej, fk}1 ≤ j,k ≤ n is a Darboux basis for V, then let {E} be a basis for R, and {ej, fk, E}1 ≤ j,k ≤ n is the corresponding basis for V×R. A vector in H(V) is then given by and the group law becomes Because the underlying manifold of the Heisenberg group is a linear space, vectors in the Lie algebra can be canonically identified with vectors in the group. The Lie algebra of the Heisenberg group is given by the commutation relation or written in terms of the Darboux basis and all other commutators vanish. It is also possible to define the group law in a different way but which yields a group isomorphic to the group we have just defined. To avoid confusion, we will use u instead of t, so a vector is given by and the group law is An element of the group can then be expressed as a matrix , which gives a faithful matrix representation of H(V). The u in this formulation is related to t in our previous formulation by , so that the t value for the product comes to , as before. 
The isomorphism to the group using upper triangular matrices relies on the decomposition of V into a Darboux basis, which amounts to a choice of isomorphism V ≅ U ⊕ U*. Although the new group law yields a group isomorphic to the one given higher up, the group with this law is sometimes referred to as the polarized Heisenberg group as a reminder that this group law relies on a choice of basis (a choice of a Lagrangian subspace of V is a polarization). To any Lie algebra, there is a unique connected, simply connected Lie group G. All other connected Lie groups with the same Lie algebra as G are of the form G/N where N is a central discrete group in G. In this case, the center of H(V) is R and the only discrete subgroups are isomorphic to Z. Thus H(V)/Z is another Lie group which shares this Lie algebra. Of note about this Lie group is that it admits no faithful finite-dimensional representations; it is not isomorphic to any matrix group. It does however have a well-known family of infinite-dimensional unitary representations. Connection with the Weyl algebra The Lie algebra of the Heisenberg group was described above, (1), as a Lie algebra of matrices. The Poincaré–Birkhoff–Witt theorem applies to determine the universal enveloping algebra . Among other properties, the universal enveloping algebra is an associative algebra into which injectively imbeds. By the Poincaré–Birkhoff–Witt theorem, it is thus the free vector space generated by the monomials where the exponents are all non-negative. Consequently, consists of real polynomials with the commutation relations The algebra is closely related to the algebra of differential operators on with polynomial coefficients, since any such operator has a unique representation in the form This algebra is called the Weyl algebra. It follows from abstract nonsense that the Weyl algebra Wn is a quotient of . However, this is also easy to see directly from the above representations; viz. by the mapping Applications Weyl's parameterization of quantum mechanics The application that led Hermann Weyl to an explicit realization of the Heisenberg group was the question of why the Schrödinger picture and Heisenberg picture are physically equivalent. Abstractly, the reason is the Stone–von Neumann theorem: there is a unique unitary representation with given action of the central Lie algebra element z, up to a unitary equivalence: the nontrivial elements of the algebra are all equivalent to the usual position and momentum operators. Thus, the Schrödinger picture and Heisenberg picture are equivalent – they are just different ways of realizing this essentially unique representation. Theta representation The same uniqueness result was used by David Mumford for discrete Heisenberg groups, in his theory of equations defining abelian varieties. This is a large generalization of the approach used in Jacobi's elliptic functions, which is the case of the modulo 2 Heisenberg group, of order 8. The simplest case is the theta representation of the Heisenberg group, of which the discrete case gives the theta function. Fourier analysis The Heisenberg group also occurs in Fourier analysis, where it is used in some formulations of the Stone–von Neumann theorem. In this case, the Heisenberg group can be understood to act on the space of square integrable functions; the result is a representation of the Heisenberg groups sometimes called the Weyl representation. 
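Before turning to the geometric picture, a brief numerical sanity check (a sketch using NumPy, with arbitrarily chosen integer entries) confirms the matrix group law quoted for the three-dimensional case and the commutation relation [X, Y] = Z of the Lie algebra basis.

import numpy as np

def H(a, b, c):
    # Upper triangular Heisenberg matrix with parameters a, b, c
    return np.array([[1, a, c],
                     [0, 1, b],
                     [0, 0, 1]])

# Group law: (a, b, c)(a', b', c') = (a + a', b + b', c + c' + a*b')
a, b, c, ap, bp, cp = 1, 2, 3, 4, 5, 6
assert np.array_equal(H(a, b, c) @ H(ap, bp, cp),
                      H(a + ap, b + bp, c + cp + a * bp))

# Lie algebra basis elements and the relation [X, Y] = Z, with Z central
X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
Z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]])
assert np.array_equal(X @ Y - Y @ X, Z)
assert np.array_equal(X @ Z - Z @ X, np.zeros((3, 3), dtype=int))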
As a sub-Riemannian manifold The three-dimensional Heisenberg group H3(R) on the reals can also be understood to be a smooth manifold, and specifically, a simple example of a sub-Riemannian manifold. Given a point p = (x, y, z) in R3, define a differential 1-form Θ at this point as This one-form belongs to the cotangent bundle of R3; that is, is a map on the tangent bundle. Let It can be seen that H is a subbundle of the tangent bundle TR3. A cometric on H is given by projecting vectors to the two-dimensional space spanned by vectors in the x and y direction. That is, given vectors and in TR3, the inner product is given by The resulting structure turns H into the manifold of the Heisenberg group. An orthonormal frame on the manifold is given by the Lie vector fields which obey the relations [X, Y] = Z and [X, Z] = [Y, Z] = 0. Being Lie vector fields, these form a left-invariant basis for the group action. The geodesics on the manifold are spirals, projecting down to circles in two dimensions. That is, if is a geodesic curve, then the curve is an arc of a circle, and with the integral limited to the two-dimensional plane. That is, the height of the curve is proportional to the area of the circle subtended by the circular arc, which follows by Green's theorem. Heisenberg group of a locally compact abelian group It is more generally possible to define the Heisenberg group of a locally compact abelian group K, equipped with a Haar measure. Such a group has a Pontrjagin dual , consisting of all continuous -valued characters on K, which is also a locally compact abelian group if endowed with the compact-open topology. The Heisenberg group associated with the locally compact abelian group K is the subgroup of the unitary group of generated by translations from K and multiplications by elements of . In more detail, the Hilbert space consists of square-integrable complex-valued functions on K. The translations in K form a unitary representation of K as operators on : for . So too do the multiplications by characters: for . These operators do not commute, and instead satisfy multiplication by a fixed unit modulus complex number. So the Heisenberg group associated with K is a type of central extension of , via an exact sequence of groups: More general Heisenberg groups are described by 2-cocyles in the cohomology group . The existence of a duality between and gives rise to a canonical cocycle, but there are generally others. The Heisenberg group acts irreducibly on . Indeed, the continuous characters separate points so any unitary operator of that commutes with them is an multiplier. But commuting with translations implies that the multiplier is constant. A version of the Stone–von Neumann theorem, proved by George Mackey, holds for the Heisenberg group . The Fourier transform is the unique intertwiner between the representations of and . See the discussion at Stone–von Neumann theorem#Relation to the Fourier transform for details. See also Canonical commutation relations Wigner–Weyl transform Stone–von Neumann theorem Projective representation Geometrization conjecture Notes References External links Groupprops, The Group Properties Wiki Unitriangular matrix group UT(3,p) Group theory Lie groups Mathematical quantization Mathematical physics Werner Heisenberg 3-manifolds
Heisenberg group
[ "Physics", "Mathematics" ]
3,820
[ "Lie groups", "Mathematical structures", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Group theory", "Fields of abstract algebra", "Mathematical quantization", "Algebraic structures", "Mathematical physics" ]
22,124,835
https://en.wikipedia.org/wiki/Multiphoton%20lithography
Multiphoton lithography (also known as direct laser lithography or direct laser writing) is similar to standard photolithography techniques; structuring is accomplished by illuminating negative-tone or positive-tone photoresists via light of a well-defined wavelength. The main difference is the avoidance of photomasks. Instead, two-photon absorption is utilized to induce a change in the solubility of the resist for appropriate developers. Hence, multiphoton lithography is a technique for creating small features in a photosensitive material, without the use of excimer lasers or photomasks. This method relies on a multi-photon absorption process in a material that is transparent at the wavelength of the laser used for creating the pattern. By scanning and properly modulating the laser, a chemical change (usually polymerization) occurs at the focal spot of the laser and can be controlled to create an arbitrary three-dimensional pattern. This method has been used for rapid prototyping of structures with fine features. Two-photon absorption (TPA) is a third-order process with respect to the optical susceptibility and a second-order process with respect to light intensity. For this reason it is a non-linear process several orders of magnitude weaker than linear absorption, so very high light intensities are required to make such rare events sufficiently frequent. Tightly focused laser beams, for example, provide the needed intensities. Pulsed laser sources with pulse widths of around 100 fs are preferred here, as they deliver high-intensity pulses while depositing a relatively low average energy. To enable 3D structuring, the light source must be adapted to the liquid photoresin in such a way that single-photon absorption is highly suppressed. TPA is thus essential for creating complex geometries with high resolution and shape accuracy. For best results, the photoresins should be transparent at the excitation wavelength λ, typically between 500 and 1000 nm, and simultaneously absorbing in the range of λ/2. As a result, a sample can be scanned relative to the focused laser beam while the resist's solubility changes only in a confined volume. The geometry of that volume mainly depends on the iso-intensity surfaces of the focus. Concretely, those regions of the laser beam which exceed a given exposure threshold of the photosensitive medium define the basic building block, the so-called voxel. Voxels are thus the smallest single volumes of cured photopolymer, and they represent the basic building blocks of 3D-printed objects. Other parameters which influence the actual shape of the voxel are the laser mode and the refractive-index mismatch between the resist and the immersion system, which leads to spherical aberration. Polarization effects in laser 3D nanolithography have been found to fine-tune the feature sizes (and corresponding aspect ratio) obtained when structuring photoresists. Polarization is therefore a tunable parameter alongside laser power (intensity), scanning speed (exposure duration), accumulated dose, etc. In addition, plant-derived renewable bioresins can be employed for optical rapid prototyping without additional photosensitization. Materials for multiphoton polymerization The materials employed in multiphoton lithography are those normally used in conventional photolithography techniques. They can be found in liquid-viscous, gel or solid state, depending on the fabrication need.
Liquid resins require more complex sample-fixing procedures during the fabrication step, while the preparation of the resins themselves may be easier and faster. In contrast, solid resists can be handled more easily, but their preparation requires complex and time-consuming processes. A resin always includes a prepolymer (the monomer) and, depending on the final application, a photoinitiator. In addition, one may find polymerization inhibitors (useful to stabilize the resin and also to reduce the obtained voxel size), solvents (which may simplify casting procedures), thickeners (so-called "fillers") and other additives (such as pigments) which aim to functionalize the photopolymer. Acrylates Acrylates are the most widespread resin components. They can be found in many traditional photolithography processes which involve a radical reaction. They are widely used and commercially available in a broad range of products with different properties and compositions. The main advantages of this kind of liquid resin are its excellent mechanical properties and high reactivity. Acrylates exhibit slightly more shrinkage than epoxies, but their rapid iteration capability allows for close alignment with the design. Moreover, acrylates offer enhanced usability as they eliminate the need for spin coating or baking steps during processing. Finally, the polymerization steps are faster than for other kinds of photopolymers. Methacrylates are widely used owing to their biocompatibility. The majority of materials for two-photon polymerization are supplied by companies that also provide printers. Nevertheless, third-party resins such as ORMOCER are available, alongside numerous self-made resins. Epoxy resins These are the resins most employed in the MEMS and microfluidic fields. They exploit cationic polymerization. One of the best-known epoxy resins is SU-8, which allows thin-film deposition (up to 500 μm) and polymerization of structures with a high aspect ratio. Many other epoxy resins exist, such as SCR-701, largely employed for moving micro-objects, and SCR-500. Inorganic glass/ceramics Inorganic glass and ceramics have better thermal and chemical stabilities than photopolymers do, and they also offer improved durability due to their high resistance to corrosion, degradation, and wear. There has therefore been continuous interest in recent years in the development of resins and techniques that allow multiphoton lithography to be used for 3D printing of glasses and ceramics. It has been demonstrated that, using hybrid inorganic-organic resins and high-temperature thermal treatments, one can achieve 3D printing of glass-ceramics with sub-micrometer resolution. Recently, multiphoton lithography of an entirely inorganic resin for 3D printing of glasses without thermal treatments has also been shown, enabling 3D printing of glass micro-optics on the tips of optical fibers without damaging the fiber.
They vary in key parameters such as geometry, porosity and dimension in order to control and condition, in a mechanical and chemical fashion, fundamental cues in in vitro cell cultures: migration, adhesion, proliferation and differentiation. The capability to fabricate structures with feature sizes smaller than the cells themselves has dramatically improved the mechanobiology field, making it possible to build mechanical cues directly into the cells' microenvironment. Final applications range from stemness maintenance in adult mesenchymal stem cells, as in the NICHOID scaffold which mimics a physiological niche in vitro, to the generation of engineered migration scaffolds. Micromechanics and microfluidics Multiphoton polymerization is well suited to realizing micro-sized active devices (such as pumps) or passive devices (such as filters) that can be combined with lab-on-a-chip systems. These devices are widely used coupled to microchannels, with the advantage that they can be polymerized inside pre-sealed channels. Filters can be used to separate plasma from red blood cells, to separate cell populations (according to single-cell dimensions) or simply to remove impurities and debris from solutions. A porous 3D filter, which can only be fabricated by 2PP technology, offers two key advantages compared to filters based on 2D pillars. First, the 3D filter has increased mechanical resistance to shear stresses, enabling a higher void ratio and hence more efficient operation. Second, the 3D porous filter can efficiently filter disk-shaped elements without reducing the pore size to the minimum dimension of the cell. Integrated micropumps can be polymerized as two-lobed independent rotors, confined in the channel by their own shaft to avoid unwanted rotations. Such systems are activated simply by a focused CW laser system. Atomic force microscopy To date, atomic force microscopy microtips are realized with standard photolithographic techniques on hard materials such as gold, silicon, and its derivatives. However, the mechanical properties of such materials demand time-consuming and expensive production processes to create or bend the tips. Multiphoton lithography can be used to prototype and modify such tips, thus avoiding the complex fabrication protocol. Optics With the ability to create arbitrary 3D structures, multiphoton polymerization can build optical components such as optical waveguides, resonators, photonic crystals, and lenses. References External links Nano sculptures, the first nano-scale human form. Sculpture made by artist Jonty Hurwitz using multiphoton lithography, November 2014. Nonlinear optics Lithography (microfabrication) Computer printing Printing processes
Multiphoton lithography
[ "Materials_science" ]
1,969
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
22,128,220
https://en.wikipedia.org/wiki/Baffle%20%28heat%20transfer%29
Baffles are flow-directing or obstructing vanes or panels used to direct a flow of liquid or gas. They are used in some household stoves and in some industrial process vessels (tanks), such as shell and tube heat exchangers, chemical reactors, and static mixers. Baffles are an integral part of the shell and tube heat exchanger design. A baffle is designed to support tube bundles and direct the flow of fluids for maximum efficiency. Baffle design and tolerances for heat exchangers are discussed in the standards of the Tubular Exchanger Manufacturers Association (TEMA). Use of baffles The main roles of a baffle in a shell and tube heat exchanger are to: Hold tubes in position (preventing sagging), both in production and operation Prevent the effects of vibration, which increase with both fluid velocity and the length of the exchanger Direct shell-side fluid flow along the tube field. This increases fluid velocity and the effective heat transfer coefficient of the exchanger In a static mixer, baffles are used to minimize the tangential component of velocity which causes vortex formation, and thus to promote mixing. In a chemical reactor, baffles are often attached to the interior walls to promote mixing and thus increase heat transfer and possibly chemical reaction rates. In household stoves like the Handölkassetten and similar stoves, a baffle is used to prevent the gas from going directly up the chimney (and possibly causing a chimney fire) and to direct the gas towards the front of the oven before it continues upwards into the chimney. In this case the baffle helps increase the efficiency of the stove, as more heat leaves the gas before it exits. In tanks, baffles prevent rotational flow without affecting radial or longitudinal flow; the tank is provided with baffles which prevent swirling and vortex formation. Except in very large tanks, four (4) baffles are placed. Types of baffles Implementation of baffles is decided on the basis of size, cost and their ability to lend support to the tube bundles and direct flow: Longitudinal flow baffles (used in a two-pass shell) Impingement baffles (used for protecting the bundle when entrance velocity is high) Orifice baffles Single segmental Double segmental Support/blanking baffles Deresonating (detuning) baffles, used to reduce tube vibration Installation of baffles As mentioned, baffles deal with the concern of support and fluid direction in heat exchangers, so it is vital that they are spaced correctly at installation. The minimum baffle spacing is the greater of 50.8 mm or one fifth of the inner shell diameter. The maximum baffle spacing depends on the material and size of the tubes. The Tubular Exchanger Manufacturers Association sets out guidelines. There are also segments with a "no tubes in window" design that affects the acceptable spacing within the design. An important design consideration is that no recirculation zones or dead spots form – both of which are counterproductive to effective heat transfer. References Perry, R.H. and Green, D.W. (eds.) (2007) Perry's Chemical Engineers' Handbook (8th ed.), McGraw-Hill. Wolverine Tube Inc. (2008) Heat Transfer Data Book. Kavanagh, J. (2009) Heat Transfer Lectures 4 & 5, University of Sydney Chemical Engineering Department. Heat exchangers
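The TEMA-style spacing rule quoted above (minimum spacing equal to the greater of 50.8 mm or one fifth of the shell inner diameter) is easy to encode. The helper below is a minimal sketch of that rule only; the example shell diameter and tube length are arbitrary illustrative numbers, and the real maximum spacing must still be taken from the TEMA tables for the tube material and size.

```python
import math

def minimum_baffle_spacing(shell_inner_diameter_mm: float) -> float:
    """Minimum baffle spacing per the rule quoted above:
    the greater of 50.8 mm or one fifth of the shell inner diameter."""
    return max(50.8, shell_inner_diameter_mm / 5.0)

def baffle_count(tube_length_mm: float, spacing_mm: float) -> int:
    """Number of baffles needed so that no unsupported span exceeds the spacing."""
    return max(0, math.ceil(tube_length_mm / spacing_mm) - 1)

# Example: a 600 mm shell with 6 m tubes, using the minimum allowable spacing.
shell_id = 600.0
spacing = minimum_baffle_spacing(shell_id)      # 120 mm (shell_id / 5 governs)
print(spacing, baffle_count(6000.0, spacing))   # -> 120.0 49
```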
Baffle (heat transfer)
[ "Chemistry", "Engineering" ]
708
[ "Chemical equipment", "Heat exchangers" ]
22,130,330
https://en.wikipedia.org/wiki/Atmospheric-pressure%20plasma
Atmospheric-pressure plasma (or AP plasma or normal pressure plasma) is a plasma in which the pressure approximately matches that of the surrounding atmosphere – the so-called normal pressure. Fundamentals of atmospheric-pressure plasma generation A discharge can be ignited and plasma can be sustained when a DC voltage that is delivered to the gas medium via electrodes is higher than the breakdown voltage for the gas. The relationship between this breakdown voltage and the pd product—where p is the gas pressure and d is the distance between the electrodes—is referred to as Paschen's law. For a range of gas molecules, the breakdown voltage estimated by Paschen's law has a minimum value of around pd = 1-10 Torr cm. This suggests that in order to get a practical breakdown voltage for the gas discharge to ignite, a smaller electrode-gap distance is preferred as gas pressure increases. The Paschen-minimum condition at atmospheric pressure can be reached at a gap spacing of considerably less than a millimeter, at which point a few hundreds of volt should be the DC voltage needed for the gas breakdown. However, the breakdown DC voltage for argon gas at atmospheric pressure is predicted to rise to a few kV at a gap spacing of 5 mm. Reducing the breakdown voltage is advantageous from a plasma source design perspective since it allows for handling flexibility and easier source operation. The use of higher-frequency HF voltage sources is one approach to reducing the breakdown voltage. As the pressure increases, the transfer of energy from electrons to gas molecules and ions through collisions becomes more efficient, resulting in the establishment of thermal equilibrium among electrons, gas molecules, and ions. However, it is possible to inhibit the energy transfer between the electrons and the gas molecules and ions. Dielectric barrier discharge (DBD) is one of the main ways to produce low-temperature plasmas in a non-equilibrium condition at atmospheric pressure. Additionally, there have been reports stating that the Atmospheric-pressure glow discharge, when powered by a low-frequency (10-100 kHz) source, needs a dielectric barrier on one side of the electrodes to ensure stable and consistent operation. However, when the operating frequency is increased to RF, reaching frequencies as high as 13.56 MHz, the stability of the plasma greatly improves, making the dielectric barrier no longer necessary for stable operation. Technical significance Atmospheric-pressure plasmas matter because in contrast with low-pressure plasma or high-pressure plasma, no reaction vessel is needed to maintain pressure. Depending on the generation principle, these plasmas can be employed directly in the production line. This eliminates the need for cost-intensive chambers for producing a partial vacuum as used in low-pressure plasma technology. Generation Although the disadvantages of low-pressure plasmas can be avoided by plasma formation at atmospheric pressure, maintaining atmospheric pressure plasmas necessitates high voltage for gas breakdown and causes greater collisions between electrons and gas molecules, which can lead to arcing and gas heating. 
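Paschen's law gives the breakdown voltage as a function of the pressure–distance product pd. The sketch below evaluates the usual analytic form V_b = B·pd / ln(A·pd / ln(1 + 1/γ)); the coefficients A and B and the secondary-emission coefficient γ are rough, literature-style values for argon and should be treated as illustrative assumptions rather than exact data from the text.

```python
import numpy as np

# Paschen curve sketch: V_b = B*p*d / ln(A*p*d / ln(1 + 1/gamma))
# A, B, gamma below are rough illustrative values for argon (assumptions).
A = 11.5      # ionization saturation constant, 1/(cm*Torr)
B = 176.0     # V/(cm*Torr)
gamma = 0.01  # secondary electron emission coefficient of the cathode

def breakdown_voltage(pd_torr_cm):
    """Breakdown voltage (V) for a given pressure-distance product (Torr*cm)."""
    pd = np.asarray(pd_torr_cm, dtype=float)
    denom = np.log(A * pd / np.log(1.0 + 1.0 / gamma))
    return np.where(denom > 0, B * pd / denom, np.inf)  # no breakdown left of the minimum

pd = np.logspace(-1, 3, 400)          # Torr*cm
vb = breakdown_voltage(pd)
i_min = int(np.nanargmin(vb))
print(f"Paschen minimum near pd = {pd[i_min]:.2f} Torr*cm, V_b = {vb[i_min]:.0f} V")

# Atmospheric pressure (760 Torr) with a 5 mm gap -> pd = 380 Torr*cm:
print(f"760 Torr, 0.5 cm gap: V_b ~ {float(breakdown_voltage(380.0)):.0f} V")
```

The curve has its minimum near pd of order 1 Torr·cm, consistent with the statement above that practical breakdown at atmospheric pressure favours sub-millimetre gaps, while a 5 mm gap pushes the required DC voltage into the kilovolt range.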
Various forms of excitation are distinguished: DC (direct current) and low-frequency excitation RF (radio frequency) excitation Microwave excitation Atmospheric-pressure plasmas that have attained any noteworthy industrial significance are those generated by DC excitation (electric arc) and AC excitation (corona discharge, dielectric barrier discharge, piezoelectric direct discharge and plasma jets), as well as 2.45 GHz microwave microplasmas. DC plasma jet By means of a high-voltage discharge (5–15 kV, 10–100 kHz) a pulsed electric arc is generated. A process gas, usually oil-free compressed air flowing past this discharge section, is excited and converted to the plasma state. This plasma passes through a jet head to the surface of the material to be treated. The jet head determines the geometry of the beam, and is at earth potential to hold back potential-carrying parts of the plasma stream. Microwave plasma jet A microwave system uses amplifiers that output up to 200 watts of radio-frequency (RF) power to produce the arc that generates the plasma. Most solutions work at 2.45 GHz. A newer technology provides ignition and highly efficient operation with the same electronics and coupling network. This kind of atmospheric-pressure plasma is different: the plasma forms only at the tip of the electrode, which is why the construction of a cannula jet is possible. Applications Manufacturers use plasma jets for, among other things, activating and cleaning plastic and metal surfaces to prepare them for adhesive bonding and painting. Sheet materials up to several meters wide can be treated today by aligning a number of jets in a row. Surface modification achieved by plasma jets is comparable to the effects obtained with low-pressure plasma. Depending on the power of the jet, the plasma beam can be up to 40 mm long and attain a treatment width of 15 mm. Special rotary systems allow a treatment width per jet tool of up to 13 cm. Depending on the required treatment performance, the plasma source is moved at a spacing of 10–40 mm and at a speed of 5–400 m/min relative to the surface of the material being treated. A key advantage of this system is that it can be integrated in-line in existing production systems. In addition, the activation achievable is distinctly higher than with potential-based pretreatment methods (corona discharge). It is possible to coat varied surfaces with this technique. Anticorrosive layers and adhesion promoter layers can be applied to many metals without solvents, providing a much more environmentally friendly solution. See also Laser Schlieren Deflectometry Dielectric barrier discharge Plasma pencil References Citations Bibliography Tendero C., Tixier C., Tristant P., Desmaison J., Leprince P.: Atmospheric pressure plasmas: A review; Spectrochimica Acta Part B: Atomic Spectroscopy; Volume 61, Issue 1, January 2006, pp 2–30. Förnsel P.: Vorrichtung zur Oberflächen-Vorbehandlung von Werkstücken (Device for surface pretreatment of workpieces); DE 195 32 412 EU-IP4Plasma e-learning portal: Basic facts on the fourth state of matter and its technical use Fraunhofer-Institut für Fertigungstechnik und Angewandte Materialforschung (IFAM): Plasmatechnik und Oberflächen (Plasma technology and surfaces) – PLATO Leibniz Institute for Plasma Science and Technology (INP Greifswald e.V.) 
Generation of atmospheric plasma and effect on surfaces - Flash animations Pulsed Atmospheric Arc Technology Atmospheric Plasma Treatment Explained in Simple Terms from the UK manufacturer Henniker Plasma Plasmatreat US LP: Atmospheric Plasma Treatment Basics of microwave driven atmospheric plasma Plasma types
Atmospheric-pressure plasma
[ "Physics" ]
1,396
[ "Plasma types", "Plasma physics" ]
22,131,077
https://en.wikipedia.org/wiki/GHS%20hazard%20pictograms
Hazard pictograms form part of the international Globally Harmonized System of Classification and Labelling of Chemicals (GHS). Two sets of pictograms are included within the GHS: one for the labelling of containers and for workplace hazard warnings, and a second for use during the transport of dangerous goods. Either one or the other is chosen, depending on the target audience, but the two are not used together for the same hazard. The two sets of pictograms use the same symbols for the same hazards, although certain symbols are not required for transport pictograms. Transport pictograms come in a wider variety of colors and may contain additional information such as a subcategory number. Hazard pictograms are one of the key elements for the labelling of containers under the GHS, along with: an identification of the product; a signal word – either Danger or Warning – where necessary; hazard statements, indicating the nature and degree of the risks posed by the product; precautionary statements, indicating how the product should be handled to minimize risks to the user (as well as to other people and the general environment); and the identity of the supplier (who might be a manufacturer or importer). The GHS chemical hazard pictograms are intended to provide the basis for or to replace national systems of hazard pictograms. In the European Union they have been implemented through the CLP Regulation, which entered into force in 2009. The GHS transport pictograms are the same as those recommended in the UN Recommendations on the Transport of Dangerous Goods, widely implemented in national regulations such as the U.S. Federal Hazardous Materials Transportation Act (49 U.S.C. 5101–5128) and D.O.T. regulations at 49 C.F.R. 100–185. Physical hazards pictograms Health hazards pictograms Physical and health hazard pictograms Environmental hazards pictograms Transport pictograms Class 1: Explosives Class 2: Gases Classes 3 and 4: Flammable liquids and solids Other GHS transport classes Non-GHS transport pictograms The following pictograms are included in the UN Model Regulations but have not been incorporated into the GHS because of the nature of the hazards. See also Globally Harmonized System of Classification and Labeling of Chemicals Hazard symbol HMIS Color Bar Hazchem Hazmat NFPA 704 Notes References (the "CLP Regulation") ("UN Model Regulations Rev.15") ("UN Manual of Tests and Criteria Rev.4") External links GHS pictogram gallery from the United Nations Economic Commission for Europe Hazard pictograms Pictograms
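A GHS label combines the elements listed above: product identifier, pictogram(s), signal word, hazard statements, precautionary statements and supplier identity. The snippet below sketches that combination as a simple data structure. The pictogram codes GHS01–GHS09 are the standard identifiers, while the example substance, supplier and statement codes are purely illustrative and not taken from the article.

```python
from dataclasses import dataclass, field

# Standard GHS pictogram codes and the symbols they carry.
GHS_PICTOGRAMS = {
    "GHS01": "exploding bomb",
    "GHS02": "flame",
    "GHS03": "flame over circle (oxidizers)",
    "GHS04": "gas cylinder",
    "GHS05": "corrosion",
    "GHS06": "skull and crossbones",
    "GHS07": "exclamation mark",
    "GHS08": "health hazard",
    "GHS09": "environment",
}

@dataclass
class GHSLabel:
    """Minimal sketch of the container-label elements described above."""
    product_identifier: str
    supplier: str
    signal_word: str                                      # "Danger" or "Warning"
    pictograms: list = field(default_factory=list)        # e.g. ["GHS02", "GHS07"]
    hazard_statements: list = field(default_factory=list)         # H-statements
    precautionary_statements: list = field(default_factory=list)  # P-statements

    def describe(self) -> str:
        symbols = ", ".join(GHS_PICTOGRAMS.get(code, code) for code in self.pictograms)
        return f"{self.product_identifier} ({self.supplier}): {self.signal_word}; pictograms: {symbols}"

# Purely illustrative example label:
label = GHSLabel(
    product_identifier="Example solvent",
    supplier="Example Chemicals Ltd.",
    signal_word="Danger",
    pictograms=["GHS02", "GHS07"],
    hazard_statements=["H225", "H319"],
    precautionary_statements=["P210", "P305+P351+P338"],
)
print(label.describe())
```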
GHS hazard pictograms
[ "Chemistry", "Mathematics" ]
556
[ "Symbols", "Pictograms", "Globally Harmonized System" ]
22,131,699
https://en.wikipedia.org/wiki/Monolithic%20HPLC%20column
A monolithic HPLC column, or monolithic column, is a column used in high-performance liquid chromatography (HPLC). The internal structure of the monolithic column is created in such a way that many channels form inside the column. The material inside the column which separates the channels can be porous and functionalized. In contrast, most HPLC configurations use particulate packed columns; in these configurations, tiny beads of an inert substance, typically a modified silica, are used inside the column. Monolithic columns can be broken down into two categories, silica-based and polymer-based monoliths. Silica-based monoliths are known for their efficiency in separating smaller molecules while, polymer-based are known for separating large protein molecules. Technology overview In analytical chromatography, the goal is to separate and uniquely identify each of the compounds in a substance. Alternatively, preparative scale chromatography is a method of purification of large batches of material in a production environment. The basic methods of separation in HPLC rely on a mobile phase (water, organic solvents, etc.) being passed through a stationary phase (particulate silica packings, monoliths, etc.) in a closed environment (column); the differences in reactivity among the solvent of interest and the mobile and stationary phases distinguish compounds from one another in a series of adsorption and desorption phenomena. The results are then visually displayed in a resulting chromatogram. Stationary phases are available in many varieties of packing styles as well as chemical structures and can be functionalized for added specificity. Monolithic-style columns, or monoliths, are one of many types of stationary phase structure. Monoliths, in chromatographic terms, are porous rod structures characterized by mesopores and macropores. These pores provide monoliths with high permeability, a large number of channels, and a high surface area available for reactivity. The backbone of a monolithic column is composed of either an organic or inorganic substrate in, and can easily be chemically altered for specific applications. Their unique structure gives them several physico-mechanical properties that enable them to perform competitively against traditionally packed columns. Historically, the typical HPLC column consists of high-purity particulate silica compressed into stainless steel tubing. To decrease run times and increase selectivity, smaller diffusion distances have been pursued. To achieve smaller diffusion distances there has been a decrease in the particle sizes. However, as the particle size decreases, the backpressure (for a given column diameter and a given volumetric flow) increases proportionally. Pressure is inversely proportional to the square of the particle size; i.e., when particle size is halved, pressure increases by a factor of four. This is because as the particle sizes get smaller, the interstitial voids (the spaces between the particles) do as well, and it is harder to push the compounds through the smaller spaces. Modern HPLC systems are generally designed to withstand about of backpressure in order to deal with this problem. Monoliths also have very short diffusion distances, while also providing multiple pathways for solute dispersion. Packed particle columns have pore connectivity values of about 1.5, while monoliths have values ranging from 6 to greater than 10. 
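The inverse-square relationship between particle size and backpressure mentioned above can be illustrated with a simple scaling calculation. The sketch below normalizes to an arbitrary reference column; the 5 μm / 50 bar reference point and the list of particle sizes are assumed values chosen only for illustration, not data from the text.

```python
# Backpressure scales roughly with 1/dp^2 at fixed flow rate and column geometry.
def relative_backpressure(particle_size_um, reference_size_um=5.0, reference_pressure_bar=50.0):
    """Estimated backpressure when only the particle size changes.
    The 5 um / 50 bar reference point is an arbitrary assumption for illustration."""
    return reference_pressure_bar * (reference_size_um / particle_size_um) ** 2

for dp in (5.0, 3.0, 2.5, 1.7):
    print(f"{dp:>4.1f} um particles -> ~{relative_backpressure(dp):6.0f} bar")
# Halving the particle size (5 um -> 2.5 um) quadruples the pressure, as stated above.
```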
This means that, in a particulate column, a given analyte may diffuse into and out of the same pore, or enter through one pore and exit through a connected pore. By contrast, an analyte in a monolith is able to enter one channel and exit through any of 6 or more different venues. Little of the surface area in a monolith is inaccessible to compounds in the mobile phase. The high degree of interconnectivity in monoliths confers an advantage seen in the low backpressures and readily achievable high flow rates. Monoliths are ideally suited for large molecules; although the purification of larger molecules can be very time-consuming. As mentioned previously, particle sizes are decreasing in an attempt to achieve higher resolution and faster separations, which led to higher backpressures. When the smaller particle sizes are used to separate biomolecules, backpressures increase further because of the large molecule size. In monoliths, where backpressures are low and channel sizes are large, small molecule separations are less efficient. This is demonstrated by the dynamic binding capacities, a measure of how much sample can bind to the surface of the stationary phase. Dynamic binding capacities of monoliths for large molecules can be an order of ten times greater than that for particulate packings. Monoliths exhibit no shear forces or eddying effects. High interconnectivity of the mesopores allows for multiple avenues of convective flow through the column. Mass transport of solutes through the column is relatively unaffected by flow rate. This is completely at odds to traditional particulate packings, whereby eddy effects and shear forces contribute greatly to the loss of resolution and capacity, as seen in the vanDeemter curve. Monoliths can, however, suffer from a different flow disadvantage: wall effects. Silica monoliths, especially, have a tendency to pull away from the sides of their column encasing. When this happens, the flow of the mobile phase occurs around the stationary phase as well as through it, decreasing resolution. Wall effects have been reduced greatly by advances in column construction. Other advantages of monoliths conferred by their individual construction include greater column to column and batch to batch reproducibility. One technique of creating monolith columns is to polymerize the structure in situ. This involves filling the mold or column tubing with a mixture of monomers, a cross-linking agent, a free-radical initiator, and a porogenic solvent, then initiating the polymerization process under carefully controlled thermal or irradiating conditions. Monolithic in situ polymerization avoids the primary source of column to column variability, which is the packing procedure. Additionally, packed particle columns must be maintained in a solvent environment and cannot be exposed to air during or after the packing procedure. If exposed to air, the pores dry out and no longer provide adequate surface area for reactivity; the column must be repacked or discarded. Further, because particle compression and packing uniformity are not relevant to monoliths, they exhibit greater mechanical robustness; if particulate columns are dropped, for example, the integrity of the column may be corrupted. Monolithic columns are more physically stable than their particulate counterparts. Technology development The roots of liquid chromatography extend back over a century ago to 1900, when Russian botanist Mikhail Tsvet began experimenting with plant pigments in chlorophyll. 
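Band broadening as a function of linear velocity is usually described by the van Deemter equation, H = A + B/u + C·u, mentioned above; the mass-transfer (C) term is what penalizes high flow rates, and the flow-through channels of a monolith keep it small. The coefficients in the sketch below are invented, order-of-magnitude numbers chosen only to illustrate the comparison, not measured values for any real column.

```python
import numpy as np

def van_deemter(u, a, b, c):
    """Plate height H (um) as a function of linear velocity u (mm/s)."""
    return a + b / u + c * u

u = np.linspace(0.2, 10.0, 50)  # mm/s

# Invented, order-of-magnitude coefficients purely for illustration:
particulate = van_deemter(u, a=4.0, b=2.0, c=1.5)   # packed particle bed
monolith    = van_deemter(u, a=4.0, b=2.0, c=0.3)   # smaller mass-transfer (C) term

for flow in (1.0, 5.0, 10.0):
    i = int(np.argmin(np.abs(u - flow)))
    print(f"u = {u[i]:4.1f} mm/s   packed H ~ {particulate[i]:5.1f} um   monolith H ~ {monolith[i]:5.1f} um")
```

At low velocity the two curves are similar; at high velocity the flatter monolith curve is what allows faster runs without a large loss of resolution.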
He noted that, when a solvent was applied, distinct bands appeared that migrated at different rates along a stationary phase. For this new observation, he coined the term “chromatography,” a colored picture. His first lecture on the subject was presented in 1903, but his most important contribution occurred three years later, in 1906, when the paper “Adsorption analysis and chromatographic method. Applications on the chemistry of chlorophyll,” was published. Rivalry with a colleague who readily and vocally denounced his work meant that chromatographic analysis was shelved for almost 25 years. The great irony of the matter is that it was his rival's students who later took up the chromatography banner in their work with carotins. Greatly unchanged from Tswett's time until the 1940s, normal phase chromatography was performed by passing a gravity-fed solvent through small glass tubes packed with pellicular adsorbent beads. It was in the 1940s, however, that there was a great revolution in gas chromatography (GC). Although GC was a wonderful technique for analyzing inorganic compounds, less than 20% of organic molecules are able to be separated using this technique. It was Richard Synge, who in 1952 won the Nobel Prize in Chemistry for his work with partition chromatography, who applied the theoretical knowledge gained from his work in GC to LC. From this revolution, the 1950s also saw the advent of paper chromatography, reversed-phase partition chromatography (RPC), and hydrophobic interaction chromatography (HIC). The first gels for use in LC were created using cross-linked dextrans (Sephadex) in an attempt to realize Synge's prediction that a unique single-piece stationary phase could provide an ideal chromatographic solution. In the 1960s, polyacrylamide and agarose gels were created in a further attempt to create a single-piece stationary phase, but the purity of and stability of available components did not prove useful for implementation in the HPLC. In this decade, affinity chromatography was invented, an ultra-violet (UV) detector was used for the first time in conjunction with LC, and, most importantly, the modern HPLC was born. Csaba Horvath led the development of modern HPLC by piecing together laboratory equipment to suit his purposes. In 1968, Picker Nuclear Company marketed the first commercially available HPLC as a “Nucleic Acid Analyzer.” The following year, the first international symposia on HPLC was held, and Kirkland at DuPont was able to functionalize controlled porosity pellicular particles for the first time. The 1970s and 1980s witnessed a renewed interest in separations media with reduced interparticular void volumes. Perfusion chromatography showed, for the first time, that chromatography media could support high flow rates without sacrificing resolution. Monoliths aptly fit into this new class of media, as they exhibit no void volume and can withstand flow rates up to 9mL/minute. Polymeric monoliths as they exist today were developed independently by three different labs in the late 1980s led by Hjerten, Svec, and Tennikova. Simultaneously, bioseparations became increasingly important, and monolith technologies proved beneficial in biotechnology separations. Though industry focus in the 1980s was on biotechnology, focus in the 1990s shifted to process engineering. While mainstream chromatographers were using 3μm particulate columns, sub-2μm columns were in research phase. 
The smaller particles meant better resolution and shorter run times; there was also an associated increase in backpressure. In order to withstand the pressure, a new field of chromatography came into being: UHPLC or UPLC- ultra high pressure liquid chromatography. The new instruments were able to endure pressures of up to , as opposed to conventional machines, which, as previously state, can hold up to . UPLC is an alternative solution to the same problems monolithic columns solve. Similarly to UPLC, monolith chromatography can help the bottom line by increasing sample throughput, but without the need to spend capital on new equipment. In 1996, Nobuo Tanaka, at the Kyoto Institute of Technology, prepared silica monoliths using a colloidal suspension synthesis (aka “sol-gel”) developed by a colleague. The process is different from that used in polymeric monoliths. Polymeric monoliths, as mentioned above, are created in situ, using a mixture of monomers and a porogen within the column tubing. Silica monoliths, on the other hand, are created in a mold, undergo a significant amount of shrinkage, and are then clad in a polymeric shrink tubing like PEEK (polyetheretherketone) to reduce wall effects. This method limits the size of columns that can be produced to less than 15 cm long, and though standard analytical inner diameters are readily achieved, there is currently a trend in developing nanoscale capillary and prep scale silica monoliths. Technology life cycle Silica monoliths have only been commercially available since 2001, when Merck began their Chromolith campaign. The Chromolith technology was licensed from Soga and Nakanishi's group at Kyoto University. The new product won the PittCon Editors’ Gold Award for Best New Product, as well as an R&D 100 Award, both in 2001. Individual monolith columns have a life cycle that generally exceeds that of its particulate competitors. When selecting an HPLC column supplier, column lifetime was second only to column-to-column reproducibility in importance to the purchaser. Chromolith columns, for example, have demonstrated reproducibility of 3,300 sample injections and 50,000 column volumes of mobile phase. Also important to the life cycle of the monolith is its increased mechanical robustness; polymeric monoliths are able to withstand pH ranges from 1 to 14, can endure elevated temperatures, and do not need to be handled delicately. “Monoliths are still teenagers,” affirms Frantisec Svec, a leader in the field of novel stationary phases for LC. Industry evolution Liquid chromatography as we know it today really got its start in 1969, when the first modern HPLC was designed and marketed as a nucleic acid analyzer. Columns throughout the 1970s were unreliable, pump flow rates were inconsistent, and many biologically active compounds escaped detection by UV and fluorescence detectors. Focus on purification methods in the '70s morphed into faster analyses in the 1980s, when computerized controls were integrated into HPLC equipment. Higher degrees of computerization then led to emphasis on more precise, faster, automated equipment in the 1990s. Atypical of many technologies of the '60s and '70s, the emphasis in improvements was not on “bigger and better,” but on “smaller and better”. At the same time the HPLC user-interface was improving, it was critical to be able to isolate hundreds of peptides or biomarkers from ever decreasing sample sizes. 
Laboratory analytical instrumentation has only been recognized as a separate and distinct industry by NAICS and SIC since 1987. This market segmentation includes not only gas and liquid chromatography, but also mass spectrometry and spectrophotometric instruments. Since first recognized as a separate market, sales of analytical laboratory equipment increased from about $3.5 billion in 1987 to more than $26 billion in 2004. Revenues in the world liquid chromatography market, specifically, are expected to grow from $3.4 billion in 2007 to $4.7 billion in 2013, with a slight decrease in spending expected in 2008 and 2009 from the worldwide economic slump and decreased or stagnant spending. The pharmaceutical industry alone accounts for 35% of all the HPLC instruments in use. The main source of growth in LC stems from biosciences and pharmaceutical companies. Technology applications In its earliest form, liquid chromatography was used to separate the pigments of chlorophyll by a Russian botanist. Decades later, other chemists used the procedure for the study of carotins. Liquid chromatography was then used for the isolation of small molecules and organic compounds like amino acids, and most recently has been used in peptide and DNA research. Monolith columns have been instrumental in advancing the field of biomolecular research. In recent trade shows and international meetings for HPLC, interest in column monoliths and biomolecular applications has grown steadily, and this correlation is no coincidence. Monoliths have been shown to possess great potential in the “omics” fields- genomics, proteomics, metabolomics, and pharmacogenomics, among others. The reductionist approach to understanding the chemical pathways of the body and reactions to different stimuli, like drugs, are essential to new waves of healthcare like personalized medicine. Pharmacogenomics studies how responses to pharmaceutical products differ in efficacy and toxicity based on variations in the patient's genome; it is a correlation of drug response to gene expression in a patient. Jeremy K. Nicholson of the Imperial College, London, used a postgenomic viewpoint to understand adverse drug reactions and the molecular basis of human disesase. His group studied gut microbial metabolic profiles and were able to see distinct differences in reactions to drug toxicity and metabolism even among various geographical distributions of the same race. Affinity monolith chromatography provides another approach to drug response measurements. David Hage at the University of Nebraska binds ligands to monolithic supports and measures the equilibrium phenomena of binding interactions between drugs and serum proteins. A monolith-based approach at the University of Bologna, Italy, is currently in use for high-speed screening of drug candidates in the treatment of Alzheimer's. In 2003, Regnier and Liu of Purdue University described a multi-dimensional LC procedure for identifying single nucleotide polymorphisms (SNPs) in proteins. SNPs are alterations in the genetic code that can sometimes cause changes in protein conformation, as is the case with sickle cell anemia. Monoliths are particularly useful in these kinds of separations because of their superior mass transport capabilities, low backpressures coupled with faster flow rates, and relative ease of modification of the support surface. Bioseparations on a production scale are enhanced by monolith column technologies as well. 
The fast separations and high resolving power of monoliths for large molecules means that real-time analysis on production fermentors is possible. Fermentation is well known for its use in making alcoholic beverages, but is also an essential step in the production of vaccines for rabies and other viruses. Real-time, on-line analysis is critical for monitoring of production conditions, and adjustments can be made if necessary. Boehringer Ingelheim Austria has validated a method with cGMP (commercial good manufacturing practices) for production of pharmaceutical-grade DNA plasmids. They are able to process 200L of fermentation broth on an 800mL monolith. At BIA Separations, processing time of the tomato mosaic virus decreased considerably from the standard five days of manually intensive work to equivalent purity and better recovery in only two hours with a monolith column. Other viruses have been purified on monoliths as well. Another area of interest for HPLC is forensics. GC-MS (Gas Chromatography-Mass Spectroscopy) is generally considered the gold standard for forensic analysis. It is used in conjunction with online databases for rapid analysis of compounds in tests for blood alcohol, cause of death, street drugs, and food analysis, especially in poisoning cases. Analysis of buprenorphine, a heroin substitute, demonstrated the potential utility of multidimensional LC as a low-level detection method. HPLC methods can measure this compound at 40 ng/mL, compared to GC-MS at 0.5 ng/mL, but LC-MS-MS can detect buprenorphine at levels as low as 0.02 ng/mL. The sensitivity of multidimensional LC is therefore 2000 times greater than that of conventional HPLC. Industry applications The liquid chromatography marketplace is incredibly diverse. Five to ten firms are consistently market leaders, yet nearly half of the market is made up of small, fragmented companies. This section of the report will focus on the roles that a few companies have had in bringing monolith column technologies to the commercial market. In 1998, start-up biotechnology company BIA Separations of Ljubljana, Slovenia, came into being. The technology was originally developed by Tatiana Tennikova and Frantisek Svec during a collaboration between their respective institutes. The patent for these columns was acquired by BIA Separations and Ales Podgornik and Milos Barut developed the first commercially available monolith column in the form of a short disc encapsulated in a plastic housing. Trademarked CIM, BIA Separations has since introduced full lines of reversed-phase, normal-phase, ion-exchange, and affinity polymeric monoliths. Ales Podgornik and Janez Jancar then went on to develop large scale tube monolithic columns for industrial use. The largest column currently available is 8L. In May 2008, LC instrumentation powerhouse Agilent technologies agreed to market BIA Separations’ analytical columns based on monolith technology. Agilent's commercialized the columns with strong and weak ion exchange phases and Protein A in September 2008 when they unveiled their new Bio-Monolith product line at the BioProcess International conference. While BIA Separations was the first to commercially market polymeric monoliths, Merck KGaA was the first company to market silica monoliths. In 1996, Tanaka and coworkers at the Kyoto Institute of Technology published extensive work on silica monolith technologies. Merck was later issued a license from Kyoto Institute of Technology to develop and produce the silica monoliths. 
Promptly thereafter, in 2001, Merck introduced its Chromolith line of monolithic HPLC columns at analytical instrumentation trade show PittCon. Initially, says Karin Cabrera, senior scientist at Merck, the high flow rate was the selling point for the Chromolith line. Based on customer feedback, though, Merck soon learned that the columns were more stable and longer-lived than particle-packed columns. The columns were the recipients of various new product awards. Difficulties in production of the silica monoliths and tight patent protection have precluded attempts by other companies at developing a similar product. It has been noted that there are more patents concerning how to encapsulate the silica rod than there are on the manufacture of the silica itself. Historically, Merck has been known for its superior chemical products, and, in liquid chromatography, for the purity and reliability of its particulate silica. Merck is not known for its LC columns. Five years after the introduction of its Chromolith line, Merck made a very strategic marketing decision. They granted a worldwide sublicense of the technology to a small (less than $100M in sales), innovative company well known for its cutting-edge column technology: Phenomenex. This was a superior strategic move for two reasons. As mentioned above, Merck is not well known for its column manufacturing. Furthermore, having more than one silica monolith manufacturer serves to better validate the technology. Having sublicensed the technology from Merck, Phenomenex introduced its Onyx product line in January 2005. On the other side of monolith technologies are the polymerics. Unlike the inorganic silica columns, the polymer monoliths are made of an organic polymer base. Dionex, traditionally known for its ion chromatography capabilities, has led this side of the field. In the 1990s, Dionex first acquired a license for the polymeric monolith technology developed by leading monolithic chromatography researcher Frantisec Svec while he was at Cornell University. In 2000, they acquired LC Packings, whose competencies were in LC column packings. LC Packings/Dionex revealed their first monolithic capillary column at the Montreux LC-MS Conference. Earlier that year, another company, Isco, introduced a polystyrene divinylbenzene (PS-DVB) monolith column under the brand SWIFT. In January 2005, Dionex was sold the rights to Teledyne Isco's SWIFT media products, intellectual property, technology, and related assets. Though the core competencies of Dionex have traditionally been in ion chromatography, through strategic acquisitions and technology transfers, it has quickly established itself as the primary producer of polymeric monoliths. Economic impact Though the many advances of HPLC and monoliths are highly visible within the confines of the analytical and pharmaceutical industries, it is unlikely that general society is aware of these developments. Currently, consumers may witness technology developments in the analytical sciences industry in the form of a broader array of available pharmaceutical products of higher purity, advanced forensic testing in criminal trials, better environmental monitoring, and faster returns on medical tests. In the future, presumably, this may not be the case. As medicine becomes more individualized over time, consumer awareness that something is improving their quality of care seems more likely. The further thought that monoliths or HPLC are involved is unlikely to concern the general public, however. 
There are two main cost drivers behind technological change in this industry. Though many different analytical areas use LC, including food and beverage industries, forensics labs, and clinical testing facilities, the largest impetus toward technology developments comes from the research and development and production arms of the pharmaceutical industry. The areas in which high-throughput monolithic column technologies are likely to have the largest economic impact are R&D and downstream processing. From the Research and Development field comes the desire for more resolved, faster separations from smaller sample quantities. The only phase of drug development under direct control of a pharmaceutical company is the R&D stage. The goal of analytical work is to obtain as much information as possible from the sample. At this stage, high-throughput and analysis of tiny sample quantities are critical. Pharmaceutical companies are looking for tools that will better enable them to measure and predict the efficacy of candidate drugs in shorter times and with less expensive clinical trials. To this end, nano-scale separations, highly automated HPLC equipment, and multi-dimensional chromatography have become influential. The prevailing method to increase the sensitivity of analytical methods has been multi-dimensional chromatography. This practice uses other analysis techniques in conjunction with liquid chromatography. For example, mass spectrometry (MS) has very much gained in popularity as an on-line analytical technique following HPLC. It is limited, however, in that MS, like nuclear magnetic resonance spectroscopy (NMR) or electrospray ionization techniques (ESI), is only feasible when using very small quantities of solute and solvent; LC-MS is used with nano or capillary scale techniques, but cannot be used in prep-scale. Another tactic for increasing selectivity in multi-dimensional chromatography is to use two columns with different selectivity orthogonally; ie... linking an ion exchange column to a C18 endcapped column. In 2007, Karger reported that, through multi-dimensional chromatography and other techniques, starting with only about 12,000 cells containing 1-4μg of protein, he was able to identify 1867 unique proteins. Of those, Karger can isolate 4 that may be of interest as cervical cancer markers. Today, liquid chromatographers using multi-dimensional LC can isolate compounds at the femtomole (10−15 mole) and attomole (10−18 mole) levels. After a drug has been approved by the U.S. Food and Drug Administration (FDA), the emphasis at a pharmaceutical company is on getting a product to market. This is where prep or process scale chromatography has a role. In contrast to analytical analysis, preparatory scale chromatography focuses on isolation and purity of compounds. There is a trade-off between the degree of purity of compound and the amount of time required to achieve that purity. Unfortunately, many of the preparatory or process scale solutions used by pharmaceutical companies are proprietary, due to difficulties in patenting a process. Hence, there is not a great deal of literature available. However, some attempts to address the problems of prep scale chromatography include monoliths and simulated moving beds. A comparison of immunoglobulin protein capture on a conventional column and a monolithic column yields some economically interesting results. 
If processing times are equivalent, process volumes of IgG, an antibody, are 3,120L for conventional columns versus 5,538L for monolithic columns. This represents a 78% increase in process volume efficiency, while at the same time only a tenth of the media waste volume is generated. Not only is the monolith column more economically prudent when considering the value of product processing times, but, at the same time, less media is used, representing a significant reduction in variable costs. References External links “History of HPLC.” https://web.archive.org/web/20100410045845/http://kerouac.pharm.uky.edu/. Chromatography Scientific techniques
Monolithic HPLC column
[ "Chemistry" ]
5,879
[ "Chromatography", "Separation processes" ]
22,132,096
https://en.wikipedia.org/wiki/Laser%20ablation%20synthesis%20in%20solution
Laser ablation synthesis in solution (LASiS) is a commonly used method for obtaining colloidal solutions of nanoparticles in a variety of solvents. Nanoparticles (NPs) are useful in chemistry, engineering and biochemistry due to their large surface-to-volume ratio, which gives them unique physical properties. LASiS is considered a "green" method because it does not use toxic chemical precursors to synthesize nanoparticles. In the LASiS method, a laser beam strikes a solid target immersed in a liquid, and the nanoparticles form during the condensation of the resulting plasma plume. Since the ablation occurs in a liquid rather than in air, vacuum or gas, the environment confines the plume more strongly, so that expansion, cooling and condensation take place at higher temperature, pressure and density. These conditions yield more refined and smaller nanoparticles. LASiS is usually considered a top-down physical approach. It emerged as a reliable alternative to traditional chemical reduction methods for obtaining noble metal nanoparticles (NMNp). LASiS is also used for the synthesis of silver nanoparticles (AgNPs), which are known for their antimicrobial effects. AgNPs produced via LASiS exhibit varying antimicrobial characteristics, since different properties can be achieved by fine-tuning the NP size during liquid ablation. Pros and Cons LASiS has some limitations in the size control of NMNp, which can be overcome by laser treatment of the NMNp. Its drawbacks include the slow rate of NP production, high energy consumption, the cost of laser equipment, and decreased ablation efficiency with longer laser usage within a session. Its advantages include minimal waste production, minimal manual operation, and refined size control of the nanoparticles. References Nanoparticles Plasma technology and applications Chemical synthesis
Laser ablation synthesis in solution
[ "Physics", "Chemistry", "Materials_science" ]
411
[ "Materials science stubs", "Plasma physics", "Plasma technology and applications", "Nanotechnology stubs", "nan", "Chemical synthesis", "Nanotechnology" ]
22,132,209
https://en.wikipedia.org/wiki/Luttinger%20parameter
In semiconductors, valence bands are well characterized by 3 Luttinger parameters. At the Γ-point of the band structure, p-like orbitals form the valence bands. Spin–orbit coupling splits their sixfold degeneracy into a higher-energy fourfold set and a lower-energy twofold set of bands. The fourfold degeneracy is in turn lifted into heavy- and light-hole bands by the phenomenological Hamiltonian introduced by J. M. Luttinger. Three valence band states In the presence of spin–orbit interaction, the total angular momentum j must be taken into account. The three valence-band orbitals with l = 1, combined with the s = 1/2 spin, generate six states with j = 3/2 and j = 1/2. The spin–orbit interaction, which follows from relativistic quantum mechanics, lowers the energy of the j = 1/2 states. Phenomenological Hamiltonian for the j=3/2 states The phenomenological Hamiltonian is usually written in the spherical approximation, in which the anisotropy parameters γ2 and γ3 enter only through their weighted average. The phenomenological Luttinger parameters are γ1, γ2 and γ3. If the quantization axis is taken along the wave vector k, the Hamiltonian is diagonal in the j = 3/2 states, and the two resulting doubly degenerate eigenenergies correspond to the heavy-hole and light-hole bands. If the carriers are regarded as nearly free, the Luttinger parameters determine the effective mass in each band. Example: GaAs In gallium arsenide, References Further reading Semiconductors
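In the spherical approximation, the heavy- and light-hole masses follow directly from the Luttinger parameters through the average γ̄ = (2γ2 + 3γ3)/5: m_hh ≈ m0/(γ1 − 2γ̄) and m_lh ≈ m0/(γ1 + 2γ̄). The snippet below evaluates these expressions. The GaAs parameter values used are typical literature numbers quoted for illustration only and may differ slightly from the (elided) values in the original article.

```python
# Heavy- and light-hole effective masses in the spherical approximation.
# gamma1, gamma2, gamma3 for GaAs are typical literature values (assumption).
gamma1, gamma2, gamma3 = 6.98, 2.06, 2.93

gamma_bar = (2 * gamma2 + 3 * gamma3) / 5     # spherical average of gamma2, gamma3

m_hh = 1.0 / (gamma1 - 2 * gamma_bar)         # heavy-hole mass in units of m0
m_lh = 1.0 / (gamma1 + 2 * gamma_bar)         # light-hole mass in units of m0

print(f"gamma_bar = {gamma_bar:.2f}")
print(f"m_hh ~ {m_hh:.2f} m0, m_lh ~ {m_lh:.2f} m0")
```

With these illustrative parameters the sketch gives roughly 0.55 m0 for the heavy hole and 0.08 m0 for the light hole, in line with commonly quoted GaAs values.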
Luttinger parameter
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
279
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
18,436,210
https://en.wikipedia.org/wiki/Massera%27s%20lemma
In stability theory and nonlinear control, Massera's lemma, named after José Luis Massera, deals with the construction of the Lyapunov function to prove the stability of a dynamical system. The lemma appears in as the first lemma in section 12, and in more general form in as lemma 2. In 2004, Massera's original lemma for single variable functions was extended to the multivariable case, and the resulting lemma was used to prove the stability of switched dynamical systems, where a common Lyapunov function describes the stability of multiple modes and switching signals. Massera's original lemma Massera’s lemma is used in the construction of a converse Lyapunov function of the following form (also known as the integral construction) for an asymptotically stable dynamical system whose stable trajectory starting from The lemma states: Let be a positive, continuous, strictly decreasing function with as . Let be a positive, continuous, nondecreasing function. Then there exists a function such that and its derivative are class-K functions defined for all t ≥ 0 There exist positive constants k1, k2, such that for any continuous function u satisfying 0 ≤ u(t) ≤ g(t) for all t ≥ 0, Extension to multivariable functions Massera's lemma for single variable functions was extended to the multivariable case by Vu and Liberzon. Let be a positive, continuous, strictly decreasing function with as . Let be a positive, continuous, nondecreasing function. Then there exists a differentiable function such that and its derivative are class-K functions on . For every positive integer , there exist positive constants k1, k2, such that for any continuous function satisfying for all , we have References Footnotes Stability theory
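The integral construction mentioned above builds a Lyapunov function by integrating a function of the state along the system's own trajectories. The sketch below does this numerically for the simple scalar system dx/dt = −x, using G(s) = s² as the integrand; the choice of system, of G, and the truncation of the integral are all illustrative assumptions, not part of the lemma's statement.

```python
import numpy as np

# Numerical sketch of the integral (converse Lyapunov) construction
# V(x0) = integral_0^T G(|x(t; x0)|) dt for the illustrative system dx/dt = -x.
# The system, G(s) = s**2, the horizon T and the step size are assumptions.

def flow(x0, t_final=50.0, dt=1e-3):
    """Integrate dx/dt = -x from x0 with a simple explicit Euler scheme."""
    steps = int(t_final / dt)
    xs = np.empty(steps)
    x = float(x0)
    for i in range(steps):
        xs[i] = x
        x += dt * (-x)
    return xs, dt

def lyapunov_candidate(x0):
    """V(x0) = integral of G(|x(t)|) dt along the trajectory, with G(s) = s**2."""
    xs, dt = flow(x0)
    return float(np.sum(xs ** 2) * dt)

for x0 in (0.25, 0.5, 1.0, 2.0):
    print(f"x0 = {x0:4.2f}   V(x0) ~ {lyapunov_candidate(x0):.4f}")
# For this linear example V(x0) is approximately x0**2 / 2: positive definite,
# increasing with |x0|, and decreasing along trajectories, as a Lyapunov
# function produced by the integral construction should be.
```

Massera's lemma matters precisely when the decay is slower than exponential: it guarantees a suitable G can be chosen so that the integral above still converges.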
Massera's lemma
[ "Mathematics" ]
380
[ "Stability theory", "Dynamical systems" ]
18,436,459
https://en.wikipedia.org/wiki/Extended%20Kalman%20filter
In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the case of well-defined transition models, the EKF has been considered the de facto standard in the theory of nonlinear state estimation, navigation systems and GPS. History The papers establishing the mathematical foundations of Kalman-type filters were published between 1959 and 1961. The Kalman filter is the optimal linear estimator for linear system models with additive independent white noise in both the transition and the measurement systems. Unfortunately, in engineering, most systems are nonlinear, so attempts were made to apply this filtering method to nonlinear systems; most of this work was done at NASA Ames. The EKF adapted techniques from calculus, namely multivariate Taylor series expansions, to linearize a model about a working point. If the system model (as described below) is not well known or is inaccurate, then Monte Carlo methods, especially particle filters, are employed for estimation. Monte Carlo techniques predate the existence of the EKF but are more computationally expensive for any moderately dimensioned state-space. Formulation In the extended Kalman filter, the state transition and observation models need not be linear functions of the state but may instead be differentiable functions: x_k = f(x_{k-1}, u_k) + w_k and z_k = h(x_k) + v_k. Here w_k and v_k are the process and observation noises, both assumed to be zero-mean multivariate Gaussian with covariances Q_k and R_k respectively; u_k is the control vector. The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed. At each time step, the Jacobian is evaluated with the current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate. See the Kalman filter article for notational remarks. Discrete-time predict and update equations Notation x̂_{n|m} represents the estimate of x at time n given observations up to and including time m ≤ n. Predict Predicted state estimate: x̂_{k|k-1} = f(x̂_{k-1|k-1}, u_k) Predicted covariance estimate: P_{k|k-1} = F_k P_{k-1|k-1} F_kᵀ + Q_k Update Innovation (measurement residual): ỹ_k = z_k − h(x̂_{k|k-1}) Innovation covariance: S_k = H_k P_{k|k-1} H_kᵀ + R_k Near-optimal Kalman gain: K_k = P_{k|k-1} H_kᵀ S_k⁻¹ Updated state estimate: x̂_{k|k} = x̂_{k|k-1} + K_k ỹ_k Updated covariance estimate: P_{k|k} = (I − K_k H_k) P_{k|k-1} where the state transition and observation matrices are defined to be the following Jacobians: F_k = ∂f/∂x evaluated at (x̂_{k-1|k-1}, u_k), and H_k = ∂h/∂x evaluated at x̂_{k|k-1}. Disadvantages and alternatives Unlike its linear counterpart, the extended Kalman filter in general is not an optimal estimator (it is optimal if the measurement and the state transition model are both linear, as in that case the extended Kalman filter is identical to the regular one). In addition, if the initial estimate of the state is wrong, or if the process is modeled incorrectly, the filter may quickly diverge, owing to its linearization. Another problem with the extended Kalman filter is that the estimated covariance matrix tends to underestimate the true covariance matrix and therefore risks becoming inconsistent in the statistical sense without the addition of "stabilising noise". More generally one should consider the infinite-dimensional nature of the nonlinear filtering problem and the inadequacy of a simple mean and variance-covariance estimator to fully represent the optimal filter. 
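The discrete-time predict and update equations above map directly onto code. The sketch below implements one EKF cycle for a generic model; f, h and their Jacobians are supplied by the caller, and the toy example at the bottom (a scalar random-walk state observed through a square-root sensor) is invented purely to exercise the filter, not a model from the text.

```python
import numpy as np

def ekf_step(x_est, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of the discrete-time extended Kalman filter.
    x_est, P     : previous state estimate and covariance
    u, z         : control input and new measurement
    f, h         : state-transition and observation functions
    F_jac, H_jac : callables returning their Jacobians
    Q, R         : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x_est, u)
    F = F_jac(x_est, u)
    P_pred = F @ P @ F.T + Q

    # Update
    y = z - h(x_pred)                       # innovation
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # near-optimal Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new

# Invented toy model: scalar state following a random walk, observed through sqrt(x).
f = lambda x, u: x                          # x_k = x_{k-1} + w_k
h = lambda x: np.sqrt(np.abs(x))            # z_k = sqrt(x_k) + v_k
F_jac = lambda x, u: np.array([[1.0]])
H_jac = lambda x: np.array([[0.5 / np.sqrt(np.abs(x[0]) + 1e-9)]])

Q = np.array([[0.01]])
R = np.array([[0.1]])
x_est, P = np.array([4.0]), np.array([[1.0]])

rng = np.random.default_rng(0)
x_true = 5.0
for _ in range(20):
    z = np.array([np.sqrt(x_true) + rng.normal(scale=np.sqrt(R[0, 0]))])
    x_est, P = ekf_step(x_est, P, u=None, z=z, f=f, h=h,
                        F_jac=F_jac, H_jac=H_jac, Q=Q, R=R)
print(f"true state {x_true}, EKF estimate {x_est[0]:.2f}")
```

The only nonlinear-specific work is evaluating f, h and the two Jacobians at the current estimates; everything else is the ordinary Kalman recursion.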
It should also be noted that the extended Kalman filter may give poor performances even for very simple one-dimensional systems such as the cubic sensor, where the optimal filter can be bimodal and as such cannot be effectively represented by a single mean and variance estimator, having a rich structure, or similarly for the quadratic sensor. In such cases the projection filters have been studied as an alternative, having been applied also to navigation. Other general nonlinear filtering methods like full particle filters may be considered in this case. Having stated this, the extended Kalman filter can give reasonable performance, and is arguably the de facto standard in navigation systems and GPS. Generalizations Continuous-time extended Kalman filter Model Initialize Predict-Update Unlike the discrete-time extended Kalman filter, the prediction and update steps are coupled in the continuous-time extended Kalman filter. Discrete-time measurements Most physical systems are represented as continuous-time models while discrete-time measurements are frequently taken for state estimation via a digital processor. Therefore, the system model and measurement model are given by where . Initialize Predict where Update where The update equations are identical to those of discrete-time extended Kalman filter. Higher-order extended Kalman filters The above recursion is a first-order extended Kalman filter (EKF). Higher order EKFs may be obtained by retaining more terms of the Taylor series expansions. For example, second and third order EKFs have been described. However, higher order EKFs tend to only provide performance benefits when the measurement noise is small. Non-additive noise formulation and equations The typical formulation of the EKF involves the assumption of additive process and measurement noise. This assumption, however, is not necessary for EKF implementation. Instead, consider a more general system of the form: Here wk and vk are the process and observation noises which are both assumed to be zero mean multivariate Gaussian noises with covariance Qk and Rk respectively. Then the covariance prediction and innovation equations become where the matrices and are Jacobian matrices: The predicted state estimate and measurement residual are evaluated at the mean of the process and measurement noise terms, which is assumed to be zero. Otherwise, the non-additive noise formulation is implemented in the same manner as the additive noise EKF. Implicit extended Kalman filter In certain cases, the observation model of a nonlinear system cannot be solved for , but can be expressed by the implicit function: where are the noisy observations. The conventional extended Kalman filter can be applied with the following substitutions: where: Here the original observation covariance matrix is transformed, and the innovation is defined differently. The Jacobian matrix is defined as before, but determined from the implicit observation model . Modifications Iterated extended Kalman filter The iterated extended Kalman filter improves the linearization of the extended Kalman filter by recursively modifying the centre point of the Taylor expansion. This reduces the linearization error at the cost of increased computational requirements. Robust extended Kalman filter The robust extended Kalman filter arises by linearizing the signal model about the current state estimate and using the linear Kalman filter to predict the next estimate. 
This attempts to produce a locally optimal filter, however, it is not necessarily stable because the solutions of the underlying Riccati equation are not guaranteed to be positive definite. One way of improving performance is the faux algebraic Riccati technique which trades off optimality for stability. The familiar structure of the extended Kalman filter is retained but stability is achieved by selecting a positive definite solution to a faux algebraic Riccati equation for the gain design. Another way of improving extended Kalman filter performance is to employ the H-infinity results from robust control. Robust filters are obtained by adding a positive definite term to the design Riccati equation. The additional term is parametrized by a scalar which the designer may tweak to achieve a trade-off between mean-square-error and peak error performance criteria. Invariant extended Kalman filter The invariant extended Kalman filter (IEKF) is a modified version of the EKF for nonlinear systems possessing symmetries (or invariances). It combines the advantages of both the EKF and the recently introduced symmetry-preserving filters. Instead of using a linear correction term based on a linear output error, the IEKF uses a geometrically adapted correction term based on an invariant output error; in the same way the gain matrix is not updated from a linear state error, but from an invariant state error. The main benefit is that the gain and covariance equations converge to constant values on a much bigger set of trajectories than equilibrium points as it is the case for the EKF, which results in a better convergence of the estimation. Unscented Kalman filters A nonlinear Kalman filter which shows promise as an improvement over the EKF is the unscented Kalman filter (UKF). In the UKF, the probability density is approximated by a deterministic sampling of points which represent the underlying distribution as a Gaussian. The nonlinear transformation of these points are intended to be an estimation of the posterior distribution, the moments of which can then be derived from the transformed samples. The transformation is known as the unscented transform. The UKF tends to be more robust and more accurate than the EKF in its estimation of error in all the directions. "The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the updates. Many of these difficulties arise from its use of linearization." A 2012 paper includes simulation results which suggest that some published variants of the UKF fail to be as accurate as the Second Order Extended Kalman Filter (SOEKF), also known as the augmented Kalman filter. The SOEKF predates the UKF by approximately 35 years with the moment dynamics first described by Bass et al. The difficulty in implementing any Kalman-type filters for nonlinear state transitions stems from the numerical stability issues required for precision, however the UKF does not escape this difficulty in that it uses linearization as well, namely linear regression. The stability issues for the UKF generally stem from the numerical approximation to the square root of the covariance matrix, whereas the stability issues for both the EKF and the SOEKF stem from possible issues in the Taylor Series approximation along the trajectory. 
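For comparison with the Jacobian-based EKF, the core of the UKF, the unscented transform, can be sketched in a few lines of Python. The scaling parameters α, β and κ below are commonly quoted defaults rather than values prescribed here, and the polar-to-Cartesian example at the end is purely illustrative.

```python
import numpy as np

def unscented_transform(mean, cov, g, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function g using
    2n+1 deterministically chosen sigma points; no Jacobians are required."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)            # matrix square root
    pts = ([mean] + [mean + L[:, i] for i in range(n)]
                  + [mean - L[:, i] for i in range(n)])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))   # mean weights
    wc = wm.copy()                                     # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)

    Y = np.array([g(p) for p in pts])                  # transformed sigma points
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d                    # sum_i wc_i d_i d_i^T
    return y_mean, y_cov

# Example: push a Gaussian in polar coordinates (r, phi) through the
# nonlinear polar-to-Cartesian map.
g = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, C = np.array([1.0, np.pi / 4.0]), np.diag([0.01, 0.05])
print(unscented_transform(m, C, g))
```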
Ensemble Kalman Filter The UKF was in fact predated by the Ensemble Kalman filter, invented by Evensen in 1994. It has the advantage over the UKF that the number of ensemble members used can be much smaller than the state dimension, allowing for applications in very high-dimensional systems, such as weather prediction, with state-space sizes of a billion or more. Fuzzy Kalman Filter A fuzzy Kalman filter, based on a new way of representing possibility distributions, was recently proposed; it replaces probability distributions with possibility distributions in order to obtain a genuine possibilistic filter, enabling the use of non-symmetric process and observation noises as well as larger inaccuracies in both the process and observation models. See also Kalman filter Ensemble Kalman filter Fast Kalman filter Invariant extended Kalman filter Moving horizon estimation Particle filter Unscented Kalman filter Nonlinear filtering problem Projection filters References Further reading External links Position estimation of a differential-wheel robot based on odometry and landmarks Signal estimation Nonlinear filters Robot control
Extended Kalman filter
[ "Engineering" ]
2,185
[ "Robotics engineering", "Robot control" ]
18,436,662
https://en.wikipedia.org/wiki/Rational%20motion
In kinematics, the motion of a rigid body is defined as a continuous set of displacements. One-parameter motions can be defined as a continuous displacement of moving object with respect to a fixed frame in Euclidean three-space (E3), where the displacement depends on one parameter, mostly identified as time. Rational motions are defined by rational functions (ratio of two polynomial functions) of time. They produce rational trajectories, and therefore they integrate well with the existing NURBS (Non-Uniform Rational B-Spline) based industry standard CAD/CAM systems. They are readily amenable to the applications of existing computer-aided geometric design (CAGD) algorithms. By combining kinematics of rigid body motions with NURBS geometry of curves and surfaces, methods have been developed for computer-aided design of rational motions. These CAD methods for motion design find applications in animation in computer graphics (key frame interpolation), trajectory planning in robotics (taught-position interpolation), spatial navigation in virtual reality, computer-aided geometric design of motion via interactive interpolation, CNC tool path planning, and task specification in mechanism synthesis. Background There has been a great deal of research in applying the principles of computer-aided geometric design (CAGD) to the problem of computer-aided motion design. In recent years, it has been well established that rational Bézier and rational B-spline based curve representation schemes can be combined with dual quaternion representation of spatial displacements to obtain rational Bézier and B-spline motions. Ge and Ravani, developed a new framework for geometric constructions of spatial motions by combining the concepts from kinematics and CAGD. Their work was built upon the seminal paper of Shoemake, in which he used the concept of a quaternion for rotation interpolation. A detailed list of references on this topic can be found in and. Rational Bézier and B-spline motions Let denote a unit dual quaternion. A homogeneous dual quaternion may be written as a pair of quaternions, ; where . This is obtained by expanding using dual number algebra (here, ). In terms of dual quaternions and the homogeneous coordinates of a point of the object, the transformation equation in terms of quaternions is given by where and are conjugates of and , respectively and denotes homogeneous coordinates of the point after the displacement. Given a set of unit dual quaternions and dual weights respectively, the following represents a rational Bézier curve in the space of dual quaternions. where are the Bernstein polynomials. The Bézier dual quaternion curve given by above equation defines a rational Bézier motion of degree . Similarly, a B-spline dual quaternion curve, which defines a NURBS motion of degree 2p, is given by, where are the pth-degree B-spline basis functions. A representation for the rational Bézier motion and rational B-spline motion in the Cartesian space can be obtained by substituting either of the above two preceding expressions for in the equation for point transform. In what follows, we deal with the case of rational Bézier motion. The trajectory of a point undergoing rational Bézier motion is given by, where is the matrix representation of the rational Bézier motion of degree in Cartesian space. The following matrices (also referred to as Bézier Control Matrices) define the affine control structure of the motion: where . 
In the above equations, and are binomial coefficients, and are the weight ratios. In the above matrices, are the four components of the real part and are the four components of the dual part of the unit dual quaternion . Example See also Quaternion and Dual quaternion NURBS Computer animation Robotics Robot kinematics Computational geometry CNC machining Mechanism design References External links Computational Design Kinematics Lab Robotics and Spatial Systems Laboratory (RASSL) Robotics and Automation Laboratory Kinematics
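As a rough numerical illustration of the rational Bézier motions defined above, the Python sketch below evaluates a weighted Bernstein combination of control quaternions and applies it to a point. To keep it short it retains only the real (rotational) part of the dual quaternion, so the translational component of the motion is omitted, and the control quaternions and weights are made-up illustrative data rather than values from the cited literature.

```python
import numpy as np
from math import comb

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def bezier_quaternion(t, ctrl, weights):
    """Weighted Bernstein combination of control quaternions (not normalized)."""
    n = len(ctrl) - 1
    q = np.zeros(4)
    for i, (qi, wi) in enumerate(zip(ctrl, weights)):
        q += comb(n, i) * (1 - t) ** (n - i) * t ** i * wi * qi
    return q

def rotate_point(q, p):
    """Rotate p by the (possibly non-unit) quaternion q via q p q* / (q q*)."""
    pq = np.array([0.0, *p])
    conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    out = qmul(qmul(q, pq), conj) / np.dot(q, q)
    return out[1:]

# Illustrative control rotations: identity, 90 degrees about z, 180 degrees about z.
ctrl = [np.array([1.0, 0.0, 0.0, 0.0]),
        np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]),
        np.array([0.0, 0.0, 0.0, 1.0])]
weights = [1.0, 1.0, 1.0]
for t in np.linspace(0.0, 1.0, 5):
    q = bezier_quaternion(t, ctrl, weights)
    print(round(t, 2), np.round(rotate_point(q, np.array([1.0, 0.0, 0.0])), 3))
```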
Rational motion
[ "Physics", "Technology" ]
821
[ "Machines", "Kinematics", "Physical phenomena", "Classical mechanics", "Physical systems", "Motion (physics)", "Mechanics" ]
18,437,142
https://en.wikipedia.org/wiki/SEIF%20SLAM
In robotics, SEIF SLAM is the use of the sparse extended information filter (SEIF) to solve the simultaneous localization and mapping (SLAM) problem by maintaining a posterior over the robot pose and the map. Like GraphSLAM, SEIF SLAM solves the full SLAM problem, but it is an online algorithm (GraphSLAM is offline). References Robot control
SEIF SLAM
[ "Engineering" ]
74
[ "Robotics engineering", "Robot control" ]
18,437,271
https://en.wikipedia.org/wiki/Monte%20Carlo%20POMDP
In the class of Markov decision process algorithms, the Monte Carlo POMDP (MC-POMDP) is the particle-filter version of the partially observable Markov decision process (POMDP) algorithm. In MC-POMDP, particle filters are used to update and approximate the beliefs, and the algorithm is applicable to continuous-valued states, actions, and measurements. References Robot control
Monte Carlo POMDP
[ "Engineering" ]
83
[ "Robotics engineering", "Robot control" ]
18,441,673
https://en.wikipedia.org/wiki/Tethered%20particle%20motion
Tethered particle motion (TPM) is a biophysical method that is used for studying various polymers such as DNA and their interaction with other entities such as proteins. The method allows observers to measure various physical properties on the substances, as well as to measure the properties of biochemical interactions with other substances such as proteins and enzymes. TPM is a single molecule experiment method. History TPM was first introduced by Schafer, Gelles, Sheetz and Landick in 1991. In their research, they attached RNA polymerase to the surface, and gold beads were attached to one end of the DNA molecules. In the beginning, the RNA polymerase "captures" the DNA near the gold bead. During the transcription, the DNA "slides" on the RNA polymerase so the distance between the RNA polymerase and the gold bead (the tether length)is increased. Using an optical microscope the area that the bead moves in was detected. The transcription rate was extracted from data. Since then, a lot of TPM experiments have been done, and the method was improved in many ways such as bead types, biochemistry techniques, imaging (faster cameras, different microscopy methods etc.) data analysis and combination with other single-molecule techniques (e.g. optical or magnetical tweezers). Principle of the method One end of a polymer is attached to a small bead (tens to hundreds of nanometer), while the other end is attached to a surface. Both the polymer and the bead stay in an aqueous environment, so the bead moves in Brownian motion. Because of the tether, the motion is restricted. Using an optical microscope and CCD camera, one can track the bead position in a time series. Although the bead is usually smaller than the diffraction limit, so the image is a spot which is larger than the bead itself (point spread function), the center of the spot represents the projection on the X-Y plane of the end of the polymer (end-to-end vector). Analyzing the distribution of the bead position can tell us a lot of information about the polymer. Excursion number In order that the motion would be polymer dominated, and not bead dominated, one should notice that the excursion number, NR, will be less than 1: where is the bead radius, is the contour length of the polymer and is the persistence length (50 nm in physiological conditions) of the polymer. (It is possible to work also when , but it should be treated carefully.) Bead types Metallic beads (usually gold) scatter light with high intensity, so one can use very small beads (~40 nm diameter), and still have a good picture. From the other hand, metallic beads are not the appropriate tool for optical tweezers experiments. Polystyrene beads scatter light weaker than metallic (in order to get the same intensity as getting from 40 nm gold bead, the polystyrene bead should be ~125 nm!), but it has the advantage that it can be combined with optical tweezers experiments. The major advantage of fluorospheres is that the excitation wavelength and the emission wavelength are not the same, so dichroic filter can be used to give a cleaner signal. The disadvantage of the fluorospheres is photobleaching. All of the bead types and diameters (with the biochemistry marker, look at the tether assembly section) are manufactured by commercial companies, and can purchased easily. Chip and tether assembly Chip assembly A chip can be made of two coverslips. One of them should be drilled to make two hole, allowing the reagents to be injected into the flowcell. 
The slides should be cleaned to remove dirt; a bath sonicator is a good tool for this, and 15 minutes in isopropanol should do the trick. Next, a channel should be made. One way of doing so is to cut parafilm in the center, leaving a frame of parafilm to be used as a spacer between the slides. The slides should be assembled one on top of the other with the cut parafilm between them. The final step is to heat the chip so that the parafilm melts and glues the slides together. Tether assembly First, the chip has to be passivated so that the polymer will not stick to the glass; there are plenty of blocking reagents available (BSA, alpha-casein, etc.), and one should find what works best for the specific situation. Next, the surface should be coated with an antibody or other reactive molecule (such as anti-digoxigenin) that will bind to an antigen (digoxigenin) at one end of the polymer. After an incubation of about 45 min, the excess antibody has to be washed away. After washing out the excess antibody, the polymer should be injected into the chip and incubated for about the same time. The polymer has been modified beforehand at its ends: one end has a biotin tail and the other a digoxigenin tail. After incubation, unbound polymer has to be washed out of the cell. Then, anti-biotin-coated beads should be injected into the flowcell and incubated for about 30–45 min. Excess beads should be washed out. Data analysis Tracking As mentioned above, the image does not show the bead itself but a larger spot determined by its point spread function (PSF). In addition, the pixel size of the camera may limit the resolution of the measurement. In order to extract the exact position of the bead (which corresponds to the end-to-end vector), the center of the spot should be found as accurately as possible. This can be done with good resolution using two different techniques, both based on the characteristics of the spot. The light intensity in the focal plane is distributed as an Airy disk and has circular symmetry; a 2-dimensional Gaussian function is a good approximation for the Airy disk. By fitting this function to the spot one can find the parameters and that are the coordinates of the center of the spot, and hence of the end-to-end vector. The second technique is to find the center of intensity, using the definition of the center of mass: where is the center-of-mass coordinate, is the total intensity of the spot, and and are the intensity and coordinate of the k-th pixel. Because of the circular symmetry, the coordinate of the center of intensity is the coordinate of the center of the bead. Both techniques give the coordinate of the end-to-end vector with a resolution better than the pixel size. Drift correction Usually, the whole system drifts during the measurement. There are several methods to correct for the drift; generally these can be divided into three groups: Because the Brownian motion frequency is much larger than the drift frequency, a high-pass filter can be used to remove the drift. A similar effect can be achieved by smoothing the data and subtracting the smoothed trace from the data. If several beads are visible in the frame, then, because every bead moves randomly, averaging their positions in every frame gives the drift (which should be subtracted from the data to obtain clean data). If an immobilized bead is visible in the frame, its position can be taken as a reference, and the data corrected by the position of the immobilized bead.
(Another advantage of using an immobilized bead is that its motion indicates the accuracy of the measurement.) Of course, more than one method can be used. Polymer characterization It is common to fit random-walk statistics to the end-to-end vector of the polymer. In one dimension this gives the normal distribution, and in two dimensions the Rayleigh distribution: where is the contour length and is the persistence length. After collecting the time-series data, one should fit the histogram of the data to the distribution function (one- or two-dimensional). If the contour length of the polymer is known, the only fitting parameter is the persistence length. Spring constant Due to the entropic force, the polymer acts like a Hookean spring. According to the Boltzmann distribution, the distribution is proportional to the exponential of the ratio between the elastic energy and the thermal energy: where is the spring constant, is the Boltzmann constant and is the temperature. By taking the logarithm of the distribution and fitting it to a parabola, one can obtain the spring constant of the polymer: where is the coefficient of from the parabola fit. Advantages and disadvantages Advantages include a simple setup, low cost, the fact that observations are made in the polymer's natural environment (no external forces are applied), suitability for various microscopy methods (e.g. TIRFM, dark field, differential interference contrast microscopy), the possibility of combining it with other methods, and a wide variety of applications. Disadvantages include low spatial resolution (~30 nm) and the fact that it is suited to in vitro experiments only. See also Single-particle tracking Single-molecule experiment References Biophysics Articles containing video clips
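As a small illustration of the centre-of-intensity (centre-of-mass) tracking described in the Data analysis section, the following Python/NumPy sketch locates a synthetic diffraction-limited spot with sub-pixel accuracy; the spot parameters are illustrative.

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centre of a diffraction-limited spot
    (centre-of-mass estimator). Returns sub-pixel (x, y) coordinates;
    any background should be subtracted beforehand."""
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return (xs * image).sum() / total, (ys * image).sum() / total

# Synthetic spot: a 2-D Gaussian point-spread function centred at (10.3, 12.7) px.
ys, xs = np.indices((25, 25))
spot = np.exp(-((xs - 10.3) ** 2 + (ys - 12.7) ** 2) / (2 * 3.0 ** 2))
print(centroid(spot))   # close to (10.3, 12.7), i.e. sub-pixel accuracy
# Drift could then be removed from the resulting trajectory, e.g. by
# subtracting a heavily smoothed copy of the position trace.
```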
Tethered particle motion
[ "Physics", "Biology" ]
1,899
[ "Applied and interdisciplinary physics", "Biophysics" ]
18,442,067
https://en.wikipedia.org/wiki/HD%20223311
HD 223311 is a star in the equatorial constellation of Aquarius. It has an orange hue and is visible to the naked eye as a dim star with an apparent visual magnitude of 6.08. Based on parallax measurements, the star is located at a distance of approximately 910 light years from the Sun. It is a radial velocity standard star that is drifting closer to the Sun at the rate of −20 km/s. The star is situated near the ecliptic and thus is subject to lunar occultations. This is an aging K-type giant star with a stellar classification of K4III. Having exhausted the supply of hydrogen at its core, it has cooled and expanded off the main sequence. At present it has 41 times the girth of the Sun. It is a suspected variable star of unknown type that has been measured ranging in brightness from magnitude 5.01 down to 5.26 in the infrared I band. The star is radiating 496 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,267 K. References External links Image HD 223311 K-type giants Suspected variables Aquarius (constellation) Durchmusterung objects 223311 117420 9014
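The quoted radius, temperature and luminosity can be cross-checked with the Stefan–Boltzmann relation L/L☉ = (R/R☉)²(T/T☉)⁴. The short Python check below assumes a nominal solar effective temperature of about 5772 K, a reference value not stated in the article.

```python
# Stefan-Boltzmann consistency check: L / L_sun = (R / R_sun)**2 * (T / T_sun)**4
R_ratio = 41.0        # radius in solar radii (from the article)
T_star = 4267.0       # effective temperature in kelvin (from the article)
T_sun = 5772.0        # nominal solar effective temperature (assumed reference value)
L_ratio = R_ratio ** 2 * (T_star / T_sun) ** 4
print(round(L_ratio))  # ~500, consistent with the quoted 496 L_sun
```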
HD 223311
[ "Astronomy" ]
254
[ "Constellations", "Aquarius (constellation)" ]
18,442,792
https://en.wikipedia.org/wiki/TFAP2B
Transcription factor AP-2 beta also known as AP2-beta is a protein that in humans is encoded by the TFAP2B gene. Function AP-2 beta is a member of the AP-2 family of transcription factors. AP-2 proteins form homo- or hetero-dimers with other AP-2 family members and bind specific DNA sequences. They are thought to stimulate cell proliferation and suppress terminal differentiation of specific cell types during embryonic development. Specific AP-2 family members differ in their expression patterns and binding affinity for different promoters. This protein functions as both a transcriptional activator and repressor. Clinical significance Mutations in this gene result in autosomal dominant Char syndrome, suggesting that this gene functions in the differentiation of neural crest cell derivatives. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Char Syndrome Transcription factors
TFAP2B
[ "Chemistry", "Biology" ]
183
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
18,442,793
https://en.wikipedia.org/wiki/TFAP2C
Transcription factor AP-2 gamma also known as AP2-gamma is a protein that in humans is encoded by the TFAP2C gene. AP2-gamma is a member of the activating protein 2 family of transcription factors. Transcription factor AP-2 gamma is involved in early development, specifically morphogenesis - the formation of shape. AP2-gamma can regulate gene transcription by interacting with viral and cellular enhancing components and binding to the sequence 5'-GCCNNNGGC-3’. AP2-gamma activates genes that are important for placenta development and retinoic acid-mediated differentiation of the eyes, face, body wall, limbs, and neural tube. AP2-gamma also suppresses genes such as MYC and C/EBP alpha. It also represses CD44 expression, which is a cell marker for some breast and prostate cancers. Mutations of this transcription factor can lead to poorly developed placenta and tissues. A mutated AP2-gamma gene is known to cause branchiooculofacial syndrome (BOFS), which is a disease characterized by face and neck abnormalities, such as cleft lip or anophthalmia – lack of eyeballs, that have developed prior to birth. Complete knockout of the TAP2C gene that encoded AP-2 gamma leads to placenta malformation and embryonic/fetal death. References Auman, H. J., T. Nottoli, O. Lakiza, Q. Winger, S. Donaldson, and T. Williams. "Transcription Factor AP-2gamma Is Essential in the Extra-embryonic Lineages for Early Postimplantation Development." National Center for Biotechnology Information. U.S. National Library of Medicine, June 2002. Web. 15 Apr. 2014. Bogachek, M. V., and R. J. Weigel. "TFAP2C (transcription Factor AP-2 Gamma (activating Enhancer Binding Protein 2 Gamma))." TFAP2C (transcription Factor AP-2 Gamma(activating Enhancer Binding Protein 2 Gamma)). Oct. 2013. Web. 15 Apr. 2014. "Branchio-oculo-facial Syndrome." Genetics Home Reference. U.S. National Library of Medicine, 7 Apr. 2014. Web. 15 Apr. 2014. "Genes and Mapped Phenotypes." National Center for Biotechnology Information. U.S. National Library of Medicine, 12 Apr. 2014. Web. 15 Apr. 2014. "Transcription Factor AP-2 Gamma - TFAP2C - Homo sapiens (Human)." Transcription Factor AP-2 Gamma - TFAP2C - Homo sapiens (Human). UniProtKB, 19 Mar. 2014. Web. 15 Apr. 2014. Further reading External links Transcription factors
TFAP2C
[ "Chemistry", "Biology" ]
590
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
18,442,794
https://en.wikipedia.org/wiki/TFAP2D
Transcription factor AP-2 delta (activating enhancer binding protein 2 delta), also known as TFAP2D, is a human gene. The protein encoded by this gene is a transcription factor. See also Activating protein 2 References External links Transcription factors
TFAP2D
[ "Chemistry", "Biology" ]
55
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
18,442,795
https://en.wikipedia.org/wiki/TFAP2E
Transcription factor AP-2 epsilon (activating enhancer binding protein 2 epsilon), also known as TFAP2E, is a human gene. The protein encoded by this gene is a transcription factor. See also Activating protein 2 References External links Transcription factors
TFAP2E
[ "Chemistry", "Biology" ]
55
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
752,861
https://en.wikipedia.org/wiki/Bosch%20reaction
The Bosch reaction is a catalytic chemical reaction between carbon dioxide (CO2) and hydrogen (H2) that produces elemental carbon (C,graphite), water, and a 10% return of invested heat. CO2 is usually reduced by H2 to carbon in presence of a catalyst (e.g. iron (Fe)) and requires a temperature level of . The overall reaction is as follows: CO2(g) + 2 H2(g) → C(s) + 2 H2O(l) The above reaction is actually the result of two reactions. The first reaction, the reverse water gas shift reaction, is a fast one: CO2 + H2 → CO + H2O The second reaction is the rate determining step: CO + H2 → C + H2O The overall reaction produces 2.3×103 joules for every gram of carbon dioxide reacted at 650 °C. Reaction temperatures are in the range of 450 to 600 °C. The reaction can be accelerated in the presence of an iron, cobalt or nickel catalyst. Ruthenium also serves to speed up the reaction. Applications Together with the Sabatier reaction, the Bosch reaction is studied as a way to remove carbon dioxide and to generate clean water aboard a space station. The reaction is also used to produce graphite for radiocarbon dating with Accelerator Mass Spectrometry. The Bosch reaction is being investigated for use in maintaining space station life support. Though the Bosch reaction would present a completely closed hydrogen and oxygen cycle which only produces atomic carbon as waste, difficulties in maintaining its higher required temperature and properly handling carbon deposits mean that significantly more research will be required before a Bosch reactor can become a reality. One problem is that the production of elemental carbon tends to foul the catalyst's surface, which is detrimental to the reaction's efficiency. Notes External links A carbon dioxide reduction unit using Bosch reaction Organic redox reactions Name reactions Hydrogen
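The overall stoichiometry CO2 + 2 H2 → C + 2 H2O, together with the quoted heat release of roughly 2.3 kJ per gram of CO2, allows a quick mass and energy budget. The Python sketch below uses standard molar masses; the 1 kg batch size is arbitrary.

```python
# Mass and heat budget for CO2 + 2 H2 -> C + 2 H2O, per 1 kg of CO2 processed.
M_CO2, M_H2, M_H2O, M_C = 44.01, 2.016, 18.015, 12.011   # molar masses, g/mol
m_co2 = 1000.0                       # g of CO2
n = m_co2 / M_CO2                    # mol of CO2
m_h2 = 2 * n * M_H2                  # hydrogen consumed
m_h2o = 2 * n * M_H2O                # water produced
m_c = n * M_C                        # solid carbon deposited
heat = 2.3e3 * m_co2                 # J, using ~2.3 kJ per gram of CO2 (article value)
print(f"H2 needed: {m_h2:.0f} g, water out: {m_h2o:.0f} g, carbon out: {m_c:.0f} g")
print(f"mass balance: {m_co2 + m_h2:.0f} g in vs {m_h2o + m_c:.0f} g out")
print(f"heat released: {heat / 1e6:.1f} MJ per kg of CO2")
```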
Bosch reaction
[ "Chemistry" ]
392
[ "Name reactions", "Organic redox reactions", "Organic reactions" ]
752,897
https://en.wikipedia.org/wiki/Buffer%20gas
A buffer gas is an inert or nonflammable gas. In the Earth's atmosphere, nitrogen acts as a buffer gas. A buffer gas adds pressure to a system and controls the speed of combustion with any oxygen present. Any inert gas such as helium, neon, or argon will serve as a buffer gas. A buffer gas usually consists of atomically inert gases such as helium, argon, or nitrogen. Krypton, neon, and xenon are also used, primarily for lighting. In most scenarios, buffer gases are used in conjunction with other molecules for the main purpose of causing collisions with the other co-existing molecules. Buffer gases are commonly used in many applications from high pressure discharge lamps to reduce line width of microwave transitions in alkali atoms. Uses Lighting In fluorescent lamps, mercury is used as the primary ion from which light is emitted. Krypton is the buffer gas used in conjunction with the mercury which is used to moderate the momentum of collisions of mercury ions in order to reduce the damage done to the electrodes in the fluorescent lamp. Generally speaking, the longest lasting lamps are those with the heaviest noble gases as buffer gases. Industrial Buffer gases are also commonly used in compressors used in power plants for supplying gas to gas turbines. The buffer gas fills the spaces between seals in the compressor. This space is usually about 2 micrometres wide. The gas must be completely dry and free of any contaminants. Contaminants can potentially lodge in the space between the seal and cause metal to metal contact in the compressor, leading to compressor failure. In this case the buffer gas acts in a way much like oil does in an automotive engine's bearings. Buffer gas cooling Buffer gas loading techniques have been developed for use in cooling charged or paramagnetic atoms and molecules at ultra-cold temperatures. The buffer gas most commonly used in this sort of application is helium. Suppose we have some very cold helium gas as cryogenic buffer gas, then any cloud of particles floating within that buffer gas would exchange energy with the buffer gas, until it reaches the same temperature (thermalized). The problem is that the cloud of particles would diffuse away. In buffer gas cooling, the cloud of particles we want to cool down is caught in a trap that lets the helium atom pass through. If the particles are electrically charged, then the trap can be the Penning trap or the Paul trap. If the particles are electrically neutral, but paramagnetic, then the trap can be a magnetic trap (as helium is diamagnetic), such as the anti-Helmholtz pair. Paramagnetic atoms are low-field-seeking while diamagnetic atoms are high-field-seeking, so in a magnetic trap, there is a central region where the magnetic field is zero, rising in all directions. Paramagnetic atoms would be trapped in that zero-field region while the diamagnetic atoms would be repelled away. Buffer gas cooling can be used on just about any molecule, as long as the molecule is capable of surviving multiple collisions with low energy helium atoms, which most molecules are capable of doing. Buffer gas cooling is allowing the molecules of interest to be cooled through elastic collisions with a cold buffer gas inside a chamber. If there are enough collisions between the buffer gas and the other molecules of interest before the molecules hit the walls of the chamber and are gone, the buffer gas will sufficiently cool the atoms. 
Of the two isotopes of helium (3He and 4He), the rarer 3He is sometimes used over 4He as it provides significantly higher vapor pressures and buffer gas density at sub-kelvin temperatures. References External links Buffer Gas Cooling Buffer Gas on Mars Buffer Gas Cooling of Diatomic Molecules Buffer Gas on Microwave Transition Gases
Buffer gas
[ "Physics", "Chemistry" ]
774
[ "Statistical mechanics", "Gases", "Phases of matter", "Matter" ]
753,145
https://en.wikipedia.org/wiki/Del%20in%20cylindrical%20and%20spherical%20coordinates
This is a list of some vector calculus formulae for working with common curvilinear coordinate systems. Notes This article uses the standard notation ISO 80000-2, which supersedes ISO 31-11, for spherical coordinates (other sources may reverse the definitions of θ and φ): The polar angle is denoted by : it is the angle between the z-axis and the radial vector connecting the origin to the point in question. The azimuthal angle is denoted by : it is the angle between the x-axis and the projection of the radial vector onto the xy-plane. The function can be used instead of the mathematical function owing to its domain and image. The classical arctan function has an image of , whereas atan2 is defined to have an image of . Coordinate conversions Note that the operation must be interpreted as the two-argument inverse tangent, atan2. Unit vector conversions Del formula This page uses for the polar angle and for the azimuthal angle, which is common notation in physics. The source that is used for these formulae uses for the azimuthal angle and for the polar angle, which is common mathematical notation. In order to get the mathematics formulae, switch and in the formulae shown in the table above. Defined in Cartesian coordinates as . An alternative definition is . Defined in Cartesian coordinates as . An alternative definition is . Calculation rules (Lagrange's formula for del) (From ) Cartesian derivation The expressions for and are found in the same way. Cylindrical derivation Spherical derivation Unit vector conversion formula The unit vector of a coordinate parameter u is defined in such a way that a small positive change in u causes the position vector to change in direction. Therefore, where is the arc length parameter. For two sets of coordinate systems and , according to chain rule, Now, we isolate the th component. For , let . Then divide on both sides by to get: See also Del Orthogonal coordinates Curvilinear coordinates Vector fields in cylindrical and spherical coordinates References External links Maxima Computer Algebra system scripts to generate some of these operators in cylindrical and spherical coordinates. Vector calculus Coordinate systems
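A small Python sketch of the coordinate conversions discussed above, using the two-argument atan2 and the ISO 80000-2 (physics) convention in which θ is the polar angle and φ the azimuthal angle; the function names are illustrative.

```python
from math import sqrt, atan2, hypot, sin, cos, acos

def cartesian_to_spherical(x, y, z):
    """(x, y, z) -> (r, theta, phi) with theta the polar angle from +z
    and phi the azimuthal angle from +x (ISO 80000-2 physics convention)."""
    r = sqrt(x * x + y * y + z * z)
    theta = acos(z / r) if r > 0 else 0.0
    phi = atan2(y, x)          # two-argument arctangent, image (-pi, pi]
    return r, theta, phi

def spherical_to_cartesian(r, theta, phi):
    return (r * sin(theta) * cos(phi),
            r * sin(theta) * sin(phi),
            r * cos(theta))

def cartesian_to_cylindrical(x, y, z):
    """(x, y, z) -> (rho, phi, z)."""
    return hypot(x, y), atan2(y, x), z

print(cartesian_to_spherical(1.0, 1.0, 1.0))   # r = sqrt(3), theta ~ 0.955, phi = pi/4
print(spherical_to_cartesian(*cartesian_to_spherical(1.0, 1.0, 1.0)))  # ~ (1, 1, 1)
```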
Del in cylindrical and spherical coordinates
[ "Mathematics" ]
439
[ "Coordinate systems" ]
753,349
https://en.wikipedia.org/wiki/Boolean%20function
In mathematics, a Boolean function is a function whose arguments and result assume values from a two-element set (usually {true, false}, {0,1} or {-1,1}). Alternative names are switching function, used especially in older computer science literature, and truth function (or logical function), used in logic. Boolean functions are the subject of Boolean algebra and switching theory. A Boolean function takes the form , where is known as the Boolean domain and is a non-negative integer called the arity of the function. In the case where , the function is a constant element of . A Boolean function with multiple outputs, with is a vectorial or vector-valued Boolean function (an S-box in symmetric cryptography). There are different Boolean functions with arguments; equal to the number of different truth tables with entries. Every -ary Boolean function can be expressed as a propositional formula in variables , and two propositional formulas are logically equivalent if and only if they express the same Boolean function. Examples The rudimentary symmetric Boolean functions (logical connectives or logic gates) are: NOT, negation or complement - which receives one input and returns true when that input is false ("not") AND or conjunction - true when all inputs are true ("both") OR or disjunction - true when any input is true ("either") XOR or exclusive disjunction - true when one of its inputs is true and the other is false ("not equal") NAND or Sheffer stroke - true when it is not the case that all inputs are true ("not both") NOR or logical nor - true when none of the inputs are true ("neither") XNOR or logical equality - true when both inputs are the same ("equal") An example of a more complicated function is the majority function (of an odd number of inputs). Representation A Boolean function may be specified in a variety of ways: Truth table: explicitly listing its value for all possible values of the arguments Marquand diagram: truth table values arranged in a two-dimensional grid (used in a Karnaugh map) Binary decision diagram, listing the truth table values at the bottom of a binary tree Venn diagram, depicting the truth table values as a colouring of regions of the plane Algebraically, as a propositional formula using rudimentary Boolean functions: Negation normal form, an arbitrary mix of AND and ORs of the arguments and their complements Disjunctive normal form, as an OR of ANDs of the arguments and their complements Conjunctive normal form, as an AND of ORs of the arguments and their complements Canonical normal form, a standardized formula which uniquely identifies the function: Algebraic normal form or Zhegalkin polynomial, as a XOR of ANDs of the arguments (no complements allowed) Full (canonical) disjunctive normal form, an OR of ANDs each containing every argument or complement (minterms) Full (canonical) conjunctive normal form, an AND of ORs each containing every argument or complement (maxterms) Blake canonical form, the OR of all the prime implicants of the function Boolean formulas can also be displayed as a graph: Propositional directed acyclic graph Digital circuit diagram of logic gates, a Boolean circuit And-inverter graph, using only AND and NOT In order to optimize electronic circuits, Boolean formulas can be minimized using the Quine–McCluskey algorithm or Karnaugh map. Analysis Properties A Boolean function can have a variety of properties: Constant: Is always true or always false regardless of its arguments. 
Monotone: for every combination of argument values, changing an argument from false to true can only cause the output to switch from false to true and not from true to false. A function is said to be unate in a certain variable if it is monotone with respect to changes in that variable. Linear: for each variable, flipping the value of the variable either always makes a difference in the truth value or never makes a difference (a parity function). Symmetric: the value does not depend on the order of its arguments. Read-once: Can be expressed with conjunction, disjunction, and negation with a single instance of each variable. Balanced: if its truth table contains an equal number of zeros and ones. The Hamming weight of the function is the number of ones in the truth table. Bent: its derivatives are all balanced (the autocorrelation spectrum is zero) Correlation immune to mth order: if the output is uncorrelated with all (linear) combinations of at most m arguments Evasive: if evaluation of the function always requires the value of all arguments A Boolean function is a Sheffer function if it can be used to create (by composition) any arbitrary Boolean function (see functional completeness) The algebraic degree of a function is the order of the highest order monomial in its algebraic normal form Circuit complexity attempts to classify Boolean functions with respect to the size or depth of circuits that can compute them. Derived functions A Boolean function may be decomposed using Boole's expansion theorem in positive and negative Shannon cofactors (Shannon expansion), which are the (k-1)-ary functions resulting from fixing one of the arguments (to zero or one). The general (k-ary) functions obtained by imposing a linear constraint on a set of inputs (a linear subspace) are known as subfunctions. The Boolean derivative of the function to one of the arguments is a (k-1)-ary function that is true when the output of the function is sensitive to the chosen input variable; it is the XOR of the two corresponding cofactors. A derivative and a cofactor are used in a Reed–Muller expansion. The concept can be generalized as a k-ary derivative in the direction dx, obtained as the difference (XOR) of the function at x and x + dx. The Möbius transform (or Boole-Möbius transform) of a Boolean function is the set of coefficients of its polynomial (algebraic normal form), as a function of the monomial exponent vectors. It is a self-inverse transform. It can be calculated efficiently using a butterfly algorithm ("Fast Möbius Transform"), analogous to the Fast Fourier Transform. Coincident Boolean functions are equal to their Möbius transform, i.e. their truth table (minterm) values equal their algebraic (monomial) coefficients. There are 2^2^(k−1) coincident functions of k arguments. Cryptographic analysis The Walsh transform of a Boolean function is a k-ary integer-valued function giving the coefficients of a decomposition into linear functions (Walsh functions), analogous to the decomposition of real-valued functions into harmonics by the Fourier transform. Its square is the power spectrum or Walsh spectrum. The Walsh coefficient of a single bit vector is a measure for the correlation of that bit with the output of the Boolean function. The maximum (in absolute value) Walsh coefficient is known as the linearity of the function. The highest number of bits (order) for which all Walsh coefficients are 0 (i.e. 
the subfunctions are balanced) is known as resiliency, and the function is said to be correlation immune to that order. The Walsh coefficients play a key role in linear cryptanalysis. The autocorrelation of a Boolean function is a k-ary integer-valued function giving the correlation between a certain set of changes in the inputs and the function output. For a given bit vector it is related to the Hamming weight of the derivative in that direction. The maximal autocorrelation coefficient (in absolute value) is known as the absolute indicator. If all autocorrelation coefficients are 0 (i.e. the derivatives are balanced) for a certain number of bits then the function is said to satisfy the propagation criterion to that order; if they are all zero then the function is a bent function. The autocorrelation coefficients play a key role in differential cryptanalysis. The Walsh coefficients of a Boolean function and its autocorrelation coefficients are related by the equivalent of the Wiener–Khinchin theorem, which states that the autocorrelation and the power spectrum are a Walsh transform pair. Linear approximation table These concepts can be extended naturally to vectorial Boolean functions by considering their output bits (coordinates) individually, or more thoroughly, by looking at the set of all linear functions of output bits, known as its components. The set of Walsh transforms of the components is known as a Linear Approximation Table (LAT) or correlation matrix; it describes the correlation between different linear combinations of input and output bits. The set of autocorrelation coefficients of the components is the autocorrelation table, related by a Walsh transform of the components to the more widely used Difference Distribution Table (DDT) which lists the correlations between differences in input and output bits (see also: S-box). Real polynomial form On the unit hypercube Any Boolean function can be uniquely extended (interpolated) to the real domain by a multilinear polynomial in , constructed by summing the truth table values multiplied by indicator polynomials:For example, the extension of the binary XOR function iswhich equalsSome other examples are negation (), AND () and OR (). When all operands are independent (share no variables) a function's polynomial form can be found by repeatedly applying the polynomials of the operators in a Boolean formula. When the coefficients are calculated modulo 2 one obtains the algebraic normal form (Zhegalkin polynomial). Direct expressions for the coefficients of the polynomial can be derived by taking an appropriate derivative:this generalizes as the Möbius inversion of the partially ordered set of bit vectors:where denotes the weight of the bit vector . Taken modulo 2, this is the Boolean Möbius transform, giving the algebraic normal form coefficients:In both cases, the sum is taken over all bit-vectors a covered by m, i.e. the "one" bits of a form a subset of the one bits of m. When the domain is restricted to the n-dimensional hypercube , the polynomial gives the probability of a positive outcome when the Boolean function f is applied to n independent random (Bernoulli) variables, with individual probabilities x. A special case of this fact is the piling-up lemma for parity functions. The polynomial form of a Boolean function can also be used as its natural extension to fuzzy logic. On the symmetric hypercube Often, the Boolean domain is taken as , with false ("0") mapping to 1 and true ("1") to -1 (see Analysis of Boolean functions). 
The polynomial corresponding to is then given by:Using the symmetric Boolean domain simplifies certain aspects of the analysis, since negation corresponds to multiplying by -1 and linear functions are monomials (XOR is multiplication). This polynomial form thus corresponds to the Walsh transform (in this context also known as Fourier transform) of the function (see above). The polynomial also has the same statistical interpretation as the one in the standard Boolean domain, except that it now deals with the expected values (see piling-up lemma for an example). Applications Boolean functions play a basic role in questions of complexity theory as well as the design of processors for digital computers, where they are implemented in electronic circuits using logic gates. The properties of Boolean functions are critical in cryptography, particularly in the design of symmetric key algorithms (see substitution box). In cooperative game theory, monotone Boolean functions are called simple games (voting games); this notion is applied to solve problems in social choice theory. See also Pseudo-Boolean function Boolean-valued function Boolean algebra topics Algebra of sets Decision tree model Indicator function Signed set References Further reading Boolean algebra Binary arithmetic Logic gates Programming constructs
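The "fast Möbius transform" butterfly mentioned above, which converts a truth table into algebraic-normal-form coefficients, fits in a few lines of Python; the truth table is indexed by the integer whose bits encode the argument values, and the majority-function example is illustrative.

```python
def moebius_transform(tt, n):
    """Fast Moebius (Boole-Moebius) transform of a Boolean function.

    tt is the truth table as a list of 0/1 values of length 2**n, where bit i
    of the index m holds the value of variable x_i. Returns the algebraic-
    normal-form coefficients: coef[m] = 1 iff the monomial made of the
    variables whose bits are set in m appears in the Zhegalkin polynomial.
    The transform is its own inverse (mod 2).
    """
    coef = list(tt)
    for i in range(n):                       # butterfly over each variable
        bit = 1 << i
        for m in range(1 << n):
            if m & bit:
                coef[m] ^= coef[m ^ bit]     # XOR in the cofactor with x_i = 0
    return coef

# Example: 3-variable majority function MAJ(x0, x1, x2).
n = 3
tt = [1 if bin(m).count("1") >= 2 else 0 for m in range(1 << n)]
anf = moebius_transform(tt, n)
# Non-zero coefficients at 0b011, 0b101, 0b110: MAJ = x0x1 XOR x0x2 XOR x1x2.
print([format(m, "03b") for m, c in enumerate(anf) if c])
```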
Boolean function
[ "Mathematics" ]
2,520
[ "Boolean algebra", "Mathematical logic", "Fields of abstract algebra", "Arithmetic", "Binary arithmetic" ]
753,403
https://en.wikipedia.org/wiki/Dawn%20chorus%20%28electromagnetic%29
The electromagnetic dawn chorus is a phenomenon that occurs most often at or shortly after dawn local time. It is believed to be generated by a Doppler-shifted cyclotron interaction between anisotropic distributions of energetic (> 40 keV) electrons and ambient background VLF noise. These energetic electrons are generally injected into the inner magnetosphere at the onset of the substorm expansion phase. Dawn choruses occur more frequently during magnetic storms. This phenomenon also occurs during aurorae, when it is termed an auroral chorus. With the proper radio equipment, dawn chorus can be converted to sounds that resemble, coincidentally, birds' dawn chorus. See also Auroral chorus Dawn chorus (birds) Hiss (electromagnetic) Whistler (radio) "Cluster One," a Pink Floyd track using sferics and dawn chorus as an overture Notes Further reading External links Natural VLF Radio - Sounds of Space Weather 2018 recording by NASA RBSP (Radiation Belt Storm Probe) Electrical phenomena Geomagnetism
Dawn chorus (electromagnetic)
[ "Physics" ]
202
[ "Physical phenomena", "Electrical phenomena" ]
753,944
https://en.wikipedia.org/wiki/Commensurability%20%28mathematics%29
In mathematics, two non-zero real numbers a and b are said to be commensurable if their ratio is a rational number; otherwise a and b are called incommensurable. (Recall that a rational number is one that is equivalent to the ratio of two integers.) There is a more general notion of commensurability in group theory. For example, the numbers 3 and 2 are commensurable because their ratio, , is a rational number. The numbers and are also commensurable because their ratio, , is a rational number. However, the numbers and 2 are incommensurable because their ratio, , is an irrational number. More generally, it is immediate from the definition that if a and b are any two non-zero rational numbers, then a and b are commensurable; it is also immediate that if a is any irrational number and b is any non-zero rational number, then a and b are incommensurable. On the other hand, if both a and b are irrational numbers, then a and b may or may not be commensurable. History of the concept The Pythagoreans are credited with the proof of the existence of irrational numbers. When the ratio of the lengths of two line segments is irrational, the line segments themselves (not just their lengths) are also described as being incommensurable. A separate, more general and circuitous ancient Greek doctrine of proportionality for geometric magnitude was developed in Book V of Euclid's Elements in order to allow proofs involving incommensurable lengths, thus avoiding arguments which applied only to a historically restricted definition of number. Euclid's notion of commensurability is anticipated in passing in the discussion between Socrates and the slave boy in Plato's dialogue entitled Meno, in which Socrates uses the boy's own inherent capabilities to solve a complex geometric problem through the Socratic Method. He develops a proof which is, for all intents and purposes, very Euclidean in nature and speaks to the concept of incommensurability. The usage primarily comes from translations of Euclid's Elements, in which two line segments a and b are called commensurable precisely if there is some third segment c that can be laid end-to-end a whole number of times to produce a segment congruent to a, and also, with a different whole number, a segment congruent to b. Euclid did not use any concept of real number, but he used a notion of congruence of line segments, and of one such segment being longer or shorter than another. That is rational is a necessary and sufficient condition for the existence of some real number c, and integers m and n, such that a = mc and b = nc. Assuming for simplicity that a and b are positive, one can say that a ruler, marked off in units of length c, could be used to measure out both a line segment of length a, and one of length b. That is, there is a common unit of length in terms of which a and b can both be measured; this is the origin of the term. Otherwise the pair a and b are incommensurable. In group theory In group theory, two subgroups Γ1 and Γ2 of a group G are said to be commensurable if the intersection Γ1 ∩ Γ2 is of finite index in both Γ1 and Γ2. Example: Let a and b be nonzero real numbers. Then the subgroup of the real numbers R generated by a is commensurable with the subgroup generated by b if and only if the real numbers a and b are commensurable, in the sense that a/b is rational. Thus the group-theoretic notion of commensurability generalizes the concept for real numbers. There is a similar notion for two groups which are not given as subgroups of the same group. 
Two groups G1 and G2 are (abstractly) commensurable if there are subgroups H1 ⊂ G1 and H2 ⊂ G2 of finite index such that H1 is isomorphic to H2. In topology Two path-connected topological spaces are sometimes said to be commensurable if they have homeomorphic finite-sheeted covering spaces. Depending on the type of space under consideration, one might want to use homotopy equivalences or diffeomorphisms instead of homeomorphisms in the definition. If two spaces are commensurable, then their fundamental groups are commensurable. Example: any two closed surfaces of genus at least 2 are commensurable with each other. References Real numbers Infinite group theory
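For rational inputs, the "common unit of length" in Euclid's sense can be computed exactly. The short Python sketch below returns the greatest common measure of two non-zero rationals, i.e. a value c such that a = mc and b = nc for integers m and n; it uses math.lcm, available in Python 3.9 and later.

```python
from fractions import Fraction
from math import gcd, lcm

def common_measure(a: Fraction, b: Fraction) -> Fraction:
    """Greatest common measure c of two non-zero rationals: a = m*c and
    b = n*c for integers m, n. Such a c exists precisely because a/b is
    rational, i.e. because a and b are commensurable."""
    return Fraction(gcd(a.numerator, b.numerator),
                    lcm(a.denominator, b.denominator))

a, b = Fraction(3, 4), Fraction(5, 6)
c = common_measure(a, b)
print(c, a / c, b / c)   # 1/12, 9, 10  ->  a = 9c and b = 10c
```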
Commensurability (mathematics)
[ "Mathematics" ]
975
[ "Real numbers", "Mathematical objects", "Numbers" ]
754,487
https://en.wikipedia.org/wiki/Permeability%20%28electromagnetism%29
In electromagnetism, permeability is the measure of magnetization produced in a material in response to an applied magnetic field. Permeability is typically represented by the (italicized) Greek letter μ. It is the ratio of the magnetic induction to the magnetizing field in a material. The term was coined by William Thomson, 1st Baron Kelvin in 1872, and used alongside permittivity by Oliver Heaviside in 1885. The reciprocal of permeability is magnetic reluctivity. In SI units, permeability is measured in henries per meter (H/m), or equivalently in newtons per ampere squared (N/A2). The permeability constant μ0, also known as the magnetic constant or the permeability of free space, is the proportionality between magnetic induction and magnetizing force when forming a magnetic field in a classical vacuum. A closely related property of materials is magnetic susceptibility, which is a dimensionless proportionality factor that indicates the degree of magnetization of a material in response to an applied magnetic field. Explanation In the macroscopic formulation of electromagnetism, there appear two different kinds of magnetic field: the magnetizing field H which is generated around electric currents and displacement currents, and also emanates from the poles of magnets. The SI units of H are amperes per meter. the magnetic flux density B which acts back on the electrical domain, by curving the motion of charges and causing electromagnetic induction. The SI units of B are volt-seconds per square meter, a ratio equivalent to one tesla. The concept of permeability arises since in many materials (and in vacuum), there is a simple relationship between H and B at any location or time, in that the two fields are precisely proportional to each other: where the proportionality factor μ is the permeability, which depends on the material. The permeability of vacuum (also known as permeability of free space) is a physical constant, denoted μ0. The SI units of μ are volt-seconds per ampere-meter, equivalently henry per meter. Typically μ would be a scalar, but for an anisotropic material, μ could be a second rank tensor. However, inside strong magnetic materials (such as iron, or permanent magnets), there is typically no simple relationship between H and B. The concept of permeability is then nonsensical or at least only applicable to special cases such as unsaturated magnetic cores. Not only do these materials have nonlinear magnetic behaviour, but often there is significant magnetic hysteresis, so there is not even a single-valued functional relationship between B and H. However, considering starting at a given value of B and H and slightly changing the fields, it is still possible to define an incremental permeability as: assuming B and H are parallel. In the microscopic formulation of electromagnetism, where there is no concept of an H field, the vacuum permeability μ0 appears directly (in the SI Maxwell's equations) as a factor that relates total electric currents and time-varying electric fields to the B field they generate. In order to represent the magnetic response of a linear material with permeability μ, this instead appears as a magnetization M that arises in response to the B field: . The magnetization in turn is a contribution to the total electric current—the magnetization current. 
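A minimal numerical illustration of the proportionality B = μH = μ0μrH, together with the magnetization M = χmH, where χm = μr − 1 as introduced in the next section; the relative permeability value is an arbitrary illustrative figure rather than data for a specific material.

```python
from math import pi

mu0 = 4 * pi * 1e-7          # H/m, permeability of free space
mu_r = 5000.0                # illustrative relative permeability (iron-like core)
H = 100.0                    # applied magnetizing field, A/m

mu = mu0 * mu_r              # absolute permeability, H/m
B = mu * H                   # flux density, tesla
chi = mu_r - 1.0             # magnetic susceptibility (dimensionless)
M = chi * H                  # magnetization, A/m
print(f"mu = {mu:.3e} H/m, B = {B:.3f} T, M = {M:.3e} A/m")
```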
Relative permeability and magnetic susceptibility Relative permeability, denoted by the symbol , is the ratio of the permeability of a specific medium to the permeability of free space μ0: where 4 × 10−7 H/m is the magnetic permeability of free space. In terms of relative permeability, the magnetic susceptibility is The number χm is a dimensionless quantity, sometimes called volumetric or bulk susceptibility, to distinguish it from χp (magnetic mass or specific susceptibility) and χM (molar or molar mass susceptibility). Diamagnetism Diamagnetism is the property of an object which causes it to create a magnetic field in opposition of an externally applied magnetic field, thus causing a repulsive effect. Specifically, an external magnetic field alters the orbital velocity of electrons around their atom's nuclei, thus changing the magnetic dipole moment in the direction opposing the external field. Diamagnets are materials with a magnetic permeability less than μ0 (a relative permeability less than 1). Consequently, diamagnetism is a form of magnetism that a substance exhibits only in the presence of an externally applied magnetic field. It is generally a quite weak effect in most materials, although superconductors exhibit a strong effect. Paramagnetism Paramagnetism is a form of magnetism which occurs only in the presence of an externally applied magnetic field. Paramagnetic materials are attracted to magnetic fields, hence have a relative magnetic permeability greater than one (or, equivalently, a positive magnetic susceptibility). The magnetic moment induced by the applied field is linear in the field strength, and it is rather weak. It typically requires a sensitive analytical balance to detect the effect. Unlike ferromagnets, paramagnets do not retain any magnetization in the absence of an externally applied magnetic field, because thermal motion causes the spins to become randomly oriented without it. Thus the total magnetization will drop to zero when the applied field is removed. Even in the presence of the field, there is only a small induced magnetization because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength and this explains the linear dependency. The attraction experienced by ferromagnets is non-linear and much stronger so that it is easily observed, for instance, in magnets on one's refrigerator. Gyromagnetism For gyromagnetic media (see Faraday rotation) the magnetic permeability response to an alternating electromagnetic field in the microwave frequency domain is treated as a non-diagonal tensor expressed by: Values for some common materials The following table should be used with caution as the permeability of ferromagnetic materials varies greatly with field strength and specific composition and fabrication. For example, 4% electrical steel has an initial relative permeability (at or near 0 T) of 2,000 and a maximum of 38,000 at T = 1 and different range of values at different percent of Si and manufacturing process, and, indeed, the relative permeability of any material at a sufficiently high field strength trends toward 1 (at magnetic saturation). A good magnetic core material must have high permeability. For passive magnetic levitation a relative permeability below 1 is needed (corresponding to a negative susceptibility). Permeability varies with a magnetic field. Values shown above are approximate and valid only at the magnetic fields shown. 
They are given for a zero frequency; in practice, the permeability is generally a function of the frequency. When the frequency is considered, the permeability can be complex, corresponding to the in-phase and out-of-phase response. Complex permeability A useful tool for dealing with high-frequency magnetic effects is the complex permeability. While at low frequencies in a linear material the magnetic field and the auxiliary magnetic field are simply proportional to each other through some scalar permeability, at high frequencies these quantities will react to each other with some lag time. These fields can be written as phasors, such that H = H0 e^(jωt) and B = B0 e^(j(ωt − δ)), where δ is the phase delay of B from H. Understanding permeability as the ratio of the magnetic flux density to the magnetic field, the ratio of the phasors can be written and simplified as μ = B0 e^(j(ωt − δ)) / (H0 e^(jωt)) = (B0/H0) e^(−jδ), so that the permeability becomes a complex number. By Euler's formula, the complex permeability can be translated from polar to rectangular form: μ = (B0/H0) cos δ − j(B0/H0) sin δ = μ′ − jμ″. The ratio of the imaginary to the real part of the complex permeability is called the loss tangent, tan δ = μ″/μ′, which provides a measure of how much power is lost in the material versus how much is stored. See also Antiferromagnetism Diamagnetism Electromagnet Ferromagnetism Magnetic reluctance Paramagnetism Permittivity SI electromagnetism units Notes References External links Electromagnetism – a chapter from an online textbook Permeability calculator Relative Permeability Magnetic Properties of Materials RF Cafe's Conductor Bulk Resistivity & Skin Depths Electric and magnetic fields in matter Physical quantities
Permeability (electromagnetism)
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,760
[ "Physical phenomena", "Physical quantities", "Quantity", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", "Physical properties" ]
754,488
https://en.wikipedia.org/wiki/Permeability%20%28materials%20science%29
In fluid mechanics, materials science and Earth sciences, permeability (commonly symbolized as k) is a measure of the ability of a porous material (often, a rock or an unconsolidated material) to allow fluids to pass through it. Permeability Permeability is a property of porous materials that is an indication of the ability for fluids (gas or liquid) to flow through them. Fluids can more easily flow through a material with high permeability than one with low permeability. The permeability of a medium is related to the porosity, but also to the shapes of the pores in the medium and their level of connectedness. Fluid flows can also be influenced in different lithological settings by brittle deformation of rocks in fault zones; the mechanisms by which this occurs are the subject of fault zone hydrogeology. Permeability is also affected by the pressure inside a material. Units The SI unit for permeability is the square metre (m2). A practical unit for permeability is the darcy (d), or more commonly the millidarcy (md); 1 d ≈ 0.987 μm2 = 9.87×10−13 m2. The name honors the French engineer Henry Darcy, who first described the flow of water through sand filters for potable water supply. Permeability values for most materials typically range from a fraction of a millidarcy to several thousand millidarcys. The unit of square centimetre (cm2) is also sometimes used. Applications The concept of permeability is of importance in determining the flow characteristics of hydrocarbons in oil and gas reservoirs, and of groundwater in aquifers. For a rock to be considered an exploitable hydrocarbon reservoir without stimulation, its permeability must be greater than approximately 100 md (depending on the nature of the hydrocarbon – gas reservoirs with lower permeabilities are still exploitable because of the lower viscosity of gas in comparison with oil). Rocks with permeabilities significantly lower than 100 md can form efficient seals (see petroleum geology). Unconsolidated sands may have permeabilities of over 5000 md. The concept also has many practical applications outside of geology, for example in chemical engineering (e.g., filtration), as well as in civil engineering when determining whether the ground conditions of a site are suitable for construction. Description Permeability is part of the proportionality constant in Darcy's law, which relates discharge (flow rate) and fluid physical properties (e.g. dynamic viscosity) to a pressure gradient applied to the porous media: v = (k/μ)(Δp/Δx) (for linear flow). Therefore: k = v μ Δx/Δp, where: v is the fluid velocity through the porous medium (i.e., the average flow velocity calculated as if the fluid was the only phase present in the porous medium) (m/s), k is the permeability of a medium (m2), μ is the dynamic viscosity of the fluid (Pa·s), Δp is the applied pressure difference (Pa), Δx is the thickness of the bed of the porous medium (m). In naturally occurring materials, the permeability values range over many orders of magnitude (see table below for an example of this range). Relation to hydraulic conductivity The global proportionality constant for the flow of water through a porous medium is called the hydraulic conductivity (K, unit: m/s). Permeability, or intrinsic permeability (k, unit: m2), is a part of this, and is a specific property characteristic of the solid skeleton and the microstructure of the porous medium itself, independently of the nature and properties of the fluid flowing through the pores of the medium.
This makes it possible to take into account the effect of temperature on the viscosity of the fluid flowing through the porous medium, and to address fluids other than pure water, e.g., concentrated brines, petroleum, or organic solvents. Given the value of hydraulic conductivity for a studied system, the permeability can be calculated as follows: k = K μ/(ρ g), where k is the permeability (m2), K is the hydraulic conductivity (m/s), μ is the dynamic viscosity of the fluid (Pa·s), ρ is the density of the fluid (kg/m3), and g is the acceleration due to gravity (m/s2). Anisotropic permeability Tissue such as brain, liver, muscle, etc. can be treated as a heterogeneous porous medium. Describing the flow of biofluids (blood, cerebrospinal fluid, etc.) within such a medium requires a full 3-dimensional anisotropic treatment of the tissue. In this case the scalar hydraulic permeability is replaced with the hydraulic permeability tensor k, so that Darcy's law reads q = −(1/μ) k ∇p, where: q is the Darcy flux, or filtration velocity, which describes the bulk (not microscopic) velocity field of the fluid, μ is the dynamic viscosity of the fluid, k is the hydraulic permeability tensor, ∇ is the gradient operator, and p is the pressure field in the fluid. Connecting this expression to the isotropic case, k = k 1, where the k on the right is the scalar hydraulic permeability and 1 is the identity tensor. Determination Permeability is typically determined in the lab by application of Darcy's law under steady state conditions or, more generally, by application of various solutions to the diffusion equation for unsteady flow conditions. Permeability needs to be measured, either directly (using Darcy's law), or through estimation using empirically derived formulas. However, for some simple models of porous media, permeability can be calculated (e.g., random close packing of identical spheres). Permeability model based on conduit flow Based on the Hagen–Poiseuille equation for viscous flow in a pipe, permeability can be expressed as: k = C d2, where: k is the intrinsic permeability [length2], C is a dimensionless constant that is related to the configuration of the flow-paths, and d is the average, or effective, pore diameter [length]. Absolute permeability (aka intrinsic or specific permeability) Absolute permeability denotes the permeability in a porous medium that is 100% saturated with a single-phase fluid. This may also be called the intrinsic permeability or specific permeability. These terms refer to the quality that the permeability value in question is an intensive property of the medium, not a spatial average of a heterogeneous block of material; and that it is a function of the material structure only (and not of the fluid). They explicitly distinguish the value from that of relative permeability. Permeability to gases Sometimes, permeability to gases can be somewhat different from that for liquids in the same media. One difference is attributable to the "slippage" of gas at the interface with the solid when the gas mean free path is comparable to the pore size (about 0.01 to 0.1 μm at standard temperature and pressure). See also Knudsen diffusion and constrictivity. For example, measurement of permeability through sandstones and shales yielded values from 9.0×10−19 m2 to 2.4×10−12 m2 for water and from 1.7×10−17 m2 to 2.6×10−12 m2 for nitrogen gas. Gas permeability of reservoir rock and source rock is important in petroleum engineering, when considering the optimal extraction of gas from unconventional sources such as shale gas, tight gas, or coalbed methane.
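As a small worked illustration of the relation between hydraulic conductivity and intrinsic permeability given above, the following Python sketch converts a conductivity typical of a clean sand into a permeability and expresses it in millidarcys. The fluid properties (water near 20 °C) and the conductivity value are assumptions chosen for illustration, not data from this article.

```python
# Sketch: intrinsic permeability from hydraulic conductivity, k = K * mu / (rho * g).
MU = 1.0e-3            # dynamic viscosity of water near 20 C, Pa*s (assumed)
RHO = 998.0            # density of water near 20 C, kg/m^3 (assumed)
G = 9.81               # gravitational acceleration, m/s^2
M2_PER_DARCY = 9.869233e-13

def intrinsic_permeability(K):
    """Intrinsic permeability in m^2 from hydraulic conductivity K in m/s."""
    return K * MU / (RHO * G)

K = 1.0e-5                                  # m/s, an assumed clean-sand conductivity
k = intrinsic_permeability(K)
print(f"k = {k:.2e} m^2 = {k / M2_PER_DARCY * 1000:.0f} millidarcy")   # about 1e-12 m^2, ~1000 md
```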
Permeability tensor To model permeability in anisotropic media, a permeability tensor is needed. Pressure can be applied in three directions, and for each direction, permeability can be measured (via Darcy's law in 3D) in three directions, thus leading to a 3 by 3 tensor. The tensor is realised using a 3 by 3 matrix that is both symmetric and positive definite (an SPD matrix): the tensor is symmetric by the Onsager reciprocal relations, and it is positive definite because the energy being expended (the inner product of fluid flow and negative pressure gradient) is always positive. The permeability tensor is always diagonalizable (being both symmetric and positive definite). The eigenvectors yield the principal directions of flow, where flow is parallel to the pressure gradient, and the eigenvalues represent the principal permeabilities. Ranges of common intrinsic permeabilities These values do not depend on the fluid properties; see the table derived from the same source for values of hydraulic conductivity, which are specific to the material through which the fluid is flowing. See also Fault zone hydrogeology Hydraulic conductivity Hydrogeology Permeation Petroleum geology Relative permeability Klinkenberg correction Electrical resistivity measurement of concrete Footnotes References Wang, H. F., 2000. Theory of Linear Poroelasticity with Applications to Geomechanics and Hydrogeology, Princeton University Press. External links Defining Permeability Tailoring porous media to control permeability Permeability of Porous Media Graphical depiction of different flow rates through materials of differing permeability Web-based porosity and permeability calculator given flow characteristics Multiphase fluid flow in porous media Florida Method of Test For Concrete Resistivity as an Electrical Indicator of its Permeability Aquifers Hydrology Soil mechanics Soil physics Porous media In situ geotechnical investigations
Permeability (materials science)
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering", "Environmental_science" ]
1,928
[ "Physical phenomena", "Hydrology", "Applied and interdisciplinary physics", "Physical quantities", "Porous media", "Quantity", "Soil mechanics", "Soil physics", "Materials science", "Aquifers", "Environmental engineering", "Physical properties" ]
755,300
https://en.wikipedia.org/wiki/Curvilinear%20coordinates
In geometry, curvilinear coordinates are a coordinate system for Euclidean space in which the coordinate lines may be curved. These coordinates may be derived from a set of Cartesian coordinates by using a transformation that is locally invertible (a one-to-one map) at each point. This means that one can convert a point given in a Cartesian coordinate system to its curvilinear coordinates and back. The name curvilinear coordinates, coined by the French mathematician Lamé, derives from the fact that the coordinate surfaces of the curvilinear systems are curved. Well-known examples of curvilinear coordinate systems in three-dimensional Euclidean space (R3) are cylindrical and spherical coordinates. A Cartesian coordinate surface in this space is a coordinate plane; for example z = 0 defines the x-y plane. In the same space, the coordinate surface r = 1 in spherical coordinates is the surface of a unit sphere, which is curved. The formalism of curvilinear coordinates provides a unified and general description of the standard coordinate systems. Curvilinear coordinates are often used to define the location or distribution of physical quantities which may be, for example, scalars, vectors, or tensors. Mathematical expressions involving these quantities in vector calculus and tensor analysis (such as the gradient, divergence, curl, and Laplacian) can be transformed from one coordinate system to another, according to transformation rules for scalars, vectors, and tensors. Such expressions then become valid for any curvilinear coordinate system. A curvilinear coordinate system may be simpler to use than the Cartesian coordinate system for some applications. The motion of particles under the influence of central forces is usually easier to solve in spherical coordinates than in Cartesian coordinates; this is true of many physical problems with spherical symmetry defined in R3. Equations with boundary conditions that follow coordinate surfaces for a particular curvilinear coordinate system may be easier to solve in that system. While one might describe the motion of a particle in a rectangular box using Cartesian coordinates, it is easier to describe the motion in a sphere with spherical coordinates. Spherical coordinates are the most common curvilinear coordinate systems and are used in Earth sciences, cartography, quantum mechanics, relativity, and engineering. Orthogonal curvilinear coordinates in 3 dimensions Coordinates, basis, and vectors For now, consider 3-D space. A point P in 3-D space (or its position vector r) can be defined using Cartesian coordinates (x, y, z) [equivalently written (x1, x2, x3)], by , where ex, ey, ez are the standard basis vectors. It can also be defined by its curvilinear coordinates (q1, q2, q3) if this triplet of numbers defines a single point in an unambiguous way. The relation between the coordinates is then given by the invertible transformation functions: The surfaces q1 = constant, q2 = constant, q3 = constant are called the coordinate surfaces; and the space curves formed by their intersection in pairs are called the coordinate curves. The coordinate axes are determined by the tangents to the coordinate curves at the intersection of three surfaces. They are not in general fixed directions in space, which happens to be the case for simple Cartesian coordinates, and thus there is generally no natural global basis for curvilinear coordinates. 
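A concrete way to see the "locally invertible transformation" at work is to write out a familiar curvilinear system explicitly. The Python sketch below uses spherical coordinates in the physics convention (θ the polar angle, φ the azimuth); the sample point is arbitrary, and the code is only an illustration of the forward and inverse coordinate maps, not part of the formalism developed below.

```python
# Sketch: spherical coordinates as a curvilinear system on R^3 (physics convention).
import math

def to_cartesian(r, theta, phi):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)      # breaks down at r = 0: the map is only locally invertible
    phi = math.atan2(y, x)
    return (r, theta, phi)

q = (1.0, 0.7, 2.1)               # an arbitrary sample point (r, theta, phi)
print(to_spherical(*to_cartesian(*q)))   # round trip recovers (1.0, 0.7, 2.1) up to rounding
```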
In the Cartesian system, the standard basis vectors can be derived from the derivative of the location of point P with respect to the local coordinate: ei = ∂r/∂xi. Applying the same derivatives to the curvilinear system locally at point P defines the natural basis vectors: hi = ∂r/∂qi. Such a basis, whose vectors change their direction and/or magnitude from point to point, is called a local basis. All bases associated with curvilinear coordinates are necessarily local. Basis vectors that are the same at all points are global bases, and can be associated only with linear or affine coordinate systems. For this article e is reserved for the standard basis (Cartesian) and h or b is for the curvilinear basis. These may not have unit length, and may also not be orthogonal. In the case that they are orthogonal at all points where the derivatives are well-defined, we define the Lamé coefficients (after Gabriel Lamé) by hi = |∂r/∂qi| (the lengths of the natural basis vectors) and the curvilinear orthonormal basis vectors by bi = (∂r/∂qi)/hi. These basis vectors may well depend upon the position of P; it is therefore necessary that they are not assumed to be constant over a region. (They technically form a basis for the tangent bundle of R3 at P, and so are local to P.) In general, curvilinear coordinates allow the natural basis vectors hi to be neither mutually perpendicular nor of unit length: they can be of arbitrary magnitude and direction. The use of an orthogonal basis makes vector manipulations simpler than for non-orthogonal ones. However, some areas of physics and engineering, particularly fluid mechanics and continuum mechanics, require non-orthogonal bases to describe deformations and fluid transport to account for complicated directional dependences of physical quantities. A discussion of the general case appears later on this page. Vector calculus Differential elements In orthogonal curvilinear coordinates, since the total differential change in r is dr = (∂r/∂q1) dq1 + (∂r/∂q2) dq2 + (∂r/∂q3) dq3 = h1 dq1 b1 + h2 dq2 b2 + h3 dq3 b3, the scale factors are the Lamé coefficients hi = |∂r/∂qi|. In non-orthogonal coordinates the length of dr is the positive square root of gij dqi dqj (with Einstein summation convention). The six independent scalar products gij = hi·hj of the natural basis vectors generalize the three scale factors defined above for orthogonal coordinates. The nine gij are the components of the metric tensor, which has only three non-zero components in orthogonal coordinates: g11 = h1h1, g22 = h2h2, g33 = h3h3. Covariant and contravariant bases Spatial gradients, distances, time derivatives and scale factors are interrelated within a coordinate system by two groups of basis vectors: basis vectors b_i = ∂r/∂q^i that are locally tangent to their associated coordinate pathline, which are the covariant basis vectors (denoted by lowered indices); and basis vectors b^i = ∇q^i that are locally normal to the isosurface created by the other coordinates, which are the contravariant basis vectors (denoted by raised indices); ∇ is the del operator. Note that, because of Einstein's summation convention, the position of the indices of the vectors is the opposite of that of the coordinates. Consequently, a general curvilinear coordinate system has two sets of basis vectors for every point: {b_1, b_2, b_3} is the covariant basis, and {b^1, b^2, b^3} is the contravariant (a.k.a. reciprocal) basis. The covariant and contravariant basis vector types have identical direction for orthogonal curvilinear coordinate systems, but as usual have inverted units with respect to each other. Note the following important equality: b^i · b_j = δ^i_j, wherein δ^i_j denotes the generalized Kronecker delta.
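The duality b^i · b_j = δ^i_j can be checked symbolically for a simple two-dimensional case. The sketch below assumes SymPy is available and uses plane polar coordinates purely as an illustration; the derivatives of the position give the covariant (tangent) basis, and the gradients of r and θ give the contravariant (reciprocal) basis.

```python
# Sketch (SymPy assumed): covariant and contravariant bases for polar coordinates.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
pos = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])    # Cartesian position r(q1, q2)

# Covariant (tangent) basis vectors b_i = d(pos)/dq^i
b_r, b_th = pos.diff(r), pos.diff(th)

# Contravariant (reciprocal) basis vectors b^i = grad q^i, written in Cartesian variables
X, Y = sp.symbols('X Y')
coords = {'r': sp.sqrt(X**2 + Y**2), 'th': sp.atan2(Y, X)}
grad = lambda f: sp.Matrix([f.diff(X), f.diff(Y)]).subs({X: pos[0], Y: pos[1]})
B_r, B_th = grad(coords['r']), grad(coords['th'])

# Duality check: b^i . b_j should be the Kronecker delta
for i, Bi in (('r', B_r), ('th', B_th)):
    for j, bj in (('r', b_r), ('th', b_th)):
        print(i, j, sp.simplify(Bi.dot(bj)))          # 1 when i == j, otherwise 0
```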
A vector v can be specified in terms of either basis, i.e., v = v^i b_i = v_i b^i. Using the Einstein summation convention, the basis vectors relate to the components by v · b^i = v^i and v · b_i = v_i, and the indices are raised and lowered by the metric, v_i = g_ik v^k and v^i = g^ik v_k, where g is the metric tensor (see below). A vector can be specified with covariant coordinates (lowered indices, written v_k) or contravariant coordinates (raised indices, written v^k). From the above vector sums, it can be seen that contravariant coordinates are associated with covariant basis vectors, and covariant coordinates are associated with contravariant basis vectors. A key feature of the representation of vectors and tensors in terms of indexed components and basis vectors is invariance in the sense that vector components which transform in a covariant manner (or contravariant manner) are paired with basis vectors that transform in a contravariant manner (or covariant manner). Integration Constructing a covariant basis in one dimension Consider the one-dimensional curve shown in Fig. 3. At point P, taken as an origin, x is one of the Cartesian coordinates, and q1 is one of the curvilinear coordinates. The local (non-unit) basis vector is b1 (notated h1 above, with b reserved for unit vectors) and it is built on the q1 axis which is a tangent to that coordinate line at the point P. The axis q1 and thus the vector b1 form an angle α with the Cartesian x axis and the Cartesian basis vector e1. From triangle PAB one can read off how α relates the magnitudes |e1|, |b1| of the two basis vectors, i.e., the scalar intercepts PB and PA. PA is also the projection of b1 on the x axis. However, this method for basis vector transformations using directional cosines is inapplicable to curvilinear coordinates for the following reasons: By increasing the distance from P, the angle between the curved line q1 and Cartesian axis x increasingly deviates from α. At the distance PB the true angle is that which the tangent at point C forms with the x axis, and the latter angle is clearly different from α. The angles that the q1 line and that axis form with the x axis become closer in value the closer one moves towards point P and become exactly equal at P. Let point E be located very close to P, so close that the distance PE is infinitesimally small. Then PE measured on the q1 axis almost coincides with PE measured on the q1 line. At the same time, the ratio PD/PE (PD being the projection of PE on the x axis) becomes almost exactly equal to cos α. Let the infinitesimally small intercepts PD and PE be labelled, respectively, as dx and dq1. Then cos α = dx/dq1. Thus, the directional cosines can be substituted in transformations with the more exact ratios between infinitesimally small coordinate intercepts. It follows that the component (projection) of b1 on the x axis is |b1| cos α. If qi = qi(x1, x2, x3) and xi = xi(q1, q2, q3) are smooth (continuously differentiable) functions, the transformation ratios can be written as ∂qi/∂xj and ∂xj/∂qi. That is, those ratios are partial derivatives of coordinates belonging to one system with respect to coordinates belonging to the other system.
Constructing a covariant basis in three dimensions Doing the same for the coordinates in the other 2 dimensions, b1 can be expressed as: Similar equations hold for b2 and b3 so that the standard basis {e1, e2, e3} is transformed to a local (ordered and normalised) basis {b1, b2, b3} by the following system of equations: By analogous reasoning, one can obtain the inverse transformation from local basis to standard basis: Jacobian of the transformation The above systems of linear equations can be written in matrix form using the Einstein summation convention as . This coefficient matrix of the linear system is the Jacobian matrix (and its inverse) of the transformation. These are the equations that can be used to transform a Cartesian basis into a curvilinear basis, and vice versa. In three dimensions, the expanded forms of these matrices are In the inverse transformation (second equation system), the unknowns are the curvilinear basis vectors. For any specific location there can only exist one and only one set of basis vectors (else the basis is not well defined at that point). This condition is satisfied if and only if the equation system has a single solution. In linear algebra, a linear equation system has a single solution (non-trivial) only if the determinant of its system matrix is non-zero: which shows the rationale behind the above requirement concerning the inverse Jacobian determinant. Generalization to n dimensions The formalism extends to any finite dimension as follows. Consider the real Euclidean n-dimensional space, that is Rn = R × R × ... × R (n times) where R is the set of real numbers and × denotes the Cartesian product, which is a vector space. The coordinates of this space can be denoted by: x = (x1, x2,...,xn). Since this is a vector (an element of the vector space), it can be written as: where e1 = (1,0,0...,0), e2 = (0,1,0...,0), e3 = (0,0,1...,0),...,en = (0,0,0...,1) is the standard basis set of vectors for the space Rn, and i = 1, 2,...n is an index labelling components. Each vector has exactly one component in each dimension (or "axis") and they are mutually orthogonal (perpendicular) and normalized (has unit magnitude). More generally, we can define basis vectors bi so that they depend on q = (q1, q2,...,qn), i.e. they change from point to point: bi = bi(q). In which case to define the same point x in terms of this alternative basis: the coordinates with respect to this basis vi also necessarily depend on x also, that is vi = vi(x). Then a vector v in this space, with respect to these alternative coordinates and basis vectors, can be expanded as a linear combination in this basis (which simply means to multiply each basis vector ei by a number vi – scalar multiplication): The vector sum that describes v in the new basis is composed of different vectors, although the sum itself remains the same. Transformation of coordinates From a more general and abstract perspective, a curvilinear coordinate system is simply a coordinate patch on the differentiable manifold En (n-dimensional Euclidean space) that is diffeomorphic to the Cartesian coordinate patch on the manifold. Two diffeomorphic coordinate patches on a differential manifold need not overlap differentiably. With this simple definition of a curvilinear coordinate system, all the results that follow below are simply applications of standard theorems in differential topology. 
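As a numerical companion to the Jacobian condition above (a well-defined local basis requires a non-zero determinant), the sketch below, which assumes NumPy, evaluates the Jacobian determinant of the spherical-coordinate map at two sample points. The determinant is r2 sin θ, so it is non-zero at generic points but vanishes on the z-axis, where spherical coordinates fail to define a basis; the sample values are illustrative only.

```python
# Sketch (NumPy assumed): Jacobian determinant of the spherical-coordinate map.
import numpy as np

def jacobian(r, theta, phi):
    """J[i][j] = d x_i / d q_j for x = (r sin t cos p, r sin t sin p, r cos t)."""
    st, ct, sp_, cp = np.sin(theta), np.cos(theta), np.sin(phi), np.cos(phi)
    return np.array([[st * cp,  r * ct * cp,  -r * st * sp_],
                     [st * sp_, r * ct * sp_,  r * st * cp],
                     [ct,      -r * st,        0.0]])

for q in [(2.0, 0.8, 1.1), (2.0, 0.0, 1.1)]:            # the second point lies on the z-axis
    print(q, "det J =", np.linalg.det(jacobian(*q)))    # equals r**2 * sin(theta)
```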
The transformation functions are such that there's a one-to-one relationship between points in the "old" and "new" coordinates, that is, those functions are bijections, and fulfil the following requirements within their domains: Vector and tensor algebra in three-dimensional curvilinear coordinates Elementary vector and tensor algebra in curvilinear coordinates is used in some of the older scientific literature in mechanics and physics and can be indispensable to understanding work from the early and mid-1900s, for example the text by Green and Zerna. Some useful relations in the algebra of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, Naghdi, Simmonds, Green and Zerna, Basar and Weichert, and Ciarlet. Tensors in curvilinear coordinates A second-order tensor can be expressed as where denotes the tensor product. The components Sij are called the contravariant components, Si j the mixed right-covariant components, Si j the mixed left-covariant components, and Sij the covariant components of the second-order tensor. The components of the second-order tensor are related by The metric tensor in orthogonal curvilinear coordinates At each point, one can construct a small line element , so the square of the length of the line element is the scalar product dx • dx and is called the metric of the space, given by: . The following portion of the above equation is a symmetric tensor called the fundamental (or metric) tensor of the Euclidean space in curvilinear coordinates. Indices can be raised and lowered by the metric: Relation to Lamé coefficients Defining the scale factors hi by gives a relation between the metric tensor and the Lamé coefficients, and where hij are the Lamé coefficients. For an orthogonal basis we also have: Example: Polar coordinates If we consider polar coordinates for R2, (r, θ) are the curvilinear coordinates, and the Jacobian determinant of the transformation (r,θ) → (r cos θ, r sin θ) is r. The orthogonal basis vectors are br = (cos θ, sin θ), bθ = (−r sin θ, r cos θ). The scale factors are hr = 1 and hθ= r. The fundamental tensor is g11 =1, g22 =r2, g12 = g21 =0. The alternating tensor In an orthonormal right-handed basis, the third-order alternating tensor is defined as In a general curvilinear basis the same tensor may be expressed as It can also be shown that Christoffel symbols Christoffel symbols of the first kind where the comma denotes a partial derivative (see Ricci calculus). To express Γkij in terms of gij, Since using these to rearrange the above relations gives Christoffel symbols of the second kind This implies that since . Other relations that follow are Vector operations Vector and tensor calculus in three-dimensional curvilinear coordinates Adjustments need to be made in the calculation of line, surface and volume integrals. For simplicity, the following restricts to three dimensions and orthogonal curvilinear coordinates. However, the same arguments apply for n-dimensional spaces. When the coordinate system is not orthogonal, there are some additional terms in the expressions. Simmonds, in his book on tensor analysis, quotes Albert Einstein saying The magic of this theory will hardly fail to impose itself on anybody who has truly understood it; it represents a genuine triumph of the method of absolute differential calculus, founded by Gauss, Riemann, Ricci, and Levi-Civita. 
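Continuing the polar-coordinate example above, the metric tensor and the Christoffel symbols of the second kind can be computed symbolically. The sketch below assumes SymPy and uses the standard formula Γ^k_ij = (1/2) g^kl (∂_i g_jl + ∂_j g_il − ∂_l g_ij); it reproduces g = diag(1, r2), Γ^r_θθ = −r and Γ^θ_rθ = 1/r, and is offered only as an illustration of the relations in this section.

```python
# Sketch (SymPy assumed): metric tensor and Christoffel symbols for polar coordinates.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
q = [r, th]
pos = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])

J = pos.jacobian(q)                       # columns are the natural basis vectors h_i
g = sp.simplify(J.T * J)                  # metric g_ij = h_i . h_j  ->  diag(1, r**2)
g_inv = g.inv()

def christoffel(k, i, j):
    """Gamma^k_ij = 1/2 * g^kl (d_i g_jl + d_j g_il - d_l g_ij), summed over l."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[k, l] *
        (sp.diff(g[j, l], q[i]) + sp.diff(g[i, l], q[j]) - sp.diff(g[i, j], q[l]))
        for l in range(2)))

print(g)                                  # Matrix([[1, 0], [0, r**2]])
print(christoffel(0, 1, 1))               # Gamma^r_(theta theta) = -r
print(christoffel(1, 0, 1))               # Gamma^theta_(r theta) = 1/r
```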
Vector and tensor calculus in general curvilinear coordinates is used in tensor analysis on four-dimensional curvilinear manifolds in general relativity, in the mechanics of curved shells, in examining the invariance properties of Maxwell's equations (which has been of interest in metamaterials), and in many other fields. Some useful relations in the calculus of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, Simmonds, Green and Zerna, Basar and Weichert, and Ciarlet. Let φ = φ(x) be a well-defined scalar field, v = v(x) a well-defined vector field, and λ1, λ2, ... be parameters of the coordinates. Geometric elements Integration Line, surface and volume integrals of scalar and vector fields in orthogonal curvilinear coordinates are built from the scale factors hi and the coordinate differentials dqi. Differentiation The expressions for the gradient, divergence, and Laplacian can be directly extended to n dimensions; however, the curl is only defined in 3D. The vector field bi is tangent to the qi coordinate curve and forms a natural basis at each point on the curve. This basis, as discussed at the beginning of this article, is also called the covariant curvilinear basis. We can also define a reciprocal basis, or contravariant curvilinear basis, bi. All the algebraic relations between the basis vectors, as discussed in the section on tensor algebra, apply for the natural basis and its reciprocal at each point x. The differentiation expressions cover the gradient, divergence, Laplacian and curl of scalar, vector and second-order tensor fields. The divergence of a second-order tensor field is defined with the help of an arbitrary constant vector a and is then written out in curvilinear coordinates; for the Laplacian of a vector field, the first equality holds in 3D only and the second in Cartesian components only; the curl is defined for vector fields in 3D only, in terms of the Levi-Civita symbol (see Curl of a tensor field for the tensor case). Fictitious forces in general curvilinear coordinates By definition, if a particle with no forces acting on it has its position expressed in an inertial coordinate system, (x1, x2, x3, t), then it will have no acceleration (d2xj/dt2 = 0). In this context, a coordinate system can fail to be "inertial" either due to non-straight time axis or non-straight space axes (or both). In other words, the basis vectors of the coordinates may vary in time at fixed positions, or they may vary with position at fixed times, or both. When equations of motion are expressed in terms of any non-inertial coordinate system (in this sense), extra terms appear, involving the Christoffel symbols. Strictly speaking, these terms represent components of the absolute acceleration (in classical mechanics), but we may also choose to continue to regard d2xj/dt2 as the acceleration (as if the coordinates were inertial) and treat the extra terms as if they were forces, in which case they are called fictitious forces. The component of any such fictitious force normal to the path of the particle and in the plane of the path's curvature is then called centrifugal force. This more general context makes clear the correspondence between the concepts of centrifugal force in rotating coordinate systems and in stationary curvilinear coordinate systems. (Both of these concepts appear frequently in the literature.)
For a simple example, consider a particle of mass m moving in a circle of radius r with angular speed w relative to a system of polar coordinates rotating with angular speed W. The radial equation of motion is mr” = Fr + mr(w + W)2. Thus the centrifugal force is mr times the square of the absolute rotational speed A = w + W of the particle. If we choose a coordinate system rotating at the speed of the particle, then W = A and w = 0, in which case the centrifugal force is mrA2, whereas if we choose a stationary coordinate system we have W = 0 and w = A, in which case the centrifugal force is again mrA2. The reason for this equality of results is that in both cases the basis vectors at the particle's location are changing in time in exactly the same way. Hence these are really just two different ways of describing exactly the same thing, one description being in terms of rotating coordinates and the other being in terms of stationary curvilinear coordinates, both of which are non-inertial according to the more abstract meaning of that term. When describing general motion, the actual forces acting on a particle are often referred to the instantaneous osculating circle tangent to the path of motion, and this circle in the general case is not centered at a fixed location, and so the decomposition into centrifugal and Coriolis components is constantly changing. This is true regardless of whether the motion is described in terms of stationary or rotating coordinates. See also Covariance and contravariance Introduction to the mathematics of general relativity Special cases: Orthogonal coordinates Skew coordinates Tensors in curvilinear coordinates Frenet–Serret formulas Covariant derivative Tensor derivative (continuum mechanics) Curvilinear perspective Del in cylindrical and spherical coordinates References Further reading External links Planetmath.org Derivation of Unit vectors in curvilinear coordinates MathWorld's page on Curvilinear Coordinates Prof. R. Brannon's E-Book on Curvilinear Coordinates Wikiversity:Introduction to Elasticity/Tensors#The divergence of a tensor field – Wikiversity, Introduction to Elasticity/Tensors. Coordinate systems 3
Curvilinear coordinates
[ "Mathematics", "Engineering" ]
5,068
[ "Tensors", "Metric tensors", "Coordinate systems" ]
755,400
https://en.wikipedia.org/wiki/Orthogonal%20basis
In mathematics, particularly linear algebra, an orthogonal basis for an inner product space is a basis whose vectors are mutually orthogonal. If the vectors of an orthogonal basis are normalized, the resulting basis is an orthonormal basis. As coordinates Any orthogonal basis can be used to define a system of orthogonal coordinates. Orthogonal (not necessarily orthonormal) bases are important due to their appearance from curvilinear orthogonal coordinates in Euclidean spaces, as well as in Riemannian and pseudo-Riemannian manifolds. In functional analysis In functional analysis, an orthogonal basis is any basis obtained from an orthonormal basis (or Hilbert basis) using multiplication by nonzero scalars. Extensions Symmetric bilinear form The concept of an orthogonal basis is applicable to a vector space (over any field) equipped with a symmetric bilinear form ⟨·,·⟩, where orthogonality of two vectors v and w means ⟨v, w⟩ = 0. For an orthogonal basis {ek}: ⟨ej, ek⟩ = q(ek) if j = k and 0 otherwise, where q is a quadratic form associated with ⟨·,·⟩, q(v) = ⟨v, v⟩ (in an inner product space, q(v) = |v|2). Hence for an orthogonal basis {ek}, ⟨v, w⟩ = Σk q(ek) vk wk, where vk and wk are the components of v and w in the basis. Quadratic form The concept of orthogonality may be extended to a vector space over any field of characteristic not 2 equipped with a quadratic form q. Starting from the observation that, when the characteristic of the underlying field is not 2, the associated symmetric bilinear form ⟨v, w⟩ = (q(v + w) − q(v) − q(w))/2 allows vectors v and w to be defined as being orthogonal with respect to q when q(v + w) − q(v) − q(w) = 0. See also References External links Functional analysis Linear algebra
Orthogonal basis
[ "Mathematics" ]
306
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Linear algebra", "Algebra" ]
755,615
https://en.wikipedia.org/wiki/Poundal
The poundal (symbol: pdl) is a unit of force, introduced in 1877, that is part of the Absolute English system of units, which itself is a coherent subsystem of the foot–pound–second system. The poundal is defined as the force necessary to accelerate 1 pound-mass at 1 foot per second squared. 1 pdl = 0.138254954376 N exactly. Background English units require re-scaling of either force or mass to eliminate a numerical proportionality constant in the equation F = ma. The poundal represents one choice, which is to rescale units of force. Since a pound of force (pound force) accelerates a pound of mass (pound mass) at 32.174 049 ft/s2 (9.80665 m/s2; the acceleration of gravity, g), we can scale down the unit of force to compensate, giving us one that accelerates 1 pound mass at 1 ft/s2 rather than at 32.174 049 ft/s2; and that is the poundal, which is approximately 1/32.174 pound force (about 0.031 lbf). For example, a force of 1200 poundals is required to accelerate a person of 150 pounds mass at 8 feet per second squared: F = 150 lb × 8 ft/s2 = 1200 pdl. The poundal-as-force, pound-as-mass system is contrasted with an alternative system in which pounds are used as force (pounds-force), and instead, the mass unit is rescaled by a factor of roughly 32. That is, one pound-force will accelerate one pound-mass at 32 feet per second squared; we can scale up the unit of mass to compensate, which will be accelerated by 1 ft/s2 (rather than 32 ft/s2) given the application of one pound force; this gives us a unit of mass called the slug, which is about 32 pounds mass. Using this system (slugs and pounds-force), the above expression could be expressed as: F = (150/32.174 049) slug × 8 ft/s2 ≈ 37.3 lbf. Note: Slugs (about 32.174 lb) and poundals (about 1/32.174 lbf) are never used in the same system, since they are opposite solutions of the same problem. Rather than changing either force or mass units, one may choose to express acceleration in units of the acceleration due to Earth's gravity (called g). In this case, we can keep both pounds-mass and pounds-force, such that applying one pound force to one pound mass accelerates it at one unit of acceleration (g): for the example above, F = 150 lb × (8/32.174 049) g ≈ 37.3 lbf. Expressions derived using poundals for force and lb for mass (or lbf for force and slugs for mass) have the advantage of not being tied to conditions on the surface of the earth. Specifically, computing force on the moon or in deep space in poundals (lb⋅ft/s2) or in pounds-force (slug⋅ft/s2) avoids the constant tied to the acceleration of gravity on earth. Conversion See also Slug (unit) References Obert, Edward F., “Thermodynamics”, McGraw-Hill Book Company Inc., New York 1948; Chapter I, Survey of Dimensions and Units, pages 1–24. Units of force Imperial units Customary units of measurement in the United States
Poundal
[ "Physics", "Mathematics" ]
626
[ "Force", "Physical quantities", "Quantity", "Units of force", "Units of measurement" ]