3,406,245
https://en.wikipedia.org/wiki/Routhian%20mechanics
In classical mechanics, Routh's procedure or Routhian mechanics is a hybrid formulation of Lagrangian mechanics and Hamiltonian mechanics developed by Edward John Routh. Correspondingly, the Routhian is the function which replaces both the Lagrangian and Hamiltonian functions. Although Routhian mechanics is equivalent to Lagrangian mechanics and Hamiltonian mechanics, and introduces no new physics, it offers an alternative way to solve mechanical problems.

Definitions

The Routhian, like the Hamiltonian, can be obtained from a Legendre transform of the Lagrangian, and has a similar mathematical form to the Hamiltonian, but is not exactly the same. The difference between the Lagrangian, Hamiltonian, and Routhian functions is their variables. For a given set of generalized coordinates representing the degrees of freedom in the system, the Lagrangian is a function of the coordinates and velocities, while the Hamiltonian is a function of the coordinates and momenta. The Routhian differs from these functions in that some coordinates are chosen to have corresponding generalized velocities, the rest to have corresponding generalized momenta. This choice is arbitrary, and can be made to simplify the problem. It also has the consequence that the Routhian equations are exactly the Hamiltonian equations for some coordinates and corresponding momenta, and the Lagrangian equations for the rest of the coordinates and their velocities. In each case the Lagrangian and Hamiltonian functions are replaced by a single function, the Routhian. The full set thus has the advantages of both sets of equations, with the convenience of splitting one set of coordinates to the Hamilton equations, and the rest to the Lagrangian equations.

In the case of Lagrangian mechanics, the generalized coordinates $q_1, q_2, \ldots$ and the corresponding velocities $\dot{q}_1, \dot{q}_2, \ldots$, and possibly time $t$, enter the Lagrangian,

$$L(q_1, q_2, \ldots, \dot{q}_1, \dot{q}_2, \ldots, t)\,,$$

where the overdots denote time derivatives.

In Hamiltonian mechanics, the generalized coordinates and the corresponding generalized momenta, and possibly time, enter the Hamiltonian,

$$H(q_1, q_2, \ldots, p_1, p_2, \ldots, t) = \sum_i \dot{q}_i p_i - L\,, \qquad p_i = \frac{\partial L}{\partial \dot{q}_i}\,,$$

where the second equation is the definition of the generalized momentum $p_i$ corresponding to the coordinate $q_i$ (partial derivatives are denoted using $\partial$). The velocities $\dot{q}_i$ are expressed as functions of their corresponding momenta by inverting their defining relation. In this context, $p_i$ is said to be the momentum "canonically conjugate" to $q_i$.

The Routhian is intermediate between $L$ and $H$; some coordinates $q_1, \ldots, q_n$ are chosen to have corresponding generalized momenta $p_1, \ldots, p_n$, the rest of the coordinates $\zeta_1, \ldots, \zeta_s$ to have generalized velocities $\dot{\zeta}_1, \ldots, \dot{\zeta}_s$, and time may appear explicitly:

$$R(q_1, \ldots, q_n, \zeta_1, \ldots, \zeta_s, p_1, \ldots, p_n, \dot{\zeta}_1, \ldots, \dot{\zeta}_s, t) = \sum_{i=1}^{n} p_i \dot{q}_i - L\,,$$

where again the generalized velocity $\dot{q}_i$ is to be expressed as a function of the generalized momentum $p_i$ via its defining relation. The choice of which $n$ coordinates are to have corresponding momenta, out of the $n + s$ coordinates, is arbitrary. The above is used by Landau and Lifshitz, and Goldstein. Some authors may define the Routhian to be the negative of the above definition.
Given the length of the general definition, a more compact notation is to use boldface for tuples (or vectors) of the variables, thus $\mathbf{q} = (q_1, \ldots, q_n)$, $\boldsymbol{\zeta} = (\zeta_1, \ldots, \zeta_s)$, $\mathbf{p} = (p_1, \ldots, p_n)$, and $\dot{\boldsymbol{\zeta}} = (\dot{\zeta}_1, \ldots, \dot{\zeta}_s)$, so that

$$R(\mathbf{q}, \boldsymbol{\zeta}, \mathbf{p}, \dot{\boldsymbol{\zeta}}, t) = \mathbf{p} \cdot \dot{\mathbf{q}} - L\,,$$

where · is the dot product defined on the tuples, for the specific example appearing here

$$\mathbf{p} \cdot \dot{\mathbf{q}} = \sum_{i=1}^{n} p_i \dot{q}_i\,.$$

Equations of motion

For reference, the Euler-Lagrange equations for $n$ degrees of freedom are a set of $n$ coupled second-order ordinary differential equations in the coordinates,

$$\frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i} = \frac{\partial L}{\partial q_i}\,,$$

and the Hamiltonian equations for $n$ degrees of freedom are a set of $2n$ coupled first-order ordinary differential equations in the coordinates and momenta,

$$\dot{q}_i = \frac{\partial H}{\partial p_i}\,, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i}\,.$$

Below, the Routhian equations of motion are obtained in two ways; in the process other useful derivatives are found that can be used elsewhere.

Two degrees of freedom

Consider the case of a system with two degrees of freedom, $q$ and $\zeta$, with generalized velocities $\dot{q}$ and $\dot{\zeta}$, and the Lagrangian is time-dependent. (The generalization to any number of degrees of freedom follows exactly the same procedure as with two.) The Lagrangian of the system will have the form

$$L(q, \zeta, \dot{q}, \dot{\zeta}, t)\,.$$

The differential of $L$ is

$$dL = \frac{\partial L}{\partial q} dq + \frac{\partial L}{\partial \zeta} d\zeta + \frac{\partial L}{\partial \dot{q}} d\dot{q} + \frac{\partial L}{\partial \dot{\zeta}} d\dot{\zeta} + \frac{\partial L}{\partial t} dt\,.$$

Now change variables, from the set $(q, \zeta, \dot{q}, \dot{\zeta})$ to $(q, \zeta, p, \dot{\zeta})$, simply switching the velocity $\dot{q}$ to the momentum $p$. This change of variables in the differentials is the Legendre transformation. The differential of the new function to replace $L$ will be a sum of differentials in $dq$, $d\zeta$, $dp$, $d\dot{\zeta}$, and $dt$. Using the definition of generalized momentum and Lagrange's equation for the coordinate $q$,

$$p = \frac{\partial L}{\partial \dot{q}}\,, \qquad \dot{p} = \frac{\partial L}{\partial q}\,,$$

we have

$$dL = \dot{p}\, dq + \frac{\partial L}{\partial \zeta} d\zeta + p\, d\dot{q} + \frac{\partial L}{\partial \dot{\zeta}} d\dot{\zeta} + \frac{\partial L}{\partial t} dt\,,$$

and to replace $p\, d\dot{q}$ by $\dot{q}\, dp$, recall the product rule for differentials, and substitute

$$p\, d\dot{q} = d(p\dot{q}) - \dot{q}\, dp$$

to obtain the differential of a new function in terms of the new set of variables:

$$d(L - p\dot{q}) = \dot{p}\, dq + \frac{\partial L}{\partial \zeta} d\zeta - \dot{q}\, dp + \frac{\partial L}{\partial \dot{\zeta}} d\dot{\zeta} + \frac{\partial L}{\partial t} dt\,.$$

Introducing the Routhian

$$R(q, \zeta, p, \dot{\zeta}, t) = p\dot{q} - L\,,$$

where again the velocity $\dot{q}$ is a function of the momentum $p$, we have

$$dR = -\dot{p}\, dq - \frac{\partial L}{\partial \zeta} d\zeta + \dot{q}\, dp - \frac{\partial L}{\partial \dot{\zeta}} d\dot{\zeta} - \frac{\partial L}{\partial t} dt\,,$$

but from the above definition, the differential of the Routhian is

$$dR = \frac{\partial R}{\partial q} dq + \frac{\partial R}{\partial \zeta} d\zeta + \frac{\partial R}{\partial p} dp + \frac{\partial R}{\partial \dot{\zeta}} d\dot{\zeta} + \frac{\partial R}{\partial t} dt\,.$$

Comparing the coefficients of the differentials $dq$, $d\zeta$, $dp$, $d\dot{\zeta}$, and $dt$, the results are Hamilton's equations for the coordinate $q$,

$$\dot{q} = \frac{\partial R}{\partial p}\,, \qquad \dot{p} = -\frac{\partial R}{\partial q}\,,$$

and Lagrange's equation for the coordinate $\zeta$,

$$\frac{d}{dt} \frac{\partial R}{\partial \dot{\zeta}} = \frac{\partial R}{\partial \zeta}\,,$$

which follow from

$$\frac{\partial R}{\partial \zeta} = -\frac{\partial L}{\partial \zeta}\,, \qquad \frac{\partial R}{\partial \dot{\zeta}} = -\frac{\partial L}{\partial \dot{\zeta}}\,,$$

and taking the total time derivative of the second equation and equating to the first. Notice the Routhian replaces the Hamiltonian and Lagrangian functions in all the equations of motion. The remaining equation states that the partial time derivatives of $L$ and $R$ are negatives of each other,

$$\frac{\partial R}{\partial t} = -\frac{\partial L}{\partial t}\,.$$

Any number of degrees of freedom

For $n + s$ coordinates as defined above, with Routhian

$$R(q_1, \ldots, q_n, \zeta_1, \ldots, \zeta_s, p_1, \ldots, p_n, \dot{\zeta}_1, \ldots, \dot{\zeta}_s, t) = \sum_{i=1}^{n} p_i \dot{q}_i - L\,,$$

the equations of motion can be derived by a Legendre transformation of this Routhian as in the previous section, but another way is to simply take the partial derivatives of $R$ with respect to the coordinates $q_i$ and $\zeta_j$, momenta $p_i$, and velocities $\dot{\zeta}_j$, where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, s$. The derivatives are

$$\frac{\partial R}{\partial p_i} = \dot{q}_i\,, \quad \frac{\partial R}{\partial q_i} = -\dot{p}_i\,, \quad \frac{\partial R}{\partial \zeta_j} = -\frac{\partial L}{\partial \zeta_j}\,, \quad \frac{\partial R}{\partial \dot{\zeta}_j} = -\frac{\partial L}{\partial \dot{\zeta}_j}\,, \quad \frac{\partial R}{\partial t} = -\frac{\partial L}{\partial t}\,.$$

The first two are identically the Hamiltonian equations. Equating the total time derivative of the fourth set of equations with the third (for each value of $j$) gives the Lagrangian equations. The fifth is just the same relation between time partial derivatives as before. To summarize, the Routhian equations of motion are

$$\dot{q}_i = \frac{\partial R}{\partial p_i}\,, \qquad \dot{p}_i = -\frac{\partial R}{\partial q_i}\,, \qquad \frac{d}{dt} \frac{\partial R}{\partial \dot{\zeta}_j} = \frac{\partial R}{\partial \zeta_j}\,.$$

The total number of equations is $2n + s$: there are $2n$ Hamiltonian equations plus $s$ Lagrange equations.

Energy

Since the Lagrangian has the same units as energy, the units of the Routhian are also energy. In SI units this is the joule. Taking the total time derivative of the Lagrangian leads to the general result

$$\frac{\partial L}{\partial t} = \frac{d}{dt} \left( L - \sum_{i=1}^{n} \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - \sum_{j=1}^{s} \dot{\zeta}_j \frac{\partial L}{\partial \dot{\zeta}_j} \right).$$

If the Lagrangian is independent of time, the partial time derivative of the Lagrangian is zero, $\partial L/\partial t = 0$, so the quantity under the total time derivative in brackets must be a constant; it is the total energy of the system,

$$E = \sum_{i=1}^{n} \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} + \sum_{j=1}^{s} \dot{\zeta}_j \frac{\partial L}{\partial \dot{\zeta}_j} - L$$

(if there are external fields interacting with the constituents of the system, they can vary throughout space but not time).
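As a worked illustration of the two-degree-of-freedom construction above (an added example; the toy Lagrangian is an assumption, not taken from the article), consider a particle whose potential depends on only one of its two coordinates:

$$L = \frac{m}{2}\left(\dot{q}^2 + \dot{\zeta}^2\right) - V(\zeta)\,, \qquad p = \frac{\partial L}{\partial \dot{q}} = m\dot{q}\,, \qquad R = p\dot{q} - L = \frac{p^2}{2m} - \frac{m}{2}\dot{\zeta}^2 + V(\zeta)\,.$$

Hamilton's equations give $\dot{q} = \partial R/\partial p = p/m$ and $\dot{p} = -\partial R/\partial q = 0$, so $p$ is constant, while the Lagrange equation $\frac{d}{dt}\frac{\partial R}{\partial \dot{\zeta}} = \frac{\partial R}{\partial \zeta}$ gives $-m\ddot{\zeta} = V'(\zeta)$, i.e. $m\ddot{\zeta} = -V'(\zeta)$, exactly as expected from Newton's second law.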
This expression requires the partial derivatives of $L$ with respect to all the velocities $\dot{q}_i$ and $\dot{\zeta}_j$. Under the same condition of $R$ being time-independent, the energy in terms of the Routhian is a little simpler: substituting the definition of $R$ and its partial derivatives with respect to the velocities $\dot{\zeta}_j$,

$$E = R - \sum_{j=1}^{s} \dot{\zeta}_j \frac{\partial R}{\partial \dot{\zeta}_j}\,.$$

Notice only the partial derivatives of $R$ with respect to the velocities $\dot{\zeta}_j$ are needed. In the case that $s = 0$ and the Routhian is explicitly time-independent, then $E = R$; that is, the Routhian equals the energy of the system. The same expression for $R$ when $s = 0$ is also the Hamiltonian, so in all $E = R = H$. If the Routhian has explicit time dependence, the total energy of the system is not constant. The general result is

$$\frac{\partial R}{\partial t} = \frac{d}{dt} \left( R - \sum_{j=1}^{s} \dot{\zeta}_j \frac{\partial R}{\partial \dot{\zeta}_j} \right),$$

which can be derived from the total time derivative of $R$ in the same way as for $L$.

Cyclic coordinates

Often the Routhian approach may offer no advantage, but one notable case where it is useful is when a system has cyclic coordinates (also called "ignorable coordinates"), by definition those coordinates which do not appear in the original Lagrangian. The Lagrangian equations are powerful results, used frequently in theory and practice, since the equations of motion in the coordinates are easy to set up. However, if cyclic coordinates occur there will still be equations to solve for all the coordinates, including the cyclic coordinates despite their absence in the Lagrangian. The Hamiltonian equations are useful theoretical results, but less useful in practice because coordinates and momenta are related together in the solutions; after solving the equations the coordinates and momenta must be eliminated from each other. Nevertheless, the Hamiltonian equations are perfectly suited to cyclic coordinates, because the equations in the cyclic coordinates trivially vanish, leaving only the equations in the non-cyclic coordinates.

The Routhian approach has the best of both approaches, because cyclic coordinates can be split off to the Hamiltonian equations and eliminated, leaving behind the non-cyclic coordinates to be solved from the Lagrangian equations. Overall fewer equations need to be solved compared to the Lagrangian approach. The Routhian formulation is useful for systems with cyclic coordinates, because by definition those coordinates do not enter $L$, and hence $R$. The corresponding partial derivatives of $L$ and $R$ with respect to those coordinates are zero, which equates to the corresponding generalized momenta reducing to constants. To make this concrete, if the $q_i$ are all cyclic coordinates, and the $\zeta_j$ are all non-cyclic, then

$$\frac{\partial L}{\partial q_i} = 0 \quad \Rightarrow \quad \dot{p}_i = -\frac{\partial R}{\partial q_i} = 0 \quad \Rightarrow \quad p_i = \alpha_i = \text{constant}\,,$$

where the $\alpha_i$ are constants. With these constants substituted into the Routhian, $R$ is a function of only the non-cyclic coordinates and velocities (and in general time also),

$$R(\zeta_1, \ldots, \zeta_s, \alpha_1, \ldots, \alpha_n, \dot{\zeta}_1, \ldots, \dot{\zeta}_s, t)\,.$$

The $2n$ Hamiltonian equations in the cyclic coordinates automatically vanish, and the $s$ Lagrangian equations are in the non-cyclic coordinates,

$$\frac{d}{dt} \frac{\partial R}{\partial \dot{\zeta}_j} = \frac{\partial R}{\partial \zeta_j}\,.$$

Thus the problem has been reduced to solving the Lagrangian equations in the non-cyclic coordinates, with the advantage of the Hamiltonian equations cleanly removing the cyclic coordinates. Using those solutions, the equations for $\dot{q}_i$ can be integrated to compute $q_i(t)$. If we are interested in how the cyclic coordinates change with time, the equations for the generalized velocities corresponding to the cyclic coordinates can be integrated.

Examples

Routh's procedure does not guarantee the equations of motion will be simple; however, it will lead to fewer equations.
Central potential in spherical coordinates

One general class of mechanical systems with cyclic coordinates are those with central potentials, because potentials of this form only have dependence on radial separations and no dependence on angles. Consider a particle of mass $m$ under the influence of a central potential $V(r)$ in spherical polar coordinates $(r, \theta, \varphi)$:

$$L(r, \dot{r}, \theta, \dot{\theta}, \dot{\varphi}) = \frac{m}{2}\left(\dot{r}^2 + r^2\dot{\theta}^2 + r^2\sin^2\theta\, \dot{\varphi}^2\right) - V(r)\,.$$

Notice $\varphi$ is cyclic, because it does not appear in the Lagrangian. The momentum conjugate to $\varphi$ is the constant

$$p_\varphi = \frac{\partial L}{\partial \dot{\varphi}} = m r^2 \sin^2\theta\, \dot{\varphi}\,,$$

in which $r$ and $\dot{\varphi}$ can vary with time, but the angular momentum $p_\varphi$ is constant. The Routhian can be taken to be

$$R = p_\varphi \dot{\varphi} - L = \frac{p_\varphi^2}{2m r^2 \sin^2\theta} - \frac{m}{2}\dot{r}^2 - \frac{m}{2} r^2 \dot{\theta}^2 + V(r)\,.$$

We can solve for $r$ and $\theta$ using Lagrange's equations, and do not need to solve for $\varphi$ since it is eliminated by Hamilton's equations. The $r$ equation is

$$m\ddot{r} - m r \dot{\theta}^2 = \frac{p_\varphi^2}{m r^3 \sin^2\theta} - \frac{dV}{dr}\,,$$

and the $\theta$ equation is

$$m r^2 \ddot{\theta} + 2 m r \dot{r} \dot{\theta} = \frac{p_\varphi^2 \cos\theta}{m r^2 \sin^3\theta}\,.$$

The Routhian approach has obtained two coupled nonlinear equations. By contrast, the Lagrangian approach leads to three nonlinear coupled equations, mixing in the first and second time derivatives of $\varphi$ in all of them, despite its absence from the Lagrangian. The $r$ equation is

$$m\ddot{r} = m r \dot{\theta}^2 + m r \sin^2\theta\, \dot{\varphi}^2 - \frac{dV}{dr}\,,$$

the $\theta$ equation is

$$r\ddot{\theta} + 2\dot{r}\dot{\theta} = r \sin\theta \cos\theta\, \dot{\varphi}^2\,,$$

and the $\varphi$ equation is

$$r \sin\theta\, \ddot{\varphi} + 2\dot{r} \sin\theta\, \dot{\varphi} + 2 r \cos\theta\, \dot{\theta} \dot{\varphi} = 0\,.$$

Symmetric mechanical systems

Spherical pendulum

Consider the spherical pendulum, a mass $m$ (known as a "pendulum bob") attached to a rigid rod of length $\ell$ of negligible mass, subject to a local gravitational field $g$. The system rotates with angular velocity $\dot{\varphi}$ which is not constant. The angle between the rod and the vertical is $\theta$ and is not constant. The Lagrangian is

$$L = \frac{m\ell^2}{2}\left(\dot{\theta}^2 + \sin^2\theta\, \dot{\varphi}^2\right) + mg\ell\cos\theta\,,$$

and $\varphi$ is the cyclic coordinate for the system with constant momentum

$$p_\varphi = m\ell^2 \sin^2\theta\, \dot{\varphi}\,,$$

which again is physically the angular momentum of the system about the vertical. The angle $\theta$ and angular velocity $\dot{\varphi}$ vary with time, but the angular momentum is constant. The Routhian is

$$R = p_\varphi \dot{\varphi} - L = \frac{p_\varphi^2}{2m\ell^2\sin^2\theta} - \frac{m\ell^2}{2}\dot{\theta}^2 - mg\ell\cos\theta\,.$$

The $\theta$ equation is found from the Lagrangian equations,

$$m\ell^2\ddot{\theta} = \frac{p_\varphi^2 \cos\theta}{m\ell^2 \sin^3\theta} - mg\ell\sin\theta\,,$$

or, simplifying by introducing the constants

$$a = \frac{p_\varphi^2}{m^2\ell^4}\,, \qquad b = \frac{g}{\ell}\,,$$

gives

$$\ddot{\theta} = \frac{a\cos\theta}{\sin^3\theta} - b\sin\theta\,.$$

This equation resembles the simple nonlinear pendulum equation, because it can swing through the vertical axis, with an additional term to account for the rotation about the vertical axis (the constant $a$ is related to the angular momentum $p_\varphi$).

Applying the Lagrangian approach, there are two nonlinear coupled equations to solve. The $\theta$ equation is

$$\ddot{\theta} = \sin\theta\cos\theta\, \dot{\varphi}^2 - \frac{g}{\ell}\sin\theta\,,$$

and the $\varphi$ equation is

$$\sin\theta\, \ddot{\varphi} + 2\cos\theta\, \dot{\theta}\dot{\varphi} = 0\,.$$

Heavy symmetrical top

The heavy symmetrical top of mass $M$ has Lagrangian

$$L(\theta, \dot{\theta}, \dot{\psi}, \dot{\varphi}) = \frac{I_1}{2}\left(\dot{\theta}^2 + \sin^2\theta\, \dot{\varphi}^2\right) + \frac{I_3}{2}\left(\dot{\psi} + \cos\theta\, \dot{\varphi}\right)^2 - Mg\ell\cos\theta\,,$$

where $(\theta, \psi, \varphi)$ are the Euler angles: $\theta$ is the angle between the vertical $z$-axis and the top's $z'$-axis, $\psi$ is the rotation of the top about its own $z'$-axis, and $\varphi$ the azimuthal angle of the top's $z'$-axis around the vertical $z$-axis. The principal moments of inertia are $I_1$ about the top's own $x'$-axis, $I_2$ about the top's own $y'$-axis, and $I_3$ about the top's own $z'$-axis. Since the top is symmetric about its $z'$-axis, $I_1 = I_2$. Here the simple relation $V = Mg\ell\cos\theta$ for the local gravitational potential energy is used, where $g$ is the acceleration due to gravity and the centre of mass of the top is a distance $\ell$ from its tip along its $z'$-axis.

The angles $\psi$ and $\varphi$ are cyclic. The constant momenta are the angular momenta of the top about its axis and its precession about the vertical, respectively:

$$p_\psi = \frac{\partial L}{\partial \dot{\psi}} = I_3\left(\dot{\psi} + \cos\theta\, \dot{\varphi}\right), \qquad p_\varphi = \frac{\partial L}{\partial \dot{\varphi}} = I_1 \sin^2\theta\, \dot{\varphi} + p_\psi \cos\theta\,.$$

From these, eliminating $\dot{\psi}$, we have

$$\dot{\varphi} = \frac{p_\varphi - p_\psi\cos\theta}{I_1\sin^2\theta}\,,$$

and to eliminate $\dot{\varphi}$ in turn, substitute this result into $p_\psi$ and solve for $\dot{\psi}$ to find

$$\dot{\psi} = \frac{p_\psi}{I_3} - \cos\theta\left(\frac{p_\varphi - p_\psi\cos\theta}{I_1\sin^2\theta}\right).$$

The Routhian can be taken to be

$$R = p_\psi\dot{\psi} + p_\varphi\dot{\varphi} - L = \frac{p_\psi^2}{2I_3} + \frac{\left(p_\varphi - p_\psi\cos\theta\right)^2}{2I_1\sin^2\theta} - \frac{I_1}{2}\dot{\theta}^2 + Mg\ell\cos\theta\,.$$

The first term is constant, and can be ignored since only the derivatives of $R$ will enter the equations of motion. The simplified Routhian, without loss of information, is thus

$$R = \frac{\left(p_\varphi - p_\psi\cos\theta\right)^2}{2I_1\sin^2\theta} - \frac{I_1}{2}\dot{\theta}^2 + Mg\ell\cos\theta\,.$$

The equation of motion for $\theta$ is, by direct calculation,

$$I_1\ddot{\theta} = Mg\ell\sin\theta + \frac{\left(p_\varphi - p_\psi\cos\theta\right)^2\cos\theta}{I_1\sin^3\theta} - \frac{\left(p_\varphi - p_\psi\cos\theta\right)p_\psi}{I_1\sin\theta}\,,$$

or, by introducing the constants

$$a = \frac{p_\psi}{I_1}\,, \qquad b = \frac{p_\varphi}{I_1}\,, \qquad c = \frac{Mg\ell}{I_1}\,,$$

a simpler form of the equation is obtained:

$$\ddot{\theta} = c\sin\theta + \frac{(b - a\cos\theta)(b\cos\theta - a)}{\sin^3\theta}\,.$$

Although the equation is highly nonlinear, there is only one equation to solve for; it was obtained directly, and the cyclic coordinates are not involved.
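Reduced one-coordinate equations like $\ddot{\theta} = a\cos\theta/\sin^3\theta - b\sin\theta$ lend themselves to direct numerical integration. The following is a minimal sketch (added here for illustration; the parameter values are assumptions, not data from the article) that integrates the spherical-pendulum equation with SciPy and then recovers the cyclic coordinate $\varphi$ from the conserved momentum afterwards:

```python
# Integrate the reduced spherical-pendulum equation from the Routhian:
#   theta'' = a*cos(theta)/sin(theta)**3 - b*sin(theta),
# with a = p_phi**2/(m**2 l**4) and b = g/l. Illustrative parameters only.
import numpy as np
from scipy.integrate import solve_ivp

m, l, g = 1.0, 1.0, 9.81           # mass (kg), rod length (m), gravity (m/s^2)
p_phi = 0.5                        # conserved angular momentum about the vertical
a, b = p_phi**2 / (m**2 * l**4), g / l

def rhs(t, y):
    theta, theta_dot = y
    return [theta_dot, a * np.cos(theta) / np.sin(theta)**3 - b * np.sin(theta)]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True, rtol=1e-9)

# Recover the cyclic coordinate: phi_dot = p_phi / (m l^2 sin^2 theta).
# A crude cumulative sum suffices for this sketch.
t = np.linspace(0.0, 10.0, 1000)
theta = sol.sol(t)[0]
phi = np.cumsum(p_phi / (m * l**2 * np.sin(theta)**2)) * (t[1] - t[0])
```

The same two-step pattern, solving the single non-cyclic equation first and then integrating the conserved-momentum relations for the cyclic angles, applies equally to the heavy-top equation above.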
By contrast, the Lagrangian approach leads to three nonlinear coupled equations to solve, despite the absence of the coordinates $\psi$ and $\varphi$ in the Lagrangian. The $\theta$ equation is

$$I_1\ddot{\theta} = I_1\sin\theta\cos\theta\, \dot{\varphi}^2 - I_3\sin\theta\, \dot{\varphi}\left(\dot{\psi} + \cos\theta\, \dot{\varphi}\right) + Mg\ell\sin\theta\,,$$

the $\psi$ equation is

$$\frac{d}{dt}\left[I_3\left(\dot{\psi} + \cos\theta\, \dot{\varphi}\right)\right] = 0\,,$$

and the $\varphi$ equation is

$$\frac{d}{dt}\left[I_1\sin^2\theta\, \dot{\varphi} + I_3\cos\theta\left(\dot{\psi} + \cos\theta\, \dot{\varphi}\right)\right] = 0\,.$$

Velocity-dependent potentials

Classical charged particle in a uniform magnetic field

Consider a classical charged particle of mass $m$ and electric charge $q$ in a static (time-independent) uniform (constant throughout space) magnetic field $\mathbf{B}$. The Lagrangian for a charged particle in a general electromagnetic field given by the magnetic potential $\mathbf{A}$ and electric potential $\phi$ is

$$L = \frac{m}{2}\dot{\mathbf{r}}^2 + q\mathbf{A}\cdot\dot{\mathbf{r}} - q\phi\,.$$

It is convenient to use cylindrical coordinates $(r, \theta, z)$, so that

$$\mathbf{B} = B\hat{\mathbf{z}}\,, \qquad \dot{\mathbf{r}} = \dot{r}\hat{\mathbf{r}} + r\dot{\theta}\hat{\boldsymbol{\theta}} + \dot{z}\hat{\mathbf{z}}\,.$$

In this case of no electric field, the electric potential is zero, $\phi = 0$, and we can choose the axial gauge $\mathbf{A} = \tfrac{1}{2}Br\hat{\boldsymbol{\theta}}$ for the magnetic potential, and the Lagrangian is

$$L(r, \dot{r}, \dot{\theta}, \dot{z}) = \frac{m}{2}\left(\dot{r}^2 + r^2\dot{\theta}^2 + \dot{z}^2\right) + \frac{qBr^2\dot{\theta}}{2}\,.$$

Notice this potential has an effectively cylindrical symmetry (although it also has angular velocity dependence), since the only spatial dependence is on the radial length from an imaginary cylinder axis.

There are two cyclic coordinates, $\theta$ and $z$. The canonical momenta conjugate to $\theta$ and $z$ are the constants

$$p_\theta = \frac{\partial L}{\partial \dot{\theta}} = mr^2\dot{\theta} + \frac{qBr^2}{2}\,, \qquad p_z = \frac{\partial L}{\partial \dot{z}} = m\dot{z}\,,$$

so the velocities are

$$\dot{\theta} = \frac{1}{mr^2}\left(p_\theta - \frac{qBr^2}{2}\right), \qquad \dot{z} = \frac{p_z}{m}\,.$$

The angular momentum about the $z$-axis is not $p_\theta$, but the quantity $mr^2\dot{\theta}$, which is not conserved due to the contribution from the magnetic field. The canonical momentum $p_\theta$ is the conserved quantity. It is still the case that $p_z$ is the linear or translational momentum along the $z$-axis, which is also conserved. The radial component $r$ and angular velocity $\dot{\theta}$ can vary with time, but $p_\theta$ is constant, and since $p_z$ is constant it follows that $\dot{z}$ is constant. The Routhian can take the form

$$R = p_\theta\dot{\theta} + p_z\dot{z} - L = \frac{1}{2mr^2}\left(p_\theta - \frac{qBr^2}{2}\right)^2 + \frac{p_z^2}{2m} - \frac{m}{2}\dot{r}^2\,,$$

where in the last line the $p_z^2/2m$ term is a constant and can be ignored without loss of information. The Hamiltonian equations for $\theta$ and $z$ automatically vanish and do not need to be solved for. The Lagrangian equation in $r$ is, by direct calculation,

$$\frac{d}{dt}\frac{\partial R}{\partial \dot{r}} = \frac{\partial R}{\partial r} \quad \Rightarrow \quad -m\ddot{r} = -\frac{qB}{mr}\left(p_\theta - \frac{qBr^2}{2}\right) - \frac{1}{mr^3}\left(p_\theta - \frac{qBr^2}{2}\right)^2,$$

which after collecting terms is

$$m\ddot{r} = \frac{p_\theta^2}{mr^3} - \frac{q^2B^2}{4m}r\,,$$

and simplifying further by introducing the constants

$$a = \frac{p_\theta^2}{m^2}\,, \qquad b = \left(\frac{qB}{2m}\right)^2,$$

the differential equation is

$$\ddot{r} = \frac{a}{r^3} - br\,.$$

To see how $z$ changes with time, integrate the momentum expression for $p_z$ above:

$$z = \frac{p_z}{m}t + c_z\,,$$

where $c_z$ is an arbitrary constant, the initial value of $z$ to be specified in the initial conditions.

The motion of the particle in this system is helicoidal, with the axial motion uniform (constant) but the radial and angular components varying in a spiral according to the equation of motion derived above. The initial conditions on $r$, $\dot{r}$, $\theta$, $\dot{\theta}$, and $\dot{z}$ will determine if the trajectory of the particle has a constant or varying $r$. If initially $r$ is nonzero but $\dot{r} = 0$, while $\theta$ and $\dot{\theta}$ are arbitrary, then the initial velocity of the particle has no radial component; $r$ is constant, so the motion will be in a perfect helix. If $r$ is constant, the angular velocity is also constant according to the conserved $p_\theta$.

With the Lagrangian approach, the equation for $r$ would include $\dot{\theta}$, which has to be eliminated, and there would be equations for $\theta$ and $z$ to solve for. The $r$ equation is

$$m\ddot{r} = mr\dot{\theta}^2 + qBr\dot{\theta}\,,$$

the $\theta$ equation is

$$mr\ddot{\theta} + 2m\dot{r}\dot{\theta} + qB\dot{r} = 0\,,$$

and the $z$ equation is

$$m\ddot{z} = 0\,.$$

The $z$ equation is trivial to integrate, but the $r$ and $\theta$ equations are not; in any case the time derivatives are mixed in all the equations and must be eliminated.

See also

Calculus of variations
Phase space
Configuration space
Many-body problem
Rigid body mechanics

Footnotes

Notes

References

Classical mechanics Mathematical physics Applied mathematics
Routhian mechanics
[ "Physics", "Mathematics" ]
3,324
[ "Applied mathematics", "Theoretical physics", "Classical mechanics", "Mechanics", "Mathematical physics" ]
3,407,113
https://en.wikipedia.org/wiki/Measurement%20system%20analysis
A measurement system analysis (MSA) is a thorough assessment of a measurement process, and typically includes a specially designed experiment that seeks to identify the components of variation in that measurement process. Just as processes that produce a product may vary, the process of obtaining measurements and data may also have variation and produce incorrect results. A measurement systems analysis evaluates the test method, measuring instruments, and the entire process of obtaining measurements to ensure the integrity of data used for analysis (usually quality analysis) and to understand the implications of measurement error for decisions made about a product or process. Proper measurement system analysis is critical for producing a consistent product in manufacturing; a measurement system left uncontrolled can result in drift of key parameters and unusable final products. MSA is also an important element of Six Sigma methodology and of other quality management systems.

MSA analyzes the collection of equipment, operations, procedures, software and personnel that affects the assignment of a number to a measurement characteristic. A measurement system analysis considers the following:

Selecting the correct measurement and approach
Assessing the measuring device
Assessing procedures and operators
Assessing any measurement interactions
Calculating the measurement uncertainty of individual measurement devices and/or measurement systems

Common tools and techniques of measurement system analysis include: calibration studies, fixed effect ANOVA, components of variance, attribute gage study, gage R&R, ANOVA gage R&R, and destructive testing analysis. The tool selected is usually determined by characteristics of the measurement system itself. An introduction to MSA can be found in chapter 8 of Doug Montgomery's Quality Control book. These tools and techniques are also described in the books by Donald Wheeler and Kim Niles. Advanced procedures for designing MSA studies can be found in Burdick et al.

Factors affecting measurement systems include:

Equipment: measuring instrument, calibration, fixturing.
People: operators, training, education, skill, care.
Process: test method, specification.
Samples: materials, items to be tested (sometimes called "parts"), sampling plan, sample preparation.
Environment: temperature, humidity, conditioning, pre-conditioning.
Management: training programs, metrology system, support of people, support of quality management system.

These can be plotted in a "fishbone" Ishikawa diagram to help identify potential sources of measurement variation.

Goals

The goals of an MSA are:

Quantification of measurement uncertainty, including the accuracy, precision (repeatability and reproducibility), and the stability and linearity of these quantities over time and across the intended range of use of the measurement process.
Development of improvement plans, when needed.
Decision about whether a measurement process is adequate for a specific engineering/manufacturing application.
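To make the gage R&R technique mentioned above concrete, here is a minimal sketch (an added illustration, not taken from any cited manual or standard) of an ANOVA-style variance decomposition for a crossed parts-by-operators study; the data layout and values are hypothetical:

```python
# Minimal crossed gage R&R variance decomposition with numpy.
# data[i, j, k] = k-th repeat measurement of part i by operator j.
import numpy as np

rng = np.random.default_rng(0)
p, o, r = 5, 3, 2                       # parts, operators, repeats
data = rng.normal(10, 1, (p, o, r))     # stand-in for real measurements

grand = data.mean()
part_means = data.mean(axis=(1, 2))
oper_means = data.mean(axis=(0, 2))
cell_means = data.mean(axis=2)

# Mean squares from the standard two-factor crossed ANOVA
ms_part = o * r * ((part_means - grand) ** 2).sum() / (p - 1)
ms_oper = p * r * ((oper_means - grand) ** 2).sum() / (o - 1)
ms_po = r * ((cell_means - part_means[:, None]
              - oper_means[None, :] + grand) ** 2).sum() / ((p - 1) * (o - 1))
ms_rep = ((data - cell_means[..., None]) ** 2).sum() / (p * o * (r - 1))

# Variance components (negative estimates truncated to zero)
var_repeat = ms_rep                                  # repeatability (equipment)
var_oper = max((ms_oper - ms_po) / (p * r), 0.0)     # reproducibility (operator)
var_po = max((ms_po - ms_rep) / r, 0.0)              # operator-part interaction
var_rr = var_repeat + var_oper + var_po              # total gage R&R
var_part = max((ms_part - ms_po) / (o * r), 0.0)     # part-to-part variation

print(f"%GRR = {100 * (var_rr / (var_rr + var_part)) ** 0.5:.1f}% of total variation")
```

A low %GRR indicates that most observed variation comes from the parts themselves rather than from the measurement system.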
ASTM Procedures

The ASTM has several procedures for evaluating measurement systems and test methods, including:

ASTM E2782 - Standard Guide for Measurement System Analysis
ASTM D4356 - Standard Practice for Establishing Consistent Test Method Tolerances
ASTM E691 - Standard Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method
ASTM E1169 - Standard Guide for Conducting Ruggedness Tests
ASTM E1488 - Standard Guide for Statistical Procedures to Use in Developing and Applying Test Methods

ASME Procedures

The American Society of Mechanical Engineers (ASME) has several procedures and reports targeted at task-specific uncertainty budgeting and methods for utilizing those uncertainty estimates when evaluating the measurand for compliance to specification. They are:

B89.7.3.1 - 2001 Guidelines for Decision Rules: Considering Measurement Uncertainty in Determining Conformance to Specifications
B89.7.3.2 - 2007 Guidelines for the Evaluation of Dimensional Measurement Uncertainty (Technical Report)
B89.7.3.3 - 2002 Guidelines for Assessing the Reliability of Dimensional Measurement Uncertainty Statements

AIAG Procedures

The Automotive Industry Action Group (AIAG), a non-profit association of automotive companies, has documented a recommended measurement system analysis procedure in their MSA manual. This book is part of a series of inter-related manuals the AIAG controls and publishes, including:

The measurement system analysis (MSA) manual
The failure mode and effects analysis (FMEA) and Control Plan manual
The statistical process control (SPC) manual
The production part approval process (PPAP) manual

Note that the AIAG's website has a list of "errata sheets" for its publications.

See also

Measurement uncertainty
Round robin test
Verification and validation

References

Measurement
Measurement system analysis
[ "Physics", "Mathematics" ]
884
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
1,792,340
https://en.wikipedia.org/wiki/Stishovite
Stishovite is an extremely hard, dense tetragonal form (polymorph) of silicon dioxide. It is very rare on the Earth's surface; however, it may be a predominant form of silicon dioxide in the Earth, especially in the lower mantle. Stishovite was named after Sergey Stishov, the Russian high-pressure physicist who first synthesized the mineral in 1961. It was discovered in Meteor Crater in 1962 by Edward C. T. Chao.

Unlike other silica polymorphs, the crystal structure of stishovite resembles that of rutile (TiO2). The silicon in stishovite adopts an octahedral coordination geometry, being bound to six oxides. Similarly, the oxides are three-connected, unlike in low-pressure forms of SiO2. In most silicates, silicon is tetrahedral, being bound to four oxides. Stishovite was long considered the hardest known oxide (~30 GPa Vickers); however, boron suboxide was found in 2002 to be much harder. At normal temperature and pressure, stishovite is metastable. Stishovite can be separated from quartz by applying hydrogen fluoride (HF); unlike quartz, stishovite will not react.

Appearance

Large natural crystals of stishovite are extremely rare and are usually found as clasts of 1 to 2 mm in length. When found, they can be difficult to distinguish from regular quartz without laboratory analysis. It has a vitreous luster, is transparent (or translucent), and is extremely hard. Stishovite usually sits as small rounded gravels in a matrix of other minerals.

Synthesis

Until recently, the only known occurrences of stishovite in nature formed at the very high shock pressures (>100 kbar, or 10 GPa) and temperatures (>1200 °C) present during hypervelocity meteorite impact into quartz-bearing rock. Minute amounts of stishovite have been found within diamonds, and post-stishovite phases were identified within ultra-high-pressure mantle rocks. Stishovite may also be synthesized by duplicating these conditions in the laboratory, either isostatically or through shock (see shocked quartz). At 4.287 g/cm3, it is the second densest polymorph of silica, after seifertite. It has tetragonal crystal symmetry, space group P42/mnm, No. 136, Pearson symbol tP6.
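The quoted density follows directly from the rutile-type unit cell. As a back-of-the-envelope check (the cell parameters below, roughly a = 4.18 Å and c = 2.67 Å with Z = 2 formula units, are values recalled from the literature rather than taken from this article):

```python
# Density of stishovite from its tetragonal unit cell: rho = Z*M / (N_A * a^2 * c)
N_A = 6.02214076e23            # Avogadro constant, 1/mol
M_SiO2 = 60.08                 # molar mass of SiO2, g/mol
a, c, Z = 4.18e-8, 2.67e-8, 2  # cell edges in cm (assumed values), formula units

volume = a * a * c                       # tetragonal cell volume, cm^3
density = Z * M_SiO2 / (N_A * volume)    # g/cm^3
print(f"{density:.2f} g/cm^3")           # ~4.28, consistent with the quoted 4.287
```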
See also

Coesite, a related mineral
Thaumasite, another rare mineral with hexacoordinated octahedral silica

References

External links

Properties of stishovite
Stishovite's origin in meteor impacts

Superhard materials Impact event minerals Tetragonal minerals Minerals in space group 136 Silica polymorphs Soviet inventions
Stishovite
[ "Physics", "Materials_science" ]
590
[ "Silica polymorphs", "Polymorphism (materials science)", "Materials", "Superhard materials", "Matter" ]
1,792,505
https://en.wikipedia.org/wiki/Two-level%20game%20theory
Two-level game theory is a political model, derived from game theory, that illustrates the domestic-international interactions between states. It was originally introduced in 1988 by Robert D. Putnam in his publication "Diplomacy and Domestic Politics: The Logic of Two-Level Games".

Putnam had been involved in research around the G7 summits between 1976 and 1979. At the fourth summit, held in Bonn in 1978, he observed a qualitative shift in how the negotiations worked. He noted that attending countries agreed to adopt policies different from those they might have adopted in the absence of their international counterparts. However, the agreement was only viable due to strong domestic influence - within each participating government - in favour of implementing the agreement internationally. This culminated in international policy co-ordination as a result of the entanglement of international and domestic agendas.

The Model

The model views international negotiations between states as consisting of simultaneous negotiations at two levels:

Level 1: the international level (between governments), and
Level 2: the intranational level (domestic).

At the international level, the national government (i.e., the chief negotiator) seeks an agreement with an opposing country relating to topics of concern. At the domestic level, societal actors pressure the chief negotiator for favourable policies. The chief negotiator absorbs the concerns of societal actors and builds coalitions with them. Simultaneously, the chief negotiator seeks to satisfy those domestic concerns while minimising the impact of any contrary views from the opposing country.

Win-Sets

At the international level, countries will approach negotiations with a defined set of objectives. It is expected that the chief negotiators of both states arrive at a range of outcomes where their objectives overlap. However, before committing to this, the chief negotiator must seek approval from domestic actors. This ratification can take the form of formal voting requirements or informal methods, such as public opinion polls. Due to a potential difference in domestic concerns, the full range of agreement outcomes at the international level may not necessarily be approved. As such, the set of possible agreement outcomes at the international level that are accepted by domestic interest groups is defined as a state's "win-set". International agreements only occur when there is an overlap between the win-sets of the states involved in the international negotiations.

Win-set size plays an important role in determining the success of negotiations at the international level. Naturally, the larger the win-set, the more likely the win-sets will overlap, potentially leading to successful negotiations. Conversely, negotiations are more likely to fail when the opposing states' win-sets are smaller. The perceived win-set size, however, is just as important as the actual win-set size. If a state's win-set is perceived to be large, the opposing state will have greater bargaining power. Alternatively, if a state's win-set is perceived to be rather small, this can give it an advantage in negotiations, whereby it can influence the opposing state to concede more in order for negotiations to succeed.
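The overlap condition lends itself to a tiny formal sketch (a hypothetical toy model added here, not from Putnam): represent each state's win-set as an interval of ratifiable outcomes on a one-dimensional bargaining axis, and check whether the intervals intersect.

```python
# Toy win-set model: agreement is possible only where win-sets overlap.
from dataclasses import dataclass

@dataclass
class WinSet:
    low: float   # least favourable outcome domestic actors will still ratify
    high: float  # most favourable outcome realistically attainable

def agreement_zone(a: WinSet, b: WinSet):
    """Return the overlap of two win-sets, or None if negotiations fail."""
    lo, hi = max(a.low, b.low), min(a.high, b.high)
    return (lo, hi) if lo <= hi else None

# A larger win-set makes overlap, and thus agreement, more likely.
state_a = WinSet(0.2, 0.7)   # e.g. a tariff level acceptable at home
state_b = WinSet(0.5, 0.9)
print(agreement_zone(state_a, state_b))  # (0.5, 0.7): agreement possible
```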
Examples

Paris Agreement

In the context of climate change, all countries are negatively affected. But, when compared to the costs of steps taken to mitigate this, the majority of states benefit from the actions of a minority of large contributors. This lopsided ratio of costs to benefits creates an incentive for some states to neglect their responsibilities and free-ride on the actions taken by others. As a classic case of the Prisoner's Dilemma, states are therefore more incentivised to do nothing, rather than contributing to mitigating climate change. This unequal burden-sharing has led to varying conceptions between states of what is fair under the Paris Agreement, resulting in both small and large countries utilising their negotiating assets to arrive at an agreement. However, as with any two-level game, domestic forces influence a state's win-set, which affects the ability to negotiate an outcome at the international level. A recent example of this is the United States' withdrawal from the Paris Agreement, which was supported by many Republicans as well as domestic interest groups aligned with the first Trump Administration.

Falklands War

In the period leading to the Falklands War, Anglo-Argentine negotiations resulted in several tentative agreements. The failure of domestic political forces to ratify these agreements meant the win-sets of the two countries did not overlap.

References

Bibliography

Further reading

Dispute resolution Game theory International relations Negotiation
Two-level game theory
[ "Mathematics" ]
919
[ "Game theory" ]
1,792,627
https://en.wikipedia.org/wiki/Sandmeyer%20reaction
The Sandmeyer reaction is a chemical reaction used to synthesize aryl halides from aryl diazonium salts using copper salts as reagents or catalysts. It is an example of a radical-nucleophilic aromatic substitution. The Sandmeyer reaction provides a method through which one can perform unique transformations on benzene, such as halogenation, cyanation, trifluoromethylation, and hydroxylation.

The reaction was discovered in 1884 by Swiss chemist Traugott Sandmeyer, when he attempted to synthesize phenylacetylene from benzenediazonium chloride and copper(I) acetylide. Instead, the main product he isolated was chlorobenzene. In modern times, the Sandmeyer reaction refers to any method for substitution of an aromatic amino group via preparation of its diazonium salt followed by its displacement with a nucleophile in the presence of catalytic copper(I) salts. The most commonly employed Sandmeyer reactions are the chlorination, bromination, cyanation, and hydroxylation reactions using CuCl, CuBr, CuCN, and Cu2O, respectively. More recently, trifluoromethylation of diazonium salts has been developed and is referred to as a 'Sandmeyer-type' reaction. Diazonium salts also react with boronates, iodide, thiols, water, hypophosphorous acid and others, and fluorination can be carried out using tetrafluoroborate anions (Balz-Schiemann reaction). However, since these processes do not require a metal catalyst, they are not usually referred to as Sandmeyer reactions. In numerous variants that have been developed, other transition metal salts, including copper(II), iron(III) and cobalt(III), have also been employed. Due to its wide synthetic applicability, the Sandmeyer reaction, along with other transformations of diazonium compounds, is complementary to electrophilic aromatic substitution.

Reaction mechanism

The Sandmeyer reaction is an example of a radical-nucleophilic aromatic substitution (SRNAr). The radical mechanism of the Sandmeyer reaction is supported by the detection of biaryl byproducts. The substitution of the aromatic diazo group with a halogen or pseudohalogen is initiated by a single-electron-transfer mechanism catalyzed by copper(I) to form an aryl radical with loss of nitrogen gas. The substituted arene is possibly formed by direct transfer of Cl, Br, CN, or OH from a copper(II) species to the aryl radical, producing the substituted arene and regenerating the copper(I) catalyst. In an alternative proposal, a transient copper(III) intermediate, formed from coupling of the aryl radical with the copper(II) species, undergoes rapid reductive elimination to afford the product and regenerate copper(I). However, evidence for such an organocopper intermediate is weak and mostly circumstantial, and the exact pathway may depend on the substrate and reaction conditions.

Synthetic applications

Variations on the Sandmeyer reaction have been developed to fit multiple synthetic applications. These reactions typically proceed through the formation of an aryl diazonium salt followed by a reaction with a copper(I) salt to yield a substituted arene. There are many synthetic applications of the Sandmeyer reaction.

Halogenation

One of the most important uses of the Sandmeyer reaction is the formation of aryl halides. The solvent of choice for the synthesis of iodoarenes is diiodomethane, while for the synthesis of bromoarenes, bromoform is used. For the synthesis of chloroarenes, chloroform is the solvent of choice.
The synthesis of (+)-curcuphenol, a bioactive compound that displays antifungal and anticancer activity, employs the Sandmeyer reaction to substitute an amine group with a bromo group. One bromination protocol employs a Cu(I)/Cu(II) mixture, with additional amounts of the bidentate ligand phenanthroline and the phase-transfer catalyst dibenzo-18-crown-6, to convert an aryl diazonium tetrafluoroborate salt to an aryl bromide. The Balz-Schiemann reaction uses tetrafluoroborate and delivers the halide-substituted product, fluorobenzene, which is not obtained by the use of copper fluorides. This reaction displays motifs characteristic of the Sandmeyer reaction.

Cyanation

Another use of the Sandmeyer reaction is for cyanation, which allows for the formation of benzonitriles, an important class of organic compounds. A key intermediate in the synthesis of the antipsychotic drug Fluanxol is synthesized by a cyanation through the Sandmeyer reaction. The Sandmeyer reaction has also been employed in the synthesis of neoamphimedine, a compound that is suggested to target topoisomerase II as an anti-cancer drug.

Trifluoromethylation

It has been demonstrated that Sandmeyer-type reactions can be used to generate aryl compounds functionalized by trifluoromethyl substituent groups. This process of trifluoromethylation provides unique chemical properties with a wide variety of practical applications. In particular, pharmaceuticals with CF3 groups have enhanced metabolic stability, lipophilicity, and bioavailability. Sandmeyer-type trifluoromethylation reactions feature mild reaction conditions and greater functional group tolerance relative to earlier methods of trifluoromethylation.

Hydroxylation

The Sandmeyer reaction can also be used to convert aryl amines to phenols, proceeding through the formation of an aryl diazonium salt. In the presence of a copper catalyst, such as copper(I) oxide, and an excess of copper(II) nitrate, this reaction takes place readily at room temperature in neutral water. This is in contrast to the classical procedure (known by its German name), which calls for boiling the diazonium salt in aqueous acid, a process that is believed to involve the aryl cation instead of the aryl radical and is known to generate other nucleophilic addition side products in addition to the desired hydroxylation product.

References

External links

http://www.name-reaction.com/sandmeyer-reaction

Substitution reactions Name reactions
Sandmeyer reaction
[ "Chemistry" ]
1,363
[ "Name reactions" ]
1,794,051
https://en.wikipedia.org/wiki/Armstrong%27s%20mixture
Armstrong's mixture is a highly shock- and friction-sensitive explosive. Formulations vary, but one consists of 67% potassium chlorate, 27% red phosphorus, 3% sulfur, and 3% calcium carbonate. It is named for Sir William Armstrong, who invented it sometime prior to 1872 for use in explosive shells.

Toys

Armstrong's mixture can be used as ammunition for toy cap guns. The mixture is suspended in water with some gum arabic or similar binder and deposited in drops, each containing a few milligrams of explosive, to dry between layers of paper backing. The dots explode with some smoke when struck. Armstrong's mixture can be used in impact firecrackers known as cap torpedoes, which explode on impact when the ball (made of clay or papier-mâché) is thrown or (with some types) launched by slingshot. The firecrackers may include gravel with the explosive mixture to ensure enough friction is generated to produce a detonation.

Military use

With the addition of a grit such as boron carbide (in a modified formulation given as 70% KClO3, 19% red phosphorus, 3% sulfur, 3% chalk, and 5% boron carbide by weight), Armstrong's mixture has been considered for use in firearm primers. This use as a primer for artillery propellants may have been Armstrong's original purpose. It also appeared in various patents for matches, novelty fireworks, and signalling devices. Armstrong's mixture has been used in thrown, impact-detonated improvised explosive devices, made simply by loading it into hollow balls.

Safety

Armstrong's mixture is both very sensitive and very explosive, a dangerous combination that limits its practical use to toy caps. Such toy caps and fireworks typically contain no more than 10 milligrams each, but gram quantities can cause maiming hand injuries. The mixture is likely to explode if mixed dry and is dangerous even when wet. If the pH is not made neutral, phosphoric acids generated by oxidized phosphorus on contact with the water can cause the mixture to deteriorate while it slowly dries. Generally the wet slurry or paste is loaded into the final casing while wet; when globe torpedoes were still in production commercially, they were heat-dried in rotating drums and then coated with water glass to securely protect them from leakage.

Simple mixtures of red phosphorus and potassium chlorate can detonate at a wide range of proportions; a 20% phosphorus mixture had 27% of the equivalent power of a like mass of TNT in a laboratory experiment, and the detonation of the 10% and 20% phosphorus mixtures even in small unconfined samples of 1 gram was described by the authors of one study as "impressive" and "scary". Pyrotechnician John Donner wrote in 1996 that it "is the most hazardous mixture commonly used in small fireworks." Tenney Davis called it "a combination which is the most sensitive, dangerous, and unpredictable of the many with which the pyrotechnist has to deal. Their preparation ought under no conditions to be attempted by an amateur."

Toy charges, such as the several-milligram dots used for cap guns, are individually harmless but potentially dangerous in large numbers. On May 14, 1878, such an accident occurred in Paris: a store containing some six to eight million paper caps, totaling about 64 kilograms of explosive mass, caught fire and exploded, killing 14 and injuring 16 more.

References

Explosives Pyrotechnic compositions
Armstrong's mixture
[ "Chemistry" ]
726
[ "Pyrotechnic compositions", "Explosives", "Explosions" ]
1,794,410
https://en.wikipedia.org/wiki/Vienna%20Ab%20initio%20Simulation%20Package
The Vienna Ab initio Simulation Package, better known as VASP, is a package written primarily in Fortran for performing ab initio quantum mechanical calculations using either Vanderbilt pseudopotentials or the projector augmented wave method, and a plane wave basis set. The basic methodology is density functional theory (DFT), but the code also allows use of post-DFT corrections such as hybrid functionals mixing DFT and Hartree-Fock exchange (e.g. HSE, PBE0 or B3LYP), many-body perturbation theory (the GW method), and dynamical electronic correlations within the random phase approximation (RPA) and MP2.

Originally, VASP was based on code written by Mike Payne (then at MIT), which was also the basis of CASTEP. It was then brought to the University of Vienna, Austria, in July 1989 by Jürgen Hafner. The main program was written by Jürgen Furthmüller, who joined the group at the Institut für Materialphysik in January 1993, and Georg Kresse. An early version of VASP was called VAMP. VASP is currently being developed by Georg Kresse; recent additions include the extension of methods frequently used in molecular quantum chemistry to periodic systems. VASP is currently used by more than 1400 research groups in academia and industry worldwide on the basis of software licence agreements with the University of Vienna. Because VASP can be used for a wide range of applications, such as phonon calculations and structure calculations, it is widely employed in the fields of condensed matter physics, materials science, and quantum chemistry.

Recent version history: VASP.6.4.1 on 7 April 2023, VASP.6.4.2 on 20 July 2023, VASP.6.4.3 on 19 March 2024, and VASP.6.5.0 on 17 December 2024.
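As a hedged sketch of how such a DFT calculation is typically set up in practice, the following drives VASP through the Atomic Simulation Environment (ASE). It assumes a licensed VASP installation with its pseudopotential library and the VASP_PP_PATH/ASE_VASP_COMMAND environment variables configured; the silicon test case and parameter values are illustrative choices, not recommendations from this article.

```python
# Minimal VASP calculation via ASE's Vasp calculator (assumes VASP is
# installed and licensed; ASE only generates inputs and launches the code).
from ase.build import bulk
from ase.calculators.vasp import Vasp

atoms = bulk("Si", "diamond", a=5.43)   # conventional silicon test case
atoms.calc = Vasp(
    xc="PBE",        # exchange-correlation functional
    encut=400,       # plane-wave cutoff in eV
    kpts=(8, 8, 8),  # Monkhorst-Pack k-point mesh
    ismear=0,        # Gaussian smearing of electronic occupations
    sigma=0.05,      # smearing width in eV
)
print(atoms.get_potential_energy())      # total energy in eV
```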
See also

Quantum chemistry computer programs

References

External links

http://www.vasp.at/

Computational chemistry software Computational physics Density functional theory software Physics software
Vienna Ab initio Simulation Package
[ "Physics", "Chemistry" ]
428
[ "Computational chemistry software", "Chemistry software", "Computational physics", "Computational chemistry", "Density functional theory software", "Physics software" ]
1,794,929
https://en.wikipedia.org/wiki/Military%20geography
Military geography is a sub-field of geography that is used by the military, as well as academics and politicians, to understand the geopolitical sphere through the military lens. To accomplish these ends, military geographers consider topics ranging from geopolitics to the influence of physical locations on military operations, and the cultural and economic impacts of a military presence. On a tactical level, a military geographer might put together the terrain and the drainage system below the surface, so a unit is not at a disadvantage if the enemy uses the drainage system to ambush it, especially in urban warfare. On a strategic level, an emerging field of strategic and military geography seeks to understand the changing human and biophysical environments that alter the security and military domains. Climate change, for example, is adding to and multiplying the complexity of military strategy, planning and training. Emerging responsibilities for the military - protection of civilian populations (Responsibility to Protect), women and ethnic groups; provision of humanitarian aid and disaster response (HADR); and new technology and domains of training and operations, such as cybergeography - make military geography a dynamic frontier.

History and Development of Military Geography

Military geography has a long and practical history. For example, Imperial Military Geography in 1938 shows how a colonial empire approach to military geography could describe the geographical setting of empire, and the responsibilities and resources that could be mobilised for national or imperial needs. Environmental determinism, regional geography, geographic information systems and geography more generally have all evolved and entwined over hundreds of years. The revival of geography and military geography as a sub-discipline is a remarkable trend since 2000, with a number of key geopolitical, international relations, historical geography and geographical approaches being developed. The American Association of Geographers and the Institute of Australian Geographers have interest groups that continue to develop the sub-discipline of military geography. In 2018, Australian Contributions to Strategic and Military Geography outlined a new Australian approach and included chapters on themes and specific regions.

Urbanistics

Russian colonel N. S. Olesik terms the field of analyzing the complex urban environment in particular "military geo-urbanistics." In the open country, units only deal with terrain, weather, and the enemy. In urban warfare, the terrain is more complex, filled with many structures and transformations of the land by the inhabitants, which restrict visibility from the air and create obstacles to ground units. Spaces may be narrow, and convoys may be restricted to certain routes between buildings, where they face roadside bombs and ambushes. Units must work with or work around local people, some of whom may cooperate and others of whom may oppose, while others are neutral or caught between the two factions. Guerilla fighters may count on an enemy's unwillingness to bomb or fire on heavily populated areas.

Types of terrain

Several types of terrain and associated climate are prevalent, each affecting combatants differently.

Desert warfare

In an arid climate, as in many desert areas across the globe, sand is a main concern. Sand can hamper an army's attempts to remain hydrated, sapping moisture from skin; sand also jams machinery, including the firing mechanisms of firearms.
Terrain is usually fairly flat, though in some regions there are vast, rolling sand dunes. The desert environment can also contain mountains, as in Afghanistan and in certain areas around Israel. Due to the ongoing conflicts in the Middle East, the U.S. military has redesigned the uniforms for the different branches of service. All of the uniforms have a digital camouflage pattern that is very effective in the desert environment, and the boots have been changed from the standard polished black boots to light brown suede leather boots. These boots are cooler under the intense heat of the desert sun.

Jungle and forest warfare

The conditions of these regions are basically the opposite of those found in desert regions. There are thousands of species of flora and fauna, and there is always moisture present, which presents its own difficulties. The moisture speeds up rotting processes and causes wounds to become infected much more easily because of all of the bacteria that live in the water. With proper filtration systems an army should have no problem keeping hydrated. The densely packed trees and underbrush provide concealment from the air as well as from the ground. Ambushes can be easily conducted in this environment, just as they can in an urban environment. The jungle can also contain mountains, but these mountains differ from those that exist in the desert: jungle mountains have far more plant life and are usually much more difficult to ascend. Helicopters have proven to be a very useful means of transportation over jungle and forested areas; Vietnam was, of course, the testing ground for this. Tanks and other vehicles have difficulty maneuvering through and around the densely packed trees, and most military aircraft fly too fast to accurately observe the ground through the trees.

Winter warfare

This type of warfare is not based on a geographical design, but on the drastic differences of this particular climate. During war it is much harder to remain warm than it is to remain cool. Even forested areas can, and many do, experience winter weather conditions. For this specific type of combat there are soldiers that are specifically trained to fight under the conditions peculiar to the winter season. These conditions call for a drastically thicker and thus warmer uniform, and weapons need to be refitted with the proper devices to ensure that they will operate in the cold.

Mountain warfare

No two mountains are alike, but there is less oxygen at higher altitude. Fighting up a mountain can be very treacherous. There can be avalanches, rockslides, cliffs, and ambushes from higher up the slopes, and there are almost guaranteed to be caves somewhere in the mountain, as in Afghanistan.

Mud

Mud is a universal menace to all armies. While it does not hamper the use of air power, it slows and sometimes stops ground movements altogether. The most common season for mud across the globe is spring. After the thawing of winter's snow and the addition of the rains that the season brings, the ground becomes very soft, and almost any military vehicle can get bogged down if it is not properly equipped. The mud is not always dependent on the spring; rather, in some parts of the world, it is determined by the monsoons.

Ocean fronts (harbors, beaches and sea cliffs)

Harbors

In the case of a seaport, especially if the goal is either to capture or to defend it, there are more difficulties than in defending an inland city.
With a harbor there is also the threat from the sea in addition to the land and the air. A harbor is always a key objective for an army to capture when an invasion is commenced. The sooner the harbor can be captured, the sooner it can be used to bring in massive amounts of reinforcements and materiel. The trouble is to capture the harbor before the enemy can sabotage it by blocking the entrance with wreckage or by deploying mines throughout the harbor. Defending the harbor is a treacherous task because the odds are that the enemy can observe your position from both the air and the sea. The harbor is on the periphery of the defense network of many nations, and even more so if the navy is either deployed or nonexistent. The best ways to defend the harbor are to have military airfields in close vicinity, to have naval units based in the harbor on a permanent basis, and to be ready to make the harbor unusable by the enemy if they should overcome your defenses.

Beaches

Beaches have always been a favourable place for landings. Beaches with naturally shallow inclines are often used for deploying troops and armoured vehicles. Often, however, they can be blocked by mines and other anti-tank defences. This makes them a high-risk place to land but, if there is no prior warning, a beach landing can be a very effective route into enemy territory.

Sea cliffs

The same rules apply to this category as to the preceding one, with the exception of the mines. Here there is almost no need for anti-vehicle mines, and so the defenses could be planned without much concern for an armored attack.

Resources; future flashpoints

The Middle East contains numerous valuable resources that major nations may compete over when supplies begin to fall around the world. The first Gulf War was an example of the United States' willingness to go to war to protect its access to the rich oilfields of the Persian Gulf. The strong military presence influenced some leaders to aid the United States with cheap oil, but over time those forces began to be viewed as a threat to the Muslim world. The attacks of September 11, 2001, have brought new hostilities to the region with the invasions of both Afghanistan and Iraq. Other hotspots around the globe centering on oil are the areas around Venezuela, the Caspian Sea region, and possibly the offshore oil deposits around Vietnam and China.

The most precious and most needed resource of all is water, and in some parts of the world that is a very expensive resource to obtain. The most obvious areas where conflict may arise over disputes for water supplies would be in the desert, but at the moment oil is the most valuable liquid in the Middle East. However, oil will not always be there, and if those people are going to survive, they must have water. Several times, countries that are upriver have threatened to build dams across the rivers in order to cut off the country downriver, starving that particular country of its water supply. This has been the case with both the Nile and the River Jordan, and the results in both cases have been the same: the countries that are downriver have threatened retaliation if such an event should occur. As the global warming trend continues, weather patterns will continue to shift, and that means that some places will fall into severe drought. People may become desperate when they do not have the resources to obtain water if such a disaster should occur.
Densely wooded regions of the world are constantly shrinking, and as the oil runs out, people will need to keep warm in the winter. The odds are that they will return to using wood as a primary fuel source for keeping warm. As the forests shrink, neighboring countries will turn on each other for this resource in order to appease their populations. The forests of Latin America and the Pacific Islands are the key hotspots for this resource; this is in part due to the already tense situations in and around those regions because of growing tensions over global oil supplies.

"Conflict diamonds" are a currency used to fund illegal weapons deals. Such diamonds are not sold through an internationally recognized company. They are conflict diamonds because warlords in Africa fight for these diamonds in order to sell them to acquire larger wealth and new weapons for continued fighting. The same is true for the gold fields in southern Africa.

See also

Military crest
Loss of Strength Gradient
Geostrategy
Natural lines of drift
Strategic depth
Defence in depth
Military geology

References

Sources

Baron De Jomini, Antoine Henri. The Art of War. Plain Label Books (1862 translation from the original French).

Bibliography

Bayles, William J. "Terrain Intelligence and Battlefield Success: a Historical Perspective." Engineer 23 (1993): 50-53.
Cole, D.H. Imperial Military Geography: The Geographical Background of the Defense Problems of the British Commonwealth. London: S. Praed (1950).
Collins, John M. Military Geography for Professionals and the Public. Washington, D.C.: National Defense University Press (1998).
Dibb, Paul. "Strategic Trends." Naval War College Review 54 (2001): 22-39.
Dupuy, R. Ernest. World in Arms: A Study in Military Geography. Harrisburg, PA: Military Service Publishing Company (1940).
Flint, Colin. The Geography of War and Peace: From Death Camps to Diplomats. Oxford: Oxford University Press (2005).
Galgano, Francis, and Eugene J. Palka (eds.). Modern Military Geography. London: Routledge (2010).
Johnson, Douglas Wilson. Topography and Strategy in the War. NY: Henry Holt (1917).
Johnson, Douglas Wilson. Battlefields of the World, Western and Southern Fronts: A Study in Military Geography. Oxford: Oxford University Press (1921).
Kirby, Robert F. "Why Study Military Geography?" Engineer 20 (1990): 1-2.
Kirsch, Scott, and Colin Flint (eds.). Reconstructing Conflict: Integrating War and Post-War Geographies. Burlington, VT: Ashgate (2011).
Klare, Michael T. "The New Geography of Conflict." Foreign Affairs 80 (2001): 49-61.
Olesik, Nikolai S. "Military Geography and Urbanistics." Military Thought 15 (2006): 81-91.
Maguire, T. Miller. Outlines of Military Geography. Cambridge, MA: Cambridge University Press (1899).
O'Sullivan, Patrick M. The Geography of War in the Post Cold War World. Lewiston: Edwin Mellen Press (2001).
Palka, Eugene J., and Francis Galgano, Jr. Military Geography: From Peace to War. Boston: McGraw-Hill (2005).
Peltier, Louis C. Bibliography of Military Geography. Military Geography Committee, Association of American Geographers (1962).
Peltier, Louis C., and G. Etzel Pearcy. Military Geography. Princeton, NJ: Van Nostrand (1966).
Rech, M., Bos, D., Jenkings, K. N., Williams, A., and Woodward, R. "Geography, military geography, and critical military studies." Critical Military Studies 1(1) (2015): 47-60.
Rosenburgh, Bob. "Training for Warfare." Soldiers 62 (2007): 34-36.
Rottman, Gordon L. World War II Pacific Island Guide: A Geo-Military Study. Westport: Greenwood Press (2002).
Woodward, Rachel. Military Geographies. Malden, MA: Blackwell (2004).
Zakharenko, I. A. "Military Geography: Past and Present." Military Thought 10 (2001): 32-37.

The United States Air War College PowerPoint of Flashpoints

Human geography Political geography
Military geography
[ "Environmental_science" ]
2,902
[ "Environmental social science", "Human geography" ]
19,321,876
https://en.wikipedia.org/wiki/Acta%20Biomaterialia
Acta Biomaterialia is a monthly peer-reviewed scientific journal published by Elsevier. It is published on behalf of Acta Materialia, Inc., and is sponsored by ASM International and The Minerals, Metals & Materials Society. The journal was established in January 2005. The editor-in-chief is W.R. Wagner (University of Pittsburgh). The journal covers research in biomaterials science, including the interrelationship of biomaterial structure and function from the macroscale to the nanoscale. Topical coverage includes biomedical and biocompatible materials. Formats of publication include original research reports, review papers, and rapid communications ("letters").

Abstracting and indexing

Acta Biomaterialia is abstracted and indexed in:

Chemical Abstracts Service
EMBASE
EMBiology
Elsevier BIOBASE
MEDLINE/PubMed
Materials Science Citation Index
Science Citation Index Expanded
Scopus

According to the Journal Citation Reports, the journal has a 2022 impact factor of 9.7.

References

External links

Biochemistry journals Elsevier academic journals Materials science journals Monthly journals Academic journals established in 2005 Biotechnology journals English-language journals Academic journals associated with learned and professional societies
Acta Biomaterialia
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
238
[ "Biotechnology literature", "Biochemistry journals", "Materials science journals", "Materials science", "Biochemistry literature", "Biotechnology journals" ]
19,321,930
https://en.wikipedia.org/wiki/Acta%20Biotheoretica
Acta Biotheoretica: Mathematical and philosophical foundations of biological and biomedical science is a quarterly peer-reviewed scientific journal published by Springer Science+Business Media. It is the official journal of the Jan van der Hoeven Society for Theoretical Biology. The editor-in-chief is F.J.A. Jacobs (Leiden University). Aims and scope The journal's focus is theoretical biology, which includes mathematical representation, treatment, and modeling for simulations and quantitative descriptions. The journal's focus also includes the philosophy of biology, with emphasis on the methods developed to form biological theory. Topical coverage also includes biomathematics, computational biology, genetics, ecology, and morphology. Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.185. According to the SCImago Journal Rank (SJR), the journal's h-index is 35. References Further reading External links Biology journals Springer Science+Business Media academic journals Mathematical and theoretical biology Academic journals established in 1935 Mathematical and theoretical biology journals English-language journals
Acta Biotheoretica
[ "Mathematics" ]
226
[ "Applied mathematics", "Mathematical and theoretical biology" ]
19,327,051
https://en.wikipedia.org/wiki/Discovery%20and%20exploration%20of%20the%20Solar%20System
Discovery and exploration of the Solar System is the observation, visitation, and increase in knowledge and understanding of Earth's "cosmic neighborhood". This includes the Sun, Earth and the Moon, the major planets Mercury, Venus, Mars, Jupiter, Saturn, Uranus, and Neptune, their satellites, as well as smaller bodies including comets, asteroids, and dust. In ancient and medieval times, only objects visible to the naked eye—the Sun, the Moon, the five classical planets, and comets, along with phenomena now known to take place in Earth's atmosphere, like meteors and aurorae—were known. Ancient astronomers were able to make geometric observations with various instruments. The collection of precise observations in the early modern period and the invention of the telescope helped determine the overall structure of the Solar System. Telescopic observations resulted in the discovery of moons and rings around planets, and of new planets, comets, and asteroids; the recognition of planets as other worlds, of Earth as another planet, and of stars as other suns; the identification of the Solar System as an entity in itself; and the determination of the distances to some nearby stars. For millennia, what today is known to be the Solar System was regarded as the "whole universe", so knowledge of both mostly advanced in parallel. A clear distinction was not made until around the mid-17th century. Since then, incremental knowledge has been gained not only about the Solar System, but also about outer space and its deep-sky objects. The composition of stars and planets was investigated with spectroscopy. Observations of Solar System bodies with other types of electromagnetic radiation became possible with radio astronomy, infrared astronomy, ultraviolet astronomy, X-ray astronomy, and gamma-ray astronomy. Robotic space probes, the Apollo program landings of humans on the Moon, and space telescopes have vastly increased human knowledge about the atmosphere, geology, and electromagnetic properties of other planets, giving rise to the new field of planetary science. The Solar System is one of many planetary systems in the galaxy. The planetary system that contains Earth is named the "Solar" System. The word "solar" is derived from the Latin word for Sun, Sol (genitive Solis). Anything related to the Sun is called "solar": for example, stellar wind from the Sun is called solar wind. Pre-telescope The first humans had limited understanding of the celestial bodies that could be seen in the sky. The Sun, however, was of immediate interest, as it generates the day-night cycle. Moreover, sunrise and sunset always take place at roughly the same points on the horizon, which helped to develop the cardinal directions. The Moon was another body of immediate interest because of its large apparent size. The lunar phases made it possible to measure time in periods longer than a day and to predict the duration of the seasons. Prehistoric beliefs about the structure and origin of the universe were highly diverse, often rooted in religious cosmology, and many are unrecorded. Many cultures associated the classical planets (the star-like points visible with the naked eye) with deities, in part due to their puzzling forward and retrograde motion against the otherwise fixed stars, which gave them their nickname of "wandering stars", πλάνητες ἀστέρες (planētes asteres) in Ancient Greek, from which today's word "planet" was derived.
Systematic astronomical observations were performed in many areas around the world and started to inform cosmological knowledge, although they were mostly driven by astrological purposes such as divination and omens. Early historic civilizations in Egypt, the Levant, pre-Socratic Greece, Mesopotamia, and ancient China recorded beliefs in a flat Earth. Vedic texts proposed a number of shapes, including a wheel (flat) and a bag (concave), though they likely favor a spherical Earth, which they refer to as bhugol (भूगोल in Hindi and Sanskrit), literally "spherical land". Ancient models were typically geocentric, putting the Earth at the center of the universe, based solely on the common experience of seeing the skies slowly moving overhead and feeling the land underfoot to be firmly at rest. Some traditions in Chinese cosmology proposed an outer surface to which the planets, the Sun, and the Moon were attached; another proposed they were free-floating. All remaining stars were regarded as "fixed" in the background. One important discovery, made at different times in different places, is that the bright planet sometimes seen near sunrise (called Phosphorus by the Greeks) and the bright planet sometimes seen near sunset (called Hesperus by the Greeks) were actually the same planet, Venus. Though it is unclear whether the idea was motivated by empirical observations, the concept of a spherical Earth apparently first gained intellectual dominance in the Pythagorean school in Ancient Greece in the 5th century BC. Meanwhile, the Pythagorean astronomical system proposed that the Earth, the Sun, and a counter-Earth rotate around an unseen "Central Fire". Influenced by Pythagorean thinking and Plato, the philosophers Eudoxus, Callippus, and Aristotle all developed models of the solar system based on concentric spheres. These required more than one sphere per planet in order to account for the complicated curves the planets traced across the sky. Aristotelian physics used the Earth's place at the center of the universe along with the theory of classical elements to explain phenomena such as falling rocks and rising flames; objects in the sky were theorized to be composed of a unique element called aether. A later geocentric model developed by Ptolemy attached smaller spheres to a smaller number of large spheres to explain the complex motions of the planets, a device known as deferent and epicycle, first developed by Apollonius of Perga. Published in the Almagest, this model of celestial spheres surrounding a spherical Earth was reasonably accurate and predictive, and became dominant among educated people in various cultures, spreading from Ancient Greece to Ancient Rome, Christian Europe, the Islamic world, South Asia, and China via inheritance and copying of texts, conquest, trade, and missionaries. It remained in widespread use until the 16th century. Various astronomers, especially those with access to more precise observations, were skeptical of the geocentric model and proposed alternatives, including the heliocentric theory, in which the planets and the Earth orbit the Sun. Many proposals did not diffuse outside the local culture, or did not become locally dominant.
Aristarchus of Samos had speculated about heliocentrism in Ancient Greece; Martianus Capella taught in the early Middle Ages that both Mercury and Venus orbit the Sun, while the Moon, the Sun, and the other planets orbit the Earth; in Al-Andalus, Arzachel proposed that Mercury orbits the Sun, and heliocentric astronomers worked in the Maragha school in Persia. The Kerala-based astronomer Nilakantha Somayaji proposed a geoheliocentric system, in which the planets circled the Sun while the Sun, Moon, and stars orbited the Earth. Finally, the Polish astronomer Nicolaus Copernicus developed in full a system called Copernican heliocentrism, in which the planets and the Earth orbit the Sun, and the Moon orbits the Earth. Copernicus's theory was known to the Danish astronomer Tycho Brahe, but Brahe did not accept it and proposed his own geoheliocentric Tychonic system. Brahe undertook a substantial series of more accurate observations. The German natural philosopher Johannes Kepler at first worked to combine the Copernican system with Platonic solids, in line with his interpretation of Christianity and an ancient musical resonance theory known as musica universalis. After becoming an assistant to Brahe, Kepler inherited the observations and was directed to mathematically analyze the orbit of Mars. After many failed attempts, he eventually made the groundbreaking discovery that the planets moved around the Sun in ellipses. He formulated and published what are now known as Kepler's laws of planetary motion from 1609 to 1619. This became the dominant model among astronomers, though as with celestial sphere models, the physical mechanism by which this motion occurred remained mysterious, and theories abounded. It took some time for the new theories to diffuse across the world. For example, with the Age of Discovery already well underway, astronomical thought in America was based on the older Greek theories, but newer western European ideas began to appear in writings by 1659. Telescopic observations Early telescopic discoveries The invention of the telescope revolutionized astronomy, making it possible to see details of the Sun, Moon, and planets not available to the naked eye. The telescope appeared around 1608 in the Netherlands and was quickly adopted by European enthusiasts and astronomers to study the skies. The Italian polymath Galileo Galilei was an early user and made prolific discoveries, including the phases of Venus, which definitively disproved the arrangement of spheres in the Ptolemaic system. Galileo also discovered that the Moon was cratered, that the Sun was marked with sunspots, and that Jupiter had four satellites in orbit around it. Christiaan Huygens followed on from Galileo's discoveries by discovering Saturn's moon Titan and the shape of the rings of Saturn. Giovanni Domenico Cassini later discovered four more moons of Saturn and the Cassini division in Saturn's rings. Around 1677, Edmond Halley observed a transit of Mercury across the Sun, leading him to realise that observations of the solar parallax of a planet (more ideally using a transit of Venus) could be used to trigonometrically determine the distances between Earth, Venus, and the Sun. In 1705, Halley realised that repeated sightings of a comet were recording the same object, returning regularly once every 75–76 years. This was the first evidence that anything other than the planets orbited the Sun, though this had been theorized about comets in the 1st century by Seneca.
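Halley's 75–76-year period already fixes the size of the comet's orbit through Kepler's third law: for bodies orbiting the Sun, T² = a³ with the period T in years and the semi-major axis a in astronomical units. A quick illustrative sketch of this calculation (added here for clarity; the helper name is ours, not from the original article):

```python
# Kepler's third law for bodies orbiting the Sun: T^2 = a^3,
# with the period T in years and the semi-major axis a in AU.
def semi_major_axis_au(period_years: float) -> float:
    """Semi-major axis in AU from the orbital period in years."""
    return period_years ** (2.0 / 3.0)

# Halley's comet: a ~76-year period implies a ~18 AU semi-major axis,
# an orbit reaching far beyond Saturn, the outermost planet then known.
print(f"Halley: a = {semi_major_axis_au(76):.1f} AU")  # ~17.9
print(f"Earth:  a = {semi_major_axis_au(1):.1f} AU")   # 1.0
```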
Around 1704, the term "Solar System" first appeared in English. Newtonian physics The English astronomer and mathematician Isaac Newton, incidentally building on recent scientific inquiries into the speed at which objects fall, was inspired by claims by his rival Robert Hooke of a proof of Kepler's laws. Newton was able to explain the motions of the planets by hypothesizing a force of gravity acting between all solar system objects, proportional to their masses and following an inverse-square law with distance: Newton's law of universal gravitation. Newton's 1687 Philosophiæ Naturalis Principia Mathematica explained this along with Newton's laws of motion, for the first time providing a unified explanation for astronomical and terrestrial phenomena. These concepts became the basis of classical mechanics, which enabled future advances in many fields of physics. Discovery of additional planets and moons The telescope made it possible for the first time to detect objects not visible to the naked eye. This took some time to accomplish, due to various logistical considerations such as the low magnification power of early equipment, the small area of the sky covered in any given observation, and the work involved in comparing multiple observations over different nights. In 1781, William Herschel was looking for binary stars in the constellation of Taurus when he observed what he thought was a new comet. Its orbit revealed that it was a new planet, Uranus, the first ever discovered telescopically. Giuseppe Piazzi discovered Ceres in 1801, a small world between Mars and Jupiter. It was considered another planet, but after subsequent discoveries of other small worlds in the same region, it and the others were eventually reclassified as asteroids. By 1846, discrepancies in the orbit of Uranus led many to suspect that a large planet must be tugging at it from farther out. The calculations of John Couch Adams and Urbain Le Verrier eventually led to the discovery of Neptune. The excess perihelion precession of Mercury's orbit led Le Verrier to postulate the intra-Mercurian planet Vulcan in 1859, but that planet would turn out not to exist: the excess perihelion precession was finally explained by Einstein's general relativity, which displaced Newton's theory as the most accurate description of gravity on large scales. Eventually, new moons were also discovered: around Uranus starting in 1787 by Herschel, around Neptune starting in 1846 by William Lassell, and around Mars in 1877 by Asaph Hall. Further apparent discrepancies in the orbits of the outer planets led Percival Lowell to conclude that yet another planet, "Planet X", must lie beyond Neptune. After his death, his Lowell Observatory conducted a search that ultimately led to Clyde Tombaugh's discovery of Pluto in 1930. Pluto was, however, found to be too small to have disrupted the orbits of the outer planets, and its discovery was therefore coincidental. Like Ceres, it was initially considered a planet, but after the discovery of many other similarly sized objects in its vicinity it was reclassified in 2006 as a dwarf planet by the IAU. More technical improvements In 1668, Isaac Newton built his own reflecting telescope, the first fully functional telescope of this kind and a landmark for future developments, as it reduced spherical aberration with no chromatic aberration. Today, the most powerful telescopes in the world are of this type. In 1840, John W. Draper took a daguerreotype of the Moon, the first astronomical photograph.
Since then, astrophotography has been a key tool in observational studies of the skies. Spectroscopy, a method that permits the study of materials by means of the light they emit, was developed around 1835–1860 by Charles Wheatstone, Léon Foucault, Anders Jonas Ångström, and others. Robert Bunsen and Gustav Kirchhoff further developed the spectroscope, which they used to pioneer the identification of the chemical elements on Earth, and also in the Sun. Around 1862, Father Angelo Secchi developed the heliospectrograph, enabling him to study both the Sun and the stars, and to identify them as objects of intrinsically the same kind. In 1868, Jules Janssen and Norman Lockyer discovered in the Sun a new element unknown on Earth, helium, which currently comprises 23.8% of the mass in the solar photosphere. Today, spectroscopes remain an important tool for determining the chemical composition of celestial bodies. By the mid-20th century, important new technologies for remote sensing and observation arose, such as radar, radio astronomy, and astronautics. Discovery of the solar system as one among many In ancient times, there was a common belief in the so-called "sphere of fixed stars", a giant dome-like structure or firmament centered on Earth which acted as the confinement of the whole universe, its edge, rotating daily around it. From Hellenistic astronomy through the Middle Ages, the estimated radius of such a sphere became increasingly large, up to inconceivable distances. But by the European Renaissance, the possibility that such a huge sphere could complete a single revolution of 360° around the Earth in only 24 hours was deemed improbable, and this point was one of the arguments of Nicolaus Copernicus for leaving behind the centuries-old geocentric model. In the sixteenth century, a number of writers inspired by Copernicus, such as Thomas Digges, Giordano Bruno, and William Gilbert, argued for an indefinitely extended or even infinite universe, with other stars as distant suns, paving the way to deprecate the Aristotelian sphere of the fixed stars. When Galileo Galilei examined the skies and constellations through a telescope, he concluded that the "fixed stars" which had been studied and mapped were only a tiny portion of a massive universe lying beyond the reach of the naked eye. He also aimed his telescope at the faint strip of the Milky Way and found that it resolves into countless white star-like spots, presumably farther stars themselves. The term "Solar System" entered the English language by 1704, when John Locke used it to refer to the Sun, planets, and comets as a whole. By then it had been established beyond doubt that the planets are other worlds and the stars other distant suns, so the whole Solar System is actually only a small part of an immensely large universe, and definitively something distinct. Although it is debatable when the Solar System as such was truly "discovered", three 19th-century observations determined its nature and place in the Universe beyond reasonable doubt. First, in 1835–1838, Thomas Henderson and Friedrich Bessel successfully measured two stellar parallaxes, the apparent shift in the position of a nearby star created by Earth's motion around the Sun.
This was not only a direct, experimental proof of heliocentrism (James Bradley had already provided one in 1729, when he discovered that the aberration of starlight is caused by the Earth's motion around the Sun), but also accurately revealed, for the first time, the vast distance between the Solar System and the closest stars. Then, in 1859, Robert Bunsen and Gustav Kirchhoff, using the newly invented spectroscope, examined the spectral signature of the Sun and discovered that it was composed of the same elements as exist on Earth, establishing for the first time a physical similarity between Earth and the other bodies visible from Earth. Then, Father Angelo Secchi compared the spectral signature of the Sun with those of other stars, and found them virtually identical. The realisation that the Sun is a star led to a scientifically updated hypothesis that other stars could have planetary systems of their own, though this was not to be proven for nearly 140 years. Observational cosmology began with attempts by William Herschel to describe the shape of the then known universe. In 1785, he proposed that the Milky Way was a disk, but assumed the Sun was at its center. This heliocentric theory was overturned by galactocentrism in the 1910s, after further observations by Harlow Shapley placed the Galactic Center relatively far away. Extrasolar planets and the Kuiper belt In 1992, the first evidence of a planetary system other than our own was discovered, orbiting the pulsar PSR B1257+12. Three years later, 51 Pegasi b, the first extrasolar planet around a Sun-like star, was discovered. NASA announced in March 2022 that the number of discovered exoplanets had reached 5,000, of several types and sizes. Also in 1992, astronomers David C. Jewitt of the University of Hawaii and Jane Luu of the Massachusetts Institute of Technology discovered Albion. This object proved to be the first of a new population, which became known as the Kuiper belt, an icy analogue to the asteroid belt of which such objects as Pluto and Charon were deemed a part; its members are called Kuiper belt objects (KBOs). Teams led by Mike Brown, Chad Trujillo, and David Rabinowitz discovered the trans-Neptunian objects (TNOs) Quaoar in 2002, Sedna in 2003, Orcus and Haumea in 2004, and Makemake in 2005, among the most notable KBOs, some now regarded as dwarf planets. Also in 2005 they announced the discovery of Eris, a scattered disc object initially thought to be larger than Pluto, which would have made it the largest object discovered in orbit around the Sun since Neptune. The New Horizons fly-by of Pluto in July 2015 resulted in more accurate measurements of Pluto, which is slightly larger, though less massive, than Eris. Observations by radar Radar astronomy is the technique of observing nearby astronomical objects by reflecting radio waves or microwaves off target objects and analyzing their reflections, which provide information about the shapes and surface properties of solid bodies that is unavailable by other means. Radar can also accurately measure the position and track the movement of such bodies, especially when they are small, as comets and asteroids are, as well as determine distances between objects in the Solar System. In certain cases radar imaging has produced images with up to 7.5-meter resolution. The Moon is comparatively close and was studied by radar soon after the invention of the technique in 1946, mainly through precise measurements of its distance and its surface roughness.
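The ranging principle is simple round-trip timing: a target's distance is d = cΔt/2, where Δt is the time for the echo to return. A minimal sketch (the 2.56 s delay is the approximate lunar value; the helper name is ours):

```python
# Radar ranging: distance = (speed of light * round-trip delay) / 2.
C = 299_792_458.0  # speed of light in m/s

def radar_distance_km(round_trip_seconds: float) -> float:
    """Target distance in km from the echo's round-trip time."""
    return C * round_trip_seconds / 2.0 / 1000.0

# A lunar radar echo returns after roughly 2.56 s, consistent with
# the Moon's ~384,000 km mean distance.
print(f"{radar_distance_km(2.56):,.0f} km")  # ~383,734 km
```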
Other bodies that have been observed by this means include: Mercury – Improved value for the distance from the Earth (a test of the theory of general relativity). Rotational period, libration, surface mapping, study of polar regions. Venus – First radar detection in 1961. Measurement of the size of the astronomical unit. Rotation period, gross surface properties. The Magellan mission mapped the entire planet using a radar altimeter, a task that cannot be done by optical means due to the planet's opaque atmosphere. Earth – Numerous airborne and spacecraft radars have mapped the entire planet, for various purposes. One example is the Shuttle Radar Topography Mission, which mapped large parts of the surface of Earth at 30 m resolution. Mars – Mapping of surface roughness from the Arecibo Observatory. The Mars Express mission carries a ground-penetrating radar. Jupiter system – Survey of the moon Europa. Saturn system – Rings and Titan from the Arecibo Observatory. Mapping of Titan's surface and observations of other moons from the Cassini spacecraft. Like Venus, Titan possesses an opaque atmosphere. By 2018, there had been radar observations of 138 main-belt asteroids, 789 near-Earth asteroids, and 20 comets, including 73P/Schwassmann-Wachmann. Observations by spacecraft Since the start of the Space Age, a great deal of exploration has been performed by robotic spacecraft missions organized and executed by various space agencies. All planets in the Solar System, plus their major moons along with some asteroids and comets, have now been visited to varying degrees by spacecraft launched from Earth. Through these uncrewed missions, humans have been able to get close-up photographs of all the planets and, in the case of landers, perform tests of the soils and atmospheres of some. The first artificial object sent into space was the Soviet satellite Sputnik 1, launched on 4 October 1957, which successfully orbited Earth until 4 January the following year. The American probe Explorer 6, launched in 1959, was the first satellite to image Earth from space. Flybys The first successful probe to fly by another Solar System body was Luna 1, which sped past the Moon in 1959. Originally meant to impact the Moon, it instead missed its target and became the first artificial object to orbit the Sun. Mariner 2 performed the first planetary flyby, passing Venus in 1962. The first successful flyby of Mars was made by Mariner 4 in 1965. Mariner 10 first passed Mercury in 1974. The first probe to explore the outer planets was Pioneer 10, which flew by Jupiter in 1973. Pioneer 11 was the first to visit Saturn, in 1979. The Voyager probes performed a Grand Tour of the outer planets following their launch in 1977, with both probes passing Jupiter in 1979 and Saturn in 1980–1981. Voyager 2 then went on to make close approaches to Uranus in 1986 and Neptune in 1989. The two Voyager probes are now far beyond Neptune's orbit, and are on course to find and study the termination shock, heliosheath, and heliopause. According to NASA, both Voyager probes have encountered the termination shock at a distance of approximately 93 AU from the Sun. The first flyby of a comet occurred in 1985, when the International Cometary Explorer (ICE) passed by the comet Giacobini–Zinner, whereas the first flybys of asteroids were conducted by the Galileo space probe, which imaged both 951 Gaspra (in 1991) and 243 Ida (in 1993) on its way to Jupiter.
Launched on January 19, 2006, the New Horizons probe is the first human-made spacecraft to explore the Kuiper belt. This uncrewed mission flew by Pluto in July 2015. The mission was extended to observe a number of other Kuiper belt objects, including a close flyby of 486958 Arrokoth on New Year's Day, 2019. As of 2011, American scientists were concerned that exploration beyond the asteroid belt would be hampered by a shortage of plutonium-238. Orbiters, landers, rovers and flying probes In 1966, the Moon became the first Solar System body beyond Earth to be orbited by an artificial satellite (Luna 10), followed by Mars in 1971 (Mariner 9), Venus in 1975 (Venera 9), Jupiter in 1995 (Galileo), the asteroid Eros in 2000 (NEAR Shoemaker), Saturn in 2004 (Cassini–Huygens), and Mercury and Vesta in 2011 (MESSENGER and Dawn respectively). Dawn entered orbit around the asteroid–dwarf planet Ceres in 2015 and remains there as of 2023, though it has been inactive since 2018. In 2014, the Rosetta spacecraft became the first comet orbiter, around Churyumov–Gerasimenko. The first probe to land on another Solar System body was the Soviet Luna 2 probe, which impacted the Moon in 1959. Since then, increasingly distant bodies have been reached, with probes landing on or impacting the surfaces of Venus in 1966 (Venera 3), Mars in 1971 (Mars 3, although a fully successful landing didn't occur until Viking 1 in 1976), the asteroid Eros in 2001 (NEAR Shoemaker), Saturn's moon Titan in 2004 (Huygens), and the comets Tempel 1 (Deep Impact) in 2005 and Churyumov–Gerasimenko (Philae) in 2014. The Galileo orbiter also dropped a probe into Jupiter's atmosphere in 1995; it was intended to descend as far as possible into the gas giant before being destroyed by heat and pressure. To date, three bodies in the Solar System (the Moon, Mars, and Ryugu) have been visited by mobile rovers. The first robotic rover to visit another celestial body was the Soviet Lunokhod 1, which landed on the Moon in 1970. The first to visit another planet was Sojourner, which travelled 500 metres across the surface of Mars in 1997. The first flying probes in the Solar System were the Vega balloons at Venus in 1985, while the first powered flight was undertaken by Ingenuity on Mars in 2021. The only crewed rover to visit another world was NASA's Lunar Roving Vehicle, which traveled with Apollos 15, 16, and 17 between 1971 and 1972. In 2022, the DART impactor crashed into Dimorphos, the minor-planet moon of the asteroid Didymos, with the explicit purpose of deliberately deflecting (slightly) the orbit of a Solar System body for the first time ever, which it accomplished. Sample return In some instances, both human and robotic explorers have taken physical samples of the visited bodies and returned them to Earth. Other extraterrestrial materials have come to Earth naturally, as meteorites, or became stuck to artificial satellites; these specimens also allow the study of Solar System matter. Moon (various missions including Apollo, Luna and Chang'e). Asteroid (Hayabusa and Hayabusa2). Comet dust (Stardust). Solar wind (Genesis). Spacecraft exploration Overview of some missions to the Solar System. See also the categories for missions to comets, asteroids, the Moon, and the Sun. Crewed exploration The first human being to reach space (defined as an altitude of over 100 km) and to orbit Earth was Yuri Gagarin, a Soviet cosmonaut who was launched in Vostok 1 on April 12, 1961.
The first human to walk on the surface of another Solar System body was Neil Armstrong, who stepped onto the Moon on July 21, 1969 during the Apollo 11 mission; five more Moon landings occurred through 1972. The United States' reusable Space Shuttle flew 135 missions between 1981 and 2011. Two of the five shuttles were destroyed in accidents. The first orbital space station to host more than one crew was NASA's Skylab, which successfully held three crews from 1973 to 1974. True human settlement in space began with the Soviet space station Mir, which was continuously occupied for close to ten years, from 1989 to 1999. Its successor, the International Space Station, has maintained a continuous human presence in space since 2000. In 2004, U.S. President George W. Bush announced the Vision for Space Exploration, which called for a replacement for the aging Shuttle, a return to the Moon and, ultimately, a crewed mission to Mars. Exploration by country Legend: ☄ – orbit or flyby; ❏ – space observatory; Ѫ – successful landing on an object; ⚗ – sample return; ⚘ – crewed mission; ↂ – permanently inhabited space station. Exploration survey Bodies imaged up close: Objects imaged only at low resolution: Satellites: Jupiter – Metis; Saturn – Polydeuces; Uranus – Puck; Neptune – Nereid, Despina, Larissa; Pluto – Kerberos, Styx. Selected asteroids, by number: Juno, Hebe, Egeria, Eunomia, Psyche, Amphitrite, Daphne, Bamberga, Davida, Interamnia, Annefrank, Braille. Selected comets: Halley's, Hyakutake, Holmes, Giacobini–Zinner. Trans-Neptunian objects (TNOs), named and/or with radius above 200 km, ordered by size: Eris, Haumea, Makemake, Gonggong, Quaoar, Sedna, Orcus, Salacia, Varda, Ixion, Varuna, Gǃkúnǁʼhòmdímà, Dziewanna, Huya. See also the radar images at "Near-Earth object". See also Astronomy Timeline of Solar System astronomy Timeline of discovery of Solar System planets and their moons List of former planets List of hypothetical Solar System objects in astronomy Fixed stars Historical models of the Solar System Exploration Timeline of spaceflight Timeline of space exploration Timeline of Solar System exploration Timeline of first images of Earth from space List of Solar System probes References Bibliography Solar System
Discovery and exploration of the Solar System
[ "Astronomy" ]
6,297
[ "Discovery and exploration of the Solar System", "Outer space", "Solar System", "History of astronomy" ]
2,479,157
https://en.wikipedia.org/wiki/Monte%20Carlo%20N-Particle%20Transport%20Code
Monte Carlo N-Particle Transport (MCNP) is a general-purpose, continuous-energy, generalized-geometry, time-dependent Monte Carlo radiation transport code designed to track many particle types over broad ranges of energies; it is developed by Los Alamos National Laboratory. Specific areas of application include, but are not limited to, radiation protection and dosimetry, radiation shielding, radiography, medical physics, nuclear criticality safety, detector design and analysis, nuclear oil well logging, accelerator target design, fission and fusion reactor design, and decontamination and decommissioning. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and fourth-degree elliptical tori. Point-wise cross-section data are typically used, although group-wise data are also available. For neutrons, all reactions given in a particular cross-section evaluation (such as ENDF/B-VI) are accounted for. Thermal neutrons are described by both the free-gas and S(α,β) models. For photons, the code accounts for incoherent and coherent scattering, the possibility of fluorescent emission after photoelectric absorption, absorption in pair production with local emission of annihilation radiation, and bremsstrahlung. A continuous-slowing-down model is used for electron transport that includes positrons, K x-rays, and bremsstrahlung but does not include external or self-induced fields. Important standard features that make MCNP very versatile and easy to use include a powerful general source, criticality source, and surface source; both geometry and output tally plotters; a rich collection of variance reduction techniques; a flexible tally structure; and an extensive collection of cross-section data. MCNP contains numerous flexible tallies: surface current and flux, volume flux (track length), point or ring detectors, particle heating, fission heating, pulse-height tally for energy or charge deposition, mesh tallies, and radiography tallies. The key value MCNP provides is a predictive capability that can replace expensive or impossible-to-perform experiments. It is often used to design large-scale measurements, providing significant time and cost savings to the community. LANL's latest version of the MCNP code, version 6.2, represents one piece of a set of synergistic capabilities each developed at LANL; these include the evaluated nuclear data libraries (ENDF) and the data-processing code NJOY. The international user community's high confidence in MCNP's predictive capabilities is based on its performance with verification and validation test suites, comparisons to its predecessor codes, automated testing, underlying high-quality nuclear and atomic databases, and significant testing by its users. History The Monte Carlo method for radiation particle transport has its origins at LANL, dating back to 1946. The creators of these methods were Stanislaw Ulam, John von Neumann, Robert Richtmyer, and Nicholas Metropolis. Monte Carlo for radiation transport was conceived by Stanislaw Ulam in 1946 while playing solitaire during recovery from an illness. "After spending a lot of time trying to estimate success by combinatorial calculations, I wondered whether a more practical method...might be to lay it out say one hundred times and simply observe and count the number of successful plays."
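To give a feel for the sampling idea in Ulam's remark, here is a toy one-dimensional shielding estimate in Python. It is only a sketch: the cross-section numbers, slab thickness, and function name are made up for illustration, and this is neither MCNP's input format nor its algorithms.

```python
import random

# Toy 1D slab-shielding estimate by Monte Carlo.
# Hypothetical macroscopic cross sections (per cm) and slab thickness.
SIGMA_TOTAL = 0.5      # total interaction probability per cm of travel
P_ABSORB = 0.3         # probability that a collision absorbs the neutron
THICKNESS = 10.0       # slab thickness in cm

def transmitted(n_histories: int, rng: random.Random) -> float:
    """Fraction of neutrons that exit the far side of the slab."""
    escaped = 0
    for _ in range(n_histories):
        x, direction = 0.0, 1.0          # start at surface, moving inward
        while True:
            # Sample the distance to the next collision from the
            # exponential free-path distribution.
            x += direction * rng.expovariate(SIGMA_TOTAL)
            if x >= THICKNESS:           # escaped through the far side
                escaped += 1
                break
            if x < 0.0:                  # scattered back out the near side
                break
            if rng.random() < P_ABSORB:  # absorbed in the slab
                break
            direction = rng.choice((-1.0, 1.0))  # isotropic 1D scatter
    return escaped / n_histories

print(transmitted(100_000, random.Random(42)))
```

Production codes like MCNP refine exactly this loop with real nuclear data, 3D geometry, and variance-reduction techniques.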
In 1947, John von Neumann sent a letter to Robert Richtmyer proposing the use of a statistical method to solve neutron diffusion and multiplication problems in fission devices. His letter contained an 81-step pseudo-code and was the first formulation of a Monte Carlo computation for an electronic computing machine. Von Neumann's assumptions were: time dependence, continuous energy, a spherical but radially varying configuration, one fissionable material, isotropic scattering and fission production, and fission multiplicities of 2, 3, or 4. He suggested 100 neutrons, each to be run for 100 collisions, and estimated the computational time to be five hours on ENIAC. Richtmyer proposed modifications to allow for multiple fissionable materials, no fission-spectrum energy dependence, a single neutron multiplicity, and running the computation for a fixed computing time rather than a fixed number of collisions. The code was finalized in December 1947. The first calculations were run in April/May 1948 on ENIAC. While waiting for ENIAC to be physically relocated, Enrico Fermi invented a mechanical device called FERMIAC to trace neutron movements through fissionable materials by the Monte Carlo method. Monte Carlo methods for particle transport have been driving computational developments since the beginning of modern computers; this continues today. In the 1950s and 1960s, these new methods were organized into a series of special-purpose Monte Carlo codes, including MCS, MCN, MCP, and MCG. These codes were able to transport neutrons and photons for specialized LANL applications. In 1977, these separate codes were combined, merging MCNG with MCP, to create the first generalized Monte Carlo radiation particle transport code, MCNP. The first release of the MCNP code, version 3, came in 1983. It is distributed by the Radiation Safety Information Computational Center in Oak Ridge, TN. Monte Carlo N-Particle eXtended Monte Carlo N-Particle eXtended (MCNPX) was also developed at Los Alamos National Laboratory, and is capable of simulating particle interactions of 34 different types of particles (nucleons and ions) and more than 2000 heavy ions at nearly all energies, including those simulated by MCNP. Both codes can be used to judge whether or not nuclear systems are critical and to determine doses from sources, among other things. MCNP6 is a merger of MCNP5 and MCNPX. Comparison MCNP6 is less accurate than MCNPX. Geant4 is less accurate than MCNPX. Geant4 is less accurate than MCNP5. Geant4 is slower than MCNPX. See also Safety code (nuclear reactor) Nuclear data Monte Carlo method Monte Carlo methods for electron transport Nuclear reactor Nuclear engineering Neutron FLUKA Geant4 MELCOR RELAP5-3D Serpent (software) Notes External links LANL MCNP website Radiation Safety Information Computational Center Nuclear technology Nuclear safety and security Monte Carlo software Physics software Fortran software Scientific simulation software Monte Carlo particle physics software
Monte Carlo N-Particle Transport Code
[ "Physics" ]
1,283
[ "Nuclear technology", "Physics software", "Nuclear physics", "Computational physics" ]
2,479,508
https://en.wikipedia.org/wiki/Helmholtz%20theorem%20%28classical%20mechanics%29
The Helmholtz theorem of classical mechanics reads as follows: Let H(x, p; V) = K(p) + φ(x; V) be the Hamiltonian of a one-dimensional system, where K(p) = p²/2m is the kinetic energy and φ(x; V) is a "U-shaped" potential energy profile which depends on a parameter V. Let ⟨·⟩t denote the time average over a trajectory of energy E = K + φ. Let T = 2⟨K⟩t, P = ⟨−∂φ/∂V⟩t, S(E; V) = log ∮ √(2m(E − φ(x; V))) dx. Then dS = (dE + P dV)/T. Remarks The thesis of this theorem of classical mechanics reads exactly as the heat theorem of thermodynamics. This fact shows that thermodynamic-like relations exist between certain mechanical quantities. This in turn allows one to define the "thermodynamic state" of a one-dimensional mechanical system. In particular the temperature T is given by the time average of the kinetic energy, and the entropy S by the logarithm of the action ∮ √(2m(E − φ(x; V))) dx. The importance of this theorem was recognized by Ludwig Boltzmann, who saw how to apply it to macroscopic systems (i.e. multidimensional systems) in order to provide a mechanical foundation of equilibrium thermodynamics. This research activity was strictly related to his formulation of the ergodic hypothesis. A multidimensional version of the Helmholtz theorem, based on the ergodic theorem of George David Birkhoff, is known as the generalized Helmholtz theorem. Generalized version The generalized Helmholtz theorem is the multi-dimensional generalization of the Helmholtz theorem, and reads as follows. Let x = (x₁, ..., xs), p = (p₁, ..., ps) be the canonical coordinates of an s-dimensional Hamiltonian system, and let H(x, p; V) = K(p) + φ(x; V) be the Hamiltonian function, where K is the kinetic energy and φ is the potential energy, which depends on a parameter V. Let the hyper-surfaces of constant energy in the 2s-dimensional phase space of the system be metrically indecomposable and let ⟨·⟩t denote time average. Define the quantities E, P, T, S as follows: E = K + φ, P = ⟨−∂φ/∂V⟩t, T = (2/s)⟨K⟩t, S(E; V) = log ∫ dsx dsp over the region H(x, p; V) ≤ E. Then: dS = (dE + P dV)/T. References Helmholtz, H., von (1884a). Principien der Statik monocyklischer Systeme. Borchardt-Crelle's Journal für die reine und angewandte Mathematik, 97, 111–140 (also in Wiedemann G. (Ed.) (1895) Wissenschaftliche Abhandlungen. Vol. 3 (pp. 142–162, 179–202). Leipzig: Johann Ambrosius Barth). Helmholtz, H., von (1884b). Studien zur Statik monocyklischer Systeme. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin, I, 159–177 (also in Wiedemann G. (Ed.) (1895) Wissenschaftliche Abhandlungen. Vol. 3 (pp. 163–178). Leipzig: Johann Ambrosius Barth). Boltzmann, L. (1884). Über die Eigenschaften monocyklischer und anderer damit verwandter Systeme. Crelles Journal, 98: 68–94 (also in Boltzmann, L. (1909). Wissenschaftliche Abhandlungen (Vol. 3, pp. 122–152), F. Hasenöhrl (Ed.). Leipzig. Reissued New York: Chelsea, 1969). Gallavotti, G. (1999). Statistical mechanics: A short treatise. Berlin: Springer. Campisi, M. (2005). On the mechanical foundations of thermodynamics: The generalized Helmholtz theorem. Studies in History and Philosophy of Modern Physics 36: 275–290. Classical mechanics Statistical mechanics theorems Hermann von Helmholtz
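As a concrete check of the theorem above (an added worked example, not part of the original article), take the harmonic oscillator φ(x; V) = Vx²/2 with angular frequency ω = √(V/m), for which every quantity can be evaluated in closed form:

```latex
% Harmonic oscillator: phi(x;V) = V x^2 / 2, omega = sqrt(V/m).
% Action integral (area enclosed by the orbit of energy E):
\[ S(E;V) = \log \oint \sqrt{2m\left(E - \tfrac{1}{2}Vx^2\right)}\,dx
          = \log \frac{2\pi E}{\omega} \]
% The virial theorem gives time averages <K> = <phi> = E/2, hence
\[ T = 2\langle K \rangle_t = E, \qquad
   P = \left\langle -\frac{\partial \phi}{\partial V} \right\rangle_t
     = -\frac{\langle x^2 \rangle_t}{2} = -\frac{E}{2V} \]
% which indeed satisfy the heat-theorem form dS = (dE + P dV)/T:
\[ \frac{\partial S}{\partial E} = \frac{1}{E} = \frac{1}{T}, \qquad
   \frac{\partial S}{\partial V} = -\frac{1}{2V} = \frac{P}{T} \]
```

Both partial derivatives of S reproduce the heat-theorem differential exactly, as the theorem asserts.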
Helmholtz theorem (classical mechanics)
[ "Physics", "Mathematics" ]
753
[ "Statistical mechanics stubs", "Theorems in dynamical systems", "Classical mechanics stubs", "Classical mechanics", "Statistical mechanics theorems", "Theorems in mathematical physics", "Mechanics", "Statistical mechanics", "Physics theorems" ]
2,480,516
https://en.wikipedia.org/wiki/Reliability%2C%20availability%20and%20serviceability
Reliability, availability and serviceability (RAS), also known as reliability, availability, and maintainability (RAM), is a computer hardware engineering term involving reliability engineering, high availability, and serviceability design. The phrase was originally used by IBM as a term to describe the robustness of their mainframe computers. Computers designed with higher levels of RAS have many features that protect data integrity and help them stay available for long periods of time without failure. This data integrity and uptime is a particular selling point for mainframes and fault-tolerant systems. Definitions While RAS originated as a hardware-oriented term, systems thinking has extended the concept of reliability-availability-serviceability to systems in general, including software: Reliability can be defined as the probability that a system will produce correct outputs up to some given time t. Reliability is enhanced by features that help to avoid, detect and repair hardware faults. A reliable system does not silently continue and deliver results that include uncorrected corrupted data. Instead, it detects and, if possible, corrects the corruption, for example: by retrying an operation for transient (soft) or intermittent errors, or else, for uncorrectable errors, isolating the fault and reporting it to higher-level recovery mechanisms (which may failover to redundant replacement hardware, etc.), or else by halting the affected program or the entire system and reporting the corruption. Reliability can be characterized in terms of mean time between failures (MTBF), with reliability = exp(−t/MTBF). Availability means the probability that a system is operational at a given time, i.e. the amount of time a device is actually operating as the percentage of total time it should be operating. High-availability systems may report availability in terms of minutes or hours of downtime per year. Availability features allow the system to stay operational even when faults do occur. A highly available system would disable the malfunctioning portion and continue operating at a reduced capacity. In contrast, a less capable system might crash and become totally nonoperational. Availability is typically given as a percentage of the time a system is expected to be available, e.g., 99.999 percent ("five nines"). Serviceability or maintainability is the simplicity and speed with which a system can be repaired or maintained; if the time to repair a failed system increases, then availability will decrease. Serviceability includes various methods of easily diagnosing the system when problems arise. Early detection of faults can decrease or avoid system downtime. For example, some enterprise systems can automatically call a service center (without human intervention) when the system experiences a system fault. The traditional focus has been on making the correct repairs with as little disruption to normal operations as possible. Note the distinction between reliability and availability: reliability measures the ability of a system to function correctly, including avoiding data corruption, whereas availability measures how often the system is available for use, even though it may not be functioning correctly. For example, a server may run forever and so have ideal availability, but may be unreliable, with frequent data corruption. 
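To make the MTBF/MTTR arithmetic concrete, here is a small sketch in Python (the failure and repair figures are illustrative, not from any vendor): steady-state availability is MTBF/(MTBF + MTTR), while the exponential formula above gives the probability of running failure-free for a mission time t.

```python
import math

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the system is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """Probability of no failure up to time t: R(t) = exp(-t/MTBF)."""
    return math.exp(-t_hours / mtbf_hours)

# Illustrative server: one failure per 10,000 h on average, 4 h to repair.
a = availability(10_000, 4)
print(f"availability       = {a:.5%}")                   # ~99.96%
print(f"downtime per year  = {(1 - a) * 8760:.1f} hours") # ~3.5 h
print(f"1-year reliability = {reliability(8760, 10_000):.1%}")  # ~41.6%
```

Note how the two measures diverge, echoing the distinction drawn above: the machine is up 99.96% of the time, yet has less than an even chance of getting through a year without a single failure.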
Failure types Physical faults can be temporary or permanent: Permanent faults lead to a continuing error and are typically due to some physical failure such as metal electromigration or dielectric breakdown. Temporary faults include transient and intermittent faults. Transient (a.k.a. soft) faults lead to independent one-time errors and are not due to permanent hardware faults: examples include alpha particles flipping a memory bit, electromagnetic noise, or power-supply fluctuations. Intermittent faults occur due to a weak system component, e.g. circuit parameters degrading, leading to errors that are likely to recur. Failure responses Transient and intermittent faults can typically be handled by detection and correction by, e.g., ECC codes or instruction replay (see below). Permanent faults lead to uncorrectable errors which can be handled by replacement with duplicate hardware, e.g., processor sparing, or by passing the uncorrectable error to higher-level recovery mechanisms. A successfully corrected intermittent fault can also be reported to the operating system (OS) to provide information for predictive failure analysis. Hardware features Example hardware features for improving RAS include the following, listed by subsystem: Processor: Processor instruction error detection (e.g. residue checking of results) with instruction retry, e.g. alternative processor recovery in IBM mainframes, or "Instruction replay technology" in Itanium systems. Processors running in lock-step to perform master-checker or voting schemes. Machine Check Architecture and ACPI Platform Error Interface to report errors to the OS. Memory: Parity or ECC (including single-device correction) protection of memory components (cache and main memory); bad cache line disabling; memory scrubbing; memory sparing; memory mirroring; bad page offlining; redundant bit steering; redundant array of independent memory (RAIM). I/O: Cyclic redundancy check checksums for data transmission/retry and data storage, e.g. PCI Express (PCIe) Advanced Error Reporting (AER); redundant I/O paths. Storage: RAID configurations for hard disk drive and solid-state drive storage. Journaling file systems for file repair after crashes. Checksums on both data and metadata, and background scrubbing. Self-Monitoring, Analysis, and Reporting Technology for hard disk drives and solid-state drives. Power/cooling: Duplicating components to avoid single points of failure, e.g., power supplies. Over-designing the system for the specified operating ranges of clock frequency, temperature, voltage, vibration. Temperature sensors to throttle operating frequency when temperature goes out of specification. Surge protector, uninterruptible power supply, auxiliary power. System: Hot swapping of components: CPUs, RAM modules, hard disk drives and solid-state drives. Predictive failure analysis to predict which intermittent correctable errors will eventually lead to hard non-correctable errors. Partitioning/domaining of computer components to allow one large system to act as several smaller systems. Virtual machines to decrease the severity of operating system software faults. Redundant I/O domains or I/O partitions for providing virtual I/O to guest virtual machines. Computer clustering capability with failover capability, for complete redundancy of hardware and software. Dynamic software updating to avoid the need to reboot the system for a kernel software update, for example Ksplice under Linux.
Independent management processor for serviceability: remote monitoring, alerting, and control. Fault-tolerant designs extended the idea by making RAS the defining feature of their computers for applications like stock market exchanges or air traffic control, where system crashes would be catastrophic. Fault-tolerant computers (e.g., see Tandem Computers and Stratus Technologies), which tend to have duplicate components running in lock-step for reliability, have become less popular due to their high cost. High-availability systems, using distributed computing techniques like computer clusters, are often used as cheaper alternatives. See also Machine Check Architecture (MCA) Machine-check exception (MCE) High availability (HA) Redundancy (engineering) Integrated logistics support RAMS (reliability, availability, maintainability and safety) References External links Itanium Reliability, Availability and Serviceability (RAS) Features Overview of RAS features in general and specific features of the Itanium processor. POWER7 System RAS Key Aspects of Power Systems Reliability, Availability, and Serviceability. Daniel Henderson, Jim Mitchell, and George Ahrens. February 10, 2012 Overview of RAS features in Power processors. Intel Corp. Reliability, Availability, and Serviceability for the Always-on Enterprise (appendix B) and Intel Xeon Processor E7 Family: supporting next generation RAS servers. White paper. Overview of RAS features in Xeon processors. zEnterprise 196 System Overview. IBM Corp. (Chapter 10) Overview of RAS features of IBM z196 processor and zEnterprise 196 server. Maximizing Application Reliability and Availability with the SPARC M5-32 Server RAS features of Oracle's SPARC M5-32 server Fault-tolerant computer systems Systems engineering
Reliability, availability and serviceability
[ "Technology", "Engineering" ]
1,682
[ "Systems engineering", "Fault-tolerant computer systems", "Reliability engineering", "Computer systems" ]
2,481,420
https://en.wikipedia.org/wiki/Loschmidt%20constant
The Loschmidt constant or Loschmidt's number (symbol: n0) is the number of particles (atoms or molecules) of an ideal gas per volume (the number density), usually quoted at standard temperature and pressure. The 2018 CODATA recommended value is n0 = 2.686780111×10²⁵ m⁻³ at 0 °C and 1 atm. It is named after the Austrian physicist Johann Josef Loschmidt, who was the first to estimate the physical size of molecules in 1865. The term Loschmidt constant is also sometimes used to refer to the Avogadro constant, particularly in German texts. By the ideal gas law, p0 = n0 kB T0, so the Loschmidt constant is given by the relationship n0 = p0/(kB T0), where kB is the Boltzmann constant, p0 is the standard pressure, and T0 is the standard thermodynamic temperature. Since the Avogadro constant NA satisfies R = NA kB, the Loschmidt constant also satisfies n0 = p0 NA/(R T0), where R is the ideal gas constant. Being a measure of number density, the Loschmidt constant is used to define the amagat, a practical unit of number density for gases and other substances: 1 amagat = 2.686780111×10²⁵ m⁻³, such that the Loschmidt constant is exactly 1 amagat. Modern determinations In the CODATA set of recommended values for physical constants, the Loschmidt constant is calculated from the Avogadro constant and the molar volume of an ideal gas, or equivalently from the Boltzmann constant: n0 = NA/Vm = p0/(kB T0), where Vm is the molar volume of an ideal gas at the specified temperature and pressure, which can be chosen freely and must be quoted with values of the Loschmidt constant. The Loschmidt constant has been exactly defined for exact temperatures and pressures since the 2019 revision of the SI. First determinations Loschmidt did not actually calculate a value for the constant which now bears his name, but it is a simple and logical manipulation of his published results. James Clerk Maxwell described the paper in these terms in a public lecture eight years later: Loschmidt has deduced from the dynamical theory the following remarkable proportion:—As the volume of a gas is to the combined volume of all the molecules contained in it, so is the mean path of a molecule to one-eighth of the diameter of a molecule. To derive this "remarkable proportion", Loschmidt started from Maxwell's own definition of the mean free path (there is an inconsistency between the result on this page and the page cross-referenced for the mean free path; here an additional factor of 3/4 appears): ℓ = 3/(4π n d²), where n has the same sense as the Loschmidt constant, that is, the number of molecules per unit volume, and d is the effective diameter of the molecules (assumed to be spherical). This rearranges to 1/n = (16/3)(πℓd²/4), where 1/n is the volume occupied by each molecule in the gas phase, and πℓd²/4 is the volume of the cylinder swept out by the molecule in its trajectory between two collisions. However, the true volume of each molecule is given by πd³/6, and so nπd³/6 is the volume occupied by all the molecules, not counting the empty space between them. Loschmidt equated this volume with the volume of the liquified gas. Dividing both sides of the equation by πd³/6 introduces the ratio Vgas/Vliquid, the reciprocal of what Loschmidt called the "condensation coefficient", which is experimentally measurable. The equation reduces to d = 8ℓ (Vliquid/Vgas), relating the diameter of a gas molecule to measurable phenomena. The number density, the constant which now bears Loschmidt's name, can be found by simply substituting the diameter of the molecule into the definition of the mean free path and rearranging: n0 = 3/(4π ℓ d²). Instead of taking this step, Loschmidt decided to estimate the mean diameter of the molecules in air.
This was no minor undertaking, as the condensation coefficient was unknown and had to be estimated – it would be another twelve years before Raoul Pictet and Louis Paul Cailletet would liquify nitrogen for the first time. The mean free path was also uncertain. Nevertheless, Loschmidt arrived at a diameter of about one nanometre, the correct order of magnitude. Loschmidt's estimated data for air give a value of n ≈ 1.81×10²⁴ m⁻³. Eight years later, Maxwell was citing a figure of "about 19 million million million" per cm³, or 1.9×10²⁵ m⁻³. See also Avogadro's law List of scientists whose names are used in physical constants References Amount of substance Physical constants
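A quick numerical check of the modern determination n0 = p0/(kB T0), as a minimal Python sketch using the constants quoted above:

```python
# Loschmidt constant from the Boltzmann constant (exact since the
# 2019 SI revision) and a chosen standard state (0 °C, 1 atm).
K_B = 1.380649e-23   # Boltzmann constant, J/K (exact)
P0 = 101_325.0       # standard pressure, Pa (1 atm)
T0 = 273.15          # standard temperature, K (0 °C)

n0 = P0 / (K_B * T0)
print(f"n0 = {n0:.9e} m^-3")   # ~2.686780111e+25, i.e. 1 amagat
```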
Loschmidt constant
[ "Physics", "Chemistry", "Mathematics" ]
932
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Chemical quantities", "Amount of substance", "Physical constants", "Wikipedia categories named after physical quantities" ]
2,481,686
https://en.wikipedia.org/wiki/Kramers%E2%80%93Kronig%20relations
The Kramers–Kronig relations, sometimes abbreviated as KK relations, are bidirectional mathematical relations connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. The relations are often used to compute the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the condition of analyticity, and conversely, analyticity implies causality of the corresponding stable physical system. The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics, these relations are known by the names Sokhotski–Plemelj theorem and Hilbert transform. Formulation Let χ(ω) = χ₁(ω) + iχ₂(ω) be a complex function of the complex variable ω, where χ₁(ω) and χ₂(ω) are real. Suppose this function is analytic in the closed upper half-plane of ω and vanishes at least as fast as 1/|ω| as |ω| → ∞. The Kramers–Kronig relations are given by χ₁(ω) = (1/π) P ∫ from −∞ to ∞ of χ₂(ω′)/(ω′ − ω) dω′ and χ₂(ω) = −(1/π) P ∫ from −∞ to ∞ of χ₁(ω′)/(ω′ − ω) dω′, where ω is real and P denotes the Cauchy principal value. The real and imaginary parts of such a function are not independent, allowing the full function to be reconstructed given just one of its parts. Derivation The proof begins with an application of Cauchy's residue theorem for complex integration. Given any function χ(ω′) analytic in the closed upper half-plane, the function χ(ω′)/(ω′ − ω), where ω is real, is analytic in the (open) upper half-plane. The residue theorem consequently states that ∮ χ(ω′)/(ω′ − ω) dω′ = 0 for any closed contour within this region. We choose the contour to trace the real axis, with a hump over the pole at ω′ = ω and a large semicircle in the upper half-plane, then decompose the integral into its contributions along each of these three contour segments and pass them to their limits. The length of the semicircular segment increases proportionally to |ω′|, but the integral over it vanishes in the limit because χ(ω′) vanishes faster than 1/|ω′|. We are left with the segments along the real axis and the half-circle around the pole. We pass the size of the half-circle to zero and obtain 0 = P ∫ from −∞ to ∞ of χ(ω′)/(ω′ − ω) dω′ − iπχ(ω). The second term in the last expression is obtained using the theory of residues, more specifically, the Sokhotski–Plemelj theorem. Rearranging, we arrive at the compact form of the Kramers–Kronig relations: χ(ω) = (1/(iπ)) P ∫ from −∞ to ∞ of χ(ω′)/(ω′ − ω) dω′. The single i in the denominator effectuates the connection between the real and imaginary components. Finally, split χ(ω) and the equation into their real and imaginary parts to obtain the forms quoted above. Physical interpretation and alternate form The Kramers–Kronig formalism can be applied to response functions. In certain linear physical systems, or in engineering fields such as signal processing, the response function χ(t − t′) describes how some time-dependent property P(t) of a physical system responds to an impulse force F(t′) at time t′. For example, P(t) could be the angle of a pendulum and F(t) the applied force of a motor driving the pendulum motion. The response χ(t − t′) must be zero for t < t′, since a system cannot respond to a force before it is applied. It can be shown (for instance, by invoking Titchmarsh's theorem) that this causality condition implies that the Fourier transform χ(ω) of χ(t) is analytic in the upper half-plane. Additionally, if the system is subjected to an oscillatory force with a frequency much higher than its highest resonant frequency, there will be almost no time for the system to respond before the forcing has switched direction, and so the frequency response χ(ω) will converge to zero as ω becomes very large. From these physical considerations, it results that χ(ω) will typically satisfy the conditions needed for the Kramers–Kronig relations.
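As a sanity check of the first relation, the principal-value integral can be evaluated numerically for a model response whose real and imaginary parts are both known in closed form, such as a damped Lorentz oscillator χ(ω) = 1/(ω₀² − ω² − iγω). A minimal sketch (the oscillator parameters and integration grid are arbitrary choices, not from the original article):

```python
import numpy as np

# Numerical check of chi1(w) = (1/pi) P-integral of chi2(w')/(w' - w) dw'
# for a damped Lorentz oscillator, whose real and imaginary parts are
# known in closed form.
w0, gamma = 1.0, 0.3

def chi1(w):  # real (reactive) part
    return (w0**2 - w**2) / ((w0**2 - w**2)**2 + (gamma * w)**2)

def chi2(w):  # imaginary (dissipative) part
    return gamma * w / ((w0**2 - w**2)**2 + (gamma * w)**2)

# Uniform integration grid; evaluating at midpoints between grid nodes
# keeps the pole w' = w off the sample points, so contributions from
# symmetric pairs around the pole cancel, as the principal value requires.
wp = np.linspace(-200.0, 200.0, 2_000_001)
dw = wp[1] - wp[0]
for w_node in (0.5, 1.0, 2.0):
    w = w_node + dw / 2
    kk = (dw / np.pi) * np.sum(chi2(wp) / (wp - w))
    print(f"w = {w:.4f}:  KK = {kk:+.4f}   exact chi1 = {chi1(w):+.4f}")
```

Offsetting the evaluation points from the grid is a crude but serviceable way to handle the principal value; on a grid this fine the sum closely reproduces the analytic χ₁.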
The imaginary part of a response function describes how a system dissipates energy, since it is in phase with the driving force. The Kramers–Kronig relations imply that observing the dissipative response of a system is sufficient to determine its out-of-phase (reactive) response, and vice versa. The integrals run from $-\infty$ to $\infty$, implying we know the response at negative frequencies. Fortunately, in most physical systems, the positive-frequency response determines the negative-frequency response because $\chi(\omega)$ is the Fourier transform of a real-valued response $\chi(t)$. We will make this assumption henceforth. As a consequence, $\chi(-\omega) = \chi^*(\omega)$. This means $\chi_1(\omega)$ is an even function of frequency and $\chi_2(\omega)$ is odd. Using these properties, we can collapse the integration ranges to $[0, \infty)$. Consider the first relation, which gives the real part $\chi_1(\omega)$. We transform the integral into one of definite parity by multiplying the numerator and denominator of the integrand by $(\omega' + \omega)$ and separating: $\chi_1(\omega) = \frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^{\infty} \frac{\omega'\chi_2(\omega')}{\omega'^2 - \omega^2}\,d\omega' + \frac{\omega}{\pi}\,\mathcal{P}\int_{-\infty}^{\infty} \frac{\chi_2(\omega')}{\omega'^2 - \omega^2}\,d\omega'$. Since $\chi_2(\omega)$ is odd, the second integral vanishes, and we are left with $\chi_1(\omega) = \frac{2}{\pi}\,\mathcal{P}\int_0^{\infty} \frac{\omega'\chi_2(\omega')}{\omega'^2 - \omega^2}\,d\omega'$. The same derivation for the imaginary part gives $\chi_2(\omega) = -\frac{2\omega}{\pi}\,\mathcal{P}\int_0^{\infty} \frac{\chi_1(\omega')}{\omega'^2 - \omega^2}\,d\omega'$. These are the Kramers–Kronig relations in a form that is useful for physically realistic response functions. Related proof from the time domain Hu, and Hall and Heck, give a related and possibly more intuitive proof that avoids contour integration. It is based on the facts that: A causal impulse response can be expressed as the sum of an even function and an odd function, where the odd function is the even function multiplied by the sign function. The even and odd parts of a time-domain waveform correspond to the real and imaginary parts of its Fourier integral, respectively. Multiplication by the sign function in the time domain corresponds to the Hilbert transform (i.e. convolution by the Hilbert kernel $1/\pi\omega$) in the frequency domain. Combining the formulas provided by these facts yields the Kramers–Kronig relations. This proof covers slightly different ground from the previous one in that it relates the real and imaginary parts in the frequency domain of any function that is causal in the time domain, offering an approach somewhat different from the condition of analyticity in the upper half-plane of the frequency domain. An article with an informal, pictorial version of this proof is also available. Magnitude (gain)–phase relation The conventional form of Kramers–Kronig above relates the real and imaginary part of a complex response function. A related goal is to find a relation between the magnitude and phase of a complex response function. In general, unfortunately, the phase cannot be uniquely predicted from the magnitude. A simple example of this is a pure time delay of time T, which has amplitude 1 at any frequency regardless of T, but has a phase dependent on T (specifically, phase = 2π × T × frequency). There is, however, a unique amplitude-vs-phase relation in the special case of a minimum phase system, sometimes called the Bode gain–phase relation. The terms Bayard–Bode relations and Bayard–Bode theorem, after the works of Marcel Bayard (1936) and Hendrik Wade Bode (1945), are also used for either the Kramers–Kronig relations in general or the amplitude–phase relation in particular, particularly in the fields of telecommunication and control theory. Applications in physics Optics Complex refractive index The Kramers–Kronig relations are used to relate the real and imaginary portions of the complex refractive index $\tilde{n} = n + i\kappa$ of a medium, where $\kappa$ is the extinction coefficient.
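Before moving on, the even/odd decomposition in the time-domain proof above can be made concrete with a short sketch (again my own construction; the exponential impulse response is an arbitrary causal example, and the grid is chosen symmetric about t = 0):

```python
# Sketch: for a causal h(t), h_odd = sgn(t) * h_even, and the Fourier
# transform sends the even part to Re(chi) and the odd part to i*Im(chi).
import numpy as np

n, dt = 4097, 0.01                      # odd n makes the grid symmetric about t = 0
t = (np.arange(n) - n // 2) * dt
h = np.where(t >= 0, np.exp(-t), 0.0)   # a causal, decaying impulse response

h_even = 0.5 * (h + h[::-1])            # h[::-1] is h(-t) on this grid
h_odd = 0.5 * (h - h[::-1])
assert np.allclose(h_odd, np.sign(t) * h_even)

H_even = np.fft.fft(np.fft.ifftshift(h_even)) * dt
H_odd = np.fft.fft(np.fft.ifftshift(h_odd)) * dt
# the even part transforms to a (numerically) real spectrum, the odd part
# to a purely imaginary one:
print(np.max(np.abs(H_even.imag)), np.max(np.abs(H_odd.real)))
```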
Hence, in effect, this also applies for the complex relative permittivity and electric susceptibility. The Sellmeier equation is directly connected to the Kramers–Kronig relations, and is used to approximate the real and imaginary parts of the refractive index of materials far away from any resonances. Circular birefringence In optical rotation, the Kramers–Kronig relations establish a connection between optical rotary dispersion and circular dichroism. Magneto-optics Kramers–Kronig relations enable exact solutions of nontrivial scattering problems, which find applications in magneto-optics. Ellipsometry In ellipsometry, Kramers–Kronig relations are applied to verify the measured values for the real and imaginary parts of the refractive index of thin films. Electron spectroscopy In electron energy loss spectroscopy, Kramers–Kronig analysis allows one to calculate the energy dependence of both real and imaginary parts of a specimen's light optical permittivity, together with other optical properties such as the absorption coefficient and reflectivity. In short, by measuring the number of high-energy (e.g. 200 keV) electrons which lose a given amount of energy in traversing a very thin specimen (single scattering approximation), one can calculate the imaginary part of permittivity at that energy. Using this data with Kramers–Kronig analysis, one can calculate the real part of permittivity (as a function of energy) as well. This measurement is made with electrons, rather than with light, and can be done with very high spatial resolution. One might thereby, for example, look for ultraviolet (UV) absorption bands in a laboratory specimen of interstellar dust less than 100 nm across, i.e. too small for UV spectroscopy. Although electron spectroscopy has poorer energy resolution than light spectroscopy, data on properties in visible, ultraviolet and soft x-ray spectral ranges may be recorded in the same experiment. In angle-resolved photoemission spectroscopy the Kramers–Kronig relations can be used to link the real and imaginary parts of the electron's self-energy. This is characteristic of the many-body interaction the electron experiences in the material. Notable examples are in the high-temperature superconductors, where kinks corresponding to the real part of the self-energy are observed in the band dispersion, and changes in the MDC width are also observed corresponding to the imaginary part of the self-energy. Hadronic scattering The Kramers–Kronig relations are also used under the name "integral dispersion relations" with reference to hadronic scattering. In this case, the function is the scattering amplitude. Through the use of the optical theorem, the imaginary part of the scattering amplitude is then related to the total cross section, which is a physically measurable quantity. Electron scattering Similarly to hadronic scattering, the Kramers–Kronig relations are employed in high-energy electron scattering. In particular, they enter the derivation of the Gerasimov–Drell–Hearn sum rule. Geophysics For seismic wave propagation, the Kramers–Kronig relation helps to find the right form for the quality factor in an attenuating medium. Electrochemical impedance spectroscopy The Kramers–Kronig test is used in battery and fuel cell applications (dielectric spectroscopy) to test for linearity, causality and stationarity. Since it is not possible in practice to obtain data in the whole frequency range, as the Kramers–Kronig formula requires, approximations are necessarily made.
At high frequencies (> 1 MHz) it is usually safe to assume that the impedance is dominated by the ohmic resistance of the electrolyte, although inductance artefacts are often observed. At low frequencies, the KK test can be used to verify whether experimental data are reliable. In battery practice, data obtained with experiments of duration less than one minute usually fail the test for frequencies below 10 Hz. Therefore, care should be exercised when interpreting such data. In electrochemistry practice, due to the finite frequency range of experimental data, the Z-HIT relation is used instead of the Kramers–Kronig relations. Unlike Kramers–Kronig (which is written for an infinite frequency range), Z-HIT integration requires only a finite frequency range. Furthermore, Z-HIT is more robust with respect to errors in the real and imaginary parts of the impedance, since its accuracy depends mostly on the accuracy of the phase data. See also Dispersion (optics) Linear response function Numerical analytic continuation References Citations Sources Complex analysis Electric and magnetic fields in matter
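Returning to the finite-frequency-range problem discussed above: the sketch below (component values are made up, and the sign convention matches the analyticity convention used earlier in this article, in which a single relaxation reads Z(w) = R0 + R1/(1 − iwτ)) shows how truncating the KK integral to a measured window leaves growing residuals toward the band edge, which is one motivation for approximations such as Z-HIT:

```python
# Sketch (illustrative values): a bare-bones KK consistency check on
# synthetic impedance data over a finite "measured" frequency window.
import numpy as np

def trapezoid(y, x):
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

R0, R1, tau = 0.05, 1.0, 1e-3            # ohms, ohms, seconds (assumed)
w = np.logspace(-1, 5, 200001)           # finite window of angular frequencies
Z = R0 + R1 / (1 - 1j * w * tau)

def kk_re(wk):
    """Folded KK integral (2/pi) P int w' ImZ/(w'^2 - wk^2) dw', truncated."""
    with np.errstate(divide="ignore", invalid="ignore"):
        integrand = w * Z.imag / (w**2 - wk**2)
    integrand[np.argmin(np.abs(w - wk))] = 0.0   # crude principal value
    return 2.0 / np.pi * trapezoid(integrand, w)

for wk in (1.0 / tau, 10.0 / tau):
    direct = R0 + R1 / (1 + (wk * tau) ** 2)
    print(f"w = {wk:7.0f}: KK {R0 + kk_re(wk):.4f}   direct {direct:.4f}")
# mid-band values agree to ~1%; nearer the band edge the truncation
# error of the integral becomes clearly visible.
```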
Kramers–Kronig relations
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,350
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
2,482,050
https://en.wikipedia.org/wiki/X-ray%20image%20intensifier
An X-ray image intensifier (XRII) is an image intensifier that converts X-rays into visible light at higher intensity than the more traditional fluorescent screens can. Such intensifiers are used in X-ray imaging systems (such as fluoroscopes) to allow low-intensity X-rays to be converted to a conveniently bright visible light output. The device contains a low absorbency/scatter input window, typically aluminium, an input fluorescent screen, photocathode, electron optics, output fluorescent screen and output window. These parts are all mounted in a high-vacuum environment within glass or, more recently, metal/ceramic. By its intensifying effect, it allows the viewer to more easily see the structure of the object being imaged than fluorescent screens alone, whose images are dim. The XRII requires lower absorbed doses due to more efficient conversion of X-ray quanta to visible light. This device was originally introduced in 1948. Operation The overall function of an image intensifier is to convert incident X-ray photons to light photons of sufficient intensity to provide a viewable image. This occurs in several stages. The input window is convex in shape and made of aluminium to minimise the scattering of X-rays; it is 1 mm in thickness. Once X-rays pass through the aluminium window, they encounter the input phosphor, which converts X-rays into light photons. The thickness of the input phosphor ranges from 300 to 450 micrometres, a compromise between the absorption efficiency for X-rays and the spatial resolution: a thicker input phosphor has higher absorption efficiency but poorer spatial resolution, and vice versa. Sodium-activated caesium iodide is typically used due to its higher conversion efficiency, thanks to its high atomic number and mass attenuation coefficient, when compared to zinc-cadmium sulfide. The input phosphor is arranged into small needle-like tubes that allow photons to pass along the tube without scattering, thus improving the spatial resolution. The light photons are then converted to electrons by a photocathode. The photocathode is made of antimony-caesium, chosen to match the photons produced by the input phosphor and thus maximise the efficiency of producing photoelectrons. The photocathode has a thickness of 20 nm with an absorption efficiency of 10 to 15%. A potential difference (25–35 kilovolts) created between the anode and photocathode then accelerates these photoelectrons, while electron lenses focus the beam down to the size of the output window. The output screen is typically made of silver-activated zinc-cadmium sulfide and converts incident electrons back to visible light photons. At the input and output phosphors the number of photons is multiplied by several thousands, so that overall there is a large brightness gain. This gain makes image intensifiers highly sensitive to X-rays, such that relatively low doses can be used for fluoroscopic procedures. History X-ray image intensifiers became available in the early 1950s and were viewed through a microscope. Viewing of the output was via mirrors and optical systems until the adoption of television systems in the 1960s. Additionally, the output was able to be captured on systems with a 100 mm cut film camera using pulsed outputs from an X-ray tube, similar to a normal radiographic exposure; the difference being that the II, rather than a film screen cassette, provided the image for the film to record. The input screens range from 15–57 cm, with the 23 cm, 33 cm and 40 cm being among the most common.
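To put rough numbers on the brightness gain mentioned in the Operation section above, a common factorization multiplies an electronic flux gain by a minification gain. All values below are assumed, textbook-style round numbers, not data from this article:

```python
# Sketch (assumed values): brightness gain of an image intensifier as
# flux gain (extra light photons per accelerated electron) multiplied by
# minification gain (input area squeezed onto a smaller output screen).
input_diameter_cm = 23.0      # a common input-screen size (see text)
output_diameter_cm = 2.5      # assumed output-screen diameter
flux_gain = 50.0              # assumed electronic flux gain

minification_gain = (input_diameter_cm / output_diameter_cm) ** 2
print(f"minification gain = {minification_gain:.0f}")
print(f"total brightness gain = {flux_gain * minification_gain:.0f}x")
```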
Within each image intensifier, the actual field size can be changed using the voltages applied to the internal electron optics to achieve magnification and a reduced viewing size. For example, the 23 cm intensifier commonly used in cardiac applications can be set to a format of 23, 17, or 13 cm. Because the output screen remains fixed in size, the output appears to "magnify" the input image. High-speed digitisation of the analogue video signal came about in the mid-1970s, with pulsed fluoroscopy developed in the mid-1980s harnessing low-dose, rapid-switching X-ray tubes. In the late 1990s, image intensifiers began being replaced with flat panel detectors (FPDs) on fluoroscopy machines. Clinical applications "C-arm" mobile fluoroscopy machines are often colloquially referred to as image intensifiers (or IIs); however, strictly speaking the image intensifier is only one part of the machine (namely the detector). Fluoroscopy, using an X-ray machine with an image intensifier, has applications in many areas of medicine. Fluoroscopy allows live images to be viewed so that image-guided surgery is feasible. Common uses include orthopedics, gastroenterology and cardiology. Less common applications can include dentistry. Configurations A system containing an image intensifier may be used either as a fixed piece of equipment in a dedicated screening room or as mobile equipment for use in an operating theatre. A mobile fluoroscopy unit generally consists of two units: the X-ray generator and image detector (II) on a moveable C-arm, and a separate workstation unit used to store and manipulate the images. The patient is positioned between the two arms, typically on a radiolucent bed. Fixed systems may have a C-arm mounted to a ceiling gantry, with a separate control area. Most systems arranged as C-arms can have the image intensifier positioned above or below the patient (with the X-ray tube below or above respectively), although some static in-room systems may have fixed orientations. From a radiation protection standpoint, under-couch (X-ray tube) operation is preferable, as it reduces the amount of scattered radiation reaching operators and workers. Smaller "mini" mobile C-arms are also available, primarily used to image extremities, for example for minor hand surgery. Flat panel detectors Flat panel detectors are an alternative to image intensifiers. The advantages of this technology include lower patient dose and increased image quality, because the X-rays are always pulsed and there is no deterioration of image quality over time. Despite FPDs having a higher cost than II/TV systems, the noteworthy reductions in physical size and improved accessibility for patients are worth it, especially when dealing with paediatric patients. Feature comparison of II/TV and FPD Systems See also References Radiography Medical imaging
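The magnification trade-off described at the start of this passage can also be made quantitative. A sketch, under the assumption that brightness scales with the minification gain while the output screen stays fixed:

```python
# Sketch: selecting a smaller input field on a fixed output screen
# magnifies the image but reduces the minification gain (and hence the
# brightness) by the square of the field ratio, which is why dose rate
# is typically increased in magnification modes.
full_field_cm = 23.0
for field_cm in (23.0, 17.0, 13.0):
    magnification = full_field_cm / field_cm
    rel_brightness = (field_cm / full_field_cm) ** 2
    print(f"{field_cm:4.0f} cm mode: magnification {magnification:.2f}x, "
          f"relative brightness {rel_brightness:.2f}")
```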
X-ray image intensifier
[ "Physics", "Technology", "Engineering" ]
1,350
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum", "X-ray instrumentation", "Measuring instruments" ]
2,482,408
https://en.wikipedia.org/wiki/Photochromism
Photochromism is the reversible change of color upon exposure to light. It is a transformation of a chemical species (photoswitch) between two forms by the absorption of electromagnetic radiation (photoisomerization), where the two forms have different absorption spectra. Applications Sunglasses One of the most famous reversible photochromic applications is color-changing lenses for sunglasses. The largest limitation in using photochromic technology is that the materials cannot be made stable enough to withstand thousands of hours of outdoor exposure, so long-term outdoor applications are not appropriate at this time. The switching speed of photochromic dyes is highly sensitive to the rigidity of the environment around the dye. As a result, they switch most rapidly in solution and slowest in a rigid environment such as a polymer lens. In 2005 it was reported that attaching flexible polymers with low glass transition temperature (for example siloxanes or poly(butyl acrylate)) to the dyes allows them to switch much more rapidly in a rigid lens. Some spirooxazines with siloxane polymers attached switch at near solution-like speeds even though they are in a rigid lens matrix. Supramolecular chemistry Photochromic units have been employed extensively in supramolecular chemistry. Their ability to give a light-controlled reversible shape change means that they can be used to make or break molecular recognition motifs, or to cause a consequent shape change in their surroundings. Thus, photochromic units have been demonstrated as components of molecular switches. The coupling of photochromic units to enzymes or enzyme cofactors even provides the ability to reversibly turn enzymes "on" and "off", by altering their shape or orientation in such a way that their functions are either "working" or "broken". Data storage The possibility of using photochromic compounds for data storage was first suggested in 1956 by Yehuda Hirshberg. Since that time, there have been many investigations by various academic and commercial groups, particularly in the area of 3D optical data storage, which promises discs that can hold a terabyte of data. Initially, issues with thermal back-reactions and destructive reading dogged these studies, but more recently more stable systems have been developed. Novelty items Reversible photochromics are also found in applications such as toys, cosmetics, clothing and industrial applications. If necessary, they can be made to change between desired colors by combination with a permanent pigment. Solar energy storage Researchers at the Center for Exploitation of Solar Energy at the University of Copenhagen Department of Chemistry are studying the photochromic dihydroazulene–vinylheptafulvene system as a method to harvest and store solar energy. History Photochromism was discovered in the late 1880s, including work by Marckwald, who studied the reversible change of color of 2,3,4,4-tetrachloronaphthalen-1(4H)-one in the solid state. He labeled this phenomenon "phototropy", and this name was used until the 1950s, when Yehuda Hirshberg, of the Weizmann Institute of Science in Israel, proposed the term "photochromism". Photochromism can take place in both organic and inorganic compounds, and also has its place in biological systems (for example retinal in the vision process).
Overview Photochromism does not have a rigorous definition, but is usually used to describe compounds that undergo a reversible photochemical reaction where an absorption band in the visible part of the electromagnetic spectrum changes dramatically in strength or wavelength. In many cases, an absorbance band is present in only one form. The degree of change required for a photochemical reaction to be dubbed "photochromic" is that which appears dramatic by eye, but in essence there is no dividing line between photochromic reactions and other photochemistry. Therefore, while the trans-cis isomerization of azobenzene is considered a photochromic reaction, the analogous reaction of stilbene is not. Since photochromism is just a special case of a photochemical reaction, almost any photochemical reaction type may be used to produce photochromism with appropriate molecular design. Some of the most common processes involved in photochromism are pericyclic reactions, cis-trans isomerizations, intramolecular hydrogen transfer, intramolecular group transfers, dissociation processes and electron transfers (oxidation-reduction). Another requirement of photochromism is that the two states of the molecule should be thermally stable under ambient conditions for a reasonable time. All the same, nitrospiropyran (which back-isomerizes in the dark over ~10 minutes at room temperature) is considered photochromic. All photochromic molecules back-isomerize to their more stable form at some rate, and this back-isomerization is accelerated by heating. There is therefore a close relationship between photochromic and thermochromic compounds. The timescale of thermal back-isomerization is important for applications, and may be molecularly engineered. Photochromic compounds considered to be "thermally stable" include some diarylethenes, which do not back-isomerize even after heating at 80 °C for 3 months. Since photochromic chromophores are dyes, and operate according to well-known reactions, their molecular engineering to fine-tune their properties can be achieved relatively easily using known design models, quantum mechanics calculations, and experimentation. In particular, the tuning of absorbance bands to particular parts of the spectrum and the engineering of thermal stability have received much attention. Sometimes, and particularly in the dye industry, the term irreversible photochromic is used to describe materials that undergo a permanent color change upon exposure to ultraviolet or visible light radiation. Because by definition photochromics are reversible, there is technically no such thing as an "irreversible photochromic"—this is loose usage, and these compounds are better referred to as "photochangeable" or "photoreactive" dyes. Apart from the qualities already mentioned, several other properties of photochromics are important for their use. These include quantum yield, fatigue resistance, photostationary state, and polarity and solubility. The quantum yield of the photochemical reaction determines the efficiency of the photochromic change with respect to the amount of light absorbed. The quantum yield of isomerization can be strongly dependent on conditions. In photochromic materials, fatigue refers to the loss of reversibility by processes such as photodegradation, photobleaching, photooxidation, and other side reactions. All photochromics suffer fatigue to some extent, and its rate is strongly dependent on the activating light and the conditions of the sample.
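Since the thermal back-isomerization timescale is central to the discussion above, a small numeric sketch may help. All numbers are assumptions: the 10-minute half-life echoes the nitrospiropyran example, while the activation energy is invented purely for illustration:

```python
# Sketch (hypothetical numbers): first-order thermal back-isomerization
# of a photochromic dye, plus an Arrhenius estimate of how heating
# accelerates the back reaction.
import numpy as np

t_half = 600.0                     # s, assumed room-temperature half-life
k_room = np.log(2) / t_half
for t in (60, 600, 3600):
    print(f"t = {t:5d} s: colored fraction = {np.exp(-k_room * t):.3f}")

R, Ea = 8.314, 90e3                # gas constant J/(mol K); Ea assumed
accel = np.exp(-Ea / R * (1 / 353.15 - 1 / 298.15))
print(f"heating 25 -> 80 C speeds the back reaction roughly {accel:.0f}x")
```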
Photochromic materials have two states, and their interconversion can be controlled using different wavelengths of light. Excitation with any given wavelength of light will result in a mixture of the two states at a particular ratio, called the photostationary state. In a perfect system, there would exist wavelengths that can be used to provide 1:0 and 0:1 ratios of the isomers, but in real systems this is not possible, since the active absorbance bands always overlap to some extent. When photochromics are incorporated into working systems, they suffer the same issues as other dyes. They are often charged in one or more states, leading to very high polarity and possibly large changes in polarity. They also often contain large conjugated systems that limit their solubility. Tenebrescence Tenebrescence, also known as reversible photochromism, is the ability of minerals to change color when exposed to light. The effect can be repeated indefinitely, but is destroyed by heating. Tenebrescent minerals include hackmanite, spodumene and tugtupite. Photochromic complexes A photochromic complex is a kind of chemical compound that has photoresponsive parts on its ligand. These complexes have a specific structure: photoswitchable organic compounds are attached to metal complexes. For the photocontrollable parts, thermally and photochemically stable chromophores (azobenzene, diarylethene, spiropyran, etc.) are usually used. For the metal complexes, a wide variety of compounds that have various functions (redox response, luminescence, magnetism, etc.) are applied. The photochromic parts and metal parts are so close that they can affect each other's molecular orbitals. The physical properties of these compounds shown by parts of them (i.e., chromophores or metals) can thus be controlled by switching their other sites by external stimuli. For example, photoisomerization behaviors of some complexes can be switched by oxidation and reduction of their metal parts. Some other compounds can be changed in their luminescence behavior, magnetic interaction of metal sites, or stability of metal-to-ligand coordination by photoisomerization of their photochromic parts. Classes of photochromic materials Photochromic molecules can belong to various classes: triarylmethanes, stilbenes, azastilbenes, nitrones, fulgides, spiropyrans, naphthopyrans, spiro-oxazines, quinones and others. Spiropyrans and spirooxazines One of the oldest, and perhaps the most studied, families of photochromes are the spiropyrans. Very closely related to these are the spirooxazines. For example, the spiro form of an oxazine is a colorless leuco dye; the conjugated system of the oxazine and another aromatic part of the molecule is separated by an sp³-hybridized "spiro" carbon. After irradiation with UV light, the bond between the spiro-carbon and the oxazine breaks, the ring opens, the spiro carbon achieves sp² hybridization and becomes planar, the aromatic group rotates, aligning its π-orbitals with the rest of the molecule, and a conjugated system forms with the ability to absorb photons of visible light, so that the molecule appears colored. When the UV source is removed, the molecules gradually relax to their ground state, the carbon-oxygen bond reforms, the spiro-carbon becomes sp³-hybridized again, and the molecule returns to its colorless state. This class of photochromes in particular is thermodynamically unstable in one form and reverts to the stable form in the dark unless cooled to low temperatures.
Their lifetime can also be affected by exposure to UV light. Like most organic dyes, they are susceptible to degradation by oxygen and free radicals. Incorporation of the dyes into a polymer matrix, adding a stabilizer, or providing a barrier to oxygen and chemicals by other means prolongs their lifetime. Diarylethenes The "diarylethenes" were first introduced by Irie and have since gained widespread interest, largely on account of their high thermodynamic stability. They operate by means of a 6-pi electrocyclic reaction, the thermal analog of which is impossible due to steric hindrance. Pure photochromic dyes usually have the appearance of a crystalline powder, and in order to achieve the color change, they usually have to be dissolved in a solvent or dispersed in a suitable matrix. However, some diarylethenes have so little shape change upon isomerization that they can be converted while remaining in crystalline form. Azobenzenes The photochromic trans-cis isomerization of azobenzenes has been used extensively in molecular switches, often taking advantage of its shape change upon isomerization to produce a supramolecular result. In particular, azobenzenes incorporated into crown ethers give switchable receptors, and azobenzenes in monolayers can provide light-controlled changes in surface properties. Photochromic quinones Some quinones, and phenoxynaphthacene quinone in particular, have photochromicity resulting from the ability of the phenyl group to migrate from one oxygen atom to another. Quinones with good thermal stability have been prepared, and they also have the additional feature of redox activity, leading to the construction of many-state molecular switches that operate by a mixture of photonic and electronic stimuli. Inorganic photochromics Many inorganic substances also exhibit photochromic properties, often with much better resistance to fatigue than organic photochromics. In particular, silver chloride is extensively used in the manufacture of photochromic lenses. Other silver and zinc halides are also photochromic. Yttrium oxyhydride is another inorganic material with photochromic properties. Photochromic coordination compounds Photochromic coordination complexes are relatively rare in comparison to the organic compounds listed above. There are two major classes of photochromic coordination compounds: those based on sodium nitroprusside and the ruthenium sulfoxide compounds. The ruthenium sulfoxide complexes were created and developed by Rack and coworkers. The mode of action is an excited-state isomerization of a sulfoxide ligand on a ruthenium polypyridine fragment from S to O or O to S. The difference in bonding between Ru and S or O leads to the dramatic color change and change in Ru(III/II) reduction potential. The ground state is always S-bonded and the metastable state is always O-bonded. Typically, absorption maxima changes of nearly 100 nm are observed. The metastable states (O-bonded isomers) of this class often revert thermally to their respective ground states (S-bonded isomers), although a number of examples exhibit two-color reversible photochromism. Ultrafast spectroscopy of these compounds has revealed exceptionally fast isomerization lifetimes ranging from 1.5 nanoseconds to 48 picoseconds. See also Photosensitive glass Hexaarylbiimidazole References Photochemistry Chromism Minerals
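As a worked example of the photostationary state described earlier in this article: in the optically thin limit, steady state requires the forward and backward photoreaction rates to balance, eps_A * phi_AB * [A] = eps_B * phi_BA * [B]. The sketch below uses invented molar absorptivities and quantum yields purely for illustration:

```python
# Sketch (invented parameters): photostationary-state composition of a
# two-state photoswitch A <-> B pumped at one wavelength, optically thin.
eps_A, eps_B = 12000.0, 1500.0   # molar absorptivities at the pump wavelength
phi_AB, phi_BA = 0.4, 0.1        # forward / backward isomerization quantum yields

ratio = (eps_A * phi_AB) / (eps_B * phi_BA)   # [B]/[A] at steady state
frac_B = ratio / (1.0 + ratio)
print(f"PSS: {100 * frac_B:.1f}% B / {100 * (1 - frac_B):.1f}% A")
```

Because the absorbance bands of the two forms always overlap, the ratio is finite: even a strongly one-sided choice of wavelength never yields a pure 1:0 mixture.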
Photochromism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,978
[ "Spectrum (physical sciences)", "Chromism", "Materials science", "nan", "Smart materials", "Spectroscopy" ]
2,484,577
https://en.wikipedia.org/wiki/Schottky%20effect
The Schottky effect or field enhanced thermionic emission is a phenomenon in condensed matter physics named after Walter H. Schottky. In electron emission devices, especially electron guns, the thermionic electron emitter will be biased negative relative to its surroundings. This creates an electric field of magnitude F at the emitter surface. Without the field, the surface barrier seen by an escaping Fermi-level electron has height W equal to the local work function. The electric field lowers the surface barrier by an amount ΔW and increases the emission current. It can be modeled by a simple modification of the Richardson equation, replacing W by (W − ΔW). This gives the equation $J = A_G T^2 \exp\left(-\frac{W - \Delta W}{kT}\right)$, $\Delta W = \sqrt{\frac{q_e^3 F}{4\pi\varepsilon_0}}$, where J is the emission current density, T is the temperature of the metal, W is the work function of the metal, k is the Boltzmann constant, qe is the elementary charge, ε0 is the vacuum permittivity, and AG is the product of a universal constant A0 and a material-specific correction factor λR, which is typically of order 0.5. Electron emission that takes place in the field-and-temperature regime where this modified equation applies is often called Schottky emission. This equation is relatively accurate for electric field strengths lower than about 10⁸ V m⁻¹. For electric field strengths higher than 10⁸ V m⁻¹, so-called Fowler–Nordheim (FN) tunneling begins to contribute significant emission current. In this regime, the combined effects of field-enhanced thermionic and field emission can be modeled by the Murphy–Good equation for thermo-field (T-F) emission. At even higher fields, FN tunneling becomes the dominant electron emission mechanism, and the emitter operates in the so-called "cold field electron emission (CFE)" regime. Thermionic emission can also be enhanced by interaction with other forms of excitation such as light. For example, excited Cs vapours in thermionic converters form clusters of Cs-Rydberg matter which yield a decrease of the collector emitting work function from 1.5 eV to 1.0–0.7 eV. Due to the long-lived nature of Rydberg matter, this low work function remains low, which essentially increases the low-temperature converter's efficiency. References External links Condensed matter physics Electron beam
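For a sense of scale, the sketch below evaluates the barrier lowering and the resulting emission enhancement from the modified Richardson equation. The temperature and field are my own illustrative choices (the field is kept below the ~10⁸ V/m validity limit quoted above):

```python
# Sketch: Schottky barrier lowering dW = sqrt(qe^3 * F / (4*pi*eps0)) and
# the emission-current enhancement factor exp(dW / (k*T)).
import math

qe = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
k = 1.380649e-23          # Boltzmann constant, J/K

T = 1800.0                # K, assumed emitter temperature
F = 5.0e7                 # V/m, assumed surface field (below ~1e8 V/m)

dW = math.sqrt(qe**3 * F / (4 * math.pi * eps0))
enhancement = math.exp(dW / (k * T))
print(f"barrier lowering = {dW / qe * 1000:.0f} meV")
print(f"emission enhancement at {T:.0f} K = {enhancement:.1f}x")
```

With these numbers the barrier drops by roughly a quarter of an electronvolt and the current rises severalfold, which is why the field term matters even well below the tunneling regime.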
Schottky effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
483
[ "Electron", "Electron beam", "Phases of matter", "Materials science", "Condensed matter physics", "Matter" ]
2,484,769
https://en.wikipedia.org/wiki/Catalytic%20cycle
In chemistry, a catalytic cycle is a multistep reaction mechanism that involves a catalyst. The catalytic cycle is the main method for describing the role of catalysts in biochemistry, organometallic chemistry, bioinorganic chemistry, materials science, etc. Since catalysts are regenerated, catalytic cycles are usually written as a sequence of chemical reactions in the form of a loop. In such loops, the initial step entails binding of one or more reactants by the catalyst, and the final step is the release of the product and regeneration of the catalyst. Articles on the Monsanto process, the Wacker process, and the Heck reaction show catalytic cycles. A catalytic cycle is not necessarily a full reaction mechanism. For example, it may be that the intermediates have been detected, but it is not known by which mechanisms the actual elementary reactions occur. Precatalysts Precatalysts are not catalysts but are precursors to catalysts. Precatalysts are converted in the reactor to the actual catalytic species. The identification of catalysts versus precatalysts is an important theme in catalysis research. The conversion of a precatalyst to a catalyst is often called catalyst activation. Many metal halides are precatalysts for alkene polymerization; see Kaminsky catalyst and Ziegler–Natta catalysis. The precatalysts, e.g. titanium trichloride, are activated by organoaluminium compounds, which function as catalyst activators. Metal oxides are often classified as catalysts, but in fact are almost always precatalysts. Applications include olefin metathesis and hydrogenation. The metal oxides require some activating reagent, usually a reducing agent, to enter the catalytic cycle. Often catalytic cycles show the conversion of a precatalyst to the catalyst. Sacrificial catalysts Often a so-called sacrificial catalyst is also part of the reaction system, with the purpose of regenerating the true catalyst in each cycle. As the name implies, the sacrificial catalyst is not regenerated but is irreversibly consumed, and is therefore not a catalyst at all. This sacrificial compound is also known as a stoichiometric catalyst when added in stoichiometric quantities compared to the main reactant. Usually the true catalyst is an expensive and complex molecule, and is added in quantities as small as possible. The stoichiometric catalyst, on the other hand, should be cheap and abundant. "Sacrificial catalysts" are more accurately referred to by their actual role in the catalytic cycle, for example as a reductant. References Reaction mechanisms Catalysis
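The loop structure (bind, transform, release, regenerate) can be made concrete with a toy rate model. This is a minimal sketch of a generic two-step cycle, not any specific cycle named in the article; the rate constants and concentrations are arbitrary:

```python
# Sketch (arbitrary constants): the cycle Cat + S -> CatS -> Cat + P as
# mass-action kinetics.  The catalyst shuttles between free and bound
# forms, so [Cat] + [CatS] is conserved while S is converted to P.
from scipy.integrate import solve_ivp

k1, k2 = 5.0, 2.0                       # assumed binding / release rate constants

def cycle(t, y):
    cat, cats, s, p = y
    bind, release = k1 * cat * s, k2 * cats
    return [-bind + release, bind - release, -bind, release]

# initial state: 0.01 catalyst, 1.0 substrate, no bound complex or product
sol = solve_ivp(cycle, (0.0, 20.0), [0.01, 0.0, 1.0, 0.0])
cat, cats, s, p = sol.y[:, -1]
print(f"catalyst conserved: {cat + cats:.4f} (started at 0.0100)")
print(f"product formed after 20 time units: {p:.3f}")
```

Note how the product formed far exceeds the catalyst loading: that turnover is the signature of a genuine catalytic cycle, and is exactly what a sacrificial (stoichiometric) additive lacks.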
Catalytic cycle
[ "Chemistry" ]
552
[ "Catalysis", "Reaction mechanisms", "Chemical kinetics", "Physical organic chemistry" ]
2,485,027
https://en.wikipedia.org/wiki/Non-covalent%20interaction
In chemistry, a non-covalent interaction differs from a covalent bond in that it does not involve the sharing of electrons, but rather involves more dispersed variations of electromagnetic interactions between molecules or within a molecule. The chemical energy released in the formation of non-covalent interactions is typically on the order of 1–5 kcal/mol (1000–5000 calories per 6.02×10²³ molecules). Non-covalent interactions can be classified into different categories, such as electrostatic, π-effects, van der Waals forces, and hydrophobic effects. Non-covalent interactions are critical in maintaining the three-dimensional structure of large molecules, such as proteins and nucleic acids. They are also involved in many biological processes in which large molecules bind specifically but transiently to one another (see the properties section of the DNA page). These interactions also heavily influence drug design, crystallinity and design of materials, particularly for self-assembly, and, in general, the synthesis of many organic molecules. The non-covalent interactions may occur between different parts of the same molecule (e.g. during protein folding) or between different molecules, and are therefore also discussed as intermolecular forces. Electrostatic interactions Ionic Ionic interactions involve the attraction of ions or molecules with full permanent charges of opposite signs. For example, sodium fluoride involves the attraction of the positive charge on sodium (Na+) with the negative charge on fluoride (F−). However, this particular interaction is easily broken upon addition to water, or other highly polar solvents. In water, ion pairing is mostly entropy-driven; a single salt bridge usually amounts to an attraction value of about ΔG = 5 kJ/mol at intermediate ionic strength I; at I close to zero the value increases to about 8 kJ/mol. The ΔG values are usually additive and largely independent of the nature of the participating ions, except for transition metal ions, etc. These interactions can also be seen in molecules with a localized charge on a particular atom. For example, the full negative charge associated with ethoxide, the conjugate base of ethanol, is most commonly accompanied by the positive charge of an alkali metal salt such as the sodium cation (Na+). Hydrogen bonding A hydrogen bond (H-bond) is a specific type of interaction that involves dipole–dipole attraction between a partially positive hydrogen atom and a highly electronegative, partially negative oxygen, nitrogen, sulfur, or fluorine atom (not covalently bound to said hydrogen atom). It is not a covalent bond, but instead is classified as a strong non-covalent interaction. It is the reason why water is a liquid at room temperature and not a gas (given water's low molecular weight). Most commonly, the strength of hydrogen bonds lies between 0–4 kcal/mol, but they can sometimes be as strong as 40 kcal/mol. In solvents such as chloroform or carbon tetrachloride one observes, e.g. for the interaction between amides, additive values of about 5 kJ/mol. According to Linus Pauling, the strength of a hydrogen bond is essentially determined by the electrostatic charges. Measurements of thousands of complexes in chloroform or carbon tetrachloride have led to additive free energy increments for all kinds of donor-acceptor combinations.
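A back-of-the-envelope Coulomb calculation makes the solvent effect on ionic interactions concrete. This is a sketch: the contact distance is an assumed round number, and treating water as a uniform dielectric continuum is a crude approximation, but it shows how screening collapses a huge vacuum attraction toward the few-kJ/mol salt-bridge scale quoted above:

```python
# Sketch (assumed distance, continuum-dielectric model): Coulomb energy
# of a Na+/F- ion pair in vacuum versus in water (eps_r ~ 80), in kJ/mol.
import math

qe = 1.602176634e-19         # elementary charge, C
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
NA = 6.02214076e23           # Avogadro constant, 1/mol
r = 2.3e-10                  # m, assumed contact distance

for name, eps_r in (("vacuum", 1.0), ("water", 80.0)):
    E = -qe**2 * NA / (4.0 * math.pi * eps0 * eps_r * r) / 1000.0
    print(f"{name:6s}: {E:8.1f} kJ/mol")
```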
Halogen bonding Halogen bonding is a type of non-covalent interaction which does not involve the formation or breaking of actual bonds, but rather is similar to the dipole–dipole interaction known as hydrogen bonding. In halogen bonding, a halogen atom acts as an electrophile, or electron-seeking species, and forms a weak electrostatic interaction with a nucleophile, or electron-rich species. The nucleophilic agent in these interactions tends to be highly electronegative (such as oxygen, nitrogen, or sulfur), or may be anionic, bearing a negative formal charge. As compared to hydrogen bonding, the halogen atom takes the place of the partially positively charged hydrogen as the electrophile. Halogen bonding should not be confused with halogen–aromatic interactions, as the two are related but differ by definition. Halogen–aromatic interactions involve an electron-rich aromatic π-cloud as a nucleophile; halogen bonding is restricted to monatomic nucleophiles. Van der Waals forces Van der Waals forces are a subset of electrostatic interactions involving permanent or induced dipoles (or multipoles). These include the following: permanent dipole–dipole interactions, alternatively called the Keesom force; dipole-induced dipole interactions, or the Debye force; and induced dipole-induced dipole interactions, commonly referred to as London dispersion forces. Hydrogen bonding and halogen bonding are typically not classified as van der Waals forces. Dipole–dipole Dipole–dipole interactions are electrostatic interactions between permanent dipoles in molecules. These interactions tend to align the molecules to increase attraction (reducing potential energy). Normally, dipoles are associated with electronegative atoms, including oxygen, nitrogen, sulfur, and fluorine. For example, acetone, the active ingredient in some nail polish removers, has a net dipole associated with the carbonyl (see figure 2). Since oxygen is more electronegative than the carbon that is covalently bonded to it, the electrons associated with that bond will be closer to the oxygen than the carbon, creating a partial negative charge (δ−) on the oxygen, and a partial positive charge (δ+) on the carbon. They are not full charges because the electrons are still shared through a covalent bond between the oxygen and carbon. If the electrons were no longer being shared, then the oxygen-carbon bond would be an electrostatic interaction. Often molecules contain dipolar groups, but have no overall dipole moment. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane. Note that the dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. See atomic dipoles. Dipole-induced dipole A dipole-induced dipole interaction (Debye force) is due to the approach of a molecule with a permanent dipole to another non-polar molecule with no permanent dipole. This approach causes the electrons of the non-polar molecule to be polarized toward or away from the approaching dipole, thereby "inducing" a dipole in the non-polar molecule. Specifically, the dipole can cause electrostatic attraction or repulsion of the electrons of the non-polar molecule, depending on the orientation of the incoming dipole. Atoms with larger atomic radii are considered more "polarizable" and therefore experience greater attractions as a result of the Debye force.
London dispersion forces London dispersion forces are the weakest type of non-covalent interaction. In organic molecules, however, the multitude of contacts can lead to larger contributions, particularly in the presence of heteroatoms. They are also known as "induced dipole-induced dipole interactions" and are present between all molecules, even those which inherently do not have permanent dipoles. Dispersive interactions increase with the polarizability of interacting groups, but are weakened by solvents of increased polarizability. They are caused by the temporary repulsion of electrons away from the electrons of a neighboring molecule, leading to a partially positive dipole on one molecule and a partially negative dipole on another molecule. Hexane is a good example of a molecule with no polarity or highly electronegative atoms, yet it is a liquid at room temperature due mainly to London dispersion forces. In this example, when one hexane molecule approaches another, a temporary, weak, partially negative dipole on the incoming hexane can polarize the electron cloud of another, causing a partially positive dipole on that hexane molecule. In the absence of solvents, hydrocarbons such as hexane form crystals due to dispersive forces; the sublimation heat of crystals is a measure of the dispersive interaction. While these interactions are short-lived and very weak, they can be responsible for why certain non-polar molecules are liquids at room temperature. π-effects π-effects can be broken down into numerous categories, including π-stacking, cation-π and anion-π interactions, and polar-π interactions. In general, π-effects are associated with the interactions of molecules with the π-systems of arenes. π–π interaction π–π interactions are associated with the interaction between the π-orbitals of a molecular system. The high polarizability of aromatic rings leads to dispersive interactions as a major contribution to so-called stacking effects. These play a major role for interactions of nucleobases, e.g. in DNA. For a simple example, a benzene ring, with its fully conjugated π cloud, will interact in two major ways (and one minor way) with a neighboring benzene ring through a π–π interaction (see figure 3). The two major ways that benzene stacks are edge-to-face, with an enthalpy of ~2 kcal/mol, and displaced (or slip stacked), with an enthalpy of ~2.3 kcal/mol. The sandwich configuration is not nearly as stable an interaction as the two previously mentioned, due to high electrostatic repulsion of the electrons in the π orbitals. Cation–π and anion–π interaction Cation–π interactions can be as strong as or stronger than H-bonding in some contexts. Anion–π interactions are very similar to cation–π interactions, but reversed. In this case, an anion sits atop an electron-poor π-system, usually established by the presence of electron-withdrawing substituents on the conjugated molecule. Polar–π Polar–π interactions involve molecules with permanent dipoles (such as water) interacting with the quadrupole moment of a π-system (such as that in benzene; see figure 5). While not as strong as a cation–π interaction, these interactions can be quite strong (~1–2 kcal/mol), and are commonly involved in protein folding and crystallinity of solids containing both hydrogen bonding and π-systems. In fact, any molecule with a hydrogen bond donor (hydrogen bound to a highly electronegative atom) will have favorable electrostatic interactions with the electron-rich π-system of a conjugated molecule.
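To attach numbers to dispersion, a Lennard-Jones potential is a common stand-in: its attractive (σ/r)⁶ term models the induced dipole-induced dipole attraction. The sketch below uses rough methane-like parameters (σ ≈ 3.73 Å, ε/k_B ≈ 148 K), chosen only to show the magnitude and steep distance dependence, not taken from this article:

```python
# Sketch (rough methane-like parameters): Lennard-Jones pair energy as a
# proxy for London dispersion between two non-polar sites.
import numpy as np

epsilon = 1.23                 # well depth, kJ/mol (from eps/kB ~ 148 K)
sigma = 3.73e-10               # m

def lj(r):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)   # repulsive r^-12, attractive r^-6

for r in np.array([3.8, 4.2, 5.0, 7.0]) * 1e-10:
    print(f"r = {r * 1e10:.1f} A : U = {lj(r):+.3f} kJ/mol")
```

A single contact is worth only about a kJ/mol near the minimum and almost nothing at 7 Å, which is why dispersion only dominates when many contacts add up, as in a hexane crystal.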
Hydrophobic effect The hydrophobic effect is the tendency of non-polar molecules to aggregate in aqueous solutions in order to separate from water. This phenomenon leads to a minimum exposed surface area of non-polar molecules to the polar water molecules (typically spherical droplets), and is commonly used in biochemistry to study protein folding and various other biological phenomena. The effect is also commonly seen when mixing various oils (including cooking oil) and water. Over time, oil sitting on top of water will begin to aggregate into large flattened spheres from smaller droplets, eventually leading to a film of all oil sitting atop a pool of water. However, the hydrophobic effect is not considered a non-covalent interaction, as it is a function of entropy and not a specific interaction between two molecules, and it is usually characterized by entropy–enthalpy compensation. An essentially enthalpic hydrophobic effect materializes if a limited number of water molecules are restricted within a cavity; displacement of such water molecules by a ligand frees the water molecules, which then in the bulk water enjoy a maximum number of hydrogen bonds, close to four. Examples Drug design Most pharmaceutical drugs are small molecules which elicit a physiological response by "binding" to enzymes or receptors, causing an increase or decrease in the enzyme's ability to function. The binding of a small molecule to a protein is governed by a combination of steric, or spatial, considerations, in addition to various non-covalent interactions, although some drugs do covalently modify an active site (see irreversible inhibitors). Using the "lock and key model" of enzyme binding, a drug (key) must be of roughly the proper dimensions to fit the enzyme's binding site (lock). Using the appropriately sized molecular scaffold, drugs must also interact with the enzyme non-covalently in order to maximize binding affinity (binding constant) and reduce the ability of the drug to dissociate from the binding site. This is achieved by forming various non-covalent interactions between the small molecule and amino acids in the binding site, including: hydrogen bonding, electrostatic interactions, pi stacking, van der Waals interactions, and dipole–dipole interactions. Non-covalent metallo drugs have been developed. For example, dinuclear triple-helical compounds in which three ligand strands wrap around two metals, resulting in a roughly cylindrical tetracation, have been prepared. These compounds bind to the less-common nucleic acid structures, such as duplex DNA, Y-shaped fork structures and 4-way junctions. Protein folding and structure The folding of proteins from a primary (linear) sequence of amino acids to a three-dimensional structure is directed by all types of non-covalent interactions, including the hydrophobic forces and formation of intramolecular hydrogen bonds. Three-dimensional structures of proteins, including the secondary and tertiary structures, are stabilized by formation of hydrogen bonds. Through a series of small conformational changes, spatial orientations are modified so as to arrive at the most energetically minimized orientation achievable. The folding of proteins is often facilitated by enzymes known as molecular chaperones. Sterics, bond strain, and angle strain also play major roles in the folding of a protein from its primary sequence to its tertiary structure. Single tertiary protein structures can also assemble to form protein complexes composed of multiple independently folded subunits.
As a whole, this is called a protein's quaternary structure. The quaternary structure is generated by the formation of relatively strong non-covalent interactions, such as hydrogen bonds, between different subunits to generate a functional polymeric enzyme. Some proteins also utilize non-covalent interactions to bind cofactors in the active site during catalysis; however, a cofactor can also be covalently attached to an enzyme. Cofactors can be either organic or inorganic molecules which assist in the catalytic mechanism of the active enzyme. The strength with which a cofactor is bound to an enzyme may vary greatly; non-covalently bound cofactors are typically anchored by hydrogen bonds or electrostatic interactions. Boiling points Non-covalent interactions have a significant effect on the boiling point of a liquid. Boiling point is defined as the temperature at which the vapor pressure of a liquid is equal to the pressure surrounding the liquid. More simply, it is the temperature at which a liquid becomes a gas. As one might expect, the stronger the non-covalent interactions present for a substance, the higher its boiling point. For example, consider three compounds of similar chemical composition: sodium n-butoxide (C4H9ONa), diethyl ether (C4H10O), and n-butanol (C4H9OH). The predominant non-covalent interactions associated with each species in solution are listed in the above figure. As previously discussed, ionic interactions require considerably more energy to break than hydrogen bonds, which in turn require more energy than dipole–dipole interactions. The trends observed in their boiling points (figure 8) show exactly the expected correlation, where sodium n-butoxide requires significantly more heat energy (higher temperature) to boil than n-butanol, which boils at a much higher temperature than diethyl ether. The heat energy required for a compound to change from liquid to gas is associated with the energy required to break the intermolecular forces each molecule experiences in its liquid state. References Chemical bonding Supramolecular chemistry
Non-covalent interaction
[ "Physics", "Chemistry", "Materials_science" ]
3,337
[ "Condensed matter physics", "nan", "Nanotechnology", "Chemical bonding", "Supramolecular chemistry" ]
1,184,376
https://en.wikipedia.org/wiki/Bargmann%27s%20limit
In quantum mechanics, Bargmann's limit, named for Valentine Bargmann, provides an upper bound on the number $N_\ell$ of bound states with azimuthal quantum number $\ell$ in a system with central potential $V$. It takes the form $N_\ell < \frac{1}{2\ell+1}\,\frac{2m}{\hbar^2}\int_0^\infty r\,|V(r)|\,dr$. This limit is the best possible upper bound in such a way that for a given $\ell$, one can always construct a potential for which $N_\ell$ is arbitrarily close to this upper bound. Note that the Dirac delta function potential attains this limit. After the first proof of this inequality by Valentine Bargmann in 1953, Julian Schwinger presented an alternative way of deriving it in 1961. Rigorous formulation and proof Stated in a formal mathematical way, Bargmann's limit goes as follows. Let $V$ be a spherically symmetric potential that is piecewise continuous in $r$ and decays fast enough near the origin and at infinity for the integral $\int_0^\infty r\,|V(r)|\,dr$ to converge. If this integral is finite, then the number of bound states $N_\ell$ with azimuthal quantum number $\ell$, for a particle of mass $m$ obeying the corresponding Schrödinger equation, is bounded from above by $N_\ell < \frac{1}{2\ell+1}\,\frac{2m}{\hbar^2}\int_0^\infty r\,|V(r)|\,dr$. Although the original proof by Valentine Bargmann is quite technical, the main idea follows from two general theorems on ordinary differential equations, the Sturm Oscillation Theorem and the Sturm–Picone Comparison Theorem. If we denote by $u_\ell$ the regular zero-energy radial wave function subject to the given potential with azimuthal quantum number $\ell$, the Sturm Oscillation Theorem implies that $N_\ell$ equals the number of nodes of $u_\ell$. From the Sturm–Picone Comparison Theorem, it follows that when subject to a stronger (more attractive) potential, the number of nodes either grows or remains the same. Thus, more specifically, we can replace the potential $V$ by $-|V|$. For the corresponding zero-energy wave function with azimuthal quantum number $\ell$, denoted by $u$, the radial Schrödinger equation becomes $u''(r) = \left[\frac{\ell(\ell+1)}{r^2} - W(r)\right]u(r)$ with $W(r) = \frac{2m}{\hbar^2}\,|V(r)|$. By applying variation of parameters with the free solutions $r^{\ell+1}$ and $r^{-\ell}$, one can obtain the following implicit solution $u(r) = r^{\ell+1} - \int_0^r G(r, r')\,W(r')\,u(r')\,dr'$, where $G(r, r')$ is given by $G(r, r') = \frac{1}{2\ell+1}\left(r^{\ell+1}\,r'^{-\ell} - r^{-\ell}\,r'^{\,\ell+1}\right)$. If we now denote all successive nodes of $u$ by $\nu_1 < \nu_2 < \dots$ (counting the origin, where $u$ vanishes, as $\nu_0 = 0$), one can show from the implicit solution above that for consecutive nodes $\nu_i$ and $\nu_{i+1}$, $\int_{\nu_i}^{\nu_{i+1}} r\,W(r)\,dr > 2\ell + 1$. From this, we can conclude that $\frac{2m}{\hbar^2}\int_0^\infty r\,|V(r)|\,dr > (2\ell+1)\,N_\ell$, proving Bargmann's limit. Note that as the integral on the right is assumed to be finite, so must be the number of nodes and hence $N_\ell$. Furthermore, for a given value of $\ell$, one can always construct a potential for which $N_\ell$ is arbitrarily close to Bargmann's limit. The idea to obtain such a potential is to approximate Dirac delta function potentials, as these attain the limit exactly. An example of such a construction can be found in Bargmann's original paper.
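A numerical illustration may be useful. The sketch below is my own construction, in units 2m = ħ = 1, for an attractive exponential well with made-up depth and range: it counts the nodes of the regular zero-energy s-wave solution (by the oscillation theorem, the number of ℓ = 0 bound states) and compares the count with Bargmann's bound, which for this well is simply V0·a²:

```python
# Sketch (units 2m = hbar = 1, assumed well parameters): node-counting
# check of Bargmann's limit for V(r) = -V0 * exp(-r/a) at l = 0, where the
# zero-energy radial equation reads u'' = -|V(r)| * u.
import math

V0, a = 9.0, 1.0
r_max, n_steps = 40.0, 400000
dr = r_max / n_steps

u, du, r = 0.0, 1.0, 0.0       # regular solution: u(0) = 0, u'(0) = 1
nodes, prev = 0, 0.0
for _ in range(n_steps):
    du += -V0 * math.exp(-r / a) * u * dr   # semi-implicit Euler step
    u += du * dr
    r += dr
    if prev * u < 0.0:                      # sign change = one node
        nodes += 1
    prev = u

bargmann_bound = V0 * a**2     # integral of r*|V(r)| dr, analytic here
print(f"s-wave bound states (nodes): {nodes}; Bargmann bound: {bargmann_bound:.1f}")
```

The bound is respected but not tight for a smooth well, consistent with the remark that only delta-function-like potentials saturate it.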
Bargmann's limit
[ "Physics" ]
520
[ "Theoretical physics", "Quantum mechanics" ]
1,184,860
https://en.wikipedia.org/wiki/Organogenesis
Organogenesis is the phase of embryonic development that starts at the end of gastrulation and continues until birth. During organogenesis, the three germ layers formed from gastrulation (the ectoderm, endoderm, and mesoderm) form the internal organs of the organism. The cells of each of the three germ layers undergo differentiation, a process where less-specialized cells become more-specialized through the expression of a specific set of genes. Cell differentiation is driven by cell signaling cascades. Differentiation is influenced by extracellular signals such as growth factors that are exchanged with adjacent cells, which is called juxtacrine signaling, or with neighboring cells over short distances, which is called paracrine signaling. Intracellular signals – a cell signaling itself (autocrine signaling) – also play a role in organ formation. These signaling pathways allow for cell rearrangement and ensure that organs form at specific sites within the organism. The organogenesis process can be studied using embryos and organoids. Organs produced by the germ layers The endoderm is the innermost germ layer of the embryo, which gives rise to gastrointestinal and respiratory organs by forming epithelial linings and organs such as the liver, lungs, and pancreas. The mesoderm, or middle germ layer of the embryo, will form the blood, heart, kidney, muscles, and connective tissues. The ectoderm, or outermost germ layer of the developing embryo, forms the epidermis, the brain, and the nervous system. Mechanism of organ formation While each germ layer forms specific organs, in the 1820s the embryologist Heinz Christian Pander discovered that the germ layers cannot form their respective organs without cellular interactions from other tissues. In humans, internal organs begin to develop within 3–8 weeks after fertilization. The germ layers form organs by three processes: folds, splits, and condensation. Folds form in the germinal sheet of cells and usually form an enclosed tube, as can be seen in the development of the vertebrate neural tube. Splits or pockets may form in the germinal sheet of cells, forming vesicles or elongations. The lungs and glands of the organism may develop this way. A primary step in organogenesis for chordates is the development of the notochord, which induces the formation of the neural plate, and ultimately the neural tube, in vertebrate development. The development of the neural tube will give rise to the brain and spinal cord. Vertebrates develop a neural crest that differentiates into many structures, including bones, muscles, and components of the central nervous system. Differentiation of the ectoderm into the neural crest, neural tube, and surface ectoderm is sometimes referred to as neurulation, and the embryo in this phase is the neurula. The coelom of the body forms from a split of the mesoderm along the somite axis. Plant organogenesis In plants, organogenesis occurs continuously and only stops when the plant dies. In the shoot, the shoot apical meristems regularly produce new lateral organs (leaves or flowers) and lateral branches. In the root, new lateral roots form from weakly differentiated internal tissue (e.g. the xylem-pole pericycle in the model plant Arabidopsis thaliana). In vitro, and in response to specific cocktails of hormones (mainly auxins and cytokinins), most plant tissues can de-differentiate and form a mass of dividing totipotent stem cells called a callus. Organogenesis can then occur from those cells.
The type of organ that is formed depends on the relative concentrations of the hormones in the medium. Plant organogenesis can be induced in tissue culture and used to regenerate plants. See also Ectoderm Embryogenesis Endoderm Eye development Gastrulation Germ layer Germ line development Gonadogenesis Heart development Histogenesis Limb development List of human cell types derived from the germ layers Mesoderm Morphogenesis Organoid References External links Developmental biology Embryology Organogenesis
Organogenesis
[ "Biology" ]
849
[ "Behavior", "Developmental biology", "Reproduction" ]
1,185,139
https://en.wikipedia.org/wiki/Resolved%20sideband%20cooling
Resolved sideband cooling is a laser cooling technique allowing cooling of tightly bound atoms and ions beyond the Doppler cooling limit, potentially to their motional ground state. Aside from the curiosity of having a particle at zero point energy, such preparation of a particle in a definite state with high probability (initialization) is an essential part of state manipulation experiments in quantum optics and quantum computing. Historical notes As of the writing of this article, the scheme behind what we refer to as resolved sideband cooling today is attributed to D. J. Wineland and H. Dehmelt, in their article "Proposed laser fluorescence spectroscopy on mono-ion oscillator III (sideband cooling)". The clarification is important, as at the time of the latter article, the term also designated what we call today Doppler cooling, which was experimentally realized with atomic ion clouds in 1978 by W. Neuhauser and independently by D. J. Wineland. An experiment that demonstrates resolved sideband cooling unequivocally in its contemporary meaning is that of Diedrich et al. A similarly unequivocal realization with non-Rydberg neutral atoms was demonstrated in 1998 by S. E. Hamann et al. via Raman cooling. Conceptual description Resolved sideband cooling is a laser-cooling technique that can be used to cool strongly trapped atoms to the quantum ground state of their motion. The atoms are usually precooled using Doppler laser cooling. Subsequently, resolved sideband cooling is used to cool the atoms beyond the Doppler cooling limit. A cold trapped atom can be treated to a good approximation as a quantum-mechanical harmonic oscillator. If the spontaneous decay rate is much smaller than the vibrational frequency $\nu$ of the atom in the trap, the energy levels of the system will be an evenly spaced frequency ladder, with adjacent levels spaced by an energy $\hbar\nu$. Each level is denoted by a motional quantum number $n$, which describes the amount of motional energy present at that level. These motional quanta can be understood in the same way as for the quantum harmonic oscillator. A ladder of levels is available for each internal state of the atom; for example, both the ground (g) and excited (e) states have their own ladder of vibrational levels. Consider a two-level atom whose ground state is denoted by g and excited state by e. Efficient laser cooling occurs when the frequency of the laser beam is tuned to the red sideband, i.e. $\omega = \omega_0 - \nu$, where $\omega_0$ is the internal atomic transition frequency corresponding to the transition between g and e, and $\nu$ is the harmonic-oscillation frequency of the atom. In this case the atom undergoes the transition $|g, n\rangle \rightarrow |e, n-1\rangle$, where $|a, m\rangle$ represents the state of an ion whose internal atomic state is $a$ and whose motional state is $m$. If the recoil energy of the atom is negligible compared with the vibrational quantum energy, subsequent spontaneous emission occurs predominantly at the carrier frequency $\omega_0$, meaning that the vibrational quantum number remains constant. This transition is $|e, n-1\rangle \rightarrow |g, n-1\rangle$. The overall effect of one of these cycles is to reduce the vibrational quantum number of the atom by one. To cool to the ground state, this cycle is repeated many times until $n = 0$ is reached with a high probability. Theoretical basis The core process that provides the cooling assumes a two-level system that is well localized compared to the wavelength ($\lambda$) of the transition (Lamb–Dicke regime), such as a trapped and sufficiently cooled ion or atom.
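Before setting up the Hamiltonian below, it may help to make the cooling cycle quantitative. The following worked equations are a minimal sketch of the energy bookkeeping for one cycle; the 1 MHz trap frequency used in the closing estimate is an assumed, typical ion-trap value, not a figure from this article.

\begin{align*}
\text{absorption on the red sideband:}\quad & E_{\mathrm{in}} = \hbar(\omega_0 - \nu) \\
\text{spontaneous emission on the carrier:}\quad & E_{\mathrm{out}} = \hbar\omega_0 \\
\text{net motional energy removed:}\quad & \Delta E = E_{\mathrm{out}} - E_{\mathrm{in}} = \hbar\nu
\end{align*}

For an assumed trap frequency $\nu = 2\pi \times 1\,\mathrm{MHz}$, each cycle removes $\hbar\nu \approx 6.6 \times 10^{-28}\,\mathrm{J} \approx 4\,\mathrm{neV}$; a Doppler-precooled ion with a mean phonon number of order ten therefore reaches the ground state after only tens of cycles, which is why the precooling step matters.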
Modeling the system as a harmonic oscillator interacting with a classical monochromatic electromagnetic field yields (in the rotating wave approximation) the Hamiltonian $H = H_{\mathrm{HO}} + H_{\mathrm{AL}}$, with $H_{\mathrm{HO}} = \hbar\omega_0 |e\rangle\langle e| + \hbar\nu\left(n + \tfrac{1}{2}\right)$ and $H_{\mathrm{AL}} = \tfrac{\hbar\Omega}{2}\left(\sigma^{+} e^{i(kx - \delta t)} + \sigma^{-} e^{-i(kx - \delta t)}\right)$, where $n$ is the number operator, $\nu$ is the frequency spacing of the oscillator, $\Omega$ is the Rabi frequency due to the atom-light interaction, $\delta$ is the laser detuning from $\omega_0$, and $k$ is the laser wave vector. That is, incidentally, the Jaynes–Cummings Hamiltonian used to describe the phenomenon of an atom coupled to a cavity in cavity QED. The absorption (emission) of photons by the atom is then governed by the off-diagonal elements, with the probability of a transition between vibrational states $m$ and $m'$ proportional to $|\langle m'| e^{ikx} |m\rangle|^2$, and for each $n$ there is a manifold of states coupled to its neighbors with strength proportional to $|\langle n \pm 1| e^{ikx} |n\rangle|$. If the transition linewidth $\Gamma$ satisfies $\Gamma \ll \nu$, a sufficiently narrow laser can be tuned to a red sideband, $\delta = -\nu$. For an atom starting at $|g, n\rangle$, the predominantly probable transition will be to $|e, n-1\rangle$. In the Lamb–Dicke regime, the spontaneously emitted photon will be, on average, at frequency $\omega_0$, and the net effect of such a cycle, on average, will be the removal of one motional quantum, $\hbar\nu$. After some cycles, the average phonon number is $\bar{n} = \frac{R}{1 - R}$, where $R$ is the ratio of the intensities of the red and blue sidebands of the same order. In practice, this process is normally done on the first motional sideband for optimal efficiency. Repeating the process many times while ensuring that spontaneous emission occurs provides cooling to $\bar{n} \approx 0$. A more rigorous mathematical treatment is given in Turchette et al. and Wineland et al. Specific treatment of cooling multiple ions can be found in Morigi et al. Experimental implementations For resolved sideband cooling to be effective, the process needs to start at a sufficiently low $\bar{n}$. To that end, the particle is usually first cooled to the Doppler limit, then some sideband cooling cycles are applied, and finally a measurement is taken or state manipulation is carried out. A more or less direct application of this scheme was demonstrated by Diedrich et al., with the caveat that the narrow quadrupole transition used for cooling connects the ground state to a long-lived state, and the latter had to be pumped out to achieve optimal cooling efficiency. It is not uncommon, however, that additional steps are needed in the process, due to the atomic structure of the cooled species. Examples of that are the cooling of 40Ca+ ions and the Raman sideband cooling of neutral atoms. Example: cooling of 40Ca+ ions The energy levels relevant to the cooling scheme for 40Ca+ ions are the S1/2, P1/2, P3/2, D3/2, and D5/2 levels, which are additionally split by a static magnetic field into their Zeeman manifolds. Doppler cooling is applied on the dipole S1/2 - P1/2 transition (397 nm); however, there is about a 6% probability of spontaneous decay to the long-lived D3/2 state, so that state is simultaneously pumped out (at 866 nm) to improve Doppler cooling. Sideband cooling is performed on the narrow quadrupole transition S1/2 - D5/2 (729 nm); however, the long-lived D5/2 state needs to be pumped out to the short-lived P3/2 state (at 854 nm) to recycle the ion to the ground S1/2 state and maintain cooling performance. One possible implementation was carried out by Leibfried et al., and a similar one is detailed by Roos.
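To put numbers on the quantities just defined, here is a minimal Python sketch; all parameter values are assumed, textbook-style figures for a single trapped ion (they are not taken from this article), and the sideband ratio R is a hypothetical measurement.

import math

# Assumed parameters: a 40Ca+ ion addressed on its 729 nm quadrupole
# transition, in a trap with secular frequency nu = 2*pi * 1 MHz.
hbar = 1.054571817e-34      # reduced Planck constant, J s
amu = 1.66053906660e-27     # atomic mass unit, kg
m = 40 * amu                # ion mass, kg
nu = 2 * math.pi * 1.0e6    # trap angular frequency, rad/s
k = 2 * math.pi / 729e-9    # laser wave number, 1/m

# Lamb-Dicke parameter eta = k * x0, where x0 is the spatial spread of the
# motional ground-state wavefunction.
x0 = math.sqrt(hbar / (2 * m * nu))
eta = k * x0
print(f"Lamb-Dicke parameter eta = {eta:.3f}")  # ~0.1: well localized

# Mean phonon number inferred from the measured ratio R of red to blue
# sideband intensities, n_bar = R / (1 - R).
R = 0.05                    # assumed measured ratio
n_bar = R / (1 - R)
print(f"n_bar = {n_bar:.3f}")                              # ~0.05
print(f"ground-state occupation ~ {1 / (1 + n_bar):.1%}")  # ~95%

The printed Lamb-Dicke parameter of about 0.1 confirms that such an ion satisfies the localization assumption of the model above.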
For each data point in the 729 nm absorption spectrum, a few hundred iterations of the following sequence are executed: (1) the ion is Doppler cooled with 397 nm and 866 nm light, with 854 nm light on as well; (2) the ion is spin polarized to the S1/2(m=-1/2) state by applying 397 nm light for the last few moments of the Doppler cooling process; (3) sideband cooling loops are applied at the first red sideband of the D5/2(m=-5/2) 729 nm transition; (4) to ensure the population ends up in the S1/2(m=-1/2) state, another 397 nm pulse is applied; (5) manipulation and analysis are carried out by applying 729 nm light at the frequency of interest; (6) detection is carried out with 397 nm and 866 nm light: discrimination between the dark (D) and bright (S) states is based on a pre-determined threshold value of fluorescence counts. Variations of this scheme relaxing the requirements or improving the results are being investigated/used by several ion-trapping groups. Example: Raman sideband cooling of atoms A Raman transition replaces the one-photon transition used in the sideband cooling above by a two-photon process via a virtual level. In the cooling experiment carried out by Hamann et al., trapping is provided by an isotropic optical lattice in a magnetic field, which also provides Raman coupling to the red sideband of the Zeeman manifolds. The process followed in that experiment is: (1) preparation of a cold sample of atoms is carried out in optical molasses, in a magneto-optic trap; (2) the atoms are allowed to occupy a 2D, near-resonance lattice; (3) the lattice is changed adiabatically to a far-off-resonance lattice, which leaves the sample sufficiently well cooled for sideband cooling to be effective (Lamb-Dicke regime); (4) a magnetic field is turned on to tune the Raman coupling to the red motional sideband; (5) relaxation between the hyperfine states is provided by a pump/repump laser pair; (6) after some time, pumping is intensified to transfer the population to a specific hyperfine state; (7) the lattice is turned off and time-of-flight techniques are employed to perform Stern-Gerlach analysis. See also Laser cooling Amplitude modulation References Laser applications Cooling technology Atomic physics Plasma technology and applications
Resolved sideband cooling
[ "Physics", "Chemistry" ]
1,940
[ "Plasma physics", "Plasma technology and applications", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
1,185,486
https://en.wikipedia.org/wiki/Kawhia%20Harbour
Kawhia Harbour () is one of three large natural inlets on the Tasman Sea coast of the Waikato region of New Zealand's North Island. It is located to the south of Raglan Harbour, Ruapuke and Aotea Harbour, 40 kilometres southwest of Hamilton. Kawhia is part of the Ōtorohanga District and is in the King Country. It has a high-tide area of and a low-tide area of . Te Motu Island is located in the harbour. The settlement of Kawhia is located on the northern coast of the inlet, and was an important port in early colonial New Zealand. The area of Kawhia comprises and is the town block that was owned by the New Zealand Government. The government bought it from the Europeans in 1880, "not from the original Māori owners, but from a European who claimed ownership in payment of money owed by another European". History and culture Early history Kawhia Harbour is the southernmost location where kauri trees historically grew. Kawhia is known in Māori lore as the final resting-place of the ancestral waka (canoe) Tainui. Soon after arrival, the captain, Hoturoa, made it his first priority to establish a whare wananga (sacred school of learning), which was named Ahurei. Ahurei is situated at the summit of the sacred hill behind Kawhia’s seaside marae, Maketu Marae. The harbour area was the birthplace of the prominent Māori warrior chief Te Rauparaha of the Ngāti Toa tribe, who lived in the area until the 1820s, when he and his tribe, along with Ngāti Rārua and Ngāti Koata, migrated southwards. Tainui was buried at the base of Ahurei by Hoturoa himself and other members of the iwi. Hoturoa marked out the waka with two limestone pillars, which he blessed. Firstly, there is "Hani (Hani-a-te-waewae-i-kimi-atu), which is on the higher ground and marked the prow of the canoe". Marking the stern of the canoe, Hoturoa placed the symbol of Puna, the spirit-goddess of that creation story. "In full it is named Puna-whakatupu-tangata, and represents female fertility, the spring or source of humanity". It is said that a pure woman who touches this stone will be given the gift of a child and become pregnant. There have been cases of women using Puna when they have had difficulty conceiving a child. Marae Maketu Marae is located next to Kawhia Harbour. The main meeting house of the marae, Auau ki te Rangi, built and opened in 1962, is named after Hoturoa’s father, who was a high chief (ariki). The oldest and most prestigious meeting house first built on Maketu Marae is Te Ruruhi (the Old Lady), which was used as the dining hall until 1986. It was replaced by a two-storey dining hall, Te Tini O Tainui, to cater for the large numbers that visit for occasions such as annual poukai, tangi and hui. The marae is affiliated to Waikato through the hapū of Ngāti Mahuta, with connections to Ngāti Apakura, Ngāti Hikairo, and Ngāti Te Wehi. Six other marae are also based at or near Kawhia Harbour: Mōkai Kainga Marae and Ko Te Mōkai meeting house is a meeting place for the Ngāti Maniapoto hapū of Apakura and Hikairo, and the Waikato Tainui hapū of Apakura. Mokoroa Marae and Ngā Roimata meeting house is a meeting place for the Waikato hapū of Ngati Kiriwai. Ōkapu or Oakapu Marae and Te Kotahitanga o Ngāti Te Wehi meeting house is a meeting place for the Waikato hapū of Ngāti Mahuta and Ngāti Te Wehi. Te Māhoe Marae is a meeting ground for the Ngāti Maniapoto hapū of Peehi, Te Kanawa, Te Urupare and Uekaha.
Waipapa Marae and Ngā Tai Whakarongorua and Takuhiahia meeting houses are a meeting place for the Ngāti Maniapoto hapū of Hikairo, and the Waikato Tainui hapū of Ngāti Hikairo and Ngāti Puhiawe. Rākaunui Marae and Moana Kahakore meeting house is on Ngati Tamainu (Waikato) land (the hapū of which are Ngāti Te Kiriwai, Ngati Huiarangi, Ngati Te Kanawa, and Ngati Mahuta). It also affiliates to Ngāti Ngutu, Ngāti Paretekawa of Maniapoto, and Ngāti Apakura. In October 2020, the Government committed $196,684 from the Provincial Growth Fund to upgrade Ōkapu Marae, creating 16 jobs. European history The Kawhia Harbour area was important to the kauri gum trade of the late 19th and early 20th centuries, as it was the southernmost area where the gum could be found. The Kawhia Settler and Raglan Advertiser was established in May 1901 by William Murray Thompson and Thomas Elliott Wilson, who also ran the Bruce Herald, Waimate Times, Egmont Settler (later briefly part of Taranaki Central Press at Stratford) and the Mangaweka Settler. From 1909 Edward Henry Schnackenberg, whose father was a missionary here from 1858 to 1864, owned the paper, until it closed in April 1936. In January 2018, the health board issued a statement that there was no additional risk from tuberculosis in Kawhia after reports of three possible cases. Demographics Statistics New Zealand describes Kawhia as a rural settlement, which covers and had an estimated population of as of with a population density of people per km2. The settlement is part of the larger Pirongia Forest statistical area. Kawhia had a population of 384 at the 2018 New Zealand census, an increase of 45 people (13.3%) since the 2013 census, and a decrease of 6 people (−1.5%) since the 2006 census. There were 162 households, comprising 198 males and 186 females, giving a sex ratio of 1.06 males per female, with 66 people (17.2%) aged under 15 years, 51 (13.3%) aged 15 to 29, 147 (38.3%) aged 30 to 64, and 120 (31.2%) aged 65 or older. Ethnicities were 55.5% European/Pākehā, 57.0% Māori, 5.5% Pacific peoples, 1.6% Asian, and 1.6% other ethnicities. People may identify with more than one ethnicity. Although some people chose not to answer the census's question about religious affiliation, 46.1% had no religion, 37.5% were Christian, 7.0% had Māori religious beliefs and 1.6% had other religions. Of those at least 15 years old, 39 (12.3%) people had a bachelor's or higher degree, and 99 (31.1%) people had no formal qualifications. 18 people (5.7%) earned over $70,000, compared to 17.2% nationally. The employment status of those at least 15 was that 81 (25.5%) people were employed full-time, 69 (21.7%) were part-time, and 21 (6.6%) were unemployed. Before 2018, Kawhia was in its own statistical area. In 2013, 231 dwellings were unoccupied. In the much wider Pirongia Forest area, 396 dwellings were unoccupied in 2018, when it was estimated that 70% of Kawhia's houses were holiday homes. As of 2017, New Zealand's median centre of population is located around one kilometre off the coast of Kawhia. Pirongia Forest statistical area Pirongia Forest statistical area covers and had an estimated population of as of with a population density of people per km2. Pirongia Forest, which includes Pirongia Forest Park, had a population of 966 at the 2018 New Zealand census, an increase of 138 people (16.7%) since the 2013 census, and an increase of 69 people (7.7%) since the 2006 census.
There were 393 households, comprising 498 males and 468 females, giving a sex ratio of 1.06 males per female. The median age was 50.5 years (compared with 37.4 years nationally), with 189 people (19.6%) aged under 15 years, 117 (12.1%) aged 15 to 29, 417 (43.2%) aged 30 to 64, and 243 (25.2%) aged 65 or older. Ethnicities were 64.3% European/Pākehā, 46.9% Māori, 3.1% Pacific peoples, 1.6% Asian, and 1.2% other ethnicities. People may identify with more than one ethnicity. The percentage of people born overseas was 6.8%, compared with 27.1% nationally. Although some people chose not to answer the census's question about religious affiliation, 54.0% had no religion, 31.4% were Christian, 3.7% had Māori religious beliefs and 1.6% had other religions. Of those at least 15 years old, 81 (10.4%) people had a bachelor's or higher degree, and 246 (31.7%) people had no formal qualifications. The median income was $19,700, compared with $31,800 nationally. 60 people (7.7%) earned over $70,000, compared to 17.2% nationally. The employment status of those at least 15 was that 270 (34.7%) people were employed full-time, 141 (18.1%) were part-time, and 39 (5.0%) were unemployed. Te Puia Hot Springs For about 2 hours either side of low tide (for tide times, see tide-forecast.com), hot water oozes up through the sand about 100 m off the Tasman Sea beach, 4 km from Kawhia (see 1:50,000 map), and can be formed into shallow bathing pools with a spade. A council sample taken on 30 March 2006 listed the constituents of the water. Kawhia County Council Kawhia County Council was formed in 1905 and first met on 12 July 1905. New offices were built by Buchanan Bros in 1915-16 over the former beach, and designed by the Hamilton architects and engineers Warren and Blechynden. In 1923, Kawhia County covered and had a population of 1,098, with of gravel roads, of mud roads and of tracks. Kawhia Town Board was formed in 1906, with an area of 470 acres (190 ha). Its population in 1923 was 195, when it had 6 mi 14 ch (9.9 km) of streets and a 10 acres (4.0 ha) domain. The County merged into Ōtorohanga and Waitomo in 1956, after a Local Government Commission inquiry. Kāwhia Community Board The Community Board meets monthly and consists of 4 members, plus the Kāwhia - Tihiroa Ward councillor. Three members are elected from the Kawhia area and one from Aotea. Pou Maumahara In 2016, a tall pou maumahara (remembrance pillar) was put up at Omimiti Reserve, behind the museum. Te Kuiti Stewart began carving it in 2014, from a Pureora Forest totara. It represents 150 years of Kīngitanga on one side and the Elizabeth Henrietta's 1824 arrival on the other. At night it is floodlit, with coloured LED lights inside. Hospital Kawhia hospital overlooked the town, on the site of Te Puru pa, which became the Armed Constabulary redoubt in 1863. Like the County Office, the hospital was designed by Warren and Blechynden; it opened in 1918. It was still a cottage hospital in 1948, but had become a maternity hospital by 1959 and closed in March 1967. Education Kawhia School is a Year 1–8 co-educational state primary school.
It is a decile 1 school with a roll of as of Notable people Te Rangihaeata, chief, born about 1780 John Kent, European trader, 1820s–1830s John Whiteley, Cort and Annie Jane Schnackenberg, missionaries Hoana Riutoto, signatory of Treaty of Waitangi Jim Rukutai, rugby player, born about 1877 Mary Reidy, sister at Kawhia Hospital 1921–1947 Carole Shepheard (born 1945), artist See also SH31 Kairuku waewaeroa, extinct giant penguin References External links 1911 map of Kawhia County Ōtorohanga District Geography of Waikato Ports and harbours of New Zealand Kauri gum
Kawhia Harbour
[ "Physics" ]
2,720
[ "Amorphous solids", "Unsolved problems in physics", "Kauri gum" ]
1,185,498
https://en.wikipedia.org/wiki/Fugacity
In thermodynamics, the fugacity of a real gas is an effective partial pressure which replaces the mechanical partial pressure in an accurate computation of chemical equilibrium. It is equal to the pressure of an ideal gas which has the same temperature and molar Gibbs free energy as the real gas. Fugacities are determined experimentally or estimated from various models, such as a van der Waals gas, that are closer to reality than an ideal gas. The real gas pressure $P$ and fugacity $f$ are related through the dimensionless fugacity coefficient $\varphi = f/P$. For an ideal gas, fugacity and pressure are equal, and so $\varphi = 1$. Taken at the same temperature and pressure, the difference between the molar Gibbs free energies of a real gas and the corresponding ideal gas is equal to $RT \ln \varphi$. The fugacity is closely related to the thermodynamic activity. For a gas, the activity is simply the fugacity divided by a reference pressure to give a dimensionless quantity. This reference pressure is called the standard state and normally chosen as 1 atmosphere or 1 bar. Accurate calculations of chemical equilibrium for real gases should use the fugacity rather than the pressure. The thermodynamic condition for chemical equilibrium is that the total chemical potential of reactants is equal to that of products. If the chemical potential of each gas is expressed as a function of fugacity, the equilibrium condition may be transformed into the familiar reaction quotient form (or law of mass action) except that the pressures are replaced by fugacities. For a condensed phase (liquid or solid) in equilibrium with its vapor phase, the chemical potential is equal to that of the vapor, and therefore the fugacity is equal to the fugacity of the vapor. This fugacity is approximately equal to the vapor pressure when the vapor pressure is not too high. Pure substance Fugacity is closely related to the chemical potential $\mu$. In a pure substance, $\mu$ is equal to the Gibbs energy $G_{\mathrm{m}}$ for a mole of the substance, and $d\mu = dG_{\mathrm{m}} = -S_{\mathrm{m}}\,dT + V_{\mathrm{m}}\,dP$, where $T$ and $P$ are the temperature and pressure, $V_{\mathrm{m}}$ is the volume per mole and $S_{\mathrm{m}}$ is the entropy per mole. Gas For an ideal gas the equation of state can be written as $V_{\mathrm{m}} = \frac{RT}{P}$, where $R$ is the ideal gas constant. The differential change of the chemical potential between two states of slightly different pressures but equal temperature (i.e., $dT = 0$) is given by $d\mu = V_{\mathrm{m}}\,dP = RT\,\frac{dP}{P} = RT\,d\ln P$, where ln P is the natural logarithm of P. For real gases the equation of state will depart from the simpler one, and the result above derived for an ideal gas will only be a good approximation provided that (a) the typical size of the molecule is negligible compared to the average distance between the individual molecules, and (b) the short range behavior of the inter-molecular potential can be neglected, i.e., when the molecules can be considered to rebound elastically off each other during molecular collisions. In other words, real gases behave like ideal gases at low pressures and high temperatures. At moderately high pressures, attractive interactions between molecules reduce the pressure compared to the ideal gas law; and at very high pressures, the sizes of the molecules are no longer negligible and repulsive forces between molecules increase the pressure. At low temperatures, molecules are more likely to stick together instead of rebounding elastically. The ideal gas law can still be used to describe the behavior of a real gas if the pressure is replaced by a fugacity $f$, defined so that $d\mu = RT\,d\ln f$ and $\lim_{P \to 0} \frac{f}{P} = 1$. That is, at low pressures $f$ is the same as the pressure, so it has the same units as pressure.
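As a rough illustration of how such a fugacity is estimated in practice from an equation of state, the following Python sketch uses a virial expansion truncated after the second coefficient, for which $Z = 1 + BP/RT$ and hence $\ln\varphi = BP/RT$. The value of $B$ for nitrogen is an assumed, literature-style number, and keeping only $B$ is a first-order approximation at this pressure.

import math

# First-order estimate of the fugacity of N2 at 0 degC and 100 atm from the
# truncated virial equation of state Z = 1 + B*P/(R*T).
R = 8.314           # gas constant, J/(mol K)
T = 273.15          # temperature, K
P = 100 * 101325.0  # pressure, Pa (100 atm)
B = -10.5e-6        # second virial coefficient of N2 near 273 K, m^3/mol (assumed)

ln_phi = B * P / (R * T)    # ln(phi) = B*P/(R*T) for this truncated EOS
phi = math.exp(ln_phi)
print(f"phi ~ {phi:.3f}")           # ~0.95
print(f"f ~ {phi * 100:.1f} atm")   # ~95 atm; repulsive higher-order terms
                                    # raise this toward the measured value in
                                    # the numerical example below

The sketch reproduces the sign and rough size of the correction; the numerical example that follows quotes the measured value.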
The ratio $\varphi = f/P$ is called the fugacity coefficient. If a reference state is denoted by a zero superscript, then integrating the equation for the chemical potential gives $\mu = \mu^{0} + RT\,\ln\frac{f}{f^{0}}$. Note this can also be expressed with $a = \frac{f}{f^{0}}$, a dimensionless quantity, called the activity. Numerical example: Nitrogen gas (N2) at 0 °C and a pressure of $P = 100$ atmospheres (atm) has a fugacity of $f = 97.03$ atm. This means that the molar Gibbs energy of real nitrogen at a pressure of 100 atm is equal to the molar Gibbs energy of nitrogen as an ideal gas at 97.03 atm. The fugacity coefficient is $\varphi = \frac{97.03\ \mathrm{atm}}{100\ \mathrm{atm}} = 0.9703$. The contribution of nonideality to the molar Gibbs energy of a real gas is equal to $RT \ln \varphi$. For nitrogen at 100 atm, $G_{\mathrm{m}} = G_{\mathrm{m}}^{\mathrm{ideal}} + RT \ln 0.9703$ (about −68 J/mol at 0 °C), which is less than the ideal value because of intermolecular attractive forces. Finally, the activity is just $97.03$ without units. Condensed phase The fugacity of a condensed phase (liquid or solid) is defined the same way as for a gas: $d\mu = RT\,d\ln f$ and $\lim_{P \to 0} \frac{f}{P} = 1$. It is difficult to measure fugacity in a condensed phase directly; but if the condensed phase is saturated (in equilibrium with the vapor phase), the chemical potentials of the two phases are equal ($\mu_{\mathrm{c}} = \mu_{\mathrm{g}}$). Combined with the above definition, this implies that $f_{\mathrm{c}} = f_{\mathrm{g}}$. When calculating the fugacity of the compressed phase, one can generally assume the volume is constant. At constant temperature, the change in fugacity as the pressure goes from the saturation pressure $P_{\mathrm{sat}}$ to $P$ is $\ln \frac{f}{f_{\mathrm{sat}}} = \frac{V_{\mathrm{m}}}{RT} \int_{P_{\mathrm{sat}}}^{P} dp = \frac{V_{\mathrm{m}} (P - P_{\mathrm{sat}})}{RT}$. This fraction is known as the Poynting factor. Using $f_{\mathrm{sat}} = \varphi_{\mathrm{sat}} P_{\mathrm{sat}}$, where $\varphi_{\mathrm{sat}}$ is the fugacity coefficient, $f = \varphi_{\mathrm{sat}} P_{\mathrm{sat}} \exp\left(\frac{V_{\mathrm{m}} (P - P_{\mathrm{sat}})}{RT}\right)$. This equation allows the fugacity to be calculated using tabulated values for saturated vapor pressure. Often the pressure is low enough for the vapor phase to be considered an ideal gas, so the fugacity coefficient is approximately equal to 1. Unless pressures are very high, the Poynting factor is usually small and the exponential term is near 1. Frequently, the fugacity of the pure liquid is used as a reference state when defining and using mixture activity coefficients. Mixture The fugacity is most useful in mixtures. It does not add any new information compared to the chemical potential, but it has computational advantages. As the molar fraction of a component goes to zero, the chemical potential diverges but the fugacity goes to zero. In addition, there are natural reference states for fugacity (for example, an ideal gas makes a natural reference state for gas mixtures since the fugacity and pressure converge at low pressure). Gases In a mixture of gases, the fugacity of each component $i$ has a similar definition, with partial molar quantities instead of molar quantities (e.g., $G_{i}$ instead of $G_{\mathrm{m}}$ and $V_{i}$ instead of $V_{\mathrm{m}}$): $dG_{i} = RT\,d\ln f_{i}$ and $\lim_{P \to 0} \frac{f_{i}}{P_{i}} = 1$, where $P_{i}$ is the partial pressure of component $i$. The partial pressures obey Dalton's law: $P_{i} = y_{i} P$, where $P$ is the total pressure and $y_{i}$ is the mole fraction of the component (so the partial pressures add up to the total pressure). The fugacities commonly obey a similar law called the Lewis and Randall rule: $f_{i} = y_{i} f_{i}^{*}$, where $f_{i}^{*}$ is the fugacity that component $i$ would have if the entire gas had that composition at the same temperature and pressure. Both laws are expressions of an assumption that the gases behave independently. Liquids In a liquid mixture, the fugacity of each component is equal to that of a vapor component in equilibrium with the liquid. In an ideal solution, the fugacities obey the Lewis-Randall rule: $f_{i} = x_{i} f_{i}^{*}$, where $x_{i}$ is the mole fraction in the liquid and $f_{i}^{*}$ is the fugacity of the pure liquid phase. This is a good approximation when the component molecules have similar size, shape and polarity.
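Before moving on to dilute solutions, a short numerical sketch may help fix the size of the Poynting correction introduced above. The property values for liquid water below are assumed, textbook-style numbers rather than values from this article, and the saturated vapor is treated as ideal ($\varphi_{\mathrm{sat}} \approx 1$).

import math

# Fugacity of liquid water at 25 degC under 100 bar of applied pressure,
# via f = phi_sat * P_sat * exp(V_m * (P - P_sat) / (R * T)).
R = 8.314        # gas constant, J/(mol K)
T = 298.15       # temperature, K
P_sat = 3.17e3   # saturation pressure of water at 25 degC, Pa (assumed)
V_m = 18.07e-6   # molar volume of liquid water, m^3/mol (assumed)
P = 100e5        # applied pressure, Pa (100 bar)
phi_sat = 1.0    # vapor treated as ideal

poynting = math.exp(V_m * (P - P_sat) / (R * T))
f = phi_sat * P_sat * poynting
print(f"Poynting factor = {poynting:.4f}")           # ~1.075
print(f"f ~ {f / 1e3:.2f} kPa vs P_sat = 3.17 kPa")  # ~3.41 kPa

Even 100 bar of overpressure changes the liquid's fugacity by only about 7.5%, which illustrates why the exponential term is usually close to 1.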
In a dilute solution with two components, the component with the larger molar fraction (the solvent) may still obey Raoult's law even if the other component (the solute) has different properties. That is because its molecules experience essentially the same environment that they do in the absence of the solute. By contrast, each solute molecule is surrounded by solvent molecules, so it obeys a different law known as Henry's law. By Henry's law, the fugacity of the solute is proportional to its concentration. The constant of proportionality (a measured Henry's constant) depends on whether the concentration is represented by the mole fraction, molality or molarity. Temperature and pressure dependence The pressure dependence of fugacity (at constant temperature) is given by $\left(\frac{\partial \ln f}{\partial P}\right)_{T} = \frac{V_{\mathrm{m}}}{RT}$ and is always positive. The temperature dependence at constant pressure is $\left(\frac{\partial \ln f}{\partial T}\right)_{P} = \frac{\Delta H_{\mathrm{m}}}{RT^{2}}$, where $\Delta H_{\mathrm{m}}$ is the change in molar enthalpy as the gas expands, liquid vaporizes, or solid sublimates into a vacuum. Also, if the pressure is $P^{0}$, then $\mu = \mu^{0} + RT \ln \frac{f}{P^{0}}$. Since the temperature and entropy are positive, $\mu$ decreases with increasing temperature. Measurement The fugacity can be deduced from measurements of volume as a function of pressure at constant temperature. In that case, $RT \ln \varphi = \int_{0}^{P} \left(V_{\mathrm{m}} - \frac{RT}{p}\right) dp$. This integral can also be calculated using an equation of state. The integral can be recast in an alternative form using the compressibility factor $Z = \frac{P V_{\mathrm{m}}}{RT}$. Then $\ln \varphi = \int_{0}^{P} \frac{Z - 1}{p}\, dp$. This is useful because of the theorem of corresponding states: If the pressure and temperature at the critical point of the gas are $P_{\mathrm{c}}$ and $T_{\mathrm{c}}$, we can define reduced properties $P_{\mathrm{r}} = \frac{P}{P_{\mathrm{c}}}$ and $T_{\mathrm{r}} = \frac{T}{T_{\mathrm{c}}}$. Then, to a good approximation, most gases have the same value of $Z$ for the same reduced temperature and pressure. However, in geochemical applications, this principle ceases to be accurate at pressures where metamorphism occurs. For a gas obeying the van der Waals equation, the explicit formula for the fugacity coefficient is $RT \ln \varphi = \frac{RTb}{V_{\mathrm{m}} - b} - \frac{2a}{V_{\mathrm{m}}} - RT \ln \left(1 - \frac{a (V_{\mathrm{m}} - b)}{RT V_{\mathrm{m}}^{2}}\right)$. This formula is based on the molar volume; since the pressure and the molar volume are related through the equation of state, a typical procedure would be to choose a volume, calculate the corresponding pressure, and then evaluate the right-hand side of the equation. History The word fugacity is derived from the Latin fugere, to flee. In the sense of an "escaping tendency", it was introduced to thermodynamics in 1901 by the American chemist Gilbert N. Lewis and popularized in an influential textbook by Lewis and Merle Randall, Thermodynamics and the Free Energy of Chemical Substances, in 1923. The "escaping tendency" referred to the flow of matter between phases and played a similar role to that of temperature in heat flow. See also Electrochemical potential Excess chemical potential Fugacity capacity Multimedia fugacity model Thermodynamic equilibrium References Further reading External links Video lectures Thermodynamics, University of Colorado-Boulder, 2011 Introduction to fugacity: Where did it come from? What is fugacity? What is fugacity in mixtures? Physical chemistry Chemical thermodynamics Thermodynamic properties State functions
Fugacity
[ "Physics", "Chemistry", "Mathematics" ]
2,056
[ "State functions", "Thermodynamic properties", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Thermodynamics", "nan", "Chemical thermodynamics", "Physical chemistry" ]
1,186,403
https://en.wikipedia.org/wiki/Pacific%20Northwest%20Seismic%20Network
The Pacific Northwest Seismic Network, or PNSN, collects and studies ground motions from about 400 seismometers in the U.S. states of Oregon and Washington. PNSN monitors volcanic and tectonic activity, gives advice and information to the public and policy makers, and works to mitigate earthquake hazard. Motivation Damaging earthquakes are well known in the Pacific Northwest, including several larger than magnitude 7, most notably the M9 1700 Cascadia earthquake and the M7.0–7.3 earthquake in about 900 AD on the Seattle Fault. The M6.5 1965 Puget Sound earthquake shook the Seattle, Washington, area, causing substantial damage and seven deaths. This event spurred the installation of the Pacific Northwest Seismic Network in 1969 to monitor regional earthquake activity. Early in 1980, PNSN scientists detected unrest under Mount St. Helens and by March 1980 predicted that an eruption was likely to occur "soon". On March 27 the first steam and ash explosion occurred. The PNSN expanded to better monitor Mount St. Helens and other Cascade volcanoes leading up to the deadly May 18 eruption and in the years following. Observations and efficacy Earthquakes are recorded frequently beneath Mount St. Helens, Mount Rainier, and Mount Hood. After successfully using seismic activity to predict the 1980 Mount St. Helens eruption, monitoring was expanded to other Cascade Range volcanoes. The PNSN, in conjunction with the Cascades Volcano Observatory of the USGS, now monitors seismicity at all the Cascade volcanoes in Washington and Oregon. The network was significantly expanded after the damaging 2001 Nisqually earthquake. After an earthquake on January 30, 2009, the network's emergency notification system failed. A magnitude 4.3 earthquake in February 2015 showed that the present architecture of the network results in a significant delay in the early warning notification program, depending upon the location of the quake, leading to proposals to again expand the network. The early warning notification program was implemented with its reliability contingent upon unknown future funding, but with the election of Donald Trump "future funding is uncertain", according to Washington Congressman Derek Kilmer. The president's budget for the fiscal year commencing October 1, 2018, calls for reductions in funding and staff for the early warning notification program. Operations and data archiving The network operates from the Earth and Space Sciences Department at the University of Washington in Seattle, and its data archiving is at the Data Management Center of the IRIS Consortium in Seattle. The network is also affiliated with the University of Oregon Department of Geology. It is the second largest of the regional seismic networks in the ANSS (Advanced National Seismic System) and has produced more data than the networks in the states of Alaska, Utah, Nevada, Hawaii and the New Madrid, Missouri-Tennessee-Kentucky-Arkansas area. The network is funded primarily by the United States Geological Survey, which stations its own staff on the campus, and the network is managed by UW staff. Additional funding is provided by the Department of Energy, the State of Washington, and the State of Oregon.
References External links Pacific Northwest Seismic Network (official website) PNSN—Pacific Northwest Seismograph Network – United States Geological Survey ShakeAlert Implementation Plan – United States Geological Survey Seismological observatories, organisations and projects Seismic networks University of Oregon Earthquake engineering Geology of Washington (state) Geology of Oregon University of Washington projects
Pacific Northwest Seismic Network
[ "Engineering" ]
682
[ "Earthquake engineering", "Civil engineering", "Structural engineering" ]
1,186,707
https://en.wikipedia.org/wiki/Bipyridine
Bipyridines are a family of organic compounds with the formula (C5H4N)2, consisting of two pyridyl (C5H4N) rings. Pyridine is an aromatic nitrogen-containing heterocycle. The bipyridines are all colourless solids, which are soluble in organic solvents and slightly soluble in water. Bipyridines, especially the 4,4′ isomer, are mainly of significance in pesticides. Six isomers of bipyridine exist, but two are prominent: 2,2′-bipyridine, also known as bipyridyl, dipyridyl, and dipyridine, is a popular ligand in coordination chemistry, and 4,4′-bipyridine is a precursor to the herbicide paraquat. 2,2′-Bipyridine 2,2′-Bipyridine (2,2′-bipy) is a chelating ligand that forms complexes, of broad academic interest, with most transition metal ions. Many of these complexes have distinctive optical properties, and some are of interest for analysis. Its complexes are used in studies of electron and energy transfer, supramolecular and materials chemistry, and catalysis. 2,2′-Bipyridine is used in the manufacture of diquat. 4,4′-Bipyridine 4,4′-Bipyridine (4,4′-bipy) is mainly used as a precursor to the N,N′-dimethyl-4,4′-bipyridinium dication, commonly known as paraquat. This species is redox active, and its toxicity arises from its ability to interrupt biological electron transfer processes. Because of its structure, 4,4′-bipyridine can bridge between metal centres to give coordination polymers. 3,4′-Bipyridine The 3,4′-bipyridine derivatives inamrinone and milrinone are used occasionally for short-term treatment of congestive heart failure. They inhibit phosphodiesterase, thus increasing cAMP, exerting positive inotropy and causing vasodilation. Inamrinone causes thrombocytopenia. Milrinone decreases survival in heart failure. References Chelating agents Ligands
Bipyridine
[ "Chemistry" ]
465
[ "Chelating agents", "Ligands", "Coordination chemistry", "Process chemicals" ]
1,188,009
https://en.wikipedia.org/wiki/Diaphragm%20%28mechanical%20device%29
In mechanics, a diaphragm is a sheet of a semi-flexible material anchored at its periphery and most often round in shape. It serves either as a barrier between two chambers, moving slightly up into one chamber or down into the other depending on differences in pressure, or as a device that vibrates when certain frequencies are applied to it. A diaphragm pump uses a diaphragm to pump a fluid. A typical design is to have air on one side constantly vary in pressure, with fluid on the other side. The increase and decrease in volume caused by the action of the diaphragm alternately forces fluid out of the chamber and draws more fluid in from its source. The action of the diaphragm is very similar to the action of a plunger, with the exception that a diaphragm responds to changes in pressure rather than the mechanical force of a shaft. A diaphragm pressure tank is a tank which has pressurant sealed inside on one side of the diaphragm. It is favored in certain applications due to its high durability and reliability. This comes with a downside, as the vessel needs to be replaced in the case of a rupture of the diaphragm. Diaphragm tanks are used to store hypergolic propellant aboard space probes and various other spacecraft. Pressure regulators use diaphragms as part of their design. Most uses of compressed gasses, for example in gas welding and scuba diving, rely on regulators to deliver their gas output at appropriate pressures. Automotive fuel systems frequently require fuel-pressure regulators; this is true of many fuel injection systems as well as of vehicles fueled with liquefied petroleum gas (autogas) and compressed natural gas. See also Gas engine Hybrid vehicle References Mechanics
Diaphragm (mechanical device)
[ "Physics", "Engineering" ]
367
[ "Mechanical engineering stubs", "Mechanics", "Mechanical engineering" ]
1,188,460
https://en.wikipedia.org/wiki/Ripple%20effect
A ripple effect occurs when an initial disturbance to a system propagates outward to disturb an increasingly larger portion of the system, like ripples expanding across the water when an object is dropped into it. The ripple effect is often used colloquially to mean a multiplier in macroeconomics. For example, an individual's reduction in spending reduces the incomes of others and their ability to spend. In a broader global context, research has shown how monetary policy decisions, especially by major economies like the US, can create ripple effects impacting economies worldwide, emphasizing the interconnectedness of today's global economy. In sociology, the ripple effect can be observed in how social interactions can affect situations not directly related to the initial interaction, and in charitable activities where information can be disseminated and passed from the community to broaden its impact. The concept has been applied in computer science within the field of software metrics as a complexity measure. Examples The Weinstein effect and the rise of the Me Too movement In October 2017, according to The New York Times and The New Yorker, dozens of women accused American film producer Harvey Weinstein, former founder of Miramax Films and The Weinstein Company, of rape, sexual assault and sexual abuse over a period of three decades. Shortly after over eighty accusations, Weinstein was dismissed from his own company, expelled from the Academy of Motion Picture Arts and Sciences and other professional associations, and withdrew from public view. The allegations against him resulted in a special case of ripple effect, now called the Weinstein effect: a global trend involving a series of sexual misconduct allegations against other famous men in Hollywood, such as Louis C.K. and Kevin Spacey. The effect led to the formation of the controversial Me Too movement, where people share their experiences of sexual harassment and assault. Corporate social responsibility The effects of one company's decision to adopt a corporate social responsibility (CSR) programme on the attitudes and behaviours of rival companies have been likened to a ripple effect. Research by an international team in 2018 found that in many cases, one company's CSR initiative was seen as a competitive threat to other businesses in the same market, resulting in the adoption of further CSR initiatives. See also Butterfly effect — an effect where a minimal change in one state of a system results in large differences in its later state. Clapotis — a non-breaking standing wave with higher amplitude than the waves it's composed of. Domino effect — an effect where one event sets off a chain of non-incremental other events. Snowball effect — an effect where a process starting from an initial state of small significance builds upon itself in time. References Metaphors referring to objects Causality Social phenomena Economics effects Software metrics
Ripple effect
[ "Physics", "Mathematics", "Engineering" ]
557
[ "Software engineering", "Quantity", "Metrics", "Software metrics" ]
18,319,136
https://en.wikipedia.org/wiki/Code%20Saturne
code_saturne is a general-purpose computational fluid dynamics free computer software package. Developed since 1997 at Électricité de France R&D, code_saturne is distributed under the GNU GPL licence. It is based on a co-located finite-volume approach that accepts meshes with any type of cell (tetrahedral, hexahedral, prismatic, pyramidal, polyhedral...) and any type of grid structure (unstructured, block structured, hybrid, conforming or with hanging nodes...). Its basic capabilities enable the handling of either incompressible or expandable flows with or without heat transfer and turbulence (mixing length, 2-equation models, v2f, Reynolds stress models, Large eddy simulation...). Dedicated modules are available for specific physics such as radiative heat transfer, combustion (gas, coal, ...), magneto-hydro dynamics, compressible flows, two-phase flows (Euler-Lagrange approach with two-way coupling), and extensions to specific applications (e.g. for atmospheric environment). The current production version is 8.0 (2023-06-30). Installation code_saturne may be installed on a Linux or other Unix-like system by downloading and building it. No system files are changed, so administrator privileges are not required if the code is installed in a user's directory. Packages for code_saturne are also available on Debian and Ubuntu. Alternatively, CAE Linux (latest version ) includes code_saturne pre-installed. The code also works well in the Windows Subsystem for Linux. Interoperability code_saturne supports multiple mesh formats. The following formats, from open source or commercial tools, are currently supported: Supported mesh input formats (source): SIMAIL (NOPO) – (INRIA/Distene) I-DEAS universal MED CGNS EnSight 6 EnSight Gold GAMBIT neutral Gmsh Simcenter STAR-CCM+ Supported post-processing output formats EnSight Gold MED CGNS Alternative software Advanced Simulation Library (open source software AGPL) ANSYS CFX (proprietary software) ANSYS Fluent (proprietary software) Basilisk COMSOL Multiphysics FEATool Multiphysics Gerris Flow Solver (GPL) OpenFOAM (GPL) Palabos Flow Solver (AGPL) STAR-CCM+ (proprietary software) SU2 code (LGPL) See also SALOME References External links Official English website Official French website Code Saturne Installation on Mandriva Linux Code_Saturne Overview (pdf, 2 pages) Overview of EDF's Open Source initiative (pdf, 2 pages) code-saturne.blogspot.com : Independent user's Blog about SALOME, Code_Saturne, ParaView and Numerical Modelling CAE Linux : LiveDVD with Code_Saturne, Code_Aster and the Salomé platform Website at the University of Manchester Computational fluid dynamics Free science software Engineering software that uses Qt Computer-aided design software for Linux Computer-aided engineering software for Linux Articles with underscores in the title
Code Saturne
[ "Physics", "Chemistry" ]
666
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
18,320,085
https://en.wikipedia.org/wiki/Fungivore
Fungivory or mycophagy is the process of organisms consuming fungi. Many different organisms have been recorded to gain their energy from consuming fungi, including birds, mammals, insects, plants, amoebas, gastropods, nematodes, bacteria and other fungi. Some of these, which only eat fungi, are called fungivores, whereas others eat fungi as only part of their diet, being omnivores. Animals Mammals Many mammals eat fungi, but only a few feed exclusively on fungi; most are opportunistic feeders and fungi make up only part of their diet. At least 22 species of primate, including humans, bonobos, colobines, gorillas, lemurs, macaques, mangabeys, marmosets and vervet monkeys, are known to feed on fungi. Most of these species spend less than 5% of their feeding time eating fungi, and fungi therefore form only a small part of their diet. Some species spend longer foraging for fungi, and fungi account for a greater part of their diet; buffy-tufted marmosets spend up to 12% of their time consuming sporocarps, Goeldi’s monkeys spend up to 63% of their time doing so, and the Yunnan snub-nosed monkey spends up to 95% of its feeding time eating lichens. Fungi are comparatively very rare in tropical rainforests compared to other food sources such as fruit and leaves, and they are also distributed more sparsely and appear unpredictably, making them a challenging source of food for Goeldi’s monkeys. Fungi are renowned for the poisons they produce to deter animals from feeding on them: even today humans die from eating poisonous fungi. A natural consequence of this is the virtual absence of obligate vertebrate fungivores, with the diprotodont family Potoroidae being the major exception. One of the few extant vertebrate fungivores is the northern flying squirrel, but it is believed that in the past there were numerous vertebrate fungivores and that toxin development greatly lessened their number and forced these species to abandon fungi or diversify. Mollusks Many terrestrial gastropod mollusks are known to feed on fungi. This is the case for several species of slugs from distinct families, among them the Philomycidae (e.g. Philomycus carolinianus and Philomycus flexuolaris) and Ariolimacidae (Ariolimax californianus), which respectively feed on slime molds (myxomycetes) and mushrooms (basidiomycetes). Species of mushroom-producing fungi used as a food source by slugs include milk-caps, Lactarius spp., the oyster mushroom, Pleurotus ostreatus, and the penny bun, Boletus edulis. Other species pertaining to different genera, such as Agaricus, Pleurocybella and Russula, are also eaten by slugs. Slime molds used as a food source by slugs include Stemonitis axifera and Symphytocarpus flaccidus. Some slugs are selective towards certain parts or developmental stages of the fungi they eat, though this behavior varies greatly. Depending on the species and other factors, slugs may eat fungi only at specific stages of development; in other cases, whole mushrooms can be eaten, without any trace of selectivity. Insects In 2008, Euprenolepis procera, a species of ant from the rainforests of South East Asia, was found to harvest mushrooms from the rainforest. Witte & Maschwitz found that their diet consisted almost entirely of mushrooms, representing a previously undiscovered feeding strategy in ants. Several beetle families, including the Erotylidae, Endomychidae, and certain Tenebrionidae, also are specialists on fungi, though they may eat other foods occasionally.
Other insects, like fungus gnats and scuttle flies, utilize fungi at their larval stage. Feeding on fungi is crucial for dead wood eaters, as this is the only way to acquire nutrients not available in nutritionally scarce dead wood. Birds Jays (Perisoreus) are believed to be the first birds in which mycophagy was recorded. Canada jays (P. canadensis), Siberian jays (P. infaustus) and Oregon jays (P. obscurus) have all been recorded to eat mushrooms, with the stomachs of Siberian jays containing mostly fungi in the early winter. The ascomycete Phaeangium lefebvrei, found in North Africa and the Middle East, is eaten by migrating birds in winter and early spring, mainly by species of lark (Alaudidae). Bedouin hunters have been reported to use P. lefebvrei as bait in traps to attract birds. The ground-foraging superb lyrebird Menura novaehollandiae has also been found to forage opportunistically on fungi. Fungi are known to form an important part of the diet of the southern cassowary (Casuarius casuarius) of Australia. Bracket fungi have been found in their droppings throughout the year, and Simpson in the Australasian Mycological Newsletter suggested it is likely they also eat species of Agaricales and Pezizales, but these have not been found in their droppings since they disintegrate when they are eaten. Emus (Dromaius novaehollandiae) will eat immature Lycoperdon and Bovista fungi if presented to them, as will brush turkeys (Alectura lathami) if offered Mycena, suggesting that species of Megapodiidae may feed opportunistically on mushrooms. Microbial Fungi Mycoparasitism, a form of parasitism, occurs when one fungus feeds on other fungi; our knowledge of it in natural environments is very limited. Collybia grow on dead mushrooms. The fungal genus Trichoderma produces enzymes such as chitinases which degrade the cell walls of other fungi. Trichoderma are able to detect other fungi and grow towards them; they then bind to the hyphae of other fungi, using lectins on the host fungi as a receptor, forming an appressorium. Once this is formed, Trichoderma inject toxic enzymes into the host, and probably peptaibol antibiotics, which create holes in the cell wall, allowing Trichoderma to grow inside the host and feed. Trichoderma are able to digest sclerotia, durable structures which contain food reserves, which is important if they are to control pathogenic fungi in the long term. Trichoderma species have been recorded as protecting crops from Botrytis cinerea, Rhizoctonia solani, Alternaria solani, Glomerella graminicola, Phytophthora capsici, Magnaporthe grisea and Colletotrichum lindemuthianum, although this protection may not be entirely due to Trichoderma digesting these fungi, but to them improving plant disease resistance indirectly. Bacteria Bacterial mycophagy was a term coined in 2005 to describe the ability of some bacteria to "grow at the expense of living fungal hyphae". In a 2007 review in the New Phytologist this definition was adapted to include only bacteria which play an active role in gaining nutrition from fungi, excluding those that feed off passive secretions by fungi, or off dead or damaged hyphae. The majority of our knowledge in this area relates to interactions between bacteria and fungi in the soil and in or around plants; little is known about interactions in marine and freshwater habitats, or those occurring on or inside animals. It is not known what effects bacterial mycophagy has on the fungal communities in nature.
There are three mechanisms by which bacteria feed on fungi: they kill fungal cells, cause them to secrete more material out of their cells, or enter the cells to feed internally; they are categorised according to these habits. Those that kill fungal cells are called necrotrophs; the molecular mechanisms of this feeding are thought to overlap considerably with those of bacteria that feed on fungi after they have died naturally. Necrotrophs may kill the fungi by digesting their cell wall or by producing toxins which kill fungi, such as tolaasin, produced by Pseudomonas tolaasii. Both of these mechanisms may be required, since fungal cell walls are highly complex, so many different enzymes are needed to degrade them, and because experiments demonstrate that bacteria that produce toxins cannot always infect fungi. It is likely that these two systems act synergistically, with the toxins killing or inhibiting the fungi and exoenzymes degrading the cell wall and digesting the fungus. Examples of necrotrophs include Staphylococcus aureus, which feeds on Cryptococcus neoformans; Aeromonas caviae, which feeds on Rhizoctonia solani, Sclerotium rolfsii and Fusarium oxysporum; and some myxobacteria, which feed on Cochliobolus miyabeanus and Rhizoctonia solani. Bacteria which manipulate fungi into producing more secretions, which they in turn feed off, are called extracellular biotrophs; many bacteria feed on fungal secretions but do not interact directly with the fungi, and these are called saprotrophs rather than biotrophs. Extracellular biotrophs could alter fungal physiology in three ways: they alter their development, the permeability of their membranes (including the efflux of nutrients), and their metabolism. The precise signalling molecules that are used to achieve these changes are unknown, but it has been suggested that auxins (better known for their role as a plant hormone) and quorum sensing molecules may be involved. Bacteria have been identified that manipulate fungi in these ways, for example mycorrhiza helper bacteria (MHBs) and Pseudomonas putida, but it remains to be demonstrated whether the changes they cause are directly beneficial to the bacteria. In the case of MHBs, which increase infection of plant roots by mycorrhizal fungi, they may benefit because the fungi gain nutrition from the plant and in turn will secrete more sugars. The third group, those that enter living fungal cells, are called endocellular biotrophs. Some of these are transmitted vertically, whereas others are able to actively invade and subvert fungal cells. The molecular interactions involved in these relationships are mostly unknown. Many endocellular biotrophs, for example some Burkholderia species, belong to the β-proteobacteria, which also contains species that live inside the cells of mammals and amoebae. Some of them, for example Candidatus Glomeribacter gigasporarum, which colonises the spores of Gigaspora margarita, have reduced genome sizes, indicating that they have become entirely dependent on the metabolic functions of the fungal cells in which they live. When all the endocellular bacteria inside G. margarita were removed, the fungus grew differently and was less fit, suggesting that some bacteria may also provide services to the fungi they live in. Ciliates The ciliate family Grossglockneridae, including the species Grossglockneria acuta, feeds exclusively on fungi. G.
acuta first attaches itself to a hypha or sporangium via a feeding tube, after which a ring-shaped structure, around 2 μm in diameter, is observed to appear on the fungus, possibly consisting of degraded cell wall material. G. acuta then feeds through the hole in the cell wall for, on average, 10 minutes, before detaching itself and moving away. The precise mechanism of feeding is not known, but it conceivably involves enzymes including acid phosphatases, cellulases and chitinases. Microtubules are visible in the feeding tube, as are possible reserves of cell membrane, which may be used to form food vacuoles filled with the cytoplasm of the fungus, via endocytosis, which are then transported back into G. acuta. The holes made by G. acuta bear some similarities to those made by amoebae, but unlike amoebae, G. acuta never engulfs the fungus. Plants Around 90% of land plants live in symbiosis with mycorrhizal fungi, in which fungi gain sugars from plants and plants gain nutrients from the soil via the fungi. Some species of plant have evolved to manipulate this symbiosis so that they no longer give the fungi the sugars that they produce and instead gain sugars from the fungi, a process called myco-heterotrophy. Some plants are dependent on fungi as a source of sugars only during the early stages of their development; these include most of the orchids, as well as many ferns and lycopods. Others are dependent on this food source for their entire lifetime, including some orchids and Gentianaceae, and all species of Monotropaceae and Triuridaceae. Those that are dependent on fungi but still photosynthesise are called mixotrophs, since they gain nutrition in more than one way; by gaining a significant amount of sugars from fungi, they are able to grow in the deep shade of forests. Examples include the orchids Epipactis, Cephalanthera and Platanthera and the tribe Pyroleae of the family Ericaceae. Others, such as Monotropastrum humile, no longer photosynthesise and are totally dependent on fungi for nutrients. Around 230 such species exist, and this trait is thought to have evolved independently on five occasions outside of the orchid family. Some individuals of the orchid species Cephalanthera damasonium are mixotrophs, but others do not photosynthesise. Because the fungi that myco-heterotrophic plants gain sugars from in turn gain them from plants that do photosynthesise, they are considered indirect parasites of other plants. The relationship between orchids and orchid mycorrhizae has been suggested to be somewhere between predation and parasitism. The precise mechanisms by which these plants gain sugars from fungi are not known and have not been demonstrated scientifically. Two pathways have been proposed: they may either degrade fungal biomass, particularly the fungal hyphae which penetrate plant cells, in a manner similar to arbuscular mycorrhizae, or absorb sugars from the fungi by disrupting their cell membranes through mass flow. To prevent the sugars returning to the fungi, they must compartmentalise the sugars or convert them into forms which the fungi cannot use. Fungal farming Insects Three insect lineages, beetles, ants and termites, independently evolved the ability to farm fungi between 40 and 60 million years ago. Just as human societies became more complex after the development of plant-based agriculture, so did these insect lineages when they evolved this ability, and these insects are now of major importance in ecosystems.
The methods that insects use to farm fungi share fundamental similarities with human agriculture. Firstly, insects inoculate a particular habitat or substrate with fungi, much in the same way as humans plant seeds in fields. Secondly, they cultivate the fungi by regulating the growing environment to try to improve the growth of the fungus, as well as protecting it from pests and diseases. Thirdly, they harvest the fungus when it is mature and feed on it. Lastly, they are dependent on the fungi they grow, in the same way that humans are dependent on crops. Beetles Ambrosia beetles, for example Austroplatypus incompertus, farm ambrosia fungi inside trees and feed on them. The mycangia (organs which carry fungal spores) of ambrosia beetles contain various species of fungus, including species of Ambrosiomyces, Ambrosiella, Ascoidea, Ceratocystis, Dipodascus, Diplodia, Endomycopsis, Monacrosporium and Tuberculariella. The ambrosia fungi are found only in the beetles and their galleries, suggesting that they and the beetles have an obligate symbiosis. Termites Around 330 species of termites in twelve genera of the subfamily Macrotermitinae cultivate a specialised fungus in the genus Termitomyces. The fungus is kept in a specialised part of the nest, in fungus combs. Worker termites eat plant matter, producing faecal pellets which they continuously place on top of the comb. The fungus grows into this material and soon produces immature mushrooms, or nodules, a rich source of protein, sugars and enzymes, which the worker termites eat. The nodules also contain indigestible asexual spores, meaning that the faecal pellets produced by the workers always contain spores of the fungus that colonise the plant material that they defaecate. The Termitomyces also fruits, forming mushrooms above ground, which mature at the same time that the first workers emerge from newly formed nests. The mushrooms produce wind-dispersed spores, and through this method new colonies acquire a fungal strain. In some species the genetic variation of the fungus is very low, suggesting that spores of the fungus are transmitted vertically from nest to nest rather than coming from wind-dispersed spores. Ants Around 220 described species of ants in the tribe Attini, along with further undescribed species, cultivate fungi. They are found only in the New World and are thought to have evolved in the Amazon Rainforest, where they are most diverse today. For these ants, farmed fungi are the only source of food on which their larvae are raised and are also an important food for adults. Queen ants carry a small piece of the fungus in small pouches in their mouthparts when they leave the nest to mate, allowing them to establish a new fungus garden when they found a new nest. Different lineages cultivate fungi on different substrates; those that evolved earlier do so on a wide range of plant matter, whereas leaf-cutter ants are more selective, mainly using only fresh leaves and flowers. The fungi are members of the families Lepiotaceae and Pterulaceae. Other fungi in the genus Escovopsis parasitise the gardens, and antibiotic-producing bacteria also inhabit the gardens. Humans Gastropods The marine snail Littoraria irrorata, which lives in the salt marshes of the southeastern United States, feeds on fungi that it encourages to grow. It creates and maintains wounds on the grass Spartina alterniflora, which are then infected by fungi, probably of the genera Phaeosphaeria and Mycosphaerella, which are the preferred diet of the snail.
They also deposit faeces on the wounds that they create, which encourage the growth of the fungi because the faeces are rich in nitrogen and fungal hyphae. Juvenile snails raised on uninfected leaves do not grow and are more likely to die, indicating the importance of the fungi in the diet of L. irrorata. See also Edible mushroom Mushroom diet Mycophagy References Animals by eating behaviors Ecology terminology
Fungivore
[ "Biology" ]
3,960
[ "Ecology terminology", "Behavior", "Ethology", "Animals by eating behaviors" ]
18,321,202
https://en.wikipedia.org/wiki/Cyprodime
Cyprodime is an opioid antagonist from the morphinan family of drugs. It is a selective antagonist of the μ-opioid receptor, which it blocks without affecting the δ-opioid or κ-opioid receptors. This makes it useful for scientific research, as it allows the μ-opioid receptor to be selectively deactivated so that the actions of the δ and κ receptors can be studied separately, in contrast to better-known opioid antagonists such as naloxone, which block all three opioid receptor subtypes. See also Tianeptine, an atypical, selective MOR full agonist licensed for major depression since 1989. Samidorphan, an opioid antagonist preferring the MOR, which is under development for major depression. References Synthetic opioids Morphinans Mu-opioid receptor antagonists Ketones Ethers Phenol ethers
Cyprodime
[ "Chemistry" ]
198
[ "Organic compounds", "Ketones", "Functional groups", "Ethers" ]
18,324,880
https://en.wikipedia.org/wiki/Peter%20Schwerdtfeger
Peter Schwerdtfeger (born 1 September 1955) is a German scientist. He holds a chair in theoretical chemistry at Massey University in Auckland, New Zealand, serves as director of the Centre for Theoretical Chemistry and Physics, is the head of the New Zealand Institute for Advanced Study, and is a former president of the Alexander von Humboldt Foundation. Academic career Schwerdtfeger took his first degree in chemical engineering at Aalen University in 1976, after finishing a degree as a chemical-technical assistant at the Institute Dr. Flad in Stuttgart in 1973. He studied chemistry, physics and mathematics at Stuttgart University, where he received his PhD in theoretical chemistry in 1986. He received a Feodor Lynen fellowship of the Alexander von Humboldt Foundation to join the chemistry department, and later the School of Engineering, at the University of Auckland in 1987. After a two-year research fellowship at the Research School of Chemistry (Australian National University), he returned to the University of Auckland in 1991 for a lectureship in chemistry. He received his habilitation and venia legendi (Privatdozent) in 1995 from the Philipps University of Marburg. He held a personal chair in physical chemistry for five years until moving to Massey University Albany in 2004, where he established the Centre for Theoretical Chemistry and Physics. He became a founding member of the New Zealand Institute for Advanced Study in 2007. In 2007 he received the Royal Society Australasian Chemistry Lectureship, and he was the Källén Lecturer in Physics at Lund University (Sweden) in 2015. From 2017 to 2018 he was a member of the Centre for Advanced Study at the Norwegian Academy of Science and Letters. He has published 350 papers in international journals. He was awarded eight consecutive Marsden awards by the Royal Society of New Zealand. One of Schwerdtfeger's notable doctoral students is Patricia Hunt, professor at Victoria University of Wellington. Fellowships and awards 2001 James Cook Fellowship 2011 Fukui Medal 2012 Fellow of the International Academy of Quantum Molecular Science. 2014 Royal Society of New Zealand's Rutherford Medal. 2019 Dan Walls Medal Selected publications References External links Official web site 1955 births Living people 20th-century German chemists Academic staff of Massey University Scientists from Stuttgart Recipients of the Rutherford Medal 21st-century New Zealand chemists Fellows of the Australian Academy of Technological Sciences and Engineering Theoretical chemists James Cook Research Fellows
Peter Schwerdtfeger
[ "Chemistry" ]
472
[ "Quantum chemistry", "Theoretical chemistry", "Theoretical chemists", "Physical chemists" ]
18,326,471
https://en.wikipedia.org/wiki/Cool%20Earth%2050
Cool Earth 50 (also known as Cool Earth) is a plan developed by Japan to reduce global CO2 emissions by 50% by 2050, which was discussed at the 34th G8 summit. Cool Earth 50 is planned to be a framework that would continue towards the goals set forth in the Kyoto Protocol. The plan includes three proposals: a long-term strategy, a mid-term strategy, and the launch of a national campaign for achieving the Kyoto Protocol target. The plan was first proposed on May 24, 2007, at an international conference called Asian Future, and was initiated by Japanese Prime Minister Shinzo Abe. The program's goal is to reduce current global greenhouse gas emissions by 50% by the year 2050. The reduction goal is aimed particularly at the largest greenhouse gas emitting countries, the United States, China, Japan, and India, and at having the major emitters create a framework for reduction. Cool Earth aims at reducing greenhouse gas emissions by improving technology in energy fields. A large goal of Cool Earth is to promote economic prosperity through green technology and to encourage political stability domestically and internationally. Proposals The proposals of this program include: A long-term strategy for global reduction of greenhouse gas emissions. Three principles for establishing an international framework for addressing global warming from 2013 onward. A national campaign to ensure Japan achieves the Kyoto Protocol goal. In addition, the proposal sets out to make technological advancements in: Zero-emissions coal-fired power generation Reactors for nuclear power generation Technology for high-efficiency and low-cost solar power generation Technology for the use of hydrogen Ultra high energy efficiency technology Course 50 Course 50 is a strategy to reduce emissions by 30%. The aim of Course 50 is to suppress CO2 emissions from blast furnaces and to capture CO2 from blast furnaces. The goal is to reach this reduction by the year 2030. The program's first phase was initiated in the year 2008 and funded by the New Energy and Industrial Technology Development Organization. The original budget was approximately 10 billion yen. Course 50 is encouraging innovation in technology towards more effective CO2-capturing polymers, as well as temperature reduction and improved efficiency of blast furnaces in the steel industry. Solar Japan, through Cool Earth, has been expanding its solar power industry, offering subsidies to improve solar-powered infrastructure. The main research goal is to achieve a low-cost, high-efficiency solar cell that offers a conversion efficiency of 40%. Hydrogen power In 2009, Japan fitted over 100,000 homes with hydrogen-powered fuel cells, improving its hydrogen-powered infrastructure. Energy efficient technology New development of LED light bulbs that utilize blue and white light has improved efficiency by over 25% since 2008. SerDes router technology has the capability to reduce energy waste from routers by over 50%. See also Climate change in Japan Emissions reduction efforts References External links Cool Earth 50 at the Japanese Ministry of Foreign Affairs "Cool Earth 50" welcome (in Nihongo), 2007 Environment of Japan G7 summits Climate change in Japan Emissions reduction 2007 introductions 2007 establishments in Japan
Cool Earth 50
[ "Chemistry" ]
604
[ "Greenhouse gases", "Emissions reduction" ]
15,396,952
https://en.wikipedia.org/wiki/Short%20fiber%20reinforced%20blends
Short Fiber Reinforced Blends are a particular case of ternary composites, i.e. composites prepared from three ingredients. In particular, they can be considered as a combination of an immiscible polymer blend and a short fiber reinforced composite. These blends have the potential to integrate the easy processing solutions available for short fiber reinforced composites with the high mechanical performance of continuous fiber reinforced composites. The performance of these complex, ternary systems is controlled by their morphology. Depending on the aspect ratio of the filler particles (length/diameter) and their compatibility with the polymeric components, one can achieve different morphologies: (i) filler contained within the dispersed phase, (ii) within the matrix phase or (iii) at the interface between the two phases. If the fibers are sufficiently long and are preferentially wetted by the dispersed phase, an effectively continuous network, consisting of fibers welded together by the dispersed phase, can be created. In this manner a pseudo-continuous fibrous reinforcement is spontaneously formed during the processing step and a composite material with better mechanical performance can be obtained. See also Composite materials References External links The Macrogalleria - Immiscible Polymer Blends Composite materials Fibre-reinforced polymers
Short fiber reinforced blends
[ "Physics" ]
247
[ "Materials", "Composite materials", "Matter" ]
15,397,255
https://en.wikipedia.org/wiki/Developmental%20psychobiology
Developmental psychobiology is an interdisciplinary field, encompassing developmental psychology, biological psychology, neuroscience and many other areas of biology. The field covers all phases of ontogeny, with particular emphasis on prenatal, perinatal and early childhood development. Conducting research into basic aspects of development, for example the development of infant attachment, sleep, eating, thermoregulation, learning, attention and acquisition of language, occupies most developmental psychobiologists. At the same time, they are actively engaged in research on applied problems such as sudden infant death syndrome, the development and care of the preterm infant, autism, and the effects of various prenatal insults (e.g., maternal stress, alcohol exposure) on the development of brain and behavior (see Michel & Moore, 1995). Developmental psychobiologists employ and integrate both biological and psychological concepts and methods (cf. Michel & Moore, 1995) and have historically been highly concerned with the interrelation between ontogeny and phylogeny (or individual development and evolutionary processes; see, e.g., Blumberg, 2002, 2005; Gottlieb, 1991; Moore, 2001). Developmental psychobiologists also tend to be systems thinkers, avoiding the reification of artificial dichotomies (e.g., "nature" vs. "nurture"). Many developmental psychobiologists thus take exception to both the favored methods and theoretical underpinnings of fields like evolutionary psychology (see, e.g., Lickliter & Honeycutt, 2003; Narvaez et al., 2022). One of the goals of developmental psychobiology is to explain the physical development of the nervous system and how that affects the individual's development in the long term. As seen in a study performed by Molly J. Goodfellow and Derick H. Lindquist, rats exposed to ethanol during early postnatal development experience structural and functional impairments throughout the brain, including the hypothalamus. These developmental complications caused the ethanol-exposed rats to lose their long-term memory capabilities but to maintain a short-term memory capacity nearly equal to that of the control rats. For more information about how ethanol affects the postnatal development of rats, see, e.g., Molly J. Goodfellow and Derick H. Lindquist (2014). Morphology problem One of the essential issues in developmental psychobiology is the morphology problem of proper nervous system development. This direction of research attempts to explain the precise coordination of all cells in space and time during the embryological processes of cell and tissue differentiation that shape a particular nervous system structure. In cognitive development, shaping the proper nervous system is necessary for the emergence of the multiple brain-based functions that enable humans to perform mental processes such as perception, learning, memory, understanding, awareness, reasoning, judgment, intuition, and language. Our nervous system operates over everything that makes us human. This means that only the formation of neural tissues in a certain way contributes to shaping cognitive functions.
However, a lack of knowledge about the precise coordination of all cells in space and time during the embryonal period does not allow us to understand how this formation of neural tissues in a certain way proceeds: what forces at the cellular level coordinate four very general classes of tissue deformation, namely tissue folding and invagination, tissue flow and extension, tissue hollowing, and, finally, tissue branching (Collinet, C., Lecuit, T., 2021). Gene activity from interaction with events and experiences in the environment cannot alone shape tissues in morphogenesis since these processes may not be coordinated in time at the gene level. Again, the nervous system structures operate over everything that makes us human; therefore, forming neural tissues in a certain way is essential for shaping cognitive functions (Val Danilov, I., 2023). These findings mean that the formation of the nervous system's specific structure should be closely related to the precise coordination in time of all general classes of tissue deformation at the cell level. A complete developmental program with a template to create the final biological structure of the nervous system is also required for such a complex dynamic process (Val Danilov, I., 2023). The Shared intentionality approach proposes one of the solutions to the morphology problem, explaining this temporal cell coordination due to non-local coupling in the low-frequency electromagnetic field of the mother's heart (Val Danilov, I., 2023). This position states that, since the reflex stage of development, and even earlier, the embryonal nervous system evolves in a certain way by copying the maternal ecological dynamics (Val Danilov, I., 2023). See also Behavioral neuroscience Pre- and perinatal psychology Childbirth Pregnancy References Michel, G. F., & Moore, C. L. (1995). Developmental Psychobiology: An Interdisciplinary Science. Cambridge, MA: MIT Press Blumberg, M.S. (2002). Body Heat: Temperature and Life On Earth. Harvard University Press Blumberg, M.S. (2005). Basic Instinct: The Genesis of Behavior. Basic Books Gottlieb, G. (1991). Individual Development and Evolution: The Genesis of Novel Behavior. Oxford University Press Moore, D. S. (2001). The Dependent Gene: The Fallacy of "Nature vs. Nurture". New York, NY: Henry Holt Collinet, C., Lecuit, T.(2021). "Programmed and self-organized flow of information during morphogenesis." Nature Reviews Molecular Cell Biology.;22(4):245-65.(2021) https://doi.org/10.1038/s41580-020-00318-6 Val Danilov, I.(2023). "Low-Frequency Oscillations for Nonlocal Neuronal Coupling in Shared Intentionality Before and After Birth: Toward the Origin of Perception." OBM Neurobiology 2023; 7(4): 192; doi:10.21926/obm.neurobiol.2304192 https://www.lidsen.com/journals/neurobiology/neurobiology-07-04-192 External links The International Society for Developmental Psychobiology - An annual forum for the presentation and dissemination of new research and findings in developmental psychobiology. Developmental Psychobiology journal Behavioral neuroscience Developmental psychology
Developmental psychobiology
[ "Biology" ]
1,322
[ "Behavioural sciences", "Behavior", "Behavioral neuroscience", "Developmental psychology" ]
15,398,059
https://en.wikipedia.org/wiki/Anisatin
Anisatin is an extremely toxic, insecticidally active component of the shikimi plant. The lethal dose is 1 mg/kg (i.p.) in mice. Symptoms begin to appear about 1–6 hours after ingestion, beginning with gastrointestinal ailments, such as diarrhea, vomiting, and stomach pain, followed by nervous system excitation, seizures, loss of consciousness, and respiratory paralysis, which is the ultimate cause of death. Role in the GABA system The GABA system is an important site of action by a variety of chemicals, including alcohols, heavy metals, and insecticides. A study conducted on frog spinal cords and rat brains indicated that anisatin was a strong non-competitive GABA antagonist. Anisatin was shown to suppress GABA-induced signals, but when anisatin was added without GABA, there was no change in the signal. Anisatin was also found to share the same binding site as picrotoxinin, and did not cause additional suppression of GABA-induced signals in the presence of high concentrations of picrotoxinin. Anisatin poisoning has been shown to cause epilepsy, hallucinations, nausea, and convulsions. Diazepam has been studied as an anti-convulsive on the GABA system, and has been shown to be an effective treatment for anisatin-induced convulsions. Synthesis A total synthesis of (-)-anisatin was reported in 1990. References External links Plant toxins Neurotoxins GABAA receptor negative allosteric modulators Tetrols Sesquiterpene lactones Oxetanes Spiro compounds
Anisatin
[ "Chemistry" ]
357
[ "Chemical ecology", "Plant toxins", "Organic compounds", "Neurochemistry", "Neurotoxins", "Spiro compounds" ]
15,399,117
https://en.wikipedia.org/wiki/Electromechanical%20modeling
The purpose of electromechanical modeling is to model and simulate an electromechanical system, so that its physical parameters can be examined before the actual system is built. The major objectives of electromechanical modeling are parameter estimation, using estimation theory coupled with physical experiments, and physical realization, supported by a proper stability analysis of the overall system. A theory-driven mathematical model can also be applied to other systems to judge the performance of the joint system as a whole. This is a well-known and proven technique for designing large control systems for industrial as well as academic multi-disciplinary complex systems. The technique has recently also been employed in MEMS technology. Different types of mathematical modeling The modeling of purely mechanical systems is mainly based on the Lagrangian, which is a function of the generalized coordinates and the associated velocities. If all forces are derivable from a potential, then the time behavior of the dynamical system is completely determined. For simple mechanical systems, the Lagrangian is defined as the difference of the kinetic energy and the potential energy. A similar approach exists for electrical systems. By means of the electrical coenergy and well-defined power quantities, the equations of motion are uniquely defined. The currents of the inductors and the voltage drops across the capacitors play the role of the generalized coordinates. All constraints, for instance those caused by the Kirchhoff laws, are eliminated from the considerations. After that, a suitable transfer function is derived from the system parameters, which eventually governs the behavior of the system. In consequence, we have quantities (kinetic and potential energy, generalized forces) which determine the mechanical part and quantities (coenergy, powers) for the description of the electrical part. This allows the mechanical and electrical parts to be combined by means of an energy approach. As a result, an extended Lagrangian format is produced. See also Mechatronics Mechanical–electrical analogies References Electromechanical engineering Mechanics Electrodynamics Microelectronic_and_microelectromechanical_systems
Electromechanical modeling
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
428
[ "Microtechnology", "Materials science", "Mechanics", "Mechanical engineering by discipline", "Mechanical engineering", "Electromechanical engineering", "Electrodynamics", "Electrical engineering", "Microelectronic and microelectromechanical systems", "Dynamical systems" ]
15,402,259
https://en.wikipedia.org/wiki/MAIFI
The Momentary Average Interruption Frequency Index (MAIFI) is a reliability index used by electric power utilities. MAIFI is the average number of momentary interruptions that a customer would experience during a given period (typically a year). Electric power utilities may define momentary interruptions differently, with some considering a momentary interruption to be an outage of less than 1 minute in duration while others may consider a momentary interruption to be an outage of less than 5 minutes in duration. Calculation MAIFI is calculated as: MAIFI = (total number of customer momentary interruptions) / (total number of customers served). For example, a utility whose 50,000 customers experienced a combined 100,000 momentary interruptions in a year would report a MAIFI of 2 interruptions per customer. Reporting MAIFI has tended to be less reported than other reliability indicators, such as SAIDI, SAIFI, and CAIDI. However, MAIFI is useful for tracking momentary power outages, or "blinks," that can be hidden or misrepresented by an overall outage duration index like SAIDI or SAIFI. Causes Momentary power outages are often caused by transient faults, such as lightning strikes or vegetation contacting a power line, and many utilities use reclosers to automatically restore power quickly after a transient fault has cleared. Comparisons MAIFI is specific to the area (power utility, state, region, county, power line, etc.) because of the many variables that affect the measure: high/low lightning, number and type of trees, high/low winds, etc. Therefore, comparing the MAIFI of one power utility to another is not valid and should not be used in this type of benchmarking. It is also difficult to compare this measure of reliability within a single utility. One year may have had an unusually high number of thunderstorms and thus skew any comparison to another year's MAIFI. References Electric power Reliability indices
MAIFI
[ "Physics", "Engineering" ]
338
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
15,407,327
https://en.wikipedia.org/wiki/Absolute%20electrode%20potential
Absolute electrode potential, in electrochemistry, according to an IUPAC definition, is the electrode potential of a metal measured with respect to a universal reference system (without any additional metal–solution interface). Definition According to a more specific definition presented by Trasatti, the absolute electrode potential is the difference in electronic energy between a point inside the metal (Fermi level) of an electrode and a point outside the electrolyte in which the electrode is submerged (an electron at rest in vacuum). This potential is difficult to determine accurately. For this reason, a standard hydrogen electrode is typically used for reference potential. The absolute potential of the SHE is 4.44 ± 0.02 V at 25 °C. Therefore, for any electrode at 25 °C: E^M(abs) = E^M(SHE) + 4.44 V, where: E is the electrode potential, V is the unit volt, M denotes the electrode made of metal M, (abs) denotes the absolute potential, and (SHE) denotes the electrode potential relative to the standard hydrogen electrode. A different definition for the absolute electrode potential (also known as absolute half-cell potential and single electrode potential) has also been discussed in the literature. In this approach, one first defines an isothermal absolute single-electrode process (or absolute half-cell process). For example, in the case of a generic metal being oxidized to form a solution-phase ion, the process would be M(metal) → M⁺(solution) + e⁻(gas). For the hydrogen electrode, the absolute half-cell process would be ½H₂(gas) → H⁺(solution) + e⁻(gas). Other types of absolute electrode reactions would be defined analogously. In this approach, all three species taking part in the reaction, including the electron, must be placed in thermodynamically well-defined states. All species, including the electron, are at the same temperature, and appropriate standard states for all species, including the electron, must be fully defined. The absolute electrode potential is then defined as the Gibbs free energy for the absolute electrode process. To express this in volts one divides the Gibbs free energy by the negative of Faraday's constant. Rockwood's approach to absolute-electrode thermodynamics is easily extendable to other thermodynamic functions. For example, the absolute half-cell entropy has been defined as the entropy of the absolute half-cell process defined above. An alternative definition of the absolute half-cell entropy has recently been published by Fang et al., who define it as the entropy of the following reaction (using the hydrogen electrode as an example): ½H₂(gas) → H⁺(solution) + e⁻(metal). This approach differs from the approach described by Rockwood in the treatment of the electron, i.e. whether it is placed in the gas phase or the metal. The electron can also be in another state, that of a solvated electron in solution, as studied by Alexander Frumkin and B. Damaskin and others. Determination The basis for the determination of the absolute electrode potential under the Trasatti definition is given by the equation: E^M(abs) = φ^M + Δψ^(M,S), where: E^M(abs) is the absolute potential of the electrode made of metal M, φ^M is the electron work function of metal M, and Δψ^(M,S) is the contact (Volta) potential difference at the metal(M)–solution(S) interface. For practical purposes, the value of the absolute electrode potential of the standard hydrogen electrode is best determined with the use of data for an ideally-polarizable mercury (Hg) electrode, evaluated at σ = 0, where σ = 0 denotes the condition of the point of zero charge at the interface.
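As a quick worked illustration of the conversion relation above (a sketch that assumes the IUPAC value of 4.44 V for the SHE together with the familiar textbook standard potential of about +0.34 V for the Cu²⁺/Cu couple; these inputs are assumptions for illustration, not values taken from this article):

```latex
% Converting a conventional (SHE-referenced) standard potential to the
% absolute scale; 4.44 V (IUPAC, SHE) and +0.34 V (Cu2+/Cu) are assumed inputs.
E^{\mathrm{Cu}}(\mathrm{abs}) = E^{\mathrm{Cu}}(\mathrm{SHE}) + 4.44~\mathrm{V}
                              = 0.34~\mathrm{V} + 4.44~\mathrm{V}
                              = 4.78~\mathrm{V}
```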
The types of physical measurements required under the Rockwood definition are similar to those required under the Trasatti definition, but they are used in a different way, e.g. in Rockwood's approach they are used to calculate the equilibrium vapour pressure of the electron gas. The numerical value for the absolute potential of the standard hydrogen electrode one would calculate under the Rockwood definition is sometimes fortuitously close to the value one would obtain under the Trasatti definition. This near-agreement in the numerical value depends on the choice of ambient temperature and standard states, and is the result of the near-cancellation of certain terms in the expressions. For example, if a standard state of one atmosphere ideal gas is chosen for the electron gas then the cancellation of terms occurs at a temperature of 296 K, and the two definitions give an equal numerical result. At 298.15 K a near-cancellation of terms would apply and the two approaches would produce nearly the same numerical values. However, there is no fundamental significance to this near agreement because it depends on arbitrary choices, such as temperature and definitions of standard states. See also Electrochemical potential Galvani potential Standard electrode potential References Electrochemistry Electrochemical potentials
Absolute electrode potential
[ "Chemistry" ]
975
[ "Electrochemistry", "Electrochemical potentials" ]
15,409,174
https://en.wikipedia.org/wiki/Piezoelectric%20accelerometer
A piezoelectric accelerometer is an accelerometer that employs the piezoelectric effect of certain materials to measure dynamic changes in mechanical variables (e.g., acceleration, vibration, and mechanical shock). As with all transducers, piezoelectric accelerometers convert one form of energy into another and provide an electrical signal in response to a quantity, property, or condition that is being measured. Using the general sensing method upon which all accelerometers are based, acceleration acts upon a seismic mass that is restrained by a spring or suspended on a cantilever beam, and converts a physical force into an electrical signal. Before the acceleration can be converted into an electrical quantity it must first be converted into either a force or a displacement. This conversion is done via the mass-spring system shown in the figure to the right. Introduction The word piezoelectric finds its roots in the Greek word piezein, which means to squeeze or press. When a physical force is exerted on the accelerometer, the seismic mass loads the piezoelectric element according to Newton's second law of motion (F = ma). The force exerted on the piezoelectric material can be observed in the change in the electrostatic force or voltage generated by the piezoelectric material. This differs from the piezoresistive effect in that piezoresistive materials experience a change in the resistance of the material rather than a change in charge or voltage. The physical force exerted on the piezoelectric can be classified as one of two types: bending or compression. Stress of the compression type can be understood as a force exerted on one side of the piezoelectric while the opposing side rests against a fixed surface, while bending involves a force being exerted on the piezoelectric from both sides. Piezoelectric materials used for the purpose of accelerometers fall into two categories: single-crystal and ceramic materials. The first and more widely used are single-crystal materials (usually quartz). Though these materials do offer a long life span in terms of sensitivity, their disadvantage is that they are generally less sensitive than some piezoelectric ceramics. The other category, ceramic materials, have a higher piezoelectric constant (sensitivity) than single-crystal materials, and are less expensive to produce. Ceramics use barium titanate, lead-zirconate-lead-titanate, lead metaniobate, and other materials whose composition is considered proprietary by the company responsible for their development. The disadvantage of piezoelectric ceramics, however, is that their sensitivity degrades with time, making the longevity of the device less than that of single-crystal materials. In applications where low-sensitivity piezoelectrics are used, two or more crystals can be connected together for output multiplication. The proper material can be chosen for a particular application based on its sensitivity, frequency response, bulk resistivity, and thermal response. Due to the low output signal and high output impedance that piezoelectric accelerometers possess, there is a need for amplification and impedance conversion of the signal produced. In the past this problem was solved using a separate (external) amplifier/impedance converter. This method, however, is generally impractical due to the noise that is introduced as well as the physical and environmental constraints posed on the system as a result.
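As a rough sketch of the arithmetic behind these relationships, the following illustrates an idealized charge-mode accelerometer; the sensitivity and capacitance figures are invented for illustration and do not come from any particular device or datasheet:

```python
# Hedged sketch of the idealized charge-mode relationships; the sensitivity
# and capacitance numbers below are assumed values, purely for illustration.

def charge_output_pC(accel_g, sensitivity_pC_per_g=10.0):
    """Charge from the sensing element: F = m*a loads the crystal, so the
    generated charge q is proportional to acceleration (q = S_q * a)."""
    return sensitivity_pC_per_g * accel_g

def voltage_mV(charge_pC, total_capacitance_nF=1.0):
    """Voltage seen by a high-impedance input, V = q / C; note that
    pC / nF = mV, and cable capacitance adds directly to C."""
    return charge_pC / total_capacitance_nF

a_g = 5.0                                         # acceleration, in units of g
q = charge_output_pC(a_g)                         # -> 50 pC
print(f"{a_g} g -> {q} pC -> {voltage_mV(q)} mV") # 50 pC into 1 nF -> 50 mV
```

The division by total capacitance is why long cables (which add capacitance) degrade the raw signal, motivating the built-in electronics described next.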
Today IC amplifiers/impedance converters are commercially available and are generally packaged within the case of the accelerometer itself. History Behind the operation of the piezoelectric accelerometer lie some fundamental concepts governing the behavior of crystallographic structures. In 1880, Pierre and Jacques Curie published an experimental demonstration connecting mechanical stress and surface charge on a crystal. This phenomenon became known as the piezoelectric effect. Closely related to this phenomenon is the Curie point, named for the physicist Pierre Curie, which is the temperature above which a piezoelectric material loses the spontaneous polarization of its atoms. The development of the commercial piezoelectric accelerometer came about through a number of attempts to find the most effective method to measure the vibration of large structures such as bridges and of vehicles in motion such as aircraft. One attempt involved using the resistance strain gage as a device to build an accelerometer. Incidentally, it was Hans J. Meier who, through his work at MIT, is given credit as the first to construct a commercial strain gage accelerometer (circa 1938). However, strain gage accelerometers were fragile, could only produce low resonant frequencies, and exhibited a low frequency response. These limitations in dynamic range made them unsuitable for testing naval aircraft structures. On the other hand, the piezoelectric sensor proved to be a much better choice than the strain gage for designing an accelerometer. The high modulus of elasticity of piezoelectric materials makes the piezoelectric sensor a more viable solution to the problems identified with the strain gage accelerometer. Simply stated, the inherent properties of piezoelectric accelerometers made them a much better alternative to the strain gage types because of their high frequency response and their ability to generate high resonant frequencies. The piezoelectric accelerometer allowed for a reduction in physical size at the manufacturing level and it also provided a higher g (standard gravity) capability relative to the strain gage type. By comparison, the strain gage type exhibited a flat frequency response up to 200 Hz while the piezoelectric type provided a flat response up to 10,000 Hz. These improvements made it possible to measure the high-frequency vibrations associated with the quick movements and short-duration shocks of aircraft, which before was not possible with the strain gage types. Before long, the technological benefits of the piezoelectric accelerometer became apparent and in the late 1940s large-scale production of piezoelectric accelerometers began. Today, piezoelectric accelerometers are used for instrumentation in the fields of engineering, health and medicine, aeronautics and many other industries. Manufacturing There are two common methods used to manufacture accelerometers. One is based upon the principles of piezoresistance and the other is based on the principles of piezoelectricity. Both methods ensure that unwanted orthogonal acceleration vectors are excluded from detection. Manufacturing an accelerometer that uses piezoresistance starts with a semiconductor layer that is attached to a handle wafer by a thick oxide layer. The semiconductor layer is then patterned to the accelerometer's geometry. This semiconductor layer has one or more apertures so that the underlying mass will have the corresponding apertures.
Next the semiconductor layer is used as a mask to etch out a cavity in the underlying thick oxide. A mass in the cavity is supported in cantilever fashion by the piezoresistant arms of the semiconductor layer. Directly below the accelerometer's geometry is a flex cavity that allows the mass in the cavity to flex or move in a direction orthogonal to the surface of the accelerometer. Accelerometers based upon piezoelectricity are constructed with two piezoelectric transducers. The unit consists of a hollow tube that is sealed by a piezoelectric transducer on each end. The transducers are oppositely polarized and are selected to have a specific series capacitance. The tube is then partially filled with a heavy liquid and the accelerometer is excited. While excited, the total output voltage is continuously measured and the volume of the heavy liquid is microadjusted until the desired output voltage is obtained. Finally the outputs of the individual transducers are measured, the residual voltage difference is tabulated, and the dominant transducer is identified. In 1943 the Danish company Brüel & Kjær launched Type 4301, the world's first charge accelerometer. Applications of piezoelectric accelerometers Piezoelectric accelerometers are used in many different industries, environments, and applications, all typically requiring measurement of short-duration impulses. Piezoelectric measuring devices are widely used today in the laboratory, on the production floor, and as original equipment for measuring and recording dynamic changes in mechanical variables including shock and vibration. Some accelerometers have built-in electronics to amplify the signal before transmitting it to the recording device. This approach was pioneered by PCB Piezotronics, whose ICP® (integrated circuit piezoelectric) design, released in 1967, later evolved into the IEPE standard (see Integrated Electronics Piezo-Electric). Other related, brand-specific descriptors of IEPE are: CCLD, IsoTron or DeltaTron. Accelerometers have also gained onboard memory containing serial-number and calibration data, typically referred to as a TEDS (Transducer Electronic Data Sheet) per the IEEE 1451 standard. References Norton, Harry N. (1989). Handbook of Transducers. Prentice Hall PTR. 'PDF Link' External links 'Piezoelectric Transducers' 'Piezoelectric Sensors' 'Piezoelectric Accelerometers - Theory and Application' 'Access to Accels' - Tutorial about PE accelerometers Piezoelectric materials Transducers Accelerometers
Piezoelectric accelerometer
[ "Physics", "Technology", "Engineering" ]
1,962
[ "Accelerometers", "Physical phenomena", "Physical quantities", "Acceleration", "Measuring instruments", "Materials", "Electrical phenomena", "Piezoelectric materials", "Matter" ]
15,409,192
https://en.wikipedia.org/wiki/SAT%20solver
In computer science and formal methods, a SAT solver is a computer program which aims to solve the Boolean satisfiability problem. On input a formula over Boolean variables, such as "(x or y) and (x or not y)", a SAT solver outputs whether the formula is satisfiable, meaning that there are possible values of x and y which make the formula true, or unsatisfiable, meaning that there are no such values of x and y. In this case, the formula is satisfiable when x is true, so the solver should return "satisfiable". Since the introduction of algorithms for SAT in the 1960s, modern SAT solvers have grown into complex software artifacts involving a large number of heuristics and program optimizations to work efficiently. By a result known as the Cook–Levin theorem, Boolean satisfiability is an NP-complete problem in general. As a result, only algorithms with exponential worst-case complexity are known. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s, which have contributed to dramatic advances in the ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints. SAT solvers often begin by converting a formula to conjunctive normal form. They are often based on core algorithms such as the DPLL algorithm, but incorporate a number of extensions and features. Most SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution, with an output such as "unknown" in the latter case. Often, SAT solvers do not just provide an answer, but can provide further information, including an example assignment (values for x, y, etc.) in case the formula is satisfiable, or a minimal set of unsatisfiable clauses if the formula is unsatisfiable. Modern SAT solvers have had a significant impact on fields including software verification, program analysis, constraint solving, artificial intelligence, electronic design automation, and operations research. Powerful solvers are readily available as free and open-source software, and solvers are built into some programming languages, for example by exposing SAT solvers as constraints in constraint logic programming. Overview A Boolean formula is any expression that can be written using Boolean (propositional) variables x, y, z, ... and the Boolean operations AND, OR, and NOT. For example, (x AND y) OR (x AND (NOT z)) An assignment consists of choosing, for each variable, a value TRUE or FALSE. For any assignment v, the Boolean formula can be evaluated, and evaluates to true or false. The formula is satisfiable if there exists an assignment (called a satisfying assignment) for which the formula evaluates to true. The Boolean satisfiability problem is the decision problem which asks, on input a Boolean formula, to determine whether the formula is satisfiable or not. This problem is NP-complete. Core algorithms SAT solvers are usually developed using one of two core approaches: the Davis–Putnam–Logemann–Loveland algorithm (DPLL) and conflict-driven clause learning (CDCL). DPLL A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 1960s (see references below) and is now commonly referred to as the DPLL algorithm. Many modern approaches to practical SAT solving are derived from the DPLL algorithm and share the same structure.
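As a rough illustration of that shared structure, here is a minimal DPLL-style sketch in Python; it is a toy for exposition only (real solvers add watched literals, clause learning, and branching heuristics), and the function name and clause encoding are choices made for this sketch, using the common convention that the integer n stands for variable n and -n for its negation:

```python
# A minimal, illustrative DPLL-style solver (a toy, not an industrial tool).
# Clause encoding: each clause is a set of non-zero integers; the integer n
# stands for variable n and -n for its negation.

def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}
    clauses = [set(c) for c in clauses]

    # Unit propagation: a one-literal clause forces that literal to be true.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if len(clause) == 1:
                lit = next(iter(clause))
                assignment[abs(lit)] = lit > 0
                simplified = []
                for c in clauses:
                    if lit in c:
                        continue            # clause already satisfied
                    if -lit in c:
                        c = c - {-lit}      # opposite literal is now false
                    simplified.append(c)
                clauses = simplified
                changed = True
                break

    if any(len(c) == 0 for c in clauses):
        return None                         # empty clause: conflict
    if not clauses:
        return assignment                   # every clause satisfied

    # Branch: try both values of some variable occurring in the formula.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        result = dpll(clauses + [{choice}], dict(assignment))
        if result is not None:
            return result
    return None                             # both branches failed: unsatisfiable

# The introduction's example "(x or y) and (x or not y)", with x = 1, y = 2:
print(dpll([{1, 2}, {1, -2}]))              # prints an assignment with 1: True
```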
Often they only improve the efficiency of certain classes of SAT problems, such as instances that appear in industrial applications or randomly generated instances. Theoretically, exponential lower bounds have been proved for the DPLL family of algorithms. CDCL Modern SAT solvers (developed in the 2000s) come in two flavors: "conflict-driven" and "look-ahead". Both approaches descend from DPLL. Conflict-driven solvers, such as conflict-driven clause learning (CDCL) solvers, augment the basic DPLL search algorithm with efficient conflict analysis, clause learning, backjumping, a "two-watched-literals" form of unit propagation, adaptive branching, and random restarts. These "extras" to the basic systematic search have been empirically shown to be essential for handling the large SAT instances that arise in electronic design automation (EDA). Most state-of-the-art SAT solvers are based on the CDCL framework as of 2019. Well-known implementations include Chaff and GRASP. Look-ahead solvers have especially strengthened reductions (going beyond unit-clause propagation) and heuristics, and they are generally stronger than conflict-driven solvers on hard instances (while conflict-driven solvers can be much better on large instances which actually have an easy instance inside). The conflict-driven MiniSAT, which was relatively successful at the 2005 SAT competition, has only about 600 lines of code. A modern parallel SAT solver is ManySAT. It can achieve super-linear speed-ups on important classes of problems. An example of a look-ahead solver is march_dl, which won a prize at the 2007 SAT competition. Google's CP-SAT solver, part of OR-Tools, won gold medals at the MiniZinc constraint programming competitions in 2018, 2019, 2020, and 2021. Certain types of large random satisfiable instances of SAT can be solved by survey propagation (SP). Particularly in hardware design and verification applications, satisfiability and other logical properties of a given propositional formula are sometimes decided based on a representation of the formula as a binary decision diagram (BDD). Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability while others excel at finding solutions. All of these behaviors can be seen in SAT solving contests. Parallel approaches Parallel SAT solvers come in three categories: portfolio, divide-and-conquer and parallel local search algorithms. With parallel portfolios, multiple different SAT solvers run concurrently. Each of them solves a copy of the SAT instance, whereas divide-and-conquer algorithms divide the problem between the processors. Different approaches exist to parallelize local search algorithms. The International SAT Solver Competition has a parallel track reflecting recent advances in parallel SAT solving. In 2016, 2017 and 2018, the benchmarks were run on a shared-memory system with 24 processing cores; therefore, solvers intended for distributed memory or manycore processors might have fallen short. Portfolios In general there is no SAT solver that performs better than all other solvers on all SAT problems. An algorithm might perform well for problem instances others struggle with, but will do worse on other instances. Furthermore, given a SAT instance, there is no reliable way to predict which algorithm will solve this instance particularly fast. These limitations motivate the parallel portfolio approach. A portfolio is a set of different algorithms or different configurations of the same algorithm, as the sketch below illustrates.
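A minimal sketch of the portfolio idea, assuming nothing beyond the Python standard library: several identically coded random-walk workers, differing only in their random seed, race on one small formula, and the first to succeed wins (real portfolios race full solvers such as CDCL engines):

```python
# A minimal portfolio sketch using only the Python standard library: identical
# random-walk workers that differ only in their seed race on one formula.
# (Illustrative only; this toy assumes the input formula is satisfiable.)
import multiprocessing as mp
import random

CLAUSES = [(1, 2), (1, -2), (-1, 3)]    # one tuple of literals per clause
NUM_VARS = 3

def walk(seed, results):
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, NUM_VARS + 1)}
    for _ in range(10_000):
        falsified = [c for c in CLAUSES
                     if not any(assign[abs(l)] == (l > 0) for l in c)]
        if not falsified:
            results.put((seed, assign))  # satisfying assignment found
            return
        # Flip a random variable from a random falsified clause (random walk).
        lit = rng.choice(rng.choice(falsified))
        assign[abs(lit)] = not assign[abs(lit)]

if __name__ == "__main__":
    results = mp.Queue()
    workers = [mp.Process(target=walk, args=(s, results)) for s in range(4)]
    for w in workers:
        w.start()
    seed, model = results.get()          # first worker to finish wins
    for w in workers:
        w.terminate()                    # stop the others, as portfolios do
    print(f"seed {seed} found {model}")
```

Diversifying only the seed, as here, is the simplest of the diversification strategies discussed next.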
All solvers in a parallel portfolio run on different processors to solve the same problem. If one solver terminates, the portfolio solver reports the problem to be satisfiable or unsatisfiable according to this one solver. All other solvers are terminated. Diversifying portfolios by including a variety of solvers, each performing well on a different set of problems, increases the robustness of the solver. Many solvers internally use a random number generator. Diversifying their seeds is a simple way to diversify a portfolio. Other diversification strategies involve enabling, disabling or diversifying certain heuristics in the sequential solver. One drawback of parallel portfolios is the amount of duplicate work. If clause learning is used in the sequential solvers, sharing learned clauses between the parallel running solvers can reduce duplicate work and increase performance. Yet, even merely running a portfolio of the best solvers in parallel makes a competitive parallel solver. An example of such a solver is PPfolio. It was designed to find a lower bound for the performance a parallel SAT solver should be able to deliver. Despite the large amount of duplicate work due to lack of optimizations, it performed well on a shared-memory machine. HordeSat is a parallel portfolio solver for large clusters of computing nodes. It uses differently configured instances of the same sequential solver at its core. Particularly for hard SAT instances, HordeSat can produce linear speedups and therefore reduce runtime significantly. In recent years parallel portfolio SAT solvers have dominated the parallel track of the International SAT Solver Competitions. Notable examples of such solvers include Plingeling and painless-mcomsps. Divide-and-conquer In contrast to parallel portfolios, parallel divide-and-conquer tries to split the search space between the processing elements. Divide-and-conquer algorithms, such as the sequential DPLL, already apply the technique of splitting the search space, hence their extension towards a parallel algorithm is straightforward. However, due to techniques like unit propagation, following a division, the partial problems may differ significantly in complexity. Thus the DPLL algorithm typically does not process each part of the search space in the same amount of time, yielding a challenging load balancing problem. Due to non-chronological backtracking, parallelization of conflict-driven clause learning is more difficult. One way to overcome this is the Cube-and-Conquer paradigm. It suggests solving in two phases. In the "cube" phase the problem is divided into many thousands, up to millions, of sections. This is done by a look-ahead solver, which finds a set of partial configurations called "cubes". A cube can also be seen as a conjunction of a subset of variables of the original formula. In conjunction with the formula, each of the cubes forms a new formula. These formulas can be solved independently and concurrently by conflict-driven solvers. As the disjunction of these formulas is equivalent to the original formula, the problem is reported to be satisfiable if one of the formulas is satisfiable. The look-ahead solver is favorable for small but hard problems, so it is used to gradually divide the problem into multiple sub-problems. These sub-problems are easier but still large, which is the ideal form for a conflict-driven solver.
Furthermore, look-ahead solvers consider the entire problem, whereas conflict-driven solvers make decisions based on information that is much more local. There are three heuristics involved in the cube phase. The variables in the cubes are chosen by the decision heuristic. The direction heuristic decides which variable assignment (true or false) to explore first. In satisfiable problem instances, choosing a satisfiable branch first is beneficial. The cutoff heuristic decides when to stop expanding a cube and instead forward it to a sequential conflict-driven solver. Preferably the cubes are similarly complex to solve. Treengeling is an example of a parallel solver that applies the Cube-and-Conquer paradigm. Since its introduction in 2012 it has had multiple successes at the International SAT Solver Competition. Cube-and-Conquer was used to solve the Boolean Pythagorean triples problem. Cube-and-Conquer is a modification or a generalization of the DPLL-based divide-and-conquer approach used to compute the Van der Waerden numbers w(2;3,17) and w(2;3,18) in 2010, where both phases (splitting and solving the partial problems) were performed using DPLL. Local search One strategy towards a parallel local search algorithm for SAT solving is trying multiple variable flips concurrently on different processing units. Another is to apply the aforementioned portfolio approach; however, clause sharing is not possible since local search solvers do not produce clauses. Alternatively, it is possible to share the configurations that are produced locally. These configurations can be used to guide the production of a new initial configuration when a local solver decides to restart its search. Randomized approaches Algorithms that are not part of the DPLL family include stochastic local search algorithms. One example is WalkSAT. Stochastic methods try to find a satisfying interpretation but cannot deduce that a SAT instance is unsatisfiable, as opposed to complete algorithms, such as DPLL. In contrast, randomized algorithms like the PPSZ algorithm by Paturi, Pudlák, Saks, and Zane set variables in a random order according to some heuristics, for example bounded-width resolution. If the heuristic cannot find the correct setting, the variable is assigned randomly. The PPSZ algorithm has a runtime of O(1.308^n) for 3-SAT. This was the best-known runtime for this problem until 2019, when Hansen, Kaplan, Zamir and Zwick published a modification of that algorithm with a runtime of O(1.307^n) for 3-SAT. The latter is currently the fastest known algorithm for k-SAT at all values of k. In the setting with many satisfying assignments the randomized algorithm by Schöning has a better bound. Applications In mathematics SAT solvers have been used to assist in proving mathematical theorems through computer-assisted proof. In Ramsey theory, several previously unknown Van der Waerden numbers were computed with the help of specialized SAT solvers running on FPGAs. In 2016, Marijn Heule, Oliver Kullmann, and Victor Marek solved the Boolean Pythagorean triples problem by using a SAT solver to show that there is no way to color the integers up to 7825 in the required fashion. Small values of the Schur numbers were also computed by Heule using SAT solvers. In software verification SAT solvers are used in formal verification of hardware and software. In model checking (in particular, bounded model checking), SAT solvers are used to check whether a finite-state system satisfies a specification of its intended behavior.
SAT solvers are the core component on which satisfiability modulo theories (SMT) solvers are built, and these are used for problems such as job scheduling, symbolic execution, program model checking, program verification based on Hoare logic, and other applications. These techniques are also closely related to constraint programming and logic programming. In other areas In operations research, SAT solvers have been applied to solve optimization and scheduling problems. In social choice theory, SAT solvers have been used to prove impossibility theorems. Tang and Lin used SAT solvers to prove Arrow's theorem and other classic impossibility theorems. Geist and Endriss used them to find new impossibilities related to set extensions. Brandt and Geist used this approach to prove an impossibility about strategyproof tournament solutions. Other authors have used this technology to prove new impossibilities about the no-show paradox, half-way monotonicity, and probabilistic voting rules. Brandl, Brandt, Peters and Stricker used it to prove the impossibility of a strategyproof, efficient and fair rule for fractional social choice. See also :Category:SAT solvers Computer-assisted proof Satisfiability modulo theories References External links Overview of Sat competitions since 2002 Formal methods Logic in computer science Satisfiability problems
SAT solver
[ "Mathematics", "Engineering" ]
3,162
[ "Logic in computer science", "Automated theorem proving", "Mathematical logic", "Computational problems", "Software engineering", "Mathematical problems", "Formal methods", "Satisfiability problems" ]
7,937,743
https://en.wikipedia.org/wiki/Spin%20echo
In magnetic resonance, a spin echo or Hahn echo is the refocusing of spin magnetisation by a pulse of resonant electromagnetic radiation. Modern nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) make use of this effect. The NMR signal observed following an initial excitation pulse decays with time due to both spin relaxation and any inhomogeneous effects which cause spins in the sample to precess at different rates. The first of these, relaxation, leads to an irreversible loss of magnetisation. But the inhomogeneous dephasing can be removed by applying a 180° inversion pulse that inverts the magnetisation vectors. Examples of inhomogeneous effects include a magnetic field gradient and a distribution of chemical shifts. If the inversion pulse is applied after a period t of dephasing, the inhomogeneous evolution will rephase to form an echo at time 2t. In simple cases, the intensity of the echo relative to the initial signal is given by e^(−2t/T2), where T2 is the time constant for spin–spin relaxation. The echo time (TE) is the time between the excitation pulse and the peak of the signal. Echo phenomena are important features of coherent spectroscopy which have been used in fields other than magnetic resonance, including laser spectroscopy and neutron scattering. History Echoes were first detected in nuclear magnetic resonance by Erwin Hahn in 1950, and spin echoes are sometimes referred to as Hahn echoes. In nuclear magnetic resonance and magnetic resonance imaging, radiofrequency radiation is most commonly used. In 1972 F. Mezei introduced spin-echo neutron scattering, a technique that can be used to study magnons and phonons in single crystals. The technique is now applied in research facilities using triple-axis spectrometers. In 2020 two teams demonstrated that when an ensemble of spins is strongly coupled to a resonator, the Hahn pulse sequence does not just lead to a single echo, but rather to a whole train of periodic echoes. In this process the first Hahn echo acts back on the spins as a refocusing pulse, leading to self-stimulated secondary echoes. Principle The spin-echo effect was discovered by Erwin Hahn when he applied two successive 90° pulses separated by a short time period, but detected a signal, the echo, at a later time when no pulse was applied. This phenomenon of spin echo was explained by Erwin Hahn in his 1950 paper, and further developed by Carr and Purcell, who pointed out the advantages of using a 180° refocusing pulse for the second pulse. The pulse sequence may be better understood by breaking it down into the following steps. Several simplifications are used in this sequence: no decoherence is included and each spin experiences perfect pulses, during which the environment provides no spreading. Six spins are shown above, and these are not given the chance to dephase significantly. The spin-echo technique is more useful when the spins have dephased more significantly, such as in the animation below: Spin-echo decay A Hahn-echo decay experiment can be used to measure the spin–spin relaxation time, as shown in the animation below. The size of the echo is recorded for different spacings of the two pulses. This reveals the decoherence which is not refocused by the π pulse. In simple cases, an exponential decay is measured which is described by the T2 time. Stimulated echo Hahn's 1950 paper showed that another method for generating spin echoes is to apply three successive 90° pulses.
After the first 90° pulse, the magnetization vector spreads out as described above, forming what can be thought of as a "pancake" in the x-y plane. The spreading continues for a time , and then a second 90° pulse is applied such that the "pancake" is now in the x-z plane. After a further time a third pulse is applied and a stimulated echo is observed after waiting for a time after the last pulse. Photon echo Hahn echoes have also been observed at optical frequencies. For this, resonant light is applied to a material with an inhomogeneously broadened absorption resonance. Instead of using two spin states in a magnetic field, photon echoes use two energy levels that are present in the material even in zero magnetic field. Fast spin echo Fast spin echo (RARE, FAISE or FSE), also called turbo spin echo (TSE), is an MRI sequence that results in fast scan times. In this sequence, several 180° refocusing radio-frequency pulses are delivered during each repetition time (TR) interval, and the phase-encoding gradient is briefly switched on between echoes. The FSE/TSE pulse sequence superficially resembles a conventional spin-echo (CSE) sequence in that it uses a series of 180°-refocusing pulses after a single 90°-pulse to generate a train of echoes. The FSE/TSE technique, however, changes the phase-encoding gradient for each of these echoes (a conventional multi-echo sequence collects all echoes in a train with the same phase encoding). As a result of changing the phase-encoding gradient between echoes, multiple lines of k-space (i.e., phase-encoding steps) can be acquired within a given repetition time (TR). As multiple phase-encoding lines are acquired during each TR interval, FSE/TSE techniques may significantly reduce imaging time. See also Nuclear magnetic resonance Magnetic resonance imaging Neutron spin echo Electron paramagnetic resonance Photon echoes in semiconductor optics References Further reading External links Animations and simulations Spin Echo Simulation scratch.mit.edu Magnetic resonance imaging Nuclear magnetic resonance Quantum mechanics Scientific techniques Electron paramagnetic resonance
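To make the refocusing mechanism and the e^(−2t/T2) echo amplitude concrete, here is a minimal Python sketch of the isochromat picture described above: each spin accumulates phase at its own off-resonance rate, the 180° pulse at time τ negates that phase, and all spins realign at 2τ, leaving only the irreversible exp(−2τ/T2) attenuation. The offset distribution, T2 value, and delay are illustrative assumptions, not values from the article.

```python
import numpy as np

# Isochromat sketch of a Hahn echo. Each spin precesses at its own
# off-resonance frequency dw; the 180-degree pulse at t = tau negates the
# accumulated phase, so the inhomogeneous spreading cancels at t = 2*tau,
# leaving only the irreversible exp(-t/T2) loss. Values are illustrative.
rng = np.random.default_rng(0)
dw = rng.normal(0.0, 2 * np.pi * 50.0, 2000)  # offsets (rad/s), assumed spread
T2 = 0.1    # spin-spin relaxation time (s), assumed
tau = 0.02  # delay between the 90- and 180-degree pulses (s)

def signal(t):
    """Relative transverse magnetisation at time t after the 90-degree pulse."""
    phase = np.where(t < tau, dw * t, dw * (t - 2 * tau))  # 180 pulse flips sign
    return np.exp(-t / T2) * abs(np.mean(np.exp(1j * phase)))

print(f"signal just before the echo: {signal(1.5 * tau):.3f}")
print(f"echo amplitude at t = 2*tau: {signal(2 * tau):.3f}")
print(f"expected exp(-2*tau/T2):     {np.exp(-2 * tau / T2):.3f}")
```

Between the pulses the averaged signal is nearly zero because the inhomogeneous phases are spread over many radians; at exactly t = 2τ they cancel for every spin, so the printed echo amplitude matches exp(−2τ/T2).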
Spin echo
[ "Physics", "Chemistry" ]
1,161
[ "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Magnetic resonance imaging", "Theoretical physics", "Quantum mechanics", "Electron paramagnetic resonance", "Nuclear physics", "Spectroscopy" ]
7,949,248
https://en.wikipedia.org/wiki/Expanded%20bed%20adsorption
Expanded bed adsorption (EBA) is a preparative chromatographic technique which makes processing of viscous and particulate liquids possible. Principle The protein binding principles in EBA are the same as in classical column chromatography and the common ion-exchange, hydrophobic interaction and affinity chromatography ligands can be used. After the adsorption step is complete, the fluidized bed is washed to flush out any remaining particulates. Elution of the adsorbed proteins has commonly been performed with the eluent flow in the reverse direction; that is, as a conventional packed bed, in order to recover the adsorbed solutes in a smaller volume of eluent. However, a new generation of EBA columns has been developed, which maintain the bed in the expanded state during this phase, producing high purity and high yields of, for example, monoclonal antibodies (mAbs) in even smaller volumes of eluent. Process duration at manufacturing scale has also been cut considerably (under 7 hours in some cases). EBA may be considered to combine both the "Removal of Insolubles" and the "Isolation" steps of the 4-step downstream processing heuristic. The major limitations associated with EBA technology are biomass interactions with, and aggregation onto, the adsorbent during processing. Where classical column chromatography uses a solid phase made by a packed bed, EBA uses particles in a fluidized state, ideally expanded by a factor of 2. Expanded bed adsorption is, however, different from fluidised bed chromatography in essentially two ways: one, the EBA resin contains particles of varying size and density, which results in a gradient of particle size when expanded; and two, when the bed is in its expanded state, the formation of local mixing loops is minimised. Particles such as whole cells or cell debris, which would clog a packed bed column, readily pass through a fluidized bed. EBA can therefore be used on crude culture broths or slurries of broken cells, thereby bypassing initial clearing steps such as centrifugation and filtration, which are mandatory when packed beds are used. In older EBA column designs, the feed flow rate is kept low enough that the solid packing remains stratified and does not fluidize completely. Hence EBA can be modelled as frontal adsorption in a packed bed, rather than as a well-mixed, continuous-flow adsorber. References External links "Expanded-bed adsorption", at Modern Drug Discovery Introduction to Expanded Bed Adsorption Biochemistry methods Chromatography Protein methods
Expanded bed adsorption
[ "Chemistry", "Biology" ]
540
[ "Biochemistry methods", "Chromatography", "Separation processes", "Protein methods", "Biotechnology stubs", "Protein biochemistry", "Biochemistry stubs", "Analytical chemistry stubs", "Biochemistry" ]
4,597,084
https://en.wikipedia.org/wiki/Ginzburg%E2%80%93Landau%20equation
The Ginzburg–Landau equation, named after Vitaly Ginzburg and Lev Landau, describes the nonlinear evolution of small disturbances near a finite wavelength bifurcation from a stable to an unstable state of a system. At the onset of finite wavelength bifurcation, the system becomes unstable for a critical wavenumber which is non-zero. In the neighbourhood of this bifurcation, the evolution of disturbances is characterised by the particular Fourier mode for with slowly varying amplitude (more precisely the real part of ). The Ginzburg–Landau equation is the governing equation for . The unstable modes can either be non-oscillatory (stationary) or oscillatory. For non-oscillatory bifurcation, satisfies the real Ginzburg–Landau equation which was first derived by Alan C. Newell and John A. Whitehead and by Lee Segel in 1969. For oscillatory bifurcation, satisfies the complex Ginzburg–Landau equation which was first derived by Keith Stewartson and John Trevor Stuart in 1971. Here and are real constants. When the problem is homogeneous, i.e., when is independent of the spatial coordinates, the Ginzburg–Landau equation reduces to the Stuart–Landau equation. The Swift–Hohenberg equation likewise reduces to the Ginzburg–Landau equation near the onset of instability. Substituting , where is the amplitude and is the phase, one obtains the following equations Some solutions of the real Ginzburg–Landau equation Steady plane-wave type If we substitute in the real equation without the time derivative term, we obtain This solution is known to become unstable due to Eckhaus instability for wavenumbers Steady solution with absorbing boundary condition Once again, let us look for steady solutions, but with an absorbing boundary condition at some location. In a semi-infinite, 1D domain , the solution is given by where is an arbitrary real constant. Similar solutions can be constructed numerically in a finite domain. Some solutions of the complex Ginzburg–Landau equation Traveling wave The traveling wave solution is given by The group velocity of the wave is given by The above solution becomes unstable due to Benjamin–Feir instability for wavenumbers Hocking–Stewartson pulse Hocking–Stewartson pulse refers to a quasi-steady, 1D solution of the complex Ginzburg–Landau equation, obtained by Leslie M. Hocking and Keith Stewartson in 1972. The solution is given by where the four real constants in the above solution satisfy Coherent structure solutions The coherent structure solutions are obtained by assuming where . This leads to where and See also Davey–Stewartson equation Stuart–Landau equation Swift–Hohenberg equation Gross–Pitaevskii equation References Fluid dynamics Mechanics Lev Landau Functions of space and time
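Because the displayed equations were not preserved in this text, a commonly used scaled form of the complex Ginzburg–Landau equation, dA/dt = A + (1 + i c1) A_xx − (1 + i c3) |A|² A, is assumed in the following minimal Python sketch; the names c1 and c3 stand for the two real constants mentioned above, and the scaling itself is an assumption. The sketch integrates the equation on a periodic 1D domain with a first-order exponential split-step scheme.

```python
import numpy as np

# Sketch: integrate an assumed standard scaled form of the complex
# Ginzburg-Landau equation,
#     dA/dt = A + (1 + i*c1) * A_xx - (1 + i*c3) * |A|^2 * A,
# on a periodic 1D domain, treating the linear part exactly in Fourier
# space and the nonlinear part with an explicit first-order step.
L, N, dt, steps = 100.0, 256, 0.05, 2000
c1, c3 = 1.0, -1.5          # illustrative constants (1 + c1*c3 < 0: unstable)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

rng = np.random.default_rng(1)
A = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Integrating factor for the linear operator A + (1 + i*c1) * A_xx.
E = np.exp(dt * (1.0 - (1 + 1j * c1) * k**2))

for _ in range(steps):
    nonlin = -(1 + 1j * c3) * np.abs(A) ** 2 * A
    A = np.fft.ifft(E * np.fft.fft(A + dt * nonlin))  # first-order split step

print("mean |A| after integration:", np.abs(A).mean().round(3))
```

With these illustrative constants the Benjamin–Feir criterion mentioned above is violated, so the amplitude settles into an irregular, turbulent-looking state rather than a uniform plane wave.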
Ginzburg–Landau equation
[ "Physics", "Chemistry", "Engineering" ]
563
[ "Functions of space and time", "Chemical engineering", "Mechanics", "Mechanical engineering", "Piping", "Spacetime", "Fluid dynamics" ]
4,597,562
https://en.wikipedia.org/wiki/Universal%20extra%20dimensions
In particle physics, models with universal extra dimensions include one or more spatial dimensions beyond the three spatial and one temporal dimensions that are observed. Overview Models with universal extra dimensions, first studied in 2001, assume that all fields propagate universally in the extra dimensions; in contrast, the ADD model requires that the fields of the Standard Model be confined to a four-dimensional membrane, while only gravity propagates in the extra dimensions. The universal extra dimensions are assumed to be compactified with radii much larger than the traditional Planck length, although smaller than in the ADD model, ~10^−18 m. Generically, the (so far unobserved) Kaluza–Klein resonances of the Standard Model fields in such a theory would appear at an energy scale that is directly related to the inverse size ("compactification scale") of the extra dimension, The experimental bounds (based on Large Hadron Collider data) on the compactification scale of one or two universal extra dimensions are about 1 TeV. Other bounds come from electroweak precision measurements at the Z pole, the muon's magnetic moment, and limits on flavor-changing neutral currents, and reach several hundred GeV. Using universal extra dimensions to explain dark matter yields an upper limit on the compactification scale of several TeV. See also Large extra dimensions Kaluza–Klein theory Randall–Sundrum model Notes References Particle physics Physics beyond the Standard Model Dimension
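The relation between the compactification scale and the size of the extra dimension quoted above is, at the order-of-magnitude level, just E ~ ħc/R. The following back-of-envelope Python sketch (factors of 2π dropped) converts the quoted experimental bounds into lengths, reproducing the ~10^−19 m scale implied by a 1 TeV bound.

```python
# Back-of-envelope: a compactification scale E corresponds to an extra
# dimension of size R ~ hbar*c / E (order of magnitude only).
hbar_c_GeV_m = 1.9732705e-16  # hbar*c expressed in GeV*m
for E_GeV in (1e3, 300.0):    # ~1 TeV LHC bound; few-hundred-GeV precision bounds
    R = hbar_c_GeV_m / E_GeV
    print(f"E = {E_GeV:7.1f} GeV  ->  R ~ {R:.2e} m")
```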
Universal extra dimensions
[ "Physics" ]
293
[ "Geometric measurement", "Physical quantities", "Unsolved problems in physics", "Particle physics", "Theory of relativity", "Dimension", "Physics beyond the Standard Model" ]
4,597,574
https://en.wikipedia.org/wiki/Effective%20potential
The effective potential (also known as effective potential energy) combines multiple, perhaps opposing, effects into a single potential. In its basic form, it is the sum of the 'opposing' centrifugal potential energy with the potential energy of a dynamical system. It may be used to determine the orbits of planets (both Newtonian and relativistic) and to perform semi-classical atomic calculations, and often allows problems to be reduced to fewer dimensions. Definition The basic form of the effective potential is defined as: where L is the angular momentum, r is the distance between the two masses, μ is the reduced mass of the two bodies (approximately equal to the mass of the orbiting body if one mass is much larger than the other), and U(r) is the general form of the potential. The effective force, then, is the negative gradient of the effective potential: where denotes a unit vector in the radial direction. Important properties There are many useful features of the effective potential, such as: To find the radius of a circular orbit, simply minimize the effective potential with respect to , or equivalently set the net force to zero and then solve for : After solving for , plug this back into to find the extremal value of the effective potential . A circular orbit may be either stable or unstable. If it is unstable, a small perturbation could destabilize the orbit, while a stable orbit would return to equilibrium. To determine the stability of a circular orbit, determine the concavity of the effective potential. If the concavity is positive, the orbit is stable: The frequency of small oscillations, using basic Hamiltonian analysis, is where the double prime indicates the second derivative of the effective potential with respect to and it is evaluated at a minimum. Gravitational potential Consider a particle of mass m orbiting a much heavier object of mass M. Assume Newtonian mechanics, which is both classical and non-relativistic. The conservation of energy and angular momentum give two constants E and L, which have values when the motion of the larger mass is negligible. In these expressions, is the derivative of r with respect to time, is the angular velocity of mass m, G is the gravitational constant, E is the total energy, and L is the angular momentum. Only two variables are needed, since the motion occurs in a plane. Substituting the second expression into the first and rearranging gives where is the effective potential. The original two-variable problem has been reduced to a one-variable problem. For many applications the effective potential can be treated exactly like the potential energy of a one-dimensional system: for instance, an energy diagram using the effective potential determines turning points and locations of stable and unstable equilibria. A similar method may be used in other applications, for instance determining orbits in a general relativistic Schwarzschild metric. Effective potentials are widely used in various condensed matter subfields, e.g. the Gauss-core potential (Likos 2002, Baeurle 2004) and the screened Coulomb potential (Likos 2001). See also Geopotential Notes References Further reading Mechanics Potentials
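As a worked example of the procedure just described, the Python sketch below takes the gravitational case U(r) = −GMm/r, so that V_eff(r) = L²/(2μr²) − GMm/r, finds the circular-orbit radius from dV_eff/dr = 0, and checks stability and the small-oscillation frequency from the second derivative. The Sun–Earth-like numbers are illustrative choices, not values from the article.

```python
import numpy as np

# Effective potential for a Newtonian orbit, U(r) = -G*M*m/r:
#     V_eff(r) = L**2 / (2*mu*r**2) - G*M*m / r
G, M, m = 6.674e-11, 1.989e30, 5.972e24   # SI units; Sun- and Earth-like masses
mu = m * M / (m + M)                      # reduced mass (~m here since M >> m)
L = m * 29780.0 * 1.496e11                # angular momentum of a ~1 AU circular orbit

# Circular orbit: dV_eff/dr = -L**2/(mu*r**3) + G*M*m/r**2 = 0
r_c = L**2 / (G * M * m * mu)

# Stability: concavity V_eff''(r_c) > 0 means the circular orbit is stable.
d2V = 3 * L**2 / (mu * r_c**4) - 2 * G * M * m / r_c**3
omega = np.sqrt(d2V / mu)                 # frequency of small radial oscillations

print(f"circular-orbit radius: {r_c:.3e} m (1 AU = 1.496e11 m)")
print(f"V_eff''(r_c) = {d2V:.3e} > 0 -> stable")
print(f"radial oscillation period: {2 * np.pi / omega / 86400:.1f} days")
```

For this potential the second derivative at r_c reduces to L²/(μ r_c⁴), which is always positive, so Newtonian circular orbits are stable and the printed oscillation period comes out close to one year, as expected for a 1 AU orbit.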
Effective potential
[ "Physics", "Engineering" ]
644
[ "Mechanics", "Mechanical engineering" ]
4,597,716
https://en.wikipedia.org/wiki/Fermi%20point
The term Fermi point has two applications but refers to the same phenomenon (special relativity): Fermi point (quantum field theory) Fermi point (nanotechnology) In both applications, the symmetry between particles and anti-particles in weak interactions is violated: at this point the particle energy is zero. In nanotechnology this concept can be applied to electron behavior. An electron as a single particle is a fermion obeying the Pauli exclusion principle. Fermi point (quantum field theory) Fermionic systems that have a Fermi surface (FS) belong to a universality class in quantum field theory. Any collection of fermions with weak repulsive interactions belongs to this class. At the Fermi point, the breaking of symmetry can be explained by assuming that a vortex or singularity will appear as a result of the spin of a Fermi particle (quasiparticle, fermion) in one dimension of the three-dimensional momentum space. Fermi point (nanoscience) The Fermi point is one particular electron state. In nanoscience, the Fermi point refers to the combination of electron chirality and carbon nanotube diameter for which the nanotube becomes metallic. As the structure of a carbon nanotube determines the energy levels that the carbon's electrons may occupy, the structure affects macroscopic properties of the nanotube, most notably electrical and thermal conductivity. Flat graphite is a conductor, except when rolled up into small cylinders. This circular structure inhibits the internal flow of electrons and the graphite becomes a semiconductor; a transition point forms between the valence band and conduction band. This point is called the Fermi point. If the diameter of the carbon nanotube is sufficiently great, the necessary transition phase disappears and the nanotube may be considered a conductor. See also Fermi energy Fermi surface Bandgap Notes Critical phenomena Nanoelectronics Condensed matter physics Quantum field theory Special relativity
Fermi point
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
405
[ "Quantum field theory", "Physical phenomena", "Phases of matter", "Quantum mechanics", "Critical phenomena", "Materials science", "Special relativity", "Condensed matter physics", "Nanoelectronics", "Theory of relativity", "Nanotechnology", "Statistical mechanics", "Matter", "Dynamical sys...
4,598,143
https://en.wikipedia.org/wiki/Paliperidone
Paliperidone, sold under the brand name Invega among others, is an atypical antipsychotic. It is indicated in the treatment of schizophrenia and schizoaffective disorder. It is marketed by J&J Innovative Medicine. Paliperidone was approved by the US Food and Drug Administration (FDA) for the treatment of schizophrenia in December 2006, and in the European Union in June 2007. Paliperidone palmitate is a long-acting injectable formulation of paliperidone palmitoyl ester. It is on the World Health Organization's List of Essential Medicines. Paliperidone is available as a generic medication. Medical use In the US, paliperidone is indicated for the treatment of schizophrenia and for the treatment of schizoaffective disorder as monotherapy and as an adjunct to mood stabilizers and/or antidepressants. In the EU, paliperidone is indicated for the treatment of schizophrenia in adults and in adolescents fifteen years of age and older and for the treatment of schizoaffective disorder in adults. Paliperidone is used for the treatment of schizophrenia and schizoaffective disorder. Adverse effects The most frequent side effects include headache, insomnia, sleepiness, parkinsonism (effects similar to Parkinson's disease such as shaking, muscle stiffness and slow movement), dystonia (involuntary muscle contractions), tremor (shaking), dizziness, akathisia (restlessness), agitation, anxiety, depression, increased weight, nausea, vomiting, constipation, dyspepsia (heartburn), diarrhea, dry mouth, tiredness, toothache, muscle and bone pain, back pain, asthenia (weakness), tachycardia (increased heart rate), high blood pressure, prolonged QT interval (an alteration of the electrical activity of the heart), upper respiratory tract infection (nose and throat infections) and cough. A 2023 study found that paliperidone may worsen verbal learning and memory compared to placebo in the early months of psychosis treatment. Discontinuation The British National Formulary recommends a gradual withdrawal when discontinuing antipsychotics to avoid acute withdrawal syndrome or rapid relapse. Symptoms of withdrawal commonly include nausea, vomiting, and loss of appetite. Other symptoms may include restlessness, increased sweating, and trouble sleeping. Less commonly there may be a feeling of the world spinning, numbness, or muscle pains. Symptoms generally resolve after a short period of time. Deaths In April 2014, it was reported that 21 Japanese people who had received shots of the long-acting injectable paliperidone palmitate had died, out of 10,700 individuals prescribed the drug. Pharmacology Paliperidone is the primary active metabolite of the older antipsychotic risperidone. While its specific mechanism of action is unknown, it is believed paliperidone and risperidone act via similar, if not identical, pathways. Its efficacy is believed to result from central dopaminergic and serotonergic antagonism, except that paliperidone, like its parent compound, functions as an inverse agonist at the 5-HT2A receptor. Paliperidone is also active as an antagonist of the alpha 1 and alpha 2 adrenergic receptors as well as the H1 histamine receptors. Food is known to increase the absorption of Invega type ER OROS prolonged-release tablets. Food increased exposure of paliperidone by up to 50–60%; however, half-life was not significantly affected. The effect was probably due to a delay in the transit of the ER OROS formulation in the upper part of the GI tract, resulting in increased absorption. The half-life is 23 hours. 
Risperidone and its metabolite paliperidone are reduced in efficacy by P-glycoprotein inducers such as St John's wort. History Paliperidone (as Invega) was approved by the Food and Drug Administration (FDA) for the treatment of schizophrenia in 2006. Paliperidone was approved by the FDA for the treatment of schizoaffective disorder in 2009. The long-acting injectable form of paliperidone, marketed as Invega Sustenna in the US and Xeplion in the EU, was approved by the FDA in July 2009. It was initially approved in the European Union in 2007, for schizophrenia; the extended release form and use for schizoaffective disorder were approved in the EU in 2010, and extension to use in adolescents older than 15 years old was approved in 2014. Society and culture Brand names In May 2015, a formulation of paliperidone palmitate was approved by the FDA under the brand name Invega Trinza. A similar prolonged release suspension was approved in 2016 by the European Medicines Agency originally under the brand name Paliperidone Janssen, later renamed to Trevicta. On September 1, 2021, a newer formulation of paliperidone palmitate, Invega Hafyera, was approved by the US FDA. References External links Alpha-2 blockers Antipsychotic esters Atypical antipsychotics Belgian inventions Benzisoxazoles Drugs developed by Johnson & Johnson Fluoroarenes Human drug metabolites Lactams Mood stabilizers Palmitate esters Piperidines Prolactin releasers Pyridopyrimidines World Health Organization essential medicines
Paliperidone
[ "Chemistry" ]
1,175
[ "Chemicals in medicine", "Human drug metabolites" ]
4,598,351
https://en.wikipedia.org/wiki/4-Ethylguaiacol
4-Ethylguaiacol, often abbreviated to 4-EG, is a phenolic compound with the molecular formula C9H12O2. It can be produced in wine and beer by Brettanomyces. It is also frequently present in bio-oil produced by pyrolysis of lignocellulosic biomass. Winemaking It is produced along with 4-ethylphenol (4-EP) in wine and beer by the spoilage yeast Brettanomyces. When it is produced by the yeast at concentrations greater than the sensory threshold of 600 μg/L, it can contribute bacon, spice, clove, or smoky aromas to the wine. On their own these characters can be quite attractive in a wine; however, as the compound usually occurs together with 4-EP, whose aromas can be more aggressive, the presence of the compound often signifies a wine fault. The ratio in which 4-EP and 4-EG are present can greatly affect the organoleptic properties of the wine. Bio-oil 4-Ethylguaiacol can also be produced by pyrolysis of lignocellulosic biomass. It is produced from the lignin, along with many of the other phenolic compounds present in bio-oil. In particular, 4-ethylguaiacol is derived from the guaiacyl units in the lignin. See also Yeast in winemaking Wine chemistry References Natural phenols Alkylphenols Phenol ethers
4-Ethylguaiacol
[ "Chemistry" ]
313
[ "Biomolecules by chemical classification", "Natural phenols" ]
4,599,061
https://en.wikipedia.org/wiki/Carvonic%20acid
Carvonic acid, or α-methylene-4-methyl-5-oxo-3-cyclohexene-1-acetic acid, is a terpenoid formed by metabolism of carvone in humans. References Carboxylic acids Monoterpenes Enones Cyclohexenes
Carvonic acid
[ "Chemistry" ]
68
[ "Carboxylic acids", "Functional groups" ]
4,600,462
https://en.wikipedia.org/wiki/Etomidate
Etomidate (USAN, INN, BAN; marketed as Amidate) is a short-acting intravenous anaesthetic agent used for the induction of general anaesthesia and sedation for short procedures such as reduction of dislocated joints, tracheal intubation, cardioversion and electroconvulsive therapy. It was developed at Janssen Pharmaceutica in 1964 and was introduced as an intravenous agent in 1972 in Europe and in 1983 in the United States. The most common side effects include venous pain on injection and skeletal muscle movements. Medical uses Sedation and anesthesia In emergency settings, etomidate can be used as a sedative hypnotic agent. It is used for conscious sedation and as a part of a rapid sequence induction to induce anaesthesia. It is used as an anaesthetic agent since it has a rapid onset of action and a safe cardiovascular risk profile, and therefore is less likely to cause a significant drop in blood pressure than other induction agents. In addition, etomidate is often used because of its easy dosing profile, limited suppression of ventilation, lack of histamine liberation and protection from myocardial and cerebral ischemia. Thus, etomidate is a good induction agent for people who are hemodynamically unstable. Etomidate also has interesting characteristics for people with traumatic brain injury because it is one of the only anesthetic agents able to decrease intracranial pressure and maintain a normal arterial pressure. In those with sepsis, one dose of the medication does not appear to affect the risk of death. Speech and memory test Another use for etomidate is to determine speech lateralization in people prior to performing lobectomies to remove epileptogenic centres in the brain. This is called the etomidate speech and memory test, or eSAM, and is used at the Montreal Neurological Institute. However, only retrospective cohort studies support the use and safety of etomidate for this test. Steroidogenesis inhibitor In addition to its action and use as an anesthetic, etomidate has also been found to directly inhibit the enzymatic biosynthesis of steroid hormones, including corticosteroids in the adrenal gland. As the only adrenal steroidogenesis inhibitor available for intravenous or parenteral administration, it is useful in situations in which rapid control of hypercortisolism is necessary or in which oral administration is unfeasible. Use in executions The U.S. state of Florida used the drug in a death penalty procedure when Mark James Asay, 53, was executed on August 24, 2017. He became the first person in the U.S. to be executed with etomidate as one of the drugs. Etomidate replaces midazolam as the sedative. Drug companies have made it harder to buy midazolam for executions. The etomidate was followed by rocuronium bromide, a paralytic, and finally, potassium acetate in place of the commonly used potassium chloride injection to stop the heart. Potassium acetate was first used for this purpose inadvertently in a 2015 execution in Oklahoma. Adverse effects Etomidate suppresses corticosteroid synthesis in the adrenal cortex by reversibly inhibiting 11β-hydroxylase, an enzyme important in adrenal steroid production; it leads to primary adrenal suppression. Using a continuous etomidate infusion for sedation of critically ill trauma patients in intensive care units has been associated with increased mortality due to adrenal suppression. Continuous intravenous administration of etomidate leads to adrenocortical dysfunction. 
The mortality of patients exposed to a continuous infusion of etomidate for more than 5 days increased from 25% to 44%, mainly due to infectious causes such as pneumonia. Because of etomidate-induced adrenal suppression, its use for patients with sepsis is controversial. Cortisol levels have been reported to be suppressed up to 72 hours after a single bolus of etomidate in this population at risk for adrenal insufficiency. For this reason, many authors have suggested that etomidate should never be used for critically ill patients with septic shock because it could increase mortality. However, other authors continue to defend etomidate's use for septic patients because of etomidate's safe hemodynamic profile and lack of clear evidence of harm. A study by Jabre et al. showed that a single dose of etomidate used for rapid sequence induction prior to endotracheal intubation has no effect on mortality compared to ketamine, even though etomidate did cause transient adrenal suppression. In addition, a recent meta-analysis done by Hohl could not conclude that etomidate increased mortality. The authors of this meta-analysis concluded more studies were needed because of a lack of statistical power to conclude definitively about the effect of etomidate on mortality. Thus, Hohl suggests that the burden is on proving etomidate safe for use in septic patients, and more research is needed before it is used. Other authors advise giving a prophylactic dose of steroids (e.g. hydrocortisone) if etomidate is used, but only one small prospective controlled study in patients undergoing colorectal surgery has verified the safety of giving stress dose corticosteroids to all patients receiving etomidate. In a retrospective review of almost 32,000 people, etomidate, when used for the induction of anaesthesia, was associated with a 2.5-fold increase in the risk of dying compared with those given propofol. People given etomidate also had significantly greater odds of having cardiovascular morbidity and significantly longer hospital stays. Given the retrospective design of this study, it is difficult to draw any firm conclusions from the data. In people with traumatic brain injury, etomidate use is associated with a blunting of an ACTH stimulation test. The clinical impact of this effect has yet to be determined. In addition, concurrent use of etomidate with opioids and/or benzodiazepines is hypothesized to exacerbate etomidate-related adrenal insufficiency. However, only retrospective evidence of this effect exists and prospective studies are needed to measure the clinical impact of this interaction. Etomidate is associated with a high incidence of burning on injection, postoperative nausea and vomiting, and superficial thrombophlebitis (with rates higher than propofol). Pharmacology Pharmacodynamics (R)-Etomidate is tenfold more potent than its (S)-enantiomer. At low concentrations (R)-etomidate is a modulator at GABAA receptors containing β2 and β3 subunits. At higher concentrations, it can elicit currents in the absence of GABA and behaves as an allosteric agonist. Its binding site is located in the transmembrane section of this receptor between the beta and alpha subunits (β+α−). β3-containing GABAA receptors are involved in the anesthetic actions of etomidate, while the β2-containing receptors are involved in some of the sedation and other actions that can be elicited by this drug. 
Pharmacokinetics At the typical dose, anesthesia is induced for the duration of about 5–10 minutes, though the half-life of drug metabolism is about 75 minutes, because etomidate is redistributed from the plasma to other tissues.
Onset of action: 30–60 seconds
Peak effect: 1 minute
Duration: 3–5 minutes; terminated by redistribution
Distribution: Vd: 2–4.5 L/kg
Protein binding: 76%
Metabolism: hepatic and plasma esterases
Half-life, distribution: 2.7 minutes
Half-life, redistribution: 29 minutes
Half-life, elimination: 2.9 to 5.3 hours
Metabolism Etomidate is highly protein-bound in blood plasma and is metabolised by hepatic and plasma esterases to inactive products. It exhibits a biexponential decline. Formulation Etomidate is usually presented as a clear colourless solution for injection containing 2 mg/mL of etomidate in an aqueous solution of 35% propylene glycol, although a lipid emulsion preparation (of equivalent strength) has also been introduced. Etomidate was originally formulated as a racemic mixture, but the R form is substantially more active than its enantiomer. It was later reformulated as a single-enantiomer drug, becoming the first general anesthetic in that class to be used clinically. Society and culture Etomidate has been made into an e-cigarette liquid known as space oil since 2023 and may be mixed with other drugs, including cannabis and ketamine. Hong Kong Etomidate is regulated as a Part 1 poison under the Pharmacy and Poisons Regulations (Cap. 138A), which state that possessing etomidate outside the regulation's provisions is punishable by a fine of up to HK$100,000 and imprisonment for two years. Due to the increasing imports of space oil, the 2024 Policy Address has stated that the control of etomidate will be tightened. Etomidate will be classed as a controlled drug on 14 February 2025 under the Dangerous Drugs Ordinance (Cap. 134), which states that illegal possession or smoking, inhaling, ingesting and injecting of space oil is liable to a maximum penalty of imprisonment of seven years and a fine of HK$1,000,000, while trafficking or illegally importing etomidate is liable to a maximum penalty of life imprisonment and a fine of HK$5,000,000. References Further reading External links 11β-Hydroxylase inhibitors Belgian inventions Chemical substances for emergency medicine Cholesterol side-chain cleavage enzyme inhibitors CYP17A1 inhibitors Ethyl esters GABAA receptor positive allosteric modulators General anesthetics Glycine receptor agonists Imidazoles Janssen Pharmaceutica Lethal injection components
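The biexponential decline mentioned above can be illustrated numerically with the half-lives listed in the pharmacokinetics section. In the minimal Python sketch below, the fractional weights of the fast and slow phases are assumptions chosen for illustration, not published parameters.

```python
import numpy as np

# Illustrative biexponential plasma decline using the half-lives listed above
# (distribution ~2.7 min, redistribution ~29 min). The relative weights A and B
# of the two phases are assumed values, not measured pharmacokinetic constants.
t = np.linspace(0, 120, 7)        # minutes after an intravenous bolus
k_fast = np.log(2) / 2.7          # rate constant of the distribution phase (1/min)
k_slow = np.log(2) / 29.0         # rate constant of the redistribution phase (1/min)
A, B = 0.8, 0.2                   # assumed fractional weights, A + B = 1

C = A * np.exp(-k_fast * t) + B * np.exp(-k_slow * t)  # fraction of initial level
for ti, ci in zip(t, C):
    print(f"t = {ti:5.1f} min  ->  C/C0 = {ci:.3f}")
```

The rapid early drop of the fast term is what terminates anesthesia within minutes, while the slower term accounts for the much longer metabolic half-life quoted above.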
Etomidate
[ "Chemistry" ]
2,115
[ "Chemicals in medicine", "Chemical substances for emergency medicine" ]
4,600,562
https://en.wikipedia.org/wiki/Surface%20finish
Surface finish, also known as surface texture or surface topography, is the nature of a surface as defined by the three characteristics of lay, surface roughness, and waviness. It comprises the small, local deviations of a surface from the perfectly flat ideal (a true plane). Surface texture is one of the important factors that control friction and transfer layer formation during sliding. Considerable efforts have been made to study the influence of surface texture on friction and wear during sliding conditions. Surface textures can be isotropic or anisotropic. Sometimes, stick-slip friction phenomena can be observed during sliding, depending on surface texture. Each manufacturing process (such as the many kinds of machining) produces a surface texture. The process is usually optimized to ensure that the resulting texture is usable. If necessary, an additional process will be added to modify the initial texture. The latter process may be grinding (abrasive cutting), polishing, lapping, abrasive blasting, honing, electrical discharge machining (EDM), milling, lithography, industrial etching/chemical milling, laser texturing, or other processes. Lay Lay is the direction of the predominant surface pattern, ordinarily determined by the production method used. The term is also used to denote the winding direction of fibers and strands of a rope. Surface roughness Surface roughness, commonly shortened to roughness, is a measure of the finely spaced surface irregularities. In engineering, this is what is usually meant by "surface finish." A lower number indicates finer irregularities, i.e., a smoother surface. Waviness Waviness is the measure of surface irregularities with a spacing greater than that of surface roughness. These irregularities usually occur due to warping, vibrations, or deflection during machining. Measurement Surface finish may be measured in two ways: contact and non-contact methods. Contact methods involve dragging a measurement stylus across the surface; these instruments are called profilometers. Non-contact methods include: interferometry, confocal microscopy, focus variation, structured light, electrical capacitance, electron microscopy, atomic force microscopy and photogrammetry. Specification In the United States, surface finish is usually specified using the ASME Y14.36M standard. The other common standard is International Organization for Standardization (ISO) 1302:2002, although it has been withdrawn in favour of ISO 21920-1:2021. Many factors contribute to the surface finish in manufacturing. In forming processes, such as molding or metal forming, the surface finish of the die determines the surface finish of the workpiece. In machining, the interaction of the cutting edges and the microstructure of the material being cut both contribute to the final surface finish. In general, the cost of manufacturing a surface increases as the surface finish improves. Any given manufacturing process is usually optimized enough to ensure that the resulting texture is usable for the part's intended application. If necessary, an additional process will be added to modify the initial texture. The expense of this additional process must be justified by adding value in some way—principally better function or longer lifespan. Parts that have sliding contact with others may work better or last longer if the roughness is lower. Aesthetic improvement may add value if it improves the saleability of the product. A practical example is as follows. 
An aircraft maker contracts with a vendor to make parts. A certain grade of steel is specified for the part because it is strong enough and hard enough for the part's function. The steel is machinable although not free-machining. The vendor decides to mill the parts. The milling can achieve the specified roughness (for example, ≤ 3.2 μm) as long as the machinist uses premium-quality inserts in the end mill and replaces the inserts after every 20 parts (as opposed to cutting hundreds before changing the inserts). There is no need to add a second operation (such as grinding or polishing) after the milling as long as the milling is done well enough (correct inserts, frequent-enough insert changes, and clean coolant). The inserts and coolant cost money, but grinding or polishing (more time and additional materials) would cost even more. Obviating the second operation results in a lower unit cost and thus a lower price. The competition between vendors elevates such details from minor to crucial importance. It was certainly possible to make the parts in a slightly less efficient way (two operations) for a slightly higher price; but only one vendor can get the contract, so the slight difference in efficiency is magnified by competition into the great difference between the prospering and shuttering of firms. Just as different manufacturing processes produce parts at various tolerances, they are also capable of different roughnesses. Generally, these two characteristics are linked: manufacturing processes that are dimensionally precise create surfaces with low roughness. In other words, if a process can manufacture parts to a narrow dimensional tolerance, the parts will not be very rough. Due to the abstractness of surface finish parameters, engineers usually use a tool that has a variety of surface roughnesses created using different manufacturing methods. See also Gloss (optics) References Bibliography Metalworking terminology Tribology
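To give the quantitative side of "a lower number indicates finer irregularities", the Python sketch below computes two common roughness parameters, Ra (arithmetic mean deviation) and Rq (root-mean-square deviation), from a sampled height profile. The profile is synthetic, and the standard pre-filtering that separates waviness from roughness is omitted for brevity.

```python
import numpy as np

# Minimal sketch: compute Ra (arithmetic mean deviation) and Rq (RMS deviation)
# from a sampled height profile, after removing the mean line. Real roughness
# standards first filter out waviness; that step is skipped here.
rng = np.random.default_rng(42)
x = np.linspace(0, 1e-3, 2000)                      # 1 mm evaluation length (m)
z = 0.5e-6 * np.sin(2 * np.pi * x / 50e-6)          # periodic machining texture
z += 0.2e-6 * rng.standard_normal(x.size)           # fine-scale irregularities

dev = z - z.mean()                                  # deviations from the mean line
Ra = np.mean(np.abs(dev))
Rq = np.sqrt(np.mean(dev**2))
print(f"Ra = {Ra * 1e6:.3f} um, Rq = {Rq * 1e6:.3f} um")  # lower = smoother
```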
Surface finish
[ "Chemistry", "Materials_science", "Engineering" ]
1,092
[ "Tribology", "Mechanical engineering", "Materials science", "Surface science" ]
4,601,361
https://en.wikipedia.org/wiki/Electromagnetic%20stress%E2%80%93energy%20tensor
In relativistic physics, the electromagnetic stress–energy tensor is the contribution to the stress–energy tensor due to the electromagnetic field. The stress–energy tensor describes the flow of energy and momentum in spacetime. The electromagnetic stress–energy tensor contains the negative of the classical Maxwell stress tensor that governs the electromagnetic interactions. Definition ISQ convention The electromagnetic stress–energy tensor in the International System of Quantities (ISQ), which underlies the SI, is where is the electromagnetic tensor and where is the Minkowski metric tensor of metric signature and the Einstein summation convention over repeated indices is used. Explicitly in matrix form: where is the volumetric energy density, is the Poynting vector, is the Maxwell stress tensor, and is the speed of light. Thus, each component of is dimensionally equivalent to pressure (with SI unit pascal). Gaussian CGS conventions The in the Gaussian system (shown here with a prime) that correspond to the permittivity of free space and permeability of free space are then: and in explicit matrix form: where the energy density becomes and the Poynting vector becomes The stress–energy tensor for an electromagnetic field in a dielectric medium is less well understood and is the subject of the Abraham–Minkowski controversy. The element of the stress–energy tensor represents the flux of the component with index of the four-momentum of the electromagnetic field, , going through a hyperplane. It represents the contribution of electromagnetism to the source of the gravitational field (curvature of spacetime) in general relativity. Algebraic properties The electromagnetic stress–energy tensor has several algebraic properties: The symmetry of the tensor is as for a general stress–energy tensor in general relativity. The trace of the energy–momentum tensor is a Lorentz scalar; the electromagnetic field (and in particular electromagnetic waves) has no Lorentz-invariant energy scale, so its energy–momentum tensor must have a vanishing trace. This tracelessness eventually relates to the masslessness of the photon. Conservation laws The electromagnetic stress–energy tensor allows a compact way of writing the conservation laws of linear momentum and energy in electromagnetism. The divergence of the stress–energy tensor is: where is the (4D) Lorentz force per unit volume on matter. This equation is equivalent to the following 3D conservation laws respectively describing the electromagnetic energy density and electromagnetic momentum density where is the electric current density, the electric charge density, and is the Lorentz force density. See also Ricci calculus Covariant formulation of classical electromagnetism Mathematical descriptions of the electromagnetic field Maxwell's equations Maxwell's equations in curved spacetime General relativity Einstein field equations Magnetohydrodynamics Vector calculus References Tensor physical quantities Electromagnetism
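A minimal numerical sketch of the matrix form described above, in units with c = ε0 = μ0 = 1 and metric signature (−,+,+,+); the sign conventions for F^{μν} are assumptions chosen so that T^{00} reproduces the energy density and T^{0i} the Poynting vector, and the script checks the symmetry and tracelessness properties stated in the article.

```python
import numpy as np

# Build T^{mu nu} = F^{mu a} eta_{ab} F^{nu b} - (1/4) eta^{mu nu} F_{ab} F^{ab}
# in units c = eps0 = mu0 = 1, signature (-,+,+,+). Conventions assumed.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def field_tensor(E, B):
    """Contravariant F^{mu nu} from the 3-vectors E and B (assumed convention)."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [0.0, -Ex, -Ey, -Ez],
        [ Ex, 0.0, -Bz,  By],
        [ Ey,  Bz, 0.0, -Bx],
        [ Ez, -By,  Bx, 0.0],
    ])

def stress_energy(E, B):
    F = field_tensor(E, B)
    F_low = eta @ F @ eta                  # F_{mu nu}
    invariant = np.sum(F_low * F)          # F_{ab} F^{ab} = 2 (B^2 - E^2)
    return F @ eta @ F.T - 0.25 * eta * invariant

E = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])              # crossed fields, as in a plane wave
T = stress_energy(E, B)

print("energy density T^00:", T[0, 0], " vs (E^2+B^2)/2 =", (E @ E + B @ B) / 2)
print("momentum flux  T^0i:", T[0, 1:], " vs E x B =", np.cross(E, B))
print("symmetric:", np.allclose(T, T.T), " traceless:", np.isclose(np.trace(eta @ T), 0.0))
```

Because F is antisymmetric, the trace η_{μν}T^{μν} cancels identically, which is the algebraic statement of tracelessness checked in the last line.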
Electromagnetic stress–energy tensor
[ "Physics", "Mathematics", "Engineering" ]
563
[ "Electromagnetism", "Physical phenomena", "Tensors", "Physical quantities", "Quantity", "Tensor physical quantities", "Fundamental interactions" ]
17,109,517
https://en.wikipedia.org/wiki/Supersymmetry%20nonrenormalization%20theorems
In theoretical physics a nonrenormalization theorem is a limitation on how a certain quantity in the classical description of a quantum field theory may be modified by renormalization in the full quantum theory. Nonrenormalization theorems are common in theories with a sufficient amount of supersymmetry, usually at least 4 supercharges. Perhaps the first nonrenormalization theorem was introduced by Marcus T. Grisaru, Martin Rocek and Warren Siegel in their 1979 paper Improved methods for supergraphs. Nonrenormalization in supersymmetric theories and holomorphy Nonrenormalization theorems in supersymmetric theories are often consequences of the fact that certain objects must have a holomorphic dependence on the quantum fields and coupling constants. In this case the nonrenormalization theorem is said to be a consequence of holomorphy. The more supersymmetry a theory has, the more nonrenormalization theorems apply. Therefore a nonrenormalization theorem that is valid for a theory with supersymmetries will also apply to any theory with more than supersymmetries. Examples in 4-dimensional theories In 4 dimensions the number counts the number of 4-component Majorana spinors of supercharges. Some examples of nonrenormalization theorems in 4-dimensional supersymmetric theories are: In an 4D SUSY theory involving only chiral superfields, the superpotential is immune from renormalization. With an arbitrary field content it is immune from renormalization in perturbation theory but may be renormalized by nonperturbative effects such as instantons. In an 4D SUSY theory the moduli space of the hypermultiplets, called the Higgs branch, has a hyper-Kähler metric and is not renormalized. In the article Lagrangians of N=2 Supergravity - Matter Systems it was further shown that this metric is independent of the scalars in the vector multiplets. They also proved that the metric of the Coulomb branch, which is a rigid special Kähler manifold parametrized by the scalars in vector multiplets, is independent of the scalars in the hypermultiplets. Therefore the vacuum manifold is locally a product of a Coulomb and Higgs branch. The derivations of these statements appear in The Moduli Space of N=2 SUSY QCD and Duality in N=1 SUSY QCD. In an 4D SUSY theory the superpotential is entirely determined by the matter content of the theory. Also there are no perturbative corrections to the β-function beyond one-loop, as was shown in 1983 in the article Superspace Or One Thousand and One Lessons in Supersymmetry by Sylvester James Gates, Marcus Grisaru, Martin Rocek and Warren Siegel. In super Yang–Mills the β-function is zero for all couplings, meaning that the theory is conformal. This was demonstrated perturbatively by Martin Sohnius and Peter West in the 1981 article Conformal Invariance in N=4 Supersymmetric Yang-Mills Theory under certain symmetry assumptions on the theory, and then with no assumptions by Stanley Mandelstam in the 1983 article Light Cone Superspace and the Ultraviolet Finiteness of the N=4 Model. The full nonperturbative proof by Nathan Seiberg appeared in the 1988 article Supersymmetry and Nonperturbative beta Functions. Examples in 3-dimensional theories In 3 dimensions the number counts the number of 2-component Majorana spinors of supercharges. When there is no holomorphicity and few exact results are known. When the superpotential cannot depend on the linear multiplets and in particular is independent of the Fayet–Iliopoulos terms (FI) and Majorana mass terms. 
On the other hand the central charge is independent of the chiral multiplets, and so is a linear combination of the FI and Majorana mass terms. These two theorems were stated and proven in Aspects of N=2 Supersymmetric Gauge Theories in Three Dimensions. When , unlike , the R-symmetry is the nonabelian group SU(2) and so the representation of each field is not renormalized. In a super conformal field theory the conformal dimension of a chiral multiplet is entirely determined by its R-charge, and so these conformal dimensions are not renormalized. Therefore matter fields have no wave function renormalization in superconformal field theories, as was shown in On Mirror Symmetry in Three Dimensional Abelian Gauge Theories. These theories consist of vector multiplets and hypermultiplets. The hypermultiplet metric is hyperkähler and may not be lifted by quantum corrections, but its metric may be modified. No renormalizable interaction between hyper and abelian vector multiplets is possible except for Chern–Simons terms. When , unlike the hypermultiplet metric may no longer be modified by quantum corrections. Examples in 2-dimensional theories In linear sigma models, which are superrenormalizable abelian gauge theories with matter in chiral supermultiplets, Edward Witten has argued in Phases of N=2 theories in two-dimensions that the only divergent quantum correction is the logarithmic one-loop correction to the FI term. Nonrenormalization from a quantization condition In supersymmetric and nonsupersymmetric theories, the nonrenormalization of a quantity subject to the Dirac quantization condition is often a consequence of the fact that possible renormalizations would be inconsistent with the quantization condition, for example the quantization of the level of a Chern–Simons theory implies that it may only be renormalized at one-loop. In the 1994 article Nonrenormalization Theorem for Gauge Coupling in 2+1D the authors find the renormalization of the level can only be a finite shift, independent of the energy scale, and extended this result to topologically massive theories in which one includes a kinetic term for the gluons. In Notes on Superconformal Chern-Simons-Matter Theories the authors then showed that this shift needs to occur at one loop, because any renormalization at higher loops would introduce inverse powers of the level, which are nonintegral and so would be in conflict with the quantization condition. References N. Seiberg (1993) "Naturalness Versus Supersymmetric Non-renormalization Theorems" External links Non-Renormalization Theorems in Supersymmetry Supersymmetric quantum field theory Renormalization group
Supersymmetry nonrenormalization theorems
[ "Physics" ]
1,389
[ "Physical phenomena", "Supersymmetric quantum field theory", "Critical phenomena", "Renormalization group", "Statistical mechanics", "Supersymmetry", "Symmetry" ]
17,118,781
https://en.wikipedia.org/wiki/Phoronix%20Test%20Suite
Phoronix Test Suite (PTS) is a free and open-source benchmark software for Linux and other operating systems. The Phoronix Test Suite, developed by Michael Larabel and Matthew Tippett, has been endorsed by sites such as Linux.com, LinuxPlanet, and Softpedia. Features Phoronix Test Suite supports over 220 test profiles and over 60 test suites. It uses an XML-based testing architecture. Tests available to use include MEncoder, FFmpeg and lm_sensors, along with OpenGL games such as Doom 3, Nexuiz, and Enemy Territory: Quake Wars, and many more. The suite also contains a feature called PTS Global where users may upload their test results and system information for sharing. By executing a single command, other users can compare their test results to a selected system in an easy-comparison mode. Benchmark results could originally be uploaded to the Phoronix Global online database; since 2013 they can be uploaded to OpenBenchmarking.org. Phoronix Test Suite supports automated Git bisecting on a performance basis to find performance regressions, and features statistical significance verification. Components Phoromatic Phoromatic is a web-based remote test management system for the Phoronix Test Suite. It allows the automatic scheduling of tests. It is aimed at enterprise use. It can manage multiple test nodes simultaneously within a test farm or distributed environment. Phoromatic Tracker Phoromatic Tracker is an extension of Phoromatic that provides a public interface into test farms. Reference implementations autonomously monitor the performance of the Linux kernel, Fedora Rawhide, and Ubuntu on a daily basis. PTS Desktop Live PTS Desktop Live was a stripped-down x86-64 Linux distribution, which included Phoronix Test Suite 2.4. It was designed for testing/benchmarking computers from a LiveDVD / LiveUSB environment. Phodevi Phodevi (Phoronix Device Interface) is a library that provides a clean, stable, platform-independent API for accessing software and hardware information. PCQS Phoronix Certification & Qualification Suite (PCQS) is a reference specification for the Phoronix Test Suite. Phoronix website Phoronix is a technology website that offers information on the development of the Linux kernel, product reviews, interviews, and news regarding free and open-source software, gathered by monitoring the Linux kernel mailing list and conducting interviews. Phoronix was started in June 2004 by Michael Larabel, who currently serves as the owner and editor-in-chief. History Founded on June 5, 2004, Phoronix started as a website with a handful of hardware reviews and guides, moving to articles covering operating systems based on Linux and open-source software such as Ubuntu, Fedora, SUSE, and Mozilla (Firefox/Thunderbird) around the start of 2005. Phoronix focuses on benchmarking hardware running Linux, with a slant toward graphics articles that monitor and compare free and open-source graphics device drivers and Mesa 3D with AMD's and Nvidia's proprietary graphics device drivers. In June 2006, the website added forums to accompany news content. On April 20, 2007, Phoronix redesigned its website and began publishing Solaris hardware reviews and news in addition to Linux content. Other technical publications, such as CNET News, have cited Phoronix benchmarks. Open Benchmarking OpenBenchmarking.org is a web-based service created to work with the Phoronix Test Suite. 
It is a collaborative platform that allows users to share their hardware and software benchmarks through an organized online interface. It is primarily used for performance benchmarking and testing hardware/software performance, typically in the context of Linux-based systems. Release history On June 5, 2008, Phoronix Test Suite 1.0 was released under the codename Trondheim. This 1.0 release was made up of 57 test profiles and 23 test suites. On September 3, 2008, Phoronix Test Suite 1.2 was released with support for the OpenSolaris operating system, a module framework accompanied by tests focusing upon new areas, and new test profiles. Phoronix Test Suite 1.8 includes a graphical user interface (GUI) using GTK+, written with the PHP-GTK bindings. Version 3.4 includes the MATISK benchmarking module and initial support for GNU Hurd. See also Inquisitor Stresslinux References External links 2008 software Benchmarking software for Linux Benchmarks (computing) Free software programmed in PHP
Phoronix Test Suite
[ "Technology" ]
986
[ "Benchmarks (computing)", "Computing comparisons", "Computer performance" ]
17,119,657
https://en.wikipedia.org/wiki/Pharmacoinformatics
Drug discovery and development requires the integration of multiple scientific and technological disciplines. These include chemistry, biology, pharmacology, pharmaceutical technology and extensive use of information technology. The latter is increasingly recognised as Pharmacoinformatics. Pharmacoinformatics relates to the broader field of bioinformatics. Introduction The main idea behind the field is to integrate different informatics branches (e.g. bioinformatics, chemoinformatics, immunoinformatics, etc.) into a single platform, resulting in a seamless process of drug discovery. The first reference to the term "Pharmacoinformatics" can be found in 1993. The first dedicated department for Pharmacoinformatics was established at the National Institute of Pharmaceutical Education and Research, S.A.S. Nagar, India, in 2003. This has been followed by programs at different universities worldwide, including a program by European universities named the European Pharmacoinformatics Initiative (Europin). Definition Pharmacoinformatics is also referred to as pharmacy informatics. According to the article "Pharmacy Informatics: What You Need to Know Now" by the University of Illinois at Chicago, Pharmacoinformatics may be defined as: "the scientific field that focuses on medication-related data and knowledge within the continuum of healthcare systems." It is the application of computers to the storage, retrieval and analysis of drug and prescription information. Pharmacy informaticists work with pharmacy information management systems that help the pharmacist make safe decisions about patient drug therapies with respect to medical insurance records, drug interactions, and prescription and patient information. Pharmacy informatics can be thought of as a sub-domain of the larger professional discipline of health informatics. Health informatics is the study of interactions between people, their work processes and engineered systems within health care, with a focus on pharmaceutical care and improved patient safety. For example, the Healthcare Information and Management Systems Society (HIMSS) defines pharmacy informatics as "the scientific field that focuses on medication-related data and knowledge within the continuum of healthcare systems - including its acquisition, storage, analysis, use and dissemination - in the delivery of optimal medication-related patient care and health outcomes". See also Software programs for pharmacy workflow management References Pharmacology Pharmaceutical industry Drug discovery Cheminformatics Bioinformatics
Pharmacoinformatics
[ "Chemistry", "Engineering", "Biology" ]
485
[ "Pharmacology", "Biological engineering", "Life sciences industry", "Drug discovery", "Pharmaceutical industry", "Bioinformatics", "Computational chemistry", "nan", "Cheminformatics", "Medicinal chemistry" ]
12,588,438
https://en.wikipedia.org/wiki/Lanthanum%20strontium%20cobalt%20ferrite
Lanthanum strontium cobalt ferrite (LSCF), also called lanthanum strontium cobaltite ferrite is a specific ceramic oxide derived from lanthanum cobaltite of the ferrite group. It is a phase containing lanthanum(III) oxide, strontium oxide, cobalt oxide and iron oxide with the formula , where 0.1≤x≤0.4 and 0.2≤y≤0.8. It is black in color and crystallizes in a distorted hexagonal perovskite structure. LSCF undergoes phase transformations at various temperatures depending on the composition. This material is a mixed ionic electronic conductor with comparatively high electronic conductivity (200+ S/cm) and good ionic conductivity (0.2 S/cm). It is typically non-stoichiometric and can be reduced further at high temperature in low oxygen partial pressures or in the presence of a reducing agent such as carbon. LSCF is being investigated as a material for intermediate temperature solid oxide fuel cell cathodes and, potentially as a direct carbon fuel cell anode. LSCF is also investigated as a membrane material for separation of oxygen from air, for use in e.g. cleaner burning power plants. See also Lanthanum strontium manganite (LSM) Lanthanum strontium ferrite (LSF) Lanthanum calcium manganite (LCM) Lanthanum strontium chromite (LSC) Lanthanum strontium gallate magnesite (LSGM) References External links LSCF supplier and info American Elements Ceramic materials Fuel cells Lanthanum compounds Strontium compounds Cobalt compounds Oxides Non-stoichiometric compounds Ferrites
Lanthanum strontium cobalt ferrite
[ "Physics", "Chemistry", "Engineering" ]
367
[ "Non-stoichiometric compounds", "Materials stubs", "Oxides", "Salts", "Materials", "Ceramic materials", "Ceramic engineering", "Matter" ]
12,591,223
https://en.wikipedia.org/wiki/Filter%20press
An industrial filter press is a tool used in separation processes, specifically to separate solids and liquids. The machine stacks many filter elements and allows the filter to be easily opened to remove the filtered solids, and allows easy cleaning or replacement of the filter media. Filter presses cannot be operated in a continuous process but can offer very high performance, particularly when low residual liquid in the solid is desired. Among other uses, filter presses are utilised in marble factories to separate water from mud so that the water can be reused during the marble cutting process. Concept behind filter press technology Generally, the slurry that will be separated is injected into the centre of the press and each chamber of the press is filled. Optimal filling time will ensure the last chamber of the press is loaded before the mud in the first chamber begins to cake. As the chambers fill, pressure inside the system will increase due to the formation of thick sludge. Then, the liquid is strained through filter cloths by force using pressurized air, but the use of water could be more cost-efficient in certain cases, such as when water is re-used from a previous process. History The first form of filter press was invented in the United Kingdom in 1853, used in obtaining seed oil through the use of pressure cells. However, there were many disadvantages associated with them, such as high labour requirements and discontinuous operation. Major developments in filter press technology started in the middle of the 20th century. In Japan in 1958, Kenichiro Kurita and Seiichi Suwa succeeded in developing the world's first automatic horizontal-type filter press to improve the cake removal efficiency and moisture absorption. Nine years later, Kurita Company began developing flexible diaphragms to decrease moisture in filter cakes. The device enables optimisation of the automatic filtration cycle, cake compression, cake discharge and filter-cloth washing, leading to increased opportunities for various industrial applications. A detailed historical review, dating back to when the Shang Dynasty used presses to extract tea from camellia leaves and oil from the hips in 1600 BC, was compiled by K. McGrew. Types of filter presses There are four main basic types of filter presses: plate and frame filter presses, recessed plate and frame filter presses, membrane filter presses and (fully) automatic filter presses. Plate and frame filter press A plate and frame filter press is the most fundamental design, and may be referred to as a "membrane plate filter." This type of filter press consists of many alternating plates and frames assembled with the support of a pair of rails, with filter membranes inserted between each plate-frame pair. Plates provide support to the filter membranes under pressure, and have narrow slots to allow the filtrate to flow through the membrane into the plate, then out into a collection system. Frames provide a chamber between the membranes and plates into which the slurry is pumped and the filter cake accumulates. The stack is compressed with sufficient force to provide a liquid-tight seal between each plate and frame; the filter membrane may have an integrated seal around the edge, or the filter material itself may act as a gasket when compressed. As the slurry is pumped through the membranes, the filter cake accumulates and becomes thicker. 
The filter resistance increases as well, and the process is stopped when the pressure differential reaches a point where the plates are considered full enough. To remove the filter cake and clear the filters, the stack of plates and frames is separated and the cake either falls off or is scraped from the membranes to be collected in a tray below. The filter membranes are then cleaned using wash liquid and the stack is re-compressed, ready to start the next cycle. An early example of this is the Dehne filter press, developed by A L G Dehne (1832–1906) of Halle, Germany, and commonly used in the late 19th and early 20th century for extracting sugar from sugar beet and from sugar cane, and for drying ore slurries. Its great disadvantage was the amount of labor involved in its operation. (Fully) Automatic filter press An automatic filter press has the same concept as the manual plate and frame filter press, except that the whole process is fully automated. It consists of larger plate and frame filter presses with mechanical "plate shifters". The function of the plate shifter is to move the plates and allow rapid discharge of the filter cakes accumulated in between the plates. It also contains a diaphragm compressor in the filter plates which aids in optimizing the operating conditions by further drying the filter cakes. Fully automatic filter presses provide a high degree of automation while providing uninterrupted operation at the same time. The option of the simultaneous filter plate opening system, for example, helps to realise a particularly fast cake release, reducing the cycle time to a minimum. The result is a high-speed filter press that allows increased production per unit area of filter. For this reason, these machines are used in applications with highly filterable products where high filtration speeds are required. These include, e.g., mining concentrates and residues. There are different systems for fully automatic operation. These include, e.g., vibration/shaking devices, spreader clamp/spreader cloth versions or scraping devices. A fully automatic filter press can operate unmanned 24/7. Recessed plate filter press A recessed plate filter press does not use frames and instead has a recess in each plate with sloping edges in which the filter cloths lie; the filter cake builds up in the recess directly between two plates, and when the plates are separated the sloping edges allow the cake to fall out with minimal effort. To simplify construction and usage, the plates typically have a hole through the centre, passing through the filter cloth and around which the cloth is sealed, so that the slurry flows through the centre of each plate down the stack rather than inward from the edge of each plate. Although easier to clean, there are disadvantages to this method, such as longer cloth changing time, inability to accommodate filter media that cannot conform to the curved recess, such as paper, and the possibility of forming an uneven cake. Membrane filter press Membrane filter presses have a great influence on the dryness of the solid by using an inflatable membrane in the filter plates to compress remaining liquid from the filter cake before the plates are opened. Compared to conventional filtration processes, it achieves the lowest residual moisture values in the filter cake. This makes the membrane filter press a powerful and widely used system. 
Depending on the degree of dewatering, different dry matter contents (dry matter content – percentage by weight of dry material in the filter cake) can be achieved in the filter cake by squeezing with membrane plates. The range of achievable dry matter contents extends from 30 to over 80 percent. Membrane filter presses not only offer the advantage of an extremely high degree of dewatering; they also reduce the filtration cycle time by more than 50 percent on average, depending on the suspension. This results in faster cycle and turnaround times, which lead to an increase in productivity. The membrane inflation medium consists either of compressed air or a liquid medium (e.g. water). Applications Filter presses are used in a huge variety of different applications, from dewatering of mineral mining slurries to blood plasma purification. At the same time, filter press technology is widely established for ultrafine coal dewatering as well as filtrate recovery in coal preparation plants. According to G. Prat, the "filter press is proven to be the most effective and reliable technique to meet today's requirement". One example is the pilot-scale plate filter press, which specializes in dewatering coal slurries. In 2013 the Society for Mining, Metallurgy and Exploration published an article highlighting this specific application. It mentioned that the use of the filter press is very beneficial to plant operations, since it offers dewatered ultraclean coal as product, as well as improving the quality of the removed water so that it is available for equipment cleaning. Other industrial uses for automatic membrane filter presses include municipal waste sludge dewatering, ready mix concrete water recovery, metal concentrate recovery, and large-scale fly ash pond dewatering. Many specialized applications are associated with the different types of filter press currently used in various industries. The plate filter press is extensively used in sugaring operations such as the production of maple syrup in Canada, since it offers very high efficiency and reliability. According to M. Isselhardt, "appearance can affect the value of maple syrup and customer's perception of quality". This makes the raw syrup filtration process crucial to achieving the desired high-quality, appealing product, which again suggests how highly valued filter press methods are in industry. Assessment of important characteristics Here are some typical filter press calculations used for sludge handling operations in wastewater treatment: Solids loading rate: S = (B × s) / A, where S is the solids loading rate (mass of dry solids per hour per unit plate area), B is the biosolids feed rate, s is the % solids/100, and A is the plate area in ft2. Net filter yield: NFY = (S × P) / TCT, where NFY is the net filter yield in kg/h/m2, S is the solids loading rate in kg/h/m2, P is the period in h, and TCT is the total cycle time in h; (S × P) gives the yield over the filter run time. Flow rate of filtrate: u = (dV/dt) / A = ΔP / [μ(Rc + Rf)], where u is the flow rate of filtrate through cloth and cake (m/s), dV/dt is the volumetric filtration rate (m3/s), Rc is the resistance of the filter cake (m-1), Rf is the initial resistance of the filter (resistance of an initial layer of cake, filter cloths, plate and channel) (m-1), μ is the viscosity of the filtrate (N·s/m2), ΔP is the applied pressure difference (N/m2) from one side of the filter medium to the other, and A is the filtration area (m2). Those are the most important factors that affect the rate of filtration. 
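To make the three calculations above concrete, the following is a minimal Python sketch. The function names and every numerical input are illustrative assumptions chosen for demonstration, not values taken from the article or from any design standard.

```python
# Illustrative sketch of the filter press sizing calculations described above.
# All inputs are made-up example values, not design data.

def solids_loading_rate(biosolids, solids_fraction, plate_area):
    """S = B * s / A: dry-solids feed per unit plate area (e.g. kg/h/m^2)."""
    return biosolids * solids_fraction / plate_area

def net_filter_yield(loading_rate, period, total_cycle_time):
    """NFY = S * P / TCT: loading rate discounted for cycle downtime."""
    return loading_rate * period / total_cycle_time

def filtrate_velocity(delta_p, viscosity, cake_resistance, medium_resistance):
    """u = (dV/dt)/A = dP / (mu * (Rc + Rf)): classic cake-filtration form."""
    return delta_p / (viscosity * (cake_resistance + medium_resistance))

S = solids_loading_rate(biosolids=120.0, solids_fraction=0.04, plate_area=60.0)
nfy = net_filter_yield(S, period=2.0, total_cycle_time=2.5)
u = filtrate_velocity(delta_p=7.0e5,            # 7 bar, in N/m^2
                      viscosity=1.0e-3,         # water-like filtrate, N*s/m^2
                      cake_resistance=1.0e11,   # Rc, 1/m
                      medium_resistance=1.0e10) # Rf, 1/m
print(f"solids loading rate S = {S:.3f} kg/h/m^2")
print(f"net filter yield      = {nfy:.3f} kg/h/m^2")
print(f"filtrate velocity u   = {u:.2e} m/s")
```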
When filtrate passes through the filter plate, solids are deposited and the cake thickness increases, which also increases Rc, while Rf is assumed to be constant. The flow resistance from the cake and filter medium can be studied by calculating the flow rate of filtration through them. If the flow rate is constant, the relationship between pressure and time can be obtained. The filtration must then be operated at an increasing pressure difference to cope with the increase in flow resistance resulting from pore clogging. The filtration rate is mainly affected by the viscosity of the filtrate as well as the resistance of the filter plate and cake. Optimum time cycle A high filtration rate can be obtained by producing a thin cake. However, a conventional filter press is a batch system and the process must be stopped to discharge the filter cake and reassemble the press, which is time-consuming. Practically, the maximum filtration rate is obtained when the filtration time is greater than the time taken to discharge the cake and reassemble the press, to allow for the cloth's resistance. Properties of the filter cake affect the filtration rate, and it is desirable for the particle size to be as large as possible to prevent pore blockage, for example by using a coagulant. From experimental work, the flow rate of liquid through the filter medium is proportional to the pressure difference. As the cake layer forms, the pressure applied to the system increases and the flow rate of filtrate decreases. If the solid is desired, the purity of the solid can be increased by cake washing and air drying. Samples of filter cake can be taken from different locations and weighed to determine the moisture content using an overall material balance. Possible heuristics to be used during design of the process The selection of filter press type depends on whether the liquid phase or the solid phase is the valuable product. If extracting the liquid phase is desired, then the filter press is among the most appropriate methods to be used. Materials Nowadays, filter plates are made from polymers or steel coated with polymer, which give a good drainage surface for filter cloths. Plate sizes range from 10 by 10 cm to 2.4 by 2.4 m, with frame thicknesses of 0.3 to 20 cm. Filter medium Typical cloth areas can range from 1 m2 or less on laboratory scale to 1000 m2 in a production environment, even though plates can provide filter areas up to 2000 m2. Normally, a plate and frame filter press can form a cake up to 50 mm thick; in extreme cases this can be pushed up to 200 mm. A recessed plate press can form a cake up to 32 mm thick. In the early days of press use in the municipal waste biosolids treatment industry, cake sticking to the cloth was problematic and many treatment plants adopted less effective centrifuge or belt filter press technologies. Since then, there have been great enhancements in fabric quality and manufacturing technology that have made this issue obsolete. Unlike in the US, in Asia automatic membrane filter technology is the most common method of dewatering municipal waste biosolids. Moisture is typically 10-15% lower and less polymer is required, which saves on trucking and overall disposal cost. Operating condition The operating pressure is commonly up to 7 bar for metal frames. Improvements in the technology have made it possible to remove a large amount of moisture at 16 bar of pressure and to operate at 30 bar. However, the pressure is 4-5 bar for wood or plastic frames. 
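The optimum time cycle described above can also be illustrated numerically. The sketch below assumes the classic constant-pressure filtration model, in which the filtration time grows as t = a·V² + b·V with filtrate volume V (a reflecting cake resistance, b cloth resistance); the coefficients and the downtime are arbitrary example values, not data from the article.

```python
# Numeric illustration of batch cycle optimisation for a filter press:
# maximise the average throughput V / (t_filtration + t_downtime).

def filtration_time(volume, a=2.0, b=0.1):
    # Constant-pressure filtration: quadratic term from the growing cake,
    # linear term from the (constant) cloth resistance.
    return a * volume**2 + b * volume

def average_rate(volume, downtime=1.0):
    return volume / (filtration_time(volume) + downtime)

volumes = [i * 0.001 for i in range(1, 5001)]
best = max(volumes, key=average_rate)
print(f"optimal filtrate volume per cycle ~ {best:.3f}")
print(f"filtration time at the optimum    ~ {filtration_time(best):.3f} "
      f"(downtime = 1.0)")
# With b = 0 the optimal filtration time exactly equals the downtime;
# a nonzero cloth resistance b pushes it slightly above the downtime,
# consistent with the qualitative statement in the text.
```

With these example numbers the optimal filtration time comes out just above the downtime, matching the rule of thumb stated above.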
If the concentration of solids in the feed tank increases to the point where the solid particles attach to each other, it is possible to install moving blades in the filter press to reduce the resistance to the flow of liquid through the slurry. For the process prior to cake discharge, air blowing is used for cakes that have a permeability of 10−11 to 10−15 m2. Pre-treatment Pre-treatment of the slurries before filtration is required if the solid suspension has settled down. Coagulation as pre-treatment can improve the performance of the filter press because it increases the porosity of the filter cake, leading to faster filtration. Varying the temperature, concentration and pH can control the size of the flocs. Moreover, if the filter cake is impermeable and difficult for the flow of filtrate, a filter aid chemical can be added in the pre-treatment process to increase the porosity of the cake, reduce the cake resistance and obtain a thicker cake. However, filter aids need to be removable from the filter cake by either physical or chemical treatment. A common filter aid is Kieselguhr, which gives a voidage of 0.85. In terms of cake handling, a batch filter press requires a large discharge tray in order to contain the large amount of cake, and the system is more expensive compared to a continuous filter press with the same output. Washing There are two possible methods of washing that are employed, "simple washing" and "thorough washing". For simple washing, the wash liquor flows through the same channel as the slurry with high velocity, causing erosion of the cakes near the point of entry. The channels formed are thus constantly enlarged and uneven cleaning is normally obtained. A better technique is thorough washing, in which the wash liquor is introduced through a different channel behind the filter cloth called the washing plates. It flows through the whole thickness of the cakes, first in the opposite direction and then in the same direction as the filtrate. The wash liquor is normally discharged through the same channel as the filtrate. After washing, the cakes can be easily removed by supplying compressed air to remove the excess liquid. Waste Filter presses are now widely used in many industries, and they produce different types of wastes. Harmful wastes, such as toxic chemicals from dye industries and pathogens from waste streams, may accumulate in the waste cakes; hence the requirements for treating those wastes differ. Therefore, before the waste stream is discharged into the environment, post-treatment is an important disinfection stage. It prevents health risks to the local population and the workers dealing with the waste (filter cakes), as well as negative impacts on the ecosystem. Since filter presses produce a large amount of waste, if it is to be disposed of by land reclamation, it is recommended to dispose of it in areas that are drastically altered, like mining areas, where development and fixation of vegetation are not otherwise possible. Another method is incineration, which destroys the organic pollutants and decreases the mass of the waste. It is usually done in a closed device using a controlled flame. Advantages and disadvantages compared to other competitive methods There has been much debate about whether filter presses are sufficient to compete with modern equipment, now and in the future, since filter presses are among the oldest machine-driven dewatering devices. 
Efficiency improvements are possible in many applications where modern filter presses have the best characteristics for the job; however, despite the fact that many mechanical improvements have been made, filter presses still operate on the same concept as when first invented. A lack of progress in efficiency improvements, as well as a lack of research on overcoming the issues associated with filter presses, suggests a possibility of performance inadequacy. At the same time, many other types of filter could do the same or a better job than filter presses, so in certain cases it is crucial to compare characteristics and performance. Batch filter press versus a continuous vacuum belt filter Filter presses offer a wide range of applications; one of their main propositions is the ability to provide a large filter area in a relatively small footprint. The available surface area is one of the most important dimensions in any filtering process, since it governs filter flow rate and capacity. A standard size filter press offers a filter area of 216 m2, whereas a standard belt filter only offers approximately 15 m2. High-solids slurries: continuous pressure operation Filter presses are commonly used to dewater high-solids slurries in metal processing plants; one pressure filtration technology that can deliver the job is the Rotary Pressure Filter, which provides continuous production in a single unit, where filtration is driven by pressure. However, in cases where the solids concentration in high-solids slurries is too high (50%+), it is better to handle these slurries using vacuum filtration, such as a continuous Indexing Vacuum Belt Filter, since a high concentration of solids in slurries will increase the pressure, and if the pressure is too high the equipment might be damaged or operate less efficiently. Current development In the future, market demands on the modern filtration industry will be for finer and higher degrees of separation, particularly for the purposes of material recycling, energy saving, and green technology. To meet increasing demands for a higher degree of dewatering of difficult-to-filter materials, super-high-pressure filters are required. Therefore, the trend of increasing pressure in automatic filter presses will continue to develop in the future. Conventional filter press mechanisms usually use mechanical compression and air for de-liquoring; however, their efficiency in producing low-moisture cake is limited. An alternative method has been introduced using steam instead of air for cake dewatering. The steam dewatering technique can be a competitive method since it yields a low-moisture cake. References Filters Liquid-solid separation
Filter press
[ "Chemistry", "Engineering" ]
3,956
[ "Separation processes by phases", "Chemical equipment", "Filters", "Filtration", "Liquid-solid separation" ]
14,263,657
https://en.wikipedia.org/wiki/C8H8O4
The molecular formula C8H8O4 (molar mass: 168.15 g/mol, exact mass: 168.042259 u) may refer to: Dehydroacetic acid 3,4-Dihydroxyphenylacetic acid 3,4-Dihydroxyphenylglycolaldehyde 2,6-Dimethoxy-1,4-benzoquinone Homogentisic acid 4-Hydroxymandelic acid 5-Methoxysalicylic acid Norcantharidin Orsellinic acid Quinolacetic acid Trihydroxyacetophenones Gallacetophenone (2,3,4-trihydroxyacetophenone) 2,4,6-Trihydroxyacetophenone Vanillic acid Molecular formulas
C8H8O4
[ "Physics", "Chemistry" ]
178
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
14,266,112
https://en.wikipedia.org/wiki/Latent%20human%20error
Latent human error is a term used in safety work and accident prevention, especially in aviation, to describe human errors which are likely to be made due to systems or routines that are formed in such a way that humans are disposed to making these errors. Latent human errors are frequently components in causes of accidents. The error is latent and may not materialize immediately; thus, latent human error does not cause immediate or obvious damage. Discovering latent errors is therefore difficult and requires a systematic approach. Latent human error is often discussed in aviation incident investigation, where it contributes to over 70% of accidents. By gathering data about errors made, then collating, grouping and analyzing them, it can be determined whether a disproportionate number of similar errors are being made. If this is the case, a contributing factor may be disharmony between the respective systems/routines and human nature or propensities. The routines or systems can then be analyzed, potential problems identified, and amendments made if necessary, in order to prevent future errors, incidents or accidents from occurring. See also Air safety Error Citations Defense Technical Information Center (1994-12-01). DTIC ADA492127: Behind Human Error: Cognitive Systems, Computers and Hindsight. Further reading James Reason: Human Error, Cambridge University Press; 1st edition (October 26, 1990) External links Erik Hollnagel, "The Elusiveness of "Human Error"", 2005 Human error: models and management – James Reason British Medical Journal 2000;320:768–70 (Internet Archive) Human factors view of accident causation Safety engineering Error Accidents Human factors References
Latent human error
[ "Engineering" ]
337
[ "Safety engineering", "Systems engineering" ]
14,267,269
https://en.wikipedia.org/wiki/GPR98
ADGRV1, also known as G protein-coupled receptor 98 (GPR98) or Very Large G-protein coupled receptor 1 (VLGR1), is a protein that in humans is encoded by the GPR98 gene. Several alternatively spliced transcripts have been described. The adhesion GPCR VLGR1 is the largest GPCR known, with a size of 6300 amino acids and consisting of 90 exons. There are 8 splice variants of VlgR1, named VlgR1a-1e and Mass1.1-1.3. The N-terminus consists of 5800 amino acids containing 35 Calx-beta domains, one pentraxin domain, and one epilepsy associated repeat. Mutations of VlgR1 have been shown to result in Usher's syndrome. Knockouts of Vlgr1 in mice have been shown to phenocopy Usher's syndrome and lead to audiogenic seizures. Function This gene encodes a member of the adhesion-GPCR family of receptors. The protein binds calcium and is expressed in the central nervous system. It is also known as very large G-protein coupled receptor 1 because it is 6300 residues long. It contains a C-terminal 7-transmembrane receptor domain, whereas the large N-terminal segment (5900 residues) includes 35 calcium binding Calx-beta domains, and 6 EAR domains. Evolution The sea urchin genome contains a homolog of VLGR1. Clinical significance Mutations in this gene are associated with Usher syndrome 2 and familial febrile seizures. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Usher Syndrome Type II Receptors G protein-coupled receptors
GPR98
[ "Chemistry" ]
368
[ "G protein-coupled receptors", "Receptors", "Signal transduction" ]
14,271,033
https://en.wikipedia.org/wiki/Bulk%20temperature
In thermofluid dynamics, the bulk temperature, or average bulk temperature of the fluid, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts. The concept of the bulk temperature is that adiabatic mixing of the fluid from a given cross section of the duct will result in some equilibrium temperature that accurately reflects the average temperature of the moving fluid, more so than a simple average like the film temperature. References Continuum mechanics Heat transfer Temperature
Bulk temperature
[ "Physics", "Chemistry" ]
109
[ "Thermodynamics stubs", "Scalar physical quantities", "Temperature", "Heat transfer", "Thermodynamic properties", "Physical quantities", "Physical phenomena", "Continuum mechanics", "Transport phenomena", "SI base quantities", "Intensive quantities", "Classical mechanics", "Thermodynamics", ...
14,271,130
https://en.wikipedia.org/wiki/Histaminergic
Histaminergic means "working on the histamine system", and histaminic means "related to histamine". A histaminergic agent (or drug) is a chemical which functions to directly modulate the histamine system in the body or brain. Examples include histamine receptor agonists and histamine receptor antagonists (or antihistamines). Subdivisions of histamine antagonists include H1 receptor antagonists, H2 receptor antagonists, and H3 receptor antagonists. See also Adenosinergic Adrenergic Cannabinoidergic Cholinergic Dopaminergic GABAergic Glycinergic Melatonergic Monoaminergic Opioidergic Serotonergic References Neurochemistry Neurotransmitters
Histaminergic
[ "Chemistry", "Biology" ]
171
[ "Biochemistry", "Neurochemistry", "Neurotransmitters" ]
14,271,172
https://en.wikipedia.org/wiki/KMT2A
Histone-lysine N-methyltransferase 2A, also known as acute lymphoblastic leukemia 1 (ALL-1), myeloid/lymphoid or mixed-lineage leukemia 1 (MLL1), or zinc finger protein HRX (HRX), is an enzyme that in humans is encoded by the KMT2A gene. MLL1 is a histone methyltransferase deemed a positive global regulator of gene transcription. This protein belongs to the group of histone-modifying enzymes comprising the transactivation domain 9aaTAD and is involved in the epigenetic maintenance of transcriptional memory. Its role as an epigenetic regulator of neuronal function is an ongoing area of research. Function Transcriptional regulation The KMT2A gene encodes a transcriptional coactivator that plays an essential role in regulating gene expression during early development and hematopoiesis. The encoded protein contains multiple conserved functional domains. One of these domains, the SET domain, is responsible for its histone H3 lysine 4 (H3K4) methyltransferase activity, which mediates chromatin modifications associated with epigenetic transcriptional activation. Enriched in the nucleus, the MLL1 enzyme trimethylates H3K4 (H3K4me3). It also upregulates mono- and dimethylation of H3K4. This protein is processed by the enzyme Taspase 1 into two fragments, MLL-C (~180 kDa) and MLL-N (~320 kDa). These fragments then assemble into different multi-protein complexes that regulate the transcription of specific target genes, including many of the HOX genes. Transcriptome profiling after deletion of MLL1 in cortical neurons revealed decreased promoter-bound H3K4me3 peaks at 318 genes, with 31 of these having significantly decreased expression and promoter binding. Among them were Meis2, a homeobox transcription factor critical for the development of forebrain neurons, and Satb2, a protein involved in neuronal differentiation. Multiple chromosomal translocations involving this gene are the cause of certain acute lymphoid leukemias and acute myeloid leukemias. Alternate splicing results in multiple transcript variants. Cognition and emotion MLL1 has been shown to be an important epigenetic regulator of complex behaviors. Rodent models of MLL1 dysfunction in forebrain neurons showed that conditional deletion results in elevated anxiety and defective cognition. Prefrontal cortex-specific knockout of MLL1 results in the same phenotypes, as well as working memory deficits. Stem cells MLL1 has been found to be an important regulator of epiblast-derived stem cells, post-implantation stem cells which display pluripotency yet many recognizable differences from the traditional embryonic stem cells derived from the inner cell mass prior to implantation. Suppression of MLL1 expression was shown to be sufficient for inducing ESC-like morphology and behavior within 72 hours of treatment. It has been proposed that the small molecule inhibitor MM-401, which was used to inhibit MLL1, changes the distribution of H3K4me1 (the monomethylation of histone H3 lysine 4) such that it is significantly downregulated at MLL1 targets, thus leading to decreased expression of MLL1 targets rather than a direct regulation of core pluripotency markers. Structure Gene The KMT2A gene has 37 exons and resides on chromosome 11 at q23. Protein KMT2A has over a dozen binding partners and is cleaved into two pieces, a larger N-terminal fragment, involved in gene repression, and a smaller C-terminal fragment, which is a transcriptional activator. The cleavage, followed by the association of the two fragments, is necessary for KMT2A to be fully active. 
Like many other methyltransferases, the KMT2 family members exist in multisubunit nuclear complexes (human COMPASS), where other subunits also mediate the enzymatic activity. Clinical significance Abnormal H3K4 trimethylation has been implicated in several neurological disorders such as autism. Humans with cognitive and neurodevelopmental disease often have dysregulation of H3K4 methylation in prefrontal cortex (PFC) neurons. It also may participate in the process of GAD67 downregulation in schizophrenia. MLL1 is required for the expression of senescence-associated secretory phenotype (SASP)-related genes and promotes increased inflammation. Rearrangements of the MLL1 gene are associated with aggressive acute leukemias, both lymphoblastic and myeloid. Despite being an aggressive leukemia, the MLL1-rearranged sub-type has the lowest mutation rates reported for any cancer. Mutations in MLL1 cause Wiedemann-Steiner syndrome and acute lymphoblastic leukemia. The leukemia cells of up to 80 percent of infants with acute lymphoblastic leukemia have a chromosomal rearrangement that fuses the MLL1 gene to a gene on a different chromosome. Interactions MLL1 has been shown to interact with: ASH2L, CREBBP, CTBP1, HDAC1, HCFC1, MEN1, PPIE, PPP1R15A, RBBP5, and WDR5. References Further reading External links MLL OMIM Entry: MYELOID/LYMPHOID OR MIXED LINEAGE LEUKEMIA GENE; MLL Gene MLL on the Atlas of Genetics and Oncology Epigenetics Proteins Transcription factors Human proteins
KMT2A
[ "Chemistry", "Biology" ]
1,189
[ "Biomolecules by chemical classification", "Gene expression", "Signal transduction", "Induced stem cells", "Molecular biology", "Proteins", "Transcription factors" ]
14,272,194
https://en.wikipedia.org/wiki/Saint-Venant%27s%20theorem
In solid mechanics, it is common to analyze the properties of beams with constant cross section. Saint-Venant's theorem states that the simply connected cross section with maximal torsional rigidity is a circle. It is named after the French mathematician Adhémar Jean Claude Barré de Saint-Venant. Given a simply connected domain D in the plane with area A, the radius and the area of its greatest inscribed circle, the torsional rigidity P of D is defined by Here the supremum is taken over all the continuously differentiable functions vanishing on the boundary of D. The existence of this supremum is a consequence of the Poincaré inequality. Saint-Venant conjectured in 1856 that of all domains D of equal area A the circular one has the greatest torsional rigidity, that is A rigorous proof of this inequality was not given until 1948, by Pólya. Another proof was given by Davenport and reported in the literature. A more general proof and an estimate were given by Makai. Notes Elasticity (physics) Eponymous theorems of physics Calculus of variations Inequalities
Saint-Venant's theorem
[ "Physics", "Materials_science", "Mathematics" ]
224
[ "Physical phenomena", "Mathematical theorems", "Equations of physics", "Elasticity (physics)", "Deformation (mechanics)", "Binary relations", "Eponymous theorems of physics", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Physical properties", "Physics theore...
6,042,516
https://en.wikipedia.org/wiki/Kearny%20air%20pump
The Kearny air pump is an expedient air pump used to ventilate a shelter. The design is such that a person with normal mechanical skills can construct and operate one. It is usually human-powered and designed to be employed during a time of crisis. It was designed to be used in a fallout shelter, but can be used in any situation where emergency ventilation is needed, as after a hurricane. It was developed from research performed at Oak Ridge National Laboratory by Cresson Kearny and published in Nuclear War Survival Skills. The basic principle is to create a flat surface with vanes that close when moving air and open when going back to the starting position. The design was derived from the punkah. See also Kearny fallout meter Nuclear War Survival Skills () References External links Online version of the plans in Nuclear War Survival Skills Air raid shelters Nuclear warfare Disaster preparedness Pumps
Kearny air pump
[ "Physics", "Chemistry" ]
183
[ "Pumps", "Turbomachinery", "Physical systems", "Hydraulics", "Nuclear warfare", "Radioactivity" ]
6,043,761
https://en.wikipedia.org/wiki/Texas%20Low%20Emission%20Diesel%20standards
Texas Low Emission Diesel standards (TxLED) are rules regulating the quality of diesel fuels, intended to reduce pollutants (especially NOx). Since October 31, 2005, diesel fuel to be consumed by engines in 110 counties in Eastern Texas must meet these requirements: Maximum aromatic hydrocarbon content of 10% by volume. Minimum cetane number of 48. Alternatively, diesel fuel that complies with the specifications of a California Air Resources Board (CARB) certified alternative diesel formulation that was approved by CARB before January 18, 2005, may be used, as can fuel approved by the Texas Commission on Environmental Quality (TCEQ) that is proven to have emissions equivalent to or less than TxLED compliant fuel. References Petroleum products Fuels
Texas Low Emission Diesel standards
[ "Chemistry" ]
152
[ "Petroleum", "Petroleum products", "Fuels", "Chemical energy sources" ]
6,044,113
https://en.wikipedia.org/wiki/Occulting%20disk
An occulting disk is a small disk placed centrally in the eyepiece of a telescope or at its focal point, to block the view of a bright object so that fainter objects can be seen more easily. The coronagraph, at its simplest, is an occulting disk in the focal plane of a telescope, or in front of the entrance aperture, that blocks out the image of the solar disk so that the corona can be seen. A starshade is an occulting disk designed to fly in formation with a space telescope to image exoplanets. See also New Worlds Mission Space sunshade Telescope for Habitable Exoplanets and Interstellar/Intergalactic Astronomy References Optical telescope components Optical devices Star images Stellar astronomy
Occulting disk
[ "Materials_science", "Astronomy", "Technology", "Engineering" ]
147
[ "Glass engineering and science", "Optical telescope components", "Optical devices", "Astronomy stubs", "Stellar astronomy stubs", "Components", "Astronomical sub-disciplines", "Stellar astronomy" ]
6,044,329
https://en.wikipedia.org/wiki/Haber%E2%80%93Weiss%20reaction
The Haber–Weiss reaction generates •OH (hydroxyl radicals) from H2O2 (hydrogen peroxide) and superoxide (•O2−) catalyzed by iron ions. It was first proposed by Fritz Haber and his student Joseph Joshua Weiss in 1932. This reaction has long been studied and revived in different contexts, including organic chemistry, free radicals, radiochemistry, and water radiolysis. In the 1970s, with the emerging interest in the effect of oxygen (O2)-derived free radicals on the ageing mechanisms of living cells, it was proposed that the Haber–Weiss reaction was a source of radicals responsible for cellular oxidative stress. However, this hypothesis was later disproved by several studies. The oxidative stress toxicity is not caused by the Haber–Weiss reaction as a whole, but by the Fenton reaction, which is one specific part of it. The reaction is kinetically slow, but is catalyzed by dissolved iron ions. The first step of the catalytic cycle involves the reduction of the ferric (Fe3+) ion into the ferrous (Fe2+) ion: Fe3+ + •O2− → Fe2+ + O2 The second step is the Fenton reaction: Fe2+ + H2O2 → Fe3+ + OH− + •OH Net reaction: •O2− + H2O2 → •OH + OH− + O2 Haber-Weiss chain reaction The main finding of Haber and Weiss was that hydrogen peroxide (H2O2) is decomposed by a chain reaction. The Haber–Weiss reaction chain proceeds by successive steps: (i) initiation, (ii) propagation and (iii) termination. The chain is initiated by the Fenton reaction: Fe2+ + H2O2 → Fe3+ + HO– + HO•     (step 1: initiation) Then, the reaction chain propagates by means of two successive steps: HO• + H2O2 → H2O + O2•– + H+        (step 2: propagation) O2•– + H+ + H2O2 → O2 + HO• + H2O    (step 3: propagation) Finally, the chain is terminated when the hydroxyl radical is scavenged by a ferrous ion: Fe2+ + HO• + H+ → Fe3+ + H2O        (step 4: termination) George showed in 1947 that, in water, step 3 cannot compete with the spontaneous disproportionation of superoxide, and proposed an improved mechanism for the disappearance of hydrogen peroxide; see the literature for a summary. The reactions proposed therein are: Fe2+ + H2O2 → Fe3+ + HO– + HO•    (initiation) Fe2+ + HO• → Fe3+ + HO–    (termination) H2O2 + HO• → H2O + HO2•    (propagation) Fe2+ + HO2• → Fe3+ + HO2−    (termination) Fe3+ + HO2• → Fe2+ + O2 + H+    (termination) Hydroperoxyl and superoxide radicals Over time, various chemical notations for the hydroperoxyl (perhydroxyl) radical have coexisted in the literature. Haber, Willstätter and Weiss simply wrote HO2 or O2H, but sometimes HO2• or •O2H can also be found to stress the radical character of the species. The hydroperoxyl radical is a weak acid and gives rise to the superoxide radical (O2•–) when it loses a proton: HO2 → H+ + O2– sometimes also written as: HO2• → H+ + O2•– A first pKa value of 4.88 for the dissociation of the hydroperoxyl radical was determined in 1970. The presently accepted value is 4.7. This pKa value is close to that of acetic acid. Below a pH of 4.7, the protonated hydroperoxyl radical will dominate in solution, while at pH above 4.7 the superoxide radical anion will be the main species. Effect of pH on the reaction rate As the Haber–Weiss reaction depends on the presence of both Fe3+ and Fe2+ in solution, its kinetics is influenced by the respective solubilities of both species, which are a direct function of the solution pH. 
As Fe3+ is about 100 times less soluble than Fe2+ in natural waters at near-neutral pH, the ferric ion concentration is the limiting factor for the reaction rate. The reaction can only proceed at a fast enough rate under sufficiently acidic conditions. At high pH, under alkaline conditions, the reaction slows down considerably because of the precipitation of Fe(OH)3, which notably lowers the concentration of the Fe3+ species in solution. Moreover, the pH value also directly influences the acid-base dissociation equilibrium involving the hydroperoxyl and superoxide radicals (pKa = 4.7), as mentioned above. See also Fenton's reagent References Catalysis Environmental chemistry Free radical reactions Fritz Haber Iron compounds Name reactions Oxidizing agents Peroxides Radiation effects
Haber–Weiss reaction
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
1,100
[ "Catalysis", "Physical phenomena", "Redox", "Free radical reactions", "Environmental chemistry", "Materials science", "Oxidizing agents", "Organic reactions", "Name reactions", "Radiation", "Condensed matter physics", "nan", "Radiation effects", "Chemical kinetics" ]
6,044,675
https://en.wikipedia.org/wiki/Faber%E2%80%93Jackson%20relation
The Faber–Jackson relation provided the first empirical power-law relation between the luminosity and the central stellar velocity dispersion of elliptical galaxies, and was presented by the astronomers Sandra M. Faber and Robert Earl Jackson in 1976. Their relation can be expressed mathematically as: with the index approximately equal to 4. In 1962, Rudolph Minkowski discovered and wrote that a "correlation between velocity dispersion and [luminosity] exists, but it is poor" and that "it seems important to extend the observations to more objects, especially at low and medium absolute magnitudes". This was important because the value of the index depends on the range of galaxy luminosities that is fitted, with a value of 2 for low-luminosity elliptical galaxies discovered by a team led by Roger Davies, and a value of 5 reported by Paul L. Schechter for luminous elliptical galaxies. The Faber–Jackson relation is understood as a projection of the fundamental plane of elliptical galaxies. One of its main uses is as a tool for determining distances to external galaxies. Theory The gravitational potential of a mass distribution of radius R and mass M is given by the expression: Where α is a constant depending e.g. on the density profile of the system and G is the gravitational constant. For a constant density, The kinetic energy is: (Recall is the 1-dimensional velocity dispersion. Therefore, .) From the virial theorem ( ) it follows If we assume that the mass to light ratio, , is constant, e.g. we can use this and the above expression to obtain a relation between and : Let us introduce the surface brightness, and assume this is a constant (which from a fundamental theoretical point of view, is a totally unjustified assumption) to get Using this and combining it with the relation between and , this results in and by rewriting the above expression, we finally obtain the relation between luminosity and velocity dispersion: that is Given that massive galaxies originate from homologous merging, and the fainter ones from dissipation, the assumption of constant surface brightness can no longer be supported. Empirically, surface brightness exhibits a peak at about . The revised relation then becomes for the less massive galaxies, and for the more massive ones. With these revised formulae, the fundamental plane splits into two planes inclined by about 11 degrees to each other. Even first-ranked cluster galaxies do not have constant surface brightness. A claim supporting constant surface brightness was presented by astronomer Allan R. Sandage in 1972 based on three logical arguments and his own empirical data. In 1975, Donald Gudehus showed that each of the logical arguments was incorrect and that first-ranked cluster galaxies exhibited a standard deviation of about half a magnitude. Estimating distances to galaxies Like the Tully–Fisher relation, the Faber–Jackson relation provides a means of estimating the distance to a galaxy, which is otherwise hard to obtain, by relating it to more easily observable properties of the galaxy. In the case of elliptical galaxies, if one can measure the central stellar velocity dispersion, which can be done relatively easily by using spectroscopy to measure the Doppler shift of light emitted by the stars, then one can obtain an estimate of the true luminosity of the galaxy via the Faber–Jackson relation. This can be compared to the apparent magnitude of the galaxy, which provides an estimate of the distance modulus and, hence, the distance to the galaxy. 
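As a rough numerical illustration of this procedure, the following Python sketch turns a measured velocity dispersion and apparent magnitude into a distance estimate. The fiducial calibration (sigma* = 200 km/s, M* = -21) and the example galaxy are assumptions for demonstration only, not published values.

```python
import math

# Sketch of a Faber-Jackson distance estimate: calibrate L ~ sigma^4,
# convert to an absolute magnitude, then use the distance modulus.
# sigma_star and m_star below are assumed fiducial values.

def absolute_magnitude(sigma_kms, sigma_star=200.0, m_star=-21.0, n=4.0):
    # L proportional to sigma^n  =>  M = M* - 2.5 * n * log10(sigma/sigma*)
    return m_star - 2.5 * n * math.log10(sigma_kms / sigma_star)

def distance_mpc(apparent_mag, sigma_kms):
    mu = apparent_mag - absolute_magnitude(sigma_kms)  # distance modulus
    return 10.0 ** (mu / 5.0 - 5.0)                    # mu = 5 log10(d / 10 pc)

# Example: an elliptical with sigma = 250 km/s observed at m = 14
print(f"estimated distance ~ {distance_mpc(14.0, 250.0):.0f} Mpc")
```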
By combining a galaxy's central velocity dispersion with measurements of its central surface brightness and radius parameter, it is possible to improve the estimate of the galaxy's distance even more. This standard yardstick, or "reduced galaxian radius-parameter", , devised by Gudehus in 1991, can yield distances, free of systematic bias, accurate to about 31%. See also Fundamental plane (elliptical galaxies) M–sigma relation Sigma-D relation Tully–Fisher relation References External links The original paper by Faber & Jackson Gudehus's revision of the Faber–Jackson relation - Extragalactic astronomy Equations of astronomy Physical cosmological concepts
Faber–Jackson relation
[ "Physics", "Astronomy" ]
837
[ "Physical cosmological concepts", "Concepts in astrophysics", "Concepts in astronomy", "Equations of astronomy", "Extragalactic astronomy", "Astronomical sub-disciplines" ]
731,401
https://en.wikipedia.org/wiki/Ideal%20solution
An ideal solution or ideal mixture is a solution that exhibits thermodynamic properties analogous to those of a mixture of ideal gases. The enthalpy of mixing is zero, as is the volume change on mixing, by definition; the closer to zero the enthalpy of mixing is, the more "ideal" the behavior of the solution becomes. The vapor pressures of the solvent and solute obey Raoult's law and Henry's law, respectively, and the activity coefficient (which measures deviation from ideality) is equal to one for each component. The concept of an ideal solution is fundamental to both thermodynamics and chemical thermodynamics and their applications, such as the explanation of colligative properties. Physical origin Ideality of solutions is analogous to ideality for gases, with the important difference that intermolecular interactions in liquids are strong and cannot simply be neglected as they can for ideal gases. Instead we assume that the mean strength of the interactions is the same between all the molecules of the solution. More formally, for a mix of molecules of A and B, the interactions between unlike neighbors (UAB) and like neighbors UAA and UBB must be of the same average strength, i.e., 2 UAB = UAA + UBB and the longer-range interactions must be nil (or at least indistinguishable). If the molecular forces are the same between AA, AB and BB, i.e., UAB = UAA = UBB, then the solution is automatically ideal. If the molecules are almost identical chemically, e.g., 1-butanol and 2-butanol, then the solution will be almost ideal. Since the interaction energies between A and B are almost equal, it follows that there is only a very small overall energy (enthalpy) change when the substances are mixed. The more dissimilar the nature of A and B, the more strongly the solution is expected to deviate from ideality. Formal definition Different related definitions of an ideal solution have been proposed. The simplest definition is that an ideal solution is a solution for which each component obeys Raoult's law for all compositions. Here is the vapor pressure of component above the solution, is its mole fraction and is the vapor pressure of the pure substance at the same temperature. This definition depends on vapor pressure, which is a directly measurable property, at least for volatile components. The thermodynamic properties may then be obtained from the chemical potential μ (which is the partial molar Gibbs energy g) of each component. If the vapor is an ideal gas, The reference pressure may be taken as = 1 bar, or as the pressure of the mix, whichever is simpler. On substituting the value of from Raoult's law, This equation for the chemical potential can be used as an alternate definition for an ideal solution. However, the vapor above the solution may not actually behave as a mixture of ideal gases. Some authors therefore define an ideal solution as one for which each component obeys the fugacity analogue of Raoult's law . Here is the fugacity of component in solution and is the fugacity of as a pure substance. Since the fugacity is defined by the equation this definition leads to ideal values of the chemical potential and other thermodynamic properties even when the component vapors above the solution are not ideal gases. An equivalent statement uses thermodynamic activity instead of fugacity. 
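As a minimal numerical sketch of these definitions, the following Python snippet evaluates Raoult's law for a binary mixture and the ideal Gibbs free energy of mixing derived in the sections that follow; the pure-component vapor pressures, composition and temperature are invented example values.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def raoult_partial_pressures(x1, p1_pure, p2_pure):
    """Ideal-solution partial pressures p_i = x_i * p_i*, plus the total."""
    p1 = x1 * p1_pure
    p2 = (1.0 - x1) * p2_pure
    return p1, p2, p1 + p2

def gibbs_mixing_per_mole(x1, temperature):
    """dG_mix = R*T*(x1 ln x1 + x2 ln x2); negative for any 0 < x1 < 1."""
    x2 = 1.0 - x1
    return R * temperature * (x1 * math.log(x1) + x2 * math.log(x2))

p1, p2, total = raoult_partial_pressures(x1=0.4, p1_pure=24.0, p2_pure=60.0)
print(f"p1 = {p1:.1f} kPa, p2 = {p2:.1f} kPa, total = {total:.1f} kPa")
print(f"dG_mix at 298 K = {gibbs_mixing_per_mole(0.4, 298.0):.0f} J/mol")
```

Because the mole-fraction logarithms are always negative, the printed Gibbs free energy of mixing is negative at any composition, which is the miscibility argument made in the consequences discussion below.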
Thermodynamic properties Volume If we differentiate this last equation with respect to at constant we get: Since we know from the Gibbs potential equation that: with the molar volume , these last two equations put together give: Since all this, done as a pure substance, is valid in an ideal mix just adding the subscript to all the intensive variables and changing to , with optional overbar, standing for partial molar volume: Applying the first equation of this section to this last equation we find: which means that the partial molar volumes in an ideal mix are independent of composition. Consequently, the total volume is the sum of the volumes of the components in their pure forms: Enthalpy and heat capacity Proceeding in a similar way but taking the derivative with respect to we get a similar result for molar enthalpies: Remembering that we get: which in turn means that and that the enthalpy of the mix is equal to the sum of its component enthalpies. Since and , similarly It is also easily verifiable that Entropy of mixing Finally since we find that Since the Gibbs free energy per mole of the mixture is then At last we can calculate the molar entropy of mixing since and Consequences Solvent–solute interactions are the same as solute–solute and solvent–solvent interactions, on average. Consequently, the enthalpy of mixing (solution) is zero and the change in Gibbs free energy on mixing is determined solely by the entropy of mixing. Hence the molar Gibbs free energy of mixing is or for a two-component ideal solution where m denotes molar, i.e., change in Gibbs free energy per mole of solution, and is the mole fraction of component . Note that this free energy of mixing is always negative (since each , each or its limit for must be negative (infinite)), i.e., ideal solutions are miscible at any composition and no phase separation will occur. The equation above can be expressed in terms of chemical potentials of the individual components where is the change in chemical potential of on mixing. If the chemical potential of pure liquid is denoted , then the chemical potential of in an ideal solution is Any component of an ideal solution obeys Raoult's Law over the entire composition range: where is the equilibrium vapor pressure of pure component and is the mole fraction of component in solution. Non-ideality Deviations from ideality can be described by the use of Margules functions or activity coefficients. A single Margules parameter may be sufficient to describe the properties of the solution if the deviations from ideality are modest; such solutions are termed regular. In contrast to ideal solutions, where volumes are strictly additive and mixing is always complete, the volume of a non-ideal solution is not, in general, the simple sum of the volumes of the component pure liquids and solubility is not guaranteed over the whole composition range. By measurement of densities, thermodynamic activity of components can be determined. See also Activity coefficient Entropy of mixing Margules function Regular solution Coil-globule transition Apparent molar property Dilution equation Virial coefficient References Solutions Thermodynamics Chemical thermodynamics
Ideal solution
[ "Physics", "Chemistry", "Mathematics" ]
1,375
[ "Homogeneous chemical mixtures", "Thermodynamics", "Chemical thermodynamics", "Solutions", "Dynamical systems" ]
731,884
https://en.wikipedia.org/wiki/Electromagnetic%20four-potential
An electromagnetic four-potential is a relativistic vector function from which the electromagnetic field can be derived. It combines both an electric scalar potential and a magnetic vector potential into a single four-vector. As measured in a given frame of reference, and for a given gauge, the first component of the electromagnetic four-potential is conventionally taken to be the electric scalar potential, and the other three components make up the magnetic vector potential. While both the scalar and vector potential depend upon the frame, the electromagnetic four-potential is Lorentz covariant. Like other potentials, many different electromagnetic four-potentials correspond to the same electromagnetic field, depending upon the choice of gauge. This article uses tensor index notation and the Minkowski metric sign convention . See also covariance and contravariance of vectors and raising and lowering indices for more details on notation. Formulae are given in SI units and Gaussian-cgs units. Definition The contravariant electromagnetic four-potential can be defined as: in which ϕ is the electric potential, and A is the magnetic potential (a vector potential). The unit of Aα is V·s·m−1 in SI, and Mx·cm−1 in Gaussian-CGS. The electric and magnetic fields associated with these four-potentials are: In special relativity, the electric and magnetic fields transform under Lorentz transformations. This can be written in the form of a rank two tensor – the electromagnetic tensor. The 16 contravariant components of the electromagnetic tensor, using Minkowski metric convention , are written in terms of the electromagnetic four-potential and the four-gradient as: If the said signature is instead then: This essentially defines the four-potential in terms of physically observable quantities, as well as reducing to the above definition. In the Lorenz gauge Often, the Lorenz gauge condition in an inertial frame of reference is employed to simplify Maxwell's equations as: where Jα are the components of the four-current, and is the d'Alembertian operator. In terms of the scalar and vector potentials, this last equation becomes: For a given charge and current distribution, and , the solutions to these equations in SI units are: where is the retarded time. This is sometimes also expressed with where the square brackets are meant to indicate that the time should be evaluated at the retarded time. Of course, since the above equations are simply the solution to an inhomogeneous differential equation, any solution to the homogeneous equation can be added to these to satisfy the boundary conditions. These homogeneous solutions in general represent waves propagating from sources outside the boundary. When the integrals above are evaluated for typical cases, e.g. of an oscillating current (or charge), they are found to give both a magnetic field component varying according to r (the induction field) and a component decreasing as r (the radiation field). Gauge freedom When flattened to a one-form (in tensor notation, ), the four-potential (normally written as a vector or, in tensor notation) can be decomposed via the Hodge decomposition theorem as the sum of an exact, a coexact, and a harmonic form, . 
There is gauge freedom in in that of the three forms in this decomposition, only the coexact form has any effect on the electromagnetic tensor . Exact forms are closed, as are harmonic forms over an appropriate domain, so and , always. So regardless of what and are, we are left with simply . In infinite flat Minkowski space, every closed form is exact. Therefore the term vanishes. Every gauge transform of can thus be written as . See also Four-vector Covariant formulation of classical electromagnetism Jefimenko's equations Gluon field Aharonov–Bohm effect References Theory of relativity Electromagnetism Four-vectors
Electromagnetic four-potential
[ "Physics" ]
917
[ "Physical phenomena", "Electromagnetism", "Physical quantities", "Four-vectors", "Fundamental interactions", "Vector physical quantities", "Theory of relativity" ]
732,446
https://en.wikipedia.org/wiki/Lorentz%20factor
The Lorentz factor or Lorentz term (also known as the gamma factor) is a dimensionless quantity expressing how much the measurements of time, length, and other physical properties change for an object while it moves. The expression appears in several equations in special relativity, and it arises in derivations of the Lorentz transformations. The name originates from its earlier appearance in Lorentzian electrodynamics – named after the Dutch physicist Hendrik Lorentz. It is generally denoted (the Greek lowercase letter gamma). Sometimes (especially in discussion of superluminal motion) the factor is written as (Greek uppercase-gamma) rather than . Definition The Lorentz factor is defined as where: is the relative velocity between inertial reference frames, is the speed of light in vacuum, is the ratio of to , is coordinate time, is the proper time for an observer (measuring time intervals in the observer's own frame). This is the most frequently used form in practice, though not the only one (see below for alternative forms). To complement the definition, some authors define the reciprocal; see the velocity addition formula. Occurrence Following is a list of formulae from Special relativity which use as a shorthand: The Lorentz transformation: The simplest case is a boost in the -direction (more general forms including arbitrary directions and rotations not listed here), which describes how spacetime coordinates change from one inertial frame using coordinates to another with relative velocity : Corollaries of the above transformations are the results: Time dilation: The time () between two ticks as measured in the frame in which the clock is moving, is longer than the time () between these ticks as measured in the rest frame of the clock: Length contraction: The length () of an object as measured in the frame in which it is moving, is shorter than its length () in its own rest frame: Applying conservation of momentum and energy leads to these results: Relativistic mass: The mass of an object in motion is dependent on and the rest mass : Relativistic momentum: The relativistic momentum relation takes the same form as for classical momentum, but using the above relativistic mass: Relativistic kinetic energy: The relativistic kinetic energy relation takes the slightly modified form: As is a function of , the non-relativistic limit gives , as expected from Newtonian considerations. Numerical values In the table below, the left-hand column shows speeds as different fractions of the speed of light (i.e. in units of ). The middle column shows the corresponding Lorentz factor, and the final column its reciprocal. Values in bold are exact. Alternative representations There are other ways to write the factor. Above, velocity was used, but related variables such as momentum and rapidity may also be convenient. Momentum Solving the previous relativistic momentum equation for leads to This form is rarely used, although it does appear in the Maxwell–Jüttner distribution. Rapidity Applying the definition of rapidity as the hyperbolic angle : also leads to (by use of hyperbolic identities): Using the property of Lorentz transformation, it can be shown that rapidity is additive, a useful property that velocity does not have. Thus the rapidity parameter forms a one-parameter group, a foundation for physical models. 
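A short Python sketch of the quantities defined above, using arbitrary example speeds: it evaluates the Lorentz factor, applies it to time dilation, and checks numerically that rapidities add while velocities combine through the relativistic velocity-addition formula.

```python
import math

def gamma(beta):
    """Lorentz factor for a speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def rapidity(beta):
    """Hyperbolic angle phi = artanh(beta)."""
    return math.atanh(beta)

beta = 0.6
print(f"gamma({beta}) = {gamma(beta):.4f}")            # 1.25 exactly
print(f"1 s of proper time dilates to {gamma(beta):.4f} s")

# Two successive boosts of beta = 0.6: the rapidities simply add ...
beta_total = math.tanh(2.0 * rapidity(0.6))
# ... which reproduces the relativistic velocity-addition result
assert math.isclose(beta_total, (0.6 + 0.6) / (1.0 + 0.6 * 0.6))
print(f"combined speed = {beta_total:.6f} c (not 1.2 c)")
```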
Bessel function The Bunney identity represents the Lorentz factor in terms of an infinite series of Bessel functions. Series expansion (velocity) The Lorentz factor has the Maclaurin series γ = 1 + (1/2)β² + (3/8)β⁴ + (5/16)β⁶ + …, which is a special case of a binomial series. The approximation γ ≈ 1 + (1/2)β² may be used to calculate relativistic effects at low speeds. It holds to within 1% error for β < 0.4 (v < 120,000 km/s), and to within 0.1% error for β < 0.22 (v < 66,000 km/s). The truncated versions of this series also allow physicists to prove that special relativity reduces to Newtonian mechanics at low speeds. For example, in special relativity, the following two equations hold: p = γmv and E = γmc². For γ ≈ 1 and γ ≈ 1 + (1/2)β², respectively, these reduce to their Newtonian equivalents: p = mv and E = mc² + (1/2)mv². The Lorentz factor equation can also be inverted to yield β = √(1 − 1/γ²). This has the asymptotic form β ≈ 1 − 1/(2γ²) − 1/(8γ⁴) − …. The first two terms are occasionally used to quickly calculate velocities from large γ values. The approximation β ≈ 1 − 1/(2γ²) holds to within 1% tolerance for γ > 2, and to within 0.1% tolerance for γ > 3.5. Applications in astronomy The standard model of long-duration gamma-ray bursts (GRBs) holds that these explosions are ultra-relativistic (initial γ greater than approximately 100), which is invoked to explain the so-called "compactness" problem: absent this ultra-relativistic expansion, the ejecta would be optically thick to pair production at typical peak spectral energies of a few 100 keV, whereas the prompt emission is observed to be non-thermal. Muons, a type of subatomic particle, travel at a speed such that they have a relatively high Lorentz factor and therefore experience extreme time dilation. Since muons have a mean lifetime of just 2.2 μs, muons generated from cosmic-ray collisions high in Earth's atmosphere should be nondetectable on the ground due to their decay rate. However, roughly 10% of muons from these collisions are still detectable on the surface, thereby demonstrating the effects of time dilation on their decay rate. See also Inertial frame of reference Proper velocity Pseudorapidity References External links Doppler effects Equations Hendrik Lorentz Minkowski spacetime Special relativity
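The error bounds quoted above for the truncated series γ ≈ 1 + (1/2)β² are easy to verify; a minimal Python check:

```python
import math

def gamma(beta: float) -> float:
    return 1.0 / math.sqrt(1.0 - beta * beta)

def gamma_series(beta: float) -> float:
    """First two terms of the Maclaurin series quoted above."""
    return 1.0 + 0.5 * beta * beta

for beta in (0.22, 0.4):
    exact = gamma(beta)
    approx = gamma_series(beta)
    rel_err = abs(approx - exact) / exact
    print(f"beta = {beta}: gamma = {exact:.5f}, "
          f"approx = {approx:.5f}, relative error = {rel_err:.2%}")
# beta = 0.22 gives ~0.09% error and beta = 0.4 gives ~1.0% error,
# matching the 0.1% and 1% thresholds stated above.
```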
Lorentz factor
[ "Physics", "Mathematics" ]
1,140
[ "Physical phenomena", "Physical quantities", "Quantity", "Mathematical objects", "Astrophysics", "Theory of relativity", "Special relativity", "Equations", "Dimensionless quantities", "Doppler effects" ]
732,746
https://en.wikipedia.org/wiki/Conductive%20polymer
Conductive polymers or, more precisely, intrinsically conducting polymers (ICPs) are organic polymers that conduct electricity. Such compounds may have metallic conductivity or can be semiconductors. The main advantage of conductive polymers is that they are easy to process, mainly by dispersion. Conductive polymers are generally not thermoplastics, i.e., they are not thermoformable. But, like insulating polymers, they are organic materials. They can offer high electrical conductivity but do not show similar mechanical properties to other commercially available polymers. The electrical properties can be fine-tuned using the methods of organic synthesis and by advanced dispersion techniques. History Polyaniline was first described in the mid-19th century by Henry Letheby, who investigated the electrochemical and chemical oxidation products of aniline in acidic media. He noted that the reduced form was colourless but the oxidized forms were deep blue. The first highly-conductive organic compounds were the charge transfer complexes. In the 1950s, researchers reported that polycyclic aromatic compounds formed semi-conducting charge-transfer complex salts with halogens. In 1954, researchers at Bell Labs and elsewhere reported organic charge transfer complexes with resistivities as low as 8 Ω·cm. In the early 1970s, researchers demonstrated that salts of tetrathiafulvalene show almost metallic conductivity, while superconductivity was demonstrated in 1980. Broad research on salts of charge transfer complexes continues today. While these compounds were technically not polymers, this indicated that organic compounds could carry current. While organic conductors were previously intermittently discussed, the field was particularly energized by the prediction of superconductivity following the discovery of BCS theory. In 1963 Australians B.A. Bolto, D.E. Weiss, and coworkers reported derivatives of polypyrrole with resistivities as low as 1 Ω·cm. There have been multiple reports of similar high-conductivity oxidized polyacetylenes. With the notable exception of charge transfer complexes (some of which are even superconductors), organic molecules were previously considered insulators or at best weakly conducting semiconductors. Subsequently, DeSurville and coworkers reported high conductivity in a polyaniline. Likewise, in 1980, Diaz and Logan reported films of polyaniline that can serve as electrodes. While mostly operating at the scale of less than 100 nanometers, "molecular" electronic processes can collectively manifest on a macro scale. Examples include quantum tunneling, negative resistance, phonon-assisted hopping and polarons. In 1977, Alan J. Heeger, Alan MacDiarmid and Hideki Shirakawa reported similar high conductivity in oxidized iodine-doped polyacetylene. For this research, they were awarded the 2000 Nobel Prize in Chemistry "for the discovery and development of conductive polymers." Polyacetylene itself did not find practical applications, but drew the attention of scientists and encouraged the rapid growth of the field. Since the late 1980s, organic light-emitting diodes (OLEDs) have emerged as an important application of conducting polymers. Types Linear-backbone "polymer blacks" (polyacetylene, polypyrrole, polyindole and polyaniline) and their copolymers are the main class of conductive polymers. Poly(p-phenylene vinylene) (PPV) and its soluble derivatives have emerged as the prototypical electroluminescent semiconducting polymers. 
Today, poly(3-alkylthiophenes) are the archetypical materials for solar cells and transistors. The following table presents some organic conductive polymers according to their composition. The well-studied classes are written in bold and the less well studied ones are in italic. Synthesis Conductive polymers are prepared by many methods. Most conductive polymers are prepared by oxidative coupling of monocyclic precursors. Such reactions entail dehydrogenation: n H–[X]–H → H–[X]n–H + 2(n–1) H+ + 2(n–1) e− The low solubility of most polymers presents challenges. Some researchers add solubilizing functional groups to some or all monomers to increase solubility. Others address this through the formation of nanostructures and surfactant-stabilized conducting polymer dispersions in water. These include polyaniline nanofibers and PEDOT:PSS. In many cases, the molecular weights of conductive polymers are lower than those of conventional polymers such as polyethylene. However, in some cases, the molecular weight need not be high to achieve the desired properties. There are two main methods used to synthesize conductive polymers: chemical synthesis and electro(co)polymerization. Chemical synthesis means connecting the carbon–carbon bonds of monomers by placing simple monomers under various conditions, such as heating, pressure, light exposure, or the presence of a catalyst. Its advantage is high yield; however, many possible impurities may remain in the end product. Electro(co)polymerization means inserting three electrodes (a reference electrode, a counter electrode and a working electrode) into a solution containing the reactants or monomers. By applying a voltage to the electrodes, the redox reaction that synthesizes the polymer is promoted. Electro(co)polymerization can further be divided into cyclic voltammetry and the potentiostatic method, which apply a cyclic voltage and a constant voltage, respectively. The advantage of electro(co)polymerization is the high purity of the products, but the method can only synthesize a few products at a time. Molecular basis of electrical conductivity The conductivity of such polymers is the result of several processes. For example, in traditional polymers such as polyethylene, the valence electrons are bound in sp3 hybridized covalent bonds. Such "sigma-bonding electrons" have low mobility and do not contribute to the electrical conductivity of the material. However, in conjugated materials, the situation is completely different. Conducting polymers have backbones of contiguous sp2 hybridized carbon centers. One valence electron on each center resides in a pz orbital, which is orthogonal to the other three sigma-bonds. All the pz orbitals combine with each other into a molecule-wide delocalized set of orbitals. The electrons in these delocalized orbitals have high mobility when the material is "doped" by oxidation, which removes some of these delocalized electrons. Thus, the conjugated p-orbitals form a one-dimensional electronic band, and the electrons within this band become mobile when it is partially emptied. The band structures of conductive polymers can easily be calculated with a tight binding model (a minimal one-dimensional example is sketched below). In principle, these same materials can be doped by reduction, which adds electrons to an otherwise unfilled band. In practice, most organic conductors are doped oxidatively to give p-type materials. 
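The band-structure remark above can be made concrete with the simplest possible tight-binding picture of a bond-alternating (Peierls-distorted) chain, in the spirit of the Su–Schrieffer–Heeger model. The sketch below is illustrative only; the hopping integrals t1 and t2 are hypothetical values, not measured parameters for any particular polymer.

```python
import math

def dimerized_chain_bands(k: float, t1: float, t2: float, a: float = 1.0):
    """Two-band tight-binding dispersion of a 1D chain with alternating
    hopping integrals t1 and t2 (lattice constant a).
    Returns the (valence, conduction) band energies at wavevector k."""
    e = math.sqrt(t1 * t1 + t2 * t2 + 2.0 * t1 * t2 * math.cos(k * a))
    return -e, e

t1, t2 = 2.5, 2.0  # hypothetical hopping integrals, eV
ks = [i * math.pi / 100 for i in range(101)]  # k from 0 to pi/a
gap = min(cond - val for val, cond in
          (dimerized_chain_bands(k, t1, t2) for k in ks))
print(f"band gap = {gap:.2f} eV; analytic value 2*|t1 - t2| = {2*abs(t1-t2):.2f} eV")
# With t1 == t2 (no bond alternation) the gap closes and the chain would be
# metallic; alternation opens a gap, consistent with undoped conjugated
# polymers being semiconductors as described in the text.
```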
The redox doping of organic conductors is analogous to the doping of silicon semiconductors, whereby a small fraction of silicon atoms are replaced by electron-rich, e.g., phosphorus, or electron-poor, e.g., boron, atoms to create n-type and p-type semiconductors, respectively. Although typically "doping" conductive polymers involves oxidizing or reducing the material, conductive organic polymers associated with a protic solvent may also be "self-doped." Undoped conjugated polymers are semiconductors or insulators. In such compounds, the energy gap can be > 2 eV, which is too great for thermally activated conduction. Therefore, undoped conjugated polymers, such as polythiophenes and polyacetylenes, have only a low electrical conductivity of around 10⁻¹⁰ to 10⁻⁸ S/cm. Even at a very low level of doping (< 1%), electrical conductivity increases several orders of magnitude up to values of around 0.1 S/cm. Subsequent doping of the conducting polymers will result in a saturation of the conductivity at values around 0.1–10 kS/cm (10–1000 kS/m) for different polymers. The highest values reported up to now are for the conductivity of stretch-oriented polyacetylene, with confirmed values of about 80 kS/cm (8 MS/m). Although the pi-electrons in polyacetylene are delocalized along the chain, pristine polyacetylene is not a metal. Polyacetylene has alternating single and double bonds which have lengths of 1.44 and 1.36 Å, respectively. Upon doping, the bond alternation is diminished and the conductivity increases. Non-doping increases in conductivity can also be accomplished in a field effect transistor (organic FET or OFET) and by irradiation. Some materials also exhibit negative differential resistance and voltage-controlled "switching" analogous to that seen in inorganic amorphous semiconductors. Despite intensive research, the relationship between morphology, chain structure and conductivity is still poorly understood. Generally, it is assumed that conductivity should be higher for a higher degree of crystallinity and better alignment of the chains; however, this could not be confirmed for polyaniline and was only recently confirmed for PEDOT, both of which are largely amorphous. Properties and applications Conductive polymers show promise in antistatic materials and they have been incorporated into commercial displays and batteries. Literature suggests they are also promising in organic solar cells, printed electronic circuits, organic light-emitting diodes, actuators, electrochromism, supercapacitors, chemical sensors, chemical sensor arrays, and biosensors, flexible transparent displays, electromagnetic shielding and possibly replacement for the popular transparent conductor indium tin oxide. Another use is for microwave-absorbent coatings, particularly radar-absorptive coatings on stealth aircraft. Conducting polymers are rapidly gaining traction in new applications with increasingly processable materials with better electrical and physical properties and lower costs. The new nanostructured forms of conducting polymers particularly augment this field with their higher surface area and better dispersibility. Research reports showed that nanostructured conducting polymers in the form of nanofibers and nanosponges exhibit significantly improved capacitance values as compared to their non-nanostructured counterparts. With the availability of stable and reproducible dispersions, PEDOT and polyaniline have gained some large-scale applications. 
While PEDOT (poly(3,4-ethylenedioxythiophene)) is mainly used in antistatic applications and as a transparent conductive layer in the form of PEDOT:PSS dispersions (PSS = polystyrene sulfonic acid), polyaniline is widely used for printed circuit board manufacturing – in the final finish, for protecting copper from corrosion and preserving its solderability. Moreover, polyindole is also starting to gain attention for various applications due to its high redox activity, thermal stability, and slower degradation compared with its competitors polyaniline and polypyrrole. Electroluminescence Electroluminescence is light emission stimulated by electric current. In organic compounds, electroluminescence has been known since the early 1950s, when Bernanose and coworkers first produced electroluminescence in crystalline thin films of acridine orange and quinacrine. In 1960, researchers at Dow Chemical developed AC-driven electroluminescent cells using doping. In some cases, similar light emission is observed when a voltage is applied to a thin layer of a conductive organic polymer film. While electroluminescence was originally mostly of academic interest, the increased conductivity of modern conductive polymers means enough power can be put through the device at low voltages to generate practical amounts of light. This property has led to the development of flat panel displays using organic LEDs, solar panels, and optical amplifiers. Barriers to applications Since most conductive polymers require oxidative doping, the properties of the resulting state are crucial. Such materials are salt-like (polymer salt), which makes them less soluble in organic solvents and water and hence harder to process. Furthermore, the charged organic backbone is often unstable towards atmospheric moisture. Improving processability for many polymers requires the introduction of solubilizing substituents, which can further complicate the synthesis. Experimental and theoretical thermodynamical evidence suggests that conductive polymers may even be completely and principally insoluble so that they can only be processed by dispersion. Trends Most recent emphasis is on organic light emitting diodes and organic polymer solar cells. The Organic Electronics Association is an international platform to promote applications of organic semiconductors. Conductive polymer products with embedded and improved electromagnetic interference (EMI) and electrostatic discharge (ESD) protection have led to both prototypes and products. For example, the Polymer Electronics Research Center at the University of Auckland is developing a range of novel DNA sensor technologies based on conducting polymers, photoluminescent polymers and inorganic nanocrystals (quantum dots) for simple, rapid and sensitive gene detection. Typical conductive polymers must be "doped" to produce high conductivity. As of 2001, an organic polymer that is intrinsically electrically conducting (without doping) remained to be discovered. Recently (as of 2020), researchers from IMDEA Nanoscience Institute reported experimental demonstration of the rational engineering of 1D polymers that are located near the quantum phase transition from the topologically trivial to non-trivial class, thus featuring a narrow bandgap. See also Organic electronics Organic semiconductor Molecular electronics List of emerging technologies Conjugated microporous polymer References Further reading Hyungsub Choi and Cyrus C.M. Mody, The Long History of Molecular Electronics, Social Studies of Science, vol. 39. F. L. Carter, R. E. 
Siatkowski and H. Wohltjen (eds.), Molecular Electronic Devices, 229–244, North Holland, Amsterdam, 1988. External links Conducting Polymers for Carbon Electronics – a Chem Soc Rev themed issue with a foreword from Alan Heeger Molecular electronics Organic semiconductors Polymer material properties
Conductive polymer
[ "Chemistry", "Materials_science" ]
2,900
[ "Molecular physics", "Semiconductor materials", "Molecular electronics", "Polymer material properties", "Polymer chemistry", "Nanotechnology", "Conductive polymers", "Organic semiconductors" ]
732,808
https://en.wikipedia.org/wiki/Suramin
Suramin is a medication used to treat African sleeping sickness and river blindness. It is the treatment of choice for sleeping sickness without central nervous system involvement. It is given by injection into a vein. Suramin causes a fair number of side effects. Common side effects include nausea, vomiting, diarrhea, headache, skin tingling, and weakness. Sore palms of the hands and soles of the feet, trouble seeing, fever, and abdominal pain may also occur. Severe side effects may include low blood pressure, decreased level of consciousness, kidney problems, and low blood cell levels. It is unclear if it is safe when breastfeeding. Suramin was made at least as early as 1916. It is on the World Health Organization's List of Essential Medicines. In the United States it can be acquired from the Centers for Disease Control (CDC). In regions of the world where the disease is common, suramin is provided for free by the World Health Organization (WHO). Medical uses Suramin is used for treatment of human sleeping sickness caused by trypanosomes. Specifically, it is used for treatment of first-stage African trypanosomiasis caused by Trypanosoma brucei rhodesiense and Trypanosoma brucei gambiense without involvement of the central nervous system. It is considered first-line treatment for Trypanosoma brucei rhodesiense, and second-line treatment for early-stage Trypanosoma brucei gambiense, where pentamidine is recommended as first line. It has been used in the treatment of river blindness (onchocerciasis). Pregnancy and breastfeeding It is unknown whether it is safe for the baby when a woman takes it while breastfeeding. Adverse reactions The most frequent adverse reactions are nausea, vomiting, diarrhea, abdominal pain, and a feeling of general discomfort. It is also common to experience various sensations in the skin, from crawling or tingling sensations to tenderness of the palms and soles, and numbness of the hands, arms, legs or feet. Other skin reactions include skin rash, swelling and a stinging sensation. Suramin can also cause loss of appetite and irritability. Suramin causes non-harmful changes in urine during use, specifically making the urine cloudy. It may exacerbate kidney disease. Less common side effects include extreme fatigue, ulcers in the mouth, and painful tender glands in the neck, armpits and groin. Suramin uncommonly affects the eyes, causing watery eyes, swelling around the eyes, photophobia, and changes or loss of vision. Rare side effects include hypersensitivity reactions causing difficulty breathing. Other rare systemic effects include decreased blood pressure, fever, rapid heart rate, and convulsions. Other rare side effects include symptoms of liver dysfunction such as tenderness in the upper abdomen, jaundice in the eyes and skin, and unusual bleeding or bruising. Suramin has been applied clinically to HIV/AIDS patients, resulting in a significant number of fatal occurrences, and as a result the application of this molecule was abandoned for this condition. Pharmacology Pharmacokinetics Suramin is not orally bioavailable and must be given intravenously. Intramuscular and subcutaneous administration could result in local tissue inflammation or necrosis. Suramin is approximately 98–99% protein bound in the serum and has a half-life of 41–78 days (average of 50 days); however, the pharmacokinetics of suramin can vary substantially between individual patients. 
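Given the long elimination half-life quoted above, a simple first-order decay model shows how slowly suramin clears. This is a minimal Python sketch assuming one-compartment exponential elimination with the ~50-day average half-life, which, as the text notes, varies substantially between patients.

```python
def fraction_remaining(t_days: float, half_life_days: float = 50.0) -> float:
    """Fraction of drug remaining after t_days under first-order elimination."""
    return 0.5 ** (t_days / half_life_days)

for t in (50, 100, 150):
    print(f"day {t:3d}: {fraction_remaining(t):.1%} of the dose remains")
# day  50: 50.0%, day 100: 25.0%, day 150: 12.5%
```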
Suramin does not distribute well into cerebrospinal fluid, and its concentration in the tissues is correspondingly lower than its concentration in the plasma. Suramin is not extensively metabolized and about 80% is eliminated via the kidneys. Mechanism of action The mechanism of action for suramin is unclear, but it is thought that parasites are able to selectively uptake suramin via receptor-mediated endocytosis of drug that is bound to low-density lipoproteins and, to a lesser extent, other serum proteins. Once inside parasites, suramin combines with proteins, especially trypanosomal glycolytic enzymes, to inhibit energy metabolism. Chemistry The molecular formula of suramin is C51H40N6O23S6. It is a symmetric molecule, in the center of which lies a urea (NH–CO–NH) functional group. Suramin contains six aromatic systems – four benzene rings, sandwiched by a pair of naphthalene moieties – plus four amide functional groups (in addition to the urea) and six sulfonic acid groups. When given as a medication, it is usually delivered as the sodium sulfonate salt, as this formulation is water-soluble, though it does deteriorate rapidly in air. The synthesis of suramin itself and structural analogs is by successive formation of the amide bonds from their corresponding amine (aniline) and carboxyl (as acyl chloride) components. Various routes to these compounds have been developed, including starting from separate naphthalene structures and building towards an eventual unification by formation of the urea, or starting with a urea and appending successive groups. History Suramin was first made by the chemists Oskar Dressel, Richard Kothe and Bernhard Heymann at Bayer AG laboratories in Elberfeld, after research on a series of urea-like compounds. The drug is still sold by Bayer under the brand name Germanin. The chemical structure of suramin was kept secret by Bayer for commercial and strategic reasons, but it was elucidated and published in 1924 by Ernest Fourneau and his team at the Pasteur Institute. Research It is also used as a research reagent to inhibit the activation of heterotrimeric G proteins in a variety of GPCRs with varying potency. It prevents the association of heteromeric G proteins and therefore the receptor's guanine nucleotide exchange functionality (GEF). With this blockade the GDP will not release from the Gα subunit, so it cannot be replaced by GTP and become activated. This has the effect of blocking downstream G protein-mediated signaling of various GPCR proteins including rhodopsin, the A1 adenosine receptor, the D2 receptor, the P2 receptor, and ryanodine receptors. Suramin is also an inhibitor of ABC-type and P-type ATPases, acting competitively with ATP. Suramin was studied as a possible treatment for prostate cancer in a clinical trial. Suramin has been studied in a mouse model of autism and in a small phase I/II human trial. Results from a randomized clinical study found no statistically significant effects of suramin (in either 10 mg or 20 mg doses) versus placebo on boys with moderate to severe autism spectrum disorder. Suramin is a reversible and competitive protein tyrosine phosphatase (PTPase) inhibitor; it is also a potent inhibitor of sirtuins, purified topoisomerase II, and the SARS-CoV-2 RNA-dependent RNA polymerase (RdRp). 
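The molecular formula given above fixes the molar mass of the free acid; a short Python sketch using standard atomic weights (note the drug is administered as the sodium salt, whose mass would be higher):

```python
# Average atomic masses, g/mol (IUPAC standard atomic weights, rounded)
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def molar_mass(formula: dict) -> float:
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

suramin = {"C": 51, "H": 40, "N": 6, "O": 23, "S": 6}  # C51H40N6O23S6
print(f"suramin (free acid): {molar_mass(suramin):.1f} g/mol")  # ~1297.3
```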
References Further reading External links Suramin sodium National Cancer Institute 1916 in science Anthelmintics Antiprotozoal agents Drugs developed by Bayer Benzanilides Naphthalenesulfonic acids Ureas World Health Organization essential medicines Wikipedia medicine articles ready to translate
Suramin
[ "Chemistry", "Biology" ]
1,532
[ "Organic compounds", "Antiprotozoal agents", "Biocides", "Ureas" ]
733,002
https://en.wikipedia.org/wiki/Neil%20Turok
Neil Geoffrey Turok (born 16 November 1958) is a South African physicist. He has held the Higgs Chair of Theoretical Physics at the University of Edinburgh since 2020, and has been director emeritus of the Perimeter Institute for Theoretical Physics since 2019. He specializes in mathematical physics and early-universe physics, including the cosmological constant and a cyclic model for the universe. Early life and career Turok was born on 16 November 1958 in Johannesburg, South Africa, to Mary Turok and Byelorussian-born Ben Turok, who were activists in the anti-apartheid movement and the African National Congress. After graduating from Churchill College, Cambridge, Turok gained his doctorate from Imperial College, London, under the supervision of David Olive, one of the inventors of superstring theory. After a postdoctoral post at Santa Barbara, he was an associate scientist at Fermilab, Illinois. In 1992 he was awarded the Maxwell medal of the Institute of Physics for his contributions to theoretical physics. In 1994 he was appointed Professor of Physics at Princeton University, then held the Chair of Mathematical Physics at the University of Cambridge starting in 1997. He was appointed Director of the Perimeter Institute in 2008. In 2020, Turok was appointed as the Inaugural Higgs Chair of Theoretical Physics at the University of Edinburgh. Research and other contributions Turok has worked in a number of areas of mathematical physics and early universe physics, focusing on observational tests of fundamental physics in cosmology. In the early 1990s, his group showed how the polarisation and temperature anisotropies of the Cosmic microwave background would be correlated, a prediction which has been confirmed in detail by recent precision measurements by the WMAP spacecraft. They also developed a key test for the presence of a cosmological constant, also recently confirmed. Turok and collaborators developed the theory of open inflation. With Stephen Hawking, he later developed the so-called Hawking-Turok instanton solutions which, according to the no-boundary proposal of Hawking and James Hartle, can describe the birth of an inflationary universe. Together with Justin Khoury, Burt Ovrut and Paul Steinhardt, Turok introduced the notion of the Ekpyrotic Universe, "... a cosmological model in which the hot big bang universe is produced by the collision of a brane in the bulk space with a bounding orbifold plane, beginning from an otherwise cold, vacuous, static universe". Most recently, with Paul Steinhardt at Princeton, Turok has been developing a cyclic model for the universe, in which the big bang is explained as a collision between two "brane-worlds" in M theory. The predictions of this model are in agreement with current cosmological data, but there are interesting differences with the predictions of cosmological inflation which will be probed by future experiments (probably by the Planck space observatory). In 2006, Steinhardt and Turok showed how the cyclic model could naturally incorporate a mechanism for relaxing the cosmological constant to very small values, consistent with current observations. In 2007, Steinhardt and Turok co-authored the popular science book Endless Universe. In 2012, Turok's Massey Lectures were published as The Universe Within: from Quantum to Cosmos. In 2003, Turok founded the African Institute for Mathematical Sciences in Muizenberg, a postgraduate educational centre supporting the development of mathematics and science across the African continent. 
Awards and honours He was awarded the 2008 TED Prize for his work in mathematical physics and in establishing the African Institute for Mathematical Sciences in Muizenberg. He also received a "Most Innovative People Award," for Social Innovation, at the World Summit on Innovation and Entrepreneurship (WSIE) in 2008. On 9 May 2008, Mike Lazaridis announced that Turok would become the new Executive Director of the Perimeter Institute for Theoretical Physics starting on 1 October 2008. In 2010 Turok received a prize from the World Innovation Summit for Education in Qatar and an award from the South African Mathematical Society. In 2011 Turok received an Honorary Doctorate from the University of Ottawa. On 3 November 2011, Turok was selected to deliver the Massey Lectures for the 2012 season. This involves five separate lectures to be delivered in various locations across Canada in October 2012, aired on CBC's Ideas shortly thereafter. Turok received an honorary doctorate from Heriot-Watt University in 2012. In 2012 Turok was the recipient of the Lane Anderson Award for his book The Universe Within: From Quantum to Cosmos. Turok was awarded the honorary degrees of Doctor of Science, honoris causa from UCLouvain (4 February 2019), Saint Mary's University (16 May 2014), the Nelson Mandela Metropolitan University (9 April 2014) and Stellenbosch University (26 March 2015). Turok was awarded the 2016 John Torrence Tate Award at the 2016 SPS Quadrennial Congress in San Francisco. References External links Neil Turok's home page African Institute for Mathematical Sciences Perimeter Institute for Theoretical Physics http://news.nmmu.ac.za/News/Acclaimed-physicist-to-receive-honorary-doctorate Interviewed by Tina Kosir on 19 February 2008 (video) Interviewed by Alan Macfarlane 27 April 2017 (video) Interviewed by Dr. Brian Greene, 26 December 2024 (video) 1958 births Living people Alumni of Churchill College, Cambridge Alumni of Imperial College London Cambridge mathematicians South African cosmologists Fellows of Downing College, Cambridge Maxwell Medal and Prize recipients Officers of the Order of Canada People educated at William Ellis School People from Waterloo, Ontario 20th-century South African physicists Theoretical physicists String theorists
Neil Turok
[ "Physics" ]
1,144
[ "Theoretical physics", "Theoretical physicists" ]
733,009
https://en.wikipedia.org/wiki/Nanopore%20sequencing
Nanopore sequencing is a third-generation approach used in the sequencing of biopolymers, specifically polynucleotides in the form of DNA or RNA. Nanopore sequencing allows a single molecule of DNA or RNA to be sequenced without PCR amplification or chemical labeling. Nanopore sequencing has the potential to offer relatively low-cost genotyping, high mobility for testing, and rapid processing of samples, including the ability to display real-time results. It has been proposed for rapid identification of viral pathogens, monitoring Ebola, environmental monitoring, food safety monitoring, human genome sequencing, plant genome sequencing, monitoring of antibiotic resistance, haplotyping and other applications. Development Nanopore sequencing took 25 years to materialize. David Deamer was one of the first to push the idea. In 1989 he sketched out a plan to push single strands of DNA through a protein nanopore embedded into a thin membrane as part of his work to synthesize RNA. Realizing that the approach might allow DNA sequencing, Deamer and his team spent a decade refining the concept. In 1999 they published the first paper using the term 'nanopore sequencing' and two years later produced an image capturing a DNA hairpin passing through a nanopore in real time. Another foundation for nanopore sequencing was the work of Hagan Bayley's team, who from the 1990s independently developed stochastic sensing, a technique that measures the change in an ionic current passing through a nanopore to determine the concentration and identity of a substance. By 2005 Bayley had made progress with the DNA sequencing method. He co-founded Oxford Nanopore to push the technology. In 2014 the company released its first portable nanopore sequencing device. This made it possible for DNA sequencing to be carried out almost anywhere, even with limited resources. A quarter of the world's SARS-CoV-2 viral genomes were sequenced with nanopore devices. The technology offers an important tool for combating antimicrobial resistance. In 2020, China-based Qitan Technology launched its nanopore single-molecule gene sequencer, while in 2024 MGI Tech launched its own products. Principles The biological or solid-state membrane, where the nanopore is found, is surrounded by an electrolyte solution. The membrane splits the solution into two chambers. Applying a bias voltage across the membrane induces an electric field that drives charged particles, in this case the ions, into motion. This effect is known as electrophoresis. For high enough concentrations, the electrolyte solution is well distributed and the voltage drop concentrates near and inside the nanopore. This means charged particles in the solution feel a force from the electric field only when they are near the pore region. This region is typically referred to as the capture region. Inside the capture region, ions have a directed motion that can be recorded as a steady ionic current by placing electrodes near the membrane. A nano-sized polymer such as DNA or protein placed in one of the chambers has a net charge that feels a force from the electric field in the capture region. The molecule approaches this capture region aided by Brownian motion and by any attraction it might have to the surface of the membrane. Once inside the nanopore, the molecule translocates via a combination of electrophoretic, electro-osmotic and sometimes thermophoretic forces. Inside the pore the molecule occupies a volume that partially restricts the ion flow, observed as an ionic current drop (a rough conductance estimate is sketched below). 
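The current drop described above can be put on a rough quantitative footing. A widely used estimate for the open-pore conductance of a cylindrical solid-state nanopore adds an access-resistance term to the channel resistance, G = σ[4L/(πd²) + 1/d]⁻¹. The Python sketch below applies it with illustrative numbers (an electrolyte conductivity of roughly 1 M KCl, and a hypothetical 5 nm pore in a 20 nm membrane); these values are assumptions, not figures from the text.

```python
import math

def open_pore_conductance(sigma: float, d: float, L: float) -> float:
    """Open-pore conductance (S) of a cylindrical nanopore, combining the
    channel resistance 4L/(sigma*pi*d^2) with the access resistance 1/(sigma*d).
    sigma: electrolyte conductivity (S/m); d: pore diameter (m); L: length (m)."""
    return sigma / (4.0 * L / (math.pi * d * d) + 1.0 / d)

sigma = 10.5          # ~1 M KCl at room temperature, S/m (illustrative)
d, L = 5e-9, 20e-9    # hypothetical pore geometry
G = open_pore_conductance(sigma, d, L)
I = G * 0.1           # ionic current at a 100 mV bias
print(f"G ~ {G * 1e9:.1f} nS, open-pore current ~ {I * 1e9:.2f} nA")
# A translocating molecule excludes part of the pore volume, lowering the
# effective cross-section and hence the measured current - the "current drop".
```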
Based on various factors such as geometry, size and chemical composition, the change in magnitude of the ionic current and the duration of the translocation vary. Different molecules can then be sensed and potentially identified based on this current modulation. Base identification The magnitude of the electric current density across a nanopore surface depends on the nanopore's dimensions and the composition of DNA or RNA that is occupying the nanopore. Sequencing was made possible because passing through the channel of the nanopore, the samples cause characteristic changes in the density of the electric current. The total charge flowing through a nanopore channel is equal to the surface integral of electric current density flux across the nanopore unit normal surfaces. Types Biological Biological nanopore sequencing relies on the use of transmembrane proteins, called protein nanopores, in particular, formed by protein toxins, that are embedded in lipid membranes so as to create size dependent porous surfaces - with nanometer scale "holes" distributed across the membranes. Sufficiently low translocation velocity can be attained through the incorporation of various proteins that facilitate the movement of DNA or RNA through the pores of the lipid membranes. Alpha hemolysin Alpha hemolysin (αHL), a nanopore from bacteria that causes lysis of red blood cells, has been studied for over 15 years. To this point, studies have shown that all four bases can be identified using ionic current measured across the αHL pore. The structure of αHL is advantageous to identify specific bases moving through the pore. The αHL pore is ~10 nm long, with two distinct 5 nm sections. The upper section consists of a larger, vestibule-like structure and the lower section consists of three possible recognition sites (R1, R2, R3), and is able to discriminate between each base. Sequencing using αHL has been developed through basic study and structural mutations, moving towards the sequencing of very long reads. Protein mutation of αHL has improved the detection abilities of the pore. The next proposed step is to bind an exonuclease onto the αHL pore. The enzyme would periodically cleave single bases, enabling the pore to identify successive bases. Coupling an exonuclease to the biological pore would slow the translocation of the DNA through the pore, and increase the accuracy of data acquisition. Notably, theorists have shown that sequencing via exonuclease enzymes as described here is not feasible. This is mainly due to diffusion related effects imposing a limit on the capture probability of each nucleotide as it is cleaved. This results in a significant probability that a nucleotide is either not captured before it diffuses into the bulk or captured out of order, and therefore is not properly sequenced by the nanopore, leading to insertion and deletion errors. Therefore, major changes are needed to this method before it can be considered a viable strategy. A recent study has pointed to the ability of αHL to detect nucleotides at two separate sites in the lower half of the pore. The R1 and R2 sites enable each base to be monitored twice as it moves through the pore, creating 16 different measurable ionic current values instead of 4. This method improves upon the single read through the nanopore by doubling the sites that the sequence is read per nanopore. MspA Mycobacterium smegmatis porin A (MspA) is the second biological nanopore currently being investigated for DNA sequencing. 
The MspA pore has been identified as a potential improvement over αHL due to a more favorable structure. The pore is described as a goblet with a thick rim and a diameter of 1.2 nm at the bottom of the pore. A natural MspA, while favorable for DNA sequencing because of shape and diameter, has a negative core that prohibited single-stranded DNA (ssDNA) translocation. The natural nanopore was modified to improve translocation by replacing three negatively charged aspartic acids with neutral asparagines. The electric current detection of nucleotides across the membrane has been shown to be tenfold more specific than αHL for identifying bases. Utilizing this improved specificity, a group at the University of Washington has proposed using double-stranded DNA (dsDNA) between each single-stranded molecule to hold the base in the reading section of the pore. The dsDNA would halt the base in the correct section of the pore and enable identification of the nucleotide. A recent grant has been awarded to a collaboration from UC Santa Cruz, the University of Washington, and Northeastern University to improve the base recognition of MspA using phi29 polymerase in conjunction with the pore. MspA with electric current detection can also be used to sequence peptides. Solid state Solid state nanopore sequencing approaches, unlike biological nanopore sequencing, do not incorporate proteins into their systems. Instead, solid state nanopore technology uses various metal or metal alloy substrates with nanometer-sized pores that allow DNA or RNA to pass through. These substrates most often serve integral roles in the sequence recognition of nucleic acids as they translocate through the channels along the substrates. Tunneling current Measurement of electron tunneling through bases as ssDNA translocates through the nanopore is an improved solid state nanopore sequencing method. Most research has focused on proving that bases could be determined using electron tunneling. These studies were conducted using a scanning probe microscope as the sensing electrode, and have proved that bases can be identified by specific tunneling currents. After the proof-of-principle research, a functional system must be created to couple the solid state pore and sensing devices. Researchers at the Harvard Nanopore group have engineered solid state pores with single-walled carbon nanotubes across the diameter of the pore. Arrays of pores are created and chemical vapor deposition is used to create nanotubes that grow across the array. Once a nanotube has grown across a pore, the diameter of the pore is adjusted to the desired size. Successful creation of a nanotube coupled with a pore is an important step towards identifying bases as the ssDNA translocates through the solid state pore. Another method is the use of nanoelectrodes on either side of a pore. The electrodes are specifically created to enable a solid state nanopore's formation between the two electrodes. This technology could be used not only to sense the bases but also to help control base translocation speed and orientation. Fluorescence An effective technique to determine a DNA sequence has been developed using solid state nanopores and fluorescence. This fluorescence sequencing method converts each base into a characteristic representation of multiple nucleotides which bind to a fluorescent probe strand, forming dsDNA. With the two-color system proposed, each base is identified by two separate fluorescences, and will therefore be converted into two specific sequences. 
Probes consist of a fluorophore and quencher at the start and end of each sequence, respectively. Each fluorophore will be extinguished by the quencher at the end of the preceding sequence. When the dsDNA is translocating through a solid state nanopore, the probe strand will be stripped off, and the upstream fluorophore will fluoresce. This sequencing method has a capacity of 50–250 bases per second per pore, and a four-color fluorophore system (each base could be converted to one sequence instead of two) would sequence over 500 bases per second. Advantages of this method are based on the clear sequencing readout, using a camera instead of noisy current methods. However, the method does require sample preparation to convert each base into an expanded binary code before sequencing. Instead of one base being identified as it translocates through the pore, ~12 bases are required to find the sequence of one base. Purposes Nanopore devices can be used for eDNA analysis in environmental monitoring and crop epidemiology. These can be miniaturised more than earlier technologies and so have been made into portable devices, especially the MinION. The MinION is especially known for the studies of crop viruses by Boykin et al. (2018) and Shaffer (2019) and studies of species prevalence by Menegon et al. (2017) and Pomerantz et al. (2018). Owing to its high portability, low cost and ease of use for rapid sequencing applications, it has also raised ethical, legal and social concerns, along with other next generation sequencing technologies. SARS-CoV-2 variants in Prague wastewater were detected by nanopore-based sequencing. Sequencing of sub-sewershed samples benefits epidemiological early warning systems. Comparison between types Major constraints Low Translocation Velocity: the speed at which a sample passes through a unit's pore must be slow enough to be measured. Dimensional Reproducibility: the likelihood that a unit's pore will be made the proper size. Stress Tolerance: the sensitivity of a unit to internal environmental conditions. Longevity: the length of time that a unit is expected to remain functioning. Ease of Fabrication: the ability to produce a unit, usually with regard to mass production. Biological: advantages and disadvantages Biological nanopore sequencing systems have several fundamental characteristics that make them advantageous as compared with solid state systems, with each advantageous characteristic of this design approach stemming from the incorporation of proteins into their technology. Uniform pore structure, the precise control of sample translocation through pore channels, and even the detection of individual nucleotides in samples can be facilitated by unique proteins from a variety of organism types. The use of proteins in biological nanopore sequencing systems, despite the various benefits, also brings with it some negative characteristics. The sensitivity of the proteins in these systems to local environmental stress has a large impact on the longevity of the units, overall. One example is that a motor protein may only unzip samples with sufficient speed within a certain pH range while not operating fast enough outside of the range; this constraint impacts the functionality of the whole sequencing unit. Another example is that a transmembrane porin may only operate reliably for a certain number of runs before it breaks down. 
Both of these examples would have to be controlled for in the design of any viable biological nanopore system, something that may be difficult to achieve while keeping the costs of such a technology as low as possible and competitive with other systems. Challenges One challenge for the 'strand sequencing' method was in refining the method to improve its resolution to be able to detect single bases. In early methods, a nucleotide needed to be repeated in a sequence about 100 times successively in order to produce a measurable characteristic change. This low resolution is because the DNA strand moves rapidly, at a rate of 1 to 5 μs per base, through the nanopore. This makes recording difficult and prone to background noise, and it fails to obtain single-nucleotide resolution. As of 2006, the problem has been tackled by either improving the recording technology or by controlling the speed of the DNA strand by various protein engineering strategies, and Oxford Nanopore employs a 'kmer approach', analyzing more than one base at any one time so that stretches of DNA are subject to repeat interrogation as the strand moves through the nanopore one base at a time. Various techniques, including algorithmic ones, have been used to improve the performance of the MinION technology since it was first made available to users. More recently, effects of single bases due to secondary structure or released mononucleotides have been shown. In 2010 Hagan Bayley proposed that creating two recognition sites within an alpha-hemolysin pore may confer advantages in base recognition. As of 2009, one challenge for the 'exonuclease approach', where a processive enzyme feeds individual bases, in the correct order, into the nanopore, has been to integrate the exonuclease and the nanopore detection systems. In particular, the problem is that when an exonuclease hydrolyzes the phosphodiester bonds between nucleotides in DNA, the subsequently released nucleotide is not necessarily guaranteed to directly move into, say, a nearby alpha-hemolysin nanopore. In 2009, one idea was to attach the exonuclease to the nanopore, perhaps through biotinylation to the beta barrel hemolysin. The central pore of the protein may be lined with charged residues arranged so that the positive and negative charges appear on opposite sides of the pore. However, this mechanism is primarily discriminatory and does not constitute a mechanism to guide nucleotides down some particular path. References Reviews DNA sequencing methods Laboratory techniques Nanotechnology
Nanopore sequencing
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
3,315
[ "Genetics techniques", "Materials science", "DNA sequencing methods", "DNA sequencing", "nan", "Nanotechnology" ]
733,038
https://en.wikipedia.org/wiki/Enrico%20Fermi%20Institute
The Institute for Nuclear Studies was founded in September 1945 as part of the University of Chicago with Samuel King Allison as director. On November 20, 1955, it was renamed The Enrico Fermi Institute for Nuclear Studies. The name was shortened to The Enrico Fermi Institute (EFI) in January 1968. Physicist Enrico Fermi was heavily involved in the founding years of the institute, and it was at his request that Allison took the position as the first director. In addition to Fermi and Allison, the initial faculty included Harold C. Urey, Edward Teller, Joseph E. Mayer, and Maria Goeppert Mayer. Research activities Theoretical and experimental particle physics; Theoretical and experimental astrophysics and cosmology; General relativity; Electron microscopy; Ion microscopy and secondary ion mass spectrometry; Nonimaging optics and solar energy concentration; Geochemistry, cosmochemistry and nuclear chemistry. Notable staff Herbert L. Anderson, nuclear physicist James Cronin, Nobel laureate in physics Enrico Fermi, Nobel laureate in physics Riazuddin, nuclear physicist Robert Geroch, general relativist James Hartle, general relativist Craig Hogan, astronomer Faheem Hussain, string theorist Leo Kadanoff, condensed matter physicist Edward Kolb, cosmologist Willard Libby, chemist Emil Martinec, string theorist Joseph E. Mayer, chemist Maria Goeppert Mayer, Nobel laureate in physics Yoichiro Nambu, Nobel laureate in physics Marcel Schein, cosmic ray physicist John Alexander Simpson, nuclear and cosmic ray physicist Edward Teller, nuclear physicist, father of the hydrogen bomb Michael Turner, cosmologist Harold C. Urey, Nobel laureate in chemistry Carlos E.M. Wagner, particle phenomenologist Robert M. Wald, general relativist Gregor Wentzel, quantum physicist See also Fermilab Particle physics James Franck Institute References External links Guide to the University of Chicago Institute for Nuclear Studies Cyclotron. Records 1946-1952 at the University of Chicago Special Collections Research Center Institute Research institutes of the University of Chicago Nuclear research institutes Organizations established in 1945 Research institutes in Illinois Theoretical physics institutes Physics research institutes Academic and educational organizations in Chicago
Enrico Fermi Institute
[ "Physics", "Engineering" ]
438
[ "Nuclear research institutes", "Theoretical physics", "Nuclear organizations", "Theoretical physics institutes" ]
733,204
https://en.wikipedia.org/wiki/Polythiophene
Polythiophenes (PTs) are polymerized thiophenes, a sulfur heterocycle. The parent PT is an insoluble colored solid with the formula (C4H2S)n. The rings are linked through the 2- and 5-positions. Poly(alkylthiophene)s have alkyl substituents at the 3- or 4-position(s). They are also colored solids, but tend to be soluble in organic solvents. PTs become conductive when oxidized. The electrical conductivity results from the delocalization of electrons along the polymer backbone. Conductivity, however, is not the only interesting property resulting from electron delocalization. The optical properties of these materials respond to environmental stimuli, with dramatic color shifts in response to changes in solvent, temperature, applied potential, and binding to other molecules. Changes in both color and conductivity are induced by the same mechanism, twisting of the polymer backbone and disrupting conjugation, making conjugated polymers attractive as sensors that can provide a range of optical and electronic responses. The development of polythiophenes and related conductive organic polymers was recognized by the awarding of the 2000 Nobel Prize in Chemistry to Alan J. Heeger, Alan MacDiarmid, and Hideki Shirakawa "for the discovery and development of conductive polymers". Mechanism of conductivity and doping PT is an ordinary organic polymer, being a red solid that is poorly soluble in most solvents. Upon treatment with oxidizing agents (electron acceptors), however, the material takes on a dark color and becomes electrically conductive. Oxidation is referred to as "doping". Around 0.2 equivalents of oxidant are used to convert PTs (and other conducting polymers) into the optimally conductive state. Thus about one of every five rings is oxidized. Many different oxidants are used. Because of the redox reaction, the conductive form of polythiophene is a salt. An idealized stoichiometry is shown using the oxidant [A]PF6: (C4H2S)n + (n/5) [A]PF6 → (C4H2S)n(PF6)0.2n + (n/5) A In principle, PT can be n-doped using reducing agents, but this approach is rarely practiced. Upon "p-doping", a charged unit called a bipolaron is formed. The bipolaron moves as a unit along the polymer chain and is responsible for the macroscopically observed conductivity of the material. Conductivity can approach 1000 S/cm. In comparison, the conductivity of copper is approximately 5×10⁵ S/cm. Generally, the conductivity of PTs is lower than 1000 S/cm, but high conductivity is not necessary for many applications, e.g. as an antistatic film. Oxidants A variety of reagents have been used to dope PTs. Iodine and bromine produce highly conductive materials, which are unstable owing to slow evaporation of the halogen. Organic acids, including trifluoroacetic acid, propionic acid, and sulfonic acids, produce PTs with lower conductivities than iodine, but with higher environmental stabilities. Oxidative polymerization with ferric chloride can result in doping by residual catalyst, although matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) studies have shown that poly(3-hexylthiophene)s are also partially halogenated by the residual oxidizing agent. Poly(3-octylthiophene) dissolved in toluene can be doped by solutions of ferric chloride hexahydrate dissolved in acetonitrile, and can be cast into films with conductivities reaching 1 S/cm. Other, less common p-dopants include gold trichloride and trifluoromethanesulfonic acid. 
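The idealized stoichiometry above (about 0.2 equivalents of oxidant per thiophene ring) fixes how much dopant a given mass of polymer needs. A minimal Python sketch, assuming NOPF6 (molar mass ≈ 175 g/mol) as a hypothetical example of an [A]PF6-type oxidant:

```python
MW_RING = 82.1       # C4H2S repeat unit, g/mol
MW_OXIDANT = 175.0   # NOPF6, g/mol (hypothetical choice of [A]PF6 oxidant)
DOPING_LEVEL = 0.2   # ~1 ring in 5 oxidized, per the stoichiometry above

def oxidant_mass_needed(polymer_mass_g: float) -> float:
    """Grams of oxidant to dope polythiophene to the optimally conductive state."""
    mol_rings = polymer_mass_g / MW_RING
    return mol_rings * DOPING_LEVEL * MW_OXIDANT

print(f"{oxidant_mass_needed(1.0):.2f} g of oxidant per gram of polythiophene")
# ~0.43 g; the doped form is substantially heavier than the neutral polymer,
# consistent with its description above as a polymer salt.
```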
Structure and optical properties Conjugation length The extended π-systems of conjugated PTs produce some of the most interesting properties of these materials—their optical properties. As an approximation, the conjugated backbone can be considered as a real-world example of the "electron-in-a-box" solution to the Schrödinger equation; however, the development of refined models to accurately predict absorption and fluorescence spectra of well-defined oligo(thiophene) systems is ongoing. Conjugation relies upon overlap of the π-orbitals of the aromatic rings, which, in turn, requires the thiophene rings to be coplanar. The number of coplanar rings determines the conjugation length—the longer the conjugation length, the lower the separation between adjacent energy levels, and the longer the absorption wavelength. Deviation from coplanarity may be permanent, resulting from mislinkages during synthesis or especially bulky side chains; or temporary, resulting from changes in the environment or binding. This twist in the backbone reduces the conjugation length, and the separation between energy levels is increased. This results in a shorter absorption wavelength. Determining the maximum effective conjugation length requires the synthesis of regioregular PTs of defined length. The absorption band in the visible region is increasingly red-shifted as the conjugation length increases, and the maximum effective conjugation length is calculated as the saturation point of the red-shift. Early studies by ten Hoeve et al. estimated that the effective conjugation extended over 11 repeat units, while later studies increased this estimate to 20 units. Using the absorbance and emission profile of discrete conjugated oligo(3-hexylthiophene)s prepared through polymerization and separation, Lawrence et al. determined the effective conjugation length of poly(3-hexylthiophene) to be 14 units. The effective conjugation length of polythiophene derivatives depends on the chemical structure of the side chains and the thiophene backbone. The absorption band of poly(3-thiophene acetic acid) in aqueous solutions of poly(vinyl alcohol) (PVA) shifts from 480 nm at pH 7 to 415 nm at pH 4. This is attributed to formation of a compact coil structure, which can form hydrogen bonds with PVA upon partial deprotonation of the acetic acid group. Shifts in PT absorption bands due to changes in temperature result from a conformational transition from a coplanar, rodlike structure at lower temperatures to a nonplanar, coiled structure at elevated temperatures. For example, poly(3-(octyloxy)-4-methylthiophene) undergoes a color change from red–violet at 25 °C to pale yellow at 150 °C. An isosbestic point (a point where the absorbance curves at all temperatures overlap) indicates coexistence between two phases, which may exist on the same chain or on different chains. Not all thermochromic PTs exhibit an isosbestic point: highly regioregular poly(3-alkylthiophene)s (PATs) show a continuous blue-shift with increasing temperature if the side chains are short enough so that they do not melt and interconvert between crystalline and disordered phases at low temperatures. Optical effects The optical properties of PTs can be sensitive to many factors. PTs exhibit absorption shifts due to application of electric potentials (electrochromism), or to introduction of alkali ions (ionochromism). Soluble PATs exhibit both thermochromism and solvatochromism (see above) in chloroform and 2,5-dimethyltetrahydrofuran. 
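The "electron-in-a-box" approximation mentioned above already captures the red shift with conjugation length. The free-electron sketch below treats the π electrons of an oligomer as particles in a one-dimensional box; the length per ring and electron count per ring are illustrative assumptions, and the model is qualitative (it ignores bond alternation, so it does not reproduce the saturation of the red shift).

```python
import math

H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron mass, kg
C = 2.998e8      # speed of light, m/s

def absorption_wavelength(n_rings: int,
                          length_per_ring: float = 3.9e-10,  # m, illustrative
                          pi_electrons_per_ring: int = 4):   # illustrative
    """Free-electron estimate of the HOMO->LUMO absorption wavelength (m)
    for a conjugated chain of n_rings repeat units."""
    L = n_rings * length_per_ring         # box length
    N = n_rings * pi_electrons_per_ring   # number of pi electrons
    delta_E = (H * H / (8.0 * M_E * L * L)) * (N + 1)  # HOMO->LUMO gap
    return H * C / delta_E

for n in (4, 8, 14):
    print(f"{n:2d} rings -> lambda ~ {absorption_wavelength(n) * 1e9:.0f} nm")
# The predicted wavelength grows with chain length (a red shift), in line
# with the conjugation-length discussion above.
```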
Substituted polythiophenes Polythiophene and its oxidized derivatives have poor processing properties. They are insoluble in ordinary solvents and do not melt readily. For example, doped unsubstituted PTs are only soluble in exotic solvents such as arsenic trifluoride and arsenic pentafluoride. Although only poorly processable, "the expected high temperature stability and potentially very high electrical conductivity of PT films (if made) still make it a highly desirable material." Nonetheless, intense interest has focused on soluble polythiophenes, which usually translates to polymers derived from 3-alkylthiophenes, which give the so-called polyalkylthiophenes (PATs). 3-Alkylthiophenes Soluble polymers are derivable from 3-substituted thiophenes where the 3-substituent is butyl or longer. Copolymers also are soluble, e.g., poly(3-methylthiophene-'co'-3'-octylthiophene). One undesirable feature of 3-alkylthiophenes is the variable regioregularity of the polymer. Focusing on the polymer microstructure at the dyad level, 3-substituted thiophenes can couple to give any of three dyads: 2,5', or head–tail (HT), coupling 2,2', or head–head (HH), coupling 5,5', or tail–tail (TT), coupling These three dyads can be combined into four distinct triads. The triads are distinguishable by NMR spectroscopy. Regioregularity affects the properties of PTs. A regiorandom copolymer of 3-methylthiophene and 3-butylthiophene possessed a conductivity of 50 S/cm, whereas a more regioregular copolymer with a 2:1 ratio of HT to HH couplings had a higher conductivity of 140 S/cm. Films of regioregular poly(3-(4-octylphenyl)thiophene) (POPT) with greater than 94% HT content possessed conductivities of 4 S/cm, compared with 0.4 S/cm for regioirregular POPT. PATs prepared using Rieke zinc formed "crystalline, flexible, and bronze-colored films with a metallic luster". On the other hand, the corresponding regiorandom polymers produced "amorphous and orange-colored films". Comparison of the thermochromic properties of the Rieke PATs showed that, while the regioregular polymers showed strong thermochromic effects, the absorbance spectra of the regioirregular polymers did not change significantly at elevated temperatures. Finally, fluorescence absorption and emission maxima of poly(3-hexylthiophene)s occur at increasingly lower wavelengths (higher energy) with increasing HH dyad content. The difference between absorption and emission maxima, the Stokes shift, also increases with HH dyad content, which they attributed to greater relief from conformational strain in the first excited state. Special substituents Water-soluble PTs are represented by sodium poly(3-thiophenealkanesulfonate)s. In addition to conferring water solubility, the pendant sulfonate groups act as counterions, producing self-doped conducting polymers. Substituted PTs with tethered carboxylic acids and urethanes also exhibit water solubility. Thiophenes with chiral substituents at the 3-position have been polymerized. Such chiral PTs in principle could be employed for detection or separation of chiral analytes. Poly(3-(perfluorooctyl)thiophene)s are soluble in supercritical carbon dioxide. Oligothiophenes capped at both ends with thermally-labile alkyl esters were cast as films from solution, and then heated to remove the solubilizing end groups. Atomic force microscopy (AFM) images showed a significant increase in long-range order after heating. Fluorinated polythiophene yields 7% efficiency in polymer-fullerene solar cells. 
PEDOT The 3,4-disubstituted thiophene called ethylenedioxythiophene (EDOT) is the precursor to the polymer PEDOT. Regiochemistry is not an issue, since this monomer is symmetrical. PEDOT is found in electrochromic displays, photovoltaics, electroluminescent displays, printed wiring, and sensors. Synthesis Electrochemical synthesis In an electrochemical polymerization, a solution containing thiophene and an electrolyte produces a conductive PT film on the anode. Electrochemical polymerization is convenient, since the polymer does not need to be isolated and purified, but it can produce polymers with undesirable alpha-beta linkages and varying degrees of regioregularity. The stoichiometry of the electropolymerization is: n C4H4S → (C4H2S)n + 2n H+ + 2n e− The degree of polymerization and quality of the resulting polymer depend upon the electrode material, current density, temperature, solvent, electrolyte, presence of water, and monomer concentration. Electron-donating substituents lower the oxidation potential, whereas electron-withdrawing groups increase the oxidation potential. Thus, 3-methylthiophene polymerizes in acetonitrile and tetrabutylammonium tetrafluoroborate at a potential of about 1.5 V vs. SCE, whereas unsubstituted thiophene requires an additional 0.2 V. Steric hindrance resulting from branching at the α-carbon of a 3-substituted thiophene inhibits polymerization. In terms of mechanism, oxidation of the thiophene monomer produces a radical cation, which then couples with another monomer to produce a radical cation dimer. From bromothiophenes Chemical synthesis offers two advantages compared with electrochemical synthesis of PTs: a greater selection of monomers and, using the proper catalysts, the ability to synthesize perfectly regioregular substituted PTs. PTs were chemically synthesized by accident more than a century ago. Chemical syntheses from 2,5-dibromothiophene use Kumada coupling and related reactions. Regioregular PTs have been prepared by lithiation of 2-bromo-3-alkylthiophenes using Kumada cross-coupling. This method produces approximately 100% HT–HT couplings, according to NMR spectroscopic analysis of the dyads. Treatment of 2,5-dibromo-3-alkylthiophene with highly reactive "Rieke zinc" is an alternative method. Routes employing chemical oxidants In contrast to methods that require brominated monomers, the oxidative polymerization of thiophenes using ferric chloride proceeds at room temperature. The approach was reported by Sugimoto et al. in 1986. The stoichiometry is analogous to that of electropolymerization. This method has proven to be extremely popular; antistatic coatings are prepared on a commercial scale using ferric chloride. In addition to ferric chloride, other oxidizing agents have been reported. Slow addition of ferric chloride to the monomer solution produced poly(3-(4-octylphenyl)thiophene)s with approximately 94% HT content. Precipitation of ferric chloride in situ (in order to maximize the surface area of the catalyst) produced significantly higher yields and monomer conversions than adding monomer directly to crystalline catalyst. Higher molecular weights were reported when dry air was bubbled through the reaction mixture during polymerization. Exhaustive Soxhlet extraction after polymerization with polar solvents was found to effectively fractionate the polymer and remove residual catalyst before NMR spectroscopy. Using a lower ratio of catalyst to monomer (2:1, rather than 4:1) may increase the regioregularity of poly(3-dodecylthiophene)s.
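As a worked example of the electropolymerization stoichiometry given above, the following sketch converts an anodic charge into a mass of deposited polythiophene via Faraday's law. Treating the process as exactly two electrons per ring is an idealization: it neglects the additional fraction of an electron per ring consumed in oxidatively doping the growing film, so real mass-per-charge figures will differ somewhat:

```python
F = 96485.0       # Faraday constant, C/mol
M_REPEAT = 82.12  # molar mass of a (C4H2S) repeat unit, g/mol

def pt_film_mass(charge_C, electrons_per_unit=2):
    """Mass of polythiophene deposited for a given anodic charge,
    following n C4H4S -> (C4H2S)n + 2n H+ + 2n e-."""
    mol_units = charge_C / (electrons_per_unit * F)
    return mol_units * M_REPEAT          # grams

print(f"1.0 C of anodic charge deposits ~{pt_film_mass(1.0)*1e3:.2f} mg of PT")
```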
Andreani et al. reported higher yields of soluble poly(dialkylterthiophene)s in carbon tetrachloride rather than chloroform, which they attributed to the stability of the radical species in carbon tetrachloride. Higher-quality catalyst, added at a slower rate and at reduced temperature, was shown to produce high-molecular-weight PATs with no insoluble polymer residue. Factorial experiments indicate that the catalyst/monomer ratio correlated with increased yield of poly(3-octylthiophene). Longer polymerization time also increased the yield. In terms of mechanism, a radical pathway has been proposed for the oxidative polymerization using ferric chloride. Niemi et al. reported that polymerization was only observed in solvents where the catalyst was either partially or completely insoluble (chloroform, toluene, carbon tetrachloride, pentane, and hexane, and not diethyl ether, xylene, acetone, or formic acid), and speculated that the polymerization may occur at the surface of solid ferric chloride. However, this is challenged by the fact that the reaction also proceeds in acetonitrile, in which FeCl3 is soluble. Quantum mechanical calculations also point to a radical mechanism. The mechanism can also be inferred from the regiochemistry of the dimerization of 3-methylthiophene, since C2 in [3-methylthiophene]+ has the highest spin density. A carbocation mechanism is inferred from the structure of poly(3-(4-octylphenyl)thiophene) prepared with ferric chloride. Polymerization of thiophene can be effected by a solution of ferric chloride in acetonitrile. The kinetics of thiophene polymerization also seemed to contradict the predictions of the radical polymerization mechanism. Barbarella et al. studied the oligomerization of 3-(alkylsulfanyl)thiophenes and concluded from their quantum mechanical calculations, and from considerations of the enhanced stability of the radical cation when delocalized over a planar conjugated oligomer, that a radical cation mechanism analogous to that generally accepted for electrochemical polymerization was more likely. Given the difficulties of studying a system with a heterogeneous, strongly oxidizing catalyst that produces difficult-to-characterize rigid-rod polymers, the mechanism of oxidative polymerization is by no means settled, although the radical cation mechanism is generally accepted. Applications As an example of a static application, the poly(3,4-ethylenedioxythiophene)-poly(styrene sulfonate) (PEDOT-PSS) product ("Clevios P") from Heraeus has been extensively used as an antistatic coating (for example, in packaging materials for electronic components). AGFA coats 200 m × 10 m of photographic film per year with PEDOT:PSS because of its antistatic properties. The thin layer of PEDOT:PSS is virtually transparent and colorless, prevents electrostatic discharges during film rewinding, and reduces dust buildup on the negatives after processing. Proposed applications PEDOT also has been proposed for dynamic applications where a potential is applied to a polymer film. PEDOT-coated windows and mirrors become opaque or reflective upon the application of an electric potential, a manifestation of its electrochromic properties. Widespread adoption of electrochromic windows promises significant savings in air-conditioning costs. Other potential applications include field-effect transistors, electroluminescent devices, solar cells, photochemical resists, nonlinear optic devices, batteries, diodes, and chemical sensors.
In general, two categories of applications are proposed for conducting polymers. Static applications rely upon the intrinsic conductivity of the materials, combined with their processing and material properties common to polymeric materials. Dynamic applications utilize changes in the conductive and optical properties, resulting either from application of electric potentials or from environmental stimuli. PTs have been touted as sensor elements. In addition to biosensor applications, PTs can also be functionalized with receptors for detecting metal ions or chiral molecules. PTs with pendant and main-chain crown ether functionalities exhibit properties that vary with the alkali metal. Polythiophenes show potential in the treatment of prion diseases. Molecular electronics Thiophenes Organic polymers Plastics Organic semiconductors
Polythiophene
[ "Physics", "Chemistry", "Materials_science" ]
4,399
[ "Organic polymers", "Molecular physics", "Semiconductor materials", "Unsolved problems in physics", "Molecular electronics", "Plastics", "Organic compounds", "Nanotechnology", "Amorphous solids", "Conductive polymers", "Organic semiconductors" ]
733,241
https://en.wikipedia.org/wiki/Polyaniline
Polyaniline (PANI) is a conducting polymer and organic semiconductor of the semi-flexible rod polymer family. The compound has been of interest since the 1980s because of its electrical conductivity and mechanical properties. Polyaniline is one of the most studied conducting polymers. Historical development Polyaniline was discovered in the 19th century by Friedlieb Ferdinand Runge (1794–1867), Carl Fritzsche (1808–1871), John Lightfoot (1831–1872), and Henry Letheby (1816–1876). Lightfoot studied the oxidation of aniline, which had been isolated only 20 years previously. He developed the first commercially successful route to the dye called aniline black. The first definitive report of polyaniline, which included an electrochemical method for the determination of small quantities of aniline, did not appear until 1862. From the early 20th century on, occasional reports about the structure of PANI were published. Polymerized from the inexpensive aniline, polyaniline can be found in one of three idealized oxidation states: leucoemeraldine – white/clear and colorless, (C6H4NH)n; emeraldine – green for the emeraldine salt, blue for the emeraldine base, ([C6H4NH]2[C6H4N]2)n; (per)nigraniline – blue/violet, (C6H4N)n. In the general formula, x equals half the degree of polymerization (DP). Leucoemeraldine with n = 1, m = 0 is the fully reduced state. Pernigraniline is the fully oxidized state (n = 0, m = 1) with imine links instead of amine links. Studies have shown that most forms of polyaniline are one of the three states or physical mixtures of these components. The emeraldine (n = m = 0.5) form of polyaniline, often referred to as emeraldine base (EB), is neutral; if doped (protonated), it is called emeraldine salt (ES), with the imine nitrogens protonated by an acid. Protonation helps to delocalize the otherwise trapped diiminoquinone-diaminobenzene state. Emeraldine base is regarded as the most useful form of polyaniline due to its high stability at room temperature and the fact that, upon doping with acid, the resulting emeraldine salt form of polyaniline is highly electrically conducting. Leucoemeraldine and pernigraniline are poor conductors, even when doped with an acid. The colour change associated with polyaniline in different oxidation states can be used in sensors and electrochromic devices. Polyaniline sensors typically exploit changes in electrical conductivity between the different oxidation states or doping levels. Treatment of emeraldine with acids increases the electrical conductivity by up to ten orders of magnitude. Undoped polyaniline has a conductivity of S/m, whereas conductivities of S/m can be achieved by doping to 4% HBr. The same material can be prepared by oxidation of leucoemeraldine. Synthesis Although the synthetic methods to produce polyaniline are quite simple, the mechanism of polymerization is probably complex. The formation of leucoemeraldine can be described as follows, where [O] is a generic oxidant: n C6H5NH2 + [O] → [C6H4NH]n + H2O A common oxidant is ammonium persulfate in 1 M hydrochloric acid (other acids can be used). The polymer precipitates as an unstable dispersion with micrometer-scale particulates. (Per)nigraniline is prepared by oxidation of the emeraldine base with a peracid: {[C6H4NH]2[C6H4N]2}n + RCO3H → [C6H4N]n + H2O + RCO2H Processing The synthesis of polyaniline nanostructures is facile. Using surfactant dopants, the polyaniline can be made dispersible and hence useful for practical applications.
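A small numerical companion to the idealized formulas above: the sketch below computes repeat-unit molar masses for the three oxidation states from standard atomic weights, which is convenient when converting a degree of polymerization into an approximate chain mass:

```python
# Molar masses (g/mol) of the idealized repeat units given above,
# computed from standard atomic weights.
M = {'C': 12.011, 'H': 1.008, 'N': 14.007}

def molar_mass(c, h, n):
    return c * M['C'] + h * M['H'] + n * M['N']

units = {
    'leucoemeraldine (C6H4NH)':        molar_mass(6, 5, 1),
    'pernigraniline (C6H4N)':          molar_mass(6, 4, 1),
    'emeraldine ([C6H4NH]2[C6H4N]2)':  2 * molar_mass(6, 5, 1)
                                       + 2 * molar_mass(6, 4, 1),
}
for name, m in units.items():
    print(f"{name:34s} {m:7.2f} g/mol")
```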
Bulk synthesis of polyaniline nanofibers has been researched extensively. A multi-stage model for the formation of emeraldine base has been proposed. In the first stage of the reaction, the pernigraniline PS salt oxidation state is formed. In the second stage, pernigraniline is reduced to the emeraldine salt as the aniline monomer is oxidized to the radical cation. In the third stage, this radical cation couples with the ES salt. This process can be followed by light-scattering analysis, which allows the determination of the absolute molar mass. According to one study, a DP of 265 is reached in the first step, with the DP of the final polymer at 319. Approximately 19% of the final polymer is made up of the aniline radical cation, which is formed during the reaction. Polyaniline is typically produced in the form of long-chain polymer aggregates, surfactant (or dopant) stabilized nanoparticle dispersions, or stabilizer-free nanofiber dispersions, depending on the supplier and synthetic route. Surfactant- or dopant-stabilized polyaniline dispersions have been available for commercial sale since the late 1990s. Potential applications The major applications are printed circuit board manufacturing (final finishes, used in millions of m² every year), antistatic and ESD coatings, and corrosion protection. Polyaniline and its derivatives are also used as the precursor for the production of N-doped carbon materials through high-temperature heat treatment. Printed emeraldine polyaniline-based sensors have also gained much attention for widespread applications; devices are typically fabricated via screen, inkjet or aerosol jet printing. Organic polymers Polyamines Molecular electronics Organic semiconductors Polyelectrolytes Conductive polymers
Polyaniline
[ "Chemistry", "Materials_science" ]
1,230
[ "Organic polymers", "Molecular physics", "Semiconductor materials", "Molecular electronics", "Organic compounds", "Nanotechnology", "Conductive polymers", "Organic semiconductors" ]
733,259
https://en.wikipedia.org/wiki/Polyphenylene%20sulfide
Polyphenylene sulfide (PPS) is an organic polymer consisting of aromatic rings linked by sulfides. Synthetic fiber and textiles derived from this polymer resist chemical and thermal attack. PPS is used in filter fabric for coal boilers, papermaking felts, electrical insulation, film capacitors, specialty membranes, gaskets, and packings. PPS is the precursor to a conductive polymer of the semi-flexible rod polymer family. The PPS, which is otherwise insulating, can be converted to the semiconducting form by oxidation or use of dopants. Polyphenylene sulfide is an engineering plastic, commonly used today as a high-performance thermoplastic. PPS can be molded, extruded, or machined to tight tolerances. In its pure solid form, it may be opaque white to light tan in color. Maximum service temperature is . PPS has not been found to dissolve in any solvent at temperatures below approximately . An easy way to identify the compound is by the metallic sound it makes when struck. Manufacturers and trade names PPS is marketed under different brand names by different manufacturers. The major industry players are China Lumena New Materials, Solvay, Kureha, HDC Polyall, Celanese, DIC Corporation, Toray Industries, Zhejiang NHU Special Materials, SABIC, and Tosoh. Other manufacturers include Chengdu Letian Plastics, Lion Idemitsu Composites, and Initz (a joint venture of SK Chemicals and Teijin). The following are examples of brand names by manufacturer and PPS type: Tedur (Albis Plastic, linear type); DIC.PPS (DIC Corporation, linear and cross-linked); DURAFIDE (Polyplastics Co. Ltd, linear type); ECOTRAN (HDC Polyall, distributed and compounded via A. Schulman); Fortron (Ticona, linear type); Petcoal (Tōsō); Therma-Tech TT9200-5001 (PolyOne Corporation); Ryton (Solvay Specialty Polymers, linear and cross-linked); Torelina (Toray); NHU-PPS (Zhejiang NHU Company Ltd., linear type and cross-linked). Characteristics PPS is one of the most important high-temperature thermoplastic polymers because it exhibits a number of desirable properties. These properties include resistance to heat, acids, alkalies, mildew, bleaches, aging, sunlight, and abrasion. It absorbs only small amounts of solvents and resists dyeing. Production The Federal Trade Commission definition for sulfur fiber is "A manufactured fiber in which the fiber-forming substance is a long chain synthetic polysulfide in which at least 85% of the sulfide (–S–) linkages are attached directly to two (2) aromatic rings." The generic name for this synthetic fiber is Sulfar. The PPS (polyphenylene sulfide) polymer is formed by reaction of sodium sulfide with 1,4-dichlorobenzene: n ClC6H4Cl + n Na2S → (C6H4S)n + 2n NaCl The process for commercially producing this material was initially developed by Dr. H. Wayne Hill Jr. and James T. Edmonds at Phillips Petroleum. N-Methyl-2-pyrrolidone (NMP) is used as the reaction solvent because it is stable at the high temperatures required for the synthesis and it dissolves both the sulfiding agent and the oligomeric intermediates. Linear, high-molecular-weight PPS that is capable of being extruded into film or melt-spun into fiber was invented by Robert W. Campbell. The first U.S. commercial sulfur fiber was produced in 1983 by Phillips Fibers Corporation, a subsidiary of Phillips 66. Molecular electronics Organic polymers Organic semiconductors Synthetic fibers Thermoplastics Thioethers
Polyphenylene sulfide
[ "Chemistry", "Materials_science" ]
782
[ "Organic polymers", "Molecular physics", "Synthetic fibers", "Synthetic materials", "Semiconductor materials", "Molecular electronics", "Organic compounds", "Nanotechnology", "Organic semiconductors" ]
734,256
https://en.wikipedia.org/wiki/Molecular%20modelling
Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual unit (a molecular mechanics approach), or explicitly modelling protons and neutrons with their quarks, anti-quarks and gluons, and electrons with their photons (a quantum chemistry approach). Molecular mechanics Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and Van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics. This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters are collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using chemical theory, experimental reference data, and high-level quantum calculations. The method, termed energy minimization, is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower-energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, F = ma. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time.
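The sketch below shows what such an integration loop can look like for the simplest possible system: two particles interacting through a Lennard-Jones potential in reduced units, advanced with the velocity Verlet algorithm (one common choice among the integration algorithms mentioned above). It is a minimal illustration, not a production molecular dynamics code; the time step and starting separation are arbitrary choices:

```python
import numpy as np

# Lennard-Jones pair in reduced units: epsilon = sigma = mass = 1
def lj_force(r):
    """Force on particle 1 from particle 2, for separation vector r = x1 - x2.
    F = 24*(2/r^12 - 1/r^6) * r_vec / r^2 in reduced units."""
    d2 = np.dot(r, r)
    inv6 = 1.0 / d2**3
    return 24.0 * (2.0 * inv6**2 - inv6) * r / d2

def velocity_verlet(x1, x2, v1, v2, dt=1e-3, steps=1000):
    """Integrate Newton's second law with the velocity Verlet scheme."""
    f = lj_force(x1 - x2)
    for _ in range(steps):
        v1 += 0.5 * dt * f          # half kick
        v2 -= 0.5 * dt * f
        x1 += dt * v1               # drift
        x2 += dt * v2
        f = lj_force(x1 - x2)       # force at new positions
        v1 += 0.5 * dt * f          # second half kick
        v2 -= 0.5 * dt * f
    return x1, x2, v1, v2

x1, x2 = np.array([0.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0])
print(velocity_verlet(x1, x2, np.zeros(3), np.zeros(3)))
```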
The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful for obtaining a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects. Variables Molecules can be modelled either in vacuum or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations. Coordinate representations Most force fields are distance-dependent, which makes Cartesian coordinates the most convenient representation. Yet the comparatively rigid nature of bonds, which occur between specific atoms and in essence define what is meant by the designation molecule, makes an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation; conversely, a simple displacement of an atom in Cartesian space may not be a straight-line trajectory due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimizing programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and, in long-chain molecules, introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. Currently, the fastest and most accurate torsion-to-Cartesian conversion is the Natural Extension Reference Frame (NERF) method. Applications Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of molecular force field models are today readily available in databases. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes. Bioinformatics Molecular biology Computational chemistry
Molecular modelling
[ "Chemistry", "Engineering", "Biology" ]
1,132
[ "Biological engineering", "Molecular physics", "Bioinformatics", "Theoretical chemistry", "Molecular modelling", "Computational chemistry", "Molecular biology", "Biochemistry" ]
20,501,535
https://en.wikipedia.org/wiki/Dislocation%20creep
Dislocation creep is a deformation mechanism in crystalline materials. Dislocation creep involves the movement of dislocations through the crystal lattice of the material, in contrast to diffusion creep, in which diffusion (of vacancies) is the dominant creep mechanism. It causes plastic deformation of the individual crystals, and thus the material itself. Dislocation creep is highly sensitive to the differential stress on the material. At low temperatures, it is the dominant deformation mechanism in most crystalline materials. Some of the mechanisms described below are speculative, and either cannot be or have not been verified by experimental microstructural observation. Principles Dislocations in crystals Dislocation creep takes place due to the movement of dislocations through a crystal lattice. Each time a dislocation moves through a crystal, part of the crystal shifts by one lattice point along a plane, relative to the rest of the crystal. The plane that separates the shifted and unshifted regions along which the movement takes place is the slip plane. To allow for this movement, all ionic bonds along the plane must be broken. If all bonds were broken at once, this would require so much energy that dislocation creep would only be possible in theory. When it is assumed that the movement takes place step by step, the breaking of bonds is immediately followed by the creation of new ones and the energy required is much lower. Calculations of molecular dynamics and analysis of deformed materials have shown that dislocation creep can be an important factor in deformation processes. By moving a dislocation step by step through a crystal lattice, a linear lattice defect is created between parts of the crystal lattice. Two types of dislocations exist: edge and screw dislocations. Edge dislocations form the edge of an extra layer of atoms inside the crystal lattice. Screw dislocations form a line along which the crystal lattice jumps one lattice point. In both cases the dislocation line forms a linear defect through the crystal lattice, but the crystal can still be perfect on all sides of the line. The length of the displacement in the crystal caused by the movement of the dislocation is called the Burgers vector. It equals the distance between two atoms or ions in the crystal lattice. Therefore, each material has its own characteristic Burgers vectors for each glide plane. Glide planes in crystals Both edge and screw dislocations move (slip) in directions parallel to their Burgers vector. Edge dislocations move in directions perpendicular to their dislocation lines and screw dislocations move in directions parallel to their dislocation lines. This causes a part of the crystal to shift relative to its other parts. Meanwhile, the dislocation itself moves further on along a glide plane. The crystal system of the material (mineral or metal) determines how many glide planes are possible, and in which orientations. The orientation of the differential stress determines which glide planes are active and which are not. The Von Mises criterion states that to deform a material, movement along at least five different glide planes is required. A dislocation will not always be a straight line and can thus move along more than one glide plane. Where the orientation of the dislocation line changes, a screw dislocation can continue as an edge dislocation and vice versa. 
Origin of dislocations When a crystalline material is put under differential stress, dislocations form at the grain boundaries and begin moving through the crystal. New dislocations can also form from Frank–Read sources. These form when a dislocation is stopped in two places. The part of the dislocation in between will move forward, causing the dislocation line to curve. This curving can continue until the dislocation curves over itself to form a circle. In the centre of the circle, the source will produce a new dislocation, and this process will produce a sequence of concentric dislocations on top of each other. Frank–Read sources are also created when screw dislocations double cross-slip (change slip planes twice), as the jogs in the dislocation line pin the dislocation in the third plane. Dislocation movement Dislocation glide A dislocation can ideally move through a crystal until it reaches a grain boundary (the boundary between two crystals). When it reaches a grain boundary, the dislocation will disappear. In that case the whole crystal is sheared a little. There are, however, different ways in which the movement of a dislocation can be slowed or stopped. When a dislocation moves along several different glide planes, it can have different velocities in these different planes, due to the anisotropy of some materials. Dislocations can also encounter other defects in the crystal on their way, such as other dislocations or point defects. In such cases a part of the dislocation could slow down or even stop moving altogether. In alloy design, this effect is used to a great extent. Adding a dissimilar atom or phase, such as a small amount of carbon to iron, hardens the material, meaning deformation will be more difficult (the material becomes stronger). The carbon atoms act as interstitial particles (point defects) in the crystal lattice of the iron, and dislocations will not be able to move as easily as before. Dislocation climb and recovery Dislocations are imperfections in a crystal lattice that, from a thermodynamic point of view, increase the amount of free energy in the system. Therefore, parts of a crystal that have more dislocations will be relatively unstable. By recrystallisation, the crystal can heal itself. Recovery of the crystal structure can also take place when two dislocations with opposite displacement meet each other. A dislocation that has been brought to a halt by an obstacle (a point defect) can overcome the obstacle and start moving again by a process called dislocation climb. For dislocation climb to occur, vacancies have to be able to move through the crystal. When a vacancy arrives at the place where the dislocation is stuck, it can cause the dislocation to climb out of its glide plane, after which the point defect is no longer in its way. Dislocation climb is therefore dependent on the velocity of vacancy diffusion. As with all diffusion processes, this is highly dependent on the temperature. At higher temperatures dislocations will more easily be able to move around obstacles. For this reason, many hardened materials become exponentially weaker at higher temperatures. To reduce the free energy in the system, dislocations tend to concentrate themselves in low-energy regions, so other regions will be free of dislocations. This leads to the formation of 'dislocation walls', or planes in a crystal where dislocations localize. Edge dislocations form tilt walls, while screw dislocations form twist walls.
In both cases, the increasing localisation of dislocations in the wall will increase the angle between the orientation of the crystal lattice on both sides of the wall. This leads to the formation of subgrains. The process is called subgrain rotation (SGR) and can eventually lead to the formation of new grains when the dislocation wall becomes a new grain boundary. Kinetics In general, the power law for stage 2 creep is: ε̇ = A σⁿ exp(−Q/RT), where n is the stress exponent and Q is the creep activation energy, R is the ideal gas constant, T is temperature, and A is a mechanism-dependent constant. The exponent n describes the degree of stress-dependence the creep mechanism exhibits. Diffusional creep exhibits an n of 1 to 2, climb-controlled creep an n of 3 to 5, and glide-controlled creep an n of 5 to 7. Dislocation glide The rate of dislocation glide creep can be determined using an Arrhenius equation for the rate of dislocation motion. The forward rate can be written as: rate(forward) ∝ exp(−(U − W)/kT), where U is the energy of the barrier and W is the work provided by the applied stress and from thermal energy which helps the dislocation cross the barrier; k is the Boltzmann constant and T is the temperature of the system. Similarly, the backward rate is given by: rate(backward) ∝ exp(−(U + W)/kT). The total creep rate is as follows: ε̇ ∝ rate(forward) − rate(backward). Thus, the rate of creep due to dislocation glide is: ε̇ ∝ exp(−U/kT)[exp(W/kT) − exp(−W/kT)]. At low temperatures, this expression becomes: ε̇ ∝ exp(−(U − W)/kT). The energy supplied to the dislocation is: W = σbA, where σ is the applied stress, b is the Burgers vector, and A is the area of the slip plane. Thus, the overall expression for the rate of dislocation glide can be rewritten as: ε̇ ∝ exp(−U/kT)[exp(σbA/kT) − exp(−σbA/kT)]. The numerator σbA is the energy coming from the stress and the denominator kT is the thermal energy. This expression is derived from a model in which plastic strain does not devolve from atomic diffusion. The creep rate is defined by the intrinsic activation energy (U) and the ratio of stress-assisted energy (σbA) to thermal energy (kT). The creep rate increases as this ratio increases, or as stress-assisted energy increases more than thermal energy. All creep rate expressions have similar terms, but the strength of the dependency (i.e. the exponent) on internal activation energy or stress-assisted energy varies with the creep mechanism. Creep by Dislocation and Diffusional Flow Creep mechanisms which involve both dislocation creep and diffusional creep include solute-drag creep, dislocation climb-glide creep, and Harper–Dorn creep. Solute-Drag creep Solute-drag creep is characterized by serrated flow and is typically observed in metallic alloys that do not exhibit short-time creep behavior – the creep rate of these materials increases during transient creep before reaching steady-state. Similar to solid-solution strengthening, the size misfit parameter between solute atoms and dislocations results in the restriction of dislocation motion. At low temperatures, the solute atoms do not have enough energy to move. However, at higher temperatures, the solute atoms become mobile and contribute to creep. Solute drag creep occurs when a dislocation breaks away from a solute atom, followed by the solute atom "catching up" to the dislocation. The dislocations are originally pinned into place by solute atoms. After some initial energy input, the dislocation breaks away and begins to move with velocity v. This strain rate is: ε̇ = ρbv̄, where ρ is the dislocation density, b is the Burgers vector, and v̄ is the average velocity of the dislocation.
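Before continuing with the drag analysis, here is a short numerical aside on the two rate expressions just given. Every parameter value below (prefactor, activation energy, temperature, dislocation density, and velocity) is a hypothetical order-of-magnitude choice, not data for any particular material:

```python
import math

R = 8.314  # ideal gas constant, J/(mol K)

def power_law_creep(sigma_MPa, n, Q_kJ=300.0, A=1e-4, T=900.0):
    """Stage-2 power-law creep rate (strain/s) with illustrative A and Q."""
    return A * sigma_MPa**n * math.exp(-Q_kJ * 1e3 / (R * T))

for n, label in [(1.5, "diffusional"), (4, "climb-controlled"),
                 (6, "glide-controlled")]:
    ratio = power_law_creep(100, n) / power_law_creep(50, n)
    print(f"{label:16s} n={n}: doubling stress multiplies the rate by {ratio:5.1f}")

# Strain rate from mobile dislocation content (the relation just given)
rho, b, v_avg = 1e12, 2.5e-10, 1e-6   # 1/m^2, m, m/s: typical magnitudes
print(f"strain rate = rho*b*v = {rho * b * v_avg:.1e} /s")
```

The 2ⁿ scaling makes the practical meaning of the stress exponent obvious: a glide-controlled mechanism responds far more sharply to a doubling of stress than a diffusional one.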
When the dislocation velocity is not too high (or the creep rate is not too high), the solute atom can follow the dislocations and thus introduce "drag" on the dislocation motion. A high diffusivity decreases the drag, and greater misfit parameters lead to greater binding energies between the solute atom and the dislocation, resulting in an increase in drag. Lastly, increasing the solute concentration increases the drag effect. The velocity can thus be described as follows: v̄ ∝ Dσ/(ε_b²c), where ε_b is the size misfit parameter and c is the concentration of solute. As stress is applied, the dislocation velocity increases until the dislocation breaks away from the solute atoms. Then, the stress begins to decrease as the dislocation is breaking away, so the dislocation velocity decreases. This permits solute atoms to catch up to the dislocation, thereby increasing the stress once more. The stress then increases, and the cycle begins again, resulting in the serrations observed in the stress–strain diagram. This phenomenon is the Portevin–Le Chatelier effect and is only observed over limited strain-rate conditions. If the strain rate is high enough, the flow stress is greater than the breakaway stress, and the dislocation continues to move and the solute atom cannot "catch up"; thus, serrated flow is not observed. It is also known that ρ ∝ σ², which implies dislocation multiplication (an increase in stress increases the dislocation density). Thus, the solute drag creep rate can be rewritten as: ε̇ ∝ D(T)σ³/(ε_b²c), where it is noted that the diffusion coefficient D is a function of temperature. This expression resembles the power law for creep above, with exponent n = 3. Dislocation climb-glide creep Dislocation climb-glide creep is observed in materials that exhibit a higher initial creep rate than the steady-state creep rate. Dislocations glide along a slip plane until they reach an obstacle. The applied stress is not enough for the dislocation to overcome the obstacle, but it is enough for the dislocation to climb to a parallel slip plane via diffusion. This is conceptually similar to a high-temperature cross-slip, where dislocations circumvent obstacles via climb at low temperatures. The dislocation motion involves climb and glide, thus the name climb-glide creep. The rate is determined by the slower (lower-velocity) of the climb and glide processes, thus the creep rate is often determined by the climb rate. Starting with the general strain rate form: ε̇ = ρbv_g, where ρ is the dislocation density and v_g is the dislocation glide velocity. The dislocation glide velocity is higher than the dislocation climb velocity, v_c. Climb and glide are related via this expression: v_g = (λ/h)v_c, where λ is the distance that dislocations glide in the slip plane and h is the separation between parallel slip planes. Considering a model in which dislocations are emitted by a source, to maintain the constant microstructure evolution from Stage I to Stage II creep, each source is associated with a constant number of dislocation loops that it has emitted. Thus, dislocations may only continue to be emitted if some are annihilated. Annihilation is possible via climb, which results in mass transfer between sides of the loop (i.e. either removal of vacancies, resulting in the addition of atoms, or vice versa). Assuming there are N dislocation sources per unit volume, the dislocation density can be rewritten in terms of the average loop diameter d̄, from which the climb-glide creep rate follows. As the microstructure must remain fixed for the transition between these stages, N remains fixed.
Thus, N can be multiplied by the volume per source and remain constant, so the product Nd̄³ is fixed. The expression for the climb-glide creep rate then reduces to a form proportional to Nd̄³ and the climb velocity. As dislocation climb is driven by stress but accomplished by diffusion, we can write v_c ∝ (D/b)(σΩ/kT), where D is the lattice diffusion constant and Ω is the atomic volume; D/b² is the corresponding normalized (frequency) form of the diffusivity. Thus, the dislocation climb-glide creep rate can be expressed as follows: ε̇ = A_CG Nd̄³(D/b²)(σΩ/kT), where A_CG is a constant that encompasses details of the loop geometry. At higher stress levels, a finer microstructure is observed, which correlates with the inverse relationship between d̄ and σ. If N is independent of stress, which has not been shown yet, the exponent for this dislocation creep is 4.5. Harper–Dorn creep Harper–Dorn creep is a climb-controlled creep mechanism. At low stresses, materials with a low initial dislocation density may creep by dislocation climb alone. Harper–Dorn creep is characterized by a steady-state strain rate that is linear in stress at constant temperature and independent of grain size, and by activation energies that are typically close to those expected for lattice diffusion. The Harper–Dorn creep rate can be described as follows: ε̇ = ρ(DGb³/kT)(σ/G), where ε̇ is the creep rate, ρ is the dislocation density, D is the material diffusivity, G is the shear modulus, b is the Burgers vector, k is the Boltzmann constant, T is the temperature, and σ is the applied stress. In Harper–Dorn creep, the dislocation density is constant. See also Creep Diffusion creep Dislocations Continuum mechanics Crystallographic defects
Dislocation creep
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,291
[ "Continuum mechanics", "Crystallographic defects", "Classical mechanics", "Materials science", "Crystallography", "Materials degradation" ]
20,510,073
https://en.wikipedia.org/wiki/Gravity-vacuum%20transit
Gravity-vacuum transit (GVT) was a form of transportation developed by American inventor Lawrence Edwards in the early 1960s. Origin The origin of this technology traces to Alfred Ely Beach in 1865. When the U.S. Department of Defense charged all contractors to contemplate what would sustain them if defense funding should taper off, Lockheed management called for ideas from the troops. Over the long weekend following the assassination of U.S. President John F. Kennedy, Edwards sorted through industries and product lines, and focused on passenger railroads, which had lost their former popularity due to the speed of airplanes and the convenience of automobiles. He wondered if trains could travel at airplane speed and converge at city centers rather than at airports 20 miles away. Clearly, such speed demands a nearly straight path, avoiding the jumble of city streets and buildings, and even the subways and utilities immediately underground. But just a little deeper, near-straight tunnels would be practical, even passing beneath the rivers and bays alongside many major cities. This pointed to a design with each tunnel enclosing a pair of steel tubes for two-way traffic, each tube having been pumped out until the air pressure is below that experienced by modern passenger planes. Drawing on the wisdom of technologists and urban planners, as well as lengthy visits to major libraries, Edwards progressively synthesized his system, wherein trains nearly ten feet in diameter and carrying 500–1500 passengers would speed up to 250 mph (urban) and 400 mph (regional) through the tubes, protected from the weather and other hazards. The Regional Plan Association offered tips and encouragement, visualizing three major suburban lines passing through Manhattan, New York. It also published a map for Boston to Washington, D.C., with the Manhattan-to-Washington portion taking only 75 minutes, even with over 10 intermediate stops. Specifics The key to this dramatic performance, validated in peer-reviewed professional papers, is the combined effect of vacuum and gravity. Leaving a station with full atmospheric pressure behind it but near-vacuum ahead, the train is subject to 75 tons of thrust, far exceeding what a locomotive can do at moderate speeds. Approaching the next station, the train is decelerated by a similar pressure differential, but in reverse. Passengers experience swift but acceptable acceleration/deceleration, provided designers are careful not to make the steel cars too light. There is no propulsive equipment on the train at all; instead, there are massive (but commercial-scale) vacuum pumps steadily pulling air out of the tubes and exhausting it outdoors. And their task is eased by the fact that the amount of air admitted to the tube to accelerate a train is only a little more than that pushed back into the atmosphere as the vehicle comes to a stop. The pumps make up the difference, and can do that while running at a constant rate. Stanford's Dr. Holt Ashley, while a national science executive in 1974, was asked about GVT and stated that it was "the most energy-efficient form of transportation we ever saw." Unique features GVT has a powerful advantage not shared by airplanes or any form of transit that moves horizontally. Rolling down a moderate slope, for example 20%, the train gains robust acceleration that the passengers "don't feel at all". This can be superimposed on the pneumatic acceleration discussed above.
Then, with the maximum tunnel depth limited to about 1000 feet, gravity alone can add 100 mph to the train's speed at the midpoint of a three-mile segment, for an elapsed time of 1.5 minutes stop-to-stop, without exceeding customary passenger-comfort limits. The essential feature of this phenomenon was recognized by a British engineer, Kearney, in about 1910; he wanted to apply it to streetcars but could not convince his peers, and it was forgotten. Edwards read of it in the New York public library, adapted it for vastly higher speeds, and improvised ways to convince the skeptics. This unique feature was further validated in a contract study by the Johns Hopkins University Applied Physics Laboratory and others. End of the line Further study and lab tests of GVT were suggested, but these were not funded, and were a casualty of a general cutback in Federal funding for most forms of advanced rail transit in 1969. Edwards' company, Tube Transit Inc., closed its doors and he went on to pursue an aerial transit system, Project 21 Monobeam, which was first conceived as a local system to feed passengers to GVT. Transport by mode Hypothetical technology
Gravity-vacuum transit
[ "Physics" ]
920
[ "Physical systems", "Transport", "Transport by mode" ]
20,510,749
https://en.wikipedia.org/wiki/Partial%20impact%20theory
Partial impact theory is an astronomical theory describing the partial collision of two stars and the temporary creation of a bright third star as a consequence. The theory was explained in Alexander William Bickerton's book The Romance of the Heavens, published in 1901. It is not part of contemporary astrophysics. In The Romance of the Heavens, Bickerton states that a slight "grazing" collision between stars would be much more common than a head-on impact between stars. He therefore believed this phenomenon needed to be explained to account for the bright new stars that would appear in the night sky and disappear within a year or even days. The theory explains that when the two stellar bodies graze each other, the grazed parts will shear off from the main body of each star. Their velocities will cancel each other out, transforming this energy into heat, while the main mass of each star will continue moving as it did before the collision. The third body, created from the two sheared parts of the stars, will form between the two original stars. The temporary star expands after the impact, displaying an intense increase in light; after all molecular reactions have taken place, the light is replaced by a hollow shell of gas or possibly a planetary nebula, and eventually dissipates into space. Bickerton explains this bright temporary star by saying that it does not disappear due to cooling, but because it is too hot to hold together. The temperature of the third star is not dependent on the amount of contact between the two original stars, but rather on the chemical makeup of the stars and their velocities going into the collision. The stability of the third body depends on the size of the contact between the original stars: if the contact was small, the newly created third body will have a smaller mass and will find it harder to attract molecules to it, whereas a larger mass would make it more difficult for molecules to escape its larger gravitational pull. Physical cosmology Astrophysics theories
Partial impact theory
[ "Physics", "Astronomy" ]
397
[ "Astrophysics theories", "Theoretical physics", "Astrophysics", "Physical cosmology", "Astronomical sub-disciplines" ]
20,511,149
https://en.wikipedia.org/wiki/Entanglement%20distillation
Entanglement distillation (also called entanglement purification) is the transformation of N copies of an arbitrary entangled state into some number of approximately pure Bell pairs, using only local operations and classical communication. Entanglement distillation can overcome the degenerative influence of noisy quantum channels by transforming previously shared, less-entangled pairs into a smaller number of maximally-entangled pairs. History The limits for entanglement dilution and distillation are due to C. H. Bennett, H. Bernstein, S. Popescu, and B. Schumacher, who presented the first distillation protocols for pure states in 1996; entanglement distillation protocols for mixed states were introduced by Bennett, Brassard, Popescu, Schumacher, Smolin and Wootters the same year. Bennett, DiVincenzo, Smolin and Wootters established the connection to quantum error-correction in a ground-breaking paper published in August 1996, also in the journal Physical Review, which has stimulated a great deal of subsequent research. Motivation Suppose that two parties, Alice and Bob, would like to communicate classical information over a noisy quantum channel. Either classical or quantum information can be transmitted over a quantum channel by encoding the information in a quantum state. With this knowledge, Alice encodes the classical information that she intends to send to Bob in a (quantum) product state, as a tensor product of reduced density matrices, where each density matrix is diagonal and can only be used as a one-time input for a particular channel. The fidelity of the noisy quantum channel is a measure of how closely the output of a quantum channel resembles the input, and is therefore a measure of how well a quantum channel preserves information. If a pure state |ψ⟩ sent into a quantum channel emerges as the state represented by density matrix ρ, the fidelity of transmission is defined as F = ⟨ψ|ρ|ψ⟩. The problem that Alice and Bob now face is that quantum communication over large distances depends upon successful distribution of highly entangled quantum states, and due to unavoidable noise in quantum communication channels, the quality of entangled states generally decreases exponentially with channel length as a function of the fidelity of the channel. Entanglement distillation addresses this problem of maintaining a high degree of entanglement between distributed quantum states by transforming N copies of an arbitrary entangled state into a smaller number of approximately pure Bell pairs, using only local operations and classical communication. The objective is to share strongly correlated qubits between distant parties (Alice and Bob) in order to allow reliable quantum teleportation or quantum cryptography. Entanglement entropy Entanglement entropy quantifies entanglement. Several different definitions have been proposed. Von Neumann Entropy Von Neumann entropy is a measure of the "quantum uncertainty" or "quantum randomness" associated with a quantum state, analogous to the concept of Shannon entropy in classical information theory. Von Neumann entropy measures how "mixed" or "pure" a quantum state is. Pure states (e.g., states that are entirely definite, like |0⟩) have a von Neumann entropy of 0. In pure states, there is no uncertainty about the system's state. Mixed states (e.g., probabilistic mixtures of pure states) have a positive entropy value, reflecting an inherent uncertainty in the system's state.
For a given quantum system, the von Neumann entropy is defined as: S(ρ) = −Tr(ρ ln ρ), where ρ is the density matrix representing the state of the quantum system and Tr denotes the trace operation, summing over the diagonal elements of a matrix. For a maximally mixed state (where all states are equally probable), the von Neumann entropy is maximal. Von Neumann entropy is invariant under unitary transformations, meaning that if ρ is transformed by a unitary matrix U, then S(UρU†) = S(ρ). It is widely used in quantum information theory to study entanglement, quantum thermodynamics, and the coherence of quantum systems. Rényi entanglement entropy Rényi entropy is a generalization of the various concepts of entropy, depending on a parameter α, which adjusts the sensitivity of the entropy measure to different probabilities. For a quantum state represented by a density matrix ρ, the Rényi entropy of order α is defined as: S_α(ρ) = (1/(1 − α)) ln Tr(ρ^α), where Tr(ρ^α) is the trace of ρ raised to the power α. Rényi entropy is a non-increasing function of α, meaning that higher values of α emphasize the more probable outcomes more heavily, leading to a lower entropy value. Different values of α allow Rényi entropy to highlight different aspects of the probability distribution (or quantum state), with higher α emphasizing high-probability events. Rényi entropy is often used in contexts such as fractal dimensions, signal processing, and statistical mechanics, where a flexible measure of uncertainty or diversity is useful. As an example, a two-qubit system can be written as a superposition of the possible computational basis qubit states |00⟩, |01⟩, |10⟩, |11⟩, each with an associated complex coefficient c_ij: |ψ⟩ = c₀₀|00⟩ + c₀₁|01⟩ + c₁₀|10⟩ + c₁₁|11⟩. As in the case of a single qubit, the probability of measuring a particular computational basis state is the square of the modulus of its amplitude, or associated coefficient, |c_ij|², subject to the normalization condition Σ|c_ij|² = 1. The normalization condition guarantees that the sum of the probabilities adds up to 1, meaning that upon measurement, one of the states will be observed. The Bell state is a particularly important example of a two-qubit state: |Φ⁺⟩ = (|00⟩ + |11⟩)/√2. Bell states possess the property that measurement outcomes on the two qubits are correlated. As can be seen from the expression above, the two possible measurement outcomes for each qubit are zero and one, both with probability of 50%. As a result, a measurement of the second qubit always gives the same result as the measurement of the first qubit. Bell states can be used to quantify entanglement. Let m be the number of high-fidelity copies of a Bell state that can be produced from n copies of a given pure state using local operations and classical communication (LOCC). For a large number of copies, the amount of entanglement present in the pure state can then be defined as the limiting ratio m/n, called the distillable entanglement of a particular state, which gives a quantified measure of the amount of entanglement present in a given system. The process of entanglement distillation aims to saturate this limiting ratio. The number of copies of a maximally entangled state that may be obtained per copy of a pure state is equal to the von Neumann entropy of entanglement of the state, which is an extension of the concept of classical entropy for quantum systems. Mathematically, for a given density matrix ρ, the von Neumann entropy is S(ρ) = −Tr(ρ ln ρ).
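The two entropies just defined are straightforward to evaluate numerically from the eigenvalues of a density matrix. The sketch below is a minimal implementation (with the Rényi order restricted to integer values so that a plain matrix power suffices); the function names are ours, not a standard library API:

```python
import numpy as np

def von_neumann_entropy(rho, base=2):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 -> 0
    return float(-(evals * np.log(evals)).sum() / np.log(base))

def renyi_entropy(rho, alpha, base=2):
    """S_alpha(rho) = ln Tr(rho^alpha) / (1 - alpha), for integer alpha."""
    t = np.trace(np.linalg.matrix_power(rho, alpha)).real
    return float(np.log(t) / ((1 - alpha) * np.log(base)))

print(von_neumann_entropy(np.diag([1.0, 0.0])))  # pure |0><0| : 0.0
print(von_neumann_entropy(np.eye(2) / 2))        # maximally mixed qubit: 1.0
print(renyi_entropy(np.eye(2) / 2, 2))           # Renyi-2, same state: 1.0
```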
Entanglement can then be quantified as the entropy of entanglement, which is the von Neumann entropy of either ρ_A or ρ_B: E = S(ρ_A) = S(ρ_B) = −Tr(ρ_A ln ρ_A). This ranges from 0 for a product state to ln 2 for a maximally entangled state (if the ln is replaced by log₂, then a maximally entangled state has a value of 1). Entanglement concentration Pure states Given n particles in the singlet state shared between Alice and Bob, local actions and classical communication will suffice to prepare m arbitrarily good copies of a partially entangled state |φ⟩ with a yield m/n approaching 1/E for large n. Let an entangled state |φ⟩ have a Schmidt decomposition: |φ⟩ = Σ_x √(p(x)) |x_A⟩|x_B⟩, where the coefficients p(x) form a probability distribution, and thus are positive valued and sum to unity. The tensor product of this state is then |φ⟩^⊗m = Σ_{x₁…x_m} √(p(x₁)⋯p(x_m)) |x₁…x_m⟩_A |x₁…x_m⟩_B. Now, omitting all terms which are not part of any sequence which is likely to occur with high probability, known as the typical set T, the new state is |φ′_m⟩ ∝ Σ_{(x₁…x_m)∈T} √(p(x₁)⋯p(x_m)) |x₁…x_m⟩_A |x₁…x_m⟩_B, and renormalizing gives the state |φ_m⟩. Then the fidelity is F = |⟨φ_m|φ⟩^⊗m|². Suppose that Alice and Bob are in possession of m copies of |φ⟩. Alice can perform a measurement onto the typical-set subset of her states, converting the state to |φ_m⟩ with high fidelity. The theorem of typical sequences then shows us that the probability that a given sequence is part of the typical set may be made arbitrarily close to 1 for sufficiently large m, and therefore the Schmidt coefficients of the renormalized state will be at most a small factor larger. Alice and Bob can now obtain a smaller set of n Bell states by performing LOCC on the state, with which they can overcome the noise of a quantum channel to communicate successfully. Mixed states Many techniques have been developed for doing entanglement distillation for mixed states, giving lower bounds on the value of the distillable entanglement for specific classes of states. One common method involves Alice not using the noisy channel to transmit source states directly but instead preparing a large number of Bell states, sending half of each Bell pair to Bob. The result of transmission through the noisy channel is to create the mixed entangled state ρ, so that Alice and Bob end up sharing copies of ρ. Alice and Bob then perform entanglement distillation, producing almost perfectly entangled states from the mixed entangled states by performing local unitary operations and measurements on the shared entangled pairs, coordinating their actions through classical messages, and sacrificing some of the entangled pairs to increase the purity of the remaining ones. Alice can now prepare a qubit state and teleport it to Bob using the Bell pairs which they share with high fidelity. What Alice and Bob have then effectively accomplished is having simulated a noiseless quantum channel using a noisy one, with the aid of local actions and classical communication. Let M be a general mixed state of two spin-1/2 particles which could have resulted from the transmission of an initially pure singlet state through a noisy channel between Alice and Bob, which will be used to distill some pure entanglement. The fidelity of M, F = ⟨Ψ⁻|M|Ψ⁻⟩, is a convenient expression of its purity relative to a perfect singlet. Suppose that M is already a pure state |φ⟩ of two particles. The entanglement of |φ⟩, as already established, is the von Neumann entropy E = S(ρ_A) = S(ρ_B), where ρ_A = Tr_B(|φ⟩⟨φ|) and, likewise, ρ_B = Tr_A(|φ⟩⟨φ|) represent the reduced density matrices for either particle.
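Before the protocol itself, the entanglement measure just introduced is easy to compute for two-qubit pure states: trace out one party, then take the entropy of what remains. A minimal sketch (using log base 2, so a Bell pair scores exactly 1 ebit):

```python
import numpy as np

def entanglement_entropy(state_4vec):
    """Entropy of entanglement E = S(rho_A) for a two-qubit pure state,
    given as amplitudes over |00>, |01>, |10>, |11>."""
    psi = np.asarray(state_4vec, dtype=complex).reshape(2, 2)
    rho_A = psi @ psi.conj().T            # partial trace over Bob's qubit
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

bell = [1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)]
print(entanglement_entropy(bell))           # 1.0 ebit (maximally entangled)
print(entanglement_entropy([1, 0, 0, 0]))   # 0.0 (product state)
p = 0.9  # partially entangled: sqrt(p)|00> + sqrt(1-p)|11>
print(entanglement_entropy([np.sqrt(p), 0, 0, np.sqrt(1 - p)]))  # ~0.469
```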
The following protocol is then used:

Performing a random bilateral rotation on each shared pair, choosing a random SU(2) rotation independently for each pair and applying it locally to both members of the pair, transforms the initial general two-spin mixed state $M$ into a rotationally symmetric mixture of the singlet state $|\Psi^{-}\rangle$ and the three triplet states $|\Psi^{+}\rangle$, $|\Phi^{+}\rangle$ and $|\Phi^{-}\rangle$: the Werner state

$W_F = F\,|\Psi^{-}\rangle\langle\Psi^{-}| + \frac{1-F}{3}\left(|\Psi^{+}\rangle\langle\Psi^{+}| + |\Phi^{+}\rangle\langle\Phi^{+}| + |\Phi^{-}\rangle\langle\Phi^{-}|\right)$

The Werner state has the same purity $F$ as the initial mixed state $M$ from which it was derived, due to the singlet's invariance under bilateral rotations.

Each of the two pairs is then acted on by a unilateral rotation, which has the effect of converting them from mainly-singlet Werner states to states with a large component $F$ of $|\Phi^{+}\rangle$, while the components of the other three Bell states are equal. The two impure states are then acted on by a bilateral XOR, and afterwards the target pair is locally measured along the z axis. The unmeasured source pair is kept if the target pair's spins come out parallel, as in the case of both inputs being true $|\Phi^{+}\rangle$ states, and it is discarded otherwise. If the source pair has not been discarded, it is converted back to a predominantly $|\Psi^{-}\rangle$ state by a unilateral rotation, and made rotationally symmetric by a random bilateral rotation.

Repeating the protocol outlined above will distill Werner states whose purity may be chosen to be arbitrarily high from a collection of input mixed states $M$ of purity $F > 1/2$, but with a yield tending to zero in the limit of perfect output purity. By performing another bilateral XOR operation, this time of a variable number of source pairs, as opposed to 1, into each target pair prior to measuring it, the yield can be made to approach a positive limit as the output purity tends to 1. This method can then be combined with others to obtain an even higher yield.

Distillation Protocols

BBPSSW Protocol

The BBPSSW protocol is one of the simplest protocols that uses CNOT (controlled-NOT) gates and measurements to probabilistically increase the entanglement of Bell states (standard maximally entangled two-qubit states). Here is a step-by-step example:

Setup: Suppose Alice and Bob share many copies of a noisy Bell state, represented by the density matrix:

$\rho = F\,|\Phi^{+}\rangle\langle\Phi^{+}| + \frac{1-F}{3}\left(|\Phi^{-}\rangle\langle\Phi^{-}| + |\Psi^{+}\rangle\langle\Psi^{+}| + |\Psi^{-}\rangle\langle\Psi^{-}|\right)$

where $F > 1/2$, and $|\Phi^{-}\rangle$, $|\Psi^{+}\rangle$, $|\Psi^{-}\rangle$ are the other Bell states: $|\Phi^{-}\rangle = \frac{1}{\sqrt{2}}(|00\rangle - |11\rangle)$, $|\Psi^{+}\rangle = \frac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$, $|\Psi^{-}\rangle = \frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)$. The parameter $F$ represents the fidelity of $\rho$ with respect to $|\Phi^{+}\rangle$, and the goal is to increase $F$ closer to 1 through distillation.

Protocol Steps:

CNOT Operation: Alice and Bob each take two qubits, one from each of two shared pairs, and apply a CNOT gate between them, with one qubit as the control and the other as the target: $|a\rangle|b\rangle \mapsto |a\rangle|a \oplus b\rangle$, where $\oplus$ is addition modulo 2. This step correlates the two copies.

Measurement and Postselection: Alice and Bob each measure the target qubits in the $z$-basis (measuring 0 or 1). If both measure the same output (i.e., both 0 or both 1), they keep the control qubits and discard the targets; otherwise, they discard both pairs. This postselection step succeeds only with some probability, but it increases the fidelity of the remaining entangled pair.

Example Calculation: After one round, if the initial fidelity of $\rho$ was 0.6, the protocol increases it to about 0.62, with a success probability of about 0.61. If multiple rounds are performed on the surviving pairs, $F$ can approach 1, producing a near-perfect $|\Phi^{+}\rangle$ state.
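The round-by-round behavior can be sketched numerically. The recurrence below is the standard published single-round fidelity map for BBPSSW acting on Werner-state inputs; the helper name is illustrative:

```python
def bbpssw_round(F):
    """One BBPSSW round on two Werner pairs of fidelity F (valid for F > 1/2).

    Returns (new_fidelity, success_probability).
    """
    bad = (1.0 - F) / 3.0                     # weight of each unwanted Bell state
    p_success = F**2 + 2*F*bad + 5*bad**2     # probability both measurements agree
    F_new = (F**2 + bad**2) / p_success
    return F_new, p_success

F = 0.6
for step in range(1, 6):
    F, p = bbpssw_round(F)
    print(f"round {step}: fidelity {F:.4f}, success probability {p:.4f}")
```

Iterating the map shows the characteristic trade-off: the fidelity climbs monotonically toward 1, while the surviving fraction of pairs shrinks with every round, which is why the yield of the basic protocol tends to zero.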
DEJMPS Protocol

The DEJMPS protocol is an optimized version of BBPSSW and works especially well for Bell-diagonal states.

Setup: Assume the initial state is Bell diagonal:

$\rho = p_1\,|\Phi^{+}\rangle\langle\Phi^{+}| + p_2\,|\Phi^{-}\rangle\langle\Phi^{-}| + p_3\,|\Psi^{+}\rangle\langle\Psi^{+}| + p_4\,|\Psi^{-}\rangle\langle\Psi^{-}|$

where $p_1 + p_2 + p_3 + p_4 = 1$, and we assume $p_1$ is the largest coefficient.

Protocol Steps:

Apply Local Unitaries: Alice and Bob apply unitary operations on their qubits to transform the state into a form where $p_1$ can be maximized. This involves bit and phase flips that reorder the Bell-state coefficients without affecting the target state $|\Phi^{+}\rangle$.

CNOT Operations: Similar to the BBPSSW protocol, Alice and Bob each apply a CNOT operation between their pairs.

Basis Measurement: After the CNOT, Alice and Bob measure the target qubits in the computational basis, postselecting on matching outcomes.

Example Calculation: If the initial fidelity of $\rho$ is 0.6, a single round of DEJMPS can increase it more effectively than BBPSSW, with the achievable fidelity depending on the values of $p_2$, $p_3$, and $p_4$.

Filtering protocol

Filtering protocols apply local filtering operations to probabilistically enhance entanglement without requiring multiple pairs. This approach is useful when operations are limited, such as in photon-based quantum communication.

Protocol Steps:

Consider a noisy entangled state $\rho$ whose fidelity $F$ with respect to $|\Phi^{+}\rangle$ is less than 1.

Local Filtering Operators: Alice and Bob apply local filtering operators $A$ and $B$:

$\rho \mapsto (A \otimes B)\,\rho\,(A \otimes B)^{\dagger}$

Normalization and Success Probability: After applying the filters, the resulting state is re-normalized:

$\rho' = \frac{(A \otimes B)\,\rho\,(A \otimes B)^{\dagger}}{\operatorname{Tr}\left[(A \otimes B)\,\rho\,(A \otimes B)^{\dagger}\right]}$

The probability of successful filtering (the success probability) is:

$p_{\text{success}} = \operatorname{Tr}\left[(A \otimes B)\,\rho\,(A \otimes B)^{\dagger}\right]$

Resulting Fidelity: If the initial fidelity is modest, filtering can increase the fidelity to 0.8 or higher, but it reduces the probability of obtaining this result due to the probabilistic nature of the filter.
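A small numerical sketch of such a filter follows, under an assumed noise model (a $|\Phi^{+}\rangle$ pair mixed with $|01\rangle\langle01|$ noise) and an assumed pair of diagonal filters; both choices are illustrative rather than canonical:

```python
import numpy as np

# Basis ordering |00>, |01>, |10>, |11>.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
noise = np.zeros(4, dtype=complex)
noise[1] = 1.0                                   # |01> noise term

p = 0.6                                          # initial fidelity w.r.t. |Phi+>
rho = p * np.outer(phi_plus, phi_plus.conj()) \
    + (1 - p) * np.outer(noise, noise.conj())

eps = 0.3
A = np.diag([eps, 1.0])          # Alice's filter: suppress |0>_A
B = np.diag([1.0, eps])          # Bob's filter: suppress |1>_B
K = np.kron(A, B)                # combined local operation A (x) B

unnorm = K @ rho @ K.conj().T
p_success = np.real(np.trace(unnorm))            # filter success probability
rho_out = unnorm / p_success

fidelity = np.real(phi_plus.conj() @ rho_out @ phi_plus)
print(f"success probability {p_success:.3f}, new fidelity {fidelity:.3f}")
# ~0.057 and ~0.943: the fidelity jumps from 0.6 to about 0.94,
# but only about 6% of the pairs survive the filter.
```

Shrinking eps pushes the output fidelity arbitrarily close to 1 at the cost of an ever smaller success probability, mirroring the trade-off stated above.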
Procrustean method

The Procrustean method of entanglement concentration can be used for as little as one partly entangled pair, is more efficient than the Schmidt projection method for fewer than 5 pairs, and requires Alice and Bob to know the bias $\theta$ of the $n$ pairs in advance. The method derives its name from Procrustes because it produces a perfectly entangled state by chopping off the extra probability associated with the larger term in the partial entanglement of the pure states:

$|\psi\rangle = \cos\theta\,|{\uparrow\uparrow}\rangle + \sin\theta\,|{\downarrow\downarrow}\rangle$

Assuming a collection of particles for which $\theta$ is known to be either less than or greater than $\pi/4$, the Procrustean method may be carried out by keeping all particles which, when passed through a polarization-dependent absorber or a polarization-dependent reflector that absorbs or reflects a fraction of the more likely outcome, are not absorbed or deflected. Therefore, if Alice possesses particles for which $\theta < \pi/4$, she can separate out the excess probability of the outcome more likely to be measured in the up/down basis, and be left with particles in a maximally mixed state of spin up and spin down. This treatment corresponds to a POVM (positive-operator-valued measurement). To obtain a perfectly entangled state of two particles, Alice informs Bob of the result of her generalized measurement, while Bob doesn't measure his particle at all but instead discards his if Alice discards hers.

Stabilizer protocol

The purpose of an entanglement distillation protocol is to distill $k$ pure ebits from $n$ noisy ebits, where $k \leq n$. The yield of such a protocol is $k/n$. Two parties can then use the noiseless ebits for quantum communication protocols.

The two parties establish a set of shared noisy ebits in the following way. The sender Alice first prepares $n$ Bell states $|\Phi^{+}\rangle^{\otimes n}$ locally. She sends the second qubit of each pair over a noisy quantum channel to a receiver Bob. Let $|\Phi_n^{+}\rangle$ be the state $|\Phi^{+}\rangle^{\otimes n}$ rearranged so that all of Alice's qubits are on the left and all of Bob's qubits are on the right. The noisy quantum channel applies a Pauli error in the error set $\mathcal{E}$ to the set of qubits sent over the channel. The sender and receiver then share a set of noisy ebits of the form $(I \otimes E)\,|\Phi_n^{+}\rangle$, where the identity $I$ acts on Alice's qubits and $E$ is some Pauli operator in $\mathcal{E}$ acting on Bob's qubits.

A one-way stabilizer entanglement distillation protocol uses a stabilizer code for the distillation procedure. Suppose the stabilizer $\mathcal{S}$ for an $[[n,k]]$ quantum error-correcting code has generators $g_1, \ldots, g_{n-k}$. The distillation procedure begins with Alice measuring the generators in $\mathcal{S}$. Let $\{P_i\}$ be the set of projectors that project onto the orthogonal subspaces corresponding to the generators in $\mathcal{S}$. The measurement projects randomly onto one of the subspaces. Each $P_i$ commutes with the noisy operator on Bob's side, so that

$(P_i \otimes I)(I \otimes E)\,|\Phi_n^{+}\rangle = (I \otimes E)(P_i \otimes I)\,|\Phi_n^{+}\rangle$

The following important Bell-state matrix identity holds for an arbitrary matrix $M$:

$(M \otimes I)\,|\Phi_n^{+}\rangle = (I \otimes M^{T})\,|\Phi_n^{+}\rangle$

Then the above expression is equal to the following:

$(I \otimes E P_i^{T})\,|\Phi_n^{+}\rangle$

Therefore, each of Alice's projectors $P_i$ projects Bob's qubits onto a subspace corresponding to Alice's projected subspace. Alice restores her qubits to the simultaneous +1-eigenspace of the generators in $\mathcal{S}$. She sends her measurement results to Bob. Bob measures the generators in $\mathcal{S}$. Bob combines his measurements with Alice's to determine a syndrome for the error. He performs a recovery operation on his qubits to reverse the error. He restores his qubits to the simultaneous +1-eigenspace of the generators in $\mathcal{S}$. Alice and Bob both perform the decoding unitary corresponding to the stabilizer $\mathcal{S}$ to convert their $k$ logical ebits to $k$ physical ebits.

Entanglement-assisted stabilizer code

Luo and Devetak provided a straightforward extension of the above protocol (Luo and Devetak 2007). Their method converts an entanglement-assisted stabilizer code into an entanglement-assisted entanglement distillation protocol. Luo and Devetak form an entanglement distillation protocol that has entanglement assistance from a few noiseless ebits. The crucial assumption for an entanglement-assisted entanglement distillation protocol is that Alice and Bob possess $c$ noiseless ebits in addition to their $n$ noisy ebits. The total state of the noisy and noiseless ebits is $(I \otimes (E \otimes I))\,|\Phi_{n+c}^{+}\rangle$, where $I$ is the identity acting on Alice's qubits and the noisy Pauli operator $E$ affects Bob's first $n$ qubits only. Thus the last $c$ ebits are noiseless, and Alice and Bob have to correct for errors on the first $n$ ebits only.

The protocol proceeds exactly as outlined in the previous section. The only difference is that Alice and Bob measure the generators in an entanglement-assisted stabilizer code. Each generator spans $n + c$ qubits, where the last $c$ qubits are noiseless.

We comment on the yield of this entanglement-assisted entanglement distillation protocol. An entanglement-assisted code has $n - k$ generators that each have $n + c$ Pauli entries. These parameters imply that the entanglement distillation protocol produces $k$ ebits. But the protocol consumes $c$ initial noiseless ebits as a catalyst for distillation. Therefore, the yield of this protocol is $(k - c)/n$.

Entanglement dilution

The reverse process of entanglement distillation is entanglement dilution, where a large number of copies of the Bell state are converted into less entangled states using LOCC with high fidelity. The aim of the entanglement dilution process, then, is to saturate the inverse of the limiting ratio that defines the distillable entanglement.

Applications

Besides its important application in quantum communication, entanglement purification also plays a crucial role in error correction for quantum computation, because it can significantly increase the quality of logic operations between different qubits. The role of entanglement distillation is discussed briefly for the following applications.
Quantum error correction

Entanglement distillation protocols for mixed states can be used as a type of error correction for quantum communication channels between two parties Alice and Bob, enabling Alice to reliably send $mD(\rho)$ qubits of information to Bob, where $D(\rho)$ is the distillable entanglement of $\rho$, the state that results when one half of a Bell pair is sent through the noisy channel connecting Alice and Bob. In some cases, entanglement distillation may work when conventional quantum error-correction techniques fail. Entanglement distillation protocols are known which can produce a non-zero rate of transmission $D(\rho)$ for channels which do not allow the transmission of quantum information; this is possible because entanglement distillation protocols allow classical communication between the parties, which conventional error correction prohibits.

Quantum cryptography

The concept of correlated measurement outcomes and entanglement is central to quantum key exchange, and therefore the ability to successfully perform entanglement distillation to obtain maximally entangled states is essential for quantum cryptography. If an entangled pair of particles is shared between two parties, anyone intercepting either particle will alter the overall system, allowing their presence (and the amount of information they have gained) to be determined so long as the particles are in a maximally entangled state. Also, in order to share a secret key, Alice and Bob must perform the techniques of privacy amplification and information reconciliation to distill a shared secret key string. Information reconciliation is error correction over a public channel which reconciles errors between the correlated random classical bit strings shared by Alice and Bob while limiting the knowledge that a possible eavesdropper Eve can have about the shared keys. After information reconciliation is used to reconcile possible errors between the shared keys that Alice and Bob possess and to limit the information Eve could have gained, the technique of privacy amplification is used to distill a smaller subset of bits maximizing Eve's uncertainty about the key.

Quantum teleportation

In quantum teleportation, a sender wishes to transmit an arbitrary quantum state of a particle to a possibly distant receiver. Quantum teleportation achieves faithful transmission of quantum information by substituting classical communication and prior entanglement for a direct quantum channel. Using teleportation, an arbitrary unknown qubit can be faithfully transmitted via a pair of maximally entangled qubits shared between sender and receiver, together with a 2-bit classical message from the sender to the receiver. Quantum teleportation requires perfectly entangled particles as a resource, and entanglement distillation helps satisfy this requirement by producing nearly maximally entangled qubits from noisy shared pairs, effectively providing a noiseless quantum channel.

See also Quantum channel Quantum cryptography Quantum entanglement Quantum state Quantum teleportation LOCC Purification theorem

Notes and references

Mark M. Wilde, "From Classical to Quantum Shannon Theory", arXiv:1106.1445.

Quantum information science Statistical mechanics
Entanglement distillation
[ "Physics" ]
4,830
[ "Statistical mechanics" ]
18,327,991
https://en.wikipedia.org/wiki/QCD%20sum%20rules
In quantum chromodynamics, the confining and strong coupling nature of the theory means that conventional perturbative techniques often fail to apply. The QCD sum rules (or Shifman–Vainshtein–Zakharov sum rules) are a way of dealing with this. The idea is to work with gauge invariant operators and operator product expansions of them. The vacuum-to-vacuum correlation function for the product of two such operators can be re-expressed as

$i \int d^4x\; e^{iq\cdot x}\,\langle 0|\,T\{j(x)\,j^{\dagger}(0)\}\,|0\rangle = \sum_{n} \frac{|\langle 0|\,j\,|n\rangle|^{2}}{m_n^{2} - q^{2} - i\epsilon}$

where we have inserted hadronic particle states $|n\rangle$ on the right-hand side.

Overview

Instead of a model-dependent treatment in terms of constituent quarks, hadrons are represented by their interpolating quark currents taken at large virtualities. The correlation function of these currents is introduced and treated in the framework of the operator product expansion (OPE), where the short- and long-distance quark-gluon interactions are separated. The former are calculated using QCD perturbation theory, whereas the latter are parametrized in terms of universal vacuum condensates or light-cone distribution amplitudes. The result of the QCD calculation is then matched, via a dispersion relation, to a sum over hadronic states. The sum rule obtained in this way allows one to calculate observable characteristics of the hadronic ground state. Conversely, the parameters of QCD, such as quark masses and vacuum condensate densities, can be extracted from sum rules whose hadronic parts are known experimentally. The interactions of quark-gluon currents with QCD vacuum fields depend critically on the quantum numbers (spin, parity, flavor content) of these currents.

See also Quantum chromodynamics Lattice QCD Sum rules (Quantum Field Theory) Sum rule in quantum mechanics

External links SVZ sum rules at Scholarpedia (published in the Boris Ioffe Festschrift; most of the material above is an extended quotation and/or paraphrase of the introduction to this article). Institute for Theoretical and Experimental Physics Quantum chromodynamics
QCD sum rules
[ "Physics" ]
422
[ "Particle physics stubs", "Particle physics" ]
18,330,578
https://en.wikipedia.org/wiki/Exa%20Corporation
Exa Corporation was a developer and distributor of computer-aided engineering (CAE) software. Its main product was PowerFLOW, a lattice-Boltzmann-derived implementation of computational fluid dynamics (CFD), which can very accurately simulate internal and external flows in low-Mach regimes. PowerFLOW is used extensively in the international automotive and transportation industries. On November 17, 2017, Dassault Systèmes completed its acquisition of Exa Corporation. Exa became part of Dassault's SIMULIA brand.

History

Exa was founded in November 1991 in Lexington, Massachusetts. Exa raised about $2.4 million in a series of venture capital investments from April 1993 through 1994 from Fidelity Ventures and individuals. More funding was obtained in 1994, 1996, 1998 and 2005, including Boston Capital Ventures as an investor. In 1999, Stephen A. Remondi became chief executive. The company filed for an initial public offering in June 2012. On September 28, 2017, Dassault Systèmes announced the signing of a definitive merger agreement to acquire Exa, valuing the company at about 400 million USD.

For fiscal year 2012, Exa recorded total revenues, net income and Adjusted EBITDA of $45.9 million, $14.5 million and $7.1 million, respectively. Since generating its first commercial revenue in 1994, Exa's annual revenue had increased for 18 consecutive years. The company was profitable in fiscal years 2011 and 2012 after recording net losses in the three preceding fiscal years. Exa's total revenues and Adjusted EBITDA in fiscal year 2012 increased 21% and 51%, respectively, compared with fiscal year 2011. Exa reported $61.4 million in total revenue for the full year fiscal 2015. The company's total revenue was expected to be in the range of $64.7 million to $67.0 million for the full year fiscal 2016.

The Exa corporate headquarters were located in Burlington, Massachusetts. The company also had U.S. offices in Livonia, Michigan, and Brisbane, California, along with offices in Europe and Asia. Exa's European headquarters were located in Paris, France, and it also had European offices in Germany, Italy and the United Kingdom. Exa's Asia headquarters were located in Japan, and its Asia offices were based out of China, India and South Korea. Exa employed over 350 people worldwide.

References

Further reading

Miller, R.; Strumolo, G.; Russ, S.; Madin, M.; Affes, H.; Slike, J.; Chu, D. (1999). A Comparison of Experimental and Analytical Steady State Intake Port Flow Data Using Digital Physics. Society of Automotive Engineers.
Lietz, Robert; Pien, William; Remondi, Stephen (2000). A CFD Validation Study for Automotive Aerodynamics. Society of Automotive Engineers.
Gaylard (2001). Comparison of a Conventional RANS and a Lattice Gas Dynamics Simulation - A Case Study in High Speed Rail Aerodynamics. In: Rhodes, Norman. Computational Fluid Dynamics in Practice. Oxford, UK.
Succi, Sauro (2001). The Lattice Boltzmann Equation for Fluid Dynamics and Beyond. Oxford University Press.
Chen, Hudong; Kandasamy, Satheesh; Orszag, Steven; Shock, Rick; Succi, Sauro; Yakhot, Victor (2003). Extended Boltzmann Kinetic Equation for Turbulent Flows. Science. Vol. 301.
Kotapati, R.; Keating, A.; Kandasamy, S.; Duncan, B.; Shock, R.; Chen, H. (2009). "The Lattice-Boltzmann-VLES Method for Automotive Fluid Dynamics Simulation, a Review". SAE Technical Paper 2009-26-0057, doi:10.4271/2009-26-0057.
Kotapati, Rupesh B.; Shock, Richard; Chen, Hudong (2014). "Lattice-Boltzmann Simulations of Flows over Backward-Facing Inclined Steps". Int. J. Mod. Phys. C 25, 1340021 (14 pages), doi:10.1142/S0129183113400214.

Computational fluid dynamics Defunct software companies of the United States Software companies established in 1991 Companies formerly listed on the Nasdaq Simulation software Computer-aided engineering software 2017 mergers and acquisitions 1991 establishments in Massachusetts 2017 disestablishments in Massachusetts Software companies disestablished in 2017 Software companies based in Massachusetts
Exa Corporation
[ "Physics", "Chemistry" ]
928
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
18,334,481
https://en.wikipedia.org/wiki/ROHR2
ROHR2 is a CAE system for pipe stress analysis from SIGMA Ingenieurgesellschaft mbH, based in Unna, Germany. The software performs both static and dynamic analysis of complex piping and skeletal structures, and runs on the Microsoft Windows platform. ROHR2 comes with built-in industry-standard stress codes, such as ASME B31.1, B31.3, B31.4, B31.5, B31.8, EN 13480 and CODETI, along with several GRP pipe codes, as well as nuclear stress codes such as ASME Cl. 1-3, KTA 3201.2 and KTA 3211.2.

Name

The brand name comes from the German word "Rohr" (pronounced "ROAR"), which means "pipe".

History

Early years as an MBP product: 1960s to 1989

ROHR2 was created in the late 1960s by one of the first software companies in Germany, MBP, based in Dortmund. ROHR2 first ran on mainframes such as the UNIVAC 1, CRAY, and later Prime computers. At the time, the program was command-line driven, with a proprietary programming language used to describe the piping systems and define the various load conditions. Version 26, launched in 1987, was released for IBM PC and IBM PC compatible systems.

As an EDS / SIGMA product: 1989 to 2000

MBP was later taken over by EDS (then a part of General Motors Corp., now part of HP Enterprise Services). In 1989, SIGMA Ingenieurgesellschaft mbH was founded in Dortmund, and the ROHR2 development and support team moved to the new office premises of SIGMA. A graphical user interface was added to the product in 1994, which allowed the editing of piping systems without the need to master the previously required programming language.

SIGMA Ingenieurgesellschaft mbH product: 2000 to present

From the year 2000 onwards, the complete licensing and sales activities came under the management of SIGMA Ingenieurgesellschaft mbH, which had by then evolved into an engineering company specializing in pipe engineering as well as a software development firm. Recent developments include new bi-directional interfaces based on open standards for the transfer of data with other CAD/CAE products such as AVEVA PDMS, CADISON, Intergraph's PDS, Intergraph's SmartPlant, HICAD, MPDS4, Bentley Systems' AutoPLANT, Autodesk's PLANT3D and other PCF-supporting software. The integration of ROHR2 into the user's workflow is supported by third-party interface products to ensure interoperability, a norm in the present engineering software industry.

Software packages

The ROHR2 program system comes with the graphical user interface ROHR2win, the calculation core ROHR2, and various additional programs; see the section on related products.

Calculation basics

The static analysis includes the calculation of static loads of any value or combination in accordance with first- and second-order theory for linear and non-linear boundary conditions (friction, support lift-off). Additional load conditions can also be applied, such as dynamic loads or harmonic excitation. Furthermore, the dynamic analysis includes the calculation of eigenvalues and mode shapes as well as their processing in various modal response methods, for the analysis of, e.g., earthquakes and fluid hammer. A non-linear time-history module (ROHR2stoss) allows the analysis of dynamic events in the time domain, while taking into account non-linear components such as snubbers or visco dampers based on the Maxwell model. An efficient superposition module enables a manifold selection and combination of static and dynamic results.
Related products ROHR2fesu - Finite element analysis of substructures in ROHR2 ROHR2iso - Creation of isometric drawings in ROHR2 ROHR2stoss - Structural analysis with dynamic loads using direct integration ROHR2nozzle - Analysis of nozzles in piping systems according to API 610, 617, 661, NEMA SM23, DIN EN ISO 5199, 9905, 10437 and others ROHR2press - Internal pressure analysis of piping components SINETZ - Steady state calculation of flow distribution, pressure drop and heat loss in branched and intermeshed piping networks for compressible and incompressible media SINETZfluid - Calculation of flow distribution and pressure drop of incompressible media in branched and intermeshed piping networks PROBAD - Code-based strength calculations of pressure parts See also Pipe stress analysis References External links ROHR2 Homepage English Structural analysis Computer-aided design Computer-aided design software for Windows
ROHR2
[ "Engineering" ]
966
[ "Structural engineering", "Computer-aided design", "Design engineering", "Structural analysis", "Mechanical engineering", "Aerospace engineering" ]
18,334,913
https://en.wikipedia.org/wiki/Sialome
The word sialome is a junction of the Greek word for saliva (sialos) and the suffix used in molecular biology to reference a totality of some sort, -ome; the name relates to its role in biochemistry. In biochemistry, the term sialome may refer to two distinct concepts: The set of mRNA and proteins expressed in the salivary glands, especially of mosquitoes, ticks, and other blood-sucking arthropods. The total complement of sialic acid types and linkages and their modes of presentation on a particular organelle, cell, tissue, organ or organism, as found at a particular time and under specific conditions. Thus, the sialome can refer to the totality of the salivary gland mRNA and proteins expressed by an organism at a particular time and under specific cellular conditions. Sialome can also refer to the total complement of sialic acid derivative modifications found at the terminal ends of the glycan chains that cover the surfaces of proteins, organelles, and cells. These modifications are found in vertebrates and more complex invertebrates, and consist of the anionic nine-carbon monosaccharide structure of sialic acid with various structural additions to the hydroxyl groups of the molecule, resulting in derivatives with varying chemical properties. The modifications are responsible for conferring on proteins, organelles, and cells the physical and electrostatic properties that facilitate specific functions, such as protein folding, cell transport, or non-specific interactions with other macromolecules or cells. Furthermore, the terminal saccharide in glycan chains can serve in specific ligand interactions with lectins (carbohydrate-binding molecules); these lectins can originate from the host and interact with the terminal saccharides of a glycan chain in a specific process, or they can originate from pathogens and interact with terminal saccharides to aid pathogen entry into cells. References Biochemistry
Sialome
[ "Chemistry", "Biology" ]
407
[ "Biochemistry", "nan" ]
15,411,515
https://en.wikipedia.org/wiki/DMPU
N,N′-Dimethylpropyleneurea (DMPU) is a cyclic urea sometimes used as a polar, aprotic organic solvent. Along with dimethylethyleneurea, it was introduced as an analog of tetramethylurea. In 1985, Dieter Seebach showed that it is possible to replace the suspected carcinogen hexamethylphosphoramide (HMPA) with DMPU. References Further reading Solvents Green chemistry Amide solvents Amides Ureas Nitrogen heterocycles Heterocyclic compounds with 1 ring
DMPU
[ "Chemistry", "Engineering", "Environmental_science" ]
127
[ "Green chemistry", "Chemical engineering", "Environmental chemistry", "Functional groups", "Organic compounds", "nan", "Amides", "Organic compound stubs", "Organic chemistry stubs", "Ureas" ]
15,412,032
https://en.wikipedia.org/wiki/Overflow%20%28software%29
OVERFLOW - the OVERset grid FLOW solver - is a software package for simulating fluid flow around solid bodies using computational fluid dynamics (CFD). It is a compressible 3-D flow solver that solves the time-dependent, Reynolds-averaged, Navier–Stokes equations using multiple overset structured grids. History OVERFLOW was developed as part of a collaborative effort between NASA's Johnson Space Center in Houston, Texas and NASA Ames Research Center (ARC) in Moffett Field, California. The driving force behind this work was the need for evaluating the flow about the Space Shuttle launch vehicle. Originally developed in the early 1990s by NASA's Pieter Buning, Dennis Jespersen and others, the code is an outgrowth of earlier codes F3D and ARC3D, and a result of ARC's long history of flow-solver development. Usage Scientists use OVERFLOW to better understand the aerodynamic forces on a vehicle by evaluating the flowfield surrounding the vehicle. While wind tunnel testing provides limited data at many flow conditions, CFD simulations provide detailed information about selected conditions, and also provide a distribution of forces on the vehicle, aiding in structural design. OVERFLOW has also been used to simulate the effect of debris on the space shuttle launch vehicle. See also Computational fluid dynamics References External links Official NASA OVERFLOW CFD Code web site Article on OVERFLOW from NASA Insights Computational fluid dynamics Fluid dynamics
Overflow (software)
[ "Physics", "Chemistry", "Engineering" ]
290
[ "Computational fluid dynamics", "Chemical engineering", "Computational physics", "Piping", "Fluid dynamics" ]
15,413,167
https://en.wikipedia.org/wiki/Homogeneous%20isotropic%20turbulence
Within the field of fluid dynamics, homogeneous isotropic turbulence is an idealized version of realistic turbulence that is amenable to analytical study. The concept of isotropic turbulence was first introduced by G. I. Taylor in 1935. The meaning of each term is given below: homogeneous, meaning the statistical properties are invariant under arbitrary translations of the coordinate axes; isotropic, meaning the statistical properties are invariant under the full rotation group, which includes rotations and reflections of the coordinate axes. G. I. Taylor also suggested a way of obtaining almost homogeneous isotropic turbulence by passing fluid over a uniform grid. The theory was further developed by Theodore von Kármán and Leslie Howarth (Kármán–Howarth equation) under dynamical considerations. Kolmogorov's theory of 1941 was developed using Taylor's idea as a platform. References Turbulence Fluid dynamics
Homogeneous isotropic turbulence
[ "Chemistry", "Engineering" ]
176
[ "Turbulence", "Chemical engineering", "Piping", "Fluid dynamics stubs", "Fluid dynamics" ]
15,414,727
https://en.wikipedia.org/wiki/Pentagonal%20pyramidal%20molecular%20geometry
In chemistry, pentagonal pyramidal molecular geometry describes the shape of compounds in which six atoms, groups of atoms, or ligands are arranged around a central atom at the vertices of a pentagonal pyramid. It is one of the few molecular geometries with uneven bond angles. Examples References Pentagonal pyramid, Wolfram MathWorld Molecular geometry
Pentagonal pyramidal molecular geometry
[ "Physics", "Chemistry" ]
70
[ "Molecular geometry", "Molecules", "Stereochemistry", "Stereochemistry stubs", "Matter" ]
3,407,453
https://en.wikipedia.org/wiki/Nd%3AYCOB
Nd-doped YCOB (Nd:YCa4O(BO3)3) is a nonlinear optical crystal which is commonly used as an active laser medium. It can be grown from a melt by the Czochralski technique. It belongs to the monoclinic system with space group Cm (Schoenflies symbol $C_s^2$). Each neodymium ion replaces a yttrium ion in the YCOB crystal structure. Parameters in the Sellmeier equation Further reading Nonlinear optical materials Electro-optical materials Anisotropic optical materials Self-frequency-doubling materials Crystals Laser gain media Neodymium compounds Yttrium compounds Calcium compounds Boron compounds
Nd:YCOB
[ "Physics", "Chemistry", "Materials_science" ]
137
[ "Anisotropic optical materials", "Crystallography", "Crystals", "Asymmetry", "Symmetry" ]
3,408,308
https://en.wikipedia.org/wiki/Metabolic%20network%20modelling
Metabolic network modelling, also known as metabolic network reconstruction or metabolic pathway analysis, allows for an in-depth insight into the molecular mechanisms of a particular organism. In particular, these models correlate the genome with molecular physiology. A reconstruction breaks down metabolic pathways (such as glycolysis and the citric acid cycle) into their respective reactions and enzymes, and analyzes them within the perspective of the entire network. In simplified terms, a reconstruction collects all of the relevant metabolic information of an organism and compiles it in a mathematical model. Validation and analysis of reconstructions can allow identification of key features of metabolism such as growth yield, resource distribution, network robustness, and gene essentiality. This knowledge can then be applied to create novel biotechnology.

In general, the process to build a reconstruction is as follows: Draft a reconstruction. Refine the model. Convert the model into a mathematical/computational representation. Evaluate and debug the model through experimentation. The related method of flux balance analysis seeks to mathematically simulate metabolism in genome-scale reconstructions of metabolic networks.

Genome-scale metabolic reconstruction

A metabolic reconstruction provides a highly mathematical, structured platform on which to understand the systems biology of metabolic pathways within an organism. The integration of biochemical metabolic pathways with rapidly available, annotated genome sequences has produced what are called genome-scale metabolic models. Simply put, these models correlate metabolic genes with metabolic pathways. In general, the more information about physiology, biochemistry and genetics is available for the target organism, the better the predictive capacity of the reconstructed models. Mechanically speaking, the process of reconstructing prokaryotic and eukaryotic metabolic networks is essentially the same. Having said this, eukaryote reconstructions are typically more challenging because of the size of genomes, the coverage of knowledge, and the multitude of cellular compartments. The first genome-scale metabolic model was generated in 1995 for Haemophilus influenzae. The first reconstruction of a multicellular organism, C. elegans, followed in 1998. Since then, many reconstructions have been formed. For a list of reconstructions that have been converted into a model and experimentally validated, see http://sbrg.ucsd.edu/InSilicoOrganisms/OtherOrganisms.

Drafting a reconstruction

Resources

Because the development of reconstructions is so recent, most have been built manually. However, quite a few resources now allow for the semi-automatic assembly of reconstructions, and these are widely utilized because of the time and effort a manual reconstruction requires. An initial fast reconstruction can be developed automatically using resources like PathoLogic or ERGO in combination with encyclopedias like MetaCyc, and then manually updated using resources like PathwayTools. These semi-automatic methods allow a fast draft to be created while permitting the fine-tuned adjustments required once new experimental data are found. It is only in this manner that the field of metabolic reconstruction will keep up with the ever-increasing number of annotated genomes.

Databases

Kyoto Encyclopedia of Genes and Genomes (KEGG): a bioinformatics database containing information on genes, proteins, reactions, and pathways.
The ‘KEGG Organisms’ section, which is divided into eukaryotes and prokaryotes, encompasses many organisms for which gene and DNA information can be searched by typing in the enzyme of choice.

BioCyc, EcoCyc, and MetaCyc: BioCyc is a collection of 3,000 pathway/genome databases (as of Oct 2013), with each database dedicated to one organism. For example, EcoCyc is a highly detailed bioinformatics database on the genome and metabolic reconstruction of Escherichia coli, including thorough descriptions of E. coli signaling pathways and its regulatory network. The EcoCyc database can serve as a paradigm and model for any reconstruction. Additionally, MetaCyc, an encyclopedia of experimentally defined metabolic pathways and enzymes, contains 2,100 metabolic pathways and 11,400 metabolic reactions (Oct 2013).

ENZYME: An enzyme nomenclature database (part of the ExPASy proteomics server of the Swiss Institute of Bioinformatics). After a search for a particular enzyme, this resource gives the reaction that is catalyzed. ENZYME has direct links to other gene/enzyme/literature databases such as KEGG, BRENDA, and PubMed.

BRENDA: A comprehensive enzyme database that allows an enzyme to be searched by name, EC number, or organism.

BiGG: A knowledge base of biochemically, genetically, and genomically structured genome-scale metabolic network reconstructions.

metaTIGER: A collection of metabolic profiles and phylogenomic information on a taxonomically diverse range of eukaryotes which provides novel facilities for viewing and comparing the metabolic profiles of different organisms.

Tools for metabolic modeling

Pathway Tools: A bioinformatics software package that assists in the construction of pathway/genome databases such as EcoCyc. Developed by Peter Karp and associates at the SRI International Bioinformatics Research Group, Pathway Tools has several components. Its PathoLogic module takes an annotated genome for an organism and infers probable metabolic reactions and pathways to produce a new pathway/genome database. Its MetaFlux component can generate a quantitative metabolic model from that pathway/genome database using flux-balance analysis. Its Navigator component provides extensive query and visualization tools, such as visualization of metabolites, pathways, and the complete metabolic network.

ERGO: A subscription-based service developed by Integrated Genomics. It integrates data from every level, including genomic data, biochemical data, literature, and high-throughput analysis, into a comprehensive, user-friendly network of metabolic and nonmetabolic pathways.

KEGGtranslator: An easy-to-use stand-alone application that can visualize and convert KEGG files (KGML-formatted XML files) into multiple output formats. Unlike other translators, KEGGtranslator supports a plethora of output formats, is able to augment the information in translated documents (e.g., MIRIAM annotations) beyond the scope of the KGML document, and adds missing components to fragmentary reactions within the pathway to allow simulations of them. KEGGtranslator converts these files to SBML, BioPAX, SIF, SBGN, SBML with qualitative modeling extension, GML, GraphML, JPG, GIF, LaTeX, etc.

ModelSEED: An online resource for the analysis, comparison, reconstruction, and curation of genome-scale metabolic models. Users can submit genome sequences to the RAST annotation system, and the resulting annotation can be automatically piped into the ModelSEED to produce a draft metabolic model.
The ModelSEED automatically constructs a network of metabolic reactions, gene-protein-reaction associations for each reaction, and a biomass composition reaction for each genome to produce a model of microbial metabolism that can be simulated using flux balance analysis.

MetaMerge: An algorithm for semi-automatically reconciling a pair of existing metabolic network reconstructions into a single metabolic network model.

CoReCo: An algorithm for the automatic reconstruction of metabolic models of related species. The first version of the software used KEGG as its reaction database, linked to the EC number predictions from CoReCo. Its automatic gap filling, using atom maps of all the reactions, produces functional models ready for simulation.

Tools for literature

PubMed: An online library developed by the National Center for Biotechnology Information, which contains a massive collection of medical journals. Using the link provided by ENZYME, the search can be directed towards the organism of interest, thus recovering literature on the enzyme and its use inside of the organism.

Methodology to draft a reconstruction

A reconstruction is built by compiling data from the resources above. Database tools such as KEGG and BioCyc can be used in conjunction with each other to find all the metabolic genes in the organism of interest. These genes will be compared to those of closely related organisms that already have reconstructions, to find homologous genes and reactions. These homologous genes and reactions are carried over from the known reconstructions to form the draft reconstruction of the organism of interest. Tools such as ERGO, Pathway Tools and ModelSEED can compile data into pathways to form a network of metabolic and non-metabolic pathways. These networks are then verified and refined before being made into a mathematical simulation.

The predictive aspect of a metabolic reconstruction hinges on the ability to predict the biochemical reaction catalyzed by a protein using that protein's amino acid sequence as an input, and to infer the structure of a metabolic network based on the predicted set of reactions. A network of enzymes and metabolites is drafted to relate sequences and function. When an uncharacterized protein is found in the genome, its amino acid sequence is first compared to those of previously characterized proteins to search for homology. When a homologous protein is found, the proteins are considered to have a common ancestor and their functions are inferred as being similar. However, the quality of a reconstruction model depends on its ability to accurately infer phenotype directly from sequence, so this rough estimation of protein function will not be sufficient. A number of algorithms and bioinformatics resources have been developed for refinement of sequence-homology-based assignments of protein functions:

InParanoid: Identifies eukaryotic orthologs by looking only at in-paralogs.
CDD: A resource for the annotation of functional units in proteins. Its collection of domain models utilizes 3D structure to provide insights into sequence/structure/function relationships.
InterPro: Provides functional analysis of proteins by classifying them into families and predicting domains and important sites.
STRING: A database of known and predicted protein interactions.

Once proteins have been established, more information about the enzyme structure, the reactions catalyzed, substrates and products, mechanisms, and more can be acquired from databases such as KEGG, MetaCyc and NC-IUBMB.
Accurate metabolic reconstructions require additional information about the reversibility and preferred physiological direction of an enzyme-catalyzed reaction, which can come from databases such as BRENDA or MetaCyc.

Model refinement

An initial metabolic reconstruction of a genome is typically far from perfect, owing to the high variability and diversity of microorganisms. Often, metabolic pathway databases such as KEGG and MetaCyc will have "holes", meaning that there is a conversion from a substrate to a product (i.e., an enzymatic activity) for which there is no known protein in the genome that encodes the enzyme facilitating the catalysis. It can also happen in semi-automatically drafted reconstructions that some pathways are falsely predicted and do not actually occur in the predicted manner. Because of this, a systematic verification is made in order to make sure no inconsistencies are present and that all the entries listed are correct and accurate. Furthermore, previous literature can be researched in order to support any information obtained from one of the many metabolic reaction and genome databases. This provides an added level of assurance for the reconstruction that the enzyme and the reaction it catalyzes do actually occur in the organism.

Enzyme promiscuity and spontaneous chemical reactions can damage metabolites. This metabolite damage, and its repair or pre-emption, create energy costs that need to be incorporated into models. It is likely that many genes of unknown function encode proteins that repair or pre-empt metabolite damage, but most genome-scale metabolic reconstructions only include a fraction of all genes.

Any new reaction not present in the databases needs to be added to the reconstruction. This is an iterative process that cycles between the experimental phase and the coding phase. As new information is found about the target organism, the model will be adjusted to predict the metabolic and phenotypical output of the cell. The presence or absence of certain reactions of the metabolism will affect the amount of reactants/products that are present for other reactions within the particular pathway. This is because products in one reaction go on to become the reactants for another reaction, i.e. products of one reaction can combine with other proteins or compounds to form new proteins/compounds in the presence of different enzymes or catalysts.

Francke et al. provide an excellent example of why the verification step of the project needs to be performed in significant detail. During a metabolic network reconstruction of Lactobacillus plantarum, the model showed that succinyl-CoA was one of the reactants for a reaction that was a part of the biosynthesis of methionine. However, an understanding of the physiology of the organism would have revealed that, due to an incomplete tricarboxylic acid pathway, Lactobacillus plantarum does not actually produce succinyl-CoA, and the correct reactant for that part of the reaction was acetyl-CoA. Therefore, systematic verification of the initial reconstruction will bring to light several inconsistencies that can adversely affect the final interpretation of the reconstruction, which is to accurately comprehend the molecular mechanisms of the organism. Furthermore, the simulation step also ensures that all the reactions present in the reconstruction are properly balanced. To sum up, a reconstruction that is fully accurate can lead to greater insight into the functioning of the organism of interest.
Metabolic stoichiometric analysis

A metabolic network can be broken down into a stoichiometric matrix, where the rows represent the compounds of the reactions and the columns correspond to the reactions themselves. Stoichiometry is a quantitative relationship between the substrates of a chemical reaction. In order to deduce what the metabolic network suggests, recent research has centered on a few approaches, such as extreme pathways, elementary mode analysis, flux balance analysis, and a number of other constraint-based modeling methods.

Extreme pathways

Price, Reed, and Papin, from the Palsson lab, use a method of singular value decomposition (SVD) of extreme pathways in order to understand regulation of human red blood cell metabolism. Extreme pathways are convex basis vectors that consist of steady-state functions of a metabolic network. For any particular metabolic network, there is always a unique set of extreme pathways available. Furthermore, Price, Reed, and Papin define a constraint-based approach in which constraints such as mass balance and maximum reaction rates delimit a 'solution space' within which all feasible options fall. Then, using a kinetic model approach, a single solution that falls within the extreme pathway solution space can be determined. Therefore, in their study, Price, Reed, and Papin use both constraint-based and kinetic approaches to understand the human red blood cell metabolism. In conclusion, using extreme pathways, the regulatory mechanisms of a metabolic network can be studied in further detail.

Elementary mode analysis

Elementary mode analysis closely matches the approach used by extreme pathways. Similar to extreme pathways, there is always a unique set of elementary modes available for a particular metabolic network. These are the smallest sub-networks that allow a metabolic reconstruction network to function in steady state. According to Stelling (2002), elementary modes can be used to understand cellular objectives for the overall metabolic network. Furthermore, elementary mode analysis takes into account stoichiometry and thermodynamics when evaluating whether a particular metabolic route or network is feasible and likely for a set of proteins/enzymes.

Minimal metabolic behaviors (MMBs)

In 2009, Larhlimi and Bockmayr presented a new approach called "minimal metabolic behaviors" for the analysis of metabolic networks. Like elementary modes or extreme pathways, these are uniquely determined by the network, and yield a complete description of the flux cone. However, the new description is much more compact. In contrast with elementary modes and extreme pathways, which use an inner description based on generating vectors of the flux cone, MMBs use an outer description of the flux cone. This approach is based on sets of non-negativity constraints. These can be identified with irreversible reactions, and thus have a direct biochemical interpretation. One can characterize a metabolic network by MMBs and the reversible metabolic space.

Flux balance analysis

A different technique to simulate the metabolic network is to perform flux balance analysis. This method uses linear programming, but in contrast to elementary mode analysis and extreme pathways, only a single solution results in the end.
Linear programming is usually used to obtain the maximum value of a chosen objective function, and therefore, when using flux balance analysis, a single solution is found to the optimization problem. In a flux balance analysis approach, exchange fluxes are assigned only to those metabolites that enter or leave the particular network. Those metabolites that are consumed within the network are not assigned any exchange flux value. Also, the exchange fluxes, along with the enzymes, can have constraints ranging from a negative to a positive value (e.g., -10 to 10). Furthermore, this particular approach can accurately determine whether the reaction stoichiometry is in line with predictions by providing fluxes for the balanced reactions. Also, flux balance analysis can highlight the most effective and efficient pathway through the network in order to achieve a particular objective function. In addition, gene knockout studies can be performed using flux balance analysis. The enzyme that corresponds to the gene that needs to be removed is given a constraint value of 0. Then, the reaction that the particular enzyme catalyzes is completely removed from the analysis.

Dynamic simulation and parameter estimation

In order to perform a dynamic simulation with such a network, it is necessary to construct an ordinary differential equation system that describes the rates of change in each metabolite's concentration or amount. To this end, a rate law, i.e., a kinetic equation that determines the rate of reaction based on the concentrations of all reactants, is required for each reaction. Software packages that include numerical integrators, such as COPASI or SBMLsimulator, are then able to simulate the system dynamics given an initial condition. Often these rate laws contain kinetic parameters with uncertain values. In many cases it is desired to estimate these parameter values with respect to given time-series data of metabolite concentrations. The system is then supposed to reproduce the given data. For this purpose, the distance between the given data set and the result of the simulation, i.e., the numerically or in a few cases analytically obtained solution of the differential equation system, is computed. The values of the parameters are then estimated so as to minimize this distance. One step further, it may be desired to estimate the mathematical structure of the differential equation system, because the real rate laws are not known for the reactions within the system under study. To this end, the program SBMLsqueezer allows automatic creation of appropriate rate laws for all reactions within the network.

Synthetic accessibility

Synthetic accessibility is a simple approach to network simulation whose goal is to predict which metabolic gene knockouts are lethal. The synthetic accessibility approach uses the topology of the metabolic network to calculate the sum of the minimum number of steps needed to traverse the metabolic network graph from the inputs, those metabolites available to the organism from the environment, to the outputs, metabolites needed by the organism to survive. To simulate a gene knockout, the reactions enabled by the gene are removed from the network and the synthetic accessibility metric is recalculated. An increase in the total number of steps is predicted to cause lethality. Wunderlich and Mirny showed that this simple, parameter-free approach predicted knockout lethality in E. coli and S. cerevisiae as well as elementary mode analysis and flux balance analysis did, in a variety of media.
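The flux balance calculation described above reduces to a linear program. The following minimal sketch (assuming NumPy and SciPy are available; the toy network, bounds, and objective are invented for illustration and are not a real reconstruction) maximizes a "biomass" flux subject to the steady-state constraint S·v = 0:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: A is taken up (v1), split into B (v2) and C (v3),
# and B + C are jointly consumed by a "biomass" reaction (v4).
# Rows: metabolites A, B, C; columns: reactions v1..v4.
S = np.array([
    [1, -1, -1,  0],   # A: produced by v1, consumed by v2 and v3
    [0,  1,  0, -1],   # B: produced by v2, consumed by v4
    [0,  0,  1, -1],   # C: produced by v3, consumed by v4
])

bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake v1 capped at 10

# Flux balance analysis: maximize v4 at steady state S v = 0.
# linprog minimizes, so we minimize -v4.
c = np.array([0, 0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)

print("optimal biomass flux:", res.x[3])   # 5.0: uptake of 10 splits 5/5 to B and C
print("flux distribution:", res.x)
```

A gene knockout in this framework amounts to setting the bounds of the affected reaction to (0, 0) and re-solving; if the optimal biomass flux drops to zero, the knockout is predicted to be lethal under the chosen objective.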
Applications of a reconstruction

Several inconsistencies exist between gene, enzyme, and reaction databases and published literature sources regarding the metabolic information of an organism. A reconstruction is a systematic verification and compilation of data from various sources that takes into account all of the discrepancies. Uses of a reconstruction include: The combination of relevant metabolic and genomic information of an organism. Metabolic comparisons between various organisms of the same species as well as between different organisms. Analysis of synthetic lethality. Prediction of adaptive evolution outcomes. Use in metabolic engineering for high-value outputs.

Reconstructions and their corresponding models allow the formulation of hypotheses about the presence of certain enzymatic activities and the production of metabolites that can be experimentally tested, complementing the primarily discovery-based approach of traditional microbial biochemistry with hypothesis-driven research. The results of these experiments can uncover novel pathways and metabolic activities and resolve discrepancies in previous experimental data. Information about the chemical reactions of metabolism and the genetic background of various metabolic properties (sequence to structure to function) can be utilized by genetic engineers to modify organisms to produce high-value outputs, whether those products be medically relevant like pharmaceuticals, high-value chemical intermediates such as terpenoids and isoprenoids, or biotechnological outputs like biofuels or polyhydroxybutyrates, also known as bioplastics.

Metabolic network reconstructions and models are also used to understand how an organism or parasite functions inside of the host cell. For example, if the parasite serves to compromise the immune system by lysing macrophages, then the goal of metabolic reconstruction/simulation would be to determine the metabolites that are essential to the organism's proliferation inside of macrophages. If the proliferation cycle is inhibited, then the parasite would not continue to evade the host's immune system. A reconstruction model serves as a first step to deciphering the complicated mechanisms surrounding disease. These models can also look at the minimal genes necessary for a cell to maintain virulence. The next step would be to use the predictions and postulates generated from a reconstruction model and apply them to discover novel biological functions such as drug engineering and drug delivery techniques.

See also Computational systems biology Computer simulation Flux balance analysis Fluxomics Metabolic control analysis Metabolic flux analysis Metabolic network Metabolic pathway Biochemical systems equation Metagenomics

References

Further reading

Overbeek R, Larsen N, Walunas T, D'Souza M, Pusch G, Selkov Jr, Liolios K, Joukov V, Kaznadzey D, Anderson I, Bhattacharyya A, Burd H, Gardner W, Hanke P, Kapatral V, Mikhailova N, Vasieva O, Osterman A, Vonstein V, Fonstein M, Ivanova N, Kyrpides N. (2003). The ERGO genome analysis and discovery system. Nucleic Acids Res. 31(1):164-71.
Whitaker, J.W., Letunic, I., McConkey, G.A. and Westhead, D.R. (2009). metaTIGER: a metabolic evolution resource. Nucleic Acids Res. 37: D531-8.

External links ERGO GeneDB KEGG PathCase Case Western Reserve University BRENDA BioCyc and Cyclone - provides an open source Java API to the pathway tool BioCyc to extract metabolic graphs.
EcoCyc MetaCyc SEED ModelSEED ENZYME SBRI Bioinformatics Tools and Software TIGR Pathway Tools metaTIGER Stanford Genomic Resources Pathway Hunter Tool IMG The Integrated Microbial Genomes system, for genome analysis by the DOE-JGI. Systems Analysis, Modelling and Prediction Group at the University of Oxford, Biochemical reaction pathway inference techniques. efmtool provided by Marco Terzer SBMLsqueezer Cellnet analyzer from Klamt and von Kamp Copasi gEFM A graph-based tool for EFM computation Biological engineering Biomedical engineering Systems biology Bioinformatics Genomics Metabolism
Metabolic network modelling
[ "Chemistry", "Engineering", "Biology" ]
4,852
[ "Biological engineering", "Biomedical engineering", "Bioinformatics", "Cellular processes", "Biochemistry", "Medical technology", "Metabolism", "Systems biology" ]
3,408,660
https://en.wikipedia.org/wiki/Microscopic%20reversibility
The principle of microscopic reversibility in physics and chemistry is twofold:
First, it states that the microscopic detailed dynamics of particles and fields is time-reversible because the microscopic equations of motion are symmetric with respect to inversion in time (T-symmetry);
Second, it relates to the statistical description of the kinetics of macroscopic or mesoscopic systems as an ensemble of elementary processes: collisions, elementary transitions or reactions. For these processes, the consequence of the microscopic T-symmetry is: Corresponding to every individual process there is a reverse process, and in a state of equilibrium the average rate of every process is equal to the average rate of its reverse process.
History of microscopic reversibility
The idea of microscopic reversibility was born together with physical kinetics. In 1872, Ludwig Boltzmann represented the kinetics of gases as a statistical ensemble of elementary collisions. The equations of mechanics are reversible in time; hence, reverse collisions obey the same laws. This reversibility of collisions is the first example of microreversibility. According to Boltzmann, microreversibility implies the principle of detailed balance for collisions: in the equilibrium ensemble, each collision is equilibrated by its reverse collision. These ideas of Boltzmann were analyzed in detail and generalized by Richard C. Tolman.
In chemistry, J. H. van't Hoff (1884) proposed that equilibrium has a dynamical nature and results from the balance between the forward and backward reaction rates. He did not study reaction mechanisms with many elementary reactions and could not formulate the principle of detailed balance for complex reactions. In 1901, Rudolf Wegscheider introduced the principle of detailed balance for complex chemical reactions. He found that for a complex reaction the principle of detailed balance implies important and non-trivial relations between the rate constants of the different reactions. In particular, he demonstrated that irreversible reaction cycles are impossible and that, for reversible cycles, the product of the rate constants of the forward reactions (in the "clockwise" direction) equals the product of the rate constants of the reverse reactions (in the "anticlockwise" direction); a worked example follows this section. Lars Onsager (1931) used these relations in his well-known work, without direct citation but with the following remark: "Here, however, the chemists are accustomed to impose a very interesting additional restriction, namely: when the equilibrium is reached each individual reaction must balance itself. They require that the transition must take place just as frequently as the reverse transition etc."
The quantum theory of emission and absorption developed by Albert Einstein (1916, 1917) gives an example of the application of microreversibility and detailed balance to the development of a new branch of kinetic theory. Sometimes the principle of detailed balance is formulated in a narrow sense, for chemical reactions only, but in the history of physics it has had broader use: it was invented for collisions and then applied to the emission and absorption of quanta, to transport processes, and to many other phenomena. In its modern form, the principle of microreversibility was published by Lewis (1925). Classical textbooks present the full theory and many examples of its applications.
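As an illustration of the relation Wegscheider found, the following worked example (added here for clarity; the species and rate constants are generic placeholders, not drawn from a specific reaction network) traces a reversible three-species cycle A1 ⇌ A2 ⇌ A3 ⇌ A1 with forward rate constants k_i^+ and reverse rate constants k_i^-:

```latex
% Detailed balance at the equilibrium concentrations c_i^{eq}:
% each elementary step is individually balanced by its reverse.
\begin{align}
  k_1^{+} c_1^{\mathrm{eq}} &= k_1^{-} c_2^{\mathrm{eq}}, &
  k_2^{+} c_2^{\mathrm{eq}} &= k_2^{-} c_3^{\mathrm{eq}}, &
  k_3^{+} c_3^{\mathrm{eq}} &= k_3^{-} c_1^{\mathrm{eq}}.
\end{align}
% Multiplying the three conditions, the (strictly positive)
% equilibrium concentrations cancel, leaving Wegscheider's identity:
\begin{equation}
  k_1^{+}\, k_2^{+}\, k_3^{+} \;=\; k_1^{-}\, k_2^{-}\, k_3^{-}.
\end{equation}
```

An irreversible cycle would require a reverse constant to vanish while all forward constants remain positive, violating this identity; this is the sense in which irreversible reaction cycles are impossible.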
Time-reversibility of dynamics
The Newton and the Schrödinger equations, in the absence of macroscopic magnetic fields and in an inertial frame of reference, are T-invariant: if X(t) is a solution then X(-t) is also a solution (here X is the vector of all dynamic variables, including all the coordinates of the particles for the Newton equations and the wave function in configuration space for the Schrödinger equation; for the Schrödinger equation the time-reversed solution also involves complex conjugation of the wave function). There are two sources of violation of this rule:
First, if the dynamics depend on a pseudovector, such as the magnetic field or the angular velocity of a rotating frame, then T-symmetry does not hold.
Second, in the microphysics of the weak interaction T-symmetry may be violated, and only the combined CPT symmetry holds.
Macroscopic consequences of the time-reversibility of dynamics
In physics and chemistry, there are two main macroscopic consequences of the time-reversibility of microscopic dynamics: the principle of detailed balance and the Onsager reciprocal relations.
The statistical description of a macroscopic process as an ensemble of elementary indivisible events (collisions) was invented by L. Boltzmann and formalised in the Boltzmann equation. He discovered that the time-reversibility of Newtonian dynamics leads to detailed balance for collisions: in equilibrium, collisions are equilibrated by their reverse collisions. This principle allowed Boltzmann to deduce a simple formula for entropy production and to prove his famous H-theorem. In this way, microscopic reversibility was used to prove macroscopic irreversibility and the convergence of ensembles of molecules to their thermodynamic equilibria.
Another macroscopic consequence of microscopic reversibility is the symmetry of kinetic coefficients, the so-called reciprocal relations (sketched schematically below). The reciprocal relations were discovered in the 19th century by Thomson and Helmholtz for some phenomena, but the general theory was proposed by Lars Onsager in 1931. He also found the connection between the reciprocal relations and detailed balance. For the equations of the law of mass action, the reciprocal relations appear in the linear approximation near equilibrium as a consequence of the detailed balance conditions. According to the reciprocal relations, damped oscillations in homogeneous closed systems near thermodynamic equilibrium are impossible, because the spectrum of a symmetric operator is real. Therefore, the relaxation to equilibrium in such a system is monotone if the system is sufficiently close to equilibrium.
References
See also
Detailed balance
Onsager reciprocal relations
Physical chemistry Statistical mechanics
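The reciprocal relations referred to above admit a compact schematic statement. The following block is an illustrative sketch, with generic fluxes J_i, thermodynamic forces X_k, and kinetic coefficients L_ik; it records the linear force-flux law near equilibrium and the symmetry that microscopic reversibility imposes:

```latex
% Linear response near equilibrium: each flux J_i is driven by all
% thermodynamic forces X_k through the kinetic coefficients L_{ik}.
\begin{equation}
  J_i = \sum_k L_{ik} X_k .
\end{equation}
% Onsager reciprocal relations (valid where T-symmetry holds, i.e.
% in the absence of magnetic fields and rotation):
\begin{equation}
  L_{ik} = L_{ki} .
\end{equation}
```

Because the coefficient matrix is symmetric, its eigenvalues are real; this is the spectral fact behind the impossibility of damped oscillations near equilibrium noted above.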
Microscopic reversibility
[ "Physics", "Chemistry" ]
1,165
[ "Physical chemistry", "Statistical mechanics", "Applied and interdisciplinary physics", "nan" ]
3,409,726
https://en.wikipedia.org/wiki/Yttrium%20iron%20garnet
Yttrium iron garnet (YIG) is a kind of synthetic garnet with the chemical composition Y3Fe5O12. It is a ferrimagnetic material with a Curie temperature of 560 K. YIG may also be known as yttrium ferrite garnet, or as iron yttrium oxide or yttrium iron oxide, the latter two names usually associated with powdered forms.
Production
Several methods are used to synthesize yttrium iron garnet, each with its own advantages and drawbacks. The solid-state reaction method is a traditional approach to YIG synthesis, involving the high-temperature firing of a mixture of yttrium and iron oxides. This cost-effective technique can produce pure YIG crystals but requires careful control of temperature and atmosphere to prevent impurities.
Liquid phase epitaxy (LPE) is another key method, especially for creating thin YIG films with excellent uniformity. Ideal for optical and microwave devices, LPE enables precise film growth on substrates. However, its high equipment costs and complex procedures limit its use to applications where superior quality is essential.
Properties
In YIG, the five iron(III) ions per formula unit occupy two octahedral and three tetrahedral sites, with the yttrium(III) ions coordinated by eight oxygen ions in an irregular cube. The iron ions on the two types of site have oppositely directed spins; because the sublattice moments do not cancel, the material is ferrimagnetic (a short worked estimate of the net moment follows this article). Substituting rare-earth elements onto specific sites can produce further interesting magnetic properties.
YIG has a high Verdet constant, which quantifies the strength of its Faraday effect; a high Q factor at microwave frequencies; low absorption of infrared wavelengths down to 1200 nm; and a very small linewidth in electron spin resonance. These properties make it useful for magneto-optical imaging (MOI) applications in superconductors.
Applications
YIG is used in microwave, acoustic, optical, and magneto-optical applications, e.g. microwave YIG filters or acoustic transmitters and transducers. It is transparent to light with wavelengths longer than 600 nm, toward the infrared end of the spectrum. It is also used in Faraday rotators for solid-state lasers, in data storage, and in various nonlinear optics applications.
See also
Gadolinium gallium garnet
Terbium gallium garnet
Yttrium aluminium garnet
YIG sphere
References
Synthetic minerals Iron(III) compounds Yttrium compounds Nonlinear optical materials Ferromagnetic materials Transition metal oxides
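The antiparallel iron sublattices described above allow a simple estimate of the low-temperature magnetic moment per formula unit. The following worked calculation is a standard textbook sketch, added here for illustration, taking each high-spin Fe3+ ion to carry about five Bohr magnetons:

```latex
% Per Y3Fe5O12 formula unit: three tetrahedral Fe^{3+} spins oppose
% two octahedral Fe^{3+} spins, and Y^{3+} carries no moment.
\begin{equation}
  m \approx (3 - 2) \times 5\,\mu_{\mathrm{B}} = 5\,\mu_{\mathrm{B}}
  \quad \text{per formula unit.}
\end{equation}
```

The measured saturation moment of YIG at low temperature is close to this value, consistent with the antiparallel arrangement of the two iron sublattices.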
Yttrium iron garnet
[ "Physics", "Chemistry" ]
511
[ "Matter", "Synthetic materials", "Ferromagnetic materials", "Materials", "Synthetic minerals" ]