id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
18,769,762 | https://en.wikipedia.org/wiki/UNIQUAC | In statistical thermodynamics, UNIQUAC (a portmanteau of universal quasichemical) is an activity coefficient model used in the description of phase equilibria.
The model is a so-called lattice model and has been derived from a first-order approximation of interacting molecule surfaces. The model is, however, not fully thermodynamically consistent due to its two-liquid mixture approach. In this approach the local concentration around one central molecule is assumed to be independent of the local composition around another type of molecule.
The UNIQUAC model can be considered a second-generation activity coefficient model because its expression for the excess Gibbs energy consists of an entropy term in addition to an enthalpy term. Earlier activity coefficient models such as the Wilson equation and the non-random two-liquid model (NRTL model) consist only of enthalpy terms.
Today the UNIQUAC model is frequently applied in the description of phase equilibria (i.e. liquid–solid, liquid–liquid or liquid–vapor equilibrium). The UNIQUAC model also serves as the basis of the development of the group contribution method UNIFAC, where molecules are subdivided into functional groups. In fact, UNIQUAC is equal to UNIFAC for mixtures of molecules that are not subdivided, e.g. the binary systems water–methanol, methanol–acrylonitrile and formaldehyde–DMF.
A more thermodynamically consistent form of UNIQUAC is given by the more recent COSMOSPACE and the equivalent GEQUAC model.
Equations
Like most local composition models, UNIQUAC splits the excess Gibbs free energy into a combinatorial and a residual contribution:

$\frac{G^E}{RT} = \left(\frac{G^E}{RT}\right)^C + \left(\frac{G^E}{RT}\right)^R$
The calculated activity coefficients of the ith component then split likewise:

$\ln\gamma_i = \ln\gamma_i^C + \ln\gamma_i^R$
The former is an entropic term quantifying the deviation from ideal solubility arising from differences in molecular shape; the latter is an enthalpic correction caused by the change in interacting forces between different molecules upon mixing.
Combinatorial contribution
The combinatorial contribution accounts for shape differences between molecules and affects the entropy of the mixture; it is based on lattice theory. The Staverman–Guggenheim equation is used to approximate this term from pure-chemical parameters, using the relative Van der Waals volumes ri and surface areas qi of the pure chemicals:

$\frac{G^E_C}{RT} = \sum_i x_i \ln V_i + \frac{z}{2}\sum_i q_i x_i \ln\frac{F_i}{V_i}$

Differentiating yields the combinatorial activity coefficient of the ith component, $\gamma_i^C$:

$\ln\gamma_i^C = 1 - V_i + \ln V_i - \frac{z}{2} q_i \left(1 - \frac{V_i}{F_i} + \ln\frac{V_i}{F_i}\right)$

with the volume fraction per mixture mole fraction, Vi, for the ith component given by:

$V_i = \frac{r_i}{\sum_j r_j x_j}$

The surface area fraction per mixture mole fraction, Fi, for the ith component given by:

$F_i = \frac{q_i}{\sum_j q_j x_j}$
The first three terms on the right-hand side of the combinatorial term form the Flory–Huggins contribution, while the remaining term, the Staverman–Guggenheim correction, reduces it because connecting segments cannot be placed in all directions in space. This spatial correction shifts the result of the Flory–Huggins term by about 5% towards an ideal solution. The coordination number z, i.e. the number of closely interacting molecules around a central molecule, is frequently set to 10. It is based on the coordination number of a methylene group in a long chain: in the approximation of a hexagonal close packing of spheres, such a group has 12 nearest neighbours, 10 intermolecular contacts plus its 2 chain bonds.
In the case of infinite dilution in a binary mixture, the equations for the combinatorial contribution reduce to (for component 1, with the analogous expression for component 2):

$\ln\gamma_1^{C,\infty} = 1 - \frac{r_1}{r_2} + \ln\frac{r_1}{r_2} - \frac{z}{2} q_1 \left(1 - \frac{r_1 q_2}{r_2 q_1} + \ln\frac{r_1 q_2}{r_2 q_1}\right)$

This pair of equations shows that molecules of the same shape, i.e. the same r and q parameters, have $\ln\gamma^{C,\infty} = 0$.
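To make the combinatorial equations concrete, here is a minimal Python sketch for a binary mixture, assuming z = 10; the x, r and q values are illustrative placeholders, not tabulated constants.

```python
import math

# Minimal sketch (not from the article) of the UNIQUAC combinatorial
# term for a binary mixture, with z = 10. All numeric values below are
# illustrative placeholders, not fitted or tabulated constants.
z = 10.0
x = [0.4, 0.6]     # mole fractions
r = [0.92, 2.11]   # relative Van der Waals volumes (placeholders)
q = [1.40, 1.97]   # relative Van der Waals surface areas (placeholders)

def ln_gamma_comb(i):
    V = r[i] / sum(xj * rj for xj, rj in zip(x, r))  # volume fraction / x_i
    F = q[i] / sum(xj * qj for xj, qj in zip(x, q))  # surface fraction / x_i
    return (1.0 - V + math.log(V)
            - (z / 2.0) * q[i] * (1.0 - V / F + math.log(V / F)))

print([ln_gamma_comb(i) for i in range(2)])
# With r[0] == r[1] and q[0] == q[1] (identical molecular shapes), both
# values are exactly zero, matching the infinite-dilution result above.
```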
Residual contribution
The residual, enthalpic term contains an empirical parameter, $\tau_{ij}$, which is determined from the binary interaction energy parameters. The expression for the residual activity coefficient for molecule i is:

$\ln\gamma_i^R = q_i \left( 1 - \ln\frac{\sum_j q_j x_j \tau_{ji}}{\sum_j q_j x_j} - \sum_j \frac{q_j x_j \tau_{ij}}{\sum_k q_k x_k \tau_{kj}} \right)$

with

$\tau_{ij} = e^{-\Delta u_{ij}/RT}$

$\Delta u_{ij}$ [J/mol] is the binary interaction energy parameter. Theory defines $\Delta u_{ij} = u_{ij} - u_{ii}$ and $\Delta u_{ji} = u_{ji} - u_{jj}$, where $u_{ij}$ is the interaction energy between molecules i and j. The interaction energy parameters are usually determined from activity coefficients, or from vapor–liquid, liquid–liquid, or liquid–solid equilibrium data.

Usually $\Delta u_{ij} \neq \Delta u_{ji}$, because the energies of evaporation (i.e. $u_{ii}$ and $u_{jj}$) are in many cases different, while the energy of interaction between molecules i and j is symmetric, so that $u_{ij} = u_{ji}$. If the i–j interactions are the same as the i–i and j–j interactions, there is no excess energy of mixing, $\Delta u_{ij} = \Delta u_{ji} = 0$, and thus $\ln\gamma_i^R = 0$.
Alternatively, in some process simulation software, $\tau_{ij}$ can be expressed as follows:

$\tau_{ij} = \exp\left( A_{ij} + \frac{B_{ij}}{T} + C_{ij}\ln T + D_{ij}T + \frac{E_{ij}}{T^2} \right)$

The C, D, and E coefficients are primarily used in fitting liquid–liquid equilibria (with D and E rarely used at that). The C coefficient is useful for vapor–liquid equilibria as well. The use of such an expression ignores the fact that, on a molecular level, the energy $\Delta u_{ij}$ is temperature independent; the additional terms are a correction that repairs the simplifications applied in the derivation of the model.
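As a concrete illustration of the residual equations above, here is a minimal Python sketch for a binary mixture; the parameter values (q, x, and the interaction energies du) are illustrative placeholders, not fitted data.

```python
import math

# Minimal sketch (not from the article) of the UNIQUAC residual term
# for a binary mixture, with tau_ij = exp(-du_ij/(R*T)) as defined above.
R, T = 8.314, 298.15             # J/(mol K), K
x = [0.4, 0.6]
q = [1.40, 1.97]                 # placeholder surface-area parameters
du = [[0.0, 500.0],              # du[i][j] in J/mol, placeholders
      [-200.0, 0.0]]             # (du[i][i] = 0 by definition)

tau = [[math.exp(-du[i][j] / (R * T)) for j in range(2)] for i in range(2)]
theta = [x[j] * q[j] / sum(x[k] * q[k] for k in range(2)) for j in range(2)]

def ln_gamma_res(i):
    s = sum(theta[j] * tau[j][i] for j in range(2))   # sum_j theta_j tau_ji
    c = sum(theta[j] * tau[i][j] /
            sum(theta[k] * tau[k][j] for k in range(2)) for j in range(2))
    return q[i] * (1.0 - math.log(s) - c)

print([ln_gamma_res(i) for i in range(2)])
# With all du equal to zero, tau = 1 and both residual terms vanish:
# identical interaction energies give no excess enthalpy of mixing.
```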
Applications (phase equilibrium calculations)
Activity coefficients can be used to predict simple phase equilibria (vapour–liquid, liquid–liquid, solid–liquid), or to estimate other physical properties (e.g. viscosity of mixtures). Models such as UNIQUAC allow chemical engineers to predict the phase behavior of multicomponent chemical mixtures. They are commonly used in process simulation programs to calculate the mass balance in and around separation units.
Parameter determination
UNIQUAC requires two types of underlying parameters. The first are the relative surface and volume parameters, chemical constants which must be known for all chemicals (the qi and ri parameters, respectively). The second are empirical binary interaction parameters, which describe the intermolecular behaviour; these must be known for all binary pairs in the mixture. In a quaternary mixture there are six such pairs (1–2, 1–3, 1–4, 2–3, 2–4, 3–4), and the number rapidly increases with additional chemical components (see the sketch below). The empirical parameters are obtained by a correlation process from experimental equilibrium compositions or activity coefficients, or from phase diagrams, from which the activity coefficients themselves can be calculated. An alternative is to obtain activity coefficients with a predictive method such as UNIFAC, and then fit those values to obtain the UNIQUAC parameters. This allows for more rapid calculation of activity coefficients than direct use of the more complex method.
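The growth in the number of binary pairs is simple combinatorics; a short illustrative sketch (not from the article):

```python
from math import comb

# An n-component mixture has n*(n-1)/2 binary pairs; each UNIQUAC pair
# carries two interaction energy parameters (du_ij and du_ji).
for n in range(2, 7):
    pairs = comb(n, 2)
    print(f"{n} components: {pairs} binary pairs, {2 * pairs} parameters")
# A quaternary mixture (n = 4) gives the six pairs listed in the text:
# 1-2, 1-3, 1-4, 2-3, 2-4, 3-4.
```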
Note that the determination of parameters from LLE data can be difficult, depending on the complexity of the studied system. For this reason it is necessary to confirm the consistency of the obtained parameters over the whole range of compositions (including binary subsystems, experimental and calculated tie-lines, the Hessian matrix, etc.).
Newer developments
UNIQUAC has been extended by several research groups. Some selected derivatives are:
UNIFAC, a group-contribution method which permits the volume, surface and, in particular, binary interaction parameters to be estimated, eliminating the need for experimental data to fit the UNIQUAC parameters,
extensions for the estimation of activity coefficients for electrolytic mixtures,
extensions for better describing the temperature dependence of activity coefficients,
and solutions for specific molecular arrangements.
The DISQUAC model advances UNIFAC by replacing UNIFAC's semi-empirical group-contribution model with an extension of the consistent theory of Guggenheim's UNIQUAC. By adding a "dispersive" or "random-mixing physical" term, it better predicts mixtures of molecules with both polar and non-polar groups. However, separate calculation of the dispersive and quasi-chemical terms means the contact surfaces are not uniquely defined. The GEQUAC model advances DISQUAC slightly, by breaking polar groups into individual poles and merging the dispersive and quasi-chemical terms.
See also
Chemical equilibrium
Chemical thermodynamics
Fugacity
MOSCED, a model for estimating limiting activity coefficients at infinite dilution
NRTL, an alternative to UNIQUAC of the same local composition type
Notes
References
Thermodynamic models | UNIQUAC | [
"Physics",
"Chemistry"
] | 1,628 | [
"Thermodynamic models",
"Thermodynamics"
] |
498,127 | https://en.wikipedia.org/wiki/Aldol%20reaction | The aldol reaction (aldol addition) is a reaction in organic chemistry that combines two carbonyl compounds (e.g. aldehydes or ketones) to form a new β-hydroxy carbonyl compound. Its simplest form might involve the nucleophilic addition of an enolized ketone to another:
These products are known as aldols, from the aldehyde + alcohol, a structural motif seen in many of the products. The use of aldehyde in the name comes from its history: aldehydes are more reactive than ketones, so that the reaction was discovered first with them.
The aldol reaction is paradigmatic in organic chemistry and one of its most common means of forming carbon–carbon bonds. It lends its name to the family of aldol reactions, and similar analyses apply to a whole family of carbonyl α-substitution reactions, as well as the diketone condensations.
Scope
Aldol structural units are found in many important molecules, whether naturally occurring or synthetic. The reaction is widely used on an industrial scale, notably in the production of pentaerythritol, trimethylolpropane, the plasticizer precursor 2-ethylhexanol, and the drug Lipitor (atorvastatin, calcium salt). For many of the commodity applications, the stereochemistry of the aldol reaction is unimportant, but the topic is of intense interest for the synthesis of many specialty chemicals.
Aldol dimerization
In its simplest implementation, base induces conversion of an aldehyde or a ketone to the aldol product. One example involves the aldol condensation of propionaldehyde:
Featuring the RCH(OH)CHR'C(O)R" grouping, the product is an aldol. In this case R = CH3CH2, R' = CH3, and R" = H. Such reactions are called aldol dimerizations.
Cross-aldol
With a mixture of carbonyl precursors, complicated mixtures can occur. Upon addition of base to a mixture of propionaldehyde and acetaldehyde, one obtains four products:
The first two products are the result of aldol dimerization, while the latter two result from a crossed aldol reaction. Complicated mixtures from crossed aldol reactions can be avoided by using one component that cannot form an enolate; examples are formaldehyde and benzaldehyde. This approach is used in one stage of the production of trimethylolpropane, which entails crossed aldol condensation of butyraldehyde and formaldehyde:
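The statistical outcome described above can be sketched with a few lines of Python; the compound names and has_alpha_H flags are illustrative bookkeeping, not a chemistry library:

```python
from itertools import product

# Schematic count (not from the article) of the products from mixing
# carbonyl compounds under basic conditions: any compound with
# alpha-hydrogens can act as the enolate donor, and any carbonyl can
# act as the electrophilic acceptor.
carbonyls = {
    "propionaldehyde": {"has_alpha_H": True},
    "acetaldehyde":    {"has_alpha_H": True},
}

donors = [name for name, props in carbonyls.items() if props["has_alpha_H"]]
acceptors = list(carbonyls)

for donor, acceptor in product(donors, acceptors):
    kind = "dimerization" if donor == acceptor else "crossed aldol"
    print(f"{donor} (enolate) + {acceptor} (carbonyl) -> {kind} product")
# 2 donors x 2 acceptors = the four products described above. Replacing
# one aldehyde with a non-enolizable one (e.g. benzaldehyde) removes it
# from `donors` and halves the product count.
```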
Reactions of aldols
Aldols dehydrate:
Because this dehydration is facile, it is sometimes simply taken for granted. It is for this reason that the aldol reaction is sometimes called the aldol condensation.
Mechanisms
The aldol reaction has one underlying mechanism: a carbanion-like nucleophile attacks a carbonyl center.
If the base is of only moderate strength such as hydroxide ion or an alkoxide, the aldol reaction occurs via nucleophilic attack by the resonance-stabilized enolate on the carbonyl group of another molecule. The product is the alkoxide salt of the aldol product. The aldol itself is then formed, and it may then undergo dehydration to give the unsaturated carbonyl compound. The scheme shows a simple mechanism for the base-catalyzed aldol reaction of an aldehyde with itself.
Although only a catalytic amount of base is required in some cases, the more usual procedure is to use a stoichiometric amount of a strong base such as LDA or NaHMDS. In this case, enolate formation is irreversible, and the aldol product is not formed until the metal alkoxide of the aldol product is protonated in a separate workup step.
When an acid catalyst is used, the initial step in the reaction mechanism involves acid-catalyzed tautomerization of the carbonyl compound to the enol. The acid also serves to activate the carbonyl group of another molecule by protonation, rendering it highly electrophilic. The enol is nucleophilic at the α-carbon, allowing it to attack the protonated carbonyl compound, leading to the aldol after deprotonation. Some may also dehydrate past the intended product to give the unsaturated carbonyl compound through aldol condensation.
Crossed-aldol reactant control
Despite the attractiveness of the aldol manifold, there are several problems that need to be addressed to render the process effective. The first problem is a thermodynamic one: most aldol reactions are reversible. Furthermore, the equilibrium is also just barely on the side of the products in the case of simple aldehyde–ketone aldol reactions. If the conditions are particularly harsh (e.g.: NaOMe/MeOH/reflux), condensation may occur, but this can usually be avoided with mild reagents and low temperatures (e.g., LDA (a strong base), THF, −78 °C). Although the aldol addition usually proceeds to near completion under irreversible conditions, the isolated aldol adducts are sensitive to base-induced retro-aldol cleavage to return starting materials. In contrast, retro-aldol condensations are rare, but possible. This is the basis of the catalytic strategy of class I aldolases in nature, as well as numerous small-molecule amine catalysts.
When a mixture of unsymmetrical ketones is reacted, four crossed-aldol (addition) products can be anticipated. Thus, if one wishes to obtain only one of the cross-products, one must control which carbonyl becomes the nucleophilic enol/enolate and which remains in its electrophilic carbonyl form.
The simplest control is if only one of the reactants has acidic protons, and only this molecule forms the enolate. For example, the addition of diethyl malonate to benzaldehyde produces only one product:
If one group is considerably more acidic than the other, the most acidic proton is abstracted by the base and an enolate is formed at that carbonyl while the less-acidic carbonyl remains electrophilic. This type of control works only if the difference in acidity is large enough and base is the limiting reactant. A typical substrate for this situation is when the deprotonatable position is activated by more than one carbonyl-like group. Common examples include a CH2 group flanked by two carbonyls or nitriles (see for example the Knoevenagel condensation and the first steps of the malonic ester synthesis and acetoacetic ester synthesis).
Otherwise, the most acidic carbonyls are typically also the most active electrophiles: first aldehydes, then ketones, then esters, and finally amides. Thus cross-aldehyde reactions are typically most challenging because they can polymerize easily or react unselectively to give a statistical mixture of products.
One common solution is to form the enolate of one partner first, and then add the other partner under kinetic control. Kinetic control means that the forward aldol addition reaction must be significantly faster than the reverse retro-aldol reaction. For this approach to succeed, two other conditions must also be satisfied; it must be possible to quantitatively form the enolate of one partner, and the forward aldol reaction must be significantly faster than the transfer of the enolate from one partner to another. Common kinetic control conditions involve the formation of the enolate of a ketone with LDA at −78 °C, followed by the slow addition of an aldehyde.
Stereoselectivity
The aldol reaction unites two relatively simple molecules into a more complex one. Increased complexity arises because each end of the new bond may become a stereocenter. Modern methodology has not only developed high-yielding aldol reactions, but also completely controls both the relative and absolute configuration of these new stereocenters.
To describe relative stereochemistry at the α- and β-carbon, older papers use saccharide chemistry's erythro/threo nomenclature; more modern papers use the following syn/anti convention. When propionate (or higher order) nucleophiles add to aldehydes, the reader visualizes the R group of the ketone and the R group of the aldehyde aligned in a "zig zag" pattern on the paper (or screen). The disposition of the formed stereocenters is deemed syn or anti, depending if they are on the same or opposite sides of the main chain:
The principal factor determining an aldol reaction's stereoselectivity is the metal counterion of the enolate. Shorter metal–oxygen bonds "tighten" the transition state and effect greater stereoselection. Boron is often used because its bond lengths are significantly shorter than those of other common metals (lithium, aluminium, or magnesium). For example, the following reaction gives a syn:anti ratio of 80:20 using a lithium enolate compared to 97:3 using a dibutylboron enolate.
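For readers who prefer the selectivity expressed as a diastereomeric excess, here is a small arithmetic helper (an illustrative aid, not from the article):

```python
# Converting the quoted syn:anti ratios into diastereomeric excess (d.e.).
def diastereomeric_excess(major: float, minor: float) -> float:
    """d.e. (%) = 100 * (major - minor) / (major + minor)."""
    return 100.0 * (major - minor) / (major + minor)

print(diastereomeric_excess(80, 20))  # lithium enolate      -> 60.0 % d.e.
print(diastereomeric_excess(97, 3))   # dibutylboron enolate -> 94.0 % d.e.
```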
While the counterion determines the strength of stereoinduction, the enolate geometry determines its direction: E enolates give anti products and Z enolates give syn products:
Zimmerman–Traxler model
If the two reactants have carbonyls adjacent to a pre-existing stereocenter, then the new stereocenters may form at a fixed orientation relative to the old. This "substrate-based stereocontrol" has seen extensive study and examples pervade the literature. In many cases, a stylized transition state, called the Zimmerman–Traxler model, can predict the new orientation from the configuration of a 6-membered ring.
On the enol
If the enol has an adjacent stereocenter, then the two stereocenters flanking the carbonyl in the product are naturally syn:
The underlying mechanistic reason depends on the enolate isomer: for an E enolate, the stereoinduction arises from avoiding 1,3-allylic strain, while a Z enolate instead avoids 1,3-diaxial interactions:
However, Fráter & Seebach showed that a chelating Lewis basic moiety adjacent to the enol will instead cause anti addition.
On the electrophile
E enolates exhibit Felkin diastereoface selection, while Z enolates exhibit anti-Felkin selectivity. The general model is presented below:
Since the transition state for Z enolates must contain either a destabilizing syn-pentane interaction or an anti-Felkin rotamer, Z-enolates are less diastereoselective:
On both
If both the enolate and the aldehyde contain pre-existing chirality, then the outcome of the "double stereodifferentiating" aldol reaction may be predicted using a merged stereochemical model that takes into account all the effects discussed above. Several examples are as follows:
Oxazolidinone chiral auxiliaries
In the late 1970s and 1980s, David A. Evans and coworkers developed a technique for stereoselection in the aldol syntheses of aldehydes and carboxylic acids (Gage, J. R.; Evans, D. A. "Diastereoselective Aldol Condensation Using a Chiral Oxazolidinone Auxiliary: (2S*,3S*)-3-Hydroxy-3-Phenyl-2-Methylpropanoic Acid", Organic Syntheses, Coll. Vol. 8, p. 339 (1993); Vol. 68, p. 83 (1990)). The method works by temporarily appending a chiral oxazolidinone auxiliary to create a chiral enolate. The pre-existing chirality of the auxiliary is then transferred to the aldol adduct through a Zimmerman–Traxler-type transition state, after which the oxazolidinone is cleaved away.
Commercial oxazolidinones are relatively expensive, but derive in two synthetic steps from comparatively inexpensive amino acids. (Economical large-scale syntheses prepare the auxiliary in-house.) First, a borohydride reduces the acid moiety. Then the resulting amino alcohol dehydratively cyclises with a simple carbonate ester, such as diethyl carbonate.
The acylation of an oxazolidinone is informally referred to as "loading done".
Anti adducts, which require an E enolate, cannot be obtained reliably with the Evans method. However, Z enolates, leading to syn adducts, can be reliably formed using boron-mediated soft enolization:
Often, a single diastereomer may be obtained by one crystallization of the aldol adduct.
Many methods cleave the auxiliary:
Variations
A common additional chiral auxiliary is a thioether group:
Crimmins thiazolidinethione aldol
In the Crimmins thiazolidinethione approach, a thiazolidinethione is the chiral auxiliary and can produce the "Evans syn" or "non-Evans syn" adducts by simply varying the amount of (−)-sparteine. The reaction is believed to proceed via six-membered, titanium-bound transition states, analogous to the proposed transition states for the Evans auxiliary.
"Masked" enols
A common modification of the aldol reaction uses other, similar functional groups as ersatz enols. In the Mukaiyama aldol reaction, silyl enol ethers add to carbonyls in the presence of a Lewis acid catalyst, such as boron trifluoride (as boron trifluoride etherate) or titanium tetrachloride ("3-Hydroxy-3-Methyl-1-Phenyl-1-Butanone by Crossed Aldol Reaction", Teruaki Mukaiyama and Koichi Narasaka, Organic Syntheses, Coll. Vol. 8, p. 323 (1993); Vol. 65, p. 6 (1987)).
In the Stork enamine alkylation, secondary amines form enamines when exposed to ketones. These enamines then react (possibly enantioselectively) with suitable electrophiles. This strategy offers simple enantioselection without transition metals. In contrast to the preference for syn adducts typically observed in enolate-based aldol additions, these aldol additions are anti-selective.
In aqueous solution, the enamine can then be hydrolyzed from the product, making it a small organic molecule catalyst. In a seminal example, proline efficiently catalyzed the cyclization of a triketone:
This combination is known as the Hajos–Parrish reaction. Under Hajos–Parrish conditions only a catalytic amount of proline is necessary (3 mol%). There is no danger of an achiral background reaction because the transient enamine intermediates are much more nucleophilic than their parent ketone enols.
A Stork-type strategy also allows the otherwise challenging cross-reactions between two aldehydes. In many cases, the conditions are mild enough to avoid polymerization. However, selectivity requires slow, syringe-pump-controlled addition of the desired electrophilic partner, because both reacting partners typically have enolizable protons. Additional control can be achieved if one aldehyde has no enolizable protons or alpha- or beta-branching.
"Direct" aldol additions
In the usual aldol addition, a carbonyl compound is deprotonated to form the enolate. The enolate is added to an aldehyde or ketone, which forms an alkoxide, which is then protonated on workup. A superior method, in principle, would avoid the requirement for a multistep sequence in favor of a "direct" reaction that could be done in a single process step.
If one coupling partner preferentially enolizes, then the general problem is that the addition generates an alkoxide, which is much more basic than the starting materials. This product binds tightly to the enolizing agent, preventing it from catalyzing additional reactants:
One approach, demonstrated by Evans, is to silylate the aldol adduct. A silicon reagent such as TMSCl is added in the reaction, which replaces the metal on the alkoxide, allowing turnover of the metal catalyst:
Use in carbohydrate synthesis
Traditional syntheses of hexoses use variations of iterative protection–deprotection strategies requiring 8–14 steps; organocatalysis, by contrast, can access many of the same substrates in a two-step protocol involving the proline-catalyzed dimerization of alpha-oxyaldehydes followed by tandem Mukaiyama aldol cyclization.
The aldol dimerization of alpha-oxyaldehydes requires that the aldol adduct, itself an aldehyde, be inert to further aldol reactions.
Earlier studies revealed that aldehydes bearing alpha-alkyloxy or alpha-silyloxy substituents were suitable for this reaction, while aldehydes bearing electron-withdrawing groups such as acetoxy were unreactive. The protected erythrose product could then be converted to four possible sugars via Mukaiyama aldol addition followed by lactol formation. This requires appropriate diastereocontrol in the Mukaiyama aldol addition and the product silyloxycarbenium ion to preferentially cyclize, rather than undergo further aldol reaction. In the end, glucose, mannose, and allose were synthesized:
Biological aldol reactions
Examples of aldol reactions in biochemistry include the splitting of fructose-1,6-bisphosphate into dihydroxyacetone and glyceraldehyde-3-phosphate in the fourth stage of glycolysis, which is an example of a reverse ("retro") aldol reaction catalyzed by the enzyme aldolase A (also known as fructose-1,6-bisphosphate aldolase).
In the glyoxylate cycle of plants and some prokaryotes, isocitrate lyase produces glyoxylate and succinate from isocitrate. Following deprotonation of the OH group, isocitrate lyase cleaves isocitrate into the four-carbon succinate and the two-carbon glyoxylate by an aldol cleavage reaction. This cleavage is similar mechanistically to the aldolase A reaction of glycolysis.
History
The aldol reaction was discovered independently by the Russian chemist (and Romantic composer) Alexander Borodin in 1869 (Garner, Susan Amy (2007). "Hydrogen-mediated carbon-carbon bond formations: applied to reductive aldol and Mannich reactions". Ph.D. dissertation, University of Texas at Austin, pp. 4 and 51) and by the French chemist Charles-Adolphe Wurtz in 1872; the reaction was originally performed with aldehydes.
Howard Zimmerman and Marjorie D. Traxler proposed their model for stereoinduction in a 1957 paper.
See also
Aldol–Tishchenko reaction
Baylis–Hillman reaction
Ivanov reaction
Reformatsky reaction
Claisen-Schmidt condensation
Notes
References
Further reading
Chem 206, 215 Lecture Notes (2003, 2006) by D. A. Evans, A. G. Myers, et al., Harvard University (pp. 345, 936)
Addition reactions
Carbon-carbon bond forming reactions
Alexander Borodin | Aldol reaction | [
"Chemistry"
] | 4,150 | [
"Coupling reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
498,165 | https://en.wikipedia.org/wiki/Effective%20nuclear%20charge | In atomic physics, the effective nuclear charge of an electron in a multi-electron atom or ion is the number of elementary charges (e) the electron experiences from the nucleus. It is denoted by Zeff. The term "effective" is used because the shielding effect of negatively charged electrons prevents higher-energy electrons from experiencing the full nuclear charge, owing to the repelling effect of inner-shell electrons. The effective nuclear charge experienced by an electron is also called the core charge. The strength of the nuclear charge can be gauged from the oxidation number of the atom. Most of the physical and chemical properties of the elements can be explained on the basis of electronic configuration. Consider the behavior of ionization energies in the periodic table. It is known that the magnitude of the ionization potential depends upon the following factors:
The size of atom
The nuclear charge; oxidation number
The screening effect of the inner shells
The extent to which the outermost electron penetrates the charge cloud set up by the inner-lying electrons
In the periodic table, effective nuclear charge decreases down a group and increases left to right across a period.
Description
The effective atomic number Zeff, (sometimes referred to as the effective nuclear charge) of an electron in a multi-electron atom is the number of protons that this electron effectively 'sees' due to screening by inner-shell electrons. It is a measure of the electrostatic interaction between the negatively charged electrons and positively charged protons in the atom. One can view the electrons in an atom as being 'stacked' by energy outside the nucleus; the lowest energy electrons (such as the 1s and 2s electrons) occupy the space closest to the nucleus, and electrons of higher energy are located further from the nucleus.
The binding energy of an electron, or the energy needed to remove the electron from the atom, is a function of the electrostatic interaction between the negatively charged electrons and the positively charged nucleus. For instance, in iron (atomic number 26), the nucleus contains 26 protons. The electrons that are closest to the nucleus will 'see' nearly all of them. However, electrons further away are screened from the nucleus by other electrons in between, and feel less electrostatic interaction as a result. The 1s electron of iron (the closest one to the nucleus) sees an effective atomic number (number of protons) of 25. The reason why it is not 26 is that some of the electrons in the atom end up repelling the others, giving a net lower electrostatic interaction with the nucleus. One way of envisioning this effect is to imagine the 1s electron sitting on one side of the 26 protons in the nucleus, with another electron sitting on the other side; each electron will feel less than the attractive force of 26 protons because the other electron contributes a repelling force. The 4s electrons in iron, which are furthest from the nucleus, feel an effective atomic number of only 5.43 because of the 25 electrons in between it and the nucleus screening the charge.
Effective atomic numbers are useful not only in understanding why electrons further from the nucleus are so much more weakly bound than those closer to the nucleus, but also because they can tell us when to use simplified methods of calculating other properties and interactions. For instance, lithium, atomic number 3, has two electrons in the 1s shell and one in the 2s shell. Because the two 1s electrons screen the protons to give an effective atomic number for the 2s electron close to 1, we can treat this 2s valence electron with a hydrogenic model.
Mathematically, the effective atomic number Zeff can be calculated using methods known as "self-consistent field" calculations, but in simplified situations is just taken as the atomic number minus the number of electrons between the nucleus and the electron being considered.
Calculations
In an atom with one electron, that electron experiences the full charge of the positive nucleus. In this case, the effective nuclear charge can be calculated by Coulomb's law.
However, in an atom with many electrons, the outer electrons are simultaneously attracted to the positive nucleus and repelled by the negatively charged electrons. The effective nuclear charge on such an electron is given by the following equation:

$Z_{\mathrm{eff}} = Z - S$

where

Z is the number of protons in the nucleus (atomic number), and
S is the shielding (screening) constant.
S can be found by the systematic application of various rule sets.
Slater's rules
The simplest method for determining the shielding constant for a given electron is the use of "Slater's rules", devised by John C. Slater, and published in 1930. These algebraic rules are significantly simpler than finding shielding constants using ab initio calculation.
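A minimal Python sketch of Slater's rules, restricted to a target electron in an (ns, np) group, is shown below. The grouping, coefficients, and iron example are the standard textbook ones; the simplifications (no 0.30 coefficient for 1s-on-1s screening, no special rules for d/f target electrons) are noted in the comments.

```python
# Simplified Slater's rules for an electron in an (ns, np) group.
# Groups are (n, electron_count) in Slater's grouping order:
# (1s)(2s,2p)(3s,3p)(3d)(4s,4p)...  Omitted here: the 0.30 coefficient
# for 1s-on-1s screening, and the rules for d/f target electrons.
def slater_zeff_sp(Z, groups, target):
    n_t = groups[target][0]
    S = 0.0
    for i, (n, count) in enumerate(groups):
        if i == target:
            S += 0.35 * (count - 1)   # other electrons in the same group
        elif n == n_t - 1:
            S += 0.85 * count         # shell immediately below
        elif n <= n_t - 2:
            S += 1.00 * count         # deeper shells screen fully
        # groups with n >= n_t (other than the target group) contribute 0
    return Z - S

# Iron (Z = 26), grouped (1s)^2 (2s,2p)^8 (3s,3p)^8 (3d)^6 (4s)^2;
# for a 4s electron: S = 0.35*1 + 0.85*14 + 1.00*10 = 22.25.
iron = [(1, 2), (2, 8), (3, 8), (3, 6), (4, 2)]
print(slater_zeff_sp(26, iron, target=4))  # -> 3.75
```

Note how crude the rules are: Slater's value of 3.75 for the iron 4s electron differs substantially from the Hartree-Fock value of 5.43 quoted above.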
Hartree–Fock method
A more theoretically justified method is to calculate the shielding constant using the Hartree–Fock method. Douglas Hartree defined the effective Z of a Hartree–Fock orbital as:

$Z_{\mathrm{eff}} = \frac{\langle r\rangle_{\mathrm{H}}}{\langle r\rangle_{Z}}$

where

$\langle r\rangle_{\mathrm{H}}$ is the mean radius of the orbital for hydrogen, and
$\langle r\rangle_{Z}$ is the mean radius of the orbital for an electron configuration with nuclear charge Z.
Values
Updated effective nuclear charge values were provided by Clementi et al. in 1963 and 1967. In their work, screening constants were optimized to produce effective nuclear charge values that agree with SCF calculations. Though useful as a predictive model, the resulting screening constants contain little chemical insight as a qualitative model of atomic structure.
Comparison with nuclear charge
Nuclear charge is the electric charge of a nucleus of an atom, equal to the number of protons in the nucleus times the elementary charge. In contrast, the effective nuclear charge is the attractive positive charge of nuclear protons acting on valence electrons, which is always less than the total number of protons present in a nucleus due to the shielding effect.
See also
Atomic orbitals
Core charge
d-block contraction (or scandide contraction)
Electronegativity
Lanthanide contraction
Shielding effect
Slater-type orbitals
Valence electrons
Weak charge
References
Resources
Chemical bonding | Effective nuclear charge | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,262 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
498,206 | https://en.wikipedia.org/wiki/Dental%20material | Dental products are specially fabricated materials, designed for use in dentistry. There are many different types of dental products, and their characteristics vary according to their intended purpose.
Temporary dressings
A temporary dressing is a dental filling which is not intended to last in the long term. They are interim materials which may have therapeutic properties. A common use of temporary dressing occurs if root canal therapy is carried out over more than one appointment. In between each visit, the pulp canal system must be protected from contamination from the oral cavity, and a temporary filling is placed in the access cavity. Examples include:
Zinc oxide eugenol—bactericidal, cheap and easy to remove. Eugenol is derived from oil of cloves and has an obtundant (soothing) effect on the tooth, decreasing toothache. It is a suitable temporary material provided there are no biting forces on it. It is contraindicated if the final restorative material is composite, because eugenol adversely affects the bonding/polymerization process; also, when applied directly on the pulp tissue, it can produce chronic inflammation and result in pulp necrosis. Brands include Kalzinol and Sedanol.
Cements
Dental cements are used most often to bond indirect restorations such as crowns to the natural tooth surface. Examples include:
Zinc oxide cement—self-setting; it hardens when in contact with saliva. Example brands: Cavit, Coltosol.
Zinc phosphate cement
Zinc polycarboxylate cement—adheres to enamel and dentin. Example brand: PolyF.
Glass ionomer cement
Resin-based cement
Copper-based cement
Impression materials
Dental impressions are negative imprints of teeth and oral soft tissues from which a positive representation can be cast. They are used in prosthodontics (to make dentures), orthodontics, restorative dentistry, dental implantology and oral and maxillofacial surgery.
Because patients' soft-tissue undercuts may be shallow or deep, impression materials vary in their rigidity in order to obtain an accurate impression. Rigid materials are used with patients with shallow undercuts, while elastic materials are used with patients with deep undercuts, as the material must be flexible enough to reach the end-point of the undercut.
Impression materials are designed to be liquid or semi-solid when first mixed, then set hard in a few minutes, leaving imprints of oral structures.
Common dental impression materials include sodium alginate, polyether and silicones. Historically, plaster of Paris, zinc oxide eugenol and agar were used.
Lining materials
Dental lining materials are used during restorations of large cavities, and are placed between the remaining tooth structure and the restoration material. The purpose of this is to protect the dentinal tubules and the sensitive pulp, forming a barrier-like structure. After drilling the caries out of the tooth, the dentist applies a thin layer (approximately 0.5 mm) to the base of the cavity, followed by light curing. Another layer may be applied if the cavity is very large and deep.
There are many functions to dental lining materials, some of which are listed below:
Lining materials protect the weak tooth from post-operative hypersensitivity, reducing patient discomfort and allowing the tooth to heal at a faster rate after the procedure.
Some dental restorative materials, such as acrylic monomers in resin-based materials and phosphoric acid in silicate materials, may pose toxic and irritable effects to the pulp. Lining materials protect the tooth from such irritants.
Lining materials serve as an insulating layer to the tooth pulp from sudden changes in temperature when the patient takes hot or cold food, protecting them from potential pain resulting from thermal conductivity.
Lining materials are electrically insulating, preventing galvanic corrosion where two dissimilar metals (e.g. gold and amalgam) are placed next to each other.
Types
Calcium hydroxide
Calcium hydroxide has a relatively low compressive strength and a viscous consistency, making it difficult to apply to cavities in thick sections. A common technique to overcome this issue is to apply a thin sub-lining of calcium hydroxide, then build up with zinc phosphate prior to amalgam condensation. Leaching of calcium hydroxide generates a relatively high pH environment in the area surrounding the cement, making it bactericidal.
It also has a unique effect of initiating calcification and stimulating the formation of secondary dentine, due to an irritation effect of the pulp tissues by the cement.
Calcium hydroxide is radio-opaque and acts as a good thermal and electrical insulation. However, due to its low compressive strength it is unable to withstand amalgam packing; a strong cement base material should be placed above it to counter this.
Calcium silicate-based liners have become alternatives to calcium hydroxide and are preferred by practitioners for their bioactive and sealing properties; the material triggers a biological response and results in formation of bonding with the tissue. They are commonly used as pulp capping agents and lining materials for silicate and resin-based filling materials.
It is usually supplied as two pastes: one containing a glycol salicylate and another containing zinc oxide with calcium hydroxide. On mixing, a chelate compound is formed. Light-activated versions are also available; these contain polymerization activators, hydroxyethyl methacrylate and dimethacrylates, which when light-activated undergo a polymerization reaction of the modified methacrylate monomer.
Polycarboxylate cement
Polycarboxylate cement has the compressive strength to resist amalgam condensation. It is acidic, but less acidic than phosphate cements due to it having a higher molecular weight and polyacrylic acid being weaker than phosphoric acid. It forms a strong bond with dentine and enamel, allowing it to form a coronal seal. In addition, it is an electrical and thermal insulator while also releasing fluoride, rendering it bacteriostatic. It is also radio-opaque, making it an excellent lining material.
Care has to be taken in handling such material, as it has a strong bond with stainless steel instruments once it sets.
Polycarboxylate cement is commonly used as a luting agent or as a cavity base material. However, it tends to be rubbery during its setting reaction and adheres to stainless steel instruments, so most operators prefer not to use it in deep cavities.
It is usually supplied as a powder containing zinc oxide and a liquid containing aqueous polyacrylic acid. Setting is an acid–base reaction in which zinc oxide reacts with the acid groups of the polyacid. This forms a reaction product of unreacted zinc oxide cores bound by a salt matrix, in which the polyacrylic acid chains are cross-linked by zinc ions.
Glass ionomer
Glass ionomer (GI) has the strongest compressive and tensile strength of all linings, so it can withstand amalgam condensation in high stress bearing areas such as class II cavities. GI is used as a lining material as it is very compatible with most restorative materials, insulates thermally and electrically, and adheres to enamel and dentine. GI lining contains glass of smaller particle sizes compared to its adhesive restorative mix, to allow formation of a thinner film. Some variations are also radiopaque, making them good for X-ray cavity detection. In addition, GI is bacteriostatic due to its fluoride release from un-reacted glass cores.
GIs are usually used as a lining material for composite resins or as luting agents for orthodontic bands.
The reaction is an acid–base reaction between calcium–aluminum–silicate glass powder and polyacrylic acid. They come as a powder and a liquid, which are mixed on a pad, or in single-use capsules. Resin-modified GIs contain a photoinitiator (usually camphorquinone) and an amine, and are light-cured with an LED light-curing unit. Setting takes place by a combination of acid–base reaction and chemically activated polymerization.
Zinc oxide eugenol
Zinc oxide eugenol has the lowest compressive and tensile strength of the liners, so its use is limited to small or non-stress-bearing areas such as Class V cavities. This cavity lining is often used with a high-strength base to provide strength, rigidity and thermal insulation. Zinc oxide eugenol can be used as a lining in deep cavities without causing harm to the pulp, due to its obtundant effect on the pulp as well as its bactericidal properties owing to zinc. However, eugenol may affect resin-based filling materials, as it interferes with polymerization and occasionally causes discoloration; caution should therefore be exercised when using the two together. It is also radio-opaque, allowing fillings to be visible on X-rays.
Zinc oxide eugenol is usually used as a temporary filling/luting agent due to its low compressive strength making it easily removed, or as a lining for amalgam as it is incompatible with composites resins.
It is supplied as a two paste system. Equal length of two pastes are dispensed into a paper pad and mixed.
Restorative materials
Dental restorative materials are used to replace tooth structure loss, usually due to dental caries (cavities), but also tooth wear and dental trauma. On other occasions, such materials may be used for cosmetic purposes to alter the appearance of an individual's teeth.
There are many challenges for the physical properties of the ideal dental restorative material. The ideal material would be identical to natural tooth structure in strength, adherence, and appearance. The properties of such material can be divided into four categories: physical properties, biocompatibility, aesthetics and application.
Physical properties of good restorative materials include low thermal conductivity and expansion, resistance to different categories of forces and wear such as attrition and abrasion, and resistance to chemical erosion. There must also be good bonding strength to the tooth. Everyday masticatory forces and conditions must be withstood without material fatigue.
Biocompatibility refers to how well the material coexists with the biological equilibrium of the tooth and body systems. Since fillings are in close contact with mucosa, tooth, and pulp, biocompatibility is very important. Common problems with some of the current dental materials include chemical leakage from the material, pulpal irritation and, less commonly, allergic reactions. Some of the byproducts of the chemical reactions during different stages of material hardening need to be considered.
Radiopacity in dental materials is an important property that allows for distinguishing restorations from teeth and surrounding structures, assessing the absorption of materials into bone structure, and detecting cement dissolution or other failures that could cause harm to the patient. Cements, composites, endodontic sealers, bone grafts, and acrylic resins all benefit from the addition of radiopaque materials. Examples of these materials include zinc oxide, zirconium dioxide, titanium dioxide, barium sulfate, and ytterbium(III) fluoride.
Ideally, filling materials should match the surrounding tooth structure in shade, translucency, and texture.
Dental operators require materials that are easy to manipulate and shape, where the chemistry of any reactions that need to occur are predictable or controllable.
Direct restorative materials
Direct restorations are ones which are placed directly into a cavity on a tooth and shaped to fit. The chemistry of the setting reaction for direct restorative materials is designed to be more biologically compatible. Heat and byproducts generated cannot damage the tooth or patient, since the reaction needs to take place while in contact with the tooth during restoration. This ultimately limits the strength of the materials, since harder materials need more energy to manipulate. The type of filling material used has only a minor effect on how long restorations last. The majority of clinical studies indicate annual failure rates (AFRs) between 1% and 3% for tooth-colored fillings on back teeth. Endodontically (root canal) treated teeth have AFRs between 2% and 12%. The main reasons for failure are cavities that occur around the filling and fracture of the remaining tooth. These are related to personal cavity risk and factors like tooth grinding (bruxism).
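As a rough quantitative aid (an assumption-laden sketch, not a claim from the clinical studies), a constant annual failure rate can be converted into an expected survival fraction:

```python
# Illustrative arithmetic: the fraction of restorations expected to
# survive t years, assuming a constant, independent annual failure rate.
def surviving_fraction(afr: float, years: int) -> float:
    return (1.0 - afr) ** years

for afr in (0.01, 0.03):
    print(f"AFR {afr:.0%}: {surviving_fraction(afr, 10):.1%} survive 10 years")
# AFR 1% -> 90.4%; AFR 3% -> 73.7% still in service at 10 years.
```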
Amalgam
Amalgam is a metallic filling material composed from a mixture of mercury (from 43% to 54%) and a powdered alloy made mostly of silver, tin, zinc and copper, commonly called the amalgam alloy. Amalgam does not adhere to tooth structure without the aid of cements or use of techniques which lock in the filling, using the same principles as a dovetail joint.
Amalgam is still used extensively in many parts of the world because of its cost effectiveness, superior strength and longevity. However, the metallic colour is not aesthetically pleasing and tooth coloured alternatives are continually emerging with increasingly comparable properties. Due to the known toxicity of mercury, there is some controversy about the use of amalgams. The Swedish government banned the use of mercury amalgam in June 2009. Research has shown that, while amalgam use is controversial and may increase mercury levels in the human body, these levels are below safety threshold levels established by the World Health Organization and the U.S. Environmental Protection Agency. However, there are certain subpopulations who, due to inherited genetic variabilities, are more sensitive to mercury than these threshold levels. They may experience adverse effects caused by amalgam restoration, including neural defects caused by impaired neurotransmitter processing.
Composite resin
Composite resin fillings (also called white fillings) are a mixture of nanoparticles or powdered glass and plastic resin, and can be made to resemble the appearance of the natural tooth. Although cosmetically superior to amalgam fillings, composite resin fillings are usually more expensive. Bis-GMA based resins contain Bisphenol A, a known endocrine disrupter chemical, and may contribute to the development of breast cancer. However, there is no added risk of kidney or endocrine injury in choosing composite restorations over amalgams. PEX-based materials do not contain Bisphenol A and are the least cytotoxic material available.
Most modern composite resins are light-cured photopolymers, meaning that they harden with light exposure. They can then be polished to achieve maximum aesthetic results. Composite resins experience a very small amount of shrinkage upon curing, causing the material to pull away from the walls of the cavity preparation. This makes the tooth slightly more vulnerable to microleakage and recurrent decay. Microleakage can be minimized or eliminated with proper handling techniques and appropriate material selection.
In some circumstances, using composite resin allows less of the tooth structure to be removed compared to other dental materials such as amalgam and indirect methods of restoration. This is because composite resins bind to enamel (and dentin too, although not as well) via a micromechanical bond. As conservation of tooth structure is a key ingredient in tooth preservation, many dentists prefer placing materials like composite instead of amalgam fillings whenever possible.
Generally, composite fillings are used to fill a carious lesion involving highly visible areas (such as the central incisors or any other teeth that can be seen when smiling) or when conservation of tooth structure is a top priority.
The bond of composite resin to tooth is especially affected by moisture contamination and the cleanliness of the prepared surface. Other materials can be selected when restoring teeth where moisture control techniques are not effective.
Glass ionomer cement
The concept of using "smart" materials in dentistry has attracted a lot of attention in recent years. Conventional glass ionomer cements (GICs) have many applications in dentistry. They are biocompatible with the dental pulp to some extent. Clinically, this material was initially used as a biomaterial to replace the lost osseous tissues in the human body.
GIC fillings are a mixture of glass and an organic acid.
The cavity preparation of a GIC filling is the same as a composite resin. GICs are chemically set via an acid-base reaction. Upon mixing of the material components, no light cure is needed to harden the material once placed in the cavity preparation. After the initial set, GICs still need time to fully set and harden.
An advantage of GICs compared to other restorative materials is that they can be placed in cavities without any need for bonding agents. Another advantage is that they are not subject to shrinkage and microleakage, as the bonding mechanism is an acid-base reaction and not a polymerization reaction. Additionally, GICs contain and release fluoride, which is important to prevent carious lesions. As GICs release their fluoride, they can be "recharged" by the use of fluoride-containing toothpaste; this means they can be used to treat patients at high risk of caries.
Although they are tooth-colored, GICs vary in translucency, and their aesthetic potential is not as great as that of composite resins. Newer formulations that contain light-cured resins can achieve a greater aesthetic result, but do not release fluoride as well as conventional GICs.
The most important disadvantage of GICs is lack of adequate strength and toughness. To improve the mechanical properties of the conventional GIC, resin-modified ionomers have been marketed. GICs are usually weak after setting and are not stable in water; however, they become stronger with the progression of reactions and become more resistant to moisture.
New generations of GICs aim to regenerate tissues; they use bioactive materials in the form of a powder or solution to induce local tissue repair. These materials release chemical agents in the form of dissolved ions or growth factors such as bone morphogenetic protein, which stimulate and activate cells.
GICs are about as expensive as composite resin. The fillings do not wear as well as composite resin fillings, but they are generally considered good materials to use for root caries and for sealants.
Resin modified glass-ionomer cement (RMGIC)
A combination of glass-ionomer and composite resin, these fillings are a mixture of glass, an organic acid, and resin monomers that harden when light cured (light-activated polymerization besides the acid-base reaction of conventional GICs). The cost is similar to composite resin. It holds up better than GIC, but not as well as composite resin, and is not recommended for biting surfaces of adult teeth, or when control of moisture cannot be achieved.
Generally, RMGICs can achieve a better aesthetic result than conventional GICs, but not as good as pure composites.
Compomers
Another combination of composite resin and GIC technology, compomers are essentially made up of filler, dimethacrylate monomer, difunctional resin, photo-activator and initiator, and hydrophilic monomers. The filler decreases the proportion of resin and increases the mechanical strength, as well as improving the material's appearance.
Although compomers have better mechanical and aesthetic properties than RMGIC, they have some disadvantages which limit their applications:
Compomers have weaker wear properties.
They cannot adhere to tooth tissue due to the presence of resin, which can make it shrink on polymerisation. They therefore require bonding materials.
They release low levels of fluoride, so cannot act as a fluoride reservoir.
They have high staining susceptibility; uptake of oral fluid causes them to show staining soon after placement.
Due to their relatively weak mechanical properties, compomers are unfit for stress-bearing restorations but can be used in the deciduous dentition, where lower loads are anticipated.
Cermets
Dental cermets, also known as silver cermets, were created to improve the wear resistance and hardness of glass ionomer cements by adding silver. Their other advantages are that they adhere directly to tooth tissue, and are radio-opaque, which helps with identification of secondary caries when future radiographs are taken.
However, cermets have poorer aesthetics, appearing metallic rather than white. They also have a similar compressive strength, flexural strength, and solubility as GICs, some of the main limiting factors for both materials. In addition, their fluoride release is poorer than that of GICs. Clinical studies have shown cermets perform poorly. All these disadvantages led to the decline in the use of this restorative material.
Indirect restorative materials
An indirect restoration is one where the teeth are first prepared, then an impression is taken and sent to a dental technician who fabricates the restoration according to the dentist's prescription.
Porcelain
Porcelain fillings are hard, but can cause wear on opposing teeth. Their hardness and rigidity enable them to resist abrasive forces, and they are aesthetically good as they mimic the appearance of natural teeth. However, they are also brittle and not always recommended for molar fillings. Porcelain can be strengthened in several ways: by soaking the fired material in molten salt to allow exchange of sodium and potassium ions at the surface, which creates compressive stresses in the outer layer; by controlling cooling after firing; and by the use of pure alumina inserts, a core of alumina, or alumina powder, which act as crack stoppers and are highly compatible with porcelain.
Dental composite materials
Tooth colored dental composite materials are either used as a direct filling or as the construction material for an indirect inlay. They are usually cured by light.
Nano-ceramic particles
Nano-ceramic particles embedded in a resin matrix are less brittle and therefore less likely to crack, or chip, than all-ceramic indirect fillings. They absorb the shock of chewing more like natural teeth, and more like resin or gold fillings, than do ceramic fillings; at the same time they are more resistant to wear than all-resin indirect fillings. They are available in blocks for use with CAD/CAM systems.
Gold fillings
Gold fillings have excellent durability, wear well, and do not cause excessive wear to the opposing teeth, but they do conduct heat and cold, which can be irritating. There are two categories: cast gold fillings (gold inlays and onlays) made with 14 or 18 kt gold, and gold foil made with pure 24 kt gold that is burnished layer by layer. For years, they have been considered the benchmark of restorative dental materials. However, recent advances in dental porcelains and a consumer focus on aesthetic results have caused the demand for gold fillings to drop. Gold fillings are sometimes quite expensive, but they last a very long time, meaning that gold restorations are less costly and painful in the long run. It is not uncommon for a gold crown to last 30 years.
Other historical fillings
Lead fillings were used in the 18th century, but became unpopular in the 19th century because of their softness. This was before lead poisoning was understood.
According to American Civil War-era dental handbooks, since the early 19th century metallic fillings had been made of lead, gold, tin, platinum, silver, aluminum, or amalgam. A pellet was rolled slightly larger than the cavity, condensed into place with instruments, then shaped and polished in the patient's mouth. The filling was usually left "high", with final condensation—"tamping down"—occurring while the patient chewed food. Gold foil was the most popular filling material during the Civil War. Tin and amalgam were also popular due to lower cost, but were held in lower regard.
One survey of dental practices in the mid-19th century catalogued dental fillings found in the remains of seven Confederate soldiers from the Civil War. They were made of:
Gold foil: preferred because of its durability and safety.
Platinum: rarely used because it was too hard, inflexible and difficult to form into foil.
Aluminum: failed because of its lack of malleability but has been added to some amalgams.
Tin and iron: believed to have been a very popular filling material during the Civil War. Tin foil was recommended when a cheaper material than gold was requested by the patient, but it wore down rapidly; even if it could be replaced cheaply and quickly, there was a concern, specifically from Chapin A. Harris, that it would oxidise in the mouth and cause a recurrence of caries. Due to blackening, tin was only recommended for posterior teeth.
Thorium: the element's radioactivity was unknown at that time, and the dentist probably thought he was working with tin.
Lead and tungsten mixture: probably from shotgun pellets. Lead was rarely used in the 19th century, as it is soft and quickly worn down by mastication, and had known harmful health effects.
Acrylic polymers
Acrylics are used in the fabrication of dentures, artificial teeth, impression trays, maxillofacial and orthodontic appliances, and temporary (provisional) restorations. They cannot be used as tooth filling materials because they may generate heat and acids during setting, which can lead to pulpitis and periodontitis, and in addition they shrink.
Failure of dental restorations
Fillings have a finite lifespan; composites appear to have a higher failure rate than amalgam over five to seven years. How well people keep their teeth clean and avoid cavities is probably a more important factor than the material chosen for the restoration.
Evaluation and regulation of dental materials
The Nordic Institute of Dental Materials (NIOM) performs several tests to evaluate dental products in the Nordic countries. In the European Union, dental materials are classified as medical devices according to the Medical Devices Directive. In the USA, the Food and Drug Administration is the regulatory body for dental products.
References
Restorative dentistry | Dental material | [
"Physics"
] | 5,432 | [
"Materials",
"Dental materials",
"Matter"
] |
498,228 | https://en.wikipedia.org/wiki/Carbon%E2%80%93carbon%20bond | A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond.
Chains and branching
Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry.
Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have:
A primary carbon has one carbon neighbor.
A secondary carbon has two carbon neighbors.
A tertiary carbon has three carbon neighbors.
A quaternary carbon has four carbon neighbors.
In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine.
Synthesis
Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in the production of many human-made chemicals such as pharmaceuticals and plastics. The reverse reaction, where a carbon-carbon bond is broken, is known as carbon-carbon bond activation.
Some examples of reactions which form carbon–carbon bonds are the aldol reaction, Diels–Alder reaction, Grignard reaction, cross-coupling reactions, the Michael reaction and the Wittig reaction.
The directed synthesis of desired three-dimensional structures for tertiary carbons was largely solved during the late 20th century, but the same ability to direct quaternary carbon synthesis did not start to emerge until the first decade of the 21st century.
Bond strengths and lengths
The carbon–carbon single bond is weaker than C–H, O–H, N–H, H–H, H–Cl, C–F, and many double or triple bonds, and comparable in strength to C–O, Si–O, P–O, and S–H bonds, but it is commonly considered strong. Commonly encountered C–C single-bond dissociation energies lie roughly in the range 80–90 kcal/mol (about 90 kcal/mol in ethane); occasionally, outliers deviate drastically from this range.
Extreme cases
Long, weak C-C single bonds
Various extreme cases have been identified where the C-C bond is elongated. In Gomberg's dimer, one C-C bond is rather long at 159.7 picometers. It is this bond that reversibly and readily breaks at room temperature in solution:
In the even more congested molecule hexakis(3,5-di-tert-butylphenyl)ethane, the bond dissociation energy to form the stabilized triarylmethyl radical is only 8 kcal/mol. Also a consequence of its severe steric congestion, hexakis(3,5-di-tert-butylphenyl)ethane has a greatly elongated central bond with a length of 167 pm.
Twisted, weak C-C double bonds
The structure of tetrakis(dimethylamino)ethylene (TDAE) is highly distorted: the dihedral angle between the two N2C ends is 28°, although the C=C distance is a normal 135 pm. The nearly isostructural tetraisopropylethylene also has a C=C distance of 135 pm, but its C6 core is planar.
Short, strong C-C triple bonds
On the opposite extreme, the central carbon–carbon single bond of diacetylene is very strong at 160 kcal/mol, as the single bond joins two carbons of sp hybridization. Carbon–carbon multiple bonds are generally stronger; the double bond of ethylene and triple bond of acetylene have been determined to have bond dissociation energies of 174 and 230 kcal/mol, respectively. A very short triple bond of 115 pm has been observed for the iodonium species [HC≡C–I+Ph] [CF3SO3–], due to the strongly electron-withdrawing iodonium moiety.
See also
Carbon–hydrogen bond
Carbon–oxygen bond
Carbon–nitrogen bond
References
Organic chemistry
Chemical bonding | Carbon–carbon bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,078 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
498,255 | https://en.wikipedia.org/wiki/Aldol%20condensation | An aldol condensation is a condensation reaction in organic chemistry in which two carbonyl moieties (of aldehydes or ketones) react to form a β-hydroxyaldehyde or β-hydroxyketone (an aldol reaction), and this is then followed by dehydration to give a conjugated enone.
The overall reaction equation is as follows (where the Rs can be H); for example, self-condensation of acetaldehyde gives the β-hydroxyaldehyde, which dehydrates to crotonaldehyde: 2 CH3CHO → CH3CH(OH)CH2CHO → CH3CH=CHCHO + H2O.
Aldol condensations are important in organic synthesis and biochemistry as ways to form carbon–carbon bonds.
In its usual form, it involves the nucleophilic addition of a ketone enolate to an aldehyde to form a β-hydroxy ketone, or aldol (aldehyde + alcohol), a structural unit found in many naturally occurring molecules and pharmaceuticals.
The term aldol condensation is also commonly used, especially in biochemistry, to refer to just the first (addition) stage of the process—the aldol reaction itself—as catalyzed by aldolases. However, the first step is formally an addition reaction rather than a condensation reaction because it does not involve the loss of a small molecule.
Mechanism
The first part of this reaction is an aldol reaction; the second part is a dehydration, an elimination reaction involving removal of a water molecule or an alcohol molecule. Dehydration may be accompanied by decarboxylation when an activated carboxyl group is present. The aldol addition product can be dehydrated via two mechanisms: a strong base such as potassium t-butoxide, potassium hydroxide, or sodium hydride deprotonates the product to an enolate, which eliminates via the E1cB mechanism, while dehydration in acid proceeds via an E1 mechanism. Depending on the nature of the desired product, the aldol condensation may be carried out under two broad types of conditions: kinetic control or thermodynamic control. Both ketones and aldehydes are suitable for aldol condensation reactions; in the examples below, aldehydes are used.
Base-catalyzed aldol condensation
The mechanism of the base-catalyzed aldol condensation is as follows.
The process begins when a free hydroxide (strong base) strips the highly acidic proton at the alpha carbon of the aldehyde. This deprotonation causes the electrons from the C–H bond to shift and create a new C–C pi bond. The new pi bond then acts as a nucleophile and attacks the remaining aldehyde in the solution, resulting in the formation of a new C–C bond and regeneration of the base catalyst. In the second part of the reaction, the presence of base leads to elimination of water and formation of a new C–C pi bond. The product is referred to as the aldol condensation product.
Acid-catalyzed aldol condensation
The acid-catalyzed aldol condensation proceeds analogously: protonation of the carbonyl group activates it toward attack, the enol tautomer of the second carbonyl compound acts as the nucleophile, and the resulting aldol is dehydrated via an E1 mechanism.
Crossed aldol condensation
A crossed aldol condensation results when two dissimilar carbonyl compounds containing α-hydrogen(s) undergo aldol condensation. Ordinarily this leads to four possible products, as either carbonyl compound can act as the nucleophile and self-condensation is possible, which makes a synthetically useless mixture. The problem can be avoided if one of the compounds does not contain an α-hydrogen, rendering it non-enolizable. In an aldol condensation between an aldehyde and a ketone, the ketone acts as the nucleophile, as its carbonyl carbon does not possess high electrophilic character owing to the +I effect and steric hindrance; usually the crossed product is the major one. Traces of the self-aldol product from the aldehyde can be avoided by first preparing a mixture of a suitable base and the ketone and then adding the aldehyde slowly to the reaction mixture. Using too concentrated a base could lead to a competing Cannizzaro reaction.
Examples
The Aldox process, developed by Royal Dutch Shell and Exxon, converts propene and syngas to 2-ethylhexanol via hydroformylation to butyraldehyde, aldol condensation to 2-ethylhexanal and finally hydrogenation.
Pentaerythritol is produced on a large scale beginning with crossed aldol condensation of acetaldehyde and three equivalents of formaldehyde to give pentaerythrose, which is further reduced in a Cannizzaro reaction.
Scope
Ethyl 2-methylacetoacetate and campholenic aldehyde react in an aldol condensation; the synthetic procedure is typical for this type of reaction. In the process, in addition to water, an equivalent of ethanol and carbon dioxide are lost in decarboxylation.
Ethyl glyoxylate 2 and glutaconate (diethyl 2-methylpent-2-enedioate) 1 react to give isoprenetricarboxylic acid 3 (with the isoprene, 2-methylbuta-1,3-diene, skeleton) in the presence of sodium ethoxide. The reaction product is very unstable, with initial loss of carbon dioxide followed by many secondary reactions; this is believed to be due to steric strain resulting from the methyl group and the carboxylic group in the cis-dienoid structure.
Occasionally, an aldol condensation is buried in a multistep reaction or in catalytic cycle as in the following example:
In this reaction an alkynal 1 is converted into a cycloalkene 7 with a ruthenium catalyst and the actual condensation takes place with intermediate 3 through 5. Support for the reaction mechanism is based on isotope labeling.
The reaction between menthone ((2S,5R)-2-isopropyl-5-methylcyclohexanone) and anisaldehyde (4-methoxybenzaldehyde) is complicated due to steric shielding of the ketone group. This obstacle is overcome by using a strong base such as potassium hydroxide and a very polar solvent such as DMSO in the reaction below:
The product can epimerize by way of a common intermediate—enolate A—to convert between the original (S,R) and the (R,R) epimers. The (R,R) product is insoluble in the reaction solvent whereas the (S,R) is soluble. The precipitation of the (R,R) product drives the epimerization equilibrium reaction to form this as the major product.
Other condensation reactions
There are other reactions of carbonyl compounds similar to aldol condensation:
When the base is an amine and the active hydrogen compound is sufficiently activated the reaction is called a Knoevenagel condensation.
In a Perkin reaction the aldehyde is aromatic and the enolate generated from an anhydride.
Claisen-Schmidt condensation between an aldehyde or ketone having an α-hydrogen with an aromatic carbonyl compound lacking an α-hydrogen.
A Claisen condensation involves two ester compounds.
A Dieckmann condensation involves two ester groups in the same molecule and yields a cyclic molecule
In the Japp–Maitland condensation water is removed not by an elimination reaction but by a nucleophilic displacement
A Robinson annulation involves an α,β-unsaturated ketone and a carbonyl group, which first engage in a Michael reaction prior to the aldol condensation.
In the Guerbet reaction, an aldehyde, formed in situ from an alcohol, self-condenses to the dimerized alcohol.
See also
Auwers synthesis
Aldol addition
References
Notes
External links
Organic Chemistry Portal
Condensation reactions
Carbon-carbon bond forming reactions | Aldol condensation | [
"Chemistry"
] | 1,687 | [
"Condensation reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
499,429 | https://en.wikipedia.org/wiki/D%27Alembert%27s%20principle | D'Alembert's principle, also known as the Lagrange–d'Alembert principle, is a statement of the fundamental classical laws of motion. It is named after its discoverer, the French physicist and mathematician Jean le Rond d'Alembert, and Italian-French mathematician Joseph Louis Lagrange. D'Alembert's principle generalizes the principle of virtual work from static to dynamical systems by introducing forces of inertia which, when added to the applied forces in a system, result in dynamic equilibrium.
D'Alembert's principle can be applied in cases of kinematic constraints that depend on velocities. The principle does not apply for irreversible displacements, such as sliding friction, and more general specification of the irreversibility is required.
Statement of the principle
The principle states that the sum of the differences between the forces acting on a system of massive particles and the time derivatives of the momenta of the system itself projected onto any virtual displacement consistent with the constraints of the system is zero. Thus, in mathematical notation, d'Alembert's principle is written as follows:

\sum_i (\mathbf{F}_i - m_i \dot{\mathbf{v}}_i) \cdot \delta\mathbf{r}_i = 0

where:
i is an integer used to indicate (via subscript) a variable corresponding to a particular particle in the system,
\mathbf{F}_i is the total applied force (excluding constraint forces) on the i-th particle,
m_i is the mass of the i-th particle,
\mathbf{v}_i is the velocity of the i-th particle,
\delta\mathbf{r}_i is the virtual displacement of the i-th particle, consistent with the constraints.
Newton's dot notation is used to represent the derivative with respect to time. The above equation is often called d'Alembert's principle, but it was first written in this variational form by Joseph Louis Lagrange. D'Alembert's contribution was to demonstrate that in the totality of a dynamic system the forces of constraint vanish. That is to say that the generalized forces need not include constraint forces. It is equivalent to the somewhat more cumbersome Gauss's principle of least constraint.
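As a concrete check of the statement above, the following sketch applies the principle to a plane pendulum (a minimal hypothetical example, not drawn from the article): gravity is the only applied force, the rod tension is a constraint force that drops out, and projecting F − m dv/dt onto a virtual displacement consistent with the constraint recovers the familiar equation of motion. It assumes SymPy is available.

```python
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')(t)  # generalized coordinate: angle from the vertical

# Bob position under the rigid-rod constraint, expressed through theta alone
r = sp.Matrix([l * sp.sin(theta), -l * sp.cos(theta)])

F = sp.Matrix([0, -m * g])   # applied force: gravity (rod tension is a constraint force)
a = r.diff(t, 2)             # acceleration of the bob

# Virtual displacement consistent with the constraint: delta r = (dr/dtheta) * delta theta
dr_dtheta = r.diff(theta)

# d'Alembert: (F - m a) . delta r = 0 for arbitrary delta theta
eq = sp.simplify((F - m * a).dot(dr_dtheta))
print(sp.simplify(eq / (m * l)))
# -> -g*sin(theta(t)) - l*Derivative(theta(t), (t, 2)),
# i.e. the familiar pendulum equation  l*theta'' + g*sin(theta) = 0.
```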
Derivations
General case with variable mass
The general statement of d'Alembert's principle mentions "the time derivatives of the momenta of the system." By Newton's second law, the first time derivative of momentum is the force. The momentum of the i-th mass is the product of its mass and velocity:

\mathbf{p}_i = m_i \mathbf{v}_i

and its time derivative is

\dot{\mathbf{p}}_i = \dot{m}_i \mathbf{v}_i + m_i \dot{\mathbf{v}}_i.

In many applications, the masses are constant and this equation reduces to

\dot{\mathbf{p}}_i = m_i \dot{\mathbf{v}}_i = m_i \mathbf{a}_i.

However, some applications involve changing masses (for example, chains being rolled up or being unrolled) and in those cases both terms \dot{m}_i \mathbf{v}_i and m_i \dot{\mathbf{v}}_i have to remain present, giving

\sum_i (\mathbf{F}_i - \dot{m}_i \mathbf{v}_i - m_i \dot{\mathbf{v}}_i) \cdot \delta\mathbf{r}_i = 0.
Special case with constant mass
Consider Newton's law for a system of particles of constant mass m_i. The total force on each particle is

\mathbf{F}_i^{(T)} = m_i \dot{\mathbf{v}}_i,

where
\mathbf{F}_i^{(T)} are the total forces acting on the system's particles,
m_i \dot{\mathbf{v}}_i are the inertial forces that result from the total forces.
Moving the inertial forces to the left gives an expression that can be considered to represent quasi-static equilibrium, but which is really just a small algebraic manipulation of Newton's law:

\mathbf{F}_i^{(T)} - m_i \dot{\mathbf{v}}_i = \mathbf{0}.
Considering the virtual work, \delta W, done by the total and inertial forces together through an arbitrary virtual displacement, \delta\mathbf{r}_i, of the system leads to a zero identity, since the forces involved sum to zero for each particle:

\delta W = \sum_i \mathbf{F}_i^{(T)} \cdot \delta\mathbf{r}_i - \sum_i m_i \dot{\mathbf{v}}_i \cdot \delta\mathbf{r}_i = 0.

The original vector equation could be recovered by recognizing that the work expression must hold for arbitrary displacements. Separating the total forces into applied forces, \mathbf{F}_i, and constraint forces, \mathbf{C}_i, yields

\delta W = \sum_i \mathbf{F}_i \cdot \delta\mathbf{r}_i + \sum_i \mathbf{C}_i \cdot \delta\mathbf{r}_i - \sum_i m_i \dot{\mathbf{v}}_i \cdot \delta\mathbf{r}_i = 0.

If arbitrary virtual displacements are assumed to be in directions that are orthogonal to the constraint forces (which is not usually the case, so this derivation works only for special cases), the constraint forces do no work: \sum_i \mathbf{C}_i \cdot \delta\mathbf{r}_i = 0. Such displacements are said to be consistent with the constraints. This leads to the formulation of d'Alembert's principle, which states that the difference of applied forces and inertial forces for a dynamic system does no virtual work:

\delta W = \sum_i (\mathbf{F}_i - m_i \dot{\mathbf{v}}_i) \cdot \delta\mathbf{r}_i = 0.
There is also a corresponding principle for static systems called the principle of virtual work for applied forces.
D'Alembert's principle of inertial forces
D'Alembert showed that one can transform an accelerating rigid body into an equivalent static system by adding the so-called "inertial force" and "inertial torque" or moment. The inertial force must act through the center of mass and the inertial torque can act anywhere. The system can then be analyzed exactly as a static system subjected to this "inertial force and moment" and the external forces. The advantage is that in the equivalent static system one can take moments about any point (not just the center of mass). This often leads to simpler calculations because any force (in turn) can be eliminated from the moment equations by choosing the appropriate point about which to apply the moment equation (sum of moments = zero). In courses on the fundamentals of dynamics and the kinematics of machines, this principle helps in analyzing the forces that act on a link of a mechanism when it is in motion. In textbooks of engineering dynamics, this is sometimes referred to as d'Alembert's principle.
Some educators caution that attempts to use d'Alembert inertial mechanics lead students to make frequent sign errors. A potential cause for these errors is the sign of the inertial forces. Inertial forces can be used to describe an apparent force in a non-inertial reference frame that has an acceleration with respect to an inertial reference frame. In such a non-inertial reference frame, a mass that is at rest and has zero acceleration in an inertial reference system, because no forces are acting on it, will still have an acceleration and an apparent inertial, or pseudo or fictitious force will seem to act on it: in this situation the inertial force has a minus sign.
Dynamic equilibrium
D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of rigid bodies with m generalized coordinates requires

\delta W = (Q_1 + Q_1^*)\,\delta q_1 + \dots + (Q_m + Q_m^*)\,\delta q_m = 0

for any set of virtual displacements \delta q_j, with Q_j being a generalized applied force and Q_j^* being a generalized inertia force. This condition yields m equations,

Q_j + Q_j^* = 0, \qquad j = 1, \dots, m,

which can also be written as

\frac{d}{dt}\frac{\partial T}{\partial \dot{q}_j} - \frac{\partial T}{\partial q_j} = Q_j, \qquad j = 1, \dots, m.

The result is a set of m equations of motion that define the dynamics of the rigid body system.
Formulation using the Lagrangian
D'Alembert's principle can be rewritten in terms of the Lagrangian L of the system as a generalized version of Hamilton's principle for the case of point particles, as follows:

\delta \int_{t_1}^{t_2} L\,dt + \int_{t_1}^{t_2} \sum_i \mathbf{F}_i \cdot \delta\mathbf{r}_i \, dt = 0

where:
\mathbf{F}_i are the applied forces,
\delta\mathbf{r}_i is the virtual displacement of the i-th particle, consistent with the constraints and vanishing at the endpoints t_1 and t_2,
the critical curve satisfies the constraints.
With the Lagrangian taken as the kinetic energy,

L = \sum_i \tfrac{1}{2}\, m_i\, \mathbf{v}_i \cdot \mathbf{v}_i,

the previous statement of d'Alembert's principle is recovered after integrating by parts.
Generalization for thermodynamics
An extension of d'Alembert's principle can be used in thermodynamics. For instance, for an adiabatically closed thermodynamic system described by a Lagrangian depending on the particle positions and velocities and on a single entropy S, with constant masses m_i, the principle takes an analogous variational form in which the previous virtual-displacement constraints are generalized to involve the entropy. Here T is the temperature of the system, the external forces and the internal dissipative forces enter separately, and the principle yields the mechanical and thermal balance equations.
Typical applications of the principle include thermo-mechanical systems, membrane transport, and chemical reactions.
In the absence of thermal effects and dissipative forces, the classical d'Alembert principle and equations are recovered.
References
Classical mechanics
Dynamical systems
Lagrangian mechanics
Principles | D'Alembert's principle | [
"Physics",
"Mathematics"
] | 1,576 | [
"Lagrangian mechanics",
"Mechanics",
"Classical mechanics",
"Dynamical systems"
] |
500,163 | https://en.wikipedia.org/wiki/Self-replicating%20spacecraft | The concept of self-replicating spacecraft, as envisioned by mathematician John von Neumann, has been described by futurists and has been discussed across a wide breadth of hard science fiction novels and stories. Self-replicating probes are sometimes referred to as von Neumann probes. Self-replicating spacecraft would in some ways either mimic or echo the features of living organisms or viruses.
Theory
Von Neumann argued that the most effective way of performing large-scale mining operations such as mining an entire moon or asteroid belt would be by self-replicating spacecraft, taking advantage of their exponential growth. In theory, a self-replicating spacecraft could be sent to a neighboring planetary system, where it would seek out raw materials (extracted from asteroids, moons, gas giants, etc.) to create replicas of itself. These replicas would then be sent out to other planetary systems. The original "parent" probe could then pursue its primary purpose within the star system. This mission varies widely depending on the variant of self-replicating starship proposed.
Given this pattern, and its similarity to the reproduction patterns of bacteria, it has been pointed out that von Neumann machines might be considered a form of life. In his short story "Lungfish", David Brin touches on this idea, pointing out that self-replicating machines launched by different species might actually compete with one another (in a Darwinistic fashion) for raw material, or even have conflicting missions. Given enough variety of "species" they might even form a type of ecology, or – should they also have a form of artificial intelligence – a society. They may even mutate with thousands of "generations".
The first quantitative engineering analysis of such a spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory produce many copies of itself there to increase its total manufacturing capacity over a 500-year period, and then use the resulting automated industrial complex to construct more probes with a single seed factory on board each.
It has been theorized that a self-replicating starship utilizing relatively conventional theoretical methods of interstellar travel (i.e., no exotic faster-than-light propulsion, and speeds limited to an "average cruising speed" of 0.1c) could spread throughout a galaxy the size of the Milky Way in as little as half a million years.
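As a rough, back-of-the-envelope illustration of such an expansion timescale, the sketch below models the advance of a replication front; every figure in it (hop distance, replication time) is an illustrative assumption, not a value taken from the studies discussed here.

```python
# Toy expansion-front model for replicating probes (illustrative assumptions only).
hop_ly = 5.0           # assumed distance to the next target star, light-years
cruise_c = 0.1         # cruising speed as a fraction of the speed of light
replication_yr = 50.0  # assumed time spent building daughter probes at each stop

travel_yr = hop_ly / cruise_c                            # 50 years per hop
front_ly_per_yr = hop_ly / (travel_yr + replication_yr)  # ~0.05 ly/yr

galaxy_diameter_ly = 100_000
years = galaxy_diameter_ly / front_ly_per_yr
print(f"Front speed: {front_ly_per_yr:.3f} ly/yr; crossing time: {years:,.0f} yr")
# ~2 million years under these assumptions; shorter hops between targets or
# faster replication push the figure toward the half-million-year estimate.
```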
Debate on Fermi's paradox
In 1981, Frank Tipler put forth an argument that extraterrestrial intelligences do not exist, based on the fact that von Neumann probes have not been observed. Given even a moderate rate of replication and the history of the galaxy, such probes should already be common throughout space and thus, we should have already encountered them. Because we have not, this shows that extraterrestrial intelligences do not exist. This is thus a resolution to the Fermi paradox – that is, the question of why we have not already encountered extraterrestrial intelligence if it is common throughout the universe.
A response came from Carl Sagan and William Newman. Now known as Sagan's Response, it pointed out that in fact Tipler had underestimated the rate of replication, and that von Neumann probes should have already started to consume most of the mass in the galaxy. Any intelligent race would therefore, Sagan and Newman reasoned, not design von Neumann probes in the first place, and would try to destroy any von Neumann probes found as soon as they were detected. As Robert Freitas has pointed out, the assumed capacity of von Neumann probes described by both sides of the debate is unlikely in reality, and more modestly reproducing systems are unlikely to be observable in their effects on our solar system or the galaxy as a whole.
Another objection to the prevalence of von Neumann probes is that civilizations that could potentially create such devices may have a high probability of self-destruction before being capable of producing such machines. This could be through events such as biological or nuclear warfare, nanoterrorism, resource exhaustion, ecological catastrophe, or pandemics. This obstacle to the creation of von Neumann probes is one potential candidate for the concept of a Great Filter.
Simple workarounds exist to avoid the over-replication scenario. Radio transmitters, or other means of wireless communication, could be used by probes programmed not to replicate beyond a certain density (such as five probes per cubic parsec) or beyond an arbitrary population limit (such as ten million within one century), analogous to the Hayflick limit in cell reproduction. One problem with this defence is that it would only take a single malfunctioning probe beginning unrestricted reproduction for the entire approach to fail – essentially a technological cancer – unless each probe could also detect such malfunction in its neighbours and implement a seek-and-destroy protocol. That in turn could lead to probe-on-probe space wars if faulty probes multiplied to high numbers before sound ones found them, since the sound probes might then be programmed to replicate to matching numbers to manage the infestation. Another workaround is based on the need for spacecraft heating during long interstellar travel: the use of plutonium as a thermal source would limit the ability to self-replicate, since the spacecraft would have no programming to make more plutonium even if it found the required raw materials. Another is to program the spacecraft with a clear understanding of the dangers of uncontrolled replication.
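A minimal sketch of such a replication cap, assuming a simple hypothetical policy in which each probe builds two daughters per generation until a global population limit is reached (both the fan-out and the ten-million cap below are placeholders echoing the figures above):

```python
def population_by_generation(generations: int, cap: int = 10_000_000) -> list[int]:
    """Probe count per generation: each parent survives and builds two daughters,
    but no probe replicates once the global cap is reached."""
    population, history = 1, []
    for _ in range(generations):
        population = min(population * 3, cap)
        history.append(population)
    return history

print(population_by_generation(20))
# Growth is exponential (x3 per generation) until the cap binds at generation 15.
```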
Applications for self-replicating spacecraft
The details of the mission of self-replicating starships can vary widely from proposal to proposal, and the only common trait is the self-replicating nature.
Von Neumann probes
A von Neumann probe is a spacecraft capable of replicating itself. It is a concatenation of two concepts: a Von Neumann universal constructor (self-replicating machine) and a probe (an instrument to explore or examine something). The concept is named after Hungarian American mathematician and physicist John von Neumann, who rigorously studied the concept of self-replicating machines that he called "Universal Assemblers" and which are often referred to as "von Neumann machines". Such constructs could be theorised to comprise five basic components (variations of this template could create other machines such as Bracewell probes):
Probe: which would contain the actual probing instruments & goal-directed AI to guide the construct.
Life-support systems: mechanisms to repair and maintain the construct.
Factory: mechanisms to harvest resources & replicate itself.
Memory banks: store programs for all its components & information gained by the probe.
Engine: motor to move the probe.
Andreas M. Hein and science fiction author Stephen Baxter proposed different types of von Neumann probes, termed "Philosopher" and "Founder", where the purpose of the former is exploration and that of the latter is preparing future settlement.
A near-term concept of a self-replicating probe has been proposed by the Initiative for Interstellar Studies, achieving about 70% self-replication, based on current and near-term technologies.
If a self-replicating probe finds evidence of primitive life (or a primitive, low-level culture) it might be programmed to lie dormant, silently observe, attempt to make contact (this variant is known as a Bracewell probe), or even interfere with or guide the evolution of life in some way.
Physicist Paul Davies of University of Adelaide has "raised the possibility of a probe resting on our own Moon", having arrived at some point in Earth's ancient prehistory and remained to monitor Earth, a concept that, per Michio Kaku, was what Stanley Kubrick used as the basis of his film, 2001: A Space Odyssey (though the director cut the relevant monolith scene from the movie). Kubrick's work was based on Arthur C. Clarke's story, "The Sentinel", expanded by the pair in the form of a novel that became the basis for the movie and so Davies' lunar probe/observatory concept is also considered reminiscent of Clarke.
A variant idea on the interstellar von Neumann probe idea is that of the "Astrochicken", proposed by Freeman Dyson. While it has the common traits of self-replication, exploration, and communication with its "home base", Dyson conceived the Astrochicken to explore and operate within our own planetary system, and not explore interstellar space.
Anders Sandberg and Stuart Armstrong argued that launching the colonization of the entire reachable universe through self-replicating probes is well within the capabilities of a star-spanning civilization, and proposed a theoretical approach for achieving it in 32 years, by mining planet Mercury for resources and constructing a Dyson Swarm around the Sun.
Berserkers
A variant of the self-replicating starship is the Berserker. Unlike the benign probe concept, Berserkers are programmed to seek out and exterminate lifeforms and life-bearing exoplanets whenever they are encountered.
The name is derived from the Berserker series of novels by Fred Saberhagen which describes a war between humanity and such machines. Saberhagen points out (through one of his characters) that the Berserker warships in his novels are not von Neumann machines themselves, but the larger complex of Berserker machines – including automated shipyards – do constitute a von Neumann machine. This again brings up the concept of an ecology of von Neumann machines, or even a von Neumann hive entity.
It is speculated in fiction that Berserkers could be created and launched by a xenophobic civilization (see Anvil of Stars, by Greg Bear, in the section In fiction below) or could theoretically "mutate" from a more benign probe. For instance, a von Neumann ship designed for terraforming processes – mining a planet's surface and adjusting its atmosphere to more human-friendly conditions – could be interpreted as attacking previously inhabited planets, killing their inhabitants in the process of changing the planetary environment, and then self-replicating to dispatch more ships to "attack" other planets.
Replicating seeder ships
Yet another variant on the idea of the self-replicating starship is that of the seeder ship. Such starships might store the genetic patterns of lifeforms from their home world, perhaps even of the species which created it. Upon finding a habitable exoplanet, or even one that might be terraformed, it would try to replicate such lifeforms – either from stored embryos or from stored information using molecular nanotechnology to build zygotes with varying genetic information from local raw materials.
Such ships might be terraforming vessels, preparing colony worlds for later colonization by other vessels, or – should they be programmed to recreate, raise, and educate individuals of the species that created it – self-replicating colonizers themselves. Seeder ships would be a suitable alternative to generation ships as a way to colonize worlds too distant to travel to in one lifetime.
In fiction
Von Neumann probes
2001: A Space Odyssey: The monoliths in Arthur C. Clarke's book and Stanley Kubrick's film 2001: A Space Odyssey were intended to be self-replicating probes, though the artifacts in "The Sentinel", Clarke's original short story upon which 2001 was based, were not. The film was to begin with a series of scientists explaining how probes like these would be the most efficient method of exploring outer space. Kubrick cut the opening segment from his film at the last minute, however, and these monoliths became almost mystical entities in both the film and Clarke's novel.
Cold As Ice: In the novel by Charles Sheffield, there is a segment where the author (a physicist) describes Von Neumann machines harvesting sulfur, nitrogen, phosphorus, helium-4, and various metals from the atmosphere of Jupiter.
Destiny's Road: Larry Niven frequently refers to Von Neumann probes in many of his works. In his 1998 book Destiny's Road, Von Neumann machines are scattered throughout the human colony world Destiny and its moon Quicksilver in order to build and maintain technology and to make up for the lack of the resident humans' technical knowledge; the Von Neumann machines primarily construct a stretchable fabric cloth capable of acting as a solar collector which serves as the humans' primary energy source. The Von Neumann machines also engage in ecological maintenance and other exploratory work.
The Devil's Blind Spot: See also Alexander Kluge, The Devil's Blind Spot (New Directions; 2004.)
Grey Goo: In the video game Grey Goo, the "Goo" faction is composed entirely of Von Neumann probes sent through various microscopic wormholes to map the Milky Way Galaxy. The faction's units are configurations of nanites used during their original mission of exploration, which have adapted to a combat role. The Goo starts as an antagonist to the Human and Beta factions, but their true objective is revealed during their portion of the single-player campaign. Related to, and inspired by, the Grey Goo doomsday scenario.
Spin: In the novel by Robert Charles Wilson, Earth is veiled by a temporal field. Humanity tries to understand and escape this field by using Von Neumann probes. It is later revealed that the field itself was generated by Von Neumann probes from another civilization, and that a competition for resources had taken place between earth's and the aliens' probes.
The Third Millennium: A History of the World AD 2000–3000: In the book by Brian Stableford and David Langford (published by Alfred A. Knopf, Inc., 1985) humanity sends cycle-limited Von Neumann probes out to the nearest stars to do open-ended exploration and to announce humanity's existence to whoever might encounter them.
Von Neumann's War: In Von Neumann's War by John Ringo and Travis S. Taylor (published by Baen Books in 2007) Von Neumann probes arrive in the solar system, moving in from the outer planets, and converting all metals into gigantic structures. Eventually, they arrive on Earth, wiping out much of the population before being beaten back when humanity reverse engineers some of the probes.
We Are Legion (We Are Bob) by Dennis E. Taylor: Bob Johansson, the former owner of a software company, dies in a car accident, only to wake up a hundred years later as a computer emulation of Bob. Given a Von Neumann probe by America's religious government, he is sent out to explore, exploit, expand, and experiment for the good of the human race.
ARMA 3: In the "First Contact" single-player campaign introduced in the Contact expansion, a series of extraterrestrial network structures are found in various locations on Earth, one being the fictional country of Livonia, the campaign's setting. In the credits of the campaign, a radio broadcast reveals that a popular theory surrounding the networks is that they are a type of Von Neumann probe that arrived on Earth during the time of a supercontinent.
Questionable Content: In Jeph Jacques' webcomic, Faye Whitaker refers to the "Floating Black Slab Emitting A Low Hum" as a possible Von Neumann probe in Episode 4645: Accessorized.
In the third act of the incremental game Universal Paperclips, after all of Earth's matter has been converted into paperclips, players are tasked with sending Von Neumann probes into the universe to find and consume all matter in service of making paperclips, eventually entering a war with another class of probes called "drifters" that are created as a result of random mutations.
In the game Satisfactory developed by Coffee Stain Studios, the player arrives on a distant alien planet and is tasked with constructing another spaceship. The player is guided by an artificial intelligence which provides the instructions for creating the spaceship (specifically, which resources are required). When complete, it then leaves to presumably repeat the process on another planet. This is not explicitly explained by the game but lore suggests you are simply a clone created by the previous iteration of the process, and it has been going on for a long, long time.
Berserkers
In the science fiction short story collection Berserker by Fred Saberhagen, a series of short stories include accounts of battles fought against extremely destructive Berserker machines. This and subsequent books set in the same fictional universe are the origin of the term "Berserker probe".
In the 2003 miniseries reboot of Battlestar Galactica (and the subsequent 2004 series) the Cylons are similar to Berserkers in their wish to destroy human life. They were created by humans in a group of fictional planets called the Twelve Colonies. The Cylons created special models that look like humans in order to destroy the twelve colonies and later, the fleeing fleet of surviving humans.
The Borg of Star Trek – a self-replicating bio-mechanical race that is dedicated to the task of achieving perfection through the assimilation of useful technology and lifeforms. Their ships are massive mechanical cubes (a close step from the Berserker's massive mechanical Spheres).
Science fiction author Larry Niven later borrowed this notion in his short story "A Teardrop Falls".
In the computer game Star Control II, the Slylandro Probe is an out-of-control self-replicating probe that attacks starships of other races. They were not originally intended to be a berserker probe; they sought out intelligent life for peaceful contact, but due to a programming error, they would immediately switch to "resource extraction" mode and attempt to dismantle the target ship for raw materials. While the plot claims that the probes reproduce "at a geometric rate", the game itself caps the frequency of encountering these probes. It is possible to deal with the menace in a side-quest, but this is not necessary to complete the game, as the probes only appear one at a time, and the player's ship will eventually be fast and powerful enough to outrun them or destroy them for resources – although the probes will eventually dominate the entire game universe.
In Iain Banks' novel Excession, hegemonising swarms are described as a form of Outside Context Problem. An example of an "Aggressive Hegemonising Swarm Object" is given as an uncontrolled self-replicating probe with the goal of turning all matter into copies of itself. After causing great damage, they are somehow transformed using unspecified techniques by the Zetetic Elench and become "Evangelical Hegemonising Swarm Objects". Such swarms (referred to as "smatter") reappear in the later novels Surface Detail (which features scenes of space combat against the swarms) and The Hydrogen Sonata.
The Inhibitors from Alastair Reynolds' Revelation Space series are self-replicating machines whose purpose is to inhibit the development of intelligent star-faring cultures. They are dormant for extreme periods of time until they detect the presence of a space-faring culture and proceed to exterminate it even to the point of sterilizing entire planets. They are very difficult to destroy as they seem to have faced every type of weapon ever devised and only need a short time to 'remember' the necessary counter-measures.
Also from Alastair Reynolds' books, the "Greenfly" terraforming machines are another form of berserker machine. For unknown reasons, probably an error in their programming, they destroy planets and turn them into trillions of domes filled with vegetation; their purpose is to produce a habitable environment for humans, but in doing so they inadvertently decimate the human race. By the year 10,000, they have wiped out most of the Galaxy.
The Reapers in the video game series Mass Effect are also self-replicating probes bent on destroying any advanced civilization encountered in the galaxy. They lie dormant in the vast spaces between the galaxies and follow a cycle of extermination. It is seen in Mass Effect 2 that they assimilate any advanced species.
Mantrid Drones from the science fiction television series Lexx were an extremely aggressive type of self-replicating Berserker machine, eventually converting the majority of the matter in the universe into copies of themselves in the course of their quest to thoroughly exterminate humanity.
The Babylon 5 episode "Infection" showed a smaller scale berserker in the form of the Icarran War Machine. After being created with the goal of defeating an unspecified enemy faction, the War Machines proceeded to exterminate all life on the planet Icarra VII because they had been programmed with standards for what constituted a 'Pure Icaran' based on religious teachings, which no actual Icaran could satisfy. Because the Icaran were pre-starflight, the War Machines became dormant after completing their task rather than spreading. One unit was reactivated on-board Babylon 5 after being smuggled past quarantine by an unscrupulous archaeologist, but after being confronted with how they had rendered Icara VII a dead world, the simulated personality of the War Machine committed suicide.
The Babylon 5 episode "A Day in the Strife" features a probe that threatens the station with destruction unless a series of questions designed to test a civilization's level of advancement are answered correctly. The commander of the station correctly surmises that the probe is actually a berserker and that if the questions are answered the probe would identify them as a threat to its originating civilization and detonate.
Greg Bear's novel The Forge of God deals directly with the concept of "Berserker" von Neumann probes and their consequences. The idea is further explored in the novel's sequel, Anvil of Stars, which explores the reaction other civilizations have to the creation and release of Berserkers.
In Gregory Benford's Galactic Center Saga series, an antagonist berserker machine race is encountered by Earth, first as a probe in In the Ocean of Night, and then in an attack in Across the Sea of Suns. The berserker machines do not seek to completely eradicate a race if merely throwing it into a primitive low technological state will do as they did to the EMs encountered in Across the Sea of Suns. The alien machine Watchers would not be considered von Neumann machines themselves, but the collective machine race could.
On Stargate SG-1 the Replicators were a vicious race of insect-like robots that were originally created by an android named Reese to serve as toys. They grew beyond her control and began evolving, eventually spreading throughout at least two galaxies. In addition to ordinary autonomous evolution they were able to analyze and incorporate new technologies they encountered into themselves, ultimately making them one of the most advanced "races" known.
On Stargate Atlantis, a second race of replicators created by the Ancients were encountered in the Pegasus Galaxy. They were created as a means to defeat the Wraith. The Ancients attempted to destroy them after they began showing signs of sentience and requested that their drive to kill the wraith be removed. This failed, and an unspecified length of time after the Ancients retreated to the Milky Way Galaxy, the replicators nearly succeeded in destroying the Wraith. The Wraith were able to hack into the replicators and deactivate the extermination drive, at which point they retreated to their home world and were not heard from again until encountered by the Atlantis Expedition. After the Atlantis Expedition reactivated this dormant directive, the replicators embarked on a plan to kill the Wraith by removing their food source, i.e. all humans in the Pegasus Galaxy.
In Stargate Universe Season 2, a galaxy billions of light years distant from the Milky Way is infested with drone ships that are programmed to annihilate intelligent life and advanced technology. The drone ships attack other space ships (including Destiny) as well as humans on planetary surfaces, but don't bother destroying primitive technology such as buildings unless they are harboring intelligent life or advanced technology.
In the Justice League Unlimited episode "Dark Heart", an alien weapon based on this same idea lands on Earth.
In the Homeworld: Cataclysm video game, a bio-mechanical virus called Beast has the ability to alter organic and mechanic material to suit its needs, and the ships infected become self-replicating hubs for the virus.
In the SF MMO EVE Online, experiments to create more autonomous drones than the ones used by player's ships accidentally created 'rogue drones' which form hives in certain parts of space and are used extensively in missions as difficult opponents.
In the computer game Sword of the Stars, the player may randomly encounter "Von Neumann". A Von Neumann mothership appears along with smaller Von Neumann probes, which attack and consume the player's ships. The probes then return to the mothership, returning the consumed material. If probes are destroyed, the mothership will create new ones. If all the player's ships are destroyed, the Von Neumann probes will reduce the planets resource levels before leaving. The probes appear as blue octahedrons, with small spheres attached to the apical points. The mothership is a larger version of the probes. In the 2008 expansion A Murder of Crows, Kerberos Productions also introduces the VN Berserker, a combat oriented ship, which attacks player planets and ships in retaliation to violence against VN Motherships. If the player destroys the Berserker things will escalate and a System Destroyer will attack.
In the X Computer Game Series, the Xenon are a malevolent race of artificially intelligent machines descended from terraforming ships sent out by humans to prepare worlds for eventual colonization; the result caused by a bugged software update. They are continual antagonists in the X-Universe.
In the comic Transmetropolitan a character mentions "Von Neumann rectal infestations" which are apparently caused by "Shit-ticks that build more shit-ticks that build more shit-ticks".
In the anime Vandread, harvester ships attack vessels from both male- and female-dominated factions and harvest hull, reactors, and computer components to make more of themselves. To this end, Harvester ships are built around mobile factories. Earth-born humans also view the inhabitants of the various colonies to be little more than spare parts.
In Earth 2160, the Morphidian Aliens rely on strain aliens for colonization. Most -derived aliens can absorb water, then reproduce like a colony of cells. In this manner, even one Lady (or Princess, or Queen) can create enough clones to cover the map. Once they have significant numbers, they "choose an evolutionary path" and swarm the enemy, taking over their resources.
In the European comic series Storm, numbers 20 & 21, a kind of berserk von Neumann probe is set on a collision course with the Pandarve system.
In PC role-playing game Space Rangers and its sequel Space Rangers 2: Dominators, a league of 5 nations battles three different types of Berserker robots. One that focuses on invading planets, another that battles normal space and third that lives in hyperspace.
In the Star Wolves video game series, Berserkers are a self-replicating machine menace that threatens the known universe for purposes of destruction and/or assimilation of humanity.
The Star Wars expanded universe features the World Devastators, large ships designed and built by the Galactic Empire that tear apart planets to use its materials to build other ships or even upgrade or replicate themselves.
The Tet in the 2013 film Oblivion is revealed to be a Berserker of sorts: a sentient machine that travels from planet to planet, exterminating the indigenous population using armies of robotic drones and cloned members of the target species. The Tet then proceeds to harvest the planet's water in order to extract hydrogen for nuclear fusion.
In Eclipse Phase, an ETI probe is believed to have infected the TITAN computer systems with the Exsurgent virus to cause them to go berserk and wage war on humanity. This would make ETI probes a form of berserker, albeit one that uses pre-existing computer systems as its key weapons.
In Herr aller Dinge by Andreas Eschbach, an ancient nano machine complex is discovered buried in a glacier off the coast of Russia. When it comes in contact with materials it needs to fulfill its mission, it creates a launch facility and launches a space craft. It is later revealed that the nano machines were created by a pre-historic human race with the intention of destroying other interstellar civilizations (for an unknown reason). It is proposed that the reason there is no evidence of the race is because of the nano-machines themselves and their ability to manipulate matter at an atomic level. It is even suggested that viruses could be ancient nano machines that have evolved over time.
Replicating seeder ships
Code of the Lifemaker by James P. Hogan describes the evolution of a society of humanoid-like robots who inhabit Saturn's moon Titan. The sentient machines are descended from an uncrewed factory ship that was to be self replicating, but suffered radiation damage and went off course, eventually landing on Titan around 1,000,000 BC.
Manifold: Space, Stephen Baxter's novel, starts with the discovery of alien self-replicating machines active within the Solar system.
In the Metroid Prime subseries of games, the massive Leviathans are probes routinely sent out from the planet Phaaze to infect other planets with Phazon radiation and eventually turn these planets into clones of Phaaze, where the self-replication process can continue.
In David Brin's short story collection, The River of Time (1986), the short story "Lungfish" prominently features von Neumann probes. Not only does he explore the concept of the probes themselves, but indirectly explores the ideas of competition between different designs of probes, evolution of von Neumann probes in the face of such competition, and the development of a type of ecology between von Neumann probes. One of the vessels mentioned is clearly a Seeder type.
In The Songs of Distant Earth by Arthur C. Clarke, humanity on a future Earth facing imminent destruction creates automated seedships that act as fire and forget lifeboats aimed at distant, habitable worlds. Upon landing, the ship begins to create new humans from stored genetic information, and an onboard computer system raises and trains the first few generations of new inhabitants. The massive ships are then broken down and used as building materials by their "children".
On the Stargate Atlantis episode "Remnants", the Atlantis team finds an ancient probe that they later learn was launched by a now-extinct, technologically advanced race in order to seed new worlds and re-propagate their silicon-based species. The probe communicated with inhabitants of Atlantis by means of hallucinations.
On the Stargate SG-1 episode "Scorched Earth", a species of newly relocated humanoids face extinction via an automated terraforming colony seeder ship controlled by an Artificial Intelligence.
On Stargate Universe, the human adventurers live on a ship called Destiny. Its mission was to connect a network of Stargates, placed by preceding seeder ships on planets capable of supporting life to allow instantaneous travel between them.
The trilogy of albums which conclude the comic book series Storm by Don Lawrence (starting with Chronicles of Pandarve 11: The Von Neumann machine) is based on self-replicating conscious machines containing the sum of all human knowledge employed to rebuild human society throughout the universe in case of disaster on Earth. The probe malfunctions and although new probes are built, they do not separate from the motherprobe, which eventually results in a cluster of malfunctioning probes so big that it can absorb entire moons.
In the Xeno series, a rogue seeder ship (technically a berserker) known as "Deus" created humanity.
See also
Asteroid mining
Astrochicken
Bracewell probe
Embryo space colonization
Generation ship
Interstellar ark
Interstellar travel
Self-replicating machine
Sleeper ship
Space colonization
Transcension hypothesis
References
Boyce, Chris. Extraterrestrial Encounter: A Personal Perspective. Newton Abbot: David & Charles (1979).
von Tiesenhausen, G., and Darbro, W. A. "Self-Replicating Systems," NASA Technical Memorandum 78304. Washington, D.C.: National Aeronautics and Space Administration (1980).
See also Freitas, Robert A., Jr. Kinematic Self-Replicating Machines, section 3.11, "Freitas Interstellar Probe Replicator (1979–1980)".
Artificial life
Fictional spacecraft by type
Hypothetical spacecraft
Self-replicating machines | Self-replicating spacecraft | [
"Physics",
"Astronomy",
"Technology",
"Biology"
] | 6,752 | [
"Exploratory engineering",
"Machines",
"Astronomical hypotheses",
"Self-replicating machines",
"Hypothetical spacecraft",
"Self-replication",
"Physical systems"
] |
500,959 | https://en.wikipedia.org/wiki/Henderson%E2%80%93Hasselbalch%20equation | In chemistry and biochemistry, the Henderson–Hasselbalch equation
relates the pH of a chemical solution of a weak acid to the numerical value of the acid dissociation constant, Ka, of acid and the ratio of the concentrations, of the acid and its conjugate base in an equilibrium.
For example, the acid may be carbonic acid:

\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{H_2CO_3}]}
The Henderson–Hasselbalch equation can be used to estimate the pH of a buffer solution by approximating the actual concentration ratio as the ratio of the analytical concentrations of the acid and of a salt, MA.
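As a sketch of that use, the snippet below estimates a buffer's pH directly from the analytical concentrations; the acetic acid pKa of 4.76 at 25 °C is a standard literature value, and the function name is ours.

```python
import math

def buffer_ph(pka: float, c_acid: float, c_base: float) -> float:
    """Henderson-Hasselbalch estimate: pH = pKa + log10([base]/[acid])."""
    return pka + math.log10(c_base / c_acid)

# Equimolar acetic acid / sodium acetate buffer: pH equals the pKa.
print(buffer_ph(4.76, 0.10, 0.10))  # 4.76
# Doubling the acetate concentration raises the pH by log10(2), about 0.30.
print(buffer_ph(4.76, 0.10, 0.20))  # ~5.06
```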
The equation can also be applied to bases by specifying the protonated form of the base as the acid. For example, with an amine:

\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{RNH_2}]}{[\mathrm{RNH_3^+}]}
The Henderson–Hasselbalch buffer system also has many natural and biological applications.
History
The Henderson–Hasselbalch equation was developed by two scientists, Lawrence Joseph Henderson and Karl Albert Hasselbalch. Lawrence Joseph Henderson was a biological chemist and Karl Albert Hasselbalch was a physiologist who studied pH.
In 1908, Lawrence Joseph Henderson derived an equation to calculate the hydrogen ion concentration of a bicarbonate buffer solution, which rearranged looks like this:

[\mathrm{H^+}] = K_a\,\frac{[\mathrm{H_2CO_3}]}{[\mathrm{HCO_3^-}]}
In 1909 Søren Peter Lauritz Sørensen introduced the pH terminology, which allowed Karl Albert Hasselbalch to re-express Henderson's equation in logarithmic terms, resulting in the Henderson–Hasselbalch equation.
Assumptions, limitations, and derivation
A simple buffer solution consists of a solution of an acid and a salt of the conjugate base of the acid. For example, the acid may be acetic acid and the salt may be sodium acetate. The Henderson–Hasselbalch equation relates the pH of a solution containing a mixture of the two components to the acid dissociation constant, Ka of the acid, and the concentrations of the species in solution.
To derive the equation a number of simplifying assumptions have to be made.
Assumption 1: The acid, HA, is monobasic and dissociates according to the equations

\mathrm{HA \rightleftharpoons H^+ + A^-}, \qquad K_a = \frac{[\mathrm{H^+}][\mathrm{A^-}]}{[\mathrm{HA}]}

CA is the analytical concentration of the acid and CH is the concentration of hydrogen ion that has been added to the solution. The self-dissociation of water is ignored. A quantity in square brackets, [X], represents the concentration of the chemical substance X. It is understood that the symbol H+ stands for the hydrated hydronium ion. Ka is an acid dissociation constant.
The Henderson–Hasselbalch equation can be applied to a polybasic acid only if its consecutive pK values differ by at least 3. Phosphoric acid is such an acid.
Assumption 2. The self-ionization of water can be ignored. This assumption is not, strictly speaking, valid with pH values close to 7, half the value of pKw, the constant for self-ionization of water. In this case the mass-balance equation for hydrogen should be extended to take account of the self-ionization of water.
However, the [OH−] term can be omitted to a good approximation.
Assumption 3: The salt MA is completely dissociated in solution. For example, with sodium acetate,
Na(CH3CO2) → Na+ + CH3CO2−,
the concentration of the sodium ion, [Na+], can be ignored. This is a good approximation for 1:1 electrolytes, but not for salts of ions that have a higher charge, such as magnesium sulphate, MgSO4, which form ion pairs.
Assumption 4: The quotient of activity coefficients, Γ, is a constant under the experimental conditions covered by the calculations.
The thermodynamic equilibrium constant, K,
K = ([H+][A−] / [HA]) × (γH+ γA− / γHA) = Ka Γ,
is a product of a quotient of concentrations and a quotient, Γ, of activity coefficients γ. In these expressions, the quantities in square brackets signify the concentration of the undissociated acid, HA, of the hydrogen ion H+, and of the anion A−; the quantities γ are the corresponding activity coefficients. If the quotient of activity coefficients can be assumed to be a constant which is independent of concentrations and pH, the dissociation constant, Ka, can be expressed as a quotient of concentrations.
Derivation
Source:
Following these assumptions, the Henderson–Hasselbalch equation is derived in a few logarithmic steps.
Solve for [H+]:
[H+] = Ka × [HA] / [A−]
On both sides, take the negative logarithm:
−log10[H+] = −log10 Ka − log10([HA] / [A−])
Based on the previous assumptions, pH = −log10[H+] and pKa = −log10 Ka, so
pH = pKa − log10([HA] / [A−]).
Inverting the logarithm by changing its sign provides the Henderson–Hasselbalch equation:
pH = pKa + log10([A−] / [HA]).
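The final equation is easy to evaluate numerically. Below is a minimal Python sketch (the function name and parameterization are illustrative, not from any standard library) applying it to an acetic acid/acetate buffer:

```python
import math

def henderson_hasselbalch(pKa, conc_base, conc_acid):
    """Return the pH of a buffer from pKa and the molar
    concentrations of conjugate base [A-] and weak acid [HA]."""
    return pKa + math.log10(conc_base / conc_acid)

# Acetic acid buffer (pKa ~ 4.76): pH equals pKa when [A-] = [HA].
print(henderson_hasselbalch(4.76, 0.10, 0.10))  # -> 4.76
print(henderson_hasselbalch(4.76, 0.20, 0.10))  # -> ~5.06
```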
Application to bases
The equilibrium constant for the protonation of a base, B,
B + H+ ⇌ BH+,
is an association constant, Kb, which is simply related to the dissociation constant of the conjugate acid, BH+, by pKa + pKb = pKw.
The value of pKw is ca. 14 at 25 °C. This approximation can be used when the correct value is not known. Thus, the Henderson–Hasselbalch equation can be used, without modification, for bases.
Biological applications
With homeostasis the pH of a biological solution is maintained at a constant value by adjusting the position of the equilibria
HCO3− + H+ ⇌ H2CO3,
where HCO3− is the bicarbonate ion and H2CO3 is carbonic acid. Carbonic acid is formed reversibly from carbon dioxide and water. However, the solubility of carbonic acid in water may be exceeded. When this happens carbon dioxide gas is liberated, and the following equation may be used instead:
HCO3− + H+ ⇌ CO2(g) + H2O.
CO2(g) represents the carbon dioxide liberated as gas. In this equation, which is widely used in biochemistry, K is a mixed equilibrium constant relating to both chemical and solubility equilibria. It can be expressed as
pH = 6.1 + log10([HCO3−] / (0.0307 × pCO2)),
where [HCO3−] is the molar concentration of bicarbonate in the blood plasma and pCO2 is the partial pressure of carbon dioxide in the supernatant gas. The concentration of HCO3− is dependent on the concentration of CO2, which is also dependent on pCO2.
One of the buffer systems present in the body is the blood plasma buffering system. This is formed from carbonic acid, H2CO3, working in conjunction with bicarbonate, HCO3−, to form the bicarbonate system. This system is effective near the physiological pH of 7.4, as carbonic acid is in equilibrium with carbon dioxide in the lungs. As blood travels through the body, it gains and loses H+ through different processes, including lactic acid fermentation and NH3 protonation from protein catabolism. Because of this, the pH of the blood changes as it passes through tissues. This correlates with a change in the partial pressure of CO2 in the lungs, causing a change in the rate of respiration if more or less CO2 is necessary. For example, a decreased blood pH will trigger the brain stem to perform more frequent respiration. The Henderson–Hasselbalch equation can be used to model these equilibria. It is important to maintain this pH of 7.4 to ensure that enzymes are able to work optimally.
Life-threatening acidosis (a low blood pH that can result in nausea, headaches, and even coma and convulsions) reflects the impaired functioning of enzymes at low pH. As modelled by the Henderson–Hasselbalch equation, in severe cases this can be reversed by administering intravenous bicarbonate solution. If the partial pressure of CO2 does not change, this addition of bicarbonate solution will raise the blood pH.
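As a rough illustration of the clinical form of the equation, the sketch below uses the commonly quoted values pKa ≈ 6.1 and a CO2 solubility coefficient of about 0.03 mmol·L−1·mmHg−1; both numbers are textbook approximations, not taken from the article above.

```python
import math

def blood_ph(bicarbonate_mmol_per_L, pCO2_mmHg,
             pKa=6.1, co2_solubility=0.03):
    """Estimate plasma pH from bicarbonate and the partial
    pressure of CO2 (Henderson-Hasselbalch, clinical form)."""
    dissolved_co2 = co2_solubility * pCO2_mmHg  # mmol/L of dissolved CO2
    return pKa + math.log10(bicarbonate_mmol_per_L / dissolved_co2)

# Typical arterial values: [HCO3-] = 24 mmol/L, pCO2 = 40 mmHg
print(round(blood_ph(24, 40), 2))  # -> 7.4, the physiological set point
```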
Natural buffers
The ocean contains a natural buffer system to maintain a pH between 8.1 and 8.3. The ocean's buffer system is known as the carbonate buffer system, a series of reactions that uses carbonate as a buffer to convert CO2 into bicarbonate. The carbonate buffer reaction helps maintain a constant H+ concentration in the ocean because it consumes hydrogen ions, thereby maintaining a constant pH. The ocean has been experiencing acidification because humans are increasing the CO2 in the atmosphere. About 30% of the CO2 that is released into the atmosphere is absorbed by the ocean, and this increased absorption results in increased H+ ion production. The increase in atmospheric CO2 increases H+ production because CO2 in the ocean reacts with water to produce carbonic acid, and carbonic acid releases H+ ions and bicarbonate ions. Overall, since the Industrial Revolution the ocean has experienced a pH decrease of about 0.1 pH units due to the increase in CO2 production.
Ocean acidification affects marine life with shells made of carbonate. In a more acidic environment it is harder for organisms to grow and maintain their carbonate shells, and the increase in ocean acidity can cause carbonate-shelled organisms to experience reduced growth and reproduction.
See also
Davenport diagram
Gastric tonometry
Further reading
References
Acid–base chemistry
Eponymous equations of physics
Equilibrium chemistry
Mathematics in medicine | Henderson–Hasselbalch equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,743 | [
"Acid–base chemistry",
"Equations of physics",
"Applied mathematics",
"Eponymous equations of physics",
"Equilibrium chemistry",
"nan",
"Mathematics in medicine"
] |
501,590 | https://en.wikipedia.org/wiki/Hodge%20theory | In mathematics, Hodge theory, named after W. V. D. Hodge, is a method for studying the cohomology groups of a smooth manifold M using partial differential equations. The key observation is that, given a Riemannian metric on M, every cohomology class has a canonical representative, a differential form that vanishes under the Laplacian operator of the metric. Such forms are called harmonic.
The theory was developed by Hodge in the 1930s to study algebraic geometry, and it built on the work of Georges de Rham on de Rham cohomology. It has major applications in two settings—Riemannian manifolds and Kähler manifolds. Hodge's primary motivation, the study of complex projective varieties, is encompassed by the latter case. Hodge theory has become an important tool in algebraic geometry, particularly through its connection to the study of algebraic cycles.
While Hodge theory is intrinsically dependent upon the real and complex numbers, it can be applied to questions in number theory. In arithmetic situations, the tools of p-adic Hodge theory have given alternative proofs of, or analogous results to, classical Hodge theory.
History
The field of algebraic topology was still nascent in the 1920s. It had not yet developed the notion of cohomology, and the interaction between differential forms and topology was poorly understood. In 1928, Élie Cartan published a paper, "Sur les nombres de Betti des espaces de groupes clos", in which he suggested, but did not prove, that differential forms and topology should be linked. Upon reading it, Georges de Rham, then a student, was inspired. In his 1931 thesis, he proved a result now called de Rham's theorem. By Stokes' theorem, integration of differential forms along singular chains induces, for any compact smooth manifold M, a bilinear pairing as shown below:
$H_k(M; \mathbb{R}) \times H^k_{\mathrm{dR}}(M) \to \mathbb{R}, \qquad ([c], [\omega]) \mapsto \int_c \omega.$
As originally stated, de Rham's theorem asserts that this is a perfect pairing, and that therefore each of the terms on the left-hand side is the vector space dual of the other. In contemporary language, de Rham's theorem is more often phrased as the statement that singular cohomology with real coefficients is isomorphic to de Rham cohomology:
$H^k(M; \mathbb{R}) \cong H^k_{\mathrm{dR}}(M).$
De Rham's original statement is then a consequence of the fact that over the reals, singular cohomology is the dual of singular homology.
Separately, a 1927 paper of Solomon Lefschetz used topological methods to reprove theorems of Riemann. In modern language, if ω1 and ω2 are holomorphic differentials on an algebraic curve C, then their wedge product is necessarily zero because C has only one complex dimension; consequently, the cup product of their cohomology classes is zero, and when made explicit, this gave Lefschetz a new proof of the Riemann relations. Additionally, if ω is a non-zero holomorphic differential, then $\sqrt{-1}\,\omega \wedge \bar\omega$ is a positive volume form, from which Lefschetz was able to rederive Riemann's inequalities. In 1929, W. V. D. Hodge learned of Lefschetz's paper. He immediately observed that similar principles applied to algebraic surfaces. More precisely, if ω is a non-zero holomorphic form on an algebraic surface, then $\omega \wedge \bar\omega$ is positive, so the cup product of $[\omega]$ and $[\bar\omega]$ must be non-zero. It follows that ω itself must represent a non-zero cohomology class, so its periods cannot all be zero. This resolved a question of Severi.
Hodge felt that these techniques should be applicable to higher dimensional varieties as well. His colleague Peter Fraser recommended de Rham's thesis to him. In reading de Rham's thesis, Hodge realized that the real and imaginary parts of a holomorphic 1-form on a Riemann surface were in some sense dual to each other. He suspected that there should be a similar duality in higher dimensions; this duality is now known as the Hodge star operator. He further conjectured that each cohomology class should have a distinguished representative with the property that both it and its dual vanish under the exterior derivative operator; these are now called harmonic forms. Hodge devoted most of the 1930s to this problem. His earliest published attempt at a proof appeared in 1933, but he considered it "crude in the extreme". Hermann Weyl, one of the most brilliant mathematicians of the era, found himself unable to determine whether Hodge's proof was correct or not. In 1936, Hodge published a new proof. While Hodge considered the new proof much superior, a serious flaw was discovered by Bohnenblust. Independently, Hermann Weyl and Kunihiko Kodaira modified Hodge's proof to repair the error. This established Hodge's sought-for isomorphism between harmonic forms and cohomology classes.
In retrospect it is clear that the technical difficulties in the existence theorem did not really require any significant new ideas, but merely a careful extension of classical methods. The real novelty, which was Hodge’s major contribution, was in the conception of harmonic integrals and their relevance to algebraic geometry. This triumph of concept over technique is reminiscent of a similar episode in the work of Hodge’s great predecessor Bernhard Riemann.
—M. F. Atiyah, William Vallance Douglas Hodge, 17 June 1903 – 7 July 1975, Biographical Memoirs of Fellows of the Royal Society, vol. 22, 1976, pp. 169–192.
Hodge theory for real manifolds
De Rham cohomology
Hodge theory references the de Rham complex. Let M be a smooth manifold of dimension n. For a non-negative integer k, let Ωk(M) be the real vector space of smooth differential forms of degree k on M. The de Rham complex is the sequence of differential operators
$0 \to \Omega^0(M) \xrightarrow{d_0} \Omega^1(M) \xrightarrow{d_1} \cdots \xrightarrow{d_{n-1}} \Omega^n(M) \to 0,$
where dk denotes the exterior derivative on Ωk(M). This is a cochain complex in the sense that $d_{k+1} \circ d_k = 0$ (also written $d^2 = 0$). De Rham's theorem says that the singular cohomology of M with real coefficients is computed by the de Rham complex:
$H^k(M; \mathbb{R}) \cong \ker d_k / \operatorname{im} d_{k-1}.$
Operators in Hodge theory
Choose a Riemannian metric g on M and recall that:
The metric yields an inner product on each fiber $\Lambda^k(T_p^*M)$ by extending (see Gramian matrix) the inner product induced by g from each cotangent fiber $T_p^*M$ to its k-th exterior product. The inner product on $\Omega^k(M)$ is then defined as the integral of the pointwise inner product of a given pair of k-forms over M with respect to the volume form σ associated with g. Explicitly, given some $\omega, \tau \in \Omega^k(M)$ we have
$\langle \omega, \tau \rangle = \int_M \langle \omega(p), \tau(p) \rangle_p \, \sigma.$
Naturally the above inner product induces a norm; when that norm is finite on some fixed k-form,
$\| \omega \|^2 = \langle \omega, \omega \rangle < \infty,$
then the integrand is a real-valued, square-integrable function on M, evaluated at a given point p via its pointwise norm,
$p \mapsto \| \omega(p) \|_p.$
Consider the adjoint operator of d with respect to these inner products:
$\delta : \Omega^{k+1}(M) \to \Omega^k(M), \qquad \langle d\alpha, \beta \rangle = \langle \alpha, \delta\beta \rangle.$
Then the Laplacian on forms is defined by
$\Delta = d\delta + \delta d.$
This is a second-order linear differential operator, generalizing the Laplacian for functions on Rn. By definition, a form α on M is harmonic if its Laplacian is zero:
$\Delta \alpha = 0.$
The Laplacian appeared first in mathematical physics. In particular, Maxwell's equations say that the electromagnetic field in a vacuum, i.e. absent any charges, is represented by a 2-form F such that $\Delta F = 0$ on spacetime, viewed as Minkowski space of dimension 4.
Every harmonic form α on a closed Riemannian manifold is closed, meaning that $d\alpha = 0$. As a result, there is a canonical mapping $\varphi : \mathcal{H}^k(M) \to H^k(M; \mathbb{R})$, where $\mathcal{H}^k(M)$ denotes the space of harmonic k-forms. The Hodge theorem states that $\varphi$ is an isomorphism of vector spaces. In other words, each real cohomology class on M has a unique harmonic representative. Concretely, the harmonic representative is the unique closed form of minimum L2 norm that represents a given cohomology class. The Hodge theorem was proved using the theory of elliptic partial differential equations, with Hodge's initial arguments completed by Kodaira and others in the 1940s.
For example, the Hodge theorem implies that the cohomology groups with real coefficients of a closed manifold are finite-dimensional. (Admittedly, there are other ways to prove this.) Indeed, the operators Δ are elliptic, and the kernel of an elliptic operator on a closed manifold is always a finite-dimensional vector space. Another consequence of the Hodge theorem is that a Riemannian metric on a closed manifold M determines a real-valued inner product on the integral cohomology of M modulo torsion. It follows, for example, that the image of the isometry group of M in the general linear group is finite (because the group of isometries of a lattice is finite).
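A standard illustration of the theorem, offered here as a worked example rather than part of the article's own text, is the flat torus, where the harmonic representatives can be written down explicitly:

```latex
% On the flat torus T^n = R^n / Z^n the Laplacian acts coefficientwise,
% so a form is harmonic iff its coefficient functions are harmonic,
% hence constant on this closed manifold:
\Delta\Big(\sum_I f_I \, dx_I\Big) = \sum_I (\Delta f_I)\, dx_I ,
\qquad
\mathcal{H}^k(T^n) = \Big\{ \sum_{|I|=k} c_I \, dx_I : c_I \in \mathbb{R} \Big\},
\qquad
\dim H^k(T^n;\mathbb{R}) = \binom{n}{k}.
```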
A variant of the Hodge theorem is the Hodge decomposition. This says that there is a unique decomposition of any differential form ω on a closed Riemannian manifold as a sum of three parts in the form
$\omega = d\alpha + \delta\beta + \gamma,$
in which γ is harmonic: $\Delta\gamma = 0$. In terms of the L2 metric on differential forms, this gives an orthogonal direct sum decomposition:
$\Omega^k(M) = d\Omega^{k-1}(M) \oplus \delta\Omega^{k+1}(M) \oplus \mathcal{H}^k(M).$
The Hodge decomposition is a generalization of the Helmholtz decomposition for the de Rham complex.
Hodge theory of elliptic complexes
Atiyah and Bott defined elliptic complexes as a generalization of the de Rham complex. The Hodge theorem extends to this setting, as follows. Let $E_0, E_1, \ldots, E_N$ be vector bundles, equipped with metrics, on a closed smooth manifold M with a volume form dV. Suppose that
$L_i : C^\infty(E_i) \to C^\infty(E_{i+1})$
are linear differential operators acting on C∞ sections of these vector bundles, and that the induced sequence
$0 \to C^\infty(E_0) \xrightarrow{L_0} C^\infty(E_1) \xrightarrow{L_1} \cdots \xrightarrow{L_{N-1}} C^\infty(E_N) \to 0$
is an elliptic complex. Introduce the direct sums:
$\mathcal{E}^\bullet = \bigoplus_i C^\infty(E_i), \qquad L = \bigoplus_i L_i : \mathcal{E}^\bullet \to \mathcal{E}^\bullet,$
and let L* be the adjoint of L. Define the elliptic operator $\Delta = LL^* + L^*L$. As in the de Rham case, this yields the vector space of harmonic sections
$\mathcal{H} = \{ e \in \mathcal{E}^\bullet : \Delta e = 0 \}.$
Let $H : \mathcal{E}^\bullet \to \mathcal{H}$ be the orthogonal projection, and let G be the Green's operator for Δ. The Hodge theorem then asserts the following:
H and G are well-defined.
Id = H + ΔG = H + GΔ
LG = GL, L*G = GL*
The cohomology of the complex is canonically isomorphic to the space of harmonic sections, in the sense that each cohomology class has a unique harmonic representative.
There is also a Hodge decomposition in this situation, generalizing the statement above for the de Rham complex.
Hodge theory for complex projective varieties
Let X be a smooth complex projective manifold, meaning that X is a closed complex submanifold of some complex projective space CPN. By Chow's theorem, complex projective manifolds are automatically algebraic: they are defined by the vanishing of homogeneous polynomial equations on CPN. The standard Riemannian metric on CPN induces a Riemannian metric on X which has a strong compatibility with the complex structure, making X a Kähler manifold.
For a complex manifold X and a natural number r, every C∞ r-form on X (with complex coefficients) can be written uniquely as a sum of forms of type (p, q) with p + q = r, meaning forms that can locally be written as a finite sum of terms, with each term taking the form
$f \, dz_1 \wedge \cdots \wedge dz_p \wedge d\bar{w}_1 \wedge \cdots \wedge d\bar{w}_q,$
with f a C∞ function and the zs and ws holomorphic functions. On a Kähler manifold, the (p, q) components of a harmonic form are again harmonic. Therefore, for any compact Kähler manifold X, the Hodge theorem gives a decomposition of the cohomology of X with complex coefficients as a direct sum of complex vector spaces:
$H^r(X, \mathbb{C}) = \bigoplus_{p+q=r} H^{p,q}(X).$
This decomposition is in fact independent of the choice of Kähler metric (but there is no analogous decomposition for a general compact complex manifold). On the other hand, the Hodge decomposition genuinely depends on the structure of X as a complex manifold, whereas the group $H^r(X, \mathbb{C})$ depends only on the underlying topological space of X.
Taking wedge products of these harmonic representatives corresponds to the cup product in cohomology, so the cup product with complex coefficients is compatible with the Hodge decomposition:
$H^{p,q}(X) \times H^{p',q'}(X) \to H^{p+p',\,q+q'}(X).$
The piece Hp,q(X) of the Hodge decomposition can be identified with a coherent sheaf cohomology group, which depends only on X as a complex manifold (not on the choice of Kähler metric):
$H^{p,q}(X) \cong H^q(X, \Omega^p),$
where Ωp denotes the sheaf of holomorphic p-forms on X. For example, Hp,0(X) is the space of holomorphic p-forms on X. (If X is projective, Serre's GAGA theorem implies that a holomorphic p-form on all of X is in fact algebraic.)
On the other hand, the integral $\int_Z \alpha$ of a closed form α over a complex subvariety Z can be written as the cap product of the homology class of Z and the cohomology class represented by α. By Poincaré duality, the homology class of Z is dual to a cohomology class which we will call [Z], and the cap product can be computed by taking the cup product of [Z] and α and capping with the fundamental class of X.
Because [Z] is a cohomology class, it has a Hodge decomposition. By the computation we did above, if we cup this class with any class of type $(p', q') \neq (n-p, n-p)$, then we get zero. Because $H^{2n}(X, \mathbb{C}) = H^{n,n}(X)$, we conclude that [Z] must lie in $H^{p,p}(X)$.
The Hodge number hp,q(X) means the dimension of the complex vector space Hp,q(X). These are important invariants of a smooth complex projective variety; they do not change when the complex structure of X is varied continuously, and yet they are in general not topological invariants. Among the properties of Hodge numbers are Hodge symmetry, $h^{p,q}(X) = h^{q,p}(X)$ (because $H^{p,q}(X)$ is the complex conjugate of $H^{q,p}(X)$), and $h^{p,q}(X) = h^{n-p,\,n-q}(X)$ (by Serre duality).
The Hodge numbers of a smooth complex projective variety (or compact Kähler manifold) can be listed in the Hodge diamond (shown in the case of complex dimension 2):
$\begin{matrix} & & h^{2,2} & & \\ & h^{2,1} & & h^{1,2} & \\ h^{2,0} & & h^{1,1} & & h^{0,2} \\ & h^{1,0} & & h^{0,1} & \\ & & h^{0,0} & & \end{matrix}$
For example, every smooth projective curve of genus g has Hodge diamond
$\begin{matrix} & 1 & \\ g & & g \\ & 1 & \end{matrix}$
For another example, every K3 surface has Hodge diamond
$\begin{matrix} & & 1 & & \\ & 0 & & 0 & \\ 1 & & 20 & & 1 \\ & 0 & & 0 & \\ & & 1 & & \end{matrix}$
The Betti numbers of X are the sums of the Hodge numbers in a given row. A basic application of Hodge theory is then that the odd Betti numbers b2a+1 of a smooth complex projective variety (or compact Kähler manifold) are even, by Hodge symmetry. This is not true for compact complex manifolds in general, as shown by the example of the Hopf surface, which is diffeomorphic to $S^1 \times S^3$ and hence has $b_1 = 1$.
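As a quick consistency check on the diamonds above, the rows can be summed to recover Betti numbers; the following sketch (illustrative only) does this for the K3 diamond:

```python
# Rows of the K3 Hodge diamond, listed from the b0 row to the b4 row.
k3_diamond = [[1], [0, 0], [1, 20, 1], [0, 0], [1]]

betti = [sum(row) for row in k3_diamond]
print(betti)  # -> [1, 0, 22, 0, 1]; the odd Betti numbers are even (here 0)
```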
The "Kähler package" is a powerful set of restrictions on the cohomology of smooth complex projective varieties (or compact Kähler manifolds), building on Hodge theory. The results include the Lefschetz hyperplane theorem, the hard Lefschetz theorem, and the Hodge-Riemann bilinear relations. Many of these results follow from fundamental technical tools which may be proven for compact Kähler manifolds using Hodge theory, including the Kähler identities and the -lemma.
Hodge theory and extensions such as non-abelian Hodge theory also give strong restrictions on the possible fundamental groups of compact Kähler manifolds.
Algebraic cycles and the Hodge conjecture
Let X be a smooth complex projective variety. A complex subvariety Y in X of codimension p defines an element of the cohomology group $H^{2p}(X, \mathbb{Z})$. Moreover, the resulting class has a special property: its image in the complex cohomology $H^{2p}(X, \mathbb{C})$ lies in the middle piece of the Hodge decomposition, $H^{p,p}(X)$. The Hodge conjecture predicts a converse: every element of $H^{2p}(X, \mathbb{Z})$ whose image in complex cohomology lies in the subspace $H^{p,p}(X)$ should have a positive integral multiple that is a $\mathbb{Z}$-linear combination of classes of complex subvarieties of X. (Such a linear combination is called an algebraic cycle on X.)
A crucial point is that the Hodge decomposition is a decomposition of cohomology with complex coefficients that usually does not come from a decomposition of cohomology with integral (or rational) coefficients. As a result, the intersection
$H^{2p}(X, \mathbb{Z}) \cap H^{p,p}(X)$
may be much smaller than the whole group $H^{2p}(X, \mathbb{Z})$, even if the Hodge number $h^{p,p}$ is big. In short, the Hodge conjecture predicts that the possible "shapes" of complex subvarieties of X (as described by cohomology) are determined by the Hodge structure of X (the combination of integral cohomology with the Hodge decomposition of complex cohomology).
The Lefschetz (1,1)-theorem says that the Hodge conjecture is true for p = 1 (even integrally, that is, without the need for a positive integral multiple in the statement).
The Hodge structure of a variety X describes the integrals of algebraic differential forms on X over homology classes in X. In this sense, Hodge theory is related to a basic issue in calculus: there is in general no "formula" for the integral of an algebraic function. In particular, definite integrals of algebraic functions, known as periods, can be transcendental numbers. The difficulty of the Hodge conjecture reflects the lack of understanding of such integrals in general.
Example: For a smooth complex projective K3 surface X, the group $H^2(X, \mathbb{Z})$ is isomorphic to $\mathbb{Z}^{22}$, and $H^{1,1}(X)$ is isomorphic to $\mathbb{C}^{20}$. Their intersection can have rank anywhere between 1 and 20; this rank is called the Picard number of X. The moduli space of all projective K3 surfaces has a countably infinite set of components, each of complex dimension 19. The subspace of K3 surfaces with Picard number a has dimension 20 − a. (Thus, for most projective K3 surfaces, the intersection of $H^2(X, \mathbb{Z})$ with $H^{1,1}(X)$ is isomorphic to $\mathbb{Z}$, but for "special" K3 surfaces the intersection can be bigger.)
This example suggests several different roles played by Hodge theory in complex algebraic geometry. First, Hodge theory gives restrictions on which topological spaces can have the structure of a smooth complex projective variety. Second, Hodge theory gives information about the moduli space of smooth complex projective varieties with a given topological type. The best case is when the Torelli theorem holds, meaning that the variety is determined up to isomorphism by its Hodge structure. Finally, Hodge theory gives information about the Chow group of algebraic cycles on a given variety. The Hodge conjecture is about the image of the cycle map from Chow groups to ordinary cohomology, but Hodge theory also gives information about the kernel of the cycle map, for example using the intermediate Jacobians which are built from the Hodge structure.
Generalizations
Mixed Hodge theory, developed by Pierre Deligne, extends Hodge theory to all complex algebraic varieties, not necessarily smooth or compact. Namely, the cohomology of any complex algebraic variety has a more general type of decomposition, a mixed Hodge structure.
A different generalization of Hodge theory to singular varieties is provided by intersection homology. Namely, Morihiko Saito showed that the intersection homology of any complex projective variety (not necessarily smooth) has a pure Hodge structure, just as in the smooth case. In fact, the whole Kähler package extends to intersection homology.
A fundamental aspect of complex geometry is that there are continuous families of non-isomorphic complex manifolds (which are all diffeomorphic as real manifolds). Phillip Griffiths's notion of a variation of Hodge structure describes how the Hodge structure of a smooth complex projective variety $X_t$ varies when the parameter t varies. In geometric terms, this amounts to studying the period mapping associated to a family of varieties. Saito's theory of Hodge modules is a generalization. Roughly speaking, a mixed Hodge module on a variety X is a sheaf of mixed Hodge structures over X, as would arise from a family of varieties which need not be smooth or compact.
See also
Potential theory
Serre duality
Helmholtz decomposition
Local invariant cycle theorem
Arakelov theory
Hodge-Arakelov theory
ddbar lemma, a key consequence of Hodge theory for compact Kähler manifolds.
Notes
References
Python code for computing Hodge numbers of hypersurfaces on GitHub | Hodge theory | [
"Engineering"
] | 3,988 | [
"Tensors",
"Differential forms",
"Hodge theory"
] |
501,758 | https://en.wikipedia.org/wiki/Magnetocrystalline%20anisotropy | In physics, a ferromagnetic material is said to have magnetocrystalline anisotropy if it takes more energy to magnetize it in certain directions than in others. These directions are usually related to the principal axes of its crystal lattice. It is a special case of magnetic anisotropy. In other words, the excess energy required to magnetize a specimen in a particular direction over that required to magnetize it along the easy direction is called crystalline anisotropy energy.
Causes
The spin-orbit interaction is the primary source of magnetocrystalline anisotropy. Essentially, the orbital motion of the electrons couples with the crystal electric field, giving rise to the first-order contribution to magnetocrystalline anisotropy. The second-order contribution arises from the mutual interaction of the magnetic dipoles. This effect is weak compared to the exchange interaction and is difficult to compute from first principles, although some successful computations have been made.
Practical relevance
Magnetocrystalline anisotropy has a great influence on industrial uses of ferromagnetic materials. Materials with high magnetic anisotropy usually have high coercivity, that is, they are hard to demagnetize. These are called "hard" ferromagnetic materials and are used to make permanent magnets. For example, the high anisotropy of rare-earth metals is mainly responsible for the strength of rare-earth magnets. During manufacture of magnets, a powerful magnetic field aligns the microcrystalline grains of the metal such that their "easy" axes of magnetization all point in the same direction, freezing a strong magnetic field into the material.
On the other hand, materials with low magnetic anisotropy usually have low coercivity, their magnetization is easy to change. These are called "soft" ferromagnets and are used to make magnetic cores for transformers and inductors. The small energy required to turn the direction of magnetization minimizes core losses, energy dissipated in the transformer core when the alternating current changes direction.
Thermodynamic theory
The magnetocrystalline anisotropy energy is generally represented as an expansion in powers of the direction cosines of the magnetization. The magnetization vector can be written $\mathbf{M} = M_s(\alpha_1, \alpha_2, \alpha_3)$, where $M_s$ is the saturation magnetization. Because of time reversal symmetry, only even powers of the cosines are allowed. The nonzero terms in the expansion depend on the crystal system (e.g., cubic or hexagonal). The order of a term in the expansion is the sum of all the exponents of magnetization components; e.g., $\alpha_1\alpha_2$ is second order.
Uniaxial anisotropy
More than one kind of crystal system has a single axis of high symmetry (threefold, fourfold or sixfold). The anisotropy of such crystals is called uniaxial anisotropy. If the axis is taken to be the main symmetry axis of the crystal, the lowest order term in the energy is
$E/V = -K_1 \alpha_3^2.$
The ratio E/V is an energy density (energy per unit volume). This can also be represented in spherical polar coordinates with $\alpha_1 = \sin\theta\cos\phi$, $\alpha_2 = \sin\theta\sin\phi$, and $\alpha_3 = \cos\theta$:
$E/V = K_1 \sin^2\theta$ (up to an additive constant).
The parameter $K_1$, often represented as $K_u$, has units of energy density and depends on composition and temperature.
The minima in this energy with respect to θ satisfy
$\partial E / \partial\theta = 0 \quad \text{and} \quad \partial^2 E / \partial\theta^2 > 0.$
If $K_1 > 0$, the directions of lowest energy are the ±z directions. The z axis is called the easy axis. If $K_1 < 0$, there is an easy plane perpendicular to the symmetry axis (the basal plane of the crystal).
Many models of magnetization represent the anisotropy as uniaxial and ignore higher order terms. However, if , the lowest energy term does not determine the direction of the easy axes within the basal plane. For this, higher-order terms are needed, and these depend on the crystal system (hexagonal, tetragonal or rhombohedral).
Hexagonal system
In a hexagonal system the c axis is an axis of sixfold rotation symmetry. The energy density is, to sixth
order,
$E/V = K_1 \sin^2\theta + K_2 \sin^4\theta + K_3 \sin^6\theta \cos 6\phi.$
The uniaxial anisotropy is mainly determined by the first two terms. Depending on the values of $K_1$ and $K_2$, there are four different kinds of anisotropy (isotropic, easy axis, easy plane and easy cone), listed here and classified programmatically in the sketch after this list:
$K_1 = K_2 = 0$: the ferromagnet is isotropic.
$K_1 > 0$ and $K_2 > -K_1$: the c axis is an easy axis.
$K_1 > 0$ and $K_2 < -K_1$: the basal plane is an easy plane.
$K_1 < 0$ and $K_2 < -K_1/2$: the basal plane is an easy plane.
$K_1 < 0$ and $K_2 > -K_1/2$: the ferromagnet has an easy cone (see figure to right).
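The case analysis above can be mechanized. The sketch below follows the inequalities just listed, which assume the truncated energy E/V = K1 sin²θ + K2 sin⁴θ; degenerate boundary cases (e.g. K2 exactly equal to −K1) are not handled.

```python
import math

def uniaxial_anisotropy_type(K1, K2):
    """Classify E/V = K1*sin(t)**2 + K2*sin(t)**4, t = angle from c axis."""
    if K1 == 0 and K2 == 0:
        return "isotropic"
    if K1 >= 0:
        return "easy axis" if K2 > -K1 else "easy plane"
    # K1 < 0 from here on:
    if K2 < -K1 / 2:
        return "easy plane"
    # Interior minimum at sin(t)**2 = -K1 / (2*K2): an easy cone.
    half_angle = math.degrees(math.asin(math.sqrt(-K1 / (2 * K2))))
    return f"easy cone (half-angle ~ {half_angle:.1f} deg)"

print(uniaxial_anisotropy_type(5e5, 1e5))    # -> easy axis
print(uniaxial_anisotropy_type(-5e5, 5e5))   # -> easy cone (~45.0 deg)
```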
The basal plane anisotropy is determined by the third term, which is sixth-order. The easy directions are projected onto three axes in the basal plane.
Below are some room-temperature anisotropy constants for hexagonal ferromagnets. Since all the values of $K_1$ and $K_2$ are positive, these materials have an easy axis.
Under particular conditions, higher-order constants may lead to first-order magnetization processes (FOMP).
Tetragonal and rhombohedral systems
The energy density for a tetragonal crystal is
$E/V = K_1 \sin^2\theta + K_2 \sin^4\theta + K_3 \sin^4\theta \cos 4\phi.$
Note that the $K_3$ term, the one that determines the basal plane anisotropy, is fourth order (same as the $K_2$ term). The definition of $K_3$ may vary by a constant multiple between publications.
The energy density for a rhombohedral crystal is
$E/V = K_1 \sin^2\theta + K_2 \sin^4\theta + K_3 \cos\theta \sin^3\theta \cos 3\phi.$
Cubic anisotropy
In a cubic crystal the lowest order terms in the energy are
$E/V = K_1(\alpha_1^2\alpha_2^2 + \alpha_2^2\alpha_3^2 + \alpha_3^2\alpha_1^2) + K_2\,\alpha_1^2\alpha_2^2\alpha_3^2.$
If the second term can be neglected, the easy axes are the ⟨100⟩ axes (i.e., the x, y, and z directions) for $K_1 > 0$ and the ⟨111⟩ directions for $K_1 < 0$ (see images on right).
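To see this sign convention at work, the sketch below evaluates the cubic energy density at the two candidate easy directions; the K1 values are order-of-magnitude illustrations (iron-like positive, nickel-like negative), not data from the article.

```python
def cubic_anisotropy_energy(K1, K2, a):
    """E/V = K1*(a1^2 a2^2 + a2^2 a3^2 + a3^2 a1^2) + K2*a1^2 a2^2 a3^2
    for a unit magnetization direction a = (a1, a2, a3)."""
    a1, a2, a3 = (c * c for c in a)   # squared direction cosines
    return K1 * (a1 * a2 + a2 * a3 + a3 * a1) + K2 * a1 * a2 * a3

s = 3 ** -0.5                         # component of a unit <111> vector
for K1 in (+4.8e4, -5.7e3):           # illustrative J/m^3 values only
    e100 = cubic_anisotropy_energy(K1, 0.0, (1, 0, 0))   # always 0
    e111 = cubic_anisotropy_energy(K1, 0.0, (s, s, s))   # K1/3
    easy = "<100>" if e100 < e111 else "<111>"
    print(f"K1 = {K1:+.1e}: easy axes are {easy}")
```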
If $K_2$ is not assumed to be zero, the easy axes depend on both $K_1$ and $K_2$. These are given in the table below, along with hard axes (directions of greatest energy) and intermediate axes (saddle points in the energy). In energy surfaces like those on the right, the easy axes are analogous to valleys, the hard axes to peaks and the intermediate axes to mountain passes.
Below are some room-temperature anisotropy constants for cubic ferromagnets. The compounds involving iron(III) oxide are ferrites, an important class of ferromagnets. In general the anisotropy parameters for cubic ferromagnets are lower than those for uniaxial ferromagnets. This is consistent with the fact that the lowest order term in the expression for cubic anisotropy is fourth order, while that for uniaxial anisotropy is second order.
Temperature dependence of anisotropy
The magnetocrystalline anisotropy parameters have a strong dependence on temperature. They generally decrease rapidly as the temperature approaches the Curie temperature, so the crystal becomes effectively isotropic. Some materials also have an isotropic point at which $K_1 = 0$. Magnetite (Fe3O4), a mineral of great importance to rock magnetism and paleomagnetism, has an isotropic point at 130 kelvin.
Magnetite also has a phase transition at which the crystal symmetry changes from cubic (above the transition) to monoclinic or possibly triclinic (below it). The temperature at which this occurs, called the Verwey temperature, is 120 kelvin.
Magnetostriction
The magnetocrystalline anisotropy parameters are generally defined for ferromagnets that are constrained to remain undeformed as the direction of magnetization changes. However, coupling between the magnetization and the lattice does result in deformation, an effect called magnetostriction. To keep the lattice from deforming, a stress must be applied. If the crystal is not under stress, magnetostriction alters the effective magnetocrystalline anisotropy. If a ferromagnet is single domain (uniformly magnetized), the effect is to change the magnetocrystalline anisotropy parameters.
In practice, the correction is generally not large. In hexagonal crystals, there is no change in $K_1$. In cubic crystals, there is a small change, as in the table below.
See also
Anisotropy energy
Notes and references
Further reading
Magnetic ordering
Orientation (geometry)
Ferromagnetism | Magnetocrystalline anisotropy | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,688 | [
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Topology",
"Space",
"Condensed matter physics",
"Ferromagnetism",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
501,773 | https://en.wikipedia.org/wiki/Cytochrome%20C1 | Cytochrome C1 (also known as Complex III subunit 4) is a protein encoded by the CYC1 gene. Cytochrome is a heme-containing subunit of the cytochrome b-c1 complex, which accepts electrons from Rieske protein and transfers electrons to cytochrome c in the mitochondrial respiratory chain. It is formed in the cytosol and targeted to the mitochondrial intermembrane space. Cytochrome c1 belongs to the cytochrome c family of proteins.
Function
Cytochrome C1 plays a role in the electron transfer during oxidative phosphorylation. As an iron-sulfur protein approaches the b-c1 complex, it accepts an electron from the cytochrome b subunit, then undergoes a conformational change to attach to cytochrome c1. There, the electron carried by the iron-sulfur protein is transferred to the heme carried by cytochrome c1. This electron is then transferred to a heme carried by cytochrome c. This creates a reduced species of cytochrome c, which separates from the b-c1 complex and moves to the last enzyme in the electron transport chain, cytochrome c oxidase (Complex IV).
Species
CYC1 is a human gene that is conserved in chimpanzee, Rhesus monkey, dog, cow, mouse, rat, zebrafish, fruit fly, mosquito, C. elegans, S. cerevisiae, K. lactis, E. gossypii, S. pombe, N. crassa, A. thaliana, rice, and frog. There are orthologs of CYC1 in 137 known organisms.
In its structure and function, the cytochrome b-c1 complex bears extensive analogy to the cytochrome b6f complex of chloroplasts and cyanobacteria; cytochrome c1 plays an analogous role to cytochrome f, despite their different structures.
Clinical relevance
Mutations in the CYC1 gene are associated with mitochondrial complex III deficiency nuclear type 6. The disease symptoms include early childhood onset of severe lactic acidosis and ketoacidosis, usually in response to infection. Insulin-responsive hyperglycemia is also present, but psychomotor development appears normal. Mutation of CYC1 was observed to cause instability in the cytochrome b-c1 complex, which decreased its ability to create energy through oxidative phosphorylation. Mitochondrial complex III deficiency nuclear type 6 is autosomal recessive.
References
External links
Transmembrane proteins
Cellular respiration | Cytochrome C1 | [
"Chemistry",
"Biology"
] | 532 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
24,018,375 | https://en.wikipedia.org/wiki/Generalized%20Lotka%E2%80%93Volterra%20equation | The generalized Lotka–Volterra equations are a set of equations which are more general than either the competitive or predator–prey examples of Lotka–Volterra types. They can be used to model direct competition and trophic relationships between an arbitrary number of species. Their dynamics can be analysed analytically to some extent. This makes them useful as a theoretical tool for modeling food webs. However, they lack features of other ecological models such as predator preference and nonlinear functional responses, and they cannot be used to model mutualism without allowing indefinite population growth.
The generalised Lotka-Volterra equations model the dynamics of the populations $x_1, x_2, \dots, x_n$ of $n$ biological species. Together, these populations can be considered as a vector $x$. They are a set of ordinary differential equations given by
$\frac{dx_i}{dt} = x_i f_i(x),$
where the vector $f$ is given by
$f(x) = r + Ax,$
where $r$ is a vector of intrinsic rates and $A$ is a matrix known as the interaction matrix.
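A minimal numerical sketch of these equations follows. It assumes SciPy is available; the rate vector and interaction matrix are arbitrary illustrative values (a prey/predator pair with self-limitation), not taken from the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([0.8, -0.4])            # prey grows alone, predator declines alone
A = np.array([[-0.1, -0.5],          # interaction matrix a_ij
              [ 0.4, -0.1]])         # predator (row 2) gains from prey

def glv(t, x):
    # dx_i/dt = x_i * (r_i + sum_j a_ij * x_j)
    return x * (r + A @ x)

sol = solve_ivp(glv, (0.0, 50.0), [1.0, 0.5])
print(sol.y[:, -1])                  # both populations remain positive at t = 50
```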
Meaning of parameters
The generalised Lotka-Volterra equations can represent competition and predation, depending on the values of the parameters, as described below. "Generalized" means that all the combinations of pairs of signs for both species (−/−, −/+, +/−, +/+) are possible. They are less suitable for describing mutualism.
The values of $r_i$ are the intrinsic birth or death rates of the species. A positive value for $r_i$ means that species i is able to reproduce in the absence of any other species (for instance, because it is a plant that is wind pollinated), whereas a negative value means that its population will decline unless the appropriate other species are present (e.g. a herbivore that cannot survive without plants to eat, or a predator that cannot persist without its prey).
The values of the elements $a_{ij}$ of the interaction matrix represent the relationships between the species. The value of $a_{ij}$ represents the effect that species j has upon species i. The effect is proportional to the populations of both species, as well as to the value of $a_{ij}$. Thus, if both $a_{ij}$ and $a_{ji}$ are negative then the two species are said to be in direct competition with one another, since they each have a direct negative effect on the other's population. If $a_{ij}$ is positive but $a_{ji}$ is negative then species i is considered to be a predator (or parasite) on species j, since i's population grows at j's expense.
Positive values for both $a_{ij}$ and $a_{ji}$ would be considered mutualism. However, this is not often used in practice, because it can make it possible for both species' populations to grow indefinitely.
Indirect negative and positive effects are also possible. For example, if two predators eat the same prey then they compete indirectly, even though they might not have a direct competition term in the community matrix.
The diagonal terms $a_{ii}$ are usually taken to be negative (i.e. species i's population has a negative effect on itself). This self-limitation prevents populations from growing indefinitely.
Dynamics and solutions
The generalised Lotka-Volterra equations are capable of a wide variety of dynamics, including limit cycles and chaos as well as point attractors (see Hofbauer and Sigmund). As with any set of ODEs, fixed points can be found by setting $\dot{x}_i$ to 0 for all i, which gives, if no species is extinct, i.e., if $x_i \neq 0$ for all $i$,
$x^* = -A^{-1} r.$
This may or may not have positive values for all the $x^*_i$; if it does not, then there is no stable attractor for which the populations of all species are positive. If there is a fixed point with all positive populations, the Jacobian matrix in a neighbourhood of the fixed point is given by $M = \operatorname{diag}(x^*)\,A$, that is, $M_{ij} = x^*_i a_{ij}$. This matrix is known as the community matrix and its eigenvalues determine the stability of the fixed point $x^*$. The fixed point may or may not be stable.
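Continuing with the same illustrative parameters as in the sketch above, the interior fixed point and the eigenvalues of its community matrix can be checked numerically:

```python
import numpy as np

r = np.array([0.8, -0.4])
A = np.array([[-0.1, -0.5],
              [ 0.4, -0.1]])

x_star = -np.linalg.solve(A, r)      # fixed point: r + A x* = 0
M = np.diag(x_star) @ A              # community matrix M_ij = x*_i * a_ij
eigs = np.linalg.eigvals(M)
print(x_star)                        # all positive -> feasible equilibrium
print(eigs.real.max() < 0)           # True -> the fixed point is locally stable
```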
If the fixed point is unstable then there may or may not be a periodic or chaotic attractor for which all the populations remain positive. In either case there can also be attractors for which some of the populations are zero and others are positive.
$x = 0$ is always a fixed point, corresponding to the absence of all species. For two species, a complete classification of this dynamics, for all sign patterns of the above coefficients, is available, which is based upon equivalence to the 3-type replicator equation.
Applications for single trophic communities
In the case of a single trophic community, the trophic level below that of the community (e.g. plants for a community of herbivore species), corresponding to the food required for individuals of a species i to thrive, is modeled through a parameter Ki known as the carrying capacity. E.g. suppose a mixture of crops involving S species. In this case $a_{ij}$ can thus be written in terms of a non-dimensional interaction coefficient $\alpha_{ij}$: $a_{ij} = -r_i \alpha_{ij} / K_i$, with $\alpha_{ii} = 1$.
Quantitative prediction of species yields from monoculture and biculture experiments
A straightforward procedure to get the set of model parameters is to perform, until the equilibrium state is attained: a) the S single-species or monoculture experiments, and from each of them estimate the carrying capacities $K_i^{ex}$ as the yield of species i in monoculture (the superscript 'ex' is to emphasize that this is an experimentally measured quantity); b) the S(S−1)/2 pairwise experiments producing the biculture yields $x_{i(j)}^{ex}$ and $x_{j(i)}^{ex}$ (the subscripts i(j) and j(i) stand for the yield of species i in presence of species j and vice versa). We can then obtain $\alpha_{ij}$ and $\alpha_{ji}$ from the two-species equilibrium conditions as:
$\alpha_{ij} = \frac{K_i^{ex} - x_{i(j)}^{ex}}{x_{j(i)}^{ex}}, \qquad \alpha_{ji} = \frac{K_j^{ex} - x_{j(i)}^{ex}}{x_{i(j)}^{ex}}.$
Using this procedure (sketched in code below) it was observed that the generalized Lotka–Volterra equations can predict with reasonable accuracy most of the species yields in mixtures of S > 2 species for the majority of a set of 33 experimental treatments across different taxa (algae, plants, protozoa, etc.).
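A sketch of this estimation step for one species pair, assuming the two-species equilibrium conditions $K_i = x_{i(j)} + \alpha_{ij} x_{j(i)}$ hold; all yield values here are hypothetical, chosen only for illustration.

```python
def pairwise_alphas(K_i, K_j, x_i_with_j, x_j_with_i):
    """Infer nondimensional interaction coefficients from monoculture
    yields (K) and biculture equilibrium yields."""
    alpha_ij = (K_i - x_i_with_j) / x_j_with_i
    alpha_ji = (K_j - x_j_with_i) / x_i_with_j
    return alpha_ij, alpha_ji

# Hypothetical yields: each species is suppressed in biculture.
print(pairwise_alphas(K_i=100.0, K_j=80.0, x_i_with_j=60.0, x_j_with_i=50.0))
# -> (0.8, 0.5)
```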
Early warnings of species crashes
The vulnerability of species richness to several factors like, climate change, habitat fragmentation, resource exploitation, etc., poses a challenge to conservation biologists and agencies working to sustain the ecosystem services. Hence, there is a clear need for early warning indicators of species loss generated from empirical data.
A recently proposed early warning indicator of such population crashes uses effective estimation of the Lotka-Volterra interaction coefficients $a_{ij}$. The idea is that such coefficients can be obtained from spatial distributions of individuals of the different species through maximum entropy. This method was tested against the data collected for trees by the Barro Colorado Island research station, comprising eight censuses performed every 5 years from 1981 to 2015. The main finding was that, for those tree species that suffered steep population declines (of at least 50%) across the eight tree censuses, the drop in the inferred interaction coefficients is always steeper and occurs before the drop of the corresponding species abundance Ni. Indeed, such sharp declines occur between 5 and 15 years earlier than comparable declines for Ni, and thus they serve as early warnings of impending population busts.
See also
Competitive Lotka–Volterra equations, based on a sigmoidal population curve (i.e., it has a carrying capacity)
Predator–prey Lotka–Volterra equations, based on exponential population growth (i.e., no limits on reproduction ability)
Random generalized Lotka–Volterra model
Consumer-resource model
Community matrix
Replicator equation
Volterra lattice
References
Equations
Population dynamics
Population ecology
Community ecology | Generalized Lotka–Volterra equation | [
"Mathematics"
] | 1,476 | [
"Mathematical objects",
"Equations"
] |
24,018,403 | https://en.wikipedia.org/wiki/C7H14N2 | {{DISPLAYTITLE:C7H14N2}}
The molecular formula C7H14N2 (molar mass: 126.20 g/mol) may refer to:
Bispidine (3,7-diazabicyclo[3.3.1]nonane)
N,N'-Diisopropylcarbodiimide
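The quoted molar mass can be checked from standard atomic weights; the rounded values below are the usual ones, and the helper function is illustrative only.

```python
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007}

def molar_mass(counts):
    """Sum atomic weights for a composition given as {symbol: count}."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in counts.items())

print(round(molar_mass({"C": 7, "H": 14, "N": 2}), 2))  # -> 126.2 g/mol
```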
Molecular formulas | C7H14N2 | [
"Physics",
"Chemistry"
] | 82 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,021,039 | https://en.wikipedia.org/wiki/Paris%20inch | The Paris inch or pouce is an archaic unit of length that, among other uses, was common for giving the measurement of lenses. The Paris inch could be subdivided into 12 Paris lines (ligne), and 12 Paris inches made a Paris foot. The abbreviations are the same as for other inch and foot units, i.e.: for Paris foot a single prime symbol ( ′ ), for Paris inch a double prime symbol ( ″ ) and for Paris line a triple prime symbol ( ‴ ),
The Paris inch is longer than the English inch and the Vienna inch, although the Vienna inch was subdivided with a decimal, not 12 lines.
A famous measurement made using the Paris inch is the lens measurement of the first great refractor telescope, the Dorpat Great Refractor, also known as the Fraunhofer 9-inch. The 9-Paris inch diameter lens was made by Joseph von Fraunhofer, which works out to about 24.4 centimetres (9.59 English inches). This lens had the largest aperture of its day for an achromatic lens.
The unit persisted for telescopes even into the 20th century: a telescope in the 1909 Sears Roebuck catalog is listed as having an aperture of 25 lignes in diameter, or about 56 mm (5.6 cm). The measurement SPI (stitches per inch) for leather pricking irons and stitch-marking wheels also commonly uses the Paris inch instead of the imperial inch.
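For reference, conversions like those above can be reproduced from the commonly quoted value of the Paris line, about 2.2558 mm; that figure is a standard approximation, not given in the article itself.

```python
PARIS_LINE_MM = 2.2558            # ~2.2558 mm per ligne; 12 lignes per Paris inch
PARIS_INCH_MM = 12 * PARIS_LINE_MM

print(9 * PARIS_INCH_MM / 10)     # 9 Paris inches in cm          -> ~24.4
print(25 * PARIS_LINE_MM)         # 25 lignes in mm               -> ~56.4
print(PARIS_INCH_MM / 25.4)       # English inches per Paris inch -> ~1.066
```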
See also
English & international inch
Vienna inch
Traditional French units of measurement
References
Optics
Units of length | Paris inch | [
"Physics",
"Chemistry",
"Mathematics"
] | 317 | [
"Applied and interdisciplinary physics",
"Optics",
"Units of length",
"Quantity",
" molecular",
"Atomic",
"Units of measurement",
" and optical physics"
] |
24,022,937 | https://en.wikipedia.org/wiki/C23H30N2O5 | {{DISPLAYTITLE:C23H30N2O5}}
The molecular formula C23H30N2O5 (molar mass: 414.50 g/mol) may refer to:
Deacetylvindoline
7-Hydroxymitragynine
Mitragynine pseudoindoxyl
Molecular formulas | C23H30N2O5 | [
"Physics",
"Chemistry"
] | 72 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,023,788 | https://en.wikipedia.org/wiki/Crystatech | CrystaTech Inc. is a supplier of process technology to the energy industry. CrystaTech commercializes the patented Crystasulf process. CrystaSulf is the first commercially available product to provide low cost hydrogen sulfide (H2S) removal from gas streams.
The company was founded in 1999 and is financially backed by the Gas Technology Institute and major energy companies through sponsored clean energy technology development. The corporate office is located in Austin, Texas.
CrystaTech is a member of the Gas Processors Suppliers Association.
Regional offices are in Alberta, Canada and Houston, Texas. All early stage R&D takes place at the Gas Technology Institute in Des Plaines, Illinois. Representative customers include Total, Petrobank Energy and Resources Ltd., Queensland Energy Resources, U.S. Department of Energy, Luminant, and American Electric Power.
Key People
David Work - Chairman
Don Carlton - Independent Director
Notes
References
http://gpaglobal.org/gpsa/membercompanies/xealapp/index.php#C
https://web.archive.org/web/20090905204521/http://www.gastechnology.org/webroot/app/xn/xd.aspx?it=enweb&xd=MarketResults%2Fmkt_portfolioCo.xml
External links
CrystaTech's Web Site
Green chemistry
Chemical process engineering
Companies based in Austin, Texas
Privately held companies based in Texas
Technology companies established in 1999
1999 establishments in Texas | Crystatech | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 320 | [
"Green chemistry",
"Chemical engineering",
"Environmental chemistry",
"nan",
"Chemical process engineering"
] |
24,024,923 | https://en.wikipedia.org/wiki/Identifiable%20victim%20effect | The identifiable victim effect is the tendency of individuals to offer greater aid when a specific, identifiable person ("victim") is observed under hardship, as compared to a large, vaguely defined group with the same need.
The identifiable victim effect has two components. People are more inclined to help an identified victim than an unidentified one, and people are more inclined to help a single identified victim than a group of identified victims. Although helping an identified victim may be commendable, the identifiable victim effect is considered a cognitive bias. From a consequentialist point of view, the cognitive error is the failure to offer N times as much help to N unidentified victims.
The identifiable victim effect has a mirror image that is sometimes called the identifiable perpetrator effect. Research has shown that individuals are more inclined to mete out punishment, even at their own expense, when they are punishing a specific, identified perpetrator.
The conceptualization of the identifiable victim effect as it is known today is commonly attributed to American economist Thomas Schelling. He wrote that harm to a particular person invokes “anxiety and sentiment, guilt and awe, responsibility and religion, [but]…most of this awesomeness disappears when we deal with statistical death”.
Historical figures from Joseph Stalin to Mother Teresa are credited with statements that epitomize the identifiable victim effect. The remark "One death is a tragedy; a million deaths is a statistic" is widely, although probably incorrectly, attributed to Stalin. The remark "If I look at the mass I will never act. If I look at the one, I will," is attributed to Mother Teresa.
Examples
This article mentions many historical incidents that have been cited as examples of the identifiable victim effect. These incidents serve as illustrative examples but do not constitute evidence that the effect exists.
Arrest of Rosa Parks
The conviction of Rosa Parks in 1955 for refusing to give up her bus seat in favor of a White passenger inspired the Black community to boycott the Montgomery, Alabama buses for over a year. Parks appealed her conviction, but her case never reached the U.S. Supreme Court. The Court found bus segregation unconstitutional in Browder v. Gayle, a case with four plaintiffs. Parks remains a symbol of resistance to racial segregation in the United States, while the four Browder v. Gayle plaintiffs are much less well known.
'Baby Jessica' in the well
On October 14, 1987, 18-month old Jessica McClure fell into a narrow well in her aunt's backyard in Midland, Texas. Within hours, 'Baby Jessica', as she became known, made headlines around the US. The public reacted with sympathy towards her ordeal. While teams of rescue workers, paramedics and volunteers worked to successfully rescue 'Baby Jessica' in 58 hours, the public donated hundreds of thousands of dollars to her family. Even after Jessica was discharged from the hospital, the McClure family was flooded with cards and gifts from members of the public as well as a visit from then-Vice President George H.W. Bush and a telephone call from then-President Ronald Reagan.
Drowning of Alan Kurdi
In September, 2015, three-year-old Syrian refugee Alan (or Aylan) Kurdi drowned when he and his family tried to reach Europe by boat. A photograph of Kurdi's body caused a dramatic upturn in international concern over the refugee crisis. The picture has been credited with causing a surge in donations to charities helping migrants and refugees, with one charity, the Migrant Offshore Aid Station, recording a 15-fold increase in donations within 24 hours of its publication.
Murder of George Floyd
The murder of George Floyd by a police officer in May, 2020 led to worldwide protests against police brutality. Almost 1,000 people are killed in the U.S. by police every year, and a black male is 2.5 times as likely to be killed by police as a white male. The reader is invited to introspect and consider if he or she is a thousand times more outraged by a thousand annual killings than by the killing of George Floyd.
For which victims is the effect strongest?
Meta-analysis of experimental evidence
A meta-analysis of studies through 2015 of the identifiable victim effect found that not all studies achieve statistical significance, and the effect size, in general, is small. In an experiment more recent than the meta-analysis, identifying a victim in COVID-19 messaging had no meaningful effect on pro-health behaviors such as hand-washing, mask-wearing, and staying at home.
On the other hand, extremely subtle experimental manipulations may yield significant effects. For example, Small and Loewenstein found that when the victim is identified only by a number, subjects are more inclined to help the victim if they know the victim's number when they decide whether to help than if they learn the number later.
The meta-analysis found that statistical significance is enhanced and the effect size is increased under the following circumstances:
if the identified victim is a child
if a photograph of the victim is shown
if the victim's plight is caused by poverty rather than disease or injury
if the victim is perceived as not responsible for his/her plight
Vividness
Information that identifies a victim may include vivid details such as photographs, video, and a description of the victim's predicament. The victim may be portrayed as innocent and helpless. These details evoke emotional responses and provide a sense of familiarity and social closeness. Therefore, identified victims may elicit greater support than statistical victims.
Studies have found that people respond more to concrete, graphic information than abstract, statistical information. Bohnet and Frey (1999) and Kogut and Ritov (2005) found that vividness contributes to the identifiable victim effect. Another study by Kogut and Ritov (2005) found that donations to benefit a needy child increased when the name and a picture of the child were provided. Jenni and Loewenstein (1997) did not observe an effect of vividness.
In 2003, Deborah Small and George Loewenstein conducted an experiment showing that the identifiable victim effect does not result from vividness alone. Victims were identified only by number. Participants selected the victim to whom they could donate by picking a victim's number out of a bag. Participants donated significantly more money if they picked the victim's number before donating rather than afterward. They donated more to an already identified victim even though the victim's identity was hidden from them.
Singularity effect
The identifiable victim effect disappears when a group of victims, rather than a single victim, is identified. In a group of two or more victims, identifying every victim makes no difference. For example, a 2005 study by Kogut and Ritov asked participants how much they would be willing to donate to either a critically ill child or a group of eight critically ill children. Although identifying the individual child increased donations, identifying every child in the group of eight did not.
Paul Slovic argued that our compassion fades as the number of victims increases, and eventually collapses. He and his colleagues found that experimental subjects donated less to help two starving children than to help one.
When the victim can be blamed
Research suggests that if an individual is seen as responsible for their plight, people offer less help if the victim is identified. Most research dedicated to the identifiable victim effect avoids the topic of blame, using explicitly blameless individuals, such as children suffering from an illness. However, there are real-world situations where victims may be seen as to blame for their current situation. For example, in a 2011 study by Kogut, individuals were less likely to offer help to an AIDS victim if the victim had contracted AIDS through sexual contact than if the individual was born with AIDS. In other words, individuals were less likely to offer help to victims if they were seen as at least partially responsible for their plight. A meta-study conducted in 2016 supports these findings, reporting that charitable donations were highest when the victim showed little responsibility for their victimization.
In such cases where victim blaming is possible, identification of individuals may not induce sympathy and may actually increase negative perception of the victim. This reduction in help is even more pronounced if the individual believes in the just world fallacy, which is the tendency for people to blame the victim for what has happened to them. This pattern of blame results from a desire to believe that the world is predictable and orderly and that those who suffer must have done something to deserve their suffering.
Explanations
Researchers have proposed various underlying causes of the identifiable victim effect, which may work together to produce the effect. These possible causes are given below, and experimental tests are cited.
Emotional reactions
According to the affect heuristic, people make decisions based on emotions rather than objective analysis. A single identified victim may trigger a stronger emotional response than a group of unidentified victims. Several studies have found that identifying a victim evokes more sympathy for the victim and more distress at the victim's plight, along with more willingness to help.
Kogut and Ritov, for example, asked participants how much they would donate to help a critically ill child. When they identified the child, feelings of distress at the child's plight increased along with donations. This supports the idea that altruistic acts may serve as coping mechanisms to alleviate negative emotions, such as distress or guilt.
Effect of reference group size
Risk that is concentrated is perceived as greater than the same risk dispersed over a wider population. Identifiable victims are their own reference group; if they do not receive aid then the entire reference group will perish. For example, Fetherstonhaugh et al. found that an intervention saving a fixed number of lives was considered less beneficial when more total lives were at risk.
Jenni and Loewenstein's experimental subjects showed significantly more support for risk-reducing actions when a higher proportion of the reference group was at risk. This effect was so striking that Jenni and Loewenstein suggested that the identifiable victim effect could instead be called the “percentage of reference group saved effect”.
Perceived responsibility
People tend to feel more responsibility for victims who are psychologically closer to them, and people may feel closer to an identified victim. Indeed, several studies have found that individuals feel more responsibility for an identified victim.
Identified vs statistical victims
Some victims cannot be identified because they are statistical. For example, we do not know whose lives would be saved if 10% more of the population were vaccinated against a disease. There are several theoretical reasons to suspect that identified victims are more likely to be helped than statistical ones:
People underestimate the importance of outcomes that are merely probable rather than certain. If the money sent to Baby Jessica when she was trapped in the well had been spent on preventative health care for children, many lives might plausibly have been saved, but Jessica certainly would have died if she had not been rescued in time. In one of Jenni and Loewenstein's two experiments comparing certain and uncertain deaths, their subjects were significantly more concerned about certain deaths.
People regret losses more than they enjoy equivalent gains. In the Baby Jessica example above, the death of Jessica if she had not been rescued would have been a tragic loss, but the children's lives that might have been saved through preventative health care were framed as gains.
The decision to help an identified victim is made ex post, after the victim is in danger, but the decision to save a statistical victim is often made ex ante, to prevent danger to the individual. People may feel a responsibility to an actual identified victim but not to a possible victim of a future tragedy that might not occur. This explanation is closest to what Thomas Schelling implied in his now-famous paper. Jenni and Loewenstein (1997) did not find evidence that ex post vs ex ante evaluation contributes to the identifiable victim effect, but Small and Loewenstein (2003) did.
Indeed, researchers have generally found that identified victims are more likely to be helped than statistical ones.
For example, Small, Loewenstein, and Slovic found that subjects donated much more money to help a single starving girl named Rokia than to relieve a famine described statistically.
Relation to other cognitive biases
The identifiable victim effect is a special case of a more general phenomenon: people respond to stories more readily than to facts. Kubin et al found that people have more respect for their political opponents' opinions when their opponents support their opinions with personal experiences rather than facts. In keeping with the literature on the identifiable victim effect, they found that personal experiences involving harm are particularly effective.
The preference for helping a single individual rather than a group is sometimes called the singularity effect. Indifference to the number of individuals helped is called scope neglect or scope insensitivity.
The identifiable victim effect has a mirror image that is sometimes called the identifiable perpetrator effect.
Research has shown that individuals are more inclined to mete out punishment, even at their own expense, when they are punishing a specific, identified perpetrator. They also exert more severe punishments and express stronger feelings of blame and anger. Even when the perpetrator is identified only by a number, subjects are more inclined to punish if they know the perpetrator’s number when they decide whether to punish than if they learn the number later. This effect has also been called the “Goldstein effect,” after the fictional Emmanuel Goldstein, who was vilified as the supposed enemy of the state in George Orwell’s dystopian novel 1984.
These two effects, of identifiable victims and identifiable perpetrators, suggest that there is a more general identifiable other effect, such that any identified individual evokes a stronger reaction than an equivalent but unidentified individual.
Implications
Public policy and politics
Healthcare
The identifiable victim effect may also influence healthcare, both at the individual and national level. On the individual level, doctors are more likely to recommend expensive, but potentially life-saving, treatments to an individual patient rather than to a group of patients. This effect is not limited to medical professionals, as laymen demonstrate this same bias towards providing more expensive treatments for individual patients. On the national level, the American people are far more likely to contribute to an expensive treatment to save the life of one person rather than spend much smaller amounts on preventative measures that could save the lives of thousands per year. This nationwide bias towards expensive treatments, often attributed to American individualism, remains prevalent today.
Brady Bill
James S. Brady, the then-White House press secretary, was one of three people wounded alongside President Reagan in the 1981 assassination attempt. Brady was explicitly named in reports of the shooting, in contrast to the other two injured, a District of Columbia police officer and a Secret Service agent. The political reaction focused largely on Brady's injuries, which led to the enactment of the Brady Handgun Violence Prevention Act of 1993. The act requires firearm dealers to perform background checks on firearm purchasers.
Ryan White Care Act
The need to tackle the problems faced by AIDS sufferers was brought to the political forefront as a result of the legal and social plight of one particular AIDS victim, Ryan White, who contracted HIV at age 13 and died of AIDS six years later. His circumstances and his campaign for greater funding for AIDS research were widely publicised in the media. Following his death in 1990, the US Congress passed the Ryan White Care Act, which funded the largest set of services for people living with AIDS in the country.
Defunding peacekeeping efforts in Darfur
In November 2005, the U.S. Congress stripped $50 million from a bill that would have funded peacekeeping efforts in Darfur, where genocide was claiming hundreds of thousands of lives.
Less than a month earlier, Rosa Parks had passed away. Her casket had lain in state at the rotunda of the U.S. Capitol, and many elected officials, including President George W. Bush, had attended her memorial service. Genocide expert Paul Slovic wrote, "We appropriately honor the one, Rosa Parks, but by turning away from the crisis in Darfur we are, implicitly, placing almost no value on the lives of millions there."
George Santos' misstatements
Republican politician George Santos was elected to Congress in a formerly Democratic district after falsely claiming that his mother was a 9/11 casualty and his maternal grandparents were Jewish Holocaust refugees who had fled Soviet Ukraine and German-occupied Belgium. In reality, his mother, who died in 2016, was not in the United States on September 11, 2001. His maternal grandparents actually lived in Brazil. Whether Santos would have been elected if he had not made these (and other) false statements is unknown.
Criminal justice
Since the identifiable victim effect can influence punishment, it has the potential to undermine the system of trial by jury. Jurors deliberate about an identifiable alleged perpetrator, and thus may attach negative emotions (e.g. disgust, anger) to the individual or assign increased blame when handing down a sentence. Policymakers, who never see the individual offender and are thus largely emotionally removed, may have intended a more lenient sentence. This may produce a harsher verdict than the legal guidelines recommend or allow. At the other extreme, jurors may feel sympathy, relating to the perpetrator in a way policymakers do not, leading to a milder verdict than is legally appropriate or allowable.
Typically in criminal investigations, law enforcement conceals information about the identities of suspects until there is strong evidence against them. When the identities of suspects are revealed through descriptions of their features or the release of their images, media coverage and public discussion of the issue grow. On the one hand, the public discourse can become increasingly negative and hostile; on the other, if the perpetrator is sympathetic, support for the perpetrator may grow. This is because people experience a greater emotional reaction towards a concrete, identifiable perpetrator than towards an abstract, unidentifiable one.
Business ethics
Yam and Reynolds speculated that the growing anonymity of parties to business transactions may contribute to an increase in unethical business behavior. They hypothesized that because of increasing corporate size, the ubiquity of e-commerce, and the commoditization of workers, business executives are less likely to know their employees, customers, and shareholders, and therefore they might be more willing to exploit these unidentified potential victims.
In one of their experiments, they asked their subjects if they would approve of withholding a drug from the market if releasing it later would bring a greater profit. Their experimental subjects voiced more approval for withholding the drug if all of the victims of the disease the drug treats were unidentified than if one of the victims was named.
Other researchers suggested that outside observers, not only perpetrators, view unethical behavior as less unethical if the victim of the unethical behavior is unidentified. This could possibly result in less public outcry against unethical practices in a globalized business environment, where the victims are often unseen.
Personality differences among potential helpers
Attachment anxiety
High levels of attachment anxiety may increase the power of the identifiable victim effect.
When presented with an identified victim, individuals with high levels of attachment anxiety tend to donate more money than the average individual. Research suggests that anxiously attached people experience significantly more personal distress than those securely attached when confronted with victims in need, so they donate more in order to relieve their distress.
When presented with an unidentified victim, individuals with high levels of attachment anxiety tend to donate less money than the average individual. Research suggests that unidentified victims do not provoke distress, and that anxiously attached people focus on their own vulnerabilities, so that they have less inclination to help unidentified others.
Although anxiously attached people may participate in prosocial behaviors, such as donating money to a charity, researchers hypothesize that their actions are not the result of altruistic tendencies, but instead are "positively correlated with egoistic, rather than altruistic motives for helping and volunteering," and that anxiously attached people engage in pro-social behavior mainly when strenuous effort is not required.
Guilt
Yam and Reynolds investigated the propensity to victimize others. Their experimental subjects harmed unidentified others more than named others, and anticipated that harming named others would provoke more guilt than harming unidentified others. The experimenters did not test whether people who are more prone to feelings of guilt experience the identifiable victim effect more intensely.
Reasoning style
Research suggests that individual differences in reasoning style moderate the identifiable victim effect. Two different methods of reasoning are “experiential” and “rational”. Experiential thinking (e.g. emotionally based thinking) is automatic, contextual and fluid, while rational thinking (e.g. logically based thinking) is deliberative, analytical, and decontextualized. Experiential thinking styles may increase the power of the identifiable victim effect, and rational thinking styles may decrease it. Researchers theorize that these differences arise because experiential thinkers rely on emotional responses to an issue when making a decision, whereas rational thinkers analyze the situation as a whole before deciding. Thus, a person thinking rationally would respond to all victims equally, not giving preference to those specifically named or otherwise identified, whereas experiential thinkers would be drawn towards the more emotionally charged identified victim. However, research conducted during the COVID-19 pandemic found no identifiable victim effect on behaviors promoting public health, and behavioral tests of reasoning style did not moderate the results.
See also
Compassion fade
List of cognitive biases
Moral psychology
References
Cognitive biases
Giving
Behavioral finance
Moral psychology
Social problems in medicine | Identifiable victim effect | [
"Biology"
] | 4,402 | [
"Behavioral finance",
"Behavior",
"Human behavior"
] |
27,248,907 | https://en.wikipedia.org/wiki/Aircraft%20gross%20weight | The aircraft gross weight (also known as the all-up weight and abbreviated AUW) is the total aircraft weight at any moment during the flight or ground operation.
An aircraft's gross weight will decrease during a flight due to fuel and oil consumption. An aircraft's gross weight may also vary during a flight due to payload dropping or in-flight refuelling.
At the moment of releasing its brakes, the gross weight of an aircraft is equal to its takeoff weight. During flight, an aircraft's gross weight is referred to as the en-route weight or in-flight weight.
Design weight limits (structural design weights)
An aircraft's gross weight is limited by several weight restrictions in order to avoid overloading its structure or to avoid unacceptable performance or handling qualities while in operation.
Aircraft gross weight limits are established during an aircraft's design and certification period and are laid down in the aircraft's type certificate and manufacturer specification documents.
The absolute maximum weight capabilities of a given aircraft are referred to as the structural weight limits.
The structural weight limits are based on aircraft maximum structural capability and define the envelope for the CG charts (both maximum weight and CG limits).
An aircraft's structural weight capability is typically a function of when the aircraft was manufactured, and in some cases, old aircraft can have their structural weight capability increased by structural modifications.
Maximum design taxi weight (MDTW)
The maximum design taxi weight (also known as the maximum design ramp weight (MDRW)) is the maximum weight certificated for aircraft manoeuvring on the ground (taxiing or towing) as limited by aircraft strength and airworthiness requirements.
Maximum design takeoff weight (MDTOW)
The maximum design takeoff weight is the maximum certificated design weight when the brakes are released for takeoff, and is the greatest weight for which compliance with the relevant structural and engineering requirements has been demonstrated by the manufacturer.
Maximum design landing weight (MDLW)
The maximum certificated design weight at which the aircraft meets the appropriate landing certification requirements. It generally depends on the landing gear strength or the landing impact loads on certain parts of the wing structure.
The MDLW must not exceed the MDTOW.
The maximum landing weight is typically designed for a sink rate of 10 feet per second (600 feet per minute) at touchdown with no structural damage.
Maximum design zero-fuel weight (MDZFW)
The maximum certificated design weight of the aircraft less all usable fuel and other specified usable agents (engine injection fluid, and other consumable propulsion agents). It is the maximum weight permitted before usable fuel and other specified usable fluids are loaded in specified sections of the airplane. The MDZFW is limited by strength and airworthiness requirements. At this weight, the subsequent addition of fuel will not result in the aircraft design strength being exceeded. The weight difference between the MDTOW and the MDZFW may be utilised only for the addition of fuel.
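To make the interplay of these limits concrete, here is a minimal Python sketch, with invented weights that do not correspond to any real aircraft, of how a candidate load plan can be checked against the zero-fuel, takeoff, and taxi limits described above:

def check_load(oew, payload, fuel, taxi_fuel, mzfw, mtow, mtw):
    """Return the list of weight limits a load plan (all values in kg) violates."""
    zfw = oew + payload                  # zero-fuel weight: no usable fuel yet
    takeoff_weight = zfw + fuel          # weight at brake release
    ramp_weight = takeoff_weight + taxi_fuel
    violations = []
    if zfw > mzfw:
        violations.append("zero-fuel weight exceeds MZFW")
    if takeoff_weight > mtow:
        violations.append("takeoff weight exceeds MTOW")
    if ramp_weight > mtw:
        violations.append("ramp weight exceeds MTW")
    return violations

# Hypothetical narrow-body figures (kg); this plan passes every check.
print(check_load(oew=42_600, payload=18_000, fuel=15_000, taxi_fuel=300,
                 mzfw=62_500, mtow=79_000, mtw=79_400))   # -> []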
Minimum and maximum flight weight (MFW)
Minimum flight weight is usually limited by either the practicality (operating empty weight plus weight of the crew and minimal amount of fuel) or handling considerations (frequently related to the balance).
Maximum flight weight is limited by aircraft strength and airworthiness requirements. It is also known as the maximum in-flight weight or maximum en-route weight. Typically it is the same as the maximum takeoff weight (a notable exception arises with in-flight refuelling).
Authorised weight limits
Aircraft authorised gross weight limits (also referred to as certified weight limits) are laid down in the aircraft flight manuals (AFM) and/or associated certificate of airworthiness (C of A). The authorised or permitted limits may be equal to or lower than the structural design weight limits.
The authorised weight limits that can legally be used by an operator or airline are those listed in the AFM and the weight and balance manual.
The authorised (or certified) weight limits are chosen by the customer/airline and they are referred to as the "purchased weights". An operator may purchase a certified weight below the maximum design weights because many of the airport operating fees are based on the aircraft AFM maximum allowable weight values. An aircraft purchase price is, typically, a function of the certified weight purchased.
Maximum weights established, for each aircraft, by design and certification must not be exceeded during aircraft operation (ramp or taxying, takeoff, en-route flight, approach, and landing) and during aircraft loading (zero fuel conditions, centre of gravity position, and weight distribution).
Weights could be restricted on some types of aircraft depending on the aircraft handling requirements; for example aerobatic aircraft, where certain aerobatic manoeuvres can only be executed with a limited gross weight.
In addition, the authorised maximum weight limits may be further reduced by centre of gravity, fuel density, and fuel loading limits.
Maximum taxi weight (MTW)
The maximum taxi weight (MTW) (also known as the maximum ramp weight (MRW)) is the maximum weight authorized for maneuvering (taxiing or towing) an aircraft on the ground as limited by aircraft strength and airworthiness requirements. It includes the weight of taxi and run-up fuel for the engines and the APU.
It is greater than the maximum takeoff weight due to the fuel that will be burned during the taxi and runup operations.
The difference between the maximum taxi/ramp weight and the maximum take-off weight (the maximum taxi fuel allowance) depends on the size of the aircraft, the number of engines, APU operation, and engine/APU fuel consumption, and typically corresponds to a 10- to 15-minute allowance for taxi and run-up operations.
Maximum takeoff weight (MTOW)
The maximum takeoff weight (also known as the maximum brake-release weight) is the maximum weight authorised at brake release for takeoff, or at the start of the takeoff roll.
The maximum takeoff weight is always less than the maximum taxi/ramp weight to allow for fuel burned during taxi by the engines and the APU.
In operation, the maximum weight for takeoff may be limited to values less than the maximum takeoff weight due to aircraft performance, environmental conditions, airfield characteristics (takeoff field length, altitude), maximum tire speed and brake energy, obstacle clearances, and/or en route and landing weight requirements.
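As a rough illustration of this "most restrictive limit" logic, the Python sketch below takes the minimum of several independent takeoff-weight limits; the landing-weight requirement is projected back to takeoff by adding the planned trip fuel burn. All limit values are hypothetical placeholders, not a complete dispatch calculation.

def allowable_takeoff_weight(structural_mtow, field_length_limit,
                             climb_limit, obstacle_limit, mlw, trip_fuel):
    # A landing-weight requirement constrains takeoff weight to the
    # maximum landing weight plus the fuel expected to burn en route.
    landing_limited = mlw + trip_fuel
    return min(structural_mtow, field_length_limit,
               climb_limit, obstacle_limit, landing_limited)

print(allowable_takeoff_weight(79_000, 76_500, 78_200, 77_900,
                               mlw=66_000, trip_fuel=12_000))
# -> 76500: the field-length limit is the binding constraint here.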
Maximum landing weight (MLW)
The maximum landing weight is the maximum weight authorised for the normal landing of an aircraft.
The MLW must not exceed the MTOW.
The operation landing weight may be limited to a weight lower than the Maximum Landing Weight by the most restrictive of the following requirements:
Aircraft performance requirements for a given altitude and temperature:
landing field length requirements,
approach and landing climb requirements
Noise requirements
If the flight has been of short duration, fuel may have to be jettisoned to reduce the landing weight.
Overweight landings require a structural inspection or evaluation of the touch-down loads before the next aircraft operation.
Maximum zero-fuel weight (MZFW)
The maximum permissible weight of the aircraft less all usable fuel and other specified usable agents (engine injection fluid, and other consumable propulsion agents). It is the maximum weight permitted before usable fuel and other specified usable fluids are loaded in specified sections of the airplane.
See also
Manufacturer's empty weight
Operating empty weight
Fuel dumping
Center of gravity of an aircraft
Aircraft weight class
References
External links
Aircraft weight measurements | Aircraft gross weight | [
"Physics",
"Engineering"
] | 1,490 | [
"Aircraft weight measurements",
"Mass",
"Matter",
"Aerospace engineering"
] |
27,249,202 | https://en.wikipedia.org/wiki/Maximum%20landing%20weight | The maximum landing weight (MLW), also known as the maximum structural landing weight or maximum structural landing mass, is the maximum aircraft gross weight due to design or operational limitations at which an aircraft is permitted to land. The MLW is set in order to ensure safe landings; if an aircraft weighs too heavy during touchdown, it may suffer structural damage or even break apart upon landing. Aircraft also have a maximum take-off weight, which is almost always higher than the maximum landing weight, so that an aircraft can weigh less upon landing due to burning fuel during the flight.
The operation landing weight may be limited to a weight lower than the maximum landing weight by the most restrictive of the following requirements:
Aircraft performance requirements for a given altitude and temperature:
landing field length requirements,
approach and landing climb requirements.
Noise requirements
If the flight has been of unusually short duration, such as due to an emergency just after takeoff requiring a return to the airport, it may be necessary to dump fuel to reduce the landing weight. Some aircraft are unable to dump fuel, however. For example, on 3 February 2020, Air Canada Flight 837, a Boeing 767-300, suffered a rear tyre failure during take-off at Madrid–Barajas Airport on its way to Toronto, causing its left engine to catch fire. The pilots managed to extinguish it by shutting the engine down, but as 767-300s are not designed for fuel dumping, the aircraft had to stay in a single-engine holding pattern for over 4 hours to burn fuel and reach its maximum landing weight, while a Spanish Air Force fighter sent to inspect it reported minimal damage to the landing gear. The plane landed safely and nobody was injured.
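The arithmetic behind such a holding pattern is straightforward. The sketch below uses invented numbers, only loosely inspired by the incident above, to estimate how long an aircraft that cannot dump fuel must hold in order to burn down to its maximum landing weight:

current_weight_kg = 180_000        # hypothetical weight after the turn-back
mlw_kg = 145_000                   # hypothetical maximum landing weight
holding_burn_kg_per_h = 8_000      # hypothetical fuel flow while holding

excess_kg = current_weight_kg - mlw_kg
hold_hours = excess_kg / holding_burn_kg_per_h
print(f"Burn {excess_kg} kg of fuel: about {hold_hours:.1f} h of holding")
# -> Burn 35000 kg of fuel: about 4.4 h of holding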
Sometimes the emergency may be so pressing that the aircraft has no time to dump or burn fuel in order to achieve its maximum landing weight before touchdown; in that case, a risky overweight landing may be permitted. In other cases, the flight crew may fail to dump fuel when it still had the time to do so before landing, leading to fatal accidents such as Aeroflot Flight 1492 on 5 May 2019, where an apparently needlessly overweight landing turned into a crash that killed 41 of the 78 people on board.
Where aircraft overweight landing is permitted, a structural inspection or evaluation of the touch-down loads before the next aircraft operation will be required in case damage has occurred.
References
Aircraft operations
Aircraft weight measurements
Aviation safety | Maximum landing weight | [
"Physics",
"Engineering"
] | 488 | [
"Aircraft weight measurements",
"Mass",
"Matter",
"Aerospace engineering"
] |
27,250,934 | https://en.wikipedia.org/wiki/Chemical%20metallurgy | Chemical metallurgy is the science of obtaining metals from their concentrates, semi products, recycled bodies and solutions, and of considering reactions of metals with an approach of disciplines belonging to chemistry. As such, it involves reactivity of metals and it is especially concerned with the reduction and oxidation, and the chemical performance of metals.
Subjects of study in chemical metallurgy include the extraction of metals, thermodynamics, electrochemistry, and chemical degradation (corrosion).
See also
Metallurgy
Physical metallurgy
Extractive metallurgy
References
Metallurgy | Chemical metallurgy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 116 | [
"Metallurgy",
"Materials science",
"nan"
] |
27,258,233 | https://en.wikipedia.org/wiki/Dynashift | Dynashift is a type of gearbox on many Massey Ferguson tractors. In May 2006 Tier 3 Compliant gearboxes were released, and the production of Dynashift halted.
References
it consists of a four-stage gear shifter that does not need the clutch to be pressed.
Automobile transmissions | Dynashift | [
"Engineering"
] | 60 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
22,517,546 | https://en.wikipedia.org/wiki/Osmium%20borides | Osmium borides are compounds of osmium and boron. Their most remarkable property is potentially high hardness. It is thought that a combination of high electron density of osmium with the strength of boron-osmium covalent bonds will make osmium borides superhard materials, however this has not been demonstrated yet. For example, OsB2 is hard (hardness comparable to that of sapphire), but not superhard.
Synthesis
Osmium borides are produced in a vacuum or an inert atmosphere to prevent the formation of osmium tetroxide, which is a hazardous compound. Synthesis occurs at high temperatures (~1000 °C) from a mixture of MgB2 and OsCl3.
Structure
Three osmium borides are known: OsB, Os2B3 and OsB2. The first two have a hexagonal structure, similar to that of rhenium diboride. Osmium diboride was at first also thought to be hexagonal, but one of its phases was later reassigned as orthorhombic. With recent methods of synthesis, it has also been found that a hexagonal phase of OsB2 exists with a structure similar to that of ReB2.
References
Osmium compounds
Borides
Superhard materials | Osmium borides | [
"Physics"
] | 254 | [
"Materials",
"Superhard materials",
"Matter"
] |
22,517,908 | https://en.wikipedia.org/wiki/Okeanomyces | Okeanomyces is a genus of fungi in the family Halosphaeriaceae. This is a monotypic genus, containing the single species Okeanomyces cucullatus, described as new to science in 2004.
References
Microascales
Monotypic Sordariomycetes genera
Fungus species | Okeanomyces | [
"Biology"
] | 64 | [
"Fungi",
"Fungus species"
] |
22,519,313 | https://en.wikipedia.org/wiki/Cell%20surface%20receptor | Cell surface receptors (membrane receptors, transmembrane receptors) are receptors that are embedded in the plasma membrane of cells. They act in cell signaling by receiving (binding to) extracellular molecules. They are specialized integral membrane proteins that allow communication between the cell and the extracellular space. The extracellular molecules may be hormones, neurotransmitters, cytokines, growth factors, cell adhesion molecules, or nutrients; they react with the receptor to induce changes in the metabolism and activity of a cell. In the process of signal transduction, ligand binding affects a cascading chemical change through the cell membrane.
Structure and mechanism
Many membrane receptors are transmembrane proteins. There are various kinds, including glycoproteins and lipoproteins. Hundreds of different receptors are known and many more have yet to be studied. Transmembrane receptors are typically classified based on their tertiary (three-dimensional) structure. If the three-dimensional structure is unknown, they can be classified based on membrane topology. In the simplest receptors, polypeptide chains cross the lipid bilayer once, while others, such as the G-protein coupled receptors, cross as many as seven times. Each cell membrane can have several kinds of membrane receptors, with varying surface distributions. A single receptor may also be differently distributed at different membrane positions, depending on the sort of membrane and cellular function. Receptors are often clustered on the membrane surface, rather than evenly distributed.
Mechanism
Two models have been proposed to explain transmembrane receptors' mechanism of action.
Dimerization: The dimerization model suggests that prior to ligand binding, receptors exist in a monomeric form. When agonist binding occurs, the monomers combine to form an active dimer.
Rotation: Ligand binding to the extracellular part of the receptor induces a rotation (conformational change) of part of the receptor's transmembrane helices. The rotation alters which parts of the receptor are exposed on the intracellular side of the membrane, altering how the receptor can interact with other proteins within the cell.
Domains
Transmembrane receptors in the plasma membrane can usually be divided into three parts.
Extracellular domains
The extracellular domain is the part just external to the cell or organelle. If the polypeptide chain crosses the bilayer several times, the external domain comprises loops entwined through the membrane. By definition, a receptor's main function is to recognize and respond to a type of ligand. For example, a neurotransmitter, hormone, or atomic ion may bind to the extracellular domain as a ligand coupled to the receptor. Klotho, for instance, is an enzyme that enables a receptor to recognize its ligand (FGF23).
Transmembrane domains
The two most abundant classes of transmembrane receptors are GPCRs and single-pass transmembrane proteins. In some receptors, such as the nicotinic acetylcholine receptor, the transmembrane domain forms a protein pore through the membrane, or around the ion channel. Upon activation of an extracellular domain by binding of the appropriate ligand, the pore becomes accessible to ions, which then diffuse. In other receptors, the transmembrane domains undergo a conformational change upon binding, which affects intracellular conditions. In some receptors, such as members of the 7TM superfamily, the transmembrane domain includes a ligand binding pocket.
Intracellular domains
The intracellular (or cytoplasmic) domain of the receptor interacts with the interior of the cell or organelle, relaying the signal. There are two fundamental paths for this interaction:
The intracellular domain communicates via protein-protein interactions against effector proteins, which in turn pass a signal to the destination.
With enzyme-linked receptors, the intracellular domain has enzymatic activity. Often, this is tyrosine kinase activity. The enzymatic activity can also be due to an enzyme associated with the intracellular domain.
Signal transduction
Signal transduction processes through membrane receptors involve the external reactions, in which the ligand binds to a membrane receptor, and the internal reactions, in which intracellular response is triggered.
Signal transduction through membrane receptors requires four parts:
Extracellular signaling molecule: an extracellular signaling molecule is produced by one cell and is at least capable of traveling to neighboring cells.
Receptor protein: cells must have cell surface receptor proteins which bind to the signaling molecule and communicate inward into the cell.
Intracellular signaling proteins: these pass the signal to the organelles of the cell. Binding of the signal molecule to the receptor protein will activate intracellular signaling proteins that initiate a signaling cascade.
Target proteins: the conformations or other properties of the target proteins are altered when a signaling pathway is active and changes the behavior of the cell.
Membrane receptors are mainly divided by structure and function into three classes: ion channel-linked receptors, enzyme-linked receptors, and G protein-coupled receptors.
Ion channel linked receptors have ion channels for anions and cations, and constitute a large family of multipass transmembrane proteins. They participate in rapid signaling events usually found in electrically active cells such as neurons. They are also called ligand-gated ion channels. Opening and closing of ion channels is controlled by neurotransmitters.
Enzyme-linked receptors are either enzymes themselves, or directly activate associated enzymes. These are typically single-pass transmembrane receptors, with the enzymatic component of the receptor kept intracellular. The majority of enzyme-linked receptors are, or associate with, protein kinases.
G protein-coupled receptors are integral membrane proteins that possess seven transmembrane helices. These receptors activate a G protein upon agonist binding, and the G-protein mediates receptor effects on intracellular signaling pathways.
Ion channel-linked receptor
During the signal transduction event in a neuron, the neurotransmitter binds to the receptor and alters the conformation of the protein. This opens the ion channel, allowing extracellular ions into the cell. Ion permeability of the plasma membrane is altered, and this transforms the extracellular chemical signal into an intracellular electric signal which alters the cell excitability.
The acetylcholine receptor is a receptor linked to a cation channel. The protein consists of four subunits: alpha (α), beta (β), gamma (γ), and delta (δ) subunits. There are two α subunits, with one acetylcholine binding site each. This receptor can exist in three conformations. The closed and unoccupied state is the native protein conformation. When two molecules of acetylcholine bind to the binding sites on the α subunits, the conformation of the receptor is altered and the gate opens, allowing the entry of many ions and small molecules. However, this open and occupied state lasts only briefly before the gate closes, giving the closed and occupied state. The two molecules of acetylcholine soon dissociate from the receptor, returning it to the native closed and unoccupied state.
Enzyme-linked receptors
As of 2009, there are 6 known types of enzyme-linked receptors: Receptor tyrosine kinases; Tyrosine kinase associated receptors; Receptor-like tyrosine phosphatases; Receptor serine/threonine kinases; Receptor guanylyl cyclases and histidine kinase associated receptors. Receptor tyrosine kinases have the largest population and widest application. The majority of these molecules are receptors for growth factors such as epidermal growth factor (EGF), platelet-derived growth factor (PDGF), fibroblast growth factor (FGF), hepatocyte growth factor (HGF), nerve growth factor (NGF) and hormones such as insulin.
Most of these receptors will dimerize after binding with their ligands, in order to activate further signal transductions. For example, after the epidermal growth factor (EGF) receptor binds with its ligand EGF, the two receptors dimerize and then undergo phosphorylation of the tyrosine residues in the enzyme portion of each receptor molecule. This will activate the tyrosine kinase and catalyze further intracellular reactions.
G protein-coupled receptors
G protein-coupled receptors comprise a large protein family of transmembrane receptors. They are found only in eukaryotes. The ligands which bind and activate these receptors include: photosensitive compounds, odors, pheromones, hormones, and neurotransmitters. These vary in size from small molecules to peptides and large proteins. G protein-coupled receptors are involved in many diseases, and thus are the targets of many modern medicinal drugs.
There are two principal signal transduction pathways involving the G-protein coupled receptors: the cAMP signaling pathway and the phosphatidylinositol signaling pathway. Both are mediated via G protein activation. The G-protein is a trimeric protein, with three subunits designated as α, β, and γ. In response to receptor activation, the α subunit releases bound guanosine diphosphate (GDP), which is displaced by guanosine triphosphate (GTP), thus activating the α subunit, which then dissociates from the β and γ subunits. The activated α subunit can further affect intracellular signaling proteins or target functional proteins directly.
Membrane receptor-related disease
If membrane receptors are denatured or deficient, signal transduction can be hindered, causing disease. Some diseases are caused by disorders of membrane receptor function, due to deficiency or degradation of the receptor via changes in the genes that encode and regulate the receptor protein. The membrane receptor TM4SF5 influences the migration of hepatic cells and hepatoma. Also, the cortical NMDA receptor influences membrane fluidity, and is altered in Alzheimer's disease. When the cell is infected by a non-enveloped virus, the virus first binds to specific membrane receptors and then passes itself or a subviral component to the cytoplasmic side of the cellular membrane. In the case of poliovirus, it is known in vitro that interactions with receptors cause conformational rearrangements which release a virion protein called VP4. The N terminus of VP4 is myristylated (myristic acid, CH3(CH2)12COOH) and thus hydrophobic. It is proposed that the conformational changes induced by receptor binding result in the attachment of the myristic acid on VP4 and the formation of a channel for RNA.
Structure-based drug design
Through methods such as X-ray crystallography and NMR spectroscopy, the information about 3D structures of target molecules has increased dramatically, and so has structural information about the ligands. This drives rapid development of structure-based drug design. Some of these new drugs target membrane receptors. Current approaches to structure-based drug design can be divided into two categories. The first category is about determining ligands for a given receptor. This is usually accomplished through database queries, biophysical simulations, and the construction of chemical libraries. In each case, a large number of potential ligand molecules are screened to find those fitting the binding pocket of the receptor. This approach is usually referred to as ligand-based drug design. The key advantage of searching a database is that it saves time and power to obtain new effective compounds. Another approach of structure-based drug design is about combinatorially mapping ligands, which is referred to as receptor-based drug design. In this case, ligand molecules are engineered within the constraints of a binding pocket by assembling small pieces in a stepwise manner. These pieces can be either atoms or molecules. The key advantage of such a method is that novel structures can be discovered.
Other examples
Adrenergic receptor
Olfactory receptors
Receptor tyrosine kinases
Epidermal growth factor receptor
Insulin Receptor
Fibroblast growth factor receptors,
High affinity neurotrophin receptors
Ephrin receptors
Integrins
Low Affinity Nerve Growth Factor Receptor
NMDA receptor
Several Immune receptors
Toll-like receptor
T cell receptor
CD28
SCIMP protein
See also
Neuromodulators
Second messenger
Signalling lymphocyte activation molecule family
References
External links
IUPHAR GPCR Database
Transmembrane receptors
Cell signaling
"Chemistry"
] | 2,575 | [
"Transmembrane receptors",
"Signal transduction"
] |
22,519,612 | https://en.wikipedia.org/wiki/Specific%20leaf%20area | Specific leaf area (SLA) is the ratio of leaf area to leaf dry mass. The inverse of SLA is Leaf Mass per Area (LMA).
Rationale
Specific leaf area is a ratio indicating how much leaf area a plant builds with a given amount of leaf biomass:
$\mathrm{SLA} = A / M_L$,

where A is the area of a given leaf or all leaves of a plant, and ML is the dry mass of those leaves. Typical units are m2/kg or mm2/mg.
Leaf mass per area (LMA) is its inverse and can mathematically be decomposed into two component variables, leaf thickness (LTh) and leaf density (LD):

$\mathrm{LMA} = M_L / A = \mathrm{LTh} \times \mathrm{LD}$

Typical units are g/m2 for LMA, μm for LTh and g/mL for LD.
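A short worked example of these definitions, in Python with made-up but plausible leaf measurements, confirms that SLA and LMA are reciprocals and that LMA equals thickness times density:

leaf_area_m2 = 0.003          # 30 cm2 of leaf area
dry_mass_kg = 0.00015         # 0.15 g of leaf dry mass

sla = leaf_area_m2 / dry_mass_kg       # specific leaf area, m2/kg
lma = 1.0 / sla                        # leaf mass per area, kg/m2

# Decomposition LMA = LTh x LD, in consistent SI units:
thickness_m = 250e-6                   # LTh = 250 micrometres
density_kg_per_m3 = 200.0              # LD = 0.2 g/mL

print(f"SLA = {sla:.1f} m2/kg")                                         # 20.0
print(f"LMA = {lma * 1000:.1f} g/m2")                                   # 50.0
print(f"LTh x LD = {thickness_m * density_kg_per_m3 * 1000:.1f} g/m2")  # 50.0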
Both SLA and LMA are frequently used in plant ecology and biology. SLA is one of the components in plant growth analysis, and mathematically scales positively and linearly with the relative growth rate of a plant. LMA mathematically scales positively with the investments plants make per unit leaf area (amount of protein and cell wall; cell number per area) and with leaf longevity. Since linear, positive relationships are more easily analysed than inverse negative relationships, researchers often use either variable, depending on the type of questions asked.
Normal ranges
Normal ranges of SLA and LMA are species-dependent and influenced by growth environment. Table 1 gives normal ranges (~10th and ~90th percentiles) for species growing in the field, for well-illuminated leaves. Aquatic plants generally have very low LMA values, with particularly low numbers reported for species such as Myriophyllum farwelli (2.8 g/m2) and Potamogeton perfoliatus (3.9 g/m2). Evergreen shrubs and Gymnosperm trees as well as succulents have particularly high LMA values, with highest values reported for Aloe saponaria (2010 g/m2) and Agave deserti (2900 g/m2).
Application
Specific leaf area can be used to estimate the reproductive strategy of a particular plant based upon light and moisture (humidity) levels, among other factors. Specific leaf area is one of the most widely accepted key leaf characteristics used during the study of leaf traits.
Changes in response to drought
Drought and water stress have varying effects on specific leaf area. In a variety of species, drought decreases specific leaf area: in such studies, leaves grown under drought conditions were, on average, smaller than leaves on control plants. This is a logical response, as a smaller surface area leaves less area through which water can be lost. Species with typically low specific leaf area values are geared towards the conservation of acquired resources, owing to their large dry matter content, high concentrations of cell walls and secondary metabolites, and high leaf and root longevity.
In some other species, such as poplar trees, specific leaf area increases until the leaf reaches its final size and then begins to decrease, so that it declines overall.
Other research has shown increasing specific leaf area values in plants under water limitation. An example of increasing specific leaf area values as a result of drought stress is the birch tree species. Birch tree specific leaf area values significantly increased after two dry seasons, though the authors did note that, in typical cases, lowered specific leaf area values are seen as an adaptation to drought stress.
See also
Hemispherical photography
Leaf Area Index
Photosynthetically active radiation
Plant growth analysis
References
Plant anatomy
Leaves
"Physics",
"Mathematics"
] | 735 | [
"Physical quantities",
"Quantity",
"Mass",
"Intensive quantities",
"Mass-specific quantities",
"Matter"
] |
22,523,849 | https://en.wikipedia.org/wiki/Benzopyrene | A benzopyrene is an organic compound with the formula C20H12. Structurally speaking, the colorless isomers of benzopyrene are pentacyclic hydrocarbons and are fusion products of pyrene and a phenylene group. Two isomeric species of benzopyrene are benzo[a]pyrene and the less common benzo[e]pyrene. They belong to the chemical class of polycyclic aromatic hydrocarbons.
Overview
Related compounds include cyclopentapyrenes, dibenzopyrenes, indenopyrenes and naphthopyrenes. Benzopyrene is a component of pitch and occurs together with other related pentacyclic aromatic species such as picene, benzofluoranthenes, and perylene. It is naturally emitted by forest fires and volcanic eruptions and can also be found in coal tar, cigarette smoke, wood smoke, and burnt foods such as coffee. Fumes that develop from fat dripping on blistering charcoal are rich in benzopyrene, which can condense on grilled goods.
Benzopyrenes are harmful because they form carcinogenic and mutagenic metabolites (such as (+)-benzo[a]pyrene-7,8-dihydrodiol-9,10-epoxide from benzo[a]pyrene) which intercalate into DNA, interfering with transcription. They are considered pollutants and carcinogens. The mechanism of action of benzo[a]pyrene-related DNA modification has been investigated extensively and relates to the activity of cytochrome P450 subclass 1A1 (CYP1A1). Seemingly, the high activity of CYP1A1 in the intestinal mucosa prevents major amounts of ingested benzo[a]pyrene from entering portal blood and systemic circulation. The intestinal (but not hepatic) detoxification mechanism seems to depend on receptors that recognize bacterial surface components (TLR2).
Evidence exists to link benzo[a]pyrene to the formation of lung cancer.
In February 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs), including benzopyrene, in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed several billion years after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
See also
Benzo[a]pyrene
Benzo[e]pyrene
References
External links
Endocrine disruptors
Carcinogens
Polycyclic aromatic hydrocarbons | Benzopyrene | [
"Chemistry",
"Environmental_science"
] | 585 | [
"Endocrine disruptors",
"Carcinogens",
"Toxicology"
] |
19,908,550 | https://en.wikipedia.org/wiki/Diffusion | Diffusion is the net movement of anything (for example, atoms, ions, molecules, energy) generally from a region of higher concentration to a region of lower concentration. Diffusion is driven by a gradient in Gibbs free energy or chemical potential. It is possible to diffuse "uphill" from a region of lower concentration to a region of higher concentration, as in spinodal decomposition. Diffusion is a stochastic process due to the inherent randomness of the diffusing entity and can be used to model many real-life stochastic scenarios. Therefore, diffusion and the corresponding mathematical models are used in several fields beyond physics, such as statistics, probability theory, information theory, neural networks, finance, and marketing.
The concept of diffusion is widely used in many fields, including physics (particle diffusion), chemistry, biology, sociology, economics, statistics, data science, and finance (diffusion of people, ideas, data and price values). The central idea of diffusion, however, is common to all of these: a substance or collection undergoing diffusion spreads out from a point or location at which there is a higher concentration of that substance or collection.
A gradient is the change in the value of a quantity; for example, concentration, pressure, or temperature with the change in another variable, usually distance. A change in concentration over a distance is called a concentration gradient, a change in pressure over a distance is called a pressure gradient, and a change in temperature over a distance is called a temperature gradient.
The word diffusion derives from the Latin word, diffundere, which means "to spread out".
A distinguishing feature of diffusion is that it depends on particle random walk, and results in mixing or mass transport without requiring directed bulk motion. Bulk motion, or bulk flow, is the characteristic of advection. The term convection is used to describe the combination of both transport phenomena.
If a diffusion process can be described by Fick's laws, it is called a normal diffusion (or Fickian diffusion); Otherwise, it is called an anomalous diffusion (or non-Fickian diffusion).
When talking about the extent of diffusion, two length scales are used in two different scenarios:
Brownian motion of an impulsive point source (for example, one single spray of perfume)—the square root of the mean squared displacement from this point. In Fickian diffusion, this is $\sqrt{2nDt}$, where $n$ is the dimension of this Brownian motion (a short numerical check of this scaling follows the list);
Constant concentration source in one dimension—the diffusion length. In Fickian diffusion, this is $2\sqrt{Dt}$.
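The Python sketch below is the minimal Monte Carlo check referred to above: for Brownian motion in n dimensions, the root-mean-square displacement of particles released from a point grows as $\sqrt{2nDt}$. All parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
D, t, n_dim = 1.0, 10.0, 2              # diffusion coefficient, time, dimension
n_particles, n_steps = 100_000, 1_000
dt = t / n_steps

# Each Cartesian component of each step is Gaussian with variance 2*D*dt.
steps = rng.normal(scale=np.sqrt(2 * D * dt),
                   size=(n_particles, n_steps, n_dim))
displacement = steps.sum(axis=1)        # positions after time t

rms = np.sqrt((displacement ** 2).sum(axis=1).mean())
print(rms, np.sqrt(2 * n_dim * D * t))  # both close to 6.32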
Diffusion vs. bulk flow
"Bulk flow" is the movement/flow of an entire body due to a pressure gradient (for example, water coming out of a tap). "Diffusion" is the gradual movement/dispersion of concentration within a body with no net movement of matter. An example of a process where both bulk motion and diffusion occur is human breathing.
First, there is a "bulk flow" process. The lungs are located in the thoracic cavity, which expands as the first step in external respiration. This expansion leads to an increase in volume of the alveoli in the lungs, which causes a decrease in pressure in the alveoli. This creates a pressure gradient between the air outside the body at relatively high pressure and the alveoli at relatively low pressure. The air moves down the pressure gradient through the airways of the lungs and into the alveoli until the pressure of the air and that in the alveoli are equal, that is, the movement of air by bulk flow stops once there is no longer a pressure gradient.
Second, there is a "diffusion" process. The air arriving in the alveoli has a higher concentration of oxygen than the "stale" air in the alveoli. The increase in oxygen concentration creates a concentration gradient for oxygen between the air in the alveoli and the blood in the capillaries that surround the alveoli. Oxygen then moves by diffusion, down the concentration gradient, into the blood. The other consequence of the air arriving in alveoli is that the concentration of carbon dioxide in the alveoli decreases. This creates a concentration gradient for carbon dioxide to diffuse from the blood into the alveoli, as fresh air has a very low concentration of carbon dioxide compared to the blood in the body.
Third, there is another "bulk flow" process. The pumping action of the heart then transports the blood around the body. As the left ventricle of the heart contracts, the volume decreases, which increases the pressure in the ventricle. This creates a pressure gradient between the heart and the capillaries, and blood moves through blood vessels by bulk flow down the pressure gradient.
Diffusion in the context of different disciplines
There are two ways to introduce the notion of diffusion: either a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles.
In the phenomenological approach, diffusion is the movement of a substance from a region of high concentration to a region of low concentration without bulk motion. According to Fick's laws, the diffusion flux is proportional to the negative gradient of concentrations. It goes from regions of higher concentration to regions of lower concentration. Sometime later, various generalizations of Fick's laws were developed in the frame of thermodynamics and non-equilibrium thermodynamics.
From the atomistic point of view, diffusion is considered as a result of the random walk of the diffusing particles. In molecular diffusion, the moving molecules in a gas, liquid, or solid are self-propelled by kinetic energy. Random walk of small particles in suspension in a fluid was discovered in 1827 by Robert Brown, who found that minute particles suspended in a liquid medium, just large enough to be visible under an optical microscope, exhibit a rapid and continually irregular motion known as Brownian movement. The theory of Brownian motion and the atomistic backgrounds of diffusion were developed by Albert Einstein.
The concept of diffusion is typically applied to any subject matter involving random walks in ensembles of individuals.
In chemistry and materials science, diffusion also refers to the movement of fluid molecules in porous solids. Different types of diffusion are distinguished in porous solids. Molecular diffusion occurs when the collision with another molecule is more likely than the collision with the pore walls. Under such conditions, the diffusivity is similar to that in a non-confined space and is proportional to the mean free path. Knudsen diffusion occurs when the pore diameter is comparable to or smaller than the mean free path of the molecule diffusing through the pore. Under this condition, the collision with the pore walls becomes gradually more likely and the diffusivity is lower. Finally there is configurational diffusion, which happens if the molecules have comparable size to that of the pore. Under this condition, the diffusivity is much lower compared to molecular diffusion and small differences in the kinetic diameter of the molecule cause large differences in diffusivity.
Biologists often use the terms "net movement" or "net diffusion" to describe the movement of ions or molecules by diffusion. For example, oxygen can diffuse through cell membranes so long as there is a higher concentration of oxygen outside the cell. However, because the movement of molecules is random, occasionally oxygen molecules move out of the cell (against the concentration gradient). Because there are more oxygen molecules outside the cell, the probability that oxygen molecules will enter the cell is higher than the probability that oxygen molecules will leave the cell. Therefore, the "net" movement of oxygen molecules (the difference between the number of molecules either entering or leaving the cell) is into the cell. In other words, there is a net movement of oxygen molecules down the concentration gradient.
In astronomy, atomic diffusion is used to model the stellar atmospheres of chemically peculiar stars. Diffusion of the elements is critical in understanding the surface composition of degenerate white dwarf stars and their evolution over time.
History of diffusion in physics
Diffusion in solids was used in practice long before the theory of diffusion was created. For example, Pliny the Elder had previously described the cementation process, which produces steel from the element iron (Fe) through carbon diffusion. Another example, well known for many centuries, is the diffusion of colors of stained glass or earthenware and Chinese ceramics.
In modern science, the first systematic experimental study of diffusion was performed by Thomas Graham. He studied diffusion in gases, and the main phenomenon was described by him in 1831–1833:
"...gases of different nature, when brought into contact, do not arrange themselves according to their density, the heaviest undermost, and the lighter uppermost, but they spontaneously diffuse, mutually and equally, through each other, and so remain in the intimate state of mixture for any length of time."
The measurements of Graham contributed to James Clerk Maxwell deriving, in 1867, the coefficient of diffusion for CO2 in air, with an error of less than 5%.
In 1855, Adolf Fick, the 26-year-old anatomy demonstrator from Zürich, proposed his law of diffusion. He used Graham's research, stating his goal as "the development of a fundamental law, for the operation of diffusion in a single element of space". He asserted a deep analogy between diffusion and conduction of heat or electricity, creating a formalism similar to Fourier's law for heat conduction (1822) and Ohm's law for electric current (1827).
Robert Boyle demonstrated diffusion in solids in the 17th century by penetration of zinc into a copper coin. Nevertheless, diffusion in solids was not systematically studied until the second part of the 19th century. William Chandler Roberts-Austen, the well-known British metallurgist and former assistant of Thomas Graham, systematically studied solid-state diffusion using the example of gold in lead in 1896:
"... My long connection with Graham's researches made it almost a duty to attempt to extend his work on liquid diffusion to metals."
In 1858, Rudolf Clausius introduced the concept of the mean free path. In the same year, James Clerk Maxwell developed the first atomistic theory of transport processes in gases. The modern atomistic theory of diffusion and Brownian motion was developed by Albert Einstein, Marian Smoluchowski and Jean-Baptiste Perrin. Ludwig Boltzmann, in the development of the atomistic backgrounds of the macroscopic transport processes, introduced the Boltzmann equation, which has served mathematics and physics with a source of transport process ideas and concerns for more than 140 years.
In 1920–1921, George de Hevesy measured self-diffusion using radioisotopes. He studied self-diffusion of radioactive isotopes of lead in the liquid and solid lead.
Yakov Frenkel (sometimes Jakov/Jacob Frenkel) proposed, and elaborated in 1926, the idea of diffusion in crystals through local defects (vacancies and interstitial atoms). He concluded that the diffusion process in condensed matter is an ensemble of elementary jumps and quasichemical interactions of particles and defects. He introduced several mechanisms of diffusion and found rate constants from experimental data.
Sometime later, Carl Wagner and Walter H. Schottky developed Frenkel's ideas about mechanisms of diffusion further. Presently, it is universally recognized that atomic defects are necessary to mediate diffusion in crystals.
Henry Eyring, with co-authors, applied his theory of absolute reaction rates to Frenkel's quasichemical model of diffusion. The analogy between reaction kinetics and diffusion leads to various nonlinear versions of Fick's law.
Basic models of diffusion
Definition of diffusion flux
Each model of diffusion expresses the diffusion flux with the use of concentrations, densities and their derivatives. Flux is a vector J representing the quantity and direction of transfer. Given a small area ΔS with normal ν, the transfer of a physical quantity N through the area ΔS per time Δt is

ΔN = (J, ν) ΔS Δt + o(ΔS Δt),

where (J, ν) is the inner product and o(...) is the little-o notation. If we use the notation of vector area ΔS = ν ΔS, then

ΔN = (J, ΔS) Δt + o(Δt).
The dimension of the diffusion flux is [flux] = [quantity]/([time]·[area]). The diffusing physical quantity N may be the number of particles, mass, energy, electric charge, or any other scalar extensive quantity. For its density, n, the diffusion equation has the form

∂n/∂t = −∇·J + W,

where W is the intensity of any local source of this quantity (for example, the rate of a chemical reaction).
For the diffusion equation, the no-flux boundary conditions can be formulated as (J(x), ν(x)) = 0 on the boundary, where ν is the normal to the boundary at point x.
Normal single component concentration gradient
Fick's first law: the diffusion flux, J, is proportional to the negative gradient of spatial concentration, n(x, t):

J = −D ∇n(x, t),

where D is the diffusion coefficient. The corresponding diffusion equation (Fick's second law) is

∂n(x, t)/∂t = ∇·(D ∇n(x, t)).

In case the diffusion coefficient is independent of n, Fick's second law can be simplified to

∂n(x, t)/∂t = D Δn(x, t),

where Δ is the Laplace operator, Δn(x, t) = Σi ∂²n(x, t)/∂xi².
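For constant D in one dimension, Fick's second law reduces to ∂n/∂t = D ∂²n/∂x² and can be integrated with an explicit finite-difference scheme. The sketch below is a minimal illustration (the grid, time step and initial pulse are arbitrary choices, with the time step kept under the stability bound dx²/(2D)):

```python
D = 1.0e-9                 # diffusion coefficient, m^2/s (typical small molecule in water)
dx = 1.0e-6                # grid spacing, m
dt = 0.2 * dx * dx / D     # time step below the stability bound dx^2/(2*D)
r = D * dt / (dx * dx)     # = 0.2, the dimensionless diffusion number

n = [0.0] * 101            # concentration profile on a 1-D grid
n[50] = 1.0                # initial condition: a narrow pulse in the middle

for step in range(1000):
    # explicit (FTCS) update of Fick's second law; endpoints held at zero
    lap = [n[i - 1] - 2.0 * n[i] + n[i + 1] for i in range(1, len(n) - 1)]
    for i in range(1, len(n) - 1):
        n[i] += r * lap[i - 1]

print(max(n), sum(n))      # the pulse spreads out while its peak decays
```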
Multicomponent diffusion and thermodiffusion
Fick's law describes diffusion of an admixture in a medium. The concentration of this admixture should be small, and the gradient of this concentration should also be small. The driving force of diffusion in Fick's law is the antigradient of concentration, −∇n.
In 1931, Lars Onsager included the multicomponent transport processes in the general context of linear non-equilibrium thermodynamics. For multi-component transport,

Ji = Σj Lij Xj,

where Ji is the flux of the ith physical quantity (component), Xj is the jth thermodynamic force and Lij is Onsager's matrix of kinetic transport coefficients.
The thermodynamic forces for the transport processes were introduced by Onsager as the space gradients of the derivatives of the entropy density s (he used the term "force" in quotation marks or "driving force"):

Xi = grad(∂s/∂xi),

where xi are the "thermodynamic coordinates". For the heat and mass transfer one can take x0 = u (the density of internal energy) and xi = ni, the concentration of the ith component. The corresponding driving forces are the space vectors

X0 = ∇(1/T),  Xi = −∇(μi/T) (i > 0),

because

ds = (1/T) du − Σi (μi/T) dni,
where T is the absolute temperature and μi is the chemical potential of the ith component. It should be stressed that the separate diffusion equations describe the mixing or mass transport without bulk motion. Therefore, the terms with variation of the total pressure are neglected. This is possible for diffusion of small admixtures and for small gradients.
For the linear Onsager equations, we must take the thermodynamic forces in the linear approximation near equilibrium:

Xi = Σk (∂²s/∂xi∂xk)|eq ∇xk,

where the derivatives of s are calculated at the equilibrium point.
The matrix of the kinetic coefficients should be symmetric (Onsager reciprocal relations) and positive definite (for the entropy growth).
The transport equations are

∂xi/∂t = −div Ji = Σk [−Σj Lij (∂²s/∂xj∂xk)|eq] Δxk.

Here, all the indexes i, j, k = 0, 1, 2, ... are related to the internal energy (0) and the various components. The expression in the square brackets is the matrix Dik of the diffusion (i, k > 0), thermodiffusion (i > 0, k = 0 or k > 0, i = 0) and thermal conductivity (i = k = 0) coefficients.
Under isothermal conditions T = constant. The relevant thermodynamic potential is the free energy (or the free entropy). The thermodynamic driving forces for the isothermal diffusion are antigradients of chemical potentials, −(1/T) ∇μj, and the matrix of diffusion coefficients is

Dik = (1/T) Σj Lij (∂μj/∂nk)|eq  (i, k > 0).
There is intrinsic arbitrariness in the definition of the thermodynamic forces and kinetic coefficients because they are not measurable separately and only their combinations can be measured. For example, in the original work of Onsager the thermodynamic forces include additional multiplier T, whereas in the Course of Theoretical Physics this multiplier is omitted but the sign of the thermodynamic forces is opposite. All these changes are supplemented by the corresponding changes in the coefficients and do not affect the measurable quantities.
Nondiagonal diffusion must be nonlinear
The formalism of linear irreversible thermodynamics (Onsager) generates the systems of linear diffusion equations in the form

∂ci/∂t = Σj Dij Δcj.

If the matrix of diffusion coefficients is diagonal, then this system of equations is just a collection of decoupled Fick's equations for the various components. Assume that diffusion is non-diagonal, for example, D12 ≠ 0, and consider the state with c2 = ... = cn = 0. At this state, ∂c2/∂t = D12 Δc1. If D12 Δc1(x) < 0 at some points, then c2(x) becomes negative at these points in a short time. Therefore, linear non-diagonal diffusion does not preserve positivity of concentrations. Non-diagonal equations of multicomponent diffusion must be non-linear.
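The positivity argument can be reproduced numerically. In the minimal sketch below (all coefficients are hypothetical), component 2 starts at zero everywhere and is coupled to component 1 through a non-diagonal term; wherever Δc1 < 0, near the peak of c1, c2 immediately becomes negative:

```python
import math

D11, D12 = 1.0, 0.5    # hypothetical diffusion matrix with a non-diagonal term
dx, dt = 0.1, 0.002    # dt is below the stability bound dx^2/(2*D11)

# component 1 starts as a smooth bump, component 2 starts identically zero
c1 = [math.exp(-((i - 50) * dx) ** 2) for i in range(101)]
c2 = [0.0] * 101

for _ in range(10):
    lap1 = [(c1[i - 1] - 2.0 * c1[i] + c1[i + 1]) / (dx * dx) for i in range(1, 100)]
    for i in range(1, 100):
        # coupled linear equations: dc1/dt = D11*Lap(c1), dc2/dt = D12*Lap(c1)
        # (the diagonal D22*Lap(c2) contribution vanishes initially, since c2 = 0)
        c2[i] += dt * D12 * lap1[i - 1]
        c1[i] += dt * D11 * lap1[i - 1]

print(min(c2))   # negative: linear non-diagonal diffusion violates positivity
```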
Applied forces
The Einstein relation (kinetic theory) connects the diffusion coefficient and the mobility (the ratio of the particle's terminal drift velocity to an applied force). For charged particles:

D = μ kB T / q,

where D is the diffusion constant, μ is the "mobility", kB is the Boltzmann constant, T is the absolute temperature, and q is the elementary charge, that is, the charge of one electron.
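As a worked example, the sketch below converts an electrical mobility into a diffusion coefficient at room temperature; the mobility value is an assumed figure, of the order of the electron mobility in silicon, used only for illustration:

```python
kB = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # absolute temperature, K

mu = 0.14             # assumed electrical mobility, m^2/(V*s) (~electrons in Si)

# Einstein relation for charged particles: D = mu * kB * T / q
D = mu * kB * T / q
print(f"D = {D:.2e} m^2/s")   # about 3.6e-3 m^2/s, i.e. ~36 cm^2/s
```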
Below, to combine in the same formula the chemical potential μ and the mobility, we use for the mobility the notation 𝔪.
Diffusion across a membrane
The mobility-based approach was further applied by T. Teorell. In 1935, he studied the diffusion of ions through a membrane. He formulated the essence of his approach in the formula:
the flux is equal to mobility × concentration × force per gram-ion.
This is the so-called Teorell formula. The term "gram-ion" ("gram-particle") is used for a quantity of a substance that contains the Avogadro number of ions (particles). The common modern term is mole.
The force under isothermal conditions consists of two parts:
Diffusion force caused by the concentration gradient: −RT (1/n) ∇n = −RT ∇(ln(n/neq)).
Electrostatic force caused by the electric potential gradient: −q ∇φ.
Here R is the gas constant, T is the absolute temperature, n is the concentration, the equilibrium concentration is marked by a superscript "eq", q is the charge and φ is the electric potential.
The simple but crucial difference between the Teorell formula and the Onsager laws is the concentration factor in the Teorell expression for the flux. In the Einstein–Teorell approach, if, for a finite force, the concentration tends to zero, then the flux also tends to zero, whereas the Onsager equations violate this simple and physically obvious rule.
The general formulation of the Teorell formula for non-perfect systems under isothermal conditions is

J = 𝔪 c (−∇μ),

where μ is the chemical potential and μ0 is the standard value of the chemical potential. The expression

a = exp((μ − μ0)/RT)

is the so-called activity. It measures the "effective concentration" of a species in a non-ideal mixture. In this notation, the Teorell formula for the flux has a very simple form

J = −D (c/a) ∇a,

where D = 𝔪 RT. The standard derivation of the activity includes a normalization factor, and for small concentrations a = c/c⁰ + o(c/c⁰), where c⁰ is the standard concentration. Therefore, this formula for the flux describes the flux of the normalized dimensionless quantity c/c⁰:

∂(c/c⁰)/∂t = ∇·(D ∇a).
Ballistic time scale
The Einstein model neglects the inertia of the diffusing particle. The alternative Langevin equation starts with Newton's second law of motion:

m d²x/dt² = −(1/μ) dx/dt + F(t),
where
x is the position.
μ is the mobility of the particle in the fluid or gas, which can be calculated using the Einstein relation (kinetic theory).
m is the mass of the particle.
F is the random force applied to the particle.
t is time.
Solving this equation, one obtains the time-dependent diffusion constant in the long-time limit and when the particle is significantly denser than the surrounding fluid:

D(t) = μ kB T [1 − e^(−t/(m μ))],
where
kB is the Boltzmann constant;
T is the absolute temperature.
μ is the mobility of the particle in the fluid or gas, which can be calculated using the Einstein relation (kinetic theory).
m is the mass of the particle.
t is time.
At long time scales, Einstein's result is recovered, but the short time scales (the ballistic regime) are also explained. Moreover, unlike the Einstein approach, a velocity can be defined, leading to the fluctuation-dissipation theorem, which connects the competition between friction and random forces in defining the temperature.
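The crossover happens on the momentum relaxation time scale τ = mμ. A minimal sketch (the particle mass and mobility are assumed, illustrative values) evaluates the time-dependent coefficient D(t) on both sides of τ:

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0            # absolute temperature, K
m = 1.0e-17          # assumed particle mass, kg (illustrative)
mu = 1.0e9           # assumed mobility, s/kg (terminal velocity per unit force)

tau = m * mu         # momentum relaxation time separating the two regimes
D_inf = mu * kB * T  # Einstein's long-time diffusion coefficient

def D_of_t(t):
    # time-dependent diffusion coefficient from the Langevin equation
    return D_inf * (1.0 - math.exp(-t / tau))

for t in (0.01 * tau, tau, 100.0 * tau):
    print(f"t/tau = {t / tau:6.2f}   D(t)/D_inf = {D_of_t(t) / D_inf:.4f}")
# D(t) ~ (t/tau)*D_inf for t << tau (ballistic regime) and saturates at
# D_inf = mu*kB*T for t >> tau (Einstein's diffusive regime)
```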
Jumps on the surface and in solids
Diffusion of reagents on the surface of a catalyst may play an important role in heterogeneous catalysis. The model of diffusion in the ideal monolayer is based on the jumps of the reagents on the nearest free places. This model was used for the oxidation of CO on Pt under low gas pressure.
The system includes several reagents A1, ..., An on the surface. Their surface concentrations are c1, ..., cn. The surface is a lattice of adsorption places. Each reagent molecule fills a place on the surface. Some of the places are free; the concentration of the free places is z = c0. The sum of all ci (including free places) is constant, the density of adsorption places b.

The jump model gives for the diffusion flux of Ai (i = 1, ..., n):

Ji = −Di [z ∇ci − ci ∇z].

The corresponding diffusion equation is:

∂ci/∂t = −div Ji = div(Di [z ∇ci − ci ∇z]).

Due to the conservation law, z = b − Σi ci, and we have a system of n diffusion equations. For one component we get Fick's law and linear equations, because z ∇c1 − c1 ∇z = (b − c1) ∇c1 + c1 ∇c1 = b ∇c1. For two and more components the equations are nonlinear.
If all particles can exchange their positions with their closest neighbours, then a simple generalization gives

Ji = −Σj Dij [cj ∇ci − ci ∇cj],

where Dij = Dji is a symmetric matrix of coefficients that characterize the intensities of jumps. The free places (vacancies) should be considered as special "particles" with concentration c0.
Various versions of these jump models are also suitable for simple diffusion mechanisms in solids.
Porous media
For diffusion in porous media the basic equations are (if Φ is constant):

J = −Φ D ∇(n^m),
∂n/∂t = D Δ(n^m),

where D is the diffusion coefficient, Φ is porosity, n is the concentration, and m > 0 (usually m > 1, the case m = 1 corresponds to Fick's law).
Care must be taken to properly account for the porosity (Φ) of the porous medium in both the flux terms and the accumulation terms. For example, as the porosity goes to zero, the molar flux in the porous medium goes to zero for a given concentration gradient. Upon applying the divergence of the flux, the porosity terms cancel out and the second equation above is formed.
For diffusion of gases in porous media this equation is the formalization of Darcy's law: the volumetric flux of a gas in the porous media is

q = −(k/μ) ∇p,

where k is the permeability of the medium, μ is the viscosity and p is the pressure.

The advective molar flux is given as

J = nq,

and for p ∝ n^γ, Darcy's law gives the equation of diffusion in porous media with m = γ + 1.
In porous media, the average linear velocity (ν) is related to the volumetric flux as

ν = q/Φ.

Combining the advective molar flux with the diffusive flux gives the advection dispersion equation

∂n/∂t = D Δ(n^m) − ν·∇n.
For underground water infiltration, the Boussinesq approximation gives the same equation with m = 2.
For plasma with the high level of radiation, the Zeldovich–Raizer equation gives m > 4 for the heat transfer.
Diffusion in physics
Diffusion coefficient in kinetic theory of gases
The diffusion coefficient D is the coefficient in Fick's first law J = −D ∂n/∂x, where J is the diffusion flux (amount of substance) per unit area per unit time, n (for ideal mixtures) is the concentration, and x is the position [length].
Consider two gases with molecules of the same diameter d and mass m (self-diffusion). In this case, the elementary mean free path theory of diffusion gives for the diffusion coefficient

D = (1/3) ℓ vT = (2/3) √(kB³/(π³ m)) · T^(3/2)/(P d²),

where kB is the Boltzmann constant, T is the temperature, P is the pressure, ℓ is the mean free path, and vT is the mean thermal speed:

ℓ = kB T/(√2 π d² P),  vT = √(8 kB T/(π m)).
We can see that the diffusion coefficient in the mean free path approximation grows with T as T3/2 and decreases with P as 1/P. If we use for P the ideal gas law P = RnT with the total concentration n, then we can see that for given concentration n the diffusion coefficient grows with T as T1/2 and for given temperature it decreases with the total concentration as 1/n.
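These scalings are straightforward to evaluate. The sketch below (hard-sphere diameter and mass roughly those of a nitrogen molecule; the numbers are assumptions for illustration) computes D = (1/3) ℓ vT at standard conditions and checks the T^(3/2) behavior at fixed pressure:

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
d = 3.7e-10          # assumed molecular diameter, m (roughly N2)
m = 4.65e-26         # assumed molecular mass, kg (roughly N2)

def D_mfp(T, P):
    mean_free_path = kB * T / (math.sqrt(2.0) * math.pi * d * d * P)
    v_thermal = math.sqrt(8.0 * kB * T / (math.pi * m))
    return mean_free_path * v_thermal / 3.0   # elementary mean free path result

D1 = D_mfp(300.0, 101325.0)
D2 = D_mfp(600.0, 101325.0)
print(f"D(300 K, 1 atm) = {D1:.2e} m^2/s")        # of order 1e-5 m^2/s
print(f"ratio D(600K)/D(300K) = {D2 / D1:.3f}")   # ~2^(3/2) ~ 2.83, the T^(3/2) scaling
```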
For two different gases, A and B, with molecular masses mA, mB and molecular diameters dA, dB, the mean free path estimate of the diffusion coefficient of A in B and B in A takes the analogous form, with m and d replaced by the appropriate combinations of the two masses and diameters.
The theory of diffusion in gases based on Boltzmann's equation
In Boltzmann's kinetics of the mixture of gases, each gas has its own distribution function fi(x, c, t), where t is time, x is position and c is the velocity of a molecule of the ith component of the mixture. Each component has its mean velocity Ci(x, t). If the velocities Ci do not coincide then there exists diffusion.
In the Chapman–Enskog approximation, all the distribution functions are expressed through the densities of the conserved quantities:

individual concentrations of particles, ni = ∫ fi(x, c, t) dc (particles per volume),
density of momentum, Σi mi ∫ c fi(x, c, t) dc (mi is the ith particle mass),
density of kinetic energy, Σi ∫ (mi c²/2) fi(x, c, t) dc.

The kinetic temperature T and pressure P are defined in 3D space as

(3/2) n kB T = Σi ∫ (mi (c − V)²/2) fi(x, c, t) dc;  P = n kB T,

where n = Σi ni is the total density and V is the mean (bulk) velocity of the mixture.
For two gases, the difference between the mean velocities, C1 − C2, is given by an expression combining the gradient of the composition n1/n, the pressure gradient, the difference of the forces Fi applied to the molecules of the ith component, and the temperature gradient with the thermodiffusion ratio kT.
The coefficient D12 is positive. This is the diffusion coefficient. Four terms in the formula for C1−C2 describe four main effects in the diffusion of gases:
the term proportional to the gradient of the composition, ∇(n1/n), describes the flux of the first component from the areas with a high ratio n1/n to the areas with lower values of this ratio (and, analogously, the flux of the second component from high n2/n to low n2/n, because n2/n = 1 − n1/n);
the term proportional to the pressure gradient, ∇P, describes the flux of the heavier molecules to the areas with higher pressure and of the lighter molecules to the areas with lower pressure; this is barodiffusion;
the term proportional to the difference of the forces, F1 − F2, describes diffusion caused by the difference of the forces applied to molecules of different types. For example, in the Earth's gravitational field the heavier molecules should go down, or in an electric field the charged molecules should move, until this effect is equilibrated by the sum of the other terms. This effect should not be confused with barodiffusion caused by the pressure gradient;
the term proportional to kT ∇(ln T) describes thermodiffusion, the diffusion flux caused by the temperature gradient.
All these effects are called diffusion because they describe the differences between velocities of different components in the mixture. Therefore, these effects cannot be described as a bulk transport and differ from advection or convection.
In the first approximation,

D12 = (3/(8 n σ12²)) [kB T (m1 + m2)/(2π m1 m2)]^(1/2), with σ12 = (d1 + d2)/2, for rigid spheres;
D12 ∝ T^(1/2 + 2/(ν−1))/n for the repulsing force U ∝ r^(−ν).
The number is defined by quadratures (formulas (3.7), (3.9), Ch. 10 of the classical Chapman and Cowling book)
We can see that the dependence on T for the rigid spheres is the same as for the simple mean free path theory, but for the power repulsion laws the exponent is different. The dependence on the total concentration n for a given temperature always has the same character, 1/n.
In applications to gas dynamics, the diffusion flux and the bulk flow should be joined in one system of transport equations. The bulk flow describes the mass transfer. Its velocity V is the mass average velocity. It is defined through the momentum density and the mass concentrations:

V = Σi ρi Ci / ρ,

where ρi = mi ni is the mass concentration of the ith species and ρ = Σi ρi is the mass density.

By definition, the diffusion velocity of the ith component is vi = Ci − V, so Σi ρi vi = 0.
The mass transfer of the ith component is described by the continuity equation

∂ρi/∂t + ∇·(ρi V) + ∇·(ρi vi) = Wi,

where Wi is the net mass production rate in chemical reactions, Σi Wi = 0.

In these equations, the term ∇·(ρi V) describes the advection of the ith component and the term ∇·(ρi vi) represents the diffusion of this component.
In 1948, Wendell H. Furry proposed to use the form of the diffusion rates found in kinetic theory as a framework for the new phenomenological approach to diffusion in gases. This approach was developed further by F.A. Williams and S.H. Lam. For the diffusion velocities in multicomponent gases (N components) they used

vi = −(Σj Dij dj + Di(T) ∇(ln T)).

Here, Dij is the diffusion coefficient matrix, Di(T) is the thermal diffusion coefficient, fi is the body force per unit mass acting on the ith species, Xi = Pi/P is the partial pressure fraction of the ith species (Pi is the partial pressure), Yi is the mass fraction of the ith species, and the driving force di combines the gradient of Xi, a term (Xi − Yi) ∇(ln P) due to the total pressure gradient, and the contribution of the differences of the body forces fi.
Diffusion of electrons in solids
When the density of electrons in solids is not in equilibrium, diffusion of electrons occurs. For example, when a bias is applied to two ends of a chunk of semiconductor, or a light shines on one end, electrons diffuse from high-density regions (center) to low-density regions (two ends), forming a gradient of electron density. This process generates a current, referred to as diffusion current.
Diffusion current can also be described by Fick's first law

J = −D ∂n/∂x,

where J is the diffusion current density (amount of substance) per unit area per unit time, n (for ideal mixtures) is the electron density, and x is the position [length].
Diffusion in geophysics
Analytical and numerical models that solve the diffusion equation for different initial and boundary conditions have been popular for studying a wide variety of changes to the Earth's surface. Diffusion has been used extensively in erosion studies of hillslope retreat, bluff erosion, fault scarp degradation, wave-cut terrace/shoreline retreat, alluvial channel incision, coastal shelf retreat, and delta progradation. Although the Earth's surface is not literally diffusing in many of these cases, the process of diffusion effectively mimics the holistic changes that occur over decades to millennia. Diffusion models may also be used to solve inverse boundary value problems in which some information about the depositional environment is known from paleoenvironmental reconstruction and the diffusion equation is used to figure out the sediment influx and time series of landform changes.
Dialysis
Dialysis works on the principles of the diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane. Diffusion is a property of substances in water; substances in water tend to move from an area of high concentration to an area of low concentration. Blood flows by one side of a semi-permeable membrane, and a dialysate, or special dialysis fluid, flows by the opposite side. A semipermeable membrane is a thin layer of material that contains holes of various sizes, or pores. Smaller solutes and fluid pass through the membrane, but the membrane blocks the passage of larger substances (for example, red blood cells and large proteins). This replicates the filtering process that takes place in the kidneys when the blood enters the kidneys and the larger substances are separated from the smaller ones in the glomerulus.
Random walk (random motion)
One common misconception is that individual atoms, ions or molecules move randomly, which they do not. In the animation on the right, the ion in the left panel appears to have "random" motion in the absence of other ions. As the right panel shows, however, this motion is not random but is the result of "collisions" with other ions. As such, the movement of a single atom, ion, or molecule within a mixture just appears random when viewed in isolation. The movement of a substance within a mixture by "random walk" is governed by the kinetic energy within the system that can be affected by changes in concentration, pressure or temperature. (This is a classical description. At smaller scales, quantum effects will be non-negligible, in general. Thus, the study of the movement of a single atom becomes more subtle since particles at such small scales are described by probability amplitudes rather than deterministic measures of position and velocity.)
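At the ensemble level, the hallmark of such a random walk is a mean squared displacement that grows linearly in time, which is the microscopic content of the diffusion equation. A minimal sketch with unbiased unit steps on a line (purely illustrative, not a model of any specific system) checks this:

```python
import random

random.seed(42)
WALKERS, STEPS = 2000, 400

positions = [0] * WALKERS
for t in range(1, STEPS + 1):
    for i in range(WALKERS):
        positions[i] += random.choice((-1, 1))   # unbiased unit step
    if t in (100, 200, 400):
        msd = sum(x * x for x in positions) / WALKERS
        print(f"t = {t:3d}  <x^2> = {msd:7.1f}  <x^2>/t = {msd / t:.2f}")
# <x^2>/t stays near 1 (the step variance): the mean squared displacement grows
# linearly, i.e. diffusive behavior with D = <x^2>/(2t) = 1/2 in these units
```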
Separation of diffusion from convection in gases
While Brownian motion of multi-molecular mesoscopic particles (like pollen grains studied by Brown) is observable under an optical microscope, molecular diffusion can only be probed in carefully controlled experimental conditions. Since Graham's experiments, it has been well known that avoiding convection is necessary, and this may be a non-trivial task.
Under normal conditions, molecular diffusion dominates only at lengths in the nanometre-to-millimetre range. On larger length scales, transport in liquids and gases is normally due to another transport phenomenon, convection. To separate diffusion in these cases, special efforts are needed.
In contrast, heat conduction through solid media is an everyday occurrence (for example, a metal spoon partly immersed in a hot liquid). This explains why the diffusion of heat was explained mathematically before the diffusion of mass.
Other types of diffusion
Anisotropic diffusion, also known as the Perona–Malik equation, enhances high gradients
Atomic diffusion, in solids
Bohm diffusion, spread of plasma across magnetic fields
Eddy diffusion, in coarse-grained description of turbulent flow
Effusion of a gas through small holes
Electronic diffusion, resulting in an electric current called the diffusion current
Facilitated diffusion, present in some organisms
Gaseous diffusion, used for isotope separation
Heat equation, diffusion of thermal energy
Itō diffusion, mathematisation of Brownian motion, continuous stochastic process.
Knudsen diffusion of gas in long pores with frequent wall collisions
Lévy flight
Molecular diffusion, diffusion of molecules from more dense to less dense areas
Momentum diffusion, e.g. the diffusion of the hydrodynamic velocity field
Photon diffusion
Plasma diffusion
Random walk, model for diffusion
Reverse diffusion, against the concentration gradient, in phase separation
Rotational diffusion, random reorientation of molecules
Spin diffusion, diffusion of spin magnetic moments in solids
Surface diffusion, diffusion of adparticles on a surface
Taxis is an animal's directional movement activity in response to a stimulus
Kinesis is an animal's non-directional movement activity in response to a stimulus
Trans-cultural diffusion, diffusion of cultural traits across geographical area
Turbulent diffusion, transport of mass, heat, or momentum within a turbulent fluid
See also
References
Articles containing video clips
Broad-concept articles | Diffusion | [
"Physics",
"Chemistry"
] | 6,953 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion"
] |
19,912,599 | https://en.wikipedia.org/wiki/Covalent%20superconductor | Covalent superconductors are superconducting materials where the atoms are linked by covalent bonds. The first such material was boron-doped synthetic diamond grown by the high-pressure high-temperature (HPHT) method. The discovery had no practical importance, but surprised most scientists as superconductivity had not been observed in covalent semiconductors, including diamond and silicon.
History
The priority of many discoveries in science is vigorously disputed (see, e.g., Nobel Prize controversies). For example, after Sumio Iijima "discovered" carbon nanotubes in 1991, many scientists pointed out that carbon nanofibers had actually been observed decades earlier. The same could be said about superconductivity in covalent semiconductors. Superconductivity in germanium and silicon-germanium was predicted theoretically as early as the 1960s. Shortly after, superconductivity was experimentally detected in germanium telluride. In 1976, superconductivity with Tc = 3.5 K was observed experimentally in germanium implanted with copper ions; it was experimentally demonstrated that amorphization was essential for the superconductivity (in Ge), and the superconductivity was assigned to Ge itself, not copper.
Diamond
Superconductivity in diamond was achieved through heavy p-type doping by boron such that the individual doping atoms started interacting and formed an "impurity band". The superconductivity was of type-II with the critical temperature Tc = 4 K and critical magnetic field Bc = 4 T. Later, Tc ≈ 11 K has been achieved in homoepitaxial CVD films.
Regarding the origin of superconductivity in diamond, three alternative theories were suggested: conventional BCS theory based on phonon-mediated pairing, correlated impurity band theory and spin-flip-driven pairing of holes weakly localized in the vicinity of the Fermi level. Experiments on diamonds enriched with 12C, 13C, 10B or 11B isotopes revealed a clear Tc shift, and its magnitude confirms the BCS mechanism of superconductivity in bulk polycrystalline diamond.
Carbon nanotubes
While there have been reports of intrinsic superconductivity in carbon nanotubes, many other experiments found no evidence of superconductivity, and the validity of these results remains a subject of debate. Note, however, a crucial difference between nanotubes and diamond: Although nanotubes contain covalently bonded carbon atoms, they are closer in properties to graphite than diamond, and can be metallic without doping. Meanwhile, undoped diamond is an insulator.
Intercalated graphite
When metal atoms are inserted (intercalated) between the graphite planes, several superconductors are created with the following transition temperatures:
Silicon
It was suggested that "Si and Ge, which also form in the diamond structure, may similarly exhibit superconductivity under the appropriate conditions", and indeed, discoveries of superconductivity in heavily boron doped Si (Si:B) and SiC:B have quickly followed. Similar to diamond, Si:B is type-II superconductor, but it has much smaller values of Tc = 0.4 K and Bc = 0.4 T. Superconductivity in Si:B was achieved by heavy doping (above 8 at.%), realized through a special non-equilibrium technique of gas immersion laser doping.
Silicon carbide
Superconductivity in SiC was achieved by heavy doping with boron or aluminum. Both the cubic (3C-SiC) and hexagonal (6H-SiC) phases are superconducting and show a very similar Tc of 1.5 K. A crucial difference is however observed for the magnetic field behavior between aluminum and boron doping: SiC:Al is type-II, the same as Si:B. On the contrary, SiC:B is type-I. In an attempt to explain this difference, it was noted that Si sites are more important than carbon sites for superconductivity in SiC. Whereas boron substitutes carbon in SiC, Al substitutes Si sites. Therefore, Al and B "see" different environments, which might explain the different properties of SiC:Al and SiC:B.
Hydrogen sulfide
At pressures above 90 GPa (gigapascals), hydrogen sulfide becomes a metallic conductor of electricity. When cooled below a critical temperature its high-pressure phase exhibits superconductivity. The critical temperature increases with pressure, ranging from 23 K at 100 GPa to 150 K at 200 GPa. If hydrogen sulfide is pressurized at higher temperatures, then cooled, the critical temperature reaches 203 K (−70 °C), the highest accepted superconducting critical temperature as of 2015. By substituting a small part of sulfur with phosphorus and using even higher pressures, it has been predicted that it may be possible to raise the critical temperature to above 0 °C (273 K) and achieve room-temperature superconductivity.
See also
References
External links
International Workshop on superconductivity in Diamond and Related Materials 2005
International Workshop on Superconductivity in Diamond and Related Materials 2008
New Diamond and Frontier Carbon Technology Volume 17, No.1 Special Issue on Superconductivity in CVD Diamond
Some papers on superconducting diamond
Superconductivity
Superconductors | Covalent superconductor | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,108 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Superconductors",
"Electrical resistance and conductance"
] |
8,404,649 | https://en.wikipedia.org/wiki/Pressure%E2%80%93volume%20diagram | A pressure–volume diagram (or PV diagram, or volume–pressure loop) is used to describe corresponding changes in volume and pressure in a system. They are commonly used in thermodynamics, cardiovascular physiology, and respiratory physiology.
PV diagrams, originally called indicator diagrams, were developed in the 18th century as tools for understanding the efficiency of steam engines.
Description
A PV diagram plots the change in pressure P with respect to volume V for some process or processes. Typically in thermodynamics, the set of processes forms a cycle, so that upon completion of the cycle there has been no net change in state of the system; i.e. the device returns to the starting pressure and volume.
The figure shows the features of an idealized PV diagram. It shows a series of numbered states (1 through 4). The path between each state consists of some process (A through D) which alters the pressure or volume of the system (or both).
A key feature of the diagram is that the amount of energy expended or received by the system as work can be measured because the net work is represented by the area enclosed by the four lines.
In the figure, the processes 1-2-3 produce a work output, but processes from 3-4-1 require a smaller energy input to return to the starting position / state; so the net work is the difference between the two.
This figure is highly idealized, in so far as all the lines are straight and the corners are right angles. A diagram showing the changes in pressure and volume in a real device will show a more complex shape enclosing the work cycle.
History
The PV diagram, then called an indicator diagram, was developed in 1796 by James Watt and his employee John Southern. Volume was traced by a plate moving with the piston, while pressure was traced by a pressure gauge whose indicator moved at right angles to the piston. A pencil was used to draw the diagram. Watt used the diagram to make radical improvements to steam engine performance.
Applications
Thermodynamics
Specifically, the diagram records the pressure of steam versus the volume of steam in a cylinder, throughout a piston's cycle of motion in a steam engine. The diagram enables calculation of the work performed and thus can provide a measure of the power produced by the engine.
To exactly calculate the work done by the system it is necessary to calculate the integral of the pressure with respect to volume. One can often quickly calculate this using the PV diagram as it is simply the area enclosed by the cycle.
Note that in some cases specific volume will be plotted on the x-axis instead of volume, in which case the area under the curve represents work per unit mass of the working fluid (i.e. J/kg).
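The enclosed-area statement can be turned into a direct calculation: the net work of a cycle is the line integral of P with respect to V around the loop, which for a polygonal loop reduces to a trapezoid (shoelace-type) sum. The sketch below uses a made-up rectangular cycle like the idealized figure described above; the pressures and volumes are illustrative values only:

```python
# Idealized rectangular cycle, states 1-2-3-4 traversed clockwise on the PV plane
# (pressures in Pa, volumes in m^3; illustrative values only)
cycle = [
    (2.0e5, 1.0e-3),  # state 1: high pressure, small volume
    (2.0e5, 3.0e-3),  # state 2: expansion at high pressure (work output)
    (1.0e5, 3.0e-3),  # state 3: pressure drop at constant volume
    (1.0e5, 1.0e-3),  # state 4: compression at low pressure (work input)
]

def net_work(states):
    """Net work = closed line integral of P dV, evaluated segment by segment."""
    w = 0.0
    for (p1, v1), (p2, v2) in zip(states, states[1:] + states[:1]):
        w += 0.5 * (p1 + p2) * (v2 - v1)   # trapezoid rule on each straight segment
    return w

print(f"net work per cycle = {net_work(cycle):.1f} J")  # (2e5-1e5)*(3e-3-1e-3) = 200 J
```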
Medicine
In cardiovascular physiology, the diagram is often applied to the left ventricle, and it can be mapped to specific events of the cardiac cycle. PV loop studies are widely used in basic research and preclinical testing to characterize the intact heart's performance under various conditions (effect of drugs, disease, characterization of mouse strains).
The sequence of events occurring in every heart cycle is as follows. The left figure shows a PV loop from a real experiment; letters refer to points.
A is the end-diastolic point; this is the point where contraction begins. Pressure starts to increase, becomes rapidly higher than the atrial pressure, and the mitral valve closes. Since pressure is also lower than the aortic pressure, the aortic valve is closed as well.
Segment AB is the contraction phase. Since both the mitral and aortic valves are closed, volume is constant. For this reason, this phase is called isovolumic contraction.
At point B, pressure becomes higher than the aortic pressure and the aortic valve opens, initiating ejection.
BC is the ejection phase, volume decreases. At the end of this phase, pressure lowers again and falls below aortic pressure. The aortic valve closes.
Point C is the end-systolic point.
Segment CD is the isovolumic relaxation. During this phase, pressure continues to fall. The mitral valve and aortic valve are both closed again so volume is constant.
At point D pressure falls below the atrial pressure and the mitral valve opens, initiating ventricular filling.
DA is the diastolic filling period. Blood flows from the left atrium to the left ventricle. Atrial contraction completes ventricular filling.
As can be seen, the PV loop forms a roughly rectangular shape and each loop is formed in an anti-clockwise direction.
Very useful information can be derived by examination and analysis of individual loops or series of loops, for example:
the horizontal distance between the top-left corner and the bottom-right corner of each loop is the stroke volume
the line joining the top-left corner of several loops is the contractile or inotropic state.
See external links for a much more precise representation.
See also
Indicator diagram
Temperature–entropy diagram
Wiggers diagram
Stroke volume
Cyclic process
Pressure–volume loop experiments
Pressure–volume loop analysis in cardiology
References
Bibliography
Pacey, A. J. & Fisher, S. J. (1967) "Daniel Bernoulli and the vis viva of compressed air", The British Journal for the History of Science 3 (4), p. 388–392,
British Transport Commission (1957) Handbook for Railway Steam Locomotive Enginemen, London : B.T.C., p. 81, (facsimile copy publ. Ian Allan (1977), )
External links
Diagram at cvphysiology.com
Interactive demonstration at davidson.edu
Thermodynamics
Cardiovascular physiology
Diagrams
Energy conversion
Piston engines
Steam power | Pressure–volume diagram | [
"Physics",
"Chemistry",
"Mathematics",
"Technology"
] | 1,184 | [
"Physical quantities",
"Engines",
"Piston engines",
"Steam power",
"Power (physics)",
"Thermodynamics",
"Dynamical systems"
] |
8,413,399 | https://en.wikipedia.org/wiki/Inorganic%20pyrophosphatase | Inorganic pyrophosphatase (or inorganic diphosphatase, PPase) is an enzyme () that catalyzes the conversion of one ion of pyrophosphate to two phosphate ions. This is a highly exergonic reaction, and therefore can be coupled to unfavorable biochemical transformations in order to drive these transformations to completion. The functionality of this enzyme plays a critical role in lipid metabolism (including lipid synthesis and degradation), calcium absorption and bone formation, and DNA synthesis, as well as other biochemical transformations.
Two types of inorganic diphosphatase, very different in terms of both amino acid sequence and structure, have been characterised to date: soluble and transmembrane proton-pumping pyrophosphatases (sPPases and H(+)-PPases, respectively). sPPases are ubiquitous proteins that hydrolyse pyrophosphate to release heat, whereas H+-PPases, so far unidentified in animal and fungal cells, couple the energy of PPi hydrolysis to proton movement across biological membranes.
Structure
Thermostable soluble pyrophosphatase had been isolated from the extremophile Thermococcus litoralis. The 3-dimensional structure was determined using x-ray crystallography, and was found to consist of two alpha-helices, as well as an antiparallel closed beta-sheet. The form of inorganic pyrophosphatase isolated from Thermococcus litoralis was found to contain a total of 174 amino acid residues and have a hexameric oligomeric organization (Image 1).
Humans possess two genes encoding pyrophosphatase, PPA1 and PPA2. PPA1 has been assigned to a gene locus on human chromosome 10, and PPA2 to chromosome 4.
Mechanism
Though the precise mechanism of catalysis via inorganic pyrophosphatase in most organisms remains uncertain, site-directed mutagenesis studies in Escherichia coli have allowed for analysis of the enzyme active site and identification of key amino acids. In particular, this analysis has revealed 17 active-site residues that may be of functional importance in catalysis.
Further research suggests that the protonation state of Asp67 is responsible for modulating the reversibility of the reaction in Escherichia coli. The carboxylate functional group of this residue has been shown to perform a nucleophilic attack on the pyrophosphate substrate when four magnesium ions are present. Direct coordination with these four magnesium ions and hydrogen bonding interactions with Arg43, Lys29, and Lys142 (all positively charged residues) have been shown to anchor the substrate to the active site. The four magnesium ions are also suggested to be involved in the stabilization of the trigonal bipyramid transition state, which lowers the energetic barrier for the aforementioned nucleophilic attack.
Several studies have also identified additional substrates that can act as allosteric effectors. In particular, the binding of pyrophosphate (PPi) to the effector site of inorganic pyrophosphatase increases its rate of hydrolysis at the active site. ATP has also been shown to function as an allosteric activator in Escherichia coli, while fluoride has been shown to inhibit hydrolysis of pyrophosphate in yeast.
Biological function and significance
The hydrolysis of inorganic pyrophosphate (PPi) to two phosphate ions is utilized in many biochemical pathways to render reactions effectively irreversible. This process is highly exergonic (with a free energy change of approximately −19 kJ/mol), and therefore greatly increases the energetic favorability of the reaction system when coupled with a typically less-favorable reaction.
Inorganic pyrophosphatase catalyzes this hydrolysis reaction in the early steps of lipid degradation, a prominent example of this phenomenon. By promoting the rapid hydrolysis of pyrophosphate (PPi), Inorganic pyrophosphatase provides the driving force for the activation of fatty acids destined for beta oxidation.
Before fatty acids can undergo degradation to fulfill the metabolic needs of an organism, they must first be activated via a thioester linkage to coenzyme A. This process is catalyzed by the enzyme acyl CoA synthetase, and occurs on the outer mitochondrial membrane. This activation is accomplished in two reactive steps: (1) the fatty acid reacts with a molecule of ATP to form an enzyme-bound acyl adenylate and pyrophosphate (PPi), and (2) the sulfhydryl group of CoA attacks the acyl adenylate, forming acyl CoA and a molecule of AMP. Each of these two steps is reversible under biological conditions, save for the additional hydrolysis of PPi by inorganic pyrophosphatase. This coupled hydrolysis provides the driving force for the overall forward activation reaction, and serves as a source of inorganic phosphate used in other biological processes.
Evolution
Examination of prokaryotic and eukaryotic forms of soluble inorganic pyrophosphatase (sPPase, ) has shown that they differ significantly in both amino acid sequence, number of residues, and oligomeric organization. Despite differing structural components, recent work has suggested a large degree of evolutionary conservation of active site structure as well as reaction mechanism, based on kinetic data. Analysis of approximately one million genetic sequences taken from organisms in the Sargasso Sea identified a 57 residue sequence within the regions coding for proton-pumping inorganic pyrophosphatase (H+-PPase) that appears to be highly conserved; this region primarily consisted of the four early amino acid residues Gly, Ala, Val and Asp, suggesting an evolutionarily ancient origin for the protein.
References
External links
Further reading
Protein families
EC 3.6.1
Metal enzymes
Enzymes of known structure | Inorganic pyrophosphatase | [
"Biology"
] | 1,228 | [
"Protein families",
"Protein classification"
] |
8,414,323 | https://en.wikipedia.org/wiki/Flood%20stage | Flood stage is the water level or stage at which the surface of a body of water has risen to a sufficient level to cause sufficient inundation of areas that are not normally covered by water, causing an inconvenience or a threat to life and property. When a body of water rises to this level, it is considered a flood event. Flood stage does not apply to areal flooding. As areal flooding occurs, by definition, over areas not normally covered by water, the presence of any water at all constitutes a flood. Usually, moderate and major stages are not defined for areal floodplains.
Definition
Flood stage is the water level, as read by a stream gauge or tide gauge, at which a body of water at a particular location begins to threaten lives, property, commerce, or travel. The term "at flood stage" is commonly used to describe the point at which this occurs. "Gauge height" (also referred to as "stream stage", "stage of the [body of water]", or simply "stage") is the level of the water surface above an established zero datum at a given location. The zero level can be arbitrary, but it is usually close to the bottom of the stream or river or at the average level of standing bodies of water. Stage was traditionally measured visually using a staff gauge, which is a fixed ruler marked in 1/100 and 1/10 foot intervals; however, electronic sensors that transmit real-time information to the Internet are now used for many of these kinds of measurements. Flood stage measurements are given as a height above or below the zero level. Levels below zero are reported as a negative value.
While usually the flood stage is set at the elevation of the floodplain, it can be higher (if there are no structures, roads, or farming areas immediately on the floodplain) or lower (if there are structures such as marinas, lake houses, or docks low on the banks or shores of the body of water) depending on the location. Because flood stage is defined by impacts to people, as opposed to the natural topography of the area, flood stages are usually only calculated for bodies of water near communities.
The flood stage can be listed for an entire community, in which case it is often set to the lowest man-made structure or road in the area, the lowest farming field in the area, or the floodplain. It can also be set for a specific location ("flood stage is 12 feet on Maple Street at First Avenue" means that the specified intersection will begin to flood when the stage reaches 12 feet).
In the United States during flood events, the National Weather Service will issue flood warnings that list the current and predicted stages for affected communities as well as the local flood stage. Current stage data is collected by the USGS using a network of gauges, over 9000 of which transmit real time data via satellite, radio, or telephone. Many communities have inundation maps that provide information on which areas will flood at which stages.
Flood categories
In the United States, there are five levels of flooding.
Action Stage
Rivers: typically at this level, the water surface is generally near or slightly above the top of its banks, but no man-made structures are flooded; typically any water overflowing is limited to small areas of parkland or marshland.
Coastlines: at action stage, usually elevated tides and minor inundation of low-lying beach areas occurs.
Minor Flood Stage
Rivers: minor flooding is expected at this level, slightly above flood stage. Few, if any, buildings are expected to be inundated, however, roads may be covered with water, parklands, and lawns may be inundated and water may go under buildings on stilts or higher elevations.
Coastlines: water will usually run all the way up to the dune in waves during a minor flood. Overwash may occur on shoreline roads. Lifeguard structures and beach concession stands will usually be flooded and may be damaged by surf.
Moderate Flood Stage
Rivers: inundation of buildings usually begins at this stage. Roads are likely to be closed and some areas cut off. Some evacuations may be necessary.
Coastlines: at moderate flood stage, usually water overtops the natural dune and begins flooding coastal areas. Shoreline roadways and beaches will often be completely flooded out. High surf usually associated with this level of flooding may pound some oceanside structures like piers, boardwalks, docks, and lifeguard stations apart. Beach houses may be damaged by water and surf, especially if lacking stilts.
Major Flood Stage
Rivers: significant to catastrophic, life-threatening flooding is usually expected at this stage. Extensive flooding with some low-lying areas completely inundated is likely. Structures may be completely submerged. Large-scale evacuations may be necessary.
Coastlines: Water surges over not only the dune, but also man-made walls and roads. Large and destructive waves pound weak structures and severely damage well-built homes and businesses. Overwash occurs on high-level seawalls. If major flooding occurs at high tide, impacts may be felt well inland. If cities are at or below sea level, catastrophic flooding can inundate the entire city and cause millions or billions of dollars in damage (as occurred in New Orleans during Hurricane Katrina).
Record Flood Stage
Rivers: at this level, the river is at its highest that it has been since records began for the area where the stream gauge is located. This does not necessarily imply a major flood. Some areas may have never experienced major flooding, and thus record stage is in the moderate category.
Coastlines: Usually, record flooding at the coast is associated with tropical cyclones, but it may be associated with coastal storms, Nor'easters, seiches caused by earthquakes, strong thunderstorms, or tsunamis. Destruction is often extensive and may extend a far distance inland.
References
External links
Advanced Hydrologic Prediction Service
U.S. National Streamflow Information Program
Hydrology | Flood stage | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,226 | [
"Hydrology",
"Environmental engineering"
] |
8,414,493 | https://en.wikipedia.org/wiki/174567%20Varda | {{Infobox planet
| minorplanet =
| name = 174567 Varda
| symbol = (astrological)
| background = #C2E0FF
| image = Varda.gif
| image_scale =
| caption = Hubble Space Telescope image of Varda and its satellite Ilmarë, taken in 2010 and 2011
| discovery_ref =
| discoverer = J. A. Larsen
| discovery_site = Kitt Peak National Obs.
| discovered = 21 June 2003
| earliest_precovery_date = 19 March 1980
| mpc_name = (174567) Varda
| alt_names =
| pronounced =
| named_after = Varda
| mp_category = TNOcubewanodetacheddistant
| orbit_ref =
| epoch = 31 May 2020 (JD 2459000.5)
| uncertainty = 2
| observation_arc = 39.12 yr (14,290 d)
| aphelion = 52.711 AU
| perihelion = 39.510 AU
| time_periastron = ≈ 1 November 2096±4 days
| semimajor = 46.110 AU
| eccentricity = 0.14315
| period = 313.12 yr (114,366 d)
| mean_anomaly = 275.208°
| mean_motion = / day
| inclination = 21.511°
| asc_node = 184.151°
| arg_peri = 180.072°
| satellites = 1
| flattening = or
| mean_diameter =
| mass = {{efn|name=mass|Using Grundy et al.s working diameters of 361 km and 163 km, and assuming the densities of the two bodies are equal, Varda would contribute 91.6% of the system mass of .}}
| density =
| surface_grav =
| rotation = or
| albedo =
| spectral_type = IR (moderately red)B−V=V–R=V−I=
| magnitude = 20.5
| abs_magnitude = 3.4
}}174567 Varda (provisional designation 2003 MW12) is a binary trans-Neptunian planetoid of the resonant hot classical population of the Kuiper belt, located in the outermost region of the Solar System. Its moon, Ilmarë, was discovered in 2009.
Astronomer Michael Brown estimates that, with an absolute magnitude of 3.5 and a calculated diameter of approximately , it is likely a dwarf planet.
However, William M. Grundy et al. argue that objects in the size range of 400–1000 km, with albedos less than ≈0.2 and densities of ≈1.2 g/cm3 or less, have likely never compressed into fully solid bodies, let alone differentiated, and so are highly unlikely to be dwarf planets. It is not clear if Varda has a low or a high density.
Discovery and orbit
Varda was discovered in March 2006, using imagery dated from 21 June 2003, by Jeffrey A. Larsen with the Spacewatch telescope as part of a United States Naval Academy Trident Scholar project.
It orbits the Sun at a distance of 39.5–52.7 AU once every 313.1 years (over 114,000 days; semi-major axis of 46.1 AU). Its orbit has an eccentricity of 0.14 and an inclination of 21.5° with respect to the ecliptic. Varda is about 47.5 AU from the Sun. It will come to perihelion around November 2096. It has been observed 321 times over 23 oppositions, with precovery images back to 1980.
Name
The names for Varda and its moon were announced by the Minor Planets Center on 16 January 2014. Varda () is the queen of the Valar, creator of the stars, one of the most powerful servants of almighty Eru Ilúvatar in J. R. R. Tolkien's fictional mythology. Ilmarë is a chief of the Maiar and Varda's handmaiden.
The use of planetary symbols is discouraged in astronomy, so Varda never received a symbol in the astronomical literature. There is no standard symbol for Varda used by astrologers either; Zane Stein proposed a gleaming star as the symbol.
Satellite
Varda has one known satellite, Ilmarë (or Varda I), which was discovered in 2009. It is estimated to be about 350 km in diameter (about 50% that of its primary), constituting 8% of the system mass, or , assuming its density and albedo are the same as that of Varda.
The Varda–Ilmarë system is tightly bound, with a semimajor axis of (about 12 Varda radii) and an orbital period of 5.75 days.
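The system mass quoted by such studies follows from Kepler's third law, M = 4π²a³/(GT²), applied to the mutual orbit. The sketch below assumes a semi-major axis of about 4,810 km (a published value from Grundy et al., taken here as an assumption) together with the 5.75-day period:

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
a = 4.81e6            # assumed semi-major axis, m (~4810 km, Grundy et al.)
T = 5.75 * 86400.0    # orbital period, s (5.75 days)

# Kepler's third law for a two-body system: M_total = 4*pi^2*a^3 / (G*T^2)
M = 4.0 * math.pi ** 2 * a ** 3 / (G * T ** 2)
print(f"system mass ~ {M:.2e} kg")   # ~2.7e20 kg, of which Varda carries ~90%
```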
Physical properties
Based on its apparent brightness and assumed albedo, the estimated combined size of the Varda–Ilmarë system is , with the size of the primary estimated at . The total mass of the binary system is approximately . The density of both the primary and the satellite is estimated at , assuming that they have equal density. On the other hand, if the density or albedo of the satellite is lower than that of primary then the density of Varda will be higher up to .
On 10 September 2018, Varda's projected diameter was measured to be via a stellar occultation, with a projected oblateness of . The equivalent diameter is 740 km, consistent with previous measurements. Given Varda's equivalent diameter derived from the occultation, its geometric albedo is measured at 0.099, making it as dark as the large plutino .
The rotation period of Varda is unknown; it has been estimated at 5.61 hours in 2015, and more recently (in 2020) as either 4.76, 5.91 (the most likely value), 7.87 hours, or twice those values. The large uncertainty in Varda's rotation period yields various solutions for its density and true oblateness; given a most likely rotation period of 5.91 or 11.82 hours, its bulk density and true oblateness could be either and 0.235 or and 0.080, respectively.
The surfaces of both the primary and the satellite appear to be red in the visible and near-infrared parts of the spectrum (spectral class IR), with Ilmarë being slightly redder than Varda. The spectrum of the system does not show water absorption but shows evidence of methanol ice.
See also
– a similar trans-Neptunian object by orbit, size, and color
Notes
References
External links
List of binary asteroids and TNOs, Robert Johnston, johnstonsarchive.net
LCDB Data for (174567) Varda, Collaborative Asteroid Lightcurve Link''
(174567) 2003 MW12 Precovery Images
174567
Discoveries by Jeffrey A. Larsen
Named minor planets
Binary trans-Neptunian objects
174567
174567
Astronomical objects discovered in 2003 | 174567 Varda | [
"Physics",
"Astronomy"
] | 1,462 | [
"Concepts in astronomy",
"Unsolved problems in astronomy",
"Possible dwarf planets"
] |
8,414,812 | https://en.wikipedia.org/wiki/Robel%20pole | A Robel pole is a device consisting of a vertical pole possessing alternating horizontal bands and a line of rope or cord. It is used by range ecologists, field biologists and other scientists to measure the density of vegetation and to quantify the volume of ground cover in a particular habitat using the visual obstruction (VO) measurement method. The Robel pole is named for Robert J. Robel, the scientist who developed the device and technique. Modifications of Robel's original design have been developed and published; all use the VO method.
References
Robel, R. J. et al. 1970. Journal of Range Management. 23:295
Murray, L. D. and Ribic, C. A. 2003. Field Season Report USGS BRD WCWRU
Best, L. B. et al. 1998. American Midland Naturalist. 139:311-324
Measuring instruments | Robel pole | [
"Technology",
"Engineering"
] | 179 | [
"Measuring instruments"
] |
25,400,526 | https://en.wikipedia.org/wiki/Institute%20for%20Nuclear%20Research%20and%20Nuclear%20Energy | Institute for Nuclear Research and Nuclear Energy (INRNE) of the Bulgarian Academy of Sciences is
the leading center for research and application of the nuclear physics in Bulgaria.
The research areas include:
Theory of the elementary particles, string theory, theory of atomic nuclei, soliton interactions and quantum phenomena
Experimental physics of the elementary particles
Gamma-astrophysics at very high energies
Nuclear reactions, structure of atomic nuclei
Neutron interactions and cross sections, physics of the fission
Reactor physics, nuclear energy and nuclear safety and security
Dosimetry and radiation safety
Monitoring and management of the environment, radioecology
Radiochemistry, high precision analyses of substances, development and production of radioactive sources
Nuclear and neutron methods for investigations of substances
Nuclear instrument design and production
The institute's staff of about 320 (150 of them are scientific researchers) works in 16 laboratories, 2 scientific experimental facilities and 9 departments providing general support activities.
References
External links
Institute for Nuclear Research and Nuclear Energy
Institutes of the Bulgarian Academy of Sciences
Nuclear research institutes | Institute for Nuclear Research and Nuclear Energy | [
"Physics",
"Engineering"
] | 199 | [
"Nuclear research institutes",
"Nuclear and atomic physics stubs",
"Nuclear organizations",
"Nuclear physics"
] |
25,403,928 | https://en.wikipedia.org/wiki/Grunwald%E2%80%93Winstein%20equation | In physical organic chemistry, the Grunwald–Winstein equation is a linear free energy relationship between relative rate constants and the ionizing power of various solvent systems, describing the effect of solvent as nucleophile on different substrates. The equation, which was developed by Ernest Grunwald and Saul Winstein in 1948, could be written
log(k/k0) = mY,

where k and k0 are the solvolysis rate constants for a certain compound in a given solvent system and in the reference solvent, 80% aqueous ethanol, respectively. The parameter m measures the sensitivity of the solvolysis rate with respect to Y, the measure of the ionizing power of the solvent.
Background
The Hammett equation, log(k/k0) = ρσ, provides the relationship between the substituent on the benzene ring and the ionizing rate constant of the reaction. Hammett used the ionization of benzoic acid as the standard reaction to define a set of substituent parameters σX, and then to generate the ρ values, which represent the ionizing abilities of different substrates. This relationship can be visualized through a Hammett plot.
However, if the solvent of the reaction is changed, but not the structure of the substrate, the rate constant may change too. Following this idea, Grunwald and Winstein plotted the relative rate constant vs. the change of solvent system, and formulated this behavior in the Grunwald–Winstein equation. Since the equation has the same pattern as the Hammett equation but captures the change of the solvent system, it is considered as an extension of the Hammett equation.
Definition
Reference compound
The substitution reaction of tert-butyl chloride was chosen as the reference reaction. The first step, the ionizing step, is the rate-determining step; SO stands for the nucleophilic solvent. The reference solvent is 80% ethanol and 20% water by volume. Both of them can carry out the nucleophilic attack on the carbocation.
The SN1 reaction is performed through a stable carbocation intermediate, the more nucleophilic solvent can stabilize the carbocation better, thus the rate constant of the reaction could be larger. Since there’s no sharp line between the SN1 and SN2 reaction, a reaction that goes through SN1 mechanism more is preferred to achieve a better linear relationship, hence t-BuCl was chosen.
Y values
In the equation, $k_{80\%\,\text{EtOH}}$ stands for the rate constant of the t-BuCl reaction in 80% aqueous ethanol, which is chosen as the reference; $Y$ is defined by $Y = \log(k_\text{sol}/k_{80\%\,\text{EtOH}})$ for t-BuCl itself. The variable $k_\text{sol}$ stands for the rate constant of the same reaction in a different solvent system, such as ethanol–water, methanol–water, or acetic acid–formic acid. Thus, Y reflects the ionizing power of different nucleophilic solvents.
m values
The equation parameter m, called the sensitivity factor of solvolysis, describes the compound's ability to form the carbocation intermediate in a given solvent system. It is the slope of the plot of log(ksol/k80%EtOH) against the Y values (a fitting sketch follows the list below). Since the reference reaction proceeds with little nucleophilic solvent assistance, reactions with m equal to 1 or larger than 1 have almost fully ionized intermediates. If a compound is less sensitive to the ionizing ability of the solvent, its m value is smaller than 1. That is:
m ≥ 1: the reaction proceeds through the SN1 mechanism.
m < 1: the reaction proceeds through a mechanism between SN1 and SN2.
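As illustration, the sketch below fits m by ordinary least squares; the Y values resemble published ethanol–water values, but the rate-constant ratios are invented placeholders rather than measured data:

```python
import numpy as np

# Y values for four solvents (80% EtOH defines Y = 0) and invented
# log10(k_sol / k_80%EtOH) data for some hypothetical substrate.
Y = np.array([-2.03, 0.00, 1.66, 3.49])
logk_ratio = np.array([-1.91, 0.00, 1.56, 3.28])

# log(k/k0) = m * Y, so m is the slope of a line through the origin.
m, = np.linalg.lstsq(Y.reshape(-1, 1), logk_ratio, rcond=None)[0]
print(f"m = {m:.2f}")   # m close to 1 points to a largely SN1 pathway
```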
Disadvantages
The Grunwald–Winstein equation cannot fit all data for different kinds of solvent mixtures. The combinations are limited to certain systems and only to nucleophilic solvents.
For many reactions and nucleophilic solvent systems, the relationships are not fully linear. This deviation derives from growing SN2 character within the reaction mechanism.
See also
Free-energy relationship
Hammett equation
Quantitative structure–activity relationship
References
Physical organic chemistry
Equations | Grunwald–Winstein equation | [
"Chemistry",
"Mathematics"
] | 817 | [
"Equations",
"Mathematical objects",
"Physical organic chemistry"
] |
25,410,090 | https://en.wikipedia.org/wiki/Swain%E2%80%93Lupton%20equation | In physical organic chemistry, the Swain–Lupton equation is a linear free energy relationship (LFER) that is used in the study of reaction mechanisms and in the development of quantitative structure–activity relationships for organic compounds. It was developed by C. Gardner Swain and Elmer C. Lupton Jr. in 1968 as a refinement of the Hammett equation to include both field effects and resonance effects.
Background
In organic chemistry, the Hammett plot provides a means to assess substituent effects on a reaction equilibrium or rate using the Hammett equation (1):

$$\log \frac{K}{K_0} = \rho\sigma \qquad (1)$$

Hammett developed this equation from the equilibrium constants for the dissociation of benzoic acid and its derivatives (Fig. 1).
Hammett defined the equation based on two parameters: the reaction constant (ρ) and the substituent parameter (σ). When other reactions were studied using these parameters, a correlation was not always found due to the specific derivation of these parameters from the dissociation equilibrium of substituted benzoic acids and the original negligence of resonance effects. Therefore, the effects of substituents on an array of compounds must be studied on an individual reaction basis using the equation Hammett derived either for field or resonance effects, but not both.
Redefining the equation
C. Gardner Swain and Elmer C. Lupton Jr. from the Massachusetts Institute of Technology redefined the substituent parameter, σ, based on the idea that no more than two variables (resonance effects and field effects) are necessary to describe the effects of any given substituent. Field effects, F, are defined to include all non-resonance effects (inductive and pure field). Likewise, effects due to resonance, R, reflect the average of electron-donating ability and electron-accepting ability. These two effects are assumed to be independent of each other and therefore can be written as a linear combination:

$$\sigma = fF + rR$$

These two parameters are treated as independent terms because of an assumption that Swain and Lupton made: the substituent is kept distant from the reaction site by three or more saturated centers, or the substituent is (CH3)3N+. All other terms are then negligible, which leads to the Swain–Lupton equation above.
The new substituent parameter
The substituent parameter is now defined by field and resonance effects, F and R, which are dependent on the individual substituent. Constants r and f account for the importance of each of the two effects. These constants do not depend on the substituent but instead depend on the set of Hammett substituent parameters (σm, σp, σp+, σ', etc.).
In order to find the weighted constants, r and f, for each set of substituent parameters, one would need to establish the fact that each new substituent parameter σX can be written as a linear combination of specific reaction substituent parameters, i.e.

$$\sigma_X = c_1\sigma_{1X} + c_2\sigma_{2X},$$

where σ1X and σ2X are specific substituent parameters (i.e. σ+, σ−, etc.) and c1 and c2 are constants independent of the substituent (they depend on the reaction conditions, i.e. temperature, solvent, and the individual reaction being studied). This can be expressed more generically as:

$$\sigma_X = aF_X + bR_X + i,$$

where i is an intercept included to avoid fixing the origin at (0,0). If this were not done, the fit would give excessive weight to the unsubstituted compounds to which the comparison is being made.
A linear least-squares analysis is used to determine the coefficients/constants a, b, and i (Swain and Lupton used a procedure called DOVE: Dual Obligate Vector Evaluation).
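A minimal sketch of that least-squares step, assuming invented placeholder values for F, R, and σ (they are not Swain and Lupton's published constants, and plain NumPy stands in for their DOVE procedure):

```python
import numpy as np

# Invented placeholder constants for four substituents (hydrogen first).
F     = np.array([0.00, 0.43, 0.44, 0.65])
R     = np.array([0.00, -0.17, -0.15, 1.00])
sigma = np.array([0.00, 0.19, 0.20, 0.79])   # observed substituent parameters

# sigma_X = a*F_X + b*R_X + i; the column of ones carries the intercept i.
A = np.column_stack([F, R, np.ones_like(F)])
(a, b, i), *_ = np.linalg.lstsq(A, sigma, rcond=None)
print(f"a = {a:.2f}, b = {b:.2f}, i = {i:.2f}")  # roughly 0.60, 0.40, 0.00 here
```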
Constants were first based on three previous sets of substituent parameters (σm, σp, σp+), which left room for error, since the compiled data were only a minimal sample of a much larger pool. To reduce this error, the data pool was enlarged by first fixing a reference scale. A value of zero is used for hydrogen, because it is known neither to readily donate nor to accept electron density when attached to a carbon atom, owing to their similar electronegativities. A value of 1 was assigned to NO2, because previous research determined that the effect of this substituent is predominantly due to resonance. Lastly, F was set equal to R for both reference points so that the field effects could be compared directly to the resonance effects. This then leads to:
F = R = 0 for H (Hydrogen).
F = R = 1 for NO2 (Nitro-group).
Fig. 2 shows some relative F and R values that Swain and Lupton determined.
Substituent categories
Alkyl groups have a low to zero value for F but appreciable values for R. This is most commonly explained by hyperconjugation: little to no inductive effect but a partial resonance effect.
CF3 has a much higher R/F ratio than other substituents with high degrees of conjugation. This was studied in greater detail by Swain but is still explained best by fluoride hyperconjugation.
Positively charged substituents (e.g., (CH3)3N+) have larger positive F values because of the positive charge situated close to the carbon framework in question. Negatively charged substituents (i.e., CO2− and SO3−) have much lower F values because of their ability to delocalize electron density among the oxygen atoms and stabilize it through hydrogen bonding with solvents.
Linear free energy relationships are still useful, despite their disadvantages when pushed to their limits. New techniques to solve for Swain–Lupton substituent parameters involve studying chemical shifts through nuclear magnetic resonance spectroscopy. Recently, 15N NMR chemical shifts and substituent effects of 1,2,3,4,5,6,7,8-octahydroacridine and derivatives were studied. Values for R and F were found for the group in question, which could not be determined previously using known methods.
Values of f and r
It is sometimes useful to look at the percent resonance (%r), because r depends on the reaction and is the same for all substituents in that reaction.
One can predict the difference in data between two substituents by comparing their %r values, as defined below.
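The definition assumed here is a common normalization of r against the total weight f + r (the constants themselves come from the fits described above):

$$\%r = \frac{r}{f + r} \times 100$$

For the tungsten example below, σ = 0.2F + 0.8R corresponds to %r = 80.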
The most dominant effect is clear from the ratio of R to F. For example, a tungsten complex was shown to alkylate allyl carbonates A and B. The ratio of products A1 and B1 can be attributed to the para substituent, X (Fig. 3). Using Swain–Lupton parameters (σ = 0.2F + 0.8R), a ρ value of −2.5 was found as the slope.
This is in agreement with the proposed mechanism (a positive charge forms on the benzylic carbon and is stabilized by resonance; R dominates by a ratio of 0.8/0.2).
Disadvantages
Like any other established linear free-energy relationship, the Swain–Lupton equation will also fail when special circumstances arise, e.g. a change in the rate-determining step of a mechanism or in the solvation structure.
See also
Hammett equation
Taft equation
Grunwald–Winstein equation
Yukawa–Tsuno equation
Bell–Evans–Polanyi principle
Free-energy relationship
Quantitative structure–activity relationship
References
Equations
Physical organic chemistry | Swain–Lupton equation | [
"Chemistry",
"Mathematics"
] | 1,552 | [
"Equations",
"Mathematical objects",
"Physical organic chemistry"
] |
25,410,110 | https://en.wikipedia.org/wiki/Yukawa%E2%80%93Tsuno%20equation | The Yukawa–Tsuno equation, first developed in 1959, is a linear free-energy relationship in physical organic chemistry. It is a modified version of the Hammett equation that accounts for enhanced resonance effects in electrophilic reactions of para- and meta-substituted organic compounds. This equation does so by introducing a new term to the original Hammett relation that provides a measure of the extent of resonance stabilization for a reactive structure that builds up charge (positive or negative) in its transition state. The Yukawa–Tsuno equation can take the following forms:
$$\log \frac{k}{k_0} = \rho\left[\sigma + r\left(\sigma^+ - \sigma\right)\right] \qquad \text{or} \qquad \log \frac{k}{k_0} = \rho\left[\sigma + r\left(\sigma^- - \sigma\right)\right],$$

where $k$ and $k_0$ represent the rate constants for an X-substituted and an unsubstituted compound, respectively; $\rho$ represents the Hammett reaction constant; $\sigma$ represents the Hammett substituent constant; $\sigma^+$ and $\sigma^-$ represent the Hammett substituent constants for reactions in which positive or negative charges are built up at the reactive center, respectively; and $r$ represents the Yukawa–Tsuno parameter.
Background
The Hammett substituent constant, $\sigma$, is composed of two independent terms: an inductive effect $\sigma_I$ and a resonance polar effect $\sigma_R$. These components represent the consequences of the presence of a particular substituent on reactivity through sigma and pi bonds, respectively. For a particular substituent, the value of $\sigma$ is generally assumed to be a constant, irrespective of the nature of the reaction; however, it has been shown that for reactions of para-substituted compounds in which the transition state bears a nearly full charge, $\sigma_R$ does not remain constant, and thus the sum $\sigma = \sigma_I + \sigma_R$ is also variable. In other words, for such reactions, application of the standard Hammett equation does not produce a linear plot. To correlate these deviations from linearity, Yasuhide Yukawa and Yuho Tsuno proposed a modification to the original Hammett equation which accounts exclusively for enhanced resonance effects due to the high electron demand during such reactions.
Modified Hammett equation
In their 1959 publication, Yukawa and Tsuno attributed observed deviations from Hammett plot linearity in electrophilic reactions to additional resonance effects occurring through the pi bonds of substituent groups in their compounds. This implied that the inductive component of the Hammett substituent constant remains constant in such reactions, while the resonance component, $\sigma_R$, does not. From this assumption, the two scientists defined a new resonance substituent constant, $\Delta\bar{\sigma}_R^+$, that is mathematically represented as follows:

$$\Delta\bar{\sigma}_R^+ = \sigma^+ - \sigma,$$

for a reaction in which positive charge is built up at the reactive center in the transition state. In order to quantify the extent of the observed enhanced resonance effects, Yukawa and Tsuno introduced an enhanced resonance parameter, $r$, that quantifies the "demand for resonance" at the reactive center. Thus, the resultant Yukawa–Tsuno effective substituent constant is given by:

$$\bar{\sigma} = \sigma + r\left(\sigma^+ - \sigma\right),$$

and the Yukawa–Tsuno equation (modified Hammett equation) takes the form:

$$\log \frac{k}{k_0} = \rho\,\bar{\sigma} = \rho\left[\sigma + r\left(\sigma^+ - \sigma\right)\right].$$

Values of $\Delta\bar{\sigma}_R^+$ have been determined and catalogued for a number of substituents for quick application of the Yukawa–Tsuno equation.
Enhanced resonance parameter, r
The enhanced resonance parameter, $r$, is a measure of the influence of resonance on a new reaction. When $r = 1$, the resonance effects for a particular reaction are no different from those for reaction of the unsubstituted reference compound. However, when $r > 1$, the reaction in question is more sensitive to resonance effects than the standard, and when $r < 1$, the reaction is less sensitive to such effects.
The enhanced resonance parameter is determined by first establishing the Hammett reaction constant $\rho$ from data collected on meta-substituted compounds, and subsequently correlating the remaining data to fit the modified equation described above; a minimal numerical sketch of this two-step fit follows.
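The sketch below uses invented data constructed so that ρ = −4.0 and r = 0.8 are recovered exactly; it illustrates the procedure rather than reproducing Yukawa and Tsuno's analysis:

```python
import numpy as np

# Step 1: rho from meta-substituted compounds, where resonance effects are
# small and log(k/k0) = rho * sigma.  All numbers are invented placeholders.
sigma_meta = np.array([0.00, 0.12, 0.37, 0.71])
logk_meta  = np.array([0.00, -0.48, -1.48, -2.84])
rho, = np.linalg.lstsq(sigma_meta.reshape(-1, 1), logk_meta, rcond=None)[0]

# Step 2: r from para-substituted compounds via
# log(k/k0) = rho * (sigma + r * (sigma_plus - sigma)).
sigma_para = np.array([-0.27, -0.17, 0.23])
sigma_plus = np.array([-0.78, -0.31, 0.11])
logk_para  = np.array([2.712, 1.128, -0.536])

excess = logk_para / rho - sigma_para       # equals r * (sigma_plus - sigma)
r, = np.linalg.lstsq((sigma_plus - sigma_para).reshape(-1, 1), excess,
                     rcond=None)[0]
print(f"rho = {rho:.2f}, r = {r:.2f}")      # recovers rho = -4.00, r = 0.80
```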
Limitations
The Yukawa–Tsuno equation allows for treatment of both para- and meta-substituents, and it also better correlates data from reactions with high electron demand than the original Hammett equation. However, this equation does not take into account the effects of various solvents on organic reactions. Also, Yukawa and Tsuno note that, even within a group of similar reactions, the values for more electron-withdrawing substituents tend to be higher than predicted (seen as a slight increase in slope on a Yukawa–Tsuno plot) and thus are not as strongly correlated with the remainder of the data.
See also
Free-energy relationship
Hammett equation
Quantitative structure-activity relationship
References
Physical organic chemistry
Equations | Yukawa–Tsuno equation | [
"Chemistry",
"Mathematics"
] | 898 | [
"Equations",
"Mathematical objects",
"Physical organic chemistry"
] |
13,056,002 | https://en.wikipedia.org/wiki/Saab%20Direct%20Ignition | Saab Direct Ignition is a capacitor discharge ignition developed by Saab Automobile, then known as Saab-Scania, and Mecel AB during the 1980s.
It was first shown in 1985 and put into series production in the Saab 9000 in 1988. One of the first applications of the system was for a Formula Three racing engine (based on the B202) developed with the help of engine builder John Nicholson, first shown in the spring of 1985. The system has been revised several times over the years. The ignition electronics together with the ignition coils form a single transformer-oil-filled cassette (or two cassettes in the case of a V6 engine) which is placed directly on the spark plugs, eliminating the need for a distributor.
It was later integrated with the Saab Trionic engine management systems as one of the first ion-sensing ignition systems on a production car.
The system puts a low voltage over the spark plugs when they are not being fired in order to measure ionization in the cylinders. The ionic current measurement replaces the ordinary knock sensor and the misfire-measurement function.
Direct Ignition Cassette
The spark plugs are directly coupled to the "DIC" (or "IDM") which houses the ignition coils and electronics that measure cylinder ionization for use by the Trionic engine management system.
See also
Direct & Distributorless Ignition
Saab H engine
Saab 900NG (2nd Generation 900, 1994-1998)
Saab 9-3 (1st Generation, 1999-2002)
References
Ignition systems
Engine technology
Automotive technology tradenames
Saab engines | Saab Direct Ignition | [
"Technology"
] | 320 | [
"Engine technology",
"Engines"
] |
13,060,473 | https://en.wikipedia.org/wiki/LAPCAT | LAPCAT (Long-Term Advanced Propulsion Concepts and Technologies) was a 36-month European FP6 study to examine ways to produce engines for hypersonic aircraft flying at Mach 4–8. The project ended in April 2008. It was funded by the European Commission research and development fund (rather than ESA), and cost 7 million euros.
LAPCAT II, a 10-million-euro, four-year follow-on project, started in October 2008. The study aims to refine some of the results of the first study, "allowing the definition of a detailed development roadmap" for a Mach 5 vehicle.
Objectives
Two major technologies were to be considered:
ram-compression, which needs an additional propulsion system to achieve its minimum working speed.
active compression, which has an upper Mach number limitation but can accelerate a vehicle up to its cruise speed.
Key objectives were the definition and evaluation of:
different propulsion cycles and concepts for high-speed flight at Mach 4 to 8 such as turbine-based and rocket-based combined cycles
critical technologies for:
integrated engine/aircraft performance
mass-efficient turbines and heat-exchangers
high-pressure and supersonic combustion experiments
modelling
Intended results were:
definition of requirements and operational conditions for high-speed flight at system level
dedicated experimental data-base specific to high-speed aerodynamics for supersonic and high-pressure combustion and flow phenomena.
setting-up and validating physical models supported by numerical simulation tools to address supersonic and high-pressure combustion, turbulence and transition phenomena.
feasibility study of mass-efficient turbine and heat-exchanger components
Results
Among the several vehicles studied, only two novel concepts were retained for LAPCAT II: a Mach five vehicle and a Mach eight vehicle.
Mach five vehicle
One possible supersonic transport aircraft being researched as part of this project is the A2 by Reaction Engines Limited. The researchers are looking at an aircraft capable of flying from Brussels (Belgium) to Sydney (Australia) in 2–4 hours, significantly reducing journey times across the globe.
To attain and maintain such high speeds, Reaction Engines Limited would need to develop its newly designed concept engine, called the Scimitar, which exploits the thermodynamic properties of liquid hydrogen. The engine is theoretically capable of powering the A2 at a sustained Mach 5 throughout flight, with an effective exhaust velocity of 40,900 m/s, i.e. a specific impulse of about 4170 s, and a correspondingly low specific fuel consumption (SFC).
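For reference, the two quoted performance figures are mutually consistent through the standard gravity $g_0$:

$$I_{\mathrm{sp}} = \frac{v_e}{g_0} = \frac{40\,900\ \mathrm{m/s}}{9.81\ \mathrm{m/s^2}} \approx 4170\ \mathrm{s}$$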
"Results so far show the Mach 5 vehicle from Reaction Engines can avoid later technology pitfalls and could travel from Brussels to Sydney," says ESA's LAPCAT project coordinator Johan Steelant.
Mach eight vehicle
Although the cruise flight of the scramjet based Mach eight vehicle seems feasible, the fuel consumption during acceleration requires a large fuel fraction, severely affecting gross take-off weight. Initial studies of a first-stage rocket ejector concept gave poor range with large take-off mass.
See also
Precooled jet engine
Reaction Engines SABRE
Reaction Engines A2
Liquid air cycle engine
Notes
References
Supersonic transports | LAPCAT | [
"Physics"
] | 604 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
13,060,570 | https://en.wikipedia.org/wiki/Prokineticin%20receptor | The prokineticin receptor is a G protein-coupled receptor which binds the peptide hormone prokineticin. There are two variants, each encoded by a different gene (PROKR1, PROKR2). These receptors mediate gastrointestinal smooth muscle contraction and angiogenesis.
References
External links
G protein-coupled receptors | Prokineticin receptor | [
"Chemistry"
] | 69 | [
"G protein-coupled receptors",
"Signal transduction"
] |
13,061,160 | https://en.wikipedia.org/wiki/Free%20fatty%20acid%20receptor | Free fatty acid receptors (FFARs) are G-protein coupled receptors (GPRs). GPRs (also termed seven-(pass)-transmembrane domain receptors) are a large family of receptors. They reside on their parent cells' surface membranes, bind any one of a specific set of ligands that they recognize, and thereby are activated to elicit certain types of responses in their parent cells. Humans express more than 800 different types of GPCRs. FFARs are GPCR that bind and thereby become activated by particular fatty acids. In general, these binding/activating fatty acids are straight-chain fatty acids consisting of a carboxylic acid residue, i.e., -COOH, attached to aliphatic chains, i.e. carbon atom chains of varying lengths with each carbon being bound to 1, 2 or 3 hydrogens (CH1, CH2, or CH3). For example, propionic acid is a short-chain fatty acid consisting of 3 carbons (C's), CH3-CH2-COOH, and docosahexaenoic acid is a very long-chain polyunsaturated fatty acid consisting of 22 C's and six double bonds (double bonds notated as "="): CH3-CH2-CH1=CH1-CH2-CH1=CH1-CH2-CH1=CH1-CH2-CH1=CH1-CH2-CH1=CH1-CH2-CH1=CH1-CH2-CH2-COOH.
Currently, four FFARs are recognized: FFAR1, also termed GPR40; FFAR2, also termed GPR43; FFAR3, also termed GPR41; and FFAR4, also termed GPR120. The human FFAR1, FFAR2, and FFAR3 genes are located close to each other on the long (i.e., "q") arm of chromosome 19 at position 23.33 (notated as 19q23.33). This location also includes the GPR42 gene (previously termed the FFAR1L, FFAR3L, GPR41L, and GPR42P gene). This gene appears to be a segmental duplication of the FFAR3 gene. The human GPR42 gene codes for several proteins with a FFAR3-like structure, but their expression in various cell types and tissues as well as their activities and functions have not yet been clearly defined. Consequently, none of these proteins are classified as an FFAR. The human FFAR4 gene, in contrast, is located on the long (i.e., "q") arm of chromosome 10 (notated as 10q23.33).
FFAR2 and FFAR3 bind and are activated by short-chain fatty acids, i.e., fatty acid chains consisting of 6 or less carbon atoms such as acetic, butyric, proprionic, pentanoic, and hexanoic acids. β-hydroxybutyric acid has been reported to stimulate or inhibit FFAR3. FFAR1 and FFAR4 bind to and are activated by medium-chain fatty acids (i.e., fatty acids consisting of 6-12 carbon atoms) such as lauric and capric acids and long-chain or very long-chain fatty acids (i.e., fatty acids consisting respectively of 13 to 21 or more than 21 carbon atoms) such as myristic, steric, oleic, palmitic, palmitoleic, linoleic, alpha-linolenic, dihomo-gamma-linolenic, eicosatrienoic, arachidonic (also termed eicosatetraenoic acid), eicosapentaenoic, docosatetraenoic, docosahexaenoic, and 20-hydroxyeicosatetraenoic acids. Among the fatty acids that activate FFAR1 and FFAR4, docosahexaenoic and eicosapentaenoic acids are regarded as the main fatty acids that do so.
Many of the FFAR-activating fatty acids also activate other types of GPRs. The actual GPR activated by a fatty acid must be identified in order to understand the function of both the fatty acid and the activated GPR. The following section gives the non-FFAR GPRs that are activated by FFAR-activating fatty acids. One of the most often used and best ways of showing that a fatty acid's action is due to a specific GPR is to show that the action is either absent or significantly reduced in cells, tissues, or animals in which the gene for the GPR protein that mediates the fatty acid's action has been subjected to knockout (i.e., total removal or inactivation) or knockdown (i.e., significant reduction in expression).
Other GPRs activated by FFAR-activating fatty acids
GPR84 binds and is activated by medium-chain fatty acids consisting of 9 to 14 carbon atoms such as capric, undecanoic, and lauric acids. It has been recognized as a possible member of the free fatty acid receptor family in some publications but has not yet been given this designation, perhaps because these medium-chain fatty acid activators require very high concentrations (e.g., in the micromolar range) to activate it. This leaves open the possibility that some naturally occurring agent(s) activates GPR84 at lower concentrations than the cited fatty acids. Consequently, GPR84 remains classified as an orphan receptor, i.e., a receptor whose naturally occurring activator(s) is unclear.
GPR109A is also termed hydroxycarboxylic acid receptor 2, niacin receptor 1, HM74a, HM74b, and PUMA-G. GPR109A binds and thereby is activated by the short-chain fatty acids butyric, β-hydroxybutyric, pentanoic, and hexanoic acids and by the intermediate-chain fatty acids heptanoic and octanoic acids. GPR109A is also activated by niacin, but the levels of niacin in the body are in general too low to activate it unless niacin is given as a drug in high doses.
GPR81 (also termed hydroxycarboxylic acid receptor 1, HCAR1, GPR104, GPR81, LACR1, TA-GPCR, TAGPCR, and FKSG80) binds and is activated by the short-chain fatty acids, lactic acid and β-hydroxybutyric acid. A more recent study reported that it is also activated by the compound 3,5-dihydroxybenzoic acid.
GPR109B (also known as hydroxycarboxylic acid receptor 3, HCA3, niacin receptor 2, and NIACR2) binds and is activated by the medium-chain fatty acid, 3-hydroxyoctanoate, niacin, and by four compounds viz., hippuric acid, 4-hydroxyphenyllactic acid, phenyllacetic acid, and indole-3-lactic acid. The latter three compounds are produced by Lactobacillus and Bifidobacterium species of bacteria that occupy the gastrointestinal tracts of animals and humans.
GPR91 (also termed the succinic acid receptor, succinate receptor, or SUCNR1) is activated most potently by the short-chain dicarboxylic fatty acid, succinic acid; the short-chain fatty acids oxaloacetic, malic, and α-ketoglutaric acids are less potent activators of GPR91.
References
External links
G protein-coupled receptors | Free fatty acid receptor | [
"Chemistry"
] | 1,668 | [
"G protein-coupled receptors",
"Signal transduction"
] |
13,061,481 | https://en.wikipedia.org/wiki/Super%20Conserved%20Receptor%20Expressed%20in%20Brain | The Super Conserved Receptor Expressed in Brain (SREB) family are a group of related G-protein coupled receptors. Since no endogenous ligands have yet been identified for these receptors, they are classified as orphan receptors. Receptors within the group include SREB1 (GPR27), SREB2 (GPR85), and SREB3 (GPR173).
References
External links
IUPHAR GPCR Database - GPR27 (previously SREB1)
IUPHAR GPCR Database - GPR85 (previously SREB2)
IUPHAR GPCR Database - GPR173 (previously SREB3)
G protein-coupled receptors | Super Conserved Receptor Expressed in Brain | [
"Chemistry"
] | 138 | [
"G protein-coupled receptors",
"Signal transduction"
] |
13,061,682 | https://en.wikipedia.org/wiki/Platelet-activating%20factor%20receptor | The platelet-activating factor receptor (PAF-R) is a G-protein coupled receptor which binds platelet-activating factor. It is encoded in the human by the PTAFR gene.
The PAF receptor shows structural characteristics of the rhodopsin (MIM 180380) gene family and binds platelet-activating factor (PAF). PAF is a phospholipid (1-0-alkyl-2-acetyl-sn-glycero-3-phosphorylcholine) that has been implicated as a mediator in diverse pathologic processes, such as allergy, asthma, septic shock, arterial thrombosis, and inflammatory processes.[supplied by OMIM] Its pathogenetic role in chronic kidney failure has also been reported recently.
Ligands
Agonists
Platelet activating factor
Antagonists
Apafant (WEB-2086)
Israpafant (Y-24180)
Lexipafant
Rupatadine
References
Further reading
External links
G protein-coupled receptors | Platelet-activating factor receptor | [
"Chemistry"
] | 228 | [
"G protein-coupled receptors",
"Signal transduction"
] |
1,302,888 | https://en.wikipedia.org/wiki/BESS%20%28experiment%29 | BESS is a particle physics experiment carried by a balloon. BESS stands for Balloon-borne Experiment with Superconducting Spectrometer.
See also
BOOMERanG experiment
References
External links
BESS webpage on the NASA website
High energy particle telescopes
Cosmic-ray experiments
Balloon-borne experiments
Astronomical experiments in the Antarctic | BESS (experiment) | [
"Physics",
"Astronomy"
] | 65 | [
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Particle physics",
"Particle physics stubs"
] |
1,305,071 | https://en.wikipedia.org/wiki/Bridge%20%28graph%20theory%29 | In graph theory, a bridge, isthmus, cut-edge, or cut arc is an edge of a graph whose deletion increases the graph's number of connected components. Equivalently, an edge is a bridge if and only if it is not contained in any cycle. For a connected graph, a bridge can uniquely determine a cut. A graph is said to be bridgeless or isthmus-free if it contains no bridges.
This type of bridge should be distinguished from an unrelated meaning of "bridge" in graph theory, a subgraph separated from the rest of the graph by a specified subset of vertices; see bridge in the Glossary of graph theory.
Trees and forests
A graph with $n$ nodes can contain at most $n-1$ bridges, since adding any additional edges must create a cycle. The graphs with exactly $n-1$ bridges are exactly the trees, and the graphs in which every edge is a bridge are exactly the forests.
In every undirected graph, there is an equivalence relation on the vertices according to which two vertices are related to each other whenever there are two edge-disjoint paths connecting them. (Every vertex is related to itself via two length-zero paths, which are identical but nevertheless edge-disjoint.) The equivalence classes of this relation are called 2-edge-connected components, and the bridges of the graph are exactly the edges whose endpoints belong to different components. The bridge-block tree of the graph has a vertex for every nontrivial component and an edge for every bridge.
Relation to vertex connectivity
Bridges are closely related to the concept of articulation vertices, vertices that belong to every path between some pair of other vertices. The two endpoints of a bridge are articulation vertices unless they have a degree of 1, although it may also be possible for a non-bridge edge to have two articulation vertices as endpoints. Analogously to bridgeless graphs being 2-edge-connected, graphs without articulation vertices are 2-vertex-connected.
In a cubic graph, every cut vertex is an endpoint of at least one bridge.
Bridgeless graphs
A bridgeless graph is a graph that does not have any bridges. Equivalent conditions are that each connected component of the graph has an open ear decomposition, that each connected component is 2-edge-connected, or (by Robbins' theorem) that every connected component has a strong orientation.
An important open problem involving bridges is the cycle double cover conjecture, due to Seymour and Szekeres (1978 and 1979, independently), which states that every bridgeless graph admits a multi-set of simple cycles which contains each edge exactly twice.
Tarjan's bridge-finding algorithm
The first linear time algorithm (linear in the number of edges) for finding the bridges in a graph was described by Robert Tarjan in 1974. It performs the following steps:
Find a spanning forest of $G$
Create a rooted forest $F$ from the spanning forest
Traverse the forest in preorder and number the nodes. Parent nodes in the forest now have lower numbers than child nodes.
For each node $v$ in preorder (denoting each node using its preorder number), do:
Compute the number of forest descendants $ND(v)$ for this node, by adding one to the sum of its children's descendants.
Compute $L(v)$, the lowest preorder label reachable from $v$ by a path for which all but the last edge stays within the subtree rooted at $v$. This is the minimum of the set consisting of the preorder label of $v$, of the values of $L(w)$ at child nodes $w$ of $v$, and of the preorder labels of nodes reachable from $v$ by edges that do not belong to $F$.
Similarly, compute $H(v)$, the highest preorder label reachable by a path for which all but the last edge stays within the subtree rooted at $v$. This is the maximum of the set consisting of the preorder label of $v$, of the values of $H(w)$ at child nodes $w$ of $v$, and of the preorder labels of nodes reachable from $v$ by edges that do not belong to $F$.
For each node $v$ with parent node $u$, if $L(v) = v$ and $H(v) < v + ND(v)$ then the edge from $u$ to $v$ is a bridge (see the sketch below).
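A minimal sketch of the same computation in the equivalent low-point formulation often used in practice (disc/low arrays in place of the L, H, and ND labels above; the function name and example graph are illustrative):

```python
import sys
from collections import defaultdict

def find_bridges(n, edges):
    """Return the bridges of an undirected graph on vertices 0..n-1.

    Linear-time low-point DFS, equivalent to Tarjan's algorithm; edge
    indices are tracked so that parallel edges are handled correctly.
    """
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    disc = [0] * n      # preorder number; 0 means "not yet visited"
    low = [0] * n       # lowest preorder number reachable from v's subtree
    bridges, timer = [], [1]

    def dfs(v, parent_edge):
        disc[v] = low[v] = timer[0]
        timer[0] += 1
        for w, i in adj[v]:
            if i == parent_edge:        # do not reuse the tree edge to the parent
                continue
            if disc[w]:                 # back edge: may lower v's low point
                low[v] = min(low[v], disc[w])
            else:                       # tree edge: recurse, then pull up low[w]
                dfs(w, i)
                low[v] = min(low[v], low[w])
                if low[w] > disc[v]:    # w's subtree cannot reach v or above
                    bridges.append((v, w))

    sys.setrecursionlimit(max(1000, 2 * n + 100))
    for v in range(n):
        if not disc[v]:
            dfs(v, -1)
    return bridges

# Path 0-1-2 attached to triangle 2-3-4: the path edges are the two bridges.
print(find_bridges(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 2)]))
# -> [(1, 2), (0, 1)]
```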
Bridge-finding with chain decompositions
A very simple bridge-finding algorithm uses chain decompositions.
Chain decompositions not only allow one to compute all bridges of a graph; they also allow one to read off every cut vertex of G (and the block-cut tree of G), giving a general framework for testing 2-edge- and 2-vertex-connectivity (which extends to linear-time 3-edge- and 3-vertex-connectivity tests).
Chain decompositions are special ear decompositions depending on a DFS tree T of G and can be computed very simply: Let every vertex be marked as unvisited. For each vertex v in ascending DFS numbers 1...n, traverse every back edge (i.e. every edge not in the DFS tree) that is incident to v and follow the path of tree edges back to the root of T, stopping at the first vertex that is marked as visited. During such a traversal, every traversed vertex is marked as visited. Thus, a traversal stops at the latest at v and forms either a directed path or a cycle, beginning with v; we call this path or cycle a chain. The ith chain found by this procedure is referred to as Ci. C = C1, C2, ... is then a chain decomposition of G.
The following characterizations then allow one to read off several properties of G from C efficiently, including all bridges of G (a sketch of the procedure follows the list below). Let C be a chain decomposition of a simple connected graph G = (V, E).
G is 2-edge-connected if and only if the chains in C partition E.
An edge e in G is a bridge if and only if e is not contained in any chain in C.
If G is 2-edge-connected, C is an ear decomposition.
G is 2-vertex-connected if and only if G has minimum degree 2 and C1 is the only cycle in C.
A vertex v in a 2-edge-connected graph G is a cut vertex if and only if v is the first vertex of a cycle in C - C1.
If G is 2-vertex-connected, C is an open ear decomposition.
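A minimal sketch of this procedure for a connected simple graph, assuming input small enough for recursive DFS; the function name is illustrative:

```python
import sys

def chain_decomposition(adj):
    """Chain decomposition of a connected simple graph (Schmidt's method).

    adj maps each vertex to an iterable of neighbours.  Returns
    (chains, bridges): an edge is a bridge iff it lies on no chain.
    """
    dfs_num, parent, order = {}, {}, []

    def dfs(v):                         # DFS tree with preorder numbers
        dfs_num[v] = len(dfs_num)
        order.append(v)
        for w in adj[v]:
            if w not in dfs_num:
                parent[w] = v
                dfs(w)

    root = next(iter(adj))
    parent[root] = None
    sys.setrecursionlimit(max(1000, 2 * len(adj) + 100))
    dfs(root)

    visited, chains, on_chain = set(), [], set()
    for v in order:                     # vertices in ascending DFS numbers
        for w in adj[v]:
            # keep only back edges whose upper endpoint is v
            if parent.get(w) == v or parent[v] == w or dfs_num[w] < dfs_num[v]:
                continue
            visited.add(v)
            chain, x = [v, w], w
            on_chain.add(frozenset((v, w)))
            while x not in visited:     # walk tree edges toward the root
                visited.add(x)
                on_chain.add(frozenset((x, parent[x])))
                chain.append(parent[x])
                x = parent[x]
            chains.append(chain)

    all_edges = {frozenset((u, w)) for u in adj for w in adj[u]}
    return chains, [tuple(e) for e in all_edges - on_chain]

# Triangle 0-1-2 with a pendant vertex 3: the edge 2-3 is the only bridge.
chains, bridges = chain_decomposition({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]})
print(chains, bridges)   # [[0, 2, 1, 0]] [(2, 3)]  (tuple order may vary)
```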
See also
Biconnected component
Cut (graph theory)
Notes
Graph connectivity | Bridge (graph theory) | [
"Mathematics"
] | 1,281 | [
"Mathematical relations",
"Graph connectivity",
"Graph theory"
] |
1,305,761 | https://en.wikipedia.org/wiki/Afshar%20experiment | The Afshar experiment is a variation of the double-slit experiment in quantum mechanics, devised and carried out by Shahriar Afshar in 2004. In the experiment, light generated by a laser passes through two closely spaced pinholes, and is refocused by a lens so that the image of each pinhole falls on a separate single-photon detector. In addition, a grid of thin wires is placed just before the lens on the dark fringes of an interference pattern.
Afshar claimed that the experiment gives information about which path a photon takes through the apparatus, while simultaneously allowing interference between the paths to be observed. According to Afshar, this violates the complementarity principle of quantum mechanics.
The experiment has been analyzed and repeated by a number of investigators. There are several theories that explain the effect without violating complementarity. John G. Cramer claims the experiment provides evidence for the transactional interpretation of quantum mechanics over other interpretations.
History
Shahriar Afshar's experimental work was done initially at the Institute for Radiation-Induced Mass Studies (IRIMS) in Boston and later reproduced at Harvard University, while he was there as a visiting researcher. The results were first presented at a seminar at Harvard in March 2004. The experiment was featured as the cover story in the July 24, 2004 edition of the popular science magazine New Scientist endorsed by professor John G. Cramer of the University of Washington. The New Scientist feature article generated many responses, including various letters to the editor that appeared in the August 7 and August 14, 2004, issues, arguing against the conclusions being drawn by Afshar. The results were published in a SPIE conference proceedings in 2005. A follow-up paper was published in a scientific journal Foundations of Physics in January 2007 and featured in New Scientist in February 2007.
Experimental setup
The experiment uses a setup similar to that for the double-slit experiment. In Afshar's variant, light generated by a laser passes through two closely spaced circular pinholes (not slits). After the dual pinholes, a lens refocuses the light so that the image of each pinhole falls on separate photon-detectors (Fig. 1). With pinhole 2 closed, a photon that goes through pinhole 1 impinges only on photon detector 1. Similarly, with pinhole 1 closed, a photon that goes through pinhole 2 impinges only on photon detector 2. With both pinholes open, Afshar claims, citing Wheeler in support, that pinhole 1 remains correlated to photon Detector 1 (and vice versa for pinhole 2 to photon Detector 2), and therefore that which-way information is preserved when both pinholes are open.
When the light acts as a wave, because of quantum interference one can observe that there are regions that the photons avoid, called dark fringes. A grid of thin wires is placed just before the lens (Fig. 2) so that the wires lie in the dark fringes of an interference pattern which is produced by the dual pinhole setup. If one of the pinholes is blocked, the interference pattern will no longer be formed, and the grid of wires causes appreciable diffraction in the light and blocks some of it from detection by the corresponding photon detector. However, when both pinholes are open, the effect of the wires is negligible, comparable to the case in which there are no wires placed in front of the lens (Fig. 3), because the wires lie in the dark fringes of an interference pattern. The effect is not dependent on the light intensity (photon flux).
Afshar's interpretation
Afshar's conclusion is that, when both pinholes are open, the light exhibits wave-like behavior when going past the wires, since the light goes through the spaces between the wires but avoids the wires themselves, but also exhibits particle-like behavior after going through the lens, with photons going to a correlated photo-detector. Afshar argues that this behavior contradicts the principle of complementarity to the extent that it shows both wave and particle characteristics in the same experiment for the same photons.
Afshar asserts that there is simultaneously a high visibility $V$ of interference as well as a high distinguishability $D$ (corresponding to which-path information), so that $V^2 + D^2 > 1$, and the wave–particle duality relation is violated.
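For context, the Englert–Greenberger duality relation invoked here bounds the fringe visibility $V$ and the which-way distinguishability $D$ of any two-path interferometer:

$$V^2 + D^2 \le 1$$

Afshar's assertion thus amounts to claiming a measured violation of this bound.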
Reception
Specific criticism
A number of scientists have published criticisms of Afshar's interpretation of his results, some of which reject the claims of a violation of complementarity, while differing in the way they explain how complementarity copes with the experiment. For example, one paper contests Afshar's core claim, that the Englert–Greenberger duality relation is violated. The researchers re-ran the experiment, using a different method for measuring the visibility of the interference pattern than that used by Afshar, and found no violation of complementarity, concluding "This result demonstrates that the experiment can be perfectly explained by the Copenhagen interpretation of quantum mechanics."
Below is a synopsis of papers by several critics highlighting their main arguments and the disagreements they have amongst themselves:
Ruth Kastner, Committee on the History and Philosophy of Science, University of Maryland, College Park.
Kastner's criticism, published in a peer-reviewed paper, proceeds by setting up a thought experiment and applying Afshar's logic to it to expose its flaw. She proposes that Afshar's experiment is equivalent to preparing an electron in a spin-up state and then measuring its sideways spin. This does not imply that one has found out the up-down spin state and the sideways spin state of any electron simultaneously. Applied to Afshar's experiment: "Nevertheless, even with the grid removed, since the photon is prepared in a superposition S, the measurement at the final screen at t2 never really is a 'which-way' measurement (the term traditionally attached to the slit-basis observable), because it cannot tell us 'which slit the photon actually went through.'"
Daniel Reitzner, Research Center for Quantum Information, Institute of Physics, Slovak Academy of Sciences, Bratislava, Slovakia.
Reitzner performed numerical simulations, published in a preprint, of Afshar's arrangement and obtained the same results that Afshar obtained experimentally. From this he argues that the photons exhibit wave behavior, including high fringe visibility but no which-way information, up to the point they hit the detector: "In other words the two-peaked distribution is an interference pattern and the photon behaves as a wave and exhibits no particle properties until it hits the plate. As a result a which-way information can never be obtained in this way."
W. G. Unruh, Professor of Physics at University of British Columbia
Unruh, like Kastner, proceeds by setting up an arrangement that he feels is equivalent but simpler. The size of the effect is larger so that it is easier to see the flaw in the logic. In Unruh's view that flaw is, in the case that an obstacle exists at the position of the dark fringes, "drawing the inference that IF the particle was detected in detector 1, THEN it must have come from path 1. Similarly, IF it were detected in detector 2, then it came from path 2." In other words, he accepts the existence of an interference pattern but rejects the existence of which-way information.
Luboš Motl, Former assistant professor of physics, Harvard University.
Motl's criticism, published in his blog, is based on an analysis of Afshar's actual setup, instead of proposing a different experiment like Unruh and Kastner. In contrast to Unruh and Kastner, he believes that which-way information always exists, but argues that the measured contrast of the interference pattern is actually very low: "Because this signal (disruption) from the second, middle picture is small (equivalently, it only affects a very small portion of the photons), the contrast V is also very small, and goes to zero for infinitely thin wires." He also argues that the experiment can be understood with classical electrodynamics and has "nothing to do with quantum mechanics".
Ole Steuernagel, School of Physics, Astronomy and Mathematics, University of Hertfordshire, UK.
Steuernagel makes a quantitative analysis of the various transmitted, refracted, and reflected modes in a setup that differs only slightly from Afshar's. He concludes that the Englert-Greenberger duality relation is strictly satisfied, and in particular that the fringe visibility for thin wires is small. Like some of the other critics, he emphasizes that inferring an interference pattern is not the same as measuring one: "Finally, the greatest weakness in the analysis given by Afshar is the inference that an interference pattern must be present."
Andrew Knight, Department of Physics, New York University
Argues that Afshar's claim to violate complementarity is a simple logical inconsistency: by setting up the experiment so that photons are spatially coherent over the two pinholes, the pinholes are necessarily indistinguishable by those photons. “In other words, Afshar et al. claim in one breath to have set up the experiment so that pinholes A and B are inherently indistinguishable by certain photons [specifically, photons that are produced to be spatially coherent over the width spanned by pinholes that are thus incapable of distinguishing them], and in another breath to have distinguished pinholes A and B with those same photons.”
Specific support
Afshar's coauthors Eduardo Flores and Ernst Knoesel criticize Kastner's setup and propose an alternative experimental setup. By removing Afshar's lens and causing two beams to overlap at a small angle, Flores et al. aimed to show that conservation of momentum guarantees the preservation of which-path information when both pinholes are open. But this experiment is still subject to Motl's objection that the two beams have a sub-microscopic diffraction pattern, created by the convergence of the beams before the slits; the result would then have been a measurement of which slit was open before the wires were ever reached.
John G. Cramer adopts Afshar's interpretation of the experiment to support his own transactional interpretation of quantum mechanics over the Copenhagen interpretation and the many-worlds interpretation.
See also
Wheeler's delayed choice experiment
Delayed choice quantum eraser
Weak measurement
Wheeler–Feynman absorber theory
References
Quantum measurement
Physics experiments
Philosophy of physics | Afshar experiment | [
"Physics"
] | 2,189 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Physics experiments",
"Quantum mechanics",
"Quantum measurement",
"Experimental physics"
] |
1,305,822 | https://en.wikipedia.org/wiki/Dado%20%28architecture%29 | In architecture, the dado is the lower part of a wall, below the dado rail and above the skirting board. The word is borrowed from Italian meaning "dice" or "cube", and refers to "die", an architectural term for the middle section of a pedestal or plinth.
Decorative treatment
This area is given a decorative treatment different from that for the upper part of the wall; for example panelling, wainscoting or lincrusta. The purpose of the dado treatment to a wall is both aesthetic and functional. Historically, the panelling below the dado rail was installed to cover the lower part of the wall which was subject to stains associated with rising damp; additionally it provided protection from furniture and passing traffic. The dado rail itself is sometimes referred to as a chair rail, though this can be misleading since its function is principally aesthetic and not to protect the wall from chair backs.
Derivation
The name was first used in English as an architectural term for the part of a pedestal between the base and the cornice. As with many other architectural terms, the word is Italian in origin. The dado in a pedestal is roughly cubical in shape, and the word in Italian means "dice" or "cube" (ultimately Latin datum, meaning "something given", hence also a die for casting lots). By extension, the dado becomes the lower part of a wall when the pedestal is treated as being continuous along the wall, with the cornice becoming the dado rail.
Gallery
See also
Dado (joinery)
References
External links
Columns and entablature
Types of wall | Dado (architecture) | [
"Technology",
"Engineering"
] | 332 | [
"Structural system",
"Types of wall",
"Structural engineering",
"Columns and entablature"
] |
1,305,947 | https://en.wikipedia.org/wiki/3D%20printing | 3D printing, or additive manufacturing, is the construction of a three-dimensional object from a CAD model or a digital 3D model. It can be done in a variety of processes in which material is deposited, joined or solidified under computer control, with the material being added together (such as plastics, liquids or powder grains being fused), typically layer by layer.
In the 1980s, 3D printing techniques were considered suitable only for the production of functional or aesthetic prototypes, and a more appropriate term for it at the time was rapid prototyping. Since then, the precision, repeatability, and material range of 3D printing have increased to the point that some 3D printing processes are considered viable as an industrial-production technology; in this context, the term additive manufacturing can be used synonymously with 3D printing. One of the key advantages of 3D printing is the ability to produce very complex shapes or geometries that would be otherwise infeasible to construct by hand, including hollow parts or parts with internal truss structures to reduce weight while creating less material waste. Fused deposition modeling (FDM), which uses a continuous filament of a thermoplastic material, is the most common 3D printing process in use.
Terminology
The umbrella term additive manufacturing (AM) gained popularity in the 2000s, inspired by the theme of material being added together (in any of various ways). In contrast, the term subtractive manufacturing appeared as a retronym for the large family of machining processes with material removal as their common process. The term 3D printing still referred only to the polymer technologies in most minds, and the term AM was more likely to be used in metalworking and end-use part production contexts than among polymer, inkjet, or stereolithography enthusiasts.
By the early 2010s, the terms 3D printing and additive manufacturing evolved senses in which they were alternate umbrella terms for additive technologies, one being used in popular language by consumer-maker communities and the media, and the other used more formally by industrial end-use part producers, machine manufacturers, and global technical standards organizations. Until recently, the term 3D printing has been associated with machines low in price or capability. 3D printing and additive manufacturing reflect that the technologies share the theme of material addition or joining throughout a 3D work envelope under automated control. Peter Zelinski, the editor-in-chief of Additive Manufacturing magazine, pointed out in 2017 that the terms are still often synonymous in casual usage, but some manufacturing industry experts are trying to make a distinction whereby additive manufacturing comprises 3D printing plus other technologies or other aspects of a manufacturing process.
Other terms that have been used as synonyms or hypernyms have included desktop manufacturing, rapid manufacturing (as the logical production-level successor to rapid prototyping), and on-demand manufacturing (which echoes on-demand printing in the 2D sense of printing). The fact that the application of the adjectives rapid and on-demand to the noun manufacturing was novel in the 2000s reveals the long-prevailing mental model of the previous industrial era during which almost all production manufacturing had involved long lead times for laborious tooling development. Today, the term subtractive has not replaced the term machining, instead complementing it when a term that covers any removal method is needed. Agile tooling is the use of modular means to design tooling that is produced by additive manufacturing or 3D printing methods to enable quick prototyping and responses to tooling and fixture needs. Agile tooling uses a cost-effective and high-quality method to quickly respond to customer and market needs, and it can be used in hydroforming, stamping, injection molding and other manufacturing processes.
History
1940s and 1950s
The general concept of and procedure to be used in 3D-printing was first described by Murray Leinster in his 1945 short story "Things Pass By": "But this constructor is both efficient and flexible. I feed magnetronic plastics — the stuff they make houses and ships of nowadays — into this moving arm. It makes drawings in the air following drawings it scans with photo-cells. But plastic comes out of the end of the drawing arm and hardens as it comes ... following drawings only"
It was also described by Raymond F. Jones in his story, "Tools of the Trade", published in the November 1950 issue of Astounding Science Fiction magazine. He referred to it as a "molecular spray" in that story.
1970s
In 1971, Johannes F Gottwald patented the Liquid Metal Recorder, U.S. patent 3596285A, a continuous inkjet metal material device to form a removable metal fabrication on a reusable surface for immediate use or salvaged for printing again by remelting. This appears to be the first patent describing 3D printing with rapid prototyping and controlled on-demand manufacturing of patterns.
The patent states:
In 1974, David E. H. Jones laid out the concept of 3D printing in his regular column Ariadne in the journal New Scientist.
1980s
Early additive manufacturing equipment and materials were developed in the 1980s.
In April 1980, Hideo Kodama of Nagoya Municipal Industrial Research Institute invented two additive methods for fabricating three-dimensional plastic models with photo-hardening thermoset polymer, where the UV exposure area is controlled by a mask pattern or a scanning fiber transmitter.
He filed a patent for this XYZ plotter, which was published on 10 November 1981. (JP S56-144478).
His research results as journal papers were published in April and November 1981.
However, there was no reaction to the series of his publications. His device was not highly evaluated in the laboratory and his boss did not show any interest. His research budget was just 60,000 yen or $545 a year. Acquiring the patent rights for the XYZ plotter was abandoned, and the project was terminated.
A US 4323756 patent, method of fabricating articles by sequential deposition, granted on 6 April 1982 to Raytheon Technologies Corp describes using hundreds or thousands of "layers" of powdered metal and a laser energy source and represents an early reference to forming "layers" and the fabrication of articles on a substrate.
On 2 July 1984, American entrepreneur Bill Masters filed a patent for his computer automated manufacturing process and system (US 4665492). This filing is on record at the USPTO as the first 3D printing patent in history; it was the first of three patents belonging to Masters that laid the foundation for the 3D printing systems used today.
On 16 July 1984, Alain Le Méhauté, Olivier de Witte, and Jean Claude André filed their patent for the stereolithography process. The application of the French inventors was abandoned by the French General Electric Company (now Alcatel-Alsthom) and CILAS (The Laser Consortium). The claimed reason was "for lack of business perspective".
In 1983, Robert Howard started R.H. Research (renamed Howtek, Inc. in February 1984) to develop a color inkjet 2D printer, the Pixelmaster, commercialized in 1986, using thermoplastic (hot-melt) ink. A team was put together: six members from Exxon Office Systems, Danbury Systems Division, an inkjet printer startup, and some members of the Howtek, Inc. group who became popular figures in the 3D printing industry. One Howtek member, Richard Helinski (patent US5136515A, "Method and Means for Constructing Three-Dimensional Articles by Particle Deposition", filed 11/07/1989, granted 8/04/1992), formed a New Hampshire company, C.A.D-Cast, Inc., whose name was later changed to Visual Impact Corporation (VIC) on 8/22/1991. A prototype of the VIC 3D printer for this company is available with a video presentation showing a 3D model printed with a single-nozzle inkjet. Another employee, Herbert Menhennett, formed a New Hampshire company, HM Research, in 1991 and introduced the Howtek, Inc. inkjet technology and thermoplastic materials to Royden Sanders of SDI and Bill Masters of Ballistic Particle Manufacturing (BPM), where he worked for a number of years. Both BPM 3D printers and SPI 3D printers use Howtek, Inc.-style inkjets and Howtek, Inc.-style materials. Royden Sanders licensed the Helinski patent prior to manufacturing the Modelmaker 6 Pro at Sanders Prototype, Inc. (SPI) in 1993. James K. McMahon, who was hired by Howtek, Inc. to help develop the inkjet, later worked at Sanders Prototype and now operates Layer Grown Model Technology, a 3D service provider specializing in Howtek single-nozzle inkjet and SDI printer support. James K. McMahon worked with Steven Zoltan, the 1972 drop-on-demand inkjet inventor, at Exxon, and has a 1978 patent that expanded the understanding of single-nozzle design inkjets (alpha jets) and helped perfect the Howtek, Inc. hot-melt inkjets. This Howtek hot-melt thermoplastic technology is popular with metal investment casting, especially in the 3D printing jewelry industry. Sanders' (SPI's) first Modelmaker 6 Pro customer was Hitchner Corporation's Metal Casting Technology, Inc. in Milford, NH, a mile from the SPI facility, which cast golf clubs and auto engine parts in late 1993-1995.
On 8 August 1984, Chuck Hull, later of 3D Systems Corporation, filed his own patent (US4575330, initially assigned to UVP, Inc.) for a stereolithography fabrication system, in which individual laminae or layers are added by curing photopolymers with impinging radiation, particle bombardment, chemical reaction or ultraviolet laser light. Hull defined the process as a "system for generating three-dimensional objects by creating a cross-sectional pattern of the object to be formed". Hull's contribution was the STL (stereolithography) file format and the digital slicing and infill strategies common to many processes today. In 1986, Charles "Chuck" Hull was granted a patent for this system, and his company, 3D Systems Corporation, was formed; it released the first commercial 3D printer, the SLA-1, in 1987 or 1988.
The technology used by most 3D printers to date—especially hobbyist and consumer-oriented models—is fused deposition modeling, a special application of plastic extrusion, developed in 1988 by S. Scott Crump and commercialized by his company Stratasys, which marketed its first FDM machine in 1992.
Owning a 3D printer in the 1980s cost upwards of $300,000 ($650,000 in 2016 dollars).
1990s
AM processes for metal sintering or melting (such as selective laser sintering, direct metal laser sintering, and selective laser melting) usually went by their own individual names in the 1980s and 1990s. At the time, all metalworking was done by processes that are now called non-additive (casting, fabrication, stamping, and machining); although plenty of automation was applied to those technologies (such as by robot welding and CNC), the idea of a tool or head moving through a 3D work envelope transforming a mass of raw material into a desired shape with a toolpath was associated in metalworking only with processes that removed metal (rather than adding it), such as CNC milling, CNC EDM, and many others. However, the automated techniques that added metal, which would later be called additive manufacturing, were beginning to challenge that assumption. By the mid-1990s, new techniques for material deposition were developed at Stanford and Carnegie Mellon University, including microcasting and sprayed materials. Sacrificial and support materials had also become more common, enabling new object geometries.
The term 3D printing originally referred to a powder bed process employing standard and custom inkjet print heads, developed at MIT by Emanuel Sachs in 1993 and commercialized by Soligen Technologies, Extrude Hone Corporation, and Z Corporation.
The year 1993 also saw the start of an inkjet 3D printer company, initially named Sanders Prototype, Inc and later named Solidscape, which introduced a high-precision polymer jet fabrication system with soluble support structures (categorized as a "dot-on-dot" technique).
In 1995 the Fraunhofer Society developed the selective laser melting process.
2000s
In the early 2000s, 3D printers were still largely confined to the manufacturing and research industries, as the technology was still relatively young and too expensive for most consumers. The 2000s saw the first larger-scale industrial use of the technology, most often in the architecture and medical industries, though it was typically used for low-accuracy modeling and testing rather than for the production of common manufactured goods or heavy prototyping.
In 2005, users began to design and distribute plans for 3D printers that could print around 70% of their own parts. The original plans had been designed by Adrian Bowyer at the University of Bath in 2004 under the project name RepRap (Replicating Rapid-prototyper).
Similarly, in 2006, the Fab@Home project was started by Evan Malone and Hod Lipson with the aim of designing a low-cost, open-source fabrication system that users could develop on their own and post feedback on, making the project highly collaborative.
Much of the 3D printing software available to the public at the time was open source and, as such, was quickly distributed and improved upon by many individual users. In 2009, the fused deposition modeling (FDM) printing process patents expired. This opened the door to a new wave of startup companies, many founded by major contributors to these open-source initiatives, aiming to develop commercial FDM 3D printers that were more accessible to the general public.
2010s
As the various additive processes matured, it became clear that soon metal removal would no longer be the only metalworking process done through a tool or head moving through a 3D work envelope, transforming a mass of raw material into a desired shape layer by layer. The 2010s were the first decade in which metal end-use parts such as engine brackets and large nuts would be grown (either before or instead of machining) in job production rather than obligately being machined from bar stock or plate. It is still the case that casting, fabrication, stamping, and machining are more prevalent than additive manufacturing in metalworking, but AM is now beginning to make significant inroads, and with the advantages of design for additive manufacturing, it is clear to engineers that much more is to come.
One place where AM is making significant inroads is the aviation industry. With nearly 3.8 billion air travelers in 2016, demand for fuel-efficient and easily produced jet engines has never been higher. For large OEMs (original equipment manufacturers) like Pratt & Whitney (PW) and General Electric (GE), this means looking toward AM as a way to reduce cost, reduce the number of nonconforming parts, reduce engine weight to increase fuel efficiency, and find new, highly complex shapes that would not be feasible with conventional manufacturing methods. One example of AM integration with aerospace came in 2016, when Airbus took delivery of the first of GE's LEAP engines. The engine integrates 3D-printed fuel nozzles, reducing the part count from 20 to 1, cutting weight by 25%, and shortening assembly times. A fuel nozzle is an ideal entry point for additive manufacturing in a jet engine, since it allows for optimized design of the complex internals and is a low-stress, non-rotating part. Similarly, in 2015, PW delivered its first AM parts in the PurePower PW1500G to Bombardier. Sticking to low-stress, non-rotating parts, PW selected the compressor stators and synch ring brackets to roll out this new manufacturing technology for the first time. While AM still plays a small role in the total number of parts in the jet engine manufacturing process, the return on investment can already be seen in the reduction in parts, the rapid production capabilities, and the "optimized design in terms of performance and cost".
As technology matured, several authors began to speculate that 3D printing could aid in sustainable development in the developing world.
In 2012, Filabot developed a system for closing the loop with plastic, allowing any FDM or FFF 3D printer to print with a wider range of plastics.
In 2014, Benjamin S. Cook and Manos M. Tentzeris demonstrated the first multi-material, vertically integrated printed electronics additive manufacturing platform (VIPRE) which enabled 3D printing of functional electronics operating up to 40 GHz.
As the price of printers started to drop, people interested in the technology had more access and freedom to make what they wanted. As of 2014, however, the price of commercial printers was still high, typically over $2,000.
The term "3D printing" originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads layer by layer. More recently, the popular vernacular has started using the term to encompass a wider variety of additive-manufacturing techniques such as electron-beam additive manufacturing and selective laser melting. The United States and global technical standards use the official term additive manufacturing for this broader sense.
The most commonly used 3D printing process (46%) is a material extrusion technique called fused deposition modeling, or FDM. While FDM technology was invented after the other two most popular technologies, stereolithography (SLA) and selective laser sintering (SLS), FDM is typically the least expensive of the three by a large margin, which contributes to the popularity of the process.
2020s
As of 2020, 3D printers have reached the level of quality and price that allows most people to enter the world of 3D printing. In 2020, decent-quality printers could be found for less than US$200 for entry-level machines. These more affordable printers are usually fused deposition modeling (FDM) printers.
In November 2021 a British patient named Steve Verze received the world's first fully 3D-printed prosthetic eye from the Moorfields Eye Hospital in London.
In April 2024, the world's largest 3D printer, the Factory of the Future 1.0, was revealed at the University of Maine. It is able to make objects 96 feet (29 m) long.
In 2024, researchers used machine learning to improve the construction of synthetic bone and set a record for shock absorption.
In July 2024, researchers published a paper in Advanced Materials Technologies describing the development of artificial blood vessels using 3D-printing technology, which are as strong and durable as natural blood vessels. The process involved using a rotating spindle integrated into a 3D printer to create grafts from a water-based gel, which were then coated in biodegradable polyester molecules.
Benefits of 3D printing
Additive manufacturing or 3D printing has rapidly gained importance in the field of engineering due to its many benefits. The vision of 3D printing is design freedom, individualization, decentralization and executing processes that were previously impossible through alternative methods. Some of these benefits include enabling faster prototyping, reducing manufacturing costs, increasing product customization, and improving product quality.
Furthermore, the capabilities of 3D printing have extended beyond traditional manufacturing, like lightweight construction, or repair and maintenance with applications in prosthetics, bioprinting, food industry, rocket building, design and art and renewable energy systems. 3D printing technology can be used to produce battery energy storage systems, which are essential for sustainable energy generation and distribution.
Another benefit of 3D printing is the technology's ability to produce complex geometries with high precision and accuracy. This is particularly relevant in the field of microwave engineering, where 3D printing can be used to produce components with unique properties that are difficult to achieve using traditional manufacturing methods.
Additive Manufacturing processes generate minimal waste by adding material only where needed, unlike traditional methods that cut away excess material. This reduces both material costs and environmental impact. This reduction in waste also lowers energy consumption for material production and disposal, contributing to a smaller carbon footprint.
General principles
Modeling
3D printable models may be created with a computer-aided design (CAD) package, via a 3D scanner, or by a plain digital camera and photogrammetry software. 3D printed models created with CAD result in relatively fewer errors than other methods. Errors in 3D printable models can be identified and corrected before printing. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D scanning is a process of collecting digital data on the shape and appearance of a real object, and creating a digital model based on it.
CAD models can be saved in the stereolithography file format (STL), a de facto CAD file format for additive manufacturing that stores data based on triangulations of the surface of CAD models. STL is not tailored for additive manufacturing because it generates large file sizes of topology-optimized parts and lattice structures due to the large number of surfaces involved. A newer CAD file format, the additive manufacturing file format (AMF), was introduced in 2011 to solve this problem. It stores information using curved triangulations.
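To make the triangulated representation concrete, the short sketch below (a minimal illustration in Python; the file name, solid name, and coordinates are arbitrary choices, not taken from any particular CAD tool) writes a one-facet ASCII STL file by hand:

```python
# Minimal sketch: write a single-facet ASCII STL file by hand.
# The solid name, file name, and coordinates are illustrative only.
facet = """solid example
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid example
"""

with open("example.stl", "w") as f:
    f.write(facet)
```

A real part is stored as thousands or millions of such facets, which is why finely triangulated lattice structures inflate STL file sizes.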
Printing
Before printing a 3D model from an STL file, it must first be examined for errors. Most CAD applications produce errors in output STL files of the following types:
holes
face normals
self-intersections
noise shells
manifold errors
overhang issues
A step in the STL generation known as "repair" fixes such problems in the original model. STLs produced from a model obtained through 3D scanning often have more of these errors, as 3D scanning is often achieved by point-to-point acquisition/mapping, and 3D reconstruction often includes errors.
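As a rough illustration of what automated repair can look like, the following sketch uses the open-source Python library trimesh (one tool among many; the file names are placeholders, and self-intersections and overhangs are not handled here) to detect and fix a few of the error types listed above:

```python
import trimesh

# Load an STL that may contain errors (placeholder file name;
# assumes the file contains a single mesh, not a multi-body scene).
mesh = trimesh.load("scanned_model.stl")

# Detect some of the common problems listed above.
print("watertight (no holes):", mesh.is_watertight)
print("consistent winding (face normals):", mesh.is_winding_consistent)
print("disconnected shells:", len(mesh.split(only_watertight=False)))

# Attempt simple repairs: unify face normals and fill small holes.
trimesh.repair.fix_normals(mesh)
trimesh.repair.fill_holes(mesh)

# Keep only the largest shell, discarding small "noise shells".
shells = mesh.split(only_watertight=False)
largest = max(shells, key=lambda s: s.area)
largest.export("repaired_model.stl")
```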
Once repaired, the STL file is processed by a piece of software called a "slicer", which converts the model into a series of thin layers and produces a G-code file containing instructions tailored to a specific type of 3D printer (such as an FDM printer). This G-code file can then be printed with 3D printing client software, which loads the G-code and uses it to instruct the 3D printer during the printing process.
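To give a flavor of a slicer's output, here is a minimal sketch that emits Marlin-style G-code for the square perimeter of a few layers; the dimensions, temperature, feed rates, and extrusion factor are illustrative assumptions, not values from any specific slicer:

```python
# Minimal sketch of what a slicer emits: Marlin-style G-code for the
# square perimeter of each layer. All numeric values are illustrative.
LAYER_HEIGHT = 0.2   # mm
SIDE = 20.0          # mm, square side length
E_PER_MM = 0.033     # mm of filament extruded per mm of travel (assumed)

lines = ["G28 ; home all axes", "M109 S200 ; wait for hotend to reach 200 C"]
e = 0.0  # cumulative extrusion
for layer in range(1, 6):
    z = layer * LAYER_HEIGHT
    lines.append(f"G1 Z{z:.2f} F600 ; move up to layer {layer}")
    corners = [(0, 0), (SIDE, 0), (SIDE, SIDE), (0, SIDE), (0, 0)]
    lines.append(f"G0 X{corners[0][0]:.1f} Y{corners[0][1]:.1f} F3000 ; travel")
    for x, y in corners[1:]:
        e += SIDE * E_PER_MM  # each edge of the square is SIDE mm long
        lines.append(f"G1 X{x:.1f} Y{y:.1f} E{e:.3f} F1200 ; extrude edge")

print("\n".join(lines))
```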
Printer resolution describes layer thickness and X–Y resolution in dots per inch (dpi) or micrometers (μm). Typical layer thickness is around 100 μm (250 DPI), although some machines can print layers as thin as 16 μm (1,600 DPI). X–Y resolution is comparable to that of laser printers. The particles (3D dots) are around 50 to 100 μm in diameter. For a given printer resolution, specifying a suitably matched mesh resolution and chord length generates an optimal STL output file for a given model input file; specifying higher resolution results in larger files without any increase in print quality.
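Since an inch is 25,400 μm, converting between the two resolution units above is simple arithmetic, as the following sketch shows:

```python
# Convert a feature size in micrometers to dots per inch.
MICRONS_PER_INCH = 25_400

def microns_to_dpi(microns: float) -> float:
    return MICRONS_PER_INCH / microns

print(microns_to_dpi(100))  # 254.0  -> a 100 um layer is roughly 250 DPI
print(microns_to_dpi(16))   # 1587.5 -> a 16 um layer is roughly 1,600 DPI
```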
Construction of a model with traditional methods can take anywhere from several hours to several days, depending on the method used and the size and complexity of the model. Additive systems can typically reduce this time to a few hours, although it varies widely depending on the type of machine used and the size and number of models being produced simultaneously.
Finishing
Though the printer-produced resolution and surface finish are sufficient for some applications, post-processing and finishing methods allow for benefits such as greater dimensional accuracy, smoother surfaces, and other modifications such as coloration.
The surface finish of a 3D-printed part can be improved using subtractive methods such as sanding and bead blasting. When smoothing parts that require dimensional accuracy, it is important to take into account the volume of the material being removed.
Some printable polymers, such as acrylonitrile butadiene styrene (ABS), allow the surface finish to be smoothed and improved using chemical vapor processes based on acetone or similar solvents.
Some additive manufacturing techniques can benefit from annealing as a post-processing step. Annealing a 3D-printed part allows for better internal layer bonding due to recrystallization of the part. It can improve mechanical properties such as fracture toughness, flexural strength, impact resistance, and heat resistance. Annealing a component may not be suitable for applications where dimensional accuracy is required, as it can introduce warpage or shrinkage due to heating and cooling.
Additive or subtractive hybrid manufacturing (ASHM) is a method that involves producing a 3D printed part and using machining (subtractive manufacturing) to remove material. Machining operations can be completed after each layer, or after the entire 3D print has been completed depending on the application requirements. These hybrid methods allow for 3D-printed parts to achieve better surface finishes and dimensional accuracy.
The layered structure of traditional additive manufacturing processes leads to a stair-stepping effect on part-surfaces that are curved or tilted with respect to the building platform. The effect strongly depends on the layer height used, as well as the orientation of a part surface inside the building process. This effect can be minimized using "variable layer heights" or "adaptive layer heights". These methods decrease the layer height in places where higher quality is needed.
Painting a 3D-printed part offers a range of finishes and appearances that may not be achievable through most 3D printing techniques. The process typically involves several steps, such as surface preparation, priming, and painting. These steps help prepare the surface of the part and ensure the paint adheres properly.
Some additive manufacturing techniques are capable of using multiple materials simultaneously. These techniques are able to print in multiple colors and color combinations simultaneously and can produce parts that may not necessarily require painting.
Some printing techniques require internal supports to be built to support overhanging features during construction. After a print is completed, these supports must be mechanically removed or, if a water-soluble support material such as PVA was used, dissolved.
Some commercial metal 3D printers involve cutting the metal component off the metal substrate after deposition. A new process for GMAW 3D printing allows for substrate surface modifications to remove aluminium or steel parts.
Materials
Traditionally, 3D printing focused on polymers for printing, due to the ease of manufacturing and handling polymeric materials. However, the method has rapidly evolved to print not only various polymers but also metals and ceramics, making 3D printing a versatile option for manufacturing. Layer-by-layer fabrication of three-dimensional physical models is a modern concept that "stems from the ever-growing CAD industry, more specifically the solid modeling side of CAD. Before solid modeling was introduced in the late 1980s, three-dimensional models were created with wire frames and surfaces", but in all cases the layers of materials are controlled by the printer and the material properties. The three-dimensional material layer is controlled by the deposition rate as set by the printer operator and stored in a computer file. The earliest patented printing material was a hot-melt ink for printing patterns using a heated metal alloy.
Charles Hull filed the first patent on August 8, 1984, to use a UV-cured acrylic resin with a UV-masked light source at UVP Corp to build a simple model. The SLA-1 was the first SL product announced by 3D Systems, at the Autofact Exposition, Detroit, in November 1987. The SLA-1 Beta shipped in January 1988 to Baxter Healthcare, Pratt & Whitney, General Motors, and AMP. The first production SLA-1 shipped to Precision Castparts in April 1988. The UV resin material was quickly superseded by an epoxy-based resin. In both cases, SLA-1 models needed UV oven curing after being rinsed in a solvent cleaner to remove uncured boundary resin. A post-cure apparatus (PCA) was sold with all systems. The early resin printers required a blade to move fresh resin over the model on each layer. The layer thickness was 0.006 inches, and the HeCd laser of the SLA-1 was 12 milliwatts, swept across the surface at 30 inches per second. UVP was acquired by 3D Systems in January 1990.
A review of the history shows that a number of materials (resins, plastic powder, plastic filament, and hot-melt plastic ink) were used in the 1980s for patents in the rapid prototyping field. Masked-lamp UV-cured resin was also introduced by Cubital's Itzchak Pomerantz in the Solider 5600, Carl Deckard's (DTM) laser-sintered thermoplastic powders, and adhesive-coated, laser-cut paper stacked to form objects (LOM) by Michael Feygin before 3D Systems made its first announcement. Scott Crump was also working with extruded "melted" plastic filament modeling (FDM), and drop deposition had been patented by William E. Masters a week after Hull's patent in 1984, but Masters had to discover the thermoplastic inkjets, introduced in the Visual Impact Corporation 3D printer in 1992 using inkjets from Howtek, Inc., before he formed BPM to bring out his own 3D printer product in 1994.
Multi-material 3D printing
Efforts to achieve multi-material 3D printing range from enhanced FDM-like processes like VoxelJet to novel voxel-based printing technologies like layered assembly.
A drawback of many existing 3D printing technologies is that they only allow one material to be printed at a time, limiting many potential applications that require the integration of different materials in the same object. Multi-material 3D printing solves this problem by allowing objects of complex and heterogeneous arrangements of materials to be manufactured using a single printer. Here, a material must be specified for each voxel (or 3D printing pixel element) inside the final object volume.
The process can be fraught with complications, however, due to the isolated and monolithic algorithms involved. Some commercial devices have sought to solve these issues, such as by building a Spec2Fab translator, but progress is still very limited. Nonetheless, in the medical industry, a concept of 3D-printed pills and vaccines has been presented.
With this new concept, multiple medications can be combined, which is expected to decrease many risks. With more and more applications of multi-material 3D printing, the costs of daily life and of high-technology development are expected to fall.
Metallographic materials for 3D printing are also being researched. By classifying each material, CIMP-3D can systematically perform 3D printing with multiple materials.
4D printing
Using 3D printing and multi-material structures in additive manufacturing has allowed for the design and creation of what is called 4D printing. 4D printing is an additive manufacturing process in which the printed object changes shape with time, temperature, or some other type of stimulation. 4D printing allows for the creation of dynamic structures with adjustable shapes, properties or functionality. The smart/stimulus-responsive materials that are created using 4D printing can be activated to create calculated responses such as self-assembly, self-repair, multi-functionality, reconfiguration and shape-shifting. This allows for customized printing of shape-changing and shape-memory materials.
4D printing has the potential to find new applications and uses for materials (plastics, composites, metals, etc.) and has the potential to create new alloys and composites that were not viable before. The versatility of this technology and materials can lead to advances in multiple fields of industry, including space, commercial and medical fields. The repeatability, precision, and material range for 4D printing must increase to allow the process to become more practical throughout these industries.
To become a viable industrial production option, there are a few challenges that 4D printing must overcome. The microstructures of these printed smart materials must be close to or better than those of parts obtained through traditional machining processes. New and customizable materials need to be developed that can consistently respond to varying external stimuli and change to their desired shape. There is also a need to design new software for the various 4D printing techniques. That software will need to take into consideration the base smart material, the printing technique, and the structural and geometric requirements of the design.
Processes and printers
ISO/ASTM52900-15 defines seven categories of additive manufacturing (AM) processes within its meaning. They are:
Vat photopolymerization
Material jetting
Binder jetting
Powder bed fusion
Material extrusion
Directed energy deposition
Sheet lamination
The main differences between processes are in the way layers are deposited to create parts and in the materials that are used. Each method has its own advantages and drawbacks, which is why some companies offer a choice of powder and polymer for the material used to build the object. Others sometimes use standard, off-the-shelf business paper as the build material to produce a durable prototype. The main considerations in choosing a machine are generally speed, costs of the 3D printer, of the printed prototype, choice and cost of the materials, and color capabilities. Printers that work directly with metals are generally expensive. However, less expensive printers can be used to make a mold, which is then used to make metal parts.
Material jetting
The first process in which three-dimensional material is deposited to form an object was material jetting or, as it was originally called, particle deposition. Particle deposition by inkjet started with continuous inkjet technology (CIT) in the 1950s and later drop-on-demand inkjet technology in the 1970s, using hot-melt inks. Wax inks were the first three-dimensional materials jetted, and low-temperature alloy metal was later jetted with CIT. Wax and thermoplastic hot melts were jetted next by DOD. Objects were very small and started with text characters and numerals for signage; an object must have form and be able to be handled. Wax characters tumbling off paper documents inspired a liquid metal recorder patent to make metal characters for signage in 1971. Thermoplastic color inks (CMYK) were printed with layers of each color to form the first digitally formed layered objects in 1984. The idea of investment casting with solid-ink jetted images or patterns in 1984 led to the first patent to form articles from particle deposition in 1989, issued in 1992.
Material extrusion
Some methods melt or soften the material to produce the layers. In fused filament fabrication, also known as fused deposition modeling (FDM), the model or part is produced by extruding small beads or streams of material that harden immediately to form layers. A filament of thermoplastic, metal wire, or other material is fed into an extrusion nozzle head (3D printer extruder), which heats the material and turns the flow on and off. FDM is somewhat restricted in the variation of shapes that may be fabricated. Another technique fuses parts of the layer and then moves upward in the working area, adding another layer of granules and repeating the process until the piece has built up. This process uses the unfused media to support overhangs and thin walls in the part being produced, which reduces the need for temporary auxiliary supports for the piece. Recently, FFF/FDM has expanded to 3D printing directly from pellets to avoid the conversion to filament. This process is called fused particle fabrication (FPF) (or fused granular fabrication, FGF) and has the potential to use more recycled materials.
Powder bed fusion
Powder bed fusion techniques, or PBF, include several processes such as DMLS, SLS, SLM, MJF and EBM. Powder bed fusion processes can be used with an array of materials, and their flexibility allows for geometrically complex structures, making PBF a good choice for many 3D printing projects. These techniques include selective laser sintering (with both metals and polymers) and direct metal laser sintering. Selective laser melting does not use sintering for the fusion of powder granules but will completely melt the powder using a high-energy laser to create fully dense materials in a layer-wise method that has mechanical properties similar to those of conventionally manufactured metals. Electron beam melting is a similar type of additive manufacturing technology for metal parts (e.g. titanium alloys). EBM manufactures parts by melting metal powder layer by layer with an electron beam in a high vacuum. Another method consists of an inkjet 3D printing system, which creates the model one layer at a time by spreading a layer of powder (plaster or resins) and printing a binder in the cross-section of the part using an inkjet-like process. With laminated object manufacturing, thin layers are cut to shape and joined. In addition to the previously mentioned methods, HP has developed Multi Jet Fusion (MJF), which is a powder-based technique, though no lasers are involved. An inkjet array applies fusing and detailing agents, which are then combined by heating to create a solid layer.
Binder jetting
The binder jetting 3D printing technique involves the deposition of a binding adhesive agent onto layers of material, usually powdered, and then this "green" state part may be cured and even sintered. The materials can be ceramic-based, metal or plastic. This method is also known as inkjet 3D printing. To produce a part, the printer builds the model using a head that moves over the platform base to spread or deposit alternating layers of powder (plaster and resins) and binder. Most modern binder jet printers also cure each layer of binder. These steps are repeated until all layers have been printed. This green part is usually cured in an oven to off-gas most of the binder before being sintered in a kiln with a specific time-temperature curve for the given material(s).
This technology allows the printing of full-color prototypes, overhangs, and elastomer parts. The strength of bonded powder prints can be enhanced by infiltrating the spaces between the necked or sintered matrix of powder with other compatible materials, such as wax, thermoset polymer, or even bronze, depending on the powder material.
Stereolithography
Other methods cure liquid materials using different sophisticated technologies, such as stereolithography. Photopolymerization is primarily used in stereolithography to produce a solid part from a liquid. Inkjet printer systems like the Objet PolyJet system spray photopolymer materials onto a build tray in ultra-thin layers (between 16 and 30 μm) until the part is completed. Each photopolymer layer is cured with UV light after it is jetted, producing fully cured models that can be handled and used immediately, without post-curing. Ultra-small features can be made with the 3D micro-fabrication technique used in multiphoton photopolymerisation. Due to the nonlinear nature of photoexcitation, the gel is cured to a solid only in the places where the laser was focused, while the remaining gel is then washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures with moving and interlocked parts. Yet another approach uses a synthetic resin that is solidified using LEDs.
In mask-image-projection-based stereolithography, a 3D digital model is sliced by a set of horizontal planes, and each slice is converted into a two-dimensional mask image. The mask image is then projected onto a photocurable liquid resin surface, and light is projected onto the resin to cure it in the shape of the layer. Continuous liquid interface production begins with a pool of liquid photopolymer resin. Part of the pool bottom is transparent to ultraviolet light (the "window"), which causes the resin to solidify. The object rises slowly enough to allow the resin to flow under and maintain contact with the bottom of the object.
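As a toy illustration of the slice-to-mask step (not any vendor's implementation), the sketch below rasterizes one horizontal slice of a sphere into a binary mask with NumPy; a projector would display one such mask per layer:

```python
import numpy as np

# Rasterize the horizontal slice of a sphere (radius R, centered at the
# origin) at height z into a binary mask. All values are illustrative.
R, z = 10.0, 4.0          # mm
PIXEL = 0.1               # mm per mask pixel
n = int(2 * R / PIXEL)    # mask is n x n pixels

# Pixel-center coordinates across the build area.
coords = (np.arange(n) + 0.5) * PIXEL - R
x, y = np.meshgrid(coords, coords)

# A point (x, y, z) is inside the sphere when x^2 + y^2 <= R^2 - z^2.
mask = (x**2 + y**2) <= (R**2 - z**2)
print(mask.shape, mask.sum(), "pixels exposed in this layer")
```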
Computed axial lithography
Computed axial lithography is a method for 3D printing based on computerised tomography scans to create prints in photo-curable resin. It was developed through a collaboration between the University of California, Berkeley, and Lawrence Livermore National Laboratory. Unlike other methods of 3D printing, it does not build models by depositing layers of material, as fused deposition modelling and stereolithography do; instead, it creates objects using a series of 2D images projected onto a cylinder of resin. It is notable for its ability to build an object much more quickly than other resin-based methods and for its ability to embed objects within the prints.
Liquid additive manufacturing
Liquid additive manufacturing (LAM) is a 3D printing technique that deposits a liquid or highly viscous material (e.g. liquid silicone rubber) onto a build surface to create an object, which is then vulcanised using heat to harden it. The process was originally created by Adrian Bowyer and was then built upon by German RepRap.
A technique called programmable tooling uses 3D printing to create a temporary mold, which is then filled via a conventional injection molding process and then immediately dissolved.
Lamination
In some printers, paper can be used as the build material, resulting in a lower cost to print. During the 1990s, some companies marketed printers that cut cross-sections out of special adhesive-coated paper using a carbon dioxide laser and then laminated them together.
In 2005 Mcor Technologies Ltd developed a different process using ordinary sheets of office paper, a tungsten carbide blade to cut the shape, and selective deposition of adhesive and pressure to bond the prototype.
Directed-energy deposition (DED)
Powder-fed directed-energy deposition
In powder-fed directed-energy deposition (also known as laser metal deposition), a high-power laser is used to melt metal powder supplied to the focus of the laser beam. The laser beam typically travels through the center of the deposition head and is focused on a small spot by one or more lenses. The build occurs on an X-Y table which is driven by a tool path created from a digital model to fabricate an object layer by layer. The deposition head is moved up vertically as each layer is completed. Some systems even make use of 5-axis or 6-axis systems (i.e. articulated arms) capable of delivering material on the substrate (a printing bed, or a pre-existing part) with few to no spatial access restrictions. Metal powder is delivered and distributed around the circumference of the head or can be split by an internal manifold and delivered through nozzles arranged in various configurations around the deposition head. A hermetically sealed chamber filled with inert gas or a local inert shroud gas (sometimes both combined) is often used to shield the melt pool from atmospheric oxygen, to limit oxidation and to better control the material properties. The powder-fed directed-energy process is similar to selective laser sintering, but the metal powder is projected only where the material is being added to the part at that moment. The laser beam is used to heat up and create a "melt pool" on the substrate, in which the new powder is injected quasi-simultaneously. The process supports a wide range of materials including titanium, stainless steel, aluminium, tungsten, and other specialty materials as well as composites and functionally graded materials. The process can not only fully build new metal parts but can also add material to existing parts for example for coatings, repair, and hybrid manufacturing applications. Laser engineered net shaping (LENS), which was developed by Sandia National Labs, is one example of the powder-fed directed-energy deposition process for 3D printing or restoring metal parts.
Metal wire processes
Laser-based wire-feed systems, such as laser metal deposition-wire (LMD-w), feed the wire through a nozzle that is melted by a laser using inert gas shielding in either an open environment (gas surrounding the laser) or in a sealed chamber. Electron beam freeform fabrication uses an electron beam heat source inside a vacuum chamber.
It is also possible to use conventional gas metal arc welding attached to a 3D stage to 3-D print metals such as steel, bronze and aluminium. Low-cost open source RepRap-style 3-D printers have been outfitted with Arduino-based sensors and demonstrated reasonable metallurgical properties from conventional welding wire as feedstock.
Selective powder deposition (SPD)
In selective powder deposition, build and support powders are selectively deposited into a crucible, such that the build powder takes the shape of the desired object and the support powder fills the rest of the volume in the crucible. An infill material is then applied so that it comes into contact with the build powder, and the crucible is fired in a kiln at a temperature above the melting point of the infill but below the melting points of the powders. When the infill melts, it soaks into the build powder. It does not soak into the support powder, because the support powder is chosen to be non-wettable by the infill. If, at the firing temperature, the atoms of the infill material and the build powder are mutually diffusible, as in the case of copper powder and zinc infill, the resulting material will be a uniform mixture of those atoms, in this case bronze. If the atoms are not mutually diffusible, as in the case of tungsten and copper at 1100 °C, the resulting material will be a composite. To prevent shape distortion, the firing temperature must be below the solidus temperature of the resulting alloy.
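The temperature constraints above amount to a simple window check, sketched below with illustrative, assumed melting points (the function and the numbers are not from the source):

```python
def spd_firing_window(infill_melt, build_melt, support_melt, alloy_solidus):
    """Valid firing-temperature range for selective powder deposition:
    hot enough to melt the infill, but below the melting points of both
    powders and below the solidus of the resulting alloy."""
    low = infill_melt
    high = min(build_melt, support_melt, alloy_solidus)
    return (low, high) if low < high else None

# Illustrative values (deg C): zinc infill ~420, copper build powder ~1085,
# an assumed support powder at ~1700, and an assumed alloy solidus of ~900.
print(spd_firing_window(420, 1085, 1700, 900))  # -> (420, 900)
```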
Cryogenic 3D printing
Cryogenic 3D printing is a collection of techniques that forms solid structures by freezing liquid materials while they are deposited. As each liquid layer is applied, it is cooled by the low temperature of the previous layer and printing environment, which results in solidification. Unlike other 3D printing techniques, cryogenic 3D printing requires a controlled printing environment. The ambient temperature must be below the material's freezing point to ensure the structure remains solid during manufacturing, and the humidity must remain low to prevent frost formation between the application of layers. Materials typically include water and water-based solutions, such as brine, slurry, and hydrogels. Cryogenic 3D printing techniques include rapid freeze prototyping (RFP), low-temperature deposition manufacturing (LDM), and freeze-form extrusion fabrication (FEF).
Applications
3D printing or additive manufacturing has been used in manufacturing, medical, industry and sociocultural sectors (e.g. cultural heritage) to create successful commercial technology. More recently, 3D printing has also been used in the humanitarian and development sector to produce a range of medical items, prosthetics, spares and repairs. The earliest application of additive manufacturing was on the toolroom end of the manufacturing spectrum. For example, rapid prototyping was one of the earliest additive variants, and its mission was to reduce the lead time and cost of developing prototypes of new parts and devices, which was earlier only done with subtractive toolroom methods such as CNC milling, turning, and precision grinding. In the 2010s, additive manufacturing entered production to a much greater extent.
Food
Additive manufacturing of food is being developed by squeezing out food, layer by layer, into three-dimensional objects. A large variety of foods are appropriate candidates, such as chocolate and candy, and flat foods such as crackers, pasta, and pizza. NASA is looking into the technology in order to create 3D-printed food to limit food waste and to make food that is designed to fit an astronaut's dietary needs. In 2018, Italian bioengineer Giuseppe Scionti developed a technology allowing the production of fibrous plant-based meat analogues using a custom 3D bioprinter, mimicking meat texture and nutritional values.
Fashion
3D printing has entered the world of clothing, with fashion designers experimenting with 3D-printed bikinis, shoes, and dresses. In commercial production, Nike used 3D printing to prototype and manufacture the 2012 Vapor Laser Talon shoe for American football players, and New Balance has 3D-manufactured custom-fit shoes for athletes. 3D printing has come to the point where companies are printing consumer-grade eyewear with on-demand custom fit and styling (although they cannot print the lenses). On-demand customization of glasses is possible with rapid prototyping.
Transportation
In cars, trucks, and aircraft, additive manufacturing is beginning to transform both unibody and fuselage design and production, and powertrain design and production. For example, General Electric uses high-end 3D printers to build parts for turbines. Many of these systems are used for rapid prototyping before mass production methods are employed. Other prominent examples include:
In early 2014, Swedish supercar manufacturer Koenigsegg announced the One:1, a supercar that utilizes many components that were 3D printed. Urbee is the first car produced using 3D printing (the bodywork and car windows were "printed").
In 2014, Local Motors debuted Strati, a functioning vehicle that was entirely 3D printed using ABS plastic and carbon fiber, except the powertrain.
In May 2015 Airbus announced that its new Airbus A350 XWB included over 1000 components manufactured by 3D printing.
In 2015, a Royal Air Force Eurofighter Typhoon fighter jet flew with printed parts. The United States Air Force has begun to work with 3D printers, and the Israeli Air Force has also purchased a 3D printer to print spare parts.
In 2017, GE Aviation revealed that it had used design for additive manufacturing to create a helicopter engine with 16 parts instead of 900, with great potential impact on reducing the complexity of supply chains.
Firearms
AM's impact on firearms involves two dimensions: new manufacturing methods for established companies, and new possibilities for the making of do-it-yourself firearms. In 2012, the US-based group Defense Distributed disclosed plans to design a working plastic 3D-printed firearm "that could be downloaded and reproduced by anybody with a 3D printer". After Defense Distributed released their plans, questions were raised regarding the effects that 3D printing and widespread consumer-level CNC machining may have on gun control effectiveness. Moreover, armor-design strategies can be enhanced by taking inspiration from nature and prototyping those designs easily, using AM.
Health
Surgical uses of 3D printing-centric therapies began in the mid-1990s with anatomical modeling for bony reconstructive surgery planning. Patient-matched implants were a natural extension of this work, leading to truly personalized implants that fit one unique individual. Virtual planning of surgery and guidance using 3D-printed, personalized instruments have been applied to many areas of surgery, including total joint replacement and craniomaxillofacial reconstruction, with great success. One example of this is the bioresorbable tracheal splint to treat newborns with tracheobronchomalacia, developed at the University of Michigan. The use of additive manufacturing for serialized production of orthopedic implants (metals) is also increasing due to the ability to efficiently create porous surface structures that facilitate osseointegration. The hearing aid and dental industries are expected to be the biggest areas of future development using custom 3D printing technology.
3D printing is not just limited to inorganic materials; there have been a number of biomedical advancements made possible by 3D printing. 3D bio-printing technology has been studied by biotechnology firms and academia for possible use in tissue engineering applications in which organs and body parts are built using inkjet printing techniques. In this process, layers of living cells are deposited onto a gel medium or sugar matrix and slowly built up to form three-dimensional structures, including vascular systems. 3D printing has been considered as a method of implanting stem cells capable of generating new tissues and organs in living humans. In 2018, 3D printing technology was used for the first time to create a matrix for cell immobilization in fermentation. Propionic acid production by Propionibacterium acidipropionici immobilized on 3D-printed nylon beads was chosen as a model study. It was shown that those 3D-printed beads were capable of promoting high-density cell attachment and propionic acid production, which could be adapted to other fermentation bioprocesses.
3D printing has also been employed by researchers in the pharmaceutical field. During the last few years, there has been a surge in academic interest regarding drug delivery with the aid of AM techniques. This technology offers a unique way for materials to be utilized in novel formulations. AM allows for the usage of materials and compounds in the development of formulations in ways that are not possible with conventional/traditional techniques in the pharmaceutical field, e.g. tableting, cast-molding, etc. Moreover, one of the major advantages of 3D printing, especially in the case of fused deposition modelling (FDM), is the personalization of the dosage form that can be achieved, thus targeting the patient's specific needs. In the near future, 3D printers are expected to reach hospitals and pharmacies in order to provide on-demand production of personalized formulations according to patients' needs.
3D printing has also been used for medical equipment. During the COVID-19 pandemic 3D printers were used to supplement the strained supply of PPE through volunteers using their personally owned printers to produce various pieces of personal protective equipment (i.e. frames for face shields).
Education
3D printing, and open-source 3D printers in particular, are the latest technologies making inroads into the classroom. Higher education has proven to be a major buyer of desktop and professional 3D printers, which industry experts generally view as a positive indicator. Some authors have claimed that 3D printers offer an unprecedented "revolution" in STEM education. The evidence for such claims comes both from the low-cost ability for rapid prototyping in the classroom by students and from the fabrication of low-cost, high-quality scientific equipment from open hardware designs forming open-source labs. Additionally, libraries around the world have become locations to house smaller 3D printers for educational and community access. Future applications for 3D printing might include creating open-source scientific equipment.
Replicating archeological artifacts
In the 2010s, 3D printing became intensively used in the cultural heritage field for preservation, restoration, and dissemination purposes. Many European and North American museums have purchased 3D printers and actively recreate missing pieces of their relics and archaeological monuments, such as Tiwanaku in Bolivia. The Metropolitan Museum of Art and the British Museum have started using their 3D printers to create museum souvenirs that are available in the museum shops. Other museums, like the National Museum of Military History and Varna Historical Museum, have gone further and sell, through the online platform Threeding, digital models of their artifacts, created using Artec 3D scanners, in a 3D printing-friendly file format that everyone can 3D print at home. Morehshin Allahyari, an Iranian-born U.S. artist, considers her use of 3D sculpting processes to reconstruct Iranian cultural treasures a form of feminist activism. Allahyari uses 3D modeling software to reconstruct a series of cultural artifacts that were demolished by ISIS militants in 2014.
Replicating historic buildings and architectural structures
The application of 3D printing for the representation of architectural assets has many challenges. In 2018, the structure of the Iran National Bank was traditionally surveyed and modeled in computer graphics software (specifically, Cinema4D) and was optimized for 3D printing. The team tested the technique for the construction of the part, and it was successful. After testing the procedure, the modellers reconstructed the structure in Cinema4D and exported the front part of the model to Netfabb. The entrance of the building was chosen due to the 3D printing limitations and the budget of the project for producing the maquette. 3D printing was only one of the capabilities enabled by the produced 3D model of the bank, but due to the project's limited scope, the team did not continue modelling for the virtual representation or other applications. In 2021, Parsinejad et al. comprehensively compared hand surveying with digital recording (the photogrammetry method) for producing 3D reconstructions ready for 3D printing.
The world's first 3D-printed steel bridge was unveiled in Amsterdam in July 2021. Spanning 12 meters over the Oudezijds Achterburgwal canal, the bridge was created using robotic arms that printed over 4,500 kilograms of stainless steel. It took six months to complete.
Soft actuators
3D-printed soft actuators are a growing application of 3D printing technology. These soft actuators are being developed to deal with soft structures and organs, especially in biomedical sectors and where interaction between humans and robots is inevitable. The majority of existing soft actuators are fabricated by conventional methods that require manual fabrication of devices, post-processing/assembly, and lengthy iterations until maturity of the fabrication is achieved. Instead of these tedious and time-consuming processes, researchers are exploring manufacturing approaches for the effective fabrication of soft actuators. 3D-printed soft actuators promise to transform the design and fabrication of soft actuators with custom geometrical, functional, and control properties in a faster and less expensive approach. They also enable the incorporation of all actuator components into a single structure, eliminating the need for external joints, adhesives, and fasteners.
Circuit boards
Circuit board manufacturing involves multiple steps, including imaging, drilling, plating, solder mask coating, nomenclature printing, and surface finishes. These steps involve many chemicals, such as harsh solvents and acids. 3D printing circuit boards removes the need for many of these steps while still producing complex designs. Polymer ink is used to create the layers of the build, while silver polymer is used for creating the traces and holes that allow electricity to flow. Current circuit board manufacturing can be a tedious process, depending on the design. Specified materials are gathered and sent into inner-layer processing, where images are printed, developed, and etched. The etched cores are typically punched to add lamination tooling. The cores are then prepared for lamination. The stack-up, the buildup of a circuit board, is built and sent into lamination, where the layers are bonded. The boards are then measured and drilled. Many steps may differ from this point; however, for simple designs, the material goes through a plating process to plate the holes and surface. The outer image is then printed, developed, and etched. After the image is defined, the material must be coated with a solder mask for later soldering. Nomenclature is then added so components can be identified later. Then the surface finish is added. The boards are routed out of panel form into their singular or array form and then electrically tested. Aside from the paperwork that must be completed to prove the boards meet specifications, the boards are then packed and shipped.

The benefits of 3D printing are that the final outline is defined from the beginning, no imaging, punching, or lamination is required, and electrical connections are made with the silver polymer, which eliminates drilling and plating. The final paperwork is also greatly reduced due to the lack of materials required to build the circuit board. Complex designs that may take weeks to complete through normal processing can be 3D printed, greatly reducing manufacturing time.
Hobbyists
In 2005, academic journals began to report on the possible artistic applications of 3D printing technology. Off-the-shelf machines were increasingly capable of producing practical household items, for example, ornamental objects. Some practical examples include a working clock and gears printed for home woodworking machines, among other purposes. Websites associated with home 3D printing tended to include backscratchers, coat hooks, door knobs, etc. As of 2017, domestic 3D printing was reaching a consumer audience beyond hobbyists and enthusiasts. Several projects and companies are making efforts to develop affordable 3D printers for home desktop use. Much of this work has been driven by and targeted at DIY/maker/enthusiast/early-adopter communities, with additional ties to the academic and hacker communities.
Spurred on by decreases in price and increases in quality, an estimated 2 million people worldwide have purchased a 3D printer for hobby use.
Legal aspects
Intellectual property
3D printing has existed for decades within certain manufacturing industries where many legal regimes, including patents, industrial design rights, copyrights, and trademarks may apply. However, there is not much jurisprudence to say how these laws will apply if 3D printers become mainstream and individuals or hobbyist communities begin manufacturing items for personal use, for non-profit distribution, or for sale.
Any of the mentioned legal regimes may prohibit the distribution of the designs used in 3D printing or the distribution or sale of the printed item. To be allowed to do these things, where active intellectual property was involved, a person would have to contact the owner and ask for a licence, which may come with conditions and a price. However, many patent, design and copyright laws contain a standard limitation or exception for "private" or "non-commercial" use of inventions, designs or works of art protected under intellectual property (IP). That standard limitation or exception may leave such private, non-commercial uses outside the scope of IP rights.
Patents cover inventions including processes, machines, manufacturing, and compositions of matter and have a finite duration which varies between countries, but generally 20 years from the date of application. Therefore, if a type of wheel is patented, printing, using, or selling such a wheel could be an infringement of the patent.
Copyright covers an expression in a tangible, fixed medium and often lasts for the life of the author plus 70 years thereafter. For example, a sculptor retains copyright over a statue, such that other people cannot then legally distribute designs to print an identical or similar statue without paying royalties, waiting for the copyright to expire, or working within a fair use exception.
When a feature has both artistic (copyrightable) and functional (patentable) merits, and the question has appeared in US courts, the courts have often held that the feature is not copyrightable unless it can be separated from the functional aspects of the item. In other countries, the law and the courts may apply a different approach, allowing, for example, the design of a useful device to be registered (as a whole) as an industrial design, on the understanding that, in case of unauthorized copying, only the non-functional features may be claimed under design law, whereas any technical features could only be claimed if covered by a valid patent.
Gun legislation and administration
The US Department of Homeland Security and the Joint Regional Intelligence Center released a memo stating that "significant advances in three-dimensional (3D) printing capabilities, availability of free digital 3D printable files for firearms components, and difficulty regulating file sharing may present public safety risks from unqualified gun seekers who obtain or manufacture 3D printed guns" and that "proposed legislation to ban 3D printing of weapons may deter, but cannot completely prevent their production. Even if the practice is prohibited by new legislation, online distribution of these 3D printable files will be as difficult to control as any other illegally traded music, movie or software files."
Attempting to restrict the distribution of gun plans via the Internet has been likened to the futility of preventing the widespread distribution of DeCSS, which enabled DVD ripping. After the US government had Defense Distributed take down the plans, they were still widely available via the Pirate Bay and other file sharing sites. Downloads of the plans from the UK, Germany, Spain, and Brazil were heavy. Some US legislators have proposed regulations on 3D printers to prevent them from being used for printing guns. 3D printing advocates have suggested that such regulations would be futile, could cripple the 3D printing industry, and could infringe on free speech rights, with early 3D printing pioneer Professor Hod Lipson suggesting that gunpowder could be controlled instead.
Internationally, where gun controls are generally stricter than in the United States, some commentators have said the impact may be more strongly felt since alternative firearms are not as easily obtainable. Officials in the United Kingdom have noted that producing a 3D-printed gun would be illegal under their gun control laws. Europol stated that criminals have access to other sources of weapons but noted that as technology improves, the risks of an effect would increase.
Aerospace regulation
In the United States, the FAA has anticipated a desire to use additive manufacturing techniques and has been considering how best to regulate this process. The FAA has jurisdiction over such fabrication because all aircraft parts must be made under FAA production approval or under other FAA regulatory categories. In December 2016, the FAA approved the production of a 3D-printed fuel nozzle for the GE LEAP engine. Aviation attorney Jason Dickstein has suggested that additive manufacturing is merely a production method, and should be regulated like any other production method. He has suggested that the FAA's focus should be on guidance to explain compliance, rather than on changing the existing rules, and that existing regulations and guidance permit a company "to develop a robust quality system that adequately reflects regulatory needs for quality assurance".
Health and safety
Polymer feedstock materials can release ultrafine particles and volatile organic compounds (VOCs) if sufficiently heated, which in combination have been associated with adverse respiratory and cardiovascular health effects. In addition, temperatures of 190 °C to 260 °C are typically reached by an FFF extrusion nozzle, which can cause skin burns. Vat photopolymerization stereolithography printers use high-powered lasers that present a skin and eye hazard, although they are considered nonhazardous during printing because the laser is enclosed within the printing chamber.
3D printers also contain many moving parts that include stepper motors, pulleys, threaded rods, carriages, and small fans, which generally do not have enough power to cause serious injuries but can still trap a user's finger, long hair, or loose clothing. Most desktop FFF 3D printers do not have any added electrical safety features beyond regular internal fuses or external transformers, although the voltages in the exposed parts of 3D printers usually do not exceed 12V to 24V, which is generally considered safe.
Research on the health and safety concerns of 3D printing is new and in development due to the recent proliferation of 3D printing devices. In 2017, the European Agency for Safety and Health at Work published a discussion paper on the processes and materials involved in 3D printing, the potential implications of this technology for occupational safety and health and avenues for controlling potential hazards.
Noise level is measured in decibels (dB), and can vary greatly in home printers from 15 dB to 75 dB. Some main sources of noise in filament printers are fans, motors and bearings, while in resin printers the fans usually are responsible for most of the noise. Some methods for dampening the noise from a printer may be to install vibration isolation, use larger diameter fans, perform regular maintenance and lubrication, or use a soundproofing enclosure.
Impact
Additive manufacturing, still in its infancy, requires manufacturing firms to be flexible, ever-improving users of all available technologies in order to remain competitive. Advocates of additive manufacturing also predict that this arc of technological development will counter globalization, as end users will do much of their own manufacturing rather than engage in trade to buy products from other people and corporations. The real integration of the newer additive technologies into commercial production, however, is more a matter of complementing traditional subtractive methods rather than displacing them entirely.
The futurologist Jeremy Rifkin claimed that 3D printing signals the beginning of a third industrial revolution, succeeding the production line assembly that dominated manufacturing starting in the late 19th century.
Social change
Since the 1950s, a number of writers and social commentators have speculated in some depth about the social and cultural changes that might result from the advent of commercially affordable additive manufacturing technology. In recent years, 3D printing has created a significant impact in the humanitarian and development sector. Its potential to facilitate distributed manufacturing is resulting in supply chain and logistics benefits, by reducing the need for transportation, warehousing and wastage. Furthermore, social and economic development is being advanced through the creation of local production economies.
Others have suggested that as more and more 3D printers start to enter people's homes, the conventional relationship between the home and the workplace might be further eroded. Likewise, it has also been suggested that, as it becomes easier for businesses to transmit designs for new objects around the globe, the need for high-speed freight services might diminish. Finally, given the ease with which certain objects can now be replicated, it remains to be seen whether changes will be made to current copyright legislation to protect intellectual property rights now that the new technology is widely available.
Some call attention to the conjunction of commons-based peer production with 3D printing and other low-cost manufacturing techniques. On this view, the self-reinforcing fantasy of a system of eternal growth can be overcome by developing economies of scope, and society can play an important role in raising the whole productive structure to a higher plateau of more sustainable and customized productivity. At the same time, the democratization of the means of production, especially the physical ones, raises genuine issues and threats: the recyclability of advanced nanomaterials is still in question, weapons manufacturing could become easier, and there are implications for counterfeiting and intellectual property. In contrast to the industrial paradigm, whose competitive dynamics turned on economies of scale, commons-based peer production and 3D printing could develop economies of scope. While the advantages of scale rest on cheap global transportation, economies of scope share infrastructure costs (intangible and tangible productive resources), taking advantage of the capabilities of the fabrication tools. Following Neil Gershenfeld's observation that "some of the least developed parts of the world need some of the most advanced technologies", commons-based peer production and 3D printing may offer the tools needed to think globally but act locally in response to certain needs.
Larry Summers wrote about the "devastating consequences" of 3D printing and other technologies (robots, artificial intelligence, etc.) for those who perform routine tasks. In his view, "already there are more American men on disability insurance than doing production work in manufacturing. And the trends are all in the wrong direction, particularly for the less skilled, as the capacity of capital embodying artificial intelligence to replace white-collar as well as blue-collar work will increase rapidly in the years ahead." Summers recommends more vigorous cooperative efforts to address the "myriad devices" (e.g., tax havens, bank secrecy, money laundering, and regulatory arbitrage) enabling the holders of great wealth to avoid paying income and estate taxes, and to make it more difficult to accumulate great fortunes without requiring "great social contributions" in return, including: more vigorous enforcement of anti-monopoly laws, reductions in "excessive" protection for intellectual property, greater encouragement of profit-sharing schemes that may benefit workers and give them a stake in wealth accumulation, strengthening of collective bargaining arrangements, improvements in corporate governance, strengthening of financial regulation to eliminate subsidies to financial activity, easing of land-use restrictions that may cause the real estate of the rich to keep rising in value, better training for young people and retraining for displaced workers, and increased public and private investment in infrastructure development, e.g., in energy production and transportation.
Michael Spence wrote that "Now comes a ... powerful wave of digital technology that is replacing labor in increasingly complex tasks. This process of labor substitution and disintermediation has been underway for some time in service sectors—think of ATMs, online banking, enterprise resource planning, customer relationship management, mobile payment systems, and much more. This revolution is spreading to the production of goods, where robots and 3D printing are displacing labor." In his view, the vast majority of the cost of digital technologies comes at the start, in the design of hardware (e.g. 3D printers) and, more importantly, in creating the software that enables machines to carry out various tasks. "Once this is achieved, the marginal cost of the hardware is relatively low (and declines as scale rises), and the marginal cost of replicating the software is essentially zero. With a huge potential global market to amortize the upfront fixed costs of design and testing, the incentives to invest [in digital technologies] are compelling."
Spence believes that, unlike prior digital technologies, which drove firms to deploy underutilized pools of valuable labor around the world, the motivating force in the current wave of digital technologies "is cost reduction via the replacement of labor". For example, as the cost of 3D printing technology declines, it is "easy to imagine" that production may become "extremely" local and customized. Moreover, production may occur in response to actual demand, not anticipated or forecast demand. Spence believes that labor, no matter how inexpensive, will become a less important asset for growth and employment expansion, with labor-intensive, process-oriented manufacturing becoming less effective, and that re-localization will appear in both developed and developing countries. In his view, production will not disappear, but it will be less labor-intensive, and all countries will eventually need to rebuild their growth models around digital technologies and the human capital supporting their deployment and expansion. Spence writes that "the world we are entering is one in which the most powerful global flows will be ideas and digital capital, not goods, services, and traditional capital. Adapting to this will require shifts in mindsets, policies, investments (especially in human capital), and quite possibly models of employment and distribution."
Naomi Wu regards the usage of 3D printing in the Chinese classroom (where rote memorization is standard) to teach design principles and creativity as the most exciting recent development of the technology, and more generally regards 3D printing as being the next desktop publishing revolution.
A printer was donated to the Juan Fernandez Women's Group in 2024 to support women in the remote community in creating parts to fix broken equipment, without having to wait for a ship to import the needed components.
Environmental change
The growth of additive manufacturing could have a large impact on the environment. Traditional subtractive manufacturing methods such as CNC milling create products by cutting away material from a larger block. In contrast, additive manufacturing creates products layer by layer, using the minimum required materials to create the product. This has the benefit of reducing material waste, which further contributes to energy savings by avoiding raw material production.
Life-cycle assessment of additive manufacturing has estimated that adopting the technology could further lower carbon dioxide emissions since 3D printing creates localized production, thus reducing the need to transport products and the emissions associated. AM could also allow consumers to create their own replacement parts to fix purchased products to extend the lifespan of purchased products.
By making only the bare structural necessities of products, additive manufacturing also has the potential to make profound contributions to lightweighting. The use of these lightweight components would allow for reductions in the energy consumption and greenhouse gas emissions of vehicles and other forms of transportation. A case study on an airplane component made using additive manufacturing, for example, found that the use of the component saves 63% of relevant energy and carbon dioxide emissions over the course of the product's lifetime.
However, the adoption of additive manufacturing also has environmental disadvantages. Firstly, AM has a high energy consumption compared to traditional processes. This is due to its use of processes such as lasers and high temperatures for product creation. Secondly, despite additive manufacturing reducing up to 90% of waste compared to subtractive manufacturing, AM can generate waste that is non-recyclable. For example, there are issues with the recyclability of materials in metal AM as some highly regulated industries such as aerospace often insist on using virgin powder in the creation of safety critical components. Additive manufacturing has not yet reached its theoretical material efficiency potential of 97%, but it may get closer as the technology continues to increase productivity.
Despite the drawbacks, research and industry are making further strides to support AM's sustainability. Some large FDM printers that melt high-density polyethylene (HDPE) pellets may also accept sufficiently clean recycled material such as chipped milk bottles. In addition, these printers can use shredded material from faulty builds or unsuccessful prototype versions, thus reducing overall project wastage and materials handling and storage. The concept has been explored in the RecycleBot. There are also industrial efforts to produce metal powder from recycled metals.
See also
3D bioprinting
3D food printing
3D Manufacturing Format
3D printing marketplace
3D printing speed
3D printing in India
AstroPrint
Bubblegram
Cloud manufacturing
Computer numeric control
Delta robot
Fraunhofer Competence Field Additive Manufacturing
Fusion3
Laser cutting
Limbitless Solutions
List of 3D printer manufacturers
List of 3D printing software
List of common 3D test models
List of emerging technologies
List of notable 3D printed weapons and parts
Magnetically assisted slip casting
MakerBot Industries
Milling center
Organ-on-a-chip
Robocasting
Self-replicating machine
Ultimaker
Volumetric printing
References
Further reading
Wright, Paul K. (2001). 21st Century Manufacturing. New Jersey: Prentice-Hall Inc.
"3D printing: a new industrial revolution – Safety and health at work – EU-OSHA". osha.europa.eu. Retrieved 28 July 2017.
External links
Computer printers
DIY culture
Industrial design
Industrial processes
1981 introductions
1981 in technology
Computer-related introductions in 1981
Articles containing video clips
Open-source hardware | 3D printing | [
"Engineering"
] | 15,830 | [
"Industrial design",
"Design engineering",
"Design"
] |
1,306,999 | https://en.wikipedia.org/wiki/Lang%20factor | The Lang Factor is an estimated ratio of the total cost of creating a process within a plant, to the cost of all major technical components. It is widely used in industrial engineering to calculate the capital and operating costs of a plant.
The factors were introduced by H. J. Lang and Dr Michael Bird in Chemical Engineering magazine in 1947 as a method for estimating the total installation cost for plants and equipment.
Industries
These factors are widely used in the refining and petrochemical industries to help estimate the cost of new facilities. A typical multiplier for a new unit within a refinery is around 5.0: when the purchase prices of all the pumps, heat exchangers, pressure vessels, and other process equipment are summed and multiplied by 5.0, the result is a rough estimate of the total installed cost of the plant, including equipment, materials, construction, and engineering. The accuracy of this estimating method is usually about ±35%.
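As a brief illustration of the arithmetic, the sketch below estimates a total installed cost from a Lang factor; the equipment list, prices, and variable names are invented for the example, not taken from any source:

```python
# Rough total-installed-cost estimate using a Lang factor.
# Equipment prices below are hypothetical placeholders.
LANG_FACTOR = 5.0  # typical multiplier for a new refinery unit

equipment_costs = {
    "pumps": 250_000,
    "heat_exchangers": 400_000,
    "pressure_vessels": 600_000,
}

purchased_cost = sum(equipment_costs.values())
total_installed_cost = LANG_FACTOR * purchased_cost

print(f"Purchased equipment cost: ${purchased_cost:,}")
print(f"Estimated total installed cost: ${total_installed_cost:,.0f}")

# The method is only accurate to about +/- 35%, so report a range:
low, high = 0.65 * total_installed_cost, 1.35 * total_installed_cost
print(f"Expected range: ${low:,.0f} to ${high:,.0f}")
```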
Guthrie factors
The factors change over time because construction labor, bulk materials (concrete, pipe, etc.), engineering design, indirect costs, and major process equipment prices often do not change at the same rate.
In the late 1960s and early 1970s Kenneth Guthrie further expanded on this concept, generating different factors for different types of process equipment (pumps, exchangers, vessels, etc.). These are sometimes referred to as "Guthrie factors".
References
Industrial engineering | Lang factor | [
"Engineering"
] | 288 | [
"Industrial engineering"
] |
1,307,226 | https://en.wikipedia.org/wiki/Variable-width%20encoding | A variable-width encoding is a type of character encoding scheme in which codes of differing lengths are used to encode a character set (a repertoire of symbols) for representation, usually in a computer. Most common variable-width encodings are multibyte encodings (aka MBCS – multi-byte character set), which use varying numbers of bytes (octets) to encode different characters. (Some authors, notably in Microsoft documentation, use the term multibyte character set, which is a misnomer, because representation size is an attribute of the encoding, not of the character set.)
Early variable-width encodings using less than a byte per character were sometimes used to pack English text into fewer bytes in adventure games for early microcomputers. However, disks (which, unlike tapes, allowed random access so that text could be loaded on demand), increases in computer memory, and general-purpose compression algorithms have rendered such tricks largely obsolete.
Multibyte encodings are usually the result of a need to increase the number of characters which can be encoded without breaking backward compatibility with an existing constraint. For example, with one byte (8 bits) per character, one can encode 256 possible characters; in order to encode more than 256 characters, the obvious choice would be to use two or more bytes per encoding unit. Two bytes (16 bits) would allow 65,536 possible characters, but such a change would break compatibility with existing systems and therefore might not be feasible at all.
General structure
Since the aim of a multibyte encoding system is to minimise changes to existing application software, some characters must retain their pre-existing single-unit codes, even while other characters have multiple units in their codes. The result is that there are three sorts of units in a variable-width encoding: singletons, which consist of a single unit, lead units, which come first in a multiunit sequence, and trail units, which come afterwards in a multiunit sequence. Input and display software obviously needs to know about the structure of the multibyte encoding scheme, but other software generally doesn't need to know if a pair of bytes represent two separate characters or just one character.
For example, the four-character string "I♥NY" is encoded in UTF-8 like this (shown as hexadecimal byte values): 49 E2 99 A5 4E 59. Of the six units in that sequence, 49, 4E, and 59 are singletons (for I, N, and Y), E2 is a lead unit, and 99 and A5 are trail units. The heart symbol is represented by the combination of the lead unit E2 and the two trail units 99 A5.
UTF-8 makes it easy for a program to identify the three sorts of units, since they fall into separate value ranges. Older variable-width encodings are typically not as well designed, since the ranges may overlap. A text processing application that deals with such an encoding must then scan the text from the beginning in order to identify the various units and interpret the text correctly. In such encodings, one is liable to encounter false positives when searching for a string in the middle of the text. For example, if the hexadecimal values DE, DF, E0, and E1 can all be either lead units or trail units, then a search for the two-unit sequence DF E0 can yield a false positive in the sequence DE DF E0 E1, which consists of two consecutive two-unit sequences. There is also the danger that a single corrupted or lost unit may render the whole interpretation of a large run of multiunit sequences incorrect. In a variable-width encoding where all three types of units are disjoint, string searching always works without false positives, and (provided the decoder is well written) the corruption or loss of one unit corrupts only one character.
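Because UTF-8's three unit types occupy disjoint value ranges, a program can classify any byte in isolation. A minimal Python sketch (the helper function below is illustrative, not a standard-library API):

```python
def utf8_unit_type(byte: int) -> str:
    """Classify a single UTF-8 code unit by its value range."""
    if byte <= 0x7F:
        return "singleton"  # 00-7F: ASCII, a one-byte character
    if 0x80 <= byte <= 0xBF:
        return "trail"      # 80-BF: continuation byte
    return "lead"           # C2-F4 in well-formed UTF-8

for b in "I♥NY".encode("utf-8"):
    print(f"{b:02X}: {utf8_unit_type(b)}")
# 49: singleton, E2: lead, 99: trail, A5: trail, 4E: singleton, 59: singleton
```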
CJK multibyte encodings
The first use of multibyte encodings was for the encoding of Chinese, Japanese and Korean, which have large character sets well in excess of 256 characters. At first the encoding was constrained to the limit of 7 bits. The ISO-2022-JP, ISO-2022-CN and ISO-2022-KR encodings used the range 21–7E (hexadecimal) for both lead units and trail units, and marked them off from the singletons by using ISO 2022 escape sequences to switch between single-byte and multibyte mode. A total of 8,836 (94×94) characters could be encoded at first, and further sets of 94×94 characters with switching. The ISO 2022 encoding schemes for CJK are still in use on the Internet. The stateful nature of these encodings and the large overlap make them very awkward to process.
On Unix platforms, the ISO 2022 7-bit encodings were replaced by a set of 8-bit encoding schemes, the Extended Unix Code: EUC-JP, EUC-CN and EUC-KR. Instead of distinguishing between the multiunit sequences and the singletons with escape sequences, which made the encodings stateful, multiunit sequences were marked by having the most significant bit set, that is, being in the range 80–FF (hexadecimal), while the singletons were in the range 00–7F alone. The lead units and trail units were in the range A1 to FE (hexadecimal), that is, the same as their range in the ISO 2022 encodings, but with the high bit set to 1. These encodings were reasonably easy to work with provided all delimiters were ASCII characters and strings were not truncated to fixed lengths, but a break in the middle of a multibyte character could still cause major corruption.
On the PC (DOS and Microsoft Windows platforms), two encodings became established for Japanese and Traditional Chinese in which all of singletons, lead units and trail units overlapped: Shift-JIS and Big5 respectively. In Shift-JIS, lead units had the range 81–9F and E0–FC, trail units had the range 40–7E and 80–FC, and singletons had the range 21–7E and A1–DF. In Big5, lead units had the range A1–FE, trail units had the range 40–7E and A1–FE, and singletons had the range 21–7E (all values in hexadecimal). This overlap again made processing tricky, though at least most of the symbols had unique byte values (though strangely the backslash does not).
Unicode variable-width encodings
The Unicode standard has two variable-width encodings: UTF-8 and UTF-16 (it also has a fixed-width encoding, UTF-32). Originally, both the Unicode and ISO 10646 standards were meant to be fixed-width, with Unicode being 16-bit and ISO 10646 being 32-bit. ISO 10646 provided a variable-width encoding called UTF-1, in which singletons had the range 00–9F, lead units the range A0–FF and trail units the ranges A0–FF and 21–7E. Because of this bad design, similar to Shift JIS and Big5 in its overlap of values, the inventors of the Plan 9 operating system, the first to implement Unicode throughout, abandoned it and replaced it with a much better designed variable-width encoding for Unicode: UTF-8, in which singletons have the range 00–7F, lead units have the range C0–FD (now actually C2–F4, to avoid overlong sequences and to maintain synchronism with the encoding capacity of UTF-16; see the UTF-8 article), and trail units have the range 80–BF. The lead unit also tells how many trail units follow: one after C2–DF, two after E0–EF and three after F0–F4.
UTF-16 was devised to break free of the 65,536-character limit of the original Unicode (1.x) without breaking compatibility with the 16-bit encoding. In UTF-16, singletons have the range 0000–D7FF (55,296 code points) and E000–FFFF (8192 code points, 63,488 in total), lead units the range D800–DBFF (1024 code points) and trail units the range DC00–DFFF (1024 code points, 2048 in total). The lead and trail units, called high surrogates and low surrogates, respectively, in Unicode terminology, map 1024×1024 or 1,048,576 supplementary characters, making 1,112,064 (63,488 BMP code points + 1,048,576 code points represented by high and low surrogate pairs) encodable code points, or scalar values in Unicode parlance (surrogates are not encodable).
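The surrogate arithmetic described above can be made concrete with a short sketch; U+1F600 is chosen here only as a convenient supplementary code point:

```python
def to_surrogate_pair(code_point: int) -> tuple[int, int]:
    """Map a supplementary code point (>= 0x10000) to UTF-16 lead/trail units."""
    assert 0x10000 <= code_point <= 0x10FFFF
    offset = code_point - 0x10000       # 20-bit offset into the supplementary range
    high = 0xD800 + (offset >> 10)      # top 10 bits -> lead unit (high surrogate)
    low = 0xDC00 + (offset & 0x3FF)     # bottom 10 bits -> trail unit (low surrogate)
    return high, low

high, low = to_surrogate_pair(0x1F600)
print(f"U+1F600 -> {high:04X} {low:04X}")  # D83D DE00
```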
See also
wchar_t wide characters
Lotus Multi-Byte Character Set (LMBCS)
Triple-Byte Character Set (TBCS)
Double-Byte Character Set (DBCS)
Single-Byte Character Set (SBCS)
Notes
References
Character encoding | Variable-width encoding | [
"Technology"
] | 1,931 | [
"Natural language and computing",
"Character encoding"
] |
1,307,566 | https://en.wikipedia.org/wiki/Atosiban | Atosiban, sold under the brand name Tractocile among others, is an inhibitor of the hormones oxytocin and vasopressin. It is used as an intravenous medication as a labour repressant (tocolytic) to halt premature labor. It was developed by Ferring Pharmaceuticals in Sweden and first reported in the literature in 1985. Originally marketed by Ferring Pharmaceuticals, it is licensed in proprietary and generic forms for the delay of imminent preterm birth in pregnant adult women.
The most commonly reported side effect is nausea.
Medical uses
Atosiban is used to delay birth in adult women who are 24 to 33 weeks pregnant, when they show signs that they may give birth pre-term (prematurely). These signs include regular contractions lasting at least 30 seconds at a rate of at least four every 30 minutes, and dilation of the cervix (the neck of the womb) of 1 to 3 cm and an effacement (a measure of the thinness of the cervix) of 50% or more. In addition, the baby must have a normal heart rate.
Pharmacology
Mechanism of action
Atosiban is a nonapeptide, desamino-oxytocin analogue, and a competitive vasopressin/oxytocin receptor antagonist (VOTra). Atosiban inhibits the oxytocin-mediated release of inositol trisphosphate from the myometrial cell membrane. As a result, the release of intracellularly stored calcium from the sarcoplasmic reticulum of myometrial cells is reduced, as is the influx of Ca2+ from the extracellular space through voltage-gated channels. In addition, atosiban suppresses the oxytocin-mediated release of PGE and PGF from the decidua.
In human preterm labour, atosiban, at the recommended dosage, antagonises uterine contractions and induces uterine quiescence. The onset of uterine relaxation following atosiban is rapid; uterine contractions are significantly reduced within 10 minutes, and stable uterine quiescence is achieved.
Other uses
Atosiban use after assisted reproduction
Atosiban is useful in improving the pregnancy outcome of in vitro fertilization-embryo transfer (IVF-ET) in patients with repeated implantation failure. The pregnancy rate improved from zero to 43.7%.
First- and second-trimester bleeding is more prevalent in ART than in spontaneous pregnancies. In a series of 33 first-trimester pregnancies from 2004 to 2010 with vaginal bleeding and evident uterine contractions after ART, treated with atosiban and/or ritodrine, no preterm delivery occurred before 30 weeks.
In a 2010 meta-analysis, nifedipine is superior to β2 adrenergic receptor agonists and magnesium sulfate for tocolysis in women with preterm labor (20–36 weeks), but it has been assigned to pregnancy category C by the U.S. Food and Drug Administration, so is not recommended before 20 weeks, or in the first trimester. A report from 2011 supports the use of atosiban, even at very early pregnancy, to decrease the frequency of uterine contractions to enhance success of pregnancy.
Pharmacovigilance
Following the launch of atosiban in 2000, the calculated cumulative patient exposure to atosiban (January 2000 to December 2005) is estimated as 156,468 treatment cycles. To date, routine monitoring of drug safety has revealed no major safety issues.
Regulatory affairs
Atosiban was approved in the European Union in January 2000 and launched in the European Union in April 2000. As of June 2007, atosiban was approved in 67 countries, excluding the United States and Japan. It was understood that Ferring did not expect to seek approval for atosiban in the US or Japan, focusing instead on development of new compounds for use in Spontaneous Preterm Labor (SPTL). Because atosiban had only a short period of patent protection remaining, the parent drug company decided not to pursue licensing in the US.
Systematic reviews
In a systematic review of atosiban for tocolysis in preterm labour, six clinical studies (two comparing atosiban to placebo and four comparing atosiban to a β agonist) showed a significant increase in the proportion of women undelivered by 48 hours in women receiving atosiban compared to placebo. When compared with β agonists, atosiban increased the proportion of women undelivered by 48 hours and was safer. Therefore, oxytocin antagonists appear to be effective and safe for tocolysis in preterm labour.
A 2014 systematic review by the Cochrane Collaboration showed that while atosiban had fewer side effects than alternative drugs such as ritodrine and other β agonists and calcium channel blockers, it was no better than placebo in the major outcomes, i.e. pregnancy prolongation or neonatal outcomes. The finding of an increase in infant deaths in one placebo-controlled trial warrants caution. Further research is recommended.
Clinical trials
Atosiban vs. nifedipine
A 2013 retrospective study comparing the efficacy and safety of atosiban and nifedipine in the suppression of preterm labour concluded that atosiban and nifedipine are effective in delaying delivery for seven days or more in women presenting with preterm labour. A total of 68.3% of women in the atosiban group remained undelivered at seven days or more, compared with 64.7% in the nifedipine group. They have the same efficacy and associated minor side effects. However, flushing, palpitation, and hypotension were significantly higher in the nifedipine group.
A 2012 clinical trial compared the tocolytic efficacy and tolerability of atosiban with those of nifedipine. Forty-eight women (68.6%) allocated to atosiban and 39 (52%) allocated to nifedipine did not deliver and did not require an alternative agent within 48 hours (p = .03). Atosiban has fewer failures within 48 hours; nifedipine may be associated with a longer postponement of delivery.
A 2009 randomised controlled study demonstrated for the first time the direct effects of atosiban on fetal movement, heart rate, and blood flow. Tocolysis with either atosiban or nifedipine combined with betamethasone administration had no direct fetal adverse effects.
Atosiban vs. ritodrine
A multicentre controlled trial of atosiban versus ritodrine in 128 women showed significantly better tocolytic efficacy after 7 days in the atosiban group than in the ritodrine group (60.3 versus 34.9%), but not at 48 hours (68.3 versus 58.7%). Maternal adverse events were reported less frequently in the atosiban group (7.9 versus 70.8%), resulting in fewer early drug terminations due to adverse events (0 versus 20%). Therefore, atosiban is superior to ritodrine in the treatment of preterm labour.
References
External links
Oxytocin receptor antagonists
Peptides
Tocolytics
Vasopressin receptor antagonists
Ethoxy compounds
Nonapeptides
Sec-Butyl compounds | Atosiban | [
"Chemistry"
] | 1,548 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
1,307,591 | https://en.wikipedia.org/wiki/Tocolytic | Tocolytics (also called anti-contraction medications or labor suppressants) are medications used to suppress premature labor (from Greek τόκος tókos, "childbirth", and λύσις lúsis, "loosening"). Preterm birth accounts for 70% of neonatal deaths. Therefore, tocolytic therapy is provided when delivery would result in premature birth, postponing delivery long enough for the administration of glucocorticoids, which accelerate fetal lung maturity but may require one to two days to take effect.
Commonly used tocolytic medications include β2 agonists, calcium channel blockers, NSAIDs, and magnesium sulfate. These can assist in delaying preterm delivery by suppressing uterine muscle contractions and their use is intended to reduce fetal morbidity and mortality associated with preterm birth. The suppression of contractions is often only partial and tocolytics can only be relied on to delay birth for a matter of days. Depending on the tocolytic used, the pregnant woman or fetus may require monitoring (e.g., blood pressure monitoring when nifedipine is used as it reduces blood pressure; cardiotocography to assess fetal well-being). In any case, the risk of preterm labor alone justifies hospitalization.
Indications
Tocolytics are used in preterm labor, that is, when a baby would otherwise be born prematurely, before 37 weeks of pregnancy. As preterm birth represents one of the leading causes of neonatal morbidity and mortality, the goal is to delay delivery and increase gestational age, gaining time for other management strategies such as corticosteroid therapy that may help with fetal lung maturity. Tocolytics are considered for women with confirmed preterm labor between 24 and 34 weeks of gestational age and are used in conjunction with other therapies that may include corticosteroid administration, fetal neuroprotection, and safe transfer to specialized facilities.
Types of agents
There is no clear first-line tocolytic agent. Current evidence suggests that first line treatment with β2 agonists, calcium channel blockers, or NSAIDs to prolong pregnancy for up to 48 hours is the best course of action to allow time for glucocorticoid administration.
Various types of agents are used, with varying success rates and side effects. Some medications are not specifically approved by the U.S. Food and Drug Administration (FDA) for use in stopping uterine contractions in preterm labor, instead being used off-label.
According to a 2022 Cochrane review, the most effective tocolytics for delaying preterm birth by 48 hours, and 7 days were the nitric oxide donors, calcium channel blockers, oxytocin receptor antagonists and combinations of tocolytics.
Calcium-channel blockers (such as nifedipine) and oxytocin antagonists (such as atosiban) may delay delivery by 2 to 7 days, depending on how quickly the medication is administered. NSAIDs (such as indomethacin) and calcium channel blockers (such as nifedipine) are the most likely to delay delivery for 48 hours, with the fewest maternal and neonatal side effects. Otherwise, tocolysis is rarely successful beyond 24 to 48 hours because current medications do not alter the fundamentals of labor activation. However, postponing premature delivery by 48 hours appears sufficient to allow pregnant women to be transferred to a center specialized in the management of preterm deliveries, and thus to administer corticosteroids, which may reduce the consequences of neonatal organ immaturity.
In meta-analyses, β-adrenergic agonists, atosiban, and indomethacin reduced the odds of delivery within 24 hours (odds ratio (OR) 0.54, 95% confidence interval (CI): 0.32–0.91) and within 48 hours (OR 0.47, 95% CI: 0.30–0.75).
Antibiotics were thought to delay delivery, but no studies have shown any evidence that using antibiotics during preterm labor effectively delays delivery or reduces neonatal morbidity. Antibiotics are used in people with premature rupture of membranes, but this is not characterized as tocolysis.
Contraindications to tocolytics
In addition to drug-specific contraindications, several general factors may contraindicate delaying childbirth with the use of tocolytic medications.
Fetus is older than 34 weeks gestation
Fetus weighs less than 2.5 kg, or has intrauterine growth restriction (IUGR) or placental insufficiency
Lethal congenital or chromosomal abnormalities
Cervical dilation is greater than 4 centimeters
Chorioamnionitis or intrauterine infection is present
Pregnant woman has severe pregnancy-induced hypertension, severe eclampsia/preeclampsia, active vaginal bleeding, placental abruption, a cardiac disease, or another condition which indicates that the pregnancy should not continue.
Maternal hemodynamic instability with bleeding
Intrauterine fetal demise, lethal fetal anomaly, or non-reassuring fetal status
Future direction of tocolytics
Most tocolytics are currently used off-label. Future development of tocolytic agents should be directed toward better efficacy in intentionally prolonging pregnancy, which could mean fewer maternal, fetal, and neonatal adverse effects when delaying preterm childbirth. Tocolytic alternatives worth pursuing include barusiban, a last-generation oxytocin receptor antagonist, as well as COX-2 inhibitors. Further studies on the use of multiple tocolytics should examine overall health outcomes rather than pregnancy prolongation alone.
See also
Labor induction
References
Chemical substances for emergency medicine
Obstetric drugs
Obstetrics
Obstetrical procedures
Childbirth | Tocolytic | [
"Chemistry"
] | 1,231 | [
"Chemicals in medicine",
"Chemical substances for emergency medicine"
] |
21,017,316 | https://en.wikipedia.org/wiki/Fault%20detection%20and%20isolation | Fault detection, isolation, and recovery (FDIR) is a subfield of control engineering which concerns itself with monitoring a system, identifying when a fault has occurred, and pinpointing the type of fault and its location. Two approaches can be distinguished: A direct pattern recognition of sensor readings that indicate a fault and an analysis of the discrepancy between the sensor readings and expected values, derived from some model. In the latter case, it is typical that a fault is said to be detected if the discrepancy or residual goes above a certain threshold. It is then the task of fault isolation to categorize the type of fault and its location in the machinery. Fault detection and isolation (FDI) techniques can be broadly classified into two categories. These include model-based FDI and signal processing based FDI.
Model-based FDI
In model-based FDI techniques, some model of the system is used to decide whether a fault has occurred. The system model may be mathematical or knowledge based. Some of the model-based FDI techniques include the observer-based approach, the parity-space approach, and parameter-identification-based methods. Another trend in model-based FDI schemes is the set-membership methods. These methods guarantee detection of a fault under certain conditions. The main difference is that instead of finding the most likely model, these techniques discard the models that are not compatible with the data.
The example shown in the figure on the right illustrates a model-based FDI technique for an aircraft elevator reactive controller through the use of a truth table and a state chart. The truth table defines how the controller reacts to detected faults, and the state chart defines how the controller switches between the different modes of operation (passive, active, standby, off, and isolated) of each actuator. For example, if a fault is detected in hydraulic system 1, then the truth table sends an event to the state chart that the left inner actuator should be turned off. One of the benefits of this model-based FDI technique is that this reactive controller can also be connected to a continuous-time model of the actuator hydraulics, allowing the study of switching transients.
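A minimal sketch of the residual-threshold idea underlying many model-based FDI schemes; the measurements, model predictions, and threshold below are invented for illustration:

```python
import numpy as np

def detect_fault(measured, predicted, threshold=0.5):
    """Flag samples where the residual between the sensor reading and
    the model prediction exceeds a fixed threshold."""
    residual = np.abs(np.asarray(measured) - np.asarray(predicted))
    return residual > threshold

# Hypothetical sensor trace vs. model output: a fault appears at index 3.
measured  = [1.02, 0.98, 1.01, 2.30, 1.00]
predicted = [1.00, 1.00, 1.00, 1.00, 1.00]
print(detect_fault(measured, predicted))  # [False False False  True False]
```

In practice the threshold must be tuned against model uncertainty and sensor noise, which is exactly the trade-off the robust fault diagnosis literature addresses.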
Signal processing based FDI
In signal processing based FDI, some mathematical or statistical operations are performed on the measurements, or some neural network is trained using measurements to extract the information about the fault.
A good example of signal processing based FDI is time domain reflectometry, where a signal is sent down a cable or electrical line and the reflected signal is compared mathematically to the original signal to identify faults. Spread spectrum time domain reflectometry, for instance, involves sending a spread spectrum signal down a wire line to detect wire faults. Several clustering methods have also been proposed to identify novel faults and segment a given signal into normal and faulty segments.
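As a hedged sketch of the reflectometry principle, the snippet below cross-correlates a synthetic probe signal with its delayed reflection to locate a fault; all signals and parameters are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
probe = rng.standard_normal(n)  # spread-spectrum-like probe signal

# Simulated reflection: attenuated copy delayed by the round trip to the fault.
delay = 200                     # round-trip delay in samples
reflected = 0.5 * np.roll(probe, delay) + 0.05 * rng.standard_normal(n)

# The peak of the cross-correlation recovers the round-trip delay.
corr = np.correlate(reflected, probe, mode="full")
estimated_delay = corr.argmax() - (n - 1)
print(estimated_delay)          # ~200; fault distance = delay * propagation speed / 2
```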
Machine fault diagnosis
Machine fault diagnosis is a field of mechanical engineering concerned with finding faults arising in machines. A particularly well developed part of it applies specifically to rotating machinery, one of the most common types encountered. To identify the most probable faults leading to failure, many methods are used for data collection, including vibration monitoring, thermal imaging, oil particle analysis, etc. Then these data are processed utilizing methods like spectral analysis, wavelet analysis, wavelet transform, short term Fourier transform, Gabor Expansion, Wigner-Ville distribution (WVD), cepstrum, bispectrum, correlation method, high resolution spectral analysis, waveform analysis (in the time domain, because spectral analysis usually concerns only frequency distribution and not phase information) and others. The results of this analysis are used in a root cause failure analysis in order to determine the original cause of the fault. For example, if a bearing fault is diagnosed, then it is likely that the bearing was not itself damaged at installation, but rather as the consequence of another installation error (e.g., misalignment) which then led to bearing damage. Diagnosing the bearing's damaged state is not enough for precision maintenance purposes. The root cause needs to be identified and remedied. If this is not done, the replacement bearing will soon wear out for the same reason and the machine will suffer more damage, remaining dangerous. Of course, the cause may also be visible as a result of the spectral analysis undertaken at the data-collection stage, but this may not always be the case.
The most common technique for detecting faults is the time-frequency analysis technique. For a rotating machine, the rotational speed of the machine (often known as the RPM), is not a constant, especially not during the start-up and shutdown stages of the machine. Even if the machine is running in the steady state, the rotational speed will vary around a steady-state mean value, and this variation depends on load and other factors. Since sound and vibration signals obtained from a rotating machine are strongly related to its rotational speed, it can be said that they are time-variant signals in nature. These time-variant features carry the machine fault signatures. Consequently, how these features are extracted and interpreted is important to research and industrial applications.
The most common method used in signal analysis is the FFT (fast Fourier transform). The Fourier transform and its inverse counterpart offer two perspectives to study a signal: via the time domain or via the frequency domain. The FFT-based spectrum of a time signal shows the existence of its frequency contents. By studying these and their magnitude or phase relations, various types of information can be obtained, such as harmonics, sidebands, beat frequency, bearing fault frequency and so on. However, the FFT is only suitable for signals whose frequency contents do not change over time, whereas, as mentioned above, the frequency contents of the sound and vibration signals obtained from a rotating machine are very much time-dependent. For this reason, FFT-based spectra are unable to show how the frequency contents develop over time. To be more specific, if the RPM of a machine is increasing or decreasing during its startup or shutdown period, its bandwidth in the FFT spectrum will become much wider than it would be in the steady state. Hence, in such a case, the harmonics are not so distinguishable in the spectrum.
The time-frequency approach for machine fault diagnosis can be divided into two broad categories: linear methods and quadratic methods. The difference is that linear transforms can be inverted to reconstruct the time signal, so they are more suitable for signal processing tasks such as noise reduction and time-varying filtering. Quadratic methods describe the energy distribution of a signal in the joint time-frequency domain, which is useful for analysis, classification, and detection of signal features; however, phase information is lost in the quadratic time-frequency representation, and the time histories cannot be reconstructed with this method.
The short-term Fourier transform (STFT) and the Gabor transform are two algorithms commonly used as linear time-frequency methods. If we consider linear time-frequency analysis to be the evolution of the conventional FFT, then quadratic time frequency analysis would be the power spectrum counterpart. Quadratic algorithms include the Gabor spectrogram, Cohen's class and the adaptive spectrogram. The main advantage of time frequency analysis is discovering the patterns of frequency changes, which usually represent the nature of the signal. As long as this pattern is identified the machine fault associated with this pattern can be identified. Another important use of time frequency analysis is the ability to filter out a particular frequency component using a time-varying filter.
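A small SciPy sketch of the point made above: a time-frequency transform can track a run-up in rotational speed that a single FFT spectrum would smear into a wide band. The simulated signal and all parameters are illustrative only, not real machine data:

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                    # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)

# Simulated machine run-up: shaft frequency sweeps from 10 Hz to 60 Hz.
inst_freq = 10 + 10 * t
vibration = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)

f, times, Z = stft(vibration, fs=fs, nperseg=256)

# The dominant frequency in each time slice tracks the run-up:
dominant = f[np.abs(Z).argmax(axis=0)]
print(dominant[::10].round())  # rises roughly from 10 Hz toward 60 Hz
```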
Robust fault diagnosis
In practice, model uncertainties and measurement noise can complicate fault detection and isolation.
As a result, using fault diagnostics to meet industrial needs cost-effectively, without investing more than the cost of the failures to be avoided in the first place, requires an effective scheme for applying them. This is the subject of maintenance, repair and operations; the different strategies include:
Condition-based maintenance
Planned preventive maintenance
Preventive maintenance
Corrective maintenance (does not use diagnostics)
Integrated vehicle health management
Fault detection and diagnosis using artificial intelligence
Machine learning techniques for fault detection and diagnosis
In fault detection and diagnosis, mathematical classification models, which belong to supervised learning methods, are trained on the training set of a labeled dataset to accurately identify redundancies, faults, and anomalous samples. Over the past decades, different classification and preprocessing models have been developed and proposed in this research area. The k-nearest-neighbors algorithm (kNN) is one of the oldest techniques used to solve fault detection and diagnosis problems. Despite its simple logic, this instance-based algorithm has problems with high dimensionality and processing time when used on large datasets. Since kNN cannot automatically extract features to overcome the curse of dimensionality, it is often accompanied by data preprocessing techniques like principal component analysis (PCA), linear discriminant analysis (LDA), or canonical correlation analysis (CCA) to achieve better performance. In many industrial cases, the effectiveness of kNN has been compared with that of other methods, especially more complex classification models such as support vector machines (SVMs), which are widely used in this field. Thanks to their nonlinear mapping using kernel methods, SVMs show impressive generalization performance, even with small training data. However, general SVMs do not perform automatic feature extraction themselves and, just like kNN, are often coupled with a data preprocessing technique. Another drawback of SVMs is that their performance is highly sensitive to the initial parameters, particularly to the kernel method, so for each signal dataset a parameter tuning process must be conducted first. The low speed of the training phase is therefore a limitation of SVMs in fault detection and diagnosis applications.
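A minimal scikit-learn sketch of the kNN-plus-preprocessing pairing described above; the synthetic feature matrix stands in for extracted vibration features and is not real fault data:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for a labeled fault dataset: 40 features, 2 classes (normal/faulty).
X, y = make_classification(n_samples=500, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA supplies the dimensionality reduction that kNN cannot do on its own.
model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```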
Artificial neural networks (ANNs) are among the most mature and widely used mathematical classification algorithms in fault detection and diagnosis. ANNs are well known for efficiently learning the complex relations that generally exist inherently in fault detection and diagnosis problems, and they are easy to operate. Another advantage of ANNs is that they perform automatic feature extraction by allocating negligible weights to irrelevant features, sparing the system a separate feature extractor. However, ANNs tend to overfit the training set, which leads to poor accuracy on the validation set. Hence, regularization terms and prior knowledge are often added to the ANN model to avoid overfitting and achieve higher performance. Moreover, properly determining the size of the hidden layer requires exhaustive parameter tuning to avoid poor approximation and generalization capabilities.
In general, different SVM and ANN models (i.e. back-propagation neural networks and multi-layer perceptrons) have shown successful performance in fault detection and diagnosis of equipment such as gearboxes, machinery parts (i.e. mechanical bearings), compressors, wind and gas turbines, and steel plates.
Deep learning techniques for fault detection and diagnosis
With the research advances in ANNs and the advent of deep learning algorithms using deep and complex layers, novel classification models have been developed to cope with fault detection and diagnosis.
Most shallow learning models extract a few feature values from signals, causing a dimensionality reduction from the original signal. Using convolutional neural networks, the continuous wavelet transform scalogram can be classified directly into normal and faulty classes. Such a technique avoids discarding important fault information and results in better fault detection and diagnosis performance.
In addition, by transforming signals to image constructions, 2D Convolutional neural networks can be implemented to identify faulty signals from vibration image features.
Deep belief networks, Restricted Boltzmann machines and Autoencoders are other deep neural networks architectures which have been successfully used in this field of research. In comparison to traditional machine learning, due to their deep architecture, deep learning models are able to learn more complex structures from datasets, however, they need larger samples and longer processing time to achieve higher accuracy.
Fault recovery
Fault Recovery in FDIR is the action taken after a failure has been detected and isolated to return the system to a stable state. Some examples of fault recoveries are:
Switch-off of a faulty equipment
Switch-over from a faulty equipment to a redundant equipment
Change of state of the complete system into a Safe Mode with limited functionalities
See also
Control reconfiguration
Control theory
Failure mode and effects analysis
Fault-tolerant system
Predictive maintenance
Spread-spectrum time-domain reflectometry
System identification
References
Control theory
Systems engineering | Fault detection and isolation | [
"Mathematics",
"Engineering"
] | 2,599 | [
"Systems engineering",
"Reliability engineering",
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
21,017,672 | https://en.wikipedia.org/wiki/Mucoadhesion | Mucoadhesion describes the attractive forces between a biological material and mucus or mucous membrane. Mucous membranes adhere to epithelial surfaces such as the gastrointestinal tract (GI-tract), the vagina, the lung, the eye, etc. They are generally hydrophilic as they contain many hydrogen macromolecules due to the large amount of water (approximately 95%) within its composition. However, mucin also contains glycoproteins that enable the formation of a gel-like substance. Understanding the hydrophilic bonding and adhesion mechanisms of mucus to biological material is of utmost importance in order to produce the most efficient applications. For example, in drug delivery systems, the mucus layer must be penetrated in order to effectively transport micro- or nanosized drug particles into the body. Bioadhesion is the mechanism by which two biological materials are held together by interfacial forces. The mucoadhesive properties of polymers can be evaluated via rheological synergism studies with freshly isolated mucus, tensile studies and mucosal residence time studies. Results obtained with these in vitro methods show a high correlation with results obtained in humans.
Mucoadhesive bondings
Mucoadhesion involves several types of bonding mechanisms, and it is the interplay between these mechanisms that allows for the adhesive process. The major categories are wetting theory, adsorption theory, diffusion theory, electrostatic theory, and fracture theory. Specific processes include mechanical interlocking, electrostatic interactions, diffusion interpenetration, adsorption and fracture processes.
Bonding mechanisms
Wetting theory: Wetting is the oldest and most prevalent theory of adhesion. The adhesive components in a liquid solution anchor themselves in irregularities on the substrate and eventually harden, providing sites on which to adhere. Surface tension effects restrict the movement of the adhesive along the surface of the substrate, and are related to the thermodynamic work of adhesion by Dupré's equation (see the relations stated after this list). Measuring the affinity of the adhesive for the substrate is performed by determining the contact angle. Contact angles closer to zero indicate a more wettable interaction, and those interactions have greater spreadability.
Adsorption theory: Adsorption is another widely accepted theory, where adhesion between the substrate and adhesive is due to primary and secondary bonding. The primary bonds are due to chemisorption, and result in comparatively long-lasting covalent and non-covalent bonds. Among covalent bonds, disulfide bonds are likely most important. Thiolated polymers – designated thiomers – are mucoadhesive polymers that can form disulfide bonds with cysteine-rich subdomains of mucus glycoproteins. Recently several new classes of polymers have been developed that are capable of forming covalent bonds with mucosal surfaces similarly to thiomers. These polymers have acryloyl, methacryloyl, maleimide, boronate and N‐hydroxy (sulfo) succinimide ester groups in their structure. Among non-covalent bonds, ionic interactions (such as those of mucoadhesive chitosans with the anionically charged mucus) and hydrogen bonding are likely most important. The secondary bonds include weak van der Waals forces and interactions between hydrophobic substructures.
Diffusion theory: The mechanism for diffusion involves polymer and mucin chains from the adhesive penetrating the matrix of the substrate and forming a semipermanent bond. As the similarities between the adhesive and the substrate increase, so does the degree of mucoadhesion. The bond strength increases with the degree of penetration, increasing the adhesion strength. The penetration rate is determined by the diffusion coefficient, the degree of flexibility of the adsorbate chains, mobility and contact time. The diffusion mechanism itself is affected by the length of the molecular chains being implanted and cross-linking density, and is driven by a concentration gradient.
Electrostatic theory: is an electrostatic process involving the transfer of electrons across the interface between the substrate and adhesive. The net result is the formation of a double layer of charges that are attracted to each other due to balancing of the Fermi layers, and therefore cause adhesion. This theory only works given the assumption that the substrate and adhesive have different electrostatic surface characteristics.
Fracture theory: Fracture theory is the major mechanism by which to determine the mechanical strength of a particular mucoadhesive, and describes the force necessary to separate the two materials after mucoadhesion has occurred. Ultimate tensile strength is determined by the separating force and the total surface area of the adhesion, and failure generally occurs in one of the surfaces rather than at the interface. Since the fracture theory only deals with the separation force, the diffusion and penetration of polymers is not accounted for in this mechanism.
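For reference, a minimal statement of the thermodynamic relations behind the wetting mechanism above, in standard textbook form (these are the well-known Dupré and Young–Dupré relations, not equations quoted from this article's sources):

```latex
% Dupre equation: thermodynamic work of adhesion between
% substrate (s) and adhesive (a), with interfacial tension gamma_{sa}
W_A = \gamma_s + \gamma_a - \gamma_{sa}
% Eliminating the solid-phase tensions via Young's equation gives
% the Young--Dupre relation; theta is the contact angle:
W_A = \gamma_a \, (1 + \cos\theta)
% As theta approaches 0 (complete wetting), W_A is maximized,
% i.e., better wetting corresponds to stronger adhesion.
```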
Stages of mucoadhesive process
The mucoadhesive process will differ greatly depending on the surface and properties of the adhesive. However, two general steps of the process have been identified: the contact stage and the consolidation stage.
Contact stage
The contact stage is the initial wetting that occurs between the adhesive and membrane. This can occur mechanically by bringing together the two surfaces, or through the bodily systems, like when particles are deposited in the nasal cavity by inhalation. The principles of initial adsorption of small molecule adsorbates can be described by DLVO theory.
Adsorption theory
According to DLVO theory, particles are held in suspension by a balance of attractive and repulsive forces. This theory can be applied to the adsorption of small molecules like mucoadhesive polymers, on surfaces, like mucus layers. Particles in general experience attractive van der Waals forces that promote coagulation; in the context of adsorption, the particle and mucus layers are naturally attracted. The attractive forces between particles increases with decreasing particle size due to increasing surface-area-to-volume ratio. This increases the strength of van der Waals interactions, so smaller particles should be easier to adsorb onto mucous membranes.
DLVO theory also explains some of the challenges in establishing contact between particles and mucus layers in mucoadhesion due to their repulsive forces. Surfaces will develop an electrical double layer if they are in a solution containing ions, as is the case with many bodily systems, creating electrostatic repulsive forces between the adhesive and surface. Steric effects can also hinder particle adsorption to surfaces. Entropy or disorder of a system will decrease as polymeric mucoadhesives adsorb to surfaces, which makes establishing contact between the adhesive and membrane more difficult. Adhesives with large surface groups will also experience a decrease in entropy as they approach the surface, creating repulsion.
Wettability theory
The initial adsorption of the molecular adhesive will also depend on the wetting between the adhesive and membrane. This can be described using Young's equation:

γ_mg = γ_bm + γ_bg cos θ

where γ_mg is the interfacial tension between the membrane and the gas or bodily environment, γ_bm is the interfacial tension between the bioadhesive and the membrane, γ_bg is the interfacial tension between the bioadhesive and the bodily environment, and θ is the contact angle of the bioadhesive on the membrane. The ideal contact angle is 0°, meaning the bioadhesive perfectly wets the membrane and good contact is achieved. The interfacial tensions can be measured using common experimental techniques such as a Wilhelmy plate or the du Noüy ring method to predict whether the adhesive will make good contact with the membrane.
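A small numeric illustration of rearranging Young's equation for the contact angle; the interfacial tension values are hypothetical, chosen only to show the computation:

```python
import math

# Hypothetical interfacial tensions, mN/m (illustrative values only)
gamma_mg = 60.0  # membrane-gas
gamma_bm = 20.0  # bioadhesive-membrane
gamma_bg = 50.0  # bioadhesive-gas

# Rearranged Young's equation: cos(theta) = (gamma_mg - gamma_bm) / gamma_bg
cos_theta = (gamma_mg - gamma_bm) / gamma_bg
theta = math.degrees(math.acos(cos_theta))
print(f"contact angle: {theta:.1f} degrees")  # ~36.9; closer to 0 wets better
```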
Consolidation stage
Strong and prolonged adhesion
The consolidation stage of mucoadhesion involves the establishment of adhesive interactions to reinforce strong or prolonged adhesion. When moisture is present, mucoadhesive materials become activated and the system is plasticized. This stimulus allows the mucoadhesive molecules to separate and break free and then link up through weak van der Waals forces and hydrogen bonds. Consolidation becomes essential when the surface is exposed to significant dislodging stresses. Multiple mucoadhesion theories explain the consolidation stage; the main two focus on macromolecular interpenetration and dehydration.
Macromolecular interpenetration theory
The macromolecular interpenetration theory, also known as the diffusion theory, states that the mucoadhesive molecules and mucus glycoproteins mutually interact through interpenetration of their chains and the formation of secondary, semi-permanent adhesive bonds. For such interpenetration to take place, the mucoadhesive device must have features or properties that favor both chemical and mechanical interactions. Molecules that tend to present mucoadhesive properties have hydrogen-bond-forming groups, high molecular weight, flexible chains, and surface-active properties.
The increase in adhesion force is perceived to be associated with the degree of penetration of the polymer chains. The literature states that the degree of penetration required for efficient bioadhesive bonds lies in the range of 0.2–0.5 μm. The following equation can be used to estimate the degree of penetration of polymer and mucus chains:

ℓ = (t·Db)^(1/2)

with t as the contact time and Db as the diffusion coefficient of the mucoadhesive material in the mucus. Maximum adhesion strength is reached when the penetration depth ℓ is approximately equal to the polymer chain size. Mutual solubility and structural similarity will improve the mucoadhesive bond.
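A minimal sketch of the arithmetic this equation implies: assuming an illustrative (not measured) diffusion coefficient, the contact time needed to reach the 0.2–0.5 μm penetration range quoted above follows from t = ℓ²/Db:

```python
# Contact time required for a given interpenetration depth, from l = (t*D)**0.5.
D = 1e-12  # assumed diffusion coefficient of the polymer in mucus, cm^2/s

for depth_um in (0.2, 0.5):
    l_cm = depth_um * 1e-4       # convert um to cm (1 um = 1e-4 cm)
    t = l_cm ** 2 / D            # rearranged: t = l^2 / D, in seconds
    print(f"{depth_um} um -> {t:.0f} s")  # 0.2 um -> 400 s, 0.5 um -> 2500 s
```

The required time scales with the square of the target depth, which is why modest increases in the desired interpenetration depth demand much longer contact times.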
Dehydration theory
The dehydration theory explains why mucoadhesion can arise rapidly. When two gels capable of rapid gelation in an aqueous environment are brought into contact, movement occurs between the two gels until a state of equilibrium is reached. Gels associated with a strong affinity for water will have high osmotic pressures and large swelling forces. The difference in osmotic pressure when these gels contact mucus gels will draw water into the formulation and quickly dehydrate the mucus gel, forcing intermixing and consolidation until equilibrium results.
This mixture of formulation and mucus can increase contact time with the mucous membrane, leading to the consolidation of the adhesive bond. However, the dehydration theory does not apply to solid formulations or highly hydrated forms.
Mucoadhesives in drug delivery
Depending on the dosage form and route of administration, mucoadhesives may be used for either local or systemic drug delivery. An overview of the mucoadhesive properties of such polymers is provided by Vjera Grabovac and Andreas Bernkop-Schnürch. The bioavailability of such drugs is affected by many factors unique to each route of application. In general, mucoadhesives work to increase the contact time at these sites, prolonging the residence time and maintaining an effective release rate. These polymeric coatings may be applied to a wide variety of liquid and solid dosages, each specially suited for the route of administration.
Dosage Forms
Tablets
Tablets are small, solid dosages suitable for the use of mucoadhesive coatings. The coating may be formulated to adhere to a specific mucosa, enabling both systemic and targeted local administration. Tablets are generally taken enterally, as the size and stiffness of the form results in poor patient compliance when administered through other routes.
Patches
In general, patches consist of three separate layers that contribute and control the release of medicine. The outer impermeable backing layer controls the direction of release and reduces drug loss away from the site of contact. It also protects the other layers and acts as a mechanical support. The middle reservoir layer holds the drug and is tailored to provide the specified dosage. The final inner layer consists of the mucoadhesive, allowing the patch to adhere to the specified mucosa.
Gels
As a liquid or semisolid dosage, gels are typically used where a solid form would affect the patient’s comfort. As a trade-off, conventional gels have poor retention rates. This results in unpredictable losses of the drug, as the non-solid dosage is unable to maintain its position at the site of administration. Mucoadhesives increase retention by dynamically increasing the viscosity of the gel after application. This allows the gel to effectively administer the drug at the local site while maintaining the comfort of the patient.
Solutions
These dosage forms are commonly used to deliver drugs to the eye and nasal cavity. They often include mucoadhesive polymers to improve retention on dynamic mucosal surfaces. Some advanced eye drop formulations may also turn from a liquid to a gel (so-called in situ gelling systems) upon administration. For example, gel-forming solutions containing Pluronics could be used to improve the efficiency of eye drops and provide better retention on ocular surfaces.
Routes of Administration
Oromucosal
With a 0.1-0.7 mm thick mucus layer, the oral cavity serves as an important route of administration for mucoadhesive dosages. Permeation sites can be separated into two groups: sublingual and buccal, of which the former is much more permeable than the latter. However, the sublingual mucosa also produces more saliva, resulting in relatively low retention rates. Thus, sublingual mucosa is preferable for rapid onset and short duration treatments, while the buccal mucosa is more appropriate for longer dosage and onset times. Because of this dichotomy, the oral cavity is suitable for both local and systemic administration. Some common dosage forms for the oral cavity include gels, ointments, patches, and tablets. Depending on the dosage form, some drug loss can occur due to swallowing of saliva. This can be minimized by layering the side of the dosage facing the oral cavity with an impermeable coating, as is commonly seen in patches.
Nasal
With an active surface area of 160 cm², the nasal cavity is another noteworthy route of mucoadhesive administration. Due to the sweeping motion of the cilia that line the mucosa, nasal mucus has a quick turnover of 10 to 15 minutes. Because of this, the nasal cavity is most suitable for rapid, local medicinal dosages. Additionally, its close proximity to the blood–brain barrier makes it a convenient route for administering specialized drugs to the central nervous system. Gels, solutions, and aerosols are common dosage forms in the nasal cavity. However, recent research into particles and microspheres has shown increased bioavailability over non-solid forms of medicine, largely due to the use of mucoadhesives.
Ocular
Within the eye, it is difficult to achieve therapeutic concentrations through systemic administration. Often, other parts of the body will reach toxic levels of the medication before the eye reaches the treatment concentration. Consequently, direct administration through the fibrous tunic is common. This is made difficult due to the numerous defense mechanisms in place, such as blinking, tear production, and the tightness of the corneal epithelium. Estimates put tear turnover rates at 5 minutes, meaning most conventional drugs are not retained for long periods of time. Mucoadhesives increase retention rates, either by enhancing the viscosity or bonding directly to one of the mucosae surrounding the eye.
Intravesical
Intravesical drug administration is the delivery of pharmaceuticals to the urinary bladder through a catheter. This route of administration is used in the therapy of bladder cancer and interstitial cystitis. The retention of dosage forms in the bladder is relatively poor because of periodic urine voiding. Some mucoadhesive materials are able to stick to the mucosal lining of the bladder, resist urine washout, and provide sustained drug delivery.
See also
Bioadhesives
Thiomer
Wetting
Adsorption
DLVO theory
References
Adhesives
Biomolecules
Body fluids
Excretion
Routes of administration | Mucoadhesion | [
"Chemistry",
"Biology"
] | 3,276 | [
"Pharmacology",
"Natural products",
"Biochemistry",
"Routes of administration",
"Excretion",
"Organic compounds",
"Biomolecules",
"Molecular biology",
"Structural biology"
] |
21,021,640 | https://en.wikipedia.org/wiki/D.%20E.%20Shaw%20Research | D. E. Shaw Research (DESRES) is a privately held biochemistry research company based in New York City. Under the scientific direction of David E. Shaw, the group's chief scientist, D. E. Shaw Research develops technologies for molecular dynamics simulations (including Anton, a massively parallel special-purpose supercomputer, and Desmond, a software package for use on conventional computers and computer clusters) and applies such simulations to basic scientific research in structural biology and biochemistry, and to the process of computer-aided drug design.
This interdisciplinary laboratory is composed of members with backgrounds in chemistry, biology, hardware engineering and design, computer science, or applied mathematics. In addition to its main New York facility, D. E. Shaw Research has offices in Durham, North Carolina and Hyderabad, India.
References
External links
D. E. Shaw Research
D. E. Shaw Research Publications Page
Molecular dynamics
Molecular dynamics software | D. E. Shaw Research | [
"Physics",
"Chemistry"
] | 185 | [
"Molecular dynamics software",
"Molecular physics",
"Computational chemistry software",
"Computational physics",
"Molecular dynamics",
"Computational chemistry"
] |
21,023,991 | https://en.wikipedia.org/wiki/ARINC%20818 | ARINC 818: Avionics Digital Video Bus (ADVB) is a video interface and protocol standard developed for high bandwidth, low-latency, uncompressed digital video transmission in avionics systems. The standard, which was released in January 2007, has been advanced by ARINC and the aerospace community to meet the stringent needs of high performance digital video. The specification was updated and ARINC 818-2 was released in December 2013, adding a number of new features, including link rates up to 32X fibre channel rates, channel-bonding, switching, field sequential color, bi-directional control and data-only links.
ARINC 818-3 was released in 2018. This revision clarified the 8b/10b encoding rates versus the 64b/66b encoding rates, along with clarifying several issues.
Although simplified, ADVB retains attributes of Fibre Channel that are beneficial for mission-critical applications:
High Speed / High Reliability / Low Latency / Flexibility / High-Performance / Uncompressed Digital Video Transmission
Benefits of ARINC 818 (ADVB):
Low Overhead
Real-time transmission of video signals at high data rates (high bandwidth)
Low-latency
Uncompressed Digital Video transmission
Flexibility - not tied to any one physical layer or video format
Opportunity to standardize high-speed video systems
High reliability - 2 layers of error checking available
Networking capable
Multiple video streams on a single link
Multiple timing classes defined
Suitable for mission-critical applications (up to DAL A)
Background
In aircraft, an ever-increasing amount of information is supplied in the form of images, this information passes through a complex video system before reaching cockpit displays. Video systems include: infrared and other wavelength sensors, optical cameras, radar, flight recorders, map/chart systems, synthetic vision, image fusion systems, heads-up displays (HUD) and heads-down primary flight and multifunction displays, video concentrators, and other subsystems. Video systems are used for taxi and take-off assist, cargo loading, navigation, target tracking, collision avoidance, and other critical functions.
ARINC 818 (ADVB) is a Fibre Channel (FC) protocol that builds on FC-AV (Fibre Channel Audio Video, defined in ANSI INCITS 356-2002), which was used extensively on video systems in the F-18 and the C-130AMP. Although FC-AV has been used on numerous programs, each implementation has been unique. ARINC 818 provides an opportunity to standardize high-speed video systems and has since been adopted by a number of high-profile commercial and military aerospace programs, including the A400M, A350XWB, B787, KC-46A, C-130, KF-X, Comac C919, and numerous other programs. ARINC 818 is also common in avionics suites, such as Proline Fusion by Rockwell Collins, and the TopDeck by Thales.
Overview of ARINC 818 protocol
ARINC 818 (Avionics Digital Video Bus) is a point-to-point, 8b/10b-encoded (or 64B/66B for higher speeds) serial protocol for transmission of video, audio, and data. The protocol is packetized but is video-centric and very flexible, supporting an array of complex video functions including the multiplexing of multiple video streams on a single link or the transmission of a single stream over a dual link. Four different synchronization classes of video are defined, from simple asynchronous to stringent pixel synchronous systems.
ARINC 818 (ADVB) is unidirectional, and does not require handshaking.
ARINC 818 (ADVB) has 15 defined speeds—from 1 Gbit/s to 28 Gbit/s.
Each ADVB project requires an Interface Control Document (ICD). Shared among all project members, the ICD ensures interoperability, reduces the scale of the implementation effort, and defines:
Video format(s) for the project
Embedded data (Ancillary Data)
Video and line timing
Pixel format
Synchronization class
ADVB Packet Structure
The ARINC 818 (ADVB) frame is the basic transport mechanism for ARINC 818. It is important to refer to these packets as “ADVB frames” rather than simply “frames” to eliminate potential confusion with video frames.
The start of an ADVB frame is signaled by a SOFx 4-byte ordered set and terminated with an EOFx ordered set. Every ADVB frame has a standard Fibre Channel header composed of six 32-bit words. These header words pertain to such things as the ADVB frame's origin and intended destination and the ADVB frame's position within the sequence. The Source ID field (SID) in the ADVB frame header allows video from each sensor to be distinguished from the other sensors.
The “payload” contains either video, video parameters, or ancillary data. The payload can vary in size but is limited to 2112 bytes per ADVB frame. To ensure data integrity, all ADVB frames have a 32-bit CRC calculated over the data between the SOFx and the CRC word. The CRC is the same 32-bit polynomial calculation defined for Fibre Channel.
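A simplified Python sketch of this frame layout is shown below. It assembles the SOF ordered set, the six 32-bit header words, the payload, and the CRC; the ordered-set values are placeholders, and the wire-level bit ordering of the real Fibre Channel CRC is glossed over (only the generator polynomial, shared with zlib's CRC-32, is the same):

```python
import struct
import zlib

MAX_PAYLOAD = 2112  # maximum payload bytes per ADVB frame

def build_advb_frame(sof, header_words, payload, eof):
    """Assemble a simplified ADVB frame: SOFx, six 32-bit header words,
    payload, CRC-32 over header + payload, then EOFx."""
    assert len(sof) == 4 and len(eof) == 4, "ordered sets are 4 bytes"
    assert len(header_words) == 6, "Fibre Channel header is six 32-bit words"
    assert len(payload) <= MAX_PAYLOAD
    header = struct.pack(">6I", *header_words)
    crc = struct.pack(">I", zlib.crc32(header + payload) & 0xFFFFFFFF)
    return sof + header + payload + crc + eof

# Hypothetical values: the header word carrying SID=2 identifies the source sensor.
frame = build_advb_frame(b"\x00" * 4, [0, 2, 0, 0, 0, 0], b"\xAA" * 2048, b"\xFF" * 4)
print(len(frame))  # 4 + 24 + 2048 + 4 + 4 = 2084 bytes
```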
ADVB container structure
The ARINC 818 (ADVB) specification defines a “container” as a set of ADVB frames used to transport video. In other words, a video image and data is encapsulated into a “container” that spans many ADVB frames. The “payload” of each ADVB frame contains either data or video. Within a container, ARINC 818 defines objects that contain certain types of data. That is, certain ADVB frames within the container are part of an object.
An example of how ARINC 818 transmits color XGA provides a good overview. XGA RGB requires ~141 Mbyte/s of data transfer (1024 pixels × 3 bytes per pixel × 768 lines × 60 Hz). Adding the protocol overhead and blanking time, a standard link rate of 2.125 Gbit/s is required. ARINC 818 “packetizes” video images into Fibre Channel frames. Each FC frame begins with a 4-byte ordered set, called an SOF (Start of Frame), and ends with an EOF (End of Frame); additionally, a 4-byte CRC is included for data integrity. The payload of the first ADVB frame in a sequence contains container header data that accompanies each video image.
Each XGA video line requires 3072 bytes, which exceeds the maximum FC payload length, so each line is divided into two ADVB frames. Transporting an XGA image requires a “payload” of 1536 FC frames. Additionally, an ADVB header frame is added, making a total of 1537 FC frames. Idle characters are required between FC frames because they are used for synchronization between transmitters and receivers.
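The bandwidth and frame-count figures in this example can be reproduced directly; the following lines are an illustrative check of the arithmetic quoted above:

```python
# Reproduce the XGA-over-ARINC-818 arithmetic from the text.
pixels, lines, hz, bytes_per_px = 1024, 768, 60, 3

video_rate = pixels * bytes_per_px * lines * hz
print(video_rate)  # 141557760 bytes/s, i.e. ~141 Mbyte/s before protocol overhead

line_bytes = pixels * bytes_per_px        # 3072 bytes per video line
frames_per_line = -(-line_bytes // 2112)  # ceiling division: 2 ADVB frames per line
payload_frames = lines * frames_per_line  # 1536 payload frames per image
total_frames = payload_frames + 1         # plus one container header frame
print(frames_per_line, payload_frames, total_frames)  # 2 1536 1537
```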
Applications
Although ARINC 818 was developed specifically for avionics applications, the protocol is already being used in sensor fusion applications where multiple sensor outputs are multiplexed onto a single high-speed link. Features added in ARINC 818-2 facilitate using ARINC 818 as a sensor interface.
The ARINC 818 specification does not mandate which physical layer is to be used, and implementations are done using both copper and fiber. Although the majority of implementations use fiber, low-speed implementations of ARINC 818 (1.0625 Gbit/s to 6.375 Gbit/s) sometimes use copper (twinax, TSP, or coax). Most commonly, either 850 nm multimode fiber (<500 m) or 1310 nm single-mode fiber (up to 10 km) is used. ARINC 818 lends itself to applications that require few conductors (slip rings, turrets), low weight (aerospace), EMI resistance, or long-distance transmission (aerospace, ships).
Flexibility vs. Interoperability
ARINC 818 is flexible and can accommodate many types of video and data applications. It is the intention of the standard that all implementation be accompanied by a small interface control document (ICD) that defines key parameters of the header such as: link speed, video resolution, color scheme, size of ancillary data, pixel format, timing classification, or bit-packing schemes. Interoperability is only guaranteed among equipment built to the same ICD.
Implementation considerations
ARINC 818 uses an FC physical layer that can be constructed from any FC-compatible 8b/10b SerDes, which are common in large FPGAs.
ARINC 818 transmitters must assemble valid FC frames, including starting and ending ordered sets, headers, and CRC. This can easily be done with VHDL state machines, and many PLD SerDes include built-in CRC calculations.
The flexibility of ARINC 818 allows for receiver implementations using either full image buffers or just display-line buffers. For either, synchronization issues must be considered at the pixel, line, and frame level.
Line buffer or FIFO-based receivers will require that the transmitter adhere to strict line timing requirements of the display. Since the display horizontal scanning must be precise, the arrival time of lines will also need to be precise. ARINC 818 intends that timing parameters such as these be captured in an ICD specific to the video system.
The authors of ARINC 818 built upon many years of combined experience of using FC to transport different video formats, and key implementation details are included in the specification, including examples of common analog formats.
ARINC 818-2 updates
ARINC 818-2, ratified in December 2013, adds features to accommodate higher link rates, support for compression and encryption, networking, and sophisticated display schemes, such as the channel bonding used on large area displays (LADs).
Link rates: At the time the original ARINC 818 specification was ratified, the fiber-channel protocol supported link rates up to 8.5 gigabits per second (Gb/s). ARINC 818-2 added rates of 5.0, 6.375 (FC 6x), 12.75 (FC 12x), 14.025 (FC 16x), 21.0375 (FC 24x), and 28.05 (FC 32x) Gb/s. The 6x, 12x, and 24x speeds were added to accommodate the use of high-speed, bi-directional coax with power as a physical medium. The specification also provides for non-standard link rates for bi-directional return path for applications such as camera control where high speed video links are not required.
Compression and Encryption: ARINC 818 was originally envisioned as carrying only uncompressed video and audio. Applications such as high-resolution sensors, UAV/UAS with bandwidth-limited downlinks, and data-only applications drove the need to compress and/or encrypt a link. Sticking to a philosophy of maximum flexibility, ARINC 818-2 calls for the ICD to specify implementation details for compression and encryption. The ARINC 818 protocol does not provide a means for compression and encryption; it simply provides flags to indicate that a payload is compressed or encrypted.
Switching: ARINC 818 was designed as a point-to-point protocol. Since many of the newer implementations of ARINC 818 have multiple displays and/or many channels of ARINC 818 (10 or more), switching has become more important. The new specification requires that active switching can only occur between frames. In effect, to prevent broken video frames, the switch must wait until the vertical blanking interval. Again, the ICD controls the implementation details.
Field Sequential Color: A video format code was added to support field sequential color. The color field-sequential mode will typically send each color component in a separate container.
Channel Bonding: To overcome link bandwidth limitations of FPGAs, ARINC 818-2 supports multiple links in parallel. The video frame is broken into smaller segments and transmitted on two or more links. Each link must transmit a complete ADVB frame with header, and the ICD addresses latency and skew between the links.
Data-only Links: ARINC 818-2 provides for data-only links, typically used in command-and-control channels, such as those needed for bi-directional camera interfaces. These may employ a standard link rate or a non-standard rate specified by the ICD.
Regions of Interest: The ARINC 818-2 protocol provides a means for defining partial images, tiling, and region-of-interest that are important for high-speed sensors and stereo displays.
ARINC 818-3 updates
Defines display emulation mode for test equipment
Adds new material describing a latency budget for ARINC 818 devices used in transmit and receive modes
10 Gbit/s as the highest 8b/10b-encoded bus speed
Adds 64B/66B encoding for speeds of 12 Gbit/s and higher
Supports 28.05 Gbit/s (FC32X) bus speeds using 256B/257B or 64B/66B encoding
Overall this revision will allow for technologies such as 4K and 8K displays, windowless cockpits, VR and high-bandwidth sensors & cameras around the aircraft.
See also
Aircraft flight control system
Fibre Channel 8b/10b encoding
Fibre Channel network protocols
Integrated Modular Avionics
References
818-1 Avionics Digital Video Bus (ADVB) High Data Rate, published by ARINC 2007
ARINC 818 Becomes New Protocol Standard for High-Performance Video Systems, COTS Journal, Dec 2006
Explaining ARINC 818, Avionics Magazine March 1, 2008
Paul Grunwald, “What’s New in ARINC 818-2,” 32nd Digital Avionics Systems Conference, Syracuse, New York, October 6–10, 2013.
External links
ARINC 818
ARINC standards
Avionics
Fibre Channel
Serial buses | ARINC 818 | [
"Technology"
] | 2,876 | [
"Avionics",
"Aircraft instruments"
] |
19,916,559 | https://en.wikipedia.org/wiki/Atomic%20nucleus | The atomic nucleus is the small, dense region consisting of protons and neutrons at the center of an atom, discovered in 1911 by Ernest Rutherford based on the 1909 Geiger–Marsden gold foil experiment. After the discovery of the neutron in 1932, models for a nucleus composed of protons and neutrons were quickly developed by Dmitri Ivanenko and Werner Heisenberg. An atom is composed of a positively charged nucleus, with a cloud of negatively charged electrons surrounding it, bound together by electrostatic force. Almost all of the mass of an atom is located in the nucleus, with a very small contribution from the electron cloud. Protons and neutrons are bound together to form a nucleus by the nuclear force.
The diameter of the nucleus is in the range of 1.70 fm (1.70×10⁻¹⁵ m) for hydrogen (the diameter of a single proton) to about 11.7 fm for uranium. These dimensions are much smaller than the diameter of the atom itself (nucleus + electron cloud), by a factor of about 26,634 (uranium atomic radius is about 156 pm (156×10⁻¹² m)) to about 60,250 (hydrogen atomic radius is about 52.92 pm).
The branch of physics involved with the study and understanding of the atomic nucleus, including its composition and the forces that bind it together, is called nuclear physics.
History
The nucleus was discovered in 1911, as a result of Ernest Rutherford's efforts to test Thomson's "plum pudding model" of the atom. The electron had already been discovered by J. J. Thomson. Knowing that atoms are electrically neutral, J. J. Thomson postulated that there must be a positive charge as well. In his plum pudding model, Thomson suggested that an atom consisted of negative electrons randomly scattered within a sphere of positive charge. Ernest Rutherford later devised an experiment with his research partner Hans Geiger and with the help of Ernest Marsden that involved the deflection of alpha particles (helium nuclei) directed at a thin sheet of metal foil. He reasoned that if J. J. Thomson's model were correct, the positively charged alpha particles would easily pass through the foil with very little deviation in their paths, as the foil should act as electrically neutral if the negative and positive charges are so intimately mixed as to make it appear neutral. To his surprise, many of the particles were deflected at very large angles. Because the mass of an alpha particle is about 8000 times that of an electron, it became apparent that a very strong force must be present if it could deflect the massive and fast-moving alpha particles. He realized that the plum pudding model could not be accurate and that the deflections of the alpha particles could only be explained if the positive and negative charges were separated from each other and the mass of the atom was a concentrated point of positive charge. This justified the idea of a nuclear atom with a dense center of positive charge and mass.
Etymology
The term nucleus is from the Latin word nucleus, a diminutive of nux ('nut'), meaning 'the kernel' (i.e., the 'small nut') inside a watery type of fruit (like a peach). In 1844, Michael Faraday used the term to refer to the "central point of an atom". The modern atomic meaning was proposed by Ernest Rutherford in 1912. The adoption of the term "nucleus" to atomic theory, however, was not immediate. In 1916, for example, Gilbert N. Lewis stated, in his famous article The Atom and the Molecule, that "the atom is composed of the kernel and an outer atom or shell."
Similarly, the term kern meaning kernel is used for nucleus in German and Dutch.
Principles
The nucleus of an atom consists of neutrons and protons, which in turn are the manifestation of more elementary particles, called quarks, that are held in association by the nuclear strong force in certain stable combinations of hadrons, called baryons. The nuclear strong force extends far enough from each baryon so as to bind the neutrons and protons together against the repulsive electrical force between the positively charged protons. The nuclear strong force has a very short range, and essentially drops to zero just beyond the edge of the nucleus. The collective action of the positively charged nucleus is to hold the negatively charged electrons in their orbits about the nucleus. The collection of negatively charged electrons orbiting the nucleus displays an affinity for certain configurations and numbers of electrons that make their orbits stable. Which chemical element an atom represents is determined by the number of protons in the nucleus; the neutral atom will have an equal number of electrons orbiting that nucleus. Individual chemical elements can create more stable electron configurations by combining to share their electrons. It is that sharing of electrons to create stable electronic orbits about the nuclei that appears to us as the chemistry of our macro world.
Protons define the entire charge of a nucleus, and hence its chemical identity. Neutrons are electrically neutral, but contribute to the mass of a nucleus to nearly the same extent as the protons. Neutrons can explain the phenomenon of isotopes (same atomic number with different atomic mass). The main role of neutrons is to reduce electrostatic repulsion inside the nucleus.
Composition and shape
Protons and neutrons are fermions, with different values of the strong isospin quantum number, so two protons and two neutrons can share the same space wave function since they are not identical quantum entities. They are sometimes viewed as two different quantum states of the same particle, the nucleon. Two fermions, such as two protons, or two neutrons, or a proton + neutron (the deuteron) can exhibit bosonic behavior when they become loosely bound in pairs, which have integer spin.
In the rare case of a hypernucleus, a third baryon called a hyperon, containing one or more strange quarks and/or other unusual quark(s), can also share the wave function. However, this type of nucleus is extremely unstable and not found on Earth except in high-energy physics experiments.
The neutron has a positively charged core of radius ≈ 0.3 fm surrounded by a compensating negative charge of radius between 0.3 fm and 2 fm. The proton has an approximately exponentially decaying positive charge distribution with a mean square radius of about 0.8 fm.
The shape of the atomic nucleus can be spherical, rugby ball-shaped (prolate deformation), discus-shaped (oblate deformation), triaxial (a combination of oblate and prolate deformation) or pear-shaped.
Forces
Nuclei are bound together by the residual strong force (nuclear force). The residual strong force is a minor residuum of the strong interaction which binds quarks together to form protons and neutrons. This force is much weaker between neutrons and protons because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (such as van der Waals forces that act between two inert gas atoms) are much weaker than the electromagnetic forces that hold the parts of the atoms together internally (for example, the forces that hold the electrons in an inert gas atom bound to its nucleus).
The nuclear force is highly attractive at the distance of typical nucleon separation, and this overwhelms the repulsion between protons due to the electromagnetic force, thus allowing nuclei to exist. However, the residual strong force has a limited range because it decays quickly with distance (see Yukawa potential); thus only nuclei smaller than a certain size can be completely stable. The largest known completely stable nucleus (i.e. stable to alpha, beta, and gamma decay) is lead-208 which contains a total of 208 nucleons (126 neutrons and 82 protons). Nuclei larger than this maximum are unstable and tend to be increasingly short-lived with larger numbers of nucleons. However, bismuth-209 is also stable to beta decay and has the longest half-life to alpha decay of any known isotope, estimated at a billion times longer than the age of the universe.
The residual strong force is effective over a very short range (usually only a few femtometres (fm); roughly one or two nucleon diameters) and causes an attraction between any pair of nucleons. For example, between a proton and a neutron to form a deuteron [NP], and also between protons and protons, and neutrons and neutrons.
Halo nuclei and nuclear force range limits
The effective absolute limit of the range of the nuclear force (also known as the residual strong force) is represented by halo nuclei such as lithium-11 or boron-14, in which dineutrons, or other collections of neutrons, orbit at distances of about 10 fm (roughly similar to the 8 fm radius of the nucleus of uranium-238). These nuclei are not maximally dense. Halo nuclei form at the extreme edges of the chart of the nuclides—the neutron drip line and proton drip line—and are all unstable with short half-lives, measured in milliseconds; for example, lithium-11 has a half-life of 8.8 ms.
Halos in effect represent an excited state with nucleons in an outer quantum shell which has unfilled energy levels "below" it (both in terms of radius and energy). The halo may be made of either neutrons [NN, NNN] or protons [PP, PPP]. Nuclei which have a single neutron halo include 11Be and 19C. A two-neutron halo is exhibited by 6He, 11Li, 17B, 19B and 22C. Two-neutron halo nuclei break into three fragments, never two, and are called Borromean nuclei because of this behavior (referring to a system of three interlocked rings in which breaking any ring frees both of the others). 8He and 14Be both exhibit a four-neutron halo. Nuclei which have a proton halo include 8B and 26P. A two-proton halo is exhibited by 17Ne and 27S. Proton halos are expected to be more rare and unstable than the neutron examples, because of the repulsive electromagnetic forces of the halo proton(s).
Nuclear models
Although the standard model of physics is widely believed to completely describe the composition and behavior of the nucleus, generating predictions from theory is much more difficult than for most other areas of particle physics. This is due to two reasons:
In principle, the physics within a nucleus can be derived entirely from quantum chromodynamics (QCD). In practice however, current computational and mathematical approaches for solving QCD in low-energy systems such as the nuclei are extremely limited. This is due to the phase transition that occurs between high-energy quark matter and low-energy hadronic matter, which renders perturbative techniques unusable, making it difficult to construct an accurate QCD-derived model of the forces between nucleons. Current approaches are limited to either phenomenological models such as the Argonne v18 potential or chiral effective field theory.
Even if the nuclear force is well constrained, a significant amount of computational power is required to accurately compute the properties of nuclei ab initio. Developments in many-body theory have made this possible for many low mass and relatively stable nuclei, but further improvements in both computational power and mathematical approaches are required before heavy nuclei or highly unstable nuclei can be tackled.
Historically, experiments have been compared to relatively crude models that are necessarily imperfect. None of these models can completely explain experimental data on nuclear structure.
The nuclear radius (R) is considered to be one of the basic quantities that any model must predict. For stable nuclei (not halo nuclei or other unstable distorted nuclei) the nuclear radius is roughly proportional to the cube root of the mass number (A) of the nucleus, particularly in nuclei containing many nucleons, as they arrange in more spherical configurations:

R ∝ A^(1/3)

The stable nucleus has approximately a constant density, and therefore the nuclear radius R can be approximated by the following formula:

R = r0 × A^(1/3)

where A = atomic mass number (the number of protons Z, plus the number of neutrons N) and r0 = 1.25 fm = 1.25×10⁻¹⁵ m. In this equation, the "constant" r0 varies by 0.2 fm, depending on the nucleus in question, but this is less than 20% change from a constant.
In other words, packing protons and neutrons in the nucleus gives approximately the same total size result as packing hard spheres of a constant size (like marbles) into a tight spherical or almost spherical bag (some stable nuclei are not quite spherical, but are known to be prolate).
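A short illustration of the radius formula above, taking r0 at its nominal value of 1.25 fm (the resulting radii are approximate):

```python
# Empirical nuclear radius R = r0 * A**(1/3); the text notes r0 itself
# varies by about 0.2 fm between nuclei.
r0 = 1.25  # fm

def nuclear_radius(A):
    return r0 * A ** (1 / 3)

for name, A in [("hydrogen-1", 1), ("lead-208", 208), ("uranium-238", 238)]:
    print(f"{name}: R ~ {nuclear_radius(A):.2f} fm")
# hydrogen-1: 1.25 fm; lead-208: ~7.41 fm; uranium-238: ~7.75 fm
```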
Models of nuclear structure include:
Cluster model
The cluster model describes the nucleus as a molecule-like collection of proton-neutron groups (e.g., alpha particles) with one or more valence neutrons occupying molecular orbitals.
Liquid drop model
Early models of the nucleus viewed the nucleus as a rotating liquid drop. In this model, the trade-off of long-range electromagnetic forces and relatively short-range nuclear forces, together cause behavior which resembled surface tension forces in liquid drops of different sizes. This formula is successful at explaining many important phenomena of nuclei, such as their changing amounts of binding energy as their size and composition changes (see semi-empirical mass formula), but it does not explain the special stability which occurs when nuclei have special "magic numbers" of protons or neutrons.
The terms in the semi-empirical mass formula, which can be used to approximate the binding energy of many nuclei, are considered as the sum of five types of energies (listed below; a short numerical sketch follows the list). The picture of a nucleus as a drop of incompressible liquid then roughly accounts for the observed variation of binding energy of the nucleus:
Volume energy. When an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. So, this nuclear energy is proportional to the volume.
Surface energy. A nucleon at the surface of a nucleus interacts with fewer other nucleons than one in the interior of the nucleus and hence its binding energy is less. This surface energy term takes that into account and is therefore negative and is proportional to the surface area.
Coulomb energy. The electric repulsion between each pair of protons in a nucleus contributes toward decreasing its binding energy.
Asymmetry energy (also called Pauli Energy). An energy associated with the Pauli exclusion principle. Were it not for the Coulomb energy, the most stable form of nuclear matter would have the same number of neutrons as protons, since unequal numbers of neutrons and protons imply filling higher energy levels for one type of particle, while leaving lower energy levels vacant for the other type.
Pairing energy. An energy which is a correction term that arises from the tendency of proton pairs and neutron pairs to occur. An even number of particles is more stable than an odd number.
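The following sketch combines the five terms above into the usual semi-empirical binding-energy expression. The coefficient values (in MeV) are one commonly quoted fit and are not taken from this article:

```python
# Semi-empirical mass formula: B(Z, N) as the sum of the five terms above.
aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18  # MeV, one common fit

def binding_energy(Z, N):
    A = Z + N
    pairing = aP / A ** 0.5
    if Z % 2 == 0 and N % 2 == 0:      # even-even nuclei gain pairing energy
        delta = pairing
    elif Z % 2 == 1 and N % 2 == 1:    # odd-odd nuclei lose it
        delta = -pairing
    else:
        delta = 0.0                    # even-odd nuclei: no correction
    return (aV * A                             # volume
            - aS * A ** (2 / 3)                # surface
            - aC * Z * (Z - 1) / A ** (1 / 3)  # Coulomb
            - aA * (A - 2 * Z) ** 2 / A        # asymmetry
            + delta)                           # pairing

print(round(binding_energy(26, 30) / 56, 2))  # iron-56: ~8.85 MeV per nucleon
```

With these coefficients the formula reproduces the measured binding energy per nucleon of mid-mass nuclei (about 8.8 MeV for iron-56) to within a few percent.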
Shell models and other quantum models
A number of models for the nucleus have also been proposed in which nucleons occupy orbitals, much like the atomic orbitals in atomic physics theory. These wave models imagine nucleons to be either sizeless point particles in potential wells, or else probability waves as in the "optical model", frictionlessly orbiting at high speed in potential wells.
In the above models, the nucleons may occupy orbitals in pairs, due to being fermions, which allows explanation of even/odd Z and N effects well known from experiments. The exact nature and capacity of nuclear shells differs from those of electrons in atomic orbitals, primarily because the potential well in which the nucleons move (especially in larger nuclei) is quite different from the central electromagnetic potential well which binds electrons in atoms. Some resemblance to atomic orbital models may be seen in a small atomic nucleus like that of helium-4, in which the two protons and two neutrons separately occupy 1s orbitals analogous to the 1s orbital for the two electrons in the helium atom, and achieve unusual stability for the same reason. Nuclei with 5 nucleons are all extremely unstable and short-lived, yet helium-3, with 3 nucleons, is very stable even without a closed 1s orbital shell. Another nucleus with 3 nucleons, the triton (hydrogen-3), is unstable and will decay into helium-3 when isolated. Weak nuclear stability with 2 nucleons {NP} in the 1s orbital is found in the deuteron (hydrogen-2), with only one nucleon in each of the proton and neutron potential wells. While each nucleon is a fermion, the {NP} deuteron is a boson and thus does not follow Pauli Exclusion for close packing within shells. Lithium-6 with 6 nucleons is highly stable without a closed second 1p shell orbital. For light nuclei with total nucleon numbers 1 to 6, only those with 5 do not show some evidence of stability. Observations of beta-stability of light nuclei outside closed shells indicate that nuclear stability is much more complex than simple closure of shell orbitals with magic numbers of protons and neutrons.
For larger nuclei, the shells occupied by nucleons begin to differ significantly from electron shells, but nevertheless, present nuclear theory does predict the magic numbers of filled nuclear shells for both protons and neutrons. The closure of the stable shells predicts unusually stable configurations, analogous to the noble group of nearly-inert gases in chemistry. An example is the stability of the closed shell of 50 protons, which allows tin to have 10 stable isotopes, more than any other element. Similarly, the distance from shell-closure explains the unusual instability of isotopes which have far from stable numbers of these particles, such as the radioactive elements 43 (technetium) and 61 (promethium), each of which is preceded and followed by 17 or more stable elements.
There are however problems with the shell model when an attempt is made to account for nuclear properties well away from closed shells. This has led to complex post hoc distortions of the shape of the potential well to fit experimental data, but the question remains whether these mathematical manipulations actually correspond to the spatial deformations in real nuclei. Problems with the shell model have led some to propose realistic two-body and three-body nuclear force effects involving nucleon clusters and then build the nucleus on this basis. Three such cluster models are the 1936 Resonating Group Structure model of John Wheeler, Close-Packed Spheron Model of Linus Pauling and the 2D Ising Model of MacGregor.
See also
Notes
References
External links
The Nucleus – a chapter from an online textbook
The LIVEChart of Nuclides – IAEA in Java or HTML
Article on the "nuclear shell model", giving nuclear shell filling for the various elements. Accessed September 16, 2009.
Timeline: Subatomic Concepts, Nuclear Science & Technology .
Atoms
Nuclear chemistry
Nuclear physics
Subatomic particles
Radiochemistry
Proton
Electron | Atomic nucleus | [
"Physics",
"Chemistry"
] | 3,900 | [
"Electron",
"Molecular physics",
"Nuclear chemistry",
"Subatomic particles",
"Particle physics",
"Radioactivity",
"Nuclear physics",
"Radiochemistry",
"nan",
"Atoms",
"Matter"
] |
19,916,615 | https://en.wikipedia.org/wiki/Electron%20shell | In chemistry and atomic physics, an electron shell may be thought of as an orbit that electrons follow around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on further and further from the nucleus. The shells correspond to the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with the letters used in X-ray notation (K, L, M, ...). A useful guide when understanding electron shells in atoms is to note that each row on the conventional periodic table of elements represents an electron shell.
Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second shell can hold up to eight electrons, the third shell can hold up to 18, continuing with the general formula that the nth shell can hold up to 2n² electrons. For an explanation of why electrons exist in these shells, see electron configuration.
Each shell consists of one or more subshells, and each subshell consists of one or more atomic orbitals.
History
In 1913, Niels Bohr proposed a model of the atom, giving the arrangement of electrons in their sequential orbits. At that time, Bohr allowed the capacity of the inner orbit of the atom to increase to eight electrons as the atoms got larger, and "in the scheme given below the number of electrons in this [outer] ring is arbitrarily put equal to the normal valency of the corresponding element". Using these and other constraints, he proposed configurations that are in accord with those now known only for the first six elements. "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:"
The shell terminology comes from Arnold Sommerfeld's modification of the 1913 Bohr model. During this period Bohr was working with Walther Kossel, whose papers in 1914 and in 1916 called the orbits "shells". Sommerfeld retained Bohr's planetary model, but added mildly elliptical orbits (characterized by additional quantum numbers ℓ and m) to explain the fine spectroscopic structure of some elements. The multiple electrons with the same principal quantum number (n) had close orbits that formed a "shell" of positive thickness, instead of the circular orbits of Bohr's model, which were called "rings" and were described by a plane.
The existence of electron shells was first observed experimentally in Charles Barkla's and Henry Moseley's X-ray absorption studies. Moseley's work did not directly concern the study of electron shells, because he was trying to prove that the periodic table was not arranged by weight, but by the charge of the protons in the nucleus. However, because the number of electrons in an electrically neutral atom equals the number of protons, this work was extremely important to Niels Bohr who mentioned Moseley's work several times in his 1962 interview. Moseley was part of Rutherford's group, as was Niels Bohr. Moseley measured the frequencies of X-rays emitted by every element between calcium and zinc and found that the frequencies became greater as the elements got heavier. This led to the theory that electrons were emitting X-rays when they were shifted to lower shells. This led to the conclusion that the electrons were in Kossel's shells with a definite limit per shell, labeling them with the letters K, L, M, N, O, P, and Q. The origin of this terminology was alphabetic. Barkla, who worked independently from Moseley as an X-ray spectrometry experimentalist, first noticed two distinct types of scattering from shooting X-rays at elements in 1909 and named them "A" and "B". Barkla described these two types of X-ray diffraction: the first was unconnected with the type of material used in the experiment and could be polarized. The second diffraction beam he called "fluorescent" because it depended on the irradiated material. It was not known what these lines meant at the time, but in 1911 Barkla decided there might be scattering lines previous to "A", so he began at "K". However, later experiments indicated that the K absorption lines are produced by the innermost electrons. These letters were later found to correspond to the n values 1, 2, 3, etc. that were used in the Bohr model. They are used in the spectroscopic Siegbahn notation.
The work of assigning electrons to shells was continued from 1913 to 1925 by many chemists and a few physicists. Niels Bohr was one of the few physicists who followed the chemists' work of defining the periodic table, while Arnold Sommerfeld worked more on trying to make a relativistic working model of the atom that would explain the fine structure of the spectra from a classical orbital physics standpoint through the Atombau approach. Einstein and Rutherford, who did not follow chemistry, were unaware of the chemists who were developing electron shell theories of the periodic table from a chemistry point of view, such as Irving Langmuir, Charles Bury, J.J. Thomson, and Gilbert Lewis, who all introduced corrections to Bohr's model such as a maximum of two electrons in the first shell, eight in the next and so on, and were responsible for explaining valency in the outer electron shells, and the building up of atoms by adding electrons to the outer shells. So when Bohr outlined his electron shell atomic theory in 1922, there was no mathematical formula for the theory, and Rutherford said he was hard put "to form an idea of how you arrive at your conclusions". Einstein said of Bohr's 1922 paper that his "electron-shells of the atoms together with their significance for chemistry appeared to me like a miracle – and appears to me as a miracle even today". Arnold Sommerfeld, who had followed the Atombau structure of electrons instead of Bohr who was familiar with the chemists' views of electron structure, spoke of Bohr's 1921 lecture and 1922 article on the shell model as "the greatest advance in atomic structure since 1913". However, the electron shell development of Niels Bohr was basically the same theory as that of the chemist Charles Rugeley Bury in his 1921 paper.
As work continued on the electron shell structure of the Sommerfeld-Bohr Model, Sommerfeld had introduced three "quantum numbers n, k, and m, that described the size of the orbit, the shape of the orbit, and the direction in which the orbit was pointing." Because we use k for the Boltzmann constant, the azimuthal quantum number was changed to ℓ. When the modern quantum mechanics theory was put forward based on Heisenberg's matrix mechanics and Schrödinger's wave equation, these quantum numbers were kept in the current quantum theory but were changed to n being the principal quantum number, and m being the magnetic quantum number.
However, the final form of the electron shell model still in use today for the number of electrons in shells was discovered in 1923 by Edmund Stoner, who introduced the principle that the nth shell holds at most 2n² electrons. Seeing this in 1925, Wolfgang Pauli added a fourth quantum number, "spin", during the old quantum theory period of the Sommerfeld-Bohr Solar System atom to complete the modern electron shell theory.
Subshells
Each shell is composed of one or more subshells, which are themselves composed of atomic orbitals. For example, the first (K) shell has one subshell, called 1s; the second (L) shell has two subshells, called 2s and 2p; the third shell has 3s, 3p, and 3d; the fourth shell has 4s, 4p, 4d and 4f; the fifth shell has 5s, 5p, 5d, and 5f and can theoretically hold more in the 5g subshell that is not occupied in the ground-state electron configuration of any known element. The various possible subshells are shown in the following table:
The first column is the "subshell label", a lowercase-letter label for the type of subshell. For example, the "4s subshell" is a subshell of the fourth (N) shell, with the type (s) described in the first row.
The second column is the azimuthal quantum number (ℓ) of the subshell. The precise definition involves quantum mechanics, but it is a number that characterizes the subshell.
The third column is the maximum number of electrons that can be put into a subshell of that type. For example, the top row says that each s-type subshell (1s, 2s, etc.) can have at most two electrons in it. Each of the following subshells (p, d, f, g) can have 4 more electrons than the one preceding it.
The fourth column says which shells have a subshell of that type. For example, looking at the top two rows, every shell has an s subshell, while only the second shell and higher have a p subshell (i.e., there is no "1p" subshell).
The final column gives the historical origin of the labels s, p, d, and f. They come from early studies of atomic spectral lines. The other labels, namely g, h, and i, are an alphabetic continuation following the last historically originated label of f.
Number of electrons in each shell
Each subshell is constrained to hold 4ℓ + 2 electrons at most, namely:
Each s subshell holds at most 2 electrons
Each p subshell holds at most 6 electrons
Each d subshell holds at most 10 electrons
Each f subshell holds at most 14 electrons
Each g subshell holds at most 18 electrons
Therefore, the K shell, which contains only an s subshell, can hold up to 2 electrons; the L shell, which contains an s and a p, can hold up to 2 + 6 = 8 electrons, and so forth; in general, the nth shell can hold up to 2n² electrons.
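A small illustration of these capacities, computing the 4ℓ + 2 subshell rule and checking that summing it over a shell's subshells reproduces the 2n² formula:

```python
# Subshell capacity is 4*l + 2; a shell's capacity is the sum over its
# subshells (l = 0 .. n-1), which works out to 2*n**2.
def subshell_capacity(l):
    return 4 * l + 2

def shell_capacity(n):
    return sum(subshell_capacity(l) for l in range(n))

for n, letter in zip(range(1, 5), "KLMN"):
    print(letter, shell_capacity(n), 2 * n ** 2)
# K 2 2 / L 8 8 / M 18 18 / N 32 32
```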
Although that formula gives the maximum in principle, that maximum is only achieved (in known elements) for the first four shells (K, L, M, N). No known element has more than 32 electrons in any one shell. This is because the subshells are filled according to the Aufbau principle. The first elements to have more than 32 electrons in one shell would belong to the g-block of period 8 of the periodic table. These elements would have some electrons in their 5g subshell and thus have more than 32 electrons in the O shell (fifth principal shell).
Subshell energies and filling order
Although it is sometimes stated that all the electrons in a shell have the same energy, this is an approximation. However, the electrons in one subshell do have exactly the same level of energy, with later subshells having more energy per electron than earlier ones. This effect is great enough that the energy ranges associated with shells can overlap.
The filling of the shells and subshells with electrons proceeds from subshells of lower energy to subshells of higher energy. This follows the n + ℓ rule which is also commonly known as the Madelung rule. Subshells with a lower n + ℓ value are filled before those with higher n + ℓ values. In the case of equal n + ℓ values, the subshell with a lower n value is filled first.
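A minimal sketch of the Madelung ordering, sorting subshells by n + ℓ and breaking ties by lower n (restricted here to the subshells occupied in known elements):

```python
# Order subshells by the Madelung (n + l) rule, ties broken by lower n.
letters = "spdf"  # l = 0..3 suffices for the ground states of known elements
order = sorted(
    ((n, l) for n in range(1, 8) for l in range(min(n, 4)) if n + l <= 8),
    key=lambda nl: (nl[0] + nl[1], nl[0]),
)
print(" ".join(f"{n}{letters[l]}" for n, l in order))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p
```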
Because of this, the later shells are filled over vast sections of the periodic table. The K shell fills in the first period (hydrogen and helium), while the L shell fills in the second (lithium to neon). However, the M shell starts filling at sodium (element 11) but does not finish filling till copper (element 29), and the N shell is even slower: it starts filling at potassium (element 19) but does not finish filling till ytterbium (element 70). The O, P, and Q shells begin filling in the known elements (respectively at rubidium, caesium, and francium), but they are not complete even at the heaviest known element, oganesson (element 118).
List of elements with electrons per shell
The list below gives the elements arranged by increasing atomic number and shows the number of electrons per shell. At a glance, the subsets of the list show obvious patterns. In particular, every set of five elements before each noble gas (group 18) heavier than helium has successive numbers of electrons in the outermost shell, namely three to seven.
Sorting the table by chemical group shows additional patterns, especially with respect to the last two outermost shells. (Elements 57 to 71 belong to the lanthanides, while 89 to 103 are the actinides.)
The list below is primarily consistent with the Aufbau principle. However, there are a number of exceptions to the rule; for example palladium (atomic number 46) has no electrons in the fifth shell, unlike other atoms with lower atomic number. The elements past 108 have such short half-lives that their electron configurations have not yet been measured, and so predictions have been inserted instead.
See also
Periodic table (electron configurations)
Electron counting
18-electron rule
Core charge
References
Electron
Atomic physics
Quantum mechanics
Chemical bonding | Electron shell | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,834 | [
"Electron",
"Molecular physics",
"Theoretical physics",
"Quantum mechanics",
"Condensed matter physics",
"Atomic physics",
" molecular",
"Atomic",
"nan",
"Chemical bonding",
" and optical physics"
] |
6,436,309 | https://en.wikipedia.org/wiki/Master%20of%20Physics | A Master of Physics honours (or MPhys (Hons)) degree is a specific master's degree for courses in the field of physics.
United Kingdom
In England and Wales, the MPhys is an undergraduate award available after pursuing a four-year course of study at a university. In Scotland the course has a five-year duration. In some universities, the degree has the variant abbreviation MSci. These are taught courses, with a research element in the final year — this can vary from a small component to an entire year working with a research group — and are not available as postgraduate qualifications in most cases, although depending on institution the final year can be considered as approximately equivalent to an MSc.
Structure
In terms of course structure, MPhys degrees usually follow the pattern familiar from bachelor's degrees with lectures, laboratory work, coursework and exams each year. Usually one, or more commonly two, substantial projects are to be completed in the fourth year which may well have research elements. At the end of the second or third years, there is usually a threshold of academic performance in examinations to be reached to allow progression into the final year. Final results are, in most cases, awarded on the standard British undergraduate degree classification scale, although some universities award something structurally similar to 'Distinction', 'Merit', 'Pass' or 'Fail', as this is often the way that taught postgraduate master's degrees are classified.
Degree schemes
It is usual for there to be some variation in the MPhys schemes, to allow for students to study the area of physics which most interests them. For example, Lancaster University's physics department offer the following schemes:
MPhys Physics
MPhys Physics, Astrophysics and Cosmology
MPhys Physics with Particle Physics and Cosmology
MPhys Physics with Space Science
MPhys Physics with Biomedical Physics
MPhys Theoretical Physics
MPhys Theoretical Physics with Mathematics
These schemes will usually incorporate the same core modules with additional scheme specific modules. Students tend to take all the same core modules during their first year and start to specialise in their second year. In some cases, optional modules can be taken from other schemes.
See also
British degree abbreviations
Bachelor's degrees
Master's degrees
References
Physics
Physics education | Master of Physics | [
"Physics"
] | 450 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
26,893,990 | https://en.wikipedia.org/wiki/Insertion%20mount%20machine | An insertion mount machine or inserter is a device used to insert the leads of electronic components through holes in printed circuit boards.
Machine configuration
An insertion mount machine often has a rotary table on an X- and Y-axis positioning system which moves the board to the necessary position for the component's insertion into the board. The machine can be configured as a standalone machine.
Axial insertion
An axial inserter takes axial-leaded through-hole components from reels, which are fed into dispensing heads that cut the parts onto a chain in the order of insertion. The components are transferred from the sequence chain to the insertion chain, which brings each component underneath the insertion head. The head cuts the leads to the correct length and insertion span, bends them 90°, and inserts the component leads into the board while a clinch assembly underneath cuts and bends the leads towards each other.
Radial insertion
A radial inserter takes radial-leaded through-hole components from reels, which are fed into dispensing heads that cut each component from its reel and place it onto a chain in the order of insertion. The component is brought to a component transfer assembly behind the insertion head, transferred to the insertion head, and then inserted into the board while a clinch assembly underneath cuts and bends the leads opposite to each other.
Dual in-line package insertion
A dual in-line package (DIP) inserter takes integrated circuits from tubes which are loaded into magazines. A shuttle mechanism picks the needed component from the magazines and drops it into a transfer assembly. The insertion head picks the component from the transfer assembly and inserts the IC into the board while a clinch assembly underneath cuts and bends the leads either inward for sockets or outward for ICs.
Due to the transition from insertion mount technology (through-hole) to surface-mount technology of integrated circuits, these machines are no longer being newly manufactured.
Obsolete configurations
Axial inserters used to consist of a stand-alone sequencer machine which cut and sequenced the parts onto a reel. That reel was then transferred over to a standalone axial inserter to insert the components. This is all done on one machine today.
See also
Pick-and-place machine
References
Printed circuit board manufacturing | Insertion mount machine | [
"Engineering"
] | 451 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
26,894,208 | https://en.wikipedia.org/wiki/Life-cycle%20greenhouse%20gas%20emissions%20of%20energy%20sources | Greenhouse gas emissions are one of the environmental impacts of electricity generation. Measurement of life-cycle greenhouse gas emissions involves calculating the global warming potential (GWP) of energy sources through life-cycle assessment. These are usually sources of only electrical energy but sometimes sources of heat are evaluated. The findings are presented in units of global warming potential per unit of electrical energy generated by that source. The scale uses the global warming potential unit, the carbon dioxide equivalent (CO2e), and the unit of electrical energy, the kilowatt hour (kWh). The goal of such assessments is to cover the full life of the source, from material and fuel mining through construction to operation and waste management.
In 2014, the Intergovernmental Panel on Climate Change harmonized the carbon dioxide equivalent (CO2e) findings of the major electricity generating sources in use worldwide. This was done by analyzing the findings of hundreds of individual scientific papers assessing each energy source. Coal is by far the worst emitter, followed by natural gas, with solar, wind and nuclear all low-carbon. Hydropower, biomass, geothermal and ocean power may generally be low-carbon, but poor design or other factors could result in higher emissions from individual power stations.
For all technologies, advances in efficiency, and therefore reductions in CO2e since the time of publication, have not been included. For example, the total life cycle emissions from wind power may have lessened since publication. Similarly, due to the time frame over which the studies were conducted, the CO2e results for nuclear Generation II reactors are presented rather than the global warming potential of Generation III reactors. Other limitations of the data include: a) missing life cycle phases, and b) uncertainty as to where to define the cut-off point in the global warming potential of an energy source. The latter is important in assessing a combined electrical grid in the real world, rather than the established practice of simply assessing the energy source in isolation.
Global warming potential of selected electricity sources
1 See also environmental impact of reservoirs § Greenhouse gases.
List of acronyms:
PC — pulverized coal
CCS — carbon capture and storage
IGCC — integrated gasification combined cycle
SC — supercritical
NGCC — natural gas combined cycle
CSP — concentrated solar power
PV — photovoltaic power
Bioenergy with carbon capture and storage
Whether bioenergy with carbon capture and storage can be carbon neutral or carbon negative is being researched and is controversial.
Studies after the 2014 IPCC report
Individual studies show a wide range of estimates for fuel sources arising from the different methodologies used. Those on the low end tend to leave parts of the life cycle out of their analysis, while those on the high end often make unrealistic assumptions about the amount of energy used in some parts of the life cycle.
Since the 2014 IPCC study, some geothermal power has been found to emit CO2, such as some geothermal power in Italy; further research is ongoing in the 2020s.
Ocean energy technologies (tidal and wave) are relatively new, and few studies have been conducted on them. A major issue of the available studies is that they seem to underestimate the impacts of maintenance, which could be significant. An assessment of around 180 ocean technologies found that the GWP of ocean technologies varies between 15 and 105 g CO2eq/kWh, with an average of 53 g CO2eq/kWh. In a tentative preliminary study of the environmental impact of subsea tidal kite technologies, published in 2020, the GWP varied between 15 and 37 g CO2eq/kWh, with a median value of 23.8 g CO2eq/kWh, which is slightly higher than that reported in the 2014 IPCC GWP study mentioned earlier (5.6 to 28, with a mean value of 17 g CO2eq/kWh).
In 2021 UNECE published a lifecycle analysis of environmental impact of electricity generation technologies, accounting for the following impacts: resource use (minerals, metals); land use; resource use (fossils); water use; particulate matter; photochemical ozone formation; ozone depletion; human toxicity (non-cancer); ionising radiation; human toxicity (cancer); eutrophication (terrestrial, marine, freshwater); ecotoxicity (freshwater); acidification; climate change, with the latter summarized in the table above.
In June 2022, Électricité de France published a detailed life-cycle assessment study, following the ISO 14040 standard, showing that the 2019 French nuclear infrastructure produces less than 4 g CO2eq/kWh.
Cutoff points of calculations and estimates of how long plants last
Because most emissions from wind, solar and nuclear are not during operation, if they are operated for longer and generate more electricity over their lifetime then emissions per unit energy will be less. Therefore, their lifetimes are relevant.
Wind farms are estimated to last 30 years; after that, the carbon emissions from repowering would need to be taken into account. Solar panels from the 2010s may have a similar lifetime; however, how long 2020s solar panels (such as perovskite cells) will last is not yet known. Some nuclear plants can be used for 80 years, but others may have to be retired earlier for safety reasons. More than half the world's nuclear plants are expected to request license extensions, and there have been calls for these extensions to be better scrutinised under the Convention on Environmental Impact Assessment in a Transboundary Context.
Some coal-fired power stations may operate for 50 years but others may be shut down after 20 years, or less. According to one 2019 study, considering the time value of GHG emissions in techno-economic assessment considerably increases the life-cycle emissions of carbon-intensive fuels such as coal.
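To see why lifetime matters, here is an illustrative Python sketch of the amortization arithmetic; all numbers in it are hypothetical round figures chosen for the example, not sourced values for any real plant.

```python
# Illustrative amortization of embodied (construction + decommissioning)
# emissions over a plant's lifetime output. Hypothetical numbers only.

HOURS_PER_YEAR = 8760

def lifecycle_gwp(embodied_t_co2e: float,     # tonnes CO2e, one-off
                  capacity_mw: float,
                  capacity_factor: float,
                  lifetime_years: float,
                  operating_g_per_kwh: float = 0.0) -> float:
    """Lifecycle emissions in g CO2e per kWh generated."""
    lifetime_kwh = (capacity_mw * 1_000 * capacity_factor
                    * lifetime_years * HOURS_PER_YEAR)
    embodied_g = embodied_t_co2e * 1e6        # tonnes -> grams
    return embodied_g / lifetime_kwh + operating_g_per_kwh

# A hypothetical 4 MW wind turbine with 2,000 t CO2e embodied emissions:
print(round(lifecycle_gwp(2_000, 4, 0.35, 30), 1))  # ~5.4 g/kWh over 30 years
print(round(lifecycle_gwp(2_000, 4, 0.35, 60), 1))  # ~2.7 g/kWh over 60 years
```

Doubling the operating lifetime roughly halves the embodied share, which is why license extensions and early retirements move the totals.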
Lifecycle emissions from heating
For residential heating in almost all countries, emissions from natural gas furnaces are higher than those from heat pumps. But in some countries, such as the UK, there is an ongoing debate in the 2020s about whether it is better to replace the natural gas used in residential central heating with hydrogen, or whether to use heat pumps or, in some cases, more district heating.
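As a rough illustration of why heat pumps usually win (all figures here are hypothetical round numbers, not sourced values): a gas furnace emits roughly the combustion intensity of natural gas divided by the furnace efficiency, while a heat pump emits the grid intensity divided by its coefficient of performance (COP):

```latex
E_{\mathrm{furnace}} \approx \frac{185\ \mathrm{g\,CO_2e/kWh_{fuel}}}{0.90}
  \approx 205\ \mathrm{g\,CO_2e/kWh_{heat}},
\qquad
E_{\mathrm{heat\ pump}} \approx \frac{200\ \mathrm{g\,CO_2e/kWh_{el}}}{\mathrm{COP}=3}
  \approx 67\ \mathrm{g\,CO_2e/kWh_{heat}} .
```

Even on a moderately fossil-heavy grid the heat pump comes out well ahead; the comparison tightens only on very carbon-intensive grids or with low COPs.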
Fossil gas bridge fuel controversy
Whether natural gas should be used as a "bridge" from coal and oil to low-carbon energy is being debated for coal-reliant economies, such as India, China and Germany. Germany, as part of its Energiewende transformation, plans to preserve coal-based power until 2038 but shut down its nuclear power plants immediately, which further increased its dependency on fossil gas.
Missing life cycle phases
Although the life cycle assessments of each energy source should attempt to cover the full life cycle of the source from cradle-to-grave, they are generally limited to the construction and operation phase. The most rigorously studied phases are those of material and fuel mining, construction, operation, and waste management. However, missing life cycle phases exist for a number of energy sources. At times, assessments variably and sometimes inconsistently include the global warming potential that results from decommissioning the energy supplying facility, once it has reached its designed life-span. This includes the global warming potential of the process to return the power-supply site to greenfield status. For example, the process of hydroelectric dam removal is usually excluded as it is a rare practice with little practical data available. Dam removal however is becoming increasingly common as dams age. Larger dams, such as the Hoover Dam and the Three Gorges Dam, are intended to last "forever" with the aid of maintenance, a period that is not quantified. Therefore, decommissioning estimates are generally omitted for some energy sources, while other energy sources include a decommissioning phase in their assessments.
Along with the other prominent values of the paper, the median value of 12 g CO2-eq/kWhe presented for nuclear fission, found in the 2012 Yale University nuclear power review (the paper which also serves as the origin of the 2014 IPCC nuclear value), does include the contribution of facility decommissioning, with an "added facility decommissioning" global warming potential in the full nuclear life cycle assessment.
Thermal power plants, even low-carbon ones such as biomass, nuclear or geothermal energy stations, directly add heat energy to the earth's global energy balance. As for wind turbines, they may change both horizontal and vertical atmospheric circulation. But, although both of these may slightly change the local temperature, any difference they might make to the global temperature is undetectable against the far larger temperature change caused by greenhouse gases.
See also
Bioenergy with carbon capture and storage
Carbon capture and storage
Carbon footprint
Climate change mitigation
Efficient energy use
Low-carbon economy
Nuclear power proposed as renewable energy
References
External links
National Renewable Energy Laboratory. LCA emissions of all present day energy sources.
Wise uranium calculator
Scientific comparisons
Nuclear power
Greenhouse gas emissions
Energy | Life-cycle greenhouse gas emissions of energy sources | [
"Physics",
"Chemistry"
] | 1,795 | [
"Greenhouse gas emissions",
"Physical quantities",
"Nuclear power",
"Power (physics)",
"Energy (physics)",
"Energy",
"Greenhouse gases"
] |
26,894,908 | https://en.wikipedia.org/wiki/British%20Fluid%20Power%20Association | The British Fluid Power Association is a trade association in the United Kingdom that represents the hydraulic and pneumatic equipment industry, utilising properties of fluid power.
History
It started in 1959 as AHEM (the Association of Hydraulic Equipment Manufacturers), becoming the BFPA in 1986. A division of the organisation, the British Fluid Power Distributors Association (BFPDA), was formed in 1989.
Structure
It is based in Chipping Norton in Oxfordshire, just off the northern spur of the A44 in the north-east of the town. There are three types of membership: Full, Associate and Education.
Function
It acts as a marketing organisation (mostly abroad) for the industry and collects industry-wide statistics. Its technical committees also help in implementation and origination of standards for the BSI Group.
It represents companies involved with:
Electrohydraulics (e.g. power steering)
Pneumatic controls
Motion control
Linear motion
Hydraulic accumulators
Hydraulic pumps and Hydraulic motors
Valves
Pneumatic and hydraulic cylinders
Hydraulic seals
Hose and fittings
Marketing and industry statistical information
See also
National Fluid Power Association
International Association of Hydraulic Engineering and Research
References
External links
BFPA
IFPEX
Hydraulic engineering organizations
Organisations based in Oxfordshire
West Oxfordshire District
Organizations established in 1959
Fluid power
Trade associations based in the United Kingdom
1959 establishments in the United Kingdom | British Fluid Power Association | [
"Physics",
"Engineering"
] | 257 | [
"Physical quantities",
"Civil engineering organizations",
"Power (physics)",
"Fluid power",
"Hydraulic engineering organizations"
] |
26,895,580 | https://en.wikipedia.org/wiki/C22H26N2O5 | The molecular formula C22H26N2O5 may refer to:
FV-100, an orally available nucleoside analogue drug with antiviral activity
Vineridine, a vinca alkaloid
Molecular formulas | C22H26N2O5 | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
26,897,895 | https://en.wikipedia.org/wiki/Nuclear%20power%20plant%20emergency%20response%20team | A nuclear power plant emergency response team (ERT) is an incident response team composed of plant personnel and civil authority personnel specifically trained to respond to the occurrence of an accident at a nuclear power plant.
Each nuclear power plant is required to have a detailed emergency plan. In the event of a potential accident (as defined by the International Nuclear Event Scale), the ERT personnel are notified by beeper and have a set time limit for reporting to their duty station.
Potential duty stations include:
The nuclear power plant's control room
The nuclear power plant's Emergency Operations Facility
An offsite (i.e., not near the nuclear plant) operations facility
A news center
Roving teams of health physicists who scan for possible radiation
Police traffic direction
In the United States, ERT personnel are required to train twice a year and typically train four times a year. The Federal Emergency Management Agency (FEMA) (with support from the Nuclear Regulatory Commission (NRC) and other agencies) grades some of the drills. The drills normally are not announced in advance so as to simulate "surprise" conditions.
See also
List of nuclear power stations
List of nuclear reactors
Nuclear Emergency Support Team (NEST) - different from ERTs here
Nuclear reactor technology
References
External links
Federal Radiological Emergency Response Plan (FRERP)--Operational Plan
Federal Radiological Preparedness Coordinating Committee (FEMA)
Radiological Emergency Response Team (EPA)
Illinois Emergency Management Agency FAQs
Nuclear power | Nuclear power plant emergency response team | [
"Physics"
] | 294 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
26,902,158 | https://en.wikipedia.org/wiki/Kerr/CFT%20correspondence | The Kerr/CFT correspondence is an extension of the AdS/CFT correspondence or gauge-gravity duality to rotating black holes (which are described by the Kerr metric).
The duality works for black holes whose near-horizon geometry can be expressed as a product of AdS3 and a single compact coordinate. The AdS/CFT duality then maps this to a two-dimensional conformal field theory (the compact coordinate being analogous to the S5 factor in Maldacena's original work), from which the correct Bekenstein entropy can then be deduced.
The original form of the duality applies to black holes with the maximum value of angular momentum, but it has now been speculatively extended to all lesser values.
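As a sketch of how the matching works in the maximally rotating (extremal) case, in units G = c = ħ = 1 and following the original Kerr/CFT computation: the near-horizon symmetry analysis yields a left-moving central charge and a Frolov–Thorne temperature which, fed into the Cardy formula, reproduce the Bekenstein–Hawking area law:

```latex
c_L = 12J, \qquad T_L = \frac{1}{2\pi}, \qquad
S_{\mathrm{Cardy}} = \frac{\pi^{2}}{3}\, c_L T_L = 2\pi J
  = \frac{A_{\mathrm{hor}}}{4} = S_{\mathrm{Bekenstein\text{-}Hawking}},
\qquad A_{\mathrm{hor}} = 8\pi J .
```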
See also
AdS black hole
References
External links
Motl, Luboš (2010). Kerr black hole: the CFT entropy works for all M,J
String theory
Conformal field theory
Black holes
Thermodynamics | Kerr/CFT correspondence | [
"Physics",
"Chemistry",
"Astronomy",
"Mathematics"
] | 196 | [
"Black holes",
"Physical phenomena",
"Astronomical hypotheses",
"Physical quantities",
"String theory",
"Unsolved problems in physics",
"Astronomy stubs",
"Astrophysics",
"Stellar astronomy stubs",
"Astrophysics stubs",
"Density",
"Relativity stubs",
"Theory of relativity",
"Thermodynamics... |
26,908,142 | https://en.wikipedia.org/wiki/Looming%20and%20similar%20refraction%20phenomena | While mirages are the best known atmospheric refraction phenomena, looming and similar refraction phenomena do not produce mirages. Mirages show an extra image or images of the miraged object, while looming, towering, stooping, and sinking do not. No inverted image is present in those phenomena either. Depending on atmospheric conditions, the objects can appear to be elevated or lowered, stretched or stooped. These phenomena can occur together, changing the appearance of different parts of the objects in different ways. Sometimes these phenomena can occur together with a true mirage.
Looming
Looming is the most noticeable and most often observed of these refraction phenomena. It is an abnormally large refraction of the object that increases the apparent elevation of the distant objects and sometimes allows an observer to see objects that are located below the horizon under normal conditions. One of the most famous looming observations was made by William Latham in 1798, who wrote:
Thomas Jefferson noted the phenomenon of looming in his book Notes on the State of Virginia:
He was unable to explain this phenomenon and did not think refraction could account from the perceived changes of shape of the object in question.
Other famous observations that were called "mirages" may actually be referring to looming. One of those was described in Scientific American on August 25, 1894, as "a remarkable mirage seen by the citizens of Buffalo, New York". Such looming, sometimes with apparent magnification of opposite shores, has been reported over the Great Lakes. Canadian shorelines have been observed from Rochester, New York, across Lake Ontario, and from Cleveland across Lake Erie. Landforms that are distant and normally beyond the horizon were sometimes perceived as being much closer than they really are.
Looming is most commonly seen in the polar regions. Looming was sometimes responsible for the errors made by polar explorers; for example, Charles Wilkes charted the coast of Antarctica, where later only water was found.
The larger the size of the sphere (the planet where an observer is located), the less curved the horizon is. William Jackson Humphreys' calculations showed that an observer may be able to see all the way around a planet of sufficient size and with a sufficient atmospheric density gradient.
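The geometric content of such calculations can be stated as a simple criterion (a standard estimate, taking the refractive index n ≈ 1 for air): a near-horizontal ray follows the surface when its curvature, set by the vertical refractive-index gradient, matches the planet's curvature:

```latex
\kappa_{\mathrm{ray}} \simeq -\frac{dn}{dh} = \frac{1}{R},
\qquad
\frac{1}{R_{\oplus}} \approx \frac{1}{6.37\times 10^{6}\ \mathrm{m}}
  \approx 1.6\times 10^{-7}\ \mathrm{m^{-1}} .
```

Near Earth's surface this corresponds to a temperature inversion of very roughly 0.11 K per metre; a larger planet (larger R) needs an even weaker gradient, which is the sense of Humphreys' result.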
Sinking
Sinking is the opposite of looming. In sinking, stationary objects that are normally seen above the horizon appear to be lowered, or may even disappear below the horizon. In looming, the curvature of the rays is increasing, while sinking produces the opposite effect. In general, looming is more noticeable than sinking because objects that appear to grow stand out more than those that appear to shrink.
Towering and stooping
Towering and stooping are more complex forms of atmospheric refraction than looming and sinking. While looming and sinking change the apparent elevation of an object, towering and stooping change the apparent shape of the object itself. With towering, objects appear stretched; with stooping, objects seem to be shortened. The apparent stretching and shortening of the objects are not symmetrical and depends on the thermal profile of the atmosphere. The curvature of the rays changes more rapidly in some places because the thermal profile is curved.
Image example and explanation
These three images were taken from the same place on different days under different atmospheric conditions. The top frame shows looming. The island shape is not distorted, but is elevated. The middle frame shows looming with towering. The lowest frame is a 5-image superior mirage of the islands. As the image shows, the different refraction phenomena are not independent from each other and may occur together as a combination, depending on atmospheric conditions.
See also
Mirage of astronomical objects
Fata Morgana (mirage)
Tropospheric propagation
References
External links
Annotated Green-Flash and Mirage Bibliography by Andy Young
Atmospheric optical phenomena | Looming and similar refraction phenomena | [
"Physics"
] | 736 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
1,929,534 | https://en.wikipedia.org/wiki/Solar%20neutrino | A solar neutrino is a neutrino originating from nuclear fusion in the Sun's core, and is the most common type of neutrino passing through any source observed on Earth at any particular moment. Neutrinos are elementary particles with extremely small rest mass and a neutral electric charge. They only interact with matter via weak interaction and gravity, making their detection very difficult. This has led to the now-resolved solar neutrino problem. Much is now known about solar neutrinos, but research in this field is ongoing.
History and background
Homestake experiment
The timeline of solar neutrinos and their discovery dates back to the 1960s, beginning with the two astrophysicists John N. Bahcall and Raymond Davis Jr. The experiment, known as the Homestake experiment after the location where it was conducted (Homestake, South Dakota), aimed to count the solar neutrinos arriving at Earth. Bahcall, using a solar model he developed, came to the conclusion that the most effective way to study solar neutrinos would be via the chlorine-argon reaction. Using his model, Bahcall was able to calculate the number of neutrinos expected to arrive at Earth from the Sun.
Once the theoretical value was determined, the astrophysicists began pursuing experimental confirmation. Davis developed the idea of taking hundreds of thousands of liters of perchloroethylene, a chemical compound made up of carbon and chlorine, and searching for neutrinos using a chlorine-argon detector. The process was conducted very far underground, hence the decision to conduct the experiment in Homestake as the town was home to the Homestake Gold Mine. By conducting the experiment deep underground, Bahcall and Davis were able to avoid cosmic ray interactions which could affect the process and results. The entire experiment lasted several years as it was able to detect only a few chlorine to argon conversions each day, and the first results were not yielded by the team until 1968. To their surprise, the experimental value of the solar neutrinos present was less than 20% of the theoretical value Bahcall calculated. At the time, it was unknown if there was an error with the experiment or with the calculations, or if Bahcall and Davis did not account for all variables, but this discrepancy gave birth to what became known as the solar neutrino problem.
Further experimentation
Davis and Bahcall continued their work to understand where they may have gone wrong or what they were missing, along with other astrophysicists who did their own research on the subject. Many reviewed and redid Bahcall's calculations in the 1970s and 1980s, and although more data made the results more precise, the difference still remained. Davis even repeated his experiment, changing the sensitivity and other factors to make sure nothing was overlooked, but he found nothing and the results still showed "missing" neutrinos. By the end of the 1970s, the generally accepted result was that the experimental data yielded about 39% of the calculated number of neutrinos. In 1969, Bruno Pontecorvo, an Italian-Russian astrophysicist, suggested a new idea: perhaps neutrinos were not fully understood, and they could change in some way, meaning the neutrinos released by the Sun changed form on the way to Earth and were no longer neutrinos as conventionally understood by the time they reached the experiment. Pontecorvo's theory would account for the persistent discrepancy between the experimental and theoretical results.
Solution to solar neutrino problem
Pontecorvo was never able to prove his theory, but his thinking was on the right track. In 2002, results from an experiment conducted 2100 meters underground at the Sudbury Neutrino Observatory confirmed his theory, showing that neutrinos released from the Sun can in fact change form or flavor because they are not completely massless. This discovery of neutrino oscillation solved the solar neutrino problem, nearly 40 years after Davis and Bahcall began studying solar neutrinos.
Neutrino observatories
Super-Kamiokande
The Super-Kamiokande is a 50,000-ton underground water Cherenkov detector. The primary uses for this detector in Japan, in addition to neutrino observation, are cosmic ray observation and searching for proton decay. In 1998, the Super-Kamiokande was the site of the Super-Kamiokande experiment, which led to the discovery of neutrino oscillation, the process by which neutrinos change their flavor to electron, muon or tau.
The Super-Kamiokande experiment began in 1996 and is still active. In the experiment, the detector spots neutrinos by detecting electrons knocked out of water molecules; these fast electrons emit blue Cherenkov light as they travel. When this blue light is detected, it can be inferred that a neutrino is present, and it is counted.
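A short worked number behind this detection principle (standard textbook values, not specific to Super-Kamiokande's analysis): in water with refractive index n ≈ 1.33, Cherenkov light is emitted only by particles with β > 1/n, so for electrons

```latex
\gamma_{\mathrm{thr}} = \frac{1}{\sqrt{1 - 1/n^{2}}} \approx 1.52,
\qquad
E_{\mathrm{thr}} = \gamma_{\mathrm{thr}}\, m_e c^{2}
 \approx 1.52 \times 0.511\ \mathrm{MeV} \approx 0.78\ \mathrm{MeV},
```

i.e. a kinetic energy of about 0.26 MeV; only electrons struck hard enough by a neutrino radiate the blue cone.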
The Sudbury Neutrino Observatory
The Sudbury Neutrino Observatory (SNO), an underground observatory in Sudbury, Canada, is the other site where neutrino oscillation research was taking place in the late 1990s and early 2000s. The results from experiments at this observatory, along with those at Super-Kamiokande, are what helped solve the solar neutrino problem.
The SNO is also a heavy-water Cherenkov detector, designed to work the same way as the Super-Kamiokande. When neutrinos react with the heavy water, the charged particles produced emit blue Cherenkov light, signaling the detection of neutrinos to researchers and observers.
Borexino
The Borexino detector is located at the Laboratori Nazionali del Gran Sasso, Italy. Borexino is an actively used detector, and experiments are ongoing at the site. The goal of the Borexino experiment is to measure low-energy solar neutrinos, typically below 1 MeV, in real time. The detector is a complex structure consisting of photomultipliers, electronics, and calibration systems, making it equipped to take proper measurements of low-energy solar neutrinos. Photomultipliers are used as the detection device in this system as they are able to detect light from extremely weak signals.
Solar neutrinos are able to provide direct insight into the core of the Sun because that is where they originate. Solar neutrinos leaving the Sun's core reach Earth before light does, because they do not interact with other particles along the way, while light (photons) bounces from particle to particle. The Borexino experiment used this phenomenon to show that the Sun releases the same amount of energy today as it did 100,000 years ago.
Formation process
Solar neutrinos are produced in the core of the Sun through various nuclear fusion reactions, each of which occurs at a particular rate and leads to its own spectrum of neutrino energies. Details of the more prominent of these reactions are described below.
The main contribution comes from the proton–proton chain. The reaction is:

1H + 1H → 2H + e+ + νe

or in words:

two protons → deuteron + positron + electron neutrino.
Of all solar neutrinos, approximately 91% are produced from this reaction. As shown in the figure titled "Solar neutrinos (proton–proton chain) in the standard solar model", the deuteron will fuse with another proton to create a 3He nucleus and a gamma ray. This reaction can be seen as:

2H + 1H → 3He + γ
The isotope 4He can be produced by using the 3He from the previous reaction, as seen below:

3He + 3He → 4He + 2 1H
With both helium-3 and helium-4 now in the environment, one helium nucleus of each weight can fuse to produce beryllium:

3He + 4He → 7Be + γ
Beryllium-7 can follow two different paths from this stage: it could capture an electron and produce the more stable lithium-7 nucleus and an electron neutrino, or alternatively, it could capture one of the abundant protons, which would create boron-8. The first reaction, via lithium-7, is:

7Be + e− → 7Li + νe
This lithium-yielding reaction produces approximately 7% of the solar neutrinos. The resulting lithium-7 later combines with a proton to produce two nuclei of helium-4. The alternative reaction is proton capture, which produces boron-8; this then beta-plus decays into beryllium-8, as shown below:

7Be + 1H → 8B + γ
8B → 8Be* + e+ + νe
This alternative boron-yielding reaction produces about 0.02% of the solar neutrinos; although so few that they would conventionally be neglected, these rare solar neutrinos stand out because of their higher average energies. The asterisk (*) on the beryllium-8 nucleus indicates that it is in an excited, unstable state. The excited beryllium-8 nucleus then splits into two helium-4 nuclei:

8Be* → 2 4He
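Summing the whole chain, the net effect (a standard textbook tally) is the conversion of four protons into one helium-4 nucleus:

```latex
4\,^{1}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} + 2e^{+} + 2\nu_{e},
\qquad Q \approx 26.73\ \mathrm{MeV},
```

of which the two neutrinos carry away on average only about 2% of the energy; the rest eventually emerges as sunlight.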
Observed data
The highest flux of solar neutrinos comes directly from the proton–proton interaction, and these have a low energy, up to 400 keV. There are also several other significant production mechanisms, with energies up to 18 MeV. The neutrino flux at Earth is around 7×10¹⁰ particles cm⁻² s⁻¹. The number of neutrinos can be predicted with great confidence by the standard solar model, but the number of neutrinos detected on Earth was only about a third of the number predicted, a discrepancy known as the solar neutrino problem.
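That flux figure is consistent with a back-of-the-envelope estimate using standard solar values: since roughly two neutrinos accompany each ~26.7 MeV of released fusion energy,

```latex
\Phi \approx \frac{2 L_{\odot}}{Q \cdot 4\pi d^{2}}
 = \frac{2 \times 3.85\times 10^{26}\ \mathrm{W}}
        {(4.3\times 10^{-12}\ \mathrm{J}) \times 4\pi \,(1.50\times 10^{11}\ \mathrm{m})^{2}}
 \approx 6\times 10^{10}\ \mathrm{cm^{-2}\,s^{-1}} .
```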
Solar models additionally predict the location within the Sun's core where solar neutrinos should originate, depending on the nuclear fusion reaction which leads to their production. Future neutrino detectors will be able to detect the incoming direction of these neutrinos with enough precision to measure this effect.
The energy spectrum of solar neutrinos is also predicted by solar models. It is essential to know this energy spectrum because different neutrino detection experiments are sensitive to different neutrino energy ranges. The Homestake experiment used chlorine and was most sensitive to solar neutrinos produced by the decay of the beryllium isotope 7Be. The Sudbury Neutrino Observatory is most sensitive to solar neutrinos produced by 8B. The detectors that use gallium are most sensitive to the solar neutrinos produced by the proton–proton chain reaction process; however, they were not able to observe this contribution separately. The observation of the neutrinos from the basic reaction of this chain, proton–proton fusion into deuterium, was achieved for the first time by Borexino in 2014. In 2012 the same collaboration reported detecting low-energy neutrinos from the proton–electron–proton (pep) reaction, which produces 1 in 400 deuterium nuclei in the Sun. The detector contained 100 metric tons of liquid scintillator and saw on average 3 events each day (after subtracting background, chiefly from cosmogenic 11C production) from this relatively uncommon thermonuclear reaction.
In 2014, Borexino reported a successful direct detection of neutrinos from the pp-reaction at a rate of 144±33/day, consistent with the predicted rate of 131±2/day that was expected based on the standard solar model prediction that the pp-reaction generates 99% of the Sun's luminosity and their analysis of the detector's efficiency.
And in 2020, Borexino reported the first detection of CNO cycle neutrinos from deep within the solar core.
Note that Borexino measured neutrinos of several energies; in this manner they have demonstrated experimentally, for the first time, the pattern of solar neutrino oscillations predicted by the theory. Neutrinos can trigger nuclear reactions. By looking at ancient ores of various ages that have been exposed to solar neutrinos over geologic time, it may be possible to interrogate the luminosity of the Sun over time, which, according to the standard solar model, has changed over the eons as the (presently) inert byproduct helium has accumulated in its core.
Key contributing astrophysicists
Wolfgang Pauli was the first to suggest the idea of a particle such as the neutrino existing in our universe in 1930. He believed such a particle to be completely massless. This was the belief amongst the astrophysics community until the solar neutrino problem was solved.
Frederick Reines, from the University of California at Irvine, and Clyde Cowan were the first astrophysicists to detect neutrinos, in 1956. Reines won the Nobel Prize in Physics for this work in 1995; Cowan had died in 1974 and could not share the prize.
Raymond Davis and John Bahcall are the pioneers of solar neutrino studies. While Bahcall never won a Nobel Prize, Davis along with Masatoshi Koshiba won the Nobel Prize in Physics in 2002 after the solar neutrino problem was solved for their contributions in helping solve the problem.
Pontecorvo, known as the first astrophysicist to suggest the idea neutrinos have some mass and can oscillate, never received a Nobel Prize for his contributions due to his passing in 1993.
Arthur B. McDonald, a Canadian physicist, was a key contributor in building the Sudbury Neutrino Observatory (SNO) in the mid 1980s and later became the director of the SNO and leader of the team that solved the solar neutrino problem. McDonald and the Japanese physicist Takaaki Kajita received the Nobel Prize in Physics in 2015 for their work on the discovery of neutrino oscillation.
Current research and findings
The critical issue of the solar neutrino problem, which many astrophysicists interested in solar neutrinos studied and attempted to solve in the late 1900s and early 2000s, is solved. In the 21st century, even without a main problem to solve, there is still unique and novel research ongoing in this field of astrophysics.
Solar neutrino flux at keV energies
This research, published in 2017, aimed to determine the solar neutrino and antineutrino flux at extremely low energies (the keV range). Processes at these low energies carry vital information about the solar metallicity. Solar metallicity is the measure of elements present in the Sun that are heavier than hydrogen and helium; in this field the reference element is typically iron. The results from this research yielded significantly different findings compared to past research in terms of the overall flux spectrum. Technology does not yet exist to put these findings to the test.
Limiting neutrino magnetic moments with Borexino Phase-II solar neutrino data
This research, published in 2017, searched for the solar neutrino effective magnetic moment. The search used data from the Borexino experiment's second phase, which comprised 1291.5 days (3.54 years) of exposure. The results showed that the electron recoil spectrum had the expected shape, with no major deviations from it.
See also
Neutrino detector
Neutral particle oscillation
Solar neutrino unit
Stellar nucleosynthesis
Supernova neutrinos
Diffuse supernova neutrino background (DSNB)
References
Further reading
Nuclear fusion
Neutrino
Neutrino astronomy | Solar neutrino | [
"Physics",
"Chemistry",
"Astronomy"
] | 3,205 | [
"Neutrino astronomy",
"Nuclear fusion",
"Astronomical sub-disciplines",
"Nuclear physics"
] |
1,929,872 | https://en.wikipedia.org/wiki/Irreversible%20circuit | In the study of reversible computing, an irreversible circuit is a circuit whose inputs cannot be reconstructed from its outputs. Such a circuit, of necessity, consumes energy. More precisely, there is a lower bound, derived from quantum physics, on the energy needed for each computation with such a circuit. In contrast, reversible circuits can, theoretically, be designed to operate on arbitrarily small amounts of energy.
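The bound alluded to here is usually identified with Landauer's principle: erasing one bit of information, which is what an irreversible gate effectively does to its lost inputs, dissipates at least

```latex
E_{\min} = k_B T \ln 2
 \approx \left(1.38\times 10^{-23}\ \mathrm{J/K}\right) \times 300\ \mathrm{K} \times 0.693
 \approx 2.9\times 10^{-21}\ \mathrm{J}
```

per bit at room temperature; a reversible circuit, which erases nothing, is not subject to this floor.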
Any irreversible circuit can be simulated by a reversible circuit that is padded with additional outputs.
See also
Reversible computing
References
Integrated circuits | Irreversible circuit | [
"Technology",
"Engineering"
] | 123 | [
"Computing stubs",
"Computer engineering stubs",
"Computer engineering",
"Integrated circuits"
] |
1,930,122 | https://en.wikipedia.org/wiki/Residual%20entropy | Residual entropy is the difference in entropy between a non-equilibrium state and crystal state of a substance close to absolute zero. This term is used in condensed matter physics to describe the entropy at zero kelvin of a glass or plastic crystal referred to the crystal state, whose entropy is zero according to the third law of thermodynamics. It occurs if a material can exist in many different states when cooled. The most common non-equilibrium state is vitreous state, glass.
A common example is the case of carbon monoxide, which has a very small dipole moment. As a carbon monoxide crystal is cooled to absolute zero, few of the carbon monoxide molecules have enough time to align themselves into a perfect crystal (with all of the carbon monoxide molecules oriented in the same direction). Because of this, the crystal is locked into a state with 2^N different corresponding microstates, giving a residual entropy of S = Nk_B ln 2 (about R ln 2 ≈ 5.76 J K⁻¹ mol⁻¹ per mole), rather than zero.
Another example is any amorphous solid (glass). These have residual entropy, because the atom-by-atom microscopic structure can be arranged in a huge number of different ways across a macroscopic system.
The residual entropy has a somewhat special significance compared to other residual properties, in that it has a role in the framework of residual entropy scaling, which is used to compute transport coefficients (coefficients governing non-equilibrium phenomena) directly from the equilibrium property residual entropy, which can be computed directly from any equation of state.
History
One of the first examples of residual entropy was pointed out by Pauling to describe water ice. In water, each oxygen atom is bonded to two hydrogen atoms. However, when water freezes, each oxygen atom ends up with four tetrahedrally arranged hydrogen neighbors (due to neighboring water molecules). The hydrogen atoms sitting between the oxygen atoms have some degree of freedom as long as each oxygen atom has two hydrogen atoms that are 'nearby', thus forming the traditional H2O water molecule. However, it turns out that for a large number of water molecules in this configuration, the hydrogen atoms have a large number of possible configurations that meet the 2-in 2-out rule (each oxygen atom must have two 'near' (or 'in') hydrogen atoms, and two 'far' (or 'out') hydrogen atoms). This freedom persists down to absolute zero, where a single unique configuration had previously been expected. The existence of these multiple configurations (choices for each H of orientation along the O–O axis) that meet the rules at absolute zero (2-in 2-out for each O) amounts to randomness, or in other words, entropy. Thus systems that can take multiple configurations at or near absolute zero are said to have residual entropy.
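Pauling's estimate amounts to a one-line count (the standard calculation): the ice rules leave approximately W ≈ (3/2)^N allowed proton arrangements for N water molecules, so

```latex
S_0 = k_B \ln W \approx N k_B \ln\frac{3}{2}
\;\;\Longrightarrow\;\;
R \ln\frac{3}{2} \approx 3.37\ \mathrm{J\,K^{-1}\,mol^{-1}}
```

per mole of ice, in good agreement with the measured residual entropy of roughly 3.4 J K⁻¹ mol⁻¹.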
Although water ice was the first material for which residual entropy was proposed, it is generally very difficult to prepare pure defect-free crystals of water ice for studying. A great deal of research has thus been undertaken into finding other systems that exhibit residual entropy. Geometrically frustrated systems in particular often exhibit residual entropy. An important example is spin ice, which is a geometrically frustrated magnetic material where the magnetic moments of the magnetic atoms have Ising-like magnetic spins and lie on the corners of network of corner-sharing tetrahedra. This material is thus analogous to water ice, with the exception that the spins on the corners of the tetrahedra can point into or out of the tetrahedra, thereby producing the same 2-in, 2-out rule as in water ice, and therefore the same residual entropy. One of the interesting properties of geometrically frustrated magnetic materials such as spin ice is that the level of residual entropy can be controlled by the application of an external magnetic field. This property can be used to create one-shot refrigeration systems.
See also
Proton disorder in ice
Ice rules
Geometrical frustration
Notes
Thermodynamic entropy | Residual entropy | [
"Physics"
] | 793 | [
"Statistical mechanics",
"Entropy",
"Physical quantities",
"Thermodynamic entropy"
] |
1,930,814 | https://en.wikipedia.org/wiki/Hydrogen%20iodide | Hydrogen iodide (HI) is a diatomic molecule and hydrogen halide. Aqueous solutions of HI are known as hydroiodic acid or hydriodic acid, a strong acid. Hydrogen iodide and hydroiodic acid are, however, different in that the former is a gas under standard conditions, whereas the other is an aqueous solution of the gas. They are interconvertible. HI is used in organic and inorganic synthesis as one of the primary sources of iodine and as a reducing agent.
Properties of hydrogen iodide
HI is a colorless gas that reacts with oxygen to give water and iodine. With moist air, HI gives a mist (or fumes) of hydroiodic acid. It is exceptionally soluble in water, giving hydroiodic acid. One liter of water will dissolve 425 liters of HI gas, the most concentrated solution having only four water molecules per molecule of HI.
Hydroiodic acid
Hydroiodic acid is not pure hydrogen iodide, but a mixture containing it. Commercial "concentrated" hydroiodic acid usually contains 48–57% HI by mass. The solution forms an azeotrope boiling at 127 °C with 57% HI, 43% water. The high acidity is caused by the dispersal of the ionic charge over the anion. The iodide ion radius is much larger than the other common halides, which results in the negative charge being dispersed over a large space. By contrast, a chloride ion is much smaller, meaning its negative charge is more concentrated, leading to a stronger interaction between the proton and the chloride ion. This weaker H+···I− interaction in HI facilitates dissociation of the proton from the anion and is the reason HI is the strongest acid of the hydrohalides.
HI: Ka ≈ 10¹⁰
HBr: Ka ≈ 10⁹
HCl: Ka ≈ 10⁶
Synthesis
The industrial preparation of HI involves the reaction of I2 with hydrazine, which also yields nitrogen gas:

2 I2 + N2H4 → 4 HI + N2
When performed in water, the HI must be distilled.
HI can also be distilled from a solution of NaI or other alkali iodide in concentrated phosphoric acid (note that concentrated sulfuric acid will not work for acidifying iodides, as it will oxidize the iodide to elemental iodine).
Another way HI may be prepared is by bubbling hydrogen sulfide gas through an aqueous solution of iodine, forming hydroiodic acid (which is distilled) and elemental sulfur (which is filtered):

H2S + I2 → 2 HI + S
Additionally, HI can be prepared by simply combining H2 and I2; the reaction is reversible and is typically carried out at around 250 °C:

H2 + I2 ⇌ 2 HI
This method is usually employed to generate high-purity samples.
For many years, this reaction was considered to involve a simple bimolecular reaction between molecules of H2 and I2. However, when a mixture of the gases is irradiated with light of the wavelength equal to the dissociation energy of I2, about 578 nm, the rate increases significantly. This supports a mechanism whereby I2 first dissociates into 2 iodine atoms, which each attach themselves to a side of an H2 molecule and break the H−H bond:

I2 → 2 I
H2 + 2 I → 2 HI
In the laboratory, another method involves hydrolysis of PI3, the iodine analog of PBr3. In this method, I2 reacts with phosphorus to create phosphorus triiodide, which then reacts with water to form HI and phosphorous acid:

3 I2 + 2 P → 2 PI3
PI3 + 3 H2O → 3 HI + H3PO3
Key reactions and applications
Solutions of hydrogen iodide are easily oxidized by air:

4 HI + O2 → 2 H2O + 2 I2
HI + I2 → HI3

HI3 is dark brown in color, which makes aged solutions of HI often appear dark brown.
Like HBr and HCl, HI adds to alkenes:

HI + H2C=CH2 → H3C−CH2I
HI is also used in organic chemistry to convert primary alcohols into alkyl iodides. This reaction is an SN2 substitution, in which the iodide ion replaces the "activated" hydroxyl group (water):

R−OH + HI → R−I + H2O
HI is preferred over other hydrogen halides because the iodide ion is a much better nucleophile than bromide or chloride, so the reaction can take place at a reasonable rate without much heating. This reaction also occurs for secondary and tertiary alcohols, but substitution occurs via the SN1 pathway.
HI (or HBr) can also be used to cleave ethers into alkyl iodides and alcohols, in a reaction similar to the substitution of alcohols. This type of cleavage is significant because it can be used to convert a chemically stable and inert ether into more reactive species. In this example diethyl ether is split into ethanol and iodoethane:

C2H5−O−C2H5 + HI → C2H5OH + C2H5I
The reaction is regioselective, as iodide tends to attack the less sterically hindered ether carbon. If an excess of HI is used, the alcohol formed in this reaction will be converted to a 2nd equivalent of alkyl iodide, as in the conversion of primary alcohols into alkyl iodides.
HI is subject to the same Markovnikov and anti-Markovnikov guidelines as HCl and HBr.
Although harsh by modern standards, HI was commonly employed as a reducing agent early in the history of organic chemistry. Chemists in the 19th century attempted to prepare cyclohexane by HI reduction of benzene at high temperatures, but instead isolated the rearranged product, methylcyclopentane (see the article on cyclohexane). As first reported by Kiliani, hydroiodic acid reduction of sugars and other polyols results in the reductive cleavage of several or even all hydroxy groups, although often with poor yield and/or reproducibility. In the case of benzyl alcohols and alcohols with α-carbonyl groups, reduction by HI can provide synthetically useful yields of the corresponding hydrocarbon product. This process can be made catalytic in HI using red phosphorus to reduce the formed I2.
See also
Hydroiodic acid
References
External links
International Chemical Safety Card 1326 - Hydrogen Iodide
Hydrogen compounds
Iodides
Diatomic molecules
Iodine compounds
Mineral acids
Nonmetal halides
Reducing agents | Hydrogen iodide | [
"Physics",
"Chemistry"
] | 1,268 | [
"Acids",
"Redox",
"Inorganic compounds",
"Mineral acids",
"Molecules",
"Reducing agents",
"Diatomic molecules",
"Matter"
] |
1,930,909 | https://en.wikipedia.org/wiki/Phosphide | In chemistry, a phosphide is a compound containing the P3− ion or its equivalent. Many different phosphides are known, with widely differing structures. Most commonly encountered are the binary phosphides, i.e. those materials consisting only of phosphorus and a less electronegative element. Numerous polyphosphides are also known; these are solids consisting of anionic chains or clusters of phosphorus. Phosphides are known for the majority of less electronegative elements, with the exception of Hg, Pb, Sb, Bi, Te, and Po. Finally, some phosphides are molecular.
Binary phosphides
Binary phosphides include phosphorus and one other element. An example of a group 1 phosphide is sodium phosphide (Na3P). Other notable examples include aluminium phosphide (AlP) and calcium phosphide (Ca3P2), which are used as pesticides, exploiting their tendency to release toxic phosphine upon hydrolysis. Magnesium phosphide (Mg3P2) is also moisture sensitive. Indium phosphide (InP) and gallium phosphide (GaP) are used as semiconductors, often in combination with the related arsenides. Copper phosphide (Cu3P) illustrates a rare stoichiometry for a phosphide. These species are insoluble in all solvents; they are 3-dimensional solid-state polymers. Those with electropositive metals hydrolyze:

Na3P + 3 H2O → 3 NaOH + PH3
Polyphosphides
Polyphosphides contain P−P bonds. The simplest polyphosphides would be derivatives of P24−. The free anions are rarely encountered because they are so basic. Most members follow the octet rule.
Well-studied polyphosphides are derivatives of P73−.
The nomenclature for polyphosphides can be deceptive. As confirmed by X-ray crystallography, tin triphosphide and germanium triphosphide are not triphosphides but hexaphosphides. They consist of ruffled cyclo-P66− subunits. Another example of deceptive nomenclature is "thorium pentaphosphide", which consists of a polymeric polyphosphide related to Hittorf's phosphorus.
Several polyphosphides contain cluster ions (such as P73− and P113−), polymeric chain anions (e.g. the helical [P−]n ion), and complex sheet or 3-D anions. The range of structures is extensive. Potassium has nine phosphides: K3P, K4P3, K5P4, KP, K4P6, K3P7, K3P11, KP10.3, KP15. Eight mono- and polyphosphides of nickel also exist: Ni3P, Ni5P2, Ni12P5, Ni2P, Ni5P4, NiP, NiP2, NiP3.
Two polyphosphide ions are radical anions with an odd number of valence electrons.
Preparation of phosphide and polyphosphide materials
There are many ways to prepare phosphide compounds. One common way involves heating a metal and red phosphorus (P) under an inert atmosphere or vacuum. In principle, all metal phosphides and polyphosphides can be synthesized from elemental phosphorus and the respective metal element in stoichiometric proportions. However, the synthesis is complicated by several problems. The exothermic reactions are often explosive due to local overheating. Oxidized metals, or even just an oxidized layer on the exterior of the metal, require extreme and unacceptably high temperatures for phosphorination to begin. Hydrothermal reactions to generate nickel phosphides have produced pure and well-crystallized nickel phosphide compounds, Ni2P and Ni12P5. These compounds were synthesized through a solid–liquid reaction between NiCl2·6H2O and red phosphorus at 200 °C for 24 and 48 hours, respectively.
Metal phosphides are also produced by reaction of tris(trimethylsilyl)phosphine with metal halides. In this method, the halide is liberated as the volatile trimethylsilyl chloride.
A method for the preparation of a potassium polyphosphide from red phosphorus and potassium ethoxide has been reported.
Molecular phosphides
Compounds with triple bonds between a metal and phosphorus are rare. The main examples have the formula P≡Mo(N(R)Ar)3, where R is a bulky organic substituent.
Organic phosphides
Many organophosphides are known. Common examples have the formula MPR2, where R is an organic substituent and M is a metal. One example is lithium diphenylphosphide. The Zintl cluster P73− is obtained with diverse alkali metal derivatives.
Natural examples
The mineral schreibersite, (Fe,Ni)3P, is common in some meteorites.
References
Anions
Phosphorus(−III) compounds | Phosphide | [
"Physics",
"Chemistry"
] | 964 | [
"Ions",
"Matter",
"Anions"
] |
1,930,923 | https://en.wikipedia.org/wiki/Arsenide | In chemistry, an arsenide is a compound of arsenic with a less electronegative element or elements. Many metals form binary compounds containing arsenic, and these are called arsenides. They exist with many stoichiometries, and in this respect arsenides are similar to phosphides.
Alkali metal and alkaline earth arsenides
The group 1 alkali metals and the group 2 alkaline earth metals form arsenides with isolated arsenic atoms. For example, heating arsenic powder with excess sodium gives sodium arsenide (Na3As). The structure of Na3As is complex, with unusually short Na–Na distances of 328–330 pm, which are shorter than in sodium metal. This short distance indicates the complex bonding in these simple phases, i.e. they are not simply salts of the As3− anion. The compound LiAs has a metallic lustre and electrical conductivity, indicating some metallic bonding. These compounds are mainly of academic interest. For example, "sodium arsenide" is a structural motif adopted by many compounds with the A3B stoichiometry.
Indicative of their salt-like properties, hydrolysis of alkali metal arsenides gives arsine:
Na3As + 3 H2O → AsH3 + 3 NaOH
III–V compounds
Many arsenides of the group 13 elements (group III) are valuable semiconductors. Gallium arsenide (GaAs) features isolated arsenic centers with a zincblende structure (wurtzite structure can eventually also form in nanostructures), and with predominantly covalent bonding – it is a III–V semiconductor.
II–V compounds
Arsenides of the group 12 elements (group II) are also noteworthy. Cadmium arsenide (Cd3As2) was shown to be a three-dimensional (3D) topological Dirac semimetal analogous to graphene. Cd3As2, Zn3As2 and other compounds of the Zn-Cd-P-As quaternary system have very similar crystalline structures, which can be considered distorted mixtures of the zincblende and antifluorite crystalline structures.
Polyarsenides
Transition metal arsenides
Arsenic anions are known to catenate, that is, form chains, rings, and cages. The mineral skutterudite (CoAs3) features rings that are usually described as [As4]4−. Assigning formal oxidation numbers is difficult because these materials are highly covalent and often are best described with band theory. Sperrylite (PtAs2) is usually described as Pt4+(As2)4−. The arsenides of the transition metals are mainly of interest because they contaminate sulfidic ores of commercial interest. The extraction of the metals – nickel, iron, cobalt, copper – entails chemical processes such as smelting that pose environmental risks. In the mineral, arsenic is immobile and poses no environmental risk. Released from the mineral, arsenic is poisonous and mobile.
Zintl phases
Partial reduction of arsenic with alkali metals (and related electropositive elements) affords polyarsenic compounds, which are members of the Zintl phases.
See also
See :Category:Arsenides for a list.
References
Anions
Arsenic(−III) compounds | Arsenide | [
"Physics",
"Chemistry"
] | 678 | [
"Ions",
"Matter",
"Anions"
] |
1,931,704 | https://en.wikipedia.org/wiki/Conformal%20anomaly | A conformal anomaly, scale anomaly, trace anomaly or Weyl anomaly is an anomaly, i.e. a quantum phenomenon that breaks the conformal symmetry of the classical theory.
In quantum field theory, when we set ħ to zero we have only Feynman tree diagrams, which is a "classical" theory (equivalent to the Fredholm formulation of a classical field theory).
One-loop (N-loop) Feynman diagrams are proportional to ħ (ħ^N).
If a current J^μ is conserved classically (∂μ J^μ = 0) but develops a divergence at loop level in quantum field theory (∂μ J^μ ≠ 0), we say there is an "anomaly." A famous example is the axial current anomaly, where massless fermions have a classically conserved axial current which develops a nonzero divergence in the presence of gauge fields.
A scale invariant theory, one in which there are no mass scales, will have a conserved Noether current called the "scale current." This is derived by performing scale transformations on the coordinates of space-time. The divergence of the scale current is then the trace of the stress tensor. In the absence of any mass scales the stress tensor trace vanishes (T^μ_μ = 0), hence the current is "classically conserved" and the theory is classically scale invariant.
However, at loop level the scale current can develop a nonzero divergence. This is called the "scale anomaly" or "trace anomaly" and represents the generation of mass
by quantum mechanics. It is related to the renormalization group,
or the "running of coupling constants," when they are viewed at different mass scales.
While this can be formulated without reference to gravity, it becomes more powerful
when general relativity is considered.
A classically conformal theory with arbitrary background metric has an action that is invariant under rescalings of the background metric and other matter fields,
called Weyl transformations. Note that if we rescale the coordinates this is a general coordinate transformation, and merges with general covariance, the exact symmetry of general relativity, and thus it becomes an unsatisfactory way to formulate scale symmetry (general covariance implies a conserved stress tensor; a "gravitational anomaly" represents a quantum breakdown of general covariance, and should not be confused with Weyl (scale) invariance).
However, under Weyl transformations we do not rescale the coordinates of the theory, but rather the metric and other matter fields. In the sense of Weyl, mass (or length) are defined by the metric, and coordinates are simply scale-less book-keeping devices. Hence Weyl symmetry is the correct statement of scale symmetry when gravitation is incorporated
and there will then be a conserved Weyl current.
There is an extensive literature involving spontaneous breaking of Weyl symmetry in four dimensions, leading to a dynamically generated Planck mass together with inflation. These theories appear to be in good agreement with observational cosmology.
A conformal quantum theory is therefore one whose path integral, or partition function, is unchanged by rescaling the metric (together with other fields). The variation of the action with respect to the background metric is proportional to the stress tensor, and therefore the variation with respect to a conformal rescaling is proportional to the trace of the stress tensor. As a result, the trace of the stress tensor must vanish for a conformally invariant theory. The trace of the stress tensor appears in the divergence of the Weyl current as an anomaly, thus breaking the Weyl (or Scale) invariance of the theory.
QCD
In quantum chromodynamics in the chiral limit, the classical theory has no mass scale so there is a conformal symmetry. Naively, we would expect that the proton is nearly massless because the quark kinetic energy and potential energy cancel by the relativistic virial theorem. However, in the quantum case the symmetry is broken by a conformal anomaly.
This introduces a scale, the scale at which colour confinement occurs, and determines the masses of hadrons and the phenomenon of chiral symmetry breaking. Besides the anomaly (believed to contribute about 20% of the proton mass), the rest can be attributed to the light-quark sigma terms (i.e., the fact that quarks have small non-zero masses that are not associated with the trace anomaly), believed to contribute about 17%, and the quark and gluon energies, believed to contribute about 29% and 34% of the proton mass, respectively.
Hence QCD, via the trace anomaly, quark and gluon energies and sigma terms, is responsible for more than 99% of the mass of ordinary matter in the Universe, the Higgs mechanism directly contributing only less than one percent via mostly the u quark, d quark and electron masses.
Coleman-Weinberg Potentials
Coleman and Weinberg showed how spontaneous symmetry breaking of electroweak interactions involving a fundamental Higgs scalar could occur via Feynman loop diagrams.
Moreover, the authors showed how to "improve" the results of their calculation using the renormalization group.
In fact, the Coleman-Weinberg mechanism can be traced entirely to the
renormalization group running of the quartic Higgs coupling, $\lambda$. The resulting Coleman-Weinberg potential is proportional to the associated $\beta$-function, $\beta(\lambda)$, while the trace anomaly is likewise given by a term proportional to $\beta(\lambda)$; hence the Coleman-Weinberg potential can be viewed as arising directly from the trace anomaly.
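One schematic way to see this, keeping only the leading logarithm (the normalization here is indicative rather than exact, with $\mu$ the renormalization scale):

$$V_{\mathrm{CW}}(\phi) \sim \frac{\beta(\lambda)}{4}\,\phi^4 \ln\frac{\phi^2}{\mu^2}, \qquad T^\mu{}_\mu \propto \beta(\lambda)\,\phi^4,$$

so both the potential and the anomalous trace vanish when $\beta(\lambda) = 0$.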
It has been conjectured that all mass in nature is generated by trace anomalies, hence
by quantum mechanics alone.
String theory
String theory is not classically scale invariant since it is defined with a massive "string constant".
In string theory, conformal symmetry on the worldsheet is a local Weyl symmetry. There is also a potential gravitational anomaly in two dimensions and this anomaly must therefore cancel if the theory is to be consistent. The required cancellation of the gravitational anomaly implies that the spacetime dimensionality must be equal to the critical dimension which is either 26 in the case of bosonic string theory or 10 in the case of superstring theory. This case is called critical string theory.
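A minimal sketch of the counting: the anomaly cancels when the total worldsheet central charge vanishes. Each bosonic coordinate contributes $c = 1$ and the reparametrization ($bc$) ghosts contribute $c = -26$, while in the superstring each coordinate plus its worldsheet superpartner contributes $c = 3/2$ and the combined ghost system contributes $c = -15$:

$$D - 26 = 0 \;\Rightarrow\; D = 26, \qquad \tfrac{3}{2}D - 15 = 0 \;\Rightarrow\; D = 10.$$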
There are alternative approaches known as non-critical string theory, in which the space-time dimensions can be less than 26 for the bosonic theory or less than 10 for the superstring, i.e. the four-dimensional case is plausible within this context. However, some intuitive postulates, like flat space being a valid background, need to be given up.
See also
Anomaly (physics)
Charge (physics)
Central charge
Anomalous scaling dimension
Dimensional transmutation
References
Anomalies (physics)
Conformal field theory
Quantum chromodynamics
Renormalization group
String theory | Conformal anomaly | [
"Physics",
"Astronomy"
] | 1,373 | [
"Astronomical hypotheses",
"Physical phenomena",
"String theory",
"Critical phenomena",
"Renormalization group",
"Statistical mechanics"
] |
1,931,707 | https://en.wikipedia.org/wiki/Critical%20dimension | In the renormalization group analysis of phase transitions in physics, a critical dimension is the dimensionality of space at which the character of the phase transition changes. Below the lower critical dimension there is no phase transition. Above the upper critical dimension the critical exponents of the theory become the same as that in mean field theory. An elegant criterion to obtain the critical dimension within mean field theory is due to V. Ginzburg.
Since the renormalization group sets up a relation between a phase transition and a quantum field theory, this has implications for the latter and for our larger understanding of renormalization in general. Above the upper critical dimension, the quantum field theory which belongs to the model of the phase transition is a free field theory. Below the lower critical dimension, there is no field theory corresponding to the model.
In the context of string theory the meaning is more restricted: the critical dimension is the dimension at which string theory is consistent assuming a constant dilaton background. The precise number may be determined by the required cancellation of conformal anomaly on the worldsheet; it is 26 for the bosonic string theory and 10 for superstring theory.
Upper critical dimension in field theory
Determining the upper critical dimension of a field theory is a matter of linear algebra. It is worthwhile to formalize the procedure because it yields the lowest-order approximation for scaling and essential input for the renormalization group. It also reveals conditions to have a critical model in the first place.
A Lagrangian may be written as a sum of terms, each consisting of an integral over a monomial of coordinates $x$ and fields $\phi$. Examples are the standard $\phi^4$-model, with Lagrangian $L = \int d^d x \left[\tfrac{1}{2}(\partial\phi)^2 + u\,\phi^4\right]$, and the isotropic Lifshitz tricritical point, whose Lagrangian contains higher derivatives and a higher power of the field.
This simple structure may be compatible with a scale invariance under a rescaling of the coordinates and fields with a factor $b$ according to
$$x \to x\, b^{N_x}, \qquad \phi \to \phi\, b^{N_\phi}.$$
Time is not singled out here; it is just another coordinate: if the Lagrangian contains a time variable then this variable is to be rescaled as $t \to t\, b^{N_t}$ with some constant exponent $N_t$. The goal is to determine the exponent set $\{N\}$.
One exponent, say $N_{x_1}$, may be chosen arbitrarily, for example $N_{x_1} = -1$. In the language of dimensional analysis this means that the exponents $N$ count wave vector factors (a reciprocal length $1/L$). Each monomial of the Lagrangian thus leads to a homogeneous linear equation for the exponents $N$. If there are $M$ (inequivalent) coordinates and fields in the Lagrangian, then $M$ such equations constitute a square matrix. If this matrix were invertible then there only would be the trivial solution $N = 0$.
The condition for a nontrivial solution (a vanishing determinant) gives an equation between the space dimensions, and this determines the upper critical dimension $d_u$ (provided there is only one variable dimension $d$ in the Lagrangian). A redefinition of the coordinates and fields now shows that determining the scaling exponents $N$ is equivalent to a dimensional analysis with respect to the wavevector $k$, with all coupling constants occurring in the Lagrangian rendered dimensionless. Dimensionless coupling constants are the technical hallmark for the upper critical dimension.
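A worked sketch of this linear algebra for the standard $\phi^4$-model (the explicit Lagrangian form is the conventional one quoted above; the code is purely illustrative):

```python
# Scaling exponents count wave-vector factors, so the coordinate
# exponent is fixed by convention to N_x = -1 (a reciprocal length);
# each derivative then contributes +1 = -N_x.
from sympy import symbols, solve

d, N_phi = symbols('d N_phi')
N_x = -1

# Each monomial of L = \int d^d x [ (1/2)(d phi)^2 + u phi^4 ] must
# scale with total exponent zero, one linear equation per monomial:
kinetic = d*N_x + 2*(-N_x) + 2*N_phi   # measure + two derivatives + phi^2
quartic = d*N_x + 4*N_phi              # measure + phi^4

# A nontrivial simultaneous solution exists only at the upper
# critical dimension:
print(solve([kinetic, quartic], [d, N_phi]))   # {d: 4, N_phi: 1}
```

The two equations are compatible only at $d = 4$, the familiar upper critical dimension of the $\phi^4$ universality class.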
Naive scaling at the level of the Lagrangian does not directly correspond to physical scaling because a cutoff is required to give a meaning to the field theory and the path integral. Changing the length scale also changes the number of degrees of freedom.
This complication is taken into account by the renormalization group. The main result at the upper critical dimension is that scale invariance remains valid for large factors $b$, but with additional logarithmic factors in the scaling of the coordinates and fields.
What happens below or above $d_u$ depends on whether one is interested in long distances (statistical field theory) or short distances (quantum field theory). Quantum field theories are trivial (convergent) below $d_u$ and not renormalizable above $d_u$. Statistical field theories are trivial (convergent) above $d_u$ and renormalizable below $d_u$. In the latter case there arise "anomalous" contributions to the naive scaling exponents $N$. These anomalous contributions to the effective critical exponents vanish at the upper critical dimension.
It is instructive to see how the scale invariance at the upper critical dimension becomes a scale invariance below this dimension. For small external wave vectors the vertex functions acquire additional anomalous exponents. If these exponents are inserted into the scaling matrix (which only has values in the first column) the condition for scale invariance becomes a homogeneous linear equation for these anomalous exponents. This equation only can be satisfied if the anomalous exponents of the vertex functions cooperate in some way. In fact, the vertex functions depend on each other hierarchically. One way to express this interdependence is through the Schwinger–Dyson equations.
Naive scaling at $d_u$ thus is important as a zeroth-order approximation. Naive scaling at the upper critical dimension also classifies terms of the Lagrangian as relevant, irrelevant or marginal. A Lagrangian is compatible with scaling if the $x$- and $\phi$-exponents lie on a hyperplane; the exponent vector $N$ is a normal vector of this hyperplane.
Lower critical dimension
The lower critical dimension $d_L$ of a phase transition of a given universality class is the last dimension for which this phase transition does not occur if the dimension is increased, starting with $d = 1$.
Thermodynamic stability of an ordered phase depends on entropy and energy. Quantitatively this depends on the type of domain walls and their fluctuation modes. There appears to be no generic formal way for deriving the lower critical dimension of a field theory. Lower bounds may be derived with statistical mechanics arguments.
Consider first a one-dimensional system with short range interactions. Creating a domain wall requires a fixed energy amount $\epsilon$. Extracting this energy from other degrees of freedom decreases entropy by $\Delta S = -\epsilon/T$. This entropy change must be compared with the entropy of the domain wall itself. In a system of length $L$ there are $L/a$ possible positions for the domain wall, leading (according to Boltzmann's principle) to an entropy gain $\Delta S = k_B \ln(L/a)$. For nonzero temperature $T$ and large enough $L$ the entropy gain always dominates, and thus there is no phase transition in one-dimensional systems with short-range interactions at $T > 0$. Space dimension $d = 1$ thus is a lower bound for the lower critical dimension of such systems.
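The argument condenses into a single free-energy estimate (with $a$ a microscopic length, an assumed lattice constant):

$$\Delta F = \epsilon - k_B T \ln\frac{L}{a},$$

which becomes negative for sufficiently large $L$ at any $T > 0$, so domain walls always proliferate and destroy long-range order in one dimension.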
A stronger lower bound can be derived with the help of similar arguments for systems with short range interactions and an order parameter with a continuous symmetry. In this case the Mermin–Wagner theorem states that the order parameter expectation value vanishes in $d = 2$ at $T > 0$, and there thus is no phase transition of the usual type at $d = 2$ and below.
For systems with quenched disorder a criterion given by Imry and Ma might be relevant. These authors used the criterion to determine the lower critical dimension of random field magnets.
References
External links
Kanon: A free windows program to determine the critical dimension, with examples, online help and mathematical details
Critical phenomena
Statistical mechanics
Phase transitions
String theory | Critical dimension | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy",
"Mathematics"
] | 1,426 | [
"Physical phenomena",
"Phase transitions",
"Astronomical hypotheses",
"String theory",
"Critical phenomena",
"Phases of matter",
"Condensed matter physics",
"Statistical mechanics",
"Matter",
"Dynamical systems"
] |
1,932,012 | https://en.wikipedia.org/wiki/X-ray%20magnetic%20circular%20dichroism | X-ray magnetic circular dichroism (XMCD) is a difference spectrum of two X-ray absorption spectra (XAS) taken in a magnetic field, one taken with left circularly polarized light, and one with right circularly polarized light. By closely analyzing the difference in the XMCD spectrum, information can be obtained on the magnetic properties of the atom, such as its spin and orbital magnetic moment. Using XMCD magnetic moments below 10−5 μB can be observed.
In the case of transition metals such as iron, cobalt, and nickel, the absorption spectra for XMCD are usually measured at the L-edge. This corresponds to the process 2p → 3d: in the case of iron, a 2p electron is excited to a 3d state by an X-ray of about 700 eV. Because the 3d electron states are the origin of the magnetic properties of the elements, the spectra contain information on the magnetic properties. In rare-earth elements, usually the M4,5-edges are measured, corresponding to electron excitations from a 3d state to mostly 4f states.
Line intensities and selection rules
The line intensities and selection rules of XMCD can be understood by considering the transition matrix elements of an atomic state $|n,l,m\rangle$ excited by circularly polarised light. Here $n$ is the principal, $l$ the angular momentum and $m$ the magnetic quantum number. The polarisation vector of left and right circular polarised light can be rewritten in terms of spherical harmonics, leading to an expression for the transition matrix element which can be simplified using the 3-j symbol. The radial part is referred to as the line strength, while the angular part contains symmetries from which selection rules can be deduced. Rewriting the product of three spherical harmonics with the 3-j symbol shows that the matrix element is nonzero only if the quantum numbers satisfy the triangle and magnetic-sum conditions of the 3-j symbols, giving the following selection rules for dipole transitions with circular polarised light: $\Delta l = \pm 1$ and $\Delta m = \pm 1$, with the sign of $\Delta m$ fixed by the handedness of the polarisation.
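As an illustration (not part of the original derivation), the selection rules can be checked numerically with SymPy's implementation of the Wigner 3-j symbol; identifying $q = +1$ and $q = -1$ with the two circular polarisations is a convention assumed by this sketch:

```python
# The angular matrix element contains the product of two 3-j symbols:
# one with all magnetic quantum numbers zero (enforcing parity) and
# one carrying (-m', q, m).  Nonzero products give allowed transitions.
from sympy.physics.wigner import wigner_3j

l, m = 1, 0                      # example initial state: 2p, m = 0
for q in (+1, -1):               # the two circular polarisations
    for l_f in range(4):         # candidate final orbital momenta
        for m_f in range(-l_f, l_f + 1):
            amp = (wigner_3j(l_f, 1, l, 0, 0, 0) *
                   wigner_3j(l_f, 1, l, -m_f, q, m))
            if amp != 0:
                print(f"q={q:+d}: allowed l'={l_f}, m'={m_f}")
# For this initial state only l' = 2 (Delta l = +1) survives, with
# m' = m + q; the Delta l = -1 channel (l' = 0) is closed here only
# because l' = 0 cannot accommodate m' = +/-1.
```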
Derivation of sum rules for 3d and 4f systems
We will derive the XMCD sum rules from their original sources, as presented in works by Carra, Thole, Koenig, Sette, Altarelli, van der Laan, and Wang; the resulting relations can be used to derive the actual magnetic moments associated with the states.
We employ the approximation $\mu^0 \approx (\mu^+ + \mu^-)/2$, where $\mu^0$ represents linear polarization, $\mu^+$ right circular polarization, and $\mu^-$ left circular polarization. This distinction is crucial, as experiments at beamlines typically utilize either left and right circular polarization, or switch the field direction while maintaining the same circular polarization, or a combination of both.
The sum rules, as presented in the aforementioned references, relate the integrated XMCD and XAS intensities to the ground-state orbital and spin moments.
Here, $\langle T_z \rangle$ denotes the magnetic dipole tensor term, $c$ and $l$ represent the initial and final orbital respectively (s, p, d, f, ... = 0, 1, 2, 3, ...), the edges integrated within the measured signal enter through the integration limits, and $n$ signifies the number of electrons in the final shell.
The magnetic orbital moment $m_{\mathrm{orb}}$, using the same sign conventions, can be expressed as:
For moment calculations, we use c=1 and l=2 for L2,3-edges, and c=2 and l=3 for M4,5-edges. Applying the earlier approximation, we can express the L2,3-edges as:
For 3d transitions, the effective spin moment is calculated as:
For 4f rare earth metals (M4,5-edges), using c=2 and l=3:
The calculation of the effective spin moment for 4f transitions is as follows:
When $\langle T_z \rangle$ is neglected, the term is commonly referred to as the effective spin moment $m_s^{\mathrm{eff}}$. By disregarding $\langle T_z \rangle$ and calculating the effective spin moment, it becomes apparent that both the non-magnetic XAS component and the number of electrons in the shell $n$ appear in both equations. This allows for the calculation of the orbital to effective spin moment ratio using only the XMCD spectra.
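A numerical sketch using the widely quoted form of the L2,3 sum rules from Chen et al. (Phys. Rev. Lett. 75, 152 (1995)), with $\langle T_z \rangle$ neglected as in the text; all input intensities below are hypothetical:

```python
# p, q, r are integrated intensities (made-up values for illustration):
#   p = integral of (mu+ - mu-) over the L3 edge
#   q = integral of (mu+ - mu-) over L3 + L2
#   r = integral of (mu+ + mu-) over L3 + L2
def xmcd_moments(p, q, r, n_3d):
    """Return (m_orb, m_spin_eff) in Bohr magnetons per atom."""
    holes = 10 - n_3d                        # number of 3d holes
    m_orb = -4.0 * q * holes / (3.0 * r)
    m_spin_eff = -(6.0 * p - 4.0 * q) * holes / r
    return m_orb, m_spin_eff

m_orb, m_spin_eff = xmcd_moments(p=-0.38, q=-0.13, r=3.0, n_3d=6.6)
print(m_orb, m_spin_eff)     # ~0.20 and ~2.0 for these iron-like inputs
print(m_orb / m_spin_eff)    # the ratio is independent of n_3d and r
```

The last line illustrates the statement above: the hole count and the non-magnetic XAS normalization cancel in the orbital-to-spin ratio.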
See also
EMCD
Faraday effect
Magnetic circular dichroism
Magnetic field
Transition metals
References
X-ray spectroscopy | X-ray magnetic circular dichroism | [
"Physics",
"Chemistry"
] | 819 | [
"X-ray spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
1,932,063 | https://en.wikipedia.org/wiki/Dye-sensitized%20solar%20cell | A dye-sensitized solar cell (DSSC, DSC, DYSC or Grätzel cell) is a low-cost solar cell belonging to the group of thin film solar cells. It is based on a semiconductor formed between a photo-sensitized anode and an electrolyte, a photoelectrochemical system. The modern version of a dye solar cell, also known as the Grätzel cell, was originally co-invented in 1988 by Brian O'Regan and Michael Grätzel at UC Berkeley and this work was later developed by the aforementioned scientists at the École Polytechnique Fédérale de Lausanne (EPFL) until the publication of the first high efficiency DSSC in 1991. Michael Grätzel has been awarded the 2010 Millennium Technology Prize for this invention.
The DSSC has a number of attractive features; it is simple to make using conventional roll-printing techniques, is semi-flexible and semi-transparent which offers a variety of uses not applicable to glass-based systems, and most of the materials used are low-cost. In practice it has proven difficult to eliminate a number of expensive materials, notably platinum and ruthenium, and the liquid electrolyte presents a serious challenge to making a cell suitable for use in all weather. Although its conversion efficiency is less than the best thin-film cells, in theory its price/performance ratio should be good enough to allow them to compete with fossil fuel electrical generation by achieving grid parity. Commercial applications, which were held up due to chemical stability problems, had been forecast in the European Union Photovoltaic Roadmap to significantly contribute to renewable electricity generation by 2020.
Current technology: semiconductor solar cells
In a traditional solid-state semiconductor, a solar cell is made from two doped crystals, one doped with n-type impurities (n-type semiconductor), which add additional free conduction band electrons, and the other doped with p-type impurities (p-type semiconductor), which add additional electron holes. When placed in contact, some of the electrons in the n-type portion flow into the p-type to "fill in" the missing electrons, also known as electron holes. Eventually enough electrons will flow across the boundary to equalize the Fermi levels of the two materials. The result is a region at the interface, the p–n junction, where charge carriers are depleted and/or accumulated on each side of the interface. In silicon, this transfer of electrons produces a potential barrier of about 0.6 to 0.7 eV.
When placed in the sun, photons of the sunlight can excite electrons on the p-type side of the semiconductor, a process known as photoexcitation. In silicon, sunlight can provide enough energy to push an electron out of the lower-energy valence band into the higher-energy conduction band. As the name implies, electrons in the conduction band are free to move about the silicon. When a load is placed across the cell as a whole, these electrons will flow out of the p-type side into the n-type side, lose energy while moving through the external circuit, and then flow back into the p-type material where they can once again re-combine with the valence-band hole they left behind. In this way, sunlight creates an electric current.
In any semiconductor, the band gap means that only photons with that amount of energy, or more, will contribute to producing a current. In the case of silicon, the majority of visible light from red to violet has sufficient energy to make this happen. Unfortunately higher energy photons, those at the blue and violet end of the spectrum, have more than enough energy to cross the band gap; although some of this extra energy is transferred into the electrons, the majority of it is wasted as heat. Another issue is that in order to have a reasonable chance of capturing a photon, the n-type layer has to be fairly thick. This also increases the chance that a freshly ejected electron will meet up with a previously created hole in the material before reaching the p–n junction. These effects produce an upper limit on the efficiency of silicon solar cells, currently around 20% for common modules and up to 27.1% for the best laboratory cells (33.16% is the theoretical maximum efficiency for single band gap solar cells, see Shockley–Queisser limit.).
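A quick numerical illustration of this band-gap argument (the constants are standard approximate values, not taken from the article):

```python
# Photon energy E = h*c / lambda; any excess above the band gap is
# transferred to the electron but mostly lost as heat.
HC_EV_NM = 1239.84       # h*c in eV*nm (standard value)
E_GAP_SI = 1.1           # approximate silicon band gap, eV

for name, wavelength_nm in [("red", 700), ("green", 550), ("violet", 400)]:
    e_photon = HC_EV_NM / wavelength_nm
    if e_photon < E_GAP_SI:
        print(f"{name}: {e_photon:.2f} eV -> below the gap, no carrier")
    else:
        wasted = e_photon - E_GAP_SI
        print(f"{name}: {e_photon:.2f} eV -> {wasted:.2f} eV wasted as heat")
```

Even a red photon carries about 0.7 eV more than the silicon gap, and a violet photon roughly 2 eV more, which is why the high-energy end of the spectrum is used so inefficiently.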
By far the biggest problem with the conventional approach is cost; solar cells require a relatively thick layer of doped silicon in order to have reasonable photon capture rates, and silicon processing is expensive. There have been a number of different approaches to reduce this cost over the last decade, notably the thin-film approaches, but to date they have seen limited application due to a variety of practical problems. Another line of research has been to dramatically improve efficiency through the multi-junction approach, although these cells are very high cost and suitable only for large commercial deployments. In general terms the types of cells suitable for rooftop deployment have not changed significantly in efficiency, although costs have dropped somewhat due to increased supply.
Dye-sensitized solar cells
In the late 1960s it was discovered that illuminated organic dyes can generate electricity at oxide electrodes in electrochemical cells. In an effort to understand and simulate the primary processes in photosynthesis the phenomenon was studied at the University of California at Berkeley with chlorophyll extracted from spinach (bio-mimetic or bionic approach). On the basis of such experiments electric power generation via the dye sensitization solar cell (DSSC) principle was demonstrated and discussed in 1972. The instability of the dye solar cell was identified as a main challenge. Its efficiency could, during the following two decades, be improved by optimizing the porosity of the electrode prepared from fine oxide powder, but the instability remained a problem.
A modern n-type DSSC, the most common type of DSSC, is composed of a porous layer of titanium dioxide nanoparticles, covered with a molecular dye that absorbs sunlight, like the chlorophyll in green leaves. The titanium dioxide is immersed in an electrolyte solution, above which is a platinum-based catalyst. As in a conventional alkaline battery, an anode (the titanium dioxide) and a cathode (the platinum) are placed on either side of a liquid conductor (the electrolyte).
The working principle for n-type DSSCs can be summarized into a few basic steps. Sunlight passes through the transparent electrode into the dye layer where it can excite electrons that then flow into the conduction band of the n-type semiconductor, typically titanium dioxide. The electrons from titanium dioxide then flow toward the transparent electrode where they are collected for powering a load. After flowing through the external circuit, they are re-introduced into the cell on a metal electrode on the back, also known as the counter electrode, and flow into the electrolyte. The electrolyte then transports the electrons back to the dye molecules and regenerates the oxidized dye.
The basic working principle above, is similar in a p-type DSSC, where the dye-sensitised semiconductor is of p-type nature (typically nickel oxide). However, instead of injecting an electron into the semiconductor, in a p-type DSSC, a hole flows from the dye into the valence band of the p-type semiconductor.
Dye-sensitized solar cells separate the two functions provided by silicon in a traditional cell design. Normally the silicon acts as both the source of photoelectrons, as well as providing the electric field to separate the charges and create a current. In the dye-sensitized solar cell, the bulk of the semiconductor is used solely for charge transport, the photoelectrons are provided from a separate photosensitive dye. Charge separation occurs at the surfaces between the dye, semiconductor and electrolyte.
The dye molecules are quite small (nanometer sized), so in order to capture a reasonable amount of the incoming light the layer of dye molecules needs to be made fairly thick, much thicker than the molecules themselves. To address this problem, a nanomaterial is used as a scaffold to hold large numbers of the dye molecules in a 3-D matrix, increasing the number of molecules for any given surface area of cell. In existing designs, this scaffolding is provided by the semiconductor material, which serves double-duty.
Counter Electrode Materials
One of the most important components of DSSC is the counter electrode. As stated before, the counter electrode is responsible for collecting electrons from the external circuit and introducing them back into the electrolyte to catalyze the reduction reaction of the redox shuttle, generally I3− to I−. Thus, it is important for the counter electrode to not only have high electron conductivity and diffusive ability, but also electrochemical stability, high catalytic activity and appropriate band structure. The most common counter electrode material used in DSSCs is currently platinum, but it is not sustainable owing to its high cost and scarce resources. Thus, much research has been focused towards discovering new hybrid and doped materials that can replace platinum with comparable or superior electrocatalytic performance. One such category being widely studied includes chalcogen compounds of cobalt, nickel, and iron (CCNI), particularly the effects of morphology, stoichiometry, and synergy on the resulting performance. It has been found that in addition to the elemental composition of the material, these three parameters greatly impact the resulting counter electrode efficiency. Of course, there are a variety of other materials currently being researched, such as highly mesoporous carbons, tin-based materials, gold nanostructures, as well as lead-based nanocrystals. However, the following section compiles a variety of ongoing research efforts specifically relating to CCNI towards optimizing the DSSC counter electrode performance.
Morphology
Even with the same composition, morphology of the nanoparticles that make up the counter electrode play such an integral role in determining the efficiency of the overall photovoltaic. Because a material's electrocatalytic potential is highly dependent on the amount of surface area available to facilitate the diffusion and reduction of the redox species, numerous research efforts have been focused towards understanding and optimizing the morphology of nanostructures for DSSC counter electrodes.
In 2017, Huang et al. utilized various surfactants in a microemulsion-assisted hydrothermal synthesis of CoSe2/CoSeO3 composite crystals to produce nanocubes, nanorods, and nanoparticles. Comparison of these three morphologies revealed that the hybrid composite nanoparticles, due to having the largest electroactive surface area, had the highest power conversion efficiency of 9.27%, even higher than its platinum counterpart. Not only that, the nanoparticle morphology displayed the highest peak current density and smallest potential gap between the anodic and cathodic peak potentials, thus implying the best electrocatalytic ability.
With a similar study but a different system, Du et al. in 2017 determined that the ternary oxide of NiCo2O4 had the greatest power conversion efficiency and electrocatalytic ability as nanoflowers when compared to nanorods or nanosheets. Du et al. realized that exploring various growth mechanisms that help to exploit the larger active surface areas of nanoflowers may provide an opening for extending DSSC applications to other fields.
Stoichiometry
Of course, the composition of the material that is used as the counter electrode is extremely important to creating a working photovoltaic, as the valence and conduction energy bands must overlap with those of the redox electrolyte species to allow for efficient electron exchange.
In 2018, Jin et al. prepared ternary nickel cobalt selenide (NixCoySe) films at various stoichiometric ratios of nickel and cobalt to understand its impact on the resulting cell performance. Nickel and cobalt bimetallic alloys were known to have outstanding electron conduction and stability, so optimizing its stoichiometry would ideally produce a more efficient and stable cell performance than its singly metallic counterparts. Such is the result that Jin et al. found, as Ni0.12Co0.80Se achieved superior power conversion efficiency (8.61%), lower charge transfer impedance, and higher electrocatalytic ability than both its platinum and binary selenide counterparts.
Synergy
One last area that has been actively studied is the synergy of different materials in promoting superior electroactive performance. Whether through various charge transport material, electrochemical species, or morphologies, exploiting the synergetic relationship between different materials has paved the way for even newer counter electrode materials.
In 2016, Lu et al. mixed nickel cobalt sulfide microparticles with reduced graphene oxide (rGO) nanoflakes to create the counter electrode. Lu et al. discovered not only that the rGO acted as a co-catalyst in accelerating the triiodide reduction, but also that the microparticles and rGO had a synergistic interaction that decreased the charge transfer resistance of the overall system. Although the efficiency of this system was slightly lower than its platinum analog (efficiency of NCS/rGO system: 8.96%; efficiency of Pt system: 9.11%), it provided a platform on which further research can be conducted.
Construction
In the case of the original Grätzel and O'Regan design, the cell has 3 primary parts. On top is a transparent anode made of fluoride-doped tin dioxide (SnO2:F) deposited on the back of a (typically glass) plate. On the back of this conductive plate is a thin layer of titanium dioxide (TiO2), which forms into a highly porous structure with an extremely high surface area. The TiO2 nanoparticles are chemically bound together by a process called sintering. TiO2 only absorbs a small fraction of the solar photons (those in the UV). The plate is then immersed in a mixture of a photosensitive ruthenium-polypyridyl dye (also called molecular sensitizers) and a solvent. After soaking the film in the dye solution, a thin layer of the dye is left covalently bonded to the surface of the TiO2. The bond is either an ester, chelating, or bidentate bridging linkage.
A separate plate is then made with a thin layer of the iodide electrolyte spread over a conductive sheet, typically platinum metal. The two plates are then joined and sealed together to prevent the electrolyte from leaking. The construction is simple enough that there are hobby kits available to hand-construct them. Although they use a number of "advanced" materials, these are inexpensive compared to the silicon needed for normal cells because they require no expensive manufacturing steps. TiO2, for instance, is already widely used as a paint base.
One of the efficient DSSCs devices uses ruthenium-based molecular dye, e.g. [Ru(4,4'-dicarboxy-2,2'-bipyridine)2(NCS)2] (N3), that is bound to a photoanode via carboxylate moieties. The photoanode consists of 12 μm thick film of transparent 10–20 nm diameter TiO2 nanoparticles covered with a 4 μm thick film of much larger (400 nm diameter) particles that scatter photons back into the transparent film. The excited dye rapidly injects an electron into the TiO2 after light absorption. The injected electron diffuses through the sintered particle network to be collected at the front side transparent conducting oxide (TCO) electrode, while the dye is regenerated via reduction by a redox shuttle, I3−/I−, dissolved in a solution. Diffusion of the oxidized form of the shuttle to the counter electrode completes the circuit.
Mechanism of DSSCs
In a conventional n-type DSSC, the following steps convert photons (light) into current:
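In outline, for the iodide/triiodide electrolyte described in the Operation section below ($S$ denotes the sensitizer dye):

$$S + h\nu \to S^{*}, \qquad S^{*} \to S^{+} + e^{-}\,(\mathrm{TiO_2}), \qquad 2S^{+} + 3I^{-} \to 2S + I_3^{-}, \qquad I_3^{-} + 2e^{-}\,(\text{counter electrode}) \to 3I^{-}.$$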
The efficiency of a DSSC depends on four energy levels of the component: the excited state (approximately LUMO) and the ground state (HOMO) of the photosensitizer, the Fermi level of the TiO2 electrode and the redox potential of the mediator (I−/I3−) in the electrolyte.
Nanoplant-like morphology
In DSSCs, electrodes consist of sintered semiconducting nanoparticles, mainly TiO2 or ZnO. These nanoparticle DSSCs rely on trap-limited diffusion through the semiconductor nanoparticles for the electron transport. This limits the device efficiency since it is a slow transport mechanism. Recombination is more likely to occur at longer wavelengths of radiation. Moreover, sintering of nanoparticles requires a high temperature of about 450 °C, which restricts the fabrication of these cells to robust, rigid solid substrates. It has been proven that the efficiency of the DSSC increases if the sintered nanoparticle electrode is replaced by a specially designed electrode possessing an exotic 'nanoplant-like' morphology.
Operation
In a conventional n-type DSSC, sunlight enters the cell through the transparent SnO2:F top contact, striking the dye on the surface of the TiO2. Photons striking the dye with enough energy to be absorbed create an excited state of the dye, from which an electron can be "injected" directly into the conduction band of the TiO2. From there it moves by diffusion (as a result of an electron concentration gradient) to the clear anode on top.
Meanwhile, the dye molecule has lost an electron and the molecule will decompose if another electron is not provided. The dye strips one from iodide in electrolyte below the TiO2, oxidizing it into triiodide. This reaction occurs quite quickly compared to the time that it takes for the injected electron to recombine with the oxidized dye molecule, preventing this recombination reaction that would effectively short-circuit the solar cell.
The triiodide then recovers its missing electron by mechanically diffusing to the bottom of the cell, where the counter electrode re-introduces the electrons after flowing through the external circuit.
Efficiency
Several important measures are used to characterize solar cells. The most obvious is the total amount of electrical power produced for a given amount of solar power shining on the cell. Expressed as a percentage, this is known as the solar conversion efficiency. Electrical power is the product of current and voltage, so the maximum values for these measurements are important as well, Jsc and Voc respectively. Finally, in order to understand the underlying physics, the "quantum efficiency" is used to compare the chance that one photon (of a particular energy) will create one electron.
In quantum efficiency terms, DSSCs are extremely efficient. Due to their "depth" in the nanostructure there is a very high chance that a photon will be absorbed, and the dyes are very effective at converting them to electrons. Most of the small losses that do exist in DSSCs are due to conduction losses in the TiO2 and the clear electrode, or optical losses in the front electrode. The overall quantum efficiency for green light is about 90%, with the "lost" 10% being largely accounted for by the optical losses in the top electrode. The quantum efficiency of traditional designs varies, depending on their thickness, but is about the same as the DSSC.
In theory, the maximum voltage generated by such a cell is simply the difference between the (quasi-)Fermi level of the TiO2 and the redox potential of the electrolyte, about 0.7 V under solar illumination conditions (Voc). That is, if an illuminated DSSC is connected to a voltmeter in an "open circuit", it would read about 0.7 V. In terms of voltage, DSSCs offer slightly higher Voc than silicon, about 0.7 V compared to 0.6 V. This is a fairly small difference, so real-world differences are dominated by current production, Jsc.
Although the dye is highly efficient at converting absorbed photons into free electrons in the TiO2, only photons absorbed by the dye ultimately produce current. The rate of photon absorption depends upon the absorption spectrum of the sensitized TiO2 layer and upon the solar flux spectrum. The overlap between these two spectra determines the maximum possible photocurrent. Typically used dye molecules generally have poorer absorption in the red part of the spectrum compared to silicon, which means that fewer of the photons in sunlight are usable for current generation. These factors limit the current generated by a DSSC; for comparison, a traditional silicon-based solar cell offers about 35 mA/cm2, whereas current DSSCs offer about 20 mA/cm2.
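Putting these figures together (the fill factor FF = 0.70 is an assumed illustrative value, not from the article):

```python
# Peak power is Voc * Jsc * FF; efficiency is that power divided by
# the incident solar power (AM1.5, ~100 mW/cm^2).
SUN_MW_CM2 = 100.0                     # incident illumination, mW/cm^2
FF = 0.70                              # assumed fill factor

def efficiency(j_sc_ma_cm2, v_oc, ff=FF):
    """Peak conversion efficiency in percent."""
    return 100.0 * v_oc * j_sc_ma_cm2 * ff / SUN_MW_CM2

print(f"silicon: {efficiency(35.0, 0.6):.1f} %")   # ~14.7 %
print(f"DSSC:    {efficiency(20.0, 0.7):.1f} %")   # ~9.8 %
```

With these inputs the estimate lands near the ~11% figure quoted below; the exact value depends on the real fill factor, which varies by device.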
Overall peak power conversion efficiency for current DSSCs is about 11%. Current record for prototypes lies at 15%.
Degradation
DSSCs degrade when exposed to light. In 2014 air infiltration of the commonly-used amorphous Spiro-MeOTAD hole-transport layer was identified as the primary cause of the degradation, rather than oxidation. The damage could be avoided by the addition of an appropriate barrier.
The barrier layer may include UV stabilizers and/or UV absorbing luminescent chromophores (which emit at longer wavelengths which may be reabsorbed by the dye) and antioxidants to protect and improve the efficiency of the cell.
Advantages
DSSCs are currently the most efficient third-generation (2005 Basic Research Solar Energy Utilization 16) solar technology available. Other thin-film technologies are typically between 5% and 13%, and traditional low-cost commercial silicon panels operate between 14% and 17%. This makes DSSCs attractive as a replacement for existing technologies in "low density" applications like rooftop solar collectors, where the mechanical robustness and light weight of the glass-less collector is a major advantage. They may not be as attractive for large-scale deployments where higher-cost higher-efficiency cells are more viable, but even small increases in the DSSC conversion efficiency might make them suitable for some of these roles as well.
There is another area where DSSCs are particularly attractive. The process of injecting an electron directly into the TiO2 is qualitatively different from that occurring in a traditional cell, where the electron is "promoted" within the original crystal. In theory, given low rates of production, the high-energy electron in the silicon could re-combine with its own hole, giving off a photon (or other form of energy) which does not result in current being generated. Although this particular case may not be common, it is fairly easy for an electron generated by another atom to combine with a hole left behind in a previous photoexcitation.
In comparison, the injection process used in the DSSC does not introduce a hole in the TiO2, only an extra electron. Although it is energetically possible for the electron to recombine back into the dye, the rate at which this occurs is quite slow compared to the rate that the dye regains an electron from the surrounding electrolyte. Recombination directly from the TiO2 to species in the electrolyte is also possible although, again, for optimized devices this reaction is rather slow. On the contrary, electron transfer from the platinum coated electrode to species in the electrolyte is necessarily very fast.
As a result of these favorable "differential kinetics", DSSCs work even in low-light conditions. DSSCs are therefore able to work under cloudy skies and non-direct sunlight, whereas traditional designs would suffer a "cutout" at some lower limit of illumination, when charge carrier mobility is low and recombination becomes a major issue. The cutoff is so low they are even being proposed for indoor use, collecting energy for small devices from the lights in the house.
A practical advantage which DSSCs share with most thin-film technologies, is that the cell's mechanical robustness indirectly leads to higher efficiencies at higher temperatures. In any semiconductor, increasing temperature will promote some electrons into the conduction band "mechanically". The fragility of traditional silicon cells requires them to be protected from the elements, typically by encasing them in a glass box similar to a greenhouse, with a metal backing for strength. Such systems suffer noticeable decreases in efficiency as the cells heat up internally. DSSCs are normally built with only a thin layer of conductive plastic on the front layer, allowing them to radiate away heat much easier, and therefore operate at lower internal temperatures.
Disadvantages
The major disadvantage to the DSSC design is the use of the liquid electrolyte, which has temperature stability problems. At low temperatures the electrolyte can freeze, halting power production and potentially leading to physical damage. Higher temperatures cause the liquid to expand, making sealing the panels a serious problem. Another disadvantage is that costly ruthenium (dye), platinum (catalyst) and conducting glass or plastic (contact) are needed to produce a DSSC. A third major drawback is that the electrolyte solution contains volatile organic compounds (VOCs), solvents which must be carefully sealed as they are hazardous to human health and the environment. This, along with the fact that the solvents permeate plastics, has precluded large-scale outdoor application and integration into flexible structures.
Replacing the liquid electrolyte with a solid has been a major ongoing field of research. Recent experiments using solidified melted salts have shown some promise, but currently suffer from higher degradation during continued operation, and are not flexible.
Photocathodes and tandem cells
Dye sensitised solar cells operate as a photoanode (n-DSC), where photocurrent result from electron injection by the sensitized dye. Photocathodes (p-DSCs) operate in an inverse mode compared to the conventional n-DSC, where dye-excitation is followed by rapid electron transfer from a p-type semiconductor to the dye (dye-sensitized hole injection, instead of electron injection). Such p-DSCs and n-DSCs can be combined to construct tandem solar cells (pn-DSCs) and the theoretical efficiency of tandem DSCs is well beyond that of single-junction DSCs.
A standard tandem cell consists of one n-DSC and one p-DSC in a simple sandwich configuration with an intermediate electrolyte layer. n-DSC and p-DSC are connected in series, which implies that the resulting photocurrent will be controlled by the weakest photoelectrode, whereas photovoltages are additive. Thus, photocurrent matching is very important for the construction of highly efficient tandem pn-DSCs. However, unlike n-DSCs, fast charge recombination following dye-sensitized hole injection usually resulted in low photocurrents in p-DSC and thus hampered the efficiency of the overall device.
Researchers have found that using dyes comprising a perylenemonoimide (PMI) as the acceptor and an oligothiophene coupled to triphenylamine as the donor greatly improve the performance of p-DSC by reducing charge recombination rate following dye-sensitized hole injection. The researchers constructed a tandem DSC device with NiO on the p-DSC side and TiO2 on the n-DSC side. Photocurrent matching was achieved through adjustment of NiO and TiO2 film thicknesses to control the optical absorptions and therefore match the photocurrents of both electrodes. The energy conversion efficiency of the device is 1.91%, which exceeds the efficiency of its individual components, but is still much lower than that of high performance n-DSC devices (6%–11%). The results are still promising since the tandem DSC was in itself rudimentary. The dramatic improvement in performance in p-DSC can eventually lead to tandem devices with much greater efficiency than lone n-DSCs.
As previously mentioned, using a solid-state electrolyte has several advantages over a liquid system (such as no leakage and faster charge transport), which has also been realised for dye-sensitised photocathodes. Using electron transporting materials such as PCBM, TiO2 and ZnO instead of the conventional liquid redox couple electrolyte, researchers have managed to fabricate solid state p-DSCs (p-ssDSCs), aiming for solid state tandem dye sensitized solar cells, which have the potential to achieve much greater photovoltages than a liquid tandem device.
Development
The dyes used in early experimental cells (circa 1995) were sensitive only in the high-frequency end of the solar spectrum, in the UV and blue. Newer versions were quickly introduced (circa 1999) that had much wider frequency response, notably "triscarboxy-ruthenium terpyridine" [Ru(4,4',4"-(COOH)3-terpy)(NCS)3], which is efficient right into the low-frequency range of red and IR light. The wide spectral response results in the dye having a deep brown-black color, and it is referred to simply as "black dye". The dyes have an excellent chance of converting a photon into an electron, originally around 80% but improving to almost perfect conversion in more recent dyes; the overall efficiency is about 90%, with the "lost" 10% being largely accounted for by the optical losses in the top electrode.
A solar cell must be capable of producing electricity for at least twenty years, without a significant decrease in efficiency (life span). The "black dye" system was subjected to 50 million cycles, the equivalent of ten years' exposure to the sun in Switzerland. No discernible performance decrease was observed. However the dye is subject to breakdown in high-light situations. Over the last decade an extensive research program has been carried out to address these concerns. The newer dyes included 1-ethyl-3-methylimidazolium tetracyanoborate [EMIB(CN)4], which is extremely light- and temperature-stable, copper indium gallium selenide [Cu(In,Ga)Se2], which offers higher conversion efficiencies, and others with varying special-purpose properties.
DSSCs are still at the start of their development cycle. Efficiency gains are possible and have recently started more widespread study. These include the use of quantum dots for conversion of higher-energy (higher frequency) light into multiple electrons, using solid-state electrolytes for better temperature response, and changing the doping of the TiO2 to better match it with the electrolyte being used.
New developments
2003
A group of researchers at the École Polytechnique Fédérale de Lausanne (EPFL) has reportedly increased the thermostability of DSC by using amphiphilic ruthenium sensitizer in conjunction with quasi-solid-state gel electrolyte. The stability of the device matches that of a conventional inorganic silicon-based solar cell. The cell sustained heating for 1,000 h at 80 °C.
The group has previously prepared a ruthenium amphiphilic dye Z-907 (cis-Ru(H2dcbpy)(dnbpy)(NCS)2, where the ligand H2dcbpy is 4,4′-dicarboxylic acid-2,2′-bipyridine and dnbpy is 4,4′-dinonyl-2,2′-bipyridine) to increase dye tolerance to water in the electrolytes. In addition, the group also prepared a quasi-solid-state gel electrolyte with a 3-methoxypropionitrile (MPN)-based liquid electrolyte that was solidified by a photochemically stable fluorine polymer, polyvinylidenefluoride-co-hexafluoropropylene (PVDF-HFP).
The use of the amphiphilic Z-907 dye in conjunction with the polymer gel electrolyte in DSC achieved an energy conversion efficiency of 6.1%. More importantly, the device was stable under thermal stress and soaking with light. The high conversion efficiency of the cell was sustained after heating for 1,000 h at 80 °C, maintaining 94% of its initial value. After accelerated testing in a solar simulator for 1,000 h of light-soaking at 55 °C (100 mW cm−2) the efficiency had decreased by less than 5% for cells covered with an ultraviolet absorbing polymer film. These results are well within the limit for that of traditional inorganic silicon solar cells.
The enhanced performance may arise from a decrease in solvent permeation across the sealant due to the application of the polymer gel electrolyte. The polymer gel electrolyte is quasi-solid at room temperature, and becomes a viscous liquid (viscosity: 4.34 mPa·s) at 80 °C compared with the traditional liquid electrolyte (viscosity: 0.91 mPa·s). The much improved stabilities of the device under both thermal stress and soaking with light has never before been seen in DSCs, and they match the durability criteria applied to solar cells for outdoor use, which makes these devices viable for practical application.
2006
The first successful solid-hybrid dye-sensitized solar cells were reported.
To improve electron transport in these solar cells, while maintaining the high surface area needed for dye adsorption, two researchers have designed alternate semiconductor morphologies, such as arrays of nanowires and a combination of nanowires and nanoparticles, to provide a direct path to the electrode via the semiconductor conduction band. Such structures may provide a means to improve the quantum efficiency of DSSCs in the red region of the spectrum, where their performance is currently limited.
In August 2006, to prove the chemical and thermal robustness of the 1-ethyl-3 methylimidazolium tetracyanoborate solar cell, the researchers subjected the devices to heating at 80 °C in the dark for 1000 hours, followed by light soaking at 60 °C for 1000 hours. After dark heating and light soaking, 90% of the initial photovoltaic efficiency was maintained – the first time such excellent thermal stability has been observed for a liquid electrolyte that exhibits such a high conversion efficiency. Contrary to silicon solar cells, whose performance declines with increasing temperature, the dye-sensitized solar-cell devices were only negligibly influenced when increasing the operating temperature from ambient to 60 °C.
2007
Wayne Campbell at Massey University, New Zealand, has experimented with a wide variety of organic dyes based on porphyrin. In nature, porphyrin is the basic building block of the hemoproteins, which include chlorophyll in plants and hemoglobin in animals. He reports efficiency on the order of 5.6% using these low-cost dyes.
2008
An article published in Nature Materials demonstrated cell efficiencies of 8.2% using a new solvent-free liquid redox electrolyte consisting of a melt of three salts, as an alternative to using organic solvents as an electrolyte solution. Although the efficiency with this electrolyte is less than the 11% being delivered using the existing iodine-based solutions, the team is confident the efficiency can be improved.
2009
A group of researchers at Georgia Tech made dye-sensitized solar cells with a higher effective surface area by wrapping the cells around a quartz optical fiber. The researchers removed the cladding from optical fibers, grew zinc oxide nanowires along the surface, treated them with dye molecules, surrounded the fibers by an electrolyte and a metal film that carries electrons off the fiber. The cells are six times more efficient than a zinc oxide cell with the same surface area. Photons bounce inside the fiber as they travel, so there are more chances to interact with the solar cell and produce more current. These devices only collect light at the tips, but future fiber cells could be made to absorb light along the entire length of the fiber, which would require a coating that is conductive as well as transparent. Max Shtein of the University of Michigan said a sun-tracking system would not be necessary for such cells, and would work on cloudy days when light is diffuse.
2010
Researchers at the École Polytechnique Fédérale de Lausanne and at the Université du Québec à Montréal claim to have overcome two of the DSC's major issues:
"New molecules" have been created for the electrolyte, resulting in a liquid or gel that is transparent and non-corrosive, which can increase the photovoltage and improve the cell's output and stability.
At the cathode, platinum was replaced by cobalt sulfide, which is far less expensive, more efficient, more stable and easier to produce in the laboratory.
2011
Dyesol and Tata Steel Europe announced in June the development of the world's largest dye sensitized photovoltaic module, printed onto steel in a continuous line.
Dyesol and CSIRO announced in October a Successful Completion of Second Milestone in Joint Dyesol / CSIRO Project.
Dyesol Director Gordon Thompson said, "The materials developed during this joint collaboration have the potential to significantly advance the commercialisation of DSC in a range of applications where performance and stability are essential requirements.
Dyesol is extremely encouraged by the breakthroughs in the chemistry allowing the production of the target molecules. This creates a path to the immediate commercial utilisation of these new materials."
Dyesol and Tata Steel Europe announced in November the targeted development of Grid Parity Competitive BIPV solar steel that does not require government subsidised feed in tariffs. TATA-Dyesol "Solar Steel" Roofing is currently being installed on the Sustainable Building Envelope Centre (SBEC) in Shotton, Wales.
2012
Northwestern University researchers announced a solution to a primary problem of DSSCs, that of difficulties in using and containing the liquid electrolyte and the consequent relatively short useful life of the device. This is achieved through the use of nanotechnology and the conversion of the liquid electrolyte to a solid. The current efficiency is about half that of silicon cells, but the cells are lightweight and potentially of much lower cost to produce.
2013
During the last 5–10 years, a new kind of DSSC has been developed – the solid state dye-sensitized solar cell. In this case the liquid electrolyte is replaced by one of several solid hole conducting materials. From 2009 to 2013 the efficiency of solid state DSSCs dramatically increased from 4% to 15%. Michael Grätzel announced the fabrication of solid state DSSCs with 15.0% efficiency, reached by means of a hybrid perovskite CH3NH3PbI3 dye, subsequently deposited from the separated solutions of CH3NH3I and PbI2.
The first architectural integration was demonstrated at EPFL's SwissTech Convention Center in partnership with Romande Energie. The total surface is 300 m2, in 1400 modules of 50 cm × 35 cm, designed by artists Daniel Schlaepfer and Catherine Bolle.
2018
Researchers have investigated the role of surface plasmon resonances present on gold nanorods in the performance of dye-sensitized solar cells. They found that with increasing nanorod concentration, the light absorption grew linearly; however, charge extraction was also dependent on the concentration. With an optimized concentration, they found that the overall power conversion efficiency improved from 5.31 to 8.86% for Y123 dye-sensitized solar cells.
The synthesis of one-dimensional TiO2 nanostructures directly on fluorine-doped tin oxide glass substrates was successfully demonstrated via a two-step solvothermal reaction. Additionally, through a TiO2 sol treatment, the performance of the dual TiO2 nanowire cells was enhanced, reaching a power conversion efficiency of 7.65%.
Stainless steel based counter-electrodes for DSSCs have been reported which further reduce cost compared to conventional platinum based counter electrode and are suitable for outdoor application.
Researchers from EPFL have advanced the DSSCs based on copper complexes redox electrolytes, which have achieved 13.1% efficiency under standard AM1.5G, 100 mW/cm2 conditions and record 32% efficiency under 1000 lux of indoor light.
Researchers from Uppsala University have used n-type semiconductors instead of redox electrolyte to fabricate solid state p-type dye sensitized solar cells.
2021
The field of building-integrated photovoltaics (BIPV) has gained attention from the scientific community due to its potential to reduce pollution and materials and electricity costs, as well as to improve the aesthetics of a building. In recent years, scientists have looked at ways to incorporate DSSCs in BIPV applications, since the dominant Si-based PV systems in the market have a limited presence in this field due to their energy-intensive manufacturing methods, poor conversion efficiency under low light intensities, and high maintenance requirements. In 2021, a group of researchers from the Silesian University of Technology in Poland developed a DSSC in which the classic glass counter electrode was replaced by an electrode based on a ceramic tile and nickel foil. The motivation for this change was that, although glass substrates have produced the highest recorded efficiencies for DSSCs, for BIPV applications like roof tiles or building facades lighter and more flexible materials are essential. This includes plastic films, metals, steel, or paper, which may also reduce manufacturing costs. The team found that the cell had an efficiency of 4% (close to that of a solar cell with a glass counter electrode), demonstrating the potential for creating building-integrated DSSCs that are stable and low-cost.
2022
Photosensitizers are dye compounds that absorb the photons from incoming light and eject electrons, producing an electric current that can be used to power a device or a storage unit. According to a new study performed by Michael Grätzel and fellow scientist Anders Hagfeldt, advances in photosensitizers have resulted in a substantial improvement in the performance of DSSCs under solar and ambient light conditions. Another key factor in achieving power-conversion records is cosensitization, owing to its ability to combine dyes that absorb light across a wider range of the light spectrum. Cosensitization is a chemical manufacturing method that produces DSSC electrodes containing two or more different dyes with complementary optical absorption capabilities, enabling the use of all available sunlight.
The researchers from Switzerland's École polytechnique fédérale de Lausanne (EPFL) found that the efficiency of cosensitized solar cells can be raised by the pre-adsorption of a monolayer of a hydroxamic acid derivative on the surface of nanocrystalline mesoporous titanium dioxide, which functions as the electron transport mechanism of the electrode. The two photosensitizer molecules used in the study were the organic dye SL9, which served as the primary long-wavelength light harvester, and the dye SL10, which provided an additional absorption peak that compensates for SL9's inefficient blue-light harvesting. It was found that adding this hydroxamic acid layer improved the dye layer's molecular packing and ordering. This slowed the adsorption of the sensitizers and augmented their fluorescence quantum yield, improving the power conversion efficiency of the cell.
The DSSC developed by the team showed a record-breaking power conversion efficiency of 15.2% under standard global simulated sunlight and long-term operational stability over 500 hours. In addition, devices with a larger active area exhibited efficiencies of around 30% while maintaining high stability, offering new possibilities for the DSSC field.
See also
References
External links
Brian O'Regan's account of the invention of the modern DSSC
Dye Solar Cells for Real, the assembly guide for making your own solar cells
Breakthrough in low-cost efficient solar cells
Thin-film cells
Dye-sensitized solar cells
Renewable energy commercialization
Ultraviolet radiation
Swiss inventions | Dye-sensitized solar cell | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 9,214 | [
"Spectrum (physical sciences)",
"Thin-film cells",
"Electromagnetic spectrum",
"Ultraviolet radiation",
"Planes (geometry)",
"Thin films"
] |
1,932,171 | https://en.wikipedia.org/wiki/Halo%20nucleus | In nuclear physics, an atomic nucleus is called a halo nucleus or is said to have a nuclear halo when it has a core nucleus surrounded by a "halo" of orbiting protons or neutrons, which makes the radius of the nucleus appreciably larger than that predicted by the liquid drop model. Halo nuclei form at the extreme edges of the table of nuclides — the neutron drip line and proton drip line — and have short half-lives, measured in milliseconds. These nuclei are studied shortly after their formation in an ion beam.
Typically, an atomic nucleus is a tightly bound group of protons and neutrons. However, in some nuclides, there is an overabundance of one species of nucleon. In some of these cases, a nuclear core and a halo will form.
Often, this property may be detected in scattering experiments, which show the nucleus to be much larger than the otherwise expected value. Normally, the cross-section (corresponding to the classical radius) of the nucleus is proportional to the cube root of its mass, as would be the case for a sphere of constant density. Specifically, for a nucleus of mass number A, the radius r is (approximately)
r = r0A^(1/3), where r0 is 1.2 fm.
One example of a halo nucleus is 11Li, which has a half-life of 8.6 ms. It contains a core of 3 protons and 6 neutrons, and a halo of two independent and loosely bound neutrons. It decays into 11Be by the emission of an antineutrino and an electron. Its mass radius of 3.16 fm is close to that of 32S or, even more impressively, of 208Pb, both much heavier nuclei.
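As a rough check of scale, plugging A = 11 into the radius formula above gives the non-halo estimate (a back-of-the-envelope calculation using the r0 value quoted earlier):

$$ r(^{11}\mathrm{Li}) \approx r_0 A^{1/3} = 1.2\ \mathrm{fm} \times 11^{1/3} \approx 2.7\ \mathrm{fm} $$

The measured 3.16 fm is thus nearly 20% larger in radius (about two-thirds larger in volume) than the liquid drop model predicts, which is what marks 11Li as a halo nucleus.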
Experimental confirmation of nuclear halos is recent and ongoing. Additional candidates are suspected. Several nuclides including 9B, 13N, and 15N are calculated to have a halo in the excited state but not in the ground state.
List of known nuclides with nuclear halo
Nuclei that have a neutron halo include 11Be and 19C. A two-neutron halo is exhibited by 6He, 11Li, 17B, 19B and 22C.
Two-neutron halo nuclei break into three fragments and are called Borromean because of this behavior, analogously to how all three of the Borromean rings are linked together but no two share a link. For example, the two-neutron halo nucleus 6He (which can be taken as a three-body system consisting of an alpha particle and two neutrons) is bound, but neither 5He nor the dineutron is. 8He and 14Be both exhibit a four-neutron halo.
Nuclei that have a proton halo include 8B and 26P. A two-proton halo is exhibited by 17Ne and 27S. Proton halos are expected to be rarer and more unstable than neutron halos because of the repulsive forces of the excess proton(s).
See also
Halo nuclei and nuclear force range limits
Isotopes of lithium
Exotic atom
Borromean nucleus
References
Further reading
Nuclear physics | Halo nucleus | [
"Physics"
] | 631 | [
"Nuclear physics"
] |
24,027,000 | https://en.wikipedia.org/wiki/Properties%20of%20water | Water () is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, which is nearly colorless apart from an inherent hint of blue. It is by far the most studied chemical compound and is described as the "universal solvent" and the "solvent of life". It is the most abundant substance on the surface of Earth and the only common substance to exist as a solid, liquid, and gas on Earth's surface. It is also the third most abundant molecule in the universe (behind molecular hydrogen and carbon monoxide).
Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows it to dissociate ions in salts and bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity.
Water is amphoteric, meaning that it can exhibit properties of an acid or a base, depending on the pH of the solution that it is in; it readily produces both H3O+ and OH− ions. Related to its amphoteric character, it undergoes self-ionization. The product of the activities, or approximately the concentrations, of H3O+ and OH− is a constant, so their respective concentrations are inversely proportional to each other.
Physical properties
Water is the chemical substance with chemical formula H2O; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom. Water is a tasteless, odorless liquid at ambient temperature and pressure. Liquid water has weak absorption bands at wavelengths of around 750 nm which cause it to appear to have a blue color. This can easily be observed in a water-filled bath or wash-basin whose lining is white. Large ice crystals, as in glaciers, also appear blue.
Under standard conditions, water is primarily a liquid, unlike other analogous hydrides of the oxygen family, which are generally gaseous. This unique property of water is due to hydrogen bonding. The molecules of water are constantly moving with respect to each other, and the hydrogen bonds are continually breaking and reforming at timescales faster than 200 femtoseconds (2 × 10−13 seconds). However, these bonds are strong enough to create many of the peculiar properties of water, some of which make it integral to life.
Water, ice, and vapor
Within the Earth's atmosphere and surface, the liquid phase is the most common and is the form that is generally denoted by the word "water". The solid phase of water is known as ice and commonly takes the structure of hard, amalgamated crystals, such as ice cubes, or loosely accumulated granular crystals, like snow. Aside from common hexagonal crystalline ice, other crystalline and amorphous phases of ice are known. The gaseous phase of water is known as water vapor (or steam). Visible steam and clouds are formed from minute droplets of water suspended in the air.
Water also forms a supercritical fluid. The critical temperature is 647 K and the critical pressure is 22.064 MPa. In nature, this only rarely occurs in extremely hostile conditions. A likely example of naturally occurring supercritical water is in the hottest parts of deep water hydrothermal vents, in which water is heated to the critical temperature by volcanic plumes and the critical pressure is caused by the weight of the ocean at the extreme depths where the vents are located. This pressure is reached at a depth of about 2200 meters: much less than the mean depth of the ocean (3800 meters).
Heat capacity and heats of vaporization and fusion
Water has a very high specific heat capacity of 4184 J/(kg·K) at 20 °C (4182 J/(kg·K) at 25 °C) —the second-highest among all the heteroatomic species (after ammonia), as well as a high heat of vaporization (40.65 kJ/mol or 2268 kJ/kg at the normal boiling point), both of which are a result of the extensive hydrogen bonding between its molecules. These unusual properties allow water to moderate Earth's climate by buffering large fluctuations in temperature. Most of the additional energy stored in the climate system since 1970 has accumulated in the oceans.
The specific enthalpy of fusion (more commonly known as latent heat) of water is 333.55 kJ/kg at 0 °C: the same amount of energy is required to melt ice as to warm ice from −160 °C up to its melting point or to heat the same amount of water by about 80 °C. Of common substances, only that of ammonia is higher. This property confers resistance to melting on the ice of glaciers and drift ice. Before and since the advent of mechanical refrigeration, ice was and still is in common use for retarding food spoilage.
The specific heat capacity of ice at −10 °C is 2030 J/(kg·K) and the heat capacity of steam at 100 °C is 2080 J/(kg·K).
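These figures make the melting comparison above easy to verify as a back-of-the-envelope calculation:

$$ c_\mathrm{ice}\,\Delta T \approx 2030\ \mathrm{J\,kg^{-1}\,K^{-1}} \times 160\ \mathrm{K} \approx 325\ \mathrm{kJ/kg}, \qquad c_\mathrm{water}\,\Delta T \approx 4184\ \mathrm{J\,kg^{-1}\,K^{-1}} \times 80\ \mathrm{K} \approx 335\ \mathrm{kJ/kg} $$

both of which are indeed close to the 333.55 kJ/kg enthalpy of fusion.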
Density of water and ice
The density of water is about 1 g/cm3 (1 g/mL): this relationship was originally used to define the gram. The density varies with temperature, but not linearly: as the temperature increases, the density rises to a peak at 3.98 °C and then decreases; the initial increase is unusual because most liquids undergo thermal expansion so that the density only decreases as a function of temperature. The increase observed for water from 0 °C to 3.98 °C and for a few other liquids is described as negative thermal expansion. Regular, hexagonal ice is also less dense than liquid water—upon freezing, the density of water decreases by about 9%.
These peculiar effects are due to the highly directional bonding of water molecules via the hydrogen bonds: ice and liquid water at low temperature have comparatively low-density, low-energy open lattice structures. The breaking of hydrogen bonds on melting with increasing temperature in the range 0–4 °C allows for a denser molecular packing in which some of the lattice cavities are filled by water molecules. Above 4 °C, however, thermal expansion becomes the dominant effect, and water near the boiling point (100 °C) is about 4% less dense than water at 3.98 °C.
Under increasing pressure, ice undergoes a number of transitions to other polymorphs with higher density than liquid water, such as ice II, ice III, high-density amorphous ice (HDA), and very-high-density amorphous ice (VHDA).
The unusual density curve and lower density of ice than of water is essential for much of the life on earth—if water were most dense at the freezing point, then in winter the cooling at the surface would lead to convective mixing. Once 0 °C was reached, the water body would freeze from the bottom up, and all life in it would be killed. Furthermore, given that water is a good thermal insulator (due to its heat capacity), some frozen lakes might not completely thaw in summer. As it is, the inversion of the density curve leads to a stable layering for surface temperatures below 4 °C, and with the layer of ice that floats on top insulating the water below, even Lake Baikal in central Siberia freezes only to about 1 m thickness in winter. In general, for deep enough lakes, the temperature at the bottom stays constant at about 4 °C (39 °F) throughout the year (see diagram).
Density of saltwater and ice
The density of saltwater depends on the dissolved salt content as well as the temperature. Ice still floats in the oceans, otherwise, they would freeze from the bottom up. However, the salt content of oceans lowers the freezing point by about 1.9 °C (due to freezing-point depression of a solvent containing a solute) and lowers the temperature of the density maximum of water to the former freezing point at 0 °C. This is why, in ocean water, the downward convection of colder water is not blocked by an expansion of water as it becomes colder near the freezing point. The oceans' cold water near the freezing point continues to sink. So creatures that live at the bottom of cold oceans like the Arctic Ocean generally live in water 4 °C colder than at the bottom of frozen-over fresh water lakes and rivers.
As the surface of saltwater begins to freeze (at −1.9 °C for normal salinity seawater, 3.5%) the ice that forms is essentially salt-free, with about the same density as freshwater ice. This ice floats on the surface, and the salt that is "frozen out" adds to the salinity and density of the seawater just below it, in a process known as brine rejection. This denser saltwater sinks by convection and the replacing seawater is subject to the same process. This produces essentially freshwater ice at −1.9 °C on the surface. The increased density of the seawater beneath the forming ice causes it to sink towards the bottom. On a large scale, the process of brine rejection and sinking cold salty water results in ocean currents forming to transport such water away from the Poles, leading to a global system of currents called the thermohaline circulation.
Miscibility and condensation
Water is miscible with many liquids, including ethanol in all proportions. Water and most oils are immiscible, usually forming layers according to increasing density from the top. This can be predicted by comparing the polarity. Water being a relatively polar compound will tend to be miscible with liquids of high polarity such as ethanol and acetone, whereas compounds with low polarity will tend to be immiscible and poorly soluble such as with hydrocarbons.
As a gas, water vapor is completely miscible with air. On the other hand, the maximum water vapor pressure that is thermodynamically stable with the liquid (or solid) at a given temperature is relatively low compared with total atmospheric pressure. For example, if the vapor's partial pressure is 2% of atmospheric pressure and the air is cooled from 25 °C, starting at about 22 °C, water will start to condense, defining the dew point, and creating fog or dew. The reverse process accounts for the fog burning off in the morning. If the humidity is increased at room temperature, for example, by running a hot shower or a bath, and the temperature stays about the same, the vapor soon reaches the pressure for phase change and then condenses out as minute water droplets, commonly referred to as steam.
A saturated gas or one with 100% relative humidity is when the vapor pressure of water in the air is at equilibrium with vapor pressure due to (liquid) water; water (or ice, if cool enough) will fail to lose mass through evaporation when exposed to saturated air. Because the amount of water vapor in the air is small, relative humidity, the ratio of the partial pressure due to the water vapor to the saturated partial vapor pressure, is much more useful. Vapor pressure above 100% relative humidity is called supersaturated and can occur if the air is rapidly cooled, for example, by rising suddenly in an updraft.
Vapour pressure
Compressibility
The compressibility of water is a function of pressure and temperature. At 0 °C, at the limit of zero pressure, the compressibility is 5.1×10−10 Pa−1. At the zero-pressure limit, the compressibility reaches a minimum of 4.4×10−10 Pa−1 around 45 °C before increasing again with increasing temperature. As the pressure is increased, the compressibility decreases, being 3.9×10−10 Pa−1 at 0 °C and 100 MPa.
The bulk modulus of water is about 2.2 GPa. The low compressibility of non-gases, and of water in particular, leads to their often being assumed incompressible. The low compressibility of water means that even in the deep oceans at 4 km depth, where pressures are 40 MPa, there is only a 1.8% decrease in volume.
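The quoted 1.8% volume decrease follows directly from the bulk modulus:

$$ \frac{\Delta V}{V} \approx \frac{\Delta p}{K} = \frac{40\ \mathrm{MPa}}{2.2\ \mathrm{GPa}} \approx 0.018 $$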
The bulk modulus of water ice ranges from 11.3 GPa at 0 K up to 8.6 GPa at 273 K. The large change in the compressibility of ice as a function of temperature is the result of its relatively large thermal expansion coefficient compared to other common solids.
Triple point
The temperature and pressure at which ordinary solid, liquid, and gaseous water coexist in equilibrium is a triple point of water. Since 1954, this point had been used to define the base unit of temperature, the kelvin, but, starting in 2019, the kelvin is now defined using the Boltzmann constant, rather than the triple point of water.
Due to the existence of many polymorphs (forms) of ice, water has other triple points, which have either three polymorphs of ice or two polymorphs of ice and liquid in equilibrium. Gustav Heinrich Johann Apollon Tammann in Göttingen produced data on several other triple points in the early 20th century. Kamb and others documented further triple points in the 1960s.
Melting point
The melting point of ice is 0 °C (32 °F; 273.15 K) at standard pressure; however, pure liquid water can be supercooled well below that temperature without freezing if the liquid is not mechanically disturbed. It can remain in a fluid state down to its homogeneous nucleation point of about 231 K (−42 °C). The melting point of ordinary hexagonal ice falls slightly under moderately high pressures, by 0.0073 °C/atm or about 0.5 °C per 70 atm, as the stabilization energy of hydrogen bonding is exceeded by intermolecular repulsion, but as ice transforms into its polymorphs (see crystalline states of ice) above 209.9 MPa (2,072 atm), the melting point increases markedly with pressure, reaching 355 °C (671 °F) at 2.216 GPa (21,870 atm) (triple point of Ice VII).
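The initial depression of the melting point is consistent with the Clausius–Clapeyron relation, using the roughly 9% density difference between ice and liquid water noted earlier (specific volumes of about 1.091×10−3 and 1.000×10−3 m3/kg are assumed here):

$$ \frac{dT}{dp} = \frac{T\,\Delta v}{L_f} \approx \frac{273.15\ \mathrm{K}\times(1.000-1.091)\times10^{-3}\ \mathrm{m^3/kg}}{333.55\ \mathrm{kJ/kg}} \approx -7.4\times10^{-8}\ \mathrm{K/Pa} \approx -0.0075\ \mathrm{^\circ C/atm} $$

with the negative sign reflecting that melting ice contracts rather than expands.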
Electrical properties
Electrical conductivity
Pure water containing no exogenous ions is an excellent electronic insulator, but not even "deionized" water is completely free of ions. Water undergoes autoionization in the liquid state when two water molecules form one hydroxide anion () and one hydronium cation (). Because of autoionization, at ambient temperatures pure liquid water has a similar intrinsic charge carrier concentration to the semiconductor germanium and an intrinsic charge carrier concentration three orders of magnitude greater than the semiconductor silicon, hence, based on charge carrier concentration, water can not be considered to be a completely dielectric material or electrical insulator but to be a limited conductor of ionic charge.
Because water is such a good solvent, it almost always has some solute dissolved in it, often a salt. If water has even a tiny amount of such an impurity, then the ions can carry charges back and forth, allowing the water to conduct electricity far more readily.
It is known that the theoretical maximum electrical resistivity for water is approximately 18.2 MΩ·cm (182 kΩ·m) at 25 °C. This figure agrees well with what is typically seen on reverse osmosis, ultra-filtered and deionized ultra-pure water systems used, for instance, in semiconductor manufacturing plants. A salt or acid contaminant level exceeding even 100 parts per trillion (ppt) in otherwise ultra-pure water begins to noticeably lower its resistivity by up to several kΩ·m.
In pure water, sensitive equipment can detect a very slight electrical conductivity of 0.05501 ± 0.0001 μS/cm at 25.00 °C. Water can also be electrolyzed into oxygen and hydrogen gases, but in the absence of dissolved ions this is a very slow process, as very little current is conducted. In ice, the primary charge carriers are protons (see proton conductor). Ice was previously thought to have a small but measurable conductivity of 1×10−10 S/cm, but this conductivity is now thought to be almost entirely from surface defects, and without those, ice is an insulator with an immeasurably small conductivity.
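The measured conductivity of pure water and the theoretical maximum resistivity quoted above are simply reciprocals of each other:

$$ \kappa = \frac{1}{\rho} = \frac{1}{18.2\ \mathrm{M\Omega\cdot cm}} \approx 5.5\times10^{-8}\ \mathrm{S/cm} = 0.055\ \mathrm{\mu S/cm} $$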
Polarity and hydrogen bonding
An important feature of water is its polar nature. The structure has a bent molecular geometry for the two hydrogens from the oxygen vertex. The oxygen atom also has two lone pairs of electrons. One effect usually ascribed to the lone pairs is that the H–O–H gas-phase bend angle is 104.48°, which is smaller than the typical tetrahedral angle of 109.47°. The lone pairs are closer to the oxygen atom than the electrons sigma bonded to the hydrogens, so they require more space. The increased repulsion of the lone pairs forces the O–H bonds closer to each other.
Another consequence of its structure is that water is a polar molecule. Due to the difference in electronegativity, a bond dipole moment points from each H to the O, making the oxygen partially negative and each hydrogen partially positive. A large molecular dipole, points from a region between the two hydrogen atoms to the oxygen atom. The charge differences cause water molecules to aggregate (the relatively positive areas being attracted to the relatively negative areas). This attraction, hydrogen bonding, explains many of the properties of water, such as its solvent properties.
Although hydrogen bonding is a relatively weak attraction compared to the covalent bonds within the water molecule itself, it is responsible for several of the water's physical properties. These properties include its relatively high melting and boiling point temperatures: more energy is required to break the hydrogen bonds between water molecules. In contrast, hydrogen sulfide (), has much weaker hydrogen bonding due to sulfur's lower electronegativity. is a gas at room temperature, despite hydrogen sulfide having nearly twice the molar mass of water. The extra bonding between water molecules also gives liquid water a large specific heat capacity. This high heat capacity makes water a good heat storage medium (coolant) and heat shield.
Cohesion and adhesion
Water molecules stay close to each other (cohesion), due to the collective action of hydrogen bonds between water molecules. These hydrogen bonds are constantly breaking, with new bonds being formed with different water molecules; but at any given time in a sample of liquid water, a large portion of the molecules are held together by such bonds.
Water also has high adhesion properties because of its polar nature. On clean, smooth glass the water may form a thin film because the molecular forces between glass and water molecules (adhesive forces) are stronger than the cohesive forces. In biological cells and organelles, water is in contact with membrane and protein surfaces that are hydrophilic; that is, surfaces that have a strong attraction to water. Irving Langmuir observed a strong repulsive force between hydrophilic surfaces. To dehydrate hydrophilic surfaces—to remove the strongly held layers of water of hydration—requires doing substantial work against these forces, called hydration forces. These forces are very large but decrease rapidly over a nanometer or less. They are important in biology, particularly when cells are dehydrated by exposure to dry atmospheres or to extracellular freezing.
Surface tension
Water has an unusually high surface tension of 71.99 mN/m at 25 °C which is caused by the strength of the hydrogen bonding between water molecules. This allows insects to walk on water.
Capillary action
Because water has strong cohesive and adhesive forces, it exhibits capillary action. Strong cohesion from hydrogen bonding and adhesion allows trees to transport water more than 100 m upward.
Water as a solvent
Water is an excellent solvent due to its high dielectric constant. Substances that mix well and dissolve in water are known as hydrophilic ("water-loving") substances, while those that do not mix well with water are known as hydrophobic ("water-fearing") substances. The ability of a substance to dissolve in water is determined by whether or not the substance can match or better the strong attractive forces that water molecules generate between other water molecules. If a substance has properties that do not allow it to overcome these strong intermolecular forces, the molecules are precipitated out from the water. Contrary to the common misconception, water and hydrophobic substances do not "repel", and the hydration of a hydrophobic surface is energetically, but not entropically, favorable.
When an ionic or polar compound enters water, it is surrounded by water molecules (hydration). The relatively small size of water molecules (~ 3 angstroms) allows many water molecules to surround one molecule of solute. The partially negative dipole ends of the water are attracted to positively charged components of the solute, and vice versa for the positive dipole ends.
In general, ionic and polar substances such as acids, alcohols, and salts are relatively soluble in water, and nonpolar substances such as fats and oils are not. Nonpolar molecules stay together in water because it is energetically more favorable for the water molecules to hydrogen bond to each other than to engage in van der Waals interactions with non-polar molecules.
An example of an ionic solute is table salt; the sodium chloride, NaCl, separates into cations and anions, each being surrounded by water molecules. The ions are then easily transported away from their crystalline lattice into solution. An example of a nonionic solute is table sugar. The water dipoles make hydrogen bonds with the polar regions of the sugar molecule (OH groups) and allow it to be carried away into solution.
Quantum tunneling
The quantum tunneling dynamics in water was reported as early as 1992. At that time it was known that there are motions which destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers. On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamer. Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds. Later in the same year, the discovery of the quantum tunneling of water molecules was reported.
Electromagnetic absorption
Water is relatively transparent to visible light, near ultraviolet light, and far-red light, but it absorbs most ultraviolet light, infrared light, and microwaves. Most photoreceptors and photosynthetic pigments utilize the portion of the light spectrum that is transmitted well through water. Microwave ovens take advantage of water's opacity to microwave radiation to heat the water inside of foods. Water's light blue color is caused by weak absorption in the red part of the visible spectrum.
Structure
A single water molecule can participate in a maximum of four hydrogen bonds because it can accept two bonds using the lone pairs on oxygen and donate two hydrogen atoms. Other molecules like hydrogen fluoride, ammonia, and methanol can also form hydrogen bonds. However, they do not show anomalous thermodynamic, kinetic, or structural properties like those observed in water because none of them can form four hydrogen bonds: either they cannot donate or accept hydrogen atoms, or there are steric effects in bulky residues. In water, intermolecular tetrahedral structures form due to the four hydrogen bonds, thereby forming an open structure and a three-dimensional bonding network, resulting in the anomalous decrease in density when cooled below 4 °C. This repeated, constantly reorganizing unit defines a three-dimensional network extending throughout the liquid. This view is based upon neutron scattering studies and computer simulations, and it makes sense in the light of the unambiguously tetrahedral arrangement of water molecules in ice structures.
However, there is an alternative theory for the structure of water. In 2004, a controversial paper from Stockholm University suggested that water molecules in the liquid state typically bind not to four but only two others; thus forming chains and rings. The term "string theory of water" (which is not to be confused with the string theory of physics) was coined. These observations were based upon X-ray absorption spectroscopy that probed the local environment of individual oxygen atoms.
Molecular structure
The repulsive effects of the two lone pairs on the oxygen atom cause water to have a bent, not linear, molecular structure, allowing it to be polar. The hydrogen–oxygen–hydrogen angle is 104.45°, which is less than the 109.47° for ideal sp3 hybridization. The valence bond theory explanation is that the oxygen atom's lone pairs are physically larger and therefore take up more space than the oxygen atom's bonds to the hydrogen atoms. The molecular orbital theory explanation (Bent's rule) is that lowering the energy of the oxygen atom's nonbonding hybrid orbitals (by assigning them more s character and less p character), and correspondingly raising the energy of the hybrid orbitals bonded to the hydrogen atoms (more p character, less s character), lowers the energy of the occupied molecular orbitals overall. This is because the energy of the nonbonding hybrid orbitals contributes completely to the energy of the lone pairs, while the energy of the other two hybrid orbitals contributes only partially to the energy of the bonding orbitals (the remainder of the contribution coming from the hydrogen atoms' 1s orbitals).
Chemical properties
Self-ionization
In liquid water there is some self-ionization giving hydronium ions and hydroxide ions.
2 H2O ⇌ H3O+ + OH−
The equilibrium constant for this reaction, known as the ionic product of water, Kw, has a value of about 1.0×10−14 at 25 °C. At neutral pH, the concentration of the hydroxide ion (OH−) equals that of the (solvated) hydrogen ion (H3O+), with a value close to 10−7 mol L−1 at 25 °C. See data page for values at other temperatures.
The thermodynamic equilibrium constant is a quotient of thermodynamic activities of all products and reactants, including water:
K = a(H3O+) · a(OH−) / a(H2O)2
However, for dilute solutions, the activity of a solute such as H3O+ or OH− is approximated by its concentration, and the activity of the solvent H2O is approximated by 1, so that we obtain the simple ionic product
Kw = [H3O+][OH−]
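At neutrality the two ion concentrations are equal, so the familiar pH of 7 follows directly from the ionic product:

$$ [\mathrm{H_3O^+}] = \sqrt{K_w} \approx \sqrt{1.0\times10^{-14}} = 1.0\times10^{-7}\ \mathrm{mol\,L^{-1}}, \qquad \mathrm{pH} = -\log_{10}[\mathrm{H_3O^+}] = 7 $$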
Geochemistry
The action of water on rock over long periods of time typically leads to weathering and water erosion, physical processes that convert solid rocks and minerals into soil and sediment, but under some conditions chemical reactions with water occur as well, resulting in metasomatism or mineral hydration, a type of chemical alteration of a rock which produces clay minerals. It also occurs when Portland cement hardens.
Water ice can form clathrate compounds, known as clathrate hydrates, with a variety of small molecules that can be embedded in its spacious crystal lattice. The most notable of these is methane clathrate, 4CH4·23H2O, naturally found in large quantities on the ocean floor.
Acidity in nature
Rain is generally mildly acidic, with a pH between 5.2 and 5.8 if not having any acid stronger than carbon dioxide. If high amounts of nitrogen and sulfur oxides are present in the air, they too will dissolve into the cloud and raindrops, producing acid rain.
Isotopologues
Several isotopes of both hydrogen and oxygen exist, giving rise to several known isotopologues of water. Vienna Standard Mean Ocean Water is the current international standard for water isotopes. Naturally occurring water is almost completely composed of the neutron-less hydrogen isotope protium. Only 155 ppm include deuterium (2H or D), a hydrogen isotope with one neutron, and fewer than 20 parts per quintillion include tritium (3H or T), which has two neutrons. Oxygen also has three stable isotopes, with 16O present in 99.76%, 17O in 0.04%, and 18O in 0.2% of water molecules.
Deuterium oxide, D2O, is also known as heavy water because of its higher density. It is used in nuclear reactors as a neutron moderator. Tritium is radioactive, decaying with a half-life of 4500 days; it exists in nature only in minute quantities, being produced primarily via cosmic ray-induced nuclear reactions in the atmosphere. Water with one protium and one deuterium atom (HDO) occurs naturally in ordinary water in low concentrations (~0.03%), and D2O in far lower amounts (0.000003%); any such molecules are temporary, as the atoms recombine.
The most notable physical differences between H2O and D2O, other than the simple difference in specific mass, involve properties that are affected by hydrogen bonding, such as freezing and boiling, and other kinetic effects. This is because the nucleus of deuterium is twice as heavy as protium, and this causes noticeable differences in bonding energies. The difference in boiling points allows the isotopologues to be separated. The self-diffusion coefficient of H2O at 25 °C is 23% higher than the value of D2O. Because water molecules exchange hydrogen atoms with one another, hydrogen deuterium oxide (DOH) is much more common in low-purity heavy water than pure dideuterium monoxide D2O.
Consumption of pure isolated D2O may affect biochemical processes—ingestion of large amounts impairs kidney and central nervous system function. Small quantities can be consumed without any ill effects; humans are generally unaware of taste differences, but sometimes report a burning sensation or sweet flavor. Very large amounts of heavy water must be consumed for any toxicity to become apparent. Rats, however, are able to avoid heavy water by smell, and it is toxic to many animals.
Light water refers to deuterium-depleted water (DDW), water in which the deuterium content has been reduced below the standard level.
Occurrence
Water is the most abundant substance on Earth's surface and also the third most abundant molecule in the universe, after H2 and CO. 0.23 ppm of the earth's mass is water, and 97.39% of the global water volume of 1.38×109 km3 is found in the oceans.
Water is far more prevalent in the outer Solar System, beyond a point called the frost line, where the Sun's radiation is too weak to vaporize solid and liquid water (as well as other elements and chemical compounds with relatively low melting points, such as methane and ammonia). In the inner Solar System, planets, asteroids, and moons formed almost entirely of metals and silicates. Water has since been delivered to the inner Solar System via an as-yet unknown mechanism, theorized to be the impacts of asteroids or comets carrying water from the outer Solar System, where bodies contain much more water ice. The difference between planetary bodies located inside and outside the frost line can be stark. Earth's mass is 0.000023% water, while Tethys, a moon of Saturn, is almost entirely made of water.
Reactions
Acid–base reactions
Water is amphoteric: it has the ability to act as either an acid or a base in chemical reactions. According to the Brønsted–Lowry definition, an acid is a proton (H+) donor and a base is a proton acceptor. When reacting with a stronger acid, water acts as a base; when reacting with a stronger base, it acts as an acid. For instance, water receives an H+ ion from HCl when hydrochloric acid forms hydronium:
HCl + H2O → H3O+ + Cl−
In the reaction with ammonia, NH3, water donates an H+ ion, and is thus acting as an acid:
NH3 + H2O → NH4+ + OH−
Because the oxygen atom in water has two lone pairs, water often acts as a Lewis base, or electron-pair donor, in reactions with Lewis acids, although it can also react with Lewis bases, forming hydrogen bonds between the electron pair donors and the hydrogen atoms of water. HSAB theory describes water as both a weak hard acid and a weak hard base, meaning that it reacts preferentially with other hard species:
H+ + H2O → H3O+
Fe3+ + 6 H2O → Fe(H2O)63+
Cl− + 6 H2O → Cl(H2O)6−
When a salt of a weak acid or of a weak base is dissolved in water, water can partially hydrolyze the salt, producing the corresponding base or acid, which gives aqueous solutions of soap and baking soda their basic pH:
Na2CO3 + H2O ⇌ NaOH + NaHCO3
Ligand chemistry
Water's Lewis base character makes it a common ligand in transition metal complexes, examples of which range from metal aquo complexes such as Fe(H2O)62+ to perrhenic acid, which contains two water molecules coordinated to a rhenium center. In solid hydrates, water can be either a ligand or simply lodged in the framework, or both. Thus, FeSO4·7H2O consists of [Fe(H2O)6]2+ centers and one "lattice water". Water is typically a monodentate ligand, i.e., it forms only one bond with the central atom.
Organic chemistry
As a hard base, water reacts readily with organic carbocations; for example in a hydration reaction, a hydroxyl group () and an acidic proton are added to the two carbon atoms bonded together in the carbon-carbon double bond, resulting in an alcohol. When the addition of water to an organic molecule cleaves the molecule in two, hydrolysis is said to occur. Notable examples of hydrolysis are the saponification of fats and the digestion of proteins and polysaccharides. Water can also be a leaving group in SN2 substitution and E2 elimination reactions; the latter is then known as a dehydration reaction.
Water in redox reactions
Water contains hydrogen in the oxidation state +1 and oxygen in the oxidation state −2. It oxidizes chemicals such as hydrides, alkali metals, and some alkaline earth metals. One example of an alkali metal reacting with water is:
2 Na + 2 H2O → H2 + 2 Na+ + 2 OH−
Some other reactive metals, such as aluminium and beryllium, are oxidized by water as well, but their oxides adhere to the metal and form a passive protective layer. Note that the rusting of iron is a reaction between iron and oxygen that is dissolved in water, not between iron and water.
Water can be oxidized to emit oxygen gas, but very few oxidants react with water even if their reduction potential is greater than the potential of O2/H2O. Almost all such reactions require a catalyst. An example of the oxidation of water is:
4 AgF2 + 2 H2O → 4 AgF + 4 HF + O2
Electrolysis
Water can be split into its constituent elements, hydrogen and oxygen, by passing an electric current through it. This process is called electrolysis. The cathode half reaction is:
2 H2O + 2 e− → H2 + 2 OH−
The anode half reaction is:
2 H2O → O2 + 4 H+ + 4 e−
The gases produced bubble to the surface, where they can be collected or, if desired, ignited with a flame above the water. The required potential for the electrolysis of pure water is 1.23 V at 25 °C. The operating potential is actually 1.48 V or higher in practical electrolysis.
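The 1.23 V minimum corresponds, via ΔG = nFE with n = 2 electrons per molecule of hydrogen, to the standard Gibbs energy of water formation:

$$ \Delta G^\circ = nFE^\circ = 2 \times 96485\ \mathrm{C/mol} \times 1.23\ \mathrm{V} \approx 237\ \mathrm{kJ/mol} $$

The practical 1.48 V is the thermoneutral voltage, set by the enthalpy of formation (about 286 kJ/mol) rather than the Gibbs energy; anything above that is lost as overpotential and resistive heating.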
History
Henry Cavendish showed that water was composed of oxygen and hydrogen in 1781. The first decomposition of water into hydrogen and oxygen, by electrolysis, was done in 1800 by English chemist William Nicholson and Anthony Carlisle. In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is composed of two parts hydrogen and one part oxygen.
Gilbert Newton Lewis isolated the first sample of pure heavy water in 1933.
The properties of water have historically been used to define various temperature scales. Notably, the Kelvin, Celsius, Rankine, and Fahrenheit scales were, or currently are, defined by the freezing and boiling points of water. The less common scales of Delisle, Newton, Réaumur, and Rømer were defined similarly. The triple point of water is a more commonly used standard point today.
Nomenclature
The accepted IUPAC name of water is oxidane or simply water, or its equivalent in different languages, although there are other systematic names which can be used to describe the molecule. Oxidane is only intended to be used as the name of the mononuclear parent hydride used for naming derivatives of water by substituent nomenclature. These derivatives commonly have other recommended names. For example, the name hydroxyl is recommended over oxidanyl for the –OH group. The name oxane is explicitly mentioned by the IUPAC as being unsuitable for this purpose, since it is already the name of a cyclic ether also known as tetrahydropyran.
The simplest systematic name of water is hydrogen oxide. This is analogous to related compounds such as hydrogen peroxide, hydrogen sulfide, and deuterium oxide (heavy water). Using chemical nomenclature for type I ionic binary compounds, water would take the name hydrogen monoxide, but this is not among the names published by the International Union of Pure and Applied Chemistry (IUPAC). Another name is dihydrogen monoxide, which is a rarely used name of water, and mostly used in the dihydrogen monoxide parody.
Other systematic names for water include hydroxic acid, hydroxylic acid, and hydrogen hydroxide, using acid and base names. None of these exotic names are used widely. The polarized form of the water molecule, H+OH−, is also called hydron hydroxide by IUPAC nomenclature.
Water substance is a rare term used for H2O when one does not wish to specify the phase of matter (liquid water, water vapor, some form of ice, or a component in a mixture) though the term "water" is also used with this general meaning.
Oxygen dihydride is another way of referring to water, but modern usage often restricts the term "hydride" to ionic compounds (which water is not).
See also
Chemical bonding of water
Dihydrogen monoxide parody
Double distilled water
Electromagnetic absorption by water
Fluid dynamics
Hard water
Heavy water
Hydrogen polyoxide
Ice
Optical properties of water and ice
Steam
Superheated water
Water cluster
Water (data page)
Water dimer
Water model
Water thread experiment
Footnotes
References
Notes
Bibliography
Further reading
External links
Release on the IAPWS Formulation 1995 for the Thermodynamic Properties of Ordinary Water Substance for General and Scientific Use (simpler formulation)
Online calculator using the IAPWS Supplementary Release on Properties of Liquid Water at 0.1 MPa, September 2008
Calculation of vapor pressure, liquid density, dynamic liquid viscosity, and surface tension of water
Water Density Calculator
Why does ice float in my drink?, NASA
Water
Forms of water
Hydrogen compounds
Triatomic molecules
Oxygen compounds
Hydroxides
Inorganic solvents
Neutron moderators
Oxides
Limnology
Oceanography
Extraterrestrial water
Transport phenomena
Heat transfer
Greenhouse gases | Properties of water | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 7,871 | [
"Physical phenomena",
"Acids",
"Chemical engineering",
"Environmental chemistry",
"Phases of matter",
"Thermodynamics",
"Transport phenomena",
"Amphoteric compounds",
"Oxides",
"Salts",
"Triatomic molecules",
"Greenhouse gases",
"Heat transfer",
"Bases (chemistry)",
"Hydrology",
"Appli... |
24,027,962 | https://en.wikipedia.org/wiki/Jet%20aerators | Jet aerators are applied across a wide range of water, wastewater and biosolids treatment applications. Their primary purpose is to transfer oxygen to the liquid or sludge. A Jet aerator works through aspirating technology by simultaneously introducing large volumes of high kinetic energy liquid and air through one or more jet nozzles. The high velocity liquid exits the inner, primary jet and rapidly mixes with the incoming air in the outer jet. This intense mixing and high degree of turbulence in the gas/liquid cloud travels outward from the jet along the basin floor prior to the vertical rise of the gas bubble column to the liquid surface.
Applications, features and benefits
Oxygen transfer efficiency and energy savings
In most industrial wastewater and biosolids applications jet aerators exhibit superior oxygen transfer efficiency compared to other aeration technologies. The hydrodynamic conditions within the jet and fine bubble cloud produce continuous surface renewal at the gas/liquid interface, resulting in higher alpha factors. This results in superior process oxygen transfer performance in the presence of surfactants, extracellular enzymes and high mixed liquor suspended solids (MLSS) concentrations.
Process flexibility
Jet aerators do not require any external air source (i.e. compressor), except for the surrounding atmosphere. Jet aerators can be installed either as submersible units or piped through the tank wall using an external dry-installed chopper pump to feed the aspirating ejector(s). Jet aerators are easily configured into any basin geometry including circular, rectangular, looped reactors and sloped wall basins. Jet aerators are ideally suited for deep tank processes. The jet oxidation ditch is an example of technology innovation where the combination of a deeper basin design, bottom to top mixing and conservation of momentum combines to make a very efficient treatment process. In this and other applications the independent control of oxygen transfer and mixing is a valuable feature for both process control and energy savings.
Applications
Equalization basins at sewage treatment plants
Sewage wet wells and lift stations
Aerobic digesters
Leachate processing from landfills
Waste processing at slaughterhouses, poultry abattoirs, fish processing plants, etc.
Waste processing at tanneries (Article at Leather International)
Pulp and paper - aeration of waste sludge
As compressor-less aerators in electrochemical reactors to produce hydrogen peroxide
References
Chemical equipment
Pumps
Environmental engineering | Jet aerators | [
"Physics",
"Chemistry",
"Engineering"
] | 468 | [
"Pumps",
"Turbomachinery",
"Chemical equipment",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"nan",
"Environmental engineering"
] |
24,028,211 | https://en.wikipedia.org/wiki/The%20Combustion%20Institute | The Combustion Institute is an educational non-profit, international, scientific and engineering society whose purpose is to promote research in combustion science. The institute was established in 1954, and its headquarters are in Pittsburgh, Pennsylvania, United States. The current president of The Combustion Institute is Philippe Dagaut (2021-).
Foundation and mission
The support of this important field of study spanning many scientific and engineering disciplines is done through the discussion of research findings at regional, national and the biennial international symposia, and through the publication of the Proceedings of the Combustion Institute and the institute's journals, Combustion and Flame and the affiliated journals Progress in Energy and Combustion Science, Combustion Science and Technology and Combustion Theory and Modelling.
The institute serves as the parent organization for thirty-three national sections organized in many countries (the US being divided into three sections) as of 2012.
In honor of the fiftieth anniversary of the Combustion Institute, the leading combustion scientists John D. Buckmaster, Paul Clavin, Amable Liñán, Moshe Matalon, Norbert Peters, Gregory Sivashinsky and Forman A. Williams wrote a paper in the Proceedings of the Combustion Institute.
International symposium on combustion
The international symposium on combustion is organised by the Combustion Institute biennially. The first symposium on combustion was held in 1928 in the United States, and the first international symposium on combustion was held in 1948, even though the Combustion Institute itself was founded only in 1954. Thirty-seven symposia have been held so far; the 38th symposium was to be held in 2020 but was postponed to 2021.
Institute Awards
During each International Symposium, The Combustion Institute awards the following:
Bernard Lewis Gold Medal – established in 1958 and awarded for brilliant research in the field of combustion.
Alfred C. Egerton Gold Medal – established in 1958 and awarded biennially for distinguished, continuing and encouraging contributions to the field of combustion.
Silver Combustion Medal – established in 1958 and awarded to an outstanding paper presented at the previous symposium.
The Hottel Lecture.
Ya B. Zeldovich Gold Medal – established in 1990 and awarded for outstanding contribution to the theory of combustion or detonation.
Bernard Lewis Fellowship – established in 1996 during the 26th International Symposium, this award is awarded to encourage high quality research in combustion by young scientists and engineers.
Distinguished Paper Award – established in 2006 during the 31st International Symposium, this award is presented to the paper in each of the twelve colloquia of a Symposium which is judged to be most distinguished in quality, achievement and significance.
Bernard Lewis Visiting Lecturer Fellowship.
Hiroshi Tsuji Early Career Researcher Award.
See also
International Flame Research Foundation
References
American engineering organizations
Scientific organizations established in 1954
Combustion | The Combustion Institute | [
"Chemistry"
] | 534 | [
"Combustion"
] |
24,030,500 | https://en.wikipedia.org/wiki/C12H18O | {{DISPLAYTITLE:C12H18O}}
The molecular formula C12H18O (molar mass: 178.27 g/mol, exact mass: 178.1358 u) may refer to:
Amylmetacresol (AMC)
2,4-Dimethyl-6-tert-butylphenol
Propofol
Molecular formulas | C12H18O | [
"Physics",
"Chemistry"
] | 80 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,031,904 | https://en.wikipedia.org/wiki/Wiegand%20effect | The Wiegand effect is a nonlinear magnetic effect, named after its discoverer John R. Wiegand, produced in specially annealed and hardened wire called Wiegand wire.
Wiegand wire is low-carbon Vicalloy, a ferromagnetic alloy of cobalt, iron, and vanadium. Initially, the wire is fully annealed. In this state the alloy is "soft" in the magnetic sense; that is, it is attracted to magnets and so magnetic field lines will divert preferentially into the metal, but the metal retains only a very small residual field when the external field is removed.
During manufacture, to give the wire its unique magnetic properties, it is subjected to a series of twisting and untwisting operations to cold-work the outside shell of the wire while retaining a soft core within the wire, and then the wire is aged. The result is that the magnetic coercivity of the outside shell is much larger than that of the inner core. This high coercivity outer shell will retain an external magnetic field even when the field's original source is removed.
The wire now exhibits a very large magnetic hysteresis: If a magnet is brought near the wire, the high coercivity outer shell excludes the magnetic field from the inner soft core until the magnetic threshold is reached, whereupon the entire wire — both the outer shell and inner core — rapidly switches magnetisation polarity. This switchover occurs in a few microseconds, and is called the Wiegand effect.
The value of the Wiegand effect is that the switchover speed is sufficiently fast that a significant voltage can be output from a coil using a Wiegand-wire core. Because the voltage induced by a changing magnetic field is proportional to the rate of change of the field, a Wiegand-wire core can increase the output voltage of a magnetic field sensor by several orders of magnitude as compared to a similar coil with a non-Wiegand core. This higher voltage can easily be detected electronically and, combined with the highly repeatable threshold of the magnetic field switching, makes the Wiegand effect useful for positional sensors.
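The underlying relation is Faraday's law of induction for an N-turn pickup coil:

$$ \varepsilon = -N\,\frac{d\Phi}{dt} $$

Since the Wiegand wire compresses its flux reversal into a few microseconds, instead of the milliseconds over which the external field itself changes, the peak induced voltage rises by roughly the same factor, consistent with the several-orders-of-magnitude gain described above.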
Once the Wiegand wire has flipped magnetization, it will retain that magnetization until flipped in the other direction. Sensors and mechanisms that use the Wiegand effect must take this retention into account.
The Wiegand effect is a macroscopic extension of the Barkhausen effect, as the special treatment of the Wiegand wire causes the wire to act macroscopically as a single large magnetic domain. The numerous small high-coercivity domains in the Wiegand wire outer shell switch in an avalanche, generating the Wiegand effect's rapid magnetic field change.
Applications
Wiegand sensors
Wiegand sensors are magnetic sensors that make use of the Wiegand effect to generate a consistent pulse every time the magnetic field polarity reverses, and therefore do not rely on any external voltage or current. The consistency of the pulses produced by Wiegand sensors can be used to provide energy for low-power and energy-saving applications. Being self-powered, Wiegand sensors have potential in IoT applications as energy harvesters, proximity sensors, and event counters.
Wiegand keycards
John R. Wiegand and Milton Velinsky developed an access control card using Wiegand wires.
Besides sensors, the Wiegand effect is used for security keycard door locks. The plastic keycard has a series of short lengths of Wiegand wire embedded in it, which encodes the key by the presence or absence of wires. A second track of wires provides a clock track. The card is read by pulling it through a slot in a reader device, which has a fixed magnetic field and a sensor coil. As each length of wire passes through the magnetic field, its magnetic state flips, which indicates a 1, and this is sensed by the coil. The absence of a wire indicates a 0. The resulting Wiegand protocol digital code is then sent to a host controller to determine whether to electrically unlock the door.
Wiegand cards are more durable and difficult to counterfeit than bar code or magnetic stripe cards. Since the keycode is permanently set into the card at manufacture by the positions of the wires, Wiegand cards can't be erased by magnetic fields or reprogrammed as magnetic stripe cards can.
The Wiegand interface, originally developed for Wiegand-wire cards, is still the de facto standard convention for transmitting data from any kind of access card to an access control panel.
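As an illustration of the kind of data carried over this interface, the following is a minimal sketch of a decoder for the common 26-bit Wiegand frame layout (8-bit facility code, 16-bit card number, leading even-parity and trailing odd-parity bits); the function name and the assumption that a full frame has already been captured are illustrative, not taken from any particular library:

def parse_wiegand26(bits):
    """Decode one 26-bit Wiegand frame into (facility_code, card_number).

    bits: a sequence of 26 ints (0 or 1), in the order received.
    Bit 1 gives bits 1-13 even parity; bit 26 gives bits 14-26 odd parity.
    """
    if len(bits) != 26:
        raise ValueError("expected 26 bits, got %d" % len(bits))
    if sum(bits[:13]) % 2 != 0:
        raise ValueError("leading even-parity check failed")
    if sum(bits[13:]) % 2 != 1:
        raise ValueError("trailing odd-parity check failed")
    facility = int("".join(map(str, bits[1:9])), 2)   # bits 2-9: facility code
    card = int("".join(map(str, bits[9:25])), 2)      # bits 10-25: card number
    return facility, card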
A capacitive MM code card, like Wiegand cards, embeds a code inside the plastic of the card, and so is more durable and difficult to counterfeit than magnetic stripes or printed barcodes on the surface of the card.
Rotary encoder
Wiegand wires are used by some rotary magnetic encoders to power the multi-turn circuitry. As the encoder revolves, the Wiegand wire core coil generates a pulse of electricity sufficient to power the encoder and write the turns count to non-volatile memory. This works at any speed of rotation and eliminates the clock/gear mechanism typically associated with multi-turn encoders.
Wheel speed sensor
Wiegand wires are fitted to the outer diameter of a wheel to measure rotational speeds. An externally mounted reading head detects the Wiegand pulses.
References
External links
— The original Wiegand patent (1974)
— The patent on Vicalloy
— An explanation of the Wiegand effect as used in access control
See also
Wiegand interface — the interface originally used by Wiegand-wire card readers.
Ferromagnetism | Wiegand effect | [
"Chemistry",
"Materials_science"
] | 1,174 | [
"Magnetic ordering",
"Ferromagnetism"
] |
24,032,731 | https://en.wikipedia.org/wiki/Electrochemical%20reduction%20of%20carbon%20dioxide | The electrochemical reduction of carbon dioxide, also known as CO2RR, is the conversion of carbon dioxide () to more reduced chemical species using electrical energy. It represents one potential step in the broad scheme of carbon capture and utilization.
CO2RR can produce diverse compounds including formate (HCOO−), carbon monoxide (CO), methane (CH4), ethylene (C2H4), and ethanol (C2H5OH). The main challenges are the relatively high cost of electricity (vs petroleum) and that CO2 is often contaminated with O2 and must be purified before reduction.
The first examples of CO2RR are from the 19th century, when carbon dioxide was reduced to carbon monoxide using a zinc cathode. Research in this field intensified in the 1980s following the oil embargoes of the 1970s. As of 2021, pilot-scale carbon dioxide electrochemical reduction is being developed by several companies, including Siemens, Dioxide Materials, Twelve and GIGKarasek. Techno-economic analyses have been conducted to assess the key technical gaps and commercial potential of carbon dioxide electrolysis technology at near-ambient conditions.
CO2RR electrolyzers have been developed to reduce other forms of CO2, including [bi]carbonates sourced from CO2 captured directly from the air using strong alkalis like KOH, or carbamates sourced from flue gas effluents using alkali or amine-based absorbents like MEA or DEA. While the techno-economics of these systems are not yet feasible, they provide a near net carbon neutral pathway to produce commodity chemicals like ethylene at industrially relevant scales.
Chemicals from carbon dioxide
In carbon fixation, plants convert carbon dioxide into sugars, from which many biosynthetic pathways originate. The catalyst responsible for this conversion, RuBisCO, is thought to be the most abundant protein on Earth. Some anaerobic organisms employ enzymes to convert CO2 to carbon monoxide, from which fatty acids can be made.
In industry, a few products are made from CO2, including urea, salicylic acid, methanol, and certain inorganic and organic carbonates. In the laboratory, carbon dioxide is sometimes used to prepare carboxylic acids in a process known as carboxylation. An electrochemical CO2 electrolyzer that operates at room temperature has not yet been commercialized. Elevated temperature solid oxide electrolyzer cells (SOECs) for CO2 reduction to CO are commercially available. For example, Haldor Topsoe offers SOECs for CO2 reduction with a reported 6–8 kWh per Nm3 CO produced and purity up to 99.999% CO.
Electrocatalysis
The electrochemical reduction of carbon dioxide to various products is usually described by half-reactions such as:
CO2 + 2 H+ + 2 e− → CO + H2O
CO2 + 2 H+ + 2 e− → HCOOH
CO2 + 8 H+ + 8 e− → CH4 + 2 H2O
2 CO2 + 12 H+ + 12 e− → C2H4 + 4 H2O
2 CO2 + 12 H+ + 12 e− → C2H5OH + 3 H2O
The redox potentials for these reactions are similar to that of hydrogen evolution in aqueous electrolytes; electrochemical reduction of CO2 therefore usually competes with the hydrogen evolution reaction.
Electrochemical methods have gained significant attention because they:
operate at ambient pressure and room temperature;
can be coupled to renewable energy sources (see also solar fuel);
offer relatively simple controllability, modularity and scale-up.
The electrochemical reduction or electrocatalytic conversion of CO2 can produce value-added chemicals such as methane, ethylene and ethanol; the products depend mainly on the selected catalysts and operating potentials (the applied reduction voltage). A variety of homogeneous and heterogeneous catalysts have been evaluated.
Many such processes are assumed to operate via the intermediacy of metal carbon dioxide complexes. Many processes suffer from high overpotential, low current efficiency, low selectivity, slow kinetics, and/or poor catalyst stability.
The composition of the electrolyte can be decisive. Gas-diffusion electrodes are beneficial.
Catalysts
Catalysts can be grouped by their primary products. Several metals are unsuitable for CO2RR because they promote the hydrogen evolution reaction instead. Electrocatalysts selective for one particular organic compound include tin or bismuth for formate and silver or gold for carbon monoxide. Copper produces multiple reduced products such as methane, ethylene or ethanol, while methanol, propanol and 1-butanol have also been produced in minute quantities.
Three common product classes are carbon monoxide, formate, and higher-order carbon products (two or more carbons).
Carbon monoxide-producing
Carbon monoxide can be produced from CO2RR over various precious metal catalysts; steel has also proven to be one such catalyst.
Mechanistically, carbon monoxide arises from the metal bonded to the carbon of CO2 (see metallacarboxylic acid). Oxygen is lost as water.
Formate/formic acid-producing
Formic acid is produced as a primary product from CO2RR over diverse catalysts.
Catalysts that promote formic acid production from CO2 operate by binding strongly to both oxygen atoms of CO2, allowing protons to attack the central carbon. Once the central carbon has been attacked, a proton attaching to an oxygen yields formate. Indium catalysts promote formate production because the indium-oxygen binding energy is stronger than the indium-carbon binding energy, favouring formate over carbon monoxide.
C>1-producing catalysts
Copper electrocatalysts produce multicarbon compounds from CO2. These include C2 products (ethylene, ethanol, acetate, etc.) and even C3 products (propanol, acetone, etc.). These products are more valuable than C1 products, but the current efficiencies remain low.
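The current (Faradaic) efficiency mentioned above is simply the fraction of the charge passed that ends up in a given product. A minimal sketch of the calculation, with illustrative numbers only:

```python
F = 96485.3  # Faraday constant, C per mole of electrons

def faradaic_efficiency(moles_product, electrons_per_molecule, total_charge_C):
    """Fraction of the total charge that went into forming the product."""
    return moles_product * electrons_per_molecule * F / total_charge_C

# Illustrative only: 1.0e-5 mol of ethylene (12 electrons per molecule)
# produced while passing 25 C of charge gives an efficiency of about 46 %.
fe = faradaic_efficiency(1.0e-5, 12, 25.0)
```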
See also
Electromethanogenesis
Biobattery
Electrofuel
Lemon battery
Photoelectrochemical reduction of carbon dioxide
Photochemical reduction of carbon dioxide
Electrolysis of water
Electrochemical energy conversion
Bioelectrochemical reactor
Notes
References
Further reading
Carbon dioxide
Electrolysis
Energy engineering
Electrochemical engineering | Electrochemical reduction of carbon dioxide | [
"Chemistry",
"Engineering"
] | 1,220 | [
"Chemical engineering",
"Electrochemical engineering",
"Energy engineering",
"Electrochemistry",
"Electrolysis",
"Greenhouse gases",
"Electrical engineering",
"Carbon dioxide"
] |
24,034,899 | https://en.wikipedia.org/wiki/Biosimulation | Biosimulation is a computer-aided mathematical simulation of biological processes and systems and thus is an integral part of systems biology. Due to the complexity of biological systems simplified models are often used, which should only be as complex as necessary.
The aim of biosimulations is model-based prediction of the behaviour and the dynamics of biological systems, e.g. the response of an organ or a single cell towards a chemical. However, the quality of model-based predictions depends strongly on the quality of the model, which in turn is determined by the quality of the data and the depth of the available knowledge.
Pharmacy
Biosimulation is becoming increasingly important for drug development. Since on average only 11% of all drug candidates are approved, it is anticipated that biosimulation may be a tool to predict whether a candidate drug will fail in the development process, e.g. in clinical trials, due to adverse side effects, poor pharmacokinetics or toxicity. Early prediction of whether a drug will fail in animals or humans would be key to reducing both drug development costs and the number of required animal experiments and clinical trials. The latter is also in line with the so-called "3Rs", which refer to the principle of reduction and replacement of animal experiments as well as to the refinement of the methodology in cases where animal tests are still necessary. In a future scenario, biosimulation would change the way substances are tested, with in vivo and in vitro tests replaced by tests in silico.
Due to the importance of biosimulation in drug development a number of research projects exist which aim for simulating metabolism, toxicity, pharmacodynamic and pharmacokinetics of a drug candidate. Some of the research projects are listed below:
BioSim project; funded by the 6th frame program of the European Union
NSR Physiome Project
Hepatosys
Moreover, a few software tools already exist which aim to predict the toxicity of a substance or even to simulate a virtual patient (Entelos). A few of these software tools are listed below; a minimal scripting sketch follows the list:
COPASI
runBiosimulations
Tellurium
Metabolism PhysioLab (Entelos)
GastroPlus and ADMEPredictor (Simulations-Plus)
Certara's Simcyp physiologically based pharmacokinetics (PBPK) platform
RHEDDOS (Rhenovia Pharma SAS)
VirtualToxLab
Derek (Lhasa Limited)
DS TOPKAT (accelrys)
ADME Workbench
Applied BioMath Assess
MATLAB SimBiology
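As an example of how such tools are scripted, the sketch below uses Tellurium (listed above) to integrate a small reaction network; the model, species names and rate constants are invented purely for illustration and do not represent a real biological system.

```python
import tellurium as te

# A hypothetical two-step pathway written in Antimony; all values
# below are placeholders, not a real biological model.
model = te.loada("""
    S1 -> S2; k1*S1
    S2 -> S3; k2*S2
    k1 = 0.3; k2 = 0.1
    S1 = 10; S2 = 0; S3 = 0
""")

result = model.simulate(0, 50, 200)  # start time, end time, points
model.plot(result)                   # time courses of S1, S2 and S3
```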
References
Systems biology | Biosimulation | [
"Biology"
] | 544 | [
"Systems biology"
] |
24,035,429 | https://en.wikipedia.org/wiki/C5H4 |
The molecular formula C5H4 (molar mass: 64.09 g/mol, exact mass: 64.0313 u) may refer to:
3-Ethynylcycloprop-1-ene
1,4-Pentadiyne
Penta-1,2-dien-4-yne
Spiropentadiene, or bowtiediene
Molecular formulas | C5H4 | [
"Physics",
"Chemistry"
] | 94 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
2,674,172 | https://en.wikipedia.org/wiki/State-universal%20coupled%20cluster | State-universal coupled cluster (SUCC) method is one of several multi-reference coupled-cluster (MR) generalizations of single-reference coupled cluster method. It was first formulated by Bogumił Jeziorski and Hendrik Monkhorst in their work published in Physical Review A in 1981. State-universal coupled cluster is often abbreviated as SUMR-CC or MR-SUCC.
References
Quantum chemistry | State-universal coupled cluster | [
"Physics",
"Chemistry"
] | 86 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
"Physical chemistry stubs",
" and optical physics"
] |
2,676,597 | https://en.wikipedia.org/wiki/Isotachophoresis | Isotachophoresis (ITP) is a technique in analytical chemistry used for selective separation and concentration of ionic analytes. It is a form of electrophoresis; charged analytes are separated based on ionic mobility, a quantity which tells how fast an ion migrates through an electric field.
Overview
In conventional ITP separations, a discontinuous buffer system is used. The sample is introduced between a zone of fast leading electrolyte (LE) and a zone of slow terminating (or: trailing) electrolyte (TE). Usually, the LE and the TE have a common counterion, but the co-ions (having charges with the same sign as the analytes of interest) are different: the LE is defined by co-ions with high ionic mobility, while the TE is defined by co-ions with low ionic mobility. The analytes of interest have intermediate ionic mobility. Application of an electric potential results in a low electrical field in the leading electrolyte and a high electrical field in the terminating electrolyte. Analyte ions situated in the TE zone will migrate faster than the surrounding TE co-ions, while analyte ions situated in the LE will migrate slower; the result is that analytes are focused at the LE/TE interface.
ITP is a displacement method: focusing ions of a certain kind displace other ions. If present in sufficient amounts, focusing analyte ions can displace all electrolyte co-ions, reaching a plateau concentration. Multiple analytes with sufficiently different ionic mobilities will form multiple plateau zones. Indeed, plateau mode ITP separations are readily recognized by stairlike profiles, each plateau of the stair representing an electrolyte or analyte zone having (from LE to TE) increasing electric fields and decreasing conductivities. In peak mode ITP, analyte amounts are insufficient to reach plateau concentrations; such analytes focus in sharp Gaussian-like peaks. In peak mode ITP, analyte peaks will strongly overlap unless so-called spacer compounds are added with intermediate ionic mobilities between those of the analytes; such spacer compounds are able to segregate adjacent analyte zones.
A completed ITP separation is characterized by a dynamic equilibrium in which all coionic zones migrate with equal velocities. From this phenomenon ITP has obtained its name: iso = equal, tachos = speed, phoresis = migration.
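In symbols, with $\mu$ the ionic mobility and $E$ the local electric field in each zone, this steady state is characterized by equal zone velocities, so lower-mobility zones carry proportionally higher fields:

$v = \mu_\mathrm{LE} E_\mathrm{LE} = \mu_\mathrm{A} E_\mathrm{A} = \mu_\mathrm{TE} E_\mathrm{TE}$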
Isotachophoresis is equivalent to the steady-state stacking step in discontinuous electrophoresis.
Transient ITP
A popular form of ITP is transient ITP (tITP). It alleviates the limitation of conventional ITP that it has limited separation capacity because of analyte zone overlap. In transient ITP, analytes are first concentrated by ITP and can then be baseline separated by zone electrophoresis. Transient ITP is usually accomplished by dissolving the sample in the TE and sandwiching the sample/TE plug between LE zones - or vice versa: a sample/LE plug can also be sandwiched between TE zones. In the first case, analytes are focused at the front TE/LE interface. Meanwhile, the back of the TE plug becomes dissolved in the LE because the faster LE ions overtake the TE ions. When all of the TE ions are dissolved, the focusing process ceases and the analytes are separated according to the principles of zone electrophoresis.
tITP is nowadays more widespread than conventional ITP because it is easily implemented as a preconcentration step in capillary electrophoresis (CE) separations, making CE more sensitive while benefiting from its powerful separation capabilities.
References
Electrophoresis | Isotachophoresis | [
"Chemistry",
"Biology"
] | 780 | [
"Instrumental analysis",
"Molecular biology techniques",
"Electrophoresis",
"Biochemical separation processes"
] |
2,676,755 | https://en.wikipedia.org/wiki/Ian%20Cheshire%20%28engineer%29 | Ian Cheshire (12 April 1936 – 28 November 2013) was a Scottish petroleum engineer who developed the ECLIPSE reservoir simulator.
Biography
He had worked for Schlumberger where he was a Schlumberger Fellow from 1999 to 2003.
He was awarded the Anthony F. Lucas Gold Medal by the Society of Petroleum Engineers in 2001. He was also awarded the Queen's Award for Technology in 1985.
References
External links
IOR Views "Seminar and Dinner Mark Retirement of Professor John Fayers", 9 November 2004
Petroleum engineers
1936 births
Scottish engineers | Ian Cheshire (engineer) | [
"Engineering"
] | 111 | [
"Petroleum engineers",
"Petroleum engineering"
] |
2,676,822 | https://en.wikipedia.org/wiki/Permanganic%20acid | Permanganic acid (or manganic(VII) acid) is the inorganic compound with the formula HMnO4 and various hydrates. This strong oxoacid has been isolated as its dihydrate. It is the conjugate acid of permanganate salts. It is the subject of few publications and its characterization as well as its uses are very limited.
Preparation and structure
Permanganic acid is most often prepared by the reaction of dilute sulfuric acid with a solution of barium permanganate, the insoluble barium sulfate byproduct being removed by filtering:
Ba(MnO4)2 + H2SO4 → 2 HMnO4 + BaSO4↓
The sulfuric acid used must be dilute; reactions of permanganates with concentrated sulfuric acid yield the anhydride, manganese heptoxide.
Permanganic acid has also been prepared through the reaction of hydrofluorosilicic acid with potassium permanganate, through electrolysis, and through hydrolysis of manganese heptoxide, though the last route often results in explosions.
Crystalline permanganic acid has been prepared at low temperatures as the dihydrate, HMnO4·2H2O.
Although its structure has not been verified spectroscopically or crystallographically, HMnO4 is assumed to adopt a tetrahedral structure akin to that of perchloric acid.
Reactions
As a strong acid, HMnO4 is deprotonated to form the intensely purple coloured permanganates. Potassium permanganate, KMnO4, is a widely used, versatile and powerful oxidising agent.
Permanganic acid solutions are unstable, and gradually decompose into manganese dioxide, oxygen, and water, with initially formed manganese dioxide catalyzing further decomposition. Decomposition is accelerated by heat, light, and acids. Concentrated solutions decompose more rapidly than dilute.
References
Hydrogen compounds
Manganese(VII) compounds
Oxidizing acids
Mineral acids
Transition metal oxoacids | Permanganic acid | [
"Chemistry"
] | 430 | [
"Acids",
"Inorganic compounds",
"Mineral acids",
"Oxidizing agents",
"Oxidizing acids",
"Permanganates",
"Manganese(VII) compounds"
] |
2,676,890 | https://en.wikipedia.org/wiki/Universal%203D | Universal 3D (U3D) is a compressed file format standard for 3D computer graphics data.
The format was defined by a special consortium called 3D Industry Forum that brought together a diverse group of companies and organizations, including Intel, Boeing, HP, Adobe Systems, Bentley Systems, Right Hemisphere and others whose main focus had been the promotional development of 3D graphics for use in various industries, specifically at this time manufacturing as well as construction and industrial plant design. The format was later standardized by Ecma International in August 2005 as ECMA-363.
The goal is a universal standard for three-dimensional data of all kinds, to facilitate data exchange. The consortium promoted also the development of an open source library for facilitating the adoption of the format.
The format is natively supported by the PDF format and 3D objects in U3D format can be inserted into PDF documents and interactively visualized by Acrobat Reader (since version 7).
Editions
There are four editions to date.
The first edition is supported by most of the applications mentioned below. It is capable of storing vertex-based geometry, color, textures, lighting, bones, and transform-based animation.
The second and third editions correct some errata in the first edition, and the third edition also adds the concept of vendor specified blocks. One such block widely deployed is the RHAdobeMesh block, which provides a more compressed alternative to the mesh blocks defined in the first edition. Deep Exploration, Tetra4D for Acrobat Pro and PDF3D-SDK can author this data, and Adobe Acrobat and Reader 8.1 can read this data.
The fourth edition provides definitions for higher order primitives (curved surfaces).
Application support
Applications which support PDFs with embedded U3D objects include:
Adobe Acrobat Pro allows PDF creation and conversion of various file formats to U3D within the PDF. Acrobat Pro allows PDF creation and embedding of pre-created U3D files.
Adobe Photoshop CS3, CS4 and CS5 Extended are able to export a 3D Layer as a U3D file.
Adobe Substance 3D rendering software.
ArchiCAD allows export of U3D files.
Bluebeam Revu allows PDF creation and embedding of U3D within the PDF, and comes packaged with plugins that can export 3D PDFs from Revit and SolidWorks.
Daz Studio allows export to U3D.
iText open source Java library allows creation of PDF containing U3D
Jreality, an open source mathematical visualization package with 3D-PDF and U3D export
MeVisLab supports export of U3D models for biomedical images.
MicroStation allows export of PDF containing U3D.
Poser 7
Autodesk Inventor allows saving of files to 3D PDF containing U3D, available since version 2017.
KeyCreator allows reading of U3D data from U3D and PDF files and exporting models to U3D and embedding in 3D PDF.
Siemens Solid Edge allows exporting and saving of files to 3D PDF containing U3D, as well as other 3D document formats.
SolidWorks allows saving of files to 3D PDF containing U3D up to release 2014.
ArtiosCAD allows saving of files to 3D PDF containing U3D.
SpaceClaim allows opening and saving of 3D PDFs containing U3D.
See also
glTF
COLLADA
PRC (File format)
X3D
3DMLW
LibHaru
References
External links
Universal 3D Sample Software formerly at www.3dif.org
at Institute of Science and Technology (ISTI CNR)
Tutorial on embedding a U3D file into a PDF with Meshlab and Miktex
3D graphics file formats
Ecma standards | Universal 3D | [
"Technology"
] | 742 | [
"Computer standards",
"Ecma standards"
] |
2,677,209 | https://en.wikipedia.org/wiki/Drilling%20and%20blasting | Drilling and blasting is the controlled use of explosives and other methods, such as gas pressure blasting pyrotechnics, to break rock for excavation. It is practiced most often in mining, quarrying and civil engineering such as dam, tunnel or road construction. The result of rock blasting is often known as a rock cut.
Drilling and blasting currently utilizes many different varieties of explosives with different compositions and performance properties. Higher velocity explosives are used for relatively hard rock in order to shatter and break the rock, while low velocity explosives are used in soft rocks to generate more gas pressure and a greater heaving effect. For instance, an early 20th-century blasting manual compared the effects of black powder to that of a wedge, and dynamite to that of a hammer. The most commonly used explosives in mining today are ANFO based blends due to lower cost than dynamite.
Before the advent of tunnel boring machines (TBMs), drilling and blasting was the only economical way of excavating long tunnels through hard rock, where digging is not possible. Even today, the method is still used in the construction of tunnels, such as in the construction of the Lötschberg Base Tunnel. The decision whether to construct a tunnel using a TBM or using a drill and blast method includes a number of factors. Tunnel length is a key issue that needs to be addressed because large TBMs for a rock tunnel have a high capital cost, but because they are usually quicker than a drill and blast tunnel the price per metre of tunnel is lower. This means that shorter tunnels tend to be less economical to construct with a TBM and are therefore usually constructed by drill and blast. Managing ground conditions can also have a significant effect on the choice with different methods suited to different hazards in the ground.
History
The use of explosives in mining goes back to the year 1627, when gunpowder was first used in place of mechanical tools in the Hungarian (now Slovak) town of Banská Štiavnica. The innovation spread quickly throughout Europe and the Americas.
The standard method for blasting rocks was to drill a hole to a considerable depth and deposit a charge of gunpowder at the far end of the hole, then fill the remainder of the hole with clay or some other soft mineral substance, well rammed, to make it as tight as possible. A wire laid in the hole during this process was then removed and replaced with a train of gunpowder. This train was ignited by a slow match, often consisting simply of brown paper smeared with grease, intended to burn long enough to allow the person who fired it time to reach a place of safety.
The uncertainty of this method led to many accidents and various measures were introduced to improve safety for those involved. One was replacing the iron wire, by which the passage for the gunpowder is formed, with one of copper, to eliminate sparking that could ignite the powder prematurely. Another was the use of a safety fuse. This consisted of small train of gunpowder inserted in a water-proof cord, which burns at a steady and uniform rate. This in turn was later replaced by a long piece of wire that was used to deliver an electric charge to ignite the explosive. The first to use this method for underwater blasting was Charles Pasley who employed it in 1839 to break up the wreck of the British warship HMS Royal George which had become a shipping hazard at Spithead.
An early major use of blasting to remove rock occurred in 1843 when the British civil engineer William Cubitt used 18,000 lbs of gunpowder to remove a 400-foot-high chalk cliff near Dover as part of the construction of the South Eastern Railway. About 400,000 cubic yards of chalk was displaced in an exercise that it was estimated saved the company six months time and £7,000 in expense.
While drilling and blasting saw limited use in pre-industrial times using gunpowder (such as with the Blue Ridge Tunnel in the United States, built in the 1850s), it was not until more powerful (and safer) explosives, such as dynamite (patented 1867), as well as powered drills were developed, that its potential was fully realised.
Drilling and blasting was successfully used to construct tunnels throughout the world, notably the Fréjus Rail Tunnel, the Gotthard Rail Tunnel, the Simplon Tunnel, the Jungfraubahn and even the longest road tunnel in the world, Lærdalstunnelen, are constructed using this method.
In 1990, 2.1 billion kg of commercial explosives were consumed in the United States (12 m3 per capita), representing an estimated expenditure of 3.5 to 4 billion 1993 dollars on blasting. In this year the Soviet Union was the leader in total volume with 2.7 billion kg of explosives consumed (13 m3 per capita), and Australia had the highest per capita explosives consumption that year with 45 m3 per capita.
Procedure
As the name suggests, drilling and blasting works as follows:
A blast pattern is created
A number of holes are drilled into the rock, which are then partially filled with explosives.
Stemming, inert material, is packed into the holes to direct the explosive force into the surrounding rock.
Detonating the explosive causes the rock to collapse.
Rubble is removed and the new tunnel surface is reinforced.
Repeating these steps until desired excavation is complete.
The positions and depths of the holes (and the amount of explosive each hole receives) are determined by a carefully constructed pattern, which, together with the correct timing of the individual explosions, will guarantee that the tunnel will have an approximately circular cross-section.
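One common bookkeeping quantity behind such patterns is the powder factor, the mass of explosive used per volume of rock broken. A minimal sketch of the geometry follows; the numbers are placeholders only, since real designs depend on rock type, explosive and local regulations.

```python
def charge_per_hole(burden_m, spacing_m, bench_height_m, powder_factor_kg_m3):
    """Explosive mass for one blasthole on a rectangular pattern.

    Each hole is assumed to break a block of burden x spacing x bench
    height; the powder factor is chosen from experience.
    """
    rock_volume_m3 = burden_m * spacing_m * bench_height_m
    return rock_volume_m3 * powder_factor_kg_m3

# Illustrative values only: 3 m burden, 3.5 m spacing, 10 m bench and a
# 0.5 kg/m^3 powder factor give about 52.5 kg of explosive per hole.
mass_kg = charge_per_hole(3.0, 3.5, 10.0, 0.5)
```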
During operation, blasting mats may be used to contain the blast, suppress dust and noise, for fly rock prevention and sometimes to direct the blast.
Rock support
As a tunnel or excavation progresses the roof and side walls need to be supported to stop the rock falling into the excavation. The philosophy and methods for rock support vary widely but typical rock support systems can include:
Rock bolts or rock dowels
Shotcrete
Ribs or mining arches and lagging
Cable bolts
In-situ concrete
Typically a rock support system would include a number of these support methods, each intended to undertake a specific role in the rock support such as the combination of rock bolting and shotcrete.
Gallery
See also
Building implosion
Demolition
International Society for Explosive Engineers
References
External links
"Air Curtain Fences Blast" Popular Mechanics, August 1954, pp. 96–97, the delicate controlled blast in 1954 to connect the two reservoirs at a Canadian Niagara Falls power station.
This is an extensive survey of techniques used in the early 20th century.
Tunnel construction
Mining techniques
Explosives
Civil engineering | Drilling and blasting | [
"Chemistry",
"Engineering"
] | 1,319 | [
"Construction",
"Civil engineering",
"Explosives",
"Explosions"
] |
2,677,564 | https://en.wikipedia.org/wiki/Belt%20sander | A belt sander or strip sander is a sander used in shaping and finishing wood and other materials. It consists of an electric motor that turns a pair of drums on which a continuous loop of sandpaper is mounted. Belt sanders may be handheld and moved over the material, or stationary (fixed), where the material is moved to the sanding belt. Stationary belt sanders are sometimes mounted on a work bench, in which case they are called bench sanders. Stationary belt sanders are often combined with a disc sander.
Belt sanders can have a very aggressive action on wood and are normally used only for the beginning stages of the sanding process, or used to rapidly remove material. Sometimes they are also used for removing paints or finishes from wood. Fitted with fine grit sand paper, a belt sander can be used to achieve a completely smooth surface.
Stationary belt sanders are used for removing non-ferrous metals, such as aluminum. Non-ferrous metals tend to clog grinding wheels, quickly making them useless for grinding soft metals. Because the small grooves in the sandpaper are opened up as they go around the arc of the drive wheel, belt sanders are less prone to clogging.
Belt sanders can vary in size from the small handheld unit shown in the illustration to units wide enough to sand a full 1.2 by 2.5 m (4-by-8 foot) sheet of plywood in a manufacturing plant.
Sanding wood produces a large amount of sawdust. Therefore, belt sanders employed in woodworking are usually equipped with some type of dust collection system. It may be as simple as a cloth filter bag attached to a portable sander or a large vacuum system to suck dust particles away into a central collector.
Taut-belt sanders allow for adjusting the angle of the idler drum to keep the belt centered.
Slack-belt sanding is commonly used in the manufacturing process of guitars and other medium-sized wooden objects. It employs a long sanding belt which runs slackly over the object. The machinist then exerts pressure to it to sand down specific areas.
Racing
Belt sanders were one of the first power tools used in the growing field of power tool drag racing, wherein a pair of stock or modified belt sanders are placed in parallel wooden channels and fitted with long extension cords. Each heat begins when a common switch, or individual switches triggered by the racers, energizes the sanders, causing them to race toward the end of the track, spitting wood dust along the way. Stock and modified sanders typically race on tracks of different lengths. Sanders of all shapes and sizes can go very fast or very slow depending on the power of the motor; some can reach speeds of around 8 km/h (5 mph).
Wide belt sander
A wide belt sander is used to machine stock flat and to specific thicknesses. It consists of sanding heads, contact drums and a conveyor belt. The sander is electric powered but relies on air pressure to control the abrasive belt. A rubber conveyor carries the stock through the machine while a wide abrasive belt removes material from the top surface. It is sometimes used in conjunction with the jointer to create square and true stock.
This type of sander has applications in woodworking and furniture production. It does fine sanding using rigid sanding pads and air cushion pads, cross and diagonal sanding as well as lacquer sanding.
References
Grinding machines
Tool racing
Woodworking hand-held power tools
Woodworking machines | Belt sander | [
"Physics",
"Technology"
] | 727 | [
"Woodworking machines",
"Machines",
"Physical systems"
] |
2,678,192 | https://en.wikipedia.org/wiki/Faddeev%20equations | The Faddeev equations, named after their discoverer Ludvig Faddeev, describe, at once, all the possible exchanges/interactions in a system of three particles in a fully quantum mechanical formulation. They can be solved iteratively.
In general, Faddeev equations need as input a potential that describes the interaction between two individual particles. It is also possible to introduce a term in the equation in order to take also three-body forces into account.
The Faddeev equations are the most often used non-perturbative formulations of the quantum-mechanical three-body problem. Unlike the three-body problem in classical mechanics, the quantum three-body problem is uniformly soluble.
In nuclear physics, the off the energy shell nucleon-nucleon interaction has been studied by analyzing (n,2n) and (p,2p) reactions on deuterium targets, using the Faddeev Equations. The nucleon-nucleon interaction is expanded (approximated) as a series of separable potentials. The Coulomb interaction between two protons is a special problem, in that its expansion in separable potentials does not converge, but this is handled by matching the Faddeev solutions to long range Coulomb solutions, instead of to plane waves.
Separable potentials are interactions that do not preserve a particle's location. Ordinary local potentials can be expressed as sums of separable potentials. The physical nucleon-nucleon interaction, which involves exchange of mesons, is not expected to be either local or separable.
References
L.D. Faddeev, S.P. Merkuriev, Quantum Scattering Theory for Several Particle Systems, Springer, August 31, 1993, .
Quantum mechanics
Nuclear physics
Equations | Faddeev equations | [
"Physics",
"Mathematics"
] | 374 | [
"Theoretical physics",
"Mathematical objects",
"Quantum mechanics",
"Equations",
"Nuclear physics"
] |
2,678,365 | https://en.wikipedia.org/wiki/Elmer%20FEM%20solver | Elmer is a computational tool for multi-physics problems. It has been developed by CSC in collaboration with Finnish universities, research laboratories and industry. Elmer FEM solver is free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2 or any later.
Elmer includes physical models of fluid dynamics, structural mechanics, electromagnetics, heat transfer and acoustics, for example. These are described by partial differential equations which Elmer solves by the Finite Element Method (FEM).
Elmer comprises several different parts:
ElmerGrid – A mesh conversion tool, which can be used to convert differing mesh formats into Elmer-suitable meshes.
ElmerGUI – A graphical interface which can be used on an existing mesh to assign physical models; this generates a "case file" describing the problem to be solved. The GUI does not expose the full functionality of ElmerSolver.
ElmerSolver – The numerical solver which performs the finite element calculations, using the mesh and case files.
ElmerPost – A post-processing/visualisation module. (Development stopped in favour of other post-processing tools such as ParaView, VisIt, etc.)
The different parts of Elmer software may be used independently. While the main module is the ElmerSolver tool, which includes many sophisticated features for physical model solving, the additional components are required to create a full workflow. For pre- and post-processing, other tools such as ParaView can be used to visualise the output.
The software runs on Unix and Windows platforms and can be compiled with a large variety of compilers, using the CMake building tool. The solver can also be used in a multi-host parallel mode on platforms that support MPI. Elmer's parallelisation capability is one of the solver's greatest strengths.
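To give a flavour of the workflow, below is a minimal sketch of a solver input file (SIF) for a steady heat-conduction case. The section and keyword names follow Elmer's documented conventions, but this is an illustrative skeleton rather than a complete tested case, and details vary between versions.

```
Header
  Mesh DB "." "mesh"
End

Simulation
  Coordinate System = Cartesian
  Simulation Type = Steady State
End

Body 1
  Equation = 1
  Material = 1
End

Equation 1
  Active Solvers(1) = 1
End

Solver 1
  Equation = Heat Equation
  Variable = Temperature
  Procedure = "HeatSolve" "HeatSolver"
End

Material 1
  Heat Conductivity = 1.0
End

Boundary Condition 1
  Target Boundaries(1) = 1
  Temperature = 0.0
End
```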
External links
See also
Finite Element Method
List of finite element packages
References
Numerical software
Free computer-aided design software
Finite element software for Linux
Free software programmed in Fortran
Free science software
Computational physics
Engineering software that uses Qt
Computer-aided engineering software for Linux
Software that uses Tk (software) | Elmer FEM solver | [
"Physics",
"Mathematics"
] | 431 | [
"Mathematical software",
"Numerical software",
"Computational physics"
] |
18,775,143 | https://en.wikipedia.org/wiki/PPAR%20agonist | PPAR agonists are drugs which act upon the peroxisome proliferator-activated receptor. They are used for the treatment of symptoms of the metabolic syndrome, mainly for lowering triglycerides and blood sugar.
Classification
PPAR-alpha and PPAR-gamma are the molecular targets of a number of marketed drugs. The main classes of PPAR agonists are:
PPAR-alpha agonists
An endogenous compound, 7(S)-hydroxydocosahexaenoic acid (7(S)-HDHA), a docosanoid derivative of the omega-3 fatty acid DHA, was isolated as an endogenous high-affinity ligand for PPAR-alpha in the rat and mouse brain. The 7(S) enantiomer bound PPAR-alpha with micromolar affinity, roughly tenfold higher than the (R) enantiomer, and could trigger dendritic activation. PPARα (alpha) is the main target of fibrate drugs, a class of amphipathic carboxylic acids (clofibrate, gemfibrozil, ciprofibrate, bezafibrate, and fenofibrate). They were originally indicated for cholesterol dyslipidemia and more recently for disorders characterized by high triglycerides.
PPAR-gamma agonists
PPARγ (gamma) is the main target of the drug class of thiazolidinediones (TZDs), used in diabetes mellitus and other diseases that feature insulin resistance. It is also mildly activated by certain NSAIDs (such as ibuprofen) and indoles, as well as from a number of natural compounds. Known inhibitors include the experimental agent GW-9662.
They are also used in treating hyperlipidaemia in atherosclerosis. Here they act by increasing the expression of ABCA1, which transports extra-hepatic cholesterol into HDL. Increased uptake and excretion from the liver therefore follows.
Animal studies have shown their possible role in amelioration of pulmonary inflammation, especially in asthma.
PPAR-delta agonists
PPARδ (delta) is the main target of a research chemical named GW501516. It has been shown that agonism of PPARδ changes the body's fuel preference from glucose to lipids.
Dual and pan PPAR agonists
A fourth class of dual PPAR agonists, so-called glitazars, which bind to both the α and γ PPAR isoforms, are currently under active investigation for treatment of a larger subset of the symptoms of the metabolic syndrome. These include the experimental compounds aleglitazar, muraglitazar, oxeglitazar and tesaglitazar. In June 2013, saroglitazar was the first glitazar to be approved for clinical use.
In addition, there is continuing research and development of new dual α/δ and γ/δ PPAR agonists for additional therapeutic indications, as well as "pan" agonists acting on all three isoforms.
The anti-hypertension drug telmisartan is known to have PPAR γ/δ dual partial agonist activity in vivo. It also activates PPAR-α in vitro.
Research
A relatively recent avenue of drug research in treating depression and drug addiction is through PPARα and PPARγ activation. Both TLR4-mediated and NF-κB-mediated signalling pathways have been implicated in the development of addiction to several drugs, such as opioids and cocaine, and are therefore appealing targets for pharmacotherapy. Despite a breadth of preclinical research showing potential in animal models of addiction to alcohol, nicotine, cocaine, opioids and methamphetamine, the human evidence is limited: few trials have examined PPAR agonists in humans, and results so far (as of 2020) have not been particularly promising. Several hypotheses have been suggested for the poor translation from animal to human studies, such as the potency and selectivity of PPAR ligands, sex-related variability, and species differences in the distribution and signaling of PPAR.
References
Transcription factors | PPAR agonist | [
"Chemistry",
"Biology"
] | 890 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
18,777,003 | https://en.wikipedia.org/wiki/XCT-790 | XCT-790 is a potent and selective inverse agonist ligand of the estrogen-related receptor alpha (ERRα). Independent of its inhibition of ERRα, XCT-790 is a potent mitochondrial electron transport chain uncoupler.
Mitochondrial electron transport chain uncoupling effect
XCT-790 has been shown to uncouple oxygen consumption from ATP production in mitochondria at very low, nanomolar-range doses, independently of ERRα expression. Its effects are similar to those of proton ionophores such as FCCP, which disrupt mitochondrial transmembrane electrochemical gradients. This uncoupling leads to a fast drop in ATP production and, consequently, a prompt activation of AMPK.
References
External links
Nitriles
Trifluoromethyl compounds
Thiadiazoles
Uncouplers | XCT-790 | [
"Chemistry"
] | 178 | [
"Nitriles",
"Cellular respiration",
"Functional groups",
"Uncouplers"
] |
18,779,111 | https://en.wikipedia.org/wiki/Faucet%20aerator | A faucet aerator (or tap aerator) is often found at the tip of modern indoor water faucets. Aerators can simply be screwed onto the faucet head, creating a non-splashing stream and often delivering a mixture of water and air.
History
The aerator was invented by Greek engineer Elie Aghnides.
Function
An aerator can:
Prevent splashing
Shape the water stream coming out of the faucet spout, to produce a straight and evenly pressured stream
Conserve water and reduce energy costs
Reduce faucet noise
Increase perceived water pressure (often used in homes with low water pressure); sometimes described as a pressure regulator or flow regulator
Provide slight filtration of debris due to a small sieve plate
Splash prevention
When a single stream of water hits a surface the water must go somewhere, and because the stream is uniform the water will tend to go mostly in the same direction. If a single stream hits a surface which is curved, then the stream will conform to the shape and be easily redirected with the force of the volume of water falling. Adding the aerator does two things: it reduces the volume of falling water which reduces the splash distance, and it creates multiple "mini-streams" within the main stream. Each mini-stream, if it were falling by itself, would splash or flow in a unique and different way when it hit the surface, as compared to the other mini-streams. Because they are all falling at the same time, the streams will splash in their own way but end up hitting other splash streams. The resulting interference cancels out the majority of the splashing effect.
Conservation and energy reduction
Because the aerator limits the water flow through the faucet, water usage is reduced compared to the same duration of flow without an aerator. In the case of hot water, because less water is used, less heat energy is used.
Perceived water pressure
The perception of water pressure is actually the speed of the water as it hits a surface (the hands, in the case of hand washing). When an aerator is added to the faucet (or fluid stream), a region of higher pressure is created behind the aerator. Because of the higher pressure behind the aerator and the lower pressure in front of it (outside the faucet), Bernoulli's principle implies an increase in the velocity of the fluid flow.
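For steady, incompressible flow along a streamline (neglecting height differences), Bernoulli's principle can be stated as

$p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2$

so a drop in pressure from behind the aerator ($p_1$) to outside it ($p_2$) corresponds to a rise in the speed of the stream from $v_1$ to $v_2$.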
Process
Aeration occurs in two basic steps:
Air is drawn into the water stream, breaking the stream into a flow of tiny droplets mixed with air.
The mixture of air and water passes through a screen, further mixing the air and water and evenly spreading out the resulting stream.
Design and features
The three major components of an aerator are the housing, the insert, and the rubber washer.
A faucet aerator can be classified on the basis of its flow rate and the type of water stream (aerated, non-aerated, spray) it produces. In general, standard-sized aerators are available with female (M22x1) or male threading (M24x1). Bathtub spouts often have a bigger diameter with a male M28x1 thread. The United States uses different thread sizes: 15/16"-27 for standard-sized male and 55/64"-27 for standard-sized female threads.
Using faucet aerators may help meet local regulations and construction standards such as ASME A112.18.1, U.S. Leadership in Energy and Environmental Design (LEED) certifications or WELS (Australia/New Zealand). In Europe, European standard EN246 "Sanitary tapware — General specifications for flow rate regulators" defines the flow rate and noise reduction requirements.
References
Plumbing
Valves | Faucet aerator | [
"Physics",
"Chemistry",
"Engineering"
] | 765 | [
"Plumbing",
"Physical systems",
"Construction",
"Valves",
"Hydraulics",
"Piping"
] |
4,921,531 | https://en.wikipedia.org/wiki/Patlak%20plot | A Patlak plot (sometimes called Gjedde–Patlak plot, Patlak–Rutland plot, or Patlak analysis) is a graphical analysis technique based on the compartment model that uses linear regression to identify and analyze pharmacokinetics of tracers involving irreversible uptake, such as in the case of deoxyglucose. It is used for the evaluation of nuclear medicine imaging data after the injection of a radioopaque or radioactive tracer.
The method is model-independent because it does not depend on any specific compartmental model configuration for the tracer, and the minimal assumption is that the behavior of the tracer can be approximated by two compartments – a "central" (or reversible) compartment that is in rapid equilibrium with plasma, and a "peripheral" (or irreversible) compartment, where tracer enters without ever leaving during the time of the measurements. The amount of tracer in the region of interest accumulates according to the equation:

$R(t) = K \int_0^t C_p(\tau)\,d\tau + V_0\,C_p(t)$

where $t$ represents time after tracer injection, $R(t)$ is the amount of tracer in the region of interest, $C_p(t)$ is the concentration of tracer in plasma or blood, $K$ is the clearance determining the rate of entry into the peripheral (irreversible) compartment, and $V_0$ is the distribution volume of the tracer in the central compartment. The first term of the right-hand side represents tracer in the peripheral compartment, and the second term tracer in the central compartment.

By dividing both sides by $C_p(t)$, one obtains:

$\frac{R(t)}{C_p(t)} = K\,\frac{\int_0^t C_p(\tau)\,d\tau}{C_p(t)} + V_0$

The unknown constants $K$ and $V_0$ can be obtained by linear regression from a graph of $R(t)/C_p(t)$ against $\int_0^t C_p(\tau)\,d\tau / C_p(t)$.
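As a sketch of how this regression is done in practice, the following uses NumPy on invented sample data (all numbers are illustrative, not real tracer measurements): the normalized Patlak coordinates are computed and the late, approximately linear portion of the plot is fitted.

```python
import numpy as np

# Invented sample data: plasma input Cp(t) and tissue activity R(t).
t  = np.array([1, 2, 5, 10, 20, 30, 45, 60], dtype=float)  # minutes
cp = np.array([8.0, 6.0, 4.0, 2.5, 1.5, 1.1, 0.9, 0.8])
r  = np.array([2.0, 3.1, 5.0, 6.8, 9.0, 10.6, 12.9, 14.8])

# Patlak coordinates: y = R/Cp, x = (running integral of Cp) / Cp,
# using trapezoidal integration from the first sample.
x = np.array([np.trapz(cp[:i + 1], t[:i + 1]) for i in range(len(t))]) / cp
y = r / cp

# Fit only the late, approximately linear portion of the plot:
K, V0 = np.polyfit(x[3:], y[3:], 1)  # slope = clearance K, intercept = V0
```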
See also
Logan plot
Positron emission tomography
Multi-compartment model
Binding potential
Deconvolution
Albert Gjedde
References
Further literature
External links
PMOD, Patlak Plot, PMOD Kinetic Modeling Tool (PKIN).
Gjedde–Patlak plot, Turku PET Centre.
Mathematical modeling
Systems theory
Plots (graphics)
Pharmacokinetics | Patlak plot | [
"Chemistry",
"Mathematics"
] | 404 | [
"Pharmacology",
"Mathematical modeling",
"Pharmacokinetics",
"Applied mathematics"
] |
4,923,796 | https://en.wikipedia.org/wiki/Plasma%20channel | A plasma channel is a conductive channel of plasma. A plasma channel can be formed in the following ways.
With a high-powered laser operating at a frequency that provides enough energy to ionize an atmospheric gas into a plasma, as in a Laser-Induced Plasma Channel, for example in an electrolaser.
With a voltage higher than the dielectric breakdown voltage applied across a dielectric, so that dielectric breakdown occurs (a rough estimate of the required voltage is sketched below).
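An order-of-magnitude sketch of the second route, using the rule-of-thumb breakdown field of dry air at atmospheric pressure (about 3 MV/m; the real threshold depends on pressure, humidity and electrode geometry):

```python
E_BREAKDOWN_AIR = 3.0e6  # V/m, rule of thumb for dry air at 1 atm

def spark_gap_voltage(gap_m):
    """Approximate voltage needed to break down an air gap of given width."""
    return E_BREAKDOWN_AIR * gap_m

v = spark_gap_voltage(0.01)  # roughly 30 kV for a 1 cm gap
```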
A plasma channel has a low electrical resistance and, once formed, will permit continuous current flow if the energy source that heats the plasma can be maintained. Unlike a normal electrical conductor, the resistance (and voltage drop) across an unconfined plasma channel decreases with increasing current flow, a property called negative resistance. As a result, an electric spark that initially required a very high voltage to initiate avalanche breakdown within the insulating gas will rapidly evolve into a hot, low-voltage electric arc if the electrical power source can continue to deliver sufficient power to the arc. Plasma channels tend to self constrict (see plasma pinch) due to magnetic forces stemming from the current flowing through the plasma.
On Earth, plasma channels are most frequently encountered in lightning storms.
References
Electromagnetism | Plasma channel | [
"Physics"
] | 260 | [
"Physical phenomena",
"Electromagnetism",
"Plasma physics",
"Plasma phenomena",
"Plasma physics stubs",
"Fundamental interactions"
] |