NMR tube
https://en.wikipedia.org/wiki/NMR%20tube

An NMR tube is a thin-walled glass tube used to contain samples in nuclear magnetic resonance spectroscopy. Tubes with a 5 mm outer diameter are typical, although 3 mm and 10 mm tubes are also used. It is important that the tubes be uniformly thick and well balanced so that the tube spins at a regular rate (i.e., does not wobble), usually about 20 Hz, in the NMR spectrometer.
Construction
NMR tubes are typically made of borosilicate glass. They are available in seven and eight inch lengths; a 5 mm outer diameter is most common, but 3 mm and 10 mm outer diameters are available as well. Where boron NMR is desired, quartz NMR tubes, which contain far lower concentrations of boron than borosilicate glass, are available. Specialized closures such as J. Young valves and screw caps are available in addition to the more common polyethylene caps.
Two common specifications for NMR tubes are concentricity and camber. Concentricity refers to the variation in the radial centers, measured at the inner and outer walls. Camber refers to the "straightness" of the tube. Poor values for either may cause poorer quality spectra by reducing the homogeneity of the sample. In particular, an NMR tube which has poor camber may wobble when rotated, giving rise to spinning side bands. With modern manufacturing techniques even cheap tubes give good spectra for routine applications.
Sample preparation
Usually, only a small amount of sample is dissolved in an appropriate solvent. For 1H NMR experiments, this is usually a deuterated solvent such as CDCl3. Enough solvent should be used to fill the tube to a depth of 4–5 cm (depending on the spectrometer). Protein NMR is usually performed in a 90% H2O (or buffer)/10% D2O mixture.
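As a rough guide to how much solvent a given fill depth requires, here is a minimal sketch (assuming a typical inner diameter of about 4.2 mm for a 5 mm-OD tube; check the manufacturer's specification):

```python
import math

def solvent_volume_ml(fill_depth_cm: float, inner_diameter_mm: float = 4.2) -> float:
    """Approximate solvent volume for a cylindrical NMR tube.

    The 4.2 mm default inner diameter is a typical value for a 5 mm-OD
    tube with ~0.4 mm walls; it is an assumption, not a specification.
    """
    radius_cm = (inner_diameter_mm / 10.0) / 2.0
    return math.pi * radius_cm ** 2 * fill_depth_cm  # cm^3 == mL

# A 4.5 cm fill in a standard 5 mm tube needs roughly 0.6 mL of solvent.
print(f"{solvent_volume_ml(4.5):.2f} mL")
```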
The sample may be sonicated or agitated to aid dissolution, and solids are removed by filtration through a plug of Celite layered on a cotton-wool plug in a Pasteur pipette, directly into the NMR tube.
The NMR tube is then usually sealed with a polyethylene cap, but can be flame sealed or sealed with a Teflon 'Schlenk' tap or even a very small rubber septum. Parafilm may be wrapped around the cap to reduce solvent evaporation.
Shigemi tubes
A Shigemi tube is a reduced-volume NMR tube used in place of an ordinary tube when only a small amount of sample is available, as is often the case in protein NMR; the correspondingly smaller solvent volume maintains a higher sample concentration. The reduced sample depth is compensated for by solid glass beneath the level of the sample (the length of which varies with the make of spectrometer) and a solid glass plunger inserted from above. Once air bubbles have been expelled, the plunger is secured to the tube proper with Parafilm. Ideally, the glass is matched in magnetic susceptibility to the deuterated solvent used, giving better spectral resolution.
Cleaning
NMR tubes are hard to clean because of their small bore, and they are best cleaned before the sample has dried.
Cleaning is usually performed by rinsing with the same (non-deuterated) solvent used to dissolve the initial sample. Dichloromethane and acetone are good choices, because dichloromethane is similar in polarity to chloroform, a common NMR solvent, while acetone dissolves many organic compounds. Sonication and scrubbing with a pipe cleaner may help remove traces of solid contaminants. If necessary, the tube may be filled with an oxidizing solution of aqua regia or piranha solution (H2O2/H2SO4). Care should be taken with these solutions, as they can unexpectedly and violently erupt from the NMR tube through pressure build-up (aqua regia) or explosion (piranha). Chromic acid solutions are never used, because traces of paramagnetic chromium left on the glass interfere with NMR experiments.
When the NMR tube is clean, it is triple-rinsed with distilled water and left to air-dry or dried in an oven at low temperature, ideally not exceeding 60 °C; at higher temperatures slight distortion can occur, which affects the tube's camber. A final rinse with a residue-free solvent that evaporates readily at 60 °C, such as methanol, is recommended; acetone should be avoided for this final rinse because it can leave a residue.
NMR tube cleaner
A better alternative to potentially hazardous oxidizers is an NMR tube cleaner, an apparatus that uses vacuum to flush solvent and/or a detergent solution through the entire length of the NMR tube.
The NMR tube, with its cap fixed to its base, is placed upside down on the apparatus so that it fits over an inner tube linked to a solvent reservoir; the cap rests on the outer tube of the apparatus and forms a vacuum seal. When vacuum is applied (usually from a water aspirator via the vacuum inlet), solvent is drawn from the reservoir, forced up to the base of the inverted NMR tube, and flushed back out, cleaning the tube. A flask attached to the apparatus completes the vacuum line.
This sort of apparatus is commercially available, though it is costly and easy to destroy by shattering or breaking off the cleaning tube. Equivalent designs may be assembled from ordinary labware as well.
Ionic polymer–metal composites
https://en.wikipedia.org/wiki/Ionic%20polymer%E2%80%93metal%20composites

Ionic polymer–metal composites (IPMCs) are synthetic composite nanomaterials that display artificial muscle behavior under an applied voltage or electric field. IPMCs are composed of an ionic polymer such as Nafion or Flemion whose surfaces are chemically plated or physically coated with a conductor such as platinum or gold. Under an applied voltage (1–5 V for typical samples), ion migration and redistribution across a strip of IPMC result in a bending deformation. An IPMC can also be an ionic hydrogel immersed in an electrolyte solution and coupled to the electric field indirectly.
If the plated electrodes are arranged in a non-symmetric configuration, the imposed voltage can induce a variety of deformations such as twisting, rolling, torsion, turning, twirling, whirling and non-symmetric bending. Conversely, if such deformations are physically applied to an IPMC strip, it generates an output voltage signal (a few millivolts for typical small samples), so IPMCs can also act as sensors and energy harvesters. IPMCs are a type of electroactive polymer. They work well in liquid environments as well as in air. They have a force density of about 40 in a cantilever configuration, meaning that they can generate a tip force of almost 40 times their own weight in cantilever mode. In actuation, sensing and energy harvesting, IPMCs have a very broad bandwidth, extending to kilohertz and higher. IPMCs were first introduced in 1998 by Shahinpoor, Bar-Cohen, Xue, Simpson and Smith (see references below), but the original idea of ionic polymer actuators and sensors goes back to 1992–93, in work by Adolf, Shahinpoor, Segalman, Witkowski, Osada, Okuzaki, Hori, Doi, Matsumoto, Hirose, Oguro, Takenaka, Asaka and Kawami, as documented below:
1. Segalman, D. J., Witkowski, W. R., Adolf, D. B. and Shahinpoor, M., "Theory and Application of Electrically Controlled Polymeric Gels", Int. Journal of Smart Material and Structures, vol. 1, pp. 95–100 (1992)
2. Shahinpoor, M., "Conceptual Design, Kinematics and Dynamics of Swimming Robotic Structures Using Ionic Polymeric Gel Muscles", Int. Journal of Smart Material and Structures, vol. 1, pp. 91–94 (1992)
3. Osada, Y., Okuzaki, H. and Hori, H., "A Polymer Gel with Electrically Driven Motility", Nature, vol. 355, pp. 242–244 (1992)
4. Oguro, K., Kawami, Y. and Takenaka, H., "Bending of an Ion-Conducting Polymer Film Electrode Composite by an Electric Stimulus at Low Voltage", Trans. J. Micro-Machine Society, vol. 5, pp. 27–30 (1992)
5. Doi, M., Matsumoto, M. and Hirose, Y., "Deformation of Ionic Gels by Electric Fields", Macromolecules, vol. 25, pp. 5504–5511 (1992)
6. Oguro, K., Asaka, K. and Takenaka, H., "Polymer Film Actuator Driven by Low Voltage", Proceedings of the 4th International Symposium on Micro Machines and Human Science, Nagoya, pp. 38–40 (1993)
7. Adolf, D., Shahinpoor, M., Segalman, D. and Witkowski, W., "Electrically Controlled Polymeric Gel Actuators", US Patent No. 5,250,167, issued October 5, 1993
8. Oguro, K., Kawami, Y. and Takenaka, H., "Actuator Element", US Patent No. 5,268,082, issued December 7, 1993
These patents were followed by additional related patents:
9. Shahinpoor, M., "Spring-Loaded Ionic Polymeric Gel Linear Actuator", US Patent No. 5,389,222, issued February 14, 1995
10. Shahinpoor, M. and Mojarrad, M., "Soft Actuators and Artificial Muscles", US Patent No. 6,109,852, issued August 29, 2000
11. Shahinpoor, M. and Mojarrad, M., "Ionic Polymer Sensors and Actuators", US Patent No. 6,475,639, issued November 5, 2002
12. Shahinpoor, M. and Kim, K. J., "Method of Fabricating a Dry Electro-Active Polymeric Synthetic Muscle", US Patent No. 7,276,090, issued October 2, 2007
Earlier, Tanaka, Nishio and Sun had introduced the phenomenon of ionic gel collapse in an electric field:
13. Tanaka, T., Nishio, I. and Sun, S. T., "Collapse of Gels in an Electric Field", Science, vol. 218, pp. 467–469 (1982)
Still earlier, Hamlen, Kent and Shafer had introduced the electrochemical contraction of ionic polymer fibers:
14. Hamlen, R. P., Kent, C. E. and Shafer, S. N., "Electrolytically Activated Contractile Polymer", Nature, vol. 206, no. 4989, pp. 1140–1141 (1965)
Credit is also due to Darwin G. Caldwell and Paul M. Taylor for early work on chemically stimulated gels as artificial muscles:
15. Caldwell, D. G. and Taylor, P. M., "Chemically Stimulated Pseudo-Muscular Actuation", International Journal of Engineering Science, vol. 28, no. 8, pp. 797–808 (1990)
References
Shahinpoor, M., Bar-Cohen, Y., Simpson, J. O. and Smith, J., "Ionic Polymer Metal Composites (IPMCs) as Biomimetic Sensors, Actuators and Artificial Muscles – A Review", Int. J. Smart Materials and Structures, vol. 7, no. 6, pp. R15–R30 (1998)
Shahinpoor, M., Bar-Cohen, Y., Xue, T., Simpson, J. O. and Smith, J., "Ionic Polymer-Metal Composites (IPMC) as Biomimetic Sensors and Actuators", Proceedings of SPIE's 5th Annual International Symposium on Smart Structures and Materials, 1–5 March 1998, San Diego, California, Paper No. 3324-27
Nemat-Nasser, S. and Thomas, C., "Electroactive Polymer (EAP) Actuators as Artificial Muscles – Reality, Potential and Challenges", in Ionomeric Polymer-Metal Composites, edited by Y. Bar-Cohen, SPIE, Chap. 6 (2001)
Khawwaf, J., et al., "Robust Tracking Control of an IPMC Actuator Using Nonsingular Terminal Sliding Mode", Smart Materials and Structures, vol. 26, no. 9, 095042 (2017)
Contact resistance
https://en.wikipedia.org/wiki/Contact%20resistance

Electrical contact resistance (ECR, or simply contact resistance) is resistance to the flow of electric current caused by incomplete contact of the surfaces through which the current is flowing, and by films or oxide layers on the contacting surfaces. It occurs at electrical connections such as switches, connectors, breakers, contacts, and measurement probes. Contact resistance values are typically small (in the microohm to milliohm range).
Contact resistance can cause significant voltage drops and heating in circuits with high current. Because contact resistance adds to the intrinsic resistance of the conductors, it can cause significant measurement errors when exact resistance values are needed.
Contact resistance may vary with temperature. It may also vary with time (most often decreasing) in a process known as resistance creep.
Electrical contact resistance is also called interface resistance, transitional resistance, or the correction term. Parasitic resistance is a more general term, of which contact resistance is usually assumed to be a major component.
William Shockley introduced the idea of a potential drop on an injection electrode to explain the difference between experimental results and the gradual channel approximation model.
Measurement methods
Because contact resistance is usually comparatively small, it can be difficult to measure, and four-terminal measurement gives better results than a simple two-terminal measurement made with an ohmmeter.
In a two-terminal measurement (as with a typical ohmmeter), the current used to make the measurement is injected through the measurement leads, which causes a potential drop not just across the contact area to be measured but also across the probe contacts and the leads. That means that the contact resistance of the probes and their leads is inseparable from the resistance of the contact area to be measured, with which they are in series.
In a four-terminal measurement, the current used to make the measurement is injected using a second, separate pair of leads, so the contact resistance of the measurement probes and their leads is not included in the measurement.
The specific contact resistance (contact resistivity) is obtained by multiplying the measured contact resistance by the contact area.
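To make the unit bookkeeping concrete, here is a worked example with illustrative numbers: a contact measuring 0.5 Ω over a 10 μm × 10 μm pad has

$$\rho_c = R_c A = (0.5\ \Omega)\times(10^{-5}\ \mathrm{m})^2 = 5\times10^{-11}\ \Omega{\cdot}\mathrm{m}^2$$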
Experimental characterization
For experimental characterization, a distinction must be made between contact resistance evaluation in two-electrode systems (for example, diodes) and three-electrode systems (for example, transistors).
In two-electrode systems, specific contact resistivity is experimentally defined from the slope of the J–V curve at $V = 0$:

$$r_c = \left(\frac{\partial J}{\partial V}\right)^{-1}\bigg|_{V=0}$$

where $J$ is the current density, or current per area. The units of specific contact resistivity are therefore typically ohm-square metre, or Ω⋅m2. When the current is a linear function of the voltage, the device is said to have ohmic contacts.
Inductive and capacitive methods could be used in principle to measure an intrinsic impedance without the complication of contact resistance. In practice, direct current methods are more typically used to determine resistance.
Three-electrode systems such as transistors require more complicated methods for contact resistance approximation. The most common approach is the transmission line model (TLM), in which the total device resistance is plotted as a function of the channel length:

$$R_{tot} = R_c + R_{ch} = R_c + \frac{L}{W C \mu (V_{gs} - V_{ds})}$$

where $R_c$ and $R_{ch}$ are the contact and channel resistances, respectively, $L$ and $W$ are the channel length and width, $C$ is the gate insulator capacitance (per unit of area), $\mu$ is the carrier mobility, and $V_{gs}$ and $V_{ds}$ are the gate-source and drain-source voltages. The linear extrapolation of total resistance to zero channel length therefore yields the contact resistance. The slope of the linear function is related to the channel transconductance and can be used to estimate the "contact resistance-free" carrier mobility. The approximations used here (a linear potential drop across the channel region, a constant contact resistance, ...) sometimes lead to a channel-dependent contact resistance.
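A minimal sketch of the TLM extrapolation (the resistance and length values are hypothetical):

```python
import numpy as np

# Hypothetical total resistances (ohms) measured on otherwise identical
# transistors whose channel lengths differ (micrometres).
channel_length_um = np.array([5.0, 10.0, 20.0, 40.0])
total_resistance_ohm = np.array([410.0, 620.0, 1040.0, 1880.0])

# Fit R_tot = slope * L + R_c: the intercept at L = 0 estimates the
# contact resistance, the slope the channel resistance per unit length.
slope, r_contact = np.polyfit(channel_length_um, total_resistance_ohm, 1)
print(f"contact resistance ~ {r_contact:.0f} ohm")
print(f"channel resistance ~ {slope:.1f} ohm/um")
```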
Besides the TLM, the gated four-probe measurement and the modified time-of-flight (TOF) method have been proposed. Direct methods able to measure the potential drop on the injection electrode directly are Kelvin probe force microscopy (KFM) and electric-field-induced second harmonic generation.
In the semiconductor industry, cross-bridge Kelvin resistor (CBKR) structures are the most widely used test structures to characterize metal-semiconductor contacts in the planar devices of VLSI technology. In the measurement process, a current $I$ is forced between contacts 1 and 2 and the potential difference $V_{34}$ is measured between contacts 3 and 4. The contact resistance can then be calculated as $R_c = V_{34}/I$.
Mechanisms
For given physical and mechanical material properties, the parameters that govern the magnitude of electrical contact resistance (ECR) and its variation at an interface relate primarily to surface structure and applied load (see contact mechanics). Surfaces of metallic contacts generally exhibit an external layer of oxide material and adsorbed water molecules, which lead to capacitor-type junctions at weakly contacting asperities and resistor-type contacts at strongly contacting asperities, where sufficient pressure is applied for the asperities to penetrate the oxide layer and form metal-to-metal contact patches. If a contact patch is sufficiently small, with dimensions comparable to or smaller than the mean free path of electrons, the resistance at the patch can be described by the Sharvin mechanism, whereby electron transport is ballistic. Generally, over time, contact patches expand and the contact resistance at an interface relaxes, particularly at weakly contacting surfaces, through current-induced welding and dielectric breakdown. This process is also known as resistance creep. The coupling of surface chemistry, contact mechanics and charge transport mechanisms must be considered in the mechanistic evaluation of ECR phenomena.
Quantum limit
When a conductor has spatial dimensions close to the Fermi wavelength $\lambda_F = 2\pi/k_F$, where $k_F$ is the Fermi wavevector of the conducting material, Ohm's law no longer holds. These small devices are called quantum point contacts. Their conductance must be an integer multiple of the conductance quantum $G_0 = 2e^2/h$, where $e$ is the elementary charge and $h$ is the Planck constant. Quantum point contacts behave more like waveguides than the classical wires of everyday life and may be described by the Landauer scattering formalism. Point-contact tunneling is an important technique for characterizing superconductors.
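For reference, the conductance quantum evaluates to

$$G_0 = \frac{2e^2}{h} = \frac{2\,(1.602\times10^{-19}\ \mathrm{C})^2}{6.626\times10^{-34}\ \mathrm{J\,s}} \approx 7.75\times10^{-5}\ \mathrm{S} \approx (12.9\ \mathrm{k\Omega})^{-1}$$

so each fully open channel of a quantum point contact contributes a resistance of about 12.9 kΩ.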
Other forms of contact resistance
Measurements of thermal conductivity are also subject to contact resistance, with particular significance in heat transport through granular media. Similarly, a drop in hydrostatic pressure (analogous to electrical voltage) occurs when fluid flow transitions from one channel to another.
Significance
Bad contacts are the cause of failure or poor performance in a wide variety of electrical devices. For example, corroded jumper cable clamps can frustrate attempts to start a vehicle that has a low battery. Dirty or corroded contacts on a fuse or its holder can give the false impression that the fuse is blown. A sufficiently high contact resistance can cause substantial heating in a high current device. Unpredictable or noisy contacts are a major cause of the failure of electrical equipment.
See also
Contact cleaner
Wetting current
Fluxomics
https://en.wikipedia.org/wiki/Fluxomics

Fluxomics describes the various approaches that seek to determine the rates of metabolic reactions within a biological entity. While metabolomics can provide instantaneous information on the metabolites in a biological sample, metabolism is a dynamic process. The significance of fluxomics is that metabolic fluxes determine the cellular phenotype. It has the added advantage of being based on the metabolome, which has fewer components than the genome or proteome.
Fluxomics falls within the field of systems biology which developed with the appearance of high throughput technologies. Systems biology recognizes the complexity of biological systems and has the broader goal of explaining and predicting this complex behavior.
Metabolic flux
Metabolic flux refers to the rate of metabolite conversion in a metabolic network. For a given reaction, this rate is a function of both enzyme abundance and enzyme activity. Enzyme concentration is itself a function of transcriptional and translational regulation in addition to the stability of the protein. Enzyme activity is affected by the kinetic parameters of the enzyme, the substrate concentrations, the product concentrations, and the concentrations of effector molecules. The genomic and environmental effects on metabolic flux are what determine a healthy or diseased phenotype.
Fluxome
Similar to genome, transcriptome, proteome, and metabolome, the fluxome is defined as the complete set of metabolic fluxes in a cell. However, unlike the others the fluxome is a dynamic representation of the phenotype. This is due to the fluxome resulting from the interactions of the metabolome, genome, transcriptome, proteome, post-translational modifications and the environment.
Flux analysis technologies
Two important technologies are flux balance analysis (FBA) and 13C-fluxomics. In FBA, metabolic fluxes are estimated by first representing the reactions of a metabolic network in a numerical matrix containing the stoichiometric coefficients of each reaction. The stoichiometric coefficients constrain the system model, which is why FBA is only applicable to steady-state conditions. Additional constraints can be imposed; each constraint reduces the set of possible solutions. Once the constraints are in place, the system model is optimized. Flux-balance analysis resources include the BIGG database, the COBRA toolbox, and FASIMU.
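A minimal sketch of FBA as a linear program on a toy network (the network, bounds and objective are invented for illustration; real analyses use genome-scale models via tools such as the COBRA toolbox):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 imports A; v2 converts A -> B; v3 excretes A -> C
# (C is external and not balanced); v4 drains B into biomass.
# Rows of S are metabolites (A, B), columns are fluxes v1..v4.
# Steady state imposes S @ v = 0: production of each metabolite
# exactly balances its consumption.
S = np.array([
    [1, -1, -1,  0],   # A: made by v1, consumed by v2 and v3
    [0,  1,  0, -1],   # B: made by v2, consumed by v4
])

bounds = [(0, 10)] * 4        # capacity constraints on every flux
c = np.array([0, 0, 0, -1])   # maximize v4 (linprog minimizes, so negate)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)  # expected: v1 = v2 = v4 = 10, v3 = 0
```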
In 13C-fluxomics, metabolic precursors are enriched with 13C before being introduced to the system. Using a technique such as mass spectrometry or nuclear magnetic resonance spectroscopy, the level of incorporation of 13C into metabolites can be measured, and with the stoichiometry the metabolic fluxes can be estimated.
Stoichiometric and kinetic paradigms
A number of different methods exist, broadly divided into stoichiometric and kinetic paradigms.
Within the stoichiometric paradigm, a number of relatively simple linear algebra methods use restricted metabolic networks or genome-scale metabolic network models to perform flux balance analysis and the array of techniques derived from it. These linear equations are useful for steady-state conditions; dynamic methods are not yet usable. On the more experimental side, metabolic flux analysis allows the empirical estimation of reaction rates by stable isotope labelling.
Within the kinetic paradigm, kinetic modelling of metabolic networks can be purely theoretical, exploring the potential space of dynamic metabolic fluxes under perturbations away from steady state using formalisms such as biochemical systems theory. Such explorations are most informative when accompanied by empirical measurements of the system under study following actual perturbations, as is the case in metabolic control analysis.
Constraint based reconstruction and analysis
Collected methods in fluxomics have been described as "COBRA" methods, for constraint based reconstruction and analysis. A number of software tools and environments have been created for this purpose.
Although it can only be measured indirectly, metabolic flux is the critical link between genes, proteins and the observable phenotype. This is due to the fluxome integrating mass-energy, information, and signaling networks. Fluxomics has the potential to provide a quantifiable representation of the effect the environment has on the phenotype because the fluxome describes the genome environment interaction. In the fields of metabolic engineering and systems biology, fluxomic methods are considered a key enabling technology due to their unique position in the ontology of biological processes, allowing genome scale stoichiometric models to act as a framework for the integration of diverse biological datasets.
Examples of use in research
One potential application of fluxomic techniques is in drug design. Rama et al. used FBA to study the mycolic acid pathway in Mycobacterium tuberculosis. Mycolic acids are known to be important to M. tuberculosis survival and as such its pathway has been studied extensively. This allowed the construction of a model of the pathway and for FBA to analyze it. The results of this found multiple possible drug targets for future investigation.
FBA was used to analyze the metabolic networks of multidrug-resistant Staphylococcus aureus. By performing in silico single and double gene deletions many enzymes essential to growth were identified.
International Review of Cell and Molecular Biology
https://en.wikipedia.org/wiki/International%20Review%20of%20Cell%20and%20Molecular%20Biology

The International Review of Cell and Molecular Biology is a scientific book series that publishes articles on plant and animal cell biology. Until 2008 it was known as the International Review of Cytology.
Retinoic acid receptor
https://en.wikipedia.org/wiki/Retinoic%20acid%20receptor

The retinoic acid receptor (RAR) is a type of nuclear receptor which can also act as a ligand-activated transcription factor and is activated by both all-trans retinoic acid and 9-cis retinoic acid, retinoid active derivatives of vitamin A. RARs are typically found within the nucleus. There are three retinoic acid receptors: RAR-alpha, RAR-beta, and RAR-gamma, encoded by the RARA, RARB, and RARG genes, respectively. Within each RAR subtype there are various isoforms differing in their N-terminal region A. Multiple splice variants have been identified in human RARs: four for RAR-alpha, five for RAR-beta, and two for RAR-gamma. As with other type II nuclear receptors, RAR heterodimerizes with RXR; in the absence of ligand, the RAR/RXR dimer binds to hormone response elements known as retinoic acid response elements (RAREs) complexed with corepressor protein. Binding of agonist ligands to RAR results in dissociation of the corepressor and recruitment of coactivator protein, which in turn promotes transcription of the downstream target gene into mRNA and eventually protein. In addition, the expression of RAR genes is under epigenetic regulation by promoter methylation. Both the length and magnitude of the retinoid response depend on the degradation of RARs and RXRs through the ubiquitin-proteasome pathway. This degradation can either lead to elongation of DNA transcription, through disruption of the initiation complex, or end the response to facilitate further transcriptional programs. RARs also exhibit many retinoid-independent effects, as they bind to and regulate other nuclear receptor pathways, such as the estrogen receptor.
RARs play a crucial role in embryonic development. Mouse knockout studies revealed that knocking out RARs could fully replicate the spectrum of defects associated with fetal vitamin A deficiency syndrome, unveiling additional abnormalities beyond previously known vitamin A functions. Notably, double RAR mutants exhibited the most severe defects, including ocular and cardiovascular defects, indicating some level of redundancy among RARs. RXR/RAR heterodimers transmit retinoid signals in diverse ways to control the expression of networks of retinoic acid (RA) target genes. This process plays a crucial role in shaping both axial and limb patterning during early embryonic development, as well as influencing various aspects of organ formation at later stages.
See also
Retinoid receptor
Retinoid X receptor
Farnesoid X receptor
https://en.wikipedia.org/wiki/Farnesoid%20X%20receptor

The bile acid receptor (BAR), also known as farnesoid X receptor (FXR) or NR1H4 (nuclear receptor subfamily 1, group H, member 4), is a nuclear receptor that is encoded by the NR1H4 gene in humans.
Function
FXR is expressed at high levels in the liver and intestine. Chenodeoxycholic acid and other bile acids are natural ligands for FXR. Similar to other nuclear receptors, when activated, FXR translocates to the cell nucleus, forms a dimer (in this case a heterodimer with RXR) and binds to hormone response elements on DNA, which up- or down-regulates the expression of certain genes.
One of the primary functions of FXR activation is the suppression of cholesterol 7 alpha-hydroxylase (CYP7A1), the rate-limiting enzyme in bile acid synthesis from cholesterol. FXR does not directly bind to the CYP7A1 promoter. Rather, FXR induces expression of small heterodimer partner (SHP), which then functions to inhibit transcription of the CYP7A1 gene. FXR likewise stimulates the synthesis of fibroblast growth factor 19, which also inhibits expression of CYP7A1 and sterol 12-alpha-hydroxylase (CYP8B1) via fibroblast growth factor receptor 4. In this way, a negative feedback pathway is established in which synthesis of bile acids is inhibited when cellular levels are already high.
The absence of FXR in an FXR-/- mouse model led to increased bile acids in the liver, and the spontaneous development of liver tumors. Reducing the pool of bile acids in the FXR-/- mice by feeding the bile acid sequestering resin cholestyramine reduced the number and size of the malignant lesions.
FXR has also been found to be important in regulation of hepatic triglyceride levels. Specifically, FXR activation suppresses lipogenesis and promotes free fatty acid oxidation by PPARα activation. Studies have also shown the FXR to regulate the expression and activity of epithelial transport proteins involved in fluid homeostasis in the intestine, such as the cystic fibrosis transmembrane conductance regulator (CFTR).
Activation of FXR in diabetic mice reduces plasma glucose and improves insulin sensitivity, whereas inactivation of FXR has the opposite effect.
Interactions
Farnesoid X receptor has been shown to interact with:
Peroxisome proliferator-activated receptor gamma coactivator 1-alpha and
Retinoid X receptor alpha.
Ligands
A number of ligands for FXR are known, of both natural and synthetic origin.
Agonists
Cafestol
Chenodeoxycholic acid
Fexaramine
GW 4064
Ivermectin
Obeticholic acid
Tropifexor
Antagonists
Guggulsterone
Further reading
Chamoli, M., Rane, A., Foulger, A., et al., "A drug-like molecule engages nuclear hormone receptor DAF-12/FXR to regulate mitophagy and extend lifespan", Nat Aging (2023)
Vesicular transport adaptor protein
https://en.wikipedia.org/wiki/Vesicular%20transport%20adaptor%20protein

Vesicular transport adaptor proteins are proteins involved in forming complexes that function in the trafficking of molecules from one subcellular location to another. These complexes concentrate the correct cargo molecules in vesicles that bud or extrude off of one organelle and travel to another location, where the cargo is delivered. While some of the details of how these adaptor proteins achieve their trafficking specificity have been worked out, there is still much to be learned.
There are several human disorders associated with defects in components of these complexes including Alzheimer's and Parkinson's diseases.
The proteins
Most of the adaptor proteins are heterotetramers. In the AP complexes, there are two large proteins (~100 kD) and two smaller proteins. One of the large proteins is termed β (beta): β1 in the AP-1 complex, β2 in the AP-2 complex, and so on. The other large protein has a different designation in each complex: in AP-1 it is named γ (gamma), AP-2 has α (alpha), AP-3 has δ (delta), AP-4 has ε (epsilon) and AP-5 has ζ (zeta). The two smaller proteins are a medium subunit named μ (mu, ~50 kD) and a small subunit σ (sigma, ~20 kD), numbered 1 through 5 corresponding to the five AP complexes. COPI ("cop one"), a coatomer, and TSET ("T-set"), a membrane trafficking complex, contain heterotetramers similar to those of the AP complexes.
Retromer is not closely related, has been reviewed, and its proteins will not be described here. GGAs (Golgi-localising, Gamma-adaptin ear domain homology, ARF-binding proteins) are a group of related proteins (three in humans) that act as monomeric clathrin adaptor proteins in various important membrane vesicle trafficking pathways, but they are not similar to any of the AP complexes and will not be discussed in detail in this article. Stonins are also monomers, similar in some regards to GGAs, and will likewise not be discussed in detail in this article.
PTB (phosphotyrosine-binding) domains are found in adaptor proteins including NUMB, DAB1 and DAB2. Epsin and AP180, which contain ENTH and ANTH domains respectively, are other adaptor proteins that have been reviewed.
Another important transport complex is COPII, a heterohexamer not closely related to the AP/TSET complexes. The individual proteins of the COPII complex are called SEC proteins, because they are encoded by genes identified in secretory mutants of yeast. One especially interesting aspect of COPII is that it can form both typical spherical vesicles and tubules, the latter to transport large molecules such as collagen precursors, which cannot fit inside typical spherical vesicles. COPII structure has been discussed in an open-access article and will not be a focus of this article. These are examples of the much larger set of cargo adaptors.
Evolutionary considerations
The most recent common ancestor (MRCA) of the eukaryotes must have had a mechanism for trafficking molecules between its endomembranes and organelles, and the likely identity of the adaptor complex involved has been reported. The MRCA is believed to have had three trafficking proteins that formed a heterotrimer. That heterotrimer then "dimerized" to form a six-membered complex, whose individual components subsequently changed into the current complexes, with AP-1 and AP-2 being the last to diverge.
In addition, one component of TSET, a muniscin also known as the TCUP protein, appears to have evolved into part of the proteins of opisthokonts (animals and fungi). Parts of the AP complexes have evolved into parts of the GGA and stonin proteins. There is evidence indicating that parts of the nuclear pore complex and COPII may be evolutionarily related.
Formation of transport vesicles
The best characterized type of vesicle is the clathrin-coated vesicle (CCV). COPII vesicles form analogously at the endoplasmic reticulum and are transported to the Golgi body. The involvement of the COPI heterotetramer is similar to that of the AP/clathrin system, but the COPI coat is not closely related to the coats of either CCVs or COPII vesicles. AP-5 is associated with two proteins, SPG11 and SPG15, which have some structural similarity to clathrin and may form the coat around the AP-5 complex, but the ultrastructure of that coat is not known. The coat of AP-4 is unknown.
An almost universal feature of coat assembly is the recruitment of the various adaptor complexes to the "donor" membrane by the protein Arf1. The one known exception is AP-2, which is recruited by a particular plasma membrane lipid.
Another almost universal feature of coat assembly is that the adaptors are recruited first, and they then recruit the coats. The exception is COPI, in which the 7 proteins are recruited to the membrane as a heptamer.
The production of a coated vesicle is not instantaneous, and a considerable fraction of the maturation time is spent making "abortive" or "futile" interactions, until enough interactions occur simultaneously to allow the structure to continue to develop.
The last step in the formation of a transport vesicle is "pinching off" from the donor membrane. This requires energy, but even among CCVs, the best-studied case, not all require dynamin: AP-2 CCVs use dynamin for scission, whereas AP-1 and AP-3 CCVs do not.
Selection of cargo molecules
Which cargo molecules are incorporated into a particular type of vesicle relies on specific interactions. Some of these interactions are directly with AP complexes and some are indirect, via "alternative adaptors". For example, membrane proteins can interact directly, while proteins that are soluble in the lumen of the donor organelle bind indirectly to AP complexes, via membrane proteins that traverse the membrane and bind the desired cargo molecule at their lumenal end. Molecules that should not be included in the vesicle appear to be excluded by "molecular crowding".
The "signals" or amino acid "motifs" in the cargo proteins that interact with the adaptor proteins can be very short. For example, one well-known example is the dileucine motif, in which a leucine amino acid (aa) residue is followed immediately by another leucine or isoleucine residue. An even simpler example is the tyrosine based signal, which is YxxØ (a tyrosine residue separated by 2 aa residues from another bulky, hydrophobic aa residue). The accompanying figure shows how a small part of a protein can interact specifically with another protein, so these short signalling motifs should not be surprising. The sort of sequence comparisons used, in part, to define these motifs.
In some cases, post-translational modifications, such as phosphorylations (shown in the figure) are important for cargo recognition.
Diseases
Adaptor diseases have been reviewed.
AP-2/CCVs are involved in autosomal recessive hypercholesterolemia through the associated low-density lipoprotein receptor adapter protein 1.
Retromer is involved in recycling components of the plasma membrane, a function of particular importance at synapses. There are at least three ways in which retromer dysfunction can contribute to brain disorders, including Alzheimer and Parkinson diseases.
AP-5 is the most recently described complex, and one reason supporting the idea that it is an authentic adaptor complex is that it is associated with hereditary spastic paraplegia, as is AP-4. AP-1 is linked to MEDNIK syndrome. AP-3 is linked to Hermansky–Pudlak syndrome. COPI is linked to an autoimmune disease. COPII is linked to cranio-lenticulo-sutural dysplasia.
One of the GGA proteins may be involved in Alzheimer's disease.
See also
Exomer
List of adaptins
SNAREs
Molecular evolution
Biogenesis of lysosome-related organelles complex 1
Adsorption
https://en.wikipedia.org/wiki/Adsorption

Adsorption is the adhesion of atoms, ions or molecules from a gas, liquid or dissolved solid to a surface. This process creates a film of the adsorbate on the surface of the adsorbent. It differs from absorption, in which a fluid (the absorbate) is dissolved by or permeates a liquid or solid (the absorbent). Adsorption often precedes absorption, which involves the transfer of the absorbate into the volume of the absorbent material; adsorption, by contrast, is distinctly a surface phenomenon, in which the adsorbate does not penetrate through the material surface into the bulk of the adsorbent. The term sorption encompasses both adsorption and absorption, and desorption is the reverse of sorption.
Like surface tension, adsorption is a consequence of surface energy. In a bulk material, all the bonding requirements (be they ionic, covalent or metallic) of the constituent atoms of the material are fulfilled by other atoms in the material. However, atoms on the surface of the adsorbent are not wholly surrounded by other adsorbent atoms and therefore can attract adsorbates. The exact nature of the bonding depends on the details of the species involved, but the adsorption process is generally classified as physisorption (characteristic of weak van der Waals forces) or chemisorption (characteristic of covalent bonding). It may also occur due to electrostatic attraction. The nature of the adsorption can affect the structure of the adsorbed species. For example, polymer physisorption from solution can result in squashed structures on a surface.
Adsorption is present in many natural, physical, biological and chemical systems and is widely used in industrial applications such as heterogeneous catalysts, activated charcoal, capturing and using waste heat to provide cold water for air conditioning and other process requirements (adsorption chillers), synthetic resins, increasing storage capacity of carbide-derived carbons and water purification. Adsorption, ion exchange and chromatography are sorption processes in which certain adsorbates are selectively transferred from the fluid phase to the surface of insoluble, rigid particles suspended in a vessel or packed in a column. Pharmaceutical industry applications, which use adsorption as a means to prolong neurological exposure to specific drugs or parts thereof, are lesser known.
The word "adsorption" was coined in 1881 by German physicist Heinrich Kayser (1853–1940).
Isotherms
The adsorption of gases and solutes is usually described through isotherms, that is, the amount of adsorbate on the adsorbent as a function of its pressure (if gas) or concentration (for liquid phase solutes) at constant temperature. The quantity adsorbed is nearly always normalized by the mass of the adsorbent to allow comparison of different materials. To date, 15 different isotherm models have been developed.
Freundlich
The first mathematical fit to an isotherm was published by Freundlich and Kuster (1906) and is a purely empirical formula for gaseous adsorbates:

$$\frac{x}{m} = kP^{1/n}$$

where $x$ is the mass of adsorbate adsorbed, $m$ is the mass of the adsorbent, $P$ is the pressure of the adsorbate (this can be changed to concentration if investigating solution rather than gas), and $k$ and $n$ are empirical constants for each adsorbent–adsorbate pair at a given temperature. The function is not adequate at very high pressure because in reality $x/m$ has an asymptotic maximum as pressure increases without bound. As the temperature increases, the constants $k$ and $n$ change to reflect the empirical observation that the quantity adsorbed rises more slowly and that higher pressures are required to saturate the surface.
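A minimal sketch of fitting the Freundlich form to isotherm data by nonlinear least squares (the data points are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(p, k, n):
    """Freundlich isotherm: adsorbed amount per gram of adsorbent = k * p**(1/n)."""
    return k * p ** (1.0 / n)

# Hypothetical pressure (bar) vs adsorbed amount (mmol/g) data.
p = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
q = np.array([0.35, 0.73, 1.00, 1.37, 2.06, 2.82])

(k, n), _ = curve_fit(freundlich, p, q, p0=(1.0, 2.0))
print(f"k = {k:.2f}, n = {n:.2f}")  # constants specific to this pair and temperature
```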
Langmuir
Irving Langmuir was the first to derive a scientifically based adsorption isotherm in 1918. The model applies to gases adsorbed on solid surfaces. It is a semi-empirical isotherm with a kinetic basis and was derived based on statistical thermodynamics. It is the most common isotherm equation to use due to its simplicity and its ability to fit a variety of adsorption data. It is based on four assumptions:
All of the adsorption sites are equivalent, and each site can only accommodate one molecule.
The surface is energetically homogeneous, and adsorbed molecules do not interact.
There are no phase transitions.
At the maximum adsorption, only a monolayer is formed. Adsorption only occurs on localized sites on the surface, not with other adsorbates.
These four assumptions are seldom all true: there are always imperfections on the surface, adsorbed molecules are not necessarily inert, and the mechanism is clearly not the same for the first molecules to adsorb to a surface as for the last. The fourth condition is the most troublesome, as frequently more molecules will adsorb to the monolayer; this problem is addressed by the BET isotherm for relatively flat (non-microporous) surfaces. The Langmuir isotherm is nonetheless the first choice for most models of adsorption and has many applications in surface kinetics (usually called Langmuir–Hinshelwood kinetics) and thermodynamics.
Langmuir suggested that adsorption takes place through this mechanism: $A_{(g)} + S \rightleftharpoons AS$, where A is a gas molecule and S is an adsorption site. The direct and inverse rate constants are $k$ and $k_{-1}$. If we define surface coverage $\theta$ as the fraction of the adsorption sites occupied, then at equilibrium we have:

$$K = \frac{k}{k_{-1}} = \frac{\theta}{(1-\theta)P}$$

or

$$\theta = \frac{KP}{1+KP}$$

where $P$ is the partial pressure of the gas or the molar concentration of the solution.

For very low pressures $\theta \approx KP$, and for high pressures $\theta \approx 1$.

The value of $\theta$ is difficult to measure experimentally; usually, the adsorbate is a gas and the quantity adsorbed is given in moles, grams, or gas volumes at standard temperature and pressure (STP) per gram of adsorbent. If we call $v_{mon}$ the STP volume of adsorbate required to form a monolayer on the adsorbent (per gram of adsorbent), then $\theta = v/v_{mon}$, and we obtain an expression for a straight line:

$$\frac{1}{v} = \frac{1}{Kv_{mon}}\frac{1}{P} + \frac{1}{v_{mon}}$$

Through its slope and y-intercept we can obtain $v_{mon}$ and $K$, which are constants for each adsorbent–adsorbate pair at a given temperature. $v_{mon}$ is related to the number of adsorption sites through the ideal gas law. If we assume that the number of sites is just the whole area of the solid divided by the cross section of the adsorbate molecules, we can easily calculate the surface area of the adsorbent.
The surface area of an adsorbent depends on its structure: the more pores it has, the greater the area, which has a big influence on reactions on surfaces.
If more than one gas adsorbs on the surface, we define $\theta_E$ as the fraction of empty sites, and we have:

$$\theta_E = \frac{1}{1+\sum_{i=1}^{n} K_i P_i}$$

Also, we can define $\theta_j$ as the fraction of the sites occupied by the j-th gas:

$$\theta_j = \frac{K_j P_j}{1+\sum_{i=1}^{n} K_i P_i}$$

where $i$ runs over all of the gases that adsorb.
Note:
1) To choose between the Langmuir and Freundlich equations, the enthalpies of adsorption must be investigated. While the Langmuir model assumes that the energy of adsorption remains constant with surface occupancy, the Freundlich equation is derived with the assumption that the heat of adsorption continually decreases as the binding sites are occupied. Choosing the model based purely on the best fit to the data is a common misconception.
2) The use of the linearized form of the Langmuir model is no longer common practice. Advances in computational power allowed for nonlinear regression to be performed quickly and with higher confidence since no data transformation is required.
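In line with the note above, a minimal sketch of a direct nonlinear Langmuir fit, followed by a surface-area estimate from v_mon; the data are invented, and the N2 cross-sectional area of ~0.162 nm2 per molecule is a commonly used assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, v_mon, K):
    """Langmuir isotherm: adsorbed STP volume v as a function of pressure p."""
    return v_mon * K * p / (1.0 + K * p)

# Hypothetical data: pressure (bar) vs adsorbed gas volume at STP (cm^3/g).
p = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
v = np.array([8.5, 15.0, 24.4, 39.0, 48.8, 55.7, 60.9])

(v_mon, K), _ = curve_fit(langmuir, p, v, p0=(60.0, 1.0))

# Surface area from v_mon via the ideal gas law, assuming an N2 adsorbate
# with a cross-sectional area of ~0.162 nm^2 per molecule.
N_A, v_molar_stp = 6.022e23, 22414.0        # molecules/mol, cm^3/mol at STP
n_sites = v_mon / v_molar_stp * N_A         # molecules in a full monolayer
area_m2_per_g = n_sites * 0.162e-18
print(f"v_mon = {v_mon:.1f} cm^3/g, K = {K:.2f} /bar, area = {area_m2_per_g:.0f} m^2/g")
```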
BET
Often molecules do form multilayers, that is, some are adsorbed on already adsorbed molecules, and the Langmuir isotherm is not valid. In 1938 Stephen Brunauer, Paul Emmett, and Edward Teller developed a model isotherm that takes that possibility into account. Their theory is called BET theory, after the initials in their last names. They modified Langmuir's mechanism as follows:
A(g) + S ⇌ AS,
A(g) + AS ⇌ A2S,
A(g) + A2S ⇌ A3S and so on.
The derivation of the formula is more complicated than Langmuir's (see links for complete derivation). We obtain:

$$\frac{v}{v_{mon}} = \frac{cx}{(1-x)\left[1+(c-1)x\right]}$$

where $x$ is the pressure divided by the vapor pressure of the adsorbate at that temperature (usually denoted $P/P_0$), $v$ is the STP volume of adsorbed adsorbate, $v_{mon}$ is the STP volume of the amount of adsorbate required to form a monolayer, and $c$ is the equilibrium constant $K$ used in the Langmuir isotherm multiplied by the vapor pressure of the adsorbate. The key assumption in deriving the BET equation is that the successive heats of adsorption for all layers except the first are equal to the heat of condensation of the adsorbate.
The Langmuir isotherm is usually better for chemisorption, and the BET isotherm works better for physisorption for non-microporous surfaces.
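A minimal sketch of extracting v_mon and c from the linearized BET plot, x/(v(1−x)) versus x, over the commonly recommended relative-pressure range x ≈ 0.05–0.35 (the isotherm data are invented):

```python
import numpy as np

# Hypothetical N2 isotherm: relative pressure x = p/p0 and adsorbed
# STP volume v (cm^3/g) within the BET-valid range.
x = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v = np.array([31.0, 35.7, 39.0, 42.1, 45.3, 48.9])

# Linearized BET: x/(v*(1-x)) = 1/(v_mon*c) + ((c-1)/(v_mon*c)) * x,
# so slope + intercept = 1/v_mon and c = 1 + slope/intercept.
y = x / (v * (1.0 - x))
slope, intercept = np.polyfit(x, y, 1)
v_mon = 1.0 / (slope + intercept)
c = 1.0 + slope / intercept
print(f"v_mon = {v_mon:.1f} cm^3/g, c = {c:.0f}")  # recovers ~35 and ~100 here
```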
Kisliuk
In other instances, molecular interactions between gas molecules previously adsorbed on a solid surface form significant interactions with gas molecules in the gaseous phases. Hence, adsorption of gas molecules to the surface is more likely to occur around gas molecules that are already present on the solid surface, rendering the Langmuir adsorption isotherm ineffective for the purposes of modelling. This effect was studied in a system where nitrogen was the adsorbate and tungsten was the adsorbent by Paul Kisliuk (1922–2008) in 1957. To compensate for the increased probability of adsorption occurring around molecules present on the substrate surface, Kisliuk developed the precursor state theory, whereby molecules would enter a precursor state at the interface between the solid adsorbent and adsorbate in the gaseous phase. From here, adsorbate molecules would either adsorb to the adsorbent or desorb into the gaseous phase. The probability of adsorption occurring from the precursor state is dependent on the adsorbate's proximity to other adsorbate molecules that have already been adsorbed. If the adsorbate molecule in the precursor state is in close proximity to an adsorbate molecule that has already formed on the surface, it has a sticking probability reflected by the size of the SE constant and will either be adsorbed from the precursor state at a rate of kEC or will desorb into the gaseous phase at a rate of kES. If an adsorbate molecule enters the precursor state at a location that is remote from any other previously adsorbed adsorbate molecules, the sticking probability is reflected by the size of the SD constant.
These factors were combined into a single constant termed the "sticking coefficient", kE.
As SD is dictated by factors that are taken into account by the Langmuir model, SD can be assumed to be the adsorption rate constant. However, the rate constant for the Kisliuk model (R′) is different from that of the Langmuir model, as R′ is used to represent the impact of diffusion on monolayer formation and is proportional to the square root of the system's diffusion coefficient. The Kisliuk adsorption isotherm can be written as follows, where θ(t) is the fractional coverage of the adsorbent with adsorbate and t is immersion time:

$$\frac{d\theta(t)}{dt} = R'(1-\theta)(1+k_E\theta)$$

Solving for θ(t) yields:

$$\theta(t) = \frac{1-e^{-R'(1+k_E)t}}{1+k_E\,e^{-R'(1+k_E)t}}$$
Adsorption enthalpy
Adsorption constants are equilibrium constants, and therefore they obey the Van 't Hoff equation:

$$\left(\frac{\partial \ln K}{\partial \frac{1}{T}}\right)_\theta = -\frac{\Delta H}{R}$$

As can be seen in the formula, the variation of K must be isosteric, that is, at constant coverage.
If we start from the BET isotherm and assume that the entropy change is the same for liquefaction and adsorption, we obtain

$$\Delta H_{ads} = \Delta H_{liq} - RT\ln c$$

that is to say, adsorption is more exothermic than liquefaction.
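Integrating the Van 't Hoff relation at constant coverage between two temperatures gives the adsorption enthalpy from the two pressures required to reach the same coverage; this is the standard isosteric analysis. A minimal sketch with invented numbers:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical equilibrium pressures (bar) giving the SAME fractional
# coverage at two temperatures (the isosteric condition).
T1, p1 = 273.0, 0.10
T2, p2 = 298.0, 0.25

# At constant coverage, d(ln p)/d(1/T) = dH_ads/R, so between two points:
dH_ads = R * np.log(p2 / p1) / (1.0 / T2 - 1.0 / T1)
print(f"adsorption enthalpy ~ {dH_ads / 1000:.1f} kJ/mol")  # negative: exothermic
```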
Single-molecule explanation
The adsorption of ensemble molecules on a surface or interface can be divided into two processes: adsorption and desorption. If the adsorption rate exceeds the desorption rate, the molecules accumulate over time, giving the adsorption curve over time; if the desorption rate is larger, the number of molecules on the surface decreases over time. The adsorption rate depends on the temperature, the diffusion rate of the solute (related to the mean free path for a pure gas), and the energy barrier between the molecule and the surface. The diffusion and the key elements of the adsorption rate can be calculated using Fick's laws of diffusion and the Einstein relation (kinetic theory).
Under ideal conditions, when there is no energy barrier and all molecules that diffuse and collide with the surface are adsorbed, the number of molecules adsorbed on a region of area $A$ of an effectively infinite surface can be obtained by directly integrating Fick's second law:

$$N = 2AC\sqrt{\frac{Dt}{\pi}}$$

where $N$ is the number of adsorbed molecules, $A$ is the surface area (unit m2), $C$ is the number concentration of the molecule in the bulk solution (unit #/m3), $D$ is the diffusion constant (unit m2/s), and $t$ is time (unit s). Further simulations and analysis of this equation show that the square-root dependence on time originates from the depletion of the concentration near the surface under ideal adsorption conditions. Also, this equation only works for the beginning of adsorption, when a well-behaved concentration gradient forms near the surface; corrections for the reduction of the adsorption area and the slowing of the concentration-gradient evolution have to be considered over longer times.
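A worked numerical sketch of this ideal upper bound (all parameter values are invented for illustration):

```python
import numpy as np

def molecules_adsorbed(A, C, D, t):
    """Ideal diffusion-limited adsorption: N = 2*A*C*sqrt(D*t/pi).

    A in m^2, C in molecules/m^3, D in m^2/s, t in s.
    """
    return 2.0 * A * C * np.sqrt(D * t / np.pi)

# Hypothetical case: a 1 uM solution (~6.0e20 molecules/m^3) of a small
# protein with D ~ 1e-10 m^2/s, adsorbing onto a 1 mm^2 surface for 10 s.
N = molecules_adsorbed(A=1e-6, C=6.0e20, D=1e-10, t=10.0)
print(f"~{N:.2e} molecules in 10 s (ideal upper bound)")
```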
Under real experimental conditions, the flow and the small adsorption area always make the adsorption rate faster than what this equation predicts, and the energy barrier will either accelerate this rate through surface attraction or slow it down through surface repulsion. Thus, the prediction from this equation is often a few to several orders of magnitude away from the experimental results. Under special cases, such as a very small adsorption area on a large surface, and under chemical equilibrium, when there is no concentration gradient near the surface, this equation becomes useful for predicting the adsorption rate, although special care is needed in choosing a specific value of the diffusion constant for a particular measurement.
The desorption of a molecule from the surface depends on the binding energy of the molecule to the surface and the temperature. The typical overall adsorption rate is thus often a combined result of the adsorption and desorption.
Quantum mechanical – thermodynamic modelling for surface area and porosity
Since 1980 two theories have been worked on to explain adsorption and obtain equations that work: the chi hypothesis (a quantum mechanical derivation) and excess surface work (ESW). Both of these theories yield the same equation for flat surfaces:

$$\theta = (\chi - \chi_c)\,U(\chi - \chi_c)$$

where U is the unit step function and

$$\chi := -\ln\left(-\ln\frac{P}{P_{vap}}\right), \qquad \theta := \frac{n_{ads}}{n_m}$$

Here "ads" stands for "adsorbed", "m" stands for "monolayer equivalence" and "vap" refers to the vapor pressure of the liquid adsorptive at the same temperature as the solid sample. The unit step function creates the definition of the molar energy of adsorption for the first adsorbed molecule through the threshold value χc:

$$E_a = -RT\,e^{-\chi_c}$$

The plot of the amount adsorbed versus χ is referred to as the chi plot. For flat surfaces, the slope of the chi plot yields the surface area. Empirically, this plot was noticed as being a very good fit to the isotherm by Michael Polanyi and also by Jan Hendrik de Boer and Cornelis Zwikker, but it was not pursued, owing to criticism in the former case by Albert Einstein and in the latter case by Brunauer. This flat-surface equation may be used as a "standard curve" in the normal tradition of comparison curves, with the exception that the early portion of the porous sample's plot of nads versus χ acts as a self-standard. Ultramicroporous, microporous and mesoporous conditions may be analyzed using this technique. Typical standard deviations for full isotherm fits, including porous samples, are less than 2%.
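A minimal sketch of the chi transform and the linear fit whose slope is proportional to the monolayer equivalence, and hence the surface area (the isotherm data are invented, and the transform follows the reconstruction above):

```python
import numpy as np

# Hypothetical isotherm on a flat, nonporous sample: relative pressure
# p/p_vap and adsorbed amount (mmol/g).
p_rel = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.50])
n_ads = np.array([0.45, 0.62, 0.78, 0.98, 1.20, 1.42])

chi = -np.log(-np.log(p_rel))       # chi transform of the pressure axis
slope, intercept = np.polyfit(chi, n_ads, 1)
chi_c = -intercept / slope          # threshold where adsorption "switches on"
print(f"slope = {slope:.3f} mmol/g per chi unit, chi_c = {chi_c:.2f}")
```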
Notice that in this description of physical adsorption, the entropy of adsorption is consistent with the Dubinin thermodynamic criterion, that is the entropy of adsorption from the liquid state to the adsorbed state is approximately zero.
Adsorbents
Characteristics and general requirements
Adsorbents are used usually in the form of spherical pellets, rods, moldings, or monoliths with a hydrodynamic radius between 0.25 and 5 mm. They must have high abrasion resistance, high thermal stability and small pore diameters, which results in higher exposed surface area and hence high capacity for adsorption. The adsorbents must also have a distinct pore structure that enables fast transport of the gaseous vapors.
Most industrial adsorbents fall into one of three classes:
Oxygen-containing compounds – Are typically hydrophilic and polar, including materials such as silica gel, limestone (calcium carbonate) and zeolites.
Carbon-based compounds – Are typically hydrophobic and non-polar, including materials such as activated carbon and graphite.
Polymer-based compounds – Are polar or non-polar, depending on the functional groups in the polymer matrix.
Silica gel
Silica gel is a chemically inert, non-toxic, polar and dimensionally stable (below about 400 °C) amorphous form of SiO2. It is prepared by the reaction between sodium silicate and acetic acid, which is followed by a series of after-treatment processes such as aging, pickling, etc. These after-treatment methods result in various pore size distributions.
Silica is used for drying of process air (e.g. oxygen, natural gas) and adsorption of heavy (polar) hydrocarbons from natural gas.
Zeolites
Zeolites are natural or synthetic crystalline aluminosilicates, which have a repeating pore network and release water at high temperature. Zeolites are polar in nature.
They are manufactured by hydrothermal synthesis of sodium aluminosilicate or another silica source in an autoclave followed by ion exchange with certain cations (Na+, Li+, Ca2+, K+, NH4+). The channel diameter of zeolite cages usually ranges from 2 to 9 Å. The ion exchange process is followed by drying of the crystals, which can be pelletized with a binder to form macroporous pellets.
Zeolites are applied in drying of process air, CO2 removal from natural gas, CO removal from reforming gas, air separation, catalytic cracking, and catalytic synthesis and reforming.
Non-polar (siliceous) zeolites are synthesized from aluminum-free silica sources or by dealumination of aluminum-containing zeolites. The dealumination process is done by treating the zeolite with steam at elevated temperatures, typically greater than 500 °C. This high-temperature heat treatment breaks the aluminum–oxygen bonds, and the aluminum atom is expelled from the zeolite framework.
Activated carbon
The term "adsorption" itself was coined by Heinrich Kayser in 1881 in the context of uptake of gases by carbons.
Activated carbon is a highly porous, amorphous solid consisting of microcrystallites with a graphite lattice, usually prepared in small pellets or a powder. It is non-polar and cheap. One of its main drawbacks is that it reacts with oxygen at moderate temperatures (over 300 °C).
Activated carbon can be manufactured from carbonaceous material, including coal (bituminous, subbituminous, and lignite), peat, wood, or nutshells (e.g., coconut). The manufacturing process consists of two phases, carbonization and activation. The carbonization process includes drying and then heating to separate by-products, including tars and other hydrocarbons, from the raw material, as well as to drive off any gases generated. The process is completed by heating the material above 400 °C in an oxygen-free atmosphere that cannot support combustion. The carbonized particles are then "activated" by exposing them to an oxidizing agent, usually steam or carbon dioxide at high temperature. This agent burns off the pore-blocking structures created during the carbonization phase, so that the particles develop a porous, three-dimensional graphite lattice structure. The size of the pores developed during activation is a function of the time the particles spend in this stage: longer exposure times result in larger pore sizes. The most popular aqueous-phase carbons are bituminous-based because of their hardness, abrasion resistance, pore size distribution, and low cost, but their effectiveness needs to be tested in each application to determine the optimal product.
Activated carbon is used for adsorption of organic substances and non-polar adsorbates, and it is also usually used for waste gas (and waste water) treatment. It is the most widely used adsorbent, since most of its chemical (e.g. surface groups) and physical properties (e.g. pore size distribution and surface area) can be tuned according to what is needed. Its usefulness also derives from its large micropore (and sometimes mesopore) volume and the resulting high surface area. Recent research has reported activated carbon to be an effective agent for adsorbing cationic species of toxic metals from multi-pollutant systems, and has also proposed possible adsorption mechanisms with supporting evidence.
Water adsorption
The adsorption of water at surfaces is of broad importance in chemical engineering, materials science and catalysis. Also termed surface hydration, the presence of physically or chemically adsorbed water at the surfaces of solids plays an important role in governing interface properties, chemical reaction pathways and catalytic performance in a wide range of systems. In the case of physically adsorbed water, surface hydration can be eliminated simply through drying at conditions of temperature and pressure allowing full vaporization of water. For chemically adsorbed water, hydration may be in the form of either dissociative adsorption, where H2O molecules are dissociated into surface-adsorbed -H and -OH, or molecular adsorption (associative adsorption), where individual water molecules remain intact.
Adsorption solar heating and storage
The low cost ($200/ton) and high cycle rate (2,000 ×) of synthetic zeolites such as Linde 13X with water adsorbate has garnered much academic and commercial interest recently for use in thermal energy storage (TES), specifically of low-grade solar and waste heat. Several pilot projects have been funded in the EU from 2000 to the present (2020). The basic concept is to store solar thermal energy as chemical latent energy in the zeolite. Typically, hot dry air from flat-plate solar collectors is made to flow through a bed of zeolite such that any water adsorbate present is driven off. Storage can be diurnal, weekly, monthly, or even seasonal depending on the volume of the zeolite and the area of the solar thermal panels. When heat is called for during the night, or sunless hours, or winter, humidified air flows through the zeolite. As the humidity is adsorbed by the zeolite, heat is released to the air and subsequently to the building space. This form of TES, with specific use of zeolites, was first taught by John Guerra in 1978.
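The storage arithmetic is simple: the heat recoverable on re-adsorption is roughly the mass of water taken up times the adsorption enthalpy. The sketch below uses assumed, illustrative values; the ~20% water uptake by mass and ~3.4 MJ per kg of water are plausible orders of magnitude for a 13X-type zeolite, not measured figures.

```python
def heat_released_mj(zeolite_kg, water_uptake_frac, dh_mj_per_kg_water):
    """Heat released when dry zeolite re-adsorbs water: Q = m_water * dH_ads."""
    return zeolite_kg * water_uptake_frac * dh_mj_per_kg_water

# Illustrative: 100 kg of dry zeolite, 20% uptake, ~3.4 MJ/kg(water) assumed.
q = heat_released_mj(100.0, 0.20, 3.4)
print(f"~{q:.0f} MJ (~{q / 3.6:.0f} kWh) released on humidification")
```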
Carbon capture and storage
Typical adsorbents proposed for carbon capture and storage are zeolites and MOFs. The customization of adsorbents makes them a potentially attractive alternative to absorption. Because adsorbents can be regenerated by temperature or pressure swing, this step can be less energy-intensive than absorption regeneration methods. The major cost issues with adsorption in carbon capture are regenerating the adsorbent, the required mass ratio, the solvent/MOF choice, the cost of the adsorbent, its production, and its lifetime.
In sorption-enhanced water gas shift (SEWGS) technology, a pre-combustion carbon capture process based on solid adsorption is combined with the water gas shift reaction (WGS) in order to produce a high-pressure hydrogen stream. The CO2 stream produced can be stored or used for other industrial processes.
Protein and surfactant adsorption
Protein adsorption is a process that has a fundamental role in the field of biomaterials. Indeed, biomaterial surfaces in contact with biological media, such as blood or serum, are immediately coated by proteins. Therefore, living cells do not interact directly with the biomaterial surface, but with the adsorbed proteins layer. This protein layer mediates the interaction between biomaterials and cells, translating biomaterial physical and chemical properties into a "biological language". In fact, cell membrane receptors bind to protein layer bioactive sites and these receptor-protein binding events are transduced, through the cell membrane, in a manner that stimulates specific intracellular processes that then determine cell adhesion, shape, growth and differentiation. Protein adsorption is influenced by many surface properties such as surface wettability, surface chemical composition and surface nanometre-scale morphology.
Surfactant adsorption is a similar phenomenon, but utilising surfactant molecules in the place of proteins.
Adsorption chillers
Combining an adsorbent with a refrigerant, adsorption chillers use heat to provide a cooling effect. This heat, in the form of hot water, may come from any number of industrial sources including waste heat from industrial processes, prime heat from solar thermal installations or from the exhaust or water jacket heat of a piston engine or turbine.
Although there are similarities between adsorption chillers and absorption refrigeration, the former is based on the interaction between gases and solids. The adsorption chamber of the chiller is filled with a solid material (for example zeolite, silica gel, alumina, active carbon or certain types of metal salts), which in its neutral state has adsorbed the refrigerant. When heated, the solid desorbs (releases) refrigerant vapour, which subsequently is cooled and liquefied. This liquid refrigerant then provides a cooling effect at the evaporator from its enthalpy of vaporization. In the final stage the refrigerant vapour is (re)adsorbed into the solid. As an adsorption chiller requires no compressor, it is relatively quiet.
Portal site mediated adsorption
Portal site mediated adsorption is a model for site-selective activated gas adsorption in metallic catalytic systems that contain a variety of different adsorption sites. In such systems, low-coordination "edge and corner" defect-like sites can exhibit significantly lower adsorption enthalpies than high-coordination (basal plane) sites. As a result, these sites can serve as "portals" for very rapid adsorption to the rest of the surface. The phenomenon relies on the common "spillover" effect (described below), where certain adsorbed species exhibit high mobility on some surfaces. The model explains seemingly inconsistent observations of gas adsorption thermodynamics and kinetics in catalytic systems where surfaces can exist in a range of coordination structures, and it has been successfully applied to bimetallic catalytic systems where synergistic activity is observed.
In contrast to pure spillover, portal site adsorption refers to surface diffusion to adjacent adsorption sites, not to non-adsorptive support surfaces.
The model appears to have been first proposed for carbon monoxide on silica-supported platinum by Brandt et al. (1993). A similar, but independent model was developed by King and co-workers to describe hydrogen adsorption on silica-supported alkali promoted ruthenium, silver-ruthenium and copper-ruthenium bimetallic catalysts. The same group applied the model to CO hydrogenation (Fischer–Tropsch synthesis). Zupanc et al. (2002) subsequently confirmed the same model for hydrogen adsorption on magnesia-supported caesium-ruthenium bimetallic catalysts. Trens et al. (2009) have similarly described CO surface diffusion on carbon-supported Pt particles of varying morphology.
Adsorption spillover
In the case of catalytic or adsorbent systems where a metal species is dispersed upon a support (or carrier) material (often quasi-inert oxides, such as alumina or silica), it is possible for an adsorptive species to indirectly adsorb to the support surface under conditions where such adsorption is thermodynamically unfavorable. The presence of the metal provides a lower-energy pathway: gaseous species first adsorb to the metal and then diffuse onto the support surface. This is possible because the adsorbed species attains a lower energy state once it has adsorbed to the metal, thus lowering the activation barrier between the gas-phase species and the support-adsorbed species.
Hydrogen spillover is the most common example of an adsorptive spillover. In the case of hydrogen, adsorption is most often accompanied with dissociation of molecular hydrogen (H2) to atomic hydrogen (H), followed by spillover of the hydrogen atoms present.
The spillover effect has been used to explain many observations in heterogeneous catalysis and adsorption.
Polymer adsorption
Adsorption of molecules onto polymer surfaces is central to a number of applications, including development of non-stick coatings and in various biomedical devices. Polymers may also be adsorbed to surfaces through polyelectrolyte adsorption.
In viruses
Adsorption is the first step in the viral life cycle. The next steps are penetration, uncoating, synthesis (transcription if needed, and translation), and release. The virus replication cycle, in this respect, is similar for all types of viruses. Factors such as transcription may or may not be needed if the virus is able to integrate its genomic information in the cell's nucleus, or if the virus can replicate itself directly within the cell's cytoplasm.
In popular culture
The game of Tetris is a puzzle game in which blocks of four squares are adsorbed onto a surface during game play. Scientists have used Tetris blocks "as a proxy for molecules with a complex shape" and their "adsorption on a flat surface" for studying the thermodynamics of nanoparticles.
See also
Adatom
Cryo-adsorption
Dual-polarization interferometry
Fluidized bed concentrator
Kelvin probe force microscope
Micromeritics
Molecular sieve
Polanyi adsorption
Pressure swing adsorption
Random sequential adsorption
Hydrogen-bonded organic framework
References
Further reading
External links
Derivation of Langmuir and BET isotherms, at JHU.edu
Carbon Adsorption, at MEGTEC.com
Surface science
Materials science
Chemical processes
Colloidal chemistry
Catalysis
Gases
Gas technologies | Adsorption | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 6,554 | [
"Catalysis",
"Matter",
"Colloidal chemistry",
"Applied and interdisciplinary physics",
"Chemical kinetics",
"Phases of matter",
"Materials science",
"Surface science",
"Colloids",
"Chemical processes",
"Condensed matter physics",
"nan",
"Chemical process engineering",
"Statistical mechanic... |
207,754 | https://en.wikipedia.org/wiki/Somatic%20cell | In cellular biology, a somatic cell, or vegetal cell, is any biological cell forming the body of a multicellular organism other than a gamete, germ cell, gametocyte or undifferentiated stem cell. Somatic cells compose the body of an organism and divide through mitosis.
In contrast, gametes derive from meiosis within the germ cells of the germline, and they fuse during sexual reproduction. Stem cells can also divide through mitosis, but are different from somatic cells in that they differentiate into diverse specialized cell types.
In mammals, somatic cells make up all the internal organs, skin, bones, blood and connective tissue, while mammalian germ cells give rise to spermatozoa and ova which fuse during fertilization to produce a cell called a zygote, which divides and differentiates into the cells of an embryo. There are approximately 220 types of somatic cell in the human body.
Theoretically, these cells are not germ cells (the source of gametes); they transmit their mutations to their cellular descendants (if they have any), but not to the organism's descendants. However, in sponges, non-differentiated somatic cells form the germline and, in Cnidaria, differentiated somatic cells are the source of the germline. Mitotic cell division is only seen in diploid somatic cells. Only some cells, such as germ cells, take part in reproduction.
Evolution
As multicellularity is theorized to have evolved many times, so too have sterile somatic cells. The evolution of an immortal germline producing specialized somatic cells involved the emergence of mortality, and can be viewed in its simplest version in volvocine algae. Those species with a separation between sterile somatic cells and a germline are called Weismannists. Weismannist development is relatively rare (e.g., vertebrates, arthropods, Volvox), as many species have the capacity for somatic embryogenesis (e.g., land plants, most algae, and numerous invertebrates).
Genetics and chromosomes
Like all cells, somatic cells contain DNA arranged in chromosomes. If a somatic cell contains chromosomes arranged in pairs, it is called diploid and the organism is called a diploid organism. The gametes of diploid organisms contain only single unpaired chromosomes and are called haploid. Each pair of chromosomes comprises one chromosome inherited from the father and one inherited from the mother. In humans, somatic cells contain 46 chromosomes organized into 23 pairs. By contrast, gametes of diploid organisms contain only half as many chromosomes. In humans, this is 23 unpaired chromosomes. When two gametes (i.e. a spermatozoon and an ovum) meet during conception, they fuse together, creating a zygote. Due to the fusion of the two gametes, a human zygote contains 46 chromosomes (i.e. 23 pairs).
A large number of species have the chromosomes in their somatic cells arranged in fours ("tetraploid") or even sixes ("hexaploid"). Thus, they can have diploid or even triploid germline cells. An example of this is the modern cultivated species of wheat, Triticum aestivum L., a hexaploid species whose somatic cells contain six copies of every chromatid.
The frequency of spontaneous mutations is significantly lower in advanced male germ cells than in somatic cell types from the same individual. Female germ cells also show a mutation frequency that is lower than that in corresponding somatic cells and similar to that in male germ cells. These findings appear to reflect employment of more effective mechanisms to limit the initial occurrence of spontaneous mutations in germ cells than in somatic cells. Such mechanisms likely include elevated levels of DNA repair enzymes that ameliorate most potentially mutagenic DNA damages.
Cloning
In recent years, the technique of cloning whole organisms has been developed in mammals, allowing almost identical genetic clones of an animal to be produced. One method of doing this is called "somatic cell nuclear transfer" and involves removing the nucleus from a somatic cell, usually a skin cell. This nucleus contains all of the genetic information needed to produce the organism it was removed from. This nucleus is then injected into an ovum of the same species which has had its own genetic material removed. The ovum now no longer needs to be fertilized, because it contains the correct amount of genetic material (a diploid number of chromosomes). In theory, the ovum can be implanted into the uterus of a same-species animal and allowed to develop. The resulting animal will be a nearly genetically identical clone to the animal from which the nucleus was taken. The only difference is caused by any mitochondrial DNA that is retained in the ovum, which is different from the cell that donated the nucleus. In practice, this technique has so far been problematic, although there have been a few high-profile successes, such as Dolly the Sheep (July 5, 1996 - February 14, 2003) and, more recently, Snuppy (April 24, 2005 - May 2015), the first cloned dog.
Biobanking
Somatic cells have also been collected in the practice of biobanking. The cryoconservation of animal genetic resources is a means of conserving animal genetic material in response to decreasing ecological biodiversity. As populations of living organisms fall, so does their genetic diversity, placing species' long-term survivability at risk. Biobanking aims to preserve biologically viable cells through long-term storage for later use. Somatic cells have been stored with the hope that they can be reprogrammed into induced pluripotent stem cells (iPSCs), which can then differentiate into viable reproductive cells.
Genetic modifications
Development of biotechnology has allowed for the genetic manipulation of somatic cells, whether for the modelling of chronic disease or for the prevention of malaise conditions. Two current means of gene editing are the use of transcription activator-like effector nucleases (TALENs) or clustered regularly interspaced short palindromic repeats (CRISPR).
Genetic engineering of somatic cells has resulted in some controversies, although the International Summit on Human Gene Editing has released a statement in support of genetic modification of somatic cells, as the modifications thereof are not passed on to offspring.
Cellular aging
In mammals a high level of repair and maintenance of cellular DNA appears to be beneficial early in life. However, some types of cell, such as those of the brain and muscle, undergo a transition from mitotic cell division to a post-mitotic (non-dividing) condition during early development, and this transition is accompanied by a reduction in DNA repair capability. This reduction may be an evolutionary adaptation permitting the diversion of cellular resources that were earlier used for DNA repair, as well as for DNA replication and cell division, to higher priority neuronal and muscular functions. An effect of these reductions is to allow increased accumulation of DNA damage likely contributing to cellular aging.
See also
Somatic cell count
List of biological development disorders
References
Cloning
Cells
Developmental biology | Somatic cell | [
"Engineering",
"Biology"
] | 1,492 | [
"Behavior",
"Developmental biology",
"Reproduction",
"Cloning",
"Genetic engineering"
] |
208,215 | https://en.wikipedia.org/wiki/Geochronology | Geochronology is the science of determining the age of rocks, fossils, and sediments using signatures inherent in the rocks themselves. Absolute geochronology can be accomplished through radioactive isotopes, whereas relative geochronology is provided by tools such as paleomagnetism and stable isotope ratios. By combining multiple geochronological (and biostratigraphic) indicators the precision of the recovered age can be improved.
Geochronology is different in application from biostratigraphy, which is the science of assigning sedimentary rocks to a known geological period via describing, cataloging and comparing fossil floral and faunal assemblages. Biostratigraphy does not directly provide an absolute age determination of a rock, but merely places it within an interval of time at which that fossil assemblage is known to have coexisted. Both disciplines work together hand in hand, however, to the point where they share the same system of naming strata (rock layers) and the time spans utilized to classify sublayers within a stratum.
The science of geochronology is the prime tool used in the discipline of chronostratigraphy, which attempts to derive absolute age dates for all fossil assemblages and determine the geologic history of the Earth and extraterrestrial bodies.
Dating methods
Radiometric dating
By measuring the amount of radioactive decay of a radioactive isotope with a known half-life, geologists can establish the absolute age of the parent material. A number of radioactive isotopes are used for this purpose, and depending on the rate of decay, are used for dating different geological periods. More slowly decaying isotopes are useful for longer periods of time, but less accurate in absolute years. With the exception of the radiocarbon method, most of these techniques are actually based on measuring an increase in the abundance of a radiogenic isotope, which is the decay-product of the radioactive parent isotope. Two or more radiometric methods can be used in concert to achieve more robust results. Most radiometric methods are suitable for geological time only, but some such as the radiocarbon method and the 40Ar/39Ar dating method can be extended into the time of early human life and into recorded history.
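Before the list of techniques, a minimal sketch of the decay arithmetic they share, assuming simple closed-system decay: the age follows either from the surviving fraction of the parent isotope (as in radiocarbon dating) or from the accumulated daughter-to-parent ratio (as in most other radiometric methods). The sample values are illustrative.

```python
import math

def age_from_fraction(remaining_fraction, half_life_years):
    """Age from the surviving parent fraction: N/N0 = exp(-lambda * t)."""
    lam = math.log(2) / half_life_years
    return -math.log(remaining_fraction) / lam

def age_from_daughter(daughter_parent_ratio, half_life_years):
    """Age from accumulated radiogenic daughter: D/P = exp(lambda * t) - 1."""
    lam = math.log(2) / half_life_years
    return math.log(1.0 + daughter_parent_ratio) / lam

# A sample retaining 25% of its carbon-14 (half-life ~5,730 years) is about
# two half-lives old:
print(f"{age_from_fraction(0.25, 5730):,.0f} years")  # ~11,460
```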
Some of the commonly used techniques are:
Radiocarbon dating. This technique measures the decay of carbon-14 in organic material and can be best applied to samples younger than about 60,000 years.
Uranium–lead dating. This technique measures the ratio of two lead isotopes (lead-206 and lead-207) to the amount of uranium in a mineral or rock. Often applied to the trace mineral zircon in igneous rocks, this method is one of the two most commonly used (along with argon–argon dating) for geologic dating. Monazite geochronology is another example of U–Pb dating, employed for dating metamorphism in particular. Uranium–lead dating is applied to samples older than about 1 million years.
Uranium–thorium dating. This technique is used to date speleothems, corals, carbonates, and fossil bones. Its range is from a few years to about 700,000 years.
Potassium–argon dating and argon–argon dating. These techniques date metamorphic, igneous and volcanic rocks. They are also used to date volcanic ash layers within or overlying paleoanthropologic sites. The younger limit of the argon–argon method is a few thousand years.
Electron spin resonance (ESR) dating
Fission-track dating
Cosmogenic nuclide geochronology
A series of related techniques for determining the age at which a geomorphic surface was created (exposure dating), or at which formerly surficial materials were buried (burial dating). Exposure dating uses the concentration of exotic nuclides (e.g. 10Be, 26Al, 36Cl) produced by cosmic rays interacting with Earth materials as a proxy for the age at which a surface, such as an alluvial fan, was created. Burial dating uses the differential radioactive decay of two cosmogenic nuclides as a proxy for the age at which a sediment was screened by burial from further cosmic-ray exposure.
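A hedged sketch of the burial-dating arithmetic: once shielded, a nuclide pair produced at a known surface ratio decays at different rates, so the measured ratio gives the burial time. The 26Al/10Be half-lives and surface production ratio below are approximate literature values, and the measured ratio is invented for illustration.

```python
import math

def burial_age_years(ratio_now, ratio_surface, half_life_1, half_life_2):
    """Burial age from the differential decay of a nuclide pair:
    R(t) = R0 * exp(-(lambda1 - lambda2) * t), with lambda = ln2 / t_half.
    """
    lam1 = math.log(2) / half_life_1
    lam2 = math.log(2) / half_life_2
    return math.log(ratio_surface / ratio_now) / (lam1 - lam2)

# 26Al/10Be pair: half-lives ~0.705 Myr and ~1.387 Myr; an assumed surface
# production ratio of ~6.75 decaying to a measured 3.4 (illustrative).
t = burial_age_years(3.4, 6.75, 0.705e6, 1.387e6)
print(f"burial age ~ {t / 1e6:.2f} Myr")
```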
Luminescence dating
Luminescence dating techniques observe 'light' emitted from materials such as quartz, diamond, feldspar, and calcite. Many types of luminescence techniques are utilized in geology, including optically stimulated luminescence (OSL), cathodoluminescence (CL), and thermoluminescence (TL). Thermoluminescence and optically stimulated luminescence are used in archaeology to date 'fired' objects such as pottery or cooking stones and can be used to observe sand migration.
Incremental dating
Incremental dating techniques allow the construction of year-by-year annual chronologies, which can be fixed (i.e. linked to the present day and thus calendar or sidereal time) or floating.
Dendrochronology
Ice cores
Lichenometry
Varves
Paleomagnetic dating
A sequence of paleomagnetic poles (usually called virtual geomagnetic poles), which are already well defined in age, constitutes an apparent polar wander path (APWP). Such a path is constructed for a large continental block. APWPs for different continents can be used as a reference for newly obtained poles for the rocks with unknown age. For paleomagnetic dating, it is suggested to use the APWP in order to date a pole obtained from rocks or sediments of unknown age by linking the paleopole to the nearest point on the APWP. Two methods of paleomagnetic dating have been suggested: (1) the angular method and (2) the rotation method. The first method is used for paleomagnetic dating of rocks inside of the same continental block. The second method is used for the folded areas where tectonic rotations are possible.
Magnetostratigraphy
Magnetostratigraphy determines age from the pattern of magnetic polarity zones in a series of bedded sedimentary and/or volcanic rocks by comparison to the magnetic polarity timescale. The polarity timescale has been previously determined by dating of seafloor magnetic anomalies, radiometrically dating volcanic rocks within magnetostratigraphic sections, and astronomically dating magnetostratigraphic sections.
Chemostratigraphy
Global trends in isotope compositions, particularly carbon-13 and strontium isotopes, can be used to correlate strata.
Correlation of marker horizons
Marker horizons are stratigraphic units of the same age and of such distinctive composition and appearance that, despite their presence in different geographic sites, there is certainty about their age-equivalence. Fossil faunal and floral assemblages, both marine and terrestrial, make for distinctive marker horizons. Tephrochronology is a method for geochemical correlation of unknown volcanic ash (tephra) to geochemically fingerprinted, dated tephra. Tephra is also often used as a dating tool in archaeology, since the dates of some eruptions are well-established.
Geological hierarchy of chronological periodization
Geochronology, from largest to smallest:
Supereon
Eon
Era
Period
Epoch
Age
Chron
Differences from chronostratigraphy
It is important not to confuse geochronologic and chronostratigraphic units. Geochronological units are periods of time, thus it is correct to say that Tyrannosaurus rex lived during the Late Cretaceous Epoch. Chronostratigraphic units are geological material, so it is also correct to say that fossils of the genus Tyrannosaurus have been found in the Upper Cretaceous Series. In the same way, it is entirely possible to go and visit an Upper Cretaceous Series deposit – such as the Hell Creek deposit where the Tyrannosaurus fossils were found – but it is naturally impossible to visit the Late Cretaceous Epoch as that is a period of time.
See also
Astronomical chronology
Age of Earth
Age of the universe
Chronological dating, archaeological chronology
Absolute dating
Relative dating
Phase (archaeology)
Archaeological association
Geochronology
Closure temperature
Geologic time scale
Geological history of Earth
Thermochronology
List of geochronologic names
General
Consilience, evidence from independent, unrelated sources can "converge" on strong conclusions
References
Further reading
Smart, P.L., and Frances, P.D. (1991), Quaternary dating methods - a user's guide. Quaternary Research Association Technical Guide No.4
Lowe, J.J., and Walker, M.J.C. (1997), Reconstructing Quaternary Environments (2nd edition). Longman publishing
Mattinson, J. M. (2013), Revolution and evolution: 100 years of U-Pb geochronology. Elements 9, 53–57.
Geochronology bibliography Talk:Origins Archive
External links
Geochronology and Isotopes Data Portal
International Commission on Stratigraphy
BGS Open Data Geochronological Ontologies
Radiometric dating | Geochronology | [
"Chemistry"
] | 1,857 | [
"Radiometric dating",
"Radioactivity"
] |
208,369 | https://en.wikipedia.org/wiki/Thermoluminescence%20dating | Thermoluminescence dating (TL) is the determination, by means of measuring the accumulated radiation dose, of the time elapsed since material containing crystalline minerals was either heated (lava, ceramics) or exposed to sunlight (sediments). As a crystalline material is heated during measurements, the process of thermoluminescence starts. Thermoluminescence emits a weak light signal that is proportional to the radiation dose absorbed by the material. It is a type of luminescence dating.
The technique has wide application, and is relatively cheap at some US$300–700 per object; ideally a number of samples are tested. Sediments are more expensive to date. The destruction of a relatively significant amount of sample material is necessary, which can be a limitation in the case of artworks. The heating must have taken the object above 500 °C, which covers most ceramics, although very high-fired porcelain creates other difficulties. It will often work well with stones that have been heated by fire. The clay core of bronze sculptures made by lost wax casting is also able to be tested.
Different materials vary considerably in their suitability for the technique, depending on several factors. Subsequent irradiation, for example if an x-ray is taken, can affect accuracy, as will the "annual dose" of radiation a buried object has received from the surrounding soil. Ideally this is assessed by measurements made at the precise findspot over a long period. For artworks, it may be sufficient to confirm whether a piece is broadly ancient or modern (that is, authentic or a fake), and this may be possible even if a precise date cannot be estimated.
Functionality
Natural crystalline materials contain imperfections: impurity ions, stress dislocations, and other phenomena that disturb the regularity of the electric field that holds the atoms in the crystalline lattice together. These imperfections lead to local humps and dips in the crystalline material's electric potential. Where there is a dip (a so-called "electron trap"), a free electron may be attracted and trapped.
The flux of ionizing radiation—both from cosmic radiation and from natural radioactivity—excites electrons from atoms in the crystal lattice into the conduction band where they can move freely. Most excited electrons will soon recombine with lattice ions, but some will be trapped, storing part of the energy of the radiation in the form of trapped electric charge (Figure 1).
Depending on the depth of the traps (the energy required to free an electron from them) the storage time of trapped electrons will vary as some traps are sufficiently deep to store charge for hundreds of thousands of years.
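As a rough illustration of why deep traps can store charge for geological times, the lifetime of a trapped electron is often modeled with an Arrhenius expression, τ = (1/s)·exp(E/kT); the frequency factor s and the trap depth below are assumed, typical-order values, not measured ones.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def trap_lifetime_years(depth_ev, temp_k, attempt_freq_hz=1e12):
    """Arrhenius estimate of trap lifetime: tau = exp(E / kT) / s."""
    tau_s = math.exp(depth_ev / (K_B_EV * temp_k)) / attempt_freq_hz
    return tau_s / 3.156e7  # seconds -> years

# An assumed 1.45 eV trap at 15 C (288 K): lifetime of order a million years.
print(f"tau ~ {trap_lifetime_years(1.45, 288.0):.1e} years")
```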
In practical use
Another important technique in testing samples from a historic or archaeological site is thermoluminescence testing, which involves the principle that all objects absorb radiation from the environment. This process frees electrons within elements or minerals that remain caught within the item. Thermoluminescence testing involves heating a sample until it releases a type of light, which is then measured to determine the last time the item was heated.
In thermoluminescence dating, these long-term traps are used to determine the age of materials: When irradiated crystalline material is again heated or exposed to strong light, the trapped electrons are given sufficient energy to escape. In the process of recombining with a lattice ion, they lose energy and emit photons (light quanta), detectable in the laboratory.
The amount of light produced is proportional to the number of trapped electrons that have been freed which is in turn proportional to the radiation dose accumulated. In order to relate the signal (the thermoluminescence—light produced when the material is heated) to the radiation dose that caused it, it is necessary to calibrate the material with known doses of radiation since the density of traps is highly variable.
Thermoluminescence dating presupposes a "zeroing" event in the history of the material, either heating (in the case of pottery or lava) or exposure to sunlight (in the case of sediments), that removes the pre-existing trapped electrons. Therefore, at that point the thermoluminescence signal is zero.
As time goes on, the ionizing radiation field around the material causes the trapped electrons to accumulate (Figure 2). In the laboratory, the accumulated radiation dose can be measured, but this by itself is insufficient to determine the time since the zeroing event.
The radiation dose rate (the dose accumulated per year) must be determined first. This is commonly done by measurement of the alpha radioactivity (the uranium and thorium content) and the potassium content (K-40 is a beta and gamma emitter) of the sample material.
Often the gamma radiation field at the position of the sample material is measured, or it may be calculated from the alpha radioactivity and potassium content of the sample environment, and the cosmic ray dose is added in. Once all components of the radiation field are determined, the accumulated dose from the thermoluminescence measurements is divided by the dose accumulating each year, to obtain the years since the zeroing event.
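Putting the two measured quantities together, the age calculation itself is a single division; the sketch below assumes illustrative values for a pottery sherd.

```python
def tl_age_years(equivalent_dose_gy, annual_dose_gy_per_year):
    """Thermoluminescence age = accumulated (equivalent) dose / annual dose rate."""
    return equivalent_dose_gy / annual_dose_gy_per_year

# Illustrative sherd: 12 Gy accumulated since firing, 3 mGy/year dose rate.
print(f"{tl_age_years(12.0, 0.003):,.0f} years since the zeroing event")  # 4,000
```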
Relation to radiocarbon dating
Thermoluminescence dating is used for material where radiocarbon dating is not available, like sediments. Its use is now common in the authentication of old ceramic wares, for which it gives the approximate date of the last firing. An example of this can be seen in Rink and Bartoll, 2005.
Thermoluminescence dating was modified for use as a passive sand migration analysis tool by Keizars, et al., 2008 (Figure 3), demonstrating the direct consequences resulting from the improper replenishment of starving beaches using fine sands, as well as providing a passive method of policing sand replenishment and observing riverine or other sand inputs along shorelines (Figure 4).
Relation to other luminescence dating methods
Optically stimulated luminescence dating is a related measurement method which replaces heating with exposure to intense light. The sample material is illuminated with a very bright source of green or blue light (for quartz) or infrared light (for potassium feldspar). Ultraviolet light emitted by the sample is detected for measurement.
See also
Geochronology
Luminescence dating
Rehydroxylation dating
Thermoluminescent dosimeter
Notes
Oxford Authentication – TL Testing Authentication: Oxford Authentication Ltd authenticates ceramic antiquities using the scientific technique of thermoluminescence (TL). TL testing is a dating method for archaeological items which can distinguish between genuine and fake antiquities. Case studies: https://www.oxfordauthentication.com/case-studies/
References and bibliography
Quaternary TL Surveys – Guide to thermoluminescence date measurement
Aitken, M.J., Thermoluminescence Dating, Academic Press, London (1985) – Standard text for introduction to the field. Quite complete and rather technical, but well written and well organized. There is a second edition.
Aitken, M.J., Introduction to Optical Dating, Oxford University Press (1998) – Good introduction to the field.
Keizars, K.Z. 2003. NRTL as a method of analysis of sand transport along the coast of the St. Joseph Peninsula, Florida. GAC/MAC 2003. Presentation: Brock University, St. Catharines, Ontario, Canada.
Ķeizars, Z., Forrest, B., Rink, W.J. 2008. Natural Residual Thermoluminescence as a Method of Analysis of Sand Transport along the Coast of the St. Joseph Peninsula, Florida. Journal of Coastal Research, 24: 500–507.
Keizars, Z. 2008b. NRTL trends observed in the sands of St. Joseph Peninsula, Florida. Queen's University. Presentation: Queen's University, Kingston, Ontario, Canada.
Liritzis, I., 2011. Surface Dating by Luminescence: An Overview. Geochronometria, 38(3): 292–302.
Mortlock, A.J.; Price, D. and Gardiner, G. The Discovery and Preliminary Thermoluminescence Dating of Two Aboriginal Cave Shelters in the Selwyn Ranges, Queensland. Australian Archaeology, No. 9, Nov 1979: 82–86.
Rink, W.J., Bartoll, J. 2005. Dating the geometric Nasca lines in the Peruvian desert. Antiquity, 79: 390–401.
Sullasi, H. S., Andrade, M. B., Ayta, W. E. F., Frade, M., Sastry, M. D., & Watanabe, S. (2004). Irradiation for dating Brazilian fish fossil by thermoluminescence and EPR technique. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 213, 756–760. doi:10.1016/S0168-583X(03)01698-7
Dating methods
Exploration geophysics
Luminescence
Conservation and restoration of cultural heritage
Dating methodologies in archaeology | Thermoluminescence dating | [
"Chemistry"
] | 1,955 | [
"Luminescence",
"Molecular physics"
] |
208,572 | https://en.wikipedia.org/wiki/Optical%20window | The optical window is the portion of the optical spectrum that passes through the Earth's atmosphere rather than being blocked by it. The window runs from around 300 nanometers (ultraviolet-B) up into the range the human eye can detect, roughly 400–700 nm, and continues up to approximately 2 μm. Sunlight mostly reaches the ground through the optical atmospheric window; the Sun is particularly active in most of this range (44% of the radiation emitted by the Sun falls within the visible spectrum and 49% falls within the infrared spectrum).
Definition
The Earth's atmosphere is not totally transparent and is in fact 100% opaque to many wavelengths (see plot of Earth's opacity); the wavelength ranges to which it is transparent are called atmospheric windows.
Disambiguation of the term 'optical spectrum'
Although the word optical, deriving from Ancient Greek ὀπτῐκός (optikós, "of or for sight"), generally refers to something visible or visual, the term optical spectrum is used to describe the sum of the visible, the ultraviolet and the infrared spectra (at least in this context).
Optical atmospheric window
The optical atmospheric window is the optical portion of the electromagnetic spectrum that passes through the Earth's atmosphere, excluding its infrared part. Although, as mentioned before, the optical spectrum also includes the IR spectrum, and thus the optical window could include the infrared window (8–14 μm), the latter is considered separate by convention, since the visible spectrum is not contained in it.
Historical importance for observational astronomy
Up until the 1940s, astronomers could only use the visible and near infrared portions of the optical spectrum for their observations. The first great astronomical discoveries such as the ones made by the famous Italian polymath Galileo Galilei were made using optical telescopes that received light reaching the ground through the optical window. After the 1940s, the development of radio telescopes gave rise to the even more successful field of radio astronomy that utilized the radio window.
See also
Infrared (atmospheric) window
Optical window in biological tissue
Radio window
References
Electromagnetic spectrum
Observational astronomy | Optical window | [
"Physics",
"Astronomy"
] | 419 | [
"Astronomical sub-disciplines",
"Observational astronomy",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
208,810 | https://en.wikipedia.org/wiki/Eddington%20luminosity | The Eddington luminosity, also referred to as the Eddington limit, is the maximum luminosity a body (such as a star) can achieve when there is balance between the force of radiation acting outward and the gravitational force acting inward. The state of balance is called hydrostatic equilibrium. When a star exceeds the Eddington luminosity, it will initiate a very intense radiation-driven stellar wind from its outer layers. Since most massive stars have luminosities far below the Eddington luminosity, their winds are driven mostly by the less intense line absorption. The Eddington limit is invoked to explain the observed luminosities of accreting black holes such as quasars.
Originally, Sir Arthur Eddington took only the electron scattering into account when calculating this limit, something that now is called the classical Eddington limit. Nowadays, the modified Eddington limit also takes into account other radiation processes such as bound–free and free–free radiation interaction.
Derivation
The Eddington limit is obtained by setting the outward radiation pressure equal to the inward gravitational force. Both forces decrease by inverse-square laws, so once equality is reached, the hydrodynamic flow is the same throughout the star.
From Euler's equation in hydrostatic equilibrium, the mean acceleration is zero:

du/dt = −∇p/ρ − ∇Φ = 0

where u is the velocity, p is the pressure, ρ is the density, and Φ is the gravitational potential. If the pressure is dominated by the radiation pressure associated with an irradiance F_rad, then

−∇p/ρ = (κ/c) F_rad.

Here κ is the opacity of the stellar material, defined as the fraction of radiation energy flux absorbed by the medium per unit density and unit length. For ionized hydrogen, κ = σ_T/m_p, where σ_T is the Thomson scattering cross-section for the electron and m_p is the mass of a proton. Note that F_rad is defined as the energy flux over a surface, which can be expressed in terms of the momentum flux using p_rad = F_rad/c for radiation. Therefore, the rate of momentum transfer from the radiation to the gaseous medium per unit density is κF_rad/c, which explains the right-hand side of the above equation.
The luminosity of a source bounded by a surface S may be expressed with these relations as

L = ∮_S F_rad · dS = ∮_S (c/κ) ∇Φ · dS.

Now assuming that the opacity is a constant, it can be brought outside the integral. Using Gauss's theorem and Poisson's equation then gives

L_Edd = (c/κ) ∮_S ∇Φ · dS = 4πGMc/κ

where M is the mass of the central object. This result is called the Eddington luminosity. For pure ionized hydrogen,

L_Edd = 4πGM m_p c/σ_T ≈ 1.26×10^31 (M/M_⊙) W ≈ 3.2×10^4 (M/M_⊙) L_⊙

where M_⊙ is the mass of the Sun and L_⊙ is the luminosity of the Sun.
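A minimal numerical check of the pure-hydrogen result, using rounded standard constants; the expected output of roughly 1.3×10^31 W per solar mass matches the usual quoted figure.

```python
import math

G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c       = 2.998e8     # speed of light, m/s
m_p     = 1.673e-27   # proton mass, kg
sigma_T = 6.652e-29   # Thomson cross-section, m^2
M_SUN   = 1.989e30    # solar mass, kg
L_SUN   = 3.828e26    # solar luminosity, W

def eddington_luminosity(mass_kg):
    """L_Edd = 4*pi*G*M*m_p*c / sigma_T for pure ionized hydrogen."""
    return 4.0 * math.pi * G * mass_kg * m_p * c / sigma_T

L = eddington_luminosity(M_SUN)
print(f"L_Edd(1 M_sun) = {L:.2e} W = {L / L_SUN:.1e} L_sun")  # ~1.26e31 W
```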
The maximum possible luminosity of a source in hydrostatic equilibrium is the Eddington luminosity. If the luminosity exceeds the Eddington limit, then the radiation pressure drives an outflow.
The mass of the proton appears because, in the typical environment for the outer layers of a star, the radiation pressure acts on electrons, which are driven away from the center. Because protons are negligibly pressured by the analog of Thomson scattering, due to their larger mass, the result is to create a slight charge separation and therefore a radially directed electric field, acting to lift the positive charges, which, under the conditions in stellar atmospheres, typically are free protons. When the outward electric field is sufficient to levitate the protons against gravity, both electrons and protons are expelled together.
Different limits for different materials
The derivation above for the outward light pressure assumes a hydrogen plasma. In other circumstances the pressure balance can be different from what it is for hydrogen.
In an evolved star with a pure helium atmosphere, the electric field would have to lift a helium nucleus (an alpha particle), with nearly 4 times the mass of a proton, while the radiation pressure would act on 2 free electrons. Thus twice the usual Eddington luminosity would be needed to drive off an atmosphere of pure helium.
At very high temperatures, as in the environment of a black hole or neutron star, high-energy photons can interact with nuclei, or even with other photons, to create an electron–positron plasma. In that situation the combined mass of the positive–negative charge carrier pair is approximately 918 times smaller (half of the proton-to-electron mass ratio), while the radiation pressure on the positrons doubles the effective upward force per unit mass, so the limiting luminosity needed is reduced by a factor of ≈ 918×2.
The exact value of the Eddington luminosity depends on the chemical composition of the gas layer and the spectral energy distribution of the emission. A gas with cosmological abundances of hydrogen and helium is much more transparent than gas with solar abundance ratios. Atomic line transitions can greatly increase the effects of radiation pressure, and line-driven winds exist in some bright stars (e.g., Wolf–Rayet and O-type stars).
Super-Eddington luminosities
The role of the Eddington limit in today's research lies in explaining the very high mass-loss rates seen in, for example, the series of outbursts of η Carinae in 1840–1860. The regular, line-driven stellar winds can only explain a mass-loss rate of around 10^−4 to 10^−3 solar masses per year, whereas losses of up to 0.5 solar masses per year are needed to understand the η Carinae outbursts. This can be done with the help of the super-Eddington winds driven by broad-spectrum radiation.
Gamma-ray bursts, novae and supernovae are examples of systems exceeding their Eddington luminosity by a large factor for very short times, resulting in short and highly intensive mass loss rates. Some X-ray binaries and active galaxies are able to maintain luminosities close to the Eddington limit for very long times. For accretion-powered sources such as accreting neutron stars or cataclysmic variables (accreting white dwarfs), the limit may act to reduce or cut off the accretion flow, imposing an Eddington limit on accretion corresponding to that on luminosity. Super-Eddington accretion onto stellar-mass black holes is one possible model for ultraluminous X-ray sources (ULXSs).
For accreting black holes, not all the energy released by accretion has to appear as outgoing luminosity, since energy can be lost through the event horizon, down the hole. Such sources effectively may not conserve energy. Then the accretion efficiency, or the fraction of energy actually radiated of that theoretically available from the gravitational energy release of accreting material, enters in an essential way.
Other factors
The Eddington limit is not a strict limit on the luminosity of a stellar object. The limit does not consider several potentially important factors, and super-Eddington objects have been observed that do not seem to have the predicted high mass-loss rate. Other factors that might affect the maximum luminosity of a star include:
Porosity. A problem with steady winds driven by broad-spectrum radiation is that both the radiative flux and gravitational acceleration scale as r^−2. The ratio between these factors is constant, and in a super-Eddington star, the whole envelope would become gravitationally unbound at the same time. This is not observed. A possible solution is introducing an atmospheric porosity, where we imagine the stellar atmosphere to consist of denser regions surrounded by regions of lower-density gas. This would reduce the coupling between radiation and matter, and the full force of the radiation field would be seen only in the more homogeneous outer, lower-density layers of the atmosphere.
Turbulence. A possible destabilizing factor might be the turbulent pressure arising when energy in the convection zones builds up a field of supersonic turbulence. The importance of turbulence is being debated, however.
Photon bubbles. Another factor that might explain some stable super-Eddington objects is the photon bubble effect. Photon bubbles would develop spontaneously in radiation-dominated atmospheres when the radiation pressure exceeds the gas pressure. We can imagine a region in the stellar atmosphere with a density lower than the surroundings, but with a higher radiation pressure. Such a region would rise through the atmosphere, with radiation diffusing in from the sides, leading to an even higher radiation pressure. This effect could transport radiation more efficiently than a homogeneous atmosphere, increasing the allowed total radiation rate. Accretion discs may exhibit luminosities as high as 10–100 times the Eddington limit without experiencing instabilities.
Humphreys–Davidson limit
Observations of massive stars show a clear upper limit to their luminosity, termed the Humphreys–Davidson limit after the researchers who first wrote about it.
Only highly unstable objects are found, temporarily, at higher luminosities. Efforts to reconcile this with the theoretical Eddington limit have been largely unsuccessful.
The H–D limit for cool supergiants is placed at around 320,000 solar luminosities (L⊙).
See also
Hayashi limit
List of most massive stars
M82 X-1
M82 X-2
References
Further reading
External links
Surpassing the Eddington Limit.
Concepts in astrophysics
Stellar astronomy | Eddington luminosity | [
"Physics",
"Astronomy"
] | 1,833 | [
"Astronomical sub-disciplines",
"Concepts in astrophysics",
"Astrophysics",
"Stellar astronomy"
] |
208,986 | https://en.wikipedia.org/wiki/Red%20fuming%20nitric%20acid | Red fuming nitric acid (RFNA) is a storable oxidizer used as a rocket propellant. It consists of nitric acid (), dinitrogen tetroxide () and a small amount of water. The color of red fuming nitric acid is due to the dinitrogen tetroxide, which breaks down partially to form nitrogen dioxide. The nitrogen dioxide dissolves until the liquid is saturated, and produces toxic fumes with a suffocating odor. RFNA increases the flammability of combustible materials and is highly exothermic when reacting with water.
Since nitrogen dioxide is a product of decomposition of nitric acid, its addition stabilizes nitric acid in accordance with Le Chatelier's principle. Addition of dinitrogen tetroxide also increases oxidizing power and lowers the freezing point.
It is usually used with an inhibitor (with various, sometimes secret, substances, including hydrogen fluoride; any such combination is called inhibited RFNA, IRFNA) because nitric acid attacks most container materials. Hydrogen fluoride for instance will passivate the container metal with a thin layer of metal fluoride, making it nearly impervious to the nitric acid.
It can also be a component of a monopropellant; with substances like amine nitrates dissolved in it, it can be used as the sole fuel in a rocket. This is inefficient and it is not normally used this way.
During World War II, the German military used RFNA in some rockets. The mixtures used were called S-Stoff (96% nitric acid with 4% ferric chloride as an ignition catalyst) and SV-Stoff (94% nitric acid with 6% dinitrogen tetroxide) and nicknamed Salbei (sage).
Inhibited RFNA was the oxidizer of the world's most-launched light orbital rocket, the Kosmos-3M. In former-Soviet countries inhibited RFNA is known as Mélange.
Other uses for RFNA include fertilizers, dye intermediates, explosives, and pharmaceutical acidifiers. It can also be used as a laboratory reagent in photoengraving and metal etching.
Compositions
IRFNA IIIa: 83.4% HNO3, 14% NO2, 2% H2O, 0.6% HF
IRFNA IV HDA: 54.3% HNO3, 44% NO2, 1% H2O, 0.7% HF
S-Stoff: 96% HNO3, 4% FeCl3
SV-Stoff: 94% HNO3, 6% N2O4
AK20: 80% HNO3, 20% N2O4
AK20F: 80% HNO3, 20% N2O4, fluorine-based inhibitor
AK20I: 80% HNO3, 20% N2O4, iodine-based inhibitor
AK20K: 80% HNO3, 20% N2O4, potassium-based inhibitor
AK27I: 73% HNO3, 27% N2O4, iodine-based inhibitor
AK27P: 73% HNO3, 27% N2O4, phosphorus-based inhibitor
Corrosion
Hydrofluoric acid content of IRFNA: When RFNA is used as an oxidizer for rocket fuels, it usually has an HF content of about 0.6%. The purpose of the HF is to act as a corrosion inhibitor by forming a metal fluoride layer on the surface of the storage vessels.
Water content of RFNA: To test the water content, samples of 80% HNO3 and 8–20% NO2 were used, with the remainder H2O depending on the amount of NO2 in the sample. When the RFNA contained HF, the average H2O content was between 2.4% and 4.2%. When the RFNA did not contain HF, the average H2O content was between 0.1% and 5.0%. When the metal impurities from corrosion were taken into account, the H2O content increased, to between 2.2% and 8.8%.
Corrosion of metals in RFNA: Stainless steel, aluminium alloys, iron alloys, chrome plates, tin, gold and tantalum were tested to see how RFNA affected the corrosion rate of each. Experiments were performed using 16% and 6.5% RFNA samples and the different substances listed above. Many different stainless steels showed resistance to corrosion. Aluminium alloys did not endure as well as stainless steels, especially at high temperature, but their corrosion rates were not high enough to prohibit use with RFNA. Tin, gold and tantalum showed high corrosion resistance similar to that of stainless steel; these materials fare better, though, because their corrosion rates did not increase much at high temperatures. Corrosion rates at elevated temperatures increased in the presence of phosphoric acid, while sulfuric acid decreased corrosion rates.
See also
White fuming nitric acid
References
Further reading
External links
National Pollutant Inventory – Nitric Acid Fact Sheet
https://web.archive.org/web/20030429160808/http://www.astronautix.com/props/nitidjpx.htm
Rocket oxidizers
Oxidizing acids | Red fuming nitric acid | [
"Chemistry"
] | 1,133 | [
"Acids",
"Rocket oxidizers",
"Oxidizing acids",
"Oxidizing agents"
] |
209,005 | https://en.wikipedia.org/wiki/Noon | Noon (also known as noontime or midday) is 12 o'clock in the daytime. It is written as 12 noon, 12:00 m. (for meridiem, literally 12:00 midday), 12 p.m. (for post meridiem, literally "after midday"), 12 pm, or 12:00 (using a 24-hour clock) or 1200 (military time).
Solar noon is the time when the Sun appears to contact the local celestial meridian. This is when the Sun reaches its apparent highest point in the sky, at 12 noon apparent solar time and can be observed using a sundial. The local or clock time of solar noon depends on the date, longitude, and time zone, with Daylight Saving Time tending to place solar noon closer to 1:00pm.
Etymology
The word noon is derived from Latin nona hora, the ninth canonical hour of the day, in reference to the Western Christian liturgical term Nones (nona, "ninth"), one of the seven fixed prayer times in traditional Christian denominations. The Roman and Western European medieval monastic day began at 6:00 a.m. (06:00) at the equinox by modern timekeeping, so the ninth hour started at what is now 3:00 p.m. (15:00) at the equinox. In English, the meaning of the word shifted to midday, and the time gradually moved back to 12:00 local time, that is, not taking into account the modern invention of time zones. The change began in the 12th century and was fixed by the 14th century.
Solar noon
Solar noon, also known as the local apparent solar noon and Sun transit time (informally, high noon), is the moment when the Sun contacts the observer's meridian (culmination or meridian transit), reaching its highest position above the horizon on that day and casting the shortest shadow. This is also the origin of the terms ante meridiem (a.m.) and post meridiem (p.m.), as noted below. The Sun is directly overhead at solar noon at the Equator on the equinoxes, at the Tropic of Cancer (latitude 23.44° N) on the June solstice, and at the Tropic of Capricorn (23.44° S) on the December solstice. In the Northern Hemisphere, north of the Tropic of Cancer, the Sun is due south of the observer at solar noon; in the Southern Hemisphere, south of the Tropic of Capricorn, it is due north.
When the Sun contacts the observer's meridian at the observer's zenith, it is perceived to be directly overhead and no shadows are cast. This occurs at Earth's subsolar point, a point which moves around the tropics throughout the year.
The elapsed time from the local solar noon of one day to the next is exactly 24 hours on only four occasions in any given year. This occurs when the effects of Earth's obliquity of the ecliptic and its orbital speed around the Sun offset each other. These four days for the current epoch are centered on 11 February, 13 May, 26 July, and 3 November. It occurs at only one particular line of longitude in each instance. This line varies year to year, since Earth's true year is not an integer number of days. This event time and location also varies due to Earth's orbit being gravitationally perturbed by the planets. These four 24-hour days occur in both hemispheres simultaneously. The precise Coordinated Universal Times for these four days also mark when the opposite line of longitude, 180° away, experiences precisely 24 hours from local midnight to local midnight the next day. Thus, four varying great circles of longitude define from year to year when a 24-hour day (noon to noon or midnight to midnight) occurs.
The two longest time spans from noon to noon occur twice each year, around 20 June (24 hours plus 13 seconds) and 21 December (24 hours plus 30 seconds). The shortest time spans occur twice each year, around 25 March (24 hours minus 18 seconds) and 13 September (24 hours minus 22 seconds).
For the same reasons, solar noon and "clock noon" are usually not the same. The equation of time shows that the reading of a clock at solar noon will be higher or lower than 12:00 by as much as 16 minutes. Additionally, due to the political nature of time zones, as well as the application of daylight saving time, it can be off by more than an hour.
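As an added illustration of how longitude, time zone, and the equation of time combine (not drawn from this article's sources): the Python sketch below estimates the clock time of solar noon. The coefficients are a widely used low-precision fit of the equation of time, and the function and variable names are assumptions of this sketch, not standard terminology.

```python
import math

def equation_of_time_minutes(day_of_year: int) -> float:
    """Approximate equation of time (sundial minus clock), in minutes.

    A common low-precision fit, accurate to within about half a minute.
    """
    b = 2.0 * math.pi * (day_of_year - 81) / 364.0
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

def solar_noon_clock_time(longitude_deg: float, utc_offset_hours: float,
                          day_of_year: int) -> str:
    """Clock time of solar noon, ignoring daylight saving time.

    longitude_deg: degrees east of Greenwich (west is negative).
    """
    zone_meridian = 15.0 * utc_offset_hours  # 15 degrees of longitude per hour
    # The Sun transits 4 minutes earlier per degree east of the zone meridian,
    # and earlier still when the equation of time is positive.
    minutes = (720.0
               - 4.0 * (longitude_deg - zone_meridian)
               - equation_of_time_minutes(day_of_year))
    return f"{int(minutes // 60):02d}:{int(minutes % 60):02d}"

# Example: Amsterdam (4.9 degrees E, UTC+1) in early February (day 35)
print(solar_noon_clock_time(4.9, 1.0, 35))  # roughly 12:54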
Nomenclature
In the US, noon is commonly indicated by 12 p.m., and midnight by 12 a.m. While some argue that such usage is "improper" based on the Latin meaning (a.m. stands for ante meridiem and p.m. for post meridiem, meaning "before midday" and "after midday" respectively), digital clocks are unable to display anything else, and an arbitrary decision must be made. An earlier standard of indicating noon as "12M" or "12m" (for "meridies"), which was specified in the U.S. GPO Government Style Manual, has fallen into relative obscurity; the current edition of the GPO makes no mention of it. However, due to the lack of an international standard, the use of "12 a.m." and "12 p.m." can be confusing. Common alternative methods of representing these times are:
to use a 24-hour clock (00:00 for midnight at the start of a day, 12:00 for noon, and 24:00 for midnight at the end of a day; but never 24:01)
to use "12 noon" or "12 midnight" (though "12 midnight" may still present ambiguity regarding the specific date)
to specify midnight as between two successive days or dates (as in "midnight Saturday/Sunday" or "midnight December 14/15")
to avoid those specific times and to use "11:59 p.m." or "12:01 a.m." instead. (This is common in the travel industry to avoid confusion to passengers' schedules, especially train and plane schedules.)
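The 12 a.m./12 p.m. convention described above also has a concrete counterpart in software, where most time-parsing libraries fix 12 a.m. as 00:00 and 12 p.m. as 12:00. A small Python illustration, added here as an aside:

```python
from datetime import datetime

# Python's strptime follows the common convention: "12 AM" maps to hour 0
# (midnight at the start of the day) and "12 PM" maps to hour 12 (noon).
for text in ("12:00 AM", "12:00 PM", "12:01 AM"):
    t = datetime.strptime(text, "%I:%M %p")
    print(f"{text} -> {t.hour:02d}:{t.minute:02d}")
# 12:00 AM -> 00:00
# 12:00 PM -> 12:00
# 12:01 AM -> 00:01
```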
See also
Afternoon
Analemma
Dipleidoscope
Hour angle
Solar azimuth angle
Notes
References
External links
Generate a solar noon calendar for your location
U.S. Government Printing Office Style Manual (2008), 30th edition
Shows the hour and angle of sunrise, noon, and sunset drawn over a map.
Real Sun Time – gives the exact solar time for your GPS coordinates.
Parts of a day
Time in astronomy | Noon | [
"Astronomy",
"Technology"
] | 1,355 | [
"Time in astronomy",
"Parts of a day",
"Components"
] |
209,128 | https://en.wikipedia.org/wiki/Quotient%20rule | In calculus, the quotient rule is a method of finding the derivative of a function that is the ratio of two differentiable functions. Let $f(x)=\frac{g(x)}{h(x)}$, where both $g$ and $h$ are differentiable and $h(x)\neq 0$. The quotient rule states that the derivative of $f(x)$ is
$$f'(x)=\frac{g'(x)h(x)-g(x)h'(x)}{[h(x)]^2}.$$
It is provable in many ways by using other derivative rules.
Examples
Example 1: Basic example
Given $f(x)=\frac{e^x}{x^2}$, let $g(x)=e^x$ and $h(x)=x^2$; then using the quotient rule:
$$f'(x)=\frac{e^x\,x^2-e^x\,(2x)}{(x^2)^2}=\frac{e^x\,x\,(x-2)}{x^4}=\frac{e^x(x-2)}{x^3}.$$
Example 2: Derivative of tangent function
The quotient rule can be used to find the derivative of $\tan x=\frac{\sin x}{\cos x}$ as follows:
$$\frac{d}{dx}\tan x=\frac{\frac{d}{dx}(\sin x)\cos x-\sin x\,\frac{d}{dx}(\cos x)}{\cos^2 x}=\frac{\cos^2 x+\sin^2 x}{\cos^2 x}=\frac{1}{\cos^2 x}=\sec^2 x.$$
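The rule and both examples above can be checked symbolically. The short SymPy sketch below is an added verification aid; SymPy itself and the variable names used here are assumptions of this illustration, not part of the article:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')(x)
h = sp.Function('h')(x)

# Derivative of g/h, rewritten over a common denominator
lhs = sp.together(sp.diff(g / h, x))
rhs = (sp.diff(g, x) * h - g * sp.diff(h, x)) / h**2
print(sp.simplify(lhs - rhs))  # 0 -> the quotient rule holds

# Example 2: d/dx tan(x) = sec(x)**2
print(sp.simplify(sp.diff(sp.sin(x) / sp.cos(x), x) - sp.sec(x)**2))  # 0
```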
Reciprocal rule
The reciprocal rule is a special case of the quotient rule in which the numerator $g(x)=1$ identically. Applying the quotient rule gives
$$f'(x)=\frac{d}{dx}\!\left[\frac{1}{h(x)}\right]=\frac{0\cdot h(x)-1\cdot h'(x)}{[h(x)]^2}=-\frac{h'(x)}{[h(x)]^2}.$$
Utilizing the chain rule yields the same result.
Proofs
Proof from derivative definition and limit properties
Let $f(x)=\frac{g(x)}{h(x)}.$ Applying the definition of the derivative and properties of limits gives the following proof, with the term $g(x)h(x)$ added and subtracted to allow splitting and factoring in subsequent steps without affecting the value:
$$\begin{aligned} f'(x) &= \lim_{k\to 0}\frac{f(x+k)-f(x)}{k} = \lim_{k\to 0}\frac{1}{k}\left[\frac{g(x+k)}{h(x+k)}-\frac{g(x)}{h(x)}\right] \\ &= \lim_{k\to 0}\frac{g(x+k)h(x)-g(x)h(x+k)}{k\,h(x)h(x+k)} \\ &= \lim_{k\to 0}\frac{[g(x+k)h(x)-g(x)h(x)]-[g(x)h(x+k)-g(x)h(x)]}{k\,h(x)h(x+k)} \\ &= \lim_{k\to 0}\frac{\frac{g(x+k)-g(x)}{k}\,h(x)-g(x)\,\frac{h(x+k)-h(x)}{k}}{h(x)h(x+k)} \\ &= \frac{g'(x)h(x)-g(x)h'(x)}{[h(x)]^2}. \end{aligned}$$
The limit evaluation is justified by the differentiability of $h$, implying continuity, which can be expressed as $\lim_{k\to 0}h(x+k)=h(x)$.
Proof using implicit differentiation
Let $f(x)=\frac{g(x)}{h(x)},$ so that $g(x)=f(x)h(x).$
The product rule then gives $g'(x)=f'(x)h(x)+f(x)h'(x).$
Solving for $f'(x)$ and substituting back $\frac{g(x)}{h(x)}$ for $f(x)$ gives:
$$f'(x)=\frac{g'(x)-f(x)h'(x)}{h(x)}=\frac{g'(x)-\frac{g(x)}{h(x)}\,h'(x)}{h(x)}=\frac{g'(x)h(x)-g(x)h'(x)}{[h(x)]^2}.$$
Proof using the reciprocal rule or chain rule
Let $f(x)=\frac{g(x)}{h(x)}=g(x)\cdot\frac{1}{h(x)}.$
Then the product rule gives
$$f'(x)=g'(x)\cdot\frac{1}{h(x)}+g(x)\cdot\frac{d}{dx}\!\left[\frac{1}{h(x)}\right].$$
To evaluate the derivative in the second term, apply the reciprocal rule, or the power rule along with the chain rule:
$$\frac{d}{dx}\!\left[\frac{1}{h(x)}\right]=-\frac{h'(x)}{[h(x)]^2}.$$
Substituting the result into the expression gives
$$f'(x)=\frac{g'(x)}{h(x)}-g(x)\cdot\frac{h'(x)}{[h(x)]^2}=\frac{g'(x)h(x)-g(x)h'(x)}{[h(x)]^2}.$$
Proof by logarithmic differentiation
Let $f(x)=\frac{g(x)}{h(x)}.$ Taking the absolute value and natural logarithm of both sides of the equation gives
$$\ln|f(x)|=\ln\left|\frac{g(x)}{h(x)}\right|.$$
Applying properties of the absolute value and logarithms,
$$\ln|f(x)|=\ln|g(x)|-\ln|h(x)|.$$
Taking the logarithmic derivative of both sides,
$$\frac{f'(x)}{f(x)}=\frac{g'(x)}{g(x)}-\frac{h'(x)}{h(x)}.$$
Solving for $f'(x)$ and substituting back $\frac{g(x)}{h(x)}$ for $f(x)$ gives:
$$f'(x)=f(x)\left[\frac{g'(x)}{g(x)}-\frac{h'(x)}{h(x)}\right]=\frac{g(x)}{h(x)}\left[\frac{g'(x)}{g(x)}-\frac{h'(x)}{h(x)}\right]=\frac{g'(x)h(x)-g(x)h'(x)}{[h(x)]^2}.$$
Taking the absolute value of the functions is necessary for the logarithmic differentiation of functions that may have negative values, as logarithms are only real-valued for positive arguments. This works because $\frac{d}{dx}\ln|x|=\frac{1}{x}$, which justifies taking the absolute value of the functions for logarithmic differentiation.
Higher order derivatives
Implicit differentiation can be used to compute the $n$th derivative of a quotient (partially in terms of its first derivatives). For example, differentiating $f(x)h(x)=g(x)$ twice (resulting in $f''(x)h(x)+2f'(x)h'(x)+f(x)h''(x)=g''(x)$) and then solving for $f''(x)$ yields
$$f''(x)=\frac{g''(x)-2f'(x)h'(x)-f(x)h''(x)}{h(x)}.$$
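The second-derivative formula can be verified the same way; again an illustrative sketch assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')(x)
h = sp.Function('h')(x)
f = g / h

# f''(x) = (g'' - 2 f' h' - f h'') / h, from differentiating f*h = g twice
claimed = (sp.diff(g, x, 2) - 2 * sp.diff(f, x) * sp.diff(h, x)
           - f * sp.diff(h, x, 2)) / h
print(sp.simplify(sp.diff(f, x, 2) - claimed))  # 0 -> the formula holds
```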
See also
References
Articles containing proofs
Differentiation rules
Theorems in analysis
Theorems in calculus | Quotient rule | [
"Mathematics"
] | 458 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in calculus",
"Calculus",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems"
] |
209,271 | https://en.wikipedia.org/wiki/Pieter%20Zeeman | Pieter Zeeman (, ; ; 25 May 1865 – 9 October 1943) was a Dutch physicist who shared the 1902 Nobel Prize in Physics with Hendrik Lorentz for his discovery of the Zeeman effect.
Childhood and youth
Pieter Zeeman was born in Zonnemaire, a small town on the island of Schouwen-Duiveland, the Netherlands, the son of Rev Catharinus Forandinus Zeeman, a minister of the Dutch Reformed Church, and his wife, Willemina Worst.
Pieter became interested in physics at an early age. In 1883, the aurora borealis happened to be visible in the Netherlands. Zeeman, then a student at the high school in Zierikzee, made a drawing and description of the phenomenon and submitted it to Nature, where it was published. The editor praised "the careful observations of Professor Zeeman from his observatory in Zonnemaire".
After finishing high school in 1883, Zeeman went to Delft for supplementary education in classical languages, then a requirement for admission to University. He stayed at the home of Dr J.W. Lely, co-principal of the gymnasium and brother of Cornelis Lely, who was responsible for the concept and realization of the Zuiderzee Works. While in Delft, he first met Heike Kamerlingh Onnes, who was to become his thesis adviser.
Education and early career
After Zeeman passed the qualification exams in 1885, he studied physics at the University of Leiden under Kamerlingh Onnes and Hendrik Lorentz. In 1890, even before finishing his thesis, he became Lorentz's assistant. This allowed him to participate in a research programme on the Kerr effect. In 1893 he submitted his doctoral thesis on the Kerr effect, the reflection of polarized light on a magnetized surface. After obtaining his doctorate he went for half a year to Friedrich Kohlrausch's institute in Strasbourg. In 1895, after returning from Strasbourg, Zeeman became Privatdozent in mathematics and physics in Leiden. The same year he married Johanna Elisabeth Lebret (1873–1962); they had three daughters and one son.
In 1896, shortly before moving from Leiden to Amsterdam, he measured the splitting of spectral lines by a strong magnetic field, a discovery now known as the Zeeman effect, for which he won the 1902 Nobel Prize in Physics. This research involved an investigation of the effect of magnetic fields on a light source. He discovered that a spectral line is split into several components in the presence of a magnetic field. Lorentz first heard about Zeeman's observations on Saturday 31 October 1896 at the meeting of the Royal Netherlands Academy of Arts and Sciences in Amsterdam, where these results were communicated by Kamerlingh Onnes. The next Monday, Lorentz called Zeeman into his office and presented him with an explanation of his observations, based on Lorentz's theory of electromagnetic radiation.
The importance of Zeeman's discovery soon became apparent. It confirmed Lorentz's prediction about the polarization of light emitted in the presence of a magnetic field. Thanks to Zeeman's work it became clear that the oscillating particles that according to Lorentz were the source of light emission were negatively charged, and were a thousandfold lighter than the hydrogen atom. This conclusion was reached well before J. J. Thomson's discovery of the electron. The Zeeman effect thus became an important tool for elucidating the structure of the atom.
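For a quantitative sense of the discovery (an illustrative aside using the standard classical result, not anything stated in the sources above): in Lorentz's theory, the "normal" Zeeman effect splits a spectral line of frequency $\nu_0$ into a triplet $\nu_0$ and $\nu_0\pm\Delta\nu$, with
$$\Delta\nu=\frac{eB}{4\pi m_e},$$
so measuring the splitting $\Delta\nu$ in a known magnetic field $B$ yields the charge-to-mass ratio $e/m_e$ of the oscillating particles. This is how their mass could be inferred to be roughly a thousand times smaller than that of a hydrogen atom.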
Professor in Amsterdam
Shortly after his discovery, Zeeman was offered a position as lecturer in Amsterdam, where he started to work in the autumn of 1896. In 1900, this was followed by his promotion to professor of physics at the University of Amsterdam. In 1902, together with his former mentor Lorentz, he received the Nobel Prize for Physics for the discovery of the Zeeman effect. Five years later, in 1908, he succeeded Van der Waals as full professor and Director of the Physics Institute in Amsterdam.
In 1918 he published "Some experiments on gravitation: The ratio of mass to weight for crystals and radioactive substances" in the Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen, experimentally confirming the equivalence principle with regard to gravitational and inertial mass.
A new laboratory built in Amsterdam in 1923 was renamed the Zeeman Laboratory in 1940. This new facility allowed Zeeman to pursue a refined investigation of the Zeeman effect. For the remainder of his career he remained interested in research in magneto-optic effects. He also investigated the propagation of light in moving media. This subject became the focus of a renewed interest because of special relativity, and enjoyed a keen interest from Lorentz and Albert Einstein. Later in his career he became interested in mass spectrometry.
Later years
In 1898 Zeeman was elected to membership of the Royal Netherlands Academy of Arts and Sciences in Amsterdam, and he served as its secretary from 1912 to 1920. He won the Henry Draper Medal in 1921, and several other awards and honorary degrees. Zeeman was elected a Foreign Member of the Royal Society (ForMemRS) in 1921. He retired as a professor in 1935.
Zeeman died on 9 October 1943 in Amsterdam, and was buried in Haarlem.
Awards and honors
Zeeman received the following awards for his contributions.
Nobel Prize for Physics (1902)
Matteucci Medal (1912)
Elected a Foreign Member of the Royal Society (ForMemRS) in 1921
Henry Draper Medal from the National Academy of Sciences (1921)
Rumford Medal (1922)
Franklin Medal (1925)
The crater Zeeman on the Moon is named in his honour.
See also
Atom and Atomic Theory
Bohr–Sommerfeld model
Fresnel drag coefficient
Light-dragging effects
References
External links
Available at Gallica: the "Address" of Gabriel Bertrand of December 20, 1943 at the French Academy; he gives biographical sketches of the lives of recently deceased members, including Pieter Zeeman, David Hilbert and Georges Giraud.
Albert van Helden Pieter Zeeman 1865 – 1943 In: K. van Berkel, A. van Helden and L. Palm ed., A History of Science in The Netherlands. Survey, Themes and Reference (Leiden: Brill, 1999) 606 - 608.
Biography at the Nobel e-museum and Nobel Lecture.
P.F.A. Klinkenberg, Zeeman, Pieter (1865-1943), in Biografisch Woordenboek van Nederland.
Biography of Pieter Zeeman (1865 – 1943) at the National library of the Netherlands.
Anne J. Kox, Wetenschappelijke feiten en postmoderne fictie in de wetenschapsgeschiedenis, Inaugural lecture (1999).
Pim de Bie, prof.dr. P. Zeeman, Zonnemaire 25 May 1865 – Amsterdam 9 October 1943. Gravesite of Pieter Zeeman
Pieter Zeeman, Bijzondere collecties Leiden.
photo & short info
1865 births
1943 deaths
20th-century Dutch physicists
19th-century Dutch physicists
Nobel laureates in Physics
Dutch Nobel laureates
Leiden University alumni
Academic staff of Leiden University
Academic staff of the University of Amsterdam
People from Zonnemaire
Quantum physicists
Foreign members of the Royal Society
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the Royal Academy of Belgium
Experimental physicists
Spectroscopists
Recipients of the Matteucci Medal
Recipients of Franklin Medal | Pieter Zeeman | [
"Physics",
"Chemistry"
] | 1,527 | [
"Spectrum (physical sciences)",
"Physical chemists",
"Analytical chemists",
"Quantum physicists",
"Quantum mechanics",
"Spectroscopists",
"Experimental physics",
"Spectroscopy",
"Experimental physicists"
] |
209,455 | https://en.wikipedia.org/wiki/Thylakoid | Thylakoids are membrane-bound compartments inside chloroplasts and cyanobacteria. They are the site of the light-dependent reactions of photosynthesis. Thylakoids consist of a thylakoid membrane surrounding a thylakoid lumen. Chloroplast thylakoids frequently form stacks of disks referred to as grana (singular: granum). Grana are connected by intergranal or stromal thylakoids, which join granum stacks together as a single functional compartment.
In thylakoid membranes, chlorophyll pigments are found in packets called quantasomes. Each quantasome contains 230 to 250 chlorophyll molecules.
Etymology
The word thylakoid comes from the Greek word thylakos (θύλακος), meaning "sac" or "pouch". Thus, thylakoid means "sac-like" or "pouch-like".
Structure
Thylakoids are membrane-bound structures embedded in the chloroplast stroma. A stack of thylakoids is called a granum and resembles a stack of coins.
Membrane
The thylakoid membrane is the site of the light-dependent reactions of photosynthesis, with the photosynthetic pigments embedded directly in the membrane. It appears as an alternating pattern of dark and light bands, each measuring 1 nanometre. The thylakoid lipid bilayer shares characteristic features with prokaryotic membranes and the inner chloroplast membrane. For example, acidic lipids can be found in thylakoid membranes, in cyanobacteria, and in other photosynthetic bacteria, and are involved in the functional integrity of the photosystems. The thylakoid membranes of higher plants are composed primarily of phospholipids and galactolipids that are asymmetrically arranged along and across the membranes. Thylakoid membranes are richer in galactolipids than in phospholipids, and predominantly consist of the hexagonal phase II-forming lipid monogalactosyl diglyceride. Despite this unique composition, plant thylakoid membranes have been shown to assume largely lipid-bilayer dynamic organization. Lipids forming the thylakoid membranes, richest in high-fluidity linolenic acid, are synthesized in a complex pathway involving exchange of lipid precursors between the endoplasmic reticulum and the inner membrane of the plastid envelope, and are transported from the inner membrane to the thylakoids via vesicles.
Lumen
The thylakoid lumen is a continuous aqueous phase enclosed by the thylakoid membrane. It plays an important role for photophosphorylation during photosynthesis. During the light-dependent reaction, protons are pumped across the thylakoid membrane into the lumen making it acidic down to pH 4.
Granum and stroma lamellae
In higher plants thylakoids are organized into a granum-stroma membrane assembly. A granum (plural grana) is a stack of thylakoid discs. Chloroplasts can have from 10 to 100 grana. Grana are connected by stroma thylakoids, also called intergranal thylakoids or lamellae. Grana thylakoids and stroma thylakoids can be distinguished by their different protein composition. Grana contribute to chloroplasts' large surface area to volume ratio. A recent electron tomography study of the thylakoid membranes has shown that the stroma lamellae are organized in wide sheets perpendicular to the grana stack axis and form multiple right-handed helical surfaces at the granal interface. Left-handed helical surfaces consolidate between the right-handed helices and sheets. This complex network of alternating helical membrane surfaces of different radii and pitch was shown to minimize the surface and bending energies of the membranes. This new model, the most extensive one generated to date, revealed that features from two, seemingly contradictory, older models coexist in the structure. Notably, similar arrangements of helical elements of alternating handedness, often referred to as "parking garage" structures, were proposed to be present in the endoplasmic reticulum and in ultradense nuclear matter. This structural organization may constitute a fundamental geometry for connecting between densely packed layers or sheets.
Formation
Chloroplasts develop from proplastids when seedlings emerge from the ground. Thylakoid formation requires light. In the plant embryo and in the absence of light, proplastids develop into etioplasts that contain semicrystalline membrane structures called prolamellar bodies. When exposed to light, these prolamellar bodies develop into thylakoids. This does not happen in seedlings grown in the dark, which undergo etiolation. Underexposure to light can cause the thylakoids to fail, which in turn causes the chloroplasts to fail, resulting in the death of the plant.
Thylakoid formation requires the action of vesicle-inducing protein in plastids 1 (VIPP1). Plants cannot survive without this protein, and reduced VIPP1 levels lead to slower growth and paler plants with reduced ability to photosynthesize. VIPP1 appears to be required for basic thylakoid membrane formation, but not for the assembly of protein complexes of the thylakoid membrane. It is conserved in all organisms containing thylakoids, including cyanobacteria, green algae, such as Chlamydomonas, and higher plants, such as Arabidopsis thaliana.
Isolation and fractionation
Thylakoids can be purified from plant cells using a combination of differential and gradient centrifugation. Disruption of isolated thylakoids, for example by mechanical shearing, releases the lumenal fraction. Peripheral and integral membrane fractions can be extracted from the remaining membrane fraction. Treatment with sodium carbonate (Na2CO3) detaches peripheral membrane proteins, whereas treatment with detergents and organic solvents solubilizes integral membrane proteins.
Proteins
Thylakoids contain many integral and peripheral membrane proteins, as well as lumenal proteins. Recent proteomics studies of thylakoid fractions have provided further details on the protein composition of the thylakoids. These data have been summarized in several plastid protein databases that are available online.
According to these studies, the thylakoid proteome consists of at least 335 different proteins. Out of these, 89 are in the lumen, 116 are integral membrane proteins, 62 are peripheral proteins on the stroma side, and 68 peripheral proteins on the lumenal side. Additional low-abundance lumenal proteins can be predicted through computational methods. Of the thylakoid proteins with known functions, 42% are involved in photosynthesis. The next largest functional groups include proteins involved in protein targeting, processing and folding with 11%, oxidative stress response (9%) and translation (8%).
Integral membrane proteins
Thylakoid membranes contain integral membrane proteins which play an important role in light-harvesting and the light-dependent reactions of photosynthesis. There are four major protein complexes in the thylakoid membrane:
Photosystems I and II
Cytochrome b6f complex
ATP synthase
Photosystem II is located mostly in the grana thylakoids, whereas photosystem I and ATP synthase are mostly located in the stroma thylakoids and the outer layers of grana. The cytochrome b6f complex is distributed evenly throughout thylakoid membranes. Due to the separate location of the two photosystems in the thylakoid membrane system, mobile electron carriers are required to shuttle electrons between them. These carriers are plastoquinone and plastocyanin. Plastoquinone shuttles electrons from photosystem II to the cytochrome b6f complex, whereas plastocyanin carries electrons from the cytochrome b6f complex to photosystem I.
Together, these proteins make use of light energy to drive electron transport chains that generate a chemiosmotic potential across the thylakoid membrane and NADPH, a product of the terminal redox reaction. The ATP synthase uses the chemiosmotic potential to make ATP during photophosphorylation.
Photosystems
These photosystems are light-driven redox centers, each consisting of an antenna complex that uses chlorophylls and accessory photosynthetic pigments such as carotenoids and phycobiliproteins to harvest light at a variety of wavelengths. Each antenna complex has between 250 and 400 pigment molecules and the energy they absorb is shuttled by resonance energy transfer to a specialized chlorophyll a at the reaction center of each photosystem. When either of the two chlorophyll a molecules at the reaction center absorb energy, an electron is excited and transferred to an electron-acceptor molecule. Photosystem I contains a pair of chlorophyll a molecules, designated P700, at its reaction center that maximally absorbs 700 nm light. Photosystem II contains P680 chlorophyll that absorbs 680 nm light best (note that these wavelengths correspond to deep red – see the visible spectrum). The P is short for pigment and the number is the specific absorption peak in nanometers for the chlorophyll molecules in each reaction center. This is the green pigment present in plants that is not visible to unaided eyes.
Cytochrome b6f complex
The cytochrome b6f complex is part of the thylakoid electron transport chain and couples electron transfer to the pumping of protons into the thylakoid lumen. Energetically, it is situated between the two photosystems and transfers electrons from photosystem II-plastoquinone to plastocyanin-photosystem I.
ATP synthase
The thylakoid ATP synthase is a CF1FO-ATP synthase similar to the mitochondrial ATPase. It is integrated into the thylakoid membrane with the CF1-part sticking into the stroma. Thus, ATP synthesis occurs on the stromal side of the thylakoids where the ATP is needed for the light-independent reactions of photosynthesis.
Lumen proteins
The electron transport protein plastocyanin is present in the lumen and shuttles electrons from the cytochrome b6f protein complex to photosystem I. While plastoquinones are lipid-soluble and therefore move within the thylakoid membrane, plastocyanin moves through the thylakoid lumen.
The lumen of the thylakoids is also the site of water oxidation by the oxygen evolving complex associated with the lumenal side of photosystem II.
Lumenal proteins can be predicted computationally based on their targeting signals. In Arabidopsis, among the predicted lumenal proteins possessing the Tat signal, the largest groups with known functions are involved in protein processing (proteolysis and folding; 19%), photosynthesis (18%), metabolism (11%), and redox carriers and defense (7%).
Protein expression
Chloroplasts have their own genome, which encodes a number of thylakoid proteins. However, during the course of plastid evolution from their cyanobacterial endosymbiotic ancestors, extensive gene transfer from the chloroplast genome to the cell nucleus took place. This results in the four major thylakoid protein complexes being encoded in part by the chloroplast genome and in part by the nuclear genome. Plants have developed several mechanisms to co-regulate the expression of the different subunits encoded in the two different organelles to assure the proper stoichiometry and assembly of these protein complexes. For example, transcription of nuclear genes encoding parts of the photosynthetic apparatus is regulated by light. Biogenesis, stability and turnover of thylakoid protein complexes are regulated by phosphorylation via redox-sensitive kinases in the thylakoid membranes. The translation rate of chloroplast-encoded proteins is controlled by the presence or absence of assembly partners (control by epistasy of synthesis). This mechanism involves negative feedback through binding of excess protein to the 5' untranslated region of the chloroplast mRNA. Chloroplasts also need to balance the ratios of photosystem I and II for the electron transfer chain. The redox state of the electron carrier plastoquinone in the thylakoid membrane directly affects the transcription of chloroplast genes encoding proteins of the reaction centers of the photosystems, thus counteracting imbalances in the electron transfer chain.
Protein targeting to the thylakoids
Thylakoid proteins are targeted to their destination via signal peptides and prokaryotic-type secretory pathways inside the chloroplast. Most thylakoid proteins encoded by a plant's nuclear genome need two targeting signals for proper localization: An N-terminal chloroplast targeting peptide (shown in yellow in the figure), followed by a thylakoid targeting peptide (shown in blue). Proteins are imported through the translocon of the outer and inner membrane (Toc and Tic) complexes. After entering the chloroplast, the first targeting peptide is cleaved off by a protease processing imported proteins. This unmasks the second targeting signal and the protein is exported from the stroma into the thylakoid in a second targeting step. This second step requires the action of protein translocation components of the thylakoids and is energy-dependent. Proteins are inserted into the membrane via the SRP-dependent pathway (1), the Tat-dependent pathway (2), or spontaneously via their transmembrane domains (not shown in the figure). Lumenal proteins are exported across the thylakoid membrane into the lumen by either the Tat-dependent pathway (2) or the Sec-dependent pathway (3) and released by cleavage from the thylakoid targeting signal. The different pathways utilize different signals and energy sources. The Sec (secretory) pathway requires ATP as an energy source and consists of SecA, which binds to the imported protein and a Sec membrane complex to shuttle the protein across. Proteins with a twin arginine motif in their thylakoid signal peptide are shuttled through the Tat (twin arginine translocation) pathway, which requires a membrane-bound Tat complex and the pH gradient as an energy source. Some other proteins are inserted into the membrane via the SRP (signal recognition particle) pathway. The chloroplast SRP can interact with its target proteins either post-translationally or co-translationally, thus transporting imported proteins as well as those that are translated inside the chloroplast. The SRP pathway requires GTP and the pH gradient as energy sources. Some transmembrane proteins may also spontaneously insert into the membrane from the stromal side without energy requirement.
Function
The thylakoids are the site of the light-dependent reactions of photosynthesis. These include light-driven water oxidation and oxygen evolution, the pumping of protons across the thylakoid membranes coupled with the electron transport chain of the photosystems and cytochrome complex, and ATP synthesis by the ATP synthase utilizing the generated proton gradient.
Water photolysis
The first step in photosynthesis is the light-driven reduction (splitting) of water to provide the electrons for the photosynthetic electron transport chains as well as protons for the establishment of a proton gradient. The water-splitting reaction occurs on the lumenal side of the thylakoid membrane and is driven by the light energy captured by the photosystems. This oxidation of water conveniently produces the waste product O2 that is vital for cellular respiration. The molecular oxygen formed by the reaction is released into the atmosphere.
Electron transport chains
Two different variations of electron transport are used during photosynthesis:
Noncyclic electron transport or non-cyclic photophosphorylation produces NADPH + H+ and ATP.
Cyclic electron transport or cyclic photophosphorylation produces only ATP.
The noncyclic variety involves the participation of both photosystems, while the cyclic electron flow is dependent on only photosystem I.
Photosystem I uses light energy to reduce NADP+ to NADPH + H+, and is active in both noncyclic and cyclic electron transport. In cyclic mode, the energized electron is passed down a chain that ultimately returns it (in its base state) to the chlorophyll that energized it.
Photosystem II uses light energy to oxidize water molecules, producing electrons (e−), protons (H+), and molecular oxygen (O2), and is only active in noncyclic transport. Electrons in this system are not conserved, but are rather continually entering from oxidized 2H2O (O2 + 4 H+ + 4 e−) and exiting with NADP+ when it is finally reduced to NADPH.
Chemiosmosis
A major function of the thylakoid membrane and its integral photosystems is the establishment of chemiosmotic potential. The carriers in the electron transport chain use some of the electron's energy to actively transport protons from the stroma to the lumen. During photosynthesis, the lumen becomes acidic, as low as pH 4, compared to pH 8 in the stroma. This represents a 10,000 fold concentration gradient for protons across the thylakoid membrane.
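As a worked step added here for clarity, the 10,000-fold figure follows directly from the definition of pH:
$$\frac{[\mathrm{H^+}]_{\mathrm{lumen}}}{[\mathrm{H^+}]_{\mathrm{stroma}}}=10^{\,\mathrm{pH_{stroma}}-\mathrm{pH_{lumen}}}=10^{\,8-4}=10^{4}.$$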
Source of proton gradient
The protons in the lumen come from three primary sources.
Photolysis by photosystem II oxidises water to oxygen, protons and electrons in the lumen.
The transfer of electrons from photosystem II to plastoquinone during non-cyclic electron transport consumes two protons from the stroma. These are released in the lumen when the reduced plastoquinol is oxidized by the cytochrome b6f protein complex on the lumen side of the thylakoid membrane. From the plastoquinone pool, electrons pass through the cytochrome b6f complex. This integral membrane assembly resembles cytochrome bc1.
The reduction of plastoquinone by ferredoxin during cyclic electron transport also transfers two protons from the stroma to the lumen.
The proton gradient is also caused by the consumption of protons in the stroma to make NADPH from NADP+ at the NADP reductase.
ATP generation
The molecular mechanism of ATP (Adenosine triphosphate) generation in chloroplasts is similar to that in mitochondria and takes the required energy from the proton motive force (PMF). However, chloroplasts rely more on the chemical potential of the PMF to generate the potential energy required for ATP synthesis. The PMF is the sum of a proton chemical potential (given by the proton concentration gradient) and a transmembrane electrical potential (given by charge separation across the membrane). Compared to the inner membranes of mitochondria, which have a significantly higher membrane potential due to charge separation, thylakoid membranes lack a charge gradient. To compensate for this, the 10,000 fold proton concentration gradient across the thylakoid membrane is much higher compared to a 10 fold gradient across the inner membrane of mitochondria. The resulting chemiosmotic potential between the lumen and stroma is high enough to drive ATP synthesis using the ATP synthase. As the protons travel back down the gradient through channels in ATP synthase, ADP + Pi are combined into ATP. In this manner, the light-dependent reactions are coupled to the synthesis of ATP via the proton gradient.
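As a rough quantitative sketch using the standard bioenergetic relation (an illustrative addition, with $\Delta\mathrm{pH}$ taken as the stroma-minus-lumen difference): the proton motive force combines an electrical and a chemical term,
$$\Delta p=\Delta\psi+\frac{2.303\,RT}{F}\,\Delta\mathrm{pH},$$
and at 25 °C the factor $2.303\,RT/F$ is about 59 mV per pH unit. With a negligible electrical term, as described above, the 4-unit pH gradient across the thylakoid membrane alone corresponds to roughly $59\ \mathrm{mV}\times 4\approx 240\ \mathrm{mV}$ of proton motive force.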
Thylakoid membranes in cyanobacteria
Cyanobacteria are photosynthetic prokaryotes with highly differentiated membrane systems. Cyanobacteria have an internal system of thylakoid membranes where the fully functional electron transfer chains of photosynthesis and respiration reside. The presence of different membrane systems lends these cells a unique complexity among bacteria. Cyanobacteria must be able to reorganize the membranes, synthesize new membrane lipids, and properly target proteins to the correct membrane system. The outer membrane, plasma membrane, and thylakoid membranes each have specialized roles in the cyanobacterial cell. Understanding the organization, functionality, protein composition, and dynamics of the membrane systems remains a great challenge in cyanobacterial cell biology.
In contrast to the thylakoid network of higher plants, which is differentiated into grana and stroma lamellae, the thylakoids in cyanobacteria are organized into multiple concentric shells that split and fuse to parallel layers forming a highly connected network. This results in a continuous network that encloses a single lumen (as in higher‐plant chloroplasts) and allows water‐soluble and lipid‐soluble molecules to diffuse through the entire membrane network. Moreover, perforations are often observed within the parallel thylakoid sheets. These gaps in the membrane allow for the traffic of particles of different sizes throughout the cell, including ribosomes, glycogen granules, and lipid bodies. The relatively large distance between the thylakoids provides space for the external light-harvesting antennae, the phycobilisomes. This macrostructure, as in the case of higher plants, shows some flexibility during changes in the physicochemical environment.
See also
Arthur Meyer (botanist)
André Jagendorf
Chemiosmosis
Electrochemical gradient
Endosymbiosis
Oxygen evolution
Photosynthesis
References
Textbook sources
Membrane biology
Photosynthesis
Plant anatomy
Plastids | Thylakoid | [
"Chemistry",
"Biology"
] | 4,568 | [
"Membrane biology",
"Photosynthesis",
"Plastids",
"Molecular biology",
"Biochemistry"
] |
209,459 | https://en.wikipedia.org/wiki/Alternative%20splicing | Alternative splicing, or alternative RNA splicing, or differential splicing, is an alternative splicing process during gene expression that allows a single gene to produce different splice variants. For example, some exons of a gene may be included within or excluded from the final RNA product of the gene. This means the exons are joined in different combinations, leading to different splice variants. In the case of protein-coding genes, the proteins translated from these splice variants may contain differences in their amino acid sequence and in their biological functions (see Figure).
Biologically relevant alternative splicing occurs as a normal phenomenon in eukaryotes, where it increases the number of proteins that can be encoded by the genome. In humans, it is widely believed that ~95% of multi-exonic genes are alternatively spliced to produce functional alternative products from the same gene, but many scientists believe that most of the observed splice variants are due to splicing errors and that the actual number of biologically relevant alternatively spliced genes is much lower.
Discovery
Alternative splicing was first observed in 1977. The adenovirus produces five primary transcripts early in its infectious cycle, prior to viral DNA replication, and an additional one later, after DNA replication begins. The early primary transcripts continue to be produced after DNA replication begins. The additional primary transcript produced late in infection is large and comes from 5/6 of the 32kb adenovirus genome. This is much larger than any of the individual adenovirus mRNAs present in infected cells. Researchers found that the primary RNA transcript produced by adenovirus type 2 in the late phase was spliced in many different ways, resulting in mRNAs encoding different viral proteins. In addition, the primary transcript contained multiple polyadenylation sites, giving different 3' ends for the processed mRNAs.
In 1981, the first example of alternative splicing in a transcript from a normal, endogenous gene was characterized. The gene encoding the thyroid hormone calcitonin was found to be alternatively spliced in mammalian cells. The primary transcript from this gene contains 6 exons; the calcitonin mRNA contains exons 1–4, and terminates after a polyadenylation site in exon 4. Another mRNA is produced from this pre-mRNA by skipping exon 4, and includes exons 1–3, 5, and 6. It encodes a protein known as CGRP (calcitonin gene related peptide). Examples of alternative splicing in immunoglobin gene transcripts in mammals were also observed in the early 1980s.
Since then, many other examples of biologically relevant alternative splicing have been found in eukaryotes. The "record-holder" for alternative splicing is a D. melanogaster gene called Dscam, which could potentially have 38,016 splice variants.
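That total is the product over Dscam's four clusters of mutually exclusive exons (12 alternatives for exon 4, 48 for exon 6, 33 for exon 9, and 2 for exon 17; the cluster breakdown is added here for illustration):
$$12\times 48\times 33\times 2=38{,}016.$$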
In 2021, it was discovered that the genome of adenovirus type 2, the adenovirus in which alternative splicing was first identified, was able to produce a much greater variety of splice variants than previously thought. By using next generation sequencing technology, researchers were able to update the human adenovirus type 2 transcriptome and document the presence of 904 splice variants produced by the virus through a complex pattern of alternative splicing. Very few of these splice variants have been shown to be functional, a point that the authors raise in their paper.
"An outstanding question is what roles the menagerie of novel RNAs play or whether they are spurious molecules generated by an overloaded splicing machinery."
Modes
Five basic modes of alternative splicing are generally recognized.
Exon skipping or cassette exon: in this case, an exon may be spliced out of the primary transcript or retained. This is the most common mode in mammalian pre-mRNAs.
Mutually exclusive exons: One of two exons is retained in mRNAs after splicing, but not both.
Alternative donor site: An alternative 5' splice junction (donor site) is used, changing the 3' boundary of the upstream exon.
Alternative acceptor site: An alternative 3' splice junction (acceptor site) is used, changing the 5' boundary of the downstream exon.
Intron retention: A sequence may be spliced out as an intron or simply retained. This is distinguished from exon skipping because the retained sequence is not flanked by introns. If the retained intron is in the coding region, the intron must encode amino acids in frame with the neighboring exons, or a stop codon or a shift in the reading frame will cause the protein to be non-functional. This is the rarest mode in mammals but the most common in plants.
In addition to these primary modes of alternative splicing, there are two other main mechanisms by which different mRNAs may be generated from the same gene; multiple promoters and multiple polyadenylation sites. Use of multiple promoters is properly described as a transcriptional regulation mechanism rather than alternative splicing; by starting transcription at different points, transcripts with different 5'-most exons can be generated. At the other end, multiple polyadenylation sites provide different 3' end points for the transcript. Both of these mechanisms are found in combination with alternative splicing and provide additional variety in mRNAs derived from a gene.
These modes describe basic splicing mechanisms, but may be inadequate to describe complex splicing events. For instance, the figure to the right shows 3 spliceforms from the mouse hyaluronidase 3 gene. Comparing the exonic structure shown in the first line (green) with the one in the second line (yellow) shows intron retention, whereas the comparison between the second and the third spliceform (yellow vs. blue) exhibits exon skipping. A model nomenclature to uniquely designate all possible splicing patterns has recently been proposed.
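To make the combinatorics of the five modes listed above concrete, the Python sketch below enumerates the transcripts obtainable from a toy gene by exon skipping alone; the gene model and all names are invented for illustration.

```python
from itertools import product

# Toy gene model: (exon_name, is_constitutive)
exons = [("E1", True), ("E2", False), ("E3", True), ("E4", False), ("E5", True)]

def splice_variants(exons):
    """Enumerate all transcripts from exon skipping alone.

    Constitutive exons are always included; each cassette exon is
    independently included or skipped.
    """
    choices = [(True,) if constitutive else (True, False)
               for _, constitutive in exons]
    for included in product(*choices):
        yield [name for (name, _), keep in zip(exons, included) if keep]

for variant in splice_variants(exons):
    print("-".join(variant))
# 4 variants: E1-E2-E3-E4-E5, E1-E2-E3-E5, E1-E3-E4-E5, E1-E3-E5
```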
Mechanisms
General splicing mechanism
When the pre-mRNA has been transcribed from the DNA, it includes several introns and exons. (In nematodes, the mean is 4–5 exons and introns; in the fruit fly Drosophila there can be more than 100 introns and exons in one transcribed pre-mRNA.) The exons to be retained in the mRNA are determined during the splicing process. The regulation and selection of splice sites are done by trans-acting splicing activator and splicing repressor proteins as well as cis-acting elements within the pre-mRNA itself such as exonic splicing enhancers and exonic splicing silencers.
The typical eukaryotic nuclear intron has consensus sequences defining important regions. Each intron has the sequence GU at its 5' end. Near the 3' end there is a branch site. The nucleotide at the branchpoint is always an A; the consensus around this sequence varies somewhat. In humans the branch site consensus sequence is yUnAy. The branch site is followed by a series of pyrimidines – the polypyrimidine tract – then by AG at the 3' end.
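These consensus features can be caricatured as a motif search. The following Python sketch is only illustrative: the pattern is far too crude for real splice-site prediction, and the example sequence is made up.

```python
import re

# A deliberately simplified caricature of a nuclear intron: a 5' GU,
# a branchpoint A, a polypyrimidine tract, and a 3' AG. Real splice-site
# recognition requires far richer statistical models than a regex.
intron_pattern = re.compile(r"GU[ACGU]{10,}?A[ACGU]{0,12}?[CU]{4,}[ACGU]{0,6}?AG")

pre_mrna = "AUGGCCGUAAGUACGUACGUACUAACGUACGUCUUUUCUCUCCAGGCUUAA"
match = intron_pattern.search(pre_mrna)
if match:
    print("candidate intron:", match.group())  # prints the GU...AG span
```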
Splicing of mRNA is performed by an RNA and protein complex known as the spliceosome, containing snRNPs designated U1, U2, U4, U5, and U6 (U3 is not involved in mRNA splicing). U1 binds to the 5' GU and U2, with the assistance of the U2AF protein factors, binds to the branchpoint A within the branch site. The complex at this stage is known as the spliceosome A complex. Formation of the A complex is usually the key step in determining the ends of the intron to be spliced out, and defining the ends of the exon to be retained. (The U nomenclature derives from their high uridine content).
The U4,U5,U6 complex binds, and U6 replaces the U1 position. U1 and U4 leave. The remaining complex then performs two transesterification reactions. In the first transesterification, 5' end of the intron is cleaved from the upstream exon and joined to the branch site A by a 2',5'-phosphodiester linkage. In the second transesterification, the 3' end of the intron is cleaved from the downstream exon, and the two exons are joined by a phosphodiester bond. The intron is then released in lariat form and degraded.
Regulatory elements and proteins
Splicing is regulated by trans-acting proteins (repressors and activators) and corresponding cis-acting regulatory sites (silencers and enhancers) on the pre-mRNA. However, as part of the complexity of alternative splicing, it is noted that the effects of a splicing factor are frequently position-dependent. That is, a splicing factor that serves as a splicing activator when bound to an intronic enhancer element may serve as a repressor when bound to its splicing element in the context of an exon, and vice versa. The secondary structure of the pre-mRNA transcript also plays a role in regulating splicing, such as by bringing together splicing elements or by masking a sequence that would otherwise serve as a binding element for a splicing factor. Together, these elements form a "splicing code" that governs how splicing will occur under different cellular conditions.
There are two major types of cis-acting RNA sequence elements present in pre-mRNAs and they have corresponding trans-acting RNA-binding proteins. Splicing silencers are sites to which splicing repressor proteins bind, reducing the probability that a nearby site will be used as a splice junction. These can be located in the intron itself (intronic splicing silencers, ISS) or in a neighboring exon (exonic splicing silencers, ESS). They vary in sequence, as well as in the types of proteins that bind to them. The majority of splicing repressors are heterogeneous nuclear ribonucleoproteins (hnRNPs) such as hnRNPA1 and polypyrimidine tract binding protein (PTB). Splicing enhancers are sites to which splicing activator proteins bind, increasing the probability that a nearby site will be used as a splice junction. These also may occur in the intron (intronic splicing enhancers, ISE) or exon (exonic splicing enhancers, ESE). Most of the activator proteins that bind to ISEs and ESEs are members of the SR protein family. Such proteins contain RNA recognition motifs and arginine and serine-rich (RS) domains.
In general, the determinants of splicing work in an inter-dependent manner that depends on context, so that the rules governing how splicing is regulated form a splicing code. The presence of a particular cis-acting RNA sequence element may increase the probability that a nearby site will be spliced in some cases, but decrease the probability in other cases, depending on context. The context within which regulatory elements act includes cis-acting context that is established by the presence of other RNA sequence features, and trans-acting context that is established by cellular conditions. For example, some cis-acting RNA sequence elements influence splicing only if multiple elements are present in the same region so as to establish context. As another example, a cis-acting element can have opposite effects on splicing, depending on which proteins are expressed in the cell (e.g., neuronal versus non-neuronal PTB). The adaptive significance of splicing silencers and enhancers is attested by studies showing that there is strong selection in human genes against mutations that produce new silencers or disrupt existing enhancers.
Examples
Exon skipping: Drosophila dsx
Pre-mRNAs from the D. melanogaster gene dsx contain 6 exons. In males, exons 1, 2, 3, 5, and 6 are joined to form the mRNA, which encodes a transcriptional regulatory protein required for male development. In females, exons 1, 2, 3, and 4 are joined, and a polyadenylation signal in exon 4 causes cleavage of the mRNA at that point. The resulting mRNA encodes a transcriptional regulatory protein required for female development.
This is an example of exon skipping. The intron upstream from exon 4 has a polypyrimidine tract that doesn't match the consensus sequence well, so that U2AF proteins bind poorly to it without assistance from splicing activators. This 3' splice acceptor site is therefore not used in males. Females, however, produce the splicing activator Transformer (Tra) (see below). The SR protein Tra2 is produced in both sexes and binds to an ESE in exon 4; if Tra is present, it binds to Tra2 and, along with another SR protein, forms a complex that assists U2AF proteins in binding to the weak polypyrimidine tract. U2 is recruited to the associated branchpoint, and this leads to inclusion of exon 4 in the mRNA.
Alternative acceptor sites: Drosophila
Pre-mRNAs of the Transformer (Tra) gene of Drosophila melanogaster undergo alternative splicing via the alternative acceptor site mode. The gene Tra encodes a protein that is expressed only in females. The primary transcript of this gene contains an intron with two possible acceptor sites. In males, the upstream acceptor site is used. This causes a longer version of exon 2 to be included in the processed transcript, including an early stop codon. The resulting mRNA encodes a truncated protein product that is inactive. Females produce the master sex determination protein Sex lethal (Sxl). The Sxl protein is a splicing repressor that binds to an ISS in the RNA of the Tra transcript near the upstream acceptor site, preventing U2AF protein from binding to the polypyrimidine tract. This prevents the use of this junction, shifting the spliceosome binding to the downstream acceptor site. Splicing at this point bypasses the stop codon, which is excised as part of the intron. The resulting mRNA encodes an active Tra protein, which itself is a regulator of alternative splicing of other sex-related genes (see dsx above).
Exon definition: Fas receptor
Multiple isoforms of the Fas receptor protein are produced by alternative splicing. Two normally occurring isoforms in humans are produced by an exon-skipping mechanism. An mRNA including exon 6 encodes the membrane-bound form of the Fas receptor, which promotes apoptosis, or programmed cell death. Increased expression of Fas receptor in skin cells chronically exposed to the sun, and absence of expression in skin cancer cells, suggests that this mechanism may be important in elimination of pre-cancerous cells in humans. If exon 6 is skipped, the resulting mRNA encodes a soluble Fas protein that does not promote apoptosis. The inclusion or skipping of the exon depends on two antagonistic proteins, TIA-1 and polypyrimidine tract-binding protein (PTB).
The 5' donor site in the intron downstream from exon 6 in the pre-mRNA has a weak agreement with the consensus sequence, and is not bound usually by the U1 snRNP. If U1 does not bind, the exon is skipped (see "a" in accompanying figure).
Binding of TIA-1 protein to an intronic splicing enhancer site stabilizes binding of the U1 snRNP. The resulting 5' donor site complex assists in binding of the splicing factor U2AF to the 3' splice site upstream of the exon, through a mechanism that is not yet known (see b).
Exon 6 contains a pyrimidine-rich exonic splicing silencer, ure6, where PTB can bind. If PTB binds, it inhibits the effect of the 5' donor complex on the binding of U2AF to the acceptor site, resulting in exon skipping (see c).
This mechanism is an example of exon definition in splicing. A spliceosome assembles on an intron, and the snRNP subunits fold the RNA so that the 5' and 3' ends of the intron are joined. However, recently studied examples such as this one show that there are also interactions between the ends of the exon. In this particular case, these exon definition interactions are necessary to allow the binding of core splicing factors prior to assembly of the spliceosomes on the two flanking introns.
Repressor-activator competition: HIV-1 tat exon 2
HIV, the retrovirus that causes AIDS in humans, produces a single primary RNA transcript, which is alternatively spliced in multiple ways to produce over 40 different mRNAs. Equilibrium among differentially spliced transcripts provides multiple mRNAs encoding different products that are required for viral multiplication. One of the differentially spliced transcripts contains the tat gene, in which exon 2 is a cassette exon that may be skipped or included. The inclusion of tat exon 2 in the RNA is regulated by competition between the splicing repressor hnRNP A1 and the SR protein SC35. Within exon 2 an exonic splicing silencer sequence (ESS) and an exonic splicing enhancer sequence (ESE) overlap. If A1 repressor protein binds to the ESS, it initiates cooperative binding of multiple A1 molecules, extending into the 5' donor site upstream of exon 2 and preventing the binding of the core splicing factor U2AF35 to the polypyrimidine tract. If SC35 binds to the ESE, it prevents A1 binding and maintains the 5' donor site in an accessible state for assembly of the spliceosome. Competition between the activator and repressor ensures that both mRNA types (with and without exon 2) are produced.
Adaptive significance
Genuine alternative splicing occurs in both protein-coding genes and non-coding genes to produce multiple products (proteins or non-coding RNAs). External information is needed in order to decide which product is made, given a DNA sequence and the initial transcript. Since the methods of regulation are inherited, this provides novel ways for mutations to affect gene expression.
Alternative splicing may provide evolutionary flexibility. A single point mutation may cause a given exon to be occasionally excluded or included from a transcript during splicing, allowing production of a new protein isoform without loss of the original protein. Studies have identified intrinsically disordered regions (see Intrinsically unstructured proteins) as enriched in the non-constitutive exons suggesting that protein isoforms may display functional diversity due to the alteration of functional modules within these regions. Such functional diversity achieved by isoforms is reflected by their expression patterns and can be predicted by machine learning approaches. Comparative studies indicate that alternative splicing preceded multicellularity in evolution, and suggest that this mechanism might have been co-opted to assist in the development of multicellular organisms.
Research based on the Human Genome Project and other genome sequencing has shown that humans have only about 30% more genes than the roundworm Caenorhabditis elegans, and only about twice as many as the fly Drosophila melanogaster. This finding led to speculation that the perceived greater complexity of humans, or vertebrates generally, might be due to higher rates of alternative splicing in humans than are found in invertebrates. However, a study on samples of 100,000 expressed sequence tags (EST) each from human, mouse, rat, cow, fly (D. melanogaster), worm (C. elegans), and the plant Arabidopsis thaliana found no large differences in frequency of alternatively spliced genes among humans and any of the other animals tested. Another study, however, proposed that these results were an artifact of the different numbers of ESTs available for the various organisms. When they compared alternative splicing frequencies in random subsets of genes from each organism, the authors concluded that vertebrates do have higher rates of alternative splicing than invertebrates.
Disease
Changes in the RNA processing machinery may lead to mis-splicing of multiple transcripts, while single-nucleotide alterations in splice sites or cis-acting splicing regulatory sites may lead to differences in splicing of a single gene, and thus in the mRNA produced from a mutant gene's transcripts. A study in 2005 involving probabilistic analyses indicated that greater than 60% of human disease-causing mutations affect splicing rather than directly affecting coding sequences. A more recent study indicates that one-third of all hereditary diseases are likely to have a splicing component. Regardless of exact percentage, a number of splicing-related diseases do exist. As described below, a prominent example of splicing-related diseases is cancer.
Abnormally spliced mRNAs are also found in a high proportion of cancerous cells. Combined RNA-Seq and proteomics analyses have revealed striking differential expression of splice isoforms of key proteins in important cancer pathways. It is not always clear whether such aberrant patterns of splicing contribute to the cancerous growth, or are merely a consequence of cellular abnormalities associated with cancer. For certain types of cancer, such as colorectal and prostate, the number of splicing errors per cancer has been shown to vary greatly between individual cancers, a phenomenon referred to as transcriptome instability. Transcriptome instability has further been shown to correlate greatly with reduced expression levels of splicing factor genes. Mutation of DNMT3A has been demonstrated to contribute to hematologic malignancies, and DNMT3A-mutated cell lines exhibit transcriptome instability as compared to their isogenic wildtype counterparts.
There is actually a reduction of alternative splicing in cancerous cells compared to normal ones, and the types of splicing differ; for instance, cancerous cells show higher levels of intron retention than normal cells, but lower levels of exon skipping. Some of the differences in splicing in cancerous cells may be due to the high frequency of somatic mutations in splicing factor genes, and some may result from changes in phosphorylation of trans-acting splicing factors. Others may be produced by changes in the relative amounts of splicing factors produced; for instance, breast cancer cells have been shown to have increased levels of the splicing factor SF2/ASF. One study found that a relatively small percentage (383 out of over 26,000) of alternative splicing variants were significantly higher in frequency in tumor cells than normal cells, suggesting that there is a limited set of genes which, when mis-spliced, contribute to tumor development. It is believed, however, that the deleterious effects of mis-spliced transcripts are usually safeguarded against and eliminated by a cellular post-transcriptional quality control mechanism termed nonsense-mediated mRNA decay (NMD).
One example of a specific splicing variant associated with cancers is in one of the human DNMT genes. Three DNMT genes encode enzymes that add methyl groups to DNA, a modification that often has regulatory effects. Several abnormally spliced DNMT3B mRNAs are found in tumors and cancer cell lines. In two separate studies, expression of two of these abnormally spliced mRNAs in mammalian cells caused changes in the DNA methylation patterns in those cells. Cells with one of the abnormal mRNAs also grew twice as fast as control cells, indicating a direct contribution to tumor development by this product.
Another example is the Ron (MST1R) proto-oncogene. An important property of cancerous cells is their ability to move and invade normal tissue. Production of an abnormally spliced transcript of Ron has been found to be associated with increased levels of the SF2/ASF in breast cancer cells. The abnormal isoform of the Ron protein encoded by this mRNA leads to cell motility.
Overexpression of a truncated splice variant of the FOSB gene – ΔFosB – in a specific population of neurons in the nucleus accumbens has been identified as the causal mechanism involved in the induction and maintenance of an addiction to drugs and natural rewards.
Recent provocative studies point to a key function of chromatin structure and histone modifications in alternative splicing regulation. These insights suggest that epigenetic regulation determines not only what parts of the genome are expressed but also how they are spliced.
Genome-scale (transcriptome-wide) analysis
Transcriptome-wide analysis of alternative splicing is typically performed by high-throughput RNA sequencing. Most commonly this is done with short-read sequencing, such as on Illumina instruments; long-read sequencing, such as on Nanopore or PacBio instruments, is even more informative. Transcriptome-wide analyses can, for example, be used to measure the amount of deviating alternative splicing, such as in a cancer cohort.
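A common summary statistic in such analyses is the "percent spliced-in" (PSI) of an exon. The following is a minimal sketch, not tied to any particular pipeline named here; it assumes junction-spanning read counts are already available and ignores the read-length normalization that production tools apply:

```python
# Minimal percent-spliced-in (PSI) sketch for one cassette exon.
# Counts are invented; real tools also normalize for the number of
# read positions that can support each junction.
def psi(inclusion_reads: int, exclusion_reads: int) -> float:
    """PSI = inclusion / (inclusion + exclusion), as a percentage."""
    total = inclusion_reads + exclusion_reads
    if total == 0:
        raise ValueError("no junction reads cover this splicing event")
    return 100.0 * inclusion_reads / total

# Hypothetical junction counts: reads supporting inclusion vs. skipping.
print(psi(inclusion_reads=120, exclusion_reads=40))  # 75.0
```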
Deep sequencing technologies have been used to conduct genome-wide analyses of both unprocessed and processed mRNAs; thus providing insights into alternative splicing. For example, results from use of deep sequencing indicate that, in humans, an estimated 95% of transcripts from multiexon genes undergo alternative splicing, with a number of pre-mRNA transcripts spliced in a tissue-specific manner. Functional genomics and computational approaches based on multiple instance learning have also been developed to integrate RNA-seq data to predict functions for alternatively spliced isoforms. Deep sequencing has also aided in the in vivo detection of the transient lariats that are released during splicing, the determination of branch site sequences, and the large-scale mapping of branchpoints in human pre-mRNA transcripts.
More historically, alternatively spliced transcripts have been found by comparing EST sequences, but this requires sequencing of very large numbers of ESTs. Most EST libraries come from a very limited number of tissues, so tissue-specific splice variants are likely to be missed in any case. High-throughput approaches to investigate splicing have, however, been developed, such as: DNA microarray-based analyses, RNA-binding assays, and deep sequencing. These methods can be used to screen for polymorphisms or mutations in or around splicing elements that affect protein binding. When combined with splicing assays, including in vivo reporter gene assays, the functional effects of polymorphisms or mutations on the splicing of pre-mRNA transcripts can then be analyzed.
In microarray analysis, arrays of DNA fragments representing individual exons (e.g. Affymetrix exon microarray) or exon/exon boundaries (e.g. arrays from ExonHit or Jivan) have been used. The array is then probed with labeled cDNA from tissues of interest. The probe cDNAs bind to DNA from the exons that are included in mRNAs in their tissue of origin, or to DNA from the boundary where two exons have been joined. This can reveal the presence of particular alternatively spliced mRNAs.
CLIP (Cross-linking and immunoprecipitation) uses UV radiation to link proteins to RNA molecules in a tissue during splicing. A trans-acting splicing regulatory protein of interest is then precipitated using specific antibodies. When the RNA attached to that protein is isolated and cloned, it reveals the target sequences for that protein. Another method for identifying RNA-binding proteins and mapping their binding to pre-mRNA transcripts is "Microarray Evaluation of Genomic Aptamers by shift (MEGAshift)". This method involves an adaptation of the "Systematic Evolution of Ligands by Exponential Enrichment (SELEX)" method together with a microarray-based readout. Use of the MEGAshift method has provided insights into the regulation of alternative splicing by allowing for the identification of sequences in pre-mRNA transcripts surrounding alternatively spliced exons that mediate binding to different splicing factors, such as ASF/SF2 and PTB. This approach has also been used to aid in determining the relationship between RNA secondary structure and the binding of splicing factors.
Use of reporter assays makes it possible to find the splicing proteins involved in a specific alternative splicing event by constructing reporter genes that will express one of two different fluorescent proteins depending on the splicing reaction that occurs. This method has been used to isolate mutants affecting splicing and thus to identify novel splicing regulatory proteins inactivated in those mutants.
Recent advancements in protein structure prediction have facilitated the development of new tools for genome annotation and alternative splicing analysis. For instance, isoform.io, a platform guided by protein structure predictions, has evaluated hundreds of thousands of isoforms of human protein-coding genes assembled from numerous RNA sequencing experiments across a variety of human tissues. This comprehensive analysis has led to the identification of numerous isoforms with more confidently predicted structure and potentially superior function compared to canonical isoforms in the latest human gene database. By integrating structural predictions with expression and evolutionary evidence, this approach has demonstrated the potential of protein structure prediction as a tool for refining the annotation of the human genome.
Databases
A number of alternative splicing databases exist. These databases are useful for finding genes whose pre-mRNAs undergo alternative splicing, for identifying alternative splicing events, and for studying the functional impact of alternative splicing.
AspicDB database
Intronerator database
ProSAS database
See also
Exitron
Polyadenylation § Alternative polyadenylation
Trans-splicing
References
External links
A General Definition and Nomenclature for Alternative Splicing Events at SciVee
AStalavista (Alternative Splicing landscape visualization tool), a method for the computationally exhaustive classification of Alternative Splicing Structures
IsoPred: computationally predicted isoform functions
Stamms-lab.net: Research Group dealing with alternative Splicing issues and mis-splicing in human diseases
Alternative Splicing of ion channels in the brain, connected to mental and neurological diseases
BIPASS: Web Services in Alternative Splicing
Gene expression
Spliceosome
RNA splicing
fr:Épissage#Épissage alternatif
it:Splicing#Splicing alternativo | Alternative splicing | [
"Chemistry",
"Biology"
] | 6,421 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
209,874 | https://en.wikipedia.org/wiki/Density%20functional%20theory | Density functional theory (DFT) is a computational quantum mechanical modelling method used in physics, chemistry and materials science to investigate the electronic structure (or nuclear structure) (principally the ground state) of many-body systems, in particular atoms, molecules, and the condensed phases. Using this theory, the properties of a many-electron system can be determined by using functionals - that is, functions that accept a function as input and output a single real number. In the case of DFT, these are functionals of the spatially dependent electron density. DFT is among the most popular and versatile methods available in condensed-matter physics, computational physics, and computational chemistry.
DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. Computational costs are relatively low when compared to traditional methods, such as exchange-only Hartree–Fock theory and its descendants that include electron correlation. Since then, DFT has become an important tool for methods of nuclear spectroscopy such as Mössbauer spectroscopy or perturbed angular correlation, in order to understand the origin of specific electric field gradients in crystals.
Despite recent improvements, there are still difficulties in using density functional theory to properly describe: intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors. The incomplete treatment of dispersion can adversely affect the accuracy of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by dispersion (e.g. interacting noble gas atoms) or where dispersion competes significantly with other effects (e.g. in biomolecules). The development of new DFT methods designed to overcome this problem, by alterations to the functional or by the inclusion of additive terms, is a current research topic. Classical density functional theory uses a similar formalism to calculate the properties of non-uniform classical fluids.
Despite the current popularity of these alterations or of the inclusion of additional terms, they are reported to stray away from the search for the exact functional. Further, DFT potentials obtained with adjustable parameters are no longer true DFT potentials, given that they are not functional derivatives of the exchange correlation energy with respect to the charge density. Consequently, it is not clear if the second theorem of DFT holds in such conditions.
Overview of method
In the context of computational materials science, ab initio (from first principles) DFT calculations allow the prediction and calculation of material behavior on the basis of quantum mechanical considerations, without requiring higher-order parameters such as fundamental material properties. In contemporary DFT techniques the electronic structure is evaluated using a potential acting on the system's electrons. This DFT potential is constructed as the sum of the external potential $V_{\mathrm{ext}}$, which is determined solely by the structure and the elemental composition of the system, and an effective potential $V_{\mathrm{eff}}$, which represents interelectronic interactions. Thus, a problem for a representative supercell of a material with $n$ electrons can be studied as a set of $n$ one-electron Schrödinger-like equations, which are also known as Kohn–Sham equations.
Origins
Although density functional theory has its roots in the Thomas–Fermi model for the electronic structure of materials, DFT was first put on a firm theoretical footing by Walter Kohn and Pierre Hohenberg in the framework of the two Hohenberg–Kohn theorems (HK). The original HK theorems held only for non-degenerate ground states in the absence of a magnetic field, although they have since been generalized to encompass these.
The first HK theorem demonstrates that the ground-state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. It laid the groundwork for reducing the many-body problem of $N$ electrons with $3N$ spatial coordinates to three spatial coordinates, through the use of functionals of the electron density. This theorem has since been extended to the time-dependent domain to develop time-dependent density functional theory (TDDFT), which can be used to describe excited states.
The second HK theorem defines an energy functional for the system and proves that the ground-state electron density minimizes this energy functional.
In work that later won Kohn the Nobel Prize in chemistry, the HK theorems were further developed by Walter Kohn and Lu Jeu Sham to produce Kohn–Sham DFT (KS DFT). Within this framework, the intractable many-body problem of interacting electrons in a static external potential is reduced to a tractable problem of noninteracting electrons moving in an effective potential. The effective potential includes the external potential and the effects of the Coulomb interactions between the electrons, e.g., the exchange and correlation interactions. Modeling the latter two interactions becomes the difficulty within KS DFT. The simplest approximation is the local-density approximation (LDA), which is based upon exact exchange energy for a uniform electron gas, which can be obtained from the Thomas–Fermi model, and from fits to the correlation energy for a uniform electron gas. Non-interacting systems are relatively easy to solve, as the wavefunction can be represented as a Slater determinant of orbitals. Further, the kinetic energy functional of such a system is known exactly. The exchange–correlation part of the total energy functional remains unknown and must be approximated.
Another approach, less popular than KS DFT but arguably more closely related to the spirit of the original HK theorems, is orbital-free density functional theory (OFDFT), in which approximate functionals are also used for the kinetic energy of the noninteracting system.
Derivation and formalism
As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation), generating a static external potential $V$, in which the electrons are moving. A stationary electronic state is then described by a wavefunction $\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N)$ satisfying the many-electron time-independent Schrödinger equation

$\hat{H}\Psi = \left[ \hat{T} + \hat{V} + \hat{U} \right] \Psi = \left[ \sum_{i=1}^N \left( -\frac{\hbar^2}{2m_i} \nabla_i^2 \right) + \sum_{i=1}^N V(\mathbf{r}_i) + \sum_{i<j}^N U(\mathbf{r}_i, \mathbf{r}_j) \right] \Psi = E \Psi,$

where, for the $N$-electron system, $\hat{H}$ is the Hamiltonian, $E$ is the total energy, $\hat{T}$ is the kinetic energy, $\hat{V}$ is the potential energy from the external field due to positively charged nuclei, and $\hat{U}$ is the electron–electron interaction energy. The operators $\hat{T}$ and $\hat{U}$ are called universal operators, as they are the same for any $N$-electron system, while $\hat{V}$ is system-dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term $\hat{U}$.
There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants. While the simplest one is the Hartree–Fock method, more sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems.
Here DFT provides an appealing alternative, being much more versatile, as it provides a way to systematically map the many-body problem, with $\hat{U}$, onto a single-body problem without $\hat{U}$. In DFT the key variable is the electron density $n(\mathbf{r})$, which for a normalized $\Psi$ is given by

$n(\mathbf{r}) = N \int \mathrm{d}^3 r_2 \cdots \int \mathrm{d}^3 r_N\, \Psi^*(\mathbf{r}, \mathbf{r}_2, \ldots, \mathbf{r}_N)\, \Psi(\mathbf{r}, \mathbf{r}_2, \ldots, \mathbf{r}_N).$

This relation can be reversed, i.e., for a given ground-state density $n_0(\mathbf{r})$ it is possible, in principle, to calculate the corresponding ground-state wavefunction $\Psi_0(\mathbf{r}_1, \ldots, \mathbf{r}_N)$. In other words, $\Psi_0$ is a unique functional of $n_0$,

$\Psi_0 = \Psi[n_0],$

and consequently the ground-state expectation value of an observable $\hat{O}$ is also a functional of $n_0$:

$O[n_0] = \langle \Psi[n_0] | \hat{O} | \Psi[n_0] \rangle.$

In particular, the ground-state energy is a functional of $n_0$:

$E_0 = E[n_0] = \langle \Psi[n_0] | \hat{T} + \hat{V} + \hat{U} | \Psi[n_0] \rangle,$

where the contribution of the external potential $\langle \Psi[n_0] | \hat{V} | \Psi[n_0] \rangle$ can be written explicitly in terms of the ground-state density $n_0$:

$V[n_0] = \int V(\mathbf{r})\, n_0(\mathbf{r})\, \mathrm{d}^3 r.$

More generally, the contribution of the external potential $\langle \Psi | \hat{V} | \Psi \rangle$ can be written explicitly in terms of the density $n$:

$V[n] = \int V(\mathbf{r})\, n(\mathbf{r})\, \mathrm{d}^3 r.$

The functionals $T[n]$ and $U[n]$ are called universal functionals, while $V[n]$ is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified $V$, one then has to minimize the functional

$E[n] = T[n] + U[n] + \int V(\mathbf{r})\, n(\mathbf{r})\, \mathrm{d}^3 r$

with respect to $n(\mathbf{r})$, assuming one has reliable expressions for $T[n]$ and $U[n]$. A successful minimization of the energy functional will yield the ground-state density $n_0$ and thus all other ground-state observables.

The variational problems of minimizing the energy functional $E[n]$ can be solved by applying the Lagrangian method of undetermined multipliers. First, one considers an energy functional that does not explicitly have an electron–electron interaction energy term,

$E_s[n] = \langle \Psi_s[n] | \hat{T}_s + \hat{V}_s | \Psi_s[n] \rangle,$

where $\hat{T}_s$ denotes the kinetic-energy operator, and $\hat{V}_s$ is an effective potential in which the particles are moving. Based on $E_s$, Kohn–Sham equations of this auxiliary noninteracting system can be derived:

$\left[ -\frac{\hbar^2}{2m} \nabla^2 + V_s(\mathbf{r}) \right] \varphi_i(\mathbf{r}) = \varepsilon_i\, \varphi_i(\mathbf{r}),$

which yields the orbitals $\varphi_i$ that reproduce the density $n(\mathbf{r})$ of the original many-body system:

$n(\mathbf{r}) = \sum_{i=1}^N |\varphi_i(\mathbf{r})|^2.$

The effective single-particle potential can be written as

$V_s(\mathbf{r}) = V(\mathbf{r}) + \int \frac{n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}^3 r' + V_{\mathrm{XC}}[n](\mathbf{r}),$

where $V(\mathbf{r})$ is the external potential, the second term is the Hartree term describing the electron–electron Coulomb repulsion, and the last term $V_{\mathrm{XC}}$ is the exchange–correlation potential. Here, $V_{\mathrm{XC}}$ includes all the many-particle interactions. Since the Hartree term and $V_{\mathrm{XC}}$ depend on $n(\mathbf{r})$, which depends on the $\varphi_i$, which in turn depend on $V_s$, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for $n(\mathbf{r})$, then calculates the corresponding $V_s$ and solves the Kohn–Sham equations for the $\varphi_i$. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this.
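The structure of this self-consistent loop can be illustrated with a toy numerical sketch. The example below is a minimal, illustrative 1-D Kohn–Sham-style iteration; the harmonic external potential, the soft-Coulomb Hartree kernel, the schematic LDA-like exchange term and the two-electron filling are all invented for the illustration and do not come from the text:

```python
# Minimal 1-D Kohn-Sham-style self-consistency loop (illustrative only).
import numpy as np

n_grid, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, n_grid)
dx = x[1] - x[0]
v_ext = 0.5 * x**2                        # assumed harmonic external potential

# Kinetic-energy operator via second-order finite differences (atomic units).
T = (np.diag(np.full(n_grid, 1.0))
     - 0.5 * np.diag(np.ones(n_grid - 1), 1)
     - 0.5 * np.diag(np.ones(n_grid - 1), -1)) / dx**2

def hartree(n):
    """Hartree potential with a soft-Coulomb kernel 1/sqrt((x-x')^2 + 1)."""
    kernel = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)
    return kernel @ n * dx

def v_xc(n):
    """Toy LDA-style exchange potential, v_x ~ -n^(1/3)."""
    return -(3.0 / np.pi) ** (1.0 / 3.0) * n ** (1.0 / 3.0)

n = np.ones(n_grid) / L                   # initial guess for the density
for it in range(100):
    v_eff = v_ext + hartree(n) + v_xc(n)  # effective Kohn-Sham potential V_s
    eps, phi = np.linalg.eigh(T + np.diag(v_eff))
    phi /= np.sqrt(dx)                    # normalize orbitals on the grid
    n_new = 2.0 * phi[:, 0] ** 2          # two electrons in the lowest orbital
    if np.max(np.abs(n_new - n)) < 1e-8:  # converged?
        break
    n = 0.5 * n + 0.5 * n_new             # linear mixing for stability
print(f"stopped after {it} iterations; lowest eigenvalue {eps[0]:.4f}")
```

The linear mixing step mirrors the "calculate a new density and start again" prescription above; production codes replace it with more sophisticated mixing schemes.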
Notes
The one-to-one correspondence between the electron density and the single-particle potential is not so smooth. It contains kinds of non-analytic structure: $E_s[n]$ has singularities, cuts and branches. This may indicate a limitation of our hope for representing the exchange–correlation functional in a simple analytic form.
It is possible to extend the DFT idea to the case of the Green function $G$ instead of the density $n$. It is called the Luttinger–Ward functional (or kinds of similar functionals), written as $E[G]$. However, $G$ is determined not at its minimum, but at its extremum. Thus we may have some theoretical and practical difficulties.
There is no one-to-one correspondence between the one-body density matrix $n(\mathbf{r}, \mathbf{r}')$ and the one-body potential $V(\mathbf{r}, \mathbf{r}')$. (All the eigenvalues of $n(\mathbf{r}, \mathbf{r}')$ are 1.) In other words, it ends up with a theory similar to the Hartree–Fock (or hybrid) theory.
Relativistic formulation (ab initio functional forms)
The same theorems can be proven in the case of relativistic electrons, thereby providing generalization of DFT for the relativistic case. Unlike the nonrelativistic theory, in the relativistic case it is possible to derive a few exact and explicit formulas for the relativistic density functional.
Let one consider an electron in a hydrogen-like ion obeying the relativistic Dirac equation. The Hamiltonian $H$ for a relativistic electron moving in the Coulomb potential can be chosen in the following form (atomic units are used):

$H = c\, \boldsymbol{\alpha} \cdot \mathbf{p} + \beta m c^2 + V(\mathbf{r}),$

where $V(\mathbf{r}) = -Z/r$ is the Coulomb potential of a pointlike nucleus, $\mathbf{p}$ is the momentum operator of the electron, and $e$, $m$ and $c$ are the elementary charge, electron mass and the speed of light respectively, and finally $\boldsymbol{\alpha}$ and $\beta$ are a set of Dirac 2 × 2 matrices:

$\boldsymbol{\alpha} = \begin{pmatrix} 0 & \boldsymbol{\sigma} \\ \boldsymbol{\sigma} & 0 \end{pmatrix}, \qquad \beta = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}.$
To find out the eigenfunctions and corresponding energies, one solves the eigenfunction equation

$H \Psi = E \Psi,$

where $\Psi = (\Psi_1, \Psi_2, \Psi_3, \Psi_4)^{\mathsf{T}}$ is a four-component wavefunction, and $E$ is the associated eigenenergy. It is demonstrated in Brack (1983) that application of the virial theorem to the eigenfunction equation produces the following formula for the eigenenergy of any bound state:

$E = m c^2 \int \Psi^{\dagger} \beta\, \Psi\, \mathrm{d}^3 r,$
and analogously, the virial theorem applied to the eigenfunction equation with the square of the Hamiltonian yields a second formula of the same kind.
It is easy to see that both of the above formulae represent density functionals. The former formula can be easily generalized for the multi-electron case.
One may observe that neither of the functionals written above has extremals, of course, if a reasonably wide set of functions is allowed for variation. Nevertheless, it is possible to design a density functional with the desired extremal properties out of them. Let us do so in the following way:
where the Kronecker delta symbol in the second term denotes any extremal for the functional represented by the first term of the functional $F$. The second term amounts to zero for any function that is not an extremal for the first term of the functional $F$. To proceed further, we would like to find the Lagrange equation for this functional. In order to do this, we should isolate the linear part of the functional increment when the argument function is altered:
Using the equation written above, it is easy to find the following formula for the functional derivative:
where the potential is evaluated at a point specified by the support of the variation function, which is supposed to be infinitesimal. To advance toward the Lagrange equation, we equate the functional derivative to zero and, after simple algebraic manipulations, arrive at the following equation:
Apparently, this equation can have a solution only when the two constants appearing in it coincide. This last condition provides us with the Lagrange equation for the functional $F$, which can finally be written down in the following form:
Solutions of this equation represent extremals for the functional $F$. It is easy to see that all real densities, that is, densities corresponding to the bound states of the system in question, are solutions of the equation written above, which could be called the Kohn–Sham equation in this particular case. Looking back at the definition of the functional $F$, we clearly see that the functional produces the energy of the system for the appropriate density, because the first term amounts to zero for such a density and the second one delivers the energy value.
Approximations (exchange–correlation functionals)
The major problem with DFT is that the exact functionals for exchange and correlation are not known, except for the free-electron gas. However, approximations exist which permit the calculation of certain physical quantities quite accurately. One of the simplest approximations is the local-density approximation (LDA), where the functional depends only on the density at the coordinate where the functional is evaluated:

$E_{\mathrm{XC}}^{\mathrm{LDA}}[n] = \int \varepsilon_{\mathrm{XC}}(n)\, n(\mathbf{r})\, \mathrm{d}^3 r.$

The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include electron spin:

$E_{\mathrm{XC}}^{\mathrm{LSDA}}[n_{\uparrow}, n_{\downarrow}] = \int \varepsilon_{\mathrm{XC}}(n_{\uparrow}, n_{\downarrow})\, n(\mathbf{r})\, \mathrm{d}^3 r.$

In LDA, the exchange–correlation energy is typically separated into the exchange part and the correlation part: $\varepsilon_{\mathrm{XC}} = \varepsilon_{\mathrm{X}} + \varepsilon_{\mathrm{C}}$. The exchange part is called the Dirac (or sometimes Slater) exchange, which takes the form

$E_{\mathrm{X}}^{\mathrm{LDA}}[n] = -\frac{3}{4} \left( \frac{3}{\pi} \right)^{1/3} \int n^{4/3}(\mathbf{r})\, \mathrm{d}^3 r.$

There are, however, many mathematical forms for the correlation part. Highly accurate formulae for the correlation energy density have been constructed from quantum Monte Carlo simulations of jellium. A simple first-principles correlation functional has been recently proposed as well. Although unrelated to the Monte Carlo simulation, the two variants provide comparable accuracy.

The LDA assumes that the density is the same everywhere. Because of this, the LDA has a tendency to underestimate the exchange energy and over-estimate the correlation energy. The errors due to the exchange and correlation parts tend to compensate each other to a certain degree. To correct for this tendency, it is common to expand the exchange–correlation energy in terms of the gradient of the density in order to account for the non-homogeneity of the true electron density. This allows corrections based on the changes in density away from the coordinate. These expansions are referred to as generalized gradient approximations (GGA) and have the following form:

$E_{\mathrm{XC}}^{\mathrm{GGA}}[n_{\uparrow}, n_{\downarrow}] = \int \varepsilon_{\mathrm{XC}}(n_{\uparrow}, n_{\downarrow}, \nabla n_{\uparrow}, \nabla n_{\downarrow})\, n(\mathbf{r})\, \mathrm{d}^3 r.$
Using the latter (GGA), very good results for molecular geometries and ground-state energies have been achieved.
Potentially more accurate than the GGA functionals are the meta-GGA functionals, a natural development after the GGA (generalized gradient approximation). Meta-GGA DFT functional in its original form includes the second derivative of the electron density (the Laplacian), whereas GGA includes only the density and its first derivative in the exchange–correlation potential.
Functionals of this type are, for example, TPSS and the Minnesota Functionals. These functionals include a further term in the expansion, depending on the density, the gradient of the density and the Laplacian (second derivative) of the density.
Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact exchange energy calculated from Hartree–Fock theory. Functionals of this type are known as hybrid functionals.
Generalizations to include magnetic fields
The DFT formalism described above breaks down, to various degrees, in the presence of a vector potential, i.e. a magnetic field. In such a situation, the one-to-one mapping between the ground-state electron density and wavefunction is lost. Generalizations to include the effects of magnetic fields have led to two different theories: current density functional theory (CDFT) and magnetic field density functional theory (BDFT). In both these theories, the functional used for the exchange and correlation must be generalized to include more than just the electron density. In current density functional theory, developed by Vignale and Rasolt, the functionals become dependent on both the electron density and the paramagnetic current density. In magnetic field density functional theory, developed by Salsbury, Grayce and Harris, the functionals depend on the electron density and the magnetic field, and the functional form can depend on the form of the magnetic field. In both of these theories it has been difficult to develop functionals beyond their equivalent to LDA, which are also readily implementable computationally.
Applications
In general, density functional theory finds increasingly broad application in chemistry and materials science for the interpretation and prediction of complex system behavior at an atomic scale. Specifically, DFT computational methods are applied for synthesis-related systems and processing parameters. In such systems, experimental studies are often encumbered by inconsistent results and non-equilibrium conditions. Examples of contemporary DFT applications include studying the effects of dopants on phase transformation behavior in oxides, magnetic behavior in dilute magnetic semiconductor materials, and the study of magnetic and electronic behavior in ferroelectrics and dilute magnetic semiconductors. It has also been shown that DFT gives good results in the prediction of sensitivity of some nanostructures to environmental pollutants like sulfur dioxide or acrolein, as well as prediction of mechanical properties.
In practice, Kohn–Sham theory can be applied in several distinct ways, depending on what is being investigated. In solid-state calculations, the local density approximations are still commonly used along with plane-wave basis sets, as an electron-gas approach is more appropriate for electrons delocalised through an infinite solid. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange–correlation functionals have been developed for chemical applications. Some of these are inconsistent with the uniform electron-gas approximation; however, they must reduce to LDA in the electron-gas limit. Among physicists, one of the most widely used functionals is the revised Perdew–Burke–Ernzerhof exchange model (a direct generalized gradient parameterization of the free-electron gas with no free parameters); however, this is not sufficiently calorimetrically accurate for gas-phase molecular calculations. In the chemistry community, one popular functional is known as BLYP (from the name Becke for the exchange part and Lee, Yang and Parr for the correlation part). Even more widely used is B3LYP, which is a hybrid functional in which the exchange energy, in this case from Becke's exchange functional, is combined with the exact energy from Hartree–Fock theory. Along with the component exchange and correlation functionals, three parameters define the hybrid functional, specifying how much of the exact exchange is mixed in. The adjustable parameters in hybrid functionals are generally fitted to a "training set" of molecules. Although the results obtained with these functionals are usually sufficiently accurate for most applications, there is no systematic way of improving them (in contrast to some of the traditional wavefunction-based methods like configuration interaction or coupled cluster theory). In the current DFT approach it is not possible to estimate the error of the calculations without comparing them to other methods or experiments.
Density functional theory is generally highly accurate but highly computationally expensive. In recent years, DFT has been used with machine learning techniques - especially graph neural networks - to create machine learning potentials. These graph neural networks approximate DFT, with the aim of achieving similar accuracies with much less computation, and are especially beneficial for large systems. They are trained using DFT-calculated properties of a known set of molecules. Researchers have been trying to approximate DFT with machine learning for decades, but have only recently made good estimators. Breakthroughs in model architecture and data preprocessing that more heavily encoded theoretical knowledge, especially regarding symmetries and invariances, have enabled huge leaps in model performance. Using backpropagation, the process by which neural networks learn from training errors, to extract meaningful information about forces and densities has similarly improved the accuracy of machine learning potentials. By 2023, for example, the DFT approximator Matlantis could simulate 72 elements, handle up to 20,000 atoms at a time, and execute calculations up to 20,000,000 times faster than DFT with similar accuracy, showcasing the power of DFT approximators in the artificial intelligence age. ML approximations of DFT have historically faced substantial transferability issues, with models failing to generalize potentials from some types of elements and compounds to others; improvements in architecture and data have slowly mitigated, but not eliminated, this issue. For very large systems, electrically nonneutral simulations, and intricate reaction pathways, DFT approximators often remain insufficiently lightweight computationally or insufficiently accurate.
Thomas–Fermi model
The predecessor to density functional theory was the Thomas–Fermi model, developed independently by both Llewellyn Thomas and Enrico Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space with two electrons in every $h^3$ of volume. For each element of coordinate-space volume $\mathrm{d}^3 r$ we can fill out a sphere of momentum space up to the Fermi momentum $p_{\mathrm{F}}$,

$\frac{4}{3} \pi p_{\mathrm{F}}^3(\mathbf{r}).$

Equating the number of electrons in coordinate space to that in phase space gives

$n(\mathbf{r}) = \frac{8\pi}{3 h^3}\, p_{\mathrm{F}}^3(\mathbf{r}).$
Solving for $p_{\mathrm{F}}$ and substituting into the classical kinetic energy formula then leads directly to a kinetic energy represented as a functional of the electron density:

$T_{\mathrm{TF}}[n] = C_{\mathrm{F}} \int n^{5/3}(\mathbf{r})\, \mathrm{d}^3 r,$

where

$C_{\mathrm{F}} = \frac{3 h^2}{10 m} \left( \frac{3}{8\pi} \right)^{2/3}.$
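For completeness, the intermediate step linking the Fermi momentum to the kinetic-energy functional can be written out; this is a standard textbook manipulation rather than text recovered from this article. The kinetic-energy density of the filled momentum sphere is

$t_{\mathrm{TF}}(\mathbf{r}) = \frac{2}{h^3} \int_0^{p_{\mathrm{F}}(\mathbf{r})} \frac{p^2}{2m}\, 4\pi p^2\, \mathrm{d}p = \frac{4\pi}{5 m h^3}\, p_{\mathrm{F}}^5(\mathbf{r}) = C_{\mathrm{F}}\, n^{5/3}(\mathbf{r}), \qquad T_{\mathrm{TF}}[n] = \int t_{\mathrm{TF}}(\mathbf{r})\, \mathrm{d}^3 r,$

where the last equality follows by substituting $p_{\mathrm{F}} = \left( \frac{3 h^3}{8\pi}\, n \right)^{1/3}$ from the phase-space relation above.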
As such, they were able to calculate the energy of an atom using this kinetic-energy functional combined with the classical expressions for the nucleus–electron and electron–electron interactions (which can both also be represented in terms of the electron density).
Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting kinetic-energy functional is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a consequence of the Pauli principle. An exchange-energy functional was added by Paul Dirac in 1928.
However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy and the complete neglect of electron correlation.
Edward Teller (1962) showed that Thomas–Fermi theory cannot describe molecular bonding. This can be overcome by improving the kinetic-energy functional.
The kinetic-energy functional can be improved by adding the von Weizsäcker (1935) correction:

$T_{\mathrm{W}}[n] = \frac{\hbar^2}{8m} \int \frac{|\nabla n(\mathbf{r})|^2}{n(\mathbf{r})}\, \mathrm{d}^3 r.$
Hohenberg–Kohn theorems
The Hohenberg–Kohn theorems relate to any system consisting of electrons moving under the influence of an external potential.
Theorem 1. The external potential (and hence the total energy) is a unique functional of the electron density.
If two systems of electrons, one trapped in a potential $v_1(\mathbf{r})$ and the other in $v_2(\mathbf{r})$, have the same ground-state density $n(\mathbf{r})$, then $v_1(\mathbf{r}) - v_2(\mathbf{r})$ is necessarily a constant.
Corollary 1: the ground-state density uniquely determines the potential and thus all properties of the system, including the many-body wavefunction. In particular, the HK functional, defined as $F[n] = T[n] + U[n]$, is a universal functional of the density (not depending explicitly on the external potential).
Corollary 2: In light of the fact that the sum of the occupied energies provides the energy content of the Hamiltonian, a unique functional of the ground state charge density, the spectrum of the Hamiltonian is also a unique functional of the ground state charge density.
Theorem 2. The functional that delivers the ground-state energy of the system gives the lowest energy if and only if the input density is the true ground-state density.
In other words, the energy content of the Hamiltonian reaches its absolute minimum, i.e., the ground state, when the charge density is that of the ground state.
For any positive integer $N$ and potential $v(\mathbf{r})$, a density functional $F[n]$ exists such that

$E_{(v, N)}[n] = F[n] + \int v(\mathbf{r})\, n(\mathbf{r})\, \mathrm{d}^3 r$

reaches its minimal value at the ground-state density of $N$ electrons in the potential $v(\mathbf{r})$. The minimal value of $E_{(v, N)}[n]$ is then the ground-state energy of this system.
Pseudo-potentials
The many-electron Schrödinger equation can be very much simplified if electrons are divided in two groups: valence electrons and inner core electrons. The electrons in the inner shells are strongly bound and do not play a significant role in the chemical binding of atoms; they also partially screen the nucleus, thus forming with the nucleus an almost inert core. Binding properties are almost completely due to the valence electrons, especially in metals and semiconductors. This separation suggests that inner electrons can be ignored in a large number of cases, thereby reducing the atom to an ionic core that interacts with the valence electrons. The use of an effective interaction, a pseudopotential, that approximates the potential felt by the valence electrons, was first proposed by Fermi in 1934 and Hellmann in 1935. In spite of the simplification pseudo-potentials introduce in calculations, they remained forgotten until the late 1950s.
Ab initio pseudo-potentials
A crucial step toward more realistic pseudo-potentials was given by William C. Topp and John Hopfield, who suggested that the pseudo-potential should be adjusted such that they describe the valence charge density accurately. Based on that idea, modern pseudo-potentials are obtained inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo-wavefunctions to coincide with the true valence wavefunctions beyond a certain distance $r_l$. The pseudo-wavefunctions are also forced to have the same norm (i.e., the so-called norm-conserving condition) as the true valence wavefunctions and can be written as

$R_l^{\mathrm{PP}}(r) = R_{nl}^{\mathrm{AE}}(r) \quad \text{for } r > r_l,$

$\int_0^{r_l} \left| R_l^{\mathrm{PP}}(r) \right|^2 r^2\, \mathrm{d}r = \int_0^{r_l} \left| R_{nl}^{\mathrm{AE}}(r) \right|^2 r^2\, \mathrm{d}r,$

where $R_l(r)$ is the radial part of the wavefunction with angular momentum $l$, and PP and AE denote the pseudo-wavefunction and the true (all-electron) wavefunction respectively. The index $n$ in the true wavefunctions denotes the valence level. The distance $r_l$ beyond which the true and the pseudo-wavefunctions are equal is also dependent on $l$.
Electron smearing
The electrons of a system will occupy the lowest Kohn–Sham eigenstates up to a given energy level according to the Aufbau principle. This corresponds to the steplike Fermi–Dirac distribution at absolute zero. If there are several degenerate or close-to-degenerate eigenstates at the Fermi level, it is possible to get convergence problems, since very small perturbations may change the electron occupation. One way of damping these oscillations is to smear the electrons, i.e. to allow fractional occupancies. One approach is to assign a finite temperature to the electron Fermi–Dirac distribution. Other ways are to assign a cumulative Gaussian distribution of the electrons or to use a Methfessel–Paxton method.
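As an illustration of the finite-temperature approach, the sketch below assigns Fermi–Dirac occupations to a set of invented eigenvalues and locates the Fermi level by bisection; the eigenvalues, smearing width and electron count are hypothetical values chosen only to show the mechanics:

```python
# Fermi-Dirac smearing of occupations for a fixed spectrum (illustrative).
import numpy as np

def occupations(eigs, n_elec, kT=0.1, g=2.0):
    """Return occupations g*f(e) that sum to n_elec electrons."""
    def f(mu):
        x = np.clip((eigs - mu) / kT, -60.0, 60.0)  # avoid exp overflow
        return g / (np.exp(x) + 1.0)
    lo, hi = eigs.min() - 10 * kT, eigs.max() + 10 * kT
    for _ in range(100):                  # bisection on the Fermi level mu
        mu = 0.5 * (lo + hi)
        if f(mu).sum() < n_elec:
            lo = mu
        else:
            hi = mu
    return f(mu)

eigs = np.array([-5.0, -1.0, -0.99, 2.0])  # two nearly degenerate levels
print(occupations(eigs, n_elec=4.0))       # fractional occupations near E_F
```

The two nearly degenerate levels end up with fractional occupations close to one electron each, which is exactly the damping effect described above.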
Classical density functional theory
Classical density functional theory is a classical statistical method to investigate the properties of many-body systems consisting of interacting molecules, macromolecules, nanoparticles or microparticles. The classical non-relativistic method is correct for classical fluids with particle velocities less than the speed of light and thermal de Broglie wavelength smaller than the distance between particles. The theory is based on the calculus of variations of a thermodynamic functional, which is a function of the spatially dependent density function of particles, thus the name. The same name is used for quantum DFT, which is the theory to calculate the electronic structure of electrons based on spatially dependent electron density with quantum and relativistic effects. Classical DFT is a popular and useful method to study fluid phase transitions, ordering in complex liquids, physical characteristics of interfaces and nanomaterials. Since the 1970s it has been applied to the fields of materials science, biophysics, chemical engineering and civil engineering. Computational costs are much lower than for molecular dynamics simulations, which provide similar data and a more detailed description but are limited to small systems and short time scales. Classical DFT is valuable to interpret and test numerical results and to define trends although details of the precise motion of the particles are lost due to averaging over all possible particle trajectories. As in electronic systems, there are fundamental and numerical difficulties in using DFT to quantitatively describe the effect of intermolecular interaction on structure, correlations and thermodynamic properties.
Classical DFT addresses the difficulty of describing thermodynamic equilibrium states of many-particle systems with nonuniform density. Classical DFT has its roots in theories such as the van der Waals theory for the equation of state and the virial expansion method for the pressure. In order to account for correlation in the positions of particles the direct correlation function was introduced as the effective interaction between two particles in the presence of a number of surrounding particles by Leonard Ornstein and Frits Zernike in 1914. The connection to the density pair distribution function was given by the Ornstein–Zernike equation. The importance of correlation for thermodynamic properties was explored through density distribution functions. The functional derivative was introduced to define the distribution functions of classical mechanical systems. Theories were developed for simple and complex liquids using the ideal gas as a basis for the free energy and adding molecular forces as a second-order perturbation. A term in the gradient of the density was added to account for non-uniformity in density in the presence of external fields or surfaces. These theories can be considered precursors of DFT.
To develop a formalism for the statistical thermodynamics of non-uniform fluids, functional differentiation was used extensively by Percus and Lebowitz (1961), which led to the Percus–Yevick equation linking the density distribution function and the direct correlation. Other closure relations were also proposed, such as the classical-map hypernetted-chain method and the BBGKY hierarchy. In the late 1970s classical DFT was applied to the liquid–vapor interface and the calculation of surface tension. Other applications followed: the freezing of simple fluids, formation of the glass phase, the crystal–melt interface and dislocation in crystals, properties of polymer systems, and liquid crystal ordering. Classical DFT was applied to colloid dispersions, which were discovered to be good models for atomic systems. By assuming local chemical equilibrium and using the local chemical potential of the fluid from DFT as the driving force in fluid transport equations, equilibrium DFT is extended to describe non-equilibrium phenomena and fluid dynamics on small scales.
Classical DFT allows the calculation of the equilibrium particle density and prediction of thermodynamic properties and behavior of a many-body system on the basis of model interactions between particles. The spatially dependent density determines the local structure and composition of the material. It is determined as a function that optimizes the thermodynamic potential of the grand canonical ensemble. The grand potential is evaluated as the sum of the ideal-gas term with the contribution from external fields and an excess thermodynamic free energy arising from interparticle interactions. In the simplest approach the excess free-energy term is expanded on a system of uniform density using a functional Taylor expansion. The excess free energy is then a sum of the contributions from s-body interactions with density-dependent effective potentials representing the interactions between s particles. In most calculations the terms in the interactions of three or more particles are neglected (second-order DFT). When the structure of the system to be studied is not well approximated by a low-order perturbation expansion with a uniform phase as the zero-order term, non-perturbative free-energy functionals have also been developed. The minimization of the grand potential functional in arbitrary local density functions for fixed chemical potential, volume and temperature provides self-consistent thermodynamic equilibrium conditions, in particular, for the local chemical potential. The functional is not in general a convex functional of the density; solutions may not be local minima. Limiting to low-order corrections in the local density is a well-known problem, although the results agree (reasonably) well on comparison to experiment.
A variational principle is used to determine the equilibrium density. It can be shown that for constant temperature and volume the correct equilibrium density $n_0(\mathbf{r})$ minimizes the grand potential functional $\Omega[n]$ of the grand canonical ensemble over density functions $n(\mathbf{r})$. In the language of functional differentiation (Mermin theorem):

$\left. \frac{\delta \Omega[n]}{\delta n(\mathbf{r})} \right|_{n = n_0} = 0.$

The Helmholtz free energy functional is defined as $F[n] = \Omega[n] + \mu \int n(\mathbf{r})\, \mathrm{d}^3 r$.

The functional derivative in the density function determines the local chemical potential: $\mu(\mathbf{r}) = \delta F[n] / \delta n(\mathbf{r})$.
In classical statistical mechanics the partition function is a sum over probability for a given microstate of classical particles as measured by the Boltzmann factor in the Hamiltonian of the system. The Hamiltonian splits into kinetic and potential energy, which includes interactions between particles, as well as external potentials. The partition function of the grand canonical ensemble defines the grand potential. A correlation function is introduced to describe the effective interaction between particles.
The $s$-body density distribution function is defined as the statistical ensemble average of particle positions. It measures the probability to find $s$ particles at points in space $\mathbf{r}_1, \ldots, \mathbf{r}_s$:

$n_s(\mathbf{r}_1, \ldots, \mathbf{r}_s) = \left\langle \sum_{i_1 \neq \cdots \neq i_s} \delta(\mathbf{r}_1 - \mathbf{R}_{i_1}) \cdots \delta(\mathbf{r}_s - \mathbf{R}_{i_s}) \right\rangle,$

where the $\mathbf{R}_i$ are the particle coordinates.

From the definition of the grand potential, the functional derivative with respect to the local chemical potential is the density,

$n(\mathbf{r}) = -\frac{\delta \Omega}{\delta \mu(\mathbf{r})},$

and higher-order density correlations for two, three, four or more particles are found, up to factors of $k_{\mathrm{B}} T$, from the corresponding higher-order functional derivatives.
The radial distribution function with s = 2 measures the change in the density at a given point for a change of the local chemical interaction at a distant point.
In a fluid the free energy is a sum of the ideal free energy and the excess free-energy contribution from interactions between particles. In the grand ensemble the functional derivatives in the density yield the direct correlation functions $c^{(s)}$:

$c^{(1)}(\mathbf{r}) = -\beta\, \frac{\delta F_{\mathrm{exc}}[n]}{\delta n(\mathbf{r})}, \qquad c^{(2)}(\mathbf{r}_1, \mathbf{r}_2) = \frac{\delta c^{(1)}(\mathbf{r}_1)}{\delta n(\mathbf{r}_2)}, \qquad \beta = \frac{1}{k_{\mathrm{B}} T}.$

The one-body direct correlation function plays the role of an effective mean field. The functional derivative in density of the one-body direct correlation results in the direct correlation function between two particles, $c^{(2)}$. The direct correlation function is the correlation contribution to the change of the local chemical potential at a point $\mathbf{r}$ for a density change at $\mathbf{r}'$ and is related to the work of creating density changes at different positions. In dilute gases the direct correlation function is simply the pair-wise interaction between particles (Debye–Hückel equation). The Ornstein–Zernike equation between the pair and the direct correlation functions is derived from the functional-inverse relation

$\int \frac{\delta n(\mathbf{r}_1)}{\delta \mu(\mathbf{r}_3)}\, \frac{\delta \mu(\mathbf{r}_3)}{\delta n(\mathbf{r}_2)}\, \mathrm{d}\mathbf{r}_3 = \delta(\mathbf{r}_1 - \mathbf{r}_2).$
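For reference, a textbook form of the resulting Ornstein–Zernike equation for a uniform fluid of density $n$, relating the total correlation function $h$ and the direct correlation function $c$, is

$h(r_{12}) = c(r_{12}) + n \int c(r_{13})\, h(r_{23})\, \mathrm{d}\mathbf{r}_3,$

in which the second term sums the indirect correlations mediated by a third particle.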
Various assumptions and approximations adapted to the system under study lead to expressions for the free energy. Correlation functions are used to calculate the free-energy functional as an expansion on a known reference system. If the non-uniform fluid can be described by a density distribution that is not far from uniform density a functional Taylor expansion of the free energy in density increments leads to an expression for the thermodynamic potential using known correlation functions of the uniform system. In the square gradient approximation a strong non-uniform density contributes a term in the gradient of the density. In a perturbation theory approach the direct correlation function is given by the sum of the direct correlation in a known system such as hard spheres and a term in a weak interaction such as the long range London dispersion force. In a local density approximation the local excess free energy is calculated from the effective interactions with particles distributed at uniform density of the fluid in a cell surrounding a particle. Other improvements have been suggested such as the weighted density approximation for a direct correlation function of a uniform system which distributes the neighboring particles with an effective weighted density calculated from a self-consistent condition on the direct correlation function.
The variational Mermin principle leads to an equation for the equilibrium density, and system properties are calculated from the solution for the density. The equation is a non-linear integro-differential equation, and finding a solution is not trivial, requiring numerical methods, except for the simplest models. Classical DFT is supported by standard software packages, and specific software is currently under development. Assumptions can be made to propose trial functions as solutions, and the free energy is expressed in the trial functions and optimized with respect to parameters of the trial functions. Examples are a localized Gaussian function centered on crystal lattice points for the density in a solid, and the hyperbolic tangent function for interfacial density profiles.
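To make the numerical side concrete, here is a minimal sketch of a damped Picard iteration for a 1-D classical DFT model, assuming an invented harmonic external potential and a mean-field Gaussian pair interaction as the excess term; the model and all parameters are illustrative only:

```python
# Damped Picard iteration for a 1-D mean-field classical DFT (illustrative).
import numpy as np

x = np.linspace(-5, 5, 201)
dx = x[1] - x[0]
beta = 1.0                                   # 1/(k_B T), assumed
mu = 0.0                                     # chemical potential, assumed
v_ext = 0.5 * x**2                           # external potential, assumed
w = np.exp(-(x[:, None] - x[None, :])**2)    # soft repulsive pair potential

n = np.full_like(x, 0.1)                     # initial density guess
for _ in range(500):
    # Euler-Lagrange condition of the mean-field grand potential:
    # n(x) = exp(beta * (mu - v_ext(x) - integral of w(x, x') n(x') dx'))
    n_new = np.exp(beta * (mu - v_ext - (w @ n) * dx))
    n = 0.9 * n + 0.1 * n_new                # damped (Picard) update
print("total number of particles:", n.sum() * dx)
```

The damping factor plays the same stabilizing role as density mixing in electronic-structure codes; without it the fixed-point iteration can oscillate.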
Classical DFT has found many applications, for example:
developing new functional materials in materials science, in particular nanotechnology;
studying the properties of fluids at surfaces and the phenomena of wetting and adsorption;
understanding life processes in biotechnology;
improving filtration methods for gases and fluids in chemical engineering;
fighting pollution of water and air in environmental science;
cell membranes by modelling complex systems with amphiphile compounds;
generating new procedures in microfluidics and nanofluidics.
The extension of classical DFT towards nonequilibrium systems is known as dynamical density functional theory (DDFT). DDFT allows one to describe the time evolution of the one-body density $\rho(\mathbf{r}, t)$ of a colloidal system, which is governed by the equation

$\frac{\partial \rho(\mathbf{r}, t)}{\partial t} = \Gamma\, \nabla \cdot \left( \rho(\mathbf{r}, t)\, \nabla \frac{\delta F[\rho]}{\delta \rho(\mathbf{r}, t)} \right),$

with the mobility $\Gamma$ and the free energy $F$. DDFT can be derived from the microscopic equations of motion for a colloidal system (Langevin equations or Smoluchowski equation) based on the adiabatic approximation, which corresponds to the assumption that the two-body distribution in a nonequilibrium system is identical to that in an equilibrium system with the same one-body density. For a system of noninteracting particles, DDFT reduces to the standard diffusion equation.
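As a check on that noninteracting limit, the sketch below integrates the 1-D DDFT equation when the free energy is purely ideal, in which case it reduces to the diffusion equation; the grid, time step and diffusion constant are arbitrary illustrative values:

```python
# Explicit Euler integration of 1-D DDFT in the noninteracting limit,
# where it reduces to the diffusion equation dn/dt = D d2n/dx2 (D = Gamma*kT).
import numpy as np

nx, dx, dt, D = 101, 0.1, 1e-4, 1.0
n = np.zeros(nx)
n[nx // 2] = 1.0 / dx                        # initial spike of density
for _ in range(2000):
    # Periodic second derivative via np.roll; dt*D/dx^2 = 0.01 is stable.
    lap = (np.roll(n, 1) - 2 * n + np.roll(n, -1)) / dx**2
    n += dt * D * lap
print("mass conserved:", np.isclose(n.sum() * dx, 1.0))
```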
See also
Basis set (chemistry)
Dynamical mean field theory
Gas in a box
Harris functional
Helium atom
Kohn–Sham equations
Local density approximation
Molecule
Molecular design software
Molecular modelling
Quantum chemistry
Thomas–Fermi model
Time-dependent density functional theory
Car–Parrinello molecular dynamics
Lists
List of quantum chemistry and solid state physics software
List of software for molecular mechanics modeling
References
Sources
External links
Walter Kohn, Nobel Laureate – Video interview with Walter on his work developing density functional theory by the Vega Science Trust
Walter Kohn, Nobel Lecture
Electron Density Functional Theory – Lecture Notes
Density Functional Theory through Legendre Transformation pdf
Modeling Materials Continuum, Atomistic and Multiscale Techniques, Book
NIST Jarvis-DFT
Clary, David C. (2024). Walter Kohn: From Kindertransport and Internment to DFT and the Nobel Prize. World Scientific Publishing.
Electronic structure methods | Density functional theory | [
"Physics",
"Chemistry"
] | 8,136 | [
"Density functional theory",
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Electronic structure methods",
"Computational chemistry"
] |
14,001,145 | https://en.wikipedia.org/wiki/Die%20shrink | The term die shrink (sometimes optical shrink or process shrink) refers to the scaling of metal–oxide–semiconductor (MOS) devices. The act of shrinking a die creates a somewhat identical circuit using a more advanced fabrication process, usually involving an advance of lithographic nodes. This reduces overall costs for a chip company, as the absence of major architectural changes to the processor lowers research and development costs while at the same time allowing more processor dies to be manufactured on the same piece of silicon wafer, resulting in less cost per product sold.
Die shrinks are the key to lower prices and higher performance at semiconductor companies such as Samsung, Intel, TSMC, and SK Hynix, and fabless manufacturers such as AMD (including the former ATI), NVIDIA and MediaTek.
Details
Examples in the 2000s include the downscaling of the PlayStation 2's Emotion Engine processor from Sony and Toshiba (from 180 nm CMOS in 2000 to 90 nm CMOS in 2003), the codenamed Cedar Mill Pentium 4 processors (from 90 nm CMOS to 65 nm CMOS) and Penryn Core 2 processors (from 65 nm CMOS to 45 nm CMOS), the codenamed Brisbane Athlon 64 X2 processors (from 90 nm SOI to 65 nm SOI), various generations of GPUs from both ATI and NVIDIA, and various generations of RAM and flash memory chips from Samsung, Toshiba and SK Hynix. In January 2010, Intel released Clarkdale Core i5 and Core i7 processors fabricated with a 32 nm process, down from a previous 45 nm process used in older iterations of the Nehalem processor microarchitecture. Intel, in particular, formerly focused on leveraging die shrinks to improve product performance at a regular cadence through its Tick-Tock model. In this business model, every new microarchitecture (tock) is followed by a die shrink (tick) to improve performance with the same microarchitecture.
Die shrinks are beneficial to end-users as shrinking a die reduces the current used by each transistor switching on or off in semiconductor devices while maintaining the same clock frequency of a chip, making a product with less power consumption (and thus less heat production), increased clock rate headroom, and lower prices. Since the cost to fabricate a 200-mm or 300-mm silicon wafer is proportional to the number of fabrication steps and not proportional to the number of chips on the wafer, die shrinks cram more chips onto each wafer, resulting in lowered manufacturing costs per chip.
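The per-chip economics can be illustrated with a common back-of-the-envelope dies-per-wafer approximation; the wafer size, die areas and shrink factor below are assumptions for illustration, not figures from any manufacturer:

```python
# Rough dies-per-wafer estimate before and after an assumed die shrink.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Gross dies per wafer (ignores defects, scribe lines, edge exclusion)."""
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

old = dies_per_wafer(300, 120)   # hypothetical 120 mm^2 die on a 300 mm wafer
new = dies_per_wafer(300, 60)    # idealized shrink to half the die area
print(old, new)                  # ~528 vs ~1092 dies
print(f"cost per die falls by ~{100 * (1 - old / new):.0f}%")
```

Since the wafer's fabrication cost is roughly fixed, halving the die area here cuts the cost per die by about half, which is the effect described above.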
Half-shrink
In CPU fabrication, a die shrink always involves an advance to a lithographic node as defined by ITRS (see list). For GPU and SoC manufacturing, the die shrink often involves shrinking the die on a node not defined by the ITRS, for instance the 150 nm, 110 nm, 80 nm, 55 nm, 40 nm and, more recently, 8 nm nodes, sometimes referred to as "half-nodes". These are a stopgap between two ITRS-defined lithographic nodes (thus the term "half-node shrink") before a further shrink to the lower ITRS-defined node occurs, which helps save additional R&D cost. The choice to perform die shrinks to either full nodes or half-nodes rests with the foundry and not the integrated circuit designer.
See also
Integrated circuit
Semiconductor device fabrication
Photolithography
Moore's law
Transistor count
References
External links
0.11 μm Standard Cell ASIC
EETimes: ON Semi offers 110-nm ASIC platform
Renesas 55 nm process features
RDA, SMIC make 55-nm mixed-signal IC
Globalfoundries 40nm
UMC 45/40nm
SiliconBlue tips FPGA move to 40-nm
Globalfoundries 28nm, Leading-Edge Technologies
TSMC Reiterates 28 nm Readiness by Q4 2011
Design starts triple for TSMC at 28-nm
Integrated circuits
Semiconductors | Die shrink | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 835 | [
"Electrical resistance and conductance",
"Integrated circuits",
"Physical quantities",
"Computer engineering",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
14,005,026 | https://en.wikipedia.org/wiki/Diffusiophoresis%20and%20diffusioosmosis | Diffusiophoresis is the spontaneous motion of colloidal particles or molecules in a fluid, induced by a concentration gradient of a different substance. In other words, it is motion of one species, A, in response to a concentration gradient in another species, B. Typically, A is colloidal particles which are in aqueous solution in which B is a dissolved salt such as sodium chloride, and so the particles of A are much larger than the ions of B. But both A and B could be polymer molecules, and B could be a small molecule. For example, concentration gradients in ethanol solutions in water move 1 μm diameter colloidal particles with diffusiophoretic velocities of order , the movement is towards regions of the solution with lower ethanol concentration (and so higher water concentration). Both species A and B will typically be diffusing but diffusiophoresis is distinct from simple diffusion: in simple diffusion a species A moves down a gradient in its own concentration.
Diffusioosmosis, also referred to as capillary osmosis, is flow of a solution relative to a fixed wall or pore surface, where the flow is driven by a concentration gradient in the solution. This is distinct from flow relative to a surface driven by a gradient in the hydrostatic pressure in the fluid. In diffusioosmosis the hydrostatic pressure is uniform and the flow is due to a concentration gradient.
Diffusioosmosis and diffusiophoresis are essentially the same phenomenon. They are both relative motion of a surface and a solution, driven by a concentration gradient in the solution. This motion is called diffusiophoresis when the solution is considered static with particles moving in it due to relative motion of the fluid at the surface of these particles. The term diffusioosmosis is used when the surface is viewed as static, and the solution flows.
A well studied example of diffusiophoresis is the motion of colloidal particles in an aqueous solution of an electrolyte, where a gradient in the concentration of the electrolyte causes motion of the colloidal particles. Colloidal particles may be hundreds of nanometres or larger in diameter, while the interfacial double-layer region at the surface of the colloidal particle is of order the Debye length in width, and this is typically only nanometres. So here, the interfacial width is much smaller than the size of the particle, and the gradient in the smaller species then drives diffusiophoretic motion of the colloidal particles largely through motion in the interfacial double layer.
Diffusiophoresis was first studied by Derjaguin and coworkers in 1947.
Applications of diffusiophoresis
Diffusiophoresis, by definition, moves colloidal particles, and so the applications of diffusiophoresis are to situations where we want to move colloidal particles. Colloidal particles are typically between 10 nanometres and a few micrometres in size. Simple diffusion of colloids is fast on length scales of a few micrometres, and so diffusiophoresis would not be useful there, whereas on length scales larger than millimetres diffusiophoresis may be slow, as its speed decreases with decreasing magnitude of the solute concentration gradient. Thus, typically, diffusiophoresis is employed on length scales approximately in the range of a micrometre to a millimetre. Applications include moving particles into or out of pores of that size, and helping or inhibiting the mixing of colloidal particles.
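This length-scale argument can be made concrete with a rough calculation comparing the time to diffuse a distance L (via the Stokes–Einstein diffusion constant) with the time to be carried the same distance diffusiophoretically; the ~1 μm/s speed and the particle size below are assumed order-of-magnitude values, not measurements:

```python
# Order-of-magnitude comparison of diffusion vs. diffusiophoresis times.
import math

kT = 4.1e-21          # thermal energy at room temperature (J)
eta = 1e-3            # viscosity of water (Pa s)
radius = 0.5e-6       # radius of a 1 um diameter colloidal particle (m)
D = kT / (6 * math.pi * eta * radius)   # Stokes-Einstein diffusion constant
v = 1e-6              # assumed diffusiophoretic speed, ~1 um/s (m/s)

for L in (1e-6, 1e-4, 1e-3):            # 1 um, 100 um, 1 mm
    t_diff = L**2 / D                    # time to diffuse a distance L
    t_phor = L / v                       # time to drift a distance L
    print(f"L = {L:.0e} m: diffusion {t_diff:.1e} s, phoresis {t_phor:.1e} s")
```

On the micrometre scale the two times are comparable, while at a millimetre the phoretic transport is faster by orders of magnitude, consistent with the window of useful application stated above.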
In addition, solid surfaces that are slowly dissolving will create concentration gradients near them, and these gradients may drive movement of colloidal particles towards or away from the surface. This was studied by Prieve in the context of latex particles being pulled towards, and coating, a dissolving steel surface.
Relation to thermophoresis, multicomponent diffusion and the Marangoni effect
Diffusiophoresis is an analogous phenomenon to thermophoresis, where a species A moves in response to a temperature gradient. Both diffusiophoresis and thermophoresis are governed by Onsager reciprocal relations. Simply speaking, a gradient in any thermodynamic quantity, such as the concentration of any species or the temperature, will drive motion of all thermodynamic quantities, i.e., motion of all species present, and a heat flux. Each gradient provides a thermodynamic force that moves the species present, and the Onsager reciprocal relations govern the relationship between the forces and the motions.
Diffusiophoresis is a special case of multicomponent diffusion. Multicomponent diffusion is diffusion in mixtures, and diffusiophoresis is the special case where we are interested in the movement of one species, usually a colloidal particle, in a gradient of a much smaller species, such as a dissolved salt such as sodium chloride in water, or a miscible liquid such as ethanol in water. Thus diffusiophoresis always occurs in a mixture, typically a three-component mixture of water, salt and a colloidal species, and we are interested in the cross-interaction between the salt and the colloidal particle.
It is the very large difference in size between the colloidal particle, which may be 1 μm across, and the size of the ions or molecules, which are less than 1 nm across, that makes diffusiophoresis closely related to diffusioosmosis at a flat surface. In both cases the forces that drive the motion are largely localised to the interfacial region, which is a few molecules wide and so typically of order a nanometre across. Over distances of order a nanometre, there is little difference between the surface of a colloidal particle 1 μm across and a flat surface.
Diffusioosmosis is flow of a fluid at a solid surface, or in other words, flow at a solid/fluid interface. The Marangoni effect is flow at a fluid/fluid interface. So the two phenomena are analogous with the difference being that in diffusioosmosis one of the phases is a solid. Both diffusioosmosis and the Marangoni effect are driven by gradients in the interfacial free energy, i.e., in both cases the induced velocities are zero if the interfacial free energy is uniform in space, and in both cases if there are gradients the velocities are directed along the direction of increasing interfacial free energy.
Theory for diffusioosmotic flow of a solution
In diffusioosmosis, for a surface at rest the velocity increases from zero at the surface to the diffusioosmotic velocity over the width of the interface between the surface and the solution. Beyond this distance, the diffusioosmotic velocity does not vary with distance from the surface. The driving force for diffusioosmosis is thermodynamic, i.e., it acts to reduce the free energy of the system, and so the direction of flow is away from surface regions of low surface free energy and towards regions of high surface free energy. For a solute that adsorbs at the surface, diffusioosmotic flow is away from regions of high solute concentration, while for solutes that are repelled by the surface, flow is away from regions of low solute concentration.
For gradients that are not too large, the diffusioosmotic slip velocity, i.e., the relative flow velocity far from the surface, will be proportional to the concentration gradient

 v_slip = −K ∇c

where K is a diffusioosmotic coefficient, and c is the solute concentration. When the solute is ideal and interacts with a surface in the x–y plane at z = 0 via a potential φ(z), the coefficient K is given by

 K = (k_B T/η) ∫₀^∞ z [exp(−φ(z)/k_B T) − 1] dz

where k_B is the Boltzmann constant, T is the absolute temperature, and η is the viscosity in the interfacial region, assumed to be constant across the interface. This expression assumes that the velocity of fluid in contact with the surface is forced to be zero by the interaction between the fluid and the wall. This is called the no-slip condition.

To understand these expressions better, we can consider a very simple model where the surface simply excludes an ideal solute from an interface of width R; this would be the Asakura–Oosawa model of an ideal polymer against a hard wall. Then the integral is simply −R²/2, and the diffusioosmotic slip velocity is

 v_slip = (k_B T R²/2η) ∇c
Note that the slip velocity is directed towards increasing solute concentrations.
A particle much larger than R moves with a diffusiophoretic velocity u = −v_slip relative to the surrounding solution. So diffusiophoresis moves particles towards lower solute concentrations, in this case.
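As a rough numerical illustration of the excluded-solute result above, the sketch below evaluates v_slip = (k_B T R²/2η) ∇c. The exclusion width R and the gradient (a 100 mM concentration change over 100 μm, expressed as a number-density gradient) are assumed, illustrative inputs.

```python
# Order-of-magnitude slip velocity for the excluded-solute (Asakura-Oosawa)
# model: v_slip = (k_B*T*R^2 / (2*eta)) * grad_c, grad_c in molecules/m^4.
k_B = 1.380649e-23          # J/K
T = 298.0                   # K
eta = 1.0e-3                # Pa s (water)
R = 1.0e-9                  # m, assumed width of the exclusion region

N_A = 6.022e23
grad_c = (100.0 * N_A) / 100e-6   # 100 mM (= 100 mol/m^3) change over 100 um

v_slip = k_B * T * R**2 / (2 * eta) * grad_c
print(f"v_slip ~ {v_slip*1e6:.2f} um/s")   # ~1 um/s for these inputs
```

Even for a nanometre-wide interface this gives speeds of order a micrometre per second for steep, microfluidic-scale gradients; shallower gradients give proportionally slower flows.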
Derivation of diffusioosmotic velocity from Stokes flow
In this simple model, v_slip can also be derived directly from the expression for fluid flow in the Stokes limit for an incompressible fluid, which is

 η ∇²u = ∇p

for u the fluid flow velocity and p the pressure. We consider an infinite surface in the x–y plane at z = 0, and enforce stick boundary conditions there, i.e., u(z = 0) = 0. We take the concentration gradient to be along the x axis, i.e., ∇c = (∂c/∂x) x̂. Then the only non-zero component of the flow velocity is along x, u_x, and it depends only on the height z. So the only non-zero component of the Stokes equation is

 η ∂²u_x/∂z² = ∂p/∂x

In diffusioosmosis, in the bulk of the fluid (i.e., outside the interface) the hydrostatic pressure is assumed to be uniform (as we expect any gradients to relax away by fluid flow) and so in bulk

 p = p_solv + Π

for p_solv the solvent's contribution to the hydrostatic pressure, and Π the contribution of the solute, called the osmotic pressure. Thus in the bulk the gradients obey

 ∂p_solv/∂x = −∂Π/∂x

As we have assumed the solute is ideal, Π = k_B T c, and so

 ∂p_solv/∂x = −k_B T ∂c/∂x

Our solute is excluded from a region of width R (the interfacial region) from the surface, and so in the interface c = 0 and therefore Π = 0 there. Assuming continuity of the solvent contribution into the interface, we have a gradient of the hydrostatic pressure in the interface

 ∂p/∂x = −k_B T ∂c/∂x (for 0 < z < R)

i.e., in the interface there is a gradient of the hydrostatic pressure equal to the negative of the bulk gradient in the osmotic pressure. It is this gradient in the hydrostatic pressure in the interface that creates the diffusioosmotic flow. Now that we have ∂p/∂x, we can substitute it into the Stokes equation and integrate twice, giving

 u_x(z) = −(k_B T/2η)(∂c/∂x) z² + A z + B (for 0 ≤ z ≤ R)
 u_x(z) = C z + D (for z > R)

where A, B, C and D are integration constants. Far from the surface the flow velocity must be a constant, so C = 0. We have imposed zero flow velocity at z = 0, so B = 0. Then imposing continuity where the interface meets the bulk, i.e., forcing u_x and ∂u_x/∂z to be continuous at z = R, we determine A = (k_B T/η)(∂c/∂x) R and D = (k_B T R²/2η)(∂c/∂x), and so get

 u_x(z ≥ R) = v_slip = (k_B T R²/2η) ∂c/∂x

This gives, as it should, the same expression for the slip velocity as above. This result is for a specific and very simple model, but it illustrates general features of diffusioosmosis: 1) the hydrostatic pressure is, by definition, uniform in the bulk (flow induced by bulk pressure gradients is a common but separate physical phenomenon), while there is a gradient in the pressure in the interface; 2) this pressure gradient in the interface causes the velocity to vary in the direction perpendicular to the surface, and this results in a slip velocity, i.e., in the bulk of the fluid moving relative to the surface; 3) away from the interface the velocity is constant; this type of flow is sometimes called plug flow.
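The same result can be checked numerically. The sketch below solves η u″(z) = ∂p/∂x on a grid, with the pressure gradient switched on only inside the interface, no-slip at the wall and zero shear far away, and compares the far-field plateau with k_B T R² (∂c/∂x)/2η. The inputs (R and the gradient) are the same assumed illustrative values as above.

```python
# Finite-difference check of the plug-flow profile: eta*u''(z) = dp/dx(z),
# with dp/dx = -k_B*T*dc/dx for z < R and 0 for z > R, u(0) = 0 and
# u'(z_max) = 0. The plateau should match v_slip = k_B*T*R^2*dcdx/(2*eta).
import numpy as np

k_B, T, eta = 1.380649e-23, 298.0, 1.0e-3
R = 1.0e-9            # interface width, m (assumed)
dcdx = 6.022e29       # solute gradient, molecules/m^4 (assumed)

N = 4000
z = np.linspace(0.0, 10 * R, N)
dz = z[1] - z[0]
dpdx = np.where(z < R, -k_B * T * dcdx, 0.0)

upp = dpdx / eta                              # u''(z)
up = -np.flip(np.cumsum(np.flip(upp))) * dz   # u'(z), chosen zero at z_max
u = np.cumsum(up) * dz                        # u(z), zero at the wall

print(f"numeric plateau : {u[-1]*1e6:.3f} um/s")
print(f"analytic v_slip : {k_B*T*R**2*dcdx/(2*eta)*1e6:.3f} um/s")
```

The two values agree to within the crude rectangle-rule discretization error, and the velocity profile rises across the interface before flattening into plug flow, as in the derivation.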
Diffusiophoresis in salt solutions
In many applications of diffusiophoresis, the motion is driven by gradients in the concentration of a salt (electrolyte), such as sodium chloride in water. Colloidal particles in water are typically charged, and there is an electrostatic potential, called a zeta potential, at their surface. This charged surface of the colloidal particle interacts with a gradient in salt concentration, and this gives rise to a diffusiophoretic velocity given by

 u = (ε β ζ/η)(k_B T/e) ∇ln c + (ε/η)(2k_B T/e)² ln[cosh(e ζ/4k_B T)] ∇ln c

where ε is the permittivity of water, η is the viscosity of water, ζ is the zeta potential of the colloidal particle in the salt solution, e is the elementary charge, β = (D₊ − D₋)/(D₊ + D₋) is the reduced difference between the diffusion constant of the positively charged ion, D₊, and the diffusion constant of the negatively charged ion, D₋, and c is the salt concentration. ∇ln c is the gradient, i.e., rate of change with position, of the logarithm of the salt concentration, which is equivalent to the rate of change of the salt concentration divided by the salt concentration – it is effectively one over the distance over which the concentration decreases by a factor of e. The above equation is approximate, and only valid for 1:1 electrolytes such as sodium chloride.
Note that there are two contributions to diffusiophoresis of a charged particle in a salt gradient, which give rise to the two terms in the above equation for u. The first is due to the fact that whenever there is a salt concentration gradient, then unless the diffusion constants of the positive and negative ions are exactly equal to each other, there is an electric field, i.e., the gradient acts a little like a capacitor. This electric field generated by the salt gradient drives electrophoresis of the charged particle, just as an externally applied electric field does. This gives rise to the first term in the equation above, i.e., diffusiophoresis at a velocity (ε β ζ/η)(k_B T/e) ∇ln c.
The second part is due to the surface free energy of the surface of a charged particle decreasing with increasing salt concentration; this is a similar mechanism to that found in diffusiophoresis in gradients of neutral substances. This gives rise to the second part of the diffusiophoretic velocity, (ε/η)(2k_B T/e)² ln[cosh(e ζ/4k_B T)] ∇ln c. Note that this simple theory predicts that this contribution to the diffusiophoretic motion is always up a salt concentration gradient; it always moves particles towards higher salt concentration. By contrast, the sign of the electric-field contribution to diffusiophoresis depends on the sign of β ζ. So for example, for a negatively charged particle, ζ < 0, and if the positively charged ions diffuse faster than the negatively charged ones, then this term will push particles down a salt gradient, but if it is the negatively charged ions that diffuse faster, then this term pushes the particles up the salt gradient.
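The two contributions can be evaluated for a concrete case. The sketch below uses tabulated diffusivities of Na+ and Cl− together with an assumed zeta potential of −50 mV and a gradient of one concentration e-fold per 100 μm; these last two numbers are illustrative, not from this article.

```python
# Electrophoretic and chemiphoretic contributions to diffusiophoresis of a
# charged colloid in a 1:1 salt (NaCl) gradient, using the expression above.
import math

eps = 78.4 * 8.854e-12     # permittivity of water, F/m
eta = 1.0e-3               # viscosity of water, Pa s
kT_e = 0.0257              # thermal voltage k_B*T/e at 298 K, V
zeta = -0.050              # zeta potential, V (assumed)
D_p, D_m = 1.33e-9, 2.03e-9  # diffusivities of Na+ and Cl-, m^2/s
beta = (D_p - D_m) / (D_p + D_m)
grad_ln_c = 1.0 / 100e-6   # one e-fold of concentration per 100 um (assumed)

u_field = (eps * beta * zeta / eta) * kT_e * grad_ln_c
u_chemi = (eps / eta) * (2 * kT_e)**2 * \
          math.log(math.cosh(zeta / (4 * kT_e))) * grad_ln_c

print(f"beta = {beta:+.3f}")
print(f"electric-field term: {u_field*1e6:+.2f} um/s")
print(f"chemiphoretic term : {u_chemi*1e6:+.2f} um/s")
print(f"total              : {(u_field + u_chemi)*1e6:+.2f} um/s")
```

For NaCl the anion diffuses faster (β < 0), so for this negatively charged particle both terms point up the gradient, giving a total speed of a few μm/s, in line with typical measured diffusiophoretic velocities.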
Practical applications
A group from Princeton University reported the application of diffusiophoresis to water purification. Contaminated water is treated with CO2 to create carbonic acid and to split the water into a waste stream and a potable water stream; the resulting ion gradients drive suspended colloidal particles out of the potable stream. Compared to traditional filtration of dirty water sources, this offers potentially large savings in the energy and time needed to make drinking water safe.
See also
Electrokinetic phenomena
Electrophoresis
Marangoni effect – the analog of diffusioosmosis at a fluid/fluid interface
Thermophoresis
References
Further reading
Colloidal chemistry
Fluid mechanics | Diffusiophoresis and diffusioosmosis | [
"Chemistry",
"Engineering"
] | 3,206 | [
"Colloidal chemistry",
"Colloids",
"Surface science",
"Civil engineering",
"Fluid mechanics"
] |
14,011,666 | https://en.wikipedia.org/wiki/Electromagnetic%20absorption%20by%20water | The absorption of electromagnetic radiation by water depends on the state of the water.
The absorption in the gas phase occurs in three regions of the spectrum. Rotational transitions are responsible for absorption in the microwave and far-infrared, vibrational transitions in the mid-infrared and near-infrared. Vibrational bands have rotational fine structure. Electronic transitions occur in the vacuum ultraviolet regions.
Its weak absorption in the visible spectrum results in the pale blue color of water.
Overview
The water molecule, in the gaseous state, has three types of transition that can give rise to absorption of electromagnetic radiation:
Rotational transitions, in which the molecule gains a quantum of rotational energy. Atmospheric water vapour at ambient temperature and pressure gives rise to absorption in the far-infrared region of the spectrum, from about 200 cm−1 (50 μm) to longer wavelengths towards the microwave region.
Vibrational transitions in which a molecule gains a quantum of vibrational energy. The fundamental transitions give rise to absorption in the mid-infrared in the regions around 1650 cm−1 (μ band, 6 μm) and 3500 cm−1 (so-called X band, 2.9 μm).
Electronic transitions in which a molecule is promoted to an excited electronic state. The lowest energy transition of this type is in the vacuum ultraviolet region.
In reality, vibrations of molecules in the gaseous state are accompanied by rotational transitions, giving rise to a vibration-rotation spectrum. Furthermore, vibrational overtones and combination bands occur in the near-infrared region. The HITRAN spectroscopy database lists more than 37,000 spectral lines for gaseous H216O, ranging from the microwave region to the visible spectrum.
In liquid water the rotational transitions are effectively quenched, but absorption bands are affected by hydrogen bonding. In crystalline ice the vibrational spectrum is also affected by hydrogen bonding and there are lattice vibrations causing absorption in the far-infrared. Electronic transitions of gaseous molecules will show both vibrational and rotational fine structure.
Units
Infrared absorption band positions may be given either in wavelength (usually in micrometers, μm) or wavenumber (usually in reciprocal centimeters, cm−1) scale.
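The two scales are reciprocal: a band at wavenumber ν̃ in cm−1 lies at wavelength 10⁴/ν̃ in μm. The following minimal helper (checked against band positions quoted later in this article) converts between them.

```python
# Convert infrared band positions between micrometers and reciprocal
# centimeters: wavenumber [cm^-1] = 1e4 / wavelength [um], and vice versa.
def cm1_to_um(wavenumber_cm1: float) -> float:
    return 1.0e4 / wavenumber_cm1

def um_to_cm1(wavelength_um: float) -> float:
    return 1.0e4 / wavelength_um

# Fundamental vibrational bands of gaseous water cited in this article:
for nu in (3657.0, 3756.0, 1595.0):
    print(f"{nu:7.1f} cm^-1 = {cm1_to_um(nu):6.3f} um")
```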
Rotational spectrum
The water molecule is an asymmetric top, that is, it has three independent moments of inertia; one of its rotations is about the molecule's 2-fold symmetry axis. Because of the low symmetry of the molecule, a large number of transitions can be observed in the far-infrared region of the spectrum. Measurements of microwave spectra have provided a very precise value for the O−H bond length, 95.84 ± 0.05 pm, and H−O−H bond angle, 104.5 ± 0.3°.
Vibrational spectrum
The water molecule has three fundamental molecular vibrations. The O-H stretching vibrations give rise to absorption bands with band origins at 3657 cm−1 (ν1, 2.734 μm) and 3756 cm−1 (ν3, 2.662 μm) in the gas phase. The asymmetric stretching vibration, of B2 symmetry in the point group C2v, is a normal vibration. The H-O-H bending mode origin is at 1595 cm−1 (ν2, 6.269 μm). Both symmetric stretching and bending vibrations have A1 symmetry, but the frequency difference between them is so large that mixing is effectively zero. In the gas phase all three bands show extensive rotational fine structure. In the near-infrared spectrum ν3 has a series of overtones at wavenumbers somewhat less than n·ν3, n=2,3,4,5... Combination bands, such as ν2 + ν3, are also easily observed in the near-infrared region. The presence of water vapor in the atmosphere is important for atmospheric chemistry, especially as the infrared and near infrared spectra are easy to observe. Standard (atmospheric optical) codes are assigned to absorption bands as follows. 0.718 μm (visible): α, 0.810 μm: μ, 0.935 μm: ρστ, 1.13 μm: φ, 1.38 μm: ψ, 1.88 μm: Ω, 2.68 μm: X. The gaps between the bands define the infrared window in the Earth's atmosphere.
The infrared spectrum of liquid water is dominated by the intense absorption due to the fundamental O-H stretching vibrations. Because of the high intensity, very short path lengths, usually less than 50 μm, are needed to record the spectra of aqueous solutions. There is no rotational fine structure, but the absorption bands are broader than might be expected, because of hydrogen bonding. Peak maxima for liquid water are observed at 3450 cm−1 (2.898 μm), 3615 cm−1 (2.766 μm) and 1640 cm−1 (6.097 μm). Direct measurement of the infrared spectra of aqueous solutions requires that the cuvette windows be made of substances such as calcium fluoride which are water-insoluble. This difficulty can alternatively be overcome by using an attenuated total reflectance (ATR) device rather than transmission.
In the near-infrared range liquid water has absorption bands around 1950 nm (5128 cm−1), 1450 nm (6896 cm−1), 1200 nm (8333 cm−1) and 970 nm (10300 cm−1). The regions between these bands can be used in near-infrared spectroscopy to measure the spectra of aqueous solutions, with the advantage that glass is transparent in this region, so glass cuvettes can be used. The absorption intensity is weaker than for the fundamental vibrations, but this is not important as longer path-length cuvettes can be used. The absorption band at 698 nm (14300 cm−1) is a 3rd overtone (n=4). It tails off onto the visible region and is responsible for the intrinsic blue color of water. This can be observed with a standard UV/vis spectrophotometer, using a 10 cm path-length. The colour can be seen by eye by looking through a column of water about 10 m in length; the water must be passed through an ultrafilter to eliminate color due to Rayleigh scattering, which also can make water appear blue.
The spectrum of ice is similar to that of liquid water, with peak maxima at 3400 cm−1 (2.941 μm), 3220 cm−1 (3.105 μm) and 1620 cm−1 (6.17 μm)
In both liquid water and ice clusters, low-frequency vibrations occur, which involve the stretching (TS) or bending (TB) of intermolecular hydrogen bonds (O–H•••O). Bands at wavelengths λ = 50-55 μm or 182-200 cm−1 (44 μm, 227 cm−1 in ice) have been attributed to TS, the intermolecular stretch, and 200 μm or 50 cm−1 (166 μm, 60 cm−1 in ice) to TB, the intermolecular bend.
Visible region
Absorption coefficients for 200 nm and 900 nm are almost equal at 6.9 m−1 (attenuation length of 14.5 cm). Very weak light absorption, in the visible region, by liquid water has been measured using an integrating cavity absorption meter (ICAM). The absorption was attributed to a sequence of overtone and combination bands whose intensity decreases at each step, giving rise to an absolute minimum at 418 nm, at which wavelength the attenuation coefficient is about 0.0044 m−1, which is an attenuation length of about 227 meters. These values correspond to pure absorption without scattering effects. The attenuation of, e.g., a laser beam would be slightly stronger.
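These coefficients translate directly into transmission via the Beer–Lambert law, T = exp(−αL), for pure absorption with scattering neglected. A small sketch using the two coefficients quoted above:

```python
# Beer-Lambert transmission T = exp(-alpha * L) through pure water, using
# the absorption coefficients quoted above (scattering neglected).
import math

cases = {
    "200 or 900 nm": 6.9,        # absorption coefficient, 1/m
    "418 nm minimum": 0.0044,
}
for label, alpha in cases.items():
    print(f"{label}: attenuation length {1/alpha:8.1f} m")
    for L in (0.145, 10.0):      # path lengths, m
        T = math.exp(-alpha * L)
        print(f"  L = {L:6.3f} m -> T = {100*T:8.4f} %")
```

At 200 or 900 nm barely a third of the light survives 14.5 cm, while at the 418 nm minimum over 95% survives a 10 m column, which is why the blue tint only becomes visible over path lengths of metres.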
Electronic spectrum
The electronic transitions of the water molecule lie in the vacuum ultraviolet region. For water vapor the bands have been assigned as follows.
65 nm band — many different electronic transitions, photoionization, photodissociation
discrete features between 115 and 180 nm
set of narrow bands between 115 and 125 nm: Rydberg series, 1b1 (n2) → many different Rydberg states and 3a1 (n1) → 3sa1 Rydberg state
128 nm band: Rydberg series, 3a1 (n1) → 3sa1 Rydberg state and 1b1 (n2) → 3sa1 Rydberg state
166.5 nm band: 1b1 (n2) → 4a1 (σ1*-like orbital)
Microwaves and radio waves
The pure rotation spectrum of water vapor extends into the microwave region.
Liquid water has a broad absorption spectrum in the microwave region, which has been explained in terms of changes in the hydrogen bond network giving rise to a broad, featureless, microwave spectrum. The absorption (equivalent to dielectric loss) is used in microwave ovens to heat food that contains water molecules. A frequency of 2.45 GHz, wavelength 122 mm, is commonly used.
Radiocommunication at GHz frequencies is very difficult in fresh waters and even more so in salt waters.
Atmospheric effects
Water vapor is a greenhouse gas in the Earth's atmosphere, responsible for 70% of the known absorption of incoming sunlight, particularly in the infrared region, and about 60% of the atmospheric absorption of thermal radiation by the Earth known as the greenhouse effect. It is also an important factor in multispectral imaging and hyperspectral imaging used in remote sensing because water vapor absorbs radiation differently in different spectral bands. Its effects are also an important consideration in infrared astronomy and radio astronomy in the microwave or millimeter wave bands. The South Pole Telescope was constructed in Antarctica in part because the elevation and low temperatures there mean there is very little water vapor in the atmosphere.
Similarly, carbon dioxide absorption bands occur around 1400, 1600 and 2000 nm, but its presence in the Earth's atmosphere accounts for just 26% of the greenhouse effect. Carbon dioxide gas absorbs energy in some small segments of the thermal infrared spectrum that water vapor misses. This extra absorption within the atmosphere causes the air to warm just a bit more and the warmer the atmosphere the greater its capacity to hold more water vapor. This extra water vapor absorption further enhances the Earth's greenhouse effect.
In the atmospheric window between approximately 8000 and 14000 nm, in the far-infrared spectrum, carbon dioxide and water absorption is weak. This window allows most of the thermal radiation in this band to be radiated out to space directly from the Earth's surface. This band is also used for remote sensing of the Earth from space, for example with thermal Infrared imaging.
As well as absorbing radiation, water vapour occasionally emits radiation in all directions, according to the black body emission curve for its current temperature overlaid on the water absorption spectrum. Much of this energy will be recaptured by other water molecules, but at higher altitudes, radiation sent towards space is less likely to be recaptured, as there is less water available to recapture radiation of water-specific absorbing wavelengths. By the top of the troposphere, about 12 km above sea level, most water vapour condenses to liquid water or ice as it releases its heat of vaporisation. Having changed state, the liquid water and ice fall away to lower altitudes; this is balanced by incoming water vapour rising via convection currents.
Liquid water and ice emit radiation at a higher rate than water vapour. Water at the top of the troposphere, particularly in liquid and solid states, cools as it emits net photons to space. Neighboring gas molecules other than water (e.g. nitrogen) are cooled by passing their heat kinetically to the water. This is why temperatures at the top of the troposphere (known as the tropopause) are about −50 degrees Celsius.
See also
Dielectric spectroscopy
Differential optical absorption spectroscopy
Hydroxyl ion absorption in optical fiber
Water model
References
External links
High resolution gas-phase absorption simulations
Water absorption spectrum (Martin Chaplin) (archived version)
Water physics
Chemical physics
Absorption spectroscopy
Electrochemistry
Electric and magnetic fields in matter | Electromagnetic absorption by water | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,476 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Electric and magnetic fields in matter",
"Absorption spectroscopy",
"Materials science",
"Electrochemistry",
"Condensed matter physics",
"nan",
"Water physics",
"Spectroscopy",
"Chemical physics"
] |
1,124,466 | https://en.wikipedia.org/wiki/Uranium%E2%80%93thorium%20dating | Uranium–thorium dating, also called thorium-230 dating, uranium-series disequilibrium dating or uranium-series dating, is a radiometric dating technique established in the 1960s which has been used since the 1970s to determine the age of calcium carbonate materials such as speleothem or coral. Unlike other commonly used radiometric dating techniques such as rubidium–strontium or uranium–lead dating, the uranium-thorium technique does not measure accumulation of a stable end-member decay product. Instead, it calculates an age from the degree to which secular equilibrium has been restored between the radioactive isotope thorium-230 and its radioactive parent uranium-234 within a sample.
Background
Thorium is not soluble in natural water under conditions found at or near the surface of the earth, so materials grown in or from this water do not usually contain thorium. In contrast, uranium is soluble to some extent in all natural water, so any material that precipitates or is grown from such water also contains trace uranium, typically at levels of between a few parts per billion and a few parts per million by weight. As time passes after such material has formed, uranium-234 in the sample, with a half-life of 245,000 years, decays to thorium-230. Thorium-230 is itself radioactive, with a half-life of 75,000 years, so instead of accumulating indefinitely (as is, for instance, the case for the uranium–lead system), thorium-230 instead approaches secular equilibrium with its radioactive parent uranium-234. At secular equilibrium, the number of thorium-230 decays per year within a sample is equal to the number of thorium-230 atoms produced per year, which also equals the number of uranium-234 decays per year in the same sample.
History
In 1908, John Joly, a professor of geology at Trinity College Dublin, found higher radium contents in deep sediments than in those of the continental shelf, and suspected that detrital sediments scavenged radium out of seawater. Piggot and Urry found in 1942 that radium excess corresponded with an excess of thorium. It took another 20 years until the technique was applied to terrestrial carbonates (speleothems and travertines). In the late 1980s, the method was refined by mass spectrometry, with significant contributions from Larry Edwards. After Viktor Viktorovich Cherdyntsev's landmark book about uranium-234 had been translated into English, U-Th dating came to widespread research attention in Western geology.
Methods
U-series dating is a family of methods which can be applied to different materials over different time ranges.
Each method is named after the isotopes measured to obtain the date, mostly a daughter and its parent. Eight methods are listed in the table below.
The uranium-234/uranium-238 method is based on the fact that uranium-234 is dissolved preferentially over uranium-238, because when a uranium-238 atom decays by emitting an alpha particle the daughter atom is displaced from its normal position in the crystal by atomic recoil.
This produces a thorium-234 atom, which quickly (via short-lived intermediates) becomes a uranium-234 atom. Once the uranium is deposited, the ratio of uranium-234 to uranium-238 goes back down to its secular equilibrium value (at which the radioactivities of the two are equal), with the distance from equilibrium decreasing by a factor of 2 every 245,000 years.
A material balance gives, for some unknown constant k fixed by the initial uranium-234 excess, these expressions for the activity ratios (assuming that the thorium-230 starts at zero):

 [234U/238U](t) = 1 + k e^(−λ234 t)
 [230Th/238U](t) = 1 − e^(−λ230 t) + k (λ230/(λ230 − λ234)) (e^(−λ234 t) − e^(−λ230 t))

where λ234 and λ230 are the decay constants of uranium-234 and thorium-230, respectively. We can solve the first equation for k in terms of the unknown age, t:

 k = ([234U/238U] − 1) e^(λ234 t)

Putting this into the second equation gives us an equation to be solved for t:

 [230Th/238U] = 1 − e^(−λ230 t) + ([234U/238U] − 1) (λ230/(λ230 − λ234)) (1 − e^(−(λ230 − λ234) t))

Unfortunately there is no closed-form expression for the age t, but it is easily found using equation-solving algorithms.
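Since the age equation above is transcendental in t, it is usually solved numerically. The following is a minimal sketch using a standard root-finder; the two measured activity ratios are made-up example inputs, and the half-lives are those quoted in this article.

```python
# Solve the U-Th age equation numerically for the age t (in years).
import math
from scipy.optimize import brentq

lam_230 = math.log(2) / 75_000.0    # decay constant of thorium-230, 1/yr
lam_234 = math.log(2) / 245_000.0   # decay constant of uranium-234, 1/yr

r_th = 0.85    # measured [230Th/238U] activity ratio (example value)
r_u = 1.10     # measured [234U/238U] activity ratio (example value)

def age_equation(t: float) -> float:
    rhs = (1.0 - math.exp(-lam_230 * t)
           + (r_u - 1.0) * lam_230 / (lam_230 - lam_234)
           * (1.0 - math.exp(-(lam_230 - lam_234) * t)))
    return rhs - r_th

t_age = brentq(age_equation, 1.0, 500_000.0)   # bracket: 1 yr to 500 kyr
print(f"U-Th age ~ {t_age:,.0f} years")
```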
Dating limits
Uranium–thorium dating has an upper age limit of somewhat over 500,000 years, defined by the half-life of thorium-230, the precision with which one can measure the thorium-230/uranium-234 ratio in a sample, and the accuracy to which one knows the half-lives of thorium-230 and uranium-234. Using this technique to calculate an age, the ratio of uranium-234 to its parent isotope uranium-238 must also be measured.
Precision
U-Th dating yields the most accurate results if applied to precipitated calcium carbonate, that is in stalagmites, travertines, and lacustrine limestones. Bone and shell are less reliable. Mass spectrometry can achieve a precision of ±1%. Conventional alpha counting's precision is ±5%. Mass spectrometry also uses smaller samples.
See also
Radiocarbon dating
References
External links
Radiometric dating
Thorium
Uranium | Uranium–thorium dating | [
"Chemistry"
] | 960 | [
"Radiometric dating",
"Radioactivity"
] |
1,125,528 | https://en.wikipedia.org/wiki/Swordquest | Swordquest is a series of video games originally produced by Atari, Inc. in the 1980s as part of a contest, consisting of three finished games, Earthworld, Fireworld and Waterworld (with these titles occasionally appearing on cartridge labels and boxes with capitalized central Ws, e.g. EarthWorld), and a planned fourth game, Airworld.
About
Each of the games came with a comic book that explained the plot, as well as containing part of the solution to a major puzzle that had to be solved to win the contest, with a series of prizes whose total value was $150,000. The series had its genesis as a possible sequel to Atari's groundbreaking 1979 title Adventure, but it developed mythology and a system of play that was unique.
The comic books were produced by DC Comics, written by Roy Thomas and Gerry Conway, and drawn and inked by George Pérez and Dick Giordano. All three game box covers were illustrated by an Atari in-house illustrator, Warren Chang. A special fan club offer was provided, allowing those who wanted the game to also get a T-shirt and poster for each game.
The games of the Swordquest series (along with the Atari 2600 Raiders of the Lost Ark) were some of the earliest attempts to combine the narrative and logic elements of the adventure game genre with the twitch gameplay of the action genre, making them some of the first action-adventure games. However, due to Atari's financial problems related to the video game crash of 1983, the last contest, along with the grand finale contest, was never held, and the final game in the series, Airworld, was not released. As such, the contest was never completed, and the unknown fate of some of the prizes has become an urban legend in the gaming community.
As part of Atari 50: The Anniversary Celebration, a collection of Atari games for its 50th anniversary in 2022, Digital Eclipse created a version of Airworld that completes the Swordquest series.
Gameplay
Each game of the Swordquest series was themed after the classical elements: earth, fire, water, and air. Each game required the player to move through a maze of rooms, collecting objects from one and placing them in other rooms. The arrangement or theme of the rooms varied with each game: Earthworld was themed after the Western zodiac, Fireworld after the Kabbalah tree of life, Waterworld after the chakras, and Airworld was to have been modeled after the I Ching. Traversing between rooms sometimes required the player to complete a "twitch"-style minigame to progress. When the player placed an item in its correct room, they would be presented with numerical clues that referred to a page and panel within the comic that was packaged with the game. There, the player would find a hidden word that was part of the larger Swordquest contest, as by submitting all the correct words in the correct order to Atari, they would be entered into the next phase of the project. The discovered words would form a relevant phrase towards the larger contest. In at least two cases, for Earthworld and Fireworld, there were more clues indicated by the game than required to be submitted. Players also had to identify a second clue in the game's instruction manual (for Earthworld, indicating prime numbers to use only clues on prime numbered pages) to know which clues to send in.
Plot
The games follow twins named Tarra and Torr. Their parents were slain by King Tyrannus's guards, prompted by a prophecy by the king's wizard Konjuro that the twins would slay Tyrannus. The twins were then raised as commoners by thieves to avoid being slain by the king. When they go to plunder Konjuro's sea keep, they accidentally reveal their identities to him. The twins then start running from a demon summoned to kill them, but it appears that a jewel they stole attracts it. After smashing the stone to avoid the demon, two of Tyrannus's old advisers appear and tell the two about the "Sword of Ultimate Sorcery" and the "Talisman of Penultimate Truth." They are then transported to Earthworld.
After defeating many beasts of the Zodiac and another thief (Herminus) in Earthworld, the twins are transported to the "central chamber" where the "Sword of Ultimate Sorcery" and the "Talisman of Penultimate Truth" are kept. Upon reaching them, the sword burns a hole through its altar all the way to Fireworld. In Fireworld, the twins split up to look for water, and Torr, with the aid of the talisman, summons Mentorr who shows Torr the "Chalice of Light," which will quench his thirst. The twins reunite eventually and find the chalice. However, Torr drops it after he is startled, and it is revealed that the one they found was not the true chalice. Herminus then gives them the chalice, and it grows until it becomes large enough to swallow the twins and transports them to Waterworld.
Upon reaching Waterworld, the twins become separated. Konjuro casts a spell that causes the twins to lose their memories. Tarra travels to a ship made of ice and meets Cap'n Frost, who desires to find the "Crown of Life" and rule Waterworld. Meanwhile, Torr travels to an undersea kingdom and meets the city's ex-queen Aquana, who desires to find the "Crown of Life" in order to regain her throne. After a brief war between the ex-queen and captain, Herminus sets the twins to duel each other. They then pray to their deities for guidance, which summons Mentorr who allows them to regain their memories. The twins throw down their swords, causing the crown to be revealed and split in half. The halves are given to the ex-queen and the captain, who then rule as equals. The "Sword of Ultimate Sorcery" then transports the twins to Airworld where they would have to do battle with King Tyrannus and Konjuro.
While the comic for Airworld was started, the cancellation of the series left the comic unfinished.
Development
The concept of Swordquest originated from Atari's previous Adventure video game, which is notable for one of the first documented Easter eggs. Adventure drew more interest once the Easter egg was found and documented, leading Atari to come up with a type of sequel where "marketing thought it would be a great idea to create a series of games where players would have to find clues both in the game [and in its physical materials]", as described by Atari historian Curt Vendel. As Atari was owned by Warner Communications at this point, they were able to use two of Warner's subsidiaries to help with this contest. DC Comics was used to create the comic book that would help create the setting where the word clues would be hidden, written by Gerry Conway and Roy Thomas and illustrated by George Pérez. The Franklin Mint crafted the game's prizes. The games themselves were programmed by Tod Frye.
Contest
Atari had designed the Swordquest contest to award a winner for each of the four games. For each game, they had planned to bring all winners to the Atari headquarters in Sunnyvale, California, to race to complete a specially-programmed version of that game to be the first to finish it. The person with the fastest completion would be named the winner and be awarded a "treasure", produced by Franklin Mint, each valued at around $25,000 at the time of Swordquest's release. The prizes were:
Earthworld: The "Talisman of Penultimate Truth", an 18-karat solid gold disc studded with 12 diamonds and the birthstones of the 12 Zodiac signs, with a miniature white gold sword set atop it.
Fireworld: The "Chalice of Light", a goblet made of platinum and gold studded with diamonds, rubies, sapphires, pearls, and green jade.
Waterworld: The "Crown of Life", a solid gold crown decorated with diamonds, rubies, sapphires, and aquamarines.
Airworld: The "Philosopher's Stone", a large piece of white jade encased in an 18-karat gold box encrusted with emeralds, rubies, and diamonds.
The four winners would then have competed in a final contest to win the ultimate prize, "The Sword of Ultimate Sorcery", with a silver blade and an 18-karat gold handle covered with diamonds, emeralds, sapphires, and rubies, that was valued at $50,000.
For Earthworld, about 5000 entries were received, but only eight answered correctly. The contest was held in May 1983, with Stephen Bell winning the Talisman. For Fireworld, Atari received several more entries, with 73 of these being correct. For practicality, Atari required the 73 finalists to write a brief essay of what they liked about the game, selecting the top 50 replies to continue to the final competition, held in January 1984. This was won by Michael Rideout, who was awarded the Chalice.
At this point in time, Atari had suffered major financial setbacks due to the 1983 video game crash. Atari was further in the midst of dealing with fallout from an insider trading scandal by former CEO Ray Kassar; Kassar was replaced by James J. Morgan in mid-1983, and looking to cut financial losses, eventually cancelled the Swordquest project, despite work having already started on Airworld. However, because the company had already advertised the availability of the Waterworld contest, Atari's lawyers required the company to continue the contest. To limit the number of entries, Waterworld was only made available to members of the Atari Club. During the contest period, in mid-1984, Atari was sold to Jack Tramiel, the owner of Commodore International. Tramiel, who had been more focused on the success of home computers than gaming consoles, placed the Atari divisions in a new company, Tramel Technology, and reviewed the state of all divisions, furthering the troubles in completing the Waterworld contest. Most who did enter the Waterworld contest were told they did not qualify for the final, but according to Vendel, Atari was legally required to follow through as advertised on the Waterworld contest. Vendel stated that Atari did secretly invite those with correct entries to hold the final round, and the Crown was awarded to a person whose name remains anonymous due to legal requirements. Because the ultimate final round could not be held, Bell and Rideout were both awarded an additional cash prize as well as an Atari 7800 as compensation, and the ten finalists of Waterworld were each given a cash prize.
The fate of the prizes has become an urban legend in the gaming community since the cancellation of the project. Of the five treasures, Rideout has claimed, as recently as 2017, that he still has the Chalice in his possession, stored in a safe deposit box. Bell fell out of contact following the Swordquest event, but according to Vendel and Rideout, Bell appears to have had the disc part of the Talisman melted down for its gold value, keeping the small sword, diamonds, and birthstones; the current fate of these is unknown. The fate of the Crown is unknown; Vendel stated that while Atari was required to hold the contest, they could have simply awarded the winner with an equivalent cash prize as opposed to the Crown.
Since they were never part of any contest, the Philosopher's Stone and the Sword have seemingly disappeared. Some sources have claimed that Tramiel took possession of the prizes himself, based on rumoured observations that Atari staff or associates of Tramiel had made of seeing a similar looking sword mounted in Tramiel's office or on his home mantel. However, Vendel believes that the persons who started this rumor may have mistaken a Tramiel family heirloom for the Swordquest sword. Vendel believes that it is unlikely that Tramiel would have been able to keep the Stone, Sword, and (if not given away) Crown, as when Atari, Inc was sold, these items were still the property of Warner Communications, and would have been returned to the Franklin Mint. With the Franklin Mint later being sold in 1985 to American Protective Services, and the original Atari business no longer existing, the prizes were most likely melted back down to their base components for reuse elsewhere, according to Vendel.
On June 29, 2022, Atari announced that as part of the Atari 50: The Anniversary Celebration collection, that the Digital Eclipse team had created the fourth and final entry in the Swordquest series, Airworld. Atari 50 was released on PlayStation 4 and 5, Xbox One and Series X/S, Nintendo Switch, and Windows in November 2022.
Comic books
Original mini-comics
Each of the three released games shipped with a comic book, published jointly by Atari and DC Comics. The books included clues to solve the puzzles within each of the games.
Dynamite Entertainment mini-series
In February 2017, Dynamite Entertainment announced a new comic book series, called Swordquest, but based on the actual contest around the three games, rather than the story within the games. It was a six-issue series, starting with a special #0 "Preview" book that sold with a cover price of 25¢ and was published in May 2017. The remaining 5 issues, published monthly after the preview, sold at $3.99 each. In addition, Dynamite released a trade paperback that reprinted the three mini-comics along with the mini-comic for the game Yars' Revenge. As with the originals, the TPB is sized as a mini-comic.
The series featured the story of a person who had played the three Swordquest games (with help from two friends who were brother and sister) when he was younger and was anticipating Airworld. Now as an adult, he continues his efforts to play Airworld using his old Atari hardware, but is caught up with a mysterious figure who offers to help him obtain the real "Sword of Ultimate Sorcery" from its resting place in the World Arcade Museum. As well as being valuable, it may have its own mysterious powers. The man contacts his two childhood friends to accompany him on his new "Swordquest".
The comic was written by Chad Bowers and Chris Sims and had art by Scott Kowalchuk under the pseudonym "Ghostwriter X". A trade paperback reprint of all six issues, titled Swordquest: Realworld was released in February 2018.
Reception
Richard A. Edwards reviewed Swordquest: Earthworld in The Space Gamer No. 61. Edwards commented that "The only reason to purchase a copy of Swordquest: Earthworld is to try and solve the puzzle and win the prize. Gamers not interested in spending the time required should pass this one." In 1995, Flux magazine ranked Swordquest: Earthworld 71st on their Top 100 Video Games.
In popular culture
Both the novel Ready Player One and the film adaptation reference the Swordquest series.
References
External links
Atari Protos SwordQuest: AirWorld
Atari Protos SwordQuest: EarthWorld
Atari Protos SwordQuest: FireWorld
Atari Protos SwordQuest: WaterWorld
The SwordQuest Comic Book Archive
Swordquest Interview With Michael Rideout
1982 video games
1983 video games
Action-adventure games
Atari 2600 games
Comics by George Pérez
Comics by Gerry Conway
Comics by Roy Thomas
Multimedia works
Puzzle competitions
Defunct esports competitions
Video games adapted into comics
Video game franchises introduced in 1982
Video games developed in the United States | Swordquest | [
"Technology"
] | 3,159 | [
"Multimedia",
"Multimedia works"
] |
1,126,109 | https://en.wikipedia.org/wiki/Photosystem | Photosystems are functional and structural units of protein complexes involved in photosynthesis. Together they carry out the primary photochemistry of photosynthesis: the absorption of light and the transfer of energy and electrons. Photosystems are found in the thylakoid membranes of plants, algae, and cyanobacteria. These membranes are located inside the chloroplasts of plants and algae, and in the cytoplasmic membrane of photosynthetic bacteria. There are two kinds of photosystems: PSI and PSII.
PSII absorbs red light, and PSI absorbs far-red light. Although photosynthetic activity will be detected when the photosystems are exposed to either red or far-red light, the photosynthetic activity will be the greatest when plants are exposed to both wavelengths of light. Studies have demonstrated that the two wavelengths together have a synergistic effect on the photosynthetic activity, rather than an additive one.
Each photosystem has two parts: a reaction center, where the photochemistry occurs, and an antenna complex, which surrounds the reaction center. The antenna complex contains hundreds of chlorophyll molecules which funnel the excitation energy to the center of the photosystem. At the reaction center, the energy will be trapped and transferred to produce a high energy molecule.
The main function of PSII is to efficiently split water into oxygen molecules and protons. PSII provides a steady stream of electrons to PSI, which boosts these in energy and transfers them to NADP+ and H+ to make NADPH. The hydrogen from this NADPH can then be used in a number of different processes within the plant.
Reaction centers
Reaction centers are multi-protein complexes found within the thylakoid membrane.
At the heart of a photosystem lies the reaction center, which is an enzyme that uses light to reduce and oxidize molecules (give off and take up electrons). This reaction center is surrounded by light-harvesting complexes that enhance the absorption of light.
In addition, surrounding the reaction center are pigments which will absorb light. The pigments which absorb light at the highest energy level are found furthest from the reaction center. On the other hand, the pigments with the lowest energy level are more closely associated with the reaction center. Energy will be efficiently transferred from the outer part of the antenna complex to the inner part. This funneling of energy is performed via resonance transfer, which occurs when energy from an excited molecule is transferred to a molecule in the ground state. This ground state molecule will be excited, and the process will continue between molecules all the way to the reaction center. At the reaction center, the electrons on the special chlorophyll molecule will be excited and ultimately transferred away by electron carriers. (If the electrons were not transferred away after excitation to a high energy state, they would lose energy by fluorescence back to the ground state, which would not allow plants to drive photosynthesis.) The reaction center will drive photosynthesis by taking light and turning it into chemical energy that can then be used by the chloroplast.
Two families of reaction centers in photosystems can be distinguished: type I reaction centers (such as photosystem I (P700) in chloroplasts and in green-sulfur bacteria) and type II reaction centers (such as photosystem II (P680) in chloroplasts and in non-sulfur purple bacteria). The two photosystems originated from a common ancestor, but have since diversified.
Each of the photosystems can be identified by the wavelength of light to which it is most reactive (700 nanometers for PSI and 680 nanometers for PSII in chloroplasts), the amount and type of light-harvesting complex present, and the type of terminal electron acceptor used.
Type I photosystems use ferredoxin-like iron-sulfur cluster proteins as terminal electron acceptors, while type II photosystems ultimately shuttle electrons to a quinone terminal electron acceptor. Both reaction center types are present in chloroplasts and cyanobacteria, and work together to form a unique photosynthetic chain able to extract electrons from water, creating oxygen as a byproduct.
Structure of PSI and PSII
A reaction center comprises several (about 25-30) protein subunits, which provide a scaffold for a series of cofactors. The cofactors can be pigments (like chlorophyll, pheophytin, carotenoids), quinones, or iron-sulfur clusters.
Each photosystem has two main subunits: an antenna complex (a light harvesting complex or LHC) and a reaction center. The antenna complex is where light is captured, while the reaction center is where this light energy is transformed into chemical energy. At the reaction center, there are many polypeptides that are surrounded by pigment proteins. At the center of the reaction center is a special pair of chlorophyll molecules.
Each PSII has about 8 LHCII. These contain about 14 chlorophyll a and chlorophyll b molecules, as well as about four carotenoids. In the reaction center of PSII of plants and cyanobacteria, the light energy is used to split water into oxygen, protons, and electrons. The protons contribute to the proton gradient that fuels the ATP synthase at the end of the electron transport chain. A majority of the reactions occur at the D1 and D2 subunits of PSII.
In oxygenic photosynthesis
Both photosystem I and II are required for oxygenic photosynthesis. Oxygenic photosynthesis can be performed by plants and cyanobacteria; cyanobacteria are believed to be the progenitors of the photosystem-containing chloroplasts of eukaryotes. Photosynthetic bacteria that cannot produce oxygen have only one photosystem, which is similar to either PSI or PSII.
At the core of photosystem II is P680, a special chlorophyll to which incoming excitation energy from the antenna complex is funneled. One of the electrons of excited P680* will be transferred to a non-fluorescent molecule, which ionizes the chlorophyll and boosts its energy further, enough that it can split water in the oxygen evolving complex (OEC) of PSII and recover its electron. At the heart of the OEC are 4 Mn atoms, each of which can trap one electron. The electrons harvested from the splitting of two waters fill the OEC complex in its highest-energy state, which holds 4 excess electrons.
Electrons travel through the cytochrome b6f complex to photosystem I via an electron transport chain within the thylakoid membrane. Energy from PSI drives this process and is harnessed (the whole process is termed chemiosmosis) to pump protons across the membrane, into the thylakoid lumen space from the chloroplast stroma. This will provide a potential energy difference between lumen and stroma, which amounts to a proton-motive force that can be utilized by the proton-driven ATP synthase to generate ATP. If electrons only pass through once, the process is termed noncyclic photophosphorylation, but if they pass through PSI and the proton pump multiple times it is called cyclic photophosphorylation.
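The proton-motive force referred to here has two components: an electrical potential difference Δψ across the thylakoid membrane and a pH difference between stroma and lumen, combined as pmf = Δψ + (2.303RT/F)(pH_stroma − pH_lumen). Below is a minimal sketch with assumed, textbook-scale values (a ΔpH near 3 and a small Δψ; neither number is from this article).

```python
# Proton-motive force across the thylakoid membrane:
# pmf = d_psi + (2.303*R*T/F) * (pH_stroma - pH_lumen), in volts.
R = 8.314        # gas constant, J/(mol K)
T = 298.0        # temperature, K
F = 96485.0      # Faraday constant, C/mol

d_psi = 0.030    # electrical component, V (assumed)
d_pH = 3.0       # pH_stroma - pH_lumen (assumed; lumen is acidic)

pmf = d_psi + (2.303 * R * T / F) * d_pH
print(f"pmf ~ {pmf*1000:.0f} mV")   # ~210 mV for these inputs
```

For illuminated chloroplasts most of the proton-motive force comes from the pH term, unlike in mitochondria where the electrical term dominates.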
When the electron reaches photosystem I, it fills the electron deficit of the light-excited reaction-center chlorophyll P700 of PSI. The electron may either continue to go through cyclic electron transport around PSI or pass, via ferredoxin, to the enzyme NADP+ reductase. Electrons and protons are added to NADP+ to form NADPH.
This reducing (hydrogenation) agent is transported to the Calvin cycle to react with glycerate 3-phosphate, along with ATP to form glyceraldehyde 3-phosphate, the basic building block from which plants can make a variety of substances.
Photosystem repair
In intense light, plants use various mechanisms to prevent damage to their photosystems. They are able to release some light energy as heat, but the excess light can also produce reactive oxygen species. While some of these can be detoxified by antioxidants, the remaining oxygen species will be detrimental to the photosystems of the plant. More specifically, the D1 subunit in the reaction center of PSII can be damaged. Studies have found that deg1 proteins are involved in the degradation of these damaged D1 subunits. New D1 subunits can then replace these damaged D1 subunits in order to allow PSII to function properly again.
See also
Light reaction
Photoinhibition
Photosynthetic reaction centre
References
External links
Photosystems I + II: Imperial College, Barber Group
Photosystem I: Molecule of the Month in the Protein Data Bank
Photosystem II: Molecule of the Month in the Protein Data Bank
Photosystem II: ANU
UMich Orientation of Proteins in Membranes – Calculated spatial positions of photosynthetic reaction centers and photosystems in membrane
Photosynthesis
Light reactions
Metalloproteins
Integral membrane proteins | Photosystem | [
"Chemistry",
"Biology"
] | 1,946 | [
"Photosynthesis",
"Biochemical reactions",
"Light reactions",
"Biochemistry",
"Metalloproteins",
"Bioinorganic chemistry"
] |
1,126,110 | https://en.wikipedia.org/wiki/Photosystem%20II | Photosystem II (or water-plastoquinone oxidoreductase) is the first protein complex in the light-dependent reactions of oxygenic photosynthesis. It is located in the thylakoid membrane of plants, algae, and cyanobacteria. Within the photosystem, enzymes capture photons of light to energize electrons that are then transferred through a variety of coenzymes and cofactors to reduce plastoquinone to plastoquinol. The energized electrons are replaced by oxidizing water to form hydrogen ions and molecular oxygen.
By replenishing lost electrons with electrons from the splitting of water, photosystem II provides the electrons for all of photosynthesis to occur. The hydrogen ions (protons) generated by the oxidation of water help to create a proton gradient that is used by ATP synthase to generate ATP. The energized electrons transferred to plastoquinone are ultimately used to reduce NADP+ to NADPH or are used in non-cyclic electron flow. DCMU is a chemical often used in laboratory settings to inhibit photosynthesis. When present, DCMU inhibits electron flow from photosystem II to plastoquinone.
Structure of complex
The core of PSII consists of a pseudo-symmetric heterodimer of two homologous proteins D1 and D2. Unlike the reaction centers of all other photosystems in which the positive charge sitting on the chlorophyll dimer that undergoes the initial photoinduced charge separation is equally shared by the two monomers, in intact PSII the charge is mostly localized on one chlorophyll center (70−80%). Because of this, P680+ is highly oxidizing and can take part in the splitting of water.
Photosystem II (of cyanobacteria and green plants) is composed of around 20 subunits (depending on the organism) as well as other accessory, light-harvesting proteins. Each photosystem II contains at least 99 cofactors: 35 chlorophyll a, 12 beta-carotene, two pheophytin, two plastoquinone, two heme, one bicarbonate, 20 lipids, the Mn4CaO5 cluster (including two chloride ions), one non-heme iron and two putative calcium ions per monomer. There are several crystal structures of photosystem II deposited in the Protein Data Bank; for example, 3BZ1 and 3BZ2 are monomeric structures of the photosystem II dimer.
Oxygen-evolving complex (OEC)
The oxygen-evolving complex is the site of water oxidation. It is a metallo-oxo cluster comprising four manganese ions (in oxidation states ranging from +3 to +4) and one divalent calcium ion. When it oxidizes water, producing oxygen gas and protons, it sequentially delivers the four electrons from water to a tyrosine (D1-Y161) sidechain and then to P680 itself. It is composed of three protein subunits, OEE1 (PsbO), OEE2 (PsbP) and OEE3 (PsbQ); a fourth PsbR peptide is associated nearby.
The first structural model of the oxygen-evolving complex was solved using X-ray crystallography from frozen protein crystals with a resolution of 3.8 Å in 2001. Over the next years the resolution of the model was gradually increased to 2.9 Å. While obtaining these structures was in itself a great feat, they did not show the oxygen-evolving complex in full detail. In 2011 the OEC of PSII was resolved to a level of 1.9 Å, revealing five oxygen atoms serving as oxo bridges linking the five metal atoms and four water molecules bound to the cluster; more than 1,300 water molecules were found in each photosystem II monomer, some forming extensive hydrogen-bonding networks that may serve as channels for protons, water or oxygen molecules. At this stage, it is suggested that the structures obtained by X-ray crystallography are biased, since there is evidence that the manganese atoms are reduced by the high-intensity X-rays used, altering the observed OEC structure. This incentivized researchers to take their crystals to different X-ray facilities, called X-ray free-electron lasers, such as SLAC in the USA. In 2014 the structure observed in 2011 was confirmed. Knowing the structure of photosystem II did not suffice to reveal how it works exactly, so a race has started to solve its structure at different stages of the mechanistic cycle (discussed below). Structures of the S1 state and the S3 state have now been published almost simultaneously by two different groups, showing the addition of an oxygen molecule designated O6 between Mn1 and Mn4, suggesting that this may be the site on the oxygen-evolving complex where oxygen is produced.
Water splitting
Photosynthetic water splitting (or oxygen evolution) is one of the most important reactions on the planet, since it is the source of nearly all the atmosphere's oxygen. Moreover, artificial photosynthetic water-splitting may contribute to the effective use of sunlight as an alternative energy-source.
The mechanism of water oxidation is understood in substantial detail. The oxidation of water to molecular oxygen requires extraction of four electrons and four protons from two molecules of water. The experimental evidence that oxygen is released through a cyclic reaction of the oxygen evolving complex (OEC) within one PSII was provided by Pierre Joliot et al. They showed that, if dark-adapted photosynthetic material (higher plants, algae, and cyanobacteria) is exposed to a series of single-turnover flashes, oxygen evolution is detected with a typical period-four damped oscillation, with maxima on the third and the seventh flash and with minima on the first and the fifth flash. Based on this experiment, Bessel Kok and co-workers introduced a cycle of five flash-induced transitions of the so-called S-states, describing the four redox states of the OEC: when four oxidizing equivalents have been stored (at the S4-state), the OEC returns to its basic S0-state. In the absence of light, the OEC will "relax" to the S1 state; the S1 state is often described as being "dark-stable". The S1 state is largely considered to consist of manganese ions with oxidation states of Mn3+, Mn3+, Mn4+, Mn4+. Finally, the intermediate S-states were proposed by Jablonsky and Lazar as a regulatory mechanism and link between S-states and tyrosine Z.
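Kok's scheme can be illustrated with a toy simulation: start all centers in the dark-stable S1 state, let each flash advance a center one S-state with some probability, and release O2 on the S3 → S0 step. The 10% "miss" probability below is an assumed value (double hits and other complications are neglected), chosen only to reproduce the damped period-four pattern with maxima on the third and seventh flashes.

```python
# Toy Kok-cycle simulation of flash-induced oxygen evolution.
import numpy as np

miss = 0.10                            # assumed probability a flash misses
pop = np.array([0.0, 1.0, 0.0, 0.0])   # populations of S0..S3; start in S1

for flash in range(1, 13):
    advanced = (1.0 - miss) * pop      # fraction stepping S_i -> S_{i+1}
    o2 = advanced[3]                   # the S3 -> (S4) -> S0 step makes O2
    pop = miss * pop + np.roll(advanced, 1)   # S3 wraps around to S0
    print(f"flash {flash:2d}: O2 yield = {o2:.3f}")
```

The printed yields peak on flashes 3, 7, 11, and so on, and the oscillation damps out, qualitatively matching the Joliot flash experiments described above.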
In 2012, Renger put forward the idea that water molecules undergo internal conversion into typical oxide species in the different S-states during water splitting.
Inhibitors
Inhibitors of PSII are used as herbicides. There are two main chemical families: the triazines, derived from cyanuric chloride, of which atrazine and simazine are the most commonly used, and the aryl ureas, which include chlortoluron and diuron (DCMU).
See also
Oxygen evolution
P680
Photosynthesis
Photosystem
Photosystem I
Photosystem II light-harvesting protein
Reaction Centre
Photoinhibition
References
Photosynthesis
Light reactions
Manganese enzymes
EC 1.10.3 | Photosystem II | [
"Chemistry",
"Biology"
] | 1,562 | [
"Biochemistry",
"Light reactions",
"Photosynthesis",
"Biochemical reactions"
] |
1,126,111 | https://en.wikipedia.org/wiki/Photosystem%20I | Photosystem I (PSI, or plastocyanin–ferredoxin oxidoreductase) is one of two photosystems in the photosynthetic light reactions of algae, plants, and cyanobacteria. Photosystem I is an integral membrane protein complex that uses light energy to catalyze the transfer of electrons across the thylakoid membrane from plastocyanin to ferredoxin. Ultimately, the electrons that are transferred by Photosystem I are used to produce the moderate-energy hydrogen carrier NADPH. The photon energy absorbed by Photosystem I also produces a proton-motive force that is used to generate ATP. PSI is composed of more than 110 cofactors, significantly more than Photosystem II.
History
This photosystem is known as PSI because it was discovered before Photosystem II, although future experiments showed that Photosystem II is actually the first enzyme of the photosynthetic electron transport chain. Aspects of PSI were discovered in the 1950s, but the significance of these discoveries was not yet recognized at the time. Louis Duysens first proposed the concepts of Photosystems I and II in 1960, and, in the same year, a proposal by Fay Bendall and Robert Hill assembled earlier discoveries into a coherent theory of serial photosynthetic reactions. Hill and Bendall's hypothesis was later confirmed in experiments conducted in 1961 by the Duysens and Witt groups.
Components and action
Two main subunits of PSI, PsaA and PsaB, are closely related proteins involved in the binding of the vital electron transfer cofactors P700, Acc, A0, A1, and FX. PsaA and PsaB are both integral membrane proteins of 730 to 750 amino acids that contain 11 transmembrane segments. A [4Fe-4S] iron-sulfur cluster called FX is coordinated by four cysteines; two cysteines are provided each by PsaA and PsaB. The two cysteines in each are proximal and located in a loop between the ninth and tenth transmembrane segments. A leucine zipper motif seems to be present downstream of the cysteines and could contribute to dimerisation of PsaA/PsaB. The terminal electron acceptors FA and FB, also [4Fe-4S] iron-sulfur clusters, are located in a 9-kDa protein called PsaC that binds to the PsaA/PsaB core near FX.
Photon
Photoexcitation of the pigment molecules in the antenna complex induces electron and energy transfer.
Antenna complex
The antenna complex is composed of molecules of chlorophyll and carotenoids mounted on two proteins. These pigment molecules transmit the resonance energy from photons when they become photoexcited. Antenna molecules can absorb all wavelengths of light within the visible spectrum. The number of these pigment molecules varies from organism to organism. For instance, the cyanobacterium Synechococcus elongatus (Thermosynechococcus elongatus) has about 100 chlorophylls and 20 carotenoids, whereas spinach chloroplasts have around 200 chlorophylls and 50 carotenoids. Located within the antenna complex of PSI are molecules of chlorophyll called P700 reaction centers. The energy passed around by antenna molecules is directed to the reaction center. There may be as many as 120 or as few as 25 chlorophyll molecules per P700.
P700 reaction center
The P700 reaction center is composed of modified chlorophyll a that best absorbs light at a wavelength of 700 nm. P700 receives energy from antenna molecules and uses the energy from each photon to raise an electron to a higher energy level (P700*). These electrons are moved in pairs in an oxidation/reduction process from P700* to electron acceptors, leaving behind P700. The pair of P700* - P700 has an electric potential of about −1.2 volts. The reaction center is made of two chlorophyll molecules and is therefore referred to as a dimer. The dimer is thought to be composed of one chlorophyll a molecule and one chlorophyll a′ molecule. However, if P700 forms a complex with other antenna molecules, it can no longer be a dimer.
Modified chlorophylls Acc and A0
The two modified chlorophyll molecules are early electron acceptors in PSI. They are present one per PsaA/PsaB side, forming two branches electrons can take to reach FX. Acc accepts electrons from P700*, passing them to A0 of the same side, which then passes the electron to the quinone on the same side. Different species seem to have different preferences for either the A or the B branch.
Phylloquinone
A phylloquinone, sometimes called vitamin K1, is the next early electron acceptor in PSI. It oxidizes A0 in order to receive the electron and in turn is re-oxidized by FX, from which the electron is passed to FA and FB. The reduction of FX appears to be the rate-limiting step.
Iron–sulfur complex
Three proteinaceous iron–sulfur reaction centers are found in PSI. Labeled FX, FA, and FB, they serve as electron relays. FA and FB are bound to protein subunits of the PSI complex and FX is tied to the PSI complex. Various experiments have shown some disparity between theories of iron–sulfur cofactor orientation and operation order. In one model, FX passes an electron to FA, which passes it on to FB to reach the ferredoxin.
Ferredoxin
Ferredoxin (Fd) is a soluble protein that facilitates reduction of NADP+ to NADPH. Fd moves to carry an electron either to a lone thylakoid or to an enzyme that reduces NADP+. Thylakoid membranes have one binding site for each function of Fd. The main function of Fd is to carry an electron from the iron-sulfur complex to the enzyme ferredoxin–NADP+ reductase.
Ferredoxin–NADP+ reductase (FNR)
This enzyme transfers the electron from reduced ferredoxin to NADP+ to complete the reduction to NADPH. FNR may also accept an electron from NADPH by binding to it.
Plastocyanin
Plastocyanin is an electron carrier that transfers the electron from cytochrome b6f to the P700 cofactor of PSI in its ionized state P700+.
Ycf4 protein domain
The Ycf4 protein domain found on the thylakoid membrane is vital to photosystem I. This thylakoid transmembrane protein helps assemble the components of photosystem I. Without it, photosynthesis would be inefficient.
Evolution
Molecular data show that PSI likely evolved from the photosystems of green sulfur bacteria. The photosystems of green sulfur bacteria and those of cyanobacteria, algae, and higher plants are not the same, but there are many analogous functions and similar structures. Three main features are similar between the different photosystems. First, redox potential is negative enough to reduce ferredoxin. Next, the electron-accepting reaction centers include iron–sulfur proteins. Last, redox centres in complexes of both photosystems are constructed upon a protein subunit dimer. The photosystem of green sulfur bacteria even contains all of the same cofactors of the electron transport chain in PSI. The number and degree of similarities between the two photosystems strongly indicates that PSI and the analogous photosystem of green sulfur bacteria evolved from a common ancestral photosystem.
See also
Biohybrid solar cell
References
External links
Photosystem I: Molecule of the Month in the Protein Data Bank
Photosystem I in A Companion to Plant Physiology
James Barber FRS Photosystems I & II
Photosynthesis
Light reactions
EC 1.97.1
Protein complexes | Photosystem I | [
"Chemistry",
"Biology"
] | 1,664 | [
"Biochemistry",
"Light reactions",
"Photosynthesis",
"Biochemical reactions"
] |
1,126,641 | https://en.wikipedia.org/wiki/Invariant%20%28physics%29 | In theoretical physics, an invariant is an observable of a physical system which remains unchanged under some transformation. Invariance, as a broader term, also applies to the no change of form of physical laws under a transformation, and is closer in scope to the mathematical definition. Invariants of a system are deeply tied to the symmetries imposed by its environment.
Invariance is an important concept in modern theoretical physics, and many theories are expressed in terms of their symmetries and invariants.
Examples
In classical and quantum mechanics, invariance of space under translation results in momentum being an invariant and the conservation of momentum, whereas invariance of the origin of time, i.e. translation in time, results in energy being an invariant and the conservation of energy. In general, by Noether's theorem, any invariance of a physical system under a continuous symmetry leads to a fundamental conservation law.
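As a minimal worked example of Noether's theorem (added here for illustration; the one-dimensional Lagrangian below is a standard textbook case, not drawn from this article), translation invariance yields momentum conservation:

```latex
% One-dimensional particle with Lagrangian L(x, \dot{x}):
L = \tfrac{1}{2} m \dot{x}^{2} - V(x)
% Translation invariance (V independent of x) means
\frac{\partial L}{\partial x} = 0,
% so the Euler--Lagrange equation
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
% reduces to conservation of the momentum p:
\frac{dp}{dt} = \frac{d}{dt}\left(m \dot{x}\right) = 0 .
```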
In crystals, the electron density is periodic and invariant with respect to discrete translations by unit cell vectors. In very few materials, this symmetry can be broken due to enhanced electron correlations.
Other examples of physical invariants are the speed of light, the charge and mass of a particle observed from two reference frames moving with respect to one another (invariance under a spacetime Lorentz transformation), and the invariance of time and acceleration under a Galilean transformation between two such frames moving at low velocities.
Quantities can be invariant under some common transformations but not under others. For example, the velocity of a particle is invariant when switching coordinate representations from rectangular to curvilinear coordinates, but is not invariant when transforming between frames of reference that are moving with respect to each other. Other quantities, like the speed of light, are always invariant.
Physical laws are said to be invariant under transformations when their predictions remain unchanged. This generally means that the form of the law (e.g. the type of differential equations used to describe the law) is unchanged in transformations so that no additional or different solutions are obtained.
Covariance and contravariance generalize the mathematical properties of invariance in tensor mathematics, and are frequently used in electromagnetism, special relativity, and general relativity.
Informal usage
In the field of physics, the adjective covariant (as in covariance and contravariance of vectors) is often used informally as a synonym for "invariant". For example, the Schrödinger equation does not keep its written form under the coordinate transformations of special relativity. Thus, a physicist might say that the Schrödinger equation is not covariant. In contrast, the Klein–Gordon equation and the Dirac equation do keep their written form under these coordinate transformations. Thus, a physicist might say that these equations are covariant.
Despite this usage of "covariant", it is more accurate to say that the Klein–Gordon and Dirac equations are invariant, and that the Schrödinger equation is not invariant. Additionally, to remove ambiguity, the transformation by which the invariance is evaluated should be indicated.
See also
Casimir operator
Charge (physics)
Conservation law
Conserved quantity
General covariance
Eigenvalues and eigenvectors
Invariants of tensors
Killing form
Physical constant
Poincaré group
Scalar (physics)
Symmetry (physics)
Uniformity of nature
Weyl transformation
References
Conservation laws
Physical quantities | Invariant (physics) | [
"Physics",
"Mathematics"
] | 696 | [
"Physical phenomena",
"Physical quantities",
"Equations of physics",
"Conservation laws",
"Quantity",
"Physical properties",
"Symmetry",
"Physics theorems"
] |
1,127,490 | https://en.wikipedia.org/wiki/Sporopollenin | Sporopollenin is a biological polymer found as a major component of the tough outer (exine) walls of plant spores and pollen grains. It is chemically very stable (one of the most inert among biopolymers) and is usually well preserved in soils and sediments. The exine layer is often intricately sculptured in species-specific patterns, allowing material recovered from (for example) lake sediments to provide useful information to palynologists about plant and fungal populations in the past. Sporopollenin has found uses in the field of paleoclimatology as well. Sporopollenin is also found in the cell walls of several taxa of green algae, including Phycopeltis (an ulvophycean) and Chlorella.
Spores are dispersed by many different environmental factors, such as wind, water or animals. In suitable conditions, the sporopollenin-rich walls of pollen grains and spores can persist in the fossil record for hundreds of millions of years, since sporopollenin is resistant to chemical degradation by organic and inorganic chemicals.
Chemical composition
The chemical composition of sporopollenin has long been elusive due to its unusual chemical stability, insolubility and resistance to degradation by enzymes and strong chemical reagents. It was once thought to consist of polymerised carotenoids but the application of more detailed analytical methods since the 1980s has shown that this is not correct. Analyses have revealed a complex biopolymer, containing mainly long-chain fatty acids, phenylpropanoids, phenolics and traces of carotenoids in a random co-polymer. It is likely that sporopollenin derives from several precursors that are chemically cross-linked to form a rigid structure. There is also good evidence that the chemical composition of sporopollenin is not the same in all plants, indicating it is a class of compounds rather than having one constant structure.
In 2019, thioacidolysis degradation and solid-state NMR were used to determine the molecular structure of pitch pine sporopollenin, finding it primarily composed of polyvinyl alcohol units alongside other aliphatic monomers, all crosslinked through a series of acetal linkages. Its complex and heterogeneous chemical structure gives some protection from the biodegradative enzymes of bacteria, fungi and animals. Some aromatic structures based on p-coumarate and naringenin were also identified within the sporopollenin polymer. These can absorb ultraviolet light and thus prevent it from penetrating further into the spore. This has relevance to the role of pollen and spores in transporting and dispersing the gametes of plants. The DNA of the gametes is readily damaged by the ultraviolet component of daylight. Sporopollenin thus provides some protection from this damage as well as a physically robust container.
Analyses of sporopollenin from the clubmoss Lycopodium in the late 1980s showed distinct structural differences from that of flowering plants. In 2020, more detailed analysis of sporopollenin from Lycopodium clavatum provided further structural information. It showed a complete lack of aromatic structures and the presence of a macrocyclic backbone of polyhydroxylated tetraketide-like monomers with pseudo-aromatic 2-pyrone rings. These were crosslinked to a poly(hydroxy acid) chain by ether linkages to form the polymer.
Biosynthesis
Electron microscopy shows that the tapetal cells that surround the developing pollen grain in the anther have a highly active secretory system containing lipophilic globules. These globules are believed to contain sporopollenin precursors. Tracer experiments have shown that phenylalanine is a major precursor, but other carbon sources also contribute. The biosynthetic pathway for phenylpropanoid is very active in tapetal cells, supporting the idea that its products are needed for sporopollenin synthesis. Chemical inhibitors of pollen development and many male sterile mutants have effects on the secretion of these globules by the tapetal cells.
See also
Chitin
Conchiolin
Tectin
References
Further reading
Biomolecules
Organic polymers
Pollination
Palynology | Sporopollenin | [
"Chemistry",
"Biology"
] | 889 | [
"Organic polymers",
"Natural products",
"Biochemistry",
"Organic compounds",
"Biomolecules",
"Molecular biology",
"Structural biology"
] |
1,127,875 | https://en.wikipedia.org/wiki/Accelerator%20mass%20spectrometry | Accelerator mass spectrometry (AMS) is a form of mass spectrometry that accelerates ions to extraordinarily high kinetic energies before mass analysis. The special strength of AMS among the different methods of mass spectrometry is its ability to separate a rare isotope from an abundant neighboring mass ("abundance sensitivity", e.g. 14C from 12C). The method suppresses molecular isobars completely and in many cases can also separate atomic isobars (e.g. 14N from 14C). This makes possible the detection of naturally occurring, long-lived radio-isotopes such as 10Be, 36Cl, 26Al and 14C. (Their typical isotopic abundance ranges from 10−12 to 10−18.)
AMS can outperform the competing technique of decay counting for all isotopes where the half-life is long enough. Other advantages of AMS include its short measuring time as well as its ability to detect atoms in extremely small samples.
Method
Generally, negative ions are created (atoms are ionized) in an ion source. In fortunate cases, this already allows the suppression of an unwanted isobar which does not form negative ions (as 14N in the case of 14C measurements). The pre-accelerated ions are usually separated by a first mass spectrometer of sector-field type and enter an electrostatic "tandem accelerator". This is a large nuclear particle accelerator based on the principle of a tandem van de Graaff accelerator operating at 0.2 to many million volts, with two stages operating in tandem to accelerate the particles. At the connecting point between the two stages, the ions change charge from negative to positive by passing through a thin layer of matter ("stripping", either gas or a thin carbon foil). Molecules break apart in this stripping stage. The complete suppression of molecular isobars (e.g. 13CH− in the case of 14C measurements) is one reason for the exceptional abundance sensitivity of AMS. Additionally, the impact strips off several of the ion's electrons, converting it into a positively charged ion. In the second half of the accelerator, the now positively charged ion is accelerated away from the highly positive centre of the electrostatic accelerator which previously attracted the negative ion. When the ions leave the accelerator they are positively charged and are moving at several percent of the speed of light. In the second stage of the mass spectrometer, the fragments from the molecules are separated from the ions of interest. This spectrometer may consist of magnetic or electric sectors and so-called velocity selectors, which utilize both electric and magnetic fields. After this stage, no background is left, unless a stable (atomic) isobar forming negative ions exists (e.g. 36S if measuring 36Cl), which is not suppressed at all by the setup described so far. Thanks to the high energy of the ions, these can be separated by methods borrowed from nuclear physics, like degrader foils and gas-filled magnets. Individual ions are finally detected by single-ion counting (with silicon surface-barrier detectors, ionization chambers, and/or time-of-flight telescopes). Thanks to the high energy of the ions, these detectors can provide additional identification of background isobars by nuclear-charge determination.
Generalizations
The above is just one example. There are other ways in which AMS is achieved; however, they all work based on improving mass selectivity and specificity by creating high kinetic energies before molecule destruction by stripping, followed by single-ion counting.
History
L.W. Alvarez and Robert Cornog of the United States first used an accelerator as a mass spectrometer in 1939 when they employed a cyclotron to demonstrate that 3He was stable; from this observation, they immediately and correctly concluded that the other mass-3 isotope, tritium (3H), was radioactive. In 1977, inspired by this early work, Richard A. Muller at the Lawrence Berkeley Laboratory recognised that modern accelerators could accelerate radioactive particles to an energy where the background interferences could be separated using particle identification techniques. He published the seminal paper in Science showing how accelerators (cyclotrons and linear) could be used for detection of tritium, radiocarbon (14C), and several other isotopes of scientific interest including 10Be; he also reported the first successful radioisotope date experimentally obtained using tritium. His paper was the direct inspiration for other groups using cyclotrons (G. Raisbeck and F. Yiou, in France) and tandem linear accelerators (D. Nelson, R. Korteling, W. Stott at McMaster). K. Purser and colleagues also published the successful detection of radiocarbon using their tandem at Rochester. Soon afterwards the Berkeley and French teams reported the successful detection of 10Be, an isotope widely used in geology. Soon the accelerator technique, since it was more sensitive by a factor of about 1,000, virtually supplanted the older "decay counting" methods for these and other radioisotopes. In 1982, AMS labs began processing archaeological samples for radiocarbon dating.
Applications
There are many applications for AMS throughout a variety of disciplines. AMS is most often employed to determine the concentration of 14C, e.g. by archaeologists for radiocarbon dating. Compared to other radiocarbon dating methods, AMS requires smaller sample sizes (about 50 mg), while yielding extensive chronologies. AMS technology has expanded the scope of radiocarbon dating. Samples ranging from 50,000 years old to 100 years old can be successfully dated using AMS, as other forms of mass spectrometry provide insufficient suppression of molecular isobars to resolve 13CH and 12CH2 from 14C atoms. Because of the long half-life of 14C, decay counting requires significantly larger samples. 10Be, 26Al, and 36Cl are used for surface exposure dating in geology. 3H, 14C, 36Cl, and 129I are used as hydrological tracers.
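As a rough illustration of the radiocarbon application (a hedged sketch, not part of the article): a measured 14C/12C ratio, expressed as a fraction of the modern standard, converts to a conventional radiocarbon age via the Libby half-life of 5568 years.

```python
import math

# Conventional radiocarbon ages are reported using the Libby half-life
# (5568 years) by convention. 'fraction_modern' is the sample's 14C
# activity relative to the modern standard (illustrative values below).
LIBBY_MEAN_LIFE = 5568 / math.log(2)   # ~8033 years

def radiocarbon_age(fraction_modern: float) -> float:
    """Conventional radiocarbon age in years before present (BP)."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

print(radiocarbon_age(0.5))     # one half-life: 5568 years BP
print(radiocarbon_age(0.005))   # ~42,600 years BP, approaching the practical AMS limit
```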
Accelerator mass spectrometry is widely used in biomedical research. In particular, 41Ca has been used to measure bone resorption in postmenopausal women.
See also
List of accelerator mass spectrometry facilities
Arizona Accelerator Mass Spectrometry Laboratory
References
Bibliography
Mass spectrometry | Accelerator mass spectrometry | [
"Physics",
"Chemistry"
] | 1,309 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Accelerator mass spectrometry",
"Mass spectrometry",
"Matter"
] |
4,449,103 | https://en.wikipedia.org/wiki/Vibration%20isolation | Vibration isolation is the prevention of transmission of vibration from one component of a system to other parts of the same system, as in buildings or mechanical systems. Vibration is undesirable in many domains, primarily engineered systems and habitable spaces, and methods have been developed to prevent the transfer of vibration to such systems. Vibrations propagate via mechanical waves and certain mechanical linkages conduct vibrations more efficiently than others. Passive vibration isolation makes use of materials and mechanical linkages that absorb and damp these mechanical waves. Active vibration isolation involves sensors and actuators that produce disruptive interference that cancels out incoming vibration.
Passive isolation
"Passive vibration isolation" refers to vibration isolation or mitigation of vibrations by passive techniques such as rubber pads or mechanical springs, as opposed to "active vibration isolation" or "electronic force cancellation" employing electric power, sensors, actuators, and control systems.
Passive vibration isolation is a vast subject, since there are many types of passive vibration isolators used for many different applications. A few of these applications are for industrial equipment such as pumps, motors, HVAC systems, or washing machines; isolation of civil engineering structures from earthquakes (base isolation), sensitive laboratory equipment, valuable statuary, and high-end audio.
The following subsections give a basic understanding of how passive isolation works, survey the more common types of passive isolators, and outline the main factors that influence the selection of passive isolators.
Common passive isolation systems
Pneumatic or air isolators
These are bladders or canisters of compressed air. A source of compressed air is required to maintain them. Air springs are rubber bladders which provide damping as well as isolation and are used in large trucks. Some pneumatic isolators can attain low resonant frequencies and are used for isolating large industrial equipment. Air tables consist of a working surface or optical surface mounted on air legs. These tables provide enough isolation for laboratory instruments under some conditions. Air systems may leak under vacuum conditions. The air container can interfere with isolation of low-amplitude vibration.
Mechanical springs and spring-dampers
These are heavy-duty isolators used for building systems and industry. Sometimes they serve as mounts for a concrete block, which provides further isolation.
Pads or sheets of flexible materials such as elastomers, rubber, cork, dense foam and laminate materials.
Elastomer pads, dense closed cell foams and laminate materials are often used under heavy machinery, under common household items, in vehicles and even under higher performing audio systems.
Molded and bonded rubber and elastomeric isolators and mounts
These are often used as machinery (such as engines) mounts or in vehicles. They absorb shock and attenuate some vibration.
Negative-stiffness isolators
Negative-stiffness isolators are less common than other types and have generally been developed for high-level research applications such as gravity wave detection. Lee, Goverdovskiy, and Temnikov (2007) proposed a negative-stiffness system for isolating vehicle seats.
The focus on negative-stiffness isolators has been on developing systems with very low resonant frequencies (below 1 Hz), so that low frequencies can be adequately isolated, which is critical for sensitive instrumentation. All higher frequencies are also isolated. Negative-stiffness systems can be made with low stiction, so that they are effective in isolating low-amplitude vibrations.
Negative-stiffness mechanisms are purely mechanical and typically involve the configuration and loading of components such as beams or inverted pendulums. Greater loading of the negative-stiffness mechanism, within the range of its operability, decreases the natural frequency.
Wire rope isolators
These isolators are durable and can withstand extreme environments. They are often used in military applications.
Base isolators for seismic isolation of buildings, bridges, etc.
Base isolators made of layers of neoprene and steel with a low horizontal stiffness are used to lower the natural frequency of the building. Some other base isolators are designed to slide, preventing the transfer of energy from the ground to the building.
Tuned mass dampers
Tuned mass dampers reduce the effects of harmonic vibration in buildings or other structures. A relatively small mass is attached in such a way that it can dampen out a very narrow band of vibration of the structure.
Do-it-yourself isolators
In less sophisticated settings, bungee cords can be used as a cheap isolation system which may be effective enough for some applications. The item to be isolated is suspended from the bungee cords, although this is difficult to implement without a danger of the isolated item falling. Tennis balls cut in half have been used under washing machines and other items with some success. Tennis balls became the de facto standard suspension technique in DIY rave/DJ culture, placed under the feet of each record turntable, where they provide enough damping to keep the vibrations of high-powered sound systems from affecting the delicate, high-sensitivity mechanisms of the turntable needles.
How passive isolation works
A passive isolation system, such as a shock mount, in general contains mass, spring, and damping elements and moves as a harmonic oscillator. The mass and spring stiffness dictate a natural frequency of the system. Damping causes energy dissipation and has a secondary effect on natural frequency.
Every object on a flexible support has a fundamental natural frequency. When vibration is applied, energy is transferred most efficiently at the natural frequency, somewhat efficiently below the natural frequency, and with increasing inefficiency (decreasing efficiency) above the natural frequency. This can be seen in the transmissibility curve, which is a plot of transmissibility vs. frequency.
Here is an example of a transmissibility curve. Transmissibility is the ratio of vibration of the isolated surface to that of the source. Vibrations are never eliminated, but they can be greatly reduced. The curve below shows the typical performance of a passive, negative-stiffness isolation system with a natural frequency of 0.5 Hz. The general shape of the curve is typical for passive systems. Below the natural frequency, transmissibility hovers near 1. A value of 1 means that vibration is going through the system without being amplified or reduced. At the resonant frequency, energy is transmitted efficiently, and the incoming vibration is amplified. Damping in the system limits the level of amplification. Above the resonant frequency, little energy can be transmitted, and the curve rolls off to a low value. A passive isolator can be seen as a mechanical low-pass filter for vibrations.
In general, for any given frequency above the natural frequency, an isolator with a lower natural frequency will show greater isolation than one with a higher natural frequency. The best isolation system for a given situation depends on the frequency, direction, and magnitude of vibrations present and the desired level of attenuation of those frequencies.
All mechanical systems in the real world contain some amount of damping. Damping dissipates energy in the system, which reduces the vibration level which is transmitted at the natural frequency. The fluid in automotive shock absorbers is a kind of damper, as is the inherent damping in elastomeric (rubber) engine mounts.
Damping is used in passive isolators to reduce the amount of amplification at the natural frequency. However, increasing damping tends to reduce isolation at the higher frequencies. As damping is increased, transmissibility roll-off decreases. This can be seen in the chart below.
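This trade-off can be made concrete with the standard single-degree-of-freedom transmissibility formula. The sketch below (a minimal illustration; the damping ratios are arbitrary choices, not recommendations) evaluates it at several frequency ratios r = f/fn:

```python
import numpy as np

# Classic base-excitation transmissibility of a mass-spring-damper:
# T(r) = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)),
# where r = f/f_n and zeta is the damping ratio.
def transmissibility(r, zeta):
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

r = np.array([0.1, 1.0, np.sqrt(2), 3.0, 10.0])
for zeta in (0.05, 0.2, 0.5):
    print(f"zeta = {zeta}:", np.round(transmissibility(r, zeta), 3))
```

At r = 1 the peak is limited by damping (roughly 1/(2*zeta) for light damping); all curves pass through T = 1 at r = sqrt(2), where isolation begins; and at r = 10 the heavily damped system transmits roughly ten times more than the lightly damped one, matching the roll-off behaviour described above.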
Passive isolation operates in both directions, isolating the payload from vibrations originating in the support, and also isolating the support from vibrations originating in the payload. Large machines such as washers, pumps, and generators, which would cause vibrations in the building or room, are often isolated from the floor. However, there are a multitude of sources of vibration in buildings, and it is often not possible to isolate each source. In many cases, it is most efficient to isolate each sensitive instrument from the floor. Sometimes it is necessary to implement both approaches.
In superyachts, the engines and alternators produce noise and vibration. The solution is a double elastic suspension, in which the engine and alternator are mounted with vibration dampers on a common frame, and this assembly is then mounted elastically between the common frame and the hull.
Factors influencing the selection of passive vibration isolators
Characteristics of item to be isolated
Size: The dimensions of the item to be isolated help determine the type of isolation which is available and appropriate. Small objects may use only one isolator, while larger items might use a multiple-isolator system.
Weight: The weight of the object to be isolated is an important factor in choosing the correct passive isolation product. Individual passive isolators are designed to be used with a specific range of loading.
Movement: Machines or instruments with moving parts may affect isolation systems. It is important to know the mass, speed, and distance traveled of the moving parts.
Operating Environment
Industrial: This generally entails strong vibrations over a wide band of frequencies and some amount of dust.
Laboratory: Labs are sometimes troubled by specific building vibrations from adjacent machinery, foot traffic, or HVAC airflow.
Indoor or outdoor: Isolators are generally designed for one environment or the other.
Corrosive/non-corrosive: Some indoor environments may present a corrosive danger to isolator components due to the presence of corrosive chemicals. Outdoors, water and salt environments need to be considered.
Clean room: Some isolators can be made appropriate for clean room.
Temperature: In general, isolators are designed to be used in the range of temperatures normal for human environments. If a larger range of temperatures is required, the isolator design may need to be modified.
Vacuum: Some isolators can be used in a vacuum environment. Air isolators may have leakage problems. Vacuum requirements typically include some level of clean room requirement and may also have a large temperature range.
Magnetism: Some experimentation which requires vibration isolation also requires a low-magnetism environment. Some isolators can be designed with low-magnetism components.
Acoustic noise: Some instruments are sensitive to acoustic vibration. In addition, some isolation systems can be excited by acoustic noise. It may be necessary to use an acoustic shield. Air compressors can create problematic acoustic noise, heat, and airflow.
Static or dynamic loads: This distinction is quite important as isolators are designed for a certain type and level of loading.
Static loading: basically the weight of the isolated object with low-amplitude vibration input. This is the environment of apparently stationary objects such as buildings (under normal conditions) or laboratory instruments.
Dynamic loading: involves accelerations and larger-amplitude shock and vibration. This environment is present in vehicles, heavy machinery, and structures with significant movement.
Cost:
Cost of providing isolation: Costs include the isolation system itself, whether it is a standard or custom product; a compressed air source if required; shipping from manufacturer to destination; installation; maintenance; and an initial vibration site survey to determine the need for isolation.
Relative costs of different isolation systems: Inexpensive shock mounts may need to be replaced due to dynamic loading cycles. A higher level of isolation which is effective at lower vibration frequencies and magnitudes generally costs more. Prices can range from a few dollars for bungee cords to millions of dollars for some space applications.
Adjustment: Some isolation systems require manual adjustment to compensate for changes in weight load, weight distribution, temperature, and air pressure, whereas other systems are designed to automatically compensate for some or all of these factors.
Maintenance: Some isolation systems are quite durable and require little or no maintenance. Others may require periodic replacement due to mechanical fatigue of parts or aging of materials.
Size Constraints: The isolation system may have to fit in a restricted space in a laboratory or vacuum chamber, or within a machine housing.
Nature of vibrations to be isolated or mitigated
Frequencies: If possible, it is important to know the frequencies of ambient vibrations. This can be determined with a site survey or accelerometer data processed through FFT analysis.
Amplitudes: The amplitudes of the vibration frequencies present can be compared with required levels to determine whether isolation is needed. In addition, isolators are designed for ranges of vibration amplitudes. Some isolators are not effective for very small amplitudes.
Direction: Knowing whether vibrations are horizontal or vertical can help to target isolation where it is needed and save money.
Vibration specifications of item to be isolated: Many instruments or machines have manufacturer-specified levels of vibration for the operating environment. The manufacturer may not guarantee the proper operation of the instrument if vibration exceeds the spec.
Not-for-profit organizations such as ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) and VISCMA (Vibration Isolation and Seismic Control Manufacturers Association) provide specifications and standards for isolator types and spring-deflection requirements that cover a wide array of industries, including electrical, mechanical, plumbing, and HVAC.
Comparison of passive isolators
Negative-stiffness vibration isolator
Negative-Stiffness-Mechanism (NSM) vibration isolation systems offer a unique passive approach for achieving low vibration environments and isolation against sub-Hertz vibrations. "Snap-through" or "over-center" NSM devices are used to reduce the stiffness of elastic suspensions and create compact six-degree-of-freedom systems with low natural frequencies. Practical systems with vertical and horizontal natural frequencies as low as 0.2 to 0.5 Hz are possible. Electro-mechanical auto-adjust mechanisms compensate for varying weight loads and provide automatic leveling in multiple-isolator systems, similar to the function of leveling valves in pneumatic systems. All-metal systems can be configured which are compatible with high vacuums and other adverse environments such as high temperatures.
These isolation systems enable vibration-sensitive instruments such as scanning probe microscopes, micro-hardness testers and scanning electron microscopes to operate in severe vibration environments sometimes encountered, for example, on upper floors of buildings and in clean rooms. Such operation would not be practical with pneumatic isolation systems. Similarly, they enable vibration-sensitive instruments to produce better images and data than those achievable with pneumatic isolators.
The theory of operation of NSM vibration isolation systems is explained in the references; it is summarized briefly below for convenience, along with descriptions of some typical systems and applications and data on measured performance.
Vertical-motion isolation
A vertical-motion isolator is shown in Figure 1. It uses a conventional spring connected to an NSM consisting of two bars hinged at the center, supported at their outer ends on pivots, and loaded in compression by forces P. The spring is compressed by weight W to the operating position of the isolator, as shown in Figure 1. The stiffness of the isolator is K = KS - KN, where KS is the spring stiffness and KN is the magnitude of the negative stiffness, which is a function of the length of the bars and the load P. The isolator stiffness can be made to approach zero while the spring supports the weight W.
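A quick numerical illustration of this stiffness cancellation (the payload mass and stiffness values below are assumptions chosen purely for illustration): as KN approaches KS, the net stiffness K and therefore the natural frequency fn = sqrt(K/m)/(2*pi) fall toward zero.

```python
import math

# Net stiffness K = K_S - K_N of a negative-stiffness isolator, and the
# resulting natural frequency for an assumed payload mass m.
m = 100.0          # kg, assumed payload
K_S = 4.0e4        # N/m, assumed support-spring stiffness
for K_N in (0.0, 3.0e4, 3.9e4):
    K = K_S - K_N
    f_n = math.sqrt(K / m) / (2.0 * math.pi)
    print(f"K_N = {K_N:8.0f} N/m  ->  f_n = {f_n:.2f} Hz")
# Output falls from ~3.2 Hz to the sub-Hertz regime (~0.5 Hz) quoted above.
```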
Horizontal-motion isolation
A horizontal-motion isolator consisting of two beam-columns is illustrated in Figure 2. Each beam-column behaves like two fixed-free beam columns loaded axially by a weight load W. Without the weight load the beam-columns have horizontal stiffness KS. With the weight load, the lateral bending stiffness is reduced by the "beam-column" effect. This behavior is equivalent to a horizontal spring combined with an NSM, so that the horizontal stiffness is K = KS - KN, where KN is the magnitude of the beam-column effect. Horizontal stiffness can be made to approach zero by loading the beam-columns to approach their critical buckling load.
Six-degree-of-freedom (six-DOF) isolation
A six-DOF NSM isolator typically uses three isolators stacked in series: a tilt-motion isolator on top of a horizontal-motion isolator on top of a vertical-motion isolator. Figure 3 shows a schematic of a vibration isolation system consisting of a weighted platform supported by a single six-DOF isolator incorporating the isolators of Figures 1 and 2. Flexures are used in place of the hinged bars shown in Figure 1. A tilt flexure serves as the tilt-motion isolator. A vertical-stiffness adjustment screw is used to adjust the compression force on the negative-stiffness flexures, thereby changing the vertical stiffness. A vertical load adjustment screw is used to adjust for varying weight loads by raising or lowering the base of the support spring to keep the flexures in their straight, unbent operating positions.
Vibration isolation of supporting joint
Equipment and other mechanical components are necessarily linked to surrounding objects (through a supporting joint, to the support; through a non-supporting joint, to a pipe duct or cable), thus presenting paths for unwanted transmission of vibrations. Using a suitably designed vibration isolator (absorber), vibration isolation of the supporting joint is realized. The accompanying illustration shows the attenuation of vibration levels over a wide range of frequencies, as measured before and after installation of the working machinery on a vibration isolator.
The vibration isolator
This is defined as a device that reflects and absorbs waves of oscillatory energy extending from a piece of working machinery or electrical equipment, with the desired effect being vibration insulation. The goal is to establish vibration isolation between a body transferring mechanical fluctuations and a supporting body (for example, between the machine and its foundation). The illustration shows a vibration isolator of the series «ВИ» (~"VI" in Roman characters), as used in shipbuilding in Russia, for example in the submarine "St. Petersburg" (Lada class). The depicted «ВИ» devices are rated for loadings of 5, 40 and 300 kg. They differ in their physical sizes, but all share the same fundamental design. The structure consists of a rubber envelope that is internally reinforced by a spring. During manufacture, the rubber and the spring are intimately and permanently connected as a result of the vulcanization process that is integral to the processing of the crude rubber material. Under the action of the weight loading of the machine, the rubber envelope deforms and the spring is compressed or stretched, so that, across the spring's cross section, twisting of the enveloping rubber occurs. The resulting elastic deformation of the rubber envelope results in very effective absorption of vibration. This absorption is crucial to reliable vibration insulation, because it averts the potential for resonance effects. The amount of elastic deformation of the rubber largely dictates the magnitude of vibration absorption that can be attained; the entire device (including the spring itself) must be designed with this in mind. The design of the vibration isolator must also take into account potential exposure to shock loadings, in addition to routine everyday vibrations. Lastly, the vibration isolator must be designed for long-term durability as well as convenient integration into the environment in which it is to be used. Sleeves and flanges are typically employed to enable the vibration isolator to be securely fastened to the equipment and the supporting foundation.
Vibration isolation of unsupporting joint
Vibration isolation of a non-supporting joint is realized with a device called a vibration-isolating branch pipe.
Vibration-isolating branch pipe
A vibration-isolating branch pipe is a section of tube with elastic walls that reflects and absorbs waves of oscillatory energy propagating from the working pump along the walls of the pipe duct. It is installed between the pump and the pipe duct. The illustration shows a vibration-isolating branch pipe of the «ВИПБ» series. Its structure uses a rubber envelope reinforced by a spring, with properties similar to those of the vibration isolator's envelope described above. The device also includes a mechanism that reduces the axial force arising from internal pressure to nearly zero.
Subframe isolation
Another technique used to increase isolation is to use an isolated subframe. This splits the system with an additional mass/spring/damper system. This doubles the high frequency attenuation rolloff, at the cost of introducing additional low frequency modes which may cause the low frequency behaviour to deteriorate. This is commonly used in the rear suspensions of cars with Independent Rear Suspension (IRS), and in the front subframes of some cars. The graph (see illustration) shows the force into the body for a subframe that is rigidly bolted to the body compared with the red curve that shows a compliantly mounted subframe. Above 42 Hz the compliantly mounted subframe is superior, but below that frequency the bolted in subframe is better.
Semi-active isolation
Semi-active vibration isolators have received attention because they consume less power than active devices and offer controllability not available in passive systems.
Active isolation
Active vibration isolation systems contain, along with the spring, a feedback circuit which consists of a sensor (for example a piezoelectric accelerometer or a geophone), a controller, and an actuator. The acceleration (vibration) signal is processed by a control circuit and amplifier, which then drives the electromagnetic actuator. As a result of such a feedback system, a considerably stronger suppression of vibrations is achieved compared to ordinary damping. Active isolation today is used for applications where structures smaller than a micrometer have to be produced or measured. Several companies produce active isolation products as OEM components for research, metrology, lithography and medical systems. Another important application is the semiconductor industry: in microchip production the smallest structures today are below 20 nm, so the machines which produce and inspect them must oscillate far less.
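A toy sketch of such a feedback loop is given below, assuming the textbook "sky-hook" scheme in which the actuator force opposes the payload's absolute velocity (all parameter values are invented for illustration and do not describe any particular product):

```python
import numpy as np

# Payload of mass m on a spring of stiffness k (natural frequency ~10 Hz),
# shaken through its base at resonance. A sensor measures the payload's
# absolute velocity v and the actuator applies F = -g*v ("sky-hook" damping).
m, k = 10.0, 4.0e4
dt, n = 1.0e-4, 100000                           # 10 s of simulated time
t = np.arange(n) * dt
x_base = 1e-3 * np.sin(2 * np.pi * 10.0 * t)     # 1 mm base vibration at 10 Hz

for g in (0.0, 400.0):                           # feedback off, then on
    x = v = amp = 0.0
    for i in range(n):
        F = k * (x_base[i] - x) - g * v          # spring force plus feedback
        v += (F / m) * dt                        # semi-implicit Euler step
        x += v * dt
        amp = max(amp, abs(x))
    print(f"g = {g:5.0f} N*s/m: peak payload excursion = {amp * 1e3:8.2f} mm")
# Without feedback the undamped resonance grows to a large amplitude;
# the velocity feedback suppresses it to a few millimetres.
```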
Sensors for active isolation
Piezoelectric accelerometers and force sensors
MEM accelerometers
Geophones
Proximity sensors
Interferometers
Actuators for active isolation
Linear motors
Pneumatic actuators
Piezoelectric motors
See also
Active vibration control
Base isolation
Bushing (isolator)
Damped wave
Damping ratio
Noise, vibration, and harshness
Noise and vibration on maritime vessels
Oscillation
Package cushioning
Passive heave compensation
Shock absorber
Shock mount
Sorbothane
Soundproofing
Vibration
Vibration control
References
Platus, David L., "Negative-Stiffness-Mechanism Vibration Isolation Systems", SPIE International Society for Optical Engineering, Optomechanical Engineering and Vibration Control, July 1999
Harris, C., Piersol, A., Harris Shock and Vibration Handbook, Fifth Edition, McGraw-Hill, 2002
Kolesnikov, A., Noise and Vibration, Leningrad: Shipbuilding, 1988
External links
White Paper on Active Vibration Isolation for Lithography and Imaging
Passive Isolation of Harmonic Excitation
Vibration Control for Microscopy
Mechanical engineering
Mechanical vibrations | Vibration isolation | [
"Physics",
"Engineering"
] | 4,753 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Mechanics",
"Mechanical vibrations",
"Mechanical engineering"
] |
4,449,204 | https://en.wikipedia.org/wiki/Quasi-solid | Quasi-solid, Falsely-solid, or semisolid is the physical term for something whose state lies between a solid and a liquid. While similar to solids in some respects, such as having the ability to support their own weight and hold their shapes, a quasi-solid also shares some properties of liquids, such as conforming in shape to something applying pressure to it and the ability to flow under pressure. The words quasi-solid, semisolid, and semiliquid are used interchangeably.
Quasi-solids and semisolids are sometimes described as amorphous because at the microscopic scale they have a disordered structure unlike crystalline solids. They should not be confused with amorphous solids as they are not solids and exhibit properties such as flow which bulk solids do not.
Examples
Pharmaceutical and cosmetic creams, gels, and ointments, e.g. petroleum jelly, toothpaste, hand sanitizer
Foods, e.g. pudding, guacamole, salsa, mayonnaise, whipping cream, peanut butter, jelly, jam
See also
Plasticity (physics)
Viscosity
Premelting
Non-Newtonian fluid
References
Phases of matter | Quasi-solid | [
"Physics",
"Chemistry"
] | 240 | [
"Phases of matter",
"Physical chemistry stubs",
"Matter"
] |
4,450,467 | https://en.wikipedia.org/wiki/Minimum%20energy%20control | In control theory, the minimum energy control is the control that will bring a linear time invariant system to a desired state with a minimum expenditure of energy.
Let the linear time invariant (LTI) system be

\dot{x}(t) = A x(t) + B u(t)

with initial state x(0) = x_0. One seeks an input u(t) so that the system will be in the state x_1 at time t_1, and for any other input \bar{u}(t), which also drives the system from x_0 to x_1 at time t_1, the energy expenditure would be larger, i.e.,

\int_0^{t_1} \bar{u}^*(t)\, \bar{u}(t)\, dt \;\ge\; \int_0^{t_1} u^*(t)\, u(t)\, dt .

To choose this input, first compute the controllability Gramian

W_c(t) = \int_0^{t} e^{A\tau} B B^* e^{A^*\tau}\, d\tau .

Assuming W_c(t_1) is nonsingular (if and only if the system is controllable), the minimum energy control is then

u(t) = -B^* e^{A^*(t_1 - t)} W_c(t_1)^{-1} \left( e^{A t_1} x_0 - x_1 \right) .

Substitution into the solution

x(t) = e^{A t} x_0 + \int_0^{t} e^{A(t - \tau)} B u(\tau)\, d\tau

verifies the achievement of state x_1 at time t_1.
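A numerical sketch of these formulas for an illustrative double-integrator system (the example system and the simple Riemann-sum quadrature are choices made here, not part of the article):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator x'' = u
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])                # start at position 1, at rest
x1 = np.array([0.0, 0.0])                # target: the origin, at rest
t1, n = 1.0, 2000
dt = t1 / n
tau = np.arange(n) * dt

# Controllability Gramian W_c(t1), approximated by a Riemann sum.
Wc = sum(expm(A * s) @ B @ B.T @ expm(A.T * s) for s in tau) * dt

eta = np.linalg.solve(Wc, expm(A * t1) @ x0 - x1)

def u(t):
    # Minimum-energy input u(t) = -B^T e^{A^T (t1 - t)} W_c(t1)^{-1} (e^{A t1} x0 - x1).
    return -(B.T @ expm(A.T * (t1 - t)) @ eta)

# Forward-integrate the state to verify that x(t1) is (close to) x1.
x = x0.copy()
for t in tau:
    x = x + dt * (A @ x + B @ u(t))
print("x(t1) ~", np.round(x, 2))   # approximately [0, 0]
```

For this system the computed input reduces to the straight line u(t) = 12t - 6, which can be checked by hand against the Gramian of the double integrator.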
See also
LTI system theory
Control engineering
State space (controls)
Variational Calculus
Control theory | Minimum energy control | [
"Mathematics"
] | 159 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
4,450,595 | https://en.wikipedia.org/wiki/Petasis%20reagent | The Petasis reagent, named after Nicos A. Petasis, is an organotitanium compound with the formula Cp2Ti(CH3)2. It is an orange-colored solid.
Preparation and use
The Petasis reagent is prepared by the salt metathesis reaction of methylmagnesium chloride or methyllithium with titanocene dichloride:
Cp2TiCl2 + 2 CH3MgCl → Cp2Ti(CH3)2 + 2 MgCl2
This compound is used for the transformation of carbonyl groups to terminal alkenes. It exhibits similar reactivity to the Tebbe reagent and Wittig reaction. Unlike the Wittig reaction, the Petasis reagent can react with a wide range of aldehydes, ketones and esters. The Petasis reagent is also very air stable, and is commonly used in solution with toluene or THF.
The Tebbe reagent and the Petasis reagent share a similar reaction mechanism. The active olefinating reagent, Cp2TiCH2, is generated in situ upon heating. With the organic carbonyl, this titanium carbene forms a four-membered oxatitanacyclobutane that releases the terminal alkene.
In contrast to the Tebbe reagent, homologs of the Petasis reagent are relatively easy to prepare by using the corresponding alkyllithium instead of methyllithium, allowing the conversion of carbonyl groups to alkylidenes.
See also
Nysted reagent
Titanium–zinc methylenation
References
Organotitanium compounds
Coordination complexes
Reagents for organic chemistry
Titanocenes
Cyclopentadienyl complexes
Titanium(IV) compounds
Methyl complexes | Petasis reagent | [
"Chemistry"
] | 377 | [
"Cyclopentadienyl complexes",
"Coordination complexes",
"Coordination chemistry",
"Reagents for organic chemistry",
"Organometallic chemistry"
] |
9,980,598 | https://en.wikipedia.org/wiki/Nth-term%20test | In mathematics, the nth-term test for divergence is a simple test for the divergence of an infinite series:If or if the limit does not exist, then diverges.Many authors do not name this test or give it a shorter name.
When testing if a series converges or diverges, this test is often checked first due to its ease of use.
In the case of p-adic analysis the term test is a necessary and sufficient condition for convergence due to the non-Archimedean ultrametric triangle inequality.
Usage
Unlike stronger convergence tests, the term test cannot prove by itself that a series converges. In particular, the converse to the test is not true; instead all one can say is: if \lim_{n \to \infty} a_n = 0, then \sum_{n=1}^{\infty} a_n may or may not converge. In other words, if \lim_{n \to \infty} a_n = 0 the test is inconclusive. The harmonic series \sum_{n=1}^{\infty} \frac{1}{n} is a classic example of a divergent series whose terms approach zero in the limit as n \to \infty; a numerical sketch follows the list below. The more general class of p-series,

\sum_{n=1}^{\infty} \frac{1}{n^p},

exemplifies the possible results of the test:
If p ≤ 0, then the nth-term test identifies the series as divergent.
If 0 < p ≤ 1, then the nth-term test is inconclusive, but the series is divergent by the integral test for convergence.
If 1 < p, then the nth-term test is inconclusive, but the series is convergent by the integral test for convergence.
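The numerical sketch promised above (an illustration, not a proof): the terms 1/n of the harmonic series tend to zero, yet the partial sums track ln(n) and grow without bound.

```python
import math

# Terms of the harmonic series shrink to zero while partial sums diverge.
s = 0.0
checkpoints = {10, 100, 1000, 100000}
for n in range(1, 100001):
    s += 1.0 / n
    if n in checkpoints:
        print(f"n = {n:6d}: term = {1.0 / n:.6f}, partial sum = {s:.4f}, ln(n) = {math.log(n):.4f}")
```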
Proofs
The test is typically proven in contrapositive form: if \sum_{n=1}^{\infty} a_n converges, then \lim_{n \to \infty} a_n = 0.
Limit manipulation
If s_n are the partial sums of the series, then the assumption that the series converges means that

\lim_{n \to \infty} s_n = L

for some number L. Then

\lim_{n \to \infty} a_n = \lim_{n \to \infty} (s_n - s_{n-1}) = L - L = 0 .
Cauchy's criterion
Assuming that the series converges implies that it passes Cauchy's convergence test: for every \varepsilon > 0 there is a number N such that

\left| a_{n+1} + a_{n+2} + \cdots + a_{n+p} \right| < \varepsilon

holds for all n > N and p ≥ 1. Setting p = 1 recovers the claim

\lim_{n \to \infty} a_n = 0 .
Scope
The simplest version of the term test applies to infinite series of real numbers. The above two proofs, by invoking the Cauchy criterion or the linearity of the limit, also work in any other normed vector space or any additively written abelian group.
Notes
References
Convergence tests
Articles containing proofs | Nth-term test | [
"Mathematics"
] | 456 | [
"Theorems in mathematical analysis",
"Convergence tests",
"Articles containing proofs"
] |
9,986,646 | https://en.wikipedia.org/wiki/Soliton%20model%20in%20neuroscience | The soliton hypothesis in neuroscience is a model that claims to explain how action potentials are initiated and conducted along axons, based on a thermodynamic theory of nerve pulse propagation. It proposes that the signals travel along the cell's membrane in the form of certain kinds of solitary sound (or density) pulses that can be modeled as solitons. The model is proposed as an alternative to the Hodgkin–Huxley model, in which action potentials arise when voltage-gated ion channels in the membrane open and allow sodium ions to enter the cell (inward current). The resulting decrease in membrane potential opens nearby voltage-gated sodium channels, thus propagating the action potential, and the transmembrane potential is restored by the delayed opening of potassium channels. Soliton hypothesis proponents assert that energy is mainly conserved during propagation apart from dissipation losses, and that the measured temperature changes are completely inconsistent with the Hodgkin–Huxley model.
The soliton model (and sound waves in general) depends on adiabatic propagation, in which the energy provided at the source of excitation is carried adiabatically through the medium, i.e. the plasma membrane. The measurement of a temperature pulse and the claimed absence of heat release during an action potential were the basis of the proposal that nerve impulses are an adiabatic phenomenon much like sound waves. Synaptically evoked action potentials in the electric organ of the electric eel are associated with substantial positive (only) heat production followed by active cooling to ambient temperature. In the garfish olfactory nerve, the action potential is associated with a biphasic temperature change; however, there is a net production of heat. These published results are inconsistent with the Hodgkin–Huxley model, and the authors interpret their work in terms of that model: the initial sodium current releases heat as the membrane capacitance is discharged; heat is absorbed during recharge of the membrane capacitance as potassium ions move with their concentration gradient but against the membrane potential. This mechanism is called the "condenser theory". Additional heat may be generated by membrane configuration changes driven by the changes in membrane potential: an increase in entropy during depolarization would release heat, and an entropy increase during repolarization would absorb heat. However, any such entropic contributions are incompatible with the Hodgkin–Huxley model.
History
Ichiji Tasaki pioneered a thermodynamic approach to the phenomenon of nerve pulse propagation which identified several phenomena that were not included in the Hodgkin–Huxley model. Along with measuring various non-electrical components of a nerve impulse, Tasaki investigated the physical chemistry of phase transitions in nerve fibers and its importance for nerve pulse propagation. Based on Tasaki's work, Konrad Kaufman proposed sound waves as a physical basis for nerve pulse propagation in an unpublished manuscript. The basic idea at the core of the soliton model is the balancing of the intrinsic dispersion of two-dimensional sound waves in the membrane by nonlinear elastic properties near a phase transition. The initial impulse can acquire a stable shape under such circumstances, in general known as a solitary wave. Solitons are the simplest solutions of the set of nonlinear wave equations governing such phenomena and were applied to model nerve impulses in 2005 by Thomas Heimburg and Andrew D. Jackson, both at the Niels Bohr Institute of the University of Copenhagen. Heimburg heads the institute's Membrane Biophysics Group. The biological physics group of Matthias Schneider has studied the propagation of two-dimensional sound waves in lipid interfaces and their possible role in biological signalling.
Justification
The model starts with the observation that cell membranes always have a freezing point (the temperature below which the consistency changes from fluid to gel-like) only slightly below the organism's body temperature, and this allows for the propagation of solitons. An action potential traveling along a mixed nerve results in a slight increase in temperature followed by a decrease in temperature. Soliton model proponents claim that no net heat is released during the overall pulse and that the observed temperature changes are inconsistent with the Hodgkin–Huxley model. However, this is untrue: the Hodgkin–Huxley model predicts a biphasic release and absorption of heat. In addition, the action potential causes a slight local thickening of the membrane and a force acting outwards; this effect is not predicted by the Hodgkin–Huxley model but does not contradict it, either.
The soliton model attempts to explain the electrical currents associated with the action potential as follows: the traveling soliton locally changes the density and thickness of the membrane, and since the membrane contains many charged and polar substances, this will result in an electrical effect, akin to piezoelectricity. Indeed, such nonlinear sound waves have now been shown to exist at lipid interfaces that show superficial similarity to action potentials (electro-opto-mechanical coupling, velocities, biphasic pulse shape, threshold for excitation etc.). Furthermore, the waves remain localized in the membrane and do not spread out into the surrounding medium due to an impedance mismatch.
Formalism
The soliton representing the action potential of nerves is the solution of the partial differential equation

$$\frac{\partial^2 \Delta\rho}{\partial t^2} = \frac{\partial}{\partial x}\left[\left(c_0^2 + p\,\Delta\rho + q\,\Delta\rho^2\right)\frac{\partial \Delta\rho}{\partial x}\right] - h\,\frac{\partial^4 \Delta\rho}{\partial x^4},$$

where $t$ is time and $x$ is the position along the nerve axon. $\Delta\rho$ is the change in membrane density under the influence of the action potential, $c_0$ is the sound velocity of the nerve membrane, and $p$ and $q$ describe the nature of the phase transition and thereby the nonlinearity of the elastic constants of the nerve membrane. The parameters $c_0$, $p$ and $q$ are dictated by the thermodynamic properties of the nerve membrane and cannot be adjusted freely. They have to be determined experimentally. The parameter $h$ describes the frequency dependence of the sound velocity of the membrane (dispersion relation). The above equation does not contain any fit parameters. It is formally related to the Boussinesq approximation for solitons in water canals. The solutions of the above equation possess a limiting maximum amplitude and a minimum propagation velocity that is similar to the pulse velocity in myelinated nerves. Under restrictive assumptions, there exist periodic solutions that display hyperpolarization and refractory periods.
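For illustration, the equation can be integrated numerically. The following is a minimal sketch using an explicit leapfrog finite-difference scheme on a periodic domain; all parameter values are illustrative assumptions, not the experimentally fitted membrane constants.

```python
# Minimal sketch: explicit finite-difference integration of the
# density-pulse equation above on a periodic domain. Parameter values
# (c0, p, q, h, grid, time step) are illustrative assumptions only.
import numpy as np

N, L = 400, 0.1                              # grid points, domain length (m)
dx = L / N
x = np.arange(N) * dx
c0, p, q, h = 170.0, -3.0e5, 3.0e8, 2.0e-8   # assumed constants

u = 0.01 * np.exp(-((x - L / 2) / 0.005) ** 2)   # initial density pulse
u_prev = u.copy()                                # zero initial velocity
dt = 1e-8                                        # well inside the CFL limit

def ddx(f):                      # centered first derivative
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def d4dx(f):                     # centered fourth derivative
    return (np.roll(f, -2) - 4 * np.roll(f, -1) + 6 * f
            - 4 * np.roll(f, 1) + np.roll(f, 2)) / dx**4

for _ in range(5000):
    flux = (c0**2 + p * u + q * u**2) * ddx(u)      # nonlinear sound speed
    accel = ddx(flux) - h * d4dx(u)                 # plus dispersion term
    u, u_prev = 2 * u - u_prev + dt**2 * accel, u   # leapfrog update
```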
Role of ion channels
Advocates of the soliton model claim that it explains several aspects of the action potential, which are not explained by the Hodgkin–Huxley model. Since it is of thermodynamic nature it does not address the properties of single macromolecules like ion channel proteins on a molecular scale. It is rather assumed that their properties are implicitly contained in the macroscopic thermodynamic properties of the nerve membranes. The soliton model predicts membrane current fluctuations during the action potential. These currents are of similar appearance as those reported for ion channel proteins. They are thought to be caused by lipid membrane pores spontaneously generated by the thermal fluctuations. Such thermal fluctuations explain the specific ionic selectivity or the specific time-course of the response to voltage changes on the basis of their effect on the macroscopic susceptibilities of the system.
Application to anesthesia
The authors claim that their model explains the previously obscure mode of action of numerous anesthetics. The Meyer–Overton observation holds that the strength of a wide variety of chemically diverse anesthetics is proportional to their lipid solubility, suggesting that they do not act by binding to specific proteins such as ion channels but instead by dissolving in and changing the properties of the lipid membrane. Dissolving substances in the membrane lowers the membrane's freezing point, and the resulting larger difference between body temperature and freezing point inhibits the propagation of solitons. By increasing pressure, lowering pH or lowering temperature, this difference can be restored to normal, which should cancel the action of anesthetics: this is indeed observed. The amount of pressure needed to cancel the action of an anesthetic of a given lipid solubility can be computed from the soliton model and agrees reasonably well with experimental observations.
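As a rough sketch of this argument in numbers, one can compare the ideal (van 't Hoff) melting-point depression caused by a dissolved anesthetic with the Clausius–Clapeyron shift of the lipid transition temperature under pressure; every value below is an assumed, order-of-magnitude membrane parameter, not measured data.

```python
# Illustrative sketch: comparing the ideal melting-point depression
# from a dissolved anesthetic with the pressure-induced shift of the
# lipid transition (Clausius-Clapeyron). All values are assumptions.
R = 8.314            # gas constant, J/(mol K)
Tm = 310.0           # assumed membrane transition temperature, K
dH = 35e3            # assumed transition enthalpy, J/mol
dV = 4e-6            # assumed transition volume change, m^3/mol

x_anesthetic = 0.03                        # assumed mole fraction
dT_drug = R * Tm**2 / dH * x_anesthetic    # van 't Hoff depression, K

dTdP = Tm * dV / dH                        # transition shift, K/Pa
P_reversal = dT_drug / dTdP                # pressure restoring Tm, Pa
print(f"depression ~{dT_drug:.2f} K, reversal ~{P_reversal/1e5:.0f} bar")
```

With these assumed values the reversal pressure comes out at roughly a couple of hundred bar, the scale at which pressure reversal of anesthesia is reported.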
Differences between model predictions and experimental observations
The following is a list of some of the disagreements between experimental observations and the "soliton model":
Antidromic invasion of soma from axon: An action potential initiated anywhere on an axon will travel in an antidromic (backward) direction to the neuron soma (cell body) without loss of amplitude and produce a full-amplitude action potential in the soma. As the membrane area of the soma is orders of magnitude larger than the area of the axon, conservation of energy requires that an adiabatic mechanical wave decrease in amplitude. Since the absence of heat production is one of the claimed justifications of the 'soliton model', this is particularly difficult to explain within that model.
Persistence of action potential over a wide temperature range: An important assumption of the soliton model is the presence of a phase transition near the ambient temperature of the axon (see "Formalism", above). Then, rapid change of temperature away from the phase transition temperature would necessarily cause large changes in the action potential. Below the phase transition temperature, the soliton wave would not be possible. Yet, action potentials are present at 0 °C. The time course is slowed in a manner predicted by the measured opening and closing kinetics of the Hodgkin–Huxley ion channels.
Collisions: Nerve impulses traveling in opposite directions annihilate each other on collision. On the other hand, mechanical waves do not annihilate but pass through each other. Soliton model proponents have attempted to show that action potentials can pass through a collision; however, collision annihilation of orthodromic and antidromic action potentials is a routinely observed phenomenon in neuroscience laboratories and is the basis of a standard technique for the identification of neurons. Solitons pass through each other on collision; solitary waves in general can pass through, annihilate, or bounce off each other, and solitons are only a special case of such solitary waves.
Ionic currents under voltage clamp: The voltage clamp, used by Hodgkin and Huxley (1952) (see the Hodgkin–Huxley model) to experimentally dissect the action potential in the squid giant axon, uses electronic feedback to measure the current necessary to hold membrane voltage constant at a commanded value. A silver wire, inserted into the interior of the axon, forces a constant membrane voltage along the length of the axon. Under these circumstances, there is no possibility of a traveling 'soliton'. Any thermodynamic changes are very different from those resulting from an action potential. Yet, the measured currents accurately reproduce the action potential.
Single channel currents: The patch clamp technique isolates a microscopic patch of membrane on the tip of a glass pipette. It is then possible to record currents from single ionic channels. There is no possibility of propagating solitons or thermodynamic changes. Yet, the properties of these channels (temporal response to voltage jumps, ionic selectivity) accurately predict the properties of the macroscopic currents measured under conventional voltage clamp.
Selective ionic conductivity: The current underlying the action potential depolarization is selective for sodium. Repolarization depends on a selective potassium current. These currents have very specific responses to voltage changes which quantitatively explain the action potential. Substitution of non-permeable ions for sodium abolishes the action potential. The 'soliton model' cannot explain either the ionic selectivity or the responses to voltage changes.
Pharmacology: The drug tetrodotoxin (TTX) blocks action potentials at extremely low concentrations. The site of action of TTX on the sodium channel has been identified. Dendrotoxins block the potassium channels. These drugs produce quantitatively predictable changes in the action potential. The 'soliton model' provides no explanation for these pharmacological effects.
Action waves
A recent theoretical model, proposed by Ahmed El Hady and Benjamin Machta, posits a mechanical surface wave that co-propagates with the electrical action potential. These surface waves are called "action waves". In the El Hady–Machta model, these co-propagating waves are driven by voltage changes across the membrane caused by the action potential.
See also
Biological neuron models
Hodgkin–Huxley model
Vector soliton
Sources
Federico Faraci (2013) "The 60th anniversary of the Hodgkin-Huxley model: a critical assessment from a historical and modeler’s viewpoint"
Revathi Appali, Ursula van Rienen, Thomas Heimburg (2012) "A comparison of the Hodgkin-Huxley model and the Soliton theory for the Action Potential in Nerves"
Action Waves in the Brain, The Guardian, 1 May 2015.
Ichiji Tasaki (1982) "Physiology and Electrochemistry of Nerve Fibers"
Konrad Kaufman (1989) "Action Potentials and Electrochemical Coupling in the Macroscopic Chiral Phospholipid Membrane".
Andersen, Jackson and Heimburg, "Towards a thermodynamic theory of nerve pulse propagation"
Revisiting the mechanics of the action potential, Princeton University Journal watch, 1 April 2015.
On the (sound) track of anesthetics, Eurekalert, according to a press release University of Copenhagen, 6 March 2007
An elementary introduction.
Solitary acoustic waves observed to propagate at a lipid membrane interface, Phys.org June 20, 2014
References
Cellular neuroscience
Computational neuroscience
Biophysics | Soliton model in neuroscience | [
"Physics",
"Biology"
] | 2,800 | [
"Applied and interdisciplinary physics",
"Biophysics"
] |
18,004,542 | https://en.wikipedia.org/wiki/Hindmarsh%E2%80%93Rose%20model | The Hindmarsh–Rose model of neuronal activity aims to study the spiking–bursting behavior of the membrane potential observed in experiments made with a single neuron. The relevant variable is the membrane potential, x(t), which is written in dimensionless units. There are two more variables, y(t) and z(t), which take into account the transport of ions across the membrane through the ion channels. The transport of sodium and potassium ions is made through fast ion channels and its rate is measured by y(t), which is called the spiking variable. z(t) corresponds to an adaptation current, which is incremented at every spike, leading to a decrease in the firing rate. Then, the Hindmarsh–Rose model has the mathematical form of a system of three nonlinear ordinary differential equations on the dimensionless dynamical variables x(t), y(t), and z(t). They read:

$$\begin{aligned} \frac{dx}{dt} &= y + \phi(x) - z + I,\\ \frac{dy}{dt} &= \psi(x) - y,\\ \frac{dz}{dt} &= r\left[s(x - x_R) - z\right], \end{aligned}$$

where

$$\phi(x) = -ax^3 + bx^2, \qquad \psi(x) = c - dx^2.$$
The model has eight parameters: a, b, c, d, r, s, x_R and I. It is common to fix some of them and let the others be control parameters. Usually the parameter I, which represents the current that enters the neuron, is taken as a control parameter. Other control parameters often used in the literature are a, b, c, d, or r, the first four modeling the working of the fast ion channels and the last one that of the slow ion channels. Frequently, the parameters held fixed are s = 4 and x_R = −8/5. When a, b, c, d are fixed, the values given are a = 1, b = 3, c = 1, and d = 5. The parameter r governs the time scale of the neural adaptation and is of the order of 10⁻³, and I ranges between −10 and 10.
The third state equation:

$$\frac{dz}{dt} = r\left[s(x - x_R) - z\right]$$
allows a great variety of dynamic behaviors of the membrane potential, described by variable x, including unpredictable behavior, which is referred to as chaotic dynamics. This makes the Hindmarsh–Rose model relatively simple and provides a good qualitative description of the many different patterns that are observed empirically.
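A minimal simulation sketch is shown below; the fixed parameters are the commonly quoted set from the text, while the values of r and I are assumptions chosen here to land in a spiking–bursting regime.

```python
# Minimal sketch: integrating the Hindmarsh-Rose equations with the
# commonly quoted fixed parameters; r and I are assumed values chosen
# to produce bursting.
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 3.0, 1.0, 5.0
s, xR = 4.0, -8.0 / 5.0
r, I = 0.001, 2.0                          # assumed control parameters

def hindmarsh_rose(t, state):
    x, y, z = state
    dx = y - a * x**3 + b * x**2 - z + I   # membrane potential
    dy = c - d * x**2 - y                  # fast (spiking) variable
    dz = r * (s * (x - xR) - z)            # slow adaptation current
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.6, 0.0, 0.0],
                max_step=0.1)
print("x(t) range:", sol.y[0].min(), "to", sol.y[0].max())
```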
See also
Biological neuron models
Ephaptic coupling
Hodgkin–Huxley model
Computational neuroscience
Neural oscillation
Rulkov map
Chialvo map
References
Nonlinear systems
Electrophysiology
Computational neuroscience
Biophysics
Chaotic maps | Hindmarsh–Rose model | [
"Physics",
"Mathematics",
"Biology"
] | 496 | [
"Functions and mappings",
"Applied and interdisciplinary physics",
"Mathematical objects",
"Nonlinear systems",
"Biophysics",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
18,004,969 | https://en.wikipedia.org/wiki/Manganese%28II%29%20fluoride | Manganese(II) fluoride is the chemical compound composed of manganese and fluoride with the formula MnF2. It is a light pink solid; the light pink color is characteristic of manganese(II) compounds. It is made by treating manganese or diverse manganese(II) compounds with hydrofluoric acid. Like some other metal difluorides, MnF2 crystallizes in the rutile structure, which features octahedral Mn centers.
Uses
MnF2 is used in the manufacture of special kinds of glass and lasers.
It is a canonical example of a uniaxial antiferromagnet (with a Néel temperature of 68 K) and has been studied experimentally since the early days of antiferromagnetism research.
References
Manganese(II) compounds
Fluorides
Metal halides | Manganese(II) fluoride | [
"Chemistry"
] | 168 | [
"Inorganic compounds",
"Inorganic compound stubs",
"Salts",
"Metal halides",
"Fluorides"
] |
18,005,010 | https://en.wikipedia.org/wiki/Niobium%20dioxide | Niobium dioxide is the chemical compound with the formula NbO2. It is a bluish-black non-stoichiometric solid with a composition range of NbO1.94-NbO2.09. It can be prepared by reducing Nb2O5 with H2 at 800–1350 °C. An alternative method is reaction of Nb2O5 with Nb powder at 1100 °C.
Properties
The room temperature form of NbO2 has a tetragonal, rutile-like structure with short Nb-Nb distances, indicating Nb-Nb bonding. The high temperature form also has a rutile-like structure with short Nb-Nb distances. Two high-pressure phases have been reported: one with a rutile-like structure (again, with short Nb-Nb distances), and a higher-pressure phase with a baddeleyite-related structure.
NbO2 is insoluble in water and is a powerful reducing agent, reducing carbon dioxide to carbon and sulfur dioxide to sulfur. In an industrial process for the production of niobium metal, NbO2 is produced as an intermediate, by the hydrogen reduction of Nb2O5. The NbO2 is subsequently reacted with magnesium vapor to produce niobium metal.
References
Niobium(IV) compounds
Non-stoichiometric compounds
Transition metal oxides | Niobium dioxide | [
"Chemistry"
] | 287 | [
"Non-stoichiometric compounds"
] |
18,005,603 | https://en.wikipedia.org/wiki/Fluoran | Fluoran is a triarylmethane dye. It is the structural core of a variety of other dyes.
These dyes have a variety of applications such as chemical stains (for example eosins) and in thermal paper. Black 305 is a common leuco dye product for thermal paper.
References
Triarylmethane dyes
Spiro compounds
Lactones | Fluoran | [
"Chemistry"
] | 77 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs",
"Spiro compounds"
] |
18,005,720 | https://en.wikipedia.org/wiki/Manganese%28II%29%20bromide | Manganese(II) bromide is the chemical compound composed of manganese and bromine with the formula MnBr2.
It can be used in place of palladium in the Stille reaction, which couples two carbon atoms using an organotin compound.
References
Manganese(II) compounds
Bromides
Metal halides | Manganese(II) bromide | [
"Chemistry"
] | 68 | [
"Inorganic compounds",
"Inorganic compound stubs",
"Salts",
"Bromides",
"Metal halides"
] |
18,006,051 | https://en.wikipedia.org/wiki/Alfred%20Bucherer | Alfred Heinrich Bucherer (9 July 1863, Cologne – 16 April 1927, Bonn) was a German physicist who is known for his experiments on relativistic mass. He was also the first to use the phrase "theory of relativity" for Einstein's theory of special relativity.
Education
He studied from 1884 until 1899 at the University of Hannover, the Johns Hopkins University, the University of Strassburg, the University of Leipzig, and the University of Bonn. In Bonn he habilitated in 1899 and taught there until 1923.
In 1903 Bucherer published the first German-language book to be completely based on vector calculus.
Theory of relativity
Like Henri Poincaré (1895, 1900), Bucherer (1903b) believed in the validity of the principle of relativity, i.e., that all descriptions of electrodynamic effects should contain only the relative motion of bodies, not motion relative to the aether. However, he went a step further and even assumed the physical non-existence of the aether. Based on those ideas he developed a theory in 1906, which also included the assumption that the geometry of space is Riemannian. But the theory was vaguely formulated, and in 1908 Walther Ritz showed that Bucherer's theory leads to wrong conclusions with respect to electrodynamics. And contrary to Albert Einstein, he did not connect his rejection of the aether with the relativity of space and time.
In 1904 he developed a theory of electrons in which the electrons contract in the line of motion and expand perpendicular to it. Independently of him Paul Langevin developed a very similar model in 1905. The Bucherer-Langevin model was an alternative to the electron models of:
Hendrik Lorentz (1899), Henri Poincaré (1905, 1906) and Albert Einstein (1905). in which the electrons are subjected to length contraction without expansion in the other direction
and the model of Max Abraham, in which the electron is rigid.
All three models predicted an increase of the electron mass as its velocity approaches the speed of light. The Bucherer–Langevin model was quickly abandoned, so some experimentalists tried to distinguish between Abraham's theory and the Lorentz–Einstein theory by experiment. This was done by Walter Kaufmann (1901–1905), who believed that his experiments confirmed Abraham's theory and disproved the Lorentz–Einstein theory. But in 1908 Bucherer conducted some experiments as well, and obtained results which seemed to confirm the Lorentz–Einstein theory and the principle of relativity. With exceptions like Adolf Bestelmeyer, with whom Bucherer had a polemical dispute, Bucherer's experiments were regarded as decisive. However, it was shown in 1938 that all those experiments of Kaufmann, Bucherer, Neumann etc. showed only a qualitative increase in mass and were too imprecise to distinguish between the different models. The matter was not settled until 1940, when similar experimental equipment became sufficiently accurate to confirm the Lorentz–Einstein formula; see Kaufmann–Bucherer–Neumann experiments and Tests of relativistic energy and momentum.
Bucherer (1906) was the first to use — in the course of some critical remarks on Einstein's theory — the expression "Einsteinian relativity theory / theory of relativity" ("Einsteinsche Relativitätstheorie"). This was based on Max Planck's term "relative theory" for the Lorentz–Einstein theory. And in 1908 Bucherer himself rejected his own version of the relativity principle and accepted the "Lorentz–Einstein theory".
Later (1923, 1924), Bucherer criticized general relativity in some papers. However, this criticism was rejected because Bucherer misinterpreted Einstein's equivalence hypothesis.
See also
History of special relativity
Sources
Publications
1863 births
1927 deaths
20th-century German physicists
Mass spectrometrists
Fellows of the American Physical Society
19th-century German physicists | Alfred Bucherer | [
"Physics",
"Chemistry"
] | 814 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
18,007,622 | https://en.wikipedia.org/wiki/Jordan%E2%80%93Wigner%20transformation | The Jordan–Wigner transformation is a transformation that maps spin operators onto fermionic creation and annihilation operators. It was proposed by Pascual Jordan and Eugene Wigner for one-dimensional lattice models, but now two-dimensional analogues of the transformation have also been created. The Jordan–Wigner transformation is often used to exactly solve 1D spin-chains such as the Ising and XY models by transforming the spin operators to fermionic operators and then diagonalizing in the fermionic basis.
This transformation actually shows that the distinction between spin-1/2 particles and fermions is nonexistent. It can be applied to systems with an arbitrary dimension.
Analogy between spins and fermions
In what follows we will show how to map a 1D spin chain of spin-1/2 particles to fermions.
Take the spin-1/2 Pauli raising and lowering operators acting on a site $j$ of a 1D chain, $\sigma_j^{\pm} = (\sigma_j^x \pm i\sigma_j^y)/2$. Taking the anticommutator of $\sigma_j^{+}$ and $\sigma_j^{-}$, we find $\{\sigma_j^{+}, \sigma_j^{-}\} = 1$, as would be expected from fermionic creation and annihilation operators. We might then be tempted to set

$$f_j^{\dagger} = \sigma_j^{+}, \qquad f_j = \sigma_j^{-}.$$

Now, we have the correct same-site fermionic relations $\{f_j^{\dagger}, f_j\} = 1$ and $(f_j)^2 = (f_j^{\dagger})^2 = 0$; however, on different sites we have the relations $[\sigma_i^{+}, \sigma_j^{-}] = 0$ and $[\sigma_i^{+}, \sigma_j^{+}] = 0$, where $i \neq j$, so spins on different sites commute, unlike fermions, which anti-commute. We must remedy this before we can take the analogy very seriously.

A transformation which recovers the true fermion commutation relations from spin-operators was performed in 1928 by Jordan and Wigner. This is a special example of a Klein transformation. We take a chain of fermions, and define a new set of operators

$$a_j = e^{+i\pi \sum_{k=1}^{j-1} \sigma_k^{+}\sigma_k^{-}}\,\sigma_j^{-}, \qquad a_j^{\dagger} = \sigma_j^{+}\,e^{-i\pi \sum_{k=1}^{j-1} \sigma_k^{+}\sigma_k^{-}}.$$

They differ from the above only by a phase $e^{\pm i\pi \sum_{k=1}^{j-1} \sigma_k^{+}\sigma_k^{-}}$. The phase is determined by the number of occupied fermionic modes in modes $k = 1, \dots, j-1$ of the field. The phase is equal to $+1$ if the number of occupied modes is even, and $-1$ if the number of occupied modes is odd. This phase is often expressed as

$$\prod_{k=1}^{j-1}\left(-\sigma_k^{z}\right).$$

The transformed spin operators now have the appropriate fermionic canonical anti-commutation relations

$$\{a_i, a_j^{\dagger}\} = \delta_{ij}, \qquad \{a_i, a_j\} = \{a_i^{\dagger}, a_j^{\dagger}\} = 0.$$

The above anti-commutation relations can be proved by invoking the relations

$$\sigma_k^{z} = 2\sigma_k^{+}\sigma_k^{-} - 1, \qquad \sigma_k^{z}\sigma_k^{\pm} = \pm\sigma_k^{\pm} = -\sigma_k^{\pm}\sigma_k^{z}.$$

The inverse transformation is given by

$$\sigma_j^{-} = e^{-i\pi \sum_{k=1}^{j-1} a_k^{\dagger}a_k}\,a_j, \qquad \sigma_j^{+} = a_j^{\dagger}\,e^{+i\pi \sum_{k=1}^{j-1} a_k^{\dagger}a_k}.$$
Note that the definition of the fermionic operators is nonlocal with respect to the bosonic operators because we have to deal with an entire chain of operators to the left of the site the fermionic operators are defined with respect to. This is also true the other way around. This is an example of a 't Hooft loop, which is a disorder operator instead of an order operator. This is also an example of an S-duality.
If the system has more than one dimension the transformation can still be applied. It is only necessary to label the sites in an arbitrary way by a single index.
Quantum computing
The Jordan–Wigner transformation can be inverted to map a fermionic Hamiltonian into a spin Hamiltonian. A series of spins is equivalent to a chain of qubits for quantum computing. Some molecular potentials can be efficiently simulated by a quantum computer using this transformation.
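The transformation is easy to check numerically on a small chain. The sketch below builds the string operators explicitly as dense matrices (the helper names are ours for illustration, not a library API) and verifies the canonical anticommutation relations.

```python
# Sketch: build Jordan-Wigner fermion operators on a small spin chain
# as dense matrices and check the canonical anticommutation relations.
import numpy as np

sz = np.diag([1.0, -1.0])                 # sigma^z
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma^- (lowering operator)
I2 = np.eye(2)

def site_op(op, j, n):
    """Embed a single-site operator at site j of an n-site chain."""
    mats = [I2] * n
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def jw_annihilation(j, n):
    """a_j = (prod_{k<j} -sigma^z_k) sigma^-_j, the string phase above."""
    out = np.eye(2 ** n)
    for k in range(j):
        out = out @ site_op(-sz, k, n)
    return out @ site_op(sm, j, n)

n = 3
ops = [jw_annihilation(j, n) for j in range(n)]
for i in range(n):
    for j in range(n):
        # operators are real, so the dagger is just the transpose here
        ac1 = ops[i] @ ops[j].T + ops[j].T @ ops[i]     # {a_i, a_j^dag}
        ac2 = ops[i] @ ops[j] + ops[j] @ ops[i]         # {a_i, a_j}
        assert np.allclose(ac1, (i == j) * np.eye(2 ** n))
        assert np.allclose(ac2, 0.0)
print("canonical anticommutation relations verified")
```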
See also
S-duality
Klein transformation
Bogoliubov transformation
Holstein–Primakoff transformation
Jordan–Schwinger transformation
References
Further reading
Michael Nielsen,
Piers Coleman, simple examples of second quantization
Condensed matter physics
Statistical mechanics
Quantum field theory
Lattice models | Jordan–Wigner transformation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 674 | [
"Quantum field theory",
"Phases of matter",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Materials science",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
18,008,163 | https://en.wikipedia.org/wiki/Groundwater%20remediation | Groundwater remediation is the process that is used to treat polluted groundwater by removing the pollutants or converting them into harmless products. Groundwater is water present below the ground surface that saturates the pore space in the subsurface. Globally, between 25 per cent and 40 per cent of the world's drinking water is drawn from boreholes and dug wells. Groundwater is also used by farmers to irrigate crops and by industries to produce everyday goods. Most groundwater is clean, but groundwater can become polluted, or contaminated as a result of human activities or as a result of natural conditions.
The many and diverse activities of humans produce innumerable waste materials and by-products. Historically, the disposal of such waste has not been subject to many regulatory controls. Consequently, waste materials have often been disposed of or stored on land surfaces, from which they percolate into the underlying groundwater. As a result, the contaminated groundwater is unsuitable for use.
Current practices can still impact groundwater, such as the over application of fertilizer or pesticides, spills from industrial operations, infiltration from urban runoff, and leaking from landfills. Using contaminated groundwater causes hazards to public health through poisoning or the spread of disease, and the practice of groundwater remediation has been developed to address these issues. Contaminants found in groundwater cover a broad range of physical, inorganic chemical, organic chemical, bacteriological, and radioactive parameters. Pollutants and contaminants can be removed from groundwater by applying various techniques, thereby bringing the water to a standard that is commensurate with various intended uses.
Techniques
Ground water remediation techniques span biological, chemical, and physical treatment technologies. Most ground water treatment techniques utilize a combination of technologies. Some of the biological treatment techniques include bioaugmentation, bioventing, biosparging, bioslurping, and phytoremediation. Some chemical treatment techniques include ozone and oxygen gas injection, chemical precipitation, membrane separation, ion exchange, carbon adsorption, aqueous chemical oxidation, and surfactant enhanced recovery. Some chemical techniques may be implemented using nanomaterials. Physical treatment techniques include, but are not limited to, pump and treat, air sparging, and dual phase extraction.
Biological treatment technologies
Bioaugmentation
If a treatability study shows no degradation (or an extended lab period before significant degradation is achieved) in contamination contained in the groundwater, then inoculation with strains known to be capable of degrading the contaminants may be helpful. This process increases the reactive enzyme concentration within the bioremediation system and subsequently may increase contaminant degradation rates over the nonaugmented rates, at least initially after inoculation.
Bioventing
Bioventing is an on site remediation technology that uses microorganisms to biodegrade organic constituents in the groundwater system. Bioventing enhances the activity of indigenous bacteria and archaea and stimulates the natural in situ biodegradation of hydrocarbons by inducing air or oxygen flow into the unsaturated zone and, if necessary, by adding nutrients. During bioventing, oxygen may be supplied through direct air injection into residual contamination in soil. Bioventing primarily assists in the degradation of adsorbed fuel residuals, but also assists in the degradation of volatile organic compounds (VOCs) as vapors move slowly through biologically active soil.
Biosparging
Biosparging is an in situ remediation technology that uses indigenous microorganisms to biodegrade organic constituents in the saturated zone. In biosparging, air (or oxygen) and nutrients (if needed) are injected into the saturated zone to increase the biological activity of the indigenous microorganisms. Biosparging can be used to reduce concentrations of petroleum constituents that are dissolved in groundwater, adsorbed to soil below the water table, and within the capillary fringe.
Bioslurping
Bioslurping combines elements of bioventing and vacuum-enhanced pumping of free-product that is lighter than water (light non-aqueous phase liquid or LNAPL) to recover free-product from the groundwater and soil, and to bioremediate soils. The bioslurper system uses a “slurp” tube that extends into the free-product layer. Much like a straw in a glass draws liquid, the pump draws liquid (including free-product) and soil gas up the tube in the same process stream. Pumping lifts LNAPLs, such as oil, off the top of the water table and from the capillary fringe (i.e., an area just above the saturated zone, where water is held in place by capillary forces). The LNAPL is brought to the surface, where it is separated from water and air. The biological processes in the term “bioslurping” refer to aerobic biological degradation of the hydrocarbons when air is introduced into the unsaturated zone contaminated soil.
Phytoremediation
In the phytoremediation process certain plants and trees are planted whose roots absorb contaminants from ground water over time. This process can be carried out in areas where the roots can tap the ground water. A few examples of plants used in this process: the Chinese ladder fern (Pteris vittata), also known as the brake fern, is a highly efficient accumulator of arsenic; genetically altered cottonwood trees are good absorbers of mercury; and transgenic Indian mustard plants soak up selenium well.
Permeable reactive barriers
Certain types of permeable reactive barriers utilize biological organisms in order to remediate groundwater.
Chemical treatment technologies
Chemical precipitation
Chemical precipitation is commonly used in wastewater treatment to remove hardness and heavy metals. In general, the process involves addition of a precipitating agent to an aqueous waste stream in a stirred reaction vessel, either batchwise or with steady flow. Most metals can be converted to insoluble compounds by chemical reactions between the agent and the dissolved metal ions. The insoluble compounds (precipitates) are removed by settling and/or filtering.
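The attainable residual metal concentration can be estimated from a solubility product. The following sketch uses illustrative values for a generic divalent metal hydroxide, not data for any specific contaminant.

```python
# Illustrative sketch: residual dissolved metal after hydroxide
# precipitation, from the solubility product of a generic divalent
# hydroxide M(OH)2. Ksp and pH are assumed demonstration values.
Ksp = 1e-15              # assumed solubility product for M(OH)2
pH = 10.0
OH = 10 ** (pH - 14.0)   # [OH-] in mol/L, from Kw = 1e-14

M = Ksp / OH ** 2        # [M2+][OH-]^2 = Ksp at equilibrium
print(f"residual metal ~ {M:.1e} mol/L at pH {pH}")
```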
Ion exchange
Ion exchange for ground water remediation is virtually always carried out by passing the water downward under pressure through a fixed bed of granular medium (either cation exchange media or anion exchange media) or spherical beads. Cations on the medium are displaced by certain cations from the solution, and anions on the medium are displaced by certain anions from the solution. Ion exchange media most often used for remediation are zeolites (both natural and synthetic) and synthetic resins.
Carbon adsorption
The most common activated carbon used for remediation is derived from bituminous coal. Activated carbon adsorbs volatile organic compounds from ground water; the compounds attach to the graphite-like surface of the activated carbon.
Chemical oxidation
In this process, called in situ chemical oxidation or ISCO, chemical oxidants are delivered to the subsurface to destroy the organic molecules (converting them to water and carbon dioxide or to nontoxic substances). The oxidants are introduced as either liquids or gases. Oxidants include air or oxygen, ozone, and certain liquid chemicals such as hydrogen peroxide, permanganate and persulfate.
Ozone and oxygen gas can be generated on site from air and electricity and directly injected into soil and groundwater contamination. The process has the potential to oxidize and/or enhance naturally occurring aerobic degradation. Chemical oxidation has proven to be an effective technique for dense non-aqueous phase liquid or DNAPL when it is present.
Surfactant enhanced recovery
Surfactant enhanced recovery increases the mobility and solubility of the contaminants absorbed to the saturated soil matrix or present as dense non-aqueous phase liquid. Surfactant-enhanced recovery injects surfactants (surface-active agents that are primary ingredient in soap and detergent) into contaminated groundwater. A typical system uses an extraction pump to remove groundwater downstream from the injection point. The extracted groundwater is treated aboveground to separate the injected surfactants from the contaminants and groundwater. Once the surfactants have separated from the groundwater they are re-used. The surfactants used are non-toxic, food-grade, and biodegradable. Surfactant enhanced recovery is used most often when the groundwater is contaminated by dense non-aqueous phase liquids (DNAPLs). These dense compounds, such as trichloroethylene (TCE), sink in groundwater because they have a higher density than water. They then act as a continuous source for contaminant plumes that can stretch for miles within an aquifer. These compounds may biodegrade very slowly. They are commonly found in the vicinity of the original spill or leak where capillary forces have trapped them.
Permeable reactive barriers
Some permeable reactive barriers utilize chemical processes to achieve groundwater remediation.
Physical treatment technologies
Pump and treat
Pump and treat is one of the most widely used ground water remediation technologies. In this process ground water is pumped to the surface and is coupled with either biological or chemical treatments to remove the impurities.
Air sparging
Air sparging is the process of blowing air directly into the ground water. As the bubbles rise, the contaminants are removed from the groundwater by physical contact with the air (i.e., stripping) and are carried up into the unsaturated zone (i.e., soil). As the contaminants move into the soil, a soil vapor extraction system is usually used to remove vapors.
Dual phase vacuum extraction
Dual-phase vacuum extraction (DPVE), also known as multi-phase extraction, is a technology that uses a high-vacuum system to remove both contaminated groundwater and soil vapor. In DPVE systems, a high-vacuum extraction well is installed with its screened section in the zone of contaminated soils and groundwater. Fluid/vapor extraction systems depress the water table and water flows faster to the extraction well. DPVE removes contaminants from above and below the water table. As the water table around the well is lowered from pumping, unsaturated soil is exposed. This area, called the capillary fringe, is often highly contaminated, as it holds undissolved chemicals, chemicals that are lighter than water, and vapors that have escaped from the dissolved groundwater below. Contaminants in the newly exposed zone can be removed by vapor extraction. Once above ground, the extracted vapors and liquid-phase organics and groundwater are separated and treated. Use of dual-phase vacuum extraction with these technologies can shorten the cleanup time at a site, because the capillary fringe is often the most contaminated area.
Monitoring-well oil skimming
Monitoring-wells are often drilled for the purpose of collecting ground water samples for analysis. These wells, which are usually six inches or less in diameter, can also be used to remove hydrocarbons from the contaminant plume within a groundwater aquifer by using a belt-style oil skimmer. Belt oil skimmers, which are simple in design, are commonly used to remove oil and other floating hydrocarbon contaminants from industrial water systems.
A monitoring-well oil skimmer remediates various oils, ranging from light fuel oils such as petrol, light diesel or kerosene to heavy products such as No. 6 oil, creosote and coal tar. It consists of a continuously moving belt that runs on a pulley system driven by an electric motor. The belt material has a strong affinity for hydrocarbon liquids and for shedding water. The belt, which can have a vertical drop of 100+ feet, is lowered into the monitoring well past the LNAPL/water interface. As the belt moves through this interface, it picks up liquid hydrocarbon contaminant which is removed and collected at ground level as the belt passes through a wiper mechanism. To the extent that DNAPL hydrocarbons settle at the bottom of a monitoring well, and the lower pulley of the belt skimmer reaches them, these contaminants can also be removed by a monitoring-well oil skimmer.
Typically, belt skimmers remove very little water with the contaminant, so simple weir-type separators can be used to collect any remaining hydrocarbon liquid, which often makes the water suitable for its return to the aquifer. Because the small electric motor uses little electricity, it can be powered from solar panels or a wind turbine, making the system self-sufficient and eliminating the cost of running electricity to a remote location.
See also
Toxic torts
Brownfield
CERCLA
Groundwater pollution
Plume (hydrodynamics)
Groundwater remediation applications of nanotechnology
References
External links
EPA Alternative Cleanup Technologies for Underground Storage Tank Sites
Aquifers
Environmental science
Ecological restoration
Environmental issues with water
-
Water chemistry
Water pollution | Groundwater remediation | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 2,655 | [
"Hydrology",
"Ecological restoration",
"Phytoremediation plants",
"Water pollution",
"Environmental engineering",
"Aquifers",
"nan",
"Bioremediation"
] |
18,012,776 | https://en.wikipedia.org/wiki/Symmetric%20convolution | In mathematics, symmetric convolution is a special subset of convolution operations in which the convolution kernel is symmetric across its zero point. Many common convolution-based processes such as Gaussian blur and taking the derivative of a signal in frequency-space are symmetric and this property can be exploited to make these convolutions easier to evaluate.
Convolution theorem
The convolution theorem states that a convolution in the real domain can be represented as a pointwise multiplication across the frequency domain of a Fourier transform. Since sine and cosine transforms are related transforms, a modified version of the convolution theorem can be applied, in which the concept of circular convolution is replaced with symmetric convolution. Using these transforms to compute discrete symmetric convolutions is non-trivial since discrete sine transforms (DSTs) and discrete cosine transforms (DCTs) can be counter-intuitively incompatible for computing symmetric convolution, i.e. symmetric convolution can only be computed between a fixed set of compatible transforms.
Mutually compatible transforms
In order to compute symmetric convolution effectively, one must know which particular frequency domains (which are reachable by transforming real data through DSTs or DCTs) the inputs and outputs to the convolution can be and then tailor the symmetries of the transforms to the required symmetries of the convolution.
The following table documents which combinations of the domains from the main eight commonly used DST I–IV and DCT I–IV satisfy $f \ast g = h$, where $\ast$ represents the symmetric convolution operator. Convolution is a commutative operator, and so $f$ and $g$ are interchangeable.
Forward transforms of $f$ and $g$, through the transforms specified, should allow the symmetric convolution to be computed as a pointwise multiplication, with any excess undefined frequency amplitudes set to zero. Possibilities for symmetric convolutions involving DSTs and DCTs V–VIII, derived from the discrete Fourier transforms (DFTs) of odd logical order, can be determined by adding four to each type in the above table.
Advantages of symmetric convolutions
There are a number of advantages to computing symmetric convolutions in DSTs and DCTs in comparison with the more common circular convolution with the Fourier transform.
Most notably the implicit symmetry of the transforms involved is such that only data unable to be inferred through symmetry is required. For instance using a DCT-II, a symmetric signal need only have the positive half DCT-II transformed, since the frequency domain will implicitly construct the mirrored data comprising the other half. This enables larger convolution kernels to be used with the same cost as smaller kernels circularly convolved on the DFT. Also the boundary conditions implicit in DSTs and DCTs create edge effects that are often more in keeping with neighbouring data than the periodic effects introduced by using the Fourier transform.
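The following sketch illustrates the underlying idea by making the symmetric extension explicit and convolving with the FFT; a production DCT-based implementation would operate on the half-length arrays directly and never build the mirrored halves. The signal and kernel are arbitrary demonstration data.

```python
# Sketch of the idea: a symmetric kernel and a symmetrically extended
# signal can be convolved using only half the data. Here the mirroring
# is made explicit and the FFT does the circular convolution.
import numpy as np

rng = np.random.default_rng(0)
half_x = rng.standard_normal(8)                 # half of the signal
half_k = np.exp(-np.arange(8.0) ** 2 / 4.0)     # half of a symmetric kernel

def whole_sample_extend(h):
    """Even (whole-sample symmetric) extension: length n -> 2n - 2."""
    return np.concatenate([h, h[-2:0:-1]])

x = whole_sample_extend(half_x)
k = whole_sample_extend(half_k)

# Circular convolution via pointwise multiplication of DFTs.
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

# The output inherits the even symmetry, so its second half is
# redundant: the first n samples carry all of the information.
n = len(half_x)
assert np.allclose(y, whole_sample_extend(y[:n]))
```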
References
Functional analysis | Symmetric convolution | [
"Mathematics"
] | 618 | [
"Functional analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
18,013,345 | https://en.wikipedia.org/wiki/ModelSim | ModelSim is a multi-language environment by Siemens (previously developed by Mentor Graphics) for simulation of hardware description languages such as VHDL, Verilog and SystemC, and includes a built-in C debugger. ModelSim can be used independently, or in conjunction with Intel Quartus Prime, PSIM, Xilinx ISE or Xilinx Vivado. Simulation is performed using the graphical user interface (GUI), or automatically using scripts.
Editions
Mentor HDL simulation products are offered in multiple editions, such as ModelSim PE and Questa Sim.
Questa Sim offers high-performance and advanced debugging capabilities, while ModelSim PE is the entry-level simulator for hobbyists and students. Questa Sim is used in large multi-million gate designs, and is supported on Microsoft Windows and Linux, in 32-bit and 64-bit architectures.
ModelSim can also be used with MATLAB/Simulink, using Link for ModelSim. Link for ModelSim is a fast bidirectional co-simulation interface between Simulink and ModelSim. For such designs, MATLAB provides a numerical simulation toolset, while ModelSim provides tools to verify the hardware implementation & timing characteristics of the design.
Language support
ModelSim uses a unified kernel for simulation of all supported languages, and the method of debugging embedded C code is the same as VHDL or Verilog.
ModelSim and Questa Sim products enable simulation, verification and debugging for the following languages:
VHDL
Verilog
Verilog 2001
SystemVerilog
PSL
SystemC
See also
Intel Quartus Prime
Icarus Verilog
List of HDL simulators
NCSim
Verilator
Xilinx ISE
Xilinx Vivado
References
External links
Computer-aided design software
Electronic design automation software
Digital electronics | ModelSim | [
"Engineering"
] | 380 | [
"Electronic engineering",
"Digital electronics"
] |
404,001 | https://en.wikipedia.org/wiki/Algebraic%20equation | In mathematics, an algebraic equation or polynomial equation is an equation of the form $P = 0$, where P is a polynomial with coefficients in some field, often the field of the rational numbers.
For example, $x^5 - 3x + 1 = 0$ is an algebraic equation with integer coefficients and
$$y^4 + \frac{xy}{2} = \frac{x^3}{3} - xy^2 + y^2 - \frac{1}{7}$$
is a multivariate polynomial equation over the rationals.
For many authors, the term algebraic equation refers only to the univariate case, that is polynomial equations that involve only one variable. On the other hand, a polynomial equation may involve several variables (the multivariate case), in which case the term polynomial equation is usually preferred.
Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression that can be found using a finite number of operations that involve only those same types of coefficients (that is, can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not all. A large amount of research has been devoted to computing efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
Terminology
The term "algebraic equation" dates from the time when the main problem of algebra was to solve univariate polynomial equations. This problem was completely solved during the 19th century; see Fundamental theorem of algebra, Abel–Ruffini theorem and Galois theory.
Since then, the scope of algebra has been dramatically enlarged. In particular, it includes the study of equations that involve th roots and, more generally, algebraic expressions. This makes the term algebraic equation ambiguous outside the context of the old problem. So the term polynomial equation is generally preferred when this ambiguity may occur, especially when considering multivariate equations.
History
The study of algebraic equations is probably as old as mathematics: the Babylonian mathematicians, as early as 2000 BC could solve some kinds of quadratic equations (displayed on Old Babylonian clay tablets).
Univariate algebraic equations over the rationals (i.e., with rational coefficients) have a very long history. Ancient mathematicians wanted the solutions in the form of radical expressions, like $x = \frac{1+\sqrt{5}}{2}$ for the positive solution of $x^2 - x - 1 = 0$. The ancient Egyptians knew how to solve equations of degree 2 in this manner. The Indian mathematician Brahmagupta (597–668 AD) explicitly described the quadratic formula in his treatise Brāhmasphuṭasiddhānta published in 628 AD, but written in words instead of symbols. In the 9th century Muhammad ibn Musa al-Khwarizmi and other Islamic mathematicians derived the quadratic formula, the general solution of equations of degree 2, and recognized the importance of the discriminant. During the Renaissance in 1545, Gerolamo Cardano published the solution of Scipione del Ferro and Niccolò Fontana Tartaglia to equations of degree 3 and that of Lodovico Ferrari for equations of degree 4. Finally Niels Henrik Abel proved, in 1824, that equations of degree 5 and higher do not have general solutions using radicals. Galois theory, named after Évariste Galois, showed that some equations of degree at least 5 do not even have any solution in radicals, and gave criteria for deciding whether an equation is in fact solvable using radicals.
Areas of study
The algebraic equations are the basis of a number of areas of modern mathematics: Algebraic number theory is the study of (univariate) algebraic equations over the rationals (that is, with rational coefficients). Galois theory was introduced by Évariste Galois to specify criteria for deciding if an algebraic equation may be solved in terms of radicals. In field theory, an algebraic extension is an extension such that every element is a root of an algebraic equation over the base field. Transcendental number theory is the study of the real numbers which are not solutions to an algebraic equation over the rationals. A Diophantine equation is a (usually multivariate) polynomial equation with integer coefficients for which one is interested in the integer solutions. Algebraic geometry is the study of the solutions in an algebraically closed field of multivariate polynomial equations.
Two equations are equivalent if they have the same set of solutions. In particular the equation $P = Q$ is equivalent to $P - Q = 0$. It follows that the study of algebraic equations is equivalent to the study of polynomials.
A polynomial equation over the rationals can always be converted to an equivalent one in which the coefficients are integers. For example, multiplying through by 42 = 2·3·7 and grouping its terms in the first member, the previously mentioned polynomial equation becomes

$$42y^4 + 21xy - 14x^3 + 42xy^2 - 42y^2 + 6 = 0.$$
Because sine, exponentiation, and 1/T are not polynomial functions,
$$e^{T}x^2 + \frac{1}{T}xy + \sin(T)\,z - 2 = 0$$
is not a polynomial equation in the four variables x, y, z, and T over the rational numbers. However, it is a polynomial equation in the three variables x, y, and z over the field of the elementary functions in the variable T.
Theory
Polynomials
Given an equation in unknown $x$

$$(\mathrm{E})\qquad a_n x^n + a_{n-1}x^{n-1} + \dots + a_1 x + a_0 = 0,$$

with coefficients in a field $K$, one can equivalently say that the solutions of (E) in $K$ are the roots in $K$ of the polynomial

$$P = a_n X^n + a_{n-1}X^{n-1} + \dots + a_1 X + a_0 \in K[X].$$
It can be shown that a polynomial of degree $n$ in a field has at most $n$ roots. The equation (E) therefore has at most $n$ solutions.
If $K'$ is a field extension of $K$, one may consider (E) to be an equation with coefficients in $K'$, and the solutions of (E) in $K$ are also solutions in $K'$ (the converse does not hold in general). It is always possible to find a field extension of $K$ known as the rupture field of the polynomial $P$, in which (E) has at least one solution.
Existence of solutions to real and complex equations
The fundamental theorem of algebra states that the field of the complex numbers is algebraically closed, that is, all polynomial equations with complex coefficients and degree at least one have a solution.
It follows that all polynomial equations of degree 1 or more with real coefficients have a complex solution. On the other hand, an equation such as $x^2 + 1 = 0$ does not have a solution in $\mathbb{R}$ (the solutions are the imaginary units $i$ and $-i$).
While the real solutions of real equations are intuitive (they are the $x$-coordinates of the points where the curve intersects the $x$-axis), the existence of complex solutions to real equations can be surprising and less easy to visualize.
However, a monic polynomial of odd degree must necessarily have a real root. The associated polynomial function in $x$ is continuous, and it approaches $-\infty$ as $x$ approaches $-\infty$ and $+\infty$ as $x$ approaches $+\infty$. By the intermediate value theorem, it must therefore assume the value zero at some real $x$, which is then a solution of the polynomial equation.
Connection to Galois theory
There exist formulas giving the solutions of real or complex polynomials of degree less than or equal to four as a function of their coefficients. Abel showed that it is not possible to find such a formula in general (using only the four arithmetic operations and taking roots) for equations of degree five or higher. Galois theory provides a criterion which allows one to determine whether the solution to a given polynomial equation can be expressed using radicals.
Explicit solution of numerical equations
Approach
The explicit solution of a real or complex equation of degree 1 is trivial. Solving an equation of higher degree reduces to factoring the associated polynomial, that is, rewriting (E) in the form

$$a_n(x - z_1)\cdots(x - z_n) = 0,$$

where the solutions are then the $z_1, \dots, z_n$. The problem is then to express the $z_i$ in terms of the coefficients $a_i$.
This approach applies more generally if the coefficients and solutions belong to an integral domain.
General techniques
Factoring
If an equation $P(x) = 0$ of degree $n$ has a rational root $\alpha$, the associated polynomial can be factored to give the form $P(x) = (x - \alpha)Q(x)$ (by dividing $P(x)$ by $x - \alpha$, or by writing $P(x) - P(\alpha)$ as a linear combination of terms of the form $x^k - \alpha^k$ and factoring out $x - \alpha$). Solving $P(x) = 0$ thus reduces to solving the degree-$(n-1)$ equation $Q(x) = 0$. See for example the case $n = 3$.
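A minimal sketch of this deflation step, using Horner-style synthetic division by a known root (the helper name is ours for illustration):

```python
# Sketch: deflating a polynomial by a known root via Horner/synthetic
# division, reducing degree n to n - 1. Coefficients highest-first.
def deflate(coeffs, root):
    out = [coeffs[0]]
    for c in coeffs[1:-1]:
        out.append(c + root * out[-1])
    remainder = coeffs[-1] + root * out[-1]
    assert abs(remainder) < 1e-9, "not a root"
    return out

# x**3 - 6x**2 + 11x - 6 has rational root 1; quotient is x**2 - 5x + 6.
print(deflate([1, -6, 11, -6], 1))   # [1, -5, 6]
```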
Elimination of the sub-dominant term
To solve an equation of degree $n$,

$$(\mathrm{E})\qquad a_n x^n + a_{n-1}x^{n-1} + \dots + a_1 x + a_0 = 0,$$

a common preliminary step is to eliminate the degree-$(n-1)$ term: by setting $x = y - \frac{a_{n-1}}{n\,a_n}$, equation (E) becomes

$$a_n y^n + b_{n-2}y^{n-2} + \dots + b_1 y + b_0 = 0.$$
Leonhard Euler developed this technique for the case $n = 3$, but it is also applicable to the case $n = 4$, for example.
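A sketch of this substitution for the cubic case, using sympy to confirm that the degree-2 term vanishes:

```python
# Sketch: eliminating the degree-(n-1) term of a cubic by the shift
# x = y - a2/(3*a3), checking symbolically that the y**2 term vanishes.
import sympy as sp

x, y = sp.symbols('x y')
a3, a2, a1, a0 = sp.symbols('a3 a2 a1 a0')

P = a3 * x**3 + a2 * x**2 + a1 * x + a0
depressed = sp.expand(P.subs(x, y - a2 / (3 * a3)))

print(sp.collect(depressed, y))
assert sp.simplify(depressed.coeff(y, 2)) == 0   # sub-dominant term gone
```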
Quadratic equations
To solve a quadratic equation of the form $ax^2 + bx + c = 0$ (with $a \neq 0$) one calculates the discriminant Δ defined by $\Delta = b^2 - 4ac$.
If the polynomial has real coefficients, it has:
two distinct real roots if $\Delta > 0$;
one real double root if $\Delta = 0$;
no real root if $\Delta < 0$, but two complex conjugate roots.
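A sketch implementing these three cases directly:

```python
# Sketch: solving a real quadratic a*x**2 + b*x + c = 0 by cases
# on the discriminant, returning real or complex-conjugate roots.
import cmath
import math

def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c
    if disc > 0:                      # two distinct real roots
        r = math.sqrt(disc)
        return (-b + r) / (2 * a), (-b - r) / (2 * a)
    if disc == 0:                     # one real double root
        return (-b / (2 * a),)
    r = cmath.sqrt(disc)              # two complex conjugate roots
    return (-b + r) / (2 * a), (-b - r) / (2 * a)

print(solve_quadratic(1, -3, 2))      # (2.0, 1.0)
print(solve_quadratic(1, 2, 5))       # ((-1+2j), (-1-2j))
```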
Cubic equations
The best-known method for solving cubic equations, by writing roots in terms of radicals, is Cardano's formula.
Quartic equations
For detailed discussions of some solution methods see:
Tschirnhaus transformation (general method, not guaranteed to succeed);
Bezout method (general method, not guaranteed to succeed);
Ferrari method (solutions for degree 4);
Euler method (solutions for degree 4);
Lagrange method (solutions for degree 4);
Descartes method (solutions for degree 2 or 4);
A quartic equation $ax^4 + bx^3 + cx^2 + dx + e = 0$ with $a \neq 0$ may be reduced to a quadratic equation by a change of variable provided it is either biquadratic ($b = d = 0$) or quasi-palindromic ($e = am^2$ and $d = bm$ for some $m$).
Some cubic and quartic equations can be solved using trigonometry or hyperbolic functions.
Higher-degree equations
Évariste Galois and Niels Henrik Abel showed independently that in general a polynomial of degree 5 or higher is not solvable using radicals. Some particular equations do have solutions, such as those associated with the cyclotomic polynomials of degrees 5 and 17.
Charles Hermite, on the other hand, showed that polynomials of degree 5 are solvable using elliptic functions.
Otherwise, one may find numerical approximations to the roots using root-finding algorithms, such as Newton's method.
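A minimal sketch of Newton's method applied to a quintic that has no solution in radicals:

```python
# Sketch: Newton's method for approximating a real root of the
# quintic x**5 - x - 1 = 0, which is not solvable in radicals.
def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

f = lambda x: x**5 - x - 1
df = lambda x: 5 * x**4 - 1
root = newton(f, df, 1.5)
print(root, f(root))   # approx 1.1673, residual ~ 0
```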
See also
Algebraic function
Algebraic number
Root finding
Linear equation (degree = 1)
Quadratic equation (degree = 2)
Cubic equation (degree = 3)
Quartic equation (degree = 4)
Quintic equation (degree = 5)
Sextic equation (degree = 6)
Septic equation (degree = 7)
System of linear equations
System of polynomial equations
Linear Diophantine equation
Linear equation over a ring
Cramer's theorem (algebraic curves), on the number of points usually sufficient to determine a bivariate n-th degree curve
References
Polynomials
Equations | Algebraic equation | [
"Mathematics"
] | 2,085 | [
"Polynomials",
"Equations",
"Mathematical objects",
"Algebra"
] |
404,078 | https://en.wikipedia.org/wiki/Brauer%20group | In mathematics, the Brauer group of a field K is an abelian group whose elements are Morita equivalence classes of central simple algebras over K, with addition given by the tensor product of algebras. It was defined by the algebraist Richard Brauer.
The Brauer group arose out of attempts to classify division algebras over a field. It can also be defined in terms of Galois cohomology. More generally, the Brauer group of a scheme is defined in terms of Azumaya algebras, or equivalently using projective bundles.
Construction
A central simple algebra (CSA) over a field K is a finite-dimensional associative K-algebra A such that A is a simple ring and the center of A is equal to K. Note that CSAs are in general not division algebras, though CSAs can be used to classify division algebras.
For example, the complex numbers C form a CSA over themselves, but not over R (the center is C itself, hence too large to be a CSA over R). The finite-dimensional division algebras with center R (that means the dimension over R is finite) are the real numbers and the quaternions by a theorem of Frobenius, while any matrix ring over the reals or quaternions – M(n, R) or M(n, H) – is a CSA over the reals, but not a division algebra (if n > 1).
We obtain an equivalence relation on CSAs over K by the Artin–Wedderburn theorem (Wedderburn's part, in fact), to express any CSA as M(n, D) for some division algebra D. If we look just at D, that is, if we impose an equivalence relation identifying M(m, D) with M(n, D) for all positive integers m and n, we get the Brauer equivalence relation on CSAs over K. The elements of the Brauer group are the Brauer equivalence classes of CSAs over K.
Given central simple algebras A and B, one can look at their tensor product A ⊗ B as a K-algebra. It turns out that this is always central simple. A slick way to see this is to use a characterization: a central simple algebra A over K is a K-algebra that becomes a matrix ring when we extend the field of scalars to an algebraic closure of K. This result also shows that the dimension of a central simple algebra A as a K-vector space is always a square. The degree of A is defined to be the square root of its dimension.
As a result, the isomorphism classes of CSAs over K form a monoid under tensor product, compatible with Brauer equivalence, and the Brauer classes are all invertible: the inverse of an algebra A is given by its opposite algebra Aop (the opposite ring with the same action by K, since the image of K → A is in the center of A). Explicitly, for a CSA A we have A ⊗ Aop ≅ M(n², K), where n is the degree of A over K.
The Brauer group of any field is a torsion group. In more detail, define the period of a central simple algebra A over K to be its order as an element of the Brauer group. Define the index of A to be the degree of the division algebra that is Brauer equivalent to A. Then the period of A divides the index of A (and hence is finite).
Examples
In the following cases, every finite-dimensional central division algebra over a field K is K itself, so that the Brauer group Br(K) is trivial:
K is an algebraically closed field.
K is a finite field (Wedderburn's theorem). Equivalently, every finite division ring is commutative.
K is the function field of an algebraic curve over an algebraically closed field (Tsen's theorem). More generally, the Brauer group vanishes for any C1 field.
K is an algebraic extension of Q containing all roots of unity.
The Brauer group Br R of the real numbers is the cyclic group of order two. There are just two non-isomorphic real division algebras with center R: R itself and the quaternion algebra H. Since H ⊗ H ≅ M(4, R), the class of H has order two in the Brauer group.
Let K be a non-Archimedean local field, meaning that K is complete under a discrete valuation with finite residue field. Then Br K is isomorphic to Q/Z.
Severi–Brauer varieties
Another important interpretation of the Brauer group of a field K is that it classifies the projective varieties over K that become isomorphic to projective space over an algebraic closure of K. Such a variety is called a Severi–Brauer variety, and there is a one-to-one correspondence between the isomorphism classes of Severi–Brauer varieties of dimension n − 1 over K and the central simple algebras of degree n over K.
For example, the Severi–Brauer varieties of dimension 1 are exactly the smooth conics in the projective plane over K. For a field K of characteristic not 2, every conic over K is isomorphic to one of the form ax2 + by2 = z2 for some nonzero elements a and b of K. The corresponding central simple algebra is the quaternion algebra
(a, b) = K⟨i, j⟩ with relations i2 = a, j2 = b, ij = −ji.
The conic is isomorphic to the projective line P1 over K if and only if the corresponding quaternion algebra is isomorphic to the matrix algebra M(2, K).
Cyclic algebras
For a positive integer n, let K be a field in which n is invertible such that K contains a primitive nth root of unity ζ. For nonzero elements a and b of K, the associated cyclic algebra is the central simple algebra of degree n over K defined by
(a, b)ζ = K⟨x, y⟩ with relations xn = a, yn = b, xy = ζ yx.
Cyclic algebras are the best-understood central simple algebras. (When n is not invertible in K or K does not have a primitive nth root of unity, a similar construction gives the cyclic algebra associated to a cyclic Z/n-extension χ of K and a nonzero element a of K.)
The Merkurjev–Suslin theorem in algebraic K-theory has a strong consequence about the Brauer group. Namely, for a positive integer n, let K be a field in which n is invertible such that K contains a primitive nth root of unity. Then the subgroup of the Brauer group of K killed by n is generated by cyclic algebras of degree n. Equivalently, any division algebra of period dividing n is Brauer equivalent to a tensor product of cyclic algebras of degree n. Even for a prime number p, there are examples showing that a division algebra of period p need not be actually isomorphic to a tensor product of cyclic algebras of degree p.
It is a major open problem (raised by Albert) whether every division algebra of prime degree over a field is cyclic. This is true if the degree is 2 or 3, but the problem is wide open for primes at least 5. The known results are only for special classes of fields. For example, if K is a global field or local field, then a division algebra of any degree over K is cyclic, by Albert–Brauer–Hasse–Noether. A "higher-dimensional" result in the same direction was proved by Saltman: if K is a field of transcendence degree 1 over the local field Qp, then every division algebra of prime degree over K is cyclic.
The period-index problem
For any central simple algebra A over a field K, the period of A divides the index of A, and the two numbers have the same prime factors. The period-index problem is to bound the index in terms of the period, for fields K of interest. For example, if A is a central simple algebra over a local field or global field, then Albert–Brauer–Hasse–Noether showed that the index of A is equal to the period of A.
For a central simple algebra A over a field K of transcendence degree n over an algebraically closed field, it is conjectured that ind(A) divides per(A)n−1. This is true for n ≤ 2, the case n = 2 being an important advance by de Jong, sharpened in positive characteristic by de Jong–Starr and Lieblich.
Class field theory
The Brauer group plays an important role in the modern formulation of class field theory. If Kv is a non-Archimedean local field, local class field theory gives a canonical isomorphism invv : Br Kv ≅ Q/Z, the Hasse invariant.
The case of a global field K (such as a number field) is addressed by global class field theory. If D is a central simple algebra over K and v is a place of K, then D ⊗K Kv is a central simple algebra over Kv, the completion of K at v. This defines a homomorphism from the Brauer group of K into the Brauer group of Kv. A given central simple algebra D splits for all but finitely many v, so that the image of D under almost all such homomorphisms is 0. The Brauer group Br K fits into an exact sequence constructed by Hasse:
0 → Br K → ⊕v∈S Br Kv → Q/Z → 0,
where S is the set of all places of K and the right arrow is the sum of the local invariants; the Brauer group of the real numbers is identified with Z/2Z. The injectivity of the left arrow is the content of the Albert–Brauer–Hasse–Noether theorem.
The fact that the sum of all local invariants of a central simple algebra over K is zero is a typical reciprocity law. For example, applying this to a quaternion algebra over Q gives the quadratic reciprocity law.
Galois cohomology
For an arbitrary field K, the Brauer group can be expressed in terms of Galois cohomology as follows:
Br(K) ≅ H2(K, Gm),
where Gm denotes the multiplicative group, viewed as an algebraic group over K. More concretely, the cohomology group indicated means H2(Gal(Ks/K), Ks*), where Ks denotes a separable closure of K.
The isomorphism of the Brauer group with a Galois cohomology group can be described as follows. The automorphism group of the algebra of n × n matrices is the projective linear group PGL(n). Since all central simple algebras over K become isomorphic to the matrix algebra over a separable closure of K, the set of isomorphism classes of central simple algebras of degree n over K can be identified with the Galois cohomology set H1(K, PGL(n)). The class of a central simple algebra in H2(K, Gm) is the image of its class in H1 under the boundary homomorphism
H1(K, PGL(n)) → H2(K, Gm)
associated to the short exact sequence 1 → Gm → GL(n) → PGL(n) → 1.
The Brauer group of a scheme
The Brauer group was generalized from fields to commutative rings by Auslander and Goldman. Grothendieck went further by defining the Brauer group of any scheme.
There are two ways of defining the Brauer group of a scheme X, using either Azumaya algebras over X or projective bundles over X. The second definition involves projective bundles that are locally trivial in the étale topology, not necessarily in the Zariski topology. In particular, a projective bundle is defined to be zero in the Brauer group if and only if it is the projectivization of some vector bundle.
The cohomological Brauer group of a quasi-compact scheme X is defined to be the torsion subgroup of the étale cohomology group H2(X, Gm). (The whole group H2(X, Gm) need not be torsion, although it is torsion for regular, integral, quasi-compact schemes X.) The Brauer group is always a subgroup of the cohomological Brauer group. Gabber showed that the Brauer group is equal to the cohomological Brauer group for any scheme with an ample line bundle (for example, any quasi-projective scheme over a commutative ring).
The whole group H2(X, Gm) can be viewed as classifying the gerbes over X with structure group Gm.
For smooth projective varieties over a field, the Brauer group is an important birational invariant. For example, when X is also rationally connected over the complex numbers, the Brauer group of X is isomorphic to the torsion subgroup of the singular cohomology group H3(X, Z), which is therefore a birational invariant. Artin and Mumford used this description of the Brauer group to give the first example of a unirational variety X over C that is not stably rational (that is, no product of X with a projective space is rational).
Relation to the Tate conjecture
Artin conjectured that every proper scheme over the integers has finite Brauer group. This is far from known even in the special case of a smooth projective variety X over a finite field. Indeed, the finiteness of the Brauer group for surfaces in that case is equivalent to the Tate conjecture for divisors on X, one of the main problems in the theory of algebraic cycles.
For a regular integral scheme of dimension 2 which is flat and proper over the ring of integers of a number field, and which has a section, the finiteness of the Brauer group is equivalent to the finiteness of the Tate–Shafarevich group Ш for the Jacobian variety of the general fiber (a curve over a number field). The finiteness of Ш is a central problem in the arithmetic of elliptic curves and more generally abelian varieties.
The Brauer–Manin obstruction
Let X be a smooth projective variety over a number field K. The Hasse principle would predict that if X has a rational point over all completions Kv of K, then X has a K-rational point. The Hasse principle holds for some special classes of varieties, but not in general. Manin used the Brauer group of X to define the Brauer–Manin obstruction, which can be applied in many cases to show that X has no K-points even when X has points over all completions of K.
Notes
References
Ring theory
Algebraic number theory
Topological methods of algebraic geometry | Brauer group | [
"Mathematics"
] | 2,853 | [
"Fields of abstract algebra",
"Algebraic number theory",
"Ring theory",
"Number theory"
] |
404,082 | https://en.wikipedia.org/wiki/Central%20simple%20algebra | In ring theory and related areas of mathematics a central simple algebra (CSA) over a field K is a finite-dimensional associative K-algebra A that is simple, and for which the center is exactly K. (Note that not every simple algebra is a central simple algebra over its center: for instance, if K is a field of characteristic 0, then the Weyl algebra is a simple algebra with center K, but is not a central simple algebra over K as it has infinite dimension as a K-module.)
For example, the complex numbers C form a CSA over themselves, but not over the real numbers R (the center of C is all of C, not just R). The quaternions H form a 4-dimensional CSA over R, and in fact represent the only non-trivial element of the Brauer group of the reals (see below).
Given two central simple algebras A ~ M(n,S) and B ~ M(m,T) over the same field F, A and B are called similar (or Brauer equivalent) if their division rings S and T are isomorphic. The set of all equivalence classes of central simple algebras over a given field F, under this equivalence relation, can be equipped with a group operation given by the tensor product of algebras. The resulting group is called the Brauer group Br(F) of the field F. It is always a torsion group.
Properties
According to the Artin–Wedderburn theorem a finite-dimensional simple algebra A is isomorphic to the matrix algebra M(n,S) for some division ring S. Hence, there is a unique division algebra in each Brauer equivalence class.
Every automorphism of a central simple algebra is an inner automorphism (this follows from the Skolem–Noether theorem).
The dimension of a central simple algebra as a vector space over its centre is always a square: the degree is the square root of this dimension. The Schur index of a central simple algebra is the degree of the equivalent division algebra: it depends only on the Brauer class of the algebra.
The period or exponent of a central simple algebra is the order of its Brauer class as an element of the Brauer group. It is a divisor of the index, and the two numbers are composed of the same prime factors.
If S is a simple subalgebra of a central simple algebra A then dimF S divides dimF A.
Every 4-dimensional central simple algebra over a field F is isomorphic to a quaternion algebra; in fact, it is either a two-by-two matrix algebra, or a division algebra.
If D is a central division algebra over K for which the index has prime factorisation
ind(D) = p1^m1 ⋯ pr^mr,
then D has a tensor product decomposition
D = D1 ⊗ ⋯ ⊗ Dr,
where each component Di is a central division algebra of index pi^mi, and the components are uniquely determined up to isomorphism.
Splitting field
We call a field E a splitting field for A over K if A⊗E is isomorphic to a matrix ring over E. Every finite dimensional CSA has a splitting field: indeed, in the case when A is a division algebra, then a maximal subfield of A is a splitting field. In general by theorems of Wedderburn and Koethe there is a splitting field which is a separable extension of K of degree equal to the index of A, and this splitting field is isomorphic to a subfield of A. As an example, the field C splits the quaternion algebra H over R with
t + x i + y j + z k ↦ [ t + xi, y + zi ; −y + zi, t − xi ].
We can use the existence of the splitting field to define reduced norm and reduced trace for a CSA A. Map A to a matrix ring over a splitting field and define the reduced norm and trace to be the composite of this map with determinant and trace respectively. For example, in the quaternion algebra H, the splitting above shows that the element t + x i + y j + z k has reduced norm t2 + x2 + y2 + z2 and reduced trace 2t.
The reduced norm is multiplicative and the reduced trace is additive. An element a of A is invertible if and only if its reduced norm is non-zero: hence a CSA is a division algebra if and only if the reduced norm is non-zero on the non-zero elements.
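As a numerical sketch of the reduced norm and trace just described (the 2×2 complex embedding below is one standard convention, assumed here for illustration):

```python
import numpy as np

# Reduced norm/trace of a quaternion t + xi + yj + zk computed through the
# complex 2x2 splitting described above: q maps to [[t+xi, y+zi], [-y+zi, t-xi]].
def quaternion_matrix(t, x, y, z):
    return np.array([[t + x*1j,  y + z*1j],
                     [-y + z*1j, t - x*1j]])

q = quaternion_matrix(1.0, 2.0, 3.0, 4.0)
print(np.linalg.det(q).real)   # reduced norm: t^2 + x^2 + y^2 + z^2 = 30
print(np.trace(q).real)        # reduced trace: 2t = 2
```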
Generalization
CSAs over a field K are a non-commutative analog to extension fields over K – in both cases, they have no non-trivial 2-sided ideals, and have a distinguished field in their center, though a CSA can be non-commutative and need not have inverses (need not be a division algebra). This is of particular interest in noncommutative number theory as generalizations of number fields (extensions of the rationals Q); see noncommutative number field.
See also
Azumaya algebra, generalization of CSAs where the base field is replaced by a commutative local ring
Severi–Brauer variety
Posner's theorem
References
Further reading
Algebras
Ring theory | Central simple algebra | [
"Mathematics"
] | 1,040 | [
"Mathematical structures",
"Algebras",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
404,181 | https://en.wikipedia.org/wiki/Closed%20and%20exact%20differential%20forms | In mathematics, especially vector calculus and differential topology, a closed form is a differential form α whose exterior derivative is zero (), and an exact form is a differential form, α, that is the exterior derivative of another differential form β. Thus, an exact form is in the image of d, and a closed form is in the kernel of d.
For an exact form α, α = dβ for some differential form β of degree one less than that of α. The form β is called a "potential form" or "primitive" for α. Since the exterior derivative of a closed form is zero, β is not unique, but can be modified by the addition of any closed form of degree one less than that of α.
Because d2 = 0, every exact form is necessarily closed. The question of whether every closed form is exact depends on the topology of the domain of interest. On a contractible domain, every closed form is exact by the Poincaré lemma. More general questions of this kind on an arbitrary differentiable manifold are the subject of de Rham cohomology, which allows one to obtain purely topological information using differential methods.
Examples
A simple example of a form that is closed but not exact is the 1-form dθ given by the derivative of the argument θ on the punctured plane R2 ∖ {0}. Since θ is not actually a function (see the next paragraph), dθ is not an exact form. Still, dθ has vanishing derivative and is therefore closed.
Note that the argument θ is only defined up to an integer multiple of 2π, since a single point p can be assigned different arguments r, r + 2π, r + 4π, etc. We can assign arguments in a locally consistent manner around p, but not in a globally consistent manner. This is because if we trace a loop from p counterclockwise around the origin and back to p, the argument increases by 2π. Generally, the argument changes by
∮ dθ
over a counter-clockwise oriented loop.
Even though the argument θ is not technically a function, the different local definitions of θ at a point p differ from one another by constants. Since the derivative at p only uses local data, and since functions that differ by a constant have the same derivative, the argument has a globally well-defined derivative dθ.
The upshot is that dθ is a one-form on R2 ∖ {0} that is not actually the derivative of any well-defined function θ. We say that dθ is not exact. Explicitly, dθ is given as:
dθ = (−y dx + x dy) / (x2 + y2),
which by inspection has derivative zero. Because dθ has vanishing derivative, we say that it is closed.
On the other hand, a one-form whose exterior derivative does not vanish is not even closed, never mind exact.
The form dθ generates the first de Rham cohomology group of the punctured plane, meaning that any closed form is the sum of an exact form and a multiple of dθ, where the coefficient of dθ accounts for a non-trivial contour integral around the origin, which is the only obstruction to a closed form on the punctured plane (locally the derivative of a potential function) being the derivative of a globally defined function.
Examples in low dimensions
Differential forms in R2 and R3 were well known in the mathematical physics of the nineteenth century. In the plane, 0-forms are just functions, and 2-forms are functions times the basic area element dx ∧ dy, so that it is the 1-forms
α = f dx + g dy
that are of real interest. The formula for the exterior derivative here is
dα = (gx − fy) dx ∧ dy,
where the subscripts denote partial derivatives. Therefore the condition for α to be closed is
fy = gx.
In this case if h is a function then
dh = hx dx + hy dy.
The implication from 'exact' to 'closed' is then a consequence of the symmetry of second derivatives, with respect to x and y.
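A quick symbolic check of the closedness condition above, added as an illustration (SymPy is assumed available; the components f and g are those of the form dθ from the earlier example):

```python
import sympy as sp

# Verify that d(theta) = (-y dx + x dy)/(x^2 + y^2) satisfies the
# closedness condition f_y = g_x on the punctured plane.
x, y = sp.symbols("x y")
f = -y / (x**2 + y**2)   # coefficient of dx
g = x / (x**2 + y**2)    # coefficient of dy

print(sp.simplify(sp.diff(f, y) - sp.diff(g, x)))  # 0, so the form is closed
```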
The gradient theorem asserts that a 1-form is exact if and only if the line integral of the form depends only on the endpoints of the curve, or equivalently,
if the integral around any smooth closed curve is zero.
Vector field analogies
On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, k-forms correspond to k-vector fields (by duality via the metric), so there is a notion of a vector field corresponding to a closed or exact form.
In 3 dimensions, an exact vector field (thought of as a 1-form) is called a conservative vector field, meaning that it is the derivative (gradient) of a 0-form (smooth scalar field), called the scalar potential. A closed vector field (thought of as a 1-form) is one whose derivative (curl) vanishes, and is called an irrotational vector field.
Thinking of a vector field as a 2-form instead, a closed vector field is one whose derivative (divergence) vanishes, and is called an incompressible flow (sometimes solenoidal vector field). The term incompressible is used because a non-zero divergence corresponds to the presence of sources and sinks in analogy with a fluid.
The concepts of conservative and incompressible vector fields generalize to n dimensions, because gradient and divergence generalize to n dimensions; curl is defined only in three dimensions, thus the concept of irrotational vector field does not generalize in this way.
Poincaré lemma
The Poincaré lemma states that if B is an open ball in Rn, any closed p-form ω defined on B is exact, for any integer p with 1 ≤ p ≤ n.
More generally, the lemma states that on a contractible open subset of a manifold (e.g., Rn), a closed p-form, p > 0, is exact.
Formulation as cohomology
When the difference of two closed forms is an exact form, they are said to be cohomologous to each other. That is, if ζ and η are closed forms, and one can find some β such that
ζ − η = dβ,
then one says that ζ and η are cohomologous to each other. Exact forms are sometimes said to be cohomologous to zero. The set of all forms cohomologous to a given form (and thus to each other) is called a de Rham cohomology class; the general study of such classes is known as cohomology. It makes no real sense to ask whether a 0-form (smooth function) is exact, since d increases degree by 1; but the clues from topology suggest that only the zero function should be called "exact". The cohomology classes are identified with locally constant functions.
Using contracting homotopies similar to the one used in the proof of the Poincaré lemma, it can be shown that de Rham cohomology is homotopy-invariant.
Relevance to thermodynamics
Consider a thermodynamic system whose equilibrium states are specified by thermodynamic variables. The first law of thermodynamics can be stated as follows: In any process that results in an infinitesimal change of state where the internal energy of the system changes by an amount dU and an amount of work δW is done on the system, one must also supply an amount of heat
δQ = dU − δW.
The second law of thermodynamics is an empirical law of nature which says that there is no thermodynamic system for which the heat form admits a potential in every circumstance, or in mathematical terms, that the differential form δQ is not closed. Caratheodory's theorem further states that there exists an integrating denominator T such that
dS = δQ / T
is a closed 1-form. The integrating denominator is the temperature T, and the state function S is the equilibrium entropy.
Application in electrodynamics
In electrodynamics, the case of the magnetic field produced by a stationary electrical current is important. There one deals with the vector potential of this field. This case corresponds to degree k = 2, and the defining region is the full R3. The current-density vector is j. It corresponds to the current two-form
I = j1 dx2 ∧ dx3 + j2 dx3 ∧ dx1 + j3 dx1 ∧ dx2.
For the magnetic field B one has analogous results: it corresponds to the induction two-form
ΦB = B1 dx2 ∧ dx3 + B2 dx3 ∧ dx1 + B3 dx1 ∧ dx2,
and can be derived from the vector potential A, or the corresponding one-form A, via ΦB = dA.
Thereby the vector potential A corresponds to the potential one-form
A = A1 dx1 + A2 dx2 + A3 dx3.
The closedness of the magnetic-induction two-form corresponds to the property of the magnetic field that it is source-free, div B = 0: i.e., that there are no magnetic monopoles.
In a special gauge, div A = 0, this implies
Ai(r) = (μ0/4π) ∫ ji(r′) / |r − r′| d3r′.
(Here μ0 is the magnetic constant.)
This equation is remarkable, because it corresponds completely to a well-known formula for the electrical field E, namely for the electrostatic Coulomb potential of a charge density ρ. At this place one can already guess that
E and B,
ρ and j,
and the scalar and vector potentials
can be unified to quantities with six resp. four nontrivial components, which is the basis of the relativistic invariance of the Maxwell equations.
If the condition of stationarity is dropped, then on the left-hand side of the above-mentioned equation one must add, in the equations for Ai, the time t as a fourth variable in addition to the three space coordinates, whereas on the right-hand side the so-called "retarded time" must be used, i.e. it is added to the argument of the current-density. Finally, as before, one integrates over the three primed space coordinates. (As usual c is the vacuum velocity of light.)
Notes
Citations
References
Differential forms
Lemmas in analysis | Closed and exact differential forms | [
"Mathematics",
"Engineering"
] | 1,848 | [
"Theorems in mathematical analysis",
"Tensors",
"Differential forms",
"Lemmas in mathematical analysis",
"Lemmas"
] |
404,365 | https://en.wikipedia.org/wiki/Anyolite | Anyolite is a metamorphic rock composed of intergrown green zoisite, black/dark green pargasite and ruby. It has been found in the Arusha Region of Tanzania and in Austria. It is sometimes incorrectly advertised as a variety of the mineral zoisite. The term anyolite is, however, not an officially accepted term for a metamorphic rock.
Its name derives from the Maasai word anyoli, meaning "green". Anyolite is also referred to as ruby in zoisite, ruby zoisite, ruby-zoisite or Tanganyika artstone.
The contrasting colours make anyolite a popular material for sculptures and other decorative objects. It was first discovered at the Mundarara Mine, near Longido, Tanzania in 1954.
In 2010 it was suggested that a 2 kilogram stone known as the Gem of Tanzania owned by the defunct company Wrekin Construction and fraudulently valued at £11 million was actually a lump of anyolite worth about £100, although it was eventually sold for £8000. It is reported that the stone originally came from a mine near Arusha, Tanzania.
References
External links
Metamorphic rocks
Gemstones | Anyolite | [
"Physics"
] | 248 | [
"Materials",
"Gemstones",
"Matter"
] |
404,854 | https://en.wikipedia.org/wiki/Hydrostatic%20test | A hydrostatic test is a way in which pressure vessels such as pipelines, plumbing, gas cylinders, boilers and fuel tanks can be tested for strength and leaks. The test involves filling the vessel or pipe system with a liquid, usually water, which may be dyed to aid in visual leak detection, and pressurization of the vessel to the specified test pressure. Pressure tightness can be tested by shutting off the supply valve and observing whether there is a pressure loss. The location of a leak can be visually identified more easily if the water contains a colorant. Strength is usually tested by measuring permanent deformation of the container.
Hydrostatic testing is the most common method employed for testing pipes and pressure vessels. Using this test helps maintain safety standards and durability of a vessel over time. Newly manufactured pieces are initially qualified using the hydrostatic test. They are then revalidated at regular intervals according to the relevant standard. In some cases where a hydrostatic test is not practicable a pneumatic pressure test may be an acceptable alternative.
Testing of pressure vessels for transport and storage of gases is very important because such containers can explode if they fail under pressure.
Testing procedures
Hydrostatic tests are conducted under the constraints of either the industry's or the customer's specifications, or may be required by law. The vessel is filled with a nearly incompressible liquid – usually water or oil – pressurised to test pressure, and examined for leaks or permanent changes in shape. Red or fluorescent dyes may be added to the water to make leaks easier to see. The test pressure is always considerably higher than the operating pressure to give a factor of safety. This factor of safety is typically 166.66%, 143% or 150% of the designed working pressure, depending on the regulations that apply. For example, if a cylinder was rated to DOT-2015 PSI (approximately 139 bar), it would be tested at around 3360 PSI (approximately 232 bar).
Water is commonly used because it is cheap and easily available, and is usually harmless to the system to be tested. Hydraulic fluids and oils may be specified where contamination with water could cause problems. These fluids are nearly incompressible, therefore requiring relatively little work to develop a high pressure, and are able to release only a small amount of energy in case of a failure – only a small volume will escape under high pressure if the container fails. If high-pressure gas were used instead, the gas would expand from its compressed volume towards V = nRT/p, releasing far more stored energy in an explosion, with the attendant risk of damage or injury.
Small pressure vessels are normally tested using a water jacket test. The vessel is visually examined for defects and then placed in a container filled with water, in which the change in volume of the vessel can be measured, usually by monitoring the water level in a calibrated tube. The vessel is then pressurised for a specified period, usually 30 or more seconds, and if specified, the expansion will be measured by reading off the amount of liquid that has been forced into the measuring tube by the volume increase of the pressurised vessel. The vessel is then depressurised, and the permanent volume increase due to plastic deformation while under pressure (the permanent set) is measured by comparing the final volume in the measuring tube with the volume before pressurisation.
A leak will give a similar result to permanent set, but will be detectable by holding the volume in the pressurised vessel by closing the inlet valve for a period before depressurising, as the pressure will drop steadily during this period if there is a leak. In most cases a permanent set that exceeds the specified maximum will indicate failure. A leak may also be a failure criterion, but it may be that the leak is due to poor sealing of the test equipment. If the vessel fails, it will normally go through a condemning process marking the cylinder as unsafe.
The information needed to specify the test is stamped onto the cylinder. This includes the design standard, serial number, manufacturer, and manufacture date. After testing, the vessel or its nameplate will usually be stamp marked with the date of the successful test, and the test facility's identification mark.
A simpler test, that is also considered a hydrostatic test but can be performed by anyone who has a garden hose, is to pressurise the vessel by filling it with water and to physically examine the outside for leaks. This type of test is suitable for containers such as boat fuel tanks, which are not pressure vessels but must work under the hydrostatic pressure of the contents. A hydrostatic test head is usually specified as a height above the tank top. The tank is pressurised by filling water to the specified height through a temporary standpipe if necessary. It may be necessary to seal vents and other outlets during the test.
Examples
Portable fire extinguishers are safety tools that are required in most public buildings. Fire extinguishers are also recommended in homes. Over time the conditions in which they are housed, and the manner in which they are handled affect the structural integrity of the extinguisher. A structurally weakened fire extinguisher can malfunction or even burst when it is needed the most. To maintain the quality and safety of this product, hydrostatic testing is utilized. All critical components of the fire extinguisher should be tested to ensure proper function.
Pipeline testing
Hydrotesting of pipes, pipelines and vessels is performed to expose defective materials that have missed prior detection, ensure that any remaining defects are insignificant enough to allow operation at design pressures, expose possible leaks and serve as a final validation of the integrity of the constructed system. ASME B31.3 requires this testing to ensure tightness and strength.
Buried high pressure oil and gas pipelines are tested for strength by pressurising them to at least 125% of their maximum allowable working pressure (MAWP) at any point along their length. Since many long distance transmission pipelines are designed to have a steel hoop stress of 80% of specified minimum yield strength (SMYS) at maximum allowable operating pressure (MAOP), this means that the steel is stressed to SMYS and above during the testing, and test sections must be selected to ensure that excessive plastic deformation does not occur.
For piping built to ASME B31.3, if the design temperature is greater than the test temperature, then the test pressure must be adjusted for the related allowable stress at the design temperature. This is done by multiplying 1.5 MAWP by the ratio of the allowable stress at the test temperature to allowable stress at the design temperature per ASME B31.3 Section 345.4.2 Equation 24. Test pressures need not exceed a value that would produce a stress higher than yield stress at test temperature. ASME B31.3 section 345.4.2 (c)
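A minimal sketch of the B31.3 calculation just described, with invented example inputs (the function name and the numbers below are for demonstration only, not taken from the standard's tables):

```python
# Hydrostatic test pressure per the ASME B31.3 rule described above:
#   P_T = 1.5 * P * (S_T / S),
# where S_T and S are the allowable stresses at test and design temperature,
# optionally capped so the test does not exceed yield at test temperature.
def b31_3_test_pressure(design_pressure, stress_at_test_temp,
                        stress_at_design_temp, yield_limited_pressure=None):
    p_test = 1.5 * design_pressure * (stress_at_test_temp / stress_at_design_temp)
    # Test pressure need not exceed the value producing yield stress
    # at test temperature (B31.3 345.4.2(c)).
    if yield_limited_pressure is not None:
        p_test = min(p_test, yield_limited_pressure)
    return p_test

# Example: 20 bar design pressure; allowable stress is 138 MPa at the test
# temperature but only 115 MPa at the hotter design temperature.
print(b31_3_test_pressure(20.0, 138.0, 115.0))  # 36.0 bar
```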
Other codes require a more onerous approach. BS PD 8010-2 requires testing to 150% of the design pressure – which should not be less than the MAOP plus surge and other incidental effects that will occur during normal operation.
Leak testing is performed by balancing changes in the measured pressure in the test section against the theoretical pressure changes calculated from changes in the measured temperature of the test section.
Australian standard AS2885.5 "Pipelines – Gas and liquid petroleum: Part 5: Field pressure testing" gives an excellent explanation of the factors involved.
In the aerospace industry, depending on the airline, company or customer, certain codes will need to be followed. For example, Bell Helicopter has a certain specification that will have to be followed for any parts that will be used in their helicopters.
Testing frequency
Most countries have legislation or pressure vessel codes which requires vessels to be regularly tested, for example every two years (with a visual inspection annually) for high pressure gas cylinders and every five or ten years for lower pressure ones such as used in fire extinguishers. Gas cylinders which fail are normally destroyed as part of the testing protocol to avoid the dangers inherent in them being subsequently used.
These common US standard gas cylinders have the following requirements:
DOT-3AL gas cylinders must be tested every 5 years and have an unlimited life.
DOT-3HT gas cylinders must be tested every 3 years and have a 24-year life.
DOT-3AA gas cylinders must be tested every 5 years and have an unlimited life. (Unless stamped with a star (*) in which case the cylinder meets certain specifications and can have a 10-year hydrostatic test life).
Typically organizations such as DOT PHMSA, ISO, ASTM and ASME specify the guidelines for the different types of pressure vessels.
Safety
Hydraulic testing is a hazardous process and should be performed with caution by competent personnel. Adhering to prescribed procedures defined in relevant technical standards appropriate to the specific application and jurisdiction will usually reduce these risks to an acceptable level.
A leak of high pressure liquid can cut or penetrate the skin and inject itself into body tissues. This can cause serious direct injury to the operator, and if the fluid is toxic or contaminated there will be additional adverse effects.
A pressurised hose that is not securely attached or which fails under pressure may whip around spraying water or oil and could hit someone and cause injuries. A whip check (hose restraint) can be used to restrain such hoses.
Enclosing the components to be tested, hazard signage, use of appropriate personal protective equipment and providing barriers to access for non-essential personnel are common precautions.
Equipment:
Pressure gauges
Water supply
Pumps and hoses for water filling
High pressure pump and hoses for pressurising
Means of measuring volumetric expansion when applicable
References
External links
Piping
Nondestructive testing
Pressure vessels | Hydrostatic test | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,971 | [
"Structural engineering",
"Chemical equipment",
"Building engineering",
"Chemical engineering",
"Physical systems",
"Nondestructive testing",
"Materials testing",
"Hydraulics",
"Mechanical engineering",
"Piping",
"Pressure vessels"
] |
405,532 | https://en.wikipedia.org/wiki/W%20and%20Z%20bosons | In particle physics, the W and Z bosons are vector bosons that are together known as the weak bosons or more generally as the intermediate vector bosons. These elementary particles mediate the weak interaction; the respective symbols are , , and . The bosons have either a positive or negative electric charge of 1 elementary charge and are each other's antiparticles. The boson is electrically neutral and is its own antiparticle. The three particles each have a spin of 1. The bosons have a magnetic moment, but the has none. All three of these particles are very short-lived, with a half-life of about . Their experimental discovery was pivotal in establishing what is now called the Standard Model of particle physics.
The W bosons are named after the weak force. The physicist Steven Weinberg named the additional particle the "Z particle", and later gave the explanation that it was the last additional particle needed by the model. The W bosons had already been named, and the Z bosons were named for having zero electric charge.
The two W bosons are verified mediators of neutrino absorption and emission. During these processes, the W boson charge induces electron or positron emission or absorption, thus causing nuclear transmutation.
The Z boson mediates the transfer of momentum, spin and energy when neutrinos scatter elastically from matter (a process which conserves charge). Such behavior is almost as common as inelastic neutrino interactions and may be observed in bubble chambers upon irradiation with neutrino beams. The Z boson is not involved in the absorption or emission of electrons or positrons. Whenever an electron is observed as a new free particle, suddenly moving with kinetic energy, it is inferred to be a result of a neutrino interacting with the electron (with the momentum transfer via the Z boson) since this behavior happens more often when the neutrino beam is present. In this process, the neutrino simply strikes the electron (via exchange of a Z boson) and then scatters away from it, transferring some of the neutrino's momentum to the electron.
Basic properties
These bosons are among the heavyweights of the elementary particles. With masses of about 80.4 GeV/c2 and 91.2 GeV/c2, respectively, the W and Z bosons are almost 80 times as massive as the proton – heavier, even, than entire iron atoms.
Their high masses limit the range of the weak interaction. By way of contrast, the photon is the force carrier of the electromagnetic force and has zero mass, consistent with the infinite range of electromagnetism; the hypothetical graviton is also expected to have zero mass. (Although gluons are also presumed to have zero mass, the range of the strong nuclear force is limited for different reasons; see Color confinement.)
All three bosons have particle spin s = 1. The emission of a W+ or W− boson either lowers or raises the electric charge of the emitting particle by one unit, and also alters the spin by one unit. At the same time, the emission or absorption of a W boson can change the type of the particle – for example changing a strange quark into an up quark. The neutral Z boson cannot change the electric charge of any particle, nor can it change any other of the so-called "charges" (such as strangeness, baryon number, charm, etc.). The emission or absorption of a Z0 boson can only change the spin, momentum, and energy of the other particle. (See also Weak neutral current.)
Relations to the weak nuclear force
The W and Z bosons are carrier particles that mediate the weak nuclear force, much as the photon is the carrier particle for the electromagnetic force.
W bosons
The W bosons are best known for their role in nuclear decay. Consider, for example, the beta decay of cobalt-60.
60Co → 60Ni + e− + ν̄e + γ
This reaction does not involve the whole cobalt-60 nucleus, but affects only one of its 33 neutrons. The neutron is converted into a proton while also emitting an electron (often called a beta particle in this context) and an electron antineutrino:
n → p + e− + ν̄e
Again, the neutron is not an elementary particle but a composite of an up quark and two down quarks (udd). It is one of the down quarks that interacts in beta decay, turning into an up quark to form a proton (uud). At the most fundamental level, then, the weak force changes the flavour of a single quark:
d → u + W−
which is immediately followed by decay of the W− itself:
W− → e− + ν̄e
Z bosons
The Z0 boson is its own antiparticle. Thus, all of its flavour quantum numbers and charges are zero. The exchange of a Z0 boson between particles, called a neutral current interaction, therefore leaves the interacting particles unaffected, except for a transfer of spin and/or momentum.
Z0 boson interactions involving neutrinos have distinct signatures: They provide the only known mechanism for elastic scattering of neutrinos in matter; neutrinos are almost as likely to scatter elastically (via Z boson exchange) as inelastically (via W boson exchange). Weak neutral currents via Z boson exchange were confirmed shortly thereafter (also in 1973), in a neutrino experiment in the Gargamelle bubble chamber at CERN.
Predictions of the W+, W− and Z0 bosons
Following the success of quantum electrodynamics in the 1950s, attempts were undertaken to formulate a similar theory of the weak nuclear force. This culminated around 1968 in a unified theory of electromagnetism and weak interactions by Sheldon Glashow, Steven Weinberg, and Abdus Salam, for which they shared the 1979 Nobel Prize in Physics. Their electroweak theory postulated not only the bosons necessary to explain beta decay, but also a new boson that had never been observed.
The fact that the W and Z bosons have mass while photons are massless was a major obstacle in developing electroweak theory. These particles are accurately described by an SU(2) gauge theory, but the bosons in a gauge theory must be massless. As a case in point, the photon is massless because electromagnetism is described by a U(1) gauge theory. Some mechanism is required to break the SU(2) symmetry, giving mass to the W and Z in the process. The Higgs mechanism, first put forward by the 1964 PRL symmetry breaking papers, fulfills this role. It requires the existence of another particle, the Higgs boson, which has since been found at the Large Hadron Collider. Of the four components of a Goldstone boson created by the Higgs field, three are absorbed by the W+, W−, and Z0 bosons to form their longitudinal components, and the remainder appears as the spin-0 Higgs boson.
The combination of the SU(2) gauge theory of the weak interaction, the electromagnetic interaction, and the Higgs mechanism is known as the Glashow–Weinberg–Salam model. Today it is widely accepted as one of the pillars of the Standard Model of particle physics, particularly given the 2012 discovery of the Higgs boson by the CMS and ATLAS experiments.
The model predicts that the W and Z bosons have the following masses:
mW = (1/2) g v,
mZ = (1/2) √(g2 + g′2) v,
where g is the SU(2) gauge coupling, g′ is the U(1) gauge coupling, and v is the Higgs vacuum expectation value.
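As a rough numerical check of the tree-level formulas above (an illustration only; the inputs g ≈ 0.65, g′ ≈ 0.36 and v ≈ 246 GeV are approximate textbook values, not precision determinations):

```python
from math import sqrt

# Tree-level electroweak masses from the formulas above:
#   mW = g*v/2,  mZ = sqrt(g^2 + g'^2)*v/2
g, g_prime, v = 0.65, 0.36, 246.0  # couplings (dimensionless), vev in GeV

mW = g * v / 2
mZ = sqrt(g**2 + g_prime**2) * v / 2
print(f"mW ~ {mW:.1f} GeV, mZ ~ {mZ:.1f} GeV")  # roughly 80 and 91 GeV
```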
Discovery
Unlike beta decay, the observation of neutral current interactions that involve particles other than neutrinos requires huge investments in particle accelerators and particle detectors, such as are available in only a few high-energy physics laboratories in the world (and then only after 1983). This is because Z bosons behave in somewhat the same manner as photons, but do not become important until the energy of the interaction is comparable with the relatively huge mass of the Z boson.
The discovery of the W and Z bosons was considered a major success for CERN. First, in 1973, came the observation of neutral current interactions as predicted by electroweak theory. The huge Gargamelle bubble chamber photographed the tracks produced by neutrino interactions and observed events where a neutrino interacted but did not produce a corresponding lepton. This is a hallmark of a neutral current interaction and is interpreted as a neutrino exchanging an unseen Z boson with a proton or neutron in the bubble chamber. The neutrino is otherwise undetectable, so the only observable effect is the momentum imparted to the proton or neutron by the interaction.
The discovery of the W and Z bosons themselves had to wait for the construction of a particle accelerator powerful enough to produce them. The first such machine that became available was the Super Proton Synchrotron, where unambiguous signals of W bosons were seen in January 1983 during a series of experiments made possible by Carlo Rubbia and Simon van der Meer. The actual experiments were called UA1 (led by Rubbia) and UA2 (led by Pierre Darriulat), and were the collaborative effort of many people. Van der Meer was the driving force on the accelerator end (stochastic cooling). UA1 and UA2 found the Z boson a few months later, in May 1983. Rubbia and van der Meer were promptly awarded the 1984 Nobel Prize in Physics, a most unusual step for the conservative Nobel Foundation.
The W+, W−, and Z0 bosons, together with the photon (γ), comprise the four gauge bosons of the electroweak interaction.
Measurements of W boson mass
In May 2024, the Particle Data Group estimated the World Average mass for the W boson to be 80369.2 ± 13.3 MeV, based on experiments to date.
As of 2021, experimental measurements of the W boson mass had been similarly assessed to converge around 80379 ± 12 MeV, all consistent with one another and with the Standard Model.
In April 2022, a new analysis of historical data from the Fermilab Tevatron collider before its closure in 2011 determined the mass of the W boson to be 80433.5 ± 9.4 MeV, which was seven standard deviations above that predicted by the Standard Model. Besides being inconsistent with the Standard Model, the new measurement was also inconsistent with previous measurements such as ATLAS. This suggests that either the old or the new measurements had an unexpected systematic error, such as an undetected quirk in the equipment. This led to careful reevaluation of this data analysis and other historical measurements, as well as the planning of future measurements to confirm the potential new result. Fermilab Deputy Director Joseph Lykken reiterated that "... the (new) measurement needs to be confirmed by another experiment before it can be interpreted fully."
In 2023, an improved ATLAS experiment measured the W boson mass at 80360 ± 16 MeV, aligning with predictions from the Standard Model.
The Particle Data Group convened a working group on the Tevatron measurement of W boson mass, including W-mass experts from all hadron collider experiments to date, to understand the discrepancy. In May 2024 they concluded that the CDF measurement was an outlier, and the best estimate of the mass came from leaving out that measurement from the meta-analysis. "The corresponding value of the W boson mass is mW = 80369.2 ± 13.3 MeV, which we quote as the World Average."
In September 2024, the CMS experiment measured the W boson mass at 80360.2 ± 9.9 MeV. This was the most precise measurement to date, obtained from observations of a large number of W boson decays.
Decay
The W and Z bosons decay to fermion pairs, but neither the W nor the Z boson has sufficient energy to decay into the highest-mass top quark. Neglecting phase space effects and higher order corrections, simple estimates of their branching fractions can be calculated from the coupling constants.
W bosons
W bosons can decay to a lepton and antilepton (one of them charged and another neutral) or to a quark and antiquark of complementary types (with opposite electric charges of ±1/3 and ∓2/3). The decay width of the W boson to a quark–antiquark pair is proportional to the corresponding squared CKM matrix element and the number of quark colours, NC = 3. The decay widths for the W+ boson are then proportional to:
{| class="wikitable" style="text-align:center;"
!colspan="2" width="100"|Leptons
!colspan="6" width="100"|Quarks
|-
| e+ νe
| 1
| u d̄
| 3 |Vud|2
| u s̄
| 3 |Vus|2
| u b̄
| 3 |Vub|2
|-
| μ+ νμ
| 1
| c d̄
| 3 |Vcd|2
| c s̄
| 3 |Vcs|2
| c b̄
| 3 |Vcb|2
|-
| τ+ ντ
| 1
|colspan="6"|Energy conservation forbids decay to the top quark.
|-
|}
Here, e+, μ+, τ+ denote the three flavours of leptons (more exactly, the positive charged antileptons). νe, νμ, ντ denote the three flavours of neutrinos. The other particles, starting with u and d̄, all denote quarks and antiquarks (the colour factor NC = 3 is applied). The various Vij denote the corresponding CKM matrix coefficients.
Unitarity of the CKM matrix implies that
|Vud|2 + |Vus|2 + |Vub|2 = |Vcd|2 + |Vcs|2 + |Vcb|2 = 1,
thus each of the two quark rows sums to 3. Therefore, the leptonic branching ratios of the W boson are approximately 1/9 per flavour. The hadronic branching ratio is dominated by the CKM-favored ud̄ and cs̄ final states. The sum of the hadronic branching ratios has been measured experimentally to be 67.41 ± 0.27%, with the leptonic branching ratio 10.86 ± 0.09% averaged over the three flavours.
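The 1/9 counting argument above can be checked mechanically; this small sketch (an illustration only) just tallies the proportional widths:

```python
# Counting check for the W branching ratios described above: three lepton
# channels with weight 1 each, and two open quark rows with weight 3 each
# (colour factor), since CKM unitarity makes each row sum to 3.
weights = {"e nu": 1, "mu nu": 1, "tau nu": 1,
           "quark row 1": 3, "quark row 2": 3}
total = sum(weights.values())  # = 9
for channel, w in weights.items():
    print(f"{channel}: {w/total:.3f}")  # each lepton channel ~0.111 = 1/9
print("hadronic:", 6 / total)  # ~0.667, close to the measured ~67%
```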
Z0 boson
Z0 bosons decay into a fermion and its antiparticle. As the Z0 boson is a mixture of the pre-symmetry-breaking W0 and B0 bosons (see weak mixing angle), each vertex factor includes a factor T3 − Q sin2θW, where T3 is the third component of the weak isospin of the fermion (the "charge" for the weak force), Q is the electric charge of the fermion (in units of the elementary charge), and θW is the weak mixing angle. Because the weak isospin is different for fermions of different chirality, either left-handed or right-handed, the coupling is different as well.
The relative strengths of each coupling can be estimated by considering that the decay rates include the square of these factors, and all possible diagrams (e.g. sum over quark families, and left and right contributions). The results tabulated below are just estimates, since they only include tree-level interaction diagrams in the Fermi theory.
{| class="wikitable" style="text-align:center;"
!colspan=2| Particles
!colspan=2| Weak isospin (T3)
!rowspan=2| Relative factor
!colspan=2| Branching ratio
|-
! Name
! Symbols
! L
! R
! Predicted for x = 0.23
! Experimental measurements
|-
| align="left" | Neutrinos (all)
| νe, νμ, ντ
| 1/2
| 0
| 3 (1/2)2
| 20.5%
| 20.00 ± 0.06%
|-
| align="left" | Charged leptons (all)
| e−, μ−, τ−
|colspan=2 |
| 3 ((−1/2 + x)2 + x2)
| 10.3%
| 10.10 ± 0.01%
|-
| align="right" | Electron
| e−
| −1/2 + x
| x
| (−1/2 + x)2 + x2
| 3.4%
| 3.363 ± 0.004%
|-
| align="right" | Muon
| μ−
| −1/2 + x
| x
| (−1/2 + x)2 + x2
| 3.4%
| 3.366 ± 0.007%
|-
| align="right" | Tau
| τ−
| −1/2 + x
| x
| (−1/2 + x)2 + x2
| 3.4%
| 3.370 ± 0.008%
|-
| align="left" | Hadrons
|colspan=4|
| 69.2%
| 69.91 ± 0.06%
|-
| align="right" | Down-type quarks
| d, s, b
| −1/2 + x/3
| x/3
| 3 ((−1/2 + x/3)2 + (x/3)2)
| 15.2%
| 15.6 ± 0.4%
|-
| align="right" | Up-type quarks (except top)
| u, c
| 1/2 − 2x/3
| −2x/3
| 3 ((1/2 − 2x/3)2 + (2x/3)2)
| 11.8%
| 11.6 ± 0.6%
|}
To keep the notation compact, the table uses x = sin2θW.
* The impossible decay into a top quark–antiquark pair is left out of the table.
Subheadings L and R denote the chirality or "handedness" of the fermions.
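For readers who want to reproduce the "Predicted" column, the following sketch (an editorial illustration, not the source of the table) evaluates the couplings at x = 0.23:

```python
# Tree-level check of the predicted Z branching ratios above: the width per
# fermion is proportional to L^2 + R^2 with L = T3 - Q*x and R = -Q*x,
# where x = sin^2(theta_W), times a colour factor of 3 for quarks.
x = 0.23

def weight(T3, Q, colours=1):
    L = T3 - Q * x
    R = -Q * x
    return colours * (L**2 + R**2)

# (per-fermion weight, number of open flavours)
channels = {
    "neutrino":        (weight(+0.5,  0.0), 3),
    "charged lepton":  (weight(-0.5, -1.0), 3),
    "down-type quark": (weight(-0.5, -1/3, colours=3), 3),
    "up-type quark":   (weight(+0.5, +2/3, colours=3), 2),  # no top
}
total = sum(w * n for w, n in channels.values())
for name, (w, n) in channels.items():
    print(f"{name}: {w/total:.3f} each, {n*w/total:.3f} total")
# neutrinos total ~0.205 and hadrons ~0.692, matching the table
```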
In 2018, the CMS collaboration observed the first exclusive decay of the Z boson to a ψ meson and a lepton–antilepton pair.
See also
List of particles
Weak charge
X and Y bosons: analogous pair of bosons predicted by the Grand Unified Theory
Footnotes
References
External links
The Review of Particle Physics, the ultimate source of information on particle properties.
The W and Z particles: a personal recollection by Pierre Darriulat
When CERN saw the end of the alphabet by Daniel Denegri
W and Z particles at Hyperphysics
Bosons
Elementary particles
Electroweak theory
Gauge bosons
Standard Model
Force carriers
Subatomic particles with spin 1 | W and Z bosons | [
"Physics"
] | 3,396 | [
"Standard Model",
"Physical phenomena",
"Elementary particles",
"Matter",
"Force carriers",
"Electroweak theory",
"Bosons",
"Fundamental interactions",
"Particle physics",
"Subatomic particles"
] |
405,711 | https://en.wikipedia.org/wiki/Superluminal%20motion | In astronomy, superluminal motion is the apparently faster-than-light motion seen in some radio galaxies, BL Lac objects, quasars, blazars and recently also in some galactic sources called microquasars. Bursts of energy moving out along the relativistic jets emitted from these objects can have a proper motion that appears greater than the speed of light. All of these sources are thought to contain a black hole, responsible for the ejection of mass at high velocities. Light echoes can also produce apparent superluminal motion.
Explanation
Superluminal motion occurs as a special case of a more general phenomenon arising from the difference between the apparent speed of distant objects moving across the sky and their actual speed as measured at the source.
In tracking the movement of such objects across the sky, a naive calculation of their speed can be derived by a simple distance divided by time calculation. If the distance of the object from the Earth is known, the angular speed of the object can be measured, and the speed can be naively calculated via:
This calculation does not yield the actual speed of the object, as it fails to account for the fact that the speed of light is finite. When measuring the movement of distant objects across the sky, there is a large time delay between what has been observed and what has occurred, due to the large distance the light from the distant object has to travel to reach us. The error in the above naive calculation comes from the fact that when an object has a component of velocity directed towards the Earth, as the object moves closer to the Earth that time delay becomes smaller. This means that the apparent speed as calculated above is greater than the actual speed. Correspondingly, if the object is moving away from the Earth, the above calculation underestimates the actual speed.
This effect in itself does not generally lead to superluminal motion being observed. But when the actual speed of the object is close to the speed of light, the apparent speed can be observed as greater than the speed of light, as a result of the above effect. As the actual speed of the object approaches the speed of light, the effect is most pronounced as the component of the velocity towards the Earth increases. This means that in most cases, 'superluminal' objects are travelling almost directly towards the Earth. However it is not strictly necessary for this to be the case, and superluminal motion can still be observed in objects with appreciable velocities not directed towards the Earth.
Superluminal motion is most often observed in two opposing jets emanating from the core of a star or black hole. In this case, one jet is moving away from and one towards the Earth. If Doppler shifts are observed in both sources, the velocity and the distance can be determined independently of other observations.
Some contrary evidence
As early as 1983, at the "superluminal workshop" held at Jodrell Bank Observatory, referring to the seven then-known superluminal jets,
Schilizzi ... presented maps of arc-second resolution [showing the large-scale outer jets] ... which ... have revealed outer double structure in all but one (3C 273) of the known superluminal sources. An embarrassment is that the average projected size [on the sky] of the outer structure is no smaller than that of the normal radio-source population.
In other words, the jets are evidently not, on average, close to the Earth's line-of-sight. (Their apparent length would appear much shorter if they were.)
In 1993, Thomson et al. suggested that the (outer) jet of the quasar 3C 273 is nearly collinear to the Earth's line-of-sight. Superluminal motion of up to ~9.6c has been observed along the (inner) jet of this quasar.
Superluminal motion of up to 6c has been observed in the inner parts of the jet of M87. To explain this in terms of the "narrow-angle" model, the jet must be no more than 19° from the Earth's line-of-sight. But evidence suggests that the jet is in fact at about 43° to the Earth's line-of-sight. The same group of scientists later revised that finding and argue in favour of a superluminal bulk movement in which the jet is embedded.
Suggestions of turbulence and/or "wide cones" in the inner parts of the jets have been put forward to try to counter such problems, and there seems to be some evidence for this.
Signal velocity
The model identifies a difference between the information carried by the wave at its signal velocity c, and the information about the wave front's apparent rate of change of position. If a light pulse is envisaged in a wave guide (glass tube) moving across an observer's field of view, the pulse can only move at c through the guide. If that pulse is also directed towards the observer, he will receive that wave information, at c. If the wave guide is moved in the same direction as the pulse, the information on its position, passed to the observer as lateral emissions from the pulse, changes. He may see the rate of change of position as apparently representing motion faster than c when calculated, like the edge of a shadow across a curved surface. This is a different signal, containing different information, to the pulse and does not break the second postulate of special relativity. c is strictly maintained in all local fields.
Derivation of the apparent velocity
A relativistic jet coming out of the center of an active galactic nucleus is moving along AB with a velocity v, and is observed from the point O. At time t1 a light ray leaves the jet from point A and another ray leaves at time t2 from point B. An observer at O receives the rays at time t1′ and t2′ respectively. The angle θ is small enough that the two distances marked D can be considered equal.
δt′ = t2′ − t1′ = δt (1 − β cos θ), where δt = t2 − t1 and β = v/c.
Apparent transverse velocity along CB,
vT = v δt sin θ / δt′ = v sin θ / (1 − β cos θ).
The apparent transverse velocity is maximal for the angle with cos θ = β (β = v/c is used),
vT, max = γ v, where γ = 1/√(1 − β2).
If γ > 1 (i.e. when the velocity of the jet is close to the velocity of light) then vT, max > c despite the fact that v < c. And of course vT > c means that the apparent transverse velocity along CB, the only velocity on the sky that can be measured, is larger than the velocity of light in vacuum, i.e. the motion is apparently superluminal.
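A short numerical illustration of these formulas (the jet speed chosen below is arbitrary):

```python
from math import sin, cos, radians, sqrt

# Apparent transverse speed from the derivation above:
#   beta_T = beta*sin(theta) / (1 - beta*cos(theta)),
# maximised at cos(theta) = beta, where beta_T = gamma*beta.
def beta_transverse(beta, theta_deg):
    th = radians(theta_deg)
    return beta * sin(th) / (1 - beta * cos(th))

beta = 0.995                        # jet speed as a fraction of c
gamma = 1 / sqrt(1 - beta**2)
print(beta_transverse(beta, 10.0))  # ~8.6c: apparently superluminal
print(gamma * beta)                 # ~9.96c: the maximum possible value
```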
History
The apparent superluminal motion in the faint nebula surrounding Nova Persei was first observed in 1901 by Charles Dillon Perrine. “Mr. Perrine’s photograph of November 7th and 8th, 1901, secured with the Crossley Reflector, led to the remarkable discovery that the masses of nebulosity were apparently in motion, with a speed perhaps several hundred times as great as hitherto observed.” “Using the 36-in. telescope (Crossley), he discovered the apparent superluminal motion of the expanding light bubble around Nova Persei (1901). Thought to be a nebula, the visual appearance was actually caused by light from the nova event reflected from the surrounding interstellar medium as the light moved outward from the star. Perrine studied this phenomenon using photographic, spectroscopic, and polarization techniques.”
Superluminal motion was first observed in 1902 by Jacobus Kapteyn in the ejecta of the nova GK Persei, which had exploded in 1901. His discovery was published in the German journal Astronomische Nachrichten, and received little attention from English-speaking astronomers until many decades later.
In 1966, Martin Rees pointed out that "an object moving relativistically in suitable directions may appear to a distant observer to have a transverse velocity much greater than the velocity of light". In 1969 and 1970 such sources were found as very distant astronomical radio sources, such as radio galaxies and quasars, and were called superluminal sources. The discovery was the result of a new technique called Very Long Baseline Interferometry, which allowed astronomers to set limits to the angular size of components and to determine positions to better than milli-arcseconds, and in particular to determine the change in positions on the sky, called proper motions, in a timespan of typically years. The apparent velocity is obtained by multiplying the observed proper motion by the distance, which could be up to 6 times the speed of light.
In the introduction to a workshop on superluminal radio sources, Pearson and Zensus reported
The first indications of changes in the structure of some sources were obtained by an American-Australian team in a series of transpacific VLBI observations between 1968 and 1970 (Gubbay et al. 1969). Following the early experiments, they had realised the potential of the NASA tracking antennas for VLBI measurements and set up an interferometer operating between California and Australia. The change in the source visibility that they measured for 3C 279, combined with changes in total flux density, indicated that a component first seen in 1969 had reached a diameter of about 1 milliarcsecond, implying expansion at an apparent velocity of at least twice the speed of light. Aware of Rees's model, (Moffet et al. 1972) concluded that their measurement presented evidence for relativistic expansion of this component. This interpretation, although by no means unique, was later confirmed, and in hindsight it seems fair to say that their experiment was the first interferometric measurement of superluminal expansion.
In 1994, a galactic speed record was obtained with the discovery of a superluminal source in the Milky Way, the cosmic x-ray source GRS 1915+105. The expansion occurred on a much shorter timescale. Several separate blobs were seen to expand in pairs within weeks by typically 0.5 arcsec. Because of the analogy with quasars, this source was called a microquasar.
See also
EPR paradox
Quantum entanglement
Superluminal communication
Ultra-high-energy cosmic ray
Notes
External links
A more detailed explanation.
A mathematical deduction of superluminal motion.
Superluminal motion Flash Applet.
Astrophysics
Faster-than-light travel | Superluminal motion | [
"Physics",
"Astronomy"
] | 2,080 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
405,766 | https://en.wikipedia.org/wiki/Linear%20combination%20of%20atomic%20orbitals | A linear combination of atomic orbitals or LCAO is a quantum superposition of atomic orbitals and a technique for calculating molecular orbitals in quantum chemistry. In quantum mechanics, electron configurations of atoms are described as wavefunctions. In a mathematical sense, these wave functions are the basis set of functions, the basis functions, which describe the electrons of a given atom. In chemical reactions, orbital wavefunctions are modified, i.e. the electron cloud shape is changed, according to the type of atoms participating in the chemical bond.
It was introduced in 1929 by Sir John Lennard-Jones with the description of bonding in the diatomic molecules of the first main row of the periodic table, but had been used earlier by Linus Pauling for H2+.
Mathematical description
An initial assumption is that the number of molecular orbitals is equal to the number of atomic orbitals included in the linear expansion. In a sense, n atomic orbitals combine to form n molecular orbitals, which can be numbered i = 1 to n and which may not all be the same. The expression (linear expansion) for the i th molecular orbital would be:
$\phi_i = c_{1i}\chi_1 + c_{2i}\chi_2 + \cdots + c_{ni}\chi_n$
or
$\phi_i = \sum_{r=1}^{n} c_{ri}\,\chi_r$
where $\phi_i$ is a molecular orbital represented as the sum of n atomic orbitals $\chi_r$, each multiplied by a corresponding coefficient $c_{ri}$, and r (numbered 1 to n) represents which atomic orbital is combined in the term. The coefficients are the weights of the contributions of the n atomic orbitals to the molecular orbital. The Hartree–Fock method is used to obtain the coefficients of the expansion.
The orbitals are thus expressed as linear combinations of basis functions, and the basis functions are single-electron functions which may or may not be centered on the nuclei of the component atoms of the molecule. In either case the basis functions are usually also referred to as atomic orbitals (even though only in the former case this name seems to be adequate). The atomic orbitals used are typically those of hydrogen-like atoms since these are known analytically i.e. Slater-type orbitals but other choices are possible such as the Gaussian functions from standard basis sets or the pseudo-atomic orbitals from plane-wave pseudopotentials.
By minimizing the total energy of the system, an appropriate set of coefficients of the linear combinations is determined. This quantitative approach is now known as the Hartree–Fock method. However, since the development of computational chemistry, the LCAO method often refers not to an actual optimization of the wave function but to a qualitative discussion which is very useful for predicting and rationalizing results obtained via more modern methods. In this case, the shape of the molecular orbitals and their respective energies are deduced approximately from comparing the energies of the atomic orbitals of the individual atoms (or molecular fragments) and applying some recipes known as level repulsion and the like. The graphs that are plotted to make this discussion clearer are called correlation diagrams. The required atomic orbital energies can come from calculations or directly from experiment via Koopmans' theorem.
This is done by using the symmetry of the molecules and orbitals involved in bonding, and thus is sometimes called symmetry adapted linear combination (SALC). The first step in this process is assigning a point group to the molecule. Each operation in the point group is performed upon the molecule. The number of bonds that are unmoved is the character of that operation. This reducible representation is decomposed into the sum of irreducible representations. These irreducible representations correspond to the symmetry of the orbitals involved.
Molecular orbital diagrams provide simple qualitative LCAO treatment. The Hückel method, the extended Hückel method and the Pariser–Parr–Pople method, provide some quantitative theories.
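As an illustration of such a quantitative LCAO treatment, the sketch below diagonalizes the Hückel matrix of butadiene with numpy. This is a hedged example: α and β are the usual Coulomb and resonance integrals, set here to 0 and −1 in arbitrary units, and the variable names are illustrative.

```python
import numpy as np

# Hückel matrix for the four pi centers of butadiene: alpha on the diagonal,
# beta between bonded neighbours, zero elsewhere.
alpha, beta = 0.0, -1.0
H = np.array([[alpha, beta,  0.0,  0.0],
              [beta,  alpha, beta, 0.0],
              [0.0,   beta,  alpha, beta],
              [0.0,   0.0,   beta,  alpha]])

energies, coefficients = np.linalg.eigh(H)
print(energies)   # alpha + 1.618*beta and alpha + 0.618*beta (bonding), then the two antibonding levels
# Each column of `coefficients` holds the LCAO weights c_ri of one molecular orbital.
```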
See also
Quantum chemistry computer programs
Hartree–Fock method
Basis set (chemistry)
Tight binding
Holstein–Herring method
External links
LCAO @ chemistry.umeche.maine.edu Link
References
Chemical bonding
Physical chemistry
Electronic structure methods | Linear combination of atomic orbitals | [
"Physics",
"Chemistry",
"Materials_science"
] | 809 | [
"Applied and interdisciplinary physics",
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Electronic structure methods",
"Computational chemistry",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Physical chemistry"
] |
3,279,949 | https://en.wikipedia.org/wiki/Line%E2%80%93plane%20intersection | In analytic geometry, the intersection of a line and a plane in three-dimensional space can be the empty set, a point, or a line. It is the entire line if that line is embedded in the plane, and is the empty set if the line is parallel to the plane but outside it. Otherwise, the line cuts through the plane at a single point.
Distinguishing these cases, and determining equations for the point and line in the latter cases, have use in computer graphics, motion planning, and collision detection.
Algebraic form
In vector notation, a plane can be expressed as the set of points $\mathbf{p}$ for which
$(\mathbf{p} - \mathbf{p}_0) \cdot \mathbf{n} = 0$
where $\mathbf{n}$ is a normal vector to the plane and $\mathbf{p}_0$ is a point on the plane. (The notation $\mathbf{a} \cdot \mathbf{b}$ denotes the dot product of the vectors $\mathbf{a}$ and $\mathbf{b}$.)
The vector equation for a line is
$\mathbf{p} = \mathbf{l}_0 + d\,\mathbf{l}$
where $\mathbf{l}$ is a unit vector in the direction of the line, $\mathbf{l}_0$ is a point on the line, and $d$ is a scalar in the real number domain. Substituting the equation for the line into the equation for the plane gives
$(\mathbf{l}_0 + d\,\mathbf{l} - \mathbf{p}_0) \cdot \mathbf{n} = 0$
Expanding gives
$d\,(\mathbf{l} \cdot \mathbf{n}) + (\mathbf{l}_0 - \mathbf{p}_0) \cdot \mathbf{n} = 0$
And solving for $d$ gives
$d = \dfrac{(\mathbf{p}_0 - \mathbf{l}_0) \cdot \mathbf{n}}{\mathbf{l} \cdot \mathbf{n}}$
If $\mathbf{l} \cdot \mathbf{n} = 0$ then the line and plane are parallel. There will be two cases: if $(\mathbf{p}_0 - \mathbf{l}_0) \cdot \mathbf{n} = 0$ then the line is contained in the plane, that is, the line intersects the plane at each point of the line. Otherwise, the line and plane have no intersection.
If $\mathbf{l} \cdot \mathbf{n} \neq 0$ there is a single point of intersection. The value of $d$ can be calculated and the point of intersection, $\mathbf{p}$, is given by
$\mathbf{p} = \mathbf{l}_0 + d\,\mathbf{l}$.
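The algebraic form transcribes directly into code. A minimal sketch (function and variable names are illustrative, numpy assumed):

```python
import numpy as np

def line_plane_intersection(p0, n, l0, l, eps=1e-9):
    """Intersect the line p = l0 + d*l with the plane (p - p0) . n = 0."""
    denom = np.dot(l, n)
    if abs(denom) < eps:                      # l . n = 0: line parallel to the plane
        if abs(np.dot(p0 - l0, n)) < eps:
            return 'line lies in the plane'
        return 'no intersection'
    d = np.dot(p0 - l0, n) / denom            # d = (p0 - l0) . n / (l . n)
    return l0 + d * l                         # the single intersection point

# Line along the z-axis through the origin, plane z = 2:
print(line_plane_intersection(np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, 1.0]),
                              np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))
# -> [0. 0. 2.]
```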
Parametric form
A line is described by all points that are a given direction from a point. A general point on a line passing through points $\mathbf{l}_a = (x_a, y_a, z_a)$ and $\mathbf{l}_b = (x_b, y_b, z_b)$ can be represented as
$\mathbf{l}_a + \mathbf{l}_{ab}\,t, \quad t \in \mathbb{R}$
where $\mathbf{l}_{ab} = \mathbf{l}_b - \mathbf{l}_a$ is the vector pointing from $\mathbf{l}_a$ to $\mathbf{l}_b$.
Similarly a general point on a plane determined by the triangle defined by the points $\mathbf{p}_0$, $\mathbf{p}_1$ and $\mathbf{p}_2$ can be represented as
$\mathbf{p}_0 + \mathbf{p}_{01}\,u + \mathbf{p}_{02}\,v, \quad u, v \in \mathbb{R}$
where $\mathbf{p}_{01} = \mathbf{p}_1 - \mathbf{p}_0$ is the vector pointing from $\mathbf{p}_0$ to $\mathbf{p}_1$, and $\mathbf{p}_{02} = \mathbf{p}_2 - \mathbf{p}_0$ is the vector pointing from $\mathbf{p}_0$ to $\mathbf{p}_2$.
The point at which the line intersects the plane is therefore described by setting the point on the line equal to the point on the plane, giving the parametric equation:
$\mathbf{l}_a + \mathbf{l}_{ab}\,t = \mathbf{p}_0 + \mathbf{p}_{01}\,u + \mathbf{p}_{02}\,v$
This can be rewritten as
$\mathbf{l}_a - \mathbf{p}_0 = -\mathbf{l}_{ab}\,t + \mathbf{p}_{01}\,u + \mathbf{p}_{02}\,v$
which can be expressed in matrix form as
$\begin{bmatrix} -\mathbf{l}_{ab} & \mathbf{p}_{01} & \mathbf{p}_{02} \end{bmatrix} \begin{bmatrix} t \\ u \\ v \end{bmatrix} = \mathbf{l}_a - \mathbf{p}_0$
where the vectors are written as column vectors.
This produces a system of linear equations which can be solved for $t$, $u$ and $v$. If the solution satisfies the condition $0 \le t \le 1$, then the intersection point is on the line segment between $\mathbf{l}_a$ and $\mathbf{l}_b$, otherwise it is elsewhere on the line. Likewise, if the solution satisfies $0 \le u, v \le 1$, then the intersection point is in the parallelogram formed by the point $\mathbf{p}_0$ and vectors $\mathbf{p}_{01}$ and $\mathbf{p}_{02}$. If the solution additionally satisfies $u + v \le 1$, then the intersection point lies in the triangle formed by the three points $\mathbf{p}_0$, $\mathbf{p}_1$ and $\mathbf{p}_2$.
The determinant of the matrix can be calculated as
$\det = -\mathbf{l}_{ab} \cdot (\mathbf{p}_{01} \times \mathbf{p}_{02})$
If the determinant is zero, then there is no unique solution; the line is either in the plane or parallel to it.
If a unique solution exists (determinant is not 0), then it can be found by inverting the matrix and rearranging:
$\begin{bmatrix} t \\ u \\ v \end{bmatrix} = \begin{bmatrix} -\mathbf{l}_{ab} & \mathbf{p}_{01} & \mathbf{p}_{02} \end{bmatrix}^{-1} (\mathbf{l}_a - \mathbf{p}_0)$
which, by Cramer's rule, gives the solutions as ratios of scalar triple products:
$t = \dfrac{(\mathbf{p}_{01} \times \mathbf{p}_{02}) \cdot (\mathbf{l}_a - \mathbf{p}_0)}{-\mathbf{l}_{ab} \cdot (\mathbf{p}_{01} \times \mathbf{p}_{02})}, \quad u = \dfrac{(\mathbf{p}_{02} \times -\mathbf{l}_{ab}) \cdot (\mathbf{l}_a - \mathbf{p}_0)}{-\mathbf{l}_{ab} \cdot (\mathbf{p}_{01} \times \mathbf{p}_{02})}, \quad v = \dfrac{(-\mathbf{l}_{ab} \times \mathbf{p}_{01}) \cdot (\mathbf{l}_a - \mathbf{p}_0)}{-\mathbf{l}_{ab} \cdot (\mathbf{p}_{01} \times \mathbf{p}_{02})}$
The point of intersection is then equal to
$\mathbf{l}_a + \mathbf{l}_{ab}\,t$
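The matrix form translates directly into a single linear solve. The sketch below (illustrative names, numpy assumed) also applies the t, u, v membership tests described above:

```python
import numpy as np

def intersect_segment_triangle(la, lb, p0, p1, p2, eps=1e-12):
    """Solve [-l_ab  p01  p02] (t, u, v)^T = la - p0 and test the solution."""
    lab, p01, p02 = lb - la, p1 - p0, p2 - p0
    M = np.column_stack((-lab, p01, p02))
    if abs(np.linalg.det(M)) < eps:          # line in the plane or parallel to it
        return None
    t, u, v = np.linalg.solve(M, la - p0)
    point = la + t * lab
    in_triangle = (u >= 0) and (v >= 0) and (u + v <= 1) and (0 <= t <= 1)
    return point, in_triangle

pt = intersect_segment_triangle(np.array([0.25, 0.25, -1.0]), np.array([0.25, 0.25, 1.0]),
                                np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                                np.array([0.0, 1.0, 0.0]))
print(pt)   # (array([0.25, 0.25, 0.  ]), True)
```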
Uses
In the ray tracing method of computer graphics a surface can be represented as a set of pieces of planes. The intersection of a ray of light with each plane is used to produce an image of the surface. In vision-based 3D reconstruction, a subfield of computer vision, depth values are commonly measured by so-called triangulation method, which finds the intersection between light plane and ray reflected toward camera.
The algorithm can be generalised to cover intersection with other planar figures, in particular, the intersection of a polyhedron with a line.
See also
Plücker coordinates#Plane-line meet calculating the intersection when the line is expressed by Plücker coordinates.
Plane–plane intersection
References
Intersection of a Line and a Plane
Euclidean geometry
Computational physics
Geometric algorithms
Geometric intersection
Planes (geometry) | Line–plane intersection | [
"Physics",
"Mathematics"
] | 791 | [
"Planes (geometry)",
"Infinity",
"Mathematical objects",
"Computational physics"
] |
3,281,044 | https://en.wikipedia.org/wiki/2%2C3-sigmatropic%20rearrangement | 2,3-Sigmatropic rearrangements are a type of sigmatropic rearrangements and can be classified into two types. Rearrangements of allylic sulfoxides, amine oxides, selenoxides are neutral. Rearrangements of carbanions of allyl ethers are anionic. The general scheme for this kind of rearrangement is:
Atom Y may be sulfur, selenium, or nitrogen. If Y is nitrogen, the reaction is referred to as the Sommelet–Hauser rearrangement if a quaternary ammonium salt is involved or the aza-Wittig reaction if an alpha-metalated tertiary amine is involved; if Y is oxygen, then it is called a 2,3-Wittig rearrangement (not to be confused with the well-known Wittig reaction, which involves a phosphonium ylide). If Y is sulfur, the product can be treated with a thiophil to generate an allylic alcohol in what is known as the Mislow–Evans rearrangement.
A [2,3]-rearrangement may result in carbon-carbon bond formation. It can also be used as a ring-expansion reaction.
Stereoselectivity
2,3-sigmatropic rearrangements can offer high stereoselectivity. At the newly formed double bond there is a strong preference for formation of the E-alkene or trans isomer product. The stereochemistry of the newly formed C-C bond is harder to predict. It can be inferred from the five-membered ring transition state. Generally, the E-alkene will favor the formation of anti product, while Z-alkene will favor formation of syn product.
Diastereoselectivity can be high for Z-alkene with alkynyl, alkenyl, or aryl anion-stabilizing group. Diastereoselectivity is usually lower with E-alkenes. Hydrocarbon groups will prefer exo orientation in the envelope-like transition state. Anion-stabilizing group will prefer the endo orientation in transition state.
References
Rearrangement reactions
Reaction mechanisms | 2,3-sigmatropic rearrangement | [
"Chemistry"
] | 466 | [
"Reaction mechanisms",
"Organic reactions",
"Physical organic chemistry",
"Chemical kinetics",
"Rearrangement reactions"
] |
3,281,166 | https://en.wikipedia.org/wiki/Thermodynamic%20process | Classical thermodynamics considers three main kinds of thermodynamic processes: (1) changes in a system, (2) cycles in a system, and (3) flow processes.
(1) A thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables, which depend only on the current state of the system, not on the path taken by the processes that produce the state. In general, during the actual course of a thermodynamic process, the system may pass through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Non-equilibrium thermodynamics, however, considers processes in which the states of the system are close to thermodynamic equilibrium, and aims to describe the continuous passage along the path, at definite rates of progress.
As a useful theoretical but not actually physically realizable limiting case, a process may be imagined to take place practically infinitely slowly or smoothly enough to allow it to be described by a continuous path of equilibrium thermodynamic states, when it is called a "quasi-static" process. This is a theoretical exercise in differential geometry, as opposed to a description of an actually possible physical process; in this idealized case, the calculation may be exact.
A really possible or actual thermodynamic process, considered closely, involves friction. This contrasts with theoretically idealized, imagined, or limiting, but not actually possible, quasi-static processes which may occur with a theoretical slowness that avoids friction. It also contrasts with idealized frictionless processes in the surroundings, which may be thought of as including 'purely mechanical systems'; this difference comes close to defining a thermodynamic process.
(2) A cyclic process carries the system through a cycle of stages, starting and being completed in some particular state. The descriptions of the staged states of the system are not the primary concern. The primary concern is the sums of matter and energy inputs and outputs to the cycle. Cyclic processes were important conceptual devices in the early days of thermodynamical investigation, while the concept of the thermodynamic state variable was being developed.
(3) Defined by flows through a system, a flow process is a steady state of flows into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. Flow processes are of interest in engineering.
Kinds of process
Cyclic process
Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle. The descriptions of the staged states of the system may be of little or even no interest. A cycle is a sequence of a small number of thermodynamic processes that indefinitely often, repeatedly returns the system to its original state. For this, the staged states themselves are not necessarily described, because it is the transfers that are of interest. It is reasoned that if the cycle can be repeated indefinitely often, then it can be assumed that the states are recurrently unchanged. The condition of the system during the several staged processes may be of even less interest than is the precise nature of the recurrent states. If, however, the several staged processes are idealized and quasi-static, then the cycle is described by a path through a continuous progression of equilibrium states.
Flow process
Defined by flows through a system, a flow process is a steady state of flow into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. The states of the inflow and outflow materials consist of their internal states, and of their kinetic and potential energies as whole bodies. Very often, the quantities that describe the internal states of the input and output materials are estimated on the assumption that they are bodies in their own states of internal thermodynamic equilibrium. Because rapid reactions are permitted, the thermodynamic treatment may be approximate, not exact.
A cycle of quasi-static processes
A quasi-static thermodynamic process can be visualized by graphically plotting the path of idealized changes to the system's state variables. In the example, a cycle consisting of four quasi-static processes is shown. Each process has a well-defined start and end point in the pressure-volume state space. In this particular example, processes 1 and 3 are isothermal, whereas processes 2 and 4 are isochoric. The PV diagram is a particularly useful visualization of a quasi-static process, because the area under the curve of a process is the amount of work done by the system during that process. Thus work is considered to be a process variable, as its exact value depends on the particular path taken between the start and end points of the process. Similarly, heat may be transferred during a process, and it too is a process variable.
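The statement that work is the area under the process curve can be checked numerically, for instance on an isothermal branch where P = nRT/V. A minimal sketch in arbitrary units:

```python
import numpy as np

nRT = 1.0                           # constant-temperature ideal-gas product, arbitrary units
V = np.linspace(1.0, 2.0, 1001)     # isothermal expansion from V = 1 to V = 2
P = nRT / V

W_numeric = np.trapz(P, V)          # area under the curve on the PV diagram
W_exact = nRT * np.log(2.0)         # closed form: W = nRT * ln(V2/V1)
print(W_numeric, W_exact)           # both ~0.6931
```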
Conjugate variable processes
It is often useful to group processes into pairs, in which each variable held constant is one member of a conjugate pair.
Pressure – volume
The pressure–volume conjugate pair is concerned with the transfer of mechanical energy as the result of work.
An isobaric process occurs at constant pressure. An example would be to have a movable piston in a cylinder, so that the pressure inside the cylinder is always at atmospheric pressure, although it is separated from the atmosphere. In other words, the system is dynamically connected, by a movable boundary, to a constant-pressure reservoir.
An isochoric process is one in which the volume is held constant, with the result that the mechanical PV work done by the system will be zero. On the other hand, work can be done isochorically on the system, for example by a shaft that drives a rotary paddle located inside the system. It follows that, for the simple system of one deformation variable, any heat energy transferred to the system externally will be absorbed as internal energy. An isochoric process is also known as an isometric process or an isovolumetric process. An example would be to place a closed tin can of material into a fire. To a first approximation, the can will not expand, and the only change will be that the contents gain internal energy, evidenced by increase in temperature and pressure. Mathematically, $\delta Q = dU$. The system is dynamically insulated, by a rigid boundary, from the environment.
Temperature – entropy
The temperature-entropy conjugate pair is concerned with the transfer of energy, especially for a closed system.
An isothermal process occurs at a constant temperature. An example would be a closed system immersed in and thermally connected with a large constant-temperature bath. Energy gained by the system, through work done on it, is lost to the bath, so that its temperature remains constant.
An adiabatic process is a process in which there is no matter or heat transfer, because a thermally insulating wall separates the system from its surroundings. For the process to be natural, either (a) work must be done on the system at a finite rate, so that the internal energy of the system increases; the entropy of the system increases even though it is thermally insulated; or (b) the system must do work on the surroundings, which then suffer increase of entropy, as well as gaining energy from the system.
An isentropic process is customarily defined as an idealized quasi-static reversible adiabatic process, of transfer of energy as work. Otherwise, for a constant-entropy process, if work is done irreversibly, heat transfer is necessary, so that the process is not adiabatic, and an accurate artificial control mechanism is necessary; such is therefore not an ordinary natural thermodynamic process.
Chemical potential - particle number
The processes just above have assumed that the boundaries are also impermeable to particles. Otherwise, we may assume boundaries that are rigid, but are permeable to one or more types of particle. Similar considerations then hold for the chemical potential–particle number conjugate pair, which is concerned with the transfer of energy via this transfer of particles.
In a constant chemical potential process the system is particle-transfer connected, by a particle-permeable boundary, to a constant-μ reservoir.
The conjugate here is a constant particle number process. These are the processes outlined just above. There is no energy added or subtracted from the system by particle transfer. The system is particle-transfer-insulated from its environment by a boundary that is impermeable to particles, but permissive of transfers of energy as work or heat. These processes are the ones by which thermodynamic work and heat are defined, and for them, the system is said to be closed.
Thermodynamic potentials
Any of the thermodynamic potentials may be held constant during a process. For example:
An isenthalpic process introduces no change in enthalpy in the system.
Polytropic processes
A polytropic process is a thermodynamic process that obeys the relation:
$P V^{\,n} = C$
where P is the pressure, V is volume, n is any real number (the "polytropic index"), and C is a constant. This equation can be used to accurately characterize processes of certain systems, notably the compression or expansion of a gas, but in some cases, liquids and solids.
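A small helper makes the boundary work of a quasi-static polytropic process explicit. This is a hedged sketch (illustrative names); the n = 1 branch is the isothermal-like limiting case:

```python
import numpy as np

def polytropic_work(p1, v1, v2, n):
    """Quasi-static boundary work of P * V**n = C between volumes v1 and v2."""
    C = p1 * v1**n
    if np.isclose(n, 1.0):                     # P*V = C: W = C * ln(V2/V1)
        return C * np.log(v2 / v1)
    return C * (v2**(1 - n) - v1**(1 - n)) / (1 - n)

# Gas compressed from 1.0 m^3 to 0.5 m^3, starting at 100 kPa, with n = 1.4:
print(polytropic_work(100e3, 1.0, 0.5, 1.4))   # ~ -79.9 kJ (work done by the gas is negative)
```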
Processes classified by the second law of thermodynamics
According to Planck, one may think of three main classes of thermodynamic process: natural, fictively reversible, and impossible or unnatural.
Natural process
Only natural processes occur in nature. For thermodynamics, a natural process is a transfer between systems that increases the sum of their entropies, and is irreversible. Natural processes may occur spontaneously upon the removal of a constraint, or upon some other thermodynamic operation, or may be triggered in a metastable or unstable system, as for example in the condensation of a supersaturated vapour. Planck emphasised the occurrence of friction as an important characteristic of natural thermodynamic processes that involve transfer of matter or energy between system and surroundings.
Effectively reversible process
To describe the geometry of graphical surfaces that illustrate equilibrium relations between thermodynamic functions of state, one can fictively think of so-called "reversible processes". They are convenient theoretical objects that trace paths across graphical surfaces. They are called "processes" but do not describe naturally occurring processes, which are always irreversible. Because the points on the paths are points of thermodynamic equilibrium, it is customary to think of the "processes" described by the paths as fictively "reversible". Reversible processes are always quasistatic processes, but the converse is not always true.
Unnatural process
Unnatural processes are logically conceivable but do not occur in nature. They would decrease the sum of the entropies if they occurred.
Quasistatic process
A quasistatic process is an idealized or fictive model of a thermodynamic "process" considered in theoretical studies. It does not occur in physical reality. It may be imagined as happening infinitely slowly so that the system passes through a continuum of states that are infinitesimally close to equilibrium.
See also
Flow process
Heat
Phase transition
Work (thermodynamics)
References
Further reading
Physics for Scientists and Engineers - with Modern Physics (6th Edition), P. A. Tipler, G. Mosca, Freeman, 2008,
Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, (Verlagsgesellschaft), (VHC Inc.)
McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994,
Physics with Modern Applications, L.H. Greenberg, Holt-Saunders International W.B. Saunders and Co, 1978,
Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, 2nd Edition, 1978, John Murray,
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009,
Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971,
Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974,
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008,
Equilibrium chemistry
Thermodynamic cycles
Thermodynamic systems
Thermodynamics | Thermodynamic process | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,840 | [
"Thermodynamic systems",
"Thermodynamic processes",
"Physical systems",
"Equilibrium chemistry",
"Thermodynamics",
"Dynamical systems"
] |
3,281,317 | https://en.wikipedia.org/wiki/Timoshenko%20Medal | The Timoshenko Medal is an award given annually by the American Society of Mechanical Engineers (ASME) to an individual
"in recognition of distinguished contributions to the field of applied mechanics."
The Timoshenko Medal, widely regarded as the highest international award in the field of applied mechanics, was established in 1957 in honor of Stephen Timoshenko, an authority in the field. The Medal "commemorates his contributions as author and teacher."
The actual award is a bronze medal and honorarium.
The first award was given in 1957 to Stephen Prokofyevich Timoshenko.
Nomination procedure
The Timoshenko Medal Committee consists of the five recent Timoshenko Medalists, the five members of the executive committee of the ASME International Applied Mechanics Division (AMD), and the five recent past chairs of the AMD. See the list of current members of the Committee.
Upon receiving recommendations from the international community of applied mechanics, the Committee nominates a single medalist every year. This nomination is subsequently approved by the ASME; no case has been reported that the ASME has ever overruled a nomination of the Timoshenko Medal Committee.
Acceptance speech
Every year, at the Applied Mechanics Dinner at the ASME winter annual meeting, the Timoshenko Medalist of the year delivers a lecture. Taken as a whole, these lectures provide a long perspective of the field of applied mechanics, as well as capsules of the lives of extraordinary individuals. A project has been initiated to post all Timoshenko Medal Lectures online.
Timoshenko Medal recipients
2023 – Guruswami Ravichandran, California Institute of Technology, USA.
2022 – Michael A. Sutton, University of South Carolina, USA.
2021 – Huajian Gao, Nanyang Technological University, Singapore.
2020 – Mary Cunningham Boyce, Columbia University, USA.
2019 – J. N. Reddy, Texas A&M University, USA.
2018 – Ares J. Rosakis, California Institute of Technology, USA.
2017 – Viggo Tvergaard, Technical University of Denmark
2016 – Raymond Ogden, University of Glasgow, Scotland, UK.
2015 – Michael Ortiz, California Institute of Technology, USA.
2014 – Robert McMeeking, UC Santa Barbara, USA.
2013 – Richard M. Christensen, Stanford University, USA.
2012 – Subra Suresh, National Science Foundation (NSF)
2011 – Alan Needleman, The University of North Texas (United States)
2010 – Wolfgang Knauss, Caltech (United States)
2009 – Zdenek P. Bazant, Northwestern University (United States)
2008 – Sia Nemat-Nasser, Department of Mechanical and Aerospace Engineering, University of California, San Diego (United States)
2007 – Thomas J. R. Hughes, Institute for Computational Engineering and Sciences, The University of Texas at Austin (United States)
2006 – Kenneth L. Johnson, The University of Cambridge (United Kingdom)
2005 – Grigory Isaakovich Barenblatt, Department of Mathematics, University of California, Berkeley (United States)
2004 – Morton E. Gurtin, Department of Mathematical Sciences, Carnegie Mellon University (United States)
2003 – L. Ben Freund Brown University (United States)
2002 – John W. Hutchinson, Harvard University (United States)
2001 – Ted Belytschko, Northwestern University
2000 – Rodney J. Clifton
1999 – Anatol Roshko, California Institute of Technology, USA.
1998 – Olgierd C. Zienkiewicz, Imperial College London, Institute for Numerical Methods in Engineering at the University of Wales (United Kingdom)
1997 – John R. Willis
1996 – J. Tinsley Oden, Institute for Computational Engineering and Sciences, The University of Texas at Austin (United States)
1995 – Daniel D. Joseph, University of Minnesota (United States)
1994 – James R. Rice, Harvard University (United States)
1993 – John L. Lumley, Cornell University (United States)
1992 – Jan D. Achenbach, Northwestern University (United States)
1991 – Yuan-Cheng Fung, Department of Bioengineering, University of California, San Diego (United States)
1990 – Stephen H. Crandall, Massachusetts Institute of Technology (United States)
1989 – Bernard Budiansky, Harvard University (United States)
1988 – George K. Batchelor
1987 – Ronald S. Rivlin
1986 – George Rankine Irwin
1985 – Eli Sternberg
1984 – Joseph B. Keller
1983 – Daniel C. Drucker
1982 – John W. Miles
1981 – John H. Argyris, Imperial College London (UK), University of Stuttgart (Germany)
1980 – Paul M. Naghdi
1979 – Jerald L. Ericksen
1978 – George F. Carrier, Harvard University (United States)
1977 – John D. Eshelby
1976 – Erastus H. Lee
1975 – Chia-Chiao Lin
1974 – Albert E. Green
1973 – Eric Reissner
1972 – Jacob P. Den Hartog
1971 – Howard W. Emmons, Harvard University (United States)
1970 – James J. Stoker
1969 – Jakob Ackeret
1968 – Warner T. Koiter
1967 – Hillel Poritsky
1966 – William Prager
1965 – Sydney Goldstein
1964 – Raymond D. Mindlin, Columbia University (United States)
1963 – Michael James Lighthill
1962 – Maurice A. Biot
1961 – James N. Goodier
1960 – Cornelius B. Biezeno
– Richard Grammel
1959 – Sir Richard Southwell, University of Cambridge, Imperial College London (UK)
1958 – Arpad L. Nadai
– Sir Geoffrey Taylor
– Theodore von Karman, California Institute of Technology, USA.
1957 – Stephen P. Timoshenko
See also
List of engineering awards
List of mechanical engineering awards
List of awards named after people
American Society of Mechanical Engineers
Applied mechanics
Applied Mechanics Division
Mechanician
Footnotes
Timoshenko Lectures: A project has started to make the Timoshenko Medalist Lectures available on-line
External links
Information for nomination
Honors & Awards - Timoshenko Medal, ASME official page, where forms for nomination can be obtained.
Homepage of the ASME International Applied Mechanics Division
Mechanical engineering awards
Awards established in 1957 | Timoshenko Medal | [
"Engineering"
] | 1,255 | [
"Mechanical engineering awards",
"Mechanical engineering"
] |
3,282,143 | https://en.wikipedia.org/wiki/Robust%20control | In control theory, robust control is an approach to controller design that explicitly deals with uncertainty. Robust control methods are designed to function properly provided that uncertain parameters or disturbances are found within some (typically compact) set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modelling errors.
The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness, prompting research to improve them. This was the start of the theory of robust control, which took shape in the 1980s and 1990s and is still active today.
In contrast with an adaptive control policy, a robust control policy is static: rather than adapting to measurements of variations, the controller is designed to work assuming that certain variables will be unknown but bounded.
Criteria for robustness
Informally, a controller designed for a particular set of parameters is said to be robust if it also works well under a different set of assumptions. High-gain feedback is a simple example of a robust control method; with sufficiently high gain, the effect of any parameter variations will be negligible. From the closed-loop transfer function perspective, high open-loop gain leads to substantial disturbance rejection in the face of system parameter uncertainty. Other examples of robust control include sliding mode and terminal sliding mode control.
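The effect of high gain on disturbance rejection can be seen from the sensitivity function S = 1/(1 + L) of a unity-feedback loop. The sketch below uses an illustrative first-order plant (not from any source) and plain numpy to show |S| shrinking as the gain grows:

```python
import numpy as np

w = np.logspace(-2, 2, 5)           # evaluation frequencies, rad/s
P = 1.0 / (1j * w + 1.0)            # plant P(s) = 1/(s + 1) evaluated on the imaginary axis

for k in (1.0, 10.0, 100.0):        # loop gain
    S = 1.0 / (1.0 + k * P)         # sensitivity from output disturbance to output
    print(k, np.round(np.abs(S), 3))
# Higher k gives smaller |S| at low frequency, i.e. stronger disturbance rejection.
```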
The major obstacle to achieving high loop gains is the need to maintain system closed-loop stability. Loop shaping which allows stable closed-loop operation can be a technical challenge.
Robust control systems often incorporate advanced topologies which include multiple feedback loops and feed-forward paths. The control laws may be represented by high order transfer functions required to simultaneously accomplish desired disturbance rejection performance with the robust closed-loop operation.
High-gain feedback is the principle that allows simplified models of operational amplifiers and emitter-degenerated bipolar transistors to be used in a variety of different settings. This idea was already well understood by Bode and Black in 1927.
The modern theory of robust control
The theory of robust control system began in the late 1970s and early 1980s and soon developed a number of techniques for dealing with bounded system uncertainty.
Probably the most important example of a robust control technique is H-infinity loop-shaping, which was developed by Duncan McFarlane and Keith Glover of Cambridge University; this method minimizes the sensitivity of a system over its frequency spectrum, and this guarantees that the system will not greatly deviate from expected trajectories when disturbances enter the system.
An emerging area of robust control from application point of view is sliding mode control (SMC), which is a variation of variable structure control (VSC). The robustness properties of SMC with respect to matched uncertainty as well as the simplicity in design attracted a variety of applications.
While robust control has traditionally been dealt with using deterministic approaches, in the last two decades this approach has been criticized on the grounds that it is too rigid to describe real uncertainty, and that it often leads to overly conservative solutions. Probabilistic robust control has been introduced as an alternative; it interprets robust control within the so-called scenario optimization theory.
Another example is loop transfer recovery (LQG/LTR), which was developed to overcome the robustness problems of linear-quadratic-Gaussian control (LQG) control.
Other robust techniques includes quantitative feedback theory (QFT), passivity based control, Lyapunov based control, etc.
When system behavior varies considerably in normal operation, multiple control laws may have to be devised. Each distinct control law addresses a specific system behavior mode. An example is a computer hard disk drive. Separate robust control system modes are designed in order to address the rapid magnetic head traversal operation, known as the seek, a transitional settle operation as the magnetic head approaches its destination, and a track following mode during which the disk drive performs its data access operation.
One of the challenges is to design a control system that addresses these diverse system operating modes and enables smooth transition from one mode to the next as quickly as possible.
Such state machine-driven composite control system is an extension of the gain scheduling idea where the entire control strategy changes based upon changes in system behavior.
See also
Control theory
Control engineering
Fractional-order control
H-infinity control
H-infinity loop-shaping
Sliding mode control
Intelligent control
Process control
Robust decision making
Root locus
Servomechanism
Stable polynomial
State space (controls)
System identification
Stability radius
Iso-damping
Active disturbance rejection control
Quantitative feedback theory
References
Further reading
Control theory
Stochastic control | Robust control | [
"Mathematics"
] | 924 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
3,283,910 | https://en.wikipedia.org/wiki/Cyclostationary%20process | A cyclostationary process is a signal having statistical properties that vary cyclically with time.
A cyclostationary process can be viewed as multiple interleaved stationary processes. For example, the maximum daily temperature in New York City can be modeled as a cyclostationary process: the maximum temperature on July 21 is statistically different from the temperature on December 20; however, it is a reasonable approximation that the temperature on December 20 of different years has identical statistics. Thus, we can view the random process composed of daily maximum temperatures as 365 interleaved stationary processes, each of which takes on a new value once per year.
Definition
There are two differing approaches to the treatment of cyclostationary processes.
The stochastic approach is to view measurements as an instance of an abstract stochastic process model. As an alternative, the more empirical approach is to view the measurements as a single time series of data—that which has actually been measured in practice and, for some parts of theory, conceptually extended from an observed finite time interval to an infinite interval. Both mathematical models lead to probabilistic theories: abstract stochastic probability for the stochastic process model and the more empirical Fraction Of Time (FOT) probability for the alternative model. The FOT probability of some event associated with the time series is defined to be the fraction of time that event occurs over the lifetime of the time series. In both approaches, the process or time series is said to be cyclostationary if and only if its associated probability distributions vary periodically with time. However, in the non-stochastic time-series approach, there is an alternative but equivalent definition: A time series that contains no finite-strength additive sine-wave components is said to exhibit cyclostationarity if and only if there exists some nonlinear time-invariant transformation of the time series that produces finite-strength (non-zero) additive sine-wave components.
Wide-sense cyclostationarity
An important special case of cyclostationary signals is one that exhibits cyclostationarity in second-order statistics (e.g., the autocorrelation function). These are called wide-sense cyclostationary signals, and are analogous to wide-sense stationary processes. The exact definition differs depending on whether the signal is treated as a stochastic process or as a deterministic time series.
Cyclostationary stochastic process
A stochastic process $x(t)$ of mean $E[x(t)]$ and autocorrelation function:
$R_x(t,\tau) = E\{x(t+\tau)\,x^*(t)\},$
where the star denotes complex conjugation, is said to be wide-sense cyclostationary with period $T_0$ if both $E[x(t)]$ and $R_x(t,\tau)$ are cyclic in $t$ with period $T_0$, i.e.:
$E[x(t)] = E[x(t+T_0)]$ and $R_x(t,\tau) = R_x(t+T_0,\tau)$ for all $t, \tau$.
The autocorrelation function is thus periodic in t and can be expanded in Fourier series:
$R_x(t,\tau) = \sum_{n=-\infty}^{\infty} R_x^{n/T_0}(\tau)\, e^{j2\pi n t/T_0}$
where $R_x^{n/T_0}(\tau)$ is called cyclic autocorrelation function and equal to:
$R_x^{n/T_0}(\tau) = \dfrac{1}{T_0}\int_{-T_0/2}^{T_0/2} R_x(t,\tau)\, e^{-j2\pi n t/T_0}\, dt$
The frequencies $n/T_0$, $n \in \mathbb{Z}$, are called cycle frequencies.
Wide-sense stationary processes are a special case of cyclostationary processes with only $R_x^{0}(\tau) \neq 0$.
Spectral Correlation Function
A function that offers insight into the cyclic relationships between the spectral components of a signal. This function, denoted as $S_x^{\alpha}(f)$, helps analyze cyclostationary signals, which exhibit periodic statistical properties. The Spectral Correlation Function highlights correlations between frequencies separated by a cyclic frequency α, allowing identification of modulated or structured signal behaviors in the frequency domain.
The function is mathematically defined as the Fourier transform of the cyclic autocorrelation function:
$S_x^{\alpha}(f) = \int_{-\infty}^{\infty} R_x^{\alpha}(\tau)\, e^{-j2\pi f\tau}\, d\tau$.
Cyclostationary time series
A signal that is just a function of time and not a sample path of a stochastic process can exhibit cyclostationarity properties in the framework of the fraction-of-time point of view. This way, the cyclic autocorrelation function can be defined by:
$\hat{R}_x^{\alpha}(\tau) = \lim_{T\to\infty} \dfrac{1}{T} \int_{-T/2}^{T/2} x(t+\tau)\, x^*(t)\, e^{-j2\pi\alpha t}\, dt$
If the time-series is a sample path of a stochastic process, $\hat{R}_x^{\alpha}(\tau)$ is itself a random variable. If the signal is further cycloergodic, all sample paths exhibit the same cyclic time-averages with probability equal to 1 and thus $\hat{R}_x^{\alpha}(\tau) = R_x^{\alpha}(\tau)$ with probability 1.
Frequency domain behavior
The Fourier transform of the cyclic autocorrelation function at cyclic frequency α is called cyclic spectrum or spectral correlation density function and is equal to:
$S_x^{\alpha}(f) = \int_{-\infty}^{\infty} R_x^{\alpha}(\tau)\, e^{-j2\pi f\tau}\, d\tau$
The cyclic spectrum at zero cyclic frequency is also called average power spectral density. For a Gaussian cyclostationary process, its rate distortion function can be expressed in terms of its cyclic spectrum.
The reason $S_x^{\alpha}(f)$ is called the spectral correlation density function is that it equals the limit, as filter bandwidth approaches zero, of the expected value of the product of the output of a one-sided bandpass filter with center frequency $f + \alpha/2$ and the conjugate of the output of another one-sided bandpass filter with center frequency $f - \alpha/2$, with both filter outputs frequency shifted to a common center frequency, such as zero, as originally observed and proved.
For time series, the reason the cyclic spectral density function is called the spectral correlation density function is that it equals the limit, as filter bandwidth approaches zero, of the average over all time of the product of the output of a one-sided bandpass filter with center frequency $f + \alpha/2$ and the conjugate of the output of another one-sided bandpass filter with center frequency $f - \alpha/2$, with both filter outputs frequency shifted to a common center frequency, such as zero, as originally observed and proved.
Example: linearly modulated digital signal
An example of cyclostationary signal is the linearly modulated digital signal:
$x(t) = \sum_{k=-\infty}^{\infty} a_k\, p(t - kT_0)$
where $a_k$ are i.i.d. random variables. The waveform $p(t)$, with Fourier transform $P(f)$, is the supporting pulse of the modulation.
By assuming $E[a_k] = 0$ and $E[|a_k|^2] = \sigma_a^2$, the auto-correlation function is:
$R_x(t,\tau) = E\{x(t+\tau)\,x^*(t)\} = \sigma_a^2 \sum_{k} p(t+\tau-kT_0)\, p^*(t-kT_0)$
The last summation is a periodic summation, hence a signal periodic in t. This way, $x(t)$ is a cyclostationary signal with period $T_0$ and cyclic autocorrelation function:
$R_x^{n/T_0}(\tau) = \dfrac{\sigma_a^2}{T_0} \int_{-\infty}^{\infty} p(t+\tau)\, p^*(t)\, e^{-j2\pi n t/T_0}\, dt = \dfrac{\sigma_a^2}{T_0}\, p(\tau) \otimes \left[p^*(-\tau)\, e^{j2\pi n \tau/T_0}\right]$
with $\otimes$ indicating convolution. The cyclic spectrum is:
$S_x^{n/T_0}(f) = \dfrac{\sigma_a^2}{T_0}\, P(f)\, P^*\!\left(f - \dfrac{n}{T_0}\right)$
Typical raised-cosine pulses adopted in digital communications have thus only $n \in \{-1, 0, 1\}$ non-zero cyclic frequencies.
This same result can be obtained for the non-stochastic time series model of linearly modulated digital signals in which expectation is replaced with infinite time average, but this requires a somewhat modified mathematical method, as originally observed and proved.
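The cyclic autocorrelation of such a signal can be estimated directly from a simulated sample path. The sketch below is illustrative (rectangular pulse, BPSK symbols, illustrative names): it shows a clearly non-zero value at the cycle frequency 1/T0 and a near-zero value away from it.

```python
import numpy as np

rng = np.random.default_rng(0)
T0 = 8                                         # symbol period in samples
a = rng.choice([-1.0, 1.0], size=4096)         # i.i.d. symbols
x = np.repeat(a, T0)                           # rectangular pulse shaping
t = np.arange(x.size)

def cyclic_autocorr(x, t, alpha, tau):
    """Time-average estimate of the cyclic autocorrelation at cycle frequency alpha."""
    return np.mean(x * np.roll(x, -tau) * np.exp(-2j * np.pi * alpha * t))

tau = T0 // 2
print(abs(cyclic_autocorr(x, t, 0.0, tau)))        # ~0.50: ordinary time-averaged autocorrelation
print(abs(cyclic_autocorr(x, t, 1.0 / T0, tau)))   # ~0.33: non-zero at a cycle frequency
print(abs(cyclic_autocorr(x, t, 0.37 / T0, tau)))  # ~0.00: not a cycle frequency
```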
Cyclostationary models
It is possible to generalise the class of autoregressive moving average models to incorporate cyclostationary behaviour. For example, Troutman treated autoregressions in which the autoregression coefficients and residual variance are no longer constant but vary cyclically with time. His work follows a number of other studies of cyclostationary processes within the field of time series analysis.
Polycyclostationarity
In practice, signals exhibiting cyclicity with more than one incommensurate period arise and require a generalization of the theory of cyclostationarity. Such signals are called polycyclostationary if they exhibit a finite number of incommensurate periods and almost cyclostationary if they exhibit a countably infinite number. Such signals arise frequently in radio communications due to multiple transmissions with differing sine-wave carrier frequencies and digital symbol rates. The theory was introduced for stochastic processes and further developed for non-stochastic time series.
Higher Order and Strict Sense Cyclostationarity
The wide sense theory of time series exhibiting cyclostationarity, polycyclostationarity and almost cyclostationarity, originated and developed by Gardner, was also generalized by him to a theory of higher-order temporal and spectral moments and cumulants and a strict sense theory of cumulative probability distributions. The encyclopedic book comprehensively teaches all of this and provides a scholarly treatment of the originating publications by Gardner and contributions thereafter by others.
Applications
Cyclostationarity has extremely diverse applications in essentially all fields of engineering and science, as thoroughly documented in the literature. A few examples are:
Cyclostationarity is used in telecommunications for signal synchronization, transmitter and receiver optimization, and spectrum sensing for cognitive radio;
In signals intelligence, cyclostationarity is used for signal interception;
In econometrics, cyclostationarity is used to analyze the periodic behavior of financial-markets;
Queueing theory utilizes cyclostationary theory to analyze computer networks and car traffic;
Cyclostationarity is used to analyze mechanical signals produced by rotating and reciprocating machines.
Angle-time cyclostationarity of mechanical signals
Mechanical signals produced by rotating or reciprocating machines are remarkably well modelled as cyclostationary processes. The cyclostationary family accepts all signals with hidden periodicities, either of the additive type (presence of tonal components) or multiplicative type (presence of periodic modulations). This happens to be the case for noise and vibration produced by gear mechanisms, bearings, internal combustion engines, turbofans, pumps, propellers, etc.
The explicit modelling of mechanical signals as cyclostationary processes has been found useful in several applications, such as in noise, vibration, and harshness (NVH) and in condition monitoring. In the latter field, cyclostationarity has been found to generalize the envelope spectrum, a popular analysis technique used in the diagnostics of bearing faults.
One peculiarity of rotating machine signals is that the period of the process is strictly linked to the angle of rotation of a specific component – the “cycle” of the machine. At the same time, a temporal description must be preserved to reflect the nature of dynamical phenomena that are governed by differential equations of time. Therefore, the angle-time autocorrelation function is used,
$R_x(\theta,\tau) = E\{x(t(\theta)+\tau)\, x^*(t(\theta))\}$
where $\theta$ stands for angle, $t(\theta)$ for the time instant corresponding to angle $\theta$ and $\tau$ for time delay. Processes whose angle-time autocorrelation function exhibit a component periodic in angle, i.e. such that $R_x(\theta,\tau)$ has a non-zero Fourier-Bohr coefficient for some angular period $\Theta$, are called (wide-sense) angle-time cyclostationary.
The double Fourier transform of the angle-time autocorrelation function (a Fourier series in the angle variable followed by a Fourier transform in the time-delay variable) defines the order-frequency spectral correlation, $S_x^{\gamma}(f)$,
where $\gamma$ is an order (unit in events per revolution) and $f$ a frequency (unit in Hz).
For constant speed of rotation $\omega$, angle is proportional to time, $\theta = \omega t$. Consequently, the angle-time autocorrelation is simply a cyclicity-scaled traditional autocorrelation; that is, the cycle frequencies are scaled by $\omega$. On the other hand, if the speed of rotation changes with time, then the signal is no longer cyclostationary (unless the speed varies periodically). Therefore, it is not a model for cyclostationary signals. It is not even a model for time-warped cyclostationarity, although it can be a useful approximation for sufficiently slow changes in speed of rotation.
References
External links
Noise in mixers, oscillators, samplers, and logic: an introduction to cyclostationary noise (manuscript, annotated presentation, presentation)
Statistical signal processing | Cyclostationary process | [
"Engineering"
] | 2,246 | [
"Statistical signal processing",
"Engineering statistics"
] |
2,393,975 | https://en.wikipedia.org/wiki/Yang%E2%80%93Mills%20existence%20and%20mass%20gap | The Yang–Mills existence and mass gap problem is an unsolved problem in mathematical physics and mathematics, and one of the seven Millennium Prize Problems defined by the Clay Mathematics Institute, which has offered a prize of US$1,000,000 for its solution.
The problem is phrased as follows:
Yang–Mills Existence and Mass Gap. Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on $\mathbb{R}^4$ and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in Streater & Wightman (1964), Osterwalder & Schrader (1973), and Osterwalder & Schrader (1975).
In this statement, a quantum Yang–Mills theory is a non-abelian quantum field theory similar to that underlying the Standard Model of particle physics; $\mathbb{R}^4$ is Euclidean 4-space; the mass gap Δ is the mass of the least massive particle predicted by the theory.
Therefore, the winner must prove that:
Yang–Mills theory exists and satisfies the standard of rigor that characterizes contemporary mathematical physics, in particular constructive quantum field theory, and
The mass of all particles of the force field predicted by the theory are strictly positive.
For example, in the case of G=SU(3)—the strong nuclear interaction—the winner must prove that glueballs have a lower mass bound, and thus cannot be arbitrarily light.
The general problem of determining the presence of a spectral gap in a system is known to be undecidable.
Background
The problem requires the construction of a QFT satisfying the Wightman axioms and showing the existence of a mass gap. Both of these topics are described in sections below.
The Wightman axioms
The Millennium problem requires the proposed Yang–Mills theory to satisfy the Wightman axioms or similarly stringent axioms. There are four axioms:
W0 (assumptions of relativistic quantum mechanics)
Quantum mechanics is described according to von Neumann; in particular, the pure states are given by the rays, i.e. the one-dimensional subspaces, of some separable complex Hilbert space.
The Wightman axioms require that the Poincaré group acts unitarily on the Hilbert space. In other words, they have position dependent operators called quantum fields which form covariant representations of the Poincaré group.
The group of space-time translations is commutative, and so the operators can be simultaneously diagonalised. The generators of these groups give us four self-adjoint operators, $P^0, P^1, P^2, P^3$, which transform under the homogeneous group as a four-vector, called the energy-momentum four-vector.
The second part of the zeroth axiom of Wightman is that the representation U(a, A) fulfills the spectral condition—that the simultaneous spectrum of energy-momentum is contained in the forward cone:
$\bar{V}_+ = \{\,(p^0, \mathbf{p}) : p^0 \geq |\mathbf{p}|\,\}$
The third part of the axiom is that there is a unique state, represented by a ray in the Hilbert space, which is invariant under the action of the Poincaré group. It is called a vacuum.
W1 (assumptions on the domain and continuity of the field)
For each test function f, there exists a set of operators which, together with their adjoints, are defined on a dense subset of the Hilbert state space, containing the vacuum. The fields A are operator-valued tempered distributions. The Hilbert state space is spanned by the field polynomials acting on the vacuum (cyclicity condition).
W2 (transformation law of the field)
The fields are covariant under the action of Poincaré group, and they transform according to some representation S of the Lorentz group, or SL(2,C) if the spin is not integer:
$U(a, \Lambda)\, A(x)\, U(a, \Lambda)^{-1} = S(\Lambda^{-1})\, A(\Lambda x + a)$
W3 (local commutativity or microscopic causality)
If the supports of two fields are space-like separated, then the fields either commute or anticommute.
Cyclicity of a vacuum, and uniqueness of a vacuum are sometimes considered separately. Also, there is the property of asymptotic completeness—that the Hilbert state space is spanned by the asymptotic spaces $H^{\text{in}}$ and $H^{\text{out}}$, appearing in the collision S matrix. The other important property of field theory is the mass gap which is not required by the axioms—that the energy-momentum spectrum has a gap between zero and some positive number.
Mass gap
In quantum field theory, the mass gap is the difference in energy between the vacuum and the next lowest energy state. The energy of the vacuum is zero by definition, and assuming that all energy states can be thought of as particles in plane-waves, the mass gap is the mass of the lightest particle.
For a given real field $\phi(x)$, we can say that the theory has a mass gap if the two-point function has the property
$\langle \phi(0,t)\, \phi(0,0) \rangle \sim \sum_{n} A_n\, e^{-\Delta_n t}$
with $\Delta_0 > 0$ being the lowest energy value in the spectrum of the Hamiltonian and thus the mass gap. This quantity, easy to generalize to other fields, is what is generally measured in lattice computations. It was proved in this way that Yang–Mills theory develops a mass gap on a lattice.
Importance of Yang–Mills theory
Most known and nontrivial (i.e. interacting) quantum field theories in 4 dimensions are effective field theories with a cutoff scale. Since the beta function is positive for most models, it appears that most such models have a Landau pole as it is not at all clear whether or not they have nontrivial UV fixed points. This means that if such a QFT is well-defined at all scales, as it has to be to satisfy the axioms of axiomatic quantum field theory, it would have to be trivial (i.e. a free field theory).
Quantum Yang–Mills theory with a non-abelian gauge group and no quarks is an exception, because asymptotic freedom characterizes this theory, meaning that it has a trivial UV fixed point. Hence it is the simplest nontrivial constructive QFT in 4 dimensions. (QCD is a more complicated theory because it involves quarks.)
Quark confinement
At the level of rigor of theoretical physics, it has been well established that the quantum Yang–Mills theory for a non-abelian Lie group exhibits a property known as confinement; though proper mathematical physics has more demanding requirements on a proof. A consequence of this property is that above the confinement scale, the color charges are connected by chromodynamic flux tubes leading to a linear potential between the charges. Hence isolated color charge and isolated gluons cannot exist. In the absence of confinement, we would expect to see massless gluons, but since they are confined, all we would see are color-neutral bound states of gluons, called glueballs. If glueballs exist, they are massive, which is why a mass gap is expected.
References
Further reading
External links
The Millennium Prize Problems: Yang–Mills and Mass Gap
Millennium Prize Problems
Gauge theories
Quantum chromodynamics
Unsolved problems in mathematics
Unsolved problems in physics | Yang–Mills existence and mass gap | [
"Physics",
"Mathematics"
] | 1,410 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Unsolved problems in physics",
"Millennium Prize Problems"
] |
2,394,829 | https://en.wikipedia.org/wiki/Hydrophily | Hydrophily is a fairly uncommon form of pollination whereby pollen is distributed by the flow of waters, particularly in rivers and streams. Hydrophilous species fall into two categories:
(i) Those that distribute their pollen to the surface of water, e.g. Vallisneria: its male flowers or pollen grains are released on the surface of the water and are passively carried away by water currents; some of them eventually reach the female flower.
(ii) Those that distribute it beneath the surface, e.g. seagrasses, in which the female flowers remain submerged in water and the pollen grains are released into the water.
Surface pollination
Surface pollination is more frequent, and appears to be a transitional phase between wind pollination and true hydrophily. In these the pollen floats on the surface and reaches the stigmas of the female flowers as in Hydrilla, Callitriche, Ruppia, Zostera, Elodea. In Vallisneria the male flowers become detached and float on the surface of the water; the anthers are thus brought in contact with the stigmas of the female flowers. Surface hydrophily has been observed in several species of Potamogeton as well as some marine species.
Submerged pollination
Species exhibiting true submerged hydrophily include Najas, whose pollen grains are heavier than water and, sinking down, are caught by the stigmas of the extremely simple female flowers; Posidonia australis, Zostera marina and Hydrilla.
Evolution
Hydrophily is unique to obligate submersed aquatic angiosperms with sexually reproductive parts completely submerged below the water surface. Hydrophily is the adaptive evolution of completely submersed angiosperms to aquatic habitats. True hydrophily occurs in 18 submersed angiosperm genera, which is associated with an unusually high incidence of unisexual flowers.
References
Sources
Plant morphology
Pollination | Hydrophily | [
"Biology"
] | 416 | [
"Plant morphology",
"Plants"
] |
2,396,039 | https://en.wikipedia.org/wiki/Blood%E2%80%93testis%20barrier | The blood–testis barrier is a physical barrier between the blood vessels and the seminiferous tubules of the animal testes. The name "blood-testis barrier" is misleading as it is not a blood-organ barrier in a strict sense, but is formed between Sertoli cells of the seminiferous tubule and isolates the further developed stages of germ cells from the blood. A more correct term is the Sertoli cell barrier (SCB).
Structure
The walls of seminiferous tubules are lined with primitive germ layer cells and by Sertoli cells. The barrier is formed by tight junctions, adherens junctions and gap junctions between the Sertoli cells, which are sustentacular cells (supporting cells) of the seminiferous tubules, and divides the seminiferous tubule into a basal compartment (outer side of the tubule, in contact with blood and lymph) and an endoluminal compartment (inner side of the tubule, isolated from blood and lymph). The tight junctions are formed by intercellular adhesion molecules in between cells that are anchored to actin fibers within the cells. For the visualization of the actin fibers within the seminiferous tubules see Sharma et al.'s immunofluorescence studies.
Function
The presence of the SCB allows Sertoli cells to control the adluminal environment in which germ cells (spermatocytes, spermatids and sperm) develop by influencing the chemical composition of the luminal fluid.
The barrier also prevents passage of cytotoxic agents (bodies or substances that are toxic to cells) into the seminiferous tubules.
The fluid in the lumen of seminiferous tubules is quite different from plasma; it contains very little protein and glucose but is rich in androgens, estrogens, potassium, inositol and glutamic and aspartic acid. This composition is maintained by the blood–testis barrier.
The barrier also protects the germ cells from blood-borne noxious agents, prevents antigenic products of germ cell maturation from entering the circulation and generating an autoimmune response, and may help establish an osmotic gradient that facilitates movement of fluid into the tubular lumen.
Note
Steroids penetrate the barrier.
Some proteins pass from Sertoli cells to Leydig cells to function in a paracrine fashion.
Clinical significance
Auto-immune response
The blood–testes barrier can be damaged by trauma to the testes (including torsion or impact), by surgery or as a result of vasectomy. When the blood–testes barrier is breached, and sperm enters the bloodstream, the immune system mounts an autoimmune response against the sperm, since the immune system has not been tolerized against the unique sperm antigens that are only expressed by these cells. The anti-sperm antibodies generated by the immune system can bind to various antigenic sites on the surface of the developing sperm within the testes. If they bind to the head, the sperm may be less able to fertilize an egg, and, if they bind to the tail, the motility of the sperm can be reduced.
See also
References
External links
Overview at okstate.edu
Animal reproductive system
Animal physiology
Testicle | Blood–testis barrier | [
"Biology"
] | 695 | [
"Animals",
"Animal physiology"
] |
2,396,555 | https://en.wikipedia.org/wiki/List%20of%20experimental%20errors%20and%20frauds%20in%20physics | Experimental science demands repeatability of results, but many experiments are not repeatable due to fraud or error. The list of papers whose results were later retracted or discredited, thus leading to invalid science, is growing. Some errors are introduced when the experimenter's desire for a certain result unconsciously influences selection of data (a problem which is possible to avoid in some cases with double-blind protocols). There have also been cases of deliberate scientific misconduct.
Famous experimental errors
N-rays (1903)
A reported faint visual effect that experimenters could still "see" even when the supposed causative element in their apparatus had been secretly disconnected.
Claimed experimental disproof of special relativity (1906)
Published in Annalen der Physik and said to be the first journal paper to cite Einstein's 1905 electrodynamics paper. Walter Kaufmann stated that his results were not compatible with special relativity. According to Gerald Holton, it took a decade for the shortcomings of Kaufmann's test to be realised: during this time, critics of special relativity were able to claim that the theory was invalidated by the available experimental evidence.
Premature verification of the gravitational redshift effect (1924)
A number of earlier experimenters claimed to have found the presence or lack of gravitational redshift, but Walter Sydney Adams's result was supposed to have settled the issue. Unfortunately, the measurement and the prediction were both in error in such a way that the result initially appeared to be valid. It is no longer considered credible, and there has been much debate about whether the results were fraudulent or whether his data may have been contaminated by stray light from Sirius A. The first "reliable" confirmations of the effect appeared in the 1960s.
First reproducible synthetic diamond (1955)
Originally reported in Nature in 1955 and later. Diamond synthesis was later determined to be impossible with the apparatus. Subsequent analysis indicated that the first gemstone (used to secure further funding) was natural rather than synthetic. Artificial diamonds have since been produced.
Claimed detection of gravitational waves (1970)
In 1970, Joseph Weber, an electrical engineer turned physicist working with the University of Maryland, reported the detection of 311 excitations on his test equipment designed to measure gravitational waves. He utilized an apparatus consisting of two one-ton aluminum bars, each a separate detector, in some configurations hung within a vacuum chamber; in one arrangement, one bar was displaced to Argonne National Laboratory near Chicago, about 1,000 kilometers away, for further isolation. He took extreme measures to isolate the equipment from seismic and other interferences, but Weber's criteria for data analysis turned out to be ill-defined and partly subjective. In 1974, the first indirect detection of gravitational waves was confirmed from observations of a binary pulsar, but by the end of the 1970s, Weber's work was considered spurious as it could not be replicated by others. Still, Weber is considered one of the fathers of gravitational wave detection and an inspiration for other projects such as LIGO, which made the first direct observation of gravitational waves in 2015.
Oops-Leon particle (1976)
Data from Fermilab in 1976 appeared to indicate a new particle at about 6 GeV which decayed into electron-positron pairs. Subsequent data and analysis indicated that the apparent peak resulted from random noise. The name is a pun on upsilon, the proposed name for the new particle and Leon M. Lederman, the principal investigator. The illusory particle is unrelated to the Upsilon meson, discovered in 1977 by the same group.
Cold fusion (1989)
Since the announcement of Pons and Fleischmann in 1989, cold fusion has been considered to be an example of a pathological science. Two panels convened by the US Department of Energy, one in 1989 and a second in 2004, did not recommend a dedicated federal program for cold fusion research. In 2007, Nature reported that the American Chemical Society would host an invited symposium on cold fusion and low energy nuclear reactions at their national meeting for the first time in many years.
Neutrinoless double beta decay (2001)
Members of the Heidelberg–Moscow collaboration claimed to have discovered neutrinoless double beta decay in germanium-76 in 2001.
Faster-than-light neutrino anomaly (2011)
In 2011, the OPERA experiment at CERN mistakenly measured neutrinos appearing to travel faster than the speed of light. The results were published in September, noting that further investigation into systematics would be necessary. This investigation found an improperly connected fibre optic cable and a clock oscillator ticking too fast, which together had caused an underestimate of uncertainty in the initial measurement.
Cosmic microwave background polarization (2014)
On March 17, 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which if confirmed, would provide clear experimental evidence for the cosmological theory of inflation. However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported. Eventually, the initial findings were revealed to be artifacts of interstellar dust.
Room-temperature superconductivity in LK-99 (2023)
In July 2023, a team at Korea University led by Lee Sukbae and Kim Ji-Hoon announced the discovery of LK-99, a supposed room-temperature superconductor based on lead apatite doped with copper. As evidence, they published conductivity measurements and a video showing partial levitation that the researchers claimed displayed the Meissner effect. Other research groups were not able to replicate the results and suggested that impurities in the material led to spurious effects mimicking phenomena associated with superconductivity. Copper(I) sulfide, a compound produced in the synthesis process, turned out to be a close match for the claimed properties of LK-99, and pure samples of LK-99 were insulators rather than any form of conductor.
Alleged scientific misconduct cases
Photon wave–particle duality using canal-ray experiments (1926)
Emil Rupp had been considered one of the best experimenters of his time until he was forced to admit that his notable track record was at least partly due to the fabrication of results.
Water memory (1988)
French immunologist Jacques Benveniste published a paper in Nature which seemed to support a mechanism by which homeopathy could operate. The journal editors accompanied the paper with an editorial urging readers to "suspend judgement" until the results could be replicated. Benveniste's results failed to be replicated in subsequent double-blind experiments.
Organic molecular semiconductors (~1999)
A succession of high-profile peer-reviewed papers previously published by Jan Hendrik Schön were subsequently found to have used obviously fabricated data.
Early production of element 118 (1999)
Element 118 (oganesson) was announced, and then the announcement withdrawn by Berkeley after claims of irreproducibility. The researcher involved, Victor Ninov, denies doing anything wrong.
Sonofusion (2002)
In 2002, nuclear engineer Rusi Taleyarkhan and his collaborators claimed to have observed evidence of sonofusion or bubble fusion. An investigation in 2008 by Purdue University review board judged him guilty of research misconduct for "falsification of the research record".
Room-temperature superconductivity (2020-2023)
In various papers Ranga P. Dias and collaborators claimed to have discovered the first room-temperature superconductors using materials like carbonaceous sulfur hydride. Dias was found guilty of scientific misconduct, including data fabrication. The papers were retracted.
See also
Academic dishonesty
List of scientific misconduct incidents
List of topics characterized as pseudoscience
Bogdanov affair
References
Error
Physics experiments | List of experimental errors and frauds in physics | [
"Physics"
] | 1,585 | [
"Experimental physics",
"Physics experiments"
] |
2,397,362 | https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker%20conditions | In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.
Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a global maximum or minimum over the domain of the choice variables and a global minimum (maximum) over the multipliers. The Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.
The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951. Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.
Nonlinear optimization problem
Consider the following nonlinear optimization problem in standard form:

minimize $f(\mathbf{x})$

subject to $g_i(\mathbf{x}) \le 0,\ i = 1,\dots,m$, and $h_j(\mathbf{x}) = 0,\ j = 1,\dots,\ell$,

where $\mathbf{x} \in \mathbf{X}$ is the optimization variable chosen from a convex subset of $\mathbb{R}^n$, $f$ is the objective or utility function, $g_i$ $(i = 1,\dots,m)$ are the inequality constraint functions and $h_j$ $(j = 1,\dots,\ell)$ are the equality constraint functions. The numbers of inequalities and equalities are denoted by $m$ and $\ell$ respectively. Corresponding to the constrained optimization problem one can form the Lagrangian function

$$L(\mathbf{x},\boldsymbol{\mu},\boldsymbol{\lambda}) = f(\mathbf{x}) + \boldsymbol{\mu}^\top \mathbf{g}(\mathbf{x}) + \boldsymbol{\lambda}^\top \mathbf{h}(\mathbf{x}),$$

where $\mathbf{g}(\mathbf{x}) = (g_1(\mathbf{x}),\dots,g_m(\mathbf{x}))^\top$, $\mathbf{h}(\mathbf{x}) = (h_1(\mathbf{x}),\dots,h_\ell(\mathbf{x}))^\top$, and $\boldsymbol{\mu} \in \mathbb{R}^m$, $\boldsymbol{\lambda} \in \mathbb{R}^\ell$ are the multiplier vectors.
The Karush–Kuhn–Tucker theorem then states the following, in its saddle-point form: if $(\mathbf{x}^*,\boldsymbol{\mu}^*)$ is a saddle point of $L(\mathbf{x},\boldsymbol{\mu})$ over $\mathbf{x} \in \mathbf{X}$, $\boldsymbol{\mu} \ge \mathbf{0}$, then $\mathbf{x}^*$ is an optimal vector for the above optimization problem; conversely, if $f$ and the $g_i$ are convex and Slater's condition holds, then an optimal $\mathbf{x}^*$ has an associated $\boldsymbol{\mu}^* \ge \mathbf{0}$ making $(\mathbf{x}^*,\boldsymbol{\mu}^*)$ a saddle point of $L$.
Since the idea of this approach is to find a supporting hyperplane on the feasible set , the proof of the Karush–Kuhn–Tucker theorem makes use of the hyperplane separation theorem.
The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.
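As a concrete numerical illustration, the following minimal sketch solves a toy problem and then checks the KKT conditions at the reported solution. It is not part of the standard treatment: it assumes Python with NumPy and SciPy, and the quadratic objective and single linear constraint are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = x1^2 + x2^2 subject to g(x) = 1 - x1 - x2 <= 0.
# Analytic KKT solution: x* = (1/2, 1/2) with multiplier mu = 1.
f = lambda x: x[0]**2 + x[1]**2
grad_f = lambda x: np.array([2 * x[0], 2 * x[1]])
g = lambda x: 1 - x[0] - x[1]          # feasible when g(x) <= 0
grad_g = np.array([-1.0, -1.0])

# SciPy expects inequality constraints in the form c(x) >= 0, so pass -g.
res = minimize(f, x0=[2.0, 2.0],
               constraints=[{"type": "ineq", "fun": lambda x: -g(x)}])
x_star = res.x

# Recover the multiplier from stationarity: grad_f + mu * grad_g = 0.
mu = -(grad_f(x_star) @ grad_g) / (grad_g @ grad_g)

print("x* ≈", x_star)                                  # ≈ [0.5, 0.5]
print("dual feasibility, mu ≈", mu)                    # ≈ 1 >= 0
print("stationarity residual:", grad_f(x_star) + mu * grad_g)
print("complementary slackness, mu*g(x*):", mu * g(x_star))  # ≈ 0
```

Here the constraint is active at the optimum, so its multiplier is strictly positive and the complementary-slackness product vanishes to numerical precision.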
Necessary conditions
Suppose that the objective function $f\colon \mathbb{R}^n \to \mathbb{R}$ and the constraint functions $g_i\colon \mathbb{R}^n \to \mathbb{R}$ and $h_j\colon \mathbb{R}^n \to \mathbb{R}$ have subderivatives at a point $\mathbf{x}^* \in \mathbb{R}^n$. If $\mathbf{x}^*$ is a local optimum and the optimization problem satisfies some regularity conditions (see below), then there exist constants $\mu_i$ $(i = 1,\dots,m)$ and $\lambda_j$ $(j = 1,\dots,\ell)$, called KKT multipliers, such that the following four groups of conditions hold:
Stationarity
For minimizing $f(\mathbf{x})$: $\partial f(\mathbf{x}^*) + \sum_{j=1}^{\ell} \lambda_j \,\partial h_j(\mathbf{x}^*) + \sum_{i=1}^{m} \mu_i \,\partial g_i(\mathbf{x}^*) \ni \mathbf{0}$
For maximizing $f(\mathbf{x})$: $-\partial f(\mathbf{x}^*) + \sum_{j=1}^{\ell} \lambda_j \,\partial h_j(\mathbf{x}^*) + \sum_{i=1}^{m} \mu_i \,\partial g_i(\mathbf{x}^*) \ni \mathbf{0}$
Primal feasibility
$g_i(\mathbf{x}^*) \le 0$ for $i = 1,\dots,m$, and $h_j(\mathbf{x}^*) = 0$ for $j = 1,\dots,\ell$
Dual feasibility
$\mu_i \ge 0$ for $i = 1,\dots,m$
Complementary slackness
$\sum_{i=1}^{m} \mu_i g_i(\mathbf{x}^*) = 0$
The last condition is sometimes written in the equivalent form: $\mu_i g_i(\mathbf{x}^*) = 0$ for each $i = 1,\dots,m$.
In the particular case $m = 0$, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers.
Proof
Interpretation: KKT conditions as balancing constraint-forces in state space
The primal problem can be interpreted as moving a particle in the space of $\mathbf{x}$, and subjecting it to three kinds of force fields:
$f$ is a potential field that the particle is minimizing. The force generated by $f$ is $-\partial f$.
$g_i$ are one-sided constraint surfaces. The particle is allowed to move inside $g_i(\mathbf{x}) \le 0$, but whenever it touches $g_i(\mathbf{x}) = 0$, it is pushed inwards.
$h_j$ are two-sided constraint surfaces. The particle is allowed to move only on the surface $h_j(\mathbf{x}) = 0$.
Primal stationarity states that the "force" of $\partial f$ is exactly balanced by a linear sum of the forces $\partial h_j$ and $\partial g_i$.
Dual feasibility additionally states that all the $\partial g_i$ forces must be one-sided, pointing inwards into the feasible set for $\mathbf{x}$.
Complementary slackness states that if $g_i(\mathbf{x}^*) < 0$, then the force coming from $\partial g_i$ must be zero, i.e., $\mu_i = 0$: since the particle is not on the boundary, the one-sided constraint force cannot activate.
Matrix representation
The necessary conditions can be written with Jacobian matrices of the constraint functions. Let $\mathbf{g}\colon \mathbb{R}^n \to \mathbb{R}^m$ be defined as $\mathbf{g}(\mathbf{x}) = (g_1(\mathbf{x}),\dots,g_m(\mathbf{x}))^\top$ and let $\mathbf{h}\colon \mathbb{R}^n \to \mathbb{R}^\ell$ be defined as $\mathbf{h}(\mathbf{x}) = (h_1(\mathbf{x}),\dots,h_\ell(\mathbf{x}))^\top$. Let $\boldsymbol{\mu} = (\mu_1,\dots,\mu_m)^\top$ and $\boldsymbol{\lambda} = (\lambda_1,\dots,\lambda_\ell)^\top$. Then the necessary conditions can be written as:
Stationarity
For maximizing $f(\mathbf{x})$: $\nabla f(\mathbf{x}^*) - D\mathbf{g}(\mathbf{x}^*)^\top \boldsymbol{\mu} - D\mathbf{h}(\mathbf{x}^*)^\top \boldsymbol{\lambda} = \mathbf{0}$
For minimizing $f(\mathbf{x})$: $\nabla f(\mathbf{x}^*) + D\mathbf{g}(\mathbf{x}^*)^\top \boldsymbol{\mu} + D\mathbf{h}(\mathbf{x}^*)^\top \boldsymbol{\lambda} = \mathbf{0}$
Primal feasibility
$\mathbf{g}(\mathbf{x}^*) \le \mathbf{0}$ and $\mathbf{h}(\mathbf{x}^*) = \mathbf{0}$
Dual feasibility
$\boldsymbol{\mu} \ge \mathbf{0}$
Complementary slackness
$\boldsymbol{\mu}^\top \mathbf{g}(\mathbf{x}^*) = 0$
Regularity conditions (or constraint qualifications)
One can ask whether a minimizer point $\mathbf{x}^*$ of the original, constrained optimization problem (assuming one exists) has to satisfy the above KKT conditions. This is similar to asking under what conditions the minimizer $\mathbf{x}^*$ of a function $f(\mathbf{x})$ in an unconstrained problem has to satisfy the condition $\nabla f(\mathbf{x}^*) = \mathbf{0}$. For the constrained case, the situation is more complicated, and one can state a variety of (increasingly complicated) "regularity" conditions under which a constrained minimizer also satisfies the KKT conditions. Some common examples of conditions that guarantee this are the following, with the LICQ the most frequently used one:
LICQ (linear independence constraint qualification): the gradients of the active inequality constraints and the gradients of the equality constraints are linearly independent at $\mathbf{x}^*$.
MFCQ (Mangasarian–Fromovitz constraint qualification): the gradients of the equality constraints are linearly independent at $\mathbf{x}^*$, and there is a direction that preserves the equality constraints to first order while strictly decreasing every active inequality constraint.
CRCQ (constant rank constraint qualification): for each subset of the gradients of the active inequality constraints and of the equality constraints, the rank is constant in a neighbourhood of $\mathbf{x}^*$.
CPLD (constant positive linear dependence): for each such subset of gradients, positive-linear dependence at $\mathbf{x}^*$ implies positive-linear dependence in a neighbourhood of $\mathbf{x}^*$.
QNCQ (quasi-normality constraint qualification): a weaker condition that rules out certain degenerate sequences of multipliers.
Slater's condition: for a convex problem, there exists a strictly feasible point.
The strict implications can be shown
LICQ ⇒ MFCQ ⇒ CPLD ⇒ QNCQ
and
LICQ ⇒ CRCQ ⇒ CPLD ⇒ QNCQ
In practice weaker constraint qualifications are preferred since they apply to a broader selection of problems.
Sufficient conditions
In some cases, the necessary conditions are also sufficient for optimality. In general, the necessary conditions are not sufficient for optimality and additional information is required, such as the Second Order Sufficient Conditions (SOSC). For smooth functions, SOSC involve the second derivatives, which explains their name.
The necessary conditions are sufficient for optimality if the objective function of a maximization problem is a differentiable concave function, the inequality constraints are differentiable convex functions, the equality constraints are affine functions, and Slater's condition holds. Similarly, if the objective function of a minimization problem is a differentiable convex function, the necessary conditions are also sufficient for optimality.
It was shown by Martin in 1985 that the broader class of functions in which the KKT conditions guarantee global optimality is the class of so-called Type 1 invex functions.
Second-order sufficient conditions
For smooth, non-linear optimization problems, a second order sufficient condition is given as follows.
The solution $\mathbf{x}^*$ found in the above section is a constrained local minimum if, for the Lagrangian

$$L(\mathbf{x},\boldsymbol{\mu},\boldsymbol{\lambda}) = f(\mathbf{x}) + \sum_{i=1}^{m} \mu_i g_i(\mathbf{x}) + \sum_{j=1}^{\ell} \lambda_j h_j(\mathbf{x}),$$

we have

$$\mathbf{s}^\top \nabla^2_{\mathbf{x}\mathbf{x}} L(\mathbf{x}^*,\boldsymbol{\mu}^*,\boldsymbol{\lambda}^*)\,\mathbf{s} \ge 0,$$

where $\mathbf{s} \ne \mathbf{0}$ is a vector satisfying

$$\nabla g_i(\mathbf{x}^*)^\top \mathbf{s} = 0 \quad\text{and}\quad \nabla h_j(\mathbf{x}^*)^\top \mathbf{s} = 0,$$

where only those active inequality constraints $g_i$ corresponding to strict complementarity (i.e. where $\mu_i > 0$) are applied. The solution is a strict constrained local minimum in the case the inequality is also strict.
If $\mathbf{s}^\top \nabla^2_{\mathbf{x}\mathbf{x}} L(\mathbf{x}^*,\boldsymbol{\mu}^*,\boldsymbol{\lambda}^*)\,\mathbf{s} = 0$, the third-order Taylor expansion of the Lagrangian should be used to verify if $\mathbf{x}^*$ is a local minimum. The minimization of $f(x_1,x_2) = (x_2 - x_1^2)(x_2 - 3x_1^2)$, for which the origin passes this test yet is not a local minimum, is a good counter-example; see also Peano surface.
Economics
Often in mathematical economics the KKT approach is used in theoretical models in order to obtain qualitative results. For example, consider a firm that maximizes its sales revenue subject to a minimum profit constraint. Letting $Q$ be the quantity of output produced (to be chosen), $R(Q)$ be sales revenue with a positive first derivative and with a zero value at zero output, $C(Q)$ be production costs with a positive first derivative and with a non-negative value at zero output, and $G_{\min}$ be the positive minimal acceptable level of profit, then the problem is a meaningful one if the revenue function levels off so it eventually is less steep than the cost function. The problem expressed in the previously given minimization form is

Minimize $-R(Q)$

subject to $G_{\min} \le R(Q) - C(Q)$ and $Q \ge 0$,

and the KKT conditions are

$$\left(\frac{dR}{dQ}\right)(1+\mu) - \mu\frac{dC}{dQ} \le 0, \qquad Q \ge 0, \qquad Q\left[\left(\frac{dR}{dQ}\right)(1+\mu) - \mu\frac{dC}{dQ}\right] = 0,$$
$$R(Q) - C(Q) - G_{\min} \ge 0, \qquad \mu \ge 0, \qquad \mu\,[R(Q) - C(Q) - G_{\min}] = 0.$$

Since $Q = 0$ would violate the minimum profit constraint, we have $Q > 0$ and hence the third condition implies that the first condition holds with equality. Solving that equality gives

$$\frac{dR}{dQ} = \frac{\mu}{1+\mu}\frac{dC}{dQ}.$$

Because it was given that $dR/dQ$ and $dC/dQ$ are strictly positive, this equality along with the non-negativity condition on $\mu$ guarantees that $\mu$ is positive and so the revenue-maximizing firm operates at a level of output at which marginal revenue $dR/dQ$ is less than marginal cost $dC/dQ$ — a result that is of interest because it contrasts with the behavior of a profit-maximizing firm, which operates at a level at which they are equal.
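To make the qualitative conclusion concrete, here is a small symbolic check. This is an illustrative sketch, not part of the article: the functional forms $R(Q) = 10Q - Q^2$ and $C(Q) = 2Q + 1$ and the profit floor $29/2$ are invented for the example, and it assumes Python with SymPy.

```python
import sympy as sp

Q, mu = sp.symbols("Q mu", positive=True)
R = 10*Q - Q**2              # hypothetical revenue: R(0) = 0, levels off
C = 2*Q + 1                  # hypothetical cost: positive slope, C(0) >= 0
G_min = sp.Rational(29, 2)   # hypothetical minimum acceptable profit

# Lagrangian of the minimization form: minimize -R subject to
# G_min - (R - C) <= 0.  At an interior solution (Q > 0) stationarity
# holds with equality, and the profit constraint binds (mu > 0).
L = -R + mu*(G_min - (R - C))
sol = sp.solve([sp.Eq(sp.diff(L, Q), 0), sp.Eq(R - C, G_min)],
               [Q, mu], dict=True)[0]

MR = sp.diff(R, Q).subs(sol)  # marginal revenue at the optimum
MC = sp.diff(C, Q).subs(sol)  # marginal cost at the optimum
print(sol)                    # Q = 4 + sqrt(2)/2, mu = sqrt(2) - 1
print(MR, "<", MC)            # 2 - sqrt(2) < 2, as the KKT analysis predicts
```

At the binding profit constraint the multiplier is $\mu = \sqrt{2} - 1 > 0$, and marginal revenue $2 - \sqrt{2}$ indeed falls below marginal cost $2$, matching the general argument.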
Value function
If we reconsider the optimization problem as a maximization problem with constant inequality constraints:

maximize $f(\mathbf{x})$ subject to $g_i(\mathbf{x}) \le a_i$ and $h_j(\mathbf{x}) = 0$.

The value function is defined as

$$V(a_1,\dots,a_m) = \sup_{\mathbf{x}} f(\mathbf{x}) \quad\text{subject to}\quad g_i(\mathbf{x}) \le a_i,\ h_j(\mathbf{x}) = 0,$$

so the domain of $V$ is $\{\mathbf{a} \in \mathbb{R}^m \mid \text{for some } \mathbf{x} \in X,\ g_i(\mathbf{x}) \le a_i,\ i = 1,\dots,m\}$.

Given this definition, each coefficient $\mu_i$ is the rate at which the value function increases as $a_i$ increases. Thus if each $a_i$ is interpreted as a resource constraint, the coefficients tell you how much increasing a resource will increase the optimum value of our function $f$. This interpretation is especially important in economics and is used, for instance, in utility maximization problems.
Generalizations
With an extra multiplier $\mu_0 \ge 0$, which may be zero (as long as $(\mu_0,\boldsymbol{\mu},\boldsymbol{\lambda}) \ne \mathbf{0}$), in front of $\nabla f(\mathbf{x}^*)$, the KKT stationarity conditions turn into

$$\mu_0\,\nabla f(\mathbf{x}^*) + \sum_{i=1}^{m} \mu_i\,\nabla g_i(\mathbf{x}^*) + \sum_{j=1}^{\ell} \lambda_j\,\nabla h_j(\mathbf{x}^*) = \mathbf{0},$$

together with the complementary slackness conditions $\mu_i g_i(\mathbf{x}^*) = 0$, which are called the Fritz John conditions. These optimality conditions hold without constraint qualifications, and they are equivalent to the optimality condition "KKT or (not-MFCQ)".
The KKT conditions belong to a wider class of the first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives.
See also
Farkas' lemma
Lagrange multiplier
The Big M method, for linear problems, which extends the simplex algorithm to problems that contain "greater-than" constraints.
Interior-point method a method to solve the KKT conditions.
Slack variable
Slater's condition
References
Further reading
External links
Karush–Kuhn–Tucker conditions with derivation and examples
Examples and Tutorials on the KKT Conditions
Mathematical optimization
Mathematical economics | Karush–Kuhn–Tucker conditions | [
"Mathematics"
] | 1,847 | [
"Mathematical optimization",
"Applied mathematics",
"Mathematical analysis",
"Mathematical economics"
] |
2,397,539 | https://en.wikipedia.org/wiki/Beal%20conjecture | The Beal conjecture is the following conjecture in number theory:
If
$A^x + B^y = C^z$,
where A, B, C, x, y, and z are positive integers with x, y, z ≥ 3, then A, B, and C have a common prime factor.
Equivalently,
The equation $A^x + B^y = C^z$ has no solutions in positive integers and pairwise coprime integers A, B, C if x, y, z ≥ 3.
The conjecture was formulated in 1993 by Andrew Beal, a banker and amateur mathematician, while investigating generalizations of Fermat's Last Theorem. Since 1997, Beal has offered a monetary prize for a peer-reviewed proof of this conjecture or a counterexample. The value of the prize has increased several times and is currently $1 million.
In some publications, this conjecture has occasionally been referred to as a generalized Fermat equation, the Mauldin conjecture, and the Tijdeman-Zagier conjecture.
Related examples
To illustrate, the solution $3^3 + 6^3 = 3^5$ has bases with a common factor of 3, the solution $7^3 + 7^4 = 14^3$ has bases with a common factor of 7, and $2^n + 2^n = 2^{n+1}$ has bases with a common factor of 2. Indeed the equation has infinitely many solutions where the bases share a common factor, including generalizations of the above three examples, respectively

$$(3^{5n+1})^3 + (2 \cdot 3^{5n+1})^3 = (3^{3n+1})^5, \qquad n \ge 0,$$
$$(7^{4n+1})^3 + (7^{3n+1})^4 = (2 \cdot 7^{4n+1})^3, \qquad n \ge 0,$$

and

$$2^n + 2^n = 2^{n+1}, \qquad n \ge 1.$$
Furthermore, for each solution (with or without coprime bases), there are infinitely many solutions with the same set of exponents and an increasing set of non-coprime bases. That is, for solution

$$A_1^x + B_1^y = C_1^z$$

we additionally have

$$A_2^x + B_2^y = C_2^z,$$

where

$$A_2 = A_1 d^{\,yz}, \qquad B_2 = B_1 d^{\,xz}, \qquad C_2 = C_1 d^{\,xy}$$

for any integer $d > 1$, as multiplying the original equation through by $d^{\,xyz}$ shows.
Any solutions to the Beal conjecture will necessarily involve three terms all of which are 3-powerful numbers, i.e. numbers where the exponent of every prime factor is at least three. It is known that there are an infinite number of such sums involving coprime 3-powerful numbers; however, such sums are rare. The smallest two examples are:

$$271^3 + 2^3 \cdot 3^5 \cdot 73^3 = 919^3 = 776{,}151{,}559,$$
$$3^4 \cdot 29^3 \cdot 89^3 + 7^3 \cdot 11^3 \cdot 167^3 = 2^7 \cdot 5^4 \cdot 353^3 = 3{,}518{,}958{,}160{,}000.$$
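Both identities are easy to verify directly; a minimal check in Python (an illustrative snippet, not part of the article):

```python
# Verify the two smallest known sums of coprime 3-powerful numbers.
assert 271**3 + 2**3 * 3**5 * 73**3 == 919**3 == 776_151_559
assert (3**4 * 29**3 * 89**3 + 7**3 * 11**3 * 167**3
        == 2**7 * 5**4 * 353**3 == 3_518_958_160_000)
print("both identities hold")
```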
What distinguishes Beal's conjecture is that it requires each of the three terms to be expressible as a single power.
Relation to other conjectures
Fermat's Last Theorem established that $A^n + B^n = C^n$ has no solutions for n > 2 for positive integers A, B, and C. If any solutions had existed to Fermat's Last Theorem, then by dividing out every common factor, there would also exist solutions with A, B, and C coprime. Hence, Fermat's Last Theorem can be seen as a special case of the Beal conjecture restricted to x = y = z.
The Fermat–Catalan conjecture is that $A^x + B^y = C^z$ has only finitely many solutions with A, B, and C being positive integers with no common prime factor and x, y, and z being positive integers satisfying

$$\frac{1}{x} + \frac{1}{y} + \frac{1}{z} < 1.$$

Beal's conjecture can be restated as "All Fermat–Catalan conjecture solutions will use 2 as an exponent".
The abc conjecture would imply that there are at most finitely many counterexamples to Beal's conjecture.
Partial results
In the cases below where n is an exponent, multiples of n are also proven, since a kn-th power is also an n-th power. Where solutions involving a second power are alluded to below, they can be found specifically at Fermat–Catalan conjecture#Known solutions. All cases of the form (2, 3, n) or (2, n, 3) have the solution $2^3 + 1^n = 3^2$, which is referred to below as the Catalan solution.
The case x = y = z ≥ 3 is Fermat's Last Theorem, proven to have no solutions by Andrew Wiles in 1994.
The case (x, y, z) = (2, 3, 7) and all its permutations were proven to have only four non-Catalan solutions, none of them contradicting the Beal conjecture, by Bjorn Poonen, Edward F. Schaefer, and Michael Stoll in 2005.
The case (x, y, z) = (2, 3, 8) and all its permutations were proven to have only two non-Catalan solutions, neither of them contradicting the Beal conjecture, by Nils Bruin in 2003.
The case (x, y, z) = (2, 3, 9) and all its permutations are known to have only one non-Catalan solution, which does not contradict the Beal conjecture, by Nils Bruin in 2003.
The case (x, y, z) = (2, 3, 10) and all its permutations were proven by David Zureick-Brown in 2009 to have only the Catalan solution.
The case (x, y, z) = (2, 3, 11) and all its permutations were proven by Freitas, Naskręcki and Stoll to have only the Catalan solution.
The case (x, y, z) = (2, 3, 15) and all its permutations were proven by Samir Siksek and Michael Stoll in 2013 to have only the Catalan solution.
The case (x, y, z) = (2, 4, 4) and all its permutations were proven to have no solutions by combined work of Pierre de Fermat in the 1640s and Euler in 1738. (See one proof here and another here)
The case (x, y, z) = (2, 4, 5) and all its permutations are known to have only two non-Catalan solutions, neither of them contradicting the Beal conjecture, by Nils Bruin in 2003.
The case (x, y, z) = (2, 4, n) and all its permutations were proven for n ≥ 6 by Michael Bennett, Jordan Ellenberg, and Nathan Ng in 2009.
The case (x, y, z) = (2, 6, n) and all its permutations were proven for n ≥ 3 by Michael Bennett and Imin Chen in 2011 and by Bennett, Chen, Dahmen and Yazdani in 2014.
The case (x, y, z) = (2, 2n, 3) was proven for 3 ≤ n ≤ $10^7$ except n = 7, and for various modulo congruences when n is prime, to have no non-Catalan solution by Bennett, Chen, Dahmen and Yazdani.
The cases (x, y, z) = (2, 2n, 9), (2, 2n, 10), (2, 2n, 15) and all their permutations were proven for n ≥ 2 by Bennett, Chen, Dahmen and Yazdani in 2014.
The case (x, y, z) = (3, 3, n) and all its permutations have been proven for 3 ≤ n ≤ $10^9$ and for various modulo congruences when n is prime.
The case (x, y, z) = (3, 4, 5) and all its permutations were proven by Siksek and Stoll in 2011.
The case (x, y, z) = (3, 5, 5) and all its permutations were proven by Bjorn Poonen in 1998.
The case (x, y, z) = (3, 6, n) and all its permutations were proven for n ≥ 3 by Bennett, Chen, Dahmen and Yazdani in 2014.
The case (x, y, z) = (2n, 3, 4) and all its permutations were proven for n ≥ 2 by Bennett, Chen, Dahmen and Yazdani in 2014.
The cases (5, 5, 7), (5, 5, 19), (7, 7, 5) and all their permutations were proven by Sander R. Dahmen and Samir Siksek in 2013.
The cases (x, y, z) = (n, n, 2) and all its permutations were proven for n ≥ 4 by Darmon and Merel in 1995 following work from Euler and Poonen.
The cases (x, y, z) = (n, n, 3) and all its permutations were proven for n ≥ 3 by Édouard Lucas, Bjorn Poonen, and Darmon and Merel.
The case (x, y, z) = (2n, 2n, 5) and all its permutations were proven for n ≥ 2 by Bennett in 2006.
The case (x, y, z) = (2l, 2m, n) and all its permutations were proven for l, m ≥ 5 primes and n = 3, 5, 7, 11 by Anni and Siksek.
The case (x, y, z) = (2l, 2m, 13) and all its permutations were proven for l, m ≥ 5 primes by Billerey, Chen, Dembélé, Dieulefait, Freitas.
The case (x, y, z) = (3l, 3m, n) is direct for l, m ≥ 2 and n ≥ 3 from work by Kraus.
The Darmon–Granville theorem uses Faltings's theorem to show that for every specific choice of exponents (x, y, z), there are at most finitely many coprime solutions for (A, B, C).
The impossibility of the case A = 1 or B = 1 is implied by Catalan's conjecture, proven in 2002 by Preda Mihăilescu. (Notice C cannot be 1, or one of A and B must be 0, which is not permitted.)
A potential class of solutions to the equation, namely those with A, B, C also forming a Pythagorean triple, were considered by L. Jesmanowicz in the 1950s. J. Jozefiak proved that there are an infinite number of primitive Pythagorean triples that cannot satisfy the Beal equation. Further results are due to Chao Ko.
Peter Norvig, Director of Research at Google, reported having conducted a series of numerical searches for counterexamples to Beal's conjecture. Among his results, he excluded all possible solutions having each of x, y, z ≤ 7 and each of A, B, C ≤ 250,000, as well as possible solutions having each of x, y, z ≤ 100 and each of A, B, C ≤ 10,000 (a small-scale search of this kind is sketched after this list).
If A, B are odd and x, y are even, Beal's conjecture has no counterexample.
By assuming the validity of Beal's conjecture, there exists an upper bound for any common divisor of x, y and z in the expression $A^x + B^y = C^z$.
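A search of the kind Norvig describes is straightforward to reproduce at a small scale. The following is a minimal sketch in Python, not Norvig's actual program: the function name and default bounds are invented for illustration. It tabulates pure powers and then looks up every coprime-base sum in the table:

```python
from math import gcd

def beal_search(max_base=30, max_exp=7):
    """Yield (A, x, B, y, C, z) with A^x + B^y = C^z, gcd(A, B) = 1 and
    3 <= x, y, z <= max_exp, i.e. would-be counterexamples to the Beal
    conjecture; none are known, so this is expected to yield nothing."""
    largest = 2 * max_base ** max_exp          # largest possible A^x + B^y
    powers = {}                                # maps C**z -> list of (C, z)
    c = 2
    while c ** 3 <= largest:
        z = 3
        while c ** z <= largest:
            powers.setdefault(c ** z, []).append((c, z))
            z += 1
        c += 1
    for a in range(1, max_base + 1):
        for b in range(a, max_base + 1):
            if gcd(a, b) > 1:                  # bases sharing a prime factor
                continue                       # are allowed by the conjecture
            for x in range(3, max_exp + 1):
                for y in range(3, max_exp + 1):
                    for c, z in powers.get(a ** x + b ** y, ()):
                        yield (a, x, b, y, c, z)

print(list(beal_search()))                     # -> [] within these bounds
```

Because no counterexample is known, the search comes back empty for any bounds yet tried.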
Prize
For a published proof or counterexample, banker Andrew Beal initially offered a prize of US $5,000 in 1997, raising it to $50,000 over ten years, but has since raised it to US $1,000,000.
The American Mathematical Society (AMS) holds the $1 million prize in a trust until the Beal conjecture is solved. It is supervised by the Beal Prize Committee (BPC), which is appointed by the AMS president.
Variants
The counterexamples $7^3 + 13^2 = 2^9$, $2^7 + 17^3 = 71^2$, and $3^5 + 11^4 = 122^2$ show that the conjecture would be false if one of the exponents were allowed to be 2. The Fermat–Catalan conjecture is an open conjecture dealing with such cases (the condition of this conjecture is that the sum of the reciprocals of the exponents is less than 1). If we allow at most one of the exponents to be 2, then there may be only finitely many solutions (except the case $1^n + 2^3 = 3^2$).
If A, B, C can have a common prime factor then the conjecture is not true; a classic counterexample is $2^7 + 2^7 = 2^8$.
A variation of the conjecture asserting that x, y, z (instead of A, B, C) must have a common prime factor is not true. A counterexample is $27^4 + 162^3 = 9^7$, in which 4, 3, and 7 have no common prime factor. (In fact, the maximum common prime factor of the exponents that is valid is 2; a common factor greater than 2 would be a counterexample to Fermat's Last Theorem.)
The conjecture is not valid over the larger domain of Gaussian integers. After a prize of $50 was offered for a counterexample, Fred W. Helenius provided $(-2+i)^3 + (-2-i)^3 = (1+i)^4$.
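This identity is quick to confirm; a minimal check using Python's built-in complex arithmetic (an illustrative snippet; exact Gaussian-integer arithmetic would use a symbolic library instead of floats):

```python
# Verify Helenius's Gaussian-integer counterexample:
# (-2+i)^3 + (-2-i)^3 = (1+i)^4 = -4.
a, b, c = complex(-2, 1), complex(-2, -1), complex(1, 1)
print(a**3 + b**3)  # (-4+0j)
print(c**4)         # (-4+0j)
```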
See also
ABC conjecture
Euler's sum of powers conjecture
Jacobi–Madden equation
Prouhet–Tarry–Escott problem
Taxicab number
Pythagorean quadruple
Sums of powers, a list of related conjectures and theorems
Distributed computing
BOINC
References
External links
The Beal Prize office page
Bealconjecture.com
Math.unt.edu
Mathoverflow.net discussion about the name and date of origin of the theorem
Diophantine equations
Conjectures
Unsolved problems in number theory
Abc conjecture | Beal conjecture | [
"Mathematics"
] | 2,676 | [
"Unsolved problems in mathematics",
"Mathematical objects",
"Equations",
"Unsolved problems in number theory",
"Diophantine equations",
"Conjectures",
"Abc conjecture",
"Mathematical problems",
"Number theory"
] |
2,398,004 | https://en.wikipedia.org/wiki/Nicholas%20C.%20Handy | Nicholas Charles Handy (17 June 1941 – 2 October 2012) was a British theoretical chemist. He retired as Professor of quantum chemistry at the University of Cambridge in September 2004.
Education and early life
Handy was born in Wiltshire, England and educated at Clayesmore School. He studied the Mathematical Tripos at the University of Cambridge and completed his PhD on theoretical chemistry supervised by Samuel Francis Boys.
Research
Handy wrote 320 scientific papers published in physical and theoretical chemistry journals.
Handy developed several methods in quantum chemistry and theoretical spectroscopy. His contributions have helped greatly to the understanding of:
the transcorrelated method
the long range behaviour of Hartree–Fock orbitals
semiclassical methods for vibrational energies
the variational method for rovibrational wave-functions (in normal mode and internal coordinates)
Full configuration interaction with Slater determinants (benchmark studies)
convergence of the Møller–Plesset series
the reaction path Hamiltonian
Anharmonic spectroscopic and thermodynamic properties using higher derivative methods
Brueckner-doubles theory
Open shell Møller–Plesset theory
frequency-dependent properties
Density functional theory: quadrature, new functionals and molecular properties.
Awards and honours
Handy was elected a Fellow of the Royal Society (FRS) in 1990. He was awarded the Leverhulme Medal in 2002 and was a member of the International Academy of Quantum Molecular Science.
Death
On 2 October 2012 Handy died after a brief battle with pancreatic cancer.
References
1941 births
2012 deaths
British chemists
Theoretical chemists
Fellows of the Royal Society
Members of the International Academy of Quantum Molecular Science
Alumni of St Catharine's College, Cambridge
Fellows of St Catharine's College, Cambridge
Schrödinger Medal recipients
People educated at Clayesmore School | Nicholas C. Handy | [
"Chemistry"
] | 362 | [
"Quantum chemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Physical chemists"
] |
2,398,131 | https://en.wikipedia.org/wiki/Tactile%20graphic | Tactile graphics, including tactile pictures, tactile diagrams, tactile maps, and tactile graphs, are images that use raised surfaces so that a visually impaired person can feel them. They are used to convey non-textual information such as maps, paintings, graphs and diagrams.
Tactile graphics can be seen as a subset of accessible images. Images can be made accessible to the visually impaired in various ways, such as verbal description, sound, or haptic (tactual) feedback.
One of the most common uses for tactile graphics is the production of tactile maps.
Tactile maps
The earliest and most rudimentary tactile maps used a mixed-media format, produced by simply attaching objects to a substrate to represent different items or symbols. More recent tactile maps are produced by computers through different means, such as ink-jet printers.
Thermoform is one of the most common methods of producing tactile maps. This process is also known as vacuum forming. Thermoform maps or plans are created from a process where a sheet of plastic is heated and vacuumed on top of a model or master. The master can be made from many substances, although certain materials are more durable than others. Since this process involves creating a mold, it is somewhat time-consuming.
Swell paper has a special coating of heat-reactive chemicals. Microcapsules of alcohol implanted in the paper fracture when exposed to heat and make the surface of the paper inflate. Placing black ink on the paper prior to a heat process provides control over the raised surface areas. This type of map is not as robust as the Thermoform map, but can be produced with less effort and expense.
Modified Braille embossers can also be used to produce tactile paper maps.
Ink-jet tactile maps are made by layering a specially designed ink. Each layer is cured by UV irradiation before the next layer is added. This technology is an offshoot of other industries, such as circuit board manufacturing and biomedical applications.
The substrate for tactile maps is a very important attribute, since different materials can enhance or reduce legibility and durability. Several types of substrates can be used to produce a tactile map. These include rough and smooth plastic, rough and smooth paper, microcapsule paper, Braillon, and aluminum. Many factors should be considered when choosing a substrate; these include but are not limited to function, durability, and portability.
Tactile map variables: Just as Jacques Bertin's retinal variables help determine how visual maps are produced, tactile maps have a formula as well. Although researchers have not standardized tactile map variables, these nine are usually included, depending on the substrate: vibration, flutter, pressure, temperature, size, shape, texture/grain, orientation, and elevation.
Typical tactile elevations: Thermoformed maps usually have an elevation of at least 1 mm. Swell paper averages 0.5 mm, and braille embossers have a range from 0.25–1 mm. Ink-jet printers can be controlled to vary elevation as needed. A 2009 study by Sandra Jehoel tested various height levels and estimated that preferred tactile elevations fall between 40 and 80 micrometres, depending on the substrate background, the shape of the object and the smoothness of lines. Symbols such as a triangle, square and a circle should have a minimum baseline length of 6.4, 5.0 and 5.5 mm respectively for proper recognition.
Audio tactile maps or graphic tablets are interactive devices. Electronic tactile talking touch pad instruments use Macromedia Flash software with audio files to convey information to the blind or visually impaired user. As the user's finger engages a feature or symbol a recording provides information about the object, symbol or area. For example, the sound of splashing water can be used for areas such as rivers or oceans. This format has great potential for transmitting information over the Internet which can be downloaded to a computer or hand-held device.
A great deal of hardware already exists that can be used by the blind or visually impaired to interact with computer screen graphics. A vibrating mouse or other force feedback devices can be adapted to turn any visual software generated map into a hybrid tactile map. The interactive signal to a device can be varied when crossing a boundary or symbol.
High-resolution refreshable braille displays containing 1,500 to 12,000 pixels are already available on the market. Graphic braille displays on the market include the DV-2 (from KGS) with 1,536 pixels, Hyperbraille with 7,200 pixels, and TACTISPLAY Table/Walk (from Tactisplay Corp.) with 2,400 and 12,000 pixels respectively. The TACTISPLAY Table has a total of 12,000 pixels arranged as 120×100.
Zoom maps are a recently developed tactile map. These maps are designed specifically for those who can read braille and have had no previous interaction with tactile maps. The term zoom is comparable to a zoomable visual raster internet map. A country is divided into regions on the first map, then the next zoomed map will have a breakdown of the regions, and so forth until a city level is reached. These successive maps rely on a dependable texture as the map zoom progresses. This produces a familiarity as one zooms from the preceding map, achieved in many instances with line orientation, area and consistent shape. The braille text on the map is placed next to a rectangular textured legend for area identification.
References
External links
IVEO Hands-On Learning System
Alternate Text Production Center
Tactile graphic resource page
Blindness equipment
Design
Cartography
Accessibility | Tactile graphic | [
"Engineering"
] | 1,203 | [
"Accessibility",
"Design"
] |
11,422,055 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD64 | In molecular biology, SNORD64 (also known as HBII-13) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
SNORD64 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
snoRNA HBII-13 is expressed mainly in brain tissue, but also in the lungs, the kidneys and muscle; however, HBII-13 has no identified target RNA.
The HBII-13 gene is located in a 460 kb intron of the large paternally-expressed transcription unit (SNURF-SNRNP-UBE3A AS), along with several other snoRNAs (HBII-436, HBII-437, HBII-438A/B) and the clusters of HBII-85 and HBII-52. This host gene is an antisense transcript to the maternally expressed UBE3A gene.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD64 | [
"Chemistry"
] | 331 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,059 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD65 | In molecular biology, SNORD65 (also known as HBII-135) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. SNORD19 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
SNORD65 is the human orthologue of the mouse MBII-135 snoRNA and is predicted to guide 2'O-ribose methylation of the small subunit (SSU) ribosomal RNA (rRNA), 18S, on position U627.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD65 | [
"Chemistry"
] | 244 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,061 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD66 | In molecular biology, SNORD66 (also known as HBII-142) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
HBII-142 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
HBII-142 is the human orthologue of the mouse MBII-142 snoRNA and is predicted to guide 2'O-ribose methylation of 18S ribosomal RNA (rRNA) at residue C1272.
An experiment that looked at 22 different non-small-cell lung cancer tissues found that SNORD33, SNORD66 and SNORD76 were over-expressed relative to matched noncancerous lung tissues.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD66 | [
"Chemistry"
] | 280 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,066 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD67 | In molecular biology, SNORD67 (also known as HBII-166) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
HBII-166 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
snoRNA HBII-166 is the human orthologue of the mouse MBII-166 and is predicted to guide 2'O-ribose methylation of spliceosomal RNA U6 at residue C60.
References
External links
Small nuclear RNA
Spliceosome
RNA splicing | Small nucleolar RNA SNORD67 | [
"Chemistry"
] | 242 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,070 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD69 | In molecular biology, snoRNA HBII-210 belongs to the C/D family of snoRNAs. It is the human orthologue of the mouse MBII-210 and is predicted to guide the 2'O-ribose methylation of large 28S rRNA on residue G4464.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD69 | [
"Chemistry"
] | 73 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,075 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD71 | In molecular biology, snoRNA HBII-239 belongs to the family of C/D snoRNAs. It is the human orthologue of the mouse MBII-239 described and is predicted to guide 2'O-ribose methylation of 5.8S rRNA on residue U14.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD71 | [
"Chemistry"
] | 73 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,078 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD72 | In molecular biology, SNORD72 (also known as HBII-240) belongs to the C/D family of snoRNAs.
It is the human orthologue of the mouse MBII-240 and is predicted to guide 2'O-ribose methylation of the large 28S rRNA at residue U4590.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD72 | [
"Chemistry"
] | 80 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,100 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD89 | In molecular biology, snoRNA HBII-289 belongs to the family of C/D snoRNAs.
It is the human orthologue of the mouse MBII-289 and has no identified RNA target.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD89 | [
"Chemistry"
] | 54 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,101 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD90 | In molecular biology, snoRNA SNORD90 (HBII-295) is a non-coding RNA that belongs to the family of C/D snoRNAs. Initially described as HBII-295 this RNA has now been called SNORD70 by the HUGO Gene Nomenclature Committee. It is the human orthologue of the mouse MBII-295 and has no identified RNA target. This RNA is expressed from an intron of the MNAB/OR1K1 gene.
There is evidence that SNORD90 is involved in guiding N6-methyladenosine (m6A) modifications onto target RNA transcripts. Specifically, SNORD90 has been shown to increase m6A levels on neuregulin 3 (NRG3) leading to its down-regulation through recognition by YTHDF2.
References
External links
HGNC database entry
Small nuclear RNA | Small nucleolar RNA SNORD90 | [
"Chemistry"
] | 187 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,102 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD95 | In molecular biology, snoRNA U95 (also known as SNORD95 or Z38) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA U95 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
U95 was identified by computational screening of the introns of ribosomal protein genes for conserved C/D box sequence motifs and expression experimentally verified by northern blotting.
U95 is predicted to guide the 2'O-ribose methylation of 28S ribosomal RNA (rRNA) residues A2802 and C2811.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD95 | [
"Chemistry"
] | 260 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,259 | https://en.wikipedia.org/wiki/Spi-1%20%28PU.1%29%205%E2%80%B2%20UTR%20regulatory%20element | The Spi-1 (PU.1) 5′ UTR regulatory element is an RNA element found in the 5′ UTR of Spi-1 mRNA which is able to inhibit the translation Spi-1 transcripts by 8-fold. Spi-1 regulates myeloid gene expression during haemopoietic development. Mutations in this regulatory region of the 5′ UTR can lead to overexpression of Spi-1 which has been linked to development of leukaemia.
See also
InvR
References
External links
Cis-regulatory RNA elements | Spi-1 (PU.1) 5′ UTR regulatory element | [
"Chemistry"
] | 117 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,266 | https://en.wikipedia.org/wiki/SraB%20RNA | The SraB RNA is a small non-coding RNA discovered in E. coli during a large scale experimental screen. The 14 novel RNAs discovered were named 'sra' for small RNA, examples include SraC, SraD and SraG. This ncRNA was found to be expressed only in stationary phase. The exact function of this RNA is unknown but it has been shown to affect survival of Salmonella enterica to antibiotic administration in egg albumin. The authors suggest this may be due to SraB regulating a response to components in albumin.
See also
Escherichia coli sRNA
References
External links
Non-coding RNA | SraB RNA | [
"Chemistry"
] | 130 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,317 | https://en.wikipedia.org/wiki/Tobamovirus%20internal%20ribosome%20entry%20site%20%28IRES%29 | The Tobamovirus internal ribosome entry site (IRES) is an element that allows cap and end-independent translation of mRNA in the host cell. The IRES achieves this by mediating the internal initiation of translation by recruiting a ribosomal 43S pre-initiation complex directly to the initiation codon and eliminates the requirement for the eukaryotic initiation factor, eIF4F.
See also
Mnt IRES
N-myc IRES
TrkB IRES
References
External links
Cis-regulatory RNA elements
Tobamovirus | Tobamovirus internal ribosome entry site (IRES) | [
"Chemistry"
] | 114 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,320 | https://en.wikipedia.org/wiki/Togavirus%205%E2%80%B2%20plus%20strand%20cis-regulatory%20element | The Togavirus 5′ plus strand cis-regulatory element is an RNA element which is thought to be essential for both plus and minus strand RNA synthesis.
The genus Alphavirus belongs to the family Togaviridae. Alphaviruses contain secondary structural motifs in the 5′ UTR that allow them to avoid detection by IFIT1.
See also
Rubella virus 3′ cis-acting element
References
External links
Cis-regulatory RNA elements | Togavirus 5′ plus strand cis-regulatory element | [
"Chemistry"
] | 87 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,340 | https://en.wikipedia.org/wiki/TPP%20riboswitch | The TPP riboswitch, also known as the THI element and Thi-box riboswitch, is a highly conserved RNA secondary structure. It serves as a riboswitch that binds thiamine pyrophosphate (TPP) directly and modulates gene expression through a variety of mechanisms in archaea, bacteria and eukaryotes. TPP is the active form of thiamine (vitamin B1), an essential coenzyme synthesised by coupling of pyrimidine and thiazole moieties in bacteria. The THI element is an extension of a previously detected thiamin-regulatory element, the thi box, there is considerable variability in the predicted length and structures of the additional and facultative stem-loops represented in dark blue in the secondary structure diagram Analysis of operon structures has identified a large number of new candidate thiamin-regulated genes, mostly transporters, in various prokaryotic organisms. The x-ray crystal structure of the TPP riboswitch aptamer has been solved.
See also
Tetrahydrofolate riboswitch
FMN riboswitch
References
External links
PDB entry for the TPP riboswitch tertiary structure
Cis-regulatory RNA elements
Riboswitch | TPP riboswitch | [
"Chemistry"
] | 268 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,422,746 | https://en.wikipedia.org/wiki/Ultrasonic%20transducer | Ultrasonic transducers and ultrasonic sensors are devices that generate or sense ultrasound energy. They can be divided into three broad categories: transmitters, receivers and transceivers. Transmitters convert electrical signals into ultrasound, receivers convert ultrasound into electrical signals, and transceivers can both transmit and receive ultrasound.
Applications and performance
Ultrasound can be used for measuring wind speed and direction (anemometer), tank or channel fluid level, and speed through air or water. For measuring speed or direction, a device uses multiple detectors and calculates the speed from the relative distances to particulates in the air or water. To measure tank or channel liquid level, and also sea level (tide gauge), the sensor measures the distance (ranging) to the surface of the fluid. Further applications include: humidifiers, sonar, medical ultrasonography, burglar alarms and non-destructive testing.
Systems typically use a transducer that generates sound waves in the ultrasonic range, above 20 kHz, by turning electrical energy into sound, then, upon receiving the echo, turning the sound waves back into electrical energy which can be measured and displayed.
This technology can also detect approaching objects and track their positions.
Ultrasound can also be used to make point-to-point distance measurements by transmitting and receiving discrete bursts of ultrasound between transducers. This technique, known as sonomicrometry, measures the transit time of the ultrasound signal electronically (i.e., digitally) and converts it mathematically to the distance between transducers, assuming the speed of sound of the medium between the transducers is known. This method can be very precise in terms of temporal and spatial resolution because the time-of-flight measurement can be derived from tracking the same incident (received) waveform, either by reference level or zero crossing. This enables the measurement resolution to far exceed the wavelength of the sound frequency generated by the transducers.
Transducers
Ultrasonic transducers convert alternating current (AC) into ultrasound and vice versa. They typically use piezoelectric or capacitive elements to generate or receive ultrasound. Piezoelectric crystals are able to change their sizes and shapes in response to voltage being applied. Capacitive transducers, on the other hand, use electrostatic fields between a conductive diaphragm and a backing plate.
The beam pattern of a transducer can be determined by the active transducer area and shape, the ultrasound wavelength, and the sound velocity of the propagation medium. The diagrams show the sound fields of an unfocused and a focusing ultrasonic transducer in water, plainly at differing energy levels.
Since piezoelectric materials generate a voltage when force is applied to them, they can also work as ultrasonic detectors. Some systems use separate transmitters and receivers, while others combine both functions into a single piezoelectric transceiver.
Ultrasound transmitters can also use non-piezoelectric principles such as magnetostriction. Materials with this property change size slightly when exposed to a magnetic field and make practical transducers.
A capacitor ("condenser") microphone has a thin diaphragm that responds to ultrasound waves. Changes in the electric field between the diaphragm and a closely spaced backing plate convert sound signals to electric currents, which can be amplified.
The diaphragm (or membrane) principle is also used in the relatively new micro-machined ultrasonic transducers (MUTs). These devices are fabricated using silicon micro-machining technology (MEMS technology), which is particularly useful for the fabrication of transducer arrays. The vibration of the diaphragm may be measured or induced electronically using the capacitance between the diaphragm and a closely spaced backing plate (CMUT), or by adding a thin layer of piezo-electric material on the diaphragm (PMUT). Alternatively, recent research showed that the vibration of the diaphragm may be measured by a tiny optical ring resonator integrated inside the diaphragm (OMUS).
Ultrasonic transducers can also be used for acoustic levitation.
Use in depth sounding
It involves transmitting acoustic waves into water and recording the time interval between emission and return of a pulse; the resulting time of flight, along with knowledge of the speed of sound in water, allows determining the distance between sonar and target. This information is then typically used for navigation purposes or in order to obtain depths for charting purposes. Distance is measured as half the time from the signal's outgoing pulse to its return, multiplied by the speed of sound in the water: depth = (T ÷ 2) × v, where v is approximately 4,700 feet per second or 1.5 kilometres per second. For precise applications of echosounding, such as hydrography, the speed of sound must also be measured, typically by deploying a sound velocity probe into the water. Echo sounding is effectively a special-purpose application of sonar used to locate the bottom. Since a traditional pre-SI unit of water depth was the fathom, an instrument used for determining water depth is sometimes called a fathometer. The first practical fathometer was invented by Herbert Grove Dorsey and patented in 1928.
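In code, the echo-sounding calculation is a one-liner. The sketch below is illustrative only: the function name and the 1,500 m/s default are assumptions for the example, since the true sound speed varies with temperature, salinity and depth.

```python
def depth_from_echo(round_trip_time_s: float,
                    sound_speed_m_s: float = 1500.0) -> float:
    """Water depth from a sonar echo: half the round-trip time of the
    pulse multiplied by the speed of sound in water (~1.5 km/s)."""
    return 0.5 * round_trip_time_s * sound_speed_m_s

print(depth_from_echo(0.4))  # a 0.4 s round trip -> 300.0 m of water
```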
Use in medicine
Medical ultrasonic transducers (probes) come in a variety of different shapes and sizes for use in making cross-sectional images of various parts of the body. The transducer may be used in contact with the skin, as in fetal ultrasound imaging, or inserted into a body opening such as the rectum or vagina. Clinicians who perform ultrasound-guided procedures often use a probe positioning system to hold the ultrasonic transducer.
Compared to other medical imaging modalities, ultrasound has several advantages. It provides images in real time, is portable, and can consequently be brought to the bedside. It is substantially lower in cost than other imaging strategies and does not use harmful ionizing radiation. Drawbacks include various limits on its field of view, the need for patient cooperation, dependence on patient physique, difficulty imaging structures obscured by bone, air or gases, and the necessity of a skilled operator, usually with professional training. Owing to these drawbacks, novel wearable ultrasound implementations are gaining popularity. These miniature devices continuously monitor vital signs and raise an alert at early signs of abnormality.
Use in industry
Ultrasonic sensors can detect the movement of targets and measure the distance to them in many automated factories and process plants. Sensors can have an on or off digital output for detecting the movement of objects, or an analog output proportional to distance. They can sense the edge of the material as part of a web guiding system.
Ultrasonic sensors are widely used in cars as parking sensors to aid the driver in reversing into parking spaces. They are being tested for a number of other automotive uses including ultrasonic people detection and assisting in autonomous UAV navigation.
Because ultrasonic sensors use sound rather than light for detection, they work in applications where photoelectric sensors may not. Ultrasonics are well suited to clear-object detection and liquid-level measurement, applications where photoelectric sensors struggle because of target translucence. Target color and reflectivity also do not affect ultrasonic sensors, which can operate reliably in high-glare environments.
Passive ultrasonic sensors may be used to detect high-pressure gas or liquid leaks, or other hazardous conditions that generate ultrasonic sound. In these devices, ultrasound picked up by the transducer (microphone) is converted down to the human hearing range (audible sound: 20 Hz to 20 kHz).
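A common way to perform this down-conversion is heterodyning: the ultrasonic signal is multiplied by a local oscillator, producing a difference frequency in the audible band. The sketch below is a minimal numerical illustration of the principle only; the 40 kHz source and 38 kHz oscillator are arbitrary example frequencies, not values from any particular detector.

```python
import numpy as np

fs = 192_000                                  # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)                 # 100 ms of signal
ultrasound = np.sin(2 * np.pi * 40_000 * t)   # 40 kHz leak noise (inaudible)
local_osc = np.sin(2 * np.pi * 38_000 * t)    # 38 kHz local oscillator

mixed = ultrasound * local_osc                # 2 kHz difference + 78 kHz sum

# Locate the strongest component inside the audible band.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
audible = freqs < 20_000
print(freqs[audible][np.argmax(spectrum[audible])])  # ~2000.0 Hz
```

A real detector would additionally low-pass filter the mixed signal to remove the sum frequency (here 78 kHz) before it reaches the speaker.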
High-power ultrasonic emitters are used in commercially available ultrasonic cleaning devices. An ultrasonic transducer is affixed to a stainless steel pan which is filled with a solvent (frequently water or isopropanol). An electrical square wave feeds the transducer, creating sound in the solvent strong enough to cause cavitation.
Ultrasonic technology has been used for multiple cleaning purposes. One application that has been gaining traction in the past decade is ultrasonic gun cleaning.
In ultrasonic welding and ultrasonic wire bonding, plastics and metals are joined using vibrations created by power ultrasonic transducers.
Ultrasonic testing is also widely used in metallurgy and engineering to evaluate corrosion, welds, and material defects using different types of scans.
Notes
References
Further reading
Escolà, Alexandre; Planas, Santiago; Rosell, Joan Ramon; Pomar, Jesús; Camp, Ferran; Solanelles, Francesc; Gracia, Felip; Llorens, Jordi; Gil, Emilio (2011-02-28). "Performance of an Ultrasonic Ranging Sensor in Apple Tree Canopies". Sensors. 11 (3): 2459–2477. doi:10.3390/s110302459. ISSN 1424-8220. PMC 3231637.
Ultrasound
Sensors | Ultrasonic transducer | [
"Technology",
"Engineering"
] | 1,862 | [
"Sensors",
"Measuring instruments"
] |
11,423,298 | https://en.wikipedia.org/wiki/Online%20refuelling | In nuclear power technology, online refuelling is a technique for changing the fuel of a nuclear reactor while the reactor is critical. This allows the reactor to continue to generate electricity during routine refuelling, and therefore improve the availability and profitability of the plant.
Benefits of online refuelling
Online refuelling allows a nuclear reactor to continue to generate electricity during periods of routine refuelling, improving both the availability and the economics of the plant. Additionally, it allows more flexibility in refuelling schedules: a small number of fuel elements can be exchanged at a time, rather than in high-intensity offline refuelling programmes.
The ability to refuel a reactor while generating power has the greatest benefits where refuelling is required at high frequency, for example during the production of plutonium suitable for nuclear weapons during which low-burnup fuel is required from short irradiation periods in a reactor. Conversely, frequent rearrangement of fuel within the core can balance the thermal load and allow higher fuel burnup, therefore reducing both the fuel requirements, and subsequently the amount of high-level nuclear waste for disposal.
Although online refuelling is generally desirable, it requires design compromises that often make it uneconomical. These include added complexity in the refuelling equipment, which must operate at pressure when refuelling gas- and water-cooled reactors. Online refuelling equipment for Magnox reactors proved to be less reliable than the reactor systems, and in retrospect its use was regarded as a mistake. Molten salt reactors and pebble-bed reactors also require online handling and processing equipment to replace the fuel during operation.
Reactor designs with online refuelling
Reactors with online refuelling capability to date have typically been either liquid sodium cooled, gas cooled, or cooled by water in pressurised channels. Water-cooled reactors utilising pressure vessels, for example PWR and BWR reactors and their Generation III descendants, are unsuitable for online refuelling because the coolant must be depressurised and the pressure vessel disassembled to access the fuel, requiring a major reactor shutdown. This is typically carried out every 18–24 months.
Notable past and present nuclear power plant designs that have incorporated the ability to refuel online include:
CANDU reactors: Pressurised heavy-water cooled and moderated, natural uranium fuel reactors of Canadian design. Operated 1947–present.
IPHWR reactors: Heavy-water cooled and moderated reactors of Indian design, derived from CANDU. Operated 1984–present.
Magnox reactors: Carbon dioxide-cooled, graphite-moderated, natural uranium fuel reactors of British design. Operated 1954–2015.
RBMK reactors: Boiling water cooled, graphite-moderated, enriched uranium fuel reactors of Russian design. Operated 1974–present.
UNGG reactors: Carbon dioxide-cooled, graphite-moderated, natural uranium fuel reactors of French design. Operated 1966–1994.
BN-350; BN-600 & BN-800 reactors: Sodium cooled fast-breeder reactor of Russian design. Operated 1973–present.
AGR (Advanced gas-cooled) reactors: Carbon dioxide-cooled, graphite-moderated, enriched uranium fuel reactors of British design. Operated 1976–present.
There are a number of planned reactor designs which include provision for online refuelling, including pebble-bed and molten salt Generation IV reactors.
References
Nuclear technology | Online refuelling | [
"Physics"
] | 698 | [
"Nuclear technology",
"Nuclear physics"
] |
11,423,396 | https://en.wikipedia.org/wiki/Burnup | In nuclear power technology, burnup is a measure of how much energy is extracted from a given amount of nuclear fuel. It may be measured as the fraction of fuel atoms that underwent fission in %FIMA (fissions per initial heavy metal atom) or %FIFA (fissions per initial fissile atom) as well as the actual energy released per mass of initial fuel in gigawatt-days/metric ton of heavy metal (GWd/tHM), or similar units. The amount of initial fuel in the denominator is defined as all uranium, plutonium, and thorium isotopes, not including alloying or other chemical compounds or mixtures in the fuel charge.
Measures of burnup
Expressed as a percentage: if 5% of the initial heavy metal atoms have undergone fission, the burnup is 5%FIMA. If these 5% were the total of 235U that were in the fuel at the beginning, the burnup is 100%FIFA (as 235U is fissile and the other 95% heavy metals like 238U are not). In reactor operations, this percentage is difficult to measure, so the alternative definition is preferred. This can be computed by multiplying the thermal power of the plant by the time of operation and dividing by the mass of the initial fuel loading. For example, if a 3000 MW thermal (equivalent to 1000 MW electric at 33.333% efficiency, which is typical of US LWRs) plant uses 24 tonnes of enriched uranium (tU) and operates at full power for 1 year, the average burnup of the fuel is (3000 MW·365 d)/24 metric tonnes = 45.63 GWd/t, or 45,625 MWd/tHM (where HM stands for heavy metal, meaning actinides like thorium, uranium, plutonium, etc.).
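This arithmetic is easy to reproduce; the sketch below (with names chosen for readability) recovers the figure quoted above.

```python
def burnup_gwd_per_t(thermal_power_mw, days_at_full_power, fuel_mass_t):
    """Average burnup in GWd per tonne of initial heavy metal."""
    return thermal_power_mw * days_at_full_power / 1000.0 / fuel_mass_t

# 3000 MW thermal for one year on a 24-tonne uranium load:
print(burnup_gwd_per_t(3000, 365, 24))  # ~45.6 GWd/tHM
```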
Converting between percent and energy/mass requires knowledge of κ, the thermal energy released per fission event. A typical value is 193.7 MeV of thermal energy per fission (see Nuclear fission). With this value, the maximum burnup of 100%FIMA, which includes fissioning not just the fissile content but also the other fissionable nuclides, is equivalent to about 909 GWd/t. Nuclear engineers often use this to roughly approximate 10% burnup as just less than 100 GWd/t.
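The conversion between burnup fraction and energy per mass follows from κ and Avogadro's number. The sketch below reproduces the ~909 GWd/t figure for 100%FIMA; the 238 g/mol heavy-metal molar mass is an illustrative assumption.

```python
MEV_TO_J = 1.602176634e-13   # joules per MeV
AVOGADRO = 6.02214076e23     # atoms per mole
GWD_TO_J = 1e9 * 86_400      # joules per gigawatt-day

def fima_to_gwd_per_t(fima_fraction, kappa_mev=193.7, molar_mass_g=238.0):
    """Convert a burnup fraction (FIMA) to GWd per tonne of heavy metal."""
    atoms_per_tonne = 1e6 / molar_mass_g * AVOGADRO
    energy_j = fima_fraction * atoms_per_tonne * kappa_mev * MEV_TO_J
    return energy_j / GWD_TO_J

print(fima_to_gwd_per_t(1.0))   # ~909 GWd/t for 100%FIMA
print(fima_to_gwd_per_t(0.10))  # ~91 GWd/t, the "10% is just under 100 GWd/t" rule
```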
The actual fuel may be any actinide that can support a chain reaction (meaning it is fissile), including uranium, plutonium, and more exotic transuranic fuels. This fuel content is often referred to as the heavy metal to distinguish it from other metals present in the fuel, such as those used for cladding. The heavy metal is typically present as either metal or oxide, but other compounds such as carbides or other salts are possible.
History
Generation II reactors were typically designed to achieve about 40 GWd/tU. With newer fuel technology, and particularly the use of nuclear poisons, these same reactors are now capable of achieving up to 60 GWd/tU. After so many fissions have occurred, the build-up of fission products poisons the chain reaction and the reactor must be shut down and refueled.
Some more-advanced light-water reactor designs are expected to achieve over 90 GWd/t of higher-enriched fuel.
Fast reactors are more immune to fission-product poisoning and can inherently reach higher burnups in one cycle. In 1985, the EBR-II reactor at Argonne National Laboratory took metallic fuel up to 19.9% burnup, or just under 200 GWd/t.
The Deep Burn Modular Helium Reactor (DB-MHR) might reach 500 GWd/t of transuranic elements.
In a power station, high fuel burnup is desirable for:
Reducing downtime for refueling
Reducing the number of fresh nuclear fuel elements required and spent nuclear fuel elements generated while producing a given amount of energy
Reducing the potential for diversion of plutonium from spent fuel for use in nuclear weapons
It is also desirable that burnup should be as uniform as possible both within individual fuel elements and from one element to another within a fuel charge. In reactors with online refuelling, fuel elements can be repositioned during operation to help achieve this. In reactors without this facility, fine positioning of control rods to balance reactivity within the core, and repositioning of remaining fuel during shutdowns in which only part of the fuel charge is replaced may be used.
On the other hand, there are signs that increasing burnup above 50 or 60 GWd/tU leads to significant engineering challenges and that it does not necessarily lead to economic benefits. Higher-burnup fuels require higher initial enrichment to sustain reactivity. Since the amount of separative work units (SWUs) is not a linear function of enrichment, it is more expensive to achieve higher enrichments. There are also operational aspects of high burnup fuels that are associated especially with reliability of such fuel. The main concerns associated with high burnup fuels are:
Increased burnup places additional demands on fuel cladding, which must withstand the reactor environment for longer periods.
Longer residence in the reactor requires higher corrosion resistance.
Higher burnup leads to higher accumulation of gaseous fission products inside the fuel pin, resulting in significant increases in internal pressure.
Higher burnup leads to increased radiation-induced growth, which can lead to undesirable changes in core geometry (fuel assembly bow or fuel rod bow). Fuel assembly bow can result in increased drop times for control rods due to friction between the control rods and bowed guide tubes.
While high burnup fuel generates a smaller volume of fuel for reprocessing, the fuel has a higher specific activity.
Fuel requirements
In once-through nuclear fuel cycles such as are currently in use in much of the world, used fuel elements are disposed of whole as high level nuclear waste, and the remaining uranium and plutonium content is lost. Higher burnup allows more of the fissile 235U and of the plutonium bred from the 238U to be utilised, reducing the uranium requirements of the fuel cycle.
Waste
In once-through nuclear fuel cycles, higher burnup reduces the number of elements that need to be buried. However, short-term heat emission, one of the limiting factors for deep geological repositories, comes predominantly from medium-lived fission products, particularly 137Cs (30.08 year half life) and 90Sr (28.9 year half life). As there are proportionately more of these in high-burnup fuel, the heat generated by the spent fuel is roughly constant for a given amount of energy generated.
Similarly, in fuel cycles with nuclear reprocessing, the amount of high-level waste for a given amount of energy generated is not closely related to burnup. High-burnup fuel generates a smaller volume of fuel for reprocessing, but with a higher specific activity.
Unprocessed used fuel from current light-water reactors consists of 5% fission products and 95% actinides (most of it uranium), and is dangerously radiotoxic, requiring special custody for 300,000 years. Most of the long-term radiotoxic elements are transuranic, and therefore could be recycled as fuel. 70% of fission products are either stable or have half lives less than one year. Another six percent (129I and 99Tc) can be transmuted to elements with extremely short half lives (130I: 12.36 hours; 100Tc: 15.46 seconds). 93Zr, having a very long half life, constitutes 5% of fission products, but can be alloyed with uranium and transuranics during fuel recycling, or used in zircalloy cladding, where its radioactivity is irrelevant. The remaining 20% of fission products, or 1% of unprocessed fuel, for which the longest-lived isotopes are 137Cs and 90Sr, require special custody for only 300 years. Therefore, the mass of material needing special custody is 1% of the mass of unprocessed used fuel. In the case of 137Cs or 90Sr, this "special custody" could also take the form of use for food irradiation or as fuel in a radioisotope thermoelectric generator. As both the native elements strontium and caesium and their oxides (the chemical forms in which they can be found in oxide or metal fuel) form soluble hydroxides upon reaction with water, they can be extracted from spent fuel relatively easily and precipitated into a solid form for use or disposal in a further step if desired. If tritium has not been removed from the fuel in a step prior to this aqueous extraction, the water used in this process will be contaminated, requiring expensive isotope separation or allowing the tritium to decay to safe levels before the water can be released into the biosphere.
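The 300-year custody period follows from straightforward exponential decay: after roughly ten half-lives, less than a thousandth of a radionuclide remains. A minimal sketch of the arithmetic:

```python
def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of a radionuclide remaining after a given time."""
    return 0.5 ** (elapsed_years / half_life_years)

for isotope, t_half in [("Cs-137", 30.08), ("Sr-90", 28.9)]:
    print(isotope, remaining_fraction(300, t_half))
# Cs-137: ~0.001  (about a thousandth remains after 300 years)
# Sr-90:  ~0.0008
```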
Proliferation
Burnup is one of the key factors determining the isotopic composition of spent nuclear fuel, the others being its initial composition and the neutron spectrum of the reactor. Very low fuel burnup is essential for the production of weapons-grade plutonium for nuclear weapons, in order to produce plutonium that is predominantly 239Pu with the smallest possible proportion of 240Pu and 242Pu.
Plutonium and other transuranic isotopes are produced from uranium by neutron absorption during reactor operation. While it is possible in principle to remove plutonium from used fuel and divert it to weapons usage, in practice there are formidable obstacles to doing so. First, fission products must be removed. Second, plutonium must be separated from other actinides. Third, fissionable isotopes of plutonium must be separated from non-fissionable isotopes, which is more difficult than separating fissionable from non-fissionable isotopes of uranium, not least because the mass difference is one atomic unit instead of three. All processes require operation on strongly radioactive materials. Since there are many simpler ways to make nuclear weapons, nobody has constructed weapons from used civilian electric power reactor fuel, and it is likely that nobody ever will do so. Furthermore, most plutonium produced during operation is fissioned. To the extent that fuel is reprocessed on-site, as proposed for the Integral Fast Reactor, opportunities for diversion are further limited. Therefore, production of plutonium during civilian electric power reactor operation is not a significant problem.
Cost
One 2003 MIT graduate student thesis concludes that "the fuel cycle cost associated with a burnup level of 100 GWd/tHM is higher than for a burnup of 50 GWd/tHM. In addition, expenses will be required for the development of fuels capable of sustaining such high levels of irradiation. Under current conditions, the benefits of high burnup (lower spent fuel and plutonium discharge rates, degraded plutonium isotopics) are not rewarded. Hence there is no incentive for nuclear power plant operators to invest in high burnup fuels."
A study sponsored by the Nuclear Energy University Programs investigated the economic and technical feasibility, in the longer term, of higher burnup.
References
External links
Basic Requirements of High Burn-up fuels in LWRs
Nuclear technology | Burnup | [
"Physics"
] | 2,282 | [
"Nuclear technology",
"Nuclear physics"
] |
11,424,590 | https://en.wikipedia.org/wiki/LaSalle%20Lake%20State%20Fish%20and%20Wildlife%20Area | LaSalle Lake State Fish and Wildlife Area is an Illinois state park on in LaSalle County, Illinois, United States. It is a man-made lake, built as a cooling pond for the LaSalle County Generating Station.
References
State parks of Illinois
Protected areas of LaSalle County, Illinois
Cooling ponds | LaSalle Lake State Fish and Wildlife Area | [
"Chemistry",
"Environmental_science"
] | 62 | [
"Cooling ponds",
"Water pollution"
] |
11,428,320 | https://en.wikipedia.org/wiki/CAVEman | CAVEman is a 4D high-resolution model of a functioning human elaborated by the University of Calgary. It resides in a cube-shaped virtual reality room, like a cave, also known as the "research holodeck", in which the human model floats in space, projected from three walls and the floor below.
References
External links
University of Calgary Unveils the CAVEman Virtual Human
CAVEman unveiled
Meet the CAVEman of the future
Health informatics
Holography | CAVEman | [
"Biology"
] | 97 | [
"Health informatics",
"Medical technology"
] |
11,434,033 | https://en.wikipedia.org/wiki/Earth%27s%20field%20NMR | Nuclear magnetic resonance (NMR) in the geomagnetic field is conventionally referred to as Earth's field NMR (EFNMR). EFNMR is a special case of low field NMR.
When a sample is placed in a constant magnetic field and stimulated (perturbed) by a time-varying (e.g., pulsed or alternating) magnetic field, NMR active nuclei resonate at characteristic frequencies. Examples of such NMR active nuclei are the isotopes carbon-13 and hydrogen-1 (which in NMR is conventionally known as proton NMR). The resonant frequency of each isotope is directly proportional to the strength of the applied magnetic field, and the magnetogyric or gyromagnetic ratio of that isotope. The signal strength is proportional to the stimulating magnetic field and the number of nuclei of that isotope in the sample. Thus, in the 21 tesla magnetic field that may be found in high-resolution laboratory NMR spectrometers, protons resonate at 900 MHz. However, in the Earth's magnetic field the same nuclei resonate at audio frequencies of around 2 kHz and generate feeble signals.
The location of a nucleus within a complex molecule affects the 'chemical environment' (i.e. the rotating magnetic fields generated by the other nuclei) experienced by the nucleus. Thus, different hydrocarbon molecules containing NMR active nuclei in different positions within the molecules produce slightly different patterns of resonant frequencies.
EFNMR signals can be affected by magnetically noisy laboratory environments and by natural variations in the Earth's field, which originally compromised the technique's usefulness. This disadvantage has been overcome by the introduction of electronic equipment that compensates for changes in the ambient magnetic field.
Whereas chemical shifts are important in NMR, they are insignificant in the Earth's field. The absence of chemical shifts causes features such as spin–spin multiplets (which are resolved at high fields) to be superimposed in EFNMR. Instead, EFNMR spectra are dominated by spin–spin coupling (J-coupling) effects. Software optimised for analysing these spectra can provide useful information about the structure of the molecules in the sample.
Applications
Applications of EFNMR include:
Proton precession magnetometers (PPM) or proton magnetometers, which produce magnetic resonance in a known sample in the magnetic field to be measured, measure the sample's resonant frequency, then calculate and display the field strength.
EFNMR spectrometers, which use the principle of NMR spectroscopy to analyse molecular structures in a variety of applications, from investigating the structure of ice crystals in polar ice-fields, to rocks and hydrocarbons on-site.
Earth's field MRI scanners, which use the principle of magnetic resonance imaging.
The advantages of Earth's field instruments over conventional (high field strength) instruments include the portability of the equipment, giving the ability to analyse substances on-site, and their lower cost. The much lower geomagnetic field strength, which would otherwise result in poor signal-to-noise ratios, is compensated for by the homogeneity of the Earth's field, which permits the use of much larger samples. Their relatively low cost and simplicity make them good educational tools.
Although those commercial EFNMR spectrometers and MRI instruments aimed at universities etc. are necessarily sophisticated and are too costly for most hobbyists, internet search engines find data and designs for basic proton precession magnetometers which claim to be within the capability of reasonably competent electronic hobbyists or undergraduate students to build from readily available components costing no more than a few tens of US dollars.
Mode of operation
Free induction decay (FID) is the magnetic resonance due to Larmor precession that results from the stimulation of nuclei by means of either a pulsed dc magnetic field or a pulsed resonant frequency (rf) magnetic field, somewhat analogous respectively to the effects of plucking or bowing a stringed instrument. Whereas a pulsed rf field is usual in conventional (high field) NMR spectrometers, the pulsed dc polarising field method of stimulating FID is usual in EFNMR spectrometers and PPMs.
EFNMR equipment typically incorporates several coils, for stimulating the samples and for sensing the resulting NMR signals. Signal levels are very low, and specialised electronic amplifiers are required to amplify the EFNMR signals to usable levels. The stronger the polarising magnetic field, the stronger the EFNMR signals and the better the signal-to-noise ratios. The main trade-offs are performance versus portability and cost.
Since the FID resonant frequencies of NMR active nuclei are directly proportional to the magnetic field affecting those nuclei, we can use widely available NMR spectroscopy data to analyse suitable substances in the Earth's magnetic field.
An important feature of EFNMR compared with high-field NMR is that some aspects of molecular structure can be observed more clearly at low fields and low frequencies, whereas other features observable at high fields may not be observable at low fields. This is because:
Electron-mediated heteronuclear J-couplings (spin–spin couplings) are field independent, producing clusters of two or more frequencies separated by several Hz, which are more easily observed in a fundamental resonance of about 2 kHz. "Indeed it appears that enhanced resolution is possible due to the long spin relaxation times and high field homogeneity which prevail in EFNMR."
Chemical shifts of several parts per million (ppm) are clearly separated in high field NMR spectra, but have separations of only a few millihertz at proton EFNMR frequencies, and so are undetectable in an experiment that takes place on a timescale of tenths of a second.
For more context and explanation of NMR principles, please refer to the main articles on NMR and NMR spectroscopy. For more detail see proton NMR and carbon-13 NMR.
Proton EFNMR frequencies
The geomagnetic field strength and hence precession frequency varies with location and time.
Larmor precession frequency = magnetogyric ratio × magnetic field
Proton magnetogyric ratio = 42.576 Hz/μT (also written 42.576 MHz/T or 0.042576 Hz/nT)
Earth's magnetic field: 30 μT near the Equator to 60 μT near the Poles, around 50 μT at mid-latitudes.
Thus proton (hydrogen nucleus) EFNMR frequencies are audio frequencies of about 1.3 kHz near the Equator to 2.5 kHz near the Poles, around 2 kHz being typical of mid-latitudes. In terms of the electromagnetic spectrum EFNMR frequencies are in the VLF and ULF radio frequency bands, and the audio-magnetotelluric (AMT) frequencies of geophysics.
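These two relations determine proton EFNMR frequencies completely. The sketch below evaluates them for the field strengths quoted above, and also inverts the relation, which is what a proton precession magnetometer does when computing field strength from a measured resonance frequency.

```python
GAMMA_PROTON = 42.576  # proton magnetogyric ratio, Hz per microtesla

def precession_frequency_hz(field_microtesla):
    return GAMMA_PROTON * field_microtesla

def field_microtesla(frequency_hz):
    # What a proton precession magnetometer computes from a measured FID.
    return frequency_hz / GAMMA_PROTON

for b in (30, 50, 60):  # near the Equator, mid-latitudes, near the Poles
    print(f"{b} uT -> {precession_frequency_hz(b):.0f} Hz")
# 30 uT -> 1277 Hz; 50 uT -> 2129 Hz; 60 uT -> 2555 Hz

print(f"{field_microtesla(2000):.1f} uT")  # a 2 kHz FID implies ~47.0 uT
```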
Examples of molecules containing hydrogen nuclei useful in proton EFNMR are water, hydrocarbons such as natural gas and petroleum, and carbohydrates such as occur in plants and animals.
See also
Rate of change of Earth's magnetic field
Zero field NMR
References
External links
TeachSpin EFNMR web site
Magritek EFNMR web site
Two dimensional EFNMR imaging
Earth's field NMR/MRI practical course, SS24 October 2009. Department of Physics, University of Oxford
NMR Using Earth’s Magnetic Field
Open source Earth's Field NMR Spectrometer
Magnetic Resonance Imaging System Based on Earth’s Magnetic Field
Applications of Earth’s Field NMR to porous systems and polymer gels
Geomagnetism
Nuclear magnetic resonance | Earth's field NMR | [
"Physics",
"Chemistry"
] | 1,620 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
11,434,205 | https://en.wikipedia.org/wiki/Mass%20flux | In physics and engineering, mass flux is the rate of mass flow per unit of area. Its SI units are kgs−1m−2. The common symbols are j, J, q, Q, φ, or Φ (Greek lowercase or capital Phi), sometimes with subscript m to indicate mass is the flowing quantity.
This flux quantity is also known simply as "mass flow". "Mass flux" can also refer to an alternate form of flux in Fick's law that includes the molecular mass, or in Darcy's law that includes the mass density.
Less commonly the defining equation for mass flux in this article is used interchangeably with the defining equation in mass flow rate.
Definition
Mathematically, mass flux is defined as the limit

$$j_m = \lim_{A \to 0} \frac{I_m}{A},$$

where $I_m = \frac{dm}{dt}$ is the mass current (flow of mass $m$ per unit time $t$) and $A$ is the area through which the mass flows.

For mass flux as a vector $\mathbf{j}_m$, the surface integral of it over a surface $S$, followed by an integral over the time duration $t_1$ to $t_2$, gives the total amount of mass flowing through the surface in that time ($t_2 - t_1$):

$$m = \int_{t_1}^{t_2} \iint_S \mathbf{j}_m \cdot \hat{\mathbf{n}} \, dA \, dt.$$
The area required to calculate the flux is real or imaginary, flat or curved, either as a cross-sectional area or a surface.
For example, for substances passing through a filter or a membrane, the real surface is the (generally curved) surface area of the filter, macroscopically - ignoring the area spanned by the holes in the filter/membrane. The spaces would be cross-sectional areas. For liquids passing through a pipe, the area is the cross-section of the pipe, at the section considered.
The vector area is a combination of the magnitude of the area through which the mass passes, $A$, and a unit vector normal to the area, $\hat{\mathbf{n}}$. The relation is $\mathbf{A} = A \hat{\mathbf{n}}$.

If the mass flux passes through the area at an angle $\theta$ to the area normal $\hat{\mathbf{n}}$, then

$$\mathbf{j}_m \cdot \hat{\mathbf{n}} = j_m \cos\theta,$$

where $\cdot$ is the dot product of the unit vectors. That is, the component of mass flux passing through the surface (i.e. normal to it) is $j_m \cos\theta$. While the component of mass flux passing tangential to the area is given by $j_m \sin\theta$, there is no mass flux actually passing through the area in the tangential direction. The only component of mass flux passing normal to the area is the cosine component.
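A quick numerical check of the cosine relation; the flux vector and surface normal below are arbitrary example values.

```python
import numpy as np

j = np.array([3.0, 4.0, 0.0])       # mass flux vector, kg s^-1 m^-2
n_hat = np.array([1.0, 0.0, 0.0])   # unit normal of the surface

normal = np.dot(j, n_hat)                        # j cos(theta) = 3.0
tangential = np.linalg.norm(j - normal * n_hat)  # j sin(theta) = 4.0
print(normal, tangential)  # only the 3.0 actually crosses the surface
```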
Example
Consider a pipe of flowing water. Suppose the pipe has a constant cross section and we consider a straight section of it (not at any bends/junctions), and the water is flowing steadily at a constant rate, under standard conditions. The area $A$ is the cross-sectional area of the pipe. Suppose the pipe has radius $r = 2\ \text{cm} = 2 \times 10^{-2}\ \text{m}$. The area is then

$$A = \pi r^2.$$

To calculate the mass flux $j_m$ (magnitude), we also need the amount of mass of water transferred through the area and the time taken. Suppose a volume $V = 1.5\ \text{L} = 1.5 \times 10^{-3}\ \text{m}^3$ passes through in time $t = 2\ \text{s}$. Assuming the density of water is $\rho = 1000\ \text{kg m}^{-3}$, we have

$$\Delta m = \rho V$$

(since the initial volume passing through the area was zero, the final is $V$, so the corresponding mass is $\rho V$), so the mass flux is

$$j_m = \frac{\rho V}{t A} = \frac{\rho V}{\pi t r^2}.$$

Substituting the numbers gives:

$$j_m = \frac{1000 \times 1.5 \times 10^{-3}}{2 \times \pi \left(2 \times 10^{-2}\right)^2}\ \text{kg s}^{-1}\,\text{m}^{-2},$$

which is approximately 596.8 kg s−1 m−2.
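The same computation as a short script, reproducing the number above:

```python
import math

rho = 1000.0   # density of water, kg/m^3
r = 2e-2       # pipe radius, m
V = 1.5e-3     # volume passed through, m^3
t = 2.0        # time taken, s

A = math.pi * r**2        # cross-sectional area of the pipe
j_m = rho * V / (t * A)   # mass flux magnitude
print(j_m)                # ~596.8 kg s^-1 m^-2
```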
Equations for fluids
Alternative equation
Using the vector definition, mass flux is also equal to:

$$\mathbf{j}_m = \rho \mathbf{u},$$

where:

$\rho$ = mass density,
$\mathbf{u}$ = velocity field of mass elements flowing (i.e. at each point in space the velocity of an element of matter is some velocity vector $\mathbf{u}$).

Sometimes this equation may be used to define $\mathbf{j}_m$ as a vector.
Mass and molar fluxes for composite fluids
Mass fluxes
In the case fluid is not pure, i.e. is a mixture of substances (technically contains a number of component substances), the mass fluxes must be considered separately for each component of the mixture.
When describing fluid flow (i.e. flow of matter), mass flux is appropriate. When describing particle transport (movement of a large number of particles), it is useful to use an analogous quantity, called the molar flux.
Using mass, the mass flux of component $i$ is

$$\mathbf{j}_{m,i} = \rho_i \mathbf{u}_i.$$

The barycentric mass flux of component $i$ is

$$\mathbf{j}_{m,i} = \rho_i \left( \mathbf{u}_i - \langle \mathbf{u} \rangle \right),$$

where $\langle \mathbf{u} \rangle$ is the average mass velocity of all the components in the mixture, given by

$$\langle \mathbf{u} \rangle = \frac{1}{\rho} \sum_i \rho_i \mathbf{u}_i,$$

where

$\rho$ = mass density of the entire mixture,
$\rho_i$ = mass density of component $i$,
$\mathbf{u}_i$ = velocity of component $i$.

The average is taken over the velocities of the components.
Molar fluxes
If we replace the density $\rho$ by the "molar density" (concentration) $c$, we have the molar flux analogues.

The molar flux is the number of moles per unit time per unit area, generally:

$$\mathbf{j}_n = c \mathbf{u}.$$

So the molar flux of component $i$ is (number of moles per unit time per unit area):

$$\mathbf{j}_{n,i} = c_i \mathbf{u}_i,$$

and the barycentric molar flux of component $i$ is

$$\mathbf{j}_{n,i} = c_i \left( \mathbf{u}_i - \langle \mathbf{u} \rangle \right),$$

where $\langle \mathbf{u} \rangle$ this time is the average molar velocity of all the components in the mixture, given by:

$$\langle \mathbf{u} \rangle = \frac{1}{c} \sum_i c_i \mathbf{u}_i.$$
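A small numerical sketch of the barycentric definitions for a two-component mixture (all values arbitrary); note that the barycentric fluxes sum to zero by construction:

```python
import numpy as np

rho_i = np.array([0.8, 0.2])     # component mass densities, kg/m^3
u_i = np.array([[1.0, 0.0],
                [3.0, 0.0]])     # component velocities, m/s

rho = rho_i.sum()                                  # mixture density
u_avg = (rho_i[:, None] * u_i).sum(axis=0) / rho   # average mass velocity

j_i = rho_i[:, None] * u_i                    # mass fluxes rho_i * u_i
j_i_bary = rho_i[:, None] * (u_i - u_avg)     # barycentric mass fluxes

print(u_avg)                 # [1.4 0. ]
print(j_i_bary.sum(axis=0))  # [0. 0.]  (barycentric fluxes cancel)
```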
Usage
Mass flux appears in some equations in hydrodynamics, in particular the continuity equation:

$$\nabla \cdot \mathbf{j}_m + \frac{\partial \rho}{\partial t} = 0,$$

which is a statement of the mass conservation of fluid. In hydrodynamics, mass can only flow from one place to another.

Molar flux occurs in Fick's first law of diffusion:

$$\mathbf{j}_n = -D \nabla c,$$

where $D$ is the diffusion coefficient.
See also
Mass-flux fraction
Flux
Fick's law
Darcy's law
Wave mass flux and wave momentum
Defining equation (physical chemistry)
Momentum density
Notes
References
Physical quantities
Vector calculus | Mass flux | [
"Physics",
"Mathematics"
] | 1,068 | [
"Physical phenomena",
"Quantity",
"Physical quantities",
"Physical properties"
] |
16,812,637 | https://en.wikipedia.org/wiki/European%20windstorm | European windstorms are powerful extratropical cyclones which form as cyclonic windstorms associated with areas of low atmospheric pressure. They can occur throughout the year, but are most frequent between October and March, with peak intensity in the winter months. Deep areas of low pressure are common over the North Atlantic, and occasionally start as nor'easters off the New England coast. They frequently track across the North Atlantic Ocean towards the north of Scotland and into the Norwegian Sea, which generally minimizes the impact to inland areas; however, if the track is further south, it may cause adverse weather conditions across Central Europe, Northern Europe and especially Western Europe. The countries most commonly affected include the United Kingdom, Ireland, the Netherlands, Norway, Germany, the Faroe Islands and Iceland.
The strong wind phenomena intrinsic to European windstorms, that give rise to "damage footprints" at the surface, can be placed into three categories, namely the "warm jet", the "cold jet" and the "sting jet". These phenomena vary in terms of physical mechanisms, atmospheric structure, spatial extent, duration, severity level, predictability and location relative to cyclone and fronts.
On average, these storms cause economic damage of around €1.9 billion per year and insurance losses of €1.4 billion per year (1990–1998). They cause the highest amount of natural catastrophe insurance loss in Europe.
Cyclogenesis
North Atlantic Oscillation
The state of the North Atlantic Oscillation (NAO) relates strongly to the frequency, intensity, and tracks of European windstorms. An enhanced number of storms has been noted over the North Atlantic region during positive NAO phases compared to negative phases, owing to larger areas of suitable growth conditions. The occurrence of extreme North Atlantic cyclones is aligned with the NAO state during the cyclones' development phase. The strongest storms are embedded within, and form in, large-scale atmospheric flow. Conversely, the cyclones themselves play a major role in steering the NAO phase. Aggregate European windstorm losses show a strong dependence on the NAO, with losses increasing or decreasing by 10–15% at all return periods.
Connection to North American cold spells
A connection between wintertime cold air outbreaks in North America and European windstorms has been hypothesized in recent years. Cold spells over Central Canada and the Eastern US appear to be associated with more frequent windstorms and flash floods over Iberia, whereas cold spells over Eastern Canada show a connection to windstorms over Northern Europe and the British Isles. The reason behind these teleconnections is not yet fully clear, but changes in the behaviour of the polar jet stream are likely to be at least related to this effect.
Clustering
Temporal clustering of windstorm events has also been noted, with eight consecutive storms hitting Europe during the winter of 1989/90. Cyclones Lothar and Martin in 1999 were separated by only 36 hours. Cyclone Kyrill in 2007 followed only four days after Cyclone Per. In November 2011, Cyclone Berit moved across Northern Europe, and just a day later another storm, named Yoda, hit the same area.
Nomenclature
Naming of individual storms
Up to the second half of the 19th century, European windstorms were usually named either by the year, the date, or the Saint's day of their occurrence. Although standardised naming schemes now exist, a storm may still be named differently in different countries. For instance, the Norwegian weather service also names independently notable storms that affect Norway, which can result in multiple names being used in different countries they affect, such as:
The 1999 storm called "Anatol" in Germany is known as the "December hurricane" or "Adam" in Denmark and as "Carola" in Sweden.
The 2011 storm called "Dagmar" in Norway and Sweden is known as "Patrick" in Germany and "Tapani" in Finland.
The 2013 event known as the St. Jude storm in the English media is known as Christian in German and French (following the Free University of Berlin's Adopt-a-Vortex program). It was named Simone by the Swedish Meteorological and Hydrological Institute, and referred to as the October storm in Danish and Dutch. It was later given the name Allan by the Danish Meteorological Institute following the political decision to name strong storms which affect Denmark.
In 2011, a social media campaign resulted in the storm officially called Cyclone Friedhelm being widely referred to as Hurricane Bawbag and Hurricane Fannybaws. Such usage of the term Hurricane is not without precedent, as the 1968 Scotland storm was referred to as "Hurricane Low Q".
UK and Ireland
The UK Met Office and Ireland's Met Éireann held discussions about developing a common naming system for Atlantic storms. In 2015 a pilot project by the two forecasters was launched as "Name our storms" which sought public participation in naming large-scale cyclonic windstorms affecting the UK and/or Ireland over the winter of 2015/16. The UK/Ireland storm naming system began its first operational season in 2015/2016, with Storm Abigail.
Germany
During 1954, Karla Wege, a student at the Free University of Berlin's meteorological institute, suggested that names should be assigned to all areas of low and high pressure that influenced the weather of Central Europe. The university subsequently started to name every area of high or low pressure within its weather forecasts, from a list of 260 male and 260 female names submitted by its students. The female names were assigned to areas of low pressure, while male names were assigned to areas of high pressure. The names were subsequently used exclusively by Berlin's media until February 1990, after which the German media started to use the names widely; however, they were not officially approved by the German Meteorological Service, the Deutscher Wetterdienst (DWD). The DWD subsequently banned the usage of the names by their offices during July 1991, after complaints had poured in about the naming system. However, the order was leaked to the German press agency, Deutsche Presse-Agentur, who ran it as its lead weather story. Germany's ZDF television channel subsequently ran a phone-in poll on 17 July 1991 and claimed that 72% of the 40,000 responses favoured keeping the names. This prompted the DWD to reconsider the naming system, which it now accepts and requests be maintained.
During 1998 a debate started about whether it was discriminatory to name areas of high pressure with male names and the areas of low pressure with female names. The issue was subsequently resolved by alternating male and female names each year. In November 2002 the "Adopt-a-Vortex" scheme began, which allows members of the public or companies to buy naming rights for a letter chosen by the buyer that are then assigned alphabetically to high and low pressure areas in Europe during each year. The naming comes with the slim chance that the system will be notable. The money raised by this is used by the meteorology department to maintain weather observations at the Free University.
Names are listed alphabetically beginning in January.
Name of phenomena
Several European languages use cognates of the word huracán (ouragan, uragano, orkan, huragan, orkaan, ураган, which may or may not be differentiated from tropical hurricanes in these languages) to indicate particularly strong cyclonic winds occurring in Europe. The term hurricane as applied to these storms is not in reference to the structurally different tropical cyclone of the same name, but to the hurricane strength of the wind on the Beaufort scale (winds ≥ 118 km/h or ≥ 73 mph).
In English, use of the term hurricane to refer to European windstorms is mostly discouraged, as these storms do not display the structure of tropical storms. Use of the French term ouragan is discouraged for the same reason, as it is typically reserved for tropical storms only. European windstorms in Latin Europe are generally referred to by derivatives of tempestas (tempest, tempête, tempestado, tempesta), meaning storm, weather, or season, from the Latin tempus, meaning time.
Globally storms of this type forming between 30° and 60° latitude are known as extratropical cyclones. The name European windstorm reflects that these storms in Europe are primarily notable for their strong winds and associated damage, which can span several nations on the continent. The strongest cyclones are called windstorms within academia and the insurance industry. The name European windstorm has not been adopted by the UK Met Office in broadcasts (though it is used in their academic research), the media or by the general public, and appears to have gained currency in academic and insurance circles as a linguistic and terminologically neutral name for the phenomena.
In contrast to some other European languages, there is no widely accepted name for these storms in English. The Met Office and UK media generally refer to these storms as severe gales. The current definition of severe gales (which warrant the issue of a weather warning) is repeated gusts of or more over inland areas. European windstorms are also described in forecasts variously as winter storms, winter lows, autumnal lows, Atlantic lows and cyclonic systems. They are also sometimes referred to as bullseye isobars and dartboard lows in reference to their appearance on weather charts. A Royal Society exhibition has used the name European cyclones, with North-Atlantic cyclone and North-Atlantic windstorms also being used. With the advent of the "Name our Storms" project, however, they are now generally known simply as storms.
Economic impact
Insurance losses
Insurance losses from European windstorms are the second greatest source of loss for any natural peril globally. Only Atlantic hurricanes in the United States are larger. Windstorm losses exceed those caused by flooding in Europe. For instance one windstorm, Kyrill in 2007, exceeded the losses of the 2007 United Kingdom floods.
On average, some 200,000 buildings are damaged by high winds in the UK every year.
Energy supplies
European windstorms can wipe out electrical generation capacity across large areas, making supplementation from abroad difficult (wind turbines shut down to avoid damage, and nuclear capacity may shut down if cooling water is contaminated or the power plant floods). Transmission capability can also be severely limited if power lines are brought down by snow, ice or high winds. In the wake of Cyclone Gudrun in 2005, Denmark and Latvia had difficulty importing electricity, and Sweden lost 25% of its total power capacity when the Ringhals and Barsebäck nuclear power plants were shut down.
During the Boxing Day Storm of 1998, the reactors at Hunterston B nuclear power station were shut down when power was lost, possibly due to arcing at pylons caused by salt spray from the sea. When the grid connection was restored, the generators that had powered the station during the blackout were shut down and left on "manual start", so when the power failed again the station was powered by batteries for around 30 minutes until the diesel generators were started manually. During this period the reactors were left without forced cooling, in a similar fashion to the Fukushima Daiichi nuclear disaster, but the event at Hunterston was rated level 2 on the International Nuclear Event Scale.
A year later, during the 1999 storm Lothar, flooding at the Blayais Nuclear Power Plant resulted in a "level 2" event on the International Nuclear Event Scale. Cyclones Lothar and Martin left 3.4 million customers in France without electricity and forced Électricité de France to acquire all the available portable power generators in Europe, with some even being brought in from Canada. These storms brought down a quarter of France's high-tension transmission lines, and 300 high-voltage transmission pylons were toppled. It was one of the greatest energy disruptions ever experienced by a modern developed country.
Following the Great Storm of 1987, the High Voltage Cross-Channel Link between the UK and France was interrupted, and the storm caused a domino effect of power outages throughout the Southeast of England. Conversely, windstorms can produce too much wind power. When Cyclone Xynthia hit Europe in 2010, Germany's 21,000 wind turbines generated 19,000 megawatts of electricity. This was more than consumers could use, and prices on the European Energy Exchange in Leipzig plummeted; grid operators ended up paying over 18 euros per megawatt-hour to offload the surplus, costing around half a million euros in total.
Disruption of the gas supply during Cyclone Dagmar in 2011 left Royal Dutch Shell's Ormen Lange gas processing plant in Norway inoperable after its electricity was cut off by the storm. This left gas supplies in the United Kingdom vulnerable, as the facility can supply up to 20 per cent of the United Kingdom's needs via the Langeled pipeline; however, the disruption came at a time of low demand. The same storm also affected the Leningrad Nuclear Power Plant, where algae and mud stirred up by the storm were sucked into the cooling system, resulting in one of the generators being shut down. A similar situation was reported in the wake of Storm Angus in 2016 (though not linked specifically to the storm) when reactor 1 at Torness Nuclear Power Station in Scotland was taken offline after a sea water intake tripped due to excess seaweed around the inlet. Also following Storm Angus, the UK's National Grid launched an investigation into whether a ship's anchor had damaged four of the eight cables of the cross-Channel high-voltage interconnector, which would leave it able to operate at only half of its capacity until February 2017.
Notable windstorms
Historic windstorms
Grote Mandrenke, 1362 – A southwesterly Atlantic gale swept across England, the Netherlands, northern Germany and southern Denmark, killing over 25,000 and changing the Dutch-German-Danish coastline.
Burchardi flood, 1634 – Also known as "second Grote Mandrenke", hit Nordfriesland, drowned about 8,000–15,000 people and destroyed the island of Strand.
Great Storm of 1703 – Severe gales affect south coast of England.
Night of the Big Wind, 1839 – The most severe windstorm to hit Ireland in recent centuries, with hurricane-force winds, killed between 250 and 300 people and rendered hundreds of thousands of homes uninhabitable.
Royal Charter Storm, 25–26 October 1859 – The Royal Charter Storm was considered to be the most severe storm to hit the British Isles in the 19th century, with a total death toll estimated at over 800. It takes its name from the ship Royal Charter, which was driven by the storm onto the east coast of Anglesey, Wales, with the loss of over 450 lives.
The Tay Bridge Disaster, 1879 – Severe gales (estimated to be Force 10–11) swept the east coast of Scotland, infamously resulting in the collapse of the Tay Rail Bridge and the loss of 75 people who were on board the ill-fated train.
1928 Thames flood, 6–7 January 1928 – Snow melt combined with heavy rainfall and a storm surge in the North Sea led to flooding in central London and the loss of 14 lives.
Severe storms since 1950
North Sea flood of 1953 – Considered to be the worst natural disaster of the 20th century both in the Netherlands and the United Kingdom, claiming over 2,500 lives, including 133 lost when the car ferry MV Princess Victoria sank in the North Channel east of Belfast.
Great Sheffield Gale and the North Sea flood of 1962 – Powerful windstorm crossed the United Kingdom, killing nine people and devastating the city of Sheffield with powerful winds. The storm then reached the German coast of the North Sea with wind speeds up to 200 km/h. The accompanying storm surge combined with the high tide pushed water up the Weser and Elbe, breaching dikes and caused extensive flooding, especially in Hamburg. 315 people were killed, around 60,000 were left homeless.
Gale of January 1976 2–5 January 1976 – Widespread wind damage was reported across Europe from Ireland to Central Europe. Coastal flooding occurred in the United Kingdom, Belgium and Germany with the highest storm surge of the 20th century recorded on the German North Sea coast.
1979 Fastnet Race – Force 10 to 11 storm forced the retirement or, in several cases, sinking of numerous yachts. Less than a third of the contesting boats finished with 19 killed.
Great Storm of 1987 – This storm affected southeastern England and northern France. In England, maximum mean wind speeds of 70 knots (averaged over 10 minutes) were recorded. The highest gust was recorded at Pointe du Raz in Brittany. In all, 19 people were killed in England and 4 in France. 15 million trees were uprooted in England.
1990 storm series – Between 25 January and 1 March 1990, eight severe storms crossed Europe including the Burns' Day storm (Daria), Vivian & Wiebke. The total costs resulting from these storms was estimated at almost €13 billion.
Braer Storm of January 1993 – the most intense storm of this kind on record.
Cyclones Lothar and Martin, 1999 – France, Switzerland and Germany were hit by the severe storms Lothar and Martin. 140 people were killed during the storms, which together left 3.4 million customers in France without electricity, one of the greatest energy disruptions ever experienced by a modern developed country. The total cost resulting from both storms was estimated at almost US$19.2 billion.
Kyrill, 2007 – Storm warnings were given for many countries in western, central and northern Europe with severe storm warnings for some areas. At least 53 people were killed in northern and central Europe, causing travel chaos across the region.
Xynthia, 2010 – A severe windstorm moved across the Canary Islands to Portugal and western and northern Spain, before moving on to hit south-western France. The highest gust speeds were recorded at Alto de Orduña. 50 people were reported to have died.
Storm David, 2018 – The storm caused an estimated €1.14–2.6 billion in damage. High wind gusts wreaked havoc in the UK, the Netherlands, Belgium, and Germany. The death toll reached 15.
Storm Eunice, 2022 – The storm killed 17 people in Europe, impacting the UK, the Netherlands, Belgium, France, Denmark, and Poland.
Storm Ciarán, 2023 – A severe windstorm that struck south-west England and north-western France in early November 2023. Powerful gusts were recorded at Pointe du Raz, Brittany, France. Many tornadoes were reported during the storm, especially in southern England, the Channel Islands and northern France.
Storm Ingunn, 2024 – An extremely powerful windstorm that brought winds of 155 mph to the Faroe Islands and prompted the issuance of a rare red warning for wind in Norway.
Most intense storms
See also
List of European windstorms
Beaufort scale
Tropical cyclone effects in Europe
Nor'easter
List of sting jet cyclones
List of severe weather phenomena
Pacific Northwest windstorm
European windstorm seasons
2024–25 European windstorm season
2023–24 European windstorm season
2022–23 European windstorm season
2021–22 European windstorm season
2020–21 European windstorm season
2019–20 European windstorm season
2018–19 European windstorm season
2017–18 European windstorm season
2016–17 UK and Ireland windstorm season
2015–16 UK and Ireland windstorm season
References
External links
Met Office, Winter Storms
Met Office, University of Exeter & University of Reading: Extreme Wind Storms Catalogue
Free University of Berlin low-pressure naming lists
European Windstorm Centre, An unofficial independent forecaster
Extratropical cyclones
Types of cyclone
Weather hazards
Use British English from May 2013 | European windstorm | [
"Physics"
] | 4,041 | [
"Weather",
"Physical phenomena",
"Weather hazards"
] |
16,815,031 | https://en.wikipedia.org/wiki/Calogero%E2%80%93Degasperis%E2%80%93Fokas%20equation | In mathematics, the Calogero–Degasperis–Fokas equation is the nonlinear partial differential equation
This equation was named after F. Calogero, A. Degasperis, and A. Fokas.
See also
Boomeron equation
Zoomeron equation
External links
Partial differential equations
Integrable systems | Calogero–Degasperis–Fokas equation | [
"Physics",
"Mathematics"
] | 67 | [
"Integrable systems",
"Mathematical analysis",
"Theoretical physics",
"Mathematical analysis stubs"
] |
16,818,152 | https://en.wikipedia.org/wiki/Demethylating%20agent | Demethylating agents are chemical substances that can inhibit methylation, resulting in the expression of the previously hypermethylated silenced genes (see Methylation#Cancer for more detail). Cytidine analogs such as 5-azacytidine (azacitidine) and 5-azadeoxycytidine (decitabine) are the most commonly used demethylating agents. They work by inhibiting DNA methyltransferases. Both compounds have been approved in the treatment of myelodysplastic syndrome (MDS) by Food and Drug Administration (FDA) in United States. Azacitidine and decitabine are marketed as Vidaza and Dacogen respectively. Azacitidine is the first drug to be approved by FDA for treating MDS and has been given orphan drug status. Procaine is a DNA-demethylating agent with growth-inhibitory effects in human cancer cells. There are many other demethylating agents that can be used to inhibit the growth of other diseases.
Mechanism of action
There is very little known about the mechanism of action of these drugs. However, it was shown in 2015 that a possible mechanism of action of these drugs in colorectal cancer-initiating cells is through activating dsRNA expression which leads to the activation of the MDA5/MAVS RNA recognition pathway inducing some sort of viral mimicry inside the cell.
Clinical applications
The silencing of genes by abnormal DNA methylation is a major contributor to the formation of cancerous tumors. Differences in DNA methylation between normal and malignant cells point to a prominent mechanism by which cancerous cells proliferate. These differences are particularly prevalent in cell cycle regulation, DNA repair, and natural tumor suppression mechanisms. A leading therapeutic strategy for treating solid tumors stems from the use of demethylating agents to suppress DNA methylation in cancerous growths. Azacitidine and decitabine are both frequently used demethylating agents, with decitabine significantly more potent in its demethylating ability. Both of these drugs are inhibitors of DNA methyltransferases (DNMTs), the enzymes responsible for methylating DNA. In the 1970s, these drugs showed promising results against hematological cancers in organisms such as mice. The FDA initially rejected clinical use of azacitidine because of negative side effects caused by elevated toxicity levels. However, in later clinical trials performed on patients with myelodysplastic syndromes (MDS), azacitidine proved effective and exhibited consistent results, which led to FDA approval in 2004 under the commercial name Vidaza. Decitabine, with the commercial name Dacogen, followed with FDA approval in 2006. As more research is completed in the field of genetic mutations, specifically involving DNA methylation, these drugs can be used to their maximum efficiency in the clinical treatment of cancerous tumors. As of 2017, no demethylating agents had been approved for the treatment of solid tumors, which may be a focus of future research. Treatment utilizing demethylating agents may have further clinical use by targeting cancer stem cells and triggering apoptosis. Demethylating agents have also been studied clinically as therapy for lymphocytic leukemia. Procaine can also be used therapeutically to inhibit the growth of cancer cells in humans. Demethylating agents thus open a wide range of possibilities for treating diseases such as leukemia and other cancers.
Procaine (PCA) is a demethylating agent considered to be effective in inhibiting the growth of human cancer cells. Several studies have explored and elucidated the effects of procaine on human liver cancer cells and breast cancer cells. Studies have shown that procaine, as an inhibitor of DNA methylation in breast cancer cells, can effectively cause hypomethylation and demethylation of the entire group of breast cancer cell DNA genomes by reducing 5-methylcytosine DNA content. In addition, procaine can effectively restore the gene expression of tumor suppressor genes by demethylating densely hypermethylated CpG-enriched DNA. For human liver cancer cells, procaine is capable of reducing tumor volume by suppressing the cell viability of HLE, HuH7, and HuH6 cells, and it has shown effective inhibition of S/G2/M transition in HLE cells.
See also
DNA methylation
DNA demethylation
References
Biochemistry
DNA
Methylation
Epigenetics | Demethylating agent | [
"Chemistry",
"Biology"
] | 944 | [
"Biochemistry",
"Methylation",
"nan"
] |
16,818,862 | https://en.wikipedia.org/wiki/USAF%20Stability%20and%20Control%20DATCOM | The United States Air Force Stability and Control DATCOM is a collection, correlation, codification, and recording of best knowledge, opinion, and judgment in the area of aerodynamic stability and control prediction methods. It presents substantiated techniques for use (1) early in the design or concept study phase, (2) to evaluate changes resulting from proposed engineering fixes, and (3) as a training or cross-training aid. It bridges the gap between theory and practice by including a combination of pertinent discussion and proven practical methods. For any given configuration and flight condition, a complete set of stability and control derivatives can be determined without resort to outside information.
A spectrum of methods is presented, ranging from very simple and easily applied techniques to quite accurate and thorough procedures. Comparatively simple methods are presented in complete form, while the more complex methods are often handled by reference to separate treatments. Tables which compare calculated results with test data provide indications of method accuracy. Extensive references to related material are also included.
The report was compiled from September 1975 to September 1977 by the McDonnell Douglas Corporation in conjunction with the engineers at the Flight Dynamics Laboratory at Wright-Patterson Air Force Base.
Methodology
Fundamentally, the purpose of the DATCOM (Data Compendium), is to provide a systematic summary of methods for estimating basic stability and control derivatives. The DATCOM is organized in such a way that it is self-sufficient. For any given flight condition and configuration the complete set of derivatives can be determined without resort to outside information. The book is intended to be used for preliminary design purposes before the acquisition of test data. The use of reliable test data in lieu of the DATCOM is always recommended. However, there are many cases where the DATCOM can be used to advantage in conjunction with test data.
For instance, if the lift-curve slope of a wing-body combination is desired, the DATCOM recommends that the lift-curve slopes of the isolated wing and body, respectively, be estimated by methods presented and that appropriate wing-body interference factors (also presented) be applied. If wing-alone test data are available, it is obvious that these test data should be substituted in place of the estimated wing-alone characteristics in determining the lift-curve slope of the combination. Also, if test data are available on a configuration similar to a given configuration, the characteristics of the similar configuration can be corrected to those for the given configuration by judiciously using the DATCOM material.
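As a rough illustration of this component build-up, the sketch below combines isolated wing and body lift-curve slopes with interference factors. The factor names and every numerical value are hypothetical placeholders for illustration only, not values read from DATCOM charts.

```python
# Hypothetical sketch of a DATCOM-style component build-up for the
# lift-curve slope of a wing-body combination. All numbers below are
# illustrative placeholders, not DATCOM data.

def wing_body_lift_curve_slope(cla_wing, cla_body, k_wing_on_body, k_body_on_wing):
    """Combine isolated-component lift-curve slopes (1/rad) with
    wing-body interference factors."""
    return k_wing_on_body * cla_wing + k_body_on_wing * cla_body

# Isolated-component estimates; wing-alone test data, if available,
# should be substituted for the estimated wing value.
cla_wing = 4.8            # wing-alone lift-curve slope, 1/rad
cla_body = 0.15           # body-alone contribution, 1/rad
k_wb, k_bw = 1.05, 0.20   # assumed interference factors

print(f"CL_alpha (wing-body) = "
      f"{wing_body_lift_curve_slope(cla_wing, cla_body, k_wb, k_bw):.3f} 1/rad")
```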
Sections
The DATCOM Manual is divided into 9 sections:
Guide to DATCOM and Methods Summary
General Information
Effects of External Stores
Characteristics at Angle of Attack
Characteristics in Sideslip
Characteristics of High-Lift and Control Devices
Dynamic Derivatives
Mass and Inertia
Characteristics of VTOL-STOL Aircraft
Implementation
Many university textbooks implement the DATCOM method of stability and control. Shortly before compilation of the DATCOM was completed, a computerized version called Digital DATCOM was created. The USAF S&C Digital DATCOM implements the DATCOM methods in an easy-to-use manner.
References
Hoak, D. E., et al., "The USAF Stability and Control DATCOM," Air Force Wright Aeronautical Laboratories, TR-83-3048, Oct. 1960 (Revised 1978).
Aerodynamics
Wright-Patterson Air Force Base | USAF Stability and Control DATCOM | [
"Chemistry",
"Engineering"
] | 664 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
16,819,755 | https://en.wikipedia.org/wiki/Membrane%20fouling | Membrane fouling is a process whereby a solution or a particle is deposited on a membrane surface or in membrane pores in processes such as a membrane bioreactor, reverse osmosis, forward osmosis, membrane distillation, ultrafiltration, microfiltration, or nanofiltration, such that the membrane's performance is degraded. It is a major obstacle to the widespread use of this technology. Membrane fouling can cause severe flux decline and affect the quality of the water produced. Severe fouling may require intense chemical cleaning or membrane replacement, which increases the operating costs of a treatment plant. There are various types of foulants: colloidal (clays, flocs), biological (bacteria, fungi), organic (oils, polyelectrolytes, humics) and scaling (mineral precipitates).
Fouling can be divided into reversible and irreversible fouling based on the attachment strength of particles to the membrane surface. Reversible fouling can be removed by a strong shear force or backwashing. During continuous filtration, however, the fouling layer can form a strong matrix with the solute, transforming reversible fouling into an irreversible layer. Irreversible fouling is the strong attachment of particles that cannot be removed by physical cleaning.
Influential factors
Recent fundamental studies indicate that membrane fouling is influenced by numerous factors, including system hydrodynamics, operating conditions, membrane properties, and material (solute) properties. At low pressure, low feed concentration, and high feed velocity, concentration polarization effects are minimal and flux is almost proportional to the trans-membrane pressure difference. In the high-pressure range, however, flux becomes almost independent of applied pressure; this deviation from the linear flux-pressure relation is due to concentration polarization. At low feed flow rates or high feed concentrations, the limiting-flux regime is observed even at relatively low pressures, as the sketch below illustrates.
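One common way to rationalize this pressure-independent limiting flux is the classical film model of concentration polarization, in which back-diffusion from the membrane wall balances convection. The snippet below is a minimal sketch of that model; the mass-transfer coefficient and concentrations are assumed illustrative values, not measurements.

```python
# Film-model sketch of concentration polarization: at the limiting flux,
# permeate flux becomes independent of pressure and is set by back-diffusion,
# J_lim = k * ln(c_wall / c_bulk). All values are illustrative assumptions.
import math

def limiting_flux(k_mass_transfer, c_wall, c_bulk):
    """Pressure-independent limiting flux from the film model (m/s)."""
    return k_mass_transfer * math.log(c_wall / c_bulk)

k = 2e-6        # mass-transfer coefficient, m/s (rises with feed velocity)
c_wall = 300.0  # wall (gel-layer) concentration, kg/m^3
c_bulk = 10.0   # bulk feed concentration, kg/m^3

print(f"J_lim = {limiting_flux(k, c_wall, c_bulk):.2e} m/s")
```

Consistent with the text, a higher feed velocity raises the mass-transfer coefficient (and hence the limiting flux), while a higher feed concentration lowers it.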
Measurement
Flux, transmembrane pressure (TMP), permeability, and resistance are the best indicators of membrane fouling. Under constant-flux operation, TMP increases to compensate for the fouling. Under constant-pressure operation, by contrast, flux declines as the membrane fouls. In some technologies, such as membrane distillation, fouling reduces membrane rejection, so permeate quality (e.g., as measured by electrical conductivity) is a primary measurement for fouling.
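These indicators are commonly tied together through Darcy's law in a resistance-in-series form, J = TMP / (mu * (R_m + R_f)). The sketch below estimates a fouling resistance from measured flux and TMP; the clean-membrane resistance and operating values are assumed for illustration.

```python
# Estimating fouling resistance from flux and TMP via Darcy's law
# (resistance-in-series model): J = TMP / (mu * (R_m + R_f)).
# Values are illustrative, not from any specific plant.

def fouling_resistance(tmp_pa, flux_m_s, mu_pa_s, r_membrane):
    """Total hydraulic resistance minus clean-membrane resistance, in 1/m."""
    r_total = tmp_pa / (mu_pa_s * flux_m_s)
    return r_total - r_membrane

tmp = 50e3          # transmembrane pressure, Pa
flux = 20 / 3.6e6   # 20 L/(m^2 h) converted to m/s
mu = 1.0e-3         # water viscosity, Pa s
r_m = 3e12          # assumed clean-membrane resistance, 1/m

print(f"R_fouling = {fouling_resistance(tmp, flux, mu, r_m):.3e} 1/m")
```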
Fouling control
Even though membrane fouling is an inevitable phenomenon during membrane filtration, it can be minimised by strategies such as cleaning, appropriate membrane selection and choice of operating conditions.
Membranes can be cleaned physically, biologically or chemically. Physical cleaning includes gas scour, sponges, water jets or backflushing using permeate or pressurized air. Biological cleaning uses biocides to remove all viable microorganisms, whereas chemical cleaning involves the use of acids and bases to remove foulants and impurities.
Additionally, researchers have investigated the impact different coatings have on resistance to wear. A 2018 study from the Global Aqua Innovation Center in Japan reported improved surface roughness properties of PA membranes by coating them with multi-walled carbon nanotubes.
Another strategy to minimise membrane fouling is the use of the appropriate membrane for a specific operation. The nature of the feed water must first be known; then a membrane that is less prone to fouling with that solution is chosen. For aqueous filtration, a hydrophilic membrane is preferred. For membrane distillation, a hydrophobic membrane is preferred.
Operating conditions during membrane filtration are also vital, as they may affect fouling conditions during filtration. For instance, crossflow filtration is often preferred to dead end filtration, because turbulence generated during the filtration entails a thinner deposit layer and therefore minimises fouling (e.g. tubular pinch effect). In some applications such as in many MBR applications, air scour is used to promote turbulence at the membrane surface.
Impact of fouling on the mechanical properties of membranes
Membrane performance can suffer from fouling-induced mechanical degradation, which may produce unwanted pressure and flux gradients in both the solute and the solvent. Membrane failure may be a direct consequence of fouling, through physical alterations to the membrane, or an indirect one, in which the foulant-removal processes themselves damage the membrane.
Direct impacts of fouling
Most commercially used membranes are polymers such as polyvinylidene fluoride (PVDF), polyacrylonitrile (PAN), polyethersulfone (PES) and polyamide (PA), materials whose elasticity and strength allow them to withstand constant osmotic pressures. The accumulation of foulants, however, degrades these properties through physical alterations to the membrane structure.
The accumulation of foulants can lead to the formation of cracks, surface roughening, and changes in pore-size distribution. These physical changes result from impacts of hard material on the soft polymer membrane, weakening its structural integrity. Degradation of the mechanical structure makes membranes more susceptible to mechanical damage, potentially reducing their overall lifespan. A 2006 study observed this degradation by uniaxially straining clean and fouled hollow fibers and reported relative embrittlement of the fouled fibers.
Indirect impacts of fouling
Beyond direct physical damage, the strategies used to combat fouling can also degrade membrane mechanical properties indirectly. Backwashing subjects not only the particulates but also the membrane to strong shear forces. Greater fouling frequency therefore exposes the membrane to cyclic loading, which can lead to fatigue failure: existing imperfections in the membrane, such as microcracks, grow and propagate under the complex stress-state dynamics. These effects have been documented; a 2007 study that simulated aging via cyclic backwash pulses reported similar embrittlement.
Additionally, repeated chemical treatment of fouling subjects membranes to excessive amounts of chlorine or other treatment chemicals which can cause degradation. This chemical degradation can lead to delamination of the membrane components, ultimately leading to failure.
See also
Vibratory shear-enhanced process
Water purification
References
Water technology
Fouling
Membrane technology | Membrane fouling | [
"Chemistry",
"Materials_science"
] | 1,321 | [
"Separation processes",
"Membrane technology",
"Water technology",
"Materials degradation",
"Fouling"
] |
16,820,016 | https://en.wikipedia.org/wiki/Solobacterium%20moorei | Solobacterium moorei is a bacterium that has been identified as a contributor to halitosis. It is a gram-positive anaerobic bacillus, erroneously known as Bulleidia moorei, in the family Erysipelotrichaceae of the order Erysipelotrichales. The species was identified by Kageyama and Benno in 2000, having previously been an unclassified Clostridium group, RCA59.
References
External links
Type strain of Solobacterium moorei at BacDive - the Bacterial Diversity Metadatabase
Gram-positive bacteria
Mollicutes
Bacteria described in 2000 | Solobacterium moorei | [
"Chemistry",
"Biology"
] | 126 | [] |
16,821,253 | https://en.wikipedia.org/wiki/CFD-DEM | The CFD-DEM model, or Computational Fluid Dynamics / Discrete Element Method model, is a process used to model or simulate systems combining fluids with solids or particles. In CFD-DEM, the motion of the discrete solid or particle phase is obtained by the Discrete Element Method (DEM), which applies Newton's laws of motion to every particle, while the flow of the continuum fluid is described by the local averaged Navier–Stokes equations that can be solved using the traditional Computational Fluid Dynamics (CFD) approach. The interactions between the fluid phase and the solid phase are modeled by use of Newton's third law.
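To make the two-way coupling concrete, the sketch below shows the particle side of such a scheme. The linear drag law, the placeholder fluid field, and all numbers are illustrative assumptions, not a specific published method; production codes use drag correlations such as Ergun or Wen-Yu and add particle-particle collision forces.

```python
# Minimal sketch of the DEM side of a CFD-DEM coupling: each particle is
# advanced with Newton's second law under gravity plus a fluid drag force
# based on the local averaged fluid velocity.
import numpy as np

n, dt = 1000, 1e-5                      # particle count, time step (s)
g = np.array([0.0, 0.0, -9.81])         # gravity (m/s^2)
mass, beta = 1.0e-6, 3.0e-4             # particle mass (kg), drag coeff. (kg/s)
x = np.random.rand(n, 3) * 0.01         # positions in a 1 cm box (m)
v = np.zeros((n, 3))                    # particle velocities (m/s)

def fluid_velocity_at(points):
    """Placeholder for interpolating CFD cell velocities to particle
    positions -- the fluid-to-particle half of the coupling."""
    return np.tile([0.0, 0.0, 0.1], (len(points), 1))

for _ in range(100):
    u = fluid_velocity_at(x)
    drag = beta * (u - v)               # force of the fluid on each particle
    v += dt * (g + drag / mass)         # Newton's second law
    x += dt * v
    # By Newton's third law, -drag would be accumulated per CFD cell as a
    # momentum source term in the averaged Navier-Stokes equations (omitted).
```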
The direct incorporation of CFD into DEM to study the gas fluidization process has so far been attempted by Tsuji et al. and, most recently, by Hoomans et al., Deb et al., and Peng et al. A recent overview of fields of application was given by Kieckhefen et al.
Parallelization
OpenMP has been shown by Amritkar et al. to be more efficient than MPI for performing coupled CFD-DEM calculations in a parallel framework. More recently, a multi-scale parallel strategy has been developed. Generally, the simulation domain is divided into many sub-domains, each computed by one process, with boundary information exchanged via MPI; within each sub-domain, the CPUs solve the fluid phase while general-purpose GPUs solve the particle motion. In this scheme, however, the CPUs and GPUs work in serial: the CPUs are idle while the GPUs compute the solid particles, and the GPUs are idle while the CPUs compute the fluid phase. To accelerate the computation further, the CPU and GPU work can be overlapped using the shared memory of a Linux system, so that the fluid phase and the particles are computed at the same time.
Reducing computation cost using coarse-grained particles
The computation cost of CFD-DEM is huge due to a large number of particles and small time steps to resolve particle-particle collisions. To reduce computation cost, many real particles can be lumped into a Coarse Grained Particle (CGP). The diameter of the CGP is calculated by the following equation:
where is the number of real particles in CGP. Then, the movement of CGPs can be tracked using DEM.
In simulations using coarse-grained particles, the real particles within a CGP are subjected to the same drag force, the same temperature, and the same species mass fractions. The momentum, heat and mass transfers between fluid and particles are first calculated using the diameter of the real particles and then scaled by a factor of $k$ (see the sketch below). The value of $k$ directly controls the trade-off between computation cost and accuracy. When $k$ is equal to unity, the simulation reduces to a standard DEM-based one, achieving results of the highest possible accuracy; as $k$ increases, the simulation speeds up drastically but its accuracy deteriorates. Apart from this speed gain, general criteria for selecting $k$ are not yet available. However, for systems with distinct mesoscale structures, such as bubbles and clusters, the parcel size should be small enough to resolve the deformation, aggregation, and breakage of those bubbles or clusters. Lumping particles together reduces the collision frequency, which directly influences the energy dissipation. To account for this error, Lu et al. proposed an effective restitution coefficient, based on the kinetic theory of granular flow, by assuming that the energy dissipation during collisions is identical in the original and coarse-grained systems.
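The following sketch shows the CGP bookkeeping implied by the volume-conserving diameter rule above; the particle diameter, $k$, and the per-particle drag value are illustrative assumptions.

```python
# Sketch of coarse-grained-particle (CGP) bookkeeping: k real particles of
# diameter d_p are lumped into one CGP of diameter k**(1/3) * d_p, and the
# per-particle fluid exchange terms are scaled by k. Illustrative values only.

def cgp_diameter(d_particle: float, k: int) -> float:
    """Diameter of a CGP lumping k real particles (volume conservation)."""
    return k ** (1.0 / 3.0) * d_particle

def scale_exchange(per_particle_term: float, k: int) -> float:
    """Momentum/heat/mass exchange: computed for one real particle,
    then multiplied by k for the whole CGP."""
    return k * per_particle_term

d_p, k = 100e-6, 1000                   # 100-micron particles, 1000 per CGP
print(cgp_diameter(d_p, k))             # -> 1e-3 m: a 1 mm CGP
print(scale_exchange(2.5e-9, k))        # assumed drag on one particle (N), scaled
```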
References
Computational physics | CFD-DEM | [
"Physics"
] | 728 | [
"Computational physics"
] |