id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
665,738 | https://en.wikipedia.org/wiki/Magnetic%20shape-memory%20alloy | A magnetic shape-memory alloy (MSMA) is a type of smart material that can undergo significant and reversible changes in shape in response to a magnetic field. This behavior arises due to a combination of magnetic and shape-memory properties within the alloy, allowing it to produce mechanical motion or force under magnetic actuation. MSMAs are commonly made from ferromagnetic materials, particularly nickel-manganese-gallium (Ni-Mn-Ga), and are useful in applications requiring rapid, controllable, and repeatable movement.
Introduction
MSM alloys are ferromagnetic materials that can produce motion and forces under moderate magnetic fields. Typically, MSMAs are alloys of Nickel, Manganese and Gallium (Ni-Mn-Ga).
A magnetically induced deformation of about 0.2% was presented in 1996 by Dr. Kari Ullakko and co-workers at MIT. Since then, improvements in the production process and in the subsequent treatment of the alloys have led to deformations of up to 6% for commercially available single-crystalline Ni-Mn-Ga MSM elements, as well as up to 10–12% and 20% for new alloys at the R&D stage.
The large magnetically induced strain, as well as the short response times, makes the MSM technology very attractive for the design of innovative actuators to be used in pneumatics, robotics, medical devices and mechatronics. MSM alloys change their magnetic properties depending on the deformation. This companion effect, which co-exists with the actuation, can be useful for the design of displacement, speed or force sensors and mechanical energy harvesters.
The magnetic shape memory effect occurs in the low temperature martensite phase of the alloy, where the elementary cells composing the alloy have tetragonal geometry. If the temperature is increased beyond the martensite–austenite transformation temperature, the alloy goes to the austenite phase where the elementary cells have cubic geometry. With such geometry the magnetic shape memory effect is lost.
The transition from martensite to austenite produces force and deformation. Therefore, MSM alloys can be also activated thermally, like thermal shape memory alloys (see, for instance, Nickel-Titanium (Ni-Ti) alloys).
The magnetic shape memory effect
The mechanism responsible for the large strain of MSM alloys is the so-called magnetically induced reorientation (MIR), and is sketched in the figure. Like other ferromagnetic materials, MSM alloys exhibit a macroscopic magnetization when subjected to an external magnetic field, emerging from the alignment of elementary magnetizations along the field direction. However, differently from standard ferromagnetic materials, the alignment is obtained by the geometric rotation of the elementary cells composing the alloy, and not by rotation of the magnetization vectors within the cells (like in magnetostriction).
A similar phenomenon occurs when the alloy is subjected to an external force. Macroscopically, the force acts like the magnetic field, favoring the rotation of the elementary cells and achieving elongation or contraction depending on its application within the reference coordinate system. The elongation and contraction processes are shown in the figure where, for example, the elongation is achieved magnetically and the contraction mechanically.
The rotation of the cells is a consequence of the large magnetic anisotropy of MSM alloys and the high mobility of their internal regions. Simply put, an MSM element is composed of internal regions, each having a different orientation of the elementary cells (the regions are shown in the figure in green and blue). These regions are called twin variants. The application of a magnetic field or of an external stress shifts the boundaries between the variants, called twin boundaries, and thus favors one variant or the other. When the element is completely contracted or completely elongated, it consists of only one variant and is said to be in a single-variant state. The magnetization of the MSM element along a fixed direction differs depending on whether the element is in the contracted or in the elongated single-variant state. The magnetic anisotropy is the difference between the energy required to magnetize the element in the contracted single-variant state and in the elongated single-variant state. The value of the anisotropy is related to the maximum work output of the MSM alloy, and thus to the available strain and force that can be used for applications.
Properties
The main properties of the MSM effect for commercially available elements are summarized below:
Strain up to 6%
Max. generated stress up to 3 MPa
Minimum magnetic field for maximum strain: 500 kA/m
Full strain (6%) up to 2 MPa load
Work output per unit volume of about 150 kJ/m³
Energetic efficiency (conversion between input magnetic energy and output mechanical work) about 90%
Internal friction stress of around 0.5 MPa
Magnetic and thermal activation
Operating temperatures between -40 and 60 °C
Change in magnetic permeability and electric resistivity during deformation
Fatigue Properties
The fatigue life of MSMAs is of particular interest for actuation applications due to the high frequency cycling, so improving the microstructure of these alloys has been of particular interest. Researchers have improved the fatigue life up to 2×10⁹ cycles with a maximum stress of 2 MPa, providing promising data to support real application of MSMAs in devices. Although high fatigue life has been demonstrated, this property has been found to be controlled by the internal twinning stress in the material, which is dependent on the crystal structure and twin boundaries. Additionally, inducing a fully strained (elongated or contracted) MSMA has been found to reduce fatigue life, so this must be taken into consideration when designing functional MSMA systems. In general, reducing defects such as surface roughness that cause stress concentration can increase the fatigue life and fracture resistance of MSMAs.
Development of the alloys
Standard alloys are Nickel-Manganese-Gallium (Ni-Mn-Ga) alloys, which have been investigated since the first report of a relevant MSM effect in 1996. Other alloys under investigation are Iron-Palladium (Fe-Pd) alloys, Nickel-Iron-Gallium (Ni-Fe-Ga) alloys, and several derivatives of the basic Ni-Mn-Ga alloy that additionally contain Iron (Fe), Cobalt (Co) or Copper (Cu). The main motivation behind the continuous development and testing of new alloys is to achieve improved thermo-magneto-mechanical properties, such as lower internal friction, a higher transformation temperature and a higher Curie temperature, which would allow the use of MSM alloys in several applications. In fact, the current temperature range of standard alloys extends up to about 50 °C. Recently, an 80 °C alloy has been presented.
Due to the twin boundary motion mechanism required for the magnetic shape memory effect to occur, the highest performing MSMAs in terms of maximum induced strain have been single crystals. Additive manufacturing has been demonstrated as a technique to produce porous polycrystalline MSMAs. As opposed to fully dense polycrystalline MSMAs, porous structures allow more freedom of motion, which reduces the internal stress required to activate martensitic twin boundary motion. Additionally, post-process heat treatments such as sintering and annealing have been found to significantly increase the hardness and reduce the elastic moduli of Ni-Mn-Ga alloys.
Applications
MSM actuator elements can be used where fast and precise motion is required. They are of interest due to the faster actuation using magnetic field as compared to the heating/cooling cycles required for conventional shape memory alloys, which also promises higher fatigue lifetime. Possible application fields are robotics, manufacturing, medical surgery, valves, dampers, sorting. MSMAs have been of particular interest in the application of actuators (i.e. microfluidic pumps for lab-on-a-chip devices) since they are capable of large force and stroke outputs in relatively small spatial regions. Also, due to the high fatigue life and their ability to produce electromotive forces from a magnetic flux, MSMAs are of interest in energy harvesting applications.
The twinning stress, or internal frictional stress, of an MSMA determines the efficiency of actuation, so the operation design of MSM actuators is based on the mechanical and magnetic properties of a given alloy; for example, the magnetic permeability of an MSMA is a function of strain. The most common MSM actuator design consists of an MSM element controlled by permanent magnets producing a rotating magnetic field and a spring restoring a mechanical force during the shape memory cycling. Limitations on the magnetic shape memory effect due to crystal defects determine the efficiency of MSMAs in applications. Since the MSM effect is also temperature dependent, these alloys can be tailored to shift the transition temperature by controlling microstructure and composition.
References
Smart materials | Magnetic shape-memory alloy | [
"Materials_science",
"Engineering"
] | 1,826 | [
"Smart materials",
"Materials science"
] |
17,876,651 | https://en.wikipedia.org/wiki/Predictor%E2%80%93corrector%20method | In numerical analysis, predictor–corrector methods belong to a class of algorithms designed to integrate ordinary differential equations, that is, to find an unknown function that satisfies a given differential equation. All such algorithms proceed in two steps:
The initial, "prediction" step, starts from a function fitted to the function-values and derivative-values at a preceding set of points to extrapolate ("anticipate") this function's value at a subsequent, new point.
The next, "corrector" step refines the initial approximation by using the predicted value of the function and another method to interpolate that unknown function's value at the same subsequent point.
Predictor–corrector methods for solving ODEs
When considering the numerical solution of ordinary differential equations (ODEs), a predictor–corrector method typically uses an explicit method for the predictor step and an implicit method for the corrector step.
Example: Euler method with the trapezoidal rule
A simple predictor–corrector method (known as Heun's method) can be constructed from the Euler method (an explicit method) and the trapezoidal rule (an implicit method).
Consider the differential equation

$$ y' = f(t, y), \qquad y(t_0) = y_0, $$

and denote the step size by $h$.

First, the predictor step: starting from the current value $y_i$, calculate an initial guess value $\tilde{y}_{i+1}$ via the Euler method,

$$ \tilde{y}_{i+1} = y_i + h f(t_i, y_i). $$

Next, the corrector step: improve the initial guess using the trapezoidal rule,

$$ y_{i+1} = y_i + \tfrac{1}{2} h \bigl( f(t_i, y_i) + f(t_{i+1}, \tilde{y}_{i+1}) \bigr). $$

That value, $y_{i+1}$, is then used as the starting point for the next step.
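As an illustration, the following is a minimal Python sketch of one step of this predictor–corrector pair; the function name `heun_step` and the test problem y' = -y are chosen for the example and are not part of the article.

```python
# Minimal sketch of one predictor-corrector (Heun) step for a scalar ODE y' = f(t, y).
def heun_step(f, t, y, h):
    f0 = f(t, y)                        # slope at the current point
    y_pred = y + h * f0                 # predictor: explicit Euler step
    f1 = f(t + h, y_pred)               # evaluate the slope at the predicted point
    return y + 0.5 * h * (f0 + f1)      # corrector: trapezoidal rule

# Usage example: integrate y' = -y, y(0) = 1 (exact solution exp(-t)).
f = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = heun_step(f, t, y, h)
    t += h
print(t, y)   # y should be close to exp(-1.0) ≈ 0.368
```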
PEC mode and PECE mode
There are different variants of a predictor–corrector method, depending on how often the corrector method is applied. The Predict–Evaluate–Correct–Evaluate (PECE) mode refers to the variant in the above example: after the corrector has been applied, the function f is evaluated once more, at the corrected value, and that evaluation is carried over to the next step.
It is also possible to evaluate the function f only once per step by using the method in Predict–Evaluate–Correct (PEC) mode: the evaluation of f at the predicted value is reused in the next step, and no evaluation at the corrected value is made.
Additionally, the corrector step can be repeated in the hope that this achieves an even better approximation to the true solution. If the corrector method is run twice, with a function evaluation after each corrector application, this yields the PECECE mode.
The PECEC mode has one fewer function evaluation than PECECE mode.
More generally, if the corrector is run k times, the method is said to be in P(EC)^k or P(EC)^kE mode. If the corrector method is iterated until it converges, this could be called PE(CE)^∞ mode.
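To illustrate how these modes differ in the number of function evaluations per step, here is a hedged Python sketch using the same Euler/trapezoidal pair as a stand-in; the function name `pec_k_step`, the parameter `k`, and the carried value `f_prev` are notational choices made for this example only.

```python
def pec_k_step(f, t, y, h, f_prev, k=2, final_eval=True):
    """One step in P(EC)^k mode, or P(EC)^kE mode if final_eval is True.

    f_prev is the stored evaluation of f carried over from the previous step,
    so the predictor itself costs no new function evaluation.
    """
    y_new = y + h * f_prev                        # P: predict with the stored slope
    for _ in range(k):                            # (EC)^k: evaluate, then correct, k times
        f_new = f(t + h, y_new)                   # E
        y_new = y + 0.5 * h * (f_prev + f_new)    # C: trapezoidal rule
    if final_eval:                                # optional trailing E at the corrected value
        f_new = f(t + h, y_new)
    return y_new, f_new                           # f_new is carried to the next step

# Usage example: y' = -y, y(0) = 1, in P(EC)^2 mode (no trailing evaluation).
f = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.1
f_prev = f(t, y)                                  # one initial evaluation starts the recursion
for _ in range(10):
    y, f_prev = pec_k_step(f, t, y, h, f_prev, k=2, final_eval=False)
    t += h
print(t, y)   # close to exp(-1.0) ≈ 0.368
```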
See also
Backward differentiation formula
Beeman's algorithm
Heun's method
Mehrotra predictor–corrector method
Numerical continuation
Notes
References
External links
Predictor–corrector methods for differential equations
Algorithms
Numerical analysis | Predictor–corrector method | [
"Mathematics"
] | 541 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
17,879,473 | https://en.wikipedia.org/wiki/Yuwen%20Zhang | Yuwen Zhang is a Chinese American professor of mechanical engineering who is well known for his contributions to phase change heat transfer. He is presently a Curators' Distinguished Professor and Huber and Helen Croft Chair in Engineering in the Department of Mechanical and Aerospace Engineering at the University of Missouri in Columbia, Missouri.
Early life and education
Yuwen Zhang was born in 1964 in Xiaoyi, Shanxi, China and spent his early life there until 1981 when he graduated from Chengguan High School and was admitted to college. He earned his B.E. degree in thermal turbomachinery, M.E. and D.Eng. degrees in engineering thermophysics from Xi'an Jiaotong University, in 1985, 1988 and 1991, respectively. He received a Ph.D. degree in mechanical engineering from the University of Connecticut in 1998.
Career
He taught at Xi'an Jiaotong University from 1991 to 1994 and was a research associate at Wright State University (1994-1995) and University of Connecticut (1995-1996). He was a research scientist at University of Connecticut (1999-2000) and a senior engineer at Thermoflow, Inc. (2000) before joining the Department of Mechanical Engineering at New Mexico State University as an assistant professor in 2001. He joined the faculty at the Department of Mechanical and Aerospace Engineering at the University of Missouri (MU) in 2003 as an associate professor and became a full professor in 2009. He was awarded a James C. Dowell Professorship in 2012 and served as the Department Chair from 2013 to 2017. He was named a Curators' Distinguished Professor in 2020 and a Huber and Helen Croft Chair in Engineering in 2021.
Technical contributions
Yuwen Zhang's research area is in the field of heat and mass transfer with applications in nanomanufacturing, thermal management, and energy storage and conversion. He has published over 300 journal papers and more than 180 conference publications at national and international conferences.
He has developed pioneering models for latent heat thermal energy storage systems, as well as multiscale, multiphysics models of additive manufacturing (AM), including selective laser sintering (SLS) and laser chemical vapor deposition/infiltration (LCVD/LCVI). He was the first to develop fundamental models of fluid flow and heat transfer in oscillating heat pipes, which are heat transfer devices used in the thermal management of electronic devices and energy systems. He carried out theoretical studies on femtosecond laser interaction with metallic and biological materials from molecular scales to system levels, and solved inverse heat transfer problems for the determination of the heating condition and/or temperature-dependent macro- and micro-scale thermophysical properties under uncertainty. He also investigated the mechanism of heat transfer enhancement in nanofluids, which are stable colloidal suspensions of solid nanomaterials with sizes typically on the order of 1-100 nm in a base fluid, via molecular dynamics (MD) simulations.
Thermal management and temperature uniformity improvement of Li-ion batteries using external and internal cooling methods were also systematically studied, utilizing pin-fin heat sinks and metal/non-metal foams, as well as electrolyte flow inside microchannels embedded in the porous electrodes as a novel internal cooling technique. Moreover, he has pioneered the application of AI and machine learning for the efficient and accurate solution of multiphase heat and mass transfer and inverse heat conduction problems.
Honors and awards
Fellow, American Society of Thermal and Fluids Engineers (ASTFE), 2024
Huber and Helen Croft Chair in Engineering, University of Missouri, 2021
Curators' Distinguished Professor, University of Missouri, 2020
Coulter Award, University of Missouri Coulter Translational Partnership Program, 2018
Fellow, American Association for the Advancement of Sciences (AAAS), 2015
James C. Dowell Professorship, University of Missouri, 2012
Certificate of Appreciation for Service as K-15 Committee Chair, ASME Heat Transfer Division, 2014
Certificate of Distinguished Service, American Institute of Aeronautics and Astronautics (AIAA), 2011
Missouri Honor Senior Faculty Research Award, College of Engineering at the University of Missouri, 2010
Chancellor's Award for Outstanding Research and Creative Activity, University of Missouri, 2010
Fellow of American Society of Mechanical Engineers (ASME), 2007
Associate Fellow of American Institute of Aeronautics and Astronautics (AIAA), 2008
Faculty Fellow Award, College of Engineering at University of Missouri, 2007
Computational Research Award, Department of Mechanical Engineering, New Mexico State University, 2003
Young Investigator Award, Office of Naval Research (one of 26 awarded nationally in all fields), 2002
References
External links
Biographical information at the University of Missouri
University of Missouri faculty
Educators from Columbia, Missouri
People from Columbia, Missouri
University of Connecticut alumni
Xi'an Jiaotong University alumni
New Mexico State University faculty
Fluid dynamicists
Thermodynamicists
Fellows of the American Society of Mechanical Engineers
Fellows of the American Association for the Advancement of Science
American mechanical engineers
Chinese emigrants to the United States
Chinese mechanical engineers
1965 births
Living people | Yuwen Zhang | [
"Physics",
"Chemistry"
] | 1,020 | [
"Fluid dynamics",
"Fluid dynamicists",
"Thermodynamics",
"Thermodynamicists"
] |
17,880,353 | https://en.wikipedia.org/wiki/Keulegan%E2%80%93Carpenter%20number | In fluid dynamics, the Keulegan–Carpenter number, also called the period number, is a dimensionless quantity describing the relative importance of the drag forces over inertia forces for bluff objects in an oscillatory fluid flow or, similarly, for objects that oscillate in a fluid at rest. For small Keulegan–Carpenter numbers inertia dominates, while for large numbers the (turbulence) drag forces are important.
The Keulegan–Carpenter number KC is defined as:

$$ K_C = \frac{V\,T}{L}, $$
where:
V is the amplitude of the flow velocity oscillation (or the amplitude of the object's velocity, in case of an oscillating object),
T is the period of the oscillation, and
L is a characteristic length scale of the object, for instance the diameter for a cylinder under wave loading.
The Keulegan–Carpenter number is named after Garbis H. Keulegan (1890–1989) and Lloyd H. Carpenter.
A closely related parameter, also often used for sediment transport under water waves, is the displacement parameter δ:

$$ \delta = \frac{A}{L}, $$

with A the excursion amplitude of fluid particles in oscillatory flow and L a characteristic diameter of the sediment material. For sinusoidal motion of the fluid, A is related to V and T as A = VT/(2π), and:

$$ K_C = 2\pi\,\delta. $$
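As a quick numerical illustration of these definitions, the following sketch computes both parameters directly; the wave and cylinder values are made up for the example and are not from the article.

```python
import math

V = 1.5        # flow velocity amplitude, m/s (assumed example value)
T = 8.0        # oscillation period, s
L = 0.6        # cylinder diameter, m

KC = V * T / L                    # Keulegan-Carpenter number
A = V * T / (2 * math.pi)         # excursion amplitude for sinusoidal motion
delta = A / L                     # displacement parameter, so KC = 2*pi*delta
print(KC, delta, 2 * math.pi * delta)   # 20.0, ~3.18, 20.0
```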
The Keulegan–Carpenter number can be directly related to the Navier–Stokes equations, by looking at characteristic scales for the acceleration terms:
convective acceleration: $(\mathbf{u} \cdot \nabla)\mathbf{u} \sim \dfrac{V^2}{L}$
local acceleration: $\dfrac{\partial \mathbf{u}}{\partial t} \sim \dfrac{V}{T}$
Dividing these two acceleration scales gives the Keulegan–Carpenter number.
A somewhat similar parameter is the Strouhal number, in form equal to the reciprocal of the Keulegan–Carpenter number. The Strouhal number gives the vortex shedding frequency resulting from placing an object in a steady flow, so it describes the flow unsteadiness as a result of an instability of the flow downstream of the object. Conversely, the Keulegan–Carpenter number is related to the oscillation frequency of an unsteady flow into which the object is placed.
See also
Morison equation
Notes
Bibliography
Dimensionless numbers of fluid mechanics
Fluid dynamics
Water waves
Marine engineering | Keulegan–Carpenter number | [
"Physics",
"Chemistry",
"Engineering"
] | 442 | [
"Physical phenomena",
"Water waves",
"Chemical engineering",
"Waves",
"Marine engineering",
"Piping",
"Fluid dynamics"
] |
17,884,227 | https://en.wikipedia.org/wiki/3-Oxopentanoic%20acid | 3-Oxopentanoic acid, or beta-ketopentanoate, is a 5-carbon ketone body. It is made from odd carbon fatty acids in the liver and rapidly enters the brain.
As opposed to 4-carbon ketone bodies, beta-ketopentanoate is anaplerotic, meaning it can refill the pool of TCA cycle intermediates. The triglyceride triheptanoin is used clinically to produce beta-ketopentanoate.
References
Beta-keto acids
Carboxylic acids | 3-Oxopentanoic acid | [
"Chemistry"
] | 117 | [
"Carboxylic acids",
"Functional groups"
] |
17,885,012 | https://en.wikipedia.org/wiki/Crystallographic%20database | A crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. Crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. They are characterized by symmetry, morphology, and directionally dependent physical properties. A crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. (Molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in X-ray, neutron, and electron diffraction based crystallography).
Crystal structures of crystalline material are typically determined from X-ray or neutron single-crystal diffraction data and stored in crystal structure databases. They are routinely identified by comparing reflection intensities and lattice spacings from X-ray powder diffraction data with entries in powder-diffraction fingerprinting databases.
Crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single-crystal electron diffraction data or structure factor amplitude and phase angle information from Fourier transforms of HRTEM images of crystallites. They are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice-fringe fingerprint plots with entries in a lattice-fringe fingerprinting database.
Crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. Many provide structure visualization capabilities. They can be browser based or installed locally. Newer versions are built on the relational database model and support the Crystallographic Information File (CIF) as a universal data exchange format.
Overview
Crystallographic data are primarily extracted from published scientific articles and supplementary material. Newer versions of crystallographic databases are built on the relational database model, which enables efficient cross-referencing of tables. Cross-referencing serves to derive additional data or enhance the search capacity of the database.
Data exchange among crystallographic databases, structure visualization software, and structure refinement programs has been facilitated by the emergence of the Crystallographic Information File (CIF) format. The CIF format is the standard file format for the exchange and archiving of crystallographic data.
It was adopted by the International Union of Crystallography (IUCr), which also provides full specifications of the format. It is supported by all major crystallographic databases.
The increasing automation of the crystal structure determination process has resulted in ever higher publishing rates of new crystal structures and, consequentially, new publishing models. Minimalistic articles contain only crystal structure tables, structure images, and, possibly, abstract-like structure description. They tend to be published in author-financed or subsidized open-access journals. Acta Crystallographica Section E and Zeitschrift für Kristallographie belong in this category. More elaborate contributions may go to traditional subscriber-financed journals. Hybrid journals, on the other hand, embed individual author-financed open-access articles among subscriber-financed ones. Publishers may also make scientific articles available online, as Portable Document Format (PDF) files.
Crystal structure data in CIF format are linked to scientific articles as supplementary material. CIFs may be accessible directly from the publisher's website, crystallographic databases, or both. In recent years, many publishers of crystallographic journals have come to interpret CIFs as formatted versions of open data, i.e. representing non-copyrightable facts, and therefore tend to make them freely available online, independent of the accessibility status of linked scientific articles.
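As a rough illustration of how structural data might be pulled from a CIF file, the following Python sketch reads only simple `_cell_*` tag-value lines; a real CIF parser must also handle loops, multi-line values and quoting, and the file name in the commented-out call is an assumption for the example.

```python
# Naive extraction of unit-cell parameters from simple "tag value" lines of a CIF file.
CELL_TAGS = {
    "_cell_length_a", "_cell_length_b", "_cell_length_c",
    "_cell_angle_alpha", "_cell_angle_beta", "_cell_angle_gamma",
}

def read_cell(path):
    cell = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 2 and parts[0] in CELL_TAGS:
                value = parts[1].split("(")[0]   # drop the standard uncertainty, e.g. "5.431(2)"
                cell[parts[0]] = float(value)
    return cell

# print(read_cell("example.cif"))   # e.g. {'_cell_length_a': 5.431, ...}
```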
Trends
As of 2008, more than 700,000 crystal structures had been published and stored in crystal structure databases. The publishing rate has reached more than 50,000 crystal structures per year. These numbers refer to published and republished crystal structures from experimental data. Crystal structures are republished owing to corrections for symmetry errors, improvements of lattice and atomic parameters, and differences in diffraction technique or experimental conditions. As of 2016, there are about 1,000,000 molecule and crystal structures known and published, approximately half of them in open access.
Crystal structures are typically categorized as minerals, metals-alloys, inorganics, organics, nucleic acids, and biological macromolecules. Individual crystal structure databases cater for users in specific chemical, molecular-biological, or related disciplines by covering super- or subsets of these categories. Minerals are a subset of mostly inorganic compounds. The category ‘metals-alloys’ covers metals, alloys, and intermetallics. Metals-alloys and inorganics can be merged into ‘non-organics’. Organic compounds and biological macromolecules are separated according to molecular size. Organic salts, organometallics, and metalloproteins tend to be attributed to organics or biological macromolecules, respectively. Nucleic acids are a subset of biological macromolecules.
Comprehensiveness can refer to the number of entries in a database. On those terms, a crystal structure database can be regarded as comprehensive, if it contains a collection of all (re-)published crystal structures in the category of interest and is updated frequently. Searching for structures in such a database can replace more time-consuming scanning of the open literature. Access to crystal structure databases differs widely. It can be divided into reading and writing access. Reading access rights (search, download) affect the number and range of users. Restricted reading access is often coupled with restricted usage rights. Writing access rights (upload, edit, delete), on the other hand, determine the number and range of contributors to the database. Restricted writing access is often coupled with high data integrity.
In terms of user numbers and daily access rates, comprehensive and thoroughly vetted open-access crystal structure databases naturally surpass comparable databases with more restricted access and usage rights. Independent of comprehensiveness, open-access crystal structure databases have spawned open-source software projects, such as search-analysis tools, visualization software, and derivative databases. Scientific progress has been slowed down by restricting access or usage rights as well as by limiting comprehensiveness or data integrity. Restricted access or usage rights are commonly associated with commercial crystal structure databases. Lack of comprehensiveness or data integrity, on the other hand, is associated with some of the open-access crystal structure databases other than the Crystallography Open Database (COD) and its macromolecular open-access counterpart, the worldwide Protein Data Bank. Apart from that, several crystal structure databases are freely available for primarily educational purposes, in particular mineralogical databases and educational offshoots of the COD.
Crystallographic databases can specialize in crystal structures, crystal phase identification, crystallization, crystal morphology, or various physical properties. More integrative databases combine several categories of compounds or specializations. Structures of incommensurate phases, 2D materials, nanocrystals, thin films on substrates, and predicted crystal structures are collected in tailored special structure databases.
Search
Search capacities of crystallographic databases differ widely. Basic functionality comprises search by keywords, physical properties, and chemical elements. Of particular importance is search by compound name and lattice parameters. Very useful are search options that allow the use of wildcard characters and logical connectives in search strings. If supported, the scope of the search can be constrained by the exclusion of certain chemical elements.
More sophisticated algorithms depend on the material type covered. Organic compounds might be searched for on the basis of certain molecular fragments. Inorganic compounds, on the other hand, might be of interest with regard to a certain type of coordination geometry. More advanced algorithms deal with conformation analysis (organics), supramolecular chemistry (organics), interpolyhedral connectivity (‘non-organics’) and higher-order molecular structures (biological macromolecules). Search algorithms used for a more complex analysis of physical properties, e.g. phase transitions or structure-property relationships, might apply group-theoretical concepts.
Modern versions of crystallographic databases are based on the relational database model. Communication with the database usually happens via a dialect of the Structured Query Language (SQL). Web-based databases typically process the search algorithm on the server interpreting supported scripting elements, while desktop-based databases run locally installed and usually precompiled search engines.
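As a sketch of the kind of relational query such a search can translate into, the following example uses Python's built-in sqlite3 module with an invented table schema and entries; an actual crystallographic database schema and SQL dialect will differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE structures
                (id INTEGER PRIMARY KEY, formula TEXT, spacegroup TEXT,
                 a REAL, b REAL, c REAL)""")
conn.executemany("INSERT INTO structures VALUES (?,?,?,?,?,?)",
                 [(1, "NaCl", "Fm-3m", 5.64, 5.64, 5.64),
                  (2, "Si",   "Fd-3m", 5.431, 5.431, 5.431)])

# Search by lattice parameter within a tolerance; the narrower the range, the better the match.
a0, tol = 5.43, 0.02
rows = conn.execute("SELECT id, formula FROM structures WHERE a BETWEEN ? AND ?",
                    (a0 - tol, a0 + tol)).fetchall()
print(rows)   # [(2, 'Si')]
```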
Crystal phase identification
Crystalline material may be divided into single crystals, twin crystals, polycrystals, and crystal powder. In a single crystal, the arrangement of atoms, ions, or molecules is defined by a single crystal structure in one orientation. Twin crystals, on the other hand, consist of single-crystalline twin domains, which are aligned by twin laws and separated by domain walls.
Polycrystals are made of a large number of small single crystals, or crystallites, held together by thin layers of amorphous solid. Crystal powder is obtained by grinding crystals, resulting in powder particles, made up of one or more crystallites. Both polycrystals and crystal powder consist of many crystallites with varying orientation.
Crystal phases are defined as regions with the same crystal structure, irrespective of orientation or twinning. Single and twinned crystalline specimens therefore constitute individual crystal phases. Polycrystalline or crystal powder samples may consist of more than one crystal phase. Such a phase comprises all the crystallites in the sample with the same crystal structure.
Crystal phases can be identified by successfully matching suitable crystallographic parameters with their counterparts in database entries. Prior knowledge of the chemical composition of the crystal phase can be used to reduce the number of database entries to a small selection of candidate structures and thus simplify the crystal phase identification process considerably.
Powder diffraction fingerprinting (1D)
Applying standard diffraction techniques to crystal powders or polycrystals is tantamount to collapsing the 3D reciprocal space, as obtained via single-crystal diffraction, onto a 1D axis. The resulting partial-to-total overlap of symmetry-independent reflections renders the structure determination process more difficult, if not impossible.
Powder diffraction data can be plotted as diffracted intensity (I) versus reciprocal lattice spacing (1/d). Reflection positions and intensities of known crystal phases, mostly from X-ray diffraction data, are stored, as d-I data pairs, in the Powder Diffraction File (PDF) database. The list of d-I data pairs is highly characteristic of a crystal phase and, thus, suitable for the identification, also called ‘fingerprinting’, of crystal phases.
Search-match algorithms compare selected test reflections of an unknown crystal phase with entries in the database. Intensity-driven algorithms utilize the three most intense lines (so-called ‘Hanawalt search’), while d-spacing-driven algorithms are based on the eight to ten largest d-spacings (so-called ‘Fink search’).
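A toy Python sketch of an intensity-driven ('Hanawalt-style') search-match follows; the stored d-I pairs, the tolerance and the phase names are invented for the illustration and are not real Powder Diffraction File entries.

```python
# Each phase is stored as a list of (d-spacing in angstrom, relative intensity) pairs.
reference = {
    "phase A": [(3.35, 100), (4.26, 22), (1.82, 14), (2.46, 8)],
    "phase B": [(2.82, 100), (1.99, 55), (1.63, 15), (3.26, 13)],
}

def top_lines(pattern, n=3):
    """Return the n most intense lines of a pattern."""
    return sorted(pattern, key=lambda di: -di[1])[:n]

def match_score(unknown, ref, d_tol=0.02):
    """Count how many of the unknown's strongest lines have a d-spacing match in ref."""
    return sum(any(abs(d - dr) <= d_tol for dr, _ in ref) for d, _ in top_lines(unknown))

unknown = [(2.81, 100), (1.99, 60), (3.10, 20), (1.63, 18)]
best = max(reference, key=lambda name: match_score(unknown, reference[name]))
print(best)   # 'phase B': two of its three strongest lines match within tolerance
```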
X-ray powder diffraction fingerprinting has become the standard tool for the identification of single or multiple crystal phases and is widely used in such fields as metallurgy, mineralogy, forensic science, archeology, condensed matter physics, and the biological and pharmaceutical sciences.
Lattice-fringe fingerprinting (2D)
Powder diffraction patterns of very small single crystals, or crystallites, are subject to size-dependent peak broadening, which, below a certain size, renders powder diffraction fingerprinting useless. In this case, peak resolution is only possible in 3D reciprocal space,
i.e. by applying single-crystal electron diffraction techniques.
High-Resolution Transmission Electron Microscopy (HRTEM) provides images and diffraction patterns of nanometer sized crystallites. Fourier transforms of HRTEM images and electron diffraction patterns both supply information about the projected reciprocal lattice geometry
for a certain crystal orientation, where the projection axis coincides with the optical axis of the microscope.
Projected lattice geometries can be represented by so-called ‘lattice-fringe fingerprint plots’ (LFFPs), also called angular covariance plots. The horizontal axis of such a plot is given in reciprocal lattice length and is limited by the point resolution of the microscope. The vertical axis is defined as acute angle between Fourier transformed lattice fringes or electron diffraction spots. A 2D data point is defined by the length of a reciprocal lattice vector and its (acute) angle with another reciprocal lattice vector. Sets of 2D data points that obey Weiss's zone law are subsets of the entirety of data points in an LFFP. A suitable search-match algorithm using LFFPs, therefore, tries to find matching zone axis subsets in the database. It is, essentially, a variant of a lattice matching algorithm.
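The following Python sketch indicates how the 2D data points of a lattice-fringe fingerprint plot could be generated for a single zone-axis projection; the projected reciprocal basis vectors and the index range are invented for the example.

```python
import numpy as np
from itertools import product, combinations

# Projected reciprocal basis of one zone (invented values, in 1/nm).
g1 = np.array([2.5, 0.0])
g2 = np.array([0.8, 2.3])

# Enumerate a few low-index reciprocal lattice vectors belonging to this zone.
vectors = [h * g1 + k * g2 for h, k in product(range(-2, 3), repeat=2) if (h, k) != (0, 0)]

points = []
for u, v in combinations(vectors, 2):
    cosang = np.clip(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)), -1.0, 1.0)
    angle = np.degrees(np.arccos(cosang))
    angle = min(angle, 180.0 - angle)            # keep the acute angle
    points.append((np.linalg.norm(u), angle))    # (reciprocal length, acute angle) data point

print(len(points), points[:3])
```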
In the case of electron diffraction patterns, structure factor amplitudes can be used, in a later step, to further discern among a selection of candidate structures (so-called 'structure factor fingerprinting'). Structure factor amplitudes from electron diffraction data are far less reliable than their counterparts from X-ray single-crystal and powder diffraction data. Existing precession electron diffraction techniques greatly improve the quality of structure factor amplitudes, increase their number and, thus, make structure factor amplitude information much more useful for the fingerprinting process.
Fourier transforms of HRTEM images, on the other hand, supply information not only about the projected reciprocal lattice geometry and structure factor amplitudes, but also structure factor phase angles. After crystallographic image processing, structure factor phase angles are far more reliable than structure factor amplitudes. Further discernment of candidate structures is then mainly based on structure factor phase angles and, to a lesser extent, structure factor amplitudes (so-called 'structure factor fingerprinting').
Morphological fingerprinting (3D)
The Generalized Steno Law states that the interfacial angles between identical faces of any single crystal of the same material are, by nature, restricted to the same value. This offers the opportunity to fingerprint crystalline materials on the basis of optical goniometry, which is also known as crystallometry. In order to employ this technique successfully, one must consider the observed point group symmetry of the measured faces and creatively apply the rule that "crystal morphologies are often combinations of simple (i.e. low multiplicity) forms where the individual faces have the lowest possible Miller indices for any given zone axis". This shall ensure that the correct indexing of the crystal faces is obtained for any single crystal.
It is in many cases possible to derive the ratios of the crystal axes for crystals with low symmetry from optical goniometry with high accuracy and precision and to identify a crystalline material on their basis alone employing databases such as 'Crystal Data'. Provided that the crystal faces have been correctly indexed and the interfacial angles were measured to better than a few fractions of a tenth of a degree, a crystalline material can be identified quite unambiguously on the basis of angle comparisons to two rather comprehensive databases: the 'Bestimmungstabellen für Kristalle (Определитель Кристаллов)' and the 'Barker Index of Crystals'.
Since Steno's Law can be further generalized for a single crystal of any material to include the angles between either all identically indexed net planes (i.e. vectors of the reciprocal lattice, also known as 'potential reflections in diffraction experiments') or all identically indexed lattice directions (i.e. vectors of the direct lattice, also known as zone axes), opportunities exist for morphological fingerprinting of nanocrystals in the transmission electron microscope (TEM) by means of transmission electron goniometry.
The specimen goniometer of a TEM is thereby employed analogously to the goniometer head of an optical goniometer. The optical axis of the TEM is then analogous to the reference direction of an optical goniometer. While in optical goniometry net-plane normals (reciprocal lattice vectors) need to be successively aligned parallel to the reference direction of an optical goniometer in order to derive measurements of interfacial angles, the corresponding alignment needs to be done for zone axes (direct lattice vector) in transmission electron goniometry. (Note that such alignments are by their nature quite trivial for nanocrystals in a TEM after the microscope has been aligned by standard procedures.)
Since transmission electron goniometry is based on Bragg's Law for the transmission (Laue) case (diffraction of electron waves), interzonal angles (i.e. angles between lattice directions) can be measured by a procedure that is analogous to the measurement of interfacial angles in an optical goniometer on the basis of Snell's Law, i.e. the reflection of light. The complements to interfacial angles of external crystal faces can, on the other hand, be directly measured from a zone-axis diffraction pattern or from the Fourier transform of a high resolution TEM image that shows crossed lattice fringes.
Lattice matching (3D)
Lattice parameters of unknown crystal phases can be obtained from X-ray, neutron, or electron diffraction data. Single-crystal diffraction experiments supply orientation matrices, from which lattice parameters can be deduced. Alternatively, lattice parameters can be obtained from powder or polycrystal diffraction data via profile fitting without structural model (so-called 'Le Bail method').
Arbitrarily defined unit cells can be transformed to a standard setting and, from there, further reduced to a primitive smallest cell. Sophisticated algorithms compare such reduced cells with corresponding database entries. More powerful algorithms also consider derivative super- and subcells. The lattice-matching process can be further sped up by precalculating and storing reduced cells for all entries. The algorithm searches for matches within a certain range of the lattice parameters. More accurate lattice parameters allow a narrower range and, thus, a better match.
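A schematic Python sketch of the final comparison step, matching a query cell against stored reduced cells within a tolerance window, is given below; the database entries and tolerances are illustrative assumptions.

```python
# Each entry: name -> reduced cell (a, b, c in angstrom; alpha, beta, gamma in degrees).
database = {
    "rock salt": (3.99, 3.99, 3.99, 60.0, 60.0, 60.0),   # primitive cell of fcc NaCl
    "rutile":    (4.59, 4.59, 2.96, 90.0, 90.0, 90.0),
}

def cell_matches(query, entry, len_tol=0.02, ang_tol=0.5):
    """Accept if all lengths agree within len_tol and all angles within ang_tol."""
    return (all(abs(q - e) <= len_tol for q, e in zip(query[:3], entry[:3])) and
            all(abs(q - e) <= ang_tol for q, e in zip(query[3:], entry[3:])))

query = (4.60, 4.58, 2.95, 90.0, 90.0, 90.0)
hits = [name for name, cell in database.items() if cell_matches(query, cell)]
print(hits)   # ['rutile']
```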
Lattice matching is useful in identifying crystal phases in the early stages of single-crystal
diffraction experiments and, thus, avoiding unnecessary full data collection and structure determination procedures for already known crystal structures. The method is particularly important for single-crystalline samples that need to be preserved. If, on the other hand, some or all of the crystalline sample material can be ground, powder diffraction fingerprinting is usually the better option for crystal phase identification, provided that the peak resolution is good enough. However, lattice matching algorithms are still better at treating derivative super- and subcells.
Visualization
Newer versions of crystal structure databases integrate the visualization of crystal and molecular structures. Specialized or integrative crystallographic databases may provide morphology or tensor visualization output.
Crystal structures
The crystal structure describes the three-dimensional periodic arrangement of atoms, ions, or molecules in a crystal. The unit cell represents the simplest repeating unit of the crystal structure. It is a parallelepiped containing a certain spatial arrangement of atoms, ions, molecules, or molecular fragments. From the unit cell the crystal structure can be fully reconstructed via translations.
The visualization of a crystal structure can be reduced to the arrangement of atoms, ions, or molecules in the unit cell, with or without cell outlines. Structure elements extending beyond single unit cells, such as isolated molecular or polyhedral units as well as chain, net, or framework structures, can often be better understood by extending the structure representation into adjacent cells.
The space group of a crystal is a mathematical description of the symmetry inherent in the structure. The motif of the crystal structure is given by the asymmetric unit, a minimal subset of the unit cell contents. The unit cell contents can be fully reconstructed via the symmetry operations of the space group on the asymmetric unit. Visualization interfaces usually allow for switching between asymmetric unit and full structure representations.
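As a minimal sketch of this reconstruction step, the following Python example applies the symmetry operations of the centrosymmetric space group P-1 (identity and inversion) to an invented one-atom asymmetric unit; real space groups have more operations and require care with atoms on special positions.

```python
import numpy as np

# Symmetry operations of space group P-1, given as (rotation matrix, translation vector).
ops = [(np.eye(3), np.zeros(3)), (-np.eye(3), np.zeros(3))]

asymmetric_unit = {"O1": np.array([0.13, 0.27, 0.40])}   # fractional coordinates (invented)

full_cell = []
for name, xyz in asymmetric_unit.items():
    seen = []
    for R, t in ops:
        pos = (R @ xyz + t) % 1.0                         # map the image back into the unit cell
        if not any(np.allclose(pos, p, atol=1e-6) for p in seen):
            seen.append(pos)
    full_cell.extend((name, p) for p in seen)

print(full_cell)   # two symmetry-equivalent positions for O1
```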
Bonds between atoms or ions can be identified by characteristic short distances between them. They can be classified as covalent, ionic, hydrogen, or other bonds including hybrid forms. Bond angles can be deduced from the bond vectors in groups of atoms or ions. Bond distances and angles can be made available to the user in tabular form or interactively, by selecting pairs or groups of atoms or ions. In ball-and-stick models of crystal structures, balls represent atoms and sticks represent bonds.
Since organic chemists are particularly interested in molecular structures, it might be useful to be able to single out individual molecular units interactively from the drawing. Organic molecular units need to be given both as 2D structural formulae and full 3D molecular structures. Molecules on special-symmetry positions need to be reconstructed from the asymmetric unit. Protein crystallographers are interested in molecular structures of biological macromolecules, so that provisions need to be made to be able to represent molecular subunits as helices, sheets, or coils, respectively.
Crystal structure visualization can be integrated into a crystallographic database. Alternatively, the crystal structure data are exchanged between the database and the visualization software, preferably using the CIF format. Web-based crystallographic databases can integrate crystal structure visualization capability. Depending on the complexity of the structure, lighting, and 3D effects, crystal structure visualization can require a significant amount of processing power, which is why the actual visualization is typically run on the client.
Currently, web-integrated crystal structure visualization is based on Java applets from open-source projects such as Jmol. Web-integrated crystal structure visualization is tailored for examining crystal structures in web browsers, often supporting wide color spectra (up to 32 bit) and window size adaptation. However, web-generated crystal structure images are not always suitable for publishing due to issues such as resolution depth, color choice, grayscale contrast, or labeling (positioning, font type, font size).
Morphology and physical properties
Mineralogists, in particular, are interested in morphological appearances of individual crystals, as defined by the actually formed crystal faces (tracht) and their relative sizes (habit). More advanced visualization capabilities allow for displaying surface characteristics, imperfections inside the crystal, lighting (reflection, shadow, and translucency), and 3D effects (interactive rotatability, perspective, and stereo viewing).
Crystal physicists, in particular, are interested in anisotropic physical properties of crystals. The directional dependence of a crystal's physical property is described by a 3D tensor and depends on the orientation of the crystal. Tensor shapes are more palpable by adding lighting effects (reflection and shadow). 2D sections of interest are selected for display by rotating the tensor interactively around one or more axes.
Crystal morphology or physical property data can be stored in specialized databases or added to more comprehensive crystal structure databases. The Crystal Morphology Database (CMD) is an example for a web-based crystal morphology database with integrated visualization capabilities.
See also
Chemical database
Biological database
References
External links
Crystal structures
American Mineralogist Crystal Structure Database (AMCSD) (contents: crystal structures of minerals, access: free, size: large)
Cambridge Structural Database (CSD) (contents: crystal structures of organics and metal-organics, access: restricted, size: very large)
Crystallography Open Database (COD) (contents: crystal structures of organics, metalorganics, minerals, inorganics, metals, alloys, and intermetallics, access: free, size: very large)
COD+ (Web Interface for COD) (contents: crystal structures of organics, metalorganics, minerals, inorganics, metals, alloys, and intermetallics, access: free, size: very large)
Database of Zeolite Structures (contents: crystal structures of zeolites, access: free, size: small)
Incommensurate Structures Database (contents: incommensurate structures, access: free, size: small)
Inorganic Crystal Structure Database (ICSD) (contents: crystal structures of minerals and inorganics, access: restricted, size: large)
MaterialsProject Database (contents: crystal structures of inorganic compounds, access: free, size: large)
Materials Platform for Data Science (MPDS) or PAULING FILE (contents: critically evaluated crystal structures, as well as physical properties and phase diagrams, from the world scientific literature, access: partially free, size: very large)
MaterialsWeb Database (contents: crystal structures of inorganic 2D materials and bulk compounds, access: free, size: large)
Metals Structure Database (CRYSTMET) (contents: crystal structures of metals, alloys, and intermetallics, access: restricted, size: large)
Mineralogy Database (contents: crystal structures of minerals, access: free, size: medium)
MinCryst (contents: crystal structures of minerals, access: free, size: medium)
NIST Structural Database NIST Structural Database (contents: crystal structures of metals, alloys, and intermetallics, access: restricted, size: large)
NIST Surface Structure Database (contents: surface and interface structures, access: restricted, size: small-medium)
Nucleic Acid Database (contents: crystal and molecular structures of nucleic acids, access: free, size: medium)
Pearson's Crystal Data (contents: crystal structures of inorganics, minerals, salts, oxides, hydrides, metals, alloys, and intermetallics, access: restricted, size: very large)
Worldwide Protein Data Bank (PDB) (contents: crystal and molecular structures of biological macromolecules, access: free, size: very large)
Wiki Crystallography Database (WCD) (contents: crystal structures of organics, metalorganics, minerals, inorganics, metals, alloys, and intermetallics, access: free, size: medium)
Crystal phase identification
Match! (method: powder diffraction fingerprinting)
NIST Crystal Data (method: lattice matching)
Powder Diffraction File (PDF) (method: powder diffraction fingerprinting)
Specialized databases
Educational Subset of the Crystallography Open Database (EDU-COD) (specialization: crystal and molecule structures for college education, access: free, size: medium)
Biological Macromolecule Crystallization Database (BMCD) (specialization: crystallization of biological macromolecules, access: free, size: medium)
Crystal Morphology Database (CMD) (specialization: morphology of crystals, access: free, size: very small)
Database of Hypothetical Structures (specialization: predicted zeolite-like crystal structures, access: free, size: large)
Database of Zeolite Structures (specialization: crystal structures of zeolites, access: free, size: small)
Hypothetical MOFs Database (specialization: predicted metal-organic framework crystal structures, access: free, size: large)
Incommensurate Structures Database (specialization: incommensurate structures, access: free, size: small)
Marseille Protein Crystallization Database (MPCD) (specialization: crystallization of biological macromolecules, access: free, size: medium)
MOFomics (specialization: pore structures of metal-organic frameworks, access: free, size: medium)
Nano-Crystallography Database (NCD) (specialization: crystal structures of nanometer sized crystallites, access: free, size: small)
NIST Surface Structure Database (specialization: surface and interface structures, access: restricted, size: small-medium)
Predicted Crystallography Open Database (PCOD) (specialization: predicted crystal structures of organics, metal-organics, metals, alloys, intermetallics, and inorganics, access: free, size: very large)
Theoretical Crystallography Open Database (TCOD) (specialization: crystal structures of organics, metal-organics, metals, alloys, intermetallics, and inorganics that were refined or predicted from density functional theory with some experimental input, access: free, size: small)
ZEOMICS (specialization: pore structures of zeolites, access: free, size: small)
Physical chemistry | Crystallographic database | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,778 | [
"Applied and interdisciplinary physics",
"Crystallographic databases",
"Crystallography",
"nan",
"Physical chemistry"
] |
17,885,310 | https://en.wikipedia.org/wiki/Dantzig%E2%80%93Wolfe%20decomposition | Dantzig–Wolfe decomposition is an algorithm for solving linear programming problems with special structure. It was originally developed by George Dantzig and Philip Wolfe and initially published in 1960. Many texts on linear programming have sections dedicated to discussing this decomposition algorithm.
Dantzig–Wolfe decomposition relies on delayed column generation for improving the tractability of large-scale linear programs. For most linear programs solved via the revised simplex algorithm, at each step, most columns (variables) are not in the basis. In such a scheme, a master problem containing at least the currently active columns (the basis) uses a subproblem or subproblems to generate columns for entry into the basis such that their inclusion improves the objective function.
Required form
In order to use Dantzig–Wolfe decomposition, the constraint matrix of the linear program must have a specific form. A set of constraints must be identified as "connecting", "coupling", or "complicating" constraints wherein many of the variables contained in the constraints have non-zero coefficients. The remaining constraints need to be grouped into independent submatrices such that if a variable has a non-zero coefficient within one submatrix, it will not have a non-zero coefficient in another submatrix. In this block-angular form, the D matrix represents the coupling constraints and each F_i represents an independent submatrix. Note that it is possible to run the algorithm when there is only one F submatrix.
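A small numpy/scipy sketch of assembling a constraint matrix in this block-angular form is shown below; the block sizes and entries are random placeholders used only to illustrate the structure.

```python
# Sketch of the block-angular ("required") constraint-matrix form with illustrative random blocks.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
D_blocks = [rng.integers(0, 3, size=(2, 3)) for _ in range(3)]   # coupling rows, one slice per subproblem
F_blocks = [rng.integers(0, 3, size=(4, 3)) for _ in range(3)]   # independent subproblem blocks

coupling = np.hstack(D_blocks)       # D = [D1 D2 D3] spans all variables
blocks = block_diag(*F_blocks)       # each F_i touches only its own variables
A = np.vstack([coupling, blocks])    # full constraint matrix in Dantzig-Wolfe form
print(A.shape)                       # (2 + 3*4, 3*3) = (14, 9)
```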
Problem reformulation
After identifying the required form, the original problem is reformulated into a master program and n subprograms. This reformulation relies on the fact that every point of a non-empty, bounded convex polyhedron can be represented as a convex combination of its extreme points.
Each column in the new master program represents a solution to one of the subproblems. The master program enforces that the coupling constraints are satisfied given the set of subproblem solutions that are currently available. The master program then requests additional solutions from the subproblem such that the overall objective to the original linear program is improved.
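For the single-subproblem case (the multi-block case is analogous, with one convexity constraint per block), the reformulated master program can be written as follows, where the symbols c, b_0 and v_j are introduced here for the sketch and are not defined in the article:

$$ \min_{\lambda \ge 0} \; \sum_j \bigl(c^\top v_j\bigr)\,\lambda_j \quad \text{subject to} \quad \sum_j \bigl(D\, v_j\bigr)\,\lambda_j \le b_0, \qquad \sum_j \lambda_j = 1, $$

with $v_j$ the extreme points of the bounded polyhedron defined by the $F$ block, $c$ the original objective vector, and $b_0$ the right-hand side of the coupling constraints.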
The algorithm
While there are several variations regarding implementation, the Dantzig–Wolfe decomposition algorithm can be briefly described as follows (a schematic code sketch is given after the list):
Starting with a feasible solution to the reduced master program, formulate new objective functions for each subproblem such that the subproblems will offer solutions that improve the current objective of the master program.
Subproblems are re-solved given their new objective functions. An optimal value for each subproblem is offered to the master program.
The master program incorporates one or all of the new columns generated by the solutions to the subproblems based on those columns' respective ability to improve the original problem's objective.
Master program performs x iterations of the simplex algorithm, where x is the number of columns incorporated.
If objective is improved, goto step 1. Else, continue.
The master program cannot be further improved by any new columns from the subproblems, thus return.
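The following is a minimal single-block Python sketch of the column-generation loop in the steps above; the toy problem data are invented, and it assumes SciPy >= 1.7, where linprog with the HiGHS solver exposes dual values through res.ineqlin.marginals and res.eqlin.marginals.

```python
# Toy problem: min c.x  s.t.  D x <= b (coupling),  x in X = {F x <= f, bounds} (bounded block).
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -5.0])                               # objective (minimization)
D = np.array([[3.0, 2.0]]); b = np.array([18.0])         # coupling constraint
F = np.array([[1.0, 0.0], [0.0, 2.0]]); f = np.array([4.0, 12.0])   # block constraints
ub = [(0, 10), (0, 10)]                                  # keeps the block polyhedron bounded

points = [np.zeros(2)]                                   # start with one extreme point of X (the origin)

for it in range(20):
    # Restricted master: min sum_j lam_j*(c.v_j) s.t. sum_j lam_j*(D v_j) <= b, sum_j lam_j = 1.
    cols = np.array([D @ v for v in points]).T
    obj = np.array([c @ v for v in points])
    res = linprog(obj, A_ub=cols, b_ub=b,
                  A_eq=np.ones((1, len(points))), b_eq=[1.0],
                  bounds=[(0, None)] * len(points), method="highs")
    pi = res.ineqlin.marginals            # duals of the coupling rows (<= 0 for <= constraints)
    mu = res.eqlin.marginals[0]           # dual of the convexity row

    # Pricing subproblem: min (c - D^T pi).x over the block polyhedron X.
    red_c = c - D.T @ pi
    sub = linprog(red_c, A_ub=F, b_ub=f, bounds=ub, method="highs")
    if sub.fun - mu >= -1e-9:             # no column with negative reduced cost -> optimal
        break
    points.append(sub.x)                  # add the new extreme point as a column

x_opt = sum(l * v for l, v in zip(res.x, points))
print("optimal x ~", x_opt, "objective ~", c @ x_opt)    # (2.0, 6.0) and -36.0 for this toy problem
```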
Implementation
There are examples of the implementation of Dantzig–Wolfe decomposition available in the closed source AMPL and GAMS mathematical modeling software. There are general, parallel, and fast implementations available as open-source software, including some provided by JuMP and the GNU Linear Programming Kit.
The algorithm can be implemented such that the subproblems are solved in parallel, since their solutions are completely independent. When this is the case, there are options for the master program as to how the columns should be integrated into the master. The master may wait until each subproblem has completed and then incorporate all columns that improve the objective or it may choose a smaller subset of those columns. Another option is that the master may take only the first available column and then stop and restart all of the subproblems with new objectives based upon the incorporation of the newest column.
Another design choice for implementation involves columns that exit the basis at each iteration of the algorithm. Those columns may be retained, immediately discarded, or discarded via some policy after future iterations (for example, remove all non-basic columns every 10 iterations).
A computational evaluation of Dantzig–Wolfe decomposition in general, and of Dantzig–Wolfe decomposition with parallel computation in particular, is given in the 2001 PhD thesis of J. R. Tebboth.
See also
Delayed column generation
Benders' decomposition
References
Linear programming
Decomposition methods | Dantzig–Wolfe decomposition | [
"Engineering"
] | 896 | [
"Decomposition methods",
"Industrial engineering"
] |
12,261,967 | https://en.wikipedia.org/wiki/C5H4N4O |
The molecular formula C5H4N4O (molar mass: 136.11 g/mol, exact mass: 136.0385 u) may refer to:
Allopurinol
1-Hydroxy-7-azabenzotriazole (HOAt)
Hypoxanthine
Molecular formulas | C5H4N4O | [
"Physics",
"Chemistry"
] | 82 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
12,263,271 | https://en.wikipedia.org/wiki/DAX1 | DAX1 (dosage-sensitive sex reversal, adrenal hypoplasia critical region, on chromosome X, gene 1) is a nuclear receptor protein that in humans is encoded by the NR0B1 gene (nuclear receptor subfamily 0, group B, member 1). The NR0B1 gene is located on the short (p) arm of the X chromosome between bands Xp21.3 and Xp21.2, from base pair 30,082,120 to base pair 30,087,136.
Function
This gene encodes a protein that lacks the normal DNA-binding domain contained in other nuclear receptors. The encoded protein acts as a dominant-negative regulator of transcription of other nuclear receptors including steroidogenic factor 1. This protein also functions as an anti-testis gene by acting antagonistically to SRY. Mutations in this gene result in both X-linked congenital adrenal hypoplasia and hypogonadotropic hypogonadism.
DAX1 plays an important role in the normal development of several hormone-producing tissues. These tissues include the adrenal glands above each kidney, the pituitary gland and hypothalamus, which are located in the brain, and the reproductive structures (the testes and ovaries). DAX1 controls the activity of certain genes in the cells that form these tissues during embryonic development. Proteins that control the activity of other genes are known as transcription factors. DAX1 also plays a role in regulating hormone production in these tissues after they have been formed.
Role in disease
X-linked adrenal hypoplasia congenita is caused by mutations in the NR0B1 gene. More than 90 NR0B1 mutations that cause X-linked adrenal hypoplasia congenita have been identified. Many of these mutations delete all or part of the NR0B1 gene, preventing the production of DAX1 protein. Some mutations cause the production of an abnormally short protein. Other mutations cause a change in one of the building blocks (amino acids) of DAX1. These mutations are thought to result in a misshapen, nonfunctional protein. Loss of DAX1 function leads to adrenal insufficiency and hypogonadotropic hypogonadism, which are the main characteristics of this disorder.
Duplication of genetic material on the X chromosome in the region that contains the NR0B1 gene can cause a condition called dosage-sensitive sex reversal. The extra copy of the NR0B1 gene prevents the formation of male reproductive tissues. People who have this duplication usually appear to be female, but are genetically male with both an X and a Y chromosome.
In some cases, genetic material is deleted from the X chromosome in a region that contains several genes, including NR0B1. This deletion results in a condition called adrenal hypoplasia congenita with complex glycerol kinase deficiency. In addition to the signs and symptoms of adrenal hypoplasia congenita, individuals with this condition may have elevated levels of lipids in their blood and urine and may have problems regulating blood sugar levels. In rare cases, the amount of genetic material deleted is even more extensive and affected individuals also have Duchenne muscular dystrophy.
Interactions
DAX1 has been shown to interact with:
COPS2,
NRIP1,
Steroidogenic factor 1, and
SREBF1.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on 46,XY Disorder of Sex Development and 46,XY Complete Gonadal Dysgenesis
OMIM entries on 46,XY Disorder of Sex Development and 46,XY Complete Gonadal Dysgenesis
GeneReviews/NIH/NCBI/UW entry on X-Linked Adrenal Hypoplasia Congenita including Complex Glycerol Kinase Deficiency
GeneCard for NR0B1
Transcription factors | DAX1 | [
"Chemistry",
"Biology"
] | 831 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
12,264,442 | https://en.wikipedia.org/wiki/Sea%20surface%20microlayer | The sea surface microlayer (SML) is the boundary interface between the atmosphere and ocean, covering about 70% of Earth's surface. With an operationally defined thickness between 1 and 1000 μm, the SML has physicochemical and biological properties that are measurably distinct from underlying waters. Recent studies now indicate that the SML covers the ocean to a significant extent, and evidence shows that it is an aggregate-enriched biofilm environment with distinct microbial communities. Because of its unique position at the air-sea interface, the SML is central to a range of global marine biogeochemical and climate-related processes.
The sea surface microlayer is the boundary layer where all exchange occurs between the atmosphere and the ocean. The chemical, physical, and biological properties of the SML differ greatly from the sub-surface water just a few centimeters beneath.
Despite the huge extent of the ocean's surface, until now relatively little attention has been paid to the sea surface microlayer (SML) as the ultimate interface where heat, momentum and mass exchange between the ocean and the atmosphere takes place. Via the SML, large-scale environmental changes in the ocean such as warming, acidification, deoxygenation, and eutrophication potentially influence cloud formation, precipitation, and the global radiation balance. Due to the deep connectivity between biological, chemical, and physical processes, studies of the SML may reveal multiple sensitivities to global and regional changes.
Understanding the processes at the ocean's surface, in particular involving the SML as an important and determinant interface, could provide an essential contribution to the reduction of uncertainties regarding ocean-climate feedbacks. As of 2017, processes occurring within the SML, as well as the associated rates of material exchange through the SML, remained poorly understood and were rarely represented in marine and atmospheric numerical models.
Overview
The sea surface microlayer (SML) is the boundary interface between the atmosphere and ocean, covering about 70% of the Earth's surface. The SML has physicochemical and biological properties that are measurably distinct from underlying waters. Because of its unique position at the air-sea interface, the SML is central to a range of global biogeochemical and climate-related processes. Although known for the last six decades, the SML often has remained in a distinct research niche, primarily as it was not thought to exist under typical oceanic conditions. Recent studies now indicate that the SML covers the ocean to a significant extent, highlighting its global relevance as the boundary layer linking two major components of the Earth system – the ocean and the atmosphere.
In 1983, Sieburth hypothesised that the SML was a hydrated gel-like layer formed by a complex mixture of carbohydrates, proteins, and lipids. In recent years, his hypothesis has been confirmed, and scientific evidence indicates that the SML is an aggregate-enriched biofilm environment with distinct microbial communities. In 1999 Ellison et al. estimated that 200 Tg C yr−1 (200 million tonnes of carbon per year) accumulates in the SML, similar to sedimentation rates of carbon to the ocean's seabed, though the accumulated carbon in the SML probably has a very short residence time. Although the total volume of the microlayer is very small compared to the ocean's volume, Carlson suggested in his seminal 1993 paper that unique interfacial reactions may occur in the SML that do not occur in the underlying water, or that occur there at a much slower rate. He therefore hypothesised that the SML plays an important role in the diagenesis of carbon in the upper ocean. Biofilm-like properties and the highest possible exposure to solar radiation lead to an intuitive assumption that the SML is a biochemical microreactor.
Historically, the SML has been summarized as being a microhabitat composed of several layers distinguished by their ecological, chemical and physical properties with an operational total thickness of between 1 and 1000 μm. In 2005 Hunter defined the SML as a "microscopic portion of the surface ocean which is in contact with the atmosphere and which may have physical, chemical or biological properties that are measurably different from those of adjacent sub-surface waters". He avoids a definite range of thickness as it depends strongly on the feature of interest. A thickness of 60 μm has been measured based on sudden changes of the pH, and could be meaningfully used for studying the physicochemical properties of the SML. At such thickness, the SML represents a laminar layer, free of turbulence, and greatly affecting the exchange of gases between the ocean and atmosphere. As a habitat for neuston (surface-dwelling organisms ranging from bacteria to larger siphonophores), the thickness of the SML in some ways depends on the organism or ecological feature of interest. In 2005, Zaitsev described the SML and associated near-surface layer (down to 5 cm) as an incubator or nursery for eggs and larvae for a wide range of aquatic organisms.
Hunter's definition includes all interlinked layers from the laminar layer to the nursery without explicit reference to defined depths. In 2017, Wurl et al. proposed Hunter's definition be validated with a redeveloped SML paradigm that includes its global presence, biofilm-like properties and role as a nursery. The new paradigm pushes the SML into a new and wider context relevant to many ocean and climate sciences.
According to Wurl et al., the SML can never be devoid of organics due to the abundance of surface-active substances (e.g., surfactants) in the upper ocean and the phenomenon of surface tension at air-liquid interfaces. The SML is analogous to the thermal boundary layer, and remote sensing of the sea surface temperature shows ubiquitous anomalies between the sea surface skin and bulk temperature. Even so, the differences in both are driven by different processes. Enrichment, defined as concentration ratios of an analyte in the SML to the underlying bulk water, has been used for decades as evidence for the existence of the SML. Consequently, depletions of organics in the SML are debatable; however, the question of enrichment or depletion is likely to be a function of the thickness of the SML (which varies with sea state), including losses via sea spray, the concentrations of organics in the bulk water, and the limitations of sampling techniques to collect thin layers. Enrichment of surfactants, and changes in the sea surface temperature and salinity, serve as universal indicators for the presence of the SML. Organisms are perhaps less suitable as indicators of the SML because they can actively avoid the SML and/or the harsh conditions in the SML may reduce their populations. However, the thickness of the SML remains "operational" in field experiments because the thickness of the collected layer is governed by the sampling method. Advances in SML sampling technology are needed to improve our understanding of how the SML influences air-sea interactions.
Marine surface habitats sit at the interface between the atmosphere and the ocean. The biofilm-like habitat at the surface of the ocean harbours surface-dwelling microorganisms, commonly referred to as neuston.
The sea surface microlayer (SML) constitutes the uppermost layer of the ocean, only 1–1000 μm thick, with unique chemical and biological properties that distinguish it from the underlying water (ULW). Due to the location at the air-sea interface, the SML can influence exchange processes across this boundary layer, such as air-sea gas exchange and the formation of sea spray aerosols.
Due to its exclusive position between the atmosphere and the hydrosphere and by spanning about 70% of the Earth's surface, the sea-surface microlayer (sea-SML) is regarded as a fundamental component in air–sea exchange processes and in biogeochemical cycling. Although having a minor thickness of <1000 μm, the elusive SML has long been known for its distinct physicochemical characteristics compared to the underlying water, e.g., by featuring the accumulation of dissolved and particulate organic matter, transparent exopolymer particles (TEP), and surface-active molecules. Therefore, the SML is a gelatinous biofilm, maintaining physical stability through surface tension forces. It also forms a vast habitat for different organisms, collectively termed neuston, with a recent global estimate of 2 × 10²³ microbial cells for the sea-SML.
Life at air–water interfaces has never been considered easy, mainly because of the harsh environmental conditions that influence the SML. However, high abundances of microorganisms, especially of bacteria and picophytoplankton, accumulating in the SML compared to the underlying water have frequently been reported, accompanied by a predominant heterotrophic activity. This is because primary production at the immediate air–water interface is often hindered by photoinhibition. However, some photosynthetic organisms, e.g., Trichodesmium, Synechococcus, or Sargassum, show more tolerance towards high light intensities and, hence, can become enriched in the SML. Previous research has provided evidence that neustonic organisms can cope with wind and wave energy, solar and ultraviolet (UV) radiation, fluctuations in temperature and salinity, and a higher potential predation risk by the zooneuston. Furthermore, wind action promoting sea spray formation and bubbles rising from deeper water and bursting at the surface release SML-associated microbes into the atmosphere. In addition to being more concentrated compared to planktonic counterparts, the bacterioneuston, algae, and protists display distinctive community compositions compared to the underlying water, in both marine and freshwater habitats. Furthermore, the bacterial community composition was often dependent on the SML sampling device being used. While the SML is comparatively well characterized with respect to bacterial community composition, little is known about viruses in the SML, i.e., the virioneuston. Research in this area has focused on virus–bacterium dynamics at air–water interfaces, even if viruses likely interact with other SML microbes, including archaea and the phytoneuston, as can be deduced from viral interference with their planktonic counterparts. Although viruses were briefly mentioned as pivotal SML components in a recent review on this unique habitat, a synopsis of the emerging knowledge and the major research gaps regarding bacteriophages at air–water interfaces is still missing in the literature.
Properties
Organic compounds such as amino acids, carbohydrates, fatty acids, and phenols are highly enriched in the SML interface. Most of these come from biota in the sub-surface waters, which decay and become transported to the surface, though other sources exist also such as atmospheric deposition, coastal runoff, and anthropogenic nutrification. The relative concentration of these compounds is dependent on the nutrient sources as well as climate conditions such as wind speed and precipitation. These organic compounds on the surface create a "film," referred to as a "slick" when visible, which affects the physical and optical properties of the interface. These films occur because of the hydrophobic tendencies of many organic compounds, which causes them to protrude into the air-interface. The existence of organic surfactants on the ocean surface impedes wave formation for low wind speeds. For increasing concentrations of surfactant there is an increasing critical wind speed necessary to create ocean waves. Increased levels of organic compounds at the surface also hinders air-sea gas exchange at low wind speeds. One way in which particulates and organic compounds on the surface are transported into the atmosphere is the process called "bubble bursting". Bubbles generate the major portion of marine aerosols. They can be dispersed to heights of several meters, picking up whatever particles latch on to their surface. However, the major supplier of materials comes from the SML.
Processes
Surfaces and interfaces are critical zones where major physical, chemical, and biological exchanges occur. As the ocean covers 362 million km2, about 71% of the Earth's surface, the ocean-atmosphere interface is plausibly one of the largest and most important interfaces on the planet. Every substance entering or leaving the ocean from or to the atmosphere passes through this interface, which on the water side (and to a lesser extent on the air side) shows distinct physical, chemical, and biological properties. On the water side the uppermost 1 to 1000 μm of this interface are referred to as the sea surface microlayer (SML). Like a skin, the SML is expected to control the rates of exchange of energy and matter between air and sea, thereby potentially exerting both short-term and long-term impacts on various Earth system processes, including biogeochemical cycling, the production and uptake of radiatively active gases like CO2 or DMS, and thus ultimately climate regulation. As of 2017, processes occurring within the SML, as well as the associated rates of material exchange through the SML, remained poorly understood and were rarely represented in marine and atmospheric numerical models.
An improved understanding of the biological, chemical, and physical processes at the ocean's upper surface could provide an essential contribution to the reduction of uncertainties regarding ocean-climate feedbacks. Due to its positioning between atmosphere and ocean, the SML is the first to be exposed to climate changes including temperature, climate relevant trace gases, wind speed, and precipitation as well as to pollution by human waste, including nutrients, toxins, nanomaterials, and plastic debris.
Bacterioneuston
The term neuston describes the organisms in the SML and was first suggested by Naumann in 1917. As in other marine ecosystems, bacterioneuston communities have important roles in SML functioning. Bacterioneuston community composition of the SML has been analysed and compared to the underlying water in different habitats with varying results; research has primarily focused on coastal waters and shelf seas, with limited study of the open ocean. In the North Sea, a distinct bacterial community was found in the SML with Vibrio spp. and Pseudoalteromonas spp. dominating the bacterioneuston. During an artificially induced phytoplankton bloom in a fjord mesocosm experiment, the most dominant denaturing gradient gel electrophoresis (DGGE) bands of the bacterioneuston consisted of two bacterial families: Flavobacteriaceae and Alteromonadaceae. Other studies, however, have found little or no differences in the bacterial community composition of the SML and the ULW. Difficulties in direct comparisons between studies can arise because of the different methods used to sample the SML, which result in varied sampling depths.
Even less is known about the community control mechanisms in the SML and how the bacterial community assembles at the air-sea interface. The bacterioneuston community could be altered by differing wind conditions and radiation levels, with high wind speeds inhibiting the formation of a distinct bacterioneuston community. Wind speed and radiation levels refer to external controls, however, bacterioneuston community composition might also be influenced by internal factors such as nutrient availability and organic matter (OM) produced either in the SML or in the ULW.
One of the principal OM components consistently enriched in the SML is transparent exopolymer particles (TEP), which are rich in carbohydrates and form by the aggregation of dissolved precursors excreted by phytoplankton in the euphotic zone. Higher TEP formation rates in the SML, facilitated through wind shear and dilation of the surface water, have been proposed as one explanation for the observed enrichment in TEP. Also, due to their natural positive buoyancy, when not ballasted by other particles sticking to them, TEP ascend through the water column and ultimately end up at the SML. A second possible pathway of TEP from the water column to the SML is by bubble scavenging.
Next to rising bubbles, another potential transport mechanism for bacteria from the ULW to the SML could be ascending particles or more specifically TEP. Bacteria readily attach to TEP in the water column. TEP can serve as microbial hotspots and can be used directly as a substrate for bacterial degradation, and as grazing protection for attached bacteria, e.g., by acting as an alternate food source for zooplankton. TEP have also been suggested to serve as light protection for microorganisms in environments with high irradiation.
Virioneuston
Viruses in the sea surface microlayer, the so-called virioneuston, have recently become of interest to researchers as enigmatic biological entities in the boundary surface layers with potentially important ecological impacts. Given this vast air–water interface sits at the intersection of major air–water exchange processes spanning more than 70% of the global surface area, it is likely to have profound implications for marine biogeochemical cycles, on the microbial loop and gas exchange, as well as the marine food web structure, the global dispersal of airborne viruses originating from the sea surface microlayer, and human health.
Viruses are the most abundant biological entities in the water column of the world's oceans. In the free water column, the virioplankton typically outnumbers the bacterioplankton by one order of magnitude, reaching typical bulk water concentrations of 10⁷ viruses mL−1. Moreover, they are known as integral parts of global biogeochemical cycles, shaping and driving microbial diversity and structuring trophic networks. Like other neuston members, the virioneuston likely originates from the bulk seawater. For instance, in 1977 Baylor et al. postulated the adsorption of viruses onto air bubbles as they rise to the surface; viruses can also stick to organic particles that are transported to the SML via bubble scavenging.
Within the SML, viruses interacting with the bacterioneuston will probably induce the viral shunt, a phenomenon that is well known for marine pelagic systems. The term viral shunt describes the release of organic carbon and other nutritious compounds from the virus-mediated lysis of host cells, and its addition to the local dissolved organic matter (DOM) pool. The enriched and densely packed bacterioneuston forms an excellent target for viruses compared to the bacterioplankton populating the subsurface. This is because high host-cell numbers will increase the probability of host–virus encounters. The viral shunt might effectively contribute to the SML's already high DOM content enhancing bacterial production as previously suggested for pelagic ecosystems and in turn replenishing host cells for viral infections. By affecting the DOM pool, viruses in the SML might directly interfere with the microbial loop being initiated when DOM is microbially recycled, converted into biomass, and passed along the food web. In addition, the release of DOM from lysed host cells by viruses contributes to organic particle generation. However, the role of the virioneuston for the microbial loop has never been investigated.
Measurement
Devices used to sample the concentrations of particulates and compounds of the SML include a glass fabric, metal mesh screens, and other hydrophobic surfaces. These are placed on a rotating cylinder which collects surface samples as it rotates on top of the ocean surface.
The glass plate sampler is commonly used. It was first described in 1972 by Harvey and Burzell as a simple but effective method of collecting small sea surface microlayer samples. A clean glass plate is immersed vertically into the water and then withdrawn in a controlled manner. Harvey and Burzell used a plate which was 20 cm square and 4 mm thick. They withdrew it from the sea at the rate of 20 cm per second. Typically the uppermost 20–150 μm of the surface microlayer adheres to the plate as it is withdrawn. The sample is then wiped from both sides of the plate into a sampling vial.
For a plate of the size used by Harvey and Burzell, the resulting sample volumes are between about 3 and 12 cubic centimetres. The sampled SML thickness h in micrometres is given by:
h = 10⁴ V / (A N)
where V is the sample volume in cm3, A is the total immersed plate area of both sides in cm2, and N is the number of times the sample was dipped.
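A minimal Python sketch of this calculation, using the plate dimensions from the Harvey and Burzell example (the function name and the example numbers are illustrative):

def sml_thickness_um(volume_cm3, plate_area_cm2, dips):
    """Sampled SML thickness in micrometres: h = 1e4 * V / (A * N)."""
    return 1e4 * volume_cm3 / (plate_area_cm2 * dips)

# A 20 cm x 20 cm plate immersed on both sides: A = 2 * 20 * 20 = 800 cm^2.
print(sml_thickness_um(volume_cm3=6.0, plate_area_cm2=800.0, dips=1))  # 75.0 um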
Remote sensing
Ocean surface habitats sit at the interface between the ocean and the atmosphere. The biofilm-like habitat at the surface of the ocean harbours surface-dwelling microorganisms, commonly referred to as neuston. This vast air–water interface sits at the intersection of major air–water exchange processes spanning more than 70% of the global surface area . Bacteria in the surface microlayer of the ocean, called bacterioneuston, are of interest due to practical applications such as air-sea gas exchange of greenhouse gases, production of climate-active marine aerosols, and remote sensing of the ocean. Of specific interest is the production and degradation of surfactants (surface active materials) via microbial biochemical processes. Major sources of surfactants in the open ocean include phytoplankton, terrestrial runoff, and deposition from the atmosphere.
Unlike coloured algal blooms, surfactant-associated bacteria may not be visible in ocean colour imagery. Having the ability to detect these "invisible" surfactant-associated bacteria using synthetic aperture radar has immense benefits in all-weather conditions, regardless of cloud, fog, or daylight. This is particularly important in very high winds, because these are the conditions when the most intense air-sea gas exchanges and marine aerosol production take place. Therefore, in addition to colour satellite imagery, SAR satellite imagery may provide additional insights into a global picture of biophysical processes at the boundary between the ocean and atmosphere, air-sea greenhouse gas exchanges and production of climate-active marine aerosols.
Aeroplankton
A stream of airborne microorganisms, including marine viruses, bacteria and protists, circles the planet above weather systems but below commercial air lanes. Some peripatetic microorganisms are swept up from terrestrial dust storms, but most originate from marine microorganisms in sea spray. In 2018, scientists reported that hundreds of millions of these viruses and tens of millions of bacteria are deposited daily on every square meter around the planet.
Compared to the sub-surface waters, the sea surface microlayer contains elevated concentrations of bacteria and viruses, as well as toxic metals and organic pollutants. These materials can be transferred from the sea surface to the atmosphere in the form of wind-generated aqueous aerosols, due to their high vapor tension and a process known as volatilisation. When airborne, these microbes can be transported long distances to coastal regions. If they hit land they can have detrimental effects on animals, vegetation and human health. Marine aerosols that contain viruses can travel hundreds of kilometers from their source and remain in liquid form as long as the humidity is high enough (over 70%). These aerosols are able to remain suspended in the atmosphere for about 31 days. Evidence suggests that bacteria can remain viable after being transported inland through aerosols; some have been detected as far as 200 meters inland, at heights of 30 meters above sea level. It was also noted that the process which transfers this material to the atmosphere causes further enrichment in both bacteria and viruses in comparison to either the SML or sub-surface waters (up to three orders of magnitude in some locations).
Mathematical modeling
The stagnant film model is a mathematical model used to simulate the sea surface microlayer. It is a kinematic model which can be used to describe how gas exchange from the ocean's surface and the atmosphere reaches equilibrium. The model assumes both the ocean and atmosphere are composed mostly of well-mixed, constantly moving fluid layers with the sea surface microlayer present as a permanent thin-film layer in the middle. Gas exchange occurs by molecular diffusion between the two fluid layers through the sea surface microlayer.
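A minimal sketch of the model's core relation, assuming Fick's first law of diffusion across the film; the function name and example numbers are illustrative, not measured values:

def stagnant_film_flux(D, c_bulk_water, c_air_equilibrium, film_thickness):
    """Gas flux across a stagnant film (mol m^-2 s^-1), given molecular
    diffusivity D (m^2 s^-1), the well-mixed bulk-water and air-equilibrium
    concentrations (mol m^-3), and the film thickness (m)."""
    return D * (c_bulk_water - c_air_equilibrium) / film_thickness

# Example: CO2-like diffusivity across a 50 um microlayer.
print(stagnant_film_flux(D=1.6e-9, c_bulk_water=0.020,
                         c_air_equilibrium=0.015, film_thickness=50e-6))

The thinner the film, the faster the modeled equilibration, which is why the laminar SML thickness matters for air-sea gas exchange.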
See also
Ocean surface topography
Planetary boundary layer
Surface layer
References
Surface science
Marine biology | Sea surface microlayer | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 4,994 | [
"Condensed matter physics",
"Surface science",
"Marine biology"
] |
13,890,363 | https://en.wikipedia.org/wiki/Precession%20%28mechanical%29 | Precession is the process of a round part in a round hole, rotating with respect to each other, wherein the inner part begins rolling around the circumference of the outer bore, in a direction opposite of rotation. This is caused by too much clearance between them and a radial force on the part that constantly changes direction. The direction of rotation of the inner part is opposite to the direction of rotation of the radial force.
In a rotating machine, such as motor, engine, gear train, etc., precession can occur when too much clearance exists between a shaft and a bushing, or between the races and rolling elements in roller and ball bearings. Often a result of wear, inadequate lubrication (too little or too thin), or lack of precision engineering, such precession is usually accompanied by excess vibration and an audible rubbing or buzzing noise. This tends to accelerate the wear process, possibly leading to spalling, galling, or false brinelling (fretting wear) of the contact surfaces.
In stationary parts on a rotating object, such as a bolt threaded into a hole, because the sideways, or radial, load constantly shifts position during use, this lateral force translates into a rolling force that moves opposite to the direction of rotation. This can cause threaded parts to either tighten or loosen under a load, depending on the direction of rotation, typically with a force that can far exceed the typical torque of a wrench. For example, this is a common problem in bicycle pedals, thus on nearly all bikes built after the 1930s, the left-side pedal is equipped with left-hand (backwards) threads, to prevent it from unscrewing itself while riding.
This precession is due purely to contact forces; it does not depend on inertia and is not inversely proportional to spin rate. It is completely unrelated to torque-free and torque-induced precession.
Examples
Precession caused by fretting can cause fastenings under large torque loads to unscrew themselves.
Automobile lug nuts
Automobiles have also used left-threaded lug nuts on left-side wheels, but now commonly use tapered lug nuts, which do not precess.
Bicycle pedals
Bicycle pedals are left-threaded on the left-hand crank so that precession tightens the pedal rather than loosening it. This may seem counter-intuitive since the pedals rotate in the direction that would unscrew them from the cranks, but the torque exerted due to the precession is several orders of magnitude greater than that caused by bearing friction or even a jammed pedal bearing.
For a pedal, a rotating load arises from the downward pedaling force on a spindle that rotates with its crank, making the predominantly downward force effectively rotate about the pedal spindle, opposite to the rotation of the pedal. What may be less evident is that even tightly fitting parts have relative clearance due to their elasticity, since metals are not rigid materials, as is evident from steel springs. Under load, micro-deformations, enough to cause motion, occur in such joints. This can be seen from wear marks where pedal spindles seat on crank faces.
Shimano SPD axle units, which can be unscrewed from the pedal body for servicing, have a left-hand thread where the axle unit screws into the right-hand pedal; the opposite case to the pedal-crank interface. Otherwise precession of the pedal body around the axle would tend to unscrew one from the other.
Bicycle bottom brackets
English threaded bicycle bottom brackets are left-threaded on the right-hand (usually drive) side into the bottom bracket shell. This is the opposite of pedals into cranks because the sense of the relative motion between the parts is opposite. (Italian and French threaded bottom brackets have right-hand threading on both sides.)
Bicycle sprockets
Splined sprockets precess against any lockring which is screwed into the freehub. Shimano uses a lockring with detents to hold cassette sprockets in place, and this resists precession. Sturmey-Archer once used 12-splined sprockets for 2- and 3-speed racing hubs, and these were secured with a left-threaded lockring for the same reason. (Fixed gear bicycles also use a left-threaded lockring but this is not because of precession; it is merely to ensure that the lockring tends to tighten, should the sprocket begin to unscrew.)
Bearings in manual transmissions
A bearing-supported gear in a manual transmission rotates synchronously with its shaft due to the dog-gear engagement. In this case, the small diametrical clearance in the bearing will induce precession of the roller group relative to the gear, mitigating any fretting that occurs if the same bearing rollers always push against the same spot on the gear. Typically the 4th and 5th gears will have precession-inducing features, while 1st through 3rd gears might not, since cars spend less time in those gears. Transmission failure due to lack of precession is possible in gear boxes when low gears are engaged for long periods of time.
See also
Fretting
References
Mechanics
mechanical | Precession (mechanical) | [
"Physics",
"Engineering"
] | 1,073 | [
"Physical quantities",
"Precession",
"Mechanics",
"Mechanical engineering",
"Wikipedia categories named after physical quantities"
] |
13,890,972 | https://en.wikipedia.org/wiki/HPO%20formalism | The history projection operator (HPO) formalism is an approach to temporal quantum logic developed by Chris Isham. It deals with the logical structure of quantum mechanical propositions asserted at different points in time.
Introduction
In standard quantum mechanics a physical system is associated with a Hilbert space $\mathcal{H}$. States of the system at a fixed time are represented by normalised vectors in the space and physical observables are represented by Hermitian operators on $\mathcal{H}$.
A physical proposition $P$ about the system at a fixed time can be represented by an orthogonal projection operator $\hat{P}$ on $\mathcal{H}$ (see quantum logic). This representation links together the lattice operations in the lattice of logical propositions and the lattice of projection operators on a Hilbert space (see quantum logic).
The HPO formalism is a natural extension of these ideas to propositions about the system that are concerned with more than one time.
History propositions
Homogeneous histories
A homogeneous history proposition is a sequence of single-time propositions $P_{t_1}, P_{t_2}, \ldots, P_{t_n}$ specified at different times $t_1 < t_2 < \cdots < t_n$. These times are called the temporal support of the history. We shall denote the proposition as $(P_{t_1}, P_{t_2}, \ldots, P_{t_n})$ and read it as
"$P_{t_1}$ at time $t_1$ is true and then $P_{t_2}$ at time $t_2$ is true and then $\ldots$ and then $P_{t_n}$ at time $t_n$ is true"
Inhomogeneous histories
Not all history propositions can be represented by a sequence of single-time propositions at different times. These are called inhomogeneous history propositions. An example is the proposition $A$ OR $B$ for two homogeneous histories $A$ and $B$.
History projection operators
The key observation of the HPO formalism is to represent history propositions by projection operators on a history Hilbert space. This is where the name "History Projection Operator" (HPO) comes from.
For a homogeneous history $A = (P_{t_1}, P_{t_2}, \ldots, P_{t_n})$ we can use the tensor product to define a projector
$\hat{A} := \hat{P}_{t_1} \otimes \hat{P}_{t_2} \otimes \cdots \otimes \hat{P}_{t_n}$
where $\hat{P}_{t_i}$ is the projection operator on $\mathcal{H}$ that represents the proposition $P_{t_i}$ at time $t_i$.
This is a projection operator on the tensor product "history Hilbert space"
$\mathcal{H}_n := \mathcal{H} \otimes \mathcal{H} \otimes \cdots \otimes \mathcal{H}$ ($n$ copies).
Not all projection operators on $\mathcal{H}_n$ can be written as the sum of tensor products of the form $\hat{P}_{t_1} \otimes \cdots \otimes \hat{P}_{t_n}$. These other projection operators are used to represent inhomogeneous histories by applying lattice operations to homogeneous histories.
Temporal quantum logic
Representing history propositions by projectors on the history Hilbert space naturally encodes the logical structure of history propositions. The lattice operations on the set of projection operations on the history Hilbert space can be applied to model the lattice of logical operations on history propositions.
If two homogeneous histories $A$ and $B$ don't share the same temporal support they can be modified so that they do. If $t_i$ is in the temporal support of $A$ but not of $B$ (for example) then a new homogeneous history proposition which differs from $B$ by including the "always true" proposition $\hat{1}$ at the time $t_i$ can be formed. In this way the temporal supports of two histories can always be joined. We shall therefore assume that all homogeneous histories share the same temporal support.
We now present the logical operations for homogeneous history propositions $A$ and $B$ such that $\hat{A}\hat{B} = \hat{B}\hat{A}$.
Conjunction (AND)
If $A$ and $B$ are two homogeneous histories then the history proposition "$A$ and $B$" is also a homogeneous history. It is represented by the projection operator
$\hat{A} \wedge \hat{B} := \hat{A}\hat{B}$
Disjunction (OR)
If $A$ and $B$ are two homogeneous histories then the history proposition "$A$ or $B$" is in general not a homogeneous history. It is represented by the projection operator
$\hat{A} \vee \hat{B} := \hat{A} + \hat{B} - \hat{A}\hat{B}$
Negation (NOT)
The negation operation in the lattice of projection operators takes $\hat{P}$ to
$\neg \hat{P} := \hat{1} - \hat{P}$
where $\hat{1}$ is the identity operator on the Hilbert space. Thus the projector used to represent the proposition $\neg A$ (i.e. "not $A$") is
$\neg \hat{A} := \hat{1} - \hat{A}$
Example: Two-time history
As an example, consider the negation of the two-time homogeneous history proposition $A = (P_{t_1}, P_{t_2})$. The projector to represent the proposition $\neg A$ is
$\neg \hat{A} = \hat{1} \otimes \hat{1} - \hat{P}_{t_1} \otimes \hat{P}_{t_2}$
The terms which appear in this expression:
$(\hat{1} - \hat{P}_{t_1}) \otimes \hat{P}_{t_2}, \quad \hat{P}_{t_1} \otimes (\hat{1} - \hat{P}_{t_2}), \quad (\hat{1} - \hat{P}_{t_1}) \otimes (\hat{1} - \hat{P}_{t_2})$
can each be interpreted as follows:
$P_{t_1}$ is false and $P_{t_2}$ is true
$P_{t_1}$ is true and $P_{t_2}$ is false
both $P_{t_1}$ is false and $P_{t_2}$ is false
These three homogeneous histories, joined with the OR operation, include all the possibilities for how the proposition "$P_{t_1}$ and then $P_{t_2}$" can be false. We therefore see that the definition of $\neg \hat{A}$ agrees with what the proposition $\neg A$ should mean.
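A minimal numerical check of this decomposition, assuming nothing beyond NumPy; the two rank-one projectors are arbitrary examples:

import numpy as np

def projector(v):
    """Orthogonal projector onto the span of the unit vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

P1 = projector(np.array([1.0, 0.0]))  # proposition at time t1
P2 = projector(np.array([1.0, 1.0]))  # proposition at time t2
I2 = np.eye(2)

neg_A = np.kron(I2, I2) - np.kron(P1, P2)   # identity minus the history projector
terms = (np.kron(I2 - P1, P2)               # P_t1 false and P_t2 true
         + np.kron(P1, I2 - P2)             # P_t1 true and P_t2 false
         + np.kron(I2 - P1, I2 - P2))       # both false
assert np.allclose(neg_A, terms)  # the three mutually orthogonal histories sum to "not A"
print("decomposition verified")

Because the three term projectors are mutually orthogonal, their ordinary sum coincides with their lattice OR, which is why the assertion holds.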
References
C.J. Isham, Quantum Logic and the Histories Approach to Quantum Theory, J. Math. Phys. 35 (1994) 2157–2185, arXiv:gr-qc/9308006v1
Logic
Quantum measurement | HPO formalism | [
"Physics"
] | 830 | [
"Quantum measurement",
"Quantum mechanics"
] |
13,891,942 | https://en.wikipedia.org/wiki/Bloch%20spectrum | The Bloch spectrum is a concept in quantum mechanics in the field of theoretical physics; this concept addresses certain energy spectrum considerations. Let H be the one-dimensional Schrödinger operator
H = −d²/dx² + Uα(x),
where Uα is a periodic function of period α. The Bloch spectrum of H is defined as the set of values E for which all the solutions of (H − E)φ = 0 are bounded on the whole real axis. The Bloch spectrum consists of the half-line E0 < E from which certain closed intervals [E2j−1, E2j] (j = 1, 2, ...) are omitted. These are forbidden bands (or gaps), so the (E2j−2, E2j−1) are allowed bands.
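A minimal numerical sketch of this criterion via standard Floquet theory: solutions of (H − E)φ = 0 are bounded on the whole real axis exactly when the trace of the one-period monodromy matrix lies in [−2, 2]. The cosine potential below is an arbitrary example, not from the source.

import numpy as np
from scipy.integrate import solve_ivp

alpha = 1.0                                          # period of U_alpha
U = lambda x: 2.0 * np.cos(2.0 * np.pi * x / alpha)  # example periodic potential

def discriminant(E):
    """Trace of the monodromy matrix of -phi'' + U(x) phi = E phi over one period."""
    rhs = lambda x, y: [y[1], (U(x) - E) * y[0]]     # y = (phi, phi')
    c = solve_ivp(rhs, (0.0, alpha), [1.0, 0.0], rtol=1e-10, atol=1e-12).y[:, -1]
    s = solve_ivp(rhs, (0.0, alpha), [0.0, 1.0], rtol=1e-10, atol=1e-12).y[:, -1]
    return c[0] + s[1]

# E belongs to the Bloch spectrum (an allowed band) iff |discriminant(E)| <= 2.
energies = np.linspace(-2.0, 20.0, 400)
allowed = [E for E in energies if abs(discriminant(E)) <= 2.0]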
References
Quantum mechanics | Bloch spectrum | [
"Physics"
] | 159 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
13,893,964 | https://en.wikipedia.org/wiki/Sven%20Erik%20J%C3%B8rgensen | Sven Erik Jørgensen (29 August 1934 – 5 March 2016) was an ecologist and chemist from Denmark.
Biography
He was also a well-known figure in biathlon.
Academic degrees and honors
In 1958, he was awarded a Master of Science in chemical engineering from the Technical University of Denmark, then a Doctor of Environmental Engineering (Karlsruhe Institute of Technology) and a Doctor of Science in ecological modelling (University of Copenhagen). He taught courses in ecological modelling in 32 countries. After his retirement, he became professor emeritus in environmental chemistry at the University of Copenhagen.
He was an honorary doctor at Coimbra University, Portugal, and at the University of Dar es Salaam, Tanzania.
He received several awards: Ruđer Bošković award, Prigogine Prize, Blaise Pascal Medal, Einstein professorship at the Chinese Academy of Sciences and the Santa Chiara Prize for multidisciplinary teaching. In 2004, together with William J. Mitsch, he was awarded the Stockholm Water Prize.
Works
In 1975 he founded a journal, Ecological Modelling, and in 1978 he founded ISEM, the International Society of Ecological Modelling.
He published 366 papers, of which 275 were in peer-reviewed international journals, and edited or authored 76 books, several of which have been translated into other languages (Chinese, Russian, Spanish, and Portuguese).
In 2011, he authored a textbook in ecological modelling, “Fundamentals of Ecological Modelling”, which was published in its fourth edition together with Brian D. Fath of the Department of Biological Sciences, Towson University. It has been translated into Chinese and Russian (third edition). He was co-editor-in-chief of the "Encyclopedia of Ecology", published in 2008, and of the "Encyclopedia of Environmental Management", published in December 2012. He co-authored the textbook “Introduction to Systems Ecology”, published in English in 2012 and in Chinese in 2013.
He was an editorial board member of 18 international journals in the fields of ecology and environmental management. He was the president of ISEM and was elected to the European Academy of Sciences and Arts, for which he was chairman of the Section for Environmental Sciences.
Personal life
He married in 1970 and had one son. Jørgensen died in Copenhagen in 2016.
Awards
Stockholm Water Prize, 2004, for the pioneering development and global dissemination of ecological models of lakes and wetlands, widely applied as effective tools in sustainable water resource management; at the time he was at the University of Copenhagen School of Pharmaceutical Sciences in Denmark.
See also
References
External links
Sven Erik Jørgensen's Web page
1934 births
2016 deaths
Technical University of Denmark alumni
Karlsruhe Institute of Technology alumni
University of Copenhagen alumni
Academic staff of the University of Copenhagen
University of Dar es Salaam
Danish ecologists
Thermodynamicists
Systems ecologists | Sven Erik Jørgensen | [
"Physics",
"Chemistry"
] | 553 | [
"Thermodynamics",
"Thermodynamicists"
] |
13,893,984 | https://en.wikipedia.org/wiki/Psychrometric%20constant | The psychrometric constant relates the partial pressure of water in air to the air temperature. This lets one interpolate actual vapor pressure from paired dry and wet thermometer bulb temperature readings.
γ = cp P / (ε λ) ≈ 0.665 × 10−3 P
where:
γ = psychrometric constant [kPa °C−1],
P = atmospheric pressure [kPa],
λ = latent heat of water vaporization, 2.45 [MJ kg−1],
cp = specific heat of air at constant pressure, 1.013 × 10−3 [MJ kg−1 °C−1],
ε = ratio molecular weight of water vapor/dry air = 0.622.
Both λ and ε are constants.
Since atmospheric pressure, P, depends upon altitude, so does γ.
At higher altitude water evaporates and boils at lower temperature.
Although ε is constant, varied air composition results in varied cp.
Thus on average, at a given location or altitude, the psychrometric constant is approximately constant. Still, it is worth remembering that weather impacts both atmospheric pressure and composition.
Vapor Pressure Estimation
Saturated vapor pressure, es = e[Tdry]
Actual vapor pressure, ea = e[Twet] − γ (Tdry − Twet) = e[Tdew]
here e[T] is vapor pressure as a function of temperature, T.
Tdew = the dewpoint temperature at which water condenses.
Twet = the temperature of a wet thermometer bulb from which water can evaporate to air.
Tdry = the temperature of a dry thermometer bulb in air.
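A minimal Python sketch of the wet-and-dry-bulb calculation. The Tetens approximation for e[T] and the sea-level pressure are illustrative assumptions, not part of the definitions above:

import math

def e_sat(T_celsius):
    """Tetens approximation to saturation vapor pressure, in kPa."""
    return 0.6108 * math.exp(17.27 * T_celsius / (T_celsius + 237.3))

def gamma(P=101.325, cp=1.013e-3, lam=2.45, eps=0.622):
    """Psychrometric constant [kPa per degC]: gamma = cp * P / (eps * lam)."""
    return cp * P / (eps * lam)

T_dry, T_wet = 25.0, 18.0                       # example thermometer readings, degC
e_s = e_sat(T_dry)                              # saturated vapor pressure
e_a = e_sat(T_wet) - gamma() * (T_dry - T_wet)  # actual vapor pressure
print(round(gamma(), 4), round(e_s, 3), round(e_a, 3))  # 0.0674 3.168 1.593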
References
Chemical properties
Gas laws
Chemical engineering thermodynamics
Physical chemistry
Gases
Underwater diving physics | Psychrometric constant | [
"Physics",
"Chemistry",
"Engineering"
] | 293 | [
"Matter",
"Applied and interdisciplinary physics",
"Underwater diving physics",
"Chemical engineering",
"Phases of matter",
"Chemical engineering thermodynamics",
"Gas laws",
"nan",
"Statistical mechanics",
"Physical chemistry",
"Gases"
] |
13,896,453 | https://en.wikipedia.org/wiki/Odfjell%20Drilling | Odfjell Drilling Ltd. is an oil drilling, well service, and engineering company.
Current operations
The company has 3 divisions:
Mobile Offshore Drilling - Owns 6 drillships and operates in Norway, United Kingdom, Angola, Vietnam, and Brazil.
Drilling & Technology - provides platform drilling, project management and engineering services from offices in Bergen, Stavanger, and Aberdeen.
Well Services - provides casing and tubing running services (TRS), drill tool rental, and well intervention services to the onshore and offshore oil and gas industry.
History
The company was established in 1973 as an affiliate of Odfjell. In 1974, the first rigs were delivered from Aker ASA, and started service for ELF and Saga Petroleum. The first production drilling contract was awarded by Statoil on the Statfjord oil field in 1979. In 1984, the company expanded to the United Kingdom with a semi-submersible rig for Hamilton Brothers Oil & Gas. In 1989 the company opened an office in Singapore. In 1995, the company decided to concentrate on the North Sea.
In 2013, the company became a public company via an initial public offering on the Oslo Stock Exchange.
In 2017, the company sold its 37% interest in Robotic Drilling Systems, which it acquired in 2014.
In 2018, the company announced plans to expand its rig count from 4 to 6 to 10.
In 2018, the company acquired a drilling rig from Samsung.
See also
List of oilfield service companies
References
External links
Drilling rig operators
Engineering companies of Norway
Service companies of Norway
Offshore engineering
Petroleum industry in Norway
Companies based in Bergen
Technology companies established in 1973
Companies listed on the Oslo Stock Exchange
Norwegian companies established in 1973 | Odfjell Drilling | [
"Engineering"
] | 341 | [
"Construction",
"Offshore engineering"
] |
13,896,912 | https://en.wikipedia.org/wiki/Xanthine%20dehydrogenase | Xanthine dehydrogenase, also known as XDH, is a protein that, in humans, is encoded by the XDH gene.
Function
Xanthine dehydrogenase belongs to the group of molybdenum-containing hydroxylases involved in the oxidative metabolism of purines. The enzyme is a homodimer. Xanthine dehydrogenase can be converted to xanthine oxidase by reversible sulfhydryl oxidation or by irreversible proteolytic modification.
Xanthine dehydrogenase catalyzes the following chemical reaction:
xanthine + NAD+ + H2O ⇌ urate + NADH + H+
The three substrates of this enzyme are xanthine, NAD+, and H2O, whereas its three products are urate, NADH, and H+.
This enzyme participates in purine metabolism.
Nomenclature
This enzyme belongs to the family of oxidoreductases, to be specific, those acting on CH or CH2 groups with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is xanthine:NAD+ oxidoreductase. Other names in common use include NAD+-xanthine dehydrogenase, xanthine-NAD+ oxidoreductase, xanthine/NAD+ oxidoreductase, and xanthine oxidoreductase.
Clinical significance
Defects in xanthine dehydrogenase cause xanthinuria, may contribute to adult respiratory stress syndrome, and may potentiate influenza infection through an oxygen metabolite-dependent mechanism. It has been shown that patients with lung adenocarcinoma tumors which have high levels of XDH gene expression have lower survivals. Addiction to XDH protein has been used to target NSCLC tumors and cell lines in a precision oncology manner.
See also
Aldehyde oxidase and xanthine dehydrogenase, a/b hammerhead domain
MOCOS
Xanthine oxidase
References
Further reading
External links
EC 1.17.1
EC 1.17.3
NADH-dependent enzymes
Enzymes of known structure
Molybdenum enzymes
Metalloproteins
Genes on human chromosome 2 | Xanthine dehydrogenase | [
"Chemistry"
] | 465 | [
"Metalloproteins",
"Bioinorganic chemistry"
] |
13,898,437 | https://en.wikipedia.org/wiki/Estradiol%2017beta-dehydrogenase | In enzymology, an estradiol 17beta-dehydrogenase () is an enzyme that catalyzes the chemical reaction
estradiol-17beta + NAD(P)+ ⇌ estrone + NAD(P)H + H+
The 3 substrates of this enzyme are estradiol-17beta, NAD+, and NADP+, whereas its 4 products are estrone, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is estradiol-17beta:NAD(P)+ 17-oxidoreductase. Other names in common use include 20alpha-hydroxysteroid dehydrogenase, 17beta,20alpha-hydroxysteroid dehydrogenase, 17beta-estradiol dehydrogenase, estradiol dehydrogenase, estrogen 17-oxidoreductase, and 17beta-HSD. This enzyme participates in androgen and estrogen metabolism.
Structural studies
As of late 2007, 29 structures had been solved for this class of enzymes and deposited in the Protein Data Bank.
References
EC 1.1.1
NADPH-dependent enzymes
NADH-dependent enzymes
Enzymes of known structure
Steroid hormone biosynthesis | Estradiol 17beta-dehydrogenase | [
"Chemistry",
"Biology"
] | 327 | [
"Steroid hormone biosynthesis",
"Biosynthesis"
] |
11,330,015 | https://en.wikipedia.org/wiki/Synthetic%20genomics | Synthetic genomics is a nascent field of synthetic biology that uses aspects of genetic modification on pre-existing life forms, or artificial gene synthesis to create new DNA or entire lifeforms.
Overview
Synthetic genomics is unlike genetic modification in the sense that it does not use naturally occurring genes in its life forms. It may make use of custom designed base pair series, though in a more expanded and presently unrealized sense synthetic genomics could utilize genetic codes that are not composed of the two base pairs of DNA that are currently used by life.
The development of synthetic genomics is related to certain recent technical abilities and technologies in the field of genetics. The ability to construct long base pair chains cheaply and accurately on a large scale has allowed researchers to perform experiments on genomes that do not exist in nature. Coupled with the developments in protein folding models and decreasing computational costs the field of synthetic genomics is beginning to enter a productive stage of vitality.
History
Researchers were able to create a synthetic organism for the first time in 2010. This breakthrough was undertaken by Synthetic Genomics, Inc., which continues to specialize in the research and commercialization of custom designed genomes. It was accomplished by synthesizing a 1.08 Mbp genome (resembling that of Mycoplasma mycoides, save the insertion of a few watermarks) via the Gibson Assembly method and Transformation Associated Recombination.
Recombinant DNA technology
Soon after the discovery of restriction endonucleases and ligases, the field of genetics began using these molecular tools to assemble artificial sequences from smaller fragments of synthetic or naturally-occurring DNA. The advantage in using the recombinatory approach as opposed to continual DNA synthesis stems from the inverse relationship that exists between synthetic DNA length and percent purity of that synthetic length. In other words, as you synthesize longer sequences, the number of error-containing clones increases due to the inherent error rates of current technologies. Although recombinant DNA technology is more commonly used in the construction of fusion proteins and plasmids, several techniques with larger capacities have emerged, allowing for the construction of entire genomes.
Polymerase cycling assembly
Polymerase cycling assembly (PCA) uses a series of oligonucleotides (or oligos), approximately 40 to 60 nucleotides long, that altogether constitute both strands of the DNA being synthesized. These oligos are designed such that a single oligo from one strand contains a length of approximately 20 nucleotides at each end that is complementary to sequences of two different oligos on the opposite strand, thereby creating regions of overlap. The entire set is processed through cycles of: (a) hybridization at 60 °C; (b) elongation via Taq polymerase and a standard ligase; and (c) denaturation at 95 °C, forming progressively longer contiguous strands and ultimately resulting in the final genome. PCA was used to generate the first synthetic genome in history, that of the Phi X 174 virus.
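A minimal Python sketch of the oligo tiling described above, assuming 50-nt oligos with 20-nt overlaps; the function name and edge handling are illustrative, and real designs also balance melting temperatures:

def design_pca_oligos(target, oligo_len=50, overlap=20):
    """Tile a target sequence with alternating top- and bottom-strand oligos
    whose ends overlap by `overlap` nt, as required for PCA."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    revcomp = lambda s: "".join(comp[b] for b in reversed(s))
    step = oligo_len - overlap  # bottom-strand oligos start offset by this
    top = [target[i:i + oligo_len] for i in range(0, len(target), 2 * step)]
    bottom = [revcomp(target[i:i + oligo_len])
              for i in range(step, len(target), 2 * step)]
    return top, bottom

top, bottom = design_pca_oligos("ATGC" * 40)  # 160-nt toy target
print(len(top), len(bottom))                  # 3 top and 3 bottom oligos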
Gibson assembly method
The Gibson assembly method, designed by Daniel Gibson during his time at the J. Craig Venter Institute, requires a set of double-stranded DNA cassettes that constitute the entire genome being synthesized. Note that cassettes differ from contigs by definition, in that these sequences contain regions of homology to other cassettes for the purposes of recombination. In contrast to Polymerase Cycling Assembly, Gibson Assembly is a single-step, isothermal reaction with larger sequence-length capacity; ergo, it is used in place of Polymerase Cycling Assembly for genomes larger than 6 kb.
A T5 exonuclease performs a chew-back reaction at the terminal segments, working in the 5' to 3' direction, thereby producing complementary overhangs. The overhangs hybridize to each other, a Phusion DNA polymerase fills in any missing nucleotides and the nicks are sealed with a ligase. However, the genomes capable of being synthesized using this method alone is limited because as DNA cassettes increase in length, they require propagation in vitro in order to continue hybridizing; accordingly, Gibson assembly is often used in conjunction with transformation-associated recombination (see below) to synthesize genomes several hundred kilobases in size.
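A toy in-silico counterpart of this joining step, assuming exact 20-bp terminal homology between consecutive cassettes; real Gibson assembly is enzymatic, and this sketch only merges the sequences:

def gibson_join(cassettes, overlap=20):
    """Merge cassettes whose 3' end matches the next cassette's 5' end over
    `overlap` bp, mimicking the chew-back/anneal/fill-in outcome."""
    assembled = cassettes[0]
    for nxt in cassettes[1:]:
        assert assembled[-overlap:] == nxt[:overlap], "no terminal homology"
        assembled += nxt[overlap:]
    return assembled

a = "ATGAAACCC" + "G" * 20
b = "G" * 20 + "TTTCCCGGG"
print(gibson_join([a, b], overlap=20))  # the shared 20-bp end appears once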
Transformation-associated recombination
The goal of transformation-associated recombination (TAR) technology in synthetic genomics is to combine DNA contigs by means of homologous recombination performed by the yeast artificial chromosome (YAC). Of importance is the CEN element within the YAC vector, which corresponds to the yeast centromere. This sequence gives the vector the ability to behave in a chromosomal manner, thereby allowing it to perform homologous recombination. First, gap repair cloning is performed to generate regions of homology flanking the DNA contigs. Gap repair cloning is a particular form of the polymerase chain reaction in which specialized primers with extensions beyond the sequence of the DNA target are utilized. Then, the DNA cassettes are exposed to the YAC vector, which drives the process of homologous recombination, thereby connecting the DNA cassettes. Polymerase Cycling Assembly and TAR technology were used together to construct the 600 kb Mycoplasma genitalium genome in 2008, the first bacterial genome ever synthesized. Similar steps were taken in synthesizing the larger Mycoplasma mycoides genome a few years later.
Unnatural base pair (UBP)
An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. In 2012, a group of American scientists led by Floyd E. Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, reported that his team had designed an unnatural base pair (UBP). The two new artificial nucleotides, or Unnatural Base Pair (UBP), were named d5SICS and dNaM. More technically, these artificial nucleotides, bearing hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. In 2014 the same team from the Scripps Research Institute reported that they synthesized a stretch of circular DNA known as a plasmid containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed, and inserted it into cells of the common bacterium E. coli that successfully replicated the unnatural base pairs through multiple generations. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. This was in part achieved by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into E. coli bacteria. Then, the natural bacterial replication pathways use them to accurately replicate the plasmid containing d5SICS–dNaM.
The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. The artificial strings of DNA do not encode for anything yet, but scientists speculate they could be designed to manufacture new proteins which could have industrial or pharmaceutical uses.
Computer-made form
In April 2019, scientists at ETH Zurich reported the creation of the world's first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist.
See also
Artificial gene synthesis
Artificially Expanded Genetic Information System
Bioroid
Genetic engineering
Hachimoji DNA
Synthetic biological circuit
Synthetic genomes
References
External links
Synthetic Genomes: Technologies and Impact - A 2004 study completed for the DOE on the subject.
Effects of Developments in Synthetic Genomics: Hearing before the Committee on Energy and Commerce, House of Representatives, One Hundred Eleventh Congress, Second Session, May 27, 2010
Algae biomass producers
Genetic engineering
Genome editing
Synthetic biology | Synthetic genomics | [
"Chemistry",
"Engineering",
"Biology"
] | 1,703 | [
"Synthetic biology",
"Genetics techniques",
"Biological engineering",
"Genome editing",
"Genetic engineering",
"Bioinformatics",
"Molecular genetics",
"Molecular biology",
"Algae biomass producers"
] |
11,334,840 | https://en.wikipedia.org/wiki/Poly-Turf | Poly-Turf was a brand of artificial turf in the early 1970s, manufactured by American Biltrite of Wellesley, Massachusetts. It was the first specifically designed for American football, with a patented layered structure which included a "shock pad" between the artificial grass and the asphalt sub-surface. It used polypropylene for its artificial grass blades, rather than the nylon used in AstroTurf and 3M's Tartan Turf.
History in Miami
In the late 1960s, the natural grass surface at the Orange Bowl in Miami was constantly in poor condition, primarily due to heavy usage; 34 games were scheduled there during the 1968 football season.
Poly-Turf was installed at the city-owned stadium in 1970, and utilized for six seasons. The stadium was used for both college and professional football, primarily by the University of Miami Hurricanes and the Miami Dolphins of the NFL. It also hosted the eponymous New Year's Day college bowl game, Super Bowl games, and high school football.
The University of Nebraska Cornhuskers won the first three Orange Bowl games played on Poly-Turf, which included two national championships. The first Super Bowl on artificial turf took place on Poly-Turf at the Orange Bowl in January 1971, when the Baltimore Colts defeated the Dallas Cowboys 16-13 in Super Bowl V. The next Super Bowl at the stadium, Super Bowl X in January 1976, was the final game played on Poly-Turf in Miami. Its flaws received additional media exposure the week prior to the game, and Dolphins receiver Nat Moore documented them in a local article.
The longer polypropylene blades of Poly-Turf tended to mat down and become very slick under hot and sunny conditions. Other NFL owners were skeptical of the brand before the first regular season games were played in 1970. The field was replaced with another Poly-Turf surface after two seasons, before the Dolphins' undefeated 1972 season. While the second installation had similar problems, it lasted longer than the first and was used for four years. Over just six years, both installations deteriorated rapidly and some football players suffered an increasing number of leg and ankle injuries; some players claimed to trip over seams. Prior to the second installation in 1972, the city did not consult with the Dolphins about the replacement; Dolphins' head coach Don Shula preferred a different brand, either AstroTurf or Tartan Turf. The field discolored from green to blue due to intense ultraviolet exposure from the Miami sun.
Return to natural grass
The city removed the Poly-Turf in 1976 and re-installed natural grass, a special type known as Prescription Athletic Turf (PAT), which remained until the stadium's closure in early 2008. As late as December 1975, the city had planned to retain the Poly-Turf for the 1976 season, but that decision was changed a few weeks later, prior to the Super Bowl.
The Orange Bowl became the first major football venue to replace its artificial turf with natural grass, and it was the third NFL stadium to install Prescription Athletic Turf; Denver's Mile High Stadium and Washington's RFK Stadium installed PAT fields a year earlier in the spring of 1975.
Other installations
Other NFL stadiums which installed Poly-Turf included Schaefer Stadium, opened in 1971 for the New England Patriots, and Tulane Stadium in New Orleans, home of the Saints, Tulane University, and the Sugar Bowl. Notable college stadiums included Legion Field in Birmingham, Alabama and Alumni Stadium at Boston College.
American Biltrite ceased production of Poly-Turf in 1973; 3M stopped the manufacture of its Tartan Turf in 1974, citing rising oil prices in light of the 1973 oil embargo. This left AstroTurf as the only major manufacturer of artificial turf (with only minor competition along the way) until FieldTurf was started in the late 1990s.
References
Artificial turf | Poly-Turf | [
"Chemistry"
] | 774 | [
"Synthetic materials",
"Artificial turf"
] |
11,336,042 | https://en.wikipedia.org/wiki/Turbulent%20Prandtl%20number | The turbulent Prandtl number (Prt) is a non-dimensional term defined as the ratio between the momentum eddy diffusivity and the heat transfer eddy diffusivity. It is useful for solving the heat transfer problem of turbulent boundary layer flows. The simplest model for Prt is the Reynolds analogy, which yields a turbulent Prandtl number of 1. From experimental data, Prt has an average value of 0.85, but ranges from 0.7 to 0.9 depending on the Prandtl number of the fluid in question.
Definition
The introduction of eddy diffusivity and subsequently the turbulent Prandtl number works as a way to define a simple relationship between the extra shear stress and heat flux that is present in turbulent flow. If the momentum and thermal eddy diffusivities are zero (no apparent turbulent shear stress and heat flux), then the turbulent flow equations reduce to the laminar equations. We can define the eddy diffusivities for momentum transfer, $\epsilon_M$, and heat transfer, $\epsilon_H$, by $-\overline{u'v'} = \epsilon_M \frac{\partial \bar{u}}{\partial y}$ and $-\overline{v'T'} = \epsilon_H \frac{\partial \bar{T}}{\partial y}$, where $-\rho\overline{u'v'}$ is the apparent turbulent shear stress and $\rho c_p \overline{v'T'}$ is the apparent turbulent heat flux. The turbulent Prandtl number is then defined as $Pr_t = \frac{\epsilon_M}{\epsilon_H}$.
The turbulent Prandtl number has been shown to not generally equal unity (e.g. Malhotra and Kang, 1984; Kays, 1994; McEligot and Taylor, 1996; and Churchill, 2002). It is a strong function of the molecular Prandtl number, amongst other parameters, and the Reynolds analogy is not applicable when the molecular Prandtl number differs significantly from unity, as determined by Malhotra and Kang and elaborated by McEligot and Taylor and by Churchill.
Application
Turbulent momentum boundary layer equation: $\bar{u}\frac{\partial \bar{u}}{\partial x} + \bar{v}\frac{\partial \bar{u}}{\partial y} = -\frac{1}{\rho}\frac{d\bar{P}}{dx} + \frac{\partial}{\partial y}\left(\nu \frac{\partial \bar{u}}{\partial y} - \overline{u'v'}\right)$. Turbulent thermal boundary layer equation: $\bar{u}\frac{\partial \bar{T}}{\partial x} + \bar{v}\frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y}\left(\frac{\nu}{Pr} \frac{\partial \bar{T}}{\partial y} - \overline{v'T'}\right)$.
Substituting the eddy diffusivities into the momentum and thermal equations yields $\bar{u}\frac{\partial \bar{u}}{\partial x} + \bar{v}\frac{\partial \bar{u}}{\partial y} = -\frac{1}{\rho}\frac{d\bar{P}}{dx} + \frac{\partial}{\partial y}\left[\left(\nu + \epsilon_M\right) \frac{\partial \bar{u}}{\partial y}\right]$ and $\bar{u}\frac{\partial \bar{T}}{\partial x} + \bar{v}\frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y}\left[\left(\frac{\nu}{Pr} + \epsilon_H\right) \frac{\partial \bar{T}}{\partial y}\right]$. Substituting into the thermal equation using the definition of the turbulent Prandtl number gives $\bar{u}\frac{\partial \bar{T}}{\partial x} + \bar{v}\frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y}\left[\left(\frac{\nu}{Pr} + \frac{\epsilon_M}{Pr_t}\right) \frac{\partial \bar{T}}{\partial y}\right]$.
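To make the role of the turbulent Prandtl number concrete, the short Python sketch below evaluates the effective diffusivities appearing in the equations above for an assumed mixing-length eddy-viscosity profile; the profile shape, friction velocity and all numerical values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: effective diffusivities in the thermal boundary-layer
# equation, nu/Pr + eps_M/Pr_t, for an assumed mixing-length eddy viscosity.
nu = 1.5e-5        # molecular kinematic viscosity, m^2/s (air, assumed)
Pr = 0.7           # molecular Prandtl number (air)
Pr_t = 0.85        # turbulent Prandtl number (typical experimental average)
kappa = 0.41       # von Karman constant
u_tau = 0.5        # friction velocity, m/s (assumed)

y = np.linspace(1e-4, 0.02, 50)        # wall-normal distance, m
eps_M = kappa * u_tau * y              # assumed mixing-length eddy viscosity
eps_H = eps_M / Pr_t                   # heat-transfer eddy diffusivity from Pr_t

alpha_eff = nu / Pr + eps_H            # effective thermal diffusivity
nu_eff = nu + eps_M                    # effective momentum diffusivity

# Far from the wall the turbulent contribution dominates, so the ratio of the
# effective diffusivities approaches Pr_t rather than the molecular Pr.
print(nu_eff[-1] / alpha_eff[-1])      # close to Pr_t = 0.85
```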
Consequences
In the special case where the Prandtl number and turbulent Prandtl number both equal unity (as in the Reynolds analogy), the velocity profile and temperature profiles are identical. This greatly simplifies the solution of the heat transfer problem. If the Prandtl number and turbulent Prandtl number are different from unity, then a solution is possible by knowing the turbulent Prandtl number so that one can still solve the momentum and thermal equations.
In the general case of three-dimensional turbulence, the concepts of eddy viscosity and eddy diffusivity are not valid. Consequently, the turbulent Prandtl number has no meaning.
References
Books
Convection
Dimensionless numbers of fluid mechanics
Fluid dynamics
Heat transfer | Turbulent Prandtl number | [
"Physics",
"Chemistry",
"Engineering"
] | 529 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Chemical engineering",
"Convection",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
11,336,178 | https://en.wikipedia.org/wiki/Knudsen%20cell | In crystal growth, a Knudsen cell is an effusion evaporator source for relatively low partial pressure elementary sources (e.g. Ga, Al, Hg, As). Because it is easy to control the temperature of the evaporating material in Knudsen cells, they are commonly used in molecular-beam epitaxy.
Development
The Knudsen effusion cell was developed by Martin Knudsen (1871–1949). A typical Knudsen cell contains a crucible (made of pyrolytic boron nitride, quartz, tungsten or graphite), heating filaments (often made of tantalum), a water cooling system, heat shields, and an orifice shutter.
Vapor pressure measurement
The Knudsen cell is used to measure the vapor pressures of a solid with very low vapor pressure. Such a solid forms a vapor at low pressure by sublimation. The vapor slowly effuses through the pinhole, and the loss of mass is proportional to the vapor pressure and can be used to determine this pressure. The heat of sublimation can also be determined by measuring the vapor pressure as a function of temperature, using the Clausius–Clapeyron relation.
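As a minimal numerical sketch of this measurement, the snippet below converts a measured mass loss into a vapor pressure using the standard Knudsen effusion relation p = (Δm/(A·t))·√(2πRT/M) for free-molecular flow through the orifice; the sample numbers are purely illustrative.

```python
import math

def knudsen_vapor_pressure(mass_loss_kg, time_s, orifice_area_m2,
                           temperature_K, molar_mass_kg_per_mol):
    """Vapor pressure (Pa) from effusion mass loss, using the standard
    Knudsen relation p = (dm/dt / A) * sqrt(2*pi*R*T / M)."""
    R = 8.314  # J/(mol K)
    mass_flux = mass_loss_kg / (time_s * orifice_area_m2)  # kg / (m^2 s)
    return mass_flux * math.sqrt(2 * math.pi * R * temperature_K
                                 / molar_mass_kg_per_mol)

# Illustrative numbers only: 2 mg lost over 1 hour through a 1 mm^2 orifice
# at 600 K for a species with molar mass 0.070 kg/mol.
p = knudsen_vapor_pressure(2e-6, 3600.0, 1e-6, 600.0, 0.070)
print(f"estimated vapor pressure: {p:.3e} Pa")
```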
References
Crystallography
Semiconductor growth | Knudsen cell | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 261 | [
"Materials science stubs",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics"
] |
11,336,212 | https://en.wikipedia.org/wiki/Acrasin | Each species of slime mold has its own specific chemical messenger, which are collectively referred to as acrasins. These chemicals signal that many individual cells aggregate to form a single large cell or plasmodium. One of the earliest acrasins to be identified was cyclic AMP, found in the species Dictyostelium discoideum by Brian Shaffer, which exhibits a complex swirling-pulsating spiral pattern when forming a pseudoplasmodium.
The term acrasin was descriptively named after Acrasia from Edmund Spenser's Faerie Queene, who seduced men against their will and then transformed them into beasts. Acrasia is itself a play on the Greek akrasia that describes loss of free will.
Extraction
Brian Shaffer was the first to purify acrasin, now known to be cyclic AMP, in 1954, using methanol. Glorin, the acrasin of P. violaceum, can be purified by inhibiting the acrasin-degrading enzyme acrasinase with alcohol, extracting with alcohol and separating with column chromatography.
Notes
Evidence for the formation of cell aggregates by chemotaxis in the development of the slime mold Dictyostelium discoideum - J. T. Bonner and L. J. Savage, Journal of Experimental Biology, Vol. 106, p. 1, October 1947
Aggregation in cellular slime moulds: in vitro isolation of acrasin - B. M. Shaffer, Nature, Vol. 79, p. 975, 1953
Identification of a pterin as the acrasin of the cellular slime mold Dictyostelium lacteum - Proceedings of the National Academy of Sciences of the United States, Vol. 79, pp. 6270–6274, October 1982
Hunting Slime Moulds - Adele Conover, Smithsonian Magazine Online, 2001
References
Cell biology | Acrasin | [
"Biology"
] | 404 | [
"Cell biology"
] |
11,336,559 | https://en.wikipedia.org/wiki/Traveler%27s%20dilemma | In game theory, the traveler's dilemma (sometimes abbreviated TD) is a non-zero-sum game in which each player proposes a payoff. The lower of the two proposals wins; the lowball player receives the lowball payoff plus a small bonus, and the highball player receives the same lowball payoff, minus a small penalty. Surprisingly, the Nash equilibrium is for both players to aggressively lowball. The traveler's dilemma is notable in that naive play appears to outperform the Nash equilibrium; this apparent paradox also appears in the centipede game and the finitely-iterated prisoner's dilemma.
Formulation
The original game scenario was formulated in 1994 by Kaushik Basu and goes as follows:
"An airline loses two suitcases belonging to two different travelers. Both suitcases happen to be identical and contain identical antiques. An airline manager tasked to settle the claims of both travelers explains that the airline is liable for a maximum of $100 per suitcase—he is unable to find out directly the price of the antiques."
"To determine an honest appraised value of the antiques, the manager separates both travelers so they can't confer, and asks them to write down the amount of their value at no less than $2 and no larger than $100. He also tells them that if both write down the same number, he will treat that number as the true dollar value of both suitcases and reimburse both travelers that amount. However, if one writes down a smaller number than the other, this smaller number will be taken as the true dollar value, and both travelers will receive that amount along with a bonus/malus: $2 extra will be paid to the traveler who wrote down the lower value and a $2 deduction will be taken from the person who wrote down the higher amount. The challenge is: what strategy should both travelers follow to decide the value they should write down?"
The two players attempt to maximize their own payoff, without any concern for the other player's payoff.
Analysis
Backward induction only applies where there is perfect information. If it is used where there is information asymmetry (the airline manager does not know the value of the antiques), the result will be irrational behavior. This is what happens in the following analysis.
One might expect a traveler's optimum choice to be $100; that is, the traveler values the antiques at the airline manager's maximum allowed price. Remarkably, and, to many, counter-intuitively, the Nash equilibrium solution is in fact just $2; that is, the traveler values the antiques at the airline manager's minimum allowed price.
For an understanding of why $2 is the Nash equilibrium consider the following proof:
Alice, having lost her antiques, is asked their value. Alice's first thought is to quote $100, the maximum permissible value.
On reflection, though, she realizes that her fellow traveler, Bob, might also quote $100. And so Alice changes her mind, and decides to quote $99, which, if Bob quotes $100, will pay $101. But 'low-balling' leads to the worst possible outcome; rational players following Muth rational expectations would both choose $100, as this maximizes their individual and collective benefit.
But Bob, being in an identical position to Alice, might also think of quoting $99. And so Alice changes her mind, and decides to quote $98, which, if Bob quotes $99, will pay $100. This is greater than the $99 Alice would receive if both she and Bob quoted $99.
This cycle of thought continues, until Alice finally decides to quote just $2—the minimum permissible price.
The above analysis depends crucially on (1) imperfect information, since the airline manager does not know the true value, and (2) irrationality, in particular the failure to use the Muth-rational strategy.
Another proof goes as follows:
If Alice only wants to maximize her own payoff, choosing $99 trumps choosing $100. If Bob chooses any dollar value 2–98 inclusive, $99 and $100 give equal payoffs; if Bob chooses $99 or $100, choosing $99 nets Alice an extra dollar.
A similar line of reasoning shows that choosing $98 is always better for Alice than choosing $99. The only situation where choosing $99 would give a higher payoff than choosing $98 is if Bob chooses $100—but if Bob is only seeking to maximize his own profit, he will always choose $99 instead of $100.
This line of reasoning can be applied to all of Alice's whole-dollar options until she finally reaches $2, the lowest price.
Experimental results
The ($2, $2) outcome in this instance is the Nash equilibrium of the game. By definition this means that if your opponent chooses this Nash equilibrium value then your best choice is that Nash equilibrium value of $2. This will not be the optimum choice if there is a chance of your opponent choosing a higher value than $2. When the game is played experimentally, most participants select a value higher than the Nash equilibrium and closer to $100 (corresponding to the Pareto optimal solution). More precisely, the Nash equilibrium strategy solution proved to be a bad predictor of people's behavior in a traveler's dilemma with small bonus/malus and a rather good predictor if the bonus/malus parameter was big.
Furthermore, the travelers are rewarded by deviating strongly from the Nash equilibrium in the game and obtain much higher rewards than would be realized with the purely rational strategy. These experiments (and others, such as focal points) show that the majority of people do not use purely rational strategies, but the strategies they do use are demonstrably optimal. This paradox could reduce the value of pure game theory analysis, but could also point to the benefit of an expanded reasoning that understands how it can be quite rational to make non-rational choices, at least in the context of games that have players that can be counted on to not play "rationally." For instance, Capraro has proposed a model where humans do not act a priori as single agents but forecast how the game would be played if they formed coalitions, and then act so as to maximize the forecast. His model fits the experimental data on the traveler's dilemma and similar games quite well. Recently, the traveler's dilemma was tested with decisions undertaken in groups rather than individually, in order to test the assumption that group decisions are more rational, delivering the message that, usually, two heads are better than one. Experimental findings show that groups are always more rational, i.e. their claims are closer to the Nash equilibrium, and more sensitive to the size of the bonus/malus.
Some players appear to pursue a Bayesian Nash equilibrium.
Similar games
The traveler's dilemma can be framed as a finitely repeated prisoner's dilemma. Similar paradoxes are attributed to the centipede game and to the p-beauty contest game (or more specifically, "Guess 2/3 of the average"). One variation of the original traveler's dilemma in which both travelers are offered only two integer choices, $2 or $3, is identical mathematically to the standard non-iterated Prisoner's dilemma and thus the traveler's dilemma can be viewed as an extension of prisoner's dilemma. (The minimum guaranteed payout is $1, and each dollar beyond that may be considered equivalent to a year removed from a three-year prison sentence.) These games tend to involve deep iterative deletion of dominated strategies in order to demonstrate the Nash equilibrium, and tend to lead to experimental results that deviate markedly from classical game-theoretical predictions.
Payoff matrix
The canonical payoff structure (if only integer inputs are taken into account) can be written as follows.
Denoting by $S = \{2, 3, \ldots, 100\}$ the set of strategies available to both players and by $A : S \times S \to \mathbb{R}$
the payoff function of one of them, we can write
$A(x, y) = \begin{cases} x & \text{if } x = y \\ x + 2 & \text{if } x < y \\ y - 2 & \text{if } x > y \end{cases}$
(Note that the other player receives $A(y, x)$, since the game is quantitatively symmetric.)
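Based on this payoff function, the following Python sketch builds the integer strategy space and verifies by brute force that ($2, $2) is the unique pure-strategy Nash equilibrium.

```python
# Brute-force check of the traveler's dilemma Nash equilibrium.
LOW, HIGH, BONUS = 2, 100, 2
strategies = range(LOW, HIGH + 1)

def payoff(x, y):
    """Payoff to the player who claims x when the other claims y."""
    if x == y:
        return x
    if x < y:
        return x + BONUS   # lower claim gets the bonus
    return y - BONUS       # higher claim pays the penalty

def is_nash(x, y):
    """(x, y) is a pure Nash equilibrium if neither player can gain
    by unilaterally deviating."""
    best_x = max(payoff(d, y) for d in strategies)
    best_y = max(payoff(d, x) for d in strategies)
    return payoff(x, y) == best_x and payoff(y, x) == best_y

equilibria = [(x, y) for x in strategies for y in strategies if is_nash(x, y)]
print(equilibria)  # [(2, 2)] -- the only pure-strategy Nash equilibrium
```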
References
Non-cooperative games
Dilemmas | Traveler's dilemma | [
"Mathematics"
] | 1,658 | [
"Game theory",
"Non-cooperative games"
] |
3,234,326 | https://en.wikipedia.org/wiki/Camphorsulfonic%20acid | Camphorsulfonic acid, sometimes abbreviated CSA or 10-CSA is an organosulfur compound. Like typical sulfonic acids, it is a relatively strong acid that is a colorless solid at room temperature and is soluble in water and a wide variety of organic substances.
This compound is commercially available. It can be prepared by sulfonation of camphor with sulfuric acid and acetic anhydride:
Although this reaction appears to be a sulfonation of an unactivated methyl group, the actual mechanism is believed to involve a retro-semipinacol rearrangement, deprotonation next to the tertiary carbocation to form an alkene, sulfonation of the alkene intermediate, and finally, semipinacol rearrangement to re-establish the ketone function.
In organic synthesis, CSA and its derivatives can be used as resolving agents for chiral amines and other cations. The synthesis of osanetant was an example of this. 3-bromocamphor-8-sulfonic acid was used in the synthesis of enantiopure devazepide.
Camphorsulfonic acid is also used for the synthesis of quinolines. Camphorsulfonic acid is used in some pharmaceutical formulations, where it is referred to as camsilate or camsylate, including trimetaphan camsilate and lanabecestat camsylate. Some studies (cf. Lednicer) report that D-CSA was used for the resolution of chloramphenicol.
References
Sulfonic acids | Camphorsulfonic acid | [
"Chemistry"
] | 332 | [
"Functional groups",
"Sulfonic acids"
] |
3,234,441 | https://en.wikipedia.org/wiki/Third%20phase | Third phase is the term for a stable emulsion which forms in a liquid–liquid extraction when the original two phases (aqueous and organic) are mixed.
The third phase can be caused by a detergent (surfactant) or a fine solid. While third phase is a term for an unwanted emulsion, a stable emulsion is desired in emulsion polymerization; the same additives that can be used to make a stable emulsion for a latex synthesis can prove to encourage a third phase to form.
One term for the third phase found in PUREX plants is crud (Chalk River Unknown Deposit). One common crud is formed by the reaction of zirconium salts (from fission) with degraded tributyl phosphate (TBP). The TBP degrades into dibutyl hydrogen phosphate and then into butyl dihydrogen phosphate. The dibutyl hydrogen phosphate and the zirconium can form a polymeric solid which is very insoluble.
Colloidal chemistry | Third phase | [
"Chemistry"
] | 214 | [
"Colloidal chemistry",
"Surface science",
"Colloids"
] |
3,237,549 | https://en.wikipedia.org/wiki/Coherent%20backscattering | In physics, coherent backscattering is observed when coherent radiation (such as a laser beam) propagates through a medium which has a large number of scattering centers (such as milk or a thick cloud) of size comparable to the wavelength of the radiation.
The waves are scattered many times while traveling through the medium. Even for incoherent radiation, the scattering typically reaches a local maximum in the direction of backscattering. For coherent radiation, however, the peak is two times higher.
Coherent backscattering is very difficult to detect and measure for two reasons. The first is fairly obvious, that it is difficult to measure the direct backscatter without blocking the beam, but there are methods for overcoming this problem. The second is that the peak is usually extremely sharp around the backward direction, so that a very high level of angular resolution is needed for the detector to see the peak without averaging its intensity out over the surrounding angles where the intensity can undergo large dips. At angles other than the backscatter direction, the light intensity is subject to numerous essentially random fluctuations called speckles.
This is one of the most robust interference phenomena that survive multiple scattering, and it is regarded as an aspect of a quantum mechanical phenomenon known as weak localization (Akkermans et al. 1986). In weak localization, interference of the direct and reverse paths leads to a net reduction of light transport in the forward direction. This phenomenon is typical of any coherent wave which is multiply scattered. It is typically discussed for light waves, for which it is similar to the weak localization phenomenon for electrons in disordered semiconductors, and it is often seen as the precursor to Anderson (or strong) localization of light. Weak localization of light can be detected since it is manifested as an enhancement of light intensity in the backscattering direction. This substantial enhancement is called the cone of coherent backscattering.
Coherent backscattering has its origin in the interference between direct and reverse paths in the backscattering direction. When a multiply scattering medium is illuminated by a laser beam, the scattered intensity results from the interference between the amplitudes associated with the various scattering paths; for a disordered medium, the interference terms are washed out when averaged over many sample configurations, except in a narrow angular range around exact backscattering where the average intensity is enhanced. This phenomenon is the result of many sinusoidal two-wave interference patterns which add up. The cone is the Fourier transform of the spatial distribution of the intensity of the scattered light on the sample surface, when the latter is illuminated by a point-like source. The enhanced backscattering relies on the constructive interference between reverse paths. One can make an analogy with a Young's interference experiment, where two diffracting slits would be positioned in place of the "input" and "output" scatterers.
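The factor-of-two enhancement can be illustrated with a toy Monte Carlo calculation: each multiple-scattering path and its time-reversed partner acquire a relative phase that vanishes in the exact backscattering direction and is effectively random elsewhere. The path count and amplitude distribution below are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths = 200_000

# Random complex amplitudes for the "direct" version of each scattering path.
amps = rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)

def mean_intensity(relative_phase):
    """Average intensity when each path interferes with its reversed partner
    at a given direct/reverse relative phase."""
    total = amps * (1.0 + np.exp(1j * relative_phase))
    return np.mean(np.abs(total) ** 2)

# Exact backscattering: direct and reversed paths are in phase.
I_back = mean_intensity(np.zeros(n_paths))
# Away from backscattering: the relative phase is essentially random.
I_diffuse = mean_intensity(rng.uniform(0, 2 * np.pi, size=n_paths))

print(I_back / I_diffuse)  # approaches 2 (single scattering neglected)
```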
See also
Back scattering alignment (BSA), a coordinate system most commonly used in radar
Forward scattering alignment (FSA), a coordinate system primarily used in optics
Opposition surge, an astronomical phenomenon caused by the coherent backscatter effect
References
Scattering, absorption and radiative transfer (optics)
Mesoscopic physics | Coherent backscattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 643 | [
" absorption and radiative transfer (optics)",
"Quantum mechanics",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics",
"Mesoscopic physics"
] |
3,238,801 | https://en.wikipedia.org/wiki/Farnesol | Farnesol is a natural 15-carbon organic compound which is an acyclic sesquiterpene alcohol. Under standard conditions, it is a colorless liquid. It is hydrophobic, and thus insoluble in water, but miscible with oils. As the pyrophosphate ester, farnesol is a precursor to many terpenes and terpenoids.
Uses
Farnesol is present in many essential oils such as citronella, neroli, cyclamen, lemon grass, tuberose, rose, musk, balsam, and tolu. It is used in perfumery to emphasize the odors of sweet, floral perfumes. It enhances perfume scent by acting as a co-solvent that regulates the volatility of the odorants. It is especially used in lilac perfumes. Farnesol and its ester derivatives are important precursors for a variety of other compounds used as fragrances and vitamins.
Cosmetics
Farnesol is used as a deodorant in cosmetic products. Farnesol is subject to restrictions on its use in perfumery, because some people may become sensitised to it.
Natural source and synthesis
In nature
The pyrophosphate ester of farnesol is the building block of possibly all acyclic sesquiterpenoids. Two of these 15-carbon units are joined to form the 30-carbon squalene, which is the precursor for steroids in plants, animals, and fungi.
Farnesyl pyrophosphate is produced from the reaction of geranyl pyrophosphate and isopentenyl pyrophosphate. Farnesyl pyrophosphate is the precursor to farnesol and farnesene.
Farnesol is used by the fungus Candida albicans as a quorum sensing molecule that inhibits filamentation.
Commercial production
Obtaining farnesol from natural sources is uneconomical. In industry, farnesol is produced from linalool.
History of the name
Farnesol was named (ca. 1900–1905) after the Farnese acacia tree (Vachellia farnesiana), since the flowers from the tree were the commercial source of the floral essence in which the chemical was identified. This particular acacia species, in turn, is named after Cardinal Odoardo Farnese (1573–1626) of the Italian Farnese family who (from 1550 through the 17th century) maintained some of the first private European botanical gardens in the Farnese Gardens in Rome. The addition of the -ol suffix implies an alcohol. The plant itself was brought to the Farnese gardens from the Caribbean and Central America, where it originates.
See also
Farnesylation
Farnesene
Farnesyl pyrophosphate
Geranylgeraniol
Nerolidol
References
Fatty alcohols
Alkene derivatives
Sesquiterpenes
Perfume ingredients
Flavors
Insect pheromones
Primary alcohols | Farnesol | [
"Chemistry"
] | 606 | [
"Insect pheromones",
"Chemical ecology"
] |
3,239,191 | https://en.wikipedia.org/wiki/Post-transcriptional%20modification | Transcriptional modification or co-transcriptional modification is a set of biological processes common to most eukaryotic cells by which an RNA primary transcript is chemically altered following transcription from a gene to produce a mature, functional RNA molecule that can then leave the nucleus and perform any of a variety of different functions in the cell. There are many types of post-transcriptional modifications achieved through a diverse class of molecular mechanisms.
One example is the conversion of precursor messenger RNA transcripts into mature messenger RNA that is subsequently capable of being translated into protein. This process includes three major steps that significantly modify the chemical structure of the RNA molecule: the addition of a 5' cap, the addition of a 3' polyadenylated tail, and RNA splicing. Such processing is vital for the correct translation of eukaryotic genomes because the initial precursor mRNA produced by transcription often contains both exons (coding sequences) and introns (non-coding sequences); splicing removes the introns and links the exons directly, while the cap and tail facilitate the transport of the mRNA to a ribosome and protect it from molecular degradation.
Post-transcriptional modifications may also occur during the processing of other transcripts which ultimately become transfer RNA, ribosomal RNA, or any of the other types of RNA used by the cell.
mRNA processing
5' processing
Capping
Capping of the pre-mRNA involves the addition of 7-methylguanosine (m7G) to the 5' end. To achieve this, the terminal 5' phosphate requires removal, which is done with the aid of the enzyme RNA triphosphatase. The enzyme guanosyl transferase then catalyses the reaction, which produces the diphosphate 5' end. The diphosphate 5' end then attacks the alpha phosphorus atom of a GTP molecule in order to add the guanine residue in a 5'5' triphosphate link. The enzyme (guanine-N7-)-methyltransferase ("cap MTase") transfers a methyl group from S-adenosyl methionine to the guanine ring. This type of cap, with just the (m7G) in position, is called a cap 0 structure. The ribose of the adjacent nucleotide may also be methylated to give a cap 1. Methylation of nucleotides further downstream in the RNA molecule produces cap 2, cap 3 structures and so on. In these cases the methyl groups are added to the 2' OH groups of the ribose sugar.
The cap protects the 5' end of the primary RNA transcript from attack by ribonucleases that have specificity to the 3'5' phosphodiester bonds.
3' processing
Cleavage and polyadenylation
The pre-mRNA processing at the 3' end of the RNA molecule involves cleavage of its 3' end and then the addition of about 250 adenine residues to form a poly(A) tail. The cleavage and adenylation reactions occur primarily if a polyadenylation signal sequence (5'-AAUAAA-3') is located near the 3' end of the pre-mRNA molecule, which is followed by another sequence, usually (5'-CA-3'), that is the site of cleavage. A GU-rich sequence is also usually present further downstream on the pre-mRNA molecule. More recently, it has been demonstrated that alternate signal sequences such as UGUA upstream of the cleavage site can also direct cleavage and polyadenylation in the absence of the AAUAAA signal. These two signals are not mutually independent, and often coexist. After the synthesis of the sequence elements, several multi-subunit proteins are transferred to the RNA molecule. These sequence-specific binding proteins, cleavage and polyadenylation specificity factor (CPSF), cleavage factor I (CF I), and cleavage stimulation factor (CStF), are transferred from RNA Polymerase II to the RNA molecule. The three factors bind to the sequence elements. The AAUAAA signal is directly bound by CPSF. For UGUA-dependent processing sites, binding of the multi-protein complex is done by cleavage factor I (CF I). The resultant protein complex contains additional cleavage factors and the enzyme polyadenylate polymerase (PAP). This complex cleaves the RNA between the polyadenylation sequence and the GU-rich sequence at the cleavage site marked by the (5'-CA-3') sequences. Poly(A) polymerase then adds about 200 adenine units to the new 3' end of the RNA molecule using ATP as a precursor. As the poly(A) tail is synthesized, it binds multiple copies of poly(A)-binding protein, which protects the 3' end from ribonuclease digestion by enzymes including the CCR4-Not complex.
Intron splicing
RNA splicing is the process by which introns, regions of RNA that do not code for proteins, are removed from the pre-mRNA and the remaining exons connected to re-form a single continuous molecule. Exons are sections of mRNA which become "expressed" or translated into a protein. They are the coding portions of a mRNA molecule. Although most RNA splicing occurs after the complete synthesis and end-capping of the pre-mRNA, transcripts with many exons can be spliced co-transcriptionally. The splicing reaction is catalyzed by a large protein complex called the spliceosome assembled from proteins and small nuclear RNA molecules that recognize splice sites in the pre-mRNA sequence. Many pre-mRNAs, including those encoding antibodies, can be spliced in multiple ways to produce different mature mRNAs that encode different protein sequences. This process is known as alternative splicing, and allows production of a large variety of proteins from a limited amount of DNA.
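As a toy illustration of the cut-and-join outcome of splicing (not of the spliceosome's chemistry), the sketch below removes the intron segments of a made-up pre-mRNA string, given hypothetical exon coordinates, and concatenates the exons into a mature sequence.

```python
def splice(pre_mrna: str, exons: list[tuple[int, int]]) -> str:
    """Return the mature mRNA obtained by keeping only the exon segments.

    `exons` is a list of (start, end) 0-based, end-exclusive coordinates
    on the pre-mRNA; everything between consecutive exons is treated as
    an intron and removed.
    """
    return "".join(pre_mrna[start:end] for start, end in sorted(exons))

# Hypothetical pre-mRNA: exon1 + intron + exon2 + intron + exon3
pre_mrna = "AUGGCU" + "GUAAGUUUUCAG" + "CCAUUG" + "GUAUACUAACAG" + "GAAUGA"
exons = [(0, 6), (18, 24), (36, 42)]

print(splice(pre_mrna, exons))  # AUGGCUCCAUUGGAAUGA
```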
Histone mRNA processing
Histones H2A, H2B, H3 and H4 form the core of a nucleosome and thus are called core histones. Processing of core histones is done differently because typical histone mRNA lacks several features of other eukaryotic mRNAs, such as a poly(A) tail and introns. Thus, such mRNAs do not undergo splicing and their 3' processing is done independently of most cleavage and polyadenylation factors. Core histone mRNAs have a special stem-loop structure at the 3' end that is recognized by a stem–loop binding protein, and a downstream sequence, called the histone downstream element (HDE), that recruits U7 snRNA. Cleavage and polyadenylation specificity factor 73 cuts the mRNA between the stem-loop and the HDE.
Histone variants, such as H2A.Z or H3.3, however, have introns and are processed as normal mRNAs including splicing and polyadenylation.
See also
Post-translational modification
RNA editing
RNA-Seq
References
Further reading
Cell biology
Molecular biology
Gene expression
RNA | Post-transcriptional modification | [
"Chemistry",
"Biology"
] | 1,440 | [
"Cell biology",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
3,239,238 | https://en.wikipedia.org/wiki/Ultramicrotomy | Ultramicrotomy is a method for cutting specimens into extremely thin slices, called ultra-thin sections, that can be studied and documented at different magnifications in a transmission electron microscope (TEM). It is used mostly for biological specimens, but sections of plastics and soft metals can also be prepared. Sections must be very thin because the 50 to 125 kV electrons of the standard electron microscope cannot pass through biological material much thicker than 150 nm. For best resolutions, sections should be from 30 to 60 nm. This is roughly the equivalent to splitting a 0.1 mm-thick human hair into 2,000 slices along its diameter, or cutting a single red blood cell into 100 slices.
Ultramicrotomy process
Ultra-thin sections of specimens are cut using a specialized instrument called an "ultramicrotome". The ultramicrotome is fitted with either a diamond knife, for most biological ultra-thin sectioning, or a glass knife, often used for initial cuts. There are numerous other pieces of equipment involved in the ultramicrotomy process. Before selecting an area of the specimen block to be ultra-thin sectioned, the technician examines semithin or "thick" sections ranging from 0.5 to 2 μm. These thick sections are also known as survey sections and are viewed under a light microscope to determine whether the right area of the specimen is in a position for thin sectioning. "Ultra-thin" sections from 50 to 100 nm thick are able to be viewed in the TEM.
Tissue sections obtained by ultramicrotomy are compressed by the cutting force of the knife. In addition, interference microscopy of the cut surface of the blocks reveals that the sections are often not flat. With Epon or Vestopal as embedding medium the ridges and valleys usually do not exceed 0.5 μm in height, i.e., 5–10 times the thickness of ordinary sections (1).
A small sample is taken from the specimen to be investigated. Specimens may be from biological matter, like animal or plant tissue, or from inorganic material such as rock, metal, magnetic tape, plastic, film, etc. The sample block is first trimmed to create a block face 1 mm by 1 mm in size. "Thick" sections (1 μm) are taken to be looked at on an optical microscope. An area is chosen to be sectioned for TEM and the block face is re-trimmed to a size no larger than 0.7 mm on a side. Block faces usually have a square, trapezoidal, rectangular, or triangular shape. Finally, thin sections are cut with a glass or diamond knife using an ultramicrotome and the sections are left floating on water that is held in a boat or trough. The sections are then retrieved from the water surface and mounted on a copper, nickel, gold, or other metal grid. Ideal section thickness for transmission electron microscopy with accelerating voltages between 50kV and 120kV is about 30–100 nm.
Advances
In 1952 Humberto Fernandez Morán introduced cryo ultramicrotomy, which is a similar technique but done at freezing temperatures between −20 and −150°C. Cryo ultramicrotomy can be used to cut ultra-thin frozen biological specimens. One of the advantages over the more "traditional" ultramicrotomy process is speed, since it should be possible to freeze and section a specimen in 1 to 2 hours.
References
Microscopy
Electron microscopy
Biological techniques and tools | Ultramicrotomy | [
"Chemistry",
"Biology"
] | 710 | [
"Electron",
"Electron microscopy",
"nan",
"Microscopy"
] |
3,239,291 | https://en.wikipedia.org/wiki/Fracture%20toughness | In materials science, fracture toughness is the critical stress intensity factor of a sharp crack where propagation of the crack suddenly becomes rapid and unlimited. A component's thickness affects the constraint conditions at the tip of a crack with thin components having plane stress conditions and thick components having plane strain conditions. Plane strain conditions give the lowest fracture toughness value which is a material property. The critical value of stress intensity factor in mode I loading measured under plane strain conditions is known as the plane strain fracture toughness, denoted . When a test fails to meet the thickness and other test requirements that are in place to ensure plane strain conditions, the fracture toughness value produced is given the designation . Fracture toughness is a quantitative way of expressing a material's resistance to crack propagation and standard values for a given material are generally available.
Slow self-sustaining crack propagation known as stress corrosion cracking, can occur in a corrosive environment above the threshold and below . Small increments of crack extension can also occur during fatigue crack growth, which after repeated loading cycles, can gradually grow a crack until final failure occurs by exceeding the fracture toughness.
Material variation
Fracture toughness varies by approximately 4 orders of magnitude across materials. Metals hold the highest values of fracture toughness. Cracks cannot easily propagate in tough materials, making metals highly resistant to cracking under stress and giving their stress–strain curve a large zone of plastic flow. Ceramics have a lower fracture toughness, but show an exceptional improvement in resistance to stress fracture, attributed to their strength increase of about 1.5 orders of magnitude relative to metals. The fracture toughness of composites, made by combining engineering ceramics with engineering polymers, greatly exceeds the individual fracture toughness of the constituent materials.
Mechanisms
Intrinsic mechanisms
Intrinsic toughening mechanisms are processes which act ahead of the crack tip to increase the material's toughness. These will tend to be related to the structure and bonding of the base material, as well as microstructural features and additives to it. Examples of mechanisms include:
crack deflection by secondary phases,
crack bifurcation due to fine grain structure
changes in the crack path due to grain boundaries
Any alteration to the base material which increases its ductility can also be thought of as intrinsic toughening.
Grain boundaries
The presence of grains in a material can also affect its toughness by affecting the way cracks propagate. In front of a crack, a plastic zone can be present as the material yields. Beyond that region, the material remains elastic. The conditions for fracture are the most favorable at the boundary between this plastic and elastic zone, and thus cracks often initiate by the cleavage of a grain at that location.
At low temperatures, where the material can become completely brittle, such as in a body-centered cubic (BCC) metal, the plastic zone shrinks away, and only the elastic zone exists. In this state, the crack will propagate by successive cleavage of the grains. At these low temperatures, the yield strength is high, but the fracture strain and crack tip radius of curvature are low, leading to a low toughness.
At higher temperatures, the yield strength decreases, and leads to the formation of the plastic zone. Cleavage is likely to initiate at the elastic-plastic zone boundary, and then link back to the main crack tip. This is usually a mixture of cleavage of grains and ductile fracture of grains, known as fibrous linkages. The percentage of fibrous linkages increases as temperature increases, until the linkup is entirely fibrous linkages. In this state, even though yield strength is lower, the presence of ductile fracture and a higher crack tip radius of curvature results in a higher toughness.
Inclusions
Inclusions in a material, such as second-phase particles, can act similarly to brittle grains that can affect crack propagation. Fracture or decohesion at the inclusion can either be caused by the external applied stress or by the dislocations generated by the requirement of the inclusion to maintain contiguity with the matrix around it. Similar to grains, the fracture is most likely to occur at the plastic-elastic zone boundary. Then the crack can link up back to the main crack. If the plastic zone is small or the density of the inclusions is small, the fracture is more likely to directly link up with the main crack tip. If the plastic zone is large, or the density of inclusions is high, additional inclusion fractures may occur within the plastic zone, and linkup occurs by progressing from the crack to the closest fracturing inclusion within the zone.
Transformation toughening
Transformation toughening is a phenomenon whereby a material undergoes one or more martensitic (displacive, diffusionless) phase transformations which result in an almost instantaneous change in volume of that material. This transformation is triggered by a change in the stress state of the material, such as an increase in tensile stress, and acts in opposition to the applied stress. Thus when the material is locally put under tension, for example at the tip of a growing crack, it can undergo a phase transformation which increases its volume, lowering the local tensile stress and hindering the crack's progression through the material. This mechanism is exploited to increase the toughness of ceramic materials, most notably in Yttria-stabilized zirconia for applications such as ceramic knives and thermal barrier coatings on jet engine turbine blades.
Extrinsic mechanisms
Extrinsic toughening mechanisms are processes which act behind the crack tip to resist its further opening. Examples include
fibre/lamella bridging, where these structures hold the two fracture surfaces together after the crack has propagated through the matrix,
crack wedging from the friction between two rough fracture surfaces, and
microcracking, where smaller cracks form in the material around the main crack, relieving the stress at the crack tip by effectively increasing the material's compliance.
Test methods
Fracture toughness tests are performed to quantify the resistance of a material to failure by cracking. Such tests result in either a single-valued measure of fracture toughness or in a resistance curve. Resistance curves are plots where fracture toughness parameters (K, J etc.) are plotted against parameters characterizing the propagation of crack. The resistance curve or the single-valued fracture toughness is obtained based on the mechanism and stability of fracture. Fracture toughness is a critical mechanical property for engineering applications. There are several types of test used to measure fracture toughness of materials, which generally utilise a notched specimen in one of various configurations. A widely utilized standardized test method is the Charpy impact test whereby a sample with a V-notch or a U-notch is subjected to impact from behind the notch. Also widely used are crack displacement tests such as three-point beam bending tests with thin cracks preset into test specimens before applying load.
Testing requirements
Choice of specimen
The ASTM standard E1820 for the measurement of fracture toughness recommends three coupon types for fracture toughness testing, the single-edge bending coupon [SE(B)], the compact tension coupon [C(T)] and the disk-shaped compact tension coupon [DC(T)].
Each specimen configuration is characterized by three dimensions, namely the crack length (a), the thickness (B) and the width (W). The values of these dimensions are determined by the demand of the particular test that is being performed on the specimen. The vast majority of the tests are carried out on either compact or three-point flexural test configuration. For the same characteristic dimensions, compact configuration takes a lesser amount of material compared to three-point flexural test.
Material orientation
Orientation of fracture is important because of the inherent non-isotropic nature of most engineering materials. Due to this, there may be planes of weakness within the material, and crack growth along this plane may be easier compared to other direction. Due to this importance ASTM has devised a standardized way of reporting the crack orientation with respect to forging axis. The letters L, T and S are used to denote the longitudinal, transverse and short transverse directions, where the longitudinal direction coincides with forging axis. The orientation is defined with two letters the first one being the direction of principal tensile stress and the second one is the direction of crack propagation. Generally speaking, the lower bound of the toughness of a material is obtained in the orientation where the crack grows in the direction of forging axis.
Pre-cracking
For accurate results, a sharp crack is required before testing. Machined notches and slots do not meet this criterion. The most effective way of introducing a sufficiently sharp crack is by applying cyclic loading to grow a fatigue crack from a slot. Fatigue cracks are initiated at the tip of the slot and allowed to extend until the crack length reaches its desired value.
The cyclic loading is controlled carefully so as to not affect the toughness of the material through strain-hardening. This is done by choosing cyclic loads that produce a far smaller plastic zone compared to the plastic zone of the main fracture. For example, according to ASTM E399, the maximum stress intensity Kmax should be no larger than 0.6 KIC during the initial stage and less than 0.8 KIC when the crack approaches its final size.
In certain cases grooves are machined into the sides of a fracture toughness specimen so that the thickness of the specimen is reduced to a minimum of 80% of the original thickness along the intended path of crack extensions. The reason is to maintain a straight crack front during R-curve test.
The four main standardized tests are described below with KIc and KR tests valid for linear-elastic fracture mechanics (LEFM) while J and JR tests valid for elastic-plastic fracture mechanics (EPFM).
Plane-strain fracture toughness testing
When performing a fracture toughness test, the most common test specimen configurations are the single edge notch bend (SENB or three-point bend), and the compact tension (CT) specimens. Testing has shown that plane-strain conditions generally prevail when:
$B \geq 2.5\left(\frac{K_{IC}}{\sigma_{YS}}\right)^2$, where $B$ is the minimum necessary thickness, $K_{IC}$ the fracture toughness of the material, and $\sigma_{YS}$ the material yield strength.
The test is performed by loading steadily at a rate such that $K_I$ increases from 0.55 to 2.75 (MPa$\sqrt{m}$)/s. During the test, the load and the crack mouth opening displacement (CMOD) are recorded, and the test is continued until the maximum load is reached. The critical load $P_Q$ is determined from the load vs CMOD plot. A provisional toughness $K_Q$ is given as
$K_Q = \frac{P_Q}{B\sqrt{W}}\, f\!\left(\frac{a}{W}\right)$.
The geometry factor $f(a/W)$ is a dimensionless function of $a/W$ and is given in polynomial form in the E 399 standard, including for the compact test geometry. This provisional toughness value is recognized as valid when the following requirements are met:
$2.5\left(\frac{K_Q}{\sigma_{YS}}\right)^2 < B,\ a$ and $P_{max} \leq 1.10\, P_Q$
When a material of unknown fracture toughness is tested, a specimen of full material section thickness is tested or the specimen is sized based on a prediction of the fracture toughness. If the fracture toughness value resulting from the test does not satisfy the requirement of the above equation, the test must be repeated using a thicker specimen. In addition to this thickness calculation, test specifications have several other requirements that must be met (such as the size of the shear lips) before a test can be said to have resulted in a KIC value.
When a test fails to meet the thickness and other plain-strain requirements, the fracture toughness value produced is given the designation Kc. Sometimes, it is not possible to produce a specimen that meets the thickness requirement. For example, when a relatively thin plate with high toughness is being tested, it might not be possible to produce a thicker specimen with plane-strain conditions at the crack tip.
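A minimal sketch of this specimen-size check, assuming the 2.5(KQ/σYS)² requirement given above; the material values are illustrative.

```python
def plane_strain_valid(K_Q_MPa_sqrt_m: float, sigma_ys_MPa: float,
                       thickness_m: float, crack_length_m: float) -> bool:
    """Check the E399-style size requirement B, a >= 2.5 (K_Q / sigma_ys)^2."""
    min_size = 2.5 * (K_Q_MPa_sqrt_m / sigma_ys_MPa) ** 2  # metres
    return thickness_m >= min_size and crack_length_m >= min_size

# Illustrative values for a hypothetical aluminium alloy:
# K_Q = 30 MPa*sqrt(m), yield strength 350 MPa, B = a = 25 mm.
print(plane_strain_valid(30.0, 350.0, 0.025, 0.025))  # True: required size ~ 18 mm
```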
Determination of R-curve, K-R
The specimen showing stable crack growth shows an increasing trend in fracture toughness as the crack length increases (ductile crack extension). This plot of fracture toughness vs crack length is called the resistance (R)-curve. ASTM E561 outlines a procedure for determining toughness vs crack growth curves in materials. This standard does not have a constraint over the minimum thickness of the material and hence can be used for thin sheets however the requirements for LEFM must be fulfilled for the test to be valid. The criteria for LEFM essentially states that in-plane dimension has to be large compared to the plastic zone. There is a misconception about the effect of thickness on the shape of R curve. It is hinted that for the same material thicker section fails by plane strain fracture and shows a single-valued fracture toughness, the thinner section fails by plane stress fracture and shows the rising R-curve. However, the main factor that controls the slope of R curve is the fracture morphology not the thickness. In some material section thickness changes the fracture morphology from ductile tearing to cleavage from thin to thick section, in which case the thickness alone dictates the slope of R-curve. There are cases where even plane strain fracture ensues in rising R-curve due to "microvoid coalescence" being the mode of failure.
The most accurate way of evaluating the K-R curve is taking the presence of plasticity into account depending on the relative size of the plastic zone. For the case of negligible plasticity, the load vs displacement curve is obtained from the test and on each point the compliance is found. The compliance is the reciprocal of the slope of the curve that will be followed if the specimen is unloaded at a certain point, which can be given as the ratio of displacement to load for LEFM. The compliance is used to determine the instantaneous crack length through the relationship given in the ASTM standard.
The stress intensity should be corrected by calculating an effective crack length. The ASTM standard suggests two alternative approaches. The first method is named Irwin's plastic zone correction. Irwin's approach describes the effective crack length as $a_{eff} = a + \frac{1}{2\pi}\left(\frac{K}{\sigma_{YS}}\right)^2$.
Irwin's approach leads to an iterative solution as K itself is a function of crack length.
The other method, namely the secant method, uses the compliance-crack length equation given by the ASTM standard to calculate the effective crack length from an effective compliance. Compliance at any point in the load vs displacement curve is essentially the reciprocal of the slope of the curve that ensues if the specimen is unloaded at that point. Now the unloading curve returns to the origin for a linear elastic material but not for an elastic-plastic material, as there is a permanent deformation. The effective compliance at a point for the elastic-plastic case is taken as the slope of the line joining the point and the origin (i.e., the compliance if the material were elastic). This effective compliance is used to obtain an effective crack length, and the rest of the calculation follows the equation $K = \frac{P}{B\sqrt{W}}\, f\!\left(\frac{a_{eff}}{W}\right)$.
The choice of plasticity correction depends on the size of the plastic zone. The ASTM standard covering the resistance curve suggests that Irwin's method is acceptable for a small plastic zone and recommends using the secant method when crack-tip plasticity is more prominent. Also, since the ASTM E 561 standard does not contain requirements on the specimen size or the maximum allowable crack extension, the size independence of the resistance curve is not guaranteed. A few studies show that the size dependence is less pronounced in the experimental data for the secant method.
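Because K itself depends on the crack length, Irwin's correction is naturally solved by fixed-point iteration. The sketch below assumes, purely for illustration, the centre-crack relation K = σ√(πa); a real test would use the specimen's own K calibration.

```python
import math

def irwin_effective_crack_length(a0_m: float, sigma_MPa: float,
                                 sigma_ys_MPa: float, tol: float = 1e-9) -> float:
    """Iterate a_eff = a0 + (1/(2*pi)) * (K(a_eff)/sigma_ys)^2 until it converges.

    K(a) = sigma * sqrt(pi * a) is assumed only as an illustrative geometry.
    """
    a_eff = a0_m
    for _ in range(200):
        K = sigma_MPa * math.sqrt(math.pi * a_eff)          # MPa*sqrt(m)
        a_new = a0_m + (K / sigma_ys_MPa) ** 2 / (2 * math.pi)
        if abs(a_new - a_eff) < tol:
            return a_new
        a_eff = a_new
    return a_eff

# Illustrative numbers: 10 mm crack, 100 MPa applied stress, 350 MPa yield.
print(irwin_effective_crack_length(0.010, 100.0, 350.0))
```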
Determination of JIC
Strain energy release rate per unit fracture surface area is calculated by the J-integral method, which is a contour path integral around the crack tip where the path begins and ends on either crack surface. The J-toughness value signifies the resistance of the material in terms of the amount of strain energy required for a crack to grow. The JIC toughness value is measured for elastic-plastic materials, and the single-valued JIC is determined as the toughness near the onset of ductile crack extension (the effect of strain hardening is not important). The test is performed with multiple specimens, loading each specimen to a different level and then unloading. This gives the crack mouth opening compliance, which is used to obtain the crack length with the help of relationships given in ASTM standard E 1820, which covers J-integral testing. Another way of measuring crack growth is to mark the specimen with heat tinting or fatigue cracking. The specimen is eventually broken apart and the crack extension is measured with the help of the marks.
The test thus performed yields several load vs crack mouth opening displacement (CMOD) curves, which are used to calculate J as following:-
The linear elastic J is calculated using $J_{el} = \frac{K^2 (1-\nu^2)}{E}$, and K is determined from $K = \frac{P}{\sqrt{B\, B_N\, W}}\, f\!\left(\frac{a}{W}\right)$, where $B_N$ is the net thickness for a side-grooved specimen and equal to $B$ for a specimen that is not side-grooved.
The elastic-plastic J is calculated using
$J_{pl} = \frac{\eta\, A_{pl}}{B_N\, b_0}$
where
$\eta$ = 2 for the SENB specimen,
$b_0$ is the initial ligament length, given by the difference between the width $W$ and the initial crack length $a_0$, and
$A_{pl}$ is the plastic area under the load-displacement curve.
A specialized data reduction technique is used to obtain a provisional $J_Q$. The value is accepted as $J_{IC}$ if the specimen thickness and initial ligament are sufficiently large relative to $J_Q/\sigma_{YS}$, as specified in the standard.
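A small sketch combining the elastic and plastic contributions to J as defined above; the specimen dimensions and material constants are illustrative assumptions.

```python
def j_integral(K_MPa_sqrt_m: float, E_MPa: float, nu: float,
               A_pl_J: float, B_N_m: float, b0_m: float,
               eta: float = 2.0) -> float:
    """Total J (kJ/m^2) as the sum of elastic and plastic parts:
    J_el = K^2 (1 - nu^2) / E  and  J_pl = eta * A_pl / (B_N * b0)."""
    J_el = (K_MPa_sqrt_m ** 2) * (1.0 - nu ** 2) / E_MPa * 1e3  # MPa*m -> kJ/m^2
    J_pl = eta * A_pl_J / (B_N_m * b0_m) / 1e3                  # J/m^2 -> kJ/m^2
    return J_el + J_pl

# Illustrative SENB example: K = 60 MPa*sqrt(m), steel-like E = 200 GPa,
# nu = 0.3, plastic area 15 J, net thickness 10 mm, ligament 12 mm.
print(j_integral(60.0, 200e3, 0.3, 15.0, 0.010, 0.012))  # ~266 kJ/m^2
```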
Determination of tear resistance (Kahn tear test)
The tear test (e.g. Kahn tear test) provides a semi-quantitative measure of toughness in terms of tear resistance. This type of test requires a smaller specimen, and can, therefore, be used for a wider range of product forms. The tear test can also be used for very ductile aluminium alloys (e.g. 1100, 3003), where linear elastic fracture mechanics do not apply.
Standard test methods
A number of organizations publish standards related to fracture toughness measurements, namely ASTM, BSI, ISO, JSME.
ASTM C1161 Test Method for Flexural Strength of Advanced Ceramics at Ambient Temperature
ASTM C1421 Standard Test Methods for Determination of Fracture Toughness of Advanced Ceramics at Ambient Temperature
ASTM E399 Test Method for Plane-strain Fracture Toughness of Metallic Materials
ASTM E740 Practice for Fracture Testing with Surface-Crack Tension Specimens
ASTM E1820 Standard Test Method for Measurement of Fracture Toughness
ASTM E1823 Terminology Relating to Fatigue and Fracture Testing
ISO 12135 Metallic materials — Unified method of test for the determination of quasistatic fracture toughness
ISO 28079:2009, the Palmqvist method, used to determine the fracture toughness for cemented carbides
Crack deflection toughening
Many ceramics with polycrystalline structures develop large cracks that propagate along the boundaries between grains, rather than through the individual crystals themselves since the toughness of the grain boundaries is much lower than that of the crystals. The orientation of the grain boundary facets and residual stress cause the crack to advance in a complex, tortuous manner that is difficult to analyze. Simply calculating the additional surface energy associated with the increased grain boundary surface area due to this tortuosity is not accurate, as some of the energy to create the crack surface comes from the residual stress.
Faber–Evans model
A mechanics of materials model, introduced by Katherine Faber and Anthony G. Evans, has been developed to predict the increase in fracture toughness in ceramics due to crack deflection around second-phase particles that are prone to microcracking in a matrix. The model takes into account the particle morphology, aspect ratio, spacing, and volume fraction of the second phase, as well as the reduction in local stress intensity at the crack tip when the crack is deflected or the crack plane bows. The actual crack tortuosity is obtained through imaging techniques, allowing the deflection and bowing angles to be directly input into the model.
The resulting increase in fracture toughness is then compared to that of a flat crack through the plain matrix. The magnitude of the toughening is determined by the mismatch strain caused by thermal contraction incompatibility and the microfracture resistance of the particle/matrix interface. This toughening becomes noticeable when there is a narrow size distribution of particles that are appropriately sized. Researchers typically accept the findings of Faber's analysis, which suggest that deflection effects in materials with roughly equiaxial grains may increase the fracture toughness by about twice the grain boundary value.
See also
Brittle–ductile transition zone
Charpy impact test
Ductile-brittle transition temperature
Impact (mechanics)
Izod impact strength test
Puncture resistance
Shock (mechanics)
Three-point flexural fracture toughness testing
Toughness of ceramics by indentation
References
Further reading
Anderson, T. L., Fracture Mechanics: Fundamentals and Applications (CRC Press, Boston 1995).
Davidge, R. W., Mechanical Behavior of Ceramics (Cambridge University Press 1979).
Knott, J. F., Fundamentals of Fracture Mechanics (1973).
Suresh, S., Fatigue of Materials (Cambridge University Press 1998, 2nd edition).
Fracture mechanics
| Fracture toughness | [
"Materials_science",
"Engineering"
] | 4,208 | [
"Structural engineering",
"Materials degradation",
"Materials science",
"Fracture mechanics"
] |
9,874,039 | https://en.wikipedia.org/wiki/Strong%20secrecy | Strong secrecy is a term used in formal proof-based cryptography for making propositions about the security of cryptographic protocols. It is a stronger notion of security than syntactic (or weak) secrecy. Strong secrecy is related to the concept of semantic security or indistinguishability used in the computational proof-based approach. Bruno Blanchet provides the following definition for strong secrecy:
Strong secrecy means that an adversary cannot see any difference when the value of the secret changes
For example, if a process encrypts a message m deterministically, an attacker can differentiate between different messages, since their ciphertexts will be different. Thus m is not a strong secret. If, however, probabilistic encryption were used, m would be a strong secret: the randomness incorporated into the encryption algorithm yields different ciphertexts even for the same value of m.
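A minimal sketch of the contrast, using a toy keystream built from a hash (illustrative only; this is not a secure or standard construction, and the key and message values are made up):

```python
import os
import hashlib

KEY = os.urandom(32)  # shared secret key (illustrative)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream from key and nonce (toy construction).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_deterministic(key: bytes, message: bytes) -> bytes:
    # No randomness: equal messages always give equal ciphertexts.
    ks = keystream(key, b"", len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

def encrypt_probabilistic(key: bytes, message: bytes) -> bytes:
    # Fresh random nonce per encryption: equal messages give different ciphertexts.
    nonce = os.urandom(16)
    ks = keystream(key, nonce, len(message))
    return nonce + bytes(m ^ k for m, k in zip(message, ks))

m = b"attack at dawn"
print(encrypt_deterministic(KEY, m) == encrypt_deterministic(KEY, m))  # True: repeats are linkable
print(encrypt_probabilistic(KEY, m) == encrypt_probabilistic(KEY, m))  # False: repeats look unrelated
```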
See also
Semantic security
Notes
Cryptography | Strong secrecy | [
"Mathematics",
"Engineering"
] | 182 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
9,875,903 | https://en.wikipedia.org/wiki/Gammator | A Gammator was a gamma irradiator made by the Radiation Machinery Corporation during the U.S. Atoms for Peace project of the 1950s and 1960s. The gammator was distributed by the Atomic Energy Commission "to schools, hospitals, and private firms to promote nuclear understanding." Around 120-140 Gammators were distributed throughout the U.S. and the whereabouts of several of them are unknown, although the Department of Energy has removed and destroyed many of the units.
Specifications
A Gammator weighed about 1,850 pounds and contained about 400 curies of caesium-137 in a pellet roughly the size of a pen.
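As a rough back-of-the-envelope illustration (not from the article; it assumes the commonly cited 30.1-year half-life of caesium-137 and the exact curie-to-becquerel conversion):

```python
# Illustrative decay and unit-conversion arithmetic for a ~400 Ci Cs-137 source.
CI_TO_BQ = 3.7e10          # 1 curie = 3.7e10 becquerels (exact definition)
HALF_LIFE_YEARS = 30.1     # Cs-137 half-life (standard literature value)
initial_ci = 400.0

def activity_after(years: float, a0: float = initial_ci) -> float:
    """Remaining activity in curies after the given number of years."""
    return a0 * 0.5 ** (years / HALF_LIFE_YEARS)

print(f"{initial_ci * CI_TO_BQ:.2e} Bq initially")        # ~1.48e13 Bq
print(f"about {activity_after(60):.0f} Ci after 60 years") # roughly two half-lives -> ~100 Ci
```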
Concerns
Because of the massive shielding of a Gammator, the machine is very safe when used as intended (e.g. school science experiments); according to the Los Alamos National Laboratory, it is similar to machines used to irradiate blood. However, this amount of nuclear material could pose a significant problem if used as the radioactive component in a dirty bomb.
References
Nuclear technology
Atoms for Peace | Gammator | [
"Physics"
] | 211 | [
"Nuclear technology",
"Nuclear physics"
] |
9,876,168 | https://en.wikipedia.org/wiki/Bauhaus%20Project%20%28computing%29 | The Bauhaus project is a software research project collaboration among the University of Stuttgart, the University of Bremen, and a commercial spin-off company Axivion, also known as Bauhaus Software Technologies.
The Bauhaus project serves the fields of software maintenance and software reengineering.
Created in response to the problem of software rot, the project aims to analyze and recover the means and methods developed for legacy software by understanding the software's architecture. As part of its research, the project develops software tools (such as the Bauhaus Toolkit) for software architecture, software maintenance and reengineering and program understanding.
The project derives its name from the former Bauhaus art school.
History
The Bauhaus project was initiated by Profs. Erhard Ploedereder and Rainer Koschke at the University of Stuttgart in 1996. It was originally a collaboration between the Institute for Computer Science (ICS) of the University of Stuttgart and the , which is no longer involved.
The Bauhaus project was funded by the state of Baden-Württemberg, the Deutsche Forschungsgemeinschaft, the Bundesministerium für Bildung und Forschung, T-Nova Deutsche Telekom Innovationsgesellschaft Ltd., and Xerox Research.
Early versions of Bauhaus integrated and used Rigi for visualization.
The commercial spin-off Axivion GmbH, headquartered in Stuttgart, was started in 2005. Research then was done at Axivion, the Institute of Software Technology, Department of Programming Languages at the University of Stuttgart as well as at the Software Engineering Group of the Faculty 03 at the University of Bremen.
Formerly, the academic version of the "Bauhaus" was offered. Today, the software product is sold commercially as Axivion Suite. The latter includes MISRA C checks among other verification services.
On August 11, 2022, the Qt Group acquired Axivion GmbH. Since then, the Axivion Suite has been further developed and distributed by the Qt Group's Quality Assurance business unit.
Bauhaus Toolkit now Axivion Suite
The Bauhaus Toolkit (or simply the "Bauhaus tool") includes a static code analysis tool for C, C++, C#, Java and Ada code. It comprises various analyses such as architecture checking, interface analysis, and clone detection. Bauhaus was originally derived from the older Rigi reverse engineering environment, which it expanded upon due to Rigi's limitations. It is considered one of the most notable visualization tools in the field.
The Bauhaus tool suite aids the analysis of source code by creating abstractions (representations) of the code in an intermediate language as well as through a resource flow graph (RFG). The RFG is a hierarchical graph with typed nodes and edges, which are structured in various views.
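As a rough sketch of the general idea of such a graph (the node, edge, and view names below are invented for illustration and are not Bauhaus's actual schema):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    name: str
    kind: str            # e.g. "Routine", "Class", "File" (illustrative type names)

@dataclass(frozen=True)
class Edge:
    source: Node
    target: Node
    kind: str            # e.g. "Calls", "Declared_In"

@dataclass
class RFG:
    nodes: set = field(default_factory=set)
    views: dict = field(default_factory=dict)   # view name -> list of typed edges

    def add_edge(self, view: str, edge: Edge) -> None:
        self.nodes.update({edge.source, edge.target})
        self.views.setdefault(view, []).append(edge)

main = Node("main", "Routine")
parse = Node("parse_config", "Routine")
cfg = Node("config.c", "File")

rfg = RFG()
rfg.add_edge("Call Graph", Edge(main, parse, "Calls"))
rfg.add_edge("Source Structure", Edge(parse, cfg, "Declared_In"))
print(sorted(rfg.views))   # ['Call Graph', 'Source Structure']
```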
While the Axivion Suite has its origins in the Bauhaus project, it is now considered a different product with a broader range of services, such as static code analyses, such as MISRA checking, architecture verification, include analysis, defect detection, and clone management.
Reception
The Bauhaus tool suite has been used successfully in research and commercial projects. It has been noted that Bauhaus is "perhaps [the] most extensive" customization of the well-known Rigi environment.
The members of the project were repeatedly awarded with Best Paper Awards and were invited to submit journal papers several times.
In 2003, the Bauhaus project received the do it software award from MFG Stiftung Baden-Württemberg.
Footnotes
Regarding the project's founding, the years 1996 and 1997 seem to appear equally often among the various sources.
References
External links
The Bauhaus Project – Former project page at ISTE
Software metrics
Static program analysis tools | Bauhaus Project (computing) | [
"Mathematics",
"Engineering"
] | 791 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
9,877,288 | https://en.wikipedia.org/wiki/Skew%20lattice | In abstract algebra, a skew lattice is an algebraic structure that is a non-commutative generalization of a lattice. While the term skew lattice can be used to refer to any non-commutative generalization of a lattice, since 1989 it has been used primarily as follows.
Definition
A skew lattice is a set S equipped with two associative, idempotent binary operations ∧ and ∨, called meet and join, that validate the following dual pair of absorption laws
x ∧ (x ∨ y) = x = (y ∨ x) ∧ x,
x ∨ (x ∧ y) = x = (y ∧ x) ∨ x.
Given that ∧ and ∨ are associative and idempotent, these identities are equivalent to validating the following dual pair of statements:
x ∨ y = y if x ∧ y = x,
x ∧ y = y if x ∨ y = x.
Historical background
For over 60 years, noncommutative variations of lattices have been studied with differing motivations. For some the motivation has been an interest in the conceptual boundaries of lattice theory; for others it was a search for noncommutative forms of logic and Boolean algebra; and for others it has been the behavior of idempotents in rings. A noncommutative lattice, generally speaking, is an algebra where and are associative, idempotent binary operations connected by absorption identities guaranteeing that in some way dualizes . The precise identities chosen depends upon the underlying motivation, with differing choices producing distinct varieties of algebras.
Pascual Jordan, motivated by questions in quantum logic, initiated a study of noncommutative lattices in his 1949 paper, Über Nichtkommutative Verbände, choosing the absorption identities
He referred to those algebras satisfying them as Schrägverbände. By varying or augmenting these identities, Jordan and others obtained a number of varieties of noncommutative lattices.
Beginning with Jonathan Leech's 1989 paper, Skew lattices in rings, skew lattices as defined above have been the primary objects of study. This was aided by previous results about bands. This was especially the case for many of the basic properties.
Basic properties
Natural partial order and natural quasiorder
In a skew lattice S, the natural partial order is defined by y ≤ x if x ∧ y = y = y ∧ x, or dually, x ∨ y = x = y ∨ x. The natural preorder on S is given by y ⪯ x if y ∧ x ∧ y = y or dually x ∨ y ∨ x = x. While ≤ and ⪯ agree on lattices, ⪯ properly refines ≤ in the noncommutative case. The induced natural equivalence D is defined by x D y if x ⪯ y and y ⪯ x, that is,
x ∧ y ∧ x = x and y ∧ x ∧ y = y or dually, x ∨ y ∨ x = x and y ∨ x ∨ y = y. The blocks of the partition S/D are
lattice ordered by A > B if there exist a ∈ A and b ∈ B such that a > b. This permits us to draw Hasse diagrams of skew lattices such as the following pair:
E.g., in the diagram on the left above, the dashed segment expresses that the two elements it joins are D-related. The slanted lines reveal the natural partial order between elements of the distinct D-classes. The remaining elements form singleton D-classes.
Rectangular skew lattices
Skew lattices consisting of a single D-class are called rectangular. They are characterized by the equivalent identities: x ∧ y ∧ x = x, x ∨ y ∨ x = x, and x ∧ y = y ∨ x. Rectangular skew lattices are isomorphic to skew lattices having the following construction (and conversely): given nonempty sets L and R, on L × R define (x1, y1) ∧ (x2, y2) = (x1, y2) and (x1, y1) ∨ (x2, y2) = (x2, y1). The D-class partition of a skew lattice S, as indicated in the above diagrams, is the unique partition of S into its maximal rectangular subalgebras. Moreover, D is a congruence with the induced quotient algebra S/D being the maximal lattice image of S, thus making every skew lattice S a lattice of rectangular subalgebras. This is the Clifford–McLean theorem for skew lattices, first given for bands separately by Clifford and McLean. It is also known as the first decomposition theorem for skew lattices.
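As a quick sanity check, the following brute-force sketch verifies that the coordinatewise operations given above (as reconstructed here) are idempotent and associative, satisfy the dual absorption laws, and satisfy the rectangular identity x ∧ y = y ∨ x; the two small sets are arbitrary:

```python
from itertools import product

L, R = ["a", "b"], [0, 1, 2]
S = list(product(L, R))

def meet(p, q):            # (x1, y1) ∧ (x2, y2) = (x1, y2)
    return (p[0], q[1])

def join(p, q):            # (x1, y1) ∨ (x2, y2) = (x2, y1)
    return (q[0], p[1])

def is_skew_lattice(S, meet, join):
    for f in (meet, join):
        if any(f(x, x) != x for x in S):
            return False
        if any(f(f(x, y), z) != f(x, f(y, z)) for x, y, z in product(S, repeat=3)):
            return False
    for x, y in product(S, repeat=2):
        if meet(x, join(x, y)) != x or meet(join(y, x), x) != x:
            return False
        if join(x, meet(x, y)) != x or join(meet(y, x), x) != x:
            return False
    return True

print(is_skew_lattice(S, meet, join))                                   # True
print(all(meet(x, y) == join(y, x) for x, y in product(S, repeat=2)))   # True: x ∧ y = y ∨ x
```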
Right (left) handed skew lattices and the Kimura factorization
A skew lattice is right-handed if it satisfies the identity x ∧ y ∧ x = y ∧ x or dually, x ∨ y ∨ x = x ∨ y.
These identities essentially assert that and in each -class. Every skew lattice has a unique maximal right-handed image where the congruence is defined by if both and (or dually, and ). Likewise a skew lattice is left-handed if and in each -class. Again the maximal left-handed image of a skew lattice is the image where the congruence is defined in dual fashion to . Many examples of skew lattices are either right- or left-handed. In the lattice of congruences, and is the identity congruence . The induced epimorphism factors through both induced epimorphisms and . Setting , the homomorphism defined by , induces an isomorphism . This is the Kimura factorization of into a fibred product of its maximal right- and left-handed images.
Like the Clifford–McLean theorem, Kimura factorization (or the second decomposition theorem for skew lattices) was first given for regular bands (bands that satisfy the middle absorption
identity, ). Indeed, both and are regular band operations. The above symbols , and come, of course, from basic semigroup theory.
Subvarieties of skew lattices
Skew lattices form a variety. Rectangular skew lattices, left-handed and right-handed skew lattices all form subvarieties that are central to the basic structure theory of skew lattices. Here are several more.
Symmetric skew lattices
A skew lattice S is symmetric if for any x, y in S, x ∧ y = y ∧ x if and only if x ∨ y = y ∨ x. Occurrences of commutation are thus unambiguous for such skew lattices, with subsets of pairwise commuting elements generating commutative subalgebras, i.e., sublattices. (This is not true for skew lattices in general.) Equational bases for this subvariety, first given by Spinks, are:
and .
A lattice section of a skew lattice is a sublattice of meeting each -class of at a single element. is thus an internal copy of the lattice with the composition being an isomorphism. All symmetric skew lattices for which admit a lattice section. Symmetric or not, having a lattice section guarantees that also has internal copies of and given respectively by and , where and are the and congruence classes of in . Thus and are isomorphisms. This leads to a commuting diagram of embedding dualizing the preceding Kimura diagram.
Cancellative skew lattices
A skew lattice is cancellative if x ∨ y = x ∨ z and x ∧ y = x ∧ z imply y = z, and likewise x ∨ z = y ∨ z and x ∧ z = y ∧ z imply x = y. Cancellative skew lattices are symmetric and can be shown to form a variety. Unlike lattices, they need not be distributive, and conversely.
Distributive skew lattices
Distributive skew lattices are determined by the identities:
(D1)
(D'1)
Unlike lattices, (D1) and (D'1) are not equivalent in general for skew lattices, but they are for symmetric skew lattices. The condition (D1) can be strengthened to
(D2)
in which case (D'1) is a consequence. A skew lattice satisfies both (D2) and its dual, , if and only if it factors as the product of a distributive lattice and a rectangular skew lattice. In this latter case (D2) can be strengthened to
and . (D3)
On its own, (D3) is equivalent to (D2) when symmetry is added. We thus have six subvarieties of skew lattices determined respectively by (D1), (D2), (D3) and their duals.
Normal skew lattices
As seen above, and satisfy the identity . Bands satisfying the stronger identity, , are called normal. A skew lattice is normal skew if it satisfies
For each element a in a normal skew lattice , the set defined by {} or equivalently {} is a sublattice of , and conversely. (Thus normal skew lattices have also been called local lattices.) When both and are normal, splits isomorphically into a product of a lattice and a rectangular skew lattice , and conversely. Thus both normal skew lattices and split skew lattices form varieties. Returning to distribution, so that characterizes the variety of distributive, normal skew lattices, and (D3) characterizes the variety of symmetric, distributive, normal skew lattices.
Categorical skew lattices
A skew lattice is categorical if nonempty composites of coset bijections are coset bijections. Categorical skew lattices form a variety. Skew lattices in rings and normal skew lattices are examples
of algebras in this variety. Let with , and , be the coset bijection from to taking to , be the coset bijection from to taking to and finally be the coset bijection from to taking to . A skew lattice is categorical if one always has the equality , i.e. , if the
composite partial bijection if nonempty is a coset bijection from a -coset of to an -coset
of . That is .
All distributive skew lattices are categorical, though symmetric skew lattices might not be. In a sense this reveals the independence between the properties of symmetry and distributivity.
Skew Boolean algebras
A zero element in a skew lattice S is an element 0 of S such that x ∧ 0 = 0 = 0 ∧ x for all x in S or, dually, x ∨ 0 = x = 0 ∨ x for all x in S. (0)
A Boolean skew lattice is a symmetric, distributive normal skew lattice with 0, such that is a Boolean lattice for each Given such skew lattice S, a difference operator \ is defined by x \ y = where the latter is evaluated in the Boolean lattice In the presence of (D3) and (0), \ is characterized by the identities:
and (S B)
One thus has a variety of skew Boolean algebras characterized by identities (D3), (0) and (S B). A primitive skew Boolean algebra consists of 0 and a single non-0 D-class. Thus it is the result of adjoining a 0 to a rectangular skew lattice D via (0) with , if
and otherwise. Every skew Boolean algebra is a subdirect product of primitive algebras. Skew Boolean algebras play an important role in the study of discriminator varieties and other generalizations in universal algebra of Boolean behavior.
Skew lattices in rings
Let A be a ring and let E(A) denote the set of all idempotents in A. For all x, y in E(A) set x ∧ y = xy and x ∨ y = x + y − xy.
Clearly but also is associative. If a subset is closed under and , then is a distributive, cancellative skew lattice. To find such skew lattices in one looks at bands in , especially the ones that are maximal with respect to some constraint. In fact, every multiplicative band in that is maximal with respect to being right regular (= ) is also closed under and so forms a right-handed skew lattice. In general, every right regular band in generates a right-handed skew lattice in . Dual remarks also hold for left regular bands (bands satisfying the identity ) in . Maximal regular bands need not to be closed under as defined; counterexamples are easily found using multiplicative rectangular bands. These cases are closed, however, under the cubic variant of defined by since in these cases reduces to to give the dual rectangular band. By replacing the condition of regularity by normality , every maximal normal multiplicative band in is also closed under with , where , forms a Boolean skew lattice. When itself is closed under multiplication, then it is a normal band and thus forms a Boolean skew lattice. In fact, any skew Boolean algebra can be embedded into such an algebra. When A has a multiplicative identity , the condition that is multiplicatively closed is well known to imply that forms a Boolean algebra. Skew lattices in rings continue to be a good source of examples and motivation.
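As an illustration of the idempotent construction at the start of this section (a toy sketch; the particular family of matrices is chosen for convenience and is not from the article), the 2 × 2 idempotents E_a = [[1, a], [0, 0]] are closed under x ∧ y = xy and x ∨ y = x + y − xy and satisfy the dual absorption laws, giving a right-handed rectangular skew lattice:

```python
import numpy as np
from itertools import product

E = [np.array([[1, a], [0, 0]]) for a in range(3)]   # idempotent matrices E_a

def meet(x, y):            # x ∧ y = xy
    return x @ y

def join(x, y):            # x ∨ y = x + y - xy
    return x + y - x @ y

eq = np.array_equal
ok = True
for x in E:
    ok &= eq(meet(x, x), x) and eq(join(x, x), x)                  # idempotence
for x, y in product(E, repeat=2):
    ok &= eq(meet(x, join(x, y)), x) and eq(meet(join(y, x), x), x)  # absorption
    ok &= eq(join(x, meet(x, y)), x) and eq(join(meet(y, x), x), x)  # dual absorption
    ok &= eq(meet(x, y), y) and eq(join(x, y), x)                  # within this family: x ∧ y = y, x ∨ y = x
print(ok)  # True
```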
Primitive skew lattices
Skew lattices consisting of exactly two D-classes are called primitive skew lattices. Given such a skew lattice with -classes in , then for any and , the subsets
{} and {}
are called, respectively, cosets of A in B and cosets of B in A. These cosets partition B and A with and . Cosets are always rectangular subalgebras in their -classes. What is more, the partial order induces a coset bijection defined by:
iff , for and .
Collectively, coset bijections describe between the subsets and . They also determine and for pairs of elements from distinct D-classes. Indeed, given and , let be the coset bijection between the cosets in and in . Then:
and .
In general, given and with and , then belong to a common - coset in and belong to a common -coset in if and only if . Thus each coset bijection is, in some sense, a maximal collection of mutually parallel pairs .
Every primitive skew lattice factors as the fibred product of its maximal left and right- handed primitive images . Right-handed primitive skew lattices are constructed as follows. Let and be partitions of disjoint nonempty sets and , where all and share a common size. For each pair pick a fixed bijection from onto . On and separately set and ; but given and , set
and
where and with belonging to the cell of and belonging to the cell of . The various are the coset bijections. This is illustrated in the following partial Hasse diagram where and the arrows indicate the -outputs and from and .
One constructs left-handed primitive skew lattices in dual fashion. All right [left] handed primitive skew lattices can be constructed in this fashion.
The coset structure of skew lattices
A nonrectangular skew lattice is covered by its maximal primitive skew lattices: given comparable -classes in , forms a maximal primitive subalgebra of and every -class in lies in such a subalgebra. The coset structures on these primitive subalgebras combine to determine the outcomes and at least when and are comparable under . It turns out that and are determined in general by cosets and their bijections, although in
a slightly less direct manner than the comparable case. In particular, given two incomparable D-classes A and B with join D-class J and meet D-class M in S, interesting connections arise between the two coset decompositions of J (or M) with respect to A and B.
Thus a skew lattice may be viewed as a coset atlas of rectangular skew lattices placed on the vertices of a lattice and coset bijections between them, the latter seen as partial isomorphisms
between the rectangular algebras with each coset bijection determining a corresponding pair of cosets. This perspective gives, in essence, the Hasse diagram of the skew lattice, which is easily
drawn in cases of relatively small order. (See the diagrams in Section 3 above.) Given a chain of D-classes in , one has three sets of coset bijections: from A to B, from B to C and from A to C. In general, given coset bijections and , the composition of partial bijections could be empty. If it is not, then a unique coset bijection exists such that . (Again, is a bijection between a pair of cosets in and .) This inclusion can be strict. It is always an equality (given ) on a given skew lattice S precisely when S is categorical. In this case, by including the identity maps on each rectangular D-class and adjoining empty bijections between properly comparable D-classes, one has a category of rectangular algebras and coset bijections between them. The simple examples in Section 3 are categorical.
See also
Semigroup theory
Lattice theory
References
Lattice theory
Semigroup theory | Skew lattice | [
"Mathematics"
] | 3,344 | [
"Mathematical structures",
"Lattice theory",
"Fields of abstract algebra",
"Algebraic structures",
"Semigroup theory",
"Order theory"
] |
9,878,102 | https://en.wikipedia.org/wiki/Rooster%20tail | A rooster tail is a term used in fluid dynamics, automotive gear shifting, and meteorology. It is a region of commotion or turbulence within a fluid, caused by movement. In fluid dynamics, it lies directly in the wake of an object traveling within a fluid, and is accompanied by a vertical protrusion. If it occurs in a river, boaters steer clear of it, since it indicates a near-surface obstruction. The degree of their formation can indicate the efficiency of a boat's hull design. The magnitude of these features increases with a boat's speed, while for airplanes they become smaller as speed increases. Energetic volcanic eruptions can create rooster tail formations from their ejecta. They can form in relation to coronal loops near the Sun's surface.
In gear shifting in motor vehicles, it is the relation between the coefficient of friction and the sliding speed of the clutch. Cars can throw rooster tails of water or loose material from under their wheels in their wake. In meteorology, a rooster tail satellite pattern can be applied to either low or high level cloudiness, with the low cloud line seen in the wake of tropical cyclones and the high cloud pattern seen either within mare's tails or within the outflow jet of tropical cyclones.
In fluid dynamics
Rooster tails are caused by constructive interference near and to the wake of objects within a flowing fluid.
In water
A fast current of water flowing over a rock near the surface of a stream or river can create a rooster tail—such commotions at the water's surface are avoided by boaters due to the near surface obstruction. Propellers on boats can produce a rooster tail of water in their wake, in the form of a fountain which shoots into the air behind the boat. The faster a boat goes, the larger the rooster tails become. The efficiency of a boat's hull design can be judged by the magnitude of the rooster tail—larger rooster tails indicate less efficient designs. If a water skier is in tow, the skis also throw off a rooster tail. Airplanes lifting off from a lake produce lengthening rooster tails behind their amphibious floats as their speed increases, until the plane lifts off the surface.
In air
An airplane leaves rooster tails in its wake in the form of two circulations at the tip of its wings. As the plane speeds up, the rooster tails become smaller.
Related to rock
In low gravity and dusty environments, such as the Moon, they can be created by the wheels of moving vehicles. A special energetic volcanic eruption known as a strombolian eruption produces bright arcs of ejecta, referred to as rooster tails, composed of basaltic cinders or volcanic ash.
Near the Sun
Coronal loops are the basic structures of the magnetic solar corona, the bright area seen around the Sun during solar eclipses. These loops are the closed-magnetic flux cousins of the open-magnetic flux that can be found in coronal hole (polar) regions and the solar wind. Loops of magnetic flux well up from the solar body and fill with hot solar plasma. Due to the heightened magnetic activity in these coronal loop regions, coronal loops can often be the precursor to solar flares and coronal mass ejections (CMEs). Emerging magnetic flux within coronal loops can cause a rooster tail.
In relation to cars
In manual-transmission automobiles, the curve describing the relationship between the coefficient of friction and the sliding speed of the clutch is known as a rooster tail characteristic. Rooster tail formations can also occur when a car accelerates hard over puddles, loose soil, or mud.
In meteorology
Rooster tails have been mentioned in weather satellite interpretation since 2003 connected with tropical cyclones. In the low cloud field, it represents a convergence zone on the westward extent of the Saharan Air Layer seen at the back of tropical cyclones gaining latitude. If there are two systems, the one nearer the pole strengthens, while the system nearest the Equator weakens within an area with downward motion in the mid-levels of the troposphere.
This description has also been used with high cloudiness spreading in a narrow channel towards the Equator within the outflow jet of a tropical cyclone, such as Hurricane Felix (1995). Mare's tail patterns within cirrus clouds are occasionally referred to by this term due to their appearance.
References
Fluid dynamics
Plasma phenomena
Tropical cyclone meteorology | Rooster tail | [
"Physics",
"Chemistry",
"Engineering"
] | 867 | [
"Physical phenomena",
"Plasma physics",
"Chemical engineering",
"Plasma phenomena",
"Piping",
"Fluid dynamics"
] |
9,878,823 | https://en.wikipedia.org/wiki/MSH2 | DNA mismatch repair protein Msh2 also known as MutS homolog 2 or MSH2 is a protein that in humans is encoded by the MSH2 gene, which is located on chromosome 2. MSH2 is a tumor suppressor gene and more specifically a caretaker gene that codes for a DNA mismatch repair (MMR) protein, MSH2, which forms a heterodimer with MSH6 to make the human MutSα mismatch repair complex. It also dimerizes with MSH3 to form the MutSβ DNA repair complex. MSH2 is involved in many different forms of DNA repair, including transcription-coupled repair, homologous recombination, and base excision repair.
Mutations in the MSH2 gene are associated with microsatellite instability and some cancers, especially with hereditary nonpolyposis colorectal cancer (HNPCC). At least 114 disease-causing mutations in this gene have been discovered.
Clinical significance
Hereditary nonpolyposis colorectal cancer (HNPCC), sometimes referred to as Lynch syndrome, is inherited in an autosomal dominant fashion, where inheritance of only one copy of a mutated mismatch repair gene is enough to cause disease phenotype. Mutations in the MSH2 gene account for 40% of genetic alterations associated with this disease and is the leading cause, together with MLH1 mutations. Mutations associated with HNPCC are broadly distributed in all domains of MSH2, and hypothetical functions of these mutations based on the crystal structure of the MutSα include protein–protein interactions, stability, allosteric regulation, MSH2-MSH6 interface, and DNA binding. Mutations in MSH2 and other mismatch repair genes cause DNA damage to go unrepaired, resulting in an increase in mutation frequency. These mutations build up over a person's life that otherwise would not have occurred had the DNA been repaired properly.
Microsatellite instability
The viability of MMR genes including MSH2 can be tracked via microsatellite instability, a biomarker test that analyzes short sequence repeats which are very difficult for cells to replicate without a functioning mismatch repair system. Because these sequences vary in the population, the actual number of copies of short sequence repeats does not matter, just that the number the patient does have is consistent from tissue to tissue and over time. This phenomenon occurs because these sequences are prone to mistakes by the DNA replication complex, which then need to be fixed by the mismatch repair genes. If these are not working, over time either duplications or deletions of these sequences will occur, leading to different numbers of repeats in the same patient.
71% of HNPCC patients show microsatellite instability. Detection methods for microsatellite instability include polymerase chain reaction (PCR) and immunohistochemical (IHC) methods, with PCR examining the DNA directly and IHC surveying mismatch repair protein levels. "Currently, there are evidences that universal testing for MSI starting with either IHC or PCR-based MSI testing is cost effective, sensitive, specific and is generally widely accepted."
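As a toy illustration of the repeat-length comparison idea (all sequences and numbers below are invented; real MSI assays use standardized marker panels and fragment-length analysis rather than this simplification):

```python
import re

def longest_a_run(seq: str) -> int:
    """Length of the longest mononucleotide (A) run in a read."""
    runs = re.findall(r"A+", seq)
    return max((len(r) for r in runs), default=0)

# Hypothetical reads covering one marker locus in normal and tumor tissue.
normal_reads = ["GGTC" + "A" * 10 + "CTGA"] * 5
tumor_reads = ["GGTC" + "A" * n + "CTGA" for n in (10, 8, 7, 10, 6)]

print("normal repeat lengths:", {longest_a_run(r) for r in normal_reads})  # {10}: stable
print("tumor repeat lengths:", {longest_a_run(r) for r in tumor_reads})    # several values: unstable locus
```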
Role in mismatch repair
In eukaryotes from yeast to humans, MSH2 dimerizes with MSH6 to form the MutSα complex, which is involved in base mismatch repair and short insertion/deletion loops. MSH2 heterodimerization stabilizes MSH6, which is not stable because of its N-terminal disordered domain. Conversely, MSH2 does not have a nuclear localization sequence (NLS), so it is believed that MSH2 and MSH6 dimerize in the cytoplasm and then are imported into the nucleus together. In the MutSα dimer, MSH6 interacts with the DNA for mismatch recognition while MSH2 provides the stability that MSH6 requires. MSH2 can be imported into the nucleus without dimerizing to MSH6, in this case, MSH2 is probably dimerized to MSH3 to form MutSβ. MSH2 has two interacting domains with MSH6 in the MutSα heterodimer, a DNA interacting domain, and an ATPase domain.
The MutSα dimer scans double stranded DNA in the nucleus, looking for mismatched bases. When the complex finds one, it repairs the mutation in an ATP dependent manner. The MSH2 domain of MutSα prefers ADP to ATP, with the MSH6 domain preferring the opposite. Studies have indicated that MutSα only scans DNA with the MSH2 domain harboring ADP, while the MSH6 domain can contain either ADP or ATP. MutSα then associates with MLH1 to repair the damaged DNA.
MutSβ is formed when MSH2 complexes with MSH3 instead of MSH6. This dimer repairs longer insertion/deletion loops than MutSα. Because of the nature of the mutations that this complex repairs, this is probably the state of MSH2 that causes the microsatellite instability phenotype. Large DNA insertions and deletions intrinsically bend the DNA double helix. The MSH2/MSH3 dimer can recognize this topology and initiate repair. The mechanism by which it recognizes mutations is different as well, because it separates the two DNA strands, which MutSα does not.
Double-strand break repair
Msh2 modulates accurate homologous recombination, a prominent DNA double-strand break repair pathway in mammalian chromosomes. Repair of DNA double-strand breaks by accurate homologous recombination predominates over the inaccurate double-strand break repair pathway of “non-homologous end joining” in hamster, mouse and human somatic cells.
Interactions
MSH2 has been shown to interact with:
ATR,
BRCA1,
CHEK2,
EXO1,
MAX,
MSH3,
MSH6, and
p53.
Epigenetic MSH2 deficiencies in cancer
DNA damage appears to be the primary underlying cause of cancer, and deficiencies in expression of DNA repair genes appear to underlie many forms of cancer. If DNA repair is deficient, DNA damage tends to accumulate. Such excess DNA damage may increase mutations due to error-prone translesion synthesis and error prone repair (see e.g. microhomology-mediated end joining). Elevated DNA damage may also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations may give rise to cancer.
Reductions in expression of DNA repair genes (usually caused by epigenetic alterations) are very common in cancers, and are ordinarily much more frequent than mutational defects in DNA repair genes in cancers. (See Frequencies of epimutations in DNA repair genes.) In a study of MSH2 in non-small cell lung cancer (NSCLC), no mutations were found while 29% of NSCLC had epigenetic reduction of MSH2 expression. In acute lymphoblastoid leukemia (ALL), no MSH2 mutations were found while 43% of ALL patients showed MSH2 promoter methylation and 86% of relapsed ALL patients had MSH2 promoter methylation. There were, however, mutations in four other genes in ALL patients that destabilized the MSH2 protein, and these were defective in 11% of children with ALL and 16% of adults with this cancer.
Methylation of the promoter region of the MSH2 gene is correlated with the lack of expression of the MSH2 protein in esophageal cancer, in non-small-cell lung cancer, and in colorectal cancer. These correlations suggest that methylation of the promoter region of the MSH2 gene reduces expression of the MSH2 protein. Such promoter methylation would reduce DNA repair in the four pathways in which MSH2 participates: DNA mismatch repair, transcription-coupled repair, homologous recombination, and base excision repair. Such reductions in repair likely allow excess DNA damage to accumulate and contribute to carcinogenesis.
The frequencies of MSH2 promoter methylation in several different cancers are indicated in the Table.
See also
Mismatch repair#MutS homologs
References
Further reading
External links
DNA repair
Mutation
Oncogenes | MSH2 | [
"Biology"
] | 1,750 | [
"Molecular genetics",
"DNA repair",
"Cellular processes"
] |
9,878,887 | https://en.wikipedia.org/wiki/Very%20special%20relativity | Ignoring gravity, experimental bounds seem to suggest that special relativity with its Lorentz symmetry and Poincaré symmetry describes spacetime. Surprisingly, Bogoslovsky and independently Cohen and Glashow have demonstrated that a small subgroup of the Lorentz group is sufficient to explain all the current bounds.
The minimal subgroup in question can be described as follows: The stabilizer of a null vector is the special Euclidean group SE(2), which contains T(2) as the subgroup of parabolic transformations. This T(2), when extended to include either parity or time reversal (i.e. subgroups of the orthochronous and time-reversal Lorentz groups, respectively), is sufficient to give us all the standard predictions. Their new symmetry is called very special relativity (VSR).
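For concreteness, here is a minimal sketch of the generators involved in one common convention (signs and the choice of null direction vary between references, so treat the exact combinations as an assumption):

```latex
T_1 = K_x + J_y , \qquad T_2 = K_y - J_x , \qquad J_z ,
\qquad\text{with}\qquad
[T_1, T_2] = 0 , \quad [J_z, T_1] = i\,T_2 , \quad [J_z, T_2] = -i\,T_1 .
```

These are the commutation relations of the Lie algebra of SE(2); the pair (T_1, T_2) by itself generates the subgroup T(2) of parabolic transformations, and adjoining J_z gives the full SE(2).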
See also
Lorentz violation
References
Theory of relativity | Very special relativity | [
"Physics"
] | 171 | [
"Relativity stubs",
"Theory of relativity"
] |
9,880,147 | https://en.wikipedia.org/wiki/Cyclin%20D | Cyclin D is a member of the cyclin protein family that is involved in regulating cell cycle progression. The synthesis of cyclin D is initiated during G1 and drives the G1/S phase transition. Cyclin D protein is anywhere from 155 (in zebra mussel) to 477 (in Drosophila) amino acids in length.
Once cells reach a critical cell size (and if no mating partner is present in yeast) and if growth factors and mitogens (for multicellular organism) or nutrients (for unicellular organism) are present, cells enter the cell cycle. In general, all stages of the cell cycle are chronologically separated in humans and are triggered by cyclin-Cdk complexes which are periodically expressed and partially redundant in function. Cyclins are eukaryotic proteins that form holoenzymes with cyclin-dependent protein kinases (Cdk), which they activate. The abundance of cyclins is generally regulated by protein synthesis and degradation through APC/C- and CRL-dependent pathways.
Cyclin D is one of the major cyclins produced in terms of its functional importance. It interacts with four Cdks: Cdk2, 4, 5, and 6. In proliferating cells, cyclin D-Cdk4/6 complex accumulation is of great importance for cell cycle progression. Namely, cyclin D-Cdk4/6 complex partially phosphorylates retinoblastoma tumor suppressor protein (Rb), whose inhibition can induce expression of some genes (for example: cyclin E) important for S phase progression.
Drosophila and many other organisms only have one cyclin D protein. In mice and humans, two more cyclin D proteins have been identified. The three homologues, called cyclin D1, cyclin D2, and cyclin D3 are expressed in most proliferating cells and the relative amounts expressed differ in various cell types.
Homologues
The most studied homologues of cyclin D are found in yeast and viruses.
The yeast homologue of cyclin D, referred to as CLN3, interacts with Cdc28 (cell division control protein) during G1.
In viruses, like Saimiriine herpesvirus 2 (Herpesvirus saimiri) and Human herpesvirus 8 (HHV-8/Kaposi's sarcoma-associated herpesvirus), cyclin D homologues have acquired new functions in order to manipulate the host cell's metabolism to the viruses’ benefit. Viral cyclin D binds human Cdk6 and inhibits Rb by phosphorylating it, resulting in free transcription factors which result in protein transcription that promotes passage through G1 phase of the cell cycle. Other than Rb, viral cyclin D-Cdk6 complex also targets p27Kip, a Cdk inhibitor of cyclin E and A. In addition, viral cyclin D-Cdk6 is resistant to Cdk inhibitors, such as p21CIP1/WAF1 and p16INK4a which in human cells inhibits Cdk4 by preventing it from forming an active complex with cyclin D.
Structure
Cyclin D possesses a tertiary structure similar to other cyclins called the cyclin fold. This contains a core of two compact domains with each having five alpha helices. The first five-helix bundle is a conserved cyclin box, a region of about 100 amino acid residues on all cyclins, which is needed for Cdk binding and activation. The second five-helix bundle is composed of the same arrangement of helices, but the primary sequence of the two subdomains is distinct. All three D-type cyclins (D1, D2, D3) have the same alpha 1 helix hydrophobic patch. However, it is composed of different amino acid residues as the same patch in cyclins E, A, and B.
Function
Growth factors stimulate the Ras/Raf/ERK pathway, which induces cyclin D production. One of the members of the pathway, MAPK, activates a transcription factor, Myc, which alters transcription of genes important in the cell cycle, among which is cyclin D. In this way, cyclin D is synthesized as long as the growth factor is present.
Cyclin D levels in proliferating cells are sustained as long as the growth factors are present, a key player for G1/S transition is active cyclin D-Cdk4/6 complexes. Cyclin D has no effect on G1/S transition unless it forms a complex with Cdk 4 or 6.
G1/S transition
One of the best known substrates of cyclin D/Cdk4 and -6 is the retinoblastoma tumor suppressor protein (Rb). Rb is an important regulator of genes responsible for progression through the cell cycle, in particular through G1/S phase.
One model proposes that cyclin D quantities, and thus cyclin D-Cdk4 and -6 activity, gradually increase during G1 rather than oscillating in a set pattern as do S and M cyclins. This happens in response to sensors of external growth-regulatory signals and cell growth, and Rb is phosphorylated as a result. Rb reduces its binding to E2F and thereby allows E2F-mediated activation of the transcription of cyclin E and cyclin A, which bind to Cdk2 to create complexes that continue with Rb phosphorylation. Cyclin A and E dependent kinase complexes also function to inhibit the E3 ubiquitin ligase APC/C activating subunit Cdh1 through phosphorylation, which stabilizes substrates such as cyclin A. The coordinated activation of this sequence of interrelated positive feedback loops through cyclins and cyclin dependent kinases drives commitment to cell division to and past the G1/S checkpoint.
Another model proposes that cyclin D levels remain nearly constant through G1. Rb is mono-phosphorylated during early to mid-G1 by cyclin D-Cdk4,6, opposing the idea that its activity gradually increases. Cyclin D dependent monophosphorylated Rb still interacts with E2F transcription factors in a way that inhibits transcription of enzymes that drive the G1/S transition. Rather, E2F dependent transcription activity increases when that of Cdk2 increases and hyperphosphorylates Rb towards the end of G1.
Rb may not be the only target for cyclin D to promote cell proliferation and progression through the cell cycle. The cyclin D-Cdk4,6, complex, through phosphorylation and inactivation of metabolic enzymes, also influences cell survival. Through close analysis of different Rb-docking helices, a consensus helix sequence motif was identified, which can be utilized to identify potential non-canonical substrates that cyclin D-Cdk4,6 could use to promote proliferation.
Docking to Rb
RxL- and LxCxE- based docking mutations broadly affect cyclin-Cdk complexes. Mutations of key Rb residues previously observed to be needed for Cdk complex docking interactions result in reduced overall kinase activity towards Rb. The LxCxE binding cleft in the Rb pocket domain, which has been shown to interact with proteins such as cyclin D and viral oncoproteins, has only a marginal 1.7 fold reduction in phosphorylation by cyclin D-Cdk4,6 when removed. Similarly, when the RxL motif, shown to interact with the S phase cyclins E and A, is removed, cyclin D-Cdk4,6 activity has a 4.1 fold reduction. Thus, the RxL- and LxCxE based docking sites have interactions with cyclin D-Cdk4,6 like they do with other cyclins, and removal of them has a modest effect on G1 progression.
Cyclin D-Cdk 4,6 complexes target Rb for phosphorylation through docking a C-terminal helix. When the final 37 amino acid residues are truncated, it had previously been shown that Rb phosphorylation levels are reduced and G1 arrest is induced. Kinetic assays have shown that with the same truncation, the reduction of Rb phosphorylation by cyclin D1-Cdk4,6 is 20 fold and Michaelis-Menten constant (Km) is significantly increased. The phosphorylation of Rb by cyclin A-Cdk2, cyclin B-Cdk1, and cyclin E-Cdk2 are unaffected.
The C terminus has a stretch of 21 amino acids with alpha-helix propensity. Deletion of this helix or disruption of it via proline residue substitutions also shows a significant reduction in Rb phosphorylation. The orientation of the residues, along with the acid-base properties and polarities, are all critical for docking. Thus, the LxCxE, RxL, and helix docking sites all interact with different parts of cyclin D, but disruption of any two of the three mechanisms can disrupt the phosphorylation of Rb in vitro. The helix binding, perhaps the most important, functions as a structural requirement that is difficult to evolve, leading the cyclin D-Cdk4/6 complex to have a relatively small number of substrates relative to other cyclin-Cdk complexes. Ultimately this contributes to the adequate phosphorylation of a key target in Rb.
All six cyclin D-Cdk4,6 complexes (cyclin D1/D2/D3 with Cdk4/6) target Rb for phosphorylation through helix-based docking. The shared α 1 helix hydrophobic patch that all cyclin D's have is not responsible for recognizing the C-terminal helix. Rather, it recognizes the RxL sequences that are linear, including those on Rb. Through experiments with purified cyclin D1-Cdk2, it was concluded that the helix docking site likely lies on cyclin D rather than the Cdk4,6. As a result, likely another region on cyclin D recognizes the Rb C-terminal helix.
Since Rb's C – terminal helix exclusively binds cyclin D-Cdk4,6 and not other cell cycle dependent cyclin-Cdk complexes, through experiments mutating this helix in HMEC cells, it has been conclusively shown that the cyclin D – Rb interaction is critical in the following roles (1) promoting the G1/S transition (2) allowing Rb dissociation from chromatin, and (3) E2F1 activation.
Regulation
In vertebrates
Cyclin D is regulated by the downstream pathway of mitogen receptors via the Ras/MAP kinase and the β-catenin-Tcf/LEF pathways and PI3K. The MAP kinase ERK activates the downstream transcription factors Myc, AP-1 and Fos which in turn activate the transcription of the Cdk4, Cdk6 and cyclin D genes, and increase ribosome biogenesis. Rho family GTPases, integrin linked kinase and focal adhesion kinase (FAK) activate cyclin D gene in response to integrin.
p27kip1 and p21cip1 are cyclin-dependent kinase inhibitors (CKIs) which negatively regulate CDKs. However they are also promoters of the cyclin D-CDK4/6 complex. Without p27 and p21, cyclin D levels are reduced and the complex is not formed at detectable levels.
In eukaryotes, overexpression of translation initiation factor 4E (eIF4E) leads to an increased level of cyclin D protein and increased amount of cyclin D mRNA outside of the nucleus. This is because eIF4E promotes the export of cyclin D mRNAs out of the nucleus.
Inhibition of cyclin D via inactivation or degradation leads to cell cycle exit and differentiation. Inactivation of cyclin D is triggered by several cyclin-dependent kinase inhibitor proteins (CKIs) like the INK4 family (e.g. p14, p15, p16, p18). INK4 proteins are activated in response to a hyperproliferative stress response that inhibits cell proliferation due to overexpression of e.g. Ras and Myc. Hence, INK4 binds to cyclin D-dependent CDKs and inactivates the whole complex. Glycogen synthase kinase three beta, GSK3β, causes Cyclin D degradation by inhibitory phosphorylation on threonine 286 of the Cyclin D protein. GSK3β is negatively controlled by the PI3K pathway in the form of phosphorylation, which is one of several ways in which growth factors regulate cyclin D. The amount of cyclin D in the cell can also be regulated by transcriptional induction, stabilization of the protein, its translocation to the nucleus and its assembly with Cdk4 and Cdk6.
It has been shown that the inhibition of cyclin D (cyclin D1 and 2, in particular) could result from the induction of WAF1/CIP1/p21 protein by PDT. By inhibiting cyclin D, this induction also inhibits Cdk2 and 6. All these processes combined lead to an arrest of the cell in the G0/G1 stage.
There are two ways in which DNA damage affects Cdks. Following DNA damage, cyclin D (cyclin D1) is rapidly and transiently degraded by the proteasome upon its ubiquitylation by the CRL4-AMBRA1 ubiquitin ligase. This degradation causes release of p21 from Cdk4 complexes, which inactivates Cdk2 in a p53-independent manner. Another way in which DNA damage targets Cdks is p53-dependent induction of p21, which inhibits cyclin E-Cdk2 complex. In healthy cells, wild-type p53 is quickly degraded by the proteasome. However, DNA damage causes it to accumulate by making it more stable.
In yeast
A simplification in yeast is that all cyclins bind to the same Cdc subunit, the Cdc28. Cyclins in yeast are controlled by expression, inhibition via CKIs like Far1, and degradation by ubiquitin-mediated proteolysis.
Role in cancer
Given that many human cancers happen in response to errors in cell cycle regulation and in growth factor dependent intracellular pathways, involvement of cyclin D in cell cycle control and growth factor signaling makes it a possible oncogene. In normal cells overproduction of cyclin D shortens the duration of G1 phase only, and considering the importance of cyclin D in growth factor signaling, defects in its regulation could be responsible for absence of growth regulation in cancer cells. Uncontrolled production of cyclin D affects amounts of cyclin D-Cdk4 complex being formed, which can drive the cell through the G0/S checkpoint, even when the growth factors are not present.
Evidence that cyclin D1 is required for tumorigenesis includes the finding that inactivation of cyclin D1 by anti-sense or gene deletion reduced breast tumor and gastrointestinal tumor growth in vivo. Cyclin D1 overexpression is sufficient for the induction of mammary tumorigenesis, attributed to the induction of cell proliferation, increased cell survival, induction of chromosomal instability, restraint of autophagy and potentially non-canonical functions.
Overexpression is induced as a result of gene amplification, growth factor or oncogene induced expression by Src, Ras, ErbB2, STAT3, STAT5, impaired protein degradation, or chromosomal translocation. Gene amplification is responsible for overproduction of cyclin D protein in bladder cancer and esophageal carcinoma, among others.
In cases of sarcomas, colorectal cancers and melanomas, cyclin D overproduction is noted, however, without the amplification of the chromosomal region that encodes it (chromosome 11q13, putative oncogene PRAD1, which has been identified as a translocation event in case of mantle cell lymphoma).
In parathyroid adenoma, cyclin D hyper-production is caused by chromosomal translocation, which would place expression of cyclin D (more specifically, cyclin D1) under an inappropriate promoter, leading to overexpression. In this case, cyclin D gene has been translocated to the parathyroid hormone gene, and this event caused abnormal levels of cyclin D.
The same mechanisms of overexpression of cyclin D is observed in some tumors of the antibody-producing B cells. Likewise, overexpression of cyclin D protein due to gene translocation is observed in human breast cancer.
Additionally, the development of cancer is also enhanced by the fact that retinoblastoma tumor suppressor protein (Rb), one of the key substrates of cyclin D-Cdk 4/6 complex, is quite frequently mutated in human tumors. In its active form, Rb prevents crossing of the G1 checkpoint by blocking transcription of genes responsible for advances in cell cycle. Cyclin D/Cdk4 complex phosphorylates Rb, which inactivates it and allows for the cell to go through the checkpoint. In the event of abnormal inactivation of Rb, in cancer cells, an important regulator of cell cycle progression is lost. When Rb is mutated, levels of cyclin D and p16INK4 are normal.
Another regulator of passage through G1 restriction point is Cdk inhibitor p16, which is encoded by INK4 gene. P16 functions in inactivating cyclin D/Cdk 4 complex. Thus, blocking transcription of INK4 gene would increase cyclin D/Cdk4 activity, which would in turn result in abnormal inactivation of Rb. On the other hand, in case of cyclin D in cancer cells (or loss of p16INK4) wild-type Rb is retained. Due to the importance of p16INK/cyclin D/Cdk4 or 6/Rb pathway in growth factor signaling, mutations in any of the players involved can give rise to cancer.
Mutant phenotype
Studies with mutants suggest that cyclins are positive regulators of cell cycle entry. In yeast, expression of any of the three G1 cyclins triggers cell cycle entry. Since cell cycle progression is related to cell size, mutations in Cyclin D and its homologues show a delay in cell cycle entry and thus, cells with variants in cyclin D have bigger than normal cell size at cell division.
The p27−/− knockout phenotype shows an overproduction of cells because cyclin D is no longer inhibited, while combined p27−/− and cyclin D−/− knockouts develop normally.
See also
CDK
Cyclins
Cell cycle
References
External links
Drosophila Cyclin D - The Interactive Fly
Cell cycle
Cell cycle regulators
Proteins | Cyclin D | [
"Chemistry",
"Biology"
] | 4,136 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Cellular processes",
"Molecular biology",
"Proteins",
"Cell cycle",
"Cell cycle regulators"
] |
9,880,204 | https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%202 | Cyclin-dependent kinase 2, also known as cell division protein kinase 2, or Cdk2, is an enzyme that in humans is encoded by the CDK2 gene. The protein encoded by this gene is a member of the cyclin-dependent kinase family of Ser/Thr protein kinases. This protein kinase is highly similar to the gene products of S. cerevisiae cdc28, and S. pombe cdc2, also known as Cdk1 in humans. It is a catalytic subunit of the cyclin-dependent kinase complex, whose activity is restricted to the G1-S phase of the cell cycle, where cells make proteins necessary for mitosis and replicate their DNA. This protein associates with and is regulated by the regulatory subunits of the complex including cyclin E or A. Cyclin E binds G1 phase Cdk2, which is required for the transition from G1 to S phase while binding with Cyclin A is required to progress through the S phase. Its activity is also regulated by phosphorylation. Multiple alternatively spliced variants and multiple transcription initiation sites of this gene have been reported. The role of this protein in G1-S transition has been recently questioned as cells lacking Cdk2 are reported to have no problem during this transition.
Dispensability in normally functioning tissue
Original cell-culture based experiments demonstrated cell cycle arrest at the G1-S transition resulting from the deletion of Cdk2. Later experiments showed that Cdk2 deletions lengthened the G1 phase of the cell cycle in mouse embryo fibroblasts. However, they still entered S phase after this period and were able to complete the remaining phases of the cell cycle. When Cdk2 was deleted in mice, the animals remained viable despite a reduction in body size. However, meiotic function of both male and female mice was inhibited. This suggests that Cdk2 is non-essential for the cell cycle of healthy cells, but essential for meiosis and reproduction. Cells in Cdk2 knockout mice likely undergo fewer divisions, contributing to the reduction in body size. Germ cells also stop dividing at prophase of meiosis, leading to reproductive sterility. Cdk1 is now believed to compensate for many aspects of Cdk2 deletion, except for meiotic function.
Mechanism of activation
Cyclin-dependent kinase 2 is structured in two lobes. The lobe beginning at the N-terminus (N-lobe) contains many beta sheets, while the C-terminus lobe (C-lobe) is rich in alpha helices. Cdk2 is capable of binding to many different cyclins, including cyclins A, B, E, and possibly C. Recent studies suggest Cdk2 binds preferentially to cyclins A and E, while Cdk1 prefers cyclins A and B.
Cdk2 becomes active when a cyclin protein (either A or E) binds at the active site located between the N and C lobes of the kinase. Due to the location of the active site, partner cyclins interact with both lobes of Cdk2. Cdk2 contains an important alpha helix located in the C lobe of the kinase, called the C-helix or the PSTAIRE-helix. Hydrophobic interactions cause the C-helix to associate with another helix in the activating cyclin. Activation induces a conformational change where the helix rotates and moves closer to the N-lobe. This allows the glutamic acid located on the C-helix to form an ion pair with a nearby lysine side chain. The significance of this movement is that it brings the side chain of Glu 51, which belongs to a triad of catalytic site residues conserved in all eukaryotic kinases, into the catalytic site. This triad (Lys 33, Glu 51 and Asp 145) is involved in ATP phosphate orientation and magnesium coordination, and is thought to be critical for catalysis. This conformational change also relocates the activation loop to the C-lobe, revealing the ATP binding site now available for new interactions. Finally, the Threonine-160 residue is exposed and phosphorylated as the C-lobe activation segment is displaced from the catalytic site and the threonine residue is no longer sterically hindered. The phosphorylated threonine residue creates stability in the final enzyme conformation. It is important to note that throughout this activation process, cyclins binding to Cdk2 do not undergo any conformational change.
Role in DNA replication
The success of the cell division process is dependent on the precise regulation of processes at both cellular and tissue levels. Complex interactions between proteins and DNA within the cell allow genomic DNA to be passed to daughter cells. Interactions between cells and extracellular matrix proteins allow new cells to be incorporated into existing tissues. At the cellular level, the process is controlled by different levels of cyclin-dependent kinases (Cdks) and their partner cyclins. Cells utilize various checkpoints as a means of delaying cell cycle progression until it can repair defects.
Cdk2 is active during G1 and S phase of the cell cycle, and therefore acts as a G1-S phase checkpoint control. Prior to G1 phase, levels of Cdk4 and Cdk6 increase along with cyclin D. This allows for the partial phosphorylation of Rb, and partial activation of E2F at the beginning of G1 phase, which promotes cyclin E synthesis and increased Cdk2 activity. At the end of G1 phase, the Cdk2/Cyclin E complex reaches maximum activity and plays a significant role in the initiation of S phase. Other non-Cdk proteins also become active during the G1-S phase transition. For example, the retinoblastoma (Rb) and p27 proteins are phosphorylated by Cdk2 – cyclin A/E complexes, fully deactivating them. This allows E2F transcription factors to express genes that promote entry into S phase where DNA is replicated prior to division. Additionally, NPAT, a known substrate of the Cdk2-Cyclin E complex, functions to activate histone gene transcription when phosphorylated. This increases the synthesis of histone proteins (the major protein component of chromatin), and subsequently supports the DNA replication stage of the cell cycle. Finally, at the end of S phase, the ubiquitin proteasome degrades cyclin E.
Cancer cell proliferation
Although Cdk2 is mostly dispensable in the cell cycle of normally functioning cells, it is critical to the abnormal growth processes of cancer cells. The CCNE1 gene produces cyclin E, one of the two major protein binding partners of Cdk2. Overexpression of CCNE1 occurs in many tumor cells, causing the cells to become dependent on Cdk2 and cyclin E. Abnormal cyclin E activity is also observed in breast, lung, colorectal, gastric, and bone cancers, as well as in leukemia and lymphoma. Likewise, abnormal expression of cyclin A2 is associated with chromosomal instability and tumor proliferation, while inhibition leads to decreased tumor growth. Therefore, CDK2 and its cyclin binding partners represent possible therapeutic targets for new cancer therapeutics. Pre-clinical models have shown preliminary success in limiting tumor growth, and have also been observed to reduce side effects of current chemotherapy drugs.
Identifying selective Cdk2 inhibitors is difficult due to the extreme similarity between the active sites of Cdk2 and other Cdks, especially Cdk1. Cdk1 is the only essential cyclin-dependent kinase in the cell cycle, and inhibiting it could lead to unintended side effects. Most CDK2 inhibitor candidates target the ATP binding site and can be divided into two main subclasses: type I and type II. Type I inhibitors competitively target the ATP binding site in its active state. Type II inhibitors target CDK2 in its unbound state, occupying either the ATP binding site or a hydrophobic pocket within the kinase. Type II inhibitors are believed to be more selective. Recently, the availability of new CDK crystal structures led to the identification of a potential allosteric binding site near the C-helix. Inhibitors of this allosteric site are classified as type III inhibitors. Another possible target is the T-loop of CDK2. When cyclin A binds to CDK2, the N-terminal lobe rotates to activate the ATP binding site and switch the position of the activation loop, called the T-loop.
Inhibitors
Molecular dynamics simulations and binding free energy studies indicated that Ligand2 (one of 17 in-house synthesized pyrrolone-fused benzosuberene (PBS) compounds) binds CDK2 stably, with a binding free energy comparable to that of the inhibitors flavopiridol, SU9516, and CVT-313. On the basis of ligand efficiency and binding affinity, Ligand2 was identified as a selective inhibitor of CDK2 without off-target binding to CDK1 and CDK9.
Known CDK inhibitors are p21Cip1 (CDKN1A) and p27Kip1 (CDKN1B).
Drugs that inhibit Cdk2 and arrest the cell cycle, such as GW8510 and the experimental cancer drug seliciclib, may reduce the sensitivity of the epithelium to many cell cycle-active antitumor agents and, therefore, represent a strategy for prevention of chemotherapy-induced alopecia.
Rosmarinic acid methyl ester is a plant-derived Cdk2 inhibitor, which was shown to suppress proliferation of vascular smooth muscle cells and to reduce neointima formation in a mouse restenosis model.
Gene regulation
In melanocytic cell types, expression of the CDK2 gene is regulated by the Microphthalmia-associated transcription factor.
Interactions
Cyclin-dependent kinase 2 has been shown to interact with:
BRCA1,
CDK2AP1,
CDKN1B
CDKN3,
CEBPA,
Cyclin A1,
Cyclin E1,
Flap structure-specific endonuclease 1,
ORC1L,
P21,
PPM1B,
PPP2CA,
Retinoblastoma-like protein 1,
Retinoblastoma-like protein 2, and
SKP2.
References
Further reading
External links
Cell cycle
Proteins
EC 2.7.11
Cell cycle regulators | Cyclin-dependent kinase 2 | [
"Chemistry",
"Biology"
] | 2,210 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Cellular processes",
"Molecular biology",
"Proteins",
"Cell cycle",
"Cell cycle regulators"
] |
9,154,054 | https://en.wikipedia.org/wiki/Stationary%20engineer | A stationary engineer (also called an operating engineer, power engineer or process operator) is a technically trained professional who operates, troubleshoots and oversees industrial machinery and equipment that provide and utilize energy in various forms.
The title "power engineer" is used differently between the United States and Canada.
Stationary engineers are responsible for the safe operation and maintenance of a wide range of equipment including boilers, steam turbines, gas turbines, gas compressors, generators, motors, air conditioning systems, heat exchangers, heat recovery steam generators (HRSGs) that may be directly fired (duct burners) or indirectly fired (gas turbine exhaust heat collectors), hot water generators, and refrigeration machinery in addition to its associated auxiliary equipment (air compressors, natural gas compressors, electrical switchgear, pumps, etc.).
Stationary engineers are trained in many areas, including mechanical, thermal, chemical, electrical, metallurgy, instrumentation, and a wide range of safety skills. They typically work in factories, office buildings, hospitals, warehouses, power generation plants, industrial facilities, and residential and commercial buildings.
The use of the title Stationary Engineer predates other engineering designations and is not to be confused with Professional Engineer, a title typically given to design engineers in their given field. The job of today's engineer has been greatly changed by computers and automation as well as the replacement of steam engines on ships and trains. Workers have adapted to the challenges of the changing job market.
Today, stationary engineers are required to be significantly more involved with the technical aspect of the job, as many plants and buildings are updated with increasingly more automated systems of control valves and distributed control systems.
History
The profession of stationary engineering emerged during the Industrial Revolution with the development of steam-powered pumps by Thomas Savery and Thomas Newcomen, which were used to draw water from mines, and with the industrial steam engines perfected by James Watt. Railroad engineers operated early steam locomotives and continue to operate trains today, as did marine engineers, who operated the boilers on steamships. The certification and classification of stationary engineers was developed in the late 19th century in order to reduce incidents of boiler explosions. Notable individuals who worked as stationary engineers include George Stephenson, William Faulkner, and Henry Ford.
Power Engineering (Technical Regulators)
In Canada, power engineers are regulated by their respective jurisdictions. Each province has a safety authority that is granted power through "enabling acts" and overseen by the Canadian Standards Association. Examinations and licensing in all 10 provinces and three territories are regulated by the Standardization of Power Engineers Examinations Committee (SOPEEC), which receives recommendations from the Interprovincial Power Engineering Curriculum Committee (IPECC).
Jurisdictional authorities
Alberta Boilers Safety Association (ABSA)
Technical Safety British Columbia (TSBC)
Office of The Fire Commissioner
Government of New Brunswick
Government of Newfoundland and Labrador
Northwest Territories
Nova Scotia
Nunavut
Technical Standards and Safety Authority (TSSA)
Government of Prince Edward Island
Régie du bâtiment du Québec and Emploi-Québec (RBQ, EQ)
Technical Safety Authority of Saskatchewan
Yukon
United States regulation
In the United States, power engineers are governed solely by their individual states or by their specific municipalities. Several states, such as Maine, have opted to align with Canada's guidelines regarding power engineering education; however, this is not common. Stationary engineers must be licensed in several cities and states. The New York City Department of Buildings requires a Stationary Engineer's License to practice in the City of New York; to obtain the license one must pass a written and practical exam and have at least five years' experience working directly under a licensed stationary engineer, or one year if in possession of a Bachelor of Science degree in mechanical engineering. Holders of the Stationary Engineer's License primarily work in large power generation facilities, such as cogeneration power plants, peaking units, and large central heating and refrigeration plants (CHRPs). For the State of California, Stationary Engineers are the State of California Military Department's sole source of Airfield Lighting and Repair.
External links
The International Union of Operating Engineers
The National Association of Power Engineers, history of "Stationary Engineers"
The American Society of Power Engineers
The National Institute for the Uniform Licensing of Power Engineers
National Institute of Power Engineers
Standardization of Power Engineer Examinations Committee
References
Power engineering | Stationary engineer | [
"Engineering"
] | 873 | [
"Power engineering",
"Electrical engineering",
"Energy engineering"
] |
9,154,659 | https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff%20equation | In astrophysics, the Tolman–Oppenheimer–Volkoff (TOV) equation constrains the structure of a spherically symmetric body of isotropic material which is in static gravitational equilibrium, as modeled by general relativity. The equation is
Here, is a radial coordinate, and and are the density and pressure, respectively, of the material at radius . The quantity , the total mass within , is discussed below.
The equation is derived by solving the Einstein equations for a general time-invariant, spherically symmetric metric. For a solution to the Tolman–Oppenheimer–Volkoff equation, this metric will take the form

$$ds^2 = e^{\nu(r)} c^2\, dt^2 - \left(1 - \frac{2 G m(r)}{r c^2}\right)^{-1} dr^2 - r^2 \left(d\theta^2 + \sin^2\theta \, d\varphi^2\right),$$

where $\nu(r)$ is determined by the constraint

$$\frac{d\nu}{dr} = -\left(\frac{2}{P + \rho c^2}\right) \frac{dP}{dr}.$$
When supplemented with an equation of state, $P = P(\rho)$, which relates density to pressure, the Tolman–Oppenheimer–Volkoff equation completely determines the structure of a spherically symmetric body of isotropic material in equilibrium. If terms of order $1/c^2$ are neglected, the Tolman–Oppenheimer–Volkoff equation becomes the Newtonian hydrostatic equation, $dP/dr = -G\, m(r)\, \rho(r)/r^2$, used to find the equilibrium structure of a spherically symmetric body of isotropic material when general-relativistic corrections are not important.
If the equation is used to model a bounded sphere of material in a vacuum, the zero-pressure condition $P(r) = 0$ and the condition $e^{\nu(r)} = 1 - 2 G m(r)/(r c^2)$ should be imposed at the boundary. The second boundary condition is imposed so that the metric at the boundary is continuous with the unique static spherically symmetric solution to the vacuum field equations, the Schwarzschild metric:

$$ds^2 = \left(1 - \frac{2 G M}{r c^2}\right) c^2\, dt^2 - \left(1 - \frac{2 G M}{r c^2}\right)^{-1} dr^2 - r^2 \left(d\theta^2 + \sin^2\theta \, d\varphi^2\right).$$
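Once an equation of state is chosen, the structure can be found numerically by integrating outward from the centre until the pressure reaches zero. The following minimal Python sketch does this with a simple Euler step and an assumed polytrope P = Kρ^γ; the polytropic constant, central density and step size are illustrative choices rather than values from the text, and a production calculation would use an adaptive integrator and a realistic equation of state.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

# Assumed polytropic equation of state P = K * rho**gamma (illustrative values,
# roughly those of a non-relativistic degenerate neutron gas).
K, gamma = 5.4e3, 5.0 / 3.0

def pressure(rho):
    return K * rho**gamma

def density(P):
    return (P / K) ** (1.0 / gamma)

def tov_rhs(r, P, m):
    """dP/dr from the TOV equation and dm/dr from the mass equation."""
    rho = density(P)
    dPdr = (-G * rho * m / r**2
            * (1.0 + P / (rho * c**2))
            * (1.0 + 4.0 * np.pi * r**3 * P / (m * c**2))
            / (1.0 - 2.0 * G * m / (r * c**2)))
    dmdr = 4.0 * np.pi * r**2 * rho
    return dPdr, dmdr

def integrate_star(rho_c, dr=1.0):
    """Euler-integrate outward from the centre until the pressure drops to zero."""
    r = dr
    P = pressure(rho_c)
    m = 4.0 / 3.0 * np.pi * r**3 * rho_c   # mass of the innermost sphere
    while P > 0.0:
        dPdr, dmdr = tov_rhs(r, P, m)
        P += dPdr * dr
        m += dmdr * dr
        r += dr
    return r, m   # stellar radius and gravitational mass

R, M = integrate_star(rho_c=5e17)   # assumed central density in kg/m^3
print(f"R ~ {R / 1e3:.1f} km, M ~ {M / 1.989e30:.2f} solar masses")
```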
Total mass
$m(r)$ is the total mass contained inside radius $r$, as measured by the gravitational field felt by a distant observer. It satisfies $m(0) = 0$.
Here, $M$ is the total mass of the object, again, as measured by the gravitational field felt by a distant observer. If the boundary is at $r = r_B$, continuity of the metric and the definition of $m(r)$ require that
$$M = m(r_B) = \int_0^{r_B} 4\pi r^2 \rho(r) \, dr.$$
Computing the mass by integrating the density of the object over its volume, on the other hand, will yield the larger value
$$M_1 = \int_0^{r_B} \frac{4\pi r^2 \rho(r)}{\sqrt{1 - \dfrac{2 G m(r)}{r c^2}}} \, dr.$$
The difference between these two quantities, $\delta M = M - M_1$, will be the gravitational binding energy of the object divided by $c^2$, and it is negative.
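To make the two integrals concrete, the sketch below evaluates both of them for a toy uniform-density sphere; the radius and density are arbitrary, loosely neutron-star-like placeholders rather than figures from the article.

```python
import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

# Toy model: a uniform-density sphere (arbitrary illustrative values).
R_star = 10e3   # radius, m
rho    = 7e17   # density, kg/m^3

r  = np.linspace(1.0, R_star, 200_000)
dr = r[1] - r[0]
m  = 4.0 / 3.0 * np.pi * r**3 * rho                 # mass inside radius r

# M: mass seen by a distant observer (plain volume integral of the density).
M  = np.sum(4.0 * np.pi * r**2 * rho) * dr

# M1: the same density integrated over the proper (curved-space) volume element.
M1 = np.sum(4.0 * np.pi * r**2 * rho
            / np.sqrt(1.0 - 2.0 * G * m / (r * c**2))) * dr

delta_M = M - M1                                     # negative, as noted above
print(f"M = {M:.3e} kg, M1 = {M1:.3e} kg")
print(f"binding energy magnitude ~ {abs(delta_M) * c**2:.3e} J")
```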
Derivation from general relativity
Let us assume a static, spherically symmetric perfect fluid. The metric components are similar to those for the Schwarzschild metric:
By the perfect fluid assumption, the stress-energy tensor is diagonal (in the central spherical coordinate system), with eigenvalues of energy density and pressure:
and
where $\rho$ is the fluid density and $P$ is the fluid pressure.
To proceed further, we solve Einstein's field equations:
Let us first consider the component:
Integrating this expression from 0 to , we obtain
where is as defined in the previous section.
Next, consider the component. Explicitly, we have
which we can simplify (using our expression for ) to
We obtain a second equation by demanding continuity of the stress-energy tensor: . Observing that (since the configuration is assumed to be static) and that (since the configuration is also isotropic), we obtain in particular
Rearranging terms yields:
This gives us two expressions, both containing . Eliminating , we obtain:
Pulling out a factor of and rearranging factors of 2 and results in the Tolman–Oppenheimer–Volkoff equation:
$$\frac{dP}{dr} = -\frac{G\, m(r)\, \rho(r)}{r^2} \left(1 + \frac{P(r)}{\rho(r) c^2}\right) \left(1 + \frac{4\pi r^3 P(r)}{m(r) c^2}\right) \left(1 - \frac{2 G m(r)}{r c^2}\right)^{-1}$$
History
Richard C. Tolman analyzed spherically symmetric metrics in 1934 and 1939. The form of the equation given here was derived by J. Robert Oppenheimer and George Volkoff in their 1939 paper, "On Massive Neutron Cores". In this paper, the equation of state for a degenerate Fermi gas of neutrons was used to calculate an upper limit of ~0.7 solar masses for the gravitational mass of a neutron star. Since this equation of state is not realistic for a neutron star, this limiting mass is likewise incorrect. Using gravitational wave observations from binary neutron star mergers (like GW170817) and the subsequent information from electromagnetic radiation (kilonova), the data suggest that the maximum mass limit is close to 2.17 solar masses. Earlier estimates for this limit range from 1.5 to 3.0 solar masses.
Post-Newtonian approximation
In the post-Newtonian approximation, i.e., for gravitational fields that deviate only slightly from the Newtonian field, the equation can be expanded in powers of $1/c^2$. In other words, we have
See also
Chandrasekhar's white dwarf equation
Hydrostatic equation
Tolman–Oppenheimer–Volkoff limit
Solutions of the Einstein field equations
Static spherically symmetric perfect fluid
References
Astrophysics
Exact solutions in general relativity
J. Robert Oppenheimer | Tolman–Oppenheimer–Volkoff equation | [
"Physics",
"Astronomy",
"Mathematics"
] | 939 | [
"Exact solutions in general relativity",
"Mathematical objects",
"Equations",
"Astrophysics",
"Astronomical sub-disciplines"
] |
9,157,119 | https://en.wikipedia.org/wiki/Feasible%20region | In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down.
For example, consider the problem of minimizing some function $f(x, y)$ with respect to the variables $x$ and $y$ subject to $1 \le x \le 10$ and $5 \le y \le 12$. Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12. The feasible set of the problem is separate from the objective function $f$, which states the criterion to be optimized.
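As a small concrete sketch, the box constraints above translate directly into solver bounds, and feasibility of a candidate point can be checked independently of the objective; the quadratic objective used below is an assumed stand-in, since the particular function being minimized is immaterial to the feasible set.

```python
from scipy.optimize import minimize

# Feasible set from the example: 1 <= x <= 10 and 5 <= y <= 12.
bounds = [(1, 10), (5, 12)]

def is_feasible(x, y):
    """Candidate-solution check: does (x, y) satisfy every constraint?"""
    return 1 <= x <= 10 and 5 <= y <= 12

def f(v):
    """Placeholder objective (assumed for illustration only)."""
    x, y = v
    return x**2 + y**2

print(is_feasible(3, 7), is_feasible(0, 7))         # True False
result = minimize(f, x0=[5.0, 8.0], bounds=bounds)  # start from a feasible point
print(result.x)                                     # optimum at the corner (1, 5)
```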
In many problems, the feasible set reflects a constraint that one or more variables must be non-negative. In pure integer programming problems, the feasible set is the set of integers (or some subset thereof). In linear programming problems, the feasible set is a convex polytope: a region in multidimensional space whose boundaries are formed by hyperplanes and whose corners are vertices.
Constraint satisfaction is the process of finding a point in the feasible region.
Convex feasible set
A convex feasible set is one in which a line segment connecting any two feasible points goes through only other feasible points, and not through any points outside the feasible set. Convex feasible sets arise in many types of problems, including linear programming problems, and they are of particular interest because, if the problem has a convex objective function that is to be minimized, it will generally be easier to solve in the presence of a convex feasible set and any local optimum will also be a global optimum.
No feasible set
If the constraints of an optimization problem are mutually contradictory, there are no points that satisfy all the constraints and thus the feasible region is the empty set. In this case the problem has no solution and is said to be infeasible.
Bounded and unbounded feasible sets
Feasible sets may be bounded or unbounded. For example, the feasible set defined by the constraint set {x ≥ 0, y ≥ 0} is unbounded because in some directions there is no limit on how far one can go and still be in the feasible region. In contrast, the feasible set formed by the constraint set {x ≥ 0, y ≥ 0, x + 2y ≤ 4} is bounded because the extent of movement in any direction is limited by the constraints.
In linear programming problems with n variables, a necessary but insufficient condition for the feasible set to be bounded is that the number of constraints be at least n + 1 (as illustrated by the above example).
If the feasible set is unbounded, there may or may not be an optimum, depending on the specifics of the objective function. For example, if the feasible region is defined by the constraint set {x ≥ 0, y ≥ 0}, then the problem of maximizing x + y has no optimum since any candidate solution can be improved upon by increasing x or y; yet if the problem is to minimize x + y, then there is an optimum (specifically at (x, y) = (0, 0)).
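The bounded/unbounded distinction can be demonstrated with an off-the-shelf linear-programming routine; the sketch below uses scipy's linprog, which minimizes by default, so maximizing x + y is phrased as minimizing −x − y.

```python
from scipy.optimize import linprog

nonneg = [(0, None), (0, None)]   # the unbounded feasible set {x >= 0, y >= 0}

# Minimizing x + y over {x >= 0, y >= 0} has an optimum at the origin...
res_min = linprog(c=[1, 1], bounds=nonneg)
print(res_min.status, res_min.x)   # 0 (success), [0. 0.]

# ...but maximizing x + y over the same set is unbounded (status code 3).
res_max = linprog(c=[-1, -1], bounds=nonneg)
print(res_max.status)              # 3

# Adding x + 2y <= 4 bounds the feasible set, so the maximum is now finite,
# attained at the vertex (4, 0).
res_bounded = linprog(c=[-1, -1], A_ub=[[1, 2]], b_ub=[4], bounds=nonneg)
print(res_bounded.x)               # [4. 0.]
```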
Candidate solution
In optimization and other branches of mathematics, and in search algorithms (a topic in computer science), a candidate solution is a member of the set of possible solutions in the feasible region of a given problem. A candidate solution does not have to be a likely or reasonable solution to the problem—it is simply in the set that satisfies all constraints; that is, it is in the set of feasible solutions. Algorithms for solving various types of optimization problems often narrow the set of candidate solutions down to a subset of the feasible solutions, whose points remain as candidate solutions while the other feasible solutions are henceforth excluded as candidates.
The space of all candidate solutions, before any feasible points have been excluded, is called the feasible region, feasible set, search space, or solution space. This is the set of all possible solutions that satisfy the problem's constraints. Constraint satisfaction is the process of finding a point in the feasible set.
Genetic algorithm
In the case of the genetic algorithm, the candidate solutions are the individuals in the population being evolved by the algorithm.
Calculus
In calculus, an optimal solution is sought using the first derivative test: the first derivative of the function being optimized is equated to zero, and any values of the choice variable(s) that satisfy this equation are viewed as candidate solutions (while those that do not are ruled out as candidates). There are several ways in which a candidate solution might not be an actual solution. First, it might give a minimum when a maximum is being sought (or vice versa), and second, it might give neither a minimum nor a maximum but rather a saddle point or an inflection point, at which a temporary pause in the local rise or fall of the function occurs. Such candidate solutions may be able to be ruled out by use of the second derivative test, the satisfaction of which is sufficient for the candidate solution to be at least locally optimal. Third, a candidate solution may be a local optimum but not a global optimum.
In taking antiderivatives of monomials of the form $x^n$, the candidate solution using Cavalieri's quadrature formula would be $\tfrac{x^{n+1}}{n+1} + C$. This candidate solution is in fact correct except when $n = -1$.
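A brief symbolic sketch of the first- and second-derivative tests, using an arbitrary toy function chosen purely for illustration:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 3*x                               # toy function (assumed for illustration)

candidates = sp.solve(sp.diff(f, x), x)      # first-derivative test: f'(x) = 0
for c in candidates:
    second = sp.diff(f, x, 2).subs(x, c)     # second-derivative test
    if second > 0:
        kind = 'local minimum'
    elif second < 0:
        kind = 'local maximum'
    else:
        kind = 'inconclusive (possible saddle or inflection point)'
    print(c, kind)                           # -1 local maximum, 1 local minimum
```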
Linear programming
In the simplex method for solving linear programming problems, a vertex of the feasible polytope is selected as the initial candidate solution and is tested for optimality; if it is rejected as the optimum, an adjacent vertex is considered as the next candidate solution. This process is continued until a candidate solution is found to be the optimum.
References
Optimal decisions
Mathematical optimization | Feasible region | [
"Mathematics"
] | 1,197 | [
"Mathematical optimization",
"Mathematical analysis"
] |
9,162,580 | https://en.wikipedia.org/wiki/Archer%27s%20paradox | The archer's paradox is the phenomenon of an arrow traveling in the direction it is pointed at full draw, when it seems that the arrow would have to pass through the starting position it was in before being drawn, where it was pointed to the side of the target.
The bending of the arrow when released is the explanation for why the paradox occurs and should not be confused with the paradox itself.
Flexing of the arrow when shot from a modern 'centre shot' bow is still present and is caused by a variety of factors, mainly the way the string is deflected from the fingers as the arrow is released.
The term was first used by E. J. Rendtroff in 1913, but detailed descriptions of the phenomenon appear in archery literature as early as Horace A. Ford's 1859 text "Archery: Its Theory and Practice". As understanding was gained about the arrow flexing around and out of the way of the bow as it is shot (as first filmed by Clarence Hickman) and then experiencing oscillating back-and-forth bending as it travels toward the target, the term has incorrectly come to be used for this dynamic flexing itself. This misuse sometimes causes misunderstanding on the part of those only familiar with modern target bows, which often have risers with an eccentrically cut-out "arrow window"; being "centre shot", these bows do not exhibit any paradoxical behaviour, as the arrow always points visually along its line of flight.
Details
In order to be accurate, an arrow must have the correct stiffness, or "dynamic spine", to flex out of the way of the bow and to return to the correct path as it leaves the bow. Incorrect dynamic spine results in unpredictable contact between the arrow and the bow, therefore unpredictable forces on the arrow as it leaves the bow, and therefore reduced accuracy. Additionally, if an archer shoots several arrows with different dynamic spines, as they clear the bow they will be deflected on launch by different amounts and so will strike in different places. Competition archers therefore strive not only for arrows that have a spine within a suitable range for their bow, but also for highly consistent spine within sets of arrows. This is done using a static spine tester.
Choice of bow and spine
Less powerful bows require arrows with less dynamic spine. (Spine is the stiffness of the arrow.) Less powerful bows have less effect in deforming the arrow as it is accelerated (see Euler buckling, case I) from the bow and the arrow must be "easier" to flex around the riser of the bow before settling to its path. Conversely, powerful bows need stiffer arrows with more spine, as the bow will have a much greater bending effect on the arrow as it is accelerated. An arrow with too much dynamic spine for the bow will not flex and as the string comes closer to the bow stave, the arrow will be forced off to the side. Too little dynamic spine will result in the arrow deforming too much and being propelled off to the other side of the target. In extreme cases, the arrow may break before it can accelerate, which can be a safety hazard.
Calibration
Dynamic spine is largely determined by shaft length, head weight, and static spine. Static spine is the stiffness of the center portion of the shaft under static conditions. The Archery Trade Association (ATA) (formerly the Archery Manufacturers and Merchants Organization (AMO)) static spine test method hangs a weight from the center of a suspended section of the arrow shaft. The American Society for Testing and Materials (ASTM) F2031-05 ("Standard Test Method for Measurement of Arrow Shaft Static Spine (Stiffness)") hangs a weight from the center of a suspended section of the arrow shaft. The (obsolete) British Grand National Archery Society (GNAS) system used a weight and a variable length with the arrow supported just behind the head and just in front of the nock. Because of this, GNAS cannot be directly converted to ATA or ASTM.
The primary unit of measurement for spine is deflection in thousandths of an inch (a deflection of 500 equals 0.500 in). Deflection is sometimes converted to pounds of bow weight by dividing 26 by the deflection in inches (26 in⋅lb divided by 0.500 in equals a spine of 52 lb).
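A small helper encoding the conversion just described (purely illustrative):

```python
def deflection_to_spine_lb(deflection_thousandths: float) -> float:
    """Convert an ATA-style deflection (thousandths of an inch) to spine in pounds."""
    deflection_inches = deflection_thousandths / 1000.0
    return 26.0 / deflection_inches

print(deflection_to_spine_lb(500))   # 52.0, matching the worked example above
```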
Solutions
Some modern bows have a cutout in the direct center of the body or riser that the arrow flies through; this allows the arrow to always move with the string. However, dynamic spine arrows are still used.
References
External links
The Archer's Paradox in SLOW MOTION - video
Solid mechanics
Archery
Physical paradoxes | Archer's paradox | [
"Physics"
] | 949 | [
"Solid mechanics",
"Mechanics"
] |
2,350,270 | https://en.wikipedia.org/wiki/List%20of%20SI%20electromagnetism%20units |
See also
SI
Speed of light
List of electromagnetism equations
References
External links
History of the electrical units.
Electromagnetism
Electromagnetism | List of SI electromagnetism units | [
"Physics",
"Mathematics"
] | 32 | [
"Electromagnetism",
"Physical phenomena",
"Quantity",
"Lists of units of measurement",
"Fundamental interactions",
"Units of measurement"
] |
2,352,910 | https://en.wikipedia.org/wiki/Solar%20cell | A solar cell, also known as a photovoltaic cell (PV cell), is an electronic device that converts the energy of light directly into electricity by means of the photovoltaic effect. It is a form of photoelectric cell, a device whose electrical characteristics (such as current, voltage, or resistance) vary when it is exposed to light. Individual solar cell devices are often the electrical building blocks of photovoltaic modules, known colloquially as "solar panels". Almost all commercial PV cells consist of crystalline silicon, with a market share of 95%. Cadmium telluride thin-film solar cells account for the remainder. The common single-junction silicon solar cell can produce a maximum open-circuit voltage of approximately 0.5 to 0.6 volts.
Photovoltaic cells may operate under sunlight or artificial light. In addition to producing energy, they can be used as a photodetector (for example infrared detectors), detecting light or other electromagnetic radiation near the visible range, or measuring light intensity.
The operation of a PV cell requires three basic attributes:
The absorption of light, generating excitons (bound electron-hole pairs), unbound electron-hole pairs (via excitons), or plasmons.
The separation of charge carriers of opposite types.
The separate extraction of those carriers to an external circuit.
In contrast, a solar thermal collector supplies heat by absorbing sunlight, for the purpose of either direct heating or indirect electrical power generation from heat. A "photoelectrolytic cell" (photoelectrochemical cell), on the other hand, refers either to a type of photovoltaic cell (like that developed by Edmond Becquerel and modern dye-sensitized solar cells), or to a device that splits water directly into hydrogen and oxygen using only solar illumination.
Photovoltaic cells and solar collectors are the two means of producing solar power.
Applications
Assemblies of solar cells are used to make solar modules that generate electrical power from sunlight, as distinguished from a "solar thermal module" or "solar hot water panel". A solar array generates solar power using solar energy.
Vehicular applications
Application of solar cells as an alternative energy source for vehicular applications is a growing industry. Electric vehicles that operate off of solar energy and/or sunlight are commonly referred to as solar cars. These vehicles use solar panels to convert absorbed light into electrical energy that is then stored in batteries. There are multiple input factors that affect the output power of solar cells such as temperature, material properties, weather conditions, solar irradiance and more.
The first photovoltaic cells in vehicular applications appeared around the mid-1970s. In an effort to increase publicity and awareness of solar-powered transportation, Hans Tholstrup set up the first edition of the World Solar Challenge in 1987, a 3000 km race across the Australian outback to which competitors from industry research groups and top universities around the globe were invited. General Motors won the event by a significant margin with its Sunraycer vehicle, which achieved speeds of over 40 mph. Contrary to popular belief, however, solar-powered cars are among the oldest alternative energy vehicles.
Current solar vehicles harness energy from the Sun via solar panels, groups of solar cells working in tandem. These solid-state devices use quantum mechanical transitions to convert incident solar power into electrical power. The electricity produced is stored in the vehicle's battery and used to run its motor. Batteries in solar-powered vehicles differ from those in standard internal-combustion cars because they are designed to deliver power to the vehicle's electrical components for longer durations.
Cells, modules, panels and systems
Multiple solar cells in an integrated group, all oriented in one plane, constitute a solar photovoltaic panel or module. Photovoltaic modules often have a sheet of glass on the sun-facing side, allowing light to pass while protecting the semiconductor wafers. Solar cells are usually connected in series creating additive voltage. Connecting cells in parallel yields a higher current.
However, problems in paralleled cells such as shadow effects can shut down the weaker (less illuminated) parallel string (a number of series connected cells) causing substantial power loss and possible damage because of the reverse bias applied to the shadowed cells by their illuminated partners.
Modules can be interconnected to create an array with the desired peak DC voltage and load current capacity, with or without independent MPPTs (maximum power point trackers) or, specific to each module, module-level power electronics (MLPE) units such as microinverters or DC-DC optimizers. Shunt diodes can reduce shadowing power loss in arrays with series/parallel connected cells.
By 2020, the United States cost per watt for a utility scale system had declined to $0.94.
History
The photovoltaic effect was experimentally demonstrated first by French physicist Edmond Becquerel. In 1839, at age 19, he built the world's first photovoltaic cell in his father's laboratory. Willoughby Smith first described the "Effect of Light on Selenium during the passage of an Electric Current" in a 20 February 1873 issue of Nature. In 1883 Charles Fritts built the first solid state photovoltaic cell by coating the semiconductor selenium with a thin layer of gold to form the junctions; the device was only around 1% efficient. Other milestones include:
1888 – Russian physicist Aleksandr Stoletov built the first cell based on the outer photoelectric effect discovered by Heinrich Hertz in 1887.
1904 – Julius Elster, together with Hans Friedrich Geitel, devised the first practical photoelectric cell.
1905 – Albert Einstein proposed a new quantum theory of light and explained the photoelectric effect in a landmark paper, for which he received the Nobel Prize in Physics in 1921.
1941 – Vadim Lashkaryov discovered p–n junctions in Cu2O and Ag2S protocells.
1946 – Russell Ohl patented the modern junction semiconductor solar cell, while working on the series of advances that would lead to the transistor.
1948 – According to Introduction to the World of Semiconductors, Kurt Lehovec may have been the first to explain the photovoltaic effect, in the peer-reviewed journal Physical Review.
1954 – The first practical photovoltaic cell was publicly demonstrated at Bell Laboratories. The inventors were Calvin Souther Fuller, Daryl Chapin and Gerald Pearson.
1958 – Solar cells gained prominence with their incorporation onto the Vanguard I satellite.
Space applications
Solar cells were first used in a prominent application when they were proposed and flown on the Vanguard satellite in 1958, as an alternative power source to the primary battery power source. By adding cells to the outside of the body, the mission time could be extended with no major changes to the spacecraft or its power systems. In 1959 the United States launched Explorer 6, featuring large wing-shaped solar arrays, which became a common feature in satellites. These arrays consisted of 9600 Hoffman solar cells.
By the 1960s, solar cells were (and still are) the main power source for most Earth orbiting satellites and a number of probes into the solar system, since they offered the best power-to-weight ratio. However, this success was possible because in the space application, power system costs could be high, because space users had few other power options, and were willing to pay for the best possible cells. The space power market drove the development of higher efficiencies in solar cells up until the National Science Foundation "Research Applied to National Needs" program began to push development of solar cells for terrestrial applications.
In the early 1990s the technology used for space solar cells diverged from the silicon technology used for terrestrial panels, with the spacecraft application shifting to gallium arsenide-based III-V semiconductor materials, which then evolved into the modern III-V multijunction photovoltaic cell used on spacecraft.
In recent years, research has moved towards designing and manufacturing lightweight, flexible, and highly efficient solar cells. Terrestrial solar cell technology generally uses photovoltaic cells that are laminated with a layer of glass for strength and protection. Space applications for solar cells require that the cells and arrays are both highly efficient and extremely lightweight. Some newer technology implemented on satellites are multi-junction photovoltaic cells, which are composed of different p–n junctions with varying bandgaps in order to utilize a wider spectrum of the sun's energy. Additionally, large satellites require the use of large solar arrays to produce electricity. These solar arrays need to be broken down to fit in the geometric constraints of the launch vehicle the satellite travels on before being injected into orbit. Historically, solar cells on satellites consisted of several small terrestrial panels folded together. These small panels would be unfolded into a large panel after the satellite is deployed in its orbit. Newer satellites aim to use flexible rollable solar arrays that are very lightweight and can be packed into a very small volume. The smaller size and weight of these flexible arrays drastically decreases the overall cost of launching a satellite due to the direct relationship between payload weight and launch cost of a launch vehicle.
In 2020, the US Naval Research Laboratory conducted its first test of solar power generation in a satellite, the Photovoltaic Radio-frequency Antenna Module (PRAM) experiment aboard the Boeing X-37.
Improved manufacturing methods
Improvements were gradual over the 1960s. This was also the reason that costs remained high, because space users were willing to pay for the best possible cells, leaving no reason to invest in lower-cost, less-efficient solutions. The price was determined largely by the semiconductor industry; their move to integrated circuits in the 1960s led to the availability of larger boules at lower relative prices. As their price fell, the price of the resulting cells did as well. These effects lowered 1971 cell costs to some $100 per watt.
In late 1969 Elliot Berman joined Exxon's task force which was looking for projects 30 years in the future and in April 1973 he founded Solar Power Corporation (SPC), a wholly owned subsidiary of Exxon at that time. The group had concluded that electrical power would be much more expensive by 2000, and felt that this increase in price would make alternative energy sources more attractive. He conducted a market study and concluded that a price per watt of about $20/watt would create significant demand. The team eliminated the steps of polishing the wafers and coating them with an anti-reflective layer, relying on the rough-sawn wafer surface. The team also replaced the expensive materials and hand wiring used in space applications with a printed circuit board on the back, acrylic plastic on the front, and silicone glue between the two, "potting" the cells. Solar cells could be made using cast-off material from the electronics market. By 1973 they announced a product, and SPC convinced Tideland Signal to use its panels to power navigational buoys, initially for the U.S. Coast Guard.
Research and industrial production
Research into solar power for terrestrial applications became prominent with the U.S. National Science Foundation's Advanced Solar Energy Research and Development Division within the "Research Applied to National Needs" program, which ran from 1969 to 1977, and funded research on developing solar power for ground electrical power systems. A 1973 conference, the "Cherry Hill Conference", set forth the technology goals required to achieve this goal and outlined an ambitious project for achieving them, kicking off an applied research program that would be ongoing for several decades. The program was eventually taken over by the Energy Research and Development Administration (ERDA), which was later merged into the U.S. Department of Energy.
Following the 1973 oil crisis, oil companies used their higher profits to start (or buy) solar firms, and were for decades the largest producers. Exxon, ARCO, Shell, Amoco (later purchased by BP) and Mobil all had major solar divisions during the 1970s and 1980s. Technology companies also participated, including General Electric, Motorola, IBM, Tyco and RCA.
Declining costs and exponential growth
Adjusting for inflation, it cost $96 per watt for a solar module in the mid-1970s. Process improvements and a very large boost in production have brought that figure down more than 99%, to 30¢ per watt in 2018 and as low as 20¢ per watt in 2020.
Swanson's law is an observation similar to Moore's Law that states that solar cell prices fall 20% for every doubling of industry capacity. It was featured in an article in the British weekly newspaper The Economist in late 2012. Balance of system costs were then higher than those of the panels. Large commercial arrays could be built, as of 2018, at below $1.00 a watt, fully commissioned.
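The arithmetic behind Swanson's law reduces to a simple learning-curve calculation; the starting price and capacities in the sketch below are illustrative placeholders, not figures from the text.

```python
import math

def swanson_price(initial_price, initial_capacity, new_capacity, drop_per_doubling=0.20):
    """Price after capacity growth, assuming a fixed fractional drop per doubling."""
    doublings = math.log2(new_capacity / initial_capacity)
    return initial_price * (1.0 - drop_per_doubling) ** doublings

# Illustrative: starting at $1.00/W, an 8-fold capacity increase is 3 doublings.
print(round(swanson_price(1.00, 1.0, 8.0), 3))   # 0.512 ($/W)
```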
As the semiconductor industry moved to ever-larger boules, older equipment became inexpensive. Cell sizes grew as equipment became available on the surplus market; ARCO Solar's original panels used cells in diameter. Panels in the 1990s and early 2000s generally used 125 mm wafers; since 2008, almost all new panels use greater than 156mm cells, and by 2020 even larger 182mm ‘M10’ cells. The widespread introduction of flat screen televisions in the late 1990s and early 2000s led to the wide availability of large, high-quality glass sheets to cover the panels.
During the 1990s, polysilicon ("poly") cells became increasingly popular. These cells offer less efficiency than their monosilicon ("mono") counterparts, but they are grown in large vats that reduce cost. By the mid-2000s, poly was dominant in the low-cost panel market, but more recently the mono returned to widespread use.
Manufacturers of wafer-based cells responded to high silicon prices in 2004–2008 with rapid reductions in silicon consumption. In 2008, according to Jef Poortmans, director of IMEC's organic and solar department, current cells use of silicon per watt of power generation, with wafer thicknesses in the neighborhood of 200 microns. Crystalline silicon panels dominate worldwide markets and are mostly manufactured in China and Taiwan. By late 2011, a drop in European demand dropped prices for crystalline solar modules to about $1.09 per watt down sharply from 2010. Prices continued to fall in 2012, reaching $0.62/watt by 4Q2012.
Solar PV is growing fastest in Asia, with China and Japan currently accounting for half of worldwide deployment. Global installed PV capacity reached at least 301 gigawatts in 2016, and grew to supply 1.3% of global power by 2016.
It was anticipated that electricity from PV would be competitive with wholesale electricity costs all across Europe and that the energy payback time of crystalline silicon modules could be reduced to below 0.5 years by 2020.
Falling costs are considered one of the biggest factors in the rapid growth of renewable energy, with the cost of solar photovoltaic electricity falling by ~85% between 2010 (when solar and wind made up 1.7% of global electricity generation) and 2021 (where they made up 8.7%). In 2019 solar cells accounted for ~3 % of the world's electricity generation.
Subsidies and grid parity
Solar-specific feed-in tariffs vary by country and within countries. Such tariffs encourage the development of solar power projects. Widespread grid parity, the point at which photovoltaic electricity is equal to or cheaper than grid power without subsidies, likely requires advances on all three fronts. Proponents of solar hope to achieve grid parity first in areas with abundant sun and high electricity costs such as in California and Japan. In 2007 BP claimed grid parity for Hawaii and other islands that otherwise use diesel fuel to produce electricity. George W. Bush set 2015 as the date for grid parity in the US. The Photovoltaic Association reported in 2012 that Australia had reached grid parity (ignoring feed in tariffs).
The price of solar panels fell steadily for 40 years, interrupted in 2004 when high subsidies in Germany drastically increased demand there and greatly increased the price of purified silicon (which is used in computer chips as well as solar panels). The recession of 2008 and the onset of Chinese manufacturing caused prices to resume their decline. In the four years after January 2008 prices for solar modules in Germany dropped from €3 to €1 per peak watt. During that same time production capacity surged with an annual growth of more than 50%. China increased market share from 8% in 2008 to over 55% in the last quarter of 2010. In December 2012 the price of Chinese solar panels had dropped to $0.60/Wp (crystalline modules). (The abbreviation Wp stands for watt peak capacity, or the maximum capacity under optimal conditions.)
As of the end of 2016, it was reported that spot prices for assembled solar panels (not cells) had fallen to a record-low of US$0.36/Wp. The second largest supplier, Canadian Solar Inc., had reported costs of US$0.37/Wp in the third quarter of 2016, having dropped $0.02 from the previous quarter, and hence was probably still at least breaking even. Many producers expected costs would drop to the vicinity of $0.30 by the end of 2017. It was also reported that new solar installations were cheaper than coal-based thermal power plants in some regions of the world, and this was expected to be the case in most of the world within a decade.
Theory
A solar cell is made of semiconducting materials, such as silicon, that have been fabricated into a p–n junction. Such junctions are made by doping one side of the device p-type and the other n-type, for example in the case of silicon by introducing small concentrations of boron or phosphorus respectively.
In operation, photons in sunlight hit the solar cell and are absorbed by the semiconductor. When the photons are absorbed, electrons are excited from the valence band to the conduction band (or from occupied to unoccupied molecular orbitals in the case of an organic solar cell), producing electron-hole pairs.
If the electron-hole pairs are created near the junction between p-type and n-type materials the local electric field sweeps them apart to opposite electrodes, producing an excess of electrons on one side and an excess of holes on the other. When the solar cell is unconnected (or the external electrical load is very high) the electrons and holes will ultimately restore equilibrium by diffusing back across the junction against the field and recombine with each other giving off heat, but if the load is small enough then it is easier for equilibrium to be restored by the excess electrons going around the external circuit, doing useful work along the way.
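The current–voltage behaviour just described is often summarised by the ideal single-diode model, I = I_L − I_0(exp(qV/(nkT)) − 1). The sketch below evaluates that relation for assumed, illustrative parameters (the photocurrent, saturation current and ideality factor are placeholders, not values from the article); with these numbers the open-circuit voltage comes out near 0.56 V, in line with the 0.5–0.6 V figure quoted earlier for silicon cells.

```python
import numpy as np

q = 1.602e-19   # elementary charge, C
k = 1.381e-23   # Boltzmann constant, J/K
T = 298.0       # cell temperature, K

# Assumed, illustrative single-diode parameters for a silicon cell.
I_L = 3.0       # light-generated (photo) current, A
I_0 = 1e-9      # diode saturation current, A
n   = 1.0       # diode ideality factor

def cell_current(V):
    """Terminal current of the ideal single-diode model at voltage V."""
    return I_L - I_0 * (np.exp(q * V / (n * k * T)) - 1.0)

V = np.linspace(0.0, 0.7, 2000)
I = cell_current(V)
P = V * I

print(f"short-circuit current Isc ~ {cell_current(0.0):.2f} A")
print(f"open-circuit voltage  Voc ~ {V[np.argmin(np.abs(I))]:.3f} V")
print(f"maximum power point       ~ {P.max():.2f} W at {V[np.argmax(P)]:.3f} V")
```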
An array of solar cells converts solar energy into a usable amount of direct current (DC) electricity. An inverter can convert the power to alternating current (AC).
The most commonly known solar cell is configured as a large-area p–n junction made from silicon. Other possible solar cell types are organic solar cells, dye sensitized solar cells, perovskite solar cells, quantum dot solar cells etc. The illuminated side of a solar cell generally has a transparent conducting film for allowing light to enter into the active material and to collect the generated charge carriers. Typically, films with high transmittance and high electrical conductance such as indium tin oxide, conducting polymers or conducting nanowire networks are used for the purpose.
Efficiency
Solar cell efficiency may be broken down into reflectance efficiency, thermodynamic efficiency, charge carrier separation efficiency and conductive efficiency. The overall efficiency is the product of these individual metrics.
The power conversion efficiency of a solar cell is a parameter which is defined by the fraction of incident power converted into electricity.
A solar cell has a voltage dependent efficiency curve, temperature coefficients, and allowable shadow angles.
Due to the difficulty in measuring these parameters directly, other parameters are substituted: thermodynamic efficiency, quantum efficiency, integrated quantum efficiency, VOC ratio, and fill factor. Reflectance losses are a portion of quantum efficiency under "external quantum efficiency". Recombination losses make up another portion of quantum efficiency, VOC ratio, and fill factor. Resistive losses are predominantly categorized under fill factor, but also make up minor portions of quantum efficiency and VOC ratio.
The fill factor is the ratio of the actual maximum obtainable power to the product of the open-circuit voltage and short-circuit current. This is a key parameter in evaluating performance. In 2009, typical commercial solar cells had a fill factor > 0.70. Grade B cells were usually between 0.4 and 0.7. Cells with a high fill factor have a low equivalent series resistance and a high equivalent shunt resistance, so less of the current produced by the cell is dissipated in internal losses.
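As a minimal worked example of these definitions, the sketch below computes the fill factor and power conversion efficiency from point measurements; all the numbers are placeholders chosen only to resemble a plausible modern silicon cell.

```python
def fill_factor(v_oc, i_sc, v_mp, i_mp):
    """FF = (V_mp * I_mp) / (V_oc * I_sc): maximum obtainable power over Voc * Isc."""
    return (v_mp * i_mp) / (v_oc * i_sc)

def efficiency(v_mp, i_mp, irradiance_w_per_m2, area_m2):
    """Power conversion efficiency: electrical output over incident solar power."""
    return (v_mp * i_mp) / (irradiance_w_per_m2 * area_m2)

# Placeholder measurements for a 15.6 cm x 15.6 cm cell under standard test conditions.
ff  = fill_factor(v_oc=0.62, i_sc=9.0, v_mp=0.53, i_mp=8.5)
eta = efficiency(v_mp=0.53, i_mp=8.5, irradiance_w_per_m2=1000.0, area_m2=0.156 ** 2)
print(f"fill factor ~ {ff:.2f}, efficiency ~ {eta:.1%}")
```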
Single p–n junction crystalline silicon devices are now approaching the theoretical limiting power efficiency of 33.16%, noted as the Shockley–Queisser limit in 1961. In the extreme, with an infinite number of layers, the corresponding limit is 86% using concentrated sunlight.
In 2014, three companies broke the record of 25.6% for a silicon solar cell. Panasonic's was the most efficient. The company moved the front contacts to the rear of the panel, eliminating shaded areas. In addition they applied thin silicon films to the (high quality silicon) wafer's front and back to eliminate defects at or near the wafer surface.
In 2015, a 4-junction GaInP/GaAs//GaInAsP/GaInAs solar cell achieved a new laboratory record efficiency of 46.1% (concentration ratio of sunlight = 312) in a French-German collaboration between the Fraunhofer Institute for Solar Energy Systems (Fraunhofer ISE), CEA-LETI and SOITEC.
In September 2015, Fraunhofer ISE announced the achievement of an efficiency above 20% for epitaxial wafer cells. The work on optimizing the atmospheric-pressure chemical vapor deposition (APCVD) in-line production chain was done in collaboration with NexWafe GmbH, a company spun off from Fraunhofer ISE to commercialize production.
For triple-junction thin-film solar cells, the world record is 13.6%, set in June 2015.
In 2016, researchers at Fraunhofer ISE announced a GaInP/GaAs/Si triple-junction solar cell with two terminals reaching 30.2% efficiency without concentration.
In 2017, a team of researchers at National Renewable Energy Laboratory (NREL), EPFL and CSEM (Switzerland) reported record one-sun efficiencies of 32.8% for dual-junction GaInP/GaAs solar cell devices. In addition, the dual-junction device was mechanically stacked with a Si solar cell, to achieve a record one-sun efficiency of 35.9% for triple-junction solar cells.
Materials
Solar cells are typically named after the semiconducting material they are made of. These materials must have certain characteristics in order to absorb sunlight. Some cells are designed to handle sunlight that reaches the Earth's surface, while others are optimized for use in space. Solar cells can be made of a single layer of light-absorbing material (single-junction) or use multiple physical configurations (multi-junctions) to take advantage of various absorption and charge separation mechanisms.
Solar cells can be classified into first, second and third generation cells. The first generation cells—also called conventional, traditional or wafer-based cells—are made of crystalline silicon, the commercially predominant PV technology, that includes materials such as polysilicon and monocrystalline silicon. Second generation cells are thin film solar cells, that include amorphous silicon, CdTe and CIGS cells and are commercially significant in utility-scale photovoltaic power stations, building integrated photovoltaics or in small stand-alone power system. The third generation of solar cells includes a number of thin-film technologies often described as emerging photovoltaics—most of them have not yet been commercially applied and are still in the research or development phase. Many use organic materials, often organometallic compounds as well as inorganic substances. Despite the fact that their efficiencies had been low and the stability of the absorber material was often too short for commercial applications, there is research into these technologies as they promise to achieve the goal of producing low-cost, high-efficiency solar cells. As of 2016, the most popular and efficient solar cells were those made from thin wafers of silicon which are also the oldest solar cell technology.
Crystalline silicon
By far, the most prevalent bulk material for solar cells is crystalline silicon (c-Si), also known as "solar grade silicon". Bulk silicon is separated into multiple categories according to crystallinity and crystal size in the resulting ingot, ribbon or wafer. These cells are entirely based around the concept of a p–n junction. Solar cells made of c-Si are made from wafers between 160 and 240 micrometers thick.
Monocrystalline silicon
Monocrystalline silicon (mono-Si) solar cells feature a single-crystal composition that enables electrons to move more freely than in a multi-crystal configuration. Consequently, monocrystalline solar panels deliver a higher efficiency than their multicrystalline counterparts. The corners of the cells look clipped, like an octagon, because the wafer material is cut from cylindrical ingots, that are typically grown by the Czochralski process. Solar panels using mono-Si cells display a distinctive pattern of small white diamonds.
Epitaxial silicon development
Epitaxial wafers of crystalline silicon can be grown on a monocrystalline silicon "seed" wafer by chemical vapor deposition (CVD), and then detached as self-supporting wafers of some standard thickness (e.g., 250 μm) that can be manipulated by hand, and directly substituted for wafer cells cut from monocrystalline silicon ingots. Solar cells made with this "kerfless" technique can have efficiencies approaching those of wafer-cut cells, but at appreciably lower cost if the CVD can be done at atmospheric pressure in a high-throughput inline process. The surface of epitaxial wafers may be textured to enhance light absorption.
In June 2015, it was reported that heterojunction solar cells grown epitaxially on n-type monocrystalline silicon wafers had reached an efficiency of 22.5% over a total cell area of 243.4 cm².
Polycrystalline silicon
Polycrystalline silicon, or multicrystalline silicon (multi-Si) cells are made from cast square ingots—large blocks of molten silicon carefully cooled and solidified. They consist of small crystals giving the material its typical metal flake effect. Polysilicon cells are the most common type used in photovoltaics and are less expensive, but also less efficient, than those made from monocrystalline silicon.
Ribbon silicon
Ribbon silicon is a type of polycrystalline silicon—it is formed by drawing flat thin films from molten silicon and results in a polycrystalline structure. These cells are cheaper to make than multi-Si, due to a great reduction in silicon waste, as this approach does not require sawing from ingots. However, they are also less efficient.
Mono-like-multi silicon (MLM)
This form was developed in the 2000s and introduced commercially around 2009. Also called cast-mono, this design uses polycrystalline casting chambers with small "seeds" of mono material. The result is a bulk mono-like material that is polycrystalline around the outsides. When sliced for processing, the inner sections are high-efficiency mono-like cells (but square instead of "clipped"), while the outer edges are sold as conventional poly. This production method results in mono-like cells at poly-like prices.
Thin film
Thin-film technologies reduce the amount of active material in a cell. Most designs sandwich active material between two panes of glass. Since silicon solar panels only use one pane of glass, thin film panels are approximately twice as heavy as crystalline silicon panels, although they have a smaller ecological impact (determined from life cycle analysis).
Cadmium telluride
Cadmium telluride is the only thin film material so far to rival crystalline silicon in cost/watt. However cadmium is highly toxic and tellurium (anion: "telluride") supplies are limited. The cadmium present in the cells would be toxic if released. However, release is impossible during normal operation of the cells and is unlikely during fires in residential roofs. A square meter of CdTe contains approximately the same amount of Cd as a single C cell nickel-cadmium battery, in a more stable and less soluble form.
Copper indium gallium selenide
Copper indium gallium selenide (CIGS) is a direct band gap material. It has the highest efficiency (~20%) among all commercially significant thin film materials (see CIGS solar cell). Traditional methods of fabrication involve vacuum processes including co-evaporation and sputtering. Recent developments at IBM and Nanosolar attempt to lower the cost by using non-vacuum solution processes.
Silicon thin film
Silicon thin-film cells are mainly deposited by chemical vapor deposition (typically plasma-enhanced, PE-CVD) from silane gas and hydrogen gas. Depending on the deposition parameters, this can yield amorphous silicon (a-Si or a-Si:H), protocrystalline silicon or nanocrystalline silicon (nc-Si or nc-Si:H), also called microcrystalline silicon.
Amorphous silicon is the most well-developed thin film technology to-date. An amorphous silicon (a-Si) solar cell is made of non-crystalline or microcrystalline silicon. Amorphous silicon has a higher bandgap (1.7 eV) than crystalline silicon (c-Si) (1.1 eV), which means it absorbs the visible part of the solar spectrum more strongly than the higher power density infrared portion of the spectrum. The production of a-Si thin film solar cells uses glass as a substrate and deposits a very thin layer of silicon by plasma-enhanced chemical vapor deposition (PECVD).
Protocrystalline silicon with a low volume fraction of nanocrystalline silicon is optimal for high open-circuit voltage. Nc-Si has about the same bandgap as c-Si and nc-Si and a-Si can advantageously be combined in thin layers, creating a layered cell called a tandem cell. The top cell in a-Si absorbs the visible light and leaves the infrared part of the spectrum for the bottom cell in nc-Si.
Gallium arsenide thin film
The semiconductor material gallium arsenide (GaAs) is also used for single-crystalline thin film solar cells. Although GaAs cells are very expensive, they hold the world's record in efficiency for a single-junction solar cell at 28.8%. Typically fabricated on a crystalline silicon wafer with a 41% fill factor, by moving to porous silicon the fill factor can be increased to 56% with potentially reduced cost. Using less active GaAs material by fabricating nanowires is another potential pathway to cost reduction. GaAs is more commonly used in multijunction photovoltaic cells for concentrated photovoltaics (CPV, HCPV) and for solar panels on spacecraft, as the industry favours efficiency over cost for space-based solar power. Based on the previous literature and some theoretical analysis, there are several reasons why GaAs has such high power conversion efficiency. First, the GaAs bandgap is 1.43 eV, which is almost ideal for solar cells. Second, gallium is a by-product of the smelting of other metals, and GaAs cells are relatively insensitive to heat, maintaining high efficiency when the temperature is quite high. Third, GaAs offers a wide range of design options: using GaAs as the active layer, engineers can choose among other layers that better generate electrons and holes in GaAs.
Multijunction cells
Multi-junction cells consist of multiple thin films, each essentially a solar cell grown on top of another, typically using metalorganic vapour phase epitaxy. Each layer has a different band gap energy to allow it to absorb electromagnetic radiation over a different portion of the spectrum. Multi-junction cells were originally developed for special applications such as satellites and space exploration, but are now used increasingly in terrestrial concentrator photovoltaics (CPV), an emerging technology that uses lenses and curved mirrors to concentrate sunlight onto small, highly efficient multi-junction solar cells. By concentrating sunlight up to a thousand times, High concentration photovoltaics (HCPV) has the potential to outcompete conventional solar PV in the future.
Tandem solar cells based on monolithic, series-connected gallium indium phosphide (GaInP), gallium arsenide (GaAs), and germanium (Ge) p–n junctions are seeing increasing sales, despite cost pressures. Between December 2006 and December 2007, the cost of 4N gallium metal rose from about $350 per kg to $680 per kg. Additionally, germanium metal prices rose substantially to $1000–1200 per kg around the same time. The materials involved include gallium (4N, 6N and 7N Ga), arsenic (4N, 6N and 7N) and germanium, along with pyrolytic boron nitride (pBN) crucibles for growing the crystals and boron oxide; these products are critical to the entire substrate manufacturing industry.
A triple-junction cell, for example, may consist of the semiconductors GaAs, Ge, and GaInP. Triple-junction GaAs solar cells were used as the power source of the Dutch four-time World Solar Challenge winners Nuna in 2003, 2005 and 2007 and by the Dutch solar cars Solutra (2005), Twente One (2007) and 21Revolution (2009). GaAs based multi-junction devices are the most efficient solar cells to date. On 15 October 2012, triple junction metamorphic cells reached a record high of 44%. In 2022, researchers at Fraunhofer Institute for Solar Energy Systems ISE in Freiburg, Germany, demonstrated a record solar cell efficiency of 47.6% under 665-fold sunlight concentration with a four-junction concentrator solar cell.
GaInP/Si dual-junction solar cells
In 2016, a new approach was described for producing hybrid photovoltaic wafers combining the high efficiency of III-V multi-junction solar cells with the economies and wealth of experience associated with silicon. The technical complications involved in growing the III-V material on silicon at the required high temperatures, a subject of study for some 30 years, are avoided by epitaxial growth of silicon on GaAs at low temperature by plasma-enhanced chemical vapor deposition (PECVD).
Si single-junction solar cells have been widely studied for decades and are reaching their practical efficiency limit of ~26% under 1-sun conditions. Increasing this efficiency may require adding more cells with bandgap energy larger than 1.1 eV to the Si cell, allowing short-wavelength photons to be converted into additional voltage. A dual-junction solar cell with a band gap of 1.6–1.8 eV as a top cell can reduce thermalization loss, produce a high external radiative efficiency and achieve theoretical efficiencies over 45%. A tandem cell can be fabricated by growing the GaInP and Si cells separately; growing them separately overcomes the 4% lattice constant mismatch between Si and the most common III–V layers that prevents direct integration into one cell. The two cells are therefore separated by a transparent glass slide so the lattice mismatch does not cause strain in the system. This creates a cell with four electrical contacts and two junctions; the GaInP top cell demonstrated an efficiency of 18.1%. With a fill factor (FF) of 76.2%, the Si bottom cell reaches an efficiency of 11.7% (± 0.4) in the tandem device, resulting in a cumulative tandem cell efficiency of 29.8%. This efficiency exceeds the theoretical limit of 29.4% and the record experimental efficiency of a Si 1-sun solar cell, and is also higher than the record-efficiency 1-sun GaAs device. However, using a GaAs substrate is expensive and not practical. Hence researchers are trying to make a cell with two electrical contact points and one junction, which does not need a GaAs substrate. This means there will be direct integration of GaInP and Si.
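As a sanity check on the figures above, the quoted 29.8% is just the sum of the two sub-cell efficiencies measured in the stack, which is how a four-terminal (mechanically stacked) tandem is evaluated; the snippet below only restates that arithmetic, and the split between top and bottom cell is an interpretation of the text, not an additional source.

```python
# Four-terminal tandem: each sub-cell is operated at its own maximum power
# point, so the tandem efficiency is the sum of the two stacked efficiencies.
gainp_top_eff = 18.1   # % efficiency attributed to the GaInP top cell
si_bottom_eff = 11.7   # % efficiency of the filtered Si bottom cell (FF 76.2%)
print(f"cumulative tandem efficiency ~ {gainp_top_eff + si_bottom_eff:.1f} %")  # 29.8 %
```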
Research in solar cells
Perovskite solar cells
Perovskite solar cells are solar cells that include a perovskite-structured material as the active layer. Most commonly, this is a solution-processed hybrid organic-inorganic tin or lead halide based material. Efficiencies have increased from below 5% at their first usage in 2009 to 25.5% in 2020, making them a very rapidly advancing technology and a hot topic in the solar cell field. Researchers at the University of Rochester reported in 2023 that significant further improvements in cell efficiency can be achieved by utilizing the Purcell effect.
Perovskite solar cells are also forecast to be extremely cheap to scale up, making them a very attractive option for commercialisation. So far most types of perovskite solar cells have not reached sufficient operational stability to be commercialised, although many research groups are investigating ways to solve this. The energy and environmental sustainability of perovskite solar cells and perovskite tandem cells has been shown to depend on their structure. Photonic front contacts for light management can improve the perovskite cells' performance via enhanced broadband absorption, while allowing better operational stability due to protection against harmful high-energy (above-visible) radiation. The inclusion of the toxic element lead in the most efficient perovskite solar cells is a potential problem for commercialisation.
Bifacial solar cells
With a transparent rear side, bifacial solar cells can absorb light from both the front and rear sides. Hence, they can produce more electricity than conventional monofacial solar cells. The first patent on bifacial solar cells was filed by Japanese researcher Hiroshi Mori in 1966. Russia is then said to have been the first to deploy bifacial solar cells, in its space program in the 1970s. In 1976, the Institute for Solar Energy of the Technical University of Madrid began a research program for the development of bifacial solar cells led by Prof. Antonio Luque. Based on 1977 US and Spanish patents by Luque, a practical bifacial cell was proposed with a front face as anode and a rear face as cathode; in previously reported proposals and attempts both faces were anodic and interconnection between cells was complicated and expensive. In 1980, Andrés Cuevas, a PhD student in Luque's team, demonstrated experimentally a 50% increase in output power of bifacial solar cells, relative to identically oriented and tilted monofacial ones, when a white background was provided. In 1981 the company Isofoton was founded in Málaga to produce the developed bifacial cells, thus becoming the first industrialization of this PV cell technology. With an initial production capacity of 300 kW/yr of bifacial solar cells, early landmarks of Isofoton's production were the 20 kWp power plant in San Agustín de Guadalix, built in 1986 for Iberdrola, and an off-grid installation, also of 20 kWp, completed by 1988 in the village of Noto Gouye Diama (Senegal), funded by the Spanish international aid and cooperation programs.
Due to the reduced manufacturing cost, companies have again started to produce commercial bifacial modules since 2010. By 2017, there were at least eight certified PV manufacturers providing bifacial modules in North America. The International Technology Roadmap for Photovoltaics (ITRPV) predicted that the global market share of bifacial technology will expand from less than 5% in 2016 to 30% in 2027.
Due to the significant interest in the bifacial technology, a recent study has investigated the performance and optimization of bifacial solar modules worldwide. The results indicate that, across the globe, ground-mounted bifacial modules can only offer ~10% gain in annual electricity yields compared to the monofacial counterparts for a ground albedo coefficient of 25% (typical for concrete and vegetation groundcovers). However, the gain can be increased to ~30% by elevating the module 1 m above the ground and enhancing the ground albedo coefficient to 50%. Sun et al. also derived a set of empirical equations that can optimize bifacial solar modules analytically. In addition, there is evidence that bifacial panels work better than traditional panels in snowy environments as bifacials on dual-axis trackers made 14% more electricity in a year than their monofacial counterparts and 40% during the peak winter months.
An online simulation tool is available to model the performance of bifacial modules in any arbitrary location across the entire world. It can also optimize bifacial modules as a function of tilt angle, azimuth angle, and elevation above the ground.
Intermediate band
Intermediate band photovoltaics in solar cell research provides methods for exceeding the Shockley–Queisser limit on the efficiency of a cell. It introduces an intermediate band (IB) energy level in between the valence and conduction bands. Theoretically, introducing an IB allows two photons with energy less than the bandgap to excite an electron from the valence band to the conduction band. This increases the induced photocurrent and thereby efficiency.
Luque and Marti first derived a theoretical limit for an IB device with one midgap energy level using detailed balance. They assumed no carriers were collected at the IB and that the device was under full concentration. They found the maximum efficiency to be 63.2%, for a bandgap of 1.95 eV with the IB 0.71 eV from either the valence or conduction band.
Under one sun illumination the limiting efficiency is 47%. Several means are under study to realize IB semiconductors with such optimum 3-bandgap configuration, namely via materials engineering (controlled inclusion of deep level impurities or highly-mismatched alloys) and nano-structuring (quantum-dots in host hetero-crystals).
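To make the Luque–Marti optimum concrete: the intermediate band splits the 1.95 eV host gap into sub-gaps of 0.71 eV and 1.24 eV, so two photons that are individually below the full gap can together pump one electron from the valence band to the conduction band. The sketch below only spells out that arithmetic using the numbers quoted above.

```python
# Two-step excitation through the intermediate band (IB): valence -> IB with
# one photon, IB -> conduction band with a second photon.
full_gap_ev = 1.95          # optimum host bandgap from detailed balance
ib_offset_ev = 0.71         # IB position measured from one band edge
sub_gaps = (ib_offset_ev, round(full_gap_ev - ib_offset_ev, 2))
print("sub-gap transitions (eV):", sub_gaps)                         # (0.71, 1.24)
print("together they bridge the full gap (eV):", round(sum(sub_gaps), 2))  # 1.95
```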
Liquid inks
In 2014, researchers at the California NanoSystems Institute discovered that using kesterite and perovskite improved electric power conversion efficiency for solar cells.
In December 2022, it was reported that MIT researchers had developed ultralight fabric solar cells. These cells offer a weight one-hundredth that of traditional panels while generating 18 times more power per kilogram. Thinner than a human hair, these cells can be laminated onto various surfaces, such as boat sails, tents, tarps, or drone wings, to extend their functionality. Using ink-based materials and scalable techniques, researchers coat the solar cell structure with printable electronic inks, completing the module with screen-printed electrodes. Tested on high-strength fabric, the cells produce 370 watts-per-kilogram, representing an improvement over conventional solar cells.
Upconversion and downconversion
Photon upconversion is the process of using two low-energy (e.g., infrared) photons to produce one higher energy photon; downconversion is the process of using one high energy photon (e.g., ultraviolet) to produce two lower energy photons. Either of these techniques could be used to produce higher efficiency solar cells by allowing solar photons to be more efficiently used. The difficulty, however, is that the conversion efficiency of existing phosphors exhibiting up- or down-conversion is low, and is typically narrow band.
One upconversion technique is to incorporate lanthanide-doped materials, taking advantage of their luminescence to convert infrared radiation to visible light. The upconversion process occurs when two infrared photons are absorbed by rare-earth ions to generate one (high-energy) absorbable photon. As an example, the energy transfer upconversion process (ETU) consists in successive transfer processes between excited ions in the near infrared. The upconverter material could be placed below the solar cell to absorb the infrared light that passes through the silicon. Useful ions are most commonly found in the trivalent state; Er3+ ions have been the most used. Er3+ ions absorb solar radiation around 1.54 μm. Two Er3+ ions that have absorbed this radiation can interact with each other through an upconversion process. The excited ion then emits light above the Si bandgap, which is absorbed by the solar cell and creates an additional electron–hole pair that can generate current. However, the increased efficiency was small. In addition, fluoroindate glasses have low phonon energy and have been proposed as a suitable matrix doped with Er3+ ions.
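A back-of-the-envelope calculation shows why this helps silicon: a single 1.54 μm photon carries less energy than the c-Si bandgap and would normally be lost, but the combined energy of two such photons is well above it. The snippet below uses only the wavelength quoted above and an approximate Si bandgap of 1.1 eV; it is an illustration, not data from the article.

```python
# Photon energy E = hc / wavelength, with hc ~ 1.2398 eV*um.
H_C_EV_UM = 1.2398
si_gap_ev = 1.1                     # approximate crystalline silicon bandgap

single = H_C_EV_UM / 1.54           # ~0.81 eV: below the Si gap, normally wasted
combined = 2 * single               # ~1.61 eV: above the Si gap, usable after upconversion
print(f"one 1.54 um photon:   {single:.2f} eV (Si gap {si_gap_ev} eV)")
print(f"two photons combined: {combined:.2f} eV")
```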
Light-absorbing dyes
Dye-sensitized solar cells (DSSCs) are made of low-cost materials and do not need elaborate manufacturing equipment, so they can be made in a DIY fashion. In bulk they should be significantly less expensive than older solid-state cell designs. DSSCs can be engineered into flexible sheets, and although their conversion efficiency is less than that of the best thin film cells, their price/performance ratio may be high enough to allow them to compete with fossil fuel electrical generation.
Typically a ruthenium metalorganic dye (Ru-centered) is used as a monolayer of light-absorbing material, which is adsorbed onto a thin film of titanium dioxide. The dye-sensitized solar cell depends on this mesoporous layer of nanoparticulate titanium dioxide (TiO2) to greatly amplify the surface area (200–300 m2/g, compared to approximately 10 m2/g for a flat single crystal), which allows for a greater number of dye molecules per solar cell area (which in turn increases the current). The photogenerated electrons from the light-absorbing dye are passed on to the n-type TiO2 and the holes are absorbed by an electrolyte on the other side of the dye. The circuit is completed by a redox couple in the electrolyte, which can be liquid or solid. This type of cell allows more flexible use of materials and is typically manufactured by screen printing or with ultrasonic nozzles, with the potential for lower processing costs than those used for bulk solar cells. However, the dyes in these cells also suffer from degradation under heat and UV light, and the cell casing is difficult to seal due to the solvents used in assembly. For this reason, researchers have developed solid-state dye-sensitized solar cells that use a solid electrolyte to avoid leakage. The first commercial shipment of DSSC solar modules occurred in July 2009 from G24i Innovations.
Quantum dots
Quantum dot solar cells (QDSCs) are based on the Gratzel cell, or dye-sensitized solar cell architecture, but employ low band gap semiconductor nanoparticles, fabricated with crystallite sizes small enough to form quantum dots (such as CdS, CdSe, PbS, etc.), instead of organic or organometallic dyes as light absorbers. Due to the toxicity associated with Cd and Pb based compounds there is also a series of "green" QD sensitizing materials in development (such as CuInS2, CuInSe2 and CuInSeS). The size quantization of QDs allows the band gap to be tuned by simply changing particle size. They also have high extinction coefficients and have shown the possibility of multiple exciton generation.
In a QDSC, a mesoporous layer of titanium dioxide nanoparticles forms the backbone of the cell, much like in a DSSC. This layer can then be made photoactive by coating with semiconductor quantum dots using chemical bath deposition, electrophoretic deposition or successive ionic layer adsorption and reaction. The electrical circuit is then completed through the use of a liquid or solid redox couple. The efficiency of QDSCs has increased to over 5% for both liquid-junction and solid-state cells, with a reported peak efficiency of 11.91%. In an effort to decrease production costs, the Prashant Kamat research group demonstrated a solar paint made with TiO2 and CdSe that can be applied using a one-step method to any conductive surface, with efficiencies over 1%. However, the absorption of quantum dots (QDs) in QDSCs is weak at room temperature. Plasmonic nanoparticles (e.g., nanostars) can be utilized to address the weak absorption of the QDs. Adding an external infrared pumping source to excite intraband and interband transitions of the QDs is another solution.
Organic/polymer solar cells
Organic solar cells and polymer solar cells are built from thin films (typically 100 nm) of organic semiconductors including polymers, such as polyphenylene vinylene and small-molecule compounds like copper phthalocyanine (a blue or green organic pigment) and carbon fullerenes and fullerene derivatives such as PCBM.
They can be processed from liquid solution, offering the possibility of a simple roll-to-roll printing process, potentially leading to inexpensive, large-scale production. In addition, these cells could be beneficial for some applications where mechanical flexibility and disposability are important. Current cell efficiencies are, however, very low, and practical devices are essentially non-existent.
Energy conversion efficiencies achieved to date using conductive polymers are very low compared to inorganic materials. However, Konarka Power Plastic reached an efficiency of 8.3%, and organic tandem cells in 2012 reached 11.1%.
The active region of an organic device consists of two materials, one electron donor and one electron acceptor. When a photon is converted into an electron hole pair, typically in the donor material, the charges tend to remain bound in the form of an exciton, separating when the exciton diffuses to the donor-acceptor interface, unlike most other solar cell types. The short exciton diffusion lengths of most polymer systems tend to limit the efficiency of such devices. Nanostructured interfaces, sometimes in the form of bulk heterojunctions, can improve performance.
In 2011, MIT and Michigan State researchers developed solar cells with a power efficiency close to 2% with a transparency to the human eye greater than 65%, achieved by selectively absorbing the ultraviolet and near-infrared parts of the spectrum with small-molecule compounds. Researchers at UCLA more recently developed an analogous polymer solar cell, following the same approach, that is 70% transparent and has a 4% power conversion efficiency. These lightweight, flexible cells can be produced in bulk at a low cost and could be used to create power generating windows.
In 2013, researchers announced polymer cells with some 3% efficiency. They used block copolymers, self-assembling organic materials that arrange themselves into distinct layers. The research focused on P3HT-b-PFTBT that separates into bands some 16 nanometers wide.
Adaptive cells
Adaptive cells change their absorption/reflection characteristics depending on environmental conditions. An adaptive material responds to the intensity and angle of incident light. At the part of the cell where the light is most intense, the cell surface changes from reflective to adaptive, allowing the light to penetrate the cell. The other parts of the cell remain reflective increasing the retention of the absorbed light within the cell.
In 2014, a system was developed that combined an adaptive surface with a glass substrate that redirects the absorbed light to a light absorber at the edges of the sheet. The system also includes an array of fixed lenses/mirrors to concentrate light onto the adaptive surface. As the day continues, the concentrated light moves along the surface of the cell. That surface switches from reflective to adaptive when the light is most concentrated and back to reflective after the light moves along.
Surface texturing
In recent years, researchers have been trying to reduce the price of solar cells while maximizing efficiency. The thin-film solar cell is a cost-effective second-generation solar cell with much reduced thickness, at the expense of light absorption efficiency. Efforts have been made to maximize light absorption efficiency at reduced thickness. Surface texturing is one of the techniques used to reduce optical losses and maximize the light absorbed. Currently, surface texturing techniques on silicon photovoltaics are drawing much attention. Surface texturing can be done in multiple ways. Etching a single-crystalline silicon substrate can produce randomly distributed square-based pyramids on the surface using anisotropic etchants. Recent studies show that c-Si wafers can be etched down to form nano-scale inverted pyramids. Multicrystalline silicon solar cells, due to poorer crystallographic quality, are less effective than single-crystal solar cells, but mc-Si solar cells are still widely used owing to fewer manufacturing difficulties. It is reported that multicrystalline solar cells can be surface-textured to yield solar energy conversion efficiency comparable to that of monocrystalline silicon cells, through isotropic etching or photolithography techniques. Unlike rays incident on a flat surface, light rays incident on a textured surface do not simply reflect back into the air; rather, because of the surface geometry, some rays are bounced back onto another part of the surface. This process significantly improves the light-to-electricity conversion efficiency, due to increased light absorption. This texture effect, as well as the interaction with other interfaces in the PV module, is a challenging optical simulation task. A particularly efficient method for modeling and optimization is the OPTOS formalism. In 2012, researchers at MIT reported that c-Si films textured with nanoscale inverted pyramids could achieve light absorption comparable to 30-times-thicker planar c-Si. In combination with an anti-reflective coating, the surface texturing technique can effectively trap light rays within a thin film silicon solar cell. Consequently, the required thickness for solar cells decreases with the increased absorption of light rays.
Encapsulation
Solar cells are commonly encapsulated in a transparent polymeric resin to protect the delicate solar cell regions from coming into contact with moisture, dirt, ice, and other conditions expected either during operation or when used outdoors. The encapsulants are commonly made from ethylene-vinyl acetate (EVA) or glass. Most encapsulants are uniform in structure and composition, which increases light collection owing to light trapping from total internal reflection of light within the resin. Research has been conducted into structuring the encapsulant to provide further collection of light. Such encapsulants have included roughened glass surfaces, diffractive elements, prism arrays, air prisms, v-grooves, diffuse elements, as well as multi-directional waveguide arrays. Prism arrays show an overall 5% increase in the total solar energy conversion. Arrays of vertically aligned broadband waveguides provide a 10% increase at normal incidence, as well as wide-angle collection enhancement of up to 4%, with optimized structures yielding up to a 20% increase in short circuit current. Active coatings that convert infrared light into visible light have shown a 30% increase. Nanoparticle coatings inducing plasmonic light scattering increase wide-angle conversion efficiency up to 3%. Optical structures have also been created in encapsulation materials to effectively "cloak" the metallic front contacts.
Autonomous maintenance
Novel self-cleaning mechanisms for solar panels are being developed. For instance, in 2019 it was shown that, using wet-chemically etched nanowires and a hydrophobic coating on the surface, water droplets could remove 98% of dust particles, which may be especially relevant for applications in the desert.
In March 2022, MIT researchers announced the development of a waterless cleaning system for solar panels and mirrors to address the issue of dust accumulation, which can reduce solar output by up to 30 percent in one month. This system utilizes electrostatic repulsion to detach dust particles from the panel's surface, eliminating the need for water or brushes. An electrical charge imparted to the dust particles by passing a simple electrode over the panel causes them to be repelled by a charge applied to the panel itself. The system can be automated using a basic electric motor and guide rails.
Manufacture
Solar cells share some of the same processing and manufacturing techniques as other semiconductor devices. However, the strict requirements for cleanliness and quality control of semiconductor fabrication are more relaxed for solar cells, lowering costs.
Polycrystalline silicon wafers are made by wire-sawing block-cast silicon ingots into 180 to 350 micrometer wafers. The wafers are usually lightly p-type-doped. A surface diffusion of n-type dopants is performed on the front side of the wafer. This forms a p–n junction a few hundred nanometers below the surface.
Anti-reflection coatings are then typically applied to increase the amount of light coupled into the solar cell. Silicon nitride has gradually replaced titanium dioxide as the preferred material, because of its excellent surface passivation qualities. It prevents carrier recombination at the cell surface. A layer several hundred nanometers thick is applied using plasma-enhanced chemical vapor deposition. Some solar cells have textured front surfaces that, like anti-reflection coatings, increase the amount of light reaching the wafer. Such surfaces were first applied to single-crystal silicon, followed by multicrystalline silicon somewhat later.
A full area metal contact is made on the back surface, and a grid-like metal contact made up of fine "fingers" and larger "bus bars" is screen-printed onto the front surface using a silver paste. This is an evolution of the so-called "wet" process for applying electrodes, first described in a US patent filed in 1981 by Bayer AG. The rear contact is formed by screen-printing a metal paste, typically aluminium. Usually this contact covers the entire rear, though some designs employ a grid pattern. The paste is then fired at several hundred degrees Celsius to form metal electrodes in ohmic contact with the silicon. Some companies use an additional electroplating step to increase efficiency. After the metal contacts are made, the solar cells are interconnected by flat wires or metal ribbons, and assembled into modules or "solar panels". Solar panels have a sheet of tempered glass on the front, and a polymer encapsulation on the back.
The type of manufacturing and recycling used partly determines how effective a solar cell is in decreasing emissions and delivering a positive environmental effect. Such differences in effectiveness could be quantified to guide production of the most suitable types of products for different purposes in different regions across time.
Manufacturers and certification
National Renewable Energy Laboratory tests and validates solar technologies. Three reliable groups certify solar equipment: UL and IEEE (both U.S. standards) and IEC.
The IEA's 2022 Special Report highlights China's dominance over the solar PV supply chain, with an investment exceeding USD 50 billion and the creation of around 300,000 jobs since 2011. China commands over 80% of all manufacturing stages for solar panels. This control has drastically cut costs but also led to issues like supply-demand imbalances and polysilicon production constraints. Nevertheless, China's strategic policies have reduced solar PV costs by more than 80%, increasing global affordability. In 2021, China's solar PV exports were over USD 30 billion.
Meeting global energy and climate targets necessitates a major expansion in solar PV manufacturing, aiming for over 630 GW by 2030 according to the IEA's "Roadmap to Net Zero Emissions by 2050". China's dominance, controlling nearly 95% of key solar PV components and 40% of the world's polysilicon production in Xinjiang, poses risks of supply shortages and cost surges. Demand for critical minerals such as silver may exceed 30% of 2020's global production by 2030.
In 2021, China's share of solar PV module production reached approximately 70%, an increase from 50% in 2010. Other key producers included Vietnam (5%), Malaysia (4%), Korea (4%), and Thailand (2%), with much of their production capacity developed by Chinese companies aimed at exports, notably to the United States.
China
As of September 2018, sixty percent of the world's solar photovoltaic modules were made in China. As of May 2018, the largest photovoltaic plant in the world is located in the Tengger desert in China. In 2018, China added more photovoltaic installed capacity (in GW) than the next 9 countries combined. In 2021, China's share of solar PV module production reached approximately 70%.
In the first half of 2023, China's production of PV modules exceeded 220 GW, marking an increase of over 62% compared to the same period in 2022. In 2022, China maintained its position as the world's largest PV module producer, holding a dominant market share of 77.8%.
Vietnam
In 2022, Vietnam was the second-largest PV module producer, only behind China, with its production capacity rising to 24.1 GW, marking a significant 47% increase from the 16.4 GW produced in 2021. Vietnam accounts for 6.4% of the world's photovoltaic production.
Malaysia
In 2022, Malaysia was the third-largest PV module producer, with a production capacity of 10.8 GW, accounting for 2.8% of global production. This placed it behind China, which dominated with 77.8%, and Vietnam, which contributed 6.4%.
United States
Solar energy production in the U.S. has doubled from 2013 to 2019. This was driven first by the falling price of quality silicon, and later simply by the globally plunging cost of photovoltaic modules. In 2018, the U.S. added 10.8 GW of installed solar photovoltaic energy, an increase of 21%.
Latin America
Latin America has emerged as a promising region for solar energy development in recent years, with over 10 GW of installations in 2020. The solar market in Latin America has been driven by abundant solar resources, falling costs, competitive auctions and growing electricity demand. Some of the leading countries for solar energy in Latin America are Brazil, Mexico, Chile and Argentina. However, the solar market in Latin America also faces some challenges, such as political instability, financing gaps and power transmission bottlenecks.
Middle East and Africa
The Middle East and Africa have also experienced significant growth in solar energy deployment in recent years, with over 8 GW of installations in 2020. The solar market in the Middle East and Africa has been driven by the low cost of solar generation, the diversification of energy sources, the fight against climate change and the push for rural electrification. Some of the notable countries for solar energy in the Middle East and Africa are Saudi Arabia, the United Arab Emirates, Egypt, Morocco and South Africa. However, the solar market in the Middle East and Africa also faces several obstacles, including social unrest, regulatory uncertainty and technical barriers.
Materials sourcing
Like many other energy generation technologies, the manufacture of solar cells, especially its rapid expansion, has many environmental and supply-chain implications. Global mining may adapt and potentially expand for sourcing the needed minerals which vary per type of solar cell. Recycling solar panels could be a source for materials that would otherwise need to be mined.
Disposal
Solar cells degrade over time and lose their efficiency. Solar cells in extreme climates, such as desert or polar, are more prone to degradation due to exposure to harsh UV light and snow loads respectively. Usually, solar panels are given a lifespan of 25–30 years before they get decommissioned.
The International Renewable Energy Agency estimated that the amount of solar panel electronic waste generated in 2016 was 43,500–250,000 metric tons. This number is estimated to increase substantially by 2030, reaching an estimated waste volume of 60–78 million metric tons in 2050.
Recycling
The most widely used solar cells on the market are crystalline solar cells. A product is truly recyclable if its materials can be harvested again. In the 2015 Paris Agreement, 195 countries agreed to reduce their carbon emissions by shifting their focus away from fossil fuels and towards renewable energy sources. Owing to this, solar power will be a major contributor to electricity generation all over the world, and there will consequently be a large volume of solar panels to be recycled at the end of their life cycle. Indeed, many researchers around the globe have voiced their concern about finding ways to reuse silicon cells after recycling.
Additionally, these cells contain hazardous elements/compounds, including lead (Pb), cadmium (Cd) or cadmium sulfide (CdS), selenium (Se), and barium (Ba) as dopants, aside from the valuable silicon (Si), aluminum (Al), silver (Ag), and copper (Cu). If not disposed of with the proper technique, the harmful elements/compounds can have severe harmful effects on human life and wildlife alike.
There are various ways c-Si can be recycled; mainly thermal and chemical separation methods are used. This happens in two stages:
PV solar cell separation: in thermal delamination, the ethylene vinyl acetate (EVA) is removed and materials such as glass, Tedlar®, aluminium frame, steel, copper and plastics are separated;
cleansing the surface of PV solar cells: unwanted layers (antireflection layer, metal coating and p–n semiconductor) are removed from the silicon solar cells separated from the PV modules; as a result, the silicon substrate, suitable for re-use, can be recovered.
The first solar panel recycling plant opened in Rousset, France in 2018. It was set to recycle 1300 tonnes of solar panel waste a year, and can increase its capacity to 4000 tonnes. If recycling is driven only by market-based prices, rather than also by environmental regulations, the economic incentives for recycling remain uncertain, and as of 2021 the environmental impact of the different types of recycling techniques that have been developed still needs to be quantified.
See also
Anomalous photovoltaic effect
Autonomous building
Black silicon
Electromotive force (Solar cell)
Energy development
Sustainable development
Flexible substrate
Green technology
Hot spot (photovoltaics)
Inkjet solar cell
List of types of solar cells
List of solar engines
Metallurgical grade silicon
Microgeneration
Nanoflake
Photovoltaics
Plasmonic solar cell
Printed electronics
Roll-to-roll processing
Shockley-Queisser limit
Solar cell research
Solar Energy Materials and Solar Cells (journal)
Solar module quality assurance
Solar roof
Solar shingles
Solar tracker
Spectrophotometry
Standardization#Environmental protection
Theory of solar cells
Thermophotovoltaics
Variable renewable energy
References
Bibliography
External links
PV Lighthouse Calculators and Resources for photovoltaic scientists and engineers
Photovoltaics CDROM online
Solar cell manufacturing techniques
Solar Energy Laboratory at University of Southampton
NASA's Photovoltaic Info
"Electric Energy From Sun Produced by Light Cell" Popular Mechanics, July 1931 article on various 1930s research on solar cells
Energy conversion
Solar cells
Energy harvesting
American inventions
Russian inventions
Physical chemistry
20th-century inventions | Solar cell | [
"Physics",
"Chemistry"
] | 13,926 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
2,352,949 | https://en.wikipedia.org/wiki/Joint%20constraints | Joint constraints are rotational constraints on the joints of an artificial system. They are used in an inverse kinematics chain, in fields including 3D animation or robotics. Joint constraints can be implemented in a number of ways, but the most common method is to limit rotation about the X, Y and Z axis independently. An elbow, for instance, could be represented by limiting rotation on X and Z axis to 0 degrees, and constraining the Y-axis rotation to 130 degrees.
To simulate joint constraints more accurately, dot products with an independent axis can be used to repulse the child bone's orientation from the unreachable axis. Limiting the orientation of the child bone to a border of vectors tangent to the surface of the joint, repulsing the child bone away from the border, can also be useful in the precise restriction of shoulder movement.
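One possible reading of the dot-product idea is sketched below: if the child bone's direction leans too far toward a restricted axis (its dot product with that axis exceeds a chosen limit), it is pushed back onto the boundary cone around that axis. The function and threshold are hypothetical illustrations, not taken from a specific implementation.

```python
import math

def limit_toward_axis(bone_dir, axis, max_dot):
    """Return a unit vector whose dot product with `axis` does not exceed `max_dot`.

    `bone_dir` and `axis` are assumed to be unit 3-vectors; the degenerate case
    of a bone exactly parallel to the axis is ignored in this sketch.
    """
    dot = sum(b * a for b, a in zip(bone_dir, axis))
    if dot <= max_dot:
        return list(bone_dir)                    # already inside the allowed region
    # Keep the component perpendicular to the axis, rescaled so the result lies
    # exactly on the boundary cone (dot product equal to max_dot).
    perp = [b - dot * a for b, a in zip(bone_dir, axis)]
    norm = math.sqrt(sum(p * p for p in perp)) or 1.0
    scale = math.sqrt(1.0 - max_dot * max_dot) / norm
    return [max_dot * a + scale * p for a, p in zip(axis, perp)]

print(limit_toward_axis([0.0, 0.9, 0.436], [0.0, 1.0, 0.0], max_dot=0.5))
# -> roughly [0.0, 0.5, 0.866], i.e. pushed back onto the 60-degree cone
```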
References
Computer graphics
3D computer graphics
Computational physics
Robot kinematics
Anatomical simulation | Joint constraints | [
"Physics",
"Engineering"
] | 188 | [
"Robotics engineering",
"Robot kinematics",
"Computational physics stubs",
"Computational physics"
] |
17,887,653 | https://en.wikipedia.org/wiki/Orthogonal%20array | In mathematics, an orthogonal array (more specifically, a fixed-level orthogonal array) is a "table" (array) whose entries come from a fixed finite set of symbols (for example, {1,2,...,v}), arranged in such a way that there is an integer t so that for every selection of t columns of the table, all ordered t-tuples of the symbols, formed by taking the entries in each row restricted to these columns, appear the same number of times. The number t is called the strength of the orthogonal array. Here are two examples:
The example at left is that of an orthogonal array with symbol set {1,2} and strength 2. Notice that the four ordered pairs (2-tuples) formed by the rows restricted to the first and third columns, namely (1,1), (2,1), (1,2) and (2,2), are all the possible ordered pairs of the two element set and each appears exactly once. The second and third columns would give, (1,1), (2,1), (2,2) and (1,2); again, all possible ordered pairs each appearing once. The same statement would hold had the first and second columns been used. This is thus an orthogonal array of strength two.
In the example on the right, the rows restricted to the first three columns contain the 8 possible ordered triples consisting of 0's and 1's, each appearing once. The same holds for any other choice of three columns. Thus this is an orthogonal array of strength 3.
A mixed-level orthogonal array is one in which each column may have a different number of symbols. An example is given below.
Orthogonal arrays generalize, in a tabular form, the idea of mutually orthogonal Latin squares. These arrays have many connections to other combinatorial designs and have applications in the statistical design of experiments, coding theory, cryptography and various types of software testing.
Definition
For t ≤ k, an orthogonal array of type (N, k, v, t) – an OA(N, k, v, t) for short – is an N × k array whose entries are chosen from a set X with v points (a v-set) such that in every subset of t columns of the array, every t-tuple of points of X is repeated the same number of times. The number of repeats is usually denoted λ.
In many applications these parameters are given the following names:
N is the number of experimental runs,
k is the number of factors,
v is the number of levels,
t is the strength, and
λ is the index.
The definition of strength leads to the parameter relation
N = λv^t.
An orthogonal array is simple if it does not contain any repeated rows. (Subarrays of t columns may have repeated rows, as in the OA(18, 7, 3, 2) example pictured in this section.)
An orthogonal array is linear if X is a finite field F_q of order q (q a prime power) and the rows of the array form a subspace of the vector space (F_q)^k. The right-hand example in the introduction is linear over the field F_2. Every linear orthogonal array is simple.
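A small brute-force checker makes the definition concrete: it examines every choice of t columns and verifies that each t-tuple of symbols occurs equally often, returning that common count λ. The code and its test array (the even-weight binary 4-tuples, a linear OA(8, 4, 2, 3) much like the strength-3 example described in the introduction) are illustrative additions, not part of the original article.

```python
from itertools import combinations, product

def strength_index(array, t, symbols):
    """Return the index lambda if `array` has strength t over `symbols`, else None."""
    lam = None
    for cols in combinations(range(len(array[0])), t):
        counts = {tup: 0 for tup in product(symbols, repeat=t)}
        for row in array:
            counts[tuple(row[c] for c in cols)] += 1
        if len(set(counts.values())) != 1:
            return None                      # some t-tuple is over/under-represented
        lam = next(iter(counts.values()))
    return lam                               # lambda = N / v**t

# All even-weight binary 4-tuples form a linear OA(8, 4, 2, 3) with index 1:
oa = [row for row in product([0, 1], repeat=4) if sum(row) % 2 == 0]
print(strength_index(oa, 3, [0, 1]))         # -> 1, consistent with N = 1 * 2**3
```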
In a mixed-level orthogonal array, the symbols in the columns may be chosen from different sets having different numbers of points, as in the following example:
  0 0 0 0 0
  1 1 1 1 0
  0 0 1 1 1
  1 1 0 0 1
  0 1 0 1 2
  1 0 1 0 2
  0 1 1 0 3
  1 0 0 1 3
This array has strength 2:
Any pair of the first four columns contains each of the ordered pairs (0, 0), (0, 1), (1, 0) and (1, 1) two times.
Columns 4 and 5 – or column 5 with any one of the other columns – contains each ordered pair (i, j) once, where i = 0 or 1 and j = 0, 1, 2, or 3.
It may thus be denoted OA(8, 5, 2^4 4^1, 2), as is discussed below. The expression 2^4 4^1 indicates that four factors have 2 levels and one has 4 levels.
As in this example, there is no single ``index" or repetition number λ in a mixed-level orthogonal array of strength t: Each subarray of t columns can have a different λ.
Terminology and notation
The terms symmetric and asymmetric are sometimes used for fixed-level and mixed-level. Here symmetry refers to the property that all factors have the same number of levels, not to the "shape" of the array: a symmetric orthogonal array is almost never a symmetric matrix.
The notation OA(N, k, v, t) is sometimes contracted so that one may, for example, write simply OA(k, v), as long as the text makes clear the unstated parameter values. In the other direction, it may be expanded for mixed-level arrays. Here one would write OA(N, k, v_1···v_k, t), where column i has v_i levels. This notation is usually shortened when values v are repeated, so that one writes OA(8, 5, 2^4 4^1, 2) for the example at the end of the last section, rather than OA(8, 5, 2·2·2·2·4, 2). In similar fashion, one may shorten OA(N, k, v, t) to OA(N, v^k, t) for fixed-level arrays.
This OA notation does not explicitly include the index λ, but λ can be recovered from the other parameters via the relation N = λv^t. This is effective when the parameters all have specific numerical values, but less so when a class of orthogonal arrays is intended. For example, when indicating the class of arrays having strength t = 2 and index λ = 1, the notation OA(N, k, v, 2) is insufficient to determine λ by itself. This is typically remedied by writing OA(v^2, k, v, 2) instead. While notations that explicitly include the parameter λ do not have this problem, they cannot easily be extended to denote mixed-level arrays.
Some authors define an OA(N, k, v, t) as being k × N rather than N × k. In such cases the strength of the array is defined in terms of a subset of t rows rather than columns.
Except for the prefix OA, the notation OA(N, k, v, t) is the same as that introduced by Rao. While this notation is very common, it is not universal. Hedayat, Sloane and Stufken recommend it as standard, but list eight alternatives found in the literature, and there are others.
Examples
An example of an OA(16, 5, 4, 2); a strength 2, 4-level design of index 1 with 16 runs:
An example of an OA(27, 5, 3, 2) (written as its transpose for ease of viewing):
This example has index λ = 3.
Trivial examples
An array consisting of all k-tuples of a v-set, arranged so that the k-tuples are rows, automatically ("trivially") has strength k, and so is an OA(v^k, k, v, k).
Any OA(N, k, v, k) would be considered trivial since such arrays are easily constructed by simply listing all the k-tuples of the v-set λ times.
Mutually orthogonal Latin squares
An OA(n^2, 3, n, 2) is equivalent to a Latin square of order n. For k ≤ n+1, an OA(n^2, k, n, 2) is equivalent to a set of k − 2 mutually orthogonal Latin squares of order n. Such index one, strength 2 orthogonal arrays are also known as Hyper-Graeco-Latin square designs in the statistical literature.
Let A be a strength 2, index 1 orthogonal array on an n-set of elements, identified with the set of natural numbers {1,...,n}. Choose and fix, in order, two columns of A, called the indexing columns. Because the strength is 2 and the index is 1, all ordered pairs (i, j) with 1 ≤ i, j ≤ n appear exactly once in the rows of the indexing columns. Here i and j will in turn index the rows and columns of an n×n square. Take any other column of A and fill the (i, j) cell of this square with the entry that is in this column of A and in the row of A whose indexing columns contain (i, j). The resulting square is a Latin square of order n. For example, consider this OA(9, 4, 3, 2):
By choosing columns 3 and 4 (in that order) as the indexing columns, the first column produces the Latin square
while the second column produces the Latin square
These two squares, moreover, are mutually orthogonal. In general, the Latin squares produced in this way from an orthogonal array will be orthogonal Latin squares, so the k − 2 columns other than the indexing columns will produce a set of k − 2 mutually orthogonal Latin squares.
This construction is completely reversible and so strength 2, index 1 orthogonal arrays can be constructed from sets of mutually orthogonal Latin squares.
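The construction can be sketched in a few lines of code. Since the article's own OA(9, 4, 3, 2) is not reproduced here, the sketch builds a standard one from the two classical mutually orthogonal Latin squares of order 3 and then extracts Latin squares exactly as described above (using the first two columns as the indexing columns); everything in it is an illustrative assumption rather than the article's example.

```python
n = 3
# Rows (i, j, (i + j) mod 3, (i + 2j) mod 3) form an OA(9, 4, 3, 2) of index 1.
oa = [(i, j, (i + j) % n, (i + 2 * j) % n) for i in range(n) for j in range(n)]

def extract_square(array, index_cols, value_col, n):
    """Fill cell (i, j) with the value_col entry of the row whose indexing columns hold (i, j)."""
    square = [[None] * n for _ in range(n)]
    for row in array:
        square[row[index_cols[0]]][row[index_cols[1]]] = row[value_col]
    return square

square1 = extract_square(oa, (0, 1), 2, n)
square2 = extract_square(oa, (0, 1), 3, n)
pairs = {(square1[i][j], square2[i][j]) for i in range(n) for j in range(n)}
print(square1)                  # a Latin square of order 3
print(square2)                  # another Latin square of order 3
print(len(pairs) == n * n)      # True: all 9 ordered pairs occur, so the squares are orthogonal
```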
Latin squares, Latin cubes and Latin hypercubes
Orthogonal arrays provide a uniform way to describe these diverse objects which are of interest in the statistical design of experiments.
Latin squares
As mentioned in the previous section, a Latin square of order n can be thought of as an OA(n^2, 3, n, 2). Actually, the orthogonal array can lead to six Latin squares since any ordered pair of distinct columns can be used as the indexing columns. However, these are all isotopic and are considered equivalent. For concreteness we shall always assume that the first two columns in their natural order are used as the indexing columns.
Latin cubes
In the statistics literature, a Latin cube is a three-dimensional n × n × n matrix consisting of n layers, each having n rows and n columns such that the n distinct elements which appear are repeated n^2 times and arranged so that in each layer parallel to each of the three pairs of opposite faces of the cube all the n distinct elements appear and each is repeated exactly n times in that layer.
Note that with this definition a layer of a Latin cube need not be a Latin square. In fact, no row, column or file (the cells of a particular position in the different layers) need be a permutation of the n symbols.
A Latin cube of order n is equivalent to an OA(n^3, 4, n, 2).
Two Latin cubes of order n are orthogonal if, among the n^3 pairs of elements chosen from corresponding cells of the two cubes, each distinct ordered pair of the elements occurs exactly n times. A set of k − 3 mutually orthogonal Latin cubes of order n is equivalent to an OA(n^3, k, n, 2). An example of a pair of mutually orthogonal Latin cubes of order three was given as the OA(27, 5, 3, 2) in the Examples section above.
Unlike the case with Latin squares, in which there are no constraints, the indexing columns of the orthogonal array representation of a Latin cube must be selected so as to form an OA(n^3, 3, n, 3).
Latin hypercubes
An m-dimensional Latin hypercube of order n of the rth class is an n × n × ... × n m-dimensional matrix having n^r distinct elements, each repeated n^(m − r) times, and such that each element occurs exactly n^(m − r − 1) times in each of its m sets of n parallel (m − 1)-dimensional linear subspaces (or "layers"). Two such Latin hypercubes of the same order n and class r with the property that, when one is superimposed on the other, every element of the one occurs exactly n^(m − 2r) times with every element of the other, are said to be orthogonal.
A set of k − m mutually orthogonal m-dimensional Latin hypercubes of order n is equivalent to an OA(n^m, k, n, 2), where the indexing columns form an OA(n^m, m, n, m).
History
The concepts of Latin squares and mutually orthogonal Latin squares were generalized to Latin cubes and hypercubes, and to orthogonal Latin cubes and hypercubes; these ideas were later extended to arrays of strength t. The present notion of orthogonal array as a generalization of these ideas is due to C. R. Rao, with his generalization to mixed-level arrays appearing in 1973.
Rao initially used the term "array" with no modifier, and defined it to mean simply a subset of all treatment combinations – a simple array. The possibility of non-simple arrays arose naturally when making treatment combinations the rows of a matrix. Hedayat, Sloane and Stufken credit K. Bush with the term "orthogonal array".
Other constructions
Hadamard matrices
There exists an OA(4λ, 4λ − 1, 2, 2) if and only if there exists a Hadamard matrix of order 4λ. To proceed in one direction, let H be a Hadamard matrix of order 4λ in standardized form (first row and column entries are all +1). Delete the first row and take the transpose to obtain the desired orthogonal array. The following example illustrates this. (The reverse construction is similar.)
The order 8 standardized Hadamard matrix below (±1 entries indicated only by sign),
produces the OA(8, 7, 2, 2):
Using columns 1, 2 and 4 as indexing columns, the remaining columns produce four mutually orthogonal Latin cubes of order 2.
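Since the Hadamard matrix and the resulting array are not reproduced above, the following sketch rebuilds them: it generates the order-8 Sylvester (standardized) Hadamard matrix, deletes the first row, transposes, and brute-force checks that the 8 × 7 result is an orthogonal array of strength 2 over the symbols {+1, −1}. This is an illustration of the construction, not the article's own figure.

```python
from itertools import combinations, product

# Sylvester doubling: H(1) -> H(2) -> H(4) -> H(8), standardized by construction.
H = [[1]]
for _ in range(3):
    H = [row + row for row in H] + [row + [-x for x in row] for row in H]

# Delete the first (all +1) row and transpose: an 8 x 7 array over {+1, -1}.
oa = [list(col) for col in zip(*H[1:])]

def has_strength_2(array, symbols=(1, -1)):
    for c1, c2 in combinations(range(len(array[0])), 2):
        counts = {p: 0 for p in product(symbols, repeat=2)}
        for row in array:
            counts[(row[c1], row[c2])] += 1
        if len(set(counts.values())) != 1:
            return False
    return True

print(len(oa), len(oa[0]), has_strength_2(oa))   # 8 7 True: an OA(8, 7, 2, 2) of index 2
```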
Codes
Let C ⊆ (F_q)^n be a linear code of dimension m with minimum distance d. Then C⊥ (the orthogonal complement of the vector subspace C) is a (linear) OA(q^(n−m), n, q, d − 1)
where λ = q^(n − m − d + 1).
Applications
Threshold schemes
Secret sharing (also called secret splitting) consists of methods for distributing a secret amongst a group of participants, each of whom is allocated a share of the secret. The secret can be reconstructed only when a sufficient number of shares, of possibly different types, are combined; individual shares are of no use on their own. A secret sharing scheme is perfect if every collection of participants that does not meet the criteria for obtaining the secret, has no additional knowledge of what the secret is than does an individual with no share.
In one type of secret sharing scheme there is one dealer and n players. The dealer gives shares of a secret to the players, but only when specific conditions are fulfilled will the players be able to reconstruct the secret. The dealer accomplishes this by giving each player a share in such a way that any group of t (for threshold) or more players can together reconstruct the secret but no group of fewer than t players can. Such a system is called a (t, n)-threshold scheme.
An OA(v^t, n+1, v, t) may be used to construct a perfect (t, n)-threshold scheme.
Let A be the orthogonal array. The first n columns will be used to provide shares to the players, while the last column represents the secret to be shared. If the dealer wishes to share a secret S, only the rows of A whose last entry is S are used in the scheme. The dealer randomly selects one of these rows, and hands out to player i the entry in this row in column i as shares.
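A minimal sketch of the idea for t = 2, using the linear OA(v^2, n+1, v, 2) whose row (a, b) is (a + 1·b, a + 2·b, ..., a + n·b, b) over a prime v: the first n entries are the shares and the last entry is the secret. Because the array has strength 2 and index 1, any two shares determine the row (and hence the secret) uniquely, while a single share reveals nothing. The scheme, names and numbers below are illustrative choices, not from the article; the reconstruction step needs Python 3.8+ for the modular inverse.

```python
import random

v, n = 7, 4                                   # v possible secrets, n players (v prime, n <= v)

def deal(secret):
    """Pick a random row of the OA whose last column equals `secret`; return the n shares."""
    a = random.randrange(v)
    return [(a + (i + 1) * secret) % v for i in range(n)]

def reconstruct(i, share_i, j, share_j):
    """Any two players i != j (1-based) recover the secret b from shares a + i*b and a + j*b."""
    return ((share_i - share_j) * pow(i - j, -1, v)) % v

shares = deal(secret=5)
print(reconstruct(1, shares[0], 3, shares[2]))   # -> 5
```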
Factorial designs
A factorial experiment is a statistically structured experiment in which several factors (watering levels, antibiotics, fertilizers, etc.) are applied to each experimental unit at finitely many levels, which may be quantitative or qualitative. In a full factorial experiment all combinations of levels of the factors need to be tested. In a fractional factorial design only a subset of treatment combinations are used.
An orthogonal array can be used to design a fractional factorial experiment. The columns represent the various factors and the entries are the levels at which the factors are observed. An experimental run is a row of the orthogonal array, that is, a specific combination of factor levels. The strength of the array determines the resolution of the fractional design. When using one of these designs, the treatment units and trial order should be randomized as much as the design allows. For example, one recommendation is that an appropriately sized orthogonal array be randomly selected from those available, and that the run order then be randomized.
Mixed-level designs occur naturally in the statistical setting.
Quality control
Orthogonal arrays played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry albeit with some reservations. Taguchi's catalog contains both fixed- and mixed-level arrays.
Testing
Orthogonal array testing is a black box testing technique which is a systematic, statistical way of software testing. It is used when the number of inputs to the system is relatively small, but too large to allow for exhaustive testing of every possible input to the systems. It is particularly effective in finding errors associated with faulty logic within computer software systems. Orthogonal arrays can be applied in user interface testing, system testing, regression testing and performance testing.
The permutations of factor levels comprising a single treatment are so chosen that their responses are uncorrelated and hence each treatment gives a unique piece of information. The net effect of organizing the experiment in such treatments is that the same piece of information is gathered in the minimum number of experiments.
See also
Combinatorial design
Latin hypercube sampling
Graeco-Latin squares
Notes
References
External links
Hyper-Graeco-Latin square designs
A SAS example using PROC FACTEX
Kuhfeld, Warren F. "Orthogonal Arrays". SAS Institute Inc. SAS provides a catalog of over 117,000 orthogonal arrays.
Combinatorics
Design of experiments
Latin squares
Combinatorial design | Orthogonal array | [
"Mathematics"
] | 3,947 | [
"Discrete mathematics",
"Recreational mathematics",
"Combinatorial design",
"Combinatorics",
"Latin squares"
] |
17,887,714 | https://en.wikipedia.org/wiki/Supersonic%20gas%20separation | Supersonic gas separation is a technology to remove one or several gaseous components out of a mixed gas (typically raw natural gas). The process condensates the target components by cooling the gas through expansion in a Laval nozzle and then separates the condensates from the dried gas through an integrated cyclonic gas/liquid separator. The separator is only using a part of the field pressure as energy and has technical and commercial advantages when compared to commonly used conventional technologies.
Background
Raw natural gas out of a well is usually not a salable product but a mix of various hydrocarbon gases with other gases, liquids and solid contaminants. This raw gas needs gas conditioning to get it ready for pipeline transport and processing in a gas processing plant to separate it into its components. Some of the common processing steps are CO2 removal, dehydration, LPG extraction and dew-pointing. Technologies used to achieve these steps are adsorption, absorption, membranes and low-temperature systems achieved by refrigeration or expansion through a Joule–Thomson valve or a turboexpander. If such expansion is done through a supersonic gas separator instead, mechanical, economic and operational advantages can frequently be gained, as detailed below.
The supersonic gas separator
A supersonic gas separator consists of several consecutive sections in tubular form, usually designed as flanged pieces of pipe.
The feed gas (consisting of at least two components) first enters a section with an arrangement of static blades or wings, which induce a fast swirl in the gas. Thereafter the gas stream flows through a Laval nozzle, where it accelerates to supersonic speeds and undergoes a deep pressure drop to about 30% of feed pressure. This is a near isentropic process and the corresponding temperature reduction leads to condensation of target components of the mixed feed gas, which form a fine mist. The droplets agglomerate to larger drops, and the swirl of the gas causes cyclonic separation.
The dry gas continues forward, while the liquid phase together with some slip gas (about 30% of the total stream) is separated by a concentric divider and exits the device as a separate stream. The final section consists of diffusers for both streams, where the gas is slowed down and about 80% of the feed pressure (depending on application) is recovered. This section might also include another set of static devices to undo the swirling motion.
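For a feel of the cooling involved, an ideal-gas, isentropic estimate with an assumed heat-capacity ratio and feed temperature (both illustrative values, not from the article) already shows a drop of several tens of kelvin when the pressure falls to about 30% of the feed pressure, which is what drives the condensation:

```python
# Isentropic ideal-gas relation: T2 = T1 * (P2/P1) ** ((gamma - 1) / gamma).
gamma = 1.3          # assumed heat-capacity ratio for a methane-rich gas
t_feed_k = 300.0     # assumed feed temperature, K
p_ratio = 0.30       # nozzle static pressure relative to feed pressure

t_after_k = t_feed_k * p_ratio ** ((gamma - 1.0) / gamma)
print(f"~{t_after_k:.0f} K after expansion (a drop of ~{t_feed_k - t_after_k:.0f} K)")
# roughly 227 K, i.e. a drop of about 73 K, cold enough for water and heavier
# hydrocarbons to condense into the fine mist described above
```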
The installation scheme
The supersonic separator requires a certain process scheme, which includes further auxiliary equipment and often forms a skid or processing block. The typical basic scheme for supersonic separation is an arrangement where the feed gas is pre-cooled in a heat exchanger by the dry stream of the separator unit.
The liquid phase from the supersonic separator goes into a 2-phase or 3-phase separator, where the slip gas is separated from water and/or from liquid hydrocarbons. The gaseous phase of this secondary separator joins the dry gas of the supersonic separator, the liquids go for transport, storage or further processing and the water for treatment and disposal.
Depending on the task at hand other schemes are possible and for certain cases have advantages. Those variations are very much part of the supersonic gas separation process to achieve thermodynamic efficiency and several of them are protected by patents.
Advantages and application
The supersonic gas separator recovers part of the pressure drop needed for cooling and as such has a higher efficiency than a JT valve in all conditions of operation.
The supersonic gas separator can in many cases have a 10–20% higher efficiency than a turboexpander.
The supersonic separator has a smaller footprint and a lower weight than a turboexpander or contactor columns. This is of particular advantage for platforms, FPSOs and crowded installations. Because it is completely static, it requires lower capital investment and lower operating expenditure, very little maintenance, and no (or greatly reduced) consumption of chemicals.
Because no operating or maintenance personnel are required, normally manned platforms might be left unmanned, with the associated large savings in capital and operating expenditure.
The fields of application developed commercially on an industrial scale to date are:
dehydration
dewpointing (water and/or hydrocarbons)
LPG extraction
Applications in the development stage for near term commercialization are:
CO2 and H2S bulk removal
Commercial realization
There are several patents on supersonic gas separation, relating to features of the device as well as methods.
The technology has been researched and proven in laboratory installations since about 1998; special HYSYS modules have been developed, as well as 3D computer models of the gas flow. The supersonic gas separation technology has since moved successfully into industrial applications (e.g. in Nigeria, Malaysia and Russia) for dehydration as well as for LPG extraction.
Consultancy, engineering and equipment for supersonic gas separation are being offered by ENGO Engineering Ltd. under the brand "3S". They are also provided by Twister BV, a Dutch firm affiliated with Royal Dutch Shell, under the brand "Twister Supersonic Separator".
References
Natural gas technology | Supersonic gas separation | [
"Chemistry"
] | 1,067 | [
"Natural gas technology"
] |
17,890,049 | https://en.wikipedia.org/wiki/Thiosemicarbazide | Thiosemicarbazide is the chemical compound with the formula H2NC(S)NHNH2. A white, odorless solid, it is related to thiourea (H2NC(S)NH2) by the insertion of an NH center. Thiosemicarbazides are commonly used as ligands for transition metals. Many substituted thiosemicarbazides are known; these feature an organic substituent in place of one or more H's of the parent molecule. 4-Methyl-3-thiosemicarbazide is a simple example.
According to X-ray crystallography, the CSN3 core of the molecule is planar as are the three H's nearest the thiocarbonyl group.
Reactions
Thiosemicarbazides are precursors to thiosemicarbazones. They are precursors to heterocycles. Formylation of thiosemicarbazide provides access to triazole.
References
Thiosemicarbazones
Functional groups | Thiosemicarbazide | [
"Chemistry"
] | 214 | [
"Functional groups",
"Thiosemicarbazones",
"Semicarbazides"
] |
17,890,418 | https://en.wikipedia.org/wiki/Protein%E2%80%93lipid%20interaction | Protein–lipid interaction is the influence of membrane proteins on the lipid physical state or vice versa.
The questions which are relevant to understanding the structure and function of the membrane are: 1) Do intrinsic membrane proteins bind tightly to lipids (see annular lipid shell), and what is the nature of the layer of lipids adjacent to the protein? 2) Do membrane proteins have long-range effects on the order or dynamics of membrane lipids? 3) How do the lipids influence the structure and/or function of membrane proteins? 4) How do peripheral membrane proteins which bind to the bilayer surface interact with lipids and influence their behavior?
Binding of lipids to intrinsic membrane proteins in the bilayer
A large research effort has been directed at determining whether proteins have binding sites that are specific for particular lipids, and whether protein–lipid complexes can be considered long-lived, on the order of the time required for the turnover of a typical enzyme, that is about 10⁻³ s. This is now known through the use of 2H-NMR, ESR, and fluorescence methods.
There are two approaches used to measure the relative affinity of lipids binding to specific membrane proteins. These involve the use of lipid analogues in reconstituted phospholipid vesicles containing the protein of interest:
1) Spin-labeled phospholipids are motionally restricted when they are adjacent to membrane proteins. The result is a broadened component in the ESR spectrum. The experimental spectrum can be analyzed as the sum of two components: a rapidly tumbling species in the "bulk" lipid phase with a sharp spectrum, and a motionally restricted component adjacent to the protein (a minimal numerical sketch of this decomposition follows the list below). Denaturation of the membrane protein causes further broadening of the ESR spin-label spectrum and sheds further light on membrane lipid–protein interactions.
2) Spin-labeled and brominated lipid derivatives are able to quench the intrinsic tryptophan fluorescence from membrane proteins. The efficiency of quenching depends on the distance between the lipid derivative and the fluorescent tryptophans.
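The following is a hedged numerical sketch of the two-component decomposition used in approach 1): an observed spin-label ESR spectrum is modelled as a weighted sum of a sharp "bulk lipid" reference spectrum and a broadened "boundary lipid" reference spectrum. All spectra, line shapes and weights here are synthetic placeholders, not data from any real experiment.

```python
import numpy as np

# Synthetic reference spectra on an arbitrary magnetic-field axis.
field = np.linspace(-1.0, 1.0, 500)
bulk = np.exp(-(field / 0.05) ** 2)          # rapidly tumbling, sharp component
boundary = np.exp(-(field / 0.30) ** 2)      # motionally restricted, broad component
observed = 0.7 * bulk + 0.3 * boundary       # pretend experimental spectrum

# Recover the two weights by linear least squares.
A = np.column_stack([bulk, boundary])
weights, *_ = np.linalg.lstsq(A, observed, rcond=None)
print(f"Estimated boundary-lipid fraction: {weights[1] / weights.sum():.2f}")
```

In a real analysis the reference spectra would come from control samples (pure lipid and fully protein-associated lipid), and the fitted boundary fraction is the quantity used to compare the relative affinities of different lipids.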
Perturbations of the lipid bilayer due to the presence of lateral membrane proteins
Most 2H-NMR experiments with deuterated phospholipids demonstrate that the presence of proteins has little effect on either the order parameter of the lipids in the bilayer or the lipid dynamics, as measured by relaxation times. The overall view resulting from NMR experiments is 1) that the exchange rate between boundary and free lipids is rapid (about 10⁷ s⁻¹), 2) that the order parameters of the bound lipid are barely affected by being adjacent to proteins, 3) that the dynamics of the acyl chain reorientations are slowed only slightly in the frequency range of 10⁹ s⁻¹, and 4) that the orientation and the dynamics of the polar headgroups are similarly unaffected in any substantial manner by being adjacent to transmembrane proteins. 13C-NMR spectra also give information on specific lipid–protein interactions in biomembranes.
Recent results using label-free optical methods such as dual polarisation interferometry, which measure the birefringence (or order) within lipid bilayers, have shown how peptide and protein interactions can influence bilayer order, specifically demonstrating the real-time association with the bilayer and the critical peptide concentration beyond which the peptides penetrate and disrupt the bilayer order.
Backbone and side chain dynamics of membrane proteins
Solid-state NMR techniques have the potential to yield detailed information about the dynamics of individual amino acid residues within a membrane protein. However, the techniques can require large amounts (100–200 mg) of isotopically labeled proteins and are most informative when applied to small proteins where spectroscopic assignments are possible.
Binding of peripheral membrane proteins to the lipid bilayer
Many peripheral membrane proteins bind to the membrane primarily through interactions with integral membrane proteins. But there is a diverse group of proteins which interact directly with the surface of the lipid bilayer. Some, such as myelin basic protein and spectrin, have mainly structural roles. A number of water-soluble proteins can bind to the bilayer surface transiently or under specific conditions.
Misfolding processes, typically exposing hydrophobic regions of proteins, often are associated with binding to lipid membranes and subsequent aggregation, for example, during neurodegenerative disorders, neuronal stress and apoptosis.
See also
Annular lipid shell
Collodion bag
Lipid
References
Further reading
Robert B. Gennis. "Biomembranes, Molecular structure and function". Springer Verlag, New York, 1989.
H L Scott, Jr & T J Coe. "A theoretical study of lipid-protein interactions in bilayers". Biophys J. 1983 June; 42(3): 219–224.
Lipid biochemistry
Protein biochemistry
Membrane biology | Protein–lipid interaction | [
"Chemistry",
"Biology"
] | 1,004 | [
"Lipid biochemistry",
"Membrane biology",
"Protein biochemistry",
"Molecular biology",
"Biochemistry"
] |
17,894,123 | https://en.wikipedia.org/wiki/Extinction%20cross | The extinction cross is an optical phenomenon that is seen when trying to extinguish a laser beam or non-planar white light using crossed polarizers. Ideally, crossed (90° rotated) polarizers block all light, because light that is polarized along the polarization axis of the first polarizer is perpendicular to the polarization axis of the second. When the beam is not perfectly collimated, however, a characteristic fringing pattern is produced.
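In the ideal case the behaviour of the polarizer pair follows Malus's law: for light that leaves the first polarizer with intensity $I_0$, the second transmits

$$I(\theta) = I_0 \cos^2\theta , \qquad I(90^\circ) = 0 .$$

For a converging or diverging beam, off-axis rays effectively see an angle between the two transmission axes that differs slightly from 90°, so extinction is no longer complete everywhere across the aperture; loosely speaking, this residual leakage is what organizes itself into the characteristic pattern.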
See also
Polarization (waves)
Further reading
Mineralogy notes 6 See "6.3.5. Review of Uniaxial Optical Properties"
Nikon MicroscopyU See Figure 1a
Polarization (waves)
Optical phenomena | Extinction cross | [
"Physics"
] | 141 | [
"Optical phenomena",
"Physical phenomena",
"Polarization (waves)",
"Astrophysics"
] |
17,894,942 | https://en.wikipedia.org/wiki/Ultra-large-scale%20systems | Ultra-large-scale system (ULSS) is a term used in fields including Computer Science, Software Engineering and Systems Engineering to refer to software intensive systems with unprecedented amounts of hardware, lines of source code, numbers of users, and volumes of data. The scale of these systems gives rise to many problems: they will be developed and used by many stakeholders across multiple organizations, often with conflicting purposes and needs; they will be constructed from heterogeneous parts with complex dependencies and emergent properties; they will be continuously evolving; and software, hardware and human failures will be the norm, not the exception. The term 'ultra-large-scale system' was introduced by Northrop and others to describe challenges facing the United States Department of Defense. The term has subsequently been used to discuss challenges in many areas, including the computerization of financial markets. The term "ultra-large-scale system" (ULSS) is sometimes used interchangeably with the term "large-scale complex IT system" (LSCITS). These two terms were introduced at similar times to describe similar problems, the former being coined in the United States and the latter in the United Kingdom.
Background
The term ultra-large-scale system was introduced in a 2006 report from the Software Engineering Institute at Carnegie Mellon University authored by Linda Northrop and colleagues. The report explained that software intensive systems are reaching unprecedented scales (by measures including lines of code; numbers of users and stakeholders; purposes the system is put to; amounts of data stored, accessed, manipulated, and refined; numbers of connections and interdependencies among components; and numbers of hardware elements). When systems become ultra-large-scale, traditional approaches to engineering and management will no longer be adequate. The report argues that the problem is no longer of engineering systems or system of systems, but of engineering "socio-technical ecosystems".
In 2013, Linda Northrop and her team gave a talk reviewing the outcome of the 2006 study against the reality of 2013. In summary, the talk concluded that (a) ULS systems are in the midst of society and the changes to the current social fabric and institutions are significant; (b) the original 2006 research team was probably too conservative in its report; (c) recent technologies have exacerbated the pace of scale growth; and (d) there are great opportunities.
At a similar time to the publication of the report by Northrop and others, a research and training initiative was being initiated in the UK on Large-scale Complex IT Systems. Many of the challenges recognized in this initiative were the same as, or were similar to those recognized as the challenges of ultra-large-scale systems. Greg Goth quotes Dave Cliff, director of the UK initiative as saying "The ULSS proposal and the LSCITS proposal were written entirely independently, yet we came to very similar conclusions about what needs to be done and about how to do it". A difference pointed out by Ian Sommerville is that the UK initiative began with a five to ten year vision, while that of Northrop and her co-authors was much longer term. This seems to have led to there being two slightly different perspectives on ultra-large-scale systems. For example, Richard Gabriel's perspective is that ultra-large-scale systems are desirable but currently impossible to build due to limitations in the fields of software design and systems engineering. On the other hand, Sommerville's perspective is that ultra-large-scale systems are already emerging (for example in air traffic control), the key problem being not how to achieve them but how to ensure they are adequately engineered.
Characteristics of an ultra-large-scale system
Ultra-large-scale systems hold the characteristics of systems of systems (systems that have: operationally independent sub-systems; managerially independent components and sub-systems; evolutionary development; emergent behavior; and geographic distribution). In addition to these, the Northrop report argues that a ULSS will:
Have decentralized data, development, evolution and operational control
Address inherently conflicting, unknowable, and diverse requirements
Evolve continuously while it is operating, with different capabilities being deployed and removed
Contain heterogeneous, inconsistent and changing elements
Erode the people system boundary. People will not just be users, but elements of the system and affecting its overall emergent behavior.
Encounter failure as the norm, rather than the exception, with it being extremely unlikely that all components are functioning at any one time
Require new paradigms for acquisition and policy, and new methods for control
The Northrop report states that "the sheer scale of ULS systems will change everything. ULS systems will necessarily be decentralized in a variety of ways, developed and used by a wide variety of stakeholders with conflicting needs, evolving continuously, and constructed from heterogeneous parts. People will not just be users of a ULS system; they will be elements of the system. The realities of software and hardware failures will be fundamentally integrated into the design and operation of ULS systems. The acquisition of a ULS system will be simultaneous with its operation and will require new methods for control. In ULS systems, these characteristics will dominate. Consequently, ULS systems will place unprecedented demands on software acquisition, production, deployment, management, documentation, usage, and evolution practices."
Domains
The term ultra-large-scale system was introduced by Northrop and others to discuss challenges faced by the United States Department of Defense in engineering software intensive systems. In 2008 Greg Goth wrote that although Northrop's report focused on the US military's future requirements, "its description of how the fundamental principles of software design will change in a global economy... is finding wide appeal". The term is now used to discuss problems in several domains.
Defense
The Northrop report argued that "the U.S. Department of Defense (DoD) has a goal of information dominance... this goal depends on increasingly complex systems characterized by thousands of platforms, sensors, decision nodes, weapons, and warfighters connected through heterogeneous wired and wireless networks....These systems will push far beyond the size of today's systems by every measure... They will be ultra-large-scale systems."
Financial trading
Following the 2010 Flash Crash, Cliff and Northrop have argued "The very high degree of interconnectedness in the global markets means that entire trading systems, implemented and managed separately by independent organizations, can rightfully be considered as significant constituent entities in the larger global super-system.... The sheer number of human agents and computer systems connected within the global financial-markets system-of-systems is so large that it is an instance of an ultra-large-scale system, and that largeness-of-scale has significant effects on the nature of the system".
Healthcare
Kevin Sullivan has stated that the US healthcare system is "clearly an ultra-large-scale system" and that building national scale cyberinfrastructure for healthcare "demands not just a rigorous, modern software and systems engineering effort, but an approach at the cutting edge of our understanding of information processing systems and their development and deployment in complex socio-technical environments".
Others
Other domains said to be seeing the rise of ultra-large-scale systems include government, transport systems (for example air traffic control systems), energy distribution systems (for example smart grids) and large enterprises.
Research
Fundamental gaps in our current understanding of software and software development at the scale of ULS systems present profound impediments to the technically and economically effective achievement of significant gains in core system functionality. These gaps are strategic, not tactical. They are unlikely to be addressed adequately by incremental research within established categories. Rather, we require a broad new conception of both the nature of such systems and new ideas for how to develop them. We will need to look at them differently, not just as systems or systems of systems, but as socio-technical ecosystems. We will face fundamental challenges in the design and evolution, orchestration and control, and monitoring and assessment of ULS systems. These challenges require breakthrough research.
In the United States
The Northrop report proposed that a portfolio of interdisciplinary research be developed, following a ULS systems research agenda that highlights the following areas:
Human interaction – People are key participants in ULS systems. Many problems in complex systems today stem from failures at the individual and organizational level. Understanding ULS system behavior will depend on the view that humans are elements of a socially constituted computational process. This research involves anthropologists, sociologists, and social scientists conducting detailed socio-technical analyses of user interactions in the field, with the goal of understanding how to construct and evolve such socio-technical systems effectively.
Computational emergence – ULS systems must satisfy the needs of participants at multiple levels of an organization. These participants will often behave opportunistically to meet their own objectives. Some aspects of ULS systems will be "programmed" by properly incentivizing and constraining behavior rather than by explicitly prescribing. This research area explores the use of methods and tools based on economics and game theory (for example, mechanism design) to ensure globally optimal ULS system behavior by exploiting the strategic self-interests of the system's constituencies. This research area also includes exploring metaheuristics and digital evolution to augment the cognitive limits of human designers, so they can manage ongoing ULS system adaptation more effectively.
Design – Current design theory, methods, notations, tools, and practices and the acquisition methods that support them are inadequate to design ULS systems effectively. This research area broadens the traditional technology-centric definition of design to include people and organizations; social, cognitive, and economic considerations; and design structures such as design rules and government policies. It involves research in support of designing ULS systems from all of these points of view and at many levels of abstraction, from the hardware to the software to the people and organizations in which they work.
Computational engineering – New approaches will be required to enable intellectual control at an entirely new level of scope and scale for system analysis, design, and operation. ULS systems will be defined in many languages, each with its own abstractions and semantic structures. This research area focuses on evolving the expressiveness of representations to accommodate this semantic diversity. Because the complexity of ULS systems will challenge human comprehension, this area also focuses on providing automated support for computing the behavior of components and their compositions in systems and for maintaining desired properties as ULS systems evolve.
Adaptive system infrastructure – ULS systems require an infrastructure that permits organizations in distributed locations to work in parallel to develop, select, deploy, and evolve system components. This research area investigates integrated development environments and runtime platforms that support the decentralized nature of ULS systems. This research also focuses on technologies, methods, and theories that will enable ULS systems to be developed in their deployment environments.
Adaptable and predictable system quality – ULS systems will be long-running and must operate robustly in environments fraught with failures, overloads, and cyberattacks. These systems must maintain robustness in the presence of adaptations that are not centrally controlled or authorized.
Managing traditional qualities such as security, performance, reliability, and usability is necessary but not sufficient to meet the challenges of ULS systems. This research area focuses on how to maintain quality in a ULS system in the face of continuous change, ongoing failures, and attacks. It also includes identifying, predicting, and controlling new indicators of system health (akin to the U. S. gross domestic product) that are needed because of the scale of ULS systems.
Policy, acquisition, and management – Policy and management frameworks for ULS systems must address organizational, technical, and operational policies at all levels. Rules and policies must be developed and automated to enable fast and effective local action while preserving global capabilities. This research area focuses on transforming acquisition policies and processes to accommodate the rapid and continuous evolution of ULS systems by treating suppliers and supply chains as intrinsic and essential components of a ULS system.
The proposed research does not supplant current, important software research but rather significantly expands its horizons. Moreover, because it is focused on systems of the future, the SEI team purposely avoided couching descriptions in terms of today's technology. The envisioned outcome of the proposed research is a spectrum of technologies and methods for developing these systems of the future, with national-security, economic, and societal benefits that extend far beyond ULS systems themselves.
In the United Kingdom
The UK's research programme in Large-scale Complex IT Systems has been concerned with issues around ULSS development and considers that an LSCITS (Large-scale complex IT system) shares many of the characteristics of a ULSS.
In China
The National Natural Science Foundation of China has outlined a five-year project for researchers to study the assembly of ultra-large spacecraft. Although vague, the project would have applications for potential megaprojects, including colossal space-based solar power stations. Work on an Ultra-Large Aperture On-Orbit Assembly Project under the Chinese Academy of Sciences (CAS) and with support from the Chinese Ministry of Science and Technology is already underway.
See also
Complex adaptive system
Emergence
IT portfolio management
Operator overloading
Self-organization
Sociotechnical systems
Software architecture
Systems design
Systems theory
References
Further reading
Peter Van Roy (2008). "The Challenges and Opportunities of Multiple Processors: Why Multi-Core Processors are Easy and Internet is Hard" – via Lambda the Ultimate. Discussion paper that touches on topics important in ULS research.
Chapters from: Three foundational papers on capability-based market-oriented computing (concepts that are the subject of some ULS Systems research), by Mark S. Miller and K. Eric Drexler – via The Agoric Papers — (archived index):
Computer engineering
Computer systems
Electronic design automation
Military acquisition
Systems analysis
Systems engineering
Systems theory | Ultra-large-scale systems | [
"Technology",
"Engineering"
] | 2,827 | [
"Systems engineering",
"Computer engineering",
"Computer systems",
"Computer science",
"Electrical engineering",
"Computers"
] |
5,832,894 | https://en.wikipedia.org/wiki/Lightning%20detector | A lightning detector is a device that detects lightning produced by thunderstorms. There are three primary types of detectors: ground-based systems using multiple antennas, mobile systems using a direction and a sense antenna in the same location (often aboard an aircraft), and space-based systems. The first such device was invented in 1894 by Alexander Stepanovich Popov. It was also the first radio receiver in the world.
Ground-based and mobile detectors calculate the direction and severity of lightning from the current location using radio direction-finding techniques along with an analysis of the characteristic frequencies emitted by lightning. Ground-based systems can use triangulation from multiple locations to determine distance, while mobile systems can estimate distance using signal frequency and attenuation. Space-based detectors on satellites can be used to locate lightning range, bearing and intensity by direct observation.
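As a rough illustration of the multi-station approach, the hedged Python sketch below locates a strike in two dimensions from arrival-time differences at four synchronized stations. The station layout, timing jitter and brute-force grid search are illustrative simplifications, not how any operational network is implemented.

```python
import numpy as np

# Time-of-arrival lightning location sketch: given the arrival times of one
# stroke at several time-synchronized ground stations, find the point whose
# predicted arrival-time differences best match the measurements.
C = 3.0e5  # propagation speed in km/s (approximately the speed of light)

stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])  # km
true_source = np.array([62.0, 35.0])                                           # km

rng = np.random.default_rng(0)
arrival = np.linalg.norm(stations - true_source, axis=1) / C
arrival += rng.normal(scale=1e-7, size=arrival.shape)   # ~0.1 microsecond jitter
measured_tdoa = arrival - arrival[0]                     # differences vs. station 0

# Evaluate candidate source positions on a grid and keep the best fit.
xs, ys = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))
candidates = np.stack([xs.ravel(), ys.ravel()], axis=1)
dists = np.linalg.norm(candidates[:, None, :] - stations[None, :, :], axis=2)
predicted_tdoa = (dists - dists[:, [0]]) / C
residual = np.sum((predicted_tdoa - measured_tdoa) ** 2, axis=1)
print("Estimated strike position (km):", candidates[np.argmin(residual)])
```

Real networks solve the equivalent hyperbolic equations with far better timing (GPS-disciplined clocks) and more stations, which is what makes kilometre-scale location accuracy possible.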
Ground-based lightning detector networks are used by meteorological services like the National Weather Service in the United States, the Meteorological Service of Canada, the European Cooperation for Lightning Detection (EUCLID), the Institute for Ubiquitous Meteorology (Ubimet) and by other organizations like electrical utilities and forest fire prevention services.
Limitations
Each system used for lightning detection has its own limitations. These include:
A single ground-based lightning network must be able to detect a flash with at least three antennas to locate it with an acceptable margin of error. This often leads to the rejection of cloud-to-cloud lightning, as one antenna might detect the position of the flash on the starting cloud and the other antenna the receiving one. As a result, ground-based networks have a tendency to underestimate the number of flashes, especially at the beginning of storms where cloud-to-cloud lightning is prevalent.
Ground-based systems that use multiple locations and time-of-flight detection methods must have a central device to collect strike and timing data to calculate location. In addition, each detection station must have a precision timing source that is used in the calculation.
Since they use attenuation rather than triangulation, mobile detectors sometimes mistakenly indicate a weak lightning flash nearby as a strong one further away, or vice versa.
Space-based lightning networks suffer from neither of these limitations, but the information provided by them is often several minutes old by the time it is widely available, making it of limited use for real-time applications such as air navigation.
Lightning detectors vs. weather radar
Lightning detectors and weather radar work together to detect storms. Lightning detectors indicate electrical activity, while weather radar indicates precipitation. Both phenomena are associated with thunderstorms and can help indicate storm strength.
(Figure: stages of thunderstorm development – air moves upward due to instability; condensation occurs and radar detects echoes aloft (colored areas); eventually the raindrops become too heavy for the updraft to sustain and fall toward the ground.)
The cloud must develop to a certain vertical extent before lightning is produced, so generally, weather radar will indicate a developing storm before a lightning detector does. It is not always clear from early returns if a shower cloud will develop into a thunderstorm, and weather radar also sometimes suffers from a masking effect by attenuation, where precipitation close to the radar can hide (perhaps more intense) precipitation farther away. Lightning detectors do not suffer from a masking effect and can provide confirmation when a shower cloud has evolved into a thunderstorm.
Lightning may also be located outside the precipitation recorded by radar. The second image shows that this happens when strikes originate in the anvil of the thundercloud (top part blown ahead of the cumulonimbus cloud by upper winds) or on the outside edge of the rain shaft. In both cases, there is still an area of radar echoes somewhere nearby.
Aviation use
Large airliners are more likely to use weather radar than lightning detectors, since weather radar can detect smaller storms that also cause turbulence; however, modern avionics systems often include lightning detection as well, for additional safety.
For smaller aircraft, especially in general aviation, there are two main brands of lightning detectors (often referred to as sferics devices, short for radio atmospherics): the Stormscope, produced originally by Ryan (later B.F. Goodrich) and currently by L-3 Communications, and the Strikefinder, produced by Insight. The Strikefinder can detect and properly display IC (intracloud) and CG (cloud-to-ground) strikes, and can differentiate between real strikes and signal bounces reflected off the ionosphere. Lightning detectors are inexpensive and lightweight, making them attractive to owners of light aircraft (particularly single-engine aircraft, where the aircraft nose is not available for installation of a radome).
Professional-quality portable lightning detectors
Inexpensive portable lightning detectors as well as other single sensor lightning mappers, such as those used on aircraft, have limitations including detection of false signals and poor sensitivity, particularly for intracloud (IC) lightning. Professional-quality portable lightning detectors improve performance in these areas by several techniques which facilitate each other, thus magnifying their effects:
False signal elimination: A lightning discharge generates both a radio frequency (RF) electromagnetic signal – commonly experienced as "static" on an AM radio – and very short duration light pulses, comprising the visible "flash". A lightning detector that works by sensing just one of these signals may misinterpret signals coming from sources other than lightning, giving a false alarm. Specifically, RF-based detectors may misinterpret RF noise, also known as RF Interference or RFI. Such signals are generated by many common environmental sources, such as auto ignitions, fluorescent lights, TV sets, light switches, electric motors, and high voltage wires. Likewise, light-flash-based detectors may misinterpret flickering light generated in the environment, such as reflections from windows, sunlight through tree leaves, passing cars, TV sets, and fluorescent lights.
However, since RF signals and light pulses rarely occur simultaneously except when produced by lightning, RF sensors and light pulse sensors can usefully be connected in a "coincidence circuit" which requires both kinds of signals simultaneously in order to produce an output. If such a system is pointed toward a cloud and lightning occurs in that cloud, both signals will be received; the coincidence circuit will produce an output; and the user can be sure the cause was lightning.
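A minimal sketch of this coincidence logic, with purely illustrative timestamps and a hypothetical 2 ms window, is shown below; it is not the specification of any real detector.

```python
# Accept an event as lightning only when an RF "sferic" and an optical flash
# are detected within a short window of one another.
COINCIDENCE_WINDOW = 0.002  # seconds (assumed value)

rf_events = [0.104, 0.731, 1.250, 2.017]      # e.g. 0.731 might be ignition noise
optical_events = [0.1045, 1.2504, 1.900]      # e.g. 1.900 might be a glint of light

lightning = [t for t in rf_events
             if any(abs(t - o) <= COINCIDENCE_WINDOW for o in optical_events)]
print("Coincident (likely lightning) events:", lightning)  # -> [0.104, 1.25]
```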
When a lightning discharge occurs within a cloud at night, the entire cloud appears to illuminate. In daylight these intracloud flashes are rarely visible to the human eye; nevertheless, optical sensors can detect them. In early missions, astronauts used optical sensors to detect lightning in bright, sunlit clouds far below. This application led to the development of the dual signal portable lightning detector which utilizes light flashes as well as the "sferics" signals detected by previous devices.
Improved Sensitivity: In the past, lightning detectors, both inexpensive portable ones for use on the ground and expensive aircraft systems, detected low frequency radiation because at low frequencies the signals generated by cloud-to-ground (CG) lightning are stronger (have higher amplitude) and thus are easier to detect. However, RF noise is also stronger at low frequencies. To minimize RF noise reception, low-frequency sensors are operated at low sensitivity (signal reception threshold) and thus do not detect less intense lightning signals. This reduces the ability to detect lightning at longer distances since signal intensity decreases with the square of distance. It also reduces detection of intracloud (IC) flashes which generally are weaker than CG flashes.
Enhanced Intracloud Lightning Detection: The addition of an optical sensor and coincidence circuit not only eliminates false alarms caused by RF noise; it also allows the RF sensor to be operated at higher sensitivity and at the higher frequencies characteristic of IC lightning, enabling the weaker high-frequency components of IC signals, as well as more distant flashes, to be detected.
The improvements described above significantly extend the detector's utility in many areas:
Early warning: Detection of IC flashes is important because they typically occur from 5 to 30 minutes before CG flashes and so can provide earlier warning of developing thunderstorms, greatly enhancing the effectiveness of the detector in personal-safety and storm-spotting applications compared to a CG-only detector. Increased sensitivity also provides warning of already-developed storms which are more distant but may be moving toward the user.
Storm location: Even in daylight, "storm chasers" can use directional optical detectors that can be pointed at an individual cloud to distinguish thunderclouds at a distance. This is particularly important for identifying the strongest thunderstorms which produce tornadoes, since such storms produce higher flash rates with more high frequency radiation than weaker non-tornadic storms.
Microburst prediction: IC flash detection also provides a method for predicting microbursts. The updraft in convective cells starts to become electrified when it reaches altitudes sufficiently cold so that mixed phase hydrometeors (water and ice particles) can exist in the same volume. Electrification occurs due to collisions between ice particles and water drops or water coated ice particles. The lighter ice particles (snow) are charged positively and carried to the upper portion of the cloud leaving behind the negatively charged water drops in the central part of the cloud. These two charge centers create an electric field leading to lightning formation. The updraft continues until all the liquid water is converted to ice, which releases latent heat driving the updraft. When all the water is converted, the updraft collapses rapidly as does the lightning rate. Thus the increase in lightning rate to a large value, mostly due to IC discharges, followed by a rapid dropoff in rate provides a characteristic signal of the collapse of the updraft which carries particles downward in a downburst. When the ice particles reach warmer temperatures near cloudbase they melt causing atmospheric cooling; likewise, the water drops evaporate, also causing cooling. This cooling increases air density which is the driving force for microbursts. The cool air in "gust fronts" often experienced near thunderstorms is caused by this mechanism.
Storm identification/tracking: Some thunderstorms, identified by IC detection and observation, make no CG flashes and would not be detected with a CG-sensing system. IC flashes are also many times as frequent as CG flashes, so they provide a more robust signal. The relatively high density (number per unit area) of IC flashes allows convective cells to be identified when mapping lightning, whereas CG flashes are too few and far between to identify cells, which typically are about 5 km in diameter. In the late stages of a storm, the CG flash activity subsides and the storm may appear to have ended, but generally there is still IC activity going on in the residual mid-altitude and higher cirrus anvil clouds, so the potential for CG lightning still exists.
Storm intensity quantification: Another advantage of IC detection is that the flash rate (number per minute) is proportional to the 5th power of the convective velocity of the updrafts in the thundercloud. This non-linear response means that a small change in cloud height, hardly observable on radar, would be accompanied by a large change in flash rate. For example, a hardly noticeable 10% increase in cloud height (a measure of storm severity) would produce roughly a 60% change in total flash rate, which is easily observed (a quick numerical check follows this list). "Total lightning" comprises both the generally invisible (in daylight) IC flashes that stay within the cloud and the generally visible CG flashes that can be seen extending from cloud base to ground. Because most of the total lightning is from IC flashes, this ability to quantify storm intensity comes mostly through detection of IC discharges. Lightning detectors that sense only low-frequency energy detect only IC flashes that are nearby, so they are relatively inefficient for predicting microbursts and quantifying convective intensity.
Tornado Prediction: Severe storms that produce tornadoes are known to have very high lightning rates and most lightning from the deepest convective clouds is IC, therefore the ability to detect IC lightning provides a method for identifying clouds with high tornado potential.
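As a quick numerical check of the figures quoted under storm intensity quantification above: if the total flash rate scales as the fifth power of the convective updraft velocity, then a 10% increase gives

$$(1.10)^5 \approx 1.61 ,$$

i.e. roughly a 60% increase in flash rate, consistent with the numbers stated in that item.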
Lightning range estimation
When an RF lightning signal is detected at a single location, one can determine its direction using a crossed-loop magnetic direction finder, but it is difficult to determine its distance. Attempts have been made using the amplitude of the signal, but this does not work very well because lightning signals vary greatly in their intensity. Thus, when amplitude is used for distance estimation, a strong flash may appear to be nearby while a weaker signal from the same flash – or from a weaker flash in the same storm cell – appears to be farther away. Measuring the ionization in the air can improve the accuracy of predicting where lightning will strike within a mile radius.
To understand this aspect of lightning detection, one needs to know that a lightning 'flash' generally consists of several strokes; a typical CG flash has 3 to 6 strokes, but some flashes can have more than 10 strokes.
The initial stroke leaves an ionized path from the cloud to ground and subsequent 'return strokes', separated by an interval of about 50 milliseconds, go up that channel. The complete discharge sequence is typically about ½ second in duration while the duration of the individual strokes varies greatly between 100 nanoseconds and a few tens of microseconds. The strokes in a CG flash can be seen at night as a non-periodic sequence of illuminations of the lightning channel. This can also be heard on sophisticated lightning detectors as individual staccato sounds for each stroke, forming a distinctive pattern.
Single-sensor lightning detectors have been used on aircraft, and while the lightning direction can be determined from a crossed-loop sensor, the distance cannot be determined reliably because these systems use amplitude to estimate distance and the signal amplitude varies between the individual strokes described above. Because the strokes have different amplitudes, these detectors display a line of dots, like spokes on a wheel, extending out radially from the hub in the general direction of the lightning source. The dots are at different distances along the line because the strokes have different intensities. These characteristic lines of dots in such sensor displays are called "radial spread".
These sensors operate in the very low frequency (VLF) and low frequency (LF) range (below 300 kHz), which provides the strongest lightning signals: those generated by return strokes from the ground. But unless the sensor is close to the flash, they do not pick up the weaker signals from IC discharges, which have a significant amount of energy in the high frequency (HF) range (up to 30 MHz).
Another issue with VLF lightning receivers is that they pick up reflections from the ionosphere, so they sometimes cannot tell the difference in distance between lightning 100 km away and several hundred km away. At distances of several hundred km, the reflected signal (termed the "sky wave") is stronger than the direct signal (termed the "ground wave").
The Earth–ionosphere waveguide traps electromagnetic VLF and ELF waves. Electromagnetic pulses transmitted by lightning strikes propagate within that waveguide. The waveguide is dispersive, which means that the group velocity of these pulses depends on frequency. The difference in the group time delay of a lightning pulse at adjacent frequencies is proportional to the distance between transmitter and receiver. Together with the direction-finding method, this allows a single station to locate lightning strikes at distances of up to 10,000 km from their origin. Moreover, the eigenfrequencies of the Earth–ionosphere waveguide, the Schumann resonances at about 7.5 Hz, are used to determine global thunderstorm activity.
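The single-station ranging idea can be made concrete with an idealized waveguide model (a simplification; the real Earth–ionosphere cavity is lossy and inhomogeneous). For a mode with cutoff frequency $f_c$, the group velocity is $v_g(f) = c\sqrt{1-(f_c/f)^2}$, so a spectral component at frequency $f$ emitted by a source at distance $d$ arrives after

$$t(f) = \frac{d}{v_g(f)} = \frac{d}{c\sqrt{1-(f_c/f)^2}} , \qquad \Delta t = t(f_1) - t(f_2) \propto d ,$$

which is the frequency-dependent delay ("dispersion") that single-station receivers exploit to estimate distance.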
Because of the difficulty in obtaining distance to lightning with a single sensor, the only current reliable method for positioning lightning is through interconnected networks of spaced sensors covering an area of the Earth's surface using time-of-arrival differences between the sensors and/or crossed-bearings from different sensors. Several such national networks currently operating in the U.S. can provide the position of CG flashes but currently cannot reliably detect and position IC flashes.
There are a few small area networks (such as Kennedy Space Center's LDAR network, one of whose sensors is pictured at the top of this article) that have VHF time of arrival systems and can detect and position IC flashes. These are called lightning mapper arrays. They typically cover a circle 30–40 miles in diameter.
See also
Automated airport weather station
Lightning prediction system
Convective storm detection
UK Military EMP detectors
US Military EMP detectors
References
External links
Vaisala Lightning Detection from Vaisala
Recent North American lightning activity from StrikestarUS
Recent North American lightning activity from Environment Canada
Lightning detection guide (PDF) from the U.S. NOAA
Lightning origin and research on detection from space from NASA
Australian national storm tracker from Weatherzone Australia
European Cooperation for Lightning Detection
WWLLN World Wide Lightning Location Network
Venezuela Lightning Network (VLN)
Live world detection map (blitzortung.org)
Experimenter's Lightning Detector
Genuine 3D lightning location including altitude data for cloud flashes (in German)
avionics
geopositioning
lightning
meteorological instrumentation and equipment
Russian inventions | Lightning detector | [
"Physics",
"Technology",
"Engineering"
] | 3,498 | [
"Physical phenomena",
"Meteorological instrumentation and equipment",
"Avionics",
"Measuring instruments",
"Electrical phenomena",
"Aircraft instruments",
"Lightning"
] |
5,833,628 | https://en.wikipedia.org/wiki/International%20Time%20Bureau | The International Time Bureau (Bureau international de l'heure, abbreviated BIH), seated at the Paris Observatory, was the international bureau responsible for combining different measurements of Universal Time.
The bureau also played an important role in the research of time keeping and related fields: Earth rotation, reference frames, and atomic time. In 1987 the responsibilities of the bureau were taken over by the International Bureau of Weights and Measures (BIPM) and the International Earth Rotation and Reference Systems Service (IERS).
History
The creation of the BIH was decided upon during the 1912 Conférence internationale de l'heure radiotélégraphique. The following year an attempt was made to regulate the international status of the bureau through the creation of an international convention. However, the convention was not ratified by its member countries due to the outbreak of World War I. In 1919, after the war, it was decided to make the bureau the executive body of the International Commission of Time, one of the commissions of the then newly founded International Astronomical Union (IAU), which had its headquarters in Paris.
Although international in its missions, the BIH was, throughout its history, essentially French in terms of its funding. Until 1966, the director of the Paris Observatory was also the BIH's director. In practice, the BIH's direction was entrusted to a directeur-adjoint (deputy director) or a chef des services du B.I.H. (head of B.I.H. services), i.e., a person in charge of the BIH whose title varied at times over the bureau's history. In January 1920, the director of the Paris Observatory, Benjamin Baillaud, delegated the BIH's effective management to Guillaume Bigourdan, who was in charge of the BIH until 1928. Armand Lambert directed the BIH from 1928 to 1942. Nicolas Stoyko headed the BIH from 1942 until he retired in 1964. Bernard Guinot, the BIH's last director, was in charge from 1 October 1964 until the BIH ceased to exist in 1988.
From 1956 until 1987 the BIH was part of the Federation of Astronomical and Geophysical Data Analysis Services (FAGS). In 1987 the bureau's tasks of combining different measurements of Universal Time were taken over by the BIPM. Its tasks related to the correction of time with respect to the celestial reference frame and the Earth's rotation were taken over by the IERS.
References
Standards organizations in France
Timekeeping
Paris Observatory | International Time Bureau | [
"Physics"
] | 507 | [
"Spacetime",
"Timekeeping",
"Physical quantities",
"Time"
] |
5,833,630 | https://en.wikipedia.org/wiki/Transport%20Phenomena%20%28book%29 | Transport Phenomena is the first textbook about transport phenomena. It is specifically designed for chemical engineering students. The first edition was published in 1960, two years after having been preliminarily published under the title Notes on Transport Phenomena based on mimeographed notes prepared for a chemical engineering course taught at the University of Wisconsin–Madison during the academic year 1957-1958. The second edition was published in August 2001. A revised second edition was published in 2007. This text is often known simply as BSL after its authors' initials.
History
As the chemical engineering profession developed in the first half of the 20th century, the concept of "unit operations" arose as being needed in the education of undergraduate chemical engineers. The theories of mass, momentum and energy transfer were being taught at that time only to the extent necessary for a narrow range of applications. As chemical engineers began moving into a number of new areas, problem definitions and solutions required a deeper knowledge of the fundamentals of transport phenomena than those provided in the textbooks then available on unit operations.
In the 1950s, R. Byron Bird, Warren E. Stewart and Edwin N. Lightfoot stepped forward to develop an undergraduate course at the University of Wisconsin–Madison to integrate the teaching of fluid flow, heat transfer, and diffusion. From this beginning, they prepared their landmark textbook Transport Phenomena.
Subjects covered in the book
The book is divided into three basic sections, named Momentum Transport, Energy Transport and Mass Transport:
Momentum Transport
Viscosity and the Mechanisms of Momentum Transport
Momentum Balances and Velocity Distributions in Laminar Flow
The Equations of Change for Isothermal Systems
Velocity Distributions in Turbulent Flow
Interphase Transport in Isothermal Systems
Macroscopic Balances for Isothermal Flow Systems
Energy Transport
Thermal Conductivity and the Mechanisms of Energy Transport
Energy Balances and Temperature Distributions in Solids and Laminar Flow
The Equations of Change for Nonisothermal Systems
Temperature Distributions in Turbulent Flow
Interphase Transport in Nonisothermal Systems
Macroscopic Balances for Nonisothermal Systems
Mass transport
Diffusivity and the Mechanisms of Mass Transport
Concentration Distributions in Solids and Laminar Flow
Equations of Change for Multicomponent Systems
Concentration Distributions in Turbulent Flow
Interphase Transport in Nonisothermal Mixtures
Macroscopic Balances for Multicomponent Systems
Other Mechanisms for Mass Transport
Word play
Transport Phenomena contains many instances of hidden messages and other word play.
For example, the first letters of each sentence of the Preface spell out "This book is dedicated to O. A. Hougen." while in the revised second edition, the first letters of each paragraph spell out "Welcome". The first letters of each paragraph in the Postface spell out "On Wisconsin". In the first printing, in Fig. 9.L (p. 305) "Bird" is typeset safely outside the furnace wall.
Advantages of the first edition over the second edition
According to many chemical engineering professors, the first edition is much better than the second edition, for several reasons: although the second edition has been revised many times, it still contains many defects and typographical errors in many parts of the book. To address defects in the revised second edition, the authors published "Notes for the 2nd revised edition of TRANSPORT PHENOMENA" on 9 August 2011.
See also
Chemical engineer
Distillation Design
Transport phenomena
Unit Operations of Chemical Engineering
Perry's Chemical Engineers' Handbook
External links
Publisher's description of this book
References
Chemical engineering books
Engineering textbooks
Science books
Technology books
Transport phenomena | Transport Phenomena (book) | [
"Physics",
"Chemistry",
"Engineering"
] | 708 | [
"Transport phenomena",
"Chemical engineering books",
"Physical phenomena",
"Chemical engineering"
] |
5,837,707 | https://en.wikipedia.org/wiki/Hail%20cannon | A hail cannon is a shock wave generator claimed to disrupt the formation of hailstones in the atmosphere.
These devices frequently cause conflict between farmers and their neighbors, because they are fired loudly and repeatedly every 1 to 10 seconds while a storm is approaching and until it has passed through the area, yet there is no scientific evidence for their effectiveness.
Historical use
In the French wine-growing regions, church-bells were traditionally rung in the face of oncoming storms and later replaced by firing rockets or cannons.
Modern systems
A mixture of acetylene and oxygen is ignited in the lower chamber of the machine. As the resulting blast passes through the neck and into the cone, it develops into a shock wave. This shock wave then travels through the cloud formations above, a disturbance which manufacturers claim disrupts the growth phase of hailstones.
Manufacturers claim that what would otherwise have fallen as hailstones then falls as slush or rain. It is said to be critical that the machine is running during the approach of the storm in order to affect the developing hailstones. One manufacturer claims that the radius of the effective area of their device is around .
Scientific evidence
There is no evidence in favor of the effectiveness of these devices. A 2006 review by Jon Wieringa and Iwan Holleman in the journal Meteorologische Zeitschrift summarized a variety of negative and inconclusive scientific measurements, concluding "the use of cannons or explosive rockets is a waste of money and effort".
There is also reason to doubt the efficacy of hail cannons from a theoretical perspective. For example, thunder is a much more powerful sonic wave, and is usually found in the same storms that generate hail, yet it does not seem to disturb the growth of hailstones. Charles Knight, a cloud physicist at the National Center for Atmospheric Research in Boulder, Colorado, said in a July 10, 2008, newspaper article that "I don't find anyone in the scientific community who would validate hail cannons, but there are believers in all sorts of things. It would be very hard to prove they don't work, weather being as unpredictable as it is."
See also
Cloud seeding
Cloudbuster
Pseudoscience
References
External links
Hail Storms by NOAA on Google Maps
Weather modification
Shock waves
Hail | Hail cannon | [
"Physics",
"Engineering"
] | 460 | [
"Planetary engineering",
"Physical phenomena",
"Shock waves",
"Waves",
"Weather modification"
] |
7,623,862 | https://en.wikipedia.org/wiki/Rubber%20elasticity | Rubber elasticity is the ability of solid rubber to be stretched up to a factor of 10 from its original length, and return to close to its original length upon release. This process can be repeated many times with no apparent degradation to the rubber.
Rubber, like all materials, consists of molecules. Rubber's elasticity is produced by molecular processes that occur due to its molecular structure. Rubber's molecules are polymers, or large, chain-like molecules. Polymers are produced by a process called polymerization. This process builds polymers up by sequentially adding short molecular backbone units to the chain through chemical reactions. A rubber polymer follows a random winding path in three dimensions, intermingling with many other rubber polymers.
Natural rubber (polyisoprene) is extracted from plants as a fluid colloid and then solidified and cross-linked in a process called vulcanization. During the process, a small amount of a cross-linking molecule, usually sulfur, is added. When heat is applied, sections of rubber's polymer chains chemically bond to the cross-linking molecule. These bonds cause rubber polymers to become cross-linked, or joined to each other by the bonds made with the cross-linking molecules. Because each rubber polymer is very long, each one participates in many crosslinks with many other rubber molecules, forming a continuous network. The resulting molecular structure demonstrates elasticity, making rubber a member of the class of elastic polymers called elastomers.
History
Following its introduction to Europe from America in the late 15th century, natural rubber (polyisoprene) was regarded mostly as a curiosity. Its most useful application was its ability to erase pencil marks on paper by rubbing, hence its name. One of its most peculiar properties is a slight (but detectable) increase in temperature that occurs when a sample of rubber is stretched. If it is allowed to quickly retract, an equal amount of cooling is observed. This phenomenon caught the attention of the English physicist John Gough. In 1805 he published some qualitative observations on this characteristic as well as how the required stretching force increased with temperature.
By the mid-nineteenth century, the theory of thermodynamics was being developed, and within this framework the British mathematician and physicist Lord Kelvin showed that the change in mechanical energy required to stretch a rubber sample should be proportional to the increase in temperature. This would later be associated with a change in entropy. The connection to thermodynamics was firmly established in 1859, when the English physicist James Joule published the first careful measurements of the temperature increase that occurred as a rubber sample was stretched. This work confirmed the theoretical predictions of Lord Kelvin.
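This connection can be stated compactly with a standard thermodynamic identity (a textbook relation, not specific to rubber). For a sample of length $L$ held at temperature $T$ under tension $f$, with Helmholtz free energy $F = U - TS$,

$$f = \left(\frac{\partial F}{\partial L}\right)_T = \left(\frac{\partial U}{\partial L}\right)_T - T\left(\frac{\partial S}{\partial L}\right)_T , \qquad \left(\frac{\partial f}{\partial T}\right)_L = -\left(\frac{\partial S}{\partial L}\right)_T ,$$

so a tension that grows in proportion to absolute temperature indicates that the elasticity is dominated by the entropy term, which is what the observations of Gough, Kelvin and Joule pointed to.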
In 1838 the American inventor Charles Goodyear found that natural rubber's elastic properties could be immensely improved by adding a small amount of sulfur to produce chemical cross-links between adjacent polyisoprene molecules.
Before it is cross-linked, the liquid natural rubber consists of very long polymer molecules, containing thousands of isoprene backbone units, connected head-to-tail (commonly referred to as chains). Every chain follows a random, three-dimensional path through the polymer liquid and is in contact with thousands of other nearby chains. When heated to about 150 °C, reactive cross-linker molecules, such as sulfur or dicumyl peroxide, can decompose, and the subsequent chemical reactions produce a chemical bond between adjacent chains. A crosslink can be visualized as the letter 'X', but with some of its arms pointing out of the plane. The result is a three-dimensional molecular network.
All of the polyisoprene molecules are connected together at multiple points by these chemical bonds (network nodes) resulting in a single giant molecule and all information about the original long polymers is lost. A rubber band is a single molecule, as is a latex glove. The sections of polyisoprene between two adjacent cross-links are called network chains and can contain up to several hundred isoprene units. In natural rubber, each cross-link produces a network node with four chains emanating from it. It is the network that gives rise to these elastic properties.
Because of the enormous economic and technological importance of rubber, predicting how a molecular network responds to mechanical strains has been of enduring interest to scientists and engineers. To understand the elastic properties of rubber, theoretically, it is necessary to know both the physical mechanisms that occur at the molecular level and how the random-walk nature of the polymer chain defines the network. The physical mechanisms that occur within short sections of the polymer chains produce the elastic forces and the network morphology determines how these forces combine to produce the macroscopic stress that is observed when a rubber sample is deformed (e.g. subjected to tensile strain).
Molecular-level models
There are actually several physical mechanisms that produce the elastic forces within the network chains as a rubber sample is stretched. Two of these arise from entropy changes and one is associated with the distortion of the molecular bond angles along the chain backbone. These three mechanisms are immediately apparent when a moderately thick rubber sample is stretched manually.
Initially, the rubber feels quite stiff (i.e. the force must be increased at a high rate with respect to the strain). At intermediate strains, a much smaller increase in force is needed to produce the same amount of stretch. Finally, as the sample approaches the breaking point, its stiffness increases markedly. What the observer is noticing are the changes in the modulus of elasticity that are due to the different molecular mechanisms. These regions can be seen in Fig. 1, a typical stress vs. strain measurement for natural rubber. The three mechanisms (labelled Ia, Ib, and II) predominantly correspond to the regions shown on the plot.
The concept of entropy comes to us from the area of mathematical physics called statistical mechanics which is concerned with the study of large thermal systems, e.g. rubber networks at room temperature. Although the detailed behavior of the constituent chains are random and far too complex to study individually, we can obtain very useful information about their "average" behavior from a statistical mechanics analysis of a large sample. There are no other examples of how entropy changes can produce a force in our everyday experience. One may regard the entropic forces in polymer chains as arising from the thermal collisions that their constituent atoms experience with the surrounding material. It is this constant jostling that produces a resisting (elastic) force in the chains as they are forced to become straight.
While stretching a rubber sample is the most common example of elasticity, it also occurs when rubber is compressed. Compression may be thought of as a two dimensional expansion as when a balloon is inflated. The molecular mechanisms that produce the elastic force are the same for all types of strain.
When these elastic force models are combined with the complex morphology of the network, it is not possible to obtain simple analytic formulae to predict the macroscopic stress. It is only via numerical simulations on computers that it is possible to capture the complex interaction between the molecular forces and the network morphology to predict the stress and ultimate failure of a rubber sample as it is strained.
The Molecular Kink Paradigm for rubber elasticity
The Molecular Kink Paradigm proceeds from the intuitive notion that molecular chains that make up a natural rubber (polyisoprene) network are constrained by surrounding chains to remain within a "tube." Elastic forces produced in a chain, as a result of some applied strain, are propagated along the chain contour within this tube. Fig. 2 shows a representation of a four-carbon isoprene backbone unit with an extra carbon atom at each end to indicate its connections to adjacent units on a chain. It has three single C-C bonds and one double bond. It is principally by rotating about the C-C single bonds that a polyisoprene chain randomly explores its possible conformations.
Sections of chain containing between two and three isoprene units have sufficient flexibility that they may be considered statistically de-correlated from one another. That is, there is no directional correlation along the chain for distances greater than this distance, referred to as a Kuhn length. These non-straight regions evoke the concept of "kinks" and are in fact a manifestation of the random-walk nature of the chain.
Since a kink is composed of several isoprene units, each having three carbon-carbon single bonds, there are many possible conformations available to a kink, each with a distinct energy and end-to-end distance. Over time scales of seconds to minutes, only these relatively short sections of the chain (i.e. kinks) have sufficient volume to move freely amongst their possible rotational conformations. The thermal interactions tend to keep the kinks in a state of constant flux, as they make transitions between all of their possible rotational conformations. Because the kinks are in thermal equilibrium, the probability that a kink resides in any rotational conformation is given by a Boltzmann distribution and we may associate an entropy with its end-to-end distance. The probability distribution for the end-to-end distance of a Kuhn length is approximately Gaussian and is determined by the Boltzmann probability factors for each state (rotational conformation). As a rubber network is stretched, some kinks are forced into a restricted number of more extended conformations having a greater end-to-end distance and it is the resulting decrease in entropy that produces an elastic force along the chain.
There are three distinct molecular mechanisms that produce these forces. Two of them arise from changes in entropy and are referred to as the low chain extension regime, Ia, and the moderate chain extension regime, Ib. The third mechanism occurs at high chain extension, as the chain is extended beyond its initial equilibrium contour length by the distortion of the chemical bonds along its backbone. In this case, the restoring force is spring-like and is referred to as regime II. The three force mechanisms are found to roughly correspond to the three regions observed in tensile stress vs. strain experiments, shown in Fig. 1.
The initial morphology of the network, immediately after chemical cross-linking, is governed by two random processes: (1) The probability for a cross-link to occur at any isoprene unit and, (2) the random walk nature of the chain conformation. The end-to-end distance probability distribution for a fixed chain length (i.e. fixed number of isoprene units) is described by a random walk. It is the joint probability distribution of the network chain lengths and the end-to-end distances between their cross-link nodes that characterizes the network morphology. Because both the molecular physics mechanisms that produce the elastic forces and the complex morphology of the network must be treated simultaneously, simple analytic elasticity models are not possible; an explicit 3-dimensional numerical model is required to simulate the effects of strain on a representative volume element of a network.
Low chain extension regime, Ia
The Molecular Kink Paradigm envisions a representative network chain as a series of vectors that follow the chain contour within its tube. Each vector represents the equilibrium end-to-end distance of a kink. The actual 3-dimensional path of the chain is not pertinent, since all elastic forces are assumed to operate along the chain contour. In addition to the chain's contour length, the only other important parameter is its tortuosity, the ratio of its contour length to its end-to-end distance. As the chain is extended, in response to an applied strain, the induced elastic force is assumed to propagate uniformly along its contour. Consider a network chain whose end points (network nodes) are more or less aligned with the tensile strain axis. As the initial strain is applied to the rubber sample, the network nodes at the ends of the chain begin to move apart and all of the kink vectors along the contour are stretched simultaneously. Physically, the applied strain forces the kinks to stretch beyond their thermal equilibrium end-to-end distances, causing a decrease in their entropy. The increase in free energy associated with this change in entropy, gives rise to a (linear) elastic force that opposes the strain. The force constant for the low strain regime can be estimated by sampling molecular dynamics (MD) trajectories of a kink (i.e. short chains) composed of 2–3 isoprene units, at relevant temperatures (e.g. 300K). By taking many samples of the coordinates over the course of the simulations, the probability distributions of end-to-end distance for a kink can be obtained. Since these distributions (which turn out to be approximately Gaussian) are directly related to the number of states, they may be associated with the entropy of the kink at any end-to-end distance. By numerically differentiating the probability distribution, the change in entropy, and hence free energy, with respect to the kink end-to-end distance can be found. The force model for this regime is found to be linear and proportional to the temperature divided by the chain tortuosity.
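The numerical step described here can be illustrated with a minimal sketch in which synthetic Gaussian samples stand in for the molecular dynamics trajectories; the specific numbers and variable names below are illustrative assumptions, not values from the text.

import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # temperature, K

# Synthetic stand-in for MD samples of a kink end-to-end distance (nm)
rng = np.random.default_rng(1)
samples = rng.normal(loc=0.60, scale=0.05, size=200_000)

# Histogram the samples to estimate the probability density P(r)
density, edges = np.histogram(samples, bins=80, density=True)
r = 0.5 * (edges[:-1] + edges[1:])
keep = density > 0
r, P = r[keep], density[keep]

# Entropy (up to a constant) and the entropic force F = -T dS/dr
S = kB * np.log(P)
F = -T * np.gradient(S, r * 1e-9)   # convert nm to m so F is in newtons

print(F[0], F[-1])   # compressive and tensile restoring forces at the tails of the distribution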
Moderate chain extension regime, Ib
At some point in the low extension regime (i.e. as all of the kinks along the chain are being extended simultaneously) it becomes energetically more favourable to have one kink transition to an extended conformation in order to stretch the chain further. The applied strain can force a single isoprene unit within a kink into an extended conformation, slightly increasing the end-to-end distance of the chain, and the energy required to do this is less than that needed to continue extending all of the kinks simultaneously. Numerous experiments strongly suggest that stretching a rubber network is accompanied by a decrease in entropy. As shown in Fig. 2, an isoprene unit has three single C-C bonds and there are two or three preferred rotational angles (orientations) about these bonds that have energy minima. Of the 18 allowed rotational conformations, only 6 have extended end-to-end distances and forcing the isoprene units in a chain to reside in some subset of the extended states must reduce the number of rotational conformations available for thermal motion. It is this reduction in the number of available states that causes the entropy to decrease. As the chain continues to straighten, all of the isoprene units in the chain are eventually forced into extended conformations and the chain is considered to be "taut." A force constant for chain extension can be estimated from the resulting change in free energy associated with this entropy change. As with regime IA, the force model for this regime is linear and proportional to the temperature divided by the chain tortuosity.
High chain extension regime, II
When all of the isoprene units in a network chain have been forced to reside in just a few extended rotational conformations, the chain becomes taut. It may be regarded as sensibly straight, except for the zigzag path that the C-C bonds make along the chain contour. However, further extension is still possible by bond distortions (e.g. bond angle increases), bond stretches, and dihedral angle rotations. These forces are spring-like and are not associated with entropy changes. A taut chain can be extended by only about 40%. At this point the force along the chain is sufficient to mechanically rupture the C-C covalent bond. This tensile force limit has been calculated via quantum chemistry simulations and it is approximately 7 nN, about a factor of a thousand greater than the entropic chain forces at low strain. The angles between adjacent backbone C-C bonds in an isoprene unit vary between about 115–120 degrees and the forces associated with maintaining these angles are quite large, so within each unit, the chain backbone always follows a zigzag path, even at bond rupture. This mechanism accounts for the steep upturn in the elastic stress, observed at high strains (Fig. 1).
Network morphology
Although the network is completely described by only two parameters (the number of network nodes per unit volume and the statistical de-correlation length of the polymer, the Kuhn length), the way in which the chains are connected is actually quite complicated. There is a wide variation in the lengths of the chains and most of them are not connected to the nearest neighbor network node. Both the chain length and its end-to-end distance are described by probability distributions. The term "morphology" refers to this complexity. If the cross-linking agent is thoroughly mixed, there is an equal probability for any isoprene unit to become a network node. For dicumyl peroxide, the cross-linking efficiency in natural rubber is unity, but this is not the case for sulfur. The initial morphology of the network is dictated by two random processes: the probability for a cross-link to occur at any isoprene unit and the Markov random walk nature of a chain conformation. The probability distribution function for how far one end of a chain can 'wander' from the other is generated by a Markov sequence. This conditional probability density function relates the chain length n, in units of the Kuhn length, to the end-to-end distance r (a standard form is given below).
The probability p_x that any isoprene unit becomes part of a cross-link node is proportional to the ratio of the concentrations of the cross-linker molecules (e.g., dicumyl peroxide) to the isoprene units; the factor of two comes about because two isoprene units (one from each chain) participate in the cross-link. The probability for finding a chain containing N isoprene units is then given by a geometric distribution (given below).
The equation can be understood as simply the probability that an isoprene unit is NOT a cross-link (1−p_x) in N−1 successive units along a chain. Since P(N) decreases with N, shorter chains are more probable than longer ones. Note that the number of statistically independent backbone segments is not the same as the number of isoprene units. For natural rubber networks, the Kuhn length contains about 2.2 isoprene units, so n ≈ N/2.2. The product of the two distributions above (the joint probability distribution) relates the network chain length (N) and end-to-end distance (r) between its terminating cross-link nodes:
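The following are assumed reconstructions of the standard forms consistent with the surrounding description; the symbols n, N, b, r and p_x are introduced here for illustration.

P(r | n) = 4πr² (3 / (2π n b²))^{3/2} exp( −3r² / (2nb²) )

(the Gaussian random-walk density for a chain of n Kuhn segments of length b),

P(N) = p_x (1 − p_x)^{N−1}, with p_x ≈ 2 [cross-linker] / [isoprene]

(the geometric chain-length distribution), and the joint probability distribution

P(N, r) = P(N) · P(r | n(N)), with n(N) ≈ N / 2.2.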
The complex morphology of a natural rubber network can be seen in Fig. 3, which shows the probability density vs. end-to-end distance (in units of mean node spacing) for an "average" chain. For the common experimental cross-link density of 4×10^19 cm^−3, an average chain contains about 116 isoprene units (52 Kuhn lengths) and has a contour length of about 50 nm. Fig. 3 shows that a significant fraction of chains span several node spacings, i.e., the chain ends overlap other network chains. Natural rubber, cross-linked with dicumyl peroxide, has tetra-functional cross-links (i.e. each cross-link node has 4 network chains emanating from it). Depending on their initial tortuosity and the orientation of their endpoints with respect to the strain axis, each chain associated with an active cross-link node can have a different elastic force constant as it resists the applied strain. To preserve force equilibrium (zero net force) on each cross-link node, a node may be forced to move in tandem with the chain having the highest force constant for chain extension. It is this complex node motion, arising from the random nature of the network morphology, that makes the study of the mechanical properties of rubber networks so difficult. As the network is strained, paths composed of these more extended chains emerge that span the entire sample, and it is these paths that carry most of the stress at high strains.
Numerical network simulation model
To calculate the elastic response of a rubber sample, the three chain force models (regimes Ia, Ib and II) and the network morphology must be combined in a micro-mechanical network model. Using the joint probability distribution in equation () and the force extension models, it is possible to devise numerical algorithms to both construct a faithful representative volume element of a network and to simulate the resulting mechanical stress as it is subjected to strain. An iterative relaxation algorithm is used to maintain approximate force equilibrium at each network node as strain is imposed. When the force constant obtained for kinks having 2 or 3 isoprene units (approximately one Kuhn length) is used in numerical simulations, the predicted stress is found to be consistent with experiments. The results of such a calculation are shown in Fig. 1 (dashed red line) for sulphur cross-linked natural rubber and compared with experimental data (solid blue line). These simulations also predict a steep upturn in the stress as network chains become taut and, ultimately, material failure due to bond rupture. In the case of sulphur cross-linked natural rubber, the S-S bonds in the cross-link are much weaker than the C-C bonds on the chain backbone and are the network failure points. The plateau in the simulated stress, starting at a strain of about 7, is the limiting value for the network. Stresses greater than about 7 MPa cannot be supported and the network fails. Near this stress limit, the simulations predict that less than 10% of the chains are taut, i.e. in the high chain extension regime and less than 0.1% of the chains have ruptured. While the very low rupture fraction may seem surprising, it is not inconsistent with the common experience of stretching a rubber band until it breaks. The elastic response of the rubber after breaking is not noticeably different from the original.
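The force-equilibrium relaxation idea can be illustrated with a deliberately simplified toy model: a one-dimensional chain of network nodes joined by linear (Gaussian, zero-rest-length) entropic springs, relaxed by repeated local updates while the boundary is held at the imposed strain. This is an assumption-laden sketch, not the three-regime, three-dimensional model described in the text.

import numpy as np

n_nodes = 11
# Spring constants between successive nodes (arbitrary units), mimicking the
# spread of chain force constants in a real network
k = np.array([1.0, 3.0, 1.0, 2.0, 1.5, 1.0, 2.5, 1.0, 1.0, 2.0])

x = np.linspace(0.0, 1.0, n_nodes)   # node positions in the unstrained state
x[-1] = 1.5                          # impose a 50% strain at one boundary

# Gauss-Seidel style relaxation: move each interior node to the point where
# the forces from its two neighbouring springs balance
for sweep in range(5000):
    for i in range(1, n_nodes - 1):
        x[i] = (k[i - 1] * x[i - 1] + k[i] * x[i + 1]) / (k[i - 1] + k[i])

tension = k[0] * (x[1] - x[0])       # equal in every spring at equilibrium
print(tension)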
Experiments
Variation of tensile stress with temperature
For molecular systems in thermal equilibrium, the addition of energy (e.g. by mechanical work) can cause a change in entropy. This is known from the theories of thermodynamics and statistical mechanics. Specifically, both theories assert that the change in energy must be proportional to the entropy change times the absolute temperature. This rule is only valid so long as the energy is restricted to thermal states of molecules. If a rubber sample is stretched far enough, energy may reside in nonthermal states such as the distortion of chemical bonds and the rule does not apply. At low to moderate strains, theory predicts that the required stretching force is due to a change in entropy in the network chains.
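In standard thermodynamic notation (an assumed reconstruction rather than an equation from the original article), this statement corresponds to

f = (∂A/∂L)_{T,V} = (∂U/∂L)_{T,V} − T (∂S/∂L)_{T,V} ≈ −T (∂S/∂L)_{T,V},

where A is the Helmholtz free energy, U the internal energy, S the entropy and L the sample length; for an ideal rubber the internal-energy term is negligible, so the force at fixed strain is proportional to the absolute temperature.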
It is therefore expected that the force necessary to stretch a sample to some value of strain should be proportional to the temperature of the sample. Measurements showing how the tensile stress in a stretched rubber sample varies with temperature are shown in Fig. 4. In these experiments, the strain of a stretched rubber sample was held fixed as the temperature was varied between 10 and 70 degrees Celsius. For each value of fixed strain, it is seen that the tensile stress varied linearly (to within experimental error). These experiments provide the most compelling evidence that entropy changes are the fundamental mechanism for rubber elasticity.
The positive linear behaviour of the stress with temperature sometimes leads to the mistaken notion that rubber has a negative coefficient of thermal expansion (i.e. the length of a sample shrinks when heated). Experiments have shown conclusively that, like almost all other materials, the coefficient of thermal expansion of natural rubber is positive.
Snap-back velocity
When stretching a piece of rubber (e.g. a rubber band) it will deform lengthwise in a uniform manner. When one end of the sample is released, it snaps back to its original length too quickly for the naked eye to resolve the process. An intuitive expectation is that it returns to its original length in the same manner as when it was stretched (i.e. uniformly). Experimental observations by Mrowca et al. suggest that this expectation is inaccurate. To capture the extremely fast retraction dynamics, they utilized an experimental method devised by Exner and Stefan in 1874. Their method consisted of a rapidly rotating glass cylinder which, after being coated with lamp black, was placed next to the stretched rubber sample. Styli, attached to the mid-point and free end of the rubber sample, were held in contact with the glass cylinder. Then, as the free end of the rubber snapped back, the styli traced out helical paths in the lamp black coating of the rotating cylinder. By adjusting the rotation speed of the cylinder, they could record the position of the styli in less than one complete rotation. The trajectories were transferred to a graph by rolling the cylinder on a piece of damp blotter paper. The mark left by a stylus appeared as a white line (no lamp black) on the paper.
Their data, plotted as the graph in Fig. 5, shows the position of end and midpoint styli as the sample rapidly retracts to its original length. The sample was initially stretched 9.5 in (~24 cm) beyond its unstrained length and then released. The styli returned to their original positions (i.e. a displacement of 0 in) in a little over 6 ms. The linear behaviour of the displacement vs. time indicates that, after a brief acceleration, both the end and the midpoint of the sample snapped back at a constant velocity of about 50 m/s or 112 mph. However, the midpoint stylus did not start to move until about 3 ms after the end was released. Evidently, the retraction process travels as a wave, starting at the free end.
At high extensions some of the energy stored in the stretched network chain is due to a change in its entropy, but most of the energy is stored in bond distortions (regime II, above) which do not involve an entropy change. If one assumes that all of the stored energy is converted to kinetic energy, the retraction velocity may be calculated directly from the familiar kinetic energy expression E = ½mv². Numerical simulations, based on the molecular kink paradigm, predict velocities consistent with this experiment.
Historical approaches to elasticity theory
Eugene Guth and Hubert M. James proposed the entropic origins of rubber elasticity in 1941.
Thermodynamics
Temperature affects the elasticity of elastomers in an unusual way. When an elastomer is held in a stretched state, heating causes it to contract; conversely, cooling can cause it to expand.
This can be observed with an ordinary rubber band. Stretching a rubber band will cause it to release heat, while releasing it after it has been stretched will lead it to absorb heat, causing its surroundings to become cooler. This phenomenon can be explained with the Gibbs free energy. Rearranging ΔG=ΔH−TΔS, where G is the free energy, H is the enthalpy, and S is the entropy, we obtain TΔS = ΔH − ΔG. Since stretching is nonspontaneous, as it requires external work, TΔS must be negative. Since T is always positive (it can never reach absolute zero), the ΔS must be negative, implying that the rubber in its natural state is more entangled (with more microstates) than when it is under tension. Thus, when the tension is removed, the reaction is spontaneous, leading ΔG to be negative. Consequently, the cooling effect must result in a positive ΔH, so ΔS will be positive there.
The result is that an elastomer behaves somewhat like an ideal monatomic gas, inasmuch as (to good approximation) elastic polymers do not store any potential energy in stretched chemical bonds or elastic work done in stretching molecules, when work is done upon them. Instead, all work done on the rubber is "released" (not stored) and appears immediately in the polymer as thermal energy. In the same way, all work that the elastic does on the surroundings results in the disappearance of thermal energy in order to do the work (the elastic band grows cooler, like an expanding gas). This last phenomenon is the critical clue that the ability of an elastomer to do work depends (as with an ideal gas) only on entropy-change considerations, and not on any stored (i.e. potential) energy within the polymer bonds. Instead, the energy to do work comes entirely from thermal energy, and (as in the case of an expanding ideal gas) only the positive entropy change of the polymer allows its internal thermal energy to be converted efficiently into work.
Polymer chain theories
Invoking the theory of rubber elasticity, a polymer chain in a cross-linked network may be seen as an entropic spring. When the chain is stretched, the entropy is reduced by a large margin because there are fewer conformations available. As such there is a restoring force which causes the polymer chain to return to its equilibrium or unstretched state, such as a high entropy random coil configuration, once the external force is removed. This is the reason why rubber bands return to their original state. Two common models for rubber elasticity are the freely-jointed chain model and the worm-like chain model.
Freely-jointed chain model
The freely jointed chain, also called an ideal chain, follows the random walk model. Microscopically, the 3D random walk of a polymer chain assumes that the overall end-to-end distance can be expressed in terms of its x, y and z components (a standard form is given below).
In the model, b is the length of a rigid segment, N is the number of segments of length b, r is the distance between the fixed and free ends, and L_c is the "contour length", equal to Nb. Above the glass transition temperature, the polymer chain oscillates and its conformation changes over time. The probability distribution of the chain is the product of the probability distributions of the individual components, given by the following Gaussian distribution:
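The standard freely jointed chain expressions, written here as an assumed reconstruction using the symbols defined above, are

R² = R_x² + R_y² + R_z²

P(R_x, R_y, R_z) = (3 / (2π N b²))^{3/2} exp( −3 (R_x² + R_y² + R_z²) / (2 N b²) ),

so that the mean end-to-end vector vanishes while ⟨R²⟩ = N b², giving the root-mean-square size R_rms = √N · b.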
Therefore, the ensemble average end-to-end distance is simply the standard integral of the probability distribution over all space. Note that the movement could be backwards or forwards, so the net average will be zero. However, the root mean square can be a useful measure of the distance.
The Flory theory of rubber elasticity suggests that rubber elasticity has primarily entropic origins. Starting from the Helmholtz free energy of a chain and its relation to entropy, the force generated by deforming a rubber chain away from its original unstretched conformation can be derived; here Ω denotes the number of conformations of the polymer chain. Since the deformation does not involve an enthalpy change, the change in free energy can simply be calculated from the change in entropy. Note that the resulting force equation resembles the behaviour of a spring and follows Hooke's law: F = kx, where F is the force, k is the spring constant and x is the distance. Usually, the neo-Hookean model can be used on cross-linked polymers to predict their stress-strain relations:
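A hedged reconstruction of the standard relations this passage appears to refer to (not equations recovered from the original text) is

A(R) = −k_B T ln Ω(R) ≈ A_0 + (3 k_B T / (2 N b²)) R²,    F = ∂A/∂R = (3 k_B T / (N b²)) R,

an entropic spring obeying Hooke's law with spring constant k = 3k_BT/(Nb²), and, for the network, the neo-Hookean nominal stress

σ = n k_B T ( λ − λ^{−2} ),

where n is the number density of network chains and λ the extension ratio.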
Note that the elastic coefficient is temperature dependent. If rubber temperature increases, the elastic coefficient increases as well. This is the reason why rubber under constant stress shrinks when its temperature increases.
We can further expand the Flory theory into a macroscopic view, where bulk rubber material is discussed. Assume the original dimensions of the rubber material are L_x0, L_y0 and L_z0; a deformed shape can then be expressed by applying an individual extension ratio to each length (λ_x L_x0, λ_y L_y0, λ_z L_z0). Microscopically, the deformed polymer chain can also be expressed with the extension ratios: R_x = λ_x R_x0, R_y = λ_y R_y0, R_z = λ_z R_z0 (an affine deformation assumption). The free energy change due to deformation can then be expressed in terms of the extension ratios (the standard result is given below).
Assuming that the rubber is cross-linked and isotropic, the random walk model gives that R_x0, R_y0 and R_z0 are distributed according to a normal distribution. Therefore, they are equivalent on average, and each mean-square component is 1/3 of the overall mean-square end-to-end distance of the chain: ⟨R_x0²⟩ = ⟨R_y0²⟩ = ⟨R_z0²⟩ = ⟨R²⟩/3. Plugging this into the change of free energy equation above, one obtains:
The free energy change per volume is just:
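The standard Flory results for the two expressions referenced above, written here as an assumed reconstruction, are

ΔF_def = (1/2) n k_B T ( λ_x² + λ_y² + λ_z² − 3 ),

ΔF_def / V = (1/2) ν k_B T ( λ_x² + λ_y² + λ_z² − 3 ),    ν = n / V.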
where n is the number of strands in the network, the subscript "def" means "deformation", ν = n/V is the number density of polymer chains per volume, and the remaining factor is the ratio between the end-to-end distance of the chain and the theoretical distance that obeys random walk statistics. If we assume incompressibility, the product of the extension ratios is 1, implying no change in the volume: λ_x λ_y λ_z = 1.
Case study: Uniaxial deformation:
In uniaxially deformed rubber, λ_x = λ and λ_y = λ_z = λ^(−1/2), because incompressibility is assumed. The previous free energy per volume expression then simplifies accordingly (the standard forms for this case are collected below).
The engineering stress is, by definition, the first derivative of this energy density with respect to the extension ratio, which plays the role of the strain, and the Young's modulus is defined as the derivative of the stress with respect to strain, which measures the stiffness of the rubber in laboratory experiments.
where ρ is the mass density of the polymer and M_c is the number-average molecular weight of a network strand between crosslinks. This type of analysis links the thermodynamic theory of rubber elasticity to experimentally measurable parameters. In addition, it gives insights into the cross-linking condition of the materials.
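The uniaxial-case relations referenced in this passage, written as an assumed reconstruction consistent with the definitions above, are

λ_x = λ,    λ_y = λ_z = λ^{−1/2},

ΔF_def / V = (1/2) ν k_B T ( λ² + 2/λ − 3 ),

σ_eng = ∂(ΔF_def / V)/∂λ = ν k_B T ( λ − λ^{−2} ),

E = ∂σ_eng/∂λ evaluated at λ = 1, giving E = 3 ν k_B T = 3 ρ R T / M_c.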
Worm-like chain model
The worm-like chain model (WLC) takes the energy required to bend a molecule into account. The variables are the same except that the persistence length l_p replaces the Kuhn segment length b. The force then approximately follows this equation:
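The equation referred to here is most likely the standard Marko–Siggia interpolation formula for the worm-like chain, reproduced below as an assumed reconstruction rather than text recovered from the article:

F = (k_B T / l_p) ( 1 / (4 (1 − r/L_c)²) − 1/4 + r/L_c ),

where l_p is the persistence length, r the end-to-end distance and L_c the contour length.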
Therefore, when there is no distance between the chain ends (r = 0), the force is zero, and to fully extend the polymer chain (r → L_c) an infinite force is required, which is intuitive. Graphically, the force begins at the origin and initially increases linearly with r. The force then plateaus but eventually increases again and approaches infinity as the end-to-end distance r approaches the contour length L_c.
See also
Elasticity (physics)
Hyperelastic material
Polymers
Thermodynamics
References
Rubber properties
Thermodynamics
Mechanics | Rubber elasticity | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 6,909 | [
"Mechanics",
"Thermodynamics",
"Mechanical engineering",
"Dynamical systems"
] |
7,624,304 | https://en.wikipedia.org/wiki/Parity%20game | A parity game is played on a colored directed graph, where each node has been colored by a priority – one of (usually) finitely many natural numbers. Two players, 0 and 1, move a (single, shared) token along the edges of the graph. The owner of the node that the token falls on selects the successor node (does the next move). The players keep moving the token, resulting in a (possibly infinite) path, called a play.
The winner of a finite play is the player whose opponent is unable to move. The winner of an infinite play is determined by the priorities appearing in the play. Typically, player 0 wins an infinite play if the largest priority that occurs infinitely often in the play is even. Player 1 wins otherwise. This explains the word "parity" in the title.
Parity games lie in the third level of the Borel hierarchy, and are consequently determined.
Games related to parity games were implicitly used in Rabin's
proof of decidability of the monadic second-order theory of n successors (S2S for n = 2), where determinacy of such games was
proven. The Knaster–Tarski theorem leads to a relatively simple proof of determinacy of parity games.
Moreover, parity games are history-free determined. This means that if a player has a winning strategy then that player has a winning strategy that depends only on the current board position, and not on the history of the play.
Solving a game
Solving a parity game played on a finite graph means deciding, for a given starting position, which of the two players has a winning strategy. It has been shown that this problem is in NP and co-NP, more precisely UP and co-UP, as well as in QP (quasipolynomial time). It remains an open question whether this decision problem is solvable in PTime.
Given that parity games are history-free determined, solving a given parity game is equivalent to solving the following simple-looking graph-theoretic problem. Given a finite colored directed bipartite graph with n vertices V = V0 ∪ V1, where V is colored with colors from 1 to m, is there a choice function selecting a single out-going edge from each vertex of V0, such that the resulting subgraph has the property that in each cycle the largest occurring color is even?
Recursive algorithm for solving parity games
Zielonka outlined a recursive algorithm that solves parity games. Let G = (V, V0, V1, E, Ω) be a parity game, where V0 resp. V1 are the sets of nodes belonging to player 0 resp. 1, V = V0 ∪ V1 is the set of all nodes, E ⊆ V × V is the set of edges, and Ω : V → ℕ is the priority assignment function.
Zielonka's algorithm is based on the notion of attractors. Let U ⊆ V be a set of nodes and i ∈ {0, 1} be a player. The i-attractor of U, written Attr_i(U), is the least set of nodes containing U such that player i can force a visit to U from every node in Attr_i(U). It can be defined by a fix-point computation:
In other words, one starts with the initial set U. Then, in each step one adds all nodes belonging to player i that can reach the previous set with a single edge and all nodes belonging to player 1 − i that must reach the previous set no matter which edge player 1 − i takes.
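The fix-point computation referenced above can be written (an assumed reconstruction of the standard definition) as

Attr_i(U)^0 = U

Attr_i(U)^{j+1} = Attr_i(U)^j ∪ { v ∈ V_i | ∃(v, w) ∈ E : w ∈ Attr_i(U)^j } ∪ { v ∈ V_{1−i} | ∀(v, w) ∈ E : w ∈ Attr_i(U)^j }

Attr_i(U) = ∪_j Attr_i(U)^j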
Zielonka's algorithm is based on a recursive descent on the number of priorities. If the maximal priority is 0, it is immediate to see that player 0 wins the whole game (with an arbitrary strategy). Otherwise, let p be the largest priority and let i = p mod 2 be the player associated with it. Let U be the set of nodes with priority p and let A = Attr_i(U) be the corresponding attractor of player i.
Player i can now ensure that every play that visits A infinitely often is won by player i.
Consider the game G' = G \ A in which all nodes and affected edges of A are removed. We can now solve the smaller game G' by recursion and obtain a pair of winning sets (W'_i, W'_{1−i}). If W'_{1−i} is empty, then so is the winning set of player 1 − i for the game G, because from W'_i player 1 − i can only decide to escape into A, which also results in a win for player i.
Otherwise, if W'_{1−i} is not empty, we only know for sure that player 1 − i can win on W'_{1−i}, as player i cannot escape from W'_{1−i} into A (since A is an i-attractor). We therefore compute the attractor B = Attr_{1−i}(W'_{1−i}) and remove it from G to obtain the smaller game G'' = G \ B. We again solve it by recursion and obtain a pair of winning sets (W''_i, W''_{1−i}). It follows that W_i = W''_i and W_{1−i} = W''_{1−i} ∪ B.
In simple pseudocode, the algorithm might be expressed as this:
function solve(G)
    p := maximal priority in G
    if p = 0
        return W_0, W_1 := V, {}
    else
        U := nodes in G with priority p
        i := p mod 2
        A := Attr_i(U)
        W'_0, W'_1 := solve(G \ A)
        if W'_{1−i} = {}
            return W_i, W_{1−i} := V, {}
        B := Attr_{1−i}(W'_{1−i})
        W''_0, W''_1 := solve(G \ B)
        return W_i, W_{1−i} := W''_i, W''_{1−i} ∪ B
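For concreteness, the following is a minimal Python sketch of the recursion under assumed data structures (an illustrative implementation, not code from any of the cited toolsets): the game is encoded as dictionaries owner, priority and edges over a set of nodes, and every node is assumed to have at least one successor.

def attractor(nodes, owner, edges, target, player):
    """Least set containing `target` from which `player` can force a visit to `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in attr:
                continue
            succ = [w for w in edges[v] if w in nodes]
            if owner[v] == player and any(w in attr for w in succ):
                attr.add(v)
                changed = True
            elif owner[v] != player and succ and all(w in attr for w in succ):
                attr.add(v)
                changed = True
    return attr

def zielonka(nodes, owner, edges, priority):
    """Return (W0, W1): the nodes from which player 0 resp. player 1 wins."""
    if not nodes:
        return set(), set()
    p = max(priority[v] for v in nodes)
    i = p % 2                                    # player who likes priority p
    U = {v for v in nodes if priority[v] == p}
    A = attractor(nodes, owner, edges, U, i)
    sub = zielonka(nodes - A, owner, edges, priority)
    if not sub[1 - i]:
        win = [set(), set()]
        win[i] = set(nodes)
        return win[0], win[1]
    B = attractor(nodes, owner, edges, sub[1 - i], 1 - i)
    sub2 = zielonka(nodes - B, owner, edges, priority)
    win = [set(), set()]
    win[i] = sub2[i]
    win[1 - i] = sub2[1 - i] | B
    return win[0], win[1]

# Example: a two-node cycle whose highest priority is even, so player 0 wins everywhere
nodes = {"a", "b"}
owner = {"a": 0, "b": 1}
priority = {"a": 2, "b": 1}
edges = {"a": ["b"], "b": ["a"]}
print(zielonka(nodes, owner, edges, priority))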
Related games and their decision problems
A slight modification of the above game, and the related graph-theoretic problem, makes solving the game NP-hard. The modified game has the Rabin acceptance condition, and thus every vertex is colored by a set of colors instead of a single color. Accordingly, we say a vertex v has color j if the color j belongs to the color set of v. An infinite play is winning for player 0 if there exists i such that infinitely many vertices in the play have color 2i, yet finitely many have color 2i+1.
Parity is the special case where every vertex has a single color.
Specifically, in the above bipartite graph scenario, the problem now is to determine if there
is a choice function selecting a single out-going edge from each vertex of V0, such that the resulting subgraph has the property that in each cycle (and hence each strongly connected component) it is the case that there exists an i and a node with color 2i, and no node with color 2i + 1.
Note that as opposed to parity games, this game is no longer symmetric with respect to players 0 and 1.
Relation with logic and automata theory
Despite its interesting complexity theoretic status, parity game solving can be seen as the algorithmic backend to problems in automated verification and controller synthesis. The model-checking problem for the modal μ-calculus for instance is known to be equivalent to parity game solving. Also, decision problems like validity or satisfiability for modal logics can be reduced to parity game solving.
References
Further reading
E. Grädel, W. Thomas, T. Wilke (Eds.) : Automata, Logics, and Infinite Games, Springer LNCS 2500 (2003),
W. Zielonka : Infinite games on finitely coloured graphs with applications to automata on infinite tree, TCS, 200(1-2):135-183, 1998
External links
Two state-of-the-art parity game solving toolsets are the following:
PGSolver Collection
Oink
Game theory game classes
Finite model theory
Quasi-polynomial time algorithms | Parity game | [
"Mathematics"
] | 1,385 | [
"Game theory game classes",
"Finite model theory",
"Game theory",
"Model theory"
] |
7,625,671 | https://en.wikipedia.org/wiki/Tetralemma | The tetralemma is a figure that features prominently in the logic of India.
Definition
It states that with reference to any logical proposition (or axiom) X, there are four possibilities:
X (affirmation)
¬X (negation)
X ∧ ¬X (both)
¬(X ∨ ¬X) (neither)
Catuskoti
The history of fourfold negation, the Catuskoti (Sanskrit), is evident in the logico-epistemological tradition of India, given the categorical nomenclature Indian logic in Western discourse. Subsumed within the auspice of Indian logic, 'Buddhist logic' has been particularly focused in its employment of the fourfold negation, as evidenced by the traditions of Nagarjuna and the Madhyamaka, particularly the school of Madhyamaka given the retroactive nomenclature of Prasangika by the Tibetan Buddhist logico-epistemological tradition. The tetralemma was also used as a form of inquiry, rather than of logic, in the Nasadiya Sukta of the Rigveda (the creation hymn), though it seems to have been rarely used as a tool of logic before Buddhism.
See also
Catuṣkoṭi, a similar concept in Indian philosophy
De Morgan's laws
Dialetheism
Logical connective
Paraconsistent logic
Prasangika
Pyrrhonism
Semiotic square
Two-truths doctrine
References
External links
Wiktionary definition of tetralemma
History of logic
Logic
Lemmas | Tetralemma | [
"Mathematics"
] | 290 | [
"Mathematical theorems",
"Mathematical problems",
"Lemmas"
] |
7,628,518 | https://en.wikipedia.org/wiki/Spin%20diffusion | Spin diffusion describes a situation wherein the individual nuclear spins undergo continuous exchange of energy. This permits polarization differences within the sample to be reduced on a timescale much shorter than relaxation effects.
Spin diffusion is a process by which magnetization can be exchanged spontaneously between spins. The process is driven by dipolar coupling, and is therefore related to internuclear distances. Spin diffusion has been used to study many structural problems in the past, ranging from domain sizes in polymers and disorder in glassy materials to high-resolution crystal structure determination of small molecules and proteins.
In solid-state nuclear magnetic resonance, spin diffusion plays a major role in cross polarization (CP) experiments. As mentioned before, by transferring the magnetization (and thus the population) from nuclei with different values of the spin-lattice relaxation time (T1), the overall time for the experiment is reduced; this is very common practice when the sample contains hydrogen. Another desirable effect is that the signal-to-noise ratio (S/N) is increased by up to a theoretical factor of γA/γB, where γ is the gyromagnetic ratio.
Notes
Quantum field theory
Nuclear magnetic resonance | Spin diffusion | [
"Physics",
"Chemistry"
] | 233 | [
"Quantum field theory",
"Nuclear magnetic resonance",
"Quantum mechanics",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs",
"Nuclear physics"
] |
7,628,701 | https://en.wikipedia.org/wiki/Tracking%20%28particle%20physics%29 | In particle physics, tracking is the process of reconstructing the trajectory (or track) of electrically charged particles in a particle detector known as a tracker. The particles entering such a tracker leave a precise record of their passage through the device, by interaction with suitably constructed components and materials. The presence of a calibrated magnetic field, in all or part of the tracker, allows the local momentum of the charged particle to be directly determined from the reconstructed local curvature of the trajectory for known (or assumed) electric charge of the particle.
Generally, track reconstruction is divided into two stages. First, track finding is performed, in which detector hits believed to originate from the same track are grouped together. Second, track fitting is performed: a curve is mathematically fitted to the found hits, and from this fit the momentum is obtained.
Identification and reconstruction of trajectories from the digitised output of a modern tracker can, in the simplest cases, in the absence of a magnetic field and absorbing/scattering material, be achieved via straight-line segment fits. A simple helical model, to determine momentum in the presence of a magnetic field, might be sufficient in less simple cases, through to a complete (e.g.) Kalman Filter process, to provide a detailed reconstructed local model throughout the complete track in the most complex cases.
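As an illustration of the simplest cases mentioned above, the following sketch (with assumed, idealised inputs; not code from any experiment's software) performs a straight-line least-squares fit to a set of hits and converts a fitted radius of curvature into transverse momentum using the standard relation p_T [GeV/c] ≈ 0.3 · |q| · B [T] · R [m].

import numpy as np

def fit_straight_track(xs, ys):
    """Least-squares fit of y = a*x + b to hit positions (no field, no material effects)."""
    a, b = np.polyfit(xs, ys, deg=1)
    return a, b

def pt_from_curvature(radius_m, b_field_t, charge=1):
    """Transverse momentum (GeV/c) from the radius of curvature in a solenoidal field."""
    return 0.3 * abs(charge) * b_field_t * radius_m

# Hits scattered around y = 0.5 x + 1 with 100 micrometre resolution
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 10)
ys = 0.5 * xs + 1.0 + rng.normal(0.0, 1e-4, xs.size)

print(fit_straight_track(xs, ys))      # slope and intercept close to (0.5, 1.0)
print(pt_from_curvature(1.0, 2.0))     # about 0.6 GeV/c for a 1 m radius in a 2 T field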
This reconstruction of trajectory plus momentum allows projection to/through other detectors, which measure other important properties of the particle such as energy or particle type (Calorimeter, Cherenkov Detector). These reconstructed charged particles can be used to identify and reconstruct secondary decays, including those arising from 'unseen' neutral particles, as can be done for B-tagging (in experiments like CDF or at the LHC) and to fully reconstruct events (as in many current particle physics experiments, such as ATLAS, BaBar, Belle and CMS).
In particle physics there have been many devices used for tracking. These include cloud chambers (1920–1950), nuclear emulsion plates (1937–), bubble chambers (1952–), spark chambers (1954-), multi wire proportional chambers (1968–) and drift chambers (1971–), including time projection chambers (1974–). With the advent of semiconductors plus modern photolithography, solid state trackers, also called silicon trackers (1980–), are used in experiments requiring compact, high-precision, fast-readout tracking; for example, close to the primary interaction point in a collider like the LHC.
References
Experimental particle physics
Particle detectors | Tracking (particle physics) | [
"Physics",
"Technology",
"Engineering"
] | 537 | [
"Measuring instruments",
"Particle detectors",
"Experimental physics",
"Particle physics",
"Experimental particle physics",
"Particle physics stubs"
] |
7,631,731 | https://en.wikipedia.org/wiki/Waste%20Incineration%20Directive | The Waste Incineration Directive, more formally Directive 2000/76/EC of the European Parliament and of the Council
of 4 December 2000 on the incineration of waste (OJ L332, P91 – 111), was a Directive issued by the European Union that set out the standards and methodologies required in Europe for the practice and technology of incineration. The aim of the Directive was to minimise negative effects on the environment and human health resulting from emissions to air, soil, surface water and groundwater caused by the incineration and co-incineration of waste. The requirements of the Directive were developed to reflect the ability of modern incineration plants to achieve high standards of emission control more effectively. The Directive has been replaced by the Industrial Emissions Directive since 7 January 2014.
See also
List of solid waste treatment technologies
References
External links
Text of the directive and Summary of the directive
European Union directives
Waste legislation in the European Union
Waste legislation in the United Kingdom
Incineration
2000 in law
2000 in the European Union
2000 in the environment | Waste Incineration Directive | [
"Chemistry",
"Engineering"
] | 218 | [
"Combustion engineering",
"Incineration"
] |
7,633,054 | https://en.wikipedia.org/wiki/Trans-Proteomic%20Pipeline | The Trans-Proteomic Pipeline (TPP) is an open-source data analysis software for proteomics developed at the Institute for Systems Biology (ISB) by the Ruedi Aebersold group under the Seattle Proteome Center. The TPP includes PeptideProphet, ProteinProphet, ASAPRatio, XPRESS and Libra.
Software Components
Probability Assignment and Validation
PeptideProphet performs statistical validation of peptide-spectrum matches (PSMs) using the results of search engines by estimating a false discovery rate (FDR) at the PSM level. The initial PeptideProphet used a fit of a Gaussian distribution for the correct identifications and a fit of a gamma distribution for the incorrect identifications. A later modification of the program allowed the use of a target-decoy approach, using either a variable component mixture model or a semi-parametric mixture model. In PeptideProphet, specifying a decoy tag will use the variable component mixture model, while selecting a non-parametric model will use the semi-parametric mixture model.
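The target-decoy idea behind such validation can be illustrated with a small sketch (a generic illustration under assumed inputs, not TPP code): above a chosen score threshold, the number of decoy matches estimates the number of false target matches.

def target_decoy_fdr(psms, threshold):
    """psms: iterable of (score, is_decoy) pairs; returns the estimated FDR at the threshold."""
    targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
    return decoys / targets if targets else 0.0

# Example: one decoy and two targets pass the 0.9 threshold, so the estimated FDR is 0.5
example = [(0.95, False), (0.92, True), (0.91, False), (0.40, False)]
print(target_decoy_fdr(example, 0.9))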
ProteinProphet identifies proteins based on the results of PeptideProphet.
Mayu performs statistical validation of protein identification by estimating a false discovery rate (FDR) on protein level.
Spectral library handling
The SpectraST tool is able to generate spectral libraries and search datasets using these libraries.
See also
OpenMS
ProteoWizard
Mass spectrometry software
References
Free science software
Bioinformatics software
Mass spectrometry software
Proteomics | Trans-Proteomic Pipeline | [
"Physics",
"Chemistry",
"Biology"
] | 319 | [
"Chromatography",
"Spectrum (physical sciences)",
"Chemistry software",
"Bioinformatics software",
"Bioinformatics",
"Mass spectrometry software",
"Mass spectrometry",
"Chromatography software"
] |
390,905 | https://en.wikipedia.org/wiki/New%20Horizons | {{Infobox spaceflight
| name = New Horizons
| names_list = New Frontiers 1
| image = New Horizons Transparent.png
| image_caption = New Horizons space probe
| insignia = New Horizons - Logo2 big.png
| mission_type = Pluto/Arrokoth flyby
| operator = NASA
| COSPAR_ID =
| SATCAT =
| website =
| mission_duration = Primary: 9.5 years; Elapsed:
| manufacturer = APLSwRI
| launch_mass =
| dry_mass =
| payload_mass =
| dimensions =
| power = 245 watts
| launch_date = UTC (2:00 pm EST)
| launch_rocket = Atlas V (551) AV-010
| launch_site = Cape Canaveral, SLC41
| launch_contractor = International Launch Services
| disposal_type =
| deactivated =
| last_contact =
| orbit_eccentricity = 1.41905
| orbit_inclination = 2.23014°
| orbit_epoch = January 1, 2017 (JD 2457754.5)
| interplanetary =
| instruments_list =
| programme = New Frontiers
| previous_mission =
| next_mission = Juno}}

New Horizons is an interplanetary space probe launched as a part of NASA's New Frontiers program. Engineered by the Johns Hopkins University Applied Physics Laboratory (APL) and the Southwest Research Institute (SwRI), with a team led by Alan Stern, the spacecraft was launched in 2006 with the primary mission to perform a flyby study of the Pluto system in 2015, and a secondary mission to fly by and study one or more other Kuiper belt objects (KBOs) in the decade to follow, which became a mission to 486958 Arrokoth. It is the fifth space probe to achieve the escape velocity needed to leave the Solar System.
On January 19, 2006, New Horizons was launched from Cape Canaveral Air Force Station by an Atlas V rocket directly into an Earth-and-solar escape trajectory with a speed of about . It was the fastest (average speed with respect to Earth) human-made object ever launched from Earth. It is not the fastest speed recorded for a spacecraft, which, as of 2023, is that of the Parker Solar Probe. After a brief encounter with asteroid 132524 APL, New Horizons proceeded to Jupiter, making its closest approach on February 28, 2007, at a distance of . The Jupiter flyby provided a gravity assist that increased New Horizons speed; the flyby also enabled a general test of New Horizons scientific capabilities, returning data about the planet's atmosphere, moons, and magnetosphere.
Most of the post-Jupiter voyage was spent in hibernation mode to preserve onboard systems, except for brief annual checkouts. On December 6, 2014, New Horizons was brought back online for the Pluto encounter, and instrument check-out began. On January 15, 2015, the spacecraft began its approach phase to Pluto.
On July 14, 2015, at 11:49 UTC, it flew above the surface of Pluto, which at the time was 34 AU from the Sun, making it the first spacecraft to explore the dwarf planet. In August 2016, New Horizons was reported to have traveled at speeds of more than . On October 25, 2016, at 21:48 UTC, the last recorded data from the Pluto flyby was received from New Horizons. Having completed its flyby of Pluto, New Horizons then maneuvered for a flyby of Kuiper belt object 486958 Arrokoth (then nicknamed Ultima Thule), which occurred on January 1, 2019, when it was from the Sun. In August 2018, NASA cited results by Alice on New Horizons to confirm the existence of a "hydrogen wall" at the outer edges of the Solar System. This "wall" was first detected in 1992 by the two Voyager spacecraft.
New Horizons is traveling through the Kuiper belt; it is from Earth and from the Sun as of November 2024. NASA has announced it is to extend operations for New Horizons until the spacecraft exits the Kuiper belt, which is expected to occur between 2028 and 2029.
History
In August 1992, JPL scientist Robert Staehle called Pluto discoverer Clyde Tombaugh, requesting permission to visit his planet. "I told him he was welcome to it," Tombaugh later remembered, "though he's got to go one long, cold trip." The call eventually led to a series of proposed Pluto missions leading up to New Horizons.
Stamatios "Tom" Krimigis, head of the Applied Physics Laboratory's space division, one of many entrants in the New Frontiers Program competition, formed the New Horizons team with Alan Stern in December 2000. Appointed as the project's principal investigator, Stern was described by Krimigis as "the personification of the Pluto mission". New Horizons was based largely on Stern's work since Pluto 350 and involved most of the team from Pluto Kuiper Express.
The New Horizons proposal was one of five that were officially submitted to NASA. It was later selected as one of two finalists to be subject to a three-month concept study in June 2001. The other finalist, POSSE (Pluto and Outer Solar System Explorer), was a separate but similar Pluto mission concept by the University of Colorado Boulder, led by principal investigator Larry W. Esposito, and supported by the JPL, Lockheed Martin and the University of California.
However, the APL, in addition to being supported by Pluto Kuiper Express developers at the Goddard Space Flight Center and Stanford University were at an advantage; they had recently developed NEAR Shoemaker for NASA, which had successfully entered orbit around 433 Eros earlier that year, and would later land on the asteroid to scientific and engineering fanfare.
In November 2001, New Horizons was officially selected for funding as part of the New Frontiers program. However, the new NASA Administrator appointed by the Bush administration, Sean O'Keefe, was not supportive of New Horizons and effectively canceled it by not including it in NASA's budget for 2003. NASA's Associate Administrator for the Science Mission Directorate, Ed Weiler, prompted Stern to lobby for the funding of New Horizons in hopes of the mission appearing in the Planetary Science Decadal Survey, a prioritized "wish list," compiled by the United States National Research Council, that reflects the opinions of the scientific community.
After an intense campaign to gain support for New Horizons, the Planetary Science Decadal Survey of 2003–2013 was published in the summer of 2002. New Horizons topped the list of projects considered the highest priority among the scientific community in the medium-size category; ahead of missions to the Moon, and even Jupiter. Weiler stated that it was a result that "[his] administration was not going to fight". Funding for the mission was finally secured following the publication of the report. Stern's team was finally able to start building the spacecraft and its instruments, with a planned launch in January 2006 and arrival at Pluto in 2015. Alice Bowman became Mission Operations Manager (MOM).
Mission profile
New Horizons is the first mission in NASA's New Frontiers mission category, larger and more expensive than the Discovery missions but smaller than the missions of the Flagship Program. The cost of the mission, including spacecraft and instrument development, launch vehicle, mission operations, data analysis, and education/public outreach, is approximately $700 million over 15 years (2001–2016). The spacecraft was built primarily by Southwest Research Institute (SwRI) and the Johns Hopkins Applied Physics Laboratory. The mission's principal investigator is Alan Stern of the Southwest Research Institute (formerly NASA Associate Administrator).
After separation from the launch vehicle, overall control was taken by Mission Operations Center (MOC) at the Applied Physics Laboratory in Howard County, Maryland. The science instruments are operated at Clyde Tombaugh Science Operations Center (T-SOC) in Boulder, Colorado. Navigation is performed at various contractor facilities, whereas the navigational positional data and related celestial reference frames are provided by the Naval Observatory Flagstaff Station through Headquarters NASA and JPL.
KinetX is the lead on the New Horizons navigation team and is responsible for planning trajectory adjustments as the spacecraft speeds toward the outer Solar System. Coincidentally the Naval Observatory Flagstaff Station was where the photographic plates were taken for the discovery of Pluto's moon Charon. The Naval Observatory itself is not far from the Lowell Observatory where Pluto was discovered.

New Horizons was originally planned as a voyage to the only unexplored planet in the Solar System. When the spacecraft was launched, Pluto was still classified as a planet, later to be reclassified as a dwarf planet by the International Astronomical Union (IAU). Some members of the New Horizons team, including Alan Stern, disagree with the IAU definition and still describe Pluto as the ninth planet. Pluto's satellites Nix and Hydra also have a connection with the spacecraft: the first letters of their names (N and H) are the initials of New Horizons. The moons' discoverers chose these names for this reason, plus Nix and Hydra's relationship to the mythological Pluto.
Mementos
In addition to the science equipment, there are nine cultural artifacts traveling with the spacecraft. These include a collection of 434,738 names stored on a compact disc, a collection of images of New Horizons project personnel on another CD, a piece of Scaled Composites's SpaceShipOne, a "Not Yet Explored" USPS stamp, and two copies of the Flag of the United States.
About of Clyde Tombaugh's ashes are aboard the spacecraft, to commemorate his discovery of Pluto in 1930. A Florida state quarter coin, whose design commemorates human exploration, is included, officially as a trim weight, as is a Maryland state quarter to honor the probe's builders. One of the science packages (a dust counter) is named after Venetia Burney, who, as a child, suggested the name "Pluto" after its discovery.
Goal
The goal of the mission is to understand the formation of the Plutonian system, the Kuiper belt, and the transformation of the early Solar System. The spacecraft collected data on the atmospheres, surfaces, interiors, and environments of Pluto and its moons. It will also study other objects in the Kuiper belt. "By way of comparison, New Horizons gathered 5,000 times as much data at Pluto as Mariner did at the Red Planet."
Some of the questions the mission attempts to answer are: What is Pluto's atmosphere made of and how does it behave? What does its surface look like? Are there large geological structures? How do solar wind particles interact with Pluto's atmosphere?
Specifically, the mission's science objectives are to:
Map the surface compositions of Pluto and Charon
Characterize the geologies and morphologies of Pluto and Charon
Characterize the neutral atmosphere of Pluto and its escape rate
Search for an atmosphere around Charon
Map surface temperatures on Pluto and Charon
Search for rings and additional satellites around Pluto
Conduct similar investigations of one or more Kuiper belt objects
Design and construction
Spacecraft subsystems
The spacecraft is comparable in size and general shape to a grand piano and has been compared to a piano glued to a cocktail bar-sized satellite dish. As a point of departure, the team took inspiration from the Ulysses spacecraft, which also carried a radioisotope thermoelectric generator (RTG) and dish on a box-in-box structure through the outer Solar System. Many subsystems and components have flight heritage from APL's CONTOUR spacecraft, which in turn had heritage from APL's TIMED spacecraft.

New Horizons' body forms a triangle, almost thick. (The Pioneers have hexagonal bodies, whereas the Voyagers, Galileo, and Cassini–Huygens have decagonal, hollow bodies.) A 7075 aluminium alloy tube forms the main structural column, between the launch vehicle adapter ring at the "rear", and the radio dish antenna affixed to the "front" flat side. The titanium fuel tank is in this tube. The RTG attaches with a 4-sided titanium mount resembling a gray pyramid or stepstool.
Titanium provides strength and thermal isolation. The rest of the triangle is primarily sandwich panels of thin aluminum face sheet (less than ) bonded to aluminum honeycomb core. The structure is larger than strictly necessary, with empty space inside. The structure is designed to act as shielding, reducing electronics errors caused by radiation from the RTG. Also, the mass distribution required for a spinning spacecraft demands a wider triangle.
The interior structure is painted black to equalize temperature by radiative heat transfer. Overall, the spacecraft is thoroughly blanketed to retain heat. Unlike the Pioneers and Voyagers, the radio dish is also enclosed in blankets that extend to the body. The heat from the RTG adds warmth to the spacecraft while it is in the outer Solar System. While in the inner Solar System, the spacecraft must prevent overheating, hence electronic activity is limited, power is diverted to shunts with attached radiators, and louvers are opened to radiate excess heat. While the spacecraft is cruising inactively in the cold outer Solar System, the louvers are closed, and the shunt regulator reroutes power to electric heaters.
Propulsion and attitude control
New Horizons has both spin-stabilized (cruise) and three-axis stabilized (science) modes controlled entirely with hydrazine monopropellant. Additional post-launch delta-v of over is provided by an internal tank. Helium is used as a pressurant, with an elastomeric diaphragm assisting expulsion. The spacecraft's on-orbit mass including fuel is over on the Jupiter flyby trajectory, but would have been only for the backup direct flight option to Pluto. Significantly, had the backup option been taken, this would have meant less fuel for later Kuiper belt operations.
There are 16 thrusters on New Horizons: four and twelve plumbed into redundant branches. The larger thrusters are used primarily for trajectory corrections, and the small ones (previously used on Cassini and the Voyager spacecraft) are used primarily for attitude control and spinup/spindown maneuvers. Two star cameras are used to measure the spacecraft attitude. They are mounted on the face of the spacecraft and provide attitude information while in spin-stabilized or 3-axis mode. In between the time of star camera readings, spacecraft orientation is provided by dual redundant miniature inertial measurement units. Each unit contains three solid-state gyroscopes and three accelerometers. Two Adcole Sun sensors provide attitude determination. One detects the angle to the Sun, whereas the other measures spin rate and clocking.
Power
A cylindrical radioisotope thermoelectric generator (RTG) protrudes in the plane of the triangle from one vertex of the triangle. The RTG provided of power at launch, and was predicted to drop approximately every year, decaying to by the time of its encounter with the Plutonian system in 2015 and to decay too far to power the transmitters in the 2030s. There are no onboard batteries since RTG output is predictable, and load transients are handled by a capacitor bank and fast circuit breakers. As of January 2019, the power output of the RTG was about .
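The decline in output follows roughly from the physics of the generator: the plutonium-238 heat source decays with a half-life of about 87.7 years, and the thermocouples that convert the heat degrade as well. A minimal Python sketch of the trend is below; the launch power and the thermocouple degradation rate are assumed placeholder values, not figures taken from this article.

    # Rough model of RTG output over time (illustrative assumptions only).
    P0 = 240.0            # W at launch -- assumed placeholder, not from this article
    HALF_LIFE = 87.7      # years, plutonium-238
    EXTRA_DECAY = 0.008   # per year, assumed lumped thermocouple degradation

    def rtg_power(years_after_launch: float) -> float:
        fuel = 0.5 ** (years_after_launch / HALF_LIFE)       # radioactive decay of the heat source
        converter = (1 - EXTRA_DECAY) ** years_after_launch  # slow loss of conversion efficiency
        return P0 * fuel * converter

    for years, label in [(0, "launch, 2006"), (9.5, "Pluto flyby, 2015"), (30, "circa 2036")]:
        print(f"{label}: ~{rtg_power(years):.0f} W")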
The RTG, model "GPHS-RTG", was originally a spare from the Cassini mission. The RTG contains of plutonium-238 oxide pellets. Each pellet is clad in iridium, then encased in a graphite shell. It was developed by the U.S. Department of Energy at the Materials and Fuels Complex, a part of the Idaho National Laboratory.
The original RTG design called for of plutonium, but a unit less powerful than the original design goal was produced because of delays at the United States Department of Energy, including security activities, which slowed plutonium production. The mission parameters and observation sequence had to be modified for the reduced wattage; still, not all instruments can operate simultaneously. The Department of Energy transferred the space battery program from Ohio to Argonne in 2002 because of security concerns.
The amount of radioactive plutonium in the RTG is about one-third the amount on board the Cassini–Huygens probe when it launched in 1997. The Cassini launch had been protested by multiple organizations, due to the risk of such a large amount of plutonium being released into the atmosphere in case of an accident. The United States Department of Energy estimated the chances of a launch accident that would release radiation into the atmosphere at 1 in 350, and monitored the launch because of the inclusion of an RTG on board. It was estimated that a worst-case scenario of total dispersal of on-board plutonium would spread the equivalent of 80% of the average annual dosage in North America from background radiation over an area with a radius of .
Flight computer
The spacecraft carries two computer systems: the Command and Data Handling system and the Guidance and Control processor. Each of the two systems is duplicated for redundancy, for a total of four computers. The processor used for the flight computers is the Mongoose-V, a 12 MHz radiation-hardened version of the MIPS R3000 CPU. Multiple redundant clocks and timing routines are implemented in hardware and software to help prevent faults and downtime. To conserve heat and mass, spacecraft and instrument electronics are housed together in IEMs (integrated electronics modules). There are two redundant IEMs. Including other functions such as instrument and radio electronics, each IEM contains 9 boards. The probe's software runs on the Nucleus RTOS operating system.
There have been two "safing" events, that sent the spacecraft into safe mode:
On March 19, 2007, the Command and Data Handling computer experienced an uncorrectable memory error and rebooted itself, causing the spacecraft to go into safe mode. The craft fully recovered within two days, with some data loss on Jupiter's magnetotail. No impact on the subsequent mission was expected.
On July 4, 2015, there was a CPU safing event triggered by an over-assignment of commanded science operations on the craft's approach to Pluto. The craft recovered within two days without major impact on its mission, and controllers reduced the number of queued science operations to prevent a recurrence during the Pluto approach.
Telecommunications and data handling
Communication with the spacecraft is via X band. The craft had a communication rate of at Jupiter; at Pluto's distance, a rate of approximately per transmitter was expected. Besides the low data rate, Pluto's distance also causes a latency of about 4.5 hours (one-way). The NASA Deep Space Network (DSN) dishes are used to relay commands once the spacecraft is beyond Jupiter. The spacecraft uses dual modular redundancy transmitters and receivers, and either right- or left-hand circular polarization.
The downlink signal is amplified by dual redundant 12-watt traveling-wave tube amplifiers (TWTAs) mounted on the body under the dish. The receivers are low-power designs. The system can be controlled to power both TWTAs at the same time, and transmit a dual-polarized downlink signal to the DSN that nearly doubles the downlink rate. DSN tests early in the mission with this dual polarization combining technique were successful, and the capability was declared to be operational (when the spacecraft power budget permits both TWTAs to be powered).
In addition to the high-gain antenna, there are two backup low-gain antennas and a medium-gain dish. The high-gain dish has a Cassegrain reflector layout, composite construction, of diameter providing over of gain and a half-power beam width of about a degree. The prime-focus medium-gain antenna, with a aperture and 10° half-power beam width, is mounted to the forward-facing side of the high-gain antenna's secondary reflector. The forward low-gain antenna is stacked atop the feed of the medium-gain antenna. The aft low-gain antenna is mounted within the launch adapter at the rear of the spacecraft. This antenna was used only for early mission phases near Earth, just after launch and for emergencies if the spacecraft had lost attitude control.

New Horizons recorded scientific instrument data to its solid-state memory buffer at each encounter, then transmitted the data to Earth. Data storage is done on two low-power solid-state recorders (one primary, one backup) holding up to s each. Because of the extreme distance from Pluto and the Kuiper belt, only one buffer load at those encounters can be saved. This is because New Horizons would require approximately 16 months after leaving the vicinity of Pluto to transmit the buffer load back to Earth. At Pluto's distance, radio signals from the space probe back to Earth took four hours and 25 minutes to traverse 4.7 billion km of space.
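The quoted signal delay is simply the light-travel time over that distance. A quick check using the rounded 4.7 billion km figure (which gives a slightly shorter time than the 4 h 25 min quoted for the actual geometry):

    # One-way light time from Pluto's distance to Earth.
    c_km_s = 299_792.458        # speed of light, km/s
    distance_km = 4.7e9         # "4.7 billion km", from the paragraph above

    one_way_s = distance_km / c_km_s
    hours, rem = divmod(one_way_s, 3600)
    print(f"one-way light time ≈ {int(hours)} h {rem / 60:.0f} min")   # ≈ 4 h 21 min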
Part of the reason for the delay between the gathering of and transmission of data is that all of the New Horizons instrumentation is body-mounted. In order for the cameras to record data, the entire probe must turn, and the one-degree-wide beam of the high-gain antenna was not pointing toward Earth. Previous spacecraft, such as the Voyager program probes, had a rotatable instrumentation platform (a "scan platform") that could take measurements from virtually any angle without losing radio contact with Earth. New Horizons was mechanically simplified to save weight, shorten the schedule, and improve reliability during its 15-year designed lifetime.
The Voyager 2 scan platform jammed at Saturn, and the demands of long time exposures at outer planets led to a change of plans such that the entire probe was rotated to make photos at Uranus and Neptune, similar to how New Horizons rotated.
Instruments
New Horizons carries seven instruments: three optical instruments, two plasma instruments, a dust sensor and a radio science receiver/radiometer. The instruments are to be used to investigate the global geology, surface composition, surface temperature, atmospheric pressure, atmospheric temperature and escape rate of Pluto and its moons. The rated power is , though not all instruments operate simultaneously. In addition, New Horizons has an Ultrastable Oscillator subsystem, which may be used to study and test the Pioneer anomaly towards the end of the spacecraft's life.
Long-Range Reconnaissance Imager (LORRI)
The Long-Range Reconnaissance Imager (LORRI) is a long-focal-length imager designed for high resolution and responsivity at visible wavelengths. The instrument is equipped with a 1024×1024 pixel by 12-bits-per-pixel monochromatic CCD imager giving a resolution of 5 μrad (~1 arcsec). The CCD is chilled far below freezing by a passive radiator on the antisolar face of the spacecraft. This temperature differential requires insulation and isolation from the rest of the structure. The aperture Ritchey–Chrétien mirrors and metering structure are made of silicon carbide to boost stiffness, reduce weight and prevent warping at low temperatures. The optical elements sit in a composite light shield and mount with titanium and fiberglass for thermal isolation. Overall mass is , with the optical tube assembly (OTA) weighing about , for one of the largest silicon-carbide telescopes flown at the time (now surpassed by Herschel). For viewing on public websites the 12-bit-per-pixel LORRI images are converted to 8-bit-per-pixel JPEG images. These public images do not contain the full dynamic range of brightness information available from the raw LORRI image files.
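The quoted 5 μrad pixel scale translates directly into surface resolution at a given range. The sketch below is only an order-of-magnitude cross-check; the 12,500 km figure is the closest-approach distance quoted later in the flyby section, and the other distances are arbitrary examples.

    # Image scale (metres per pixel) for a 5 microradian instantaneous field of view.
    ifov_rad = 5e-6                               # ~1 arcsecond per pixel
    for distance_km in (12_500, 100_000, 1_000_000):
        scale_m = ifov_rad * distance_km * 1000   # small-angle approximation
        print(f"at {distance_km:>9,} km: ~{scale_m:,.0f} m per pixel")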
Solar Wind Around Pluto (SWAP)
Solar Wind Around Pluto (SWAP) is a toroidal electrostatic analyzer and retarding potential analyzer (RPA) that makes up one of the two instruments comprising New Horizons' plasma and high-energy particle spectrometer suite (PAM), the other being PEPSSI. SWAP measures particles of up to 6.5 keV and, because of the tenuous solar wind at Pluto's distance, the instrument is designed with the largest aperture of any such instrument ever flown.
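For a sense of scale, the 6.5 keV upper limit comfortably covers solar-wind protons: typical bulk speeds of roughly 400–800 km/s (an assumed range, not stated in this article) correspond to proton kinetic energies of only a few keV.

    # Kinetic energy of a solar-wind proton at an assumed bulk speed.
    M_PROTON = 1.673e-27   # kg
    EV = 1.602e-19         # joules per electronvolt

    for v_km_s in (400, 800):
        e_kev = 0.5 * M_PROTON * (v_km_s * 1e3) ** 2 / EV / 1e3
        print(f"{v_km_s} km/s proton ≈ {e_kev:.1f} keV")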
Pluto Energetic Particle Spectrometer Science Investigation (PEPSSI)
Pluto Energetic Particle Spectrometer Science Investigation (PEPSSI) is a time-of-flight ion and electron sensor that makes up one of the two instruments comprising New Horizons' plasma and high-energy particle spectrometer suite (PAM), the other being SWAP. Unlike SWAP, which measures particles of up to 6.5 keV, PEPSSI goes up to 1 MeV. The PEPSSI sensor has been designed to measure the mass, energy and distribution of charged particles around Pluto, and is also able to differentiate between protons, electrons, and other heavy ions.
Alice
Alice is an ultraviolet imaging spectrometer that is one of two photographic instruments comprising New Horizons' Pluto Exploration Remote Sensing Investigation (PERSI), the other being the Ralph telescope. It resolves 1,024 wavelength bands in the far and extreme ultraviolet (from 50–), over 32 view fields. Its goal is to determine the composition of Pluto's atmosphere. This Alice instrument is derived from another Alice aboard ESA's Rosetta spacecraft. The instrument has a mass of 4.4 kg and draws 4.4 watts of power. Its primary role is to determine the relative concentrations of various elements and isotopes in Pluto's atmosphere.
In August 2018, NASA confirmed, based on results by Alice on the New Horizons spacecraft, a "hydrogen wall" at the outer edges of the Solar System that was first detected in 1992 by the two Voyager spacecraft.
Ralph telescope
The Ralph telescope, 75 mm in aperture, is one of two photographic instruments that make up New Horizons' Pluto Exploration Remote Sensing Investigation (PERSI), with the other being the Alice instrument. Ralph has two separate channels: MVIC (Multispectral Visible Imaging Camera), a visible-light CCD imager with broadband and color channels; and LEISA (Linear Etalon Imaging Spectral Array), a near-infrared imaging spectrometer. LEISA is derived from a similar instrument on the Earth Observing-1 spacecraft. Ralph was named after Alice's husband on The Honeymooners, and was designed after Alice.
On June 23, 2017, NASA announced that it had renamed the LEISA instrument the "Lisa Hardaway Infrared Mapping Spectrometer" in honor of Lisa Hardaway, the Ralph program manager at Ball Aerospace, who died in January 2017 at age 50.
Venetia Burney Student Dust Counter (VBSDC)
The Venetia Burney Student Dust Counter (VBSDC), built by students at the University of Colorado Boulder, is operating periodically to make dust measurements. It consists of a detector panel, about , mounted on the anti-solar face of the spacecraft (the ram direction), and an electronics box within the spacecraft. The detector contains fourteen polyvinylidene difluoride (PVDF) panels, twelve science and two reference, which generate voltage when impacted. Effective collecting area is . No dust counter has operated past the orbit of Uranus; models of dust in the outer Solar System, especially the Kuiper belt, are speculative. The VBSDC is always turned on measuring the masses of the interplanetary and interstellar dust particles (in the range of nano- and picograms) as they collide with the PVDF panels mounted on the New Horizons spacecraft. The measured data is expected to greatly contribute to the understanding of the dust spectra of the Solar System. The dust spectra can then be compared with those from observations of other stars, giving new clues as to where Earth-like planets can be found in the universe. The dust counter is named for Venetia Burney, who first suggested the name "Pluto" at the age of 11. A thirteen-minute short film about the VBSDC garnered an Emmy Award for student achievement in 2006.
Radio Science Experiment (REX)
The Radio Science Experiment (REX) used an ultrastable crystal oscillator (essentially a calibrated crystal in a miniature oven) and some additional electronics to conduct radio science investigations using the communications channels. These are small enough to fit on a single card. Because there are two redundant communications subsystems, there are two, identical REX circuit boards.
Journey to Pluto
Launch
On September 24, 2005, the spacecraft arrived at the Kennedy Space Center on board a C-17 Globemaster III for launch preparations. The launch of New Horizons was originally scheduled for January 11, 2006, but was initially delayed until January 17, 2006, to allow for borescope inspections of the Atlas V's kerosene tank. Further delays related to low cloud ceiling conditions downrange, and high winds and technical difficulties—unrelated to the rocket itself—prevented launch for a further two days.
The probe finally lifted off from Pad 41 at Cape Canaveral Air Force Station, Florida, directly south of Space Shuttle Launch Complex 39, at 19:00 UTC on January 19, 2006. The Centaur second stage ignited at 19:04:43 UTC and burned for 5 minutes 25 seconds. It reignited at 19:32 UTC and burned for 9 minutes 47 seconds. The ATK Star 48B third stage ignited at 19:42:37 UTC and burned for 1 minute 28 seconds. Combined, these burns successfully sent the probe on a solar-escape trajectory at . New Horizons took only nine hours to pass the Moon's orbit. Although there were backup launch opportunities in February 2006 and February 2007, only the first twenty-three days of the 2006 window permitted the Jupiter flyby. Any launch outside that period would have forced the spacecraft to fly a slower trajectory directly to Pluto, delaying its encounter by five to six years.
The probe was launched by a Lockheed Martin Atlas V 551 rocket, with a third stage added to increase the heliocentric (escape) speed. This was the first launch of the Atlas V 551 configuration, which uses five solid rocket boosters, and the first Atlas V with a third stage. Previous flights had used zero, two, or three solid boosters, but never five. The vehicle, AV-010, weighed at lift-off, and had earlier been slightly damaged when Hurricane Wilma swept across Florida on October 24, 2005. One of the solid rocket boosters was hit by a door. The booster was replaced with an identical unit, rather than inspecting and requalifying the original.
The launch was dedicated to the memory of launch conductor Daniel Sarokon, who was described by space program officials as one of the most influential people in the history of space travel.
Inner Solar System
Trajectory corrections
On January 28 and 30, 2006, mission controllers guided the probe through its first trajectory-correction maneuver (TCM), which was divided into two parts (TCM-1A and TCM-1B). The total velocity change of these two corrections was about . TCM-1 was accurate enough to permit the cancellation of TCM-2, the second of three originally scheduled corrections. On March 9, 2006, controllers performed TCM-3, the last of three scheduled course corrections. The engines burned for 76 seconds, adjusting the spacecraft's velocity by about . Further trajectory maneuvers were not needed until September 25, 2007 (seven months after the Jupiter flyby), when the engines were fired for 15 minutes and 37 seconds, changing the spacecraft's velocity by . Another TCM followed almost three years later, on June 30, 2010; it lasted 35.6 seconds, by which time New Horizons had already reached the halfway point (in time traveled) to Pluto.
In-flight tests and crossing of Mars orbit
During the week of February 20, 2006, controllers conducted initial in-flight tests of three onboard science instruments, the Alice ultraviolet imaging spectrometer, the PEPSSI plasma-sensor, and the LORRI long-range visible-spectrum camera. No scientific measurements or images were taken, but instrument electronics, and in the case of Alice, some electromechanical systems were shown to be functioning correctly.
On April 7, 2006, the spacecraft passed the orbit of Mars, moving at roughly away from the Sun at a solar distance of 243 million kilometers.
Asteroid 132524 APL
Because of the need to conserve fuel for possible encounters with Kuiper belt objects subsequent to the Pluto flyby, intentional encounters with objects in the asteroid belt were not planned. After launch, the New Horizons team scanned the spacecraft's trajectory to determine if any asteroids would, by chance, be close enough for observation. In May 2006 it was discovered that New Horizons would pass close to the tiny asteroid 132524 APL on June 13, 2006. Closest approach occurred at 4:05 UTC at a distance of (around one quarter of the average Earth–Moon distance). The asteroid was imaged by Ralph (use of LORRI was not possible because of proximity to the Sun), which gave the team a chance to test Ralph's capabilities, and to make observations of the asteroid's composition as well as its light and phase curves. The asteroid was estimated to be in diameter. The spacecraft successfully tracked the rapidly moving asteroid over June 10–12, 2006.
First Pluto sighting
The first images of Pluto from New Horizons were acquired September 21–24, 2006, during a test of LORRI. They were released on November 28, 2006. The images, taken from a distance of approximately , confirmed the spacecraft's ability to track distant targets, critical for maneuvering toward Pluto and other Kuiper belt objects.
Jupiter encounter
New Horizons used LORRI to take its first photographs of Jupiter on September 4, 2006, from a distance of . More detailed exploration of the system began in January 2007 with an infrared image of the moon Callisto, as well as several black-and-white images of Jupiter itself. New Horizons received a gravity assist from Jupiter, with its closest approach at 05:43:40 UTC on February 28, 2007, when it was from Jupiter. The flyby increased New Horizons' speed by , accelerating the probe to a velocity of relative to the Sun and shortening its voyage to Pluto by three years.
The flyby was the center of a four-month intensive observation campaign lasting from January to June. Being an ever-changing scientific target, Jupiter has been observed intermittently since the end of the Galileo mission in September 2003. Knowledge about Jupiter benefited from the fact that New Horizons' instruments were built using the latest technology, especially in the area of cameras, representing a significant improvement over Galileo's cameras, which were modified versions of Voyager cameras, which, in turn, were modified Mariner cameras. The Jupiter encounter also served as a shakedown and dress rehearsal for the Pluto encounter. Because Jupiter is much closer to Earth than Pluto, the communications link can transmit multiple loadings of the memory buffer; thus the mission returned more data from the Jovian system than it was expected to transmit from Pluto.
One of the main goals during the Jupiter encounter was observing its atmospheric conditions and analyzing the structure and composition of its clouds. Heat-induced lightning strikes in the polar regions and "waves" that indicate violent storm activity were observed and measured. The Little Red Spot, spanning up to 70% of Earth's diameter, was imaged from up close for the first time. Recording from different angles and illumination conditions, New Horizons took detailed images of Jupiter's faint ring system, discovering debris left over from recent collisions within the rings or from other unexplained phenomena. The search for undiscovered moons within the rings showed no results. Travelling through Jupiter's magnetosphere, New Horizons collected valuable particle readings. "Bubbles" of plasma that are thought to be formed from material ejected by the moon Io were noticed in the magnetotail.
Jovian moons
The four largest moons of Jupiter were in poor positions for observation; the necessary path of the gravity-assist maneuver meant that New Horizons passed millions of kilometers from any of the Galilean moons. Still, its instruments were intended for small, dim targets, so they were scientifically useful on large, distant moons. Emphasis was put on Jupiter's innermost Galilean moon, Io, whose active volcanoes shoot out tons of material into Jupiter's magnetosphere, and further. Out of eleven observed eruptions, three were seen for the first time. That of Tvashtar reached an altitude of up to . The event gave scientists an unprecedented look into the structure and motion of the rising plume and its subsequent fall back to the surface. Infrared signatures of a further 36 volcanoes were noticed. Callisto's surface was analyzed with LEISA, revealing how lighting and viewing conditions affect infrared spectrum readings of its surface water ice. Minor moons such as Amalthea had their orbit solutions refined. The cameras determined their positions, acting as "reverse optical navigation".
Outer Solar System
After passing Jupiter, New Horizons spent most of its journey towards Pluto in hibernation mode. Redundant components as well as guidance and control systems were shut down to extend their life cycle, decrease operation costs and free the Deep Space Network for other missions. During hibernation mode, the onboard computer monitored the probe's systems and transmitted a signal back to Earth; a "green" code if everything was functioning as expected or a "red" code if mission control's assistance was needed. The probe was activated for about two months a year so that the instruments could be calibrated and the systems checked. The first hibernation mode cycle started on June 28, 2007, the second cycle began on December 16, 2008, the third cycle on August 27, 2009, and the fourth cycle on August 29, 2014, after a 10-week test.

New Horizons crossed the orbit of Saturn on June 8, 2008, and Uranus on March 18, 2011. After astronomers announced the discovery of two new moons in the Pluto system, Kerberos and Styx, mission planners started contemplating the possibility of the probe running into unseen debris and dust left over from ancient collisions between the moons. A study based on 18 months of computer simulations, Earth-based telescope observations and occultations of the Pluto system revealed that the possibility of a catastrophic collision with debris or dust was less than 0.3% on the probe's scheduled course. If the hazard increased, New Horizons could have used one of two possible contingency plans, the so-called SHBOTs (Safe Haven by Other Trajectories). Either the probe could have continued on its present trajectory with the antenna facing the incoming particles so the more vital systems would be protected, or it could have positioned its antenna to make a course correction that would take it just from the surface of Pluto where it was expected that the atmospheric drag would have cleaned the surrounding space of possible debris.
While in hibernation mode in July 2012, New Horizons started gathering scientific data with SWAP, PEPSSI and VBSDC. Although it was originally planned to activate just the VBSDC, other instruments were powered on in order to collect valuable heliospheric data. Before activating the other two instruments, ground tests were conducted to make sure that the expanded data gathering in this phase of the mission would not limit available energy, memory and fuel in the future and that all systems were functioning during the flyby. The first set of data was transmitted in January 2013 during a three-week activation from hibernation. The command and data handling software was updated to address the problem of computer resets.
Possible Neptune trojan targets
Other possible targets were Neptune trojans. The probe's trajectory to Pluto passed near Neptune's trailing Lagrange point (L5), which may host hundreds of bodies in 1:1 resonance. In late 2013, New Horizons passed within of a high-inclination L5 Neptune trojan, which was discovered shortly before by the New Horizons KBO Search task, a survey to find additional distant objects for New Horizons to fly by after its 2015 encounter with Pluto. At that range, the object would have been bright enough to be detectable by New Horizons' LORRI instrument; however, the New Horizons team eventually decided that it would not be targeted for observations because preparations for the Pluto approach took precedence. On August 25, 2014, New Horizons crossed the orbit of Neptune, exactly 25 years after the planet was visited by the Voyager 2 probe. This was the last major planet orbit crossing before the Pluto flyby. At the time, the spacecraft was away from Neptune and from the Sun.
Observations of Pluto and Charon 2013–14
Images from July 1 to 3, 2013, by LORRI were the first by the probe to resolve Pluto and Charon as separate objects. On July 14, 2014, mission controllers performed a sixth trajectory-correction maneuver (TCM) since its launch to enable the craft to reach Pluto. Between July 19–24, 2014, New Horizons LORRI snapped 12 images of Charon revolving around Pluto, covering almost one full rotation at distances ranging from about . In August 2014, astronomers made high-precision measurements of Pluto's location and orbit around the Sun using the Atacama Large Millimeter/submillimeter Array (ALMA), an array of radio telescopes located in Chile, to help NASA's New Horizons spacecraft accurately home in on Pluto. On December 6, 2014, mission controllers sent a signal for the craft to "wake up" from its final Pluto-approach hibernation and begin regular operations. The craft's response that it was "awake" reached Earth on December 7, 2014, at 02:30 UTC.
Pluto approach
Distant-encounter operations at Pluto began on January 4, 2015. On this date, images of the targets with the onboard LORRI imager plus the Ralph telescope were only a few pixels in width. Investigators began taking Pluto images and background starfield images to assist mission navigators in the design of course-correcting engine maneuvers that would precisely modify the trajectory of New Horizons to aim the approach.
On February 12, 2015, NASA released new images of Pluto (taken from January 25 to 31) from the approaching probe. New Horizons was more than away from Pluto when it began taking the photos, which showed Pluto and its largest moon, Charon. The exposure time was too short to see Pluto's smaller, much fainter moons.
Investigators compiled a series of images of the moons Nix and Hydra taken from January 27 through February 8, 2015, beginning at a range of . The other two, even smaller moons—Kerberos and Styx—were seen on photos taken on April 25. Starting on May 11, a hazard search was performed, looking for unknown objects that could be a danger to the spacecraft, such as rings or hitherto undiscovered moons, which could then possibly be avoided by a course change. No rings or additional moons were found.
On August 21, 2012, the team had announced that they would spend mission time during the January 2015 approach phase attempting long-range observations of the Kuiper belt object temporarily designated VNH0004 (now designated ), when the object was at a distance of from New Horizons. The object would be too distant to resolve surface features or take spectroscopy, but it would be able to make observations that cannot be made from Earth, namely a phase curve and a search for small moons. A second object was planned to be observed in June 2015, and a third in September after the flyby; the team hoped to observe a dozen such objects through 2018. On April 15, 2015, Pluto was imaged showing a possible polar cap.
Software glitch
On July 4, 2015, New Horizons experienced a software anomaly and went into safe mode, preventing the spacecraft from performing scientific observations until engineers could resolve the problem. On July 5, NASA announced that the problem was determined to be a timing flaw in a command sequence used to prepare the spacecraft for its flyby, and the spacecraft would resume scheduled science operations on July 7. The science observations lost because of the anomaly were judged to have no impact on the mission's main objectives and minimal impact on other objectives.
The timing flaw consisted of performing two tasks simultaneously—compressing previously acquired data to release space for more data, and making a second copy of the approach command sequence—that together overloaded the spacecraft's primary computer. After the overload was detected, the spacecraft performed as designed: it switched from the primary computer to the backup computer, entered safe mode, and sent a distress call back to Earth. The distress call was received the afternoon of July 4 and alerted engineers that they needed to contact the spacecraft to get more information and resolve the issue. The resolution was that the problem happened as part of preparations for the approach, and was not expected to happen again because no similar tasks were planned for the remainder of the encounter.
Pluto system encounter
The closest approach of the New Horizons spacecraft to Pluto occurred at 11:49 UTC on July 14, 2015, at a range of from the surface and from the center of Pluto. Telemetry data confirming a successful flyby and a healthy spacecraft was received on Earth from the vicinity of the Pluto system on July 15, 2015, 00:52:37 UTC, after 22 hours of planned radio silence due to the spacecraft being pointed towards the Pluto system. Mission managers estimated a one in 10,000 chance that debris could have destroyed the probe or its communication-systems during the flyby, preventing it from sending data to Earth. The first details of the encounter were received the next day, but the download of the complete data set through the 2 kbps data downlink took just over 15 months.
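The length of that downlink follows almost entirely from data volume versus data rate. A rough sketch, assuming the ~6.25 gigabytes quoted later in this article and ignoring DSN scheduling gaps, packet overhead, and the rate varying between about 1 and 2 kbit/s:

    # Idealised downlink duration for the Pluto data set.
    data_bits = 6.25e9 * 8                  # ~6.25 GB, figure quoted later in the article
    for rate_bps in (1000, 2000):
        days = data_bits / rate_bps / 86_400
        print(f"at {rate_bps / 1000:.0f} kbit/s continuous: ~{days:.0f} days (~{days / 30:.0f} months)")

The quoted "just over 15 months" falls between these ideal bounds once non-continuous Deep Space Network coverage is taken into account.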
Objectives
The mission's science objectives were grouped in three distinct priorities. The "primary objectives" were required. The "secondary objectives" were expected to be met but were not demanded. The "tertiary objectives" were desired. These objectives could have been skipped in favor of the above objectives. An objective to measure any magnetic field of Pluto was dropped, due to mass and the expense associated with including a magnetometer on the spacecraft. Instead, SWAP and PEPSSI could indirectly detect magnetic fields around Pluto.
Primary objectives (required)
Characterize the global geology and morphology of Pluto and Charon
Map chemical compositions of Pluto and Charon surfaces
Characterize the neutral (non-ionized) atmosphere of Pluto and its escape rate
Secondary objectives (expected)
Characterize the time variability of Pluto's surface and atmosphere
Image select Pluto and Charon areas in stereo
Map the terminators (day/night border) of Pluto and Charon with high resolution
Map the chemical compositions of select Pluto and Charon areas with high resolution
Characterize Pluto's ionosphere (upper layer of the atmosphere) and its interaction with the solar wind
Search for molecular neutral species such as molecular hydrogen, hydrocarbons, hydrogen cyanide and other nitriles in the atmosphere
Search for any Charon atmosphere
Determine bolometric Bond albedos for Pluto and Charon
Map surface temperatures of Pluto and Charon
Map any additional surfaces of outermost moons: Nix, Hydra, Kerberos, and Styx
Tertiary objectives (desired)
Characterize the energetic particle environment at Pluto and Charon
Refine bulk parameters (radii, masses) and orbits of Pluto and Charon
Search for additional moons and any rings
"The New Horizons flyby of the Pluto system was fully successful, meeting and in many cases exceeding, the Pluto objectives set out for it by NASA and the National Academy of Sciences."
Flyby details
On July 14, 2015, at 11:50 UTC, New Horizons made its closest approach to Pluto, passing within 12,500 km (7,800 mi) at a speed of 13.78 km/s (49,600 km/h; 30,800 mph), while also coming as close as 28,800 km (17,900 mi) to Charon. Starting 3.2 days prior, the spacecraft mapped Pluto and Charon with 40 km (25 mi) resolution, enabling coverage of all sides. Close-range imaging was conducted twice daily to monitor for surface changes, such as snowfall or cryovolcanism. During the flyby, LORRI captured images with up to 50 m (160 ft) resolution, MVIC created four-color global maps at 1.6 km (1 mi) resolution, and LEISA obtained near-infrared hyperspectral maps at resolutions ranging from 7 km/px (4.3 mi/px) globally to 0.6 km/px (0.37 mi/px) for selected areas.
Meanwhile, Alice characterized the atmosphere, both by emissions of atmospheric molecules (airglow), and by dimming of background stars as they pass behind Pluto (occultation). During and after closest approach, SWAP and PEPSSI sampled the high atmosphere and its effects on the solar wind. VBSDC searched for dust, inferring meteoroid collision rates and any invisible rings. REX performed active and passive radio science. The communications dish on Earth measured the disappearance and reappearance of the radio occultation signal as the probe flew by behind Pluto. The results resolved Pluto's diameter (by their timing) and atmospheric density and composition (by their weakening and strengthening pattern). (Alice can perform similar occultations, using sunlight instead of radio beacons.) Previous missions had the spacecraft transmit through the atmosphere, to Earth ("downlink"). Pluto's mass and mass distribution were evaluated by the gravitational tug on the spacecraft. As the spacecraft speeds up and slows down, the radio signal exhibited a Doppler shift. The Doppler shift was measured by comparison with the ultrastable oscillator in the communications electronics.
Reflected sunlight from Charon allowed some imaging observations of the nightside. Backlighting by the Sun gave an opportunity to highlight any rings or atmospheric hazes. REX performed radiometry of the nightside.
Satellite observations
New Horizons' best spatial resolution of the small satellites is at Nix, at Hydra, and approximately at Kerberos and Styx. Estimates for the dimensions of these bodies are: Nix at ; Hydra at ; Kerberos at ; and Styx at .
Initial predictions envisioned Kerberos as a relatively large and massive object whose dark surface led to it having a faint reflection. This proved to be wrong as images obtained by New Horizons on July 14 and sent back to Earth in October 2015 revealed that Kerberos was smaller in size, across, with a highly reflective surface suggesting the presence of relatively clean water ice, similar to the rest of Pluto's smaller moons.
Post-Pluto events
Soon after the Pluto flyby, in July 2015, New Horizons reported that the spacecraft was healthy, its flight path was within the margins, and science data of the Pluto–Charon system had been recorded. The spacecraft's immediate task was to begin returning the 6.25 gigabytes of information collected. The free-space path loss at its distance of 4.5 light-hours is approximately 303 dB at 7 GHz. Using the high-gain antenna and transmitting at full power, the EIRP is +83 dBm, and at this distance the signal reaching Earth is −220 dBm. The received signal level (RSL) using one, un-arrayed Deep Space Network antenna with 72 dBi of forward gain equals −148 dBm. Because of the extremely low RSL, it could only transmit data at 1 to 2 kilobits per second.
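These link-budget figures are internally consistent and can be reproduced from the standard free-space path-loss formula; the sketch below uses only the round numbers quoted above (4.5 light-hours, 7 GHz, +83 dBm EIRP, 72 dBi receive gain).

    import math

    # Free-space path loss and received signal level for the New Horizons downlink.
    c = 2.998e8                        # speed of light, m/s
    d = 4.5 * 3600 * c                 # 4.5 light-hours expressed in metres
    f = 7.0e9                          # X-band downlink frequency, ~7 GHz
    wavelength = c / f

    fspl_db = 20 * math.log10(4 * math.pi * d / wavelength)   # ≈ 303 dB
    eirp_dbm = 83                      # quoted EIRP at full power
    rx_gain_dbi = 72                   # one un-arrayed DSN antenna
    signal_at_earth = eirp_dbm - fspl_db                      # ≈ -220 dBm
    rsl = signal_at_earth + rx_gain_dbi                       # ≈ -148 dBm
    print(f"FSPL ≈ {fspl_db:.0f} dB, at Earth ≈ {signal_at_earth:.0f} dBm, RSL ≈ {rsl:.0f} dBm")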
By March 30, 2016, about nine months after the flyby, New Horizons reached the halfway point of transmitting this data. The transfer was completed on October 25, 2016, at 21:48 UTC, when the last piece of data—part of a Pluto–Charon observation sequence by the Ralph/LEISA imager—was received by the Johns Hopkins University Applied Physics Laboratory.
As of November 2018, at a distance of from the Sun and from 486958 Arrokoth, New Horizons was heading in the direction of the constellation Sagittarius at relative to the Sun. The brightness of the Sun from the spacecraft was magnitude −18.5.
On April 17, 2021, New Horizons reached a distance of 50 AU from the Sun, while remaining fully operational.
Mission extension
The New Horizons team requested, and received, a mission extension through 2021 to explore additional Kuiper belt objects (KBOs). Funding was secured on July 1, 2016. During this Kuiper Belt Extended Mission (KEM) the spacecraft performed a close fly-by of 486958 Arrokoth and will conduct more distant observations of an additional two dozen objects, and possibly make a fly-by of another KBO.
Kuiper belt object mission
Target background
Mission planners searched for one or more additional Kuiper belt objects (KBOs) of the order of in diameter as targets for flybys similar to the spacecraft's Plutonian encounter. However, despite the large population of KBOs, many factors limited the number of possible targets. Because the flight path was determined by the Pluto flyby, and the probe only had of hydrazine propellant remaining, the object to be visited needed to be within a cone of less than a degree's width extending from Pluto. The target also needed to be within 55 AU, because beyond 55 AU, the communications link becomes too weak, and the RTG power output decays significantly enough to hinder observations. Desirable KBOs are well over in diameter, neutral in color (to contrast with the reddish Pluto), and, if possible, have a moon that imparts a wobble.
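To get a feel for how restrictive that cone was, the sketch below works out the sideways reach at the candidates' distance. All of the numbers are rounded assumptions: Pluto at roughly 33 AU at the time of the flyby, the candidate KBOs near 43 AU (a figure that appears later in this article), and a cone one degree across.

    import math

    # Lateral reach of a one-degree cone opening from Pluto out to the KBO candidates.
    AU_KM = 1.496e8
    downstream_au = 43 - 33                  # assumed distance past Pluto, in AU
    half_angle = math.radians(1.0 / 2)       # half of a one-degree full cone
    lateral_reach_au = downstream_au * math.tan(half_angle)
    print(f"lateral reach ≈ {lateral_reach_au:.3f} AU (~{lateral_reach_au * AU_KM:,.0f} km)")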
KBO Search
In 2011, mission scientists started the New Horizons KBO Search, a dedicated survey for suitable KBOs using ground telescopes. Large ground telescopes with wide-field cameras, notably the twin 6.5-meter Magellan Telescopes in Chile, the 8.2-meter Subaru Telescope in Hawaii and the Canada–France–Hawaii Telescope ("Pluto-bound probe faces crisis", nature.com, May 20, 2014), were used to search for potential targets. By participating in a citizen-science project called Ice Hunters the public helped to scan telescopic images for possible suitable mission candidates. The ground-based search resulted in the discovery of about 143 KBOs of potential interest, but none of these were close enough to the flight path of New Horizons. Only the Hubble Space Telescope was deemed likely to find a suitable target in time for a successful KBO mission. On June 16, 2014, time on Hubble was granted for a search. Hubble has a much greater ability to find suitable KBOs than ground telescopes. The probability that a target for New Horizons would be found was estimated beforehand at about 95%.
Suitable KBOs
On October 15, 2014, it was revealed that Hubble's search had uncovered three potential targets, temporarily designated PT1 ("potential target 1"), PT2 and PT3 by the New Horizons team. PT1 was eventually chosen as the target and would be named 486958 Arrokoth.
All objects had estimated diameters in the range and were too small to be seen by ground telescopes. The targets were at distances from the Sun ranging from 43 to 44 AU, which would put the encounters in the 2018–2019 period. The initial estimated probabilities that these objects were reachable within New Horizons fuel budget were 100%, 7%, and 97%, respectively. All were members of the "cold" (low-inclination, low-eccentricity) classical Kuiper belt objects, and thus were very different from Pluto.
PT1 (given the temporary designation "1110113Y" on the HST web site), the most favorably situated object, had a magnitude of 26.8, is in diameter, and was encountered in January 2019. A course change to reach it required about 35% of New Horizons' available trajectory-adjustment fuel supply. A mission to PT3 was in some ways preferable, in that it is brighter and therefore probably larger than PT1, but the greater fuel requirements to reach it would have left less for maneuvering and unforeseen events.
Once sufficient orbital information was provided, the Minor Planet Center gave provisional designations to the three target KBOs: (later 486958 Arrokoth) (PT1), (PT2), and (PT3). By the fall of 2014, a possible fourth target, , had been eliminated by follow-up observations. PT2 was out of the running before the Pluto flyby.
KBO selection
On August 28, 2015, 486958 Arrokoth (then known only by its provisional designation and nicknamed Ultima Thule) (PT1) was chosen as the flyby target. The necessary course adjustment was performed with four engine firings between October 22 and November 4, 2015. The flyby occurred on January 1, 2019, at 00:33 UTC.
Observations of other KBOs
Aside from its flyby of 486958 Arrokoth, the extended mission for New Horizons calls for the spacecraft to conduct observations of, and look for ring systems around, between 25 and 35 different KBOs. In addition, it will continue to study the gas, dust and plasma composition of the Kuiper belt before the mission extension ends in 2021.
On November 2, 2015, New Horizons imaged KBO 15810 Arawn with the LORRI instrument from . This KBO was again imaged by the LORRI instrument on April 7–8, 2016, from a distance of . The new images allowed the science team to further refine the location of 15810 Arawn to within and to determine its rotational period of 5.47 hours.
In July 2016, the LORRI camera captured some distant images of Quaoar from ; the oblique view will complement Earth-based observations to study the object's light-scattering properties.
On December 5, 2017, when New Horizons was 40.9 AU from Earth, a calibration image of the Wishing Well cluster marked the most distant image ever taken by a spacecraft (breaking the 27-year record set by Voyager 1's famous Pale Blue Dot). Two hours later, New Horizons surpassed its own record, imaging the Kuiper belt objects and from a distance of 0.50 and 0.34 AU, respectively. These were the closest images taken of a Kuiper belt object besides Pluto and Arrokoth.
The dwarf planet Haumea was observed from afar by the New Horizons spacecraft in October 2007, January 2017, and May 2020, from distances of 49 AU, 59 AU, and 63 AU, respectively. New Horizons has observed the dwarf planets Eris (2020), Haumea (2007, 2017, 2020), Makemake (2007, 2017), and Quaoar (2016, 2017, 2019), as well as the large KBOs Ixion (2016), (2016, 2017, 2019), and (2017, 2018). It also observed Neptune's largest moon Triton (which shares similarities with Pluto and Eris) in 2019.
By December 2023, New Horizons had discovered a total of about 100 KBOs, and flown close enough to about 20 of them to capture characteristics such as shape, rotational period, possible moons, and surface composition. In addition, since 2021, Canadian researchers have been able to use machine-learning software to speed up the identification of potential KBO targets for a third flyby, cutting weeks-long efforts down to hours.
Encounter with Arrokoth
Objectives
Science objectives of the flyby included characterizing the geology and morphology of Arrokoth and mapping the surface composition (by searching for ammonia, carbon monoxide, methane, and water ice). Searches will be conducted for orbiting moonlets, a coma, rings and the surrounding environment. Additional objectives include:
Mapping the surface geology to learn how it formed and evolved
Measuring the surface temperature
Mapping the 3-D surface topography and surface composition to learn how it is similar to and different from comets such as 67P/Churyumov–Gerasimenko and dwarf planets such as Pluto
Searching for any signs of activity, such as a cloud-like coma
Searching for and studying any satellites or rings
Measuring or constraining the mass
Targeting maneuvers
Arrokoth is the first object to be targeted for a flyby that was discovered after the spacecraft was launched. New Horizons was planned to come within of Arrokoth, three times closer than the spacecraft's earlier encounter with Pluto. Images with a resolution of up to per pixel were expected.
The new mission began on October 22, 2015, when New Horizons carried out the first in a series of four initial targeting maneuvers designed to send it towards Arrokoth. The maneuver, which started at approximately 19:50 UTC and used two of the spacecraft's small hydrazine-fueled thrusters, lasted approximately 16 minutes and changed the spacecraft's trajectory by about . The remaining three targeting maneuvers took place on October 25, October 28, and November 4, 2015.
Approach phase
The craft was brought out of its hibernation at approximately 00:33 UTC on June 5, 2018 (06:12 UTC ERT, Earth-Received Time), in order to prepare for the approach phase. After verifying its health status, the spacecraft transitioned from a spin-stabilized mode to a three-axis-stabilized mode on August 13, 2018. The official approach phase began on August 16, 2018, and continued through December 24, 2018.

New Horizons made its first detection of Arrokoth on August 16, 2018, from a distance of . At that time, Arrokoth was visible at magnitude 20 against a crowded stellar background in the direction of the constellation Sagittarius.
Flyby
The Core phase began a week before the encounter and continued for two days after the encounter. The spacecraft flew by the object at a speed of and within . The majority of the science data was collected within 48 hours of the closest approach in a phase called the Inner Core. Closest approach occurred January 1, 2019, at 05:33 UTC SCET at which point the probe was from the Sun. At this distance, the one-way transit time for radio signals between Earth and New Horizons was six hours. Confirmation that the craft had succeeded in filling its digital recorders occurred when data arrived on Earth ten hours later, at 15:29 UTC.
Data download
After the encounter, preliminary, high-priority data was sent to Earth on January 1 and 2, 2019. On January 9, New Horizons returned to a spin-stabilized mode to prepare sending the remainder of its data back to Earth. This download was expected to take 20 months at a data rate of 1–2 kilobits per second.
As of July 2022, approximately 10% of the data was still left to be received.
Post-Arrokoth events
In April 2020, New Horizons was used in conjunction with telescopes on Earth to take pictures of nearby stars Proxima Centauri and Wolf 359; the images from each vantage point – over 6.4 billion km (4 billion miles) apart – were compared to produce "the first demonstration of an easily observable stellar parallax."
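The size of the observed shift follows from simple geometry: the parallax in arcseconds equals the baseline in astronomical units divided by the target distance in parsecs. The distance to Proxima Centauri used below (~1.3 parsecs) is an assumed textbook value, not taken from this article.

    # Parallax of Proxima Centauri over the Earth-New Horizons baseline.
    AU_KM = 1.496e8
    baseline_au = 6.4e9 / AU_KM               # ~6.4 billion km ≈ 43 AU
    distance_pc = 1.30                        # assumed distance to Proxima Centauri
    shift_arcsec = baseline_au / distance_pc  # parallax formula in arcseconds
    print(f"apparent shift ≈ {shift_arcsec:.0f} arcseconds")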
Images taken by the LORRI camera while New Horizons was 42 to 45 AU from the Sun were used to measure the cosmic optical background, the visible light analog of the cosmic microwave background, in seven high galactic latitude fields. At that distance New Horizons saw a sky ten times darker than the sky seen by the Hubble Space Telescope because of the absence of diffuse background sky brightness from the zodiacal light in the inner solar system. These measurements indicate that the total amount of light emitted by all galaxies at ultraviolet and visible wavelengths may be lower than previously thought.
The spacecraft reached a distance of 50 AU from the Sun on April 17, 2021, at 12:42 UTC, a feat performed only four times before, by Pioneer 10, Pioneer 11, Voyager 1, and Voyager 2. Voyager 1, the farthest spacecraft from the Sun, was more than away when New Horizons reached its landmark in 2021. The support team continued to use the spacecraft in 2021 to study the heliospheric environment (plasma, dust and gas) and to study other Kuiper Belt objects.
Plans
After the spacecraft passed Arrokoth, its instruments were expected to have enough power to remain operational until the 2030s.
Team leader Alan Stern stated there is potential for a third flyby in the 2020s at the outer edges of the Kuiper belt. This depends on a suitable Kuiper belt object being found or confirmed close enough to the spacecraft's current trajectory. Since May 2020, the New Horizons team has been using time on the Subaru Telescope to look for suitable candidates within the spacecraft's proximity. As of June 2024, no suitable targets have been found. Beginning in fiscal year 2025, New Horizons will focus on specific heliophysics data, as stated by NASA in September 2023. It will remain available for a flyby of a different target until it leaves the Kuiper belt in 2028.

New Horizons may also take a picture of Earth from its distance in the Kuiper belt, but only after completing all planned KBO flybys and imaging Uranus and Neptune. This is because pointing a camera towards Earth could cause the camera to be damaged by sunlight, as none of New Horizons' cameras have an active shutter mechanism.
Speed
New Horizons has been called "the fastest spacecraft ever launched" because it left Earth at . It is also the first spacecraft launched directly into a solar escape trajectory, which requires an approximate speed while near Earth of , plus additional delta-v to cover air and gravity drag, all to be provided by the launch vehicle. As of May 2, 2024, the spacecraft is from the Sun traveling at .
However, it is not the fastest spacecraft to leave the Solar System. That record is held by Voyager 1, traveling at relative to the Sun. Voyager 1 attained greater hyperbolic excess velocity than New Horizons due to gravity assists by Jupiter and Saturn. When New Horizons reaches the distance of , it will be traveling at about , around slower than Voyager 1 at that distance. The Parker Solar Probe can also be measured as the fastest object, because of its orbital speed relative to the Sun at perihelion: . Because it remains in solar orbit, its specific orbital energy relative to the Sun is lower than that of New Horizons and other artificial objects escaping the Solar System.

New Horizons' Star 48B third stage is also on a hyperbolic escape trajectory from the Solar System and reached Jupiter before the New Horizons spacecraft; it was expected to cross Pluto's orbit on October 15, 2015. Because it was not in controlled flight, it did not receive the correct gravity assist and passed within of Pluto. The Centaur second stage did not achieve solar escape velocity and remains in a heliocentric orbit.
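Returning to the comparison above between New Horizons and the Parker Solar Probe: whether an object escapes the Sun is determined by its specific orbital energy, ε = v²/2 − GM/r, not by its instantaneous speed. The speeds and distances below are rough, assumed values used only to illustrate the sign of ε.

    # Specific orbital energy: negative means bound to the Sun, positive means escaping.
    GM_SUN = 1.327e20      # m^3/s^2
    AU = 1.496e11          # m

    def specific_energy(v_km_s: float, r_au: float) -> float:
        v = v_km_s * 1e3
        return 0.5 * v * v - GM_SUN / (r_au * AU)   # J/kg

    print("Parker Solar Probe near perihelion:", f"{specific_energy(190, 0.046):+.2e} J/kg")   # negative: bound
    print("New Horizons in the outer Kuiper belt:", f"{specific_energy(13.5, 55):+.2e} J/kg")  # positive: escaping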
Gallery
Images of the launch
Videos
See also
2006 in spaceflight
Exploration of Pluto
Exploration of dwarf planets
List of artificial objects leaving the Solar System
List of missions to the outer planets
List of New Horizons topics
Mariner Mark II, a planned family of NASA spacecraft including a Pluto mission
New Horizons 2, a proposed trans-Neptunian object flyby mission
Pioneer 10
Pioneer 11
Pluto Kuiper Express, a cancelled NASA Pluto flyby mission
TAU, a proposed mission to fly by Pluto
Timeline of Solar System exploration
Voyager 1
Voyager 2
Notes
References
Further reading
External links
New Horizons website by NASA
New Horizons website by the Applied Physics Laboratory
New Horizons profile by NASA's Planetary Science Division
New Horizons profile by the National Space Science Data Center
New Horizons Flyby of Ultima Thule – Best Places to Follow Future News.
New Horizons Flyby – Musical Tribute by astrophysicist Brian May (who consulted on the project) and the band Queen.
New Horizons Mission Archive at the NASA Planetary Data System, Small Bodies Node
New Horizons: Kuiper Belt Extended Mission (KEM) Mission Archive at the NASA Planetary Data System, Small Bodies Node
NASA space probes
New Frontiers program
Missions to Pluto
Missions to Jupiter
Missions to minor planets
Radio frequency propagation
Spacecraft escaping the Solar System
Space probes launched in 2006
Articles containing video clips
Spacecraft launched by Atlas rockets
Nuclear-powered robots
| New Horizons | ["Physics"] | 13,965 | ["Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves"] |
391,251 | https://en.wikipedia.org/wiki/Stark%E2%80%93Heegner%20theorem | In number theory, the Heegner theorem establishes the complete list of the quadratic imaginary number fields whose rings of integers are principal ideal domains. It solves a special case of Gauss's class number problem of determining the number of imaginary quadratic fields that have a given fixed class number.
Let Q denote the set of rational numbers, and let d be a square-free integer. The field Q(√d) is a quadratic extension of Q. The class number of Q(√d) is one if and only if the ring of integers of Q(√d) is a principal ideal domain. The Baker–Heegner–Stark theorem can then be stated as follows:
If d < 0, then the class number of Q(√d) is one if and only if d ∈ {−1, −2, −3, −7, −11, −19, −43, −67, −163}.
These are known as the Heegner numbers.
By replacing d with the discriminant D of Q(√d), this list is often written as D ∈ {−3, −4, −7, −8, −11, −19, −43, −67, −163}.
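As a quick numerical sanity check of this list (not a proof), one can count the reduced primitive binary quadratic forms of each negative discriminant; for imaginary quadratic fields this count equals the class number. A minimal sketch:

    from math import gcd

    def class_number(D):
        """Class number of discriminant D < 0, by counting reduced primitive forms (a, b, c)."""
        assert D < 0 and D % 4 in (0, 1)
        h = 0
        b = D % 2                          # b must have the same parity as D
        while 3 * b * b <= -D:
            ac = (b * b - D) // 4          # a*c for this value of b, since b^2 - 4ac = D
            a = max(b, 1)
            while a * a <= ac:
                if ac % a == 0:
                    c = ac // a
                    if gcd(gcd(a, b), c) == 1:                         # primitive forms only
                        h += 1 if (b == 0 or b == a or a == c) else 2  # (a, b, c) and (a, -b, c)
                a += 1
            b += 2
        return h

    for D in (-3, -4, -7, -8, -11, -19, -43, -67, -163):
        print(D, class_number(D))          # each line prints 1
    # A non-example: class_number(-15) == 2, so Q(sqrt(-15)) is not on the list.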
History
This result was first conjectured by Gauss in Section 303 of his Disquisitiones Arithmeticae (1798). It was essentially proven by Kurt Heegner in 1952, but Heegner's proof was not accepted until the establishment mathematician Harold Stark rewrote the proof in 1967; Stark's proof had many commonalities with Heegner's work, but sufficiently many differences that Stark considers the proofs to be different. Heegner "died before anyone really understood what he had done". Stark formally paraphrased Heegner's proof in 1969; other contemporary papers produced various similar proofs by modular functions.
Alan Baker gave a completely different proof slightly earlier (1966) than Stark's work (or more precisely Baker reduced the result to a finite amount of computation, with Stark's work in his 1963/4 thesis already providing this computation), and won the Fields Medal for his methods. Stark later pointed out that Baker's proof, involving linear forms in 3 logarithms, could be reduced to only 2 logarithms, when the result was already known from 1949 by Gelfond and Linnik.
Stark's 1969 paper also cited the 1895 text by Heinrich Martin Weber and noted that if Weber had "only made the observation that the reducibility of [a certain equation] would lead to a Diophantine equation, the class-number one problem would have been solved 60 years ago". Bryan Birch notes that Weber's book, and essentially the whole field of modular functions, dropped out of interest for half a century: "Unhappily, in 1952 there was no one left who was sufficiently expert in Weber's Algebra to appreciate Heegner's achievement."
Deuring, Siegel, and Chowla all gave slightly variant proofs by modular functions in the immediate years after Stark. Other versions in this genre have also cropped up over the years. For instance, in 1985, Monsur Kenku gave a proof using the Klein quartic (though again utilizing modular functions). And again, in 1999, Imin Chen gave another variant proof by modular functions (following Siegel's outline).
The work of Gross and Zagier (1986) combined with that of Goldfeld (1976) also gives an alternative proof.
Real case
On the other hand, it is unknown whether there are infinitely many d > 0 for which Q(√d) has class number 1. Computational results indicate that there are many such fields. Number fields with class number one provides a list of some of these.
Notes
References
Theorems in algebraic number theory | Stark–Heegner theorem | [
"Mathematics"
] | 698 | [
"Theorems in algebraic number theory",
"Theorems in number theory"
] |
391,267 | https://en.wikipedia.org/wiki/Positron%20emission | Positron emission, beta plus decay, or β+ decay is a subtype of radioactive decay called beta decay, in which a proton inside a radionuclide nucleus is converted into a neutron while releasing a positron and an electron neutrino (νₑ). Positron emission is mediated by the weak force. The positron is a type of beta particle (β+), the other beta particle being the electron (β−) emitted from the β− decay of a nucleus.
An example of positron emission (β+ decay) is shown with magnesium-23 decaying into sodium-23:
²³Mg → ²³Na + e⁺ + νₑ
Because positron emission decreases proton number relative to neutron number, positron decay happens typically in large "proton-rich" radionuclides. Positron decay results in nuclear transmutation, changing an atom of one chemical element into an atom of an element with an atomic number that is less by one unit.
Positron emission occurs extremely rarely in nature on Earth. Known instances include cosmic ray interactions and the decay of certain isotopes, such as potassium-40. This rare form of potassium makes up only 0.012% of the element on Earth and has a 1 in 100,000 chance of decaying via positron emission.
Positron emission should not be confused with electron emission or beta minus decay (β− decay), which occurs when a neutron turns into a proton and the nucleus emits an electron and an antineutrino.
Positron emission is different from proton decay, the hypothetical decay of protons, not necessarily those bound with neutrons, not necessarily through the emission of a positron, and not as part of nuclear physics, but rather of particle physics.
Discovery of positron emission
In 1934 Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles (emitted by polonium) to effect the nuclear reaction ²⁷Al + ⁴He → ³⁰P + n, and observed that the product isotope ³⁰P emits a positron identical to those found in cosmic rays by Carl David Anderson in 1932. This was the first example of β⁺ decay (positron emission). The Curies termed the phenomenon "artificial radioactivity", because ³⁰P is a short-lived nuclide which does not exist in nature. The discovery of artificial radioactivity would be cited when the husband-and-wife team won the Nobel Prize.
Positron-emitting isotopes
Isotopes which undergo this decay and thereby emit positrons include, but are not limited to: carbon-11, nitrogen-13, oxygen-15, fluorine-18, copper-64, gallium-68, bromine-78, rubidium-82, yttrium-86, zirconium-89, sodium-22, aluminium-26, potassium-40, strontium-83, and iodine-124. As an example, the following equation describes the beta plus decay of carbon-11 to boron-11, emitting a positron and a neutrino:
¹¹C → ¹¹B + e⁺ + νₑ + 0.96 MeV
Emission mechanism
Inside protons and neutrons, there are fundamental particles called quarks. The two most common types of quarks are up quarks, which have a charge of +2⁄3, and down quarks, with a −1⁄3 charge. Quarks arrange themselves in sets of three such that they make protons and neutrons. In a proton, whose charge is +1, there are two up quarks and one down quark (2⁄3 + 2⁄3 − 1⁄3 = 1). Neutrons, with no charge, have one up quark and two down quarks (2⁄3 − 1⁄3 − 1⁄3 = 0). Via the weak interaction, quarks can change flavor from down to up, resulting in electron emission. Positron emission happens when an up quark changes into a down quark, effectively converting a proton to a neutron.
Nuclei which decay by positron emission may also decay by electron capture. For low-energy decays, electron capture is energetically favored by 2mec2 = 1.022 MeV, since the final state has an electron removed rather than a positron added. As the energy of the decay goes up, so does the branching fraction of positron emission. However, if the energy difference is less than 2mec2, positron emission cannot occur and electron capture is the sole decay mode. Certain otherwise electron-capturing isotopes (for instance, beryllium-7) are stable in galactic cosmic rays, because the electrons are stripped away and the decay energy is too small for positron emission.
Energy conservation
A positron is ejected from the parent nucleus, but the daughter (Z−1) atom still has Z atomic electrons from the parent, i.e. the daughter is a negative ion (at least immediately after the positron emission). Since tables of masses list atomic rather than nuclear masses, the comparison must be made between atomic masses: the parent's atomic mass includes Z electrons while the neutral daughter needs only Z − 1, and, since the mass of the positron is identical to that of the electron, the overall result is that the mass-energy of two electrons is required, and the β+ decay is energetically possible if and only if the mass of the parent atom exceeds the mass of the daughter atom by at least two electron masses (2me c2 = 1.022 MeV).
Isotopes which increase in mass under the conversion of a proton to a neutron, or which decrease in mass by less than 2me, cannot spontaneously decay by positron emission.
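As an illustration of this mass condition, the short Python sketch below computes the β+ decay energy of carbon-11 from approximate atomic masses; the numerical mass values are rounded literature figures quoted here only for illustration.

```python
# Energetics of beta-plus decay from *atomic* masses:
# Q = [m(parent) - m(daughter)]*c^2 - 2*m_e*c^2, decay allowed only if Q > 0.
U_TO_MEV = 931.494          # 1 atomic mass unit in MeV/c^2
TWO_ME_C2 = 1.022           # 2 m_e c^2 in MeV

def beta_plus_q_value(parent_mass_u, daughter_mass_u):
    """Return the beta-plus Q-value in MeV from atomic masses given in u."""
    return (parent_mass_u - daughter_mass_u) * U_TO_MEV - TWO_ME_C2

# Approximate atomic masses (u) for carbon-11 and boron-11 (illustrative values).
m_c11, m_b11 = 11.011434, 11.009305
q = beta_plus_q_value(m_c11, m_b11)
print(f"Q(11C -> 11B) ≈ {q:.2f} MeV")   # ≈ 0.96 MeV, so the decay is allowed
```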
Application
These isotopes are used in positron emission tomography, a technique used for medical imaging. The energy emitted depends on the isotope that is decaying; the figure of 0.96 MeV applies only to the decay of carbon-11.
The short-lived positron emitting isotopes ¹¹C (half-life about 20 minutes), ¹³N (about 10 minutes), ¹⁵O (about 2 minutes), and ¹⁸F (about 110 minutes) used for positron emission tomography are typically produced by proton irradiation of natural or enriched targets.
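Because these half-lives are short, the activity of a PET tracer decays appreciably between production and imaging; the following sketch (illustrative only, using the approximate fluorine-18 half-life quoted above) shows the surviving fraction of a dose after various delays.

```python
from math import exp, log

def surviving_fraction(t_minutes, half_life_minutes):
    """Fraction of a radionuclide remaining after time t (simple exponential decay)."""
    return exp(-log(2) * t_minutes / half_life_minutes)

# Fluorine-18 has a half-life of roughly 110 minutes.
for t in (30, 60, 120):
    print(f"after {t:3d} min: {surviving_fraction(t, 110):.2f} of the 18F remains")
```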
References
External links
Live Chart of Nuclides: nuclear structure and decay data (main decay modes) - IAEA
Radioactivity
Electron
Antimatter | Positron emission | [
"Physics",
"Chemistry"
] | 1,291 | [
"Electron",
"Antimatter",
"Matter",
"Molecular physics",
"Nuclear physics",
"Radioactivity"
] |
391,278 | https://en.wikipedia.org/wiki/Proton%20emission | Proton emission (also known as proton radioactivity) is a rare type of radioactive decay in which a proton is ejected from a nucleus. Proton emission can occur from high-lying excited states in a nucleus following a beta decay, in which case the process is known as beta-delayed proton emission, or can occur from the ground state (or a low-lying isomer) of very proton-rich nuclei, in which case the process is very similar to alpha decay. For a proton to escape a nucleus, the proton separation energy must be negative (Sp < 0)—the proton is therefore unbound, and tunnels out of the nucleus in a finite time. The rate of proton emission is governed by the nuclear, Coulomb, and centrifugal potentials of the nucleus, where centrifugal potential affects a large part of the rate of proton emission. The half-life of a nucleus with respect to proton emission is affected by the proton energy and its orbital angular momentum. Proton emission is not seen in naturally occurring isotopes; proton emitters can be produced via nuclear reactions, usually using linear particle accelerators.
Although prompt (i.e. not beta-delayed) proton emission was observed from an isomer in cobalt-53 as early as 1969, no other proton-emitting states were found until 1981, when the proton radioactive ground states of lutetium-151 and thulium-147 were observed at experiments at the GSI in West Germany. Research in the field flourished after this breakthrough, and to date more than 25 isotopes have been found to exhibit proton emission. The study of proton emission has aided the understanding of nuclear deformation, masses, and structure, and it is a pure example of quantum tunneling.
In 2002, the simultaneous emission of two protons was observed from the nucleus iron-45 in experiments at GSI and GANIL (Grand Accélérateur National d'Ions Lourds at Caen). In 2005 it was experimentally determined (at the same facility) that zinc-54 can also undergo double proton decay.
See also
Nuclear drip line
Diproton (a particle possibly involved in double proton decay)
Free neutron
Neutron emission
Photodisintegration
References
External links
Nuclear Structure and Decay Data - IAEA with query on Proton Separation Energy
Nuclear physics
Proton
Radioactivity | Proton emission | [
"Physics",
"Chemistry"
] | 471 | [
"Radioactivity",
"Nuclear physics"
] |
391,283 | https://en.wikipedia.org/wiki/Neutron%20emission | Neutron emission is a mode of radioactive decay in which one or more neutrons are ejected from a nucleus. It occurs in the most neutron-rich/proton-deficient nuclides, and also from excited states of other nuclides as in photoneutron emission and beta-delayed neutron emission. As only a neutron is lost by this process the number of protons remains unchanged, and an atom does not become an atom of a different element, but a different isotope of the same element.
Neutrons are also produced in the spontaneous and induced fission of certain heavy nuclides.
Spontaneous neutron emission
As a consequence of the Pauli exclusion principle, nuclei with an excess of protons or neutrons have a higher average energy per nucleon. Nuclei with a sufficient excess of neutrons have a greater energy than the combination of a free neutron and a nucleus with one less neutron, and therefore can decay by neutron emission. Nuclei which can decay by this process are described as lying beyond the neutron drip line.
Two examples of isotopes that emit neutrons are beryllium-13 (decaying to beryllium-12) and helium-5 (decaying to helium-4), both with extremely short mean lives.
In tables of nuclear decay modes, neutron emission is commonly denoted by the abbreviation n.
Neutron emitters lie to the left of the lower dashed line in the chart of nuclides (see also: Table of nuclides).
Double neutron emission
Some neutron-rich isotopes decay by the emission of two or more neutrons. For example, hydrogen-5 and helium-10 decay by the emission of two neutrons, hydrogen-6 by the emission of 3 or 4 neutrons, and hydrogen-7 by emission of 4 neutrons.
Photoneutron emission
Some nuclides can be induced to eject a neutron by gamma radiation. One such nuclide is 9Be; its photodisintegration is significant in nuclear astrophysics, pertaining to the abundance of beryllium and the consequences of the instability of 8Be. This also makes this isotope useful as a neutron source in nuclear reactors. Another nuclide, 181Ta, is also known to be readily capable of photodisintegration; this process is thought to be responsible for the creation of 180mTa, the only primordial nuclear isomer and the rarest primordial nuclide.
Beta-delayed neutron emission
Neutron emission usually happens from nuclei that are in an excited state, such as the excited 17O* produced from the beta decay of 17N. The neutron emission process itself is controlled by the nuclear force and therefore is extremely fast, sometimes referred to as "nearly instantaneous". This process allows unstable atoms to become more stable. The ejection of the neutron may be as a product of the movement of many nucleons, but it is ultimately mediated by the repulsive action of the nuclear force that exists at extremely short-range distances between nucleons.
Delayed neutrons in reactor control
Most neutron emission, outside prompt neutron production associated with fission (either induced or spontaneous), is from neutron-heavy isotopes produced as fission products. These neutrons are sometimes emitted with a delay, giving them the term delayed neutrons, but the actual delay in their production is a delay waiting for the beta decay of fission products to produce the excited-state nuclear precursors that immediately undergo prompt neutron emission. Thus, the delay in neutron emission is not from the neutron-production process, but rather from its precursor beta decay, which is controlled by the weak force and thus requires a far longer time. The beta decay half-lives for the precursors to delayed neutron-emitter radioisotopes are typically fractions of a second to tens of seconds.
Nevertheless, the delayed neutrons emitted by neutron-rich fission products aid control of nuclear reactors by making reactivity change far more slowly than it would if it were controlled by prompt neutrons alone. About 0.65% of neutrons are released in a nuclear chain reaction in a delayed way due to the mechanism of neutron emission, and it is this fraction of neutrons that allows a nuclear reactor to be controlled on human reaction time-scales, without proceeding to a prompt critical state, and runaway melt down.
Neutron emission in fission
Induced fission
A synonym for such neutron emission is "prompt neutron" production, of the type that is best known to occur simultaneously with induced nuclear fission. Induced fission happens only when a nucleus is bombarded with neutrons, gamma rays, or other carriers of energy. Many heavy isotopes, most notably californium-252, also emit prompt neutrons among the products of a similar spontaneous radioactive decay process, spontaneous fission.
Spontaneous fission
Spontaneous fission happens when a nucleus splits into two (occasionally three) smaller nuclei and generally one or more neutrons.
See also
Neutron radiation
Neutron source
Proton emission
References
External links
"Why Are Some Atoms Radioactive?" EPA. Environmental Protection Agency, n.d. Web. 31 Oct. 2014
The LIVEChart of Nuclides - IAEA with filter on delayed neutron emission decay
Nuclear Structure and Decay Data - IAEA with query on Neutron Separation Energy
Emission
Nuclear physics
Radioactivity | Neutron emission | [
"Physics",
"Chemistry"
] | 1,096 | [
"Radioactivity",
"Nuclear physics"
] |
391,421 | https://en.wikipedia.org/wiki/Comparametric%20equation | A comparametric equation is an equation that describes a parametric relationship between a function and a dilated version of the same function, where the equation does not involve the parameter. For example, ƒ(2t) = 4ƒ(t) is a comparametric equation: defining g(t) = ƒ(2t), the relationship g = 4ƒ no longer contains the parameter t. The comparametric equation g = 4ƒ has a family of solutions, one of which is ƒ = t².
To see that ƒ = t² is a solution, we merely substitute back in: g = ƒ(2t) = (2t)² = 4t² = 4ƒ, so that g = 4ƒ.
Comparametric equations arise naturally in signal processing when we have multiple measurements of the same phenomenon, in which each of the measurements was acquired using a different sensitivity. For example, two or more differently exposed pictures of the same subject matter give rise to a comparametric relationship, the solution of which is the response function of the camera, image sensor, or imaging system. In this sense, comparametric equations are the fundamental mathematical basis for HDR (high dynamic range) imaging, as well as HDR audio.
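As a simple illustration of how such a relationship arises from two differently exposed measurements, the Python sketch below (purely illustrative; the power-law response f(q) = q^γ and the exposure ratio k are assumptions, not taken from any particular camera) simulates two exposures of the same scene and recovers the comparametric law g = k^γ·f, of which the example ƒ(2t) = 4ƒ(t) above is the special case γ = 2, k = 2.

```python
import numpy as np

# Assume an (unknown to the analyst) camera response f(q) = q**gamma,
# and two exposures of the same scene differing by a factor k in light q.
gamma, k = 2.2, 2.0
q = np.linspace(0.01, 0.5, 200)          # photoquantity reaching the sensor
f = q ** gamma                            # first (shorter) exposure
g = (k * q) ** gamma                      # second exposure, k times more light

# Plotting g against f (a "comparagram") reveals the comparametric equation
# g = k**gamma * f, with the parameter q eliminated.
ratio = g / f
print(np.allclose(ratio, k ** gamma))     # True: g = (k**gamma) * f for all q
```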
Comparametric equations have been used in many areas of research, and have many practical applications to the real world. They are used in radar, microphone arrays, and have been used in processing crime scene video in homicide trials in which the only evidence against the accused was video recordings of the murder.
Solution
An existing solution is the comparametric camera response function (CCRF) for real-time comparametric analysis. It has applications in the analysis of multiple images.
References
Related concepts
Parametric equation
Functional equation
Contraction mapping
Multivariable calculus
Equations | Comparametric equation | [
"Mathematics"
] | 370 | [
"Multivariable calculus",
"Mathematical objects",
"Equations",
"Calculus"
] |
391,908 | https://en.wikipedia.org/wiki/Chern%E2%80%93Simons%20theory | The Chern–Simons theory is a 3-dimensional topological quantum field theory of Schwarz type developed by Edward Witten. It was discovered first by mathematical physicist Albert Schwarz. It is named after mathematicians Shiing-Shen Chern and James Harris Simons, who introduced the Chern–Simons 3-form. In the Chern–Simons theory, the action is proportional to the integral of the Chern–Simons 3-form.
In condensed-matter physics, Chern–Simons theory describes the topological order in fractional quantum Hall effect states. In mathematics, it has been used to calculate knot invariants and three-manifold invariants such as the Jones polynomial.
Particularly, Chern–Simons theory is specified by a choice of simple Lie group G known as the gauge group of the theory and also a number referred to as the level of the theory, which is a constant that multiplies the action. The action is gauge dependent, however the partition function of the quantum theory is well-defined when the level is an integer and the gauge field strength vanishes on all boundaries of the 3-dimensional spacetime.
It is also the central mathematical object in theoretical models for topological quantum computers (TQC). Specifically, an SU(2) Chern–Simons theory describes the simplest non-abelian anyonic model of a TQC, the Yang–Lee–Fibonacci model.
The dynamics of Chern–Simons theory on the 2-dimensional boundary of a 3-manifold is closely related to fusion rules and conformal blocks in conformal field theory, and in particular WZW theory.
The classical theory
Mathematical origin
In the 1940s S. S. Chern and A. Weil studied the global curvature properties of smooth manifolds M as de Rham cohomology (Chern–Weil theory), which is an important step in the theory of characteristic classes in differential geometry. Given a flat G-principal bundle P on M there exists a unique homomorphism, called the Chern–Weil homomorphism, from the algebra of G-adjoint invariant polynomials on g (Lie algebra of G) to the cohomology H∗(M, ℝ). If the invariant polynomial is homogeneous one can write down concretely any k-form of the closed connection ω as some 2k-form of the associated curvature form Ω of ω.
In 1974 S. S. Chern and J. H. Simons concretely constructed a (2k − 1)-form Tf(ω) such that

dTf(ω) = f(Ωᵏ),

where T is the Chern–Weil homomorphism. This form is called the Chern–Simons form. If Tf(ω) is closed one can integrate the above formula,

∫_C Tf(ω),
where C is a (2k − 1)-dimensional cycle on M. This invariant is called the Chern–Simons invariant. As pointed out in the introduction of the Chern–Simons paper, the Chern–Simons invariant CS(M) is the boundary term that cannot be determined by any pure combinatorial formulation. It also can be defined as

CS(M) = ∫_{s(M)} (1/2) Tp₁ ∈ ℝ/ℤ,

where p₁ is the first Pontryagin number and s(M) is the section of the normal orthogonal bundle P. Moreover, the Chern–Simons term is described as the eta invariant defined by Atiyah, Patodi and Singer.
The gauge invariance and the metric invariance can be viewed as the invariance under the adjoint Lie group action in the Chern–Weil theory. The action integral (path integral) of the field theory in physics is viewed as the Lagrangian integral of the Chern–Simons form and Wilson loop, holonomy of vector bundle on M. These explain why the Chern–Simons theory is closely related to topological field theory.
Configurations
Chern–Simons theories can be defined on any topological 3-manifold M, with or without boundary. As these theories are Schwarz-type topological theories, no metric needs to be introduced on M.
Chern–Simons theory is a gauge theory, which means that a classical configuration in the Chern–Simons theory on M with gauge group G is described by a principal G-bundle on M. The connection of this bundle is characterized by a connection one-form A which is valued in the Lie algebra g of the Lie group G. In general the connection A is only defined on individual coordinate patches, and the values of A on different patches are related by maps known as gauge transformations. These are characterized by the assertion that the covariant derivative, which is the sum of the exterior derivative operator d and the connection A, transforms in the adjoint representation of the gauge group G. The square of the covariant derivative with itself can be interpreted as a g-valued 2-form F called the curvature form or field strength. It also transforms in the adjoint representation.
Dynamics
The action S of Chern–Simons theory is proportional to the integral of the Chern–Simons 3-form,

S = (k/4π) ∫_M tr(A ∧ dA + (2/3) A ∧ A ∧ A).
The constant k is called the level of the theory. The classical physics of Chern–Simons theory is independent of the choice of level k.
Classically the system is characterized by its equations of motion, which are the extrema of the action with respect to variations of the field A. In terms of the field curvature

F = dA + A ∧ A,

the field equation is explicitly

0 = δS/δA = (k/2π) F.
The classical equations of motion are therefore satisfied if and only if the curvature vanishes everywhere, in which case the connection is said to be flat. Thus the classical solutions to G Chern–Simons theory are the flat connections of principal G-bundles on M. Flat connections are determined entirely by holonomies around noncontractible cycles on the base M. More precisely, they are in one-to-one correspondence with equivalence classes of homomorphisms from the fundamental group of M to the gauge group G up to conjugation.
If M has a boundary N then there is additional data which describes a choice of trivialization of the principal G-bundle on N. Such a choice characterizes a map from N to G. The dynamics of this map is described by the Wess–Zumino–Witten (WZW) model on N at level k.
Quantization
To canonically quantize Chern–Simons theory one defines a state on each 2-dimensional surface Σ in M. As in any quantum field theory, the states correspond to rays in a Hilbert space. There is no preferred notion of time in a Schwarz-type topological field theory and so one can require that Σ be a Cauchy surface, in fact, a state can be defined on any surface.
Σ is of codimension one, and so one may cut M along Σ. After such a cutting M will be a manifold with boundary and in particular classically the dynamics of Σ will be described by a WZW model. Witten has shown that this correspondence holds even quantum mechanically. More precisely, he demonstrated that the Hilbert space of states is always finite-dimensional and can be canonically identified with the space of conformal blocks of the G WZW model at level k.
For example, when Σ is a 2-sphere, this Hilbert space is one-dimensional and so there is only one state. When Σ is a 2-torus the states correspond to the integrable representations of the affine Lie algebra corresponding to g at level k. Characterizations of the conformal blocks at higher genera are not necessary for Witten's solution of Chern–Simons theory.
Observables
Wilson loops
The observables of Chern–Simons theory are the n-point correlation functions of gauge-invariant operators. The most often studied class of gauge invariant operators are Wilson loops. A Wilson loop is the holonomy around a loop in M, traced in a given representation R of G. As we will be interested in products of Wilson loops, without loss of generality we may restrict our attention to irreducible representations R.
More concretely, given an irreducible representation R and a loop K in M, one may define the Wilson loop by

W_R(K) = Tr_R 𝒫 exp(i ∮_K A),

where A is the connection 1-form, we take the Cauchy principal value of the contour integral, and 𝒫 exp is the path-ordered exponential.
HOMFLY and Jones polynomials
Consider a link L in M, which is a collection of ℓ disjoint loops. A particularly interesting observable is the ℓ-point correlation function formed from the product of the Wilson loops around each disjoint loop, each traced in the fundamental representation of G. One may form a normalized correlation function by dividing this observable by the partition function Z(M), which is just the 0-point correlation function.
In the special case in which M is the 3-sphere, Witten has shown that these normalized correlation functions are proportional to known knot polynomials. For example, in G = U(N) Chern–Simons theory at level k the normalized correlation function is, up to a phase, a known prefactor times the HOMFLY polynomial. In particular when N = 2 the HOMFLY polynomial reduces to the Jones polynomial. In the SO(N) case, one finds a similar expression with the Kauffman polynomial.
The phase ambiguity reflects the fact that, as Witten has shown, the quantum correlation functions are not fully defined by the classical data. The linking number of a loop with itself enters into the calculation of the partition function, but this number is not invariant under small deformations and in particular, is not a topological invariant. This number can be rendered well defined if one chooses a framing for each loop, which is a choice of preferred nonzero normal vector at each point along which one deforms the loop to calculate its self-linking number. This procedure is an example of the point-splitting regularization procedure introduced by Paul Dirac and Rudolf Peierls to define apparently divergent quantities in quantum field theory in 1934.
Sir Michael Atiyah has shown that there exists a canonical choice of 2-framing, which is generally used in the literature today and leads to a well-defined linking number. With the canonical framing the above phase is the exponential of 2πi/(k + N) times the linking number of L with itself.
Problem (Extension of Jones polynomial to general 3-manifolds)
"The original Jones polynomial was defined for 1-links in the 3-sphere (the 3-ball, the 3-space R3). Can you define the Jones polynomial for 1-links in any 3-manifold?"
See section 1.1 of this paper for the background and the history of this problem. Kauffman submitted a solution in the case of the product manifold of a closed oriented surface and the closed interval, by introducing virtual 1-knots. It is open in the other cases. Witten's path integral for the Jones polynomial is written formally for links in any compact 3-manifold, but the calculation has not been carried out, even at the physics level of rigour, in any case other than the 3-sphere (the 3-ball, the 3-space R3). This problem is also open at the physics level. In the case of the Alexander polynomial, this problem is solved.
Relationships with other theories
Topological string theories
In the context of string theory, a U(N) Chern–Simons theory on an oriented Lagrangian 3-submanifold M of a 6-manifold X arises as the string field theory of open strings ending on a D-brane wrapping X in the A-model topological string theory on X. The B-model topological open string field theory on the spacefilling worldvolume of a stack of D5-branes is a 6-dimensional variant of Chern–Simons theory known as holomorphic Chern–Simons theory.
WZW and matrix models
Chern–Simons theories are related to many other field theories. For example, if one considers a Chern–Simons theory with gauge group G on a manifold with boundary then all of the 3-dimensional propagating degrees of freedom may be gauged away, leaving a two-dimensional conformal field theory known as a G Wess–Zumino–Witten model on the boundary. In addition the U(N) and SO(N) Chern–Simons theories at large N are well approximated by matrix models.
Chern–Simons gravity theory
In 1982, S. Deser, R. Jackiw and S. Templeton proposed the Chern–Simons gravity theory in three dimensions, in which the Einstein–Hilbert action in gravity theory is modified by adding the Chern–Simons term.
In 2003, R. Jackiw and S. Y. Pi extended this theory to four dimensions, and Chern–Simons gravity theory has considerable effects not only on fundamental physics but also on condensed matter theory and astronomy.
The four-dimensional case is very analogous to the three-dimensional case. In three dimensions, the gravitational Chern–Simons term is
This variation gives the Cotton tensor
Then, Chern–Simons modification of three-dimensional gravity is made by adding the above Cotton tensor to the field equation, which can be obtained as the vacuum solution by varying the Einstein–Hilbert action.
Chern–Simons matter theories
In 2013 Kenneth A. Intriligator and Nathan Seiberg solved these 3d Chern–Simons gauge theories and their phases using monopoles carrying extra degrees of freedom. The Witten index of the many vacua discovered was computed by compactifying the space by turning on mass parameters and then computing the index. In some vacua, supersymmetry was computed to be broken. These monopoles were related to condensed matter vortices.
The N = 6 Chern–Simons matter theory is the holographic dual of M-theory on AdS₄ × S⁷/ℤₖ.
Four-dimensional Chern–Simons theory
In 2013 Kevin Costello defined a closely related theory defined on a four-dimensional manifold consisting of the product of a two-dimensional 'topological plane' and a two-dimensional (or one complex dimensional) complex curve. He later studied the theory in more detail together with Witten and Masahito Yamazaki, demonstrating how the gauge theory could be related to many notions in integrable systems theory, including exactly solvable lattice models (like the six-vertex model or the XXZ spin chain), integrable quantum field theories (such as the Gross–Neveu model, principal chiral model and symmetric space coset sigma models), the Yang–Baxter equation and quantum groups such as the Yangian which describe symmetries underpinning the integrability of the aforementioned systems.
The action on the 4-manifold M = Σ × C, where Σ is a two-dimensional manifold and C is a complex curve, is

S = (1/2πℏ) ∫_{Σ×C} ω ∧ tr(A ∧ dA + (2/3) A ∧ A ∧ A),

where ω is a meromorphic one-form on C.
Chern–Simons terms in other theories
The Chern–Simons term can also be added to models which aren't topological quantum field theories. In 3D, this gives rise to a massive photon if this term is added to the action of Maxwell's theory of electrodynamics. This term can be induced by integrating over a massive charged Dirac field. It also appears for example in the quantum Hall effect. The addition of the Chern–Simons term to various theories gives rise to vortex- or soliton-type solutions. Ten- and eleven-dimensional generalizations of Chern–Simons terms appear in the actions of all ten- and eleven-dimensional supergravity theories.
One-loop renormalization of the level
If one adds matter to a Chern–Simons gauge theory then, in general it is no longer topological. However, if one adds n Majorana fermions then, due to the parity anomaly, when integrated out they lead to a pure Chern–Simons theory with a one-loop renormalization of the Chern–Simons level by −n/2, in other words the level k theory with n fermions is equivalent to the level k − n/2 theory without fermions.
See also
Gauge theory (mathematics)
Chern–Simons form
Topological quantum field theory
Alexander polynomial
Jones polynomial
2+1D topological gravity
Skyrmion
References
Specific
External links
Quantum field theory | Chern–Simons theory | [
"Physics"
] | 3,357 | [
"Quantum field theory",
"Quantum mechanics"
] |
392,124 | https://en.wikipedia.org/wiki/External%20Data%20Representation | External Data Representation (XDR) is a standard data serialization format, for uses such as computer network protocols. It allows data to be transferred between different kinds of computer systems. Converting from the local representation to XDR is called encoding. Converting from XDR to the local representation is called decoding. XDR is implemented as a software library of functions which is portable between different operating systems and is also independent of the transport layer.
XDR uses a base unit of 4 bytes, serialized in big-endian order; smaller data types still occupy four bytes each after encoding. Variable-length types such as string and opaque are padded to a total divisible by four bytes. Floating-point numbers are represented in IEEE 754 format.
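For illustration, the sketch below (a minimal example using only Python's struct module, not a complete XDR library) encodes a 32-bit integer and a variable-length string following these rules: every item is built from big-endian 4-byte units, and the string is preceded by its length and zero-padded to a multiple of four bytes.

```python
import struct

def xdr_int(value: int) -> bytes:
    """Encode a signed 32-bit integer as 4 big-endian bytes."""
    return struct.pack(">i", value)

def xdr_string(value: bytes) -> bytes:
    """Encode a variable-length string/opaque: 4-byte length, data, zero padding."""
    padding = (4 - len(value) % 4) % 4
    return struct.pack(">I", len(value)) + value + b"\x00" * padding

encoded = xdr_int(259) + xdr_string(b"hello")
print(encoded.hex())
# 00000103 00000005 68656c6c6f 000000  -> "hello" (5 bytes) is padded to 8 bytes
```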
History
XDR was developed in the mid 1980s at Sun Microsystems, and first widely published in 1987.
XDR became an IETF standard in 1995.
The XDR data format is in use by many systems, including:
Network File System (protocol)
ZFS File System
NDMP Network Data Management Protocol
Open Network Computing Remote Procedure Call
Legato NetWorker backup software (later sold by EMC)
NetCDF (a scientific data format)
The R language and environment for statistical computing
The HTTP-NG Binary Wire Protocol
The SpiderMonkey JavaScript engine, to serialize/deserialize compiled JavaScript code
The Ganglia distributed monitoring system
The sFlow network monitoring standard
The libvirt virtualization library, API and UI
The Firebird (database server) for Remote Binary Wire Protocol
Stellar Payment Network
XDR data types
boolean
int – 32-bit integer
unsigned int – unsigned 32-bit integer
hyper – 64-bit integer
unsigned hyper – unsigned 64-bit integer
IEEE float
IEEE double
quadruple (new in RFC1832)
enumeration
structure
string
fixed length array
variable length array
union – discriminated union
fixed length opaque data
variable length opaque data
void – zero byte quantity
optional – optional data is notated similarly to C pointers, but is represented as the data type "pointed to" with a Boolean "present or not" flag. Semantically this is option type.
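As a continuation of the sketch above (again illustrative rather than a complete XDR implementation), the optional type can be encoded as a 4-byte Boolean flag followed by the value only when it is present:

```python
import struct
from typing import Optional

def xdr_bool(flag: bool) -> bytes:
    """XDR booleans are encoded as a 4-byte integer, 0 or 1."""
    return struct.pack(">I", 1 if flag else 0)

def xdr_optional_int(value: Optional[int]) -> bytes:
    """Encode an 'int *' style optional: flag, then the value only if present."""
    if value is None:
        return xdr_bool(False)
    return xdr_bool(True) + struct.pack(">i", value)

print(xdr_optional_int(None).hex())   # 00000000            (absent)
print(xdr_optional_int(42).hex())     # 00000001 0000002a   (present, then value)
```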
See also
Structured Data eXchange Format (SDXF)
Remote Procedure Call
Abstract Syntax Notation One
Data Format Description Language
Comparison of data serialization formats
References
External links
The XDR standard exists in three different versions in the following RFCs:
RFC 4506, 2006. This document makes no technical changes to RFC 1832 and is published for the purposes of noting IANA considerations, augmenting security considerations, and distinguishing normative from informative references.
RFC 1832, 1995 version. Added quadruple-precision floating point to RFC 1014.
RFC 1014, 1987 version.
Cisco's XDR: Technical Notes
jsxdrapi.c, the main source file of SpiderMonkey that uses XDR
protocol.cpp main xdr source file used in Firebird remote protocol
The GNU Libc implementation of rpcgen, the XDR parser.
Mu Dynamics Research Labs racc grammar for XDR
IvmaiAsn ASN1/ECN/XDR Tools (a collection of tools containing an XDR/RPC-to-ASN.1 converter)
Networking standards
Internet Standards
Internet protocols
Data modeling languages
Data serialization formats
Data transmission
Sun Microsystems software | External Data Representation | [
"Technology",
"Engineering"
] | 660 | [
"Networking standards",
"Computer standards",
"Computer networks engineering"
] |
392,876 | https://en.wikipedia.org/wiki/Trilon | A trilon is a three-faceted prism-shaped object.
A trilon can be made to rotate on an axle to show different text or images which may be applied to any of its three facets. Trilons have been used on game shows and billboards.
The game board on the original Concentration may have been the first use of trilons on a game show. The game combined the card game with a rebus puzzle. The original game board consisted of 30 motorized trilons. One facet of each trilon had an identifying number. A description of a prize or other game element was on a second facet, and a portion of a rebus was on the third facet. The rebus was gradually revealed as the game progressed. Puzzle pieces were kept under high security and were attached to the trilons only as needed.
Trilons became a common element on many other game (and reality) shows including:
Three on a Match, which used a board with three columns of four trilons each, but unlike Concentration, these trilons rotated vertically rather than horizontally.
Several incarnations of the Pyramid series (exceptions were the main game board in 1990 and all boards in the 2002 and 2012 versions).
The main game in the game show Whew!
The first season of Street Smarts.
The spaces on the letter board in Wheel of Fortune were trilons until 1997.
The entire game board on the original Family Feud was one large trilon through 1994. One side was itself composed of smaller trilons that could display individual answers during a round.
The board used in the Hidden Pictures rounds on the syndicated version of the Nickelodeon game show Finders Keepers.
The "Jailtime Challenge" round of Where in the World Is Carmen Sandiego? used a game board with 15 trilons that, like those on Three on a Match, rotated vertically.
The game show Debt had a game board with thirty trilons during its first season.
Several pricing games featured in The Price Is Right, such as Bargain Game, Hot Seat and One Away.
The live competitions on the American version of Big Brother.
Mechanically speaking, trilons had a penchant for being temperamental, labor-intensive, and very noisy. They were largely replaced by on-set television monitors, as on Jeopardy! (starting with the 1984 revival, although pull-cards were used instead of trilons to show the categories until 1991). They were replaced by a CGI game board on the 1987 "Classic" revival of Concentration and Family Feud (starting with the 1999 revival).
Trilons have been used in roadside billboards and variable-message signs. Particularly in billboards, many long, thin trilons are placed side-by-side in the frame and periodically rotate simultaneously to cycle the billboard through three separate signs, although many have been replaced by dot-matrix signs capable of displaying a much wider range of messages.
References
Geometric shapes | Trilon | [
"Mathematics"
] | 605 | [
"Geometric shapes",
"Mathematical objects",
"Geometric objects"
] |
393,139 | https://en.wikipedia.org/wiki/Quantum%20turbulence | Quantum turbulence is the name given to the turbulent flow – the chaotic motion of a fluid at high flow rates – of quantum fluids, such as superfluids. The idea that a form of turbulence might be possible in a superfluid via the quantized vortex lines was first suggested by Richard Feynman. The dynamics of quantum fluids are governed by quantum mechanics, rather than classical physics which govern classical (ordinary) fluids. Some examples of quantum fluids include superfluid helium (4He and Cooper pairs of 3He), Bose–Einstein condensates (BECs), polariton condensates, and nuclear pasta theorized to exist inside neutron stars. Quantum fluids exist at temperatures below the critical temperature at which Bose-Einstein condensation takes place.
General properties of superfluids
The turbulence of quantum fluids has been studied primarily in two quantum fluids: liquid Helium and atomic condensates. Experimental observations have been made in the two stable isotopes of Helium, the common 4He and the rare 3He. The latter isotope has two phases, named the A-phase and the B-phase. The A-phase is strongly anisotropic, and although it has very interesting hydrodynamic properties, turbulence experiments have been performed almost exclusively in the B-phase. Helium liquidizes at a temperature of approximately 4K. At this temperature, the fluid behaves like a classical fluid with extraordinarily small viscosity, referred to as helium I. After further cooling, Helium I undergoes Bose-Einstein condensation into a superfluid, referred to as helium II. The critical temperature for Bose-Einstein condensation of helium is 2.17K (at the saturated vapour pressure), while only approximately a few mK for 3He-B.
Although in atomic condensates there is not as much experimental evidence for turbulence as in Helium, experiments have been performed with rubidium, sodium, caesium, lithium and other elements. The critical temperature for these systems is of the order of micro-Kelvin.
There are two fundamental properties of quantum fluids that distinguish them from classical fluids: superfluidity and quantized circulation.
Superfluidity
Superfluidity arises as a consequence of the dispersion relation of elementary excitations, and fluids that exhibit this behaviour flow without viscosity. This is a vital property for quantum turbulence as viscosity in classical fluids causes dissipation of kinetic energy into heat, damping out motion of the fluid. Landau predicted that if a superfluid flows faster than a certain critical velocity (or, alternatively, an object moves through a static fluid faster than this velocity) thermal excitations (rotons) are emitted as it becomes energetically favourable to generate quasiparticles, resulting in the fluid no longer exhibiting superfluid properties. For helium II, this critical velocity is approximately 60 m/s.
Quantized circulation
The property of quantized circulation arises as a consequence of the existence and uniqueness of a complex macroscopic wavefunction ψ, which affects the vorticity (local rotation) in a very profound way, making it crucial for quantum turbulence.
The velocity and density of the fluid can be recovered from the wavefunction ψ by writing it in polar form ψ = |ψ| e^{iφ}, where |ψ| is the magnitude of ψ and φ is the phase. The velocity of the fluid is then v = (ℏ/m) ∇φ, and the number density is n = |ψ|². The mass density is related to the number density by ρ = mn, where m is the mass of one boson.
The circulation is defined to be the line integral along a simple closed path C within the fluid,

Γ = ∮_C v · dl.

For a simply-connected surface, Stokes' theorem holds, and the circulation vanishes, as the velocity can be expressed as the gradient of the phase. For a multiply-connected surface, the phase difference between an arbitrary initial point on the curve and the final point (the same as the initial point, as C is closed) must be 2πq with integer q, in order for the wavefunction to be single-valued. This leads to a quantized value for the circulation,

Γ = qκ,   with κ = h/m,

where κ is the quantum of circulation, and the integer q is the charge (or winding number) of the vortex. Multiply charged vortices (q > 1) in helium II are unstable and for this reason in most practical applications q = 1. It is energetically favourable for the fluid to form q singly-charged vortices rather than a single vortex of charge q, and so a multiply-charged vortex would split into singly-charged vortices. Under certain conditions, it is possible to generate certain vortices with a charge higher than 1.
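As a quick numerical illustration (using standard physical constants; the snippet is illustrative and not part of the original text), the quantum of circulation κ = h/m can be evaluated for superfluid helium-4 and for the Cooper pairs of 3He-B:

```python
H = 6.62607015e-34        # Planck constant, J*s
U = 1.66053907e-27        # atomic mass unit, kg

def circulation_quantum(mass_u):
    """Quantum of circulation kappa = h / m for a carrier of the given mass (in u)."""
    return H / (mass_u * U)

print(f"helium-4 atom:   kappa ≈ {circulation_quantum(4.0026):.2e} m^2/s")
print(f"3He Cooper pair: kappa ≈ {circulation_quantum(2 * 3.0160):.2e} m^2/s")
```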
Properties of vortex lines
Vortex lines are topological line defects of the phase. Their nucleation makes the quantum fluid's region become multiply connected. As given by Fig 2, density depletion can be observed near the axis, with |ψ| = 0 on the vortex line. The size of the vortex core varies between different quantum fluids: it is of the order of an ångström for helium II, tens of nanometres for 3He-B, and a fraction of a micrometre for typical atomic condensates. The simplest vortex system in a quantum fluid consists of a single straight vortex line; the velocity field of such a configuration is purely azimuthal, given by v = κ/(2πr). This is the same formula as for a classical vortex line solution of the Euler equation; however, classically this model is physically unrealistic, as the velocity diverges as r → 0. This leads to the idea of the Rankine vortex as shown in fig 2, which combines solid-body rotation for small r and vortex motion for large values of r, and is a more realistic model of ordinary classical vortices.
Many similarities can be drawn with vortices in classical fluids, for example the fact that vortex lines obey the classical Kelvin circulation theorem: the circulation is conserved and the vortex lines must terminate at boundaries or exist in the shape of closed loops. In the zero temperature limit, a point on a vortex line will travel according to the velocity field that is generated at that point by the other parts of the vortex line, provided that the vortex line is not straight (an isolated straight vortex does not move). The velocity can also be generated by any other vortex lines in the fluid, a phenomenon also present in classical fluids. A simple example of this is a vortex ring (a torus-shaped vortex) which moves at a self-induced velocity inversely proportional to the radius of the ring, R. The whole ring moves at a velocity of approximately

v ≈ (κ/4πR) [ln(8R/a) − 1/2],

where a is the radius of the vortex core.
Kelvin waves and vortex reconnections
Vortices in quantum fluids support Kelvin waves, which are helical perturbations of a vortex line away from its straight configuration that rotate at an angular velocity ω(k) which, up to a logarithmic correction, grows quadratically with the wavenumber. Here k = 2π/λ, where λ is the wavelength and k is the wavevector.
Travelling vortices in quantum fluids can interact with each other, resulting in reconnections of vortex lines and ultimately changing the topology of the vortex configuration when they collide as suggested by Richard Feynman. At non-zero temperatures the vortex lines scatter thermal excitations, which creates a friction force with the normal fluid component (thermal cloud for atomic condensates). This phenomenon leads to the dissipation of kinetic energy. For example, vortex rings will shrink, and Kelvin waves will decrease in amplitude.
Vortex lattice
Vortex lattices are laminar (ordered) configurations of vortex lines that can be created by rotating the system. For a cylindrical vessel of radius R, a condition can be derived for the formation of a vortex lattice by minimising the expression F′ = F − Ω·L, where F is the free energy, L is the angular momentum of the fluid and Ω is the rotation, with magnitude Ω and axial direction. The critical angular velocity for the appearance of a vortex lattice is then Ω_c ≈ (κ/2πR²) ln(R/a), where a is the radius of the vortex core.
Exceeding this velocity allows a vortex to form in the fluid. States with more vortices can be formed by increasing the rotation further, past successive critical velocities. The vortices arrange themselves into ordered configurations that are called vortex lattices.
Two fluid nature
At non-zero temperature T, thermal effects must be taken into account. For atomic gases at non-zero temperatures, a fraction of the atoms are not part of the condensate, but rather form a rarefied (large mean free path) thermal cloud that co-exists with the condensate (which, in the first approximation, can be identified with the superfluid component). Since helium is a liquid, not a dilute gas like atomic condensates, there is a much stronger interaction between atoms, and the condensate is only a part of the superfluid component. Thermal excitations (consisting of phonons and rotons) form a viscous fluid component (very short mean free path, analogous to a classical viscous fluid governed by the Navier–Stokes equation), called the normal fluid, which coexists with the superfluid component. This forms the basis of Tisza's and Landau's two-fluid theory describing helium II as the mixture of co-penetrating superfluid and normal fluid components, with a total density dictated by the equation ρ = ρs + ρn. The superfluid component has density ρs, velocity vs, zero viscosity and zero entropy, while the normal fluid component has density ρn, velocity vn, viscosity η and carries the entire entropy of the system.
The relative proportions of the two components change with temperature, from an all normal fluid flow at the transition temperature (ρn = ρ and ρs = 0), to a complete superfluid flow in the zero temperature limit (ρn = 0 and ρs = ρ). At small velocities, the two-fluid equations include the conservation laws

∂ρ/∂t + ∇·(ρs vs + ρn vn) = 0,
∂(ρS)/∂t + ∇·(ρS vn) = 0,

where S is the entropy per unit mass; the accompanying momentum equations involve the pressure p and the viscosity η of the normal fluid component introduced above. The first of these equations can be identified as the conservation of mass equation, while the second equation can be identified as the conservation of entropy. The results of these equations give rise to the phenomena of second sound and thermal counterflow. At large velocities the superfluid becomes turbulent and vortex lines appear; at even larger velocities both normal fluid and superfluid become turbulent.
Classical vs quantum turbulence
Experiments and numerical solutions show that quantum turbulence is an apparently random tangle of vortex lines inside a quantum fluid. The study of quantum turbulence aims to explore two main questions:
Are vortex tangles really random, or do they contain some characteristic properties or organised structures?
How does quantum turbulence compare with classical turbulence?
To understand quantum turbulence it is useful to make connection with the turbulence of classical fluids. The turbulence of classical fluids is an everyday phenomenon, which can be readily observed in the flow of a stream or river as was first done by Leonardo da Vinci in his famous sketches. When turning on a water tap, one notices that at first the water flows out in a regular fashion (called laminar flow), but if the tap is turned up to higher flow rates, the flow becomes decorated with irregular bulges, unpredictably splitting into multiple strands as it spatters out in an ever-changing torrent, known as turbulent flow. Leonardo da Vinci first observed and noted in his private notebooks that turbulent flows of classical fluids include areas of circulating fluid called vortices (or eddies).
The simplest case of classical turbulence is that of homogeneous isotropic turbulence (HIT) held in a statistical steady state. Such turbulence can be created inside of a wind tunnel, for example a channel with air flow propelled by a fan from one side to the other. It is often equipped with a mesh to create a turbulent flow of air. A statistically steady state ensures that the main properties of the flow stabilise even though they fluctuate locally. Due to the presence of viscosity, without a continuous supply of energy the turbulence of the flow will decay because of frictional forces. In the wind tunnel, energy is consistently provided by the fan. It is useful to introduce the concept of energy distribution over the length scales, the wavevector k, and the wavenumber k = |k|. In one dimension, the wavenumber can be related to the wavelength simply using k = 2π/λ. The total energy per unit mass is given by

E = ∫₀^∞ E(k) dk,

where E(k) is the energy spectrum, essentially representing the distribution of turbulent kinetic energy over the wavenumbers. The notion of an energy cascade, where an energy transfer takes place from large scale vortices to smaller scale vortices, which eventually lead to viscous dissipation, was memorably noted by Lewis Fry Richardson. Dissipation occurs at the dissipation length scale (termed the Kolmogorov length scale) η = (ν³/ε)^{1/4}, where ν is the kinematic viscosity. By the pioneering work of Andrey Kolmogorov, the energy spectrum was found to take the form

E(k) = C ε^{2/3} k^{−5/3},

where ε is the energy dissipation rate per unit mass. The constant C is a dimensionless constant that takes the value C ≈ 1.5. In k-space the value associated to the Kolmogorov length scale is the Kolmogorov wavenumber k_η = 2π/η, at which viscous dissipation occurs.
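These classical relations are straightforward to evaluate numerically; the sketch below (with an assumed, purely illustrative dissipation rate and the kinematic viscosity of air) computes the Kolmogorov length scale and samples the k^(−5/3) spectrum.

```python
import numpy as np

nu = 1.5e-5          # kinematic viscosity of air, m^2/s
eps = 1.0            # assumed energy dissipation rate per unit mass, W/kg (illustrative)
C = 1.5              # Kolmogorov constant (approximate)

eta = (nu**3 / eps) ** 0.25          # Kolmogorov length scale
k_eta = 2 * np.pi / eta              # Kolmogorov wavenumber

k = np.logspace(1, np.log10(k_eta), 5)      # a few wavenumbers up to the dissipation scale
E_k = C * eps ** (2 / 3) * k ** (-5 / 3)    # Kolmogorov -5/3 spectrum

print(f"eta ≈ {eta*1e3:.2f} mm, k_eta ≈ {k_eta:.0f} 1/m")
for ki, Ei in zip(k, E_k):
    print(f"E({ki:9.1f}) = {Ei:.3e} m^3/s^2")
```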
Kolmogorov cascade in quantum fluids
For temperatures low enough for quantum mechanical effects to govern the fluid, quantum turbulence is a seemingly chaotic tangle of vortex lines with a highly knotted topology, which move each other and reconnect when they collide. In a pure superfluid, there is no normal component to carry the entropy of the system and therefore the fluid flows without viscosity, resulting in the lack of a dissipation scale η. Analogously to classical fluids, a quantum length scale ℓ_Q = (κ³/ε)^{1/4} (and the corresponding value in k-space, k_Q = 2π/ℓ_Q) can be introduced by replacing the kinematic viscosity ν in the Kolmogorov length scale with the quantum of circulation κ. For scales larger than ℓ_Q, a small polarisation of the vortex lines allows the stretching required to sustain a Kolmogorov energy cascade.
Experiments have been performed in superfluid helium II to create turbulence that behaves according to the Kolmogorov cascade. One such example is the case of two counter-rotating propellers, where both above and below the critical temperature a Kolmogorov energy spectrum was observed that is indistinguishable from those observed in the turbulence of classical fluids. For higher temperatures, the existence of the normal fluid component leads to the presence of viscous forces and eventual heat dissipation, which warms the system. As a consequence of this friction the vortices become smoother, and the Kelvin waves that arise due to vortex reconnections are smoother than in low-temperature quantum turbulence. Kolmogorov turbulence arises in quantum fluids for energy input at large length scales, where the energy spectrum follows the k^{−5/3} form in the inertial range. For length scales smaller than the inter-vortex spacing, the energy spectrum instead follows a regime dominated by the dynamics of individual vortex lines.
For temperatures in the zero limit, the undamped Kelvin waves result in more kinks appearing in the shapes of the vortices. For large length scales the quantum turbulence manifests as a Kolmogorov energy cascade (numerical simulations using the Gross–Pitaevskii equation and the vortex-filament model confirmed this effect), with the energy spectrum following k^{−5/3}. Lacking thermal dissipation, it is intuitive to assume that quantum turbulence in the low temperature limit does not decay as it would for higher temperatures; however, experimental evidence showed that this is not the case: quantum turbulence decays even at very low temperatures. The Kelvin waves interact and create shorter Kelvin waves, until they are short enough that sound (phonons) is emitted, which results in the conversion of kinetic energy into heat and thus the dissipation of energy. This process, which shifts energy to smaller and smaller length scales at wavenumbers larger than the inter-vortex wavenumber, is called the Kelvin wave cascade and proceeds on individual vortices. Low temperature quantum turbulence should thus consist of a double cascade: a Kolmogorov regime (a cascade of eddies) in the inertial range, followed by a bottle-neck plateau, followed by the Kelvin wave cascade (a cascade of waves) that obeys the same law but with a different physical origin. This is the current consensus, but it must be stressed that it arises from theory and numerical simulations only: there is currently no direct experimental evidence for the Kelvin wave cascade due to the difficulty of observing and measuring at such small length scales.
Vinen turbulence
Vinen turbulence can be generated in a quantum fluid by the injection of vortex rings into the system, which has been observed both numerically and experimentally. It has also been observed in numerical simulations of turbulent helium II driven by a small heat flux and in numerical simulations of trapped atomic Bose–Einstein condensates; it has been found even in numerical studies of superfluid models of the early universe. Unlike the Kolmogorov regime, which appears to have a classical counterpart, Vinen turbulence has not been identified in classical turbulence.
Vinen turbulence occurs for very low energy inputs into the system, which prevents the formation of the large scale, partially polarised structures that are prevalent in Kolmogorov turbulence, as is shown in Fig 9a. The partial polarization contributes strongly to the amount of non-local interactions between the vortex lines, which can be seen in the figure. In stark contrast, Fig 9b displays the Vinen turbulence regime, where there is very little non-local interaction. The energy spectrum of Vinen turbulence peaks at intermediate scales, around the wavenumber corresponding to the inter-vortex spacing, rather than at the largest length scales (smallest k). From Fig 10, it can be seen that for small length scales the turbulence follows the typical behaviour of an isolated vortex. As a result of these properties Vinen turbulence appears as an almost completely random flow with a very weak or negligible energy cascade.
Decay of quantum turbulence
Stemming from their different signatures, Kolmogorov and Vinen turbulence follow different power laws in their temporal decay. For the Kolmogorov regime, after removing the forcing which sustains the turbulence in a statistical steady state, a decay of t^{−2} is observed for the energy and t^{−3/2} for the vortex line density (defined as the vortex length per unit volume). Vinen turbulence decays temporally at a slower rate than Kolmogorov turbulence: the energy decays as t^{−1} and the vortex line density also as t^{−1}.
Turbulence in atomic condensates
Computer simulations have played a particularly important role in the development of the theoretical understanding of quantum turbulence.
Turbulence in atomic condensates has only been studied very recently, meaning that there is less information available. Turbulent atomic condensates contain a much smaller number of vortices compared to turbulence in helium. Because of the small size of typical atomic condensates, there is not a large length scale separation between the system size and the inter-vortex size, and therefore k-space is restricted. Numerical simulations suggest that turbulence is more likely to appear in the Vinen regime. Experiments performed in Cambridge have also found the emergence of wave-turbulence scaling.
Generation and detection of quantum turbulence
Physical generation of quantum turbulence
There is a plethora of methods that can be used to generate a vortex tangle (visualised in fig 11) in the laboratory. Here they are listed by the quantum fluid in which they can be generated.
QT in helium II
Suddenly towing a grid in the sample of fluid at rest
Moving the fluid along pipes or channels using bellows or pumps, creating a superfluid wind tunnel (the TOUPIE experiment in Grenoble)
Rotating one or two propellers inside a container; the configuration of two counter-rotating propellers is called the "von Karman flow" (e.g. the SHREK experiment in Grenoble)
Creating shockwaves and cavitation by locally focusing ultrasound (this allows for the generation of quantum turbulence away from the boundaries)
Oscillating/vibrating forks or wires
Applying a heat flux (also termed "thermal counterflow"): the prototype experiment is a channel which is open to a helium bath at one end, while the opposite end is closed and contains a resistor. An electric current is passed through the resistor and generates ohmic heat; the heat is carried away from the heater towards the bath by the normal fluid component, while the superfluid moves towards the heater so that the net mass flux is zero, as the channel is closed. A relative velocity (counterflow) of the two fluid components is set up in this way, which is proportional to the applied heat. Above a small critical value of the counterflow velocity, a turbulent vortex tangle is generated.
Injecting vortex rings (rings are generated by injecting electrons, each of which forms a small bubble of about 16 angstroms in size that is accelerated by an electric field until, upon exceeding the critical velocity, a vortex ring is nucleated)
QT in 3He-B and atomic condensates
In 3He-B, quantum turbulence can be generated by the vibration of wires. For atomic condensates, quantum turbulence can be generated by shaking or oscillating the trap which confines the BEC and by phase imprinting the quantum vortices.
Detection of quantum turbulence
In classical turbulence, one usually measures the velocity, either at a fixed position against time (typical of physical experiments) or at the same time at many positions (typical of numerical simulations). Quantum turbulence is characterised by a disordered tangle of discrete (individual) vortex lines.
In helium II, techniques exist to measure the vortex line density (the length of vortex lines per unit volume) based on detecting the second sound attenuation. The average distance between vortex lines, ℓ, can be found in terms of the vortex line density L as ℓ ≈ L^−1/2.
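A one-line numeric illustration of this relation (the line density value is a hypothetical measurement chosen for the example):

```python
# Mean inter-vortex distance from a measured vortex line density, ell = L**(-1/2).
L_density = 1.0e4           # hypothetical vortex line density, cm^-2
ell = L_density ** -0.5     # mean inter-vortex spacing, cm
print(ell)                  # -> 0.01 cm
```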
Detection in helium II
Measuring the attenuation of second sound waves
Measuring temperature or pressure gradients
Measuring ions trapped in the vortices
Using tracer particles (small glass or plastic spheres/solid hydrogen snowballs) of size of the order of a micron, and then imaging them using lasers. Techniques that can be used are PIV (particle image velocimetry) or PTV (particle tracking velocimetry). Most recently, excimer helium molecules have been used
Using oscillating forks
Using cantilevers
Using cryogenic hot wires
Detection in 3He-B and atomic condensates
Quantum turbulence can be detected in 3He-B in two ways: nuclear magnetic resonance (NMR) and by Andreev scattering of thermal quasiparticles. For atomic condensates, it is typical that the condensate must be expanded (by switching off the trapping potential) so that the distance between vortices is sufficiently large for an image to be taken. This procedure has the disadvantage of destroying the condensate. The outcome is a 2-dimensional image, which allows for the study of 2-dimensional quantum turbulence but imposes a constraint on studying 3-dimensional quantum turbulence with this method. Individual quantum vortices have been observed in 3 dimensions, moving and reconnecting, using a technique which extracts small fractions of the condensate at a time, allowing for the observation of a time sequence of the same vortex configuration.
See also
Superfluid helium-4
Macroscopic quantum phenomena
Quantum vortex
Quantum hydrodynamics
2D Quantum Turbulence
References
Turbulence
Superfluidity | Quantum turbulence | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,641 | [
"Physical phenomena",
"Phase transitions",
"Turbulence",
"Phases of matter",
"Superfluidity",
"Condensed matter physics",
"Exotic matter",
"Matter",
"Fluid dynamics"
] |
393,148 | https://en.wikipedia.org/wiki/Roton | In theoretical physics, a roton is an elementary excitation, or quasiparticle, seen in superfluid helium-4 and Bose–Einstein condensates with long-range dipolar interactions or spin-orbit coupling. The dispersion relation of elementary excitations in this superfluid shows a linear increase from the origin, but exhibits first a maximum and then a minimum in energy as the momentum increases. Excitations with momenta in the linear region are called phonons; those with momenta close to the minimum are called rotons. Excitations with momenta near the maximum are called maxons.
The term "roton-like" is also used for the predicted eigenmodes in 3D metamaterials using beyond-nearest-neighbor coupling. The observation of such a "roton-like" dispersion relation was demonstrated under ambient conditions for both acoustic pressure waves in a channel-based metamaterial at audible frequencies and transverse elastic waves in a microscale metamaterial at ultrasound frequencies.
Models
Originally, the roton spectrum was phenomenologically introduced by Lev Landau in 1947. Currently there exist models which try to explain the roton spectrum with varying degrees of success and fundamentality. The requirement for any model of this kind is that it must explain not only the shape of the spectrum itself but also other related observables, such as the speed of sound and structure factor of superfluid helium-4. Microwave and Bragg spectroscopy has been conducted on helium to study the roton spectrum.
Bose–Einstein condensation
Bose–Einstein condensation of rotons has also been proposed and studied. Its first detection was reported in 2018. Under specific conditions the roton minimum gives rise to a crystalline, solid-like structure called a supersolid, as shown in experiments from 2019.
See also
Superfluid
Macroscopic quantum phenomena
Bose–Einstein condensate
References
Bibliography
Quasiparticles
Bose–Einstein condensates
Superfluidity
Lev Landau | Roton | [
"Physics",
"Chemistry",
"Materials_science"
] | 413 | [
"Bose–Einstein condensates",
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Superfluidity",
"Subatomic particles",
"Condensed matter physics",
"Exotic matter",
"Quasiparticles",
"Matter",
"Fluid dynamics"
] |
16,703,428 | https://en.wikipedia.org/wiki/Euler%E2%80%93Poisson%E2%80%93Darboux%20equation | In mathematics, the Euler–Poisson–Darboux (EPD) equation is the partial differential equation
u_xy + N(u_x + u_y)/(x + y) = 0.
This equation is named for Siméon Poisson, Leonhard Euler, and Gaston Darboux. It plays an important role in solving the classical wave equation.
This equation is related to
by a change of variables, and some sources quote that transformed equation when referring to the Euler–Poisson–Darboux equation.
The EPD equation is the simplest linear hyperbolic equation in two independent variables whose coefficients exhibit singularities; it is therefore of interest as a paradigm in relativity theory.
A self-similar solution of the EPD equation with compact support, describing thermal conduction, has been derived starting from the modified Fourier–Cattaneo law.
It is also possible to solve the non-linear EPD equations with the method of generalized separation of variables.
References
External links
Differential calculus
Eponymous equations of physics
Partial differential equations
Leonhard Euler | Euler–Poisson–Darboux equation | [
"Physics",
"Mathematics"
] | 187 | [
"Mathematical analysis",
"Equations of physics",
"Mathematical analysis stubs",
"Calculus",
"Eponymous equations of physics",
"Differential calculus"
] |
16,704,344 | https://en.wikipedia.org/wiki/Gauss%27s%20law%20for%20gravity | In physics, Gauss's law for gravity, also known as Gauss's flux theorem for gravity, is a law of physics that is equivalent to Newton's law of universal gravitation. It is named after Carl Friedrich Gauss. It states that the flux (surface integral) of the gravitational field over any closed surface is proportional to the mass enclosed. Gauss's law for gravity is often more convenient to work from than Newton's law.
The form of Gauss's law for gravity is mathematically similar to Gauss's law for electrostatics, one of Maxwell's equations. Gauss's law for gravity has the same mathematical relation to Newton's law that Gauss's law for electrostatics bears to Coulomb's law. This is because both Newton's law and Coulomb's law describe inverse-square interaction in a 3-dimensional space.
Qualitative statement of the law
The gravitational field g (also called gravitational acceleration) is a vector field – a vector at each point of space (and time). It is defined so that the gravitational force experienced by a particle is equal to the mass of the particle multiplied by the gravitational field at that point.
Gravitational flux is a surface integral of the gravitational field over a closed surface, analogous to how magnetic flux is a surface integral of the magnetic field.
Gauss's law for gravity states:
The gravitational flux through any closed surface is proportional to the enclosed mass.
Integral form
The integral form of Gauss's law for gravity states:
∮∂V g · dA = −4πGM,
where
∮∂V (also written ∯∂V) denotes a surface integral over a closed surface,
∂V is any closed surface (the boundary of an arbitrary volume V),
dA is a vector, whose magnitude is the area of an infinitesimal piece of the surface ∂V, and whose direction is the outward-pointing surface normal (see surface integral for more details),
g is the gravitational field,
G is the universal gravitational constant, and
M is the total mass enclosed within the surface ∂V.
The left-hand side of this equation is called the flux of the gravitational field. Note that according to the law it is always negative (or zero), and never positive. This can be contrasted with Gauss's law for electricity, where the flux can be either positive or negative. The difference is because charge can be either positive or negative, while mass can only be positive.
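A minimal numerical sanity check of the statement above, assuming only the Newtonian point-mass field; the unit values of G and M, the sphere radius, and the test positions are arbitrary illustrative choices, not values from the article:

```python
import numpy as np

# Numerical check of the integral form: the flux of g through a closed sphere
# is -4*pi*G*M when the mass is inside, and 0 when it is outside.
G, M = 1.0, 1.0
R = 2.0                                   # radius of the Gaussian sphere

def flux(mass_pos, n=400):
    # midpoint grids over the sphere's polar and azimuthal angles
    th = (np.arange(n) + 0.5) * np.pi / n
    ph = (np.arange(n) + 0.5) * 2 * np.pi / n
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    # outward unit normal on the sphere
    nx = np.sin(TH) * np.cos(PH); ny = np.sin(TH) * np.sin(PH); nz = np.cos(TH)
    # vectors from the point mass to each surface point
    dx = R * nx - mass_pos[0]; dy = R * ny - mass_pos[1]; dz = R * nz - mass_pos[2]
    r3 = (dx**2 + dy**2 + dz**2) ** 1.5
    g_dot_n = -G * M * (dx * nx + dy * ny + dz * nz) / r3   # Newtonian field
    dA = R**2 * np.sin(TH) * (np.pi / n) * (2 * np.pi / n)  # area elements
    return (g_dot_n * dA).sum()

print(flux((0.5, 0.0, 0.0)), -4 * np.pi * G * M)  # mass inside: flux ~ -12.566
print(flux((5.0, 0.0, 0.0)))                      # mass outside: flux ~ 0
```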
Differential form
The differential form of Gauss's law for gravity states
∇ · g = −4πGρ,
where ∇ · denotes divergence, G is the universal gravitational constant, and ρ is the mass density at each point.
Relation to the integral form
The two forms of Gauss's law for gravity are mathematically equivalent. The divergence theorem states:
∮∂V g · dA = ∫V (∇ · g) dV,
where V is a closed region bounded by a simple closed oriented surface ∂V and dV is an infinitesimal piece of the volume V (see volume integral for more details). The gravitational field g must be a continuously differentiable vector field defined on a neighborhood of V.
Given also that
M = ∫V ρ dV,
we can apply the divergence theorem to the integral form of Gauss's law for gravity, which becomes:
∫V (∇ · g) dV = −4πG ∫V ρ dV,
which can be rewritten:
∫V (∇ · g + 4πGρ) dV = 0.
This has to hold simultaneously for every possible volume V; the only way this can happen is if the integrands are equal. Hence we arrive at
∇ · g = −4πGρ,
which is the differential form of Gauss's law for gravity.
It is possible to derive the integral form from the differential form using the reverse of this method.
Although the two forms are equivalent, one or the other might be more convenient to use in a particular computation.
Relation to Newton's law
Deriving Gauss's law from Newton's law
Gauss's law for gravity can be derived from Newton's law of universal gravitation, which states that the gravitational field due to a point mass is:
g = −(GM/|r|²) er,
where
er is the radial unit vector,
r is the radius, |r|.
M is the mass of the particle, which is assumed to be a point mass located at the origin.
A proof using vector calculus is shown in the box below. It is mathematically identical to the proof of Gauss's law (in electrostatics) starting from Coulomb's law.
Deriving Newton's law from Gauss's law and irrotationality
It is impossible to mathematically prove Newton's law from Gauss's law alone, because Gauss's law specifies the divergence of g but does not contain any information regarding the curl of g (see Helmholtz decomposition). In addition to Gauss's law, the assumption is used that g is irrotational (has zero curl), as gravity is a conservative force:
∇ × g = 0.
Even these are not enough: Boundary conditions on g are also necessary to prove Newton's law, such as the assumption that the field is zero infinitely far from a mass.
The proof of Newton's law from these assumptions is as follows:
Poisson's equation and gravitational potential
Since the gravitational field has zero curl (equivalently, gravity is a conservative force) as mentioned above, it can be written as the gradient of a scalar potential, called the gravitational potential:
g = −∇ϕ.
Then the differential form of Gauss's law for gravity becomes Poisson's equation:
∇²ϕ = 4πGρ.
This provides an alternate means of calculating the gravitational potential and gravitational field. Although computing g via Poisson's equation is mathematically equivalent to computing g directly from Gauss's law, one or the other approach may be an easier computation in a given situation.
In radially symmetric systems, the gravitational potential is a function of only one variable (namely, the radius r), and Poisson's equation becomes (see Del in cylindrical and spherical coordinates):
(1/r²) ∂/∂r (r² ∂ϕ/∂r) = 4πGρ,
while the gravitational field is:
g = −(∂ϕ/∂r) er.
When solving the equation it should be taken into account that in the case of finite densities ∂ϕ/∂r has to be continuous at boundaries (discontinuities of the density), and zero for r = 0.
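The following sketch integrates the radial equation numerically for a hypothetical uniform-density sphere (the unit values of G, the radius, and the density are arbitrary choices), recovering the linear interior field and the inverse-square exterior field:

```python
import numpy as np

# Sketch: integrating (r^2 phi')' = 4*pi*G*rho*r^2 once gives the field
# magnitude g(r) = G*M(<r)/r^2, with g(0) = 0 as discussed above.
G, a, rho = 1.0, 1.0, 1.0              # assumed units: sphere of radius a
r = np.linspace(1e-6, 3.0, 3000)
rho_r = np.where(r < a, rho, 0.0)      # uniform density inside, vacuum outside
dr = r[1] - r[0]
M_enc = 4 * np.pi * np.cumsum(rho_r * r**2) * dr   # enclosed mass M(<r)
g = G * M_enc / r**2                   # inward field magnitude

# inside: g grows linearly with r; outside: g falls off as 1/r^2
print(g[np.searchsorted(r, 0.5)] / g[np.searchsorted(r, 0.25)])  # ~2 (linear)
print(g[np.searchsorted(r, 2.0)] * 2.0**2)   # ~ G*M_total = 4*pi/3 ~ 4.19
```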
Applications
Gauss's law can be used to easily derive the gravitational field in certain cases where a direct application of Newton's law would be more difficult (but not impossible). See the article Gaussian surface for more details on how these derivations are done. Three such applications are as follows:
Bouguer plate
We can conclude (by using a "Gaussian pillbox") that for an infinite, flat plate (Bouguer plate) of any finite thickness, the gravitational field outside the plate is perpendicular to the plate, towards it, with magnitude 2πG times the mass per unit area, independent of the distance to the plate (see also gravity anomalies).
More generally, for a mass distribution with the density depending on one Cartesian coordinate z only, gravity for any z is 2πG times the difference in mass per unit area on either side of this z value.
In particular, a parallel combination of two parallel infinite plates of equal mass per unit area produces no gravitational field between them.
Cylindrically symmetric mass distribution
In the case of an infinite uniform (in z) cylindrically symmetric mass distribution we can conclude (by using a cylindrical Gaussian surface) that the field strength at a distance r from the center is inward with a magnitude of 2G/r times the total mass per unit length at a smaller distance (from the axis), regardless of any masses at a larger distance.
For example, inside an infinite uniform hollow cylinder, the field is zero.
Spherically symmetric mass distribution
In the case of a spherically symmetric mass distribution we can conclude (by using a spherical Gaussian surface) that the field strength at a distance r from the center is inward with a magnitude of G/r² times only the total mass within a smaller distance than r. All the mass at a greater distance than r from the center has no resultant effect.
For example, a hollow sphere does not produce any net gravity inside. The gravitational field inside is the same as if the hollow sphere were not there (i.e. the resultant field is that of all masses not including the sphere, which can be inside and outside the sphere).
Although this follows in one or two lines of algebra from Gauss's law for gravity, it took Isaac Newton several pages of cumbersome calculus to derive it directly using his law of gravity; see the article shell theorem for this direct derivation.
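A quick Monte Carlo illustration of the shell result, assuming nothing beyond Newton's point-mass law; the shell radius, sample size, and test points are arbitrary choices:

```python
import numpy as np

# Monte Carlo check of the shell theorem: the net Newtonian field from a
# uniform hollow sphere vanishes at interior points and equals G*M/r^2 outside.
rng = np.random.default_rng(0)
N, R, G = 200_000, 1.0, 1.0

# point masses (each of mass 1/N) distributed uniformly over the shell
v = rng.normal(size=(N, 3))
pts = R * v / np.linalg.norm(v, axis=1, keepdims=True)

def field(x):
    d = pts - x                                   # vectors toward each mass
    r3 = np.linalg.norm(d, axis=1) ** 3
    return G / N * (d / r3[:, None]).sum(axis=0)  # attraction toward masses

print(np.linalg.norm(field(np.array([0.3, 0.1, -0.2]))))  # inside: ~0
print(np.linalg.norm(field(np.array([2.0, 0.0, 0.0]))))   # outside: ~0.25 = G*M/r^2
```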
Derivation from Lagrangian
The Lagrangian density for Newtonian gravity is
ℒ = −ρϕ − (1/(8πG)) |∇ϕ|².
Applying Hamilton's principle to this Lagrangian, the result is Gauss's law for gravity:
∇²ϕ = 4πGρ.
See Lagrangian (field theory) for details.
See also
Carl Friedrich Gauss
Divergence theorem
Gauss's law for electricity
Gauss's law for magnetism
Vector calculus
Integral
Flux
Gaussian surface
References
Further reading
For usage of the term "Gauss's law for gravity" see, for example,
Theories of gravity
Vector calculus
Gravity
Newtonian gravity | Gauss's law for gravity | [
"Physics"
] | 1,824 | [
"Theoretical physics",
"Theories of gravity"
] |
16,705,504 | https://en.wikipedia.org/wiki/Cypher%20stent | Cypher is a brand of drug-eluting coronary stent from Cordis Corporation, a Cardinal Health company. During a balloon angioplasty, the stent is inserted into the artery to provide a "scaffold" to open the artery. An anti-rejection-type medication, sirolimus, helps to limit the overgrowth of normal cells while the artery heals, which reduces the chance of re-blockage in the treated area (known as restenosis) and reduces the chances that another procedure is required.
The Cypher stent was approved for use by the FDA in 2003. Following claims of inconsistent manufacturing processes and poor sales, Johnson & Johnson announced that it would stop selling Cypher stents by the end of 2011.
See also
Sirolimus: Anti-proliferative effects
References
Drug delivery devices | Cypher stent | [
"Chemistry"
] | 172 | [
"Pharmacology",
"Drug delivery devices"
] |
16,711,283 | https://en.wikipedia.org/wiki/BKS%20theory | In the history of quantum mechanics, the Bohr–Kramers–Slater (BKS) theory was perhaps the final attempt at understanding the interaction of matter and electromagnetic radiation on the basis of the so-called old quantum theory, in which quantum phenomena are treated by imposing quantum restrictions on classically describable behaviour. It was advanced in 1924, and sticks to a classical wave description of the electromagnetic field. It was perhaps more a research program than a full physical theory, the ideas that are developed not being worked out in a quantitative way. The purpose of BKS theory was to disprove Einstein's hypothesis of the light quantum.
One aspect, the idea of modelling atomic behaviour under incident electromagnetic radiation using "virtual oscillators" at the absorption and emission frequencies, rather than the (different) apparent frequencies of the Bohr orbits, significantly led Max Born, Werner Heisenberg and Hendrik Kramers to explore mathematics that strongly inspired the subsequent development of matrix mechanics, the first form of modern quantum mechanics. The provocativeness of the theory also generated great discussion and renewed attention to the difficulties in the foundations of the old quantum theory. However, physically the most provocative element of the theory, that momentum and energy would not necessarily be conserved in each interaction but only overall, statistically, was soon shown to be in conflict with experiment.
Walther Bothe won the Nobel Prize in Physics in 1954 for the Bothe–Geiger coincidence experiment that experimentally disproved BKS theory.
Origins
When Albert Einstein introduced the light quantum (photon) in 1905, there was much resistance from the scientific community. However, when in 1923, the Compton effect showed the results could be explained by assuming the light beam behaves as light-quanta and that energy and momentum are conserved, Niels Bohr was still resistant against quantized light, even repudiating it in his 1922 Nobel Prize lecture. So Bohr found a way of using Einstein's approach without also using the light-quantum hypothesis by reinterpreting the principles of energy and momentum conservation as statistical principles. Thus, it was in 1924 that Bohr, Hendrik Kramers and John C. Slater published a provocative description of the interaction of matter and electromagnetic interaction, historically known as the BKS paper that combined quantum transitions and electromagnetic waves with energy and momentum being conserved only on average.
The initial idea of the BKS theory originated with Slater, who proposed to Bohr and Kramers the following elements of a theory of emission and absorption of radiation by atoms, to be developed during his stay in Copenhagen:
Emission and absorption of electromagnetic radiation by matter is realized in agreement with Einstein's photon concept;
A photon emitted by an atom is guided by a classical electromagnetic field (c.f. Louis de Broglie's ideas published September 1923) consisting of spherical waves, thus enabling an explanation of interference;
Even when there are no transitions there exists a classical field to which all atoms contribute; this field contains all frequencies at which an atom can emit or absorb a photon, the probability of such an emission being determined by the amplitude of the corresponding Fourier component of the field; the probabilistic aspect is provisional, to be eliminated when the dynamics of the inside of atoms are better known;
The classical field is not produced by the actual motions of the electrons but by "motions with the frequencies of possible emission and absorption lines" (to be called 'virtual oscillators', creating a field to be referred to as 'virtual' as well).
This fourth point harks back to Max Planck's original view of his quantum introduction in 1900. Planck also did not believe that light was quantized. He believed that a black body had virtual oscillators and that the quantum was to be considered only during interactions between light and the virtual oscillators of the body. Max Planck said in 1911,
Independently, Franz S. Exner had also suggested the statistical validity of energy conservation in the same spirit as the second law of thermodynamics. Erwin Schrödinger, who did his habilitation under the supervision of Exner, was very supportive of the BKS theory. Schrödinger published a paper to provide his own interpretation of the BKS statistical interpretation.
Development with Bohr and Kramers
Slater's main intention seems to have been to reconcile the two conflicting models of radiation, viz. the wave and particle models. He may have had good hopes that his idea with respect to oscillators vibrating at the differences of the frequencies of electron rotations (rather than at the rotation frequencies themselves) might be attractive to Bohr because it solved a problem of the latter's atomic model, even though the physical meaning of these oscillators was far from clear. Nevertheless, Bohr and Kramers had two objections to Slater's proposal:
The assumption that photons exist. Even though Einstein's photon hypothesis could explain in a simple way the photoelectric effect, as well as conservation of energy in processes of de-excitation of an atom followed by excitation of a neighboring one, Bohr had always been reluctant to accept the reality of photons, his main argument being the problem of reconciling the existence of photons with the phenomenon of interference;
The impossibility to account for conservation of energy in a process of de-excitation of an atom followed by excitation of a neighboring one. This impossibility followed from Slater's probabilistic assumption, which did not imply any correlation between processes going on in different atoms.
As Max Jammer puts it, this refocussed the theory "to harmonize the physical picture of the continuous electromagnetic field with the physical picture, not as Slater had proposed of light quanta, but of the discontinuous quantum transitions in the atom." Bohr and Kramers hoped to be able to evade the photon hypothesis on the basis of ongoing work by Kramers to describe "dispersion" (in present-day terms, inelastic scattering) of light by means of a classical theory of interaction of radiation and matter. But abandoning the concept of the photon, they instead chose to squarely accept the possibility of non-conservation of energy and momentum.
Experimental counter-evidence
In the BKS paper the Compton effect was discussed as an application of the idea of "statistical conservation of energy and momentum" in a continuous process of scattering of radiation by a sample of free electrons, where "each of the electrons contributes through the emission of coherent secondary wavelets". Although Arthur Compton had already given an attractive account of his experiment on the basis of the photon picture (including conservation of energy and momentum in individual scattering processes), it is stated in the BKS paper that "it seems at the present state of science hardly justifiable to reject a formal interpretation as that under consideration [i.e. the weaker assumption of statistical conservation] as inadequate". This statement may have prompted experimental physicists to improve 'the present state of science' by testing the hypothesis of 'statistical energy and momentum conservation'. In any case, already after one year the BKS theory was disproved by coincidence methods studying correlations between the directions into which the emitted radiation and the recoil electron are emitted in individual scattering processes. Such experiments were carried out independently, with the Bothe–Geiger coincidence experiment performed by Walther Bothe and Hans Geiger, as well as the experiment by Compton and Alfred W. Simon. They provided experimental evidence pointing in the direction of energy and momentum conservation in individual scattering processes (at least, it was shown that the BKS theory was not able to explain the experimental results). More accurate experiments, performed much later, have also confirmed these results.
Commenting on the experiments, Max von Laue considered that “physics was saved from being led astray.”
From the very beginning, Wolfgang Pauli was extremely critical of the BKS theory, referring to it as the Copenhagen putsch. In a letter to Kramers, Pauli said that Bohr would have abandoned the theory even if no experiment had ever been carried out, arguing that it is the notion of motion and forces that needs to be modified, not the conservation of energy. Pauli could not help mocking the theory, proposing to the Institute of Physics in Copenhagen to "fly its flag at half mast on the anniversary of the publication of the work of Bohr, Kramers and Slater."
As suggested by a letter to Max Born, for Einstein, the corroboration of energy and momentum conservation was probably even more important than his photon hypothesis:
In light of the experimental results, Bohr informed Charles Galton Darwin that "there is nothing else to do than to give our revolutionary efforts as honourable a funeral as possible".
Bohr's reaction, too, was not primarily related to the photon hypothesis. According to Werner Heisenberg, Bohr remarked:
For Bohr the lesson to be learned from the disproof of the BKS theory was not that photons do exist, but rather that the applicability of classical space-time pictures in understanding phenomena within the quantum domain is limited. This theme would become particularly important a few years later in developing the notion of complementarity. According to Heisenberg, Born's statistical interpretation also had its ultimate roots in the BKS theory. Hence, despite its failure the BKS theory still provided an important contribution to the revolutionary transition from classical mechanics to quantum mechanics.
Schrödinger would not abandon the statistical interpretation and would continue to push this theory until the end of his life.
References
Conservation laws
Photons
Quantum mechanics
Niels Bohr
History of physics
Old quantum theory | BKS theory | [
"Physics"
] | 1,978 | [
"Equations of physics",
"Conservation laws",
"Theoretical physics",
"Quantum mechanics",
"Old quantum theory",
"Symmetry",
"Physics theorems"
] |
16,714,816 | https://en.wikipedia.org/wiki/Zakharov%20system | In mathematics, the Zakharov system is a system of non-linear partial differential equations, introduced by Vladimir Zakharov in 1972 to describe the propagation of Langmuir waves in an ionized plasma. The system consists of a complex field u and a real field n satisfying the equations
i ∂u/∂t + ∇²u = un,
□n = ∇²(|u|²),
where □ = ∂²/∂t² − ∇² is the d'Alembert operator.
See also
Resonant interaction; the Zakharov equation describes non-linear resonant interactions.
References
Zakharov, V. E. (1968). Stability of periodic waves of finite amplitude on the surface of a deep fluid. Journal of Applied Mechanics and Technical Physics, 9(2), 190-194.
Partial differential equations
Waves in plasmas
Plasma physics equations | Zakharov system | [
"Physics"
] | 147 | [
"Waves in plasmas",
"Physical phenomena",
"Equations of physics",
"Plasma physics",
"Plasma phenomena",
"Waves",
"Plasma physics stubs",
"Plasma physics equations"
] |
16,715,060 | https://en.wikipedia.org/wiki/Zakharov%E2%80%93Schulman%20system | In mathematics, the Zakharov–Schulman system is a system of nonlinear partial differential equations introduced to describe the interactions of small amplitude, high frequency waves with acoustic waves.
The equations are
i ∂u/∂t + L1u = ϕu,
L2ϕ = L3(|u|²),
where L1, L2, and L3 are constant coefficient differential operators.
References
Partial differential equations
Acoustics | Zakharov–Schulman system | [
"Physics"
] | 65 | [
"Classical mechanics",
"Acoustics"
] |
12,273,100 | https://en.wikipedia.org/wiki/Scheutjens%E2%80%93Fleer%20theory | Scheutjens–Fleer theory is a lattice-based self-consistent field theory that is the basis for many computational analyses of polymer adsorption.
References
Polymers at Interfaces by G.J. Fleer, M.A. Cohen Stuart, J.M.H.M. Scheutjens, T. Cosgrove, B. Vincent.
Polymer chemistry
Solutions
Thermodynamics
Statistical mechanics | Scheutjens–Fleer theory | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 88 | [
"Thermodynamics stubs",
"Statistical mechanics stubs",
"Polymer stubs",
"Materials science",
"Homogeneous chemical mixtures",
"Thermodynamics",
"Polymer chemistry",
"Solutions",
"Statistical mechanics",
"Physical chemistry stubs",
"Organic chemistry stubs",
"Dynamical systems"
] |
12,274,200 | https://en.wikipedia.org/wiki/Herbrand%20structure | In first-order logic, a Herbrand structure S is a structure over a vocabulary σ that is defined solely by the syntactical properties of σ. The idea is to take the symbol strings of terms as their values, e.g. the denotation of a constant symbol c is just "c" (the symbol). It is named after Jacques Herbrand.
Herbrand structures play an important role in the foundations of logic programming.
Herbrand universe
Definition
The Herbrand universe serves as the universe in the Herbrand structure.
Example
Let L be a first-order language with the vocabulary
constant symbols: c
function symbols: f(·), g(·)
then the Herbrand universe of L (or of its vocabulary σ) is {c, f(c), g(c), f(f(c)), f(g(c)), g(f(c)), g(g(c)), ...}.
The relation symbols are not relevant for a Herbrand universe.
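As a sketch of how this enumeration can be mechanised, the snippet below generates the Herbrand universe of the example vocabulary level by level; the string encoding of terms and the depth cut-off are implementation choices, not part of the definition:

```python
from itertools import product

# Enumerate the Herbrand universe for the example vocabulary above:
# one constant c and two unary function symbols f and g.
constants = ["c"]
functions = {"f": 1, "g": 1}   # symbol -> arity

def herbrand_universe(depth):
    """Ground terms built from the vocabulary, up to a given nesting depth."""
    terms = set(constants)
    for _ in range(depth):
        new = {f"{fn}({','.join(args)})"
               for fn, arity in functions.items()
               for args in product(sorted(terms), repeat=arity)}
        terms |= new
    return terms

print(sorted(herbrand_universe(2), key=lambda t: (len(t), t)))
# ['c', 'f(c)', 'g(c)', 'f(f(c))', 'f(g(c))', 'g(f(c))', 'g(g(c))']
```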
Herbrand structure
A Herbrand structure interprets terms on top of a Herbrand universe.
Definition
Let S be a structure, with vocabulary σ and universe U. Let W be the set of all terms over σ and W0 be the subset of all variable-free terms. S is said to be a Herbrand structure iff
U = W0,
f^S(t1, ..., tn) = f(t1, ..., tn) for every n-ary function symbol f in σ and t1, ..., tn in W0, and
c^S = c for every constant c in σ
Remarks
W0 is the Herbrand universe of σ.
A Herbrand structure that is a model of a theory T is called a Herbrand model of T.
Examples
For a constant symbol c and a unary function symbol f(.) we have the following interpretation:
c → c
f(c) → f(c), f(f(c)) → f(f(c)), ...: every ground term denotes itself.
Herbrand base
In addition to the universe, defined above, and the term denotations, defined above, the Herbrand base completes the interpretation by denoting the relation symbols.
Definition
A Herbrand base is the set of all ground atoms whose argument terms are elements of the Herbrand universe.
Examples
For a binary relation symbol R, we get with the terms from above:
{R(c, c), R(c, f(c)), R(f(c), c), R(f(c), f(c)), R(c, f(f(c))), ...}
See also
Herbrand's theorem
Herbrandization
Herbrand interpretation
Notes
References
Mathematical logic | Herbrand structure | [
"Mathematics"
] | 441 | [
"Mathematical logic"
] |
12,278,528 | https://en.wikipedia.org/wiki/Deoxidization | Deoxidization is a method used in metallurgy to remove the residual oxygen content from previously reduced iron ore during steel manufacturing. In contrast, antioxidants are used for stabilization, such as in the storage of food. Deoxidation is important in the steelmaking process because oxygen is often detrimental to the quality of the steel produced. Deoxidization is mainly achieved by adding a separate chemical species to neutralize the effects of oxygen or by directly removing the oxygen.
Oxidation
Oxidation is the process of an element losing electrons. For example, iron will transfer two of its electrons to oxygen, forming an oxide. This occurs throughout the steelmaking process as an unintended side effect.
Oxygen blowing is a method of steelmaking where oxygen is blown through pig iron to lower the carbon content. Oxygen forms oxides with the unwanted elements, such as carbon, silicon, phosphorus, and manganese, which appear from various stages of the manufacturing process. These oxides will float to the top of the steel pool and remove themselves from the pig iron. However, some of the oxygen will also react with the iron itself.
Due to the high temperatures involved in smelting, oxygen in the air may dissolve into the molten iron while it is being poured. Slag, a byproduct left over after the smelting process, is used to further absorb impurities such as sulfur or oxides and protect steel from further oxidation. However, it can still be responsible for some oxidation.
Some processes, while still able to lead to oxidation, are not relevant to the oxygen content of steel during its manufacture. For example, rust is a red iron oxide that forms when the iron in steel reacts with the oxygen or water in the air. This usually only occurs once the steel has been in use for varying lengths of time. Some physical components of the steelmaking process itself, such as the electric arc furnace, may also wear down and oxidize. This problem is typically dealt with by the use of refractory metals, which resist environmental conditions.
If steel is not properly deoxidized, it will lose various properties such as tensile strength, ductility, toughness, weldability, polishability, and machinability. This is due to the formation of non-metallic inclusions and gas pores, bubbles of gas that become trapped during the solidification of the steel.
Types of deoxidizers
Metallic deoxidizers
This method of deoxidization involves adding specific metals into the steel. These metals will react with the unwanted oxygen, forming a strong oxide that, compared to pure oxygen, will reduce the steel's strength and qualities by a lesser amount.
The chemical equation for deoxidization is represented by:
n D + m O → DnOm,
where n and m are coefficients, D is the deoxidizing agent, and O is oxygen.
Thus, the chemical equilibrium equation involved is:
Keq = aox / (aD^n · aO^m),
where aox is the activity, or concentration, of the oxide in the steel,
aD is the activity of the deoxidizing agent,
and aO is the activity of the oxygen.
An increase in the equilibrium constant Keq will cause an increase in aox, and thus more of the oxide product.
Keq can be manipulated via the steel temperature through the following equation:
log10(Keq) = −AD/T + BD,
where AD and BD are parameters specific to different deoxidizers and T is the temperature in K. Below are the values for certain deoxidizers at a temperature of 1873 K.
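As a sketch of evaluating this temperature dependence, the snippet below implements the equation with placeholder parameters; the AD and BD values are hypothetical, not the article's tabulated values (which are deoxidizer-specific, and sign conventions vary between sources):

```python
# Hedged sketch of log10(Keq) = -A_D/T + B_D with made-up parameters.
A_D, B_D = 60_000.0, 20.0        # hypothetical deoxidizer-specific constants

def k_eq(T_kelvin):
    return 10 ** (-A_D / T_kelvin + B_D)

# Changing the melt temperature shifts Keq by orders of magnitude:
for T in (1773, 1873, 1973):
    print(T, f"{k_eq(T):.3e}")
```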
Below is a list of commonly used metallic deoxidizers:
Ferrosilicon, ferromanganese, calcium silicide - used in steelmaking in production of carbon steels, stainless steels, and other ferrous alloys
Manganese - used in steelmaking
Silicon carbide, calcium carbide - used as ladle deoxidizer in steel production
Aluminum dross - also a ladle deoxidizer, used in secondary steelmaking
Calcium - used as a deoxidizer, desulfurizer, or decarbonizer for ferrous and non-ferrous alloys
Titanium - used as a deoxidizer for steels
Phosphorus, copper(I) phosphide - used in production of oxygen-free copper
Calcium hexaboride - used in production of oxygen-free copper, yields higher conductivity copper than phosphorus-deoxidized
Yttrium - used to deoxidize vanadium and other non-ferrous metals
Zirconium
Magnesium
Carbon
Tungsten
Vacuum deoxidation
Vacuum deoxidation is a method which involves using a vacuum to remove impurities. A portion of the carbon and oxygen in steel will react, forming carbon monoxide. CO gas will float up to the top of the liquid steel and be removed by a vacuum system.
As the chemical reaction involved in vacuum deoxidation is:
C + O → CO,
the reaction between carbon and oxygen is represented by the following chemical equilibrium equation:
Keq = PCO / (aC · aO),
where PCO is the partial pressure of the carbon monoxide formed.
Since Keq is fixed at a given temperature, decreasing the partial pressure PCO forces more carbon and oxygen to react, producing more CO and lowering the oxygen activity (aO). To achieve this, the pool of steel is subjected to vacuum treatment, which decreases the value of PCO, allowing more CO gas to be produced.
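A short sketch of the lever the vacuum provides, under an assumed illustrative value of Keq: at fixed temperature, the residual product of carbon and oxygen activities that can coexist with the melt scales directly with PCO.

```python
# Illustrative only: Keq = PCO/(aC*aO) is fixed by temperature, so the
# equilibrium product of activities is aC*aO = PCO/Keq. The Keq value is
# an assumed placeholder, not a tabulated constant.
K_eq = 500.0

for P_CO in (1.0, 0.1, 0.001):      # atm: ambient pressure vs. harder vacuum
    print(P_CO, P_CO / K_eq)        # lower PCO -> less dissolved C and O
```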
Diffusion deoxidation
This method relies on the idea that deoxidation of slag will lead to the deoxidation of steel.
The chemical equilibrium equation used for this process is:
Keq = a[O] / a(O),
where a[O] is the activity of the oxygen in the slag, and a(O) is the activity of oxygen in the steel.
Reducing the activity in the slag (a[O]) will lower the oxygen levels in the slag. Afterwards, oxygen will diffuse from the steel into the lesser concentrated slag. This method is done by using deoxidizing agents on the slag, such as coke or silicon. As these agents do not come into direct contact with the steel, non-metallic inclusions will not form in the steel itself.
See also
Smelting
References
See also
Desulfurization is the process of decreasing the sulfur content of steel.
Decarburization is the process of decreasing the carbon content in metallurgy.
Deoxidized steels are steels categorized by level of deoxidization treatment.
Vacuum engineering
Vacuum metallurgy
Industrial processes
Metallurgy
Steelmaking | Deoxidization | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,300 | [
"Metallurgical processes",
"Metallurgy",
"Steelmaking",
"Materials science",
"nan"
] |
12,278,602 | https://en.wikipedia.org/wiki/Physical%20theories%20modified%20by%20general%20relativity | This article will use the Einstein summation convention.
The theory of general relativity required the adaptation of existing theories of physical, electromagnetic, and quantum effects to account for non-Euclidean geometries. These physical theories modified by general relativity are described below.
Classical mechanics and special relativity
Classical mechanics and special relativity are lumped together here because special relativity is in many ways intermediate between general relativity and classical mechanics, and shares many attributes with classical mechanics.
In the following discussion, the mathematics of general relativity is used heavily. Also, under the principle of minimal coupling, the physical equations of special relativity can be turned into their general relativity counterparts by replacing the Minkowski metric (ηab) with the relevant metric of spacetime (gab) and by replacing any partial derivatives with covariant derivatives. In the discussions that follow, the change of metrics is implied.
Inertia
Inertial motion is motion free of all forces. In Newtonian mechanics, the force F acting on a particle with mass m is given by Newton's second law, F = ma, where the acceleration a is given by the second derivative of position r with respect to time t, a = d²r/dt². Zero force means that inertial motion is just motion with zero acceleration:
d²r/dt² = 0.
The idea is the same in special relativity. Using Cartesian coordinates, inertial motion is described mathematically as:
d²x^a/dτ² = 0,
where x^a is the position coordinate and τ is proper time. (In Newtonian mechanics, τ ≡ t, the coordinate time).
In both Newtonian mechanics and special relativity, space and then spacetime are assumed to be flat, and we can construct a global Cartesian coordinate system. In general relativity, these restrictions on the shape of spacetime and on the coordinate system to be used are lost. Therefore, a different definition of inertial motion is required. In relativity, inertial motion occurs along timelike or null geodesics as parameterized by proper time. This is expressed mathematically by the geodesic equation:
d²x^a/dτ² + Γ^a_bc (dx^b/dτ)(dx^c/dτ) = 0,
where Γ^a_bc is a Christoffel symbol. Since general relativity describes four-dimensional spacetime, this represents four equations, with each one describing the second derivative of a coordinate with respect to proper time. In the case of flat space in Cartesian coordinates, we have Γ^a_bc = 0, so this equation reduces to the special relativity form.
Gravitation
For gravitation, the relationship between Newton's theory of gravity and general relativity is governed by the correspondence principle: General relativity must produce the same results as gravity does for the cases where Newtonian physics has been shown to be accurate.
Around a spherically symmetric object, the Newtonian theory of gravity predicts that objects will be physically accelerated towards the center of the object by the rule
g = −(GM/r²) r̂,
where G is Newton's gravitational constant, M is the mass of the gravitating object, r is the distance to the gravitating object, and r̂ is a unit vector identifying the direction to the massive object.
In the weak-field approximation of general relativity, an identical coordinate acceleration must exist. For the Schwarzschild solution (which is the simplest possible spacetime surrounding a massive object), the same acceleration as that which (in Newtonian physics) is created by gravity is obtained when a constant of integration is set equal to 2MG/c². For more information, see Deriving the Schwarzschild solution.
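As a quick numeric check of this correspondence, the snippet below evaluates the Newtonian coordinate acceleration GM/r² at the Earth's surface, using rounded standard constants:

```python
# Newtonian limit sanity check: surface gravity of the Earth from GM/r^2.
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant (rounded)
M = 5.972e24         # kg, mass of the Earth (rounded)
r = 6.371e6          # m, mean radius of the Earth (rounded)
print(G * M / r**2)  # -> ~9.82 m/s^2, the familiar surface gravity
```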
Transition from Newtonian mechanics to general relativity
Some of the basic concepts of general relativity can be outlined outside the relativistic domain. In particular, the idea that mass/energy generates curvature in space and that curvature affects the motion of masses can be illustrated in a Newtonian setting.
General relativity generalizes the geodesic equation and the field equation to the relativistic realm in which trajectories in space are replaced with Fermi–Walker transport along world lines in spacetime. The equations are also generalized to more complicated curvatures.
Transition from special relativity to general relativity
The basic structure of general relativity, including the geodesic equation and Einstein field equation, can be obtained from special relativity by examining the kinetics and dynamics of a particle in a circular orbit about the earth. In terms of symmetry, the transition involves replacing global Lorentz covariance with local Lorentz covariance.
Conservation of energy–momentum
In classical mechanics, conservation laws for energy and momentum are handled separately in the two principles of conservation of energy and conservation of momentum. With the advent of special relativity, these two conservation principles were united through the concept of mass-energy equivalence.
Mathematically, the general relativity statement of energy–momentum conservation is:
T^ab_;b = T^ab_,b + Γ^a_cb T^cb + Γ^b_cb T^ac = 0,
where T^ab is the stress–energy tensor, the comma indicates a partial derivative and the semicolon indicates a covariant derivative. The terms involving the Christoffel symbols are absent in the special relativity statement of energy–momentum conservation.
Unlike classical mechanics and special relativity, it is not usually possible to unambiguously define the total energy and momentum in general relativity, so the tensorial conservation laws are local statements only (see ADM energy, though). This often causes confusion in time-dependent spacetimes which apparently do not conserve energy, although the local law is always satisfied. Exact formulation of energy–momentum conservation on an arbitrary geometry requires use of a non-unique stress–energy–momentum pseudotensor.
Electromagnetism
General relativity modifies the description of electromagnetic phenomena by employing a new version of Maxwell's equations. These differ from the special relativity form in that the Christoffel symbols make their presence in the equations via the covariant derivative.
The source equations of electrodynamics in curved spacetime are (in cgs units)
F^ab_;b = (4π/c) J^a,
where Fab is the electromagnetic field tensor representing the electromagnetic field and Ja is a four-current representing the sources of the electromagnetic field.
The source-free equations are the same as their special relativity counterparts.
The effect of an electromagnetic field on a charged object is then modified to
dP^a/dτ + Γ^a_bc (P^b P^c)/m = (q/m) F^a_b P^b,
where q is the charge on the object, m is the rest mass of the object and P^a is the four-momentum of the charged object. Maxwell's equations in flat spacetime are recovered in rectangular coordinates by reverting the covariant derivatives to partial derivatives.
Theories modified by general relativity
General relativity | Physical theories modified by general relativity | [
"Physics"
] | 1,280 | [
"General relativity",
"Theory of relativity"
] |
12,280,369 | https://en.wikipedia.org/wiki/Subsurface%20ocean%20current | A subsurface ocean current is an oceanic current that runs beneath surface currents. Examples include the Equatorial Undercurrents of the Pacific, Atlantic, and Indian Oceans, the California Undercurrent, and the Agulhas Undercurrent, the deep thermohaline circulation in the Atlantic, and bottom gravity currents near Antarctica. The forcing mechanisms vary for these different types of subsurface currents.
Density current
The most common of these is the density current, epitomized by the Thermohaline current. The density current works on a basic principle: the denser water sinks to the bottom, separating from the less dense water, and causing an opposite reaction from it. There are numerous factors controlling density.
Salinity
One is the salinity of water, a prime example of this being the Mediterranean/Atlantic exchange. The saltier waters of the Mediterranean sink to the bottom and flow along there, until they reach the ledge between the two bodies of water. At this point, they rush over the ledge into the Atlantic, pushing the less saline surface water into the Mediterranean.
Temperature
Another factor of density is temperature. Thermohaline (literally meaning heat-salty) currents are very influenced by heat. Cold water from glaciers, icebergs, etc. descends to join the ultra-deep, cold section of the worldwide Thermohaline current. After spending an exceptionally long time in the depths, it eventually heats up, rising to join the higher Thermohaline current section. Because of the temperature and expansiveness of the Thermohaline current, it is substantially slower, taking nearly 1000 years to run its worldwide circuit.
Turbidity current
One factor of density is so unique that it warrants its own current type. This is the turbidity current. A turbidity current is caused when the density of water is increased by sediment. This current is the underwater equivalent of a landslide. When sediment increases the density of the water, it falls to the bottom, and then follows the form of the land. In doing so, the sediment inside the current gathers more from the ocean bed, which in turn gathers more, and so on. As a limited amount of sediment can be carried by a certain amount of water, more water must become laden with sediment, until a huge, destructive current is washing down some marine hillside. It is theorized that submarine depths, such as the Marianas Trench, have been caused in part by this action. There is one additional effect of turbidity currents: upwelling. All of the water rushing into ocean valleys displaces a significant amount of water. This water literally has nowhere to go but up. The upwelling current goes almost straight up. This carries nutrient-rich water to the surface, feeding some of the world's largest fisheries. This current also helps Thermohaline currents return to the surface.
Ekman Spiral
An entirely different class of subsurface current is caused by friction with surface currents and objects. When the wind or some other surface force compels surface currents into motion, some of this motion is translated into subsurface motion. The Ekman spiral, named after Vagn Walfrid Ekman, is the standard description of this transfer of energy. The Ekman spiral works as follows: when the surface moves, the layer beneath inherits some (but not all) of this motion. Due to the Coriolis effect, however, that layer moves at a 45˚ angle to the right of the one above it (to the left in the Southern Hemisphere). The layer below that is slower still and rotated a further 45˚ to the right. This process continues in the same manner until, at about 100 meters below the surface, the current is moving in the opposite direction to the surface current.
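The sketch below evaluates the standard idealized Ekman-spiral profile that this description corresponds to; the surface speed V0 and the Ekman depth D are assumed illustrative values, and the closed-form expressions are the usual textbook solution rather than anything derived in this article:

```python
import numpy as np

# Classical Ekman spiral (Northern Hemisphere, idealized): the current is
# deflected 45 degrees at the surface and rotates further while decaying
# exponentially with depth. V0 and D are assumed placeholder values.
V0, D = 0.1, 50.0                       # surface speed (m/s), Ekman depth (m)
z = np.linspace(0, -150, 7)             # depth, negative downward (m)

u = V0 * np.exp(z / D) * np.cos(np.pi / 4 + z / D)   # eastward component
v = V0 * np.exp(z / D) * np.sin(np.pi / 4 + z / D)   # northward component

for zi, ui, vi in zip(z, u, v):
    angle = np.degrees(np.arctan2(vi, ui))
    print(f"z={zi:6.1f} m  speed={np.hypot(ui, vi):.4f} m/s  dir={angle:6.1f} deg")
# By z ~ -pi*D the flow points opposite to the surface current, as stated above.
```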
Subsidence
The final type of subsurface current is subsidence, caused when forces push water against some obstacle (like a rock), causing it to pile up there. The water at the bottom of the pileup flows away from it, causing a subsidence current.
Wave Patterns
Various subsurface currents conflict at times, causing bizarre wave patterns. One of the most noticeable of these is the Maelstrom. The word is derived from Nordic words meaning to grind and stream. Essentially, the maelstrom is a large, very powerful whirlpool, a large swirling body of water being drawn down and inward toward its center. This is usually the result of tidal currents.
Effect
Subsurface currents have a large effect on life on earth. They flow beneath the surface of the water, allowing them to be relatively free of external influence. Thus, they function like clockwork, providing nutrient transportation, water transfer, etc., as well as affecting the ocean floor and submarine processes.
See also
Oceanography
References
Ocean currents
Oceanography | Subsurface ocean current | [
"Physics",
"Chemistry",
"Environmental_science"
] | 993 | [
"Ocean currents",
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Fluid dynamics"
] |
12,280,976 | https://en.wikipedia.org/wiki/RNA%20integrity%20number | The RNA integrity number (RIN) is an algorithm for assigning integrity values to RNA measurements.
The integrity of RNA is a major concern for gene expression studies and traditionally has been evaluated using the 28S to 18S rRNA ratio, a method that has been shown to be inconsistent. This inconsistency arises because subjective, human interpretation is necessary to compare the 28S and 18S gel images. The RIN algorithm was devised to overcome this issue. The RIN algorithm is applied to electrophoretic RNA measurements, typically obtained using capillary gel electrophoresis, and based on a combination of different features that contribute information about the RNA integrity to provide a more universal measure. RIN has been demonstrated to be robust and reproducible in studies comparing it to other RNA integrity calculation algorithms, cementing its position as a preferred method of determining the quality of RNA to be analyzed.
A major criticism of RIN concerns its use with plants or in studies of eukaryotic-prokaryotic cell interactions. The RIN algorithm is unable to differentiate eukaryotic, prokaryotic, and chloroplastic ribosomal RNA, leading to serious underestimation of the quality index in such situations.
Terminology
Electrophoresis is the process of separating nucleic acid species based on their length by applying an electric field to them. As nucleic acids are negatively charged, they are pushed by an electric field through a matrix, usually an agarose gel, with the smaller molecules being pushed farther, faster. Capillary electrophoresis is a technique whereby small amounts of a nucleic acid sample can be run on a gel in a very thin tube. There is a detector in the machine that can tell when nucleic acid samples pass through a specific point in the tube, with smaller samples passing through first. This can produce an electropherogram such as the one in Figure 1, where length is related to time at which the samples pass the detector.
A marker is a sample of known size run along with the sample so that the actual size of the rest of the sample can be known by comparing their running distance/time to be relative to this marker.
RNA is a biological macromolecule made of sugars and nitrogenous bases that plays a number of crucial roles in all living cells. There are several subtypes of RNA, with the most prominent in the cell being tRNA (transfer RNA), rRNA (ribosomal RNA), and mRNA (messenger RNA). All three of these are involved in the process of translation, with the most prominent species (~85%) of cellular RNA being rRNA. As a result, this is the most immediately visible species when RNA is analyzed via electrophoresis and is thus used for determining RNA quality (see Computation, below). rRNA comes in various sizes, with those in mammals belonging to the sizes 5S, 18S, and 28S. The 28S and 5S rRNAs form the large subunit and the 18S forms the small subunit of the ribosome, the molecular machinery responsible for synthesizing proteins.
Applications
RNases are ubiquitous and can often contaminate and subsequently degrade RNA samples in the laboratory, so RNA integrity can very easily be compromised, leading to a number of laboratory techniques designed to eliminate their impact. However, these methods are not fool-proof, and so samples can still be degraded, necessitating a method of measuring RNA integrity to ensure the trustworthiness and reproducibility of molecular assays, as RNA integrity is critical for proper results in gene expression studies, such as microarray analysis, Northern blots, or quantitative real-time PCR (qPCR). RNA that has been degraded has a direct impact on calculated expression levels, often leading to significantly decreased apparent expression.
qPCR and similar techniques are very expensive, taking a good deal of both time and money, so continuing research is being undertaken to decrease the cost while maintaining qPCR's accuracy and reproducibility for gene expression and other applications. RIN assessment allows a scientist to evaluate an experiment's trustworthiness and reproducibility before incurring substantial costs in performing the gene expression studies.
RIN is a standard method of measuring RNA integrity and can be used to evaluate the quality of RNA produced by new RNA isolation techniques.
Development
As RNA integrity has long been known to be a problem in molecular biology studies, there are a few methods that have been used historically to determine the integrity of RNA. The most popular has long been agarose gel electrophoresis with ethidium bromide staining, allowing one to visualize the bands from the rRNA peaks. The height of the 28S and 18S bands can be compared to each other, with a 2:1 ratio indicating non-degraded RNA. While this method is very cheap and easy, there are several issues with this method, primarily its subjectivity, leading to inconsistent, non-standardized RNA quality assessments, and the large amounts of RNA that are needed to visualize it on an agarose gel, which can be problematic if there is not much RNA to work with. There are also a number of different problems that can arise from agarose gel electrophoresis, such as poor loading, uneven running, and uneven staining that lead to increased variability in the accuracy of using agarose gel electrophoresis to determine RNA integrity.
The RNA Integrity Number was developed by Agilent Technologies in 2005. The algorithm was generated by taking hundreds of samples and having specialists manually assign them all a value of 1 to 10 based on their integrity, with 10 being the highest. Adaptive learning tools using a Bayesian learning technique were used to generate an algorithm that could predict the RIN, predominantly by using the features listed below under "Computation". This allows for all Agilent software to produce the same RIN for a given RNA sample, standardizing the measurement and making it much less subjective than earlier methods.
Computation
RIN for a sample is computed using several characteristics of an RNA electropherogram trace, with the first two listed below being most significant. RIN assigns an electropherogram a value of 1 to 10, with 10 being the least degraded. All the following descriptions apply to mammalian RNA because RNAs in other species have different rRNA sizes:
The total RNA ratio is calculated by taking the ratio of the area under the 18S and 28S rRNA peaks to the total area under the graph, a large number here is desired, indicating much of the rRNA is still at these sizes and thus little to no degradation has occurred. An ideal ratio can be seen in figure 1, where almost all of the RNA is in the 18S and 28S RNA peaks.
For the height of the 28S peak, a large value is desired. 28S, the most prominent rRNA species, is used in RIN calculation because it is typically degraded more quickly than 18S rRNA, so measuring its peak height allows detection of the early stages of degradation. Again, this is seen in figure 1, where the 28S peak is the largest, indicating intact RNA.
The fast region is the area between the 18S and 5S rRNA peaks on an electropherogram. Initially, as the fast area ratio value increases, it indicates degradation of the 18S and 28S rRNA to an intermediate size, though the ratio subsequently decreases as RNA degrades further, to even smaller sizes. Thus, a low value doesn't necessarily indicate either good or bad RNA integrity.
A small marker height is desired, indicating that only small amounts of RNA have been degraded down to the shortest lengths, which migrate near the marker. A large value here indicates that large amounts of the rRNAs have been degraded into small pieces found close to this marker. This situation can be seen in the 'poor quality' RNA electropherogram in figure 2, where the height of the peak over the marker (far left) is very large, showing that the RNA has been greatly degraded.
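Because the exact algorithm is proprietary (see below), the two most significant features above can only be illustrated, not reproduced. The following Python sketch is a hypothetical illustration of how such features might be computed from a digitized trace; the input format, window boundaries, and function names are assumptions for the example, not Agilent's code.

```python
import numpy as np

def rna_ratio_features(signal, t, window_18s, window_28s):
    """Illustrative computation of two RIN-style features from an
    electropherogram trace.

    signal     : fluorescence intensity samples (1D numpy array)
    t          : migration times for each sample (1D numpy array)
    window_18s : (start, end) migration-time window of the 18S peak (assumed known)
    window_28s : (start, end) migration-time window of the 28S peak (assumed known)
    """
    def area(window):
        lo, hi = window
        mask = (t >= lo) & (t <= hi)
        return np.trapz(signal[mask], t[mask])

    total_area = np.trapz(signal, t)
    # Total RNA ratio: fraction of all signal under the two rRNA peaks;
    # close to 1 for intact RNA, lower as degradation products accumulate.
    total_rna_ratio = (area(window_18s) + area(window_28s)) / total_area
    # 28S peak height: a sensitive early indicator, since 28S rRNA
    # typically degrades faster than 18S.
    mask_28s = (t >= window_28s[0]) & (t <= window_28s[1])
    height_28s = signal[mask_28s].max()
    return total_rna_ratio, height_28s
```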
In prokaryotic samples, the algorithm is somewhat different, but the Agilent 2100 Bioanalyzer Expert software is now able to calculate RIN for prokaryotic samples as well. The difference likely arises from the fact that, while mammalian samples have 28S and 18S ribosomal RNAs as their predominant species, the predominant prokaryotic rRNAs are slightly smaller, at 23S and 16S, so the algorithm must be shifted to accommodate that. Another crucial caveat is that RIN has not been validated for prokaryotes to the extent that it has for eukaryotic RNA: higher RIN values have been shown to correlate with better downstream results in eukaryotes, but this has not been demonstrated as extensively for prokaryotes, so the number may be less meaningful there.
The electropherograms used for calculating RIN are produced on the Agilent Bioanalyzer instrument, which performs the electrophoresis and generates the traces. Only the Agilent 2100 software can compute the RIN, as the exact algorithm is proprietary; additional RNA electropherogram features used in the calculation are not publicly available.
References
External links
RIN information from Agilent Technologies
RIN article in BMC Molecular Biology
Gene expression summary from Nature to help show why we need RIN
RIN explanation from Agilent Technologies, with several examples of electropherograms at various RIN values
Gel electrophoresis simulation from the University of Utah to help visualize how the electropherograms are produced
Bioinformatics
Bioinformatics software
Gene expression
Molecular biology
RNA | RNA integrity number | [
"Chemistry",
"Engineering",
"Biology"
] | 2,014 | [
"Biological engineering",
"Bioinformatics software",
"Gene expression",
"Bioinformatics",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
1,686,404 | https://en.wikipedia.org/wiki/Integrated%20nanoliter%20system | The integrated nanoliter system is a measuring, separating, and mixing device that is able to measure fluids to the nanoliter, mix different fluids for a specific product, and separate a solution into simpler solutions.
All features of the integrated nanoliter system are specifically designed for controlling very small volumes of liquid (referred to as microfluidic solutions). The integrated nanoliter system's scalability depends on what type of processing method the system is based on (referred to as its technology platform), with each processing method having its advantages and disadvantages. Possible uses for the integrated nanoliter system are in controlling biological fluids (referred to as synthetic biology) and accurately detecting changes in cells for genetic purposes (such as single-cell gene expression analysis), where the smaller scale directly influences the result and accuracy.
Features
The integrated nanoliter system consists of microfabricated fluidic channels, heaters, temperature sensors, and fluorescence detectors. The microfabricated fluidic channels (basically very small pipes) act as the main transportation structures for any fluids as well as where reactions occur within the system. For the desired reactions to occur, the temperature needs to be adjusted. Therefore, heaters are attached to some microfabricated fluidic channels. To monitor and maintain the desired temperature, temperature sensors are crucial for successful and desired reactions. In order to accurately track the fluids before and after a reaction, fluorescence detectors are used for detecting the movements of the fluids within the system. For instance, when a specific fluid passes a certain point where it triggers or excites emission of light, the fluorescence detector is able to receive that emission and calculate the time it takes to reach that certain point.
Technology platforms for scalability
There are three different technology platforms for the integrated nanoliter system's scalability. Therefore, the main processing method of the integrated nanoliter system varies from the type of technology platform it is using. The three technology platforms for scalability are electrokinetic manipulation, vesicle encapsulation, and mechanical valving.
Electrokinetic manipulation
The main processing method for controlling the fluid under this technology platform is capillary electrophoresis, an electrokinetic phenomenon. Capillary electrophoresis is an effective method for controlling fluids because the charged particles of the fluid are directed by a controllable electric field within the system. However, a disadvantage of the technique is that control of the fluid's particles depends heavily on the particles' original charges. Another disadvantage is the possibility of fluid "leaks" within the system; these "leaks" occur through diffusion and depend on the size of the fluid's particles.
Vesicle encapsulation
The main processing method for controlling the fluid under this technology platform is to confine the fluids of interest in carrier molecules, which are generally droplets of water, vesicles, or micelles. The carrier molecules (with the fluid within them) are controlled by individually directing each carrier molecule within the microfabricated fluidic channels. This method solves the problem of possible fluid "leaks", since confinement of the fluid in a carrier molecule does not depend on the size of the fluid's particles. However, this technique limits how complex the solutions processed by the system can be.
Mechanical valving
The main processing method for controlling the fluid under this technology platform is the use of small mechanical valves. Mechanical valving is similar to a complex plumbing system: the microfabricated fluidic channels act as the plumbing pipes while the various controllable valves direct the fluid. Mechanical valving is also considered the most robust solution to the disadvantages of electrokinetic manipulation and vesicle encapsulation, since the mechanical valves operate completely independently of the fluid's physical and chemical properties. However, because the microfabricated fluidic channels and mechanical valves are difficult to fabricate at such extremely small scales, scaling a mechanical-valving system down to the nanoliter scale is itself a disadvantage of this technique.
Possible uses
Synthetic biology
A possible use of the integrated nanoliter system is in synthetic biology (controlling biological fluids). Since the integrated nanoliter system is generally made up of many controllable microfabricated fluidic networks, integrated nanoliter systems are an ideal environment for controlling biological fluids. A common process of synthetic biology that uses the integrated nanoliter system is processing complex reactions among biological fluids, which usually involves separating a biological solution into individual pure or simpler reagent solutions then mixing the individual solutions for the desired product. An advantage of using the integrated nanoliter system in synthetic biology includes the extremely small length of the microfluidic networks that result in fast diffusion rates. Another advantage is the fast mixing rates due to the combination of diffusion and advection (chaotic mixing). Compared to previous microfluidic systems, another advantage is the smaller necessary amount of reagent solutions for a single operation due to the integrated nanoliter system's microscopic scalability. Smaller necessary amounts of reagent solutions tend to lead to more operations that can be carried out with less delay from gathering or reproducing the necessary amounts of reagent solutions.
Single-cell gene expression analysis
Another possible use of the integrated nanoliter system is in single-cell gene expression analysis. One benefit of using the integrated nanoliter system is its capability to detect the changes of a gene expression more accurately than the previous technique of microarray. The nanoliter system's microscopic scalability (nanoliter to picoliter scale) allows it to analyze the gene expression at the single-cell level (around 1 picoliter), while the microarray analyzes changes of the gene expression by averaging a large group of cells. Another convenient and important benefit is the integrated nanoliter system's capability of having all the necessary biological fluids in the system before operation by storing each biological fluid in a specific microfabricated fluidic network. The integrated nanoliter system is convenient because the biological fluids are all controlled by a computer compared to how previous systems required a manual loading of every biological fluid. The integrated nanoliter system is also important for the gene expression analysis because the analysis would not be undesirably influenced by contamination due to the "closed" system while in operation.
References
Nanotechnology
Nanomaterials | Integrated nanoliter system | [
"Materials_science",
"Engineering"
] | 1,318 | [
"Nanotechnology",
"Nanomaterials",
"Materials science"
] |
1,686,413 | https://en.wikipedia.org/wiki/Hydrometallurgy | Hydrometallurgy is a technique within the field of extractive metallurgy, the obtaining of metals from their ores. Hydrometallurgy involves the use of aqueous solutions for the recovery of metals from ores, concentrates, and recycled or residual materials. Processing techniques that complement hydrometallurgy are pyrometallurgy, vapour metallurgy, and molten salt electrometallurgy. Hydrometallurgy is typically divided into three general areas:
Leaching
Solution concentration and purification
Metal or metal compound recovery
Leaching
Leaching involves the use of aqueous solutions to extract metal from metal-bearing materials which are brought into contact with them. In China in the 11th and 12th centuries, this technique was used to extract copper; this was used for much of the total copper production. In the 17th century it was used for the same purposes in Germany and Spain.
The lixiviant solution conditions vary in terms of pH, oxidation-reduction potential, presence of chelating agents and temperature, to optimize the rate, extent and selectivity of dissolution of the desired metal component into the aqueous phase. By using chelating agents, one can selectively extract certain metals. These agents are typically amines of Schiff bases.
The five basic leaching reactor configurations are in-situ, heap, vat, tank and autoclave.
In-situ leaching
In-situ leaching is also called "solution mining". This process initially involves drilling of holes into the ore deposit. Explosives or hydraulic fracturing are used to create open pathways within the deposit for solution to penetrate into. Leaching solution is pumped into the deposit where it makes contact with the ore. The solution is then collected and processed. The Beverley uranium deposit is an example of in-situ leaching.
Heap leaching
In heap leaching processes, crushed (and sometimes agglomerated) ore is piled in a heap which is lined with an impervious layer. Leach solution is sprayed over the top of the heap, and allowed to percolate downward through the heap. The heap design usually incorporates collection sumps, which allow the "pregnant" leach solution (i.e. solution with dissolved valuable metals) to be pumped for further processing. An example is gold cyanidation, where pulverized ores are extracted with a solution of sodium cyanide, which, in the presence of air, dissolves the gold, leaving behind the nonprecious residue.
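The overall chemistry of this gold dissolution is commonly summarized by the Elsner equation:
4 Au + 8 NaCN + O2 + 2 H2O → 4 Na[Au(CN)2] + 4 NaOH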
Vat leaching
Vat leaching involves contacting material, which has usually undergone size reduction and classification, with leach solution in large vats.
Tank leaching
Stirred tank, also called agitation leaching, involves contacting material, which has usually undergone size reduction and classification, with leach solution in agitated tanks. The agitation can enhance reaction kinetics by enhancing mass transfer. Tanks are often configured as reactors in series.
Autoclave leaching
Autoclave reactors are used for reactions at higher temperatures, which can enhance the rate of the reaction. Similarly, autoclaves enable the use of gaseous reagents in the system.
Solution concentration and purification
After leaching, the leach liquor must normally undergo concentration of the metal ions that are to be recovered. Additionally, undesirable metal ions sometimes require removal.
Precipitation is the selective removal of a compound of the targeted metal or removal of a major impurity by precipitation of one of its compounds. Copper is precipitated as its sulfide as a means to purify nickel leachates.
Cementation is the conversion of the metal ion to the metal by a redox reaction. A typical application involves addition of scrap iron to a solution of copper ions. Iron dissolves and copper metal is deposited.
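As an illustration, this copper cementation can be written as the redox reaction:
Fe + Cu2+ → Fe2+ + Cu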
Solvent extraction
Ion exchange
Gas reduction. Treating a solution of nickel and ammonia with hydrogen affords nickel metal as its powder.
Electrowinning is a particularly selective if expensive electrolysis process applied to the isolation of precious metals. Gold can be electroplated from its solutions.
Solvent extraction
In the solvent extraction a mixture of an extractant in a diluent is used to extract a metal from one phase to another. In solvent extraction this mixture is often referred to as the "organic" because the main constituent (diluent) is some type of oil.
The PLS (pregnant leach solution) is mixed to emulsification with the stripped organic and allowed to separate; during mixing, the metal is exchanged from the PLS to the organic. The resulting streams will be a loaded organic and a raffinate. When dealing with electrowinning, the loaded organic is then mixed to emulsification with a lean electrolyte and allowed to separate. The metal will be exchanged from the organic to the electrolyte. The resulting streams will be a stripped organic and a rich electrolyte. The organic stream is recycled through the solvent extraction process while the aqueous streams cycle through the leaching and electrowinning processes respectively.
Ion exchange
Chelating agents, natural zeolite, activated carbon, resins, and liquid organics impregnated with chelating agents are all used to exchange cations or anions with the solution. Selectivity and recovery are a function of the reagents used and the contaminants present.
Metal recovery
Metal recovery is the final step in a hydrometallurgical process, in which metals suitable for sale as raw materials are produced. Sometimes, however, further refining is needed to produce ultra-high purity metals. The main types of metal recovery processes are electrolysis, gaseous reduction, and precipitation. For example, a major target of hydrometallurgy is copper, which is conveniently obtained by electrolysis. Cu2+ ions are reduced to Cu metal at low potentials, leaving behind contaminating metal ions such as Fe2+ and Zn2+.
Electrolysis
Electrowinning and electrorefining respectively involve the recovery and purification of metals using electrodeposition of metals at the cathode, and either metal dissolution or a competing oxidation reaction at the anode.
Precipitation
Precipitation in hydrometallurgy involves the chemical precipitation from aqueous solutions, either of metals and their compounds or of the contaminants. Precipitation will proceed when, through reagent addition, evaporation, pH change or temperature manipulation, the amount of a species present in the solution exceeds the maximum determined by its solubility.
References
External links
Hydrometallurgy, BioMineWiki
Chemical processes
Metallurgy
Metallurgical processes | Hydrometallurgy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,334 | [
"Metallurgical processes",
"Metallurgy",
"Materials science",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
1,686,424 | https://en.wikipedia.org/wiki/Pyrometallurgy | Pyrometallurgy is a branch of extractive metallurgy. It consists of the thermal treatment of minerals and metallurgical ores and concentrates to bring about physical and chemical transformations in the materials to enable recovery of valuable metals. Pyrometallurgical treatment may produce products able to be sold such as pure metals, or intermediate compounds or alloys, suitable as feed for further processing. Examples of elements extracted by pyrometallurgical processes include the oxides of less reactive elements like iron, copper, zinc, chromium, tin, and manganese.
Pyrometallurgical processes are generally grouped into one or more of the following categories:
calcining,
roasting,
smelting,
refining.
Most pyrometallurgical processes require energy input to sustain the temperature at which the process takes place. The energy is usually provided in the form of combustion or from electrical heat. When sufficient material is present in the feed to sustain the process temperature solely by exothermic reaction (i.e. without the addition of fuel or electrical heat), the process is said to be "autogenous". Processing of some sulfide ores exploit the exothermicity of their combustion.
Calcination
Calcination is thermal decomposition of a material. Examples include decomposition of hydrates such as ferric hydroxide to ferric oxide and water vapor, the decomposition of calcium carbonate to calcium oxide and carbon dioxide as well as iron carbonate to iron oxide:
CaCO3 → CaO + CO2
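The analogous decomposition of iron carbonate (under non-oxidizing conditions) can be written as:
FeCO3 → FeO + CO2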
Calcination processes are carried out in a variety of furnaces, including shaft furnaces, rotary kilns, and fluidized bed reactors.
Roasting
Roasting consists of thermal gas–solid reactions, which can include oxidation, reduction, chlorination, sulfation, and pyrohydrolysis.
The most common example of roasting is the oxidation of metal sulfide ores. The metal sulfide is heated in the presence of air to a temperature that allows the oxygen in the air to react with the sulfide to form sulfur dioxide gas and solid metal oxide. The solid product from roasting is often called "calcine". In oxidizing roasting, if the temperature and gas conditions are such that the sulfide feed is completely oxidized, the process is known as "dead roasting". Sometimes, as in the case of pre-treating reverberatory or electric smelting furnace feed, the roasting process is performed with less than the required amount of oxygen to fully oxidize the feed. In this case, the process is called "partial roasting" because the sulfur is only partially removed. Finally, if the temperature and gas conditions are controlled such that the sulfides in the feed react to form metal sulfates instead of metal oxides, the process is known as "sulfation roasting". Sometimes, temperature and gas conditions can be maintained such that a mixed sulfide feed (for instance a feed containing both copper sulfide and iron sulfide) reacts such that one metal forms a sulfate and the other forms an oxide, the process is known as "selective roasting" or "selective sulfation".
Smelting
Smelting involves thermal reactions in which at least one product is a molten phase.
Metal oxides can then be smelted by heating with coke or charcoal (forms of carbon), a reducing agent that liberates the oxygen as carbon dioxide leaving a refined mineral. Concern about the production of carbon dioxide has arisen only recently, following the identification of the enhanced greenhouse effect.
Carbonate ores are also smelted with charcoal, but sometimes need to be calcined first.
Other materials may need to be added as flux, aiding the melting of the oxide ores and assisting in the formation of a slag, as the flux reacts with impurities, such as silicon compounds.
Smelting usually takes place at a temperature above the melting point of the metal, but processes vary considerably according to the ore involved and other matters.
Refining
Refining is the removal of impurities from materials by a thermal process. This covers a wide range of processes, involving different kinds of furnace or other plant.
The term "refining" can also refer to certain electrolytic processes. Accordingly, some kinds of pyrometallurgical refining are referred to as "fire refining".
See also
Blast furnace
Flash smelting
Isasmelt furnace
Reverberatory furnace
References
External links
U.S. Patent 5616168 Hydrometallurgical processing of impurity streams generated during the pyrometallurgy of copper
Metallurgy
Metallurgical processes | Pyrometallurgy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 979 | [
"Metallurgical processes",
"Metallurgy",
"nan",
"Materials science"
] |
1,686,779 | https://en.wikipedia.org/wiki/Fusion%20energy%20gain%20factor | A fusion energy gain factor, usually expressed with the symbol Q, is the ratio of fusion power produced in a nuclear fusion reactor to the power required to maintain the plasma in steady state. The condition of Q = 1, when the power being released by the fusion reactions is equal to the required heating power, is referred to as breakeven, or in some sources, scientific breakeven.
The energy given off by the fusion reactions may be captured within the fuel, leading to self-heating. Most fusion reactions release at least some of their energy in a form that cannot be captured within the plasma, so a system at Q = 1 will cool without external heating. With typical fuels, self-heating in fusion reactors is not expected to match the external sources until at least Q ≈ 5. If Q increases past this point, increasing self-heating eventually removes the need for external heating. At this point the reaction becomes self-sustaining, a condition called ignition, and is generally regarded as highly desirable for practical reactor designs. Ignition corresponds to infinite Q.
Over time, several related terms have entered the fusion lexicon. Energy that is not captured within the fuel can be captured externally to produce electricity. That electricity can be used to heat the plasma to operational temperatures. A system that is self-powered in this way is referred to as running at engineering breakeven. Operating above engineering breakeven, a machine would produce more electricity than it uses and could sell that excess. One that sells enough electricity to cover its operating costs is sometimes known as economic breakeven. Additionally, fusion fuels, especially tritium, are very expensive, so many experiments run on various test gasses like hydrogen or deuterium. A reactor running on these fuels that reaches the conditions for breakeven if tritium was introduced is said to be at extrapolated breakeven.
The current record for highest Q in a tokamak (as recorded during actual D-T fusion) was set by JET at Q = 0.67 in 1997. The record for Qext (the theoretical Q value of D-T fusion as extrapolated from D-D results) in a tokamak is held by JT-60, with Qext = 1.25, slightly besting JET's earlier Qext = 1.14. In December 2022, the National Ignition Facility, an inertial confinement facility, reached Q = 1.54 with a 3.15 MJ output from 2.05 MJ of laser heating, which remains the record for any fusion scheme.
Concept
Q is simply the comparison of the power being released by the fusion reactions in a reactor, Pfus, to the constant heating power being supplied, Pheat, in normal operating conditions. For those designs that do not run in the steady state, but are instead pulsed, the same calculation can be made by summing all of the fusion energy produced in Pfus and all of the energy expended producing the pulse in Pheat. However, there are several definitions of breakeven that consider additional power losses.
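In symbols, for steady-state operation this is simply Q = Pfus / Pheat; for pulsed devices the same ratio is formed from the energies summed over the pulse, Q = Efus / Eheat.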
Breakeven
In 1955, John Lawson was the first to explore the energy balance mechanisms in detail, initially in classified works but published openly in a now-famous 1957 paper. In this paper he considered and refined work by earlier researchers, notably Hans Thirring, Peter Thonemann, and a review article by Richard Post. Expanding on all of these, Lawson's paper made detailed predictions for the amount of power that would be lost through various mechanisms, and compared that to the energy needed to sustain the reaction. This balance is today known as the Lawson criterion.
In a successful fusion reactor design, the fusion reactions generate an amount of power designated Pfus. Some amount of this energy, Ploss, is lost through a variety of mechanisms, mostly convection of the fuel to the walls of the reactor chamber and various forms of radiation that cannot be captured to generate power. In order to keep the reaction going, the system has to provide heating to make up for these losses, where Ploss = Pheat to maintain thermal equilibrium.
The most basic definition of breakeven is when Q = 1, that is, Pfus = Pheat.
Scientific breakeven
Over time, new types of fusion devices were proposed with different operating systems. Of particular note is the concept of inertial confinement fusion, or ICF. The magnetic approaches, MCF for short, are generally designed to operate in the (quasi) steady state. That is, the plasma is maintained in fusion conditions for time scales much longer than the fusion reactions, on the order of seconds or minutes. The goal is to allow most of the fuel time to undergo a fusion reaction. In contrast, ICF reactions last only for a time on the order of dozens of fusion reactions, and instead attempt to ensure the conditions are such that much of the fuel will undergo fusion even in this very short time span. To do so, ICF devices compress the fuel to extreme conditions, where the self-heating reactions occur very rapidly.
In an MCF device, the initial plasma is set up and maintained by large magnets, which in modern superconducting devices requires very little energy to run. Once set up, the steady state is maintained by injecting heat into the plasma with a variety of devices. These devices represent the vast majority of the energy needed to keep the system running. They are also relatively efficient, with perhaps as much as half of the electricity fed into them ending up as energy in the plasma. For this reason, Pheat in the steady state is something fairly close to all of the energy being fed into the reactor, and the efficiency of the heating systems is generally ignored. When the total efficiency is considered then it is generally not part of the calculation of Q, but instead included in the calculation of engineering breakeven, Qeng (see below).
In contrast, in ICF devices the energy needed to create the required conditions is enormous, and the devices that do so, typically lasers, are extremely inefficient, about 1%. If one were to use a similar definition of Pheat, that is, all the energy being fed into the system, then ICF devices are hopelessly inefficient. For instance, the NIF uses over 400 MJ of electrical power to produce an output of 3.15 MJ. In contrast to MCF, this energy has to be supplied to spark every reaction, not just get the system up and running.
ICF proponents point out that alternative "drivers" could be used that would improve this ratio at least ten times. If one is attempting to understand improvements in the performance of an ICF system, then it is not the performance of the drivers that is interesting, but the performance of the fusion process itself. Thus, it is typical to define Pheat for ICF devices as the amount of driver energy actually hitting the fuel, about 2 MJ in the case of NIF. Using this definition of Pheat, one arrives at a Q of 1.5. This is, ultimately, the same definition as the one used in MCF, but the upstream losses are smaller in those systems and no distinction is needed.
To make this distinction clear, modern works often refer to this definition as scientific breakeven, Qsci or sometimes Qplasma, to contrast it with similar terms.
Extrapolated breakeven
Since the 1950s, most commercial fusion reactor designs have been based on a mix of deuterium and tritium as their primary fuel; other fuels have attractive features but are much harder to ignite. As tritium is radioactive, highly bioactive, and highly mobile, it represents a significant safety concern and adds to the cost of designing and operating such a reactor.
In order to lower costs, many experimental machines are designed to run on test fuels of hydrogen or deuterium alone, leaving out the tritium. In this case, the term extrapolated breakeven, Qext, is used to define the expected performance of the machine running on D-T fuel based on the performance when running on hydrogen or deuterium alone.
The records for extrapolated breakeven are slightly higher than the records for scientific breakeven. Both JET and JT-60 have reached values around 1.25 (see below for details) while running on D-D fuel. When running on D-T, only possible in JET, the maximum performance is about half the extrapolated value.
Engineering breakeven
Another related term, engineering breakeven, denoted QE, Qeng or Qtotal depending on the source, considers the need to extract the energy from the reactor, turn that into electrical energy, and feed some of that back into the heating system. This closed loop sending electricity from the fusion back into the heating system is known as recirculation. In this case, the basic definition changes by adding additional terms to the Pfus side to consider the efficiencies of these processes.
D-T reactions release most of their energy as neutrons and a smaller amount as charged particles like alpha particles. Neutrons are electrically neutral and will travel out of any plasma before they can deposit energy back into it. This means that only the charged particles from the reactions can be captured within the fuel mass and give rise to self-heating. If the fraction of the energy being released in the charged particles is fch, then the power in these particles is Pch = fchPfus. If this self-heating process is perfect, that is, all of Pch is captured in the fuel, that means the power available for generating electricity is the power that is not released in that form, or (1 − fch)Pfus.
In the case of neutrons carrying most of the practical energy, as is the case in the D-T fuel, this neutron energy is normally captured in a "blanket" of lithium that produces more tritium that is used to fuel the reactor. Due to various exothermic and endothermic reactions, the blanket may have a power gain factor MR. MR is typically on the order of 1.1 to 1.3, meaning it produces a small amount of energy as well. The net result, the total amount of energy released to the environment and thus available for energy production, is referred to as PR, the net power output of the reactor.
The blanket is then cooled and the cooling fluid used in a heat exchanger driving conventional steam turbines and generators. That electricity is then fed back into the heating system. Each of these steps in the generation chain has an efficiency to consider. In the case of the plasma heating systems, the efficiency ηheat is on the order of 60 to 70%, while modern generator systems based on the Rankine cycle have an efficiency ηelec of around 35 to 40%. Combining these we get a net efficiency of the power conversion loop as a whole, η = ηheat ηelec, of around 0.20 to 0.25. That is, about 20 to 25% of PR can be recirculated.
Thus, the fusion energy gain factor required to reach engineering breakeven is defined as:
QE = 1 / (fch + η MR (1 − fch))
To understand how QE is used, consider a reactor operating at 20 MW and Q = 2. Q = 2 at 20 MW implies that Pheat is 10 MW. Of that original 20 MW about 20% is alphas, so assuming complete capture, 4 MW of Pheat is self-supplied. We need a total of 10 MW of heating and get 4 of that through alphas, so we need another 6 MW of power. Of the original 20 MW of output, 4 MW are left in the fuel, so we have 16 MW of net output. Using MR of 1.15 for the blanket, we get PR of about 18.4 MW. Assuming a good η of 0.25, supplying those 6 MW of heating would require 24 MW of PR, so a reactor at Q = 2 cannot reach engineering breakeven. At Q = 4 one needs 5 MW of heating, 4 of which come from the fusion, leaving 1 MW of external power required, which can easily be generated by the 18.4 MW net output. Thus for this theoretical design the QE is between 2 and 4.
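Evaluating the engineering-breakeven expression above with these values gives QE = 1 / (0.2 + 0.25 × 1.15 × 0.8) ≈ 2.3, which pins down where in that bracket the breakeven point actually lies.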
Considering real-world losses and efficiencies, Q values between 5 and 8 are typically listed for magnetic confinement devices to reach QE, while inertial devices have dramatically lower values for the conversion efficiencies and thus require much higher Q values, on the order of 50 to 100.
Ignition
As the temperature of the plasma increases, the rate of fusion reactions grows rapidly, and with it, the rate of self-heating. In contrast, non-capturable energy losses like x-rays do not grow at the same rate. Thus, in overall terms, the self-heating process becomes more efficient as the temperature increases, and less energy is needed from external sources to keep it hot.
Eventually Pheat reaches zero, that is, all of the energy needed to keep the plasma at the operational temperature is being supplied by self-heating, and the amount of external energy that needs to be added drops to zero. This point is known as ignition. In the case of D-T fuel, where only 20% of the energy is released as alphas that give rise to self-heating, this cannot occur until the plasma is releasing at least five times the power needed to keep it at its working temperature.
Ignition, by definition, corresponds to an infinite Q, but it does not mean that frecirc drops to zero as the other power sinks in the system, like the magnets and cooling systems, still need to be powered. Generally, however, these are much smaller than the energy in the heaters, and require a much smaller frecirc. More importantly, this number is more likely to be near-constant, meaning that further improvements in plasma performance will result in more energy that can be directly used for commercial generation, as opposed to recirculation.
Commercial breakeven
The final definition of breakeven is commercial breakeven, which occurs when the economic value of any net electricity left over after recirculation is enough to pay for the reactor and all processes to gather and transport reactants, such as tritium and deuterium, to the reactor. This value depends both on the reactor's capital cost and any financing costs related to that, its operating costs including fuel and maintenance, and the spot price of electrical power.
Commercial breakeven relies on factors outside the technology of the reactor itself, and it is possible that even a reactor with a fully ignited plasma operating well beyond engineering breakeven will not generate enough electricity rapidly enough to pay for itself. Whether any of the mainline concepts like ITER can reach this goal is debated in the field, in large part because of the current immaturity of the technology and the limited interest and funding in the area. Scientists have only recently reached the point of positive energy gain, where the energy produced marginally exceeds the energy required to initiate the fusion process; this ratio is the Q-factor. Physicists expect that, with enough investment, the achievable gain can be increased well beyond this, producing a definite increase in energy output and revenue, but the practical maximum is currently unknown, and it is possible that the Q-factor will never be high enough to overcome the commercial breakeven point.
Practical example
Most fusion reactor designs being studied are based on the D-T reaction, as this is by far the easiest to ignite, and is energy-dense. This reaction gives off most of its energy in the form of a single highly energetic neutron, and only 20% of the energy in the form of an alpha. Thus, for the D-T reaction, fch = 0.2. This means that self-heating does not become equal to the external heating until at least Q = 5.
Efficiency values depend on design details but may be in the range of ηheat = 0.7 (70%) and ηelec = 0.4 (40%). The purpose of a fusion reactor is to produce power, not to recirculate it, so a practical reactor must have frecirc = 0.2 approximately. Lower would be better but will be hard to achieve. Using these values we find for a practical reactor Q = 22.
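One consistent reading of this estimate, assuming the recirculated electricity must supply all of the external heating and that only the neutron fraction (1 − fch) of Pfus is converted to electricity, is Q ≈ 1 / (ηheat ηelec frecirc (1 − fch)) = 1 / (0.7 × 0.4 × 0.2 × 0.8) ≈ 22.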
Using these values and considering ITER, the reactor produces 500 MW of fusion power for 50 MW of supply. If 20% of the output is self-heating, that means 400 MW escape. Assuming the same ηheat = 0.7 and ηelec = 0.4, ITER (in theory) could produce as much as 112 MW of heating. This means ITER would operate at engineering breakeven. However, ITER is not equipped with power-extraction systems, so this remains theoretical until follow-on machines like DEMO.
Transient vs. continual
Many early fusion devices operated for microseconds, using some sort of pulsed power source to feed their magnetic confinement system while using the compression from the confinement as the heating source. Lawson defined breakeven in this context as the total energy released by the entire reaction cycle compared to the total energy supplied to the machine during the same cycle.
Over time, as performance increased by orders of magnitude, the reaction times have extended from microseconds to seconds, and ITER is designed to have shots that run for several minutes. In this case, the definition of "the entire reaction cycle" becomes blurred. In the case of an ignited plasma, for instance, Pheat may be quite high while the system is being set up, and then drop to zero when it is fully developed, so one may be tempted to pick an instant in time when it is operating at its best to determine a high, or infinite, Q. A better solution in these cases is to use the original Lawson definition averaged over the reaction to produce a similar value as the original definition.
There is an additional complication. During the heating phase when the system is being brought up to operational conditions, some of the energy released by the fusion reactions will be used to heat the surrounding fuel, and thus not be released to the environment. This is no longer true when the plasma reaches its operational temperature and enters thermal equilibrium. Thus, if one averages over the entire cycle, this energy will be included as part of the heating term, that is, some of the energy that was captured for heating would otherwise have been released in Pfus and is therefore not indicative of an operational Q.
Operators of the JET reactor argued that this input should be removed from the total, defining
Q* = Pfus / (Pheat − Ptemp),
where Ptemp is the power applied to raise the internal energy of the plasma. It is this definition that was used when reporting JET's record 0.67 value.
Some debate over this definition continues. In 1998, the operators of the JT-60 claimed to have reached Q = 1.25 running on D-D fuel, thus reaching extrapolated breakeven. This measurement was based on the JET definition of Q*. Using this definition, JET had also reached extrapolated breakeven some time earlier. If one considers the energy balance in these conditions, and the analysis of previous machines, it is argued the original definition should be used, and thus both machines remain well below break-even of any sort.
Scientific breakeven at NIF
Lawrence Livermore National Laboratory (LLNL), the leader in ICF research, uses the modified Q that defines Pheat as the energy delivered by the driver to the capsule, as opposed to the energy put into the driver by an external power source. This definition produces much higher Q values, and changes the definition of breakeven to be Pfus / Plaser = 1. On occasion, they referred to this definition as "scientific breakeven". This term was not universally used; other groups adopted the redefinition of Q but continued to refer to Pfus = Plaser simply as breakeven.
On 7 October 2013, LLNL announced that roughly one week earlier, on 29 September, it had achieved scientific breakeven in the National Ignition Facility (NIF). In this experiment, Pfus was approximately 14 kJ, while the laser output was 1.8 MJ. By their previous definition, this would be a Q of 0.0077. For this press release, they re-defined Q once again, this time equating Pheat to be only the amount energy delivered to "the hottest portion of the fuel", calculating that only 10 kJ of the original laser energy reached the part of the fuel that was undergoing fusion reactions. This release has been heavily criticized in the field.
On 17 August 2021, the NIF announced that in early August 2021, an experiment had achieved a Q value of 0.7, producing 1.35 MJ of energy from a fuel capsule by focusing 1.9 MJ of laser energy on the capsule. The result was an eight-fold increase over any prior energy output.
On 13 December 2022, the United States Department of Energy announced that NIF had exceeded the previously elusive Q ≥ 1 milestone on 5 December 2022. This was achieved by producing 3.15 MJ after delivering 2.05 MJ to the target, for an equivalent Q of 1.54.
Notes
References
Citations
Bibliography
Fusion power
Energy
Energy economics | Fusion energy gain factor | [
"Physics",
"Chemistry",
"Environmental_science"
] | 4,416 | [
"Physical quantities",
"Plasma physics",
"Energy economics",
"Fusion power",
"Energy (physics)",
"Energy",
"Nuclear fusion",
"Environmental social science"
] |
14,993,993 | https://en.wikipedia.org/wiki/Boussinesq%20approximation%20%28water%20waves%29 | In fluid dynamics, the Boussinesq approximation for water waves is an approximation valid for weakly non-linear and fairly long waves. The approximation is named after Joseph Boussinesq, who first derived them in response to the observation by John Scott Russell of the wave of translation (also known as solitary wave or soliton). The 1872 paper of Boussinesq introduces the equations now known as the Boussinesq equations.
The Boussinesq approximation for water waves takes into account the vertical structure of the horizontal and vertical flow velocity. This results in non-linear partial differential equations, called Boussinesq-type equations, which incorporate frequency dispersion (as opposite to the shallow water equations, which are not frequency-dispersive). In coastal engineering, Boussinesq-type equations are frequently used in computer models for the simulation of water waves in shallow seas and harbours.
While the Boussinesq approximation is applicable to fairly long waves – that is, when the wavelength is large compared to the water depth – the Stokes expansion is more appropriate for short waves (when the wavelength is of the same order as the water depth, or shorter).
Boussinesq approximation
The essential idea in the Boussinesq approximation is the elimination of the vertical coordinate from the flow equations, while retaining some of the influences of the vertical structure of the flow under water waves. This is useful because the waves propagate in the horizontal plane and have a different (not wave-like) behaviour in the vertical direction. Often, as in Boussinesq's case, the interest is primarily in the wave propagation.
This elimination of the vertical coordinate was first done by Joseph Boussinesq in 1871, to construct an approximate solution for the solitary wave (or wave of translation). Subsequently, in 1872, Boussinesq derived the equations known nowadays as the Boussinesq equations.
The steps in the Boussinesq approximation are:
a Taylor expansion is made of the horizontal and vertical flow velocity (or velocity potential) around a certain elevation,
this Taylor expansion is truncated to a finite number of terms,
the conservation of mass (see continuity equation) for an incompressible flow and the zero-curl condition for an irrotational flow are used, to replace vertical partial derivatives of quantities in the Taylor expansion with horizontal partial derivatives.
Thereafter, the Boussinesq approximation is applied to the remaining flow equations, in order to eliminate the dependence on the vertical coordinate.
As a result, the resulting partial differential equations are in terms of functions of the horizontal coordinates (and time).
As an example, consider potential flow over a horizontal bed in the (x,z) plane, with x the horizontal and z the vertical coordinate. The bed is located at z = −h, where h is the mean water depth. A Taylor expansion is made of the velocity potential φ(x,z,t) around the bed level z = −h:
φ = φb + (z + h) [∂φ/∂z]z=−h + ½ (z + h)² [∂²φ/∂z²]z=−h + ...,
where φb(x,t) is the velocity potential at the bed. Invoking Laplace's equation for φ, as valid for incompressible flow, gives:
φ = φb − ½ (z + h)² ∂²φb/∂x² + (1/24) (z + h)⁴ ∂⁴φb/∂x⁴ − ...,
since the vertical velocity ∂φ/∂z is zero at the – impermeable – horizontal bed z = −h. This series may subsequently be truncated to a finite number of terms.
Original Boussinesq equations
Derivation
For water waves on an incompressible fluid and irrotational flow in the (x,z) plane, the boundary conditions at the free surface elevation z = η(x,t) are:
∂η/∂t + u ∂η/∂x − w = 0 and
∂φ/∂t + ½ (u² + w²) + g η = 0,
where:
u is the horizontal flow velocity component: u = ∂φ/∂x,
w is the vertical flow velocity component: w = ∂φ/∂z,
g is the acceleration by gravity.
Now the Boussinesq approximation for the velocity potential φ, as given above, is applied in these boundary conditions. Further, in the resulting equations only the linear and quadratic terms with respect to η and ub are retained (with ub = ∂φb/∂x the horizontal velocity at the bed z = −h). The cubic and higher order terms are assumed to be negligible. Then, the following partial differential equations are obtained:
∂η/∂t + ∂/∂x [(h + η) ub] = (1/6) h³ ∂³ub/∂x³,
∂ub/∂t + ub ∂ub/∂x + g ∂η/∂x = (1/2) h² ∂³ub/∂x²∂t.
set A – Boussinesq (1872), equation (25)
This set of equations has been derived for a flat horizontal bed, i.e. the mean depth is a constant independent of position . When the right-hand sides of the above equations are set to zero, they reduce to the shallow water equations.
Under some additional approximations, but at the same order of accuracy, the above set A can be reduced to a single partial differential equation for the free surface elevation η:
∂²η/∂t² − g h ∂²η/∂x² − g h ∂²/∂x² [ (3/2) η²/h + (1/3) h² ∂²η/∂x² ] = 0.
set B – Boussinesq (1872), equation (26)
From the terms between brackets, the importance of nonlinearity of the equation can be expressed in terms of the Ursell number.
In dimensionless quantities, using the water depth h and gravitational acceleration g for non-dimensionalization, this equation reads, after normalization:
∂²ψ/∂τ² − ∂²ψ/∂ξ² − ∂²/∂ξ² [ (3/2) ψ² + (1/3) ∂²ψ/∂ξ² ] = 0,
with:
ψ = η/h the dimensionless surface elevation, τ = t √(g/h) the dimensionless time, and ξ = x/h the dimensionless horizontal position.
Linear frequency dispersion
Water waves of different wave lengths travel with different phase speeds, a phenomenon known as frequency dispersion. For the case of infinitesimal wave amplitude, the terminology is linear frequency dispersion. The frequency dispersion characteristics of a Boussinesq-type of equation can be used to determine the range of wave lengths, for which it is a valid approximation.
The linear frequency dispersion characteristics for the above set A of equations are:
c² = g h (1 + (1/6) k² h²) / (1 + (1/2) k² h²),
with:
c the phase speed,
k the wave number (k = 2π/λ, with λ the wave length).
The relative error in the phase speed c for set A, as compared with linear theory for water waves, is less than 4% for a relative wave number k h < π/2. So, in engineering applications, set A is valid for wavelengths λ larger than 4 times the water depth h.
The linear frequency dispersion characteristics of equation B are:
c² = g h (1 − (1/3) k² h²).
The relative error in the phase speed for equation B is less than 4% for k h < 2π/7, equivalent to wave lengths longer than 7 times the water depth h, called fairly long waves.
For short waves with k² h² > 3, equation B becomes physically meaningless, because there are no longer real-valued solutions of the phase speed.
The original set of two partial differential equations (Boussinesq, 1872, equation 25, see set A above) does not have this shortcoming.
The shallow water equations have a relative error in the phase speed of less than 4% for wave lengths in excess of 13 times the water depth h.
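These validity ranges can be checked numerically against full linear wave theory, for which c² = (g/k) tanh(k h). The following Python sketch evaluates the dispersion expressions quoted above; it is an illustration only, and the function and variable names are chosen for the example.

```python
import numpy as np

def phase_speed_errors(kh):
    """Relative phase-speed errors of the Boussinesq-type approximations
    versus full linear theory, at relative wave number kh (a float)."""
    exact = np.sqrt(np.tanh(kh) / kh)                  # c / sqrt(g h), linear theory
    approx = {
        "set A":   np.sqrt((1 + kh**2 / 6) / (1 + kh**2 / 2)),
        "eq B":    np.sqrt(max(1 - kh**2 / 3, 0.0)),   # no real c for kh**2 > 3
        "shallow": 1.0,                                # c = sqrt(g h), no dispersion
    }
    return {name: abs(c / exact - 1) for name, c in approx.items()}

# Wavelengths of 4, 7 and 13 water depths correspond to kh = 2*pi/4, 2*pi/7, 2*pi/13:
for lam_over_h in (4.0, 7.0, 13.0):
    kh = 2 * np.pi / lam_over_h
    print(f"lambda = {lam_over_h:4.1f} h:", phase_speed_errors(kh))
```

Running it reproduces the roughly 4% errors at the wavelength limits stated for each approximation.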
Boussinesq-type equations and extensions
There are an overwhelming number of mathematical models which are referred to as Boussinesq equations. This may easily lead to confusion, since often they are loosely referenced to as the Boussinesq equations, while in fact a variant thereof is considered. So it is more appropriate to call them Boussinesq-type equations. Strictly speaking, the Boussinesq equations is the above-mentioned set B, since it is used in the analysis in the remainder of his 1872 paper.
Some directions, into which the Boussinesq equations have been extended, are:
varying bathymetry,
improved frequency dispersion,
improved non-linear behavior,
making a Taylor expansion around different vertical elevations,
dividing the fluid domain in layers, and applying the Boussinesq approximation in each layer separately,
inclusion of wave breaking,
inclusion of surface tension,
extension to internal waves on an interface between fluid domains of different mass density,
derivation from a variational principle.
Further approximations for one-way wave propagation
While the Boussinesq equations allow for waves traveling simultaneously in opposing directions, it is often advantageous to only consider waves traveling in one direction. Under small additional assumptions, the Boussinesq equations reduce to:
the Korteweg–de Vries equation for wave propagation in one horizontal dimension,
the Kadomtsev–Petviashvili equation for (near uni-directional) wave propagation in two horizontal dimensions,
the nonlinear Schrödinger equation (NLS equation) for the complex-valued amplitude of narrowband waves (slowly modulated waves).
Besides solitary wave solutions, the Korteweg–de Vries equation also has exact periodic solutions, called cnoidal waves. These are approximate solutions of the Boussinesq equation.
Numerical models
For the simulation of wave motion near coasts and harbours, numerical models – both commercial and academic – employing Boussinesq-type equations exist. Some commercial examples are the Boussinesq-type wave modules in MIKE 21 and SMS. Some of the free Boussinesq models are Celeris, COULWAVE, and FUNWAVE. Most numerical models employ finite-difference, finite-volume or finite-element techniques for the discretization of the model equations. Scientific reviews and intercomparisons of several Boussinesq-type equations, their numerical approximation and performance have also been published.
Notes
References
See Part 2, Chapter 5.
Fluid dynamics
Water waves
Equations of fluid dynamics | Boussinesq approximation (water waves) | [
"Physics",
"Chemistry",
"Engineering"
] | 1,755 | [
"Physical phenomena",
"Equations of fluid dynamics",
"Equations of physics",
"Water waves",
"Chemical engineering",
"Waves",
"Piping",
"Fluid dynamics"
] |
14,996,853 | https://en.wikipedia.org/wiki/Vaughan%27s%20identity | In mathematics and analytic number theory, Vaughan's identity is an identity found by R. C. Vaughan (1977) that can be used to simplify Vinogradov's work on trigonometric sums. It can be used to estimate summatory functions of the form
Σ_{n ≤ N} Λ(n) f(n),
where f is some arithmetic function of the natural integers n, whose values in applications are often roots of unity, and Λ is the von Mangoldt function.
Procedure for applying the method
The motivation for Vaughan's construction of his identity is briefly discussed at the beginning of Chapter 24 in Davenport. For now, we will skip over most of the technical details motivating the identity and its usage in applications, and instead focus on the setup of its construction by parts. Following from the reference, we construct four distinct sums based on the expansion of the logarithmic derivative of the Riemann zeta function in terms of functions which are partial Dirichlet series respectively truncated at the upper bounds U and V. More precisely, we define
F(s) = Σ_{m ≤ U} Λ(m) m^{−s} and G(s) = Σ_{d ≤ V} μ(d) d^{−s},
which leads us to the exact identity that
−ζ′(s)/ζ(s) = F(s) − ζ(s) F(s) G(s) − ζ′(s) G(s) + (−ζ′(s)/ζ(s) − F(s)) (1 − ζ(s) G(s)).
This last expansion implies that we can write
Λ(n) = a1(n) + a2(n) + a3(n) + a4(n),
where the component functions are defined to be
a1(n) = Λ(n) if n ≤ U, and a1(n) = 0 otherwise;
a2(n) = −Σ_{mdr = n, m ≤ U, d ≤ V} Λ(m) μ(d);
a3(n) = Σ_{hd = n, d ≤ V} μ(d) log h;
a4(n) = −Σ_{mk = n, m > U, k > 1} Λ(m) (Σ_{d | k, d ≤ V} μ(d)).
We then define the corresponding summatory functions for 1 ≤ i ≤ 4 to be
Si = Σ_{n ≤ N} ai(n) f(n),
so that we can write
Σ_{n ≤ N} Λ(n) f(n) = S1 + S2 + S3 + S4.
Finally, at the conclusion of a multi-page argument of technical and at times delicate estimations of these sums, we obtain the following form of Vaughan's identity when we assume that U, V ≥ 2 and U V ≤ N: an upper bound on Σ_{n ≤ N} Λ(n) f(n) in terms of U, V, N and the partial sums of f, referred to below as equation (V1).
It is remarked that in some instances sharper estimates can be obtained from Vaughan's identity by treating the component sums more carefully, expanding them further before estimation.
The optimality of the upper bound obtained by applying Vaughan's identity appears to be application-dependent with respect to the best choices of the truncation parameters U and V that we can input into equation (V1). See the applications cited in the next section for specific examples that arise in the different contexts respectively considered by multiple authors.
Applications
Vaughan's identity has been used to simplify the proof of the Bombieri–Vinogradov theorem and to study Kummer sums (see the references and external links below).
In Chapter 25 of Davenport, one application of Vaughan's identity is to estimate an important prime-related exponential sum of Vinogradov defined by
S(α) = Σ_{n ≤ N} Λ(n) e(nα), where e(x) = exp(2πi x).
In particular, we obtain an asymptotic upper bound for these sums (typically evaluated at irrational α) whose rational approximations a/q, with (a, q) = 1, satisfy
|α − a/q| ≤ 1/q²,
of the form
S(α) ≪ (N q^{−1/2} + N^{4/5} + N^{1/2} q^{1/2}) (log N)⁴.
The argument for this estimate follows from Vaughan's identity by proving, through a somewhat intricate argument, corresponding bounds on the component sums S1, S2, S3, S4, and then deducing the formula above in the non-trivial range of the parameters q and N.
Another application of Vaughan's identity is found in Chapter 26 of Davenport where the method is employed to derive estimates for sums (exponential sums) of three primes.
Further examples of Vaughan's identity in practice are given in the references and external links below.
Generalizations
Vaughan's identity was generalized by Heath-Brown (1982).
Notes
References
External links
Proof Wiki on Vaughan's Identity
Joni's Math Notes (very detailed exposition)
Encyclopedia of Mathematics
Terry Tao's blog on the large sieve and the Bombieri-Vinogradov theorem
Theorems in analytic number theory
Mathematical identities | Vaughan's identity | [
"Mathematics"
] | 633 | [
"Theorems in mathematical analysis",
"Theorems in analytic number theory",
"Theorems in number theory",
"Mathematical problems",
"Mathematical identities",
"Mathematical theorems",
"Algebra"
] |
15,003,593 | https://en.wikipedia.org/wiki/Encyclopedia%20of%20Triangle%20Centers | The Encyclopedia of Triangle Centers (ETC) is an online list of thousands of points or "centers" associated with the geometry of a triangle. This resource is hosted at the University of Evansville. It started from a list of 400 triangle centers published in the 1998 book Triangle Centers and Central Triangles by Professor Clark Kimberling.
The list has grown to identify over 65,000 triangle centers and is managed cooperatively by an international team of geometry researchers.
This resource is regarded as a pillar of modern triangle geometry. In GeoGebra, the encyclopedia's centers are available through a dedicated TriangleCenter command.
Each point in the list is identified by an index number of the form X(n) —for example, X(1) is the incenter. The information recorded about each point includes its trilinear and barycentric coordinates and its relation to lines joining other identified points. Links to The Geometer's Sketchpad diagrams are provided for key points. The Encyclopedia also includes a glossary of terms and definitions.
Each point in the list is assigned a unique name. In cases where no particular name arises from geometrical or historical considerations, the name of a star is used instead. For example, the 770th point in the list is named point Acamar.
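As a minimal sketch of how such coordinates locate a center, the following Python fragment converts homogeneous barycentric coordinates into a Cartesian point for a given triangle. The barycentrics a : b : c for X(1) and 1 : 1 : 1 for X(2) are standard; the function names are illustrative only.

```python
import math

def barycentric_to_cartesian(A, B, C, u, v, w):
    """Map homogeneous barycentric coordinates (u : v : w) for triangle
    ABC to Cartesian coordinates."""
    s = u + v + w
    return ((u * A[0] + v * B[0] + w * C[0]) / s,
            (u * A[1] + v * B[1] + w * C[1]) / s)

def incenter(A, B, C):
    # X(1): barycentrics a : b : c, the side lengths opposite each vertex.
    a = math.dist(B, C)
    b = math.dist(C, A)
    c = math.dist(A, B)
    return barycentric_to_cartesian(A, B, C, a, b, c)

def centroid(A, B, C):
    # X(2): barycentrics 1 : 1 : 1.
    return barycentric_to_cartesian(A, B, C, 1, 1, 1)

print(incenter((0, 0), (4, 0), (0, 3)))  # (1.0, 1.0) for the 3-4-5 right triangle
print(centroid((0, 0), (4, 0), (0, 3)))
```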
Notable points
The first 10 points listed in the Encyclopedia are:
{| class="wikitable"
|-
! ETC reference !! Name !! Definition
|-
! X(1)
| Incenter
|| center of the incircle
|-
! X(2)
| Centroid
|| intersection of the three medians
|-
! X(3)
| Circumcenter
|| center of the circumscribed circle
|-
! X(4)
| Orthocenter
|| intersection of the three altitudes
|-
! X(5)
| Nine-point center
|| center of the nine-point circle
|-
! X(6)
| Symmedian point
|| intersection of the three symmedians
|-
! X(7)
| Gergonne point
|| symmedian point of contact triangle
|-
! X(8)
| Nagel point
|| intersection of lines from each vertex to the corresponding semiperimeter point
|-
! X(9)
| Mittenpunkt
|| symmedian point of the triangle formed by the centers of the three excircles
|-
! X(10)
| Spieker center
|| center of the Spieker circle
|}
Other points with entries in the Encyclopedia include:
{| class="wikitable"
|-
! ETC reference !! Name
|-
! X(11)
| Feuerbach point
|-
! X(13)
| Fermat point
|-
! X(15), X(16)
| first and second isodynamic points
|-
! X(17), X(18)
| first and second Napoleon points
|-
! X(19)
| Clawson point
|-
! X(20)
| de Longchamps point
|-
! X(21)
| Schiffler point
|-
! X(22)
| Exeter point
|-
! X(39)
| Brocard midpoint
|-
! X(40)
| Bevan point
|-
! X(175)
| Isoperimetric point
|-
! X(176)
| Equal detour point
|}
Similar, albeit shorter, lists exist for quadri-figures (quadrilaterals and systems of four lines) and polygon geometry.
See also
Catalogue of Triangle Cubics
List of triangle topics
Triangle center
The Secrets of Triangles
Modern triangle geometry
References
External links
Implementation of ETC points as Perl subroutines by Jason Cantarella
Encyclopedia of Quadri-figures
Encyclopedia of Polygon Geometry
Triangle centers
Mathematical databases
20th-century encyclopedias
21st-century encyclopedias | Encyclopedia of Triangle Centers | [
"Physics",
"Mathematics"
] | 792 | [
"Point (geometry)",
"Triangle centers",
"Points defined for a triangle",
"Geometric centers",
"Symmetry"
] |
11,341,364 | https://en.wikipedia.org/wiki/Microturbulence | Microturbulence is a form of turbulence that varies over small distance scales. (Large-scale turbulence is called macroturbulence.)
Stellar
Microturbulence is one of several mechanisms that can cause broadening of the absorption lines in the stellar spectrum. Stellar microturbulence varies with the effective temperature and the surface gravity.
The microturbulent velocity is defined as the microscale non-thermal component of the gas velocity in the region of spectral line formation.
Convection is the mechanism believed to be responsible for the observed turbulent velocity field, both in low mass stars and massive stars.
When examined by a spectroscope, the velocity of the convective gas along the line of sight produces Doppler shifts in the absorption bands. It is the distribution of these velocities along the line of sight that produces the microturbulence broadening of the absorption lines in low mass stars that have convective envelopes. In massive stars convection can be present only in small regions below the surface; these sub-surface convection zones can excite turbulence at the stellar surface through the emission of acoustic and gravity waves.
The strength of the microturbulence (symbolized by ξ, in units of km s⁻¹) can be determined by comparing the broadening of strong lines versus weak lines.
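As a minimal sketch of how ξ enters line-width estimates, assuming the standard quadrature combination of thermal and microturbulent velocities in the Doppler width, Δλ_D = (λ₀/c)·√(2kT/m + ξ²) (the temperature, line, and ξ values below are merely illustrative):

```python
# Doppler width of a spectral line with microturbulence xi added in
# quadrature to the thermal velocity; all numerical inputs illustrative.
from math import sqrt

K_B = 1.380649e-23           # Boltzmann constant, J/K
M_FE = 55.845 * 1.66054e-27  # mass of an iron atom, kg

def doppler_width(lambda0_nm, T, mass, xi_km_s):
    """Doppler width (nm) at rest wavelength lambda0_nm for gas at
    temperature T (K), absorber mass (kg), and microturbulence xi (km/s)."""
    c = 2.99792458e5                          # speed of light, km/s
    v_thermal_sq = 2 * K_B * T / mass / 1e6   # thermal term in (km/s)^2
    return lambda0_nm / c * sqrt(v_thermal_sq + xi_km_s**2)

# An Fe line at 500 nm in a solar-type photosphere (T ~ 5800 K):
print(doppler_width(500.0, 5800, M_FE, 0.0))  # thermal broadening only
print(doppler_width(500.0, 5800, M_FE, 1.0))  # with xi = 1 km/s
```

For iron at photospheric temperatures the thermal speed is of order 1 km s⁻¹, so a microturbulence of similar magnitude noticeably widens the line.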
Magnetic nuclear fusion
Microturbulence plays a critical role in energy transport during magnetic nuclear fusion experiments, such as those conducted in tokamaks.
References
External links
Emission spectroscopy
Physical oceanography
Stellar astronomy | Microturbulence | [
"Physics",
"Chemistry",
"Astronomy"
] | 305 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Emission spectroscopy",
"Physical oceanography",
"Spectroscopy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
11,341,870 | https://en.wikipedia.org/wiki/Controlled%20ovarian%20hyperstimulation | Controlled ovarian hyperstimulation is a technique used in assisted reproduction involving the use of fertility medications to induce ovulation by multiple ovarian follicles. These multiple follicles can be taken out by oocyte retrieval (egg collection) for use in in vitro fertilisation (IVF), or be given time to ovulate, resulting in superovulation which is the ovulation of a larger-than-normal number of eggs, generally in the sense of at least two. When ovulated follicles are fertilised in vivo, whether by natural or artificial insemination, there is a very high risk of a multiple pregnancy.
In this article, unless otherwise specified, hyperstimulation will refer to hyperstimulation as part of IVF. In contrast, ovulation induction is ovarian stimulation without subsequent IVF, with the aim of developing one or two ovulatory follicles.
Procedure
Response prediction
Response predictors determine the protocol for ovulation suppression as well as dosage of medication used for hyperstimulation. Response prediction based on ovarian reserve confers substantially higher live birth rates, lower total costs and more safety.
It is commonly agreed not to exclude anyone from their first IVF attempt only on the basis of poor results on response predictors, as the accuracy of these tests can be poor for the prediction of pregnancy.
Antral follicle count
The response to gonadotropins may be roughly approximated by antral follicle count (AFC), estimated by vaginal ultrasound, which in turn reflects how many primordial follicles there are in reserve in the ovary.
The definition of "poor ovarian response" is the retrieval of less than 4 oocytes following a standard
hyperstimulation protocol, that is, following maximal stimulation. On the other hand, the term "hyper response" refers to the retrieval of more than 15 or 20 oocytes following a standard hyperstimulation protocol. The cut-offs used to predict poor responders versus normal versus hyper-responders upon vaginal ultrasonography vary in the literature, with that of likely poor response varying between an AFC under 3 and under 12, largely resulting from various definitions of the size follicles to be called antral ones.
The following table defines antral follicles as those about 2–8 mm in diameter:
The incidence of poor ovarian response in IVF ranges from 10 to 20%. Older poor responders have a lower range of pregnancy rates compared with younger ones (1.5–12.7% versus 13.0–35%, respectively). Conversely, poor responders are less prevalent among young women than among those of advancing age, with 50% of women aged 43–44 years being poor responders.
Other response predictors
Circulating anti-Müllerian hormone (AMH) can predict excessive and poor response to ovarian stimulation. According to NICE guidelines of in vitro fertilization, an anti-Müllerian hormone level of less than or equal to 5.4 pmol/L (0.8 ng/mL) predicts a low response to ovarian hyperstimulation, while a level greater than or equal to 25.0 pmol/L (3.6 ng/mL) predicts a high response. For predicting an excessive response, AMH has a sensitivity and specificity of 82% and 76%, respectively. Overall it may be superior to AFC and basal FSH. Tailoring the dosage of gonadotrophin administration to AMH level has been shown to reduce the incidence of excessive response and cancelled cycles.
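The NICE thresholds quoted above translate directly into a simple classification rule. The sketch below merely encodes those two cut-offs for illustration; it is not clinical software, and the function name and example values are invented:

```python
# Direct encoding of the quoted NICE AMH thresholds (pmol/L).
def predicted_response(amh_pmol_per_l: float) -> str:
    """Classify predicted ovarian response from serum AMH (pmol/L)."""
    if amh_pmol_per_l <= 5.4:
        return "low predicted response"
    if amh_pmol_per_l >= 25.0:
        return "high predicted response"
    return "intermediate predicted response"

for level in (3.0, 15.0, 30.0):
    print(level, "->", predicted_response(level))
```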
Elevated basal follicle stimulating hormone (FSH) levels imply a need for more ampoules of gonadotropins for stimulation, and have a higher cancellation rate because of poor response. However, one study found that basal FSH by itself performs worse than AMH by itself, with a live birth rate of 24% using AMH compared with 18% using FSH.
Advanced maternal age causes decreased success rates in ovarian hyperstimulation. In ovarian hyperstimulation combined with IUI, women aged 38–39 years appear to have reasonable success during the first two cycles, with an overall live birth rate of 6.1% per cycle. However, for women aged ≥40 years, the overall live birth rate is 2.0% per cycle, and there appears to be no benefit after a single cycle of COH/IUI. It is therefore recommended to consider in vitro fertilization after one failed COH/IUI cycle for women aged ≥40 years.
Body mass index
Previous hyperstimulation experiences
Length of menstrual cycles, with shorter cycles being associated with poorer response.
Previous ovarian surgery.
Hyperstimulation medications
FSH preparations
In most patients, injectable gonadotropin preparations are used, usually FSH preparations. The clinical choice of gonadotrophin should depend on availability, convenience and costs. The optimal dosage is mainly a trade-off between the pregnancy rate and risk of ovarian hyperstimulation syndrome. A meta-analysis came to the result that the optimal daily recombinant FSH stimulation dose is 150 IU/day in presumed normal responders younger than 39 years undergoing IVF. Compared with higher doses, this dose is associated with a slightly lower oocyte yield, but similar pregnancy rates and embryo cryopreservation rates. For women predicted to have a poor response, there may not be any benefit to start at a higher FSH dosage than 150 IU per day.
When used in medium dosage, a long-acting FSH preparation has the same outcome in regard to live birth rate and risk of ovarian hyperstimulation syndrome as compared to daily FSH. A long-acting FSH preparation may cause decreased live birth rates compared to daily FSH when using low dosages (60 to 120 μg of corifollitropin alfa).
Recombinant FSH (rFSH) appears to be equally effective in terms of live birth rate compared to any of the other types of gonadotropin preparations irrespective of the protocol used for ovulation suppression.
Typically, approximately 8–12 days of injections are necessary.
Alternatives and complements to FSH
Administering recombinant hCG in addition to an FSH preparation has no significant beneficial effect. (Urinary FSH, by contrast, is FSH extracted from the urine of menopausal women.)
Clomifene, in addition to gonadotropins, may make little or no difference to the live birth rate but may lower the probability of ovarian hyperstimulation syndrome. A systematic review showed that using clomifene citrate in addition to low dose gonadotropin (in a GnRH antagonist protocol as described in the following section) resulted in a trend towards better pregnancy rates and a greater number of oocytes retrieved when compared with a standard high-dose FSH regime. Such a protocol avails for using lower dosages of FSH-preparations, conferring lower costs per cycle, being particularly useful in cases where cost is a major limiting factor.
Recombinant luteinizing hormone (rLH) in addition to FSH probably increases pregnancy rates, but it is not certain if the live birth rate is also increased. Using low dose human chorionic gonadotropin (hCG) to replace FSH during the late follicular phase in women undergoing hyperstimulation as part of IVF may make little or no difference to pregnancy rates, and possibly leads to an equivalent number of oocytes retrieved, but with less expenditure of FSH. Before ovarian stimulation with antagonist protocols, pretreatment with combined oral contraceptive pills probably reduces the rate of live birth or ongoing pregnancy, while it is uncertain whether pretreatment with progesterone only has any effect on live birth or ongoing pregnancy rates. For other stimulation protocols, the evidence around pretreatment with combined oral contraceptives and progesterone only is uncertain.
Findings are conflicting, but metformin treatment as a complement in IVF cycles may reduce the risk of ovarian hyperstimulation syndrome and increase live birth rates.
Suppression of spontaneous ovulation
When used in conjunction with in vitro fertilization (IVF), controlled ovarian hyperstimulation confers a need to avoid spontaneous ovulation, since oocyte retrieval of the mature egg from the fallopian tube or uterus is much harder than from the ovarian follicle. The main regimens to achieve ovulation suppression are:
GnRH agonist administration given continuously before starting the gonadotropin hyperstimulation regimen. Physiologically, GnRH agonists are normally released in a cyclical fashion in the body to increase normal gonadotropin release, including luteinizing hormone that triggers ovulation, but continuous exogenous administration of GnRH agonists has the opposite effect of causing cessation of physiological gonadotropin production in the body.
GnRH antagonist administration, which is typically administered in the mid-follicular phase in stimulated cycles after administration of gonadotropins and prior to triggering final maturation of oocytes. The GnRH antagonists that are currently licensed for use in fertility treatment are cetrorelix and ganirelix. In GnRH antagonist cycles, hyperstimulation medication is typically started on the second or third day of a previous natural menstruation.
Agonist vs antagonist
Regarding pregnancy rate, choosing GnRH agonist protocol for a cycle is approximately as efficient as choosing GnRH antagonist protocol. Still, the two protocols differ on a number of aspects:
Practically, the timing of the hyperstimulation and the day of oocyte retrieval in a GnRH antagonist protocol needs to be timed after the spontaneous initiation of the previous menstrual cycle, while the schedule can be started at a time to meet practical needs in a GnRH agonist protocol.
The start of GnRH agonist administration can range from a long protocol of 14 to 18 days prior to gonadotropin administration, to a short protocol where it is started by the time of gonadotropin administration. Its duration can then be from 3 days to final maturation induction. A long GnRH agonist protocol has been associated with a higher pregnancy rate, but there is insufficient evidence for any higher live birth rate, compared to a short GnRH agonist protocol.
For GnRH antagonists, administration from the day after the onset of menstruation has been associated with a higher number of mature oocytes compared to starting when follicle diameter reaches 12 mm.
Regarding time per cycle, on the other hand, the cycle duration using GnRH antagonist protocol is typically substantially shorter than one using a standard long GnRH agonist protocol, potentially resulting in a higher number of cycles in any given time period, which is beneficial for women with more limited time to become pregnant.
Regarding antral follicle count, with the GnRH antagonist protocol initial follicular recruitment and selection is undertaken by endogenous endocrine factors prior to starting the exogenous hyperstimulation. This results in a smaller number of growing follicles when compared with the standard long GnRH agonist protocol. This is an advantage in women expected to be high responders, thereby decreasing the risk of ovarian hyperstimulation syndrome.
Regarding subsequent final maturation induction, usage of GnRH agonist protocol necessitates subsequent usage of human chorionic gonadotropin (HCG or hCG) for this purpose, while usage of GnRH antagonist protocol also avails for subsequently using a GnRH agonist for final oocyte maturation. Using a GnRH agonist for final oocyte maturation rather than hCG results in an elimination of the risk of ovarian hyperstimulation syndrome, while having a delivery rate after IVF of approximately 6% less.
Unlike the agonist protocol, the antagonist protocol is rapidly reversible because the GnRH receptors are merely blocked but remain functional. Administration of enough GnRH agonist to compete with the antagonist will result in release of FSH and LH, which subsequently increases the release of estrogen.
In the GnRH agonist protocol, there is a risk of estrogen deprivation symptoms, e.g. hot flushes and vaginal dryness. This is because the pituitary gonadotropic cells are desensitized, i.e. the number of receptors is reduced. In the antagonist protocol, by contrast, there are no deprivation symptoms, because administration occurs after FSH stimulation has taken place and estrogen levels are therefore elevated.
Thus, in short, a GnRH antagonist protocol may be harder to schedule timewise but has shorter cycle lengths and less (or even eliminated) risk of ovarian hyperstimulation syndrome.
The GnRH antagonist protocol has overall better results for expected poor and hyper-responders; in a study of these protocols in women undergoing their first IVF with a poor predicted response (AMH level below 5 pmol/L by DSL assay), the GnRH antagonist protocol was associated with a substantial drop in cycle cancellation (odds ratio 0.20) and required fewer days of gonadotrophin stimulation (10 days versus 14 days) compared to the GnRH agonist protocol. Using the GnRH antagonist protocol in high responders has been associated with significantly higher clinical pregnancy rates (62 versus 32%).
The pregnancy rate is probably higher with long-course GnRH protocols compared to short or ultra-short GnRH agonist protocols. There is no evidence that stopping or reducing GnRH agonist administration at the start of gonadotropin administration results in a decrease in pregnancy rate.
Monitoring
There is a concomitant monitoring, including frequently checking the estradiol level and, by means of gynecologic ultrasonography, follicular growth. A Cochrane review (updated in 2021) found no difference between cycle monitoring by ultrasound (TVUS) plus serum estradiol compared to monitoring by ultrasound only relative to pregnancy rates and the incidence of ovarian hyperstimulation syndrome (OHSS).
Tracking or supervising the maturation of follicles is performed in order to timely schedule oocyte retrieval. Two-dimensional ultrasound is conventionally used. Automated follicle tracking does not appear to improve the clinical outcome of assisted reproduction treatment.
Retrieval
When used in conjunction with IVF, ovarian hyperstimulation may be followed by final maturation of oocytes, using human chorionic gonadotropin (hCG), or a GnRH agonist if a GnRH antagonist protocol is used for ovulation suppression. A transvaginal oocyte retrieval is then performed just prior to when the follicles would rupture.
It is uncertain if coasting, which is ovarian hyperstimulation without induction of final maturation, reduces the risk of OHSS.
Risks
Perhaps the greatest risk associated with controlled ovarian hyperstimulation is ovarian hyperstimulation syndrome (OHSS). OHSS occurs when, following a "trigger" injection for final oocyte maturation, excessive VEGF production by numerous follicles acts systemically. This can result in a shift of fluid from the bloodstream to "third spaces", including the belly and the space around the lungs. This can make it difficult and painful to breathe or move, and in extremely rare cases can be fatal. Severe cases often require hospitalization, removal of fluid from the abdomen, and replacement of fluid in the blood. OHSS is most prevalent in very high responders, almost always those with more than 20 developing ovarian follicles, who are triggered with hCG. One means of greatly reducing OHSS risk is to trigger with GnRH agonist instead of hCG. This results in a surge of LH from the pituitary, the same hormone that matures the eggs in natural cycles. LH has a much shorter half-life than hCG, so that nearly all of the LH is cleared by the time of egg collection, or about 36 hours after trigger. Any developing signs of OHSS will typically vanish at that point. However, in rare cases, severe OHSS can continue to develop. Reduced success rates have been reported in fresh embryo transfers when the agonist trigger is used without hCG, so that most centers will freeze all embryos in cycles triggered only with the agonist.
Ovarian hyperstimulation does not seem to be associated with an elevated risk of cervical cancer, nor with ovarian cancer or endometrial cancer when neutralizing the confounder of infertility itself. Nor does it seem to increase the risk of breast cancer.
Alternatives
Ovulation induction is ovarian stimulation without subsequent IVF, with the aim of developing one or two ovulatory follicles (the maximum number before recommending sexual abstinence in such treatments). It is cheaper and easier to perform than controlled ovarian hyperstimulation, and is therefore the preferred initial stimulation protocol in menstrual disorders including anovulation and oligoovulation.
In vitro maturation is letting ovarian follicles mature in vitro, and with this technique ovarian hyperstimulation is not essential. Rather, oocytes can mature outside the body prior to fertilisation by IVF. Hence, gonadotropins do not need to be injected into the body, or at least a lower dose may be injected. However, there is still not enough evidence to prove the effectiveness and safety of the technique.
Notes
References
External links
Antral Follicle Counts, Resting Follicles, Ovarian Volume and Ovarian Reserve Advanced Fertility Center of Chicago.
Assisted reproductive technology | Controlled ovarian hyperstimulation | [
"Biology"
] | 3,767 | [
"Assisted reproductive technology",
"Medical technology"
] |
11,343,398 | https://en.wikipedia.org/wiki/Forward%20anonymity | Forward anonymity is a property of a cryptographic system which prevents an attacker who has recorded past encrypted communications from discovering its contents and participants in the future. This property is analogous to forward secrecy.
An example of a system which uses forward anonymity is a public key cryptography system, where the public key is well-known and used to encrypt a message, and an unknown private key is used to decrypt it. In this system, one of the keys (the public key) is always effectively compromised, since it is known to everyone, but messages and their participants remain unknown to anyone without the corresponding private key.
In contrast, an example of a system which satisfies the perfect forward secrecy property is one in which a compromise of one key by an attacker (and consequent decryption of messages encrypted with that key) does not undermine the security of previously used keys. Forward secrecy does not refer to protecting the content of the message, but rather to the protection of keys used to decrypt messages.
History
The concept was originally introduced by Whitfield Diffie, Paul van Oorschot, and Michael James Wiener to describe a property of the station-to-station (STS) protocol involving a long-term secret, either a private key or a shared password.
Public Key Cryptography
Public key cryptography is a common form of a forward anonymous system. It is used to pass encrypted messages, preventing any information about the message from being discovered if the message is intercepted by an attacker. It uses two keys, a public key and a private key. The public key is published and can be used by anyone to encrypt a plaintext message. The private key is kept secret and is used to decrypt ciphertext. Public key cryptography is known as an asymmetric algorithm because different keys are used to perform opposing functions. Public key cryptography is popular because, while it is computationally easy to create a pair of keys, it is extremely difficult to determine the private key knowing only the public key. Therefore, the public key being well known does not allow messages which are intercepted to be decrypted. This is a forward anonymous system because one compromised key (the public key) does not compromise the anonymity of the system.
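A minimal sketch of this asymmetry, using the third-party Python `cryptography` package (the key size, padding choice, and message are illustrative, not prescribed by any particular forward-anonymous protocol):

```python
# Anyone may encrypt with the well-known public key, but only the holder
# of the private key can decrypt the resulting ciphertext.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # safe to publish

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"meet at dawn", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"meet at dawn"
```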
Web of Trust
A variation of the public key cryptography system is a web of trust, where each user has both a public and a private key. Messages sent are encrypted using the intended recipient's public key, and only this recipient's private key will decrypt the message. They are also signed with the sender's private key. This adds security by making it more difficult for an attacker to impersonate a user, as the lack of a private-key signature indicates a non-trusted user.
Limitations
A forward anonymous system does not necessarily mean a wholly secure system. A successful cryptanalysis of a message or sequence of messages can still decode the information without the use of a private key or long term secret.
News
Forward anonymity, along with other privacy-protecting measures, received a burst of media attention after the leak of classified information by Edward Snowden, beginning in June 2013, which indicated that the NSA and FBI, through specially crafted backdoors in software and computer systems, were conducting mass surveillance over large parts of the populations of the United States (see Mass surveillance in the United States), Europe, Asia, and other parts of the world. The agencies justified this practice as an aid in catching predatory pedophiles. Opponents of the practice argue that leaving a back door for law enforcement increases the risk of attackers being able to decrypt information, and question its legality under the US Constitution, specifically as a form of illegal search and seizure.
References
Cryptography | Forward anonymity | [
"Mathematics",
"Engineering"
] | 778 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
11,344,467 | https://en.wikipedia.org/wiki/Quillen%20adjunction | In homotopy theory, a branch of mathematics, a Quillen adjunction between two closed model categories C and D is a special kind of adjunction between categories that induces an adjunction between the homotopy categories Ho(C) and Ho(D) via the total derived functor construction. Quillen adjunctions are named in honor of the mathematician Daniel Quillen.
Formal definition
Given two closed model categories C and D, a Quillen adjunction is a pair
(F, G): C ⇄ D
of adjoint functors with F left adjoint to G such that F preserves cofibrations and trivial cofibrations or, equivalently by the closed model axioms, such that G preserves fibrations and trivial fibrations. In such an adjunction F is called the left Quillen functor and G is called the right Quillen functor.
Properties
It is a consequence of the axioms that a left (right) Quillen functor preserves weak equivalences between cofibrant (fibrant) objects. The total derived functor theorem of Quillen says that the total left derived functor
LF: Ho(C) → Ho(D)
is a left adjoint to the total right derived functor
RG: Ho(D) → Ho(C).
This adjunction (LF, RG) is called the derived adjunction.
If (F, G) is a Quillen adjunction as above such that
F(c) → d
with c cofibrant and d fibrant is a weak equivalence in D if and only if
c → G(d)
is a weak equivalence in C then it is called a Quillen equivalence of the closed model categories C and D. In this case the derived adjunction is an adjoint equivalence of categories so that
LF(c) → d
is an isomorphism in Ho(D) if and only if
c → RG(d)
is an isomorphism in Ho(C).
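For orientation, here is the standard textbook example, stated for illustration (it is not developed in the body of this article): geometric realization and the singular simplicial set functor form a Quillen equivalence between the usual model structures on simplicial sets and topological spaces.

```latex
% The canonical Quillen equivalence: geometric realization is the left
% Quillen functor, the singular simplicial set functor Sing the right one.
\[
  |{-}| : \mathbf{sSet} \ \rightleftarrows\ \mathbf{Top} : \mathrm{Sing}
\]
% Passing to total derived functors yields an adjoint equivalence of
% homotopy categories:
\[
  \mathbf{L}|{-}| : \mathrm{Ho}(\mathbf{sSet}) \ \simeq\ \mathrm{Ho}(\mathbf{Top}) : \mathbf{R}\,\mathrm{Sing}
\]
```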
References
Philip S. Hirschhorn, Model Categories and Their Localizations, American Mathematical Society, 2009, 457 pages.
Homotopy theory
Theory of continuous functions
Adjoint functors | Quillen adjunction | [
"Mathematics"
] | 458 | [
"Theory of continuous functions",
"Topology"
] |
11,344,743 | https://en.wikipedia.org/wiki/Biomolecular%20engineering | Biomolecular engineering is the application of engineering principles and practices to the purposeful manipulation of molecules of biological origin. Biomolecular engineers integrate knowledge of biological processes with the core knowledge of chemical engineering in order to focus on molecular level solutions to issues and problems in the life sciences related to the environment, agriculture, energy, industry, food production, biotechnology and medicine.
Biomolecular engineers purposefully manipulate carbohydrates, proteins, nucleic acids and lipids within the framework of the relation between their structure (see: nucleic acid structure, carbohydrate chemistry, protein structure), function (see: protein function) and properties and in relation to applicability to such areas as environmental remediation, crop and livestock production, biofuel cells and biomolecular diagnostics. The thermodynamics and kinetics of molecular recognition in enzymes, antibodies, DNA hybridization, bio-conjugation/bio-immobilization and bioseparations are studied. Attention is also given to the rudiments of engineered biomolecules in cell signaling, cell growth kinetics, biochemical pathway engineering and bioreactor engineering.
Timeline
History
During World War II, the need for large quantities of penicillin of acceptable quality brought together chemical engineers and microbiologists to focus on penicillin production. This created the right conditions to start a chain of reactions that led to the creation of the field of biomolecular engineering. Biomolecular engineering was first defined in 1992 by the U.S. National Institutes of Health as "research at the interface of chemical engineering and biology with an emphasis at the molecular level". Although first defined as research, biomolecular engineering has since become an academic discipline and a field of engineering practice. Herceptin, a humanized monoclonal antibody (mAb) for breast cancer treatment, became the first drug designed by a biomolecular engineering approach and was approved by the U.S. FDA. Also, Biomolecular Engineering was a former name of the journal New Biotechnology.
Future
Bio-inspired technologies can help illustrate the future of biomolecular engineering. Extrapolating from Moore's law, quantum and biology-based processors have been predicted to be among the "big" technologies of the future. With the use of biomolecular engineering, the way our processors work can be manipulated to function in the same sense that a biological cell works. Biomolecular engineering has the potential to become one of the most important scientific disciplines because of its advancements in the analyses of gene expression patterns as well as the purposeful manipulation of many important biomolecules to improve functionality. Research in this field may lead to new drug discoveries, improved therapies, and advancement in new bioprocess technology. With the increasing knowledge of biomolecules, the rate of finding new high-value molecules, including but not limited to antibodies, enzymes, vaccines, and therapeutic peptides, will continue to accelerate. Biomolecular engineering will produce new designs for therapeutic drugs and high-value biomolecules for the treatment or prevention of cancers, genetic diseases, and other types of metabolic diseases. There is also the anticipation of industrial enzymes engineered to have desirable properties for process improvement, as well as the manufacturing of high-value biomolecular products at a much lower production cost. Using recombinant technology, new antibiotics that are active against resistant strains will also be produced.
Basic biomolecules
Biomolecular engineering deals with the manipulation of many key biomolecules. These include, but are not limited to, proteins, carbohydrates, nucleic acids, and lipids. These molecules are the basic building blocks of life and by controlling, creating, and manipulating their form and function there are many new avenues and advantages available to society. Since every biomolecule is different, there are a number of techniques used to manipulate each one respectively.
Proteins
Proteins are polymers that are made up of amino acid chains linked with peptide bonds. They have four distinct levels of structure: primary, secondary, tertiary, and quaternary.
Primary structure refers to the amino acid backbone sequence. Secondary structure focuses on minor conformations that develop as a result of hydrogen bonding between the amino acid chain. If most of the protein contains intermolecular hydrogen bonds, it is said to be fibrillar, and the majority of its secondary structure will be beta sheets. However, if the majority of the orientation contains intramolecular hydrogen bonds, then the protein is referred to as globular and mostly consists of alpha helices. There are also conformations that consist of a mix of alpha helices and beta sheets, as well as beta helices with alpha sheets.
The tertiary structure of proteins deal with their folding process and how the overall molecule is arranged. Finally, a quaternary structure is a group of tertiary proteins coming together and binding.
With all of these levels, proteins have a wide variety of places in which they can be manipulated and adjusted. Techniques are used to affect the amino acid sequence of the protein (site-directed mutagenesis), the folding and conformation of the protein, or the folding of a single tertiary protein within a quaternary protein matrix.
Proteins that are the main focus of manipulation are typically enzymes. These are proteins that act as catalysts for biochemical reactions. By manipulating these catalysts, the reaction rates, products, and effects can be controlled. Enzymes and proteins are so important to the biological field and to research that there are specific divisions of engineering focusing only on proteins and enzymes.
Carbohydrates
Carbohydrates are another important biomolecule. These are polymers, called polysaccharides, which are made up of chains of simple sugars connected via glycosidic bonds. These monosaccharides consist of a five- to six-carbon ring that contains carbon, hydrogen, and oxygen, typically in a 1:2:1 ratio. Common monosaccharides are glucose, fructose, and ribose. When linked together, monosaccharides can form disaccharides, oligosaccharides, and polysaccharides: the nomenclature depends on the number of monosaccharides linked together. Common disaccharides (two monosaccharides joined) are sucrose, maltose, and lactose. Important polysaccharides (links of many monosaccharides) are cellulose, starch, and chitin.
Cellulose is a polysaccharide made up of beta 1-4 linkages between repeat glucose monomers. It is the most abundant source of sugar in nature and is a major part of the paper industry.
Starch is also a polysaccharide made up of glucose monomers; however, they are connected via an alpha 1-4 linkage instead of beta. Starches, particularly amylose, are important in many industries, including the paper, cosmetic, and food industries.
Chitin is a derivative of cellulose, possessing an acetamide group instead of an –OH on one of its carbons. When the acetamide group is deacetylated, the polymer chain is called chitosan. Both of these cellulose derivatives are a major source of research for the biomedical and food industries. They have been shown to assist with blood clotting, to have antimicrobial properties, and to have dietary applications. Much engineering research focuses on the degree of deacetylation that provides the most effective result for specific applications.
Nucleic acids
Nucleic acids, namely DNA and RNA, are biopolymers consisting of chains of nucleotides; these two macromolecules are the genetic code and template that make life possible. Manipulation of these molecules and structures causes major changes in function and expression of other macromolecules. Nucleosides are glycosylamines containing a nucleobase bound to either a ribose or deoxyribose sugar via a beta-glycosidic linkage. The sequence of the bases determines the genetic code. Nucleotides are nucleosides that are phosphorylated by specific kinases via a phosphodiester bond, and they are the repeating structural units of nucleic acids. A nucleotide is made of a nitrogenous base, a pentose (ribose for RNA or deoxyribose for DNA), and one to three phosphate groups. See site-directed mutagenesis, recombinant DNA, and ELISAs.
Lipids
Lipids are biomolecules that are made up of glycerol derivatives bonded with fatty acid chains. Glycerol is a simple polyol that has the formula C3H5(OH)3. Fatty acids are long carbon chains that have a carboxylic acid group at the end. The carbon chains can be either saturated with hydrogen (every carbon bond is occupied by a hydrogen atom or by a single bond to another carbon in the chain) or unsaturated (there are double bonds between the carbon atoms in the chain). Common fatty acids include lauric acid, stearic acid, and oleic acid. The study and engineering of lipids typically focuses on the manipulation of lipid membranes and encapsulation. Cellular membranes and other biological membranes typically consist of a phospholipid bilayer membrane, or a derivative thereof. Along with the study of cellular membranes, lipids are also important molecules for energy storage. By utilizing encapsulation properties and thermodynamic characteristics, lipids become significant assets in structure and energy control when engineering molecules.
Of molecules
Recombinant DNA
Recombinant DNA are DNA biomolecules that contain genetic sequences that are not native to the organism's genome. Using recombinant techniques, it is possible to insert, delete, or alter a DNA sequence precisely without depending on the location of restriction sites. Recombinant DNA is used for a wide range of applications.
Method
The traditional method for creating recombinant DNA typically involves the use of plasmids in the host bacteria. The plasmid contains a genetic sequence corresponding to the recognition site of a restriction endonuclease, such as EcoRI. After foreign DNA fragments, which have also been cut with the same restriction endonuclease, have been inserted into the host cell, the restriction endonuclease gene is expressed by applying heat, or by introducing a biomolecule such as arabinose. Upon expression, the enzyme cleaves the plasmid at its corresponding recognition site, creating sticky ends on the plasmid. Ligases then join the sticky ends to the corresponding sticky ends of the foreign DNA fragments, creating a recombinant DNA plasmid.
Advances in genetic engineering have made the modification of genes in microbes quite efficient, allowing constructs to be made in about a week. It has also become possible to modify the organism's genome itself. Specifically, genes from the bacteriophage lambda are used in recombination. This mechanism, known as recombineering, utilizes the three proteins Exo, Beta, and Gam, which are encoded by the genes exo, bet, and gam respectively. Exo is a double-stranded DNA exonuclease with 5' to 3' activity. It cuts the double-stranded DNA, leaving 3' overhangs. Beta is a protein that binds to single-stranded DNA and assists homologous recombination by promoting annealing between the homology regions of the inserted DNA and the chromosomal DNA. Gam functions to protect the DNA insert from being destroyed by native nucleases within the cell.
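As a toy illustration of the sticky-end step described above (EcoRI's recognition site G^AATTC is the standard one, but the sequence and helper function below are invented for the example, and real digestion of course acts on both strands):

```python
# Toy illustration of restriction digestion at an EcoRI site (G^AATTC),
# which exposes complementary "sticky ends" for ligation.
SITE, CUT_OFFSET = "GAATTC", 1   # EcoRI cuts between G and AATTC

def digest(seq: str):
    """Cut seq at every EcoRI site, returning top-strand fragments."""
    fragments, start = [], 0
    i = seq.find(SITE)
    while i != -1:
        fragments.append(seq[start:i + CUT_OFFSET])
        start = i + CUT_OFFSET
        i = seq.find(SITE, i + 1)
    fragments.append(seq[start:])
    return fragments

plasmid = "ATTCCGAATTCGGCTA"
print(digest(plasmid))  # ['ATTCCG', 'AATTCGGCTA'] -- AATT overhang exposed
```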
Applications
Recombinant DNA can be engineered for a wide variety of purposes. The techniques utilized allow for specific modification of genes, making it possible to modify any biomolecule. It can be engineered for laboratory purposes, where it can be used to analyze genes in a given organism. In the pharmaceutical industry, proteins can be modified using recombination techniques. Some of these proteins include human insulin. Recombinant insulin is synthesized by inserting the human insulin gene into E. coli, which then produces insulin for human use. Other proteins, such as human growth hormone, factor VIII, and hepatitis B vaccine, are produced using similar means. Recombinant DNA can also be used for diagnostic methods involving the use of the ELISA method. This makes it possible to engineer antigens, as well as the enzymes attached, to recognize different substrates or be modified for bioimmobilization. Recombinant DNA is also responsible for many products found in the agricultural industry. Genetically modified food, such as golden rice, has been engineered for increased production of beta-carotene (provitamin A) for use in societies and cultures where dietary vitamin A is scarce. Other properties that have been engineered into crops include herbicide resistance and insect resistance.
Site-directed mutagenesis
Site-directed mutagenesis is a technique that has been around since the 1970s. The early days of research in this field yielded discoveries about the potential of certain chemicals such as bisulfite and aminopurine to change certain bases in a gene. This research continued, and other processes were developed to create certain nucleotide sequences on a gene, such as the use of restriction enzymes to fragment certain viral strands and use them as primers for bacterial plasmids. The modern method, developed by Michael Smith in 1978, uses an oligonucleotide that is complementary to a bacterial plasmid with a single base pair mismatch or a series of mismatches.
General procedure
Site directed mutagenesis is a valuable technique that allows for the replacement of a single base in an oligonucleotide or gene. The basics of this technique involve the preparation of a primer that will be a complementary strand to a wild type bacterial plasmid. This primer will have a base pair mismatch at the site where the replacement is desired. The primer must also be long enough such that the primer will anneal to the wild type plasmid. After the primer anneals, a DNA polymerase will complete the primer. When the bacterial plasmid is replicated, the mutated strand will be replicated as well. The same technique can be used to create a gene insertion or deletion. Often, an antibiotic resistant gene is inserted along with the modification of interest and the bacteria are cultured on an antibiotic medium. The bacteria that were not successfully mutated will not survive on this medium, and the mutated bacteria can easily be cultured.
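A minimal sketch of the primer-design idea just described (the template sequence, function name, and flank length are invented for illustration; real designs also account for melting temperature and avoid secondary structure):

```python
# Hypothetical sketch: design a site-directed mutagenesis primer by copying
# the wild-type template around the target position, but placing the desired
# base at its centre, giving a single base-pair mismatch.
def mutagenic_primer(template: str, position: int, new_base: str,
                     flank: int = 10) -> str:
    """Primer complementary to the template except for one substituted base.

    `flank` matching bases on each side let the primer anneal despite
    the central mismatch.
    """
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    region = list(template[position - flank : position + flank + 1])
    region[flank] = new_base                      # introduce the mismatch
    # the primer anneals antiparallel, so take the reverse complement
    return "".join(complement[b] for b in reversed(region))

template = "ATGGCTAGCTAGGATCCAGTTGACCTGA"
print(mutagenic_primer(template, position=14, new_base="A"))
```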
Applications
Site-directed mutagenesis can be helpful for many different reasons. A single base-pair replacement will change the codon, potentially replacing an amino acid in a protein. Mutagenesis can help determine the function of proteins and the roles of specific amino acids. If an amino acid near the active site is mutated, the kinetic parameters may change drastically, or the enzyme might behave differently. Another application of site-directed mutagenesis is exchanging an amino acid residue far from the active site with a lysine residue or cysteine residue. These amino acids make it easier to covalently bond the enzyme to a solid surface, which allows for enzyme re-use and the use of enzymes in continuous processes. Sometimes, amino acids with non-natural functional groups (such as an aldehyde introduced through an aldehyde tag) are added to proteins. These additions may be for ease of bioconjugation or to study the effects of amino acid changes on the form and function of the proteins.
One example of how mutagenesis is used is found in the coupling of site-directed mutagenesis and PCR to reduce interleukin-6 activity in cancerous cells. In another example, Bacillus subtilis is used in site-directed mutagenesis to secrete the enzyme subtilisin through the cell wall. Biomolecular engineers can purposely manipulate this gene, essentially making the cell a factory for producing whatever protein the inserted gene encodes.
Bio-immobilization and bio-conjugation
Bio-immobilization and bio-conjugation is the purposeful manipulation of a biomolecule's mobility by chemical or physical means to obtain a desired property. Immobilization of biomolecules allows exploiting characteristics of the molecule under controlled environments. For example, the immobilization of glucose oxidase on calcium alginate gel beads can be used in a bioreactor. The resulting product will not need purification to remove the enzyme because it will remain linked to the beads in the column. Examples of types of biomolecules that are immobilized are enzymes, organelles, and complete cells.
Biomolecules can be immobilized using a range of techniques. The most popular are physical entrapment, adsorption, and covalent modification.
Physical entrapment - the use of a polymer to contain the biomolecule in a matrix without chemical modification. Entrapment can be between lattices of a polymer, known as gel entrapment, or within micro-cavities of synthetic fibers, known as fiber entrapment. Examples include entrapment of enzymes such as glucose oxidase in a gel column for use as a bioreactor. An important characteristic of entrapment is that the biocatalyst remains structurally unchanged, but large diffusion barriers for substrates are created.
Adsorption - immobilization of biomolecules due to interactions between the biomolecule and groups on the support. These can be physical adsorption, ionic bonding, or metal-binding chelation. Such techniques can be performed under mild conditions and are relatively simple, although the linkages are highly dependent upon pH, solvent and temperature. Examples include enzyme-linked immunosorbent assays.
Covalent modification - involves chemical reactions between certain functional groups and the matrix. This method forms a stable complex between the biomolecule and matrix and is suited for mass production. Due to the formation of chemical bonds to functional groups, loss of activity can occur. Examples of chemistries used are DCC coupling, PDC coupling, and EDC/NHS coupling, all of which take advantage of reactive amines on the biomolecule's surface.
Because immobilization restricts the biomolecule, care must be given to ensure that functionality is not entirely lost. Variables to consider are pH, temperature, solvent choice, ionic strength, orientation of active sites due to conjugation. For enzymes, the conjugation will lower the kinetic rate due to a change in the 3-dimensional structure, so care must be taken to ensure functionality is not lost.
Bio-immobilization is used in technologies such as diagnostic bioassays, biosensors, ELISA, and bioseparations. Interleukin-6 (IL-6) can also be bioimmobilized on biosensors. The ability to observe changes in IL-6 levels is important in diagnosing an illness: a cancer patient will have an elevated IL-6 level, and monitoring those levels allows the physician to watch the disease progress. Direct immobilization of IL-6 on the surface of a biosensor offers a fast alternative to ELISA.
Polymerase chain reaction
The polymerase chain reaction (PCR) is a scientific technique that is used to replicate a piece of a DNA molecule by several orders of magnitude. PCR implements cycles of repeated heating and cooling, known as thermal cycling, along with the addition of DNA primers and DNA polymerases to selectively replicate the DNA fragment of interest. The technique was developed by Kary Mullis in 1983 while working for the Cetus Corporation. Mullis would go on to win the Nobel Prize in Chemistry in 1993 as a result of the impact that PCR had in many areas such as DNA cloning, DNA sequencing, and gene analysis.
Biomolecular engineering techniques involved in PCR
A number of biomolecular engineering strategies have played a very important role in the development and practice of PCR. For instance, a crucial step in ensuring the accurate replication of the desired DNA fragment is the creation of the correct DNA primer. The most common method of primer synthesis is the phosphoramidite method. This method includes the biomolecular engineering of a number of molecules to attain the desired primer sequence. The most prominent biomolecular engineering technique seen in this primer design method is the initial bioimmobilization of a nucleotide to a solid support. This step is commonly done via the formation of a covalent bond between the 3'-hydroxy group of the first nucleotide of the primer and the solid support material.
Furthermore, as the DNA primer is created, certain functional groups of nucleotides to be added to the growing primer require blocking to prevent undesired side reactions. This blocking of functional groups, as well as the subsequent de-blocking of the groups, coupling of subsequent nucleotides, and eventual cleaving from the solid support, are all methods of manipulation of biomolecules that can be attributed to biomolecular engineering. The increase in interleukin levels is directly proportional to the increased death rate in breast cancer patients, and PCR paired with Western blotting and ELISA helps define the relationship between cancer cells and IL-6.
Enzyme-linked immunosorbent assay (ELISA)
Enzyme-linked immunosorbent assay is an assay that utilizes the principle of antibody-antigen recognition to test for the presence of certain substances. The three main types of ELISA tests which are indirect ELISA, sandwich ELISA, and competitive ELISA all rely on the fact that antibodies have an affinity for only one specific antigen. Furthermore, these antigens or antibodies can be attached to enzymes which can react to create a colorimetric result indicating the presence of the antibody or antigen of interest. Enzyme linked immunosorbent assays are used most commonly as diagnostic tests to detect HIV antibodies in blood samples to test for HIV, human chorionic gonadotropin molecules in urine to indicate pregnancy, and Mycobacterium tuberculosis antibodies in blood to test patients for tuberculosis. Furthermore, ELISA is also widely used as a toxicology screen to test people's serum for the presence of illegal drugs.
Techniques involved in ELISA
Although there are three different types of solid-state enzyme-linked immunosorbent assays, all three types begin with the bioimmobilization of either an antibody or antigen to a surface. This bioimmobilization is the first instance of biomolecular engineering that can be seen in ELISA implementation. This step can be performed in a number of ways, including a covalent linkage to a surface which may be coated with protein or another substance. The bioimmobilization can also be performed via hydrophobic interactions between the molecule and the surface. Because there are many different types of ELISAs used for many different purposes, the biomolecular engineering that this step requires varies depending on the specific purpose of the ELISA.
Another biomolecular engineering technique that is used in ELISA development is the bioconjugation of an enzyme to either an antibody or antigen, depending on the type of ELISA. There is much to consider in this enzyme bioconjugation, such as avoiding interference with the active site of the enzyme as well as with the antibody binding site in the case that the antibody is conjugated with the enzyme. This bioconjugation is commonly performed by creating crosslinks between the two molecules of interest and can require a wide variety of different reagents depending on the nature of the specific molecules.
Interleukin-6 (IL-6) is a signaling protein known to be present during an immune response. A sandwich-type ELISA can quantify the presence of this cytokine within spinal fluid or bone marrow samples.
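The colorimetric result of such an assay is typically converted to a concentration via a standard curve. The sketch below is illustrative only (the absorbance and concentration values are invented, and real assays often use a four-parameter logistic fit rather than the straight line used here for brevity):

```python
# Illustrative quantification step for a sandwich ELISA: interpolate an
# unknown sample's absorbance on a linear standard curve.
import numpy as np

std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # pg/mL standards
std_abs = np.array([0.05, 0.22, 0.41, 0.78, 1.52])     # measured A450

slope, intercept = np.polyfit(std_conc, std_abs, 1)    # fit A = m*c + b

def concentration(absorbance: float) -> float:
    """Invert the standard curve to estimate analyte concentration."""
    return (absorbance - intercept) / slope

print(f"{concentration(0.60):.1f} pg/mL")
```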
Applications and fields
In industry
Biomolecular engineering is an extensive discipline with applications in many different industries and fields. As such, it is difficult to pinpoint a general perspective on the biomolecular engineering profession. The biotechnology industry, however, provides an adequate representation. The biotechnology industry, or biotech industry, encompasses all firms that use biotechnology to produce goods or services or to perform biotechnology research and development. In this way, it encompasses many of the industrial applications of the biomolecular engineering discipline. By examination of the biotech industry, it can be gathered that the principal leader of the industry is the United States, followed by France and Spain. It is also true that the focus of the biotechnology industry and the application of biomolecular engineering is primarily clinical and medical. People are willing to pay for good health, so most of the money directed towards the biotech industry stays in health-related ventures.
Scale-up
Scaling up a process involves using data from an experimental-scale operation (model or pilot plant) for the design of a large (scaled-up) unit of commercial size. Scaling up is a crucial part of commercializing a process. For example, insulin produced by genetically modified Escherichia coli bacteria was initially made at laboratory scale, but to be commercially viable it had to be scaled up to an industrial level. Achieving this scale-up required a large body of laboratory data for the design of commercial-sized units. For example, one of the steps in insulin production involves the crystallization of high-purity insulin glargine. To achieve this process on a large scale, we want to keep the power/volume (P/V) ratio of the lab-scale and large-scale crystallizers the same in order to achieve homogeneous mixing.
We also assume the lab-scale crystallizer has geometric similarity to the large-scale crystallizer. Therefore,
P/V ∝ Ni³di²
where di = crystallizer impeller diameter and Ni = impeller rotation rate. This follows because the turbulent power draw scales as P ∝ Ni³di⁵ while, under geometric similarity, the volume scales as V ∝ di³.
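A minimal sketch of applying this rule (the 2/3 exponent follows from the P/V ∝ Ni³di² relation above; the impeller sizes and speed are illustrative numbers, not data from any real plant):

```python
# Constant power-per-volume scale-up: from P/V ∝ N^3 d^2, matching P/V
# across scales gives N2 = N1 * (d1/d2)**(2/3) under geometric similarity.
def scaled_impeller_speed(n1_rpm: float, d1_m: float, d2_m: float) -> float:
    """Impeller speed for the large unit that preserves P/V."""
    return n1_rpm * (d1_m / d2_m) ** (2.0 / 3.0)

# Lab crystallizer with a 0.1 m impeller at 300 rpm, production unit 1.0 m:
print(round(scaled_impeller_speed(300.0, 0.1, 1.0), 1))  # ~64.6 rpm
```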
Related industries
Bioengineering
A broad term encompassing all engineering applied to the life sciences. This field of study utilizes the principles of biology along with engineering principles to create marketable products. Some bioengineering applications include:
Biomimetics - The study and development of synthetic systems that mimic the form and function of natural biologically produced substances and processes.
Bioprocess engineering - The study and development of process equipment and optimization that aids in the production of many products such as food and pharmaceuticals.
Industrial microbiology - The implementation of microorganisms in the production of industrial products such as food and antibiotics. Another common application of industrial microbiology is the treatment of wastewater in chemical plants via utilization of certain microorganisms.
Biochemistry
Biochemistry is the study of chemical processes in and relating to living organisms. Biochemical processes govern all living organisms and living processes, and the field of biochemistry seeks to understand and manipulate these processes.
Biochemical engineering
Biocatalysis – Chemical transformations using enzymes.
Bioseparations – Separation of biologically active molecules.
Thermodynamics and Kinetics (chemistry) – Analysis of reactions involving cell growth and biochemicals.
Bioreactor design and analysis – Design of reactors for performing biochemical transformations.
Biotechnology
Biomaterials – Design, synthesis and production of new materials to support cells and tissues.
Genetic engineering – Purposeful manipulation of the genomes of organisms to produce new phenotypic traits.
Bioelectronics, Biosensor and Biochip – Engineered devices and systems to measure, monitor and control biological processes.
Bioprocess engineering – Design and maintenance of cell-based and enzyme-based processes for the production of fine chemicals and pharmaceuticals.
Bioelectrical engineering
Bioelectrical engineering involves the electrical fields generated by living cells or organisms. Examples include the electric potential developed between muscles or nerves of the body. This discipline requires knowledge in the fields of electricity and biology in order to understand, utilize, and improve current bioprocesses and technology.
Bioelectrochemistry - Chemistry concerned with electron/proton transport throughout the cell
Bioelectronics - Field of research coupling biology and electronics
Biomedical engineering
Biomedical engineering is a sub category of bioengineering that uses many of the same principles but focuses more on the medical applications of the various engineering developments. Some applications of biomedical engineering include:
Biomaterials - Design of new materials for implantation in the human body and analysis of their effect on the body.
Cellular engineering – Design of new cells using recombinant DNA and development of procedures to allow normal cells to adhere to artificial implanted biomaterials
Tissue engineering – Design of new tissues from the basic biological building blocks to form new tissues
Artificial organs – Application of tissue engineering to whole organs
Medical imaging – Imaging of tissues using CAT scan, MRI, ultrasound, x-ray or other technologies
Medical Optics and Lasers – Application of lasers to medical diagnosis and treatment
Rehabilitation engineering – Design of devices and systems used to aid disabled people
Man-machine interfacing - Control of surgical robots and remote diagnostic and therapeutic systems using eye tracking, voice recognition and muscle and brain wave controls
Human factors and ergonomics – Design of systems to improve human performance in a wide range of applications
Chemical engineering
Chemical engineering is the processing of raw materials into chemical products. It involves preparation of raw materials to produce reactants, the chemical reaction of these reactants under controlled conditions, the separation of products, the recycle of byproducts, and the disposal of wastes. Each step involves certain basic building blocks called "unit operations," such as extraction, filtration, and distillation. These unit operations are found in all chemical processes. Biomolecular engineering is a subset of Chemical Engineering that applies these same principles to the processing of chemical substances made by living organisms.
Education and programs
Newly developed and offered undergraduate programs across the United States, often coupled to the chemical engineering program, allow students to achieve a B.S. degree. According to ABET (Accreditation Board for Engineering and Technology), biomolecular engineering curricula "must provide thorough grounding in the basic sciences including chemistry, physics, and biology, with some content at an advanced level… [and] engineering application of these basic sciences to design, analysis, and control, of chemical, physical, and/or biological processes." Common curricula consist of major engineering courses including transport, thermodynamics, separations, and kinetics, with additions of life sciences courses including biology and biochemistry, and including specialized biomolecular courses focusing on cell biology, nano- and biotechnology, biopolymers, etc.
See also
Biomimetics
Biopharmaceuticals
Bioprocess engineering
List of biomolecules
Molecular engineering
References
Further reading
Biomolecular engineering at interfaces (article)
Recent Progress in Biomolecular Engineering
Biomolecular sensors (alk. paper)
External links
AIChE International Conference on Biomolecular Engineering
Biological processes
Biotechnology | Biomolecular engineering | [
"Biology"
] | 6,436 | [
"nan",
"Biotechnology"
] |
11,345,407 | https://en.wikipedia.org/wiki/Diagnostic%20equation | In a physical (and especially geophysical) simulation context, a diagnostic equation (or diagnostic model) is an equation (or model) that links the values of a set of variables simultaneously, either because the equation (or model) is time-independent, or because the variables all refer to the values they have at the identical time. This is in contrast to a prognostic equation.
For instance, the so-called ideal gas law (PV = nRT) of classical thermodynamics relates the state variables of that gas, all estimated at the same time. It is understood that the values of any one of these variables can change in time, but the relation between these variables will remain valid at each and every particular instant, which implies that one variable cannot change its value without the value of another variable also being affected.
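The ideal gas law makes this concrete: given any three state variables at an instant, the fourth is fixed at that same instant, with no time stepping involved. A minimal sketch (illustrative values only):

```python
# The ideal gas law used diagnostically: pressure is obtained from the
# other simultaneous state variables, not integrated forward in time.
R = 8.314  # universal gas constant, J/(mol*K)

def pressure(n_mol: float, T_kelvin: float, V_m3: float) -> float:
    """Diagnose pressure (Pa) from the other state variables via PV = nRT."""
    return n_mol * R * T_kelvin / V_m3

print(pressure(n_mol=1.0, T_kelvin=300.0, V_m3=0.025))  # ~99768 Pa
```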
References
James R. Holton (2004). An Introduction to Dynamic Meteorology. Academic Press, International Geophysics Series Volume 88, Fourth Edition, 535 pp.
External links
Amsglossary.allenpress.com
Atmospheric dynamics | Diagnostic equation | [
"Chemistry"
] | 218 | [
"Atmospheric dynamics",
"Fluid dynamics"
] |