id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
30,283,480 | https://en.wikipedia.org/wiki/Structure%20of%20liquids%20and%20glasses | The structure of liquids, glasses and other non-crystalline solids is characterized by the absence of long-range order which defines crystalline materials. Liquids and amorphous solids do, however, possess a rich and varied array of short to medium range order, which originates from chemical bonding and related interactions. Metallic glasses, for example, are typically well described by the dense random packing of hard spheres, whereas covalent systems, such as silicate glasses, have sparsely packed, strongly bound, tetrahedral network structures. These very different structures result in materials with very different physical properties and applications.
The study of liquid and glass structure aims to gain insight into their behavior and physical properties, so that they can be understood, predicted and tailored for specific applications. Since the structure and resulting behavior of liquids and glasses is a complex many-body problem, historically it has been too computationally intensive to solve using quantum mechanics directly. Instead, a variety of diffraction, nuclear magnetic resonance (NMR), molecular dynamics, and Monte Carlo simulation techniques are most commonly used.
Pair distribution functions and structure factors
The pair distribution function (or pair correlation function) of a material describes the probability of finding an atom at a separation r from another atom.
A typical plot of g versus r of a liquid or glass shows a number of key features:
At short separations (small r), g(r) = 0. This indicates the effective width of the atoms, which limits their distance of approach.
A number of obvious peaks and troughs are present. These peaks indicate that the atoms pack around each other in 'shells' of nearest neighbors. Typically the 1st peak in g(r) is the strongest feature. This is due to the relatively strong chemical bonding and repulsion effects felt between neighboring atoms in the 1st shell.
The attenuation of the peaks at increasing radial distances from the center indicates the decreasing degree of order from the center particle. This illustrates vividly the absence of "long-range order" in liquids and glasses.
At long ranges, g(r) approaches a limiting value of 1, which corresponds to the macroscopic density of the material.
The static structure factor, S(q), which can be measured with diffraction techniques, is related to its corresponding g(r) by Fourier transformation. For an isotropic material the relation takes the form

S(q) = 1 + 4πρ ∫ r² [g(r) − 1] (sin(qr)/(qr)) dr

where q is the magnitude of the momentum transfer vector, and ρ is the number density of the material (a numerical sketch of this transform is given after the list of features below). Like g(r), the S(q) patterns of liquids and glasses have a number of key features:
For monoatomic systems the S(q=0) limit is related to the isothermal compressibility. Also a rise at the low-q limit indicates the presence of small angle scattering, due to large scale structure or voids in the material.
The sharpest peaks (or troughs) in S(q) typically occur in the q = 1–3 Å⁻¹ range. These normally indicate the presence of some medium range order corresponding to structure in the 2nd and higher coordination shells in g(r).
At high-q the structure is typically a decaying sinusoidal oscillation, with a 2π/r1 wavelength where r1 is the 1st shell peak position in g(r).
At very high-q the S(q) tends to 1, consistent with its definition.
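The Fourier relation between g(r) and S(q) given above can be evaluated numerically. The following Python sketch is illustrative only: the model g(r) (a single Gaussian coordination-shell peak), the number density and the grids are arbitrary assumed values, not data for any real material.

```python
import numpy as np

# Illustrative (assumed) inputs: a crude model g(r) with a single
# coordination-shell peak, and an arbitrary number density rho.
rho = 0.03                                    # atoms per cubic angstrom (assumed)
r = np.linspace(1e-3, 30.0, 6000)             # radial grid, angstroms
g = 1.0 + 1.5 * np.exp(-((r - 3.0) ** 2) / 0.2)
g[r < 2.5] = 0.0                              # excluded-volume region: g(r) = 0
dr = r[1] - r[0]

def structure_factor(q):
    """S(q) = 1 + 4*pi*rho * integral of r^2 [g(r) - 1] sin(qr)/(qr) dr."""
    integrand = r ** 2 * (g - 1.0) * np.sinc(q * r / np.pi)  # sinc(x) = sin(pi x)/(pi x)
    return 1.0 + 4.0 * np.pi * rho * np.sum(integrand) * dr

q_values = np.linspace(0.2, 10.0, 100)        # inverse angstroms
S = np.array([structure_factor(q) for q in q_values])
print(S.min(), S.max())                       # S(q) oscillates and tends to 1 at high q
```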
Diffraction
The absence of long-range order in liquids and glasses is evidenced by the absence of Bragg peaks in X-ray and neutron diffraction. For these isotropic materials, the diffraction pattern has circular symmetry, and in the radial direction, the diffraction intensity has a smooth oscillatory shape. This diffracted intensity is usually analyzed to give the static structure factor, S(q), where q is given by q = 4πsin(θ)/λ, where 2θ is the scattering angle (the angle between the incident and scattered quanta), and λ is the incident wavelength of the probe (photon or neutron). Typically diffraction measurements are performed at a single (monochromatic) λ, and diffracted intensity is measured over a range of 2θ angles, to give a wide range of q. Alternatively, a range of λ may be used, allowing the intensity measurements to be taken at a fixed or narrow range of 2θ. In X-ray diffraction, such measurements are typically called "energy dispersive", whereas in neutron diffraction they are normally called "time-of-flight", reflecting the different detection methods used. Once obtained, an S(q) pattern can be Fourier transformed to provide a corresponding radial distribution function (or pair correlation function), denoted in this article as g(r). For an isotropic material, the relation between S(q) and its corresponding g(r) is

g(r) = 1 + (1/(2π²ρr)) ∫ q [S(q) − 1] sin(qr) dq

where ρ is the number density of the material.
The g(r), which describes the probability of finding an atom at a separation r from another atom, provides a more intuitive description of the atomic structure. The g(r) pattern obtained from a diffraction measurement represents a spatial and thermal average of all the pair correlations in the material, weighted by their coherent cross-sections with the incident beam.
Atomistic simulation
By definition, g(r) is related to the average number of particles found within a given volume of shell located at a distance r from the center. The average density of atoms at a given radial distance from another atom is given by

ρ(r) = ρ g(r) = n(r) / (4πr²Δr)

where n(r) is the mean number of atoms in a shell of width Δr at distance r and ρ is the bulk number density. The g(r) of a simulation box can be calculated by histogramming the particle separations: each pair separation |rij| (the magnitude of the separation of the pair of particles i, j) is assigned to the shell [r, r + Δr) that contains it, and the resulting pair counts are normalised by the ideal-gas expectation for that shell; for partial pair distribution functions the normalisation involves Na, the number of particles of species a. Atomistic simulations can also be used in conjunction with interatomic pair potential functions in order to calculate macroscopic thermodynamic parameters such as the internal energy, Gibbs free energy, entropy and enthalpy of the system.
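As a concrete illustration of the histogramming procedure, the following Python sketch estimates g(r) for randomly placed particles in a cubic box with periodic boundary conditions. The particle count, box length and bin width are arbitrary assumed values, and an ideal-gas (random) configuration is used only because its expected result, g(r) ≈ 1, is easy to verify.

```python
import numpy as np

rng = np.random.default_rng(0)
N, box = 500, 10.0                         # particle count and box length (assumed)
pos = rng.uniform(0.0, box, size=(N, 3))   # ideal-gas (random) configuration

dr = 0.05                                  # shell (bin) width
edges = np.arange(0.0, box / 2 + dr, dr)
counts = np.zeros(len(edges) - 1)

# Histogram all pair separations under the minimum-image convention.
for i in range(N - 1):
    d = pos[i + 1:] - pos[i]
    d -= box * np.round(d / box)           # periodic boundary conditions
    counts += np.histogram(np.linalg.norm(d, axis=1), bins=edges)[0]

rho = N / box ** 3                         # bulk number density
r_mid = 0.5 * (edges[:-1] + edges[1:])
shell_vol = 4.0 * np.pi * r_mid ** 2 * dr  # approximate shell volume
g = counts / (shell_vol * rho * N / 2.0)   # normalise by the ideal-gas expectation

print(g[20:30])                            # should fluctuate around 1 for an ideal gas
```

For a real interacting system the same normalisation is used; the structure of the potential then shows up as the coordination-shell peaks described above.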
Theories of glass formation and criteria
Structural theory of glass formation, Zachariasen
While studying glass, Zachariasen began to notice recurring properties among glass-forming compositions. He postulated a set of rules such that, when the atoms of an oxide follow them, the material is likely to form a glass. The following rules make up Zachariasen's theory, which applies only to oxide glasses.
Each oxygen atom in a glass can be bonded to no more than two glass-forming cations
The coordination number of the glass-forming cation is 3 or 4
The oxygen coordination polyhedra only share corners, not edges or faces
At least 3 corners of every polyhedron must be shared, creating a continuous random network.
Together, these rules provide the right amount of structural flexibility for a material to form a glass rather than a crystal.
While these rules only apply to oxide glasses, they were the first rules to establish the idea of a continuous random network for glass structure. He was also the first to classify structural roles for various oxides, some being main glass formers (SiO2, GeO2, P2O5), and some being glass modifiers (Na2O, CaO).
Energy criterion of K.H. Sun
This criterion establishes a connection between the strength of an oxide's chemical bonds and its glass-forming tendency: when a material is quenched to form a glass, the stronger the bonds, the easier the glass formation.
If a bond strength is higher than 80 kcal per bond (high bond strength), it will be glass network forming, meaning it is likely to form a glass.
If a bond strength is less than 60 kcal per bond (low bond strength), it will be glass network modifying: because it forms only weak bonds, it disrupts glass-forming networks.
If a bond strength is between 60 and 80 kcal per bond (intermediate bond strength), it will be an intermediate. This means it will not form a glass on its own, but it can partially do so when combined with other network-forming atoms.
Dietzel's field strength criterion
Dietzel looked at direct Coulombic interactions between atoms. He categorized cations using the field strength FS = zc/(rc + ra)², where zc is the charge of the cation, and rc and ra are the radii of the cation and anion respectively. High field strength cations would have a high cation-oxygen bond energy.
If FS was greater than 1.3 (small cation with high charge), it would be a glass network former.
If FS was less than 0.4 (large cation with small charge), it would be a glass network modifier.
If FS was between 0.4 and 1.3 (medium-sized cation with medium charge) it would be an intermediate.
Together, these three criteria provide three different ways of judging whether a given oxide is likely to form a glass, and how readily it will do so.
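As an illustration of how Dietzel's criterion is applied, the short Python sketch below evaluates FS = zc/(rc + ra)² for a few cations paired with oxygen and bins the result using the thresholds above. The cation-oxygen distances (rc + ra) are rough, illustrative assumed values (tabulated radii differ between conventions), so borderline classifications should not be read as authoritative.

```python
# Dietzel field strength: FS = z_c / (r_c + r_a)**2, with distances in angstroms.
# The cation-oxygen distances below are rough illustrative values (assumed).
cations = {
    "Si4+": (4, 1.60),   # (charge z_c, approximate cation-oxygen distance r_c + r_a)
    "B3+":  (3, 1.38),
    "Al3+": (3, 1.89),
    "Ca2+": (2, 2.38),
    "Na+":  (1, 2.30),
}

def classify(fs):
    if fs > 1.3:
        return "network former"
    if fs < 0.4:
        return "network modifier"
    return "intermediate"

for name, (z, d) in cations.items():
    fs = z / d ** 2
    print(f"{name}: FS = {fs:.2f} -> {classify(fs)}")
```

With these assumed distances the output places Si and B among the network formers, Na and Ca among the modifiers, and Al as an intermediate, consistent with the roles listed elsewhere in this article.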
Other techniques
Other experimental techniques often employed to study the structure of glasses include nuclear magnetic resonance, X-ray absorption fine structure and other spectroscopy methods including Raman spectroscopy. Experimental measurements can be combined with computer simulation methods, such as reverse Monte Carlo or molecular dynamics simulations, to obtain a more complete and detailed description of the atomic structure.
Network glasses
Early theories relating to the structure of glass included the crystallite theory whereby glass is an aggregate of crystallites (extremely small crystals). However, structural determinations of vitreous SiO2 and GeO2 made by Warren and co-workers in the 1930s using x-ray diffraction showed the structure of glass to be typical of an amorphous solid.
In 1932, Zachariasen introduced the random network theory of glass in which the nature of bonding in the glass is the same as in the crystal but where the basic structural units in a glass are connected in a random manner in contrast to the periodic arrangement in a crystalline material.
Despite the lack of long range order, the structure of glass does exhibit a high degree of ordering on short length scales due to the chemical bonding constraints in local atomic polyhedra. For example, the SiO4 tetrahedra that form the fundamental structural units in silica glass represent a high degree of order, i.e. every silicon atom is coordinated by 4 oxygen atoms and the nearest neighbour Si-O bond length exhibits only a narrow distribution throughout the structure. The tetrahedra in silica also form a network of ring structures which leads to ordering on more intermediate length scales of up to approximately 10 angstroms.
The structure of glasses differs from the structure of liquids just above the glass transition temperature Tg, as revealed by XRD analysis and by high-precision measurements of third- and fifth-order non-linear dielectric susceptibilities. Glasses are generally characterised by a higher degree of connectivity compared to liquids.
Alternative views of the structure of liquids and glasses include the interstitialcy model and the model of string-like correlated motion. Molecular dynamics computer simulations indicate that these two models are closely connected.
Oxide glass components can be classified as network formers, intermediates, or network modifiers. Traditional network formers (e.g. silicon, boron, germanium) form a highly cross-linked network of chemical bonds. Intermediates (e.g. titanium, aluminium, zirconium, beryllium, magnesium, zinc) can behave either as a network former or as a network modifier, depending on the glass composition. The modifiers (calcium, lead, lithium, sodium, potassium) alter the network structure; they are usually present as ions, compensated by nearby non-bridging oxygen atoms, each bound by one covalent bond to the glass network and holding one negative charge to compensate for the nearby positive ion. Some elements can play multiple roles; e.g. lead can act either as a network former (Pb4+ replacing Si4+) or as a modifier. The presence of non-bridging oxygens lowers the relative number of strong bonds in the material and disrupts the network, decreasing the viscosity of the melt and lowering the melting temperature.
The alkali metal ions are small and mobile; their presence in a glass allows a degree of electrical conductivity. Their mobility decreases the chemical resistance of the glass, allowing leaching by water and facilitating corrosion. Alkaline earth ions, with their two positive charges and requirement for two non-bridging oxygen ions to compensate for their charge, are much less mobile themselves and also hinder the diffusion of other ions, especially the alkali ions. The most common commercial glass types contain both alkali and alkaline earth ions (usually sodium and calcium), for easier processing and satisfactory corrosion resistance. The corrosion resistance of glass can be increased by dealkalization, the removal of alkali ions from the glass surface by reaction with sulphur or fluorine compounds. The presence of alkali metal ions also has a detrimental effect on the loss tangent of the glass and on its electrical resistance; glass manufactured for electronics (sealing, vacuum tubes, lamps, etc.) has to take this into account.
Crystalline SiO2
Silica (the chemical compound SiO2) has a number of distinct crystalline forms: quartz, tridymite, cristobalite, and others (including the high pressure polymorphs stishovite and coesite). Nearly all of them involve tetrahedral SiO4 units linked together by shared vertices in different arrangements. Si-O bond lengths vary between the different crystal forms. For example, in α-quartz the bond length is 161 pm, whereas in α-tridymite it ranges from 154 to 171 pm. The Si–O–Si bond angle also varies from 140° in α-tridymite to 144° in α-quartz to 180° in β-tridymite.
Glassy SiO2
In amorphous silica (fused quartz), the SiO4 tetrahedra form a network that does not exhibit any long-range order. However, the tetrahedra themselves represent a high degree of local ordering, i.e. every silicon atom is coordinated by 4 oxygen atoms and the nearest neighbour Si-O bond length exhibits only a narrow distribution throughout the structure. If one considers the atomic network of silica as a mechanical truss, this structure is isostatic, in the sense that the number of constraints acting between the atoms equals the number of degrees of freedom of the latter. According to the rigidity theory, this allows this material to show a great forming ability. Despite the lack of ordering on extended length scales, the tetrahedra also form a network of ring-like structures which lead to ordering on intermediate length scales (up to approximately 10 angstroms or so). Under the application of high pressure (approximately 40 GPa) silica glass undergoes a continuous polyamorphic phase transition into an octahedral form, i.e. the Si atoms are surrounded by 6 oxygen atoms instead of four in the ambient pressure tetrahedral glass.
See also
Amorphous solid
Chemical structure
Glass
Liquid
Neutron diffraction
Pair distribution function
Polyamorphism
Structure factor
Surface layering
X-ray diffraction
References
Further reading
Condensed matter physics
Glass physics
Liquids | Structure of liquids and glasses | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,116 | [
"Glass engineering and science",
"Phases of matter",
"Materials science",
"Glass physics",
"Condensed matter physics",
"Matter",
"Liquids"
] |
30,284,183 | https://en.wikipedia.org/wiki/Vin%C4%8Da%20Nuclear%20Institute | The Vinča Institute of Nuclear Sciences is a nuclear physics research institution near Belgrade, Serbia. Since its founding, the institute has also conducted research in the fields of physics, chemistry and biology. The scholarly institute is part of the University of Belgrade.
History
The institute was established in 1948 as the Institute for Physics. Several different research groups started in the 1950s, and two research reactors were built.
The institute operated two research reactors, RA and RB, both supplied by the USSR. The larger of the two reactors was rated at 6.5 MW and used Soviet-supplied 80% enriched uranium fuel.
The nuclear research program ended in 1968; the reactors were switched off in 1984.
1958 reactor incident
On 15 October 1958, there was a criticality accident at one of the research reactors. Six workers received large doses of radiation. One died shortly afterwards; the other five received the first ever bone marrow transplants in Europe.
Six young researchers, all between 24 and 26 years of age, were conducting an experiment on the reactor, and the results were to be used by one student for his thesis. At some point, they smelled the strong scent of ozone. It took them 10 minutes to discover the origin of the ozone, but by that time they were already irradiated. The news was briefly broadcast by the state agency Tanjug, but reporting on the incident was then suppressed, partly because the state commission concluded that the incident had been caused by the researchers' carelessness and indiscipline. The patients were first treated in Belgrade, under the care of Dr. Vasa Janković. Thanks to the personal connections of the Institute director Pavle Savić, who was a collaborator of Irène and Frédéric Joliot-Curie, they were transferred to the Curie Institute in Paris.
In Paris, they were treated by the oncologist Georges Mathé. Five researchers were heavily irradiated: Rosanda Dangubić, Života Vranić, Radojko Maksić, Draško Grujić and Stijepo Hajduković, while Živorad Bogojević received a low dose of radiation. Mathé operated on all five of them, carrying out the first successful allogeneic bone marrow transplants between unrelated human beings. The donors were all French: Marcel Pabion, Albert Biron, Raymond Castanier, and Odette Draghi, mother of four young children. The fifth donor was a member of Mathé's team. On 11 November 1958, Maksić became the first man to receive a graft from an unrelated donor (Pabion). Of the five treated patients, only Vranić died. The others recovered and returned to Belgrade to continue working in Vinča or other institutes. Several years later, Dangubić gave birth to a healthy baby girl.
Removal of radioactive waste
In the 1980s, the waste was kept in the open. The waste was then transferred into two hangars, H1 and H2, while the ground was remediated. Until 1990, waste from the entire country of Yugoslavia was stored in Vinča. H2 also houses barrels of depleted uranium and DU bullets, remnants of the ammunition collected at four locations in southern Serbia after the 1999 NATO bombing of Serbia.
In August 2002, a joint US-Russian mission removed 100 pounds of highly enriched uranium from the Vinča Nuclear Institute, to be flown to Russia.
In 2009, it was reported that the nuclear fuel storage pool, containing large quantities of radioactive waste, was in poor condition.
In 2010, 2.5 tonnes of waste, including 13 kg of 80% highly enriched uranium, were transported from Vinča to a reprocessing facility at Mayak, Russia. This was the IAEA's largest ever technical cooperation project, and thousands of police protected the convoys.
Removal of the nuclear waste allows decommissioning of Vinča's remaining reactor to be completed.
In 2012 the Law on Radiation Protection and Nuclear Safety was adopted. It envisioned that within 10 years, that is, by 2022, the waste from Vinča must be transferred to a permanent and safe depository location. A new and modern hangar, H3, was built in the meantime, but due to legal procedures and licensing problems it is still closed. Even so, it is meant to be only a transitional location where the processed waste from H1 is to be kept before being transported to the permanent location. Still, as of 2018, large quantities of nuclear waste remain at the institute, the permanent location has not been selected, and the waste is not being treated or processed at all.
The waste in Vinča is of low to mid-level radioactivity, which means it is potentially hazardous for the health and safety of a wider area of Serbia, not just Belgrade. Additionally, only after all the radioactive waste is removed can the institute truly be transformed into a modern scientific-business park.
Press
The Vinča Nuclear Institute publishes three journals, two of which are indexed in Scopus and Web of Science: Thermal Science and Nuclear Technology & Radiation Protection.
References
External links
Vinča Institute of Nuclear Sciences
Science and technology in Yugoslavia
Nuclear industry organizations
Nuclear accidents and incidents
University of Belgrade
Nuclear research institutes | Vinča Nuclear Institute | [
"Chemistry",
"Engineering"
] | 1,072 | [
"Nuclear research institutes",
"Nuclear organizations",
"Nuclear accidents and incidents",
"Nuclear industry organizations",
"Radioactivity"
] |
30,290,624 | https://en.wikipedia.org/wiki/Galactolysis | Galactolysis refers to the catabolism of galactose.
In the liver, galactose is converted through the Leloir pathway to glucose 6-phosphate in the following reactions:
galactose → galactose 1-phosphate (galactokinase)
galactose 1-phosphate + UDP-glucose → glucose 1-phosphate + UDP-galactose (uridyl transferase)
UDP-galactose → UDP-glucose (epimerase), regenerating the UDP-glucose consumed in the previous step
glucose 1-phosphate → glucose 6-phosphate (phosphoglucomutase)
Mutations in the enzymes involved in galactolysis result in metabolic disorders.
Metabolic disorders
There are three types of galactosemia, each corresponding to a deficiency in one of the enzymes of the pathway: type I (galactose-1-phosphate uridyl transferase deficiency), type II (galactokinase deficiency) and type III (epimerase deficiency).
References
Glycolysis | Galactolysis | [
"Chemistry",
"Biology"
] | 175 | [
"Carbohydrate metabolism",
"Biotechnology stubs",
"Glycolysis",
"Biochemistry stubs",
"Biochemistry"
] |
30,291,598 | https://en.wikipedia.org/wiki/Quelet%20reaction | The Quelet reaction (also called the Blanc–Quelet reaction) is an organic coupling reaction in which a phenolic ether reacts with an aliphatic aldehyde to generate an α-chloroalkyl derivative. The Quelet reaction is an example of a larger class of reaction, electrophilic aromatic substitution. The reaction is named after its creator R. Quelet, who first reported the reaction in 1932, and is similar to the Blanc chloromethylation process.
The reaction proceeds under strong acid catalysis using HCl; zinc(II) chloride may be used as a catalyst in instances where the ether is deactivated. The reaction primarily yields para-substituted products; however it can also produce ortho-substituted compounds if the para site is blocked.
Mechanism
The mechanism of the Quelet reaction proceeds in a polar, strongly acidic medium. First, the carbonyl is protonated, forming a highly reactive protonated aldehyde that acts as the electrophile toward the nucleophilic pi-bond of the aromatic ring. Next, the aromatic ring is re-formed via an E1-type loss of a proton. Finally, the hydroxy group formed from the carbonyl oxygen is protonated a second time and leaves as a molecule of water, creating a carbocation that is attacked by the chloride ion.
Reaction conditions and limitations
The reaction requires a strong acid catalyst, and both Lewis acids and Brønsted–Lowry acids can be used in the Quelet reaction. It has been noted that aqueous formaldehyde sometimes produces a better yield than paraformaldehyde. The reaction was first reported using zinc(II) chloride; however, it has been noted to proceed in the absence of this catalyst with highly activated aromatic compounds. If an aromatic compound with a blocked para site is used, the reaction adds in the ortho position.
Not all aromatic compounds can undergo Quelet reactions. For example, highly halogenated aromatic compounds, aromatic compounds with nitro groups, and terphenyls cannot be used as reactants for Quelet reactions. Even for compounds that can undergo Quelet reactions, there sometimes exist other reactions that produce the same products in higher yields. The Quelet reaction can produce dangerous halomethyl ethers, gaseous and liquid compounds that are toxic to humans, and it is therefore sometimes passed over in favour of chloromethylations without these harmful byproducts.
Usage
The Quelet reaction is an important step in the polymerization of aromatic monomers, such as styrene, PPO and PPEK. These chloromethylated aromatic polymers are used in a diverse set of industries, such as fuel cells and membranes for drug delivery.
See also
Blanc reaction
Electrophilic aromatic substitution
Friedel-Crafts Alkylation
References
Name reactions
Addition reactions
Substitution reactions
Carbon-carbon bond forming reactions | Quelet reaction | [
"Chemistry"
] | 589 | [
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
32,894,329 | https://en.wikipedia.org/wiki/Astrobiophysics | Astrobiophysics is a field at the intersection of astrophysics and biophysics concerned with the influence of astrophysical phenomena upon life on planet Earth, or on some other planet in general. It differs from astrobiology, which is concerned with the search for extraterrestrial life. Examples of the topics covered by this branch of science include the effect of supernovae on life on Earth and the contribution of cosmic rays to irradiation at sea level.
References
External links
Kansas University astrobiology page
Astrophysics
Biophysics | Astrobiophysics | [
"Physics",
"Astronomy",
"Biology"
] | 103 | [
"Applied and interdisciplinary physics",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Biophysics",
"Astronomical sub-disciplines"
] |
32,895,192 | https://en.wikipedia.org/wiki/Raphanin | Raphanin is the main sulfur component found in radish seeds of Raphanus sativus and is also found in broccoli and red cabbage. It was first described in 1947.
Basic research
In vitro, raphanin inhibits some fungi and various bacteria including Staphylococcus, Streptococcus, Pneumococcus and Escherichia coli.
See also
Raffinose
Sulforaphane
References
Antibiotics
Isothiocyanates | Raphanin | [
"Chemistry",
"Biology"
] | 98 | [
"Biotechnology products",
"Functional groups",
"Antibiotics",
"Isothiocyanates",
"Biocides"
] |
21,308,559 | https://en.wikipedia.org/wiki/Tiltjet | A tiltjet is an aircraft propulsion configuration that was historically tested for proposed vertical take-off and landing (VTOL)-capable fighters.
The tiltjet arrangement is, in concept, broadly similar to that of the tiltrotor; whereas a tiltrotor utilises pivotable rotors, the tiltjet employs jet engines capable of moving to angle their thrust between downwards and rearwards positions. A typical arrangement has the engines mounted on the wingtips, with the entire propulsion system rotated between a downward-pointing and an axial orientation to achieve the transition from hover or vertical flight to horizontal flight. Aircraft of such a configuration are fully capable of performing VTOL operations, akin to a helicopter, as well as conducting high-speed flight. However, the configuration has been confined to experimental aircraft only, as other configurations for VTOL aircraft have been pursued instead.
History
During the 1950s, rapid advances in the field of jet propulsion, particularly in terms of increased thrust and more compact engine units, contributed to an increased belief in the technical viability of vertical takeoff/landing (VTOL) aircraft, particularly within Western Europe and the United States. During the 1950s and 1960s, multiple programmes were initiated in Britain, France, and the United States; likewise, aviation companies inside West Germany were keen not to be left out of this emerging technology. Shortly after 1957, the year in which the post-Second World War ban on West Germany operating and developing combat aircraft was lifted, the German aviation firms Dornier Flugzeugwerke, Heinkel, and Messerschmitt, having also been allowed to resume their own activities that same year, received an official request from the German Federal Government urging them to perform investigative work on the topic of VTOL aircraft and to produce concept designs.
Around the same period, the American aviation company Bell Aircraft was investigating its own designs for VTOL performance. One proposal was the Bell D-188A, which was envisioned as a supersonic tiltjet fighter; however, it never progressed beyond the mockup stage. Another tiltjet platform designed by the company was the Bell 65 "Air Test Vehicle". Intended purely for experimental purposes, this one-of-a-kind aircraft made extensive use of existing general aviation components to reduce its cost. Having performed its first hover on 16 November 1954, work with the Bell 65 was halted during the following year in favour of more advanced VTOL designs.
In West Germany, interest in developing a VTOL fighter aircraft resulted in the EWR VJ 101, a supersonic-capable VTOL tiltjet that entered flight testing during the 1960s. Its propulsion system consisted of multiple Rolls-Royce RB145 engines, a lightweight single-spool turbojet developed as a collaborative effort between the British engine specialist Rolls-Royce Limited and the German engine manufacturer MAN Turbo. Its control systems, developed by the American firm Honeywell and the German company Bodenseewerk, performed various functions across the flight regime of the VJ 101 C, including attitude control during hover and the transition from hover to horizontal aerodynamic flight. The first prototype's maiden hovering flight occurred on 10 April 1963. However, the programme was restructured from producing a successor to the German Air Force's fleet of Lockheed F-104 Starfighters into a broader research and development programme aimed at exploring and validating the VJ 101's flight control concepts.
Akin to the fortunes of the tiltjets, various other projects of the era to develop supersonic-capable VTOL fighter aircraft, including the Mirage IIIV and the Hawker Siddeley P.1154 (a supersonic parallel to what would become the Hawker Siddeley Harrier, a subsonic VTOL combat aircraft that reached operational service), ultimately met similar fates. The Harrier jump jet and, substantially later, the Lockheed Martin F-35 Lightning II have since demonstrated the potential of VTOL fighters.
See also
Thrust vectoring
Tiltrotor
Tiltwing
Tailsitter
VTOL
References
Citations
Bibliography
Hirschel, Ernst Heinrich., Horst Prem and Gero Madelung. Aeronautical Research in Germany: From Lilienthal until Today. Springer Science & Business Media, 2012. .
Aircraft configurations
VTOL aircraft | Tiltjet | [
"Engineering"
] | 860 | [
"Aircraft configurations",
"Aerospace engineering"
] |
21,309,819 | https://en.wikipedia.org/wiki/X-ray%20welding | X-ray welding is an experimental welding process that uses a high powered X-ray source to provide thermal energy required to weld materials.
The phrase "X-ray welding" also has an older, unrelated usage in quality control. In this context, an X-ray welder is a tradesman who consistently welds at such a high proficiency that he rarely introduces defects into the weld pool, and is able to recognize and correct defects in the weld pool, during the welding process. It is assumed (or trusted) by the Quality Control Department of a fabrication or manufacturing shop that the welding work performed by an X-ray welder would pass an X-ray inspection. For example, defects like porosity, concavities, cracks, cold laps, slag and tungsten inclusions, lack of fusion & penetration, etc., are rarely seen in a radiographic X-ray inspection of a weldment performed by an X-ray welder.
With the growing use of synchrotron radiation in the welding process, the older usage of the phrase "X-Ray welding" might cause confusion; but the two terms are unlikely to be used in the same work environment because synchrotron radiation (X-Ray) welding is a remotely automated and mechanized process.
Introduction
Many advances in welding technology have resulted from the introduction of new sources of the thermal energy required for localized melting. These advances include the introduction of modern techniques such as gas tungsten arc, gas-metal arc, submerged-arc, electron beam, and laser beam welding processes. However, whilst these processes were able to improve stability, reproducibility, and accuracy of welding, they share a common limitation - the energy does not fully penetrate the material to be welded, resulting in the formation of a melt pool on the surface of the material.
To achieve welds which penetrate the full depth of the material, it is necessary to either specially design and prepare the geometry of the joint or cause vaporization of the material to such a degree that a "keyhole" is formed, allowing the heat to penetrate the joint. This is not a significant disadvantage in many types of material, as good joint strengths can be achieved, however for certain material classes such as ceramics or metal ceramic composites, such processing can significantly limit joint strength. They have great potential for use in the aerospace industry, provided a joining process that maintains the strength of the material can be found.
Until recently, sources of x-rays of sufficient intensity to cause enough volumetric heating for welding were not available. However, with the advent of third-generation synchrotron radiation sources, it is possible to achieve the power required for localized melting and even vaporization in a number of materials.
X-ray beams have been shown to have potential as welding sources for classes of materials which cannot be welded conventionally.
References
Welding | X-ray welding | [
"Engineering"
] | 586 | [
"Welding",
"Mechanical engineering"
] |
2,808,506 | https://en.wikipedia.org/wiki/Warped%20Passages | Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions is the debut non-fiction book by Lisa Randall, published in 2005, about particle physics in general and additional dimensions of space (cf. Kaluza–Klein theory) in particular. The book made it into the top 50 at amazon.com, making it the world's first successful book on theoretical physics by a female author. She herself characterizes the book as being about physics and the multi-dimensional universe. The book describes, at a non-technical level, theoretical models Professor Randall developed with the physicist Raman Sundrum, in which various aspects of particle physics (e.g. supersymmetry) are explained in a higher-dimensional braneworld scenario. These models have since generated thousands of citations.
Overview
She comments that her motivation for writing this book was her "thinking that there were people who wanted a more complete and balanced vision of the current state of physics." She has noticed there is a large audience that thinks physics is about the bizarre or exotic. She observes that when people develop an understanding of the science of particle physics and the experiments that produce the science, people get excited. "The upcoming experiments at the Large Hadron Collider (LHC) at CERN near Geneva will test many ideas, including some of the warped extra-dimensional theories I talk about." Another motivation was that she "gambled that there are people who really want to understand the physics and how the many ideas connect."
Background
Randall is currently a professor at Harvard University in Cambridge, Massachusetts, focusing on particle physics and cosmology. She stays current through her research into the nature of matter's most basic elements, and the forces that govern these most basic elements. Randall's experiences, which qualify her as an authority on the subject of the book, are her original "contributions in a wide variety of physics studies, including cosmological inflation, supersymmetry, grand unified theories, and aspects of string theory". "As of last autumn, she was the most cited theoretical physicist in the world during the previous five years." In addition her most recent work involved extra dimensions.
Her background research for the book, on the theories and experiments of extra dimensions and warped geometries, was published in the peer-reviewed Science magazine in 2002.
See also
Euclidean space
Fourth dimension in art
Four-dimensionalism
Fifth dimension
Sixth dimension
Similar books on dimensions
Flatland, a book by Edwin A. Abbott about two- and three-dimensional spaces, to understand the concept of four dimensions
Sphereland, an unofficial sequel to Flatland
Hiding in the Mirror by Lawrence Krauss
References
Further reading
2005 non-fiction books
Popular physics books
Particle physics
String theory books
Cosmology books
Ecco Press books | Warped Passages | [
"Physics"
] | 567 | [
"Particle physics"
] |
898,605 | https://en.wikipedia.org/wiki/Rain%20sensor | A rain sensor or rain switch is a switching device activated by rainfall. There are two main applications for rain sensors. The first is a water conservation device connected to an automatic irrigation system that causes the system to shut down in the event of rainfall. The second is a device used to protect the interior of an automobile from rain and to support the automatic mode of windscreen wipers.
Principle of operation
The rain sensor works on the principle of total internal reflection. An infrared light shone at a 45-degree angle on a clear area of the windshield is reflected and is sensed by the sensor inside the car. When it rains, the wet glass causes the light to scatter and a lesser amount of light gets reflected back to the sensor.
An additional application in professional satellite communications antennas is to trigger a rain blower on the aperture of the antenna feed, to remove water droplets from the mylar cover that keeps pressurized and dry air inside the wave-guides.
Irrigation sensors
Rain sensors for irrigation systems are available in both wireless and hard-wired versions, most employing hygroscopic disks that swell in the presence of rain and shrink back down again as they dry out — an electrical switch is in turn depressed or released by the hygroscopic disk stack, and the rate of drying is typically adjusted by controlling the ventilation reaching the stack. However, some electrical type sensors are also marketed that use tipping bucket or conductance type probes to measure rainfall. Wireless and wired versions both use similar mechanisms to temporarily suspend watering by the irrigation controller: specifically, they are connected to the irrigation controller's sensor terminals, or are installed in series with the solenoid valve common circuit such that they prevent the opening of any valves when rain has been sensed.
Some irrigation rain sensors also contain a freeze sensor to keep the system from operating in freezing temperatures, particularly where irrigation systems are still used over the winter.
Some type of sensor is required on new lawn sprinkler systems in Florida, New Jersey, Minnesota, Connecticut and most parts of Texas.
Automotive sensors
In 1958, the Cadillac Motor Car Division of General Motors experimented with a water-sensitive switch that triggered various electric motors to close the convertible top and raise the open windows of a specially-built Eldorado Biarritz model, in case of rain. The first such device appears to have been used for that same purpose in a concept vehicle designated Le Sabre and built around 1950–51.
General Motors' automatic rain sensor for convertible tops was available as a dealer-installed option during the 1950s for vehicles such as the Chevrolet Bel Air.
For the 1996 model year, Cadillac once again equipped cars with an automatic rain sensor, this time to automatically trigger the windshield wipers and adjust their speed to conditions as necessary.
In December 2017 Tesla started rolling out an OTA update (2017.52.3) enabling their AP2.x cars to utilize the onboard cameras to passively detect rain without the use of a dedicated sensor.
Most vehicles with this feature have an auto position on the control column.
Physics of rain sensor
The most common modern rain sensors are based on the principle of total internal reflection. At all times, an infrared light is beamed at a 45-degree angle into the windshield from the interior. If the glass is dry, the critical angle for total internal reflection is around 42°. This value is obtained from the total internal reflection condition

sin(θc) = n1/n2

where n1 ≈ 1.0 is the approximate value of air's refractive index for infrared and n2 ≈ 1.5 is the approximate value of the glass refractive index, also for infrared. In that case, since the incident angle of light is 45°, all the light is reflected and the detector receives maximum intensity.
If the glass is wet, the critical angle changes to around 60° because the refractive index of water (n ≈ 1.33) is higher than that of air. In that case, because the incident angle is 45°, total internal reflection is not obtained. Part of the light beam is transmitted through the glass and the intensity measured for reflection is lower: the system detects water and the wipers turn on.
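A short numerical check of these two critical angles, using approximate assumed refractive indices for infrared light (about 1.0 for air, 1.5 for glass and 1.33 for water):

```python
import math

def critical_angle_deg(n_outside, n_glass):
    """Critical angle for total internal reflection, from sin(theta_c) = n_outside / n_glass."""
    return math.degrees(math.asin(n_outside / n_glass))

n_air, n_water, n_glass = 1.00, 1.33, 1.50   # approximate assumed indices for infrared

print(f"glass/air (dry):   {critical_angle_deg(n_air, n_glass):.1f} degrees")
print(f"glass/water (wet): {critical_angle_deg(n_water, n_glass):.1f} degrees")
# A beam incident at 45 degrees exceeds ~41.8 degrees (dry), so it is totally
# reflected; when wet, 45 degrees is below ~62.5 degrees, so part of the light
# escapes and the detected intensity drops.
```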
See also
List of sensors
Rain gauge
References
Irrigation
Sensors
Meteorological instrumentation and equipment
Windscreen wiper | Rain sensor | [
"Technology",
"Engineering"
] | 836 | [
"Sensors",
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
898,732 | https://en.wikipedia.org/wiki/Tensiometer%20%28surface%20tension%29 | In surface science, a tensiometer is a measuring instrument used to measure the surface tension (γ) of liquids or surfaces. Tensiometers are used in research and development laboratories to determine the surface tension of liquids like coatings, lacquers or adhesives. A further application field of tensiometers is the monitoring of industrial production processes like parts cleaning or electroplating.
Types
Goniometer/Tensiometer
Surface scientists commonly use an optical goniometer/tensiometer to measure the surface tension and interfacial tension of a liquid using the pendant or sessile drop methods. A drop is produced and captured using a CCD camera. The drop profile is subsequently extracted, and sophisticated software routines then fit the theoretical Young-Laplace equation to the experimental drop profile. The surface tension can then be calculated from the fitted parameters. Unlike other methods, this technique requires only a small amount of liquid making it suitable for measuring interfacial tensions of expensive liquids.
Du Noüy ring tensiometer
This type of tensiometer uses a platinum ring which is submersed in a liquid. As the ring is pulled out of the liquid, the force required is precisely measured in order to determine the surface tension of the liquid.
The method is well-established, as shown by a number of international standards on it such as ASTM D971. This method is widely used for interfacial tension measurement between two liquids, but care should be taken to keep the platinum ring undeformed.
Wilhelmy plate tensiometer
The Wilhelmy plate tensiometer requires a plate to make contact with the liquid surface. It is widely considered the simplest and most accurate method for surface tension measurement. Due to a large wetted length of the platinum plate, the surface tension reading is typically very stable compared to alternative methods. As an additional benefit, the Wilhelmy plate can also be made from paper for disposable use. For interfacial tension measurements, buoyancy of the probe needs to be taken into account which complicates the measurement.
Du Noüy-Padday method
This method uses a rod which is lowered into a test liquid. The rod is then pulled out of the liquid and the force required to pull the rod is precisely measured. The method isn't standardized but is sometimes used. The Du Noüy-Padday rod pull tensiometer will take measurements quickly and will work with liquids with a wide range of viscosities. Interfacial tensions cannot be measured.
Bubble pressure tensiometer
Due to the internal attractive forces of a liquid, air bubbles within the liquid are compressed. The resulting pressure (bubble pressure) rises as the bubble radius decreases. The bubble pressure method makes use of this bubble pressure, which is higher than that in the surrounding environment (water). A gas stream is pumped into a capillary that is immersed in the fluid. The resulting bubble at the end of the capillary tip continually grows in surface area, and its radius of curvature decreases.
The pressure rises to a maximum level. At this point the bubble has achieved its smallest radius of curvature (the capillary radius) and begins to form a hemisphere. Beyond this point the bubble quickly increases in size and soon bursts, tearing away from the capillary and allowing a new bubble to develop at the capillary tip. During this process a characteristic pressure pattern develops, which is evaluated to determine the surface tension.
Because of the easy handling and the low cleaning effort of the capillary, bubble pressure tensiometers are a common alternative for monitoring the detergent concentration in cleaning or electroplating processes.
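The quantity evaluated from the pressure pattern is the maximum bubble pressure, reached when the bubble forms a hemisphere with radius equal to the capillary radius; the Young-Laplace equation then gives γ = Δp·r/2, where Δp is the maximum pressure minus the hydrostatic pressure at the immersion depth of the tip. The Python sketch below works through one such evaluation; the pressure reading, capillary radius, immersion depth and liquid density are made-up illustrative values, not measurements.

```python
# Maximum bubble pressure method: gamma = (p_max - rho * g * h) * r_cap / 2.
# All numbers are made-up illustrative values, not real measurements.
RHO_LIQUID = 998.0        # liquid density, kg/m^3 (assumed, roughly water)
G = 9.81                  # gravitational acceleration, m/s^2

r_cap = 0.10e-3           # capillary radius: 0.10 mm (assumed)
depth = 5.0e-3            # immersion depth of the capillary tip: 5 mm (assumed)
p_max = 1507.0            # measured maximum gauge pressure, Pa (assumed)

p_hydrostatic = RHO_LIQUID * G * depth     # contribution of the immersion depth
delta_p = p_max - p_hydrostatic            # excess pressure due to surface tension
gamma = delta_p * r_cap / 2.0              # surface tension, N/m

print(f"hydrostatic contribution: {p_hydrostatic:.0f} Pa")
print(f"surface tension: {gamma * 1000:.1f} mN/m")
```

With these assumed numbers the result comes out close to the surface tension of water at room temperature, which illustrates why the hydrostatic correction matters: it is of the same order as a few percent of the reading.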
See also
Stalagmometric method
Surface tension
Young-Laplace equation
Capillary action
Piezometer
Pierre Lecomte du Noüy
Interfacial rheology
References
External links
Surface science
Laboratory equipment | Tensiometer (surface tension) | [
"Physics",
"Chemistry",
"Materials_science"
] | 788 | [
"Condensed matter physics",
"Surface science"
] |
898,781 | https://en.wikipedia.org/wiki/Permalloy | Permalloy is a nickel–iron magnetic alloy, with about 80% nickel and 20% iron content. Invented in 1914 by physicist Gustav Elmen at Bell Telephone Laboratories, it is notable for its very high magnetic permeability, which makes it useful as a magnetic core material in electrical and electronic equipment, and also in magnetic shielding to block magnetic fields. Commercial permalloy alloys typically have relative permeability of around 100,000, compared to several thousand for ordinary steel.
In addition to high permeability, its other magnetic properties are low coercivity, near zero magnetostriction, and significant anisotropic magnetoresistance. The low magnetostriction is critical for industrial applications, allowing it to be used in thin films where variable stresses would otherwise cause a ruinously large variation in magnetic properties. Permalloy's electrical resistivity can vary as much as 5% depending on the strength and the direction of an applied magnetic field. Permalloys typically have the face-centered cubic crystal structure with a lattice constant of approximately 0.355 nm in the vicinity of a nickel concentration of 80%. A disadvantage of permalloy is that it is not very ductile or workable, so applications requiring elaborate shapes, such as magnetic shields, are made of other high permeability alloys such as mu metal. Permalloy is used in transformer laminations and magnetic recording heads.
Development
Permalloy was initially developed in the early 20th century for inductive compensation of telegraph cables. When the first transatlantic submarine telegraph cables were laid in the 1860s, it was found that the long conductors caused distortion which reduced the maximum signalling speed to only 10–12 words per minute. The right conditions for transmitting signals through cables without distortion were first worked out mathematically in 1885 by Oliver Heaviside. It was proposed by Carl Emil Krarup in 1902 in Denmark that the cable could be compensated by wrapping it with iron wire, increasing the inductance and making it a loaded line to reduce distortion. However, iron did not have high enough permeability to compensate a transatlantic-length cable. After a prolonged search, permalloy was discovered in 1914 by Gustav Elmen of Bell Laboratories, who found it had higher permeability than silicon steel. Later, in 1923, he found its permeability could be greatly enhanced by heat treatment. A wrapping of permalloy tape could reportedly increase the signalling speed of a telegraph cable fourfold.
This method of cable compensation declined in the 1930s, but by World War II many other uses for Permalloy were found in the electronics industry.
Other compositions
Other compositions of permalloy are available, designated by a numerical prefix denoting the weight percentage of nickel in the alloy, for example "45 permalloy" means an alloy containing 45% nickel, and 55% iron by weight. "Molybdenum permalloy" is an alloy of 81% nickel, 17% iron and 2% molybdenum. The latter was invented at Bell Labs in 1940. At the time, when used in long distance copper telegraph lines, it allowed a tenfold increase in maximum line working speed. Supermalloy, at 79% Ni, 16% Fe, and 5% Mo, is also well known for its high performance as a "soft" magnetic material, characterized by high permeability and low coercivity.
Applications
Due to its high magnetic permeability and low coercivity, Permalloy is often used in applications that require efficient magnetic field generation and sensing. This nickel-iron magnetic alloy, typically composed of about 80% nickel and 20% iron, exhibits low energy loss, which is beneficial for improving the performance of magnetic sensors, transformers, and inductors. Permalloy is also used in the production of magnetic shielding materials, which help protect electronic equipment from external magnetic interference.
See also
Loading coil
Mu-metal
Sendust
Supermalloy (a material with even higher magnetic permeability)
Notes
References
Richard M. Bozorth, Ferromagnetism, Wiley-IEEE Press (1993 reissue), .
P. Ciureanu and S. Middelhoek, eds., Thin Film Resistive Sensors, Institute of Physics Publishing (1992), .
Nickel alloys
Magnetic alloys
Ferromagnetic materials | Permalloy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 896 | [
"Nickel alloys",
"Ferromagnetic materials",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic alloys",
"Materials",
"Alloys",
"Matter"
] |
898,792 | https://en.wikipedia.org/wiki/Euler%27s%20equations%20%28rigid%20body%20dynamics%29 | In classical mechanics, Euler's rotation equations are a vectorial quasilinear first-order ordinary differential equation describing the rotation of a rigid body, using a rotating reference frame with angular velocity ω whose axes are fixed to the body. They are named in honour of Leonhard Euler. Their general vector form is

I dω/dt + ω × (I ω) = M

where M is the applied torque and I is the inertia matrix.
The vector dω/dt is the angular acceleration. Again, note that all quantities are defined in the rotating reference frame.
In orthogonal principal axes of inertia coordinates the equations become

I1 dω1/dt + (I3 − I2) ω2 ω3 = M1
I2 dω2/dt + (I1 − I3) ω3 ω1 = M2
I3 dω3/dt + (I2 − I1) ω1 ω2 = M3

where Mk are the components of the applied torques, Ik are the principal moments of inertia and ωk are the components of the angular velocity.
In the absence of applied torques, one obtains the Euler top. When the torques are due to gravity, there are special cases when the motion of the top is integrable.
Derivation
In an inertial frame of reference (subscripted "in"), Euler's second law states that the time derivative of the angular momentum L equals the applied torque:

(dL/dt)in = Min
For point particles such that the internal forces are central forces, this may be derived using Newton's second law.
For a rigid body, one has the relation between angular momentum and the moment of inertia Iin given as

Lin = Iin ω
In the inertial frame, the differential equation is not always helpful in solving for the motion of a general rotating rigid body, as both Iin and ω can change during the motion. One may instead change to a coordinate frame fixed in the rotating body, in which the moment of inertia tensor is constant. Using a reference frame such as that at the center of mass, the frame's position drops out of the equations.
In any rotating reference frame, the time derivative must be replaced so that the equation becomes

(dL/dt)rot + ω × L = M
and so the cross product arises, see time derivative in rotating reference frame.
The vector components of the torque in the inertial and the rotating frames are related by

Min = Q Mrot

where Q is the rotation tensor (not rotation matrix), an orthogonal tensor related to the angular velocity vector by

ω × u = (dQ/dt) Q^T u

for any vector u.
Now L = Iω is substituted and the time derivatives are taken in the rotating frame, while realizing that the particle positions and the inertia tensor do not depend on time. This leads to the general vector form of Euler's equations, which are valid in such a frame:

I dω/dt + ω × (I ω) = M
The equations are also derived from Newton's laws in the discussion of the resultant torque.
More generally, by the tensor transform rules, any rank-2 tensor has a time-derivative such that for any vector , one has . This yields the Euler's equations by plugging in
Principal axes form
When choosing a frame so that its axes are aligned with the principal axes of the inertia tensor, its component matrix is diagonal, which further simplifies calculations. As described in the moment of inertia article, the angular momentum L can then be written

L = I1 ω1 e1 + I2 ω2 e2 + I3 ω3 e3

where ek are the unit vectors along the principal axes.
It is also possible, in some frames not tied to the body, to obtain such simple (diagonal-tensor) equations for the rate of change of the angular momentum. Then ω must be the angular velocity of the rotation of that frame's axes instead of the rotation of the body. It is, however, still required that the chosen axes are principal axes of inertia. The resulting form of the Euler rotation equations is useful for rotation-symmetric objects that allow some of the principal axes of rotation to be chosen freely.
Special case solutions
Torque-free precessions
Torque-free precessions are non-trivial solutions for the situation where the torque on the right hand side is zero. When I is not constant in the external reference frame (i.e. the body is moving and its inertia tensor is not constantly diagonal) then I cannot be pulled through the derivative operator acting on L. In this case I(t) and ω(t) do change together in such a way that the derivative of their product is still zero. This motion can be visualized by Poinsot's construction.
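The torque-free case is easy to explore numerically. The following Python sketch integrates the principal-axes Euler equations with a fixed-step Runge-Kutta scheme for an asymmetric body; the principal moments of inertia, initial angular velocity, time step and run length are arbitrary assumed values. Starting with the spin almost exactly about the intermediate axis reproduces the periodic tumbling associated with the Dzhanibekov effect listed under "See also".

```python
import numpy as np

# Arbitrary assumed values for an asymmetric rigid body (kg m^2) and its
# initial angular velocity (rad/s): almost a pure spin about the intermediate axis.
I1, I2, I3 = 1.0, 2.0, 3.0
omega = np.array([0.01, 5.0, 0.01])

def omega_dot(w):
    """Torque-free Euler equations in principal axes (M1 = M2 = M3 = 0)."""
    w1, w2, w3 = w
    return np.array([
        (I2 - I3) * w2 * w3 / I1,
        (I3 - I1) * w3 * w1 / I2,
        (I1 - I2) * w1 * w2 / I3,
    ])

dt, steps = 1e-3, 20_000                 # 20 s of simulated motion (assumed values)
history = np.empty((steps, 3))
for n in range(steps):                   # classical 4th-order Runge-Kutta step
    k1 = omega_dot(omega)
    k2 = omega_dot(omega + 0.5 * dt * k1)
    k3 = omega_dot(omega + 0.5 * dt * k2)
    k4 = omega_dot(omega + dt * k3)
    omega = omega + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    history[n] = omega

# |L|^2 is conserved for torque-free motion; its spread is a check on the integration.
L_sq = (I1 * history[:, 0]) ** 2 + (I2 * history[:, 1]) ** 2 + (I3 * history[:, 2]) ** 2
print("spread of |L|^2:", L_sq.max() - L_sq.min())
print("omega_2 range:", history[:, 1].min(), history[:, 1].max())  # sign flips: tumbling
```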
Generalized Euler equations
The Euler equations can be generalized to any simple Lie algebra. The original Euler equations come from fixing the Lie algebra to be , with generators satisfying the relation . Then if (where is a time coordinate, not to be confused with basis vectors ) is an -valued function of time, and (with respect to the Lie algebra basis), then the (untorqued) original Euler equations can be written
To define in a basis-independent way, it must be a self-adjoint map on the Lie algebra with respect to the invariant bilinear form on . This expression generalizes readily to an arbitrary simple Lie algebra, say in the standard classification of simple Lie algebras.
This can also be viewed as a Lax pair formulation of the generalized Euler equations, suggesting their integrability.
See also
Euler angles
Dzhanibekov effect
Moment of inertia
Poinsot's ellipsoid
Rigid rotor
References
C. A. Truesdell, III (1991) A First Course in Rational Continuum Mechanics. Vol. 1: General Concepts, 2nd ed., Academic Press. . Sects. I.8-10.
C. A. Truesdell, III and R. A. Toupin (1960) The Classical Field Theories, in S. Flügge (ed.) Encyclopedia of Physics. Vol. III/1: Principles of Classical Mechanics and Field Theory, Springer-Verlag. Sects. 166–168, 196–197, and 294.
Landau L.D. and Lifshitz E.M. (1976) Mechanics, 3rd. ed., Pergamon Press. (hardcover) and (softcover).
Goldstein H. (1980) Classical Mechanics, 2nd ed., Addison-Wesley.
Symon KR. (1971) Mechanics, 3rd. ed., Addison-Wesley.
Rigid bodies
Rigid bodies mechanics
Rotation in three dimensions
Equations | Euler's equations (rigid body dynamics) | [
"Mathematics"
] | 1,243 | [
"Mathematical objects",
"Equations"
] |
899,159 | https://en.wikipedia.org/wiki/Statically%20indeterminate | In statics and structural mechanics, a structure is statically indeterminate when the equilibrium equations (the force and moment equilibrium conditions) are insufficient for determining the internal forces and reactions on that structure.
Mathematics
Based on Newton's laws of motion, the equilibrium equations available for a two-dimensional body are:
the vectorial sum of the forces acting on the body equals zero. This translates to:
the sum of the horizontal components of the forces equals zero;
the sum of the vertical components of forces equals zero;
the sum of the moments (about an arbitrary point) of all forces equals zero.
In the beam construction on the right, there are four unknown support reactions, and the equilibrium equations available are the three conditions listed above.
Since there are four unknown forces (or variables) but only three equilibrium equations, this system of simultaneous equations does not have a unique solution. The structure is therefore classified as statically indeterminate.
To solve statically indeterminate systems (determine the various moment and force reactions within it), one considers the material properties and compatibility in deformations.
Statically determinate
If the support at is removed, the reaction cannot occur, and the system becomes statically determinate (or isostatic). Note that the system is completely constrained here.
The system becomes an exact constraint kinematic coupling.
The solution to the problem is:
If, in addition, the support at is changed to a roller support, the number of reactions is reduced to three (without ), but the beam can now be moved horizontally; the system becomes unstable or partly constrained—a mechanism rather than a structure. In order to distinguish between this and the situation when a system under equilibrium is perturbed and becomes unstable, it is preferable to use the phrase partly constrained here. In this case, the two unknowns and can be determined by resolving the vertical force equation and the moment equation simultaneously. The solution yields the same results as previously obtained. However, it is not possible to satisfy the horizontal force equation unless .
Statical determinacy
Descriptively, a statically determinate structure can be defined as a structure where, if it is possible to find internal actions in equilibrium with external loads, those internal actions are unique. The structure has no possible states of self-stress, i.e. internal forces in equilibrium with zero external loads are not possible. Statical indeterminacy, however, is the existence of a non-trivial (non-zero) solution to the homogeneous system of equilibrium equations. It indicates the possibility of self-stress (stress in the absence of an external load) that may be induced by mechanical or thermal action.
Mathematically, this requires a stiffness matrix to have full rank.
A statically indeterminate structure can only be analyzed by including further information, such as material properties and deflections. Numerically, this can be achieved by using matrix structural analysis, the finite element method (FEM) or the moment distribution method (Hardy Cross).
Practically, a structure is called 'statically overdetermined' when it comprises more mechanical constraints like walls, columns or bolts than absolutely necessary for stability.
See also
Christian Otto Mohr
Flexibility method
Kinematic determinacy
Overconstrained mechanism
Structural engineering
References
External links
Beam calculation online (Statically indeterminate)
Statics
Structural analysis | Statically indeterminate | [
"Physics",
"Engineering"
] | 674 | [
"Structural engineering",
"Statics",
"Structural analysis",
"Classical mechanics",
"Mechanical engineering",
"Aerospace engineering"
] |
899,223 | https://en.wikipedia.org/wiki/Prostacyclin | Prostacyclin (also called prostaglandin I2 or PGI2) is a prostaglandin member of the eicosanoid family of lipid molecules. It inhibits platelet activation and is also an effective vasodilator.
When used as a drug, it is also known as epoprostenol. The terms are sometimes used interchangeably.
Function
Prostacyclin chiefly prevents formation of the platelet plug involved in primary hemostasis (a part of blood clot formation). It does this by inhibiting platelet activation. It is also an effective vasodilator. Prostacyclin's interactions contrast with those of thromboxane (TXA2), another eicosanoid. Both molecules are derived from arachidonic acid but have opposite effects on platelet aggregation, which strongly suggests a mechanism of cardiovascular homeostasis between these two hormones in relation to vascular damage.
Medical uses
It is used to treat pulmonary arterial hypertension (PAH), pulmonary fibrosis, as well as atherosclerosis. Prostacyclins are given to people with class III or class IV PAH.
Degradation
Prostacyclin, which has a half-life of 42 seconds, is broken down into 6-keto-PGF1α, which is a much weaker vasodilator.
A way to stabilize prostacyclin in its active form, especially during drug delivery, is to prepare prostacyclin in alkaline buffer. Even at physiological pH, prostacyclin can rapidly form the inactive hydration product 6-keto-prostaglandin F1α.
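A back-of-the-envelope sketch of what the 42-second half-life quoted above implies, assuming simple first-order (exponential) decay; the time points are illustrative.

```python
# Fraction of active prostacyclin remaining after time t, given a 42 s half-life
# and assuming first-order decay kinetics.
half_life_s = 42.0
for t in (42, 120, 300):                        # seconds
    remaining = 0.5 ** (t / half_life_s)
    print(f"after {t:>3d} s: {remaining:.2%} remaining")
# After 5 minutes well under 1% of the original activity is left, which is why
# continuous infusion or more stable analogues are needed in practice.
```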
Mechanism
Prostacyclin effects, their mechanisms, and the resulting cellular responses:
Classical functions
Vessel tone: ↑cAMP, ↓ET-1, ↓Ca2+, ↑K+ → ↓SMC proliferation, ↑vasodilation
Antiproliferative: ↑cAMP, ↑PPARγ → ↓fibroblast growth, ↑apoptosis
Antithrombotic: ↓thromboxane-A2, ↓PDGF → ↓platelet aggregation, ↓platelet adherence to the vessel wall
Novel functions
Antiinflammatory: ↓IL-1, ↓IL-6, ↑IL-10 → ↓proinflammatory cytokines, ↑antiinflammatory cytokines
Antimitogenic: ↓VEGF, ↓TGF-β → ↓angiogenesis, ↑ECM remodeling
As mentioned above, prostacyclin (PGI2) is released by healthy endothelial cells and performs its function through a paracrine signaling cascade that involves G protein-coupled receptors on nearby platelets and endothelial cells. The platelet Gs protein-coupled receptor (prostacyclin receptor) is activated when it binds to PGI2. This activation, in turn, signals adenylyl cyclase to produce cAMP. cAMP goes on to inhibit any undue platelet activation (in order to promote circulation) and also counteracts any increase in cytosolic calcium levels that would result from thromboxane A2 (TXA2) binding (leading to platelet activation and subsequent coagulation). PGI2 also binds to endothelial prostacyclin receptors, and in the same manner, raises cAMP levels in the cytosol. This cAMP then goes on to activate protein kinase A (PKA). PKA then continues the cascade by promoting the phosphorylation of the myosin light chain kinase, which inhibits it and leads to smooth muscle relaxation and vasodilation. It can be noted that PGI2 and TXA2 work as physiological antagonists.
Members
Pharmacology
Synthetic prostacyclin analogues (iloprost, cisaprost) are used intravenously, subcutaneously or by inhalation:
as a vasodilator in severe Raynaud's phenomenon or ischemia of a limb;
in pulmonary hypertension;
in primary pulmonary hypertension (PPH).
The production of prostacyclin is inhibited by the action of NSAIDs on the cyclooxygenase enzymes COX1 and COX2. These convert arachidonic acid to prostaglandin H2 (PGH2), the immediate precursor of prostacyclin. Since thromboxane (an eicosanoid stimulator of platelet aggregation) is also downstream of COX enzymes, one might expect the effects of NSAIDs on the two pathways to balance out. However, prostacyclin concentrations recover much faster than thromboxane levels, so aspirin administration initially has little to no net effect but eventually prevents platelet aggregation (the effect of prostacyclin predominates as it is regenerated). This is explained by the cells that produce each molecule. Since PGI2 is primarily produced by nucleated endothelial cells, the COX inhibition by NSAIDs can be overcome with time by increased COX gene expression and the production of more COX enzymes to catalyze the formation of PGI2. In contrast, TXA2 is released primarily by anucleate platelets, which cannot respond to NSAID COX inhibition with additional transcription of the COX gene because they lack the necessary DNA. This allows NSAIDs to produce a PGI2 dominance that promotes circulation and retards thrombosis.
In patients with pulmonary hypertension, inhaled epoprostenol reduces pulmonary pressure, and improves right ventricular stroke volume in patients undergoing cardiac surgery. A dose of 60 μg is hemodynamically safe, and its effect is completely reversed after 25 minutes. No evidence of platelet dysfunction or an increase in surgical bleeding after administration of inhaled epoprostenol has been found. The drug has been known to cause flushing, headaches and hypotension.
Synthesis
Biosynthesis
Prostacyclin is produced in endothelial cells, which line the walls of arteries and veins, from prostaglandin H2 (PGH2) by the action of the enzyme prostacyclin synthase. Although prostacyclin is considered an independent mediator, it is called PGI2 (prostaglandin I2) in eicosanoid nomenclature, and is a member of the prostanoids (together with the prostaglandins and thromboxane). PGI2, derived primarily from COX-2 in humans, is the major arachidonate metabolite released from the vascular endothelium. This point is controversial: some assign COX-1 as the major prostacyclin-producing cyclooxygenase in the endothelial cells of blood vessels.
The series-3 prostaglandin PGH3 also follows the prostacyclin synthase pathway, yielding another prostacyclin, PGI3. The unqualified term 'prostacyclin' usually refers to PGI2. PGI2 is derived from the ω-6 arachidonic acid. PGI3 is derived from the ω-3 EPA.
Artificial synthesis
Prostacyclin can be synthesized from the methyl ester of prostaglandin F2α. After its synthesis, the drug is reconstituted in saline and glycerin.
Because prostacyclin is so chemically labile, quantitation of its inactive metabolites, rather than the active compound, is used to assess its rate of synthesis.
History
During the 1960s, a UK research team, headed by Professor John Vane, began to explore the role of prostaglandins in anaphylaxis and respiratory diseases. Working with a team from the Royal College of Surgeons, Vane discovered that aspirin and other oral anti-inflammatory drugs work by inhibiting the synthesis of prostaglandins. This critical finding opened the door to a broader understanding of the role of prostaglandins in the body.
A team at The Wellcome Foundation led by Salvador Moncada had identified a lipid mediator they called "PG-X," which inhibits platelet aggregation. PG-X, later known as prostacyclin, is 30 times more potent than any other then-known anti-aggregatory agent. The team made this discovery while searching for the enzyme that generates another unstable prostanoid, thromboxane A2.
In 1976, Vane and fellow researchers Salvador Moncada, Ryszard Gryglewski, and Stuart Bunting published the first paper on prostacyclin in Nature. The collaboration produced a synthesized molecule, which was named epoprostenol. But, as with native prostacyclin, the epoprostenol molecule is unstable in solution and prone to rapid degradation. This presented a challenge for both in vitro experiments and clinical applications.
To overcome this challenge, the research team that discovered prostacyclin continued its work, synthesizing nearly 1,000 analogues.
References
External links
Prostaglandins
Gilead Sciences
Secondary alcohols
Carboxylic acids
Drugs developed by GSK plc | Prostacyclin | [
"Chemistry"
] | 1,967 | [
"Carboxylic acids",
"Functional groups"
] |
899,642 | https://en.wikipedia.org/wiki/Thermal%20infrared%20spectroscopy | Thermal infrared spectroscopy (TIR spectroscopy) is the subset of infrared spectroscopy that deals with radiation emitted in the infrared part of the electromagnetic spectrum. The emitted infrared radiation, though similar to blackbody radiation, is different in that the radiation is banded at characteristic vibrations in the material. The method measures the thermal infrared radiation emitted (as opposed to being transmitted or reflected) from a volume or surface. This method is commonly used to identify the composition of a surface by analyzing its spectrum and comparing it to previously measured materials. It is particularly suited to airborne and spaceborne applications.
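A rough sketch of the blackbody baseline mentioned above, assuming Planck's law and the Wien displacement law (with SciPy's physical constants): it computes the spectral radiance of a 300 K surface and its peak wavelength. Real surfaces depart from this smooth curve in the characteristic vibration bands that TIR spectroscopy identifies.

```python
import numpy as np
from scipy.constants import h, c, k, Wien

def planck_radiance(wavelength_m, T):
    """Blackbody spectral radiance B_lambda in W / (m^2 sr m)."""
    return (2 * h * c**2 / wavelength_m**5) / np.expm1(h * c / (wavelength_m * k * T))

T = 300.0                                    # K, a typical terrestrial surface
wavelengths = np.linspace(3e-6, 30e-6, 500)  # 3-30 micrometres (thermal infrared)
B = planck_radiance(wavelengths, T)
print("peak (numerical):", wavelengths[np.argmax(B)] * 1e6, "micrometres")
print("peak (Wien law): ", Wien / T * 1e6, "micrometres")   # about 9.7 micrometres
```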
Thermal infrared spectrometers
Airborne
HyTES: the Hyperspectral Thermal Emission Spectrometer operated by JPL and flown on a Twin Otter or ER2 aircraft
TIMS: the Thermal Infrared Multispectral Scanner, a multispectral radiometer flown on C-130, ER-2, and the Stennis Learjet aircraft.
SEBASS: a hyperspectral sensor developed and operated by The Aerospace Corporation.
Hyper-Cam: a hyperspectral thermal infrared camera developed by Telops.
OWL: a hyperspectral thermal infrared camera developed by Specim.
Spaceborne
ISM: An imaging spectrometer on board the Soviet Phobos 2 spacecraft.
ASTER: a multispectral radiometer on board the Earth-observing Terra satellite.
TIS: a spectrometer on board the Mangalyaan spacecraft.
TES: A hyperspectral spectrometer on board the Mars Global Surveyor spacecraft.
Mini-TES: a small version of the TES instrument carried on both Mars Exploration Rovers.
THEMIS: a multispectral thermal infrared imager on board the 2001 Mars Odyssey spacecraft.
OTES: a thermal emission spectrometer aboard the OSIRIS-REx spacecraft.
References
External links
Arizona State University TES Homepage
Description of the TIMS instrument
Infrared spectroscopy
Remote sensing | Thermal infrared spectroscopy | [
"Physics",
"Chemistry",
"Astronomy"
] | 381 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Astronomy stubs",
"Infrared spectroscopy",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
900,160 | https://en.wikipedia.org/wiki/Internal%20wave | Internal waves are gravity waves that oscillate within a fluid medium, rather than on its surface. To exist, the fluid must be stratified: the density must change (continuously or discontinuously) with depth/height due to changes, for example, in temperature and/or salinity. If the density changes over a small vertical distance (as in the case of the thermocline in lakes and oceans or an atmospheric inversion), the waves propagate horizontally like surface waves, but do so at slower speeds as determined by the density difference of the fluid below and above the interface. If the density changes continuously, the waves can propagate vertically as well as horizontally through the fluid.
Internal waves, also called internal gravity waves, go by many other names depending upon the fluid stratification, generation mechanism, amplitude, and influence of external forces. If propagating horizontally along an interface where the density rapidly decreases with height, they are specifically called interfacial (internal) waves. If the interfacial waves are large amplitude they are called internal solitary waves or internal solitons. If moving vertically through the atmosphere where substantial changes in air density influences their dynamics, they are called anelastic (internal) waves. If generated by flow over topography, they are called Lee waves or mountain waves. If the mountain waves break aloft, they can result in strong warm winds at the ground known as Chinook winds (in North America) or Foehn winds (in Europe). If generated in the ocean by tidal flow over submarine ridges or the continental shelf, they are called internal tides. If they evolve slowly compared to the Earth's rotational frequency so that their dynamics are influenced by the Coriolis effect, they are called inertia gravity waves or, simply, inertial waves. Internal waves are usually distinguished from Rossby waves, which are influenced by the change of Coriolis frequency with latitude.
Visualization of internal waves
An internal wave can readily be observed in the kitchen by slowly tilting back and forth a bottle of salad dressing - the waves exist at the interface between oil and vinegar.
Atmospheric internal waves can be visualized by wave clouds: at the wave crests air rises and cools in the relatively lower pressure, which can result in water vapor condensation if the relative humidity is close to 100%. Clouds that reveal internal waves launched by flow over hills are called lenticular clouds because of their lens-like appearance. Less dramatically, a train of internal waves can be visualized by rippled cloud patterns described as herringbone sky or mackerel sky. The outflow of cold air from a thunderstorm can launch large amplitude internal solitary waves at an atmospheric inversion. In northern Australia, these result in Morning Glory clouds, used by some daredevils to glide along like a surfer riding an ocean wave. Satellites over Australia and elsewhere reveal these waves can span many hundreds of kilometers.
Undulations of the oceanic thermocline can be visualized by satellite because the waves increase the surface roughness where the horizontal flow converges, and this increases the scattering of sunlight (as in the image at the top of this page showing waves generated by tidal flow through the Strait of Gibraltar).
Buoyancy, reduced gravity and buoyancy frequency
According to Archimedes' principle, the weight of an immersed object is reduced by the weight of fluid it displaces. This holds for a fluid parcel of density ρ surrounded by an ambient fluid of density ρ0. Its weight per unit volume is g(ρ − ρ0), in which g is the acceleration of gravity. Dividing by a characteristic density, ρ00, gives the definition of the reduced gravity:
g' = g (ρ − ρ0)/ρ00.
If ρ > ρ0, g' is positive though generally much smaller than g. Because water is much more dense than air, the displacement of water by air from a surface gravity wave feels nearly the full force of gravity (g' ≈ g). The displacement of the thermocline of a lake, which separates warmer surface from cooler deep water, feels the buoyancy force expressed through the reduced gravity. For example, the density difference between ice water and room temperature water is 0.002 times the characteristic density of water, so the reduced gravity is 0.2% that of gravity. It is for this reason that internal waves move in slow motion relative to surface waves.
Whereas the reduced gravity is the key variable describing buoyancy for interfacial internal waves, a different quantity is used to describe buoyancy in a continuously stratified fluid whose density varies with height as ρ0(z). Suppose a water column is in hydrostatic equilibrium and a small parcel of fluid with density ρ0(z0) is displaced vertically by a small distance Δz. The buoyant restoring force results in a vertical acceleration, given by
d²(Δz)/dt² = (g/ρ00)(dρ0/dz) Δz ≡ −N² Δz.
This is the spring equation, whose solution predicts oscillatory vertical displacement about z0 in time with frequency given by the buoyancy frequency:
N = [−(g/ρ00)(dρ0/dz)]^(1/2).
The above argument can be generalized to predict the frequency, ω, of a fluid parcel that oscillates along a line at an angle Θ to the vertical:
ω = N cos Θ.
This is one way to write the dispersion relation for internal waves whose lines of constant phase lie at an angle to the vertical. In particular, this shows that the buoyancy frequency is an upper limit of allowed internal wave frequencies.
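A minimal numerical sketch of the quantities defined above, with illustrative values (a 2 kg/m³ thermocline jump, a 1 kg/m³ density increase over 100 m of depth, and phase lines tilted 60° from the vertical):

```python
import numpy as np

g = 9.81                      # m/s^2
rho00 = 1000.0                # characteristic density, kg/m^3

# Reduced gravity across a thermocline with a 2 kg/m^3 density jump
g_prime = g * 2.0 / rho00
print(f"g' = {g_prime:.3f} m/s^2  ({g_prime / g:.1%} of g)")

# Buoyancy frequency for a density increase of 1 kg/m^3 over 100 m
drho_dz = -1.0 / 100.0        # z measured upward, so density decreases with z
N = np.sqrt(-g / rho00 * drho_dz)
print(f"N = {N:.4f} rad/s, buoyancy period = {2 * np.pi / N / 60:.1f} min")

# Internal-wave frequency for phase lines tilted 60 degrees from the vertical
theta = np.radians(60.0)
print(f"omega = N cos(theta) = {N * np.cos(theta):.4f} rad/s")
```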
Mathematical modeling of internal waves
The theory for internal waves differs in the description of interfacial waves and vertically propagating internal waves. These are treated separately below.
Interfacial waves
In the simplest case, one considers a two-layer fluid in which a slab of fluid with uniform density overlies a slab of fluid with uniform density . Arbitrarily the interface between the two layers is taken to be situated at The fluid in the upper and lower layers are assumed to be irrotational. So the velocity in each layer is given by the gradient of a velocity potential, and the potential itself satisfies Laplace's equation:
Assuming the domain is unbounded and two-dimensional (in the plane), and assuming the wave is periodic in with wavenumber the equations in each layer reduces to a second-order ordinary differential equation in . Insisting on bounded solutions the velocity potential in each layer is
and
with the amplitude of the wave and its angular frequency. In deriving this structure, matching conditions have been used at the interface requiring continuity of mass and pressure. These conditions also give the dispersion relation:
in which the reduced gravity is based on the density difference between the upper and lower layers:
with the Earth's gravity. Note that the dispersion relation is the same as that for deep water surface waves by setting
Internal waves in uniformly stratified fluid
The structure and dispersion relation of internal waves in a uniformly stratified fluid is found through the solution of the linearized conservation of mass, momentum, and internal energy equations assuming the fluid is incompressible and the background density varies by a small amount (the Boussinesq approximation). Assuming the waves are two dimensional in the x-z plane, the respective equations are
in which ρ is the perturbation density, p is the pressure, and (u, w) is the velocity. The ambient density changes linearly with height as given by ρ0(z), and ρ00, a constant, is the characteristic ambient density.
Solving the four equations in four unknowns for a plane wave gives the dispersion relation
ω = N cos Θ,
in which N is the buoyancy frequency and Θ is the angle of the wavenumber vector to the horizontal, which is also the angle formed by lines of constant phase to the vertical.
The phase velocity and group velocity found from the dispersion relation predict the unusual property that they are perpendicular and that the vertical components of the phase and group velocities have opposite sign: if a wavepacket moves upward to the right, the crests move downward to the right.
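A numerical check of this unusual property, taking ω = N kx/|k| (equivalent to ω = N cos Θ) with illustrative wavenumber components: the group velocity (the wavenumber-space gradient of ω) comes out perpendicular to the phase velocity, and their vertical components have opposite signs.

```python
import numpy as np

N = 0.01                      # buoyancy frequency, rad/s
kx, kz = 0.02, 0.05           # wavenumber components, rad/m

def omega(kx, kz):
    return N * kx / np.hypot(kx, kz)

k2 = kx**2 + kz**2
c_phase = omega(kx, kz) * np.array([kx, kz]) / k2     # along the wavevector

eps = 1e-7                                            # finite-difference gradient
c_group = np.array([
    (omega(kx + eps, kz) - omega(kx - eps, kz)) / (2 * eps),
    (omega(kx, kz + eps) - omega(kx, kz - eps)) / (2 * eps),
])

print("c_phase . c_group =", c_phase @ c_group)        # ~0: perpendicular
print("vertical components:", c_phase[1], c_group[1])  # opposite signs
```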
Internal waves in the ocean
Most people think of waves as a surface phenomenon, which acts between water (as in lakes or oceans) and the air. Where low density water overlies high density water in the ocean, internal waves propagate along the boundary. They are especially common over the continental shelf regions of the world oceans and where brackish water overlies salt water at the outlet of large rivers. There is typically little surface expression of the waves, aside from slick bands that can form over the trough of the waves.
Internal waves are the source of a curious phenomenon called dead water, first reported in 1893 by the Norwegian oceanographer Fridtjof Nansen, in which a boat may experience strong resistance to forward motion in apparently calm conditions. This occurs when the ship is sailing on a layer of relatively fresh water whose depth is comparable to the ship's draft. This causes a wake of internal waves that dissipates a huge amount of energy.
Properties of internal waves
Internal waves typically have much lower frequencies and higher amplitudes than surface gravity waves because the density differences (and therefore the restoring forces) within a fluid are usually much smaller. Wavelengths vary from centimetres to kilometres with periods of seconds to hours respectively.
The atmosphere and ocean are continuously stratified: potential density generally increases steadily downward. Internal waves in a continuously stratified medium may propagate vertically as well as horizontally. The dispersion relation for such waves is curious: For a freely-propagating internal wave packet, the direction of propagation of energy (group velocity) is perpendicular to the direction of propagation of wave crests and troughs (phase velocity). An internal wave may also become confined to a finite region of altitude or depth, as a result of varying stratification or wind. Here, the wave is said to be ducted or trapped, and a vertically standing wave may form, where the vertical component of group velocity approaches zero. A ducted internal wave mode may propagate horizontally, with parallel group and phase velocity vectors, analogous to propagation within a waveguide.
At large scales, internal waves are influenced both by the rotation of the Earth as well as by the stratification of the medium. The frequencies of these geophysical wave motions vary from a lower limit of the Coriolis frequency (inertial motions) up to the Brunt–Väisälä frequency, or buoyancy frequency (buoyancy oscillations). Above the Brunt–Väisälä frequency, there may be evanescent internal wave motions, for example those resulting from partial reflection. Internal waves at tidal frequencies are produced by tidal flow over topography/bathymetry, and are known as internal tides. Similarly, atmospheric tides arise from, for example, non-uniform solar heating associated with diurnal motion.
Onshore transport of planktonic larvae
Cross-shelf transport, the exchange of water between coastal and offshore environments, is of particular interest for its role in delivering meroplanktonic larvae to often disparate adult populations from shared offshore larval pools. Several mechanisms have been proposed for the cross-shelf of planktonic larvae by internal waves. The prevalence of each type of event depends on a variety of factors including bottom topography, stratification of the water body, and tidal influences.
Internal tidal bores
Similarly to surface waves, internal waves change as they approach the shore. As the ratio of wave amplitude to water depth becomes such that the wave “feels the bottom,” water at the base of the wave slows down due to friction with the sea floor. This causes the wave to become asymmetrical and the face of the wave to steepen, and finally the wave will break, propagating forward as an internal bore. Internal waves are often formed as tides pass over a shelf break. The largest of these waves are generated during spring tides, and those of sufficient magnitude break and progress across the shelf as bores. These bores are evidenced by rapid, step-like changes in temperature and salinity with depth, the abrupt onset of upslope flows near the bottom and packets of high frequency internal waves following the fronts of the bores.
The arrival of cool, formerly deep water associated with internal bores into warm, shallower waters corresponds with drastic increases in phytoplankton and zooplankton concentrations and changes in plankter species abundances. Additionally, while both surface waters and those at depth tend to have relatively low primary productivity, thermoclines are often associated with a chlorophyll maximum layer. These layers in turn attract large aggregations of mobile zooplankton that internal bores subsequently push inshore. Many taxa can be almost absent in warm surface waters, yet plentiful in these internal bores.
Surface slicks
While internal waves of higher magnitudes will often break after crossing over the shelf break, smaller trains will proceed across the shelf unbroken. At low wind speeds these internal waves are evidenced by the formation of wide surface slicks, oriented parallel to the bottom topography, which progress shoreward with the internal waves. Waters above an internal wave converge and sink in its trough and upwell and diverge over its crest. The convergence zones associated with internal wave troughs often accumulate oils and flotsam that occasionally progress shoreward with the slicks. These rafts of flotsam can also harbor high concentrations of larvae of invertebrates and fish an order of magnitude higher than the surrounding waters.
Predictable downwellings
Thermoclines are often associated with chlorophyll maximum layers. Internal waves represent oscillations of these thermoclines and therefore have the potential to transfer these phytoplankton rich waters downward, coupling benthic and pelagic systems. Areas affected by these events show higher growth rates of suspension feeding ascidians and bryozoans, likely due to the periodic influx of high phytoplankton concentrations. Periodic depression of the thermocline and associated downwelling may also play an important role in the vertical transport of planktonic larvae.
Trapped cores
Large steep internal waves containing trapped, reverse-oscillating cores can also transport parcels of water shoreward. These non-linear waves with trapped cores had previously been observed in the laboratory and predicted theoretically. These waves propagate in environments characterized by high shear and turbulence and likely derive their energy from waves of depression interacting with a shoaling bottom further upstream. The conditions favorable to the generation of these waves are also likely to suspend sediment along the bottom as well as plankton and nutrients found along the benthos in deeper water.
References
Footnotes
Other
External links
Discussion and videos of internal waves made by an oscillating cylinder.
Atlas of Oceanic Internal Waves - Global Ocean Associates
Atmospheric dynamics
Fluid dynamics
Waves
Water waves | Internal wave | [
"Physics",
"Chemistry",
"Engineering"
] | 2,960 | [
"Physical phenomena",
"Atmospheric dynamics",
"Water waves",
"Chemical engineering",
"Waves",
"Motion (physics)",
"Piping",
"Fluid dynamics"
] |
900,305 | https://en.wikipedia.org/wiki/Klemperer%20rosette | A Klemperer rosette is a gravitational system of (optionally) alternating heavier and lighter bodies orbiting in a symmetrical pattern around a common barycenter. It was first described by W.B. Klemperer in 1962, and is a special case of a central configuration.
Klemperer described rosette systems as follows:
The simplest rosette would be a series of four alternating heavier and lighter bodies, 90 degrees from one another, in a rhombic configuration [Heavy, Light, Heavy, Light], where the two larger bodies have the same mass, and likewise the two smaller bodies have the same mass, all orbiting their (empty) geometric center. The more general trojan system has unequal masses for the two heavier bodies, which Klemperer also calls a "rhombic" system, and is the only version that is not symmetric around the gravitational center.
The number of "mass types" can be increased, so long as the arrangement is symmetrical and cyclic pattern: e.g. [ 1,2,3 ... 1,2,3 ], [ 1,2,3,4,5 ... 1,2,3,4,5 ], [ 1,2,3,3,2,1 ... 1,2,3,3,2,1 ], etc.
Klemperer's article specifically analyzes regular polygons with 2–9 corners – dumbbell-shaped through nonagon – and non-centrally symmetric "rhombic rosettes" with three orbiting bodies, the outer two stationed at the middle orbiting body's triangular points (L4 and L5), which had already been described and studied by Lagrange in 1772.
Systems with an even number of 4 or more corners can have alternating heavy and light masses at the corners, though the possible range of mass ratios is constrained by para-stability requirements; systems with odd numbers of corners must have equal masses at every corner.
While Klemperer notes that all the rosettes and the rhombus are vulnerable to destabilization, the hexagonal rosette is the most nearly stable because the "planets" sit in each other's semi-stable triangular Lagrangian points, L4 and L5.
The regular polygonal configurations ("rosettes") do not require a central mass (a "sun" at the center is optional, and if present it may bobble above and below the orbital plane), although a Lagrange-type rhombus does. If a central body is present, its mass constrains the ranges for the mass-ratio between the orbiting bodies.
Misuse and misspelling
The term "Klemperer rosette" (often misspelled "Kemplerer rosette") is used to mean a configuration of three or more equal masses, set at the points of an equilateral polygon and given an equal angular velocity about their center of mass. Klemperer does indeed mention this configuration at the start of his article, but only as an already known set of equilibrium systems before introducing the actual rosettes.
In Larry Niven's novel Fleet of Worlds in the Ringworld series, the Puppeteers' eponymous "Fleet of Worlds" is arranged in such a configuration
that Niven calls a "Kemplerer rosette"; this (possibly intentional) misspelling is one viable source of the wider confusion. It is notable that these fictional planets were maintained in position by large engines, in addition to gravitational force.
Instability
Both simple linear perturbation analysis and simulations of rosettes demonstrate that such systems are unstable: as Klemperer explains in his original article, any displacement away from the perfectly symmetrical geometry causes a growing oscillation, eventually leading to the disruption of the system. The system is unstable regardless of whether the center of the rosette is in free space or in orbit around a central star.
The short-form reason for the instability is that any perturbation corrupts the geometric symmetry, which increases the perturbation, and further undermines the geometry, and so on. The longer explanation is that any tangential perturbation brings a body closer to one neighbor and further from another; the gravitational imbalance becomes greater towards the closer neighbor and less for the further neighbor, pulling the perturbed object more towards its closer neighbor, amplifying the perturbation rather than damping it. An inward radial perturbation causes the perturbed body to get closer to all other objects, increasing the force on the object and increasing its orbital velocity, which leads indirectly to a tangential perturbation and the argument above.
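A minimal N-body sketch of the runaway described above (unit gravitational constant and masses, circle radius 1; all values illustrative, not from Klemperer's paper): the circular angular velocity is computed from the net inward pull on one body in the symmetric hexagonal configuration, one body's velocity is then perturbed by 1%, and the spread of orbital radii grows over the run as the symmetry degrades.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, G, m, R = 6, 1.0, 1.0, 1.0
angles = 2 * np.pi * np.arange(n) / n
pos0 = R * np.column_stack([np.cos(angles), np.sin(angles)])

def accelerations(pos):
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]
        r = np.linalg.norm(d, axis=1)
        r[i] = np.inf                                   # no self-force
        acc[i] = (G * m * d / r[:, None]**3).sum(axis=0)
    return acc

# Angular velocity that balances the net inward pull in the symmetric rosette
omega = np.sqrt(np.linalg.norm(accelerations(pos0)[0]) / R)

vel0 = omega * np.column_stack([-pos0[:, 1], pos0[:, 0]])
vel0[0] *= 1.01                                         # 1% tangential perturbation

def rhs(t, y):
    pos, vel = y[:2 * n].reshape(n, 2), y[2 * n:].reshape(n, 2)
    return np.concatenate([vel.ravel(), accelerations(pos).ravel()])

t_end = 30 * 2 * np.pi / omega                          # about 30 nominal orbits
sol = solve_ivp(rhs, (0.0, t_end),
                np.concatenate([pos0.ravel(), vel0.ravel()]), rtol=1e-9)

radii = np.linalg.norm(sol.y[:2 * n].reshape(n, 2, -1), axis=1)
print("radius spread, early vs late:",
      np.ptp(radii[:, :50]), np.ptp(radii[:, -50:]))    # spread grows
```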
Notes
References
External links
— Rosette simulations
Concepts in astrophysics
Co-orbital objects | Klemperer rosette | [
"Physics",
"Astronomy"
] | 982 | [
"Co-orbital objects",
"Astronomical objects",
"Concepts in astrophysics",
"Astrophysics"
] |
901,062 | https://en.wikipedia.org/wiki/Large%20Helical%20Device | The Large Helical Device (LHD) is a fusion research device located in Toki, Gifu, Japan. It is operated by the National Institute for Fusion Science, and is the world's second-largest superconducting stellarator, after Wendelstein 7-X. The LHD employs a heliotron magnetic field originally developed in Japan.
The objective of the project is to conduct fusion plasma confinement research in a steady state in order to elucidate possible solutions to physics and engineering problems in helical plasma reactors. The LHD uses neutral beam injection, ion cyclotron radio frequency (ICRF), and electron cyclotron resonance heating (ECRH) to heat the plasma, much like conventional tokamaks. The helical divertor heat and particle exhaust system uses the large helical coils to produce a diverting field. This configuration allows for the modification of the stochastic layer size, which is positioned between the confined plasma volume and the field lines that terminate on the divertor plate. Boundary plasma research at LHD focuses on the capability of the helical divertor as an exhaust system for heliotrons and stellarators.
History
Design finalized 1987
Start of construction 1990
Plasma operations from 1998
Neutral beam injection of 3 MW was used in 1999.
In 2005 it maintained a plasma for 3,900 seconds.
In 2006 a new helium cooler was added. Using the new cooler, by 2018 a total of 10 long-term operations had been achieved, reaching a maximum current of 11.833 kA.
To aid public acceptance, an exhaust system was designed to catch and filter the radioactive tritium the fusion process produces.
See also
Fusion reactor
National Institutes of Natural Sciences, Japan
References
External links
Large Helical Device Website, with good diagrams
Super Dense Core Plasmas in LHD, Harris, 2008. 16 slides; advanced, including ballooning modes and future development options
Fusion power
Stellarators
Plasma physics facilities
Toki, Gifu | Large Helical Device | [
"Physics",
"Chemistry"
] | 405 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics facilities",
"Plasma physics"
] |
901,260 | https://en.wikipedia.org/wiki/Close-packing%20of%20equal%20spheres | In geometry, close-packing of equal spheres is a dense arrangement of congruent spheres in an infinite, regular arrangement (or lattice). Carl Friedrich Gauss proved that the highest average density – that is, the greatest fraction of space occupied by spheres – that can be achieved by a lattice packing is
π/(3√2) ≈ 0.74048.
The same packing density can also be achieved by alternate stackings of the same close-packed planes of spheres, including structures that are aperiodic in the stacking direction. The Kepler conjecture states that this is the highest density that can be achieved by any arrangement of spheres, either regular or irregular. This conjecture was proven by T. C. Hales. Highest density is known only for 1, 2, 3, 8, and 24 dimensions.
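A quick arithmetic check of the quoted density, counting the four spheres that belong to one FCC cubic unit cell, with the sphere radius fixed by spheres touching along a face diagonal:

```python
import math

a = 1.0
r = a * math.sqrt(2) / 4                 # FCC: face diagonal a*sqrt(2) spans 4 radii
spheres_per_cell = 4                     # 8 corners/8 + 6 faces/2
fraction = spheres_per_cell * (4 / 3) * math.pi * r**3 / a**3

print(fraction)                          # 0.74048...
print(math.pi / (3 * math.sqrt(2)))      # the same value, pi / (3*sqrt(2))
```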
Many crystal structures are based on a close-packing of a single kind of atom, or a close-packing of large ions with smaller ions filling the spaces between them. The cubic and hexagonal arrangements are very close to one another in energy, and it may be difficult to predict which form will be preferred from first principles.
FCC and HCP lattices
There are two simple regular lattices that achieve this highest average density. They are called face-centered cubic (FCC) (also called cubic close packed) and hexagonal close-packed (HCP), based on their symmetry. Both are based upon sheets of spheres arranged at the vertices of a triangular tiling; they differ in how the sheets are stacked upon one another. The FCC lattice is also known to mathematicians as that generated by the A3 root system.
Cannonball problem
The problem of close-packing of spheres was first mathematically analyzed by Thomas Harriot around 1587, after a question on piling cannonballs on ships was posed to him by Sir Walter Raleigh on their expedition to America.
Cannonballs were usually piled in a rectangular or triangular wooden frame, forming a three-sided or four-sided pyramid. Both arrangements produce a face-centered cubic lattice – with different orientation to the ground. Hexagonal close-packing would result in a six-sided pyramid with a hexagonal base.
The cannonball problem asks which flat square arrangements of cannonballs can be stacked into a square pyramid. Édouard Lucas formulated the problem as the Diophantine equation 1² + 2² + ... + N² = M², or equivalently N(N + 1)(2N + 1)/6 = M², and conjectured that the only solutions are N = 1, M = 1 and N = 24, M = 70. Here N is the number of layers in the pyramidal stacking arrangement and M is the number of cannonballs along an edge in the flat square arrangement.
Positioning and spacing
In both the FCC and HCP arrangements each sphere has twelve neighbors. For every sphere there is one gap surrounded by six spheres (octahedral) and two smaller gaps surrounded by four spheres (tetrahedral). The distances to the centers of these gaps from the centers of the surrounding spheres are √(3/2) ≈ 1.22 for the tetrahedral and √2 ≈ 1.41 for the octahedral, when the sphere radius is 1.
Relative to a reference layer with positioning A, two more positionings B and C are possible. Every sequence of A, B, and C without immediate repetition of the same one is possible and gives an equally dense packing for spheres of a given radius.
The most regular ones are
FCC = ABC ABC ABC... (every third layer is the same)
HCP = AB AB AB AB... (every other layer is the same).
There is an uncountably infinite number of disordered arrangements of planes (e.g. ABCACBABABAC...) that are sometimes collectively referred to as "Barlow packings", after crystallographer William Barlow.
In close-packing, the center-to-center spacing of spheres in the xy plane is a simple honeycomb-like tessellation with a pitch (distance between sphere centers) of one sphere diameter. The distance between sphere centers, projected on the z (vertical) axis, is:
pitchz = √6 d/3 ≈ 0.81650 d,
where d is the diameter of a sphere; this follows from the tetrahedral arrangement of close-packed spheres.
The coordination number of HCP and FCC is 12 and their atomic packing factors (APFs) are equal to the number mentioned above, 0.74.
Lattice generation
When forming any sphere-packing lattice, the first fact to notice is that whenever two spheres touch, a straight line may be drawn from the center of one sphere to the center of the other through the point of contact. The distance between the centers along the shortest path, namely that straight line, will therefore be r1 + r2, where r1 is the radius of the first sphere and r2 is the radius of the second. In close packing all of the spheres share a common radius, r. Therefore, two centers would simply have a distance 2r.
Simple HCP lattice
To form an A-B-A-B-... hexagonal close packing of spheres, the coordinate points of the lattice will be the spheres' centers. Suppose, the goal is to fill a box with spheres according to HCP. The box would be placed on the x-y-z coordinate space.
First form a row of spheres. The centers will all lie on a straight line. Their x-coordinates will vary by 2r, since the distance between the centers of touching spheres is 2r. The y- and z-coordinates will be the same. For simplicity, say that the balls are the first row and that their y- and z-coordinates are simply r, so that their surfaces rest on the zero-planes. Coordinates of the centers of the first row will look like (2r, r, r), (4r, r, r), (6r, r, r), (8r, r, r), ... .
Now, form the next row of spheres. Again, the centers will all lie on a straight line with x-coordinate differences of 2r, but there will be a shift of distance r in the x-direction so that the center of every sphere in this row aligns with the x-coordinate of where two spheres touch in the first row. This allows the spheres of the new row to slide in closer to the first row until all spheres in the new row are touching two spheres of the first row. Since the new spheres touch two spheres, their centers form an equilateral triangle with those two neighbors' centers. The side lengths are all 2r, so the height, or y-coordinate difference between the rows, is √3 r. Thus, this row will have coordinates like this:
(r, r + √3 r, r), (3r, r + √3 r, r), (5r, r + √3 r, r), (7r, r + √3 r, r), ... .
The first sphere of this row only touches one sphere in the original row, but its location follows suit with the rest of the row.
The next row follows this pattern of shifting the x-coordinate by r and the y-coordinate by √3 r. Add rows until reaching the x and y maximum borders of the box.
In an A-B-A-B-... stacking pattern, the odd numbered planes of spheres will have exactly the same coordinates save for a pitch difference in the z-coordinates and the even numbered planes of spheres will share the same x- and y-coordinates. Both types of planes are formed using the pattern mentioned above, but the starting place for the first row's first sphere will be different.
Using the plane described precisely above as plane #1, the A plane, place a sphere on top of this plane so that it lies touching three spheres in the A-plane. The three spheres are all already touching each other, forming an equilateral triangle, and since they all touch the new sphere, the four centers form a regular tetrahedron. All of the sides are equal to 2r because all of the sides are formed by two spheres touching. The height of this tetrahedron, which is the z-coordinate difference between the two "planes", is 2√6 r/3 ≈ 1.633 r. This, combined with the offsets in the x- and y-coordinates, gives the centers of the first row in the B plane:
(3r, r + √3 r/3, r + 2√6 r/3), (5r, r + √3 r/3, r + 2√6 r/3), (7r, r + √3 r/3, r + 2√6 r/3), ... .
The second row's coordinates follow the pattern first described above and are:
The difference to the next plane, the A plane, is again in the z-direction and a shift in the x and y to match those x- and y-coordinates of the first A plane.
In general, the coordinates of sphere centers can be written as:
x = [2i + ((j + k) mod 2)] r
y = [√3 (j + (1/3)(k mod 2))] r
z = [(2√6/3) k] r
where i, j and k are indices starting at 0 for the x-, y- and z-coordinates.
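A short sketch that generates sphere centres from the index formula above over a small block and checks that an interior sphere has twelve neighbours at exactly one diameter, as stated earlier:

```python
import numpy as np
from itertools import product

r = 1.0
pts = []
for i, j, k in product(range(6), range(6), range(6)):
    x = (2 * i + (j + k) % 2) * r
    y = np.sqrt(3) * (j + (k % 2) / 3.0) * r
    z = (2 * np.sqrt(6) / 3) * k * r
    pts.append((x, y, z))
pts = np.array(pts)

# Pick the sphere nearest the centroid (an interior sphere) and count contacts.
centre = pts[np.argmin(np.linalg.norm(pts - pts.mean(axis=0), axis=1))]
d = np.linalg.norm(pts - centre, axis=1)
print(np.sum(np.isclose(d, 2 * r, atol=1e-9)))   # 12 touching neighbours
```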
Miller indices
Crystallographic features of HCP systems, such as vectors and atomic plane families, can be described using a four-value Miller index notation ( hkil ) in which the third index i denotes a degenerate but convenient component which is equal to −h − k. The h, i and k index directions are separated by 120°, and are thus not orthogonal; the l component is mutually perpendicular to the h, i and k index directions.
Filling the remaining space
The FCC and HCP packings are the densest known packings of equal spheres with the highest symmetry (smallest repeat units).
Denser sphere packings are known, but they involve unequal sphere packing.
A packing density of 1, filling space completely, requires non-spherical shapes, such as honeycombs.
Replacing each contact point between two spheres with an edge connecting the centers of the touching spheres produces tetrahedrons and octahedrons of equal edge lengths.
The FCC arrangement produces the tetrahedral-octahedral honeycomb.
The HCP arrangement produces the gyrated tetrahedral-octahedral honeycomb.
If, instead, every sphere is augmented with the points in space that are closer to it than to any other sphere, the duals of these honeycombs are produced: the rhombic dodecahedral honeycomb for FCC, and the trapezo-rhombic dodecahedral honeycomb for HCP.
Spherical bubbles appear in soapy water in a FCC or HCP arrangement when the water in the gaps between the bubbles drains out. This pattern also approaches the rhombic dodecahedral honeycomb or trapezo-rhombic dodecahedral honeycomb. However, such FCC or HCP foams of very small liquid content are unstable, as they do not satisfy Plateau's laws. The Kelvin foam and the Weaire–Phelan foam are more stable, having smaller interfacial energy in the limit of a very small liquid content.
There are two types of interstitial holes left by hcp and fcc conformations: tetrahedral and octahedral voids. Four spheres surround the tetrahedral hole, with three spheres in one layer and one sphere from the next layer. Six spheres surround an octahedral void, with three spheres coming from one layer and three from the next layer. Structures of many simple chemical compounds are often described in terms of small atoms occupying tetrahedral or octahedral holes in close-packed systems formed from larger atoms.
Layered structures are formed by alternating empty and filled octahedral planes. Two octahedral layers usually allow for four structural arrangements that can be filled by either hcp or fcc packing systems. For tetrahedral holes, complete filling leads to a face-centered cubic array. In unit cells, hole filling can sometimes lead to polyhedral arrays with a mix of hcp and fcc layering.
See also
Cubic crystal system
Hermite constant
Random close pack
Sphere packing
Cylinder sphere packing
Notes
External links
P. Krishna & D. Pandey, "Close-Packed Structures" International Union of Crystallography by University College Cardiff Press. Cardiff, Wales. PDF
Discrete geometry
Crystallography
Packing problems
Spheres | Close-packing of equal spheres | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,367 | [
"Discrete mathematics",
"Packing problems",
"Discrete geometry",
"Materials science",
"Crystallography",
"Condensed matter physics",
"Mathematical problems"
] |
901,382 | https://en.wikipedia.org/wiki/Interaction%20picture | In quantum mechanics, the interaction picture (also known as the interaction representation or Dirac picture after Paul Dirac, who introduced it) is an intermediate representation between the Schrödinger picture and the Heisenberg picture. Whereas in the other two pictures either the state vector or the operators carry time dependence, in the interaction picture both carry part of the time dependence of observables. The interaction picture is useful in dealing with changes to the wave functions and observables due to interactions. Most field-theoretical calculations use the interaction representation because they construct the solution to the many-body Schrödinger equation as the solution to the free-particle problem plus some unknown interaction parts.
Equations that include operators acting at different times, which hold in the interaction picture, don't necessarily hold in the Schrödinger or the Heisenberg picture. This is because time-dependent unitary transformations relate operators in one picture to the analogous operators in the others.
The interaction picture is a special case of unitary transformation applied to the Hamiltonian and state vectors.
Definition
Operators and state vectors in the interaction picture are related by a change of basis (unitary transformation) to those same operators and state vectors in the Schrödinger picture.
To switch into the interaction picture, we divide the Schrödinger picture Hamiltonian into two parts:
Any possible choice of parts will yield a valid interaction picture; but in order for the interaction picture to be useful in simplifying the analysis of a problem, the parts will typically be chosen so that H0,S is well understood and exactly solvable, while H1,S contains some harder-to-analyze perturbation to this system.
If the Hamiltonian has explicit time-dependence (for example, if the quantum system interacts with an applied external electric field that varies in time), it will usually be advantageous to include the explicitly time-dependent terms with H1,S, leaving H0,S time-independent. We proceed assuming that this is the case. If there is a context in which it makes sense to have H0,S be time-dependent, then one can proceed by replacing the exponentials exp(±iH0,S t/ħ) by the corresponding time-evolution operator in the definitions below.
State vectors
Let |ψS(t)⟩ be the time-dependent state vector in the Schrödinger picture. A state vector in the interaction picture, |ψI(t)⟩, is defined with an additional time-dependent unitary transformation:
|ψI(t)⟩ = exp(iH0,S t/ħ) |ψS(t)⟩.
Operators
An operator in the interaction picture is defined as
AI(t) = exp(iH0,S t/ħ) AS(t) exp(−iH0,S t/ħ).
Note that AS(t) will typically not depend on t and can be rewritten as just AS. It only depends on t if the operator has "explicit time dependence", for example, due to its dependence on an applied external time-varying electric field. Another instance of explicit time dependence may occur when AS(t) is a density matrix (see below).
Hamiltonian operator
For the operator itself, the interaction picture and Schrödinger picture coincide:
This is easily seen through the fact that operators commute with differentiable functions of themselves. This particular operator can then be called H0 without ambiguity.
For the perturbation Hamiltonian , however,
where the interaction-picture perturbation Hamiltonian becomes a time-dependent Hamiltonian, unless [H1,S, H0,S] = 0.
It is possible to obtain the interaction picture for a time-dependent Hamiltonian H0,S(t) as well, but the exponentials need to be replaced by the unitary propagator for the evolution generated by H0,S(t), or more explicitly with a time-ordered exponential integral.
Density matrix
The density matrix can be shown to transform to the interaction picture in the same way as any other operator. In particular, let ρI and ρS be the density matrices in the interaction picture and the Schrödinger picture respectively. If there is probability pn to be in the physical state |ψn⟩, then
Time-evolution
Time-evolution of states
Transforming the Schrödinger equation into the interaction picture gives
iħ (d/dt) |ψI(t)⟩ = H1,I(t) |ψI(t)⟩,
which states that in the interaction picture, a quantum state is evolved by the interaction part of the Hamiltonian as expressed in the interaction picture. A proof is given in Fetter and Walecka.
Time-evolution of operators
If the operator AS is time-independent (i.e., does not have "explicit time dependence"; see above), then the corresponding time evolution for AI(t) is given by
In the interaction picture the operators evolve in time like the operators in the Heisenberg picture with the Hamiltonian .
Time-evolution of the density matrix
The evolution of the density matrix in the interaction picture is
in consistency with the Schrödinger equation in the interaction picture.
Expectation values
For a general operator , the expectation value in the interaction picture is given by
Using the density-matrix expression for expectation value, we will get
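A numerical sanity check of the statement that both pictures give the same expectation values, using an illustrative two-level system with ħ = 1 (the Hamiltonians, observable and initial state below are arbitrary choices, not taken from the text):

```python
import numpy as np
from scipy.linalg import expm

H0 = np.diag([0.0, 1.0])                        # solvable part
H1 = 0.2 * np.array([[0.0, 1.0], [1.0, 0.0]])   # perturbation (coupling)
H = H0 + H1
A = np.array([[1, 0], [0, -1]], dtype=complex)  # observable (sigma_z)
psi0 = np.array([1.0, 0.0], dtype=complex)

for t in (0.5, 1.0, 2.0):
    psi_S = expm(-1j * H * t) @ psi0                   # Schroedinger-picture state
    psi_I = expm(+1j * H0 * t) @ psi_S                 # interaction-picture state
    A_I = expm(1j * H0 * t) @ A @ expm(-1j * H0 * t)   # interaction-picture operator
    exp_S = np.vdot(psi_S, A @ psi_S).real
    exp_I = np.vdot(psi_I, A_I @ psi_I).real
    print(t, exp_S, exp_I)                             # identical up to rounding
```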
Schwinger–Tomonaga equation
The term interaction representation was invented by Schwinger.
In this new mixed representation the state vector is no longer constant in general, but it is constant if there is no coupling between fields. The change of representation leads directly to the Tomonaga–Schwinger equation:
where the Hamiltonian in this case is the QED interaction Hamiltonian, but it can also be a generic interaction, and σ is a spacelike surface passing through the given point. The derivative formally represents a variation over that surface with that point held fixed. It is difficult to give a precise mathematical formal interpretation of this equation.
This approach is called the 'differential' and 'field' approach by Schwinger, as opposed to the 'integral' and 'particle' approach of the Feynman diagrams.
The core idea is that if the interaction has a small coupling constant (i.e. in the case of electromagnetism of the order of the fine structure constant) successive perturbative terms will be powers of the coupling constant and therefore smaller.
Use
The purpose of the interaction picture is to shunt all the time dependence due to H0 onto the operators, thus allowing them to evolve freely, and leaving only H1,I to control the time-evolution of the state vectors.
The interaction picture is convenient when considering the effect of a small interaction term, H1,S, being added to the Hamiltonian of a solved system, H0,S. By utilizing the interaction picture, one can use time-dependent perturbation theory to find the effect of H1,I, e.g., in the derivation of Fermi's golden rule, or the Dyson series in quantum field theory: in 1947, Shin'ichirō Tomonaga and Julian Schwinger appreciated that covariant perturbation theory could be formulated elegantly in the interaction picture, since field operators can evolve in time as free fields, even in the presence of interactions, now treated perturbatively in such a Dyson series.
Summary comparison of evolution in all pictures
For a time-independent Hamiltonian HS, where H0,S is the free Hamiltonian,
References
Further reading
See also
Bra–ket notation
Schrödinger equation
Haag's theorem
Quantum mechanics | Interaction picture | [
"Physics"
] | 1,457 | [
"Theoretical physics",
"Quantum mechanics"
] |
901,593 | https://en.wikipedia.org/wiki/Lambda%20cube | In mathematical logic and type theory, the λ-cube (also written lambda cube) is a framework introduced by Henk Barendregt to investigate the different dimensions in which the calculus of constructions is a generalization of the simply typed λ-calculus. Each dimension of the cube corresponds to a new kind of dependency between terms and types. Here, "dependency" refers to the capacity of a term or type to bind a term or type. The respective dimensions of the λ-cube correspond to:
x-axis (): types that can bind terms, corresponding to dependent types.
y-axis (): terms that can bind types, corresponding to polymorphism.
z-axis (): types that can bind types, corresponding to (binding) type operators.
The different ways to combine these three dimensions yield the 8 vertices of the cube, each corresponding to a different kind of typed system. The λ-cube can be generalized into the concept of a pure type system.
Examples of Systems
(λ→) Simply typed lambda calculus
The simplest system found in the λ-cube is the simply typed lambda calculus, also called λ→. In this system, the only way to construct an abstraction is by making a term depend on a term, with the typing rule:
(λ2) System F
In System F (also named λ2 for the "second-order typed lambda calculus") there is another type of abstraction, written with a Λ, that allows terms to depend on types, with the following rule:
The terms beginning with a Λ are called polymorphic, as they can be applied to different types to get different functions, similarly to polymorphic functions in ML-like languages. For instance, the polymorphic identity fun x -> x of OCaml has type 'a -> 'a, meaning it can take an argument of any type 'a and return an element of that type. This type corresponds in λ2 to the type ∀α. α → α.
(λω̲) System Fω̲
In System Fω̲ a construction is introduced to supply types that depend on other types. This is called a type constructor and provides a way to build "a function with a type as a value". An example of such a type constructor is the type of binary trees with leaves labeled by data of a given type : , where "" informally means " is a type". This is a function that takes a type parameter as an argument and returns the type of s of values of type . In concrete programming, this feature corresponds to the ability to define type constructors inside the language, rather than considering them as primitives. The previous type constructor roughly corresponds to the following definition of a tree with labeled leaves in OCaml:
type 'a tree = | Leaf of 'a | Node of 'a tree * 'a tree
This type constructor can be applied to other types to obtain new types. E.g., to obtain the type of trees of integers:
type int_tree = int tree
System Fω̲ is generally not used on its own, but is useful to isolate the independent feature of type constructors.
(λP) Lambda-P
In the λP system, also named λΠ, and closely related to the LF Logical Framework, one has so called dependent types. These are types that are allowed to depend on terms. The crucial introduction rule of the system is
where ∗ represents valid types. The new type constructor Π corresponds via the Curry-Howard isomorphism to a universal quantifier, and the system λP as a whole corresponds to first-order logic with implication as the only connective. An example of these dependent types in concrete programming is the type of vectors of a certain length: the length is a term, on which the type depends.
(λω) System Fω
System Fω combines both the Λ constructor of System F and the type constructors of System Fω̲. Thus System Fω provides both terms that depend on types and types that depend on types.
(λC) Calculus of constructions
In the calculus of constructions, denoted as λC in the cube or as λPω, these four features cohabit, so that both types and terms can depend on types and terms. The clear border that exists in λ→ between terms and types is somewhat abolished, as all types except the universal are themselves terms with a type.
Formal definition
As for all systems based upon the simply typed lambda calculus, all systems in the cube are given in two steps: first, raw terms, together with a notion of β-reduction, and then typing rules that allow one to type those terms.
The set of sorts is defined as , sorts are represented with the letter . There is also a set of variables, represented by the letters . The raw terms of the eight systems of the cube are given by the following syntax:
and denoting when does not occur free in .
The environments, as is usual in typed systems, are given by
The notion of β-reduction is common to all systems in the cube. It is written and given by the rules. Its reflexive, transitive closure is written .
The following typing rules are also common to all systems in the cube:
The difference between the systems is in the pairs of sorts that are allowed in the following two typing rules:
The correspondence between the systems and the pairs allowed in the rules is the following:
Each direction of the cube corresponds to one pair (excluding the pair shared by all systems), and in turn each pair corresponds to one possibility of dependency between terms and types:
allows terms to depend on terms.
allows types to depend on terms.
allows terms to depend on types.
allows types to depend on types.
Comparison between the systems
λ→
A typical derivation that can be obtained is , or with the arrow shortcut , closely resembling the identity (of type ) of the usual λ→. Note that all types used must appear in the context, because the only derivation that can be done in an empty context is .
The computing power is quite weak; it corresponds to the extended polynomials (polynomials together with a conditional operator).
λ2
In λ2, such terms can be obtained as , with . If one reads as a universal quantification, via the Curry-Howard isomorphism, this can be seen as a proof of the principle of explosion. In general, λ2 adds the possibility to have impredicative types such as , that is, terms quantifying over all types, including themselves. The polymorphism also allows the construction of functions that were not constructible in λ→. More precisely, the functions definable in λ2 are those provably total in second-order Peano arithmetic. In particular, all primitive recursive functions are definable.
λP
In λP, the ability to have types depending on terms means one can express logical predicates. For instance, the following is derivable: , which corresponds, via the Curry-Howard isomorphism, to a proof of . From the computational point of view, however, having dependent types does not enhance computational power; they only make it possible to express more precise type properties.
The conversion rule is strongly needed when dealing with dependent types, because it allows one to perform computation on the terms in the type.
For instance, if one has and , one needs to apply the conversion rule to obtain to be able to type .
λω
In λω, the following operator is definable, that is . The derivation can be obtained already in λ2; however, the polymorphic can only be defined if the rule is also present.
From a computing point of view, λω is extremely strong, and has been considered as a basis for programming languages.
λC
The calculus of constructions has both the predicate expressiveness of λP and the computational power of λω (hence its alternative name λPω), so it is very powerful on both the logical and the computational side.
Relation to other systems
The system Automath is similar to λ2 from a logical point of view. The ML-like languages, from a typing point of view, lie somewhere between λ→ and λ2, as they admit a restricted kind of polymorphic types, that is the types in prenex normal form. However, because they feature some recursion operators, their computing power is greater than that of λ2. The Coq system is based on an extension of λC with a linear hierarchy of universes, rather than only one untypable , and the ability to construct inductive types.
Pure type systems can be seen as a generalization of the cube, with an arbitrary set of sorts, axiom, product and abstraction rules. Conversely, the systems of the lambda cube can be expressed as pure type systems with two sorts , the only axiom , and a set of rules such that .
Via the Curry-Howard isomorphism, there is a one-to-one correspondence between the systems in the lambda cube and logical systems, namely:
All the logics are implicative (i.e. the only connectives are and ); however, one can define other connectives such as or in an impredicative way in second and higher order logics. In the weak higher order logics, there are variables for higher order predicates, but no quantification on those can be done.
Common properties
All systems in the cube enjoy
the Church-Rosser property: if and then there exists such that and ;
the subject reduction property: if and then ;
the uniqueness of types: if and then .
All of these can be proven on generic pure type systems.
Any term well-typed in a system of the cube is strongly normalizing, although this property is not common to all pure type systems. No system in the cube is Turing complete.
Subtyping
Subtyping, however, is not represented in the cube, even though systems like , known as higher-order bounded quantification, which combine subtyping and polymorphism, are of practical interest and can be further generalized to bounded type operators. Further extensions allow the definition of purely functional objects; these systems were generally developed after the lambda cube paper was published.
The idea of the cube is due to the mathematician Henk Barendregt (1991). The framework of pure type systems generalizes the lambda cube in the sense that all corners of the cube, as well as many other systems, can be represented as instances of this general framework. This framework predates the lambda cube by a couple of years. In his 1991 paper, Barendregt also defines the corners of the cube in this framework.
In his Habilitation à diriger les recherches, Olivier Ridoux gives a cut-out template of the lambda cube and also a dual representation of the cube as an octahedron, where the 8 vertices are replaced by faces, as well as a dual representation as a dodecahedron, where the 12 edges are replaced by faces.
See also
Logical cube
Logical hexagon
Square of opposition
Triangle of opposition
Notes
Further reading
External links
Barendregt's Lambda Cube in the context of pure type systems by Roger Bishop Jones
Lambda calculus
Type theory | Lambda cube | [
"Mathematics"
] | 2,262 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
901,613 | https://en.wikipedia.org/wiki/Logical%20framework | In logic, a logical framework provides a means to define (or present) a logic as a signature in a higher-order type theory in such a way that provability of a formula in the original logic reduces to a type inhabitation problem in the framework type theory. This approach has been used successfully for (interactive) automated theorem proving. The first logical framework was Automath; however, the name of the idea comes from the more widely known Edinburgh Logical Framework, LF. Several more recent proof tools like Isabelle are based on this idea. Unlike a direct embedding, the logical framework approach allows many logics to be embedded in the same type system.
Overview
A logical framework is based on a general treatment of syntax, rules and proofs by means of a dependently typed lambda calculus. Syntax is treated in a style similar to, but more general than, Per Martin-Löf's system of arities.
To describe a logical framework, one must provide the following:
A characterization of the class of object-logics to be represented;
An appropriate meta-language;
A characterization of the mechanism by which object-logics are represented.
This is summarized by:
"Framework = Language + Representation."
LF
In the case of the LF logical framework, the meta-language is the λΠ-calculus. This is a system of first-order dependent function types which are related by the propositions as types principle to first-order minimal logic. The key features of the λΠ-calculus are that it consists of entities of three levels: objects, types and kinds (or type classes, or families of types). It is predicative; all well-typed terms are strongly normalizing and Church-Rosser, and the property of being well-typed is decidable. However, type inference is undecidable.
A logic is represented in the LF logical framework by the judgements-as-types representation mechanism. This is inspired by Per Martin-Löf's development of Kant's notion of judgement, in the 1983 Siena Lectures. The two higher-order judgements, the hypothetical and the general, correspond to the ordinary and dependent function space, respectively. The methodology of judgements-as-types is that judgements are represented as the types of their proofs. A logical system is represented by its signature which assigns kinds and types to a finite set of constants that represents its syntax, its judgements and its rule schemes. An object-logic's rules and proofs are seen as primitive proofs of hypothetico-general judgements.
An implementation of the LF logical framework is provided by the Twelf system at Carnegie Mellon University. Twelf includes
a logic programming engine
meta-theoretic reasoning about logic programs (termination, coverage, etc.)
an inductive meta-logical theorem prover
See also
Grammatical Framework
Turnstile (symbol)
References
Further reading
Robert Harper, Furio Honsell and Gordon Plotkin. A Framework For Defining Logics. Journal of the Association for Computing Machinery, 40(1):143-184, 1993.
Arnon Avron, Furio Honsell, Ian Mason and Randy Pollack. Using typed lambda calculus to implement formal systems on a machine. Journal of Automated Reasoning, 9:309-354, 1992.
Robert Harper. An Equational Formulation of LF. Technical Report, University of Edinburgh, 1988. LFCS report ECS-LFCS-88-67.
Robert Harper, Donald Sannella and Andrzej Tarlecki. Structured Theory Presentations and Logic Representations. Annals of Pure and Applied Logic, 67(1-3):113-160, 1994.
Samin Ishtiaq and David Pym. A Relevant Analysis of Natural Deduction. Journal of Logic and Computation 8, 809-838, 1998.
Samin Ishtiaq and David Pym. Kripke Resource Models of a Dependently-typed, Bunched -calculus. Journal of Logic and Computation 12(6), 1061-1104, 2002.
Per Martin-Löf. "On the Meanings of the Logical Constants and the Justifications of the Logical Laws." "Nordic Journal of Philosophical Logic", 1(1): 11-60, 1996.
Bengt Nordström, Kent Petersson, and Jan M. Smith. Programming in Martin-Löf's Type Theory. Oxford University Press, 1990. (The book is out of print, but a free version has been made available.)
David Pym. A Note on the Proof Theory of the -calculus. Studia Logica 54: 199-230, 1995.
David Pym and Lincoln Wallen. Proof-search in the -calculus. In: G. Huet and G. Plotkin (eds), Logical Frameworks, Cambridge University Press, 1991.
Didier Galmiche and David Pym. Proof-search in type-theoretic languages:an introduction. Theoretical Computer Science 232 (2000) 5-53.
Philippa Gardner. Representing Logics in Type Theory. Technical Report, University of Edinburgh, 1992. LFCS report ECS-LFCS-92-227.
Gilles Dowek. The undecidability of typability in the lambda-pi-calculus. In M. Bezem, J.F. Groote (Eds.), Typed Lambda Calculi and Applications. Volume 664 of Lecture Notes in Computer Science, 139-145, 1993.
David Pym. Proofs, Search and Computation in General Logic. Ph.D. thesis, University of Edinburgh, 1990.
David Pym. A Unification Algorithm for the -calculus. International Journal of Foundations of Computer Science 3(3), 333-378, 1992.
External links
Specific Logical Frameworks and Implementations (a list maintained by Frank Pfenning, but mostly dead links from 1997)
Logic in computer science
Type theory
Proof assistants
Dependently typed programming | Logical framework | [
"Mathematics"
] | 1,228 | [
"Mathematical structures",
"Logic in computer science",
"Mathematical logic",
"Mathematical objects",
"Type theory"
] |
31,406,901 | https://en.wikipedia.org/wiki/RNA%20helicase%20database | The RNA helicase database was a database that stored data about RNA helicases. Its URL has been invalid since at least December 31, 2017.
See also
Helicase
References
Enzyme databases
Helicases
RNA
RNA-binding proteins | RNA helicase database | [
"Chemistry",
"Biology"
] | 49 | [
"Molecular biology techniques",
"Biochemistry databases",
"Enzyme databases",
"Protein classification"
] |
31,413,530 | https://en.wikipedia.org/wiki/ThYme%20%28database%29 | ThYme (Thioester-active enzYme) is a database of enzymes constituting the fatty acid synthesis and polyketide synthesis cycles.
See also
Thioester
References
External links
http://www.enzyme.cbirc.iastate.edu
Enzyme databases
Fatty acids
Genetics databases
Metabolism
Thioesters | ThYme (database) | [
"Chemistry",
"Biology"
] | 67 | [
"Enzyme databases",
"Biochemistry databases",
"Functional groups",
"Protein classification",
"Thioesters",
"Molecular biology techniques",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
31,417,810 | https://en.wikipedia.org/wiki/Anchorage%20in%20reinforced%20concrete | Reinforced concrete is concrete in which reinforcement bars ("rebars"), reinforcement grids, plates or fibers are embedded to create bond and thus to strengthen the concrete in tension. The composite material was invented by French gardener Joseph Monier in 1849 and patented in 1867.
Description
Conventionally, the term reinforced concrete refers only to concrete that is reinforced with iron or steel. However, other materials are often used to reinforce concrete, e.g. organic and inorganic fibres and composites in different forms. Compared to its compressive strength, concrete is weak in tension, so adding reinforcement increases the strength in tension. The other purpose of providing reinforcement in concrete is to hold tension-cracked sections together.
Mechanism of composite action of reinforcement and concrete
The reinforcement in an RC structure, such as a steel bar, has to undergo the same strain or deformation as the surrounding concrete in order to prevent discontinuity, slip or separation of the two materials under load. Maintaining composite action requires transfer of load between the concrete and steel. The direct stress is transferred from the concrete to the bar interface so as to change the tensile stress in the reinforcing bar along its length. This load transfer is achieved by means of bond (anchorage) and is idealized as a continuous stress field that develops in the vicinity of the steel-concrete interface.
Anchorage (bond) in concrete: Codes of specifications
Because the actual bond stress varies along the length of a bar anchored in a zone of tension, most international codes of specifications use the concept of development length rather than bond stress. The same concept applies to the lap splice length mentioned in the codes, where splices (overlapping) are provided between two adjacent bars in order to maintain the required continuity of stress in the splice zone.
See also
Reinforced solid
References
Concrete
Concrete buildings and structures
Reinforced concrete
Structural engineering | Anchorage in reinforced concrete | [
"Engineering"
] | 367 | [
"Structural engineering",
"Concrete",
"Construction",
"Civil engineering"
] |
24,284,875 | https://en.wikipedia.org/wiki/Antenna%20amplifier | In electronics, an antenna amplifier (also: aerial amplifier or booster) is a device that amplifies an antenna signal, usually producing an output with the same impedance as the input, typically 75 ohms for coaxial cable and 300 ohms for twin-lead cable.
An antenna amplifier boosts a radio signal considerably for devices that receive radio waves. Many devices have an RF amplifier stage in their circuitry that amplifies the antenna signal; these include, but are not limited to, radios, televisions, mobile phones, and Wi-Fi and Bluetooth devices. Amplifiers amplify everything: both the desired signal present at the antenna and the noise. Typical noise sources include ambient background noise, such as brush noise from electric motors, high-voltage sources (for example a gasoline engine ignition), or large dispersed currents near the desired reception, such as an electric fence. In addition, consideration must be given to the noise generated by the amplifier itself and to any other electrical noise generated by the receiving device; for example, considerable effort goes into mobile phone circuitry design to eliminate noise from the phone's own circuitry so that it does not disturb the desired signals at its own antenna(e).
An indoor antenna may include an amplifier circuit, whereby powered reception helps capture as much of an FM or UHF/VHF signal as possible for amplifying a radio or television signal. Its drawbacks are that any noise is usually amplified as well; a common result is the amplification of ghost images (for analog signals) and of any other perturbations present locally or even extraterrestrially, such as the cosmic microwave background radiation for devices that work in that frequency range.
The key to a "good" level of input at your receiver with the minimum amount noise includes many design considerations in an electrical amplifier. In theory it is best if you amplify a "clean" signal to a higher level than a "noisy" signal to a higher level, and many circuits include filters to remove all but the desired reception signal. Some consideration has to be taken for cable loss and the signal frequencies desired for example higher frequency (VHF or higher: 2.4 GHz Wi-Fi/third generation mobile phone.) the more the loss that the cable has, and the more susceptible the transmission cable is to noise degradation. Starting with a signal from the antenna which is then directed through a coaxial cable, the amount of loss depends upon a number of factors, cable type and cable length are the two most important. Cable is rated in db loss per length of cable at a specified frequency, for example RG-6 coaxial cable is the cable most used for Television reception.
Belden 1829AC (Series 6) coax, for example, has a loss of 4 dB per 100 feet at 500 MHz (TV channel 18, 495.25 MHz); at channel 52 (700 MHz) the loss is about 5 dB per 100 feet (channel 32 lies at 580 MHz), while at TV channel 2 the cable would have a loss of only 1.4 dB. So at channel 18 more than half the power would be lost in 100 feet of cable between the antenna and the TV.
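The decibel ratings translate into power ratios via the relation power fraction = 10^(-dB/10); a minimal sketch in Python (the cable figures are the ones quoted above, the lengths are illustrative):

def fraction_of_power_remaining(loss_db_per_100ft, length_ft):
    """Convert a coax loss rating (dB per 100 ft) into the fraction of input power left."""
    total_loss_db = loss_db_per_100ft * length_ft / 100.0
    return 10.0 ** (-total_loss_db / 10.0)

print(fraction_of_power_remaining(4.0, 100))   # ~0.40: more than half the power lost at ~500 MHz
print(fraction_of_power_remaining(1.4, 100))   # ~0.72: far less loss at channel 2 frequencies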
See also
Balun
Cosmic microwave background radiation
References
Electronic amplifiers
Radio electronics | Antenna amplifier | [
"Technology",
"Engineering"
] | 669 | [
"Radio electronics",
"Electronic amplifiers",
"Amplifiers"
] |
24,285,404 | https://en.wikipedia.org/wiki/Wettable%20powder | A wettable powder (WP) is an insecticide or other pesticide formulation consisting of the active ingredient in a finely ground state combined with wetting agents and sometimes bulking agents. Wettable powders are designed to be applied as a dilute suspension through liquid spraying equipment. As wettable powders are not mixed with water until immediately before use, storing and transporting the products may be simplified as the weight and volume of the water is avoided. Wettable powders may be supplied in bulk or in measured sachets made from water-soluble film to simplify premixing and reduce operator exposure to the product.
References
Pest control techniques
Powders | Wettable powder | [
"Physics"
] | 134 | [
"Materials",
"Powders",
"Matter"
] |
24,287,539 | https://en.wikipedia.org/wiki/Manufacturing%20Automation%20Protocol | Manufacturing Automation Protocol (MAP) was a computer network standard released in 1982 for interconnection of devices from multiple manufacturers. It was developed by General Motors to combat the proliferation of incompatible communications standards used by suppliers of automation products such as programmable controllers. By 1985 demonstrations of interoperability were carried out and 21 vendors offered MAP products. In 1986 the Boeing corporation merged its Technical Office Protocol with the MAP standard, and the combined standard was referred to as "MAP/TOP". The standard was revised several times between the first issue in 1982 and MAP 3.0 in 1987, with significant technical changes that made interoperation between different revisions of the standard difficult.
Although promoted and used by manufacturers such as General Motors, Boeing, and others, it lost market share to the contemporary Ethernet standard and was not widely adopted. Difficulties included changing protocol specifications, the expense of MAP interface links, and the speed penalty of a token-passing network. The token bus network protocol used by MAP became standardized as IEEE standard 802.4 but this committee disbanded in 2004 due to lack of industry attention.
References
Industrial automation
Computer networks | Manufacturing Automation Protocol | [
"Technology",
"Engineering"
] | 225 | [
"Computer network stubs",
"Automation",
"Industrial engineering",
"Computing stubs",
"Industrial automation"
] |
24,288,199 | https://en.wikipedia.org/wiki/Artifact%20%28error%29 | In natural science and signal processing, an artifact or artefact is any error in the perception or representation of any information introduced by the involved equipment or technique(s).
Statistics
In statistics, statistical artifacts are apparent effects that are introduced inadvertently by methods of data analysis rather than by the process being studied.
Computer science
In computer science, digital artifacts are anomalies introduced into digital signals as a result of digital signal processing.
Microscopy
In microscopy, visual artifacts are sometimes introduced during the processing of samples into slide form.
Econometrics
In econometrics, which focuses on computing relationships between related variables, an artifact is a spurious finding, such as one based on either a faulty choice of variables or an over-extension of the computed relationship. Such an artifact may be called a statistical artifact. For instance, imagine a hypothetical finding that presidential approval rating is approximately equal to twice the percentage of citizens making more than $50,000 annually; if 60% of citizens make more than $50,000 annually, this would predict that the approval rating will be 120%. This prediction is a statistical artifact, since it is spurious to use the model when the percentage of citizens making over $50,000 is so high, and gross error to predict an approval rating greater than 100%.
Remote sensing
Medical imaging
In medical imaging, artifacts are misrepresentations of tissue structures produced by imaging techniques such as ultrasound, X-ray, CT scan, and magnetic resonance imaging (MRI). These artifacts may be caused by a variety of phenomena such as the underlying physics of the energy-tissue interaction as between ultrasound and air, susceptibility artifacts, data acquisition errors (such as patient motion), or a reconstruction algorithm's inability to represent the anatomy. Physicians typically learn to recognize some of these artifacts to avoid mistaking them for actual pathology.
In ultrasound imaging, several assumptions are made by the computer system when interpreting the returning echoes. These are: echoes originate only from the main ultrasound beam (while there are side lobes and grating lobes apart from the main beam); echoes return to the transducer after a single reflection (while an echo can be reflected several times before reaching the transducer); the depth of an object relates directly to the amount of time an echo takes to reach the transducer (while an echo may reflect several times, delaying its return to the transducer); the speed of ultrasound in human tissue is constant; echoes travel in a straight path; and the acoustic energy of an echo is uniformly attenuated. When these assumptions are not maintained, artifacts occur.
Medical electrophysiological monitoring
In medical electrophysiological monitoring, artifacts are anomalous (interfering) signals that originate from some source other than the electrophysiological structure being studied. These artifact signals may stem from, but are not limited to: light sources; monitoring equipment issues; utility frequency (50 Hz and 60 Hz); or undesired electrophysiological signals such as EMG presenting on an EEG-, EP-, ECG-, or EOG- signal. Offending artifacts may obscure, distort, or completely misrepresent the true underlying electrophysiological signal sought.
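One common way to suppress utility-frequency (50 Hz or 60 Hz) interference of this kind is a narrow notch filter. The sketch below uses SciPy; the sampling rate, notch frequency, quality factor and test signal are illustrative assumptions, not values taken from this article:

import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 500.0        # assumed sampling rate in Hz
f_notch = 60.0    # utility frequency to suppress (50.0 in many countries)

# Design a narrow IIR notch around the powerline frequency
b, a = iirnotch(w0=f_notch, Q=30.0, fs=fs)

# Example signal: a 10 Hz "physiological" component plus 60 Hz interference
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * f_notch * t)

cleaned = filtfilt(b, a, x)   # zero-phase filtering avoids shifting waveform timing

A notch filter only removes a narrow band; broadband artifacts such as muscle (EMG) contamination require other approaches.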
Radar
In radar signal processing, some echoes can be related to fixed objects (clutter), multipath returns, jamming, atmospheric effect (brightband or attenuation), anomalous propagation, and many other effects. All those echoes must be filtered in order to obtain the position, velocity and type of the real targets that may include aircraft, and weather.
See also
Sonic artifact, in sound and music production, sonic material that is accidental or unwanted, resulting from the editing of another sound.
Visual artifact, in imaging, any unwanted visual alteration introduced by the imaging equipment.
Compression artifact, in computer graphics, distortion of media by the data compression.
References
Error
Optical illusions
Data compression
Radar theory | Artifact (error) | [
"Physics"
] | 800 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
24,289,155 | https://en.wikipedia.org/wiki/Biotechnology%20and%20Bioengineering | Biotechnology and Bioengineering is a monthly peer-reviewed scientific journal covering biochemical engineering that was established in 1959. In 2009, the BioMedical & Life Sciences Division of the Special Libraries Association listed Biotechnology and Bioengineering as one of the 100 most influential journals in biology and medicine of the past century.
The journal focuses on applied fundamentals and the application of engineering principles to biology-based problems. Initially, fermentation processes, as well as mixing phenomena and aeration, with an emphasis on agricultural or food science applications, were the major focus. The scale-up of antibiotics from fermentation processes was also an active topic of publication.
Elmer L. Gaden was editor-in-chief from its initial publication until 1983. Daniel I.C. Wang and Eleftherios T. Papoutsakis each subsequently held this position. Douglas S. Clark, the current editor-in-chief, has served in this capacity since 1996.
The journal was established as Journal of Biochemical and Microbiological Technology and Engineering by Elmer Gaden, Eric M. Crook, and M. B. Donald and was first published in February 1959. It obtained its current title in 1962.
According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.5.
References
External links
Biotechnology journals
Biomedical engineering journals
Wiley (publisher) academic journals
English-language journals
Academic journals established in 1959
Monthly journals | Biotechnology and Bioengineering | [
"Biology"
] | 285 | [
"Biotechnology literature",
"Biotechnology journals"
] |
24,290,409 | https://en.wikipedia.org/wiki/Vacuum%20level | In physics, the vacuum level refers to the energy of a free stationary electron that is outside of any material (it is in a perfect vacuum).
It may be taken as infinitely far away from a solid, or defined to be near a surface. Its definition and measurement are often discussed in the ultraviolet photoelectron spectroscopy literature, for example. As the vacuum level is a property of the electron and free space, it is often used as the level of alignment for the energy levels of two different materials. The vacuum level alignment approach may or may not hold due to details of the interface. It is particularly important in the design of vacuum device components such as cathodes.
If defined as being close to a surface, then the vacuum level is typically not a constant due to the equilibrium electric fields in vacuum. The value of the vacuum level depends on the surface chosen due to variations in work function.
The phrase "vacuum level" also occurs often in texts on squeezed light where it refers to an unsqueezed measurement. For example, "Thus, when the noise level in the spectrum analyzer shows broadband squeezing below the vacuum level, it also indicates the presence of entanglement between upper and lower sidebands."
Note that the phrase "vacuum level" may also refer to a measurement of residual pressure in a vacuum system or a device that uses differential pressure such as a carburetor but this usage should be very clear from context.
References
Quantum chemistry | Vacuum level | [
"Physics",
"Chemistry"
] | 296 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
"Physical chemistry stubs",
" and optical physics"
] |
24,293,838 | https://en.wikipedia.org/wiki/Wigner%20rotation | In theoretical physics, the composition of two non-collinear Lorentz boosts results in a Lorentz transformation that is not a pure boost but is the composition of a boost and a rotation. This rotation is called Thomas rotation, Thomas–Wigner rotation or Wigner rotation. If a sequence of non-collinear boosts returns an object to its initial velocity, then the sequence of Wigner rotations can combine to produce a net rotation called the Thomas precession.
The rotation was discovered by Émile Borel in 1913, rediscovered and proved by Ludwik Silberstein in his 1914 book 'Relativity', rediscovered by Llewellyn Thomas in 1926, and rederived by Wigner in 1939. Wigner acknowledged Silberstein.
There are still ongoing discussions about the correct form of equations for the Thomas rotation in different reference systems with contradicting results. Goldstein:
The spatial rotation resulting from the successive application of two non-collinear Lorentz transformations have been declared every bit as paradoxical as the more frequently discussed apparent violations of common sense, such as the twin paradox.
Einstein's principle of velocity reciprocity (EPVR) reads
We postulate that the relation between the coordinates of the two systems is linear. Then the inverse transformation is also linear and the complete non-preference of the one or the other system demands that the transformation shall be identical with the original one, except for a change of v to −v
With less careful interpretation, the EPVR is seemingly violated in some situations, but on closer analysis there is no such violation.
Let u be the velocity with which the lab reference frame moves with respect to an object called A, and let v be the velocity of another object called B, measured from the lab reference frame. If u and v are not aligned, the coordinates of the relative velocities of these two bodies will not be opposite, even though the actual velocity vectors themselves are indeed opposites; the coordinates fail to be opposites because the two travellers are not using the same coordinate basis vectors.
If A and B both started in the lab system with coordinates matching those of the lab and subsequently use coordinate systems that result from their respective boosts from that system, then the velocity that A will measure on B will be given in terms of A's new coordinate system by:
And the velocity that B will measure on A will be given in terms of B's coordinate system by:
The Lorentz factor for the velocities that either A sees on B or B sees on A are the same:
but the components are not opposites - i.e.
However this does not mean that the velocities are not opposites as the components in each case are multiplied by different basis vectors (and all observers agree that the difference is by a rotation of coordinates such that the actual velocity vectors are indeed exact opposites).
The angle of rotation can be calculated in two ways:
Or:
And the axis of rotation is:
Setup of frames and relative velocities between them
Two general boosts
When studying the Thomas rotation at the fundamental level, one typically uses a setup with three coordinate frames, . Frame has velocity relative to frame , and frame has velocity relative to frame .
The axes are, by construction, oriented as follows. Viewed from , the axes of and are parallel (the same holds true for the pair of frames when viewed from .) Also viewed from , the spatial axes of and are parallel (and the same holds true for the pair of frames when viewed from .) This is an application of EPVR: If is the velocity of relative to , then is the velocity of relative to . The velocity makes the same angles with respect to coordinate axes in both the primed and unprimed systems. This does not represent a snapshot taken in either of the two frames of the combined system at any particular time, as should be clear from the detailed description below.
This is possible, since a boost in, say, the positive , preserves orthogonality of the coordinate axes. A general boost can be expressed as , where is a rotation taking the into the direction of and is a boost in the new . Each rotation retains the property that the spatial coordinate axes are orthogonal. The boost will stretch the (intermediate) by a factor , while leaving the and in place. The fact that coordinate axes are non-parallel in this construction after two consecutive non-collinear boosts is a precise expression of the phenomenon of Thomas rotation.
The velocity of as seen in is denoted , where ⊕ refers to the relativistic addition of velocity (and not ordinary vector addition), given by
and
is the Lorentz factor of the velocity (the vertical bars indicate the magnitude of the vector). The velocity can be thought of as the velocity of a frame relative to a frame , and is the velocity of an object, say a particle or another frame, relative to . In the present context, all velocities are best thought of as relative velocities of frames unless otherwise specified. The result is then the relative velocity of frame relative to a frame .
Although velocity addition is nonlinear, non-associative, and non-commutative, the result of the operation correctly obtains a velocity with a magnitude less than . If ordinary vector addition were used, it would be possible to obtain a velocity with a magnitude larger than . The Lorentz factors of both composite velocities are equal,
and the norms are equal under interchange of velocity vectors
Since the two possible composite velocities have equal magnitude, but different directions, one must be a rotated copy of the other. More detail and other properties of no direct concern here can be found in the main article.
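A minimal numerical sketch of these statements (the velocities are arbitrary examples in units of c; the function below is the standard general velocity-addition formula written out in components):

import numpy as np

def gamma(u):
    # Lorentz factor; speeds are in units of c
    return 1.0 / np.sqrt(1.0 - np.dot(u, u))

def velocity_add(u, v):
    """Relativistic composition u ⊕ v of two 3-velocities (units of c)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    g = gamma(u)
    return (u + v / g + (g / (1.0 + g)) * np.dot(u, v) * u) / (1.0 + np.dot(u, v))

u = np.array([0.6, 0.0, 0.0])
v = np.array([0.0, 0.7, 0.0])
a, b = velocity_add(u, v), velocity_add(v, u)

print(np.linalg.norm(a), np.linalg.norm(b))   # equal magnitudes, here ~0.82
cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.degrees(np.arccos(cos_angle)))       # nonzero angle between them, here ~15.5 degrees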
Reversed configuration
Consider the reversed configuration, namely, frame moves with velocity relative to frame , and frame , in turn, moves with velocity relative to frame . In short, and by EPVR. Then the velocity of relative to is . By EPVR again, the velocity of relative to is then .
One finds . While they are equal in magnitude, there is an angle between them. For a single boost between two inertial frames, there is only one unambiguous relative velocity (or its negative). For two boosts, the peculiar result of two inequivalent relative velocities instead of one seems to contradict the symmetry of relative motion between any two frames. Which is the correct velocity of relative to ? Since this inequality may be somewhat unexpected and potentially breaks EPVR, this question is warranted.
Formulation in terms of Lorentz transformations
Two boosts equals a boost and rotation
The answer to the question lies in the Thomas rotation, and that one must be careful in specifying which coordinate system is involved at each step. When viewed from , the coordinate axes of and are not parallel. While this can be hard to imagine since both pairs and have parallel coordinate axes, it is easy to explain mathematically.
Velocity addition does not provide a complete description of the relation between the frames. One must formulate the complete description in terms of Lorentz transformations corresponding to the velocities. A Lorentz boost with any velocity (magnitude less than ) is given symbolically by
where the coordinates and transformation matrix are compactly expressed in block matrix form
and, in turn, are column vectors (the matrix transpose of these are row vectors), and is the Lorentz factor of velocity . The boost matrix is a symmetric matrix. The inverse transformation is given by
It is clear that to each admissible velocity there corresponds a pure Lorentz boost,
Velocity addition corresponds to the composition of boosts in that order. The acts on first, then acts on . Notice succeeding operators act on the left in any composition of operators, so should be interpreted as a boost with velocities then , not then . Performing the Lorentz transformations by block matrix multiplication,
the composite transformation matrix is
and, in turn,
Here is the composite Lorentz factor, and and are 3×1 column vectors proportional to the composite velocities. The 3×3 matrix will turn out to have geometric significance.
The inverse transformations are
and the composition amounts to a negation and exchange of velocities,
If the relative velocities are exchanged, looking at the blocks of , one observes the composite transformation to be the matrix transpose of . This is not the same as the original matrix, so the composite Lorentz transformation matrix is not symmetric, and thus not a single boost. This, in turn, translates to the incompleteness of velocity composition from the result of two boosts; symbolically,
To make the description complete, it is necessary to introduce a rotation, before or after the boost. This rotation is the Thomas rotation. A rotation is given by
where the 4×4 rotation matrix is
and is a 3×3 rotation matrix. In this article the axis-angle representation is used, and is the "axis-angle vector", the angle multiplied by a unit vector parallel to the axis. Also, the right-handed convention for the spatial coordinates is used (see orientation (vector space)), so that rotations are positive in the anticlockwise sense according to the right-hand rule, and negative in the clockwise sense. With these conventions; the rotation matrix rotates any 3d vector about the axis through angle anticlockwise (an active transformation), which has the equivalent effect of rotating the coordinate frame clockwise about the same axis through the same angle (a passive transformation).
The rotation matrix is an orthogonal matrix, its transpose equals its inverse, and negating either the angle or axis in the rotation matrix corresponds to a rotation in the opposite sense, so the inverse transformation is readily obtained by
A boost followed or preceded by a rotation is also a Lorentz transformation, since these operations leave the spacetime interval invariant. The same Lorentz transformation has two decompositions for appropriately chosen rapidity and axis-angle vectors;
and if these two decompositions are equal, the two boosts are related by
so the boosts are related by a matrix similarity transformation.
It turns out the equality between two boosts and a rotation followed or preceded by a single boost is correct: the rotation of frames matches the angular separation of the composite velocities, and explains how one composite velocity applies to one frame, while the other applies to the rotated frame. The rotation also breaks the symmetry in the overall Lorentz transformation making it nonsymmetric. For this specific rotation, let the angle be and the axis be defined by the unit vector , so the axis-angle vector is .
Altogether, two different orderings of two boosts means there are two inequivalent transformations. Each of these can be split into a boost then rotation, or a rotation then boost, doubling the number of inequivalent transformations to four. The inverse transformations are equally important; they provide information about what the other observer perceives. In all, there are eight transformations to consider, just for the problem of two Lorentz boosts. In summary, with subsequent operations acting on the left, they are
Matching up the boosts followed by rotations, in the original setup, an observer in notices to move with velocity then rotate clockwise (first diagram), and because of the rotation an observer in Σ′′ notices to move with velocity then rotate anticlockwise (second diagram). If the velocities are exchanged an observer in notices to move with velocity then rotate anticlockwise (third diagram), and because of the rotation an observer in notices to move with velocity then rotate clockwise (fourth diagram).
The cases of rotations then boosts are similar (no diagrams are shown). Matching up the rotations followed by boosts, in the original setup, an observer in notices to rotate clockwise then move with velocity , and because of the rotation an observer in notices to rotate anticlockwise then move with velocity . If the velocities are exchanged an observer in notices to rotate anticlockwise then move with velocity , and because of the rotation an observer in notices to rotate clockwise then move with velocity .
Finding the axis and angle of the Thomas rotation
The above formulae constitute the relativistic velocity addition and the Thomas rotation explicitly in the general Lorentz transformations. Throughout, in every composition of boosts and decomposition into a boost and rotation, the important formula
holds, allowing the rotation matrix to be defined completely in terms of the relative velocities and . The angle of a rotation matrix in the axis–angle representation can be found from the trace of the rotation matrix, the general result for any axis is . Taking the trace of the equation gives
The angle between and is not the same as the angle between and .
In both frames Σ and Σ′′, for every composition and decomposition, another important formula
holds. The vectors and are indeed related by a rotation, in fact by the same rotation matrix which rotates the coordinate frames. Starting from , the matrix rotates this into anticlockwise, and it follows that their cross product (in the right-hand convention)
defines the axis correctly, therefore the axis is also parallel to . The magnitude of this pseudovector is neither interesting nor important, only the direction is, so it can be normalized into the unit vector
which still completely defines the direction of the axis without loss of information.
The rotation is simply a "static" rotation: there is no relative rotational motion between the frames; there is relative translational motion in the boost. However, if the frames accelerate, then the rotated frame rotates with an angular velocity. This effect is known as the Thomas precession, and arises purely from the kinematics of successive Lorentz boosts.
Finding the Thomas rotation
In principle, it is pretty easy. Since every Lorentz transformation is a product of a boost and a rotation, the consecutive application of two pure boosts is a pure boost, either followed by or preceded by a pure rotation. Thus, suppose
The task is to glean from this equation the boost velocity and the rotation from the matrix entries of . The coordinates of events are related by
Inverting this relation yields
or
Set . Then will record the spacetime position of the origin of the primed system,
or
But
Multiplying this matrix with a pure rotation will not affect the zeroth columns and rows, and
which could have been anticipated from the formula for a simple boost in the -direction, and for the relative velocity vector
Thus given with , one obtains and by little more than inspection of . (Of course, can also be found using velocity addition per above.) From , construct . The solution for is then
With the ansatz
one finds by the same means
Finding a formal solution in terms of velocity parameters and involves first formally multiplying , formally inverting, then reading off from the result, formally building from the result, and, finally, formally multiplying . It should be clear that this is a daunting task, and it is difficult to interpret/identify the result as a rotation, though it is clear a priori that it is. It is these difficulties that the Goldstein quote at the top refers to. The problem has been thoroughly studied under simplifying assumptions over the years.
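Numerically, the procedure just described is straightforward; the following sketch (the example velocities, units with c = 1, and the particular ordering convention for the two boosts are all assumptions of the example) composes two boost matrices, reads the composite velocity off the first column, and extracts the residual Wigner rotation:

import numpy as np

def boost(v):
    """4x4 Lorentz boost matrix for a 3-velocity v (units of c, |v| < 1)."""
    v = np.asarray(v, float)
    v2 = np.dot(v, v)
    g = 1.0 / np.sqrt(1.0 - v2)
    B = np.eye(4)
    B[0, 0] = g
    B[0, 1:] = B[1:, 0] = -g * v
    B[1:, 1:] = np.eye(3) + (g - 1.0) * np.outer(v, v) / v2
    return B

u = np.array([0.6, 0.0, 0.0])
v = np.array([0.0, 0.7, 0.0])

L = boost(u) @ boost(v)        # composite transformation; not symmetric, so not a pure boost
g_w = L[0, 0]                  # Lorentz factor of the composite boost
w = -L[1:, 0] / g_w            # composite velocity, read off the first column
R = (np.linalg.inv(boost(w)) @ L)[1:, 1:]   # residual spatial block: the Wigner rotation

print(np.allclose(R @ R.T, np.eye(3)))                    # True: R is orthogonal
print(np.degrees(np.arccos((np.trace(R) - 1.0) / 2.0)))   # rotation angle, here ~15.5 degrees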
Group theoretical origin
Another way to explain the origin of the rotation is by looking at the generators of the Lorentz group.
Boosts from velocities
The passage from a velocity to a boost is obtained as follows. An arbitrary boost is given by
where is a triple of real numbers serving as coordinates on the boost subspace of the Lie algebra spanned by the matrices
The vector
is called the boost parameter or boost vector, while its norm is the rapidity. Here is the velocity parameter, the magnitude of the vector .
While for one has , the parameter is confined within , and hence . Thus
The set of velocities satisfying is an open ball in and is called the space of admissible velocities in the literature. It is endowed with a hyperbolic geometry described in the linked article.
Commutators
The generators of boosts, , in different directions do not commute. This has the effect that the composition of two consecutive boosts is not a pure boost in general, but a rotation preceding a boost.
Consider a succession of boosts in the x direction, then the y direction, expanding each boost to first order
then
and the group commutator is
Three of the commutation relations of the Lorentz generators are
where the bracket is a binary operation known as the commutator, and the other relations can be found by taking cyclic permutations of x, y, z components (i.e. change x to y, y to z, and z to x, repeat).
Returning to the group commutator, the commutation relations of the boost generators imply that for a boost along the x then y directions, there will be a rotation about the z axis. In terms of the rapidities, the rotation angle is given by
equivalently expressible as
and Euler parametrization
In fact, the full Lorentz group is not indispensable for studying the Wigner rotation. Given that this phenomenon involves only two spatial dimensions, the subgroup is sufficient for analyzing the associated problems. Analogous to the Euler parametrization of , can be decomposed into three simple parts, providing a straightforward and intuitive framework for exploring the Wigner rotation problem.
Spacetime diagrams for non-collinear boosts
The familiar notion of vector addition for velocities in the Euclidean plane can be done in a triangular formation, or since vector addition is commutative, the vectors in both orderings geometrically form a parallelogram (see "parallelogram law"). This does not hold for relativistic velocity addition; instead a hyperbolic triangle arises whose edges are related to the rapidities of the boosts. Changing the order of the boost velocities, one does not find the resultant boost velocities to coincide.
See also
Bargmann-Michel-Telegdi equation
Pauli–Lubanski pseudovector
Velocity-addition formula#Hyperbolic geometry
Fermi–Walker transport
Footnotes
References
Sexl and Urbantke mention on p. 39 that Lobachevsky geometry needs to be introduced into the usual Minkowski spacetime diagrams for non-collinear velocities.
.
Ferraro, R., & Thibeault, M. (1999). "Generic composition of boosts: an elementary derivation of the Wigner rotation". European journal of physics 20(3):143.
(free access)
Thomas, L. H. (1927). The kinematics of an electron with an axis. Phil. Mag. 7. http://www.clifford.org/drbill/csueb/4250/topics/thomas_papers/Thomas1927.pdf
Silberstein L. The Theory of Relativity, MacMillan 1914
Further reading
Relativistic velocity space, Wigner rotation, and Thomas precession (2004) John A. Rhodes and Mark D. Semon
The Hyperbolic Theory of Special Relativity (2006) by J.F. Barrett
Special relativity
Coordinate systems
Theory of relativity
Mathematical physics | Wigner rotation | [
"Physics",
"Mathematics"
] | 3,918 | [
"Coordinate systems",
"Applied mathematics",
"Theoretical physics",
"Special relativity",
"Theory of relativity",
"Mathematical physics"
] |
24,295,042 | https://en.wikipedia.org/wiki/Winding%20factor | In power engineering, the winding factor provides a way to compare the effectiveness of different designs of stators for alternators. Winding factor is the ratio of the electromotive force (EMF) produced by a stator having a short-pitch, distributed, or skewed winding to that of a stator having full-pitch, concentrated, and non-skewed windings.
For most alternators, the stator acts as the armature. Winding factor also applies to other electric machines, but this article focuses on winding factor as it applies to alternators.
Practical alternators have short-pitched and distributed windings to reduce harmonics and maintain constant torque. Also, either the stator or the rotor may be slightly skewed from the rotor's axis to reduce cogging torque. The armature winding of each phase may be distributed in a number of pole slots. Since the EMFs induced in different slots are not in phase, their phasor sum is less than their numerical sum. This reduction factor is called the distribution factor . The other factors that can reduce the winding factor are the pitch factor and the skew factor .
Pitch
In alternator design, pitch means angle. The shaft makes a complete rotation in 360 degrees, which are called mechanical degrees. However, the current in a conductor completes a cycle in 360 electrical degrees. Electrical degrees and mechanical degrees are related as follows:
θ_electrical = (P/2) × θ_mechanical
where P is the number of poles.
No matter how many poles there are, each pole always spans exactly 180 electrical degrees; this span is called the pole pitch. Coil pitch is the number of electrical degrees spanned by the coil.
Short pitch factor
A full-pitched coil spans 180 electrical degrees, meaning it spans the entire pole. A short-pitched coil spans less than 180 electrical degrees, meaning it does not span the entire pole. The amount the coil is short-pitched is given by the angle α in electrical degrees, and the pitch factor is k_p = cos(α/2).
A short pitched coil is also called chorded, in reference to the chord of a circle.
Calculating winding factor
The winding factor can be calculated as
k_w = k_d × k_p × k_s
where
k_d is the distribution factor,
k_p is the pitch factor, and
k_s is the skew factor resulting from the winding being skewed from the axis of the rotor.
Example
For a 3-phase 6 slot 4 pole non-overlapping winding alternator:
Most 3-phase motors have winding factor values between 0.85 and 0.95.
The winding factor (along with some other factors like winding skew) can help to improve the harmonic content in the generated EMF of the machine.
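A minimal numerical sketch of these relations (the coil span of one slot pitch used for the 6-slot, 4-pole, non-overlapping example is an illustrative assumption, not a figure stated above):

import math

def distribution_factor(q, slot_angle_elec_deg):
    """k_d for q coils per phase group separated by the given electrical slot angle."""
    g = math.radians(slot_angle_elec_deg)
    return math.sin(q * g / 2.0) / (q * math.sin(g / 2.0))

def pitch_factor(chording_angle_elec_deg):
    """k_p for a coil short-pitched (chorded) by the given electrical angle."""
    return math.cos(math.radians(chording_angle_elec_deg) / 2.0)

def winding_factor(k_d, k_p, k_s=1.0):
    return k_d * k_p * k_s

# 3-phase, 6-slot, 4-pole, non-overlapping (concentrated) winding:
slots, poles = 6, 4
slot_pitch_elec = poles / 2 * 360.0 / slots   # 120 electrical degrees per slot
chording = 180.0 - slot_pitch_elec            # coil spanning one slot pitch is chorded by 60 deg
k_w = winding_factor(distribution_factor(1, slot_pitch_elec), pitch_factor(chording))
print(round(k_w, 3))                          # ~0.866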
References
Saadat, Hadi. 2004. Power Systems Analysis. 2nd Ed. McGraw Hill. International Edition.
Reducing the effect of total harmonics distortion of synchronous machines
Electric motor winding calculator, 2022, Emetor AB
Kencoil, glossary
Reading a winding diagram
A Comparative Study on Performance of 3kW Induction Motor with Different Shapes of Stator Slots
Electrical generators | Winding factor | [
"Physics",
"Technology"
] | 605 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
24,295,969 | https://en.wikipedia.org/wiki/Applied%20mathematics | Applied mathematics is the application of mathematical methods by different fields such as physics, engineering, medicine, biology, finance, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models.
In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics where abstract concepts are studied for their own sake. The activity of applied mathematics is thus intimately connected with research in pure mathematics.
History
Historically, applied mathematics consisted principally of applied analysis, most notably differential equations; approximation theory (broadly construed, to include representations, asymptotic methods, variational methods, and numerical analysis); and applied probability. These areas of mathematics related directly to the development of Newtonian physics, and in fact, the distinction between mathematicians and physicists was not sharply drawn before the mid-19th century. This history left a pedagogical legacy in the United States: until the early 20th century, subjects such as classical mechanics were often taught in applied mathematics departments at American universities rather than in physics departments, and fluid mechanics may still be taught in applied mathematics departments. Engineering and computer science departments have traditionally made use of applied mathematics.
Over time, applied mathematics grew alongside the advancement of science and technology. In the modern era, the application of mathematics in fields such as science, economics, and technology became deeper and more widespread. The development of computers and other technologies enabled a more detailed study and application of mathematical concepts in various fields.
Today, applied mathematics continues to be crucial for societal and technological advancement. It guides the development of new technologies and economic progress, and addresses challenges in various scientific fields and industries. The history of applied mathematics continually demonstrates the importance of mathematics in human progress.
Divisions
Today, the term "applied mathematics" is used in a broader sense. It includes the classical areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications (such as cryptography), though they are not generally considered to be part of the field of applied mathematics per se.
There is no consensus as to what the various branches of applied mathematics are. Such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees.
Many mathematicians distinguish between "applied mathematics", which is concerned with mathematical methods, and the "applications of mathematics" within science and engineering. A biologist using a population model and applying known mathematics would not be doing applied mathematics, but rather using it; however, mathematical biologists have posed problems that have stimulated the growth of pure mathematics. Mathematicians such as Poincaré and Arnold deny the existence of "applied mathematics" and claim that there are only "applications of mathematics." Similarly, non-mathematicians blend applied mathematics and applications of mathematics. The use and development of mathematics to solve industrial problems is also called "industrial mathematics".
The success of modern numerical mathematical methods and software has led to the emergence of computational mathematics, computational science, and computational engineering, which use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary.
Applicable mathematics
Sometimes, the term applicable mathematics is used to distinguish between the traditional applied mathematics that developed alongside physics and the many areas of mathematics that are applicable to real-world problems today, although there is no consensus as to a precise definition.
Mathematicians often distinguish between "applied mathematics" on the one hand, and the "applications of mathematics" or "applicable mathematics" both within and outside of science and engineering, on the other. Some mathematicians emphasize the term applicable mathematics to separate or delineate the traditional applied areas from new applications arising from fields that were previously seen as pure mathematics. For example, from this viewpoint, an ecologist or geographer using population models and applying known mathematics would not be doing applied, but rather applicable, mathematics. Even fields such as number theory that are part of pure mathematics are now important in applications (such as cryptography), though they are not generally considered to be part of the field of applied mathematics per se. Such descriptions can lead to applicable mathematics being seen as a collection of mathematical methods such as real analysis, linear algebra, mathematical modelling, optimisation, combinatorics, probability and statistics, which are useful in areas outside traditional mathematics and not specific to mathematical physics.
Other authors prefer describing applicable mathematics as a union of "new" mathematical applications with the traditional fields of applied mathematics. With this outlook, the terms applied mathematics and applicable mathematics are thus interchangeable.
Utility
Historically, mathematics was most important in the natural sciences and engineering. However, since World War II, fields outside the physical sciences have spawned the creation of new areas of mathematics, such as game theory and social choice theory, which grew out of economic considerations. Further, the utilization and development of mathematical methods expanded into other areas leading to the creation of new fields such as mathematical finance and data science.
The advent of the computer has enabled new applications: studying and using the new computer technology itself (computer science) to study problems arising in other areas of science (computational science) as well as the mathematics of computation (for example, theoretical computer science, computer algebra, numerical analysis). Statistics is probably the most widespread mathematical science used in the social sciences.
Status in academic departments
Academic institutions are not consistent in the way they group and label courses, programs, and degrees in applied mathematics. At some schools, there is a single mathematics department, whereas others have separate departments for Applied Mathematics and (Pure) Mathematics. It is very common for Statistics departments to be separated at schools with graduate programs, but many undergraduate-only institutions include statistics under the mathematics department.
Many applied mathematics programs (as opposed to departments) consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside mathematics, while others require substantial coursework in a specific area of application. In some respects this difference reflects the distinction between "application of mathematics" and "applied mathematics".
Some universities in the U.K. host departments of Applied Mathematics and Theoretical Physics, but it is now much less common to have separate departments of pure and applied mathematics. A notable exception to this is the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, housing the Lucasian Professor of Mathematics whose past holders include Isaac Newton, Charles Babbage, James Lighthill, Paul Dirac, and Stephen Hawking.
Schools with separate applied mathematics departments range from Brown University, which has a large Division of Applied Mathematics that offers degrees through the doctorate, to Santa Clara University, which offers only the M.S. in applied mathematics. Research universities dividing their mathematics department into pure and applied sections include MIT. Students in this program also learn another skill (computer science, engineering, physics, pure math, etc.) to supplement their applied math skills.
Associated mathematical sciences
Applied mathematics is associated with the following mathematical sciences:
Engineering and technological engineering
Engineering and technological engineering make use of applied geometry together with applied chemistry.
Scientific computing
Scientific computing includes applied mathematics (especially numerical analysis), computing science (especially high-performance computing), and mathematical modelling in a scientific discipline.
Computer science
Computer science relies on logic, algebra, discrete mathematics such as graph theory, and combinatorics.
Operations research and management science
Operations research and management science are often taught in faculties of engineering, business, and public policy.
Statistics
Applied mathematics has substantial overlap with the discipline of statistics. Statistical theorists study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions. Statistical theory relies on probability and decision theory, and makes extensive use of scientific computing, analysis, and optimization; for the design of experiments, statisticians use algebra and combinatorial design. Applied mathematicians and statisticians often work in a department of mathematical sciences (particularly at colleges and small universities).
Actuarial science
Actuarial science applies probability, statistics, and economic theory to assess risk in insurance, finance and other industries and professions.
Mathematical economics
Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. The applied methods usually refer to nontrivial mathematical techniques or approaches. Mathematical economics is based on statistics, probability, mathematical programming (as well as other computational methods), operations research, game theory, and some methods from mathematical analysis. In this regard, it resembles (but is distinct from) financial mathematics, another part of applied mathematics.
According to the Mathematics Subject Classification (MSC), mathematical economics falls into the Applied mathematics/other classification of category 91:
Game theory, economics, social and behavioral sciences
with MSC2010 classifications for 'Game theory' at codes 91Axx and for 'Mathematical economics' at codes 91Bxx .
Other disciplines
The line between applied mathematics and specific areas of application is often blurred. Many universities teach mathematical and statistical courses outside the respective departments, in departments and areas including business, engineering, physics, chemistry, psychology, biology, computer science, scientific computation, information theory, and mathematical physics.
Applied Mathematics Societies
The Society for Industrial and Applied Mathematics is an international applied mathematics organization. As of 2024, the society has 14,000 individual members. The American Mathematical Society has its Applied Mathematics Group.
See also
Analytics
Applied science
Engineering mathematics
Society for Industrial and Applied Mathematics
References
Further reading
Applicable mathematics
The Morehead Journal of Applicable Mathematics hosted by Morehead State University
Series on Concrete and Applicable Mathematics by World Scientific
Handbook of Applicable Mathematics Series by Walter Ledermann
External links
The Society for Industrial and Applied Mathematics (SIAM) is a professional society dedicated to promoting the interaction between mathematics and other scientific and technical communities. Aside from organizing and sponsoring numerous conferences, SIAM is a major publisher of research journals and books in applied mathematics.
The Applicable Mathematics Research Group at Notre Dame University (archived 29 March 2013)
Centre for Applicable Mathematics at Liverpool Hope University (archived 1 April 2018)
Applicable Mathematics research group at Glasgow Caledonian University (archived 4 March 2016) | Applied mathematics | [
"Mathematics"
] | 2,069 | [
"Applied mathematics"
] |
542,054 | https://en.wikipedia.org/wiki/Coefficient%20of%20performance | The coefficient of performance or COP (sometimes CP or CoP) of a heat pump, refrigerator or air conditioning system is a ratio of useful heating or cooling provided to work (energy) required. Higher COPs equate to higher efficiency, lower energy (power) consumption and thus lower operating costs. The COP is used in thermodynamics.
The COP usually exceeds 1, especially in heat pumps, because instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. Less work is required to move heat than to convert it into heat, and because of this, heat pumps, air conditioners and refrigeration systems can have a coefficient of performance greater than one; most air conditioners have a COP of 3.5 to 5.
The COP is highly dependent on operating conditions, especially absolute temperature and relative temperature between sink and system, and is often graphed or averaged against expected conditions.
The performance of absorption chillers is typically much lower, as they are not heat pumps relying on compression, but instead rely on chemical reactions driven by heat.
Equation
The equation is:

COP = |Q| / W

where
Q is the useful heat supplied or removed by the considered system (machine).
W is the net work put into the considered system in one cycle.
The COP for heating and cooling are different because the heat reservoir of interest is different. When one is interested in how well a machine cools, the COP is the ratio of the heat taken up from the cold reservoir to the input work. However, for heating, the COP is the ratio of the magnitude of the heat given off to the hot reservoir (which is the heat taken up from the cold reservoir plus the input work) to the input work:

COP_cooling = Q_C / W and COP_heating = |Q_H| / W

where
Q_C is the heat removed from the cold reservoir and added to the system;
Q_H is the heat given off to the hot reservoir; it is lost by the system and therefore negative (see heat).
Note that the COP of a heat pump depends on its direction. The heat rejected to the hot sink is greater than the heat absorbed from the cold source, so the heating COP is greater by one than the cooling COP.
Theoretical performance limits
According to the first law of thermodynamics, after a full cycle of the process ΔU = 0 and thus Q_C + Q_H + W = 0, so that W = |Q_H| − Q_C.
Since |Q_H| = Q_C + W, we obtain

COP_heating = |Q_H| / W = (Q_C + W) / W = COP_cooling + 1.

For a heat pump operating at maximum theoretical efficiency (i.e. Carnot efficiency), it can be shown that

Q_C / T_C = |Q_H| / T_H

and thus

|Q_H| = Q_C T_H / T_C

where T_H and T_C are the thermodynamic temperatures of the hot and cold heat reservoirs, respectively.
At maximum theoretical efficiency, therefore

COP_heating = |Q_H| / (|Q_H| − Q_C) = T_H / (T_H − T_C),

which is equal to the reciprocal of the thermal efficiency of an ideal heat engine, because a heat pump is a heat engine operating in reverse.
Similarly, the COP of a refrigerator or air conditioner operating at maximum theoretical efficiency is

COP_cooling = Q_C / (|Q_H| − Q_C) = T_C / (T_H − T_C).

COP_heating applies to heat pumps and COP_cooling applies to air conditioners and refrigerators.
Measured values for actual systems will always be significantly less than these theoretical maxima.
In Europe, the standard test conditions for ground source heat pump units use 308 K (35 °C; 95 °F) for T_H and 273 K (0 °C; 32 °F) for T_C. According to the above formula, the maximum theoretical COPs would be COP_heating = 308 / (308 − 273) ≈ 8.8 and COP_cooling = 273 / (308 − 273) ≈ 7.8.
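As a rough illustration (not taken from the source), the Carnot limits above can be evaluated directly; the function names are hypothetical and the printed values simply restate the computation for the quoted test temperatures:

```python
def carnot_cop_heating(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum theoretical heating COP: T_H / (T_H - T_C), temperatures in kelvin."""
    return t_hot_k / (t_hot_k - t_cold_k)

def carnot_cop_cooling(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum theoretical cooling COP: T_C / (T_H - T_C), temperatures in kelvin."""
    return t_cold_k / (t_hot_k - t_cold_k)

t_hot, t_cold = 308.0, 273.0                 # 35 degC and 0 degC expressed in kelvin
print(carnot_cop_heating(t_hot, t_cold))     # ~8.8
print(carnot_cop_cooling(t_hot, t_cold))     # ~7.8
# The heating COP exceeds the cooling COP by exactly 1, as derived above.
```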
Test results of the best systems are around 4.5. When measuring installed units over a whole season and accounting for the energy needed to pump water through the piping systems, seasonal COP's for heating are around 3.5 or less. This indicates room for further improvement.
The EU standard test conditions for an air source heat pump use a dry-bulb temperature of 20 °C (68 °F) for T_H and 7 °C (44.6 °F) for T_C. Given sub-zero European winter temperatures, real-world heating performance is significantly poorer than such standard COP figures imply.
Improving the COP
As the formula shows, the COP of a heat pump system can be improved by reducing the temperature gap at which the system works. For a heating system this would mean two things:
Reducing the output temperature to around which requires piped floor, wall or ceiling heating, or oversized water to air heaters.
Increasing the input temperature (e.g. by using an oversized ground source or by access to a solar-assisted thermal bank).
Accurately determining thermal conductivity will allow for much more precise ground loop or borehole sizing, resulting in higher return temperatures and a more efficient system. For an air cooler, the COP could be improved by using ground water as an input instead of air, and by reducing the temperature drop on the output side by increasing the air flow. For both systems, also increasing the size of pipes and air canals would help to reduce noise and the energy consumption of pumps (and ventilators) by decreasing the speed of the fluid, which in turn lowers the Reynolds number and hence the turbulence (and noise) and the head loss (see hydraulic head). The heat pump itself can be improved by increasing the size of the internal heat exchangers, which in turn increases the efficiency (and the cost) relative to the power of the compressor, and also by reducing the system's internal temperature gap over the compressor. Obviously, this latter measure makes some heat pumps unsuitable to produce high temperatures, which means that a separate machine is needed for producing, e.g., hot tap water.
The COP of absorption chillers can be improved by adding a second or third stage. Double and triple effect chillers are significantly more efficient than single effect chillers, and can surpass a COP of 1. They require higher pressure and higher temperature steam, but this is still a relatively small 10 pounds of steam per hour per ton of cooling.
Seasonal efficiency
A realistic indication of energy efficiency over an entire year can be achieved by using seasonal COP or seasonal coefficient of performance (SCOP) for heat. Seasonal energy efficiency ratio (SEER) is mostly used for air conditioning. SCOP is a new methodology which gives a better indication of expected real-life performance of heat pump technology.
See also
Seasonal energy efficiency ratio (SEER)
Seasonal thermal energy storage (STES)
Heating seasonal performance factor (HSPF)
Power usage effectiveness (PUE)
Thermal efficiency
Vapor-compression refrigeration
Air conditioner
HVAC
Notes
External links
Discussion on changes to COP of a heat pump depending on input and output temperatures
See COP definition in Cap XII of the book Industrial Energy Management - Principles and Applications
Heat pumps
Heating, ventilation, and air conditioning
Dimensionless numbers of thermodynamics
Engineering ratios | Coefficient of performance | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,309 | [
"Thermodynamic properties",
"Physical quantities",
"Metrics",
"Dimensionless numbers of thermodynamics",
"Engineering ratios",
"Quantity"
] |
542,198 | https://en.wikipedia.org/wiki/Alexandrov%20topology | In topology, an Alexandrov topology is a topology in which the intersection of every family of open sets is open. It is an axiom of topology that the intersection of every finite family of open sets is open; in Alexandrov topologies the finite qualifier is dropped.
A set together with an Alexandrov topology is known as an Alexandrov-discrete space or finitely generated space.
Alexandrov topologies are uniquely determined by their specialization preorders. Indeed, given any preorder ≤ on a set X, there is a unique Alexandrov topology on X for which the specialization preorder is ≤. The open sets are just the upper sets with respect to ≤. Thus, Alexandrov topologies on X are in one-to-one correspondence with preorders on X.
Alexandrov-discrete spaces are also called finitely generated spaces because their topology is uniquely determined by the family of all finite subspaces. Alexandrov-discrete spaces can thus be viewed as a generalization of finite topological spaces.
Due to the fact that inverse images commute with arbitrary unions and intersections, the property of being an Alexandrov-discrete space is preserved under quotients.
Alexandrov-discrete spaces are named after the Russian topologist Pavel Alexandrov. They should not be confused with the more geometrical Alexandrov spaces introduced by the Russian mathematician Aleksandr Danilovich Aleksandrov.
Characterizations of Alexandrov topologies
Alexandrov topologies have numerous characterizations. Let X = <X, T> be a topological space. Then the following are equivalent:
Open and closed set characterizations:
Open set. An arbitrary intersection of open sets in X is open.
Closed set. An arbitrary union of closed sets in X is closed.
Neighbourhood characterizations:
Smallest neighbourhood. Every point of X has a smallest neighbourhood.
Neighbourhood filter. The neighbourhood filter of every point in X is closed under arbitrary intersections.
Interior and closure algebraic characterizations:
Interior operator. The interior operator of X distributes over arbitrary intersections of subsets.
Closure operator. The closure operator of X distributes over arbitrary unions of subsets.
Preorder characterizations:
Specialization preorder. T is the finest topology consistent with the specialization preorder of X i.e. the finest topology giving the preorder ≤ satisfying x ≤ y if and only if x is in the closure of {y} in X.
Open up-set. There is a preorder ≤ such that the open sets of X are precisely those that are upward closed i.e. if x is in the set and x ≤ y then y is in the set. (This preorder will be precisely the specialization preorder.)
Closed down-set. There is a preorder ≤ such that the closed sets of X are precisely those that are downward closed i.e. if x is in the set and y ≤ x then y is in the set. (This preorder will be precisely the specialization preorder.)
Downward closure. A point x lies in the closure of a subset S of X if and only if there is a point y in S such that x ≤ y where ≤ is the specialization preorder i.e. x lies in the closure of {y}.
Finite generation and category theoretic characterizations:
Finite closure. A point x lies within the closure of a subset S of X if and only if there is a finite subset F of S such that x lies in the closure of F. (This finite subset can always be chosen to be a singleton.)
Finite subspace. T is coherent with the finite subspaces of X.
Finite inclusion map. The inclusion maps fi : Xi → X of the finite subspaces of X form a final sink.
Finite generation. X is finitely generated i.e. it is in the final hull of the finite spaces. (This means that there is a final sink fi : Xi → X where each Xi is a finite topological space.)
Topological spaces satisfying the above equivalent characterizations are called finitely generated spaces or Alexandrov-discrete spaces and their topology T is called an Alexandrov topology.
Equivalence with preordered sets
The Alexandrov topology on a preordered set
Given a preordered set X = <X, ≤>, we can define an Alexandrov topology τ on X by choosing the open sets to be the upper sets:

τ = { S ⊆ X : for all x, y ∈ X, if x ∈ S and x ≤ y, then y ∈ S }.

We thus obtain a topological space T(X) = <X, τ>.
The corresponding closed sets are the lower sets:

{ S ⊆ X : for all x, y ∈ X, if x ∈ S and y ≤ x, then y ∈ S }.
The specialization preorder on a topological space
Given a topological space X = <X, T> the specialization preorder on X is defined by:
x ≤ y if and only if x is in the closure of {y}.
We thus obtain a preordered set W(X) = <X, ≤>.
Equivalence between preorders and Alexandrov topologies
For every preordered set X = <X, ≤> we always have W(T(X)) = X, i.e. the preorder of X is recovered from the topological space T(X) as the specialization preorder.
Moreover for every Alexandrov-discrete space X, we have T(W(X)) = X, i.e. the Alexandrov topology of X is recovered as the topology induced by the specialization preorder.
However for a topological space in general we do not have T(W(X)) = X. Rather T(W(X)) will be the set X with a finer topology than that of X (i.e. it will have more open sets).
The topology of T(W(X)) induces the same specialization preorder as the original topology of the space X and is in fact the finest topology on X with that property.
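As a concrete illustration of the two constructions T and W, the following Python sketch (the three-element preorder and all helper names are illustrative, not from the source) builds the Alexandrov topology of a finite preorder as its family of up-sets and then recovers the original preorder as the specialization preorder:

```python
from itertools import chain, combinations

# Hypothetical example: X = {a, b, c} preordered by a <= b and a <= c (plus reflexivity).
X = {'a', 'b', 'c'}
leq = {('a', 'a'), ('b', 'b'), ('c', 'c'), ('a', 'b'), ('a', 'c')}

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_up_set(s):
    # S is open iff x in S and x <= y together imply y in S.
    return all(y in s for (x, y) in leq if x in s)

open_sets = [s for s in powerset(X) if is_up_set(s)]   # the Alexandrov topology T(X)

def specialization_leq(x, y):
    # x <= y iff x lies in the closure of {y}, i.e. every open set containing x also contains y.
    return all(y in s for s in open_sets if x in s)

recovered = {(x, y) for x in X for y in X if specialization_leq(x, y)}
assert recovered == leq   # W(T(X)) = X: the preorder is recovered from its Alexandrov topology
```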
Equivalence between monotonicity and continuity
Given a monotone function
f : X→Y
between two preordered sets (i.e. a function
f : X→Y
between the underlying sets such that x ≤ y in X implies f(x) ≤ f(y) in Y), let
T(f) : T(X)→T(Y)
be the same map as f considered as a map between the corresponding Alexandrov spaces. Then T(f) is a continuous map.
Conversely given a continuous map
g: X→Y
between two topological spaces, let
W(g) : W(X)→W(Y)
be the same map as g considered as a map between the corresponding preordered sets. Then W(g) is a monotone function.
Thus a map between two preordered sets is monotone if and only if it is a continuous map between the corresponding Alexandrov-discrete spaces. Conversely a map between two Alexandrov-discrete spaces is continuous if and only if it is a monotone function between the corresponding preordered sets.
Notice however that in the case of topologies other than the Alexandrov topology, we can have a map between two topological spaces that is not continuous but which is nevertheless still a monotone function between the corresponding preordered sets. (To see this consider a non-Alexandrov-discrete space X and consider the identity map i : X→T(W(X)).)
Category theoretic description of the equivalence
Let Set denote the category of sets and maps. Let Top denote the category of topological spaces and continuous maps; and let Pro denote the category of preordered sets and monotone functions. Then
T : Pro→Top and
W : Top→Pro
are concrete functors over Set that are left and right adjoints respectively.
Let Alx denote the full subcategory of Top consisting of the Alexandrov-discrete spaces. Then the restrictions
T : Pro→Alx and
W : Alx→Pro
are inverse concrete isomorphisms over Set.
Alx is in fact a bico-reflective subcategory of Top with bico-reflector T◦W : Top→Alx. This means that given a topological space X, the identity map
i : T(W(X))→X
is continuous and for every continuous map
f : Y→X
where Y is an Alexandrov-discrete space, the composition
i −1◦f : Y→T(W(X))
is continuous.
Relationship to the construction of modal algebras from modal frames
Given a preordered set X, the interior operator and closure operator of T(X) are given by:
Int(S) = { x ∈ S : for all y ∈ X, x ≤ y implies y ∈ S }, and
Cl(S) = { x ∈ X : there exists a y ∈ S with x ≤ y }
for all S ⊆ X.
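The following sketch (a hypothetical Python illustration; the three-element chain is an example, not from the source) evaluates these two operators directly from a preorder:

```python
# Hypothetical example: the chain 1 <= 2 <= 3.
X = {1, 2, 3}
leq = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (1, 3)}

def interior(S):
    # Int(S) = { x in S : for all y in X, x <= y implies y in S }
    return {x for x in S if all(y in S for (a, y) in leq if a == x)}

def closure(S):
    # Cl(S) = { x in X : there exists y in S with x <= y }
    return {x for x in X if any((x, y) in leq for y in S)}

print(interior({2, 3}))   # {2, 3}: an up-set is open, so it equals its own interior
print(interior({1, 2}))   # set(): 3 lies above both points but is missing from the set
print(closure({2}))       # {1, 2}: the down-set of 2
```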
Considering the interior operator and closure operator to be modal operators on the power set Boolean algebra of X, this construction is a special case of the construction of a modal algebra from a modal frame i.e. from a set with a single binary relation. (The latter construction is itself a special case of a more general construction of a complex algebra from a relational structure i.e. a set with relations defined on it.) The class of modal algebras that we obtain in the case of a preordered set is the class of interior algebras—the algebraic abstractions of topological spaces.
Properties
Every subspace of an Alexandrov-discrete space is Alexandrov-discrete.
The product of two Alexandrov-discrete spaces is Alexandrov-discrete.
Every Alexandrov topology is first countable.
Every Alexandrov topology is locally compact in the sense that every point has a local base of compact neighbourhoods, since the smallest neighbourhood of a point is always compact. Indeed, if U is the smallest (open) neighbourhood of a point x, then in U itself with the subspace topology any open cover of U contains a neighbourhood of x included in U. Such a neighbourhood is necessarily equal to U, so the open cover admits {U} as a finite subcover.
Every Alexandrov topology is locally path connected.
History
Alexandrov spaces were first introduced in 1937 by P. S. Alexandrov under the name discrete spaces, where he provided the characterizations in terms of sets and neighbourhoods. The name discrete spaces later came to be used for topological spaces in which every subset is open, and the original concept lay forgotten in the topological literature. On the other hand, Alexandrov spaces played a relevant role in Øystein Ore's pioneering studies on closure systems and their relationships with lattice theory and topology.
With the advancement of categorical topology in the 1980s, Alexandrov spaces were rediscovered when the concept of finite generation was applied to general topology and the name finitely generated spaces was adopted for them. Alexandrov spaces were also rediscovered around the same time in the context of topologies resulting from denotational semantics and domain theory in computer science.
In 1966 Michael C. McCord and A. K. Steiner each independently observed an equivalence between partially ordered sets and spaces that were precisely the T0 versions of the spaces that Alexandrov had introduced. P. T. Johnstone referred to such topologies as Alexandrov topologies. F. G. Arenas independently proposed this name for the general version of these topologies. McCord also showed that these spaces are weak homotopy equivalent to the order complex of the corresponding partially ordered set. Steiner demonstrated that the equivalence is a contravariant lattice isomorphism preserving arbitrary meets and joins as well as complementation.
It was also a well-known result in the field of modal logic that an equivalence exists between finite topological spaces and preorders on finite sets (the finite modal frames for the modal logic S4). A. Grzegorczyk observed that this extended to an equivalence between what he referred to as totally distributive spaces and preorders. C. Naturman observed that these spaces were the Alexandrov-discrete spaces and extended the result to a category-theoretic equivalence between the category of Alexandrov-discrete spaces and (open) continuous maps, and the category of preorders and (bounded) monotone maps, providing the preorder characterizations as well as the interior and closure algebraic characterizations.
A systematic investigation of these spaces from the point of view of general topology, which had been neglected since the original paper by Alexandrov, was taken up by F. G. Arenas.
See also
P-space, a space satisfying the weaker condition that countable intersections of open sets are open
References
Closure operators
Order theory
Properties of topological spaces | Alexandrov topology | [
"Mathematics"
] | 2,579 | [
"Closure operators",
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Order theory"
] |
542,241 | https://en.wikipedia.org/wiki/Tetra | Tetra is the common name of many small freshwater characiform fishes. Tetras come from Africa, Central America, and South America, belonging to the biological family Characidae and to its former subfamilies Alestidae (the "African tetras") and Lebiasinidae. The Characidae are distinguished from other fish by the presence of a small adipose fin between the dorsal and caudal fins. Many of these, such as the neon tetra (Paracheirodon innesi), are brightly colored and easy to keep in captivity. Consequently, they are extremely popular for home aquaria.
Tetra is no longer a taxonomic, phylogenetic term. It is short for Tetragonopterus, a genus name formerly applied to many of these fish, which is Greek for "square-finned" (literally, four-sided-wing).
Because of the popularity of tetras in the fishkeeping hobby, many unrelated fish are commonly known as tetras, including species from different families. Even vastly different fish may be called tetras. For example, payara (Hydrolycus scomberoides) is occasionally known as the "sabretooth tetra" or "vampire tetra".
Tetras generally have compressed (sometimes deep), fusiform bodies and are typically identifiable by their fins. They ordinarily possess a homocercal caudal fin (a twin-lobed, or forked, tail fin whose upper and lower lobes are of equal size) and a tall dorsal fin characterized by a short connection to the fish's body. Additionally, tetras possess a long anal fin stretching from a position just posterior of the dorsal fin and ending on the ventral caudal peduncle, and a small, fleshy adipose fin located dorsally between the dorsal and caudal fins. This adipose fin represents the fourth unpaired fin on the fish (the four unpaired fins are the caudal fin, dorsal fin, anal fin, and adipose fin), lending to the name tetra, which is Greek for four. While this adipose fin is generally considered the distinguishing feature, some tetras (such as the emperor tetras, Nematobrycon palmeri) lack this appendage. Ichthyologists debate the function of the adipose fin, doubting its role in swimming due to its small size and lack of stiffening rays or spines.
Although the list below is sorted by common name, in a number of cases, the common name is applied to different species. Since the aquarium trade may use a different name for the same species, advanced aquarists tend to use scientific names for the less-common tetras. The list below is incomplete.
Species
Tetra species:
A–D
Adonis tetra, Lepidarchus adonis
African long-finned tetra, Brycinus longipinnis
African moon tetra, Bathyaethiops caudomaculatus
Arnold's tetra, Arnoldichthys spilopterus
Banded tetra, Psalidodon fasciatus
Bandtail tetra, Moenkhausia dichroura
Barred glass tetra, Phenagoniates macrolepis
Beacon tetra, Hemigrammus ocellifer
Belgian flag tetra, Hyphessobrycon heterorhabdus
black morpho tetra, Poecilocharax weitzmani
Black neon tetra, Hyphessobrycon herbertaxelrodi
Black phantom tetra, Hyphessobrycon megalopterus
Black tetra or butterfly tetra, Gymnocorymbus ternetzi
Black tetra, Gymnocorymbus thayeri
Black wedge tetra, Hemigrammus pulcher
Blackband tetra, Hyphessobrycon scholzei
Blackedge tetra, Tyttocharax madeirae
Black-flag tetra, Hyphessobrycon rosaceus
Black-jacket tetra, Moenkhausia takasei
blackline tetra, Hyphessobrycon scholzei
Bleeding heart tetra, Hyphessobrycon erythrostigma
Blind tetra, Stygichthys typhlops
Bloodfin tetra, Aphyocharax anisitsi
blue tetra, Boehlkea fredcochui
blue tetra, Mimagoniates microlepis
blue tetra, Tyttocharax madeirae
Bucktooth tetra, Exodon paradoxus
Buenos Aires tetra, Psalidodon anisitsi
Callistus tetra, Hyphessobrycon eques
calypso tetra, Hyphessobrycon axelrodi
Candy cane tetra, Hyphessobrycon sp. HY511
Cardinal tetra, Paracheirodon axelrodi
Carlana tetra, Carlana eigenmanni
Cochu's blue tetra, Knodus borki
Colombian tetra, Hyphessobrycon columbianus
Central tetra, Astyanax aeneus
Coffee-bean tetra, Hyphessobrycon takasei
Colcibolca tetra, Astyanax nasutus
Congo tetra, Phenacogrammus interruptus
Copper tetra, Hasemania melanura
Costello tetra, Hemigrammus hyanuary
Creek tetra, Bryconamericus scleroparius
Creek tetra, Bryconamericus terrabensis
Croaking tetra, Mimagoniates inequalis
Croaking tetra, Mimagoniates lateralis
Croaking tetra, Mimagoniates microlepis
Dawn tetra, Aphyocharax paraguayensis
Dawn tetra, Hyphessobrycon eos
Diamond tetra, Moenkhausia pittieri
Discus tetra, Brachychalcinus orbicularis
Disk tetra, Myleus schomburgkii
Dragonfin tetra, Pseudocorynopoma doriae
E–Q
Ember tetra, Hyphessobrycon amandae
Emperor tetra, Nematobrycon palmeri
False black tetra, Gymnocorymbus thayeri
False rummynose tetra, Petitella georgiae
Featherfin tetra, Hemigrammus unilineatus
Firehead tetra, Petitella bleheri
Flag tetra, Hyphessobrycon heterorhabdus
Flame tail tetra, Aphyocharax erythrurus
Flame tetra, Hyphessobrycon flammeus
Garnet tetra, Hemigrammus pulcher
Glass tetra, Moenkhausia oligolepis
Glass bloodfin tetra, Prionobrama filigera
Glossy tetra, Moenkhausia oligolepis
Glowlight tetra, Hemigrammus erythrozonus
Gold tetra (aka golden tetra, or brass tetra), Hemigrammus rodwayi
Goldencrown tetra, Aphyocharax alburnus
Goldspotted tetra, Hyphessobrycon griemi
Gold-tailed tetra, Carlastyanax aurocaudatus
Green dwarf tetra, Odontocharacidium aphanes
Green neon tetra, Paracheirodon simulans
Griem's tetra, Hyphessobrycon griemi
Head & Taillight tetra, Hemigrammus ocellifer
January tetra, Hemigrammus hyanuary
Jellybean tetra, Lepidarchus adonis
Jewel tetra, Hyphessobrycon eques
Jumping tetra, Hemibrycon tridens
Largespot tetra, Astyanax orthodus
Lemon tetra, Hyphessobrycon pulchripinnis
Longfin tetra, Brycinus longipinnis
Long-finned glass tetra, Xenagoniates bondi
Longjaw tetra, Bramocharax bransfordii
Loreto tetra, Hyphessobrycon loretoensis
Mayan tetra, Hyphessobrycon compressus
Mexican tetra, Astyanax mexicanus
Mimic scale-eating tetra, Deuterodon heterostomus
Mourning tetra, Brycon pesu
Naked tetra, Gymnocharacinus bergii
Neon tetra, Paracheirodon innesi
Niger tetra, Arnoldichthys spilopterus
Nurse tetra, Brycinus nurse
Oneline tetra, Nannaethiops unitaeniatus
One-line tetra, Hemigrammus unilineatus
Orangefin tetra, Bryconops affinis
Ornate tetra, Hyphessobrycon bentosi
Panama tetra, Hyphessobrycon panamensis
Penguin tetra, Thayeria boehlkei
Peruvian tetra, Hyphessobrycon peruvianus
Petticoat tetra, Gymnocorymbus ternetzi
Phantom tetra, Hyphessobrycon megalopterus
Pittier's tetra, Moenkhausia pittieri
Pretty tetra, Hemigrammus pulcher
Pristella tetra, Pristella maxillaris
Pygmy tetra, Odontostilbe dialeptura
R–Z
rainbow tetra, Nematobrycon lacortei
rainbow tetra, Nematobrycon palmeri
Red eye tetra, Moenkhausia sanctaefilomenae
Red phantom tetra, Hyphessobrycon sweglesi
Red tetra or rio tetra, Hyphessobrycon flammeus
Redspotted tetra, Copeina guttata
Rosy tetra, Hyphessobrycon rosaceus
Royal tetra, Inpaichthys kerri
Ruby tetra, Axelrodia riesei
Rummy-nose tetra, Petitella rhodostoma
brilliant rummy-nose tetra, Petitella bleheri
Sailfin tetra, Crenuchus spilurus
Savage tetra, Hyphessobrycon savagei
Savanna tetra, Hyphessobrycon stegemanni
Semaphore tetra, Pterobrycon myrnae
Serpae tetra, Hyphessobrycon eques
Sharptooth tetra, Micralestes acutidens
Silver tetra, Ctenobrycon spilurus
Silver tetra, Gymnocorymbus thayeri
Silver tetra, Micralestes acutidens
Silvertip tetra, Hasemania melanura
Silvertip tetra, Hasemania nana
Splash tetra, Copella arnoldi
Spot-fin tetra, Hyphessobrycon socolofi
Spottail tetra, Moenkhausia dichroura
Spotted tetra, Copella nattereri
Swegles's tetra, Hyphessobrycon sweglesi
Tailspot tetra, Bryconops caudomaculatus
Three-lined African tetra, Neolebias trilineatus
Tietê tetra, Brycon insignis
Tortuguero tetra, Hyphessobrycon tortuguerae
transparent tetra, Charax gibbosus
True big-scale tetra, Brycinus macrolepidotus
Uruguay tetra, Cheirodon interruptus
White spot tetra, Aphyocharax paraguayensis
x-ray tetra, Pristella maxillaris
Yellow tetra, Hyphessobrycon bifasciatus
Yellow-tailed African tetra, Alestopetersius caudalis
References
Fish common names
Paraphyletic groups | Tetra | [
"Biology"
] | 2,450 | [
"Phylogenetics",
"Paraphyletic groups"
] |
542,326 | https://en.wikipedia.org/wiki/Borel%20regular%20measure | In mathematics, an outer measure μ on n-dimensional Euclidean space Rn is called a Borel regular measure if the following two conditions hold:
Every Borel set B ⊆ Rn is μ-measurable in the sense of Carathéodory's criterion: for every A ⊆ Rn, μ(A) = μ(A ∩ B) + μ(A \ B).
For every set A ⊆ Rn there exists a Borel set B ⊆ Rn such that A ⊆ B and μ(A) = μ(B).
Notice that the set A need not be μ-measurable: μ(A) is however well defined as μ is an outer measure.
An outer measure satisfying only the first of these two requirements is called a Borel measure, while an outer measure satisfying only the second requirement (with the Borel set B replaced by a measurable set B) is called a regular measure.
The Lebesgue outer measure on Rn is an example of a Borel regular measure.
It can be proved that a Borel regular measure, although introduced here as an outer measure (only countably subadditive), becomes a full measure (countably additive) if restricted to the Borel sets.
References
Measures (measure theory) | Borel regular measure | [
"Physics",
"Mathematics"
] | 236 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
542,396 | https://en.wikipedia.org/wiki/Stress%E2%80%93strain%20analysis | Stress–strain analysis (or stress analysis) is an engineering discipline that uses many methods to determine the stresses and strains in materials and structures subjected to forces. In continuum mechanics, stress is a physical quantity that expresses the internal forces that neighboring particles of a continuous material exert on each other, while strain is the measure of the deformation of the material.
In simple terms, stress can be defined as the force of resistance per unit area offered by a body against deformation. Stress is the ratio of force over area (S = R/A, where S is the stress, R is the internal resisting force and A is the cross-sectional area). Strain is the ratio of the change in length to the original length when a body is subjected to an external force (strain = change in length ÷ original length).
Stress analysis is a primary task for civil, mechanical and aerospace engineers involved in the design of structures of all sizes, such as tunnels, bridges and dams, aircraft and rocket bodies, mechanical parts, and even plastic cutlery and staples. Stress analysis is also used in the maintenance of such structures, and to investigate the causes of structural failures.
Typically, the starting point for stress analysis are a geometrical description of the structure, the properties of the materials used for its parts, how the parts are joined, and the maximum or typical forces that are expected to be applied to the structure. The output data is typically a quantitative description of how the applied forces spread throughout the structure, resulting in stresses, strains and the deflections of the entire structure and each component of that structure. The analysis may consider forces that vary with time, such as engine vibrations or the load of moving vehicles. In that case, the stresses and deformations will also be functions of time and space.
In engineering, stress analysis is often a tool rather than a goal in itself; the ultimate goal being the design of structures and artifacts that can withstand a specified load, using the minimum amount of material or that satisfies some other optimality criterion.
Stress analysis may be performed through classical mathematical techniques, analytic mathematical modelling or computational simulation, experimental testing, or a combination of methods.
The term stress analysis is used throughout this article for the sake of brevity, but it should be understood that the strains and deflections of structures are of equal importance; in fact, an analysis of a structure may begin with the calculation of deflections or strains and end with the calculation of the stresses.
Scope
General principles
Stress analysis is specifically concerned with solid objects. The study of stresses in liquids and gases is the subject of fluid mechanics.
Stress analysis adopts the macroscopic view of materials characteristic of continuum mechanics, namely that all properties of materials are homogeneous at small enough scales. Thus, even the smallest particle considered in stress analysis still contains an enormous number of atoms, and its properties are averages of the properties of those atoms.
In stress analysis one normally disregards the physical causes of forces or the precise nature of the materials. Instead, one assumes that the stresses are related to strain of the material by known constitutive equations.
By Newton's laws of motion, any external forces that act on a system must be balanced by internal reaction forces, or cause the particles in the affected part to accelerate. In a solid object, all particles must move substantially in concert in order to maintain the object's overall shape. It follows that any force applied to one part of a solid object must give rise to internal reaction forces that propagate from particle to particle throughout an extended part of the system. With very rare exceptions (such as ferromagnetic materials or planet-scale bodies), internal forces are due to very short range intermolecular interactions, and are therefore manifested as surface contact forces between adjacent particles — that is, as stress.
Fundamental problem
The fundamental problem in stress analysis is to determine the distribution of internal stresses throughout the system, given the external forces that are acting on it. In principle, that means determining, implicitly or explicitly, the Cauchy stress tensor at every point.
The external forces may be body forces (such as gravity or magnetic attraction), that act throughout the volume of a material; or concentrated loads (such as friction between an axle and a bearing, or the weight of a train wheel on a rail), that are imagined to act over a two-dimensional area, or along a line, or at single point. The same net external force will have a different effect on the local stress depending on whether it is concentrated or spread out.
Types of structures
In civil engineering applications, one typically considers structures to be in static equilibrium: that is, they are either unchanging with time, or are changing slowly enough for viscous stresses to be unimportant (quasi-static). In mechanical and aerospace engineering, however, stress analysis must often be performed on parts that are far from equilibrium, such as vibrating plates or rapidly spinning wheels and axles. In those cases, the equations of motion must include terms that account for the acceleration of the particles. In structural design applications, one usually tries to ensure the stresses are everywhere well below the yield strength of the material. In the case of dynamic loads, the material fatigue must also be taken into account. However, these concerns lie outside the scope of stress analysis proper, being covered in materials science under the names strength of materials, fatigue analysis, stress corrosion, creep modeling, and others.
Experimental methods
Stress analysis can be performed experimentally by applying forces to a test element or structure and then determining the resulting stress using sensors. In this case the process would more properly be known as testing (destructive or non-destructive). Experimental methods may be used in cases where mathematical approaches are cumbersome or inaccurate. Special equipment appropriate to the experimental method is used to apply the static or dynamic loading.
There are a number of experimental methods which may be used:
Tensile testing is a fundamental materials science test in which a sample is subjected to uniaxial tension until failure. The results from the test are commonly used to select a material for an application, for quality control, or to predict how a material will react under other types of forces. Properties that are directly measured via a tensile test are the ultimate tensile strength, maximum elongation and reduction in cross-section area. From these measurements, properties such as Young's modulus, Poisson's ratio, yield strength, and the strain-hardening characteristics of the sample can be determined.
Strain gauges can be used to experimentally determine the deformation of a physical part. A commonly used type of strain gauge is a thin flat resistor that is affixed to the surface of a part, and which measures the strain in a given direction. From the measurement of strain on a surface in three directions the stress state that developed in the part can be calculated.
Neutron diffraction is a technique that can be used to determine the subsurface strain in a part.
The photoelastic method relies on the fact that some materials exhibit birefringence on the application of stress, and the magnitude of the refractive indices at each point in the material is directly related to the state of stress at that point. The stresses in a structure can be determined by making a model of the structure from such a photoelastic material.
Dynamic mechanical analysis (DMA) is a technique used to study and characterize viscoelastic materials, particularly polymers. The viscoelastic property of a polymer is studied by dynamic mechanical analysis where a sinusoidal force (stress) is applied to a material and the resulting displacement (strain) is measured. For a perfectly elastic solid, the resulting strains and the stresses will be perfectly in phase. For a purely viscous fluid, there will be a 90 degree phase lag of strain with respect to stress. Viscoelastic polymers have the characteristics in between where some phase lag will occur during DMA tests.
Mathematical methods
While experimental techniques are widely used, most stress analysis is done by mathematical methods, especially during design.
Differential formulation
The basic stress analysis problem can be formulated by Euler's equations of motion for continuous bodies (which are consequences of Newton's laws for conservation of linear momentum and angular momentum) and the Euler-Cauchy stress principle, together with the appropriate constitutive equations.
These laws yield a system of partial differential equations that relate the stress tensor field to the strain tensor field as unknown functions to be determined. Solving for either then allows one to solve for the other through another set of equations called constitutive equations. Both the stress and strain tensor fields will normally be continuous within each part of the system and that part can be regarded as a continuous medium with smoothly varying constitutive equations.
The external body forces will appear as the independent ("right-hand side") term in the differential equations, while the concentrated forces appear as boundary conditions. An external (applied) surface force, such as ambient pressure or friction, can be incorporated as an imposed value of the stress tensor across that surface. External forces that are specified as line loads (such as traction) or point loads (such as the weight of a person standing on a roof) introduce singularities in the stress field, and may be introduced by assuming that they are spread over small volume or surface area. The basic stress analysis problem is therefore a boundary-value problem.
Elastic and linear cases
A system is said to be elastic if any deformations caused by applied forces will spontaneously and completely disappear once the applied forces are removed. The calculation of the stresses (stress analysis) that develop within such systems is based on the theory of elasticity and infinitesimal strain theory. When the applied loads cause permanent deformation, one must use more complicated constitutive equations, that can account for the physical processes involved (plastic flow, fracture, phase change, etc.)
Engineered structures are usually designed so that the maximum expected stresses are well within the realm of linear elastic (the generalization of Hooke’s law for continuous media) behavior for the material from which the structure will be built. That is, the deformations caused by internal stresses are linearly related to the applied loads. In this case the differential equations that define the stress tensor are also linear. Linear equations are much better understood than non-linear ones; for one thing, their solution (the calculation of stress at any desired point within the structure) will also be a linear function of the applied forces. For small enough applied loads, even non-linear systems can usually be assumed to be linear.
Built-in stress (preloaded)
A preloaded structure is one that has internal forces, stresses, and strains imposed within it by various means prior to application of externally applied forces. For example, a structure may have cables that are tightened, causing forces to develop in the structure, before any other loads are applied. Tempered glass is a commonly found example of a preloaded structure that has tensile forces and stresses that act on the plane of the glass and in the central plane of glass that causes compression forces to act on the external surfaces of that glass.
The mathematical problem represented is typically ill-posed because it has an infinitude of solutions. In fact, in any three-dimensional solid body one may have infinitely many (and infinitely complicated) non-zero stress tensor fields that are in stable equilibrium even in the absence of external forces. These stress fields are often termed hyperstatic stress fields and they co-exist with the stress fields that balance the external forces. In linear elasticity, their presence is required to satisfy the strain/displacement compatibility requirements and in limit analysis their presence is required to maximise the load carrying capacity of the structure or component.
Such built-in stress may occur due to many physical causes, either during manufacture (in processes like extrusion, casting or cold working), or after the fact (for example because of uneven heating, or changes in moisture content or chemical composition). However, if the system can be assumed to behave in a linear fashion with respect to the loading and response of the system, then effect of preload can be accounted for by adding the results of a preloaded structure and the same non-preloaded structure.
If linearity cannot be assumed, however, any built-in stress may affect the distribution of internal forces induced by applied loads (for example, by changing the effective stiffness of the material) or even cause an unexpected material failure. For these reasons, a number of techniques have been developed to avoid or reduce built-in stress, such as annealing of cold-worked glass and metal parts, expansion joints in buildings, and roller joints for bridges.
Simplifications
Stress analysis is simplified when the physical dimensions and the distribution of loads allow the structure to be treated as one- or two-dimensional. In the analysis of a bridge, its three-dimensional structure may be idealized as a single planar structure if all forces are acting in the plane of the trusses of the bridge. Further, each member of the truss structure might then be treated as a one-dimensional member with the forces acting along its axis. In that case, the differential equations reduce to a finite set of equations with finitely many unknowns.
If the stress distribution can be assumed to be uniform (or predictable, or unimportant) in one direction, then one may use the assumption of plane stress and plane strain behavior and the equations that describe the stress field are then a function of two coordinates only, instead of three.
Even under the assumption of linear elastic behavior of the material, the relation between the stress and strain tensors is generally expressed by a fourth-order stiffness tensor with 21 independent coefficients (a symmetric 6 × 6 stiffness matrix). This complexity may be required for general anisotropic materials, but for many common materials it can be simplified. For orthotropic materials such as wood, whose stiffness is symmetric with respect to each of three orthogonal planes, nine coefficients suffice to express the stress–strain relationship. For isotropic materials, these coefficients reduce to only two.
One may be able to determine a priori that, in some parts of the system, the stress will be of a certain type, such as uniaxial tension or compression, simple shear, isotropic compression or tension, torsion, bending, etc. In those parts, the stress field may then be represented by fewer than six numbers, and possibly just one.
Solving the equations
In any case, for two- or three-dimensional domains one must solve a system of partial differential equations with specified boundary conditions. Analytical (closed-form) solutions to the differential equations can be obtained when the geometry, constitutive relations, and boundary conditions are simple enough. For more complicated problems one must generally resort to numerical approximations such as the finite element method, the finite difference method, and the boundary element method.
Factor of safety
The ultimate purpose of any analysis is to allow the comparison of the developed stresses, strains, and deflections with those that are allowed by the design criteria. All structures, and components thereof, must obviously be designed to have a capacity greater than what is expected to develop during the structure's use, so as to obviate failure. The stress that is calculated to develop in a member is compared to the strength of the material from which the member is made by calculating the ratio of the strength of the material to the calculated stress. The ratio must be greater than 1.0 if the member is not to fail. However, the ratio of the allowable stress to the developed stress must be greater than 1.0, as a factor of safety (design factor) will be specified in the design requirement for the structure. All structures are designed to exceed the loads those structures are expected to experience during their use. The design factor (a number greater than 1.0) represents the degree of uncertainty in the value of the loads, material strength, and consequences of failure. The stress (or load, or deflection) the structure is expected to experience is known as the working, design or limit stress. The limit stress, for example, is chosen to be some fraction of the yield strength of the material from which the structure is made. The ratio of the ultimate strength of the material to the allowable stress is defined as the factor of safety against ultimate failure.
Laboratory tests are usually performed on material samples in order to determine the yield and ultimate strengths of those materials. A statistical analysis of the strength of many samples of a material is performed to calculate the particular material strength of that material. The analysis allows for a rational method of defining the material strength and results in a value less than, for example, 99.99% of the values from samples tested. By that method, in a sense, a separate factor of safety has been applied over and above the design factor of safety applied to a particular design that uses said material.
The purpose of maintaining a factor of safety on yield strength is to prevent detrimental deformations that would impair the use of the structure. An aircraft with a permanently bent wing might not be able to move its control surfaces, and hence, is inoperable. While yielding of material of structure could render the structure unusable it would not necessarily lead to the collapse of the structure. The factor of safety on ultimate tensile strength is to prevent sudden fracture and collapse, which would result in greater economic loss and possible loss of life.
An aircraft wing might be designed with a factor of safety of 1.25 on the yield strength of the wing and a factor of safety of 1.5 on its ultimate strength. The test fixtures that apply those loads to the wing during the test might be designed with a factor of safety of 3.0 on ultimate strength, while the structure that shelters the test fixture might have an ultimate factor of safety of ten. These values reflect the degree of confidence the responsible authorities have in their understanding of the load environment, their certainty of the material strengths, the accuracy of the analytical techniques used in the analysis, the value of the structures, the value of the lives of those flying, those near the test fixtures, and those within the building.
The factor of safety is used to calculate a maximum allowable stress:

maximum allowable stress = material strength / factor of safety
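A minimal sketch of this calculation (the material strengths are assumed values, not from the source), reusing the wing design factors quoted above:

```python
def allowable_stress(material_strength_mpa: float, factor_of_safety: float) -> float:
    """Maximum allowable stress = material strength / factor of safety."""
    return material_strength_mpa / factor_of_safety

yield_strength = 350.0      # MPa, assumed value for illustration
ultimate_strength = 480.0   # MPa, assumed value for illustration

# Using the example design factors quoted above (1.25 on yield, 1.5 on ultimate strength):
print(allowable_stress(yield_strength, 1.25))     # 280.0 MPa allowed against yielding
print(allowable_stress(ultimate_strength, 1.5))   # 320.0 MPa allowed against ultimate failure
```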
Load transfer
The evaluation of loads and stresses within structures is directed to finding the load transfer path. Loads will be transferred by physical contact between the various component parts and within structures. The load transfer may be identified visually or by simple logic for simple structures. For more complex structures more complex methods, such as theoretical solid mechanics or numerical methods may be required. Numerical methods include direct stiffness method which is also referred to as the finite element method.
The object is to determine the critical stresses in each part, and compare them to the strength of the material (see strength of materials).
For parts that have broken in service, a forensic engineering or failure analysis is performed to identify weakness, where broken parts are analysed for the cause or causes of failure. The method seeks to identify the weakest component in the load path. If this is the part which actually failed, then it may corroborate independent evidence of the failure. If not, then another explanation has to be sought, such as a defective part with a lower tensile strength than it should for example.
Uniaxial stress
A linear element of a structure is one that is essentially one-dimensional and is often subject to axial loading only. When a structural element is subjected to tension or compression its length will tend to elongate or shorten, and its cross-sectional area changes by an amount that depends on the Poisson's ratio of the material. In engineering applications, structural members experience small deformations and the reduction in cross-sectional area is very small and can be neglected, i.e., the cross-sectional area is assumed constant during deformation. For this case, the stress is called engineering stress or nominal stress and is calculated using the original cross section:

σ = P / Ao

where P is the applied load, and Ao is the original cross-sectional area.
In some other cases, e.g., elastomers and plastic materials, the change in cross-sectional area is significant. For the case of materials where the volume is conserved (i.e. Poisson's ratio = 0.5), if the true stress is desired, it must be calculated using the true cross-sectional area instead of the initial cross-sectional area, as:

σ_true = (1 + ε_e) σ_e

where ε_e is the engineering (nominal) strain and σ_e is the engineering (nominal) stress.
The relationship between true strain and engineering strain is given by

ε_true = ln(1 + ε_e).
In uniaxial tension, true stress is then greater than nominal stress. The converse holds in compression.
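A short sketch of these relations for an assumed tensile test point (the load, area and strain values are invented; the true-stress conversion assumes constant volume, i.e. Poisson's ratio of 0.5):

```python
import math

# Hypothetical tensile test point (values for illustration only).
P = 10_000.0       # applied load, N
A0 = 50.0e-6       # original cross-sectional area, m^2
eng_strain = 0.20  # engineering strain at this load

eng_stress = P / A0                          # engineering (nominal) stress, Pa
true_stress = eng_stress * (1 + eng_strain)  # valid only if volume is conserved
true_strain = math.log(1 + eng_strain)

print(f"engineering stress = {eng_stress/1e6:.1f} MPa")   # 200.0 MPa
print(f"true stress        = {true_stress/1e6:.1f} MPa")  # 240.0 MPa, greater than nominal in tension
print(f"true strain        = {true_strain:.3f}")          # 0.182, less than the 0.20 engineering strain
```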
Graphical representation of stress at a point
Mohr's circle, Lamé's stress ellipsoid (together with the stress director surface), and Cauchy's stress quadric are graphical representations of the state of stress at a point. They allow for the graphical determination of the stress components acting on all planes passing through a given point. Mohr's circle is the most common graphical method.
Mohr's circle, named after Christian Otto Mohr, is the locus of points that represent the state of stress on individual planes at all their orientations. The abscissa and ordinate of each point on the circle are, respectively, the normal stress and shear stress components acting on a particular cut plane with a given unit normal vector.
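For plane stress, the quantities plotted on Mohr's circle follow directly from the stress components; the sketch below, with assumed values, computes the circle's centre and radius and hence the principal stresses and the maximum in-plane shear stress:

```python
import math

# Assumed plane-stress components (MPa), for illustration only.
sigma_x, sigma_y, tau_xy = 80.0, -20.0, 30.0

center = (sigma_x + sigma_y) / 2.0                      # centre of Mohr's circle
radius = math.hypot((sigma_x - sigma_y) / 2.0, tau_xy)  # circle radius

sigma_1 = center + radius  # major principal stress
sigma_2 = center - radius  # minor principal stress
tau_max = radius           # maximum in-plane shear stress
theta_p = 0.5 * math.degrees(math.atan2(2 * tau_xy, sigma_x - sigma_y))

print(f"principal stresses: {sigma_1:.1f}, {sigma_2:.1f} MPa")
print(f"max in-plane shear: {tau_max:.1f} MPa at plane angle {theta_p:.1f} deg")
```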
Lamé's stress ellipsoid
The surface of the ellipsoid represents the locus of the endpoints of all stress vectors acting on all planes passing through a given point in the continuum body. In other words, the endpoints of all stress vectors at a given point in the continuum body lie on the stress ellipsoid surface, i.e., the radius-vector from the center of the ellipsoid, located at the material point in consideration, to a point on the surface of the ellipsoid is equal to the stress vector on some plane passing through the point. In two dimensions, the surface is represented by an ellipse.
Cauchy's stress quadric
The Cauchy's stress quadric, also called the stress surface, is a surface of the second order that traces the variation of the normal stress vector as the orientation of the planes passing through a given point is changed.
The complete state of stress in a body at a particular deformed configuration, i.e., at a particular time during the motion of the body, implies knowing the six independent components of the stress tensor, or the three principal stresses, at each material point in the body at that time. However, numerical analysis and analytical methods allow only for the calculation of the stress tensor at a certain number of discrete material points. To graphically represent in two dimensions this partial picture of the stress field different sets of contour lines can be used:
Isobars are curves along which a principal stress, e.g. σ1, is constant.
Isochromatics are curves along which the maximum shear stress is constant. These curves are directly determined using photoelasticity methods.
Isopachs are curves along which the mean normal stress is constant.
Isostatics or stress trajectories are a system of curves which are, at each material point, tangent to the principal axes of stress.
Isoclinics are curves on which the principal axes make a constant angle with a given fixed reference direction. These curves can also be obtained directly by photoelasticity methods.
Slip lines are curves on which the shear stress is a maximum.
See also
Forensic engineering
Piping
Rockwell scale
Structural analysis
Stress
Worst case circuit analysis
List of finite element software packages
Stress–strain curve
References
Structural analysis | Stress–strain analysis | [
"Engineering"
] | 4,842 | [
"Structural engineering",
"Structural analysis",
"Mechanical engineering",
"Aerospace engineering"
] |
543,215 | https://en.wikipedia.org/wiki/Specific%20activity | Specific activity (symbol a) is the activity per unit mass of a radionuclide and is a physical property of that radionuclide.
It is usually given in units of becquerel per kilogram (Bq/kg), but another commonly used unit of specific activity is the curie per gram (Ci/g).
In the context of radioactivity, activity or total activity (symbol A) is a physical quantity defined as the number of radioactive transformations per second that occur in a particular radionuclide. The unit of activity is the becquerel (symbol Bq), which is defined as equivalent to reciprocal seconds (symbol s−1). The older, non-SI unit of activity is the curie (Ci), which is 3.7 × 10^10 radioactive decays per second. Another unit of activity is the rutherford, which is defined as 10^6 radioactive decays per second.
The specific activity should not be confused with level of exposure to ionizing radiation and thus the exposure or absorbed dose, which is the quantity important in assessing the effects of ionizing radiation on humans.
Since the probability of radioactive decay for a given radionuclide within a set time interval is fixed (with some slight exceptions, see changing decay rates), the number of decays that occur in a given time in a given mass (and hence a specific number of atoms) of that radionuclide is also fixed (ignoring statistical fluctuations).
Formulation
Relationship between λ and T1/2
Radioactivity is expressed as the decay rate of a particular radionuclide with decay constant λ and the number of atoms N: −dN/dt = λN.
The integral solution is described by exponential decay: N = N0 e^(−λt),
where N0 is the initial quantity of atoms at time t = 0.
Half-life T1/2 is defined as the length of time for half of a given quantity of radioactive atoms to undergo radioactive decay: N0/2 = N0 e^(−λT1/2).
Taking the natural logarithm of both sides, the half-life is given by T1/2 = ln(2)/λ.
Conversely, the decay constant λ can be derived from the half-life T1/2 as λ = ln(2)/T1/2.
Calculation of specific activity
The mass of the radionuclide is given by m = (N/NA) × M [grams],
where M is molar mass of the radionuclide, and NA is the Avogadro constant. Practically, the mass number A of the radionuclide is within a fraction of 1% of the molar mass expressed in g/mol and can be used as an approximation.
Specific radioactivity a is defined as radioactivity per unit mass of the radionuclide: a [Bq/g] = λN/m.
Thus, specific radioactivity can also be described by a = λNA/M = ln(2) × NA/(T1/2 × M).
This equation is simplified to a [Bq/g] ≈ 4.17 × 10^23/(T1/2 [s] × M [g/mol]).
When the unit of half-life is in years instead of seconds: a [Bq/g] ≈ 1.32 × 10^16/(T1/2 [yr] × M [g/mol]).
Example: specific activity of Ra-226
For example, the specific radioactivity of radium-226 with a half-life of 1600 years is obtained as a ≈ 1.32 × 10^16/(1600 × 226) ≈ 3.7 × 10^10 Bq/g.
This value derived from radium-226 was defined as unit of radioactivity known as the curie (Ci).
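A minimal numerical check of this example, using the relation above and approximating the molar mass of Ra-226 by its mass number:

```python
import math

N_A = 6.022e23            # Avogadro constant, 1/mol
SECONDS_PER_YEAR = 3.156e7

half_life_years = 1600.0  # Ra-226
molar_mass = 226.0        # g/mol, approximated by the mass number

decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)  # 1/s
specific_activity = decay_constant * N_A / molar_mass                # Bq/g

print(f"a(Ra-226) ≈ {specific_activity:.2e} Bq/g")  # about 3.7e10 Bq/g, i.e. roughly 1 Ci/g
```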
Calculation of half-life from specific activity
Experimentally measured specific activity can be used to calculate the half-life of a radionuclide.
The decay constant λ is related to the specific radioactivity a by the following equation: λ = a × M/NA (with a in Bq/g and M in g/mol).
Therefore, the half-life can also be described by T1/2 = ln(2) × NA/(a × M).
Example: half-life of Rb-87
One gram of rubidium-87 with a radioactivity count rate that, after taking solid angle effects into account, is consistent with a decay rate of 3200 decays per second corresponds to a specific activity of 3200 Bq/g. The rubidium-87 atomic mass is 87 g/mol, so one gram is 1/87 of a mole. Plugging in the numbers: T1/2 = ln(2) × NA/(a × M) = ln(2) × 6.022 × 10^23/(3200 × 87) ≈ 1.5 × 10^18 s, or about 4.7 × 10^10 years.
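The same calculation can be sketched numerically (constants are rounded; the 3200 Bq/g figure is the one quoted above):

```python
import math

N_A = 6.022e23              # Avogadro constant, 1/mol
SECONDS_PER_YEAR = 3.156e7

specific_activity = 3200.0  # Bq/g, from the measurement described above
molar_mass = 87.0           # g/mol for Rb-87 (approximate)

decay_constant = specific_activity * molar_mass / N_A  # 1/s
half_life_s = math.log(2) / decay_constant
half_life_years = half_life_s / SECONDS_PER_YEAR

print(f"T1/2(Rb-87) ≈ {half_life_years:.2e} years")  # about 4.7e10 years
```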
Other calculations
For a given mass m (in grams) of an isotope with atomic mass ma (in g/mol) and a half-life t1/2 (in s), the radioactivity can be calculated using: A [Bq] = (m/ma) × NA × ln(2)/t1/2.
With NA = 6.022 × 10^23 mol−1, the Avogadro constant.
Since m/ma is the number of moles (n), the amount of radioactivity can be calculated by: A [Bq] = n × NA × ln(2)/t1/2.
For instance, on average each gram of potassium contains 117 micrograms of 40K (all other naturally occurring isotopes are stable), which has a t1/2 of about 1.25 × 10^9 years ≈ 3.9 × 10^16 s and an atomic mass of 39.964 g/mol, so the amount of radioactivity associated with a gram of potassium is about 30 Bq.
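A quick numerical check of the potassium figure under the stated assumptions:

```python
import math

N_A = 6.022e23
SECONDS_PER_YEAR = 3.156e7

mass_k40 = 117e-6    # g of K-40 per gram of natural potassium
molar_mass = 39.964  # g/mol
half_life_s = 1.25e9 * SECONDS_PER_YEAR  # assumed K-40 half-life of ~1.25 billion years

atoms = mass_k40 / molar_mass * N_A
activity = atoms * math.log(2) / half_life_s
print(f"activity per gram of potassium ≈ {activity:.0f} Bq")  # roughly 30 Bq
```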
Examples
Applications
The specific activity of radionuclides is particularly relevant when selecting them for the production of therapeutic pharmaceuticals, as well as for immunoassays or other diagnostic procedures, or for assessing radioactivity in certain environments, among several other biomedical applications.
References
Further reading
Radioactivity quantities | Specific activity | [
"Physics",
"Chemistry",
"Mathematics"
] | 940 | [
"Quantity",
"Radioactivity quantities",
"Physical quantities",
"Radioactivity"
] |
543,288 | https://en.wikipedia.org/wiki/Sparse%20graph%20code | A Sparse graph code is a code which is represented by a sparse graph.
Any linear code can be represented as a graph, where there are two sets of nodes – a set representing the transmitted bits and another set representing the constraints that the transmitted bits have to satisfy. State-of-the-art classical error-correcting codes are based on sparse graphs, achieving performance close to the Shannon limit. The archetypal sparse-graph codes are Gallager's low-density parity-check codes.
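As a small illustration of this bipartite (Tanner graph) representation — using the (7,4) Hamming code purely as a convenient toy example, not as a sparse code in any practical sense — one can list, for each constraint node, the bit nodes it touches and verify a codeword:

```python
import numpy as np

# Parity-check matrix H of the (7,4) Hamming code; rows are constraint (check)
# nodes, columns are bit nodes. Sparse-graph codes keep most entries zero.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# Tanner graph as an adjacency list: each check node lists the bits it constrains.
tanner = {f"check{i}": np.flatnonzero(row).tolist() for i, row in enumerate(H)}
print(tanner)

# A word is a codeword iff every check node's bits sum to zero modulo 2.
word = np.array([1, 0, 1, 1, 0, 1, 0])
print("valid codeword:", not np.any(H @ word % 2))
```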
External links
The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay, discusses sparse-graph codes in Chapters 47–50.
Encyclopedia of Sparse Graph Codes
Iterative Error Correction: Turbo, Low-Density Parity-Check, and Repeat-Accumulate Codes
Matrix theory
Error detection and correction | Sparse graph code | [
"Mathematics",
"Engineering"
] | 170 | [
"Reliability engineering",
"Error detection and correction",
"Mathematical objects",
"Matrices (mathematics)",
"Matrix stubs"
] |
544,255 | https://en.wikipedia.org/wiki/Ludwig%20Boltzmann | Ludwig Eduard Boltzmann (20 February 1844 – 5 September 1906) was an Austrian physicist and philosopher. His greatest achievements were the development of statistical mechanics and the statistical explanation of the second law of thermodynamics. In 1877 he provided the current definition of entropy, S = kB ln Ω, where Ω is the number of microstates whose energy equals the system's energy, interpreted as a measure of the statistical disorder of a system. Max Planck named the constant kB the Boltzmann constant.
Statistical mechanics is one of the pillars of modern physics. It describes how macroscopic observations (such as temperature and pressure) are related to microscopic parameters that fluctuate around an average. It connects thermodynamic quantities (such as heat capacity) to microscopic behavior, whereas, in classical thermodynamics, the only available option would be to measure and tabulate such quantities for various materials.
Biography
Childhood and education
Boltzmann was born in Erdberg, a suburb of Vienna into a Catholic family. His father, Ludwig Georg Boltzmann, was a revenue official. His grandfather, who had moved to Vienna from Berlin, was a clock manufacturer, and Boltzmann's mother, Katharina Pauernfeind, was originally from Salzburg. Boltzmann was home-schooled until the age of ten, and then attended high school in Linz, Upper Austria. When Boltzmann was 15, his father died.
Starting in 1863, Boltzmann studied mathematics and physics at the University of Vienna. He received his doctorate in 1866 and his venia legendi in 1869. Boltzmann worked closely with Josef Stefan, director of the institute of physics. It was Stefan who introduced Boltzmann to Maxwell's work.
Academic career
In 1869 at age 25, thanks to a letter of recommendation written by Josef Stefan, Boltzmann was appointed full Professor of Mathematical Physics at the University of Graz in the province of Styria. In 1869 he spent several months in Heidelberg working with Robert Bunsen and Leo Königsberger and in 1871 with Gustav Kirchhoff and Hermann von Helmholtz in Berlin. In 1873 Boltzmann joined the University of Vienna as Professor of Mathematics and there he stayed until 1876.
In 1872, long before women were admitted to Austrian universities, he met Henriette von Aigentler, an aspiring teacher of mathematics and physics in Graz. She was refused permission to audit lectures unofficially. Boltzmann supported her decision to appeal, which was successful. On 17 July 1876 Ludwig Boltzmann married Henriette; they had three daughters: Henriette (1880), Ida (1884) and Else (1891); and a son, Arthur Ludwig (1881). Boltzmann went back to Graz to take up the chair of Experimental Physics. Among his students in Graz were Svante Arrhenius and Walther Nernst. He spent 14 happy years in Graz and it was there that he developed his statistical concept of nature.
Boltzmann was appointed to the Chair of Theoretical Physics at the University of Munich in Bavaria, Germany in 1890.
In 1894, Boltzmann succeeded his teacher Joseph Stefan as Professor of Theoretical Physics at the University of Vienna.
Final years and death
Boltzmann spent a great deal of effort in his final years defending his theories. He did not get along with some of his colleagues in Vienna, particularly Ernst Mach, who became a professor of philosophy and history of sciences in 1895. That same year Georg Helm and Wilhelm Ostwald presented their position on energetics at a meeting in Lübeck. They saw energy, and not matter, as the chief component of the universe. Boltzmann's position carried the day among other physicists who supported his atomic theories in the debate. In 1900, Boltzmann went to the University of Leipzig, on the invitation of Wilhelm Ostwald. Ostwald offered Boltzmann the professorial chair in physics, which became vacant when Gustav Heinrich Wiedemann died. After Mach retired due to bad health, Boltzmann returned to Vienna in 1902. In 1903, Boltzmann, together with Gustav von Escherich and Emil Müller, founded the Austrian Mathematical Society. His students included Karl Přibram, Paul Ehrenfest and Lise Meitner.
In Vienna, Boltzmann taught physics and also lectured on philosophy. Boltzmann's lectures on natural philosophy were very popular and received considerable attention. His first lecture was an enormous success. Even though the largest lecture hall had been chosen for it, the people stood all the way down the staircase. Because of the great successes of Boltzmann's philosophical lectures, the Emperor invited him for a reception at the Palace.
In 1905, he gave an invited course of lectures in the summer session at the University of California in Berkeley, which he described in a popular essay A German professor's trip to El Dorado.
In May 1906, Boltzmann's deteriorating mental condition described in a letter by the Dean as "a serious form of neurasthenia" forced him to resign his position, and his symptoms indicate he experienced what would today be diagnosed as bipolar disorder. Four months later he died by suicide on 5 September 1906, by hanging himself while on vacation with his wife and daughter in Duino, near Trieste (then Austria).
He is buried in the Viennese Zentralfriedhof. His tombstone bears the inscription of Boltzmann's entropy formula: S = k log W.
Philosophy
Boltzmann's kinetic theory of gases seemed to presuppose the reality of atoms and molecules, but almost all German philosophers and many scientists like Ernst Mach and the physical chemist Wilhelm Ostwald disbelieved their existence. Boltzmann was exposed to molecular theory by the paper of atomist James Clerk Maxwell entitled "Illustrations of the Dynamical Theory of Gases" which described temperature as dependent on the speed of the molecules thereby introducing statistics into physics. This inspired Boltzmann to embrace atomism and extend the theory.
Boltzmann wrote treatises on philosophy such as "On the question of the objective existence of processes in inanimate nature" (1897). He was a realist. In his work "On Thesis of Schopenhauer's", Boltzmann refers to his philosophy as materialism and says further: "Idealism asserts that only the ego exists, the various ideas, and seeks to explain matter from them. Materialism starts from the existence of matter and seeks to explain sensations from it."
Physics
Boltzmann's most important scientific contributions were in the kinetic theory of gases based upon the Second law of thermodynamics. This was important because Newtonian mechanics did not differentiate between past and future motion, but Rudolf Clausius’ invention of entropy to describe the second law was based on disgregation or dispersion at the molecular level so that the future was one-directional. Boltzmann was twenty-five years of age when he came upon James Clerk Maxwell's work on the kinetic theory of gases which hypothesized that temperature was caused by collision of molecules. Maxwell used statistics to create a curve of molecular kinetic energy distribution from which Boltzmann clarified and developed the ideas of kinetic theory and entropy based upon statistical atomic theory creating the Maxwell–Boltzmann distribution as a description of molecular speeds in a gas. It was Boltzmann who derived the first equation to model the dynamic evolution of the probability distribution Maxwell and he had created. Boltzmann's key insight was that dispersion occurred due to the statistical probability of increased molecular "states". Boltzmann went beyond Maxwell by applying his distribution equation to not solely gases, but also liquids and solids. Boltzmann also extended his theory in his 1877 paper beyond Carnot, Rudolf Clausius, James Clerk Maxwell and Lord Kelvin by demonstrating that entropy is contributed to by heat, spatial separation, and radiation. Maxwell–Boltzmann statistics and the Boltzmann distribution remain central in the foundations of classical statistical mechanics. They are also applicable to other phenomena that do not require quantum statistics and provide insight into the meaning of temperature.
He made multiple attempts to explain the second law of thermodynamics, with the attempts ranging over many areas. He tried Helmholtz's monocycle model, a pure ensemble approach like Gibbs, a pure mechanical approach like ergodic theory, the combinatorial argument, the Stoßzahlansatz, etc.
Most chemists, since the discoveries of John Dalton in 1808, and James Clerk Maxwell in Scotland and Josiah Willard Gibbs in the United States, shared Boltzmann's belief in atoms and molecules, but much of the physics establishment did not share this belief until decades later. Boltzmann had a long-running dispute with the editor of the preeminent German physics journal of his day, who refused to let Boltzmann refer to atoms and molecules as anything other than convenient theoretical constructs. Only a couple of years after Boltzmann's death, Perrin's studies of colloidal suspensions (1908–1909), based on Einstein's theoretical studies of 1905, confirmed the values of the Avogadro constant and the Boltzmann constant, convincing the world that the tiny particles really exist.
To quote Planck, "The logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases". This famous formula for entropy S is S = kB ln W,
where kB is the Boltzmann constant, and ln is the natural logarithm. W (for Wahrscheinlichkeit, a German word meaning "probability") is the probability of occurrence of a macrostate or, more precisely, the number of possible microstates corresponding to the macroscopic state of a system – the number of (unobservable) "ways" in which the (observable) thermodynamic state of a system can be realized by assigning different positions and momenta to the various molecules. Boltzmann's paradigm was an ideal gas of N identical particles, of which Ni are in the i-th microscopic condition (range) of position and momentum. W can be counted using the formula for permutations
W = N!/(N1! N2! ...) = N!/∏i Ni!, where i ranges over all possible molecular conditions, and where ! denotes factorial. The "correction" in the denominator accounts for indistinguishable particles in the same condition.
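As a toy numerical illustration of this counting (the occupation numbers below are invented; real systems contain astronomically many particles), one can compute W and the corresponding Boltzmann entropy directly:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Toy macrostate: N particles distributed over microscopic conditions i
# with occupation numbers N_i (illustrative values only).
occupations = [5, 3, 2]
N = sum(occupations)

# Number of microstates: W = N! / (N_1! * N_2! * ...)
W = math.factorial(N)
for n_i in occupations:
    W //= math.factorial(n_i)

S = k_B * math.log(W)  # Boltzmann entropy S = k_B ln W
print(f"W = {W}, S = {S:.3e} J/K")  # W = 2520 for this toy macrostate
```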
Boltzmann could also be considered one of the forerunners of quantum mechanics due to his suggestion in 1877 that the energy levels of a physical system could be discrete, although Boltzmann used this as a mathematical device with no physical meaning.
An alternative to Boltzmann's formula for entropy, above, is the information entropy definition introduced in 1948 by Claude Shannon. Shannon's definition was intended for use in communication theory but is applicable in all areas. It reduces to Boltzmann's expression when all the probabilities are equal, but can, of course, be used when they are not. Its virtue is that it yields immediate results without resorting to factorials or Stirling's approximation. Similar formulas are found, however, as far back as the work of Boltzmann, and explicitly in Gibbs (see reference).
Boltzmann equation
The Boltzmann equation was developed to describe the dynamics of an ideal gas: ∂f/∂t + v·(∂f/∂x) + (F/m)·(∂f/∂v) = (∂f/∂t)collision,
where f represents the distribution function of single-particle position and momentum at a given time (see the Maxwell–Boltzmann distribution), F is a force, m is the mass of a particle, t is the time and v is an average velocity of particles.
This equation describes the temporal and spatial variation of the probability distribution for the position and momentum of a density distribution of a cloud of points in single-particle phase space. (See Hamiltonian mechanics.) The first term on the left-hand side represents the explicit time variation of the distribution function, while the second term gives the spatial variation, and the third term describes the effect of any force acting on the particles. The right-hand side of the equation represents the effect of collisions.
In principle, the above equation completely describes the dynamics of an ensemble of gas particles, given appropriate boundary conditions. This first-order differential equation has a deceptively simple appearance, since f can represent an arbitrary single-particle distribution function. Also, the force acting on the particles depends directly on the velocity distribution function f. The Boltzmann equation is notoriously difficult to integrate. David Hilbert spent years trying to solve it without any real success.
The form of the collision term assumed by Boltzmann was approximate. However, for an ideal gas the standard Chapman–Enskog solution of the Boltzmann equation is highly accurate. It is expected to lead to incorrect results for an ideal gas only under shock wave conditions.
Boltzmann tried for many years to "prove" the second law of thermodynamics using his gas-dynamical equation – his famous H-theorem. However the key assumption he made in formulating the collision term was "molecular chaos", an assumption which breaks time-reversal symmetry as is necessary for anything which could imply the second law. It was from the probabilistic assumption alone that Boltzmann's apparent success emanated, so his long dispute with Loschmidt and others over Loschmidt's paradox ultimately ended in his failure.
Finally, in the 1970s E. G. D. Cohen and J. R. Dorfman proved that a systematic (power series) extension of the Boltzmann equation to high densities is mathematically impossible. Consequently, nonequilibrium statistical mechanics for dense gases and liquids focuses on the Green–Kubo relations, the fluctuation theorem, and other approaches instead.
Second thermodynamics law as a law of disorder
The idea that the second law of thermodynamics or "entropy law" is a law of disorder (or that dynamically ordered states are "infinitely improbable") is due to Boltzmann's view of the second law of thermodynamics.
In particular, it was Boltzmann's attempt to reduce it to a stochastic collision function, or law of probability following from the random collisions of mechanical particles. Following Maxwell, Boltzmann modeled gas molecules as colliding billiard balls in a box, noting that with each collision nonequilibrium velocity distributions (groups of molecules moving at the same speed and in the same direction) would become increasingly disordered leading to a final state of macroscopic uniformity and maximum microscopic disorder or the state of maximum entropy (where the macroscopic uniformity corresponds to the obliteration of all field potentials or gradients). The second law, he argued, was thus simply the result of the fact that in a world of mechanically colliding particles disordered states are the most probable. Because there are so many more possible disordered states than ordered ones, a system will almost always be found either in the state of maximum disorder – the macrostate with the greatest number of accessible microstates such as a gas in a box at equilibrium – or moving towards it. A dynamically ordered state, one with molecules moving "at the same speed and in the same direction", Boltzmann concluded, is thus "the most improbable case conceivable...an infinitely improbable configuration of energy."
Boltzmann accomplished the feat of showing that the second law of thermodynamics is only a statistical fact. The gradual disordering of energy is analogous to the disordering of an initially ordered pack of cards under repeated shuffling, and just as the cards will finally return to their original order if shuffled a gigantic number of times, so the entire universe must some-day regain, by pure chance, the state from which it first set out. (This optimistic coda to the idea of the dying universe becomes somewhat muted when one attempts to estimate the timeline which will probably elapse before it spontaneously occurs.) The tendency for entropy increase seems to cause difficulty to beginners in thermodynamics, but is easy to understand from the standpoint of the theory of probability. Consider two ordinary dice, with both sixes face up. After the dice are shaken, the chance of finding these two sixes face up is small (1 in 36); thus one can say that the random motion (the agitation) of the dice, like the chaotic collisions of molecules because of thermal energy, causes the less probable state to change to one that is more probable. With millions of dice, like the millions of atoms involved in thermodynamic calculations, the probability of their all being sixes becomes so vanishingly small that the system must move to one of the more probable states.
Legacy and impact on modern science
Ludwig Boltzmann's contributions to physics and philosophy have left a lasting impact on modern science. His pioneering work in statistical mechanics and thermodynamics laid the foundation for some of the most fundamental concepts in physics. For instance, Max Planck in quantizing resonators in his Black Body theory of radiation used the Boltzmann constant to describe the entropy of the system to arrive at his formula in 1900. However, Boltzmann's work was not always readily accepted during his lifetime, and he faced opposition from some of his contemporaries, particularly in regards to the existence of atoms and molecules. Nevertheless, the validity and importance of his ideas were eventually recognized, and they have since become cornerstones of modern physics. Here, we delve into some aspects of Boltzmann's legacy and his influence on various areas of science.
Atomic theory and the existence of atoms and molecules
Boltzmann's kinetic theory of gases was one of the first attempts to explain macroscopic properties, such as pressure and temperature, in terms of the behaviour of individual atoms and molecules. Although many chemists were already accepting the existence of atoms and molecules, the broader physics community took some time to embrace this view. Boltzmann's long-running dispute with the editor of a prominent German physics journal over the acceptance of atoms and molecules underscores the initial resistance to this idea.
It was only after experiments, such as Jean Perrin's studies of colloidal suspensions, confirmed the values of the Avogadro constant and the Boltzmann constant that the existence of atoms and molecules gained wider acceptance. Boltzmann's kinetic theory played a crucial role in demonstrating the reality of atoms and molecules and explaining various phenomena in gases, liquids, and solids.
Statistical mechanics and the Boltzmann constant
Statistical mechanics, which Boltzmann pioneered, connects macroscopic observations with microscopic behaviors. His statistical explanation of the second law of thermodynamics was a significant achievement, and he provided the current definition of entropy (S = kB ln Ω), where kB is the Boltzmann constant and Ω is the number of microstates corresponding to a given macrostate.
Max Planck later named the constant kB the Boltzmann constant in honor of Boltzmann's contributions to statistical mechanics. The Boltzmann constant is now a fundamental constant in physics and across many scientific disciplines.
Boltzmann equation and modern uses
Because the Boltzmann equation is practical in solving problems in rarefied or dilute gases, it has been used in many diverse areas of technology. It is used to calculate Space Shuttle re-entry in the upper atmosphere. It is the basis for neutron transport theory and for ion transport in semiconductors.
Influence on quantum mechanics
Boltzmann's work in statistical mechanics laid the groundwork for understanding the statistical behavior of particles in systems with a large number of degrees of freedom. In his 1877 paper, he used discrete energy levels of physical systems as a mathematical device and went on to show that the same approach could be applied to continuous systems. This might be seen as a forerunner to the development of quantum mechanics. One biographer of Boltzmann says that Boltzmann’s approach “pav[ed] the way for Planck.”
Quantization of energy levels became a fundamental postulate in quantum mechanics, leading to groundbreaking theories like quantum electrodynamics and quantum field theory. Thus, Boltzmann's early insights into the quantization of energy levels had a profound influence on the development of quantum physics.
Works
Awards and honours
In 1885 he became a member of the Imperial Austrian Academy of Sciences and in 1887 he became the President of the University of Graz. He was elected a member of the Royal Swedish Academy of Sciences in 1888 and a Foreign Member of the Royal Society (ForMemRS) in 1899. Numerous things are named in his honour.
See also
Thermodynamics
Statistical Mechanics
Boltzmann brain
References
Further reading
Roman Sexl & John Blackmore (eds.), "Ludwig Boltzmann – Ausgewahlte Abhandlungen", (Ludwig Boltzmann Gesamtausgabe, Band 8), Vieweg, Braunschweig, 1982.
John Blackmore (ed.), "Ludwig Boltzmann – His Later Life and Philosophy, 1900–1906, Book One: A Documentary History", Kluwer, 1995.
John Blackmore, "Ludwig Boltzmann – His Later Life and Philosophy, 1900–1906, Book Two: The Philosopher", Kluwer, Dordrecht, Netherlands, 1995.
John Blackmore (ed.), "Ludwig Boltzmann – Troubled Genius as Philosopher", in Synthese, Volume 119, Nos. 1 & 2, 1999, pp. 1–232.
Boltzmann, Ludwig Boltzmann – Leben und Briefe, ed., Walter Hoeflechner, Akademische Druck- u. Verlagsanstalt. Graz, Oesterreich, 1994
Brush, Stephen G. (ed. & tr.), Boltzmann, Lectures on Gas Theory, Berkeley, California: U. of California Press, 1964
Brush, Stephen G. (ed.), Kinetic Theory, New York: Pergamon Press, 1965
Ehrenfest, P. & Ehrenfest, T. (1911) "Begriffliche Grundlagen der statistischen Auffassung in der Mechanik", in Encyklopädie der mathematischen Wissenschaften mit Einschluß ihrer Anwendungen Band IV, 2. Teil ( F. Klein and C. Müller (eds.). Leipzig: Teubner, pp. 3–90. Translated as The Conceptual Foundations of the Statistical Approach in Mechanics. New York: Cornell University Press, 1959.
English translation by Morton Masius of the 2nd ed. of Waermestrahlung. Reprinted by Dover (1959) & (1991).
Sharp, Kim (2019). Entropy and the Tao of Counting: A Brief Introduction to Statistical Mechanics and the Second Law of Thermodynamics (SpringerBriefs in Physics). Springer Nature.
Reprinted: Dover (1979).
External links
Ludwig Boltzmann - The genius of disorder (Youtube)
Ruth Lewin Sime, Lise Meitner: A Life in Physics Chapter One: Girlhood in Vienna gives Lise Meitner's account of Boltzmann's teaching and career.
Eftekhari, Ali, "Ludwig Boltzmann (1844–1906)." Discusses Boltzmann's philosophical opinions, with numerous quotes.
1844 births
1906 suicides
1906 deaths
Scientists from Vienna
19th-century Austrian physicists
Thermodynamicists
Fluid dynamicists
Burials at the Vienna Central Cemetery
University of Vienna alumni
Members of the Royal Swedish Academy of Sciences
Corresponding members of the Saint Petersburg Academy of Sciences
Suicides in Austria-Hungary
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Mathematical physicists
Theoretical physicists
Rectors of universities in Austria-Hungary
Physicists from Austria-Hungary
19th-century Austrian philosophers
20th-century Austrian philosophers
Members of the Göttingen Academy of Sciences and Humanities | Ludwig Boltzmann | [
"Physics",
"Chemistry"
] | 4,880 | [
"Theoretical physics",
"Fluid dynamicists",
"Thermodynamics",
"Thermodynamicists",
"Theoretical physicists",
"Fluid dynamics"
] |
544,333 | https://en.wikipedia.org/wiki/Eugene%20Mallove | Eugene Franklin Mallove (June 9, 1947 – May 14, 2004) was an American scientist, science writer, editor, and publisher of Infinite Energy magazine, and founder of the nonprofit organization New Energy Foundation. He was a proponent of cold fusion, and a supporter of its research and related exploratory alternative energy topics, several of which are sometimes characterised as "fringe science".
Mallove authored Fire from Ice, a book detailing the 1989 report of tabletop cold fusion from Stanley Pons and Martin Fleischmann at the University of Utah. Among other things, the book claims the team did produce "greater-than-unity" output energy in an experiment successfully replicated on several occasions, but that the results were suppressed through an organized campaign of ridicule from mainstream physicists, including those studying controlled thermonuclear fusion, trying to protect their research and funding.
Mallove was murdered in 2004 while cleaning out his former childhood home, which had been rented out. Three people have been arrested and charged in connection with the killing; two were convicted of first-degree manslaughter and murder; the third pleaded guilty to obstruction of justice.
Biography
Eugene Franklin Mallove was born on June 9, 1947, to Gladys (née Alexander) and Mitchell Mallove. He grew up in Norwich, Connecticut and graduated from the Norwich Free Academy in 1965. From an early age, he showed great interest in science and especially astronomy. While in Boston, he met Joanne Smith, who was a student at Boston University. On September 9, 1970, Gene and Joanne married. They had two children, Kimberlyn, born in 1974, and Ethan, born in 1979.
Eugene Mallove held a BS (1969) and MS degree (1970) in aeronautical and astronautical engineering from MIT and a ScD degree (1975) in environmental health sciences from Harvard University. He had worked for technology engineering firms such as Hughes Research Laboratories, the Analytic Science Corporation, and MIT's Lincoln Laboratory, and he consulted in research and development of new energies.
In 1981, he and Gregory Matloff wrote a paper about using solar sails to reach Alpha Centauri, the nearest star to the Sun. They calculated that the trip would take several hundred years and that the ship would have to withstand accelerations of 60 g. They wrote several papers on that and other proposed methods of space travel, such as laser propulsion, the Bussard ramjet, and exotic fuels that could give very high power.
Mallove taught science journalism at MIT and Boston University and was chief science writer at MIT's news office, a position he left as part of a dispute with the school over cold fusion. Mallove resigned from MIT in 1991 because he said MIT was hiding cold fusion data, partly to protect funding for and reputation of traditional fusion research.
He was a science writer and broadcaster with the Voice of America radio service and author of three science books: The Quickening Universe: Cosmic Evolution and Human Destiny (1987, St. Martin’s Press), The Starflight Handbook: A Pioneer’s Guide to Interstellar Travel (1989, John Wiley & Sons, with co-author Gregory Matloff), and Fire from Ice: Searching for the Truth Behind the Cold Fusion Furor (1991, John Wiley & Sons). He also published articles for numerous magazines and newspapers.
Mallove was a member of the Aurora Biophysics Research Institute (ABRI), one of the founders of the International Society of the Friends of Aetherometry, a member of its Organizing Committee, a co-inventor of the HYBORAC technology and one of the main evaluators of ABRI technologies.
His alternative energy research included studying the reproduction of Wilhelm Reich's Orgone Motor by Dr. Paulo Correa and Alexandra Correa, as well as the evolution of heat in the Reich-Einstein experiment. He was among the scientists and engineers who claimed to have confirmed the output of excess electric energy from tuned pulsed plasmas in vacuum arc discharges.
Mallove's combative stance against what he saw as the hypocrisy of mainstream science gave him a high profile. Among other things, he was a frequent guest on the American radio program Coast to Coast AM.
In 1992, Mallove was a consultant on the ERR (Electromagnetic Radiation Receiver) project at the Noah’s Ark Research Facility in the Philippines. He is also credited as a "cold fusion technical consultant" for providing advice to the producers of the 1997 movie The Saint, whose plot revolves around cold fusion formulas.
Eugene Mallove was a notable proponent and supporter of research into cold fusion. He authored the book Fire from Ice, which details the 1989 report of table-top cold fusion from Stanley Pons and Martin Fleischmann at the University of Utah. The book claims the team did produce "greater-than-unity" output energy in an experiment that was successfully replicated on several occasions. Mallove claims that the results were suppressed through an organized campaign of ridicule from mainstream physicists.
Death
Eugene Mallove was killed on May 14, 2004, in Norwich, Connecticut, while cleaning a recently vacated rental property owned by his parents, the home he grew up in. The nature of Mallove's work led to some conspiracy theories regarding the homicide, but police suspected robbery as the motive.
In 2005, two local men were arrested in connection with the killing. The case proceeded slowly and the charges against the two men were finally dismissed on November 6, 2008.
On February 11, 2009, the State of Connecticut announced a $50,000 reward leading to the arrest and conviction of the person or persons responsible for the murder. On April 2, 2010, the police made two arrests in connection with the murder and said that more arrests were expected.
On May 22, 2011, a state prosecutor said that they were charging a third person in connection with the killing. Court testimony indicated that Mallove may have been killed by an evicted tenant who was angry about belongings being disposed of during the clearout.
On April 20, 2012, the Norwich Bulletin stated that: "An ongoing murder trial came to an abrupt halt Friday when Chad Schaffer, of Norwich, decided to accept an offer of 16 years in prison, pleading guilty to the lesser charge of first-degree manslaughter in the 2004 beating death of Eugene Mallove." Mallove had just evicted Schaffer's parents, and he was cleaning the evicted house when Schaffer arrived and confronted him.
A third individual was arraigned on November 21, 2013.
Mozelle Brown was convicted of Mallove's murder in October 2014 and on January 6, 2015, was sentenced to 58 years in prison. Schaffer's girlfriend, Candace Foster, testified against Brown and Schaffer, and pleaded guilty to a charge of hindering prosecution and tampering with evidence.
Books
References
Further reading
External links
"Eugene Mallove's Open Letter to the World" with preface by Richard Hoagland and clarification by Christy Frazier. PES Network, last update August 30, 2004.
Harvard University alumni
Boston University faculty
American science writers
American magazine editors
People murdered in Connecticut
Orgonomy
1947 births
2004 deaths
Deaths by beating in the United States
Free energy conspiracy theorists
Cold fusion
MIT Lincoln Laboratory people
American conspiracy theorists | Eugene Mallove | [
"Physics",
"Chemistry"
] | 1,495 | [
"Nuclear fusion",
"Cold fusion",
"Nuclear physics"
] |
544,641 | https://en.wikipedia.org/wiki/CDNA%20library | A cDNA library is a combination of cloned cDNA (complementary DNA) fragments inserted into a collection of host cells, which constitute some portion of the transcriptome of the organism and are stored as a "library". cDNA is produced from fully transcribed mRNA found in the nucleus and therefore contains only the expressed genes of an organism. Similarly, tissue-specific cDNA libraries can be produced. In eukaryotic cells the mature mRNA is already spliced, hence the cDNA produced lacks introns and can be readily expressed in a bacterial cell. While information in cDNA libraries is a powerful and useful tool since gene products are easily identified, the libraries lack information about enhancers, introns, and other regulatory elements found in a genomic DNA library.
cDNA Library Construction
cDNA is created from a mature mRNA from a eukaryotic cell with the use of reverse transcriptase. In eukaryotes, a poly-(A) tail (consisting of a long sequence of adenine nucleotides) distinguishes mRNA from tRNA and rRNA and can therefore be used as a primer site for reverse transcription. A limitation of this approach is that not all transcripts, such as those encoding histones, carry a poly-A tail.
mRNA extraction
Firstly, an mRNA template needs to be isolated for the creation of cDNA libraries. Since mRNA only contains exons, the integrity of the isolated mRNA should be considered so that the protein it encodes can still be produced. Isolated mRNA should range from 500 bp to 8 kb. Several methods exist for purifying RNA, such as TRIzol extraction and column purification. Column purification can be done using oligomeric dT nucleotide-coated resins, exploiting features of mRNA such as the poly-A tail so that only mRNA sequences containing that feature will bind. The desired mRNA bound to the column is then eluted.
cDNA construction
Once the mRNA is purified, an oligo-dT primer (a short sequence of deoxy-thymidine nucleotides) is annealed to the poly-A tail of the RNA. The primer is required to initiate DNA synthesis by the enzyme reverse transcriptase, which produces RNA-DNA hybrids in which a single strand of complementary DNA is bound to a strand of mRNA. To remove the mRNA, the enzyme RNase H is used to nick the backbone of the mRNA and generate free 3'-OH groups, which is important for the replacement of mRNA with DNA. DNA polymerase I is then added; the cleaved RNA fragments act as primers from which DNA polymerase I initiates replacement of the RNA nucleotides with those of DNA. Alternatively, a primer for second-strand synthesis can be provided by the single-stranded cDNA itself, which coils back on itself at the 3' end to generate a hairpin loop. The polymerase extends the 3'-OH end, and the loop at the 3' end is later opened by the scissoring action of S1 nuclease. Restriction endonucleases and DNA ligase are then used to clone the sequences into bacterial plasmids.
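As an in-silico sketch of what first-strand synthesis produces (the mRNA sequence below is invented for illustration), the cDNA strand is simply the reverse complement of the mRNA, with thymine written where uracil would pair with adenine:

```python
# Base-pairing rules for copying an RNA template into DNA:
# A pairs with T (DNA), U pairs with A, G pairs with C, C pairs with G.
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def first_strand_cdna(mrna: str) -> str:
    """Return the single-stranded cDNA (5'->3') made from an mRNA template."""
    # Complement each base, then reverse so the cDNA reads 5' to 3'.
    return "".join(RNA_TO_DNA[base] for base in reversed(mrna.upper()))

# Hypothetical mRNA fragment ending in a short poly-A tail.
mrna = "AUGGCUUACGAAAAAA"
print(first_strand_cdna(mrna))  # begins with TTTTT..., the region primed by oligo-dT
```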
The cloned bacteria are then selected, commonly through the use of antibiotic selection. Once selected, stocks of the bacteria are created which can later be grown and sequenced to compile the cDNA library.
cDNA Library uses
cDNA libraries are commonly used when cloning the expressed portions of eukaryotic genomes, as the amount of information is reduced by omitting the large number of non-coding regions. cDNA libraries are used to express eukaryotic genes in prokaryotes. Prokaryotes do not have introns in their DNA and therefore lack the splicing machinery needed to remove them from a transcript; because cDNA contains no introns, it can be expressed in prokaryotic cells. cDNA libraries are most useful in reverse genetics, where the additional genomic information is of less use. Additionally, cDNA libraries are frequently used in functional cloning to identify genes based on the encoded protein's function. When studying eukaryotic DNA, expression libraries are constructed using complementary DNA (cDNA) to help ensure that the insert is truly a gene.
cDNA Library vs. Genomic DNA Library
A cDNA library lacks the non-coding and regulatory elements found in genomic DNA. Genomic DNA libraries provide more detailed information about the organism, but are more resource-intensive to generate and maintain.
Cloning of cDNA
cDNA molecules can be cloned by using restriction site linkers. Linkers are short, double stranded pieces of DNA (oligodeoxyribonucleotides) about 8 to 12 nucleotide pairs long that include a restriction endonuclease cleavage site, e.g. BamHI. Both the cDNA and the linker have blunt ends which can be ligated together using a high concentration of T4 DNA ligase. Then sticky ends are produced in the cDNA molecule by cleaving the cDNA ends (which now have linkers with an incorporated restriction site) with the appropriate endonuclease. A cloning vector (plasmid) is then also cleaved with the appropriate endonuclease. Following "sticky end" ligation of the insert into the vector, the resulting recombinant DNA molecule is transferred into an E. coli host cell for cloning.
See also
Functional cloning
References
External links
Functional Annotation of the Mouse database (FANTOM)
examples of cDNA synthesis and cloning
Preparation of cDNA libraries for high-throughput RNA sequencing analysis of RNA 5′ ends
Molecular biology
DNA | CDNA library | [
"Chemistry",
"Biology"
] | 1,149 | [
"Biochemistry",
"Molecular biology"
] |
544,672 | https://en.wikipedia.org/wiki/Slew%20rate | In electronics and electromagnetics, slew rate is defined as the change of voltage or current, or any other electrical or electromagnetic quantity, per unit of time. Expressed in SI units, the unit of measurement is given as the change per second, but in the context of electronic circuits a slew rate is usually expressed as the change per microsecond (μs) or per nanosecond (ns), for example volts per microsecond (V/μs).
Electronic circuits may specify minimum or maximum limits on the slew rates for their inputs or outputs, with these limits only valid under some set of given conditions (e.g. output loading). When given for the output of a circuit, such as an amplifier, the slew rate specification guarantees that the speed of the output signal transition will be at least the given minimum, or at most the given maximum. When applied to the input of a circuit, it instead indicates that the external driving circuitry needs to meet those limits in order to guarantee the correct operation of the receiving device. If these limits are violated, some error might occur and correct operation is no longer guaranteed.
For example, when the input to a digital circuit is driven too slowly, the digital input value registered by the circuit may oscillate between 0 and 1 during the signal transition. In other cases, a maximum slew rate is specified in order to limit the high frequency content present in the signal, thereby preventing such undesirable effects as ringing or radiated interference.
In amplifiers, limitations in slew rate capability can give rise to non-linear effects. For a sinusoidal waveform not to be subject to slew rate limitation, the slew rate capability (in volts per second) at all points in an amplifier must satisfy the following condition: SR ≥ 2πf·Vpk,
where f is the operating frequency, and Vpk is the peak amplitude of the waveform, i.e. half the peak-to-peak swing of a sinusoid.
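A brief numerical illustration of this condition (the frequency and amplitude are assumed values, not taken from any particular amplifier):

```python
import math

def required_slew_rate(frequency_hz: float, peak_volts: float) -> float:
    """Minimum slew rate (V/s) needed to reproduce a sinusoid without slew limiting."""
    return 2 * math.pi * frequency_hz * peak_volts

f = 20e3       # Hz, assumed operating frequency
v_peak = 10.0  # V, assumed peak amplitude

sr = required_slew_rate(f, v_peak)
print(f"required slew rate ≈ {sr/1e6:.2f} V/µs")  # ≈ 1.26 V/µs
```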
In mechanics the slew rate is the change in position over time of an object which orbits around the observer, measured in radians, degrees or turns per unit of time. It has the dimension of angle per unit time (for example, radians per second).
Definition
The slew rate of an electronic circuit is defined as the maximum rate of change of the voltage per unit time, SR = max |dvout(t)/dt|. Slew rate is usually expressed in units of V/μs,
where vout(t) is the output produced by the amplifier as a function of time t.
Measurement
The slew rate can be measured using a function generator (usually square wave) and an oscilloscope (CRO). The slew rate is the same, regardless of whether feedback is considered.
Slew rate limiting in amplifiers
There are slight differences between different amplifier designs in how the slewing phenomenon occurs. However, the general principles are the same as in this illustration.
The input stage of modern amplifiers is usually a differential amplifier with a transconductance characteristic. This means the input stage takes a differential input voltage and produces an output current into the second stage.
The transconductance is typically very high — this is where the large open loop gain of the amplifier is generated. This also means that a fairly small input voltage can cause the input stage to saturate. In saturation, the stage produces a nearly constant output current.
The second stage of modern power amplifiers is, among other things, where frequency compensation is accomplished. The low pass characteristic of this stage approximates an integrator. A constant current input will therefore produce a linearly increasing output. If the second stage has an effective input capacitance C and voltage gain Av, then slew rate in this example can be expressed as SR = Isat/C,
where Isat is the output current of the first stage in saturation.
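Continuing the example with assumed values loosely typical of a classic internally compensated op-amp (these numbers are illustrative, not taken from any specific datasheet):

```python
i_sat = 20e-6    # A, assumed first-stage output current in saturation
c_comp = 30e-12  # F, assumed effective compensation capacitance

slew_rate = i_sat / c_comp                       # V/s
print(f"slew rate ≈ {slew_rate/1e6:.2f} V/µs")   # ≈ 0.67 V/µs
```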
Slew rate helps us identify the maximum input frequency and amplitude applicable to the amplifier such that the output is not significantly distorted. Thus it becomes imperative to check the datasheet for the device's slew rate before using it for high-frequency applications.
Slew rate can be deliberately limited using two op amps, a capacitor, and two resistors.
Musical applications
In electronic musical instruments, slew circuitry or software-generated slew functions are used deliberately to provide a portamento (also called glide or lag) feature, where an initial digital value or analog control voltage is slowly transitioned to a new value over a period of time (see interpolation).
See also
Power bandwidth
References
External links
Slew-rate explanation with interactive example and detailed calculation for a standard opamp circuit
Linear Circuit Design Chapter 1: Op Amps
Electrical parameters
Electronics concepts
Temporal rates | Slew rate | [
"Physics",
"Engineering"
] | 922 | [
"Temporal quantities",
"Physical quantities",
"Temporal rates",
"Electrical engineering",
"Electrical parameters"
] |
544,776 | https://en.wikipedia.org/wiki/Soil%20liquefaction | Soil liquefaction occurs when a cohesionless saturated or partially saturated soil substantially loses strength and stiffness in response to an applied stress such as shaking during an earthquake or other sudden change in stress condition, in which material that is ordinarily a solid behaves like a liquid. In soil mechanics, the term "liquefied" was first used by Allen Hazen in reference to the 1918 failure of the Calaveras Dam in California. He described the mechanism of flow liquefaction of the embankment dam as:
The phenomenon is most often observed in saturated, loose (low density or uncompacted), sandy soils. This is because a loose sand has a tendency to compress when a load is applied. Dense sands, by contrast, tend to expand in volume or 'dilate'. If the soil is saturated by water, a condition that often exists when the soil is below the water table or sea level, then water fills the gaps between soil grains ('pore spaces'). In response to soil compressing, the pore water pressure increases and the water attempts to flow out from the soil to zones of low pressure (usually upward towards the ground surface). However, if the loading is rapidly applied and large enough, or is repeated many times (e.g., earthquake shaking, storm wave loading) such that the water does not flow out before the next cycle of load is applied, the water pressures may build to the extent that it exceeds the force (contact stresses) between the grains of soil that keep them in contact. These contacts between grains are the means by which the weight from buildings and overlying soil layers is transferred from the ground surface to layers of soil or rock at greater depths. This loss of soil structure causes it to lose its strength (the ability to transfer shear stress), and it may be observed to flow like a liquid (hence 'liquefaction').
Although the effects of soil liquefaction have long been understood, engineers took more notice after the 1964 Alaska earthquake and 1964 Niigata earthquake. It was a major cause of the destruction produced in San Francisco's Marina District during the 1989 Loma Prieta earthquake, and in the Port of Kobe during the 1995 Great Hanshin earthquake. More recently, soil liquefaction was largely responsible for extensive damage to residential properties in the eastern suburbs and satellite townships of Christchurch during the 2010 Canterbury earthquake, and more extensively again in the Christchurch earthquakes of early and mid-2011. On 28 September 2018, a magnitude 7.5 earthquake hit the Central Sulawesi province of Indonesia. The resulting soil liquefaction buried the suburb of Balaroa and Petobo village deep in mud. The government of Indonesia is considering designating the two neighborhoods of Balaroa and Petobo, which have been totally buried under mud, as mass graves.
The building codes in many countries require engineers to consider the effects of soil liquefaction in the design of new buildings and infrastructure such as bridges, embankment dams and retaining structures.
Technical definitions
Soil liquefaction occurs when the effective stress (shear strength) of soil is reduced to essentially zero. This may be initiated by either monotonic loading (i.e., a single, sudden occurrence of a change in stress – examples include an increase in load on an embankment or sudden loss of toe support) or cyclic loading (i.e., repeated changes in stress condition – examples include wave loading or earthquake shaking). In both cases a soil in a saturated loose state, and one which may generate significant pore water pressure on a change in load are the most likely to liquefy. This is because loose soil has the tendency to compress when sheared, generating large excess porewater pressure as load is transferred from the soil skeleton to adjacent pore water during undrained loading. As pore water pressure rises, a progressive loss of strength of the soil occurs as effective stress is reduced. Liquefaction is more likely to occur in sandy or non-plastic silty soils but may in rare cases occur in gravels and clays (see quick clay).
A 'flow failure' may initiate if the strength of the soil is reduced below the stresses required to maintain the equilibrium of a slope or footing of a structure. This can occur due to monotonic loading or cyclic loading and can be sudden and catastrophic. A historical example is the Aberfan disaster. Casagrande referred to this type of phenomenon as 'flow liquefaction', although a state of zero effective stress is not required for this to occur.
'Cyclic liquefaction' is the state of soil when large shear strains have accumulated in response to cyclic loading. A typical reference strain for the approximate occurrence of zero effective stress is 5% double amplitude shear strain. This is a soil test-based definition, usually performed via cyclic triaxial, cyclic direct simple shear, or cyclic torsional shear type apparatus. These tests are performed to determine a soil's resistance to liquefaction by observing the number of cycles of loading at a particular shear stress amplitude required to induce failure. Failure here is defined by the aforementioned shear strain criterion.
The term 'cyclic mobility' refers to the mechanism of progressive reduction of effective stress due to cyclic loading. This may occur in all soil types including dense soils. However, on reaching a state of zero effective stress such soils immediately dilate and regain strength. Thus, shear strains are significantly less than a true state of soil liquefaction.
Occurrence
Liquefaction is more likely to occur in loose to moderately saturated granular soils with poor drainage, such as silty sands or sands and gravels containing impermeable sediments. During wave loading, usually cyclic undrained loading, e.g. seismic loading, loose sands tend to decrease in volume, which produces an increase in their pore water pressures and consequently a decrease in shear strength, i.e. reduction in effective stress.
Deposits most susceptible to liquefaction are young (Holocene-age, deposited within the last 10,000 years) sands and silts of similar grain size (well-sorted), in beds at least metres thick, and saturated with water. Such deposits are often found along stream beds, beaches, dunes, and areas where windblown silt (loess) and sand have accumulated. Examples of soil liquefaction include quicksand, quick clay, turbidity currents and earthquake-induced liquefaction.
Depending on the initial void ratio, the soil material can respond to loading either strain-softening or strain-hardening. Strain-softened soils, e.g., loose sands, can be triggered to collapse, either monotonically or cyclically, if the static shear stress is greater than the ultimate or steady-state shear strength of the soil. In this case flow liquefaction occurs, where the soil deforms at a low constant residual shear stress. If the soil strain-hardens, e.g., moderately dense to dense sand, flow liquefaction will generally not occur. However, cyclic softening can occur due to cyclic undrained loading, e.g., earthquake loading. Deformation during cyclic loading depends on the density of the soil, the magnitude and duration of the cyclic loading, and amount of shear stress reversal. If stress reversal occurs, the effective shear stress could reach zero, allowing cyclic liquefaction to take place. If stress reversal does not occur, zero effective stress cannot occur, and cyclic mobility takes place.
The resistance of the cohesionless soil to liquefaction will depend on the density of the soil, confining stresses, soil structure (fabric, age and cementation), the magnitude and duration of the cyclic loading, and the extent to which shear stress reversal occurs.
Liquefaction potential: simplified empirical analysis
Three parameters are needed to assess liquefaction potential using the simplified empirical method:
A measure of soil resistance to liquefaction: Standard Penetration Resistance (SPT), Cone Penetration Resistance (CPT), or shear wave velocity (Vs)
The earthquake load, measured as cyclic stress ratio
The capacity of the soil to resist liquefaction, expressed in terms of the cyclic resistance ratio (CRR); the demand (CSR) and capacity (CRR) are then compared, as in the sketch below
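As a rough illustration of how these quantities combine, the following Python sketch evaluates one common form of the cyclic stress ratio (after Seed and Idriss) and compares it with a cyclic resistance ratio taken from a published chart. All numerical values, including the stress-reduction factor, are hypothetical and are used only to show the structure of the calculation.

```python
# Minimal sketch of the "simplified" liquefaction screening calculation.
# The CRR is assumed to come from published SPT/CPT/Vs-based charts;
# every number below is a made-up illustration value.

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, r_d=1.0):
    """Earthquake demand: CSR = 0.65 * (a_max/g) * (sigma_v / sigma'_v) * r_d."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

def factor_of_safety(crr, csr):
    """FS below 1 indicates that liquefaction is predicted to be triggered."""
    return crr / csr

# Hypothetical case: peak ground acceleration 0.25 g, total and effective
# vertical stresses of 100 kPa and 60 kPa, stress-reduction factor 0.95,
# and a CRR of 0.20 read from an SPT-based chart.
csr = cyclic_stress_ratio(0.25, 100.0, 60.0, r_d=0.95)
fs = factor_of_safety(0.20, csr)
print(f"CSR = {csr:.3f}, FS = {fs:.2f}")
```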
Liquefaction potential: advanced constitutive model
The interaction between the solid skeleton and pore fluid flow has been considered by many researchers to model the material softening associated with the liquefaction phenomenon. The dynamic performance of saturated porous media depends on the soil-pore fluid interaction. When the saturated porous media is subjected to strong ground shaking, pore fluid movement relative to the solid skeleton is induced. The transient movement of pore fluid can significantly affect the redistribution of pore water pressure, which is generally governed by the loading rate, soil permeability, pressure gradient, and boundary conditions. It is well known that for a sufficiently high seepage velocity, the governing flow law in porous media is nonlinear and does not follow Darcy's law. This fact has been recently considered in the studies of soil-pore fluid interaction for liquefaction modeling. A fully explicit dynamic finite element method has been developed for turbulent flow law. The governing equations have been expressed for saturated porous media based on the extension of the Biot formulation. The elastoplastic behavior of soil under earthquake loading has been simulated using a generalized plasticity theory that is composed of a yield surface along with a non-associated flow rule.
Earthquake liquefaction
Pressures generated during large earthquakes can force underground water and liquefied sand to the surface. This can be observed at the surface as effects known alternatively as "sand boils", "sand blows" or "sand volcanoes". Such earthquake ground deformations can be categorized as primary deformation if located on or close to the ruptured fault, or distributed deformation if located at considerable distance from the ruptured fault.
The other common observation is land instability – cracking and movement of the ground down slope or towards unsupported margins of rivers, streams, or the coast. The failure of ground in this manner is called 'lateral spreading' and may occur on very shallow slopes with angles only 1 or 2 degrees from the horizontal.
One positive aspect of soil liquefaction is the tendency for the effects of earthquake shaking to be significantly damped (reduced) for the remainder of the earthquake. This is because liquids do not support a shear stress and so once the soil liquefies due to shaking, subsequent earthquake shaking (transferred through ground by shear waves) is not transferred to buildings at the ground surface.
Studies of liquefaction features left by prehistoric earthquakes, called paleoliquefaction or paleoseismology, can reveal information about earthquakes that occurred before records were kept or accurate measurements could be taken.
Soil liquefaction induced by earthquake shaking is a major contributor to urban seismic risk.
Effects
The effects of soil liquefaction on the built environment can be extremely damaging. Buildings whose foundations bear directly on sand which liquefies will experience a sudden loss of support, resulting in drastic and irregular settlement of the building. This can cause structural damage, including cracking of foundations and damage to the building structure, or leave the structure unserviceable even without structural damage. Where a thin crust of non-liquefied soil exists between the building foundation and liquefied soil, a 'punching shear' type foundation failure may occur. Irregular settlement may also break underground utility lines. The upward pressure applied by the movement of liquefied soil through the crust layer can crack weak foundation slabs and enter buildings through service ducts, and may allow water to damage building contents and electrical services.
Bridges and large buildings constructed on pile foundations may lose support from the adjacent soil and buckle or come to rest at a tilt.
Sloping ground and ground next to rivers and lakes may slide on a liquefied soil layer (termed 'lateral spreading'), opening large ground fissures, and can cause significant damage to buildings, bridges, roads and services such as water, natural gas, sewerage, power and telecommunications installed in the affected ground. Buried tanks and manholes may float in the liquefied soil due to buoyancy. Earth embankments such as flood levees and earth dams may lose stability or collapse if the material comprising the embankment or its foundation liquefies.
Over geological time, liquefaction of soil material due to earthquakes could provide a dense parent material in which the fragipan may develop through pedogenesis.
Mitigation methods
Mitigation methods have been devised by earthquake engineers and include various soil compaction techniques such as vibro compaction (compaction of the soil by depth vibrators), dynamic compaction, and vibro stone columns. These methods densify soil and enable buildings to avoid soil liquefaction.
The risk to existing buildings can be mitigated by injecting grout into the soil to stabilize the layer of soil that is subject to liquefaction. Another method, called IPS (induced partial saturation), in which the degree of saturation of the soil is decreased, is now practicable to apply on a larger scale.
Quicksand
Quicksand forms when water saturates an area of loose sand, and the sand is agitated. When the water trapped in the batch of sand cannot escape, it creates liquefied soil that can no longer resist force. Quicksand can be formed by standing or (upwards) flowing underground water (as from an underground spring), or by earthquakes. In the case of flowing underground water, the force of the water flow opposes the force of gravity, causing the granules of sand to be more buoyant. In the case of earthquakes, the shaking force can increase the pressure of shallow groundwater, liquefying sand and silt deposits. In both cases, the liquefied surface loses strength, causing buildings or other objects on that surface to sink or fall over.
The saturated sediment may appear quite solid until a change in pressure or a shock initiates the liquefaction, causing the sand to form a suspension with each grain surrounded by a thin film of water. This cushioning gives quicksand, and other liquefied sediments, a spongy, fluidlike texture. Objects in the liquefied sand sink to the level at which the weight of the object equals the weight of the displaced sand/water mix, at which point the object floats due to its buoyancy.
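A back-of-the-envelope sketch of that flotation argument, treating the liquefied sand simply as a dense fluid and applying Archimedes' principle; the densities used below are assumed round numbers, not measured values.

```python
# Rough illustration of the flotation level described above, treating
# liquefied quicksand as a simple dense fluid (Archimedes' principle).

def submerged_fraction(rho_object, rho_fluid):
    """Fraction of an object's volume below the surface when floating."""
    frac = rho_object / rho_fluid
    return min(frac, 1.0)  # an object denser than the fluid sinks completely

rho_person = 1000.0      # kg/m^3, roughly the density of a human body
rho_quicksand = 1800.0   # kg/m^3, assumed density of the sand/water mix

print(f"Submerged fraction: {submerged_fraction(rho_person, rho_quicksand):.2f}")
# -> about 0.56: the object sinks only until the displaced mix weighs as much as it does.
```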
Quick clay
Quick clay, known as Leda Clay in Canada, is a water-saturated gel, which in its solid form resembles highly sensitive clay. This clay has a tendency to change from a relatively stiff condition to a liquid mass when it is disturbed. This gradual change in appearance from solid to liquid is a process known as spontaneous liquefaction. The clay retains a solid structure despite its high water content (up to 80% by volume), because surface tension holds water-coated flakes of clay together. When the structure is broken by a shock or sufficient shear, it enters a fluid state.
Quick clay is found only in northern regions such as Russia, Canada, Alaska in the United States, Norway, Sweden and Finland, which were glaciated during the Pleistocene epoch.
Quick clay has been the underlying cause of many deadly landslides. In Canada alone, it has been associated with more than 250 mapped landslides. Some of these are ancient, and may have been triggered by earthquakes.
Turbidity currents
Submarine landslides are turbidity currents and consist of water-saturated sediments flowing downslope. An example occurred during the 1929 Grand Banks earthquake that struck the continental slope off the coast of Newfoundland. Minutes later, transatlantic telephone cables began breaking sequentially, further and further downslope, away from the epicenter. Twelve cables were snapped in a total of 28 places. The exact times and locations were recorded for each break. Investigators suggested that a 60-mile-per-hour (100 km/h) submarine landslide or turbidity current of water-saturated sediments swept 400 miles (600 km) down the continental slope from the earthquake's epicenter, snapping the cables as it passed.
See also
Atterberg limits
Dry quicksand
Earthflow
Earthquake engineering
Fluidization
Liquefaction
Mud volcano
Mudflow
Network for Earthquake Engineering Simulation#Soil liquefaction research
Paleoseismology
Sand boil
Subsidence
Thixotropy
References
Further reading
Seed et al., Recent Advances in Soil Liquefaction Engineering: A Unified and Consistent Framework, 26th Annual ASCE Los Angeles Geotechnical Spring Seminar, Long Beach, California, April 30, 2003, Earthquake Engineering Research Center
External links
Soil Liquefaction
Liquefaction – Pacific Northwest Seismic Network
Liquefaction in Chiba, Japan on YouTube recorded during the 2011 Tohoku earthquake
Earthquake engineering
Liquifaction, soil
Sedimentology
Seismology
Soil mechanics
Natural disasters | Soil liquefaction | [
"Physics",
"Engineering",
"Environmental_science"
] | 3,482 | [
"Structural engineering",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Weather",
"Soil mechanics",
"Natural disasters",
"Civil engineering",
"Earthquake engineering",
"Environmental soil science"
] |
544,888 | https://en.wikipedia.org/wiki/Invar | Invar, also known generically as FeNi36 (64FeNi in the US), is a nickel–iron alloy notable for its uniquely low coefficient of thermal expansion (CTE or α). The name Invar comes from the word invariable, referring to its relative lack of expansion or contraction with temperature changes, and is a registered trademark of ArcelorMittal.
The discovery of the alloy was made in 1895 by Swiss physicist Charles Édouard Guillaume for which he received the Nobel Prize in Physics in 1920. It enabled improvements in scientific instruments.
Properties
Like other nickel/iron compositions, Invar is a solid solution; that is, it is a single-phase alloy. In one commercial grade called Invar 36 it consists of approximately 36% nickel and 64% iron, has a melting point of 1427 °C, a density of 8.05 g/cm³ and a resistivity of 8.2 × 10−5 Ω·cm. The invar range was described by Westinghouse scientists in 1961 as "30–45 atom per cent nickel".
Common grades of Invar have a coefficient of thermal expansion (denoted α, and measured between 20 °C and 100 °C) of about 1.2 × 10−6 K−1 (1.2 ppm/°C), while ordinary steels have values of around 11–15 ppm/°C. Extra-pure grades (<0.1% Co) can readily produce values as low as 0.62–0.65 ppm/°C. Some formulations display negative thermal expansion (NTE) characteristics. Though it displays high dimensional stability over a range of temperatures, it does have a propensity to creep.
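For a sense of scale, the following minimal sketch applies the linear-expansion relation ΔL = α·L·ΔT to the CTE values quoted above; the one-metre length and 50 °C temperature rise are arbitrary illustration values.

```python
# Linear thermal expansion delta_L = alpha * L * delta_T, using the CTE
# values quoted above for Invar and for ordinary steel. The bar length and
# temperature swing are arbitrary illustration values.

def expansion_um(alpha_ppm_per_C, length_m, delta_T_C):
    """Length change in micrometres for a given CTE in ppm/°C."""
    return alpha_ppm_per_C * 1e-6 * length_m * delta_T_C * 1e6

for name, alpha in [("Invar (typical)", 1.2), ("Invar (extra-pure)", 0.63), ("ordinary steel", 12.0)]:
    print(f"{name:20s}: {expansion_um(alpha, 1.0, 50.0):6.1f} µm over 1 m for a 50 °C rise")
```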
Historically, the paramagnetic properties of certain iron-nickel alloys were first identified as a unique characteristic. These alloys exhibit a coexistence of two types of structures, whose proportions vary depending on temperature. One of these structures is characterized by a high magnetic moment (ranging from 2.2 to 2.5 μB) and a high lattice parameter, adhering to Hund's rules. The other structure, in contrast, has a low magnetic moment (ranging from 0.8 to 1.5 μB) and a low lattice parameter. When exposed to a variable magnetic field, this dual-structure nature induces dimensional changes in the alloy. This phenomenon is particularly significant in the case of Invar alloys, which are renowned for their exceptional dimensional stability over a wide range of temperatures. However, to maintain this stability, it is crucial to avoid exposing the material to magnetic fields, as such exposure can disrupt the delicate balance between the two structures and lead to undesirable dimensional variations.
In recent years, advancements in material science have led to the development of non-ferromagnetic Invar alloys. These innovative materials have opened up new possibilities for applications in cutting-edge fields such as the semiconductor industry and aerospace engineering. By eliminating the influence of magnetic fields on dimensional stability, non-ferromagnetic Invar alloys have the potential to significantly enhance the performance of optical instruments and other precision devices.
Applications
Invar is used where high dimensional stability is required, such as precision instruments, clocks, seismic creep gauges, color-television tubes' shadow-mask frames, valves in engines and large aerostructure molds.
One of its first applications was in watch balance wheels and pendulum rods for precision regulator clocks. At the time it was invented, the pendulum clock was the world's most precise timekeeper, and the limit to timekeeping accuracy was due to thermal variations in length of clock pendulums. The Riefler regulator clock developed in 1898 by Clemens Riefler, the first clock to use an Invar pendulum, had an accuracy of 10 milliseconds per day, and served as the primary time standard in naval observatories and for national time services until the 1930s.
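A rough calculation shows why this mattered for pendulums: since the period varies as the square root of the length, a fractional length change of α·ΔT shifts the rate by about half that amount. The sketch below assumes an uncompensated rod and an arbitrary 5 °C temperature swing; real regulator clocks combined low-expansion rods with temperature-controlled enclosures and further compensation.

```python
# Daily timekeeping error of a pendulum whose rod expands thermally.
# T = 2*pi*sqrt(L/g), so dT/T is approximately 0.5 * dL/L = 0.5 * alpha * delta_temp.
# The 5 degree temperature swing is an assumed illustration value.

SECONDS_PER_DAY = 86_400

def daily_error_ms(alpha_per_K, delta_temp_K):
    """Approximate gain or loss in milliseconds per day from thermal expansion."""
    return 0.5 * alpha_per_K * delta_temp_K * SECONDS_PER_DAY * 1000

print(f"steel rod (12 ppm/K):  {daily_error_ms(12e-6, 5):.0f} ms/day")
print(f"Invar rod (1.2 ppm/K): {daily_error_ms(1.2e-6, 5):.0f} ms/day")
```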
In land surveying, when first-order (high-precision) elevation leveling is to be performed, the level staff (leveling rod) used is made of Invar, instead of wood, fiberglass, or other metals. Invar struts were used in some pistons to limit their thermal expansion inside their cylinders. In the manufacture of large composite material structures for aerospace carbon fibre layup molds, Invar is used to facilitate the manufacture of parts to extremely tight tolerances.
In astronomy, Invar is used for structural components that support the dimension-sensitive optics of telescopes. The superior dimensional stability of Invar allows such telescopes to achieve significantly better observation precision and accuracy.
Variations
There are variations of the original Invar material that have slightly different coefficients of thermal expansion, such as:
Inovco, which is Fe–33Ni–4.5Co and has an α of 0.55 ppm/°C (from 20 to 100 °C).
FeNi42 (for example NILO alloy 42), which has a nickel content of 42% and a coefficient of thermal expansion matching that of silicon, is widely used as a lead frame material for integrated circuits, etc.
FeNiCo alloys—named Kovar or Dilver P—that have the same expansion behaviour (~) and form strong bonds with molten borosilicate glass, and because of that are used for glass-to-metal seals, and to support optical parts in a wide range of temperatures and applications, such as satellites.
Explanation of anomalous properties
A detailed explanation of Invar's anomalously low CTE has proven elusive for physicists.
All the iron-rich face-centered cubic Fe–Ni alloys show Invar anomalies in their measured thermal and magnetic properties that evolve continuously in intensity with varying alloy composition. Scientists had once proposed that Invar's behavior was a direct consequence of a high-magnetic-moment to low-magnetic-moment transition occurring in the face centered cubic Fe–Ni series (and that gives rise to the mineral antitaenite); however, this theory was proven incorrect. Instead, it appears that the low-moment/high-moment transition is preceded by a high-magnetic-moment frustrated ferromagnetic state in which the Fe–Fe magnetic exchange bonds have a large magneto-volume effect of the right sign and magnitude to create the observed thermal expansion anomaly.
Wang et al. considered the statistical mixture between the fully ferromagnetic (FM) configuration and the spin-flipping configurations (SFCs) in with the free energies of FM and SFCs predicted from first-principles calculations and were able to predict the temperature ranges of negative thermal expansion under various pressures. It was shown that all individual FM and SFCs have positive thermal expansion, and the negative thermal expansion originates from the increasing populations of SFCs with smaller volumes than that of FM.
See also
Constantan and Manganin, alloys with relatively constant electrical resistivity
Elinvar, alloy with relatively constant elasticity over a range of temperatures
Sitall and Zerodur, ceramic materials with a relatively low thermal expansion
Borosilicate glass and Ultra low expansion glass, low expansion glasses resistant to thermal shock
References
Ferrous alloys
Nickel alloys
Surveying instruments
Low thermal expansion materials | Invar | [
"Physics",
"Chemistry"
] | 1,458 | [
"Nickel alloys",
"Ferrous alloys",
"Low thermal expansion materials",
"Materials",
"Alloys",
"Matter"
] |
544,919 | https://en.wikipedia.org/wiki/Eadie%E2%80%93Hofstee%20diagram | In biochemistry, an Eadie–Hofstee plot (or Eadie–Hofstee diagram) is a graphical representation of the Michaelis–Menten equation in enzyme kinetics. It has been known by various different names, including Eadie plot, Hofstee plot and Augustinsson plot. Attribution to Woolf is often omitted, because although Haldane and Stern credited Woolf with the underlying equation, it was just one of the three linear transformations of the Michaelis–Menten equation that they initially introduced. However, Haldane indicated in 1957 that Woolf had indeed found the three linear forms: In 1932, Dr. Kurt Stern published a German translation of my book Enzymes, with numerous additions to the English text. On pp. 119–120, I described some graphical methods, stating that they were due to my friend Dr. Barnett Woolf. [...] Woolf pointed out that linear graphs are obtained when v is plotted against v/a, 1/v against 1/a, or a/v against a, the first plot being most convenient unless inhibition is being studied. | Eadie–Hofstee diagram | [
Derivation of the equation for the plot
The simplest equation for the rate v of an enzyme-catalysed reaction as a function of the substrate concentration a is the Michaelis–Menten equation, which can be written as follows:
v = Va/(Km + a)
in which V is the rate at substrate saturation (when a approaches infinity), or limiting rate, and Km is the value of a at half-saturation, i.e. for v = 0.5V, known as the Michaelis constant. Eadie and Hofstee transformed this into a straight-line relationship. Multiplication of both sides by (Km + a)/a gives:
v(Km + a)/a = V
This can be directly rearranged to express a straight-line relationship:
v = V − Km·(v/a)
which shows that a plot of v against v/a is a straight line with intercept V on the ordinate, and slope −Km (Hofstee plot).
In the Eadie plot the axes are reversed:
v/a = V/Km − v/Km
with intercept V/Km on the ordinate, and slope −1/Km.
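A minimal sketch (using NumPy and arbitrary synthetic parameter values) of how the Hofstee form can be used to recover V and Km from rate data by a straight-line fit of v against v/a.

```python
import numpy as np

# Hofstee form: v = V - Km * (v/a). Fitting v against v/a therefore gives
# V as the intercept and -Km as the slope. Parameter values are arbitrary.

V_true, Km_true = 10.0, 2.0
a = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # substrate concentrations
v = V_true * a / (Km_true + a)                  # Michaelis-Menten rates

slope, intercept = np.polyfit(v / a, v, 1)
print(f"estimated V  = {intercept:.2f}")
print(f"estimated Km = {-slope:.2f}")
```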
These plots are kinetic versions of the Scatchard plot used in ligand-binding experiments.
Attribution to Augustinsson
The plot is occasionally attributed to Augustinsson and referred to as the Woolf–Augustinsson–Hofstee plot or simply the Augustinsson plot. However, although neither Haldane, Woolf nor Eadie was explicitly cited when Augustinsson introduced the v/a versus v plot, both the work of Haldane and that of Eadie are cited at other places in his work and are listed in his bibliography.
Effect of experimental error
Experimental error is usually assumed to affect the rate and not the substrate concentration , so is the dependent variable. As a result, both ordinate and abscissa are subject to experimental error, and so the deviations that occur due to error are not parallel with the ordinate axis but towards or away from the origin. As long as the plot is used for illustrating an analysis rather than for estimating the parameters, that matters very little. Regardless of these considerations various authors have compared the suitability of the various plots for displaying and analysing data.
Use for estimating parameters
Like other straight-line forms of the Michaelis–Menten equation, the Eadie–Hofstee plot was used historically for rapid evaluation of the parameters and , but has been largely superseded by nonlinear regression methods that are significantly more accurate when properly weighted and no longer computationally inaccessible.
Making faults in experimental design visible
As the ordinate scale spans the entire range of theoretically possible values, from 0 to V, one can see at a glance at an Eadie–Hofstee plot how well the experimental design fills the theoretical design space, and the plot makes it impossible to hide poor design. By contrast, the other well known straight-line plots make it easy to choose scales that suggest that the design is better than it is. Faulty design, as shown in the right-hand diagram, is common with experiments with a substrate that is not soluble enough or too expensive to use at concentrations above Km, and in this case the kinetic parameters cannot be estimated satisfactorily. The opposite case, with values concentrated above Km (left-hand diagram), is less common but not unknown, as for example in a study of nitrate reductase.
See also
Michaelis–Menten kinetics
Lineweaver–Burk plot
Hanes–Woolf plot
Direct linear plot
Footnotes and references
Diagrams
Enzyme kinetics
Biotechnology
Molecular biology | Eadie–Hofstee diagram | [
"Chemistry",
"Biology"
] | 872 | [
"Enzyme kinetics",
"Biotechnology",
"nan",
"Molecular biology",
"Biochemistry",
"Chemical kinetics"
] |
3,763,642 | https://en.wikipedia.org/wiki/Uranium%20diboride | Uranium boride (UB2), a compound of uranium and boron, is a very stable glassy boride material that is insoluble in water.
It is being explored as an ingredient in high entropy alloys, and as a method of immobilizing uranium-based radioactive waste, and rendering it safe for long-term storage. It has some applications in endocurietherapy, a method of radiation therapy wherein radioactive microspheres are implanted directly into the treatment site and allowed to remain for an extended period of time, may also use this class of material as it would not be attacked while in situ.
It is being considered as a nuclear fuel material as it has a high density and thermal conductivity
References
Uranium compounds
Borides
Nuclear materials
Non-oxide glasses | Uranium diboride | [
"Physics",
"Chemistry"
] | 163 | [
"Inorganic compounds",
"Inorganic compound stubs",
"Materials",
"Nuclear materials",
"Matter"
] |
3,763,850 | https://en.wikipedia.org/wiki/Resampling%20%28statistics%29 | In statistics, resampling is the creation of new samples based on one observed sample.
Resampling methods are:
Permutation tests (also re-randomization tests)
Bootstrapping
Cross validation
Jackknife
Permutation tests
Permutation tests rely on resampling the original data assuming the null hypothesis. Based on the resampled data it can be concluded how likely the original data is to occur under the null hypothesis.
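A minimal sketch of a two-sample permutation test for a difference in means; the data values and the number of permutations below are arbitrary.

```python
import random

# Minimal two-sample permutation test for a difference in means.
# Group labels are repeatedly shuffled under the null hypothesis that the
# grouping is irrelevant; the p-value is the fraction of shuffles producing
# a difference at least as extreme as the observed one.

def permutation_test(x, y, n_permutations=10_000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        x_star, y_star = pooled[:len(x)], pooled[len(x):]
        diff = abs(sum(x_star) / len(x_star) - sum(y_star) / len(y_star))
        if diff >= observed:
            count += 1
    return (count + 1) / (n_permutations + 1)   # add-one correction

print(permutation_test([12.1, 11.8, 13.0, 12.5], [10.2, 10.9, 11.1, 10.5]))
```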
Bootstrap
Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter like a mean, median, proportion, odds ratio, correlation coefficient or regression coefficient. It has been called the plug-in principle, as it is the method of estimation of functionals of a population distribution by evaluating the same functionals at the empirical distribution based on a sample.
For example, when estimating the population mean, this method uses the sample mean; to estimate the population median, it uses the sample median; to estimate the population regression line, it uses the sample regression line.
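A minimal sketch of the plug-in idea for the standard error of the sample mean, using made-up data; the number of resamples is arbitrary.

```python
import random
import statistics

# Bootstrap estimate of the standard error of the sample mean: resample the
# observed data with replacement many times and take the standard deviation
# of the resampled means.

def bootstrap_se_of_mean(data, n_resamples=5_000, seed=0):
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in data]   # sample with replacement
        means.append(statistics.fmean(resample))
    return statistics.stdev(means)

data = [2.3, 1.9, 3.1, 2.8, 2.2, 3.5, 2.7, 1.8]
print(f"bootstrap SE of the mean: {bootstrap_se_of_mean(data):.3f}")
```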
It may also be used for constructing hypothesis tests. It is often used as a robust alternative to inference based on parametric assumptions when those assumptions are in doubt, or where parametric inference is impossible or requires very complicated formulas for the calculation of standard errors. Bootstrapping techniques are also used in the updating-selection transitions of particle filters, genetic type algorithms and related resample/reconfiguration Monte Carlo methods used in computational physics. In this context, the bootstrap is used to replace sequentially empirical weighted probability measures by empirical measures. The bootstrap allows samples with low weights to be replaced by copies of the samples with high weights.
Cross-validation
Cross-validation is a statistical method for validating a predictive model. Subsets of the data are held out for use as validating sets; a model is fit to the remaining data (a training set) and used to predict for the validation set. Averaging the quality of the predictions across the validation sets yields an overall measure of prediction accuracy. Cross-validation is employed repeatedly in building decision trees.
One form of cross-validation leaves out a single observation at a time; this is similar to the jackknife. Another, K-fold cross-validation, splits the data into K subsets; each is held out in turn as the validation set.
This avoids "self-influence". For comparison, in regression analysis methods such as linear regression, each y value draws the regression line toward itself, making the prediction of that value appear more accurate than it really is. Cross-validation applied to linear regression predicts the y value for each observation without using that observation.
This is often used for deciding how many predictor variables to use in regression. Without cross-validation, adding predictors always reduces the residual sum of squares (or possibly leaves it unchanged). In contrast, the cross-validated mean-square error will tend to decrease if valuable predictors are added, but increase if worthless predictors are added.
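A minimal sketch of K-fold cross-validation for a deliberately trivial "model" that simply predicts the mean of its training fold; the data values are made up.

```python
import statistics

# Minimal K-fold cross-validation: each fold is held out in turn, the
# "model" (here just the mean of the training folds) is fit on the rest,
# and the cross-validated mean squared error is reported.

def k_fold_mse(y, k=4):
    folds = [y[i::k] for i in range(k)]              # simple interleaved split
    errors = []
    for i, validation in enumerate(folds):
        training = [v for j, f in enumerate(folds) if j != i for v in f]
        prediction = statistics.fmean(training)      # "fit" on the training data
        errors.extend((v - prediction) ** 2 for v in validation)
    return statistics.fmean(errors)

y = [3.1, 2.9, 3.4, 2.7, 3.0, 3.3, 2.8, 3.2]
print(f"4-fold cross-validated MSE: {k_fold_mse(y):.3f}")
```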
Monte Carlo cross-validation
Subsampling is an alternative method for approximating the sampling distribution of an estimator. The two key differences to the bootstrap are:
the resample size is smaller than the sample size and
resampling is done without replacement.
The advantage of subsampling is that it is valid under much weaker conditions compared to the bootstrap. In particular, a set of sufficient conditions is that the rate of convergence of the estimator is known and that the limiting distribution is continuous.
In addition, the resample (or subsample) size must tend to infinity together with the sample size but at a smaller rate, so that their ratio converges to zero. While subsampling was originally proposed for the case of independent and identically distributed (iid) data only, the methodology has been extended to cover time series data as well; in this case, one resamples blocks of subsequent data rather than individual data points. There are many cases of applied interest where subsampling leads to valid inference whereas bootstrapping does not; for example, such cases include examples where the rate of convergence of the estimator is not the square root of the sample size or when the limiting distribution is non-normal. When both subsampling and the bootstrap are consistent, the bootstrap is typically more accurate. RANSAC is a popular algorithm using subsampling.
Jackknife cross-validation
Jackknifing (jackknife cross-validation), is used in statistical inference to estimate the bias and standard error (variance) of a statistic, when a random sample of observations is used to calculate it. Historically, this method preceded the invention of the bootstrap with Quenouille inventing this method in 1949 and Tukey extending it in 1958. This method was foreshadowed by Mahalanobis who in 1946 suggested repeated estimates of the statistic of interest with half the sample chosen at random. He coined the name 'interpenetrating samples' for this method.
Quenouille invented this method with the intention of reducing the bias of the sample estimate. Tukey extended this method by assuming that if the replicates could be considered identically and independently distributed, then an estimate of the variance of the sample parameter could be made and that it would be approximately distributed as a t variate with n−1 degrees of freedom (n being the sample size).
The basic idea behind the jackknife variance estimator lies in systematically recomputing the statistic estimate, leaving out one or more observations at a time from the sample set. From this new set of replicates of the statistic, an estimate for the bias and an estimate for the variance of the statistic can be calculated. The jackknife is equivalent to the random (subsampling) leave-one-out cross-validation; it differs only in the goal.
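A minimal sketch of the delete-1 jackknife variance estimate for the sample mean, using made-up data.

```python
import statistics

# Delete-1 jackknife estimate of the variance of a statistic:
# var_jack = (n - 1)/n * sum_i (theta_(i) - mean of the theta_(i))^2,
# where theta_(i) is the statistic recomputed with observation i left out.

def jackknife_variance(data, statistic=statistics.fmean):
    n = len(data)
    leave_one_out = [statistic(data[:i] + data[i + 1:]) for i in range(n)]
    centre = statistics.fmean(leave_one_out)
    return (n - 1) / n * sum((t - centre) ** 2 for t in leave_one_out)

data = [2.3, 1.9, 3.1, 2.8, 2.2, 3.5, 2.7, 1.8]
print(f"jackknife variance of the mean: {jackknife_variance(data):.4f}")
# For the sample mean this agrees with the familiar estimate s^2 / n.
```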
For many statistical parameters the jackknife estimate of variance tends asymptotically to the true value almost surely. In technical terms one says that the jackknife estimate is consistent. The jackknife is consistent for the sample means, sample variances, central and non-central t-statistics (with possibly non-normal populations), sample coefficient of variation, maximum likelihood estimators, least squares estimators, correlation coefficients and regression coefficients.
It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom.
Instead of using the jackknife to estimate the variance, it may instead be applied to the log of the variance. This transformation may result in better estimates particularly when the distribution of the variance itself may be non normal.
The jackknife, like the original bootstrap, is dependent on the independence of the data. Extensions of the jackknife to allow for dependence in the data have been proposed. One such extension is the delete-a-group method used in association with Poisson sampling.
Comparison of bootstrap and jackknife
Both methods, the bootstrap and the jackknife, estimate the variability of a statistic from the variability of that statistic between subsamples, rather than from parametric assumptions. For the more general jackknife, the delete-m observations jackknife, the bootstrap can be seen as a random approximation of it. Both yield similar numerical results, which is why each can be seen as approximation to the other. Although there are huge theoretical differences in their mathematical insights, the main practical difference for statistics users is that the bootstrap gives different results when repeated on the same data, whereas the jackknife gives exactly the same result each time. Because of this, the jackknife is popular when the estimates need to be verified several times before publishing (e.g., official statistics agencies). On the other hand, when this verification feature is not crucial and it is of interest not to have a number but just an idea of its distribution, the bootstrap is preferred (e.g., studies in physics, economics, biological sciences).
Whether to use the bootstrap or the jackknife may depend more on operational aspects than on statistical concerns of a survey. The jackknife, originally used for bias reduction, is more of a specialized method and only estimates the variance of the point estimator. This can be enough for basic statistical inference (e.g., hypothesis testing, confidence intervals). The bootstrap, on the other hand, first estimates the whole distribution (of the point estimator) and then computes the variance from that. While powerful and easy, this can become highly computationally intensive.
"The bootstrap can be applied to both variance and distribution estimation problems. However, the bootstrap variance estimator is not as good as the jackknife or the balanced repeated replication (BRR) variance estimator in terms of the empirical results. Furthermore, the bootstrap variance estimator usually requires more computations than the jackknife or the BRR. Thus, the bootstrap is mainly recommended for distribution estimation."
There is a special consideration with the jackknife, particularly with the delete-1 observation jackknife. It should only be used with smooth, differentiable statistics (e.g., totals, means, proportions, ratios, odds ratios, regression coefficients, etc.; not with medians or quantiles). This could become a practical disadvantage. This disadvantage is usually the argument favoring bootstrapping over jackknifing. More general jackknives than the delete-1, such as the delete-m jackknife or the delete-all-but-2 Hodges–Lehmann estimator, overcome this problem for the medians and quantiles by relaxing the smoothness requirements for consistent variance estimation.
Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and under unequal-probability sampling designs. Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995), whereas a basic introduction is accounted in Wolter (2007). The bootstrap estimate of model prediction bias is more precise than jackknife estimates with linear models such as linear discriminant function or multiple regression.
See also
Bootstrap aggregating (bagging)
Genetic algorithm
Monte Carlo method
Nonparametric statistics
Particle filter
Pseudoreplication
Non-uniform random variate generation
Random permutation
Replication (statistics)
Surrogate data testing
References
Literature
Good, P. (2006) Resampling Methods. 3rd Ed. Birkhauser.
Wolter, K.M. (2007). Introduction to Variance Estimation. 2nd Edition. Springer, Inc.
Pierre Del Moral (2004). Feynman-Kac formulae. Genealogical and Interacting particle systems with applications, Springer, Series Probability and Applications.
Pierre Del Moral (2013). Del Moral, Pierre (2013). Mean field simulation for Monte Carlo integration. Chapman & Hall/CRC Press, Monographs on Statistics and Applied Probability.
Jiang W, Simon R. A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification. Stat Med. 2007 Dec 20;26(29):5320-34. doi: 10.1002/sim.2968. PMID: 17624926. https://brb.nci.nih.gov/techreport/prederr_rev_0407.pdf
External links
Software
Angelo Canty and Brian Ripley (2010). boot: Bootstrap R (S-Plus) Functions. R package version 1.2-43. Functions and datasets for bootstrapping from the book Bootstrap Methods and Their Applications by A. C. Davison and D. V. Hinkley (1997, CUP).
Statistics101: Resampling, Bootstrap, Monte Carlo Simulation program
R package 'samplingVarEst': Sampling Variance Estimation. Implements functions for estimating the sampling variance of some point estimators.
Paired randomization/permutation test for evaluation of TREC results
Randomization/permutation tests to evaluate outcomes in information retrieval experiments (with and without adjustments for multiple comparisons).
Bioconductor resampling-based multiple hypothesis testing with Applications to Genomics.
permtest: an R package to compare the variability within and distance between two groups within a set of microarray data.
Bootstrap Resampling: interactive demonstration of hypothesis testing with bootstrap resampling in R.
Permutation Test: interactive demonstration of hypothesis testing with permutation test in R.
Monte Carlo methods
Statistical inference
Nonparametric statistics | Resampling (statistics) | [
"Physics"
] | 2,755 | [
"Monte Carlo methods",
"Computational physics"
] |
3,764,207 | https://en.wikipedia.org/wiki/Pfitzner%E2%80%93Moffatt%20oxidation | The Pfitzner–Moffatt oxidation, sometimes referred to as simply the Moffatt oxidation, is a chemical reaction for the oxidation of primary and secondary alcohols to aldehydes and ketones, respectively. The oxidant is a combination of dimethyl sulfoxide (DMSO) and dicyclohexylcarbodiimide (DCC). The reaction was first reported by J. Moffatt and his student K. Pfitzner in 1963.
Stoichiometry and mechanism
The reaction requires one equivalent each of the diimide, which is the dehydrating agent, and the sulfoxide, the oxidant:
(CH3)2SO + (CyN)2C + R2CHOH → (CH3)2S + (CyNH)2CO + R2C=O
Typically the sulfoxide and diimide are used in excess. The reaction cogenerates dimethyl sulfide and a urea. Dicyclohexylurea ((CyNH)2CO) can be difficult to remove from the product.
In terms of mechanism, the reaction is proposed to involve the intermediacy of a sulfonium species, formed by a reaction between DMSO and the carbodiimide.
This species is highly reactive and is attacked by the alcohol. Rearrangement gives an alkoxysulfonium ylide, which decomposes to give dimethyl sulfide and the carbonyl compound.
This reaction has been largely displaced by the Swern oxidation, which also uses DMSO as an oxidant in the presence of an electrophilic activator. Swern oxidations tend to give higher yields and simpler workup; however, they typically employ cryogenic conditions.
See also
Parikh–Doering oxidation - mechanistically similar alcohol oxidation, replaces carbodiimide with sulfur trioxide
Corey–Kim oxidation
Swern oxidation
Alcohol oxidation
Sulfonium-based oxidation of alcohols to aldehydes
References
Organic oxidation reactions
Name reactions | Pfitzner–Moffatt oxidation | [
"Chemistry"
] | 430 | [
"Name reactions",
"Organic oxidation reactions",
"Organic reactions"
] |
3,764,368 | https://en.wikipedia.org/wiki/Torque%20limiter | A torque limiter is an automatic device that protects mechanical equipment, or its work, from damage by mechanical overload. A torque limiter may limit the torque by slipping (as in a friction plate slip-clutch), or uncouple the load entirely (as in a shear pin). The action of a torque limiter is especially useful to limit any damage due to crash stops and jams.
Torque limiters may be packaged as a shaft coupling or as a hub for sprocket or sheave. A torque limiting device is also known as an overload clutch.
Disconnect types
Disconnect types will uncouple the drive, with little or no residual torque making its way to the load. They may reset automatically or manually.
Shear pin
A shear pin type sacrifices a mechanical component, the pin, to disconnect the shafts. The use of shear pins as torque limiters has been well known since at least the early 20th century.
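An order-of-magnitude sizing sketch for a shear pin is shown below, assuming that release occurs when the shear stress over the pin cross-section, acting at the coupling radius, reaches the material's ultimate shear strength. The pin diameter, radius and strength are illustrative assumptions only; real designs must also account for stress concentration, fatigue, tolerances and safety factors.

```python
import math

# Rough sizing sketch for a shear-pin torque limiter: the pin is assumed to
# fail when the shear force across its cross-section, acting at radius r
# from the shaft axis, exceeds the material's ultimate shear strength.

def shear_pin_torque(d_pin_m, r_m, tau_ult_pa, shear_planes=1):
    area = math.pi * d_pin_m ** 2 / 4.0
    return shear_planes * tau_ult_pa * area * r_m   # N*m

# Hypothetical 4 mm brass pin at a 20 mm radius, ultimate shear strength ~200 MPa
print(f"approximate release torque: {shear_pin_torque(0.004, 0.020, 200e6):.0f} N*m")
```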
Synchronous magnetic
A synchronous magnetic torque limiter uses permanent magnets mounted to each shaft, with an air gap between. They are very fast acting, but may have more backlash than mechanical types. Because there is no mechanical contact between the two shafts, they are also used to transmit torque through a physical barrier like a thin plastic wall. On some models, the torque limit may be adjusted by changing the gap between the magnets.
Ball detent
A ball detent type limiter transmits force through hardened balls which rest in detents on the shaft and are held in place with springs. An over-torque condition pushes the balls out of their detents, thereby decoupling the shaft. It can have single or multiple detent positions, or a snap acting spring which requires a manual reset. There may be a compression adjustment to adjust the torque limit.
Many cordless drills incorporate this type of torque limiter in a planetary gearset. It may be a part of an assembly of multiple gearsets used to primarily reduce speed and multiply torque as well as perform ratio changes. The torque limiter is typically the last gearset in the transmission. It uses the planet carrier as the input with the sun gear as the output, and the annulus normally locked. A series of ball detents act on the annulus to lock it, allowing power to be transmitted from the planet carrier to the sun gear. When the torque transmitted through the gearset reaches a determinate amount, the torque acting on the annulus causes it to unlock from its ball detents and freely rotate, causing power to be diverted from the load on the sun gear to the annulus and thereby stalling the output until torque is reduced to an amount where the ball detents can lock the annulus again. This system equally limits torque in both directions of rotation and also works with the sun gear as the input. The compression of the ball detents (and therefore the amount of torque at which the limiter is utilized) is typically adjusted by means of a rotating collar accessible to the user which is indexed and held in place with its own separate ball detents.
Pawl and spring
This mechanical type uses a spring to hold a drive pawl against a notch in the rotor. It may feature automatic or manual reset. A compression adjustment on the spring determines the torque limit.
Friction plate
This type is similar to a friction plate clutch. Over-torque will cause the plates to slip. A simple example is found in a fixed-spool fishing reel, where the slipping torque is set by means of a large hand nut in order that the reel will turn and allow more line to unwind before the line breaks under the pull of a fish.
Magnetic particle
A magnetic particle clutch can be used effectively as a torque limiter. The torque setting fairly approximates a linear relationship with the current passing through the windings, which can be statically or dynamically set depending on needs.
Magnetic hysteresis
This type is non-synchronous in normal operation, so there is always some slippage.
See also
Torque converter
References
External links
Mechanisms (engineering)
Safety equipment
Limiter | Torque limiter | [
"Physics",
"Engineering"
] | 836 | [
"Force",
"Physical quantities",
"Mechanical engineering",
"Wikipedia categories named after physical quantities",
"Mechanisms (engineering)",
"Torque"
] |
3,766,499 | https://en.wikipedia.org/wiki/Eyring%20equation | The Eyring equation (occasionally also known as Eyring–Polanyi equation) is an equation used in chemical kinetics to describe changes in the rate of a chemical reaction against temperature. It was developed almost simultaneously in 1935 by Henry Eyring, Meredith Gwynne Evans and Michael Polanyi. The equation follows from the transition state theory, also known as activated-complex theory. If one assumes a constant enthalpy of activation and constant entropy of activation, the Eyring equation is similar to the empirical Arrhenius equation, despite the Arrhenius equation being empirical and the Eyring equation based on statistical mechanical justification.
General form
The general form of the Eyring–Polanyi equation somewhat resembles the Arrhenius equation:
k = (κ·kB·T/h)·exp(−ΔG‡/(RT))
where k is the rate constant, ΔG‡ is the Gibbs energy of activation, κ is the transmission coefficient, kB is the Boltzmann constant, T is the temperature, and h is the Planck constant.
The transmission coefficient is often assumed to be equal to one as it reflects what fraction of the flux through the transition state proceeds to the product without recrossing the transition state. So, a transmission coefficient equal to one means that the fundamental no-recrossing assumption of transition state theory holds perfectly. However, κ is typically not one because (i) the reaction coordinate chosen for the process at hand is usually not perfect and (ii) many barrier-crossing processes are somewhat or even strongly diffusive in nature. For example, the transmission coefficient of methane hopping in a gas hydrate from one site to an adjacent empty site is between 0.25 and 0.5. Typically, reactive flux correlation function (RFCF) simulations are performed in order to explicitly calculate κ from the resulting plateau in the RFCF. This approach is also referred to as the Bennett–Chandler approach, which yields a dynamical correction to the standard transition state theory-based rate constant.
It can be rewritten as:
k = (κ·kB·T/h)·exp(ΔS‡/R)·exp(−ΔH‡/(RT))
One can put this equation in the following form:
ln(k/T) = −(ΔH‡/R)·(1/T) + ln(κ·kB/h) + ΔS‡/R
where:
k = reaction rate constant
T = absolute temperature
ΔH‡ = enthalpy of activation
R = gas constant
κ = transmission coefficient
kB = Boltzmann constant = R/NA, NA = Avogadro constant
h = Planck constant
ΔS‡ = entropy of activation
If one assumes constant enthalpy of activation, constant entropy of activation, and constant transmission coefficient, this equation can be used as follows: A certain chemical reaction is performed at different temperatures and the reaction rate is determined. The plot of ln(k/T) versus 1/T gives a straight line with slope −ΔH‡/R, from which the enthalpy of activation can be derived, and with intercept ln(κ·kB/h) + ΔS‡/R, from which the entropy of activation is derived.
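A minimal sketch of such an Eyring plot, fitting ln(k/T) against 1/T with NumPy; the rate constants below are synthetic values and the transmission coefficient is assumed to be one.

```python
import numpy as np

# Fit of ln(k/T) against 1/T to extract activation parameters from the
# linearized Eyring equation. The rate constants are synthetic numbers
# invented for illustration, and kappa is assumed equal to 1.

R  = 8.314           # J mol^-1 K^-1
kB = 1.380649e-23    # J K^-1
h  = 6.62607015e-34  # J s

T = np.array([280.0, 300.0, 320.0, 340.0])      # K
k = np.array([0.0012, 0.0098, 0.061, 0.310])    # s^-1 (assumed data)

slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)
dH = -slope * R                          # enthalpy of activation, J/mol
dS = (intercept - np.log(kB / h)) * R    # entropy of activation, J/(mol K)
print(f"ΔH‡ ≈ {dH/1000:.1f} kJ/mol, ΔS‡ ≈ {dS:.1f} J/(mol·K)")
```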
Accuracy
Transition state theory requires a value of the transmission coefficient, called κ in that theory. This value is often taken to be unity (i.e., the species passing through the transition state always proceed directly to products and never revert to reactants, so that κ = 1). To avoid specifying a value of κ, the rate constant can be compared to the value of the rate constant at some fixed reference temperature (i.e., k(T)/k(Tref)), which eliminates the κ factor in the resulting expression if one assumes that the transmission coefficient is independent of temperature.
Error propagation formulas
Error propagation formulas for and have been published.
Notes
References
Chapman, S. and Cowling, T.G. (1991). "The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases" (3rd Edition). Cambridge University Press,
External links
Eyring equation at the University of Regensburg (archived from the original)
Online-tool to calculate the reaction rate from an energy barrier (in kJ/mol) using the Eyring equation
Chemical kinetics
Eponymous equations of physics
Reaction mechanisms
Physical chemistry | Eyring equation | [
"Physics",
"Chemistry"
] | 764 | [
"Reaction mechanisms",
"Chemical reaction engineering",
"Applied and interdisciplinary physics",
"Equations of physics",
"Eponymous equations of physics",
"Physical organic chemistry",
"nan",
"Chemical kinetics",
"Physical chemistry"
] |
3,766,560 | https://en.wikipedia.org/wiki/Entropy%20of%20activation | In chemical kinetics, the entropy of activation of a reaction is one of the two parameters (along with the enthalpy of activation) that are typically obtained from the temperature dependence of a reaction rate constant, when these data are analyzed using the Eyring equation of the transition state theory. The standard entropy of activation is symbolized ΔS‡ and equals the change in entropy when the reactants change from their initial state to the activated complex or transition state (Δ = change, S = entropy, ‡ = activation).
Importance
Entropy of activation determines the preexponential factor of the Arrhenius equation for temperature dependence of reaction rates. The relationship depends on the molecularity of the reaction:
for reactions in solution and unimolecular gas reactions
A = (e·kB·T/h)·exp(ΔS‡/R),
while for bimolecular gas reactions
A = (e²·kB·T/h)·(R′T)·exp(ΔS‡/R).
In these equations e is the base of natural logarithms, h is the Planck constant, kB is the Boltzmann constant and T the absolute temperature. R is the ideal gas constant. The factor R′T is needed because of the pressure dependence of the reaction rate, where R′ is the gas constant expressed in pressure–volume units.
The value of ΔS‡ provides clues about the molecularity of the rate-determining step in a reaction, i.e. the number of molecules that enter this step. Positive values suggest that entropy increases upon achieving the transition state, which often indicates a dissociative mechanism in which the activated complex is loosely bound and about to dissociate. Negative values for ΔS‡ indicate that entropy decreases on forming the transition state, which often indicates an associative mechanism in which two reaction partners form a single activated complex.
Derivation
It is possible to obtain the entropy of activation using the Eyring equation. This equation is of the form
k = (κ·kB·T/h)·exp(ΔS‡/R)·exp(−ΔH‡/(RT))
where:
k = reaction rate constant
T = absolute temperature
ΔH‡ = enthalpy of activation
R = gas constant
κ = transmission coefficient
kB = Boltzmann constant = R/NA, NA = Avogadro constant
h = Planck constant
ΔS‡ = entropy of activation
This equation can be turned into the form
ln(k/T) = −(ΔH‡/R)·(1/T) + ln(κ·kB/h) + ΔS‡/R
The plot of ln(k/T) versus 1/T gives a straight line with slope −ΔH‡/R, from which the enthalpy of activation can be derived, and with intercept ln(κ·kB/h) + ΔS‡/R, from which the entropy of activation is derived.
References
Chemical kinetics | Entropy of activation | [
"Chemistry"
] | 421 | [
"Chemical kinetics",
"Chemical reaction engineering"
] |
3,766,754 | https://en.wikipedia.org/wiki/Ningaloo%20Coast | The Ningaloo Coast is a World Heritage Site located in the north west coastal region of Western Australia. The heritage-listed area is located approximately north of Perth, along the East Indian Ocean. The distinctive Ningaloo Reef that fringes the Ningaloo Coast is long and is Australia's largest fringing coral reef and the only large reef positioned very close to a landmass. The Muiron Islands and Cape Farquhar are within this coastal zone.
The coast and reef draw their name from the Australian Aboriginal Wajarri language word meaning 'promontory', 'deepwater', or 'high land jutting into the sea'. The Yamatji peoples of the Baiyungu and Yinigudura are the traditional owners of the area.
Ningaloo Coast World Heritage Site
The World Heritage status of the region was created and negotiated in 2011, and the adopted boundary included the Ningaloo Marine Park (Commonwealth waters), Ningaloo Marine Park (State waters) and Muiron Islands Marine Management Area (including the Muiron Islands), Jurabi Coastal Park, Bundegi Coastal Park, Cape Range National Park, and the Learmonth Air Weapons Range. The site was gazetted on the Australian National Heritage List on 6 January 2010 under the Environment Protection and Biodiversity Conservation Act 1999.
In 1987, the reef and surrounding waters were designated as the Ningaloo Marine Park.
Reputation
Although most famed for its whale sharks which feed there during March to August, the reef is also rich in coral and other marine life. During the winter months, the reef is part of the migratory routes for dolphins, dugongs, manta rays and humpback whales. The beaches of the reef are an important breeding ground of the loggerhead, green and hawksbill turtles. They also depend on the reef for nesting and food. The Ningaloo supports an abundance of fish (500 species), corals (300 species), molluscs (600 species) and many other marine invertebrates.
The reef is less than offshore in some areas, such as Coral Bay. In 2006, researchers from the Australian Institute of Marine Science discovered gardens of sponges in the marine park's deeper waters that are thought to be species completely new to science. The short-nosed sea snake, thought to have been extinct for 17 years, was found on Ningaloo Reef in December 2015.
Conservation controversy
During the early 2000s, significant controversy arose over the proposed construction of a resort at Mauds Landing, a crucial nesting ground for the loggerhead turtle. It was also feared that the resort would be generally degrading to the entire marine park. Author Tim Winton, who lives in the area, was vocal in his opposition to the development. In 2002, when he won the WA Premier's Book Award, he donated the prize money, equivalent to in , to the community campaign to save the reef. Ultimately the planned resort did not go ahead. However, developers continue to take an interest in the area.
Ningaloo Collaborative Research Cluster
The Ningaloo Collaboration Cluster, an extensive research initiative commenced in 2007 within the region, forms a vital part of the CSIRO flagship Collaboration Fund Research Initiative. The project involves researchers from the CSIRO, Sustainable Tourism Cooperative Research Centre and a range of Australian Universities including Curtin University of Technology, Murdoch University, University of Western Australia, Australian National University and the University of Queensland. The project aims to create a dynamic model of Ningaloo that integrates socioeconomic factors and environmental impacts resulting from human activities in the region. This model will be combined with an ecological model of the area, ultimately serving to develop planning tools and management models. The primary goal is to facilitate sustainable utilization of the region's resources.
The study entails gathering and analysing socioeconomic data from both tourists and the local communities of Exmouth, Coral Bay, and Carnarvon. It also encompasses the collection of data on the environmental impact of human activities, encompassing natural resource utilization, waste generation, pollution, visual implications, and effects on flora and fauna. The interactive project involves key stakeholders in the region including the Department of Environment and Conservation, the shires of Carnarvon and Exmouth, local tourism organisations and Tourism Western Australia, the Gascoyne Development Commission, the Department of Water and Environmental Regulation, researchers from Wealth from the Oceans and Ningaloo Project, Chamber of Commerce and Industry of Western Australia, WA Department of Energy and Resources, Department of Fisheries, the Department for Planning and Infrastructure, the Ningaloo Sustainable Development Committee and Ningaloo Sustainable Development Office, Yamatji Land and Sea Council representatives, and the Ningaloo research community along with other cluster project members and the state's Ningaloo project. The project involves collaborating with regional planners and managers to analyse the development and management of tourism.
Specific reserved areas
National parks and reserves in the World Heritage Area
Bundegi Coastal Park
Cape Range National Park
Jurabi Coastal Park
Ningaloo Marine Park (Commonwealth waters)
Ningaloo Marine Park (State waters)
Bays of the World Heritage area
Islands of the World Heritage area
North Muiron Island
South Muiron Island
Peninsulas of the World Heritage area
Marine Park zones
Bundegi Sanctuary Zone
Murat Sanctuary Zone
Lighthouse Bay Sanctuary Zone
Jurabi Sanctuary Zone
Tantabiddi Sanctuary Zone
Mangrove Sanctuary Zone
Lakeside Sanctuary Zone
Mandu Sanctuary Zone
Osprey Sanctuary Zone
Winderabandi Sanctuary Zone
Cloates Sanctuary Zone
Bateman Sanctuary Zone
Maud Sanctuary Zone
Pelican Sanctuary Zone
Cape Farquhar Sanctuary Zone
Gnaraloo Bay Sanctuary Zone
3 Mile Sanctuary Zone
Turtles Sanctuary Zone
South Muiron Conservation Area
North Muiron Conservation Area
Sunday Island Conservation Area
Coastal forecast area
Ningaloo Coast is a designated weather forecast area, by the Bureau of Meteorology.
See also
Protected areas of Western Australia
Gnaraloo
Gnaraloo Turtle Conservation Program
Ningaloo Station
Warroora
References
External links
Official websites
UNESCO World Heritage List: Shark Bay, Western Australia
Ningaloo Coast UNESCO Collection on Google Arts and Culture
Additional information
Ningaloo collaboration cluster site
Sustainable Tourism Cooperative Research Centre site
Department of Environment and Conservation Site
A Ningaloo conservation site
Coral reefs
Shire of Exmouth
Australian National Heritage List
World Heritage Sites in Western Australia
Protected areas of Western Australia
Marine parks of Western Australia
IMCRA meso-scale bioregions
Biogeography of Western Australia
Central Indo-Pacific
IMCRA provincial bioregions | Ningaloo Coast | [
"Biology"
] | 1,332 | [
"Biogeomorphology",
"Coral reefs"
] |
3,769,879 | https://en.wikipedia.org/wiki/DELTA%20%28taxonomy%29 | DELTA (DEscription Language for TAxonomy) is a data format used in taxonomy for recording descriptions of living things. It is designed for computer processing, allowing the generation of identification keys, diagnosis, etc.
It is widely accepted as a standard and many programs using this format are available for various taxonomic tasks.
It was developed by the CSIRO Australian Division of Entomology from 1971 to 2000, with a notable part played by Dr. Michael J. Dallwitz. More recently, the Atlas of Living Australia (ALA) rewrote the DELTA software in Java so it can run in a Java environment and across multiple operating systems. The software package can now be found at and downloaded from the ALA site.
DELTA System
The DELTA System is a group of integrated programs built on the DELTA format. The main program is the DELTA Editor, which provides an interface for creating a matrix of characters for any number of taxa. A whole suite of programs can be found and run from within the DELTA Editor, allowing the output of an interactive identification key, called Intkey. Other powerful features include the output of natural language descriptions, full diagnoses, and differences among taxa.
References
External links
DELTA for beginners. An introduction into the taxonomy software package DELTA
Taxonomy (biology) | DELTA (taxonomy) | [
"Biology"
] | 255 | [
"Taxonomy (biology)"
] |
20,241,654 | https://en.wikipedia.org/wiki/UNIFAC%20Consortium | The UNIFAC Consortium was founded at the Carl von Ossietzky University of Oldenburg at the chair of industrial chemistry of Prof. Gmehling to invite private companies to support the further development of the group-contribution methods UNIFAC and its successor modified UNIFAC (Dortmund). Both models are used for the prediction of thermodynamic properties, especially the estimation of phase equilibria.
The UNIFAC consortium is a successful example of private sponsorship of a public university in Germany.
History
The consortium was founded in 1997 when the public financing of the further development of the models became unlikely. The models UNIFAC and mod. UNIFAC (Dortmund) have already been used widely in software for the simulation and synthesis of chemical processes. Many companies doing process development in the field of chemical engineering had announced their support for a new way to subsidize the further development. This is facilitated through the support of over 40 companies, and is particularly aided by the DDBST GmbH, which supplies the complete Dortmund Data Bank (DDB) and several software tools for free. The DDB, a factual data bank for thermodynamic data, especially phase equilibrium data, is the main source for the work of the consortium.
Objectives
The normal work of the consortium includes
the creation of new and the improvement of older model parameters
the measurement of experimental data (partly own work, partly given to contractors)
holding annual member meetings
The consortium has, for example, added or modified 404 interaction parameters in the original UNIFAC matrix, compared to the 635 parameters from the latest publication.
The major goals are to
improve the quality of the predictions
extend the range of applicability of the models. This includes support for further component types with new functional groups.
supply the parameters to process simulation and DDB software (for consortium members only)
The model parameters are confidential and only accessible to consortium members for at least two and a half years after the first delivery. After this time the university can publish the model parameters.
Supported models
The UNIFAC consortium supports the development of three different models,
original UNIFAC,
mod. UNIFAC (Dortmund), and,
PSRK (since 2005).
Both UNIFAC models estimate activity coefficients; PSRK (short for Predictive Soave–Redlich–Kwong), however, is a combination of the original UNIFAC model with an equation of state.
External links
Official web site
References
Thermodynamic models | UNIFAC Consortium | [
"Physics",
"Chemistry"
] | 506 | [
"Thermodynamic models",
"Thermodynamics"
] |
20,250,709 | https://en.wikipedia.org/wiki/Anatoly%20Karatsuba | Anatoly Alexeyevich Karatsuba (his first name often spelled Anatolii) (; Grozny, Soviet Union, 31 January 1937 – Moscow, Russia, 28 September 2008) was a Russian mathematician working in the field of analytic number theory, p-adic numbers and Dirichlet series.
For most of his student and professional life he was associated with the Faculty of Mechanics and Mathematics of Moscow State University, defending a D.Sc. there entitled "The method of trigonometric sums and intermediate value theorems" in 1966. He later held a position at the Steklov Institute of Mathematics of the Academy of Sciences.
His textbook Foundations of Analytic Number Theory went to two editions, 1975 and 1983.
The Karatsuba algorithm is the earliest known divide and conquer algorithm for multiplication and lives on as a special case of its direct generalization, the Toom–Cook algorithm.
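As an illustration of the divide-and-conquer idea described above, the following Python sketch multiplies two non-negative integers the Karatsuba way. It is a minimal illustrative example only; the function name, the digit-based splitting and the base case are choices made for this sketch, not taken from Karatsuba's papers.

```python
def karatsuba(x, y):
    """Multiply non-negative integers by Karatsuba's divide-and-conquer scheme."""
    if x < 10 or y < 10:                      # base case: a single-digit operand
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** half)     # split x into high and low halves
    high_y, low_y = divmod(y, 10 ** half)
    z0 = karatsuba(low_x, low_y)              # low * low
    z2 = karatsuba(high_x, high_y)            # high * high
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z2 - z0   # cross terms
    return z2 * 10 ** (2 * half) + z1 * 10 ** half + z0

assert karatsuba(1234, 5678) == 1234 * 5678
```

Only three half-size multiplications are performed per level instead of the four required by the schoolbook method, which is what gives the sub-quadratic running time.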
The main research works of Anatoly Karatsuba were published in more than 160 research papers and monographs.
His daughter, Yekaterina Karatsuba, also a mathematician, constructed the FEE method.
Work on informatics
As a student of Lomonosov Moscow State University, Karatsuba attended the seminar of Andrey Kolmogorov and found solutions to two problems set up by Kolmogorov. This was essential for the development of automata theory and started a new branch in Mathematics, the theory of fast algorithms.
Automata
In the paper of Edward F. Moore, an automaton (or a machine) is defined as a device with states, input symbols and output symbols. Nine theorems on the structure of such machines and on experiments with them are proved. Later such machines got the name of Moore machines. At the end of the paper, in the chapter «New problems», Moore formulates the problem of improving the estimates which he obtained in Theorems 8 and 9:
Theorem 8 (Moore). Given an arbitrary machine, such that every two states can be distinguished from each other, there exists an experiment of length that identifies the state of the machine at the end of this experiment.
In 1957 Karatsuba proved two theorems which completely solved the Moore problem on improving the estimate of the length of experiment in his Theorem 8.
Theorem A (Karatsuba). If a machine is such that every two of its states can be distinguished from each other, then there exists a ramified experiment of length at most , by means of which one can find the state at the end of the experiment.
Theorem B (Karatsuba). There exists a machine, every two states of which can be distinguished from each other, such that the length of the shortest experiment finding the state of the machine at the end of the experiment is equal to .
These two theorems were proved by Karatsuba in his 4th year as the basis of his 4th-year project; the corresponding paper was submitted to the journal "Uspekhi Mat. Nauk" on December 17, 1958 and published in June 1960. Up to this day (2011), this result of Karatsuba, which later acquired the title "the Moore–Karatsuba theorem", remains the only precise non-linear result (the only precise non-linear order of the estimate) both in automata theory and in similar problems of the theory of computational complexity.
Work on number theory
The main research works of A. A. Karatsuba were published in more than 160 research papers and monographs.
The p-adic method
A.A.Karatsuba constructed a new -adic method in the theory of trigonometric sums. The estimates of so-called -sums of the form
led to the new bounds for zeros of the Dirichlet -series modulo a power of a prime number, to the asymptotic formula for the number of Waring congruence of the form
to a solution of the problem of distribution of fractional parts of a polynomial with integer coefficients modulo . A.A. Karatsuba was the first to realize in the -adic form the «embedding principle» of Euler-Vinogradov and to compute a -adic analog of Vinogradov -numbers when estimating the number of solutions of a congruence of the Waring type.
Assume that and moreover , where is a prime number. Karatsuba proved that in that case for any natural number there exists a such that for any every natural number can be represented in the form (1) for , and for there exist such that the congruence (1) has no solutions.
This new approach, found by Karatsuba, led to a new -adic proof of the Vinogradov mean value theorem, which plays the central part in the Vinogradov's method of trigonometric sums.
Another component of the -adic method of A.A. Karatsuba is the transition from incomplete systems of equations to complete ones at the expense of the local -adic change of unknowns.
Let be an arbitrary natural number, . Determine an integer by the inequalities . Consider the system of equations
Karatsuba proved that the number of solutions of this system of equations for satisfies the estimate
For incomplete systems of equations, in which the variables run through numbers with small prime divisors, Karatsuba applied multiplicative translation of variables. This led to an essentially new estimate of trigonometric sums and a new mean value theorem for such systems of equations.
The Hua Luogeng problem on the convergency exponent of the singular integral in the Terry problem
-adic method of A.A.Karatsuba includes the techniques of estimating the measure of the set of points with small values of functions in terms of the values of their parameters (coefficients etc.) and, conversely, the techniques of estimating those parameters in terms of the measure of this set in the real and -adic metrics. This side of Karatsuba's method manifested itself especially clear in estimating trigonometric integrals, which led to the solution of the problem of Hua Luogeng. In 1979 Karatsuba, together with his students G.I. Arkhipov and V.N. Chubarikov obtained a complete solution of the Hua Luogeng problem of finding the exponent of convergency of the integral:
where is a fixed number.
In this case, the exponent of convergency means the value , such that converges for and diverges for , where is arbitrarily small. It was shown that the integral converges for and diverges for
.
At the same time, the similar problem for the integral was solved:
where are integers, satisfying the conditions :
Karatsuba and his students proved that the integral converges, if and diverges, if .
The integrals and arise in the study of the so-called Prouhet–Tarry–Escott problem. Karatsuba and his students obtained a series of new results connected with the multi-dimensional analog of the Tarry problem. In particular, they proved that if is a polynomial in variables () of the form with zero free term, and is the -dimensional vector consisting of the coefficients of , then the integral converges for , where is the highest of the numbers . This result, not being a final one, generated a new area in the theory of trigonometric integrals, connected with improving the bounds of the exponent of convergency (I. A. Ikromov, M. A. Chahkiev and others).
Multiple trigonometric sums
In 1966–1980, Karatsuba developed (with participation of his students G.I. Arkhipov and V.N. Chubarikov) the theory of multiple Hermann Weyl trigonometric sums, that is, the sums of the form
, where ,
is a system of real coefficients . The central point of that theory, as in the theory of the Vinogradov trigonometric sums, is the following mean value theorem.
Let be natural numbers. Furthermore, let be the -dimensional cube of the form , , in the Euclidean space, and . Then for any and the value can be estimated as follows:
where , , , , and the natural numbers are such that , .
The mean value theorem and the lemma on the multiplicity of intersection of multi-dimensional parallelepipeds form the basis of the estimate of a multiple trigonometric sum, which was obtained by Karatsuba (the two-dimensional case was derived by G.I. Arkhipov). Denoting by the least common multiple of the numbers with the condition , for the estimate holds, where is the number of divisors of the integer , and is the number of distinct prime divisors of the number .
The estimate of the Hardy function in the Waring problem
Applying his -adic form of the Hardy-Littlewood-Ramanujan-Vinogradov method to estimating trigonometric sums, in which the summation is taken over numbers with small prime divisors, Karatsuba obtained a new estimate of the well known Hardy function in the Waring's problem (for ):
Multi-dimensional analog of the Waring problem
In his subsequent investigation of the Waring problem Karatsuba obtained the following two-dimensional generalization of that problem:
Consider the system of equations
, ,
where are given positive integers with the same order or growth, , and are unknowns, which are also positive integers. This system has solutions, if , and if , then there exist such , that the system has no solutions.
The Artin problem of local representation of zero by a form
Emil Artin had posed the problem on the -adic representation of zero by a form of arbitrary degree d. Artin initially conjectured a result, which would now be described as the p-adic field being a C2 field; in other words non-trivial representation of zero would occur if the number of variables was at least d2. This was shown not to be the case by an example of Guy Terjanian. Karatsuba showed that, in order to have a non-trivial representation of zero by a form, the number of variables should grow faster than polynomially in the degree d; this number in fact should have an almost exponential growth, depending on the degree. Karatsuba and his student Arkhipov proved, that for any natural number there exists , such that for any there is a form with integral coefficients of degree smaller than , the number of variables of which is , ,
which has only trivial representation of zero in the 2-adic numbers. They also obtained a similar result for any odd prime modulus .
Estimates of short Kloosterman sums
Karatsuba developed (1993—1999) a new method of estimating short
Kloosterman sums, that is, trigonometric sums of the form
where runs through a set of numbers, coprime to , the number of elements in which is essentially smaller than , and the symbol denotes the congruence class, inverse to modulo : .
Up to the early 1990s, the estimates of this type were known, mainly, for sums in which the number of summands was higher than (H. D. Kloosterman, I. M. Vinogradov, H. Salié,
L. Carlitz, S. Uchiyama, A. Weil). The only exception was the special moduli of the form , where is a fixed prime and the exponent increases to infinity (this case was studied by A. G. Postnikov by means of the method of Vinogradov). Karatsuba's method makes it possible to estimate Kloosterman sums where the number of summands does not exceed
and in some cases even
where is an arbitrarily small fixed number. The final paper of Karatsuba on this subject was published posthumously.
Various aspects of the method of Karatsuba have found applications in the following problems of analytic number theory:
finding asymptotics of the sums of fractional parts of the form where runs, one after another, through the integers satisfying the condition , and runs through the primes that do not divide the module (Karatsuba);
finding a lower bound for the number of solutions of inequalities of the form in the integers , , coprime to (Karatsuba);
the precision of approximation of an arbitrary real number in the segment by fractional parts of the form where , , (Karatsuba);
a more precise constant in the Brun–Titchmarsh theorem, where is the number of primes not exceeding and belonging to the arithmetic progression (J. Friedlander, H. Iwaniec);
a lower bound for the greatest prime divisor of the product of numbers of the form (D. R. Heath-Brown);
proving that there are infinitely many primes of the form (J. Friedlander, H. Iwaniec);
combinatorial properties of the set of numbers (A. A. Glibichuk).
The Riemann zeta function
The Selberg zeroes
In 1984 Karatsuba proved that, for a fixed satisfying the condition , a sufficiently large and , , the interval contains at least real zeros of the Riemann zeta function .
The special case was proven by Atle Selberg earlier in 1942. The estimates of Atle Selberg and Karatsuba cannot be improved with respect to the order of growth as .
Distribution of zeros of the Riemann zeta function on the short intervals of the critical line
Karatsuba also obtained a number of results about the distribution of zeros of on «short» intervals of the critical line. He proved that an analog of the Selberg conjecture holds for «almost all» intervals , , where is an arbitrarily small fixed positive number. Karatsuba developed (1992) a new approach to investigating zeros of the Riemann zeta-function on «supershort» intervals of the critical line, that is, on the intervals , the length of which grows slower than any, even arbitrarily small degree . In particular, he proved that for any given numbers , satisfying the conditions almost all intervals for contain at least zeros of the function . This estimate is quite close to the one that follows from the Riemann hypothesis.
Zeros of linear combinations of Dirichlet L-series
Karatsuba developed a new method of investigating zeros of functions which can be represented as linear combinations of Dirichlet -series. The simplest example of a function of that type is the Davenport-Heilbronn function, defined by the equality
where is a non-principal character modulo (, , , , , for any ),
For , the Riemann hypothesis is not true; the critical line contains, nevertheless, abnormally many zeros.
Karatsuba proved (1989) that the interval , , contains at least
zeros of the function . Similar results were obtained by Karatsuba also for linear combinations containing arbitrary (finite) number of summands; the degree exponent is here replaced by a smaller number , that depends only on the form of the linear combination.
The boundary of zeros of the zeta function and the multi-dimensional problem of Dirichlet divisors
To Karatsuba belongs a new breakthrough result in the multi-dimensional problem of Dirichlet divisors, which is connected with finding the number of solutions of the inequality in the natural numbers as . For there is an asymptotic formula of the form , where is a polynomial of degree , the coefficients of which depend on and can be found explicitly, and is the remainder term, all known estimates of which (up to 1960) were of the form , where , are some absolute positive constants.
Karatsuba obtained a more precise estimate of , in which the value was of order and was decreasing much slower than in the previous estimates. Karatsuba's estimate is uniform in and ; in particular, the value may grow as grows (as some power of the logarithm of ). (A similar looking, but weaker result was obtained in 1960 by a German mathematician Richert, whose paper remained unknown to Soviet mathematicians at least until the mid-seventies.)
Proof of the estimate of is based on a series of claims, essentially equivalent to the theorem on the boundary of zeros of the Riemann zeta function, obtained by the method of Vinogradov, that is, the theorem claiming that has no zeros in the region
.
Karatsuba found (2000) the backward relation of estimates of the values with the behaviour of
near the line . In particular, he proved that if is an arbitrary non-increasing function satisfying the condition , such that for all the estimate
holds, then has no zeros in the region
( are some absolute constants).
Estimates from below of the maximum of the modulus of the zeta function in small regions of the critical domain and on small intervals of the critical line
Karatsuba introduced and studied the functions and , defined by the equalities
Here is a sufficiently large positive number, , , , . Estimating the values and from below shows, how large (in modulus) values can take on short intervals of the critical line or in small neighborhoods of points lying in the critical strip . The case was studied earlier by Ramachandra; the case , where is a sufficiently large constant, is trivial.
Karatsuba proved, in particular, that if the values and exceed certain sufficiently small constants, then the estimates
hold, where are certain absolute constants.
Behaviour of the argument of the zeta-function on the critical line
Karatsuba obtained a number of new results related to the behaviour of the function , which is called the argument of Riemann zeta function on the
critical line (here is the increment of an arbitrary continuous branch of along the broken line joining the points and ). Among those results are the mean value theorems for the function and its first integral on intervals of the real line, and also the theorem claiming that every interval for contains at least
points where the function changes sign. Earlier similar results were obtained by Atle Selberg for the case
.
The Dirichlet characters
Estimates of short sums of characters in finite fields
In the end of the sixties Karatsuba, estimating short sums of Dirichlet characters, developed a new method, making it possible to obtain non-trivial estimates of short sums of characters in finite fields. Let
be a fixed integer, a polynomial, irreducible over the field of rational numbers, a root of the equation , the corresponding extension of the field , a basis of , , , . Furthermore, let be a sufficiently large prime, such that is irreducible modulo ,
the Galois field with a basis , a non-principal Dirichlet character of the field . Finally, let be some nonnegative integers, the set of elements of the Galois field ,
,
such that for any , , the following inequalities hold:
.
Karatsuba proved that for any fixed , , and arbitrary satisfying the condition
the following estimate holds:
where , and the constant depends only on and the basis .
Estimates of linear sums of characters over shifted prime numbers
Karatsuba developed a number of new tools, which, combined with the Vinogradov method of estimating sums with prime numbers, enabled him to obtain in 1970 an estimate of the sum of values of a non-principal character modulo a prime on a sequence of shifted prime numbers, namely, an estimate of the form
where is an integer satisfying the condition , an arbitrarily small fixed number, , and the constant depends on only.
This claim is considerably stronger than the estimate of Vinogradov, which is non-trivial for .
In 1971 speaking at the International conference on number theory on the occasion of the 80th birthday of Ivan Matveyevich Vinogradov, Academician Yuri Linnik noted the following:
«Of great importance are the investigations carried out by Vinogradov in the area of asymptotics of Dirichlet character on shifted primes , which give a decreased power compared to , , where is the modulus of the character. This estimate is of crucial importance, as it is so deep that it gives more than the extended Riemann hypothesis, and, it seems, in that direction is a deeper fact than that conjecture (if the conjecture is true). Recently this estimate was improved by A.A.Karatsuba».
This result was extended by Karatsuba to the case when runs through the primes in an arithmetic progression, the increment of which grows with the modulus
.
Estimates of sums of characters on polynomials with a prime argument
Karatsuba found a number of estimates of sums of
Dirichlet characters in polynomials of degree two for the case when the argument of the polynomial runs through a short sequence of subsequent primes. Let, for instance, be a sufficiently high prime, , where and are integers, satisfying the condition , and let denote the Legendre symbol, then for any fixed with the condition and for the sum ,
the following estimate holds:
(here runs through subsequent primes, is the number of primes not exceeding , and is a constant, depending on only).
A similar estimate was obtained by Karatsuba also for the case when runs through a sequence of primes in an arithmetic progression, the increment of which may grow together with the modulus .
Karatsuba conjectured that the non-trivial estimate of the sum for , which are "small" compared to , remains true in the case when is replaced by an arbitrary polynomial of degree , which is not a square modulo . This conjecture is still open.
Lower bounds for sums of characters in polynomials
Karatsuba constructed an infinite sequence of primes and a sequence of polynomials of degree with integer coefficients, such that is not a full square modulo ,
and such that
In other words, for any the value turns out to be a quadratic residue modulo . This result shows that André Weil's estimate
cannot be essentially improved and the right-hand side of the latter inequality cannot be replaced by, say, the value , where is an absolute constant.
Sums of characters on additive sequences
Karatsuba found a new method, making it possible to obtain rather precise estimates of sums of values of non-principal Dirichlet characters on additive sequences, that is, on sequences consisting of numbers of the form , where the variables and runs through some sets
and independently of each other. The most characteristic example of that kind is the following claim which is applied in solving a wide class of problems, connected with summing up values of Dirichlet characters. Let be an arbitrarily small fixed number, , a sufficiently large prime, a non-principal character modulo . Furthermore, let and be arbitrary subsets of the complete system of congruence classes modulo , satisfying only the conditions , . Then the following estimate holds:
Karatsuba's method makes it possible to obtain non-trivial estimates of that sort in certain other cases when the conditions for the sets and , formulated above, are replaced by different ones, for example: ,
In the case when and are the sets of primes in intervals , respectively, where , , an estimate of the form
holds, where is the number of primes, not exceeding , , and is some absolute constant.
Distribution of power congruence classes and primitive roots in sparse sequences
Karatsuba obtained (2000) non-trivial estimates of sums of values of Dirichlet characters "with weights", that is, sums of components of the form , where is a function of natural argument. Estimates of that sort are applied in solving a wide class of problems of number theory, connected with distribution of power congruence classes, also primitive roots in certain sequences.
Let be an integer, a sufficiently large prime, , , , where , and set, finally,
(for an asymptotic expression for , see above, in the section on the multi-dimensional problem of Dirichlet divisors). For the sums and of the values , extended on the values , for which the numbers are quadratic residues (respectively, non-residues) modulo , Karatsuba obtained asymptotic formulas of the form
.
Similarly, for the sum of values , taken over all , for which is a primitive root modulo , one gets an asymptotic expression of the form
,
where are all prime divisors of the number .
Karatsuba applied his method also to the problems of distribution of power residues (non-residues) in the sequences of shifted primes , of the integers of the type and some others.
Late work
In his later years, apart from his research in number theory (see Karatsuba phenomenon), Karatsuba studied certain problems of theoretical physics, in particular in the area of quantum field theory. Applying his ATS theorem and some other number-theoretic approaches, he obtained new results in the Jaynes–Cummings model in quantum optics.
Awards and titles
1981: P.L.Tchebyshev Prize of Soviet Academy of Sciences
1999: Distinguished Scientist of Russia
2001: I.M.Vinogradov Prize of Russian Academy of Sciences
See also
ATS theorem
Karatsuba algorithm
Moore machine
References
External links
List of Research Works at Steklov Institute of Mathematics
Number theorists
Mathematical analysts
20th-century Russian mathematicians
21st-century Russian mathematicians
1937 births
2008 deaths
Soviet mathematicians
Moscow State University alumni
20th-century Russian scientists | Anatoly Karatsuba | [
"Mathematics"
] | 5,259 | [
"Mathematical analysis",
"Number theorists",
"Number theory",
"Mathematical analysts"
] |
22,757,570 | https://en.wikipedia.org/wiki/Composite%20aircraft | A composite aircraft is made up of multiple component craft. It takes off and flies initially as a single aircraft, with the components able to separate in flight and continue as independent aircraft. Typically the larger aircraft acts as a carrier aircraft or mother ship, with the smaller sometimes called a parasite or jockey craft.
The first composite aircraft flew in 1916, during World War I, when the British launched a Bristol Scout from a Felixstowe Porte Baby flying boat. Between the World Wars, American experiments with airship/biplane composites led to the construction of two airborne aircraft carriers, while the British Short Mayo seaplane composite demonstrated successful transatlantic mail delivery. During the Second World War some composites saw operational use including the Mistel ("mistletoe"), the larger unmanned component of a composite aircraft configuration developed in Germany during the later stages of World War II, in effect a two-part manned flying bomb. Experiments continued into the jet age, with large aircraft carrying fully capable parasite fighters or reconnaissance drones, though none entered service.
Design principles
A composite configuration is usually adopted to provide improved performance or operational flexibility for one of the components, compared to a single craft flying alone. Composite designs can take a number of different forms:
In the original composite arrangement, the smaller component carries out the operational mission and is mounted on a larger carrier aircraft or "mother ship". Thus it need not be compromised by the requirements for takeoff, climb and initial cruise, but may be optimised for the later stages of the mission.
In another form the larger carrier aircraft conducts the main operational mission, with small parasite aircraft carried to support it or extend its mission if required.
A third variant comprises a small piloted jockey component coupled with a larger unpiloted component. This arrangement is typically used as an attack aircraft in which the larger component is loaded with explosives and impacts the target.
The slip-wing composite comprises a lightweight upper lifting component, the slip wing, which assists the lower operational component during initial takeoff and climb: in the true slip-wing, the two wings act together as a biplane. The slip wing component may or may not be powered and/or manned.
Airship-aeroplane composites
Image: F9C Sparrowhawk on the Akron's trapeze (XF9C-1 aircraft hooking onto USS Akron, May 1932).
During and after World War I, a number of efforts were made to develop airship-plane composites, in which one or more aeroplanes were carried by an airship.
United Kingdom
The first British effort, undertaken in 1916 with a non-rigid SS class airship, was aimed at the anti-Zeppelin role. The airship was to provide fast climb to altitude, while a B.E.2c aeroplane would provide the speed and manoeuvrability to attack the Zeppelin. It ended in disaster when the forward attachment point released prematurely and the aeroplane tipped nose-down. Both crew were killed in the ensuing crash. By 1918 larger rigid airships were available and a Sopwith Camel was successfully released by HMA 23 in July 1918, but the armistice halted work. The idea was briefly revived in 1925 when the airship R33 was used to launch and then recapture a DH 53 Hummingbird light monoplane aircraft and, in 1926, two Gloster Grebe biplane fighters.
Germany
The first parasite fighter was a German Albatros D.III which flew from Zeppelin L 35 (LZ 80) on January 26, 1918. The LZ 129 Hindenburg later conducted trials using parasite aircraft in the days before it crashed at Lakehurst, but the trial proved unsuccessful as the plane hit the hull trapeze.
United States
In 1923 the TC-3 and TC-7 non-rigid airships launched and recovered a Sperry Messenger biplane.
Then in 1930, the US Navy fitted the USS Los Angeles with a trapeze designed to release and recover a small parasite aircraft. Successful trials with a glider and a biplane led to the construction of the Akron and Macon airships as airborne aircraft carriers.
List of airship-aeroplane composites
Composite aeroplanes
The first composite aeroplanes
In parallel with early airship activity, efforts also went into carrying a fighter plane aloft on top of a second aeroplane.
In the UK, the Felixstowe Porte Baby/Bristol Scout composite flew in May 1916. The idea was to intercept German Zeppelin airships far out to sea, beyond the normal range of a land or shore based craft. The successful first flight was not followed up, due to the ungainliness of the composite in takeoff and its vulnerability in flight. From 1921, a series of types were adapted as carriers for gliders used as aerial targets.
The Short Mayo Composite mailplane comprised the S.21 Maia carrier flying boat and S.20 Mercury parasite seaplane. It made successful transatlantic flights in trials during 1938, before operations were cut short by the outbreak of war.
World War II
Several countries experimented with composite designs during the second world war, and a few of these were used on operational missions.
In the USSR, the Tupolev Vakhmistrov Zveno project developed a series of composite types. The SPB variant used the Tupolev TB-3 as the mother ship and in 1941 Polikarpov I-16 dive-bombers flying from it became the first parasite fighters to operate successfully in combat.
In the UK, Pemberton-Billing proposed "slip-wing" composite bomber and fighter types early in the war. Hawker also worked on a Liberator/Hurricane composite.
In America in 1943, O.A. Buettner patented a composite design in which the secondary fighter components' wings fitted into depressions in the carrier's upper wing.
A number of composite proposals were considered by German designers during World War II. Of these, the Junkers Ju 88 Mistel project reached operational status, mounting either a manned Messerschmitt Bf 109 or Focke-Wulf Fw 190 fighter above an unmanned shaped charge-warheaded Junkers Ju 88 and flying a number of combat missions. The führungsmaschine (pathfinder) project used a similar Ju 88/Fw 190 combination where the Ju 88 was also manned and the Fw 190 was carried as a protective escort fighter. The Dornier Do 217/Messerschmitt Me 328 escort fighter project was unsuccessful due to engine problems. Other studies included the Daimler-Benz Project C.
Postwar
Experiments with parasite aircraft continued into the jet age, especially in America and, immediately post-war, in France, which used the pair of postwar-completed Heinkel He 274 four-engined high-altitude bomber prototypes (both built in France) as carriers for its own advanced jet and rocket-powered experimental designs.
In America the FIghter CONveyer (FICON) trapeze system was developed for carrying, launching and recovering parasite fighters.
Examples with and without the FICON system included:
B-36/XF-85 Goblin, an attempt to equip bombers with their own escort fighters (1948)
Convair B-36/F-84, another, more successful, escort fighter attempt (1952)
Lockheed DC-130/Q-2C Firebee, drone launched and controlled from C-130 "mother"
Lockheed D-21/M-21, for high-speed reconnaissance, based upon the SR-71 Blackbird (1963)
Elsewhere, during the 1950s in the UK Short Brothers studied proposals for a composite VTOL strike fighter but the design did not progress.
In modern times the term "composite aircraft" tends to refer to types constructed from composite materials. The White Knight/Space Ship One spaceplane is a composite aircraft in both senses.
List of composite aeroplanes
See also
Parasite aircraft
References
Notes
Bibliography
Harper, H.C.J.; Composite history, Flight, November 11, 1937.
Winchester, J. (Ed.); Concept aircraft'', Grange, 2005
Aircraft configurations
Vehicles introduced in 1916 | Composite aircraft | [
"Engineering"
] | 1,632 | [
"Aircraft configurations",
"Aerospace engineering"
] |
22,758,158 | https://en.wikipedia.org/wiki/%CE%91-Hexachlorocyclohexane | α-Hexachlorocyclohexane (α-HCH) is an organochloride which is one of the isomers of hexachlorocyclohexane (HCH). It is a byproduct of the production of the insecticide lindane (γ-HCH) and it is typically still contained in commercial grade lindane used as insecticide. Lindane, however, has not been produced or used in the United States for more than 20 years. At ambient temperatures it is a stable, white, powdery solid substance. As of 2009, the Stockholm Convention on Persistent Organic Pollutants classified (α-HCH) and (β-HCH) as persistent organic pollutants (POPs), due to the chemical's ability to persistence in the environment, bioaccumulative, biomagnifying, and long-range transport capacity.
See also
β-Hexachlorocyclohexane
References
External links
α-hexachlorocyclohexane United States Environmental Protection Agency IRIS fact sheet
Cyclohexane, 1,2,3,4,5,6-hexachloro-, (1α,2β,3α,4β,5α,6β)- – NIST
Organochlorides
Persistent organic pollutants under the Stockholm Convention
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution
ru:Гексахлоран | Α-Hexachlorocyclohexane | [
"Chemistry"
] | 318 | [
"Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution",
"Persistent organic pollutants under the Stockholm Convention"
] |
22,759,879 | https://en.wikipedia.org/wiki/Nowhere%20commutative%20semigroup | In mathematics, a nowhere commutative semigroup is a semigroup S such that, for all a and b in S, if ab = ba then a = b. A semigroup S is nowhere commutative if and only if any two elements of S are inverses of each other.
Characterization of nowhere commutative semigroups
Nowhere commutative semigroups can be characterized in several different ways. If S is a semigroup then the following statements are equivalent:
S is nowhere commutative.
S is a rectangular band (in the sense in which the term is used by John Howie).
For all a and b in S, aba = a.
For all a, b and c in S, a2 = a and abc = ac.
Even though, by definition, the rectangular bands are concrete semigroups, they have the defect that their definition is formulated not in terms of the basic binary operation in the semigroup. The approach via the definition of nowhere commutative semigroups rectifies this defect.
To see that a nowhere commutative semigroup is a rectangular band, let S be a nowhere commutative semigroup. Using the defining properties of a nowhere commutative semigroup, one can see that for every a in S the intersection of the Green classes Ra and La contains the unique element a. Let S/L be the family of L-classes in S and S/R be the family of R-classes in S. The mapping
ψ : S → (S/R) × (S/L)
defined by
aψ = (Ra, La)
is a bijection. If the Cartesian product (S/R) × (S/L) is made into a semigroup by furnishing it with the rectangular band multiplication, the map ψ becomes an isomorphism. So S is isomorphic to a rectangular band.
Other claims of equivalences follow directly from the relevant definitions.
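To make the characterization concrete, here is a minimal Python sketch (an illustrative example, not part of the cited literature) that builds the rectangular band on a Cartesian product I × L with multiplication (i, a)(j, b) = (i, b) and verifies both the nowhere-commutative property and the equivalent identities listed above.

```python
from itertools import product

# Rectangular band on I x L: (i, a) * (j, b) = (i, b)
I, L = range(3), range(2)
S = list(product(I, L))

def mul(x, y):
    return (x[0], y[1])

# ab = ba implies a = b  (nowhere commutative)
assert all(a == b for a in S for b in S if mul(a, b) == mul(b, a))

# a*a = a and a*b*c = a*c  (the equivalent identities above)
assert all(mul(a, a) == a for a in S)
assert all(mul(mul(a, b), c) == mul(a, c) for a in S for b in S for c in S)
```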
See also
Special classes of semigroups
References
Algebraic structures
Semigroup theory | Nowhere commutative semigroup | [
"Mathematics"
] | 412 | [
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Algebraic structures",
"Semigroup theory"
] |
22,759,890 | https://en.wikipedia.org/wiki/EDTMP | EDTMP or ethylenediamine tetra(methylene phosphonic acid) is a phosphonic acid. It has chelating and anti corrosion properties. EDTMP is the phosphonate analog of EDTA. It is classified as a nitrogenous organic polyphosphonic acid.
Properties and applications
EDTMP is normally delivered as its sodium salt, which exhibits good solubility in water.
EDTMP is used in water treatment as an antiscaling and anti-corrosion agent; its corrosion inhibition is 3–5 times better than that of inorganic polyphosphate. It can degrade to aminomethylphosphonic acid. It shows excellent scale inhibition ability at temperatures up to 200 °C. It functions by chelating many metal ions.
The anti-cancer drug Samarium (153Sm) lexidronam is also derived from EDTMP.
References
Phosphonic acids
Chelating agents
Tertiary amines
Ethyleneamines
Corrosion inhibitors
Water treatment
Hexadentate ligands | EDTMP | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 206 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering",
"Corrosion inhibitors",
"Chelating agents",
"Process chemicals"
] |
22,760,945 | https://en.wikipedia.org/wiki/Current%20injection%20technique | The current injection technique is a technique developed to reduce the turn-OFF switching transient of power bipolar semiconductor devices. It was developed and published by Dr S. Eio of Staffordshire University (United Kingdom) in 2007.
Background
The turn-OFF switching transient of silicon-based power bipolar semiconductor devices, caused by stored charge in the device during the forward conduction state, limits the switching speed of the device, which in turn limits the efficiency of the application in which it is used.
Different techniques, such as carrier lifetime control, injection efficiency and buffer layer devices, have been used to minimize turn-OFF switching transient, but all result in a trade-off between the ON-state loss and switching speed.
Details of the Technique
The current injection technique examined in Dr Eio's publications optimizes the switching transient of power diodes, thyristors and insulated gate bipolar transistors (IGBTs) without the need to change the structure of these devices. To implement the current injection technique, a current injection circuit was developed, with results indicating that the injection of an additional current during the switching transient can reduce the reverse recovery charge of a given power diode or thyristor, and also reduce the tail current of insulated gate bipolar transistors.
Practical experimental results on diodes and thyristors showed that the amplitude of the injected current required is proportional to the peak reverse recovery current, and proved that these devices experience a momentary increase in recombination of current carriers during the injection of the additional current. This helps to prevent the device from conducting a large negative current, which in turn reduces its reverse recovery charge and reverse recovery time. Results obtained from experiments with insulated gate bipolar transistors showed a significant reduction in the time taken for the current to fall to zero when an opposing current was injected into the device during its turn-off transient. Further simulation results from numerical modeling showed that the injected opposing current temporarily increases recombination in the device and therefore reduces the extracted excess carriers that are stored within the device.
To prevent circuit commutation and bonding between the current injection circuit and the main test circuit to which the device under test (DUT) is connected, a non-invasive circuit was developed to magnetically couple the two circuits.
In summary, the current injection technique makes it possible to use devices with a low forward voltage drop in high-frequency applications. This also implies cheaper devices, as fewer processing steps are required during manufacturing when the need for carrier lifetime control techniques is reduced. The magnetic coupling removed the need for the semiconductor device used in the current injection circuit to have a high breakdown voltage rating and also provided electrical isolation. A typical application of this technique in an inductive-load chopper circuit showed a significant reduction in the tail current of insulated gate bipolar transistors, and in the reverse recovery time and charge of the freewheeling diode used.
References
Notes
S. Eio., N. Shammas., “IGBT Tail Current Reduction by Current Injection,” 43rd International Universities Power Engineering Conference, Padova, Italy,1 – 4 September 2008
S. Eio., N. Shammas., “A chopper circuit with current injection technique for increasing operating frequency,” 9th International Seminar On Power Semiconductors, Prague, Czech Republic, 27–29 August 2008
S. Eio., N. Shammas., “Switching Transient of Power Diode,” 41st International Universities Power Engineering Conference, Newcastle, United Kingdom, 6–8 September 2006, Volume 2, P. 564 – 568, Digital Object Identifier 10.1109 / UPEC.2006.367541
N. Shammas., S. Eio., “A Novel Technique to Reduce the Reverse Recovery Charge of a Power Diode,” 12th European Power Electronics and Applications, EPE 2007, Aalborg, Denmark, 2–5 September. 2007 P.1 – 8, Digital Object Identifier 10.1109 / EPE.2007.4417713
N. Shammas., S. Eio., “A Novel Technique to Reduce the Reverse Recovery Charge of a Power Thyristor,” 42nd International Universities Power Engineering Conference, Brighton, United Kingdom, 4 – 6 September 2007, p. 1222–1227, Digital Object Identifier 10.1109 / UPEC.2007.4469126
N. Shammas., S. Eio., D. Chamund., “Semiconductor Devices and Their Use in Power Electronic Applications,” World Scientific and Eng. Academy and Society, Venice, Italy, 21 -23 Nov 2007
N.Shammas, S.Eio, S.Nathan, K.Shukry, D.Chamund., “Thermal Aspects of Power Semiconductor Devices and Systems,” VII Conference Thermal Problems in Electronics, MicroTherm’07, 24 – 28 June 2007, Lodz, Poland
Semiconductors | Current injection technique | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 989 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
22,762,505 | https://en.wikipedia.org/wiki/Epithelial%20polarity | Epithelial polarity is one example of the cell polarity that is a fundamental feature of many types of cells. Epithelial cells feature distinct 'apical', 'lateral' and 'basal' plasma membrane domains. Epithelial cells connect to one another via their lateral membranes to form epithelial sheets that line cavities and surfaces throughout the animal body. Each plasma membrane domain has a distinct protein composition, giving them distinct properties and allowing directional transport of molecules across the epithelial sheet. How epithelial cells generate and maintain polarity remains unclear, but certain molecules have been found to play a key role.
A variety of molecules are located at the apical membrane, but only a few key molecules act as determinants that are required to maintain the identity of the apical membrane and, thus, epithelial polarity. These molecules are the proteins Cdc42, atypical protein kinase C (aPKC), Par6, Par3/Bazooka/ASIP. Crumbs, "Stardust" and protein at tight junctions (PATJ). These molecules appear to form two distinct complexes: an aPKC-Par3-Par6 "aPKC" (or "Par") complex that also interacts with Cdc42; and a Crumbs-Stardust-PATJ "Crumbs" complex. Of these two complexes, the aPKC complex is the most important for epithelial polarity, being required even when the Crumbs complex is not. Crumbs is the only transmembrane protein in this list and the Crumbs complex serves as an apical cue to keep the aPKC complex apical during complex cellular shape changes.
Basolateral membranes
In the context of renal tubule physiology, the term basolateral membrane refers to the cell membrane which is oriented away from the lumen of the tubule, whereas the term apical or luminal membrane refers to the cell membrane which is oriented towards the lumen. The principal function of this basolateral membrane is to take up metabolic waste products into the epithelial cell for disposal into the lumen where it is transported out of the body as urine. A secondary role of the basolateral membrane is to allow the recycling of desirable substrates, such as glucose, that have been rescued from the lumen of the tubule to be secreted into the interstitial fluids.
Basal and lateral membranes share common determinants, the proteins LLGL1, DLG1, and SCRIB. These three proteins all localize to the basolateral domain and are essential for basolateral identity and for epithelial polarity.
Mechanisms of polarity
How epithelial cells polarize is still not fully understood. Some key principles have been proposed to maintain polarity, but the mechanisms behind these principles remain to be discovered.
The first principle is positive feedback. In computer models, a molecule that can be either membrane-associated or cytoplasmic can polarize when its association with the membrane is subject to positive feedback: that membrane localization occurs most strongly where the molecule is already most highly concentrated. In similar models, researchers have shown that epithelial cells can self-assemble into a rich set of robust biological shapes. In the yeast saccharomyces cerevisiae, there is genetic evidence that Cdc42 is subject to positive feedback of this kind and can spontaneously polarize, even in the absence of an external cue. In the fruit fly Drosophila melanogaster, Cdc42 is recruited by the aPKC complex and then promotes the apical localization of the aPKC complex in a probable positive feedback loop. Thus, in the absence of Cdc42 or the aPKC complex, apical determinants cannot be maintained at the apical membrane and consequently, apical identity and polarity is lost.
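As a toy illustration of the positive-feedback principle, the following Python sketch simulates sequential recruitment of molecules to membrane sites. The site and molecule counts and the quadratic (cooperative) feedback rule are assumptions made for this example rather than a model from the literature; with such super-linear feedback, most runs end with the bulk of the molecules concentrated at a single site, i.e. a polarized distribution.

```python
import random

def polarize(n_sites=10, n_molecules=2000, seed=1):
    """Recruit molecules to membrane sites with cooperative positive feedback."""
    random.seed(seed)
    occupancy = [0] * n_sites
    for _ in range(n_molecules):
        # binding probability grows super-linearly with current occupancy
        weights = [(1 + occ) ** 2 for occ in occupancy]
        site = random.choices(range(n_sites), weights=weights)[0]
        occupancy[site] += 1
    return sorted(occupancy, reverse=True)

print(polarize())   # typically one site ends up holding most of the molecules
```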
The second principle is segregation of polarity determinants. The sharp distinction between apical and baso-lateral domains is maintained by an active mechanism that prevents mixing. The nature of this mechanism is not known, but it clearly depends on the polarity determinants. In the absence of the aPKC complex, the baso-lateral determinants spread into the former apical domain. Conversely, in the absence of any of Lgl, Dlg or Scrib, the apical determinants spread into the former baso-lateral domain. Thus, the two determinants behave as if they exert mutual repulsion upon one another.
The third principle is directed exocytosis. Apical membrane proteins are trafficked from the Golgi to the apical, rather than baso-lateral, membrane because apical determinants serve to identify the correct destination for vesicle delivery. A related mechanism is likely to operate for the baso-lateral membranes.
The fourth principle is lipid modification. A component of the lipid bilayer, phosphatidyl inositol phosphate (PIP) can be phosphorylated to form PIP2 and PIP3. In some epithelial cells, PIP2 is apically localised while PIP3 is basolaterally localised. In at least one cultured cell line, the MDCK cell, this system is required for epithelial polarity. The relationship between this system and the polarity determinants in animal tissues remains unclear.
Basal versus lateral
Since basal and lateral membranes share the same determinants, another mechanism must make the difference between the two domains. Cell shape and contacts provide the likely mechanism. Lateral membranes are the site of contact between epithelial cells, whereas basal membranes connect epithelial cells to the basement membrane, an extracellular matrix layer that lies along the basal surface of the epithelium. Certain molecules, such as Integrins, localise specifically to the basal membrane and form connections with the extracellular matrix.
Epithelial cell shape
Epithelial cells come in a variety of shapes that relate to their function in development or physiology. How epithelial cells adopt particular shapes is poorly understood, but it must involve spatial control of the actin cytoskeleton, which is central to cell shape in all animal cells.
Apical snouts, also called apical blebs, are small protrusions of cytoplasm towards the lumen. They are found normally in apocrine cells, and can also appear in apocrine metaplasia and columnar cell changes in the breast.
Epithelial cadherin
All epithelial cells express the transmembrane adhesion molecule E-cadherin, a cadherin which localises most prominently to the junction between the apical and lateral membranes. The extra-cellular domains of E-cadherin molecules from neighbouring cells bind to one another via a homotypic interaction. The intra-cellular domains of E-cadherin molecules bind to the actin cytoskeleton via the adaptor proteins alpha-catenin and beta-catenin. Thus, E-cadherin forms adherens junctions that connect the actin cytoskeletons of neighbouring cells. Adherens junctions are the primary force-bearing junctions between epithelial cells and are fundamentally important for maintaining epithelial cell shape and for dynamic changes in shape during tissue development. How E-cadherin localizes to the boundary between apical and lateral membranes is not known, but polarized membranes are essential for maintaining E-cadherin at adherens junctions.
See also
Cell polarity
References
Cell biology | Epithelial polarity | [
"Biology"
] | 1,558 | [
"Cell biology"
] |
22,762,876 | https://en.wikipedia.org/wiki/MAXEkSAT | MAXEkSAT is a problem in computational complexity theory that is a maximization version of the Boolean satisfiability problem 3SAT. In MAXEkSAT, each clause has exactly k literals, each with distinct variables, and is in conjunctive normal form. These are called k-CNF formulas. The problem is to determine the maximum number of clauses that can be satisfied by a truth assignment to the variables in the clauses.
We say that an algorithm A provides an α-approximation to MAXEkSAT if, for some fixed positive α less than or equal to 1, and every kCNF formula φ, A can find a truth assignment to the variables of φ that will satisfy at least an α-fraction of the maximum number of satisfiable clauses of φ.
Because the NP-hard k-SAT problem (for k ≥ 3) is equivalent to determining if the corresponding MAXEkSAT instance has a value equal to the number of clauses, MAXEkSAT must also be NP-hard, meaning that there is no polynomial time algorithm unless P=NP. A natural next question, then, is that of finding approximate solutions: what's the largest real number α < 1 such that some explicit P (complexity) algorithm always finds a solution of size α·OPT, where OPT is the (potentially hard to find) maximizing assignment.
Approximation Algorithm
There is a simple randomized polynomial-time algorithm that provides a (1 − 2^(−k))-approximation to MAXEkSAT: independently set each variable to true with probability 1/2, otherwise set it to false.
Any given clause c is unsatisfied only if all of its k constituent literals evaluate to false. Because each literal within a clause has a 1/2 chance of evaluating to true independently of the truth values of the other literals, the probability that they are all false is 2^(−k). Thus, the probability that c is satisfied is 1 − 2^(−k), so the indicator variable for c (that is 1 if c is true and 0 otherwise) has expectation 1 − 2^(−k). The sum of all of the indicator variables over all clauses is the number of satisfied clauses, so by linearity of expectation we satisfy a (1 − 2^(−k)) fraction of the clauses in expectation. Because the optimal solution can't satisfy more than all of the clauses, the algorithm finds a (1 − 2^(−k))-approximation to the true optimal solution in expectation.
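A minimal Python sketch of the random-assignment algorithm just described (the clause encoding as signed variable indices is a convention assumed for this example):

```python
import random

def random_assignment(num_vars, clauses, seed=0):
    """Set each variable to True with probability 1/2 and count satisfied clauses.

    Clauses are lists of non-zero integers: literal v means "variable v is true",
    literal -v means "variable v is false".
    """
    random.seed(seed)
    assignment = [random.random() < 0.5 for _ in range(num_vars + 1)]  # index 0 unused
    satisfied = sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return assignment, satisfied

# A small E3SAT instance over 4 variables.
clauses = [[1, -2, 3], [-1, 2, 4], [2, -3, -4]]
print(random_assignment(4, clauses)[1])
```

Repeating such runs with different seeds and keeping the best assignment realizes the amplification argument discussed below.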
Despite its high expectation, this algorithm may occasionally stumble upon solutions of value lower than the expectation we computed above. However, over a large number of trials, the average fraction of satisfied clauses will tend towards 1 − 2^(−k). This implies two things:
There must exist an assignment satisfying at least a (1 − 2^(−k)) fraction of the clauses. If there weren't, we could never attain a value this large on average over a large number of trials.
If we run the algorithm a large number of times, at least half of the trials (in expectation) will satisfy some fraction of the clauses. This is because any smaller fraction would bring down the average enough that the algorithm must occasionally satisfy more than 100% of the clauses to get back to its expectation of , which cannot happen. Extending this using Markov's inequality, at least some -fraction of the trials (in expectation) will satisfy at least an -fraction of the clauses. Therefore, for any positive , it takes only a polynomial number of random trials until we expect to find an assignment satisfying at least an fraction of the clauses.
A more robust analysis (such as that in ) shows that we will, in fact, satisfy at least a -fraction of the clauses a constant fraction of the time (depending only on k), with no loss of .
Derandomization
While the above algorithm is efficient, it's not obvious how to remove its dependence on randomness. Trying out all possible random assignments is equivalent to the naive brute force approach, so may take exponential time. One clever way to derandomize the above in polynomial time relies on work in error correcting codes, satisfying a fraction of the clauses in time polynomial in the input size (although the exponent depends on k).
We need one definition and two facts to find the algorithm.
Definition
is an -wise independent source if, for a uniformly chosen random are -wise independent random variables.
Fact 1
Note that such an assignment can be found among elements of any -wise independent source over n binary variables. This is easier to see once you realize that an -wise independent source is really just any set of binary vectors over with the property that all restrictions of those vectors to co-ordinates must present the 2ℓ possible binary combinations an equal number of times.
Fact 2
Recall that BCH2,m,d is an linear code.
There exists an -wise independent source of size , namely the dual of a code, which is a linear code. Since every BCH code can be presented as a polynomial-time computable restriction of a related Reed Solomon code, which itself is strongly explicit, there is a polynomial-time algorithm for finding such an assignment to the xi's. The proof of fact 2 can be found at Dual of BCH is an independent source.
Outline of the Algorithm
The algorithm works by generating BCH(2, log n, k+1), computing its dual (which as a set is a k-wise independent source), and treating each element (codeword) of that source as a truth assignment to the n variables in φ. At least one of them will satisfy at least a 1 − 2^(−k) fraction of the clauses of φ, whenever φ is in kCNF form.
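The enumeration step of this outline can be sketched as follows, reusing the clause encoding from the earlier sketch. Constructing the k-wise independent source itself (the dual BCH code) is beyond the scope of this sketch and is assumed to be supplied as a list of 0/1 assignments; the function name is an illustrative choice.

```python
def best_assignment(clauses, source):
    """Exhaustively check every assignment in `source` (assumed to be a
    k-wise independent sample space over the n variables, e.g. the dual of
    a suitable BCH code) and return the one satisfying the most clauses.
    By Fact 1, at least one element satisfies a 1 - 2**-k fraction of them."""
    def satisfied(assignment):
        # assignment entries may be bools or 0/1 ints; both compare correctly
        return sum(
            any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
    return max(source, key=satisfied)
```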
Related problems
There are many problems related to the satisfiability of conjunctive normal form Boolean formulas.
Decision problems:
2SAT
3SAT
Optimization problems, where the goal is to maximize the number of clauses satisfied:
MAX-SAT, and the corresponded weighted version Weighted MAX-SAT
MAX-kSAT, where each clause has exactly k variables:
MAX-2SAT
MAX-3SAT
MAXEkSAT
The partial maximum satisfiability problem (PMAX-SAT) asks for the maximum number of clauses which can be satisfied by any assignment of a given subset of clauses. The rest of the clauses must be satisfied.
The soft satisfiability problem (soft-SAT), given a set of SAT problems, asks for the maximum number of sets which can be satisfied by any assignment.
The minimum satisfiability problem.
The MAX-SAT problem can be extended to the case where the variables of the constraint satisfaction problem belong to the set of reals. The problem amounts to finding the smallest q such that the q-relaxed intersection of the constraints is not empty.
See also
References
External links
Coding Theory notes at MIT
NP-hard problems | MAXEkSAT | [
"Mathematics"
] | 1,368 | [
"NP-hard problems",
"Mathematical problems",
"Computational problems"
] |
22,763,312 | https://en.wikipedia.org/wiki/Micromirror%20device | Micromirror devices are devices based on microscopically small mirrors. The mirrors are microelectromechanical systems (MEMS), which means that their states are controlled by applying a voltage between the two electrodes around the mirror arrays. Digital micromirror devices are used in video projectors and optics and micromirror devices for light deflection and control.
Digital Micromirror Devices
Digital Micromirror Devices (DMD) were invented by Texas Instruments in 1987 and are the core of the DLP technology used for video projection. The mirrors are arranged in a matrix and have two states, "on" or "off" (digital). In the on state, light from the projector bulb is reflected into the lens, making the pixel appear bright on the screen. In the off state, the light is directed elsewhere (usually onto a heatsink), making the pixel appear dark. Colours can be produced by various techniques, such as different light sources or gratings.
Light Deflection and Control
The mirrors can not only be switched between two states; their rotation is in fact continuous. This can be used to control the intensity and direction of incident light. One future application is controlling the light in buildings, based on micromirrors between the two panes of insulated glazing. How the incident light is directed is determined by the mirror's state, which itself is controlled electrostatically.
MEMS Scanning Micromirror
A MEMS scanning micromirror consists of a silicon device with a millimeter-scale mirror at the center. The mirror is typically connected to flexures that allow it to oscillate on a single axis or biaxially, to project or capture light.
References
Semiconductor devices
Optoelectronics
Microtechnology
Microelectronic and microelectromechanical systems | Micromirror device | [
"Materials_science",
"Engineering"
] | 377 | [
"Microelectronic and microelectromechanical systems",
"Materials science",
"Microtechnology"
] |
27,557,070 | https://en.wikipedia.org/wiki/Mountain%20Pass%20Rare%20Earth%20Mine | The Mountain Pass Rare Earth Mine and Processing Facility, owned by MP Materials, is an open-pit mine of rare-earth elements on the south flank of the Clark Mountain Range in California, southwest of Las Vegas, Nevada. In 2020 the mine supplied 15.8% of the world's rare-earth production. It is the only rare-earth mining and processing facility in the United States. It is the largest single known deposit of such minerals.
As of 2022, work is ongoing to restore processing capabilities for domestic light rare-earth elements (LREEs) and work has been funded by the United States Department of Defense to restore processing capabilities for heavy rare-earth metals (HREEs) to alleviate supply chain risk.
Geology
The Mountain Pass deposit is in a 1.4 billion-year-old Precambrian carbonatite intruded into gneiss. It contains 8% to 12% rare-earth oxides, mostly contained in the mineral bastnäsite. Gangue minerals include calcite, barite, and dolomite. It is regarded as a world-class rare-earth mineral deposit. The metals that can be extracted from it include: cerium, lanthanum, neodymium, and europium.
At 1 July 2020, Proven and Probable Reserves, using a 3.83% total rare-earth oxide (REO) cutoff grade, were 18.9 million tonnes of ore containing 1.36 million tonnes of REO at an average grade of 7.06% REO. The ore body is about thick and long.
Ore processing
To process bastnäsite ore, it is finely ground and subjected to froth flotation to separate the bulk of the bastnäsite from the accompanying barite, calcite, and dolomite. Marketable products include each of the major intermediates of the ore dressing process: flotation concentrate, acid-washed flotation concentrate, calcined acid-washed bastnäsite, and finally a cerium concentrate, which was the insoluble residue left after the calcined bastnäsite had been leached with hydrochloric acid.
The lanthanides that dissolve as a result of the acid treatment are subjected to solvent extraction to capture the europium and purify the other individual components of the ore. A further product includes a lanthanide mix, depleted of much of the cerium, and essentially all of samarium and heavier lanthanides. The calcination of bastnäsite drives off the carbon dioxide content, leaving an oxide-fluoride, in which the cerium content oxidizes to the less-basic quadrivalent state. However, the high temperature of the calcination gives less-reactive oxide, and the use of hydrochloric acid, which can cause reduction of quadrivalent cerium, leads to an incomplete separation of cerium and the trivalent lanthanides.
History
Gold mining began at the site in 1936, but the rare earth deposits were not discovered until 1949 when prospectors in search of uranium noticed anomalously high radioactivity. Molybdenum Corporation of America bought most of the mining claims, and began small-scale production in 1952.
Production expanded greatly in the 1960s, to supply demand for europium used in color television screens. Between 1965 and 1995, the mine supplied most of the worldwide rare-earth metals consumption.
Molybdenum Corporation of America changed its name to Molycorp in 1974. The corporation was acquired by Union Oil in 1977, which in turn became part of Chevron Corporation in 2005.
In 1998, the mine's separation plant ceased production of refined rare-earth compounds; it continued to produce bastnäsite concentrate.
The mine closed in 2002 after a toxic waste spill and wasn't reopened due to competition from Chinese suppliers, though processing of previously mined ore continued.
In 2008, Chevron sold the mine to privately-held Molycorp Minerals LLC, a company formed to revive the Mountain Pass mine. Molycorp announced plans to spend $500 million to reopen and expand the mine, and on July 29, 2010, it raised about $400 million through an initial public offering, selling 28,125,000 shares at $14 under the ticker symbol MCP on the New York Stock Exchange.
In December 2010, Molycorp announced that it had secured all the environmental permits needed to build a new ore processing plant at the mine; construction would begin in January 2011, and was expected to be completed by the end of 2012. On August 27, 2012, the company announced that mining had restarted.
The processing plant was in full production on June 25, 2015, when Molycorp filed for Chapter 11 bankruptcy with outstanding bonds in the amount of $US 1.4 billion. The company's shares were removed from the NYSE.
In August 2015, it was reported that the mine was to be shut down.
On August 31, 2016, Molycorp Inc. emerged from bankruptcy as Neo Performance Materials, leaving behind the mine as Molycorp Minerals LLC in its own separate Chapter 11 bankruptcy. As of January 2016, its shares were traded OTC under the symbol MCPIQ.
Mountain Pass was acquired out of bankruptcy in July 2017 with the goal of reviving America's rare-earth industry. MP Materials resumed mining and refining operations in January 2018.
Current ownership
MP Materials is 51.8%-owned by US hedge funds JHL Capital Group (and its CEO James Litinsky) and QVT Financial LP, while Shenghe Resources, a partially state-owned enterprise of the Government of China, holds an 8.0% stake. Apart from institutions, the public owns 18%.
Environmental impact
In the 1980s, the company began piping wastewater up to 14 miles to evaporation ponds on or near Ivanpah Dry Lake, east of Interstate 15 near Nevada. This pipeline repeatedly ruptured during cleaning operations to remove mineral deposits called scale. The scale is radioactive because of the presence of thorium and radium, which occur naturally in the rare-earth ore. A federal investigation later found that some 60 spills—some unreported—occurred between 1984 and 1998, when the pipeline and chemical processing at the mine were shut down. In all, about 600,000 gallons of radioactive and other hazardous waste flowed onto the desert floor, according to federal authorities. By the end of the 1990s, Unocal was served with a cleanup order and a San Bernardino County district attorney's lawsuit. The company paid more than $1.4 million in fines and settlements. After preparing a cleanup plan and completing an extensive environmental study, Unocal in 2004 won approval of a county permit that allowed the mine to operate for another 30 years. The mine passed a key county inspection in 2007.
Current activity
Since 2007, China has restricted exports of REEs (rare-earth elements) and imposed export tariffs, both to conserve resources and to give preference to Chinese manufacturers. In 2009, China supplied more than 96% of the world's REEs. Some outside China are concerned that because rare-earths are essential to some high-tech, renewable-energy, and defense-related technologies, the world should not be so reliant on a single supplier country.
On September 22, 2010, China quietly enacted a ban on exports of rare-earths to Japan, a move suspected to be in retaliation for the Japanese arrest of a Chinese trawler captain in a territorial dispute. Because Japan and China are the only current sources for rare-earth magnetic material used in the US, a permanent disruption of Chinese rare-earth supply to Japan would leave China as the sole source. Jeff Green, a rare-earth lobbyist, said, "We are going to be 100 percent reliant on the Chinese to make the components for the defense supply chain." The House Committee on Science and Technology scheduled on September 23, 2010, the review of a detailed bill to subsidize the revival of the American rare-earths industry, including the reopening of the Mountain Pass mine.
After China doubled import duties on rare-earth concentrates to 25% as a result of the US-China trade war, MP Materials said, in May 2019, it will start its own partial processing operation in the United States, though full processing operations without Shenghe Resources have been delayed. According to Bloomberg, China in 2019 established a plan for restricting U.S. access to Chinese heavy rare earth elements, should the punitive step be deemed necessary. In 2022, the company announced that it had secured Department of Defense grants to support both light rare-earth elements (LREEs) and heavy rare earth elements (HREEs). The facility plans to begin separating NdPr oxide in early 2023.
References
Further reading
External links
Mountain Pass mine: geology, history & potential, February 1, 2023, "Geology for Investors"
Buildings and structures in San Bernardino County, California
Carbonatite occurrences
Geography of San Bernardino County, California
Metallurgical facilities
Mines in California
Rare earth mines
Surface mines in the United States | Mountain Pass Rare Earth Mine | [
"Chemistry",
"Materials_science"
] | 1,868 | [
"Metallurgy",
"Metallurgical facilities"
] |
27,557,852 | https://en.wikipedia.org/wiki/Stable%20nucleic%20acid%20lipid%20particle | Stable nucleic acid lipid particles (SNALPs) are microscopic particles approximately 120 nanometers in diameter, smaller than the wavelengths of visible light. They have been used to deliver siRNAs therapeutically to mammals in vivo. In SNALPs, the siRNA is surrounded by a lipid bilayer containing a mixture of cationic and fusogenic lipids, coated with diffusible polyethylene glycol.
Introduction
RNA interference (RNAi) is a process that occurs naturally within the cytoplasm, inhibiting gene expression at specific sequences. Regulation of gene expression through RNAi is possible by introducing small interfering RNAs (siRNAs), which effectively silence expression of a targeted gene. RNAi acts through the RNA-induced silencing complex (RISC), which is loaded with siRNA derived from cleaved dsRNA. The siRNA guides the RISC complex to a specific sequence on the mRNA, which is cleaved by RISC, consequently silencing that gene.
However, without modifications to the RNA backbone or inclusion of inverted bases at either end, siRNA instability in the plasma makes it extremely difficult to apply this technique in vivo. Pattern recognition receptors (PRRs), which can be grouped as endocytic PRRs or signaling PRRs, are expressed in all cells of the innate immune system. Signaling PRRs, in particular, include Toll-like receptors (TLRs) and are involved primarily with identifying pathogen-associated molecular patterns (PAMPs). For example, TLRs can recognize specific regions conserved in various pathogens, recognition stimulating an immune response with potentially devastating effects to the organism. In particular, TLR 3 recognizes both dsRNA characteristic of viral replication and siRNA, which is also double-stranded. In addition to this instability, another limitation of siRNA therapy concerns the inability to target a tissue with any specificity.
SNALPs, though, may provide the stability and specificity required for this mode of RNAi therapy to be effective. Consisting of a lipid bilayer, SNALPs are able to provide stability to siRNAs by protecting them from nucleases within the plasma that would degrade them. In addition, delivery of siRNAs is subject to endosomal trafficking, potentially exposing them to TLR3 and TLR7, and can lead to activation of interferons and proinflammatory cytokines. However, SNALPs allow siRNA uptake into the endosome without activating Toll-like receptors and consequently stimulating an impeding immune response, thus enabling siRNA escape from the endosome.
Development of SNALP delivery of siRNA
Downregulation of gene expression via siRNA has been an important research tool in in vitro studies. Susceptibility of siRNAs to nuclease degradation, though, makes use of them in vivo problematic. In 2005, researchers working with hepatitis B virus(HBV) in rodents, determined that certain modifications of the siRNA prevented degradation by nucleases within the plasma and lead to increased gene silencing compared to unmodified siRNA. Modifications to the sense and antisense strands were made differentially. With respect to both sense and antisense strands, 2'-OH was substituted with 2'-fluoro at all pyrimidine positions. In addition, sense strands were modified at all purine positions with deoxyribose, antisense strands modified with 2'-O-methyl at the same positions. The 5' and 3' ends of the sense strand were capped with abasic inverted repeats, while a phosphorothioate linkage was incorporated at the 3' end of the antisense strand.
Although this research demonstrated a potential RNAi therapy using modified siRNA, the 90% reduction in HBV DNA in rodents resulted from a 30 mg/kg dosage with frequent administration. Because this is not a viable dosing regime, this same group looked at the effects of encapsulating the siRNA in a PEGylated lipid bilayer, or SNALP. Specifically, the lipid bilayer facilitates uptake into the cell and subsequent release from the endosome, the PEGylated outer layer providing stability during formulation due to the resulting hydrophilicity of the exterior. According to this 2005 study, researchers obtained 90% reduction in HBV DNA with a 3 mg/kg/day dose of siRNA for three days, a dose substantially lower than the earlier study. In addition, in contrast to unmodified or modified and non-encapsulated siRNA, administration of SNALP-delivered siRNA resulted in no detectable levels of interferons, such as IFN-a, or inflammatory cytokines associated with immunostimulation. Even so, researchers acknowledged that more work was necessary in order to reach a feasible dose and dosing regime.
In 2006, researchers working on silencing of apolipoprotein B(ApoB) in non-human primates achieved 90% silencing with a single dose of 2.5 mg/kg of SNALP-delivered APOB-specific siRNA. ApoB is a protein involved with the assembly and secretion of very-low-density lipoprotein(VLDL) and low-density lipoprotein(LDL), and it is expressed primarily in the liver and jejunum. Both VLDL and LDL are important in cholesterol transport and its metabolism. Not only was this degree of silencing observed very quickly, in about 24 hours post-administration, but the silencing effects maintained for over 22 days after only a single dose. Researchers tested a 1 mg/kg single dose, too, obtaining a 68% silencing of the target gene, indicating dose-dependent silencing. This dose-dependent silencing was evident not only on the degree of silencing but the duration of silencing, expression of the target gene recovering 72 hours post-administration.
Although SNALPs having a 100 nm diameter have been used effectively to target specific genes for silencing, there are a variety of systemic barriers that relate specifically to size. For example, diffusion into solid tumors is impeded by large SNALPs and, similarly, inflamed cells having enhanced permeation and retention make it difficult for large SNALPs to enter. In addition, reticuloendothelial elimination, blood–brain barrier size-selectivity and limitations of capillary fenestrae all necessitate a smaller SNALP in order to effectively deliver target-specific siRNA. In 2012, scientists in Germany developed what they termed "mono-NALPs" using a fairly simple solvent exchange method involving progressive dilution of a 50% isopropanol solution. What results is a very stable delivery system similar to traditional SNALPs, but one having only a diameter of 30 nm. The mono-NALPs developed here, however, are inactive, but can become active carriers by implementing specific targeting and release mechanisms used by similar delivery systems.
Applications
Zaire Ebola virus (ZEBOV)
In May 2010, an application of SNALPs to the Ebola Zaire virus made headlines, as the preparation was able to cure rhesus macaques when administered shortly after their exposure to a lethal dose of the virus, which can be up to 90% lethal to humans in sporadic outbreaks in Africa. The treatment used for rhesus macaques consisted of three siRNAs (staggered duplexes of RNA) targeting three viral genes. The SNALPs (around 81 nm in size here) were formulated by spontaneous vesiculation from a mixture of cholesterol, dipalmitoyl phosphatidylcholine, 3-N-[(ω-methoxy poly(ethylene glycol)2000)carbamoyl]-1,2-dimyrestyloxypropylamine, and cationic 1,2-dilinoleyloxy-3-N,N-dimethylaminopropane.
In addition to the rhesus macaque application, SNALPs have also been proven to protect Cavia porcellus (guinea pigs) from viremia and death when administered shortly after exposure to ZEBOV. A polymerase (L) gene-specific siRNA delivery system was imposed upon four genes associated with the viral genomic RNA in the ribonucleoprotein complex found within EBOV particles (three of which match the application above): NP, VP30, VP35, and the L protein. The SNALPs ranged from 71–84 nm in size and were composed of synthetic cholesterol, phospholipid DSPC, PEG lipid PEGC-DMA, and cationic lipid DLinDMA at the molar ratio of 48:20:2:30. The results confirm complete protection against viremia and death in guinea pigs when administered a SNALP-siRNA delivery system after diagnosis of the Ebola virus, thus proving this technology to be an effective treatment. Future studies will focus mainly upon evaluating the effects of siRNA ‘cocktails’ on EBOV genes to increase antiviral effects.
Hepatocellular Carcinoma
In 2010, researchers developed an applicable targeting therapy for hepatocellular carcinoma (HCC) in humans. The identification of CSN5, the fifth subunit of the COP9 signalosome complex found in early HCC, was used as a therapeutic target for siRNA induction. Systemic delivery of modified CSN5siRNA encapsulated in SNALPs significantly inhibited hepatic tumor growth in the Huh7-luc+ orthotopic xenograft model of human liver cancer. SiRNA-mediated CSN5 knockdown was also proven to inhibit cell-cycle progression and increases the rate of apoptosis in HCC cells in vitro. Not only do these results demonstrate the role of CSN5 in liver cancer progression, they also indicate that CSN5 has an essential role in HCC pathogenesis. In conclusion, SNALPs have been proven to significantly reduce hepatocellular carcinoma tumor growth in human Huh7-luc* cells through therapeutic silencing.
Tumors
In 2009, researchers developed siRNAs capable of targeting both polo-like kinase 1(PLK1) and kinesin spindle protein(KSP). Both proteins are important to the cell-cycle of tumor cells, PLK1 involved with phosphorylation of a variety of proteins and KSP integral to chromosome segregation during mitosis. Specifically, bipolar mitotic spindles are unable to form when KSP is inhibited, leading to arrest of the cell cycle and, eventually, apoptosis. Likewise, inhibition of PLK1 facilitates mitotic arrests and cell apoptosis. According to the study, a 2 mg/kg dose of PLK1-specific siRNA administered for 3 weeks to mice implanted with tumors resulted in increased survival times and obvious reduction of tumors. In fact, the median survival time of treated mice was 51 days as opposed to 32 days for the controls. Further, only 2 of the 6 mice treated had noticeable tumors around the implantation site. Even so, GAPDH, a tumor-derived signal, was present at low levels, indicating significant suppression of tumor growth but not complete elimination. Still, the results suggested minimal toxicity and no significant dysfunction of the bone marrow. Animals treated with KSP-specific siRNA, too, exhibited increased survival times of 28 days compared to 20 days in the controls.
References
Molecular biology
RNA interference | Stable nucleic acid lipid particle | [
"Chemistry",
"Biology"
] | 2,420 | [
"Biochemistry",
"Molecular biology"
] |
5,106,912 | https://en.wikipedia.org/wiki/Immunocytochemistry | Immunocytochemistry (ICC) is a common laboratory technique that is used to anatomically visualize the localization of a specific protein or antigen in cells by use of a specific primary antibody that binds to it. The primary antibody allows visualization of the protein under a fluorescence microscope when it is bound by a secondary antibody that has a conjugated fluorophore. ICC allows researchers to evaluate whether or not cells in a particular sample express the antigen in question. In cases where an immunopositive signal is found, ICC also allows researchers to determine which sub-cellular compartments are expressing the antigen.
Immunocytochemistry vs. immunohistochemistry
Immunocytochemistry differs from immunohistochemistry in that the former is performed on samples of intact cells that have had most, if not all, of their surrounding extracellular matrix removed. This includes individual cells that have been isolated from a block of solid tissue, cells grown within a culture, cells deposited from suspension, or cells taken from a smear. In contrast, immunohistochemical samples are sections of biological tissue, where each cell is surrounded by tissue architecture and other cells normally found in the intact tissue.
Immunocytochemistry is a technique used to assess the presence of a specific protein or antigen in cells (cultured cells, cell suspensions) by use of a specific antibody, which binds to it, thereby allowing visualization and examination under a microscope. It is a valuable tool for the determination of cellular contents from individual cells. Samples that can be analyzed include blood smears, aspirates, swabs, cultured cells, and cell suspensions.
There are many ways to prepare cell samples for immunocytochemical analysis. Each method has its own strengths and unique characteristics so the right method can be chosen for the desired sample and outcome.
Cells to be stained can be attached to a solid support to allow easy handling in subsequent procedures. This can be achieved by several methods: adherent cells may be grown on microscope slides, coverslips, or an optically suitable plastic support. Suspension cells can be centrifuged onto glass slides (cytospin), bound to solid support using chemical linkers, or in some cases handled in suspension.
Concentrated cellular suspensions that exist in a low-viscosity medium make good candidates for smear preparations. Dilute cell suspensions existing in a dilute medium are best suited for the preparation of cytospins through cytocentrifugation. Cell suspensions that exist in a high-viscosity medium, are best suited to be tested as swab preparations. The constant among these preparations is that the whole cell is present on the slide surface. For any intercellular reaction to take place, immunoglobulin must first traverse the cell membrane that is intact in these preparations. Reactions taking place in the nucleus can be more difficult, and the extracellular fluids can create unique obstacles in the performance of immunocytochemistry. In this situation, permeabilizing cells using detergent (Triton X-100 or Tween-20) or choosing organic fixatives (acetone, methanol, or ethanol) becomes necessary.
Antibodies are an important tool for demonstrating both the presence and the subcellular localization of an antigen. Cell staining is a very versatile technique and, if the antigen is highly localized, can detect as few as a thousand antigen molecules in a cell. In some circumstances, cell staining may also be used to determine the approximate concentration of an antigen, especially by an image analyzer.
Methods
There are many methods to obtain immunological detection on tissues, including those tied directly to primary antibodies or antisera. A direct method involves attaching a detectable tag (e.g., a fluorescent molecule or gold particles) directly to the antibody, which is then allowed to bind to the antigen (e.g., a protein) in a cell.
Alternatively, there are many indirect methods. In one such method, the antigen is bound by a primary antibody which is then amplified by use of a secondary antibody which binds to the primary antibody. Next, a tertiary reagent containing an enzymatic moiety is applied and binds to the secondary antibody. When the quaternary reagent, or substrate, is applied, the enzymatic end of the tertiary reagent converts the substrate into a pigment reaction product, which produces a color (many colors are possible; brown, black, red, etc.,) in the same location that the original primary antibody recognized that antigen of interest.
Some examples of substrates used (also known as chromogens) are AEC (3-Amino-9-EthylCarbazole), or DAB (3,3'-Diaminobenzidine). Use of one of these reagents after exposure to the necessary enzyme (e.g., horseradish peroxidase conjugated to an antibody reagent) produces a positive immunoreaction product. Immunocytochemical visualization of specific antigens of interest can be used when a less specific stain like H&E (Hematoxylin and Eosin) cannot be used for a diagnosis to be made or to provide additional predictive information regarding treatment (in some cancers, for example).
Alternatively the secondary antibody may be covalently linked to a fluorophore (FITC and Rhodamine are the most common) which is detected in a fluorescence or confocal microscope. The location of fluorescence will vary according to the target molecule, external for membrane proteins, and internal for cytoplasmic proteins. In this way immunofluorescence is a powerful technique when combined with confocal microscopy for studying the location of proteins and dynamic processes (exocytosis, endocytosis, etc.).
References
External links
Immunocytochemistry Staining Protocol
Immunohistochemistry of Whole-Mount Mouse Embryos
Histology
Immunologic tests
Protein methods
Staining
Laboratory techniques | Immunocytochemistry | [
"Chemistry",
"Biology"
] | 1,270 | [
"Biochemistry methods",
"Staining",
"Protein methods",
"Protein biochemistry",
"Immunologic tests",
"Histology",
"Microbiology techniques",
"nan",
"Microscopy",
"Cell imaging"
] |
5,107,430 | https://en.wikipedia.org/wiki/Stoplogs | Stoplogs are hydraulic engineering control elements that are used in floodgates to adjust the water level or discharge in a river, canal, or reservoir. Stoplogs are designed to cut off or stop flow through a conduit. They are typically long rectangular timber beams or boards that are placed on top of each other and dropped into premade slots inside a weir, gate, or channel. Present day, the process of adding and removing stoplogs is not manual, but done with hydraulic stoplog lifters and hoists. Since the height of the barrier can only be adjusted through the addition and removal of stoplogs, finding a lighter and stronger material other than wood or concrete became a more desirable choice. Other materials, including steel and composites, can be used as stoplogs as well. Stoplogs are sometimes confused with flashboards, as both elements are used in bulkhead or crest gates.
Usage
Stoplogs are modular in nature, giving the operator of a gated structure the ability to control the water level in a channel by adding or removing individual stoplogs. A gate may make use of one or more logs. Each log is lowered horizontally into a space or bay between two grooved piers referred to as a stoplog check. In larger gate structures, there will be multiple bays in which stoplogs can be placed to better control the discharge through the structure.
Stoplogs are frequently used to temporarily block flow through a spillway or canal during routine maintenance. At other times stoplogs can be used over longer periods of times, such as when a field is flooded and stoplogs are being used in smaller gates in order to control the depth of water in fields. The logs may be left in and adjusted during the entire time that the field is submerged.
In most cases, the boards used are subjected to high flow conditions. As individual stoplogs begin to age they are replaced. Typically small amounts of water will leak between individual logs.
Stoplogs are typically used in structures where removal, installation, and replacement of the logs is expected to be infrequent. When larger flows of water are passing through a stoplog gate, it can be difficult to remove or place individual logs. Larger logs often require multiple people to position and lift.
Stoplogs vs. flashboards
Sometimes engineers will use these two terms interchangeably by calling a stoplog a flashboard. This is done in part because unlike many other types of bulkhead gates that are one continuous unit, both stoplogs and flashboards are modular and can be easily designed to hold back water at varying levels. However, most engineering texts and design firms differentiate between the two structures. Stoplogs are specialized bulkheads that are dropped into premade slots or guides in a channel or control structure, while flashboards are bulkheads that are placed on the crest or top of a channel wall or control structure. Flashboards are sometimes designed to break away under high flow conditions and thus to provide only a temporary diversion. In contrast, stoplogs are intended to be reused, and failure of a stoplog will result in an uncontrolled flow through a gate.
Handstops
Smaller stoplogs are sometimes referred to as handstops. Handstops are used in smaller gated structures, such as irrigation delivery ditches or the gates used to control water depth in larger submerged fields (such as rice fields). They are designed to be easily operated by a single individual.
References
Hydraulic engineering
Hydrology | Stoplogs | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 711 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
5,108,937 | https://en.wikipedia.org/wiki/Heteroclinic%20cycle | In mathematics, a heteroclinic cycle is an invariant set in the phase space of a dynamical system. It is a topological circle of equilibrium points and connecting heteroclinic orbits. If a heteroclinic cycle is asymptotically stable, approaching trajectories spend longer and longer periods of time in a neighbourhood of successive equilibria.
In generic dynamical systems heteroclinic connections are of high co-dimension, that is, they will not persist if parameters are varied.
Robust heteroclinic cycles
A robust heteroclinic cycle is one which persists under small changes in the underlying dynamical system. Robust cycles often arise in the presence of symmetry or other constraints which force the existence of invariant hyperplanes. A prototypical example of a robust heteroclinic cycle is the Guckenheimer–Holmes cycle. This cycle has also been studied in the context of rotating convection, and as three competing species in population dynamics.
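The behaviour can be illustrated numerically. The sketch below integrates one commonly quoted form of the Guckenheimer–Holmes system; the exact equations, the parameter choice b < 1 < a with a + b > 2 (so that contraction at each saddle outweighs expansion and the cycle attracts), and the helper names are all illustrative assumptions rather than details taken from this article.

```python
from scipy.integrate import solve_ivp

def guckenheimer_holmes(t, x, a=1.5, b=0.6):
    # One commonly quoted form: equilibria on each coordinate axis are joined
    # cyclically by heteroclinic orbits lying in the invariant coordinate planes.
    x1, x2, x3 = x
    return [
        x1 * (1 - x1**2 - a * x2**2 - b * x3**2),
        x2 * (1 - x2**2 - a * x3**2 - b * x1**2),
        x3 * (1 - x3**2 - a * x1**2 - b * x2**2),
    ]

# A trajectory started near (1, 0, 0) visits the axis equilibria in turn,
# spending progressively longer near each one, which is the hallmark of an
# asymptotically stable heteroclinic cycle.
sol = solve_ivp(guckenheimer_holmes, (0.0, 200.0), [0.9, 0.05, 0.01], max_step=0.1)
print(sol.y[:, -1])
```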
See also
Heteroclinic bifurcation
Heteroclinic network
References
External links
Dynamical systems | Heteroclinic cycle | [
"Physics",
"Mathematics"
] | 228 | [
"Mechanics",
"Dynamical systems"
] |
5,111,875 | https://en.wikipedia.org/wiki/Cadmium%20iodide | Cadmium iodide is an inorganic compound with the formula CdI2. It is a white hygroscopic solid. It also can be obtained as a mono- and tetrahydrate. It has few applications. It is notable for its crystal structure, which is typical for compounds of the form MX2 with strong polarization effects.
Preparation
Cadmium iodide is prepared by the addition of cadmium metal, or its oxide, hydroxide or carbonate to hydroiodic acid. Also, the compound can be made by heating cadmium with iodine.
Applications
Historically, cadmium iodide was used as a catalyst for the Henkel process, a high-temperature isomerisation of dipotassium phthalate to yield the terephthalate. The salt was then treated with acetic acid to yield potassium acetate and commercially valuable terephthalic acid.
While uneconomical compared to the production of terephthalic acid from p-xylene, the Henkel method has been proposed as a potential route to produce terephthalic acid from furfural. As existing Bio-PET is still reliant on petroleum as a source of p-xylene, the Henkel process could theoretically offer a completely bioplastic route to polyethylene terephthalate.
Crystal structure
In cadmium iodide the iodide anions form a hexagonal closely packed arrangement while the cadmium cations fill all of the octahedral sites in alternate layers. The resultant structure consists of a layered lattice. This same basic structure is found in many other salts and minerals. Cadmium iodide is mostly ionically bonded but with partial covalent character.
Cadmium iodide's crystal structure is the prototype on which the crystal structures of many other compounds can be considered to be based. Compounds with any of the following characteristics tend to adopt the CdI2 structure:
Iodides of moderately polarising cations; bromides and chlorides of strongly polarising cations
Hydroxides of dications, i.e. compounds with the general formula M(OH)2
Sulfides, selenides and tellurides (chalcogenides) of tetracations, i.e. compounds with the general formula MX2, where X = S, Se, Te
References
Cadmium compounds
Iodides
Metal halides
Photographic chemicals
Crystal structure types | Cadmium iodide | [
"Chemistry",
"Materials_science"
] | 505 | [
"Inorganic compounds",
"Crystal structure types",
"Salts",
"Crystallography",
"Metal halides"
] |
5,112,101 | https://en.wikipedia.org/wiki/Cadmium%20tungstate | Cadmium tungstate (CdWO4 or CWO), the cadmium salt of tungstic acid, is a dense, chemically inert solid which is used as a scintillation crystal to detect gamma rays. It has density of 7.9 g/cm3 and melting point of 1325 °C. It is toxic if inhaled or swallowed. Its crystals are transparent, colorless, with slight yellow tint. It is odorless. Its CAS number is . It is not hygroscopic.
The crystal is transparent and emits light when it is hit by gamma rays and x-rays, making it useful as a detector of ionizing radiation. Its peak scintillation wavelength is 480 nm (with emission range between 380 and 660 nm), and efficiency of 13000 photons/MeV. It has a relatively high light yield, its light output is about 40% of NaI(Tl), but the time of scintillation is quite long (12−15 μs). It is often used in computed tomography. Combining the scintillator crystal with externally applied piece of boron carbide allows construction of compact detectors of gamma rays and neutron radiation.
Cadmium tungstate was used as a replacement of calcium tungstate in some fluoroscopes since the 1940s. Very high radiopurity allows use of this scintillator as a detector of rare nuclear processes (double beta decay, other rare alpha and beta decays) in low-background applications. For example, the first indication of the natural alpha activity of tungsten (alpha decay of 180W) was found in 2003 with CWO detectors. Due to different time of light emission for different types of ionizing particles, the alpha-beta discrimination technique has been developed for CWO scintillators.
Cadmium tungstate films can be deposited by sol-gel technology. Cadmium tungstate nanorods can be synthesized by a hydrothermal process.
Similar materials are calcium tungstate (scheelite) and zinc tungstate.
It is toxic, as are all cadmium compounds.
References
External links
Scintillator materials
Cadmium compounds
Tungstates
Phosphors and scintillators | Cadmium tungstate | [
"Chemistry"
] | 459 | [
"Luminescence",
"Phosphors and scintillators"
] |
5,114,212 | https://en.wikipedia.org/wiki/Goursat%27s%20lemma | Goursat's lemma, named after the French mathematician Édouard Goursat, is an algebraic theorem about subgroups of the direct product of two groups.
It can be stated more generally in a Goursat variety (and consequently it also holds in any Maltsev variety), from which one recovers a more general version of Zassenhaus' butterfly lemma. In this form, Goursat's lemma also implies the snake lemma.
Groups
Goursat's lemma for groups can be stated as follows.
Let G, G′ be groups, and let H be a subgroup of G × G′ such that the two projections p1 : H → G and p2 : H → G′ are surjective (i.e., H is a subdirect product of G and G′). Let N be the kernel of p2 and N′ the kernel of p1. One can identify N as a normal subgroup of G, and N′ as a normal subgroup of G′. Then the image of H in G/N × G′/N′ is the graph of an isomorphism G/N ≅ G′/N′. One then obtains a bijection between:
Subgroups of G × G′ which project onto both factors,
Triples (N, N′, f) with N normal in G, N′ normal in G′ and f an isomorphism of G/N onto G′/N′.
An immediate consequence of this is that the subdirect product of two groups can be described as a fiber product and vice versa.
Notice that if H is any subgroup of G × G′ (the projections p1 : H → G and p2 : H → G′ need not be surjective), then the projections from H onto p1(H) and p2(H) are surjective. Then one can apply Goursat's lemma to H, regarded as a subdirect product of p1(H) and p2(H).
To motivate the proof, consider the slice S = {g} × G′ in G × G′, for any arbitrary g ∈ G. By the surjectivity of the projection map to G, this has a non-trivial intersection with H. Then essentially, this intersection represents exactly one particular coset of N′. Indeed, if we have elements (g, a), (g, b) ∈ S ∩ H, then H being a group, we get that (e, ab^(−1)) ∈ H, and hence (e, ab^(−1)) ∈ N′. It follows that (g, a) and (g, b) lie in the same coset of N′. Thus the intersection of H with every "horizontal" slice isomorphic to G′ is exactly one particular coset of N′ in G′.
By an identical argument, the intersection of H with every "vertical" slice isomorphic to G is exactly one particular coset of N in G.
All the cosets of N and N′ are present in the group H, and by the above argument, there is an exact 1:1 correspondence between them. The proof below further shows that the map is an isomorphism.
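For small finite groups the correspondence can be checked by brute force. The sketch below uses integers mod n (with identity 0) as stand-ins for G and G′, a purely illustrative choice: given a subgroup H of G × G′ with surjective projections, it computes N and N′ and verifies that the image of H in G/N × G′/N′ is the graph of a bijection.

```python
def goursat_check(G, Gp, H, op, opp):
    """G, Gp: lists of group elements with identity 0; H: subgroup of G x Gp
    (a set of pairs) whose two projections are surjective; op/opp: the group
    operations. Returns N = ker(p2), N' = ker(p1) and the induced coset
    correspondence, asserting that it is the graph of a bijection."""
    N = {g for (g, gp) in H if gp == 0}    # kernel of the projection onto Gp
    Np = {gp for (g, gp) in H if g == 0}   # kernel of the projection onto G

    def coset(x, K, o):
        return frozenset(o(x, k) for k in K)

    graph = {(coset(g, N, op), coset(gp, Np, opp)) for (g, gp) in H}
    # graph of a bijection: each coset of N pairs with exactly one coset of N'
    assert len({a for a, _ in graph}) == len(graph) == len({b for _, b in graph})
    return N, Np, graph

# Example: G = Z4, G' = Z2 and H = {(g, g mod 2)}, a subdirect product.
G, Gp = list(range(4)), list(range(2))
H = {(g, g % 2) for g in G}
print(goursat_check(G, Gp, H, lambda a, b: (a + b) % 4, lambda a, b: (a + b) % 2))
```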
Proof
Before proceeding with the proof, N and N′ are shown to be normal in G × {e′} and {e} × G′, respectively. It is in this sense that N and N′ can be identified as normal in G and G′, respectively.

Since p2 is a homomorphism, its kernel N is normal in H. Moreover, given g ∈ G, there exists h = (g, g′) ∈ H, since p1 is surjective. Therefore, p1(N) is normal in G, viz:

gp1(N) = p1(h)p1(N) = p1(hN) = p1(Nh) = p1(N)g.

It follows that N is normal in G × {e′} since

(g, e′)(n, e′)(g, e′)^(−1) = (gng^(−1), e′) ∈ N, because gng^(−1) ∈ p1(N).

The proof that N′ is normal in {e} × G′ proceeds in a similar manner.
Given the identification of G with G × {e′}, we can write G/N and gN instead of (G × {e′})/N and (g, e′)N, for g ∈ G. Similarly, we can write G′/N′ and g′N′, for g′ ∈ G′.

On to the proof. Consider the map H → G/N × G′/N′ defined by (g, g′) ↦ (gN, g′N′). The image of H under this map is {(gN, g′N′) : (g, g′) ∈ H}. Since the projection of H onto G/N is surjective, this relation is the graph of a well-defined function G/N → G′/N′ provided that g1N = g2N implies g′1N′ = g′2N′ for every (g1, g′1), (g2, g′2) ∈ H — essentially an application of the vertical line test.

Suppose g1N = g2N (more properly, (g1, e′)N = (g2, e′)N). Then (g2^(−1)g1, e′) ∈ N ⊆ H. Thus (e, g′2^(−1)g′1) = (g2, g′2)^(−1)(g1, g′1)(g2^(−1)g1, e′)^(−1) ∈ H, whence (e, g′2^(−1)g′1) ∈ N′, that is, g′1N′ = g′2N′.

Furthermore, for every (g1, g′1), (g2, g′2) ∈ H we have (g1g2, g′1g′2) ∈ H. It follows that this function is a group homomorphism.

By symmetry, {(g′N′, gN) : (g, g′) ∈ H} is the graph of a well-defined homomorphism G′/N′ → G/N. These two homomorphisms are clearly inverse to each other, and thus are indeed isomorphisms.
Goursat varieties
As a consequence of Goursat's theorem, one can derive a very general version of the Jordan–Hölder–Schreier theorem in Goursat varieties.
References
Édouard Goursat, "Sur les substitutions orthogonales et les divisions régulières de l'espace", Annales Scientifiques de l'École Normale Supérieure (1889), Volume: 6, pages 9–102
Kenneth A. Ribet (Autumn 1976), "Galois Action on Division Points of Abelian Varieties with Real Multiplications", American Journal of Mathematics'', Vol. 98, No. 3, 751–804.
A. Carboni, G.M. Kelly and M.C. Pedicchio (1993), Some remarks on Mal'tsev and Goursat categories, Applied Categorical Structures, Vol. 4, 385–421.
Lemmas in group theory
Articles containing proofs | Goursat's lemma | [
"Mathematics"
] | 901 | [
"Articles containing proofs"
] |
5,114,927 | https://en.wikipedia.org/wiki/Paser | A PASER (an acronym from Particle Acceleration by Stimulated Emission of Radiation) is a device that accelerates a coherent beam of electrons. This process was demonstrated for the first time in 2006 at the Brookhaven National Lab by a team of physicists from the Technion-Israel Institute of Technology.
Relativistic electrons from a conventional particle accelerator pass through a vibrationally excited carbon dioxide medium in which the electrons undergo millions of collisions with excited carbon dioxide molecules and are accelerated in a coherent fashion. No heat is generated in this quantum energy transfer, thus all the energy transferred to the electrons is used in accelerating the electrons. The electron beam created from this process may result in electrons that are highly collimated in velocity in comparison to other acceleration methods.
The vibrationally excited carbon dioxide is the same medium used in a carbon dioxide laser. This medium resonantly amplifies light with a wavelength near 10.6 or 9.4 micrometers, corresponding to a frequency of approximately 30 terahertz. In order to be accelerated, incident electrons must be microbunched at this frequency. An appropriately bunched electron beam strikes excited carbon dioxide molecules resonantly in order to efficiently stimulate energy emission.
See also
Laser
References
Electron
Particle accelerators | Paser | [
"Chemistry"
] | 253 | [
"Electron",
"Molecular physics"
] |
6,719,257 | https://en.wikipedia.org/wiki/Introduction%20to%20the%20mathematics%20of%20general%20relativity | The mathematics of general relativity is complicated. In Newton's theories of motion, an object's length and the rate at which time passes remain constant while the object accelerates, meaning that many problems in Newtonian mechanics may be solved by algebra alone. In relativity, however, an object's length and the rate at which time passes both change appreciably as the object's speed approaches the speed of light, meaning that more variables and more complicated mathematics are required to calculate the object's motion. As a result, relativity requires the use of concepts such as vectors, tensors, pseudotensors and curvilinear coordinates.
For an introduction based on the example of particles following circular orbits about a large mass, nonrelativistic and relativistic treatments are given in, respectively, Newtonian motivations for general relativity and Theoretical motivation for general relativity.
Vectors and tensors
Vectors
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric vector or spatial vector, or – as here – simply a vector) is a geometric object that has both a magnitude (or length) and direction. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "one who carries". The magnitude of the vector is the distance between the two points and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity.
Tensors
A tensor extends the concept of a vector to additional directions. A scalar, that is, a simple number without a direction, would be shown on a graph as a point, a zero-dimensional object. A vector, which has a magnitude and direction, would appear on a graph as a line, which is a one-dimensional object. A vector is a first-order tensor, since it holds one direction.
A second-order tensor has two magnitudes and two directions, and would appear on a graph as two lines similar to the hands of a clock. The "order" of a tensor is the number of directions contained within, which is separate from the dimensions of the individual directions. A second-order tensor in two dimensions might be represented mathematically by a 2-by-2 matrix, and in three dimensions by a 3-by-3 matrix, but in both cases the matrix is "square" for a second-order tensor. A third-order tensor has three magnitudes and directions, and would be represented by a cube of numbers, 3-by-3-by-3 for directions in three dimensions, and so on.
Applications
Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has both a magnitude and direction, such as velocity, the magnitude of which is speed. For example, the velocity 5 meters per second upward could be represented by the vector (0, 5) (in 2 dimensions with the positive y axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction. Vectors also describe many other physical quantities, such as displacement, acceleration, momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field.
Tensors also have extensive applications in physics:
Electromagnetic tensor (or Faraday's tensor) in electromagnetism
Finite deformation tensors for describing deformations and strain tensor for strain in continuum mechanics
Permittivity and electric susceptibility are tensors in anisotropic media
Stress–energy tensor in general relativity, used to represent momentum fluxes
Spherical tensor operators are the eigenfunctions of the quantum angular momentum operator in spherical coordinates
Diffusion tensors, the basis of diffusion tensor imaging, represent rates of diffusion in biologic environments
Dimensions
In general relativity, four-dimensional vectors, or four-vectors, are required. These four dimensions are length, height, width and time. A "point" in this context would be an event, as it has both a location and a time. Similar to vectors, tensors in relativity require four dimensions. One example is the Riemann curvature tensor.
Coordinate transformation
In physics, as well as mathematics, a vector is often identified with a tuple, or list of numbers, which depend on a coordinate system or reference frame. If the coordinates are transformed, such as by rotation or stretching the coordinate system, the components of the vector also transform. The vector itself does not change, but the reference frame does. This means that the components of the vector have to change to compensate.
The vector is called covariant or contravariant depending on how the transformation of the vector's components is related to the transformation of coordinates.
Contravariant vectors have units of distance (such as a displacement) or distance times some other unit (such as velocity or acceleration) and transform in the opposite way as the coordinate system. For example, in changing units from meters to millimeters the coordinate units get smaller, but the numbers in a vector become larger: 1 m becomes 1000 mm.
Covariant vectors, on the other hand, have units of one-over-distance (as in a gradient) and transform in the same way as the coordinate system. For example, in changing from meters to millimeters, the coordinate units become smaller and the number measuring a gradient will also become smaller: 1 Kelvin per m becomes 0.001 Kelvin per mm.
In Einstein notation, contravariant vectors and components of tensors are shown with superscripts, e.g. x^i, and covariant vectors and components of tensors with subscripts, e.g. x_i. Indices are "raised" or "lowered" by multiplication by an appropriate matrix, often the identity matrix.
Coordinate transformation is important because relativity states that there is not one reference point (or perspective) in the universe that is more favored than another. On earth, we use dimensions like north, east, and elevation, which are used throughout the entire planet. There is no such system for space. Without a clear reference grid, it becomes more accurate to describe the four dimensions as towards/away, left/right, up/down and past/future. As an example event, assume that Earth is a motionless object, and consider the signing of the Declaration of Independence. To a modern observer on Mount Rainier looking east, the event is ahead, to the right, below, and in the past. However, to an observer in medieval England looking north, the event is behind, to the left, neither up nor down, and in the future. The event itself has not changed: the location of the observer has.
Oblique axes
An oblique coordinate system is one in which the axes are not necessarily orthogonal to each other; that is, they meet at angles other than right angles. When using coordinate transformations as described above, the new coordinate system will often appear to have oblique axes compared to the old system.
Nontensors
A nontensor is a tensor-like quantity that behaves like a tensor in the raising and lowering of indices, but that does not transform like a tensor under a coordinate transformation. For example, Christoffel symbols cannot be tensors themselves if the coordinates do not change in a linear way.
In general relativity, one cannot describe the energy and momentum of the gravitational field by an energy–momentum tensor. Instead, one introduces objects that behave as tensors only with respect to restricted coordinate transformations. Strictly speaking, such objects are not tensors at all. A famous example of such a pseudotensor is the Landau–Lifshitz pseudotensor.
Curvilinear coordinates and curved spacetime
Curvilinear coordinates are coordinates in which the angles between axes can change from point to point. This means that rather than having a grid of straight lines, the grid instead has curvature.
A good example of this is the surface of the Earth. While maps frequently portray north, south, east and west as a simple square grid, that is not in fact the case. Instead, the longitude lines running north and south are curved and meet at the north pole. This is because the Earth is not flat, but instead round.
In general relativity, energy and mass have curvature effects on the four dimensions of the universe (= spacetime). This curvature gives rise to the gravitational force. A common analogy is placing a heavy object on a stretched out rubber sheet, causing the sheet to bend downward. This curves the coordinate system around the object, much like an object in the universe curves the coordinate system it sits in. The mathematics here are conceptually more complex than on Earth, as it results in four dimensions of curved coordinates instead of three as used to describe a curved 2D surface.
Parallel transport
The interval in a high-dimensional space
In a Euclidean space, the separation between two points is measured by the distance between the two points. The distance is purely spatial, and is always positive. In spacetime, the separation between two events is measured by the invariant interval between the two events, which takes into account not only the spatial separation between the events, but also their separation in time. The interval, s², between two events is defined as:

s² = Δr² − c²Δt²   (spacetime interval),

where c is the speed of light, and Δr and Δt denote differences of the space and time coordinates, respectively, between the events. The choice of signs for s² above follows the space-like convention (−+++). A notation like Δr² means (Δr)². The reason s² and not s is called the interval is that s² can be positive, zero or negative.
Spacetime intervals may be classified into three distinct types, based on whether the temporal separation (c²Δt²) or the spatial separation (Δr²) of the two events is greater: time-like, light-like or space-like.
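A short numerical sketch of the interval and its classification follows the convention above; the helper names are illustrative, and SI units with the standard value of c are assumed.

```python
def interval_squared(dt, dx, dy, dz, c=299_792_458.0):
    """s^2 = (spatial separation)^2 - c^2 (time separation)^2, space-like (-+++) convention."""
    return dx**2 + dy**2 + dz**2 - (c * dt)**2

def classify(s2, tol=1e-9):
    if s2 < -tol:
        return "time-like"   # temporal separation dominates
    if s2 > tol:
        return "space-like"  # spatial separation dominates
    return "light-like"

# Two events 1 second and 100 km apart are time-like separated:
print(classify(interval_squared(1.0, 100_000.0, 0.0, 0.0)))
```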
Certain types of world lines are called geodesics of the spacetime – straight lines in the case of flat Minkowski spacetime and their closest equivalent in the curved spacetime of general relativity. In the case of purely time-like paths, geodesics are (locally) the paths of greatest separation (spacetime interval) as measured along the path between two events, whereas in Euclidean space and Riemannian manifolds, geodesics are paths of shortest distance between two points. The concept of geodesics becomes central in general relativity, since geodesic motion may be thought of as "pure motion" (inertial motion) in spacetime, that is, free from any external influences.
The covariant derivative
The covariant derivative is a generalization of the directional derivative from vector calculus. As with the directional derivative, the covariant derivative is a rule, which takes as its inputs: (1) a vector, u, (along which the derivative is taken) defined at a point P, and (2) a vector field, v, defined in a neighborhood of P. The output is a vector, also at the point P. The primary difference from the usual directional derivative is that the covariant derivative must, in a certain precise sense, be independent of the manner in which it is expressed in a coordinate system.
Parallel transport
Given the covariant derivative, one can define the parallel transport of a vector v at a point P along a curve γ starting at P. For each point x of γ, the parallel transport of v at x will be a function of x, and can be written as v(x), where v(P) = v. The function v(x) is determined by the requirement that the covariant derivative of v(x) along γ is 0. This is similar to the fact that a constant function is one whose derivative is constantly 0.
Christoffel symbols
The equation for the covariant derivative can be written in terms of Christoffel symbols. The Christoffel symbols find frequent use in Einstein's theory of general relativity, where spacetime is represented by a curved 4-dimensional Lorentz manifold with a Levi-Civita connection. The Einstein field equations – which determine the geometry of spacetime in the presence of matter – contain the Ricci tensor. Since the Ricci tensor is derived from the Riemann curvature tensor, which can be written in terms of Christoffel symbols, a calculation of the Christoffel symbols is essential. Once the geometry is determined, the paths of particles and light beams are calculated by solving the geodesic equations in which the Christoffel symbols explicitly appear.
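The calculation of Christoffel symbols from a metric can be sketched symbolically using the standard formula Γ^a_bc = ½ g^ad (∂_b g_dc + ∂_c g_db − ∂_d g_bc). The example below applies it, purely as an illustration, to the metric of a unit 2-sphere; the helper name is an arbitrary choice.

```python
import sympy as sp

def christoffel(g, coords):
    """Return Gamma[a][b][c] for the metric matrix g in the given coordinates."""
    g_inv = g.inv()
    n = len(coords)
    return [[[sp.simplify(
        sp.Rational(1, 2) * sum(
            g_inv[a, d] * (sp.diff(g[d, c], coords[b])
                           + sp.diff(g[d, b], coords[c])
                           - sp.diff(g[b, c], coords[d]))
            for d in range(n)))
        for c in range(n)] for b in range(n)] for a in range(n)]

theta, phi = sp.symbols('theta phi')
g_sphere = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])  # unit 2-sphere metric
Gamma = christoffel(g_sphere, [theta, phi])
print(Gamma[0][1][1])  # expected: -sin(theta)*cos(theta)
```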
Geodesics
In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational force, is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic.
In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting around a star is the projection of a geodesic of the curved 4-dimensional spacetime geometry around the star onto 3-dimensional space.
A curve is a geodesic if the tangent vector of the curve at any point is equal to the parallel transport of the tangent vector of the base point.
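The tangent-vector condition can be checked numerically. The sketch below integrates the geodesic equation d²x^a/dλ² + Γ^a_bc (dx^b/dλ)(dx^c/dλ) = 0 on the illustrative unit 2-sphere, with the Christoffel symbols from the previous sketch hard-coded, and verifies that a path launched along the equator stays on the equator (a great circle); the initial conditions are arbitrary choices for the example.

```python
# Integrate the geodesic equation on a unit 2-sphere and check that a path
# launched along the equator stays on the equator (a great circle).
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(lam, y):
    theta, phi, dtheta, dphi = y
    # Non-zero Christoffel symbols of the round unit sphere:
    #   Gamma^theta_{phi phi} = -sin(theta) cos(theta)
    #   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta)/sin(theta)
    d2theta = np.sin(theta) * np.cos(theta) * dphi**2
    d2phi = -2.0 * (np.cos(theta) / np.sin(theta)) * dtheta * dphi
    return [dtheta, dphi, d2theta, d2phi]

# Start on the equator, moving purely in the phi direction.
y0 = [np.pi / 2, 0.0, 0.0, 1.0]
sol = solve_ivp(geodesic_rhs, (0.0, 2 * np.pi), y0, rtol=1e-9, atol=1e-9)

print("max deviation of theta from pi/2:", np.max(np.abs(sol.y[0] - np.pi / 2)))
print("phi advanced by:", sol.y[1][-1])  # ~2*pi after one full circuit
```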
Curvature tensor
The Riemann curvature tensor tells us, mathematically, how much curvature there is in any given region of space. In flat space this tensor is zero.
Contracting the tensor produces 2 more mathematical objects:
The Ricci tensor: Rab, comes from the need in Einstein's theory for a curvature tensor with only 2 indices. It is obtained by averaging certain portions of the Riemann curvature tensor.
The scalar curvature: R, the simplest measure of curvature, assigns a single scalar value to each point in a space. It is obtained by averaging the Ricci tensor.
The Riemann curvature tensor can be expressed in terms of the covariant derivative.
The Einstein tensor is a rank-2 tensor defined over pseudo-Riemannian manifolds. In index-free notation it is defined as
G = R − ½gR,
where R is the Ricci tensor, g is the metric tensor and R is the scalar curvature. It is used in the Einstein field equations.
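As a worked example of contracting the Riemann curvature tensor, the sketch below repeats the 2-sphere setup from the earlier Christoffel example, builds R^a_bcd from the Christoffel symbols, contracts it to the Ricci tensor and then to the scalar curvature 2/R²; it is an illustrative sympy calculation, not a computation from the article.

```python
# Contract the Riemann tensor of a 2-sphere (radius R) down to the Ricci tensor
# and the scalar curvature, which should come out as 2/R^2.
import sympy as sp

theta, phi, R = sp.symbols("theta phi R", positive=True)
x = [theta, phi]
g = sp.Matrix([[R**2, 0], [0, R**2 * sp.sin(theta)**2]])
ginv = g.inv()
n = 2

# Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(sp.Rational(1, 2) * ginv[a, d] *
                           (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c]) - sp.diff(g[b, c], x[d]))
                           for d in range(n)))
           for c in range(n)] for b in range(n)] for a in range(n)]

def riemann(a, b, c, d):
    """R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb} + Gamma^a_{ce}Gamma^e_{db} - Gamma^a_{de}Gamma^e_{cb}."""
    expr = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][d][b] - Gamma[a][d][e] * Gamma[e][c][b] for e in range(n))
    return sp.simplify(expr)

# Ricci tensor R_{bd} = R^a_{bad}, scalar curvature = g^{bd} R_{bd}
ricci = sp.Matrix(n, n, lambda b, d: sp.simplify(sum(riemann(a, b, a, d) for a in range(n))))
scalar = sp.simplify(sum(ginv[b, d] * ricci[b, d] for b in range(n) for d in range(n)))

print("Ricci tensor:", ricci)        # diag(1, sin(theta)**2)
print("Scalar curvature:", scalar)   # 2/R**2
```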
Stress–energy tensor
The stress–energy tensor (sometimes stress–energy–momentum tensor or energy–momentum tensor) is a tensor quantity in physics that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. The stress–energy tensor is the source of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity. Because this tensor has 2 indices (see next section) the Riemann curvature tensor has to be contracted into the Ricci tensor, also with 2 indices.
Einstein equation
The Einstein field equations (EFE) or Einstein's equations are a set of 10 equations in Albert Einstein's general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy. First published by Einstein in 1915 as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).
The Einstein field equations can be written as
Gab = κTab,
where Gab is the Einstein tensor, Tab is the stress–energy tensor, and κ = 8πG/c⁴ is a constant involving Newton's gravitational constant G and the speed of light c.
This implies that the curvature of space (represented by the Einstein tensor) is directly connected to the presence of matter and energy (represented by the stress–energy tensor).
Schwarzschild solution and black holes
In Einstein's theory of general relativity, the Schwarzschild metric (also Schwarzschild vacuum or Schwarzschild solution), is a solution to the Einstein field equations which describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, the angular momentum of the mass, and the universal cosmological constant are all zero. The solution is a useful approximation for describing slowly rotating astronomical objects such as many stars and planets, including Earth and the Sun. The solution is named after Karl Schwarzschild, who first published the solution in 1916, just before his death.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric, vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has no charge or angular momentum. A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.
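For a sense of scale (an illustrative aside, not from the article), the Schwarzschild radius r_s = 2GM/c², the radius of the event horizon of a Schwarzschild black hole of mass M, can be computed directly; the masses below are standard reference values for the Sun and Earth.

```python
# Schwarzschild radius r_s = 2 G M / c^2 for a few familiar masses.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2.0 * G * mass_kg / C**2

for name, mass in [("Sun", 1.989e30), ("Earth", 5.972e24)]:
    print(f"{name}: r_s = {schwarzschild_radius(mass):.3e} m")
# Sun  -> about 2.95 km; Earth -> about 8.9 mm
```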
See also
Differentiable manifold
Christoffel symbol
Riemannian geometry
Ricci calculus
Differential geometry and topology
List of differential geometry topics
General relativity
Gauge gravitation theory
General covariant transformations
Derivations of the Lorentz transformations
Notes
References
| Introduction to the mathematics of general relativity | [
"Physics"
] | 3,491 | [
"General relativity",
"Theory of relativity"
] |
6,721,300 | https://en.wikipedia.org/wiki/List%20of%20alpha%20emitting%20materials | The following are among the principal radioactive materials known to emit alpha particles.
209Bi, 211Bi, 212Bi, 213Bi
210Po, 211Po, 212Po, 214Po, 215Po, 216Po, 218Po
215At, 217At, 218At
218Rn, 219Rn, 220Rn, 222Rn, 226Rn
221Fr
223Ra, 224Ra, 226Ra
225Ac, 227Ac
227Th, 228Th, 229Th, 230Th, 232Th
231Pa
233U, 234U, 235U, 236U, 238U
237Np
238Pu, 239Pu, 240Pu, 244Pu
241Am
244Cm, 245Cm, 248Cm
249Cf, 252Cf
Alpha emitting
Alpha emitting | List of alpha emitting materials | [
"Physics",
"Chemistry"
] | 151 | [
"Transport phenomena",
"Physical phenomena",
"Waves",
"Radiation",
"Nuclear physics",
"Radioactivity"
] |
35,582,176 | https://en.wikipedia.org/wiki/Deltorphin%20I | Deltorphin I, also known as [D-Ala2]deltorphin I or deltorphin C, is a naturally occurring, exogenous opioid heptapeptide and hence, exorphin, with the amino acid sequence Tyr-D-Ala-Phe-Asp-Val-Val-Gly-NH2. While not known to be endogenous to humans or other mammals, deltorphin I, along with the other deltorphins and the dermorphins, is produced naturally in the skin of species of Phyllomedusa, a genus of frogs native to South and Central America. Deltorphin possesses very high affinity and selectivity as an agonist for the δ-opioid receptor, and on account of its unusually high blood-brain-barrier penetration rate, produces centrally-mediated analgesic effects in animals even when administered peripherally.
See also
Deltorphin
Dermorphin
References
Opioids
Peptides | Deltorphin I | [
"Chemistry"
] | 214 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
35,590,090 | https://en.wikipedia.org/wiki/Compositional%20domain | A compositional domain in genetics is a region of DNA with a distinct guanine (G) and cytosine (C) content (G-C and C-G base pairs, collectively the GC content). The homogeneity of compositional domains is compared to that of the chromosome on which they reside. As such, compositional domains can be homogeneous or nonhomogeneous domains. Compositionally homogeneous domains that are sufficiently long (≥ 300 kb) are termed isochores or isochoric domains.
The compositional domain model was proposed as an alternative to the isochoric model. The isochore model was proposed by Bernardi and colleagues to explain the observed non-uniformity of genomic fragments in the genome. However, recent sequencing of complete genomic data refuted the isochoric model. Its main predictions were:
GC content of the third codon position (GC3) of protein coding genes is correlated with the GC content of the isochores embedding the corresponding genes. This prediction was found to be incorrect. GC3 could not predict the GC content of nearby sequences.
The genome organization of warm-blooded vertebrates is a mosaic of isochores. This prediction was rejected by many studies that used the complete human genome data.
The genome organization of cold-blooded vertebrates is characterized by low GC content levels and lower compositional heterogeneity. This prediction was disproved by finding high and low GC content domains in fish genomes.
The compositional domain model describes the genome as a mosaic of short and long homogeneous and nonhomogeneous domains. The composition and organization of the domains were shaped by different evolutionary processes that either fused or broke down the domains. This genomic organization model was confirmed in many new genomic studies of cow, honeybee, sea urchin, body louse, Nasonia, beetle, and ant genomes. The human genome was described as consisting of a mixture of compositionally nonhomogeneous domains with numerous short compositionally homogeneous domains and relatively few long ones.
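Because compositional domains are identified from the GC content of successive stretches of sequence, a windowed GC calculation is the basic ingredient of segmentation tools such as IsoPlotter. The sketch below is a minimal illustration with an invented sequence and an arbitrary window size, not the algorithm used by any particular program.

```python
# Windowed GC content of a DNA sequence; window size and sequence are illustrative.
def gc_content(seq):
    """Fraction of G and C bases in a sequence (empty sequence returns 0)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

def windowed_gc(seq, window=1000, step=None):
    """Yield (start position, GC fraction) for consecutive windows along the sequence."""
    step = step or window
    for start in range(0, len(seq) - window + 1, step):
        yield start, gc_content(seq[start:start + window])

if __name__ == "__main__":
    import random
    random.seed(0)
    # Toy genome: a GC-poor stretch followed by a GC-rich stretch.
    poor = "".join(random.choices("ACGT", weights=[35, 15, 15, 35], k=5000))
    rich = "".join(random.choices("ACGT", weights=[15, 35, 35, 15], k=5000))
    for start, gc in windowed_gc(poor + rich, window=1000):
        print(f"{start:>6} {gc:.2f}")
```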
References
External links
IsoPlotter — a free, open source program to calculate and visualize isochores in a given genome
DNA
Molecular biology
Biological classification | Compositional domain | [
"Chemistry",
"Biology"
] | 456 | [
"Biochemistry",
"nan",
"Molecular biology"
] |
32,908,006 | https://en.wikipedia.org/wiki/Microbial%20electrosynthesis | Microbial electrosynthesis (MES) is a form of microbial electrocatalysis in which electrons are supplied to living microorganisms via a cathode in an electrochemical cell by applying an electric current. The electrons are then used by the microorganisms to reduce carbon dioxide to yield industrially relevant products. The electric current would ideally be produced by a renewable source of power. This process is the opposite to that employed in a microbial fuel cell, in which microorganisms transfer electrons from the oxidation of compounds to an anode to generate an electric current.
Comparison to microbial electrolysis cells
Microbial electrosynthesis (MES) is related to microbial electrolysis cells (MEC). Both use the interactions of microorganisms with a cathode to reduce chemical compounds. In MECs, an electrical power source is used to augment the electrical potential produced by the microorganisms consuming a source of chemical energy such as acetic acid. The combined potential provided by the power source and the microorganisms is then sufficient to reduce hydrogen ions to molecular hydrogen. The mechanism of MES is not well understood, but the potential products include alcohols and organic acids. MES can be combined with MEC in a single reaction vessel, where substrate consumed by the microorganisms provides a voltage potential that is lowered as the microbe ages. "MES has gained increasing attention as it promises to use renewable (electric) energy and biogenic feedstock for a bio-based economy."
Applications
Microbial electrosynthesis may be used to produce fuel from carbon dioxide using electrical energy generated by either traditional power stations or renewable electricity generation. It may also be used to produce speciality chemicals such as drug precursors through microbially assisted electrocatalysis.
Microbial electrosynthesis can also be used to "power" plants. Plants can then be grown without sunlight.
See also
Electrofuels
Electrohydrogenesis
Electromethanogenesis
Glossary of fuel cell terms
Microbial fuel cell
References
Biotechnology
Electric power
Fuel cells | Microbial electrosynthesis | [
"Physics",
"Engineering",
"Biology"
] | 428 | [
"Physical quantities",
"Biotechnology",
"Power (physics)",
"Electric power",
"nan",
"Electrical engineering"
] |
28,833,800 | https://en.wikipedia.org/wiki/Slurry%20transport | Slurry transport uses several methods: hydraulic conveying; conventional lean slurry conveying; and high concentration slurry disposal (HCSD). The latter, HCSD, is a relatively modern approach, which is used to transfer high throughputs of fine fly ash over long distances (>) using high pressure diaphragm pumps with velocities of around 2 m/s. Ash disposal is simple as the ash solidifies easily and the system does not produce the waste water or leachate problems which can often be associated with ash lagoons.
Examples
Typical HCSD systems include the Clyde Bergemann solution designed to reduce water usage (up to 90% by weight), reduce ground and surface water pollution, reduce dust emission surrounding landfill site, increase disposal area working capacity and lower energy consumption.
See also
High-density solids pump
References
2: Miedema, S.A., Slurry Transport: Fundamentals, a Historical Overview and The Delft Head Loss & Limit Deposit Velocity Framework. http://www.dredging.org/media/ceda/org/documents/resources/othersonline/miedema-2016-slurry-transport.pdf
External links
CASE STUDY: Modern materials handling solution installed at Eraring power plant, Australia
Waste treatment technology | Slurry transport | [
"Chemistry",
"Engineering"
] | 268 | [
"Water treatment",
"Waste treatment technology",
"Environmental engineering"
] |
28,835,992 | https://en.wikipedia.org/wiki/Single%20domain%20%28magnetic%29 | In magnetism, single domain refers to the state of a ferromagnet (in the broader meaning of the term that includes ferrimagnetism) in which the magnetization does not vary across the magnet. A magnetic particle that stays in a single domain state for all magnetic fields is called a single domain particle (but other definitions are possible; see below). Such particles are very small (generally below a micrometre in diameter). They are also very important in a lot of applications because they have a high coercivity. They are the main source of hardness in hard magnets, the carriers of magnetic storage in tape drives, and the best recorders of the ancient Earth's magnetic field (see paleomagnetism).
History
Early theories of magnetization in ferromagnets assumed that ferromagnets are divided into magnetic domains and that the magnetization changed by the movement of domain walls. However, as early as 1930, Frenkel and Dorfman predicted that sufficiently small particles could only hold one domain, although they greatly overestimated the upper size limit for such particles. The possibility of single domain particles received little attention until two developments in the late 1940s: (1) Improved calculations of the upper size limit by Charles Kittel and Louis Néel, and (2) a calculation of the magnetization curves for systems of single-domain particles by Stoner and Wohlfarth. The Stoner–Wohlfarth model has been enormously influential in subsequent work and is still frequently cited.
Definitions of a single-domain particle
Early investigators pointed out that a single-domain particle could be defined in more than one way. Perhaps most commonly, it is implicitly defined as a particle that is in a single-domain state throughout the hysteresis cycle, including during the transition between two such states. This is the type of particle that is modeled by the Stoner–Wohlfarth model. However, it might be in a single-domain state except during reversal. Often particles are considered single-domain if their saturation remanence is consistent with the single-domain state. More recently it was realized that a particle's state could be single-domain for some range of magnetic fields and then change continuously into a non-uniform state.
Another common definition of single-domain particle is one in which the single-domain state has the lowest energy of all possible states (see below).
Single domain hysteresis
If a particle is in the single-domain state, all of its internal magnetization is pointed in the same direction. It therefore has the largest possible magnetic moment for a particle of that size and composition. The magnitude of this moment is μ = MsV, where V is the volume of the particle and Ms is the saturation magnetization.
The magnetization at any point in a ferromagnet can only change by rotation. If there is more than one magnetic domain, the transition between one domain and its neighbor involves a rotation of the magnetization to form a domain wall. Domain walls move easily within the magnet and have a low coercivity. By contrast, a particle that is single-domain in all magnetic fields changes its state by rotation of all the magnetization as a unit. This results in a much larger coercivity.
The most widely used theory for hysteresis in single-domain particles is the Stoner–Wohlfarth model. This applies to a particle with uniaxial magnetocrystalline anisotropy.
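A minimal numerical illustration of the Stoner–Wohlfarth picture is sketched below: the reduced single-particle energy is followed to its nearest local minimum as the applied field is swept along the easy axis, which reproduces the square hysteresis loop with switching at the reduced field h = ±1. The grid resolution and field range are arbitrary choices for the example.

```python
# Minimal Stoner-Wohlfarth sweep: field applied along the easy axis (psi = 0).
# Reduced energy per particle: e(theta) = sin^2(theta) - 2 h cos(theta),
# with h = mu0 * Ms * H / (2 K).  Grid size and field range are illustrative.
import numpy as np

thetas = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)

def relax(idx, h):
    """Slide downhill on the energy landscape from grid index idx; return the local minimum index."""
    e = np.sin(thetas) ** 2 - 2.0 * h * np.cos(thetas)
    n = len(thetas)
    while True:
        left, right = (idx - 1) % n, (idx + 1) % n
        if e[left] < e[idx]:
            idx = left
        elif e[right] < e[idx]:
            idx = right
        else:
            return idx

idx = 0  # start magnetized along the easy axis (theta = 0)
loop = []
fields = np.concatenate([np.linspace(1.5, -1.5, 61), np.linspace(-1.5, 1.5, 61)])
for h in fields:
    idx = relax(idx, h)
    loop.append((h, np.cos(thetas[idx])))   # m = M/Ms along the field

for h, m in loop[45:56]:   # print points around the switching field h = -1
    print(f"h = {h:+.2f}  m = {m:+.3f}")
```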
Limits on the single-domain size
Experimentally, it is observed that though the magnitude of the magnetization is uniform throughout a homogeneous specimen at uniform temperature, the direction of the magnetization is in general not uniform, but varies from one region to another, on a scale corresponding to visual observations with a microscope. Uniformity of direction is attained only by applying a field, or by choosing as a specimen a body which is itself of microscopic dimensions (a fine particle). The size range for which a ferromagnet becomes single-domain is generally quite narrow, and a first quantitative result in this direction is due to William Fuller Brown, Jr. who, in his fundamental paper, rigorously proved (in the framework of Micromagnetics), though in the special case of a homogeneous sphere of radius R, what nowadays is known as Brown's fundamental theorem of the theory of fine ferromagnetic particles. This theorem states the existence of a critical radius rc such that the state of lowest free energy is one of uniform magnetization if R < rc (i.e. the existence of a critical size under which spherical ferromagnetic particles stay uniformly magnetized in zero applied field). A lower bound for rc can then be computed. In 1988, Amikam A. Aharoni, by using the same mathematical reasoning as Brown, was able to extend the Fundamental Theorem to the case of a prolate spheroid. Recently, Brown's fundamental theorem on fine ferromagnetic particles has been rigorously extended to the case of a general ellipsoid, and an estimate for the critical diameter (under which the ellipsoidal particle becomes single domain) has been given in terms of the demagnetizing factors of the general ellipsoid. Eventually, the same result has been shown to be true for metastable equilibria in small ellipsoidal particles.
Although pure single-domain particles (mathematically) exist for some special geometries only, for most ferromagnets a state of quasi-uniformity of magnetization is achieved when the diameter of the particle is in between about 25 nanometers and 80 nanometers. The size range is bounded below by the transition to superparamagnetism and above by the formation of multiple magnetic domains.
Lower limit: superparamagnetism
Thermal fluctuations cause the magnetization to change in a random manner. In the single-domain state, the moment rarely strays far from the local stable state. Energy barriers (see also activation energy) prevent the magnetization from jumping from one state to another. However, if the energy barrier gets small enough, the moment can jump from state to state frequently enough to make the particle superparamagnetic. The frequency of jumps has a strong exponential dependence on the energy barrier, and the energy barrier is proportional to the volume, so there is a critical volume at which the transition occurs. This volume can be thought of as the volume at which the blocking temperature is at room temperature.
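The argument above is usually written as the Néel–Arrhenius law τ = τ₀ exp(KV/kBT). Solving it for the volume at which the relaxation time equals a laboratory measurement time gives an estimate of the critical (blocking) size; in the sketch below the anisotropy constant, attempt time and measurement time are order-of-magnitude placeholder values, not figures from the article.

```python
# Estimate the superparamagnetic blocking diameter from the Neel-Arrhenius law
#   tau = tau0 * exp(K * V / (kB * T)).
# Setting tau equal to the measurement time and solving for V gives the volume
# at which the particle appears "blocked" on that time scale.
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
tau0 = 1e-9            # attempt time in s (typical order of magnitude, assumed)
K = 1e4                # uniaxial anisotropy constant in J/m^3 (assumed)
T = 300.0              # room temperature, K
tau_meas = 100.0       # laboratory measurement time, s

# Critical volume where the relaxation time equals the measurement time.
V_crit = kB * T * math.log(tau_meas / tau0) / K
d_crit = (6.0 * V_crit / math.pi) ** (1.0 / 3.0)   # diameter of an equivalent sphere

print(f"critical volume   = {V_crit:.3e} m^3")
print(f"critical diameter = {d_crit * 1e9:.1f} nm")
```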
Upper limit: transition to multiple domains
As size of a ferromagnet increases, the single-domain state incurs an increasing energy cost because of the demagnetizing field. This field tends to rotate the magnetization in a way that reduces the total moment of the magnet, and in larger magnets the magnetization is organized in magnetic domains. The demagnetizing energy is balanced by the energy of the exchange interaction, which tends to keep spins aligned. There is a critical size at which the balance tips in favor of the demagnetizing field and the multidomain state is favored. Most calculations of the upper size limit for the single-domain state identify it with this critical size.
Notes
References
Rock magnetism
Ferromagnetism | Single domain (magnetic) | [
"Chemistry",
"Materials_science"
] | 1,483 | [
"Magnetic ordering",
"Ferromagnetism"
] |
21,321,517 | https://en.wikipedia.org/wiki/Quintom%20scenario | The Quintom scenario (derived from the words quintessence and phantom, as in phantom energy) is a hypothetical model of dark energy.
Equation of State
In this scenario, the equation of state of the dark energy, relating its pressure and energy density, can cross the boundary w = −1 associated with the cosmological constant. The boundary separates the phantom-energy-like behavior with w < −1 from the quintessence-like behavior with w > −1. A no-go theorem shows that this behavior requires at least two degrees of freedom for dark energy models involving ideal gases or scalar fields.
The Quintom scenario was applied in 2008 to produce a model of inflationary cosmology with a Big Bounce instead of a Big Bang singularity.
References
External links
Dark Energy Constraints from the Cosmic Age and Supernova by Bo Feng, Xiulian Wang and Xinmin Zhang
Crossing the Phantom Divide by Martin Kunz and Domenico Sapone
Dark energy
fr:Énergie fantôme
it:Energia fantasma | Quintom scenario | [
"Physics",
"Astronomy"
] | 204 | [
"Unsolved problems in astronomy",
"Physical quantities",
"Concepts in astronomy",
"Unsolved problems in physics",
"Energy (physics)",
"Dark energy",
"Wikipedia categories named after physical quantities"
] |
21,323,727 | https://en.wikipedia.org/wiki/X-ray%20filter | An X-ray filter (or compensating filter) is a device placed in front of an X-ray source in order to reduce the intensity of particular wavelengths from its spectrum and selectively alter the distribution of X-ray wavelengths within a given beam before reaching the image receptor. Adding a filtration device to certain x-ray examinations attenuates the x-ray beam by eliminating lower energy x-ray photons to produce a clearer image with greater anatomic detail to better visualize differences in tissue densities. While a compensating filter provides a better radiographic image by removing lower energy photons, it also reduces radiation dose to the patient.
When X-rays hit matter, part of the incoming beam is transmitted through the material and part of it is absorbed by the material. The amount absorbed is dependent on the material's mass absorption coefficient and tends to decrease for incident photons of greater energy. True absorption occurs when X-rays of sufficient energy cause electron energy level transitions in the atoms of the absorbing material. The energy from these X-rays are used to excite the atoms and do not continue past the material (thus being "filtered" out). Because of this, despite the general trend of decreased absorption at higher energy wavelengths, there are periodic spikes in the absorption characteristics of any given material corresponding to each of the atomic energy level transitions. These spikes are called absorption edges. The result is that every material preferentially filters out x-rays corresponding to and slightly above their electron energy levels, while generally allowing X-rays with energies slightly less than these levels to transmit through relatively unscathed.
Therefore, it is possible to selectively fine tune which wavelengths of x-rays are present in a beam by matching materials with particular absorption characteristics to different X-ray source spectra.
Applications
For example, a copper X-ray source may preferentially produce a beam of x-rays with wavelengths 154 and 139 picometres. Nickel has an absorption edge at 149 pm, between the two copper lines. Thus, using nickel as a filter for copper would result in the absorption of the slightly higher energy 139 pm x-rays, while letting the 154 pm rays through without a significant decrease in intensity. Thus, a copper X-ray source with a nickel filter can produce a nearly monochromatic X-ray beam with photons of mostly 154 pm.
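The effect of such a filter can be estimated with the exponential attenuation law I/I₀ = exp(−(μ/ρ)·ρ·t). The sketch below compares the transmission of the copper Kα and Kβ lines through a thin nickel foil; the mass attenuation coefficients, foil thickness and density are rough illustrative values (chosen to reflect that the Ni K-edge lies between the two lines), not data from the article.

```python
# Exponential attenuation I/I0 = exp(-(mu/rho) * rho * t) for a nickel K-beta filter.
# The mass attenuation coefficients below are approximate, for illustration only.
import math

rho_ni = 8.9          # density of nickel, g/cm^3
t = 20e-4             # foil thickness: 20 micrometres expressed in cm

lines = {
    "Cu K-alpha (154 pm)": 48.0,    # mu/rho in cm^2/g, below the Ni K-edge (assumed)
    "Cu K-beta  (139 pm)": 280.0,   # mu/rho in cm^2/g, above the Ni K-edge (assumed)
}

for name, mu_over_rho in lines.items():
    transmission = math.exp(-mu_over_rho * rho_ni * t)
    print(f"{name}: transmission through 20 um Ni = {transmission:.3f}")
# The K-beta line is suppressed far more strongly than K-alpha,
# leaving a nearly monochromatic beam.
```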
For medical purposes, X-ray filters are used to selectively attenuate, or block out, low-energy rays during x-ray imaging (radiography). Low energy x-rays (less than 30 keV) contribute little to the resultant image as they are heavily absorbed by the patient's soft tissues (particularly the skin). Additionally, this absorption adds to the risk of stochastic (e.g. cancer) or non stochastic radiation effects (e.g. tissue reactions) in the patient. Thus, it is favorable to remove these low energy X-rays from the incident light beam. X-ray filtration may be inherent due to the X-ray tube and housing material itself or added from additional sheets of filter material. The minimum filtration used is usually 2.5 mm aluminium (Al) equivalent, although there is an increasing trend to use greater filtration. Manufacturers of modern fluoroscopy equipment utilize a system of adding a variable thickness of copper (Cu) filtration according to patient thickness. This typically ranges from 0.1 to 0.9 mm Cu.
The need for selectively attenuating x-rays in radiography is due to the differences in densities across anatomic regions of the body. Less dense regions or tissues (lungs, sinuses) show up darker or black on x-rays while more dense tissues (bones, calcification) present as white or shades of grey. For instance, the thoracic spine, when imaged for an anterior-posterior (AP or from front to back) projection, lies between both lung fields. The lungs have a very low attenuation value because they are air-filled and show up as dark areas on radiographs, while the thoracic spine is bony with higher attenuation and displays as white or grey. The vast differences in density make it difficult to acquire a high quality, detailed x-ray unless a compensating filter is applied.
X-ray filters are commonly mounted to the collimator (collimator-mounted) of an x-ray machine, where the photon beam exits the x-ray tube. However, there are non-attachable compensating filters called contact filters that are either placed on or behind the patient. Contact filters placed between the patient and the image receptor, where the photons that pass through the patient are recorded to form an image, do not limit radiation dose to the patient.
X-ray filters are also used for X-ray diffraction, in determinations of the interatomic spaces of crystalline solids. These lattice spacings can be determined using Bragg diffraction, but this technique requires scans to be done with approximately monochromatic X-ray beams. Thus, filter set ups like the copper nickel system described above are used to allow only a single X-ray wavelength to penetrate through to a target crystal, allowing the resulting scattering to determine the diffraction distance.
Types of X-Ray Filters
Wedge
Most common filter used in x-ray imaging
Collimator mounted to the x-ray source
Best used for long axis areas where tissue density widely differs
AP projection of the Thoracic Spine
Lateral projection of the Nasal Bones
AP Foot
AP Hip for emaciated patients
Trough
Channel shape or double wedge
Best for body parts where density would be higher in the center of the image and tissues are less dense at the edges
Posterior-Anterior (PA) Chest Projections
Collimator mounted
Ferlic Swimmer's
Collimator mounted
Lateral Cervicothoracic (Swimmer's View)
Axiolateral Hip Projection (Danelius-Miller)
Boomerang
Contact Filter (placed between the anatomy to be imaged and the image receptor)
Radiation dose to the patient is not reduced as it is placed at a point where x-ray photons strike the patient before encountering the filter
Designed for the shoulder but can also be beneficial for lateral facial bones
Scoliosis
Used for full spine imaging
PA projection uses a wedge filter over the cervical and thoracic spines to remove excess photons as a result of a higher dose required for the lumbar spine
Lateral projection engages the use of a double wedge filter from the mid-thoracic region to the cervical spine
Various elemental effects
Suitable for X-ray crystallography:
Zirconium - Absorbs Bremsstrahlung & K-Beta.
Iron - Absorbs the entire spectra.
Molybdenum - Absorbs Bremsstrahlung - Leaving K-Beta & K-Alpha.
Aluminium - 'Pinches' Bremsstrahlung* & Removes 3rd Generation peaks.
Silver - Same as Aluminium, But to greater extent.
Indium - Same as Iron, But to lesser extent.
Copper - Same as Aluminium, Leaving only 1st Generation Peaks.
Suitable for Radiography:
Molybdenum - Used in Mammography
Rhodium - Used in Mammography with Rhodium anodes
Aluminium - Used in general radiography x-ray tubes
Copper - Used in general radiography - especially in paediatric applications.
Silver - Used in Mammography with tungsten anode
Tantalum - Used in fluoroscopy applications with tungsten anodes
Niobium - Used in radiography and dental radiography with tungsten anodes
Erbium - Used in radiography with tungsten anodes
Compensating filters used in general radiography are widely manufactured using aluminum due to its lightweight nature and its ability to effectively attenuate the x-ray beam. Plastics with high densities are a common compensating filter material, with clear leaded plastic (Clear-Pb) now being offered. While aluminum compensating filters attenuate x-ray photons, they also attenuate the light beam emitted through the collimator that allows the x-ray technologist to see exactly where the x-ray beam will strike the patient. Clear-Pb attenuates the x-ray beam but still allows collimator light to shine through the clear plastic, allowing the technologist to better visualize the intended area and still reducing the patient's radiation dose.
Notes:
- Bremsstrahlung pinching is due to the atomic mass. The denser the atom, the higher the X-Ray Absorption. Only the higher energy X-Rays pass through the filter, appearing as if the Bremsstrahlung continuum had been pinched.
- In this case, Mo appears to leave K-Alpha and K-Beta alone while absorbing the Bremsstrahlung. This is due to Mo absorbing all of the spectra's energy, but in doing so produces the same characteristic peaks as generated by the target.
References
Further reading
B.D. Cullity & S.R. Stock, Elements of X-Ray Diffraction, 3rd Ed., Prentice-Hall Inc., 2001, p 167-171, .
CFL imaging diagnostic
See also
X-ray crystallography
X-rays
Bragg diffraction
Filter | X-ray filter | [
"Physics"
] | 1,930 | [
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
21,329,022 | https://en.wikipedia.org/wiki/Dichlorine%20hexoxide | Dichlorine hexoxide is the chemical compound with the molecular formula Cl2O6, which is correct for its gaseous state. However, in liquid or solid form, this chlorine oxide ionizes into the dark red ionic compound chloryl perchlorate, [ClO2]+[ClO4]−, which may be thought of as the mixed anhydride of chloric and perchloric acids. This compound is a notable perchlorating agent.
It is produced by reaction between chlorine dioxide and excess ozone:
2 ClO2 + 2 O3 → 2 ClO3 + 2 O2 → Cl2O6 + 2 O2
Molecular structure
It was originally reported to exist as the monomeric chlorine trioxide ClO3 in gas phase, but was later shown to remain an oxygen-bridged dimer after evaporation and until thermal decomposition into chlorine perchlorate, Cl2O4, and oxygen. The compound ClO3 was then rediscovered.
It is a dark red fuming liquid at room temperature that crystallizes as a red ionic compound, chloryl perchlorate, [ClO2]+[ClO4]−. The red color shows the presence of chloryl ions. Thus, chlorine's formal oxidation state in this compound remains a mixture of chlorine (V) and chlorine (VII) both in the gas phase and when condensed; however, by breaking one oxygen–chlorine bond, some electron density does shift towards the chlorine (VII).
Properties
Cl2O6 is diamagnetic and is a very strong oxidizing agent. Although stable at room temperature, it explodes violently on contact with organic compounds. It is a strong dehydrating agent:
Many reactions involving Cl2O6 reflect its ionic structure, [ClO2]+[ClO4]−, including the following:
NO2F + Cl2O6 → NO2ClO4 + ClO2F
NO + Cl2O6 → NOClO4 + ClO2
2 V2O5 + 12 Cl2O6 → 4 VO(ClO4)3 + 12 ClO2 + 3 O2
SnCl4 + 6 Cl2O6 → [ClO2]2[Sn(ClO4)6] + 4 ClO2 + 2 Cl2
It reacts with gold to produce the chloryl salt [ClO2]+[Au(ClO4)4]−:
2 Au + 6 Cl2O6 → 2 [ClO2][Au(ClO4)4] + Cl2
Several other transition metal perchlorate complexes are prepared using dichlorine hexoxide.
Nevertheless, it can also react as a source of the ClO3 radical:
2 AsF5 + Cl2O6 → 2 ClO3AsF5
References
Chlorine oxides
Acidic oxides
Perchlorates
Chloryl compounds
Chlorine(V) compounds
Chlorine(VII) compounds | Dichlorine hexoxide | [
"Chemistry"
] | 552 | [
"Perchlorates",
"Salts"
] |
21,330,156 | https://en.wikipedia.org/wiki/Graphane | Graphane is a two-dimensional polymer of carbon and hydrogen with the formula unit (CH)n where n is large. Partial hydrogenation results in hydrogenated graphene, which was reported by Elias et al. in 2009 by a TEM study to be "direct evidence for a new graphene-based derivative". The authors viewed the panorama as "a whole range of new two-dimensional crystals with designed electronic and other properties", with band gaps ranging from 0 to 0.8 eV.
Synthesis
Its preparation was reported in 2009. Graphane can be formed by electrolytic hydrogenation of graphene, few-layer graphene or high-oriented pyrolytic graphite. In the last case mechanical exfoliation of hydrogenated top layers can be used.
Structure
The first theoretical description of graphane was reported in 2003. The structure was found, using a cluster expansion method, to be the most stable of all the possible hydrogenation ratios of graphene. In 2007, researchers found that the compound is more stable than other compounds containing carbon and hydrogen, such as benzene, cyclohexane and polyethylene. This group named the predicted compound graphane, because it is the fully saturated version of graphene.
Graphane is effectively made up of cyclohexane units, and, in parallel to cyclohexane, the most stable structural conformation is not planar, but an out-of-plane structure, including the chair and boat conformers, in order to minimize ring strain and allow for the ideal tetrahedral bond angle of 109.5° for sp3-bonded atoms. However, in contrast to cyclohexane, graphane cannot interconvert between these different conformers because not only are they topologically different, but they are also different structural isomers with different configurations. The chair conformer has the hydrogens alternating above or below the plane from carbon to neighboring carbon, while the boat conformer has the hydrogen atoms alternating in pairs above and below the plane. There are also other possible conformational isomers, including the twist-boat and twist-boat-chair. As with cyclohexane, the most stable conformer for graphane is the chair, followed by the twist-boat structure. While the buckling of the chair conformer would imply lattice shrinkage, calculations show the lattice actually expands by approximately 30% due to the opposing effect on the lattice spacing of the longer carbon-carbon (C-C) bonds, as the sp3-bonding of graphane yields longer C-C bonds of 1.52 Å compared to the sp2-bonding of graphene which yields shorter C-C bonds of 1.42 Å. As just established, theoretically if graphane was perfect and everywhere in its stable chair conformer, the lattice would expand; however, the existence of domains where the locally stable twist-boat conformer dominates “contribute to the experimentally observed lattice contraction.” When experimentalists have characterized graphane, they have found a distribution of lattice spacings, corresponding to different domains exhibiting different conformers. Any disorder in hydrogenation conformation tends to contract the lattice constant by about 2.0%.
Graphane is an insulator. Chemical functionalization of graphene with hydrogen may be a suitable method to open a band gap in graphene. P-doped graphane is proposed to be a high-temperature BCS theory superconductor with a Tc above 90 K.
Variants
Partial hydrogenation leads to hydrogenated graphene rather than (fully hydrogenated) graphane. Such compounds are usually named as "graphane-like" structures. Graphane and graphane-like structures can be formed by electrolytic hydrogenation of graphene or few-layer graphene or high-oriented pyrolytic graphite. In the last case mechanical exfoliation of hydrogenated top layers can be used.
Hydrogenation of graphene on substrate affects only one side, preserving hexagonal symmetry. One-sided hydrogenation of graphene is possible due to the existence of ripplings. Because the latter are distributed randomly, the obtained material is disordered in contrast to two-sided graphane. Annealing allows the hydrogen to disperse, reverting to graphene. Simulations revealed the underlying kinetic mechanism.
Potential applications
p-Doped graphane is postulated to be a high-temperature BCS theory superconductor with a Tc above 90 K.
Graphane has been proposed for hydrogen storage. Hydrogenation decreases the dependence of the lattice constant on temperature, which indicates a possible application in precision instruments.
References
External links
Sep 14, 2010 Hydrogen vacancies induce stable ferromagnetism in graphane
May 25, 2010 Graphane yields new potential
May 02 2010 Doped Graphane Should Superconduct at 90K
Two-dimensional nanomaterials
Polymers
Superconductors
Hydrocarbons | Graphane | [
"Chemistry",
"Materials_science"
] | 1,004 | [
"Hydrocarbons",
"Superconductivity",
"Organic compounds",
"Polymer chemistry",
"Superconductors",
"Polymers"
] |
25,694,179 | https://en.wikipedia.org/wiki/Vibratory%20shear-enhanced%20process | Vibratory shear enhanced process (VSEP) is a membrane separation technology platform invented in 1987 and patented in 1989 by Dr. J. Brad Culkin. VSEP's vibration system was designed to prevent membrane fouling, or the build-up of solid particles on the surface of the membrane. VSEP systems have been applied in a variety of industrial environments.
History and technology development
After earning his PhD in chemical engineering from Northwestern University Dr. Culkin spent his early professional career with Dorr–Oliver, Inc., a pioneering company in the area of separation processes. Culkin contributed to six Dorr–Oliver patent applications in 1985 and 1986.
While at Dorr–Oliver, Dr. Culkin was exposed to the advantages of membrane separation technology as well as its failings. The membrane's Achilles' heel, Culkin decided, was fouling.
Concurrent with his membrane work, Culkin was helping to develop a mechanically resonating loudspeaker with the founders of Velodyne Acoustics. Culkin married these two areas of expertise and struck out to overcome membrane fouling through the use of vibration.
The first VSEP prototype Culkin developed was a literal combination of loudspeaker and membrane technology as the photo shows below.
Principle of operation
A VSEP filter uses oscillatory vibration to create high shear at the surface of the filter membrane. This high shear force significantly improves the filter's resistance to fouling thereby enabling high throughputs and minimizing reject volumes. VSEP feed stream are split into two products—a permeate stream with little or no solids and a concentrate stream with a solids concentration much higher than that of the original feed stream.
Industrial applications
VSEP has been applied in a variety of industrial application areas including pulp and paper, chemical processing, landfill leachate, oil and gas, RO Reject and a variety of industrial wastewaters.
Awards
A VSEP system was recognized in 2009 as part of the WateReuse Foundation's Desalination Project of the Year. The system was installed to minimize the brine from an electrodialysis reversal (EDR) system.
References
External links
vsep.com
Filtration
Membrane technology | Vibratory shear-enhanced process | [
"Chemistry"
] | 455 | [
"Membrane technology",
"Filtration",
"Separation processes"
] |
25,694,349 | https://en.wikipedia.org/wiki/Metastate | In statistical mechanics, the metastate is a probability measure on the space of all thermodynamic states for a system with quenched randomness. The term metastate, in this context, was first used by Charles M. Newman and Daniel L. Stein in 1996.
Two different versions have been proposed:
1) The Aizenman-Wehr construction, a canonical ensemble approach, constructs the metastate through an ensemble of states obtained by varying the random parameters in the Hamiltonian outside of the volume being considered.
2) The Newman-Stein metastate, a microcanonical ensemble approach, constructs an empirical average from a deterministic (i.e., chosen independently of the randomness) subsequence of finite-volume Gibbs distributions.
It was proved for Euclidean lattices that there always exists a deterministic subsequence along which the Newman-Stein and Aizenman-Wehr constructions result in the same metastate. The metastate is especially useful in systems where deterministic sequences of volumes fail to converge to a thermodynamic state, and/or there are many competing observable thermodynamic states.
As an alternative usage, "metastate" can refer to thermodynamic states, where the system is in a metastable state (for example superheated or undercooled liquids, when the actual temperature of the liquid is above or below the boiling or freezing temperature, but the material is still in a liquid state).
References
Statistical mechanics
Condensed matter physics | Metastate | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 327 | [
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
25,696,548 | https://en.wikipedia.org/wiki/International%20Space%20Education%20Institute | The International Space Education Institute was established in Leipzig, Germany by Ralf Heckel and 6 more members of the society including Prof. Dr. Jesco von Puttkamer as councilor in October 2005 to support high school and college students in international space competitions (Amtsgericht Leipzig, VR 4401, non-profit, benefit accepted by Finanzamt Leipzig).
They bring professional astronauts, including Chen Dong, to their programs for students to learn from.
References
External links
international Website
German Website
Artemis 1 diary, raumfahrt-concret.de
supportive companies (internal)
Space Hotel Leipzig
Space organizations
Organisations based in Germany | International Space Education Institute | [
"Astronomy"
] | 133 | [
"Outer space",
"Astronomy stubs",
"Astronomy organizations",
"Space organizations",
"Outer space stubs"
] |
25,697,878 | https://en.wikipedia.org/wiki/Airborne%20Launch%20Control%20System | The Airborne Launch Control System (ALCS) provides a survivable launch capability for the United States Air Force's LGM-30G Minuteman III intercontinental ballistic missile (ICBM) force. The ALCS is operated by airborne missileers from Air Force Global Strike Command's (AFGSC) 625th Strategic Operations Squadron (STOS) and United States Strategic Command (USSTRATCOM). The system is located on board the United States Navy's E-6B Mercury, which serves as USSTRATCOM's "Looking Glass" Airborne Command Post (ABNCP). The ALCS crew is integrated into the ABNCP battle staff and is on alert around the clock.
Overview
In the mid-1960s, United States civilian and military leadership became concerned about the possibility of a decapitating attack from the Soviets, destroying any land-based communication links to the nuclear forces of the Strategic Air Command. One solution to the communication problem was placing radio equipment on board an aircraft, and allow it to fly over the United States and use radio broadcasts to pass along information. This concept would allow communication to missile launch crews to pass along Emergency Action Messages (EAMs), but would not duplicate the missile combat crew's function of actually launching the missiles. The key characteristic added to ALCS (versus other communication methods such as ERCS) was giving the airborne crews the same degree of access to the launch facilities as the underground missile crews.
Minuteman launch facilities contained an ultra high frequency (UHF) receiver that would pick up commands from the ALCS; the destruction of the launch control center or the hardened intersite cable system would not prevent retaliation.
History
ALCS' first generation equipment was declared operational on 31 May 1967.
Operational information
ALCS-configured aircraft
The ALCS mission has been held by multiple aircraft during the last 50 years:
EC-135 – performed Looking Glass and ALCC mission for the Strategic Air Command (1967–1998)
EC-135A (ALCC)
EC-135C (ABNCP and ALCC)
EC-135G (ALCC and ABNCP)
EC-135L PACCS Radio Relay
E-4B Advanced Airborne Command Post – Aircraft tail number 75-0125 performed Looking Glass on a trial basis from 1980 to 1981 to assess possibility of replacing EC-135 fleet. Deemed too expensive and ALCS was subsequently removed from the E-4B.
E-6B Mercury – performs Looking Glass, ALCC, and TACAMO mission for United States Strategic Command (1998–Present)
E-6B
ICBMs remotely controlled
LGM-30A/B Minuteman I (1967–1975)
LGM-30F Minuteman II (1967–1992)
LGM-30G Minuteman III (1971–present)
LGM-118A Peacekeeper (1987–2005)
Units
Units with ALCS crewmembers assigned
68th Strategic Missile Squadron (Ellsworth AFB, SD: 1967-1970)
91st Strategic Missile Wing (Minot AFB, ND: 1967-1969)
4th Airborne Command and Control Squadron (Ellsworth AFB, SD: 1970-1992)
2nd Airborne Command and Control Squadron (Offutt AFB, NE: 1970-1994)
7th Airborne Command and Control Squadron (Offutt AFB, NE: 1994-1998)
625th Missile Operations Flight/USSTRATCOM (Offutt AFB, NE: 1998-2007)
625th Strategic Operations Squadron/USSTRATCOM (Offutt AFB, NE: 2007–Present)
Units with ALCS-equipped aircraft
28th Air Refueling Squadron (Ellsworth AFB, SD: 1967-1970)
EC-135A, EC-135G
906th Air Refueling Squadron (Minot AFB, ND: 1967-1969
EC-135A, EC-135L
38th Strategic Reconnaissance Squadron (Offutt AFB, NE: 1967-1970)
EC-135C
4th Airborne Command and Control Squadron (Ellsworth AFB, SD: 1970-1992)
EC-135A, EC-135C, EC-135G, EC-135L
2nd Airborne Command and Control Squadron (Offutt AFB, NE: 1970-1994)
EC-135C
7th Airborne Command and Control Squadron (Offutt AFB, NE: 1994-1998)
EC-135C
STRATCOMWING ONE (Tinker AFB, OK: 1998–Present)
Fleet Air Reconnaissance Squadron 3 (VQ-3)
E-6B Mercury
Fleet Air Reconnaissance Squadron 4 (VQ-4)
E-6B Mercury
ALCS personnel
The Airborne Launch Control System Flight of the 625th Strategic Operations Squadron provides training and crewmembers for two ALCS positions on board the E-6B Mercury.
ALCS-assisted launches
A test of the ALCS, both ground and air components, is called a GIANT BALL.
This list does not contain any launches after the initial Test and Evaluation phase of the system.
See also
References
External links
E-6B ABNCP Factsheet
LGM-30G Minuteman III Factsheet
Missile launchers
Military radio systems of the United States
Nuclear warfare
Military communications
United States nuclear command and control | Airborne Launch Control System | [
"Chemistry",
"Engineering"
] | 1,057 | [
"Military communications",
"Telecommunications engineering",
"Radioactivity",
"Nuclear warfare"
] |
25,698,625 | https://en.wikipedia.org/wiki/Journal%20of%20Mechanics%20in%20Medicine%20and%20Biology | The Journal of Mechanics in Medicine and Biology is a peer-reviewed medical journal that was established in 2001 and is published by World Scientific. It covers research in the field of mechanics as applied to medicine and biology.
Abstracting and indexing
The journal is abstracted and indexed in:
Academic OneFile
Academic Search Complete/ Elite/ Premier
Baidu
CNKI Scholar
CnpLINKer
Compendex
CrossRef
CSA Physical Education Abstracts
Ebsco Discovery Service
Ebsco Electronic Journal Service (EJS)
ExLibris Primo Central
Google Scholar
Health Reference Center Academic (Gale)
J-Gate
Journal Citation Reports/Science Edition
Naver
NSTL - National Science and Technology Libraries
OCLC WorldCat®
ProQuest SciTech Premium Collection
Science Citation Index Expanded (SCIE)
Scopus
The Summon® Service
WanFang Data
The journal has a 2020 SCI impact factor of 0.897.
References
External links
English-language journals
Biomedical engineering journals
General medical journals
Academic journals established in 2001
World Scientific academic journals | Journal of Mechanics in Medicine and Biology | [
"Engineering",
"Biology"
] | 201 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
25,698,853 | https://en.wikipedia.org/wiki/Journal%20of%20Nonlinear%20Optical%20Physics%20%26%20Materials | The Journal of Nonlinear Optical Physics & Materials is a quarterly peer-reviewed scientific journal that was established in 1992 and is published by World Scientific. It covers developments in the field of nonlinear interactions of light with matter, guided waves, and solitons, as well as their applications, such as in laser and coherent lightwave amplification, and information processing.
Abstracting and indexing
The journal is abstracted and indexed in:
Astrophysics Data System
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
EBSCO databases
Ei Compendex
Inspec
ProQuest databases
Science Citation Index Expanded
Scopus
References
External links
Academic journals established in 1992
Optics journals
Materials science journals
World Scientific academic journals
English-language journals
Quarterly journals | Journal of Nonlinear Optical Physics & Materials | [
"Materials_science",
"Engineering"
] | 149 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science"
] |
25,699,501 | https://en.wikipedia.org/wiki/Baltic%20Sea%20hypoxia | Baltic Sea hypoxia refers to low levels of oxygen in bottom waters, also known as hypoxia, occurring regularly in the Baltic Sea. The total area of bottom covered with hypoxic waters with oxygen concentrations less than 2 mg/L in the Baltic Sea has averaged 49,000 km2 over the last 40 years. The ultimate cause of hypoxia is excess nutrient loading from human activities causing algal blooms. The blooms sink to the bottom and use oxygen to decompose at a rate faster than it can be added back into the system through the physical processes of mixing. The lack of oxygen (anoxia) kills bottom-living organisms and creates dead zones.
Causes
The rapid increase in hypoxia in coastal areas around the world is due to the excessive inputs of plant nutrients, such as nitrogen and phosphorus by human activities. The sources of these nutrients include agriculture, sewage, and atmospheric deposition of nitrogen containing compounds from the burning of fossil fuels. The nutrients stimulate the growth of algae causing problems with eutrophication. The algae sink to the bottom and use the oxygen when they decompose. If mixing of the bottom waters is slow, such that oxygen stocks are not renewed, hypoxia can occur.
Description
The total area of bottom covered with hypoxic waters with oxygen concentrations less than 2 mg/L in the Baltic Sea has averaged 49,000 km2 over the last 40 years.
In the Baltic Sea, the input of salt water from the North Sea through the Danish Straits is important in determining the area of hypoxia each year. Denser, saltier water comes into the Baltic Sea and flows along the bottom. Although large salt water inputs help to renew the bottom waters and increase oxygen concentrations, the new oxygen added with the salt water inflow is rapidly used to decompose organic matter that is in the sediments. The denser salt water also reduces mixing of oxygen poor bottom waters with more brackish, lighter surface waters. Thus, large areas of hypoxia occur when more salt water comes into the Baltic Sea.
Geological perspective
Geological archives in sediments, primarily the appearance of laminated sediments that occur only when hypoxic conditions are present, are used to determine the historical time frame of oxygen conditions.
Hypoxic conditions were common during the development of the early Baltic Sea called the Mastogloia Sea and Littorina Sea starting around 8,000 calendar years Before Present until 4,000 BP. Hypoxia disappeared for a period of nearly 2,000 years, appearing a second time just before the Medieval Warm Period around 1 AD until 1200 AD. The Baltic Sea became hypoxic again around 1900 AD and has remained hypoxic for the last 100 years.
The causes of the various periods of hypoxia are being scientifically debated, but it is believed to result from high surface salinity, climate and human impacts.
Impacts
The deficiency of oxygen in bottom waters changes the types of organisms that live on the bottom. The species change from long-living, deep-burrowing, slow-growing animals to species that live on the sediment surface. They are small and fast-growing, and can tolerate low concentrations of oxygen. When oxygen concentrations are low enough only bacteria and fungi can survive, dead zones form. In the Baltic Sea, low oxygen concentrations also reduce the ability of cod to spawn in bottom waters. Cod spawning requires both high salinity and high oxygen concentrations for cod fry to develop, conditions that are rare in the Baltic Sea today.
The lack of oxygen also increases the release of phosphorus from bottom sediments. Excess phosphorus in surface waters and the lack of nitrogen stimulates the growth of cyanobacteria. When the cyanobacteria die and sink to the bottom they consume oxygen leading to further hypoxia and more phosphorus is released from bottom sediments. This process creates a vicious circle of eutrophication that helps to sustain itself.
Solutions
The countries surrounding the Baltic Sea have established the HELCOM Baltic Marine Environment Protection Commission to protect and improve the environmental health of the Baltic Sea. In 2007, the Member States accepted the Baltic Sea Action Plan to reduce nutrients. Because the public and media have been frustrated by the lack of progress in improving the environmental status of the Baltic Sea, there have been calls for large-scale engineering solutions to add oxygen back into bottom waters and bring life back to the dead zones. An international committee evaluated different ideas and came to the conclusion that large-scale engineering approaches are not able to add oxygen to the extremely large dead zones in the Baltic Sea without completely changing the Baltic Sea ecosystem. The best long-term solution is to implement policies and measures to reduce the load of nutrients to the Baltic Sea.
References
External links
HELCOM
Baltic Sea Action Plan
HYPER Project
BONUS
Baltic Nest Institute 2011
Baltic Sea 2020
Baltic Sea
Algal blooms | Baltic Sea hypoxia | [
"Chemistry",
"Biology",
"Environmental_science"
] | 994 | [
"Algae",
"Water treatment",
"Water pollution",
"Water quality indicators",
"Algal blooms"
] |
25,700,410 | https://en.wikipedia.org/wiki/Albion%20process | The Albion process is an atmospheric leaching process for processing zinc concentrate, refractory copper and refractory gold. The process is important because it is the most cost-effective method currently in use for extracting both the zinc and lead from concentrates that contain high lead levels (7% or greater). Zinc and lead often occur together and large remaining zinc deposits contain levels of lead that exceed what can be economically extracted through other techniques. The Albion process is not sensitive to the concentration grade and gives favorable recovery with both low grade and dirty concentrates. Environmental impact is also claimed to be mitigated using this technology because in contrast to other methods, sulfur dioxide is not emitted and less energy is consumed over all.
History
Development of the Albion process started during the early nineties led by Mount Isa Mines. It was first patented in 1993. Several pilot plant projects were conducted in 1994 and 1995 which tested the feasibility of using the technology to process high arsenic gold and copper ore.
The Albion Process has been successfully installed in seven projects globally:
GPM Gold Project (Gold, Armenia)
Las Lagunas Tailings (Gold, Dominican Republic)
Sable Copper Project (Copper, Chalcopyrite, Zambia)
Asturiana de Zinc (Zinc, Spain)
Nordenham Zinc Refinery (Zinc, Germany)
McArthur River (Zinc, Australia)
Process
The ore concentrate is first introduced into an IsaMill. This comminution step places a high degree of strain on the mineral lattice and causes an increase in the number of grain boundary fractures and lattice defects of several orders of magnitude. The increase in the number of defects within the mineral lattice "activates" the mineral, facilitating leaching. The rate of leaching is also enhanced, due to the increase in the mineral surface area.
The oxidative leaching stage is carried out in agitated tanks operating at atmospheric pressure. Oxygen is introduced to the leach slurry to assist the oxidation. Leaching is autothermal, not requiring any external heat. Temperature is controlled by the rate of addition of oxygen, and by the leach slurry density.
Chemistry
The general reaction for the leaching process is:
References
Metallurgical processes | Albion process | [
"Chemistry",
"Materials_science"
] | 441 | [
"Metallurgical processes",
"Metallurgy"
] |
30,295,004 | https://en.wikipedia.org/wiki/C14H12N2O2 | The molecular formula C14H12N2O2 may refer to:
Benzoylphenylurea
N-Benzoyl-N′-phenylurea
1,4-Diamino-2,3-dihydroanthraquinone
Dibenzoylhydrazine | C14H12N2O2 | [
"Chemistry"
] | 80 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
30,295,206 | https://en.wikipedia.org/wiki/Laser%20guided%20and%20stabilized%20arc%20welding | Laser guided and stabilized welding (LGS-welding) is a process in which a laser beam irradiates an electrically heated plasma arc to create a path of increased conductivity. The arc's energy can therefore be spatially directed, and the plasma burns more stably. The process must be distinguished from laser-hybrid welding, since only a low laser power of a few hundred watts is used and the laser does not contribute significantly to the welding process in terms of energy input.
Operation
The principle of laser enhanced welding is based on the interaction between the electrical arc and laser radiation. Due to the optogalvanic effect (OGE), a channel of higher conductivity is established in the plasma along the path of the laser. Therefore, a movement of the laser beam results in a movement of the electrical arc. This effect is limited to a range of a few millimetres, but it demonstrates the influence of the radiation on the plasma. An increase in welding speed of over 100% has been reported using a diode laser with a wavelength of 811 nm, without a significant loss in penetration depth. Furthermore, this technique is used in cladding. Depending on the material being welded, argon or an argon–CO2 mixture is used as shielding gas. The laser source must be tuned to emit at a wavelength of 811 nm and is focused into the plasma.
Laser guided and stabilized GMA-Welding
The process is used for welding thin metal sheets up to about 2 mm in overlap or butt joints. LGS-GMA-welding is most advantageous for fillet welds: the guidance effect of the laser radiation forces the arc into the fillet, so a steady seam can be achieved. Furthermore, the stabilization of the plasma enables the GMA process to weld thin sheets without burning holes in the material.
Equipment and setup
The setup requires the GMA welding head to be tilted at 60° to the workpiece surface. To achieve maximum overlap between the electric arc and the laser beam in the process area, the laser is mounted perpendicular to the workpiece and focused into the electric arc. Standard welding equipment can be used for the process. The laser source is described above.
Laser guided and stabilized double head TIG-welding
In laser guided and stabilized double head TIG-welding the laser forces two arcs together. The goal of this technique is to increase the welding speed of TIG-welding without compromising the quality.
Equipment and setup
This process requires two TIG sources and the laser described above. The TIG torches are set up with the laser beam perpendicular between them. All welding modes of the two torches are possible (DC/DC, AC/AC, AC/DC).
Laser guided and stabilized GMA-cladding
In LGS-GMA-cladding the stabilization effect is used to enable the GMA process to work at low energy. This is needed to reduce the penetration depth and therefore the dilution of the base and deposited material. The combination of GMA welding and a diode laser results in an inexpensive and energy-efficient process.
Equipment and setup
The setup for LGS-GMA-cladding is almost the same as for LGS-GMA-welding, except that the GMA source needs to provide a "cold-MIG" process. This means that the welding current is controlled by microcontrollers and produced by power electronics, so that not only the current peaks but also the current slopes can be controlled.
References
External links and further reading
Project homepage at LZH (German)
Project Homepage (Laser Stabilized Double TIG-welding)
Wendelstorf, J.; Decker, I.; Wohlfahrt, H. 1994, Laser-enhanced gas tungsten arc welding (Laser-TIG), Welding in the World
Cui, H., 1991, Untersuchungen der Wechselwirkung zwischen Schweißlichtbogen und fokussiertem Laserstrahl und der Anwendungsmöglichkeit kombinierter Laser-Lichtbogentechnik (German only)
Paulini, J., Simon, G., 1993, A theoretical lower limit for laser power in laser-enhanced arc welding, J. Phys. D: Appl. Phys. 26 (1993) 1523-1527
M. Schnick, S. Rose, U. Füssel, A. Mahrle, C. Demuth, E. Beyer: Numerische und experimentelle Untersuchungen zur Wechselwirkung zwischen einem Plasmalichtbogen und einem Laserstrahl geringer Leistung: DVS (German only)
Welding | Laser guided and stabilized arc welding | [
"Engineering"
] | 1,036 | [
"Welding",
"Mechanical engineering"
] |
30,304,860 | https://en.wikipedia.org/wiki/Rabi%20resonance%20method | The Rabi resonance method is a technique developed by Isidor Isaac Rabi for measuring nuclear magnetic moments. The atom is placed in a static magnetic field and a perpendicular rotating magnetic field.
A classical treatment is presented here.
Theory
When only the static magnetic field (B0) is turned on, the spin will precess around it with Larmor frequency ν0 and the corresponding angular frequency is ω0.
According to classical mechanics, the equation of motion of the spin J is dJ/dt = μ × B0,
where μ = γJ is the magnetic moment, with γ = gq/2m the gyromagnetic ratio (q and m being the charge and mass of the particle).
g is the g-factor, a dimensionless quantity reflecting the environment's effect on the spin.
Solving this with the magnetic field along the z-axis gives the angular frequency (Larmor frequency) ω0 = −γB0.
The minus sign is necessary. It reflects that J rotates in the left-handed sense about the field: if the thumb points along B0, J precesses opposite to the curl of the fingers of the right hand.
When the rotating magnetic field (BR) is also turned on, rotating at angular frequency ω in the same sense as the free precession, the equation of motion in the frame co-rotating with BR becomes dJ/dt = μ × Beff,
with the effective field Beff = (B0 − ω/γ) ẑ + BR x̂′, where x̂′ is the fixed direction of BR in the rotating frame.
If ω = γB0 (the magnitude of the Larmor frequency), the static part of Beff is cancelled, and the spin then precesses around BR alone at the Rabi frequency ωR = γBR.
Since the rotating field is perpendicular to the static field, the spin in the rotating frame can now flip between up and down.
By sweeping ω and finding where the flipping is maximal, one can determine the Larmor frequency and hence the magnetic moment.
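The resonance behaviour can be seen directly by integrating the classical equation of motion dJ/dt = γ J × B numerically. The following minimal Python sketch is not from the article; the gyromagnetic ratio, field strengths, and pulse length are arbitrary illustrative values, and the rotating field is taken to rotate in the same sense as the free precession.

import numpy as np

# Classical spin in a static field B0 (along z) plus a field BR rotating in the
# x-y plane in the same sense as the free precession. All values are arbitrary.
gamma = 1.0            # gyromagnetic ratio
B0 = 1.0               # static field
BR = 0.02              # rotating field amplitude
omega0 = gamma * B0    # Larmor (resonance) angular frequency

def field(t, omega):
    """Total magnetic field at time t for drive frequency omega."""
    return np.array([BR * np.cos(omega * t), -BR * np.sin(omega * t), B0])

def flip_fraction(omega, t_end, dt=1e-3):
    """Integrate dJ/dt = gamma * J x B with RK4; return how far the spin has
    tipped away from +z (0 = unchanged, 1 = fully flipped)."""
    J = np.array([0.0, 0.0, 1.0])          # start aligned with B0
    deriv = lambda t, J: gamma * np.cross(J, field(t, omega))
    t = 0.0
    for _ in range(int(t_end / dt)):
        k1 = deriv(t, J)
        k2 = deriv(t + dt / 2, J + dt / 2 * k1)
        k3 = deriv(t + dt / 2, J + dt / 2 * k2)
        k4 = deriv(t + dt, J + dt * k3)
        J = J + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return (1.0 - J[2]) / 2.0

t_pi = np.pi / (gamma * BR)   # duration of a resonant "pi pulse"
for omega in [0.90 * omega0, 0.99 * omega0, 1.00 * omega0, 1.01 * omega0, 1.10 * omega0]:
    print(f"omega/omega0 = {omega / omega0:.2f}   flip fraction = {flip_fraction(omega, t_pi):.3f}")

Running this shows the flip fraction is essentially 1 when ω = ω0 and drops off rapidly as the drive is detuned, which is exactly the sharp resonance exploited in the experiment below.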
Experiment
The experimental setup consists of three parts: an inhomogeneous magnetic field at the front, the rotating field in the middle, and another inhomogeneous magnetic field at the end.
After passing through the first inhomogeneous field, the atoms split into two beams corresponding to the spin-up and spin-down states. One beam (the spin-up state, for example) is selected and sent through the rotating field. If the rotating field's frequency ω equals the Larmor frequency, many spins are flipped, producing a high intensity in the other beam (the spin-down state) after the second inhomogeneous field. By sweeping the frequency to find the maximum intensity, one can determine the Larmor frequency and the magnetic moment of the atom.
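For orientation (this formula is not stated in the article), the standard Rabi flopping result gives the probability that a spin entering the rotating-field region with spin up leaves with spin down, as a function of the detuning Δ = ω − ω0 and the time t spent in the field:

P_{\uparrow\to\downarrow}(t) = \frac{\omega_R^2}{\omega_R^2 + \Delta^2}\,\sin^2\!\left(\frac{\sqrt{\omega_R^2 + \Delta^2}}{2}\,t\right), \qquad \Delta = \omega - \omega_0 .

On resonance (Δ = 0) the prefactor equals 1 and the spin flips completely when t = π/ωR; off resonance the maximum flip probability falls as ωR²/(ωR² + Δ²), which is why sweeping ω produces a sharp intensity maximum at the Larmor frequency.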
References and notes
https://web.archive.org/web/20160325004825/https://www.colorado.edu/physics/phys7550/phys7550_sp07/extras/Ramsey90_RMP.pdf
See also
Rabi frequency
Rabi cycle
Rabi problem
Quantum optics
Atomic physics
Atomic, molecular, and optical physics | Rabi resonance method | [
"Physics",
"Chemistry"
] | 492 | [
"Quantum optics",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
30,308,303 | https://en.wikipedia.org/wiki/Bergman%E2%80%93Weil%20formula | In mathematics, the Bergman–Weil formula is an integral representation for holomorphic functions of several variables generalizing the Cauchy integral formula. It was introduced by Stefan Bergman and André Weil.
Weil domains
A Weil domain is an analytic polyhedron, a domain U in Cn defined by inequalities |fj(z)| < 1
for functions fj that are holomorphic on some neighborhood of the closure of U, such that the faces of the Weil domain (where one of the |fj| equals 1 and the others are less than 1) all have dimension 2n − 1, and the intersections of k faces have codimension at least k.
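In symbols, the definition above reads as follows (the ambient open set V on which the fj are holomorphic and the number N of defining functions are named here only for the restatement):

U = \{\, z \in V : |f_j(z)| < 1,\ j = 1,\dots,N \,\}, \qquad \sigma_j = \{\, z \in \overline{U} : |f_j(z)| = 1,\ |f_k(z)| \le 1 \ (k \ne j) \,\},

with each face σj of real dimension 2n − 1 and each intersection of k distinct faces of codimension at least k.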
See also
Andreotti–Norguet formula
Bochner–Martinelli formula
References
Theorems in complex analysis
Several complex variables | Bergman–Weil formula | [
"Mathematics"
] | 163 | [
"Theorems in mathematical analysis",
"Functions and mappings",
"Several complex variables",
"Theorems in complex analysis",
"Mathematical objects",
"Mathematical relations"
] |
2,016,219 | https://en.wikipedia.org/wiki/National%20Institute%20of%20Building%20Sciences | The National Institute of Building Sciences is a non-profit, non-governmental organization that identifies and resolves problems and potential issues in the built environment throughout the United States. Its creation was authorized by the U.S. Congress in the Housing and Community Development Act of 1974.
Board of directors
The Institute is governed by a board of directors which consists of 21 members. All members serve for terms of three years, with a third of the board up for new terms each year. The President, with the advice and consent of the Senate, appoints six members to represent the public interest. The remaining 15 members are elected from the nation's construction industry, including representatives of construction labor organizations, product manufacturers, builders, housing management experts, and experts in building standards, codes, and fire safety, as well as public interest representatives including architects, professional engineers, officials of Federal, State, and local agencies, and representatives of consumer organizations. The board shall always have a majority of public interest representatives.
After the expiration of the term of any member, they may continue to serve until their successor has been elected or has been appointed and confirmed.
The board annually elects from among its members a chairman. It shall also elect one or more vice chairmen. The terms are for one year and no one can serve as chairman or vice chairman for more than two consecutive terms.
Among the board's duties is appointing a president and CEO, and other executive officers as it sees fit. As of September 12, 2024, George K. Guszcza is the President and CEO of the NIBS.
Board members appointed by the President
The current members of the board that are appointed by the President:
Councils and Workgroups
Building Enclosure Technology and Environment Council (BETEC)
Building Information Management (BIM) Council (formerly the buildingSMART alliance)
Building Seismic Safety Council (BSSC)
Consultative Council
Facility Management and Operations Council (FMOC)
Multi-Hazard Mitigation Council (MMC)
Off-Site Construction Council
Whole Building Design Guide (WBDG) Workgroup
Technology programs
HAZUS
ProjNet
Whole Building Design Guide WBDG
News
NIBS Member Quarterly Newsletter
Standards and publications
National BIM Standard - United States
United States National CAD Standard
Former councils include:
Facility Information Council (FIC)
International Alliance for Interoperability (IAI)
Charter members
Mortimer M. Marshall Jr., FAIA
Homer Hurst
See also
National CAD Standard
Whole Building Design Guide
References
External links
Building engineering organizations
Professional associations based in the United States
Institutes based in the United States | National Institute of Building Sciences | [
"Engineering"
] | 518 | [
"Building engineering",
"Building engineering organizations"
] |