| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
2,370,766 | https://en.wikipedia.org/wiki/Ion%20beam%20analysis | Ion beam analysis (IBA) is an important family of modern analytical techniques involving the use of MeV ion beams to probe the composition and obtain elemental depth profiles in the near-surface layer of solids. IBA is not restricted to MeV energy ranges: related techniques operate at low energies (<keV), such as focused ion beam (FIB) methods and secondary ion mass spectrometry, and at higher energies (>GeV) using instruments such as the LHC. All IBA methods are highly sensitive and allow the detection of elements in the sub-monolayer range. The depth resolution is typically in the range of a few nanometers to a few tens of nanometers. Atomic depth resolution can be achieved, but requires special equipment. The analyzed depth ranges from a few tens of nanometers to a few tens of micrometers. IBA methods are always quantitative, with an accuracy of a few percent.
Channeling makes it possible to determine the depth profile of damage in single crystals.
RBS: Rutherford backscattering is sensitive to heavy elements in a light matrix. This technique is used for determining elemental composition and depth profiling of materials; the scattering cross-section behind this sensitivity is sketched after this list.
EBS: Elastic (non-Rutherford) backscattering spectrometry can be sensitive even to light elements in a heavy matrix. The term EBS is used when the incident particle is going so fast that it exceeds the "Coulomb barrier" of the target nucleus, which therefore cannot be treated by Rutherford's approximation of a point charge. In this case Schrödinger's equation should be solved to obtain the scattering cross-section (see http://www-nds.iaea.org/sigmacalc/ ).
ERD: Elastic recoil detection is sensitive to light elements in a heavy matrix.
PIXE: Particle-induced X-ray emission gives the trace and minor elemental composition.
NRA: Nuclear reaction analysis is sensitive to particular isotopes.
Channelling: The fast ion beam can be aligned accurately with major axes of single crystals; then the strings of atoms "shadow" each other and the backscattering yield falls dramatically. Any atoms off their lattice sites will give visible extra scattering. Thus damage to the crystal is visible, and point defects (interstitials) can even be distinguished from dislocations.
IBIL: Ion beam induced luminescence occurs when an energetic beam of ions strikes a target and excites the native atoms; visible light is emitted as a result of outer-shell transitions.
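The Z-dependence behind these sensitivities can be made explicit. In Gaussian units, the Rutherford differential scattering cross-section for a projectile of atomic number $Z_1$ and energy $E$ scattered through an angle $\theta$ by a target nucleus of atomic number $Z_2$ is the standard result:

$$\frac{d\sigma}{d\Omega} = \left(\frac{Z_1 Z_2 e^2}{4E}\right)^2 \frac{1}{\sin^4(\theta/2)}$$

Because the backscattering yield scales as $Z_2^2$, heavy target atoms scatter far more strongly than light ones; hence RBS is sensitive to heavy elements in a light matrix, while light elements in heavy matrices are better reached by ERD or NRA.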
The quantitative evaluation of IBA methods requires the use of specialized simulation and data analysis software. SIMNRA and DataFurnace are popular programs for the analysis of RBS, ERD and NRA, while GUPIX is popular for PIXE. A review of IBA software was followed by an intercomparison of several codes dedicated to RBS, ERD and NRA, organized by the International Atomic Energy Agency.
IBA is an area of active research. The last major Nuclear Microbeam conference in Debrecen (Hungary) was published in NIMB 267(12–13).
Overview
Ion beam analysis works on the basis that ion–atom interactions are produced by introducing ions into the sample being tested. The major interactions result in the emission of products that carry information about the number, type, distribution and structural arrangement of the atoms. To use these interactions to determine sample composition, a technique must be selected, along with irradiation conditions and a detection system, that best isolates the radiation of interest and provides the desired sensitivity and detection limits. The basic layout of an ion beam apparatus is an accelerator that produces an ion beam, which is fed through an evacuated beam-transport tube to a beam-handling device. This device isolates the ion species and charge state of interest, which are then transported through a second evacuated beam-transport tube into the target chamber, where the refined ion beam comes into contact with the sample and the resulting interactions can be observed. The configuration of the ion beam apparatus can be changed and made more complex with the incorporation of additional components. The techniques for ion beam analysis are designed for specific purposes. Some techniques and ion sources are shown in table 1; detector types and arrangements for ion beam techniques are shown in table 2.
Applications
Ion beam analysis has found use in a wide range of applications, from biomedicine to the study of ancient artifacts. The popularity of the technique stems from the sensitive data that can be collected without significant distortion of the system being studied. For roughly thirty years its success was virtually unchallenged, and even with the recent emergence of competing technologies its use has not faded; new applications continue to be found that take advantage of its superior detection capabilities. In an era when older technologies can become obsolete in an instant, ion beam analysis has remained a mainstay and appears only to be growing as researchers find greater use for the technique.
Biomedical elemental analysis
Gold nanoparticles have recently been used as a basis for counting atomic species, especially in studying the content of cancer cells, and ion beam analysis is an effective way to count the number of atoms of a given species per cell. Scientists have found that accurate quantitative data can be obtained by using ion beam analysis in conjunction with elastic backscattering spectrometry (EBS). The researchers of one gold-nanoparticle study had much greater success with ion beam analysis than with other analytical techniques such as PIXE or XRF, because the EBS signal directly provides depth information, which the other two methods cannot. These properties make ion beam analysis useful in a new line of cancer therapy.
Cultural heritage studies
Ion beam analysis also has a distinctive application in the study of archaeological artifacts, a field known as archaeometry. For the past three decades it has been a preferred method for studying artifacts while preserving them, valued for its excellent analytical performance and non-invasive character; in particular, it offers high sensitivity and accuracy. Recently, however, X-ray based methods such as XRF have become competing options for archaeometry. Nonetheless, ion beam analysis remains unmatched in the analysis of light elements and in chemical 3D imaging applications (e.g. artwork and archaeological artifacts).
Forensic analysis
A third application of ion beam analysis is in forensic studies, particularly gunshot residue characterization. Current characterization is based on the heavy metals found in bullets; however, manufacturing changes are slowly making these analyses obsolete, and the introduction of techniques such as ion beam analysis is expected to alleviate this issue. Researchers are currently studying the use of ion beam analysis in conjunction with a scanning electron microscope and an energy-dispersive X-ray spectrometer (SEM-EDS), in the hope that this setup will detect the composition of new and old residue chemicals that older analyses could not efficiently detect. The larger number of analytical signals and the higher sensitivity of ion beam analysis hold great promise for the field of forensic science.
Lithium battery development
The spatially resolved detection of light elements, for example lithium, remains challenging for most techniques based on the electronic shell of the target atoms, such as XRF or SEM-EDS. For lithium and lithium-ion batteries, quantifying the lithium stoichiometry and its spatial distribution is important for understanding the mechanisms behind dis-/charging and aging. Through ion beam focusing and a combination of methods, ion beam analysis offers the unique possibility of measuring the local state of charge (SoC) on the μm scale.
Iterative IBA
Ion beam-based analytical techniques represent a powerful set of tools for non-destructive, standard-less, depth-resolved and highly accurate elemental composition analysis in the depth regime from several nm up to a few μm. By changing the type of incident ion, the geometry of the experiment, or the particle energy, or by acquiring different products originating from the ion–solid interaction, complementary information can be extracted. However, analysis is often challenged either in terms of mass resolution—when several comparably heavy elements are present in the sample—or in terms of sensitivity—when light species are present in heavy matrices. Hence, a combination of two or more ion beam-based techniques can overcome the limitations of each individual method and provide complementary information about the sample.
An iterative and self-consistent analysis also enhances the accuracy of the information that can be obtained from each independent measurement.
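As a minimal numerical sketch of this self-consistent idea (the models, data, and numbers below are invented for illustration; real analyses use physics-based forward models in codes such as SIMNRA or DataFurnace), two measurements of the same film thickness by different techniques can be fitted jointly by minimizing a combined chi-square:

```python
import numpy as np

# Hypothetical example: two techniques observe the same film thickness x (nm)
# with different sensitivities and uncertainties. A self-consistent analysis
# fits one sample model to both data sets at once.

def model_rbs(x):
    # Toy RBS observable, assumed linear in thickness (invented)
    return 2.0 * x

def model_nra(x):
    # Toy NRA observable, assumed linear in thickness (invented)
    return 0.5 * x

y_rbs, sigma_rbs = 201.0, 5.0   # simulated measurement and its uncertainty
y_nra, sigma_nra = 49.0, 1.0

def combined_chi2(x):
    return (((y_rbs - model_rbs(x)) / sigma_rbs) ** 2
            + ((y_nra - model_nra(x)) / sigma_nra) ** 2)

# Brute-force scan over candidate thicknesses; a real code would iterate
# full forward simulations of each spectrum to self-consistency.
xs = np.linspace(50.0, 150.0, 10001)
best = xs[np.argmin([combined_chi2(x) for x in xs])]
print(f"joint thickness estimate: {best:.1f} nm")   # ~99.0 nm
```

Because the better-constrained measurement dominates the combined chi-square, the joint estimate inherits the strengths of both techniques, which is the essence of the self-consistent approach.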
Software and simulation
Dating back to the 1960s, the data collected via ion beam analysis has been analyzed with a multitude of computer simulation programs. Researchers who frequently use ion beam analysis in their work require this software to be accurate and appropriate for describing the analytical process they are observing. Applications of these programs range from data analysis to theoretical simulations and modeling based on assumptions about the atomic data, mathematics and physical properties that detail the process in question. As the purpose and implementation of ion beam analysis has changed over the years, so have the software and codes used to model it. Such changes are detailed through the five classes into which the updated software is categorized.
Class-A
Includes all programs developed in the late 1960s and early 1970s. This class of software solved specific problems in the data; it did not provide the ability to analyze a spectrum in the full general case. The prominent pioneering program was IBA, developed by Ziegler and Baglin in 1971. At the time, the computational models only tackled the analysis associated with the backscattering techniques of ion beam analysis and performed calculations based on a slab analysis. A variety of other programs arose during this time, such as RBSFIT, though due to the then-limited in-depth knowledge of ion beam analysis, it became increasingly hard to develop accurate programs.
Class-B
This next class of software sought to solve the accuracy problem. Developed during the 1980s, programs like SQEAKIE and BEAM EXPERT afforded an opportunity to solve the complete general case by employing codes that perform direct analysis. This direct approach unfolds the measured spectrum with no assumptions made about the sample; instead, it works through separated spectrum signals and solves a set of linear equations for each layer. Problems still arose, though, and adjustments were made to reduce noise in the measurements and to allow room for uncertainty.
Class-C
In a trip back to square one, this third class of programs, created in the 1990s, takes a few principles from Class A in accounting for the general case, but now through the use of indirect methods. RUMP and SENRAS, for example, use an assumed model of the sample and simulate a comparative theoretical spectrum, affording such capabilities as fine-structure retention and uncertainty calculations. In addition to the improved analysis tools came the ability to analyze techniques other than backscattering, i.e. ERDA and NRA.
Class-D
Exiting the Class C era and into the early 2000s, software and simulation programs for ion beam analysis tackled a wider variety of data-collection techniques and data-analysis problems. Following the world's technological advancements, the programs were enhanced into more generalized codes for spectrum evaluation and structural determination. Programs such as SIMNRA now account for more complex interactions between the beam and the sample, and also provide a database of known scattering data.
Class-E
This most recently developed class, while sharing characteristics with the previous one, makes use of Monte Carlo computational techniques. It applies molecular-dynamics calculations that can treat both the low- and high-energy physical interactions taking place in ion beam analysis. A key and popular feature of such techniques is the possibility of incorporating the computations in real time into the ion beam analysis experiment itself.
Footnotes
References
External links
International Conference on Ion Beam Analysis (Biennial scientific conference devoted to IBA): 2007, 2009, 2011, 2013, 2015, 2017.
European Conference on Accelerators in Applied Research and Technology ECAART (Triennial European scientific conference): 2007, 2010, 2013, 2016.
International Conference on Particle Induced X-ray Emission (Triennial scientific conference devoted to PIXE): 2007, 2010, 2013, 2015.
"Nuclear Instruments and Methods": The international peer-reviewed scientific journal largely devoted to IBA developments and applications
SIMNRA program for the simulation and analysis of RBS, EBS, ERD, NRA and MEIS spectra
MultiSIMNRA program for the simulation and analysis (self-consistent fitting) of multiple RBS, EBS, ERD, and NRA spectra using SIMNRA
DataFurnace program for the simulation and analysis (self-consistent fitting) of multiple PIXE, RBS, EBS, ERD, NRA, PIGE, NRP, NDP spectra
NDF free version of NDF (the calculation engine underlying DataFurnace) for the simulation of IBA spectra
GUPIX program for the simulation and analysis of PIXE spectra
Software for PIXE analysis Intercomparison of PIXE spectrometry software packages
Aachen-ion-beams Hardware and software for ion-beam analysis and μ-beam applications
Materials science | Ion beam analysis | [
"Physics",
"Materials_science",
"Engineering"
] | 2,737 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
2,371,149 | https://en.wikipedia.org/wiki/Hybrid%20Assistive%20Limb | The Hybrid Assistive Limb (also known as HAL) is a powered, soft-bodied exoskeleton suit developed by Japan's Tsukuba University and the robotics company Cyberdyne. It is designed to support and expand the physical capabilities of its users, particularly people with physical disabilities. There are two primary versions of the system: HAL 3, which only provides leg function, and HAL 5, which is a full-body exoskeleton for the arms, legs, and torso.
In 2011, Cyberdyne and Tsukuba University jointly announced that hospital trials of the full HAL suit would begin in 2012, with tests to continue until 2014 or 2015. By October 2012, HAL suits were in use by 130 different medical institutions across Japan. In February 2013, the HAL system became the first powered exoskeleton to receive global safety certification. In August 2013, HAL received EC certification for clinical use in Europe as the world's first non-surgical medical treatment robot. In addition to its medical applications, the HAL exoskeleton has been used in construction and disaster response work.
History
The first HAL prototype was proposed by Yoshiyuki Sankai, a professor at Tsukuba University. Fascinated with robots since he was in the third grade, Sankai had striven to make a robotic suit in order "to support humans". In 1989, after receiving his PhD in robotics, he began the development of HAL. Sankai spent three years, from 1990 to 1993, mapping out the neurons that govern leg movement. It took him and his team an additional four years to make a prototype of the hardware.
The third HAL prototype, developed in the early 2000s, was attached to a computer. Its battery alone was heavy, and the suit required two helpers to put on, making it very impractical. By contrast, the later HAL-5 model is far lighter and has its battery and control computer strapped around the wearer's waist.
Cyberdyne began renting the HAL suit out for medical purposes in 2008. By October 2012, over 300 HAL suits were in use by 130 medical facilities and nursing homes across Japan. The suit is available for institutional rental, in Japan only, for a monthly fee of US$2,000. In December 2012, Cyberdyne was certified ISO 13485 – an international quality standard for design and manufacture of medical devices – by Underwriters Laboratories. In late February 2013, the HAL suit received a global safety certificate, becoming the first powered exoskeleton to do so. In August 2013, the suit received an EC certificate, permitting its use for medical purposes in Europe as the first medical treatment robot of its kind.
Design and mechanics
When a person attempts to move their body, nerve signals are sent from the brain to the muscles through the motor neurons, moving the musculoskeletal system. When this happens, small biosignals can be detected on the surface of the skin. The HAL suit registers these signals through a sensor attached to the skin of the wearer. Based on the signals obtained, the power unit moves the joint to support and amplify the wearer's motion. The HAL suit possesses a cybernic control system consisting of both a user-activated "voluntary control system" known as Cybernic Voluntary Control (CVC) and a "robotic autonomous control system" known as Cybernic Autonomous Control (CAC) for automatic motion support.
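Cyberdyne's actual control algorithms are proprietary and not detailed here; the following toy sketch only illustrates the voluntary-control idea just described, in which the amplitude of a biosignal detected on the skin is mapped to an assistive joint torque (the threshold, gain, units, and function name are all invented for illustration):

```python
def assistive_torque(biosignal_uv: float,
                     threshold_uv: float = 5.0,
                     gain: float = 0.8,
                     max_torque_nm: float = 30.0) -> float:
    """Toy voluntary-control rule: scale assistance with biosignal amplitude.

    biosignal_uv is a rectified skin-surface signal in microvolts; readings
    below the threshold are treated as noise and produce no assistance.
    """
    if biosignal_uv < threshold_uv:
        return 0.0                        # ignore noise-level signals
    torque = gain * (biosignal_uv - threshold_uv)
    return min(torque, max_torque_nm)     # clamp to an assumed safety limit

# A stronger intended motion (larger biosignal) yields more assistance:
for amplitude in (2.0, 10.0, 60.0):
    print(f"{amplitude:5.1f} uV -> {assistive_torque(amplitude):5.1f} N*m")
```

In the real system such a voluntary (CVC) rule would presumably be blended with the autonomous (CAC) controller, which provides automatic motion support when no usable biosignal is available.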
The HAL design is notable for its soft body and frame, with comfort and ease of use cited as potential benefits of this lack of a rigid body.
Users
HAL is designed to assist people who are disabled or elderly in their daily tasks, but can also be used to support workers with physically demanding jobs such as disaster rescue or construction. HAL is mainly used by disabled patients in hospitals, and can be modified so that patients can use it for longer-term rehabilitation. In addition, scientific studies have shown that, in combination with specially-created therapeutic games, powered exoskeletons like the HAL-5 can stimulate cognitive activities and help disabled children walk while playing. Further scientific studies have shown that HAL Therapy can be effectively used for rehabilitation after spinal cord injury or stroke.
During the 2011 Consumer Electronics Show, it was announced that the United States government had expressed interest in purchasing HAL suits. In March 2011, Cyberdyne presented a legs-only HAL version for those with disabilities, health care professionals and factory workers. In November 2011, HAL was selected to be used for cleanup work at the site of the Fukushima nuclear accident. During the Japan Robot Week exhibition in Tokyo in October 2012, a redesigned version of HAL was presented, designed specifically for the Fukushima cleanup. In March 2013, ten Japanese hospitals conducted clinical tests of the newer legs-only HAL system. In late 2014, HAL exoskeletons modified for construction use entered service with the Japanese construction contractor Obayashi Corporation.
See also
Atlas (robot), a humanoid robot designed for search and rescue
Ekso Bionics
ReWalk
Vanderbilt exoskeleton
References
External links
WALK AGAIN Center – HAL Training Center
Assistive technology
Disability robots
Japanese inventions
Medical robotics
Rehabilitation robots
Robotic exoskeletons
Robots of Japan
2000s robots
2012 in science
2012 robots | Hybrid Assistive Limb | [
"Biology"
] | 1,063 | [
"Medical robotics",
"Medical technology"
] |
2,371,378 | https://en.wikipedia.org/wiki/Surroundings | Surroundings, or environs, are the area around a given physical or geographical point or place. The exact definition depends on the field. The term is also used in geography (where it is more precisely known as vicinity, or vicinage) and mathematics, as well as in philosophy, in either the literal or a metaphorically extended sense.
In thermodynamics, the term (and its synonym, environment) is used in a more restricted sense, meaning everything outside the thermodynamic system. Often, the simplifying assumptions are that energy and matter may move freely within the surroundings, and that the surroundings have a uniform composition.
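For instance, for a closed system that absorbs heat $Q$ from surroundings idealized as a uniform reservoir at temperature $T_{\text{surr}}$ (the standard textbook simplification noted above), the entropy bookkeeping of the second law reads:

$$\Delta S_{\text{surr}} = -\frac{Q}{T_{\text{surr}}}, \qquad \Delta S_{\text{univ}} = \Delta S_{\text{sys}} + \Delta S_{\text{surr}} \geq 0$$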
See also
Distance
Environment (biophysical)
Environment (systems)
Neighbourhood (mathematics)
Social environment
Proxemics
Geography
Thermodynamics | Surroundings | [
"Physics",
"Chemistry",
"Mathematics"
] | 159 | [
"Thermodynamics",
"Dynamical systems"
] |
2,372,519 | https://en.wikipedia.org/wiki/Coordination%20polymerization | Coordination polymerization is a form of polymerization that is catalyzed by transition metal salts and complexes.
Types of coordination polymerization of alkenes
Heterogeneous Ziegler–Natta polymerization
Coordination polymerization started in the 1950s with heterogeneous Ziegler–Natta catalysts based on titanium tetrachloride and organoaluminium co-catalysts. The mixing of TiCl4 with trialkylaluminium complexes produces Ti(III)-containing solids that catalyze the polymerization of ethene and propene. The nature of the catalytic center has been of intense interest but remains uncertain. Many additives and variations have been reported for the original recipes.
Homogeneous Ziegler–Natta polymerization
In some applications heterogeneous Ziegler–Natta polymerization has been superseded by homogeneous catalysts such as the Kaminsky catalyst discovered in the 1970s. The 1990s brought forward a new range of post-metallocene catalysts. Typical monomers are nonpolar ethene and propene. The development of coordination polymerization that enables copolymerization with polar monomers is more recent. Examples of monomers that can be incorporated are methyl vinyl ketones, methyl acrylate, and acrylonitrile.
Kaminsky catalysts are based on metallocenes of group 4 metals (Ti, Zr, Hf) activated with methylaluminoxane (MAO).
Polymerizations catalysed by metallocenes occur via the Cossee–Arlman mechanism. The active site is usually anionic but cationic coordination polymerization also exists.
Specialty monomers
Many alkenes do not polymerize in the presence of Ziegler–Natta or Kaminsky catalysts. This problem applies to polar olefins such as vinyl chloride, vinyl ethers, and acrylate esters.
Butadiene polymerization
The annual production of polybutadiene is 2.1 million tons (2000). The process employs a neodymium-based homogeneous catalyst.
Principles
Coordination polymerization has a great impact on the physical properties of vinyl polymers such as polyethylene and polypropylene compared to the same polymers prepared by other techniques such as free-radical polymerization. The polymers tend to be linear and not branched and have much higher molar mass. Coordination type polymers are also stereoregular and can be isotactic or syndiotactic instead of just atactic. This tacticity introduces crystallinity in otherwise amorphous polymers. From these differences in polymerization type the distinction originates between low-density polyethylene (LDPE), high-density polyethylene (HDPE) or even ultra-high-molecular-weight polyethylene (UHMWPE).
Coordination polymerization of other substrates
Coordination polymerization can also be applied to non-alkene substrates. Dehydrogenative coupling of silanes, dihydro- and trihydrosilanes, to polysilanes has been investigated, although the technology has not been commercialized. The process entails coordination and often oxidative addition of Si-H centers to metal complexes.
Lactides also polymerize in the presence of Lewis acidic catalysts to give polylactide.
See also
Cossee–Arlman mechanism
Ziegler–Natta catalyst
Polymerization
Coordination bond
References
Polymerization reactions | Coordination polymerization | [
"Chemistry",
"Materials_science"
] | 705 | [
"Polymerization reactions",
"Polymer chemistry"
] |
2,372,548 | https://en.wikipedia.org/wiki/Polymer%20science | Polymer science or macromolecular science is a subfield of materials science concerned with polymers, primarily synthetic polymers such as plastics and elastomers. The field of polymer science includes researchers in multiple disciplines including chemistry, physics, and engineering.
Subdisciplines
This science comprises three main sub-disciplines:
Polymer chemistry or macromolecular chemistry is concerned with the chemical synthesis and chemical properties of polymers.
Polymer physics is concerned with the physical properties of polymer materials and engineering applications. Specifically, it seeks to present the mechanical, thermal, electronic and optical properties of polymers with respect to the underlying physics governing a polymer microstructure. Despite originating as an application of statistical physics to chain structures, polymer physics has now evolved into a discipline in its own right.
Polymer characterization is concerned with the analysis of chemical structure, morphology, and the determination of physical properties in relation to compositional and structural parameters.
History of polymer science
The first modern example of polymer science is Henri Braconnot's work in the 1830s. Henri, along with Christian Schönbein and others, developed derivatives of the natural polymer cellulose, producing new, semi-synthetic materials, such as celluloid and cellulose acetate. The term "polymer" was coined in 1833 by Jöns Jakob Berzelius, though Berzelius did little that would be considered polymer science in the modern sense. In the 1840s, Friedrich Ludersdorf and Nathaniel Hayward independently discovered that adding sulfur to raw natural rubber (polyisoprene) helped prevent the material from becoming sticky. In 1844 Charles Goodyear received a U.S. patent for vulcanizing natural rubber with sulfur and heat. Thomas Hancock had received a patent for the same process in the UK the year before. This process strengthened natural rubber and prevented it from melting with heat without losing flexibility. This made practical products such as waterproofed articles possible. It also facilitated practical manufacture of such rubberized materials. Vulcanized rubber represents the first commercially successful product of polymer research. In 1884 Hilaire de Chardonnet started the first artificial fiber plant based on regenerated cellulose, or viscose rayon, as a substitute for silk, but it was very flammable. In 1907 Leo Baekeland invented the first synthetic plastic, a thermosetting phenol–formaldehyde resin called Bakelite.
Despite significant advances in polymer synthesis, the molecular nature of polymers was not understood until the work of Hermann Staudinger in 1922. Prior to Staudinger's work, polymers were understood in terms of the association theory or aggregate theory, which originated with Thomas Graham in 1861. Graham proposed that cellulose and other polymers were colloids, aggregates of molecules having small molecular mass connected by an unknown intermolecular force. Hermann Staudinger was the first to propose that polymers consisted of long chains of atoms held together by covalent bonds. It took over a decade for Staudinger's work to gain wide acceptance in the scientific community, work for which he was awarded the Nobel Prize in 1953.
The World War II era marked the emergence of a strong commercial polymer industry. The limited or restricted supply of natural materials such as silk and rubber necessitated the increased production of synthetic substitutes, such as nylon and synthetic rubber. In the intervening years, the development of advanced polymers such as Kevlar and Teflon have continued to fuel a strong and growing polymer industry.
The growth in industrial applications was mirrored by the establishment of strong academic programs and research institutes. In 1946, Herman Mark established the Polymer Research Institute at Brooklyn Polytechnic, the first research facility in the United States dedicated to polymer research. Mark is also recognized as a pioneer in establishing curriculum and pedagogy for the field of polymer science. In 1950, the POLY division of the American Chemical Society was formed, and has since grown to the second-largest division in this association with nearly 8,000 members. Fred W. Billmeyer, Jr., a Professor of Analytical Chemistry had once said that "although the scarcity of education in polymer science is slowly diminishing but it is still evident in many areas. What is most unfortunate is that it appears to exist, not because of a lack of awareness but, rather, a lack of interest."
Nobel prizes related to polymer science
2005 (Chemistry) Robert Grubbs, Richard Schrock, Yves Chauvin for olefin metathesis.
2002 (Chemistry) John Bennett Fenn, Koichi Tanaka, and Kurt Wüthrich for the development of methods for identification and structure analyses of biological macromolecules.
2000 (Chemistry) Alan G. MacDiarmid, Alan J. Heeger, and Hideki Shirakawa for work on conductive polymers, contributing to the advent of molecular electronics.
1991 (Physics) Pierre-Gilles de Gennes for developing a generalized theory of phase transitions with particular applications to describing ordering and phase transitions in polymers.
1974 (Chemistry) Paul J. Flory for contributions to theoretical polymer chemistry.
1963 (Chemistry) Giulio Natta and Karl Ziegler for contributions in polymer synthesis. (Ziegler-Natta catalysis).
1953 (Chemistry) Hermann Staudinger for contributions to the understanding of macromolecular chemistry.
References
External links
List of scholarly journals pertaining to polymer science
Soft matter
Materials science
Polymers | Polymer science | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,129 | [
"Applied and interdisciplinary physics",
"Soft matter",
"Materials science",
"Condensed matter physics",
"Polymer chemistry",
"nan",
"Polymers"
] |
3,253,502 | https://en.wikipedia.org/wiki/IEEE%20854-1987 | The IEEE Standard for Radix-Independent Floating-Point Arithmetic (IEEE 854) was the first Institute of Electrical and Electronics Engineers (IEEE) international standard for floating-point arithmetic with radices other than 2, including radix 10. IEEE 854 did not specify any data formats, whereas IEEE 754-1985 did specify formats for binary (radix 2) floating point. IEEE 754-1985 and IEEE 854-1987 were both superseded in 2008 by IEEE 754-2008 (itself since revised as IEEE 754-2019), which specifies floating-point arithmetic for both radix 2 (binary) and radix 10 (decimal) and defines two alternative formats for radix-10 floating-point values. IEEE 754-2008 also made many other updates to IEEE floating-point standardisation.
IEEE 854 arithmetic was first commercially implemented in the HP-71B handheld computer, which used decimal floating point with 12 digits of significand, and an exponent range of ±499, with a 15 digit significand used for intermediate results.
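Python's standard decimal module implements radix-10 floating-point arithmetic following the General Decimal Arithmetic Specification, which conforms to IEEE 854, so the HP-71B's 12-digit behaviour can be roughly mimicked by configuring the arithmetic context (the exponent limits below are chosen to match the ±499 range mentioned above):

```python
from decimal import Decimal, getcontext

ctx = getcontext()
ctx.prec = 12     # 12 significant decimal digits, as on the HP-71B
ctx.Emax = 499    # exponent limits mimicking the HP-71B's +/-499 range
ctx.Emin = -499

# Radix-10 arithmetic represents decimal fractions exactly, unlike radix 2:
print(Decimal("0.1") + Decimal("0.2"))   # -> 0.3 exactly
print(Decimal(1) / Decimal(3))           # -> 0.333333333333 (12 digits)
```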
References
External links
IEEE 854-1987 – History and minutes
Computer arithmetic
IEEE standards
Floating point | IEEE 854-1987 | [
"Mathematics",
"Technology"
] | 246 | [
"Computer arithmetic",
"Computer standards",
"Arithmetic",
"IEEE standards"
] |
3,254,125 | https://en.wikipedia.org/wiki/Energy%20quality | Energy quality is a measure of the ease with which a form of energy can be converted to useful work or to another form of energy: i.e. its content of thermodynamic free energy. A high quality form of energy has a high content of thermodynamic free energy, and therefore a high proportion of it can be converted to work; whereas with low quality forms of energy, only a small proportion can be converted to work, and the remainder is dissipated as heat. The concept of energy quality is also used in ecology, where it is used to track the flow of energy between different trophic levels in a food chain and in thermoeconomics, where it is used as a measure of economic output per unit of energy. Methods of evaluating energy quality often involve developing a ranking of energy qualities in hierarchical order.
Examples: Industrialization, Biology
The consideration of energy quality was a fundamental driver of industrialization from the 18th through 20th centuries. Consider for example the industrialization of New England in the 18th century. This refers to the construction of textile mills containing power looms for weaving cloth. The simplest, most economical and straightforward source of energy was provided by water wheels, extracting energy from a millpond behind a dam on a local creek. If another nearby landowner also decided to build a mill on the same creek, the construction of their dam would lower the overall hydraulic head to power the existing waterwheel, thus hurting power generation and efficiency. This eventually became an issue endemic to the entire region, reducing the overall profitability of older mills as newer ones were built. The search for higher quality energy was a major impetus throughout the 19th and 20th centuries. For example, burning coal to make steam to generate mechanical energy would not have been imaginable in the 18th century; by the end of the 19th century, the use of water wheels was long outmoded. Similarly, the quality of energy from electricity offers immense advantages over steam, but did not become economic or practical until the 20th century.
The above example focused on the economic impacts of the exploitation of energy. A similar scenario plays out in nature and biology, where living organisms can extract energy of varying quality from nature, ultimately driven by solar energy as the primary driver of thermodynamic disequilibrium on Earth. The ecological balance of ecosystems is predicated on the energy flows through the system. For example, rainwater drives the erosion of rocks, which liberates chemicals that can be used as nutrients; these are taken up by plankton, using solar energy to grow and thrive; whales obtain energy by eating plankton, thus indirectly using solar energy as well, but this time in a much more concentrated and higher quality form.
Water wheels are also driven by rainwater, via the solar evaporation-condensation water cycle; thus ultimately, industrial cloth-making was driven by the day-night cycle of solar irradiation. This is a holistic view of energy sources as a system-in-the-large. Thus, discussions of energy quality can sometimes be found in the Humanities, such as dialectics, Marxism and postmodernism. This is effectively because disciplines such as economics failed to recognize the thermodynamic inputs into the economy (now recognized as thermoeconomics), while disciplines such as physics and engineering were unable to address either the economic impacts of human activity, or the impacts of thermodynamic flows in biological ecosystems. Thus, the broad-stroke, global system-in-the-large discussions were taken up by those best trained for the nebulous, non-specific reasoning that such complex systems require. The resulting mismatch of vocabulary and outlook across disciplines can lead to considerable contention.
History
According to Ohta (1994, pp. 90–91) the ranking and scientific analysis of energy quality was first proposed in 1851 by William Thomson under the concept of "availability". This concept was continued in Germany by Z. Rant, who developed it under the title, "die Exergie" (the exergy). It was later continued and standardised in Japan. Exergy analysis now forms a common part of many industrial and ecological energy analyses. For example, I.Dincer and Y.A. Cengel (2001, p. 132) state that energy forms of different qualities are now commonly dealt with in steam power engineering industry. Here the "quality index" is the relation of exergy to the energy content (Ibid.). However energy engineers were aware that the notion of heat quality involved the notion of value – for example A. Thumann wrote, "The essential quality of heat is not the amount but rather its 'value'" (1984, p. 113) – which brings into play the question of teleology and wider, or ecological-scale goal functions. In an ecological context S.E. Jorgensen and G.Bendoricchio say that exergy is used as a goal function in ecological models, and expresses energy "with a built-in measure of quality like energy" (2001, p. 392).
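The steam-engineering "quality index" mentioned above can be written down explicitly for heat. For heat $Q$ available at temperature $T$ in an environment at temperature $T_0$, a standard result of exergy analysis gives the exergy content and the corresponding quality index as:

$$Ex = Q\left(1 - \frac{T_0}{T}\right), \qquad q = \frac{Ex}{Q} = 1 - \frac{T_0}{T}$$

so the same quantity of heat carries a higher quality index at higher temperature, consistent with Thumann's remark that the essential quality of heat is its value rather than its amount.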
Energy quality evaluation methods
There appear to be two main kinds of methodology used for the calculation of energy quality. These can be classed as either receiver or donor methods. One of the main differences that distinguishes these classes is the assumption of whether energy quality can be upgraded in an energy transformation process.
Receiver methods: view energy quality as a measure and indicator of the relative ease with which energy converts from one form to another. That is, how much energy is received from a transformation or transfer process. For example, A. Grubler used two types of indicators of energetic quality pars pro toto: the hydrogen/carbon (H/C) ratio, and its inverse, the carbon intensity of energy. Grubler used the latter as an indicator of relative environmental quality. However Ohta says that in multistage industrial conversion systems, such as a hydrogen production system using solar energy, the energy quality is not upgraded (1994, p. 125).
Donor methods: view energy quality as a measure of the amount of energy used in an energy transformation, and that goes into sustaining a product or service (H.T.Odum 1975, p. 3). That is how much energy is donated to an energy transformation process. These methods are used in ecological physical chemistry, and ecosystem evaluation. From this view, in contrast with that outlined by Ohta, energy quality is upgraded in the multistage trophic conversions of ecological systems. Here, upgraded energy quality has a greater capacity to feedback and control lower grades of energy quality. Donor methods attempt to understand the usefulness of an energetic process by quantifying the extent to which higher quality energy controls lower quality energy.
Energy quality in physical-chemical science (direct energy transformations)
Constant energy form but variable energy flow
T. Ohta suggested that the concept of energy quality may be more intuitive if one considers examples where the form of energy remains constant but the amount of energy flowing, or transferred is varied. For instance if we consider only the inertial form of energy, then the energy quality of a moving body is higher when it moves with a greater velocity. If we consider only the heat form of energy, then a higher temperature has higher quality. And if we consider only the light form of energy then light with higher frequency has greater quality (Ohta 1994, p. 90). All these differences in energy quality are therefore easily measured with the appropriate scientific instrument.
Variable energy form, but constant energy flow
The situation becomes more complex when the form of energy does not remain constant. In this context Ohta formulated the question of energy quality in terms of the conversion of energy of one form into another, that is the transformation of energy. Here, energy quality is defined by the relative ease with which the energy transforms, from form to form.
If energy A is relatively easier to convert to energy B but energy B is relatively harder to convert to energy A, then the quality of energy A is defined as being higher than that of B. The ranking of energy quality is also defined in a similar way. (Ohta 1994, p. 90).
Nomenclature: Prior to Ohta's definition above, A. W. Culp produced an energy conversion table describing the different conversions from one energy to another. Culp's treatment made use of a subscript to indicate which energy form is being talked about. Therefore, instead of writing "energy A", like Ohta above, Culp referred to "Je", to specify electrical form of energy, where "J" refers to "energy", and the "e" subscript refers to electrical form of energy. Culp's notation anticipated Scienceman's (1997) later maxim that all energy should be specified as form energy with the appropriate subscript.
Energy quality in biophysical economics (indirect energy transformations)
The notion of energy quality was also recognised in the economic sciences. In the context of biophysical economics, energy quality was measured by the amount of economic output generated per unit of energy input (C.J. Cleveland et al. 2000). The estimation of energy quality in an economic context is also associated with embodied energy methodologies. Another example of the economic relevance of the energy quality concept is given by Brian Fleay. Fleay says that the "Energy Profit Ratio (EPR) is one measure of energy quality and a pivotal index for assessing the economic performance of fuels. Both the direct and indirect energy inputs embodied in goods and services must be included in the denominator." (2006; p. 10) Fleay calculates the EPR as the energy output/energy input.
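Written out, with the indirect (embodied) inputs included in the denominator as Fleay requires, the ratio is:

$$\mathrm{EPR} = \frac{E_{\text{out}}}{E_{\text{in,direct}} + E_{\text{in,indirect}}}$$

For illustration (numbers invented), a fuel chain that returns 100 MJ for every 20 MJ of direct and embodied energy invested has an EPR of 5.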
Ranking energy quality
Energy abundance and relative transformation ease as measure of hierarchical rank and/or hierarchical position
Ohta sought to order energy form conversions according to their quality and introduced a hierarchical scale for ranking energy quality based on the relative ease of energy conversion (see the ranking table in Ohta, p. 90). It is evident that Ohta did not analyse all forms of energy; water, for example, is left out of his evaluation. It is important to note that the ranking of energy quality is not determined solely with reference to the efficiency of the energy conversion. This is to say that the evaluation of "relative ease" of an energy conversion is only partly dependent on transformation efficiency. As Ohta wrote, "the turbine generator and the electric motor have nearly the same efficiency, therefore we cannot say which has the higher quality" (1994, p. 90). Ohta therefore also included 'abundance in nature' as another criterion for determining energy quality rank. For example, Ohta said that "the only electrical energy which exists in natural circumstances is lightning, while many mechanical energies exist." (Ibid.). (See also table 1 in Wall's article for another example ranking of energy quality).
Transformity as an energy measure of hierarchical rank
Like Ohta, H.T.Odum also sought to order energy form conversions according to their quality; however, his hierarchical scale for ranking was based on extending ecological food chain concepts to thermodynamics rather than simply on relative ease of transformation. For H.T.Odum, energy quality rank is based on the amount of energy of one form required to generate a unit of another energy form. The ratio of one energy form input to a different energy form output was what H.T.Odum and colleagues called transformity: "the EMERGY per unit energy in units of emjoules per joule" (H.T.Odum 1988, p. 1135).
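In symbols, with $M$ the emergy (in emjoules) required to generate a flow and $E$ the energy (in joules) of that flow, the transformity is:

$$\tau = \frac{M}{E} \quad \text{(emjoules per joule)}$$

Higher transformity marks a higher position in the energy hierarchy, since more energy of lower-quality forms has been transformed to sustain each joule of the product.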
See also
EKOenergy ecolabel for energy
Green energy
Eugene Green Energy Standard
ISO 14001
Monism
Emergy
Renewable energy
Renewable energy development
Transformity
Thermodynamics
Energy accounting
Energy economics
Pirsig's metaphysics of Quality
References
M.T. Brown and S. Ulgiati (2004) 'Energy quality, emergy, and transformity: H.T. Odum's contributions to quantifying and understanding systems, Ecological Modelling, Vol. 178, pp. 201–213.
C. J. Cleveland, R. K. Kaufmann, and D. I. Stern (2000) 'Aggregation and the role of energy in the economy', Ecological Economics, Vol. 32, pp. 301–318.
A.W. Culp Jr. (1979) Principles of Energy Conversion, McGraw-Hill Book Company
I.Dincer and Y.A. Cengel (2001) 'Energy, Entropy and Exergy Concepts and Their Roles in Thermal Engineering', Entropy, Vol. 3, pp. 116–149.
B.Fleay (2006) Senate Rural and Regional Affairs and Transport Committee Inquiry into Australia’s Future Oil Supply and Alternative transport Fuels
S.Glasstone (1937) The Electrochemistry of Solutions, Methuen, Great Britain.
S.E.Jorgensen and G.Bendoricchio (2001) Fundamentals of Ecological Modelling, Third Edition, Developments in Environmental Modelling 21, Elsevier, Oxford, UK.
T.Ohta (1994) Energy Technology:Sources, Systems and Frontier Conversion, Pergamon, Elsevier, Great Britain.
H.T.Odum (1975a) Energy Quality and Carrying Capacity of the Earth, A response at prize awarding ceremony of Institute La Vie, Paris.
H.T.Odum (1975b) Energy Quality Interactions of Sunlight, Water, Fossil Fuel and Land, from Proceedings of the conference on Water Requirements for Lower Colorado River Basin Energy Needs.
H.T.Odum (1988) 'Self-Organization, Transformity, and Information', Science, Vol. 242, pp. 1132–1139.
H.T.Odum (1994) Ecological and General Systems: An introduction to Systems Ecology, Colorado University Press, (especially page 251).
D.M. Scienceman (1997) 'Letters to the Editor: Emergy definition', Ecological Engineering, 9, pp. 209–212.
A. Thumann (1984) Fundamentals of Energy Engineering.
Environmental economics
Industrial ecology
Natural resources
Resource economics
Thermodynamics
Energy economics | Energy quality | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 2,911 | [
"Energy economics",
"Industrial engineering",
"Environmental economics",
"Thermodynamics",
"Environmental engineering",
"Industrial ecology",
"Environmental social science",
"Dynamical systems"
] |
3,254,460 | https://en.wikipedia.org/wiki/Uranium%20Medical%20Research%20Centre | The Uranium Medical Research Centre (UMRC) is an independent non-profit organization founded in 1997 to provide objective and expert scientific and medical research on uranium and on radionuclides produced by the process of radioactive decay and fission. UMRC is also a registered charity in the United States and Canada. The founder of UMRC, Asaf Durakovic, claimed on CNN that: "Inhalation of uranium dust is harmful.... Even in the amount of one atom".
Vision
UMRC states at its website that its vision for the world, "is a full awareness of the risks of using nuclear products and by-products AND to contain the still reversible alterations of the earth's biosphere since the advent of nuclear events and the resulting contamination".
They go on to state further that: "There needs to be an appreciation of the enormous effects and damage of uranium on the environment and human health. Governments, scientific communities, and the general public need to understand the many forms of contamination and specific effects. Continued abuses of uranium and radioisotopes will only lead to the steady degradation and eventual end of meaningful life on earth." www.UMRC.net
References
External links
Official site
Asaf Durakovic - Interview About Depleted Uranium in Iraq, on DemocracyNow! January 2003
Charities based in Canada
Radioactivity
Research institutes established in 1997 | Uranium Medical Research Centre | [
"Physics",
"Chemistry"
] | 274 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Radioactivity",
"Nuclear physics"
] |
3,255,137 | https://en.wikipedia.org/wiki/Papanicolaou%20stain | Papanicolaou stain (also Papanicolaou's stain and Pap stain) is a multichromatic (multicolored) cytological staining technique developed by George Papanicolaou in 1942. The Papanicolaou stain is one of the most widely used stains in cytology, where it is used to aid pathologists in making a diagnosis. Although most notable for its use in the detection of cervical cancer in the Pap test or Pap smear, it is also used to stain non-gynecological specimen preparations from a variety of bodily secretions and from small needle biopsies of organs and tissues. Papanicolaou published three formulations of this stain in 1942, 1954, and 1960.
Usage
Pap staining is used to differentiate cells in smear preparations (in which samples are spread or smeared onto a glass microscope slide) from various bodily secretions and needle biopsies; the specimens may include gynecological smears (Pap smears), sputum, brushings, washings, urine, cerebrospinal fluid, abdominal fluid, pleural fluid, synovial fluid, seminal fluid, fine needle aspirations, tumor touch samples, or other materials containing loose cells.
The pap stain is not fully standardized and comes in several formulations, differing in the exact dyes used, their ratios, and timing of the process. Pap staining is usually associated with cytopathology in which loose cells are examined, but the stain has also been modified and used on tissue slices.
Pap test
Pap staining is used in the Pap smear (or Pap test) and is a reliable technique in cervical cancer screening in gynecology.
Generalized staining method
The classic form of the Papanicolaou stain involves five stains in three solutions.
The first staining solution contains haematoxylin which stains cell nuclei. Papanicolaou used Harris's hematoxylin in all three formulations of the stain he published.
The second staining solution (designated OG-6) contains Orange G in 95% ethyl alcohol with a small amount of phosphotungstic acid. In the designation OG-6, the OG signifies Orange G and the '6' denotes the concentration of phosphotungstic acid added; other variants are OG-5 and OG-8.
The third staining solution is composed of three dyes, Eosin Y, Light Green SF yellowish, and Bismarck brown Y, in 95% ethyl alcohol with a small amount of phosphotungstic acid and lithium carbonate. This solution is designated EA, followed by a number that denotes the proportion of the dyes; formulations include EA-36, EA-50, and EA-65.
The counterstains are dissolved in 95% ethyl alcohol which prevents cells from over staining which would obscure nuclear detail and cell outlines especially in the case when cells are overlapping on the slide. Phosphotungstic acid is added to adjust the pH of counterstains and helps to optimize the color intensity. The EA counterstain contains Bismarck brown and phosphotungstic acid, which when in combination, cause both to precipitate out of solution, reducing the useful life of the mixture.
Results
The stain should result in cells that are fairly transparent so even thicker specimens with overlapping cells can be interpreted. Cell nuclei should be crisp, blue to black on color and the chromatin patterns of the nucleus should be well defined. Cell cytoplasm stains blue-green and keratin stains orange in color.
Eosin Y stains the superficial epithelial squamous cells, nucleoli, cilia, and red blood cells. Light Green SF yellowish confers a blue staining for the cytoplasm of active cells such as columnar cells, parabasal squamous cells, and intermediate squamous cells. Superficial cells are orange to pink, and intermediate and parabasal cells are turquoise green to blue.
Ultrafast Papanicolaou stain
Ultrafast Papanicolaou stain is an alternative for the fine needle aspiration samples, developed to achieve comparable visual clarity in significantly shorter time. The process differs in rehydration of the air-dried smear with saline, use 4% formaldehyde in 65% ethanol fixative, and use of Richard-Allan Hematoxylin-2 and Cyto-Stain, resulting in a 90-second process yielding transparent polychromatic stains.
Papers by George N. Papanicolaou describing his stain
Papanicolaou, George N. "A new procedure for staining vaginal smears." Science 95.2469 (1942): 438–439.
Papanicolaou, George N. "The cell smear method of diagnosing cancer." American Journal of Public Health and the Nation's Health 38.2 (1948): 202–205.
Papanicolaou, George N. "Atlas of exfoliative cytology." Published for the commonwealth fund by Harvard University Press. (1954).
Papanicolaou, George N. "Memorandum on staining." Atlas of exfoliative cytology. Cambridge, MA: Harvard University Press, Supplement II (1960): 12.
See also
Diff-Quik— Romanowsky staining method commonly used in cytology
References
Staining
Cytopathology
Microbiology techniques
Laboratory techniques
Histopathology
Staining dyes
Cervical cancer | Papanicolaou stain | [
"Chemistry",
"Biology"
] | 1,151 | [
"Staining",
"Microbiology techniques",
"nan",
"Microscopy",
"Cell imaging",
"Histopathology"
] |
3,255,700 | https://en.wikipedia.org/wiki/Gain%E2%80%93bandwidth%20product | The gain–bandwidth product (designated as GBWP, GBW, GBP, or GB) for an amplifier is a figure of merit calculated by multiplying the amplifier's bandwidth and the gain at which the bandwidth is measured.
For devices such as operational amplifiers that are designed to have a simple one-pole frequency response, the gain–bandwidth product is nearly independent of the gain at which it is measured; in such devices the gain–bandwidth product will also be equal to the unity-gain bandwidth of the amplifier (the bandwidth within which the amplifier gain is at least 1).
For an amplifier in which negative feedback reduces the gain to below the open-loop gain, the gain–bandwidth product of the closed-loop amplifier will be approximately equal to that of the open-loop amplifier.
"The parameter characterizing the frequency dependence of the operational amplifier gain is the finite gain–bandwidth product (GB)."
Relevance to design
This quantity is commonly specified for operational amplifiers, and allows circuit designers to determine the maximum gain that can be extracted from the device for a given frequency (or bandwidth) and vice versa.
When adding LC circuits to the input and output of an amplifier the gain rises and the bandwidth decreases, but the product is generally bounded by the gain–bandwidth product.
Examples
If the GBWP of an operational amplifier is 1 MHz, it means that the gain of the device falls to unity at 1 MHz. Hence, when the device is wired for unity gain, it will work up to 1 MHz (GBWP = gain × bandwidth, therefore if BW = 1 MHz, then gain = 1) without excessively distorting the signal. The same device when wired for a gain of 10 will work only up to 100 kHz, in accordance with the GBW product formula. Further, if the maximum frequency of operation is 1 Hz, then the maximum gain that can be extracted from the device is 1,000,000.
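The trade-off described in this example is a one-line calculation; the small illustrative helper below (names invented here) makes it explicit:

```python
def max_closed_loop_gain(gbwp_hz: float, signal_bandwidth_hz: float) -> float:
    """Largest flat closed-loop gain a one-pole op-amp can support."""
    return gbwp_hz / signal_bandwidth_hz

GBWP = 1e6  # 1 MHz gain-bandwidth product, as in the example above
for bw in (1e6, 1e5, 1.0):
    print(f"bandwidth {bw:>9.0f} Hz -> max gain {max_closed_loop_gain(GBWP, bw):g}")
```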
We can also analytically show that for frequencies $f \gg f_c$ the GBWP is constant.

Let $G(f)$ be a first-order transfer function given by:

$$G(f) = \frac{G_0}{\sqrt{1 + (f/f_c)^2}}$$

We will show that:

$$\mathrm{GBWP} = G(f)\cdot f \approx G_0 f_c \quad \text{for } f \gg f_c$$

Proof: We will expand $G(f)\cdot f = G_0 f_c / \sqrt{1 + (f_c/f)^2}$ using a Taylor series in $f_c/f$ and retain the constant and first correction terms, to obtain:

$$G(f)\cdot f \approx G_0 f_c \left(1 - \frac{1}{2}\left(\frac{f_c}{f}\right)^2 + \frac{3}{8}\left(\frac{f_c}{f}\right)^4 - \cdots\right)$$

Example for $f = 5 f_c$: $G(f)\cdot f \approx G_0 f_c\,(1 - 0.02 + 0.0006)$.

Note that the error in this case is only about 2% for the constant term, and using the second term, $-\frac{1}{2}(f_c/f)^2$, the error drops to 0.06%.
Transistors
For transistors, the current-gain–bandwidth product is known as the transition frequency, $f_T$.
It is calculated from the low-frequency (a few kilohertz) current gain under specified test conditions, and the cutoff frequency at which the current gain drops by 3 decibels (70% amplitude); the product of these two values can be thought of as the frequency at which the current gain would drop to 1, and the transistor current gain between the cutoff and transition frequency can be estimated by dividing $f_T$ by the frequency. Usually, transistors must be used at frequencies well below $f_T$ to be useful as amplifiers and oscillators. In a bipolar junction transistor, frequency response declines owing to the internal capacitance of the junctions. The transition frequency varies with collector current, reaching a maximum for some value and declining for greater or lesser collector current.
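The estimate described above amounts to:

$$\beta(f) \approx \frac{f_T}{f} \qquad \text{for } f_{3\,\mathrm{dB}} < f < f_T$$

For example, a transistor with $f_T = 100\,\mathrm{MHz}$ has an estimated current gain of about 10 at 10 MHz.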
References
External links
"Op-amp gain-bandwidth-product" masteringelectronicsdesign.com
Electronic amplifiers | Gain–bandwidth product | [
"Technology"
] | 685 | [
"Electronic amplifiers",
"Amplifiers"
] |
3,255,966 | https://en.wikipedia.org/wiki/Health%20technology | Health technology is defined by the World Health Organization as the "application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures, and systems developed to solve a health problem and improve quality of lives". This includes pharmaceuticals, devices, procedures, and organizational systems used in the healthcare industry, as well as computer-supported information systems. In the United States, these technologies involve standardized physical objects, as well as traditional and designed social means and methods to treat or care for patients.
Development
Pre-digital era
During the pre-digital era, patients suffered from inefficient and faulty clinical systems, processes, and conditions. Many medical errors happened in the past due to undeveloped health technologies. Some examples of these medical errors included adverse drug events and alarm fatigue. When many alarms are repeatedly triggered or activated, especially for unimportant events, workers may become desensitized to the alarms. Healthcare professionals who have alarm fatigue may ignore an alarm believing it to be insignificant, which could lead to death and dangerous situations. With technological development, an intelligent program of integration and physiologic sense-making was developed and helped reduce the number of false alarms.
Also, with greater investment in health technologies, fewer medical errors happened. Outdated paper records were replaced in many healthcare organizations by electronic health records (EHR). According to studies, this has brought many changes to healthcare. Drug administration has improved, healthcare providers can now access medical information easier, provide better treatments and faster results, and save more costs.
Improvement
To help promote and expand the adoption of health information technology, Congress passed the HITECH Act as part of the American Recovery and Reinvestment Act of 2009. HITECH stands for the Health Information Technology for Economic and Clinical Health Act. It gave the Department of Health and Human Services the authority to improve healthcare quality and efficiency through the promotion of health IT. The act provided financial incentives and penalties to motivate healthcare providers to improve healthcare. The purpose of the act was to improve quality, safety, and efficiency, and ultimately to reduce health disparities.
One of the main parts of the HITECH Act was setting the meaningful use requirement, which required EHRs to allow for the electronic exchange of health information and to submit clinical information. The purpose of HITECH is to ensure that the sharing of electronic information with patients and other clinicians is secure. HITECH also aimed to help healthcare providers have more efficient operations and reduce medical errors. The program consisted of three phases. Phase one aimed to improve healthcare quality, safety and efficiency. Phase two expanded on phase one and focused on clinical processes and ensuring the meaningful use of EHRs. Lastly, phase three focused on using Certified Electronic Health Record Technology (CEHRT) to improve health outcomes.
By 2014, the implementation of electronic records in US hospitals had risen from roughly 10% to roughly 70%.
At the beginning of 2018, healthcare providers who participated in the Medicare Promoting Interoperability Program needed to report on Quality Payment Program requirements. The program focused more on interoperability and aimed to improve patient access to health information.
Privacy of health data
Phones that can track one's whereabouts, steps, and more can serve as medical devices, and medical devices raise much the same privacy concerns as these phones. According to one study, people were willing to share personal data for scientific advancements, although they still expressed uncertainty about who would have access to their data. People are naturally cautious about giving out sensitive personal information. Phones add an extra level of threat: mobile devices continue to increase in popularity each year, and mobile devices serving as medical devices increase the chances for an attacker to gain unauthorized information.
In 2015 the Medicare Access and CHIP Reauthorization Act (MACRA) was passed, pushing towards electronic health records. In the article "Health Information Technology: Integration, Patient Empowerment, and Security", K. Marvin presented multiple polls on people's views of different types of technology entering the medical field; most responses were "somewhat likely", and very few respondents completely disagreed with technology being used in medicine. Marvin also discusses the maintenance required to protect medical data and technology against cyber attacks, as well as the need for a proper data backup system for the information.
With the Patient Protection and Affordable Care Act (ACA), also known as Obamacare, and health information technology, health care is entering the digital era, and with this development it needs to be protected. Both health information and financial information, now made digital within the health industry, may become a larger target for cyber-crime. Even with multiple types of safeguards in place, hackers still find their way in, so security needs to be constantly updated to prevent these breaches.
Policy
With the increased use of IT systems, privacy violations were increasing rapidly due to the easier access and poor management. As such, the concern of privacy has become an important topic in healthcare. Privacy breaches happen when organizations do not protect the privacy of people's data. There are four types of privacy breaches, which include unintended disclosure by authorized personnel, intended disclosure by authorized personnel, privacy data loss or theft, and virtual hacking. It became more important to protect the privacy and security of patients' data because of the high negative impact on both individuals and organizations. Stolen personal information can be used to open credit cards or other unethical behaviors. Also, individuals have to spend a large amount of money to rectify the issue. The exposure of sensitive health information also can have negative impacts on individuals' relationships, jobs, or other personal areas. For the organization, the privacy breach can cause loss of trust, customers, legal actions, and monetary fines.
HIPAA stands for the Health Insurance Portability and Accountability Act of 1996. It is a U.S. healthcare law that directs how patient data is used and includes two major rules: the privacy rule and the security rule. The privacy rule protects people's right to privacy, and the security rule determines how to protect people's privacy.
According to the HIPAA Security Rule, protected health information must have three characteristics: confidentiality, availability, and integrity. Confidentiality means keeping the data confidential, preventing data loss and access by individuals who are unauthorized to see that protected health information. Availability allows people who are authorized to access the systems and networks when and where that information is needed, even during events such as natural disasters. In cases like this, protected health information is mostly backed up onto a separate server or printed out in paper copies, so people can access it. Lastly, integrity ensures that data is not inaccurate or improperly modified due to a badly designed system or process, protecting the permanence of the patient data. Inaccurate or improperly modified data can be useless or even dangerous.
Health organizations under HIPAA also created administrative safeguards, physical safeguards, and technical safeguards to help protect the privacy of patients. Administrative safeguards typically include the security management process, security personnel, information access management, workforce training and management, and evaluation of security policies and procedures. The security management process is one of the most important examples of an administrative safeguard. It is essential for reducing the risks and vulnerabilities of the system. The processes are mostly standard operating procedures written out as training manuals. The purpose is to educate people on how to handle protected health information properly.
Physical safeguards include lock and key, card swipe, positioning of screens, confidential envelopes, and shredding of paper copies. Lock and key are common examples of physical safeguards. They can limit physical access to facilities. Lock and key are simple, but they can prevent individuals from stealing medical records. Individuals must have an actual key to access to the lock.
Lastly, technical safeguards include access control, audit controls, integrity controls, and transmission security. The access control mechanism is a common example of a technical safeguard. It allows access only by authorized personnel. The technology includes authentication and authorization. Authentication is the proof of identity, handled through confidential information like a username and password, while authorization is the act of determining whether a particular user is allowed to access certain data and perform activities in a system, such as adding and deleting records.
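As an illustration of the authentication/authorization distinction described above, here is a minimal sketch (all names and data are hypothetical, and a real system would hash credentials rather than store them in plain text):

```python
# Toy access control: authentication proves identity, authorization decides
# what the authenticated user may do. Hypothetical names; not a real system.
USERS = {"dr_lee": "s3cret"}                       # plaintext only for brevity
PERMISSIONS = {"dr_lee": {"read_record", "add_note"}}

def authenticate(username: str, password: str) -> bool:
    """Verify the claimed identity (e.g., username and password)."""
    return USERS.get(username) == password

def authorize(username: str, action: str) -> bool:
    """Check whether this user is allowed to perform the given action."""
    return action in PERMISSIONS.get(username, set())

if authenticate("dr_lee", "s3cret"):
    print(authorize("dr_lee", "read_record"))    # True
    print(authorize("dr_lee", "delete_record"))  # False
```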
Assessment
The concept of health technology assessment (HTA) was first coined in 1967 by the U.S. Congress in response to the increasing need to address the unintended and potential consequences of health technology, along with its prominent role in society. It was further institutionalized with the establishment of the congressional Office of Technology Assessment (OTA) in 1972–1973. HTA is defined as a comprehensive form of policy research that examines short- and long-term consequences of the application of technology, including benefits, costs, and risks. Due to the broad scope of technology assessment, it requires the participation of individuals besides scientists and health care practitioners such as managers and even the consumers.
Several American organizations provide health technology assessments and these include the Centers for Medicare and Medicaid Services (CMS) and the Veterans Administration through its VA Technology Assessment Program (VATAP). The models adopted by these institutions vary, although they focus on whether a medical technology being offered is therapeutically relevant. A study conducted in 2007 noted that the assessments still did not use formal economic analyses.
Aside from its development, however, assessment in the health technology industry has been viewed as sporadic and fragmented. Issues such as the determination of products that need to be developed, cost, and access, among others, have also emerged. These, some argue, need to be included in the assessment, since health technology is never purely a matter of science but also of beliefs, values, and ideologies. One of the mechanisms suggested either as an element of or an alternative to the current TAs is bioethics, which is also referred to as the "fourth-generation" evaluation framework. There are at least two dimensions to an ethical HTA. The first involves the incorporation of ethics in the methodological standards employed to assess technologies, while the second is concerned with the use of an ethical framework in research and judgment on the part of the researchers who produce information used in the industry.
In the future
The practice of medicine in the United States is currently in a major transition. This transition is due to many factors, but primarily because of the implementation and integration of health technologies into healthcare. In recent years, the widespread adoption of electronic health records (EHR) has greatly impacted healthcare. In his book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age, Robert Wachter aims to inform readers about this transition. Wachter states that there will be fewer hospitals in the future, and due to the advancement of technologies, people will be likely to go to hospitals only for major surgeries or critical illnesses. In the future, nurse call buttons will not be needed in hospitals. Instead, robots will deliver medication, take care of patients, and administer the system. In addition, the electronic health record will look different. Healthcare providers will be able to enter notes via speech-to-text transcription in real time.
Wachter stated that information will be edited collaboratively across the patient-care team to improve the quality. Also, natural language processing will be more developed to help parse out keywords. In the future, patient data will reside in the cloud, and patients as well as authorized providers and individuals will be able to access their data from any device or location. Big data analysis will constantly be improving. Artificial intelligence and machine learning will be constantly improving and developing as it receives new data. Alerts will also be more intelligent and efficient than the current systems.
Medical technology
Medical technology, or "medtech", encompasses a wide range of healthcare products and is used to treat diseases and medical conditions affecting humans. Such technologies are intended to improve the quality of healthcare delivered through earlier diagnosis, less invasive treatment options and reduction in hospital stays and rehabilitation times. Recent advances in medical technology have also focused on cost reduction. Medical technology may broadly include medical devices, information technology, biotech, and healthcare services.
The impacts of medical technology involve social and ethical issues. For example, physicians can seek objective information from technology rather than read subjective patient reports.
A major driver of the sector's growth is the consumerization of medtech. Supported by the widespread availability of smartphones and tablets, providers can reach a large audience at low cost, a trend that stands to be consolidated as wearable technologies spread throughout the market.
In the years 2010–2015, venture funding grew 200%, allowing US$11.7 billion to flow into health tech businesses from over 30,000 investors in the space.
Types of technology
Medical technology has evolved into smaller portable devices, for instance, smartphones, touchscreens, tablets, laptops, digital ink, voice and face recognition and more. With this technology, innovations like electronic health records (EHR), health information exchange (HIE), Nationwide Health Information Network (NwHIN), personal health records (PHRs), patient portals, nanomedicine, genome-based personalized medicine, Geographical Positioning System (GPS), radio frequency identification (RFID), telemedicine, clinical decision support (CDS), mobile home health care and cloud computing came to exist.
Medical imaging and magnetic resonance imaging (MRI) are long-used and proven medical technologies for medical research, patient reviewing, and treatment analysis. With the advancement of imaging technologies, including faster processing of more data, higher-resolution images, and specialist automation software, the capabilities of medical imaging technology are growing and yielding better results. As the imaging hardware and software evolve, patients will need less contrast agent and will also spend less time and money.
Further advancement in healthcare is electromagnetic (EM) technology guidance systems, used in medical procedures, allowing real-time visualization and navigation for the placement of medical devices inside the human body. For example, a neuro-navigated catheter is inserted into the brain, or a feeding tube placement in the stomach or small intestine, as demonstrated by the ENvue System. ENvue is an advanced electromagnetic navigation system for enteral feeding tube placement. The system uses a field generator and several EM sensors enabling proper scaling of the display to the patient’s body contour, and real-time view of the feeding tube tip location and direction, which helps the medical staff ensure correct placement and avoid placement of the tube in the lungs.
3D printing is another major development in healthcare. It can be used to produce specialized splints, prostheses, parts for medical devices and inert implants. The end goal of 3D printing is being able to print out customized replaceable body parts. The section below explains more about 3D printing in healthcare. New types of technologies also include artificial intelligence and robots.
3D printing
3D printing is the use of specialized machines, software programs, and materials to automate the process of building certain objects. It is growing rapidly in prosthetics, medical implants, novel drug formulations, and the bioprinting of human tissues and organs.
Companies such as Surgical Theater provide new technology that is capable of capturing 3D virtual images of patients' brains to use as practice for operations. 3D printing allows medical companies to produce prototypes to practice before an operation created with artificial tissue.
3D printing technologies are well suited to bio-medicine because the materials used allow fabrication with control over many design features. 3D printing also has the benefits of affordable customization, more efficient designs, and time savings. 3D printing is precise enough to design pills that house several drugs with different release times. The technology allows the pills to transport drugs to the targeted area and degrade safely in the body. As such, pills can be designed more efficiently and conveniently. In the future, doctors might give patients a digital file of printing instructions instead of a prescription.
In addition, 3D printing will become more useful in medical implants. One example is a surgical team that designed a tracheal splint, made by 3D printing, to improve the respiration of a patient. This example shows the potential of 3D printing, which allows physicians to develop new implant and instrument designs easily.
Overall, in the future of medicine, 3D printing will be crucial as it can be used in surgical planning, artificial and prosthetic devices, drugs, and medical implants.
Artificial intelligence
The scale and capabilities of artificial intelligence (AI) systems are growing rapidly, notably due to advances in big data. In healthcare, it is expected to provide easier accessibility of information, and to improve treatments while reducing cost. The integration of AI in healthcare tends to improve the quality and efficiency of complex tasks.
Risks related to AI include the potential lack of accuracy, and privacy concerns related to the collected data. Delegating decisions to AI systems may also undermine accountability. Moreover, AI systems sometimes learn undesired behaviors from their training data. For example, an AI trained to detect skin diseases was found to have a strong tendency to classify images containing a ruler as cancerous, since pictures of malignancies typically include a ruler to show the scale.
Applications
AI brings many benefits to the healthcare industry. AI helps to detect diseases, manage chronic conditions, deliver health services, and discover drugs. Furthermore, AI has the potential to address important health challenges. In healthcare organizations, AI is able to plan and relocate resources. AI is able to match patients with healthcare providers that meet their needs. AI also helps improve the healthcare experience, for example through apps that identify patients' anxieties. In medical research, AI helps to analyze and evaluate patterns in complex data. For instance, AI is important in drug discovery because it can search relevant studies and analyze different kinds of data. In clinical care, AI helps to detect diseases and analyze clinical data, publications, and guidelines. As such, AI aids in finding the best treatments for patients. Other uses of AI in clinical care include medical imaging, echocardiography, screening, and surgery. The ability of AlphaFold to predict how proteins fold has also significantly accelerated medical research.
Education
Medical virtual reality provides doctors with multiple surgical scenarios that could happen and allows them to practice and prepare for these situations. It also gives medical students hands-on experience of different procedures without the consequences of making potential mistakes. ORamaVR is one of the leading companies that employ such medical virtual reality technologies to transform medical education (knowledge) and training (skills), aiming to improve patient outcomes, reduce surgical errors and training time, and democratize medical education and training.
Robots
Modern robotics has made huge progress and contributions to healthcare. Robots can help doctors perform a variety of tasks, and robotics adoption is increasing tremendously in hospitals. The following are different ways robots are used to improve healthcare:
Surgical robots are one such robotic system; they allow a surgeon to bend and rotate tissues in a more flexible and efficient way. The system is equipped with a 3D magnification vision system that can translate the hand movements of the surgeon precisely in order to perform surgery with minimal incisions. Other robotic systems include the ability to diagnose and treat cancers. Many scientists have begun working on next-generation robot systems to assist the surgeon in performing knee and other bone replacement surgeries.
Assistant robots will also be important to help reduce the workload for regular medical staff. They can help nurses with simple and time-consuming tasks, such as carrying multiple racks of medicines, lab specimens, or other sensitive materials.
In the near future, robotic pills are expected to reduce the number of surgeries. They can move inside a patient and be delivered to the desired area. In addition, they can conduct biopsies, film the area, and clear clogged arteries.
Overall, medical robots are extremely useful in assisting physicians; however, it may take time for clinicians to be professionally trained to work with medical robots and for the robots to respond to a clinician's instructions. As such, many researchers and startups are working constantly to provide solutions to these challenges.
Assistive technologies
Assistive technologies are products designed to provide accessibility to individuals who have physical or cognitive problems or disabilities, aiming to improve their quality of life. The range of assistive technologies is broad, from low-tech solutions to physical hardware and technical devices. There are four areas of assistive technologies: visual impairment, hearing impairment, physical limitations, and cognitive limitations. There are many benefits of assistive technologies. They enable individuals to care for themselves, work, study, access information easily, improve independence and communication, and, lastly, participate fully in community life.
Consumer-driven healthcare software
As part of an ongoing trend towards consumer-driven healthcare, websites and apps which provide more information on health care quality and price to help patients choose their providers have grown. As of 2017, the sites with the most reviews, in descending order, included Healthgrades, Vitals.com, and RateMDs.com. Yelp, Google, and Facebook also host reviews and carry a large amount of traffic, although as of 2017 they had fewer medical reviews per doctor. Disputes around online reviews can lead to allegations of defamation by health professionals. In 2018 Vitals.com was purchased by WebMD, which is owned by Internet Brands.
Patient safety organizations and government programs which have historically assessed quality have made their data more accessible over the internet; notable examples include the HospitalCompare by CMS and the LeapFrog Group's hospitalsafetygrade.org.
Patient-oriented software may also help in other ways, including general education and appointments.
Disclosure of legal disputes including medical license complaints or malpractice lawsuits has also been made easier. Every state discloses license status and at least some disciplinary action to the public, but as of 2018 this was not accessible via the internet for a few states. Consumers can look up medical licenses in a national database, DocInfo.org, maintained by the medical licensing organizations, which contains limited details. Other tools include DocFinder at docfinder.docboard.org and certificationmatters.org from the American Board of Medical Specialties. In some cases more information is available from a mailed or walk-in request than from the internet; for example, the Medical Board of California removes dismissed accusations from website profiles, but these are still available from a written or walk-in request, or a lookup in a separate database. The trend toward disclosure is controversial and generates significant public debate, particularly about opening up the National Practitioner Data Bank. In 1996, Massachusetts became the first state to require detailed disclosure of malpractice claims.
Self-monitoring
Smartphones, tablets, and wearable computers have allowed people to monitor their health. These devices run numerous applications designed to provide simple health services and to monitor one's health, identifying critical health problems as early as possible. An example of this is Fitbit, a fitness tracker that is worn on the user's wrist. This wearable technology allows people to track their steps, heart rate, floors climbed, miles walked, active minutes, and even sleep patterns. The data collected and analyzed allow users not just to keep track of their health but also to help manage it, particularly through its capability to identify health risk factors.
There is also the case of the Internet, which serves as a repository of information and expert content that people can use to "self-diagnose" instead of going to their doctor. For instance, one need only enumerate symptoms as search parameters at Google and the search engine can identify the illness from the list of contents uploaded to the World Wide Web, particularly those provided by expert/medical sources. These advances may eventually have some effect on doctor visits from patients and change the role of the health professionals from "gatekeeper to secondary care to facilitator of information interpretation and decision-making." Apart from basic services provided by Google in Search, there are also companies such as WebMD that already offer dedicated symptom-checking apps.
Technology testing
All medical equipment introduced commercially must meet both United States and international regulations. Devices are tested for their materials, their effects on the human body, all of their components (including any devices incorporated within them), and their mechanical aspects.
The Medical Device User Fee and Modernization Act of 2002 was created to speed up the FDA's approval process of medical technology by introducing sponsor user fees for a faster review time with predetermined performance targets for review time. In addition, 36 devices and apps were approved by the FDA in 2016.
Careers
There are numerous careers in health technology in the US. Listed below are some job titles and average salaries.
Athletic trainer, mean salary: $41,340. Athletic trainers treat athletes and other individuals who have sustained injuries. They also teach people how to prevent injuries. They perform their job under the supervision of physicians.
Dental hygienist, mean salary: $67,340. Dental hygienists provide preventive dental care and teach patients how to maintain good oral health. They usually work under dentists' supervision.
Clinical laboratory scientists, technicians, and technologists, mean salary: $51,770. Lab technicians and technologists perform laboratory tests and procedures. Technicians work under the supervision of a laboratory technologist or laboratory manager.
Nuclear medicine technologist, mean salary: $67,910. Nuclear medicine technologists prepare and administer radiopharmaceuticals, radioactive drugs, to patients to treat or diagnose diseases.
Pharmacy technician, mean salary: $28,070. Pharmacy technicians assist pharmacists with the preparation of prescription medications for customers.
Allied professions
The term medical technology may also refer to the duties performed by clinical laboratory professionals or medical technologists in various settings within the public and private sectors. The work of these professionals encompasses clinical applications of chemistry, genetics, hematology, immunohematology (blood banking), immunology, microbiology, serology, urinalysis, and miscellaneous body fluid analysis. Depending on location, educational level, and certifying body, these professionals may be referred to as biomedical scientists, medical laboratory scientists (MLS), medical technologists (MT), medical laboratory technologists and medical laboratory technicians.
References
Health care occupations
Biomedical engineering
United States | Health technology | [
"Engineering",
"Biology"
] | 5,323 | [
"Biological engineering",
"Medical technology",
"Biomedical engineering"
] |
3,257,325 | https://en.wikipedia.org/wiki/Double%20descent | Double descent in statistics and machine learning is the phenomenon where a model with a small number of parameters and a model with an extremely large number of parameters have a small test error, but a model whose number of parameters is about the same as the number of data points used to train the model will have a large error. This phenomenon has been considered surprising, as it contradicts assumptions about overfitting in classical machine learning.
History
Early observations of what would later be called double descent in specific models date back to 1989.
The term "double descent" was coined by Belkin et. al. in 2019, when the phenomenon gained popularity as a broader concept exhibited by many models. The latter development was prompted by a perceived contradiction between the conventional wisdom that too many parameters in the model result in a significant overfitting error (an extrapolation of the bias–variance tradeoff), and the empirical observations in the 2010s that some modern machine learning techniques tend to perform better with larger models.
Theoretical models
Double descent occurs in linear regression with isotropic Gaussian covariates and isotropic Gaussian noise.
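A minimal numerical sketch of this setting (my own illustration with assumed sizes and noise level, not taken from the literature cited here): minimum-norm least squares on an increasing number of Gaussian features, where test error typically peaks when the parameter count matches the number of training points.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 40, 2000, 320
w_true = rng.normal(size=d) / np.sqrt(d)

X_tr = rng.normal(size=(n_train, d))
X_te = rng.normal(size=(n_test, d))
y_tr = X_tr @ w_true + 0.1 * rng.normal(size=n_train)
y_te = X_te @ w_true + 0.1 * rng.normal(size=n_test)

for p in (5, 10, 20, 40, 80, 160, 320):   # number of fitted parameters
    # pinv gives the least-squares fit for p < n_train and the
    # minimum-norm interpolating fit for p > n_train.
    w_hat = np.linalg.pinv(X_tr[:, :p]) @ y_tr
    mse = np.mean((X_te[:, :p] @ w_hat - y_te) ** 2)
    print(f"p = {p:3d}   test MSE = {mse:.3f}")
# The error typically spikes near p == n_train (40) and descends again
# as p grows past it — the double-descent curve.
```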
A model of double descent at the thermodynamic limit has been analyzed using the replica trick, and the result has been confirmed numerically.
Empirical examples
The scaling behavior of double descent has been found to follow a broken neural scaling law functional form.
References
Further reading
External links
Understanding "Deep Double Descent" at evhub.
Model selection
Machine learning
Statistical classification | Double descent | [
"Engineering"
] | 302 | [
"Artificial intelligence engineering",
"Machine learning"
] |
201,689 | https://en.wikipedia.org/wiki/Polypropylene | Polypropylene (PP), also known as polypropene, is a thermoplastic polymer used in a wide variety of applications. It is produced via chain-growth polymerization from the monomer propylene.
Polypropylene belongs to the group of polyolefins and is partially crystalline and non-polar. Its properties are similar to polyethylene, but it is slightly harder and more heat-resistant. It is a white, mechanically rugged material and has a high chemical resistance.
Polypropylene is the second-most widely produced commodity plastic (after polyethylene).
History
Phillips Petroleum chemists J. Paul Hogan and Robert Banks first demonstrated the polymerization of propylene in 1951. The stereoselective polymerization to isotactic polypropylene was discovered by Giulio Natta and Karl Rehn in March 1954. This pioneering discovery led to large-scale commercial production of isotactic polypropylene by the Italian firm Montecatini from 1957 onwards. Syndiotactic polypropylene was also first synthesized by Natta. Interest in polypropylene development continues to the present; for example, making polypropylene from bio-based resources is a topic of interest in the 21st century.
Chemical and physical properties
Polypropylene is in many aspects similar to polyethylene, especially in solution behavior and electrical properties. The methyl group improves mechanical properties and thermal resistance, although the chemical resistance decreases. The properties of polypropylene depend on the molecular weight and molecular weight distribution, crystallinity, type and proportion of comonomer (if used) and the isotacticity. In isotactic polypropylene, for example, the methyl groups are oriented on one side of the carbon backbone. This arrangement creates a greater degree of crystallinity and results in a stiffer material that is more resistant to creep than both atactic polypropylene and polyethylene.
Mechanical properties
The density of PP is between 0.895 and 0.93 g/cm3. Therefore, PP is the commodity plastic with the lowest density. With lower density, molded parts with lower weight and more parts from a given mass of plastic can be produced. Unlike polyethylene, crystalline and amorphous regions differ only slightly in their density. However, the density of polyethylene can significantly change with fillers.
The Young's modulus of PP is between 1300 and 1800 N/mm².
Polypropylene is normally tough and flexible, especially when copolymerized with ethylene. This allows polypropylene to be used as an engineering plastic, competing with materials such as acrylonitrile butadiene styrene (ABS). Polypropylene is reasonably economical.
Polypropylene has good resistance to fatigue.
Thermal properties
The melting point of polypropylene occurs over a range, so the melting point is determined by finding the highest temperature of a differential scanning calorimetry chart. Perfectly isotactic PP has a melting point of 171 °C. Commercial isotactic PP has a melting point that ranges from 160 to 166 °C, depending on atactic material and crystallinity. Syndiotactic PP with a crystallinity of 30% has a melting point of 130 °C. Below 0 °C, PP becomes brittle.
The thermal expansion of PP is significant, but somewhat less than that of polyethylene.
Chemical properties
Propylene molecules prefer to join together "head-to-tail", giving a chain with methyl groups on every other carbon, but some randomness occurs.
Polypropylene at room temperature is resistant to fats and almost all organic solvents, apart from strong oxidants. Non-oxidizing acids and bases can be stored in containers made of PP. At elevated temperature, PP can be dissolved in nonpolar solvents such as xylene, tetralin and decalin. Due to the tertiary carbon atom, PP is chemically less resistant than PE (see Markovnikov rule).
Most commercial polypropylene is isotactic and has an intermediate level of crystallinity between that of low-density polyethylene (LDPE) and high-density polyethylene (HDPE). Isotactic and atactic polypropylene are soluble in p-xylene at 140 °C; the isotactic fraction precipitates when the solution is cooled to 25 °C, while the atactic portion remains soluble.
The melt flow rate (MFR) or melt flow index (MFI) is a measure of molecular weight of polypropylene. The measure helps to determine how easily the molten raw material will flow during processing. Polypropylene with higher MFR will fill the plastic mold more easily during the injection or blow-molding production process. As the melt flow increases, however, some physical properties, like impact strength, will decrease.
There are three general types of polypropylene: homopolymer, random copolymer, and block copolymer. The comonomer is typically ethylene. Ethylene-propylene rubber or EPDM added to polypropylene homopolymer increases its low-temperature impact strength. Randomly polymerized ethylene monomer added to polypropylene homopolymer decreases the polymer crystallinity, lowers the melting point and makes the polymer more transparent.
Molecular structure – tacticity
Polypropylene can be categorized as atactic polypropylene (aPP), syndiotactic polypropylene (sPP) and isotactic polypropylene (iPP). In atactic polypropylene, the methyl group (-CH3) is randomly aligned; it alternates in syndiotactic polypropylene and lies evenly on one side in isotactic polypropylene. This has an impact on the crystallinity (amorphous or semi-crystalline) and the thermal properties (expressed as glass transition point Tg and melting point Tm).
The term tacticity describes for polypropylene how the methyl group is oriented in the polymer chain. Commercial polypropylene is usually isotactic. This article therefore always refers to isotactic polypropylene, unless stated otherwise. The tacticity is usually indicated in percent, using the isotactic index (according to DIN 16774). The index is measured by determining the fraction of the polymer insoluble in boiling heptane. Commercially available polypropylenes usually have an isotactic index between 85 and 95%. The tacticity affects the polymer's physical properties. As the methyl group in isotactic polypropylene is consistently located on the same side, it forces the macromolecule into a helical shape, as also found in starch. An isotactic structure leads to a semi-crystalline polymer. The higher the isotacticity (the isotactic fraction), the greater the crystallinity, and thus also the softening point, rigidity, E-modulus, and hardness.
Atactic polypropylene, on the other hand, lacks any regularity, which prevents it from crystallization, thereby creating an amorphous material.
Crystal structure of polypropylene
Isotactic polypropylene has a high degree of crystallinity, in industrial products 30–60%. Syndiotactic polypropylene is slightly less crystalline, atactic PP is amorphous (not crystalline).
Isotactic polypropylene (iPP)
Isotactic polypropylene can exist in various crystalline modifications which differ in the molecular arrangement of the polymer chains. The crystalline modifications are categorized into the α-, β- and γ-modifications as well as mesomorphic (smectic) forms. The α-modification is predominant in iPP. Such crystals are built from lamellae in the form of folded chains. A characteristic anomaly is that the lamellae are arranged in the so-called "cross-hatched" structure. The melting point of α-crystalline regions is given as 185 to 220 °C, the density as 0.936 to 0.946 g·cm−3. The β-modification is in comparison somewhat less ordered, as a result of which it forms faster and has a lower melting point of 170 to 200 °C. The formation of the β-modification can be promoted by nucleating agents, suitable temperatures, and shear stress. The γ-modification is hardly formed under the conditions used in industry and is poorly understood. The mesomorphic modification, however, occurs often in industrial processing, since the plastic is usually cooled quickly. The degree of order of the mesomorphic phase ranges between the crystalline and the amorphous phase; its density, at 0.916 g·cm−3, is comparatively low. The mesomorphic phase is considered the cause of the transparency in rapidly cooled films (due to low order and small crystallites).
Syndiotactic polypropylene (sPP)
Syndiotactic polypropylene was discovered much later than isotactic PP and could only be prepared by using metallocene catalysts. Syndiotactic PP has a lower melting point, with 161 to 186 °C, depending on the degree of tacticity.
Atactic polypropylene (aPP)
Atactic polypropylene is amorphous and therefore has no crystal structure. Due to its lack of crystallinity, it is readily soluble even at moderate temperatures, which allows it to be separated as a by-product from isotactic polypropylene by extraction. However, the aPP obtained this way is not completely amorphous but can still contain 15% crystalline parts. Atactic polypropylene can also be produced selectively using metallocene catalysts; atactic polypropylene produced this way has a considerably higher molecular weight.
Atactic polypropylene has lower density, melting point and softening temperature than the crystalline types and is tacky and rubber-like at room temperature. It is a colorless, cloudy material and can be used between −15 and +120 °C. Atactic polypropylene is used as a sealant, as an insulating material for automobiles and as an additive to bitumen.
Copolymers
Polypropylene copolymers are in use as well. A particularly important one is polypropylene random copolymer (PPR or PP-R), a random copolymer with polyethylene used for plastic pipework.
PP-RCT
Polypropylene random crystallinity temperature (PP-RCT), also used for plastic pipework, is a new form of this plastic. It achieves higher strength at high temperature by β-crystallization.
Degradation
Polypropylene is liable to chain degradation from exposure to temperatures above 100 °C. Oxidation usually occurs at the tertiary carbon centers leading to chain breaking via reaction with oxygen. In external applications, degradation is evidenced by cracks and crazing. It may be protected by the use of various polymer stabilizers, including UV-absorbing additives and anti-oxidants such as phosphites (e.g. tris(2,4-di-tert-butylphenyl)phosphite) and hindered phenols, which prevent polymer degradation.
Microbial communities isolated from soil samples mixed with starch have been shown to be capable of degrading polypropylene.
Polypropylene has been reported to degrade while in the human body as implantable mesh devices. The degraded material forms a tree bark-like layer at the surface of mesh fibers.
Optical properties
PP can be made translucent when uncolored but is not as readily made transparent as polystyrene, acrylic, or certain other plastics. It is often opaque or colored using pigments.
Production
Polypropylene is produced by the chain-growth polymerization of propene:

n CH2=CHCH3 → −[CH2−CH(CH3)]n−
The industrial production processes can be grouped into gas phase polymerization, bulk polymerization and slurry polymerization. All state-of-the-art processes use either gas-phase or bulk reactor systems.
In gas-phase and slurry-reactors, the polymer is formed around heterogeneous catalyst particles. The gas-phase polymerization is carried out in a fluidized bed reactor, propene is passed over a bed containing the heterogeneous (solid) catalyst and the formed polymer is separated as a fine powder and then converted into pellets. Unreacted gas is recycled and fed back into the reactor.
In bulk polymerization, liquid propene acts as a solvent to prevent the precipitation of the polymer. The polymerization proceeds at 60 to 80 °C and 30–40 atm are applied to keep the propene in the liquid state. For the bulk polymerization, typically loop reactors are applied. The bulk polymerization is limited to a maximum of 5% ethene as comonomer due to a limited solubility of the polymer in the liquid propene.
In the slurry polymerization, typically C4–C6 alkanes (butane, pentane or hexane) are utilized as inert diluent to suspend the growing polymer particles. Propene is introduced into the mixture as a gas.
Catalysts
The properties of PP are strongly affected by its tacticity, the orientation of the methyl groups (CH3) relative to the methyl groups in neighboring monomer units. A Ziegler–Natta catalyst is able to restrict linking of monomer molecules to a specific orientation, either isotactic, when all methyl groups are positioned at the same side with respect to the backbone of the polymer chain, or syndiotactic, when the positions of the methyl groups alternate.
Commercially available isotactic polypropylene is made with two types of Ziegler-Natta catalysts. The first group of the catalysts encompasses solid (mostly supported) catalysts and certain types of soluble metallocene catalysts. Such isotactic macromolecules coil into a helical shape; these helices then line up next to one another to form the crystals that give commercial isotactic polypropylene many of its desirable properties.
Modern supported Ziegler-Natta catalysts developed for the polymerization of propylene and other 1-alkenes to isotactic polymers usually use TiCl4 as an active ingredient and MgCl2 as a support. The catalysts also contain organic modifiers, either aromatic acid esters and diesters or ethers. These catalysts are activated with special co-catalysts containing an organoaluminium compound such as Al(C2H5)3 and the second type of a modifier. The catalysts are differentiated depending on the procedure used for fashioning catalyst particles from MgCl2 and depending on the type of organic modifiers employed during catalyst preparation and use in polymerization reactions. The two most important technological characteristics of all the supported catalysts are high productivity and a high fraction of the crystalline isotactic polymer they produce at 70–80 °C under standard polymerization conditions. Commercial synthesis of isotactic polypropylene is usually carried out either in the medium of liquid propylene or in gas-phase reactors.
Commercial synthesis of syndiotactic polypropylene is carried out with the use of a special class of metallocene catalysts. They employ bridged bis-metallocene complexes of the type bridge-(Cp1)(Cp2)ZrCl2 where the first Cp ligand is the cyclopentadienyl group, the second Cp ligand is the fluorenyl group, and the bridge between the two Cp ligands is -CH2-CH2-, >SiMe2, or >SiPh2. These complexes are converted to polymerization catalysts by activating them with a special organoaluminium co-catalyst, methylaluminoxane (MAO).
Atactic polypropylene is an amorphous rubbery material. It can be produced commercially either with a special type of supported Ziegler-Natta catalyst or with some metallocene catalysts.
Manufacturing from polypropylene
The melting process of polypropylene can be achieved via extrusion and molding. Common extrusion methods include production of melt-blown and spun-bond fibers to form long rolls for future conversion into a wide range of useful products, such as face masks, filters, diapers and wipes.
The most common shaping technique is injection molding, which is used for parts such as cups, cutlery, vials, caps, containers, housewares, and automotive parts such as batteries. The related techniques of blow molding and injection-stretch blow molding are also used, which involve both extrusion and molding.
The large number of end-use applications for polypropylene are often possible because of the ability to tailor grades with specific molecular properties and additives during its manufacture. For example, antistatic additives can be added to help polypropylene surfaces resist dust and dirt. Many physical finishing techniques can also be used on polypropylene, such as machining. Surface treatments can be applied to polypropylene parts in order to promote adhesion of printing ink and paints.
Expanded Polypropylene (EPP) has been produced through both solid and melt state processing. EPP is manufactured using melt processing with either chemical or physical blowing agents. Expansion of PP in solid state, due to its highly crystalline structure, has not been successful. In this regard, two novel strategies were developed for expansion of PP. It was observed that PP can be expanded to make EPP through controlling its crystalline structure or through blending with other polymers.
Biaxially oriented polypropylene (BOPP)
When polypropylene film is extruded and stretched in both the machine direction and the transverse (cross-machine) direction it is called biaxially oriented polypropylene. Two methods are widely used for producing BOPP films, namely, a bi-directional stenter process or a double-bubble blown film extrusion process. Biaxial orientation increases strength and clarity. BOPP is widely used as a packaging material for packaging products such as snack foods, fresh produce and confectionery. It is easy to coat, print and laminate to give the required appearance and properties for use as a packaging material. This process is normally called converting. It is normally produced in large rolls which are slit on slitting machines into smaller rolls for use on packaging machines. BOPP is also used for stickers and labels in addition to OPP.
It is non-reactive, which makes BOPP suitable for safe use in the pharmaceutical and food industry. It is one of the most important commercial polyolefin films. BOPP films are available in different thicknesses and widths. They are transparent and flexible.
Applications
As polypropylene is resistant to fatigue, most plastic living hinges, such as those on flip-top bottles, are made from this material. However, it is important to ensure that chain molecules are oriented across the hinge to maximise strength.
Polypropylene is used in the manufacturing of piping systems, both ones concerned with high purity and ones designed for strength and rigidity (e.g., those intended for use in potable plumbing, hydronic heating and cooling, and reclaimed water). This material is often chosen for its resistance to corrosion and chemical leaching, its resilience against most forms of physical damage, including impact and freezing, its environmental benefits, and its ability to be joined by heat fusion rather than gluing.
Many plastic items for medical or laboratory use can be made from polypropylene because it can withstand the heat in an autoclave. Its heat resistance also enables it to be used as the manufacturing material of consumer-grade kettles. Food containers made from it will not melt in the dishwasher, and do not melt during industrial hot filling processes. For this reason, most plastic tubs for dairy products are polypropylene sealed with aluminum foil (both heat-resistant materials). After the product has cooled, the tubs are often given lids made of a less heat-resistant material, such as LDPE or polystyrene. Such containers provide a good hands-on example of the difference in modulus, since the rubbery (softer, more flexible) feeling of LDPE with respect to polypropylene of the same thickness is readily apparent. Rugged, translucent, reusable plastic containers made in a wide variety of shapes and sizes for consumers from various companies such as Rubbermaid and Sterilite are commonly made of polypropylene, although the lids are often made of somewhat more flexible LDPE so they can snap onto the container to close it. Polypropylene can also be made into disposable bottles to contain liquid, powdered, or similar consumer products, although HDPE and polyethylene terephthalate are commonly also used to make bottles. Plastic pails, car batteries, wastebaskets, pharmacy prescription bottles, cooler containers, dishes and pitchers are often made of polypropylene or HDPE, both of which commonly have rather similar appearance, feel, and properties at ambient temperature. An abundance of medical devices are made from PP.
A common application for polypropylene is as biaxially oriented polypropylene (BOPP). These BOPP sheets are used to make a wide variety of materials including clear bags. When polypropylene is biaxially oriented, it becomes crystal clear and serves as an excellent packaging material for artistic and retail products.
Polypropylene, highly colorfast, is widely used in manufacturing carpets, rugs and mats to be used at home.
Polypropylene is widely used in ropes, distinctive because they are light enough to float in water. For equal mass and construction, polypropylene rope is similar in strength to polyester rope. Polypropylene costs less than most other synthetic fibers.
Polypropylene is also used as an alternative to polyvinyl chloride (PVC) as insulation for electrical cables for LSZH cable in low-ventilation environments, primarily tunnels. This is because it emits less smoke and no toxic halogens, which may lead to production of acid in high-temperature conditions.
Polypropylene is also used in particular roofing membranes as the waterproofing top layer of single-ply systems, as opposed to modified-bitumen systems.
Polypropylene is most commonly used for plastic moldings, wherein it is injected into a mold while molten, forming complex shapes at relatively low cost and high volume; examples include bottle tops, bottles, and fittings.
It can also be produced in sheet form, widely used for the production of stationery folders, packaging, and storage boxes. The wide color range, durability, low cost, and resistance to dirt make it ideal as a protective cover for papers and other materials. It is used in Rubik's Cube stickers because of these characteristics.
The availability of sheet polypropylene has provided an opportunity for the use of the material by designers. The light-weight, durable, and colorful plastic makes an ideal medium for the creation of light shades, and a number of designs have been developed using interlocking sections to create elaborate designs.
Polypropylene fibres are used as a concrete additive to increase strength and reduce cracking and spalling. In some areas susceptible to earthquakes (e.g., California), PP fibers are added with soils to improve the soil's strength and damping when constructing the foundation of structures such as buildings, bridges, etc.
Clothing
Polypropylene is a major polymer used in nonwovens, with over 50% used for diapers or sanitary products where it is treated to absorb water (hydrophilic) rather than naturally repelling water (hydrophobic). Other non-woven uses include filters for air, gas, and liquids in which the fibers can be formed into sheets or webs that can be pleated to form cartridges or layers that filter in various efficiencies in the 0.5 to 30 micrometre range. Such applications occur in houses as water filters or in air-conditioning-type filters. The high surface-area and naturally oleophilic polypropylene nonwovens are ideal absorbers of oil spills with the familiar floating barriers near oil spills on rivers.
Polypropylene, or 'polypro', has been used for the fabrication of cold-weather base layers, such as long-sleeve shirts or long underwear. Polypropylene is also used in warm-weather clothing, in which it transports sweat away from the skin. Polyester has replaced polypropylene in these applications in the U.S. military, such as in the ECWCS. Although polypropylene clothes are not easily flammable, they can melt, which may result in severe burns if the wearer is involved in an explosion or fire of any kind. Polypropylene undergarments are known for retaining body odors which are then difficult to remove. The current generation of polyester does not have this disadvantage.
Medical
Its most common medical use is in the synthetic, nonabsorbable suture Prolene, manufactured by Ethicon Inc.
Polypropylene has been used in hernia and pelvic organ prolapse repair operations to protect the body from new hernias in the same location. A small patch of the material is placed over the spot of the hernia, below the skin, and is painless and rarely, if ever, rejected by the body. However, a polypropylene mesh will erode the tissue surrounding it over an uncertain period ranging from days to years.
A notable application was as a transvaginal mesh, used to treat vaginal prolapse and concurrent urinary incontinence. Due to the above-mentioned propensity for polypropylene mesh to erode the tissue surrounding it, the FDA has issued several warnings on the use of polypropylene mesh medical kits for certain applications in pelvic organ prolapse, specifically when introduced in close proximity to the vaginal wall due to a continued increase in number of mesh-driven tissue erosions reported by patients over the past few years. On 3 January 2012, the FDA ordered 35 manufacturers of these mesh products to study the side effects of these devices.
Due to the outbreak of the COVID-19 pandemic in 2020, the demand for PP has increased significantly because it is a vital raw material for producing meltblown fabric, which is in turn the raw material for producing facial masks.
Recycling
Most polypropylene recycling uses mechanical recycling, as for polyethylene: the material is heated to soften or melt it and mechanically formed into new products. As of 2015, less than 1% of polypropylene generated was recycled. Heating degrades the carbon backbone more severely than for polyethylene, breaking it into smaller organic molecules, because the methyl side group of PP is susceptible to thermo-oxidative and photo-oxidative degradation.
Polypropylene has the number "5" as its resin identification code:
Repairing
PP objects can be joined with a two-part epoxy glue or using hot-glue guns.
PP can be melted using a speed tip welding technique. With speed welding, the plastic welder, similar to a soldering iron in appearance and wattage, is fitted with a feed tube for the plastic weld rod. The speed tip heats the rod and the substrate, while at the same time it presses the molten weld rod into position. A bead of softened plastic is laid into the joint and the parts and weld rod fuse. With polypropylene, the melted welding rod must be "mixed" with the semi-melted base material being fabricated or repaired. A speed tip "gun" is essentially a soldering iron with a broad, flat tip that can be used to melt the weld joint and filler material to create a bond.
Health concerns
The advocacy organization Environmental Working Group classifies PP as of low hazard.
PP is dope-dyed; no water is used in its dyeing, in contrast with cotton. Polypropylene was the most common microplastic fiber found in the olfactory bulbs in 8 of 15 deceased individuals in a study.
Combustibility
Like all organic compounds, polypropylene is combustible. The flash point of a typical composition is 260 °C; autoignition temperature is 388 °C.
References
External links
Chain structure of Polypropylene
Polypropylene on Plastipedia
Polypropylene Properties & other information
Dielectrics
Packaging materials
Food packaging
Plastics
Polyolefins
Thermoplastics
Commodity chemicals | Polypropylene | [
"Physics",
"Chemistry"
] | 5,967 | [
"Commodity chemicals",
"Products of chemical industry",
"Unsolved problems in physics",
"Materials",
"Dielectrics",
"Amorphous solids",
"Matter",
"Plastics"
] |
201,718 | https://en.wikipedia.org/wiki/Principle%20of%20maximum%20entropy | The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge about a system is the one with largest entropy, in the context of precisely stated prior data (such as a proposition that expresses testable information).
Another way of stating this: Take precisely stated prior data or testable information about a probability distribution function. Consider the set of all trial probability distributions that would encode the prior data. According to this principle, the distribution with maximal information entropy is the best choice.
History
The principle was first expounded by E. T. Jaynes in two papers in 1957, where he emphasized a natural correspondence between statistical mechanics and information theory. In particular, Jaynes argued that the Gibbsian method of statistical mechanics is sound because the entropy of statistical mechanics and the information entropy of information theory are the same concept. Consequently, statistical mechanics should be considered a particular application of a general tool of logical inference and information theory.
Overview
In most practical cases, the stated prior data or testable information is given by a set of conserved quantities (average values of some moment functions), associated with the probability distribution in question. This is the way the maximum entropy principle is most often used in statistical thermodynamics. Another possibility is to prescribe some symmetries of the probability distribution. The equivalence between conserved quantities and corresponding symmetry groups implies a similar equivalence for these two ways of specifying the testable information in the maximum entropy method.
The maximum entropy principle is also needed to guarantee the uniqueness and consistency of probability assignments obtained by different methods, statistical mechanics and logical inference in particular.
The maximum entropy principle makes explicit our freedom in using different forms of prior data. As a special case, a uniform prior probability density (Laplace's principle of indifference, sometimes called the principle of insufficient reason), may be adopted. Thus, the maximum entropy principle is not merely an alternative way to view the usual methods of inference of classical statistics, but represents a significant conceptual generalization of those methods.
These statements do not, however, imply that thermodynamical systems need not be shown to be ergodic to justify treating them as a statistical ensemble.
In ordinary language, the principle of maximum entropy can be said to express a claim of epistemic modesty, or of maximum ignorance. The selected distribution is the one that makes the least claim to being informed beyond the stated prior data, that is to say the one that admits the most ignorance beyond the stated prior data.
Testable information
The principle of maximum entropy is useful explicitly only when applied to testable information. Testable information is a statement about a probability distribution whose truth or falsity is well-defined. For example, the statements
the expectation of the variable $x$ is 2.87
and
$p_2 + p_3 > 0.6$
(where $p_2$ and $p_3$ are probabilities of events) are statements of testable information.
Given testable information, the maximum entropy procedure consists of seeking the probability distribution which maximizes information entropy, subject to the constraints of the information. This constrained optimization problem is typically solved using the method of Lagrange multipliers.
Entropy maximization with no testable information respects the universal "constraint" that the sum of the probabilities is one. Under this constraint, the maximum entropy discrete probability distribution is the uniform distribution, $p_i = \frac{1}{n}$ for all $i \in \{1, \dots, n\}$.
Applications
The principle of maximum entropy is commonly applied in two ways to inferential problems:
Prior probabilities
The principle of maximum entropy is often used to obtain prior probability distributions for Bayesian inference. Jaynes was a strong advocate of this approach, claiming the maximum entropy distribution represented the least informative distribution.
A large amount of literature is now dedicated to the elicitation of maximum entropy priors and links with channel coding.
Posterior probabilities
Maximum entropy is a sufficient updating rule for radical probabilism. Richard Jeffrey's probability kinematics is a special case of maximum entropy inference. However, maximum entropy is not a generalisation of all such sufficient updating rules.
Maximum entropy models
Alternatively, the principle is often invoked for model specification: in this case the observed data itself is assumed to be the testable information. Such models are widely used in natural language processing. An example of such a model is logistic regression, which corresponds to the maximum entropy classifier for independent observations.
Probability density estimation
One of the main applications of the maximum entropy principle is in discrete and continuous density estimation.
Similar to support vector machine estimators, the maximum entropy principle may require the solution to a quadratic programming problem, and thus provide a sparse mixture model as the optimal density estimator. One important advantage of the method is its ability to incorporate prior information in the density estimation.
General solution for the maximum entropy distribution with linear constraints
Discrete case
We have some testable information I about a quantity x taking values in {x1, x2,..., xn}. We assume this information has the form of m constraints on the expectations of the functions fk; that is, we require our probability distribution to satisfy the moment inequality/equality constraints:
$$\sum_{i=1}^n \Pr(x_i) f_k(x_i) \geq F_k, \qquad k = 1, \dots, m,$$
where the $F_k$ are observables. We also require the probability density to sum to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1 giving the constraint
$$\sum_{i=1}^n \Pr(x_i) = 1.$$
The probability distribution with maximum information entropy subject to these inequality/equality constraints is of the form:
$$\Pr(x_i) = \frac{1}{Z(\lambda_1, \dots, \lambda_m)} \exp\left(\lambda_1 f_1(x_i) + \cdots + \lambda_m f_m(x_i)\right)$$
for some $\lambda_1, \dots, \lambda_m$. It is sometimes called the Gibbs distribution. The normalization constant is determined by:
$$Z(\lambda_1, \dots, \lambda_m) = \sum_{i=1}^n \exp\left(\lambda_1 f_1(x_i) + \cdots + \lambda_m f_m(x_i)\right),$$
and is conventionally called the partition function. (The Pitman–Koopman theorem states that the necessary and sufficient condition for a sampling distribution to admit sufficient statistics of bounded dimension is that it have the general form of a maximum entropy distribution.)
The λk parameters are Lagrange multipliers. In the case of equality constraints their values are determined from the solution of the nonlinear equations
$$F_k = \frac{\partial}{\partial \lambda_k} \log Z(\lambda_1, \dots, \lambda_m).$$
In the case of inequality constraints, the Lagrange multipliers are determined from the solution of a convex optimization program with linear constraints.
In both cases, there is no closed form solution, and the computation of the Lagrange multipliers usually requires numerical methods.
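As an illustration, the sketch below solves the single-constraint case numerically for the classic "Brandeis dice" problem: find the maximum entropy distribution on a six-sided die whose mean is constrained to a prescribed value (the target 4.5 is purely illustrative, and the function names are hypothetical):

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)   # die faces
F = 4.5               # prescribed expectation <x>

def constraint_residual(lam):
    """<x> under the Gibbs form p_i ~ exp(lam * x_i), minus the target F."""
    w = np.exp(lam * x)
    p = w / w.sum()           # dividing by the partition function Z
    return p @ x - F

lam = brentq(constraint_residual, -5.0, 5.0)   # solve F = d(log Z)/d(lambda)
w = np.exp(lam * x)
p = w / w.sum()
print("lambda =", lam)
print("p =", p, " mean =", p @ x)
```

The resulting distribution tilts probability toward the high faces exactly as much as the mean constraint demands, and no more.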
Continuous case
For continuous distributions, the Shannon entropy cannot be used, as it is only defined for discrete probability spaces. Instead Edwin Jaynes (1963, 1968, 2003) gave the following formula, which is closely related to the relative entropy (see also differential entropy):
$$H_c = -\int p(x) \log \frac{p(x)}{q(x)} \, dx,$$
where q(x), which Jaynes called the "invariant measure", is proportional to the limiting density of discrete points. For now, we shall assume that q is known; we will discuss it further after the solution equations are given.
A closely related quantity, the relative entropy, is usually defined as the Kullback–Leibler divergence of p from q (although it is sometimes, confusingly, defined as the negative of this). The inference principle of minimizing this, due to Kullback, is known as the Principle of Minimum Discrimination Information.
We have some testable information I about a quantity x which takes values in some interval of the real numbers (all integrals below are over this interval). We assume this information has the form of m constraints on the expectations of the functions fk, i.e. we require our probability density function to satisfy the inequality (or purely equality) moment constraints:
$$\int p(x) f_k(x) \, dx \geq F_k, \qquad k = 1, \dots, m,$$
where the $F_k$ are observables. We also require the probability density to integrate to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1 giving the constraint
$$\int p(x) \, dx = 1.$$
The probability density function with maximum Hc subject to these constraints is:
$$p(x) = \frac{1}{Z(\lambda_1, \dots, \lambda_m)} \, q(x) \exp\left(\lambda_1 f_1(x) + \cdots + \lambda_m f_m(x)\right)$$
with the partition function determined by
$$Z(\lambda_1, \dots, \lambda_m) = \int q(x) \exp\left(\lambda_1 f_1(x) + \cdots + \lambda_m f_m(x)\right) dx.$$
As in the discrete case, in the case where all moment constraints are equalities, the values of the parameters are determined by the system of nonlinear equations:
$$F_k = \frac{\partial}{\partial \lambda_k} \log Z(\lambda_1, \dots, \lambda_m).$$
In the case with inequality moment constraints the Lagrange multipliers are determined from the solution of a convex optimization program.
The invariant measure function q(x) can be best understood by supposing that x is known to take values only in the bounded interval (a, b), and that no other information is given. Then the maximum entropy probability density function is
$$p(x) = A \cdot q(x), \qquad a < x < b,$$
where A is a normalization constant. The invariant measure function is actually the prior density function encoding 'lack of relevant information'. It cannot be determined by the principle of maximum entropy, and must be determined by some other logical method, such as the principle of transformation groups or marginalization theory.
Examples
For several examples of maximum entropy distributions, see the article on maximum entropy probability distributions.
Justifications for the principle of maximum entropy
Proponents of the principle of maximum entropy justify its use in assigning probabilities in several ways, including the following two arguments. These arguments take the use of Bayesian probability as given, and are thus subject to the same postulates.
Information entropy as a measure of 'uninformativeness'
Consider a discrete probability distribution among $m$ mutually exclusive propositions. The most informative distribution would occur when one of the propositions was known to be true. In that case, the information entropy would be equal to zero. The least informative distribution would occur when there is no reason to favor any one of the propositions over the others. In that case, the only reasonable probability distribution would be uniform, and then the information entropy would be equal to its maximum possible value, $\log m$. The information entropy can therefore be seen as a numerical measure which describes how uninformative a particular probability distribution is, ranging from zero (completely informative) to $\log m$ (completely uninformative).
By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. To choose a distribution with lower entropy would be to assume information we do not possess. Thus the maximum entropy distribution is the only reasonable distribution. The dependence of the solution on the dominating measure (the invariant measure $q(x)$ above) is, however, a source of criticism of the approach, since this dominating measure is in fact arbitrary.
The Wallis derivation
The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes in 1962. It is essentially the same mathematical argument used for the Maxwell–Boltzmann statistics in statistical mechanics, although the conceptual emphasis is quite different. It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept. The information entropy function is not assumed a priori, but rather is found in the course of the argument; and the argument leads naturally to the procedure of maximizing the information entropy, rather than treating it in some other way.
Suppose an individual wishes to make a probability assignment among $m$ mutually exclusive propositions. He has some testable information, but is not sure how to go about including this information in his probability assessment. He therefore conceives of the following random experiment. He will distribute $N$ quanta of probability (each worth $1/N$) at random among the $m$ possibilities. (One might imagine that he will throw $N$ balls into $m$ buckets while blindfolded. In order to be as fair as possible, each throw is to be independent of any other, and every bucket is to be the same size.) Once the experiment is done, he will check if the probability assignment thus obtained is consistent with his information. (For this step to be successful, the information must be a constraint given by an open set in the space of probability measures). If it is inconsistent, he will reject it and try again. If it is consistent, his assessment will be
$$p_i = \frac{n_i}{N},$$
where $p_i$ is the probability of the $i$-th proposition, while $n_i$ is the number of quanta that were assigned to the $i$-th proposition (i.e. the number of balls that ended up in bucket $i$).
Now, in order to reduce the 'graininess' of the probability assignment, it will be necessary to use quite a large number of quanta of probability. Rather than actually carry out, and possibly have to repeat, the rather long random experiment, the protagonist decides to simply calculate and use the most probable result. The probability of any particular result is the multinomial distribution,
$$\Pr(\mathbf{p}) = W \cdot m^{-N},$$
where
$$W = \frac{N!}{n_1! \, n_2! \cdots n_m!}$$
is sometimes known as the multiplicity of the outcome.
The most probable result is the one which maximizes the multiplicity $W$. Rather than maximizing $W$ directly, the protagonist could equivalently maximize any monotonic increasing function of $W$. He decides to maximize
$$\frac{1}{N} \log W.$$
At this point, in order to simplify the expression, the protagonist takes the limit as $N \to \infty$, i.e. as the probability levels go from grainy discrete values to smooth continuous values. Using Stirling's approximation, he finds
$$\lim_{N \to \infty} \frac{1}{N} \log W = -\sum_{i=1}^m p_i \log p_i = H(\mathbf{p}).$$
All that remains for the protagonist to do is to maximize entropy under the constraints of his testable information. He has found that the maximum entropy distribution is the most probable of all "fair" random distributions, in the limit as the probability levels go from discrete to continuous.
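The combinatorial claim can be checked numerically: for a "fair" random assignment of N quanta to m buckets, (1/N) log W approaches the Shannon entropy of the observed frequencies as N grows. A minimal sketch using standard NumPy/SciPy routines (the bucket count and seed are arbitrary choices):

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
m = 4
for N in (100, 10_000, 1_000_000):
    counts = rng.multinomial(N, [1 / m] * m)   # one blindfolded experiment
    p = counts / N
    # log W = log N! - sum_i log n_i!, via log-gamma for numerical stability
    logW = gammaln(N + 1) - gammaln(counts + 1).sum()
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
    print(N, logW / N, entropy)   # the two columns converge as N grows
```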
Compatibility with Bayes' theorem
Giffin and Caticha (2007) state that Bayes' theorem and the principle of maximum entropy are completely compatible and can be seen as special cases of the "method of maximum relative entropy". They state that this method reproduces every aspect of orthodox Bayesian inference methods. In addition this new method opens the door to tackling problems that could not be addressed by either the maximal entropy principle or orthodox Bayesian methods individually. Moreover, recent contributions (Lazar 2003, and Schennach 2005) show that frequentist relative-entropy-based inference approaches (such as empirical likelihood and exponentially tilted empirical likelihood – see e.g. Owen 2001 and Kitamura 2006) can be combined with prior information to perform Bayesian posterior analysis.
Jaynes stated Bayes' theorem was a way to calculate a probability, while maximum entropy was a way to assign a prior probability distribution.
It is, however, possible in concept to solve for a posterior distribution directly from a stated prior distribution using the principle of minimum cross-entropy (the principle of maximum entropy being the special case of using a uniform distribution as the given prior), independently of any Bayesian considerations, by treating the problem formally as a constrained optimisation problem with the entropy functional as the objective function. For the case of given average values as testable information (averaged over the sought-after probability distribution), the sought-after distribution is formally the Gibbs (or Boltzmann) distribution, the parameters of which must be solved for in order to achieve minimum cross-entropy and satisfy the given testable information.
Relevance to physics
The principle of maximum entropy bears a relation to a key assumption of kinetic theory of gases known as molecular chaos or Stosszahlansatz. This asserts that the distribution function characterizing particles entering a collision can be factorized. Though this statement can be understood as a strictly physical hypothesis, it can also be interpreted as a heuristic hypothesis regarding the most probable configuration of particles before colliding.
See also
Akaike information criterion
Dissipation
Info-metrics
Maximum entropy classifier
Maximum entropy probability distribution
Maximum entropy spectral estimation
Maximum entropy thermodynamics
Principle of maximum caliber
Thermodynamic equilibrium
Molecular chaos
Notes
References
Giffin, A. and Caticha, A., 2007, Updating Probabilities with Data and Moments
Jaynes, E. T., 1986 (new version online 1996), "Monkeys, kangaroos and N", in Maximum-Entropy and Bayesian Methods in Applied Statistics, J. H. Justice (ed.), Cambridge University Press, Cambridge, p. 26.
Kapur, J. N.; and Kesavan, H. K., 1992, Entropy Optimization Principles with Applications, Boston: Academic Press.
Kitamura, Y., 2006, Empirical Likelihood Methods in Econometrics: Theory and Practice, Cowles Foundation Discussion Papers 1569, Cowles Foundation, Yale University.
Owen, A. B., 2001, Empirical Likelihood, Chapman and Hall/CRC.
Further reading
Ratnaparkhi A. (1997) "A simple introduction to maximum entropy models for natural language processing" Technical Report 97-08, Institute for Research in Cognitive Science, University of Pennsylvania. An easy-to-read introduction to maximum entropy methods in the context of natural language processing.
Open access article containing pointers to various papers and software implementations of Maximum Entropy Model on the net.
Entropy and information
Bayesian statistics
maximum entropy
Probability assessment
maximum entropy | Principle of maximum entropy | [
"Physics",
"Mathematics"
] | 3,322 | [
"Mathematical principles",
"Physical quantities",
"Entropy and information",
"Entropy",
"Dynamical systems"
] |
202,064 | https://en.wikipedia.org/wiki/Angular%20aperture | The angular aperture of a lens is the angular size of the lens aperture as seen from the focal point:
where
is the focal length
is the diameter of the aperture.
Relation to numerical aperture
In a medium with an index of refraction close to 1, such as air, the angular aperture is approximately equal to twice the numerical aperture of the lens.
Formally, the numerical aperture in air is:
$$\mathrm{NA} = n \sin \theta = \sin\left(\frac{a}{2}\right).$$
In the paraxial approximation, with a small aperture ($D \ll f$):
$$\mathrm{NA} \approx \frac{a}{2} \approx \frac{D}{2f}.$$
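As a quick numerical sketch of the relations above (the 50 mm focal length and 25 mm aperture are arbitrary example values, and the function name is hypothetical):

```python
import math

def angular_aperture(f_mm, D_mm):
    """Angular aperture a = 2*arctan((D/2)/f), in radians."""
    return 2 * math.atan((D_mm / 2) / f_mm)

f, D = 50.0, 25.0                 # e.g. a 50 mm lens stopped to f/2
a = angular_aperture(f, D)
NA_exact = math.sin(a / 2)        # numerical aperture in air (n ~ 1)
NA_paraxial = D / (2 * f)         # small-angle (paraxial) approximation
print(math.degrees(a), NA_exact, NA_paraxial)   # ~28.1 deg, 0.243, 0.25
```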
References
See also
f-number
Numerical aperture
Acceptance angle, half the angular aperture
Field of view
Geometrical optics
Angle | Angular aperture | [
"Physics"
] | 115 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Wikipedia categories named after physical quantities",
"Angle"
] |
202,120 | https://en.wikipedia.org/wiki/Counterweight | A counterweight is a weight that, by applying an opposite force, provides balance and stability of a mechanical system. The purpose of a counterweight is to make lifting the load faster and more efficient, which saves energy and causes less wear and tear on the lifting machine.
Counterweights are often used in traction lifts (elevators), cranes and funfair rides. In these applications, the expected load multiplied by the distance that load will be spaced from the central support (called the "tipping point") must be equal to the counterweight's mass times its distance from the tipping point in order to prevent over-balancing either side. This distance times mass is called the load moment.
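Numerically, the balance condition means the required counterweight mass follows directly from the two lever arms. A minimal sketch (the function name and all figures below are hypothetical illustrations):

```python
# Load-moment balance about the tipping point:
#   m_load * d_load = m_counterweight * d_counterweight
def required_counterweight(load_kg, load_arm_m, counter_arm_m):
    """Counterweight mass that balances a load about the tipping point."""
    return load_kg * load_arm_m / counter_arm_m

# e.g. a 1000 kg load held 30 m out, with the counterweight 10 m out
# on the opposite side of the tipping point
print(required_counterweight(1000, 30, 10))  # -> 3000.0 kg
```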
By extension, a counterbalance force balances or offsets another force, as when two objects of equal weight, power, or influence are acting in opposition to each other.
Examples
Trebuchet: There are five major components of a trebuchet: beam, counterweight, frame, guide chute, and sling. After the counterweight drops from a platform on the frame, gravity pulls the counterweight and pivots the beam. Without the counterweight, the beam could not complete the arc that allows the sling to accurately release the projectile.
Crankshaft: A counterweight is also used in many rotating systems to reduce vibrations due to imbalances in the rotating assembly. A typical example is counterweights on crankshafts in piston engines.
Desk lamp: Some balanced-arm lamps work with a counterweight to keep the arm and lamp in the desired position.
Elevator: In traction (non-hydraulic) elevators, a heavy counterweight counterbalances the load of the elevator carriage, so the motor lifts much less of the carriage's weight (specifically, the counterweight is the weight of the carriage plus 40-50% of its rated capacity). The counterweight also increases the ascending acceleration force and decreases the descending acceleration force to reduce the amount of power needed by the motor. The elevator carriage and the counterweights both have wheels that roll on rails to prevent irregular movement and provide a smoother ride for the passengers.
Space elevator: A proposed structure designed to transport material from a celestial body's surface into space. Many variants have been proposed, but the concept most often refers to an elevator that reaches from the surface of the Earth to geostationary outer space, with a counterweight attached at its outer end. By attaching a counterweight at the end, upward centrifugal force from the Earth's rotation ensures that the cable remains stretched taut, countering the gravitational pull on the lower sections and thereby allowing the elevator to remain upright. The counterweight itself could assume one of several forms:
a heavy, captured asteroid;
a space dock, space station or spaceport positioned past geostationary orbit; or
an extension of the cable itself far beyond geostationary orbit.
Metronome: A wind-up mechanical metronome has an adjustable weight and spring mechanism that allows the speed to be adjusted by placement of the weight on the spindle. The tempo speed is decreased by moving the weight to a higher spindle marking or increased by moving it to a lower marking.
Crane: The tower crane is a modern form of balance crane that is fixed to the ground. A horizontal boom is balanced asymmetrically across the top of the tower. The long arm carries the lifting gear. The short arm is called the machinery arm; this holds the motors and electronics to operate the crane, as well as the concrete counterweights.
Other examples include:
Vertical-lift bridge
Drawbridge
Bascule bridge
Forklift
See also
Queen Anne Counterbalance
Water balance propulsion
References
External links
Force
Mass
Space elevator
Bridge components
Weights | Counterweight | [
"Physics",
"Astronomy",
"Mathematics",
"Technology"
] | 751 | [
"Scalar physical quantities",
"Astronomical hypotheses",
"Force",
"Exploratory engineering",
"Physical quantities",
"Bridge components",
"Quantity",
"Mass",
"Classical mechanics",
"Size",
"Space elevator",
"Weights",
"Physical objects",
"Wikipedia categories named after physical quantities... |
202,522 | https://en.wikipedia.org/wiki/Ionizing%20radiation | Ionizing radiation (US, ionising radiation in the UK), including nuclear radiation, consists of subatomic particles or electromagnetic waves that have sufficient energy to ionize atoms or molecules by detaching electrons from them. Some particles can travel up to 99% of the speed of light, and the electromagnetic waves are on the high-energy portion of the electromagnetic spectrum.
Gamma rays, X-rays, and the higher energy ultraviolet part of the electromagnetic spectrum are ionizing radiation, whereas the lower energy ultraviolet, visible light, infrared, microwaves, and radio waves are non-ionizing radiation. Nearly all types of laser light are non-ionizing radiation. The boundary between ionizing and non-ionizing radiation in the ultraviolet area cannot be sharply defined, as different molecules and atoms ionize at different energies. The threshold energy of ionizing radiation lies between 10 electronvolts (eV) and 33 eV, depending on the definition adopted.
Ionizing subatomic particles include alpha particles, beta particles, and neutrons. These particles are created by radioactive decay, and almost all are energetic enough to ionize. There are also secondary cosmic particles produced after cosmic rays interact with Earth's atmosphere, including muons, mesons, and positrons. Cosmic rays may also produce radioisotopes on Earth (for example, carbon-14), which in turn decay and emit ionizing radiation. Cosmic rays and the decay of radioactive isotopes are the primary sources of natural ionizing radiation on Earth, contributing to background radiation. Ionizing radiation is also generated artificially by X-ray tubes, particle accelerators, and nuclear fission.
Ionizing radiation is not immediately detectable by human senses, so instruments such as Geiger counters are used to detect and measure it. However, very high energy particles can produce visible effects on both organic and inorganic matter (e.g., the blue glow of water due to Cherenkov radiation) or on humans (e.g., acute radiation syndrome).
Ionizing radiation is used in a wide variety of fields such as medicine, nuclear power, research, and industrial manufacturing, but presents a health hazard if proper measures against excessive exposure are not taken. Exposure to ionizing radiation causes cell damage to living tissue and organ damage. In high acute doses, it will result in radiation burns and radiation sickness, and lower level doses over a protracted time can cause cancer. The International Commission on Radiological Protection (ICRP) issues guidance on ionizing radiation protection, and the effects of dose uptake on human health.
Directly ionizing radiation
Ionizing radiation may be grouped as directly or indirectly ionizing.
Any charged particle with mass can ionize atoms directly by fundamental interaction through the Coulomb force if it carries sufficient kinetic energy. Such particles include atomic nuclei, electrons, muons, charged pions, protons, and energetic charged nuclei stripped of their electrons. When moving at relativistic speeds (near the speed of light, c) these particles have enough kinetic energy to be ionizing, but there is considerable speed variation. For example, a typical alpha particle moves at about 5% of c, but an electron with 33 eV (just enough to ionize) moves at about 1% of c.
Two of the first types of directly ionizing radiation to be discovered are alpha particles which are helium nuclei ejected from the nucleus of an atom during radioactive decay, and energetic electrons, which are called beta particles.
Natural cosmic rays are made up primarily of relativistic protons but also include heavier atomic nuclei like helium ions and HZE ions. In the atmosphere such particles are often stopped by air molecules, and this produces short-lived charged pions, which soon decay to muons, a primary type of cosmic ray radiation that reaches the surface of the earth. Pions can also be produced in large amounts in particle accelerators.
Alpha particles
Alpha particles consist of two protons and two neutrons bound together into a particle identical to a helium nucleus. Alpha particle emissions are generally produced in the process of alpha decay.
Alpha particles are a strongly ionizing form of radiation, but when emitted by radioactive decay they have low penetration power and can be absorbed by a few centimeters of air, or by the top layer of human skin. More powerful alpha particles from ternary fission are three times as energetic, and penetrate proportionately farther in air. The helium nuclei that form 10–12% of cosmic rays, are also usually of much higher energy than those produced by radioactive decay and pose shielding problems in space. However, this type of radiation is significantly absorbed by the Earth's atmosphere, which is a radiation shield equivalent to about 10 meters of water.
The alpha particle was named by Ernest Rutherford after the first letter in the Greek alphabet, α, when he ranked the known radioactive emissions in descending order of ionising effect in 1899. The symbol is α or α2+. Because they are identical to helium nuclei, they are also sometimes written as He2+ or 4He2+, indicating a helium ion with a +2 charge (missing its two electrons). If the ion gains electrons from its environment, the alpha particle can be written as a normal (electrically neutral) helium atom.
Beta particles
Beta particles are high-energy, high-speed electrons or positrons emitted by certain types of radioactive nuclei, such as potassium-40. The production of beta particles is termed beta decay. They are designated by the Greek letter beta (β). There are two forms of beta decay, β− and β+, which respectively give rise to the electron and the positron. Beta particles are much less penetrating than gamma radiation, but more penetrating than alpha particles.
High-energy beta particles may produce X-rays known as bremsstrahlung ("braking radiation") or secondary electrons (delta ray) as they pass through matter. Both of these can cause an indirect ionization effect. Bremsstrahlung is of concern when shielding beta emitters, as the interaction of beta particles with some shielding materials produces Bremsstrahlung. The effect is greater with material having high atomic numbers, so material with low atomic numbers is used for beta source shielding.
Positrons and other types of antimatter
The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. When a low-energy positron collides with a low-energy electron, annihilation occurs, resulting in their conversion into the energy of two or more gamma ray photons (see electron–positron annihilation). As positrons are positively charged particles they can directly ionize an atom through Coulomb interactions.
Positrons can be generated by positron emission nuclear decay (through weak interactions), or by pair production from a sufficiently energetic photon. Positrons are common artificial sources of ionizing radiation used in medical positron emission tomography (PET) scans.
Charged nuclei
Charged nuclei are characteristic of galactic cosmic rays and solar particle events and except for alpha particles (charged helium nuclei) have no natural sources on earth. In space, however, very high energy protons, helium nuclei, and HZE ions can be initially stopped by relatively thin layers of shielding, clothes, or skin. However, the resulting interaction will generate secondary radiation and cause cascading biological effects. If just one atom of tissue is displaced by an energetic proton, for example, the collision will cause further interactions in the body. This is called "linear energy transfer" (LET), which utilizes elastic scattering.
LET can be visualized as a billiard ball hitting another in the manner of the conservation of momentum, sending both away with the energy of the first ball divided between the two unequally. When a charged nucleus strikes a relatively slow-moving nucleus of an object in space, LET occurs and neutrons, alpha particles, low-energy protons, and other nuclei will be released by the collisions and contribute to the total absorbed dose of tissue.
Indirectly ionizing radiation
Indirectly ionizing radiation is electrically neutral and does not interact strongly with matter, therefore the bulk of the ionization effects are due to secondary ionization.
Photon radiation
Even though photons are electrically neutral, they can ionize atoms indirectly through the photoelectric effect and the Compton effect. Either of those interactions will cause the ejection of an electron from an atom at relativistic speeds, turning that electron into a beta particle (secondary beta particle) that will ionize other atoms. Since most of the ionized atoms are due to the secondary beta particles, photons are indirectly ionizing radiation.
Radiated photons are called gamma rays if they are produced by a nuclear reaction, subatomic particle decay, or radioactive decay within the nucleus. They are called x-rays if produced outside the nucleus. The generic term "photon" is used to describe both.
X-rays normally have a lower energy than gamma rays, and an older convention was to define the boundary as a wavelength of 10−11 m (or a photon energy of 100 keV). That threshold was driven by historic limitations of older X-ray tubes and low awareness of isomeric transitions. Modern technologies and discoveries have shown an overlap between X-ray and gamma energies. In many fields they are functionally identical, differing for terrestrial studies only in the origin of the radiation. In astronomy, however, where radiation origin often cannot be reliably determined, the old energy division has been preserved, with X-rays defined as being between about 120 eV and 120 keV, and gamma rays as being of any energy above 100 to 120 keV, regardless of source. Most astronomical gamma rays are known not to originate in nuclear radioactive processes but, rather, to result from processes like those that produce astronomical X-rays, except driven by much more energetic electrons.
Photoelectric absorption is the dominant mechanism in organic materials for photon energies below 100 keV, typical of classical X-ray tube originated X-rays. At energies beyond 100 keV, photons ionize matter increasingly through the Compton effect, and then indirectly through pair production at energies beyond 5 MeV. The accompanying interaction diagram shows two Compton scatterings happening sequentially. In every scattering event, the gamma ray transfers energy to an electron, and it continues on its path in a different direction and with reduced energy.
Definition boundary for lower-energy photons
The lowest ionization energy of any element is 3.89 eV, for caesium. However, US Federal Communications Commission material defines ionizing radiation as that with a photon energy greater than 10 eV (equivalent to a far ultraviolet wavelength of 124 nanometers). Roughly, this corresponds to both the first ionization energy of oxygen, and the ionization energy of hydrogen, both about 14 eV. In some Environmental Protection Agency references, the ionization of a typical water molecule at an energy of 33 eV is referenced as the appropriate biological threshold for ionizing radiation: this value represents the so-called W-value, the colloquial name for the ICRU's mean energy expended in a gas per ion pair formed, which combines ionization energy plus the energy lost to other processes such as excitation. At 38 nanometers wavelength for electromagnetic radiation, 33 eV is close to the energy at the conventional 10 nm wavelength transition between extreme ultraviolet and X-ray radiation, which occurs at about 125 eV. Thus, X-ray radiation is always ionizing, but only extreme-ultraviolet radiation can be considered ionizing under all definitions.
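To relate these energy thresholds to wavelengths, the conversion E = hc/λ (with hc ≈ 1239.84 eV·nm, a standard value) reproduces the figures quoted above; a minimal sketch with a hypothetical helper name:

```python
H_C_EV_NM = 1239.84  # h*c in eV*nm (standard physical constant)

def photon_energy_ev(wavelength_nm):
    """Photon energy E = hc / lambda, in electronvolts."""
    return H_C_EV_NM / wavelength_nm

# 400 nm (sunburn edge), 124 nm (10 eV FCC threshold),
# 38 nm (33 eV W-value), 10 nm (EUV/X-ray boundary, ~125 eV)
for wl in (400, 124, 38, 10):
    print(f"{wl} nm -> {photon_energy_ev(wl):.1f} eV")
```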
Neutrons
Neutrons carry zero electrical charge and thus often do not directly cause ionization in a single step or interaction with matter. However, fast neutrons will interact with the protons in hydrogen via linear energy transfer, the energy that a particle transfers to the material it is moving through. This mechanism scatters the nuclei of the materials in the target area, causing direct ionization of the hydrogen atoms. When neutrons strike the hydrogen nuclei, proton radiation (fast protons) results. These protons are themselves ionizing because they are of high energy, are charged, and interact with the electrons in matter.
Neutrons that strike other nuclei besides hydrogen will transfer less energy to the other particle if linear energy transfer does occur. But, for many nuclei struck by neutrons, inelastic scattering occurs. Whether elastic or inelastic scatter occurs is dependent on the speed of the neutron, whether fast or thermal or somewhere in between. It is also dependent on the nuclei it strikes and its neutron cross section.
In inelastic scattering, neutrons are readily absorbed in a type of nuclear reaction called neutron capture, which contributes to the neutron activation of the nucleus. Neutron interactions with most types of matter in this manner usually produce radioactive nuclei. The abundant oxygen-16 nucleus, for example, captures a fast neutron and promptly emits a proton, forming nitrogen-16, which decays back to oxygen-16. The short-lived nitrogen-16 decay emits a powerful beta ray. This process can be written as:
16O (n,p) 16N (fast neutron capture possible with >11 MeV neutron)
16N → 16O + β− (Decay t1/2 = 7.13 s)
This high-energy β− further interacts rapidly with other nuclei, emitting high-energy γ via Bremsstrahlung.
While not a favorable reaction, the 16O (n,p) 16N reaction is a major source of X-rays emitted from the cooling water of a pressurized water reactor and contributes enormously to the radiation generated by a water-cooled nuclear reactor while operating.
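The time scale of this activity follows from simple exponential decay with the 7.13 s half-life quoted above; a minimal sketch (the names are illustrative):

```python
import math

T_HALF_S = 7.13  # nitrogen-16 half-life in seconds (from the reaction above)

def fraction_remaining(t_seconds):
    """Fraction of 16N nuclei surviving after t seconds."""
    return math.exp(-math.log(2) * t_seconds / T_HALF_S)

for t in (0, 7.13, 30, 60):
    print(f"t = {t:5.2f} s -> {fraction_remaining(t):.4f}")
# after one minute only ~0.3% of the 16N remains, which is why this
# activity in reactor coolant dies away quickly outside the core
```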
For the best shielding of neutrons, hydrocarbons that have an abundance of hydrogen are used.
In fissile materials, secondary neutrons may produce nuclear chain reactions, causing a larger amount of ionization from the daughter products of fission.
Outside the nucleus, free neutrons are unstable and have a mean lifetime of 14 minutes, 42 seconds. Free neutrons decay by emission of an electron and an electron antineutrino to become a proton, a process known as beta decay:
n → p + e− + ν̄e
In the adjacent diagram, a neutron collides with a proton of the target material, and then becomes a fast recoil proton that ionizes in turn. At the end of its path, the neutron is captured by a nucleus in an (n,γ)-reaction that leads to the emission of a neutron capture photon. Such photons always have enough energy to qualify as ionizing radiation.
Physical effects
Nuclear effects
Neutron radiation, alpha radiation, and extremely energetic gamma (> ~20 MeV) can cause nuclear transmutation and induced radioactivity. The relevant mechanisms are neutron activation, alpha absorption, and photodisintegration. A large enough number of transmutations can change macroscopic properties and cause targets to become radioactive themselves, even after the original source is removed.
Chemical effects
Ionization of molecules can lead to radiolysis (breaking chemical bonds) and the formation of highly reactive free radicals. These free radicals may then react chemically with neighbouring materials even after the original radiation has stopped (e.g., ozone cracking of polymers by ozone formed by ionization of air). Ionizing radiation can also accelerate existing chemical reactions such as polymerization and corrosion, by contributing to the activation energy required for the reaction. Optical materials deteriorate under the effect of ionizing radiation.
High-intensity ionizing radiation in air can produce a visible ionized air glow of telltale bluish-purple color. The glow can be observed, e.g., during criticality accidents, around mushroom clouds shortly after a nuclear explosion, or the inside of a damaged nuclear reactor like during the Chernobyl disaster.
Monatomic fluids, e.g. molten sodium, have no chemical bonds to break and no crystal lattice to disturb, so they are immune to the chemical effects of ionizing radiation. Simple diatomic compounds with very negative enthalpy of formation, such as hydrogen fluoride will reform rapidly and spontaneously after ionization.
Electrical effects
The ionization of materials temporarily increases their conductivity, potentially permitting damaging current levels. This is a particular hazard in semiconductor microelectronics employed in electronic equipment, with subsequent currents introducing operation errors or even permanently damaging the devices. Devices intended for high radiation environments such as the nuclear industry and extra-atmospheric (space) applications may be made radiation hard to resist such effects through design, material selection, and fabrication methods.
Proton radiation found in space can also cause single-event upsets in digital circuits. The electrical effects of ionizing radiation are exploited in gas-filled radiation detectors, e.g. the Geiger-Muller counter or the ion chamber.
Health effects
Most adverse health effects of exposure to ionizing radiation may be grouped in two general categories:
deterministic effects (harmful tissue reactions) due in large part to killing or malfunction of cells following high doses from radiation burns.
stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
The most common impact is stochastic induction of cancer with a latent period of years or decades after exposure. For example, ionizing radiation is one cause of chronic myelogenous leukemia, although most people with CML have not been exposed to radiation. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial.
The most widely accepted model, the Linear no-threshold model (LNT), holds that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. Other stochastic effects of ionizing radiation are teratogenesis, cognitive decline, and heart disease.
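As an illustration of what the LNT coefficient implies, the sketch below applies the 5.5%-per-sievert figure to a few doses (the example doses are rough illustrative values, and, as noted, the model itself is controversial):

```python
RISK_PER_SV = 0.055  # LNT excess cancer incidence per sievert (quoted above)

def lnt_excess_risk(dose_msv):
    """Excess lifetime cancer risk under the linear no-threshold model."""
    return RISK_PER_SV * dose_msv / 1000.0

for dose_msv, label in [(3, "about one year of average background"),
                        (10, "roughly a typical CT scan"),
                        (100, "occupational five-year limit")]:
    print(f"{dose_msv:5.0f} mSv ({label}): {lnt_excess_risk(dose_msv):.4%}")
```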
Although DNA is always susceptible to damage by ionizing radiation, the DNA molecule may also be damaged by radiation with enough energy to excite certain molecular bonds to form pyrimidine dimers. This energy may be less than ionizing, but near to it. A good example is ultraviolet spectrum energy which begins at about 3.1 eV (400 nm) at close to the same energy level which can cause sunburn to unprotected skin, as a result of photoreactions in collagen and (in the UV-B range) also damage in DNA (for example, pyrimidine dimers). Thus, the mid and lower ultraviolet electromagnetic spectrum is damaging to biological tissues as a result of electronic excitation in molecules which falls short of ionization, but produces similar non-thermal effects. To some extent, visible light and also ultraviolet A (UVA) which is closest to visible energies, have been proven to result in formation of reactive oxygen species in skin, which cause indirect damage since these are electronically excited molecules which can inflict reactive damage, although they do not cause sunburn (erythema). Like ionization-damage, all these effects in skin are beyond those produced by simple thermal effects.
Measurement of radiation
Radiation and dose quantities are measured in both SI and non-SI units: radioactivity in becquerels (SI) or curies, absorbed dose in grays or rads, and equivalent and effective dose in sieverts or rems.
Uses of radiation
Ionizing radiation has many industrial, military, and medical uses. Its usefulness must be balanced with its hazards, a compromise that has shifted over time. For example, at one time, assistants in shoe shops in the US used X-rays to check a child's shoe size, but this practice was halted when the risks of ionizing radiation were better understood.
Neutron radiation is essential to the working of nuclear reactors and nuclear weapons. The penetrating power of x-ray, gamma, beta, and positron radiation is used for medical imaging, nondestructive testing, and a variety of industrial gauges. Radioactive tracers are used in medical and industrial applications, as well as biological and radiation chemistry. Alpha radiation is used in static eliminators and smoke detectors. The sterilizing effects of ionizing radiation are useful for cleaning medical instruments, food irradiation, and the sterile insect technique. Measurements of carbon-14 can be used to date the remains of long-dead organisms (such as wood that is thousands of years old).
Sources of radiation
Ionizing radiation is generated through nuclear reactions, nuclear decay, by very high temperature, or via acceleration of charged particles in electromagnetic fields. Natural sources include the sun, lightning and supernova explosions. Artificial sources include nuclear reactors, particle accelerators, and x-ray tubes.
The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) itemized types of human exposures.
The International Commission on Radiological Protection manages the International System of Radiological Protection, which sets recommended limits for dose uptake.
Background radiation
Background radiation comes from both natural and human-made sources.
The global average exposure of humans to ionizing radiation is about 3 mSv (0.3 rem) per year, 80% of which comes from nature. The remaining 20% results from exposure to human-made radiation sources, primarily from medical imaging. Average human-made exposure is much higher in developed countries, mostly due to CT scans and nuclear medicine.
Natural background radiation comes from five primary sources: cosmic radiation, solar radiation, external terrestrial sources, radiation in the human body, and radon.
The background rate for natural radiation varies considerably with location, being as low as 1.5 mSv/a (1.5 mSv per year) in some areas and over 100 mSv/a in others. The highest level of purely natural radiation recorded on the Earth's surface is 90 μGy/h (0.8 Gy/a) on a Brazilian black beach composed of monazite. The highest background radiation in an inhabited area is found in Ramsar, Iran, primarily due to naturally radioactive limestone used as a building material. Some 2,000 of the most exposed residents receive an average radiation dose of 10 mGy per year (1 rad/yr), ten times more than the ICRP recommended limit for exposure to the public from artificial sources. Record levels were found in a house where the effective radiation dose due to external radiation was 135 mSv/a (13.5 rem/yr) and the committed dose from radon was 640 mSv/a (64.0 rem/yr). This unique case is over 200 times higher than the world average background radiation. Despite the high levels of background radiation that the residents of Ramsar receive, there is no compelling evidence that they experience a greater health risk. The ICRP recommendations are deliberately conservative limits and may overstate the actual health risk; radiation safety organizations generally adopt such limits on the principle that it is best to err on the side of caution. This caution is appropriate, but it should not be taken to mean that background radiation is a major hazard: compared with other environmental risk factors, it most likely represents only a small overall risk.
Cosmic radiation
The Earth, and all living things on it, are constantly bombarded by radiation from outside our solar system. This cosmic radiation consists of relativistic particles: positively charged nuclei (ions) from 1 amu protons (about 85% of it) to 26 amu iron nuclei and even beyond. (The high-atomic number particles are called HZE ions.) The energy of this radiation can far exceed that which humans can create, even in the largest particle accelerators (see ultra-high-energy cosmic ray). This radiation interacts in the atmosphere to create secondary radiation that rains down, including x-rays, muons, protons, antiprotons, alpha particles, pions, electrons, positrons, and neutrons.
The dose from cosmic radiation is largely from muons, neutrons, and electrons, with a dose rate that varies in different parts of the world based largely on the geomagnetic field, altitude, and solar cycle. The cosmic-radiation dose rate on airplanes is so high that, according to the United Nations UNSCEAR 2000 Report (see links at bottom), airline flight crew workers receive a higher average dose than any other workers, including those in nuclear power plants. Airline crews receive more cosmic rays if they routinely work flight routes that take them close to the North or South Pole at high altitudes, where this type of radiation is maximal.
Cosmic rays also include high-energy gamma rays, which are far beyond the energies produced by solar or human sources.
External terrestrial sources
Most materials on Earth contain some radioactive atoms, even if in small quantities. Most of the dose received from these sources is from gamma-ray emitters in building materials, or rocks and soil when outside. The major radionuclides of concern for terrestrial radiation are isotopes of potassium, uranium, and thorium. Each of these sources has been decreasing in activity since the formation of the Earth.
Internal radiation sources
All earthly materials that are the building blocks of life contain a radioactive component. As humans, plants, and animals consume food, air, and water, an inventory of radioisotopes builds up within the organism (see banana equivalent dose). Some radionuclides, like potassium-40, emit a high-energy gamma ray that can be measured by sensitive electronic radiation measurement systems. These internal radiation sources contribute to an individual's total radiation dose from natural background radiation.
Radon
An important source of natural radiation is radon gas, which seeps continuously from bedrock but can, because of its high density, accumulate in poorly ventilated houses.
Radon-222 is a gas produced by the α-decay of radium-226. Both are a part of the natural uranium decay chain. Uranium is found in soil throughout the world in varying concentrations. Radon is the largest cause of lung cancer among non-smokers and the second-leading cause overall.
Radiation exposure
There are three standard ways to limit exposure:
Time: For people exposed to radiation in addition to natural background radiation, limiting or minimizing the exposure time will reduce the dose from the source of radiation.
Distance: Radiation intensity decreases sharply with distance, according to an inverse-square law (in an absolute vacuum).
Shielding: Air or skin can be sufficient to substantially attenuate alpha radiation, while sheet metal or plastic is often sufficient to stop beta radiation. Barriers of lead, concrete, or water are often used to give effective protection from more penetrating forms of ionizing radiation such as gamma rays and neutrons. Some radioactive materials are stored or handled underwater or by remote control in rooms constructed of thick concrete or lined with lead. There are special plastic shields that stop beta particles, and air will stop most alpha particles. The effectiveness of a material in shielding radiation is determined by its half-value thickness, the thickness of material that reduces the radiation by half. This value is a function of the material itself and of the type and energy of ionizing radiation. Some generally accepted thicknesses of attenuating material are 5 mm of aluminum for most beta particles, and 3 inches of lead for gamma radiation (a numerical sketch combining distance and shielding follows this list).
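A minimal sketch combining the inverse-square law with half-value-layer attenuation (all numbers are hypothetical illustrations; scatter and buildup factors are ignored):

```python
def dose_rate(d0_usv_h, r0_m, r_m, shield_cm, hvl_cm):
    """Dose rate after moving from r0 to r and adding a shield.

    Inverse-square falloff with distance; each half-value layer (HVL)
    of shielding halves the intensity.
    """
    geometric = (r0_m / r_m) ** 2
    attenuation = 0.5 ** (shield_cm / hvl_cm)
    return d0_usv_h * geometric * attenuation

# 100 uSv/h measured at 1 m; observer at 4 m behind 3 cm of lead,
# taking ~1 cm as an illustrative HVL for energetic gamma rays in lead
print(dose_rate(100, 1, 4, 3, 1.0))  # -> 100 * 1/16 * 1/8 ~ 0.78 uSv/h
```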
These can all be applied to natural and human-made sources. For human-made sources the use of Containment is a major tool in reducing dose uptake and is effectively a combination of shielding and isolation from the open environment. Radioactive materials are confined in the smallest possible space and kept out of the environment such as in a hot cell (for radiation) or glove box (for contamination). Radioactive isotopes for medical use, for example, are dispensed in closed handling facilities, usually gloveboxes, while nuclear reactors operate within closed systems with multiple barriers that keep the radioactive materials contained. Work rooms, hot cells and gloveboxes have slightly reduced air pressures to prevent escape of airborne material to the open environment.
In nuclear conflicts or civil nuclear releases civil defense measures can help reduce exposure of populations by reducing ingestion of isotopes and occupational exposure. One is the issue of potassium iodide (KI) tablets, which blocks the uptake of radioactive iodine (one of the major radioisotope products of nuclear fission) into the human thyroid gland.
Occupational exposure
Occupationally exposed individuals are controlled within the regulatory framework of the country they work in, and in accordance with any local nuclear licence constraints. These are usually based on the recommendations of the International Commission on Radiological Protection.
The ICRP recommends limiting artificial irradiation. For occupational exposure, the limit is 50 mSv in a single year with a maximum of 100 mSv in a consecutive five-year period.
The radiation exposure of these individuals is carefully monitored with the use of dosimeters and other radiological protection instruments which will measure radioactive particulate concentrations, area gamma dose readings and radioactive contamination. A legal record of dose is kept.
Examples of activities where occupational exposure is a concern include:
Airline crew (the most exposed population)
Industrial radiography
Medical radiology and nuclear medicine
Uranium mining
Nuclear power plant and nuclear fuel reprocessing plant workers
Research laboratories (government, university and private)
Some human-made radiation sources affect the body through direct radiation, known as effective dose (radiation) while others take the form of radioactive contamination and irradiate the body from within. The latter is known as committed dose.
Public exposure
Medical procedures, such as diagnostic X-rays, nuclear medicine, and radiation therapy are by far the most significant source of human-made radiation exposure to the general public. Some of the major radionuclides used are I-131, Tc-99m, Co-60, Ir-192, and Cs-137. The public is also exposed to radiation from consumer products, such as tobacco (polonium-210), combustible fuels (gas, coal, etc.), televisions, luminous watches and dials (tritium), airport X-ray systems, smoke detectors (americium), electron tubes, and gas lantern mantles (thorium).
Of lesser magnitude, members of the public are exposed to radiation from the nuclear fuel cycle, which includes the entire sequence from processing uranium to the disposal of the spent fuel. The effects of such exposure have not been reliably measured due to the extremely low doses involved. Opponents use a cancer per dose model to assert that such activities cause several hundred cases of cancer per year, an application of the widely accepted Linear no-threshold model (LNT).
The International Commission on Radiological Protection recommends limiting artificial irradiation to the public to an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures.
In a nuclear war, gamma rays from both the initial weapon explosion and fallout would be the sources of radiation exposure.
Spaceflight
Massive particles are a concern for astronauts outside the Earth's magnetic field, who would receive solar particles from solar proton events (SPE) and galactic cosmic rays from cosmic sources. These high-energy charged nuclei are blocked by Earth's magnetic field but pose a major health concern for astronauts traveling to the Moon and to any distant location beyond Earth orbit. Highly charged HZE ions in particular are known to be extremely damaging, although protons make up the vast majority of galactic cosmic rays. Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts.
Air travel
Air travel exposes people on aircraft to increased radiation from space as compared to sea level, including cosmic rays and radiation from solar flare events. Software programs such as Epcard, CARI, SIEVERT and PCAIRE attempt to simulate the exposure of aircrews and passengers. An example of a measured dose (not a simulated dose) is 6 μSv per hour from London Heathrow to Tokyo Narita on a high-latitude polar route. However, dosages can vary, such as during periods of high solar activity. The United States FAA requires airlines to provide flight crew with information about cosmic radiation, and an International Commission on Radiological Protection recommendation for the general public is no more than 1 mSv per year. In addition, many airlines do not allow pregnant flightcrew members to fly, in order to comply with a European Directive. The FAA has a recommended limit of 1 mSv total for a pregnancy, and no more than 0.5 mSv per month. This information was originally based on Fundamentals of Aerospace Medicine, published in 2008.
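Using the measured 6 μSv/h polar-route figure quoted above, a rough sketch of cumulative flight dose against the 1 mSv/year public recommendation (the 12-hour flight time is an assumed round number):

```python
RATE_USV_H = 6.0        # measured polar-route dose rate quoted above
FLIGHT_HOURS = 12       # assumed duration of one long polar leg

dose_per_flight_msv = RATE_USV_H * FLIGHT_HOURS / 1000.0
flights_per_msv = 1.0 / dose_per_flight_msv
print(f"{dose_per_flight_msv:.3f} mSv per flight; "
      f"about {flights_per_msv:.0f} such flights reach 1 mSv")
```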
Radiation hazard warning signs
Hazardous levels of ionizing radiation are signified by the trefoil sign on a yellow background. These are usually posted at the boundary of a radiation controlled area or in any place where radiation levels are significantly above background due to human intervention.
The red ionizing radiation warning symbol (ISO 21482) was launched in 2007, and is intended for IAEA Category 1, 2 and 3 sources defined as dangerous sources capable of death or serious injury, including food irradiators, teletherapy machines for cancer treatment and industrial radiography units. The symbol is to be placed on the device housing the source, as a warning not to dismantle the device or to get any closer. It will not be visible under normal use, only if someone attempts to disassemble the device. The symbol will not be located on building access doors, transportation packages or containers.
See also
European Committee on Radiation Risk
International Commission on Radiological Protection – manages the International System of Radiological Protection
Ionometer
Irradiated mail
National Council on Radiation Protection and Measurements – US national organisation
Nuclear safety
Nuclear semiotics
Radiant energy
Exposure (radiation)
Radiation hormesis
Radiation physics
Radiation protection
Radiation Protection Convention, 1960
Radiation protection of patients
Sievert
Treatment of infections after accidental or hostile exposure to ionizing radiation
References
Literature
External links
The Nuclear Regulatory Commission regulates most commercial radiation sources and non-medical exposures in the US.
NLM Hazardous Substances Databank – Ionizing Radiation
United Nations Scientific Committee on the Effects of Atomic Radiation 2000 Report Volume 1: Sources, Volume 2: Effects
Beginners Guide to Ionising Radiation Measurement
Free Radiation Safety Course
Health Physics Society Public Education Website
Oak Ridge Reservation Basic Radiation Facts
Carcinogens
Mutagens
Radioactivity
Radiobiology
Radiation health effects
Radiation protection | Ionizing radiation | [
"Physics",
"Chemistry",
"Materials_science",
"Biology",
"Environmental_science"
] | 6,995 | [
"Ionizing radiation",
"Physical phenomena",
"Radiation health effects",
"Toxicology",
"Radiobiology",
"Radiation effects",
"Radiation",
"Nuclear physics",
"Carcinogens",
"Radioactivity"
] |
202,584 | https://en.wikipedia.org/wiki/Medical%20physics | Medical physics deals with the application of the concepts and methods of physics to the prevention, diagnosis and treatment of human diseases with a specific goal of improving human health and well-being. Since 2008, medical physics has been included as a health profession according to the International Standard Classification of Occupations of the International Labour Organization.
Although medical physics may sometimes also be referred to as biomedical physics, medical biophysics, applied physics in medicine, physics applications in medical science, radiological physics or hospital radio-physics, a "medical physicist" is specifically a health professional with specialist education and training in the concepts and techniques of applying physics in medicine and competent to practice independently in one or more of the subfields of medical physics. Traditionally, medical physicists are found in the following healthcare specialties: radiation oncology (also known as radiotherapy or radiation therapy), diagnostic and interventional radiology (also known as medical imaging), nuclear medicine, and radiation protection. Medical physics of radiation therapy can involve work such as dosimetry, linac quality assurance, and brachytherapy. Medical physics of diagnostic and interventional radiology involves medical imaging techniques such as magnetic resonance imaging, ultrasound, computed tomography and x-ray. Nuclear medicine will include positron emission tomography and radionuclide therapy. However one can find Medical Physicists in many other areas such as physiological monitoring, audiology, neurology, neurophysiology, cardiology and others.
Medical physics departments may be found in institutions such as universities, hospitals, and laboratories. University departments are of two types. The first type are mainly concerned with preparing students for a career as a hospital Medical Physicist and research focuses on improving the practice of the profession. A second type (increasingly called 'biomedical physics') has a much wider scope and may include research in any applications of physics to medicine from the study of biomolecular structure to microscopy and nanomedicine.
Mission statement of medical physicists
In hospital medical physics departments, the mission statement for medical physicists as adopted by the European Federation of Organisations for Medical Physics (EFOMP) is the following:
The term "physical agents" refers to ionising and non-ionising electromagnetic radiations, static electric and magnetic fields, ultrasound, laser light and any other Physical Agent associated with medical e.g., x-rays in computerised tomography (CT), gamma rays/radionuclides in nuclear medicine, magnetic fields and radio-frequencies in magnetic resonance imaging (MRI), ultrasound in ultrasound imaging and Doppler measurements.
This mission includes the following 11 key activities:
Scientific problem solving service: Comprehensive problem solving service involving recognition of less than optimal performance or optimised use of medical devices, identification and elimination of possible causes or misuse, and confirmation that proposed solutions have restored device performance and use to acceptable status. All activities are to be based on current best scientific evidence or own research when the available evidence is not sufficient.
Dosimetry measurements: Measurement of doses received by patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures (e.g., for legal or employment purposes); selection, calibration and maintenance of dosimetry related instrumentation; independent checking of dose related quantities provided by dose reporting devices (including software devices); measurement of dose related quantities required as inputs to dose reporting or estimating devices (including software). Measurements to be based on current recommended techniques and protocols. Includes dosimetry of all physical agents.
Patient safety/risk management (including volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures). Surveillance of medical devices and evaluation of clinical protocols to ensure the ongoing protection of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures from the deleterious effects of physical agents in accordance with the latest published evidence or own research when the available evidence is not sufficient. Includes the development of risk assessment protocols.
Occupational and public safety/risk management (when there is an impact on medical exposure or own safety). Surveillance of medical devices and evaluation of clinical protocols with respect to protection of workers and public when impacting the exposure of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures or responsibility with respect to own safety. Includes the development of risk assessment protocols in conjunction with other experts involved in occupational / public risks.
Clinical medical device management: Specification, selection, acceptance testing, commissioning and quality assurance/ control of medical devices in accordance with the latest published European or International recommendations and the management and supervision of associated programmes. Testing to be based on current recommended techniques and protocols.
Clinical involvement: Carrying out, participating in and supervising everyday radiation protection and quality control procedures to ensure ongoing effective and optimised use of medical radiological devices and including patient specific optimization.
Development of service quality and cost-effectiveness: Leading the introduction of new medical radiological devices into clinical service, the introduction of new medical physics services and participating in the introduction/development of clinical protocols/techniques whilst giving due attention to economic issues.
Expert consultancy: Provision of expert advice to outside clients (e.g., clinics with no in-house medical physics expertise).
Education of healthcare professionals (including medical physics trainees): Contributing to quality healthcare professional education through knowledge transfer activities concerning the technical-scientific knowledge, skills and competences supporting the clinically effective, safe, evidence-based and economical use of medical radiological devices. Participation in the education of medical physics students and organisation of medical physics residency programmes.
Health technology assessment (HTA): Taking responsibility for the physics component of health technology assessments related to medical radiological devices and/or the medical uses of radioactive substances/sources.
Innovation: Developing new or modifying existing devices (including software) and protocols for the solution of hitherto unresolved clinical problems.
Medical biophysics and biomedical physics
Some education institutions house departments or programs bearing the title "medical biophysics" or "biomedical physics" or "applied physics in medicine". Generally, these fall into one of two categories: interdisciplinary departments that house biophysics, radiobiology, and medical physics under a single umbrella; and undergraduate programs that prepare students for further study in medical physics, biophysics, or medicine.
Most of the scientific concepts in bionanotechnology are derived from other fields. Biochemical principles that are used to understand the material properties of biological systems are central in bionanotechnology because those same principles are to be used to create new technologies. Material properties and applications studied in bionanoscience include mechanical properties (e.g. deformation, adhesion, failure), electrical/electronic (e.g. electromechanical stimulation, capacitors, energy storage/batteries), optical (e.g. absorption, luminescence, photochemistry), thermal (e.g. thermomutability, thermal management), biological (e.g. how cells interact with nanomaterials, molecular flaws/defects, biosensing, biological mechanisms such as mechanosensation), nanoscience of disease (e.g. genetic disease, cancer, organ/tissue failure), as well as computing (e.g. DNA computing) and agriculture (target delivery of pesticides, hormones and fertilizers).
Areas of specialty
The International Organization for Medical Physics (IOMP) recognizes main areas of medical physics employment and focus.
Medical imaging physics
Medical imaging physics is also known as diagnostic and interventional radiology physics.
Clinical (both "in-house" and "consulting") physicists typically deal with areas of testing, optimization, and quality assurance of diagnostic radiology physics areas such as radiographic X-rays, fluoroscopy, mammography, angiography, and computed tomography, as well as non-ionizing radiation modalities such as ultrasound, and MRI. They may also be engaged with radiation protection issues such as dosimetry (for staff and patients). In addition, many imaging physicists are often also involved with nuclear medicine systems, including single photon emission computed tomography (SPECT) and positron emission tomography (PET).
Sometimes, imaging physicists may be engaged in clinical areas, but for research and teaching purposes, such as quantifying intravascular ultrasound as a possible method of imaging a particular vascular object.
Radiation therapeutic physics
Radiation therapeutic physics is also known as radiotherapy physics or radiation oncology physics.
The majority of medical physicists currently working in the US, Canada, and some western countries are of this group. A radiation therapy physicist typically deals with linear accelerator (Linac) systems and kilovoltage x-ray treatment units on a daily basis, as well as other modalities such as TomoTherapy, gamma knife, Cyberknife, proton therapy, and brachytherapy.
The academic and research side of therapeutic physics may encompass fields such as boron neutron capture therapy, sealed source radiotherapy, terahertz radiation, high-intensity focused ultrasound (including lithotripsy), optical radiation lasers, ultraviolet etc. including photodynamic therapy, as well as nuclear medicine including unsealed source radiotherapy, and photomedicine, which is the use of light to treat and diagnose disease.
Nuclear medicine physics
Nuclear medicine is a branch of medicine that uses radiation to provide information about the functioning of a person's specific organs or to treat disease. The thyroid, bones, heart, liver and many other organs can be easily imaged, and disorders in their function revealed. In some cases radiation sources can be used to treat diseased organs, or tumours. Five Nobel laureates have been intimately involved with the use of radioactive tracers in medicine.
Over 10,000 hospitals worldwide use radioisotopes in medicine, and about 90% of the procedures are for diagnosis. The most common radioisotope used in diagnosis is technetium-99m, with some 30 million procedures per year, accounting for 80% of all nuclear medicine procedures worldwide.
Health physics
Health physics is also known as radiation safety or radiation protection. Health physics is the applied physics of radiation protection for health and health care purposes. It is the science concerned with the recognition, evaluation, and control of health hazards to permit the safe use and application of ionizing radiation. Health physics professionals promote excellence in the science and practice of radiation protection and safety.
Background radiation
Radiation protection
Dosimetry
Health physics
Radiological protection of patients
Non-ionizing medical radiation physics
Some aspects of non-ionising radiation physics may be considered under radiation protection or diagnostic imaging physics. Imaging modalities include MRI, optical imaging and ultrasound. Safety considerations include these areas and lasers.
Lasers and applications in medicine
Physiological measurement
Physiological measurement techniques are used to monitor and measure various physiological parameters. Many physiological measurement techniques are non-invasive and can be used in conjunction with, or as an alternative to, other invasive methods. Measurement methods include electrocardiography. Many of these areas may be covered by other specialities, for example medical engineering or vascular science.
Healthcare informatics and computational physics
Other closely related fields to medical physics include fields which deal with medical data, information technology and computer science for medicine.
Information and communication in medicine
Medical informatics
Image processing, display and visualization
Computer-aided diagnosis
Picture archiving and communication systems (PACS)
Standards: DICOM, ISO, IHE
Hospital information systems
e-Health
Telemedicine
Digital operating room
Workflow, patient-specific modeling
Medicine on the Internet of Things
Distant monitoring and telehomecare
Areas of research and academic development
Non-clinical physicists may or may not focus on the above areas from an academic and research point of view, but their scope of specialization may also encompass lasers and ultraviolet systems (such as photodynamic therapy), fMRI and other methods for functional imaging as well as molecular imaging, electrical impedance tomography, diffuse optical imaging, optical coherence tomography, and dual energy X-ray absorptiometry.
Legislative and advisory bodies
International
ICRU: International Commission on Radiation Units and Measurements
ICRP: International Commission on Radiological Protection
IOMP: International Organization for Medical Physics
IAEA: International Atomic Energy Agency
United States of America
NCRP: National Council on Radiation Protection & Measurements
NRC: Nuclear Regulatory Commission
FDA: Food and Drug Administration
AAPM: American Association of Physicists in Medicine
United Kingdom
IPEM: Institute of Physics and Engineering in Medicine
MHRA: Medicines and Healthcare products Regulatory Agency
Other
AMPI: Association of Medical Physicists of India
CCPM: Canadian College of Physicists in Medicine
EFOMP: European Federation of Organisations for Medical Physics
ACPSEM: Australasian College of Physical Scientists and Engineers in Medicine
References
External links
Human Health Campus, The official website of the International Atomic Energy Agency dedicated to Professionals in Radiation Medicine. This site is managed by the Division of Human Health, Department of Nuclear Sciences and Applications
Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM)
Canadian Organization of Medical Physicists - Organisation canadienne des physiciens médicaux
The American Association of Physicists in Medicine
Romanian College of Medical Physicists
medicalphysicsweb.org from the Institute of Physics
AIP Medical Physics portal
Institute of Physics & Engineering in Medicine (IPEM) - UK
European Federation of Organizations for Medical Physics (EFOMP)
International Organization for Medical Physics (IOMP)
Applied and interdisciplinary physics | Medical physics | [
"Physics"
] | 2,755 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
202,661 | https://en.wikipedia.org/wiki/Stellar%20parallax | Stellar parallax is the apparent shift of position (parallax) of any nearby star (or other object) against the background of distant stars. By extension, it is a method for determining the distance to the star through trigonometry, the stellar parallax method. Created by the different orbital positions of Earth, the extremely small observed shift is largest at time intervals of about six months, when Earth arrives at opposite sides of the Sun in its orbit, giving a baseline distance of about two astronomical units between observations. The parallax itself is considered to be half of this maximum, about equivalent to the observational shift that would occur due to the different positions of Earth and the Sun, a baseline of one astronomical unit (AU).
Stellar parallax is so difficult to detect that its existence was the subject of much debate in astronomy for hundreds of years. Thomas Henderson, Friedrich Georg Wilhelm von Struve, and Friedrich Bessel made the first successful parallax measurements in 1832–1838, for the stars Alpha Centauri, Vega, and 61 Cygni.
History of measurement
Early theory and attempts
Stellar parallax is so small that it was unobservable until the 19th century, and its apparent absence was used as a scientific argument against heliocentrism during the early modern age. It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons, such gigantic distances involved seemed entirely implausible: it was one of Tycho Brahe's principal objections to Copernican heliocentrism that for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn and the eighth sphere (the fixed stars).
James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of Earth's axis, and catalogued 3,222 stars.
19th and 20th centuries
Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. In the second quarter of the 19th century, technological progress reached the level that provided sufficient accuracy and precision for stellar parallax measurements. Giuseppe Calandrelli noted stellar parallax in 1805-6 and came up with a 4-second value for the star Vega, which was a gross overestimate. The first successful stellar parallax measurements were done by Thomas Henderson in Cape Town, South Africa, in 1832–1833, where he measured the parallax of one of the closest stars, Alpha Centauri. Between 1835 and 1836, astronomer Friedrich Georg Wilhelm von Struve at the Dorpat university observatory measured the distance of Vega, publishing his results in 1837. Friedrich Bessel, a friend of Struve, carried out an intense observational campaign in 1837–1838 at Koenigsberg Observatory for the star 61 Cygni using a heliometer, and published his results in 1838. Henderson published his results in 1839, after returning from South Africa.
Those three results, two of which were measured with the best instruments at the time (Fraunhofer great refractor used by Struve and Fraunhofer heliometer by Bessel) were the first ones in history to establish the reliable distance scale to the stars.
A large heliometer was installed at Kuffner Observatory (in Vienna) in 1896, and was used for measuring the distance to other stars by trigonometric parallax. By 1910 it had computed 16 parallax distances to other stars, out of only 108 total known to science at that time.
Being very difficult to measure, only about 60 stellar parallaxes had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. In the 1980s, charge-coupled devices (CCDs) replaced photographic plates and reduced optical uncertainties to one milliarcsecond.
Stellar parallax remains the standard for calibrating other measurement methods (see Cosmic distance ladder). Accurate calculations of distance based on stellar parallax require a measurement of the distance from Earth to the Sun, now known to exquisite accuracy based on radar reflection off the surfaces of planets.
Space astrometry
In 1989, the satellite Hipparcos was launched primarily for obtaining parallaxes and proper motions of nearby stars, increasing the number of stellar parallaxes measured to milliarcsecond accuracy a thousandfold. Even so, Hipparcos is only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy.
The Hubble telescope WFC3 now has a precision of 20 to 40 microarcseconds, enabling reliable distance measurements for a small number of stars. This gives more accuracy to the cosmic distance ladder and improves the knowledge of distances in the Universe, based on the dimensions of the Earth's orbit.
As distances between the two points of observation are increased, the visual effect of the parallax is likewise rendered more visible. NASA's New Horizons spacecraft performed the first interstellar parallax measurement on 22 April 2020, taking images of Proxima Centauri and Wolf 359 in conjunction with Earth-based observatories. The relative proximity of the two stars combined with the 6.5 billion kilometer (about 43 AU) distance of the spacecraft from Earth yielded a parallax large enough to be discerned visually without instrumentation.
The European Space Agency's Gaia mission, launched 19 December 2013, is expected to measure parallax angles to an accuracy of 10 microarcseconds for all moderately bright stars, thus mapping nearby stars (and potentially planets) up to a distance of tens of thousands of light-years from Earth. Data Release 2 in 2018 claims mean errors for the parallaxes of 15th magnitude and brighter stars of 20–40 microarcseconds.
Radio astrometry
Very long baseline interferometry in the radio band can produce images with angular resolutions of about 1 milliarcsecond, and hence, for bright radio sources, the precision of parallax measurements made in the radio can easily exceed those of optical telescopes like Gaia. These measurements tend to be sensitivity limited, and need to be made one at a time, so the work is generally done only for sources like pulsars and X-ray binaries, where the radio emission is strong relative to the optical emission.
Parallax method
Principle
Throughout the year the position of a star S is noted in relation to other stars in its apparent neighborhood:
Stars that did not seem to move in relation to each other are used as reference points to determine the path of S.
The observed path is an ellipse: the projection of Earth's orbit around the Sun through S onto the distant background of non-moving stars. The farther S is removed from Earth's orbital axis, the greater the eccentricity of the path of S. The center of the ellipse corresponds to the point where S would be seen from the Sun:
The plane of Earth's orbit is at an angle to a line from the Sun through S. The vertices v and v' of the elliptical projection of the path of S are projections of positions of Earth E and E' such that a line E-E' intersects the line Sun-S at a right angle; the triangle created by points E, E' and S is an isosceles triangle with the line Sun-S as its symmetry axis.
Any stars that did not move between observations are, for the purpose of the accuracy of the measurement, infinitely far away. This means that the distance of the movement of the Earth compared to the distance to these infinitely far away stars is, within the accuracy of the measurement, 0. Thus a line of sight from Earth's first position E to vertex v will be essentially the same as a line of sight from the Earth's second position E' to the same vertex v, and will therefore run parallel to it - impossible to depict convincingly in an image of limited size:
Since line E'-v' is a transversal in the same (approximately Euclidean) plane as parallel lines E-v and E'-v, it follows that the corresponding angles of intersection of these parallel lines with this transversal are congruent: the angle θ between lines of sight E-v and E'-v' is equal to the angle θ between E'-v and E'-v', which is the angle θ between observed positions of S in relation to its apparently unmoving stellar surroundings.
The distance d from the Sun to S now follows from simple trigonometry:
tan(θ) = E-Sun / d,
so that d = E-Sun / tan(θ), where E-Sun is 1 AU.
The more distant an object is, the smaller its parallax.
Stellar parallax measures are given in the tiny units of arcseconds, or even in thousandths of arcseconds (milliarcseconds). The distance unit parsec is defined as the length of the leg of a right triangle adjacent to the angle of one arcsecond at one vertex, where the other leg is 1 AU long. Because stellar parallaxes and distances all involve such skinny right triangles, a convenient trigonometric approximation can be used to convert parallaxes (in arcseconds) to distance (in parsecs). The approximate distance is simply the reciprocal of the parallax: For example, Proxima Centauri (the nearest star to Earth other than the Sun), whose parallax is 0.7685 arcseconds, is 1 / 0.7685 ≈ 1.301 parsecs (about 4.24 light-years) distant.
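A minimal Python sketch of this reciprocal relation, using the Proxima Centauri parallax quoted above (the function name and the 3.2616 light-years-per-parsec conversion are our own additions):

```python
# Distance from parallax: d [parsec] = 1 / p [arcsec], valid for the
# tiny angles involved in stellar parallax.
def parallax_to_distance_pc(parallax_arcsec: float) -> float:
    """Return distance in parsecs for a parallax given in arcseconds."""
    return 1.0 / parallax_arcsec

d_pc = parallax_to_distance_pc(0.7685)      # Proxima Centauri, from the text
print(f"Proxima Centauri: {d_pc:.3f} pc = {d_pc * 3.2616:.2f} ly")
# Proxima Centauri: 1.301 pc = 4.24 ly
```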
Variants
Stellar parallax is most often measured using annual parallax, defined as the difference in position of a star as seen from Earth and Sun, i.e. the angle subtended at a star by the mean radius of Earth's orbit around the Sun. The parsec (3.26 light-years) is defined as the distance for which the annual parallax is 1 arcsecond. Annual parallax is normally measured by observing the position of a star at different times of the year as Earth moves through its orbit.
The angles involved in these calculations are very small and thus difficult to measure. The nearest star to the Sun (and also the star with the largest parallax), Proxima Centauri, has a parallax of 0.7685 ± 0.0002 arcsec. This angle is approximately that subtended by an object 2 centimeters in diameter located 5.3 kilometers away.
Derivation
For a right triangle,
tan(p) = 1 AU / d,
where p is the parallax, 1 AU is approximately the average distance from the Sun to Earth, and d is the distance to the star. Using small-angle approximations (valid when the angle is small compared to 1 radian),
tan(p) ≈ p, with p measured in radians,
so the parallax, measured in arcseconds, is
p ≈ (1 AU / d) × 206,265,
since one radian equals about 206,265 arcseconds. If the parallax is 1", then the distance is
d = 206,265 AU ≈ 3.26 light-years.
This defines the parsec, a convenient unit for measuring distance using parallax. Therefore, the distance, measured in parsecs, is simply d = 1 / p, when the parallax p is given in arcseconds.
Error
Precise parallax measurements of distance have an associated error. This error in the measured parallax angle does not translate directly into an error for the distance, except for relatively small errors. The reason for this is that an error toward a smaller angle results in a greater error in distance than an error toward a larger angle.
However, an approximation of the distance error can be computed by
δd = δ(1/p) = |∂(1/p)/∂p| δp = δp / p²,
where d is the distance and p is the parallax. The approximation is far more accurate for parallax errors that are small relative to the parallax than for relatively large errors. For meaningful results in stellar astronomy, Dutch astronomer Floor van Leeuwen recommends that the parallax error be no more than 10% of the total parallax when computing this error estimate.
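A small Python sketch of this error propagation, with van Leeuwen's 10% guideline enforced as a validity check (the function name is our own; the example values are the Proxima Centauri figures from the text above):

```python
# Propagate a parallax error into a distance error using the
# approximation delta_d = delta_p / p**2 (distance in parsecs,
# parallax and its error in arcseconds).
def distance_error_pc(p_arcsec: float, dp_arcsec: float) -> float:
    if dp_arcsec > 0.1 * p_arcsec:
        raise ValueError("parallax error exceeds 10% of the parallax; "
                         "the 1/p distance estimate is unreliable")
    return dp_arcsec / p_arcsec**2

# Proxima Centauri: p = 0.7685" +/- 0.0002" (values from the text above)
print(f"{distance_error_pc(0.7685, 0.0002):.5f} pc")   # ~0.00034 pc
```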
See also
References
Further reading
Parallax
Parallax
Parallax | Stellar parallax | [
"Physics",
"Astronomy"
] | 2,552 | [
"Concepts in astronomy",
"Astrometry",
"Astronomical sub-disciplines"
] |
202,672 | https://en.wikipedia.org/wiki/Spectral%20density | In signal processing, the power spectrum of a continuous time signal describes the distribution of power into frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (including noise) as analyzed in terms of its frequency content, is called its spectrum.
When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (PSD, or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The PSD then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating over the time domain, as dictated by Parseval's theorem.
The spectrum of a physical process often contains essential information about the nature of the process. For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.
However this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics and engineering. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency.
Units
In physics, the signal might be a wave, such as an electromagnetic wave, an acoustic wave, or the vibration of a mechanism. The power spectral density (PSD) of the signal describes the power present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in SI units of watts per hertz (abbreviated as W/Hz).
When a signal is defined in terms only of a voltage, for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance. So one might use units of V2 Hz−1 for the PSD. Energy spectral density (ESD) would have units of V2 s Hz−1, since energy has units of power multiplied by time (e.g., watt-hour).
In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have PSD in units of meters squared per hertz, m2/Hz.
In the analysis of random vibrations, units of g2 Hz−1 are frequently used for the PSD of acceleration, where g denotes the g-force.
Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of x(t) will remain unspecified, but the independent variable will be assumed to be that of time.
One-sided vs two-sided
A PSD can be either a one-sided function of only positive frequencies or a two-sided function of both positive and negative frequencies but with only half the amplitude. Noise PSDs are generally one-sided in engineering and two-sided in physics.
Definition
Energy spectral density
In signal processing, the energy of a signal x(t) is given by
E = ∫_{−∞}^{∞} |x(t)|² dt.
Assuming the total energy is finite (i.e. x(t) is a square-integrable function) allows applying Parseval's theorem (or Plancherel's theorem). That is,
∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |x̂(f)|² df,
where
x̂(f) = ∫_{−∞}^{∞} e^{−i2πft} x(t) dt
is the Fourier transform of x(t) at frequency f (in Hz). The theorem also holds true in the discrete-time cases. Since the integral on the left-hand side is the energy of the signal, the value of |x̂(f)|² df can be interpreted as a density function multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal at frequency f in the frequency interval f + df.
Therefore, the energy spectral density of x(t) is defined as:
S̄_xx(f) = |x̂(f)|².
The function S̄_xx(f) and the autocorrelation of x(t) form a Fourier transform pair, a result also known as the Wiener–Khinchin theorem (see also Periodogram).
As a physical example of how one might measure the energy spectral density of a signal, suppose V(t) represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance Z, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time t is equal to V(t)²/Z, so the total energy is found by integrating V(t)²/Z with respect to time over the duration of the pulse. To find the value of the energy spectral density at frequency f, one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies (Δf, say) near the frequency of interest and then measure the total energy E(f) dissipated across the resistor. The value of the energy spectral density at f is then estimated to be E(f)/Δf. In this example, since the power V(t)²/Z has units of V² Ω⁻¹, the energy E(f) has units of V² s Ω⁻¹ = J, and hence the estimate E(f)/Δf of the energy spectral density has units of J Hz⁻¹, as required. In many situations, it is common to forget the step of dividing by Z so that the energy spectral density instead has units of V² Hz⁻¹.
This definition generalizes in a straightforward manner to a discrete signal with a countably infinite number of values x_n such as a signal sampled at discrete times t_n = t_0 + nΔt:
S̄_xx(f) = (Δt)² |Σ_{n=−∞}^{∞} x_n e^{−i2πfnΔt}|² = (Δt)² |x̂_d(f)|²,
where x̂_d(f) is the discrete-time Fourier transform of x_n. The sampling interval Δt is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit Δt → 0. But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality. (also see normalized frequency)
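A numerical sanity check of the relations above (a NumPy sketch; the Gaussian pulse and its parameters are arbitrary test data, and the Δt factors follow the sampling-interval convention just described):

```python
import numpy as np

# Verify Parseval's theorem for a sampled finite-energy pulse: the
# time-domain energy matches the integral of the energy spectral
# density (Delta t)^2 |X(f)|^2 over frequency.
dt = 1e-3                               # sampling interval [s]
t = np.arange(-1.0, 1.0, dt)
x = np.exp(-t**2 / (2 * 0.05**2))       # arbitrary finite-energy pulse

energy_time = np.sum(np.abs(x)**2) * dt

X = np.fft.fft(x) * dt                  # approximates the continuous FT
df = 1.0 / (len(x) * dt)                # frequency resolution
esd = np.abs(X)**2                      # energy spectral density
energy_freq = np.sum(esd) * df

print(np.allclose(energy_time, energy_freq))   # True
```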
Power spectral density
The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the power spectral density (PSD) which exists for stationary processes; this describes how the power of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study the variance of a function x(t) over time (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the power spectrum even when there is no physical power involved. If one were to create a physical voltage source which followed x(t) and applied it to the terminals of a one ohm resistor, then indeed the instantaneous power dissipated in that resistor would be given by x(t)² watts.
The average power P of a signal x(t) over all time is therefore given by the following time average, where the period (t₀ − T/2, t₀ + T/2) is centered about some arbitrary time t = t₀:
P = lim_{T→∞} (1/T) ∫_{t₀−T/2}^{t₀+T/2} |x(t)|² dt.
Whenever it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral, the average power can also be written as
P = lim_{T→∞} (1/T) ∫_{−∞}^{∞} |x_T(t)|² dt,
where x_T(t) = x(t) w_T(t) and w_T(t) is unity within the arbitrary period and zero elsewhere.
When P is non-zero, the integral must grow to infinity at least as fast as T does. That is the reason why we cannot use the energy of the signal, which is that diverging integral.
In analyzing the frequency content of the signal x(t), one might like to compute the ordinary Fourier transform x̂(f); however, for many signals of interest the ordinary Fourier transform does not formally exist. However, under suitable conditions, certain generalizations of the Fourier transform (e.g. the Fourier-Stieltjes transform) still adhere to Parseval's theorem. As such,
P = lim_{T→∞} (1/T) ∫_{−∞}^{∞} |x̂_T(f)|² df,
where the integrand defines the power spectral density:
S_xx(f) = lim_{T→∞} (1/T) |x̂_T(f)|².    (Eq. 2)
The convolution theorem then allows regarding |x̂_T(f)|² as the Fourier transform of the time convolution of x_T*(−t) and x_T(t), where * represents the complex conjugate.
In order to deduce Eq. 2, we will find an expression for |x̂_T(f)|² that will be useful for the purpose. In fact, we will demonstrate that |x̂_T(f)|² is the Fourier transform of the time convolution of x_T*(−t) and x_T(t). Let's start by noting that the conjugated transform can be written as
x̂_T*(f) = ∫_{−∞}^{∞} x_T*(t) e^{i2πft} dt,
and let u = −t, so that du = −dt, with u → ∓∞ when t → ±∞ and vice versa. So
x̂_T*(f) = ∫_{−∞}^{∞} x_T*(−u) e^{−i2πfu} du = ∫_{−∞}^{∞} x_T*(−t) e^{−i2πft} dt = F{x_T*(−t)}(f),
where, in the last step, we have made use of the fact that t and u are dummy variables.
So, we have
|x̂_T(f)|² = x̂_T*(f) x̂_T(f) = F{x_T*(−t)}(f) · F{x_T(t)}(f) = F{x_T*(−t) ∗ x_T(t)}(f),
q.e.d.
Now, let's demonstrate Eq. 2 by using the demonstrated identity. Writing out the convolution explicitly and making the substitution u = t + τ, we have:
(1/T) |x̂_T(f)|² = (1/T) ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} x_T*(t) x_T(t + τ) dt ] e^{−i2πfτ} dτ,
where the convolution theorem has been used when passing between the time and frequency domains.
Now, if we divide the time convolution above by the period T and take the limit as T → ∞, it becomes the autocorrelation function of the non-windowed signal x(t), which is denoted as R_xx(τ), provided that x(t) is ergodic, which is true in most, but not all, practical cases.
Assuming the ergodicity of x(t), the power spectral density can be found once more as the Fourier transform of the autocorrelation function (Wiener–Khinchin theorem):
S_xx(f) = ∫_{−∞}^{∞} R_xx(τ) e^{−i2πfτ} dτ.
Many authors use this equality to actually define the power spectral density.
The power of the signal in a given frequency band [f₁, f₂], where 0 < f₁ < f₂, can be calculated by integrating over frequency. Since S_xx(−f) = S_xx(f), an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used):
P_bandlimited = 2 ∫_{f₁}^{f₂} S_xx(f) df.
More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval T is finite rather than approaching infinity. This results in decreased spectral coverage and resolution since frequencies of less than 1/T are not sampled, and results at frequencies which are not an integer multiple of 1/T are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to statistical ensembles of realizations of x(t) evaluated over the specified time window.
Just as with the energy spectral density, the definition of the power spectral density can be generalized to discrete time variables x_n. As before, we can consider a window of −N ≤ n ≤ N with the signal sampled at discrete times x_n = x(nΔt) for a total measurement period T = (2N + 1)Δt:
S_xx(f) = lim_{N→∞} E[ ((Δt)²/T) |Σ_{n=−N}^{N} x_n e^{−i2πfnΔt}|² ].
Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when N (and thus T) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval approach infinity.
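For instance, the segment-averaged (Welch) periodogram implements exactly this kind of averaging over short-term spectra; a sketch using SciPy's welch estimator, with an arbitrary 50 Hz test tone in white noise:

```python
import numpy as np
from scipy.signal import welch

# Averaged-periodogram (Welch) estimate of a PSD. The test signal
# (a 50 Hz tone in unit-variance white noise) is arbitrary.
rng = np.random.default_rng(0)
fs = 1000.0                                   # sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(scale=1.0, size=t.size)

# nperseg controls the trade-off: shorter segments -> more segments to
# average (less noisy estimate) but coarser frequency resolution.
f, psd = welch(x, fs=fs, nperseg=1024)        # one-sided PSD [V^2/Hz]
print(f"peak at {f[np.argmax(psd)]:.1f} Hz")  # ≈ 50 Hz
```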
If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the cross-correlation.
Properties of the power spectral density
Some properties of the PSD include: the power spectrum of a real-valued process is an even function of frequency, S_xx(−f) = S_xx(f); it is real-valued and non-negative; and its integral over all frequencies recovers the total power (variance) of the process.
Cross power spectral density
Given two signals x(t) and y(t), each of which possess power spectral densities S_xx(f) and S_yy(f), it is possible to define a cross power spectral density (CPSD) or cross spectral density (CSD). To begin, let us consider the average power of such a combined signal.
Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain
S_xy(f) = lim_{T→∞} (1/T) x̂_T*(f) ŷ_T(f),    S_yx(f) = lim_{T→∞} (1/T) ŷ_T*(f) x̂_T(f),
where, again, the contributions of S_xx and S_yy are already understood. Note that S_xy* = S_yx, so the full contribution to the cross power is, generally, from twice the real part of either individual CPSD. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit T → ∞ becomes the Fourier transform of a cross-correlation function:
S_xy(f) = ∫_{−∞}^{∞} R_xy(τ) e^{−i2πfτ} dτ,    S_yx(f) = ∫_{−∞}^{∞} R_yx(τ) e^{−i2πfτ} dτ,
where R_xy(τ) is the cross-correlation of x(t) with y(t) and R_yx(τ) is the cross-correlation of y(t) with x(t). In light of this, the PSD is seen to be a special case of the CSD for x(t) = y(t). If x(t) and y(t) are real signals (e.g. voltage or current), their Fourier transforms x̂(f) and ŷ(f) are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full CPSD is just one of the CPSDs scaled by a factor of two.
For discrete signals x_n and y_n, the relationship between the cross-spectral density and the cross-covariance is (taking the sampling interval as unity)
S_xy(f) = Σ_{n=−∞}^{∞} R_xy(n) e^{−i2πfn}.
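A sketch of a CSD estimate in practice, using SciPy's csd (which applies the same Welch-style segment averaging as above; the test signals are arbitrary):

```python
import numpy as np
from scipy.signal import csd

# Cross-spectral density of two signals sharing a common 50 Hz component
# buried in independent noise: |CSD| peaks at the shared frequency.
rng = np.random.default_rng(1)
fs = 1000.0
t = np.arange(0, 30, 1 / fs)
common = np.sin(2 * np.pi * 50 * t)
x = common + rng.normal(size=t.size)
y = 0.5 * common + rng.normal(size=t.size)

f, Pxy = csd(x, y, fs=fs, nperseg=1024)       # complex-valued CPSD
print(f"|CSD| peak at {f[np.argmax(np.abs(Pxy))]:.1f} Hz")   # ≈ 50 Hz
```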
Estimation
The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram.
The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used.
Related concepts
The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts.
The spectral edge frequency (SEF), usually expressed as "SEF x", represents the frequency below which x percent of the total power of a given signal are located; typically, x is in the range 75 to 95. It is more particularly a popular measure used in EEG monitoring, in which case SEF has variously been used to estimate the depth of anesthesia and stages of sleep.
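A minimal sketch of computing an SEF from an estimated PSD via the cumulative power distribution (the helper name, the white-noise test signal, and the 250 Hz sampling rate are illustrative assumptions):

```python
import numpy as np
from scipy.signal import welch

# Spectral edge frequency: the frequency below which a given fraction
# of the total power lies, read off the normalized cumulative PSD.
def spectral_edge_frequency(sig, fs, edge=0.95):
    f, psd = welch(sig, fs=fs, nperseg=1024)
    cumulative = np.cumsum(psd)
    cumulative /= cumulative[-1]                  # normalize to 1
    return f[np.searchsorted(cumulative, edge)]

rng = np.random.default_rng(2)
fs = 250.0                                        # EEG-like sampling rate
sig = rng.normal(size=int(60 * fs))               # white-noise placeholder
print(f"SEF95 = {spectral_edge_frequency(sig, fs):.1f} Hz")
```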
A spectral envelope is the envelope curve of the spectrum density. It describes one point in time (one window, to be precise). For example, in remote sensing using a spectrometer, the spectral envelope of a feature is the boundary of its spectral properties, as defined by the range of brightness levels in each of the spectral bands of interest.
The spectral density is a function of frequency, not a function of time. However, the spectral density of a small window of a longer signal may be calculated, and plotted versus time associated with the window. Such a graph is called a spectrogram. This is the basis of a number of spectral analysis techniques such as the short-time Fourier transform and wavelets.
A "spectrum" generally means the power spectral density, as discussed above, which depicts the distribution of signal content over frequency. For transfer functions (e.g., Bode plot, chirp) the complete frequency response may be graphed in two parts: power versus frequency and phase versus frequency—the phase spectral density, phase spectrum, or spectral phase. Less commonly, the two parts may be the real and imaginary parts of the transfer function. This is not to be confused with the frequency response of a transfer function, which also includes a phase (or equivalently, a real and imaginary part) as a function of frequency. The time-domain impulse response cannot generally be uniquely recovered from the power spectral density alone without the phase part. Although these are also Fourier transform pairs, there is no symmetry (as there is for the autocorrelation) forcing the Fourier transform to be real-valued. See Ultrashort pulse#Spectral phase, phase noise, group delay.
Sometimes one encounters an amplitude spectral density (ASD), which is the square root of the PSD; the ASD of a voltage signal has units of V Hz−1/2. This is useful when the shape of the spectrum is rather constant, since variations in the ASD will then be proportional to variations in the signal's voltage level itself. But it is mathematically preferred to use the PSD, since only in that case is the area under the curve meaningful in terms of actual power over all frequency or over a specified bandwidth.
Applications
Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such as visible light (perceived as color), musical notes (perceived as pitch), radio/TV (specified by their frequency, or sometimes wavelength) and even the regular rotation of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. And additionally there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal which is not simply sinusoidal. Or a continuous spectrum may show narrow frequency intervals which are strongly enhanced corresponding to resonances, or frequency intervals containing almost zero power as would be produced by a notch filter.
Electrical engineering
The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power spectra of signals.
The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density.
Cosmology
Primordial fluctuations, density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.
See also
Bispectrum
Brightness temperature
Colors of noise
Least-squares spectral analysis
Noise spectral density
Spectral density estimation
Spectral efficiency
Spectral leakage
Spectral power distribution
Whittle likelihood
Window function
Notes
References
External links
Power Spectral Density Matlab scripts
Frequency-domain analysis
Signal processing
Waves
Spectroscopy
Scattering
Fourier analysis
Radio spectrum
Spectrum (physical sciences) | Spectral density | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 3,896 | [
"Physical phenomena",
"Telecommunications engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Molecular physics",
"Computer engineering",
"Signal processing",
"Frequency-domain analysis",
"Instrumental analysis",
"Electromagnetic spectrum",
"Waves",
"Scattering",
"Motion (physic... |
202,713 | https://en.wikipedia.org/wiki/Standardized%20moment | In probability theory and statistics, a standardized moment of a probability distribution is a moment (often a higher degree central moment) that is normalized, typically by a power of the standard deviation, rendering the moment scale invariant. The shape of different probability distributions can be compared using standardized moments.
Standard normalization
Let X be a random variable with a probability distribution P and mean value μ = E[X] (i.e. the first raw moment or moment about zero), the operator E denoting the expected value of X. Then the standardized moment of degree k is μ_k / σ^k, that is, the ratio of the kth moment about the mean
μ_k = E[(X − μ)^k]
to the kth power of the standard deviation,
σ^k = (E[(X − μ)²])^{k/2}.
The power of k is because moments scale as λ^k when the variable is scaled by a factor λ, meaning that they are homogeneous functions of degree k, thus the standardized moment is scale invariant. This can also be understood as being because moments have dimension; in the above ratio defining standardized moments, the dimensions cancel, so they are dimensionless numbers.
The first four standardized moments can be written as:
μ̃₁ = μ₁ / σ = 0,
μ̃₂ = μ₂ / σ² = 1,
μ̃₃ = μ₃ / σ³ (the skewness),
μ̃₄ = μ₄ / σ⁴ (the kurtosis).
For skewness and kurtosis, alternative definitions exist, which are based on the third and fourth cumulant respectively.
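A short numerical illustration of these moments and of their scale invariance (a Python sketch; the exponential test distribution, whose moment-based skewness is 2 and kurtosis is 9, is an arbitrary choice):

```python
import numpy as np

# Standardized moment of degree k from a sample: mu_k / sigma**k.
def standardized_moment(x, k):
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std()                      # population standard deviation
    return np.mean((x - mu) ** k) / sigma ** k

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)
for k in (1, 2, 3, 4):
    # Values are unchanged under rescaling x -> 10 x (scale invariance).
    print(k, round(standardized_moment(x, k), 3),
          round(standardized_moment(10 * x, k), 3))
# degree 1 -> 0, degree 2 -> 1, degree 3 -> ~2 (skewness),
# degree 4 -> ~9 (kurtosis), up to sampling noise
```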
Other normalizations
Another scale invariant, dimensionless measure for characteristics of a distribution is the coefficient of variation, σ/μ. However, this is not a standardized moment, firstly because it is a reciprocal, and secondly because μ is the first moment about zero (the mean), not the first moment about the mean (which is zero).
See Normalization (statistics) for further normalizing ratios.
See also
Coefficient of variation
Moment (mathematics)
Central moment
References
Statistical deviation and dispersion
Statistical ratios
Moment (mathematics) | Standardized moment | [
"Physics",
"Mathematics"
] | 327 | [
"Mathematical analysis",
"Moments (mathematics)",
"Physical quantities",
"Moment (physics)"
] |
202,722 | https://en.wikipedia.org/wiki/Tensor%20contraction | In multilinear algebra, a tensor contraction is an operation on a tensor that arises from the canonical pairing of a vector space and its dual. In components, it is expressed as a sum of products of scalar components of the tensor(s) caused by applying the summation convention to a pair of dummy indices that are bound to each other in an expression. The contraction of a single mixed tensor occurs when a pair of literal indices (one a subscript, the other a superscript) of the tensor are set equal to each other and summed over. In Einstein notation this summation is built into the notation. The result is another tensor with order reduced by 2.
Tensor contraction can be seen as a generalization of the trace.
Abstract formulation
Let V be a vector space over a field k. The core of the contraction operation, and the simplest case, is the canonical pairing of V with its dual vector space V∗. The pairing is the linear map from the tensor product of these two spaces to the field k:
C : V ⊗ V∗ → k, C(v ⊗ f) = f(v),
corresponding to the bilinear form
⟨v, f⟩ = f(v),
where f is in V∗ and v is in V. The map C defines the contraction operation on a tensor of type (1, 1), which is an element of V ⊗ V∗. Note that the result is a scalar (an element of k). In finite dimensions, using the natural isomorphism between V ⊗ V∗ and the space of linear maps from V to V, one obtains a basis-free definition of the trace.
In general, a tensor of type (m, n) (with m ≥ 1 and n ≥ 1) is an element of the vector space
V ⊗ ··· ⊗ V ⊗ V∗ ⊗ ··· ⊗ V∗
(where there are m factors V and n factors V∗). Applying the canonical pairing to the kth V factor and the lth V∗ factor, and using the identity on all other factors, defines the (k, l) contraction operation, which is a linear map that yields a tensor of type (m − 1, n − 1). By analogy with the (1, 1) case, the general contraction operation is sometimes called the trace.
Contraction in index notation
In tensor index notation, the basic contraction of a vector and a dual vector is denoted by
f̃(v) = f_γ v^γ,
which is shorthand for the explicit coordinate summation
f_γ v^γ = f_1 v^1 + f_2 v^2 + ⋯ + f_n v^n
(where v^γ are the components of v in a particular basis and f_γ are the components of f in the corresponding dual basis).
Since a general mixed dyadic tensor is a linear combination of decomposable tensors of the form f ⊗ v, the explicit formula for the dyadic case follows: let
T = T^i_j e_i ⊗ e^j
be a mixed dyadic tensor. Then its contraction is
T^i_j e^j(e_i) = T^i_j δ^j_i = T^i_i = T^1_1 + ⋯ + T^n_n.
A general contraction is denoted by labeling one covariant index and one contravariant index with the same letter, summation over that index being implied by the summation convention. The resulting contracted tensor inherits the remaining indices of the original tensor. For example, contracting a tensor T of type (2,2) on the second and third indices to create a new tensor U of type (1,1) is written as
T^{ab}_{bc} = U^a_c.
By contrast, let
T = e^i ⊗ e^j
be an unmixed dyadic tensor. This tensor does not contract; if its base vectors are dotted, the result is the contravariant metric tensor,
g^{ij} = e^i · e^j,
whose rank is 2.
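In NumPy, einsum expresses these index contractions directly: a repeated index is summed over, and the surviving indices label the result (a sketch; all shapes and values are arbitrary illustrations):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])             # components v^gamma
f = np.array([4.0, 5.0, 6.0])             # dual components f_gamma
print(np.einsum('g,g->', f, v))           # f_gamma v^gamma = 32.0

T = np.arange(16.0).reshape(2, 2, 2, 2)   # type (2,2) tensor
U = np.einsum('ijjk->ik', T)              # contract 2nd and 3rd indices
print(U.shape)                            # (2, 2): a type (1,1) tensor

A = np.arange(9.0).reshape(3, 3)          # type (1,1) tensor
print(np.einsum('ii->', A) == np.trace(A))   # contraction = trace, True
```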
Metric contraction
As in the previous example, contraction on a pair of indices that are either both contravariant or both covariant is not possible in general. However, in the presence of an inner product (also known as a metric) g, such contractions are possible. One uses the metric to raise or lower one of the indices, as needed, and then one uses the usual operation of contraction. The combined operation is known as metric contraction.
Application to tensor fields
Contraction is often applied to tensor fields over spaces (e.g. Euclidean space, manifolds, or schemes). Since contraction is a purely algebraic operation, it can be applied pointwise to a tensor field, e.g. if T is a (1,1) tensor field on Euclidean space, then in any coordinates, its contraction (a scalar field) U at a point x is given by
U(x) = Σ_i T^i_i(x).
Since the role of x is not complicated here, it is often suppressed, and the notation for tensor fields becomes identical to that for purely algebraic tensors.
Over a Riemannian manifold, a metric (field of inner products) is available, and both metric and non-metric contractions are crucial to the theory. For example, the Ricci tensor is a non-metric contraction of the Riemann curvature tensor, and the scalar curvature is the unique metric contraction of the Ricci tensor.
One can also view contraction of a tensor field in the context of modules over an appropriate ring of functions on the manifold or the context of sheaves of modules over the structure sheaf; see the discussion at the end of this article.
Tensor divergence
As an application of the contraction of a tensor field, let V be a vector field on a Riemannian manifold (for example, Euclidean space). Let V^α_{;β} be the covariant derivative of V (in some choice of coordinates). In the case of Cartesian coordinates in Euclidean space, one can write
V^α_{;β} = ∂V^α / ∂x^β.
Then changing index β to α causes the pair of indices to become bound to each other, so that the derivative contracts with itself to obtain the following sum:
V^α_{;α} = ∂V^1/∂x^1 + ⋯ + ∂V^n/∂x^n,
which is the divergence div V. Then
div V = 0
is a continuity equation for V.
In general, one can define various divergence operations on higher-rank tensor fields, as follows. If T is a tensor field with at least one contravariant index, taking the covariant differential and contracting the chosen contravariant index with the new covariant index corresponding to the differential results in a new tensor of rank one lower than that of T.
Contraction of a pair of tensors
One can generalize the core contraction operation (vector with dual vector) in a slightly different way, by considering a pair of tensors T and U. The tensor product is a new tensor, which, if it has at least one covariant and one contravariant index, can be contracted. The case where T is a vector and U is a dual vector is exactly the core operation introduced first in this article.
In tensor index notation, to contract two tensors with each other, one places them side by side (juxtaposed) as factors of the same term. This implements the tensor product, yielding a composite tensor. Contracting two indices in this composite tensor implements the desired contraction of the two tensors.
For example, matrices can be represented as tensors of type (1,1) with the first index being contravariant and the second index being covariant. Let $\Lambda^\alpha{}_\beta$ be the components of one matrix and let $M^\beta{}_\gamma$ be the components of a second matrix. Then their multiplication is given by the following contraction, an example of the contraction of a pair of tensors:
$$N^\alpha{}_\gamma = \Lambda^\alpha{}_\beta\, M^\beta{}_\gamma.$$
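The two-step structure described above (tensor product, then contraction of the inner index pair) can be checked numerically; in this sketch (matrices are arbitrary examples) it reproduces ordinary matrix multiplication:

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0, 6.0], [7.0, 8.0]])

    outer = np.einsum('ij,kl->ijkl', A, B)   # tensor product: a composite tensor
    product = np.einsum('ijjl->il', outer)   # contract index 2 of A with index 1 of B
    assert np.allclose(product, A @ B)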
Also, the interior product of a vector with a differential form is a special case of the contraction of two tensors with each other.
More general algebraic contexts
Let R be a commutative ring and let M be a finite free module over R. Then contraction operates on the full (mixed) tensor algebra of M in exactly the same way as it does in the case of vector spaces over a field. (The key fact is that the canonical pairing is still perfect in this case.)
More generally, let OX be a sheaf of commutative rings over a topological space X, e.g. OX could be the structure sheaf of a complex manifold, analytic space, or scheme. Let M be a locally free sheaf of modules over OX of finite rank. Then the dual of M is still well-behaved and contraction operations make sense in this context.
See also
Tensor product
Partial trace
Interior product
Raising and lowering indices
Musical isomorphism
Ricci calculus
Notes
References
Tensors | Tensor contraction | [
"Engineering"
] | 1,573 | [
"Tensors"
] |
202,840 | https://en.wikipedia.org/wiki/Adjugate%20matrix | In linear algebra, the adjugate or classical adjoint of a square matrix A, adj(A), is the transpose of its cofactor matrix. It is occasionally known as adjunct matrix, or "adjoint", though that normally refers to a different concept, the adjoint operator which for a matrix is the conjugate transpose.
The product of a matrix with its adjugate gives a diagonal matrix (entries not on the main diagonal are zero) whose diagonal entries are the determinant of the original matrix:
$$A \operatorname{adj}(A) = \det(A)\, I,$$
where I is the identity matrix of the same size as A. Consequently, the multiplicative inverse of an invertible matrix can be found by dividing its adjugate by its determinant.
Definition
The adjugate of A is the transpose of the cofactor matrix C of A,
$$\operatorname{adj}(A) = C^{\mathsf T}.$$
In more detail, suppose R is a unital commutative ring and A is an n × n matrix with entries from R. The (i, j)-minor of A, denoted $M_{ij}$, is the determinant of the (n − 1) × (n − 1) matrix that results from deleting row i and column j of A. The cofactor matrix of A is the n × n matrix C whose (i, j) entry is the (i, j) cofactor of A, which is the (i, j)-minor times a sign factor:
$$c_{ij} = (-1)^{i+j}\, M_{ij}.$$
The adjugate of A is the transpose of C, that is, the n × n matrix whose (i, j) entry is the (j, i) cofactor of A,
$$\operatorname{adj}(A)_{ij} = c_{ji} = (-1)^{i+j}\, M_{ji}.$$
Important consequence
The adjugate is defined so that the product of A with its adjugate yields a diagonal matrix whose diagonal entries are the determinant det(A). That is,
$$A \operatorname{adj}(A) = \operatorname{adj}(A)\, A = \det(A)\, I,$$
where I is the n × n identity matrix. This is a consequence of the Laplace expansion of the determinant.
The above formula implies one of the fundamental results in matrix algebra, that A is invertible if and only if det(A) is an invertible element of R. When this holds, the equation above yields
$$\operatorname{adj}(A) = \det(A)\, A^{-1}, \qquad A^{-1} = \frac{1}{\det(A)}\, \operatorname{adj}(A).$$
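A small numerical sketch (the helper name and test matrix are ours, not the article's) builds the adjugate entrywise from cofactors and checks the identity above:

    import numpy as np

    def adjugate(A):
        n = A.shape[0]
        C = np.empty_like(A, dtype=float)
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C.T                      # adjugate = transpose of the cofactor matrix

    A = np.array([[1.0, 2.0, 3.0], [0.0, 4.0, 5.0], [1.0, 0.0, 6.0]])
    assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))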
Examples
1 × 1 generic matrix
Since the determinant of a 0 × 0 matrix is 1, the adjugate of any 1 × 1 matrix (complex scalar) is $I = \begin{bmatrix} 1 \end{bmatrix}$. Observe that
$$A \operatorname{adj}(A) = A I = \det(A)\, I.$$
2 × 2 generic matrix
The adjugate of the 2 × 2 matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
is
$$\operatorname{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
By direct computation,
$$A \operatorname{adj}(A) = \begin{pmatrix} ad - bc & 0 \\ 0 & ad - bc \end{pmatrix} = \det(A)\, I.$$
In this case, it is also true that $\det(\operatorname{adj}(A)) = \det(A)$ and hence that $\operatorname{adj}(\operatorname{adj}(A)) = A$.
3 × 3 generic matrix
Consider a 3 × 3 matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
Its cofactor matrix is
$$C = \begin{pmatrix} +M_{11} & -M_{12} & +M_{13} \\ -M_{21} & +M_{22} & -M_{23} \\ +M_{31} & -M_{32} & +M_{33} \end{pmatrix},$$
where $M_{ij}$ is the (i, j)-minor, i.e. the determinant of the 2 × 2 matrix obtained by deleting row i and column j of A.
Its adjugate is the transpose of its cofactor matrix,
$$\operatorname{adj}(A) = C^{\mathsf T}.$$
3 × 3 numeric matrix
As a specific example, we have
It is easy to check that the adjugate is the inverse times the determinant, $\operatorname{adj}(A) = \det(A)\, A^{-1}$.
The entry in the second row, third column of the adjugate was computed as follows. The (2,3) entry of the adjugate is the (3,2) cofactor of A. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix A,
The (3,2) cofactor is a sign times the determinant of this submatrix:
and this is the (2,3) entry of the adjugate.
Properties
For any n × n matrix A, elementary computations show that adjugates have the following properties:
$\operatorname{adj}(I) = I$, where $I$ is the identity matrix.
$\operatorname{adj}(\mathbf 0) = \mathbf 0$, where $\mathbf 0$ is the zero matrix, except that if $n = 1$ then $\operatorname{adj}(\mathbf 0) = I$.
$\operatorname{adj}(cA) = c^{n-1}\, \operatorname{adj}(A)$ for any scalar $c$.
$\operatorname{adj}(A^{\mathsf T}) = \operatorname{adj}(A)^{\mathsf T}$.
$\det(\operatorname{adj}(A)) = \det(A)^{n-1}$.
If $A$ is invertible, then $\operatorname{adj}(A) = \det(A)\, A^{-1}$. It follows that:
$\operatorname{adj}(A)$ is invertible with inverse $\det(A)^{-1}\, A$.
$\operatorname{adj}(A^{-1}) = \operatorname{adj}(A)^{-1}$.
$\operatorname{adj}(A)$ is entrywise polynomial in $A$. In particular, over the real or complex numbers, the adjugate is a smooth function of the entries of $A$.
Over the complex numbers,
$\operatorname{adj}(\bar A) = \overline{\operatorname{adj}(A)}$, where the bar denotes complex conjugation.
$\operatorname{adj}(A^*) = \operatorname{adj}(A)^*$, where the asterisk denotes conjugate transpose.
Suppose that $B$ is another n × n matrix. Then
$$\operatorname{adj}(AB) = \operatorname{adj}(B)\, \operatorname{adj}(A).$$
This can be proved in three ways. One way, valid for any commutative ring, is a direct computation using the Cauchy–Binet formula. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices $A$ and $B$,
$$\operatorname{adj}(B)\, \operatorname{adj}(A) = \det(B)\, B^{-1}\, \det(A)\, A^{-1} = \det(AB)\, (AB)^{-1} = \operatorname{adj}(AB).$$
Because every non-invertible matrix is the limit of invertible matrices, continuity of the adjugate then implies that the formula remains true when one of or is not invertible.
A corollary of the previous formula is that, for any non-negative integer $k$,
$$\operatorname{adj}(A^k) = \operatorname{adj}(A)^k.$$
If $A$ is invertible, then the above formula also holds for negative $k$.
From the identity
$$(A + B)\, \operatorname{adj}(A + B)\, B = \det(A + B)\, B = B\, \operatorname{adj}(A + B)\, (A + B)$$
we deduce
$$A\, \operatorname{adj}(A + B)\, B = B\, \operatorname{adj}(A + B)\, A.$$
Suppose that $B$ commutes with $A$. Multiplying the identity $AB = BA$ on the left and right by $\operatorname{adj}(A)$ proves that
$$\det(A)\, \operatorname{adj}(A)\, B = \det(A)\, B\, \operatorname{adj}(A).$$
If $A$ is invertible, this implies that $\operatorname{adj}(A)$ also commutes with $B$. Over the real or complex numbers, continuity implies that $\operatorname{adj}(A)$ commutes with $B$ even when $A$ is not invertible.
Finally, there is a more general proof than the second proof, which only requires that an n × n matrix has entries over a field with at least 2n + 1 elements (e.g. a 5 × 5 matrix over the integers modulo 11). $\det(A + tI)$ is a polynomial in t with degree at most n, so it has at most n roots. Note that the ij-th entry of $\operatorname{adj}((A + tI)(B + tI))$ is a polynomial of at most order n, and likewise for $\operatorname{adj}(A + tI)\, \operatorname{adj}(B + tI)$. These two polynomials at the ij-th entry agree on at least n + 1 points, as we have at least n + 1 elements of the field where $A + tI$ and $B + tI$ are both invertible, and we have proven the identity for invertible matrices. Polynomials of degree n which agree on n + 1 points must be identical (subtract them from each other and you have n + 1 roots for a polynomial of degree at most n; a contradiction unless their difference is identically zero). As the two polynomials are identical, they take the same value for every value of t. Thus, they take the same value when t = 0.
Using the above properties and other elementary computations, it is straightforward to show that if has one of the following properties, then does as well:
upper triangular,
lower triangular,
diagonal,
orthogonal,
unitary,
symmetric,
Hermitian,
normal.
If is skew-symmetric, then is skew-symmetric for even n and symmetric for odd n. Similarly, if is skew-Hermitian, then is skew-Hermitian for even n and Hermitian for odd n.
If is invertible, then, as noted above, there is a formula for in terms of the determinant and inverse of . When is not invertible, the adjugate satisfies different but closely related formulas.
If $\operatorname{rank}(A) \le n - 2$, then $\operatorname{adj}(A) = \mathbf 0$.
If $\operatorname{rank}(A) = n - 1$, then $\operatorname{rank}(\operatorname{adj}(A)) = 1$. (Some (n − 1) × (n − 1) minor is non-zero, so $\operatorname{adj}(A)$ is non-zero and hence has rank at least one; the identity $\operatorname{adj}(A)\, A = \mathbf 0$ implies that the dimension of the nullspace of $\operatorname{adj}(A)$ is at least $n - 1$, so its rank is at most one.) It follows that $\operatorname{adj}(A) = \alpha\, x y^{\mathsf T}$, where $\alpha$ is a scalar and $x$ and $y$ are vectors such that $A x = 0$ and $y^{\mathsf T} A = 0$.
Column substitution and Cramer's rule
Partition $A$ into column vectors:
$$A = \begin{bmatrix} \mathbf a_1 & \cdots & \mathbf a_n \end{bmatrix}.$$
Let $\mathbf b$ be a column vector of size $n$. Fix $1 \le i \le n$ and consider the matrix formed by replacing column $i$ of $A$ by $\mathbf b$:
$$(A \mathrel{\overset{i}{\leftarrow}} \mathbf b) = \begin{bmatrix} \mathbf a_1 & \cdots & \mathbf a_{i-1} & \mathbf b & \mathbf a_{i+1} & \cdots & \mathbf a_n \end{bmatrix}.$$
Laplace expand the determinant of this matrix along column $i$. The result is entry $i$ of the product $\operatorname{adj}(A)\, \mathbf b$. Collecting these determinants for the different possible $i$ yields an equality of column vectors
$$\left( \det(A \mathrel{\overset{i}{\leftarrow}} \mathbf b) \right)_{i=1}^{n} = \operatorname{adj}(A)\, \mathbf b.$$
This formula has the following concrete consequence. Consider the linear system of equations
$$A \mathbf x = \mathbf b.$$
Assume that $A$ is non-singular. Multiplying this system on the left by $\operatorname{adj}(A)$ and dividing by the determinant yields
$$\mathbf x = \frac{\operatorname{adj}(A)\, \mathbf b}{\det A}.$$
Applying the previous formula to this situation yields Cramer's rule,
$$x_i = \frac{\det(A \mathrel{\overset{i}{\leftarrow}} \mathbf b)}{\det A},$$
where $x_i$ is the $i$-th entry of $\mathbf x$.
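A hedged code illustration of Cramer's rule (the 2 × 2 system is made up for the example):

    import numpy as np

    A = np.array([[2.0, 1.0], [5.0, 3.0]])
    b = np.array([4.0, 7.0])

    x = np.empty(2)
    for i in range(2):
        Ai = A.copy()
        Ai[:, i] = b                 # replace column i of A by b
        x[i] = np.linalg.det(Ai) / np.linalg.det(A)

    assert np.allclose(A @ x, b)     # agrees with the direct solve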
Characteristic polynomial
Let the characteristic polynomial of $A$ be
$$p(s) = \det(sI - A) = \sum_{i=0}^{n} p_i\, s^i \in R[s].$$
The first divided difference of $p$ is a symmetric polynomial of degree $n - 1$,
$$\Delta p(s, t) = \frac{p(s) - p(t)}{s - t} = \sum_{0 \le j + k < n} p_{j+k+1}\, s^j t^k.$$
Multiply $sI - A$ by its adjugate. Since $p(A) = 0$ by the Cayley–Hamilton theorem, some elementary manipulations reveal
$$\operatorname{adj}(sI - A) = \Delta p(s, A).$$
In particular, the resolvent of $A$ is defined to be
$$R(z; A) = (zI - A)^{-1},$$
and by the above formula, this is equal to
$$R(z; A) = \frac{\Delta p(z, A)}{p(z)}.$$
Jacobi's formula
The adjugate also appears in Jacobi's formula for the derivative of the determinant. If $A(t)$ is continuously differentiable, then
$$\frac{d \det(A(t))}{dt} = \operatorname{tr}\!\left( \operatorname{adj}(A(t))\, A'(t) \right).$$
It follows that the total derivative of the determinant is the transpose of the adjugate:
$$d(\det A)_A = \operatorname{adj}(A)^{\mathsf T}.$$
Cayley–Hamilton formula
Let $p_A(t)$ be the characteristic polynomial of $A$. The Cayley–Hamilton theorem states that
$$p_A(A) = \mathbf 0.$$
Separating the constant term and multiplying the equation by gives an expression for the adjugate that depends only on and the coefficients of . These coefficients can be explicitly represented in terms of traces of powers of using complete exponential Bell polynomials. The resulting formula is
where is the dimension of , and the sum is taken over and all sequences of satisfying the linear Diophantine equation
For the 2 × 2 case, this gives
$$\operatorname{adj}(A) = I\, \operatorname{tr}(A) - A.$$
For the 3 × 3 case, this gives
$$\operatorname{adj}(A) = \tfrac{1}{2}\!\left( (\operatorname{tr} A)^2 - \operatorname{tr}(A^2) \right) I - A\, \operatorname{tr}(A) + A^2.$$
For the 4 × 4 case, this gives
$$\operatorname{adj}(A) = \tfrac{1}{6}\!\left( (\operatorname{tr} A)^3 - 3\, \operatorname{tr}(A)\, \operatorname{tr}(A^2) + 2\, \operatorname{tr}(A^3) \right) I - \tfrac{1}{2}\!\left( (\operatorname{tr} A)^2 - \operatorname{tr}(A^2) \right) A + \operatorname{tr}(A)\, A^2 - A^3.$$
The same formula follows directly from the terminating step of the Faddeev–LeVerrier algorithm, which efficiently determines the characteristic polynomial of .
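A compact sketch of that algorithm (written from the standard Faddeev–LeVerrier recursion, not from any formula elided above); the matrix produced at the terminating step gives the adjugate:

    import numpy as np

    def faddeev_leverrier_adjugate(A):
        n = A.shape[0]
        M = np.zeros_like(A, dtype=float)    # M_0
        c = 1.0                              # leading coefficient c_n
        for k in range(1, n + 1):
            M = A @ M + c * np.eye(n)        # M_k = A M_{k-1} + c_{n-k+1} I
            c = -np.trace(A @ M) / k         # c_{n-k}
        return (-1) ** (n - 1) * M           # adj(A) = (-1)^{n-1} M_n

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    assert np.allclose(A @ faddeev_leverrier_adjugate(A),
                       np.linalg.det(A) * np.eye(2))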
In general, the adjugate of a matrix of arbitrary dimension N can be computed using Einstein's summation convention.
Relation to exterior algebras
The adjugate can be viewed in abstract terms using exterior algebras. Let be an -dimensional vector space. The exterior product defines a bilinear pairing
Abstractly, is isomorphic to , and under any such isomorphism the exterior product is a perfect pairing. That is, it yields an isomorphism
This isomorphism sends each to the map defined by
Suppose that is a linear transformation. Pullback by the th exterior power of induces a morphism of spaces. The adjugate of is the composite
If is endowed with its canonical basis , and if the matrix of in this basis is , then the adjugate of is the adjugate of . To see why, give the basis
Fix a basis vector of . The image of under is determined by where it sends basis vectors:
On basis vectors, the st exterior power of is
Each of these terms maps to zero under except the term. Therefore, the pullback of is the linear transformation for which
That is, it equals
Applying the inverse of shows that the adjugate of is the linear transformation for which
Consequently, its matrix representation is the adjugate of .
If is endowed with an inner product and a volume form, then the map can be decomposed further. In this case, can be understood as the composite of the Hodge star operator and dualization. Specifically, if is the volume form, then it, together with the inner product, determines an isomorphism
This induces an isomorphism
A vector in corresponds to the linear functional
By the definition of the Hodge star operator, this linear functional is dual to . That is, equals .
Higher adjugates
Let $A$ be an $n \times n$ matrix, and fix $r \ge 0$. The $r$-th higher adjugate of $A$ is an $\binom{n}{r} \times \binom{n}{r}$ matrix, denoted $\operatorname{adj}_r A$, whose entries are indexed by size-$r$ subsets $I$ and $J$ of $\{1, \dots, n\}$. Let $I'$ and $J'$ denote the complements of $I$ and $J$, respectively. Also let $A_{I', J'}$ denote the submatrix of $A$ containing those rows and columns whose indices are in $I'$ and $J'$, respectively. Then the $(I, J)$ entry of $\operatorname{adj}_r A$ is
$$(\operatorname{adj}_r A)_{I, J} = (-1)^{\sigma(I) + \sigma(J)}\, \det A_{J', I'},$$
where $\sigma(I)$ and $\sigma(J)$ are the sum of the elements of $I$ and $J$, respectively.
Basic properties of higher adjugates include :
.
.
.
.
, where denotes the  th compound matrix.
Higher adjugates may be defined in abstract algebraic terms in a similar fashion to the usual adjugate, substituting and for and , respectively.
Iterated adjugates
Iteratively taking the adjugate of an invertible $n \times n$ matrix A $k$ times yields
$$\underbrace{\operatorname{adj} \cdots \operatorname{adj}}_{k}(A) = \det(A)^{\frac{(n-1)^k - (-1)^k}{n}}\, A^{(-1)^k}.$$
For example,
$$\operatorname{adj}(\operatorname{adj}(A)) = \det(A)^{n-2}\, A.$$
See also
Cayley–Hamilton theorem
Cramer's rule
Trace diagram
Jacobi's formula
Faddeev–LeVerrier algorithm
Compound matrix
References
Bibliography
Roger A. Horn and Charles R. Johnson (2013), Matrix Analysis, Second Edition. Cambridge University Press,
Roger A. Horn and Charles R. Johnson (1991), Topics in Matrix Analysis. Cambridge University Press,
External links
Matrix Reference Manual
Online matrix calculator (determinant, trace, inverse, adjoint, transpose) Compute Adjugate matrix up to order 8
Matrix theory
Linear algebra | Adjugate matrix | [
"Mathematics"
] | 2,423 | [
"Linear algebra",
"Algebra"
] |
202,886 | https://en.wikipedia.org/wiki/Covariance%20and%20contravariance%20of%20vectors | In physics, especially in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. Briefly, a contravariant vector is a list of numbers that transforms oppositely to a change of basis, and a covariant vector is a list of numbers that transforms in the same way. Contravariant vectors are often just called vectors and covariant vectors are called covectors or dual vectors. The terms covariant and contravariant were introduced by James Joseph Sylvester in 1851.
Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems. Associated with any coordinate system is a natural choice of coordinate basis for vectors based at each point of the space, and covariance and contravariance are particularly important for understanding how the coordinate description of a vector changes by passing from one coordinate system to another. Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance.
Introduction
In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list (or tuple) of numbers such as
$$(v_1, v_2, v_3).$$
The numbers in the list depend on the choice of coordinate system. For instance, if the vector represents position with respect to an observer (position vector), then the coordinate system may be obtained from a system of rigid rods, or reference axes, along which the components v1, v2, and v3 are measured. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vectors will transform in a certain way in passing from one coordinate system to another.
A simple illustrative case is that of a Euclidean vector. For a vector, once a set of basis vectors has been defined, then the components of that vector will always vary opposite to that of the basis vectors. That vector is therefore defined as a contravariant tensor. Take a standard position vector for example. By changing the scale of the reference axes from meters to centimeters (that is, dividing the scale of the reference axes by 100, so that the basis vectors now are 0.01 meters long), the components of the measured position vector are multiplied by 100. A vector's components change scale inversely to changes in scale to the reference axes, and consequently a vector is called a contravariant tensor.
A vector, which is an example of a contravariant tensor, has components that transform inversely to the transformation of the reference axes (with example transformations including rotation and dilation). The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector would reduce in an exactly compensating way. Mathematically, if the coordinate system undergoes a transformation described by an invertible matrix M, so that the basis vectors transform according to $(\mathbf e'_1, \dots, \mathbf e'_n) = (\mathbf e_1, \dots, \mathbf e_n)\, M$, then the components of a vector v in the original basis ($v$, as a column of components) must be similarly transformed via $v' = M^{-1} v$. The components of a vector are often represented arranged in a column.
By contrast, a covector has components that transform like the reference axes. It lives in the dual vector space, and represents a linear map from vectors to scalars. The dot product operator involving vectors is a good example of a covector. To illustrate, assume we have a covector defined as , where is a vector. The components of this covector in some arbitrary basis are , with being the basis vectors in the corresponding vector space. (This can be derived by noting that we want to get the correct answer for the dot product operation when multiplying by an arbitrary vector , with components ). The covariance of these covector components is then seen by noting that if a transformation described by an invertible matrix M were to be applied to the basis vectors in the corresponding vector space, , then the components of the covector will transform with the same matrix , namely, . The components of a covector are often represented arranged in a row.
A third concept related to covariance and contravariance is invariance. A scalar (also called type-0 or rank-0 tensor) is an object that does not vary with the change in basis. An example of a physical observable that is a scalar is the mass of a particle. The single, scalar value of mass is independent of changes in basis vectors and consequently is called invariant. The magnitude of a vector (such as distance) is another example of an invariant, because it remains fixed even if geometrical vector components vary. (For example, for a position vector of length $L$ meters, if all Cartesian basis vectors are changed from 1 meter in length to 0.5 meters in length, the length of the position vector remains unchanged at $L$ meters, although the vector components will all increase by a factor of 2.) The scalar product of a vector and a covector is invariant, because one has components that vary with the base change, and the other has components that vary oppositely, and the two effects cancel out. One thus says that covectors are dual to vectors.
Thus, to summarize:
A vector, or tangent vector, has components that contra-vary with a change of basis to compensate. That is, the matrix that transforms the vector components must be the inverse of the matrix that transforms the basis vectors. The components of vectors (as opposed to those of covectors) are said to be contravariant. In Einstein notation (implicit summation over repeated index), contravariant components are denoted with upper indices as in
$$\mathbf v = v^i\, \mathbf e_i.$$
A covector or cotangent vector has components that co-vary with a change of basis in the corresponding (initial) vector space. That is, the components must be transformed by the same matrix as the change of basis matrix in the corresponding (initial) vector space. The components of covectors (as opposed to those of vectors) are said to be covariant. In Einstein notation, covariant components are denoted with lower indices as in
$$\boldsymbol\alpha = \alpha_i\, \mathbf e^i.$$
The scalar product of a vector and covector is the scalar $\alpha_i v^i$, which is invariant. It is the duality pairing of vectors and covectors; a numerical illustration of these rules follows below.
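The following sanity check (illustrative only; the matrix and components are random) verifies the three statements above: vector components transform with the inverse of the change-of-basis matrix, covector components with the matrix itself, and their pairing is invariant.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))          # invertible change-of-basis matrix
    v = rng.standard_normal(3)               # contravariant components (column)
    alpha = rng.standard_normal(3)           # covariant components (row)

    v_new = np.linalg.inv(A) @ v             # contra-variant rule
    alpha_new = alpha @ A                    # co-variant rule
    assert np.isclose(alpha @ v, alpha_new @ v_new)   # invariant pairing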
Definition
The general formulation of covariance and contravariance refers to how the components of a coordinate vector transform under a change of basis (passive transformation). Thus let V be a vector space of dimension n over a field of scalars S, and let each of $\mathbf f = (X_1, \dots, X_n)$ and $\mathbf f' = (Y_1, \dots, Y_n)$ be a basis of V. Also, let the change of basis from f to f′ be given by
$$\mathbf f \to \mathbf f' = \mathbf f\, A,$$
for some invertible n×n matrix A with entries $a^i_j$.
Here, each vector Yj of the f′ basis is a linear combination of the vectors Xi of the f basis, so that
$$Y_j = \sum_i a^i_j\, X_i.$$
Contravariant transformation
A vector $v$ in V is expressed uniquely as a linear combination of the elements of the f basis as
$$v = \sum_i v^i[\mathbf f]\, X_i,$$
where $v^i[\mathbf f]$ are elements of the field S known as the components of $v$ in the f basis. Denote the column vector of components of $v$ by $\mathbf v[\mathbf f]$:
$$\mathbf v[\mathbf f] = \begin{bmatrix} v^1[\mathbf f] \\ v^2[\mathbf f] \\ \vdots \\ v^n[\mathbf f] \end{bmatrix},$$
so that the expansion above can be rewritten as a matrix product
$$v = \mathbf f\, \mathbf v[\mathbf f].$$
The vector v may also be expressed in terms of the f′ basis, so that
$$v = \mathbf f'\, \mathbf v[\mathbf f'].$$
However, since the vector v itself is invariant under the choice of basis,
$$\mathbf f\, \mathbf v[\mathbf f] = \mathbf f'\, \mathbf v[\mathbf f'].$$
The invariance of v combined with the relationship $\mathbf f' = \mathbf f A$ between f and f′ implies that
$$\mathbf f\, \mathbf v[\mathbf f] = \mathbf f A\, \mathbf v[\mathbf f A],$$
giving the transformation rule
$$\mathbf v[\mathbf f A] = A^{-1}\, \mathbf v[\mathbf f].$$
In terms of components,
$$v^i[\mathbf f A] = \sum_j \tilde a^i_j\, v^j[\mathbf f],$$
where the coefficients $\tilde a^i_j$ are the entries of the inverse matrix of A.
Because the components of the vector v transform with the inverse of the matrix A, these components are said to transform contravariantly under a change of basis.
The way A relates the two pairs is depicted in the following informal diagram using an arrow. The reversal of the arrow indicates a contravariant change:
Covariant transformation
A linear functional α on V is expressed uniquely in terms of its components (elements in S) in the f basis as
$$\alpha(X_i) = \alpha_i[\mathbf f], \qquad i = 1, 2, \dots, n.$$
These components are the action of α on the basis vectors Xi of the f basis.
Under the change of basis from f to f′ (via $\mathbf f' = \mathbf f A$), the components transform so that
$$\alpha_i[\mathbf f A] = \alpha(Y_i) = \alpha\!\left( \sum_j a^j_i\, X_j \right) = \sum_j a^j_i\, \alpha_j[\mathbf f].$$
Denote the row vector of components of α by $\boldsymbol\alpha[\mathbf f]$:
$$\boldsymbol\alpha[\mathbf f] = \begin{bmatrix} \alpha_1[\mathbf f] & \alpha_2[\mathbf f] & \cdots & \alpha_n[\mathbf f] \end{bmatrix},$$
so that the relation above can be rewritten as the matrix product
$$\boldsymbol\alpha[\mathbf f A] = \boldsymbol\alpha[\mathbf f]\, A.$$
Because the components of the linear functional α transform with the matrix A, these components are said to transform covariantly under a change of basis.
The way A relates the two pairs is depicted in the following informal diagram using an arrow. A covariant relationship is indicated since the arrows travel in the same direction:
Had a column vector representation been used instead, the transformation law would be the transpose
$$\boldsymbol\alpha^{\mathsf T}[\mathbf f A] = A^{\mathsf T}\, \boldsymbol\alpha^{\mathsf T}[\mathbf f].$$
Coordinates
The choice of basis f on the vector space V defines uniquely a set of coordinate functions on V, by means of
The coordinates on V are therefore contravariant in the sense that
Conversely, a system of n quantities vi that transform like the coordinates xi on V defines a contravariant vector (or simply vector). A system of n quantities that transform oppositely to the coordinates is then a covariant vector (or covector).
This formulation of contravariance and covariance is often more natural in applications in which there is a coordinate space (a manifold) on which vectors live as tangent vectors or cotangent vectors. Given a local coordinate system xi on the manifold, the reference axes for the coordinate system are the vector fields
This gives rise to the frame at every point of the coordinate patch.
If yi is a different coordinate system and
then the frame f′ is related to the frame f by the inverse of the Jacobian matrix of the coordinate transition:
$$\mathbf f' = \mathbf f\, J^{-1}, \qquad J = \left( \frac{\partial y^i}{\partial x^j} \right)_{i,j=1}^{n}.$$
Or, in indices,
$$\frac{\partial}{\partial y^i} = \sum_{j=1}^{n} \frac{\partial x^j}{\partial y^i}\, \frac{\partial}{\partial x^j}.$$
A tangent vector is by definition a vector that is a linear combination of the coordinate partials $\partial/\partial x^i$. Thus a tangent vector is defined by
$$v = \sum_{i=1}^{n} v^i\, \frac{\partial}{\partial x^i}.$$
Such a vector is contravariant with respect to change of frame. Under changes in the coordinate system, one has
$$v = \sum_j \bar v^j\, \frac{\partial}{\partial y^j} = \sum_i v^i\, \frac{\partial}{\partial x^i}.$$
Therefore, the components of a tangent vector transform via
$$\bar v^j = \sum_i \frac{\partial y^j}{\partial x^i}\, v^i.$$
Accordingly, a system of n quantities vi depending on the coordinates that transform in this way on passing from one coordinate system to another is called a contravariant vector.
Covariant and contravariant components of a vector with a metric
In a finite-dimensional vector space V over a field K with a symmetric bilinear form $g : V \times V \to K$ (which may be referred to as the metric tensor), there is little distinction between covariant and contravariant vectors, because the bilinear form allows covectors to be identified with vectors. That is, a vector v uniquely determines a covector α via
$$\alpha(w) = g(v, w)$$
for all vectors w. Conversely, each covector α determines a unique vector v by this equation. Because of this identification of vectors with covectors, one may speak of the covariant components or contravariant components of a vector, that is, they are just representations of the same vector using the reciprocal basis.
Given a basis $\mathbf f = (X_1, \dots, X_n)$ of V, there is a unique reciprocal basis $\mathbf f^\sharp = (Y^1, \dots, Y^n)$ of V determined by requiring that
$$g(Y^i, X_j) = \delta^i_j,$$
the Kronecker delta. In terms of these bases, any vector v can be written in two ways:
$$v = \sum_i v^i[\mathbf f]\, X_i = \sum_i v_i[\mathbf f]\, Y^i.$$
The components vi[f] are the contravariant components of the vector v in the basis f, and the components vi[f] are the covariant components of v in the basis f. The terminology is justified because under a change of basis,
Euclidean plane
In the Euclidean plane, the dot product allows for vectors to be identified with covectors. If $\mathbf e_1, \mathbf e_2$ is a basis, then the dual basis $\mathbf e^1, \mathbf e^2$ satisfies
$$\mathbf e^1 \cdot \mathbf e_1 = 1, \quad \mathbf e^1 \cdot \mathbf e_2 = 0, \quad \mathbf e^2 \cdot \mathbf e_1 = 0, \quad \mathbf e^2 \cdot \mathbf e_2 = 1.$$
Thus, $\mathbf e^1$ and $\mathbf e_2$ are perpendicular to each other, as are $\mathbf e^2$ and $\mathbf e_1$, and the lengths of $\mathbf e^1$ and $\mathbf e^2$ are normalized against $\mathbf e_1$ and $\mathbf e_2$, respectively.
Example
For example, suppose that we are given a basis e1, e2 consisting of a pair of vectors making a 45° angle with one another, such that e1 has length 2 and e2 has length 1. Then the dual basis vectors are given as follows:
$\mathbf e^2$ is the result of rotating $\mathbf e_1$ through an angle of 90° (where the sense is measured by assuming the pair $\mathbf e_1$, $\mathbf e_2$ to be positively oriented), and then rescaling so that $\mathbf e^2 \cdot \mathbf e_2 = 1$ holds.
$\mathbf e^1$ is the result of rotating $\mathbf e_2$ through an angle of 90°, and then rescaling so that $\mathbf e^1 \cdot \mathbf e_1 = 1$ holds.
Applying these rules, we find
and
Thus the change of basis matrix in going from the original basis to the reciprocal basis is
since
For instance, the vector
is a vector with contravariant components
The covariant components are obtained by equating the two expressions for the vector v:
so
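The construction in this example can be reproduced numerically (a sketch; the basis below matches the stated lengths and 45° angle, and the helper names are ours). The rows of the inverse basis matrix are the dual basis vectors:

    import numpy as np

    e1 = 2.0 * np.array([1.0, 0.0])                          # length 2 along x
    e2 = 1.0 * np.array([np.cos(np.pi/4), np.sin(np.pi/4)])  # length 1 at 45 degrees

    B = np.column_stack([e1, e2])        # basis vectors as columns
    dual = np.linalg.inv(B)              # rows are e^1, e^2, so e^i . e_j = delta^i_j
    assert np.allclose(dual @ B, np.eye(2))

    v = np.array([1.0, 2.0])                 # a sample vector in standard coordinates
    contravariant = dual @ v                 # v^i = e^i . v
    covariant = np.array([e1 @ v, e2 @ v])   # v_i = e_i . v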
Three-dimensional Euclidean space
In the three-dimensional Euclidean space, one can also determine explicitly the dual basis to a given set of basis vectors e1, e2, e3 of E3 that are not necessarily assumed to be orthogonal nor of unit norm. The dual basis vectors are:
$$\mathbf e^1 = \frac{\mathbf e_2 \times \mathbf e_3}{\mathbf e_1 \cdot (\mathbf e_2 \times \mathbf e_3)}, \qquad \mathbf e^2 = \frac{\mathbf e_3 \times \mathbf e_1}{\mathbf e_1 \cdot (\mathbf e_2 \times \mathbf e_3)}, \qquad \mathbf e^3 = \frac{\mathbf e_1 \times \mathbf e_2}{\mathbf e_1 \cdot (\mathbf e_2 \times \mathbf e_3)}.$$
Even when the $\mathbf e_i$ and $\mathbf e^i$ are not orthonormal, they are still mutually reciprocal:
$$\mathbf e^i \cdot \mathbf e_j = \delta^i_j.$$
Then the contravariant components of any vector v can be obtained by the dot product of v with the dual basis vectors:
$$v^1 = \mathbf v \cdot \mathbf e^1, \qquad v^2 = \mathbf v \cdot \mathbf e^2, \qquad v^3 = \mathbf v \cdot \mathbf e^3.$$
Likewise, the covariant components of v can be obtained from the dot product of v with basis vectors, viz.
$$v_1 = \mathbf v \cdot \mathbf e_1, \qquad v_2 = \mathbf v \cdot \mathbf e_2, \qquad v_3 = \mathbf v \cdot \mathbf e_3.$$
Then v can be expressed in two (reciprocal) ways, viz.
$$\mathbf v = v^i\, \mathbf e_i = v^1 \mathbf e_1 + v^2 \mathbf e_2 + v^3 \mathbf e_3$$
or
$$\mathbf v = v_i\, \mathbf e^i = v_1 \mathbf e^1 + v_2 \mathbf e^2 + v_3 \mathbf e^3.$$
Combining the above relations, we have
$$v_i = \mathbf v \cdot \mathbf e_i = (v^j\, \mathbf e_j) \cdot \mathbf e_i = v^j\, (\mathbf e_j \cdot \mathbf e_i)$$
and we can convert between the basis and dual basis with
$$v_i = v^j\, (\mathbf e_j \cdot \mathbf e_i) = v^j\, g_{ji}$$
and
$$v^i = v_j\, (\mathbf e^j \cdot \mathbf e^i) = v_j\, g^{ji}.$$
If the basis vectors are orthonormal, then they are the same as the dual basis vectors.
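A short numerical sketch of the cross-product formulas above (the sample basis is arbitrary):

    import numpy as np

    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([1.0, 1.0, 0.0])
    e3 = np.array([0.0, 1.0, 2.0])
    V = e1 @ np.cross(e2, e3)        # scalar triple product e1 . (e2 x e3)

    d1 = np.cross(e2, e3) / V        # e^1
    d2 = np.cross(e3, e1) / V        # e^2
    d3 = np.cross(e1, e2) / V        # e^3

    # Mutual reciprocity: e^i . e_j = delta^i_j
    G = np.array([[d @ e for e in (e1, e2, e3)] for d in (d1, d2, d3)])
    assert np.allclose(G, np.eye(3))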
General Euclidean spaces
More generally, in an n-dimensional Euclidean space V, if a basis is
$$\mathbf e_1, \dots, \mathbf e_n,$$
the reciprocal basis is given by (double indices are summed over),
$$\mathbf e^i = g^{ij}\, \mathbf e_j,$$
where the coefficients $g^{ij}$ are the entries of the inverse matrix of
$$g_{ij} = \mathbf e_i \cdot \mathbf e_j.$$
Indeed, we then have
$$\mathbf e^i \cdot \mathbf e_k = g^{ij}\, \mathbf e_j \cdot \mathbf e_k = g^{ij}\, g_{jk} = \delta^i_k.$$
The covariant and contravariant components of any vector
$$\mathbf v = v_i\, \mathbf e^i = v^i\, \mathbf e_i$$
are related as above by
$$v_i = \mathbf v \cdot \mathbf e_i = v^j\, g_{ji}$$
and
$$v^i = \mathbf v \cdot \mathbf e^i = v_j\, g^{ji}.$$
Use in tensor analysis
The distinction between covariance and contravariance is particularly important for computations with tensors, which often have mixed variance. This means that they have both covariant and contravariant components, or both vector and covector components. The valence of a tensor is the number of covariant and contravariant terms, and in Einstein notation, covariant components have lower indices, while contravariant components have upper indices. The duality between covariance and contravariance intervenes whenever a vector or tensor quantity is represented by its components, although modern differential geometry uses more sophisticated index-free methods to represent tensors.
In tensor analysis, a covariant vector varies more or less reciprocally to a corresponding contravariant vector. Expressions for lengths, areas and volumes of objects in the vector space can then be given in terms of tensors with covariant and contravariant indices. Under simple expansions and contractions of the coordinates, the reciprocity is exact; under affine transformations the components of a vector intermingle on going between covariant and contravariant expression.
On a manifold, a tensor field will typically have multiple, upper and lower indices, where Einstein notation is widely used. When the manifold is equipped with a metric, covariant and contravariant indices become very closely related to one another. Contravariant indices can be turned into covariant indices by contracting with the metric tensor. The reverse is possible by contracting with the (matrix) inverse of the metric tensor. Note that in general, no such relation exists in spaces not endowed with a metric tensor. Furthermore, from a more abstract standpoint, a tensor is simply "there" and its components of either kind are only calculational artifacts whose values depend on the chosen coordinates.
The explanation in geometric terms is that a general tensor will have contravariant indices as well as covariant indices, because it has parts that live in the tangent bundle as well as the cotangent bundle.
A contravariant vector is one which transforms like $\dfrac{dx^\mu}{d\tau}$, where $x^\mu$ are the coordinates of a particle at its proper time $\tau$. A covariant vector is one which transforms like $\dfrac{\partial \varphi}{\partial x^\mu}$, where $\varphi$ is a scalar field.
Algebra and geometry
In category theory, there are covariant functors and contravariant functors. The assignment of the dual space to a vector space is a standard example of a contravariant functor. Contravariant (resp. covariant) vectors are contravariant (resp. covariant) functors from a $GL(n)$-torsor to the fundamental representation of $GL(n)$. Similarly, tensors of higher degree are functors with values in other representations of $GL(n)$. However, some constructions of multilinear algebra are of "mixed" variance, which prevents them from being functors.
In differential geometry, the components of a vector relative to a basis of the tangent bundle are covariant if they change with the same linear transformation as a change of basis. They are contravariant if they change by the inverse transformation. This is sometimes a source of confusion for two distinct but related reasons. The first is that vectors whose components are covariant (called covectors or 1-forms) actually pull back under smooth functions, meaning that the operation assigning the space of covectors to a smooth manifold is actually a contravariant functor. Likewise, vectors whose components are contravariant push forward under smooth mappings, so the operation assigning the space of (contravariant) vectors to a smooth manifold is a covariant functor. Secondly, in the classical approach to differential geometry, it is not bases of the tangent bundle that are the most primitive object, but rather changes in the coordinate system. Vectors with contravariant components transform in the same way as changes in the coordinates (because these actually change oppositely to the induced change of basis). Likewise, vectors with covariant components transform in the opposite way as changes in the coordinates.
See also
Active and passive transformation
Mixed tensor
Two-point tensor, a generalization allowing indices to reference multiple vector bases
Notes
Citations
References
External links
Invariance, Contravariance, and Covariance
Tensors
Differential geometry
Riemannian geometry
Vectors (mathematics and physics) | Covariance and contravariance of vectors | [
"Engineering"
] | 3,813 | [
"Tensors"
] |
203,056 | https://en.wikipedia.org/wiki/Spherical%20harmonics | In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields. The table of spherical harmonics contains a list of common spherical harmonics.
Since the spherical harmonics form a complete set of orthogonal functions and thus an orthonormal basis, each function defined on the surface of a sphere can be written as a sum of these spherical harmonics. This is similar to periodic functions defined on a circle that can be expressed as a sum of circular functions (sines and cosines) via Fourier series. Like the sines and cosines in Fourier series, the spherical harmonics may be organized by (spatial) angular frequency, as seen in the rows of functions in the illustration on the right. Further, spherical harmonics are basis functions for irreducible representations of SO(3), the group of rotations in three dimensions, and thus play a central role in the group theoretic discussion of SO(3).
Spherical harmonics originate from solving Laplace's equation in the spherical domains. Functions that are solutions to Laplace's equation are called harmonics. Despite their name, spherical harmonics take their simplest form in Cartesian coordinates, where they can be defined as homogeneous polynomials of degree in that obey Laplace's equation. The connection with spherical coordinates arises immediately if one uses the homogeneity to extract a factor of radial dependence from the above-mentioned polynomial of degree ; the remaining factor can be regarded as a function of the spherical angular coordinates and only, or equivalently of the orientational unit vector specified by these angles. In this setting, they may be viewed as the angular portion of a set of solutions to Laplace's equation in three dimensions, and this viewpoint is often taken as an alternative definition. Notice, however, that spherical harmonics are not functions on the sphere which are harmonic with respect to the Laplace-Beltrami operator for the standard round metric on the sphere: the only harmonic functions in this sense on the sphere are the constants, since harmonic functions satisfy the Maximum principle. Spherical harmonics, as functions on the sphere, are eigenfunctions of the Laplace-Beltrami operator (see Higher dimensions).
A specific set of spherical harmonics, denoted or , are known as Laplace's spherical harmonics, as they were first introduced by Pierre Simon de Laplace in 1782. These functions form an orthogonal system, and are thus basic to the expansion of a general function on the sphere as alluded to above.
Spherical harmonics are important in many theoretical and practical applications, including the representation of multipole electrostatic and electromagnetic fields, electron configurations, gravitational fields, geoids, the magnetic fields of planetary bodies and stars, and the cosmic microwave background radiation. In 3D computer graphics, spherical harmonics play a role in a wide variety of topics including indirect lighting (ambient occlusion, global illumination, precomputed radiance transfer, etc.) and modelling of 3D shapes.
History
Spherical harmonics were first investigated in connection with the Newtonian potential of Newton's law of universal gravitation in three dimensions. In 1782, Pierre-Simon de Laplace had, in his Mécanique Céleste, determined that the gravitational potential at a point $\mathbf x$ associated with a set of point masses $m_i$ located at points $\mathbf x_i$ was given by
$$V(\mathbf x) = \sum_i \frac{m_i}{|\mathbf x_i - \mathbf x|}.$$
Each term in the above summation is an individual Newtonian potential for a point mass. Just prior to that time, Adrien-Marie Legendre had investigated the expansion of the Newtonian potential in powers of $r = |\mathbf x|$ and $r_1 = |\mathbf x_1|$. He discovered that if $r \le r_1$ then
$$\frac{1}{|\mathbf x_1 - \mathbf x|} = P_0(\cos\gamma)\, \frac{1}{r_1} + P_1(\cos\gamma)\, \frac{r}{r_1^2} + P_2(\cos\gamma)\, \frac{r^2}{r_1^3} + \cdots,$$
where $\gamma$ is the angle between the vectors $\mathbf x$ and $\mathbf x_1$. The functions $P_i : [-1, 1] \to \mathbb R$ are the Legendre polynomials, and they can be derived as a special case of spherical harmonics. Subsequently, in his 1782 memoir, Laplace investigated these coefficients using spherical coordinates to represent the angle $\gamma$ between $\mathbf x_1$ and $\mathbf x$. (See the article on Legendre polynomials for more detail.)
In 1867, William Thomson (Lord Kelvin) and Peter Guthrie Tait introduced the solid spherical harmonics in their Treatise on Natural Philosophy, and also first introduced the name of "spherical harmonics" for these functions. The solid harmonics were homogeneous polynomial solutions of Laplace's equation
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = 0.$$
By examining Laplace's equation in spherical coordinates, Thomson and Tait recovered Laplace's spherical harmonics. (See Harmonic polynomial representation.) The term "Laplace's coefficients" was employed by William Whewell to describe the particular system of solutions introduced along these lines, whereas others reserved this designation for the zonal spherical harmonics that had properly been introduced by Laplace and Legendre.
The 19th century development of Fourier series made possible the solution of a wide variety of physical problems in rectangular domains, such as the solution of the heat equation and wave equation. This could be achieved by expansion of functions in series of trigonometric functions. Whereas the trigonometric functions in a Fourier series represent the fundamental modes of vibration in a string, the spherical harmonics represent the fundamental modes of vibration of a sphere in much the same way. Many aspects of the theory of Fourier series could be generalized by taking expansions in spherical harmonics rather than trigonometric functions. Moreover, analogous to how trigonometric functions can equivalently be written as complex exponentials, spherical harmonics also possessed an equivalent form as complex-valued functions. This was a boon for problems possessing spherical symmetry, such as those of celestial mechanics originally studied by Laplace and Legendre.
The prevalence of spherical harmonics already in physics set the stage for their later importance in the 20th century birth of quantum mechanics. The (complex-valued) spherical harmonics are eigenfunctions of the square of the orbital angular momentum operator
$$\mathbf L = -i\hbar\, \mathbf r \times \nabla,$$
and therefore they represent the different quantized configurations of atomic orbitals.
Laplace's spherical harmonics
Laplace's equation imposes that the Laplacian of a scalar field is zero. (Here the scalar field is understood to be complex, i.e. to correspond to a (smooth) function $f : \mathbb R^3 \to \mathbb C$.) In spherical coordinates this is:
$$\nabla^2 f = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left( r^2 \frac{\partial f}{\partial r} \right) + \frac{1}{r^2 \sin\theta}\frac{\partial}{\partial \theta}\!\left( \sin\theta\, \frac{\partial f}{\partial \theta} \right) + \frac{1}{r^2 \sin^2\theta}\frac{\partial^2 f}{\partial \varphi^2} = 0.$$
Consider the problem of finding solutions of the form $f(r, \theta, \varphi) = R(r)\, Y(\theta, \varphi)$. By separation of variables, two differential equations result by imposing Laplace's equation:
$$\frac{1}{R}\frac{d}{dr}\!\left( r^2 \frac{dR}{dr} \right) = \lambda, \qquad \frac{1}{Y}\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\!\left( \sin\theta\, \frac{\partial Y}{\partial\theta} \right) + \frac{1}{Y}\frac{1}{\sin^2\theta}\frac{\partial^2 Y}{\partial\varphi^2} = -\lambda.$$
The second equation can be simplified under the assumption that $Y$ has the form $Y(\theta, \varphi) = \Theta(\theta)\, \Phi(\varphi)$. Applying separation of variables again to the second equation gives way to the pair of differential equations
$$\frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2} = -m^2, \qquad \lambda \sin^2\theta + \frac{\sin\theta}{\Theta}\frac{d}{d\theta}\!\left( \sin\theta\, \frac{d\Theta}{d\theta} \right) = m^2,$$
for some number $m$. A priori, $m^2$ is a complex constant, but because $\Phi$ must be a periodic function whose period evenly divides $2\pi$, $m$ is necessarily an integer and $\Phi$ is a linear combination of the complex exponentials $e^{\pm i m \varphi}$. The solution function $Y(\theta, \varphi)$ is regular at the poles of the sphere, where $\theta = 0, \pi$. Imposing this regularity in the solution $\Theta$ of the second equation at the boundary points of the domain is a Sturm–Liouville problem that forces the parameter $\lambda$ to be of the form $\lambda = \ell(\ell + 1)$ for some non-negative integer $\ell$ with $\ell \ge |m|$; this is also explained below in terms of the orbital angular momentum. Furthermore, a change of variables $t = \cos\theta$ transforms this equation into the Legendre equation, whose solution is a multiple of the associated Legendre polynomial $P_\ell^m(\cos\theta)$. Finally, the equation for $R$ has solutions of the form $R(r) = A r^\ell + B r^{-\ell - 1}$; requiring the solution to be regular throughout $\mathbb R^3$ forces $B = 0$.
Here the solution was assumed to have the special form $Y(\theta, \varphi) = \Theta(\theta)\, \Phi(\varphi)$. For a given value of $\ell$, there are $2\ell + 1$ independent solutions of this form, one for each integer $m$ with $-\ell \le m \le \ell$. These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials:
$$Y_\ell^m(\theta, \varphi) = N\, e^{i m \varphi}\, P_\ell^m(\cos\theta),$$
which fulfill
$$r^2 \nabla^2 Y_\ell^m(\theta, \varphi) = -\ell(\ell + 1)\, Y_\ell^m(\theta, \varphi).$$
Here $Y_\ell^m$ is called a spherical harmonic function of degree $\ell$ and order $m$, $P_\ell^m$ is an associated Legendre polynomial, $N$ is a normalization constant, and $\theta$ and $\varphi$ represent colatitude and longitude, respectively. In particular, the colatitude $\theta$, or polar angle, ranges from $0$ at the North Pole, to $\pi/2$ at the Equator, to $\pi$ at the South Pole, and the longitude $\varphi$, or azimuth, may assume all values with $0 \le \varphi < 2\pi$. For a fixed integer $\ell$, every solution $Y(\theta, \varphi)$ of the eigenvalue problem
$$r^2 \nabla^2 Y = -\ell(\ell + 1)\, Y$$
is a linear combination of $Y_\ell^m$. In fact, for any such solution, $r^\ell\, Y(\theta, \varphi)$ is the expression in spherical coordinates of a homogeneous polynomial $\mathbb R^3 \to \mathbb C$ that is harmonic (see below), and so counting dimensions shows that there are $2\ell + 1$ linearly independent such polynomials.
The general solution to Laplace's equation in a ball centered at the origin is a linear combination of the spherical harmonic functions multiplied by the appropriate scale factor $r^\ell$,
$$f(r, \theta, \varphi) = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} f_\ell^m\, r^\ell\, Y_\ell^m(\theta, \varphi),$$
where the $f_\ell^m$ are constants and the factors $r^\ell\, Y_\ell^m$ are known as (regular) solid harmonics. Such an expansion is valid in the ball
$$r < R = \frac{1}{\limsup_{\ell \to \infty} |f_\ell^m|^{1/\ell}}.$$
For $r > R$, the solid harmonics with negative powers of $r$ (the irregular solid harmonics $r^{-\ell - 1}\, Y_\ell^m(\theta, \varphi)$) are chosen instead. In that case, one needs to expand the solution of known regions in Laurent series (about $r = \infty$), instead of the Taylor series (about $r = 0$) used above, to match the terms and find series expansion coefficients $f_\ell^m$.
Orbital angular momentum
In quantum mechanics, Laplace's spherical harmonics are understood in terms of the orbital angular momentum
$$\mathbf L = -i\hbar\, (\mathbf x \times \nabla).$$
The $\hbar$ is conventional in quantum mechanics; it is convenient to work in units in which $\hbar = 1$. The spherical harmonics are eigenfunctions of the square of the orbital angular momentum
$$\mathbf L^2 = -r^2 \nabla^2 + \left( r\frac{\partial}{\partial r} + 1 \right) r\frac{\partial}{\partial r}.$$
Laplace's spherical harmonics are the joint eigenfunctions of the square of the orbital angular momentum and the generator of rotations about the azimuthal axis:
$$\mathbf L^2\, Y_\ell^m = \ell(\ell + 1)\, Y_\ell^m, \qquad L_z\, Y_\ell^m = m\, Y_\ell^m, \qquad L_z = -i\frac{\partial}{\partial \varphi}.$$
These operators commute, and are densely defined self-adjoint operators on the weighted Hilbert space of functions f square-integrable with respect to the normal distribution as the weight function on R3:
Furthermore, L2 is a positive operator.
If is a joint eigenfunction of and , then by definition
for some real numbers m and λ. Here m must in fact be an integer, for Y must be periodic in the coordinate φ with period a number that evenly divides 2π. Furthermore, since
and each of Lx, Ly, Lz are self-adjoint, it follows that .
Denote this joint eigenspace by , and define the raising and lowering operators by
Then and commute with , and the Lie algebra generated by , , is the special linear Lie algebra of order 2, , with commutation relations
Thus (it is a "raising operator") and (it is a "lowering operator"). In particular, must be zero for k sufficiently large, because the inequality must hold in each of the nontrivial joint eigenspaces. Let be a nonzero joint eigenfunction, and let be the least integer such that
Then, since
it follows that
Thus for the positive integer .
The foregoing has been all worked out in the spherical coordinate representation, but may be expressed more abstractly in the complete, orthonormal spherical ket basis.
Harmonic polynomial representation
The spherical harmonics can be expressed as the restriction to the unit sphere of certain polynomial functions $\mathbb R^3 \to \mathbb C$. Specifically, we say that a (complex-valued) polynomial function $p : \mathbb R^3 \to \mathbb C$ is homogeneous of degree $\ell$ if
$$p(\lambda \mathbf x) = \lambda^\ell\, p(\mathbf x)$$
for all real numbers $\lambda \in \mathbb R$ and all $\mathbf x \in \mathbb R^3$. We say that $p$ is harmonic if
$$\Delta p = 0,$$
where $\Delta$ is the Laplacian. Then for each $\ell$, we define
$$\mathbf A_\ell := \{ \text{harmonic polynomials } \mathbb R^3 \to \mathbb C \text{ that are homogeneous of degree } \ell \}.$$
For example, when $\ell = 1$, $\mathbf A_1$ is just the 3-dimensional space of all linear functions $(x, y, z) \mapsto a x + b y + c z$, since any such function is automatically harmonic. Meanwhile, when $\ell = 2$, we have a 5-dimensional space:
$$\mathbf A_2 = \operatorname{span}_{\mathbb C}(x y,\; x z,\; y z,\; x^2 - y^2,\; x^2 - z^2).$$
For any $\ell$, the space $\mathbf H_\ell$ of spherical harmonics of degree $\ell$ is just the space of restrictions to the sphere $S^2$ of the elements of $\mathbf A_\ell$. As suggested in the introduction, this perspective is presumably the origin of the term "spherical harmonic" (i.e., the restriction to the sphere of a harmonic function).
For example, for any $c \in \mathbb C$ the formula
$$p(x, y, z) = c\, (x + i y)^\ell$$
defines a homogeneous polynomial of degree $\ell$ with domain $\mathbb R^3$ and codomain $\mathbb C$, which happens to be independent of $z$. This polynomial is easily seen to be harmonic. If we write $p$ in spherical coordinates $(r, \theta, \varphi)$ and then restrict to $r = 1$, we obtain
$$p(\theta, \varphi) = c\, \sin^\ell\theta\, (\cos\varphi + i \sin\varphi)^\ell,$$
which can be rewritten as
$$p(\theta, \varphi) = c\, \left( \sqrt{1 - \cos^2\theta} \right)^{\ell}\, e^{i\ell\varphi}.$$
After using the formula for the associated Legendre polynomial $P_\ell^\ell$, we may recognize this as the formula for the spherical harmonic $Y_\ell^\ell(\theta, \varphi)$. (See Special cases.)
Conventions
Orthogonality and normalization
Several different normalizations are in common use for the Laplace spherical harmonic functions $Y_\ell^m$. Throughout the section, we use the standard convention that for $m > 0$ (see associated Legendre polynomials)
$$P_\ell^{-m} = (-1)^m\, \frac{(\ell - m)!}{(\ell + m)!}\, P_\ell^{m},$$
which is the natural normalization given by Rodrigues' formula.
In acoustics, the Laplace spherical harmonics are generally defined as (this is the convention used in this article)
$$Y_\ell^m(\theta, \varphi) = \sqrt{\frac{2\ell + 1}{4\pi}\, \frac{(\ell - m)!}{(\ell + m)!}}\; P_\ell^m(\cos\theta)\; e^{i m \varphi},$$
while in quantum mechanics:
$$Y_\ell^m(\theta, \varphi) = (-1)^m\, \sqrt{\frac{2\ell + 1}{4\pi}\, \frac{(\ell - m)!}{(\ell + m)!}}\; P_\ell^m(\cos\theta)\; e^{i m \varphi},$$
where $P_\ell^m$ are associated Legendre polynomials without the Condon–Shortley phase (to avoid counting the phase twice).
In both definitions, the spherical harmonics are orthonormal
$$\int_{\theta=0}^{\pi} \int_{\varphi=0}^{2\pi} Y_\ell^m\, \overline{Y_{\ell'}^{m'}}\; \sin\theta\, d\varphi\, d\theta = \delta_{\ell\ell'}\, \delta_{mm'},$$
where $\delta_{ij}$ is the Kronecker delta. This normalization is used in quantum mechanics because it ensures that probability is normalized, i.e.,
$$\int |Y_\ell^m|^2\, d\Omega = 1, \qquad d\Omega = \sin\theta\, d\varphi\, d\theta.$$
The disciplines of geodesy and spectral analysis use
$$Y_\ell^m(\theta, \varphi) = \sqrt{(2\ell + 1)\, \frac{(\ell - m)!}{(\ell + m)!}}\; P_\ell^m(\cos\theta)\; e^{i m \varphi},$$
which possess unit power
$$\frac{1}{4\pi} \int_{\theta=0}^{\pi} \int_{\varphi=0}^{2\pi} Y_\ell^m\, \overline{Y_{\ell'}^{m'}}\; \sin\theta\, d\varphi\, d\theta = \delta_{\ell\ell'}\, \delta_{mm'}.$$
The magnetics community, in contrast, uses Schmidt semi-normalized harmonics
$$Y_\ell^m(\theta, \varphi) = \sqrt{\frac{(\ell - m)!}{(\ell + m)!}}\; P_\ell^m(\cos\theta)\; e^{i m \varphi},$$
which have the normalization
$$\int_{\theta=0}^{\pi} \int_{\varphi=0}^{2\pi} Y_\ell^m\, \overline{Y_{\ell'}^{m'}}\; \sin\theta\, d\varphi\, d\theta = \frac{4\pi}{2\ell + 1}\, \delta_{\ell\ell'}\, \delta_{mm'}.$$
In quantum mechanics this normalization is sometimes used as well, and is named Racah's normalization after Giulio Racah.
It can be shown that all of the above normalized spherical harmonic functions satisfy
$$Y_\ell^{m*}(\theta, \varphi) = (-1)^m\, Y_\ell^{-m}(\theta, \varphi),$$
where the superscript $*$ denotes complex conjugation. Alternatively, this equation follows from the relation of the spherical harmonic functions with the Wigner D-matrix.
Condon–Shortley phase
One source of confusion with the definition of the spherical harmonic functions concerns a phase factor of , commonly referred to as the Condon–Shortley phase in the quantum mechanical literature. In the quantum mechanics community, it is common practice to either include this phase factor in the definition of the associated Legendre polynomials, or to append it to the definition of the spherical harmonic functions. There is no requirement to use the Condon–Shortley phase in the definition of the spherical harmonic functions, but including it can simplify some quantum mechanical operations, especially the application of raising and lowering operators. The geodesy and magnetics communities never include the Condon–Shortley phase factor in their definitions of the spherical harmonic functions nor in the ones of the associated Legendre polynomials.
Real form
A real basis of spherical harmonics $Y_{\ell m} : S^2 \to \mathbb R$ can be defined in terms of their complex analogues $Y_\ell^m : S^2 \to \mathbb C$ by setting
$$Y_{\ell m} = \begin{cases} \dfrac{i}{\sqrt 2}\left( Y_\ell^{m} - (-1)^m\, Y_\ell^{-m} \right) & \text{if } m < 0 \\[4pt] Y_\ell^0 & \text{if } m = 0 \\[4pt] \dfrac{1}{\sqrt 2}\left( Y_\ell^{-m} + (-1)^m\, Y_\ell^{m} \right) & \text{if } m > 0. \end{cases}$$
The Condon–Shortley phase convention is used here for consistency. The corresponding inverse equations defining the complex spherical harmonics in terms of the real spherical harmonics are
The real spherical harmonics are sometimes known as tesseral spherical harmonics. These functions have the same orthonormality properties as the complex ones above. The real spherical harmonics with are said to be of cosine type, and those with of sine type. The reason for this can be seen by writing the functions in terms of the Legendre polynomials as
The same sine and cosine factors can be also seen in the following subsection that deals with the Cartesian representation.
See here for a list of real spherical harmonics up to and including , which can be seen to be consistent with the output of the equations above.
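A sketch of this real-form construction in code (assuming SciPy's complex harmonics, which include the Condon–Shortley phase; the helper name is ours):

    import numpy as np
    from scipy.special import sph_harm   # sph_harm(m, l, azimuth, colatitude)

    def real_sph_harm(m, l, phi, theta):
        # Real harmonics from the complex ones, per the definition above.
        if m > 0:
            return np.sqrt(2) * (-1) ** m * sph_harm(m, l, phi, theta).real
        if m < 0:
            return np.sqrt(2) * (-1) ** m * sph_harm(-m, l, phi, theta).imag
        return sph_harm(0, l, phi, theta).real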
Use in quantum chemistry
As is known from the analytic solutions for the hydrogen atom, the eigenfunctions of the angular part of the wave function are spherical harmonics. However, the solutions of the non-relativistic Schrödinger equation without magnetic terms can be made real. This is why the real forms are extensively used in basis functions for quantum chemistry, as the programs don't then need to use complex algebra. Here, the real functions span the same space as the complex ones would.
For example, as can be seen from the table of spherical harmonics, the usual p functions ($\ell = 1$) are complex and mix axis directions, but the real versions are essentially just $x$, $y$, and $z$.
Spherical harmonics in Cartesian form
The complex spherical harmonics give rise to the solid harmonics by extending from to all of as a homogeneous function of degree , i.e. setting
It turns out that is basis of the space of harmonic and homogeneous polynomials of degree . More specifically, it is the (unique up to normalization) Gelfand-Tsetlin-basis of this representation of the rotational group and an explicit formula for in cartesian coordinates can be derived from that fact.
The Herglotz generating function
If the quantum mechanical convention is adopted for the , then
Here, is the vector with components , , and
is a vector with complex coordinates:
The essential property of is that it is null:
It suffices to take and as real parameters.
In naming this generating function after Herglotz, we follow , who credit unpublished notes by him for its discovery.
Essentially all the properties of the spherical harmonics can be derived from this generating function. An immediate benefit of this definition is that if the vector is replaced by the quantum mechanical spin vector operator , such that is the operator analogue of the solid harmonic , one obtains a generating function for a standardized set of spherical tensor operators, :
The parallelism of the two definitions ensures that the 's transform under rotations (see below) in the same way as the 's, which in turn guarantees that they are spherical tensor operators, , with and , obeying all the properties of such operators, such as the Clebsch-Gordan composition theorem, and the Wigner-Eckart theorem. They are, moreover, a standardized set with a fixed scale or normalization.
Separated Cartesian form
The Herglotzian definition yields polynomials which may, if one wishes, be further factorized into a polynomial of and another of and , as follows (Condon–Shortley phase):
and for :
Here
and
For this reduces to
The factor is essentially the associated Legendre polynomial , and the factors are essentially .
Examples
Using the expressions for , , and listed explicitly above we obtain:
It may be verified that this agrees with the function listed here and here.
Real forms
Using the equations above to form the real spherical harmonics, it is seen that for only the terms (cosines) are included, and for only the terms (sines) are included:
and for m = 0:
Special cases and values
When $m = 0$, the spherical harmonics reduce to the ordinary Legendre polynomials:
$$Y_\ell^0(\theta, \varphi) = \sqrt{\frac{2\ell + 1}{4\pi}}\, P_\ell(\cos\theta).$$
When $m = \pm\ell$,
$$Y_\ell^{\pm\ell}(\theta, \varphi) = \frac{(\mp 1)^\ell}{2^\ell\, \ell!}\, \sqrt{\frac{(2\ell + 1)!}{4\pi}}\, \sin^\ell\theta\, e^{\pm i \ell \varphi},$$
or more simply in Cartesian coordinates,
$$r^\ell\, Y_\ell^{\pm\ell}(\theta, \varphi) = \frac{(\mp 1)^\ell}{2^\ell\, \ell!}\, \sqrt{\frac{(2\ell + 1)!}{4\pi}}\, (x \pm i y)^\ell.$$
At the north pole, where $\theta = 0$, and $\varphi$ is undefined, all spherical harmonics except those with $m = 0$ vanish:
$$Y_\ell^m(0, \varphi) = \sqrt{\frac{2\ell + 1}{4\pi}}\, \delta_{m 0}.$$
Symmetry properties
The spherical harmonics have deep and consequential properties under the operations of spatial inversion (parity) and rotation.
Parity
The spherical harmonics have definite parity. That is, they are either even or odd with respect to inversion about the origin. Inversion is represented by the operator $P\Psi(\mathbf r) = \Psi(-\mathbf r)$. Then, as can be seen in many ways (perhaps most simply from the Herglotz generating function), with $\hat{\mathbf r}$ being a unit vector,
$$Y_\ell^m(-\hat{\mathbf r}) = (-1)^\ell\, Y_\ell^m(\hat{\mathbf r}).$$
In terms of the spherical angles, parity transforms a point with coordinates $(\theta, \varphi)$ to $(\pi - \theta, \pi + \varphi)$. The statement of the parity of spherical harmonics is then
$$Y_\ell^m(\pi - \theta, \pi + \varphi) = (-1)^\ell\, Y_\ell^m(\theta, \varphi).$$
(This can be seen as follows: The associated Legendre polynomials give $(-1)^{\ell + m}$ and from the exponential function we have $(-1)^m$, giving together for the spherical harmonics a parity of $(-1)^\ell$.)
Parity continues to hold for real spherical harmonics, and for spherical harmonics in higher dimensions: applying a point reflection to a spherical harmonic of degree changes the sign by a factor of .
Rotations
Consider a rotation $\mathcal R$ about the origin that sends the unit vector $\mathbf r$ to $\mathbf r'$. Under this operation, a spherical harmonic of degree $\ell$ and order $m$ transforms into a linear combination of spherical harmonics of the same degree. That is,
$$Y_\ell^m(\mathbf r') = \sum_{m' = -\ell}^{\ell} A_{m m'}\, Y_\ell^{m'}(\mathbf r),$$
where $A_{m m'}$ is a matrix of order $(2\ell + 1)$ that depends on the rotation $\mathcal R$. However, this is not the standard way of expressing this property. In the standard way one writes,
$$Y_\ell^m(\mathbf r') = \sum_{m' = -\ell}^{\ell} \left[ D^{(\ell)}_{m m'}(\mathcal R) \right]^{*}\, Y_\ell^{m'}(\mathbf r),$$
where $D^{(\ell)}_{m m'}(\mathcal R)^{*}$ is the complex conjugate of an element of the Wigner D-matrix. In particular when $\mathbf r'$ is a rotation of the azimuth we get the identity,
where is the complex conjugate of an element of the Wigner D-matrix. In particular when is a rotation of the azimuth we get the identity,
The rotational behavior of the spherical harmonics is perhaps their quintessential feature from the viewpoint of group theory. The $Y_\ell^m$'s of degree $\ell$ provide a basis set of functions for the irreducible representation of the group SO(3) of dimension $2\ell + 1$. Many facts about spherical harmonics (such as the addition theorem) that are proved laboriously using the methods of analysis acquire simpler proofs and deeper significance using the methods of symmetry.
Spherical harmonics expansion
The Laplace spherical harmonics form a complete set of orthonormal functions and thus form an orthonormal basis of the Hilbert space of square-integrable functions $L^2(S^2)$. On the unit sphere $S^2$, any square-integrable function $f : S^2 \to \mathbb C$ can thus be expanded as a linear combination of these:
$$f(\theta, \varphi) = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} f_\ell^m\, Y_\ell^m(\theta, \varphi).$$
This expansion holds in the sense of mean-square convergence — convergence in L2 of the sphere — which is to say that
$$\lim_{N \to \infty} \int_0^{2\pi} \int_0^{\pi} \left| f(\theta, \varphi) - \sum_{\ell=0}^{N} \sum_{m=-\ell}^{\ell} f_\ell^m\, Y_\ell^m(\theta, \varphi) \right|^2 \sin\theta\, d\theta\, d\varphi = 0.$$
The expansion coefficients are the analogs of Fourier coefficients, and can be obtained by multiplying the above equation by the complex conjugate of a spherical harmonic, integrating over the solid angle Ω, and utilizing the above orthogonality relationships. This is justified rigorously by basic Hilbert space theory. For the case of orthonormalized harmonics, this gives:
$$f_\ell^m = \int_{\Omega} f(\theta, \varphi)\, \overline{Y_\ell^m(\theta, \varphi)}\, d\Omega = \int_0^{2\pi} d\varphi \int_0^{\pi} d\theta\, \sin\theta\, f(\theta, \varphi)\, \overline{Y_\ell^m(\theta, \varphi)}.$$
If the coefficients decay in ℓ sufficiently rapidly — for instance, exponentially — then the series also converges uniformly to f.
A square-integrable function $f : S^2 \to \mathbb R$ can also be expanded in terms of the real harmonics $Y_{\ell m}$ above as a sum
$$f(\theta, \varphi) = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} f_{\ell m}\, Y_{\ell m}(\theta, \varphi).$$
The convergence of the series holds again in the same sense, namely the real spherical harmonics form a complete set of orthonormal functions and thus form an orthonormal basis of the Hilbert space of square-integrable functions . The benefit of the expansion in terms of the real harmonic functions is that for real functions the expansion coefficients are guaranteed to be real, whereas their coefficients in their expansion in terms of the (considering them as functions ) do not have that property.
Spectrum analysis
Power spectrum in signal processing
The total power of a function f is defined in the signal processing literature as the integral of the function squared, divided by the area of its domain. Using the orthonormality properties of the real unit-power spherical harmonic functions, it is straightforward to verify that the total power of a function defined on the unit sphere is related to its spectral coefficients by a generalization of Parseval's theorem (here, the theorem is stated for Schmidt semi-normalized harmonics, the relationship is slightly different for orthonormal harmonics):
where
is defined as the angular power spectrum (for Schmidt semi-normalized harmonics). In a similar manner, one can define the cross-power of two functions as
where
is defined as the cross-power spectrum. If the functions $f$ and $g$ have a zero mean (i.e., the spectral coefficients $f_{00}$ and $g_{00}$ are zero), then $S_{ff}(\ell)$ and $S_{fg}(\ell)$ represent the contributions to the function's variance and covariance for degree $\ell$, respectively. It is common that the (cross-)power spectrum is well approximated by a power law of the form
$$S_{ff}(\ell) = C\, \ell^{\beta}.$$
When $\beta = 0$, the spectrum is "white" as each degree possesses equal power. When $\beta < 0$, the spectrum is termed "red" as there is more power at the low degrees with long wavelengths than higher degrees. Finally, when $\beta > 0$, the spectrum is termed "blue". The condition on the order of growth of $S_{ff}(\ell)$ is related to the order of differentiability of $f$ in the next section.
Differentiability properties
One can also understand the differentiability properties of the original function in terms of the asymptotics of . In particular, if decays faster than any rational function of as , then is infinitely differentiable. If, furthermore, decays exponentially, then is actually real analytic on the sphere.
The general technique is to use the theory of Sobolev spaces. Statements relating the growth of the to differentiability are then similar to analogous results on the growth of the coefficients of Fourier series. Specifically, if
then is in the Sobolev space . In particular, the Sobolev embedding theorem implies that is infinitely differentiable provided that
for all .
Algebraic properties
Addition theorem
A mathematical result of considerable interest and use is called the addition theorem for spherical harmonics. Given two vectors $\mathbf x$ and $\mathbf y$, with spherical coordinates $(\theta, \varphi)$ and $(\theta', \varphi')$, respectively, the angle $\gamma$ between them is given by the relation
$$\cos\gamma = \cos\theta' \cos\theta + \sin\theta \sin\theta' \cos(\varphi - \varphi'),$$
in which the role of the trigonometric functions appearing on the right-hand side is played by the spherical harmonics and that of the left-hand side is played by the Legendre polynomials.
The addition theorem states
$$P_\ell(\mathbf x \cdot \mathbf y) = \frac{4\pi}{2\ell + 1} \sum_{m=-\ell}^{\ell} Y_\ell^m(\mathbf y)\, \overline{Y_\ell^m(\mathbf x)},$$
where $P_\ell$ is the Legendre polynomial of degree $\ell$. This expression is valid for both real and complex harmonics. The result can be proven analytically, using the properties of the Poisson kernel in the unit ball, or geometrically by applying a rotation to the vector y so that it points along the z-axis, and then directly calculating the right-hand side.
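A numerical spot-check of the addition theorem (illustrative; the two directions are arbitrary):

    import numpy as np
    from scipy.special import sph_harm, eval_legendre

    l = 3
    theta1, phi1 = 0.7, 1.1      # direction x: (colatitude, azimuth)
    theta2, phi2 = 2.0, 2.9      # direction y

    def unit(theta, phi):
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    cos_gamma = unit(theta1, phi1) @ unit(theta2, phi2)
    s = sum(sph_harm(m, l, phi2, theta2) * np.conj(sph_harm(m, l, phi1, theta1))
            for m in range(-l, l + 1))
    assert np.isclose(eval_legendre(l, cos_gamma),
                      (4 * np.pi / (2 * l + 1)) * s.real)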
In particular, when $\mathbf x = \mathbf y$, this gives Unsöld's theorem
$$\sum_{m=-\ell}^{\ell} \left| Y_\ell^m(\theta, \varphi) \right|^2 = \frac{2\ell + 1}{4\pi},$$
which generalizes the identity $\cos^2\theta + \sin^2\theta = 1$ to two dimensions.
In the expansion (), the left-hand side is a constant multiple of the degree zonal spherical harmonic. From this perspective, one has the following generalization to higher dimensions. Let be an arbitrary orthonormal basis of the space of degree spherical harmonics on the -sphere. Then , the degree zonal harmonic corresponding to the unit vector , decomposes as
Furthermore, the zonal harmonic is given as a constant multiple of the appropriate Gegenbauer polynomial:
Combining () and () gives () in dimension when and are represented in spherical coordinates. Finally, evaluating at gives the functional identity
where is the volume of the (n−1)-sphere.
Contraction rule
Another useful identity expresses the product of two spherical harmonics as a sum over spherical harmonics:

$$Y_{\ell_1 m_1}(\theta,\varphi) \, Y_{\ell_2 m_2}(\theta,\varphi) = \sum_{\ell, m} (-1)^m \sqrt{\frac{(2\ell_1+1)(2\ell_2+1)(2\ell+1)}{4\pi}} \begin{pmatrix} \ell_1 & \ell_2 & \ell \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \ell_1 & \ell_2 & \ell \\ m_1 & m_2 & -m \end{pmatrix} Y_{\ell m}(\theta,\varphi).$$

Many of the terms in this sum are trivially zero. The values of ℓ and m that result in non-zero terms in this sum are determined by the selection rules for the 3j-symbols.
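The selection rules can be made concrete with SymPy's exact `wigner_3j` (a sketch; the overall normalization factor is omitted since only the vanishing pattern is of interest here):

```python
from sympy.physics.wigner import wigner_3j

l1, m1 = 2, 1
l2, m2 = 3, -1
m = m1 + m2  # the azimuthal selection rule fixes m

# The triangle rule restricts l to |l1 - l2| <= l <= l1 + l2, and the
# first 3j symbol vanishes unless l1 + l2 + l is even.
for l in range(abs(l1 - l2), l1 + l2 + 1):
    coeff = wigner_3j(l1, l2, l, 0, 0, 0) * wigner_3j(l1, l2, l, m1, m2, -m)
    print(l, coeff)  # nonzero only for even l1 + l2 + l
```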
Clebsch–Gordan coefficients
The Clebsch–Gordan coefficients are the coefficients appearing in the expansion of the product of two spherical harmonics in terms of spherical harmonics themselves. A variety of techniques are available for doing essentially the same calculation, including the Wigner 3-jm symbol, the Racah coefficients, and the Slater integrals. Abstractly, the Clebsch–Gordan coefficients express the tensor product of two irreducible representations of the rotation group as a sum of irreducible representations: suitably normalized, the coefficients are then the multiplicities.
Visualization of the spherical harmonics
The Laplace spherical harmonics Y_ℓ^m can be visualized by considering their "nodal lines", that is, the set of points on the sphere where Re[Y_ℓ^m] = 0, or alternatively where Im[Y_ℓ^m] = 0. Nodal lines of Y_ℓ^m are composed of ℓ circles: there are |m| circles along longitudes and ℓ−|m| circles along latitudes. One can determine the number of nodal lines of each type by counting the number of zeros of Y_ℓ^m in the θ and φ directions respectively. Considering Y_ℓ^m as a function of θ, the real and imaginary components of the associated Legendre polynomials each possess ℓ−|m| zeros, each giving rise to a nodal 'line of latitude'. On the other hand, considering Y_ℓ^m as a function of φ, the trigonometric sin and cos functions possess 2|m| zeros, each of which gives rise to a nodal 'line of longitude'.
When the spherical harmonic order m is zero (upper-left in the figure), the spherical harmonic functions do not depend upon longitude, and are referred to as zonal. Such spherical harmonics are a special case of zonal spherical functions. When ℓ = |m| (bottom-right in the figure), there are no zero crossings in latitude, and the functions are referred to as sectoral. For the other cases, the functions checker the sphere, and they are referred to as tesseral.
More general spherical harmonics of degree ℓ are not necessarily those of the Laplace basis Y_ℓ^m, and their nodal sets can be of a fairly general kind.
List of spherical harmonics
Analytic expressions for the first few orthonormalized Laplace spherical harmonics that use the Condon–Shortley phase convention:

$$Y_0^0(\theta,\varphi) = \frac{1}{2}\sqrt{\frac{1}{\pi}}$$

$$Y_1^{-1}(\theta,\varphi) = \frac{1}{2}\sqrt{\frac{3}{2\pi}} \, \sin\theta \, e^{-i\varphi}, \qquad Y_1^0(\theta,\varphi) = \frac{1}{2}\sqrt{\frac{3}{\pi}} \, \cos\theta, \qquad Y_1^1(\theta,\varphi) = -\frac{1}{2}\sqrt{\frac{3}{2\pi}} \, \sin\theta \, e^{i\varphi}$$

$$Y_2^0(\theta,\varphi) = \frac{1}{4}\sqrt{\frac{5}{\pi}} \, (3\cos^2\theta - 1)$$
Higher dimensions
The classical spherical harmonics are defined as complex-valued functions on the unit sphere S² inside three-dimensional Euclidean space R³. Spherical harmonics can be generalized to higher-dimensional Euclidean space R^n as follows, leading to functions f : S^{n−1} → C. Let P_ℓ denote the space of complex-valued homogeneous polynomials of degree ℓ in n real variables, here considered as functions R^n → C. That is, a polynomial p is in P_ℓ provided that for any real λ, one has

$$p(\lambda \mathbf{x}) = \lambda^\ell \, p(\mathbf{x}).$$

Let A_ℓ denote the subspace of P_ℓ consisting of all harmonic polynomials:

$$\mathbf{A}_\ell := \{ p \in \mathbf{P}_\ell : \Delta p = 0 \}.$$

These are the (regular) solid spherical harmonics. Let H_ℓ denote the space of functions on the unit sphere

$$S^{n-1} := \{ \mathbf{x} \in \mathbb{R}^n : |\mathbf{x}| = 1 \}$$

obtained by restriction from A_ℓ:

$$\mathbf{H}_\ell := \{ f : S^{n-1} \to \mathbb{C} \mid f = p|_{S^{n-1}} \text{ for some } p \in \mathbf{A}_\ell \}.$$
The following properties hold:
The sum of the spaces H_ℓ is dense in the set C(S^{n−1}) of continuous functions on S^{n−1} with respect to the uniform topology, by the Stone–Weierstrass theorem. As a result, the sum of these spaces is also dense in the space L²(S^{n−1}) of square-integrable functions on the sphere. Thus every square-integrable function on the sphere decomposes uniquely into a series of spherical harmonics, where the series converges in the L² sense.
For all f ∈ H_ℓ, one has Δ_{S^{n−1}} f = −ℓ(ℓ + n − 2) f, where Δ_{S^{n−1}} is the Laplace–Beltrami operator on S^{n−1}. This operator is the analog of the angular part of the Laplacian in three dimensions; to wit, the Laplacian in n dimensions decomposes as

$$\Delta = r^{1-n} \frac{\partial}{\partial r}\left(r^{n-1} \frac{\partial}{\partial r}\right) + r^{-2} \Delta_{S^{n-1}}.$$

It follows from the Stokes theorem and the preceding property that the spaces H_ℓ are orthogonal with respect to the inner product from L²(S^{n−1}). That is to say, ⟨f, g⟩_{L²} = 0 for f ∈ H_ℓ and g ∈ H_k when ℓ ≠ k.
Conversely, the spaces H_ℓ are precisely the eigenspaces of Δ_{S^{n−1}}. In particular, an application of the spectral theorem to the Riesz potential gives another proof that the spaces H_ℓ are pairwise orthogonal and complete in L²(S^{n−1}).
Every homogeneous polynomial p ∈ P_ℓ can be uniquely written in the form

$$p(\mathbf{x}) = p_\ell(\mathbf{x}) + |\mathbf{x}|^2 p_{\ell-2}(\mathbf{x}) + \cdots$$

where p_j ∈ A_j. In particular,

$$\dim \mathbf{H}_\ell = \binom{n+\ell-1}{n-1} - \binom{n+\ell-3}{n-1}.$$
An orthogonal basis of spherical harmonics in higher dimensions can be constructed inductively by the method of separation of variables, by solving the Sturm–Liouville problem for the spherical Laplacian

$$\Delta_{S^{n-1}} = \sin^{2-n}\varphi \, \frac{\partial}{\partial\varphi}\left(\sin^{n-2}\varphi \, \frac{\partial}{\partial\varphi}\right) + \sin^{-2}\varphi \, \Delta_{S^{n-2}},$$

where φ is the axial coordinate in a spherical coordinate system on S^{n−1}. The end result of such a procedure is

$$Y_{\ell_1,\dots,\ell_{n-1}}(\theta_1,\dots,\theta_{n-1}) = \frac{1}{\sqrt{2\pi}} \, e^{i\ell_1\theta_1} \prod_{j=2}^{n-1} {}_{j}\bar{P}^{\ell_{j-1}}_{\ell_j}(\theta_j),$$

where the indices satisfy |ℓ_1| ≤ ℓ_2 ≤ ⋯ ≤ ℓ_{n−1} and the eigenvalue is −ℓ_{n−1}(ℓ_{n−1} + n − 2). The functions in the product are defined in terms of the Legendre function.
Connection with representation theory
The space H_ℓ of spherical harmonics of degree ℓ is a representation of the symmetry group of rotations around a point (SO(3)) and its double-cover SU(2). Indeed, rotations act on the two-dimensional sphere, and thus also on H_ℓ by function composition

$$\psi \mapsto \psi \circ \rho^{-1}$$

for ψ a spherical harmonic and ρ a rotation. The representation H_ℓ is an irreducible representation of SO(3).
The elements of H_ℓ arise as the restrictions to the sphere of elements of A_ℓ: harmonic polynomials homogeneous of degree ℓ on three-dimensional Euclidean space R³. By polarization of ψ ∈ A_ℓ, there are coefficients ψ_{i_1 ⋯ i_ℓ} symmetric on the indices, uniquely determined by the requirement

$$\psi(x_1, \ldots, x_n) = \sum_{i_1 \cdots i_\ell} \psi_{i_1 \cdots i_\ell} \, x_{i_1} \cdots x_{i_\ell}.$$

The condition that ψ be harmonic is equivalent to the assertion that the tensor ψ_{i_1 ⋯ i_ℓ} must be trace free on every pair of indices. Thus as an irreducible representation of SO(3), H_ℓ is isomorphic to the space of traceless symmetric tensors of degree ℓ.
More generally, the analogous statements hold in higher dimensions: the space H_ℓ of spherical harmonics on the n-sphere is the irreducible representation of SO(n+1) corresponding to the traceless symmetric ℓ-tensors. However, whereas every irreducible tensor representation of SO(2) and SO(3) is of this kind, the special orthogonal groups in higher dimensions have additional irreducible representations that do not arise in this manner.
The special orthogonal groups have additional spin representations that are not tensor representations, and are typically not spherical harmonics. An exception is the spin representations of SO(3): strictly speaking, these are representations of the double cover SU(2) of SO(3). In turn, SU(2) is identified with the group of unit quaternions, and so coincides with the 3-sphere. The spaces of spherical harmonics on the 3-sphere are certain spin representations of SO(3), with respect to the action by quaternionic multiplication.
Connection with hemispherical harmonics
Spherical harmonics can be separated into two sets of functions. One is the hemispherical harmonics (HSH), which are orthogonal and complete on the hemisphere. The other is the complementary hemispherical harmonics (CHSH).
Generalizations
The angle-preserving symmetries of the two-sphere are described by the group of Möbius transformations PSL(2,C). With respect to this group, the sphere is equivalent to the usual Riemann sphere. The group PSL(2,C) is isomorphic to the (proper) Lorentz group, and its action on the two-sphere agrees with the action of the Lorentz group on the celestial sphere in Minkowski space. The analog of the spherical harmonics for the Lorentz group is given by the hypergeometric series; furthermore, the spherical harmonics can be re-expressed in terms of the hypergeometric series, as SO(3) is a subgroup of PSL(2,C).
More generally, hypergeometric series can be generalized to describe the symmetries of any symmetric space; in particular, hypergeometric series can be developed for any Lie group.
See also
Cubic harmonic (often used instead of spherical harmonics in computations)
Cylindrical harmonics
Spherical basis
Spinor spherical harmonics
Spin-weighted spherical harmonics
Sturm–Liouville theory
Table of spherical harmonics
Vector spherical harmonics
Zernike polynomials
Jacobi polynomials
Atomic orbital
Notes
References
General references
E.W. Hobson, The Theory of Spherical and Ellipsoidal Harmonics, (1955) Chelsea Pub. Co., .
C. Müller, Spherical Harmonics, (1966) Springer, Lecture Notes in Mathematics, Vol. 17, .
E. U. Condon and G. H. Shortley, The Theory of Atomic Spectra, (1970) Cambridge at the University Press, , See chapter 3.
J.D. Jackson, Classical Electrodynamics.
Albert Messiah, Quantum Mechanics, volume II. (2000) Dover. .
D. A. Varshalovich, A. N. Moskalev, V. K. Khersonskii, Quantum Theory of Angular Momentum, (1988) World Scientific Publishing Co., Singapore.
External links
Spherical Harmonics at MathWorld
Spherical Harmonics 3D representation
Atomic physics
Fourier analysis
Harmonic analysis
Partial differential equations
Rotational symmetry
Special hypergeometric functions
Spherical geometry | Spherical harmonics | [
"Physics",
"Chemistry"
] | 6,963 | [
"Quantum mechanics",
"Rotational symmetry",
"Atomic physics",
"Atomic, molecular, and optical physics",
"Symmetry"
] |
203,174 | https://en.wikipedia.org/wiki/Perfect%20gas | In physics, engineering, and physical chemistry, a perfect gas is a theoretical gas model that differs from real gases in specific ways that make certain calculations easier to handle. In all perfect gas models, intermolecular forces are neglected. This means that one can neglect many complications that may arise from van der Waals forces. All perfect gas models are ideal gas models in the sense that they all follow the ideal gas equation of state. However, the idea of a perfect gas model is often invoked as a combination of the ideal gas equation of state with specific additional assumptions regarding the variation (or nonvariation) of the heat capacity with temperature.
Perfect gas nomenclature
The terms perfect gas and ideal gas are sometimes used interchangeably, depending on the particular field of physics and engineering. Sometimes, other distinctions are made, such as between thermally perfect gas and calorically perfect gas, or between imperfect, semi-perfect, and perfect gases, as well as the characteristics of ideal gases. Two of the common sets of nomenclatures are summarized in the following table.
Thermally and calorically perfect gas
Along with the definition of a perfect gas, there are also two more simplifications that can be made, although various textbooks either omit or combine the following simplifications into a general "perfect gas" definition.
For a fixed number of moles of gas n, a thermally perfect gas
is in thermodynamic equilibrium
is not chemically reacting
has internal energy u, enthalpy h, and constant volume / constant pressure heat capacities c_v, c_p that are solely functions of temperature and not of pressure p or volume v, i.e., u = u(T), h = h(T), du = c_v dT, dh = c_p dT. These latter expressions hold for all tiny property changes and are not restricted to constant-v or constant-p variations.
A calorically perfect gas
is in thermodynamic equilibrium
is not chemically reacting
has internal energy u, and enthalpy h that are functions of temperature only, i.e., u = c_v T, h = c_p T
has heat capacities c_v, c_p that are constant, i.e., Δu = c_v ΔT and Δh = c_p ΔT, where Δ is any finite (non-differential) change in each quantity.
It can be easily shown that an ideal gas (i.e., one satisfying the ideal gas equation of state, PV = nRT) is either calorically perfect or thermally perfect. This is because the internal energy of an ideal gas is at most a function of temperature, as shown by the thermodynamic equation

$$\left( \frac{\partial U}{\partial V} \right)_T = T \left( \frac{\partial P}{\partial T} \right)_V - P,$$

which is exactly zero when PV = nRT. Since U is independent of volume, it must be true that for fixed n, U and H are at most functions of only temperature for the ideal gas equation of state. This is also true for the two heat capacities, which are temperature derivatives of U and H.
From both statistical mechanics and the simpler kinetic theory of gases, we expect the heat capacity of a monatomic ideal gas to be constant, since for such a gas only kinetic energy contributes to the internal energy, so that U = (3/2)nRT to within an arbitrary additive constant, and therefore C_V = (3/2)nR, a constant. Moreover, the classical equipartition theorem predicts that all ideal gases (even polyatomic) have constant heat capacities at all temperatures. However, it is now known from the modern theory of quantum statistical mechanics as well as from experimental data that a polyatomic ideal gas will generally have thermal contributions to its internal energy which are not linear functions of temperature. These contributions are due to contributions from the vibrational, rotational, and electronic degrees of freedom as they become populated as a function of temperature according to the Boltzmann distribution. In this situation we find that C_V = C_V(T) and C_P = C_P(T). But even if the heat capacity is strictly a function of temperature for a given gas, it might be assumed constant for purposes of calculation if the temperature and heat capacity variations are not too large, which would lead to the assumption of a calorically perfect gas (see below).
These types of approximations are useful for modeling, for example, an axial compressor where temperature fluctuations are usually not large enough to cause any significant deviations from the thermally perfect gas model. In this model the heat capacity is still allowed to vary, though only with temperature, and molecules are not permitted to dissociate. The latter generally implies that the temperature should be limited to < 2500 K. This temperature limit depends on the chemical composition of the gas and how accurate the calculations need to be, since molecular dissociation may be important at a higher or lower temperature which is intrinsically dependent on the molecular nature of the gas.
Even more restricted is the calorically perfect gas for which, in addition, the heat capacity is assumed to be constant. Although this may be the most restrictive model from a temperature perspective, it may be accurate enough to make reasonable predictions within the limits specified. For example, a comparison of calculations for one compression stage of an axial compressor (one with variable and one with constant ) may produce a deviation small enough to support this approach.
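The size of that deviation is easy to estimate. The sketch below compares the isentropic temperature rise across a single compression for a calorically perfect gas (constant c_p) against a thermally perfect gas with a mildly temperature-dependent c_p; the linear c_p(T) law here is an illustrative assumption, not measured data.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

R = 287.0          # specific gas constant of air, J/(kg K)
cp_const = 1005.0  # calorically perfect cp near room temperature, J/(kg K)

def cp(T):
    """Assumed (illustrative) mild linear growth of cp with temperature."""
    return 1005.0 + 0.05 * (T - 300.0)

T1, pressure_ratio = 300.0, 10.0

# Calorically perfect gas: T2 = T1 * pr**(R/cp).
T2_caloric = T1 * pressure_ratio ** (R / cp_const)

# Thermally perfect gas: the isentropic condition ds = cp dT/T - R dp/p = 0
# gives  integral_{T1}^{T2} cp(T)/T dT = R ln(pr); solve for T2.
def residual(T2):
    entropy_integral, _ = quad(lambda T: cp(T) / T, T1, T2)
    return entropy_integral - R * np.log(pressure_ratio)

T2_thermal = brentq(residual, T1 + 1e-6, 3.0 * T1)

print(T2_caloric, T2_thermal)  # the two models differ by only a few kelvin
```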
In addition, other factors come into play and dominate during a compression cycle if they have a greater impact on the final calculated result than whether or not was held constant. When modeling an axial compressor, examples of these real-world effects include compressor tip-clearance, separation, and boundary layer/frictional losses.
See also
Gas
Gas laws
Ideal gas
Ideal gas law
Equation of state
References
Gases | Perfect gas | [
"Physics",
"Chemistry"
] | 1,063 | [
"Statistical mechanics",
"Gases",
"Phases of matter",
"Matter"
] |
203,200 | https://en.wikipedia.org/wiki/Quark%20star | A quark star is a hypothetical type of compact, exotic star, where extremely high core temperature and pressure have forced nuclear particles to form quark matter, a continuous state of matter consisting of free quarks.
Background
Some massive stars collapse to form neutron stars at the end of their life cycle, as has been both observed and explained theoretically. Under the extreme temperatures and pressures inside neutron stars, the neutrons are normally kept apart by a degeneracy pressure, stabilizing the star and hindering further gravitational collapse. However, it is hypothesized that under even more extreme temperature and pressure, the degeneracy pressure of the neutrons is overcome, and the neutrons are forced to merge and dissolve into their constituent quarks, creating an ultra-dense phase of quark matter based on densely packed quarks. In this state, a new equilibrium is supposed to emerge, as a new degeneracy pressure between the quarks, as well as repulsive electromagnetic forces, will occur and hinder total gravitational collapse.
If these ideas are correct, quark stars might occur, and be observable, somewhere in the universe. Such a scenario is seen as scientifically plausible, but has not been proven observationally or experimentally; the very extreme conditions needed for stabilizing quark matter cannot be created in any laboratory and has not been observed directly in nature. The stability of quark matter, and hence the existence of quark stars, is for these reasons among the unsolved problems in physics.
If quark stars can form, then the most likely place to find quark star matter would be inside neutron stars that exceed the internal pressure needed for quark degeneracy – the point at which neutrons break down into a form of dense quark matter. They could also form if a massive star collapses at the end of its life, provided that it is possible for a star to be large enough to collapse beyond a neutron star but not large enough to form a black hole.
If they exist, quark stars would resemble and be easily mistaken for neutron stars: they would form in the death of a massive star in a Type II supernova, be extremely dense and small, and possess a very high gravitational field. They would also lack some features of neutron stars, unless they also contained a shell of neutron matter, because free quarks are not expected to have properties matching degenerate neutron matter. For example, they might be radio-silent, or have atypical sizes, electromagnetic fields, or surface temperatures, compared to neutron stars.
History
Quark stars were first proposed in 1965 by the Soviet physicists D. D. Ivanenko and D. F. Kurdgelaidze. Their existence has not been confirmed.
The equation of state of quark matter is uncertain, as is the transition point between neutron-degenerate matter and quark matter. Theoretical uncertainties have precluded making predictions from first principles. Experimentally, the behaviour of quark matter is being actively studied with particle colliders, but this can only produce very hot (above 10¹² K) quark–gluon plasma blobs the size of atomic nuclei, which decay immediately after formation. The conditions inside compact stars with extremely high densities and temperatures well below 10¹² K cannot be recreated artificially, as there are no known methods to produce, store or study "cold" quark matter directly as it would be found inside quark stars. The theory predicts quark matter to possess some peculiar characteristics under these conditions.
Formation
It is hypothesized that when the neutron-degenerate matter, which makes up neutron stars, is put under sufficient pressure from the star's own gravity or the initial supernova creating it, the individual neutrons break down into their constituent quarks (up quarks and down quarks), forming what is known as quark matter. This conversion may be confined to the neutron star's center or it might transform the entire star, depending on the physical circumstances. Such a star is known as a quark star.
Stability and strange quark matter
Ordinary quark matter consisting of up and down quarks has a very high Fermi energy compared to ordinary atomic matter and is stable only under extreme temperatures and/or pressures. This suggests that the only stable quark stars will be neutron stars with a quark matter core, while quark stars consisting entirely of ordinary quark matter will be highly unstable and re-arrange spontaneously.
It has been shown that the high Fermi energy making ordinary quark matter unstable at low temperatures and pressures can be lowered substantially by the transformation of a sufficient number of up and down quarks into strange quarks, as strange quarks are, relatively speaking, a very heavy type of quark particle. This kind of quark matter is known specifically as strange quark matter and it is speculated and subject to current scientific investigation whether it might in fact be stable under the conditions of interstellar space (i.e. near zero external pressure and temperature). If this is the case (known as the Bodmer–Witten assumption), quark stars made entirely of quark matter would be stable if they quickly transform into strange quark matter.
Strange stars
Stars made of strange quark matter are known as strange stars. These form a distinct subtype of quark stars.
Theoretical investigations have revealed that quark stars might not only be produced from neutron stars and powerful supernovas, they could also be created in the early cosmic phase separations following the Big Bang. If these primordial quark stars transform into strange quark matter before the external temperature and pressure conditions of the early Universe makes them unstable, they might turn out stable, if the Bodmer–Witten assumption holds true. Such primordial strange stars could survive to this day.
Characteristics
Quark stars have some special characteristics that separate them from ordinary neutron stars. Under the physical conditions found inside neutron stars, with extremely high densities but temperatures well below 10¹² K, quark matter is predicted to exhibit some peculiar characteristics. It is expected to behave as a Fermi liquid and enter a so-called color-flavor-locked (CFL) phase of color superconductivity, where "color" refers to the six "charges" exhibited in the strong interaction, instead of the two charges (positive and negative) in electromagnetism. At slightly lower densities, corresponding to higher layers closer to the surface of the compact star, the quark matter will behave as a non-CFL quark liquid, a phase that is even more mysterious than CFL and might include color conductivity and/or several additional yet undiscovered phases. None of these extreme conditions can currently be recreated in laboratories so nothing can be inferred about these phases from direct experiments.
Observed overdense neutron stars
At least under the assumptions mentioned above, the probability of a given neutron star being a quark star is low, so in the Milky Way there would only be a small population of quark stars. If it is correct, however, that overdense neutron stars can turn into quark stars, that makes the possible number of quark stars higher than was originally thought, as observers would be looking for the wrong type of star.
A neutron star without deconfinement to quarks and the accompanying higher densities cannot have a rotational period shorter than a millisecond: even with the immense gravity of such a condensed object, gravity could not supply the centripetal force required at faster rotation, and matter would be ejected from the surface. Detection of a pulsar with a period of a millisecond or less would therefore be strong evidence of a quark star.
Observations released by the Chandra X-ray Observatory on April 10, 2002, detected two possible quark stars, designated RX J1856.5−3754 and 3C 58, which had previously been thought to be neutron stars. Based on the known laws of physics, the former appeared much smaller and the latter much colder than it should be, suggesting that they are composed of material denser than neutron-degenerate matter. However, these observations are met with skepticism by researchers who say the results were not conclusive; and since the late 2000s, the possibility that RX J1856 is a quark star has been excluded.
Another star, XTE J1739-285, has been observed by a team led by Philip Kaaret of the University of Iowa and reported as a possible quark star candidate.
In 2006, You-Ling Yue et al., from Peking University, suggested that PSR B0943+10 may in fact be a low-mass quark star.
It was reported in 2008 that observations of supernovae SN 2006gy, SN 2005gj and SN 2005ap also suggest the existence of quark stars. It has been suggested that the collapsed core of supernova SN 1987A may be a quark star.
In 2015, Zi-Gao Dai et al. from Nanjing University suggested that Supernova ASASSN-15lh is a newborn strange quark star.
In 2022 it was suggested that GW190425, which likely formed as a merger between two neutron stars giving off gravitational waves in the process, could be a quark star.
Other hypothesized quark formations
Apart from ordinary quark matter and strange quark matter, other types of quark-gluon plasma might hypothetically occur or be formed inside neutron stars and quark stars. This includes the following, some of which has been observed and studied in laboratories:
Robert L. Jaffe 1977, suggested a four-quark state with strangeness (qs).
Robert L. Jaffe 1977 suggested the H dibaryon, a six-quark state with equal numbers of up-, down-, and strange quarks (represented as uuddss or udsuds).
Bound multi-quark systems with heavy quarks (QQ).
In 1987, a pentaquark state was first proposed with a charm anti-quark (qqqs).
Pentaquark state with an antistrange quark and four light quarks consisting of up- and down-quarks only (qqqq).
Light pentaquarks are grouped within an antidecuplet, the lightest candidate, Θ+, which can also be described by the diquark model of Robert L. Jaffe and Wilczek (QCD).
Θ++ and its antiparticle Θ−−.
Doubly strange pentaquark (ssdd), member of the light pentaquark antidecuplet.
Charmed pentaquark Θc(3100) (uudd) state was detected by the H1 collaboration.
Tetraquark particles might form inside neutron stars and under other extreme conditions. In 2008, 2013 and 2014 the tetraquark particle of Z(4430), was discovered and investigated in laboratories on Earth.
See also
References
Sources and further reading
External links
Neutron Star/Quark Star Interior (image to print)
Whitfield, John; "Quark star glimmers", Nature, 2002 April 11
"Debate sparked on quark stars", CERN Courier 42, #5, June 2002, page 13
Beck, Paul; "Wish Upon a Quark Star", Popular Science, June 2002
Krivoruchenko, M. I.; "Strange, quark, and metastable neutron stars" , JETP Letters, vol. 46, no. 1, 10 July 1987, pages 3–6 (page 6: Perhaps a 1,700-year-old quark star in SNR MSH 15–52)
Rothstein, Dave; "Curious About Astronomy: What process would bring about a quark star?", question #445, January 2003
Nemiroff, Robert; Bonnell, Jerry; "RX J185635-375: Candidate Quark Star", Astronomy Picture of the Day, NASA Goddard Space Flight Center, 2002 April 14
Anderson, Mark K.: Quarks or Quirky Neutron Stars?, Wired News, 2002 April 19
Boyce, Kevin; Still, Martin; "What is the news about a possible Strange Quark Star?", Ask an Astrophysicist, NASA Goddard Space Flight Center, 2002 April 12
Marquit, Miranda; "Seeing 'Strange' Stars", physorg.com, 2006 February 8
"Quark Stars Could Produce Biggest Bang", spacedaily.com, 2006 June 7
Niebergal, Brian: "Meissner Effect in Strange Quark Stars", Computational Astro-Physics Calgary Alberta, University of Calgary
Bryner, Jeanna; "Quark Stars Involved in New Theory of Brightest Supernovae", Space.com, 2008 June 3 (The first-ever evidence of a neutron star collapsing into a quark star is announced)
Cramer, John G.: "Quark Stars, Alternate View Column AV-114", Analog Science Fiction & Fact Magazine, November 2002
Star types
Exotic matter
Unsolved problems in physics
Hypothetical stars | Quark star | [
"Physics",
"Astronomy"
] | 2,726 | [
"Unsolved problems in physics",
"Exotic matter",
"Astronomical classification systems",
"Star types",
"Matter"
] |
203,291 | https://en.wikipedia.org/wiki/Kurt%20W%C3%BCthrich | Kurt Wüthrich (born 4 October 1938 in Aarberg, Canton of Bern) is a Swiss chemist/biophysicist and Nobel Chemistry laureate, known for developing nuclear magnetic resonance (NMR) methods for studying biological macromolecules.
Education and early life
Born in Aarberg, Switzerland, Wüthrich was educated in chemistry, physics, and mathematics at the University of Bern before pursuing his PhD supervised by Silvio Fallab at the University of Basel, awarded in 1964.
Career
After his PhD, Wüthrich continued postdoctoral research with Fallab for a short time before leaving to work at the University of California, Berkeley for two years from 1965 with Robert E. Connick. That was followed by a stint working with Robert G. Shulman at the Bell Telephone Laboratories in Murray Hill, New Jersey from 1967 to 1969.
Wüthrich returned to Switzerland, to Zürich, in 1969, where he began his career at the ETH Zürich, rising to Professor of Biophysics by 1980. He currently maintains a laboratory at the ETH Zürich, at The Scripps Research Institute in La Jolla, California, and at ShanghaiTech University. He has also been a visiting professor at the University of Edinburgh (1997–2000), the Chinese University of Hong Kong (where he was an Honorary Professor) and Yonsei University.
During his graduate studies Wüthrich started out working with electron paramagnetic resonance spectroscopy, and the subject of his PhD thesis was "the catalytic activity of copper compounds in autoxidation reactions". During his time as a postdoc in Berkeley he began working with the newly developed and related technique of nuclear magnetic resonance spectroscopy to study the hydration of metal complexes. When Wüthrich joined the Bell Labs, he was put in charge of one of the first superconducting NMR spectrometers, and started studying the structure and dynamics of proteins. He has pursued this line of research ever since.
After returning to Switzerland, Wüthrich collaborated with, among others, Nobel laureate Richard R. Ernst on developing the first two-dimensional NMR experiments, and established the nuclear Overhauser effect as a convenient way of measuring distances within proteins. This research later led to the complete assignment of resonances for, among others, bovine pancreatic trypsin inhibitor and glucagon.
In October 2010, Wüthrich participated in the USA Science and Engineering Festival's Lunch with a Laureate program, in which middle and high school students engage in an informal conversation with a Nobel Prize–winning scientist over a brown-bag lunch. Wüthrich is also a member of the USA Science and Engineering Festival's Advisory Board and a supporter of the Campaign for the Establishment of a United Nations Parliamentary Assembly, an organisation which campaigns for democratic reform in the United Nations.
Awards and honors
He was awarded the Louisa Gross Horwitz Prize from Columbia University in 1991, the Louis-Jeantet Prize for Medicine in 1993, the Otto Warburg Medal in 1999 and half of the Nobel Prize in Chemistry in 2002 for "his development of nuclear magnetic resonance spectroscopy for determining the three-dimensional structure of biological macromolecules in solution". He received the Bijvoet Medal of the Bijvoet Center for Biomolecular Research of Utrecht University in 2008. He was elected a Foreign Member of the Royal Society (ForMemRS) in 2010. He was also awarded the 2018 Fray International Sustainability Award at SIPS 2018 by FLOGEN Star Outreach.
Personal details
On 2 April 2018, Dr. Wüthrich established permanent residency in Shanghai, China, after obtaining a Chinese permanent residence card.
Bibliography
NMR in Biological Research: Peptides and Proteins, American Elsevier Pub. Co, 1976
NMR of proteins and nucleic acids, Wiley, 1986
NMR In Structural Biology: A Collection Of Papers By Kurt Wuthrich, World Scientific Publishing Co Pte Ltd, 1995
References
External links
including the Nobel Lecture NMR Studies of Structure and Function of Biological Macromolecules
1938 births
Living people
People from Aarberg
Academic staff of ETH Zurich
Duke University faculty
Academics of the University of Edinburgh
Nobel laureates in Chemistry
Nuclear magnetic resonance
Swiss chemists
Swiss biophysicists
Swiss Nobel laureates
Swiss Protestants
Scripps Research faculty
Members of the European Molecular Biology Organization
Foreign associates of the National Academy of Sciences
Members of the French Academy of Sciences
Foreign members of the Royal Society
Foreign fellows of the Indian National Science Academy
Bijvoet Medal recipients
Kyoto laureates in Advanced Technology
University of Bern alumni | Kurt Wüthrich | [
"Physics",
"Chemistry"
] | 925 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
203,359 | https://en.wikipedia.org/wiki/Magnetic%20resonance | Magnetic resonance is a process by which a physical excitation (resonance) is set up via magnetism.
This process was used to develop magnetic resonance imaging (MRI) and nuclear magnetic resonance spectroscopy (NMRS) technology.
It is also being used to develop nuclear magnetic resonance quantum computers.
History
The first observation of electron-spin resonance was in 1944 by Y. K. Zavoisky, a Soviet physicist then teaching at Kazan State University (now Kazan Federal University). Nuclear magnetic resonance was first observed in 1946 in the US by a team led by Felix Bloch and, at the same time, by a separate team led by Edward Mills Purcell; the two would later share the 1952 Nobel Prize in Physics.
Resonant and non-resonant methods
A natural way to measure the separation between two energy levels is to find a measurable quantity defined by this separation and measure it. However, the precision of this approach is limited by how precisely that quantity can be measured, and thus may be poor.
Alternatively, we can set up an experiment in which the system's behavior depends on the energy level. If we apply an external field of controlled frequency, we can measure the level separation by noting at which frequency a qualitative change happens: that would mean that at this frequency, the transition between two states has a high probability. An example of such an experiment is a variation of Stern–Gerlach experiment, in which magnetic moment is measured by finding resonance frequency for the transition between two spin states.
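For concreteness, here is a minimal sketch of that resonance condition for a spin-1/2 magnetic moment in a field B: the two spin states are split by ΔE = 2μB, so transitions become likely at the frequency f = ΔE/h. The constants are CODATA values for a free proton; chemical-shift effects are ignored.

```python
h = 6.62607015e-34     # Planck constant, J s
mu_p = 1.41060679e-26  # proton magnetic moment, J/T

def resonance_frequency(B_tesla):
    """Frequency (Hz) at which transitions between the two proton
    spin states become resonant in a field of B_tesla."""
    return 2.0 * mu_p * B_tesla / h

print(resonance_frequency(1.0) / 1e6)  # ~42.58 MHz per tesla
```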
See also
Resonant inductive coupling, a method of transferring electrical power
Magnetic resonance (quantum mechanics), a quantum resonance process
Nuclear magnetic resonance, a special case
Giant resonance
Electron paramagnetic resonance
References
Magnetic resonance imaging
Magnetism
Physical phenomena | Magnetic resonance | [
"Physics",
"Chemistry"
] | 351 | [
"Physical phenomena",
"Nuclear magnetic resonance",
"Magnetic resonance imaging",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs"
] |
203,433 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28length%29 | The following are examples of orders of magnitude for different lengths.
Overview
Detailed list
To help compare different orders of magnitude, the following list describes various lengths, from the Planck length (about 1.6 × 10⁻³⁵ meters) up to astronomical scales.
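Throughout this list, the order of magnitude of a length is simply the exponent of its power of ten. A one-line helper (a sketch) makes the convention explicit:

```python
import math

def order_of_magnitude(length_in_meters: float) -> int:
    """Exponent of the power of ten bracketing the given length."""
    return math.floor(math.log10(length_in_meters))

print(order_of_magnitude(1.616e-35))  # -35 (the Planck length)
print(order_of_magnitude(0.12))       # -1 (a compact disc, 12 cm across)
```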
Subatomic scale
Atomic to cellular scale
Cellular to human scale
Human to astronomical scale
Astronomical scale
1 quectometer and less
The quectometer (SI symbol: qm) is a unit of length in the metric system equal to 10⁻³⁰ m.
To help compare different orders of magnitude, this section lists lengths shorter than 10−30 m (1 qm).
1.6 × 10⁻⁵ quectometers (1.6 × 10⁻³⁵ meters) – the Planck length (measures of distance shorter than this do not make physical sense, according to current theories of physics); see the sketch below
1 qm – 1 quectometer, the smallest named subdivision of the meter in the SI base unit of length, one nonillionth of a meter.
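The Planck length can be computed from CODATA constants via l_P = √(ħG/c³); a sketch:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light in vacuum, m/s

l_planck = (hbar * G / c**3) ** 0.5
print(l_planck)  # ~1.616e-35 m, i.e. ~1.6e-5 quectometers
```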
1 rontometer
The rontometer (SI symbol: rm) is a unit of length in the metric system equal to 10⁻²⁷ m.
1 rm – 1 rontometer, a subdivision of the meter in the SI base unit of length, one octillionth of a meter.
10 rontometers
10 rm – the length of one side of a square whose area is one shed, a unit of target cross section used in nuclear physics
1 yoctometer
The yoctometer (SI symbol: ym) is a unit of length in the metric system equal to 10⁻²⁴ m.
2 ym – the effective cross-section radius of 1 MeV neutrinos as measured by Clyde Cowan and Frederick Reines.
1 zeptometer
The zeptometer (SI symbol: zm) is a unit of length in the metric system equal to 10⁻²¹ m.
To help compare different orders of magnitude, this section lists lengths between 10−21 m and 10−20 m (1 zm and 10 zm).
2 zm – the upper bound for the width of a cosmic string in string theory.
2 zm – radius of effective cross section for a 20 GeV neutrino scattering off a nucleon
7 zm – radius of effective cross section for a 250 GeV neutrino scattering off a nucleon
10 zeptometers
To help compare different orders of magnitude, this section lists lengths between 10−20 m and 10−19 m (10 zm and 100 zm).
100 zeptometers
To help compare different orders of magnitude, this section lists lengths between 10−19 m and 10−18 m (100 zm and 1 am).
177 zm – de Broglie wavelength of protons at the Large Hadron Collider (7 TeV as of 2010)
1 attometer
The attometer (SI symbol: am) is a unit of length in the metric system equal to 10⁻¹⁸ m.
To help compare different orders of magnitude, this section lists lengths between 10−18 m and 10−17 m (1 am and 10 am).
1 am – sensitivity of the LIGO detector for gravitational waves
1 am – upper limit for the size of quarks and electrons
10 attometers
To help compare different orders of magnitude, this section lists lengths between 10−17 m and 10−16 m (10 am and 100 am).
10–100 am – range of the weak force
86 am – charge radius of a Bottom eta meson
100 attometers
To help compare different orders of magnitude, this section lists lengths between 10−16 m and 10−15 m (100 am and 1 fm).
831 am – approximate proton radius
1 femtometer (or 1 fermi)
The femtometer (SI symbol: fm) is a unit of length in the metric system equal to 10⁻¹⁵ m.
In particle physics, this unit is sometimes called a fermi, also with abbreviation "fm". To help compare different orders of magnitude, this section lists lengths between 10−15 meters and 10−14 meters (1 femtometer and 10 fm).
1 fm – diameter of a neutron, approximate range-limit of the color force carried between quarks by gluons
1.5 fm – diameter of the scattering cross section of an 11 MeV proton with a target proton
1.75 fm – the effective charge diameter of a proton
2.81794 fm – classical electron radius
3 fm – approximate range-limit of the nuclear binding force mediated by mesons
7 fm – the radius of the effective scattering cross section for a gold nucleus scattering a 6 MeV alpha particle over 140 degrees
10 femtometers
To help compare different orders of magnitude, this section lists lengths between 10−14 m and 10−13 m (10 fm and 100 fm).
1.75 to 15 fm – diameter range of the atomic nucleus
10 fm – the length of one side of a square whose area is one barn (10⁻²⁸ m²), a unit of target cross section used in nuclear physics
30.8568 fm – 1 quectoparsec (10−30 parsecs)
100 femtometers
To help compare different orders of magnitude, this section lists lengths between 10−13 m and 10−12 m (100 fm and 1 pm).
570 fm – typical distance from the atomic nucleus of the two innermost electrons (electrons in the 1s shell) in the uranium atom, the heaviest naturally-occurring atom
1 picometer
The picometer (SI symbol: pm) is a unit of length in the metric system equal to 10⁻¹² m (one trillionth of a meter).
To help compare different orders of magnitude this section lists lengths between 10−12 and 10−11 m (1 pm and 10 pm).
1 pm – distance between atomic nuclei in a white dwarf
1 pm – reference value of particle displacement in acoustics
2.4 pm – the Compton wavelength of an electron
5 pm – shorter X-ray wavelengths (approx.)
10 picometers
To help compare different orders of magnitude this section lists lengths between 10−11 and 10−10 m (10 pm and 100 pm).
25 pm – approximate radius of a helium atom, the smallest neutral atom
30.8568 pm – 1 rontoparsec
50 pm – Bohr radius: approximate radius of a hydrogen atom
~50 pm – best resolution of a high-resolution transmission electron microscope
60 pm – radius of a carbon atom
93 pm – length of a diatomic carbon molecule
96 pm – H–O bond length in a water molecule
100 picometers
To help compare different orders of magnitude this section lists lengths between 10−10 and 10−9 m (100 pm and 1 nm; 1 Å and 10 Å).
100 pm – 1 ångström
100 pm – covalent radius of sulfur atom
120 pm – van der Waals radius of a neutral hydrogen atom
120 pm – radius of a gold atom
126 pm – covalent radius of ruthenium atom
135 pm – covalent radius of technetium atom
150 pm – length of a typical covalent bond (C–C)
153 pm – covalent radius of silver atom
155 pm – covalent radius of zirconium atom
175 pm – covalent radius of thulium atom
200 pm – highest resolution of a typical electron microscope
225 pm – covalent radius of caesium atom
280 pm – average size of the water molecule
298 pm – radius of a caesium atom, calculated to be the largest atomic radius
340 pm – thickness of single layer graphene
356.68 pm – width of diamond unit cell
403 pm – width of lithium fluoride unit cell
500 pm – Width of protein α helix
543 pm – silicon lattice spacing
560 pm – width of sodium chloride unit cell
700 pm – width of glucose molecule
700 pm – diameter of a buckyball
780 pm – mean width of quartz unit cell
820 pm – mean width of ice unit cell
900 pm – mean width of coesite unit cell
1 nanometer
The nanometer (SI symbol: nm) is a unit of length in the metric system equal to 10⁻⁹ m (one billionth of a meter).
To help compare different orders of magnitude, this section lists lengths between 10−9 and 10−8 m (1 nm and 10 nm).
1 nm – diameter of a carbon nanotube
1 nm – roughly the length of a sucrose molecule, calculated by Albert Einstein
2.3 nm – length of a phospholipid
2.3 nm – smallest gate oxide thickness in microprocessors
3 nm – width of a DNA helix
3 nm – flying height of the head of a hard disk
3 nm – the average half-pitch of a memory cell manufactured circa 2022
3.4 nm – length of a DNA turn (10 bp)
3.8 nm – size of an albumin molecule
5 nm – size of the gate length of a 16 nm processor
5 nm – the average half-pitch of a memory cell manufactured circa 2019–2020
6 nm – length of a phospholipid bilayer
6–10 nm – thickness of cell membrane
6.8 nm – width of a haemoglobin molecule
7 nm – diameter of actin filaments
7 nm – the average half-pitch of a memory cell manufactured circa 2018
10 nm – thickness of cell wall in Gram-negative bacteria
10 nanometers
To help compare different orders of magnitude this section lists lengths between 10−8 and 10−7 m (10 nm and 100 nm).
10 nm – shortest extreme ultraviolet wavelength or longest X-ray wavelength
10 nm – the average length of a nanowire
10 nm – lower size of tobacco smoke
10 nm – the average half-pitch of a memory cell manufactured circa 2016–2017
13 nm – the length of the wavelength that is used for EUV lithography
14 nm – length of a porcine circovirus
14 nm – the average half-pitch of a memory cell manufactured circa 2013
15 nm – length of an antibody
18 nm – diameter of tobacco mosaic virus
20 nm – length of a nanobe, could be one of the smallest forms of life
20–80 nm – thickness of cell wall in Gram-positive bacteria
20 nm – thickness of bacterial flagellum
22 nm – the average half-pitch of a memory cell manufactured circa 2011–2012
22 nm – smallest feature size of production microprocessors in September 2009
25 nm – diameter of a microtubule
30 nm – lower size of cooking oil smoke
30.8568 nm – 1 yoctoparsec
32 nm – the average half-pitch of a memory cell manufactured circa 2009–2010
40 nm – extreme ultraviolet wavelength
45 nm – the average half-pitch of a memory cell manufactured circa 2007–2008
50 nm – upper size for airborne virus particles
50 nm – flying height of the head of a hard disk
58 nm – height of a T7 bacteriophage
65 nm – the average half-pitch of a memory cell manufactured circa 2005–2006
90 nm – human immunodeficiency virus (HIV) (generally, viruses range in size from 20 nm to 450 nm)
90 nm – the average half-pitch of a memory cell manufactured circa 2002–2003
100 nm – Length of a mesoporous silica nanoparticle
100 nanometers
To help compare different orders of magnitude, this section lists lengths between 10−7 and 10−6 m (100 nm and 1 μm).
100 nm – greatest particle size that can fit through a surgical mask
100 nm – 90% of particles in wood smoke are smaller than this.
120 nm – greatest particle size that can fit through a ULPA filter
120 nm – diameter of a human immunodeficiency virus (HIV)
120 nm – approximate diameter of SARS-CoV-2
125 nm – standard depth of pits on compact discs (width: 500 nm, length: 850 nm to 3.5 μm)
180 nm – typical length of the rabies virus
200 nm – typical size of a Mycoplasma bacterium, among the smallest bacteria
300 nm – greatest particle size that can fit through a HEPA (high efficiency particulate air) filter (N100 removes up to 99.97% at 300 nm, N95 removes up to 95% at 300 nm)
300–400 nm – near ultraviolet wavelength
400–420 nm – wavelength of violet light (see Color and Visible spectrum)
420–440 nm – wavelength of indigo light
440–500 nm – wavelength of blue light
500–520 nm – wavelength of cyan light
520–565 nm – wavelength of green light
565–590 nm – wavelength of yellow light
590–625 nm – wavelength of orange light
625–700 nm – wavelength of red light
700 nm–1.4 μm – wavelength of near-infrared radiation
1 micrometer (or 1 micron)
The micrometer (SI symbol: μm) is a unit of length in the metric system equal to 10⁻⁶ m (one millionth of a meter).
To help compare different orders of magnitude, this section lists some items with lengths between 10−6 and 10−5 m (between 1 and 10 micrometers, or μm).
~0.7–300 μm – wavelength of infrared radiation
1 μm – the side of a square of area 10−12 m2
1 μm – edge of cube of volume 10−18 m3 (1 fL)
1–10 μm – diameter of a typical bacterium
1 μm – length of a lysosome
1–2 μm – anthrax spore
2 μm – length of an average E. coli bacteria
3–4 μm – size of a typical yeast cell
5 μm – length of a typical human spermatozoon's head
6 μm – thickness of the tape in a 120-minute (C120) compact cassette
7 μm – diameter of the nucleus of a typical eukaryotic cell
about 7 μm – diameter of human red blood cells
3–8 μm – width of strand of spider web silk
5–10 μm – width of a chloroplast
8–11 μm – size of a ground-level fog or mist droplet
7–12 μm – the diameter of human white blood cells
8–10 μm – the diameter of human macrophages
10 micrometers
To help compare different orders of magnitude, this section lists lengths between 10−5 m and 10−4 m (10 μm and 100 μm).
10 μm – width of cotton fibre
10 μm – tolerance of a Lego brick
10 μm – transistor width of the Intel 4004, the world's first commercial microprocessor
10 μm – mean longest dimension of a human red blood cell
5–20 μm – dust mite excreta
10.6 μm – wavelength of light emitted by a carbon dioxide laser
15 μm – width of silk fibre
17 μm – minimum width of a strand of human hair
17.6 μm – one twip, a unit of length in typography
10 to 55 μm – width of wool fibre
25.4 μm – 1/1,000 inch, commonly referred to as 1 mil in the U.S. and 1 thou in the UK
30 μm – length of a human skin cell
30.8568 μm – 1 zeptoparsec
50 μm – typical length of Euglena gracilis, a flagellate protist
50 μm – typical length of a human liver cell, an average-sized body cell
50 μm – length of a silt particle
60 μm – length of a sperm cell
78 μm – width of a pixel on the display of the iPhone 4, marketed as a Retina Display
70 to 180 μm – thickness of paper
100 micrometers
To help compare different orders of magnitude, this section lists lengths between 10−4 m and 10−3 m (100 μm and 1 mm). The term myriometer (abbr. mom, equivalent to 100 micrometers; frequently confused with the myriameter, 10 kilometers) is deprecated; the decimal metric prefix myrio- is obsolete and was not included among the prefixes when the International System of Units was introduced in 1960.
100 μm – 1/10 of a millimeter
100 μm – 0.00394 inches
100 μm – smallest distance that can be seen with the naked eye
100 μm – average diameter of a strand of human hair
100 μm – thickness of a coat of paint
100 μm – length of a dust particle
120 μm – the geometric mean of the Planck length and the diameter of the observable universe: √(1.6 × 10⁻³⁵ m × 8.8 × 10²⁶ m) ≈ 1.2 × 10⁻⁴ m
120 μm – diameter of a human ovum
170 μm – length of the largest mammalian sperm cell (rat)
170 μm – length of the largest sperm cell in nature, belonging to the Drosophila bifurca fruit fly
181 μm – maximum width of a strand of human hair
100–400 μm – length of Demodex mites living in human hair follicles
175–200 μm – typical thickness of a solar cell.
200 μm – typical length of Paramecium caudatum, a ciliate protist
200 μm – nominal width of the smallest commonly available mechanical pencil lead (0.2 mm)
250–300 μm – length of a dust mite
340 μm – length of a pixel on a 17-inch monitor with a resolution of 1024×768
500 μm – typical length of Amoeba proteus, an amoeboid protist
500 μm – MEMS micro-engine
500 μm – average length of a grain of sand
500 μm – average length of a grain of salt
500 μm – average length of a grain of sugar
560 μm – thickness of the central area of a human cornea
750 μm – diameter of a Thiomargarita namibiensis, the largest bacteria known
760 μm – thickness of an identification card
1 millimeter
The millimeter (SI symbol: mm) is a unit of length in the metric system equal to 10⁻³ m (one thousandth of a meter).
To help compare different orders of magnitude, this section lists lengths between 10−3 m and 10−2 m (1 mm and 1 cm).
1.0 mm – 1/1,000 of a meter
1.0 mm – 0.03937 inches or 5/127 (exactly)
1.0 mm – side of a square of area 1 mm²
1.0 mm – diameter of a pinhead
1.5 mm – average length of a flea
2.54 mm – distance between pins on old dual in-line package (DIP) electronic components
5 mm – length of an average red ant
5 mm – diameter of an average grain of rice
5.56×45mm NATO – standard ammunition size
6 mm – approximate width of a pencil
7 mm – length of a Paedophryne amauensis, the smallest-known vertebrate
7.1 mm – length of a sunflower seed
7.62×51mm NATO – common military ammunition size
8 mm – width of old-format home movie film
8 mm – length of a Paedocypris progenetica, the smallest-known fish
1 centimeter
The centimeter (SI symbol: cm) is a unit of length in the metric system equal to 10⁻² m (one hundredth of a meter).
To help compare different orders of magnitude, this section lists lengths between 10−2 m and 10−1 m (1 cm and 1 dm).
1 cm – 10 millimeters
1 cm – 0.39 inches
1 cm – edge of a square of area 1 cm2
1 cm – edge of a cube of volume 1 mL
1 cm – length of a coffee bean
1 cm – approximate width of average fingernail
1.2 cm – length of a bee
1.2 cm – diameter of a die
1.5 cm – length of a very large mosquito
1.6 cm – length of a Jaragua Sphaero, a very small reptile
1.7 cm – length of a Thorius arboreus, the smallest salamander
2 cm – approximate width of an adult human finger
2.54 cm – 1 inch
3.08568 cm – 1 attoparsec
3.4 cm – length of a quail egg
3.5 cm – width of film commonly used in motion pictures and still photography
3.78 cm – amount of distance the Moon moves away from Earth each year
4.3 cm – minimum diameter of a golf ball
5 cm – usual diameter of a chicken egg
5 cm – height of a hummingbird, the smallest-known bird
5.08 cm – 2 inches
5.5 × 5.5 × 5.5 cm – dimensions of a 3x3x3 Rubik's cube
6.1 cm – average height of an apple
7.3–7.5 cm – diameter of a baseball
8.6 cm × 5.4 cm – dimensions of a standard credit card (also called CR80)
9 cm – length of a speckled padloper, the smallest-known turtle
1 decimeter
The decimeter (SI symbol: dm) is a unit of length in the metric system equal to 10⁻¹ m (one tenth of a meter).
To help compare different orders of magnitude, this section lists lengths between 10 centimeters and 100 centimeters (10−1 meter and 1 meter).
Conversions
10 centimeters (abbreviated to 10 cm) is equal to:
1 decimeter (dm), a term not in common use (1 L = 1 dm3.)
100 millimeters
3.9 inches
a side of a square of area 0.01 m2
the edge of a cube with a volume of m3 (1 L)
Wavelengths
10 cm = 1.0 dm – wavelength of the highest UHF radio frequency, 3 GHz
12 cm = 1.2 dm – wavelength of the 2.45 GHz ISM radio band
21 cm = 2.1 dm – wavelength of the 1.4 GHz hydrogen emission line, a hyperfine transition of the hydrogen atom
100 cm = 10 dm – wavelength of the lowest UHF radio frequency, 300 MHz
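All four wavelength entries above follow from λ = c/f; a quick sketch verifying them:

```python
c = 2.99792458e8  # speed of light in vacuum, m/s

for label, f_hz in [("highest UHF", 3.0e9), ("2.45 GHz ISM", 2.45e9),
                    ("hydrogen line", 1.420405751e9), ("lowest UHF", 3.0e8)]:
    print(label, c / f_hz, "m")  # ~0.10, 0.122, 0.211, ~1.0 meters
```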
Human-defined scales and structures
10.16 cm = 1.016 dm – 1 hand used in measuring height of horses (4 inches)
12 cm = 1.2 dm – diameter of a compact disc (CD) (= 120 mm)
15 cm = 1.5 dm – length of a Bic pen with cap on
22 cm = 2.2 dm – diameter of a typical association football (soccer ball)
30 cm = 3 dm – typical school-use ruler length (= 300 mm)
30.48 cm = 3.048 dm – 1 foot (measure)
60 cm = 6 dm – standard depth (front to back) of a domestic kitchen worktop in Europe (= 600 mm)
90 cm = 9 dm – average length of a rapier, a fencing sword
91.44 cm = 9.144 dm – one yard (measure)
Nature
10 cm = 1 dm – diameter of the human cervix upon entering the second stage of labour
11 cm = 1.1 dm – length of an average potato in the US
13 cm = 1.3 dm – body length of a Goliath birdeater
15 cm = 1.5 dm – approximate size of largest beetle species
19 cm = 1.9 dm – length of a banana
26.3 cm = 2.6 dm – length of average male human foot
29.98 cm = 2.998 dm – distance light in vacuum travels in one nanosecond
30 cm = 3.0 dm – maximum leg length of a Goliath birdeater
31 cm = 3.1 dm – wingspan of largest butterfly species Ornithoptera alexandrae
32 cm – length of the Goliath frog, the world's largest frog
46 cm = 4.6 dm – length of an average domestic cat
50 to 65 cm = 5–6.5 dm – a coati's tail
66 cm = 6.6 dm – length of the longest pine cones (produced by the sugar pine)
Astronomical
84 cm = 8.4 dm – approximate diameter of 2008 TS26, a meteoroid
1 meter
To help compare different orders of magnitude, this section lists lengths between one meter and ten meters.
Light, in vacuum, travels 1 meter in 1/299,792,458 of a second (about 3.34 nanoseconds).
Conversions
1 meter is:
10 decimeters
100 centimeters
1,000 millimeters
39.37 inches
3.28 feet
1.1 yards
side of square with area 1 m2
edge of cube with surface area 6 m2 and volume 1 m3
radius of circle with area π m2
radius of sphere with surface area 4π m2 and volume 4/3π m3
Human-defined scales and structures
1 m – approximate height of the top part of a doorknob on a door
1 m – diameter of a very large beach ball
1.29 m – length of the Cross Island Chapel, the smallest church in the world
1.4 m – length of a Peel P50, the world's smallest car
1.435 m – standard gauge of railway track used by about 60% of railways in the world = 4 ft 8½ in
2.5 m – distance from the floor to the ceiling in an average residential house
2.7 m – length of the Starr Bumble Bee II, the smallest plane
2.77–3.44 m – wavelength of the broadcast radio FM band 87–108 MHz
3.05 m – the length of an old Mini
8 m – length of the Tsar Bomba, the largest bomb ever detonated
8.38 m – the length of a London Bus (AEC Routemaster)
Sports
2.44 m – height of an association football goal
2.45 m – highest high jump by a human (Javier Sotomayor)
3.05 m – (10 feet) height of the basket in basketball
8.95 m – longest long jump by a human (Mike Powell)
Nature
1 m – length of Rafflesia arnoldii, the largest flower in the world
1 m – height of Homo floresiensis (the "Hobbit")
1.15 m – a pizote (mammal)
1.5 m – height of an okapi
1.63 m – (5 feet 4 inches) (or 64 inches) – height of average U.S. female human (source: U.S. Centers for Disease Control and Prevention (CDC))
1.75 m – (5 feet 8 inches) – height of average U.S. male human (source: U.S. CDC as per female above)
2.4 m – wingspan of a mute swan
2.5 m – height of a sunflower
2.7 m – length of a leatherback sea turtle, the largest living turtle
2.72 m – (8 feet 11 inches) – tallest-known human (Robert Wadlow)
3 m – length of a giant Gippsland earthworm
3 m – length of a Komodo dragon, the largest living lizard
3.63 m – the record wingspan for living birds (a wandering albatross)
3.7 m – leg span of a Japanese spider crab
3.7 m – length of a southern elephant seal, the largest living pinniped
5 m – length of an elephant
5.2 m – height of a giraffe
5.5 m – height of a Baluchitherium, the largest land mammal that ever lived
6.5 m – wingspan of Argentavis, the largest flying bird known
6.7 m – length of a Microchaetus rappi
7.4 m – wingspan of Pelagornis, the bird with the longest known wingspan of any species, living or extinct
7.5 m – approximate length of the human gastrointestinal tract
Astronomical
3–6 m – approximate diameter of , a meteoroid
4.1 m – diameter of 2008 TC3, a small asteroid that flew into the Earth's atmosphere on 7 October 2008
1 decameter
The decameter (SI symbol: dam) is a unit of length in the metric system equal to 10 meters (10¹ m).
To help compare different orders of magnitude, this section lists lengths between 10 and 100 meters.
Conversions
10 meters (very rarely termed a decameter, which is abbreviated as dam) is equal to:
10 meters
100 decimeters
1,000 centimeters
10,000 millimeters
10,000,000 micrometers (or rarely 10,000,000 microns)
32.8 feet
11 yards
side of a square with area 100 m²
Human-defined scales and structures
10 meters – wavelength of the highest shortwave radio frequency, 30 MHz
10.2 meters – length of the Panzer VIII Maus, the world's largest tank
12 meters – height of the Newby-McMahon Building, the world's littlest skyscraper
23 meters – height of Luxor Obelisk, located in the Place de la Concorde, Paris, France
25 meters – wavelength of the broadcast radio shortwave band at 12 MHz
29 meters – height of the Savudrija Lighthouse
30 meters – height of Christ the Redeemer
31 meters – wavelength of the broadcast radio shortwave band at 9.7 MHz
32 meters – length of one arcsecond of latitude on the surface of the Earth
33.3 meters – height of the De Noord, the tallest windmill in the world
34 meters – height of the Split Point Lighthouse in Aireys Inlet, Victoria, Australia
40 meters – wingspan of the Mil Mi-26, the largest helicopter
40 meters – average depth beneath the seabed of the Channel tunnel
49 meters – wavelength of the broadcast radio shortwave band at 6.1 MHz
50 meters – length of a road train
50 meters – height of the Arc de Triomphe
55 meters – height of the Leaning Tower of Pisa
62 meters – length of Concorde
62.5 meters – height of Pyramid of Djoser
64 meters – wingspan of a Boeing 747-400
69 meters – length of an Antonov An-124 Ruslan
70 meters – length of the Bayeux Tapestry
70 meters – width of a typical association football field
73 meters – length of an Airbus A380
73 meters – height of the Taj Mahal
77 meters – length of a Boeing 747-8
88.4 meters – wingspan of an Antonov An-225 Mriya transport aircraft
93 meters – height of the Statue of Liberty (Liberty Enlightening the World)
96 meters – height of Big Ben
100 meters – wavelength of the lowest shortwave radio frequency, 3 MHz
Sports
11 meters – approximate width of a doubles tennis court
15 meters – width of a standard FIBA basketball court
15.24 meters – width of an NBA basketball court (50 feet)
18.44 meters – distance between the front of the pitcher's rubber and the rear point of home plate on a baseball field (60 feet, 6 inches)
20 meters – length of a cricket pitch (22 yards)
27.43 meters – distance between bases on a baseball field (90 feet)
28 meters – length of a standard FIBA basketball court
28.65 meters – length of an NBA basketball court (94 feet)
49 meters – width of an American football field (53 yards)
59.436 meters – width of a Canadian football field (65 yards)
70 meters – typical width of an association football field
91 meters – length of an American football field (100 yards, measured between the goal lines)
Nature
10 meters – average length of human digestive tract
12 meters – height of a saguaro cactus
12 meters – length of a whale shark, largest living fish
12 meters – wingspan of a Quetzalcoatlus, a pterosaur
12.8 meters – length of a Titanoboa, the largest snake to have ever lived
13 meters – length of a giant squid and colossal squid, the largest living invertebrates
15 meters – approximate distance the tropical circles of latitude are moving towards the equator and the polar circles are moving towards the poles each year due to a natural, gradual decrease in the Earth's axial tilt
16 meters – length of a sperm whale, the largest toothed whale
18 meters – height of a Sauroposeidon, the tallest-known dinosaur
20 meters – length of a Leedsichthys, the largest-known fish to have lived
21 meters – height of High Force waterfall in England
30.5 meters – length of the lion's mane jellyfish, the largest jellyfish in the world
33 meters – length of a blue whale, the largest animal on earth, living or extinct, in terms of mass
39 meters – length of a Supersaurus, the longest-known dinosaur and longest vertebrate
52 meters – height of Niagara Falls
55 meters – length of a bootlace worm, the longest-known animal
66 meters – highest possible sea level rise due to a complete melting of all ice on Earth
83 meters – height of a western hemlock
84 meters – height of General Sherman, the largest tree in the world
Astronomical
30 meters – diameter of a rapidly spinning meteoroid
30.8568 meters – 1 femtoparsec
32 meters – approximate diameter of 2008 HJ, a small meteoroid
1 hectometer
The hectometer (SI symbol: hm) is a unit of length in the metric system equal to 100 meters (10² m).
To compare different orders of magnitude this section lists lengths between 100 meters and 1,000 meters (1 kilometer).
Conversions
100 meters (sometimes termed a hectometer) is equal to:
328 feet
one side of a 1 hectare square
a fifth of a modern li, a Chinese unit of measurement
the approximate distance travelled by light in 300 nanoseconds
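The light-travel figure in the last item follows directly from the defined speed of light:

$$d = c\,t = 299{,}792{,}458\ \mathrm{m/s} \times 300\times10^{-9}\ \mathrm{s} \approx 89.9\ \mathrm{m} \approx 100\ \mathrm{m}.$$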
Human-defined scales and structures
100 meters – wavelength of the highest medium wave radio frequency, 3 MHz
100 meters – spacing of location marker posts on British motorways
110 meters – height of the Saturn V
122 meters – height of the Starship, the tallest rocket currently under development by SpaceX
138.8 meters – height of the Great Pyramid of Giza (Pyramid of Cheops)
139 meters – height of the world's tallest roller coaster, Kingda Ka
157 meters – height of the Cologne Cathedral
162 meters – height of the Ulm Minster, the tallest church building in the world
165 meters – height of the Dushanbe Flagpole, the tallest flagpole from May 2011 to September 2014
169 meters – height of the Washington Monument
171 meters – height of the Jeddah Flagpole, the tallest flagpole from September 2014 to December 2021
182 meters – height of the Statue of Unity, the world's tallest statue
187 meters – shortest wavelength of the broadcast radio AM band, 1600 kHz
192 meters – height of the Gateway Arch
202 meters – height of the Cairo Flagpole, the tallest flagpole as of December 2021
202 meters – length of the Széchenyi Chain Bridge connecting Buda and Pest
220 meters – height of the Hoover Dam
245 meters – length of the LZ 129 Hindenburg
270 meters – length of the Titanic
318 meters – height of The New York Times Building
318.9 meters – height of the Chrysler Building
328 meters – height of Auckland's Sky Tower, the tallest free-standing structure in the Southern Hemisphere (1996–2022)
330 meters – height of the Eiffel Tower (including antenna)
336 meters – height of the world's tallest bridge as of October 2023, the Millau Viaduct
364.75 meters – length of the Icon of the Seas
381 meters – height of the Empire State Building (to the roof)
400–800 meters – approximate heights of the world's tallest skyscrapers from 1931 to 2010
458 meters – length of the Knock Nevis, the world's largest supertanker
553.33 meters – height of the CN Tower, the tallest structure in North America
555 meters – longest wavelength of the broadcast radio AM band, 540 kHz
630 meters – height of the KVLY-TV mast, one of the tallest structures in the world
646 meters – height of the Warsaw radio mast, the world's tallest structure until its collapse in 1991
679 meters – height of Merdeka 118 in Kuala Lumpur, Malaysia, the second-tallest building in the world
828 meters – height of Burj Khalifa, world's tallest structure since 17 January 2009
1,000 meters – wavelength of the lowest mediumwave radio frequency, 300 kHz
Sports
100 meters – the distance a very fast human can run in about 10 seconds
100.584 meters – length of a Canadian football field between the goal lines (110 yards)
91.5–137 meters – range of permitted lengths of a soccer field
105 meters – length of football pitch (UEFA stadium categories 3 and 4)
105 meters – length of a typical football field
109.73 meters – total length of an American football field (120 yards, including the end zones)
110–150 meters – the width of an Australian football field
135–185 meters – the length of an Australian football field
137.16 meters – total length of a Canadian football field, including the end zones (150 yards)
Nature
115.5 meters – height of the world's tallest tree in 2007, the Hyperion sequoia
310 meters – maximum depth of Lake Geneva
340 meters – distance sound travels in air at sea level in one second; see Speed of sound
947 meters – height of the Tugela Falls, the highest waterfall in Africa
979 meters – height of the Angel Falls, the world's highest free-falling waterfall (Venezuela)
Astronomical
270 meters – length of 99942 Apophis
535 meters – length of 25143 Itokawa, a small asteroid visited by a spacecraft
1 kilometer
The kilometer (SI symbol: km) is a unit of length in the metric system equal to 1,000 meters (10³ m).
To help compare different orders of magnitude, this section lists lengths between 1 kilometer and 10 kilometers (10³ and 10⁴ meters).
Conversions
1 kilometer (unit symbol km) is equal to:
1,000 meters
0.621371 miles
1,093.61 yards
3,280.84 feet
39,370.1 inches
100,000 centimeters
1,000,000 millimeters
Side of a square of area 1 km²
Radius of a circle of area π km²
Human-defined scales and structures
1 km – wavelength of the highest long wave radio frequency, 300 kHz
1.008 km – proposed height of the Jeddah Tower, a megatall skyscraper under construction in Saudi Arabia
1.280 km – span of the Golden Gate Bridge (distance between towers)
1.609 km – 1 statute mile
1.852 km – 1 nautical mile, equal to 1 arcminute of latitude at the surface of the Earth
1.991 km – span of the Akashi Kaikyō Bridge
2.309 km – axial length of the Three Gorges Dam, the largest dam in the world located in China
3.991 km – length of the Akashi Kaikyō Bridge, longest suspension bridge in the world
4 km – width of Central Park
5.072 km – elevation of Tanggula Mountain Pass, below highest peak in the Tanggula Mountains, highest railway pass in the world
5.8 km – elevation of Cerro Aucanquilcha, highest road in the world, located in Chile
98 airports have paved runways from 4 km to 5.5 km in length.
8 km – length of Palm Jebel Ali, an artificial island built off the coast of Dubai
9.8 km – length of The World, an artificial archipelago that is also built off the coast of Dubai, whose islands resemble a world map
Nature
1.5 km – distance sound travels in water in one second
Geographical
1.637 km – maximum depth of Lake Baikal in Russia, the world's largest freshwater lake
2.228 km – height of Mount Kosciuszko, highest point on mainland Australia
Most of Manhattan is from 3 to 4 km wide.
3.776 km – height of Mount Fuji, highest peak in Japan
4.478 km – height of Matterhorn
4.509 km – height of Mount Wilhelm, highest peak in Papua New Guinea
4.810 km – height of Mont Blanc, highest peak in the Alps
4.884 km – height of Carstensz Pyramid, highest peak in Oceania
4.892 km – height of Mount Vinson, highest peak in Antarctica
5.610 km – height of Mount Damavand, highest peak in Iran
5.642 km – height of Mount Elbrus, highest peak in Europe
5.895 km – height of Mount Kilimanjaro, highest peak in Africa
5.959 km – height of Mount Logan, highest peak in Canada
6.190 km – height of Denali, highest peak in North America
6.959 km – height of Aconcagua, highest peak in South America
7.5 km – depth of Cayman Trench, deepest point in the Caribbean Sea
8.611 km – height of K2, second highest peak on Earth
8.848 km – height of Mount Everest, highest peak on Earth, on the border between Nepal and China
Astronomical
1 km – diameter of 1620 Geographos
1 km – very approximate size of the smallest-known moons of Jupiter
1.4 km – diameter of Dactyl, the first confirmed asteroid moon
4.8 km – diameter of 5535 Annefrank, an inner belt asteroid
5 km – diameter of 3753 Cruithne
5 km – length of PSR B1257+12
8 km – diameter of Themisto, one of Jupiter's moons
8 km – diameter of the Vela Pulsar
8.6 km – diameter of Callirrhoe, also known as Jupiter XVII
9.737 km – length of PSR B1919+21
10 kilometers (1 myriameter)
To help compare different orders of magnitude, this section lists lengths between 10 and 100 kilometers (10⁴ to 10⁵ meters). The myriameter (sometimes also spelled myriometer; 10,000 meters) is a deprecated unit name; the decimal metric prefix myria- (sometimes also written as myrio-) is obsolete and was not included among the prefixes when the International System of Units was introduced in 1960.
Conversions
10 kilometers is equal to:
10,000 meters
About 6.2 miles
1 mil (the Scandinavian mile), now standardized as 10 km:
The mil, commonly used in Norway and Sweden, was formerly 11,295 m in Norway and 10,688 m in Sweden.
farsang, unit of measure commonly used in Iran and Turkey
Sports
42.195 km – length of the marathon
Human-defined scales and structures
18 km – cruising altitude of Concorde
27 km – circumference of the Large Hadron Collider, the largest and highest energy particle accelerator
34.668 km – highest manned balloon flight (Malcolm D. Ross and Victor E. Prather on 4 May 1961)
38.422 km – length of the Second Lake Pontchartrain Causeway in Louisiana, US
39 km – undersea portion of the Channel tunnel
53.9 km – length of the Seikan Tunnel, the world's longest undersea rail tunnel
77 km – rough total length of the Panama Canal
Geographical
10 km – height of Mauna Kea in Hawaii, measured from its base on the ocean floor
11 km – deepest-known point of the ocean, Challenger Deep in the Mariana Trench
11 km – average height of the troposphere
14 km – width of the Strait of Gibraltar
21 km – length of Manhattan
22 km – narrowest width of the Cook Strait between New Zealand's main islands
23 km – depth of the largest earthquake ever recorded in the United Kingdom, in 1931 at the Dogger Bank of the North Sea
34 km – narrowest width of the English Channel at the Strait of Dover
50 km – approximate height of the stratosphere
90 km – width of the Bering Strait
Astronomical
10 km – diameter of the most massive neutron stars (3–5 solar masses)
13 km – mean diameter of Deimos, the smaller moon of Mars
20 km – diameter of the least massive neutron stars (1.44 solar masses)
20 km – diameter of Leda, one of Jupiter's moons
20 km – diameter of Pan, one of Saturn's moons
22 km – diameter of Phobos, the larger moon of Mars
27 km – height of Olympus Mons above the Mars reference level, the highest-known mountain of the Solar System
30.8568 km – 1 picoparsec
43 km – diameter difference of Earth's equatorial bulge
66 km – diameter of Naiad, the innermost of Neptune's moons
100 kilometers
A length of 100 kilometers (about 62 miles), as a rough amount, is relatively common in measurements on Earth and for some astronomical objects.
It is the altitude at which the FAI defines spaceflight to begin.
To help compare orders of magnitude, this section lists lengths between 100 and 1,000 kilometers (10⁵ and 10⁶ meters).
Conversions
A distance of 100 kilometers is equal to about 62 miles.
Human-defined scales and structures
100 km – the Karman line: the internationally recognized boundary of outer space
105 km – distance from Giridih to Bokaro
109 km – length of High Speed 1 between London and the Channel Tunnel
130 km – range of a Scud-A missile
163 km – length of the Suez Canal
164 km – length of the Danyang–Kunshan Grand Bridge
213 km – length of Paris Métro
217 km – length of the Grand Union Canal
223 km – length of the Madrid Metro
300 km – range of a Scud-B missile
386 km – altitude of the International Space Station
408 km – length of the London Underground (active track)
460 km – distance from London to Paris
470 km – distance from Dublin to London as the crow flies
600 km – range of a Scud-C missile
600 km – height above ground of the Hubble Space Telescope
804.67 km – (500 miles) distance of the Indy 500 automobile race
Geographical
42 km – width of Singapore
75 km – width of Rhode Island
111 km – distance covered by one degree of latitude on Earth's surface
120 km – width of Brunei
180 km – distance between Mumbai and Nashik
200 km – width of Qatar
203 km – length of Sognefjorden, the third-largest fjord in the world
220 km – distance between Pune and Nashik
240 km – width of Rwanda
240 km – widest width of the English Channel
400 km – width of West Virginia
430 km – length of the Pyrenees
450 km – length of the Grand Canyon
500 km – widest width of Sweden from east to west
501 km – width of Uganda
550 km – distance from San Francisco to Los Angeles as the crow flies
560 km – distance of Bordeaux–Paris, formerly the longest one-day professional cycling race
590 km – length of land boundary between Finland and Sweden
724 km – length of the Om River
800 km – width of Germany
871 km – distance from Sydney to Melbourne (along the Hume Highway)
897 km – length of the River Douro
900 km – distance from Berlin to Stockholm
956 km – distance from Washington, D.C., to Chicago, Illinois, as the crow flies
970 km – distance from Land's End to John o' Groats as the crow flies
Astronomical
100 km – the altitude at which the FAI defines spaceflight to begin
167 km – diameter of Amalthea, one of Jupiter's inner moons
200 km – width of Valles Marineris
220 km – diameter of Phoebe, the largest of Saturn's outer moons
300 km – the approximate distance travelled by light in one millisecond
340 km – diameter of Nereid, the third-largest moon of Neptune which has a highly elliptical orbit
350 km – lower bound of Low Earth orbit
420 km – diameter of Proteus, the second-largest moon of Neptune
468 km – diameter of the asteroid 4 Vesta
472 km – diameter of Miranda, one of Uranus's major moons
974.6 km – greatest diameter of 1 Ceres, the largest Solar System asteroid
1 megameter
The megameter (SI symbol: Mm) is a unit of length in the metric system equal to 1,000,000 meters (10⁶ m).
To help compare different orders of magnitude, this section lists lengths starting at 10⁶ m (1 Mm or 1,000 km).
Conversions
1 megameter is equal to:
1000 km
1 E+6 m (one million meters)
approximately 621.37 miles
1 E+12 μm (one trillion micrometers)
Side of a square with area 1,000,000 km²
Human-defined scales and structures
2.100 Mm – length of the proposed Iran–Pakistan–India gas pipeline
2.100 Mm – distance from Casablanca to Rome
2.288 Mm – length of the official Alaska Highway when it was built in the 1940s
3.069 Mm – length of Interstate 95 (from Houlton, Maine, to Miami, Florida)
3.846 Mm – length of U.S. Route 1 (from Fort Kent, Maine, to Key West, Florida)
5.000 Mm – width of the United States
5.007 Mm – estimated length of Interstate 90 (Seattle, Washington, to Boston, Massachusetts)
5.614 Mm – length of the Australian Dingo Fence
6.371 Mm – global-average Earth radius
6.4 Mm – length of the Great Wall of China
7.821 Mm – length of the Trans-Canada Highway, the world's longest national highway (from Victoria, British Columbia, to St. John's, Newfoundland)
8.836 Mm – road distance between Prudhoe Bay, Alaska, and Key West, Florida, the endpoints of the U.S. road network
8.852 Mm – aggregate length of the Great Wall of China, including trenches, hills and rivers
9.259 Mm – length of the Trans-Siberian railway
Sports
The Munda Biddi Trail in Western Australia, Australia, is over 1,000 km long – the world's longest off-road cycle trail
1.200 Mm – the length of the Paris–Brest–Paris bicycling event
Several endurance auto races are, or were, run for 1,000 km:
Bathurst 1000
1000 km Brands Hatch
1000 km Buenos Aires
1000 km Donington
1000 km Monza
1000 km Nürburgring
1000 km Silverstone
1000 km Spa
1000 km Suzuka
1000 km Zeltweg
Geographical
1.010 Mm – distance from San Diego to El Paso as the crow flies
1.100 Mm – length of Italy
1.200 Mm – length of California
1.200 Mm – width of Texas
1.500 Mm – length of the Gobi Desert
1.600 Mm – length of the Namib, the oldest desert on Earth
2.000 Mm – distance from Beijing to Hong Kong as the crow flies
2.300 Mm – length of the Great Barrier Reef
2.800 Mm – narrowest width of Atlantic Ocean (Brazil-West Africa)
2.850 Mm – length of the Danube river
2.205 Mm – length of Sweden's total land boundaries
2.515 Mm – length of Norway's total land boundaries
3.690 Mm – length of the Volga river, longest in Europe
4.000 Mm – length of the Kalahari Desert
4.350 Mm – length of the Yellow River
4.600 Mm – width of the Mediterranean Sea
4.800 Mm – length of the Sahara
4.800 Mm – widest width of Atlantic Ocean (U.S.-Northern Africa)
5.100 Mm – distance from Dublin to New York as the crow flies
6.270 Mm – length of the Mississippi-Missouri River system
6.380 Mm – length of the Yangtze River
6.400 Mm – length of the Amazon River
6.758 Mm – length of the Nile system, longest on Earth
8.200 Mm – approximate distance from Dublin to San Francisco
Astronomical
1.000 Mm – estimated shortest axis of the triaxial dwarf planet Haumea
1.186 Mm – diameter of Charon, the largest moon of Pluto
1.280 Mm – diameter of the trans-Neptunian object 50000 Quaoar
1.436 Mm – diameter of Iapetus, one of Saturn's major moons
1.578 Mm – diameter of Titania, the largest of Uranus's moons
1.960 Mm – estimated longest axis of Haumea
2.326 Mm – diameter of the dwarf planet Eris, the largest trans-Neptunian object found to date
2.376 Mm – diameter of Pluto
2.707 Mm – diameter of Triton, largest moon of Neptune
3.122 Mm – diameter of Europa, the smallest Galilean satellite of Jupiter
3.476 Mm – diameter of Earth's Moon
3.643 Mm – diameter of Io, a moon of Jupiter
4.821 Mm – diameter of Callisto, a moon of Jupiter
4.879 Mm – diameter of Mercury
5.150 Mm – diameter of Titan, the largest moon of Saturn
5.262 Mm – diameter of Jupiter's moon Ganymede, the largest moon in the Solar System
6.371 Mm – radius of Earth
6.792 Mm – diameter of Mars
10 megameters
To help compare different orders of magnitude, this section lists lengths starting at 10⁷ meters (10 megameters or 10,000 kilometers).
Conversions
10 megameters (10 Mm) is equal to:
6,215 miles
side of a square of area 100,000,000 square kilometers (km²)
radius of a circle of area 314,159,265 km²
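The circle figure follows from the area formula for a circle of radius 10 Mm (10,000 km):

$$A = \pi r^2 = \pi \times (10{,}000\ \mathrm{km})^2 \approx 314{,}159{,}265\ \mathrm{km}^2 .$$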
Human-defined scales and structures
11.085 Mm – length of the Kyiv-Vladivostok railway, a longer variant of the Trans-Siberian railway
13.300 Mm – length of roads rehabilitated and widened under the National Highway Development Project (launched in 1998) in India
39.000 Mm – length of the SEA-ME-WE 3 optical submarine telecommunications cable, joining 39 points between Norden, Germany, and Okinawa, Japan
67.000 Mm – total length of National Highways in India
80.000 Mm – 20,000 (metric, French) leagues (see Jules Verne, Twenty Thousand Leagues Under the Seas)
Geographical
10 Mm – approximate altitude of the outer boundary of the exosphere
10.001 Mm – length of the meridian arc from the North Pole to the Equator (the original definition of the meter was based on this length)
40.000 Mm – length of the Ring of Fire
60.000 Mm – total length of the mid-ocean ridges
Astronomical
12.000 Mm – diameter of Sirius B, a white dwarf
12.104 Mm – diameter of Venus
12.742 Mm – diameter of Earth
12.900 Mm – minimum distance of the meteoroid from the centre of Earth on 31 March 2004, closest on record
14.000 Mm – smallest diameter of Jupiter's Great Red Spot
19.000 Mm – separation between Pluto and Charon
30.8568 Mm – 1 nanoparsec
34.770 Mm – minimum distance of the asteroid 99942 Apophis on 13 April 2029 from the centre of Earth
35.786 Mm – altitude of geostationary orbit
40.005 Mm – polar circumference of the Earth
40.077 Mm – equatorial circumference of the Earth
49.528 Mm – diameter of Neptune
51.118 Mm – diameter of Uranus
100 megameters
To help compare different orders of magnitude, this section lists lengths starting at 10⁸ meters (100 megameters or 100,000 kilometers or 62,150 miles).
102 Mm – diameter of HD 149026 b, an unusually dense Jovian planet
115 Mm – width of Saturn's Rings
120 Mm – diameter of EBLM J0555-57Ab, the smallest-known star
120 Mm – diameter of Saturn
142 Mm – diameter of Jupiter, the largest planet in the Solar System
170 Mm – diameter of TRAPPIST-1, a star discovered to have seven planets around it
174 Mm – diameter of OGLE-TR-122b, one of the smallest known stars
180 Mm – approximate distance a person walks in a lifetime
215 Mm – diameter of Proxima Centauri, the nearest star to the Solar System
257 Mm – diameter of TrES-4, one of the largest exoplanets
260 Mm – diameter of the Barnard's Star
272 Mm – diameter of WASP-12b
299.792 Mm – one light-second; the distance light travels in vacuum in one second (see speed of light)
314 Mm – diameter of CT Cha b
384.4 Mm (238,855 mi) – average Earth–Moon distance
671 Mm – separation between Jupiter and Europa
696 Mm – radius of the Sun
989 Mm – diameter of Epsilon Indi, one of the nearest stars to Earth
1 gigameter
The gigameter (SI symbol: Gm) is a unit of length in the metric system equal to 1,000,000,000 meters (10⁹ m).
To help compare different distances this section lists lengths starting at 10⁹ meters (1 gigameter (Gm) or 1 billion meters).
1.2 Gm – separation between Saturn and Titan
1.39 Gm – diameter of the Sun
1.5 Gm – orbit from Earth of the James Webb Space Telescope
1.71 Gm – diameter of Alpha Centauri A, one of the closest stars.
2.19 Gm – closest approach of Comet Lexell to Earth, happened on 1 July 1770; closest comet approach on record
2.38 Gm – diameter of Sirius A, the brightest star in the night sky
3 Gm – total length of "wiring" in the human brain
3.5 Gm – diameter of Vega
4.2 Gm – diameter of Algol B
4.3 Gm – circumference of Sun
5.0 Gm – closest approach of Comet Halley to Earth, happened on 10 April 837
5.0 Gm – proposed arm length of the giant triangular Michelson interferometer of the Laser Interferometer Space Antenna (LISA), planned to begin observations in the 2030s
7.9 Gm – diameter of Gamma Orionis, a blue dwarf or blue giant
9.0 Gm – estimated diameter of the event horizon of Sagittarius A*, the supermassive black hole in the center of the Milky Way galaxy
10 gigameters
To help compare different distances this section lists lengths starting at 10¹⁰ meters (10 gigameters (Gm) or 10 million kilometers, or 0.07 astronomical units).
10.4 Gm – diameter of Spica, an oval-shaped blue giant star and a nearby supernova candidate.
12.6 Gm – diameter of Pollux, the closest red giant star to the Sun. It is a red clump star fusing helium into carbon at its core.
15 Gm – closest distance of Comet Hyakutake from Earth
18 Gm – one light-minute
24 Gm – radius of a heliostationary orbit
30.8568 Gm – 1 microparsec
35 Gm – approximate diameter of Arcturus, a close red giant star. It is on the red giant branch, fusing hydrogen into helium in a shell surrounding an inert helium core.
46 Gm – perihelion distance of Mercury
55 Gm – 60,000-year perigee of Mars (last achieved on 27 August 2003)
58 Gm – average passing distance between Earth and Mars at the moment they overtake each other in their orbits
61 Gm – diameter of Aldebaran, a red giant branch star
70 Gm – aphelion distance of Mercury
76 Gm – Neso's apocentric distance; greatest distance of a natural satellite from its parent planet (Neptune)
100 gigameters
To help compare distances at different orders of magnitude this section lists lengths starting at 10¹¹ meters (100 gigameters or 100 million kilometers or 0.7 astronomical units).
103 Gm (0.69 au) – diameter of Rigel
109 Gm (0.7 au) – distance between Venus and the Sun
149.6 Gm (93.0 million mi; 1.0 au) – average distance between the Earth and the Sun – the original definition of the astronomical unit
199 Gm (1.3 au) – diameter of Rho Persei, an asymptotic giant branch star, fusing carbon into neon in a shell surrounding an inert core.
228 Gm (1.5 au) – distance between Mars and the Sun
248 Gm (1.7 au) – diameter of Enif, a small red supergiant star in the constellation Pegasus
280 Gm (1.9 au) – diameter of Deneb, a blue supergiant and the brightest star in the Cygnus constellation
511 Gm (3.4 au) – average diameter of Mira, a pulsating red giant and the progenitor of the Mira variables. It is an asymptotic giant branch star.
570 Gm (3.8 au) – length of the tail of Comet Hyakutake measured by Ulysses; the actual value could be much higher
590 Gm (3.9 au) – diameter of the Pistol Star, a blue hypergiant star
591 Gm (4.0 au) – minimum distance between the Earth and Jupiter
780 Gm (5.2 au) – average distance between Jupiter and the Sun
785 Gm (5.25 au) – diameter of Rho Cassiopeiae, a rare yellow hypergiant star
947 Gm (6.4 au) – diameter of Antares A
965 Gm (6.4 au) – maximum distance between the Earth and Jupiter
1 terameter
The terameter (SI symbol: Tm) is a unit of length in the metric system equal to 1,000,000,000,000 meters (10¹² m).
To help compare different distances, this section lists lengths starting at 10¹² m (1 Tm or 1 billion km or 6.7 astronomical units).
≈1 Tm – 6.7 au – diameter of the red supergiant Betelgeuse based on multiple angular diameter estimates
1.032 Tm – 6.9 au – diameter of the blue hypergiant Eta Carinae (at optical depth 2/3)
1.079 Tm – 7.2 au – one light-hour
1.114 Tm – 7.5 au – diameter of WOH G64, a star in the Large Magellanic Cloud, which recently transformed from a red hypergiant to a yellow hypergiant
1.4 Tm – 9.5 au – average distance between Saturn and the Sun
1.47 Tm – 9.9 au – diameter of HR 5171 A, a yellow hypergiant star.
1.5 Tm – 10 au – estimated diameter of VV Cephei A, a red hypergiant with a blue dwarf companion.
1.75 Tm – 11.7 au – estimated diameter of Mu Cephei, a red supergiant (possibly hypergiant) among the largest-known stars.
2 Tm – 13.2 au – estimated diameter of VY Canis Majoris, a red hypergiant that is among the largest-known stars
2.142 Tm – 14.3 au – estimated diameter of WOH G64, prior to its transformation into a yellow hypergiant.
2.9 Tm – 19.4 au – average distance between Uranus and the Sun
4.4 Tm – 29.4 au – perihelion distance of Pluto
4.5 Tm – 30.1 au – average distance between Neptune and the Sun
4.5 Tm – 30.1 au – inner radius of the Kuiper belt
5.7 Tm – 38.1 au – perihelion distance of Eris
6.0 Tm – 40.5 au – distance from Earth at which the Pale Blue Dot photograph was taken.
7.3 Tm – 48.8 au – aphelion distance of Pluto
7.5 Tm – 50.1 au – outer boundary of the Kuiper Belt
10 terameters
To help compare different distances this section lists lengths starting at 10¹³ m (10 Tm or 10 billion km or 67 astronomical units).
10 Tm – 67 AU – diameter of a hypothetical quasi-star
11.1 Tm – 74.2 AU – distance that Voyager 1 began detecting returning particles from termination shock
11.4 Tm – 76.2 AU – perihelion distance of 90377 Sedna
12.1 Tm – 70 to 90 AU – distance to termination shock (Voyager 1 crossed at 94 AU)
12.9 Tm – 86.3 AU – distance to 90377 Sedna in March 2014
13.2 Tm – 88.6 AU – distance to Pioneer 11 in March 2014
14.1 Tm – 94.3 AU – estimated radius of the Solar System
14.4 Tm – 96.4 AU – distance to Eris in March 2014 (now near its aphelion)
15.1 Tm – 101 AU – distance to heliosheath
16.5 Tm – 111 AU – distance to Pioneer 10 as of March 2014
16.6 Tm – 111.2 AU – distance to Voyager 2 as of May 2016
18 Tm – 123.5 AU – distance from the Sun to 2018 VG18 (nicknamed "Farout"), one of the farthest observed dwarf planet candidates in the Solar System
20.0 Tm – 135 AU – distance to Voyager 1 as of May 2016
20.6 Tm – 138 AU – distance to Voyager 1 as of late February 2017
21.1 Tm – 141 AU – distance to Voyager 1 as of November 2017
24.8 Tm – 166 AU – distance to Voyager 1 as of November 2024
25.9 Tm – 173 AU – one light-day
30.8568 Tm – 206.3 AU – 1 milliparsec
55.7 Tm – 371 AU – aphelion distance of the comet Hale-Bopp
100 terameters
To help compare different distances this section lists lengths starting at 10¹⁴ m (100 Tm or 100 billion km or 670 astronomical units).
140 Tm – 937 AU – aphelion distance of 90377 Sedna
172 Tm – 1150 AU – Schwarzschild diameter of H1821+643, one of the most massive black holes known
181 Tm – 1210 AU – one light-week
308.568 Tm – 2063 AU – 1 centiparsec
757 Tm – 5059 AU – radius of the Stingray Nebula
777 Tm – 5180 AU – one light-month
1 petameter
The petameter (SI symbol: Pm) is a unit of length in the metric system equal to 10¹⁵ meters.
To help compare different distances this section lists lengths starting at 10¹⁵ m (1 Pm or 1 trillion km or 6,685 astronomical units (AU) or 0.11 light-years).
1.0 Pm = 0.105702341 light-years
1.9 Pm ± 0.5 Pm = 12,000 AU = 0.2 light-year radius of Cat's Eye Nebula's inner core
3.08568 Pm = 20,626 AU = 1 deciparsec
4.7 Pm = 30,000 AU = half-light-year diameter of Bok globule Barnard 68
7.5 Pm – 50,000 AU – possible outer boundary of Oort cloud (other estimates are 75,000 to 125,000 or even 189,000 AU (1.18, 2, and 3 light-years, respectively))
9.5 Pm – 63,241.1 AU – one light-year, the distance light travels in one year
9.9 Pm – 66,000 AU – aphelion distance of comet C/1999 F1 (Catalina)
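The one-light-year entry above follows from the defined speed of light and the Julian year of 365.25 days (31,557,600 s):

$$1\ \mathrm{ly} = 299{,}792{,}458\ \mathrm{m/s} \times 31{,}557{,}600\ \mathrm{s} \approx 9.461\times10^{15}\ \mathrm{m} \approx 9.5\ \mathrm{Pm}.$$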
10 petameters
To help compare different distances this section lists lengths starting at 10¹⁶ m (10 Pm or 66,800 AU, 1.06 light-years).
15 Pm – 1.59 light-years – possible outer radius of Oort cloud
20 Pm – 2.11 light-years – maximum extent of influence of the Sun's gravitational field
30.9 Pm – 3.26 light-years – 1 parsec
39.9 Pm – 4.22 light-years – distance to Proxima Centauri (nearest star to Sun)
81.3 Pm – 8.59 light-years – distance to Sirius
94.6 Pm – 1 light-decade
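The parsec entry in the list above (30.9 Pm) is defined as the distance at which one astronomical unit subtends an angle of one arcsecond:

$$1\ \mathrm{pc} = \frac{1\ \mathrm{au}}{\tan(1'')} \approx \frac{1.496\times10^{11}\ \mathrm{m}}{4.848\times10^{-6}} \approx 3.086\times10^{16}\ \mathrm{m} \approx 30.9\ \mathrm{Pm}.$$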
100 petameters
To help compare different distances this section lists lengths between 10¹⁷ m (100 Pm or 11 light-years) and 10¹⁸ m (106 light-years).
110 Pm – 12 light-years – Distance to Tau Ceti
230 Pm – 24 light-years – Diameter of the Orion Nebula
240 Pm – 25 light-years – Distance to Vega
260 Pm – 27 light-years – Distance to Chara, a star approximately as bright as the Sun. Its faintness gives an idea how the Sun would appear when viewed from this distance.
308.568 Pm – 32.6 light-years – 1 dekaparsec
350 Pm – 37 light-years – distance to Arcturus
373.1 Pm – 39.44 light-years – distance to TRAPPIST-1, a star recently discovered to have 7 planets around it
400 Pm – 42 light-years – distance to Capella
620 Pm – 65 light-years – distance to Aldebaran
750 Pm – 79.36 light-years – distance to Regulus
900 Pm – 92.73 light-years – distance to Algol
946 Pm – 1 light-century
1 exameter
The exameter (SI symbol: Em) is a unit of length in the metric system equal to 10¹⁸ meters. To help compare different distances this section lists lengths between 10¹⁸ m (1 Em or 105.7 light-years) and 10¹⁹ m (10 Em or 1,057 light-years).
1.2 Em – 129 light-years – diameter of Messier 13 (a typical globular cluster)
1.6 Em – 172 ± 12.5 light-years – diameter of Omega Centauri (one of the largest-known globular clusters, perhaps containing over a million stars)
3.08568 Em – 326.1 light-years – 1 hectoparsec
3.1 Em – 310 light-years – distance to Canopus according to Hipparcos
3.9 Em – 410 light-years – distance to Betelgeuse according to Hipparcos
6.2 Em – 650 light-years – distance to the Helix Nebula, located in the constellation Aquarius
8.2 Em – 860 light-years – distance to Rigel according to Hipparcos
9.46 Em – 1,000 light-years – 1 light-millennium
10 exameters
To help compare different orders of magnitude, this section lists distances starting at 10 Em (10¹⁹ m or 1,100 light-years).
10.6 Em – 1,120 light-years – distance to WASP-96b
13 Em – 1,300 light-years – distance to the Orion Nebula
14 Em – 1,500 light-years – approximate thickness of the plane of the Milky Way galaxy at the Sun's location
14.2 Em – 1,520 light-years – diameter of NGC 604
30.8568 Em – 3,261.6 light-years – 1 kiloparsec
31 Em – 3,200 light-years – distance to Deneb according to Hipparcos
46 Em – 4,900 light-years – distance to OGLE-TR-56, the first extrasolar planet discovered using the transit method
47 Em – 5,000 light-years – distance to the Boomerang nebula, coldest place known (1 K)
53 Em – 5,600 light-years – distance to the globular cluster M4 and the extrasolar planet PSR B1620-26 b within it
61 Em – 6,500 light-years – distance to Perseus Spiral Arm (next spiral arm out in the Milky Way galaxy)
71 Em – 7,500 light-years – distance to Eta Carinae
94.6073 Em – 1 light-decamillennium = 10,000 light-years
100 exameters
To help compare different orders of magnitude, this section lists distances starting at 100 Em (10²⁰ m or 11,000 light-years).
150 Em – 16,000 light-years – diameter of the Small Magellanic Cloud, a dwarf galaxy orbiting the Milky Way
200 Em – 21,500 light-years – distance to OGLE-2005-BLG-390Lb
240 Em – 25,000 light-years – distance to the Canis Major Dwarf Galaxy
260 Em – 28,000 light-years – distance to the center of the Galaxy
400 Em – 48,000 light-years – diameter of the Fireworks Galaxy
830 Em – 88,000 light-years – distance to the Sagittarius Dwarf Elliptical Galaxy
946 Em – 1 light-centum-millennium = 100,000 light-years
1 zettameter
The zettameter (SI symbol: Zm) is a unit of length in the metric system equal to 10²¹ meters.
To help compare different orders of magnitude, this section lists distances starting at 1 Zm (10²¹ m or 110,000 light-years).
1.7 Zm – 179,000 light-years – distance to the Large Magellanic Cloud, largest satellite galaxy of the Milky Way
<1.9 Zm – <200,000 light-years – revised estimated diameter of the disc of the Milky Way Galaxy. The size was previously thought to be half of this.
2.0 Zm – 210,000 light-years – distance to the Small Magellanic Cloud
2.8 Zm – 300,000 light-years – distance to the Intergalactic Wanderer, one of the most distant globular clusters of Milky Way
8.5 Zm – 900,000 light-years – distance to the Leo I Dwarf Galaxy, farthest-known Milky Way satellite galaxy
10 zettameters
To help compare different orders of magnitude, this section lists distances starting at 10 Zm (10²² m or 1.1 million light-years).
24 Zm – 2.5 million light-years – distance to the Andromeda Galaxy, the nearest major galaxy.
30.8568 Zm – 3.2616 million light-years – 1 megaparsec
40 Zm – 4.2 million light-years – distance to the IC 10, a distant member of the Local Group of galaxies
49.2 Zm – 5.2 million light-years – width of the Local Group of galaxies
95 Zm – 10 million light-years – distance to the Sculptor Galaxy in the Sculptor Group of galaxies
95 Zm – 10 million light-years – distance to the Maffei 1, the nearest giant elliptical galaxy in the Maffei 1 Group
100 zettameters
To help compare different orders of magnitude, this section lists distances starting at 100 Zm (10²³ m or 11 million light-years).
140 Zm – 15 million light-years – distance to Centaurus A galaxy
250 Zm – 27 million light-years – distance to the Pinwheel Galaxy
280 Zm – 30 million light-years – distance to the Sombrero Galaxy
570 Zm – 60 million light-years – approximate distance to the Virgo cluster, nearest galaxy cluster
620 Zm – 65 million light-years – approximate distance to the Fornax cluster
800 Zm – 85 million light-years – approximate distance to the Eridanus cluster
1 yottameter
The yottameter (SI symbol: Ym) is a unit of length in the metric system equal to 10²⁴ meters.
To help compare different orders of magnitude, this section lists distances starting at 1 Ym (10²⁴ m or 105.702 million light-years).
1.2 Ym – 127 million light-years – distance to the closest observed gamma ray burst GRB 980425
1.3 Ym – 137 million light-years – distance to the Centaurus Cluster of galaxies, the nearest large supercluster
1.9 Ym – 201 million light-years – diameter of the Local Supercluster
2.2–2.4 Ym – 225 to 250 million light-years – one light-galactic-year, the distance light travels in vacuum in one galactic year
2.8 Ym – 296 million light-years – distance to the Coma Cluster
3.15 Ym – 330 million light years – diameter of the Boötes Void
3.2 Ym – 338 million light-years – distance to Stephan's Quintet
4.7 Ym – 496 million light-years – length of the CfA2 Great Wall, one of the largest observed superstructures in the Universe
6.1 Ym – 645 million light-years – distance to the Shapley Supercluster
9.5 Ym – 996 million light-years – diameter of the Eridanus Supervoid
10 yottameters
To help compare different orders of magnitude, this section lists distances starting at 10 Ym (10²⁵ m or 1.1 billion light-years). At this scale, expansion of the universe becomes significant. Distances to these objects are derived from their measured redshifts, which depend on the cosmological models used.
13 Ym – 1.37 billion light-years – length of the South Pole Wall
13 Ym – 1.38 billion light-years – length of the Sloan Great Wall
18 Ym – redshift 0.16 – 1.9 billion light-years – distance to the quasar 3C 273 (light travel distance)
30.8568 Ym – 3.2616 billion light-years – 1 gigaparsec
31.2 Ym – 3.3 billion light-years – length of the Giant Arc, a large cosmic structure discovered in 2021
33 Ym – 3.5 billion light-years – maximum distance of the 2dF Galaxy Redshift Survey (light travel distance)
37.8 Ym – 4 billion light-years – length of the Huge-LQG
75 Ym – redshift 0.95 – 8 billion light-years – approximate distance to the supernova SN 2002dd in the Hubble Deep Field North (light travel distance)
85 Ym – redshift 1.6 – 9 billion light-years – approximate distance to the gamma-ray burst GRB 990123 (light travel distance)
94.6 Ym – 10 billion light-years – approximate distance to quasar OQ172
94.6 Ym – 10 billion light-years – length of the Hercules–Corona Borealis Great Wall, one of the largest and most massive cosmic structures known
100 yottameters
To help compare different orders of magnitude, this section lists distances starting at 100 Ym (10²⁶ m or 11 billion light-years). At this scale, expansion of the universe becomes significant. Distances to these objects are derived from their measured redshifts, which depend on the cosmological models used.
124 Ym – redshift 7.54 – 13.1 billion light-years – light travel distance (LTD) to the quasar ULAS J1342+0928, the most distant-known quasar as of 2017
130 Ym – redshift 1,000 – 13.8 billion light-years – distance (LTD) to the source of the cosmic microwave background radiation; radius of the observable universe measured as a LTD
260 Ym – 27.4 billion light-years – diameter of the observable universe (double LTD)
440 Ym – 46 billion light-years – radius of the universe measured as a comoving distance
590 Ym – 62 billion light-years – cosmological event horizon: the largest comoving distance from which light will ever reach us (the observer) at any time in the future
886.48 Ym – 93.7 billion light-years – the diameter of the observable universe (twice the particle horizon); however, there might be unobserved distances that are even greater.
1 ronnameter
The ronnameter (SI symbol: Rm) is a unit of length in the metric system equal to 10²⁷ meters.
To help compare different orders of magnitude, this section lists distances starting at 1 Rm (10²⁷ m or 105.7 billion light-years). At this scale, expansion of the universe becomes significant. Distances to these objects are derived from their measured redshifts, which depend on the cosmological models used.
>1 Rm – >105.7 billion light-years – size of the universe beyond the cosmic light horizon, depending on its curvature; if the curvature is zero (i.e. the universe is spatially flat), the value can be infinite (see Shape of the universe).
2.764 Rm – 292.2 billion light-years – circumference of the observable universe (treating the observable universe as a sphere)
≈10^(10^(10^122)) light-years – the possible size of the universe after cosmological inflation
≈∞ light-years – theoretical size of the multiverse if it exists.
See also
List of examples of lengths
Fermi problem
Scale (analytical tool)
Spatial scale
The Scale of the Universe
Notes
References
External links
How Big Are Things? – displays orders of magnitude in successively larger rooms.
Powers of Ten – Travel across the Universe.
Cosmos – Journey from microcosmos to macrocosmos (Digital Nature Agency).
Scale of the universe – interactive guide to length magnitudes
Length
Length
Lists by length | Orders of magnitude (length) | [
"Physics",
"Mathematics"
] | 16,723 | [
"Scalar physical quantities",
"Physical quantities",
"Distance",
"Quantity",
"Size",
"Length",
"Wikipedia categories named after physical quantities",
"Orders of magnitude",
"Units of measurement"
] |
203,875 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28mass%29 | To help compare different orders of magnitude, the following lists describe various mass levels between 10⁻⁶⁷ kg and 10⁵² kg. The least massive thing listed here is a graviton, and the most massive thing is the observable universe. Typically, an object having greater mass will also have greater weight (see mass versus weight), especially if the objects are subject to the same gravitational field strength.
Units of mass
The table at right is based on the kilogram (kg), the base unit of mass in the International System of Units (SI). The kilogram is the only standard unit to include an SI prefix (kilo-) as part of its name. The gram (10⁻³ kg) is an SI derived unit of mass. However, the names of all SI mass units are based on the gram, rather than on the kilogram; thus 10³ kg is a megagram (10⁶ g), not a *kilokilogram.
The tonne (t) is an SI-compatible unit of mass equal to a megagram (Mg), or 10³ kg. The unit is in common use for masses above about 10³ kg and is often used with SI prefixes. For example, a gigagram (Gg) or 10⁹ g is 10³ tonnes, commonly called a kilotonne.
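Because all SI mass prefixes attach to the gram, converting between prefixed mass units reduces to multiplying by powers of ten. A minimal Python sketch (the table and function names are arbitrary):

```python
# Illustrative sketch: SI prefixes attach to the gram, so conversion
# between prefixed mass units is multiplication by a power of ten.
PREFIX_FACTORS = {          # multiplier relative to 1 gram
    "micro": 1e-6, "milli": 1e-3, "": 1.0,
    "kilo": 1e3, "mega": 1e6, "giga": 1e9,
}

def grams(value: float, prefix: str) -> float:
    """Convert a prefixed gram quantity to grams."""
    return value * PREFIX_FACTORS[prefix]

# 1 gigagram = 1e9 g = 1e6 kg = 1e3 tonnes, i.e. one kilotonne:
assert grams(1, "giga") == 1e9
print(grams(1, "giga") / 1e6, "tonnes")   # -> 1000.0 tonnes
```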
Other units
Other units of mass are also in use. Historical units include the stone, the pound, the carat, and the grain.
For subatomic particles, physicists use the mass equivalent to the energy represented by an electronvolt (eV). At the atomic level, chemists use the mass of one-twelfth of a carbon-12 atom (the dalton). Astronomers use the mass of the Sun (the solar mass, M☉).
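The electronvolt-based mass unit mentioned above follows from E = mc². A small Python sketch (the function name is arbitrary; both constants are exact by the 2019 SI definitions):

```python
# Illustrative sketch: mass equivalent of one electronvolt via E = m c^2.
E_EV = 1.602176634e-19      # joules per electronvolt (exact)
C = 299_792_458.0           # speed of light in m/s (exact)

def ev_to_kg(energy_ev: float) -> float:
    """Mass equivalent (kg) of an energy given in electronvolts."""
    return energy_ev * E_EV / C**2

print(ev_to_kg(1.0))        # ~1.78e-36 kg per eV
print(ev_to_kg(0.511e6))    # electron rest mass, ~9.1e-31 kg
```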
The least massive things: below 10⁻²⁴ kg
Unlike other physical quantities, mass–energy does not have an a priori expected minimal quantity, or an observed basic quantum as in the case of electric charge. Planck's law allows for the existence of photons with arbitrarily low energies. Consequently, there can only ever be an experimental upper bound on the mass of a supposedly massless particle; in the case of the photon, this confirmed upper bound is of the order of 10⁻²⁷ eV ≈ 10⁻⁶³ kg.
10⁻²⁴ to 10⁻¹⁸ kg
10⁻¹⁸ to 10⁻¹² kg
10⁻¹² to 10⁻⁶ kg
10⁻⁶ to 1 kg
1 kg to 10⁵ kg
10⁶ to 10¹¹ kg
10¹² to 10¹⁷ kg
10¹⁸ to 10²³ kg
10²⁴ to 10²⁹ kg
10³⁰ to 10³⁵ kg
10³⁶ to 10⁴¹ kg
The most massive things: 10⁴² kg and greater
See also
Lists of astronomical objects
Notes
External links
Mass units conversion calculator
Mass units conversion calculator JavaScript
Mass
Mass | Orders of magnitude (mass) | [
"Physics",
"Mathematics"
] | 562 | [
"Scalar physical quantities",
"Units of measurement",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"Wikipedia categories named after physical quantities",
"Orders of magnitude",
"Matter"
] |
204,464 | https://en.wikipedia.org/wiki/Intensive%20and%20extensive%20properties | Physical or chemical properties of materials and systems can often be categorized as being either intensive or extensive, according to how the property changes when the size (or extent) of the system changes.
The terms "intensive and extensive quantities" were introduced into physics by German mathematician Georg Helm in 1898, and by American physicist and chemist Richard C. Tolman in 1917.
According to International Union of Pure and Applied Chemistry (IUPAC), an intensive property or intensive quantity is one whose magnitude is independent of the size of the system.
An intensive property is not necessarily homogeneously distributed in space; it can vary from place to place in a body of matter and radiation. Examples of intensive properties include temperature, T; refractive index, n; density, ρ; and hardness, η.
By contrast, an extensive property or extensive quantity is one whose magnitude is additive for subsystems.
Examples include mass, volume and entropy.
Not all properties of matter fall into these two categories. For example, the square root of the volume is neither intensive nor extensive. If a system is doubled in size by juxtaposing a second identical system, the value of an intensive property equals the value for each subsystem and the value of an extensive property is twice the value for each subsystem. However, the property √V is instead multiplied by √2.
The distinction between intensive and extensive properties has some theoretical uses. For example, in thermodynamics, the state of a simple compressible system is completely specified by two independent, intensive properties, along with one extensive property, such as mass. Other intensive properties are derived from those two intensive variables.
Intensive properties
An intensive property is a physical quantity whose value does not depend on the amount of substance which was measured. The most obvious intensive quantities are ratios of extensive quantities. In a homogeneous system divided into two halves, all its extensive properties, in particular its volume and its mass, are divided into two halves. All its intensive properties, such as the mass per volume (mass density) or volume per mass (specific volume), must remain the same in each half.
The temperature of a system in thermal equilibrium is the same as the temperature of any part of it, so temperature is an intensive quantity. If the system is divided by a wall that is permeable to heat or to matter, the temperature of each subsystem is identical. Additionally, the boiling temperature of a substance is an intensive property. For example, the boiling temperature of water is 100 °C at a pressure of one atmosphere, regardless of the quantity of water remaining as liquid.
Examples
Examples of intensive properties include:
charge density, ρ (or ne)
chemical potential, μ
color
concentration, c
energy density, ρ
magnetic permeability, μ
mass density, ρ (or specific gravity)
melting point and boiling point
molality, m or b
pressure, p
refractive index
specific conductance (or electrical conductivity)
specific heat capacity, cp
specific internal energy, u
specific rotation, [α]
specific volume, v
standard reduction potential, E°
surface tension
temperature, T
thermal conductivity
velocity v
viscosity
See List of materials properties for a more exhaustive list specifically pertaining to materials.
Extensive properties
An extensive property is a physical quantity whose value is proportional to the size of the system it describes, or to the quantity of matter in the system. For example, the mass of a sample is an extensive quantity; it depends on the amount of substance. The related intensive quantity is the density, which is independent of the amount. The density of water is approximately 1 g/mL whether you consider a drop of water or a swimming pool, but the mass is different in the two cases.
Dividing one extensive property by another extensive property gives an intensive property—for example: mass (extensive) divided by volume (extensive) gives density (intensive).
Any extensive quantity E for a sample can be divided by the sample's volume, to become the "E density" for the sample;
similarly, any extensive quantity "E" can be divided by the sample's mass, to become the sample's "specific E";
extensive quantities "E" which have been divided by the number of moles in their sample are referred to as "molar E".
Examples
Examples of extensive properties include:
amount of substance, n
enthalpy, H
entropy, S
Gibbs energy, G
heat capacity, Cp
Helmholtz energy, A or F
internal energy, U
spring stiffness, K
mass, m
volume, V
Conjugate quantities
In thermodynamics, some extensive quantities measure amounts that are conserved in a thermodynamic process of transfer. They are transferred across a wall between two thermodynamic systems or subsystems. For example, species of matter may be transferred through a semipermeable membrane. Likewise, volume may be thought of as transferred in a process in which there is a motion of the wall between two systems, increasing the volume of one and decreasing that of the other by equal amounts.
On the other hand, some extensive quantities measure amounts that are not conserved in a thermodynamic process of transfer between a system and its surroundings. In a thermodynamic process in which a quantity of energy is transferred from the surroundings into or out of a system as heat, a corresponding quantity of entropy in the system respectively increases or decreases, but, in general, not in the same amount as in the surroundings. Likewise, a change in the amount of electric polarization in a system is not necessarily matched by a corresponding change in electric polarization in the surroundings.
In a thermodynamic system, transfers of extensive quantities are associated with changes in respective specific intensive quantities. For example, a volume transfer is associated with a change in pressure. An entropy change is associated with a temperature change. A change in the amount of electric polarization is associated with an electric field change. The transferred extensive quantities and their associated respective intensive quantities have dimensions that multiply to give the dimensions of energy. The two members of such respective specific pairs are mutually conjugate. Either one, but not both, of a conjugate pair may be set up as an independent state variable of a thermodynamic system. Conjugate setups are associated by Legendre transformations.
Composite properties
The ratio of two extensive properties of the same object or system is an intensive property. For example, the ratio of an object's mass and volume, which are two extensive properties, is density, which is an intensive property.
More generally, properties can be combined to give new properties, which may be called derived or composite properties. For example, the base quantities mass and volume can be combined to give the derived quantity density. These composite properties can sometimes also be classified as intensive or extensive. Suppose a composite property F is a function of a set of intensive properties {a_i} and a set of extensive properties {A_j}, which can be shown as F({a_i}, {A_j}). If the size of the system is changed by some scaling factor, λ, only the extensive properties will change, since intensive properties are independent of the size of the system. The scaled system, then, can be represented as F({a_i}, {λA_j}).
Intensive properties are independent of the size of the system, so the property F is an intensive property if for all values of the scaling factor, λ,

$$F(\{a_i\}, \{\lambda A_j\}) = F(\{a_i\}, \{A_j\}).$$

(This is equivalent to saying that intensive composite properties are homogeneous functions of degree 0 with respect to {A_j}.)
It follows, for example, that the ratio of two extensive properties is an intensive property. To illustrate, consider a system having a certain mass, m, and volume, V. The density, ρ, is equal to mass (extensive) divided by volume (extensive): ρ = m/V. If the system is scaled by the factor λ, then the mass and volume become λm and λV, and the density becomes ρ = λm/(λV); the two λs cancel, so this could be written mathematically as ρ(λm, λV) = ρ(m, V), which is analogous to the equation for F above.
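The scaling test can be made concrete numerically. In this hedged Python sketch, the system is represented only by its mass and volume, and the helper name is arbitrary:

```python
# Illustrative sketch of the scaling test for intensive vs. extensive
# properties: scale the extensive variables by lambda and compare.
def density(mass: float, volume: float) -> float:
    return mass / volume            # intensive: ratio of two extensives

mass, volume = 2.0, 4.0             # arbitrary sample values (kg, m^3)
lam = 3.0                           # scaling factor

scaled_mass, scaled_volume = lam * mass, lam * volume

assert scaled_mass == lam * mass                                      # extensive: scales with the system
assert density(scaled_mass, scaled_volume) == density(mass, volume)  # intensive: invariant under scaling
```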
The property F is an extensive property if for all λ,

$$F(\{a_i\}, \{\lambda A_j\}) = \lambda\, F(\{a_i\}, \{A_j\}).$$

(This is equivalent to saying that extensive composite properties are homogeneous functions of degree 1 with respect to {A_j}.) It follows from Euler's homogeneous function theorem that

$$F(\{a_i\}, \{A_j\}) = \sum_j A_j \left(\frac{\partial F}{\partial A_j}\right),$$

where the partial derivative is taken with all parameters constant except A_j. This last equation can be used to derive thermodynamic relations.
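For instance, the internal energy U(S, V, N) is extensive (degree 1) in its extensive arguments S, V and N, so the theorem yields the familiar Euler relation of thermodynamics:

$$U = S\left(\frac{\partial U}{\partial S}\right)_{V,N} + V\left(\frac{\partial U}{\partial V}\right)_{S,N} + N\left(\frac{\partial U}{\partial N}\right)_{S,V} = TS - pV + \mu N.$$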
Specific properties
A specific property is the intensive property obtained by dividing an extensive property of a system by its mass. For example, heat capacity is an extensive property of a system. Dividing heat capacity, C_p, by the mass of the system gives the specific heat capacity, c_p, which is an intensive property. When the extensive property is represented by an upper-case letter, the symbol for the corresponding intensive property is usually represented by a lower-case letter.
Molar properties
If the amount of substance in moles can be determined, then each of these thermodynamic properties may be expressed on a molar basis, and their name may be qualified with the adjective molar, yielding terms such as molar volume, molar internal energy, molar enthalpy, and molar entropy. The symbol for molar quantities may be indicated by adding a subscript "m" to the corresponding extensive property. For example, molar enthalpy is H_m. Molar Gibbs free energy is commonly referred to as chemical potential, symbolized by μ, particularly when discussing a partial molar Gibbs free energy for a component in a mixture.
For the characterization of substances or reactions, tables usually report the molar properties referred to a standard state. In that case a superscript ° is added to the symbol. Examples:
V_m° = 22.711 L/mol is the molar volume of an ideal gas at standard conditions for temperature and pressure (being 273.15 K and 100 kPa).
C_p,m° is the standard molar heat capacity of a substance at constant pressure.
Δ_rH° is the standard enthalpy variation of a reaction (with subcases: formation enthalpy, combustion enthalpy, ...).
E° is the standard reduction potential of a redox couple, i.e. Gibbs energy over charge, which is measured in volts = J/C.
Limitations
The general validity of the division of physical properties into extensive and intensive kinds has been addressed in the course of science. Redlich noted that, although physical properties and especially thermodynamic properties are most conveniently defined as either intensive or extensive, these two categories are not all-inclusive and some well-defined concepts like the square-root of a volume conform to neither definition.
Other systems, for which standard definitions do not provide a simple answer, are systems in which the subsystems interact when combined. Redlich pointed out that the assignment of some properties as intensive or extensive may depend on the way subsystems are arranged. For example, if two identical galvanic cells are connected in parallel, the voltage of the system is equal to the voltage of each cell, while the electric charge transferred (or the electric current) is extensive. However, if the same cells are connected in series, the charge becomes intensive and the voltage extensive. The IUPAC definitions do not consider such cases.
Some intensive properties do not apply at very small sizes. For example, viscosity is a macroscopic quantity and is not relevant for extremely small systems. Likewise, at a very small scale color is not independent of size, as shown by quantum dots, whose color depends on the size of the "dot".
References
Further reading
Physical quantities
Thermodynamic properties
Chemical quantities | Intensive and extensive properties | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,313 | [
"Thermodynamic properties",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Chemical quantities",
"Thermodynamics",
"Physical properties"
] |
204,469 | https://en.wikipedia.org/wiki/Gibbs%E2%80%93Helmholtz%20equation | The Gibbs–Helmholtz equation is a thermodynamic equation used to calculate changes in the Gibbs free energy of a system as a function of temperature. It was originally presented in an 1882 paper entitled "Die Thermodynamik chemischer Vorgänge" by Hermann von Helmholtz. It describes how the Gibbs free energy, which was presented originally by Josiah Willard Gibbs, varies with temperature. It was derived by Helmholtz first, and Gibbs derived it only 6 years later. The attribution to Gibbs goes back to Wilhelm Ostwald, who first translated Gibbs' monograph into German and promoted it in Europe.
The equation is:
(∂(G/T)/∂T)ₚ = −H/T²
where H is the enthalpy, T the absolute temperature and G the Gibbs free energy of the system, all at constant pressure p. The equation states that the change in the G/T ratio at constant pressure as a result of an infinitesimally small change in temperature is a factor of −H/T².
Similar equations include the analogous relation for the Helmholtz free energy A at constant volume:
(∂(A/T)/∂T)ᵥ = −U/T²
where U is the internal energy.
Chemical reactions and work
The typical applications of this equation are to chemical reactions. The equation reads:
(∂(ΔG°/T)/∂T)ₚ = −ΔH°/T²
with ΔG° as the change in Gibbs energy due to reaction, and ΔH° as the enthalpy of reaction (often, but not necessarily, assumed to be independent of temperature). The ° denotes the use of standard states, and particularly the choice of a particular standard pressure (1 bar), to calculate ΔG° and ΔH°.
Integrating with respect to T (again p is constant) yields:
ΔG°(T₂)/T₂ − ΔG°(T₁)/T₁ = ΔH°(1/T₂ − 1/T₁)
This equation quickly enables the calculation of the Gibbs free energy change for a chemical reaction at any temperature T2 with knowledge of just the standard Gibbs free energy change of formation and the standard enthalpy change of formation for the individual components.
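A minimal numerical sketch of the integrated form (the values are illustrative, roughly those of ammonia synthesis; the sketch also inherits the equation's assumption that ΔH° is temperature-independent):

```python
def delta_g_at_T2(dG1, dH, T1, T2):
    """Integrated Gibbs-Helmholtz relation,
    dG(T2)/T2 - dG(T1)/T1 = dH * (1/T2 - 1/T1),
    solved for dG(T2). Energies in J/mol, temperatures in K."""
    return T2 * (dG1 / T1 + dH * (1.0 / T2 - 1.0 / T1))

# dG° = -33.0 kJ/mol and dH° = -92.2 kJ/mol at 298.15 K, extrapolated to 500 K:
dG_500 = delta_g_at_T2(dG1=-33.0e3, dH=-92.2e3, T1=298.15, T2=500.0)
print(f"dG°(500 K) ≈ {dG_500 / 1e3:+.1f} kJ/mol")  # positive: no longer spontaneous
```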
Also, using the reaction isotherm equation, that is
ΔG° = −RT ln K,
which relates the Gibbs energy to a chemical equilibrium constant, the van 't Hoff equation can be derived.
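A short sketch of that derivation (standard intermediate steps, filled in here for clarity):

```latex
% Substituting \Delta G^{\circ} = -RT \ln K gives \Delta G^{\circ}/T = -R \ln K,
% so the Gibbs-Helmholtz equation becomes
\frac{\partial}{\partial T}\left(-R \ln K\right)_p = -\frac{\Delta H^{\circ}}{T^{2}}
% which rearranges to the van 't Hoff equation:
\frac{d \ln K}{dT} = \frac{\Delta H^{\circ}}{R T^{2}}
```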
Since the change in a system's Gibbs energy is equal to the maximum amount of non-expansion work that the system can do in a process, the Gibbs–Helmholtz equation may be used to estimate how much non-expansion work can be done by a chemical process as a function of temperature. For example, the capacity of rechargeable electric batteries can be estimated as a function of temperature using the Gibbs–Helmholtz equation.
Derivation
Background
The definition of the Gibbs function is G = H − TS, where H is the enthalpy, defined by H = U + pV.
Taking differentials of each definition to find dH and dG, then using the fundamental thermodynamic relation (always true for reversible or irreversible processes):
dU = TdS − pdV,
where S is the entropy and V is the volume (the minus sign arising because the work pdV is done by the system on its surroundings; in a reversible process, work other than pressure-volume work may also be done, and at constant T and p it is equal to the change in G), leads to the "reversed" form of the initial fundamental relation into a new master equation:
dG = −SdT + Vdp
This is the Gibbs free energy for a closed system. The Gibbs–Helmholtz equation can be derived from this second master equation and the chain rule for partial derivatives.
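Sketching that final step explicitly (a standard chain-rule manipulation, added here for completeness):

```latex
% From dG = -S\,dT + V\,dp it follows that (\partial G/\partial T)_p = -S.
% Differentiating G/T with the product rule:
\left(\frac{\partial (G/T)}{\partial T}\right)_p
  = \frac{1}{T}\left(\frac{\partial G}{\partial T}\right)_p - \frac{G}{T^{2}}
  = \frac{-TS - G}{T^{2}} = -\frac{H}{T^{2}}
% using G = H - TS, i.e. G + TS = H, in the last step.
```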
Sources
External links
Gibbs–Helmholtz equation, by W. R. Salzman (2004).
Gibbs-Helmholtz Equation, by P. Mander (2013)
Thermodynamic equations
Hermann von Helmholtz | Gibbs–Helmholtz equation | [
"Physics",
"Chemistry"
] | 663 | [
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics"
] |
1,118,408 | https://en.wikipedia.org/wiki/VTech | VTech Holdings Limited (an abbreviation of Video Technology Limited, or simply VTech) is a Hong Kong-based maker of children's electronic learning products. It is the world's largest manufacturer of baby monitors and cordless phones. It was founded in October 1976 by Allan Wong (Chi-Yun) and Stephen Leung.
Name and listing
The company was originally named "Video Technology Limited" in reference to the company's first product, a home video game console. In 1991, it was renamed "VTech Holdings Limited" to reflect a wider portfolio of products.
The company was first listed in Hong Kong in June 1986 under the name "Video Technology International (Holdings) Limited". It was privatised and delisted from The Stock Exchange of Hong Kong Limited in 1990.
VTech obtained a primary listing on the London Stock Exchange in 1991. In 1992, the company relisted on The Stock Exchange of Hong Kong Limited, establishing a dual primary listing with London. In 1993, the company established its American depositary receipt programme.
VTech was delisted voluntarily from the London Stock Exchange on 7 October 2008. It also terminated its American Depositary Receipt programme with effect from 21 January 2011.
History
VTech was founded in Hong Kong in October 1976 by two local entrepreneurs, Allan Wong (Chi-Yun) and Stephen Leung. When the first single-chip microprocessor, the Intel 4004, became available in the early 1970s, the founders saw the potential it offered for portable consumer electronics products. Wong and Leung set up a small factory in To Kwa Wan with an initial investment and a staff of 40 people. In the first year, turnover was less than $1 million.
VTech initially focused on developing video games. In 1977, the company created its first home TV game console, a version of Pong. Since only consumers in North America and Europe could afford such items, the company targeted primarily these markets.
The United Kingdom was chosen as the first market for Pong, as Hong Kong and the UK used the same standard for television systems. In 1978, the founders introduced LED games they had developed to buyers from RadioShack in the US, which were sold under the RadioShack brand.
VTech then began to build its own brand. Starting in the early 1980s, it manufactured a line of electronic games. VTech unveiled its first electronic learning product, called Lesson One, at the New York Toy Fair in February 1980. It taught children basic spelling and maths. An exclusive version under the name Computron was offered to Sears, which prominently advertised the product in its catalogue, a popular shopping guide.
Next, VTech made the video game console CreatiVision. An electronic product with an external projector from French company Ludotronic was adapted by VTech and sold as the VTech ProScreen in 1984, following the release of VTech's Gamate and Variety handheld products the year prior.
VTech then branched out into personal computers, including a series of 8-bit computers that competed with the TRS-80, named the Laser 200, 210, and 310, as well as a series of IBM PC compatibles, both beginning in 1983, followed by Apple II-compatible computers beginning in 1985, including a model called the Laser 128. After acquiring PC manufacturer Leading Technology of Oregon in 1992, VTech exited the personal computer market in 1997 due to tight competition.
In 1985, the United States Federal Communications Commission (FCC) allocated the frequency band 900 MHz to ISM (industrial, scientific, and medical) devices. Taking advantage of this, VTech began development on a cordless telephone, using the 900 MHz band, and in 1991 introduced the world's first fully digital 900 MHz cordless telephone.
In 2000, to expand its cordless phone business, VTech acquired the consumer telephone business of Lucent Technologies. The acquisition also gave VTech the exclusive right for 10 years to use the AT&T brand in conjunction with the manufacture and sale of wireline telephones and accessories in the United States and Canada. Although the acquisition increased sales of VTech's telecommunication products by 50%, it led to operating losses and write-offs. The company issued a profit warning in March 2001 and launched a broad restructuring plan. By the financial year 2002, the company had turned around the business and returned to profitability.
Today, VTech's core businesses remain cordless telephones and electronic learning products. Its contract manufacturing services, which manufacture various electronic products on behalf of medium-sized companies, have also become a major source of revenue. The company has diversified geographically, selling to North America, Europe, Asia, Latin America, the Middle East, and Africa.
Core businesses
Electronic learning products (ELPs)
VTech was among the pioneers of the ELP industry, beginning in 1980 with a unit designed to teach children basic spelling and mathematics.
Today VTech makes both individual standalone products and platform products that combine a variety of consoles with different software.
Its V.Smile TV Learning System, which was launched in 2004, established what the company calls platform products as an important category within its ELPs. Latest additions to the platform product range are MobiGo, InnoTab Max, Kidizoom Smart Watch and InnoTV (StorioTV in Europe Excluding United Kingdom).
Telecommunication (TEL) products
VTech introduced the world's first 900 MHz and 5.8 GHz cordless phones in 1991 and 2002 respectively. As of 2014, the company was the world's largest manufacturer of cordless telephones, according to MZA (as reported by VTech).
As of 2014, VTech, in its sale of both AT&T and VTech branded phones and accessories, was the largest player in the industry in North America, according to MarketWise Consumer Insights (as reported by VTech). Outside North America, as of this date, VTech mainly supplied products to fixed-line telephone operators, brand names, and distributors on an ODM basis.
Contract manufacturing services (CMS)
VTech started manufacturing products for other brand names on an original equipment manufacturing (OEM) basis in the 1980s and CMS became one of the company's core businesses in the early 2000s.
VTech has been identified as one of the world's top 50 electronics manufacturing services providers, providing electronics manufacturing services for medium-sized companies. VTech's CMS has focused on four main product categories: professional audio equipment, switching mode power supplies, wireless products, and solid-state lighting.
Controversies
2012 working conditions controversy
A June 2012 report from the Institute for Global Labour and Human Rights said the working conditions in the VTech factories in China failed to meet the legal standards and could be described as sweatshops. VTech strongly rejected the allegations in a statement issued on 22 June 2012.
2015 data breach
In November 2015, Lorenzo Bicchierai, writing for Vice magazine's Motherboard, reported that VTech's servers had been compromised and the corporation was victim to a data breach which exposed personal data belonging to 6.3 million individuals, including children, who signed up for or utilized services provided by the company related to several products it manufactures. Bicchierai was contacted by the unnamed attacker in late November, during the week before Thanksgiving, at which point the unnamed individual disclosed information about the security vulnerabilities with the journalist and detailed the breach.
Bicchierai then reached out to information security researcher Troy Hunt to examine data provided by the attacker to Bicchierai, and to confirm if the leak was indeed authentic and not an internet hoax. Hunt examined the information and confirmed it appeared to be authentic. Hunt then dissected the data in detail and published the findings on his website. According to Hunt, VTech's servers failed to utilize basic SSL encryption to secure the personal data in transit from the devices to VTech's servers, stored customer information in unencrypted plaintext, and failed to securely hash or salt passwords.
The attack leveraged an SQL injection to gain privileged root access to VTech servers. Once privileged access was acquired, the attacker exfiltrated the data, including some 190 gigabytes of photographs of children and adults, detailed chat logs between parents and children spanning years, and voice recordings, all unencrypted and stored in plain text. The attacker shared some 3,832 image files with the journalist for verification purposes, and some redacted photographs were published by the journalist. Commenting on the leak, the unidentified hacker expressed their disgust with being able to so easily obtain access to such a large trove of data, saying: "Frankly, it makes me sick that I was able to get all this stuff. VTech should have the book thrown at them", and explained that their rationale for going to the press was that they felt VTech would have ignored their reports and concerns.
VTech corporate security was unaware their systems had been compromised and the breach was first brought to their attention after being contacted by Bicchierai prior to the publication of the article. Upon notification, the company took a dozen or so websites and services offline.
In an FAQ published by the company, they explain some 4,854,209 accounts belonging to parents and 6,368,509 profiles belonging to children had been compromised. The company further claims the passwords had been encrypted, which is contrary to reports by the independent security researcher contacted by Vice. The company indicated they were working with unspecified "local authorities". VTech subsequently brought in the information security services company FireEye to manage incident response and audit the security of their platform going forward.
Mark Nunnikhoven of Trend Micro criticized the company's handling of the incident and called their FAQ "wishy-washy corporate speak".
U.S. Senator Edward Markey and Representative Joe Barton, co-founders of the Bi-Partisan Congressional Privacy Caucus, issued an open letter to the company inquiring as to why and what kind of information belonging to children is stored by VTech and how they use this data, security practices employed to protect that data if children's information is shared or sold to third parties and how the company complies with the Children's Online Privacy Protection Act.
In February 2016, Hunt publicized the fact that VTech had modified its Terms and Conditions for new customers so that the customer acknowledges and agrees that any information transmitted to VTech may be intercepted or later acquired by unauthorized parties.
In January 2018, the US Federal Trade Commission fined VTech $650,000 for the breach, around $0.09 per victim.
References
External links
Official website
VTech Toys website
VTech Phones website
Companies formerly listed on the London Stock Exchange
Companies listed on the Hong Kong Stock Exchange
Computer companies of Hong Kong
Computer hardware companies
Educational software companies
Electronics companies established in 1976
Engineering companies of Hong Kong
Hong Kong brands
Learning to read
Software companies established in 1976
Toy companies of Hong Kong
VTech | VTech | [
"Technology"
] | 2,241 | [
"Computer hardware companies",
"Computers"
] |
1,120,353 | https://en.wikipedia.org/wiki/Interleukin%202 | Interleukin-2 (IL-2) is an interleukin, which is a type of cytokine signaling molecule forming part of the immune system. It is a 15.5–16 kDa protein that regulates the activities of white blood cells (leukocytes, often lymphocytes) that are responsible for immunity. IL-2 is part of the body's natural response to microbial infection and is involved in discriminating between foreign ("non-self") and "self". IL-2 mediates its effects by binding to IL-2 receptors, which are expressed by lymphocytes. The major sources of IL-2 are activated CD4+ T cells and activated CD8+ T cells. In short, the function of IL-2 is to stimulate the growth of helper, cytotoxic and regulatory T cells.
IL-2 receptor
IL-2 is a member of a specific family of cytokines, each member of which has a four alpha helix bundle; this cytokine family also includes IL-4, IL-7, IL-9, IL-15 and IL-21. IL-2 signals through the IL-2 receptor, a complex consisting of three chains, termed alpha (CD25), beta (CD122) and gamma (CD132). The gamma chain is common to all family members.
The IL-2 receptor (IL-2R) α subunit binds IL-2 with low affinity (Kd ≈ 10⁻⁸ M). Interaction of IL-2 and CD25 alone does not lead to signal transduction due to its short intracellular chain, but CD25 has the ability (when bound to the β and γ subunits) to increase the IL-2R affinity 100-fold. Heterodimerization of the β and γ subunits of IL-2R is essential for signalling in T cells. IL-2 can signal either via the intermediate-affinity dimeric CD122/CD132 IL-2R (Kd ≈ 10⁻⁹ M) or via the high-affinity trimeric CD25/CD122/CD132 IL-2R (Kd ≈ 10⁻¹¹ M). Dimeric IL-2R is expressed by memory CD8+ T cells and NK cells, whereas regulatory T cells and activated T cells express high levels of trimeric IL-2R.
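A small illustrative calculation (not from the original text; the IL-2 concentration is an assumed value) of what these affinities imply under a simple 1:1 equilibrium binding model, where fractional occupancy is [IL-2]/([IL-2] + Kd):

```python
def occupancy(ligand_M, kd_M):
    """Fractional receptor occupancy for simple 1:1 equilibrium binding."""
    return ligand_M / (ligand_M + kd_M)

IL2 = 1e-10  # assumed free IL-2 concentration: 0.1 nM
for name, kd in [("CD25 alone (low affinity)", 1e-8),
                 ("dimeric CD122/CD132", 1e-9),
                 ("trimeric CD25/CD122/CD132", 1e-11)]:
    print(f"{name}: {occupancy(IL2, kd):.0%} occupied")
# -> about 1%, 9% and 91%: cells bearing the trimeric receptor respond
#    to IL-2 levels far below those needed by dimeric-receptor cells.
```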
IL-2 signaling pathways and regulation
Instructions to express proteins in response to an IL-2 signal (called IL-2 transduction) can take place via three different signaling pathways: (1) the JAK-STAT pathway, (2) the PI3K/Akt/mTOR pathway and (3) the MAPK/ERK pathway. Signalling commences with IL-2 binding to its receptor, following which the cytoplasmic domains of CD122 and CD132 heterodimerize. This leads to the activation of the Janus kinases JAK1 and JAK3, which subsequently phosphorylate T338 on CD122. This phosphorylation recruits STAT transcription factors, predominantly STAT5, which dimerize and migrate to the cell nucleus, where they bind to DNA and direct the expression of further proteins. The proteins expressed by means of the three pathways include bcl-6 (the PI3K/Akt/mTOR pathway), CD25 and prdm-1 (the JAK-STAT pathway) and certain cyclins (the MAPK/ERK pathway).
Gene expression of IL-2 can be regulated at multiple levels and in different ways. One of the checkpoints (in other words, one of the things which must happen before IL-2 is expressed) is signaling through the conjunction of a T cell receptor (TCR) with an HLA-peptide complex. As a result of that conjunction, a phospholipase C (PLC)-dependent signalling pathway is set up, which directs the cell's protein-making machinery to express IL-2. PLC activates three major transcription factors and their pathways: NFAT, NFkB and AP-1. After costimulation from CD28, optimal activation of the expression of IL-2 and of these pathways is induced. In summary, before a cell makes IL-2 via this pathway, two interactions are required: the TCR engaging the HLA-peptide complex on the one hand, and CD28 costimulation on the other; mere ligation of IL-2 to its receptor is of too low affinity to activate the pathway.
At the same time, Oct-1 is expressed, which helps the activation. Oct-1 is expressed in T lymphocytes, and Oct-2 is induced after cell activation.
NFAT has multiple family members, all of which are located in the cytoplasm; signaling goes through calcineurin, whereby NFAT is dephosphorylated and therefore translocated to the nucleus.
AP-1 is a dimer and is composed of c-Jun and c-Fos proteins. It cooperates with other transcription factors including NFkB and Oct.
NFkB is translocated to the nucleus after costimulation through CD28. NFkB is a heterodimer and there are two binding sites on the IL-2 promoter.
Function
IL-2 has essential roles in key functions of the immune system, tolerance and immunity, primarily via its direct effects on T cells. In the thymus, where T cells mature, it prevents autoimmune diseases by promoting the differentiation of certain immature T cells into regulatory T cells, which suppress other T cells that are otherwise primed to attack normal healthy cells in the body. IL-2 enhances activation-induced cell death (AICD). IL-2 also promotes the differentiation of T cells into effector T cells and into memory T cells when the initial T cell is also stimulated by an antigen, thus helping the body fight off infections. Together with other polarizing cytokines, IL-2 stimulates naive CD4+ T cell differentiation into Th1 and Th2 lymphocytes while it impedes differentiation into Th17 and follicular Th lymphocytes.
IL-2 increases the cell killing activity of both natural killer cells and cytotoxic T cells.
Its expression and secretion is tightly regulated and functions as part of both transient positive and negative feedback loops in mounting and dampening immune responses. Through its role in the development of T cell immunologic memory, which depends upon the expansion of the number and function of antigen-selected T cell clones, it plays a key role in enduring cell-mediated immunity.
Evolution
IL-2 has been discovered in all classes of jawed vertebrates, including sharks, at a similar genomic location. In fish, IL-2 shares a single receptor alpha chain with its related cytokines IL-15 and IL-15-like (IL-15L). This "IL-15Rα" receptor chain is similar to mammalian IL-15Rα, and in tetrapod evolution a duplication of its coding gene plus further diversification created mammalian IL-2Rα. Sequence and structural analyses of grass carp IL-2 suggest that fish IL-2 binds IL-15Rα in a manner reminiscent of how mammalian IL-15 binds to IL-15Rα.
Despite fish IL-2 and IL-15 sharing the same IL-15Rα chain, the stability of fish IL-2 is independent of it, whereas IL-15 and especially IL-15L depend on binding to (co-presentation with) IL-15Rα for their stability and function. This suggests that, as in mammals, fish IL-2, in contrast to fish IL-15 and IL-15L, does not rely on "in trans" presentation by its receptor alpha chain. As a free cytokine, mammalian IL-2 that is secreted by activated T cells is important for a negative feedback loop through the stimulation of regulatory T cells, the latter being the cells with the highest constitutive IL-2Rα (aka CD25) expression. Besides this negative feedback loop, mammalian IL-2 also participates in a positive feedback loop because activated T cells enhance their own IL-2Rα expression. As in mammals, fish IL-2 also stimulates T cell proliferation and appears to preferentially stimulate regulatory T cells. Fish IL-2 induces the expression of cytokines of both type 1 (Th1) and type 2 (Th2) immunity.
As has been found in some studies on mammalian IL-2, data suggest that fish IL-2 can form homodimers and that this is an ancient property of the IL-2/15/15L-family cytokines.
Homologues of IL-2 have not been reported for jawless fish (hagfish and lamprey) or invertebrates.
Role in disease
While the causes of itchiness are poorly understood, some evidence indicates that IL-2 is involved in itchy psoriasis.
Medical use
Pharmaceutical analogues
Aldesleukin is a form of recombinant interleukin-2. It is manufactured using recombinant DNA technology and is marketed as a protein therapeutic and branded as Proleukin. It has been approved by the Food and Drug Administration (FDA) with a black box warning and in several European countries for the treatment of cancers (malignant melanoma, renal cell cancer) in large intermittent doses and has been extensively used in continuous doses.
Interking is a recombinant IL-2 with a serine at residue 125, sold by Shenzhen Neptunus.
Neoleukin 2/15 is a computationally designed mimic of IL-2 that was designed to avoid common side effects. However, clinical trials into this candidate were discontinued.
Dosage
Various dosages of IL-2 are used across the United States and across the world. The efficacy and side effects of different dosages are often a point of disagreement.
The commercial interest in local IL-2 therapy has been very low. Because only a very low dose of IL-2 is used, treating a patient requires only about $500 worth of the patented IL-2. The commercial return on investment is too low to stimulate additional clinical studies for the registration of intratumoral IL-2 therapy.
United States
Usually, in the U.S., the higher dosage option is used, affected by cancer type, response to treatment and general patient health. Patients are typically treated for five consecutive days, three times a day, for fifteen minutes. The following approximately 10 days help the patient to recover between treatments. IL-2 is delivered intravenously on an inpatient basis to enable proper monitoring of side effects.
A lower dose regimen involves injection of IL-2 under the skin typically on an outpatient basis. It may alternatively be given on an inpatient basis over 1–3 days, similar to and often including the delivery of chemotherapy.
Intralesional IL-2 is commonly used to treat in-transit melanoma metastases and has a high complete response rate.
Local application
In preclinical and early clinical studies, local application of IL-2 in the tumor has been shown to be clinically more effective in anticancer therapy than systemic IL-2 therapy, over a broad range of doses, without serious side effects.
Tumour blood vessels are more vulnerable than normal blood vessels to the actions of IL-2. When injected inside a tumor, i.e. local application, a process mechanistically similar to the vascular leakage syndrome, occurs in tumor tissue only. Disruption of the blood flow inside of the tumor effectively destroys tumor tissue.
In local application, the systemic dose of IL-2 is too low to cause side effects, since the total dose is about 100- to 1000-fold lower. Clinical studies showed painful injections at the site of radiation as the most important side effect reported by patients. In the case of irradiation of nasopharyngeal carcinoma, the five-year disease-free survival increased from 8% to 63% with local IL-2 therapy.
Toxicity
Systemic IL-2 has a narrow therapeutic window, and the level of dosing usually determines the severity of the side effects. In the case of local IL-2 application, the therapeutic window spans several orders of magnitude.
Some common side effects:
flu-like symptoms (fever, headache, muscle and joint pain, fatigue)
nausea/vomiting
dry, itchy skin or rash
weakness or shortness of breath
diarrhea
low blood pressure
drowsiness or confusion
loss of appetite
More serious and dangerous side effects sometimes are seen, such as breathing problems, serious infections, seizures, allergic reactions, heart problems, kidney failure or a variety of other possible complications. The most common adverse effect of high-dose IL-2 therapy is vascular leak syndrome (VLS; also termed capillary leak syndrome). It is caused by lung endothelial cells expressing high-affinity IL-2R. Binding of IL-2 to these cells causes increased vascular permeability. Thus, intravascular fluid extravasates into organs, predominantly the lungs, which leads to life-threatening pulmonary or brain oedema.
Other drawbacks of IL-2 cancer immunotherapy are its short half-life in circulation and its ability to predominantly expand regulatory T cells at high doses.
Intralesional IL-2 used to treat in-transit melanoma metastases is generally well tolerated. This is also the case for intralesional IL-2 in other forms of cancer, like nasopharyngeal carcinoma.
Pharmaceutical derivative
Eisai markets a drug called denileukin diftitox (trade name Ontak), which is a recombinant fusion protein of the human IL-2 ligand and the diphtheria toxin. This drug binds to IL-2 receptors and introduces the diphtheria toxin into cells that express those receptors, killing the cells. In some leukemias and lymphomas, malignant cells express the IL-2 receptor, so denileukin diftitox can kill them. In 1999 Ontak was approved by the U.S. Food and Drug Administration (FDA) for treatment of cutaneous T cell lymphoma (CTCL).
Preclinical research
IL-2 does not follow the classical dose-response curve of chemotherapeutics. The immunological activity of high and low dose IL-2 show sharp contrast. This might be related to different distribution of IL-2 receptors (CD25, CD122, CD132) on different cell populations, resulting in different cells that are activated by high and low dose IL-2. In general high doses are immune suppressive, while low doses can stimulate type 1 immunity. Low-dose IL-2 has been reported to reduce hepatitis C and B infection.
IL-2 has been used in clinical trials for the treatment of chronic viral infections and as a booster (adjuvant) for vaccines. The use of large doses of IL-2 given every 6–8 weeks in HIV therapy, similar to its use in cancer therapy, was found to be ineffective in preventing progression to an AIDS diagnosis in two large clinical trials published in 2009.
More recently low dose IL-2 has shown early success in modulating the immune system in disease like type 1 diabetes and vasculitis. There are also promising studies looking to use low dose IL-2 in ischaemic heart disease.
IL-2/anti-IL-2 mAb immune complexes (IL-2 ic)
IL-2 cannot accomplish its role as a promising immunotherapeutic agent due to significant drawbacks which are listed above. Some of the issues can be overcome using IL-2 ic. They are composed of IL-2 and some of its monoclonal antibody (mAb) and can potentiate biologic activity of IL-2 in vivo. The main mechanism of this phenomenon in vivo is due to the prolongation of the cytokine half-life in circulation. Depending on the clone of IL-2 mAb, IL-2 ic can selectively stimulate either CD25high (IL-2/JES6-1 complexes), or CD122high cells (IL-2/S4B6). IL-2/S4B6 immune complexes have high stimulatory activity for NK cells and memory CD8+ T cells and they could thus replace the conventional IL-2 in cancer immunotherapy. On the other hand, IL-2/JES6-1 highly selectively stimulate regulatory T cells and they could be potentially useful for transplantations and in treatment of autoimmune diseases.
History
According to an immunology textbook: "IL-2 is particularly important historically, as it is the first type I cytokine that was cloned, the first type I cytokine for which a receptor component was cloned, and was the first short-chain type I cytokine whose receptor structure was solved. Many general principles have been derived from studies of this cytokine including its being the first cytokine demonstrated to act in a growth factor–like fashion through specific high-affinity receptors, analogous to the growth factors being studied by endocrinologists and biochemists".
In the mid-1960s, studies reported "activities" in leukocyte-conditioned media that promoted lymphocyte proliferation. In the mid-1970s, it was discovered that T-cells could be selectively proliferated when normal human bone marrow cells were cultured in conditioned medium obtained from phytohemagglutinin-stimulated normal human lymphocytes. The key factor was isolated from cultured mouse cells in 1979 and from cultured human cells in 1980. The gene for human IL-2 was cloned in 1982 after an intense competition.
Commercial activity to bring an IL-2 drug to market was intense in the 1980s and 1990s. By 1983, Cetus Corporation had created a proprietary recombinant version of IL-2 (Aldesleukin, later branded as Proleukin), with the alanine removed from its N-terminal and residue 125 replaced with serine. Amgen later entered the field with its own proprietary, mutated, recombinant protein and Cetus and Amgen were soon competing scientifically and in the courts; Cetus won the legal battles and forced Amgen out of the field. By 1990 Cetus had gotten aldesleukin approved in nine European countries but in that year, the U.S. Food and Drug Administration (FDA) refused to approve Cetus' application to market IL-2. The failure led to the collapse of Cetus, and in 1991 the company was sold to Chiron Corporation. Chiron continued the development of IL-2, which was finally approved by the FDA as Proleukin for metastatic renal carcinoma in 1992.
By 1993 aldesleukin was the only approved version of IL-2, but Roche was also developing a proprietary, modified, recombinant IL-2 called teceleukin, with a methionine added at its N-terminus, and Glaxo was developing a version called bioleukin, with a methionine added at its N-terminus and residue 125 replaced with alanine. Dozens of clinical trials had been conducted of recombinant or purified IL-2, alone, in combination with other drugs, or using cell therapies, in which cells were taken from patients, activated with IL-2, then reinfused. Novartis acquired Chiron in 2006 and licensed the US aldesleukin business to Prometheus Laboratories in 2010 before global rights to Proleukin were subsequently acquired by Clinigen in 2018 and 2019.
References
External links
Proleukin website
IL-2 Signaling Pathway
Interleukins
Immunostimulants
Cancer treatments
Immunomodulating drugs
Immunology | Interleukin 2 | [
"Biology"
] | 4,171 | [
"Immunology"
] |
1,121,381 | https://en.wikipedia.org/wiki/Ectocarpene | Ectocarpene is the rearrangement product of pre-ectocarpene, the sexual attractant, or pheromone, found in several species of brown algae (Phaeophyceae). Ectocarpene has a fruity scent and can be sensed by humans when millions of algal gametes swarm in the seawater and the females start emitting the substance's precursor to attract the male gametes.
All the double bonds are cis and the absolute configuration of the stereocenter is (S).
History
Ectocarpene was isolated from algae of the genus Ectocarpus (order Ectocarpales) by Müller and colleagues in 1971. It was mistakenly taken to be the active substance for gamete attraction until 1995, when pre-ectocarpene was discovered to be the active compound. The confusion arises from the sigmatropic rearrangement (and thus deactivation) of pre-ectocarpene, which occurs within minutes at room temperature:
This ensures that the pheromone is active only in the proximity of the female gametes.
The presence of ectocarpene in Capsicum fruit was reported in 2010. Studies identified its "sweet and green" aroma in both instrumental identification tests and sensory tests. Its relatively low but influential presence helps shape the Capsicum fruit's aroma profile.
Related compounds
(E)-Ectocarpene is a product associated with a group referred to as bryophytes, a family of liverworts, algae, and other species with medicinal and nutritional properties. It has been suggested that (E)-ectocarpene may reflect an evolutionary relationship between families of liverworts and algae, as its concentration of formation varies with the species' environmental conditions.
See also
Dictyopterene
References
External links
Evidence of ectocarpene and dictyopterenes A and C′ in the water of a freshwater lake
Alkene derivatives
Hydrocarbons
Pheromones
Ectocarpales
Cycloalkenes | Ectocarpene | [
"Chemistry"
] | 420 | [
"Organic compounds",
"Hydrocarbons",
"Pheromones",
"Chemical ecology"
] |
1,121,564 | https://en.wikipedia.org/wiki/Piston%20valve | A piston valve is a device used to control the motion of a fluid or gas along a tube or pipe by means of the linear motion of a piston within a chamber or cylinder.
Examples of piston valves are:
The valves used in many brass instruments
The valves used for pneumatic propulsion
The valves used in many stationary steam engines and steam locomotives
Brass instruments
Cylindrical piston valves called Périnet valves (after their inventor François Périnet) are used to change the length of tube in the playing of most brass instruments, particularly the trumpet-like members of the family (cornet, flugelhorn, saxhorn, etc.).
Other brass instruments use rotary valves, notably the orchestral horns and many tuba models, but also a number of rotary-valved variants of those brass instruments which more commonly employ piston valves.
The first piston-valved musical instruments were developed just after the start of the 19th century. The Stölzel valve (invented by Heinrich Stölzel in 1814) was an early variety. In the mid 19th century the Vienna valve was an improved design. However most professional musicians preferred rotary valves for quicker, more reliable action, until better designs of piston valves were mass manufactured towards the end of the 19th century.
Pneumatic cannon
A piston valve can also refer to a 2-way, 2-position, pilot-operated spool valve. The term is extremely popular among spud gun enthusiasts, who often build homemade piston valves for use in pneumatic cannons. Valves are typically constructed primarily from pipe fittings and machined plastics or metals.
The inside of a piston valve contains a piston that blocks the output when the valve is pressurized, and a volume of air behind the piston. When the pressure behind the piston is released, the piston is pushed back by the force of the pressure from the input. This allows the valve to be opened by a much smaller pilot valve, at speeds faster than possible with a manually operated valve alone. Functionally, these types of valves are comparable to quick exhaust valves.
This type of piston valve is also sometimes referred to as a back-pressure valve.
Steam engines
See also
Angle seat piston valve
References
External links
Early valve designs
Why was the valve invented?
Elements of Brass Instrument Construction with good discussion of valve types and history
Visual explanations of some types of piston valve
Kinematic Models for Design Digital Library (KMODDL) – Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering.
Valves
Steam locomotive technologies
Brass instrument parts and accessories | Piston valve | [
"Physics",
"Chemistry"
] | 518 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
1,121,570 | https://en.wikipedia.org/wiki/Rotary%20valve | A rotary valve (also called rotary-motion valve) is a type of valve in which the rotation of a passage or passages in a transverse plug regulates the flow of liquid or gas through the attached pipes. The common stopcock is the simplest form of rotary valve. Rotary valves have been applied in numerous applications, including:
Changing the pitch of brass instruments.
Controlling the steam and exhaust ports of steam engines, most notably in the Corliss steam engine.
Periodically reversing the flow of air and fuel across the open hearth furnace.
Loading sample on chromatography columns.
Certain types of two-stroke and four-stroke engines.
Most hydraulic automotive power steering control valves.
Use in brass instruments
In the context of brass instruments, rotary valves are found on horns, trumpets, trombones, flugelhorns, and tubas. The cornet derived from the posthorn when rotary valves were applied to it in the 1820s in France. An alternative to a rotary-valve trumpet is the piston-valve trumpet; many European trumpet players tend to favor rotary valves.
Trombone F attachment valves are usually rotary, with several variations on the basic design also in use, such as the Thayer axial flow valve and the Hagmann valve.
The rotary valve was first applied to the horn in 1824 by Nathan Adams (1783–1864) of Boston and was patented in 1835 by Joseph Riedl.
Use in industry
Rotary valves for industrial manufacturing are often used in bulk material handling, dust collection or pneumatic conveying systems, depending on the application. The valve is used to regulate the flow of a product or material by maintaining a consistent flow rate suited to the process. Controlling the flow of material helps to prevent issues such as jamming, material leakage and damage to the valve itself. Typical applications are for feeding a weighed hopper or for feeding a mill that can be clogged by the product.
Valves are part of the material exchange process and work in metering or feeding applications, function as rotary airlocks, or provide a combination of airlock and metering functions.
A rotary valve in the pharmaceutical, chemical and food industry is used to dose and feed solid bulk products within the processes. Valves are also commonly used in construction, plastics, recycling, agriculture and forestry, or wherever material needs to be safely and efficiently conveyed from one point to another.
An airlock-type rotary valve receives and dispenses material from two chambers with different pressure levels. They seal air flow between the valve’s inlet and outlet to maintain a consistent pressure differential, which promotes efficient material flow. The valve’s pressurized chamber prevents foreign material from infiltrating the housing and keeps conveyed material from escaping the system.
Use in engine design
Four-stroke engines
The general adoption of rotary valves in place of poppet valves in combustion engines was prevented by the issue of sealing. Poppet valves have a seal around the tapered flange of the opening, and this seal improves with increased working pressure in the combustion chamber because the pressure forces the valve shut. In contrast, rotary valves have to move freely to operate and need to be lubricated with oil, causing issues with holding pressures of up to 100 barg at temperatures of 1000 degrees Celsius, with the related thermal expansion of the various seals and the valve barrel. This valve expansion causes misalignment in the valve-to-seal interface as an engine moves from room temperature to full operating temperature. If the seals are pressed against the valve with higher pressure to accommodate this expansion, high friction and power loss occur, along with high rates of wear.
The rotary valve combustion engine possesses several significant advantages over the conventional assemblies, including significantly higher compression ratios and rpm, meaning more power, a much more compact and light-weight cylinder head, and reduced complexity, meaning higher reliability and lower cost. As inlet and exhaust are usually combined, special attention should be given to valve cooling to avoid engine knocking.
Rotary valves have been used in several different engine designs. In Britain, the National Engine Company Ltd advertised its rotary valve engine for use in early aircraft, at a time when poppet valves were prone to failure by sticking or burning.
At the end of the 1930s, Frank Aspin developed a design with a rotary valve that rotated on the same axis as the cylinder bore, but with limited success.
US company Coates International Ltd has developed a spherical rotary valve for internal combustion engines which replaces the poppet valve system. This particular design is four-stroke, with the rotary valves operated by overhead shafts in lieu of overhead camshafts (i.e. in line with a bank of cylinders). The first sale of such an engine was part of a natural gas engine-generator.
Rotary valves are potentially highly suitable for high-revving engines, such as those used in racing sportscars and F1 racing cars, on which traditional poppet valves with springs can fail due to valve float and spring resonance and where the desmodromic valve gear is too heavy, large in size and too complex to time and design properly. Rotary valves could allow for a more compact and lightweight cylinder head design. They rotate at half engine speed (or one quarter) and lack the inertia forces of reciprocating valve mechanisms. This allows for higher engine speeds, offering perhaps 10% more power. The 1980s MGN W12 F1 engine used rotary valves but never raced. Between 2002 and 2004 the Australian developer Bishop Innovation and Mercedes-Ilmor tested rotary valves for a F1 V10 engine.
Bishop Innovations' patent for the rotary valve engine was bought out by BRV Pty Ltd, owned by Tony Wallis, one of the valve's original designers. BRV has constructed several functional motors using the rotary valve technology, such as a Honda CRF 450, which had greater torque at both low (17% increase) and high (9% increase) engine speeds, and also produced up to around 30% more brake horsepower at functional engine speeds. The engine was also considerably smaller and lighter, as the cylinder head assembly was not as large.
A company in the UK called Roton Engine Developments made some progress in 2005 with a two-rotor (one for inlet and one for exhaust) single-cylinder Husaberg motorcycle engine. They filed patents and got an example running in 2006, but were backed by MG Rover which subsequently went bust, leaving Roton without enough funds to continue. The designs surfaced some years later in Australia with Engine Developments Australia Pty Ltd. A prototype casting was produced in 2013 on a Kawasaki Ninja 300 parallel twin unit. This unit is still in development phase at the time of writing but is significant as it has the potential to run much higher compression ratios than even other rotary valve engines due to a significant but undisclosed new cooling method of the combustion chamber and the ability to eliminate the throttle completely, making it vastly more economical at lower engine speeds, so it is claimed.
A completely successful automotive rotary valve engine was built by the late Ralph Ogden Watson of Auckland, New Zealand, in 1989. The car has covered many trouble-free miles since that date and remains in use. Success was achieved as a result of Watson's academic approach to the problem of sealing, his study of previous designs, and his particular combination of knowledge of materials, machining skills, experience with engines, perseverance and realistic expectations. No new or only recently available materials were involved. Full details of the development of the car and engine appear in the book "Ralph Watson Special Engineer", first published 2004, ISBN 0-476-01371-2, available free and easily searchable on the internet as of 2020. The car is currently owned by Ray Ferner.
A U.S. company named VAZTEC has created a compliant rotary valve sealing system that addresses the friction and sealing problems of previous designs. They have built ten prototype engines, from a 5.3L V-8 to a handheld 28cc four-stroke. VAZTEC has also built a working Diesel engine that contained the high compression pressures of a compression ignition engine (20:1 compression ratio, 100 bar combustion pressure). They are working with various OE manufacturers to commercialize their design. Various patents cover the compliant seal, including U.S. Patent 9,903,239. A Computational Fluid Dynamics (CFD) model of the VAZTEC rotary valve has been published online.
Two-stroke engines
A rotary valve in the form of a flat disc, also known as a disc valve, is used in two-stroke motorcycle engines, where the arrangement helps to prevent reverse flow back into the intake port during the compression stroke.
Austrian engine manufacturer Rotax used rotary intake valves in their now out-of-production Rotax 532 two-stroke engine design and continues to use rotary intake valves in the 532's successor, the current-production Rotax 582.
Use in production engines
UK company RCV Engines Ltd uses rotating cylinder liner technology as a specialized form of rotary valve in some of their four-stroke model engine and small-engine line-up. RCV also use horizontal and vertical rotary valves in four-stroke engines in their current range of engines.
RCV have developed a 125cc rotating cylinder liner engine, incorporating a rotating valve in the cylinder liner, for scooter applications. PGO Scooters of Taiwan were working with RCV in developing the engine for their applications.
The Suzuki RG500 "Gamma" was powered by a two-stroke, rotary valve, twin crank, square four engine displacing 498 cubic centimeters. The power output was 93.7 brake horsepower (69.9 kW) at 9,500 RPM.
Use in chromatography
Rotary valves are used for loading samples on columns used for liquid or gas chromatography. The valves used in these methods are usually 6-port, 2-position rotary valves.
See also
Airlock
Itala cars
Piston valve
Poppet valve
Rotary feeder
Slide valve
References
Valves
Brass instrument parts and accessories | Rotary valve | [
"Physics",
"Chemistry"
] | 2,022 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
1,121,587 | https://en.wikipedia.org/wiki/Schur%20multiplier | In mathematical group theory, the Schur multiplier or Schur multiplicator is the second homology group H2(G, Z) of a group G. It was introduced by Issai Schur in his work on projective representations.
Examples and properties
The Schur multiplier M(G) of a finite group G is a finite abelian group whose exponent divides the order of G. If a Sylow p-subgroup of G is cyclic for some p, then the order of M(G) is not divisible by p. In particular, if all Sylow p-subgroups of G are cyclic, then M(G) is trivial.
For instance, the Schur multiplier of the nonabelian group of order 6 is the trivial group since every Sylow subgroup is cyclic. The Schur multiplier of the elementary abelian group of order 16 is an elementary abelian group of order 64, showing that the multiplier can be strictly larger than the group itself. The Schur multiplier of the quaternion group is trivial, but the Schur multiplier of dihedral 2-groups has order 2.
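The elementary abelian example can be checked against a standard fact (stated here for illustration, not taken from the text above): for a finite abelian group G, the Schur multiplier is isomorphic to the exterior square G ∧ G.

```latex
% G = (\mathbb{Z}/2)^4 with basis e_1, \dots, e_4: the exterior square
% is spanned by the \binom{4}{2} = 6 wedges e_i \wedge e_j (i < j), so
M\bigl((\mathbb{Z}/2)^4\bigr) \;\cong\; (\mathbb{Z}/2)^4 \wedge (\mathbb{Z}/2)^4
  \;\cong\; (\mathbb{Z}/2)^6,
% an elementary abelian group of order 2^6 = 64, as stated above.
```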
The Schur multipliers of the finite simple groups are given at the list of finite simple groups. The covering groups of the alternating and symmetric groups are of considerable recent interest.
Relation to projective representations
Schur's original motivation for studying the multiplier was to classify projective representations of a group, and the modern formulation of his definition is the second cohomology group H²(G, C×). A projective representation is much like a group representation except that instead of a homomorphism into the general linear group GL(n, C), one takes a homomorphism into the projective general linear group PGL(n, C). In other words, a projective representation is a representation modulo the center.
Schur showed that every finite group G has associated to it at least one finite group C, called a Schur cover, with the property that every projective representation of G can be lifted to an ordinary representation of C. The Schur cover is also known as a covering group or Darstellungsgruppe. The Schur covers of the finite simple groups are known, and each is an example of a quasisimple group. The Schur cover of a perfect group is uniquely determined up to isomorphism, but the Schur cover of a general finite group is only determined up to isoclinism.
Relation to central extensions
The study of such covering groups led naturally to the study of central and stem extensions.
A central extension of a group G is an extension
1 → K → C → G → 1
where K is a subgroup of the center of C.
A stem extension of a group G is an extension
1 → K → C → G → 1
where K is a subgroup of the intersection of the center of C and the derived subgroup of C; this is more restrictive than central.
If the group G is finite and one considers only stem extensions, then there is a largest size for such a group C, and for every C of that size the subgroup K is isomorphic to the Schur multiplier of G. If the finite group G is moreover perfect, then C is unique up to isomorphism and is itself perfect. Such C are often called universal perfect central extensions of G, or covering group (as it is a discrete analog of the universal covering space in topology). If the finite group G is not perfect, then its Schur covering groups (all such C of maximal order) are only isoclinic.
It is also called more briefly a universal central extension, but note that there is no largest central extension, as the direct product of G and an abelian group form a central extension of G of arbitrary size.
Stem extensions have the nice property that any lift of a generating set of G is a generating set of C. If the group G is presented in terms of a free group F on a set of generators, and a normal subgroup R generated by a set of relations on the generators, so that G ≅ F/R, then the covering group itself can be presented in terms of F but with a smaller normal subgroup S, that is, C ≅ F/S. Since the relations of G specify elements of K when considered as part of C, and K must be central in C, one must have [F, R] ⊆ S.
In fact if G is perfect, this is all that is needed: C ≅ [F,F]/[F,R] and M(G) ≅ K ≅ R/[F,R]. Because of this simplicity, some expositions handle the perfect case first. The general case for the Schur multiplier is similar but ensures the extension is a stem extension by restricting to the derived subgroup of F: M(G) ≅ (R ∩ [F, F])/[F, R]. These are all slightly later results of Schur, who also gave a number of useful criteria for calculating them more explicitly.
Relation to efficient presentations
In combinatorial group theory, a group often originates from a presentation. One important theme in this area of mathematics is to study presentations with as few relations as possible, such as one-relator groups like the Baumslag–Solitar groups. These groups are infinite groups with two generators and one relation, and an old result of Schreier shows that in any presentation with more generators than relations, the resulting group is infinite. The borderline case is thus quite interesting: finite groups with the same number of generators as relations are said to have deficiency zero. For a group to have deficiency zero, the group must have a trivial Schur multiplier because the minimum number of generators of the Schur multiplier is always less than or equal to the difference between the number of relations and the number of generators, which is the negative deficiency. An efficient group is one where the Schur multiplier requires this number of generators.
A fairly recent topic of research is to find efficient presentations for all finite simple groups with trivial Schur multipliers. Such presentations are in some sense nice because they are usually short, but they are difficult to find and to work with because they are ill-suited to standard methods such as coset enumeration.
Relation to topology
In topology, groups can often be described as finitely presented groups and a fundamental question is to calculate their integral homology Hn(G, Z). In particular, the second homology plays a special role, and this led Heinz Hopf to find an effective method for calculating it. The method is also known as Hopf's integral homology formula and is identical to Schur's formula for the Schur multiplier of a finite group:
H2(G, Z) ≅ (R ∩ [F, F])/[F, R],
where G ≅ F/R and F is a free group. The same formula also holds when G is a perfect group.
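As a tiny worked example (a standard computation, added here for illustration), the formula immediately shows that cyclic groups have trivial multiplier:

```latex
% Present G = \mathbb{Z}/n as F/R, where F = \langle x \rangle is free
% of rank 1 and R = \langle x^n \rangle. F is abelian, so [F, F] = 1 and
H_2(\mathbb{Z}/n, \mathbb{Z}) \cong (R \cap [F,F])/[F,R] = 1,
% consistent with the earlier criterion: all Sylow subgroups of a
% cyclic group are cyclic, so the Schur multiplier is trivial.
```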
The recognition that these formulas were the same led Samuel Eilenberg and Saunders Mac Lane to the creation of cohomology of groups. In general,
H2(G, Z) ≅ (H²(G, C×))*,
where the star denotes the algebraic dual group. Moreover, when G is finite, there is an unnatural isomorphism
(H²(G, C×))* ≅ H²(G, C×).
The Hopf formula for H2(G) has been generalised to higher dimensions. For one approach and references see the paper by Everaert, Gran and Van der Linden listed below.
A perfect group is one whose first integral homology vanishes. A superperfect group is one whose first two integral homology groups vanish. The Schur covers of finite perfect groups are superperfect. An acyclic group is a group all of whose reduced integral homology vanishes.
Applications
The second algebraic K-group K2(R) of a commutative ring R can be identified with the second homology group H2(E(R), Z) of the group E(R) of (infinite) elementary matrices with entries in R.
See also
Quasisimple group
The references from Clair Miller give another view of the Schur Multiplier as the kernel of a morphism κ: G ∧ G → G induced by the commutator map.
Notes
References
Errata
Group theory
Homological algebra
Issai Schur | Schur multiplier | [
"Mathematics"
] | 1,582 | [
"Mathematical structures",
"Group theory",
"Fields of abstract algebra",
"Category theory",
"Homological algebra"
] |
1,121,647 | https://en.wikipedia.org/wiki/Axiom%20of%20real%20determinacy | In mathematics, the axiom of real determinacy (abbreviated as ADR) is an axiom in set theory. It states the following: consider infinite two-person games of perfect information in which the players choose real numbers; then, every such game of length ω is determined, that is, one of the two players has a winning strategy.
The axiom of real determinacy is a stronger version of the axiom of determinacy (AD), which makes the same statement about games where both players choose integers; ADR is inconsistent with the axiom of choice. It also implies the existence of inner models with certain large cardinals.
ADR is equivalent to AD plus the axiom of uniformization.
See also
AD+
Axiom of projective determinacy
Topological game
Axioms of set theory
Determinacy
References | Axiom of real determinacy | [
"Mathematics"
] | 129 | [
"Game theory",
"Axioms of set theory",
"Determinacy",
"Mathematical axioms"
] |
5,860,704 | https://en.wikipedia.org/wiki/MUSHRA | MUSHRA stands for Multiple Stimuli with Hidden Reference and Anchor and is a methodology for conducting a codec listening test to evaluate the perceived quality of the output from lossy audio compression algorithms. It is defined by ITU-R recommendation BS.1534-3. The MUSHRA methodology is recommended for assessing "intermediate audio quality". For very small or sensitive audio impairments, Recommendation ITU-R BS.1116-3 (ABC/HR) is recommended instead.
MUSHRA can be used to test audio codecs across a broad spectrum of use cases: music and film consumption, speech for e.g. podcasts and radio, online streaming (in which trade-offs between quality and efficiency of size and computation are paramount), modern digital telephony, and VOIP applications (which require quasi-real-time, low-bitrate encoding that remains intelligible). Professional, "audiophile", and "prosumer" uses are typically better suited to alternative tests, like the aforementioned ABC/HR, with a base assumption of high-quality, high-resolution audio wherein there will be minimal detectable differences between reference material and the codec output.
The main advantage over the mean opinion score (MOS) methodology (which serves a similar purpose) is that MUSHRA requires fewer participants to obtain statistically significant results. This is because all codecs are presented at the same time, to the same participants, such that a paired t-test or repeated measures analysis of variance can be used for statistical analysis. Furthermore, the 0–100 scale used by MUSHRA makes it possible to express perceptible differences with a high degree of granularity, especially compared to the 0-5 modified Likert scale often used by MOS experiments.
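For illustration, a minimal sketch of such a paired comparison on hypothetical MUSHRA scores (the arrays, names and values are illustrative, not from any real experiment):

```python
import numpy as np
from scipy import stats

# Hypothetical MUSHRA scores (0-100): one entry per listener, per codec.
# Every listener rates every codec, so paired statistics apply.
codec_a = np.array([78, 82, 75, 80, 77, 84, 79, 81])
codec_b = np.array([71, 74, 70, 73, 69, 76, 72, 75])

# Paired t-test: each listener serves as their own control, which is why
# fewer participants are needed than in an unpaired MOS-style comparison.
t_stat, p_value = stats.ttest_rel(codec_a, codec_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```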
In MUSHRA, the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference, and one or more anchors (i.e. severely impaired encodings that both the experimenters and participants are supposed to recognise immediately as such; like the labeled reference, they provide a baseline, "anchoring" for participants the low end of the quality scale). The recommendation specifies that a low-range and a mid-range anchor should be included in the test signals. These are typically a 7 kHz and a 3.5 kHz low-pass version of the reference. The purpose of the anchors is to calibrate the scale so that minor artifacts are not unduly penalized. This is particularly important when comparing or pooling results from different labs.
Listener behavior
Both MUSHRA and ITU BS.1116 tests call for trained expert listeners who know what typical artifacts sound like and where they are likely to occur. Expert listeners also have a better internalization of the rating scale, which leads to more repeatable results than with untrained listeners. Thus, with trained listeners, fewer listeners are needed to achieve statistically significant results.
It is assumed that preferences are similar for expert listeners and naive listeners, and thus that results of expert listeners are also predictive for consumers. In agreement with this assumption, Schinkel-Bielefeld et al. found no differences in the rank order between expert listeners and untrained listeners when using test signals containing only timbre and no spatial artifacts. However, Rumsey et al. showed that for signals containing spatial artifacts, expert listeners weight spatial artifacts slightly more strongly than untrained listeners, who primarily focus on timbre artifacts.
In addition to this, it has been shown that expert listeners make more use of the option to listen to smaller sections of the signals under test repeatedly and perform more comparisons between the signals under test and the reference. In contrast to the naive listener who produces a preference rating, expert listeners therefore produce an audio quality rating, rating the differences between the signal under test and the uncompressed original, which is the actual goal of a MUSHRA-test.
Pre- or post-screening
The MUSHRA guidelines describe two major possibilities for assessing the reliability of a listener (described below).
The easiest and most common is to disqualify, post-hoc, all listeners who rate the hidden reference repeat below 90 MUSHRA points for more than 15% of all test items. The hidden reference should, in the ideal case, be rated at 100 points to indicate perceptual equivalence with the original reference audio. While it can happen that the hidden reference and a high-quality signal are confused, the specification provides that a rating of lower than 90 should only be given when the listener is certain that the rated signal is different from the original reference, so a rating below 90 for the hidden reference is considered a clear and obvious listener error.
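The rule described above is simple enough to state as code; a minimal sketch (the function name and the example scores are illustrative):

```python
import numpy as np

def post_screen(hidden_ref_scores, threshold=90, max_fail_rate=0.15):
    """Disqualify listeners who rate the hidden reference below `threshold`
    MUSHRA points on more than `max_fail_rate` of all test items.

    hidden_ref_scores: shape (n_listeners, n_items), each listener's rating
    of the hidden reference for every test item."""
    scores = np.asarray(hidden_ref_scores, dtype=float)
    fail_rate = (scores < threshold).mean(axis=1)  # per-listener failure rate
    return fail_rate <= max_fail_rate              # True = keep listener

# Listener 0 rates the hidden reference below 90 on 2 of 10 items
# (20% > 15%) and is excluded; listener 1 is kept.
keep = post_screen([[95, 88, 100, 92, 89, 96, 97, 93, 94, 98],
                    [100, 95, 98, 97, 99, 96, 100, 98, 97, 95]])
print(keep)  # [False  True]
```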
The other possibility to assess a listener's performance is eGauge, a framework based on the analysis of variance (ANOVA). It computes agreement, repeatability, and discriminability, though only the latter two are recommended for pre- or post-screening. Agreement is the ANOVA of a listener's concurrence with the rest of the listeners. Repeatability examines the individual's internal reliability when rating the same test signal again in comparison to the variance of the other test signals. Discriminability analyses a sort of intertest reliability by checking that listeners can distinguish between test signals of different conditions. As eGauge requires listening to every test signal twice, its use is temporally inefficient in the immediate term relative to the prior method of post-screening listeners based on a hidden reference. eGauge does have advantages when used with a longer-term view. It negates the small chance of a complete redo in the rare case in which a sample's results lack sufficient statistical power due to an excessive failure rate discovered after the fact. Additionally, the initial inefficiency can be amortised over a series of experiments by removing the need for recruitment phases: if a listener has proven a reliable listener using eGauge, he or she can also be considered a reliable listener for future listening tests, provided the nature of the test is not substantially altered (e.g. a reliable listener for stereo tests is not necessarily equally good at perceiving artifacts in 5.1 or 22.2 configurations or potentially even mono formats).
Test items
It is important to choose critical test items, specifically items that are difficult to encode and likely to produce artifacts. At the same time, the test items should be ecologically valid: they should be representative of broadcast material and not mere synthetic signals designed to be difficult to encode at the expense of realism. A method to choose critical material is presented by Ekeroot et al., who propose a ranking-by-elimination procedure. While this is effective at selecting the most critical test items, it does not ensure inclusion of a variety of test items prone to different artifacts.
Ideally, a MUSHRA test item should maintain similar characteristics for its entire duration (e.g. the use of consistent instrumentation in music or the same person's voice with similar cadence and tone in spoken audio). It can be difficult for the listener to decide on a unidimensional MUSHRA rating if some parts of the items demonstrate different artifacts or stronger artifacting compared to other parts, which is rendered more likely by large variations in the characteristics of the audio. Often, shorter items lead to less variability as they demonstrate greater stationarity (perceptual consistency and constancy). However, even when trying to choose stationary items, ecologically valid stimuli (i.e. audio that is likely to appear or similar to that likely to appear in real-world situations such as on radio) will very often have sections that are slightly more critical than the rest of the signal (examples include keywords in a speech or major phrases of music and are dependent on the stimulus type). Stationarity is important as listeners who focus on different sections of the signal tend to evaluate it differently. Listeners who are more analytical seem to be better at identifying the most critical regions of a stimulus than those who are less analytical.
Language of test items
ITU-T P.800 tests, based on the mean opinion score methodology, are commonly used to evaluate telephone codecs for use in e.g. VOIP. This standard specifies that the tested speech items should always be in the native language of the listeners. When MUSHRA is used instead for these purposes, language matching becomes unnecessary. MUSHRA experiments do not aim to test the intelligibility of spoken words but solely the quality of the audio containing those words and the presence or absence of audible artifacts (e.g. distortion). A MUSHRA study with Mandarin Chinese and German listeners found no significant difference between rating foreign and native language test items. Despite the lack of distinction in the end results, listeners did need more time and comparison opportunities (repetitions) to accurately evaluate the foreign language items. This compensation is impossible in ITU-T P.800 ACR tests wherein items are heard only once and no comparison to the reference audio is possible. In such tests, unlike MUSHRA tests, foreign language items are perceived and then rated as being of lower quality, irrespective of actual codec quality, when listeners' proficiency in the target language is low.
References
External links
webMUSHRA: a MUSHRA compliant web audio API based experiment software, configurable using YAML
RateIt: A GUI for performing MUSHRA experiments
A Max/MSP interface for MUSHRA listening tests
A Browser Based Audio Evaluation Tool, for running many different tests including MUSHRA - No coding needed
BeaqleJS: HTML5 and JavaScript based framework for listening tests
mushraJS+Server: based on mushraJS with a mochiweb server, which is an Erlang web server
Signal processing
ITU-R recommendations
Psychophysics | MUSHRA | [
"Physics",
"Technology",
"Engineering"
] | 2,034 | [
"Telecommunications engineering",
"Applied and interdisciplinary physics",
"Computer engineering",
"Signal processing",
"Psychophysics"
] |
5,861,496 | https://en.wikipedia.org/wiki/Diffuson | In condensed matter physics, the diffuson is a disorder-averaged electron-hole propagator, a mathematical object which often appears in the theory of disordered electronic systems. The poles of the propagator can be identified with diffusion modes.
In a disordered system, the motion of an electron is not ballistic, but diffusive: i.e., the electron does not move along a straight line, but experiences a series of random scatterings off of impurities. This random motion (diffusion) is described by a differential equation, known as the diffusion equation. The diffuson is the Green's function of the diffusion equation.
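Concretely, for free diffusion with diffusion constant D in d dimensions, this Green's function solves, in real space and time,

```latex
\left(\partial_t - D\,\nabla^2\right) G(\mathbf{r},t) = \delta^{(d)}(\mathbf{r})\,\delta(t),
\qquad
G(\mathbf{r},t) = \frac{\theta(t)}{(4\pi D t)^{d/2}}\,
\exp\!\left(-\frac{|\mathbf{r}|^2}{4 D t}\right),
% In Fourier space G(q, w) = 1/(-i w + D q^2), whose pole at w = -i D q^2
% is the diffusion mode referred to above.
```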
The diffuson plays an important role in the theory of electron transport in disordered systems, especially for phase coherent effects such as universal conductance fluctuations.
References
Diffusion
Mesoscopic physics | Diffuson | [
"Physics",
"Chemistry",
"Materials_science"
] | 175 | [
"Transport phenomena",
"Physical phenomena",
"Materials science stubs",
"Diffusion",
"Quantum mechanics",
"Condensed matter physics",
"Condensed matter stubs",
"Mesoscopic physics"
] |
5,861,757 | https://en.wikipedia.org/wiki/Weak%20localization | Weak localization is a physical effect which occurs in disordered electronic systems at very low temperatures. The effect manifests itself as a positive correction to the resistivity of a metal or semiconductor. The name emphasizes the fact that weak localization is a precursor of Anderson localization, which occurs at strong disorder.
General principle
The effect is quantum-mechanical in nature and has the following origin: In a disordered electronic system, the electron motion is diffusive rather than ballistic. That is, an electron does not move along a straight line, but experiences a series of random scatterings off impurities which results in a random walk.
The resistivity of the system is related to the probability of an electron to propagate between two given points in space. Classical physics assumes that the total probability is just the sum of the probabilities of the paths connecting the two points. However quantum mechanics tells us that to find the total probability we have to sum up the quantum-mechanical amplitudes of the paths rather than the probabilities themselves. Therefore, the correct (quantum-mechanical) formula for the probability for an electron to move from a point A to a point B includes the classical part (individual probabilities of diffusive paths) and a number of interference terms (products of the amplitudes corresponding to different paths). These interference terms effectively make it more likely that a carrier will "wander around in a circle" than it would otherwise, which leads to an increase in the net resistivity. The usual formula for the conductivity of a metal (the so-called Drude formula) corresponds to the former classical terms, while the weak localization correction corresponds to the latter quantum interference terms averaged over disorder realizations.
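In symbols, for paths from A to B with quantum amplitudes A_i:

```latex
P_{A\to B} \;=\; \Big|\sum_i \mathcal{A}_i\Big|^2
\;=\; \sum_i |\mathcal{A}_i|^2 \;+\; \sum_{i\neq j} \mathcal{A}_i \mathcal{A}_j^{*},
% the first sum is the classical (Drude) contribution; the second contains
% the interference terms, which survive disorder averaging only for
% time-reversed pairs of self-crossing paths.
```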
The weak localization correction can be shown to come mostly from quantum interference between self-crossing paths in which an electron can propagate in the clock-wise and counter-clockwise direction around a loop. Due to the identical length of the two paths along a loop, the quantum phases cancel each other exactly and these (otherwise random in sign) quantum interference terms survive disorder averaging. Since it is much more likely to find a self-crossing trajectory in low dimensions, the weak localization effect manifests itself much more strongly in low-dimensional systems (films and wires).
Weak anti-localization
In a system with spin–orbit coupling, the spin of a carrier is coupled to its momentum. The spin of the carrier rotates as it goes around a self-intersecting path, and the direction of this rotation is opposite for the two directions about the loop. Because of this, the two paths along any loop interfere destructively which leads to a lower net resistivity.
In two dimensions
In two dimensions the change in conductivity from applying a magnetic field, due to either weak localization or weak anti-localization, can be described by the Hikami-Larkin-Nagaoka equation:

σ(B) − σ(0) = − (e²/2π²ℏ) [ψ(1/2 + B_1/B) − (3/2) ψ(1/2 + B_2/B) + (1/2) ψ(1/2 + B_3/B)]

with B_1 = B_0 + B_SO + B_s, B_2 = B_i + (4/3)B_SO + (2/3)B_s and B_3 = B_i + 2B_s. Here τ_0, τ_i, τ_s and τ_SO are the various relaxation times, and σ(0) is the conductivity of the system in the absence of weak localization or weak anti-localization. The equation was originally derived in terms of these relaxation times and was soon restated in terms of characteristic fields, which are more directly experimentally relevant quantities:

B_{0,i,s,SO} = ℏ / (4 e D τ_{0,i,s,SO})

where D is the diffusion constant. B_0 corresponds to potential scattering, B_i to inelastic scattering, B_s to magnetic scattering, and B_SO to spin-orbit scattering. For a non-magnetic sample (B_s = 0, so that B_φ = B_i and B_e = B_0), this can be rewritten:

σ(B) − σ(0) = − (e²/2π²ℏ) [ψ(1/2 + (B_e + B_SO)/B) − (3/2) ψ(1/2 + (B_φ + (4/3)B_SO)/B) + (1/2) ψ(1/2 + B_φ/B)]
Here ψ is the digamma function. B_φ is the phase coherence characteristic field, which is roughly the magnetic field required to destroy phase coherence, B_SO is the spin–orbit characteristic field which can be considered a measure of the strength of the spin–orbit interaction, and B_e is the elastic characteristic field. The characteristic fields are better understood in terms of their corresponding characteristic lengths, which are deduced from B_i = ℏ/(4 e L_i²), i.e. L_i = (D τ_i)^{1/2}. L_φ can then be understood as the distance traveled by an electron before it loses phase coherence, L_SO can be thought of as the distance traveled before the spin of the electron undergoes the effect of the spin–orbit interaction, and finally L_e is the mean free path.
In the limit of strong spin–orbit coupling (B_SO ≫ B_φ), the equation above reduces to:

σ(B) − σ(0) = α (e²/2π²ℏ) [ψ(1/2 + B_φ/B) − ln(B_φ/B)]

In this equation α is −1 for weak antilocalization and +1/2 for weak localization.
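As a quick numerical illustration of the reduced expression as written above (a minimal sketch: the field values are hypothetical, and the prefactor and α convention simply follow the text):

```python
import numpy as np
from scipy.special import digamma

HBAR = 1.054571817e-34      # J*s
E_CHARGE = 1.602176634e-19  # C

def delta_sigma(B, B_phi, alpha):
    """sigma(B) - sigma(0) in the strong spin-orbit limit, per the reduced
    form above: alpha * e^2/(2 pi^2 hbar) *
    [digamma(1/2 + B_phi/B) - ln(B_phi/B)]."""
    x = B_phi / np.asarray(B, dtype=float)
    return alpha * E_CHARGE**2 / (2 * np.pi**2 * HBAR) * (digamma(0.5 + x) - np.log(x))

B = np.linspace(0.01, 1.0, 5)                  # applied field in tesla (hypothetical)
print(delta_sigma(B, B_phi=0.05, alpha=-1.0))  # weak anti-localization: sigma falls
print(delta_sigma(B, B_phi=0.05, alpha=+0.5))  # weak localization: sigma rises
```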
Magnetic field dependence
The strength of either weak localization or weak anti-localization falls off quickly in the presence of a magnetic field, which causes carriers to acquire an additional phase as they move around paths.
See also
Coherent backscattering
References
Mesoscopic physics
Condensed matter physics
Electric and magnetic fields in matter | Weak localization | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 913 | [
"Phases of matter",
"Electric and magnetic fields in matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Mesoscopic physics",
"Matter"
] |
17,943,941 | https://en.wikipedia.org/wiki/Immersed%20boundary%20method | In computational fluid dynamics, the immersed boundary method originally referred to an approach developed by Charles Peskin in 1972 to simulate fluid-structure (fiber) interactions. Treating the coupling of the structure deformations and the fluid flow poses a number of challenging problems for numerical simulations (the elastic boundary changes the flow of the fluid and the fluid moves the elastic boundary simultaneously). In the immersed boundary method the fluid is represented in an Eulerian coordinate system and the structure is represented in Lagrangian coordinates. For Newtonian fluids governed by the Navier–Stokes equations, the fluid equations are

ρ (∂u(x, t)/∂t + u · ∇u) = −∇p + μ Δu + f(x, t)

and if the flow is incompressible, we have the further condition that

∇ · u = 0.
The immersed structures are typically represented as a collection of one-dimensional fibers, denoted by Γ. Each fiber can be viewed as a parametric curve X(s, t), where s is the Lagrangian coordinate along the fiber and t is time. The physics of the fiber is represented via a fiber force distribution function F(s, t). Spring forces, bending resistance or any other type of behavior can be built into this term. The force exerted by the structure on the fluid is then interpolated as a source term in the momentum equation using

f(x, t) = ∫_Γ F(s, t) δ(x − X(s, t)) ds

where δ is the Dirac delta function. The forcing can be extended to multiple dimensions to model elastic surfaces or three-dimensional solids. Assuming a massless structure, the elastic fiber moves with the local fluid velocity and can be interpolated via the delta function

∂X/∂t (s, t) = u(X(s, t), t) = ∫_Ω u(x, t) δ(x − X(s, t)) dx

where Ω denotes the entire fluid domain.
Discretization of these equations can be done by assuming an Eulerian grid on the fluid and a separate Lagrangian grid on the fiber.
Approximations of the Delta distribution by smoother functions will allow us to interpolate between the two grids.
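A common choice of smoothed delta is Peskin's four-point kernel. The sketch below (a minimal NumPy illustration; the function names, the uniform grid spacing h, and the explicit loops are illustrative rather than production practice) shows the two coupling operations: spreading the fiber force onto the grid and interpolating the grid velocity back to the fiber:

```python
import numpy as np

def phi(r):
    """Peskin's 4-point regularized delta kernel (one 1D factor)."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    out[m1] = (3 - 2*r[m1] + np.sqrt(1 + 4*r[m1] - 4*r[m1]**2)) / 8
    out[m2] = (5 - 2*r[m2] - np.sqrt(-7 + 12*r[m2] - 4*r[m2]**2)) / 8
    return out

def spread_force(X, F, x, y, h, ds):
    """Spread Lagrangian point forces F (Nb,2) at fiber points X (Nb,2)
    onto a uniform Eulerian grid with nodes x (Nx,), y (Ny,):
    f_ij = sum_k F_k * phi((x_i-X_k)/h)/h * phi((y_j-Y_k)/h)/h * ds."""
    f = np.zeros((len(x), len(y), 2))
    for Xk, Fk in zip(np.asarray(X), np.asarray(F)):
        w = np.outer(phi((x - Xk[0]) / h), phi((y - Xk[1]) / h)) / h**2
        f += w[:, :, None] * Fk * ds
    return f

def interpolate_velocity(u, X, x, y, h):
    """Interpolate grid velocity u (Nx,Ny,2) to the fiber points:
    U_k = sum_ij u_ij * delta_h(x_ij - X_k) * h^2.
    The h^2 quadrature weight cancels the 1/h^2 in delta_h."""
    U = np.zeros_like(np.asarray(X, dtype=float))
    for k, Xk in enumerate(np.asarray(X)):
        w = np.outer(phi((x - Xk[0]) / h), phi((y - Xk[1]) / h))
        U[k] = np.tensordot(w, u, axes=([0, 1], [0, 1]))
    return U
```

In a full simulation, a fluid solver advances u with the spread force f as a body force, and the fiber positions are then advected with the interpolated velocity (∂X/∂t = U).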
Any existing fluid solver can be coupled to a solver for the fiber equations to solve the Immersed Boundary equations.
Variants of this basic approach have been applied to simulate a wide variety of mechanical systems involving elastic structures which interact with fluid flows.
Since the original development of this method by Peskin, a variety of approaches have been developed. These include stochastic formulations for microscopic systems, viscoelastic soft materials and complex fluids, such as the Stochastic Immersed Boundary Methods of Atzberger, Kramer, and Peskin; methods for simulating flows over complicated immersed solid bodies on grids that do not conform to the surface of the body, such as those of Mittal and Iaccarino; and other approaches that incorporate mass and rotational degrees of freedom, such as those of Olson, Lim, and Cortez. Methods for complicated body shapes include the immersed interface method, the Cartesian grid method, the ghost fluid method and the cut-cell method; a common taxonomy categorizes immersed boundary methods into continuous forcing and discrete forcing methods. Methods have been developed for simulations of viscoelastic fluids, curved fluid interfaces, microscopic biophysical systems (proteins in lipid bilayer membranes, swimmers), and engineered devices, such as the Stochastic Immersed Boundary Methods of Atzberger, Kramer, and Peskin, the Stochastic Eulerian Lagrangian Methods of Atzberger, the Massed Immersed Boundary Methods of Mori, and the Rotational Immersed Boundary Methods of Olson, Lim, and Cortez.
In general, for immersed boundary methods and related variants, there is an active research community that is still developing new techniques and related software implementations and incorporating related techniques into simulation packages and CAD engineering software. For more details see below.
See also
Stochastic Eulerian Lagrangian methods
Stokesian dynamics
Volume of fluid method
Level-set method
Marker-and-cell method
Software: Numerical codes
FloEFD: Commercial CFD IBM code
Advanced Simulation Library
Mango-Selm : Immersed Boundary Methods and SELM Simulations, 3D Package, (Python interface, LAMMPS MD Integration), P. Atzberger, UCSB
Stochastic Immersed Boundary Methods in 3D, P. Atzberger, UCSB
Immersed Boundary Method for Uniform Meshes in 2D, A. Fogelson, Utah
IBAMR : Immersed Boundary Method for Adaptive Meshes in 3D, B. Griffith, NYU.
IB2d: Immersed Boundary Method for MATLAB and Python in 2D with 60+ examples, N.A. Battista, TCNJ
ESPResSo: Immersed Boundary Method for soft elastic objects
CFD IBM code based on OpenFoam
sdfibm: Another CFD IBM code based on OpenFoam
SimScale: Immersed Boundary Method for fluid mechanics and conjugate heat transfer simulation in the cloud
Notes
References
Fluid mechanics
Computational fluid dynamics
Numerical differential equations | Immersed boundary method | [
"Physics",
"Chemistry",
"Engineering"
] | 911 | [
"Computational fluid dynamics",
"Computational physics",
"Civil engineering",
"Fluid mechanics",
"Fluid dynamics"
] |
4,421,352 | https://en.wikipedia.org/wiki/Caloron | In mathematical physics, a caloron is the finite temperature generalization of an instanton.
Finite temperature and instantons
At zero temperature, instantons are the name given to solutions of the classical equations of motion of the Euclidean version of the theory under consideration, and which are furthermore localized in Euclidean spacetime. They describe tunneling between different topological vacuum states of the Minkowski theory. One important example of an instanton is the BPST instanton, discovered in 1975 by Alexander Belavin, Alexander Markovich Polyakov, Albert Schwartz and Yu S. Tyupkin. This is a topologically stable solution to the four-dimensional SU(2) Yang–Mills field equations in Euclidean spacetime (i.e. after Wick rotation).
Finite temperatures in quantum field theories are modeled by compactifying the imaginary (Euclidean) time (see thermal quantum field theory). This changes the overall structure of spacetime, and thus also changes the form of the instanton solutions. According to the Matsubara formalism, at finite temperature, the Euclidean time dimension is periodic, which means that instanton solutions have to be periodic as well.
In SU(2) Yang–Mills theory
In SU(2) Yang–Mills theory at zero temperature, the instantons have the form of the BPST instanton. The generalization thereof to finite temperature has been found by Harrington and Shepard. In the 't Hooft ansatz A_μ^a = −η̄^a_{μν} ∂_ν ln Π(r, τ), the caloron is given by

Π(r, τ) = 1 + (π ρ² T / r) · sinh(2πrT) / (cosh(2πrT) − cos(2πτT)),

where η̄^a_{μν} is the anti-'t Hooft symbol, r is the distance from the point x to the center of the caloron, ρ is the size of the caloron, τ is the Euclidean time and T is the temperature. This solution was found based on a periodic multi-instanton solution first suggested by Gerard 't Hooft and published by Edward Witten.
References and notes
Bibliography
Gauge theories | Caloron | [
"Physics"
] | 365 | [
"Quantum mechanics",
"Quantum physics stubs"
] |
4,421,519 | https://en.wikipedia.org/wiki/Stanley%20Mandelstam | Stanley Mandelstam (; 12 December 1928 – 23 June 2016) was a South African theoretical physicist. He introduced the relativistically invariant Mandelstam variables into particle physics in 1958 as a convenient coordinate system for formulating his double dispersion relations. The double dispersion relations were a central tool in the bootstrap program which sought to formulate a consistent theory of infinitely many particle types of increasing spin.
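For reference, for a two-body scattering process 1 + 2 → 3 + 4 with four-momenta p_i and masses m_i, the variables he introduced are the Lorentz invariants

```latex
s = (p_1 + p_2)^2, \qquad t = (p_1 - p_3)^2, \qquad u = (p_1 - p_4)^2,
\qquad s + t + u = m_1^2 + m_2^2 + m_3^2 + m_4^2 .
```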
Early life
Mandelstam was born in Johannesburg, South Africa to a Jewish family.
Work
Mandelstam, along with Tullio Regge, did the initial development of the Regge theory of strong interaction phenomenology. He reinterpreted the analytic growth rate of the scattering amplitude as a function of the cosine of the scattering angle as the power law for the falloff of scattering amplitudes at high energy. Along with the double dispersion relations, Regge theory allowed theorists to find sufficient analytic constraints on scattering amplitudes of bound states to formulate a theory in which there are infinitely many particle types, none of which are fundamental.
After Veneziano constructed the first tree-level scattering amplitude describing infinitely many particle types, what was recognized almost immediately as a string scattering amplitude, Mandelstam continued to make crucial contributions. He interpreted the Virasoro algebra discovered in consistency conditions as a geometrical symmetry of a world-sheet conformal field theory, formulating string theory in terms of two dimensional quantum field theory. He used the conformal invariance to calculate tree level string amplitudes on many worldsheet domains. Mandelstam was the first to explicitly construct the fermion scattering amplitudes in the Ramond and Neveu–Schwarz sectors of superstring theory, and later gave arguments for the finiteness of string perturbation theory.
In quantum field theory, Mandelstam and independently Sidney Coleman extended work of Tony Skyrme to show that the two dimensional quantum Sine-Gordon model is equivalently described by a Thirring model whose fermions are the kinks. He also demonstrated that the 4d N=4 supersymmetric gauge theory is power counting finite, proving that this theory is scale invariant to all orders of perturbation theory, the first example of a field theory where all the infinities in Feynman diagrams cancel.
Among his students at Berkeley are Joseph Polchinski, Michio Kaku, Charles Thorn and Hessamaddin Arfaei.
Stanley Mandelstam died in his Berkeley apartment in June 2016.
Education
University of the Witwatersrand, South Africa (BSc, 1952)
Trinity College, Cambridge (BA, 1954)
University of Birmingham (PhD, 1956)
Career
Professor of Mathematical Physics, University of Birmingham, 1960–63
Professor of Physics, University of California, Berkeley, since 1963 (Professor Emeritus since 1994)
Professeur Associé, Université de Paris-Sud, 1979–80 and 1984–85
Honours
Fellow of the Royal Society, 1962
Dirac Medal and Prize, International Centre for Theoretical Physics, 1991
Fellow, American Academy of Arts and Sciences, 1992
Dannie Heineman Prize for Mathematical Physics, American Physical Society, 1992
References
External links
Web page at Berkeley
1928 births
2016 deaths
South African Jews
Jewish American scientists
Fellows of the Royal Society
Particle physicists
Alumni of Trinity College, Cambridge
University of California, Berkeley College of Letters and Science faculty
American physicists
Theoretical physicists
Scientists from Johannesburg
Alumni of the University of Birmingham
21st-century American Jews | Stanley Mandelstam | [
"Physics"
] | 718 | [
"Theoretical physics",
"Particle physicists",
"Particle physics",
"Theoretical physicists"
] |
4,423,830 | https://en.wikipedia.org/wiki/Methane%20monooxygenase | Methane monooxygenase (MMO) is an enzyme capable of oxidizing the C-H bond in methane as well as other alkanes. Methane monooxygenase belongs to the class of oxidoreductase enzymes.
There are two forms of MMO: the well-studied soluble form (sMMO) and the particulate form (pMMO). The active site in sMMO contains a di-iron center bridged by an oxygen atom (Fe-O-Fe), whereas the active site in pMMO utilizes copper. Structures of both proteins have been determined by X-ray crystallography; however, the location and mechanism of the active site in pMMO is still poorly understood and is an area of active research.
The particulate methane monooxygenase and related ammonia monooxygenase are integral membrane proteins, occurring in methanotrophs and ammonia oxidisers, respectively, which are thought to be related. These enzymes have a relatively wide substrate specificity and can catalyse the oxidation of a range of substrates including ammonia, methane, halogenated hydrocarbons, and aromatic molecules. These enzymes are homotrimers composed of three subunits, A, B and C, and most contain two monocopper centers.
The A subunit from Methylococcus capsulatus (Bath) resides primarily within the membrane and consists of 7 transmembrane helices and a beta-hairpin, which interacts with the soluble region of the B subunit. A conserved glutamate residue is thought to contribute to a metal center.
Methane monooxygenases are found in methanotrophic bacteria, a class of bacteria that exist at the interface of aerobic (oxygen-containing) and anaerobic (oxygen-devoid) environments. One of the more widely studied bacteria of this type is Methylococcus capsulatus (Bath). This bacterium was discovered in the hot springs of Bath, England. Notably, strictly anaerobic methanotrophs may also harbour methane monooxygenases, although there are critical mismatches in the gene which prevent common methanotroph-seeking primers from matching.
Soluble methane monooxygenase systems
Methanotrophic bacteria play an essential role in cycling carbon through anaerobic sediments. The chemistry behind the cycling takes a chemically inert hydrocarbon, methane, and converts it to a more active species, methanol. Other hydrocarbons are oxidized by MMOs, so a new hydroxylation catalyst based on the understanding of MMO systems could possibly make a more efficient use of the world supply of natural gas.
This is a classic monooxygenase reaction in which two reducing equivalents from NAD(P)H are utilized to split the O-O bond of O2. One atom is reduced to water by a 2 e- reduction and the second is incorporated into the substrate to yield methanol:
CH4 + O2 + NAD(P)H + H+ -> CH3OH + NAD(P)+ + H2O
Two forms of MMO have been found: soluble and particulate. The best characterized forms of soluble MMO contain three protein components: the hydroxylase, the β unit, and the reductase. Each of these is necessary for effective substrate hydroxylation and NADH oxidation.
Structure
X-ray crystallography of the MMO shows that it is a dimer formed of three subunits, α2β2γ2. At 2.2 Å resolution, the crystallography shows that MMO is a relatively flat molecule with the dimensions of 60 × 100 × 120 Å. In addition, there is a wide canyon running along the dimer interface with an opening in the center of the molecule. Most of the protomer interactions involve helices from the α and β subunits, with no participation from the γ subunit. Also, the protomer interactions resemble the ribonucleotide reductase R2 protein dimer interaction, with the overall shape resembling a heart. Each iron has a six-coordinate octahedral environment. The dinuclear iron centers are positioned in the α subunit. Each iron atom is also coordinated to a histidine δN atom, Fe 1 to His 147 and Fe 2 to His 246. Fe 1 is ligated to a monodentate carboxylate, Glu 114, a semi-bridging carboxylate, Glu 144, and a water molecule.
The substrate must bind near the active site in order for the reaction to take place. Near the iron centers, there are hydrophobic pockets. It is thought that the methane binds here and is held until needed. From the X-ray crystallography, there is no direct path to these pockets. However, a slight conformational change in the Phe 188 or Thr 213 side-chains could allow access. This conformational change could be triggered by the binding of the coupling protein and the activase.
Upon reduction, one of the carboxylate ligands undergoes a "1,2 carboxylate" shift from being a terminal monodentate ligand to a bridging ligand for the two irons, with the second oxygen coordinated to Fe 2. In the reduced form of MMOHred, the ligand environment for the Fe effectively becomes five-coordinate, a form that permits the cluster to activate dioxygen. The two irons are at this point oxidized to FeIV and have changed from low-spin ferromagnetic to high-spin antiferromagnetic.
Proposed catalytic cycle and mechanism
From the MMOHred, the diiron centers react with the O2 to form intermediate P. This intermediate is a peroxide species where the oxygens are bound symmetrically, suggested by spectroscopic studies. However, the structure is not known. Intermediate P then converts to intermediate Q, which was proposed to contain two antiferromagnetically coupled high-spin FeIV centers. This compound Q, the structure of which is under debate, is critical to the oxidizing species for MMO.
There are two mechanisms suggested for the reaction between compound Q and the alkane: radical and nonradical. The radical mechanism starts with abstraction of the hydrogen atom from the substrate to form QH (the rate determining step), hydroxyl bridged compound Q and the free alkyl radical. The nonradical mechanism implies a concerted pathway, occurring via a four-center transition state and leading to a “hydrido-alkyl-Q” compound. As of 1999, the research suggests that the methane oxidation proceeds via a bound-radical mechanism.
It was suggested that the transition state for the radical mechanism involves a torsion motion of the hydroxyl OH ligand before the methyl radical can add to the bridging hydroxyl ligand to form the alcohol. As the radical approaches, the H atom of the alkane leaves the coplanar tricoordinate O environment and bends upward to create a tetrahedral tetracoordinate O environment.
The final step for this reaction is the elimination of the alcohol and the regeneration of the catalyst. There are a few ways in which this can occur. It could be a stepwise mechanism that starts with the elimination of the alcohol, leaving an intermediate Fe-O-Fe core, and the latter can then eliminate water and regenerate the enzyme through a 2e- reduction. On the other hand, it can start with a 2e- reduction of the bridging O1 atom to give a water molecule, followed by elimination of the alcohol and regeneration of the enzyme. In addition, it is possible that there is a concerted mechanism whereby the elimination of the methanol occurs spontaneously with 2e- reduction of the bridging O1 center and regeneration of the catalyst.
See also
Bioinorganic chemistry
Oxygenase
Shilov system
References
Further reading
External links
Enzymes
Copper enzymes
Metalloproteins
Oxidoreductases
EC 1.14.13
Integral membrane proteins | Methane monooxygenase | [
"Chemistry"
] | 1,664 | [
"Metalloproteins",
"Oxidoreductases",
"Bioinorganic chemistry"
] |
12,320,384 | https://en.wikipedia.org/wiki/Theory%20of%20impetus | The theory of impetus is an auxiliary or secondary theory of Aristotelian dynamics, put forth initially to explain projectile motion against gravity. It was introduced by John Philoponus in the 6th century, and elaborated by Nur ad-Din al-Bitruji at the end of the 12th century. The theory was modified by Avicenna in the 11th century and Abu'l-Barakāt al-Baghdādī in the 12th century, before it was later established in Western scientific thought by Jean Buridan in the 14th century. It is the intellectual precursor to the concepts of inertia, momentum and acceleration in classical mechanics.
Aristotelian theory
Aristotelian physics is the form of natural philosophy described in the works of the Greek philosopher Aristotle (384–322 BC). In his work Physics, Aristotle intended to establish general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrial – including all motion, quantitative change, qualitative change, and substantial change.
Aristotle describes two kinds of motion: "violent" or "unnatural motion", such as that of a thrown stone, in Physics (254b10), and "natural motion", such as of a falling object, in On the Heavens (300a20). In violent motion, as soon as the agent stops causing it, the motion stops also: in other words, the natural state of an object is to be at rest, since Aristotle does not address friction.
Hipparchus' theory
In the 2nd century, Hipparchus assumed that the throwing force is transferred to the body at the time of the throw, and that the body dissipates it during the subsequent up-and-down motion of free fall. This is according to the Neoplatonist Simplicius of Cilicia, who quotes Hipparchus in his book Aristotelis De Caelo commentaria 264, 25 as follows: "Hipparchus says in his book On Bodies Carried Down by Their Weight that the throwing force is the cause of the upward motion of [a lump of] earth thrown upward as long as this force is stronger than that of the thrown body; the stronger the throwing force, the faster the upward motion. Then, when the force decreases, the upward motion continues at a decreased speed until the body begins to move downward under the influence of its own weight, while the throwing force still continues in some way. As this decreases, the velocity of the fall increases and reaches its highest value when this force is completely dissipated." Thus, Hipparchus does not speak of a continuous contact between the moving force and the moving body, or of the function of air as an intermediate carrier of motion, as Aristotle claims.
Philoponan theory
In the 6th century, John Philoponus partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force," but modified it to include his idea that the hurled body acquires a motive power or inclination for forced movement from the agent producing the initial motion and that this power secures the continuation of such motion. However, he argued that this impressed virtue was temporary: that it was a self-expending inclination, and thus the violent motion produced comes to an end, changing back into natural motion.
In his book On Aristotle Physics 641, 12; 641, 29; 642, 9 Philoponus first argues explicitly against Aristotle's explanation that a thrown stone, after leaving the hand, cannot be propelled any further by the air behind it. Then he continues: "Instead, some immaterial kinetic force must be imparted to the projectile by the thrower. Whereby the pushed air contributes either nothing or only very little to this motion. But if moving bodies are necessarily moved in this way, it is clear that the same process will take place much more easily if an arrow or a stone is thrown necessarily and against its tendency into empty space, and that nothing is necessary for this except the thrower." This last sentence is intended to show that in empty space—which Aristotle rejects—and contrary to Aristotle's opinion, a moving body would continue to move. It should be pointed out that Philoponus in his book uses two different expressions for impetus: kinetic capacity (dynamis) and kinetic force (energeia). Both expressions designate in his theory a concept, which is close to the today's concept of energy, but they are far away from the Aristotelian conceptions of potentiality and actuality.
Philoponus' theory of imparted force cannot yet be understood as a principle of inertia. For while he rightly says that the driving quality is no longer imparted externally but has become an internal property of the body, he still accepts the Aristotelian assertion that the driving quality is a force (power) that now acts internally and to which velocity is proportional. In modern physics since Newton, however, velocity is a quality that persists in the absence of forces. The first one to grasp this persistent motion by itself was William of Ockham, who said in his Commentary on the Sentences, Book 2, Question 26, M: "I say therefore that that which moves (ipsum movens) ... after the separation of the moving body from the original projector, is the body moved by itself (ipsum motum secundum se) and not by any power in it or relative to it (virtus absoluta in eo vel respectiva), ... ." It has been claimed by some historians that by rejecting the basic Aristotelian principle "Everything that moves is moved by something else." (Omne quod moventur ab alio movetur.), Ockham took the first step toward the principle of inertia.
Iranian theories
In the 11th century, Avicenna (Ibn Sīnā) discussed Philoponus' theory in The Book of Healing (Physics IV.14).
Ibn Sīnā agreed that an impetus is imparted to a projectile by the thrower, but unlike Philoponus, who believed that it was a temporary virtue that would decline even in a vacuum, he viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sīnā made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gained mayl when the object is in opposition to its natural motion. Therefore, he concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that the object will be in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon, which is consistent with Newton's concept of inertia. This idea (which dissented from the Aristotelian view) was later described as "impetus" by Jean Buridan, who may have been influenced by Ibn Sīnā.
Arabic theories
In the 12th century, Hibat Allah Abu'l-Barakat al-Baghdaadi adopted Philoponus' theory of impetus. In his Kitab al-Mu'tabar, Abu'l-Barakat stated that the mover imparts a violent inclination (mayl qasri) on the moved and that this diminishes as the moving object distances itself from the mover. Like Philoponus, and unlike Ibn Sina, al-Baghdaadi believed that the mayl self-extinguishes itself.
He also proposed an explanation of the acceleration of falling bodies where "one mayl after another" is successively applied, because it is the falling body itself which provides the mayl, as opposed to shooting a bow, where only one violent mayl is applied. According to Shlomo Pines, al-Baghdaadi's theory was
the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration].
Jean Buridan and Albert of Saxony later refer to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus.
Buridanist impetus
In the 14th century, Jean Buridan postulated the notion of motive force, which he named impetus.
Buridan gives his theory a mathematical value: impetus = weight × velocity. His pupil Dominicus de Clavasio restated the theory in his 1357 De Caelo.
Buridan's position was that a moving object would only be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus was proportional to speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also maintained that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle.
Buridan pointed out that neither Aristotle's unmoved movers nor Plato's souls are in the Bible, so he applied impetus theory to the eternal rotation of the celestial spheres by extension of a terrestrial example of its application to rotary motion, in the form of a rotating millwheel that continues rotating for a long time after the originally propelling hand is withdrawn, driven by the impetus impressed within it. He held that a celestial impetus, impressed on the spheres at their creation, has kept them rotating ever since.
However, by discounting the possibility of any resistance either due to a contrary inclination to move in any opposite direction or due to any external resistance, he concluded their impetus was therefore not corrupted by any resistance. Buridan also discounted any inherent resistance to motion in the form of an inclination to rest within the spheres themselves, such as the inertia posited by Averroes and Aquinas. For otherwise that resistance would destroy their impetus, as the anti-Duhemian historian of science Anneliese Maier maintained the Parisian impetus dynamicists were forced to conclude because of their belief in an inherent inclinatio ad quietem or inertia in all bodies.
This raised the question of why the motive force of impetus does not therefore move the spheres with infinite speed. One impetus dynamics answer seemed to be that it was a secondary kind of motive force that produced uniform motion rather than infinite speed, rather than producing uniformly accelerated motion like the primary force did by producing constantly increasing amounts of impetus. However, in his Treatise on the heavens and the world in which the heavens are moved by inanimate inherent mechanical forces, Buridan's pupil Oresme offered an alternative Thomist inertial response to this problem. His response was to posit a resistance to motion inherent in the heavens (i.e. in the spheres), but which is only a resistance to acceleration beyond their natural speed, rather than to motion itself, and was thus a tendency to preserve their natural speed.
Buridan's thought was followed up by his pupil Albert of Saxony (1316–1390), by writers in Poland such as John Cantius, and the Oxford Calculators. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs.
The tunnel experiment and oscillatory motion
The Buridan impetus theory developed one of the most important thought experiments in the history of science, the 'tunnel-experiment'. This experiment incorporated oscillatory and pendulum motion into dynamical analysis and the science of motion for the first time. It also established one of the important principles of classical mechanics. The pendulum was crucially important to the development of mechanics in the 17th century. The tunnel experiment also gave rise to the more generally important axiomatic principle of Galilean, Huygenian and Leibnizian dynamics, namely that a body rises to the same height from which it has fallen, a principle of gravitational potential energy. Galileo Galilei expressed this fundamental principle of his dynamics in his 1632 Dialogo.
This imaginary experiment predicted that a cannonball dropped down a tunnel going straight through the Earth's centre and out the other side would pass the centre and rise on the opposite surface to the same height from which it had first fallen, driven upwards by the gravitationally created impetus it had continually accumulated in falling to the centre. This impetus would require a violent motion correspondingly rising to the same height past the centre for the now opposing force of gravity to destroy it all in the same distance which it had previously required to create it. At this turning point the ball would then descend again and oscillate back and forth between the two opposing surfaces about the centre infinitely in principle. The tunnel experiment provided the first dynamical model of oscillatory motion, specifically in terms of A-B impetus dynamics.
This thought-experiment was then applied to the dynamical explanation of a real world oscillatory motion, namely that of the pendulum. The oscillating motion of the cannonball was compared to the motion of a pendulum bob by imagining it to be attached to the end of an immensely long cord suspended from the vault of the fixed stars centred on the Earth. The relatively short arc of its path through the distant Earth was practically a straight line along the tunnel. Real world pendula were then conceived of as just micro versions of this 'tunnel pendulum', but with far shorter cords and bobs oscillating above the Earth's surface in arcs corresponding to the tunnel as their oscillatory midpoint was dynamically assimilated to the tunnel's centre.
Through such 'lateral thinking', the pendulum's lateral horizontal motion was conceived of as a case of gravitational free-fall followed by violent motion in a recurring cycle, with the bob repeatedly travelling through and beyond the motion's vertically lowest but horizontally middle point, which substituted for the Earth's centre in the tunnel pendulum. The lateral motions of the bob first towards and then away from the normal in the downswing and upswing become lateral downward and upward motions in relation to the horizontal rather than to the vertical.
The orthodox Aristotelians saw pendulum motion as a dynamical anomaly, as 'falling to rest with difficulty'. Thomas Kuhn wrote in his 1962 The Structure of Scientific Revolutions that on the impetus theory's novel analysis the bob was not falling with any dynamical difficulty at all in principle, but was rather falling in repeated and potentially endless cycles of alternating downward gravitationally natural motion and upward gravitationally violent motion. Galileo eventually appealed to pendulum motion to demonstrate that the speed of gravitational free-fall is the same for all unequal weights by virtue of dynamically modelling pendulum motion in this manner as a case of cyclically repeated gravitational free-fall along the horizontal in principle.
The tunnel experiment was a crucial experiment in favour of impetus dynamics against both orthodox Aristotelian dynamics without any auxiliary impetus theory and Aristotelian dynamics with its H-P variant. According to the latter two theories, the bob cannot possibly pass beyond the normal. In orthodox Aristotelian dynamics there is no force to carry the bob upwards beyond the centre in violent motion against its own gravity that carries it to the centre, where it stops. When conjoined with the Philoponus auxiliary theory, in the case where the cannonball is released from rest, there is no such force because either all the initial upward force of impetus originally impressed within it to hold it in static dynamical equilibrium has been exhausted, or if any remained it would act in the opposite direction and combine with gravity to prevent motion through and beyond the centre. The cannonball being positively hurled downwards could not possibly result in an oscillatory motion either. Although it could then possibly pass beyond the centre, it could never return to pass through it and rise back up again. It would be logically possible for it to pass beyond the centre if upon reaching the centre some of the constantly decaying downward impetus remained and still was sufficiently stronger than gravity to push it beyond the centre and upwards again, eventually becoming weaker than gravity. The ball would then be pulled back towards the centre by its gravity but could not then pass beyond the centre to rise up again, because it would have no force directed against gravity to overcome it. Any possibly remaining impetus would be directed 'downwards' towards the centre, in the same direction it was originally created.
Thus pendulum motion was dynamically impossible for both orthodox Aristotelian dynamics and for H-P impetus dynamics on this 'tunnel model' analogical reasoning. But it was predicted by the impetus theory's tunnel model because that theory posited that a continually accumulating downward force of impetus directed towards the centre is acquired in natural motion, sufficient to then carry the bob upwards beyond the centre against gravity, rather than only an initially upward force of impetus away from the centre as in the theory of natural motion. So the tunnel experiment constituted a crucial experiment between three alternative theories of natural motion.
Impetus dynamics was to be preferred if the Aristotelian science of motion was to incorporate a dynamical explanation of pendulum motion. It was also to be preferred more generally if it was to explain other oscillatory motions, such as the to and fro vibrations around the normal of musical strings in tension, such as those of a guitar. The analogy made with the gravitational tunnel experiment was that the tension in the string pulling it towards the normal played the role of gravity, and thus when plucked (i.e. pulled away from the normal) and then released, it was the equivalent of pulling the cannonball to the Earth's surface and then releasing it. Thus the musical string vibrated in a continual cycle of the alternating creation of impetus towards the normal and its destruction after passing through the normal until this process starts again with the creation of fresh 'downward' impetus once all the 'upward' impetus has been destroyed.
This positing of a dynamical family resemblance of the motions of pendula and vibrating strings with the paradigmatic tunnel-experiment, the origin of all oscillations in the history of dynamics, was one of the greatest imaginative developments of medieval Aristotelian dynamics in its increasing repertoire of dynamical models of different kinds of motion.
Shortly before Galileo's theory of impetus, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
... [Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path.
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion.
See also
Conatus
Physics in the medieval Islamic world
History of science
References and footnotes
Bibliography
Duhem, Pierre. [1906–13]: Etudes sur Leonard de Vinci
Duhem, Pierre, History of Physics, Section IX, XVI and XVII in The Catholic Encyclopedia
Natural philosophy
Classical mechanics
Obsolete theories in physics | Theory of impetus | [
"Physics"
] | 3,966 | [
"Theoretical physics",
"Mechanics",
"Classical mechanics",
"Obsolete theories in physics"
] |
12,321,977 | https://en.wikipedia.org/wiki/Shale%20oil%20extraction | Shale oil extraction is an industrial process for unconventional oil production. This process converts kerogen in oil shale into shale oil by pyrolysis, hydrogenation, or thermal dissolution. The resultant shale oil is used as fuel oil or upgraded to meet refinery feedstock specifications by adding hydrogen and removing sulfur and nitrogen impurities.
Shale oil extraction is usually performed above ground (ex situ processing) by mining the oil shale and then treating it in processing facilities. Other modern technologies perform the processing underground (on-site or in situ processing) by applying heat and extracting the oil via oil wells.
The earliest description of the process dates to the 10th century. In 1684, England granted the first formal extraction process patent. Extraction industries and innovations became widespread during the 19th century. The industry shrank in the mid-20th century following the discovery of large reserves of conventional oil, but high petroleum prices at the beginning of the 21st century have led to renewed interest, accompanied by the development and testing of newer technologies.
As of 2010, major long-standing extraction industries are operating in Estonia, Brazil, and China. Its economic viability usually requires a lack of locally available crude oil. National energy security issues have also played a role in its development. Critics of shale oil extraction pose questions about environmental management issues, such as waste disposal, extensive water use, waste water management, and air pollution.
History
In the 10th century, the Assyrian physician Masawaih al-Mardini (Mesue the Younger) wrote of his experiments in extracting oil from "some kind of bituminous shale". The first shale oil extraction patent was granted by the English Crown in 1684 to three people who had "found a way to extract and make great quantities of pitch, tarr, and oyle out of a sort of stone". Modern industrial extraction of shale oil originated in France with the implementation of a process invented by Alexander Selligue in 1838, improved upon a decade later in Scotland using a process invented by James Young. During the late 19th century, plants were built in Australia, Brazil, Canada, and the United States. The 1894 invention of the Pumpherston retort, which was much less reliant on coal heat than its predecessors, marked the separation of the oil shale industry from the coal industry.
China (Manchuria), Estonia, New Zealand, South Africa, Spain, Sweden, and Switzerland began extracting shale oil in the early 20th century. However, crude oil discoveries in Texas during the 1920s and in the Middle East in the mid 20th century brought most oil shale industries to a halt. In 1944, the US recommenced shale oil extraction as part of its Synthetic Liquid Fuels Program. These industries continued until oil prices fell sharply in the 1980s. The last oil shale retort in the US, operated by Unocal Corporation, closed in 1991. The US program was restarted in 2003, followed by a commercial leasing program in 2005 permitting the extraction of oil shale and oil sands on federal lands in accordance with the Energy Policy Act of 2005.
As of 2010, shale oil extraction is in operation in Estonia, Brazil, and China. In 2008, their industries produced about 930,000 tonnes (17,700 barrels per day) of shale oil. Australia, the US, and Canada have tested shale oil extraction techniques via demonstration projects and are planning commercial implementation; Morocco and Jordan have announced their intent to do the same. Only four processes are in commercial use: Kiviter, Galoter, Fushun, and Petrosix.
Processing principles
The shale oil extraction process decomposes oil shale and converts its kerogen into shale oil—a petroleum-like synthetic crude oil. The process is conducted by pyrolysis, hydrogenation, or thermal dissolution. The efficiencies of extraction processes are often evaluated by comparing their yields to the results of a Fischer Assay performed on a sample of the shale.
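Expressed as arithmetic, the comparison is just a yield ratio; a minimal sketch with hypothetical numbers (the function name and values are illustrative):

```python
def extraction_efficiency(oil_yield_l_per_t, fischer_assay_l_per_t):
    """Oil yield of a process as a percentage of the Fischer Assay yield
    measured on the same shale (values in litres per tonne, hypothetical)."""
    return 100.0 * oil_yield_l_per_t / fischer_assay_l_per_t

# A process recovering 110 L/t from shale that assays at 100 L/t exceeds
# the Fischer Assay benchmark; some processes report more than 100%.
print(extraction_efficiency(110, 100))  # 110.0
```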
The oldest and the most common extraction method involves pyrolysis (also known as retorting or destructive distillation). In this process, oil shale is heated in the absence of oxygen until its kerogen decomposes into condensable shale oil vapors and non-condensable combustible oil shale gas. Oil vapors and oil shale gas are then collected and cooled, causing the shale oil to condense. In addition, oil shale processing produces spent oil shale, which is a solid residue. Spent shale consists of inorganic compounds (minerals) and char—a carbonaceous residue formed from kerogen. Burning the char off the spent shale produces oil shale ash. Spent shale and shale ash can be used as ingredients in cement or brick manufacture. The composition of the oil shale may lend added value to the extraction process through the recovery of by-products, including ammonia, sulfur, aromatic compounds, pitch, asphalt, and waxes.
Heating the oil shale to pyrolysis temperature and completing the endothermic kerogen decomposition reactions require a source of energy. Some technologies burn other fossil fuels such as natural gas, oil, or coal to generate this heat, and experimental methods have used electricity, radio waves, microwaves, or reactive fluids for this purpose. Two strategies are used to reduce, and even eliminate, external heat energy requirements: the oil shale gas and char by-products generated by pyrolysis may be burned as a source of energy, and the heat contained in hot spent oil shale and oil shale ash may be used to pre-heat the raw oil shale.
For ex situ processing, oil shale is crushed into smaller pieces, increasing surface area for better extraction. The temperature at which decomposition of oil shale occurs depends on the time-scale of the process. In ex situ retorting processes, it begins at about 300 °C and proceeds more rapidly and completely at higher temperatures. The amount of oil produced is highest when the temperature ranges between 480 and 520 °C. The ratio of oil shale gas to shale oil generally increases along with retorting temperatures. For a modern in situ process, which might take several months of heating, decomposition may be conducted at temperatures as low as 250 °C. Temperatures below 600 °C are preferable, as this prevents the decomposition of limestone and dolomite in the rock and thereby limits carbon dioxide emissions and energy consumption.
Hydrogenation and thermal dissolution (reactive fluid processes) extract the oil using hydrogen donors, solvents, or a combination of these. Thermal dissolution involves the application of solvents at elevated temperatures and pressures, increasing oil output by cracking the dissolved organic matter. Different methods produce shale oil with different properties.
Classification of extraction technologies
Industry analysts have created several classifications of the technologies used to extract shale oil from oil shale.
By process principles: Based on the treatment of raw oil shale by heat and solvents, the methods are classified as pyrolysis, hydrogenation, or thermal dissolution.
By location: A frequently used distinction considers whether processing is done above or below ground, and classifies the technologies broadly as ex situ (displaced) or in situ (in place). In ex situ processing, also known as above-ground retorting, the oil shale is mined either underground or at the surface and then transported to a processing facility. In contrast, in situ processing converts the kerogen while it is still in the form of an oil shale deposit; the oil is then extracted via oil wells, where it rises in the same way as conventional crude oil. Unlike ex situ processing, it does not involve mining or above-ground disposal of spent oil shale, as the spent oil shale stays underground.
By heating method: The method of transferring heat from combustion products to the oil shale may be classified as direct or indirect. While methods that allow combustion products to contact the oil shale within the retort are classified as direct, methods that burn materials external to the retort to heat another material that contacts the oil shale are described as indirect.
By heat carrier: Based on the material used to deliver heat energy to the oil shale, processing technologies have been classified into gas heat carrier, solid heat carrier, wall conduction, reactive fluid, and volumetric heating methods. Heat carrier methods can be sub-classified as direct or indirect.
The following table shows extraction technologies classified by heating method, heat carrier and location (in situ or ex situ).
By raw oil shale particle size: The various ex situ processing technologies may be differentiated by the size of the oil shale particles that are fed into the retorts. As a rule, gas heat carrier technologies process oil shale lumps varying in diameter from 10 to 100 millimeters, while solid heat carrier and wall conduction technologies process fines, which are particles less than 10 millimeters in diameter.
By retort orientation: "Ex-situ" technologies are sometimes classified as vertical or horizontal. Vertical retorts are usually shaft kilns where a bed of shale moves from top to bottom by gravity. Horizontal retorts are usually horizontal rotating drums or screws where shale moves from one end to the other. As a general rule, vertical retorts process lumps using a gas heat carrier, while horizontal retorts process fines using solid heat carrier.
By complexity of technology: In situ technologies are usually classified either as true in situ processes or modified in situ processes. True in situ processes do not involve mining or crushing the oil shale. Modified in situ processes involve drilling and fracturing the target oil shale deposit to create voids in the deposit. The voids enable a better flow of gases and fluids through the deposit, thereby increasing the volume and quality of the shale oil produced.
Ex situ technologies
Internal combustion
Internal combustion technologies burn materials (typically char and oil shale gas) within a vertical shaft retort to supply heat for pyrolysis. Typically raw oil shale particles between 12 and 75 millimeters in size are fed into the top of the retort and are heated by the rising hot gases, which pass through the descending oil shale, thereby causing decomposition of the kerogen at about 500 °C. Shale oil mist, evolved gases and cooled combustion gases are removed from the top of the retort and then moved to separation equipment. Condensed shale oil is collected, while non-condensable gas is recycled and used to carry heat up the retort. In the lower part of the retort, air is injected for the combustion, which heats the spent oil shale and gases to between 700 and 900 °C. Cold recycled gas may enter the bottom of the retort to cool the shale ash. The Union A and Superior Direct processes depart from this pattern. In the Union A process, oil shale is fed through the bottom of the retort and a pump moves it upward. In the Superior Direct process, oil shale is processed in a horizontal, segmented, doughnut-shaped traveling-grate retort.
Internal combustion technologies such as the Paraho Direct are thermally efficient, since combustion of char on the spent shale and heat recovered from the shale ash and evolved gases can provide all the heat requirements of the retort. These technologies can achieve 80–90% of Fischer assay yield. Two well-established shale oil industries use internal combustion technologies: Kiviter process facilities have been operated continuously in Estonia since the 1920s, and a number of Chinese companies operate Fushun process facilities.
Common drawbacks of internal combustion technologies are that the combustible oil shale gas is diluted by combustion gases and that particles smaller than about 10 millimeters cannot be processed. Uneven distribution of gas across the retort can result in blockages when hot spots cause particles to fuse or disintegrate.
Hot recycled solids
Hot recycled solids technologies deliver heat to the oil shale by recycling hot solid particles—typically oil shale ash. These technologies usually employ rotating kiln or fluidized bed retorts, fed by fine oil shale particles generally having a diameter of less than 10 millimeters; some technologies use particles even smaller than 2.5 millimeters. The recycled particles are heated in a separate chamber or vessel to about 800 °C and then mixed with the raw oil shale to cause the shale to decompose at about 500 °C. Oil vapour and shale oil gas are separated from the solids and cooled to condense and collect the oil. Heat recovered from the combustion gases and shale ash may be used to dry and preheat the raw oil shale before it is mixed with the hot recycled solids.
In the Galoter and Enefit processes, the spent oil shale is burnt in a separate furnace and the resulting hot ash is separated from the combustion gas and mixed with oil shale particles in a rotating kiln. Combustion gases from the furnace are used to dry the oil shale in a dryer before mixing with hot ash. The TOSCO II process uses ceramic balls instead of shale ash as the hot recycled solids. The distinguishing feature of the Alberta Taciuk Process (ATP) is that the entire process occurs in a single rotating multi–chamber horizontal vessel.
Because the hot recycle solids are heated in a separate furnace, the oil shale gas from these technologies is not diluted with combustion exhaust gas. Another advantage is that there is no limit on the smallest particles that the retort can process, thus allowing all the crushed feed to be used. One disadvantage is that more water is used to handle the resulting finer shale ash.
Conduction through a wall
These technologies transfer heat to the oil shale by conducting it through the retort wall. The shale feed usually consists of fine particles. Their advantage lies in the fact that retort vapors are not combined with combustion exhaust. The Combustion Resources process uses a hydrogen–fired rotating kiln, where hot gas is circulated through an outer annulus. The Oil-Tech staged electrically heated retort consists of individual inter-connected heating chambers, stacked atop each other. Its principal advantage lies in its modular design, which enhances its portability and adaptability. The Red Leaf Resources EcoShale In-Capsule Process combines surface mining with a lower-temperature heating method similar to in situ processes by operating within the confines of an earthen structure. A hot gas circulated through parallel pipes heats the oil shale rubble. An installation within the empty space created by mining would permit rapid reclamation of the topography.
A general drawback of conduction-through-a-wall technologies is that the retorts are more costly when scaled up, due to the resulting large amount of heat-conducting wall made of high-temperature alloys.
Externally generated hot gas
In general, externally generated hot gas technologies are similar to internal combustion technologies in that they also process oil shale lumps in vertical shaft kilns. Significantly, though, the heat in these technologies is delivered by gases heated outside the retort vessel, and therefore the retort vapors are not diluted with combustion exhaust. The Petrosix and Paraho Indirect processes employ this technology. In addition to not accepting fine particles as feed, these technologies do not utilize the potential heat of combusting the char on the spent shale and thus must burn more valuable fuels. However, due to the lack of combustion of the spent shale, the oil shale does not exceed about 500 °C, and significant carbonate mineral decomposition and subsequent CO2 generation can be avoided for some oil shales. Also, these technologies tend to be more stable and easier to control than internal combustion or hot recycled solids technologies.
Reactive fluids
Kerogen is tightly bound to the shale and resists dissolution by most solvents. Despite this constraint, extraction using especially reactive fluids has been tested, including those in a supercritical state. Reactive fluid technologies are suitable for processing oil shales with a low hydrogen content. In these technologies, hydrogen gas (H2) or hydrogen donors (chemicals that donate hydrogen during chemical reactions) react with coke precursors (chemical structures in the oil shale that are prone to form char during retorting but have not yet done so). Reactive fluid technologies include the IGT Hytort (high-pressure H2) process, donor solvent processes, and the Chattanooga fluidized bed reactor. In the IGT Hytort process, oil shale is processed in a high-pressure hydrogen environment. The Chattanooga process uses a fluidized bed reactor and an associated hydrogen-fired heater for oil shale thermal cracking and hydrogenation. Laboratory results indicate that these technologies can often obtain significantly higher oil yields than pyrolysis processes. Drawbacks are the additional cost and complexity of hydrogen production and high-pressure retort vessels.
Plasma gasification
Several experimental tests have been conducted for oil-shale gasification using plasma technologies. In these technologies, oil shale is bombarded by radicals (ions). The radicals crack kerogen molecules, forming synthetic gas and oil. Air, hydrogen or nitrogen is used as the plasma gas, and processes may operate in an arc, plasma arc, or plasma electrolysis mode. The main benefit of these technologies is processing without using water.
In situ technologies
In situ technologies heat oil shale underground by injecting hot fluids into the rock formation, or by using linear or planar heating sources followed by thermal conduction and convection to distribute heat through the target area. Shale oil is then recovered through vertical wells drilled into the formation. These technologies are potentially able to extract more shale oil from a given area of land than conventional ex situ processing technologies, as the wells can reach greater depths than surface mines. They present an opportunity to recover shale oil from low-grade deposits that traditional mining techniques could not extract.
John Fell experimented with in situ extraction at Newnes, in Australia, during 1921, with some success, but his ambitions were well ahead of the technologies available at the time.
During World War II a modified in situ extraction process was implemented without significant success in Germany. One of the earliest successful in situ processes was underground gasification by electrical energy (Ljungström method)—a process exploited between 1940 and 1966 for shale oil extraction at Kvarntorp in Sweden. Prior to the 1980s, many variations of the in situ process were explored in the United States. The first modified in situ oil shale experiment in the United States was conducted by Occidental Petroleum in 1972 at Logan Wash, Colorado. Newer technologies are being explored that use a variety of heat sources and heat delivery systems.
Wall conduction
Wall conduction in situ technologies use heating elements or heating pipes placed within the oil shale formation. The Shell in situ conversion process (Shell ICP) uses electrical heating elements to heat the oil shale layer to between 340 and 370 °C over a period of approximately four years. The processing area is isolated from surrounding groundwater by a freeze wall consisting of wells filled with a circulating super-chilled fluid. Disadvantages of this process are large electrical power consumption, extensive water use, and the risk of groundwater pollution. The process has been tested since the early 1980s at the Mahogany test site in the Piceance Basin, where about 270 cubic metres (1,700 barrels) of oil were extracted in 2004 at a testing area.
In the CCR Process proposed by American Shale Oil, superheated steam or another heat transfer medium is circulated through a series of pipes placed below the oil shale layer to be extracted. The system combines horizontal wells, through which steam is passed, and vertical wells, which provide both vertical heat transfer through refluxing of converted shale oil and a means to collect the produced hydrocarbons. Heat is supplied by combustion of natural gas or propane in the initial phase and by oil shale gas at a later stage.
The Geothermic Fuels Cells Process (IEP GFC) proposed by Independent Energy Partners extracts shale oil by exploiting a high-temperature stack of fuel cells. The cells, placed in the oil shale formation, are fueled by natural gas during a warm-up period and afterward by oil shale gas generated by its own waste heat.
Externally generated hot gas
Externally generated hot gas in situ technologies use hot gases heated above ground and then injected into the oil shale formation. The Chevron CRUSH process, which was researched by Chevron Corporation in partnership with Los Alamos National Laboratory, injects heated carbon dioxide into the formation via drilled wells to heat the formation through a series of horizontal fractures through which the gas is circulated. General Synfuels International has proposed the Omnishale process, involving injection of super-heated air into the oil shale formation. Mountain West Energy's In Situ Vapor Extraction process uses similar principles of injection of high-temperature gas.
ExxonMobil Electrofrac
ExxonMobil's in situ technology (ExxonMobil Electrofrac) uses electrical heating with elements of both wall conduction and volumetric heating methods. It injects an electrically conductive material such as calcined petroleum coke into the hydraulic fractures created in the oil shale formation which then forms a heating element. Heating wells are placed in a parallel row with a second horizontal well intersecting them at their toe. This allows opposing electrical charges to be applied at either end.
Volumetric heating
The Illinois Institute of Technology developed the concept of oil shale volumetric heating using radio waves (radio frequency processing) during the late 1970s. This technology was further developed by Lawrence Livermore National Laboratory. Oil shale is heated by vertical electrode arrays. Deeper volumes could be processed at slower heating rates by installations spaced at tens of meters. The concept presumes a radio frequency at which the skin depth is many tens of meters, thereby overcoming the thermal diffusion times needed for conductive heating. Its drawbacks include intensive electrical demand and the possibility that groundwater or char would absorb undue amounts of the energy. Radio frequency processing in conjunction with critical fluids is being developed by Raytheon together with CF Technologies and tested by Schlumberger.
Microwave heating technologies are based on the same principles as radio wave heating, although it is believed that radio wave heating is an improvement over microwave heating because its energy can penetrate farther into the oil shale formation. The microwave heating process was tested by Global Resource Corporation. Electro-Petroleum proposes electrically enhanced oil recovery by the passage of direct current between cathodes in producing wells and anodes located either at the surface or at depth in other wells. The passage of the current through the oil shale formation results in resistive Joule heating.
Shale oil
The properties of raw shale oil vary depending on the composition of the parent oil shale and the extraction technology used. Like conventional oil, shale oil is a complex mixture of hydrocarbons, and it is characterized using bulk properties of the oil. Shale oil usually contains large quantities of olefinic and aromatic hydrocarbons. Shale oil can also contain significant quantities of heteroatoms. A typical shale oil composition includes 0.5–1% oxygen, 1.5–2% nitrogen and 0.15–1% sulfur, and some deposits contain more heteroatoms. Mineral particles and metals are often present as well. Generally, the oil is less fluid than crude oil, becoming pourable only at temperatures between 24 and 27 °C, while conventional crude oil is pourable at temperatures between −60 and 30 °C; this property affects shale oil's ability to be transported in existing pipelines.
Shale oil contains polycyclic aromatic hydrocarbons, which are carcinogenic. Raw shale oil has been described as having a mild carcinogenic potential comparable to that of some intermediate refinery products, while upgraded shale oil has a lower carcinogenic potential, as most of the polycyclic aromatics are believed to be broken down by hydrogenation.
Although raw shale oil can be immediately burnt as a fuel oil, many of its applications require that it be upgraded. The differing properties of the raw oils call for correspondingly varied pre-treatments before the oil can be sent to a conventional oil refinery.
Particulates in the raw oil clog downstream processes; sulfur and nitrogen create air pollution. Sulfur and nitrogen, along with the arsenic and iron that may be present, also destroy the catalysts used in refining. Olefins form insoluble sediments and cause instability. The oxygen within the oil, present at higher levels than in crude oil, lends itself to the formation of destructive free radicals. Hydrodesulfurization and hydrodenitrogenation can address these problems and result in a product comparable to benchmark crude oil. Phenols can first be removed by water extraction. Upgrading shale oil into transport fuels requires adjusting hydrogen–carbon ratios by adding hydrogen (hydrocracking) or removing carbon (coking).
Before World War II, most shale oil was upgraded for use as transport fuels. Afterwards, it was used as a raw material for chemical intermediates, pure chemicals and industrial resins, and as a railroad wood preservative. As of 2008, it is primarily used as a heating oil and marine fuel, and to a lesser extent in the production of various chemicals.
Shale oil's concentration of high-boiling point compounds is suited for the production of middle distillates such as kerosene, jet fuel and diesel fuel. Additional cracking can create the lighter hydrocarbons used in gasoline.
Economics
The dominant question for shale oil production is under what conditions shale oil is economically viable. According to the United States Department of Energy, the capital costs of a 100,000-barrels-per-day ex situ processing complex are $3–10 billion. The various attempts to develop oil shale deposits have succeeded only when the shale-oil production cost in a given region is lower than the price of petroleum or its other substitutes. According to a survey conducted by the RAND Corporation, the cost of producing shale oil at a hypothetical surface retorting complex in the United States (comprising a mine, retorting plant, upgrading plant, supporting utilities, and spent oil shale reclamation) would be in a range of $70–95 per barrel ($440–600/m3), adjusted to 2005 values. Assuming a gradual increase in output after the start of commercial production, the analysis projects a gradual reduction in processing costs to $30–40 per barrel ($190–250/m3) after the milestone of 1 billion barrels is achieved. The United States Department of Energy estimates that ex-situ processing would be economic at sustained average world oil prices above $54 per barrel and in-situ processing would be economic at prices above $35 per barrel. These estimates assume a return rate of 15%. Royal Dutch Shell announced in 2006 that its Shell ICP technology would realize a profit when crude oil prices are higher than $30 per barrel ($190/m3), while some technologies at full-scale production assert profitability at oil prices even lower than $20 per barrel ($130/m3).
To increase the efficiency of oil shale retorting, and thereby the viability of shale oil production, researchers have proposed and tested several co-pyrolysis processes, in which other materials such as biomass, peat, waste bitumen, or rubber and plastic wastes are retorted along with the oil shale. Some modified technologies propose combining a fluidized bed retort with a circulated fluidized bed furnace for burning the by-products of pyrolysis (char and oil shale gas), thereby improving oil yield, increasing throughput, and decreasing retorting time.
Other ways of improving the economics of shale oil extraction could be to increase the size of the operation to achieve economies of scale, use oil shale that is a by-product of coal mining (as at Fushun, China), produce specialty chemicals (as Viru Keemia Grupp does in Estonia), co-generate electricity from the waste heat, and process high-grade oil shale that yields more oil per tonne of shale processed.
A possible measure of the viability of oil shale as an energy source lies in the ratio of the energy in the extracted oil to the energy used in its mining and processing (Energy Returned on Energy Invested, or EROEI). A 1984 study estimated the EROEI of the various known oil shale deposits as varying between 0.7 and 13.3; some companies and newer technologies assert an EROEI between 3 and 10. According to the World Energy Outlook 2010, the EROEI of ex-situ processing is typically 4 to 5, while that of in-situ processing may be as low as 2.
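As a minimal sketch of the EROEI arithmetic described above (the per-barrel energy figures here are hypothetical illustrations, not measurements from any particular deposit):

```python
def eroei(energy_out_mj: float, energy_in_mj: float) -> float:
    """Energy Returned on Energy Invested: energy delivered in the
    extracted oil divided by the energy spent mining and processing it."""
    if energy_in_mj <= 0:
        raise ValueError("energy input must be positive")
    return energy_out_mj / energy_in_mj

# Hypothetical: a barrel of oil holds roughly 6,100 MJ; suppose extraction
# consumed 1,500 MJ of fuel, heat, and electricity per barrel.
print(eroei(6100, 1500))  # ~4.1, within the 4-5 range cited for ex-situ processing
```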
To increase the EROEI, several combined technologies have been proposed. These include the use of process waste heat, e.g. for gasification or combustion of the residual carbon (char), and the use of waste heat from other industrial processes, such as coal gasification and nuclear power generation.
The water requirements of extraction processes are an additional economic consideration in regions where water is a scarce resource.
Environmental considerations
Mining oil shale involves a number of environmental impacts, more pronounced in surface mining than in underground mining. These include acid drainage induced by the sudden rapid exposure and subsequent oxidation of formerly buried materials, the introduction of metals including mercury into surface-water and groundwater, increased erosion, sulfur-gas emissions, and air pollution caused by the production of particulates during processing, transport, and support activities. In 2002, about 97% of air pollution, 86% of total waste and 23% of water pollution in Estonia came from the power industry, which uses oil shale as the main resource for its power production.
Oil-shale extraction can damage the biological and recreational value of land and the ecosystem in the mining area. Combustion and thermal processing generate waste material. In addition, the atmospheric emissions from oil shale processing and combustion include carbon dioxide, a greenhouse gas. Environmentalists oppose production and usage of oil shale, as it creates even more greenhouse gases than conventional fossil fuels. Experimental in situ conversion processes and carbon capture and storage technologies may reduce some of these concerns in the future, but at the same time they may cause other problems, including groundwater pollution. Among the water contaminants commonly associated with oil shale processing are oxygen and nitrogen heterocyclic hydrocarbons. Commonly detected examples include quinoline derivatives, pyridine, and various alkyl homologues of pyridine (picoline, lutidine).
Water concerns are sensitive issues in arid regions, such as the western US and Israel's Negev Desert, where plans exist to expand oil-shale extraction despite a water shortage. Depending on technology, above-ground retorting uses between one and five barrels of water per barrel of produced shale oil. A 2008 programmatic environmental impact statement issued by the US Bureau of Land Management stated that surface mining and retort operations produce 2 to 10 US gallons of waste water per short ton of processed oil shale. In situ processing, according to one estimate, uses about one-tenth as much water.
Environmental activists, including members of Greenpeace, have organized strong protests against the oil shale industry. In one result, Queensland Energy Resources put the proposed Stuart Oil Shale Project in Australia on hold in 2004.
See also
Oil shale in China
Oil shale in Estonia
Oil shale in Jordan
Oil shale geology
Oil shale reserves
References
External links
Oil Shale. A Scientific-Technical Journal
Oil Shale and Tar Sands Programmatic Environmental Impact Statement (EIS) Information Center. Concerning potential leases of Federal oil sands lands in Utah and oil shale lands in Utah, Wyoming, and Colorado.
The United States National Oil Shale Association (NOSA)
Petroleum production
Arab inventions | Shale oil extraction | [
"Chemistry"
] | 6,229 | [
"Petroleum technology",
"Oil shale technology",
"Synthetic fuel technologies"
] |
12,324,611 | https://en.wikipedia.org/wiki/Small%20Cajal%20body-specific%20RNA | Small Cajal body-specific RNAs (scaRNAs) are a class of small nucleolar RNAs (snoRNAs) that specifically localise to the Cajal body, a nuclear organelle (cellular sub-organelle) involved in the biogenesis of small nuclear ribonucleoproteins (snRNPs or snurps). ScaRNAs guide the modification (methylation and pseudouridylation) of RNA polymerase II transcribed spliceosomal RNAs U1, U2, U4, U5 and U12.
The first scaRNA identified was U85. It is unlike typical snoRNAs in that it is a composite C/D box and H/ACA box snoRNA and can guide both pseudouridylation and 2′-O-methylation. Not all scaRNAs are composite C/D and H/ACA box snoRNAs, and most scaRNAs are structurally and functionally indistinguishable from the snoRNAs that direct ribosomal RNA (rRNA) modification in the nucleolus.
Drosophila scaRNAs
Several studies have identified scaRNAs in Drosophila. One of these studies showed that several Drosophila scaRNAs could function equally well in the nucleoplasm of mutant flies that lack Cajal bodies. Further investigation showed that the scaRNA pugU6-40 targets U6 snRNA, whose modification occurs in the nucleolus rather than the Cajal body, and that pugU1-6 and pugU2-55 each guide the modification of two RNAs: an snRNA and 28S rRNA.
Small Cajal body-specific RNA 1
In molecular biology, Small Cajal body-specific RNA 1 (also known as SCARNA1 or ACA35) is a small nucleolar RNA found in Cajal bodies and believed to be involved in the pseudouridylation of U2 spliceosomal RNA at residue U89.
scaRNA1 is a non-coding RNA; non-coding RNAs are functional gene products that are not translated into proteins. Such RNA molecules usually contain important secondary structures or ligand-binding motifs and are involved in many important biological processes in the cell.
scaRNA1 belongs to the H/ACA box class of snoRNAs, as it has the predicted hairpin-hinge-hairpin-tail structure, conserved H/ACA-box motifs, and is found associated with GAR1 protein.
References
External links
snoRNAbase: human H/ACA and C/D box snoRNA database
Non-coding RNA
Small nuclear RNA
Spliceosome
RNA splicing | Small Cajal body-specific RNA | [
"Chemistry",
"Biology"
] | 529 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
12,324,910 | https://en.wikipedia.org/wiki/Boolean%20conjunctive%20query | In the theory of relational databases, a Boolean conjunctive query is a conjunctive query without distinguished predicates, i.e., a query in the form , where each is a relation symbol and each is a tuple of variables and constants; the number of elements in is equal to the arity of . Such a query evaluates to either true or false depending on whether the relations in the database contain the appropriate tuples of values, i.e. the conjunction is valid according to the facts in the database.
As an example, if a database schema contains the relation symbols $\mathit{Father}$ (binary, who is the father of whom) and $\mathit{Employed}$ (unary, who is employed), a conjunctive query could be $\mathit{Father}(\text{Mark}, x) \wedge \mathit{Employed}(x)$. This query evaluates to true if there exists an individual $x$ who is a child of Mark and employed. In other words, this query expresses the question: "does Mark have an employed child?"
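A Boolean conjunctive query can be evaluated naively by searching the active domain for a variable assignment that satisfies every atom. The following Python sketch evaluates the Father/Employed example above; the relation contents are invented for illustration:

```python
from itertools import product

def eval_bcq(db, atoms):
    """Evaluate a Boolean conjunctive query.

    db:    maps a relation name to a set of fact tuples.
    atoms: list of (relation, args) pairs; args mix constants and
           variables, where a variable is any string starting with '?'.
    Returns True iff some assignment of the variables to active-domain
    values satisfies every atom.
    """
    variables = sorted({a for _, args in atoms for a in args if a.startswith("?")})
    domain = {v for facts in db.values() for fact in facts for v in fact}
    for values in product(domain, repeat=len(variables)):
        binding = dict(zip(variables, values))
        if all(tuple(binding.get(a, a) for a in args) in db[rel]
               for rel, args in atoms):
            return True
    return False

db = {"Father": {("Mark", "Ann"), ("Ann", "Tom")},
      "Employed": {("Ann",)}}
# "Does Mark have an employed child?"
print(eval_bcq(db, [("Father", ("Mark", "?x")), ("Employed", ("?x",))]))  # True
```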
Complexity
See also
Logical conjunction
Conjunctive query
References
Boolean algebra | Boolean conjunctive query | [
"Mathematics"
] | 206 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic"
] |
12,325,841 | https://en.wikipedia.org/wiki/Hazardous%20drugs | In pharmacology, hazardous drugs are drugs that are known to cause harm, which may or may not include genotoxicity (the ability to cause a change or mutation in genetic material). Genotoxicity might involve carcinogenicity, the ability to cause cancer in animal models, humans or both; teratogenicity, which is the ability to cause defects on fetal development or fetal malformation; and lastly hazardous drugs are known to have the potential to cause fertility impairment, which is a major concern for most clinicians. These drugs can be classified as antineoplastics, cytotoxic agents, biologic agents, antiviral agents and immunosuppressive agents. This is why safe handling of hazardous drugs is crucial.
Safe handling
Safe handling refers to the process in which health care workers adhere to practices set forth by national health and safety organizations that have been designed to eliminate or significantly reduce occupational exposure. Some of these practices include, but are not limited to, donning personal protective equipment such as a disposable gown, gloves and masks, and the utilization of a closed-system drug transfer device. The key to safe handling is to protect the health care worker throughout the three phases of contact with hazardous drugs: drug preparation, administration and disposal. Some studies have shown that compounding hazardous drugs in a Class II biological safety cabinet (BSC) in conjunction with a closed-system drug transfer device significantly decreases drug contamination inside the cabinet. This led the Oncology Nursing Society (ONS) to state in 2003 that a closed-system drug transfer device is viewed as one of the safest measures to prevent hazardous drug exposure in a clinician's working environment. However, a Cochrane review published in 2018 that synthesized all available controlled studies found no evidence of a closed-system drug transfer device offering an additional decrease in contamination or exposure beyond safe handling practices alone.
It has been determined that current personal protective equipment (PPE) does not provide adequate protection for workers handling hazardous drugs; NIOSH states that “... measurable concentrations of some hazardous drugs have been documented in the urine of health care workers who prepared or administered them − even after safety precautions had been employed.” Further, NIOSH recommends that institutions should "consider using devices such as closed-system transfer devices. Closed systems limit the potential for generating aerosols and exposing workers". Other guidelines outline that "As other products become available, they should meet the definition of a closed system drug transfer device established by NIOSH and should be required to demonstrate their effectiveness in independent studies".
See also
USP 800
References
External links
American Society of Health-System Pharmacists (ASHP)
National Institute for Occupational Safety and Health (NIOSH)
Oncology Nursing Society (ONS)
The International Agency for Research on Cancer (IARC) Classification of Hazardous Drugs
International Society of Oncology Pharmacy Practitioners (ISOPP)
Clinical pharmacology
Pharmacy | Hazardous drugs | [
"Chemistry"
] | 610 | [
"Pharmacology",
"Drug safety",
"Clinical pharmacology",
"Pharmacy"
] |
12,327,015 | https://en.wikipedia.org/wiki/Persoz%20pendulum | A Persoz pendulum is a device used for measuring hardness of materials. The instrument consists of a pendulum which is free to swing on two balls resting on a coated test panel. The pendulum hardness test is based on the principle that the amplitude of the pendulum's oscillation will decrease more quickly when supported on a softer surface. The hardness of any given coating is given by the number of oscillations made by the pendulum within the specified limits of amplitude determined by accurately positioned photo sensors. An electronic counter records the number of swings made by the pendulum
Construction
The pendulum consists of balls which rest on the coating under test and form the fulcrum. The Persoz pendulum is very similar to the Konig pendulum. Both employ the same principle: the softer the coating, the more the pendulum oscillations are damped and the shorter the time needed for the amplitude of oscillation to be reduced by a specified amount. The two pendulums differ in shape, mass and oscillation time, and there is no general relationship between the results obtained using the two pieces of equipment. In either case, the test simply involves noting the time in seconds for the amplitude of swing to decrease from either 6 to 3 degrees (Konig pendulum) or 12 to 4 degrees (Persoz pendulum).
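For a lightly damped pendulum the swing amplitude decays roughly exponentially, so the time between two amplitude limits follows from the decay constant. A minimal Python sketch under that exponential-decay assumption (the decay constant below is illustrative, not a calibrated coating value):

```python
import math

def damping_time(a_start_deg, a_end_deg, decay_rate):
    """Time (s) for the swing amplitude to fall from a_start to a_end,
    assuming exponential decay A(t) = A0 * exp(-decay_rate * t)."""
    return math.log(a_start_deg / a_end_deg) / decay_rate

# Hypothetical coating with decay constant 0.005 per second:
print(damping_time(12, 4, 0.005))  # Persoz limits: 12 deg -> 4 deg, ~220 s
print(damping_time(6, 3, 0.005))   # Konig limits:   6 deg -> 3 deg, ~139 s
```

A softer coating damps the swing faster (larger decay constant), giving a shorter time, consistent with the principle stated above.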
References
Hardness instruments
Materials science
Pendulums
Coatings | Persoz pendulum | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 276 | [
"Applied and interdisciplinary physics",
"Coatings",
"Hardness instruments",
"Materials science",
"Measuring instruments",
"nan"
] |
12,328,822 | https://en.wikipedia.org/wiki/Attenuation%20coefficient | The linear attenuation coefficient, attenuation coefficient, or narrow-beam attenuation coefficient characterizes how easily a volume of material can be penetrated by a beam of light, sound, particles, or other energy or matter. A coefficient value that is large represents a beam becoming 'attenuated' as it passes through a given medium, while a small value represents that the medium had little effect on loss. The (derived) SI unit of attenuation coefficient is the reciprocal metre (m−1). Extinction coefficient is another term for this quantity, often used in meteorology and climatology. Most commonly, the quantity measures the exponential decay of intensity, that is, the value of downward e-folding distance of the original intensity as the energy of the intensity passes through a unit (e.g. one meter) thickness of material, so that an attenuation coefficient of 1 m−1 means that after passing through 1 metre, the radiation will be reduced by a factor of e, and for material with a coefficient of 2 m−1, it will be reduced twice by e, or e2. Other measures may use a different factor than e, such as the decadic attenuation coefficient below. The broad-beam attenuation coefficient counts forward-scattered radiation as transmitted rather than attenuated, and is more applicable to radiation shielding.
The mass attenuation coefficient is the attenuation coefficient normalized by the density of the material.
Overview
The attenuation coefficient describes the extent to which the radiant flux of a beam is reduced as it passes through a specific material. It is used in the context of:
X-rays or gamma rays, where it is denoted μ and measured in cm−1;
neutrons and nuclear reactors, where it is called macroscopic cross section (although actually it is not a section dimensionally speaking), denoted Σ and measured in m−1;
ultrasound attenuation, where it is denoted α and measured in dB⋅cm−1⋅MHz−1;
acoustics for characterizing particle size distribution, where it is denoted α and measured in m−1.
The attenuation coefficient is called the "extinction coefficient" in the context of
solar and infrared radiative transfer in the atmosphere, albeit usually denoted with another symbol (given the standard use of $\mu = \cos\theta$ for slant paths).
A small attenuation coefficient indicates that the material in question is relatively transparent, while a larger value indicates greater degrees of opacity. The attenuation coefficient is dependent upon the type of material and the energy of the radiation. Generally, for electromagnetic radiation, the higher the energy of the incident photons and the less dense the material in question, the lower the corresponding attenuation coefficient will be.
Mathematical definitions
Attenuation coefficient
The attenuation coefficient of a volume, denoted μ, is defined as
$\mu = -\frac{1}{\Phi_\mathrm{e}} \frac{\mathrm{d}\Phi_\mathrm{e}}{\mathrm{d}z},$
where
Φe is the radiant flux;
z is the path length of the beam.
Note that for an attenuation coefficient which does not vary with z, this equation is solved along a line from $z = 0$ to $z$ as
$\Phi_\mathrm{e}(z) = \Phi_\mathrm{e}(0)\, e^{-\mu z},$
where $\Phi_\mathrm{e}(0)$ is the incoming radiation flux at $z = 0$ and $\Phi_\mathrm{e}(z)$ is the radiation flux at $z$.
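A minimal numerical check of this exponential solution, using the e-folding examples from the introduction (the flux values are arbitrary):

```python
import math

def transmitted_flux(flux_in, mu_per_m, path_m):
    """Radiant flux after a path of length path_m through a uniform
    medium with (Napierian) attenuation coefficient mu_per_m."""
    return flux_in * math.exp(-mu_per_m * path_m)

# mu = 1 m^-1: one metre attenuates the beam by a factor of e
print(transmitted_flux(100.0, 1.0, 1.0))  # ~36.8
# mu = 2 m^-1: reduced by a factor of e twice over the same metre
print(transmitted_flux(100.0, 2.0, 1.0))  # ~13.5
```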
Spectral hemispherical attenuation coefficient
The spectral hemispherical attenuation coefficient in frequency and spectral hemispherical attenuation coefficient in wavelength of a volume, denoted μν and μλ respectively, are defined as:
$\mu_\nu = -\frac{1}{\Phi_{\mathrm{e},\nu}} \frac{\mathrm{d}\Phi_{\mathrm{e},\nu}}{\mathrm{d}z}, \qquad \mu_\lambda = -\frac{1}{\Phi_{\mathrm{e},\lambda}} \frac{\mathrm{d}\Phi_{\mathrm{e},\lambda}}{\mathrm{d}z},$
where
Φe,ν is the spectral radiant flux in frequency;
Φe,λ is the spectral radiant flux in wavelength.
Directional attenuation coefficient
The directional attenuation coefficient of a volume, denoted μΩ, is defined as
$\mu_\Omega = -\frac{1}{L_{\mathrm{e},\Omega}} \frac{\mathrm{d}L_{\mathrm{e},\Omega}}{\mathrm{d}z},$
where Le,Ω is the radiance.
Spectral directional attenuation coefficient
The spectral directional attenuation coefficient in frequency and spectral directional attenuation coefficient in wavelength of a volume, denoted μΩ,ν and μΩ,λ respectively, are defined as
$\mu_{\Omega,\nu} = -\frac{1}{L_{\mathrm{e},\Omega,\nu}} \frac{\mathrm{d}L_{\mathrm{e},\Omega,\nu}}{\mathrm{d}z}, \qquad \mu_{\Omega,\lambda} = -\frac{1}{L_{\mathrm{e},\Omega,\lambda}} \frac{\mathrm{d}L_{\mathrm{e},\Omega,\lambda}}{\mathrm{d}z},$
where
Le,Ω,ν is the spectral radiance in frequency;
Le,Ω,λ is the spectral radiance in wavelength.
Absorption and scattering coefficients
When a narrow (collimated) beam passes through a volume, the beam will lose intensity due to two processes: absorption and scattering. Absorption indicates energy that is lost from the beam, while scattering indicates light that is redirected in a (random) direction, and hence is no longer in the beam, but still present, resulting in diffuse light.
The absorption coefficient of a volume, denoted μa, and the scattering coefficient of a volume, denoted μs, are defined the same way as the attenuation coefficient.
The attenuation coefficient of a volume is the sum of the absorption coefficient and the scattering coefficient:
$\mu = \mu_\mathrm{a} + \mu_\mathrm{s}.$
Just looking at the narrow beam itself, the two processes cannot be distinguished. However, if a detector is set up to measure the beam leaving in different directions, or conversely using a non-narrow beam, one can measure how much of the lost radiant flux was scattered and how much was absorbed.
In this context, the "absorption coefficient" measures how quickly the beam would lose radiant flux due to the absorption alone, while "attenuation coefficient" measures the total loss of narrow-beam intensity, including scattering as well. "Narrow-beam attenuation coefficient" always unambiguously refers to the latter. The attenuation coefficient is at least as large as the absorption coefficient; they are equal in the idealized case of no scattering.
Expression in terms of density and cross section
The absorption coefficient may be expressed in terms of a number density of absorbing centers n and an absorbing cross section area σ. For a slab of area A and thickness dz, the total number of absorbing centers contained is n A dz. Assuming that dz is so small that there will be no overlap of the cross section areas, the total area available for absorption will be n A σ dz and the fraction of radiation absorbed is then n σ dz. The absorption coefficient is thus $\mu = n\sigma$.
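A minimal sketch of the relation $\mu = n\sigma$, with hypothetical numbers for the number density and cross section:

```python
def attenuation_from_cross_section(number_density_per_m3, cross_section_m2):
    """Attenuation coefficient mu = n * sigma (units: m^-1)."""
    return number_density_per_m3 * cross_section_m2

# Hypothetical absorber: n = 1e26 centres per m^3, sigma = 1e-28 m^2
mu = attenuation_from_cross_section(1e26, 1e-28)
print(mu)  # 0.01 m^-1, i.e. an e-folding length of 100 m
```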
Mass attenuation, absorption, and scattering coefficients
The mass attenuation coefficient, mass absorption coefficient, and mass scattering coefficient are defined as
$\frac{\mu}{\rho_\mathrm{m}}, \qquad \frac{\mu_\mathrm{a}}{\rho_\mathrm{m}}, \qquad \frac{\mu_\mathrm{s}}{\rho_\mathrm{m}},$
where ρm is the mass density.
Napierian and decadic attenuation coefficients
Decibels
Engineering applications often express attenuation in the logarithmic units of decibels, or "dB", where 10 dB represents attenuation by a factor of 10. The units for attenuation coefficient are thus dB/m (or, in general, dB per unit distance). Note that in logarithmic units such as dB, the attenuation is a linear function of distance, rather than exponential. This has the advantage that the result of multiple attenuation layers can be found by simply adding up the dB loss for each individual passage. However, if intensity is desired, the logarithms must be converted back into linear units by using an exponential:
$I = I_0 \, 10^{-(\text{attenuation in dB})/10}.$
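A minimal sketch of this dB bookkeeping: per-layer losses add in dB, and the total is converted back to a linear intensity ratio (the layer values are hypothetical):

```python
def intensity_ratio_from_db(total_loss_db):
    """Convert an attenuation expressed in dB back to a linear ratio:
    I / I0 = 10 ** (-dB / 10)."""
    return 10 ** (-total_loss_db / 10)

layers_db = [3.0, 7.0, 10.0]   # per-layer losses simply add in dB
total_db = sum(layers_db)      # 20 dB
print(intensity_ratio_from_db(total_db))  # 0.01 -> 1% transmitted
```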
Napierian attenuation
The decadic attenuation coefficient or decadic narrow-beam attenuation coefficient, denoted μ10, is defined as
$\mu_{10} = \frac{\mu}{\ln 10}.$
Just as the usual attenuation coefficient measures the number of e-fold reductions that occur over a unit length of material, this coefficient measures how many 10-fold reductions occur: a decadic coefficient of 1 m−1 means 1 m of material reduces the radiation once by a factor of 10.
μ is sometimes called the Napierian attenuation coefficient or Napierian narrow-beam attenuation coefficient rather than just simply "attenuation coefficient". The terms "decadic" and "Napierian" come from the base used for the exponential in the Beer–Lambert law for a material sample, in which the two attenuation coefficients take part:
$T = e^{-\int_0^\ell \mu(z)\,\mathrm{d}z} = 10^{-\int_0^\ell \mu_{10}(z)\,\mathrm{d}z},$
where
T is the transmittance of the material sample;
ℓ is the path length of the beam of light through the material sample.
In case of uniform attenuation, these relations become
$T = e^{-\mu\ell} = 10^{-\mu_{10}\ell}.$
Cases of non-uniform attenuation occur in atmospheric science applications and radiation shielding theory for instance.
The (Napierian) attenuation coefficient and the decadic attenuation coefficient of a material sample are related to the number densities and the amount concentrations of its N attenuating species as
$\mu(z) = \sum_{i=1}^{N} \sigma_i\, n_i(z), \qquad \mu_{10}(z) = \sum_{i=1}^{N} \varepsilon_i\, c_i(z),$
where
σi is the attenuation cross section of the attenuating species i in the material sample;
ni is the number density of the attenuating species i in the material sample;
εi is the molar attenuation coefficient of the attenuating species i in the material sample;
ci is the amount concentration of the attenuating species i in the material sample,
by definition of attenuation cross section and molar attenuation coefficient.
Attenuation cross section and molar attenuation coefficient are related by
$\varepsilon_i = \frac{N_\mathrm{A}}{\ln 10}\, \sigma_i,$
and number density and amount concentration by
$c_i = \frac{n_i}{N_\mathrm{A}},$
where NA is the Avogadro constant.
The half-value layer (HVL) is the thickness of a layer of material required to reduce the radiant flux of the transmitted radiation to half its incident magnitude. The half-value layer is about 69% (ln 2) of the penetration depth. Engineers use these equations to predict how much shielding thickness is required to attenuate radiation to acceptable or regulatory limits.
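A minimal shielding sketch based on these relations; the attenuation coefficient below is a hypothetical stand-in for a real material value:

```python
import math

def shield_thickness(mu_per_m, target_fraction):
    """Thickness (m) of a uniform shield with attenuation coefficient
    mu_per_m needed to transmit at most target_fraction of the beam."""
    return -math.log(target_fraction) / mu_per_m

mu = 77.0                          # hypothetical coefficient, m^-1
hvl = math.log(2) / mu             # half-value layer: HVL = ln 2 / mu
print(hvl)                         # ~0.009 m per halving of the flux
print(shield_thickness(mu, 0.01))  # thickness for 99% attenuation, ~0.06 m
```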
Attenuation coefficient is also inversely related to mean free path. Moreover, it is very closely related to the attenuation cross section.
Other radiometric coefficients
See also
Absorption (electromagnetic radiation)
Absorption cross section
Absorption spectrum
Acoustic attenuation
Attenuation
Attenuation length
Beer–Lambert law
Cargo scanning
Compton edge
Compton scattering
Computation of radiowave attenuation in the atmosphere
Cross section (physics)
Grey atmosphere
High-energy X-rays
Mass attenuation coefficient
Mean free path
Propagation constant
Radiation length
Scattering theory
Transmittance
References
External links
Absorption Coefficients α of Building Materials and Finishes
Sound Absorption Coefficients for Some Common Materials
Tables of X-Ray Mass Attenuation Coefficients and Mass Energy-Absorption Coefficients from 1 keV to 20 MeV for Elements Z = 1 to 92 and 48 Additional Substances of Dosimetric Interest
Physical quantities
Radiometry
Acoustics | Attenuation coefficient | [
"Physics",
"Mathematics",
"Engineering"
] | 2,025 | [
"Physical phenomena",
"Telecommunications engineering",
"Physical quantities",
"Quantity",
"Classical mechanics",
"Acoustics",
"Physical properties",
"Radiometry"
] |
11,368,321 | https://en.wikipedia.org/wiki/CDK7%20pathway | CDK7 is a cyclin-dependent kinase shown to be not easily classified. CDK7 is both a CDK-activating kinase (CAK) and a component of the general transcription factor TFIIH.
Introduction
An intricate network of cyclin-dependent kinases (CDKs) is organized in a pathway to ensure that each cell accurately replicates its DNA and segregates it equally between the two daughter cells. One CDK–the CDK7 complex–cannot be so easily classified. CDK7 is both a CDK-activating kinase (CAK), which phosphorylates cell-cycle CDKs within the activation segment (T-loop), and a component of the general transcription factor TFIIH, which phosphorylates the C-terminal domain (CTD) of the largest subunit of Pol II. A proposed mode of CDK7 inhibition is the phosphorylation of cyclin H by CDK7 itself or by another kinase.
CDK7 has been observed as a prerequisite to S phase entry and mitosis. CDK7 is activated by the binding of cyclin H and its substrate specificity is altered by the binding of MAT1. The free form of the complex formed, CDK7-cycH-MAT1, operates as CDK-activating kinase (CAK). In vivo, CDK7 forms a stable complex with cyclin H and MAT1 only when its T-loop is phosphorylated on either Ser164 or Thr170 residues.
The T-loop
To be active, most CDKs require not only a cyclin partner but also phosphorylation at one particular site, which corresponds to Thr161 in human CDK1, and which is located within the so-called T-loop (or activation loop) of kinase subdomain VIII. CDK1, CDK2 and CDK4 all require T-loop phosphorylation for maximum activity.
The free form of CDK7-cycH-MAT1 phosphorylates the T-loops of CDK1, CDK2, CDK4 and CDK6. For all CDK substrates of CDK7, phosphorylation by CDK7 occurs following the binding of the substrate kinase to its associated cyclin. This two-step process has been observed in CDK2, where the association of CDK2 with cyclin A results in a conformational change that primes the catalytic site for binding of its ATP substrate, and phosphorylation by CDK7 of Thr160 in its activation segment improves the substrate protein's ability to bind. It has further been observed that CDK1 is not phosphorylated by CDK7 in its monomeric form and that monomeric CDK2 and CDK6 are poorly phosphorylated by CDK7, since the activation segment threonine is inaccessible to CDK7 in monomeric CDKs.
While CDK7 is indeed responsible for the phosphorylation of CDK1, CDK2, CDK4 and CDK6 in vivo, it has been observed that they have varying levels of dependence on CDK7. CDK1 and CDK2 require phosphorylation by CDK7 in order to reach their active states, while CDK4 and CDK6 have been found to require consistent CDK7 activity in order to maintain their phosphorylation states. This discrepancy is likely because the phosphorylated T-loops on CDK2 and CDK1 are protected when they are bound to cyclin while the phosphorylated T-loops on CDK4 and CDK6 remain exposed and therefore are vulnerable to phosphatases. It is therefore proposed that phosphatases work to counter the phosphorylation of CDK4 and CDK6 by CDK7, creating a competition between CDK7 and phosphatases.
Dual activity
An entirely new perspective on CDK7 function was opened when CDK7 was identified as a subunit of transcription factor IIH (TFIIH) and shown to phosphorylate the carboxy-terminal domain (CTD) of RNA polymerase II (RNAPII). TFIIH is a multiprotein complex required not only for class II transcription but also for nucleotide-excision repair. Its associated CTD-kinase activity is considered important for the promoter-clearance step of transcription, but the precise structural consequences of the phosphorylation of the CTD remain the subject of debate. Cyclin H and MAT1 are also present in TFIIH, and it is not known what, if anything, distinguishes the TFIIH-associated form of CDK7 from the quantitatively predominant free form. Whether CDK7 really displays dual-substrate specificity remains to be further explored, but there is no question that the CDK7-cyclin H-MAT1 complex is able to phosphorylate both the T-loop of CDKs and the YSPTSPS (single-letter code for amino acids) repeats of the RNAPII CTD in vitro.
CDK7-cycH-MAT1 binds to TFIIH, which alters the substrate preference of CDK7. CDK7-cycH-MAT1 then preferentially phosphorylates the large subunit C-terminal domain of polymerase II instead of CDK2 when compared to the free-form complex. In addition, phosphorylation of the Thr170 residue on the T-loop of CDK7 has been found to greatly increase activity of the CDK7– cyclin H–MAT1 complex in favor of CTD phosphorylation. Phosphorylation of Thr170, then, is a proposed mechanism for regulating CTD phosphorylation when CDK7 is associated with TFIIH.
The role of CDK7 in transcription was confirmed in vivo in Caenorhabditis elegans embryos. Mutants with cdk-7(ax224) were both unable to synthesize most mRNAs and had greatly reduced CTD phosphorylation as well, indicating that CDK7 is required for both transcription and CTD phosphorylation. In addition, similar results have been found in human cells. An “analog sensitive” CDK7 mutant (CDK7as) was devised which operates normally but is inhibited by an ATP analog competitive inhibitor. Inhibition of CDK7as was correlated with a reduction in CTD phosphorylation, where high inhibition led to very little instances of phosphorylated CTD-Ser5 (the phosphorylation target of CDK7 on CTD).
HIV latency
It has been demonstrated that TFIIH is a rate-limiting factor for HIV transcription in unactivated T-cells by using a combination of in vivo ChIP experiments and cell-free transcription studies. The ability of NF-κB to rapidly recruit TFIIH during HIV activation in T-cells is an unexpected discovery; however, there are several precedents in the literature of cellular genes that are activated through the recruitment of TFIIH. An early and influential paper demonstrated that type I activators such as Sp1 and CTF, which were able to support initiation but were unable to support efficient elongation, were also unable to bind TFIIH. By contrast, type II activators such as VP16, p53 and E2F1, which supported both initiation and elongation, were able to bind to TFIIH. In one of the most thoroughly characterized transcription systems, the temporal order of recruitment of transcription factors has been studied during the activation of the major histocompatibility class II (MHC II) DRA gene by IFN-gamma. Following induction of the CIITA transcription factor by IFN-gamma, there was recruitment of both CDK7 and CDK9, causing RNAP CTD phosphorylation and elongation. Finally, Nissen and Yamamoto (2000), in their studies of the activation of the IL-8 and ICAM-1 promoters, observed enhanced CDK7 recruitment and RNAP II CTD phosphorylation in response to NF-κB activation by TNF.
Stem Cells
The CDK7-cycH-MAT1 complex has been found to play a role in the differentiation of embryonic stem cells. It has been observed that the depletion of Cyclin H leads to differentiation of embryonic stem cells. In addition, Spt5, which leads to the differentiation of stem cells upon down-regulation, is a phosphorylation target of CDK7 in vitro, suggesting a possible mechanism by which Cyclin H depletion leads to differentiation.
Role in Cancer Therapy
Given that CDK7 is involved in two important regulatory roles, it is expected that CDK7 regulation may play a role in cancerous cells. Cells from breast cancer tumors were found to have elevated levels of CDK7 and Cyclin H when compared to normal breast cells. It was also found that the higher levels were generally found in ER-positive breast cancer. Together, these findings indicate that CDK7 therapy might make sense for some breast cancer patients. Further confirming these findings, recent research indicates that inhibition of CDK7 may be an effective therapy for HER2-positive breast cancers, even overcoming therapeutic resistance. THZ1 was used as a treatment for HER2-positive breast cancer cells and exhibited high potency against the cells regardless of their sensitivity to HER2 inhibitors. This finding was demonstrated in vivo, where inhibition of HER2 and CDK7 resulted in tumor regression in therapeutically resistant HER2+ xenograft models.
Inhibitors
The growth suppressor p53 has been shown to interact with cyclin H both in vitro and in vivo. Addition of wild type p53 was found to heavily downregulate CAK activity, resulting in decreased phosphorylation of both CDK2 and the CTD by CDK7. Mutant p53 was unable to downregulate CDK7 activity and mutant p21 had no effect on downregulation, indicating that p53 is responsible for negative regulation of CDK7.
THZ1 has recently been discovered to be an inhibitor for CDK7 that selectively forms a covalent bond with the CDK7-cycH-MAT1 complex. This selectivity stems from forming a bond at C312, which is unique to CDK7 within the CDK family. CDK12 and CDK13 could also be inhibited using THZ1 (but at higher concentrations) because they have similar structures in the region surrounding C312. It was found that treatment of 250 nM THZ1 was sufficient to inhibit global transcription and that cancer cell lines were sensitive to much lower concentrations, opening up further research into the efficacy of using THZ1 as a component of cancer therapy, as described above.
References
See also
Cyclin
Cell cycle
Wee (cell cycle)
Cell cycle checkpoint
Cellular processes | CDK7 pathway | [
"Biology"
] | 2,342 | [
"Cell cycle",
"Cellular processes"
] |
11,369,079 | https://en.wikipedia.org/wiki/Nappe%20%28water%29 | In hydraulic engineering, a nappe is a sheet or curtain of water that flows over a weir or dam. The upper and lower water surface have well-defined characteristics that are created by the crest of a dam or weir. Both structures have different features that characterize how a nappe might flow through or over impervious concrete structures. Hydraulic engineers distinguish these two water structures in characterizing and calculating the formation of a nappe. Engineers account for the bathymetry of standing bodies (like lakes) or moving bodies of water (like rivers or streams). An appropriate crest is built for the dam or weir so that dam failure is not caused by nappe vibration or air cavitation from free-overall structures.
Weirs
There are three types of nappe that form over the crest of a weir, depending on the weir's air ventilation: free nappes, depressed nappes, and clinging nappes. A free nappe, which is ventilated to maintain atmospheric pressure below, does not come into contact with the underside of the weir. A depressed nappe is partially ventilated, which creates negative pressure beneath the nappe; this negative pressure leads to a 6% to 7% increase in discharged water compared to a free nappe. Clinging nappes have no air beneath, and the stream flows along the face of the weir; the shape that fills in this area is called an ogee. Discharge for these weirs is approximately 25% to 30% more than for free nappes. The geometry of a weir dictates its coefficient of discharge, which in turn governs the nappe that forms. Engineers solve for the amount of discharge and the cross-sectional area of a river to calculate the adequate shape of the weir to be implemented.
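As a sketch of the discharge calculation just described, the snippet below applies the standard rectangular sharp-crested weir equation; the crest length, head, and discharge coefficient are illustrative assumptions rather than design values:

```python
import math

def weir_discharge(cd: float, length_m: float, head_m: float) -> float:
    """Discharge Q (m^3/s) over a rectangular sharp-crested weir.

    Uses the standard weir equation Q = (2/3) * Cd * sqrt(2g) * L * H^(3/2),
    where Cd is the dimensionless discharge coefficient, L the crest length,
    and H the head of water above the crest.
    """
    g = 9.81  # gravitational acceleration, m/s^2
    return (2.0 / 3.0) * cd * math.sqrt(2.0 * g) * length_m * head_m ** 1.5

# Illustrative comparison: a fully ventilated free nappe versus a clinging
# nappe, whose discharge is roughly 25% to 30% higher.
q_free = weir_discharge(cd=0.62, length_m=2.0, head_m=0.3)  # assumed Cd for a free nappe
print(f"free nappe:     Q = {q_free:.3f} m^3/s")
print(f"clinging nappe: Q = {q_free * 1.27:.3f} m^3/s (approx. +27%)")
```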
Dams
Many pathways of water can enter through a dam structure to produce a well-defined nappe. However, engineers classify dams as either overflow dams, where water consistently flows over the crest or is blocked by a gate on top of it, or non-overflow dams, which channel water through or around the dam with emergency floodgates. Both range in size. An overflow dam has a nappe typology similar to that of weirs (free, depressed, and clinging nappes). Engineers usually construct an ogee crest, which forms a clinging nappe. This increases discharge, reduces atmospheric pressure, and decreases the chance of air cavitation occurring.
Problems
Nappe vibration
Nappe vibration is classified in the hydraulic literature as fluid-dynamic excitation; the vibrations are generated by the fluid, and the flow characteristics at the points of detachment and impact are critical. This well-known phenomenon occurs on free-overfall structures (e.g. weirs, fountains, or dams) and produces excessive noise on concrete structures. The vibrations are undesirable and dangerous on gates, and are further characterized by oscillations in the thin-flow nappe cascading downstream of the crest. They send out a constant noise as water flows over the structure, and may lead to cracks or air cavitation, which can cause catastrophic failure. The phenomenon results from the Kelvin–Helmholtz instability, which arises from the shear forces between two fluids of different velocities.
Cavitation
Cavitation is defined as the explosive growth of vapor bubbles within a liquid. These bubbles form in regions of low local pressure and may be carried into areas of higher local pressure, where they disappear by collapse. Surface irregularities on hydraulic structures are prone to cavitation. Damage on such a surface will start at the downstream end of the cloud of collapsing cavitation bubbles. Damage from cavitation has been reported in several hydraulic structures, including open-channel spillways, bottom outlets in dams, high-head gates and gate slots, and energy dissipators with hydraulic-jump stilling basins. The velocity of the water that impinges on the surface is one of the causes of cavitation. Also, the increased height of spillways on high dams leads to an increase in cavitation caused by nappe flow.
References
Hydrology | Nappe (water) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 831 | [
"Hydrology",
"Environmental engineering"
] |
11,370,573 | https://en.wikipedia.org/wiki/Flow%20splitter | A flow splitter, in hydraulic engineering, is any device designed to break up the flow of water or nappe over a dam wall or weir. Flow splitters are used to reduce the likelihood of nappe vibration that might cause the failure of a dam wall by aerating the water flow. They are also used to restrict large flows of stormwater, in situations where a stormwater management device is designed only to treat small storms.
Another use for a flow splitter is again to break up the nappe so as to allow fish, such as salmon, to swim upstream and over small weirs.
Split flow weirs are also used in drinking water and wastewater treatment plants (sewage treatment or industrial wastewater treatment) to proportion flows to different outlets in a junction box.
See also
Flow control structure
References
Hydraulic engineering | Flow splitter | [
"Physics",
"Engineering",
"Environmental_science"
] | 163 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
11,371,560 | https://en.wikipedia.org/wiki/Expectation%20value%20%28quantum%20mechanics%29 | In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment. It can be thought of as an average of all the possible outcomes of a measurement as weighted by their likelihood, and as such it is not the most probable value of a measurement; indeed the expectation value may have zero probability of occurring (e.g. measurements which can only yield integer values may have a non-integer mean). It is a fundamental concept in all areas of quantum physics.
Operational definition
Consider an operator $A$. The expectation value is then $\langle A \rangle = \langle \psi | A | \psi \rangle$ in Dirac notation, with $|\psi\rangle$ a normalized state vector.
Formalism in quantum mechanics
In quantum theory, an experimental setup is described by the observable $A$ to be measured, and the state $\sigma$ of the system. The expectation value of $A$ in the state $\sigma$ is denoted as $\langle A \rangle_\sigma$.
Mathematically, $A$ is a self-adjoint operator on a separable complex Hilbert space. In the most commonly used case in quantum mechanics, $\sigma$ is a pure state, described by a normalized vector $\psi$ in the Hilbert space. The expectation value of $A$ in the state $\psi$ is defined as
$$\langle A \rangle_\psi = \langle \psi | A | \psi \rangle. \qquad (1)$$
If dynamics is considered, either the vector or the operator is taken to be time-dependent, depending on whether the Schrödinger picture or Heisenberg picture is used. The evolution of the expectation value does not depend on this choice, however.
If $A$ has a complete set of eigenvectors $\phi_j$, with eigenvalues $a_j$, so that $A \phi_j = a_j \phi_j$, then (1) can be expressed as
$$\langle A \rangle_\psi = \sum_j a_j \, |\langle \psi | \phi_j \rangle|^2. \qquad (2)$$
This expression is similar to the arithmetic mean, and illustrates the physical meaning of the mathematical formalism: The eigenvalues $a_j$ are the possible outcomes of the experiment, and their corresponding coefficient $|\langle \psi | \phi_j \rangle|^2$ is the probability that this outcome will occur; it is often called the transition probability.
A particularly simple case arises when $A$ is a projection, and thus has only the eigenvalues 0 and 1. This physically corresponds to a "yes-no" type of experiment. In this case, the expectation value is the probability that the experiment results in "1", and it can be computed as
$$\langle A \rangle_\psi = \|A \psi\|^2.$$
In quantum theory, it is also possible for an operator to have a non-discrete spectrum, such as the position operator $Q$ in quantum mechanics. This operator has a completely continuous spectrum, with eigenvalues and eigenvectors depending on a continuous parameter, $x$. Specifically, the operator acts on a spatial vector $|x\rangle$ as $Q |x\rangle = x |x\rangle$. In this case, the vector $\psi$ can be written as a complex-valued function $\psi(x)$ on the spectrum of $Q$ (usually the real line). This is formally achieved by projecting the state vector onto the eigenvectors of the operator, as in the discrete case: $\psi(x) = \langle x | \psi \rangle$. It happens that the eigenvectors of the position operator form a complete basis for the vector space of states, and therefore obey a completeness relation in quantum mechanics:
$$\int_{-\infty}^{\infty} |x\rangle \langle x| \, dx = 1.$$
The above may be used to derive the common, integral expression for the expected value (3), by inserting identities into the vector expression of expected value, then expanding in the position basis (for an operator $A$ that acts multiplicatively in this basis, so that $\langle x | A | x' \rangle = A(x') \, \langle x | x' \rangle$):
$$\langle A \rangle_\psi = \langle \psi | A | \psi \rangle = \int\!\!\int \langle \psi | x \rangle \langle x | A | x' \rangle \langle x' | \psi \rangle \, dx \, dx' = \int\!\!\int \psi^*(x) \, A(x') \, \delta(x - x') \, \psi(x') \, dx \, dx' = \int \psi^*(x) \, A(x) \, \psi(x) \, dx = \int A(x) \, |\psi(x)|^2 \, dx$$
Here the orthonormality relation of the position basis vectors, $\langle x | x' \rangle = \delta(x - x')$, reduces the double integral to a single integral. The last line uses the modulus of a complex-valued function to replace $\psi^* \psi$ with $|\psi|^2$, which is a common substitution in quantum-mechanical integrals.
The expectation value may then be stated, where $x$ is unbounded, as the formula
$$\langle A \rangle_\psi = \int_{-\infty}^{\infty} A(x) \, |\psi(x)|^2 \, dx. \qquad (3)$$
A similar formula holds for the momentum operator, in systems where it has continuous spectrum.
All the above formulas are valid for pure states only. Prominently in thermodynamics and quantum optics, also mixed states are of importance; these are described by a positive trace-class operator $\rho$, the statistical operator or density matrix. The expectation value can then be obtained as
$$\langle A \rangle = \operatorname{Tr}(\rho A). \qquad (4)$$
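The trace formula (4) is easy to check numerically. The sketch below builds a mixed qubit state (an arbitrary 75/25 mixture, chosen purely for illustration) and evaluates $\operatorname{Tr}(\rho A)$ for the Pauli-Z observable:

```python
import numpy as np

# Pauli-Z observable and a mixed qubit state: 75% |0>, 25% |1>.
A = np.array([[1, 0], [0, -1]], dtype=complex)
ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)
rho = 0.75 * (ket0 @ ket0.conj().T) + 0.25 * (ket1 @ ket1.conj().T)

# <A> = Tr(rho A); the result is real for a self-adjoint A.
expectation = np.trace(rho @ A).real
print(expectation)  # 0.75 * (+1) + 0.25 * (-1) = 0.5
```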
General formulation
In general, quantum states are described by positive normalized linear functionals $\omega$ on the set of observables, mathematically often taken to be a C*-algebra. The expectation value of an observable $A$ is then given by
$$\langle A \rangle = \omega(A).$$
If the algebra of observables acts irreducibly on a Hilbert space, and if $\omega$ is a normal functional, that is, it is continuous in the ultraweak topology, then it can be written as
$$\omega(A) = \operatorname{Tr}(\rho A)$$
with a positive trace-class operator $\rho$ of trace 1. This gives formula (4) above. In the case of a pure state, $\rho = |\psi\rangle\langle\psi|$ is a projection onto a unit vector $\psi$. Then $\omega(A) = \langle \psi | A | \psi \rangle$, which gives formula (1) above.
$A$ is assumed to be a self-adjoint operator. In the general case, its spectrum will neither be entirely discrete nor entirely continuous. Still, one can write $A$ in a spectral decomposition,
$$A = \int \lambda \, dE(\lambda)$$
with a projection-valued measure $E$. For the expectation value of $A$ in a pure state $\psi$, this means
$$\langle A \rangle_\psi = \int \lambda \, d\langle \psi | E(\lambda) \psi \rangle,$$
which may be seen as a common generalization of formulas (2) and (3) above.
In non-relativistic theories of finitely many particles (quantum mechanics, in the strict sense), the states considered are generally normal. However, in other areas of quantum theory, also non-normal states are in use: They appear, for example, in the form of KMS states in quantum statistical mechanics of infinitely extended media, and as charged states in quantum field theory. In these cases, the expectation value is determined only by the more general formula $\langle A \rangle = \omega(A)$.
Example in configuration space
As an example, consider a quantum mechanical particle in one spatial dimension, in the configuration space representation. Here the Hilbert space is $L^2(\mathbb{R})$, the space of square-integrable functions on the real line. Vectors are represented by functions $\psi(x)$, called wave functions. The scalar product is given by $\langle \psi_1 | \psi_2 \rangle = \int \psi_1^*(x) \, \psi_2(x) \, dx$. The wave functions have a direct interpretation as a probability distribution:
$|\psi(x)|^2 \, dx$ gives the probability of finding the particle in an infinitesimal interval of length $dx$ about some point $x$.
As an observable, consider the position operator $Q$, which acts on wavefunctions $\psi$ by
$$(Q \psi)(x) = x \, \psi(x).$$
The expectation value, or mean value of measurements, of $Q$ performed on a very large number of identical independent systems will be given by
$$\langle Q \rangle_\psi = \int_{-\infty}^{\infty} x \, |\psi(x)|^2 \, dx.$$
The expectation value only exists if the integral converges, which is not the case for all vectors $\psi$. This is because the position operator is unbounded, and $\psi$ has to be chosen from its domain of definition.
In general, the expectation of any observable can be calculated by replacing $Q$ with the appropriate operator. For example, to calculate the average momentum, one uses the momentum operator in configuration space, $P = -i\hbar \, \frac{d}{dx}$. Explicitly, its expectation value is
$$\langle P \rangle_\psi = -i\hbar \int_{-\infty}^{\infty} \psi^*(x) \, \frac{d\psi(x)}{dx} \, dx.$$
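The two integrals above can also be approximated numerically. The following sketch (the grid extent, packet parameters, and units with $\hbar = 1$ are illustrative assumptions) discretizes a Gaussian wave packet and recovers its mean position and mean momentum:

```python
import numpy as np

# Grid and a normalized Gaussian wave packet with mean position x0
# and mean momentum k0 (units with hbar = 1).
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
x0, k0, sigma = 1.5, 2.0, 1.0
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize

# <Q> = integral of x |psi(x)|^2 dx
mean_x = np.sum(x * np.abs(psi) ** 2) * dx

# <P> = -i hbar * integral of psi*(x) dpsi/dx dx  (hbar = 1)
dpsi = np.gradient(psi, dx)
mean_p = (-1j * np.sum(psi.conj() * dpsi) * dx).real

print(mean_x)  # close to x0 = 1.5
print(mean_p)  # close to k0 = 2.0
```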
Not all operators in general provide a measurable value. An operator that has a pure real expectation value is called an observable and its value can be directly measured in experiment.
See also
Rayleigh quotient
Uncertainty principle
Virial theorem
Notes
References
Further reading
The expectation value, in particular as presented in the section "Formalism in quantum mechanics", is covered in most elementary textbooks on quantum mechanics.
For a discussion of conceptual aspects, see:
Quantum mechanics
de:Erwartungswert#Quantenmechanischer Erwartungswert | Expectation value (quantum mechanics) | [
"Physics"
] | 1,386 | [
"Theoretical physics",
"Quantum mechanics"
] |
18,991,816 | https://en.wikipedia.org/wiki/Dynamic%20equation | In mathematics, dynamic equation can refer to:
difference equation in discrete time
differential equation in continuous time
time scale calculus in combined discrete and continuous time
Dynamical systems | Dynamic equation | [
"Physics",
"Mathematics"
] | 33 | [
"Mechanics",
"Dynamical systems"
] |
18,993,816 | https://en.wikipedia.org/wiki/Solid | Solid is one of the four fundamental states of matter along with liquid, gas, and plasma. The molecules in a solid are closely packed together and contain the least amount of kinetic energy. A solid is characterized by structural rigidity (as in rigid bodies) and resistance to a force applied to the surface. Unlike a liquid, a solid object does not flow to take on the shape of its container, nor does it expand to fill the entire available volume like a gas. The atoms in a solid are bound to each other, either in a regular geometric lattice (crystalline solids, which include metals and ordinary ice), or irregularly (an amorphous solid such as common window glass). Solids cannot be compressed with little pressure, whereas gases can, because the molecules in a gas are loosely packed.
The branch of physics that deals with solids is called solid-state physics, and is the main branch of condensed matter physics (which also includes liquids). Materials science is primarily concerned with the physical and chemical properties of solids. Solid-state chemistry is especially concerned with the synthesis of novel materials, as well as the science of identification and chemical composition.
Microscopic description
The atoms, molecules or ions that make up solids may be arranged in an orderly repeating pattern, or irregularly. Materials whose constituents are arranged in a regular pattern are known as crystals. In some cases, the regular ordering can continue unbroken over a large scale, for example diamonds, where each diamond is a single crystal. Solid objects that are large enough to see and handle are rarely composed of a single crystal, but instead are made of a large number of single crystals, known as crystallites, whose size can vary from a few nanometers to several meters. Such materials are called polycrystalline. Almost all common metals, and many ceramics, are polycrystalline.
In other materials, there is no long-range order in the position of the atoms. These solids are known as amorphous solids; examples include polystyrene and glass.
Whether a solid is crystalline or amorphous depends on the material involved, and the conditions in which it was formed. Solids that are formed by slow cooling will tend to be crystalline, while solids that are frozen rapidly are more likely to be amorphous. Likewise, the specific crystal structure adopted by a crystalline solid depends on the material involved and on how it was formed.
While many common objects, such as an ice cube or a coin, are chemically identical throughout, many other common materials comprise a number of different substances packed together. For example, a typical rock is an aggregate of several different minerals and mineraloids, with no specific chemical composition. Wood is a natural organic material consisting primarily of cellulose fibers embedded in a matrix of organic lignin. In materials science, composites of more than one constituent material can be designed to have desired properties.
Classes of solids
The forces between the atoms in a solid can take a variety of forms. For example, a crystal of sodium chloride (common salt) is made up of ionic sodium and chlorine, which are held together by ionic bonds. In diamond or silicon, the atoms share electrons and form covalent bonds. In metals, electrons are shared in metallic bonding. Some solids, particularly most organic compounds, are held together with van der Waals forces resulting from the polarization of the electronic charge cloud on each molecule. The dissimilarities between the types of solid result from the differences between their bonding.
Metals
Metals typically are strong, dense, and good conductors of both electricity and heat.
The bulk of the elements in the periodic table, those to the left of a diagonal line drawn from boron to polonium, are metals.
Mixtures of two or more elements in which the major component is a metal are known as alloys.
People have been using metals for a variety of purposes since prehistoric times.
The strength and reliability of metals has led to their widespread use in construction of buildings and other structures, as well as in most vehicles, many appliances and tools, pipes, road signs and railroad tracks. Iron and aluminium are the two most commonly used structural metals. They are also the most abundant metals in the Earth's crust. Iron is most commonly used in the form of an alloy, steel, which contains up to 2.1% carbon, making it much harder than pure iron.
Because metals are good conductors of electricity, they are valuable in electrical appliances and for carrying an electric current over long distances with little energy loss or dissipation. Thus, electrical power grids rely on metal cables to distribute electricity. Home electrical systems, for example, are wired with copper for its good conducting properties and easy machinability. The high thermal conductivity of most metals also makes them useful for stovetop cooking utensils.
The study of metallic elements and their alloys makes up a significant portion of the fields of solid-state chemistry, physics, materials science and engineering.
Metallic solids are held together by a high density of shared, delocalized electrons, known as "metallic bonding". In a metal, atoms readily lose their outermost ("valence") electrons, forming positive ions. The free electrons are spread over the entire solid, which is held together firmly by electrostatic interactions between the ions and the electron cloud. The large number of free electrons gives metals their high values of electrical and thermal conductivity. The free electrons also prevent transmission of visible light, making metals opaque, shiny and lustrous.
More advanced models of metal properties consider the effect of the positive ion cores on the delocalised electrons. As most metals have crystalline structure, those ions are usually arranged into a periodic lattice. Mathematically, the potential of the ion cores can be treated by various models, the simplest being the nearly free electron model.
Minerals
Minerals are naturally occurring solids formed through various geological processes under high pressures. To be classified as a true mineral, a substance must have a crystal structure with uniform physical properties throughout. Minerals range in composition from pure elements and simple salts to very complex silicates with thousands of known forms. In contrast, a rock sample is a random aggregate of minerals and/or mineraloids, and has no specific chemical composition. The vast majority of the rocks of the Earth's crust consist of quartz (crystalline SiO2), feldspar, mica, chlorite, kaolin, calcite, epidote, olivine, augite, hornblende, magnetite, hematite, limonite and a few other minerals. Some minerals, like quartz, mica or feldspar are common, while others have been found in only a few locations worldwide. The largest group of minerals by far is the silicates (most rocks are ≥95% silicates), which are composed largely of silicon and oxygen, with the addition of ions of aluminium, magnesium, iron, calcium and other metals.
Ceramics
Ceramic solids are composed of inorganic compounds, usually oxides of chemical elements. They are chemically inert, and often are capable of withstanding chemical erosion that occurs in an acidic or caustic environment. Ceramics can generally withstand very high temperatures. Exceptions include non-oxide inorganic materials, such as nitrides, borides and carbides.
Traditional ceramic raw materials include clay minerals such as kaolinite, more recent materials include aluminium oxide (alumina). The modern ceramic materials, which are classified as advanced ceramics, include silicon carbide and tungsten carbide. Both are valued for their abrasion resistance, and hence find use in such applications as the wear plates of crushing equipment in mining operations.
Most ceramic materials, such as alumina and its compounds, are formed from fine powders, yielding a fine grained polycrystalline microstructure that is filled with light-scattering centers comparable to the wavelength of visible light. Thus, they are generally opaque materials, as opposed to transparent materials. Recent nanoscale (e.g. sol-gel) technology has, however, made possible the production of polycrystalline transparent ceramics such as transparent alumina and alumina compounds for such applications as high-power lasers. Advanced ceramics are also used in the medical, electrical and electronics industries.
Ceramic engineering is the science and technology of creating solid-state ceramic materials, parts and devices. This is done either by the action of heat, or, at lower temperatures, using precipitation reactions from chemical solutions. The term includes the purification of raw materials, the study and production of the chemical compounds concerned, their formation into components, and the study of their structure, composition and properties.
Mechanically speaking, ceramic materials are brittle, hard, strong in compression and weak in shearing and tension. Brittle materials may exhibit significant tensile strength by supporting a static load. Toughness indicates how much energy a material can absorb before mechanical failure, while fracture toughness (denoted KIc) describes the ability of a material with inherent microstructural flaws to resist fracture via crack growth and propagation. If a material has a large value of fracture toughness, the basic principles of fracture mechanics suggest that it will most likely undergo ductile fracture. Brittle fracture is very characteristic of most ceramic and glass-ceramic materials that typically exhibit low (and inconsistent) values of KIc.
For an example of applications of ceramics, the extreme hardness of zirconia is utilized in the manufacture of knife blades, as well as other industrial cutting tools. Ceramics such as alumina, boron carbide and silicon carbide have been used in bulletproof vests to repel large-caliber rifle fire. Silicon nitride parts are used in ceramic ball bearings, where their high hardness makes them wear resistant. In general, ceramics are also chemically resistant and can be used in wet environments where steel bearings would be susceptible to oxidation (or rust).
As another example of ceramic applications, in the early 1980s, Toyota researched production of an adiabatic ceramic engine with an operating temperature far above what metallic parts can tolerate. Ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. In a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. Work is also being done in developing ceramic parts for gas turbine engines. Turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. Such engines are not in production, however, because manufacturing ceramic parts with sufficient precision and durability is difficult and costly. Processing methods often result in a wide distribution of microscopic flaws that frequently play a detrimental role in the sintering process, resulting in the proliferation of cracks, and ultimate mechanical failure.
Glass ceramics
Glass-ceramic materials share many properties with both non-crystalline glasses and crystalline ceramics. They are formed as a glass, and then partially crystallized by heat treatment, producing both amorphous and crystalline phases so that crystalline grains are embedded within a non-crystalline intergranular phase.
Glass-ceramics are used to make cookware (originally known by the brand name CorningWare) and stovetops that have high resistance to thermal shock and extremely low permeability to liquids. The negative coefficient of thermal expansion of the crystalline ceramic phase can be balanced with the positive coefficient of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net coefficient of thermal expansion close to zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C.
Glass ceramics may also occur naturally when lightning strikes the crystalline (e.g. quartz) grains found in most beach sand. In this case, the extreme and immediate heat of the lightning (~2500 °C) creates hollow, branching rootlike structures called fulgurite via fusion.
Organic solids
Organic chemistry studies the structure, properties, composition, reactions, and preparation by synthesis (or other means) of chemical compounds of carbon and hydrogen, which may contain any number of other elements such as nitrogen, oxygen and the halogens: fluorine, chlorine, bromine and iodine. Some organic compounds may also contain the elements phosphorus or sulfur. Examples of organic solids include wood, paraffin wax, naphthalene and a wide variety of polymers and plastics.
Wood
Wood is a natural organic material consisting primarily of cellulose fibers embedded in a matrix of lignin. Regarding mechanical properties, the fibers are strong in tension, and the lignin matrix resists compression. Thus wood has been an important construction material since humans began building shelters and using boats. Wood to be used for construction work is commonly known as lumber or timber. In construction, wood is not only a structural material, but is also used to form the mould for concrete.
Wood-based materials are also extensively used for packaging (e.g. cardboard) and paper, which are both created from the refined pulp. The chemical pulping processes use a combination of high temperature and alkaline (kraft) or acidic (sulfite) chemicals to break the chemical bonds of the lignin before burning it out.
Polymers
One important property of carbon in organic chemistry is that it can form certain compounds, the individual molecules of which are capable of attaching themselves to one another, thereby forming a chain or a network. The process is called polymerization and the chains or networks polymers, while the source compound is a monomer. Two main groups of polymers exist: those artificially manufactured are referred to as industrial polymers or synthetic polymers (plastics) and those naturally occurring as biopolymers.
Monomers can have various chemical substituents, or functional groups, which can affect the chemical properties of organic compounds, such as solubility and chemical reactivity, as well as the physical properties, such as hardness, density, mechanical or tensile strength, abrasion resistance, heat resistance, transparency, color, etc. In proteins, these differences give the polymer the ability to adopt a biologically active conformation in preference to others (see self-assembly).
People have been using natural organic polymers for centuries in the form of waxes and shellac, which is classified as a thermoplastic polymer. A plant polymer named cellulose provided the tensile strength for natural fibers and ropes, and by the early 19th century natural rubber was in widespread use. Polymers are the raw materials (the resins) used to make what are commonly called plastics. Plastics are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Polymers that have been around, and that are in current widespread use, include carbon-based polyethylene, polypropylene, polyvinyl chloride, polystyrene, nylons, polyesters, acrylics, polyurethane, and polycarbonates, and silicon-based silicones. Plastics are generally classified as "commodity", "specialty" and "engineering" plastics.
Composite materials
Composite materials contain two or more macroscopic phases, one of which is often ceramic. For example, a continuous matrix, and a dispersed phase of ceramic particles or fibers.
Applications of composite materials range from structural elements such as steel-reinforced concrete, to the thermally insulative tiles that play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is Reinforced Carbon-Carbon (RCC), the light gray material that withstands the extreme temperatures of re-entry and protects the nose cap and leading edges of the Space Shuttle's wings. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfural alcohol in a vacuum chamber, and cured/pyrolized to convert the furfural alcohol to carbon. In order to provide oxidation resistance for reuse capability, the outer layers of the RCC are converted to silicon carbide.
Domestic examples of composites can be seen in the "plastic" casings of television sets, cell-phones and so on. These plastic casings are usually a composite made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for strength, bulk, or electro-static dispersion. These additions may be referred to as reinforcing fibers, or dispersants, depending on their purpose.
Thus, the matrix material surrounds and supports the reinforcement materials by maintaining their relative positions. The reinforcements impart their special mechanical and physical properties to enhance the matrix properties. A synergism produces material properties unavailable from the individual constituent materials, while the wide variety of matrix and strengthening materials provides the designer with the choice of an optimum combination.
Semiconductors
Semiconductors are materials that have an electrical resistivity (and conductivity) between that of metallic conductors and non-metallic insulators. They can be found in the periodic table moving diagonally downward right from boron. They separate the electrical conductors (or metals, to the left) from the insulators (to the right).
Devices made from semiconductor materials are the foundation of modern electronics, including radio, computers, telephones, etc. Semiconductor devices include the transistor, solar cells, diodes and integrated circuits. Solar photovoltaic panels are large semiconductor devices that directly convert light into electrical energy.
In a metallic conductor, current is carried by the flow of electrons, but in semiconductors, current can be carried either by electrons or by the positively charged "holes" in the electronic band structure of the material. Common semiconductor materials include silicon, germanium and gallium arsenide.
Nanomaterials
Many traditional solids exhibit different properties when they shrink to nanometer sizes. For example, nanoparticles of usually yellow gold and gray silicon are red in color; gold nanoparticles melt at much lower temperatures (~300 °C for 2.5 nm size) than the gold slabs (1064 °C); and metallic nanowires are much stronger than the corresponding bulk metals. The high surface area of nanoparticles makes them extremely attractive for certain applications in the field of energy. For example, platinum metals may provide improvements as automotive fuel catalysts, as well as proton exchange membrane (PEM) fuel cells. Also, ceramic oxides (or cermets) of lanthanum, cerium, manganese and nickel are now being developed as solid oxide fuel cells (SOFC). Lithium, lithium-titanate and tantalum nanoparticles are being applied in lithium-ion batteries. Silicon nanoparticles have been shown to dramatically expand the storage capacity of lithium-ion batteries during the expansion/contraction cycle. Silicon nanowires cycle without significant degradation and present the potential for use in batteries with greatly expanded storage times. Silicon nanoparticles are also being used in new forms of solar energy cells. Thin film deposition of silicon quantum dots on the polycrystalline silicon substrate of a photovoltaic (solar) cell increases voltage output as much as 60% by fluorescing the incoming light prior to capture. Here again, surface area of the nanoparticles (and thin films) plays a critical role in maximizing the amount of absorbed radiation.
Biomaterials
Many natural (or biological) materials are complex composites with remarkable mechanical properties. These complex structures, which have risen from hundreds of million years of evolution, are inspiring materials scientists in the design of novel materials. Their defining characteristics include structural hierarchy, multifunctionality and self-healing capability. Self-organization is also a fundamental feature of many biological materials and the manner by which the structures are assembled from the molecular level up. Thus, self-assembly is emerging as a new strategy in the chemical synthesis of high performance biomaterials.
Physical properties
Physical properties of elements and compounds that provide conclusive evidence of chemical composition include odor, color, volume, density (mass per unit volume), melting point, boiling point, heat capacity, physical form and shape at room temperature (solid, liquid or gas; cubic, trigonal crystals, etc.), hardness, porosity, index of refraction and many others. This section discusses some physical properties of materials in the solid state.
Mechanical
The mechanical properties of materials describe characteristics such as their strength and resistance to deformation. For example, steel beams are used in construction because of their high strength, meaning that they neither break nor bend significantly under the applied load.
Mechanical properties include elasticity, plasticity, tensile strength, compressive strength, shear strength, fracture toughness, ductility (low in brittle materials) and indentation hardness. Solid mechanics is the study of the behavior of solid matter under external actions such as external forces and temperature changes.
A solid does not exhibit macroscopic flow, as fluids do. Any degree of departure from its original shape is called deformation. The proportion of deformation to original size is called strain. If the applied stress is sufficiently low, almost all solid materials behave in such a way that the strain is directly proportional to the stress (Hooke's law). The coefficient of the proportion is called the modulus of elasticity or Young's modulus. This region of deformation is known as the linearly elastic region. Three models can describe how a solid responds to an applied stress:
Elasticity – When an applied stress is removed, the material returns to its undeformed state.
Viscoelasticity – These are materials that behave elastically, but also have damping. When the applied stress is removed, work has to be done against the damping effects and is converted to heat within the material. This results in a hysteresis loop in the stress–strain curve. This implies that the mechanical response has a time-dependence.
Plasticity – Materials that behave elastically generally do so when the applied stress is less than a yield value. When the stress is greater than the yield stress, the material behaves plastically and does not return to its previous state. That is, irreversible plastic deformation (or viscous flow) occurs after yield that is permanent.
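As a numerical illustration of the linearly elastic region described before the list above, the sketch below applies Hooke's law to a hypothetical steel rod; the load, cross-section, and Young's modulus of roughly 200 GPa are assumed values chosen for illustration:

```python
def elastic_strain(force_n: float, area_m2: float, youngs_modulus_pa: float) -> float:
    """Strain in the linearly elastic region via Hooke's law:
    stress = E * strain, so strain = (F / A) / E."""
    stress = force_n / area_m2          # Pa
    return stress / youngs_modulus_pa   # dimensionless

# Illustrative values: a 10 kN load on a 1 cm^2 steel rod (E ~ 200 GPa).
strain = elastic_strain(force_n=10_000.0, area_m2=1e-4, youngs_modulus_pa=200e9)
print(f"stress = {10_000.0 / 1e-4 / 1e6:.0f} MPa, strain = {strain:.2e}")
# stress = 100 MPa, strain = 5.00e-04 (a 0.05% elongation)
```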
Many materials become weaker at high temperatures. Materials that retain their strength at high temperatures, called refractory materials, are useful for many purposes. For example, glass-ceramics have become extremely useful for countertop cooking, as they exhibit excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C.
In the aerospace industry, high performance materials used in the design of aircraft and/or spacecraft exteriors must have a high resistance to thermal shock. Thus, synthetic fibers spun out of organic polymers and polymer/ceramic/metal composite materials and fiber-reinforced polymers are now being designed with this purpose in mind.
Thermal
Because solids have thermal energy, their atoms vibrate about fixed mean positions within the ordered (or disordered) lattice. The spectrum of lattice vibrations in a crystalline or glassy network provides the foundation for the kinetic theory of solids. This motion occurs at the atomic level, and thus cannot be observed or detected without highly specialized equipment, such as that used in spectroscopy.
Thermal properties of solids include thermal conductivity, which is the property of a material that indicates its ability to conduct heat. Solids also have a specific heat capacity, which is the capacity of a material to store energy in the form of heat (or thermal lattice vibrations).
Electrical
Electrical properties include both electrical resistivity and conductivity, dielectric strength, electromagnetic permeability, and permittivity. Electrical conductors such as metals and alloys are contrasted with electrical insulators such as glasses and ceramics. Semiconductors behave somewhere in between. Whereas conductivity in metals is caused by electrons, both electrons and holes contribute to current in semiconductors. Alternatively, ions support electric current in ionic conductors.
Many materials also exhibit superconductivity at low temperatures; they include metallic elements such as tin and aluminium, various metallic alloys, some heavily doped semiconductors, and certain ceramics. The electrical resistivity of most electrical (metallic) conductors generally decreases gradually as the temperature is lowered, but remains finite. In a superconductor, however, the resistance drops abruptly to zero when the material is cooled below its critical temperature. An electric current flowing in a loop of superconducting wire can persist indefinitely with no power source.
A dielectric, or electrical insulator, is a substance that is highly resistant to the flow of electric current. A dielectric, such as plastic, tends to concentrate an applied electric field within itself, which property is used in capacitors. A capacitor is an electrical device that can store energy in the electric field between a pair of closely spaced conductors (called 'plates'). When voltage is applied to the capacitor, electric charges of equal magnitude, but opposite polarity, build up on each plate. Capacitors are used in electrical circuits as energy-storage devices, as well as in electronic filters to differentiate between high-frequency and low-frequency signals.
Electro-mechanical
Piezoelectricity is the ability of crystals to generate a voltage in response to an applied mechanical stress. The piezoelectric effect is reversible in that piezoelectric crystals, when subjected to an externally applied voltage, can change shape by a small amount. Polymer materials like rubber, wool, hair, wood fiber, and silk often behave as electrets. For example, the polymer polyvinylidene fluoride (PVDF) exhibits a piezoelectric response several times larger than the traditional piezoelectric material quartz (crystalline SiO2). The deformation (~0.1%) lends itself to useful technical applications such as high-voltage sources, loudspeakers, lasers, as well as chemical, biological, and acousto-optic sensors and/or transducers.
Optical
Materials can transmit (e.g. glass) or reflect (e.g. metals) visible light.
Many materials will transmit some wavelengths while blocking others. For example, window glass is transparent to visible light, but much less so to most of the frequencies of ultraviolet light that cause sunburn. This property is used for frequency-selective optical filters, which can alter the color of incident light.
For some purposes, both the optical and mechanical properties of a material can be of interest. For example, the sensors on an infrared homing ("heat-seeking") missile must be protected by a cover that is transparent to infrared radiation. The current material of choice for high-speed infrared-guided missile domes is single-crystal sapphire. The optical transmission of sapphire does not actually extend to cover the entire mid-infrared range (3–5 μm), but starts to drop off at wavelengths greater than approximately 4.5 μm at room temperature. While the strength of sapphire is better than that of other available mid-range infrared dome materials at room temperature, it weakens above 600 °C. A long-standing trade-off exists between optical bandpass and mechanical durability; new materials such as transparent ceramics or optical nanocomposites may provide improved performance.
Guided lightwave transmission involves the field of fiber optics and the ability of certain glasses to transmit, simultaneously and with low loss of intensity, a range of frequencies (multi-mode optical waveguides) with little interference between them. Optical waveguides are used as components in integrated optical circuits or as the transmission medium in optical communication systems.
Opto-electronic
A solar cell or photovoltaic cell is a device that converts light energy into electrical energy. Fundamentally, the device needs to fulfill only two functions: photo-generation of charge carriers (electrons and holes) in a light-absorbing material, and separation of the charge carriers to a conductive contact that will transmit the electricity (simply put, carrying electrons off through a metal contact into an external circuit). This conversion is called the photoelectric effect, and the field of research related to solar cells is known as photovoltaics.
Solar cells have many applications. They have long been used in situations where electrical power from the grid is unavailable, such as in remote area power systems, Earth-orbiting satellites and space probes, handheld calculators, wrist watches, remote radiotelephones and water pumping applications. More recently, they are starting to be used in assemblies of solar modules (photovoltaic arrays) connected to the electricity grid through an inverter, that is not to act as a sole supply but as an additional electricity source.
All solar cells require a light absorbing material contained within the cell structure to absorb photons and generate electrons via the photovoltaic effect. The materials used in solar cells tend to have the property of preferentially absorbing the wavelengths of solar light that reach the earth surface. Some solar cells are optimized for light absorption beyond Earth's atmosphere, as well.
History
Fields of study
Solid-state physics
Solid-state chemistry
Materials science
References
External links
Solid
Articles containing video clips | Solid | [
"Physics",
"Chemistry",
"Materials_science"
] | 6,026 | [
"Solids",
"Phases of matter",
"Condensed matter physics",
"Matter"
] |
18,993,825 | https://en.wikipedia.org/wiki/Liquid | A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. It is one of the four fundamental states of matter (the others being solid, gas, and plasma), and is the only state with a definite volume but no fixed shape.
The density of a liquid is usually close to that of a solid, and much higher than that of a gas. Therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids.
A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Like a gas, a liquid is able to flow and take the shape of a container. Unlike a gas, a liquid maintains a fairly constant density and does not disperse to fill every space of a container.
Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is either gas (as interstellar clouds) or plasma (as stars).
Introduction
Liquid is one of the four primary states of matter, with the others being solid, gas and plasma. A liquid is a fluid. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid, allowing a liquid to flow while a solid remains rigid.
A liquid, like a gas, displays the properties of a fluid. A liquid can flow, assume the shape of a container, and, if placed in a sealed container, will distribute applied pressure evenly to every surface in the container. If liquid is placed in a bag, it can be squeezed into any shape. Unlike a gas, a liquid is nearly incompressible, meaning that it occupies nearly a constant volume over a wide range of pressures; it does not generally expand to fill available space in a container but forms its own surface, and it may not always mix readily with another liquid. These properties make a liquid suitable for applications such as hydraulics.
Liquid particles are bound firmly but not rigidly. They are able to move around one another freely, resulting in a limited degree of particle mobility. As the temperature increases, the increased vibrations of the molecules cause the distances between the molecules to increase. When a liquid reaches its boiling point, the cohesive forces that bind the molecules closely together break, and the liquid changes to its gaseous state (unless superheating occurs). If the temperature is decreased, the distances between the molecules become smaller. When the liquid reaches its freezing point the molecules will usually lock into a very specific order, called crystallizing, and the bonds between them become more rigid, changing the liquid into its solid state (unless supercooling occurs).
Examples
Only two elements are liquid at standard conditions for temperature and pressure: mercury and bromine. Four more elements have melting points slightly above room temperature: francium, caesium, gallium and rubidium. In addition, certain mixtures of elements are liquid at room temperature, even if the individual elements are solid under the same conditions (see eutectic mixture). An example is the sodium-potassium metal alloy NaK. Other metal alloys that are liquid at room temperature include galinstan, a gallium-indium-tin alloy that melts well below room temperature, as well as some amalgams (alloys involving mercury).
Pure substances that are liquid under normal conditions include water, ethanol and many other organic solvents. Liquid water is of vital importance in chemistry and biology, and it is necessary for all known forms of life.
Inorganic liquids include water, magma, inorganic nonaqueous solvents and many acids.
Important everyday liquids include aqueous solutions like household bleach, other mixtures of different substances such as mineral oil and gasoline, emulsions like vinaigrette or mayonnaise, suspensions like blood, and colloids like paint and milk.
Many gases can be liquefied by cooling, producing liquids such as liquid oxygen, liquid nitrogen, liquid hydrogen and liquid helium. Not all gases can be liquefied at atmospheric pressure, however. Carbon dioxide, for example, can only be liquefied at pressures above 5.1 atm.
Some materials cannot be classified within the classical three states of matter. For example, liquid crystals (used in liquid-crystal displays) possess both solid-like and liquid-like properties, and belong to their own state of matter distinct from either liquid or solid.
Applications
Lubrication
Liquids are useful as lubricants due to their ability to form a thin, freely flowing layer between solid materials. Lubricants such as oil are chosen for viscosity and flow characteristics that are suitable throughout the operating temperature range of the component. Oils are often used in engines, gear boxes, metalworking, and hydraulic systems for their good lubrication properties.
Solvation
Many liquids are used as solvents, to dissolve other liquids or solids. Solutions are found in a wide variety of applications, including paints, sealants, and adhesives. Naphtha and acetone are used frequently in industry to clean oil, grease, and tar from parts and machinery. Body fluids are water-based solutions.
Surfactants are commonly found in soaps and detergents. Solvents like alcohol are often used as antimicrobials. They are found in cosmetics, inks, and liquid dye lasers. They are used in the food industry, in processes such as the extraction of vegetable oil.
Cooling
Liquids tend to have better thermal conductivity than gases, and the ability to flow makes a liquid suitable for removing excess heat from mechanical components. The heat can be removed by channeling the liquid through a heat exchanger, such as a radiator, or the heat can be removed with the liquid during evaporation. Water or glycol coolants are used to keep engines from overheating. The coolants used in nuclear reactors include water or liquid metals, such as sodium or bismuth. Liquid propellant films are used to cool the thrust chambers of rockets. In machining, water and oils are used to remove the excess heat generated, which can quickly ruin both the work piece and the tooling. During perspiration, sweat removes heat from the human body by evaporating. In the heating, ventilation, and air-conditioning industry (HVAC), liquids such as water are used to transfer heat from one area to another.
Cooking
Liquids are often used in cooking due to their excellent heat-transfer capabilities. In addition to thermal conduction, liquids transmit energy by convection. In particular, because warmer fluids expand and rise while cooler areas contract and sink, liquids with low kinematic viscosity tend to transfer heat through convection at a fairly constant temperature, making a liquid suitable for blanching, boiling, or frying. Even higher rates of heat transfer can be achieved by condensing a gas into a liquid. At the liquid's boiling point, all of the heat energy is used to cause the phase change from a liquid to a gas, without an accompanying increase in temperature, and is stored as chemical potential energy. When the gas condenses back into a liquid this excess heat-energy is released at a constant temperature. This phenomenon is used in processes such as steaming.
Distillation
Since liquids often have different boiling points, mixtures or solutions of liquids or gases can typically be separated by distillation, using heat, cold, vacuum, pressure, or other means. Distillation can be found in everything from the production of alcoholic beverages, to oil refineries, to the cryogenic distillation of gases such as argon, oxygen, nitrogen, neon, or xenon by liquefaction (cooling them below their individual boiling points).
Hydraulics
Liquid is the primary component of hydraulic systems, which take advantage of Pascal's law to provide fluid power. Devices such as pumps and waterwheels have been used to change liquid motion into mechanical work since ancient times. Oils are forced through hydraulic pumps, which transmit this force to hydraulic cylinders. Hydraulics can be found in many applications, such as automotive brakes and transmissions, heavy equipment, and airplane control systems. Various hydraulic presses are used extensively in repair and manufacturing, for lifting, pressing, clamping and forming.
Liquid metals
Liquid metals have several properties that are useful in sensing and actuation, particularly their electrical conductivity and ability to transmit forces (incompressibility). As freely flowing substances, liquid metals retain these bulk properties even under extreme deformation. For this reason, they have been proposed for use in soft robots and wearable healthcare devices, which must be able to operate under repeated deformation. The metal gallium is considered to be a promising candidate for these applications as it is a liquid near room temperature, has low toxicity, and evaporates slowly.
Miscellaneous
Liquids are sometimes used in measuring devices. A thermometer often uses the thermal expansion of liquids, such as mercury, combined with their ability to flow to indicate temperature. A manometer uses the weight of the liquid to indicate air pressure.
The free surface of a rotating liquid forms a circular paraboloid and can therefore be used as a telescope. These are known as liquid-mirror telescopes. They are significantly cheaper than conventional telescopes, but can only point straight upward (zenith telescope). A common choice for the liquid is mercury.
Mechanical properties
Volume
Quantities of liquids are measured in units of volume. These include the SI unit cubic metre (m3) and its divisions, in particular the cubic decimeter, more commonly called the litre (1 dm3 = 1 L = 0.001 m3), and the cubic centimetre, also called millilitre (1 cm3 = 1 mL = 0.001 L = 10−6 m3).
The volume of a quantity of liquid is fixed by its temperature and pressure. Liquids generally expand when heated, and contract when cooled. Water between 0 °C and 4 °C is a notable exception.
On the other hand, liquids have little compressibility. Water, for example, will compress by only 46.4 parts per million for every unit increase in atmospheric pressure (bar). At around 4000 bar (400 megapascals or 58,000 psi) of pressure at room temperature water experiences only an 11% decrease in volume. Incompressibility makes liquids suitable for transmitting hydraulic power, because a change in pressure at one point in a liquid is transmitted undiminished to every other part of the liquid and very little energy is lost in the form of compression.
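As a quick consistency check (a sketch that treats the small-compression response as linear), the 46.4 ppm-per-bar figure implies a bulk modulus close to the roughly 2.2 GPa value quoted for water in the sound-propagation section below:

```python
# Bulk modulus K = dp / (dV/V), using the per-bar compressibility above.
dp = 1e5             # 1 bar expressed in Pa
dv_over_v = 46.4e-6  # fractional volume change per bar
K = dp / dv_over_v
print(f"K = {K / 1e9:.2f} GPa")  # ~2.16 GPa, consistent with ~2.2 GPa
```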
However, the negligible compressibility does lead to other phenomena. The banging of pipes, called water hammer, occurs when a valve is suddenly closed, creating a huge pressure-spike at the valve that travels backward through the system at just under the speed of sound. Another phenomenon caused by liquid's incompressibility is cavitation. Because liquids have little elasticity they can literally be pulled apart in areas of high turbulence or dramatic change in direction, such as the trailing edge of a boat propeller or a sharp corner in a pipe. A liquid in an area of low pressure (vacuum) vaporizes and forms bubbles, which then collapse as they enter high pressure areas. This causes liquid to fill the cavities left by the bubbles with tremendous localized force, eroding any adjacent solid surface.
Pressure and buoyancy
In a gravitational field, liquids exert pressure on the sides of a container as well as on anything within the liquid itself. This pressure is transmitted in all directions and increases with depth. If a liquid is at rest in a uniform gravitational field, the pressure $p$ at depth $h$ is given by
$$p = p_0 + \rho g h$$
where:
$p_0$ is the pressure at the surface
$\rho$ is the density of the liquid, assumed uniform with depth
$g$ is the gravitational acceleration
For a body of water open to the air, $p_0$ would be the atmospheric pressure.
Static liquids in uniform gravitational fields also exhibit the phenomenon of buoyancy, where objects immersed in the liquid experience a net force due to the pressure variation with depth. The magnitude of the force is equal to the weight of the liquid displaced by the object, and the direction of the force depends on the average density of the immersed object. If the density is smaller than that of the liquid, the buoyant force points upward and the object floats, whereas if the density is larger, the buoyant force points downward and the object sinks. This is known as Archimedes' principle.
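A minimal sketch of the two relations just described, hydrostatic pressure and Archimedes' principle; the density, gravity, and surface-pressure values are the usual rounded assumptions for water under standard conditions:

```python
RHO_WATER = 1000.0  # kg/m^3, assumed uniform with depth
G = 9.81            # m/s^2, gravitational acceleration
P_ATM = 101_325.0   # Pa, atmospheric pressure at the surface

def pressure_at_depth(depth_m: float) -> float:
    """Hydrostatic pressure p = p0 + rho * g * h for a liquid at rest."""
    return P_ATM + RHO_WATER * G * depth_m

def buoyant_force(volume_m3: float) -> float:
    """Archimedes' principle: the weight of the displaced liquid."""
    return RHO_WATER * G * volume_m3

print(pressure_at_depth(10.0))  # ~199,425 Pa: roughly double atmospheric
print(buoyant_force(0.001))     # ~9.81 N upward on a 1-litre object
```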
Surfaces
Unless the volume of a liquid exactly matches the volume of its container, one or more surfaces are observed. The presence of a surface introduces new phenomena which are not present in a bulk liquid. This is because a molecule at a surface possesses bonds with other liquid molecules only on the inner side of the surface, which implies a net force pulling surface molecules inward. Equivalently, this force can be described in terms of energy: there is a fixed amount of energy associated with forming a surface of a given area. This quantity is a material property called the surface tension, in units of energy per unit area (SI units: J/m2). Liquids with strong intermolecular forces tend to have large surface tensions.
A practical implication of surface tension is that liquids tend to minimize their surface area, forming spherical drops and bubbles unless other constraints are present. Surface tension is responsible for a range of other phenomena as well, including surface waves, capillary action, wetting, and ripples. In liquids under nanoscale confinement, surface effects can play a dominating role since – compared with a macroscopic sample of liquid – a much greater fraction of molecules are located near a surface.
The surface tension of a liquid directly affects its wettability. Most common liquids have tensions ranging in the tens of mJ/m2, so droplets of oil, water, or glue can easily merge and adhere to other surfaces, whereas liquid metals such as mercury may have tensions ranging in the hundreds of mJ/m2, thus droplets do not combine easily and surfaces may only wet under specific conditions.
The surface tensions of common liquids occupy a relatively narrow range of values when exposed to changing conditions such as temperature, which contrasts strongly with the enormous variation seen in other mechanical properties, such as viscosity.
The free surface of a liquid is disturbed by gravity (flatness) and waves (surface roughness).
Flow
An important physical property characterizing the flow of liquids is viscosity. Intuitively, viscosity describes the resistance of a liquid to flow.
More technically, viscosity measures the resistance of a liquid to deformation at a given rate, such as when it is being sheared at finite velocity. A specific example is a liquid flowing through a pipe: in this case the liquid undergoes shear deformation since it flows more slowly near the walls of the pipe than near the center. As a result, it exhibits viscous resistance to flow. In order to maintain flow, an external force must be applied, such as a pressure difference between the ends of the pipe.
The viscosity of liquids decreases with increasing temperature.
Precise control of viscosity is important in many applications, particularly the lubrication industry. One way to achieve such control is by blending two or more liquids of differing viscosities in precise ratios. In addition, various additives exist which can modulate the temperature-dependence of the viscosity of lubricating oils. This capability is important since machinery often operates over a range of temperatures (see also viscosity index).
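As an illustrative sketch of such blending (a minimal example assuming a simple Arrhenius-type logarithmic mixing rule; the two oil viscosities are arbitrary example values, and real lubricant formulation relies on more refined correlations):

import math

def blend_viscosity(mu1, mu2, x1):
    """Arrhenius-type mixing rule: ln(mu_blend) = x1*ln(mu1) + (1 - x1)*ln(mu2)."""
    return math.exp(x1 * math.log(mu1) + (1.0 - x1) * math.log(mu2))

# Blending a light oil (20 mPa*s) with a heavy oil (200 mPa*s).
for x1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    mu = blend_viscosity(20.0, 200.0, x1)
    print(f"light-oil fraction {x1:.2f}: ~{mu:.0f} mPa*s")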
The viscous behavior of a liquid can be either Newtonian or non-Newtonian. A Newtonian liquid exhibits a linear relationship between shear stress and shear rate, meaning its viscosity is independent of time, shear rate, or shear-rate history. Examples of Newtonian liquids include water, glycerin, motor oil, honey, and mercury. A non-Newtonian liquid is one where the viscosity is not independent of these factors and either thickens (increases in viscosity) or thins (decreases in viscosity) under shear. Examples of non-Newtonian liquids include ketchup, custard, and starch solutions.
Sound propagation
The speed of sound in a liquid is given by c = √(K/ρ), where K is the bulk modulus of the liquid and ρ its density. As an example, water has a bulk modulus of about 2.2 GPa and a density of 1000 kg/m3, which gives c = 1.5 km/s.
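A one-line check of the quoted figure (standard values for water at room temperature):

import math

K = 2.2e9     # bulk modulus of water, Pa
rho = 1000.0  # density of water, kg/m^3
c = math.sqrt(K / rho)
print(f"Speed of sound in water: {c:.0f} m/s")  # ~1483 m/s, i.e. about 1.5 km/s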
Thermodynamics
Phase transitions
At a temperature below the boiling point, any matter in liquid form will evaporate until reaching equilibrium with the reverse process of condensation of its vapor. At this point the vapor will condense at the same rate as the liquid evaporates. Thus, a liquid cannot exist permanently if the evaporated liquid is continually removed. A liquid at or above its boiling point will normally boil, though superheating can prevent this in certain circumstances.
At a temperature below the freezing point, a liquid will tend to crystallize, changing to its solid form. Unlike the transition to gas, there is no equilibrium at this transition under constant pressure, so unless supercooling occurs, the liquid will eventually completely crystallize. However, this is only true under constant pressure, so that (for example) water and ice in a closed, strong container might reach an equilibrium where both phases coexist. For the opposite transition from solid to liquid, see melting.
Liquids in space
The phase diagram explains why liquids do not exist in space or any other vacuum. Since the pressure is essentially zero (except on surfaces or interiors of planets and moons) water and other liquids exposed to space will either immediately boil or freeze depending on the temperature. In regions of space near the Earth, water will freeze if the sun is not shining directly on it and vaporize (sublime) as soon as it is in sunlight. If water exists as ice on the Moon, it can only exist in shadowed holes where the sun never shines and where the surrounding rock does not heat it up too much. At some point near the orbit of Saturn, the light from the Sun is too faint to sublime ice to water vapor. This is evident from the longevity of the ice that composes Saturn's rings.
Solutions
Liquids can form solutions with gases, solids, and other liquids.
Two liquids are said to be miscible if they can form a solution in any proportion; otherwise they are immiscible. As an example, water and ethanol (drinking alcohol) are miscible whereas water and gasoline are immiscible. In some cases a mixture of otherwise immiscible liquids can be stabilized to form an emulsion, where one liquid is dispersed throughout the other as microscopic droplets. Usually this requires the presence of a surfactant in order to stabilize the droplets. A familiar example of an emulsion is mayonnaise, which consists of a mixture of water and oil that is stabilized by lecithin, a substance found in egg yolks.
Microscopic description
The microscopic structure of liquids is complex and historically has been the subject of intense research and debate. A few of the key ideas are explained below.
General description
Microscopically, liquids consist of a dense, disordered packing of molecules. This contrasts with the other two common phases of matter, gases and solids. Although gases are disordered, the molecules are well-separated in space and interact primarily through molecule-molecule collisions. Conversely, although the molecules in solids are densely packed, they usually fall into a regular structure, such as a crystalline lattice (glasses are a notable exception).
Short-range ordering
While liquids do not exhibit long-range ordering as in a crystalline lattice, they do possess short-range order, which persists over a few molecular diameters.
In all liquids, excluded volume interactions induce short-range order in molecular positions (center-of-mass coordinates). Classical monatomic liquids like argon and krypton are the simplest examples. Such liquids can be modeled as disordered "heaps" of closely packed spheres, and the short-range order corresponds to the fact that nearest and next-nearest neighbors in a packing of spheres tend to be separated by integer multiples of the diameter.
In most liquids, molecules are not spheres, and intermolecular forces possess a directionality, i.e., they depend on the relative orientation of molecules. As a result, there is short-ranged orientational order in addition to the positional order mentioned above. Orientational order is especially important in hydrogen-bonded liquids like water. The strength and directional nature of hydrogen bonds drives the formation of local "networks" or "clusters" of molecules. Due to the relative importance of thermal fluctuations in liquids (compared with solids), these structures are highly dynamic, continuously deforming, breaking, and reforming.
Energy and entropy
The microscopic features of liquids derive from an interplay between attractive intermolecular forces and entropic forces.
The attractive forces tend to pull molecules close together, and along with short-range repulsive interactions, they are the dominant forces behind the regular structure of solids. The entropic forces are not "forces" in the mechanical sense; rather, they describe the tendency of a system to maximize its entropy at fixed energy (see microcanonical ensemble). Roughly speaking, entropic forces drive molecules apart from each other, maximizing the volume they occupy. Entropic forces are dominant in gases and explain the tendency of gases to fill their containers. In liquids, by contrast, the intermolecular and entropic forces are comparable, so it is not possible to neglect one in favor of the other. Quantitatively, the binding energy between adjacent molecules is the same order of magnitude as the thermal energy k_B T.
No small parameter
The competition between energy and entropy makes liquids difficult to model at the molecular level, as there is no idealized "reference state" that can serve as a starting point for tractable theoretical descriptions. Mathematically, there is no small parameter from which one can develop a systematic perturbation theory. This situation contrasts with both gases and solids. For gases, the reference state is the ideal gas, and the density can be used as a small parameter to construct a theory of real (nonideal) gases (see virial expansion). For crystalline solids, the reference state is a perfect crystalline lattice, and possible small parameters are thermal motions and lattice defects.
Role of quantum mechanics
Like all known forms of matter, liquids are fundamentally quantum mechanical. However, under standard conditions (near room temperature and pressure), much of the macroscopic behavior of liquids can be understood in terms of classical mechanics. The "classical picture" posits that the constituent molecules are discrete entities that interact through intermolecular forces according to Newton's laws of motion. As a result, their macroscopic properties can be described using classical statistical mechanics. While the intermolecular force law technically derives from quantum mechanics, it is usually understood as a model input to classical theory, obtained either from a fit to experimental data or from the classical limit of a quantum mechanical description. An illustrative, though highly simplified example is a collection of spherical molecules interacting through a Lennard-Jones potential.
For the classical limit to apply, a necessary condition is that the thermal de Broglie wavelength,
λ_T = h / √(2π m k_B T),
is small compared with the length scale under consideration. Here, h is the Planck constant and m is the molecule's mass. Typical values of λ_T are about 0.01-0.1 nanometers (Table 1). Hence, a high-resolution model of liquid structure at the nanoscale may require quantum mechanical considerations. A notable example is hydrogen bonding in associated liquids like water, where, due to the small mass of the proton, inherently quantum effects such as zero-point motion and tunneling are important.
For a liquid to behave classically at the macroscopic level, λ_T must be small compared with the average distance a between molecules. That is,
λ_T / a ≪ 1.
Representative values of this ratio for a few liquids are given in Table 1. The conclusion is that quantum effects are important for liquids at low temperatures and with small molecular mass. For dynamic processes, there is an additional timescale constraint:
τ ≫ ħ / (k_B T),
where τ is the timescale of the process under consideration. For room-temperature liquids, the right-hand side is about 10⁻¹⁴ seconds, which generally means that time-dependent processes involving translational motion can be described classically.
At extremely low temperatures, even the macroscopic behavior of certain liquids deviates from classical mechanics. Notable examples are hydrogen and helium. Due to their low temperature and mass, such liquids have a thermal de Broglie wavelength comparable to the average distance between molecules.
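A short sketch evaluating the thermal de Broglie wavelength for a water molecule at room temperature and a helium atom at 4 K (masses and the liquid-helium temperature are standard reference values):

import math

h = 6.62607015e-34   # Planck constant, J*s
kB = 1.380649e-23    # Boltzmann constant, J/K
u = 1.66053907e-27   # atomic mass unit, kg

def thermal_wavelength(mass_kg, T):
    """lambda_T = h / sqrt(2*pi*m*kB*T)"""
    return h / math.sqrt(2.0 * math.pi * mass_kg * kB * T)

print(f"Water (18 u) at 300 K: {thermal_wavelength(18 * u, 300.0) * 1e9:.3f} nm")  # ~0.02 nm
print(f"Helium (4 u) at 4 K:   {thermal_wavelength(4 * u, 4.0) * 1e9:.3f} nm")     # ~0.4 nm, comparable to the interatomic spacing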
Dynamic phenomena
The expression for the sound velocity of a liquid,
c = √(K/ρ),
contains the bulk modulus K. If K is frequency-independent, then the liquid behaves as a linear medium, so that sound propagates without dissipation or mode coupling. In reality, all liquids show some dispersion: with increasing frequency, K crosses over from the low-frequency, liquid-like limit K₀ to the high-frequency, solid-like limit K∞. In normal liquids, most of this crossover takes place at frequencies between GHz and THz, sometimes called hypersound.
At sub-GHz frequencies, a normal liquid cannot sustain shear waves: the zero-frequency limit of the shear modulus is 0. This is sometimes seen as the defining property of a liquid.
However, like the bulk modulus K, the shear modulus G is also frequency-dependent and exhibits a similar crossover at hypersound frequencies.
According to linear response theory, the Fourier transform of K or G describes how the system returns to equilibrium after an external perturbation; for this reason, the dispersion step in the GHz to THz region is also called relaxation. As a liquid is supercooled toward the glass transition, the structural relaxation time exponentially increases, which explains the viscoelastic behavior of glass-forming liquids.
Experimental methods
The absence of long-range order in liquids is mirrored by the absence of Bragg peaks in X-ray and neutron diffraction. Under normal conditions, the diffraction pattern has circular symmetry, expressing the isotropy of the liquid. Radially, the diffraction intensity smoothly oscillates. This can be described by the static structure factor S(q), with wavenumber q = (4π/λ) sin θ given by the wavelength λ of the probe (photon or neutron) and the Bragg angle θ. The oscillations of S(q) express the short-range order of the liquid, i.e., the correlations between a molecule and "shells" of nearest neighbors, next-nearest neighbors, and so on.
An equivalent representation of these correlations is the radial distribution function g(r), which is related to the Fourier transform of S(q). It represents a spatial average of a temporal snapshot of pair correlations in the liquid.
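As a schematic illustration of how g(r) is computed in practice (a minimal sketch using random points as a stand-in for a real simulation snapshot, with an assumed box size and particle count; uncorrelated positions simply give g(r) ≈ 1):

import numpy as np

rng = np.random.default_rng(0)
L, N = 10.0, 500
pos = rng.uniform(0, L, (N, 3))  # stand-in for molecular positions

# Pair distances with minimum-image periodic boundary conditions.
d = pos[:, None, :] - pos[None, :, :]
d -= L * np.round(d / L)
r = np.sqrt((d**2).sum(-1))[np.triu_indices(N, k=1)]

edges = np.linspace(0.01, L / 2, 50)
hist, edges = np.histogram(r, bins=edges)
shell = (4.0 / 3.0) * np.pi * (edges[1:]**3 - edges[:-1]**3)
ideal = shell * N * (N - 1) / 2 / L**3   # expected pair counts for uncorrelated positions
g = hist / ideal
print(np.round(g[10:15], 2))  # ~1.0 everywhere: random points have no short-range order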
Prediction of liquid properties
Methods for predicting liquid properties can be organized by their "scale" of description, that is, the length scales and time scales over which they apply.
Macroscopic methods use equations that directly model the large-scale behavior of liquids, such as their thermodynamic properties and flow behavior.
Microscopic methods use equations that model the dynamics of individual molecules.
Mesoscopic methods fall in between, combining elements of both continuum and particle-based models.
Macroscopic
Empirical correlations
Empirical correlations are simple mathematical expressions intended to approximate a liquid's properties over a range of experimental conditions, such as varying temperature and pressure. They are constructed by fitting simple functional forms to experimental data. For example, the temperature-dependence of liquid viscosity is sometimes approximated by the function μ(T) = A·exp(B/T), where A and B are fitting constants. Empirical correlations allow for extremely efficient estimates of physical properties, which can be useful in thermophysical simulations. However, they require high quality experimental data to obtain a good fit and cannot reliably extrapolate beyond the conditions covered by experiments.
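A minimal sketch of building such a correlation by least-squares fitting μ(T) = A·exp(B/T); the data points are rounded literature-style viscosities for water, used purely for illustration:

import math
import numpy as np

T = np.array([273.0, 293.0, 313.0, 333.0, 353.0])  # K
mu = np.array([1.79, 1.00, 0.65, 0.47, 0.35])      # mPa*s (illustrative values for water)

# Linearize ln(mu) = ln(A) + B/T and fit a straight line.
B, lnA = np.polyfit(1.0 / T, np.log(mu), 1)
print(f"A = {math.exp(lnA):.4f} mPa*s, B = {B:.0f} K")
print(f"Predicted viscosity at 300 K: {math.exp(lnA + B / 300.0):.2f} mPa*s")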
Thermodynamic potentials
Thermodynamic potentials are functions that characterize the equilibrium state of a substance. An example is the Gibbs free energy G(p, T), which is a function of pressure and temperature. Knowing any one thermodynamic potential is sufficient to compute all equilibrium properties of a substance, often simply by taking derivatives of G. Thus, a single correlation for G can replace separate correlations for individual properties. Conversely, a variety of experimental measurements (e.g., density, heat capacity, vapor pressure) can be incorporated into the same fit; in principle, this would allow one to predict hard-to-measure properties like heat capacity in terms of other, more readily available measurements (e.g., vapor pressure).
Hydrodynamics
Hydrodynamic theories describe liquids in terms of space- and time-dependent macroscopic fields, such as density, velocity, and temperature. These fields obey partial differential equations, which can be linear or nonlinear. Hydrodynamic theories are more general than equilibrium thermodynamic descriptions, which assume that liquids are approximately homogeneous and time-independent. The Navier-Stokes equations are a well-known example: they are partial differential equations giving the time evolution of density, velocity, and temperature of a viscous fluid. There are numerous methods for numerically solving the Navier-Stokes equations and their variants.
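As a toy illustration of integrating such a field equation numerically, the sketch below solves the 1-D momentum-diffusion equation du/dt = ν d²u/dx² (the viscous term of the Navier-Stokes equations with all other terms stripped away) by explicit finite differences; the grid and viscosity values are arbitrary:

import numpy as np

nu = 0.1                        # kinematic viscosity (arbitrary units)
nx, dx, dt = 101, 0.01, 0.0004  # dt satisfies the stability bound dt <= dx**2 / (2 * nu)
u = np.zeros(nx)
u[40:60] = 1.0                  # initial velocity "blob"

for _ in range(500):
    # Update interior points; endpoints stay at u = 0 (no-slip walls).
    u[1:-1] += nu * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"Peak velocity after diffusion: {u.max():.3f}")  # smeared well below 1.0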
Mesoscopic
Mesoscopic methods operate on length and time scales between the particle and continuum levels. For this reason, they combine elements of particle-based dynamics and continuum hydrodynamics.
An example is the lattice Boltzmann method, which models a fluid as a collection of fictitious particles that exist on a lattice. The particles evolve in time through streaming (straight-line motion) and collisions. Conceptually, it is based on the Boltzmann equation for dilute gases, where the dynamics of a molecule consists of free motion interrupted by discrete binary collisions, but it is also applied to liquids. Despite the analogy with individual molecular trajectories, it is a coarse-grained description that typically operates on length and time scales larger than those of true molecular dynamics (hence the notion of "fictitious" particles).
Other methods that combine elements of continuum and particle-level dynamics include smoothed-particle hydrodynamics, dissipative particle dynamics, and multiparticle collision dynamics.
Microscopic
Microscopic simulation methods work directly with the equations of motion (classical or quantum) of the constituent molecules.
Classical molecular dynamics
Classical molecular dynamics (MD) simulates liquids using Newton's laws of motion; from Newton's second law (F = ma), the trajectories of molecules can be traced out explicitly and used to compute macroscopic liquid properties like density or viscosity. However, classical MD requires expressions for the intermolecular forces (the "F" in Newton's second law). Usually, these must be approximated using experimental data or some other input.
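A bare-bones sketch of the classical MD loop, using velocity-Verlet integration of Newton's second law with a Lennard-Jones force law in reduced units (particle count, box size, and time step are arbitrary demonstration values, not a production setup):

import numpy as np

N, L, dt = 27, 5.0, 0.005  # particles, box edge, time step (reduced LJ units)
rng = np.random.default_rng(1)
# Start on a 3x3x3 cubic lattice to avoid overlapping particles.
pos = (np.stack(np.meshgrid(*[np.arange(3)] * 3), axis=-1).reshape(-1, 3) + 0.5) * (L / 3)
vel = rng.normal(0.0, 0.5, (N, 3))

def forces(pos):
    """Lennard-Jones forces with epsilon = sigma = 1 and periodic boundaries."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)             # minimum-image convention
    r2 = (d**2).sum(-1) + np.eye(N)      # pad the diagonal to avoid division by zero
    inv6 = r2**-3
    f = 24.0 * (2.0 * inv6**2 - inv6) / r2
    np.fill_diagonal(f, 0.0)
    return (f[:, :, None] * d).sum(axis=1)

f = forces(pos)
for _ in range(100):          # velocity-Verlet time stepping
    vel += 0.5 * dt * f
    pos = (pos + dt * vel) % L
    f = forces(pos)
    vel += 0.5 * dt * f

print(f"Kinetic energy per particle: {0.5 * (vel**2).sum() / N:.3f}")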
Ab initio (quantum) molecular dynamics
Ab initio quantum mechanical methods simulate liquids using only the laws of quantum mechanics and fundamental atomic constants. In contrast with classical molecular dynamics, the intermolecular force fields are an output of the calculation, rather than an input based on experimental measurements or other considerations. In principle, ab initio methods can simulate the properties of a given liquid without any prior experimental data. However, they are very expensive computationally, especially for large molecules with internal structure.
See also
References
| Liquid | [
"Physics",
"Chemistry"
] | 6,449 | [
"Physical phenomena",
"Physical quantities",
"Phases of matter",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties",
"Matter",
"Liquids"
] |
18,993,869 | https://en.wikipedia.org/wiki/Gas | Gas is one of the four fundamental states of matter. The others are solid, liquid, and plasma. A pure gas may be made up of individual atoms (e.g. a noble gas like neon), elemental molecules made from one type of atom (e.g. oxygen), or compound molecules made from a variety of atoms (e.g. carbon dioxide). A gas mixture, such as air, contains a variety of pure gases. What distinguishes gases from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer.
The gaseous state of matter occurs between the liquid and plasma states, the latter of which provides the upper-temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerate quantum gases which are gaining increasing attention.
High-density atomic gases super-cooled to very low temperatures are classified by their statistical behavior as either Bose gases or Fermi gases. For a comprehensive listing of these exotic states of matter, see list of states of matter.
Elemental gases
The only chemical elements that are stable diatomic homonuclear molecular gases at STP are hydrogen (H2), nitrogen (N2), oxygen (O2), and two halogens: fluorine (F2) and chlorine (Cl2). When grouped with the monatomic noble gases – helium (He), neon (Ne), argon (Ar), krypton (Kr), xenon (Xe), and radon (Rn) – these gases are referred to as "elemental gases".
Etymology
The word gas was first used by the early 17th-century Flemish chemist Jan Baptist van Helmont. He identified carbon dioxide, the first known gas other than air. Van Helmont's word appears to have been simply a phonetic transcription of the Ancient Greek word χάος (chaos) – the g in Dutch being pronounced like ch in "loch" (voiceless velar fricative, /x/) – in which case Van Helmont simply was following the established alchemical usage first attested in the works of Paracelsus. According to Paracelsus's terminology, chaos meant something like "ultra-rarefied water".
An alternative story is that Van Helmont's term was derived from "gahst (or geist), which signifies a ghost or spirit". That story is given no credence by the editors of the Oxford English Dictionary. In contrast, the French-American historian Jacques Barzun speculated that Van Helmont had borrowed the word from the German Gäscht, meaning the froth resulting from fermentation.
Physical characteristics
Because most gases are difficult to observe directly, they are described through the use of four physical properties or macroscopic characteristics: pressure, volume, number of particles (chemists group them by moles) and temperature. These four characteristics were repeatedly observed by scientists such as Robert Boyle, Jacques Charles, John Dalton, Joseph Gay-Lussac and Amedeo Avogadro for a variety of gases in various settings. Their detailed studies ultimately led to a mathematical relationship among these properties expressed by the ideal gas law (see section below).
Gas particles are widely separated from one another, and consequently, have weaker intermolecular bonds than liquids or solids. These intermolecular forces result from electrostatic interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions of different gas particles attract one another; gases that contain permanently charged ions are known as plasmas. Gaseous compounds with polar covalent bonds contain permanent charge imbalances and so experience relatively strong intermolecular forces, although the compound's net charge remains neutral. Transient, randomly induced charges exist across non-polar covalent bonds of molecules and electrostatic interactions caused by them are referred to as Van der Waals forces. The interaction of these intermolecular forces varies within a substance which determines many of the physical properties unique to each gas. A comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion.
Compared to the other states of matter, gases have low density and viscosity. Pressure and temperature influence the particles within a certain volume. This variation in particle separation and speed is referred to as compressibility. This particle separation and size influences optical properties of gases as can be found in the following list of refractive indices. Finally, gas particles spread apart or diffuse in order to homogeneously distribute themselves throughout any container.
Macroscopic view of gases
When observing gas, it is typical to specify a frame of reference or length scale. A larger length scale corresponds to a macroscopic or global point of view of the gas. This region (referred to as a volume) must be sufficient in size to contain a large sampling of gas particles. The resulting statistical analysis of this sample size produces the "average" behavior (i.e. velocity, temperature or pressure) of all the gas particles within the region. In contrast, a smaller length scale corresponds to a microscopic or particle point of view.
Macroscopically, the gas characteristics measured are either in terms of the gas particles themselves (velocity, pressure, or temperature) or their surroundings (volume). For example, Robert Boyle studied pneumatic chemistry for a small portion of his career. One of his experiments related the macroscopic properties of pressure and volume of a gas. His experiment used a J-tube manometer which looks like a test tube in the shape of the letter J. Boyle trapped an inert gas in the closed end of the test tube with a column of mercury, thereby making the number of particles and the temperature constant. He observed that when the pressure was increased in the gas, by adding more mercury to the column, the trapped gas' volume decreased (this is known as an inverse relationship). Furthermore, when Boyle multiplied the pressure and volume of each observation, the product was constant. This relationship held for every gas that Boyle observed leading to the law, (PV=k), named to honor his work in this field.
There are many mathematical tools available for analyzing gas properties. Boyle's lab equipment allowed the use of just a simple calculation to obtain his analytical results. His results were possible because he was studying gases in relatively low pressure situations where they behaved in an "ideal" manner. These ideal relationships apply to safety calculations for a variety of flight conditions on the materials in use. However, the high technology equipment in use today was designed to help us safely explore the more exotic operating environments where the gases no longer behave in an "ideal" manner. As gases are subjected to extreme conditions, tools to interpret them become more complex, from the Euler equations for inviscid flow to the Navier–Stokes equations that fully account for viscous effects. This advanced math, including statistics and multivariable calculus, adapted to the conditions of the gas system in question, makes it possible to solve such complex dynamic situations as space vehicle reentry. An example is the analysis of the space shuttle reentry pictured to ensure the material properties under this loading condition are appropriate. In this flight situation, the gas is no longer behaving ideally.
Pressure
The symbol used to represent pressure in equations is "p" or "P" with SI units of pascals.
When describing a container of gas, the term pressure (or absolute pressure) refers to the average force per unit area that the gas exerts on the surface of the container. Within this volume, it is sometimes easier to visualize the gas particles moving in straight lines until they collide with the container (see diagram at top). The force imparted by a gas particle into the container during this collision is the change in momentum of the particle. During a collision only the normal component of velocity changes. A particle traveling parallel to the wall does not change its momentum. Therefore, the average force on a surface must be the average change in linear momentum from all of these gas particle collisions.
Pressure is the sum of all the normal components of force exerted by the particles impacting the walls of the container divided by the surface area of the wall.
Temperature
The symbol used to represent temperature in equations is T with SI units of kelvins.
The average kinetic energy of a gas particle is proportional to its absolute temperature, so typical particle speeds scale with the square root of temperature. The volume of the balloon in the video shrinks when the trapped gas particles slow down with the addition of extremely cold nitrogen. The temperature of any physical system is related to the motions of the particles (molecules and atoms) which make up the [gas] system. In statistical mechanics, temperature is the measure of the average kinetic energy stored in a molecule (also known as the thermal energy). The methods of storing this energy are dictated by the degrees of freedom of the molecule itself (energy modes). Thermal (kinetic) energy added to a gas or liquid (an endothermic process) produces translational, rotational, and vibrational motion. In contrast, a solid can only increase its internal energy by exciting additional vibrational modes, as the crystal lattice structure prevents both translational and rotational motion. These heated gas molecules have a greater speed range (wider distribution of speeds) with a higher average or mean speed. The variance of this distribution is due to the speeds of individual particles constantly varying, due to repeated collisions with other particles. The speed range can be described by the Maxwell–Boltzmann distribution. Use of this distribution implies ideal gases near thermodynamic equilibrium for the system of particles being considered.
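For instance, the mean and root-mean-square speeds implied by the Maxwell–Boltzmann distribution can be evaluated directly; here for nitrogen at room temperature:

import math

kB = 1.380649e-23           # Boltzmann constant, J/K
m = 28.0 * 1.66053907e-27   # mass of an N2 molecule, kg
T = 300.0                   # K

v_mean = math.sqrt(8.0 * kB * T / (math.pi * m))  # mean speed
v_rms = math.sqrt(3.0 * kB * T / m)               # root-mean-square speed
print(f"N2 at {T:.0f} K: mean ~{v_mean:.0f} m/s, rms ~{v_rms:.0f} m/s")  # ~476 and ~517 m/s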
Specific volume
The symbol used to represent specific volume in equations is "v" with SI units of cubic meters per kilogram.
The symbol used to represent volume in equations is "V" with SI units of cubic meters.
When performing a thermodynamic analysis, it is typical to speak of intensive and extensive properties. Properties which depend on the amount of gas (either by mass or volume) are called extensive properties, while properties that do not depend on the amount of gas are called intensive properties. Specific volume is an example of an intensive property because it is the ratio of volume occupied by a unit of mass of a gas that is identical throughout a system at equilibrium. 1000 atoms of a gas occupy the same space as any other 1000 atoms at any given temperature and pressure. This concept is easier to visualize for solids such as iron which are incompressible compared to gases. However, volume itself – not specific volume – is an extensive property.
Density
The symbol used to represent density in equations is ρ (rho) with SI units of kilograms per cubic meter. This term is the reciprocal of specific volume.
Since gas molecules can move freely within a container, their mass is normally characterized by density. Density is the amount of mass per unit volume of a substance, or the inverse of specific volume. For gases, the density can vary over a wide range because the particles are free to move closer together when constrained by pressure or volume. This variation of density is referred to as compressibility. Like pressure and temperature, density is a state variable of a gas and the change in density during any process is governed by the laws of thermodynamics. For a static gas, the density is the same throughout the entire container. Density is therefore a scalar quantity. It can be shown by kinetic theory that the density is inversely proportional to the size of the container in which a fixed mass of gas is confined. In this case of a fixed mass, the density decreases as the volume increases.
Microscopic view of gases
If one could observe a gas under a powerful microscope, one would see a collection of particles without any definite shape or volume that are in more or less random motion. These gas particles only change direction when they collide with another particle or with the sides of the container. This microscopic view of gas is well-described by statistical mechanics, but it can be described by many different theories. The kinetic theory of gases, which makes the assumption that these collisions are perfectly elastic, does not account for intermolecular forces of attraction and repulsion.
Kinetic theory of gases
Kinetic theory provides insight into the macroscopic properties of gases by considering their molecular composition and motion. Starting with the definitions of momentum and kinetic energy, one can use the conservation of momentum and geometric relationships of a cube to relate macroscopic system properties of temperature and pressure to the microscopic property of kinetic energy per molecule. The theory provides averaged values for these two properties.
The kinetic theory of gases can help explain how the system (the collection of gas particles being considered) responds to changes in temperature, with a corresponding change in kinetic energy.
For example: Imagine you have a sealed container of a fixed size (a constant volume), containing a fixed number of gas particles; starting from absolute zero (the theoretical temperature at which atoms or molecules have no thermal energy, i.e. are not moving or vibrating), you begin to add energy to the system by heating the container, so that energy transfers to the particles inside. Once their internal energy is above zero-point energy, meaning their kinetic energy (also known as thermal energy) is non-zero, the gas particles will begin to move around the container. As the box is further heated (as more energy is added), the individual particles increase their average speed as the system's total internal energy increases. The higher average speed of all the particles leads to a greater rate at which collisions happen (i.e. a greater number of collisions per unit of time), between particles and the container, as well as between the particles themselves.
The macroscopic, measurable quantity of pressure is the direct result of these microscopic particle collisions with the surface, over which individual molecules exert a small force, each contributing to the total force applied within a specific area. (See the Pressure section above.)
Likewise, the macroscopically measurable quantity of temperature is a quantification of the overall amount of motion, or kinetic energy, that the particles exhibit. (See the Temperature section above.)
Thermal motion and statistical mechanics
In the kinetic theory of gases, kinetic energy is assumed to purely consist of linear translations according to a speed distribution of particles in the system. However, in real gases and other real substances, the motions which define the kinetic energy of a system (which collectively determine the temperature) are much more complex than simple linear translation due to the more complex structure of molecules, compared to single atoms which act similarly to point masses. In real thermodynamic systems, quantum phenomena play a large role in determining thermal motions. The random, thermal motions (kinetic energy) in molecules are a combination of a finite set of possible motions including translation, rotation, and vibration. This finite range of possible motions, along with the finite set of molecules in the system, leads to a finite number of microstates within the system; we call the set of all microstates an ensemble. Specific to atomic or molecular systems, we could potentially have three different kinds of ensemble, depending on the situation: microcanonical ensemble, canonical ensemble, or grand canonical ensemble. Specific combinations of microstates within an ensemble are how we truly define the macrostate of the system (temperature, pressure, energy, etc.). In order to do that, we must first count all microstates through use of a partition function. The use of statistical mechanics and the partition function is an important tool throughout all of physical chemistry, because it is the key to the connection between the microscopic states of a system and the macroscopic variables which we can measure, such as temperature, pressure, heat capacity, internal energy, enthalpy, and entropy, just to name a few. (Read: Partition function § Meaning and significance.)
Using the partition function to find the energy of a molecule, or system of molecules, can sometimes be approximated by the equipartition theorem, which greatly simplifies calculation. However, this method assumes all molecular degrees of freedom are equally populated, and therefore equally utilized for storing energy within the molecule. It would imply that internal energy changes linearly with temperature, which is not the case; heat capacity changes with temperature because certain degrees of freedom are unreachable (a.k.a. "frozen out") at lower temperatures. As the internal energy of molecules increases, so does the ability to store energy within additional degrees of freedom. As more degrees of freedom become available to hold energy, the molar heat capacity of the substance increases.
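The freezing-out of degrees of freedom can be made concrete with the quantum harmonic-oscillator partition function, whose heat-capacity contribution per vibrational mode climbs from 0 at low temperature to the equipartition value kB at high temperature (a standard textbook result; the 3395 K vibrational temperature below is the commonly tabulated value for N2):

import math

def c_vib(T, theta_v):
    """Vibrational heat capacity per molecule, in units of kB."""
    x = theta_v / T
    return x**2 * math.exp(x) / (math.exp(x) - 1.0)**2

theta_N2 = 3395.0  # vibrational temperature of N2, K
for T in (300.0, 1000.0, 3000.0, 10000.0):
    print(f"T = {T:6.0f} K: vibrational Cv = {c_vib(T, theta_N2):.3f} kB")  # ~0 at 300 K, -> 1 at high T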
Brownian motion
Brownian motion is the mathematical model used to describe the random movement of particles suspended in a fluid. The gas particle animation, using pink and green particles, illustrates how this behavior results in the spreading out of gases (entropy). These events are also described by particle theory.
Since it is at the limit of (or beyond) current technology to observe individual gas particles (atoms or molecules), only theoretical calculations give suggestions about how they move, but their motion is different from Brownian motion because Brownian motion involves a smooth drag due to the frictional force of many gas molecules, punctuated by violent collisions of an individual (or several) gas molecule(s) with the particle. The particle (generally consisting of millions or billions of atoms) thus moves in a jagged course, yet not so jagged as would be expected if an individual gas molecule were examined.
Intermolecular forces – the primary difference between real and ideal gases
Forces between two or more molecules or atoms, either attractive or repulsive, are called intermolecular forces. Intermolecular forces are experienced by molecules when they are within physical proximity of one another. These forces are very important for properly modeling molecular systems, as to accurately predict the microscopic behavior of molecules in any system, and therefore, are necessary for accurately predicting the physical properties of gases (and liquids) across wide variations in physical conditions.
Arising from the study of physical chemistry, one of the most prominent intermolecular forces throughout physics is the van der Waals force. Van der Waals forces play a key role in determining nearly all physical properties of fluids such as viscosity, flow rate, and gas dynamics (see physical characteristics section). The van der Waals interactions between gas molecules are the reason why modeling a "real gas" is more mathematically difficult than an "ideal gas". Ignoring these proximity-dependent forces allows a real gas to be treated like an ideal gas, which greatly simplifies calculation.
The intermolecular attractions and repulsions between two gas molecules depend on the distance between them. The combined attractions and repulsions are well-modelled by the Lennard-Jones potential, which is one of the most extensively studied of all interatomic potentials describing the potential energy of molecular systems. Due to the general applicability and importance, the Lennard-Jones model system is often referred to as 'Lennard-Jonesium'. The Lennard-Jones potential between molecules can be broken down into two separate components: a long-distance attraction due to the London dispersion force, and a short-range repulsion due to electron-electron exchange interaction (which is related to the Pauli exclusion principle).
When two molecules are relatively distant (meaning they have a high potential energy), they experience a weak attracting force, causing them to move toward each other, lowering their potential energy. However, if the molecules are too far away, then they would not experience attractive force of any significance. Additionally, if the molecules get too close then they will collide, and experience a very high repulsive force (modelled by Hard spheres) which is a much stronger force than the attractions, so that any attraction due to proximity is disregarded.
As two molecules approach each other, from a distance that is neither too far nor too close, their attraction increases as the magnitude of their potential energy increases (becoming more negative), and lowers their total internal energy. The attraction causing the molecules to get closer can only happen if the molecules remain in proximity for the duration of time it takes to physically move closer. Therefore, the attractive forces are strongest when the molecules move at low speeds. This means that the attraction between molecules is significant when gas temperatures are low. However, if you were to isothermally compress this cold gas into a small volume, forcing the molecules into close proximity, and raising the pressure, the repulsions will begin to dominate over the attractions, as the rate at which collisions are happening will increase significantly. Therefore, at low temperatures and low pressures, attraction is the dominant intermolecular interaction.
If two molecules are moving at high speeds, in arbitrary directions, along non-intersecting paths, then they will not spend enough time in proximity to be affected by the attractive London-dispersion force. If the two molecules collide, they are moving too fast and their kinetic energy will be much greater than any attractive potential energy, so they will only experience repulsion upon colliding. Thus, attractions between molecules can be neglected at high temperatures due to high speeds. At high temperatures, and high pressures, repulsion is the dominant intermolecular interaction.
Accounting for the above stated effects which cause these attractions and repulsions, real gases deviate from the ideal gas model by the following generalization:
At low temperatures and low pressures, the volume occupied by a real gas is less than the volume predicted by the ideal gas law.
At high temperatures and high pressures, the volume occupied by a real gas is greater than the volume predicted by the ideal gas law.
Mathematical models
An equation of state (for gases) is a mathematical model used to roughly describe or predict the state properties of a gas. At present, there is no single equation of state that accurately predicts the properties of all gases under all conditions. Therefore, a number of much more accurate equations of state have been developed for gases in specific temperature and pressure ranges. The "gas models" that are most widely discussed are "perfect gas", "ideal gas" and "real gas". Each of these models has its own set of assumptions to facilitate the analysis of a given thermodynamic system. Each successive model expands the temperature range of coverage to which it applies.
Ideal and perfect gas
The equation of state for an ideal or perfect gas is the ideal gas law and reads
PV = nRT,
where P is the pressure, V is the volume, n is amount of gas (in mol units), R is the universal gas constant, 8.314 J/(mol K), and T is the temperature. Written this way, it is sometimes called the "chemist's version", since it emphasizes the number of molecules n. It can also be written as
P = ρR_sT,
where R_s is the specific gas constant for a particular gas, in units J/(kg K), and ρ = m/V is density. This notation is the "gas dynamicist's" version, which is more practical in modeling of gas flows involving acceleration without chemical reactions.
The ideal gas law does not make an assumption about the heat capacity of a gas. In the most general case, the specific heat is a function of both temperature and pressure. If the pressure-dependence is neglected (and possibly the temperature-dependence as well) in a particular application, sometimes the gas is said to be a perfect gas, although the exact assumptions may vary depending on the author and/or field of science.
For an ideal gas, the ideal gas law applies without restrictions on the specific heat. An ideal gas is a simplified "real gas" with the assumption that the compressibility factor Z is set to 1 meaning that this pneumatic ratio remains constant. A compressibility factor of one also requires the four state variables to follow the ideal gas law.
This approximation is more suitable for applications in engineering although simpler models can be used to produce a "ball-park" range as to where the real solution should lie. An example where the "ideal gas approximation" would be suitable would be inside a combustion chamber of a jet engine. It may also be useful to keep the elementary reactions and chemical dissociations for calculating emissions.
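Both forms of the law in a quick numerical sketch (287 J/(kg K) is the standard specific gas constant of dry air; conditions are sea-level values):

R = 8.314      # universal gas constant, J/(mol*K)
Rs = 287.0     # specific gas constant for dry air, J/(kg*K)
p, T = 101325.0, 288.0  # Pa, K

# "Chemist's version": molar volume from PV = nRT with n = 1 mol.
print(f"Molar volume: {R * T / p * 1000:.1f} L/mol")  # ~23.6 L/mol at 288 K

# "Gas dynamicist's version": density from P = rho * Rs * T.
print(f"Air density:  {p / (Rs * T):.3f} kg/m^3")     # ~1.225 kg/m^3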
Real gas
Each one of the assumptions listed below adds to the complexity of the problem's solution. As the density of a gas increases with rising pressure, the intermolecular forces play a more substantial role in gas behavior which results in the ideal gas law no longer providing "reasonable" results. At the upper end of the engine temperature ranges (e.g. combustor sections – 1300 K), the complex fuel particles absorb internal energy by means of rotations and vibrations that cause their specific heats to vary from those of diatomic molecules and noble gases. At more than double that temperature, electronic excitation and dissociation of the gas particles begin to occur, causing the pressure to adjust to a greater number of particles (transition from gas to plasma). Finally, all of the thermodynamic processes were presumed to describe uniform gases whose velocities varied according to a fixed distribution. Using a non-equilibrium situation implies the flow field must be characterized in some manner to enable a solution. One of the first attempts to expand the boundaries of the ideal gas law was to include coverage for different thermodynamic processes by adjusting the equation to read pV^n = constant and then varying n through different values such as the specific heat ratio, γ.
Real gas effects include those adjustments made to account for a greater range of gas behavior:
Compressibility effects (Z allowed to vary from 1.0)
Variable heat capacity (specific heats vary with temperature)
Van der Waals forces (related to compressibility, can substitute other equations of state)
Non-equilibrium thermodynamic effects
Issues with molecular dissociation and elementary reactions with variable composition.
For most applications, such a detailed analysis is excessive. Examples where real gas effects would have a significant impact would be on the Space Shuttle re-entry where extremely high temperatures and pressures were present or the gases produced during geological events as in the image of the 1990 eruption of Mount Redoubt.
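To see how such corrections shift a prediction, the sketch below compares the ideal gas law with the van der Waals equation, P = RT/(Vm − b) − a/Vm², for carbon dioxide at a moderately high density (a and b are the commonly tabulated van der Waals constants for CO2); consistent with the first generalization above, the attractive term lowers the predicted pressure:

R = 0.083145            # gas constant, L*bar/(mol*K)
a, b = 3.640, 0.04267   # van der Waals constants for CO2: L^2*bar/mol^2 and L/mol
T, Vm = 300.0, 0.5      # temperature (K) and molar volume (L/mol)

p_ideal = R * T / Vm
p_vdw = R * T / (Vm - b) - a / Vm**2
print(f"Ideal gas:     {p_ideal:.1f} bar")  # ~49.9 bar
print(f"van der Waals: {p_vdw:.1f} bar")    # ~40.0 bar, attractions dominate here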
Permanent gas
Permanent gas is a term used for a gas which has a critical temperature below the range of normal human-habitable temperatures and therefore cannot be liquefied by pressure within this range. Historically such gases were thought to be impossible to liquefy and would therefore permanently remain in the gaseous state. The term is relevant to ambient temperature storage and transport of gases at high pressure.
Historical research
Boyle's law
Boyle's law was perhaps the first expression of an equation of state. In 1662 Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was carefully measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. The image of Boyle's equipment shows some of the exotic tools used by Boyle during his study of gases.
Through these experiments, Boyle noted that the pressure exerted by a gas held at a constant temperature varies inversely with the volume of the gas. For example, if the volume is halved, the pressure is doubled; and if the volume is doubled, the pressure is halved. Given the inverse relationship between pressure and volume, the product of pressure (P) and volume (V) is a constant (k) for a given mass of confined gas as long as the temperature is constant. Stated as a formula, this is:
PV = k
Because the before and after volumes and pressures of the fixed amount of gas, where the before and after temperatures are the same, both equal the constant k, they can be related by the equation:
P₁V₁ = P₂V₂
Charles's law
In 1787, the French physicist and balloon pioneer, Jacques Charles, found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to the same extent over the same 80 kelvin interval. He noted that, for an ideal gas at constant pressure, the volume is directly proportional to its temperature:
V₁/T₁ = V₂/T₂
Gay-Lussac's law
In 1802, Joseph Louis Gay-Lussac published results of similar, though more extensive experiments. Gay-Lussac credited Charles' earlier work by naming the law in his honor. Gay-Lussac himself is credited with the law describing pressure, which he found in 1809. It states that the pressure exerted on a container's sides by an ideal gas is proportional to its temperature.
Avogadro's law
In 1811, Amedeo Avogadro verified that equal volumes of pure gases contain the same number of particles. His theory was not generally accepted until 1858 when another Italian chemist Stanislao Cannizzaro was able to explain non-ideal exceptions. For his work with gases a century prior, the physical constant that bears his name (the Avogadro constant) is the number of atoms per mole of elemental carbon-12 (6.022×10²³ mol⁻¹). This specific number of gas particles, at standard temperature and pressure (ideal gas law) occupies 22.40 liters, which is referred to as the molar volume.
Avogadro's law states that the volume occupied by an ideal gas is proportional to the amount of substance in the volume. This gives rise to the molar volume of a gas, which at STP is 22.4 dm3/mol (liters per mole). The relation is given by
V/n = k,
where n is the amount of substance of gas (the number of molecules divided by the Avogadro constant).
Dalton's law
In 1801, John Dalton published the law of partial pressures from his work with ideal gas law relationship: The pressure of a mixture of non reactive gases is equal to the sum of the pressures of all of the constituent gases alone. Mathematically, this can be represented for n species as:
P_total = P₁ + P₂ + ... + Pₙ
The image of Dalton's journal depicts symbology he used as shorthand to record the path he followed. Among his key journal observations upon mixing unreactive "elastic fluids" (gases) were the following:
Unlike liquids, heavier gases did not drift to the bottom upon mixing.
Gas particle identity played no role in determining final pressure (they behaved as if their size was negligible).
Special topics
Compressibility
Thermodynamicists use this factor (Z) to alter the ideal gas equation to account for compressibility effects of real gases. This factor represents the ratio of actual to ideal specific volumes. It is sometimes referred to as a "fudge-factor" or correction to expand the useful range of the ideal gas law for design purposes. Usually this Z value is very close to unity. The compressibility factor image illustrates how Z varies over a range of very cold temperatures.
Boundary layer
Gas particles will, in effect, "stick" to the surface of an object moving through the gas. This layer of particles is called the boundary layer. At the surface of the object, it is essentially static due to the friction of the surface. The object, with its boundary layer, is effectively the new shape of the object that the rest of the molecules "see" as the object approaches. This boundary layer can separate from the surface, essentially creating a new surface and completely changing the flow path. The classical example of this is a stalling airfoil. The delta wing image clearly shows the boundary layer thickening as the gas flows from right to left along the leading edge.
Turbulence
In fluid dynamics, turbulence or turbulent flow is a flow regime characterized by chaotic, stochastic property changes. This includes low momentum diffusion, high momentum convection, and rapid variation of pressure and velocity in space and time. The satellite view of weather around Robinson Crusoe Islands illustrates one example.
Viscosity
Viscosity, a physical property, is a measure of how well adjacent molecules stick to one another. A solid can withstand a shearing force due to the strength of these sticky intermolecular forces. A fluid will continuously deform when subjected to a similar load. While a gas has a lower value of viscosity than a liquid, it is still an observable property. If gases had no viscosity, then they would not stick to the surface of a wing and form a boundary layer. A study of the delta wing in the Schlieren image reveals that the gas particles stick to one another (see Boundary layer section).
Reynolds number
In fluid mechanics, the Reynolds number is the ratio of inertial forces (characterized by ρv_s) to viscous forces (characterized by μ/L); equivalently, Re = ρv_sL/μ. It is one of the most important dimensionless numbers in fluid dynamics and is used, usually along with other dimensionless numbers, to provide a criterion for determining dynamic similitude. As such, the Reynolds number provides the link between modeling results (design) and the full-scale actual conditions. It can also be used to characterize the flow.
Maximum entropy principle
As the total number of degrees of freedom approaches infinity, the system will be found in the macrostate that corresponds to the highest multiplicity. In order to illustrate this principle, observe the skin temperature of a frozen metal bar. Using a thermal image of the skin temperature, note the temperature distribution on the surface. This initial observation of temperature represents a "microstate". At some future time, a second observation of the skin temperature produces a second microstate. By continuing this observation process, it is possible to produce a series of microstates that illustrate the thermal history of the bar's surface. Characterization of this historical series of microstates is possible by choosing the macrostate that successfully classifies them all into a single grouping.
Thermodynamic equilibrium
When energy transfer ceases from a system, this condition is referred to as thermodynamic equilibrium. Usually, this condition implies the system and surroundings are at the same temperature so that heat no longer transfers between them. It also implies that external forces are balanced (volume does not change), and all chemical reactions within the system are complete. The timeline varies for these events depending on the system in question. A container of ice allowed to melt at room temperature takes hours, while in semiconductors the heat transfer that occurs in the device transition from an on to off state could be on the order of a few nanoseconds.
See also
Greenhouse gas
Landfill gas utilization
List of gases
Natural gas
Volcanic gas
Breathing gas
Wind
Notes
References
Further reading
Philip Hill and Carl Peterson. Mechanics and Thermodynamics of Propulsion: Second Edition Addison-Wesley, 1992.
National Aeronautics and Space Administration (NASA). Animated Gas Lab. Accessed February 2008.
Georgia State University. HyperPhysics. Accessed February 2008.
Antony Lewis WordWeb. Accessed February 2008.
Northwestern Michigan College The Gaseous State. Accessed February 2008.
| Gas | [
"Physics",
"Chemistry"
] | 7,151 | [
"Statistical mechanics",
"Gases",
"Phases of matter",
"Matter"
] |
18,995,926 | https://en.wikipedia.org/wiki/Lifting%20gas | A lifting gas or lighter-than-air gas is a gas that has a density lower than normal atmospheric gases and rises above them as a result, making it useful in lifting lighter-than-air aircraft. Only certain lighter than air gases are suitable as lifting gases. Dry air has a density of about 1.29 g/L (gram per liter) at standard conditions for temperature and pressure (STP) and an average molecular mass of 28.97 g/mol, and so lighter-than-air gases have a density lower than this.
Gases used for lifting
Hot air
Heated atmospheric air is frequently used in recreational ballooning. According to the ideal gas law, an amount of gas (and also a mixture of gases such as air) expands as it is heated. As a result, a certain volume of gas has a lower density as the temperature is higher. The temperature of the hot air in the envelope will vary depending upon the ambient temperature, but the maximum continuous operating temperature for most balloon envelopes is about 120 °C (250 °F).
Hydrogen
Hydrogen, being the lightest existing gas (7% the density of air, 0.08988 g/L at STP), seems to be the most appropriate gas for lifting. It can be easily produced in large quantities, for example with the water-gas shift reaction or electrolysis, but hydrogen has several disadvantages:
Hydrogen is extremely flammable. Some countries have banned the use of hydrogen as a lift gas for commercial vehicles but it is allowed for recreational free ballooning in the United States, United Kingdom and Germany. The Hindenburg disaster is frequently cited as an example of the safety risks posed by hydrogen. The extremely high cost of helium (compared to hydrogen) has led researchers to re-investigate the safety issues of using hydrogen as a lift gas, especially for vehicles not carrying passengers and being deployed away from populated areas. With good engineering and good handling practices, the risks can be significantly reduced.
Because the diatomic hydrogen molecule is very small, it can easily diffuse through many materials such as latex, so that the balloon will deflate quickly. This is one reason that many hydrogen or helium filled balloons are constructed out of Mylar/BoPET.
Helium
Helium is the second lightest gas (0.1786 g/L at STP). For that reason, it is an attractive gas for lifting as well.
A major advantage is that this gas is noncombustible. But the use of helium has some disadvantages, too:
The diffusion issue shared with hydrogen (though, as helium's molecular radius (138 pm) is smaller, it diffuses through more materials than hydrogen).
Helium is expensive.
Although abundant in the universe, helium is very scarce on Earth. The only commercially viable reserves are a few natural gas wells, mostly in the US, that trapped it from the slow alpha decay of radioactive materials within Earth. By human standards, helium is a non-renewable resource that cannot be practically manufactured from other materials. When released into the atmosphere, e.g., when a helium-filled balloon leaks or bursts, helium eventually escapes into space and is lost.
Coal gas
In the past, coal gas, a mixture of hydrogen, carbon monoxide, and other gases, was also used in balloons. It was widely available and cheap.
Disadvantages include a higher density (reducing lift), its flammability and the high toxicity of the carbon monoxide content.
Ammonia
Ammonia has been used as a lifting gas in balloons, but while inexpensive, it is relatively heavy (density 0.769 g/L at STP, average molecular mass 17.03 g/mol), poisonous, an irritant, and can damage some metals and plastics.
Methane
Methane (density 0.716 g/L at STP, average molecular mass 16.04 g/mol), the main component of natural gas, is sometimes used as a lift gas when hydrogen and helium are not available. It has the advantage of not leaking through balloon walls as rapidly as the smaller molecules of hydrogen and helium. Many lighter-than-air balloons are made of aluminized plastic that limits such leakage; hydrogen and helium leak rapidly through latex balloons. However, methane is highly flammable and like hydrogen is not appropriate for use in passenger-carrying airships. It is also relatively dense and a potent greenhouse gas.
Combinations
It is also possible to combine some of the above solutions. A well-known example is the Rozière balloon which combines a core of helium with an outer shell of hot air.
Gases theoretically suitable for lifting
Water vapour
The gaseous state of water is lighter than air (density 0.804 g/L at STP, average molecular mass 18.015 g/mol) due to water's low molar mass compared with typical atmospheric gases such as nitrogen (N2). It is non-flammable and much cheaper than helium. The concept of using steam for lifting is therefore already 200 years old; the biggest challenge has always been to make an envelope material that can withstand it. In 2003, a university team in Berlin, Germany, successfully flew a balloon lifted by 150 °C steam. However, such a design is generally impractical because of water's high boiling point and the problem of condensation.
Hydrogen fluoride
Hydrogen fluoride is lighter than air and could theoretically be used as a lifting gas. However, it is extremely corrosive, highly toxic, expensive, and heavier than other lifting gases, and its boiling point of 19.5 °C means it condenses at ordinary ambient temperatures. Its use would therefore be impractical.
Acetylene
Acetylene is 10% lighter than air and could be used as a lifting gas. Its extreme flammability and low lifting power make it an unattractive choice.
Hydrogen cyanide
Hydrogen cyanide, which is 7% lighter than air, is technically capable of being used as a lifting gas at temperatures above its boiling point of 25.6 °C. Its extreme toxicity, low buoyancy, and low boiling point have precluded such a use.
Neon
Neon is lighter than air (density 0.900 g/L at STP, average atomic mass 20.17 g/mol) and could lift a balloon. Like helium, it is non-flammable. However, it is rare on Earth and expensive, and is among the heavier lifting gases.
Nitrogen
Pure nitrogen has the advantage that it is inert and abundantly available, because it is the major component of air. However, because nitrogen is only 3% lighter than air, it is not a good choice for a lifting gas.
Ethylene
Ethylene is an unsaturated hydrocarbon that is 3% less dense than air. Unlike nitrogen, however, ethylene is highly flammable and far more expensive, rendering its use as a lifting gas highly impractical.
Diborane
Diborane is slightly lighter than molecular nitrogen, with a molecular mass of 27.7. However, it is pyrophoric, making it a safety hazard on a scale even greater than that of hydrogen.
Vacuum
Theoretically, an aerostatic vehicle could be made to use a vacuum or partial vacuum. As early as 1670, over a century before the first manned hot-air balloon flight, the Italian monk Francesco Lana de Terzi envisioned a ship with four vacuum spheres.
In a theoretically perfect situation with weightless spheres, a "vacuum balloon" would have 7% more net lifting force than a hydrogen-filled balloon, and 16% more net lifting force than a helium-filled one. However, because the walls of the balloon must remain rigid without imploding, the balloon is impractical to construct with any known material. Despite that, sometimes there is discussion on the topic.
Aerogel
While not a gas, it is possible to synthesize an ultralight aerogel with a density less than air, the lightest recorded so far reaching a density approximately 1/6th that of air. Aerogels don't float in ambient conditions, however, because air fills the pores of an aerogel's microstructure, so the apparent density of the aerogel is the sum of the densities of the aerogel material and the air contained within. In 2021, a group of researchers successfully levitated a series of carbon aerogels by heating them with a halogen lamp, which had the effect of lowering the density of the air trapped in the porous microstructure of the aerogel, allowing the aerogel to float.
Hydrogen versus helium
Hydrogen and helium are the most commonly used lift gases. Although helium is twice as heavy as (diatomic) hydrogen, they are both significantly lighter than air.
The lifting power in air of hydrogen and helium can be calculated using the theory of buoyancy as follows. At 0 °C and sea-level pressure, the density of air is about 1.292 kg/m3, that of hydrogen about 0.090 kg/m3, and that of helium about 0.178 kg/m3.
Thus helium is almost twice as dense as hydrogen. However, buoyancy depends upon the difference of the densities (ρair) − (ρgas) rather than upon their ratios. Thus the difference in buoyancies is about 8%, as seen from the buoyancy equation:
FB = (ρair - ρgas) × g × V
where FB = buoyant force (in newtons); g = gravitational acceleration = 9.8066 m/s2 = 9.8066 N/kg; V = volume (in m3).
Therefore, the amount of mass that can be lifted by hydrogen in air at sea level, equal to the density difference between hydrogen and air, is:
(1.292 - 0.090) kg/m3 = 1.202 kg/m3
and the buoyant force for one m3 of hydrogen in air at sea level is:
1 m3 × 1.202 kg/m3 × 9.8 N/kg = 11.8 N
Therefore, the amount of mass that can be lifted by helium in air at sea level is:
(1.292 - 0.178) kg/m3 = 1.114 kg/m3
and the buoyant force for one m3 of helium in air at sea level is:
1 m3 × 1.114 kg/m3 × 9.8 N/kg = 10.9 N
Thus hydrogen's additional buoyancy compared to helium is:
11.8 / 10.9 ≈ 1.08, or approximately 8.0%
This calculation is at sea level at 0 °C. For higher altitudes, or higher temperatures, the amount of lift will decrease proportionally to the air density, but the ratio of the lifting capability of hydrogen to that of helium will remain the same. This calculation does not include the mass of the envelope needed to hold the lifting gas.
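The sea-level figures above are easy to reproduce programmatically. The following minimal Python sketch uses only the densities quoted in this section; the function and variable names are illustrative.

```python
# Buoyant lift per cubic metre: F_B = (rho_air - rho_gas) * g * V
G = 9.8066  # N/kg, gravitational acceleration
RHO = {"air": 1.292, "hydrogen": 0.090, "helium": 0.178}  # kg/m^3 at 0 C, sea level

def lift_newtons(gas: str, volume_m3: float = 1.0) -> float:
    """Buoyant force of volume_m3 of the given gas immersed in air."""
    return (RHO["air"] - RHO[gas]) * G * volume_m3

h2, he = lift_newtons("hydrogen"), lift_newtons("helium")
print(f"hydrogen: {h2:.1f} N/m^3, helium: {he:.1f} N/m^3")   # ~11.8 vs ~10.9
print(f"hydrogen advantage: {(h2 / he - 1) * 100:.1f}%")     # ~8%
```

The output matches the hand calculation: hydrogen gives only about 8% more lift than helium, despite being roughly half as dense.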
High-altitude ballooning
At higher altitudes, the air pressure is lower and therefore the pressure inside the balloon is also lower. This means that while the mass of lifting gas and mass of displaced air for a given lift are the same as at lower altitude, the volume of the balloon is much greater at higher altitudes.
A balloon that is designed to lift to extreme heights (stratosphere) must be able to expand enormously in order to displace the required amount of air. That is why such balloons seem almost empty at launch.
A different approach for high altitude ballooning, especially used for long duration flights is the superpressure balloon. A superpressure balloon maintains a higher pressure inside the balloon than the external (ambient) pressure.
Submerged balloons
Because of the enormous density difference between water and gases (water is about 1,000 times denser than most gases), the lifting power of underwater gases is very strong. The type of gas used is largely inconsequential because the relative differences between gases are negligible in relation to the density of water. However, some gases can liquefy under high pressure, leading to an abrupt loss of buoyancy.
A submerged balloon that rises will expand or even explode because of the strong pressure reduction, unless gas is allowed to escape continuously during the ascent or the balloon is strong enough to withstand the change in pressure.
Divers use lifting bags (upside down bags) that they fill with air to lift heavy items like cannons and even whole ships during underwater archaeology and shipwreck salvaging. The air is either supplied from diving cylinders or pumped through a hose from the diver's ship on the surface.
Submarines use ballast tanks and trim tanks with air to regulate their buoyancy, essentially making them underwater "airships". Bathyscaphes are a type of deep-sea submersible that uses gasoline as the "lifting gas".
Balloons on other celestial bodies
A balloon can only have buoyancy if there is a medium that has a higher average density than the balloon itself.
Balloons cannot work on the Moon because it has almost no atmosphere.
Mars has a very thin atmosphere – the surface pressure is less than 1% of Earth's atmospheric pressure – so a huge balloon would be needed even for a tiny lifting effect. Overcoming the weight of such a balloon would be difficult, but several proposals to explore Mars with balloons have been made.
Venus has a CO2 atmosphere. Because CO2 is about 50% denser than Earth air, ordinary Earth air could be a lifting gas on Venus. This has led to proposals for a human habitat that would float in the atmosphere of Venus at an altitude where both the pressure and the temperature are Earth-like. In 1985, the Soviet Vega program deployed two helium balloons into Venus's atmosphere.
Titan, Saturn's largest moon, has a dense, very cold atmosphere of mostly nitrogen that is appropriate for ballooning. A use of aerobots on Titan was proposed. The Titan Saturn System Mission proposal included a balloon to circumnavigate Titan.
Solids
In 2002, aerogel held the Guinness World Record for the least dense (lightest) solid. Aerogel is mostly air because its structure is like that of a highly vacuous sponge. Its lightness and low density are due primarily to the large proportion of air within the solid rather than to the silica construction material. Taking advantage of this, SEAgel, in the same family as aerogel but made from agar, can be filled with helium gas to create a solid which floats when placed in an open-top container filled with a dense gas.
See also
Aerostat
Airship
Balloon (aircraft)
Buoyancy
Buoyancy compensator (aviation)
Cloud Nine (tensegrity sphere)
Heavier than air
Hot air balloon
Vacuum airship/Vacuum balloon
References
External links
Lighter-than-air - An overview
Airship Association
Aerostats
Airship technology
Buoyancy
Gas technologies
Gases
Hydrogen technologies
Mass density | Lifting gas | [
"Physics",
"Chemistry"
] | 2,991 | [
"Matter",
"Mechanical quantities",
"Physical quantities",
"Intensive quantities",
"Phases of matter",
"Mass",
"Volume-specific quantities",
"Density",
"Statistical mechanics",
"Mass density",
"Gases"
] |
18,996,403 | https://en.wikipedia.org/wiki/Neuregulin%203 | Neuregulin 3, also known as NRG3, is a neural-enriched member of the neuregulin protein family which in humans is encoded by the NRG3 gene. The NRGs are a group of signaling proteins belonging to the epidermal growth factor (EGF) superfamily of polypeptide growth factors. These proteins possess an 'EGF-like domain' that consists of six cysteine residues and three disulfide bridges predicted by the consensus sequence of the cysteine residues.
The neuregulins are a diverse family of proteins formed through alternative splicing from a single gene; they play crucial roles in regulating the growth and differentiation of epithelial, glial and muscle cells. These proteins also aid cell-cell associations in the breast, heart and skeletal muscles. Four different neuregulin genes have been identified, namely NRG1, NRG2, NRG3 and NRG4. While the NRG1 isoforms have been extensively studied, there is little information available about the other genes of the family. NRGs bind to the ERBB3 and ERBB4 tyrosine kinase receptors, which then form homodimers or heterodimers, often with ERBB2, which is thought to function as a co-receptor as it has not been observed to bind any ligand. NRGs bind to the ERBB receptors to promote phosphorylation of specific tyrosine residues on the C-terminal region of the receptor and interactions with intracellular signaling proteins.
NRGs also play significant roles in the development, maintenance, and repair of the nervous system; this is because NRG1, NRG2 and NRG3 are widely expressed in the central nervous system and also in the olfactory system. Studies have observed that in mice, NRG3 expression is limited to the developing and adult central nervous system; previous studies also highlight the roles of NRG1, ERBB2, and ERBB4 in the development of the heart. Mice deficient in ERBB2, ERBB4, or NRG1 were observed to die at the mid-embryogenesis stage from the termination of myocardial trabeculae development in the ventricle. These results confirm that NRG1 expressed in the endocardium is a significant ligand required to activate expression of ERBB2 and ERBB4 in the myocardium.
Function
Neuregulins are ligands of the ERBB-family receptors. While NRG1 and NRG2 are able to bind and activate both ERBB3 and ERBB4, NRG3 can bind only the extracellular domain of the ERBB4 receptor tyrosine kinase, where its binding stimulates tyrosine phosphorylation; it does not bind the other members of the ERBB receptor family, ERBB2 and ERBB3.
NRG1 plays critical roles in the development of the embryonic cerebral cortex, where it controls the migration and positioning of cortical cells. Contrary to NRG1, there is limited information on pre-mRNA splicing of the NRG3 gene, as well as on its transcriptional profile and function in the brain. The recent discovery of hFBNRG3 (human fetal brain NRG3; DQ857894), an alternatively spliced isoform of NRG3 cloned from human fetal brain, which promotes oligodendrocyte survival through the ERBB4/PI3K/AKT1 pathway, implicates NRG3-ERBB4 signaling in neurodevelopment and brain function.
Even though studies have revealed that NRG1 and NRG3 are paralogues, the EGF domain of NRG3 is only 31% identical to that of NRG1. The N-terminal domain of NRG3 resembles that of the Sensory and Motor Neuron-Derived Factor (SMDF), because it lacks the Ig-like and kringle-like domains found in many NRG1 isoforms. Hydropathy profile studies have shown that NRG3 lacks the hydrophobic N-terminal signal sequence common in secreted proteins, but contains a region of non-polar or uncharged amino acids at positions W66–V91. A similar amino acid region found in SMDF has been proposed to act as an internal, uncleaved signal sequence that mediates translocation across the endoplasmic reticulum membrane.
Clinical significance
Recent human genetic studies reveal the neuregulin 3 gene (NRG3) as a potential risk gene for several neurodevelopmental disorders, with structural and genetic variation within the gene implicated in schizophrenia, stunted development, attention-deficit disorders and bipolar disorder.
Most importantly, variants of the NRG3 gene have been linked to susceptibility to schizophrenia. Isoform-specific increases in NRG3 expression have been reported in schizophrenia and observed to interact with rs10748842, an NRG3 risk polymorphism, indicating that NRG3 transcriptional dysregulation is a molecular risk mechanism.
These isoforms have also been linked to Hirschsprung's disease.
Schizophrenia
Several genes in the NRG-ERBB signaling pathway have been implicated in genetic predisposition to schizophrenia. Neuregulin 3 (NRG3) encodes a protein similar to its paralog NRG1, and both play important roles in the developing nervous system. As observed with other pathologies such as autism and schizophrenia, several members of a given protein family have a high chance of association with the same phenotype, individually or together.
A recent study of the temporal, diagnostic, and tissue-specific modulation of NRG3 isoform expression in human brain development used qRT-PCR (quantitative real-time polymerase chain reaction) to quantify four classes of NRG3 transcripts in human postmortem dorsolateral prefrontal cortex from 286 normal and affected (bipolar or major depressive disorder) individuals ranging in age from 14 weeks to 85 years. The researchers observed that each of the four isoform classes (I–IV) of NRG3 showed a unique expression trajectory across human neopallium development and aging.
NRG3 class I was increased in bipolar and major depressive disorder, in agreement with observations in schizophrenia.
NRG3 class II was increased in bipolar disorder, and class III was increased in major depression cases.
NRG3 classes I, II and IV were actively involved in the developmental stages.
The rs10748842 risk genotype predicted elevated class II and III expression, consistent with previous reports in the brain, with tissue-specific analyses suggesting that classes II and III are brain-specific isoforms of NRG3.
References
Further reading
Neurotrophic factors | Neuregulin 3 | [
"Chemistry"
] | 1,428 | [
"Neurotrophic factors",
"Neurochemistry",
"Signal transduction"
] |
18,996,438 | https://en.wikipedia.org/wiki/TIE1 | Tyrosine kinase with immunoglobulin-like and EGF-like domains 1 also known as TIE1 is an angiopoietin receptor which in humans is encoded by the TIE1 gene.
Function
TIE1 is a cell surface protein expressed chiefly in endothelial cells, though it has also been shown to be expressed in immature hematopoietic cells and platelets. TIE1 upregulates the cell adhesion molecules (CAMs) VCAM-1, E-selectin, and ICAM-1 through a p38-dependent mechanism. Attachment of monocyte-derived immune cells to endothelial cells is also enhanced by TIE1 expression. TIE1 has a proinflammatory effect and may play a role in endothelial inflammatory diseases such as atherosclerosis.
See also
References
External links
Tyrosine kinase receptors
Proteins | TIE1 | [
"Chemistry"
] | 184 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Molecular biology",
"Proteins",
"Tyrosine kinase receptors"
] |
18,998,319 | https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein%20theorem%20for%20measurable%20spaces | The Cantor–Bernstein–Schroeder theorem of set theory has a counterpart for measurable spaces, sometimes called the Borel Schroeder–Bernstein theorem, since measurable spaces are also called Borel spaces. This theorem, whose proof is quite easy, is instrumental when proving that two measurable spaces are isomorphic. The general theory of standard Borel spaces contains very strong results about isomorphic measurable spaces, see Kuratowski's theorem. However, (a) the latter theorem is very difficult to prove, (b) the former theorem is satisfactory in many important cases (see Examples), and (c) the former theorem is used in the proof of the latter theorem.
The theorem
Let X and Y be measurable spaces. If there exist injective, bimeasurable maps f : X → Y and g : Y → X, then X and Y are isomorphic (the Schröder–Bernstein property).
Comments
The phrase " is bimeasurable" means that, first, is measurable (that is, the preimage is measurable for every measurable ), and second, the image is measurable for every measurable . (Thus, must be a measurable subset of not necessarily the whole )
An isomorphism (between two measurable spaces) is, by definition, a bimeasurable bijection. If it exists, these measurable spaces are called isomorphic.
Proof
First, one constructs a bijection h : X → Y out of f and g exactly as in the proof of the Cantor–Bernstein–Schroeder theorem. Second, h is measurable, since it coincides with f on a measurable set and with g⁻¹ on its complement. Similarly, the inverse map h⁻¹ is measurable.
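For readers who want the construction spelled out, the standard Cantor–Bernstein bijection can be written explicitly in the notation above; this is a sketch of the usual argument, not the only possible one.

```latex
% Sketch: explicit form of the bijection h : X -> Y built from the
% injections f : X -> Y and g : Y -> X of the theorem.
% Define X_0 = X \setminus g(Y), X_{n+1} = g(f(X_n)), and A = \bigcup_{n \ge 0} X_n. Then
\[
  h(x) =
  \begin{cases}
    f(x)      & \text{if } x \in A, \\
    g^{-1}(x) & \text{if } x \in X \setminus A.
  \end{cases}
\]
% Since f and g are bimeasurable, each X_n (and hence A) is measurable, so h is
% measurable: it coincides with f on A and with g^{-1} on the complement of A.
```

Note that g⁻¹ is defined on X \ A, since X \ A ⊆ g(Y) by the choice of X₀.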
Examples
Example 1
The open interval (0, 1) and the closed interval [0, 1] are evidently non-isomorphic as topological spaces (that is, not homeomorphic). However, they are isomorphic as measurable spaces. Indeed, the closed interval is evidently isomorphic to a shorter closed subinterval of the open interval. Also the open interval is evidently isomorphic to a part of the closed interval (just itself, for instance).
Example 2
The real line ℝ and the plane ℝ² are isomorphic as measurable spaces. It is immediate to embed ℝ into ℝ². The converse, embedding of ℝ² into ℝ (as measurable spaces, of course, not as topological spaces), can be made by a well-known trick with interspersed digits: the decimal digits of the two coordinates are interleaved, one digit at a time, into the digits of a single real number; for example,
g(π, 100e) = g(3.14159 26535…, 271.82818 28459…) is the single number whose digits alternate between those of the two arguments.
The map g is clearly injective. It is easy to check that it is bimeasurable. (However, it is not bijective; not every real number is of the form g(x, y).)
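The interspersed-digits embedding is easy to demonstrate in code. Below is a simplified Python sketch that interleaves finite decimal-digit strings (a full proof must handle whole infinite expansions, the integer parts, and the 0.999… ambiguity); the digit strings shown are just the first fractional digits of π and 100e.

```python
def interleave(x_digits: str, y_digits: str) -> str:
    """Interleave two equal-length strings of decimal digits:
    x1 x2 x3 ... and y1 y2 y3 ... become x1 y1 x2 y2 x3 y3 ..."""
    return "".join(a + b for a, b in zip(x_digits, y_digits))

# First eight fractional digits of pi = 3.14159265... and 100e = 271.82818284...
x, y = "14159265", "82818284"
print(interleave(x, y))  # -> "1842185198226854", encoding the pair as one string
```

Injectivity is visible here: the two original digit strings can be recovered by reading the even- and odd-indexed positions of the output.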
References
S.M. Srivastava, A Course on Borel Sets, Springer, 1998.
See Proposition 3.3.6 (on page 96), and the first paragraph of Section 3.3 (on page 94).
Theorems in measure theory
Descriptive set theory
Theorems in the foundations of mathematics | Schröder–Bernstein theorem for measurable spaces | [
"Mathematics"
] | 612 | [
"Theorems in mathematical analysis",
"Theorems in measure theory",
"Foundations of mathematics",
"Mathematical logic",
"Mathematical problems",
"Mathematical theorems",
"Theorems in the foundations of mathematics"
] |
16,766,322 | https://en.wikipedia.org/wiki/Akan%20goldweights | Akan goldweights (locally known as mrammou or abrammuo) are weights made of brass used as a measuring system by the Akan people of West Africa, particularly for weighing gold and for fair-trade arrangements with one another. The status of a man increased significantly if he owned a complete set of weights. Complete small sets of weights were gifts to newly wedded men. This ensured that he would be able to enter the merchant trade respectably and successfully.
Beyond their practical application, the weights are miniature representations of West African culture items such as adinkra symbols, plants, animals and people.
Dating the weights
Stylistic studies of goldweights can provide relative dates into the two broad Early and Late periods. The Early period is thought to have been from about 1400 to 1720 AD, with some overlap with the Late period, 1700–1900 AD. There is a distinct difference between the Early and Late periods. Geometric weights are the oldest forms, dating from 1400 AD onwards, while figurative weights, those made in the image of people, animals, buildings etc., first appear around 1600 AD. It is supposed that throughout the Early and Late periods, from 1400 to 1900 AD, around 4 million goldweights were cast by the Ashanti and Baule ethnic groups of West Africa.
Radiocarbon dating, a standard and accurate method in many disciplines, cannot be used to date the weights, as they are made of inorganic material. The base components of inorganic materials, such as metals, formed long before the artifact was manufactured. The copper and zinc used to make the alloy are vastly older than the artifact itself. Studies on the quality or origins of the base metals in brass are not very useful due to the broad distribution and recycling of the material.
Studying the weight's cultural background or provenance is an accurate method of dating the weights. Historical records accompanying the weight describing the people to whom it belonged, as well as a comparative study of the weights and the oral and artistic traditions of neighbouring communities, should be part of studying the background and provenance of the weights.
Meanings behind the weights
Scholars use the weights, and the oral traditions behind the weights, to understand aspects of Akan culture that otherwise may have been lost. The weights represent stories, riddles, and codes of conduct that helped guide Akan peoples in the ways they lived their lives. Central to Akan culture is the concern for equality and justice; it is rich in oral histories on this subject. Many weights symbolize significant and well-known stories. The weights were part of the Akan's cultural reinforcement, expressing personal behaviour codes, beliefs, and values in a medium that was assembled by many people.
These goldweights and their meanings emanate almost entirely from Akan systems and ways of thinking. While it is inevitable that different cultures influence each other, the names, writing and philosophy ingrained in these weights are all ideas native to the Akan people and hardly found in other West African societies.
Anthony Appiah describes how his mother, who collected goldweights, was visited by Muslim Hausa traders from the north. The goldweights they brought were "sold by people who had no use for them any more, now that paper and coin had replaced gold-dust as currency. And as she collected them, she heard more and more of the folklore that went with them; the proverbs that every figurative gold-weight elicited; the folk-tales, Ananseasem, that the proverbs evoked." Appiah also heard these Ananseasem, Anansi stories, from his father, and writes: "Between his stories and the cultural messages that came with the gold-weights, we gathered the sort of sense of a cultural tradition that comes from growing up in it. For us it was not Asante tradition but the webwork of our lives."
There are a number of parallels between Akan goldweights and the seals used in Harappa. Both artifacts stabilized and secured regional and local trade between peoples, while they took on further meaning beyond their practical uses.
Many animals native to the region, and shapes of all kinds, were depicted in these goldweights. Notably absent from Akan goldweights, however, is the depiction of a lion. While the lion and the leopard are both symbols of strength and courage in Akan art and culture, the leopard stands out as the prominent motif. The Ashanti people of the Akan region are primarily located in forested areas where leopards thrive. Symbolic lions tend to be a coat of arms associated with European trading companies in West Africa. The heraldic lion was not considered to be king of the jungle by the Ashanti; the leopard was.
Shields are symbols of bravery, stamina, or a glorious deed, though not necessarily in battle. Double-edged swords symbolize a joint rule between female and male, rather than implying violence or rule with fear. The naming of the weights is incredibly complex, as a complete list of Akan weights had more than sixty values, and each set had a local name that varied regionally. There are, from studies done by Garrard, twelve weight-name lists from Ghana and the Ivory Coast.
Collections of weights
Some estimate that there are 3 million goldweights in existence. Simon Fraser University has a small collection, consisting mostly of geometric style weights, with a number of human figurative weights; both types come from the SFU Museum of Archaeology and Ethnography. Many of the largest museums in the US and Europe have sizable collections of goldweights. The National Museum of Ghana, the Musée des Civilisations de Côte d'Ivoire in Abidjan, Derby Museum and smaller museums in Mali all have collections of weights with a range of dates. Private collections have amassed a wide range of weights as well.
Manufacture of the weights
In the past, each weight was meticulously carved, then cast using the ancient technique of lost wax. As the Akan culture moved away from using gold as the basis of their economy, the weights lost their cultural day-to-day use and some of their significance. Their popularity with tourists has created a market that the locals fill with mass-produced weights. These modern reproductions of the weights have become a tourist favorite. Rather than the simple but artistic facial features of the anthropomorphic weights or the clean, smooth lines of the geomorphic weights, modern weights are unrefined and look mass-produced. The strong oral tradition of the Akan is not included in the creation of the weights; however, this does not seem to lessen their popularity.
The skill involved in casting weights was enormous: most weights were less than 2½ ounces and their exact mass was meticulously measured. They were a standard of measure to be used in trade, and had to be accurate. The goldsmith, or adwumfo, would make adjustments if the casting weighed too much or too little. Even the most beautiful, figurative weights had limbs and horns removed, or edges filed down, until they met the closest weight equivalent. Weights that were not heavy enough would have small lead rings or glass beads attached to bring the weight up to the desired standard. There are far more weights without modifications than with them, speaking to the talent of the goldsmiths. Most weights were within 3% of their theoretical value; this variance is similar to that of European nest weights from the same time.
Early weights display bold, but simple, artistic designs. Later weights developed into beautiful works of art with fine details. However, by the 1890s (Late Period) the quality of both design and material was very poor, and the abandonment of the weights quickly followed.
Tim Garrard (April 28, 1943 – May 17, 2007) studied the Akan gold culture. His research was centered on goldweights and their cultural significances and purposes. He was also interested in the gold trade, the creation of the weight measurements, and how Akan trade networks operated with other networks. His works and those that use his work as a base are very informative about broader Akan culture.
The weights held by the SFU museum were donated in the late 1970s and are part of a wide collection of African cultural pieces.
See also
Birimian
Economy of the Ashanti Empire
Geology of Ghana
References
External links
http://www.geocities.com/gmmbacc/ (Archived 2009-10-24)
"Gold in Asante Courtly Arts" from the Metropolitan Museum of Art
"Goldweights and Proverbs"
Akan works at the University of Michigan Museum of Art
"Gold weights from Ghana" from the National Museums Scotland
African art
Culture of Ghana
Gold objects
Measuring instruments
Akan culture
Ghanaian art | Akan goldweights | [
"Technology",
"Engineering"
] | 1,790 | [
"Measuring instruments"
] |
16,767,087 | https://en.wikipedia.org/wiki/Cure | A cure is a substance or procedure that ends a medical condition, such as a medication, a surgical operation, a change in lifestyle or even a philosophical mindset that helps end a person's suffering; or the state of being healed, or cured. The medical condition could be a disease, mental illness, genetic disorder, or simply a condition a person considers socially undesirable, such as baldness or lack of breast tissue.
An incurable disease may or may not be a terminal illness; conversely, a curable illness can still result in the patient's death.
The proportion of people with a disease that are cured by a given treatment, called the cure fraction or cure rate, is determined by comparing disease-free survival of treated people against a matched control group that never had the disease.
Another way of determining the cure fraction and/or "cure time" is by measuring when the hazard rate in a diseased group of individuals returns to the hazard rate measured in the general population.
Inherent in the idea of a cure is the permanent end to the specific instance of the disease. When a person has the common cold, and then recovers from it, the person is said to be cured, even though the person might someday catch another cold. Conversely, a person that has successfully managed a disease, such as diabetes mellitus, so that it produces no undesirable symptoms for the moment, but without actually permanently ending it, is not cured.
Related concepts, whose meaning can differ, include response, remission and recovery.
Statistical model
In complex diseases, such as cancer, researchers rely on statistical comparisons of disease-free survival (DFS) of patients against matched, healthy control groups. This logically rigorous approach essentially equates indefinite remission with cure. The comparison is usually made through the Kaplan-Meier estimator approach.
The simplest cure rate model was published by Joseph Berkson and Robert P. Gage in 1952. In this model, the survival at any given time is equal to those that are cured plus those that are not cured, but who have not yet died or, in the case of diseases that feature asymptomatic remissions, have not yet re-developed signs and symptoms of the disease. When all of the non-cured people have died or re-developed the disease, only the permanently cured members of the population will remain, and the DFS curve will be perfectly flat. The earliest point in time that the curve goes flat is the point at which all remaining disease-free survivors are declared to be permanently cured. If the curve never goes flat, then the disease is formally considered incurable (with the existing treatments).
The Berkson and Gage equation is
S(t) = p + (1 − p) × exp(−λt)
where S(t) is the proportion of people surviving at any given point in time, p is the proportion that are permanently cured, and exp(−λt) is an exponential curve that represents the survival of the non-cured people.
Cure rate curves can be determined through an analysis of the data. The analysis allows the statistician to determine the proportion of people that are permanently cured by a given treatment, and also how long after treatment it is necessary to wait before declaring an asymptomatic individual to be cured.
Several cure rate models exist, such as the expectation-maximization algorithm and Markov chain Monte Carlo model. It is possible to use cure rate models to compare the efficacy of different treatments. Generally, the survival curves are adjusted for the effects of normal aging on mortality, especially when diseases of older people are being studied.
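As a concrete illustration of the Berkson and Gage model above, the following minimal Python sketch evaluates the survival function and shows the plateau at the cure fraction. The parameter values (cure fraction p and hazard λ) are invented for illustration, not taken from any study.

```python
import math

def berkson_gage_survival(t: float, p: float, lam: float) -> float:
    """Berkson-Gage mixture cure model: S(t) = p + (1 - p) * exp(-lam * t).
    p is the permanently cured fraction; exp(-lam*t) is survival of the non-cured."""
    return p + (1.0 - p) * math.exp(-lam * t)

# Illustrative values: 40% cure fraction, hazard 0.5 per year for the non-cured.
p, lam = 0.40, 0.5
for years in (0, 1, 5, 10, 20):
    print(years, round(berkson_gage_survival(years, p, lam), 3))
# As t grows, S(t) flattens toward p: the plateau identifies the cured fraction.
```

The flattening of the printed values mirrors the flattening of the DFS curve described above: once the non-cured subpopulation has been exhausted, only the cured fraction p remains.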
From the perspective of the patient, particularly one that has received a new treatment, the statistical model may be frustrating. It may take many years to accumulate sufficient information to determine the point at which the DFS curve flattens (and therefore no more relapses are expected). Some diseases may be discovered to be technically incurable, but also to require treatment so infrequently as to be not materially different from a cure. Other diseases may prove to have multiple plateaus, so that what was once hailed as a "cure" results unexpectedly in very late relapses. Consequently, patients, parents and psychologists developed the notion of psychological cure, or the moment at which the patient decides that the treatment was sufficiently likely to be a cure as to be called a cure. For example, a patient may declare himself to be "cured", and to determine to live his life as if the cure were definitely confirmed, immediately after treatment.
Related terms
Response: a partial reduction in symptoms after treatment.
Recovery: a restoration of health or functioning. A person who has been cured may not be fully recovered, and a person who has recovered may not be cured, as in the case of a person in a temporary remission or who is an asymptomatic carrier for an infectious disease.
Prevention: a way to avoid an injury, sickness, disability, or disease in the first place; generally it will not help someone who is already ill (though there are exceptions). For instance, many babies and young children are vaccinated against polio (a highly infectious disease) and other infectious diseases, which prevents them from contracting polio. But the vaccination does not work on patients who already have polio. A treatment or cure is applied after a medical problem has already started.
Therapy: therapy treats a problem, and may or may not lead to its cure. In incurable conditions, a treatment ameliorates the medical condition, often only for as long as the treatment is continued or for a short while after treatment is ended. For example, there is no cure for AIDS, but treatments are available to slow down the harm done by HIV and extend the treated person's life. Treatments don't always work. For example, chemotherapy is a treatment for cancer, but it may not work for every patient. In easily cured forms of cancer, such as childhood leukaemias, testicular cancer and Hodgkin lymphoma, cure rates may approach 90%. In other forms, treatment may be essentially impossible. A treatment need not be successful in 100% of patients to be considered curative. A given treatment may permanently cure only a small number of patients; so long as those patients are cured, the treatment is considered curative.
Examples
Cures can take the form of natural antibiotics (for bacterial infections), synthetic antibiotics such as the sulphonamides or fluoroquinolones, antivirals (for a very few viral infections), antifungals, antitoxins, vitamins, gene therapy, surgery, chemotherapy, radiotherapy, and so on. Despite a number of cures being developed, the list of incurable diseases remains long.
1700s
Scurvy became curable (as well as preventable) with doses of vitamin C (for example, in limes) when James Lind published A Treatise on the Scurvy (1753).
1890s
Antitoxins to diphtheria and tetanus toxins were produced by Emil Adolf von Behring and his colleagues from 1890 onwards. The use of diphtheria antitoxin for the treatment of diphtheria was regarded by The Lancet as the "most important advance of the [19th] Century in the medical treatment of acute infectious disease".
1930s
Sulphonamides become the first widely available cure for bacterial infections.
Antimalarials were first synthesized, making malaria curable.
1940s
Bacterial infections became curable with the development of antibiotics.
2010s
Hepatitis C, a viral infection, became curable through treatment with antiviral medications.
See also
Eradication of infectious diseases
Preventive medicine
Remission (medicine)
Relapse, the reappearance of a disease
Spontaneous remission
References
Drugs
Medical terminology
Therapy | Cure | [
"Chemistry"
] | 1,595 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
16,768,458 | https://en.wikipedia.org/wiki/Posidonia%20australis | Posidonia australis, also known as fibre-ball weed or ribbon weed, is a species of seagrass that occurs in the southern waters of Australia. It forms large meadows important to environmental conservation. Balls of decomposing detritus from the foliage are found along nearby shore-lines.
In 2022, a single stand in Shark Bay was reported by scientists to be not only the largest plant in the world, but the largest organism by area.
Description
Posidonia australis is a flowering plant occurring in dense meadows, or along channels, in white sand, over a range of depths. Subsurface rhizomes and roots provide stability in the sands it occupies. Erect rhizomes and leaves reduce the accumulation of silt.
The leaves are ribbon-like and bright green, perhaps becoming browned with age. The terminus of the leaf is rounded, or absent through damage. The leaves are arranged in groups, with older leaves on the outside, longer and differing in form from the younger leaves they surround.
The species is monoecious. The flowers appear on small spikes on leafless stems, two bracts on each spike. The plant pollinates by hydrophily, by dispersing in the water.
Posidonia australis reproduction usually occurs through sexual or asexual methods but, under extreme conditions, by pseudovivipary.
A 2013 study showed that P. australis can sequester carbon 35 times more efficiently than rainforests.
In 2022, a study by the School of Biological Sciences and Oceans Institute at The University of Western Australia showed that a single plant of this species can grow vegetatively by using rhizomes to cover an extensive area, similar to buffalo grass. This particular specimen has double the number of chromosomes of other studied populations (40 chromosomes instead of the usual 20).
Distribution
This species is found in waters around the southern coast of Australia. In Western Australia it occurs in the Shark Bay region, around islands of the Houtman Abrolhos, and southward along the coast of the Swan Coastal Plain. The species is recorded at the edge of the Esperance Plains, the Archipelago of the Recherche, at the southern coast of the southwest region. The range extends to the east to coastal areas of New South Wales, South Australia, Tasmania, and Victoria.
A sign of a nearby occurrence of Posidonia is the presence of masses of decomposing leaves on beaches, forming fibrous balls.
Largest known organism
A research article in the Proceedings of the Royal Society reported in June 2022 that genetic testing had revealed that samples of Posidonia australis taken from a meadow in Shark Bay, up to about 180 km apart, were all from a single clone of the same plant. The plant covers an area of seafloor of around 200 km2. This would make it the largest known organism in the world by area, exceeding the colony of the Armillaria ostoyae fungus in Malheur National Forest, Oregon, as well as a stand of quaking aspen trees in Utah.
The plant is estimated to have taken at least 4,500 years to grow to this size by using rhizomes to colonise new parts of the seafloor, assuming a rhizome growth rate of up to around 35 cm a year. This age puts it among the oldest known clonal plants as well.
Taxonomy
This species is a member of the family Posidoniaceae and one of eight Posidonia species occurring in Australia. The ninth member of the genus, Posidonia oceanica, is found in the Mediterranean Sea. The genus name, Posidonia, honours Poseidon, the Greek god of the sea, and australis refers to the species' southern distribution.
The species was first described by Joseph Hooker in Flora Tasmaniae. Common names for the plant include fibre-ball weed and ribbon weed.
Conservation status
IUCN lists this species as "near threatened", while the meadows in New South Wales have been listed by the Commonwealth of Australia as an endangered ecological community since 2015.
References
External links
Posidonia australis occurrence data from Australasian Virtual Herbarium
australis
Flora of New South Wales
Flora of South Australia
Flora of Tasmania
Flora of Victoria (state)
Angiosperms of Western Australia
Monocots of Australia
Largest organisms | Posidonia australis | [
"Biology"
] | 890 | [
"Largest organisms",
"Organism size"
] |
16,770,101 | https://en.wikipedia.org/wiki/Long%20non-coding%20RNA | Long non-coding RNAs (long ncRNAs, lncRNA) are a type of RNA, generally defined as transcripts of more than 200 nucleotides that are not translated into protein. This arbitrary limit distinguishes long ncRNAs from small non-coding RNAs, such as microRNAs (miRNAs), small interfering RNAs (siRNAs), Piwi-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs), and other short RNAs. Given that some lncRNAs have been reported to have the potential to encode small proteins or micro-peptides, the latest definition of lncRNA is a class of transcripts of over 200 nucleotides that have no or limited coding capacity. However, John S. Mattick and colleagues have suggested changing the definition of long non-coding RNAs to transcripts of more than 500 nt, which are mostly generated by Pol II. This means that the exact definition of lncRNA is still under discussion in the field. Long intervening/intergenic noncoding RNAs (lincRNAs) are sequences of transcripts that do not overlap protein-coding genes.
Long non-coding RNAs include intergenic lincRNAs, intronic ncRNAs, and sense and antisense lncRNAs, each type showing different genomic positions in relation to genes and exons.
The definition of lncRNAs differs from that of other RNAs such as siRNAs, mRNAs, miRNAs, and snoRNAs because it is not connected to the function of the RNA. A lncRNA is any transcript that is not one of the other well-characterized RNAs and is longer than 200-500 nucleotides. Some scientists think that most lncRNAs do not have a biologically relevant function because they are transcripts of junk DNA.
Abundance
Long non-coding transcripts are found in many species. Large-scale complementary DNA (cDNA) sequencing projects such as FANTOM reveal the complexity of these transcripts in humans. The FANTOM3 project identified ~35,000 non-coding transcripts that bear many signatures of messenger RNAs, including 5' capping, splicing, and poly-adenylation, but have little or no open reading frame (ORF). This number represents a conservative lower estimate, since it omitted many singleton transcripts and non-polyadenylated transcripts (tiling array data shows more than 40% of transcripts are non-polyadenylated). Identifying ncRNAs within these cDNA libraries is challenging since it can be difficult to distinguish protein-coding transcripts from non-coding transcripts. It has been suggested through multiple studies that testis and neural tissues express the greatest amount of long non-coding RNAs of any tissue type. Using FANTOM5, 27,919 long ncRNAs have been identified in various human sources.
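The difficulty of separating coding from non-coding transcripts can be illustrated with a toy filter. The Python sketch below is a simplified heuristic, not the FANTOM pipeline; the 200 nt length and 100-codon ORF cutoffs are assumed example values, and only the forward strand is scanned.

```python
import re

def longest_orf_nt(seq: str) -> int:
    """Length in nucleotides of the longest ATG...stop ORF on the forward strand."""
    best = 0
    for frame in range(3):
        codons = re.findall("...", seq[frame:])  # non-overlapping triplets
        start = None
        for i, codon in enumerate(codons):
            if codon == "ATG" and start is None:
                start = i
            elif codon in ("TAA", "TAG", "TGA") and start is not None:
                best = max(best, (i - start + 1) * 3)
                start = None
    return best

def is_candidate_lncrna(seq: str, min_len: int = 200, max_orf_codons: int = 100) -> bool:
    """Heuristic: a long transcript with no substantial ORF."""
    return len(seq) > min_len and longest_orf_nt(seq) < max_orf_codons * 3

print(is_candidate_lncrna("ATG" + "AAA" * 100 + "TAA"))  # False: contains a 306 nt ORF
```

Real annotation pipelines add many more signals (codon substitution frequencies, homology to known proteins, ribosome profiling), which is precisely why simple ORF-length filters leave the coding status of many transcripts ambiguous.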
Quantitatively, lncRNAs demonstrate ~10-fold lower abundance than mRNAs, which is explained by higher cell-to-cell variation of expression levels of lncRNA genes in the individual cells, when compared to protein-coding genes. In general, the majority (~78%) of lncRNAs are characterized as tissue-specific, as opposed to only ~19% of mRNAs. Only 3.6% of human lncRNA genes are expressed in various biological contexts and 34% of lncRNA genes are expressed at high level (top 25% of both lncRNAs and mRNAs) in at least one biological context. In addition to higher tissue specificity, lncRNAs are characterized by higher developmental stage specificity, and cell subtype specificity in tissues such as human neocortex and other parts of the brain, regulating correct brain development and function. In 2022, a comprehensive integration of lncRNAs from existing databases, revealed that there are 95,243 lncRNA genes and 323,950 transcripts in humans.
In comparison to mammals, relatively few studies have focused on the prevalence of lncRNAs in plants. However, an extensive study considering 37 higher plant species and six algae identified ~200,000 non-coding transcripts using an in-silico approach; the same work also established the associated Green Non-Coding Database (GreeNC), a repository of plant lncRNAs.
Genomic organization
In 2005 the landscape of the mammalian genome was described as numerous 'foci' of transcription that are separated by long stretches of intergenic space. While some long ncRNAs are located within the intergenic stretches, the majority are overlapping sense and antisense transcripts that often include protein-coding genes, giving rise to a complex hierarchy of overlapping isoforms. Genomic sequences within these transcriptional foci are often shared among a number of coding and non-coding transcripts in the sense and antisense directions. For example, 3012 out of 8961 cDNAs previously annotated as truncated coding sequences within FANTOM2 were later designated as genuine ncRNA variants of protein-coding cDNAs. While the abundance and conservation of these arrangements suggest they have biological relevance, the complexity of these foci frustrates easy evaluation.
The GENCODE consortium has collated and analysed a comprehensive set of human lncRNA annotations and their genomic organisation, modifications, cellular locations and tissue expression profiles. Their analysis indicates human lncRNAs show a bias toward two-exon transcripts.
Identification software
Translation
There has been considerable debate about whether lncRNAs have been misannotated and do in fact encode proteins. Several lncRNAs have indeed been found to encode peptides with biologically significant functions. Ribosome profiling studies have suggested that anywhere from 40% to 90% of annotated lncRNAs are in fact translated, although there is disagreement about the correct method for analyzing ribosome profiling data. Additionally, it is thought that many of the peptides produced by lncRNAs may be highly unstable and without biological function.
Conservation
Unlike protein-coding genes, long non-coding RNAs show a lower level of sequence conservation. Initial studies into lncRNA conservation noted that, as a class, they were enriched for conserved sequence elements, depleted in substitution and insertion/deletion rates, and depleted in rare-frequency variants, indicative of purifying selection maintaining lncRNA function. However, further investigations into vertebrate lncRNAs revealed that while lncRNAs are conserved in sequence, they are not conserved in transcription. In other words, even when the sequence of a human lncRNA is conserved in another vertebrate species, there is often no transcription of a lncRNA in the orthologous genomic region. Some argue that these observations suggest non-functionality of the majority of lncRNAs, while others argue that they may be indicative of rapid species-specific adaptive selection.
While the turnover of lncRNA transcription is much higher than initially expected, it is important to note that hundreds of lncRNAs are still conserved at the sequence level. There have been several attempts to delineate the different categories of selection signatures seen amongst lncRNAs, including lncRNAs with strong sequence conservation across the entire length of the gene, lncRNAs in which only a portion of the transcript (e.g. the 5′ end or splice sites) is conserved, and lncRNAs that are transcribed from syntenic regions of the genome but have no recognizable sequence similarity. Additionally, there have been attempts to identify conserved secondary structures in lncRNAs, though these studies have so far yielded conflicting results.
Functions
Some groups have claimed that the majority of long noncoding RNAs in mammals are likely to be functional, but other groups have claimed the opposite. This is an active area of research.
Some lncRNAs have been functionally annotated in LncRNAdb (a database of literature-described lncRNAs), with the majority of these being described in humans. Over 2,600 human lncRNAs with experimental evidence have been community-curated in LncRNAWiki (a wiki-based, publicly editable and open-content platform for community curation of human lncRNAs). According to this curation of functional mechanisms based on the literature, lncRNAs are extensively reported to be involved in ceRNA regulation, transcriptional regulation, and epigenetic regulation. A further large-scale sequencing study provides evidence that many transcripts thought to be lncRNAs may, in fact, be translated into proteins.
In the regulation of gene transcription
In gene-specific transcription
In eukaryotes, RNA transcription is a tightly regulated process. Noncoding RNAs act upon different aspects of this process, targeting transcriptional modulators, RNA polymerase (RNAP) II and even the DNA duplex to regulate gene expression.
NcRNAs modulate transcription by several mechanisms, including functioning themselves as co-regulators, modifying transcription factor activity, or regulating the association and activity of co-regulators. For example, the noncoding RNA Evf-2 functions as a co-activator for the homeobox transcription factor Dlx2, which plays important roles in forebrain development and neurogenesis. Sonic hedgehog induces transcription of Evf-2 from an ultra-conserved element located between the Dlx5 and Dlx6 genes during forebrain development. Evf-2 then recruits the Dlx2 transcription factor to the same ultra-conserved element whereby Dlx2 subsequently induces expression of Dlx5. The existence of other similar ultra- or highly conserved elements within the mammalian genome that are both transcribed and fulfill enhancer functions suggest Evf-2 may be illustrative of a generalised mechanism that regulates developmental genes with complex expression patterns during vertebrate growth. Indeed, the transcription and expression of similar non-coding ultraconserved elements was shown to be abnormal in human leukaemia and to contribute to apoptosis in colon cancer cells, suggesting their involvement in tumorigenesis in like fashion to protein-coding RNA.
Local ncRNAs can also recruit transcriptional programmes to regulate adjacent protein-coding gene expression.
The RNA binding protein TLS binds and inhibits the CREB binding protein and p300 histone acetyltransferase activities on a repressed gene target, cyclin D1. The recruitment of TLS to the promoter of cyclin D1 is directed by long ncRNAs expressed at low levels and tethered to 5' regulatory regions in response to DNA damage signals. Moreover, these local ncRNAs act cooperatively as ligands to modulate the activities of TLS. In the broad sense, this mechanism allows the cell to harness RNA-binding proteins, which make up one of the largest classes within the mammalian proteome, and integrate their function in transcriptional programs. Nascent long ncRNAs have been shown to increase the activity of CREB binding protein, which in turn increases the transcription of that ncRNA. A study found that a lncRNA in the antisense direction of the Apolipoprotein A1 (APOA1) regulates the transcription of APOA1 through epigenetic modifications.
Recent evidence has raised the possibility that transcription of genes that escape from X-inactivation might be mediated by expression of long non-coding RNA within the escaping chromosomal domains.
Regulating basal transcription machinery
NcRNAs also target general transcription factors required for the RNAP II transcription of all genes. These general factors include components of the initiation complex that assemble on promoters or are involved in transcription elongation. A ncRNA transcribed from an upstream minor promoter of the dihydrofolate reductase (DHFR) gene forms a stable RNA-DNA triplex within the major promoter of DHFR to prevent the binding of the transcriptional co-factor TFIIB. This novel mechanism of regulating gene expression may represent a widespread method of controlling promoter usage, as thousands of RNA-DNA triplexes exist in eukaryotic chromosomes. The U1 ncRNA can induce transcription by binding to and stimulating TFIIH to phosphorylate the C-terminal domain of RNAP II. In contrast, the ncRNA 7SK is able to repress transcription elongation by, in combination with HEXIM1/2, forming an inactive complex that prevents PTEFb from phosphorylating the C-terminal domain of RNAP II, repressing global elongation under stressful conditions. These examples, which bypass specific modes of regulation at individual promoters, provide a means of quickly affecting global changes in gene expression.
The ability to quickly mediate global changes is also apparent in the rapid expression of non-coding repetitive sequences. The short interspersed nuclear (SINE) Alu elements in humans and analogous B1 and B2 elements in mice have succeeded in becoming the most abundant mobile elements within the genomes, comprising ~10% of the human and ~6% of the mouse genome, respectively. These elements are transcribed as ncRNAs by RNAP III in response to environmental stresses such as heat shock, where they then bind to RNAP II with high affinity and prevent the formation of active pre-initiation complexes. This allows for the broad and rapid repression of gene expression in response to stress.
A dissection of the functional sequences within Alu RNA transcripts has drafted a modular structure analogous to the organization of domains in protein transcription factors. The Alu RNA contains two 'arms', each of which may bind one RNAP II molecule, as well as two regulatory domains that are responsible for RNAP II transcriptional repression in vitro. These two loosely structured domains may even be concatenated to other ncRNAs such as B1 elements to impart their repressive role. The abundance and distribution of Alu elements and similar repetitive elements throughout the mammalian genome may be partly due to these functional domains being co-opted into other long ncRNAs during evolution, with the presence of functional repeat sequence domains being a common characteristic of several known long ncRNAs including Kcnq1ot1, Xlsirt and Xist.
In addition to heat shock, the expression of SINE elements (including Alu, B1, and B2 RNAs) increases during cellular stress such as viral infection in some cancer cells where they may similarly regulate global changes to gene expression. The ability of Alu and B2 RNA to bind directly to RNAP II provides a broad mechanism to repress transcription. Nevertheless, there are specific exceptions to this global response where Alu or B2 RNAs are not found at activated promoters of genes undergoing induction, such as the heat shock genes. This additional hierarchy of regulation that exempts individual genes from the generalised repression also involves a long ncRNA, heat shock RNA-1 (HSR-1). It was argued that HSR-1 is present in mammalian cells in an inactive state, but upon stress is activated to induce the expression of heat shock genes. This activation involves a conformational alteration of HSR-1 in response to rising temperatures, permitting its interaction with the transcriptional activator HSF-1, which trimerizes and induces the expression of heat shock genes. In the broad sense, these examples illustrate a regulatory circuit nested within ncRNAs whereby Alu or B2 RNAs repress general gene expression, while other ncRNAs activate the expression of specific genes.
Transcribed by RNA polymerase III
Many of the ncRNAs that interact with general transcription factors or RNAP II itself (including 7SK, Alu and B1 and B2 RNAs) are transcribed by RNAP III, uncoupling their expression from RNAP II, which they regulate. RNAP III also transcribes other ncRNAs, such as BC2, BC200 and some microRNAs and snoRNAs, in addition to housekeeping ncRNA genes such as tRNAs, 5S rRNAs and snRNAs. The existence of an RNAP III-dependent ncRNA transcriptome that regulates its RNAP II-dependent counterpart is supported by the finding of a set of ncRNAs transcribed by RNAP III with sequence homology to protein-coding genes. This prompted the authors to posit a 'cogene/gene' functional regulatory network, showing that one of these ncRNAs, 21A, regulates the expression of its antisense partner gene, CENP-F in trans.
In post-transcriptional regulation
In addition to regulating transcription, ncRNAs also control various aspects of post-transcriptional mRNA processing. Similar to small regulatory RNAs such as microRNAs and snoRNAs, these functions often involve complementary base pairing with the target mRNA. The formation of RNA duplexes between complementary ncRNA and mRNA may mask key elements within the mRNA required to bind trans-acting factors, potentially affecting any step in post-transcriptional gene expression including pre-mRNA processing and splicing, transport, translation, and degradation.
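As a toy illustration of the duplex logic described above (not from the original article), the sketch below computes the reverse complement of a hypothetical mRNA regulatory element and checks whether a candidate antisense ncRNA can pair with it end-to-end; the sequences and function names are invented for illustration and do not correspond to any real transcript.

```python
# Minimal sketch: checking antisense complementarity between an ncRNA and an
# mRNA element, as in duplex-mediated masking. Sequences are invented
# placeholders, not real transcripts.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def is_perfect_duplex(ncrna: str, mrna_element: str) -> bool:
    """True if the ncRNA can base-pair end-to-end with the mRNA element."""
    return reverse_complement(ncrna) == mrna_element

mrna_site = "AUGGCUUACG"                   # hypothetical regulatory element
antisense = reverse_complement(mrna_site)  # a perfectly complementary ncRNA
print(antisense)                                # CGUAAGCCAU
print(is_perfect_duplex(antisense, mrna_site))  # True
```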
In splicing
The splicing of mRNA can induce its translation and functionally diversify the repertoire of proteins it encodes. The Zeb2 mRNA requires the retention of a 5'UTR intron that contains an internal ribosome entry site for efficient translation. The retention of the intron depends on the expression of an antisense transcript that complements the intronic 5' splice site. Therefore, the ectopic expression of the antisense transcript represses splicing and induces translation of the Zeb2 mRNA during mesenchymal development. Likewise, the expression of an overlapping antisense Rev-ErbAa2 transcript controls the alternative splicing of the thyroid hormone receptor ErbAa2 mRNA to form two antagonistic isoforms.
In translation
NcRNAs may also apply additional regulatory pressure during translation, a property particularly exploited in neurons, where the dendritic or axonal translation of mRNA in response to synaptic activity contributes to changes in synaptic plasticity and the remodelling of neuronal networks. The RNAP III-transcribed BC1 and BC200 ncRNAs, which derive from tRNAs, are expressed in the mouse and human central nervous system, respectively. BC1 expression is induced in response to synaptic activity and synaptogenesis, and BC1 is specifically targeted to dendrites in neurons. Sequence complementarity between BC1 and regions of various neuron-specific mRNAs also suggests a role for BC1 in targeted translational repression. Indeed, it was recently shown that BC1 is associated with translational repression in dendrites to control the efficiency of dopamine D2 receptor-mediated transmission in the striatum, and BC1 RNA-deleted mice exhibit behavioural changes with reduced exploration and increased anxiety.
In siRNA-directed gene regulation
In addition to masking key elements within single-stranded RNA, the formation of double-stranded RNA duplexes can also provide a substrate for the generation of endogenous siRNAs (endo-siRNAs) in Drosophila and mouse oocytes. The annealing of complementary sequences, such as antisense or repetitive regions between transcripts, forms an RNA duplex that may be processed by Dicer-2 into endo-siRNAs. Also, long ncRNAs that form extended intramolecular hairpins may be processed into siRNAs, as compellingly illustrated by the esi-1 and esi-2 transcripts. Endo-siRNAs generated from these transcripts seem particularly useful in suppressing the spread of mobile transposon elements within the genome in the germline. However, the generation of endo-siRNAs from antisense transcripts or pseudogenes may also silence the expression of their functional counterparts via RISC effector complexes, acting as an important node that integrates various modes of long and short RNA regulation, as exemplified by Xist and Tsix (see above).
In epigenetic regulation
Epigenetic modifications, including histone and DNA methylation, histone acetylation and sumoylation, affect many aspects of chromosomal biology, primarily including regulation of large numbers of genes by remodeling broad chromatin domains. While it has been known for some time that RNA is an integral component of chromatin, it is only recently that we are beginning to appreciate the means by which RNA is involved in pathways of chromatin modification. For example, Oplr16 epigenetically induces the activation of stem cell core factors by coordinating intrachromosomal looping and recruitment of DNA demethylase TET2.
In Drosophila, long ncRNAs induce the expression of the homeotic gene Ubx by recruiting and directing the chromatin-modifying functions of the trithorax protein Ash1 to Hox regulatory elements. Similar models have been proposed in mammals, where strong epigenetic mechanisms are thought to underlie the embryonic expression profiles of the Hox genes that persist throughout human development. Indeed, the human Hox genes are associated with hundreds of ncRNAs that are sequentially expressed along both the spatial and temporal axes of human development and define chromatin domains of differential histone methylation and RNA polymerase accessibility. One ncRNA, termed HOTAIR, which originates from the HOXC locus, represses transcription across 40 kb of the HOXD locus by altering the chromatin trimethylation state. HOTAIR is thought to achieve this by directing the action of Polycomb chromatin remodeling complexes in trans to govern the cells' epigenetic state and subsequent gene expression. Components of the Polycomb complex, including Suz12, EZH2 and EED, contain RNA-binding domains that may potentially bind HOTAIR and probably other similar ncRNAs. This example nicely illustrates a broader theme whereby ncRNAs recruit the function of a generic suite of chromatin-modifying proteins to specific genomic loci, underscoring the complexity of recently published genomic maps. Indeed, the prevalence of long ncRNAs associated with protein-coding genes may contribute to localised patterns of chromatin modifications that regulate gene expression during development. For example, the majority of protein-coding genes have antisense partners, including many tumour suppressor genes that are frequently silenced by epigenetic mechanisms in cancer. A recent study observed an inverse expression profile of the p15 gene and an antisense ncRNA in leukaemia. A detailed analysis showed that the p15 antisense ncRNA (CDKN2BAS) was able to induce changes to the heterochromatin and DNA methylation status of p15 by an unknown mechanism, thereby regulating p15 expression. Therefore, misexpression of the associated antisense ncRNA may silence the tumour suppressor gene, contributing towards cancer.
Imprinting
Many emergent themes of ncRNA-directed chromatin modification were first apparent within the phenomenon of imprinting, whereby only one allele of a gene is expressed from either the maternal or the paternal chromosome. In general, imprinted genes are clustered together on chromosomes, suggesting the imprinting mechanism acts upon local chromosome domains rather than individual genes. These clusters are also often associated with long ncRNAs whose expression is correlated with the repression of the linked protein-coding gene on the same allele. Indeed, detailed analysis has revealed a crucial role for the ncRNAs Kcnq1ot1 and Igf2r/Air in directing imprinting.
Almost all the genes at the Kcnq1 locus are maternally expressed, except the paternally expressed antisense ncRNA Kcnq1ot1. Transgenic mice with truncated Kcnq1ot1 fail to silence the adjacent genes, suggesting that Kcnq1ot1 is crucial to the imprinting of genes on the paternal chromosome. It appears that Kcnq1ot1 is able to direct the trimethylation of lysines 9 (H3K9me3) and 27 (H3K27me3) of histone 3 at an imprinting centre that overlaps the Kcnq1ot1 promoter and actually resides within a Kcnq1 sense exon. Similar to HOTAIR (see above), Eed-Ezh2 Polycomb complexes are recruited to the paternal chromosome of the Kcnq1 locus, possibly by Kcnq1ot1, where they may mediate gene silencing through repressive histone methylation. A differentially methylated imprinting centre also overlaps the promoter of a long antisense ncRNA, Air, that is responsible for the silencing of neighbouring genes at the Igf2r locus on the paternal chromosome. The presence of allele-specific histone methylation at the Igf2r locus suggests that Air also mediates silencing via chromatin modification.
Xist and X-chromosome inactivation
The inactivation of an X-chromosome in female placental mammals is directed by one of the earliest and best characterized long ncRNAs, Xist. The expression of Xist from the future inactive X-chromosome, and its subsequent coating of the inactive X-chromosome, occurs during early embryonic stem cell differentiation. Xist expression is followed by irreversible layers of chromatin modifications that include the loss of the histone (H3K9) acetylation and H3K4 methylation that are associated with active chromatin, and the induction of repressive chromatin modifications including H4 hypoacetylation, H3K27 trimethylation, H3K9 hypermethylation and H4K20 monomethylation as well as H2AK119 monoubiquitylation. These modifications coincide with the transcriptional silencing of the X-linked genes. Xist RNA also localises the histone variant macroH2A to the inactive X-chromosome. There are additional ncRNAs present at the Xist locus, including an antisense transcript, Tsix, which is expressed from the future active chromosome and is able to repress Xist expression through the generation of endogenous siRNA. Together these ncRNAs ensure that only one X-chromosome is active in female mammals.
Telomeric non-coding RNAs
Telomeres form the terminal region of mammalian chromosomes, are essential for chromosome stability and aging, and play central roles in diseases such as cancer. Telomeres had long been considered transcriptionally inert DNA-protein complexes until it was shown in the late 2000s that telomeric repeats may be transcribed as telomeric RNAs (TelRNAs) or telomeric repeat-containing RNAs. These ncRNAs are heterogeneous in length, are transcribed from several sub-telomeric loci and physically localise to telomeres. Their association with chromatin suggests an involvement in regulating telomere-specific heterochromatin modifications, and their transcription is repressed by SMG proteins that protect chromosome ends from telomere loss. In addition, TelRNAs block telomerase activity in vitro and may therefore regulate telomerase activity. Although still at an early stage, these studies suggest an involvement of telomeric ncRNAs in various aspects of telomere biology.
In regulation of DNA replication timing and chromosome stability
Asynchronously replicating autosomal RNAs (ASARs) are very long (~200 kb) non-coding RNAs that are non-spliced, non-polyadenylated, and are required for normal DNA replication timing and chromosome stability. Deletion of any one of the genetic loci containing ASAR6, ASAR15, or ASAR6-141 results in the same phenotype of delayed replication timing and delayed mitotic condensation (DRT/DMC) of the entire chromosome. DRT/DMC results in chromosomal segregation errors that lead to an increased frequency of secondary rearrangements and an unstable chromosome. Similar to Xist, ASARs show random monoallelic expression and exist in asynchronous DNA replication domains. Although the mechanism of ASAR function is still under investigation, it is hypothesized that they work via similar mechanisms as the Xist lncRNA, but on smaller autosomal domains, resulting in allele-specific changes in gene expression.
Incorrect repair of DNA double-strand breaks (DSBs), leading to chromosomal rearrangements, is one of the primary causes of oncogenesis. A number of lncRNAs are crucial at different stages of the main DSB repair pathways in eukaryotic cells: non-homologous end joining (NHEJ) and homology-directed repair (HDR). Gene mutations or variation in the expression levels of such RNAs can lead to local DNA repair defects, increasing the chromosome aberration frequency. Moreover, it has been demonstrated that some RNAs can stimulate long-range chromosomal rearrangements.
In aging and disease
The discovery that long ncRNAs function in various aspects of cell biology has led to research on their role in disease. Tens of thousands of lncRNAs are potentially associated with diseases based on multi-omics evidence. A handful of studies have implicated long ncRNAs in a variety of disease states and support their involvement in neurological disease and cancer.
The first published report of an alteration in lncRNA abundance in aging and human neurological disease was provided by Lukiw et al. in a study using short post-mortem interval tissues from patients with Alzheimer's disease and non-Alzheimer's dementia (NAD); this early work was based on the prior identification, by Watson and Sutcliffe in 1987, of a primate brain-specific cytoplasmic transcript of the Alu repeat family known as BC200 (brain, cytoplasmic, 200 nucleotide).
While many association studies have identified unusual expression of long ncRNAs in disease states, there is little understanding of their role in causing disease. Expression analyses that compare tumor cells and normal cells have revealed changes in the expression of ncRNAs in several forms of cancer. For example, in prostate tumours, PCGEM1 (one of two overexpressed ncRNAs) is correlated with increased proliferation and colony formation suggesting an involvement in regulating cell growth. PRNCR1 was found to promote tumor growth in several malignancies like prostate cancer, breast cancer, non-small cell lung cancer, oral squamous cell carcinoma and colorectal cancer. MALAT1 (also known as NEAT2) was originally identified as an abundantly expressed ncRNA that is upregulated during metastasis of early-stage non-small cell lung cancer and its overexpression is an early prognostic marker for poor patient survival rates. LncRNAs such as HEAT2 or KCNQ1OT1 have been shown to be regulated in the blood of patients with cardiovascular diseases such as heart failure or coronary artery disease and, moreover, to predict cardiovascular disease events. More recently, the highly conserved mouse homologue of MALAT1 was found to be highly expressed in hepatocellular carcinoma. Intronic antisense ncRNAs with expression correlated to the degree of tumor differentiation in prostate cancer samples have also been reported. Despite a number of long ncRNAs having aberrant expression in cancer, their function and potential role in tumourigenesis is relatively unknown. For example, the ncRNAs HIS-1 and BIC have been implicated in cancer development and growth control, but their function in normal cells is unknown. In addition to cancer, ncRNAs also exhibit aberrant expression in other disease states. Overexpression of PRINS is associated with psoriasis susceptibility, with PRINS expression being elevated in the uninvolved epidermis of psoriatic patients compared with both psoriatic lesions and healthy epidermis.
Genome-wide profiling revealed that many transcribed non-coding ultraconserved regions exhibit distinct profiles in various human cancer states. An analysis of chronic lymphocytic leukaemia, colorectal carcinoma and hepatocellular carcinoma found that all three cancers exhibited aberrant expression profiles for ultraconserved ncRNAs relative to normal cells. Further analysis of one ultraconserved ncRNA suggested it behaved like an oncogene by mitigating apoptosis and subsequently expanding the number of malignant cells in colorectal cancers. Many of these transcribed ultraconserved sites that exhibit distinct signatures in cancer are found at fragile sites and genomic regions associated with cancer. It seems likely that the aberrant expression of these ultraconserved ncRNAs within malignant processes results from important functions they fulfil in normal human development.
Recently, a number of association studies examining single nucleotide polymorphisms (SNPs) associated with disease states have been mapped to long ncRNAs. For example, SNPs that identified a susceptibility locus for myocardial infarction mapped to a long ncRNA, MIAT (myocardial infarction associated transcript). Likewise, genome-wide association studies identified a region associated with coronary artery disease that encompassed a long ncRNA, ANRIL. ANRIL is expressed in tissues and cell types affected by atherosclerosis and its altered expression is associated with a high-risk haplotype for coronary artery disease. Lately there has been increasing evidence on the role of non-coding RNAs in the development and in the categorization of heart failure.
The complexity of the transcriptome, and our evolving understanding of its structure may inform a reinterpretation of the functional basis for many natural polymorphisms associated with disease states. Many SNPs associated with certain disease conditions are found within non-coding regions and the complex networks of non-coding transcription within these regions make it particularly difficult to elucidate the functional effects of polymorphisms. For example, a SNP both within the truncated form of ZFAT and the promoter of an antisense transcript increases the expression of ZFAT not through increasing the mRNA stability, but rather by repressing the expression of the antisense transcript.
The ability of long ncRNAs to regulate associated protein-coding genes may contribute to disease if misexpression of a long ncRNA deregulates a protein-coding gene with clinical significance. In a similar manner, an antisense long ncRNA that regulates the expression of the sense BACE1 gene, a crucial enzyme in Alzheimer's disease etiology, exhibits elevated expression in several regions of the brain in individuals with Alzheimer's disease. Alteration of the expression of ncRNAs may also mediate changes at an epigenetic level to affect gene expression and contribute to disease aetiology. For example, the induction of an antisense transcript by a genetic mutation led to DNA methylation and silencing of sense genes, causing β-thalassemia in a patient.
Alongside their role in mediating pathological processes, long noncoding RNAs play a role in the immune response to vaccination, as identified for both the influenza vaccine and the yellow fever vaccine.
Structure
It took over two decades after the discovery of the first human long non-coding transcripts for the functional significance of lncRNA structures to be fully recognized. Early structural studies led to the proposal of several hypotheses for classifying lncRNA architectures. One hypothesis suggests that lncRNAs may feature a compact tertiary structure, similar to ribozymes like the ribosome or self-splicing introns. Another possibility is that lncRNAs could have structured protein-binding sites arranged in a decentralized scaffold, lacking a compact core. A third hypothesis posits that lncRNAs might exhibit a largely unstructured architecture, with loosely organized protein-binding domains interspersed with long regions of disordered single-stranded RNA.
Studying the tertiary structure of lncRNAs by conventional methods such as X-ray crystallography, cryo-EM and nuclear magnetic resonance (NMR) is unfortunately still hampered by their size and conformational dynamics, and by the fact that we still know too little about their mechanisms to reconstruct stable and functionally active lncRNA-ribonucleoprotein complexes. However, some pioneering studies have shown that lncRNAs can already be studied by low-resolution single-particle and in-solution methods, such as atomic force microscopy (AFM) and small-angle X-ray scattering (SAXS), in some cases even in complexes with small-molecule modulators.
For instance, lncRNA MEG3 was shown to regulate the transcription factor p53 via its compact structured core. Moreover, lncRNA Braveheart (Bvht) was shown to have a well-defined, albeit flexible, 3D structure that is remodeled upon binding CNBP (Cellular Nucleic-acid Binding Protein), which recognizes distal domains in the RNA. Finally, Xist, a master regulator of X chromosome inactivation, was shown to specifically bind a small-molecule compound that alters the conformation of the Xist RepA motif and displaces two known interacting protein factors (PRC2 and SPEN) from the RNA. By this mechanism of action, the compound abrogates the initiation of X-chromosome inactivation.
See also
List of long non-coding RNA databases
NONCODE
Pinc
Sphinx (gene)
VIS1
ZNRD1-AS1
References
RNA
Non-coding RNA
Biotechnology | Long non-coding RNA | [
"Biology"
] | 7,769 | [
"nan",
"Biotechnology"
] |
15,044,544 | https://en.wikipedia.org/wiki/Pyrroline | Pyrrolines, also known under the name dihydropyrroles, are three different heterocyclic organic chemical compounds that differ in the position of the double bond. Pyrrolines are formally derived from the aromatic compound pyrrole by hydrogenation. 1-Pyrroline is a cyclic imine, whereas 2-pyrroline and 3-pyrroline are cyclic amines.
Substituted pyrrolines
Notable examples of pyrrolines containing various substituents include:
2-Acetyl-1-pyrroline, an aroma compound with a white bread-like smell
Thienamycin, a beta-lactam antibiotic
MTSL, a chemical used for certain NMR experiments
Pyrrolysine, an unusual proteinogenic amino acid
1-Pyrroline-5-carboxylic acid, a biosynthetic metabolite
Porphyrin, consisting of two alternating pairs of pyrrole and pyrroline connected via methine (=CH-) bridges
N-substituted pyrrolines can be generated by ring-closing metathesis.
See also
Pyrrole, the aromatic analog with two double bonds
Pyrrolidine, the fully saturated analog without double bonds
References
External links
Pyrroline, 1-pyrroline, 2-pyrroline, and 3-pyrroline at EMBL-EBI
Advanced glycation end-products | Pyrroline | [
"Chemistry",
"Biology"
] | 305 | [
"Senescence",
"Carbohydrates",
"Advanced glycation end-products",
"Biomolecules"
] |
15,054,348 | https://en.wikipedia.org/wiki/Dual-phase%20steel | Dual-phase steel (DP steel) is a high-strength steel that has a ferritic–martensitic microstructure.
DP steels are produced from low- or medium-carbon steels that are quenched from a temperature above A1 but below A3, as determined from the continuous cooling transformation diagram.
This results in a microstructure consisting of a soft ferrite matrix containing islands of martensite as the secondary phase (martensite increases the tensile strength).
Therefore, the overall behaviour of DP steels is governed by the martensite volume fraction, its morphology (size, aspect ratio, interconnectivity, etc.), the grain size and the carbon content.
For achieving these microstructures, DP steels typically contain 0.06–0.15 wt.% C and 1.5-3% Mn (the former strengthens the martensite, and the latter causes solid solution strengthening in ferrite, while both stabilize the austenite), Cr & Mo (to retard pearlite or bainite formation), Si (to promote ferrite transformation), V and Nb (for precipitation strengthening and microstructure refinement).
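As a rough first-order illustration (a textbook rule-of-mixtures estimate, not a formula from this article), the tensile strength can be related to the martensite volume fraction $V_m$:

$$\sigma_{\mathrm{DP}} \approx V_m\,\sigma_m + (1 - V_m)\,\sigma_f$$

With illustrative assumed values $V_m = 0.2$, $\sigma_m = 2000\ \mathrm{MPa}$ for martensite and $\sigma_f = 300\ \mathrm{MPa}$ for ferrite, this gives $\sigma_{\mathrm{DP}} \approx 0.2 \times 2000 + 0.8 \times 300 = 640\ \mathrm{MPa}$, showing how a modest martensite fraction raises the strength well above that of the ferrite matrix.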
The desire to produce high-strength steels with formability greater than that of microalloyed steels led to the development of DP steels in the 1970s.
DP steels have high ultimate tensile strength (UTS, enabled by the martensite) combined with low initial yielding stress (provided by the ferrite phase), high early-stage strain hardening and macroscopically homogeneous plastic flow (enabled through the absence of Lüders effects).
These features render DP steels ideal materials for automotive-related sheet forming operations.
The steel melt is produced in an oxygen top blowing process in the converter, and undergoes an alloy treatment in the secondary metallurgy phase. The product is aluminium-killed steel, with high tensile strength achieved by the composition with manganese, chromium and silicon.
Their advantages are as follows:
Low yield strength
Low yield-to-tensile-strength ratio (yield strength / tensile strength ≈ 0.5)
High initial strain hardening rates
Good uniform elongation
A high strain rate sensitivity (the faster it is crushed the more energy it absorbs)
Good fatigue resistance
Due to these properties DP steels are often used for automotive body panels, wheels, and bumpers.
References
Notes
Bibliography
.
.
.
.
.
Steels
Vehicle technology | Dual-phase steel | [
"Engineering"
] | 514 | [
"Steels",
"Vehicle technology",
"Alloys",
"Mechanical engineering by discipline"
] |
15,054,768 | https://en.wikipedia.org/wiki/Degasperis%E2%80%93Procesi%20equation | In mathematical physics, the Degasperis–Procesi equation
is one of only two exactly solvable equations in the following family of third-order, non-linear, dispersive PDEs:
where and b are real parameters (b=3 for the Degasperis–Procesi equation). It was discovered by Antonio Degasperis and Michela Procesi in a search for integrable equations similar in form to the Camassa–Holm equation, which is the other integrable equation in this family (corresponding to b=2); that those two equations are the only integrable cases has been verified using a variety of different integrability tests. Although discovered solely because of its mathematical properties, the Degasperis–Procesi equation (with ) has later been found to play a similar role in water wave theory as the Camassa–Holm equation.
Soliton solutions
Among the solutions of the Degasperis–Procesi equation (in the special case $\kappa = 0$) are the so-called multipeakon solutions, which are functions of the form

$$u(x,t) = \sum_{i=1}^n m_i(t)\, e^{-|x - x_i(t)|}$$

where the functions $m_i$ and $x_i$ satisfy

$$\dot{x}_i = \sum_{j=1}^n m_j e^{-|x_i - x_j|}, \qquad \dot{m}_i = 2 m_i \sum_{j=1}^n m_j \operatorname{sgn}(x_i - x_j)\, e^{-|x_i - x_j|}.$$

These ODEs can be solved explicitly in terms of elementary functions, using inverse spectral methods.
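For illustration, the ODE system above can also be integrated numerically. The sketch below does so for n = 2 with SciPy; the initial data are chosen arbitrarily, and this is only a numerical illustration, not a substitute for the closed-form inverse spectral solution.

```python
# A minimal numerical sketch (not from the original article): integrating the
# n = 2 peakon ODE system above with a standard ODE solver.
import numpy as np
from scipy.integrate import solve_ivp

def peakon_rhs(t, y, n=2):
    x, m = y[:n], y[n:]
    dx, dm = np.empty(n), np.empty(n)
    for i in range(n):
        e = np.exp(-np.abs(x[i] - x))
        dx[i] = np.sum(m * e)                                   # x_i' equation
        dm[i] = 2.0 * m[i] * np.sum(m * np.sign(x[i] - x) * e)  # m_i' equation
    return np.concatenate([dx, dm])

y0 = np.array([-5.0, 0.0, 2.0, 1.0])  # x1, x2, m1, m2: fast peakon behind slow
sol = solve_ivp(peakon_rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-9)

def u(xs, x, m):
    """Reconstruct u(x, t) = sum_i m_i * exp(-|x - x_i|)."""
    return sum(mi * np.exp(-np.abs(xs - xi)) for xi, mi in zip(x, m))

print(sol.y[:2, -1])  # peakon positions at t = 10
print(u(np.linspace(-5, 30, 8), sol.y[:2, -1], sol.y[2:, -1]))
```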
When $\kappa > 0$ the soliton solutions of the Degasperis–Procesi equation are smooth; they converge to peakons in the limit as $\kappa$ tends to zero.
Discontinuous solutions
The Degasperis–Procesi equation (with $\kappa = 0$) is formally equivalent to the (nonlocal) hyperbolic conservation law

$$\partial_t u + \partial_x \left[ \frac{u^2}{2} + \frac{G}{2} * \frac{3u^2}{2} \right] = 0,$$

where $G(x) = e^{-|x|}$, and where the star denotes convolution with respect to x.
In this formulation, it admits weak solutions with a very low degree of regularity, even discontinuous ones (shock waves). In contrast, the corresponding formulation of the Camassa–Holm equation contains a convolution involving both $u^2$ and $u_x^2$, which only makes sense if u lies in the Sobolev space $H^1 = W^{1,2}$ with respect to x. By the Sobolev embedding theorem, this means in particular that the weak solutions of the Camassa–Holm equation must be continuous with respect to x.
Notes
References
Further reading
Mathematical physics
Solitons
Partial differential equations
Equations of fluid dynamics
Eponymous equations of mathematics
Eponymous equations of physics | Degasperis–Procesi equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 463 | [
"Equations of fluid dynamics",
"Equations of physics",
"Applied mathematics",
"Theoretical physics",
"Eponymous equations of physics",
"Mathematical physics",
"Fluid dynamics"
] |
7,667,706 | https://en.wikipedia.org/wiki/Active%20suspension | An active suspension is a type of automotive suspension that uses an onboard control system to control the vertical movement of the vehicle's wheels and axles relative to the chassis or vehicle frame, rather than the conventional passive suspension that relies solely on large springs to maintain static support and dampen the vertical wheel movements caused by the road surface. Active suspensions are divided into two classes: true active suspensions, and adaptive or semi-active suspensions. While adaptive suspensions only vary shock absorber firmness to match changing road or dynamic conditions, active suspensions use some type of actuator to raise and lower the chassis independently at each wheel.
These technologies allow car manufacturers to achieve a greater degree of ride quality and car handling by keeping the chassis parallel to the road when turning corners, preventing unwanted contacts between the vehicle frame and the ground (especially when going over a depression), and allowing overall better traction and steering control. An onboard computer detects body movement from sensors throughout the vehicle and, using that data, controls the action of the active and semi-active suspensions. The system virtually eliminates body roll and pitch variation in many driving situations including cornering, accelerating and braking. When used on commercial vehicles such as buses, active suspension can also be used to temporarily lower the vehicle's floor, thus making it easier for passengers to board and exit the vehicle.
Principle
Skyhook theory holds that the ideal suspension would let the vehicle maintain a stable posture, unaffected by weight transfer or road surface irregularities, as if it were suspended from an imaginary hook in the sky that stays at a constant altitude.
Since an actual skyhook is obviously impractical, real active suspension systems are based on actuator operations. The imaginary line (of zero vertical acceleration) is calculated based on the value provided by an acceleration sensor installed on the body of the vehicle (see Figure 3). The dynamic elements comprise only the linear spring and the linear damper; therefore, no complicated calculations are necessary.
A vehicle contacts the ground through the spring and damper in a normal spring damper suspension, as in Figure 1. To achieve the same level of stability as the Skyhook theory, the vehicle must contact the ground through the spring, and the imaginary line with the damper, as in Figure 2.
Theoretically, if the damping coefficient were infinite, the vehicle would be rigidly fixed to the imaginary line and would not shake at all.
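A minimal sketch of the classic two-state ("on-off") skyhook logic in a quarter-car model is shown below; all parameter values are invented for illustration, and production controllers are considerably more sophisticated.

```python
# Minimal sketch: on-off skyhook logic for a semi-active damper in a
# quarter-car model. Parameter values are illustrative assumptions only.

M_BODY, M_WHEEL = 400.0, 40.0    # sprung / unsprung mass (kg)
K_SPRING, K_TIRE = 20e3, 180e3   # suspension and tire stiffness (N/m)
C_SOFT, C_FIRM = 500.0, 3000.0   # selectable damping coefficients (N s/m)

def skyhook_damping(v_body, v_rel):
    """On-off skyhook: firm when the damper force opposes absolute body
    motion (v_body * v_rel > 0), soft otherwise.

    v_body : absolute vertical velocity of the body
    v_rel  : body velocity relative to the wheel (damper extension rate)
    """
    return C_FIRM if v_body * v_rel > 0.0 else C_SOFT

# Semi-implicit Euler simulation over a single road bump.
dt, T = 1e-3, 3.0
z_b = v_b = z_w = v_w = 0.0                # body and wheel states
for step in range(int(T / dt)):
    t = step * dt
    road = 0.05 if 0.5 < t < 0.7 else 0.0  # 5 cm bump in the road profile
    c = skyhook_damping(v_b, v_b - v_w)
    f_damper = c * (v_b - v_w)
    f_spring = K_SPRING * (z_b - z_w)
    a_b = -(f_spring + f_damper) / M_BODY
    a_w = (f_spring + f_damper - K_TIRE * (z_w - road)) / M_WHEEL
    v_b += a_b * dt; z_b += v_b * dt
    v_w += a_w * dt; z_w += v_w * dt
print(f"final body displacement: {z_b:.4f} m")
```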
Active
Active suspensions, the first to be introduced, use separate actuators which can exert an independent force on the suspension to improve the riding characteristics. The drawbacks of this design are high cost, added complication and mass of the apparatus, and the need for frequent maintenance on some implementations. Maintenance can require specialised tools, and some problems can be difficult to diagnose.
Hydraulic actuation
Hydraulically actuated suspensions are controlled with the use of hydraulics. The first example appeared in 1954, with the hydropneumatic suspension developed by Paul Magès at Citroën. The hydraulic pressure is supplied by a high pressure radial piston hydraulic pump. Sensors continually monitor body movement and vehicle ride level, constantly supplying the hydraulic height correctors with new data. In a matter of a few milliseconds, the suspension generates counter forces to raise or lower the body. During driving maneuvers, the encased nitrogen compresses instantly, offering six times the compressibility of the steel springs used by vehicles up to this time.
In practice, the system has always incorporated the desirable self-levelling suspension and height adjustable suspension features, with the latter now tied to vehicle speed for improved aerodynamic performance, as the vehicle lowers itself at high speed.
This system performed remarkably well in straight ahead driving, including over uneven surfaces, but had little control over roll stiffness.
Millions of production vehicles have been built with variations on this system.
Electronic actuation of hydraulic suspension
Colin Chapman developed the original concept of computer management of hydraulic suspension in the 1980s to improve cornering in racing cars. Lotus developed and fitted a prototype electro-hydraulic active suspension system to a 1985 Excel, but never offered it for sale to the public, although many demonstration cars were built for other manufacturers.
Sensors continually monitor body movement and vehicle ride level, constantly supplying the computer with new data. As the computer receives and processes data, it operates the hydraulic servos, mounted beside each wheel. Almost instantly, the servo-regulated suspension generates counter forces to body lean, dive, and squat during driving maneuvers.
In 1990, Nissan installed a hydraulic supported MacPherson strut based setup, called Full-Active Suspension that was used in the Nissan Q45 and President. The system used a hydraulic oil pump, a hydraulic cylinder, an accumulator and damping valve, which connected two independent circuits for the front and rear strut assemblies. The system would then recover motion energy to balance the car continuously. The system was revised and is now called Hydraulic Body Motion Control System, installed on the Nissan Patrol and Infiniti QX80.
Williams Grand Prix Engineering prepared an active suspension, devised by designer-aerodynamicist Frank Dernie, for the team's Formula 1 cars in 1992, creating such successful cars that the Fédération Internationale de l'Automobile decided to ban the technology to decrease the gap between Williams F1 team and its competitors.
Computer Active Technology Suspension (CATS) co-ordinates the best possible balance between ride quality and handling by analysing road conditions and making up to 3,000 adjustments every second to the suspension settings via electronically controlled dampers.
The 1999 Mercedes-Benz CL-Class (C215) introduced Active Body Control, where high pressure hydraulic servos are controlled by electronic computing, and this feature is still available. Vehicles can be designed to actively lean into curves to improve occupant comfort.
Active anti-roll bar
An active anti-roll bar stiffens under command of the driver or the suspension electronic control unit (ECU) during hard cornering. The first production car with this technology was the Mitsubishi Mirage Cyborg in 1988.
Electromagnetic recuperative
In fully active electronically controlled production cars, the application of electric servos and motors married to electronic computing allows for flat cornering and instant reactions to road conditions.
The Bose Corporation has a proof of concept model. The founder of Bose, Amar Bose, had been working on exotic suspensions for many years while he was an MIT professor.
Electromagnetic active suspension uses linear electromagnetic motors attached to each wheel. It provides extremely fast response, and allows regeneration of power consumed, by using the motors as generators. This nearly surmounts the issues of slow response times and high power consumption of hydraulic systems. Electronically controlled active suspension system (ECASS) technology was patented by the University of Texas Center for Electromechanics in the 1990s and has been developed by L-3 Electronic Systems for use on military vehicles. The ECASS-equipped Humvee exceeded the performance specifications for all performance evaluations in terms of absorbed power to the vehicle operator, stability and handling.
Active Wheel
Michelin's Active Wheel from 2004 incorporates an in-wheel electrical suspension motor that controls torque distribution, traction, turning maneuvers, pitch, roll and suspension damping for that wheel, in addition to an in-wheel electric traction motor.
Audi introduced an active electromechanical suspension system in 2017. It drives each wheel individually and adapts to the prevailing road conditions. Each wheel has an electric motor which is powered by the 48-volt main electrical system. Additional components include gears, a rotary tube together with an internal titanium torsion bar, and a lever which exerts up to 1,100 Nm (811.3 lb-ft) on the suspension via a coupling rod. Thanks to the front camera, the car detects bumps in the road early on and predictively adjusts the active suspension. Even before the car reaches a bump in the road, the preview function developed by Audi transmits the right amount of travel to the actuators and actively controls the suspension. The computer-controlled motors can sense imperfections in the road and raise the suspension on the wheel that is about to pass over the undulation, thus aiding ride quality. While cornering, the system directs the motors on the outside to push up or pull down the suspension, resulting in a flatter ride and reduced body roll around corners, which in turn means more confident handling dynamics.
Adaptive and semi-active
Adaptive or semi-active systems can only change the viscous damping coefficient of the shock absorber, and do not add energy to the suspension system. While adaptive suspensions generally have a slow time response and a limited number of damping coefficient values, semi-active suspensions have time responses close to a few milliseconds and can provide a wide range of damping values. Therefore, adaptive suspensions usually only offer different riding modes (comfort, normal, sport...) corresponding to different damping coefficients, while semi-active suspensions modify the damping in real time, depending on the road conditions and the dynamics of the car. Though limited in their intervention (for example, the control force can never have a direction different from that of the current velocity of the suspension), semi-active suspensions are less expensive to design and consume far less energy. In recent times, research in semi-active suspensions has continued to advance with respect to their capabilities, narrowing the gap between semi-active and fully active suspension systems.
Solenoid/valve actuated
This type is the most economic and basic type of semi-active suspensions. They consist of a solenoid valve which alters the flow of the hydraulic medium inside the shock absorber, therefore changing the damping characteristics of the suspension setup. The solenoids are wired to the controlling computer, which sends them commands depending on the control algorithm (usually the so-called "Sky-Hook" technique).
This type of system is used in Cadillac's Computer Command Ride (CCR) suspension system. The first production car was the Toyota Soarer with semi-active Toyota Electronic Modulated Suspension, from 1983.
In 1985, Nissan introduced a shock absorber using a similar version, called "Super Sonic Suspension," adding an ultrasonic sensor that would provide information that a microcomputer would then interpret, combined with information from the steering, brakes, throttle, and vehicle speed sensor. The adjustment information signals would then modify the shock absorbers when a driver-controlled switch was placed in "Auto". The automatic adjustment could be limited if the switch was placed in "Soft," "Medium," or "Hard" settings. A modified version that didn't use the sonar module was also used, allowing the settings to be manually selected. This implementation is currently used industry-wide by a number of manufacturers, provided by Monroe Shock Absorbers called CVSAe, or Continuously Variable Semi-Active electronic.
In 2008, with the introduction of the Nissan GT-R, "DampTronic" was jointly developed by Nissan and Bilstein. DampTronic provides three selectable driver settings that can also interact with the Vehicle Dynamics Control technology to modify the transmission's shift points. The settings are labeled as Normal, Comfort, or R, and can be either set in Normal for automatic adjustment or the "R" setting for high-speed driving, while "Comfort" is for touring and a more compliant ride. The "R" mode enables the vehicle to utilize the yaw angle rate with a reduced steering angle for a crisper, more communicative steering, while the "Comfort" setting produces less vertical G-loading in comparison to the "Normal" or computer determined suspension setting.
Magnetorheological damper
Another method incorporates magnetorheological dampers, sold under the brand name MagneRide. It was initially developed by Delphi Corporation for GM and was standard, like many other new technologies, on the Cadillac STS (from model year 2002), and on some other GM models from 2003. This was an upgrade to the semi-active systems ("automatic road-sensing suspensions") used in upscale GM vehicles for decades. Together with faster modern computers, it allows changing the stiffness of all wheel suspensions independently. These dampers are finding increased usage in the US and are already licensed to some foreign brands, mostly in more expensive vehicles.
This system was in development for 25 years. The damper fluid contains metallic particles. Through the onboard computer, the dampers' compliance characteristics are controlled by an electromagnet. Essentially, increasing the current flow into the damper magnetic circuit increases the circuit magnetic flux. This in turn causes the metal particles to change their alignment, which increases fluid viscosity, thereby raising the compression/rebound rates, while a decrease softens the effect of the dampers by aligning the particles in the opposite direction. If the metal particles are imagined as dinner plates, viscosity is minimised while they are aligned edge-on to the flow; rotated 90 degrees so that they lie flat, they make the fluid much more viscous. It is the magnetic field produced by the electromagnet that changes the alignment of the metal particles. Information from wheel sensors (about suspension extension), steering, acceleration sensors and other data is used to calculate the optimal stiffness at that point in time. The fast reaction of the system (milliseconds) allows, for instance, a softer passage of a single wheel over a bump in the road at a particular instant in time.
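A common idealisation of such a damper is a Bingham-plastic model, in which the coil current sets a controllable, friction-like yield force on top of a baseline viscous term. The sketch below uses that idealisation with invented constants and a linear current-to-yield-force map; actual MagneRide calibration data are proprietary.

```python
# Bingham-plastic sketch of a magnetorheological damper. All constants and
# the linear current-to-yield-force map are invented for illustration.
import math

C_VISCOUS = 800.0        # baseline viscous coefficient (N s/m), assumed
F_YIELD_PER_AMP = 400.0  # controllable yield-force gain (N/A), assumed
I_MAX = 5.0              # coil current limit (A), assumed

def mr_damper_force(velocity: float, current: float) -> float:
    """Damper force: viscous term plus current-dependent yield term."""
    if velocity == 0.0:
        return 0.0
    i = min(max(current, 0.0), I_MAX)
    return C_VISCOUS * velocity + F_YIELD_PER_AMP * i * math.copysign(1.0, velocity)

def current_for_target_force(f_magnitude: float, velocity: float) -> float:
    """Choose the coil current so |force| approximates f_magnitude.

    A semi-active damper can only dissipate energy, so the achievable force
    always opposes the relative velocity; only its magnitude is controllable.
    """
    needed = f_magnitude - C_VISCOUS * abs(velocity)
    return min(max(needed / F_YIELD_PER_AMP, 0.0), I_MAX)

v = 0.3                                  # damper extending at 0.3 m/s
i = current_for_target_force(1000.0, v)  # request ~1 kN of resistance
print(i, mr_damper_force(v, i))          # 1.9 1000.0
```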
Production vehicles
By calendar year:
1954: Citroën Traction Avant 15-6H, self-leveling Citroën hydropneumatic suspension on rear wheels.
1955: Citroën DS, self-leveling Citroën hydropneumatic suspension on all four wheels.
1957: Cadillac Eldorado Brougham: premiere of self-leveling GM air suspension
1967: Rolls-Royce Silver Shadow Partial load bearing hydropneumatic suspension on all four wheels. Front system deleted in 1969
1970: Citroën SM, self-leveling Citroën hydropneumatic suspension on all four wheels.
1970: Citroën GS, self-leveling Citroën hydropneumatic suspension on all four wheels.
1974: Citroën CX, self-leveling Citroën hydropneumatic suspension on all four wheels.
1975: Mercedes Benz 450 SEL 6.9 Hydropneumatic suspension on all four wheels.
1982: Citroën BX, self-leveling Citroën hydropneumatic suspension on all four wheels.
1979: Mercedes Benz W126 Hydropneumatic suspension on all four wheels as an option on the LWB v8 models
1983: Toyota Soarer: world first Electronically controlled (TEMS) that used a shock absorber control actuator (spring constant, variable attenuation force) installed
1985 Mercedes Benz 190E 2.3-16 Partial load bearing hydropneumatic suspension on all four wheels as an option on the 16v model. Standard on the Evo 1 and Evo 2 models
1985: Nissan introduced ultrasound semi-active suspension sensing "Super Sonic Suspension" optionally on the Cedric, Gloria and Nissan Laurel that integrated actuators inside the MacPherson struts on the front and rear suspension.
1986: Jaguar XJ40, self-leveling suspension.
1986: Mercedes Benz W126 Hydropneumatic suspension on all four wheels with electronically controlled adaptive damping as an option on the LWB v8 models
1987: Mitsubishi Galant (sixth generation) - features Active Controlled Suspension (Dynamic ECS). The system enables a comfortable ride and handling stability by automatically adjusting the vehicle height and damping force.
1989: Citroën XM - self-levelling, semi-active Hydractive on all four wheels with automatically adjusted spring rates and dampeners.
1989: Mercedes Benz R129 Partial load bearing hydropneumatic suspension with automatically adjusted spring rates and dampers as an option (ADS)
1990: Infiniti Q45 and Nissan President "Full-Active Suspension (FAS)", active suspension system
1990: Toyota Celica (ST183) Active Sports with fully Hydropneumatic active suspension and 4WS GT-R with Toyota Electronic Modulated Suspension (TEMS) semi-active suspension
1992: Citroën Xantia VSX - self-levelling, semi-active Hydractive 2 on all four wheels, with automatically adjusted spring rates and dampeners.
1993: Cadillac, several models with RSS road sensing suspension. RSS was available in both standard and CVRSS (continuously variable road sensing suspension) systems. It monitored damping rates of the shock absorbers every 15 milliseconds, selecting between two settings.
1994: Toyota Celsior introduced first Skyhook air suspension
1994: Citroën Xantia Activa - self-levelling, fully active Hydractive on all four wheels with hydraulic anti-roll bars and automatically adjusted spring rates and dampeners.
1998: Land Rover Discovery series 2 - Active Cornering Enhancement; an electronically controlled hydraulic anti-roll bar system was fitted to some versions, which reduced cornering roll.
1999: Mercedes Benz C215 Self leveling fully active hydraulic Active body control. Available on the S, CL and SL models
2000 Citroen C5 Hydractive 3 or Hydractive 3+
2002: Cadillac Seville STS, first MagneRide
2004: Volvo S60 R and V70 R (Four-C, a short name for "Continuously Controlled Chassis Concept", semi-active)
2006 Citroen C6 - Hydractive 3+
2010: Alfa Romeo MiTo Cloverleaf (DNA System based on Maserati's Skyhook technology)
2012: Jaguar XF Sportbrake, self-leveling air suspension.
2013: Mercedes Benz W222: Optional Magic body control. Self leveling fully active hydraulic system with road surface scanning electronics
2013: Volkswagen Mk7 Golf R User-Selectable Electronically Controlled Shock Dampening (Dynamic Chassis Control (DCC))
2019: Toyota Avalon Touring model (Adaptive Variable Suspension (AVS))
2025: Nio ET9 (SkyRide fully active suspension)
See also
Toyota Active Control Suspension
Hydropneumatic suspension
Active Body control
References
Advanced driver assistance systems
Automotive suspension technologies
Mechanical power control | Active suspension | [
"Physics"
] | 3,756 | [
"Mechanics",
"Mechanical power control"
] |
7,671,120 | https://en.wikipedia.org/wiki/Theta%20Orionis | The Bayer designation θ Orionis (Theta Orionis) is shared by several astronomical objects, located near RA DEC :
θ1 Orionis (41 Orionis), the Trapezium Cluster, an open star cluster, part of the Orion OB1 association (subgroup 1d)
θ1 Orionis A (41 Orionis A, HD 37020, V1016 Orionis), a trinary star system
θ1 Orionis B (41 Orionis B, HD 37021), a quintuple star system
θ1 Orionis B West (COUP 766, MAX 97), an astronomical X-ray source
θ1 Orionis B East (COUP 778, MAX 101), an astronomical X-ray source
θ1 Orionis C (41 Orionis C, HD 37022), a binary star system
θ1 Orionis D (41 Orionis D, HD 37023), a B0.5Vp variable star
θ1 Orionis E (COUP 732), a spectroscopic binary star system
θ1 Orionis F, a B8 variable star at 11th magnitude with a companion
θ1 Orionis G (COUP 826, MAX 116), a young stellar object
θ1 Orionis H (COUP 746, MAX 87), a young stellar object
θ2 Orionis (43 Orionis)
θ2 Orionis A (43 Orionis, HD 37041), a spectroscopic trinary
θ2 Orionis B (HD 37042), a B1V variable star
θ2 Orionis C (HD 37062), a binary star system
The various components are spread over several arc-minutes in and near the Orion Nebula.
Orionis, Theta
Orion (constellation) | Theta Orionis | [
"Astronomy"
] | 344 | [
"Constellations",
"Orion (constellation)"
] |
7,672,374 | https://en.wikipedia.org/wiki/List%20of%20protein%20structure%20prediction%20software | This list of protein structure prediction software summarizes notable used software tools in protein structure prediction, including homology modeling, protein threading, ab initio methods, secondary structure prediction, and transmembrane helix and signal peptide prediction.
Software list
Below is a list which separates programs according to the method used for structure prediction.
Homology modeling
Threading and fold recognition
Ab initio structure prediction
Secondary structure prediction
A detailed list of programs can be found at List of protein secondary structure prediction programs
See also
List of protein secondary structure prediction programs
Comparison of nucleic acid simulation software
List of software for molecular mechanics modeling
Molecular design software
Protein design
External links
bio.tools, finding more tools
References
Lists of software
Protein methods
Protein structure
Structural bioinformatics software
Proteomics | List of protein structure prediction software | [
"Chemistry",
"Technology",
"Biology"
] | 155 | [
"Biochemistry methods",
"Lists of software",
"Computing-related lists",
"Protein methods",
"Protein biochemistry",
"Structural biology",
"Protein structure"
] |
13,950,019 | https://en.wikipedia.org/wiki/Civil%20calendar | The civil calendar is the calendar, or possibly one of several calendars, used within a country for civil, official, or administrative purposes. The civil calendar is almost always used for general purposes by people and private organizations.
The most widespread civil calendar and de facto international standard is the Gregorian calendar. Although that calendar was first declared by Pope Gregory XIII to be used in Catholic countries in 1582, it has since been adopted, as a matter of convenience, by many secular and non-Christian countries although some countries use other calendars.
Civil calendars worldwide
168 of the world's countries use the Gregorian calendar as their sole civil calendar as of 2021. Most non-Christian countries have adopted it as a result of colonization, with some cases of voluntary adoption.
Four countries have not adopted the Gregorian calendar: Afghanistan and Iran (which use the Solar Hijri calendar), Ethiopia (the Ethiopian calendar), and Nepal (Vikram Samvat and Nepal Sambat).
Three countries use a modified version of the Gregorian calendar (with eras different from Anno Domini): Japan (Japanese calendar), Taiwan (Minguo calendar), and Thailand (Thai solar calendar). In the former two countries, the Anno Domini era is also in use. South Korea previously used the Korean calendar from 1945 to 1961. North Korea used the North Korean calendar from 1997 but ceased its use in October 2024.
Eighteen countries use another calendar alongside the Gregorian calendar:
Algeria, Iraq, Jordan, Libya, Mauritania, Morocco, Oman, Pakistan, Saudi Arabia, Somalia, Tunisia, United Arab Emirates, and Yemen (Lunar Hijri calendar),
Bangladesh (Bengali calendar),
Egypt (Lunar Hijri calendar and Coptic calendar),
India (Indian national calendar),
Israel (Hebrew calendar),
Myanmar (Burmese calendar).
See also
List of calendars
Chinese calendar
Persian calendar
References
Calendars | Civil calendar | [
"Physics"
] | 384 | [
"Spacetime",
"Calendars",
"Physical quantities",
"Time"
] |
13,952,850 | https://en.wikipedia.org/wiki/Free-piston%20engine | A free-piston engine is a linear, 'crankless' internal combustion engine, in which the piston motion is not controlled by a crankshaft but determined by the interaction of forces from the combustion chamber gases, a rebound device (e.g., a piston in a closed cylinder) and a load device (e.g. a gas compressor or a linear alternator).
The purpose of all such piston engines is to generate power. In the free-piston engine, this power is not delivered to a crankshaft but is instead extracted through either exhaust gas pressure driving a turbine, through driving a linear load such as an air compressor for pneumatic power, or by incorporating a linear alternator directly into the pistons to produce electrical power.
The basic configuration of free-piston engines is commonly known as single piston, dual piston or opposed pistons, referring to the number of combustion cylinders. The free-piston engine is usually restricted to the two-stroke operating principle, since a power stroke is required every fore-and-aft cycle. However, a split cycle four-stroke version has been patented, GB2480461 (A) published 2011-11-23.
First generation
The modern free-piston engine was proposed by R.P. Pescara and the original application was a single piston air compressor. Pescara set up the Bureau Technique Pescara to develop free-piston engines and Robert Huber was technical director of the Bureau from 1924 to 1962.
The engine concept was a topic of much interest in the period 1930–1960, and a number of commercially available units were developed. These first generation free-piston engines were without exception opposed piston engines, in which the two pistons were mechanically linked to ensure symmetric motion. The free-piston engines provided some advantages over conventional technology, including compactness and a vibration-free design.
Air compressors
The first successful application of the free-piston engine concept was as air compressors. In these engines, air compressor cylinders were coupled to the moving pistons, often in a multi-stage configuration. Some of these engines utilised the air remaining in the compressor cylinders to return the piston, thereby eliminating the need for a rebound device.
Free-piston air compressors were in use among others by the German Navy, and had the advantages of high efficiency, compactness and low noise and vibration.
Gas generators
After the success of the free-piston air compressor, a number of industrial research groups started the development of free-piston gas generators. In these engines there is no load device coupled to the engine itself, but the power is extracted from an exhaust turbine. The turbine's rotary motion can thus drive a pump, propeller, generator, or other device.
In this arrangement, the only load for the engine is supercharging the inlet air, albeit in theory some of this air could be diverted for use as a compressed-air source if desired. Such a modification would enable the free-piston engine, when used in conjunction with the aforementioned exhaust-driven turbine, to provide both motive power (from the output shaft of the turbine) in addition to compressed air on demand.
A number of free-piston gas generators were developed, and such units were in widespread use in large-scale applications such as stationary and marine powerplants. Attempts were made to use free-piston gas generators for vehicle propulsion (e.g. in gas turbine locomotives) but without success.
Modern applications
Modern applications of the free-piston engine concept include hydraulic engines, aimed for off-highway vehicles, and free-piston engine generators, aimed for use with hybrid electric vehicles.
Hydraulic
These engines are commonly of the single piston type, with the hydraulic cylinder acting as both load and rebound device using a hydraulic control system. This gives the unit high operational flexibility. Excellent part load performance has been reported.
Generators
Free-piston linear generators, which replace the heavy crankshaft with electrical coils in the piston and cylinder walls, are being investigated by multiple research groups for use in hybrid electric vehicles as range extenders. The first free-piston generator was patented in 1934. Examples include the Stelzer engine and the Free Piston Power Pack manufactured by Pempek Systems, based on a German patent. A single-piston free-piston linear generator was demonstrated in 2013 at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR).
These engines are mainly of the dual-piston type, giving a compact unit with a high power-to-weight ratio. A challenge with this design is to find an electric motor of sufficiently low weight. Control challenges in the form of high cycle-to-cycle variations have been reported for dual-piston engines.
In June 2014 Toyota announced a prototype Free Piston Engine Linear Generator (FPEG). As the piston is forced downward during its power stroke it passes through windings in the cylinder to generate a burst of three-phase AC electricity. The piston generates electricity on both strokes, reducing piston dead losses. The generator operates on a two-stroke cycle, using hydraulically activated exhaust poppet valves, gasoline direct injection and electronically operated valves. The engine is easily modified to operate under various fuels including hydrogen, natural gas, ethanol, gasoline and diesel. A two-cylinder FPEG is inherently balanced.
Toyota claims a thermal-efficiency rating of 42% in continuous use, greatly exceeding today's average of 25-30%. Toyota demonstrated a unit 24 inches long by 2.5 inches in diameter producing 15 hp (about 11 kW).
Features
The operational characteristics of free-piston engines differ from those of conventional, crankshaft engines. The main difference is due to the piston motion not being restricted by a crankshaft in the free-piston engine, leading to the potentially valuable feature of variable compression ratio. This does, however, also present a control challenge, since the position of the dead centres must be accurately controlled in order to ensure fuel ignition and efficient combustion, and to avoid excessive in-cylinder pressures or, worse, the piston hitting the cylinder head. The free-piston engine has a number of unique features, some give it potential advantages and some represent challenges that must be overcome for the free-piston engine to be a realistic alternative to conventional technology.
As the piston motion between the endpoints is not mechanically restricted by a crank mechanism, the free-piston engine has the valuable feature of variable compression ratio, which may provide extensive operation optimization, higher part load efficiency and possible multi-fuel operation. These are enhanced by variable fuel injection timing and valve timing through proper control methods.
Variable stroke length is achieved by a proper frequency control scheme such as PPM (Pulse Pause Modulation) control [1], in which piston motion is paused at bottom dead centre (BDC) using a controllable hydraulic cylinder as rebound device. The frequency can therefore be controlled by applying a pause between the time the piston reaches BDC and the release of compression energy for the next stroke.
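A minimal numeric sketch of this relationship, assuming an illustrative fixed stroke time (all values are hypothetical, for illustration only):

```python
# Illustrative sketch of Pulse Pause Modulation (PPM) frequency control.
# The stroke time is assumed fixed; output is regulated via the BDC pause.

def ppm_frequency(stroke_time_s: float, pause_time_s: float) -> float:
    """Cycle frequency when each cycle is a stroke pair plus a pause at BDC."""
    return 1.0 / (stroke_time_s + pause_time_s)

stroke_time = 0.040  # compression + expansion time per cycle, s (assumed)
for pause in (0.000, 0.010, 0.040):
    f = ppm_frequency(stroke_time, pause)
    print(f"pause {pause * 1000:4.0f} ms -> {f:5.1f} Hz")
```

Because the stroke itself runs at a nearly fixed speed, each cycle delivers roughly the same energy, so lengthening the pause lowers the mean power output without altering the combustion event.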
Since there are fewer moving parts, the frictional losses and manufacturing cost are reduced. The simple and compact design thus requires less maintenance and this increases lifetime.
The purely linear motion leads to very low side loads on the piston, hence lesser lubrication requirements for the piston.
The combustion process of the free-piston engine is well suited for Homogeneous Charge Compression Ignition (HCCI) mode, in which the premixed charge is compressed and self-ignited, resulting in very rapid combustion, along with lower requirements for accurate ignition timing control. Also, high efficiencies are obtained due to nearly constant volume combustion and the possibility to burn lean mixtures, which reduces gas temperatures and thereby some types of emissions.
By running multiple engines in parallel, vibrations due to balancing issues may be reduced, but this requires accurate control of engine speed. Another possibility is to apply counterweights, which results in more complex design, increased engine size and weight and additional friction losses.
Lacking a kinetic energy storage device, like a flywheel in conventional engines, free-piston engines are more susceptible to shutdown caused by minute variations in the timing or pressure of the engine cycle. Precise control of the speed and timing is required as, if the engine fails to build up sufficient compression or if other factors influence the injection/ignition and combustion, the engine may misfire or stop.
Advantages
Potential advantages of the free-piston concept include:
Simple design with few moving parts, giving a compact engine with low maintenance costs and reduced frictional losses.
The operational flexibility through the variable compression ratio allows operation optimisation for all operating conditions and multi-fuel operation. The free-piston engine is further well suited for homogeneous charge compression ignition (HCCI) operation.
High piston speed around top dead centre (TDC) and a fast power stroke expansion enhances fuel-air mixing and reduces the time available for heat transfer losses and the formation of temperature-dependent emissions such as nitrogen oxides (NOx).
Challenges
The main challenge for the free-piston engine is engine control, which can only be said to be fully solved for single piston hydraulic free-piston engines. Issues such as the influence of cycle-to-cycle variations in the combustion process and engine performance during transient operation in dual piston engines are topics that need further investigation. Another challenge is the lack of a rotating shaft: crankshaft engines can directly drive traditional accessories such as the alternator, oil pump, fuel pump, cooling system and starter, whereas a free-piston engine cannot.
Rotational movement to spin conventional automobile engine accessories such as alternators, air conditioner compressors, power steering pumps, and anti-pollution devices could be captured from a turbine situated in the exhaust stream.
Opposing piston engine
Most free-piston engines are of the opposed-piston type with a single central combustion chamber. A variation is the opposing-piston engine, which has two separate combustion chambers. An example is the Stelzer engine.
Recent developments
In the 21st century, research continues into free-piston engines and patents have been published in many countries. In the UK, Newcastle University is undertaking research into free-piston engines.
A new kind of the free-piston engine, a Free-piston linear generator is being developed by the German aerospace center.
In addition to these prototypes, researchers at West Virginia University in the US are working on the development of a single cylinder free-piston engine prototype with mechanical springs at an operating frequency of 90 Hz.
See also
Pulse tube refrigerator: Similar free-piston devices used for refrigeration
References
Sources
Mikalsen R., Roskilly A.P. A review of free-piston engine history and applications. Applied Thermal Engineering, Volume 27, Issues 14-15, pp. 2339-2352, 2007.
External links
DLR researchers unveil a new kind of range extender for electric cars
Extensive homepage about Free Piston Engines
Innas BV
Newcastle University
Vanderbilt University Free Piston Engine Compressor
"Engine of Tomorrow - Goes to Work Today." Popular Science, September 1957, pp. 138–141/294, detailed article/cutaway drawing on free-piston diesel engines.
Internal combustion engine
Piston engines
Engines
Engine technology | Free-piston engine | [
"Physics",
"Technology",
"Engineering"
] | 2,216 | [
"Internal combustion engine",
"Machines",
"Engines",
"Piston engines",
"Combustion engineering",
"Physical systems",
"Engine technology"
] |
13,953,779 | https://en.wikipedia.org/wiki/Hudson%27s%20equation | Hudson's equation, also known as Hudson formula, is an equation used by coastal engineers to calculate the minimum size of riprap (armourstone) required to provide satisfactory stability characteristics for rubble structures such as breakwaters under attack from storm wave conditions.
The equation was developed by the United States Army Corps of Engineers, Waterways Experiment Station (WES), following extensive investigations by Hudson (1953, 1959, 1961a, 1961b).
Initial equation
The equation itself is:

W = γr H³ / (KD (Sr − 1)³ cot θ)
where:
W is the design weight of the riprap armor (Newton)
is the specific weight of the armor blocks (N/m3)
H is the design wave height at the toe of the structure (m)
KD is a dimensionless stability coefficient, deduced from laboratory experiments for different kinds of armour blocks and for very small damage (a few blocks removed from the armour layer) (-):
KD = around 3 for natural quarry rock
KD = around 10 for artificial interlocking concrete blocks
Sr = ρr / ρw is the relative density of the rock, i.e. Sr − 1 = around 1.58 for granite in sea water
ρr and ρw are the densities of rock and (sea)water (kg/m3)
θ is the angle of revetment with the horizontal
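A minimal worked example of the initial formula, with illustrative input values for granite armour on a 1:3 slope (all numbers are assumptions for demonstration, not design guidance):

```python
# Hudson's equation: W = gamma_r * H**3 / (K_D * (S_r - 1)**3 * cot(theta))
gamma_r = 26_000.0   # specific weight of the armour rock, N/m^3 (assumed)
H = 3.0              # design wave height at the toe, m (assumed)
K_D = 3.0            # stability coefficient for natural quarry rock
S_r = 2.59           # relative density of granite in sea water (S_r - 1 = 1.59)
cot_theta = 3.0      # cot of the revetment angle for a 1:3 slope (assumed)

W = gamma_r * H**3 / (K_D * (S_r - 1)**3 * cot_theta)
print(f"required armour unit weight W = {W / 1000:.1f} kN "
      f"(about {W / 9810:.1f} tonnes)")
```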
Updated equation
This equation was rewritten as follows in the nineties:

Hs / (Δ Dn50) = (KD cot θ)^(1/3)
where:
Hs is the design significant wave height at the toe of the structure (m)
Δ is the dimensionless relative buoyant density of rock, i.e. (ρr / ρw - 1) = around 1.58 for granite in sea water
ρr and ρw are the densities of rock and (sea)water (kg/m3)
Dn50 is the nominal median diameter of armor blocks = (W50/ρr)^(1/3) (m)
KD is a dimensionless stability coefficient, deduced from laboratory experiments for different kinds of armor blocks and for very small damage (a few blocks removed from the armor layer) (-):
KD = around 3 for natural quarry rock
KD = around 10 for artificial interlocking concrete blocks
θ is the angle of revetment with the horizontal
The armourstone may be considered stable if the stability number Ns = Hs / (Δ Dn50) < 1.5 to 2, with damage rapidly increasing for Ns > 3. This formula has for many years been the US standard for the design of rock structures under the influence of wave action. These equations may be used for preliminary design, but scale model testing (2D in a wave flume, and 3D in a wave basin) is absolutely needed before construction is undertaken.
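A companion sketch using the rewritten form, solving for the required nominal diameter Dn50 and reporting the corresponding stability number (input values assumed for illustration):

```python
# Rewritten Hudson formula: Hs / (Delta * Dn50) = (K_D * cot(theta))**(1/3)
Hs = 3.0          # significant wave height at the toe, m (assumed)
Delta = 1.58      # relative buoyant density of granite in sea water
K_D = 3.0         # stability coefficient for natural quarry rock
cot_theta = 3.0   # 1:3 slope (assumed)
rho_r = 2650.0    # rock density, kg/m^3

Dn50 = Hs / (Delta * (K_D * cot_theta) ** (1 / 3))
W50 = rho_r * Dn50**3      # median armour stone mass, kg
Ns = Hs / (Delta * Dn50)   # stability number; equals (K_D*cot_theta)**(1/3) here
print(f"Dn50 = {Dn50:.2f} m, W50 = {W50 / 1000:.1f} t, Ns = {Ns:.2f}")
```

At this design point Ns comes out at about 2.1, right at the upper end of the 1.5 to 2 range quoted above, so a designer would typically round the stone size up.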
The drawback of the Hudson formula is that it is only valid for relatively steep waves (so for waves during storms, and less so for swell waves). It is also not valid for breakwaters and shore protections with an impermeable core, and it is not possible to estimate the degree of damage on a breakwater during a storm with this formula. Therefore, nowadays the Van der Meer formula, or a variant of it, is used for armourstone; for concrete breakwater elements, a variant of the Hudson formula is often used.
See also
Breakwater (structure)
Coastal erosion
Coastal management
Armourstone
Riprap
References
Equations
Coastal engineering
Coastal erosion | Hudson's equation | [
"Mathematics",
"Engineering"
] | 674 | [
"Coastal engineering",
"Civil engineering",
"Mathematical objects",
"Equations"
] |
13,957,273 | https://en.wikipedia.org/wiki/MAS1 | MAS proto-oncogene, or MAS1 proto-oncogene, G protein-coupled receptor (MRGA, MAS, MGRA), is a protein that in humans is encoded by the MAS1 gene.
The structure of the MAS1 product indicates that it belongs to the class of receptors that are coupled to GTP-binding proteins and share a conserved structural motif, which is described as a '7-transmembrane segment' following the prediction that these hydrophobic segments form membrane-spanning alpha-helices. The MAS1 protein may be a receptor that, when activated, modulates a critical component in a growth-regulating pathway to bring about oncogenic effects.
Agonists of the receptor include angiotensin-(1-7). Antagonists include A-779 (angiotensin-(1-7) with the C-terminal proline replaced by D-alanine) and D-Pro (angiotensin-(1-7) with the C-terminal proline replaced by D-proline).
Mas1 proto-oncogene (MAS1, MGRA) is not to be confused with the MAS-related G-protein coupled receptor MrgD, a receptor recently believed to be activated by the ligand alamandine (generated by catalysis of Ang A via ACE2 or directly from Ang-(1-7)).
See also
MAS1 oncogene
References
Further reading
G protein-coupled receptors | MAS1 | [
"Chemistry"
] | 300 | [
"G protein-coupled receptors",
"Signal transduction"
] |
13,958,600 | https://en.wikipedia.org/wiki/Rotavirus%20translation | Rotavirus translation, the process of translating mRNA into proteins, occurs in a different way in Rotaviruses. Unlike the vast majority of cellular proteins in other organisms, in Rotaviruses the proteins are translated from capped but nonpolyadenylated mRNAs. The viral nonstructural protein NSP3 specifically binds the 3'-end consensus sequence of viral mRNAs and interacts with the eukaryotic translation initiation factor eIF4G. The Rotavirus replication cycle occurs entirely in the cytoplasm. Upon virus entry, the viral transcriptase synthesizes capped but nonpolyadenylated mRNA The viral mRNAs bear 5' and 3' untranslated regions (UTR) of variable length and are flanked by two different sequences common to all genes.
In the group A rotaviruses, the 3'-end consensus sequence UGACC is highly conserved among the 11 genes. Rotavirus NSP3 presents several similarities to the poly(A)-binding protein (PABP); in rotavirus-infected cells, NSP3 can be cross-linked to the 3' end of rotavirus mRNAs and is coimmunoprecipitated with eIF4G. The binding of NSP3A to eIF4G and its specific interaction with the 3' end of viral mRNA brings the viral mRNA and the translation initiation machinery into contact, thus favoring efficient translation of the viral mRNA. NSP3 interacts with the same region of eIF4G as PABP does. As a consequence, during rotavirus infection PABP is evicted from eIF4G, probably impairing the translation of polyadenylated mRNA and leading to the shutoff of cellular mRNA translation observed during rotavirus infection.
References
Rotaviruses
Molecular biology
Protein biosynthesis
Gene expression | Rotavirus translation | [
"Chemistry",
"Biology"
] | 381 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
2,375,025 | https://en.wikipedia.org/wiki/Lead%20dioxide | Lead(IV) oxide, commonly known as lead dioxide, is an inorganic compound with the chemical formula . It is an oxide where lead is in an oxidation state of +4. It is a dark-brown solid which is insoluble in water. It exists in two crystalline forms. It has several important applications in electrochemistry, in particular as the positive plate of lead acid batteries.
Properties
Physical
Lead dioxide has two major polymorphs, alpha and beta, which occur naturally as the rare minerals scrutinyite and plattnerite, respectively. Whereas the beta form had been identified in 1845, α-PbO2 was first identified in 1946 and found as a naturally occurring mineral in 1988.
The alpha form has orthorhombic symmetry, space group Pbcn (No. 60), Pearson symbol oP12, lattice constants a = 0.497 nm, b = 0.596 nm, c = 0.544 nm, Z = 4 (four formula units per unit cell). The lead atoms are six-coordinate.
The symmetry of the beta form is tetragonal, space group P42/mnm (No. 136), Pearson symbol tP6, lattice constants a = 0.491 nm, c = 0.3385 nm, Z = 2 and related to the rutile structure and can be envisaged as containing columns of octahedra sharing opposite edges and joined to other chains by corners. This contrasts with the alpha form where the octahedra are linked by adjacent edges to give zigzag chains.
Chemical
Lead dioxide decomposes upon heating in air as follows:

PbO2 → Pb12O19 → Pb12O17 → Pb3O4 → PbO

The stoichiometry of the end product can be controlled by changing the temperature – for example, in the above reaction, the first step occurs at 290 °C, second at 350 °C, third at 375 °C and fourth at 600 °C. In addition, Pb2O3 can be obtained by decomposing PbO2 at 580–620 °C under an oxygen pressure of 1,400 atm (140 MPa). Therefore, thermal decomposition of lead dioxide is a common way of producing various lead oxides.
Lead dioxide is an amphoteric compound with prevalent acidic properties. It dissolves in strong bases to form the hydroxyplumbate ion, [Pb(OH)6]2−:

PbO2 + 2 NaOH + 2 H2O → Na2[Pb(OH)6]

It also reacts with basic oxides in the melt, yielding orthoplumbates M4PbO4.
Because of the instability of its Pb4+ cation, lead dioxide reacts with hot acids, converting to the more stable Pb2+ state and liberating oxygen, e.g.:

2 PbO2 + 2 H2SO4 → 2 PbSO4 + 2 H2O + O2

However, these reactions are slow.
Lead dioxide is well known for being a good oxidizing agent; an example reaction is the oxidation of manganese(II) salts to permanganate in nitric acid:

2 MnSO4 + 5 PbO2 + 6 HNO3 → 2 HMnO4 + 2 PbSO4 + 3 Pb(NO3)2 + 2 H2O
Electrochemical
Although the formula of lead dioxide is nominally given as PbO2, the actual oxygen to lead ratio varies between 1.90 and 1.98 depending on the preparation method. Deficiency of oxygen (or excess of lead) results in the characteristic metallic conductivity of lead dioxide, with a resistivity as low as 10−4 Ω·cm, which is exploited in various electrochemical applications. Like metals, lead dioxide has a characteristic electrode potential, and in electrolytes it can be polarized both anodically and cathodically. Lead dioxide electrodes have a dual action, that is both the lead and oxygen ions take part in the electrochemical reactions.
Production
Chemical processes
Lead dioxide is produced commercially by several methods, which include oxidation of red lead (Pb3O4) in an alkaline slurry in a chlorine atmosphere, and reaction of lead(II) acetate with "chloride of lime" (calcium hypochlorite). The reaction of Pb3O4 with nitric acid also affords the dioxide:

Pb3O4 + 4 HNO3 → PbO2 + 2 Pb(NO3)2 + 2 H2O

PbO2 reacts with sodium hydroxide to form the hexahydroxoplumbate(IV) ion [Pb(OH)6]2−, soluble in water.
Electrolysis
An alternative synthesis method is electrochemical: lead dioxide forms on pure lead, in dilute sulfuric acid, when polarized anodically at electrode potential about +1.5 V at room temperature. This procedure is used for large-scale industrial production of anodes. Lead and copper electrodes are immersed in sulfuric acid flowing at a rate of 5–10 L/min. The electrodeposition is carried out galvanostatically, by applying a current of about 100 A/m2 for about 30 minutes.
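As an idealized cross-check of these deposition parameters, Faraday's law gives the amount of PbO2 that 100 A/m² applied for 30 minutes could deposit at 100% current efficiency (real efficiencies are lower, so this is an upper bound, not a plant figure):

```python
# Faraday's-law estimate of PbO2 deposited per unit area (idealized).
F = 96485.0      # Faraday constant, C/mol
M = 239.2        # molar mass of PbO2, g/mol
n = 2            # electrons transferred per Pb(II) -> Pb(IV)
rho = 9.38       # approximate density of PbO2, g/cm^3

j = 100.0        # current density, A/m^2
t = 30 * 60      # deposition time, s

mass = j * t * M / (n * F)   # g of PbO2 per m^2
thickness_um = mass / rho    # 1 (g/m^2) / (g/cm^3) corresponds to 1 um
print(f"about {mass:.0f} g/m^2, i.e. a coating roughly {thickness_um:.0f} um thick")
```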
The drawback of this method is the softness of the lead substrate, especially compared to the hard and brittle PbO2, which has a Mohs hardness of 5.5. This mismatch in mechanical properties results in peeling of the coating. Therefore, an alternative method, preferred for bulk production, is to use harder substrates, such as titanium, niobium, tantalum or graphite, and deposit PbO2 onto them from lead(II) nitrate in static or flowing nitric acid. The substrate is usually sand-blasted before the deposition to remove surface oxide and contamination and to increase the surface roughness and adhesion of the coating.
Applications
Lead dioxide is used in the production of matches, pyrotechnics, dyes and the curing of sulfide polymers. It is also used in the construction of high-voltage lightning arresters.
Lead dioxide is used as an anode material in electrochemistry. β- is more attractive for this purpose than the α form because it has relatively low resistivity, good corrosion resistance even in low-pH medium, and a high overvoltage for the evolution of oxygen in sulfuric- and nitric-acid-based electrolytes. Lead dioxide can also withstand chlorine evolution in hydrochloric acid. Lead dioxide anodes are inexpensive and were once used instead of conventional platinum and graphite electrodes for regenerating potassium dichromate. They were also applied as oxygen anodes for electroplating copper and zinc in sulfate baths. In organic synthesis, lead dioxide anodes were applied for the production of glyoxylic acid from oxalic acid in a sulfuric acid electrolyte.
Lead acid battery
The most important use of lead dioxide is as the cathode of lead acid batteries. Its utility arises from the anomalous metallic conductivity of PbO2. The lead acid battery stores and releases energy by shifting the equilibrium (a comproportionation) between metallic lead, lead dioxide, and lead(II) salts in sulfuric acid:

Pb + PbO2 + 2 H2SO4 ⇌ 2 PbSO4 + 2 H2O,  E° = +2.05 V
Safety
Lead compounds are poisons. Chronic contact with the skin can potentially cause lead poisoning through absorption, or redness and irritation in the short term.
PbO2 is not combustible, but it enhances the flammability of other substances and the intensity of a fire. In case of a fire it gives off irritating and toxic fumes.
Lead dioxide is poisonous to aquatic life, but because of its insolubility it usually settles out of water.
References
External links
National Pollutant Inventory: Lead and Lead Compounds Fact Sheet
Oxides
Lead(IV) compounds
Acidic oxides
Pyrotechnic oxidizers
Oxidizing agents | Lead dioxide | [
"Chemistry"
] | 1,409 | [
"Redox",
"Oxides",
"Oxidizing agents",
"Salts"
] |
2,375,338 | https://en.wikipedia.org/wiki/Carbon%20tetrafluoride | Tetrafluoromethane, also known as carbon tetrafluoride or R-14, is the simplest perfluorocarbon (CF4). As its IUPAC name indicates, tetrafluoromethane is the perfluorinated counterpart to the hydrocarbon methane. It can also be classified as a haloalkane or halomethane. Tetrafluoromethane is a useful refrigerant but also a potent greenhouse gas. It has a very high bond strength due to the nature of the carbon–fluorine bond.
Bonding
Because of the multiple carbon–fluorine bonds, and the high electronegativity of fluorine, the carbon in tetrafluoromethane has a significant positive partial charge which strengthens and shortens the four carbon–fluorine bonds by providing additional ionic character. Carbon–fluorine bonds are the strongest single bonds in organic chemistry. Additionally, they strengthen as more carbon–fluorine bonds are added to the same carbon. In the one-carbon organofluorine compounds represented by molecules of fluoromethane, difluoromethane, trifluoromethane, and tetrafluoromethane, the carbon–fluorine bonds are strongest in tetrafluoromethane. This effect is due to the increased coulombic attractions between the fluorine atoms and the carbon because the carbon has a positive partial charge of 0.76.
Preparation
Tetrafluoromethane is the product when any carbon compound, including carbon itself, is burned in an atmosphere of fluorine. With hydrocarbons, hydrogen fluoride is a coproduct. It was first reported in 1926. It can also be prepared by the fluorination of carbon dioxide, carbon monoxide or phosgene with sulfur tetrafluoride. Commercially it is manufactured by the reaction of hydrogen fluoride with dichlorodifluoromethane or chlorotrifluoromethane; it is also produced during the electrolysis of metal fluorides MF, MF2 using a carbon electrode.
Although it can be made from a myriad of precursors and fluorine, elemental fluorine is expensive and difficult to handle. Consequently, CF4 is prepared on an industrial scale using hydrogen fluoride:
CCl2F2 + 2 HF → CF4 + 2 HCl
Laboratory synthesis
Tetrafluoromethane and silicon tetrafluoride can be prepared in the laboratory by the reaction of silicon carbide with fluorine.
SiC + 4 F2 → CF4 + SiF4
Reactions
Tetrafluoromethane, like other fluorocarbons, is very stable due to the strength of its carbon–fluorine bonds. The bonds in tetrafluoromethane have a bonding energy of 515 kJ⋅mol−1. As a result, it is inert to acids and hydroxides. However, it reacts explosively with alkali metals. Thermal decomposition or combustion of CF4 produces toxic gases (carbonyl fluoride and carbon monoxide) and in the presence of water will also yield hydrogen fluoride.
It is very slightly soluble in water (about 20 mg⋅L−1), but miscible with organic solvents.
Uses
Tetrafluoromethane is sometimes used as a low temperature refrigerant (R-14). It is used in electronics microfabrication alone or in combination with oxygen as a plasma etchant for silicon, silicon dioxide, and silicon nitride. It also has uses in neutron detectors.
Environmental effects
Tetrafluoromethane is a potent greenhouse gas that contributes to the greenhouse effect. It is very stable, has an atmospheric lifetime of 50,000 years, and a greenhouse warming potential 6,500 times that of CO2.
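Since a global warming potential is applied as a simple mass multiplier in emission inventories, the CO2-equivalent of a CF4 release can be illustrated directly (using the GWP value quoted above):

```python
# CO2-equivalent of CF4 releases using GWP = 6,500 (as quoted in the text).
GWP_CF4 = 6500
for kg in (0.1, 1.0, 10.0):
    print(f"{kg:5.1f} kg CF4 ~ {kg * GWP_CF4 / 1000:6.2f} t CO2-equivalent")
```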
Tetrafluoromethane is the most abundant perfluorocarbon in the atmosphere, where it is designated as PFC-14. Its atmospheric concentration is growing. As of 2019, the man-made gases CFC-11 and CFC-12 continue to contribute a stronger radiative forcing than PFC-14.
Although structurally similar to chlorofluorocarbons (CFCs), tetrafluoromethane does not deplete the ozone layer because the carbon–fluorine bond is much stronger than that between carbon and chlorine.
The main industrial emissions of tetrafluoromethane, besides those of hexafluoroethane, occur during the production of aluminium by the Hall–Héroult process. CF4 is also produced by the breakdown of more complex compounds such as halocarbons.
Health risks
Due to its density, tetrafluoromethane can displace air, creating an asphyxiation hazard in inadequately ventilated areas. Otherwise, it is normally harmless due to its stability.
See also
Trifluoromethane
Hexafluoroethane
Octafluoropropane
References
External links
National Pollutant Inventory – Fluoride and compounds fact sheet
Data from Air Liquide
Vapor pressure graph at Air Liquide
MSDS at Oxford University
Protocol for measurement of tetrafluoromethane and hexafluoroethane from primary aluminium production
Chemical and physical properties table
WebBook page for CF4
Nonmetal halides
1
Halomethanes
Refrigerants
Greenhouse gases | Carbon tetrafluoride | [
"Chemistry",
"Environmental_science"
] | 1,138 | [
"Greenhouse gases",
"Environmental chemistry"
] |
2,375,391 | https://en.wikipedia.org/wiki/Hydraulic%20manifold | A hydraulic manifold is a component that regulates fluid flow between pumps and actuators and other components in a hydraulic system. It is like a switchboard in an electrical circuit because it lets the operator control how much fluid flows between which components of a hydraulic machinery. For example, in a backhoe loader a manifold turns on or shuts off or diverts flow to the telescopic arms of the front bucket and the back bucket. The manifold is connected to the levers in the operator's cabin which the operator uses to achieve the desired manifold behaviour.
A manifold is composed of assorted hydraulic valves connected to each other. It is the various combinations of states of these valves that allow complex control behaviour in a manifold. A hydraulic manifold is a block of metal with flow paths drilled through it, connecting various ports. Hydraulic manifolds consist of one or more relatively large pipes called a "barrel" or "main", with numerous junctions connecting smaller pipes and ports.
See also
Block and bleed manifold
References
Fluid mechanics
Manifold, Hydraulic | Hydraulic manifold | [
"Physics",
"Chemistry",
"Engineering"
] | 209 | [
"Physical systems",
"Hydraulics",
"Civil engineering",
"Fluid mechanics",
"Fluid dynamics"
] |
2,375,824 | https://en.wikipedia.org/wiki/Dihedral%20group%20of%20order%206 | In mathematics, D3 (sometimes alternatively denoted by D6) is the dihedral group of degree 3 and order 6. It equals the symmetric group S3. It is also the smallest non-abelian group.
This page illustrates many group concepts using this group as example.
Symmetries of an equilateral triangle
The dihedral group D3 is the symmetry group of an equilateral triangle, that is, it is the set of all rigid transformations (reflections, rotations, and combinations of these) that leave the shape and position of this triangle fixed. In the case of D3, every possible permutation of the triangle's vertices constitutes such a transformation, so that the group of these symmetries is isomorphic to the symmetric group S3 of all permutations of three distinct elements. This is not the case for dihedral groups of higher orders.
The dihedral group D3 is isomorphic to two other symmetry groups in three dimensions:
one with a 3-fold rotation axis and a perpendicular 2-fold rotation axis (hence three of these): D3
one with a 3-fold rotation axis in a plane of reflection (and hence also in two other planes of reflection): C3v
Permutations of a set of three objects
Consider three colored blocks (red, green, and blue), initially placed in the order RGB. The symmetric group S3 is then the group of all possible rearrangements of these blocks.
If we denote by a the action "swap the first two blocks", and by b the action "swap the last two blocks", we can write all possible permutations in terms of these two actions.
In multiplicative form, we traditionally write xy for the combined action "first do y, then do x"; so that ab is the action RGB ↦ BRG, i.e., "take the last block and move it to the front".
If we write e for "leave the blocks as they are" (the identity action), then we can write the six permutations of the set of three blocks as the following actions:
e : RGB ↦ RGB or ()
a : RGB ↦ GRB or (RG)
b : RGB ↦ RBG or (GB)
ab : RGB ↦ BRG or (RGB)
ba : RGB ↦ GBR or (RBG)
aba : RGB ↦ BGR or (RB)
The notation in parentheses is the cycle notation.
Note that the action aa has the effect RGB ↦ RGB, leaving the blocks as they were; so we can write aa = e.
Similarly,
bb = e,
(aba)(aba) = e, and
(ab)(ba) = (ba)(ab) = e;
so each of the above actions has an inverse.
By inspection, we can also determine associativity and closure (two of the necessary group axioms); note for example that
(ab)a = a(ba) = aba, and
(ba)b = b(ab) = bab.
The group is non-abelian since, for example, ab ≠ ba. Since it is built up from the basic actions a and b, we say that the set {a, b} generates it.
The group has presentation

⟨r, a | r³ = 1, a² = 1, ara = r⁻¹⟩, also written ⟨r, a | r³, a², arar⟩

or

⟨a, b | a² = b² = 1, (ab)³ = 1⟩, also written ⟨a, b | a², b², (ab)³⟩

where a and b are swaps and r = ab is a cyclic permutation. Note that the second presentation means that the group is a Coxeter group. (In fact, all dihedral and symmetry groups are Coxeter groups.)
Summary of group operations
With the generators a and b, we define the additional shorthands c = ab, d = ba and f = aba, so that a, b, c, d, e, and f are all the elements of this group. We can then summarize the group operations in the form of a Cayley table:

 *  | e  a  b  c  d  f
----+------------------
 e  | e  a  b  c  d  f
 a  | a  e  c  b  f  d
 b  | b  d  e  f  a  c
 c  | c  f  a  d  e  b
 d  | d  b  f  e  c  a
 f  | f  c  d  a  b  e
Note that non-equal non-identity elements only commute if they are each other's inverse. Therefore, the group is centerless, i.e., the center of the group consists only of the identity element.
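A short sketch that reproduces this table by brute force, representing each element by its effect on the triple (R, G, B) and composing with the convention used above (xy means "first do y, then do x"):

```python
def a(s): return (s[1], s[0], s[2])   # swap the first two blocks
def b(s): return (s[0], s[2], s[1])   # swap the last two blocks
def e(s): return s                    # identity

def compose(x, y):                    # xy = "first do y, then do x"
    return lambda s: x(y(s))

RGB = ('R', 'G', 'B')
names = {}                            # effect on RGB -> element name

def register(name, f):
    names[f(RGB)] = name
    return f

E = register('e', e); A = register('a', a); B = register('b', b)
C = register('c', compose(a, b))              # ab: RGB -> BRG
D = register('d', compose(b, a))              # ba: RGB -> GBR
F = register('f', compose(a, compose(b, a)))  # aba: RGB -> BGR

elems = [E, A, B, C, D, F]
for x in elems:                       # print the Cayley table row by row
    row = [names[compose(x, y)(RGB)] for y in elems]
    print(names[x(RGB)], '|', ' '.join(row))

assert names[compose(A, B)(RGB)] != names[compose(B, A)(RGB)]  # ab != ba
```

The printed rows match the table above, and the final assertion confirms the group is non-abelian.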
Conjugacy classes
We can easily distinguish three kinds of permutations of the three blocks, the conjugacy classes of the group:
no change (), a group element of order 1
interchanging two blocks: (RG), (RB), (GB), three group elements of order 2
a cyclic permutation of all three blocks: (RGB), (RBG), two group elements of order 3
For example, (RG) and (RB) are both of the form (x y); a permutation of the letters R, G, and B (namely (GB)) changes the notation (RG) into (RB). Therefore, if we apply (GB), then (RB), and then the inverse of (GB), which is also (GB), the resulting permutation is (RG).
Note that conjugate group elements always have the same order, but in general two group elements that have the same order need not be conjugate.
Subgroups
From Lagrange's theorem we know that any non-trivial subgroup of a group with 6 elements must have order 2 or 3. In fact the two cyclic permutations of all three blocks, with the identity, form a subgroup of order 3, index 2, and the swaps of two blocks, each with the identity, form three subgroups of order 2, index 3. The existence of subgroups of order 2 and 3 is also a consequence of Cauchy's theorem.
The first-mentioned is the alternating group A3.
The left cosets and the right cosets of A3 coincide (as they do for any subgroup of index 2) and consist of A3 and the set of three swaps {(RG), (RB), (GB)}.
The left cosets of {(), (RG)} are:
{ (), (RG) }
{ (RB), (RGB) }
{ (GB), (RBG) }
The right cosets of {(), (RG)} are:
{ (RG), () }
{ (RBG), (RB) }
{ (RGB), (GB) }
Thus A3 is normal, and the other three non-trivial subgroups are not. The quotient group S3/A3 is isomorphic with C2.
S3 = A3 ⋊ H, a semidirect product, where H is a subgroup of two elements: () and one of the three swaps. This decomposition is also a consequence (particular case) of the Schur–Zassenhaus theorem.
In terms of permutations the two group elements of S3/A3 are the set of even permutations and the set of odd permutations.
If the original group is that generated by a 120°-rotation of a plane about a point, and reflection with respect to a line through that point, then the quotient group has the two elements which can be described as the subsets "just rotate (or do nothing)" and "take a mirror image".
Note that for the symmetry group of a square, an uneven permutation of vertices does not correspond to taking a mirror image, but to operations not allowed for rectangles, i.e. 90° rotation and applying a diagonal axis of reflection.
Semidirect products
The semidirect product C3 ⋊φ C2 is isomorphic to the cyclic group C6 if both φ(0) and φ(1) are the identity.
The semidirect product is isomorphic to the dihedral group of order 6 if φ(0) is the identity and φ(1) is the non-trivial automorphism of C3, which inverts the elements.
Thus we get:
(n1, 0) * (n2, h2) = (n1 + n2, h2)
(n1, 1) * (n2, h2) = (n1 − n2, 1 + h2)
for all n1, n2 in C3 and h2 in C2.
More concisely,

(n1, h1) * (n2, h2) = (n1 + (−1)^h1 n2, h1 + h2)

for all n1, n2 in C3 and h1, h2 in C2.
In a Cayley table (writing the pair (n, h) as nh):

  *  | 00 10 20 01 11 21
 ----+------------------
  00 | 00 10 20 01 11 21
  10 | 10 20 00 11 21 01
  20 | 20 00 10 21 01 11
  01 | 01 21 11 00 20 10
  11 | 11 01 21 10 00 20
  21 | 21 11 01 20 10 00
Note that for the second digit we essentially have a 2×2 table, with 3×3 equal values for each of these 4 cells. For the first digit the left half of the table is the same as the right half, but the top half is different from the bottom half.
For the direct product the table is the same except that the first digits of the bottom half of the table are the same as in the top half.
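A sketch of the multiplication rule just given, confirming that it really defines a non-abelian group of order 6 (associativity, inverses and non-commutativity are checked exhaustively):

```python
from itertools import product

def mul(p, q):
    """(n1, h1) * (n2, h2) = (n1 + (-1)**h1 * n2, h1 + h2) in C3 semidirect C2."""
    (n1, h1), (n2, h2) = p, q
    return ((n1 + (-1) ** h1 * n2) % 3, (h1 + h2) % 2)

G = [(n, h) for n in range(3) for h in range(2)]
e = (0, 0)

assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x, y, z in product(G, repeat=3))                 # associativity
assert all(any(mul(g, x) == e and mul(x, g) == e for x in G)
           for g in G)                                          # inverses exist
assert any(mul(g, h) != mul(h, g) for g in G for h in G)        # non-abelian
print("C3 semidirect C2 is a non-abelian group of order", len(G))
```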
Group action
Consider D3 in the geometrical way, as a symmetry group of isometries of the plane, and consider the corresponding group action on a set of 30 evenly spaced points on a circle, numbered 0 to 29, with 0 at one of the reflexion axes.
This section illustrates group action concepts for this case.
The action of G on X is called
transitive if for any two x, y in X there exists a g in G such that g · x = y; this is not the case
faithful (or effective) if for any two different g, h in G there exists an x in X such that g · x ≠ h · x; this is the case, because, except for the identity, symmetry groups do not contain elements that "do nothing"
free if for any two different g, h in G and all x in X we have g · x ≠ h · x; this is not the case because there are reflections
Orbits and stabilizers
The orbit of a point x in X is the set of elements of X to which x can be moved by the elements of G. The orbit of x is denoted by G·x:

G·x = { g · x | g ∈ G }

The orbits are {0, 10, 20}, {5, 15, 25}, {1, 9, 11, 19, 21, 29}, {2, 8, 12, 18, 22, 28}, {3, 7, 13, 17, 23, 27} and {4, 6, 14, 16, 24, 26}. The points within an orbit are "equivalent". If a symmetry group applies for a pattern, then within each orbit the color is the same.
The set of all orbits of X under the action of G is written as X/G.
If Y is a subset of X, we write GY for the set { g · y | g ∈ G and y ∈ Y }. We call the subset Y invariant under G if GY = Y (which is equivalent to GY ⊆ Y). In that case, G also operates on Y. The subset Y is called fixed under G if g · y = y for all g in G and all y in Y. The union of e.g. two orbits is invariant under G, but not fixed.
For every x in X, we define the stabilizer subgroup of x (also called the isotropy group or little group) as the set of all elements in G that fix x:

Gx = { g ∈ G | g · x = x }
If x is a reflection point (0, 5, 10, 15, 20 or 25), its stabilizer is the group of order two containing the identity and the reflection in x. In other cases the stabilizer is the trivial group.
For a fixed x in X, consider the map from G to X given by g ↦ g · x. The image of this map is the orbit of x and the coimage is the set of all left cosets of Gx. The standard quotient theorem of set theory then gives a natural bijection between G/Gx and G·x. Specifically, the bijection is given by gGx ↦ g · x. This result is known as the orbit-stabilizer theorem. In the two cases of a small orbit, the stabilizer is non-trivial.
If two elements x and y belong to the same orbit, then their stabilizer subgroups, Gx and Gy, are isomorphic. More precisely: if y = g · x, then Gy = gGx g−1. In the example this applies e.g. for 5 and 25, both reflection points. Reflection about 25 corresponds to a rotation of 10, reflection about 5, and rotation of −10.
A result closely related to the orbit-stabilizer theorem is Burnside's lemma:

|X/G| = (1/|G|) Σ_{g∈G} |Xg|

where Xg is the set of points fixed by g. I.e., the number of orbits is equal to the average number of points fixed per group element.
For the identity all 30 points are fixed, for the two rotations none, and for the three reflections two each: |Xg| = 30, 0, 0, 2, 2, 2. Thus, the average is 36/6 = 6, the number of orbits.
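A sketch that enumerates this action on the 30 points and verifies the orbit count against Burnside's lemma (rotations by 0, 10 and 20 positions; reflection axes through points 0, 5 and 10, as in the text):

```python
# D3 acting on 30 evenly spaced points: k -> k + r (rotations) and
# k -> 2a - k (reflections about the axis through point a), all mod 30.
group = [lambda k, r=r: (k + r) % 30 for r in (0, 10, 20)]
group += [lambda k, a=a: (2 * a - k) % 30 for a in (0, 5, 10)]

orbits = {frozenset(g(x) for g in group) for x in range(30)}
fixed = [sum(1 for x in range(30) if g(x) == x) for g in group]

print(sorted(sorted(o) for o in orbits))   # the six orbits listed above
print(fixed, "-> average", sum(fixed) / len(group), "= number of orbits")
```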
Representation theory
Up to isomorphism, this group has three irreducible complex unitary representations, which we will call I (the trivial representation), ρ1 and ρ2, where the subscript indicates the dimension. By its definition as a permutation group over the set with three elements, the group has a representation on C³ by permuting the entries of the vector, the fundamental representation. This representation is not irreducible, as it decomposes as a direct sum of I and ρ2. I appears as the subspace of vectors of the form (λ, λ, λ) and ρ2 is the representation on its orthogonal complement, which are vectors of the form (λ1, λ2, −λ1 − λ2).
The nontrivial one-dimensional representation ρ1 arises through the group's Z2 grading: the action is multiplication by the sign of the permutation of the group element. Every finite group has such a representation since it is a subgroup of a symmetric group by its regular action. Counting the square dimensions of the representations (1² + 1² + 2² = 6, the order of the group), we see these must be all of the irreducible representations.
A 2-dimensional irreducible linear representation yields a 1-dimensional projective representation (i.e., an action on the projective line, an embedding in the Möbius group PGL(2, C)), as elliptic transforms. This can be represented by matrices with entries 0 and ±1 (here written as fractional linear transformations), known as the anharmonic group:
order 1: z
order 2: 1/z, 1 − z, z/(z − 1)
order 3: 1/(1 − z), (z − 1)/z
and thus descends to a representation over any field, which is always faithful/injective (since no two terms differ only by a sign). Over the field with two elements, the projective line has only 3 points, and this is thus the exceptional isomorphism S3 ≅ PGL(2, 2). In characteristic 3, this embedding stabilizes the point −1, since 2 = 1/2 = −1 (in characteristic greater than 3 these points are distinct and permuted, and are the orbit of the harmonic cross-ratio). Over the field with three elements, the projective line has 4 elements, and since PGL(2, 3) is isomorphic to the symmetric group on 4 elements, S4, the resulting embedding equals the stabilizer of the point −1.
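A sketch verifying that the six fractional linear transformations listed above close under composition, i.e. form a group of order 6; each map (az + b)/(cz + d) is stored as the matrix entries (a, b, c, d) and compared up to an overall sign, since the action is projective:

```python
from itertools import product

maps = {                      # the anharmonic group as (a, b, c, d) matrices
    "z":       (1, 0, 0, 1),
    "1/z":     (0, 1, 1, 0),
    "1-z":     (-1, 1, 0, 1),
    "z/(z-1)": (1, 0, 1, -1),
    "1/(1-z)": (0, 1, -1, 1),
    "(z-1)/z": (1, -1, 1, 0),
}

def mul(m, n):                # matrix product, normalized up to overall sign
    a, b, c, d = m
    e, f, g, h = n
    p = (a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h)
    return tuple(-x for x in p) if next(x for x in p if x) < 0 else p

identity = (1, 0, 0, 1)
names = {mul(m, identity): k for k, m in maps.items()}   # normalized lookup
for m1, m2 in product(maps.values(), repeat=2):
    assert mul(m1, m2) in names                          # closure
print("the six maps form a group of order", len(names))
```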
See also
Dihedral group of order 8
References
External links
http://mathworld.wolfram.com/DihedralGroupD3.html
https://groupprops.subwiki.org/wiki/Symmetric_group:S3
Finite groups | Dihedral group of order 6 | [
"Mathematics"
] | 2,927 | [
"Mathematical structures",
"Algebraic structures",
"Finite groups"
] |
2,376,008 | https://en.wikipedia.org/wiki/Dwang | In construction, a dwang (Scotland and New Zealand), nogging piece, nogging, noggin or nog (England and Australia; all derived from brick nog), or blocking (North America), is a horizontal bracing piece used between wall studs to give rigidity to the wall frames of a building. Noggings may be made of timber, steel, or aluminium. If made of timber they are cut slightly longer than the space they fit into and are driven tightly into place or rabbeted into the wall stud. Although noggings between vertical studs brace the studs against buckling in compression they provide no bracing effect in shear, which is resisted by diagonal bracing to stop the frame racking.
The interval between noggings is dictated by local building codes and by the type of timber used; a typical timber-framed house in a non-cyclonic area will have two or three noggings per storey between studs. Additional noggings may be added as grounds for later fixings and are supplemented by lintels, sills and jack studs to form openings.
Joist bridging, or blocking, is used between floor or ceiling joists, but this is to prevent the joists from twisting or rotating under load rather than to prevent buckling in compression. Herringbone strutting may replace blocking, with smaller timber battens fixed diagonally, in pairs, between joists.
References
See also
Blocking (construction)
Carpentry
Building engineering
Structural system
Carpentry | Dwang | [
"Technology",
"Engineering"
] | 316 | [
"Structural engineering",
"Building engineering",
"Structural system",
"Civil engineering",
"Architecture stubs",
"Architecture"
] |
2,376,516 | https://en.wikipedia.org/wiki/Smoke%20hood | A smoke hood, also called an Air-Purifying Respiratory Protective Smoke Escape Device (RPED), is a hood wherein a transparent airtight bag seals around the head of the wearer while an air filter held in the mouth connects to the outside atmosphere and is used to breathe. Smoke hoods are a class of emergency breathing apparatus intended to protect victims of fire from the effects of smoke inhalation. A smoke hood is a predecessor to the gas mask. The first modern smoke hood design was by Garrett Morgan and patented in 1912.
History
Although the concept of air filtration masks dates back as far as Pliny the Elder, many early designs suffered from serious flaws, including an inability to adequately filter or provide enough air to the user, or design shortcomings that led to equipment that was either uncomfortable or difficult to don and use.
The first known modern-design smoke hood was developed by Garrett Morgan and was patented in 1912. The Morgan hood represented a significant improvement in the engineering and operability of smoke hoods or masks. Due partly to race issues in the United States at the time, Morgan, an African American, and his device went largely unrecognized until 1916. During construction of a tunnel under Lake Erie, an explosion trapped a number of sandhogs in the partially completed tunnel and filled the space with toxic fumes. Two separate rescue attempts failed, and the rescue teams themselves ended up in need of rescuing. Morgan, along with his brother and two volunteers, entered wearing Morgan smoke hoods and rescued several men apiece, which prompted others to don Morgan hoods and join the rescue attempt. In the end, Morgan's smoke hood enabled the rescue of many of the previous rescuers and allowed Morgan himself to make four trips into the tunnel—a journey that, without the hood, was not possible even once.
Modern hoods
High-quality smoke hoods are generally constructed of heat-resistant material like Kapton, and can withstand relatively high temperatures. The most important part of a smoke hood is the filter that provides protection from the toxic byproducts of combustion. Virtually all smoke hood designs utilize some form of activated charcoal filter and particulate filter to screen out corrosive fumes like ammonia and chlorine, as well as acid gases like hydrogen chloride and hydrogen sulfide. The defining characteristic of an effective smoke hood is the ability to convert deadly carbon monoxide to relatively harmless carbon dioxide through a catalytic process.
They have been included in preparedness kits, particularly after the September 11 attacks. Preparedness lists, such as those presented by Ready.Gov, often recommend smoke hoods, although some lists use alternate names such as "fume hoods," "respirator hoods," or "self-rescue hoods." As most modern construction contains materials that produce toxic smoke or fumes when burned, smoke hoods can allow people to make a safe escape from buildings when it might not otherwise be possible.
Positive-pressure hoods
Smoke hoods present on aircraft, also called protective breathing equipment (or PBEs), typically generate oxygen for anywhere from 30 seconds to 15 minutes. The oxygen is kept in a closed circuit, usually thanks to a tight neck seal. A scrubber system may be present to reduce the levels of carbon dioxide, keeping the air breathable for around 20 minutes. When the oxygen supply ends, the hood will begin deflating and must be removed to avoid suffocation. These devices represent a subgroup of smoke hoods called positive-pressure respirators, which prevent the ingress of smoke or toxic gases by maintaining a higher air pressure inside the mask than outside. Consequently, any leak will cause fresh air to leak out of the mask, rather than toxic air to leak in.
Standards and Certification
In North America, respirators are required to be certified by the National Institute for Occupational Safety and Health (NIOSH). Additional voluntary consensus standards have been developed for respiratory protective devices for specific applications that go beyond the minimum governmental requirements. For smoke hoods, or RPEDs, ASTM International has developed ASTM E2952, Standard Specification for Air-Purifying Respiratory Protective Smoke Escape Devices (RPED). Conformity to voluntary standards like ASTM E2952 is often shown through third-party certification such as those issued by the Safety Equipment Institute (SEI).
References
Personal protective equipment
Respirators
Emergency equipment
Headgear | Smoke hood | [
"Engineering",
"Environmental_science"
] | 899 | [
"Safety engineering",
"Personal protective equipment",
"Environmental social science"
] |
2,376,557 | https://en.wikipedia.org/wiki/Engineering%20plastic | Engineering plastics are a group of plastic materials that have better mechanical or thermal properties than the more widely used commodity plastics (such as polystyrene, polyvinyl chloride, polypropylene and polyethylene).
Engineering plastics are more expensive than standard plastics, therefore they are produced in lower quantities and tend to be used for smaller objects or low-volume applications (such as mechanical parts), rather than for bulk and high-volume ends (like containers and packaging). Engineering plastics have a higher heat resistance than standard plastics and are continuously usable at temperatures up to about 150 °C.
The term usually refers to thermoplastic materials rather than thermosetting ones. Examples of engineering plastics include polyamides (PA, nylons), used for skis and ski boots; polycarbonates (PC), used in motorcycle helmets and optical discs; and poly(methyl methacrylate) (PMMA, major brand names acrylic glass and plexiglass), used e.g. for taillights and protective shields. The currently most-consumed engineering plastic is acrylonitrile butadiene styrene (ABS), used for e.g. car bumpers, dashboard trim and Lego bricks.
Engineering plastics have gradually replaced traditional engineering materials such as metal, glass or ceramics in many applications. Besides equalling or surpassing them in strength, weight, and other properties, engineering plastics are much easier to manufacture, especially in complicated shapes. Across all different product types, more than 22 million tonnes of engineering plastics were consumed worldwide in 2020.
Relevant properties
Each engineering plastic usually has a unique combination of properties that may make it the material of choice for some application. For example, polycarbonates are highly resistant to impact, while polyamides are highly resistant to abrasion. Other properties exhibited by various grades of engineering plastics include heat resistance, mechanical strength, rigidity, chemical stability, self lubrication (specially used in manufacturing of gears and skids) and fire safety.
Examples
Acrylonitrile butadiene styrene (ABS)
Nylon 6
Nylon 6-6
Polyamides (PA)
Polybutylene terephthalate (PBT)
Polycarbonates (PC)
Polyetheretherketone (PEEK)
Polyetherimide (PEI)
Polyetherketoneketone (PEKK)
Polyetherketones (PEK)
Polyketone (PK)
Polyethylene terephthalate (PET)
Polyimides
Polyoxymethylene plastic (POM / Acetal)
Polyphenylene sulfide (PPS)
Polyphenylene oxide (PPO)
Polysulphone (PSU)
Polytetrafluoroethylene (PTFE / Teflon)
Poly(methyl methacrylate) (PMMA)
See also
High-performance plastics
References
Plastics | Engineering plastic | [
"Physics",
"Chemistry"
] | 593 | [
"Polymer stubs",
"Unsolved problems in physics",
"Amorphous solids",
"Organic chemistry stubs",
"Plastics"
] |
2,377,273 | https://en.wikipedia.org/wiki/Functional%20requirement | In software engineering and systems engineering, a functional requirement defines a function of a system or its component, where a function is described as a summary (or specification or statement) of behavior between inputs and outputs.
Functional requirements may involve calculations, technical details, data manipulation and processing, and other specific functionality that define what a system is supposed to accomplish. Behavioral requirements describe all the cases where the system uses the functional requirements, these are captured in use cases. Functional requirements are supported by non-functional requirements (also known as "quality requirements"), which impose constraints on the design or implementation (such as performance requirements, security, or reliability). Generally, functional requirements are expressed in the form "system must do <requirement>," while non-functional requirements take the form "system shall be <requirement>." The plan for implementing functional requirements is detailed in the system design, whereas non-functional requirements are detailed in the system architecture.
As defined in requirements engineering, functional requirements specify particular results of a system. This should be contrasted with non-functional requirements, which specify overall characteristics such as cost and reliability. Functional requirements drive the application architecture of a system, while non-functional requirements drive the technical architecture of a system.
In some cases, a requirements analyst generates use cases after gathering and validating a set of functional requirements. The hierarchy of functional requirements collection and change, broadly speaking, is: user/stakeholder request → analyze → use case → incorporate. Stakeholders make a request; systems engineers attempt to discuss, observe, and understand the aspects of the requirement; use cases, entity relationship diagrams, and other models are built to validate the requirement; and, if documented and approved, the requirement is implemented/incorporated. Each use case illustrates behavioral scenarios through one or more functional requirements. Often, though, an analyst will begin by eliciting a set of use cases, from which the analyst can derive the functional requirements that must be implemented to allow a user to perform each use case.
Process
A typical functional requirement will contain a unique name and number, a brief summary, and a rationale. This information is used to help the reader understand why the requirement is needed, and to track the requirement through the development of the system. The crux of the requirement is the description of the required behavior, which must be clear and readable. The described behavior may come from organizational or business rules, or it may be discovered through elicitation sessions with users, stakeholders, and other experts within the organization. Many requirements may be uncovered during the use case development. When this happens, the requirements analyst may create a placeholder requirement with a name and summary, and research the details later, to be filled in when they are better known.
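As a minimal illustration of such a requirement record (the identifier, fields and wording below are invented for the example, not taken from any particular standard):

```python
from dataclasses import dataclass

@dataclass
class FunctionalRequirement:
    """One requirement record: unique name/number, summary, rationale, behavior."""
    req_id: str      # unique name and number
    summary: str     # brief summary
    rationale: str   # why the requirement is needed
    behavior: str    # the required behavior, stated clearly and readably

fr = FunctionalRequirement(
    req_id="FR-017",
    summary="Lock account after repeated failed logins",
    rationale="Limits brute-force attacks, per the organization's security rules.",
    behavior="The system must lock a user account after five consecutive "
             "failed login attempts within 15 minutes.",
)
print(fr.req_id, "-", fr.summary)
```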
See also
Function (computer science)
Function (engineering)
Function (mathematics)
Function point
Functional decomposition
Functional design
Functional model
Separation of concerns
Software sizing
References
Software requirements
Systems engineering | Functional requirement | [
"Engineering"
] | 588 | [
"Software engineering",
"Systems engineering",
"Software requirements"
] |
2,377,391 | https://en.wikipedia.org/wiki/Fuse%20%28hydraulic%29 | In hydraulic systems, a fuse (or velocity fuse) is a component which prevents the sudden loss of hydraulic fluid pressure. It is a safety feature, designed to allow systems to continue operating, or at least to not fail catastrophically, in the event of a system breach. It does this by stopping or greatly restricting the flow of hydraulic fluid through the fuse if the flow exceeds a threshold.
The term "fuse" is used here in analogy with electrical fuses which perform a similar function.
Hydraulic systems rely on high pressures (usually over 7000 kPa) to work properly. If a hydraulic system loses fluid pressure, such as due to a burst hydraulic hose, it will become inoperative and components such as actuators may collapse. This is an undesirable condition in life-critical systems such as aircraft or heavy machinery, such as forklifts. Hydraulic fuses help guard against catastrophic failure of a hydraulic system by automatically isolating the defective branch.
When a hydraulic system is damaged, there is generally a rapid flow of hydraulic fluid towards the breach. Most hydraulic fuses detect this flow and seal themselves (or restrict flow) if the flow exceeds a predetermined limit. There are many different fuse designs but most involve a passive spring-controlled mechanism which closes when the pressure differential across the fuse becomes excessive.
Many gas station pumps are equipped with a velocity fuse to limit gasoline flow. The fuse can be heard to engage with a "click" on some pumps if the nozzle trigger is depressed fully. A slight reduction in fuel flow can be observed. The fuse resets instantly upon releasing the trigger.
Types
There are two types of hydraulic fuses. The first acts like a pressure relief valve, venting in case of a pressure surge. The second behaves more or less like a check valve; the difference is that a check valve prevents upstream fluid from flowing back and venting out, whereas a fuse sits upstream of the potential venting area and stops fluid from venting forward through it.
Hydraulic fuses are not a perfect solution to fluid loss. They will probably be ineffective against slow, seeping loss of hydraulic fluid, and they may be unable to prevent fluid loss in the event of a catastrophic system failure involving multiple breaches to hydraulic lines. Also, when a fuse activates it is likely that the system will no longer function as designed, as hydraulically-actuated devices may be present in the section isolated by the fuse.
Depending on the system, hydraulic fuses may reset automatically after a delay, or may require manual re-opening. Forklift main hoist cylinders are usually equipped with a fuse built into the hose adapter at the base of the cylinder that resets immediately upon stopping the flow.
In dam spillways
In the design of a spillway for a dam, a fuse plug is a water retaining structure designed to wash out in a controlled fashion if the main dam is in danger of overtopping due to flood, and if the normal spillway channel is insufficient to control the overtopping.
See also
Relief valve
Safety valve
Hydraulics
Safety valves | Fuse (hydraulic) | [
"Physics",
"Chemistry",
"Engineering"
] | 629 | [
"Physical systems",
"Hydraulics",
"Fluid dynamics",
"Industrial safety devices",
"Safety valves"
] |
2,378,039 | https://en.wikipedia.org/wiki/Automotive%20Electronics%20Council | The Automotive Electronics Council (AEC) is an organization originally established in the 1990s by Chrysler, Ford, and GM for the purpose of establishing common part-qualification and quality-system standards.
The AEC Component Technical Committee is the standardization body for establishing standards for reliable, high quality electronic components. Components meeting these specifications are suitable for use in the harsh automotive environment without additional component-level qualification testing. The technical documents developed by the AEC Component Technical Committee are available at the AEC web site.
Most commonly referenced AEC documents are:
AEC-Q100 "Failure Mechanism Based Stress Test Qualification For Integrated Circuits"
AEC-Q101 "Failure Mechanism Based Stress Test Qualification For Discrete Semiconductors"
AEC-Q200 "Stress Test Qualification For Passive Components"
See also
VW 80808 ()
References
External links
Automotive Electronics Council
Automotive technologies
Automotive electronics
Electrical engineering organizations
Motor trade associations | Automotive Electronics Council | [
"Engineering"
] | 182 | [
"Electrical engineering",
"Electrical engineering organizations"
] |
2,378,378 | https://en.wikipedia.org/wiki/Liquid%20chromatography%E2%80%93mass%20spectrometry | Liquid chromatography–mass spectrometry (LC–MS) is an analytical chemistry technique that combines the physical separation capabilities of liquid chromatography (or HPLC) with the mass analysis capabilities of mass spectrometry (MS). Coupled chromatography – MS systems are popular in chemical analysis because the individual capabilities of each technique are enhanced synergistically. While liquid chromatography separates mixtures with multiple components, mass spectrometry provides spectral information that may help to identify (or confirm the suspected identity of) each separated component. MS is not only sensitive, but provides selective detection, relieving the need for complete chromatographic separation. LC–MS is also appropriate for metabolomics because of its good coverage of a wide range of chemicals. This tandem technique can be used to analyze biochemical, organic, and inorganic compounds commonly found in complex samples of environmental and biological origin. Therefore, LC–MS may be applied in a wide range of sectors including biotechnology, environment monitoring, food processing, and pharmaceutical, agrochemical, and cosmetic industries. Since the early 2000s, LC–MS (or more specifically LC–MS/MS) has also begun to be used in clinical applications.
In addition to the liquid chromatography and mass spectrometry devices, an LC–MS system contains an interface that efficiently transfers the separated components from the LC column into the MS ion source. The interface is necessary because the LC and MS devices are fundamentally incompatible. While the mobile phase in a LC system is a pressurized liquid, the MS analyzers commonly operate under high vacuum. Thus, it is not possible to directly pump the eluate from the LC column into the MS source. Overall, the interface is a mechanically simple part of the LC–MS system that transfers the maximum amount of analyte, removes a significant portion of the mobile phase used in LC and preserves the chemical identity of the chromatography products (chemically inert). As a requirement, the interface should not interfere with the ionizing efficiency and vacuum conditions of the MS system. Nowadays, most extensively applied LC–MS interfaces are based on atmospheric pressure ionization (API) strategies like electrospray ionization (ESI), atmospheric-pressure chemical ionization (APCI), and atmospheric pressure photoionization (APPI). These interfaces became available in the 1990s after a two decade long research and development process.
History
The coupling of chromatography with MS is a well developed chemical analysis strategy dating back to the 1950s. Gas chromatography (GC)–MS was originally introduced in 1952, when A. T. James and A. J. P. Martin were trying to develop tandem separation – mass analysis techniques. In GC, the analytes are eluted from the separation column as a gas and the connection with electron ionization (EI) or chemical ionization (CI) ion sources in the MS system was a technically simpler challenge. Because of this, the development of GC-MS systems was faster than LC–MS and such systems were first commercialized in the 1970s. The development of LC–MS systems took longer than GC-MS and was directly related to the development of proper interfaces. Victor Talrose and his collaborators in Russia started the development of LC–MS in the late 1960s, when they first used capillaries to connect an LC column to an EI source. A similar strategy was investigated by McLafferty and collaborators in 1973, who coupled the LC column to a CI source, which allowed a higher liquid flow into the source. This was the first and most obvious way of coupling LC with MS, and was known as the capillary inlet interface. This pioneering interface for LC–MS had the same analysis capabilities as GC-MS and was limited to rather volatile analytes and non-polar compounds with low molecular mass (below 400 Da). In the capillary inlet interface, the evaporation of the mobile phase inside the capillary was one of the main issues. Within the first years of development of LC–MS, on-line and off-line alternatives were proposed as coupling alternatives. In general, off-line coupling involved fraction collection, evaporation of solvent, and transfer of analytes to the MS using probes. The off-line analyte treatment process was time-consuming and there was an inherent risk of sample contamination. It was quickly realized that the analysis of complex mixtures would require the development of a fully automated on-line coupling solution in LC–MS.
The key to the success and widespread adoption of LC–MS as a routine analytical tool lies in the interface and ion source between the liquid-based LC and the vacuum-based MS. The following interfaces were stepping-stones on the way to the modern atmospheric-pressure ionization interfaces, and are described for historical interest.
Moving-belt interface
The moving-belt interface (MBI) was developed by McFadden et al. in 1977 and commercialized by Finnigan. This interface consisted of an endless moving belt onto which the LC column effluent was deposited in a band. On the belt, the solvent was evaporated by gently heating and efficiently exhausting the solvent vapours under reduced pressure in two vacuum chambers. After the liquid phase was removed, the belt passed over a heater which flash desorbed the analytes into the MS ion source. One of the significant advantages of the MBI was its compatibility with a wide range of chromatographic conditions. MBI was successfully used for LC–MS applications between 1978 and 1990 because it allowed coupling of LC to MS devices using EI, CI, and fast-atom bombardment (FAB) ion sources. The most common MS systems connected by MBI interfaces to LC columns were magnetic sector and quadrupole instruments. MBI interfaces for LC–MS allowed MS to be widely applied in the analysis of drugs, pesticides, steroids, alkaloids, and polycyclic aromatic hydrocarbons. This interface is no longer used because of its mechanical complexity and the difficulties associated with belt renewal (or cleaning) as well as its inability to handle very labile biomolecules.
Direct liquid-introduction interface
The direct liquid-introduction (DLI) interface was developed in 1980. This interface was intended to solve the problem of evaporation of liquid inside the capillary inlet interface. In DLI, a small portion of the LC flow was forced through a small aperture or diaphragm (typically 10 μm in diameter) to form a liquid jet composed of small droplets that were subsequently dried in a desolvation chamber. The analytes were ionized using a solvent-assisted chemical ionization source, where the LC solvents acted as reagent gases. To use this interface, it was necessary to split the flow coming out of the LC column because only a small portion of the effluent (10 to 50 μl/min out of 1 ml/min) could be introduced into the source without raising the vacuum pressure of the MS system too high. Alternately, Henion at Cornell University had success with using micro-bore LC methods so that the entire (low) flow of the LC could be used. One of the main operational problems of the DLI interface was the frequent clogging of the diaphragm orifices. The DLI interface was used between 1982 and 1985 for the analysis of pesticides, corticosteroids, metabolites in horse urine, erythromycin, and vitamin B12. However, this interface was replaced by the thermospray interface, which removed the flow rate limitations and the issues with the clogging diaphragms.
A related device was the particle beam interface (PBI), developed by Willoughby and Browner in 1984. Particle beam interfaces took over the wide applications of MBI for LC–MS in 1988. The PBI operated by using a helium gas nebulizer to spray the eluant into the vacuum, drying the droplets and pumping away the solvent vapour (using a jet separator) while the stream of monodisperse dried particles containing the analyte entered the source. Drying the droplets outside of the source volume, and using a jet separator to pump away the solvent vapour, allowed the particles to enter and be vapourized in a low-pressure EI source. As with the MBI, the ability to generate library-searchable EI spectra was a distinct advantage for many applications. Commercialized by Hewlett Packard, and later by VG and Extrel, it enjoyed moderate success, but has been largely supplanted by the atmospheric pressure interfaces such as electrospray and APCI which provide a broader range of compound coverage and applications.
Thermospray interface
The thermospray (TSP) interface was developed in 1980 by Marvin Vestal and co-workers at the University of Houston. It was commercialized by Vestec and several of the major mass spectrometer manufacturers. The interface resulted from a long-term research project intended to find an LC–MS interface capable of handling high flow rates (1 ml/min) and avoiding the flow split in DLI interfaces. The TSP interface was composed of a heated probe, a desolvation chamber, and an ion focusing skimmer. The LC effluent passed through the heated probe and emerged as a jet of vapor and small droplets flowing into the desolvation chamber at low pressure. Initially operated with a filament or discharge as the source of ions (thereby acting as a CI source for vapourized analyte), it was soon discovered that ions were also observed when the filament or discharge was off. This could be attributed to either direct emission of ions from the liquid droplets as they evaporated in a process related to electrospray ionization or ion evaporation, or to chemical ionization of vapourized analyte molecules from buffer ions (such as ammonium acetate). The fact that multiply-charged ions were observed from some larger analytes suggests that direct analyte ion emission was occurring under at least some conditions. The interface was able to handle up to 2 ml/min of eluate from the LC column and would efficiently introduce it into the MS vacuum system. TSP was also more suitable for LC–MS applications involving reversed phase liquid chromatography (RP-LC). With time, the mechanical complexity of TSP was simplified, and this interface became popular as the first ideal LC–MS interface for pharmaceutical applications comprising the analysis of drugs, metabolites, conjugates, nucleosides, peptides, natural products, and pesticides. The introduction of TSP marked a significant improvement for LC–MS systems and was the most widely applied interface until the beginning of the 1990s, when it began to be replaced by interfaces involving atmospheric pressure ionization (API).
FAB based interfaces
The first fast atom bombardment (FAB) and continuous flow-FAB (CF-FAB) interfaces were developed in 1985 and 1986 respectively. Both interfaces were similar, but they differed in that the first used a porous frit probe as connecting channel, while CF-FAB used a probe tip. From these, the CF-FAB was more successful as an LC–MS interface and was useful to analyze non-volatile and thermally labile compounds. In these interfaces, the LC effluent passed through the frit or CF-FAB channels to form a uniform liquid film at the tip. There, the liquid was bombarded with ion beams or high energy atoms (fast atoms). For stable operation, the FAB based interfaces were able to handle liquid flow rates of only 1–15 μl/min and were also restricted to microbore and capillary columns. In order to be used in FAB MS ionization sources, the analytes of interest had to be mixed with a matrix (e.g., glycerol) that could be added before or after the separation in the LC column. FAB based interfaces were extensively used to characterize peptides, but lost applicability with the advent of electrospray based interfaces in 1988.
Liquid chromatography
Liquid chromatography is a method of physical separation in which the components of a liquid mixture are distributed between two immiscible phases, i.e., stationary and mobile. The practice of LC can be divided into five categories, i.e., adsorption chromatography, partition chromatography, ion-exchange chromatography, size-exclusion chromatography, and affinity chromatography. Among these, the most widely used variant is the reverse-phase (RP) mode of the partition chromatography technique, which makes use of a nonpolar (hydrophobic) stationary phase and a polar mobile phase. In common applications, the mobile phase is a mixture of water and other polar solvents (e.g., methanol, isopropanol, and acetonitrile), and the stationary matrix is prepared by attaching long-chain alkyl groups (e.g., n-octadecyl or C18) to the external and internal surfaces of irregularly or spherically shaped 5 μm diameter porous silica particles.
In HPLC, typically 20 μl of the sample of interest are injected into the mobile phase stream delivered by a high pressure pump. The mobile phase containing the analytes permeates through the stationary phase bed in a definite direction. The components of the mixture are separated depending on their chemical affinity with the mobile and stationary phases. The separation occurs after repeated sorption and desorption steps occurring when the liquid interacts with the stationary bed. The liquid solvent (mobile phase) is delivered under high pressure (up to 400 bar or 5800 psi) into a packed column containing the stationary phase. The high pressure is necessary to achieve a constant flow rate for reproducible chromatography experiments. Depending on the partitioning between the mobile and stationary phases, the components of the sample will flow out of the column at different times. The column is the most important component of the LC system and is designed to withstand the high pressure of the liquid. Conventional LC columns are 100–300 mm long with outer diameter of 6.4 mm (1/4 inch) and internal diameter of 3.0–4.6 mm. For applications involving LC–MS, the length of chromatography columns can be shorter (30–50 mm) with 3–5 μm diameter packing particles. In addition to the conventional model, other LC columns are the narrow bore, microbore, microcapillary, and nano-LC models. These columns have smaller internal diameters, allow for a more efficient separation, and handle liquid flows under 1 ml/min (the conventional flow-rate). In order to improve separation efficiency and peak resolution, ultra-high-performance liquid chromatography (UHPLC) can be used instead of HPLC. This LC variant uses columns packed with smaller silica particles (~1.7 μm diameter) and requires higher operating pressures in the range of 310000 to 775000 torr (6000 to 15000 psi, 400 to 1034 bar).
Mass spectrometry
Mass spectrometry (MS) is an analytical technique that measures the mass-to-charge ratio (m/z) of charged particles (ions). Although there are many different kinds of mass spectrometers, all of them make use of electric or magnetic fields to manipulate the motion of ions produced from an analyte of interest and determine their m/z. The basic components of a mass spectrometer are the ion source, the mass analyzer, the detector, and the data and vacuum systems. The ion source is where the components of a sample introduced in a MS system are ionized by means of electron beams, photon beams (UV lights), laser beams or corona discharge. In the case of electrospray ionization, the ion source moves ions that exist in liquid solution into the gas phase. The ion source converts and fragments the neutral sample molecules into gas-phase ions that are sent to the mass analyzer. While the mass analyzer applies the electric and magnetic fields to sort the ions by their masses, the detector measures and amplifies the ion current to calculate the abundances of each mass-resolved ion. In order to generate a mass spectrum that a human eye can easily recognize, the data system records, processes, stores, and displays data in a computer.
The mass spectrum can be used to determine the mass of the analytes, their elemental and isotopic composition, or to elucidate the chemical structure of the sample. MS is an experiment that must take place in the gas phase and under vacuum (1.33 × 10⁻² to 1.33 × 10⁻⁶ Pa). Therefore, the development of devices facilitating the transition from samples at higher pressure and in condensed phase (solid or liquid) into a vacuum system has been essential to develop MS as a potent tool for identification and quantification of organic compounds like peptides. MS is now in very common use in analytical laboratories that study physical, chemical, or biological properties of a great variety of compounds. Among the many different kinds of mass analyzers, the ones that find application in LC–MS systems are the quadrupole, time-of-flight (TOF), ion traps, and hybrid quadrupole-TOF (QTOF) analyzers.
Interfaces
For a long time it was difficult to interface a liquid-phase technique (HPLC), with its continuously flowing eluate, to a gas-phase technique carried out in a vacuum. The advent of electrospray ionization changed this. Currently, the most common LC–MS interfaces are electrospray ionization (ESI), atmospheric pressure chemical ionization (APCI), and atmospheric pressure photo-ionization (APPI). These are newer MS ion sources that facilitate the transition from a high pressure environment (HPLC) to high vacuum conditions needed at the MS analyzer. Although these interfaces are described individually, they can also be commercially available as dual ESI/APCI, ESI/APPI, or APCI/APPI ion sources. Various deposition and drying techniques were used in the past (e.g., moving belts) but the most common of these was the off-line MALDI deposition. A new approach still under development, called the direct-EI LC–MS interface, couples a nano HPLC system and an electron ionization equipped mass spectrometer.
Electrospray ionization (ESI)
The ESI interface for LC–MS systems was developed by Fenn and collaborators in 1988. This ion source/interface can be used for the analysis of moderately polar and even very polar molecules (e.g., metabolites, xenobiotics, peptides, nucleotides, polysaccharides). The liquid eluate coming out of the LC column is directed into a metal capillary kept at 3 to 5 kV and is nebulized by a high-velocity coaxial flow of gas at the tip of the capillary, creating a fine spray of charged droplets in front of the entrance to the vacuum chamber. To avoid contamination of the vacuum system by buffers and salts, this capillary is usually perpendicularly located at the inlet of the MS system, in some cases with a counter-current of dry nitrogen in front of the entrance through which ions are directed by the electric field. In some sources, rapid droplet evaporation and thus maximum ion emission is achieved by mixing an additional stream of hot gas with the spray plume in front of the vacuum entrance. In other sources, the droplets are drawn through a heated capillary tube as they enter the vacuum, promoting droplet evaporation and ion emission. These methods of increasing droplet evaporation now allow liquid flow rates of 1–2 mL/min to be used while still achieving efficient ionisation and high sensitivity. Thus, while the use of 1–3 mm microbore columns and lower flow rates of 50–200 μl/min was commonly considered necessary for optimum operation, this limitation is no longer as important, and the higher column capacity of larger bore columns can now be advantageously employed with ESI LC–MS systems. Positively and negatively charged ions can be created by switching polarities, and it is possible to acquire alternate positive and negative mode spectra rapidly within the same LC run. While most large molecules (greater than MW 1500–2000) produce multiply charged ions in the ESI source, the majority of smaller molecules produce singly charged ions.
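Because ESI charges molecules mainly by protonation, the m/z values of a multiply charged ion follow directly from the neutral mass: an [M + nH]n+ ion appears at roughly (M + n × 1.007)/n. A minimal Python sketch of this arithmetic, using a hypothetical 40 kDa protein and illustrative charge states (none of these numbers come from the text):

```python
# Illustrative ESI charge-state arithmetic; mass and charges are hypothetical.
PROTON_MASS = 1.00728  # Da

def esi_mz(neutral_mass: float, charge: int) -> float:
    """m/z of an [M + nH]n+ ion: add n proton masses, divide by n."""
    return (neutral_mass + charge * PROTON_MASS) / charge

neutral_mass = 40_000.0  # Da, hypothetical protein
for n in range(20, 26):
    print(f"charge +{n}: m/z = {esi_mz(neutral_mass, n):.1f}")
```

Each higher charge state appears at a lower m/z, which is how multiply charged ESI ions bring large molecules into the limited m/z range of common analyzers.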
Atmospheric pressure chemical ionization (APCI)
The development of the APCI interface for LC–MS started with Horning and collaborators in the early 1970s. However, its commercial application was introduced at the beginning of the 1990s after Henion and collaborators improved the LC–APCI–MS interface in 1986. The APCI ion source/interface can be used to analyze small, neutral, relatively non-polar, and thermally stable molecules (e.g., steroids, lipids, and fat soluble vitamins). These compounds are not well ionized using ESI. In addition, APCI can also handle mobile phase streams containing buffering agents. The liquid from the LC system is pumped through a capillary and there is also nebulization at the tip, where a corona discharge takes place. First, the ionizing gas surrounding the interface and the mobile phase solvent are subject to chemical ionization at the ion source. Later, these ions react with the analyte and transfer their charge. The sample ions then pass through small orifice skimmers by means of ion-focusing lenses. Once inside the high vacuum region, the ions are subject to mass analysis. This interface can be operated in positive and negative charge modes and singly-charged ions are mainly produced. The APCI ion source can also handle flow rates between 500 and 2000 μl/min and it can be directly connected to conventional 4.6 mm ID columns.
Atmospheric pressure photoionization (APPI)
The APPI interface for LC–MS was developed simultaneously by Bruins and Syage in 2000. APPI is another LC–MS ion source/interface for the analysis of neutral compounds that cannot be ionized using ESI. This interface is similar to the APCI ion source, but instead of a corona discharge, the ionization occurs by using photons coming from a discharge lamp. In the direct-APPI mode, singly charged analyte molecular ions are formed by absorption of a photon and ejection of an electron. In the dopant-APPI mode, an easily ionizable compound (the dopant) is added to the mobile phase or the nebulizing gas to promote a reaction of charge-exchange between the dopant molecular ion and the analyte. The ionized sample is later transferred to the mass analyzer at high vacuum as it passes through small orifice skimmers.
Applications
The coupling of MS with LC systems is attractive because liquid chromatography can separate delicate and complex natural mixtures whose chemical composition needs to be well established (e.g., biological fluids, environmental samples, and drugs). Further, LC–MS has applications in volatile explosive residue analysis. Nowadays, LC–MS has become one of the most widely used chemical analysis techniques because more than 85% of natural chemical compounds are polar and thermally labile and GC-MS cannot process these samples. As an example, HPLC–MS is regarded as the leading analytical technique for proteomics and pharmaceutical laboratories. Other important applications of LC–MS include the analysis of food, pesticides, and plant phenols.
Pharmacokinetics
LC–MS is widely used in the field of bioanalysis and is especially involved in pharmacokinetic studies of pharmaceuticals. Pharmacokinetic studies are needed to determine how quickly a drug will be cleared from the body organs and the hepatic blood flow. MS analyzers are useful in these studies because of their shorter analysis time, and higher sensitivity and specificity compared to UV detectors commonly attached to HPLC systems. One major advantage is the use of tandem MS–MS, where the detector may be programmed to select certain ions to fragment. The measured quantity is the sum of molecule fragments chosen by the operator. As long as there are no interferences or ion suppression in LC–MS, the LC separation can be quite quick.
Proteomics/metabolomics
LC–MS is used in proteomics as a method to detect and identify the components of a complex mixture. The bottom-up proteomics LC–MS approach generally involves protease digestion and denaturation using trypsin as a protease, urea to denature the tertiary structure, and iodoacetamide to modify the cysteine residues. After digestion, LC–MS is used for peptide mass fingerprinting, or LC–MS/MS (tandem MS) is used to derive the sequences of individual peptides. LC–MS/MS is most commonly used for proteomic analysis of complex samples where peptide masses may overlap even with high-resolution mass spectrometry. Complex biological samples (e.g., human serum) may be analyzed in modern LC–MS/MS systems, which can identify over 1000 proteins. However, this high level of protein identification is possible only after separating the sample by means of SDS-PAGE gel or HPLC-SCX. Recently, LC–MS/MS has been applied to search for peptide biomarkers. Examples are the recent discovery and validation of peptide biomarkers for four major bacterial respiratory tract pathogens (Staphylococcus aureus, Moraxella catarrhalis, Haemophilus influenzae and Streptococcus pneumoniae) and the SARS-CoV-2 virus.
LC–MS has emerged as one of the most commonly used techniques in global metabolite profiling of biological tissue (e.g., blood plasma, serum, urine). LC–MS is also used for the analysis of natural products and the profiling of secondary metabolites in plants. In this regard, MS-based systems are useful to acquire more detailed information about the wide spectrum of compounds from complex biological samples. LC–nuclear magnetic resonance (NMR) is also used in plant metabolomics, but this technique can only detect and quantify the most abundant metabolites. LC–MS has been useful to advance the field of plant metabolomics, which aims to study the plant system at the molecular level, providing an unbiased characterization of the plant metabolome in response to its environment. The first application of LC–MS in plant metabolomics was the detection of a wide range of highly polar metabolites, oligosaccharides, amino acids, amino sugars, and sugar nucleotides from Cucurbita maxima phloem tissues. Another example of LC–MS in plant metabolomics is the efficient separation and identification of glucose, sucrose, raffinose, stachyose, and verbascose from leaf extracts of Arabidopsis thaliana.
Drug development
LC–MS is frequently used in drug development because it allows quick molecular weight confirmation and structure identification. These features speed up the process of generating, testing, and validating a discovery starting from a vast array of products with potential application. LC–MS applications for drug development are highly automated methods used for peptide mapping, glycoprotein mapping, lipidomics, natural products dereplication, bioaffinity screening, in vivo drug screening, metabolic stability screening, metabolite identification, impurity identification, quantitative bioanalysis, and quality control.
See also
Gas chromatography–mass spectrometry
Capillary electrophoresis–mass spectrometry
Ion-mobility spectrometry–mass spectrometry
References
Further reading
Chromatography
Mass spectrometry | Liquid chromatography–mass spectrometry | [
"Physics",
"Chemistry"
] | 5,829 | [
"Chromatography",
"Spectrum (physical sciences)",
"Separation processes",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
17,950,868 | https://en.wikipedia.org/wiki/Locally%20discrete%20collection | In mathematics, particularly topology, collections of subsets are said to be locally discrete if they look like they have precisely one element from a local point of view. The study of locally discrete collections is worthwhile as Bing's metrization theorem shows.
Formal definition
Let X be a topological space. A collection {Ga} of subsets of X is said to be locally discrete if each point of the space has a neighbourhood intersecting at most one element of the collection. A collection of subsets of X is said to be countably locally discrete if it is a countable union of locally discrete collections.
Properties and examples
1. Locally discrete collections are always locally finite. See the page on local finiteness.
2. If a collection of subsets of a topological space X is locally discrete, it must satisfy the property that each point of the space belongs to at most one element of the collection. This means that only collections of pairwise disjoint sets can be locally discrete.
3. A Hausdorff space cannot have a locally discrete basis unless it is itself discrete. The same property holds for a T1 space.
4. The following is known as Bing's metrization theorem:
A space X is metrizable iff it is regular and has a basis that is countably locally discrete.
5. A countable collection of sets is necessarily countably locally discrete. Therefore, if X is a metrizable space with a countable basis, one implication of Bing's metrization theorem holds. In fact, Bing's metrization theorem is almost a corollary of the Nagata-Smirnov theorem.
See also
Locally finite collection
Nagata-Smirnov metrization theorem
Bing metrization theorem
References
James Munkres (1999). Topology, 2nd edition. Prentice Hall.
Topology | Locally discrete collection | [
"Physics",
"Mathematics"
] | 371 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
396,575 | https://en.wikipedia.org/wiki/F%CF%83%20set | In mathematics, an Fσ set (said F-sigma set) is a countable union of closed sets. The notation originated in French with F for fermé (French: closed) and σ for somme (French: sum, union).
The complement of an Fσ set is a Gδ set.
Fσ is the same as Σ⁰₂ in the Borel hierarchy.
Examples
Each closed set is an Fσ set.
The set of rationals is an Fσ set in ℝ. More generally, any countable set in a T1 space is an Fσ set, because every singleton is closed.
The set of irrationals is not an Fσ set.
In metrizable spaces, every open set is an Fσ set.
The union of countably many Fσ sets is an Fσ set, and the intersection of finitely many Fσ sets is an Fσ set.
The set of all points (x, y) in the Cartesian plane such that y/x is rational is an Fσ set because it can be expressed as the union of all the lines passing through the origin with rational slope:

⋃_{r ∈ ℚ} { (x, y) : y = rx },

where ℚ is the set of rational numbers, which is a countable set.
See also
Gδ set — the dual notion.
Borel hierarchy
P-space, any space having the property that every Fσ set is closed
References
Topology
Descriptive set theory | Fσ set | [
"Physics",
"Mathematics"
] | 277 | [
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
396,611 | https://en.wikipedia.org/wiki/Leon%20Cooper | Leon N. Cooper (né Kupchik; February 28, 1930 – October 23, 2024) was an American theoretical physicist and neuroscientist. He won the Nobel Prize in Physics for his work on superconductivity. Cooper developed the concept of Cooper pairs and collaborated with John Bardeen and John Robert Schrieffer to develop the BCS theory of conventional superconductivity. In neuroscience, Cooper co-developed the BCM theory of synaptic plasticity.
Biography
Childhood and education
Leon N. Kupchik was born in the Bronx, New York City, on February 28, 1930. His middle initial N. does not stand for anything, though some sources erroneously suggested his middle name was Neil.
His father Irving Kupchik was from Belarus and moved to the United States after the Russian Revolution in 1917. His mother Anna (née Zola) Kupchik was from Poland; she died when Leon was seven. His father later changed the family's surname from Kupchik to Cooper when he remarried.
Leon attended the Bronx High School of Science, graduating in 1947.
He then studied at Columbia University in nearby Upper Manhattan, receiving a Bachelor of Arts degree in 1951. He remained at Columbia for graduate school, obtaining a Master of Arts degree in 1953 and a Doctor of Philosophy (PhD) in 1954. His PhD was on the subject of muonic atoms, with Robert Serber as his thesis advisor.
Scientific career
Cooper spent one year as a postdoctoral researcher at the Institute for Advanced Study in Princeton, New Jersey. He then taught at the University of Illinois at Urbana–Champaign and Ohio State University before joining Brown University in 1958. He would remain at Brown for the rest of his career.
Cooper founded Brown's Institute for Brain and Neural Systems in 1973, becoming its first director. In 1974 he was appointed Professor of Science at Brown, an endowed chair funded by Thomas J. Watson Sr. Cooper held visiting research positions at various institutions including the Institute for Advanced Study in Princeton, New Jersey, and at CERN (European Organization for Nuclear Research) in Geneva, Switzerland.
Along with colleague Charles Elbaum, he founded the tech company Nestor in 1975, which sought commercial applications for artificial neural networks. Nestor partnered with Intel to develop the Ni1000 neural network computer chip in 1994.
Personal life
Cooper first married Martha Kennedy, with whom he had two daughters. In 1969, he married for a second time, to Kay Allard.
He died at his home in Providence, Rhode Island, on October 23, 2024, at the age of 94.
Research
Superconductivity
While Cooper was a postdoc in Princeton, he was approached by John Bardeen, a professor at the University of Illinois, and Bardeen's graduate student John Robert Schrieffer. Bardeen and Schrieffer were working on superconductivity, a topic which was new to Cooper but he agreed to collaborate with them. Superconductivity had been experimentally discovered in 1911, but there was no theoretical explanation for the phenomenon. Cooper moved to Illinois as a postdoc to work with Bardeen.
After a year of theoretical investigation, Cooper developed the idea of a quasiparticle composed of two bound electrons, now known as a Cooper pair. Cooper published his concept of Cooper pairs in Physical Review in September 1956. The movement of Cooper pairs through a low-temperature metal would be almost unimpeded, producing a very low electrical resistance. After further development, Bardeen, Cooper and Schrieffer showed how this could produce superconductivity, publishing their theory in Physical Review in two papers during 1957. This theory became known as the BCS theory, after the authors' initials, and is widely accepted as the explanation for conventional superconductivity. Bardeen, Schrieffer and Cooper were awarded the Nobel Prize in Physics in 1972 for their theory.
Neuroscience
After joining Brown University, Cooper became interested in neuroscience, particularly the process of learning. In 1982, Cooper and two doctoral students, Elie Bienenstock and Paul Munro, published their theory of synaptic plasticity in The Journal of Neuroscience. They estimated the weakening and strengthening of synapses that could occur without saturation of the connections. As synapses saturate, electrical connections become less effective, thereby reducing the saturation. Connections therefore oscillate between saturation and unsaturation without reaching their limits. Their theory explained how the visual cortex works and how people learn to see. It became known as the BCM theory, after the authors' initials.
Memberships and honors
Fellow of the American Physical Society
Fellow of the American Academy of Arts and Sciences
Member of the National Academy of Sciences
Member of the American Philosophical Society
Member of the American Association for the Advancement of Science
Associate member of the Neuroscience Research Program
Research fellow of the Alfred P. Sloan Foundation (1959–1966)
Fellow of the Guggenheim Institute (1965–66)
Nobel Prize Recipient for Physics (1972)
Co-winner (with Dr. Schrieffer) of the Comstock Prize in Physics of the National Academy of Sciences (1968)
Received the Award of Excellence, Graduate Faculties Alumni of Columbia University
Received the Descartes Medal, Académie de Paris, Université René Descartes.
Received the John Jay Award of Columbia College (1985)
Recipient of seven honorary doctorates
Publications
Cooper was the author of Science and Human Experience – a collection of essays, including previously unpublished material, on issues such as consciousness and the structure of space (Cambridge University Press, 2014).
Cooper also wrote an unconventional liberal-arts physics textbook, originally An Introduction to the Meaning and Structure of Physics (Harper and Row, 1968) and still in print in a somewhat condensed form as Physics: Structure and Meaning (Lebanon: New Hampshire, University Press of New England, 1992).
Cooper, L. N. & J. Rainwater. "Theory of Multiple Coulomb Scattering from Extended Nuclei", Nevis Cyclotron Laboratories at Columbia University, Office of Naval Research (ONR), United States Department of Energy (through predecessor agency the Atomic Energy Commission), (August 1954).
Cooper, L. N., Lee, H. J., Schwartz, B. B. & W. Silvert. "Theory of the Knight Shift and Flux Quantization in Superconductors", Brown University, United States Department of Energy (through predecessor agency the Atomic Energy Commission), (May 1962).
Cooper, L. N. & Feldman, D. "BCS: 50 years", World Scientific Publishing Co., (November 2010).
See also
List of Jewish Nobel laureates
References
External links
including the Nobel Lecture, December 11, 1972 Microscopic Quantum Interference Effects in the Theory of Superconductivity
Brown University researcher profile
Brown University Physics Department profile
Critical Review evaluations of Professor Cooper
1930 births
2024 deaths
20th-century American physicists
21st-century American physicists
American Nobel laureates
American people of Belarusian-Jewish descent
American people of Polish-Jewish descent
Brown University faculty
Columbia College (New York) alumni
Columbia Graduate School of Arts and Sciences alumni
Fellows of the American Physical Society
Institute for Advanced Study visiting scholars
Jewish American physicists
Jewish neuroscientists
Members of the United States National Academy of Sciences
Nobel laureates in Physics
People associated with CERN
Scientists from the Bronx
Superconductivity
The Bronx High School of Science alumni | Leon Cooper | [
"Physics",
"Materials_science",
"Engineering"
] | 1,515 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
396,622 | https://en.wikipedia.org/wiki/Polish%20space | In the mathematical discipline of general topology, a Polish space is a separable completely metrizable topological space; that is, a space homeomorphic to a complete metric space that has a countable dense subset. Polish spaces are so named because they were first extensively studied by Polish topologists and logicians—Sierpiński, Kuratowski, Tarski and others. However, Polish spaces are mostly studied today because they are the primary setting for descriptive set theory, including the study of Borel equivalence relations. Polish spaces are also a convenient setting for more advanced measure theory, in particular in probability theory.
Common examples of Polish spaces are the real line, any separable Banach space, the Cantor space, and the Baire space. Additionally, some spaces that are not complete metric spaces in the usual metric may be Polish; e.g., the open interval (0, 1) is Polish.
Between any two uncountable Polish spaces, there is a Borel isomorphism; that is, a bijection that preserves the Borel structure. In particular, every uncountable Polish space has the cardinality of the continuum.
Lusin spaces, Suslin spaces, and Radon spaces are generalizations of Polish spaces.
Properties
Every Polish space is second countable (by virtue of being separable and metrizable).
A subspace Q of a Polish space P is Polish (under the induced topology) if and only if Q is the intersection of a sequence of open subsets of P (i.e., Q is a Gδ set).
(Cantor–Bendixson theorem) If X is Polish then any closed subset of X can be written as the disjoint union of a perfect set and a countable set. Further, if the Polish space X is uncountable, it can be written as the disjoint union of a perfect set and a countable open set.
Every Polish space is homeomorphic to a Gδ-subset of the Hilbert cube (that is, of I^ℕ, where I is the unit interval and ℕ is the set of natural numbers).
The following spaces are Polish:
closed subsets of a Polish space,
open subsets of a Polish space,
products and disjoint unions of countable families of Polish spaces,
locally compact spaces that are metrizable and countable at infinity,
countable intersections of Polish subspaces of a Hausdorff topological space,
the set of irrational numbers with the topology induced by the standard topology of the real line.
Characterization
There are numerous characterizations that tell when a second-countable topological space is metrizable, such as Urysohn's metrization theorem. The problem of determining whether a metrizable space is completely metrizable is more difficult. Topological spaces such as the open unit interval (0,1) can be given both complete metrics and incomplete metrics generating their topology.
There is a characterization of complete separable metric spaces in terms of a game known as the strong Choquet game. A separable metric space is completely metrizable if and only if the second player has a winning strategy in this game.
A second characterization follows from Alexandrov's theorem. It states that a separable metric space is completely metrizable if and only if it is a subset of its completion in the original metric.
Polish metric spaces
Although Polish spaces are metrizable, they are not in and of themselves metric spaces; each Polish space admits many complete metrics giving rise to the same topology, but no one of these is singled out or distinguished. A Polish space with a distinguished complete metric is called a Polish metric space. An alternative approach, equivalent to the one given here, is first to define "Polish metric space" to mean "complete separable metric space", and then to define a "Polish space" as the topological space obtained from a Polish metric space by forgetting the metric.
Generalizations of Polish spaces
Lusin spaces
A Hausdorff topological space is a Lusin space (named after Nikolai Lusin) if some stronger topology makes it into a Polish space.
There are many ways to form Lusin spaces. In particular:
Every Polish space is a Lusin space
A subspace of a Lusin space is a Lusin space if and only if it is a Borel set.
Any countable union or intersection of Lusin subspaces of a Hausdorff space is a Lusin space.
The product of a countable number of Lusin spaces is a Lusin space.
The disjoint union of a countable number of Lusin spaces is a Lusin space.
Suslin spaces
A Hausdorff topological space is a Suslin space (named after Mikhail Suslin) if it is the image of a Polish space under a continuous mapping. So every Lusin space is Suslin.
In a Polish space, a subset is a Suslin space if and only if it is a Suslin set (an image of the Suslin operation).
The following are Suslin spaces:
closed or open subsets of a Suslin space,
countable products and disjoint unions of Suslin spaces,
countable intersections or countable unions of Suslin subspaces of a Hausdorff topological space,
continuous images of Suslin spaces,
Borel subsets of a Suslin space.
They have the following properties:
Every Suslin space is separable.
Radon spaces
A Radon space, named after Johann Radon, is a topological space on which every Borel probability measure is inner regular. Since a probability measure is globally finite, and hence a locally finite measure, every probability measure on a Radon space is also a Radon measure. In particular a separable complete metric space is a Radon space.
Every Suslin space is a Radon space.
Polish groups
A Polish group is a topological group that is also a Polish space, in other words homeomorphic to a separable complete metric space. There are several classic results of Banach, Freudenthal and Kuratowski on homomorphisms between Polish groups. Firstly, Banach's argument applies mutatis mutandis to non-Abelian Polish groups: if G and H are separable metric groups with G Polish, then any Borel homomorphism from G to H is continuous. Secondly, there is a version of the open mapping theorem or the closed graph theorem due to Kuratowski: a continuous injective homomorphism of a Polish subgroup onto another Polish group is an open mapping. As a result, it is a remarkable fact about Polish groups that Baire-measurable mappings (i.e., for which the preimage of any open set has the property of Baire) that are homomorphisms between them are automatically continuous. The group of homeomorphisms of the Hilbert cube is a universal Polish group, in the sense that every Polish group is isomorphic to a closed subgroup of it.
Examples:
All finite dimensional Lie groups with a countable number of components are Polish groups.
The unitary group of a separable Hilbert space (with the strong operator topology) is a Polish group.
The group of homeomorphisms of a compact metric space is a Polish group.
The product of a countable number of Polish groups is a Polish group.
The group of isometries of a separable complete metric space is a Polish group
See also
Standard Borel space
References
Further reading
Descriptive set theory
General topology
Science and technology in Poland | Polish space | [
"Mathematics"
] | 1,509 | [
"General topology",
"Topology"
] |
397,175 | https://en.wikipedia.org/wiki/Hexamethylenetetramine | Hexamethylenetetramine (HMTA), also known as 1,3,5,7-tetraazaadamantane, is a heterocyclic organic compound with diverse applications. It has the chemical formula (CH2)6N4 and is a white crystalline compound that is highly soluble in water and polar organic solvents. It is useful in the synthesis of other organic compounds, including plastics, pharmaceuticals, and rubber additives. The compound is also used medically for certain conditions. It sublimes in vacuum at 280 °C. It has a tetrahedral cage-like structure similar to adamantane. The four vertices are occupied by nitrogen atoms, which are linked by methylene groups. Although the molecular shape defines a cage, no void space is available at the interior.
Synthesis, structure, reactivity
Hexamethylenetetramine was discovered by Aleksandr Butlerov in 1859.
It is prepared industrially by combining formaldehyde and ammonia:

6 CH2O + 4 NH3 → (CH2)6N4 + 6 H2O
The molecule behaves like an amine base, undergoing protonation and N-alkylation, and it can also serve as a ligand. N-Alkylation with chloroallyl chloride gives quaternium-15.
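As a quick consistency check of the stoichiometry above, the Python sketch below tallies the atoms on each side of the equation; the compositions are hard-coded for these four species rather than parsed from formulas:

```python
from collections import Counter

# Atom counts for each species in: 6 CH2O + 4 NH3 -> (CH2)6N4 + 6 H2O
CH2O = Counter({"C": 1, "H": 2, "O": 1})   # formaldehyde
NH3  = Counter({"N": 1, "H": 3})           # ammonia
HMTA = Counter({"C": 6, "H": 12, "N": 4})  # hexamethylenetetramine, (CH2)6N4
H2O  = Counter({"H": 2, "O": 1})           # water

def scale(species: Counter, n: int) -> Counter:
    """Multiply every atom count by the stoichiometric coefficient n."""
    return Counter({element: n * count for element, count in species.items()})

lhs = scale(CH2O, 6) + scale(NH3, 4)
rhs = HMTA + scale(H2O, 6)
assert lhs == rhs          # the equation balances
print(dict(lhs))           # {'C': 6, 'H': 24, 'O': 6, 'N': 4}
```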
Applications
The dominant use of hexamethylenetetramine is in the production of solid (powder) or liquid phenolic resins and phenolic resin moulding compounds, in which it is added as a hardening component. These products are used as binders, e.g., in brake and clutch linings, abrasives, non-woven textiles, formed parts produced by moulding processes, and fireproof materials.
Medical uses
The compound is also used medically as a urinary antiseptic and antibacterial medication under the name methenamine or hexamine. It is used as an alternative to antibiotics to prevent urinary tract infections (UTIs) and is sold under the brand names Hiprex, Urex, and Urotropin, among others.
As the mandelic acid salt (methenamine mandelate) or the hippuric acid salt (methenamine hippurate), it is used for the treatment of urinary tract infections. In an acidic environment, methenamine is believed to act as an antimicrobial by converting to formaldehyde. A systematic review of its use for this purpose in adult women found there was insufficient evidence of benefit and further research was needed. A UK study showed that methenamine is as effective as daily low-dose antibiotics at preventing UTIs among women who experience recurrent UTIs. As methenamine is an antiseptic, it may avoid the issue of antibiotic resistance.
Methenamine acts as an over-the-counter antiperspirant due to the astringent property of formaldehyde. Specifically, methenamine is used to minimize perspiration in the sockets of prosthetic devices.
Histological stains
Methenamine silver stains are used for staining in histology, including the following types:
Grocott's methenamine silver stain, used widely as a screen for fungal organisms.
Jones' stain, a methenamine silver-Periodic acid-Schiff that stains for basement membrane, availing to view the "spiked" Glomerular basement membrane associated with membranous glomerulonephritis.
Solid fuel
Together with 1,3,5-trioxane, hexamethylenetetramine is a component of hexamine fuel tablets used by campers, hobbyists, the military and relief organizations for heating camping food or military rations. It burns smokelessly, has a high energy density of 30.0 megajoules per kilogram (MJ/kg), does not liquefy while burning, and leaves no ashes, although its fumes are toxic.
Standardized 0.149 g tablets of methenamine (hexamine) are used by fire-protection laboratories as a clean and reproducible fire source to test the flammability of carpets and rugs.
Food additive
Hexamethylenetetramine or hexamine is also used as a food additive as a preservative (INS number 239). It is approved for this purpose in the EU, where it is listed under E number E239; however, it is not approved in the USA, Russia, Australia, or New Zealand.
Reagent in organic chemistry
Hexamethylenetetramine is a versatile reagent in organic synthesis. It is used in the Duff reaction (formylation of arenes), the Sommelet reaction (converting benzyl halides to aldehydes), and in the Delepine reaction (synthesis of amines from alkyl halides).
Explosives
Hexamethylenetetramine is the base component to produce RDX and, consequently, C-4 as well as octogen (a co-product with RDX), hexamine dinitrate, hexamine diperchlorate, HMTD, and R-salt.
From October 2023, sale of hexamethylenetetramine in the UK is restricted to licensed persons (as a "regulated precursor" under the terms of the Poisons Act 1972).
Pyrotechnics
Hexamethylenetetramine is also used in pyrotechnics to reduce combustion temperatures and decrease the color intensity of various fireworks. Because of its ash-free combustion, hexamethylenetetramine is also utilized in indoor fireworks alongside magnesium and lithium salts.
Historical uses
Hexamethylenetetramine was first introduced into the medical setting in 1895 as a urinary antiseptic. It was officially approved by the FDA for medical use in the United States in 1967. However, it was only used in cases of acidic urine, whereas boric acid was used to treat urinary tract infections with alkaline urine. Scientist De Eds found that there was a direct correlation between the acidity of hexamethylenetetramine's environment and the rate of its decomposition. Therefore, its effectiveness as a drug depended greatly on the acidity of the urine rather than the amount of the drug administered. In an alkaline environment, hexamethylenetetramine was found to be almost completely inactive.
Hexamethylenetetramine was also used as a method of treatment for soldiers exposed to phosgene in World War I. Subsequent studies have shown that large doses of hexamethylenetetramine provide some protection if taken before phosgene exposure but none if taken afterwards.
Producers
Since 1990 the number of European producers has been declining. The French SNPE factory closed in 1990; in 1993, the production of hexamethylenetetramine in Leuna, Germany ceased; in 1996, the Italian facility of Agrolinz closed down; in 2001, the UK producer Borden closed; in 2006, production at Chemko, Slovak Republic, was closed. Remaining producers include INEOS in Germany, Caldic in the Netherlands, and Hexion in Italy. In the US, Eli Lilly and Company stopped producing methenamine tablets in 2002. In Australia, hexamine tablets for fuel are made by Thales Australia Ltd. In Mexico, hexamine is produced by Abiya. Other countries that still produce it include Russia, Saudi Arabia, and China.
References
Adamantane-like molecules
Antimicrobials
Corrosion inhibitors
E-number additives
Fuels
Heterocyclic compounds with 3 rings
Nitrogen heterocycles
Preservatives
Reagents for organic chemistry
Substances discovered in the 19th century
Tertiary amines | Hexamethylenetetramine | [
"Chemistry",
"Biology"
] | 1,599 | [
"Antimicrobials",
"Chemical energy sources",
"Fuels",
"Reagents for organic chemistry",
"Corrosion inhibitors",
"Biocides",
"Process chemicals"
] |
397,263 | https://en.wikipedia.org/wiki/Truss | A truss is an assembly of members such as beams, connected by nodes, that creates a rigid structure.
In engineering, a truss is a structure that "consists of two-force members only, where the members are organized so that the assemblage as a whole behaves as a single object". A "two-force member" is a structural component where force is applied to only two points. Although this rigorous definition allows the members to have any shape connected in any stable configuration, architectural trusses typically comprise five or more triangular units constructed with straight members whose ends are connected at joints referred to as nodes.
In this typical context, external forces and reactions to those forces are considered to act only at the nodes and result in forces in the members that are either tensile or compressive. For straight members, moments (torques) are explicitly excluded because, and only because, all the joints in a truss are treated as revolutes, as is necessary for the links to be two-force members.
A planar truss is one where all members and nodes lie within a two-dimensional plane, while a space frame has members and nodes that extend into three dimensions. The top beams in a truss are called 'top chords' and are typically in compression, the bottom beams are called 'bottom chords', and are typically in tension. The interior beams are called webs, and the areas inside the webs are called panels, or from graphic statics (see Cremona diagram) 'polygons'.
Etymology
Truss derives from the Old French word trousse, from around 1200 AD, which means "collection of things bound together". The term truss has often been used to describe any assembly of members such as a cruck frame or a couple of rafters.
Characteristics
A truss consists of typically (but not necessarily) straight members connected at joints, traditionally termed panel points. Trusses are typically (but not necessarily) composed of triangles because of the structural stability of that shape and design. A triangle is the simplest geometric figure that will not change shape when the lengths of the sides are fixed. In comparison, both the angles and the lengths of a four-sided figure must be fixed for it to retain its shape.
Simple truss
The simplest form of a truss is one single triangle. This type of truss is seen in a framed roof consisting of rafters and a ceiling joist, and in other mechanical structures such as bicycles and aircraft. Because of the stability of this shape and the methods of analysis used to calculate the forces within it, a truss composed entirely of triangles is known as a simple truss. However, a simple truss is often defined more restrictively by demanding that it can be constructed through successive addition of pairs of members, each connected to two existing joints and to each other to form a new joint, and this definition does not require a simple truss to comprise only triangles. The traditional diamond-shape bicycle frame, which utilizes two conjoined triangles, is an example of a simple truss.
Planar truss
A planar truss lies in a single plane. Planar trusses are typically used in parallel to form roofs and bridges.
The depth of a truss, or the height between the upper and lower chords, is what makes it an efficient structural form. A solid girder or beam of equal strength would have substantial weight and material cost as compared to a truss. For a given span, a deeper truss will require less material in the chords and greater material in the verticals and diagonals. An optimum depth of the truss will maximize the efficiency.
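The depth argument can be made concrete by treating the truss like a beam: the top and bottom chords form a force couple that must supply the bending moment, so the chord force is roughly the moment divided by the depth. A minimal sketch with an illustrative moment value (not taken from the text):

```python
# Chord force ~ bending moment / truss depth (parallel-chord approximation).
M = 500.0  # kN*m, hypothetical peak bending moment on the span

for depth_m in (1.0, 2.0, 4.0):
    chord_force = M / depth_m  # kN carried by each chord of the couple
    print(f"depth {depth_m:.0f} m -> chord force ~ {chord_force:.0f} kN")
```

Doubling the depth halves the chord force, which is why a deeper truss needs less chord material, at the cost of longer verticals and diagonals.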
Space frame truss
A space frame truss is a three-dimensional framework of members pinned at their ends. A tetrahedron shape is the simplest space truss, consisting of six members that meet at four joints. Large planar structures may be composed from tetrahedrons with common edges, and they are also employed in the base structures of large free-standing power line pylons.
Types
For more truss types, see truss types used in bridges.
There are two basic types of truss:
The pitched truss, or common truss, is characterized by its triangular shape. It is most often used for roof construction. Some common trusses are named according to their "web configuration". The chord size and web configuration are determined by span, load and spacing.
The parallel chord truss, or flat truss, gets its name from its parallel top and bottom chords. It is often used for floor construction.
A combination of the two is a truncated truss, used in hip roof construction. A metal plate-connected wood truss is a roof or floor truss whose wood members are connected with metal connector plates.
Warren truss
Truss members form a series of equilateral triangles, alternating up and down.
Octet truss
Truss members are made up of all equivalent equilateral triangles. The minimum composition is two regular tetrahedrons along with an octahedron. They fill up three dimensional space in a variety of configurations.
Pratt truss
The Pratt truss was patented in 1844 by two Boston railway engineers, Caleb Pratt and his son Thomas Willis Pratt. The design uses vertical members for compression and diagonal members to respond to tension. The Pratt truss design remained popular as bridge designers switched from wood to iron, and from iron to steel. This continued popularity of the Pratt truss is probably due to the fact that the configuration of the members means that longer diagonal members are only in tension for gravity load effects. This allows these members to be used more efficiently, as slenderness effects related to buckling under compression loads (which are compounded by the length of the member) will typically not control the design. Therefore, for given planar truss with a fixed depth, the Pratt configuration is usually the most efficient under static, vertical loading.
The Southern Pacific Railroad bridge in Tempe, Arizona is a 393 meter (1,291 foot) long truss bridge built in 1912. The structure, still in use today, consists of nine Pratt truss spans of varying lengths.
The Wright Flyer used a Pratt truss in its wing construction, as the minimization of compression member lengths allowed for lower aerodynamic drag.
Town's lattice truss
American architect Ithiel Town designed Town's Lattice Truss as an alternative to heavy-timber bridges. His design, patented in 1820 and 1835, uses easy-to-handle planks arranged diagonally with short spaces in between them, to form a lattice.
Bowstring truss
Named for their shape, bowstring trusses were first used for arched truss bridges, often confused with tied-arch bridges.
Thousands of bowstring trusses were used during World War II for holding up the curved roofs of aircraft hangars and other military buildings. Many variations exist in the arrangements of the members connecting the nodes of the upper arc with those of the lower, straight sequence of members, from nearly isosceles triangles to a variant of the Pratt truss.
King and queen post trusses
One of the simplest truss styles to implement, the king post consists of two angled supports leaning into a common vertical support.
The queen post truss, sometimes queenpost or queenspost, is similar to a king post truss in that the outer supports are angled towards the centre of the structure. The primary difference is the horizontal extension at the centre which relies on beam action to provide mechanical stability. This truss style is only suitable for relatively short spans.
Lenticular truss
Lenticular trusses, patented in 1878 by William Douglas (although the Gaunless Bridge of 1823 was the first of the type), have the top and bottom chords of the truss arched, forming a lens shape. A lenticular pony truss bridge is a bridge design that involves a lenticular truss extending above and below the roadbed.
Vierendeel structure
The members of a Vierendeel structure are not triangulated but form rectangular openings. The structure has a frame with fixed joints that are capable of transferring and resisting bending moments. As such, it does not fit the definition of a truss, since it contains non-two-force members: regular trusses comprise members that are commonly assumed to have pinned joints, with the implication that no moments exist at the jointed ends. This style of structure was named after the Belgian engineer Arthur Vierendeel, who developed the design in 1896. It is rarely used for bridges because of higher costs compared to a triangulated truss, but in buildings it has the advantage that a large amount of the exterior envelope remains unobstructed and it can therefore be used for windows and door openings. In some applications this is preferable to a braced-frame system, which would leave some areas obstructed by the diagonal braces.
Statics
A truss that is assumed to comprise members that are connected by means of pin joints, and which is supported at both ends by means of hinged joints and rollers, is described as being statically determinate. Newton's Laws apply to the structure as a whole, as well as to each node or joint. In order for any node that may be subject to an external load or force to remain static in space, the following conditions must hold: the sums of all (horizontal and vertical) forces, as well as all moments acting about the node equal zero. Analysis of these conditions at each node yields the magnitude of the compression or tension forces.
Trusses that are supported at more than two positions are said to be statically indeterminate, and the application of Newton's Laws alone is not sufficient to determine the member forces.
In order for a truss with pin-connected members to be stable, it does not need to be entirely composed of triangles. In mathematical terms, the following necessary condition for stability of a simple truss exists:

m + r ≥ 2j    (a)

where m is the total number of truss members, j is the total number of joints and r is the number of reactions (equal to 3 generally) in a 2-dimensional structure.

When m + r = 2j, the truss is said to be statically determinate, because the (m + 3) internal member forces and support reactions can then be completely determined by 2j equilibrium equations, once we know the external loads and the geometry of the truss. Given a certain number of joints, this is the minimum number of members, in the sense that if any member is taken out (or fails), then the truss as a whole fails. While the relation (a) is necessary, it is not sufficient for stability, which also depends on the truss geometry, support conditions and the load carrying capacity of the members.
Some structures are built with more than this minimum number of truss members. Those structures may survive even when some of the members fail. Their member forces depend on the relative stiffness of the members, in addition to the equilibrium condition described.
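The counting rule above is straightforward to check programmatically. The following minimal Python sketch (the function name and example counts are purely illustrative) classifies a planar pin-jointed truss from its member, joint and reaction counts; note that, as stated above, the rule is necessary but not sufficient for stability:

```python
def classify_planar_truss(m: int, j: int, r: int = 3) -> str:
    """Classify a 2D pin-jointed truss by the counting rule m + r vs 2j.

    m -- number of members, j -- number of joints,
    r -- number of support reactions (3 for the usual pin + roller).
    The rule is necessary but not sufficient; poor geometry can still
    make a counted-determinate truss unstable.
    """
    if m + r < 2 * j:
        return "unstable (too few members: a mechanism)"
    elif m + r == 2 * j:
        return "statically determinate"
    else:
        return "statically indeterminate (redundant members)"

# The 9-joint, 15-member example truss discussed under "Forces in members":
print(classify_planar_truss(m=15, j=9))   # statically determinate
# The same truss with one extra diagonal added:
print(classify_planar_truss(m=16, j=9))   # statically indeterminate
```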
Analysis
Because the forces in each of its two main girders are essentially planar, a truss is usually modeled as a two-dimensional plane frame. However, if there are significant out-of-plane forces, the structure must be modeled as a three-dimensional space frame.
The analysis of trusses often assumes that loads are applied to joints only and not at intermediate points along the members. The weight of the members is often insignificant compared to the applied loads and so is often omitted; alternatively, half of the weight of each member may be applied to its two end joints. Provided that the members are long and slender, the moments transmitted through the joints are negligible, and the junctions can be treated as "hinges" or "pin-joints".
Under these simplifying assumptions, every member of the truss is then subjected to pure compression or pure tension forces – shear, bending moment, and other more-complex stresses are all practically zero. Trusses are physically stronger than other ways of arranging structural elements, because nearly every material can resist a much larger load in tension or compression than in shear, bending, torsion, or other kinds of force.
These simplifications make trusses easier to analyze. Structural analysis of trusses of any type can readily be carried out using a matrix method such as the direct stiffness method, the flexibility method, or the finite element method.
Forces in members
Illustrated is a simple, statically determinate flat truss with 9 joints and (2 x 9) − 3 = 15 members. External loads are concentrated in the outer joints. Since this is a symmetrical truss with symmetrical vertical loads, the reactive forces at A and B are vertical, equal, and half the total load.
The internal forces in the members of the truss can be calculated in a variety of ways, including graphical methods:
Cremona diagram
Culmann diagram
Ritter analytical method (method of sections)
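Besides the graphical methods listed above, the 2j joint equilibrium equations can be assembled and solved numerically, as in the matrix methods mentioned earlier. The Python sketch below sets up a hypothetical three-member triangle truss (geometry, loads and support layout are invented for illustration) and solves for member forces by the method of joints, with tension positive:

```python
import numpy as np

# Hypothetical triangle truss: all names and numbers are illustrative.
joints = {"A": (0.0, 0.0), "B": (2.0, 0.0), "C": (1.0, 1.0)}
members = [("A", "B"), ("A", "C"), ("B", "C")]
loads = {"C": (0.0, -10.0)}                 # kN, applied at joints only
# Support reactions: pin at A (x and y), roller at B (y) -> r = 3.
reactions = [("A", 0), ("A", 1), ("B", 1)]  # (joint, direction index)

names = list(joints)
n_eq = 2 * len(joints)                      # 2j equilibrium equations
n_unk = len(members) + len(reactions)       # m member forces + r reactions
A = np.zeros((n_eq, n_unk))
b = np.zeros(n_eq)

# Columns 0..m-1: member forces (tension positive).
for k, (i, jn) in enumerate(members):
    xi, yi = joints[i]; xj, yj = joints[jn]
    L = np.hypot(xj - xi, yj - yi)
    ux, uy = (xj - xi) / L, (yj - yi) / L    # unit vector from i toward jn
    ri, rj = 2 * names.index(i), 2 * names.index(jn)
    A[ri, k] += ux;  A[ri + 1, k] += uy      # tension pulls joint i toward jn
    A[rj, k] -= ux;  A[rj + 1, k] -= uy      # and pulls joint jn toward i

# Columns m..m+r-1: support reactions.
for k, (jn, d) in enumerate(reactions):
    A[2 * names.index(jn) + d, len(members) + k] = 1.0

# Right-hand side: external loads move to the other side of equilibrium.
for jn, (fx, fy) in loads.items():
    b[2 * names.index(jn)] -= fx
    b[2 * names.index(jn) + 1] -= fy

x = np.linalg.solve(A, b)
for k, mem in enumerate(members):
    state = "tension" if x[k] > 0 else "compression"
    print(f"{mem}: {x[k]:+.2f} kN ({state})")
```

For this symmetric load, the bottom chord comes out at +5.00 kN (tension) and each diagonal at -7.07 kN (compression), matching the hand calculation by joint equilibrium.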
Design of members
A truss can be thought of as a beam where the web consists of a series of separate members instead of a continuous plate. In the truss, the lower horizontal member (the bottom chord) and the upper horizontal member (the top chord) carry tension and compression, fulfilling the same function as the flanges of an I-beam. Which chord carries tension and which carries compression depends on the overall direction of bending. In the truss pictured above right, the bottom chord is in tension, and the top chord in compression.
The diagonal and vertical members form the truss web, and carry the shear stress. Individually, they are also in tension or compression, with the exact arrangement of forces depending on the type of truss and again on the direction of bending. In the truss shown above right, the vertical members are in tension, and the diagonals are in compression.
In addition to carrying the static forces, the members serve additional functions of stabilizing each other, preventing buckling. In the adjacent picture, the top chord is prevented from buckling by the presence of bracing and by the stiffness of the web members.
The inclusion of the elements shown is largely an engineering decision based upon economics, being a balance between the costs of raw materials, off-site fabrication, component transportation, on-site erection, the availability of machinery and the cost of labor. In other cases the appearance of the structure may take on greater importance and so influence the design decisions beyond mere matters of economics. Modern materials such as prestressed concrete and fabrication methods, such as automated welding, have significantly influenced the design of modern bridges.
Once the force on each member is known, the next step is to determine the cross section of the individual truss members. For members under tension the cross-sectional area A can be found using A = F × γ / σy, where F is the force in the member, γ is a safety factor (typically 1.5 but depending on building codes) and σy is the yield tensile strength of the steel used.
The members under compression also have to be designed to be safe against buckling.
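A minimal sketch of this sizing step is shown below; the yield strength, elastic modulus, safety factor and section properties are assumed illustrative values, not values mandated by any code. The tension member uses A = F × γ / σy from above, and the compression member is checked against the Euler critical load P_cr = π²EI/L² for a pin-ended strut (elastic buckling only; real design codes apply more detailed checks):

```python
import math

def tension_area_mm2(force_kN: float, safety: float = 1.5,
                     yield_MPa: float = 250.0) -> float:
    """Required cross-sectional area for a tension member: A = F * gamma / sigma_y."""
    return force_kN * 1e3 * safety / yield_MPa       # N / (N/mm^2) -> mm^2

def euler_buckling_ok(force_kN: float, length_m: float, I_mm4: float,
                      E_MPa: float = 210_000.0, safety: float = 1.5) -> bool:
    """Check a pin-ended compression member against the Euler critical load
    P_cr = pi^2 * E * I / L^2 (elastic buckling only)."""
    P_cr = math.pi**2 * E_MPa * I_mm4 / (length_m * 1e3) ** 2   # N
    return force_kN * 1e3 * safety <= P_cr

# Illustrative numbers from the triangle truss sketch above:
print(f"{tension_area_mm2(5.0):.0f} mm^2 needed for the 5 kN bottom chord")
print("diagonal OK against buckling:",
      euler_buckling_ok(7.07, length_m=1.414, I_mm4=8.0e4))
```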
The weight of a truss member depends directly on its cross section—that weight partially determines how strong the other members of the truss need to be. Giving one member a larger cross section than on a previous iteration requires giving other members a larger cross section as well, to hold the greater weight of the first member—one needs to go through another iteration to find exactly how much greater the other members need to be. Sometimes the designer goes through several iterations of the design process to converge on the "right" cross section for each member. On the other hand, reducing the size of one member from the previous iteration merely makes the other members have a larger (and more expensive) safety factor than is technically necessary, but doesn't require another iteration to find a buildable truss.
The effect of the weight of the individual truss members in a large truss, such as a bridge, is usually insignificant compared to the force of the external loads.
Design of joints
After determining the minimum cross section of the members, the last step in the design of a truss would be detailing of the bolted joints, e.g., involving shear stress of the bolt connections used in the joints. Based on the needs of the project, truss internal connections (joints) can be designed as rigid, semi-rigid, or hinged. Rigid connections can allow transfer of bending moments, leading to the development of secondary bending moments in the members.
Applications
Post frame structures
Component connections are critical to the structural integrity of a framing system. In buildings with large, clearspan wood trusses, the most critical connections are those between the truss and its supports. In addition to gravity-induced forces (a.k.a. bearing loads), these connections must resist shear forces acting perpendicular to the plane of the truss and uplift forces due to wind. Depending upon overall building design, the connections may also be required to transfer bending moment.
Wood posts enable the fabrication of strong, direct, yet inexpensive connections between large trusses and walls. Exact details for post-to-truss connections vary from designer to designer, and may be influenced by post type. Solid-sawn timber and glulam posts are generally notched to form a truss bearing surface. The truss is rested on the notches and bolted into place. A special plate/bracket may be added to increase connection load transfer capabilities. With mechanically laminated posts, the truss may rest on a shortened outer-ply or on a shortened inner-ply. The latter scenario places the bolts in double shear and is a very effective connection.
Gallery
See also
Brown truss
Convex uniform honeycomb
Geodesic dome
Lattice tower
Serrurier truss
Stress:
Compressive stress
Tensile stress
Structural mechanics
Structural steel
Tensegrity
Truss rod
References
Airship technology
Architectural elements
Bridge components
Mechanics
Structural system | Truss | [
"Physics",
"Technology",
"Engineering"
] | 3,552 | [
"Structural engineering",
"Building engineering",
"Trusses",
"Structural system",
"Architectural elements",
"Mechanics",
"Mechanical engineering",
"Bridge components",
"Components",
"Architecture"
] |
397,388 | https://en.wikipedia.org/wiki/Rankine%E2%80%93Hugoniot%20conditions | The Rankine–Hugoniot conditions, also referred to as Rankine–Hugoniot jump conditions or Rankine–Hugoniot relations, describe the relationship between the states on both sides of a shock wave or a combustion wave (deflagration or detonation) in a one-dimensional flow in fluids or a one-dimensional deformation in solids. They are named in recognition of the work carried out by Scottish engineer and physicist William John Macquorn Rankine and French engineer Pierre Henri Hugoniot.
The basic idea of the jump conditions is to consider what happens to a fluid when it undergoes a rapid change. Consider, for example, driving a piston into a tube filled with non-reacting gas. A disturbance is propagated through the fluid somewhat faster than the speed of sound. Because the disturbance propagates supersonically, it is a shock wave, and the fluid downstream of the shock has no advance information of it. In a frame of reference moving with the wave, atoms or molecules in front of the wave slam into the wave supersonically. On a microscopic level, they undergo collisions on the scale of the mean free path length until they come to rest in the post-shock flow (but moving in the frame of reference of the wave or of the tube). The bulk transfer of kinetic energy heats the post-shock flow. Because the mean free path length is assumed to be negligible in comparison to all other length scales in a hydrodynamic treatment, the shock front is essentially a hydrodynamic discontinuity. The jump conditions then establish the transition between the pre- and post-shock flow, based solely upon the conservation of mass, momentum, and energy. The conditions are correct even though the shock actually has a positive thickness. This non-reacting example of a shock wave also generalizes to reacting flows, where a combustion front (either a detonation or a deflagration) can be modeled as a discontinuity in a first approximation.
Governing Equations
In a coordinate system that is moving with the discontinuity, the Rankine–Hugoniot conditions can be expressed as:
ρ1u1 = ρ2u2 ≡ m    (Conservation of mass)
ρ1u1² + p1 = ρ2u2² + p2    (Conservation of momentum)
h1 + u1²/2 = h2 + u2²/2    (Conservation of energy)
where m is the mass flow rate per unit area, ρ1 and ρ2 are the mass density of the fluid upstream and downstream of the wave, u1 and u2 are the fluid velocity upstream and downstream of the wave, p1 and p2 are the pressures in the two regions, and h1 and h2 are the specific (with the sense of per unit mass) enthalpies in the two regions. If in addition the flow is reactive, then the species conservation equations demand that the mass production rate ωi of each species vanish both upstream and downstream of the discontinuity. Here, ωi is the mass production rate of the i-th species of the total N species involved in the reaction.
Combining conservation of mass and momentum gives us

(p2 − p1)/(1/ρ2 − 1/ρ1) = −m²

which defines a straight line known as the Michelson–Rayleigh line, named after the Russian physicist Vladimir A. Mikhelson (usually anglicized as Michelson) and Lord Rayleigh, that has a negative slope (since m² is always positive) in the pressure–specific volume plane. Using the Rankine–Hugoniot equations for the conservation of mass and momentum to eliminate u1 and u2, the equation for the conservation of energy can be expressed as the Hugoniot equation:

h2 − h1 = (1/2)(p2 − p1)(1/ρ1 + 1/ρ2)
The inverse of the density can also be expressed as the specific volume, v = 1/ρ. Along with these, one has to specify the relation between the upstream and downstream equation of state

h = h(p, ρ, Y1, Y2, …, YN)

where Yi is the mass fraction of species i. Finally, the calorific equation of state, relating the enthalpy to the temperature and composition, is assumed to be known.
Simplified Rankine–Hugoniot relations
The following assumptions are made in order to simplify the Rankine–Hugoniot equations. The mixture is assumed to obey the ideal gas law, so that the relation between the downstream and upstream equation of state can be written as

p2/(ρ2T2) = p1/(ρ1T1) = R/W

where R is the universal gas constant and the mean molecular weight W is assumed to be constant (otherwise, W would depend on the mass fractions of all the species). If one assumes that the specific heat at constant pressure cp is also constant across the wave, the change in enthalpies (calorific equation of state) can be simply written as

h2 − h1 = −q + cp(T2 − T1)

where the first term in the above expression represents the amount of heat released per unit mass of the upstream mixture by the wave and the second term represents the sensible heating. Eliminating temperature using the equation of state and substituting the above expression for the change in enthalpies into the Hugoniot equation, one obtains a Hugoniot equation expressed only in terms of pressure and densities:

(γ/(γ − 1))(p2/ρ2 − p1/ρ1) − (1/2)(p2 − p1)(1/ρ1 + 1/ρ2) = q
where γ is the specific heat ratio, which for ordinary room-temperature air (298 K) is 1.40. A Hugoniot curve without heat release (q = 0) is often called a "shock Hugoniot", or simply a "Hugoniot". Along with the Rayleigh line equation, the above equation completely determines the state of the system. These two equations can be written compactly by introducing the following non-dimensional scales:

p̃ = p2/p1,  ṽ = v2/v1 = ρ1/ρ2,  q̃ = q/(p1v1),  μ² = m²v1/p1
The Rayleigh line equation and the Hugoniot equation then simplify to

p̃ = 1 + μ²(1 − ṽ)
(γ/(γ − 1))(p̃ṽ − 1) − (1/2)(p̃ − 1)(ṽ + 1) = q̃

Given the upstream conditions, the intersection of the above two equations in the ṽ–p̃ plane determines the downstream conditions; there, the upstream condition corresponds to the point (ṽ, p̃) = (1, 1). If no heat release occurs, for example in shock waves without chemical reaction, then q̃ = 0. The Hugoniot curves asymptote to the lines ṽ = (γ − 1)/(γ + 1) and p̃ = −(γ − 1)/(γ + 1), which are depicted as dashed lines in the figure. As mentioned in the figure, only the white region bounded by these two asymptotes is allowed, so that μ² is positive. Shock waves and detonations correspond to the top-left white region wherein p̃ > 1 and ṽ < 1, that is to say, the pressure increases and the specific volume decreases across the wave (the Chapman–Jouguet condition for detonation is where the Rayleigh line is tangent to the Hugoniot curve). Deflagrations, on the other hand, correspond to the bottom-right white region wherein p̃ < 1 and ṽ > 1, that is to say, the pressure decreases and the specific volume increases across the wave; the pressure decrease across a flame is typically very small and is seldom considered when studying deflagrations.
For shock waves and detonations, the pressure increase across the wave can take any value 1 < p̃ < ∞; the steeper the slope of the Rayleigh line, the stronger is the wave. On the contrary, the specific volume ratio is restricted to the finite interval (γ − 1)/(γ + 1) ≤ ṽ ≤ (γ + 1)/(γ − 1) (the upper bound is derived for the case p̃ = 0, because pressure cannot take negative values). If γ = 7/5 (a diatomic gas without vibrational mode excitation), the interval is 1/6 ≤ ṽ ≤ 6; in other words, the shock wave can increase the density at most by a factor of 6. For a monatomic gas, γ = 5/3, the allowed interval is 1/4 ≤ ṽ ≤ 4. For diatomic gases with the vibrational mode excited, γ = 9/7, leading to the interval 1/8 ≤ ṽ ≤ 8. In reality, the specific heat ratio is not constant in the shock wave due to molecular dissociation and ionization, but even in these cases the density ratio in general does not exceed a factor of about 11–13.
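These limits are easy to verify numerically. Below is a minimal Python sketch (the function name and sample Mach numbers are illustrative) that evaluates the standard normal-shock forms of the Rankine–Hugoniot jump conditions for a calorically perfect gas; as M1 grows, the density ratio saturates at (γ + 1)/(γ − 1):

```python
def normal_shock(M1: float, gamma: float = 1.4):
    """Downstream/upstream ratios across a normal shock, from the
    Rankine-Hugoniot conditions for a calorically perfect gas."""
    if M1 <= 1.0:
        raise ValueError("a shock requires supersonic upstream flow, M1 > 1")
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
    T_ratio = p_ratio / rho_ratio          # ideal gas: T ~ p / rho
    return p_ratio, rho_ratio, T_ratio

for M1 in (1.5, 3.0, 10.0, 1e6):
    p, rho, T = normal_shock(M1)
    print(f"M1={M1:g}: p2/p1={p:.4g}, rho2/rho1={rho:.4g}, T2/T1={T:.4g}")
# As M1 -> infinity the density ratio tends to (gamma+1)/(gamma-1) = 6.
```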
Derivation from Euler equations
Consider gas in a one-dimensional container (e.g., a long thin tube). Assume that the fluid is inviscid (i.e., it shows no viscosity effects as for example friction with the tube walls). Furthermore, assume that there is no heat transfer by conduction or radiation and that gravitational acceleration can be neglected. Such a system can be described by the following system of conservation laws, known as the 1D Euler equations, that in conservation form is:

∂ρ/∂t + ∂(ρu)/∂x = 0
∂(ρu)/∂t + ∂(ρu² + p)/∂x = 0
∂E/∂t + ∂(u(E + p))/∂x = 0
where
ρ is the fluid mass density,
u is the fluid velocity,
e is the specific internal energy of the fluid,
p is the fluid pressure, and
E = ρe + ρu²/2 is the total energy density of the fluid, [J/m³], while e is its specific internal energy
Assume further that the gas is calorically ideal and that therefore a polytropic equation-of-state of the simple form

p = (γ − 1)ρe

is valid, where γ is the constant ratio of specific heats cp/cv. This quantity also appears as the polytropic exponent of the polytropic process described by

p/ρ^γ = constant
For an extensive list of compressible flow equations, etc., refer to NACA Report 1135 (1953).
Note: For a calorically ideal gas γ is a constant and for a thermally ideal gas γ is a function of temperature. In the latter case, the dependence of pressure on mass density and internal energy might differ from that given by equation ().
The jump condition
Before proceeding further it is necessary to introduce the concept of a jump condition – a condition that holds at a discontinuity or abrupt change.
Consider a 1D situation where there is a jump in the scalar conserved physical quantity w, which is governed by the integral conservation law

d/dt ∫[x1, x2] w dx + f(w(x2, t)) − f(w(x1, t)) = 0

for any x1, x2 with x1 < x2, and, therefore, by the partial differential equation

∂w/∂t + ∂f(w)/∂x = 0

for smooth solutions.
Let the solution exhibit a jump (or shock) at x = xs(t), where x1 < xs(t) < x2; then

d/dt ∫[x1, xs(t)] w dx + d/dt ∫[xs(t), x2] w dx = f(w(x1, t)) − f(w(x2, t))

The subscripts 1 and 2 indicate conditions just upstream and just downstream of the jump respectively.
Note, to arrive at equation () we have used the fact that the endpoints x1 and x2 are fixed in time (dx1/dt = dx2/dt = 0).
Now, let x1 → xs(t) and x2 → xs(t), so that w(x1, t) → w1 and w(x2, t) → w2; the interior integrals vanish, and in the limit

s(w1 − w2) = f(w1) − f(w2)

where we have defined s = dxs(t)/dt (the system characteristic or shock speed), which by simple division is given by

s = (f(w2) − f(w1)) / (w2 − w1)
Equation () represents the jump condition for conservation law (). A shock situation arises in a system where its characteristics intersect, and under these conditions a requirement for a unique single-valued solution is that the solution should satisfy the admissibility condition or entropy condition. For physically real applications this means that the solution should satisfy the Lax entropy condition

λ(w1) > s > λ(w2)

where λ(w1) and λ(w2) represent characteristic speeds at upstream and downstream conditions respectively.
Shock condition
In the case of the hyperbolic conservation law (), we have seen that the shock speed can be obtained by simple division. However, for the 1D Euler equations (), () and (), we have the vector state variable (ρ, ρu, E) and the jump conditions become

s(ρ2 − ρ1) = ρ2u2 − ρ1u1
s(ρ2u2 − ρ1u1) = (ρ2u2² + p2) − (ρ1u1² + p1)
s(E2 − E1) = u2(E2 + p2) − u1(E1 + p1)
Equations (), () and () are known as the Rankine–Hugoniot conditions for the Euler equations and are derived by enforcing the conservation laws in integral form over a control volume that includes the shock. For this situation s cannot be obtained by simple division. However, it can be shown by transforming the problem to a moving co-ordinate system
(setting x′ = x − st and u′ = u − s to remove s) and some algebraic manipulation (involving the elimination of u2 from the transformed equation () using the transformed equation ()), that the shock speed is given by

s = u1 + c1 √(1 + ((γ + 1)/(2γ))(p2/p1 − 1))

where c1 is the speed of sound in the fluid at upstream conditions.
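As a quick numerical check of this expression, the sketch below (upstream values assumed: still air with c1 ≈ 340 m/s, γ = 1.4) evaluates the shock speed for a couple of pressure ratios; p2/p1 → 1 recovers an acoustic wave moving at the speed of sound:

```python
import math

def shock_speed(p_ratio: float, u1: float = 0.0, c1: float = 340.0,
                gamma: float = 1.4) -> float:
    """Shock speed s = u1 + c1*sqrt(1 + (gamma+1)/(2*gamma)*(p2/p1 - 1))
    for a calorically perfect gas (u1 and c1 in m/s)."""
    return u1 + c1 * math.sqrt(1.0 + (gamma + 1.0) / (2.0 * gamma)
                               * (p_ratio - 1.0))

# A shock that doubles the pressure in still air:
print(f"{shock_speed(2.0):.0f} m/s")   # modestly supersonic, ~463 m/s
# p2/p1 -> 1 recovers an acoustic wave travelling at c1:
print(f"{shock_speed(1.0):.0f} m/s")   # 340 m/s
```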
Shock Hugoniot and Rayleigh line in solids
For shocks in solids, a closed form expression such as equation () cannot be derived from first principles. Instead, experimental observations indicate that a linear relation can be used instead (called the shock Hugoniot in the us–up plane) that has the form

us = c0 + s·up

where c0 is the bulk speed of sound in the material (in uniaxial compression), s is a parameter (the slope of the shock Hugoniot) obtained from fits to experimental data, and up is the particle velocity inside the compressed region behind the shock front.
The above relation, when combined with the Hugoniot equations for the conservation of mass and momentum, can be used to determine the shock Hugoniot in the p–v plane, where v is the specific volume (per unit mass):

p = c0²(v0 − v) / [v0 − s(v0 − v)]²

where v0 = 1/ρ0 is the specific volume ahead of the shock and the pressure is measured relative to the unshocked state.
Alternative equations of state, such as the Mie–Grüneisen equation of state may also be used instead of the above equation.
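The linear us–up fit and the resulting p–v Hugoniot above are simple to evaluate numerically. In the sketch below the parameters ρ0, c0 and s are illustrative round numbers of the order reported for dense metals, not data for any particular material:

```python
def solid_hugoniot_p(v: float, v0: float, c0: float, s: float) -> float:
    """Shock pressure (Pa) on the p-v Hugoniot derived from u_s = c0 + s*u_p:
    p = c0^2 (v0 - v) / (v0 - s (v0 - v))^2, measured above the
    unshocked reference state."""
    dv = v0 - v
    return c0**2 * dv / (v0 - s * dv) ** 2

# Illustrative parameters (order of magnitude for a dense metal):
rho0 = 8.9e3          # kg/m^3
v0 = 1.0 / rho0       # specific volume, m^3/kg
c0, s = 3.9e3, 1.5    # bulk sound speed (m/s) and dimensionless slope

for compression in (0.02, 0.05, 0.10):      # fractional volume decrease
    v = v0 * (1.0 - compression)
    p_GPa = solid_hugoniot_p(v, v0, c0, s) / 1e9
    print(f"v/v0 = {1 - compression:.2f}: p = {p_GPa:.1f} GPa")
```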
The shock Hugoniot describes the locus of all possible thermodynamic states a material can exist in behind a shock, projected onto a two dimensional state-state plane. It is therefore a set of equilibrium states and does not specifically represent the path through which a material undergoes transformation.
Weak shocks are isentropic, and the isentrope represents the path through which the material is loaded from the initial to final states by a compression wave with converging characteristics. In the case of weak shocks, the Hugoniot will therefore fall directly on the isentrope and can be used directly as the equivalent path. In the case of a strong shock we can no longer make that simplification directly. However, for engineering calculations, it is deemed that the isentrope is close enough to the Hugoniot that the same assumption can be made.
If the Hugoniot is approximately the loading path between states for an "equivalent" compression wave, then the jump conditions for the shock loading path can be determined by drawing a straight line between the initial and final states. This line is called the Rayleigh line and has the following equation:

p = ρ0us²(1 − v/v0)
Hugoniot elastic limit
Most solid materials undergo plastic deformations when subjected to strong shocks. The point on the shock Hugoniot at which a material transitions from a purely elastic state to an elastic-plastic state is called the Hugoniot elastic limit (HEL) and the pressure at which this transition takes place is denoted pHEL. Values of pHEL can range from 0.2 GPa to 20 GPa. Above the HEL, the material loses much of its shear strength and starts behaving like a fluid.
Magnetohydrodynamics
Rankine–Hugoniot conditions in magnetohydrodynamics are interesting to consider since they are very relevant to astrophysical applications. Across the discontinuity the normal component Bn of the magnetic field and the tangential component Et of the electric field (infinite conductivity limit) must be continuous. We thus have

[Bn] = 0 and [Et] = 0

where [X] is the difference between the values of any physical quantity X on the two sides of the discontinuity. The remaining jump conditions follow from the conservation of mass, momentum and energy fluxes across the discontinuity.
These conditions are general in the sense that they include contact discontinuities, tangential discontinuities, rotational or Alfvén discontinuities, and shock waves.
See also
Euler equations (fluid dynamics)
Shock polar
Becker–Morduchow–Libby solution
Mie–Grüneisen equation of state
Engineering Acoustics Wikibook
Atmospheric focusing
References
Equations of fluid dynamics
Scottish inventions
Conservation equations
Continuum mechanics
Combustion
Fluid dynamics | Rankine–Hugoniot conditions | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,960 | [
"Equations of fluid dynamics",
"Equations of physics",
"Continuum mechanics",
"Chemical engineering",
"Conservation laws",
"Mathematical objects",
"Classical mechanics",
"Equations",
"Combustion",
"Piping",
"Fluid dynamics",
"Conservation equations",
"Symmetry",
"Physics theorems"
] |
398,124 | https://en.wikipedia.org/wiki/Transcriptional%20regulation | In molecular biology and genetics, transcriptional regulation is the means by which a cell regulates the conversion of DNA to RNA (transcription), thereby orchestrating gene activity. A single gene can be regulated in a range of ways, from altering the number of copies of RNA that are transcribed, to the temporal control of when the gene is transcribed. This control allows the cell or organism to respond to a variety of intra- and extracellular signals and thus mount a response. Some examples of this include producing the mRNA that encode enzymes to adapt to a change in a food source, producing the gene products involved in cell cycle specific activities, and producing the gene products responsible for cellular differentiation in multicellular eukaryotes, as studied in evolutionary developmental biology.
The regulation of transcription is a vital process in all living organisms. It is orchestrated by transcription factors and other proteins working in concert to finely tune the amount of RNA being produced through a variety of mechanisms. Bacteria and eukaryotes have very different strategies of accomplishing control over transcription, but some important features remain conserved between the two. Most importantly is the idea of combinatorial control, which is that any given gene is likely controlled by a specific combination of factors to control transcription. In a hypothetical example, the factors A and B might regulate a distinct set of genes from the combination of factors A and C. This combinatorial nature extends to complexes of far more than two proteins, and allows a very small subset (less than 10%) of the genome to control the transcriptional program of the entire cell.
In bacteria
Much of the early understanding of transcription came from bacteria, although the extent and complexity of transcriptional regulation is greater in eukaryotes. Bacterial transcription is governed by three main sequence elements:
Promoters are elements of DNA that may bind RNA polymerase and other proteins for the successful initiation of transcription directly upstream of the gene.
Operators recognize repressor proteins that bind to a stretch of DNA and inhibit the transcription of the gene.
Positive control elements that bind to DNA and incite higher levels of transcription.
While these means of transcriptional regulation also exist in eukaryotes, the transcriptional landscape is significantly more complicated both by the number of proteins involved as well as by the presence of introns and the packaging of DNA into histones.
The transcription of a basic bacterial gene is dependent on the strength of its promoter and the presence of activators or repressors. In the absence of other regulatory elements, a promoter's sequence-based affinity for RNA polymerases varies, which results in the production of different amounts of transcript. The variable affinity of RNA polymerase for different promoter sequences is related to regions of consensus sequence upstream of the transcription start site. The more nucleotides of a promoter that agree with the consensus sequence, the stronger the promoter's affinity for RNA polymerase is likely to be.
In the absence of other regulatory elements, the default state of a bacterial transcript is the "on" configuration, resulting in the production of some amount of transcript. This means that transcriptional regulation in the form of protein repressors and positive control elements can either increase or decrease transcription. Repressors often physically occupy the promoter location, occluding RNA polymerase from binding. Alternatively, a repressor and polymerase may bind to the DNA at the same time, with a physical interaction with the repressor preventing the opening of the DNA for access to the minus strand for transcription. This strategy of control is distinct from eukaryotic transcription, whose basal state is off and where the co-factors required for transcription initiation are highly gene-dependent.
Sigma factors are specialized bacterial proteins that bind to RNA polymerases and orchestrate transcription initiation. Sigma factors act as mediators of sequence-specific transcription, such that a single sigma factor can be used for transcription of all housekeeping genes or a suite of genes the cell wishes to express in response to some external stimuli such as stress.
In addition to processes that regulate transcription at the stage of initiation, mRNA synthesis is also controlled by the rate of transcription elongation. RNA polymerase pauses occur frequently and are regulated by transcription factors, such as NusG and NusA, transcription-translation coupling, and mRNA secondary structure.
In eukaryotes
The added complexity of generating a eukaryotic cell carries with it an increase in the complexity of transcriptional regulation. Eukaryotes have three RNA polymerases, known as Pol I, Pol II, and Pol III. Each polymerase has specific targets and activities, and is regulated by independent mechanisms. There are a number of additional mechanisms through which polymerase activity can be controlled. These mechanisms can be generally grouped into three main areas:
Control over polymerase access to the gene. This is perhaps the broadest of the three control mechanisms. This includes the functions of histone remodeling enzymes, transcription factors, enhancers and repressors, and many other complexes
Productive elongation of the RNA transcript. Once polymerase is bound to a promoter, it requires another set of factors to allow it to escape the promoter complex and begin successfully transcribing RNA.
Termination of the polymerase. A number of factors which have been found to control how and when termination occurs, which will dictate the fate of the RNA transcript.
All three of these systems work in concert to integrate signals from the cell and change the transcriptional program accordingly.
While in prokaryotic systems the basal transcription state can be thought of as nonrestrictive (that is, “on” in the absence of modifying factors), eukaryotes have a restrictive basal state which requires the recruitment of other factors in order to generate RNA transcripts. This difference is largely due to the compaction of the eukaryotic genome by winding DNA around histones to form higher order structures. This compaction makes the gene promoter inaccessible without the assistance of other factors in the nucleus, and thus chromatin structure is a common site of regulation. Similar to the sigma factors in prokaryotes, the general transcription factors (GTFs) are a set of factors in eukaryotes that are required for all transcription events. These factors are responsible for stabilizing binding interactions and opening the DNA helix to allow the RNA polymerase to access the template, but generally lack specificity for different promoter sites. A large part of gene regulation occurs through transcription factors that either recruit or inhibit the binding of the general transcription machinery and/or the polymerase. This can be accomplished through close interactions with core promoter elements, or through the long distance enhancer elements.
Once a polymerase is successfully bound to a DNA template, it often requires the assistance of other proteins in order to leave the stable promoter complex and begin elongating the nascent RNA strand. This process is called promoter escape, and is another step at which regulatory elements can act to accelerate or slow the transcription process. Similarly, protein and nucleic acid factors can associate with the elongation complex and modulate the rate at which the polymerase moves along the DNA template.
At the level of chromatin state
In eukaryotes, genomic DNA is highly compacted in order to be able to fit it into the nucleus. This is accomplished by winding the DNA around protein octamers called histones, which has consequences for the physical accessibility of parts of the genome at any given time. Significant portions are silenced through histone modifications, and thus are inaccessible to the polymerases or their cofactors. The highest level of transcription regulation occurs through the rearrangement of histones in order to expose or sequester genes, because these processes have the ability to render entire regions of a chromosome inaccessible such as what occurs in imprinting.
Histone rearrangement is facilitated by post-translational modifications to the tails of the core histones. A wide variety of modifications can be made by enzymes such as the histone acetyltransferases (HATs), histone methyltransferases (HMTs), and histone deacetylases (HDACs), among others. These enzymes can add or remove covalent modifications such as methyl groups, acetyl groups, phosphates, and ubiquitin. Histone modifications serve to recruit other proteins which can either increase the compaction of the chromatin and sequester promoter elements, or to increase the spacing between histones and allow the association of transcription factors or polymerase on open DNA. For example, H3K27 trimethylation by the polycomb complex PRC2 causes chromosomal compaction and gene silencing. These histone modifications may be created by the cell, or inherited in an epigenetic fashion from a parent.
At the level of cytosine methylation
Transcription regulation at about 60% of promoters is controlled by methylation of cytosines within CpG dinucleotides (where 5’ cytosine is followed by 3’ guanine or CpG sites). 5-methylcytosine (5-mC) is a methylated form of the DNA base cytosine (see Figure). 5-mC is an epigenetic marker found predominantly within CpG sites. About 28 million CpG dinucleotides occur in the human genome. In most tissues of mammals, on average, 70% to 80% of CpG cytosines are methylated (forming 5-methylCpG or 5-mCpG). Methylated cytosines within 5’cytosine-guanine 3’ sequences often occur in groups, called CpG islands. About 60% of promoter sequences have a CpG island while only about 6% of enhancer sequences have a CpG island. CpG islands constitute regulatory sequences, since if CpG islands are methylated in the promoter of a gene this can reduce or silence gene transcription.
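Because CpG islands are defined by simple sequence statistics, candidate islands can be screened computationally. The Python sketch below applies the commonly used Gardiner-Garden and Frommer style criteria (window of at least 200 bp, GC fraction above 0.5, observed/expected CpG ratio above 0.6); the thresholds are the conventional ones, and both example sequences are made up:

```python
def looks_like_cpg_island(seq: str, min_len: int = 200,
                          gc_min: float = 0.50, oe_min: float = 0.60) -> bool:
    """Screen a DNA window with Gardiner-Garden/Frommer-style criteria:
    length >= 200 bp, GC fraction > 0.5, observed/expected CpG > 0.6,
    where expected CpG = (#C * #G) / length."""
    seq = seq.upper()
    n = len(seq)
    if n < min_len:
        return False
    c, g = seq.count("C"), seq.count("G")
    cpg_obs = seq.count("CG")                 # observed CpG dinucleotides
    cpg_exp = c * g / n if c and g else 0.0   # expected under independence
    gc_frac = (c + g) / n
    return gc_frac > gc_min and cpg_exp > 0 and cpg_obs / cpg_exp > oe_min

# Made-up examples: a GC- and CpG-rich 200 bp stretch passes the screen,
# an AT-rich stretch of the same length does not.
island_like = "GCGCGGCGCTACGCGGTACG" * 10
at_rich = "ATTATAATTAGCTATAATTA" * 10
print(looks_like_cpg_island(island_like))    # True
print(looks_like_cpg_island(at_rich))        # False
```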
DNA methylation regulates gene transcription through interaction with methyl binding domain (MBD) proteins, such as MeCP2, MBD1 and MBD2. These MBD proteins bind most strongly to highly methylated CpG islands. These MBD proteins have both a methyl-CpG-binding domain as well as a transcription repression domain. They bind to methylated DNA and guide or direct protein complexes with chromatin remodeling and/or histone modifying activity to methylated CpG islands. MBD proteins generally repress local chromatin such as by catalyzing the introduction of repressive histone marks, or creating an overall repressive chromatin environment through nucleosome remodeling and chromatin reorganization.
Transcription factors are proteins that bind to specific DNA sequences in order to regulate the expression of a gene. The binding sequence for a transcription factor in DNA is usually about 10 or 11 nucleotides long. In a 2009 review, Vaquerizas et al. indicated that there are approximately 1,400 different transcription factors encoded in the human genome by genes that constitute about 6% of all human protein-encoding genes. About 94% of transcription factor binding sites (TFBSs) that are associated with signal-responsive genes occur in enhancers, while only about 6% of such TFBSs occur in promoters.
EGR1 protein is a particular transcription factor that is important for regulation of methylation of CpG islands. An EGR1 transcription factor binding site is frequently located in enhancer or promoter sequences. There are about 12,000 binding sites for EGR1 in the mammalian genome and about half of EGR1 binding sites are located in promoters and half in enhancers. The binding of EGR1 to its target DNA binding site is insensitive to cytosine methylation in the DNA.
While only small amounts of EGR1 transcription factor protein are detectable in cells that are un-stimulated, translation of the EGR1 gene into protein at one hour after stimulation is drastically elevated. Expression of EGR1 transcription factor proteins, in various types of cells, can be stimulated by growth factors, neurotransmitters, hormones, stress and injury. In the brain, when neurons are activated, EGR1 proteins are up-regulated and they bind to (recruit) the pre-existing TET1 enzymes which are highly expressed in neurons. TET enzymes can catalyse demethylation of 5-methylcytosine. When EGR1 transcription factors bring TET1 enzymes to EGR1 binding sites in promoters, the TET enzymes can demethylate the methylated CpG islands at those promoters. Upon demethylation, these promoters can then initiate transcription of their target genes. Hundreds of genes in neurons are differentially expressed after neuron activation through EGR1 recruitment of TET1 to methylated regulatory sequences in their promoters.
The methylation of promoters is also altered in response to signals. The three mammalian DNA methyltransferases (DNMT1, DNMT3A, and DNMT3B) catalyze the addition of methyl groups to cytosines in DNA. While DNMT1 is a "maintenance" methyltransferase, DNMT3A and DNMT3B can carry out new methylations. There are also two splice protein isoforms produced from the DNMT3A gene: the DNA methyltransferase proteins DNMT3A1 and DNMT3A2.
The splice isoform DNMT3A2 behaves like the product of a classical immediate-early gene and, for instance, it is robustly and transiently produced after neuronal activation. Where the DNA methyltransferase isoform DNMT3A2 binds and adds methyl groups to cytosines appears to be determined by histone post translational modifications.
On the other hand, neural activation causes degradation of DNMT3A1 accompanied by reduced methylation of at least one evaluated targeted promoter.
Through transcription factors and enhancers
Transcription factors
Transcription factors are proteins that bind to specific DNA sequences in order to regulate the expression of a given gene. There are approximately 1,400 transcription factors in the human genome and they constitute about 6% of all human protein coding genes. The power of transcription factors resides in their ability to activate and/or repress wide repertoires of downstream target genes. The fact that these transcription factors work in a combinatorial fashion means that only a small subset of an organism's genome encodes transcription factors.
Transcription factors function through a wide variety of mechanisms. In one mechanism, CpG methylation influences binding of most transcription factors to DNA—in some cases negatively and in others positively. In addition, often they are at the end of a signal transduction pathway that functions to change something about the factor, like its subcellular localization or its activity. Post-translational modifications to transcription factors located in the cytosol can cause them to translocate to the nucleus where they can interact with their corresponding enhancers. Other transcription factors are already in the nucleus, and are modified to enable the interaction with partner transcription factors. Some post-translational modifications known to regulate the functional state of transcription factors are phosphorylation, acetylation, SUMOylation and ubiquitylation.
Transcription factors can be divided in two main categories: activators and repressors. While activators can interact directly or indirectly with the core machinery of transcription through enhancer binding, repressors predominantly recruit co-repressor complexes leading to transcriptional repression by chromatin condensation of enhancer regions. It may also happen that a repressor may function by allosteric competition against a determined activator to repress gene expression: overlapping DNA-binding motifs for both activators and repressors induce a physical competition to occupy the site of binding. If the repressor has a higher affinity for its motif than the activator, transcription would be effectively blocked in the presence of the repressor.
Tight regulatory control is achieved by the highly dynamic nature of transcription factors. Again, many different mechanisms exist to control whether a transcription factor is active. These mechanisms include control over protein localization or control over whether the protein can bind DNA. An example of this is the protein HSF1, which remains bound to Hsp70 in the cytosol and is only translocated into the nucleus upon cellular stress such as heat shock. Thus the genes under the control of this transcription factor will remain untranscribed unless the cell is subjected to stress.
Enhancers
Enhancers or cis-regulatory modules/elements (CRM/CRE) are non-coding DNA sequences containing multiple activator and repressor binding sites. Enhancers range from 200 bp to 1 kb in length and can be either proximal, 5’ upstream to the promoter or within the first intron of the regulated gene, or distal, in introns of neighboring genes or intergenic regions far away from the locus. Through DNA looping, active enhancers contact the promoter dependently of the core DNA binding motif promoter specificity. Promoter-enhancer dichotomy provides the basis for the functional interaction between transcription factors and transcriptional core machinery to trigger RNA Pol II escape from the promoter. Whereas one could think that there is a 1:1 enhancer-promoter ratio, studies of the human genome predict that an active promoter interacts with 4 to 5 enhancers. Similarly, enhancers can regulate more than one gene without linkage restriction and are said to “skip” neighboring genes to regulate more distant ones. Even though infrequent, transcriptional regulation can involve elements located in a chromosome different from one where the promoter resides. Proximal enhancers or promoters of neighboring genes can serve as platforms to recruit more distal elements.
Enhancer activation and implementation
Up-regulated expression of genes in mammals can be initiated when signals are transmitted to the promoters associated with the genes. Cis-regulatory DNA sequences that are located in DNA regions distant from the promoters of genes can have very large effects on gene expression, with some genes undergoing up to 100-fold increased expression due to such a cis-regulatory sequence. These cis-regulatory sequences include enhancers, silencers, insulators and tethering elements. Among this constellation of sequences, enhancers and their associated transcription factor proteins have a leading role in the regulation of gene expression.
Enhancers are sequences of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. In a study of brain cortical neurons, 24,937 loops were found, bringing enhancers to promoters. Multiple enhancers, each often at tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control expression of their common target gene.
The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function specific transcription factor proteins (in 2018 Lambert et al. indicated there were about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern the level of transcription of the target gene. Mediator (coactivator) (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (RNAP II) enzyme bound to the promoter.
Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the Figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of a transcription factor bound to an enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating a promoter to initiate transcription of messenger RNA from its target gene.
Regulatory landscape
Transcriptional initiation, termination and regulation are mediated by "DNA looping" which brings together promoters, enhancers, transcription factors and RNA processing factors to accurately regulate gene expression. Chromosome conformation capture (3C) and more recently Hi-C techniques provided evidence that active chromatin regions are "compacted" in nuclear domains or bodies where transcriptional regulation is enhanced. The configuration of the genome is essential for enhancer-promoter proximity. Cell-fate decisions are mediated upon highly dynamic genomic reorganizations at interphase to modularly switch on or off entire gene regulatory networks through short to long range chromatin rearrangements. Related studies demonstrate that metazoan genomes are partitioned into structural and functional units, around a megabase long, called topologically associating domains (TADs), containing dozens of genes regulated by hundreds of enhancers distributed within large genomic regions containing only non-coding sequences. The function of TADs is to regroup enhancers and promoters interacting together within a single large functional domain instead of having them spread across different TADs. However, studies of mouse development point out that two adjacent TADs may regulate the same gene cluster. The most relevant study on limb evolution shows that the TAD at the 5' side of the HoxD gene cluster in tetrapod genomes drives its expression in the distal limb bud embryos, giving rise to the hand, while the one located at the 3' side does it in the proximal limb bud, giving rise to the arm. Still, it is not known whether TADs are an adaptive strategy to enhance regulatory interactions or an effect of the constraints on these same interactions.
TAD boundaries are often composed of housekeeping genes, tRNAs, other highly expressed sequences and Short Interspersed Elements (SINE). While these genes may take advantage of their border position to be ubiquitously expressed, they are not directly linked with TAD edge formation. The specific molecules identified at boundaries of TADs are called insulators or architectural proteins, because they not only block enhancer leaky expression but also ensure an accurate compartmentalization of cis-regulatory inputs to the targeted promoter. These insulators are DNA-binding proteins like CTCF and TFIIIC that help recruit structural partners such as cohesins and condensins. The localization and binding of architectural proteins to their corresponding binding sites is regulated by post-translational modifications. DNA binding motifs recognized by architectural proteins are either of high occupancy, at around a megabase from each other, or of low occupancy and inside TADs. High occupancy sites are usually conserved and static, while intra-TAD sites are dynamic according to the state of the cell; therefore TADs themselves are compartmentalized in subdomains that can be called subTADs, from a few kb up to a TAD long. When architectural binding sites are less than 100 kb from each other, Mediator proteins are the architectural proteins that cooperate with cohesin. For subTADs larger than 100 kb and at TAD boundaries, CTCF is the typical insulator found to interact with cohesin.
Of the pre-initiation complex and promoter escape
In eukaryotes, the ribosomal RNA (rRNA) and the tRNAs involved in translation are controlled by RNA polymerase I (Pol I) and RNA polymerase III (Pol III). RNA polymerase II (Pol II) is responsible for the production of messenger RNA (mRNA) within the cell. Particularly for Pol II, many of the regulatory checkpoints in the transcription process occur in the assembly and escape of the pre-initiation complex. A gene-specific combination of transcription factors will recruit TFIID and/or TFIIA to the core promoter, followed by the association of TFIIB, creating a stable complex onto which the rest of the general transcription factors (GTFs) can assemble. This complex is relatively stable, and can undergo multiple rounds of transcription initiation.
After the binding of TFIIB and TFIID, Pol II and the rest of the GTFs can assemble. This assembly is marked by the post-translational modification (typically phosphorylation) of the C-terminal domain (CTD) of Pol II through a number of kinases. The CTD is a large, unstructured domain extending from the RbpI subunit of Pol II, and consists of many repeats of the heptad sequence YSPTSPS. TFIIH, the helicase that remains associated with Pol II throughout transcription, also contains a subunit with kinase activity which will phosphorylate serine 5 of the heptad sequence. Similarly, both CDK8 (a subunit of the massive multiprotein Mediator complex) and CDK9 (a subunit of the p-TEFb elongation factor) have kinase activity towards other residues on the CTD. These phosphorylation events promote the transcription process and serve as sites of recruitment for mRNA processing machinery. All three of these kinases respond to upstream signals, and failure to phosphorylate the CTD can lead to a stalled polymerase at the promoter.
In cancer
In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes silenced. Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-expressed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).
References
External links
Plant Transcription Factor Database and Plant Transcriptional Regulation Data and Analysis Platform
MIT : Activating a new understanding of gene regulation
Gene expression | Transcriptional regulation | [
"Chemistry",
"Biology"
] | 5,492 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |